Artificial intelligence agent and assistant platform provider Vectara Inc. today announced the launch of a new Hallucination Corrector integrated directly into its service, designed to detect and mitigate costly, unreliable responses from enterprise AI models.

Hallucinations, instances in which generative AI large language models confidently present false information, have long plagued the industry. For traditional models, hallucinations are estimated to occur in roughly 3% to 10% of queries, depending on the model. The recent advent of reasoning AI models, which break complex questions into step-by-step solutions to “think” through them, has led to a marked increase in hallucination rates. According to a report from Vectara, DeepSeek-R1, a reasoning model, hallucinates significantly more often, at 14.3%, than its predecessor DeepSeek-V3, at 3.9%. Similarly, OpenAI’s o1 reasoning model jumped to a 2.4% rate from GPT-4o’s 1.5%. New Scientist published a similar report and found even higher rates for the same and other reasoning models.

“While LLMs have recently made significant progress in addressing the issue of hallucinations, they still fall distressingly short of the standards for accuracy that are required in highly regulated industries like financial services, healthcare, law and many others,” said Vectara founder and Chief Executive Amr Awadallah.
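For context on how such rates are measured: Vectara has openly released its Hughes Hallucination Evaluation Model (HHEM), which scores how faithfully a generated response sticks to its source text, and rates like those cited above come from evaluations of this kind. Below is a minimal Python sketch of that approach, following the usage pattern shown on the HHEM-2.1-open model card on Hugging Face; the source/response pairs and the 0.5 threshold are illustrative assumptions, not Vectara's published methodology.

    # Sketch: scoring responses for factual consistency with Vectara's open HHEM model.
    # Assumes the usage pattern from the HHEM-2.1-open model card; the example
    # pairs below are hypothetical.
    from transformers import AutoModelForSequenceClassification

    # Each pair is (source passage, generated response). HHEM returns a factual
    # consistency score in [0, 1]: scores near 0 suggest hallucination, scores
    # near 1 suggest the response is grounded in the source.
    pairs = [
        ("The capital of France is Paris.",
         "The capital of France is Berlin."),   # ungrounded: low score expected
        ("The capital of France is Paris.",
         "Paris is the capital of France."),    # grounded: high score expected
    ]

    model = AutoModelForSequenceClassification.from_pretrained(
        "vectara/hallucination_evaluation_model", trust_remote_code=True
    )
    scores = model.predict(pairs)  # one consistency score per pair

    # A simple hallucination-rate estimate over a query set: the fraction of
    # responses whose consistency score falls below a chosen threshold.
    threshold = 0.5  # illustrative cutoff, not an official value
    rate = sum(float(s) < threshold for s in scores) / len(pairs)
    print(f"scores: {scores}, estimated hallucination rate: {rate:.1%}")

Aggregating such per-response judgments over a benchmark of summarization queries is, in outline, how per-model hallucination rates like the 14.3% and 3.9% figures above are produced.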
For more, see the OODA Company Profile on Vectara.