The world’s leading artificial intelligence groups are stepping up efforts to reduce the number of “hallucinations” in large language models, as they seek to solve one of the big obstacles limiting take-up of the powerful technology.

Google, Amazon, Cohere and Mistral are among those trying to bring down the rate of these fabricated answers by rolling out technical fixes, improving the quality of the data in AI models, and building verification and fact-checking systems across their generative AI products.

The move to reduce these so-called hallucinations is seen as crucial to increasing the use of AI tools across industries such as law and health, which require accurate information, and to boosting the AI sector’s revenues.

It comes as chatbot errors have already resulted in costly mistakes and litigation. Last year, a tribunal ordered Air Canada to honour a discount that its customer service chatbot had made up, and lawyers who have used AI tools in court documents have faced sanctions after those tools fabricated citations.

But AI experts warn that eliminating hallucinations completely from large language models is impossible because of how the systems operate.