Mistral AI, the rapidly ascending European artificial intelligence startup, unveiled a new language model today that it claims matches the performance of models three times its size while dramatically reducing computing costs, a development that could reshape the economics of advanced AI deployment.

The new model, called Mistral Small 3, has 24 billion parameters and achieves 81% accuracy on standard benchmarks while processing 150 tokens per second. The company is releasing it under the permissive Apache 2.0 license, allowing businesses to freely modify and deploy it.

“We believe it is the best model among all models of less than 70 billion parameters,” said Guillaume Lample, Mistral’s chief science officer, in an exclusive interview with VentureBeat. “We estimate that it’s basically on par with Meta’s Llama 3.3 70B that was released a couple months ago, which is a model three times larger.”

The announcement comes amid intense scrutiny of AI development costs following claims by Chinese startup DeepSeek that it trained a competitive model for just $5.6 million, assertions that wiped nearly $600 billion from Nvidia’s market value this week as investors questioned the massive investments being made by U.S. tech giants.
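Because the weights are released under Apache 2.0, teams can pull them into standard open-source tooling rather than calling a proprietary API. Below is a minimal sketch of what local use might look like with the Hugging Face transformers library; the repository ID and prompt are assumptions for illustration, not details confirmed by the report.

```python
# Minimal sketch: loading an open-weights model such as Mistral Small 3 locally.
# The repository ID below is an assumption; check Mistral's official Hugging Face
# organization for the actual Mistral Small 3 weights.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-Small-24B-Instruct-2501"  # assumed repo name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Apply the model's chat template and generate a short completion.
messages = [{"role": "user", "content": "Summarize the Apache 2.0 license in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```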
Full report: Mistral Small 3 brings open-source AI to the masses — smaller, faster and cheaper.