When ChatGPT, Gemini and their generative AI cohorts burst onto the scene a little over two years ago, talk of large language models — artificial intelligence models trained on vast volumes of data to understand and generate human-like text and visuals — dominated the technology scene.

For years, the AI race was defined by scale — bigger models, more data and greater compute. But lately, there has been a growing move away from large language models like GPT-4 and Gemini toward something smaller, leaner and perhaps even more powerful in certain business applications. “The next wave of AI is being built for specificity,” Jahan Ali, founder and CEO of MobileLive, told me in an interview. “Small language models allow us to train AI on domain-specific knowledge, making them far more effective for real-world business needs.”

Small language models (SLMs) are AI models fine-tuned for specific industries, tasks and operational workflows. Unlike LLMs, which process vast amounts of general knowledge, SLMs are built with precision and efficiency in mind. This means they require less computational power, cost significantly less to run and, crucially, deliver more business-relevant insights than their larger counterparts.
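To make the idea of domain-specific fine-tuning concrete, the sketch below (not from the article) adapts a small open model to a company's own text using the Hugging Face transformers and datasets libraries. The model name "distilgpt2", the file "domain_corpus.txt" and the training settings are illustrative assumptions, not details reported in the piece.

```python
# Minimal sketch: fine-tuning a small language model on a domain-specific corpus.
# Assumptions: distilgpt2 as the small base model, and a plain-text file
# "domain_corpus.txt" with one in-house document or record per line.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)
from datasets import load_dataset

MODEL_NAME = "distilgpt2"  # ~82M parameters; stand-in for any compact model

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 tokenizers have no pad token
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

# Load the hypothetical domain corpus (support tickets, policies, product docs, ...).
dataset = load_dataset("text", data_files={"train": "domain_corpus.txt"})

def tokenize(batch):
    # Truncate long records; the collator below handles padding and labels.
    return tokenizer(batch["text"], truncation=True, max_length=256)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="slm-domain",
                           per_device_train_batch_size=8,
                           num_train_epochs=1),
    train_dataset=tokenized,
    # mlm=False gives the standard causal language-modeling objective.
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
)
trainer.train()  # result: a compact model specialized to the domain corpus
```

The point of the sketch is that specialization happens at the data level: the same few lines work whether the corpus is insurance policies or retail inventory notes, and the resulting model is small enough to run on modest hardware.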
Full report: Why Small Language Models could drive the next artificial intelligence race.