
Sakana AI’s TreeQuest: Deploy multi-model teams that outperform individual LLMs by 30%

Japanese AI lab Sakana AI has introduced a new technique that allows multiple large language models (LLMs) to cooperate on a single task, effectively creating a “dream team” of AI agents. The method, called Multi-LLM AB-MCTS, enables models to perform trial-and-error exploration and combine their distinct strengths to solve problems that are too complex for any individual model.

For enterprises, this approach offers a way to build more robust and capable AI systems. Instead of being locked into a single provider or model, businesses could dynamically leverage the best aspects of different frontier models, assigning the right AI to the right part of a task to achieve superior results.

Frontier AI models are evolving rapidly, but each model has distinct strengths and weaknesses derived from its unique training data and architecture. One might excel at coding, while another excels at creative writing. Sakana AI’s researchers argue that these differences are not a bug but a feature. “We see these biases and varied aptitudes not as limitations, but as precious resources for creating collective intelligence,” the researchers state in their blog post. They believe that just as humanity’s greatest achievements come from diverse teams, AI systems can also achieve more by working together: “By pooling their intelligence, AI systems can solve problems that are insurmountable for any single model.”
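The core idea described above — trial-and-error search that learns which model to assign to a task — can be illustrated with a toy sketch. This is not Sakana AI’s actual AB-MCTS algorithm or its TreeQuest API; it is a minimal UCB1 bandit over a pool of stand-in “models” (plain Python functions with invented scoring), showing how repeated trials let a search loop discover which model suits a given task.

```python
import math

# Hypothetical stand-in "models": each returns a quality score for a prompt.
# Real systems would call different LLM APIs and evaluate their answers.
def model_a(prompt):
    # Invented aptitude: strong on "math" prompts, weak elsewhere.
    return 1.0 if "math" in prompt else 0.1

def model_b(prompt):
    # Invented aptitude: strong on "code" prompts, weak elsewhere.
    return 1.0 if "code" in prompt else 0.1

def adaptive_search(prompt, models, budget=20):
    """Trial-and-error model assignment via a UCB1 bandit:
    repeatedly query the model with the best upper-confidence bound,
    so the search concentrates on whichever model performs best here."""
    counts = [0] * len(models)   # how often each model was tried
    totals = [0.0] * len(models) # cumulative score per model
    best = 0.0
    for t in range(1, budget + 1):
        def ucb(i):
            if counts[i] == 0:
                return float("inf")  # try every model at least once
            mean = totals[i] / counts[i]
            return mean + math.sqrt(2 * math.log(t) / counts[i])
        i = max(range(len(models)), key=ucb)
        score = models[i](prompt)  # one "trial" of model i on the task
        counts[i] += 1
        totals[i] += score
        best = max(best, score)
    return best, counts

best, counts = adaptive_search("write code to sort a list", [model_a, model_b])
```

After 20 trials on a coding prompt, the loop has routed most of its budget to the code-apt model, mirroring the article’s point about assigning the right AI to the right part of a task. Sakana AI’s actual method searches a tree (deciding whether to refine an existing answer or branch to a new one), which this flat bandit deliberately omits.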

Full report: Tokyo-based Sakana AI details a new Monte Carlo Tree Search-based technique that lets multiple LLMs cooperate on a single task, outperforming individual LLMs.