Google DeepMind’s new AI models help robots perform physical tasks, even without training

Google DeepMind is launching two new AI models designed to help robots “perform a wider range of real-world tasks than ever before.” The first, called Gemini Robotics, is a vision-language-action model capable of understanding new situations, even if it hasn’t been trained on them. Gemini Robotics is built on Gemini 2.0, the latest version of Google’s flagship AI model.

During a press briefing, Carolina Parada, the senior director and head of robotics at Google DeepMind, said Gemini Robotics “draws from Gemini’s multimodal world understanding and transfers it to the real world by adding physical actions as a new modality.”

The new model makes advances in three key areas that Google DeepMind says are essential to building helpful robots: generality, interactivity, and dexterity. In addition to its ability to generalize to new scenarios, Gemini Robotics is better at interacting with people and its environment. It’s also capable of performing more precise physical tasks, such as folding a piece of paper or removing a bottle cap.

“While we have made progress in each one of these areas individually in the past with general robotics, we’re bringing [drastically] increasing performance in all three areas with a single model,” Parada said. “This enables us to build robots that are more capable, that are more responsive and that are more robust to changes in their environment.”

Full report: Google DeepMind launches two AI models, the Gemini 2.0-based Gemini Robotics and Gemini Robotics-ER, to help robots “perform a wider range of real-world tasks than ever before.”