
Nvidia announces new open AI models and tools for autonomous driving research

Nvidia announced new infrastructure and AI models on Monday as it works to build the backbone technology for physical AI, including robots and autonomous vehicles that can perceive and interact with the real world. At the NeurIPS AI conference in San Diego, California, the semiconductor giant unveiled Alpamayo-R1, an open reasoning vision language model for autonomous driving research, which the company calls the first industry-scale open reasoning vision language action model for the field. Vision language models process text and images together, allowing vehicles to "see" their surroundings and make decisions based on what they perceive. Alpamayo-R1 is built on Cosmos-Reason, a reasoning model that thinks through decisions before it responds; Nvidia first released the Cosmos model family in January 2025 and added further models in August. Technology like Alpamayo-R1 is critical for companies aiming to reach Level 4 autonomous driving, meaning full autonomy within a defined area and under specific circumstances, Nvidia said in a blog post.
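The reason-then-act pattern described above can be sketched in miniature. This is an illustrative toy only, not Nvidia's API: the class and function names below are hypothetical, and the perceived scene is a plain string standing in for the camera frames a real vision language action model would consume. The point is the interface shape: the model produces an explicit reasoning trace before committing to a driving action.

```python
from dataclasses import dataclass

@dataclass
class DrivingDecision:
    reasoning: str  # the chain of thought produced before acting
    action: str     # e.g. "brake" or "proceed"

def decide(scene_description: str) -> DrivingDecision:
    """Toy reason-then-act loop (hypothetical, not Nvidia's interface):
    inspect the perceived scene, write out the reasoning, then act."""
    if "pedestrian" in scene_description:
        return DrivingDecision(
            reasoning="A pedestrian is in the crosswalk; stopping is required.",
            action="brake",
        )
    return DrivingDecision(
        reasoning="The lane ahead is clear; continue at current speed.",
        action="proceed",
    )

decision = decide("pedestrian in crosswalk ahead")
print(decision.action)  # brake
```

A real reasoning model replaces the hand-written rules with learned inference over images and text, but the output structure, reasoning first and action second, is what distinguishes this class of model from one that maps perception directly to control.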

Full report : Nvidia announces Alpamayo-R1, an AI model for autonomous driving research, and calls it the “first industry-scale open reasoning vision language action model.”