
Nvidia unveils Alpamayo to bring human-like reasoning to autonomous vehicles

As part of the rollout, Nvidia is releasing an open dataset containing more than 1,700 hours of driving data gathered across diverse geographies and environmental conditions, with a focus on rare and complex real-world situations.

By Storyboard18 | Jan 6, 2026 8:45 AM

At CES 2026, Nvidia introduced Alpamayo, a new open-source family of AI models, simulation tools, and datasets designed to help autonomous vehicles and physical robots navigate the complexities of the real world with greater intelligence and safety.

Describing the launch as a milestone moment for the industry, Nvidia CEO Jensen Huang said Alpamayo marks “the ChatGPT moment for physical AI,” signalling a shift toward machines that can understand, reason, and act in dynamic physical environments. According to Huang, the platform enables autonomous vehicles to reason through rare and unfamiliar driving scenarios, operate safely in highly complex conditions, and even explain the decisions they make while driving.

At the core of the platform is Alpamayo 1, a 10-billion-parameter vision-language-action model built around chain-of-thought reasoning. The model is designed to let autonomous vehicles think more like humans by breaking unfamiliar problems into step-by-step reasoning. This enables vehicles to handle difficult edge cases, such as navigating a major intersection during a traffic light outage, without requiring prior exposure to that exact scenario.

During a press briefing on Monday, Nvidia's vice president of automotive, Ali Kani, explained that Alpamayo evaluates every possible action in a given situation before selecting the safest option. Huang echoed this during his keynote, noting that Alpamayo does more than simply convert sensor data into vehicle controls. The system reasons about the actions it plans to take, explains the logic behind those decisions, and maps out the resulting driving trajectory.
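That evaluate-then-select loop can be pictured with a short sketch. Everything below, from the candidate maneuvers to the risk scores and function names, is illustrative only and is not Nvidia's actual interface:

```python
# Illustrative sketch of the decision pattern Kani describes: score a set
# of candidate maneuvers, pick the safest, and keep the reasoning alongside
# the choice. All names and numbers here are hypothetical.
from dataclasses import dataclass

@dataclass
class Candidate:
    maneuver: str    # e.g. "yield", "proceed", "stop"
    risk: float      # 0.0 (safe) to 1.0 (dangerous), from some evaluator
    rationale: str   # chain-of-thought style explanation

def pick_safest(candidates: list[Candidate]) -> Candidate:
    """Return the lowest-risk candidate; ties broken by list order."""
    return min(candidates, key=lambda c: c.risk)

if __name__ == "__main__":
    # Hypothetical options at an intersection with a failed traffic light.
    options = [
        Candidate("proceed at speed", 0.9, "cross traffic has right of way"),
        Candidate("treat as four-way stop", 0.1,
                  "signal outage: yield, then advance in turn"),
        Candidate("stop indefinitely", 0.4, "safe but blocks traffic behind"),
    ]
    best = pick_safest(options)
    print(f"chosen: {best.maneuver} (because: {best.rationale})")
```

The point of the pattern is the pairing of each action with an explanation, which is what lets the system justify the trajectory it ultimately drives.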

Nvidia has made Alpamayo 1’s underlying code available on Hugging Face, allowing developers to fine-tune the model into smaller, faster variants for vehicle development, train simpler driving systems, or build new tools on top of it. These include auto-labeling systems for tagging video data and evaluators that assess whether a vehicle made an intelligent driving choice.
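For developers who want to experiment, pulling the published files might look something like the snippet below, which uses the standard Hugging Face client; the repository id is a placeholder assumption, so check Nvidia's Hugging Face page for the actual name:

```python
# One plausible way to fetch the released model files for local
# fine-tuning, via the standard huggingface_hub library.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="nvidia/alpamayo-1",  # placeholder repo id (assumption)
)
print(f"model files downloaded to: {local_dir}")
```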

The platform is also tightly integrated with Nvidia’s Cosmos generative world models, which allow developers to generate synthetic driving data. Using Cosmos, developers can combine real-world and synthetic datasets to train and test Alpamayo-based autonomous vehicle systems across a broad range of driving scenarios.
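Conceptually, that real-plus-synthetic training mix could be assembled along these lines; the file paths and the 3:1 sampling ratio below are purely illustrative, not part of the Cosmos tooling:

```python
# Conceptual sketch of interleaving real drive logs with synthetically
# generated clips at a fixed ratio. Paths and ratio are placeholders.
import itertools
import random

real_clips = [f"real/drive_{i:04d}.mp4" for i in range(6)]       # placeholder paths
synthetic_clips = [f"cosmos/gen_{i:04d}.mp4" for i in range(2)]  # placeholder paths

def mixed_samples(real, synthetic, real_per_synth=3):
    """Yield clips, weighting real ones real_per_synth times more heavily."""
    pool = real + synthetic
    weights = [real_per_synth] * len(real) + [1] * len(synthetic)
    while True:
        yield random.choices(pool, weights=weights, k=1)[0]

batch = list(itertools.islice(mixed_samples(real_clips, synthetic_clips), 8))
print(batch)
```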

As part of the rollout, Nvidia is releasing an open dataset containing more than 1,700 hours of driving data gathered across diverse geographies and environmental conditions, with a focus on rare and complex real-world situations. The company is also launching AlpaSim, an open-source simulation framework available on GitHub, designed to recreate full-scale driving environments, including sensors and traffic behavior, so developers can safely test autonomous systems at scale.

Together, Alpamayo, Cosmos, the new dataset, and AlpaSim form a comprehensive development stack aimed at accelerating the next phase of physical AI and autonomous driving innovation.

First Published on Jan 6, 2026 8:44 AM
