
Nvidia CEO Huang says AI has to do ‘100 times more’ computation now than when ChatGPT was released

Nvidia CEO Jensen Huang said next-generation AI will need 100 times more compute than older models as a result of new reasoning approaches that think “about how best to answer” questions step by step. “The amount of computation necessary to do that reasoning process is 100 times more than what we used to do,” Huang told CNBC’s Jon Fortt in an interview on Wednesday following the chipmaker’s fiscal fourth-quarter earnings report. He cited DeepSeek’s R1, OpenAI’s GPT-4 and xAI’s Grok 3 as models that use a reasoning process.

Nvidia reported results that topped analysts’ estimates across the board, with revenue jumping 78% from a year earlier to $39.33 billion. Data center revenue, which includes Nvidia’s market-leading graphics processing units, or GPUs, for artificial intelligence workloads, soared 93% to $35.6 billion and now accounts for more than 90% of total revenue.

The company’s stock still hasn’t recovered from losing 17% of its value on Jan. 27, its worst drop since 2020. The plunge came amid concerns, sparked by Chinese AI lab DeepSeek, that companies could get comparable AI performance at far lower infrastructure cost. Huang pushed back on that idea in Wednesday’s interview, saying DeepSeek popularized reasoning models that will need more chips.

Full opinion: Nvidia CEO Jensen Huang said next-generation AI will need 100 times more compute than older models as a result of new reasoning approaches that think “about how best to answer” questions step by step.