Nvidia: AI will need more computing power, not less
Also: Putin agrees to partial ceasefire, powerful legal team to go after Boeing.

- In today’s CEO Daily: Sharon Goldman at Nvidia’s GTC conference.
- The big story: A partial ceasefire in Ukraine?
- The markets: The Fed is expected to hold interest rates at 4.25%-4.5%, but investors will be focused on future guidance.
- Analyst notes from Ark Invest’s Cathie Wood on the “AI crisis” at Apple, Goldman Sachs on the effect of “trade policy uncertainty” on jobs, and UBS on the U.S. economy.
- Plus: All the news and watercooler chat from Fortune.
Good morning. I’m writing from tech’s Super Bowl—aka Nvidia’s GTC conference, which kicked off yesterday in San Jose, CA. Of course, all eyes were on leather-jacket-clad CEO Jensen Huang (#2 on Fortune’s list of the Most Powerful People in Business) and his predictions for the future of AI. Below, three takeaways for CEOs from the biggest stage in tech:
AI will need more computing power, not less. The debut of DeepSeek’s R1 model, with claims that it had been trained for a fraction of the cost and computing power of U.S. models, caused a sharp drop in Nvidia’s stock price. But Huang thinks those who sold off made a big mistake. Newer reasoning models will need far more computing power at “inference,” the stage when a model generates its answers. The chatbots of yore spat out quick replies, but today’s models need to “think” harder, producing longer, more detailed answers that consume many more “tokens”—the fundamental units of text a model processes, whether a whole word, a subword, or a single character.
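To make the token math concrete, here is a minimal sketch in Python. It is not any real model’s tokenizer (production systems use learned subword vocabularies such as byte-pair encoding); it simply uses a crude one-token-per-word proxy to show why a step-by-step “reasoned” answer burns far more tokens, and thus far more compute, than a terse one.

```python
# Illustrative only: a crude proxy tokenizer, NOT a real model's.
# Real tokenizers split text into learned subword units; here we
# count whitespace-separated words to compare answer lengths.

def count_word_tokens(text: str) -> int:
    """One 'token' per whitespace-separated word (rough proxy)."""
    return len(text.split())

# A terse chatbot-style answer vs. a step-by-step reasoned answer
# to the same question ("What is 6 x 7?").
short_answer = "42"
reasoned_answer = (
    "First, note that 6 times 7 equals 42. "
    "Checking: 6 times 7 is 6 added seven times, which is 42. "
    "So the answer is 42."
)

print(count_word_tokens(short_answer))     # 1
print(count_word_tokens(reasoned_answer))  # 25
```

Each extra token requires another forward pass through the model, so a 25-token answer costs roughly 25 times the inference compute of a one-token reply, which is the dynamic behind Huang’s argument.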
Extreme output speed and better reasoning will be the difference between success and failure. Nvidia’s Blackwell GPUs are in full production, with 3.6 million units already in use. An upgraded version, the Blackwell Ultra, boasts triple the performance. The new Vera Rubin chip and its accompanying infrastructure are coming down the pike, and the “world’s smallest AI supercomputer” is at the ready. Software for AI agents is coming fast and furious into the physical world, including self-driving cars, robotics, and manufacturing.
Investors should buckle up for the long run. Stock-watchers might well have wanted an accelerated timeline for Nvidia’s next AI chip, the Vera Rubin, slated for release at the end of 2026, or more details about the company’s short-term roadmap. Instead, Huang focused squarely on a counterpoint: while AI pundits have insisted over the past year that improvements were slowing down, Nvidia believes the “scaling” of AI—the rate at which more compute yields better models—is improving faster than ever (to Nvidia’s sales benefit, of course). “The amount of computation we need as a result of agentic AI, as a result of reasoning, is easily 100 times more than we thought we needed this time last year,” Huang said.
But will Nvidia’s efforts to drive growth be enough to keep enterprise companies investing in all things Nvidia? Will buying Nvidia’s costly AI chips, which can run between $30,000 and $40,000 each, prove too burdensome given the still-unclear ROI of AI investments? Ultimately, Nvidia’s premium picks and shovels require enough customers willing to keep digging.
More news below.
Contact CEO Daily via Diane Brady at diane.brady@fortune.com
This story was originally featured on Fortune.com