
NVIDIA Earnings: 3 Giga Takeaways

As always, NVIDIA (NVDA - Free Report) earnings conference calls are almost as exciting as their technology conferences. We look for certain surprises going in, and we always get more than we bargained for.

My focus here is on three big areas: GPU Demand, AI Factories, and the China Predicament.

Headed into yesterday's reveal, I was looking for two items in particular:

First, would additional clarity around the company's write-downs for lost H20 GPU sales to China be as well received as the initial announcement, after which NVDA shares rallied 40% from mid-April?

The answer was a resounding "yes" because CEO Jensen Huang and CFO Colette Kress had already explained last month that the hit would be substantial at $15 billion for the first half of the year. All they had to do was confirm that guidance and explain that Q1 took a larger hit of $7 billion (including inventory) and that Q2 would bear $8 billion.

Jensen also spent considerable time on the call putting the China predicament into perspective because it is a potential addressable market for NVIDIA of $50 billion. More on that coming up.

Second, would we hear of a significant ramp in deliveries in the new flagship GB200 NVL72 rack systems that enterprises are eager to deploy -- since their average selling price is over $3 million?

I had given some very specific commentary to TechCrunch+ senior writer Becca Szkutak on this topic last week...

Why export restrictions aren’t the only thing to pay attention to in Nvidia’s earnings

Since word was that NVIDIA delivered 1,500 GB200 NVL72 systems in April (the last month of Q1 FY2026), we should expect no less than 5,000 units projected for Q2. And if we hear something over 10,000 units, investors should be very pleased and bullish.

The resoundingly bullish answer here came from Colette in her opening remarks...

On average, major hyperscalers are each deploying nearly 1,000 NVL 72 racks -- or 72,000 Blackwell GPUs per week -- and are on track to further ramp output this quarter. Microsoft, for example, has already deployed tens of thousands of Blackwell GPUs and is expected to ramp to hundreds of thousands of GB200s with OpenAI as one of its key customers. Key learnings from the GB200 ramp will allow for a smooth transition to the next phase of our product road map, Blackwell Ultra.

Nearly 1,000 NVL72 racks per week, and growing, sounds like a very strong quarter indeed. With a solid 13 weeks in Q2, we could potentially see over 15,000 units shipped!

That's very bullish. NVDA shares popped up to $144 in after-hours on that tidbit and look poised to open there this morning. I see new highs above $150 soon, because the company's guidance of only $45 billion for this quarter is very conservative.

By my math, they could do that in GB200 NVL72 alone: 15,000 x $3 million = $45 billion.
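That back-of-envelope math is easy to verify. Here is a minimal Python sketch using the article's own figures (15,000 racks at an average selling price of roughly $3 million each):

```python
# Rough quarterly revenue if GB200 NVL72 shipments alone hit 15,000 units.
units = 15_000    # projected NVL72 racks shipped in Q2 (the article's estimate)
asp = 3_000_000   # average selling price per rack, in dollars
revenue = units * asp
print(f"${revenue / 1e9:.0f} billion")  # prints "$45 billion"
```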

Behold the AI Factory

Last week after Jensen's keynote at Computex in Taiwan, I felt compelled to write an article explaining why he is using the phrase "AI factories" so much this year...

NVIDIA AI Factories: More Than Clever Marketing?

In Colette's opening remarks, she confirmed what I was seeing. Let me set the context first with this statement: "Our customers’ commitments are firm."

She said this while talking about Data Center revenue growth of 73% and "AI factory build outs... driving significant revenue."

This is important because we are talking about significant capital expenditures for these companies, given the $3 million sticker price.

So it's natural for investors and analysts to wonder how sustainable this demand trend is.

The reality is that the "hyperscalers" -- including Cloud Service Providers (CSPs) Microsoft Azure, Amazon AWS, Google Cloud, and Oracle Cloud, plus Meta Platforms (META - Free Report), Tesla, and OpenAI -- have insatiable demand for NVIDIA GPUs and advanced rack systems like the GB200 NVL72.

They are all in a massive build-out phase that will last for two to three years. Just think how few companies are in the position of NVIDIA to say "our customers' commitments are firm" for such capex over multi-quarter periods.

Even a single company, Tesla (TSLA - Free Report), can make the argument for AI Factories because they need accelerated, hyperscale compute to train and operate FSD (full self-driving) cars, Grok AI and other xAI initiatives, and Optimus humanoid robots. Jensen believes robotics will be a multi-trillion-dollar industry.

He also sums up the ramping demand in these few words: "Reasoning models are driving a step function surge in inference demand."

For NVIDIA, this insatiable demand is like having government contracts galore! And we haven't even talked about sovereign AI adoption yet.

Like the iPhone Cycle, But Better

The next upgrade of Blackwell is going to start shipping this quarter too. Again from Colette...

"Sampling of GB300 systems began earlier this month at the major CSPs, and we expect production shipments to commence later this quarter."

GB300 will leverage the same architecture, same physical footprint, and the same electrical and mechanical specifications as GB200. More importantly, the GB300 drop-in design will allow CSPs to seamlessly transition their systems and manufacturing used for GB200 while maintaining high yields in performance and memory.

This has been a key element of the NVIDIA product roadmap and annual cadence: everything works together and nothing becomes obsolete. This is why I say that "it's like the iPhone cycle and Apple ecosystem -- but better."

NVIDIA can sell better, faster, and more expensive GPUs to their customers every year because everything is seamless and has what I call "multiplicative integration."

In other words, systems get better because they can all be upgraded with constantly improving and expanding CUDA software libraries. I first noticed this with the transition from Grace Hopper (GH) systems to Grace Blackwell (GB) last year.

Colette: "For example, we increased the inference performance of Hopper by four times over two years. This is the benefit of NVIDIA’s programmable CUDA architecture and rich ecosystem."

AI Factories Aren't Just for ChatGPT

In my AI factories article, I left out one customer that Jensen often talks about: the nation-state. He believes that every country will learn they need to control their own data, and not just for security reasons.

In the AI economy, where there is knowledge and intelligence, there is potential wealth. So every country should be seeking to harness their data -- about their land and resources, their people, their economy, and their potential -- and be able to "mine and model" it for maximum value.

Robotics companies and enterprises deploying agentic reasoning models already know the power of simulation and synthetic data training. Ironically, it was a car company, BMW, that first deployed NVIDIA Omniverse to help design new factory operations using "digital twins."

Now nation-states will begin experimenting with their data to solve their problems across land, resources, urban planning, agriculture, education, medicine, science, transportation, materials, and supply chains -- all to make the lives of their people better and lift more out of poverty.

From Colette's remarks...

And more AI factory projects are starting across industries and geographies. NVIDIA’s full stack architecture is underpinning AI factory deployments by industry leaders like AT&T, BYD, Capital One, Foxconn, MediaTek, and Telenor, and strategically vital sovereign clouds like those recently announced in Saudi Arabia, Taiwan, and the UAE. We have a line of sight to projects requiring tens of gigawatts of NVIDIA AI infrastructure in the not too distant future.

She also noted the pace and scale of AI factory deployments, with nearly 100 NVIDIA-powered AI factories taking off this quarter -- a two-fold increase versus last year -- with the average number of GPUs powering each factory also doubling in the same period.

Jensen: Why We Need to Sell to China

About two-thirds of Jensen's opening remarks were about export controls and China AI development.

"On export control, China is one of the world’s largest AI markets and a springboard to global success. With half of the world’s AI researchers based there, the platform that wins China is positioned to lead globally."

With the current White House policy, the $50 billion China market is effectively closed to US industry. I think Jensen's views are very important here and it's a nuance you don't hear most investors and analysts discuss. Here's my summary:

The US weakens its position with China by withholding technology because it forces them to develop all their own -- which they will -- instead of allowing them to become dependent on our infrastructure standards and leadership.

Now here are four key paragraphs from Jensen in his own words...

China’s AI moves on with or without US chips. It has the compute to train and deploy advanced models. The question is not whether China will have AI. It already does. The question is whether one of the world’s largest AI markets will run on American platforms.

Shielding Chinese chipmakers from US competition only strengthens them abroad and weakens America’s position. Export restrictions have spurred China’s innovation and scale. The AI race is not just about chips. It’s about which stack the world runs on. As that stack grows to include 6G and quantum, US global infrastructure leadership is at stake.

The US has based its policy on the assumption that China cannot make AI chips. That assumption was always questionable, and now it’s clearly wrong. China has enormous manufacturing capability. In the end, the platform that wins the AI developers, wins AI. Export controls should strengthen US platforms, not drive half of the world’s AI talent to rivals.

US platforms must remain the preferred platform for open source AI. That means supporting collaboration with top developers globally, including in China. America wins when models like DeepSeek and Qwen run best on American infrastructure. When popular models are trained and optimized on US platforms, it drives usage, feedback, and continuous improvement, reinforcing American leadership across the stack.


(end of excerpts from Jensen's opening remarks)

With NVDA shares looking for a brighter open this morning above $140, I would look for any initial selling as a buying opportunity. It's clear that the demand trends will push the company into many successive quarters of $50 billion in sales, starting with this one.

As I projected at the end of last year, NVIDIA is on a path to $500 billion in annual revenues over the next five years. Now that you understand the "AI factory" infrastructure concept, this is not a stretch.

It's actually only 38% compound annual growth for 5 years. That's very doable in the AI Economy where NVIDIA is the premier provider of its picks and shovels (GPU innovations and networking hardware) and its brains and dreams (CUDA, Omniverse, and Cosmos).
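For the curious, here is a quick sanity check on that 38% figure. The article doesn't state its starting base; roughly $100 billion in annual revenue (around NVIDIA's run rate when the projection was made) is my assumption, since it makes the arithmetic work out:

```python
# Implied compound annual growth rate (CAGR) to reach $500B in five years.
target = 500.0   # target annual revenue, in $B
base = 100.0     # assumed starting annual revenue, in $B (not stated in the article)
years = 5
cagr = (target / base) ** (1 / years) - 1
print(f"{cagr:.1%}")  # prints "38.0%"
```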

And thus my longer-term forecast for NVIDIA to become one of the few companies with $1 trillion in sales sometime in the 2030s. That will probably make it the first $5 trillion company by market cap sometime before that decade begins.

