

Bull of the Day: NVIDIA (NVDA)


In my June 10 article on NVIDIA (NVDA - Free Report), I explained why the stock was still undervalued at $700 because its Datacenter business was growing fast enough to overtake its Gaming segment. In this report, I'll explain why that shift is so important and how innovations that investors have barely begun to understand just ignited the recent explosion in the stock above $800.

That rally has pushed NVIDIA's market cap over the $500 billion mark. It was only in April, when NVDA shares traded for $600, that I was trying to assuage investor concerns about valuation with these words...

With a $380 billion market cap and projected revenues in 2022 of $25 billion, the stock trades at a 15X price-to-sales multiple -- quite rich for a semiconductor player.

But NVIDIA isn't just any chip maker. They lead the field in multiple areas, from gaming, machine learning, and autonomous driving technology to artificial intelligence, medical imaging, and hyperscale computing for scientific research.


Two Catalysts for the Rapid Rally

On June 21, in a special report for Zacks Confidential subscribers, I raised my longer-term price target on NVDA shares to $1,000. In my "UFO Messages: How to Think About Advanced Technology" report, here's what I wrote in a short blurb recommending shares on pullbacks under $700...

NVIDIA (NVDA - Free Report) : I've written about Zacks #2 Rank NVDA as the Bull of the Day twice in the past three months if you want to catch up on why I raised my price target to $750 before any other analyst on Wall Street. It got there much faster than I expected after breaking up and out of that base around $540 where we added in April. If I'm looking out 5 years, NVDA is a buy now with my new long-term price target of $1,000. That rolls back to $850-900 sometime next year. During the next 5-10% mini-correction in the Nasdaq Tech/Growth complex, NVDA shares could revisit support at $650. I would have bids layered in from there down to $600 to make sure you catch some algo stop flushes.

NVDA shares were trading under $740 then, and large investors clearly had other designs for the stock than a pullback, as it moved to new highs above $800 in the past two weeks. Maybe I helped a little more with this public article and video on June 23...

AI Dreams and Reality: Investing in Advanced Technology
Artificial intelligence will create $50 trillion in economic value this decade as it transforms the world. The video contains a great 3-minute clip from Simplilearn on how machine learning works.

I have to take some (small) credit for the NVDA surge because the other big potential catalyst came on Thursday, July 1, and only moved the stock another $20, or 2%, higher. It was this upgrade from BMO Capital...

NVDA price target raised to $1,000 from $750 at BMO Capital: Analyst Ambrish Srivastava expressed rising confidence in the data center business as it continues to evolve from a hardware business to "hardware with a meaningful software component further out." The analyst now sees the data center business growing to $32 billion a few years out, versus a prior expectation of $25 billion, and thinks that as software becomes a bigger portion of the mix, gross margin will continue to trend up.

What Should You Pay for the King of AI Technology?

Again, I was ahead of all Wall Street analysts, all year, in recognizing what growth investors should and would pay for the King of AI. And the shocker is that it has taken large investors this long to finally grasp the power of the NVIDIA "stack" (details coming up); now they will FOMO into the stock to relieve their shame about not buying under $540, where I was telling my followers to add all year!

What's even more interesting is how the BMO analyst reached hard for the high-ground Street target since he now sees Datacenter (DC) growing to $32 billion. Well, maybe he's showing off a little with the $1,000 call now to be first. But he didn't beat me -- boy, I sound like a 10-year-old -- and coming up, I'll tell you about the analyst who saw this first last September.

In the video/article linked above on AI, I also go over the big news that "Tesla Builds Giant Supercomputer Using NVIDIA GPUs". On June 22, we learned that Tesla (TSLA - Free Report) had unveiled the in-house supercomputer the EV maker is using to train deep neural networks for Autopilot and self-driving capabilities. The cluster uses 720 nodes of 8x NVIDIA A100 Tensor Core GPUs (5,760 GPUs total) to achieve an industry-leading 1.8 exaflops of performance.

“This is a really incredible supercomputer,” said senior director of AI at Tesla, Andrej Karpathy. “I actually believe that in terms of flops, this is roughly the No. 5 supercomputer in the world.”

In the video, I explain the evolution of NVIDIA supercomputer architecture from a handful of "petaFLOPS" (one quadrillion floating-point operations per second) to exaflops, or one quintillion FLOPS, equal to 1,000 petaflops.
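If you want to sanity-check those Tesla cluster specs yourself, here is a quick back-of-the-envelope calculation in Python. It is my own sketch, not anything from NVIDIA or Tesla; the only inputs are the node count, the GPUs per node and the 1.8 exaflops figure quoted above.

# Rough sanity check of the Tesla cluster math quoted above (my own arithmetic).
nodes = 720
gpus_per_node = 8
total_gpus = nodes * gpus_per_node                  # 5,760 GPUs, matching the article

cluster_flops = 1.8e18                              # 1.8 exaflops = 1.8 quintillion FLOPS
per_gpu_tflops = cluster_flops / total_gpus / 1e12  # convert to teraFLOPS per GPU

print(f"Total GPUs: {total_gpus:,}")                        # 5,760
print(f"Per-GPU throughput: {per_gpu_tflops:.0f} TFLOPS")   # ~312 TFLOPS
# ~312 TFLOPS per GPU lines up with the A100's peak FP16 Tensor Core rating.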

Now let's explore why NVIDIA can and should trade for 20X next year's new sales consensus of $27.5B -- $550B market cap or about $900 per share by year-end -- as it grows the topline at 20-25% for the next 5 years.
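For readers who want to see how I get to that target, here is a minimal sketch of the arithmetic in Python. The revenue consensus and the 20X multiple come from the paragraph above; the share count is my own rough assumption, not a figure from this article.

# Back-of-the-envelope valuation sketch (illustration only, not investment advice).
sales_consensus_b = 27.5     # next year's sales consensus, $ billions (from the article)
ps_multiple = 20             # the price-to-sales multiple argued for above
shares_outstanding_b = 0.62  # roughly 620 million pre-split shares (my assumption)

implied_market_cap_b = ps_multiple * sales_consensus_b       # = $550 billion
implied_price = implied_market_cap_b / shares_outstanding_b  # ~$890, or "about $900"

print(f"Implied market cap: ${implied_market_cap_b:,.0f}B")
print(f"Implied share price: ~${implied_price:,.0f}")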

From Gaming Comes the Powertool of AI

The Gaming business has been a core sales and profit driver for NVIDIA for nearly 3 decades. But along the way, the gold mine that Jensen Huang and his engineering teams discovered was that GPU (graphics processing unit) semiconductors -- essential to high-speed, high-resolution gaming graphics -- were also perfect for the "massively parallel architectures" of advanced computation on large data sets.

In 2006 they unveiled CUDA (Compute Unified Device Architecture), a revolutionary platform for general-purpose GPU computing, designed to enable scientists and researchers to harness the parallel processing capabilities of GPUs to tackle their most complex computing challenges.

By 2010, NVIDIA Tesla GPUs were powering the world's fastest supercomputer, in China. In 2015, the company announced a big dive into AI with a wave of new products, including NVIDIA DRIVE for autonomous vehicles, the GeForce GTX Titan X for training deep-learning neural networks, the Tegra X1 delivering 1 teraFLOPS of processing power for computer vision, and the Jetson TX1 for robotics.

And 2016 brought the beginnings of an HPC (high-performance computing) revolution with the DGX-1, the world's first deep-learning supercomputer in a box, built to supercharge AI applications for thousands of smaller companies -- the genius of the design being that the boxes could be "stacked" together into bigger, customized supercomputers.

In 2017, the Volta GPU architecture was introduced, with NVIDIA Tesla V100 GPU accelerators powering the DGX family of AI supercomputers now found around the world in corporations, national institutions and university research labs. And all of these milestones set the stage for exponential growth in hardware and software capabilities achieved in just the past 5 years.

Gaming Drives GPU Innovation

Throughout this journey, two dynamics became consistently clear at every new product introduction:

1) R&D in gaming graphics creates GPU innovations that can be applied to AI. See my 2018 article Who Cares NVIDIA Makes Great Gaming Graphics? for more insight.

2) Since the birth of the CUDA platform, NVIDIA engineers have been focused on superior software that can run the GPU hardware to perform all of its "industrial light and magic" tricks in everything from games and films to scientific simulation and autonomous driving, all of which require billions of calculations per second.

This becomes the hardware-plus-software stack that is like a giant workbench or laboratory for companies and researchers to explore data and make use of it. I discovered this crazy little gaming chip company called NVIDIA at $25 a share in 2016, and I'll admit it took me a whole year to catch on to the power of massively parallel architectures (see my 2017 article Get Your MPA in Deep Learning).

But once I did, I became fanatical about their brand of data "mining and modeling" analytics and artificial intelligence (machine learning and deep learning) as transformational technologies that were just beginning to be exploited by corporations, governments and scientific researchers.

Digital Flywheels of Super-Exponential Advance

This subtitle is straight from NVIDIA. It was used in a keynote that Jensen Huang gave on June 25 at Teratec, a supercomputing conference in France. The 18-minute video is worth your time because Jensen tells his company's story far better than I can. And he's got all the cool graphics too, describing the power of HPC in molecular biology.

To illustrate how NVIDIA continues to reinvent Moore's Law, Jensen said, “In 1997, the RIVA 128 was a 4-million-transistor chip built by 50 engineers. Today the Ampere 100 is 15,000 times larger and took 5,000 engineers.”

First off, that’s 60 BILLION transistors in a space the size of a shoebox! I thought it was a big deal a few years ago when the Volta card packed 21 billion transistors. This is what is possible with chip circuitry designs going below 10 nanometers, or 10-20% the size of the coronavirus.

After you wrap your head around that, try to understand what Jensen means when he says the pace of computational power growth exceeds Moore's Law (roughly a doubling every 2 years) by 1,000 times. The speed and power of the chipsets NVIDIA is building are mind-boggling.

Here was another slice of the presentation that stood out before Jensen introduced the concepts of exponential advance in computing power for scientific applications that need industrial-strength simulation capabilities...

Consumer recommender systems are some of the largest-scale commercial HPC systems and they run continuously. Some 30 million servers are deployed each year into hyperscale and cloud data centers. A significant number of these are learning predictive features and user preferences.

But industries that make products based on physical and biological sciences have problems that are too large and complex to properly capture in simulation. To aid in creating actual products -- whether to decode and understand biomolecules, discover novel drugs, find more sustainable food and fuel sources, or safely operate autonomous machines to make and deliver goods -- industrial HPC can only be widespread when companies can design and stress-test the entire product in simulation.

There's a threshold of scale that needs to be achieved. Until now, the scale of simulation possible in HPC was simply too small to be useful to industry. As the saying goes, having the tallest ladder is useless if you're going to the moon. We need a rocket.


(end of Jensen keynote excerpts)

To put this processing power in perspective, think again of how it is measured in FLOPS, or floating-point operations per second. One teraFLOP equals 1 trillion FLOPS. A petaFLOP equals one quadrillion and an exaFLOP one quintillion. Exascale computing is a billion billion FLOPS. To match what an exascale computer can do in just one second, you'd have to perform one calculation per second for 31.7 billion years.
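Here is that last claim checked with a few lines of Python (my own arithmetic, not from the keynote): one second of exascale work, replayed at one calculation per second, takes about 31.7 billion years.

# One second of exascale computing = a billion billion (10**18) calculations.
calculations = 1e18
seconds_per_year = 365.25 * 24 * 3600    # ~31.6 million seconds in a year

years_needed = calculations / seconds_per_year
print(f"{years_needed / 1e9:.1f} billion years")   # ~31.7 billion years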

I cover a lot more of these "power law" innovations in detail in my latest video and article...

NVIDIA and AMD: Digital Flywheels of Super-Exponential Innovation
AI is accelerating at exascale with the marriage of GPU supercomputers and advanced simulation software.

Speaking of Advanced Micro Devices (AMD - Free Report), a bunch of great news in June finally launched its shares out of the doldrums. AMD is the next-best provider in both gaming and datacenter, yet it was trading at half the valuation of NVDA. The video and article have all the details, including another potential Intel misstep in the recently announced delay of its Sapphire Rapids server platform, which opens the door for more AMD order wins.

The Analyst Who First Saw DC Overtaking Gaming

If you are not a gamer or NVDA investor, you might find it hard to care about all this. While I am not a gamer, I am an NVDA investor, and I'm here to give you the same two big reasons to care that I've given my followers since 2017...

1) NVIDIA gaming graphics R&D is giving them deep knowledge about the next frontier of human evolution: AI

2) AI is going to dramatically change the world as you know it

This is why we always buy the dips in NVDA, even if their Gaming segment remains the biggest source of revenues, with the Datacenter segment nipping at its heels. Because that mix is about to shift dramatically over the next few years.

In February, when NVIDIA reported results for its FY21 fourth quarter and full year (ended January), you could see the rapid advances in both segments despite (or maybe because of) the pandemic.

Gaming revenue for Q4 was a record $2.50 billion, up 10% from the previous quarter and up 67% from a year earlier. Full-year revenue was a record $7.76 billion, up 41%. NVIDIA also announced the company’s biggest-ever laptop launch, with 70+ new laptops for gamers and creators, powered by NVIDIA GeForce RTX™ 30 Series Laptop GPUs.

Datacenter Q4 revenue was a record $1.90 billion, slightly above the previous quarter and up 97% from a year earlier. Full-year revenue was a record $6.70 billion, up 124%. NVIDIA also announced that the world’s leading OEMs unveiled the first wave of NVIDIA-Certified Systems™ with NVIDIA A100 Tensor Core GPUs — the industry’s only accelerated servers tested for machine learning and data analytics workloads.

These two segments accounted for nearly 87% of the company's total revenue of $16.68 billion last year -- which was a 53% topline jump from the previous year!

I wasn't surprised at all by these numbers, especially the higher growth rates in Datacenter vs. Gaming, because last September I really started to dive into the numbers thanks to a new report by Jefferies semiconductor analyst Mark Lipacis. He projected the former growing steadily at a 40% CAGR vs. just 10% for the latter.

Based on that math, this is the year that Datacenter (DC) overtakes Gaming in revenue. And based on his 5-year projections, Lipacis sees Gaming growing to a nearly $12 billion business while DC blows the doors off to $34 billion.
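Here is a rough Python check of that math, using the FY21 segment revenues above and the Lipacis growth rates. It is my own approximation of his model, not his actual projections.

# Project segment revenue for 5 years at the CAGRs from the Lipacis report.
dc_rev, gaming_rev = 6.70, 7.76      # FY21 revenue in $ billions (from the article)
dc_cagr, gaming_cagr = 0.40, 0.10    # Datacenter vs. Gaming growth rates

crossover_seen = False
for year in range(1, 6):
    dc_rev *= 1 + dc_cagr
    gaming_rev *= 1 + gaming_cagr
    note = ""
    if dc_rev > gaming_rev and not crossover_seen:
        note = "  <-- Datacenter overtakes Gaming"
        crossover_seen = True
    print(f"Year {year}: DC ${dc_rev:.1f}B vs Gaming ${gaming_rev:.1f}B{note}")

# Year 1 already shows the crossover (~$9.4B vs ~$8.5B); by Year 5 the model lands
# around $36B for DC and $12.5B for Gaming, roughly in line with the $34B and
# "nearly $12 billion" figures cited above.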

You can see how the Street is just catching up to this 20-year veteran of chip cycles and new products.

To be clear, Lipacis was inspired to write this new report after NVIDIA announced it had an agreement to buy UK-based ARM Holdings for $40 billion. That deal remains controversial -- for everyone but NVDA -- and also uncertain in terms of the probability of actually closing any time soon.

The primary issues revolve around ARM's technology for mobile devices, which is licensed to everyone from Apple (AAPL - Free Report) to Qualcomm (QCOM - Free Report).

ARM also designs CPUs (serial processing), which is part of what made it attractive to NVIDIA and its GPU (parallel processing) expertise. I covered some of the issues here on Sep 17: "NVIDIA Strikes an ARMs Deal to Take All the Chips."

BREAKING: Arm Says Merger Better Than IPO

As I write this on Friday afternoon, a Reuters story just hit the wires that is very intriguing.

Stephen Nellis wrote "Nvidia Corp's (NVDA) proposed $40 billion acquisition of Arm Ltd would better support the creation of UK technology jobs than the SoftBank Group unit becoming a standalone public company once again, Arm's chief executive said on Friday."

"We contemplated an IPO but determined that the pressure to deliver short-term revenue growth and profitability would suffocate our ability to invest, expand, move fast and innovate," Arm CEO Simon Segars wrote in a blog post. "Combining with Nvidia will give us the scale, resources and agility needed to maximize the opportunities ahead," Segars wrote.

Nellis also related a story I missed last week. Apparently, Qualcomm CEO Cristiano Amon recently told The Telegraph newspaper and other media outlets that Qualcomm was open to investing in an initial public offering by Arm if the Nvidia deal fell apart.

It makes sense that Amon would see joint ownership of Arm by industry peers as a preferred "chip design" to keep the firm independent. I can't argue with that since I own QCOM shares too.

Bottom line: Buy NVDA on the dips but be prepared to think long-term. $1,000 won't come until later in the year -- in fact, after the 4-for-1 stock split in two weeks, my new price target gets cut to $250. And that number may come sooner than anyone has calculated -- even me!

