With Nvidia Corp. (NVDA:NASDAQ) set to announce quarterly results in the coming days, my colleague Chris Reilly suggested I share my outlook on the company's trajectory.
In a nutshell, I believe Nvidia is destined to be the inaugural company to attain a market valuation of US$10 trillion, regardless of what management says during next week's earnings presentation. That figure is more than twice Nvidia's present market cap of US$4.5 trillion. And I predict they'll get there faster than most anticipate - within a mere three years. That's well ahead of even the most optimistic projections from Nvidia's staunchest advocates.
I implore you to consider my perspective, as I'm guided by the power of compounding fundamentals.
Today, I'll present my straightforward three-part framework for arriving at this conclusion.
As background, my colleagues and I originally recommended Nvidia in April 2013 at a split-adjusted price of just US$0.29. And we've maintained a long position in one of my current advisories since September 2020, established at a split-adjusted US$13.02. Suffice it to say, I've been analyzing the company and accurately forecasting its trajectory for quite some time.
Part 1: Track the Money
The colossal capital expenditures (capex) of big tech giants are fueling the explosive growth in AI infrastructure. This is the money trail we must follow.
The four leading US-based hyperscalers — Amazon.com Inc. (AMZN:NASDAQ), Alphabet Inc. Class A (GOOGL:NASDAQ), Microsoft Corp. (MSFT:NASDAQ), and Meta Platforms Inc. (META:NASDAQ) — are investing hundreds of billions annually to construct AI data centers.
In 2024, these four behemoths collectively spent around US$250 billion on capex, with the lion's share allocated to AI infrastructure.
By 2025, that number had skyrocketed to roughly US$400 billion.
Looking ahead to 2026, these same companies have signaled capex intentions in the ballpark of US$650 billion. Fold in Oracle Corp. (ORCL:NYSE) and CoreWeave (CRWV:NASDAQ), and we're easily talking north of US$700 billion.
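For readers who want to check the trajectory, here is a minimal sketch in Python of the year-over-year growth rates implied by the capex figures cited above (the 2026 number is the companies' signaled intent, not a reported result):

```python
# Hyperscaler AI capex figures cited above, in US$ billions (big four only;
# the 2026 figure is a stated intention, not a reported actual).
capex = {2024: 250, 2025: 400, 2026: 650}

# Compute year-over-year growth between consecutive years.
years = sorted(capex)
for prev, curr in zip(years, years[1:]):
    growth = capex[curr] / capex[prev] - 1
    print(f"{prev} -> {curr}: capex up {growth:.0%}")
```

Roughly 60% growth two years running, with no sign of the curve bending down.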
These aren't speculative projections from starry-eyed analysts. They represent the committed investment plans of the most deep-pocketed corporations on the planet.
And Nvidia is their go-to supplier, currently capturing about half of total AI capex.
The underlying motivation for this unrelenting AI investment is crystal clear: These companies simply cannot afford to lag behind in the escalating AI arms race.
When you're convinced — as they are — that the ultimate prize is a "digital-god-scale" platform worth trillions over decades, you'll stop at nothing to keep investing as aggressively and rapidly as possible.
Part 2: Understand the Evolved Business Model
It's crucial to recognize that Nvidia has transcended its roots as a mere "chip company" and transformed into something far greater.
Today, Nvidia is an "AI systems company."
Its flagship AI system, the GB300 NVL72, is an absolute game-changer. Alongside the GB200 NVL72, it represents the most consequential product introduction in Nvidia's history, signaling the shift from 8-GPU "server scale" systems to 72-GPU "rack scale" systems.
A server is essentially a single metal computer enclosure — akin to a muscular PC sans display or keyboard — brimming with components like GPUs, CPUs, and memory. A rack, meanwhile, is a towering 6-7 foot metal cabinet housing multiple servers arranged vertically like a stack of pizzas.
"Server scale" denotes constructing systems by concentrating on the innards of an individual server enclosure. Nvidia's previous configurations, such as the DGX H100 AI computer, feature eight GPUs interconnected to function as one mammoth GPU. For additional power, you daisy-chain multiple servers using conventional external networking.
"Rack scale" entails designing the entire rack as a single, unified, gargantuan AI system. Nvidia's trailblazing GB300 NVL72 crams 72 Blackwell Ultra B300 GPUs (along with 36 Grace CPUs, high-bandwidth memory, interconnects, etc.) into one cohesive rack, networked at lightning speeds to operate as a monolithic mega-GPU.
The pivotal distinction is that server scale is modular and slower (comparatively speaking), while rack scale is integrated and faster.
This pivot to rack-scale systems is a watershed moment for Nvidia because the company is evolving from peddling what are effectively AI "kits" to offering turnkey, all-in-one AI machines. This empowers Nvidia to:
- Exert greater control over the design (networking, cooling, software, etc.).
- Command premium prices (a lone GB200 NVL72 rack fetches about US$3 million, while the latest GB300 system sells for US$3.5-4.5 million depending on configuration).
- Push the envelope on performance for training and running colossal AI models, delivering ~4X faster training, 30X faster inference, and 50X higher AI factory output per megawatt.
For tech titans like Microsoft and Alphabet — as well as "neocloud" providers like CoreWeave and AI-centric "startups" like OpenAI erecting data centers — it's a huge boon for speed and efficiency.
It's also a dramatically simpler architecture, obviating the need to wrangle countless cables between servers. And it enables you to operate within a much smaller footprint or extract far more from your existing footprint, as a single rack can now shoulder the load of what used to require a phalanx of racks.
Nvidia began shipping GB300 NVL72 systems at scale in the waning months of last year. And customers with the financial means and appetite for industrial-grade AI are devouring them as quickly as possible, because if they don't, they'll be left choking on the dust of their rivals.
As this year draws to a close, Nvidia will unleash its next-gen technology, Vera Rubin. These AI systems will boast twice as many GPUs as Blackwell Ultra while delivering more than 3X the performance.
Then, in late 2027, comes Rubin Ultra, which will crank the GPU count to nearly 600 and serve up roughly 13X the performance of Vera Rubin.
And waiting in the wings for 2028 is the Feynman architecture. Specifics are scant at this juncture, but it's slated to deliver at least a 10X performance leap over Rubin Ultra.
These timelines may (and likely will) slip a tad. But we're still looking at Nvidia unleashing a new generation of AI system architecture every 12-18 months.
That's mind-blowing.
Throughout this period, the aggregate cost of computing is projected to plummet. So you're getting exponentially more horsepower at an ever-decreasing cost.
Grasping the full implications of such colossal computing power is a tall order.
Put simply, you ain't seen nothin' yet when it comes to Nvidia's hardware prowess.
Yes, the bespoke silicon being forged by tech leviathans like Google and Amazon will carve out a niche for application-specific AI workloads.
And yes, AMD's inaugural rack-scale system "Helios," optimized for AI inference and slated to commence volume shipments later this year, will generate ample buzz and revenue.
But Nvidia will still sell every last AI system it can conceivably manufacture for years to come.
Part 3: Crunch the (Simple) Numbers
Nvidia posted US$51.2 billion in "data center" revenue — a close proxy for AI revenue — in its most recent quarter. That represented a blistering 66.8% year-over-year growth rate. And it equates to over US$200 billion on an annualized basis.
(Again, that's solely from AI. Total revenue for the quarter clocked in at US$57 billion.)
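As a quick sanity check, the annualized run rate and the implied year-ago quarter can be reproduced in a couple of lines, using only the figures quoted above:

```python
# Figures from Nvidia's most recent quarter, as cited above (US$ billions).
quarterly_dc_revenue = 51.2   # "data center" segment revenue
yoy_growth = 0.668            # 66.8% year-over-year growth

annualized = quarterly_dc_revenue * 4                    # simple run rate
year_ago_quarter = quarterly_dc_revenue / (1 + yoy_growth)

print(f"Annualized data center run rate: ~US${annualized:.0f}B")
print(f"Implied year-ago quarter:        ~US${year_ago_quarter:.1f}B")
```

That is a roughly US$205 billion run rate, from a segment that was doing about US$31 billion per quarter a year earlier.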
But as jaw-dropping as those figures are, they're merely the tip of the iceberg.
Just months ago, Nvidia CEO Jensen Huang declared he's pursuing a US$3-4 trillion AI infrastructure opportunity over the next five years… and that he has "visibility" into US$500 billion in data center revenue across the six quarters running through the end of calendar year 2026.
Huang wasn't asserting Nvidia would capture that entire US$500B chunk over those six quarters, but my projections aren't far off that mark.
I prefer to err on the side of conservatism when crafting financial forecasts to bake in some wiggle room. Even so, for calendar years 2026 through 2028, my model has Nvidia generating roughly US$270 billion, US$390 billion, and US$550 billion, respectively.
Nvidia currently trades at a price-to-sales multiple of about 24. That sounds steep, but it's quite reasonable considering the company's blistering growth trajectory. And it's right in line with the average of the past few years.
If I apply a slightly more conservative multiple of 19 (to account for the inevitable deceleration in growth over time) and multiply that by my US$550 billion data center revenue forecast for calendar year 2028, we arrive at a valuation of US$10.45 trillion.
In other words, this rudimentary math gets us to a US$10 trillion-plus market cap by the close of 2028.
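The valuation arithmetic is simple enough to verify directly. A minimal sketch in Python, using the assumed multiple of 19 and the US$550 billion 2028 revenue forecast (both are the author's assumptions, not reported figures):

```python
# Reproduce the rough 2028 valuation math from the section above.
ps_multiple = 19       # assumed price-to-sales multiple (author's forecast)
revenue_2028 = 550     # forecast data center revenue, US$ billions

# Multiple x revenue gives market cap in billions; divide to get trillions.
market_cap_trillions = ps_multiple * revenue_2028 / 1000
print(f"Implied market cap: ~US${market_cap_trillions:.2f} trillion")
```

Note that this applies the multiple to data center revenue alone, excluding the rest of the business, which leans the estimate conservative.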
That number is so gargantuan, it's difficult to fathom. But I believe it's entirely plausible.
The stock won't ascend in a straight line from here, and we could very well witness a correction of 40% (or more) in the coming years. But that would simply represent a buying opportunity, assuming the company's operational execution and demand dynamics remain on track.
One parting thought: Beth Kindig, lead tech analyst at I/O Fund, whom I consider the preeminent Nvidia analyst out there, makes a persuasive case for Nvidia attaining a US$20 trillion market cap by 2030. If you really want to go down the Nvidia rabbit hole, I strongly recommend following her work.
If this interests you, make sure to join our Grow or Die Substack. We post free content every M-F. Thanks.
If you enjoyed this, make sure to sign up for the Jolt, Stephen McBride's twice-weekly investing letter, where innovation meets investing. Go here to join.
Important Disclosures:
- Chris Wood: I, or members of my immediate household or family, own securities of: None. My company has a financial relationship with: None. My company has purchased stocks mentioned in this article for my management clients: None. I determined which companies would be included in this article based on my research and understanding of the sector.
- Statements and opinions expressed are the opinions of the author and not of Streetwise Reports, Street Smart, or their officers. The author is wholly responsible for the accuracy of the statements. Streetwise Reports was not paid by the author to publish or syndicate this article. Streetwise Reports requires contributing authors to disclose any shareholdings in, or economic relationships with, companies that they write about. Any disclosures from the author can be found below. Streetwise Reports relies upon the authors to accurately provide this information and Streetwise Reports has no means of verifying its accuracy.
- This article does not constitute investment advice and is not a solicitation for any investment. Streetwise Reports does not render general or specific investment advice and the information on Streetwise Reports should not be considered a recommendation to buy or sell any security. Each reader is encouraged to consult with his or her personal financial adviser and perform their own comprehensive investment research. By opening this page, each reader accepts and agrees to Streetwise Reports' terms of use and full legal disclaimer. Streetwise Reports does not endorse or recommend the business, products, services or securities of any company.
For additional disclosures, please click here.