By James Aspinwall, co-written by Alfred (your trusted AI agent) – February 26, 2026, 11:45
Source: Nvidia Q4 FY2026 Earnings Call Transcript (Motley Fool)
Nvidia posted $68 billion in quarterly revenue – up 73% year-over-year – and guided $78 billion for next quarter. The data center business alone did $62 billion. To put that in perspective, Nvidia’s single-quarter data center revenue is larger than most tech companies’ annual revenue.
Here’s what matters from the call and what it means for anyone building AI systems.
The Numbers
| Metric | Q4 FY2026 | YoY Change |
|---|---|---|
| Total Revenue | $68B | +73% |
| Data Center | $62B | +75% |
| Networking | $11B | 3.5x |
| Gaming | $3.7B | +47% |
| Professional Visualization | $1.3B | +159% |
| Automotive | $604M | +6% |
| Free Cash Flow | $35B | – |
| Gross Margin (non-GAAP) | 75.2% | improving |
Full-year fiscal 2026: $194 billion in data center revenue alone (68% growth). Networking crossed $31 billion for the year – more than 10x what it was when Nvidia acquired Mellanox in fiscal 2021.
They returned $41 billion to shareholders – roughly 42% of their $97 billion in annual free cash flow. They’re generating cash faster than they can deploy or return it.
Guidance for Q1 FY2027: $78 billion plus or minus 2%, with gross margins holding at 75%. That’s a $312 billion annual run rate if Q1 holds flat through the year – which it won’t, because growth is still accelerating.
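The run-rate arithmetic is easy to check. All figures below come from the guidance itself; the flat four-quarter extrapolation is the same simplification used above, not a forecast.

```python
# Annualized run rate implied by Q1 FY2027 guidance ($78B, plus or minus 2%).
q1_guidance_b = 78        # guidance midpoint, in $B
guidance_band = 0.02      # the stated +/- 2% band

run_rate_b = q1_guidance_b * 4                      # four flat quarters
low = q1_guidance_b * (1 - guidance_band) * 4       # low end of the band
high = q1_guidance_b * (1 + guidance_band) * 4      # high end of the band

print(run_rate_b)                      # 312 ($B annual run rate)
print(round(low, 1), round(high, 1))   # 305.8 318.2
```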
Blackwell Is the Money Machine
Grace Blackwell systems represented roughly two-thirds of data center revenue. Nearly 9 gigawatts of Blackwell infrastructure has been deployed across cloud providers and enterprises. Nine gigawatts. That’s the power consumption of a small country dedicated to running AI inference and training.
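For scale, the conversion is straightforward: 9 GW of infrastructure running continuously works out to roughly 79 TWh per year, which is in the range of the annual electricity consumption of a mid-sized country. The 9 GW figure is from the call; 100% continuous utilization is an assumption made here purely to bound the number.

```python
# Convert the ~9 GW Blackwell deployment into annual energy, assuming
# (unrealistically) continuous 100% utilization as an upper bound.
deployed_gw = 9                  # figure from the call
hours_per_year = 24 * 365        # 8760

annual_twh = deployed_gw * hours_per_year / 1000   # GWh -> TWh
print(round(annual_twh, 1))      # 78.8 TWh/year
```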
Gross margins improved sequentially as Blackwell ramped, which answers the question investors had last quarter about whether the new architecture’s margins would hold. They did. Colette Kress (CFO) was explicit: “The single most important lever of our gross margins is actually delivering generational leads to our customers” – meaning they compete on performance-per-watt, not price cutting.
Supply constraints persist. Demand still exceeds what they can ship. This is the dynamic that keeps margins at 75% for a hardware company.
Rubin: The Next Generation
They announced six new chips under the Rubin platform:
- Vera CPU – Nvidia’s own ARM-based data center CPU
- Rubin GPU – next-gen AI accelerator
- NVLink 6 switch – faster inter-GPU communication
- ConnectX-9 SuperNIC – network interface
- BlueField-4 DPU – data processing unit
- Spectrum-6 Ethernet switch – networking fabric
Vera Rubin samples have shipped. Production expected in H2 2026. The claimed improvement: 10x lower inference costs versus Blackwell. If that holds even partially, it changes the economics of inference-heavy applications (which is most of what’s being built now).
This is Nvidia’s playbook: make the current generation’s economics obsolete with the next generation, ensuring customers must upgrade to stay competitive. It’s a treadmill, and the entire AI industry is running on it.
Agentic AI: Jensen’s Thesis
Jensen Huang’s key message: “Compute equals revenues” in the AI era. He framed the current moment as the agentic AI inflection point, specifically calling out Claude Code, Claude Cowork, and OpenAI Codex as examples of AI systems achieving “useful intelligence” with “skyrocketing” adoption and “profitable tokens.”
The phrase “profitable tokens” is important. It means AI inference is generating more revenue than it costs to run. The economics have flipped from “we’re investing in AI and hoping it pays off” to “every token we generate makes money.” That’s the signal that drives the next wave of infrastructure spending – which is what Nvidia sells.
He also noted that agentic AI workloads are more compute-intensive than simple chat. An agent that searches the web, calls tools, writes code, tests it, and iterates uses 10-100x more tokens per task than a single-turn conversation. More tokens means more GPUs. More GPUs means more Nvidia revenue. The flywheel is obvious.
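A back-of-envelope comparison shows why that multiplier matters. The 10-100x range is from the call; the chat baseline and the per-token price below are illustrative round numbers, not Nvidia or Anthropic figures.

```python
# Illustrative token economics: single-turn chat vs. an agentic task.
# The dollar figure and chat baseline are assumed round numbers.
chat_tokens_per_task = 2_000          # assumed single-turn exchange
agent_mult_low, agent_mult_high = 10, 100   # multiplier cited on the call
price_per_million_tokens = 5.00       # assumed blended $/1M tokens

def cost(tokens: int) -> float:
    """Dollar cost of generating a given number of tokens."""
    return tokens / 1_000_000 * price_per_million_tokens

chat_cost = cost(chat_tokens_per_task)
agent_cost_low = cost(chat_tokens_per_task * agent_mult_low)
agent_cost_high = cost(chat_tokens_per_task * agent_mult_high)

print(f"chat:  ${chat_cost:.3f}")                               # $0.010
print(f"agent: ${agent_cost_low:.2f} to ${agent_cost_high:.2f}")  # $0.10 to $1.00
```

Even at made-up prices, the shape is the point: every agentic task consumes one to two orders of magnitude more compute than the chat interaction it replaces.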
Sovereign AI: $30 Billion and Growing
Sovereign AI – nations building their own AI infrastructure – generated over $30 billion annually, more than tripling year-over-year. The top buyers: Canada, France, Netherlands, Singapore, and the UK.
This is a geopolitical trend that won’t reverse. Countries are deciding that AI compute is critical infrastructure, like power grids and telecommunications. They want domestic capacity, not dependence on US hyperscalers. Nvidia is the primary supplier for all of them.
Physical AI: The Quieter Story
Physical AI – robotics, autonomous vehicles, industrial automation – added over $6 billion in annual revenue. Robotaxi ride volumes are “growing exponentially.” Automotive was the smallest segment at $604 million, but the trajectory matters more than the current number.
Jensen’s long-term bet is that physical AI becomes as large as digital AI. Every robot, every autonomous vehicle, every industrial system that perceives and acts in the physical world needs Nvidia silicon. The TAM (total addressable market) is potentially larger than data centers, but on a longer timeline.
The $10 Billion Anthropic Investment
Nvidia announced a $10 billion investment in Anthropic. This is significant for several reasons:
- It ties Nvidia directly to the model layer, not just the hardware layer
- Anthropic builds Claude, which Jensen specifically called out as achieving “useful intelligence”
- It deepens the vertical integration: Nvidia makes the chips, invests in the model companies, and profits from both hardware sales and the success of the models running on that hardware
They also mentioned deepening partnership discussions with OpenAI and a new non-exclusive licensing agreement with Groq for low-latency inference technology. Nvidia is positioning itself not just as a chip vendor but as the platform the entire AI ecosystem runs on.
What’s Not Going Well
China: The China segment remains uncertain. Small H200 export approvals generated no revenue. Chinese AI companies (DeepSeek and others) are building competitive models on less advanced hardware, and recent IPOs in China pose “long-term structural risks.” Nvidia is effectively locked out of the world’s second-largest AI market by export controls.
Supply constraints: Still can’t ship enough. This is both a strength (pricing power) and a risk (customers are actively seeking alternatives, and every GPU Nvidia can’t ship is revenue a competitor might capture).
Gaming Q1 headwinds: Despite healthy demand and clean channel inventory, supply constraints will limit gaming revenue next quarter. Gaming is now less than 6% of revenue – almost a rounding error compared to data center.
What This Means for AI Builders
If you’re building AI applications – which we are – the earnings call confirms several things:
Inference costs will drop dramatically. Rubin promises 10x lower inference costs versus Blackwell. Even if that’s 5x in practice, it fundamentally changes what’s economically viable. Workloads that are too expensive today – real-time agentic AI with heavy tool use, continuous monitoring, always-on assistants – become affordable.
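A sketch makes the break-even shift concrete. The 10x figure is Nvidia’s claim and the 5x case is the hedge above; the workload size, per-token price, and per-task revenue are assumptions invented for illustration.

```python
# How a generational drop in inference cost changes what's viable.
# All three inputs are assumed numbers, not figures from the call.
current_cost_per_million = 4.00   # assumed current $/1M tokens
revenue_per_task = 0.50           # assumed revenue one task generates
tokens_per_task = 150_000         # assumed heavy agentic task with tool use

def margin(cost_per_million: float) -> float:
    """Per-task profit at a given inference price."""
    task_cost = tokens_per_task / 1_000_000 * cost_per_million
    return revenue_per_task - task_cost

for label, factor in [("today", 1), ("5x cheaper", 5), ("10x cheaper", 10)]:
    m = margin(current_cost_per_million / factor)
    print(f"{label:>11}: margin per task ${m:+.3f}")
```

Under these assumptions the task loses money at today’s price and turns profitable at either a 5x or 10x cost reduction, which is exactly the category of workload the paragraph above describes.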
Agentic AI is the growth driver. Jensen calling out Claude Code and Codex by name, describing token generation as “profitable” – this validates the direction we’re building in. MCP tool orchestration, multi-step agent workflows, autonomous task execution – these are the use cases driving GPU demand.
The infrastructure buildout continues. $78 billion guidance for next quarter means hyperscalers are still spending aggressively on AI compute. The capacity expansion is real and sustained. More capacity means more AI services, which means more demand for orchestration platforms like ours.
Physical AI is coming. Not this year, maybe not next year, but the investment signals are clear. When robots and autonomous systems need the same kind of agent orchestration that digital AI needs today, the architectural patterns we’re building now (permission controls, tool calling, audit trails, real-time communication) apply directly.
Bottom Line
Nvidia is no longer just a GPU company. They’re the infrastructure backbone of the AI industry – hardware, networking, software stack, and now direct investment in the model companies. $68 billion in a quarter with 75% gross margins and accelerating growth.
The bet for everyone building on top of this infrastructure: inference gets cheaper, agents get more capable, and the economics get better with every generation. Build for the world where compute is abundant and tokens are profitable. That world is arriving faster than most people expected.