Jensen Huang Wants $1 Trillion While Demoing Features You'll Need $10,000 to Run
Remember when I wrote about the AI bubble making your gaming PC more expensive back in February? I warned that AI infrastructure spending was going to have real consequences for gamers—that the same companies hoarding memory chips and GPU production capacity were going to squeeze consumers out of the market.
Well, Nvidia just held their GTC 2026 keynote. And it's worse than I thought.
The $1 Trillion Flex
On Monday, Jensen Huang took the stage at San Jose's SAP Center to deliver his annual leather-jacket sermon. The headline? Nvidia now sees $1 trillion in AI chip demand through 2027—up from the $500 billion figure he quoted last year.
Let that sink in. In twelve months, Jensen's demand projection doubled.
He called this the "inflection point of inference"—the moment where AI stops being about training models and starts being about actually running them at scale. Data centers are apparently building out AI factories faster than Nvidia can supply them, and Jensen wants you to know that this is Very Good News for shareholders.
But here's the thing: 60% of Nvidia's revenue comes from hyperscalers—the Amazons, Microsofts, and Googles of the world. The same companies doing the circular financing deals I wrote about in February. The same companies that are consuming 70% of high-end memory production while you can't buy a reasonably priced GPU—and Valve can't get affordable RAM for the Steam Machine.
Jensen's standing on stage projecting $1 trillion in demand. Meanwhile, the RTX 5090 is going for over $4,000 on Amazon—more than double its $1,999 MSRP. And Nvidia's response to that? Here, let us show you a cool new feature.
DLSS 5: The Feature Nobody Asked For
During the keynote, Nvidia unveiled DLSS 5—and the reaction has been... something.
Previous versions of DLSS (Deep Learning Super Sampling) were about performance. You render the game at a lower resolution, the AI upscales it, and you get better framerates without sacrificing too much visual quality. It was genuinely useful technology that solved a real problem—so useful, in fact, that it's the entire selling point of the PS5 Pro. Sony's PSSR does essentially the same thing: render lower, upscale smart, get both performance and quality.
DLSS 5 is different. This isn't upscaling anymore. Nvidia calls it "neural rendering"—a generative AI model that takes your game's geometry, textures, and lighting data and reinterprets it to look more "photoreal."
Translation: AI decides what your game should look like.
Grace Ashcroft and the Yassification of Gaming
The demo footage showed characters from Resident Evil Requiem, and gamers immediately lost their minds. Grace Ashcroft and Leon Kennedy looked... different. Their features were smoothed out, idealized, with what people are describing as "Instagram filter" aesthetics. Grace appeared to have makeup added. Leon looked like he'd been run through FaceApp.
The internet's response was immediate and brutal:
- "This is just a garbage AI filter"
- "No character or soul to it"
- A rendering engineer at Respawn called it "an overbearing contrast, sharpness, and airbrush filter"
- The phrase "AI slop" appeared approximately ten thousand times
And Jensen's response to the criticism?
"Well, first of all, they're completely wrong."
Classic.
He went on to explain that developers have "full artistic control" over DLSS 5's effects. They can fine-tune the intensity, apply color grading, mask off areas where the effect shouldn't be applied. It's not a post-processing filter—it's "content-control generative AI" at the geometry level.
Sounds great in theory. Here's the problem.
Capcom Found Out When You Did
One of Jensen's key talking points was that developers maintain artistic control. Nvidia even released a statement saying the Resident Evil Requiem footage "was tuned and approved by Capcom."
Then this happened: Capcom's own developers said they learned about DLSS 5 being used on their game at the same time as the public.
Read that again. The people who actually made the art—the character modelers, the lighting artists, the people whose job is literally "artistic control"—found out their work was being used as the poster child for "look how much better AI makes this" when the rest of us did.
So much for developer control.
The Two-GPU Elephant in the Room
But here's where it gets really good.
The DLSS 5 demo at GTC? It wasn't running on a normal gaming PC. It was running on a rig with two RTX 5090 graphics cards—one dedicated entirely to running the AI model.
Let's do some napkin math on what that demo system actually costs.
The GPUs alone:
- 2× RTX 5090 at current street prices (~$4,200 each): $8,400
The rest of the system (because you need serious hardware to feed two 5090s):
- High-end CPU (i9-14900K or Ryzen 9 9950X): $550-700
- Motherboard with dual PCIe x16 slots and beefy VRMs: $500-800
- 64GB DDR5 (you know they weren't running 32): $250-400
- Power supply: Let me do some more math here...
Each RTX 5090 has a TDP of 575 watts, with peak spikes exceeding 700W. Two of them? That's 1,150W just for GPUs at rated TDP—potentially 1,400W+ with spikes. Add a 300W CPU, motherboard, RAM, storage, cooling, and you're looking at a system that needs an 1,800-2,000W power supply to run stably.
A Titanium-rated PSU at that wattage? $500-700.
Add a case that can actually fit two massive 5090s with proper airflow, cooling, storage, and you're looking at:
Total estimated cost: $10,900 - $12,100
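If you want to sanity-check that napkin math yourself, here's a quick sketch. All figures are the rough street-price estimates from above, not official MSRPs, and the case/cooling/storage line is my own allowance:

```python
# Napkin-math sanity check for the dual-5090 demo rig.
# All prices are rough street-price estimates, not official MSRPs.
parts = {
    "2x RTX 5090":          (8400, 8400),  # ~$4,200 each at street prices
    "CPU":                  (550, 700),    # i9-14900K / Ryzen 9 9950X class
    "Motherboard":          (500, 800),    # dual PCIe x16, beefy VRMs
    "64GB DDR5":            (250, 400),
    "Titanium PSU":         (500, 700),    # 1,800-2,000W class
    "Case/cooling/storage": (700, 1100),   # rough allowance
}

low = sum(lo for lo, hi in parts.values())
high = sum(hi for lo, hi in parts.values())
print(f"Estimated total: ${low:,} - ${high:,}")
# -> Estimated total: $10,900 - $12,100
```

Adjust the street prices to taste—the total doesn't get any less absurd.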
For context: about a year ago I priced out an MSI Titan 18 HX with 128GB of RAM as a joke—just to see what peak absurdity looked like. That thing had a 175W mobile RTX 4090, was marketed as a "desktop replacement," and weighed nearly 8 pounds. It was the most ridiculous consumer laptop on the market. The config ran $5,000-7,000 depending on specs.
Nvidia's DLSS 5 demo rig costs more than my absurdity benchmark.
And here's the kicker: Resident Evil Requiem's minimum specs call for a GTX 1660—a six-year-old entry-level card. The recommended spec is an RTX 2060 Super. This is a game designed to run on modest hardware, targeting 1080p/60fps upscaled from 720p native.
Nvidia could have demoed DLSS 5 on Cyberpunk 2077 with full path tracing—a game notorious for melting GPUs—and shown their tech making an already-demanding title sing. Or Borderlands 4, or any graphically intensive game where "we made it look better" would actually impress.
Instead, they picked a game that runs on potato hardware and used $8,400 worth of GPUs to add makeup to Grace Ashcroft. Almost like they needed a game that didn't already look incredible so the AI "improvement" would seem more dramatic.
The Power Bill From Hell
Let's talk about what running this thing actually means.
A 2,000W system running at load for, say, 4 hours of gaming per day:
- 2kW × 4 hours = 8 kWh per day
- 8 kWh × 30 days = 240 kWh per month
- At the US average of $0.16/kWh = $38.40/month just to game
And that's conservative. Under real load, with both GPUs cranking and the CPU working to keep up, you could easily pull more. That's a space heater with RGB lighting.
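The arithmetic above is simple enough to check in a few lines. Same assumptions as the bullet points: 4 hours a day at full load, US average electricity price of $0.16/kWh:

```python
# Rough monthly energy cost for a 2,000W gaming rig.
# Assumptions: 4 hours/day at full load, US average $0.16/kWh.
system_watts = 2000
hours_per_day = 4
days_per_month = 30
price_per_kwh = 0.16

kwh_per_day = system_watts / 1000 * hours_per_day    # 8.0 kWh
kwh_per_month = kwh_per_day * days_per_month         # 240.0 kWh
monthly_cost = kwh_per_month * price_per_kwh         # $38.40

print(f"{kwh_per_month:.0f} kWh/month -> ${monthly_cost:.2f}/month")
# -> 240 kWh/month -> $38.40/month
```

Swap in your local rate and your actual hours—the number only gets worse if you game more than four hours a day.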
Remember when I wrote about AI data centers consuming nation-state levels of power? Nvidia's now trying to bring that energy footprint to your desktop—just to make Leon Kennedy look like a K-pop idol.
"Single GPU Optimization Coming Fall 2026"
Nvidia promises that by the time DLSS 5 actually launches this fall, it'll work on a single GPU. No more dual-5090 requirements.
Where have we heard this before?
"It'll be optimized later." "Performance improvements are coming." "Trust us, it'll work on normal hardware eventually."
The fact that Nvidia demoed this feature on a $10,000+ rig tells you exactly where the technology actually is versus where they want you to think it is. They're showing off a feature that requires two flagship GPUs to run, telling gamers they're "completely wrong" for being skeptical, and asking us to trust that it'll be consumer-ready in six months.
Meanwhile, those same flagship GPUs are impossible to buy at MSRP because AI data centers have priority.
The Artistic Control Lie
Let's circle back to Jensen's defense: "Developers have full artistic control."
Here's the problem with that argument:
- Capcom's developers didn't know. The people who actually made the art weren't consulted before their work was used to demo a technology that fundamentally alters it.
- It's opt-in for developers, but what about players? If DLSS 5 ships as a toggle in the Nvidia control panel, players can turn it on regardless of whether the developer "approved" it. Your carefully crafted art direction? Gone, replaced by whatever Nvidia's model thinks looks more "photoreal." And even if Nvidia keeps it developer-gated, how long before someone releases a third-party tool to force-enable it? The modding community has a long history of unlocking things that were supposed to stay locked. Once the tech is out there, "developer control" becomes a polite fiction.
- The model decides what "better" means. When a generative AI "enhances" a character's face, it's making aesthetic choices. Those choices are baked into the training data. If the model thinks adding makeup and smoothing skin looks "better," that's what you get—whether the original artist intended it or not. Remember Disney's live-action Lion King? It was technically more realistic than the animated original—which I love, for the record—but it was also an emotionless slog because real lions don't have expressive eyebrows. Sometimes "more photoreal" isn't "better"—it's just uncanny valley with a bigger render budget. DLSS 5 gives off the same energy: characters that look wrong in a way that's hard to articulate. Too smooth. Too perfect. Like someone ran them through a filter that stripped out all the intentional artistic choices that gave them personality.
- It homogenizes art styles. Multiple critics have pointed out that DLSS 5 makes different games look more similar. The AI has a house style, and it's applying that style to everything.
One veteran game artist defended DLSS 5 by showing how much lighting and shading can change a face without changing geometry. Fair point—better lighting does make art look better. But that's not what the demos showed. The demos showed faces that looked fundamentally different, with features that weren't in the original art.
If the AI is "hallucinating" details that weren't in the game's code, are we even playing the game the developers built anymore?
The Pattern Continues
This is the same playbook I've been writing about for months:
- AI demands insane resources (two 5090s, 2000W power supply, $10,000 systems)
- Those resources get prioritized for AI (data centers buying all the memory and GPU capacity)
- Consumers pay the price (5090s at double MSRP, RAM prices up 172%)
- Nvidia tells you it's actually great ("you're completely wrong")
- The feature doesn't solve a problem you had (nobody asked for AI to reinterpret their games)
Jensen's standing on stage projecting $1 trillion in AI chip demand while demoing a feature that:
- Requires hardware you can't afford
- Alters games without developer consent
- Homogenizes art styles into AI-approved aesthetics
- Won't actually run on consumer hardware until "later"
And when people push back, they're told they just don't understand.
The Bottom Line
Look, DLSS 1-4 were genuinely useful technologies. Upscaling and frame generation let people with mid-range GPUs play games at higher quality settings. That's a real benefit.
And here's the thing: DLSS 5 probably could be useful too. Better lighting, improved shadows, subtle enhancements that respect the original art direction — the stuff that veteran artist was demonstrating. There's a version of this technology that helps developers achieve their vision without the uncanny valley Instagram filter energy.
But that's not what Nvidia showed us. They led with yassified Grace Ashcroft on a $10,000 rig and told everyone who raised concerns that they were "completely wrong."
This is what happens when AI stops being a tool and becomes a religion. Big tech is cramming it into everything — not because it makes products better, but because the money has to keep flowing somewhere. You can't justify $1 trillion in demand projections if AI is just "a useful tool for some things." It has to be everything, everywhere, transforming every industry, disrupting every workflow, enhancing every frame of every game whether it needs it or not.
And if the circular financing deals I wrote about in February are any indication, a lot of that $1 trillion is Monopoly money anyway — companies investing in each other, spending it back on each other's services, and calling it growth. The hype has to keep building because the moment it slows down, someone's going to ask where all the actual revenue is.
DLSS 5 isn't a technology announcement. It's a proof-of-concept that AI can touch gaming too — another slide in the investor deck, another reason to believe the bubble is justified. Whether it actually makes games better is almost beside the point.
The AI bubble isn't just making your GPU more expensive anymore. It's actively trying to change what your games look like—whether you want it to or not.
Meanwhile:
- RTX 5090s are going for $4,200+ (MSRP: $1,999)
- Memory prices are still through the roof
- The next console generation is delayed because AI ate all the RAM
- Jensen sees $1 trillion in demand and thinks that's a flex
We're watching the same pattern play out that I warned about in February: AI infrastructure spending creating artificial scarcity, prices going up, consumers getting abandoned, and the companies causing it acting like they're doing us a favor.
Nvidia's not wrong that AI is transforming computing. They're wrong about who that transformation is actually helping.
It's not you. It's the hyperscalers signing billion-dollar deals with money that goes in circles. It's the shareholders watching that $1 trillion projection and seeing dollar signs. It's the data centers consuming 70% of high-end memory production while you wait for GPU prices to come down.
You? You get to watch a demo of a feature you'll never be able to run, on hardware you can't afford, that changes your games in ways you didn't ask for.
And if you don't like it, you're "completely wrong."
Welcome to the future of gaming, brought to you by the AI bubble.
What do you think about DLSS 5 and Nvidia's GTC announcements? Are you buying Jensen's vision, or does this feel like more AI hype at gamers' expense? Find me on Mastodon at @ppb1701@ppb.social and let's talk about it.