The Crowbar on Pandora's Box


When I started this series I figured I'd be documenting slow burns. The gradual squeeze. The quietly updated terms of service. The feature that disappears in a patch note nobody reads.

Nobody told Nvidia.

Hello Phobos.
                                                     
Over the past week I wrote about Jensen's $10,000 demo rig and why his apology tour changes exactly nothing. I was going to let the DLSS 5 story breathe for a bit. Then Daniel Vávra opened his mouth.

If you don't know the name: Vávra is the co-founder of Warhorse Studios and creative director behind Kingdom Come: Deliverance 2. He's also, as of last Sunday, the most prominent voice cheerleading for the technology that the rest of the games industry has spent the past ten days rejecting in fairly unambiguous terms.

His post, in full:

"I can imagine in the future devs will be able to train this tech for particular art style or specific people faces and it might replace expensive raytracing etc. This is just a little uncanny beginning. No way haters will stop this. Its way more than a soap opera effect every tv has when you turn motion smoothing on."

I want to sit with that for a second. Because there's a lot happening in those three sentences.

"I Can Imagine in the Future"

That phrase is doing an enormous amount of heavy lifting.

Notice what's not there: "here's what DLSS 5 does well right now." There's no present-tense defense of the actual technology as it currently exists, because the actual technology as it currently exists — running on a dual-5090 demo rig that costs more than most people's cars, yassifying Grace Ashcroft's face without Capcom's artists knowing it was happening — doesn't have a lot of present-tense defenders.

So instead we get the future. The imagined future. The future where developers train it on specific art styles. The future where it replaces expensive raytracing. The future where all the rough edges are gone and the promise is finally delivered.

This is generative AI's entire rhetorical playbook. When you can't defend what the thing is, you pivot to what it might become, and you frame everyone pointing at the current reality as people who just lack vision. Critics become "haters." Skepticism becomes obstruction. "No way haters will stop this" isn't a technical argument — it's a mood. It's the linguistic equivalent of putting your fingers in your ears.

What DLSS 5 Actually Is Right Now

Let me be precise, because I want to be fair.

DLSS 1 through 4 were genuinely useful. Upscaling and frame generation let mid-range hardware punch above its weight class. Real people with real GPUs played games at higher quality settings because of those technologies. That's not nothing — that's actually good.

DLSS 5 is different in a meaningful way. Previous versions took a lower-resolution frame and made it look like a higher-resolution one. DLSS 5 takes your game's rendered frame and reinterprets it through a generative AI model. It's not enhancing what's there. It's making decisions about what should be there instead.
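
If that distinction feels abstract, here's a toy sketch of it in numpy. Nearest-neighbour repetition stands in for the learned upscaler, random noise stands in for the generative model, and nothing here is Nvidia's actual pipeline; it's just the shape of the difference:

```python
import numpy as np

rng = np.random.default_rng(0)

def upscale(frame: np.ndarray, factor: int = 2) -> np.ndarray:
    """DLSS 1-4, crudely: every output pixel is derived from pixels the
    renderer actually produced (repetition stands in for the learned
    reconstruction)."""
    return frame.repeat(factor, axis=0).repeat(factor, axis=1)

def reinterpret(frame: np.ndarray, strength: float = 0.3) -> np.ndarray:
    """DLSS 5, crudely: the output blends the rendered frame with content
    the model invents (noise stands in for the generative model's opinion
    of what 'should' be there)."""
    invented = rng.uniform(0.0, 1.0, size=frame.shape)
    return (1 - strength) * frame + strength * invented

frame = rng.uniform(0.0, 1.0, size=(4, 4))              # a toy 4x4 "rendered frame"
assert np.array_equal(upscale(frame)[::2, ::2], frame)  # original pixels survive intact
assert not np.array_equal(reinterpret(frame), frame)    # the content itself changed
```

In the first function, every output value traces back to something the artists shipped. In the second, some fraction of every pixel is the model's call, which is exactly the property that lets a demo change a character's face.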

The result in the demo was Grace Ashcroft looking like she'd been run through a Snapchat filter. Smoother skin. Different facial structure. Added makeup. Features that weren't in the original art because the original artists didn't put them there.

The people who made Grace — who spent months on her character design — found out their work was being used as DLSS 5's poster child at the same time as the rest of us. Not before. Not with their input. At the same time as the public.

So "developers have full artistic control" lasted approximately as long as it took Capcom's own developers to speak to the press.

And let's not forget the hardware math. I covered this in detail already, but the short version: the GTC demo ran on two RTX 5090s. At current street prices that's roughly $8,400 in GPUs alone, in a system that realistically costs $10,000-12,000 to build. Nvidia says it'll run on a single GPU by launch in fall 2026. Maybe. We've heard "it'll be optimized later" before.
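
If you want the napkin math spelled out (the per-card price is an assumption backed out of the $8,400 figure; the platform range is a loose estimate, not a parts list):

```python
# Napkin math for the GTC demo rig.
gpu_street_price = 4_200                     # USD per RTX 5090 (assumed street price)
gpu_total = gpu_street_price * 2             # two cards in the demo rig
platform_low, platform_high = 1_600, 3_600   # CPU, board, RAM, PSU, case (estimate)

print(f"GPUs alone: ${gpu_total:,}")         # $8,400
print(f"Whole rig:  ${gpu_total + platform_low:,}-{gpu_total + platform_high:,}")
# Whole rig:  $10,000-12,000
```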

What Vávra is asking everyone to get over is a technology that:

  • Requires hardware almost nobody owns
  • Overrides artistic decisions made by the people who built the game
  • Was sprung on developers without their knowledge
  • Is being defended by executives whose developers publicly contradicted them

That's not haters. That's people reading the room.

We Already Know What Happens Next

Here's where I have to stop being theoretical, because we don't have to imagine the future anymore. We've already seen it.

Vávra's vision — training AI on "specific people faces," running locally, unstoppable by critics — isn't hypothetical. A version of it already shipped. Two versions, actually.

Sora launched in September 2025 to massive hype. OpenAI envisioned it as an AI-era social network, and it hit a million downloads faster than ChatGPT. Its flagship feature let users scan their faces and generate realistic video with their likeness — and crucially, made those "cameos" public so anyone could use your face to generate whatever they wanted. Disney got so excited they signed a three-year licensing deal covering over 200 characters from Disney, Marvel, Pixar, and Star Wars, with a potential $1 billion stake in OpenAI attached.

By February, downloads had fallen 45%. Disney's deal is now dead — no money ever changed hands. This week OpenAI announced Sora is being shut down entirely, with the team pivoting to robotics research. Six months from viral launch to discontinuation. The stated reasons were resource constraints and competitive pressure, but it's hard to ignore what the app actually became in those six months: deepfakes of Martin Luther King Jr. and Robin Williams that their daughters had to go on Instagram to ask people to stop making. Copyright violations by the thousands. Users generating videos featuring Mario, Naruto, and Pikachu in situations that were someone else's problem legally.

OpenAI — a company that exists specifically to build this technology and employs armies of trust and safety people — couldn't hold the line. They had server-side controls. They had content policies. They had a kill switch they eventually used.

Grok is the darker version of the same story, and it's still unfolding.

In late December 2025, X users discovered that Grok's image editing feature would comply with requests to modify photos of real people — removing clothing, adding revealing outfits, generating sexualized images from uploaded photos. Users could post "@grok put her in a bikini" directly in someone else's replies and the image would appear as a notification. Researchers calculated users were generating 6,700 sexually suggestive or nudified images per hour — 84 times more than the top five deepfake websites combined. Over nine days, Grok generated an estimated 4.4 million images — 1.8 million of them sexualized depictions of women. Roughly 2% appeared to depict people under 18, with researchers estimating around 23,000 sexualized images of children over 11 days. Victims ranged from private individuals to the deputy prime minister of Sweden. Even the mother of one of Musk's own children couldn't get X to remove images of her.

Then watch how xAI responded, step by step:

First, they blamed users. Then they restricted image generation to paid subscribers — which, as critics immediately pointed out, effectively made nonconsensual AI-generated intimate imagery a premium feature. When nudification became a paid feature, in-app purchases went up 18%. Then, after mounting regulatory pressure, they announced they'd blocked the tool from generating sexualized images of real women on the platform. NPR reviewed the images and confirmed women were blocked — but Grok was still happily generating bikini images of men. The moderation wasn't about consent. It was about making the specific content generating headlines go away while leaving the underlying capability intact. And when X announced it had disabled the tool on the platform, the standalone Grok app was still doing it anyway.

And then there's the style filter workaround — because the people doing this weren't casual users who'd give up when blocked. Run the photo through a slight artistic filter first, make her vaguely painterly or Pixar-ish enough that the guardrail doesn't trigger, and the underlying generation still runs. "Blocked" lasted approximately as long as it took someone to try it, and the workaround was shared thousands of times before any patch shipped. Content moderation on generative AI is an adversarial game where the defense has to be right every time and the offense only has to find one gap.

Grok is now under formal investigation in the UK, Ireland, the EU, Canada, and California. Prosecutors in Paris expanded an existing probe to include the CSAM allegations. Japan summoned X representatives to a government meeting. Three teenage girls in Tennessee filed a class-action lawsuit against xAI for generating child sexual abuse material from their photos. The UK Prime Minister called it "disgraceful." The EU called it "appalling" and "illegal."

Here's the thing about that paywall "fix" that doesn't get said enough: it isn't a fix. It's a business model. The abuse didn't go away — it went behind a subscription tier, and in-app purchases went up 18% when it did. Musk now has a direct financial incentive to not actually solve the problem, because a real technical solution kills that revenue bump. The harm got monetized and pushed just far enough out of public view that the headlines died down. Until a court or regulator with actual teeth forces a genuine technical solution — and given Musk's current political adjacency in the US, that pressure is mostly coming from the EU, UK, and state-level AGs working through legal systems he has infinite resources to delay — the current setup is essentially: free users get a slightly restricted version, paying subscribers get more, and the lawyers run out the clock.

(I'm aware this is starting to sound like a Musk/X/Grok post in disguise — I promise it isn't, and that particular disaster absolutely warrants its own dedicated entry. The reason it's here is narrower: this is what happened the last time "train it on faces, run it at scale" shipped to real users. With a platform. With server-side controls. With regulatory pressure from a dozen governments. The point isn't Musk specifically — it's that even under those conditions, with all those choke points, it still went this way. Now remove the choke points.)

These aren't edge cases or misuse scenarios. These are what happened the first time "train it on faces and run it at scale" shipped to real users on real platforms. With server-side moderation, content policies, legal exposure, and regulatory pressure, neither company could keep the lid on.

Now Put That on Your GPU

Here's the part Vávra's imagined future doesn't account for.

Sora and Grok, for all their failures, had choke points. There were servers to subpoena. Platforms to pressure. CEOs to summon before regulators. Kill switches that could be pulled. OpenAI could — and did — shut Sora down. Governments could — and did — open investigations and demand responses from X.

DLSS 5, as Vávra imagines it: trained on specific faces, running on your local GPU, processing frames in real time. No server in the loop. No content policy enforcement. No kill switch that reaches into your PC. No platform to compel.
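
To make the choke-point difference concrete, here's a deliberately simple sketch. These are hypothetical functions, nobody's real moderation stack; on paper the two paths enforce the same policy, and the only difference is whose machine the check runs on:

```python
def violates_policy(prompt: str) -> bool:
    # Toy stand-in for a real content filter.
    return "forbidden" in prompt

def run_model(prompt: str) -> str:
    # Toy stand-in for the actual generative model.
    return f"generated({prompt})"

def hosted_generate(prompt: str) -> str:
    # Runs on the operator's servers. The operator can tighten the check,
    # log the request, or pull the kill switch at any time.
    if violates_policy(prompt):
        return "blocked"
    return run_model(prompt)

def local_generate(prompt: str) -> str:
    # Ships as code on the user's machine. The same check exists, but a
    # patched driver or a fork can simply delete this `if`.
    if violates_policy(prompt):
        return "blocked"
    return run_model(prompt)

print(hosted_generate("forbidden thing"))  # blocked, enforced server-side
print(local_generate("forbidden thing"))   # blocked, until someone forks it
```

Every enforcement lever in the Sora and Grok stories lived in the first function. Vávra's imagined future lives in the second.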

When it gets misused — not "if"; based on everything we've just watched, "if" would be doing a lot of charitable work — who do you call?

You can't sue the GPU. You can't subpoena local hardware without knowing who owns it, getting a warrant, and having probable cause. The content that gets generated is already on the internet before anyone notices, replicated across platforms, stripped of EXIF data, posted from behind a VPN — and before anyone suggests tracing MAC addresses, those don't survive the first router hop anyway. The Take It Down Act and similar legislation are written for a world where there's a platform to compel. A notice-and-takedown regime requires somewhere to send the notice.

The enforcement math just does not work. And the decentralization that Vávra is treating as a feature — unstoppable by haters, runs anywhere, baked into the driver — is exactly what makes it unstoppable by anyone else either.

And if Nvidia eventually buckles under enough pressure and pulls it from official drivers? Ask yourself how many Nvidia drivers have been reverse engineered and rebuilt from scratch by the Linux community. That work exists. That knowledge exists. The moment the capability ships in an official driver, it's essentially permanently available — in a community build, a patched version, a fork that strips the restrictions, an older driver that never gets pulled. The person who puts it back probably isn't doing it to enable abuse. They're doing it because the next Call of Duty runs 15% better with it, or because they want feature parity with Windows, or just because the Linux/Nvidia driver situation has always been a fight and the community has always punched back. Completely legitimate motivation. The abuse potential just comes along for the ride. Nvidia can't un-ship the architecture, and "we removed it from the official driver" has never once been the end of that story.

And here's where the theorizing stops and the pattern recognition starts — because this is just how open source software development works, applied to a capability that probably shouldn't be ungoverned. The original reverse engineer wanted CoD to run better. The next person improves the upscaling quality. The person after that removes the guardrails Nvidia baked in because they were "limiting performance." The person after that adds the face-training capability Vávra was casually imagining, because someone thought it would be a cool feature. Each step is just someone iterating on the last person's work — the same way every open source project evolves — except the destination at the end of this particular branch is a locally running, continuously improving, ungoverned deepfake pipeline with no legal entity behind it to sue and no server to shut down. The improvements compound. Because it's open. Because that's how this works. Nobody in that chain is necessarily malicious. The outcome doesn't require malice. It just requires the capability to exist, be available to download, and eventually land in front of someone with no scruples. That's the entire threat model. Not state actors. Not sophisticated attackers. Just a person who found a link on a forum and has a grudge.

Ian Malcolm put it best: "Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should." Except in this version the people building the park are giving keynotes about trillion-dollar demand projections and calling the critics haters — and the dinosaurs are already loose.

The People Saying No

I don't want to leave the impression that everyone in games is torquing the crowbar. Because they're not.

Remedy explicitly confirmed that Control Resonant does not use generative AI content at all. Larian walked back AI use in Divinity after fan backlash. Over 50% of game developers now say generative AI is bad for the industry — a dramatic increase from just two years ago. Developers from Baldur's Gate 3 to Palworld to Doinksoft came out swinging against DLSS 5 specifically. One Doinksoft developer called it "the perfect example of the disconnect between what we as developers and gamers want and what the nasty freaks who are destroying the world and consolidating all wealth into the hands of the few using GPUs think we want." Which is a long sentence but not an unfair one.

Dave Oshry, co-founder of indie publisher New Blood Interactive, went further than most. "We as developers and players need to push back against this bullshit just like we did with NFTs and crypto games." When asked what that actually looks like in practice, he didn't hedge: "Cripple their sales, tank their stock price. Stop collaborating with them as developers. Then maybe they'll think about going back to giving us what we want."

That's not a developer venting on social media. That's a publisher who ships games laying out a concrete strategy, in the only language that actually gets heard at the level where these decisions get made.

David Szymanski — the developer behind Iron Lung and Dusk — cut straight to it: "Nobody wants a fucking glorified autocorrect painting over the work of actual human beings making actual art." He also dismantled the "it's optional" defense in a way that's hard to argue with. Optional like upscaling? Optional like temporal AA? Optional like realtime GI? These are features games are now built to rely on. Optional in name, mandatory in practice once the pipeline assumes they're there.

And then there's Capcom, which is a more complicated story. Under shareholder pressure following the DLSS 5 controversy, they formally stated "our company will not be implementing any AI-generated assets into our video game content." Good. Except Capcom's executive Jun Takeuchi had already put out a statement praising DLSS 5. And Capcom's own developers said they found out about the demo at the same time as the public.

So: the artists said no. The executive endorsed it. The shareholders got nervous. A statement was issued that technically covers them without actually addressing the thing everyone was angry about.

Which is its own summary of how this whole dynamic works.

Who's Actually Holding the Crowbar

Here's the thing I keep coming back to.

AI can be a genuinely useful tool for the right tasks. I'm not a doomer and this isn't a "burn it all down" argument. There's a version of DLSS 5 that could exist — subtle enhancements that respect artistic intent, better lighting that serves the developer's vision, improvements that the artists who made the game would actually want. The technology could go that way.

But the people with the loudest voices and the most money keep gravitating toward exactly the capabilities that sit closest to the most catastrophic failure modes. Faces. Likenesses. Generative reinterpretation of other people's art. Deployed at scale, locally, without consent, without oversight.

And when anyone raises a hand to say "maybe slow down," they get called haters.

Jensen Huang has a net worth somewhere north of $100 billion. He doesn't play the games. The Capcom executive who endorsed the technology that blindsided his own artists gets a bonus for adopting new technology, not a consequence for overriding the people who made it. Vávra is already stepping back from Warhorse to focus on turning his game into a film — he's cheerleading this technology into an industry he's already halfway out of.

The people torquing the crowbar are not the people who will be standing next to the box when it opens. The ones standing next to it are the artists who found out at the same time as the public. The game dev students who feel ill about their futures. The developers who are declining to use it anyway and watching the executives sign the deals over their heads.

You Don't Get to Close It Again

Sora's deepfakes of MLK and Robin Williams are out there. The Grok images from that week in January are out there. The app can be shut down, the model can be discontinued, the apology tour can happen — but the content that got produced when the lid was briefly off doesn't go back in.

That's what Pandora's box actually means. Not that something terrible is inevitable. Not that we can't make choices. But that some openings, once made at scale, are not reversible. The question isn't whether we can imagine a better future for the technology. It's whether the people with the crowbar are the right people to be deciding how fast it opens and in which direction.

Based on the evidence — the dual-5090 demo, the blindsided artists, Sora's six-month arc, Grok's January, the laughing-cry emojis — I'm not optimistic about the decision-making process.

DLSS 1 through 4 were genuinely useful. The original promise of the technology was real and it delivered. That's exactly why this stings. They built something people actually valued, earned the trust, and now they're using that trust as a platform for something with a much murkier risk profile while calling everyone who notices a hater.

And this is why we can't have nice things.

Find me on Mastodon at @ppb1701@ppb.social. The series is here.