Evil vs Evil, or: How Two Billionaires Accidentally Told the Truth in Oakland
There's a federal judge in Oakland named Yvonne Gonzalez Rogers who said something last week that probably nobody in that courtroom had heard before.
She looked at two of the most powerful men in tech — one who controls more capital than most countries, one who posts at 2am like consequences are a feature for other people — and told them to stay off social media while the trial was running.
"Perhaps you've never done that before," she said. "This would be a first."
Everyone in the room knew who she was really talking to.
This is the Musk v. Altman trial. Billed informally as evil vs evil, which is a fun frame but not quite right. Evil vs evil implies two separate operations with competing philosophies. What four days of testimony in Oakland actually revealed is something I've been documenting from the outside for a while now — it's the same operation, running two instances, occasionally tripping over each other in federal court.
Same playbook. Different costumes.
Musk's case is simple on the surface. He helped found OpenAI as a nonprofit in 2015. OpenAI became a for-profit. He's mad about it. He wants $130 billion in damages, Altman and Greg Brockman removed, and the whole structure rolled back.
Noble stuff. Very principled.
Except under cross-examination this week, a few things came out that complicated the narrative somewhat.
He pledged $1 billion to OpenAI. Delivered $38 million. When pressed on the gap, he told the court his reputation made up the difference. "These things have value," he said. Which is true. It's also the kind of thing you say when you don't have a better answer.
For what it's worth, a California jury valued that reputation at negative $2.6 billion six weeks ago, when they found his tweets about the Twitter acquisition materially false and misleading to investors. The math on "these things have value" gets complicated when a jury has already done the calculation and come up with a different number.
He had his own money manager quietly register a for-profit entity in OpenAI's name in 2017. Just in case it was needed. Emails showed he wanted control of it. Because he was providing almost all the money. The nonprofit mission was important. So was being in charge of it.
He admitted xAI used OpenAI's own models to train Grok. "Partly." Which is the same practice he publicly criticized Anthropic for earlier this year. The man suing over OpenAI's founding principles distilled OpenAI's own work to build his competitor. Under oath. In his own lawsuit.
He said Tesla has no plans to pursue AGI. Tesla announced $25 billion in AI capital expenditure this year. Tesla shareholders may want to look into that discrepancy at their convenience.
He said he didn't know what a safety card was. Safety cards are the documents AI companies publish alongside major model releases detailing capabilities, risks, and safety testing. They are the most basic unit of AI safety documentation in the industry. The man suing to restore OpenAI's safety-focused nonprofit mission had never seen one.
I have a guess about how that meeting went when someone tried to show him one.
And then there are the AI rankings. Later in testimony, asked to rank the world's leading AI providers, he put Anthropic first, OpenAI second, Google third, and described xAI as a much smaller company with just a few hundred employees. He ranked his own company fourth. In his own lawsuit. Under oath. I would argue he was being generous — Meta is running on 2 billion WhatsApp users and half the world's dev servers. Grok is running on people who forgot to turn it off.
OpenAI's attorney asked why, in the eight years since Musk left OpenAI, he hasn't started a nonprofit himself. Nothing stopped him. xAI could have launched as one. It did not. No answer was reported.
The nonprofit mission makes for a compelling lawsuit. It makes for a terrible explanation of why, in eight years and with $650 billion at his disposal, the man suing to restore it never once tried to build one. This was never about the mission. It's about who gets the golden goose when it lays its IPO egg.
Three days on the stand. One witness — his own — whose testimony his lawyers immediately tried to walk back. And a judge who had to remind him, in open court, that tweeting about active litigation is generally not advised.
For what it's worth, there's a prediction market with $8 million in trading volume forecasting how many times he posts per week. If you want a deep dive on why these markets are their own problem, John Oliver covered it. But on this particular question the crowd has put real money on the entirely reasonable assumption that he's going to keep posting. The judge asked for restraint. The market is not impressed.
Which brings us to a pattern worth understanding before we get to the orbital data centers.
Here's a brief intermission, because the trial is genuinely novel in one specific way: it's the first time someone has been able to put these people under oath and ask yes or no questions. At least one of them is still working on the concept. And it turns out the gap between the public persona and the sworn testimony is considerable.
This should surprise nobody who has followed the Full Self Driving saga.
Full Self Driving has been "one to two years away" since approximately 2016. Let's do a quick greatest hits:
2016 — Tesla announces all new vehicles have full self driving hardware. Musk tells the press a Tesla will drive itself from Los Angeles to New York City by end of 2017. "I feel pretty good about this goal," he said.
2017 — End of year comes and goes.
2018 — On a shareholder call, Musk promises a cross-country fully autonomous drive within three to six months.
2019 — "Feature complete" full self driving by end of year. One million robotaxis by end of year.
2020 — Still not feature complete. Still no robotaxis.
2021 — FSD beta released to "careful" drivers with approximately forty pages of caveats about how it's not actually self driving.
2022 — Truly autonomous FSD this year definitely.
2023 — Musk dubbed himself "the boy who cried FSD." Cybercab robotaxi announcement coming. Soon.
2024 — Cybercab actually announced. Coming 2026. Tesla's own lawyers argued in court that "self-driving" was merely "aspirational."
2025 — Limited robotaxi service launches in Austin. With safety drivers. In a car sold on the premise of not needing one.
People paid up to $15,000 upfront for Full Self Driving based on timelines that moved like suggestions for nearly a decade. A class action lawsuit now represents 3,000 plaintiffs in California alone. One owner paid $8,000 for the feature in 2017. Still waiting.
This is not a footnote. It's the pattern. Announce the audacious thing. Generate the headlines and the pre-orders and the stock price movement. Move the goalposts. Announce the thing again. The accountability never quite arrives because the next vision is already being announced.
Mars by 2024. Full self driving by 2017. Orbital data centers. Same structure. Different product. The FSD saga alone deserves its own deep dive — and that road is long, winding, and apparently never quite arrives at the destination. We'll get there eventually. Unlike the robotaxi.
Now here's where it gets interesting.
Because the trial isn't actually about OpenAI's soul. It's about an IPO.
OpenAI is heading toward what could be the largest tech public offering in years. Whoever controls the narrative around that moment controls the valuation. Musk winning doesn't restore anything — rolling back the for-profit structure right before the IPO craters the company he's directly competing with, while xAI sails into the same investor pool uncontested.
It's the same move he ran on advertisers. Tell them to go f*** themselves publicly. Sue them privately. The principle is the press release. The money is the point.
Speaking of money — let's look at what xAI actually is right now.
In March 2025, xAI acquired X, the social network formerly known as Twitter, the one he bought for $44 billion and renamed. Then in February 2026, SpaceX acquired xAI — including X — in what became the largest merger of all time, valued at $1.25 trillion. The official reason was "orbital data centers." The actual reason, as every analyst immediately noted: SpaceX had cash that xAI desperately needed. xAI generated approximately $250 million in revenue over six months while losing $2.5 billion. If you want to understand why that burn rate isn't unusual in this industry and why it's still terrifying, the math on how running an AI company actually works is its own story.
When SpaceX IPOs later this year — reportedly timed to Musk's birthday and a planetary alignment, which is exactly as unhinged as it sounds — investors who want in must buy Twitter, xAI, and SpaceX all at once. There is no unbundling. The man suing over OpenAI's nonprofit structure built a $1.25 trillion corporate matryoshka doll that he controls at every layer and is now taking public.
The nonprofit mission is the press release. The IPO is the point.
The orbital data center pitch deserves its own moment.
Even if you hand him free power — better solar panels, a reactor, whatever — the costs that don't go away are the ones that make the math permanently broken.
Launch mass alone: even at Starship's most optimistic future pricing, which doesn't exist yet at scale, equivalent data center capacity runs hundreds of millions just to get the hardware off the ground. Before you plug anything in.

Space-hardened GPUs run anywhere from 100x to 1000x the cost of terrestrial equivalents — and that's assuming Nvidia or someone redesigns entire product lines from the fab process up, which is a years-long undertaking, not a firmware update. Then the CPUs need the same treatment. SSDs too — NAND flash is actually more vulnerable to radiation than DRAM in some failure modes. Standard RAM experiences constant bit flips from cosmic rays, so you need radiation-hardened ECC memory with custom controllers, which means the GPU's memory interface needs redesigning as well.
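To put rough numbers on that, here's a back-of-the-envelope sketch. Every constant in it is an illustrative assumption: an aspirational Starship price per kilogram that doesn't exist yet, a guessed hardware mass per megawatt, a hardening multiplier taken from the low end of the range above. Treat it as a shape-of-the-problem calculation, not a quote from any filing.

```python
# Back-of-the-envelope launch cost for orbital compute.
# Every constant below is an illustrative assumption, not a sourced figure.

COST_PER_KG_USD = 200          # aspirational Starship pricing; real costs
                               # today run an order of magnitude higher
HARDWARE_KG_PER_MW = 30_000    # rough guess: racks, power, radiators,
                               # structure per megawatt of IT load
CLUSTER_MW = 100               # a modest terrestrial AI training cluster

launch_mass_kg = HARDWARE_KG_PER_MW * CLUSTER_MW
launch_cost_usd = launch_mass_kg * COST_PER_KG_USD

print(f"Mass to orbit: {launch_mass_kg / 1000:,.0f} tonnes")
print(f"Launch cost:   ${launch_cost_usd / 1e6:,.0f}M, before buying a single GPU")

# Now apply the space-hardening multiplier on the hardware itself.
TERRESTRIAL_COST_PER_MW_USD = 15e6   # assumed: GPUs, networking, power gear
HARDENING_MULTIPLIER = 100           # low end of the 100x-1000x range above

hardened_usd = CLUSTER_MW * TERRESTRIAL_COST_PER_MW_USD * HARDENING_MULTIPLIER
ground_usd = CLUSTER_MW * TERRESTRIAL_COST_PER_MW_USD
print(f"Hardened hardware: ${hardened_usd / 1e9:,.1f}B "
      f"vs ${ground_usd / 1e9:,.1f}B on the ground")
```

Swap in any numbers you like; the conclusion doesn't move much. The launch bill lands in the hundreds of millions before the first chip powers on, and the hardening multiplier turns a billion-dollar cluster into a hundred-billion-dollar one.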
If you've been watching RAMmageddon already playing out from terrestrial AI demand alone — DRAM prices up 90% in a single quarter, part of what pushed the PS6's launch timeline — space hardening requirements piled on top would make that look like a rounding error. The cost cascades immediately through every layer of the stack.
Then cooling. In space there's no convection. You radiate heat through panels. The ISS — the largest radiator array currently in space — dissipates about 70 kilowatts. A meaningful AI training cluster needs megawatts, which already means more than a dozen ISS-class arrays; at the gigawatt scale this pitch implies, the radiator surface area required runs thousands of times larger than anything ever built and launched. Under sustained AI training load the thermal constraints directly hit clock speeds. You don't get the same performance numbers. You get worse ones. From hardware that cost 500x more to build.
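The radiator math is basic physics: in vacuum, the Stefan-Boltzmann law is the only way to shed heat. Here's a minimal sizing sketch; the temperature, the emissivity, and the choice to ignore absorbed sunlight are all assumptions tilted in the orbital pitch's favor.

```python
# Radiator sizing in vacuum via the Stefan-Boltzmann law: P = eps * sigma * A * T^4.
# Assumptions (all generous to the orbital case): 300 K radiator temperature,
# emissivity 0.90, both panel faces radiating, zero absorbed solar/Earth heat.

SIGMA = 5.670e-8    # Stefan-Boltzmann constant, W / (m^2 K^4)
EPS = 0.90          # assumed panel emissivity
T_RADIATOR_K = 300  # assumed temperature; cooler electronics need more area

flux_w_per_m2 = 2 * EPS * SIGMA * T_RADIATOR_K**4  # both faces radiating

def radiator_area_m2(power_w: float) -> float:
    """Panel area needed to reject `power_w` of waste heat."""
    return power_w / flux_w_per_m2

ISS_REJECTION_W = 70e3  # the ISS array figure cited above

for load_w, label in [(1e6, "1 MW cluster"), (1e9, "1 GW ambition")]:
    print(f"{label}: {radiator_area_m2(load_w):,.0f} m^2 of radiator, "
          f"~{load_w / ISS_REJECTION_W:,.0f}x the ISS array's heat rejection")
```

Run it and a single megawatt needs roughly 1,200 square meters of ideal double-sided panel, about fourteen ISS arrays' worth of rejection; a gigawatt needs over fourteen thousand. And real panels absorb sunlight, degrade, and run hotter than you'd like, so these are the floor, not the estimate.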
Here's where the pitch gets truly creative. SpaceX filed an FCC application to launch up to one million satellites as orbital data centers. No crew. No maintenance. The actual stated plan is what experts call a "fly till you die scenario" — launch them, run them until they fail, launch replacements. GPU failure rates on Earth run around 9% a year. In a radiation environment that number goes up considerably. On a million satellites that's 90,000 dead units a year needing replacement launches, each adding to the debris field, each a collision risk for everything else in orbit including his own Starlink constellation.
There's already a name for what happens when orbital debris reaches a critical density. Kessler Syndrome — the cascading collision scenario that could make certain orbital bands permanently unusable. For anyone. Forever. Experts already say the bands where part of this constellation would live are showing early signs. The plan for a million disposable satellites doesn't mention this. The FCC application doesn't solve it. And unlike a terrestrial data center where you can walk in and fix things, the deorbit plan for 90,000 failing satellites a year requires coordinating controlled reentries through a single patch of Pacific Ocean — Point Nemo, the traditional spacecraft graveyard — that has handled maybe a few hundred objects across all of human spaceflight history.
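For scale, here's the cadence arithmetic on "fly till you die." The 9% annual failure rate is the terrestrial figure from above; the satellites-per-launch number is a pure placeholder, since no such figure has been published.

```python
# Replacement cadence for a disposable mega-constellation.
# 9% annual failure is the terrestrial GPU baseline cited above; radiation
# would push it higher. Satellites per launch is an assumed placeholder.

FLEET_SIZE = 1_000_000
ANNUAL_FAILURE_RATE = 0.09  # terrestrial baseline; optimistic in orbit
SATS_PER_LAUNCH = 50        # assumption; no published figure exists

failures_per_year = FLEET_SIZE * ANNUAL_FAILURE_RATE
replacement_launches = failures_per_year / SATS_PER_LAUNCH

print(f"Failed satellites/year:  {failures_per_year:,.0f}")
print(f"Failed satellites/day:   {failures_per_year / 365:,.0f}")
print(f"Replacement launches/yr: {replacement_launches:,.0f} "
      f"(~{replacement_launches / 365:.1f} per day, forever)")
print(f"Controlled reentries/day over Point Nemo: {failures_per_year / 365:,.0f}")
```

Even with every assumption tilted generously, that's roughly 250 satellites dying per day and about five replacement launches per day, in perpetuity, a steady-state tempo several times higher than the entire world's current launch rate. Plus the same 250 controlled reentries per day aimed at a graveyard that has absorbed a few hundred objects in its entire history.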
To be fair — if humanity eventually establishes a lunar base or waystation, some of these costs get cut considerably. The moon stays put. You land infrastructure once. Maintenance becomes possible. Partial gravity is free. But we are nowhere near that — and building one expensive station to service another expensive station is not solving the problem, it's offloading it to a second enormous problem.
The actual plan — the one that makes financial sense right now — is captive users and forced adoption. Tesla uses Grok. X users are the audience whether they asked for it or not. The banks handling the SpaceX IPO were required to buy Grok subscriptions just to get a seat at the table. And SpaceX just announced Terafab — a joint semiconductor fabrication project with Tesla and xAI to build their own chip supply chain so they don't have to buy from Nvidia either.
That's not a product. That's a toll booth on the way to the largest IPO in history with a chip fab attached.
I've spent months documenting that exact playbook in software — build something people use, make leaving expensive, monetize the captivity. Musk is running it across entire companies instead of product features.
"Orbital data centers" is the new "entertainment purposes only." It's what's in the marketing. Read the S-1 when it drops.
Meanwhile, about that Mars thing.
To be fair, something has successfully launched toward Mars. NASA's Perseverance rover landed in 2021. On an Atlas V. Built by United Launch Alliance. Which is not SpaceX.
Musk did launch something Mars-adjacent. A Tesla Roadster. Strapped to a Falcon Heavy test launch in 2018 because he needed ballast for the test and figured why not make it spectacular. There's a mannequin in a spacesuit in the driver's seat named Starman. David Bowie on the stereo. Long elliptical orbit, Mars-adjacent every few years. You can track it in real time at whereisroadster.com, which means Starman currently has more transparent public infrastructure documentation than xAI's safety practices.
Starman has been running continuously since 2018 with zero maintenance windows, no support crew, and better uptime than Grok.
No complaints logged. No regulatory probes in the EU. Nobody has generated 4.4 million inappropriate images using the Tesla Roadster. It has not once failed to recognize a safety card. It is the most reliable thing Elon Musk has ever put in orbit.
Minnesota just passed the first state ban on nudification apps in the country — 65-0 in the Senate, 132-1 in the House — explicitly citing the Grok incident as the moment these tools went mainstream. What hasn't come up in the trial yet is Grok's documented record of generating nonconsensual images of adults and explicit images of children — even as Musk spent three days portraying himself as AI's foremost safety advocate. The administration that counts Musk as an ally has announced it may challenge the Minnesota law. Starman has not been mentioned in any of this legislation.
So where does that leave Altman.
The temptation is to root for him by default because he's the one being sued. That would be a mistake, and I have the receipts to explain why.
OpenAI is burning $25 billion this year. $57 billion in 2027. $665 billion through 2030. It won't be profitable until 2030 at the earliest. The ads that Altman once called a "last resort" hit $100 million in annual recurring revenue within six weeks of launch. The CFO who closed the largest private funding round in history got frozen out of financial planning meetings for raising concerns about whether the spending commitments were supportable. She now reports to the head of applications instead of the CEO.
The same week that came out, Altman published a 13-page document about saving capitalism and a universal basic income and a 32-hour workweek pilot. The same morning that dropped, Ronan Farrow published a piece containing Ilya Sutskever's confidential memo — the first word of his list of concerns was "Deception" — and Dario Amodei's 200-page private notes titled "The problem with OpenAI is Sam himself", and Microsoft executives on record saying he "distorts, twists, renegotiates, and violates agreements."
The difference is register. Musk performs the villain. Altman performs the reluctant visionary. The outcomes for users documented in this series are comparable. The styling couldn't be more different. One requires Ronan Farrow and 18 months of reporting to surface. The other just requires cross-examination.
A courtroom with subpoena power gets there faster than a leaked source file. This week we got both men in the same room, under oath, simultaneously. The gap between public performance and documented reality was unusually visible. No ToS footnotes required. A federal judge did the excavating.
She's better at getting yes or no answers. We'll give her that.
This is the post that sets the stage for what's coming in this series.
The deep dives — Grok's deepfake monetization, the distillation admission, the burn rate math, the "entertainment purposes only" terms of service, the Claude Code source leak, the Birchall donor fund implosion — are all more technically damning. But they land harder when you understand the frame first.
These aren't competing visions of what AI should be. They're competing extraction strategies dressed in competing ideologies. Same playbook. Different costumes. One federal judge. A four day weekend. And a man with 239 million followers and no apparent impulse control.
Starman is out there somewhere between Earth and Mars. Playing David Bowie. Zero incidents.
We'll check back on the humans Monday morning.
This is part of the Big Tech's War on Users series. The deep dives are coming. Read the terms. They're more honest than the marketing.