One Day of Complicated Feelings


The honest read of this lawsuit — the one nobody on either legal team will say out loud — is that Elon Musk is sulking over FOMO.

He helped build something. He wanted to be its sun. The other founders said no. He left. It became one of the most consequential technology companies in history (they would say "the") without him. He watched someone else get the credit, the valuation, and the cultural moment he thought he was building toward.

Everything since is documentation of that wound.

Tuesday, Sam Altman sat down and provided some of the most useful documentation yet.

The nonprofit mission argument dies on one question: what did Musk actually want?

Altman testified Tuesday that Musk's opening ask was 90% of OpenAI's equity. "It then softened, but it always was a majority." The man suing to restore a charitable nonprofit to its public mission wanted majority control of it from day one. You don't demand 90% of something you're giving to humanity.

Then there's the succession plan. Co-founders asked Musk what would happen to OpenAI if he died while in control. His answer: control should pass to his children. Altman testified he was not comfortable with that.

Publicly, since at least 2023, Musk has said the opposite — that passing companies to children is "a mistake" and he has other successors identified. He's floated the idea of an educational institution to hold his voting shares instead.

Sit with that for a second.

A private nonprofit foundation holds the votes that control his companies. Musk designates who runs the foundation. That could be family. Associates. Birchall. Whoever he trusts. The foundation votes his shares exactly how he would. Nobody challenges a private nonprofit the way they challenge a public company. No activist investors. No shareholder votes. No AGM where someone stands up and asks uncomfortable questions. The liability stops at the foundation's doors.

It's not passing control to his children directly. It's building a structure that his children and associates inhabit permanently — without their names on the deed.

A nonprofit wrapper around a control structure. Designed to be unaccountable.

The man suing OpenAI for putting a for-profit inside a nonprofit wrapper was designing exactly that structure for himself.

He didn't object to the architecture. He objected to not being the one inside it.

Altman pushed back.

"Mr. Musk did try to kill it," he testified — launching xAI as a competitor, trying to poach the talent, and engaging in what Altman called "business interference." He also repeatedly pushed to have Tesla absorb OpenAI, a proposal Altman said would not have aligned with the mission. Musk's departure in 2018 was, Altman testified, "a morale boost for employees who did not like his hardcore approach."

"I don't think Mr. Musk understood how to run a good research lab. He had demotivated some of our most key researchers."

Musk was, by Altman's account, "fairly mercurial" — someone who "only trusted himself to make the right decisions that were not obvious to others but which Musk believed would turn out to be correct."

The hair-raising moment Altman described was that same succession question: what happens to OpenAI if Musk dies while in control. The answer, his children, did not inspire confidence. Altman said he was not comfortable with it.

His own position was the opposite.

Altman's counter: "Part of the reason we started OpenAI is we didn't think AGI could be under the control of any one person, no matter how good their intents are."

Then Musk's lawyer, Steven Molo, got his turn.

The cross-examination opened with the 2023 text Altman sent Musk: "I'm tremendously thankful for everything you've done to help. I don't think that OpenAI would have happened without you." Molo asked if his view had changed since then. "I have changed my view on Elon significantly," Altman said.

Then: "Are you a person who will tell people things because you believe that's what they want to hear rather than saying true things?"

Altman's response: "No, I have complicated feelings about Elon."

Molo cut him off. He was not asking about feelings.

"Are you completely trustworthy?" Molo asked. "I believe so," said Altman.

"Do you always tell the truth?" "I believe I'm a truthful person."

Molo brought up the New Yorker headline: "Sam Altman May Control Our Future — Can He Be Trusted?" He raised concerns from Loopt employees. From Dario Amodei. From former board members. From Ilya Sutskever, who spent a year documenting what he called a "consistent pattern of lying" before voting to remove him.

Altman said he did not want to speak for other people's claims.

He also stumbled on his own cap table. Asked how SoftBank's $30 billion investment compared with Microsoft's $13 billion: "About 2.5 times larger. I tripped up on that."

The CEO of a company valued at nearly a trillion dollars. Tripped up on his own fundraising numbers. Under oath.

To be clear: Altman isn't clean in this either. The series has documented that at length. Murati said "No" under oath about a safety clearance claim. The CFO went to the Journal saying the company wasn't ready for public reporting standards, with two sets of revenue numbers for different audiences. Brockman's journals ended up on a courtroom screen. Undisclosed Cerebras stakes confirmed by both him and Altman. The "last resort" ads that hit $100 million ARR six weeks after he called them a last resort.

Neither of these men emerged from three weeks of testimony as someone you'd hand the future of humanity to without a second thought.

But Musk's case rests on the argument that he sued over principle. Tuesday's testimony made that harder to sustain. The 90% equity ask. The children succession plan. The nonprofit wrapper he was designing for himself. The eight years of silence followed by a lawsuit filed conveniently after xAI launched and ChatGPT kept winning.

Musk was right that Altman isn't clean. But that isn't why he sued. He's sulking because it took off without him anyway.

While Altman was on the stand, news broke that Anthropic is in early talks to raise at least $30 billion at a $900 billion-plus valuation, potentially closing by end of May. That would put Anthropic above OpenAI's $852 billion valuation.

The safety-focused, responsible alternative just lapped the company sitting in federal court. Amazon's $4 billion Anthropic stake is looking like the investment of the decade. Meanwhile Grok's monthly downloads fell from 20 million in January to 8.3 million in April. Paid subscribers are 0.174% of users, versus 6% for ChatGPT.

The sun he wanted to be. Someone else built it. Two someone elses, actually.

Closing arguments are Thursday. A verdict from the advisory jury and Judge Gonzalez Rogers is possible next week.

Whatever the jury decides, it doesn't clean up the CFO situation, the two sets of numbers, the undisclosed Cerebras stakes, or the SoftBank $40 billion unsecured loan. It doesn't fix Grok's download numbers. It doesn't put Musk inside the structure he wanted to build.

It just decides whether the wound was also a legal claim.

The judge has been in that room for three weeks. She's heard everything. She gets the last word.

Part of the ongoing TheranasAI series, a sub-series of Big Tech's War on Users.

Read the terms. They're more honest than the marketing.