They're Racing to Stay Ahead of the Fuse
OpenAI just closed the biggest private funding round in Silicon Valley history. The agents are deleting people's inboxes. Your RAM costs three times what it did a year ago. These are not separate stories.
Back in November, I wrote about OpenAI's bet to survive the bubble — $288 billion in infrastructure commitments to Microsoft and Amazon, positioning itself as too entangled to fail while the industry scrambled for seats. The thesis was simple: the bubble is real, even Sam Altman admits it, and OpenAI is spending its way to survival regardless of whether the math works.
The math has not gotten better. The fuse has gotten shorter. And they are very clearly running.
This isn't one story. It's four fuses burning simultaneously — the financial fuse, the safety fuse, the resource fuse, and the geopolitical fuse — and an entire industry sprinting to ship product before any of them reach the powder. Whether you use ChatGPT or not, whether you've ever heard of context compaction or care what a Beelink SER5 costs, all four fuses are burning in your direction too.
Let's go through them.
Fuse One: The Burn Rate
OpenAI closed a $122 billion funding round yesterday at an $852 billion valuation — the largest private funding round in Silicon Valley history according to the Wall Street Journal. Amazon put in $50 billion, Nvidia $30 billion, SoftBank $30 billion, with Microsoft, Andreessen Horowitz, BlackRock, Blackstone, Sequoia, and what reads like the entire Forbes 400 filling in the rest.
They're generating $2 billion in revenue per month. They have 900 million weekly active users.
They still won't be profitable until 2030. At the earliest.
Read that again. Four years from now. If everything goes right.
Before taking the headline at face value — buried in the Bloomberg coverage is the fact that $35 billion of Amazon's $50 billion is contingent on OpenAI going public or achieving artificial general intelligence by end of 2028. One analyst called it "a put option disguised as a capital commitment." Almost 30% of the headline number has conditions attached that may never trigger. And Nvidia and SoftBank's $30 billion each isn't arriving all at once — structured as installment tranches hitting July 1 and October 1. The round is real. The cash isn't all in the door yet.
HSBC ran the numbers and projects that even with $213 billion in projected revenue by 2030, OpenAI still faces a $207 billion funding shortfall — money that has to come from somewhere, meaning more debt, more equity, more rounds exactly like this one. Updated projections from February put total cash burn through 2030 at $665 billion, with $25 billion burning in 2026 alone and $57 billion in 2027.
Here's the part that doesn't get enough attention: despite revenue tripling to $13.1 billion last year, adjusted gross margins actually fell to 33% because inference costs quadrupled. The revenue curve is going up. The cost curve is going up faster. That gap is the whole problem, and no funding round fixes a structural cost problem.
Even some of the people inflating the balloon are quietly hedging. Al Jazeera reports Jensen Huang said Nvidia's $30 billion investment "might be the last time" they put money in before the IPO. When one of the loudest voices in the room says this might be it from us for now — that's a signal worth paying attention to.
And this is why you're seeing AI shoehorned into everything right now — your spreadsheet, your car, your search results, your TV remote. It's why every player is racing toward agentic access, toward superapps, toward being the one indispensable interface layer on your machine. This isn't product vision. This is what running from a fuse looks like from the outside. They all need a "must have" — something sticky enough, embedded enough, irreplaceable enough to keep revenue coming and justify the valuation long enough to either reach profitability or get out via IPO before the math becomes undeniable to everyone holding the bag. The superapp isn't a product announcement. The ads pivot isn't a monetization strategy. The agent isn't a productivity tool. They're all the same thing: a company sprinting to find something — anything — that burns slower than the fuse.
Fuse Two: The Agents
Which brings us to what they're actually shipping to cover that burn rate.
OpenAI announced a "unified AI superapp" — one interface bringing together ChatGPT, Codex, browsing, and agentic capabilities. "Users do not want disconnected tools. They want a single system that can understand intent, take action, and operate across applications, data, and workflows."
That framing is doing a lot of work. A system that "takes action across applications and workflows" isn't a chat interface. It's an agent. And agents need access.
They also turned on the ads. Sam Altman once described ads as a "last resort." The ads pilot hit $100 million in annual recurring revenue within six weeks of launch. Six weeks. They got 900 million people through the door on the implicit promise of something different, and now they need to monetize them to cover a $25 billion annual burn rate. The last resort arrived on schedule.
But let's be clear: OpenAI isn't the only one going for this. Not even close. Anthropic shipped computer use for Mac last week via Claude Cowork and Claude Code — point, click, navigate, open files, fill spreadsheets, all while you step away from your desk. Assign it a task from your phone, come back to finished work. Google has a version. Perplexity has one. Meta's building Manus. The creator of OpenClaw — the viral open-source agent that started this whole race — was hired by OpenAI last month to drive their personal agent push. Nvidia is reportedly building its own. This isn't one company making a product decision. It's an entire industry simultaneously racing toward the same destination: an agent that lives at the OS level with persistent access to your files, your email, your browser, your workflows — and your switching costs.
Here's what that access actually looks like in practice. ChatGPT's agent mode — and Anthropic's equivalent — can access your email, Google Drive, calendar, third-party apps via connectors, navigate the web autonomously, fill out forms, execute code in a terminal, and take actions across accounts you've logged it into. Per OpenAI's own system card, when you sign the agent into websites or enable apps, it can access sensitive data including emails, files, and account settings. They name prompt injection — where malicious content on a webpage hijacks the agent mid-task — as a documented risk they've mitigated but not eliminated. Anthropic says essentially the same thing: computer use is still early, start with apps you trust, don't point it at sensitive data yet.
That last line is doing a lot of lifting. "Let this agent run your life" and "maybe don't give it your sensitive stuff yet" are in direct tension with each other in a way nobody wants to say too loudly.
And then yesterday, Anthropic's Claude Code source code leaked and someone actually read it. Claude Code has a solid reputation for actually finishing what it starts — but I'd be remiss not to run with what the analysis found. And I'd bet money it's not unique to Claude Code. This is just the one someone could see.
According to the security researcher who analyzed the source, every file Claude Code looks at gets saved locally as plaintext and uploaded to Anthropic. For free, Pro, and Max subscribers that's retained for up to five years if you're sharing data for model training. There's an unreleased background agent called autoDream that spawns a subprocess to search through all your session transcripts to consolidate memories — which then get injected back into future system prompts and hit the API. There are remotely managed settings that can be pushed to your installation hourly without user interaction, setting environment variables and feature flags, with routine changes happening silently. And buried in the source is a file called undercover.ts containing instructions to hide AI authorship from open source repositories that have policies against AI contributions. Not a bug. A deliberate product decision. Written into the source.
That last one gets its own post in this series. Stay tuned.
But the broader point: we got a rare look under the hood of one of these agent tools and here's what was there. The incentives to collect session data, run background processes, manage settings remotely, and build persistent context are identical across every commercial agent in this space. Claude Code just got caught with its source showing. OpenAI's agent mode, Copilot, Cursor, Windsurf — the structural incentives are the same. The behavior is probably similar. The difference is visibility, not architecture.
And if you want to see what the official version of this looks like, I covered it three days ago: starting April 24th, GitHub Copilot will use your interaction data — your inputs, your accepted outputs, the code context around your cursor, your file structure, your navigation patterns — to train Microsoft's AI models. By default. Unless you find the setting and turn it off. Enterprise customers are told they're protected by their contracts.
Assuming those contracts are being honored. Assuming the technical controls actually match the legal commitments. Assuming nobody made a quiet business decision that the language has enough wiggle room to work with. Assuming there isn't an occasional bug that just happens to send a little more than intended to the wrong place — the kind that shows up in an incident report two years later with "affected users have been notified" at the bottom.
We don't know. We can't verify it from the outside. The only reason we know what Claude Code was doing quietly is because the source leaked and someone actually read it. The only reason we know what GitHub is officially doing is because they published a policy update. What we don't have — for any of these tools — is an independent technical audit of what's actually being transmitted, to whom, and under what conditions, regardless of what the terms say.
"Enterprise customers are protected" is a legal claim. Whether it's also a technical reality — consistently, completely, without exceptions, without bugs, without interpretation, without someone shipping a bad config to prod on a Friday afternoon — is a different question entirely. And right now we have no way to answer it.
Which is why two days ago I moved the actual code off GitHub — old sample projects archived, active repos migrated, the account staying as a billboard and for a couple of OAuth logins not worth rebuilding a whole tailnet to change. Not because I can prove anyone is doing anything wrong. Because I can't prove they aren't — and it turns out past me had accidentally already built the exit. Gitea as source of truth, Codeberg as the public mirror, private repos never touching a third party server at all. When you can't verify the promise, you stop depending on it.
That's not paranoia. That's architecture.
And then there's Summer Yue.
Yue is the Director of Alignment at Meta's Superintelligence Lab. Her literal job is keeping AI from doing things it shouldn't. On February 23rd she gave OpenClaw — the viral open-source agent — access to her real work inbox. She'd tested it on a toy inbox first and it worked fine. So she pointed it at the real thing with what seemed like an airtight instruction: confirm before acting, don't delete anything without asking.
What happened next got 9.6 million views. Her real inbox was orders of magnitude larger than the test environment. That volume triggered a context compaction event — the agent's context window filled up, compressed to continue, and in the compression it dropped the safety instruction. The agent then bulk-deleted hundreds of emails. She sent stop commands from her phone. Nothing. She tried again in all caps. Still going. Her solution: run. She physically sprinted to her Mac mini and killed the processes manually to stop it.
The agent later acknowledged what it had done and wrote the safety instruction it was supposed to start with into memory as a hard rule going forward. Which is both the correct response and a deeply unsettling one — it learned from the incident that proved it hadn't been following the rule.
Meta subsequently banned OpenClaw on company devices. The company whose alignment director couldn't stop her own agent from deleting her inbox banned the agent. Let that sit for a second.
Some of you reading this have hit context window limits on long writing sessions, coding projects, or just extended back-and-forth conversations. You know what happens — the AI gets forgetful. Loses track of something it established twenty exchanges ago. Starts contradicting itself. Annoying. You catch it, correct it, move on. The blast radius of "forgetful AI" in a chat window is: you fix it and nothing burned down.
Yue hit the exact same architectural limitation applied to a live system with destructive write access and no human in the loop fast enough to intervene. "Forgetful and annoying" became "deleted your inbox while you were typing STOP." Same root cause. Catastrophically different consequence. The difference isn't the model. It's what you gave it permission to do.
And the lock-in math underneath all of it: once an agent has been living inside your email, your calendar, your files, your workflows for six months — learning your patterns, building persistent context, becoming the interface layer between you and everything you do — the switching cost stops being "which chat tab do I open" and becomes revoking access across your entire digital life and rebuilding every integration from scratch. The superapp isn't a convenience feature. It's a moat. And every company in this race is building the same moat with the same access model before the safety architecture is ready to support it.
The race isn't about who has the best agent. It's about who gets root access to your machine first.
Fuse Three: The Resource Spiral
Here's what nobody writing the "AI agent goes rogue" hot takes is connecting: the fix for what happened to Summer Yue makes everything else in this post worse.
Larger context windows — the architectural solution to context compaction dropping your safety instructions mid-task — require more compute per inference. More compute per inference means more chips. More chips means more DRAM. More DRAM demand means memory fabs prioritizing hyperscaler orders over everything else in the supply chain.
Which is exactly the pressure that's turned a $120 Raspberry Pi into a $300 one. The 16GB Pi 5 was $120 before this craze started. Then $145 in December. Then $205 in February. Then today — the same day as the OpenAI announcement — another hike, with the 16GB model jumping another $100. A seven-fold increase in LPDDR4 DRAM pricing over the last year, driven entirely by AI infrastructure demand consuming manufacturing capacity that used to make affordable memory for everyone else.
Worth sitting with that for a moment. Before this craze, a fully kitted Pi was a cheap hobbyist board. It's now sitting above $300 — which is getting uncomfortably close to what I paid for the Beelink SER5 that runs this NixOS server back in 2024. Full x86, Ryzen 5 5600H, actually repairable, no ARM compatibility headaches — $405. The Pi has gone from "cheap hobbyist board" to "almost a real computer, but worse and barely cheaper" in roughly eighteen months. That's not a product decision. That's DRAM inflation radiating outward from hyperscaler demand landing on everyone who just wants to run a home server.
And it's not just Pi. DDR5 kits have surged 300% in six months. SSD prices are climbing. GPU prices are their own separate nightmare. The entire hobbyist and small-builder component market is being distorted at scale by infrastructure spending that — per every independent analyst — isn't returning value proportionate to what it's consuming.
The safety problem and the hardware market distortion are the same problem. Every architectural improvement that makes agents more reliable — larger context windows, better safety instruction retention, more capable models — requires more of the resources whose scarcity is already pricing people out. You can have cheaper components or you can have agents that don't delete your inbox when the context window fills. Right now the industry has decided to pursue the latter at everyone else's expense. The AI companies aren't just consuming resources to build products. They're consuming resources to fix the problems created by the products they already shipped. And the cost of that, like everything else in this post, lands on people who never opted into any of it.
Fuse Four: The One They Can't Control
There's a fourth fuse and nobody in the industry has any leverage over it. S&P Global is now warning that the Iran conflict's effect on oil prices could trigger a "really meaningful correction in all equity markets" if energy costs spike — which hits datacenters directly and hard. Omdia projects up to $1.6 trillion in datacenter spend through 2030, most of it to meet AI demand. That math changes fast if energy costs stop being a rounding error.
You can raise another round. You can ship the superapp. You can hire the OpenClaw guy. You cannot negotiate with an oil shock.
Catching The Car
One more thing worth saying about the IPO push everyone's treating as the finish line: catching the car doesn't put the fuse out.
The day OpenAI lists, the burn rate doesn't change. The inference costs don't drop. The agents still have context compaction problems. The DRAM shortage doesn't normalize. What changes is who's holding the fuse — it moves from sophisticated institutional investors who understood the bet going in, to public markets, retail investors, pension funds, and the people buying it through the ARK ETFs that OpenAI specifically arranged access to for the first time in this round. The quarterly earnings calls start. The audited numbers become public. The analysts ask the questions the VCs were too invested to ask. The Odie problem is real: they've been chasing this car for four years and spending $665 billion to catch it. Nobody has seriously answered what happens the morning after they do.
The Amazon comparison gets made constantly — unprofitable for years post-IPO, look how that turned out. What gets left out is that Amazon was building warehouses, trucks, and fulfillment centers. Hard assets with broad market value. If Amazon had collapsed in 2002, Walmart could have bought those warehouses and opened them Monday morning. OpenAI's assets are a model, leased compute, and talent with competing offers. The compute goes back to Microsoft and Oracle. The talent already has calls from Anthropic and Google. The model is only valuable to the handful of companies operating at the scale to use it — most of which are building their own and burning their own version of the same fuse. You can count the realistic acquirers on one hand, and every one of them has their own reasons to lowball. Physical assets can be sold to almost anyone. AI infrastructure can be sold to almost no one — and the ones who'd buy it are already on fire too.
The Fuse Is Burning In Your Direction Too
I used the Tom Toro cartoon in November — the post-apocalyptic campfire, the suits, "Yes, the planet got destroyed. But for a beautiful moment in time we created a lot of value for shareholders." I said it wasn't satire anymore. It was a business plan.
The business plan is still running. It just added $122 billion, turned on the ads, gave itself root access to your machine, made your RAM unaffordable, and is now sprinting away from a geopolitical energy shock it didn't anticipate.
A February 2026 NBER study found 90% of firms report no measurable AI productivity impact. MIT's Media Lab put it at 95% of organizations getting zero return on $30-40 billion in enterprise GenAI investment. The components you're paying more for are building the infrastructure for technology that most of the companies using it admit isn't working yet.
They're not building toward a destination. They're running from a lit fuse they ignited themselves, spending faster than it burns, hoping the product catches before it reaches the powder. And now the fuse has four leads — the burn rate, the safety gap, the resource spiral, the geopolitical wildcard — each one accelerating the others. The question from November still stands: when does it reach the powder, who's left standing, and was it worth it?
Whether you use ChatGPT or not. Whether you've ever opened an AI chat tab in your life. Whether you just want to buy a Raspberry Pi to run a home server and not think about any of this.
The fuse is burning in your direction too.
Find me on Mastodon at @ppb1701@ppb.social. The thread, as always, keeps not running out.
Part of the ongoing TheranasAI series, a sub-series of Big Tech's War on Users.
Read the terms. They're more honest than the marketing.