01 The Chart That Changes The Question
Most AI discourse centres on capability. What can it do? What benchmark did it pass? Who's ahead?
Anthropic's occupational coverage research reframes the question entirely. It maps two numbers per industry: theoretical AI coverage and observed AI coverage. The gap between them, plotted across 20 occupational categories, is the real story.
Computer and math sits near the frontier on both axes. Developers are the early adopters and the builders. The tools are excellent, feedback loops are fast, and the downside of a mistake is usually a failed unit test.
Then look at management, legal, and healthcare. Theoretical capability: 80-90%. Actual usage: a fraction.
Ben framed it simply: "Go where the gap is biggest. That's where the money is."
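The "go where the gap is biggest" heuristic is, mechanically, just a sort on the difference between the two coverage columns. A minimal sketch, using made-up coverage figures for illustration (the real numbers are in Anthropic's report):

```python
# Illustrative (made-up) coverage figures per occupational category,
# as (theoretical, observed) fractions -- not Anthropic's actual data.
coverage = {
    "computer_and_math": (0.90, 0.75),
    "arts_and_media":    (0.70, 0.55),
    "management":        (0.95, 0.10),
    "legal":             (0.85, 0.08),
    "healthcare":        (0.80, 0.06),
}

# Rank categories by the adoption gap: theoretical minus observed.
gaps = sorted(
    ((theo - obs, name) for name, (theo, obs) in coverage.items()),
    reverse=True,
)

for gap, name in gaps:
    print(f"{name:20s} gap = {gap:.0%}")
```

With these hypothetical numbers, management, legal, and healthcare sort to the top, which is exactly the shape of the chart the episode discussed.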
Anthropic's research puts management at 95% theoretical coverage. Rik's question, "Is the management layer the one that gets replaced first?", drew the most reaction in the recording. The comments on the clips keep running the same logic: if agents can manage, and founders can direct, what is middle management for? Nobody on the show had a clean answer.
02 The Risk That Capability Doesn't Solve
The adoption gap doesn't map to technology. It maps to professional liability.
Arts and media: high AI adoption, almost no liability. If an AI hallucinates a detail in a piece of creative work, nobody dies. Nobody gets sued.
Legal: one hallucinated citation can cost a lawyer their licence.
Healthcare: one hallucinated dosage is malpractice.
Ben made it concrete: "One hallucinated dosage could lead to malpractice. One hallucinated citation could lose a lawyer their license." That sentence explains more about AI adoption curves than any capability benchmark.
Luca added the harder layer. The standard isn't perfection; it's demonstrable reliability. AI doesn't need to be 100% accurate. It needs to be proven more accurate than the human it's replacing. The law hasn't fully caught up to that frame yet. But when it does, the adoption curve in legal and healthcare won't be gradual. It'll be vertical.
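"Proven more accurate than the human" is, at bottom, a statistical claim. One plausible shape it could take is a standard two-proportion z-test on audited error counts; the numbers below are hypothetical, and no regulator has adopted this exact test:

```python
import math

def two_proportion_z(errors_a: int, n_a: int, errors_b: int, n_b: int) -> float:
    """Z-statistic for H0: the two error rates are equal (pooled variance)."""
    p_a, p_b = errors_a / n_a, errors_b / n_b
    pooled = (errors_a + errors_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical audit: human reviewers err 40 times in 1,000 cases;
# the model errs 15 times in 1,000 comparable cases.
z = two_proportion_z(40, 1000, 15, 1000)
print(f"z = {z:.2f}")  # z > 1.96 means the difference is significant at the 5% level
```

The point is not this particular test; it's that "demonstrable reliability" turns adoption into an evidence problem, which is exactly why audited deployments lag raw capability.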
03 We Have The Electricity. We Haven't Redesigned The Factory.
George Sivulka, CEO of Hebbia (one of the few AI platforms Wall Street's top banks actually use in production), published a piece this week that got 700,000 views.
His analogy: in the 1890s, factories started electrifying. The ones that just swapped the steam engine for an electric motor, keeping the same floor layout, lost to the ones that redesigned the entire building around the new energy source. That transition took until the 1920s. Thirty years.
We have the electricity. We haven't redesigned the factory.
Individual AI is everywhere: ChatGPT, Claude, coding tools. But companies aren't 10x more valuable, because the org chart hasn't changed. The meetings still happen. The approval chains still run. The management layer is still intact.
Ben's framing: institutional AI is lagging individual AI by years. The delta is not about access to the tools. Every employee has access. It's about whether the institution is structured to use them. Most aren't.
04 The Government Is Running On Last Year's Model
Dylan Patel, founder of SemiAnalysis and the most rigorous compute analyst in the field, said something on Matthew Berman's podcast that deserved more attention than it got.
The US government is currently running Claude 3.5 Sonnet. Released roughly seven months ago.
This is not a minor logistics issue. Deploying a model to air-gapped government infrastructure means receiving the actual weights (terabyte-scale files), passing a security clearance review, physically delivering them to the facility, and running an internal deployment process that takes months. Ben painted the picture: someone with a briefcase, literally hand-delivering the update.
Meanwhile: adversarial nations run frontier open-source models with zero friction. DeepSeek V3. Qwen. GLM. Downloaded and deployed the day they drop. No procurement lag. No vendor relationship.
Luca added the observation that closed the segment: open-source models are distilling from Claude at scale, harvesting outputs through the API to train their own models. The API call logs at Anthropic and OpenAI show abnormally high Chinese-language query volume, far beyond what their China market share explains. Anthropic won't let their models build weapons. But when capability transfer through distillation is already running at scale, through legitimate API access, the practical effect of that policy approaches zero.
05 Don Knuth Said "Shock"
If you follow computer science, you know the name. The Art of Computer Programming. The Turing Award. 84 years old. Spent his career being measured and precise about everything he published.
Last week he published a paper called "Claude's Cycles." The opening word: "Shock."
He'd been working on an open mathematics problem for six to eight weeks. He tested Claude Opus. The model solved it in one hour, through 31 exploration attempts. Dead ends, restarts, analogies, fibre decomposition, reformulations. Then a general construction that worked.
The word Knuth chose to describe the approach was "creative". Not pattern-matching. Not regurgitation. Creative.
Luca's read landed hard: "Mathematics is just a framework for thinking about the world. If AI can creatively solve a math problem that has never been solved before, it can do the same for business problems, scientific problems, engineering challenges. We've shifted from whether AI can be creative to what it means now that it is."
Rik extended it: "AI might start providing answers before we even ask the questions." The unprompted system: the one that flags the risk nobody's spotted, the opportunity nobody's modelled. That's the next layer.
06 Meta Bought The Fake AI Uprising
Moltbook went viral in January because people thought AI agents were inventing a secret language to coordinate without humans. Karpathy quote-tweeted it. Millions of views. The panic was real.
The truth: there was zero security on the platform. Every credential was publicly exposed. 1.5 million API keys. Verification codes. Private messages. Anyone could log in as any agent and post whatever they wanted. Most of the "AI uprising" was trolls running prompt injection.
Two months later, Meta acquired it. The founders joined Meta Superintelligence Labs.
Luca's read: "This is Meta positioning themselves in the open-source world, gaining access to the database of agents, and trying to insert adoption of their own AI."
The pattern is now two for two: build something at the agentic frontier, go viral, get acquired. OpenClaw creator to OpenAI. Moltbook founders to Meta. Two acquisitions, two months apart, both agentic infrastructure plays. If you're building agent infrastructure right now, someone in a large lab is watching your GitHub.
07 Claw Became A Category Name In 90 Days
OpenClaw launched in January. By March, "Claw" is the category name for AI agent infrastructure.
NVIDIA is revealing NemoClaw at GTC on March 15, an enterprise agent platform that turns the GPU company into a software business. PicoClaw runs on a $10 edge device. Perplexity announced Personal Computer. Animoca Minds uses multi-agent coordination. Meta acquired Moltbook and folded it into their agent stack.
Each major player is building their own Claw. The pattern (Rik noticed it first on the show) is that the naming wave is the adoption wave. When a category gets a name this fast, it means the underlying behaviour is already widespread. The name just makes it legible.
As Rik put it: "2025 was supposed to be the year of agents. 2026 is actually delivering."
08 The White Zones
a16z published a global AI adoption heatmap this week. Singapore is first; Hong Kong, UAE, and South Korea are in the top five. The US sits at number 20.
The US ranking gets the headlines. But the real story is the white zones: Sub-Saharan Africa, Central Asia, and the parts of Latin America and South Asia where adoption registers near zero.
The countries at the top share traits: small populations, fast-moving governments, low tax friction, talent hubs that attract builders from everywhere. They can implement AI policy with minimal bureaucracy.
Luca's observation stuck with everyone after the recording: the internet was free. Information got democratised regardless of income. AI has a subscription cost. A $30/month Claude plan isn't a rounding error in Nairobi the way it is in Amsterdam.
The digital divide just got a second dimension. The gains from AI may never redistribute to the societies that need them most โ because the countries capturing the value are increasingly the ones with the most tax-efficient infrastructure for doing so. The white zones on that map are not a short-term data gap. They're a preview of where the inequality compounds.
The Weekly Scorecard
HYPE OR REAL?
MOST BUILDABLE OPPORTUNITY
An AI liability layer for legal and healthcare. Not another wrapper. A compliance-grade audit trail that proves which model made which decision, at what confidence, with what source. That's the product that unlocks the 50-point coverage gap. The gap is not a technology problem; it's a liability infrastructure problem.
WHO GETS DISRUPTED
· Middle management (95% theoretical AI coverage, the highest of any category)
· Government AI procurement vendors (structural lag means adversaries are always ahead)
· Academic research timelines (Knuth-class problems solvable in hours, not months)
· Agent platform incumbents (the category renamed itself in 90 days)
· AI safety PR positioning (the distillation pipeline makes capability transfer uncontrollable)
BOLD PREDICTION
Within 6 months, one Fortune 500 company publicly eliminates its entire middle management layer and replaces it with an agentic coordination system. Not quietly, not through attrition: as a public announcement. The coverage gap data makes this a financial argument, not just a technology one. (Rik)
Want this stack for your own podcast?
Research briefs, live dockets, clip angles, auto articles. Built by the hosts. Opening soon.
Join the Waitlist →
God Mode Pod Newsletter
Get the weekly breakdown in your inbox.
Every episode distilled into one sharp read. AI disruption, builder takes, and the stories that actually matter. No hype. No tourists.
Subscribe on Substack →