01. Jack's Note: The Most Important Layoff Announcement In Corporate History
Last week, Jack Dorsey sent a memo to Block's 10,000 employees. By the time the markets opened, 4,000 of them were gone. Not because Cash App was struggling. Not because Square was losing merchants. Because, in Jack's words, "the intelligence tools we're creating and using, paired with smaller and flatter teams, are enabling a new way of working which fundamentally changes what it means to build and run a company."
That's not a restructuring. That's a declaration.
Ben called it immediately: the biggest job cut on a percentage basis of any S&P 500 company in history. Jack didn't fire underperformers. He fired categories. The logic being that with AI running the knowledge-work layer, you don't need the headcount you thought you did. The business is fine. The product is strong. The humans are the variable.
And the market? The market agreed. Block's stock went up 30%.
Let that land for a second. A company announced it was eliminating 40% of its workforce and the investor class celebrated. This is now the signal the market sends: cut humans, get rewarded. Keep humans, carry the cost. If that incentive structure holds, and there's no reason to think it won't, every public company CFO just got a very clear message about what the next earnings call needs to show.
CZ summed it up in one sentence that got 441K views: "Reality: learn to use AI to the max, or be laid off." One sentence. The whole conversation.
Luca's friend who runs 15 businesses solo said the same thing to his remaining staff: "You won't lose your job to AI. You'll lose your job to the people who use AI." That's not a threat. It's an accurate description of what's happening.
02. Ghost GDP vs Abundance GDP: The Citrini Thesis and Its Counterargument
The piece that broke Twitter this week was the Citrini thread. Written as a dispatch from June 2028, it maps the sequence: AI exceeds expectations, white collar productivity gets automated, high-paying jobs disappear, mortgage defaults follow, prime real estate craters. The S&P is down 38%. Unemployment is at 10.2%. A product manager who made $180K is now replaceable by a Claude agent at $200 a month.
Ben made it concrete: Austin, San Francisco, New York, Chicago. These are expensive cities because of tech and healthcare salaries. Remove the salaries, remove the residents. Nobody who just lost their $200K PM role is renewing a Manhattan lease. They're moving to Montana. Or rural Italy. Ben's take: you're watching the early stages of a real estate repricing that makes 2008 look thematically familiar. Different trigger, same mechanics. People default when income disappears.
The bull case, articulated by the Kobeissi Letter, is simple: the doom loop has one fatal flaw. It assumes demand is fixed. History says it never is. Every time technology made something cheaper, demand didn't shrink; it exploded. PC prices have fallen 99.9% since 1980. There are more PCs now than anyone predicted. The same applies here. If AI makes intelligence cheap, what becomes possible? What new categories of demand get created?
"I think that's what makes me more optimistic," Ben said. "There are going to be new jobs, new responsibilities for humans, that we're just not seeing yet. We can't see what they are from here. Nobody could see what a social media manager was from 1995."
Luca added the counterargument nobody talks about: what about countries without AI companies? The US captures the upside. But if all the winning AI firms are on the American stock exchange, and UBI has to be funded by taxing them, the global distribution problem is enormous. You need capitalism to fund the socialist response to capitalism's disruption. That's the actual tension.
03. The 12-18 Month Window. What Do You Build Before It Closes?
Mustafa Suleyman, CEO of Microsoft AI, said it on the record: most, if not all, white collar tasks can be replaced by AI within 12 to 18 months. Not might. Will. And not in some abstract future. This year. Maybe early next year.
That's a deadline, not a prediction.
The framing that landed hardest: the people most at risk of losing their jobs to AI are the ones too busy with their jobs to learn AI. You're heads-down executing. You don't have time to experiment. You're the most employable person in your office. And that focus, that reliability, might be exactly what gets you automated first. The experimenters, the builders, the people playing with OpenClaw at midnight: they're building the systems that will run without you.
"We," Luca said, gesturing at the three of them, "might actually be the reason those people get fired. We're building the systems that replace their roles."
No one in the room disagreed.
Ben connected it to his W2 friends. Smart people, good jobs, comfortable income. Mortgage. Car. The works. His message to them: the trade-off is becoming very clear. You're making money now. But you are not hedging. The window is 12 to 18 months. What you ship in that window matters more than what you've shipped in the last five years.
04. AI Doesn't Touch The Wrench
The plumber is back. And he's going to keep coming back.
Old plumbing business model: 10 plumbers, 3 dispatchers, 2 customer service reps, 1 marketer, 2 admin staff. New model: 10 plumbers, one OpenClaw, maybe one person to manage it.
The 8 back-office roles? Gone. Not because the plumbers did anything wrong. Because booking, dispatching, follow-ups, invoicing, and customer communication can all run on AI now.
But the wrench still needs a human hand. And that matters.
Ben ran the salary history live. 2000s: average plumber earned $40K, considered a fallback profession. 2010: $60K, up 50% in a decade. Post-COVID: up to $250K in some markets. The shortage of physical skilled labor has been compounding for 20 years. And AI doesn't change that. Tesla's Optimus might one day hold a ladder or pass a pipe fitting, but full plumbing replacement? Ben called it 10-20 years away minimum. The nuance is too high. The environments are too variable.
Citrini's finding, which Ben flagged: for blue collar trades, AI complements rather than replaces. An AI-augmented plumber can book three jobs on a day where he used to book one. The physical output ceiling goes up. The administrative floor disappears.
The knowledge workers got there first. Then the lawyers and the accountants. Then the managers. The trades survive the longest, and may end up being the most valuable category of work in a world where everything else is automated.
Luca put it cleanly: find something robots can't do. Right now, that's mostly physical.
05. Perplexity Computer and the Browser Agent War
Perplexity launched Perplexity Computer this week. An agentic system that operates from your browser, browses the web, runs research, generates outputs, deploys code. The demo that circulated: someone vibe-coded a Bloomberg terminal equivalent in a few prompts through it. Bloomberg charges $30K a year. This was free.
Rik's read: it's an OpenClaw in your browser. The pattern is clear. Everyone is building toward the same thing. Manus. Perplexity Computer. Dia from Arc. ChatGPT's Atlas. The agentic browser race is on.
Ben has a specific objection to the browser-based approach. He tried it. The experience: "watching the mouse move like a monkey. You're just sitting there watching it click things, and you're thinking: I could just do that myself." The problem with showing users the process is that it invites them to interrupt it. An agent that runs asynchronously, sends you a summary, and asks for approval at key decision points is more useful than one that plays the browser on your screen like a slot machine.
OpenClaw's design philosophy is closer to what Ben actually wants. Hidden execution. Surface the results. Trust the process.
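That asynchronous pattern is easy to sketch. The following is a hypothetical illustration of the "hidden execution, surface the results" flow Ben prefers, not any real agent framework: the step names, the approval callback, and the gating rule are all assumptions made for the example.

```python
# Sketch: an agent that runs its steps silently and only pauses for human
# approval at explicitly marked decision points. Everything here is a
# stand-in; a real agent would call tools/APIs instead of lambdas.
from dataclasses import dataclass, field

@dataclass
class AgentRun:
    steps: list                       # list of (name, fn, needs_approval)
    log: list = field(default_factory=list)

    def run(self, approve):
        """Execute steps in order; call approve(summary) only at gates."""
        for name, fn, needs_approval in self.steps:
            if needs_approval and not approve(f"About to run: {name}"):
                self.log.append((name, "skipped"))   # human declined
                continue
            self.log.append((name, fn()))            # silent execution
        return self.log

# Hypothetical run: research happens unattended, the purchase is gated.
run = AgentRun(steps=[
    ("gather_sources", lambda: "12 sources found", False),
    ("draft_summary",  lambda: "summary drafted", False),
    ("book_flight",    lambda: "flight booked", True),   # approval gate
])
result = run.run(approve=lambda summary: False)  # human says no
```

The user never watches a mouse move; they see the log and the one question that actually needed them.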
The Perplexity advantage in this space: their research accuracy. Alongside Grok, it's the model you use when you need things to be correct. If you're running an agentic system that browses and synthesizes, not hallucinating is the most important feature. Perplexity has a meaningful head start there.
Ben's prediction: the infrastructure underneath these agents (handling logins, navigating anti-bot protections, making the web AI-readable) is years away from being solved cleanly. Getting 90% of the way there is not good enough for high-stakes tasks. But for research and synthesis? Already useful enough.
06. Book Smart vs Street Smart: How To Actually Think About AI Models
Luca coined it on air: some AI models are book smart. They ace every benchmark. Top marks on every leaderboard. But when you try to actually build with them? They frustrate you.
Street smart models don't always win the tests. They get the job done.
Ben confirmed it without hesitation: Sonnet 4.5 is the most street smart model for most tasks. Grok and Gemini are famous for bench maxing. Optimized for tests, not for real-world utility.
Luca shared his experience running Kimi and Minimax on his OpenClaw setup. Productive for five or six interactions. By the seventh, it's hallucinating, making things up, contradicting itself. He'd told it explicitly in the system prompt: do not invent information. Two messages later, it invented information. Compare that to producer Nick, running on Claude. The difference is categorical, not incremental.
The underlying reason: orchestration is the multiplier, but the model is the foundation. A smart orchestration layer on a weak model doesn't get you to Claude-quality outputs. It gets you Claude-quality structure with Kimi-quality judgment. And judgment is the hard part.
On Qwen 3.5, the new Chinese open-source model that runs locally on a 32GB MacBook and benchmarks against Sonnet: Ben's not switching. Speed matters more than cost at his current usage level. He tried running Ollama locally on a new MacBook Pro and found it too slow for his workflow. Data centers are optimized for this. Your desk is not.
But the compression curve is real. What frontier looks like today is what cheap looks like in six months. Ben's recursive approach for his own products: run Opus 4.6 continuously to improve the Llama 3.3 prompts that serve his users in real-time. The state-of-the-art model training the cheaper model. In six months, Qwen will do what Sonnet does today.
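The recursive setup Ben describes can be sketched as a simple loop: serve users on the cheap model, then periodically hand the transcripts to the frontier model so it can rewrite the cheap model's prompt. Both `call_*` functions below are hypothetical stubs standing in for real API calls (an Opus-class and a Llama-class endpoint); only the loop structure is the point.

```python
# Sketch: a frontier model improving the system prompt a cheaper model
# serves with. The model calls are stubbed; a real version would hit
# two different inference APIs.

def call_cheap_model(prompt, user_msg):
    # Stand-in for the cheap serving model answering a user.
    return f"[reply using prompt v{prompt['version']}] {user_msg}"

def call_frontier_model(prompt, transcripts):
    # Stand-in for the expensive model: returns a refined prompt.
    return {"version": prompt["version"] + 1,
            "text": prompt["text"] + f" (refined on {len(transcripts)} chats)"}

def improvement_cycle(prompt, user_messages):
    """Serve traffic on the cheap model, then let the frontier model refine."""
    transcripts = [call_cheap_model(prompt, m) for m in user_messages]
    return call_frontier_model(prompt, transcripts), transcripts

prompt = {"version": 1, "text": "You are a helpful assistant."}
prompt, transcripts = improvement_cycle(prompt, ["hi", "help me book"])
```

Run the cycle on a schedule and the expensive model's judgment keeps compounding into the cheap model's prompt, which is the whole bet behind the compression curve.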
07. Nano Banana 2 and the Year Photography Becomes Optional
Google dropped Nano Banana 2 built on Gemini Flash this week. Pro-level image generation at Flash speed, cheaper than V1. Rik has been using it for two days. Outputs are strong, particularly when given a reference image to anchor to.
Luca advises a company in fashion photography. His read: 2026 is the year AI image quality becomes indistinguishable from a professional shoot. People's likenesses across multiple images are still a challenge. But the gap is closing fast enough that the industry has to take it seriously.
The broader pattern: Google has been on a generational run. Two years ago the consensus was that Google was losing the AI race. They created the Transformer architecture and apparently forgot to use it. Over the past 12 months they've launched Nano Banana, Veo 3, Gemini, and a workspace integration layer that's pulling users away from ChatGPT. Rik switched. Others are switching.
Ben's critique of Nano Banana from personal experience: excellent with a reference, mediocre without one. He tried generating a God Mode Pod promotional image from scratch and got results that looked like 2023-era AI output. Context dependency is still the limitation. Good input, good output. That's not new. That's every AI system ever.
The prompting insight: Rik doesn't go straight to the image model with a raw text prompt. He discusses the concept in Claude or ChatGPT, builds up the context, then says: generate me an image prompt for this. He feeds that prompt into Nano Banana. Two AI calls, one high-quality image. "Using AI to write your prompts. It's 2026. Come on."
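Rik's two-call pattern looks like this in code. Both `ask_*` functions are hypothetical stand-ins for real API calls (a Claude/ChatGPT-style text model and Nano Banana respectively); the shape of the pipeline is what's being illustrated, not a real SDK.

```python
# Sketch: use a text model to turn a rough concept plus built-up context
# into a polished image prompt, then hand that prompt to the image model.
# Two AI calls, one image. All functions below are illustrative stubs.

def ask_text_model(concept, context):
    # Stand-in: a real call would send the chat context and ask
    # "generate me an image prompt for this".
    return f"photorealistic render of {concept}, {', '.join(context)}"

def ask_image_model(image_prompt, reference_image=None):
    # Stand-in: a real call would return image bytes. Anchoring on a
    # reference image is where Nano Banana reportedly performs best.
    anchor = " (anchored to reference)" if reference_image else ""
    return f"<image: {image_prompt}{anchor}>"

context = ["warm studio lighting", "85mm lens", "editorial style"]
image_prompt = ask_text_model("a podcast promo poster", context)
image = ask_image_model(image_prompt, reference_image="last_week.png")
```

The design choice is that the text model is better at writing dense, specific prompts than you are, and the image model is better when it gets one.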
08. Ben's Operating System: 10 AI Employees, One Telegram Bot
The closer of every God Mode Pod episode is Ben's setup. This week he screen-shared it.
Ten agents. All under one Telegram bot. All running on a single VPS. Different groups, different context windows, different personas. Essentially: a company org chart you run from your phone.
Sam – Chief of Staff, running on Opus 4.6. The most capable, most expensive model in the roster. Sam reads every conversation after the fact so she knows what's happening across the whole operation. When Ben has a new project, he tells Sam. Sam delegates to the right person. Over time, Ben talks to individuals directly, but Sam stays in the loop.
Devin – Developer. Sonnet 4.6. Builds and maintains the vibe-coded products.
Frank – Finance. Tracks revenue, costs, margins.
Miles – Marketing. Connected to Google Search Console and Google Ads.
Marco – Paid Ads. Connected to Meta. Named after Zuckerberg.
Mia – Social Media.
Jude – Content.
Karish – Customer Support.
The token efficiency logic: if every agent reads every conversation, you burn context on noise. Give each agent only what's relevant to their function. Just like a real company, where the developer doesn't sit in the marketing all-hands.
The UGC layer: Ben built a vibe-coded dashboard to track all his UGC creators by company. Click in and see impressions, CPM, total spend. All of it assembled by OpenClaw from scratch.
Rik's question: are they all under the same bot token? Yes. That was the revelation. One BotFather registration, many group chats, many agents. Rik had tried to run two agents under one token and broken his setup. Ben figured out what Rik hadn't.
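The one-token, many-agents layout reduces to a routing table: a single bot receives every update, and the group chat it arrived in determines which agent (persona, model, scoped context) handles it. The chat IDs, model names, and dispatch function below are illustrative assumptions, not Ben's actual configuration.

```python
# Sketch: route Telegram updates from one bot token to per-chat agents.
# One bot, many group chats; each chat maps to its own persona and model,
# so each agent only ever sees its own chat's context.

AGENTS = {
    -1001: {"name": "Sam",   "role": "chief_of_staff", "model": "opus"},
    -1002: {"name": "Devin", "role": "developer",      "model": "sonnet"},
    -1003: {"name": "Frank", "role": "finance",        "model": "sonnet"},
}

def route_update(update):
    """Pick the agent for the chat this message arrived in."""
    agent = AGENTS.get(update["chat_id"])
    if agent is None:
        return {"agent": None, "reply": "No agent registered for this chat."}
    # Context stays scoped: only this chat's history would be loaded here.
    return {"agent": agent["name"], "model": agent["model"],
            "reply": f"{agent['name']} handling: {update['text']}"}

msg = route_update({"chat_id": -1002, "text": "fix the checkout bug"})
```

This is also where the token-efficiency logic lives: because routing is per chat, no agent ever burns context on another department's noise.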
The MRR number: Ben's revenue more than doubled since he started using OpenClaw this way. That's not a coincidence. That's what happens when your tools work for you while you sleep.
The Weekly Scorecard
HYPE OR REAL?
MOST BUILDABLE OPPORTUNITY
Run an AI-managed plumbing or trade business. 10 skilled workers, one OpenClaw handling booking, dispatch, follow-up, invoicing, and marketing. The trades shortage means the humans are already getting paid. The back office is pure cost.
WHO GETS DISRUPTED
· Product managers ($180K roles → Claude agent at $200/month)
· Atlassian, Salesforce, legacy CRM tools (-70% while revenue grows)
· Figma (Claude's design capabilities improving quarterly)
· Traditional fashion and product photographers
· Dispatchers, admin staff, customer service at any trade business
BOLD PREDICTION
In 6 months, the Qwen-tier models will benchmark at Sonnet 4.5 level, because the exact recursive training process Ben described for his own products is happening at 10x speed at Chinese AI labs. Cost of intelligence drops another 80%. (Ben)
Want this stack for your own podcast?
Research briefs, live dockets, clip angles, auto articles. Built by the hosts. Opening soon.
Join the Waitlist →
God Mode Pod Newsletter
Get the weekly breakdown in your inbox.
Every episode distilled into one sharp read. AI disruption, builder takes, and the stories that actually matter. No hype. No tourists.
Subscribe on Substack →