COMPUTE RACE

WHO OWNS
THE GPUS

AI runs on chips. The companies that own the most compute are placing trillion-dollar bets that intelligence is the next scarce resource. Here's who controls the hardware layer of the AI economy.

ESTIMATED GPU-EQUIVALENT CLUSTER SIZE (2026)

Figures are in H100 equivalents; mixes of NVIDIA GPUs and custom silicon (TPU, Trainium) are noted per entry.
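How the H100-equivalent figures are derived is not specified here; below is a minimal sketch of the conversion arithmetic, assuming illustrative per-chip throughput weights. Every weight in the table is a placeholder assumption, not a measured or vendor-verified ratio.

```python
# Convert a mixed fleet to H100-equivalent units by weighting each chip
# type by its assumed training throughput relative to one H100.
# NOTE: all weights below are illustrative placeholders, not measured ratios.
H100_EQUIV_WEIGHT = {
    "H100": 1.0,
    "H200": 1.4,       # assumption: faster memory gives a modest uplift
    "A100": 0.4,       # assumption: prior-generation part
    "H800": 0.9,       # assumption: H100 compute with capped interconnect
    "TPU_v5e": 0.5,    # assumption
    "Trainium3": 1.2,  # assumption
}

def h100_equivalents(fleet: dict[str, int]) -> float:
    """Weighted sum of chip counts, expressed in H100-equivalent units."""
    return sum(count * H100_EQUIV_WEIGHT[chip] for chip, count in fleet.items())

# Example: a hypothetical mixed cluster
print(h100_equivalents({"H100": 100_000, "H200": 50_000, "Trainium3": 30_000}))
# -> 206000.0 H100 equivalents
```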

Microsoft + OpenAI · Stargate / Azure AI
300K H100 / H200

Joint venture. Stargate Phase 1: 100K H100s in Texas; $500B total planned by 2030.

CAPEX: $80B (2026 commitment)

Google DeepMind · TPU v5e / GCP AI
200K TPU v5e equiv.

Google builds its own silicon. TPU v5e counts as GPU-equivalent for training workloads.

CAPEX: $75B (2026 capex)

xAI (Elon Musk) · Colossus (Memphis)
200K H100 / H200

Fastest data center build in history: 100K H100s in 122 days. Expanding to 200K+ H200s.

CAPEX: ~$10B

Meta · Meta AI Compute
150K H100

Announced a target of 350K H100s (roughly 600K H100 equivalents) by end of 2024. Llama training at extreme scale.

CAPEX: $65B (2025)

Amazon (AWS) · Trainium + GPU
120K Trainium 3 / H100

Mix of custom Trainium 3 and NVIDIA H100s. Training + inference across all AWS regions.

CAPEX: $100B+ (2026)

Anthropic (via AWS) · AWS Claude compute
60K H100 / Trainium

No owned cluster; relies on its AWS deal. Strategically dependent on Amazon infrastructure.

CAPEX: Via $4B AWS deal

DeepSeek · Domestic GPU cluster
50K H800 (export-constrained)

Built frontier models on export-constrained hardware; the reported training-run cost for the R1 base model was ~$6M. The efficiency benchmark.

CAPEX: ~$6M (reported base-model training run)

THE DEEPSEEK PARADOX

Every company above spent billions on GPUs. DeepSeek trained R1, a frontier reasoning model, on export-constrained H800 chips, with a reported base-model training run of about $6 million. It matched or beat OpenAI's o1 on several benchmarks.
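That $6M figure is easiest to sanity-check as GPU-hours. A back-of-envelope sketch, assuming the reported ~2.79M H800 GPU-hours and an assumed ~$2/GPU-hour rental rate; both are rough inputs, not audited costs:

```python
# Back-of-envelope: turn a training budget into GPU-hours and wall-clock time.
# Assumptions: reported ~2.79M H800 GPU-hours, an assumed $2/GPU-hour rental
# rate, and a 2,048-GPU cluster; treat all three as rough inputs.
gpu_hours = 2_788_000          # reported H800 GPU-hours for the base-model run
rate_per_hour = 2.0            # USD per GPU-hour (assumed rental price)
cluster_size = 2_048           # GPUs running in parallel

cost = gpu_hours * rate_per_hour
days = gpu_hours / cluster_size / 24

print(f"cost ~ ${cost/1e6:.1f}M, wall clock ~ {days:.0f} days")
# -> cost ~ $5.6M, wall clock ~ 57 days
```

At those assumptions the run lands near $5.6M over roughly two months, which is also why the figure excludes research staff, data, and failed or exploratory runs.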

This doesn't mean hardware doesn't matter. It means the GPU arms race and algorithmic efficiency are running in parallel. The companies buying the most chips are also the most exposed if the efficiency gap closes faster than expected.

THE ARMS RACE — KEY MOVES

Jan 2023 · MSFT/OAI
Microsoft commits $10B to OpenAI; Azure AI infrastructure starts scaling
Mar 2023 · NVIDIA
NVIDIA H100 ships — new benchmark for AI training. Supply instantly scarce.
Jul 2023 · xAI
xAI launches publicly. Elon Musk begins sourcing 10,000 H100s for the initial Grok cluster
Jan 2024 · Meta
Meta announces a 350,000-H100 target for year-end, one of the largest chip orders to date
Jul 2024 · xAI
xAI Colossus Phase 1: 100,000 H100s online in Memphis in 122 days
Jan 2025 · DeepSeek
DeepSeek R1 ships: frontier quality, trained on export-constrained H800s for a reported ~$6M
Jan 2025 · MSFT/OAI
Stargate announced: $500B OpenAI + SoftBank + Oracle + Microsoft AI infrastructure plan
Feb 2025 · Google
Google guides $75B in 2025 capex, the majority to TPU and GPU infrastructure
Mar 2025 · NVIDIA
NVIDIA GB200 NVL72 racks ship at scale; next-generation cluster buildout begins