WHO OWNS
THE GPUS
AI runs on chips. The companies that own the most compute are placing trillion-dollar bets that intelligence is the next scarce resource. Here's who controls the hardware layer of the AI economy.
ESTIMATED GPU-EQUIVALENT CLUSTER SIZE (2026)
H100 equivalents. Mix of NVIDIA GPUs and custom silicon (TPU, Trainium) where noted.
OpenAI (Stargate) — Joint venture with SoftBank and Oracle. Stargate Phase 1: 100K H100s in Texas. $500B total planned by 2030.
Google — Builds its own silicon. TPU v5e counted as GPU-equivalent for training workloads.
xAI — Fastest data center build in history: 100K H100s in 122 days. Expanding to 200K+ H200s.
Meta — Targeting 350K H100 equivalents by end of 2025. Llama training at extreme scale.
Amazon — Mix of custom Trainium 3 and NVIDIA H100s. Training and inference across all AWS regions.
Anthropic — No owned cluster; relies on its AWS deal. Strategically dependent on Amazon infrastructure.
DeepSeek — Built a frontier model on export-restricted hardware. Reported R1 training cost: ~$6M. The efficiency benchmark.
Every company above spent billions on GPUs. DeepSeek trained R1, a frontier reasoning model, on export-restricted H800 chips for a reported $6 million. It matched or beat OpenAI's o1 on multiple benchmarks.
Hardware still matters. But the GPU arms race and algorithmic efficiency are advancing in parallel, and the companies buying the most chips are also the most exposed if the efficiency gap closes faster than expected.