Google, Meta team up on “TorchTPU” as Nvidia faces $5 trillion market test
By Cygnus | 18 Dec 2025
Google and Meta Platforms have officially moved their “TorchTPU” collaboration into high gear, a strategic software push aimed at breaking Nvidia’s long-standing dominance in the AI hardware market.
The move comes as the AI industry faces a valuation reckoning. Nvidia, which briefly became the first company to hit a $5 trillion market capitalization on October 29, 2025, has seen its valuation settle around $4.3 trillion as of mid-December. The dip follows mounting evidence that major buyers of AI compute, led by Meta and Anthropic, are successfully diversifying their infrastructure away from Nvidia’s costly Blackwell GPUs.
Breaking the CUDA lock-in
The TorchTPU project is designed to make Google’s custom Tensor Processing Units (TPUs) work natively and efficiently with PyTorch, the world’s most popular AI framework. Historically, Nvidia’s proprietary CUDA software layer created a “lock-in” effect: because PyTorch ran best on CUDA, developers were reluctant to switch to rival hardware.
TorchTPU aims to remove this friction, letting developers run PyTorch workloads on Google hardware with near “plug-and-play” ease instead of rewriting their models for Google’s separate JAX framework.
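Details of TorchTPU itself have not been published. As a rough illustration of what “plug-and-play” PyTorch on TPUs looks like today, the sketch below uses the existing PyTorch/XLA bridge (the torch_xla package); the calls shown are that bridge’s current API, not TorchTPU’s, so treat this as an assumption-laden sketch of the developer experience the project is targeting.

```python
import torch
import torch_xla.core.xla_model as xm  # existing PyTorch/XLA bridge, used here for illustration

# On a TPU VM this resolves to an XLA (TPU) device; the rest of the code
# stays ordinary PyTorch, with only the target device changing.
device = xm.xla_device()

model = torch.nn.Linear(128, 10).to(device)
x = torch.randn(32, 128, device=device)
loss = model(x).sum()
loss.backward()

# PyTorch/XLA traces operations lazily; mark_step() hands the accumulated
# graph to the XLA compiler for execution on the TPU.
xm.mark_step()
```

The pitch of TorchTPU, as described above, is to make this path feel as seamless as CUDA does today, without such bridge-specific steps.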
The “Ironwood” era
Central to this shift is Google’s seventh-generation chip, TPU v7 (Ironwood), which reached General Availability in November 2025.
- Performance: Ironwood offers a 10x peak performance jump over TPU v5p and is 4x faster per chip for training than the previous v6 (Trillium).
- Memory and interconnect: The chip pairs 192 GB of HBM3E memory with a 9.6 Tb/s inter-chip interconnect, optimized for serving large “thinking” (reasoning) models at inference time.
- Scale: A single Ironwood “superpod” can link 9,216 chips, delivering 42.5 exaflops of FP8 compute, surpassing the world’s most powerful supercomputers (a quick per-chip arithmetic check follows this list).
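As a rough sanity check rather than an official spec, dividing the quoted pod-level figure across its chips implies each Ironwood chip contributes on the order of 4.6 petaflops of FP8 compute. The inputs below come straight from the bullets above; only the per-chip result is derived.

```python
# Back-of-the-envelope check of the Ironwood superpod figures quoted above.
CHIPS_PER_SUPERPOD = 9_216   # chips linked in a single superpod
POD_FP8_EXAFLOPS = 42.5      # quoted FP8 compute for the full pod

# 1 exaflop = 1,000 petaflops, so the implied per-chip figure is:
per_chip_pflops = POD_FP8_EXAFLOPS * 1_000 / CHIPS_PER_SUPERPOD
print(f"Implied FP8 compute per Ironwood chip: ~{per_chip_pflops:.1f} PFLOPS")  # ~4.6
```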
A multi-vendor shift
The narrative that “frontier AI requires Nvidia” is being challenged by high-profile deployments. Anthropic recently secured a landmark deal for up to 1 million Google TPUs to power its Claude models through 2026, one of the largest hardware commitments in AI history.
For Meta, the TorchTPU project serves as critical leverage in price negotiations with Nvidia while also preparing the company to train future Llama models on a more diverse global infrastructure. Reports indicate Meta will rent massive TPU capacity through Google Cloud in 2026 before potentially deploying TPUs in its own data centers by 2027.
Summary
Google and Meta are collaborating on TorchTPU to ensure Google’s seventh-generation TPU v7 (Ironwood) is fully compatible with the PyTorch framework. By removing the software bottlenecks associated with Nvidia’s CUDA, the duo aims to lower switching costs for developers. With Anthropic committing to up to 1 million TPUs and Nvidia’s market cap pulling back from its $5 trillion peak, the initiative represents the most coordinated challenge yet to Nvidia’s dominance of AI hardware.
Frequently asked questions (FAQs)
Q1: What is TorchTPU?
It is a Google-led initiative, supported by Meta, to optimize PyTorch for Google TPUs. It allows developers to use the industry-standard AI framework without the engineering overhead of Google’s JAX stack.
Q2: How did this affect Nvidia’s stock?
While Nvidia hit a record $5 trillion market capitalization in October, the rise of ASIC (application-specific integrated circuit) competitors such as Ironwood has contributed to a valuation adjustment to approximately $4.3 trillion.
Q3: What makes TPU v7 (Ironwood) different?
It is designed for the “Age of Inference,” featuring 6x the high-bandwidth memory capacity of its predecessor (Trillium) and the ability to link more than 9,000 chips into a single, unified compute fabric.
Q4: Is Meta buying these chips?
Meta is currently renting capacity via Google Cloud but is reportedly co-developing software that would allow it to purchase and install TPUs in its own data centers by 2027.
Q5: Why is the Anthropic deal significant?
The deal for up to 1 million TPUs shows that frontier AI labs can train and deploy world-class models (like Claude) without depending exclusively on Nvidia hardware.
