Anthropic has announced a landmark expansion of its use of Google Cloud’s TPU chips, providing the company with access to the capacity and computing resources required to train and serve the next generations of Claude models. In total, Anthropic will have access to well over a gigawatt of capacity coming online in 2026.
This represents the largest expansion of Anthropic’s TPU usage to date. Anthropic will have access to up to one million TPU chips, as well as additional Google Cloud services, which will equip its research and development teams with leading AI-optimized infrastructure for years to come. Anthropic chose TPUs because of their strong price-performance and efficiency, as well as the company’s existing experience training and serving its models on TPUs.
Anthropic and Google Cloud first announced a strategic partnership in 2023, with Anthropic using Google Cloud’s AI infrastructure to train its models and making them available to businesses through Google Cloud’s Vertex AI platform and through Google Cloud Marketplace. Today, thousands of businesses use Anthropic’s Claude models on Google Cloud, including Figma, Palo Alto Networks, Cursor, and others.
“Anthropic and Google have a longstanding partnership and this latest expansion will help us continue to grow the compute we need to define the frontier of AI,” said Krishna Rao, CFO of Anthropic. “Our customers—from Fortune 500 companies to AI-native startups—depend on Claude for their most important work, and this expanded capacity ensures we can meet our exponentially growing demand while keeping our models at the cutting edge of the industry.”
“Anthropic’s choice to significantly expand its usage of TPUs reflects the strong price-performance and efficiency its teams have seen with TPUs for several years,” said Thomas Kurian, CEO at Google Cloud. “We are continuing to innovate and drive further efficiencies and increased capacity of our TPUs, building on our already mature AI accelerator portfolio, including our seventh generation TPU, Ironwood.”