Anthropic Secures 3.5 Gigawatts of Google TPU Capacity in New Broadcom Deal

Esther Speak - Senior Reporter at Villpress

Anthropic has secured one of its largest infrastructure bets yet, signing a new multi-party agreement with Google and Broadcom that locks in multiple gigawatts of next-generation Tensor Processing Unit (TPU) capacity starting in 2027. The deal, announced on April 6, 2026, deepens the AI company’s reliance on Google’s custom silicon while giving Broadcom a firmer grip on the supply chain for high-performance AI hardware.

Details emerged simultaneously from Anthropic’s official blog and Broadcom’s regulatory filing. Anthropic will gain access to approximately 3.5 gigawatts of TPU-based compute through Broadcom as part of a broader multi-gigawatt commitment. The vast majority of this new capacity will be sited in the United States, aligning with the company’s earlier pledge to invest $50 billion in American AI infrastructure. Most of the hardware is expected to come online beginning in 2027, with deployment potentially stretching toward 2031 under Broadcom’s long-term supply assurances.

The arrangement builds directly on prior collaborations. In late 2025, Anthropic struck a deal worth tens of billions of dollars for access to up to one million Google TPUs, with over a gigawatt of capacity slated for 2026. Broadcom had already surfaced as a key supplier in that earlier transaction, quietly fulfilling large orders for TPU racks. This latest expansion formalizes and scales that relationship, extending Broadcom’s role in designing and manufacturing future generations of Google’s TPUs and associated networking gear for next-gen AI racks.

For Anthropic, the move is framed as pragmatic scaling. “This groundbreaking partnership with Google and Broadcom is a continuation of our disciplined approach to scaling infrastructure,” said Krishna Rao, the company’s CFO. “We are building the capacity necessary to serve the exponential growth we have seen in our customer base while also enabling Claude to define the frontier of AI development.” The company continues to diversify its hardware mix, training and serving Claude across AWS Trainium, Google TPUs, and Nvidia GPUs, but the TPU allocation now represents a significant slice of its planned footprint.

The timing carries weight. Anthropic disclosed that its annualized revenue run rate has climbed to $30 billion, up sharply from $9 billion at the end of 2025. That trajectory reflects surging demand for Claude models among enterprise users, developers, and consumers. Securing predictable, large-scale compute has become table stakes for any lab chasing frontier performance; without it, training runs and inference capacity risk becoming bottlenecks. By tying into Google’s TPU ecosystem, Anthropic gains access to highly optimized silicon that is often more cost-effective for certain workloads than pure GPU clusters, while reducing single-vendor exposure.

Broadcom emerges as a quiet winner in the arrangement. The company secured a long-term agreement to develop and supply Google’s future TPUs, plus networking and other components for its AI racks through 2031. The Anthropic commitment adds visibility into demand, with CEO Hock Tan having already highlighted strong early traction with the startup on prior TPU deliveries. For a company whose AI revenue has been heavily tied to custom ASICs and networking, locking in multi-gigawatt-scale consumption from both Google and one of the industry’s hottest labs strengthens its positioning in the custom silicon arms race.

This is not an isolated transaction. The broader AI infrastructure market is shifting as hyperscalers and ambitious startups seek alternatives or complements to Nvidia’s dominant GPUs. Google has poured years into iterating TPUs for its own workloads; making them available at scale to partners like Anthropic helps amortize that investment and challenges the notion that only Nvidia can deliver at frontier levels. Amazon is pushing its Trainium chips aggressively, while Microsoft and OpenAI continue heavy GPU bets. Anthropic’s multi-vendor strategy, now with a beefed-up TPU component, reflects a maturing understanding that no single accelerator family will own every workload.

Geopolitically and economically, the U.S.-centric placement of the new capacity adds to a growing stack of domestic AI investments. It also underscores the enormous power demands involved. Multi-gigawatt clusters consume electricity on a scale that has already drawn comparisons to entire industries; Bitcoin miners, for instance, are watching these deals closely as they compete for the same cheap, reliable power sources.

Financial terms of the latest agreement were not disclosed, and Anthropic noted that actual consumption of the expanded capacity will depend on its continued commercial success. The parties are reportedly in discussions with operational and financial partners to support deployment. Still, the signal is unambiguous: even as valuations in private AI companies remain opaque, the capital intensity of staying competitive at the frontier is only increasing.

For the ecosystem, the deal highlights a maturing supply chain dynamic. Google designs the TPUs. Broadcom manufactures and optimizes key elements at volume. Anthropic, and potentially other large customers down the line, consumes the output at unprecedented scale. It is a model that could accelerate the shift toward custom silicon while giving Google a stronger foothold in the cloud AI race beyond its own Gemini models.

Anthropic’s latest infrastructure play won’t make headlines like a flashy model release, but it may prove more consequential. In an era where compute is the ultimate constraint, locking in gigawatts of next-generation capacity is less about bragging rights than about survival, and continued leadership, at the cutting edge of AI development. How effectively the company deploys this hardware, and whether it delivers meaningful advances in Claude’s capabilities, will determine if the bet pays off. For now, the infrastructure foundation is firmly in place.


Esther Speak is a senior reporter and newsroom strategist at Villpress, where she shapes Africa-focused business, technology, and policy coverage. She works at the intersection of journalism and editorial systems, producing clear, high-impact news that travels globally while staying rooted in African realities.
