{"id":9865,"date":"2026-04-07T11:57:04","date_gmt":"2026-04-07T11:57:04","guid":{"rendered":"https:\/\/villpress.com\/?p=9865"},"modified":"2026-04-07T11:57:12","modified_gmt":"2026-04-07T11:57:12","slug":"anthropic-secures-3-5-gigawatts","status":"publish","type":"post","link":"https:\/\/villpress.com\/cs\/anthropic-secures-3-5-gigawatts\/","title":{"rendered":"Anthropic Secures 3.5 Gigawatts of Google TPU Capacity in New Broadcom Deal"},"content":{"rendered":"<p>Anthropic has placed one of its largest infrastructure bets yet, signing a new multi-party agreement with Google and Broadcom that locks in multiple gigawatts of next-generation Tensor Processing Unit (TPU) capacity starting in 2027. The deal, announced on April 6, 2026, deepens the AI company\u2019s reliance on Google\u2019s custom silicon while giving Broadcom a firmer grip on the supply chain for high-performance AI hardware.<\/p>\n\n\n\n<p>Details emerged simultaneously from Anthropic\u2019s official blog and Broadcom\u2019s regulatory filing. Anthropic will gain access to approximately 3.5 gigawatts of TPU-based compute through Broadcom as part of a broader multi-gigawatt commitment. The vast majority of this new capacity will be sited in the United States, aligning with the company\u2019s earlier pledge to invest $50 billion in American AI infrastructure. Most of the hardware is expected to come online beginning in 2027, with deployment potentially stretching toward 2031 under Broadcom\u2019s long-term supply assurances.<\/p>\n\n\n\n<p>The arrangement builds directly on prior collaborations. In late 2025, Anthropic struck a deal worth tens of billions of dollars for access to up to one million Google TPUs, with over a gigawatt of capacity slated for 2026. Broadcom had already surfaced as a key supplier in that earlier transaction, quietly fulfilling large orders for TPU racks. 
This latest expansion formalizes and scales that relationship, extending Broadcom\u2019s role in designing and manufacturing future generations of Google\u2019s TPUs and associated networking gear for next-gen AI racks.<\/p>\n\n\n\n<p>For Anthropic, the move is framed as pragmatic scaling. \u201cThis groundbreaking partnership with Google and Broadcom is a continuation of our disciplined approach to scaling infrastructure,\u201d said Krishna Rao, the company\u2019s CFO. \u201cWe are building the capacity necessary to serve the exponential growth we have seen in our customer base while also enabling Claude to define the frontier of AI development.\u201d The company continues to diversify its hardware mix, training and serving Claude across AWS Trainium, Google TPUs, and Nvidia GPUs, but the TPU allocation now represents a significant slice of its planned footprint.<\/p>\n\n\n\n<p>The timing carries weight. Anthropic disclosed that its annualized revenue run rate has climbed to $30 billion, up sharply from $9 billion at the end of 2025. That trajectory reflects surging demand for Claude models among enterprise users, developers, and consumers. Securing predictable, large-scale compute has become table stakes for any lab chasing frontier performance; without it, training runs and inference capacity risk becoming bottlenecks. By tying into Google\u2019s TPU ecosystem, Anthropic gains access to highly optimized silicon that is often more cost-effective for certain workloads than pure GPU clusters, while reducing single-vendor exposure.<\/p>\n\n\n\n<p>Broadcom emerges as a quiet winner in the arrangement. The company secured a long-term agreement to develop and supply Google\u2019s future TPUs, plus networking and other components for its AI racks through 2031. The Anthropic commitment adds visibility into demand, with CEO Hock Tan having already highlighted strong early traction with the startup on prior TPU deliveries. 
For a company whose AI revenue has been heavily tied to custom ASICs and networking, locking in multi-gigawatt-scale consumption from both Google and one of the industry\u2019s hottest labs strengthens its positioning in the custom silicon arms race.<\/p>\n\n\n\n<p>This is not an isolated transaction. The broader AI infrastructure market is shifting as hyperscalers and ambitious startups seek alternatives or complements to Nvidia\u2019s dominant GPUs. Google has poured years into iterating TPUs for its own workloads; making them available at scale to partners like Anthropic helps amortize that investment and challenges the notion that only Nvidia can deliver at frontier levels. Amazon is pushing its Trainium chips aggressively, while Microsoft and OpenAI continue heavy GPU bets. Anthropic\u2019s multi-vendor strategy, now with a beefed-up TPU component, reflects a maturing understanding that no single accelerator family will own every workload.<\/p>\n\n\n\n<p>Geopolitically and economically, the U.S.-centric placement of the new capacity adds to a growing stack of domestic AI investments. It also underscores the enormous power demands involved. Multi-gigawatt clusters consume electricity on a scale that has already drawn comparisons to entire industries; Bitcoin miners, for instance, are watching these deals closely as they compete for the same cheap, reliable power sources.<\/p>\n\n\n\n<p>Financial terms of the latest agreement were not disclosed, and Anthropic noted that actual consumption of the expanded capacity will depend on its continued commercial success. The parties are reportedly in discussions with operational and financial partners to support deployment. Still, the signal is unambiguous: even as valuations in private AI companies remain opaque, the capital intensity of staying competitive at the frontier is only increasing.<\/p>\n\n\n\n<p>For the ecosystem, the deal highlights a maturing supply chain dynamic. Google designs the TPUs. 
Broadcom manufactures and optimizes key elements at volume. Anthropic, and potentially other large customers down the line, consumes the output at unprecedented scale. It is a model that could accelerate the shift toward custom silicon while giving Google a stronger foothold in the cloud AI race beyond its own Gemini models.<\/p>\n\n\n\n<p>Anthropic\u2019s latest infrastructure play won\u2019t make headlines like a flashy model release, but it may prove more consequential. In an era where compute is the ultimate constraint, locking in gigawatts of next-generation capacity is less about bragging rights and more about survival and continued leadership at the cutting edge of AI development. How effectively the company deploys this hardware, and whether it delivers meaningful advances in Claude\u2019s capabilities, will determine whether the bet pays off. For now, the infrastructure foundation is firmly in place.<\/p>","protected":false},"excerpt":{"rendered":"<p>Anthropic has placed one of its largest infrastructure bets yet, signing a new multi-party agreement with Google and Broadcom that locks in multiple gigawatts of next-generation Tensor Processing Unit (TPU) capacity starting in 2027. 
The deal, announced on April 6, 2026, deepens the AI company\u2019s reliance on Google\u2019s custom silicon while giving Broadcom a firmer [&hellip;]<\/p>\n","protected":false},"author":31579,"featured_media":9866,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_mi_skip_tracking":false,"footnotes":""},"categories":[6],"tags":[138,1885,65,116,313],"ppma_author":[452],"class_list":{"0":"post-9865","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-technology","8":"tag-ai","9":"tag-anthropic-secures-3-5-gigawatts","10":"tag-artificial-intelligence","11":"tag-google","12":"tag-openai"},"authors":[{"term_id":452,"user_id":31579,"is_guest":0,"slug":"estherspeaks","display_name":"Esther Speaks","avatar_url":"https:\/\/secure.gravatar.com\/avatar\/cdcaf0f94087bbfcad372d974a1a697382dc93112457104ff6535cf4984ea4de?s=96&d=mm&r=g","0":null,"1":"","2":"","3":"","4":"","5":"","6":"","7":"","8":""}],"_links":{"self":[{"href":"https:\/\/villpress.com\/cs\/wp-json\/wp\/v2\/posts\/9865","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/villpress.com\/cs\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/villpress.com\/cs\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/villpress.com\/cs\/wp-json\/wp\/v2\/users\/31579"}],"replies":[{"embeddable":true,"href":"https:\/\/villpress.com\/cs\/wp-json\/wp\/v2\/comments?post=9865"}],"version-history":[{"count":1,"href":"https:\/\/villpress.com\/cs\/wp-json\/wp\/v2\/posts\/9865\/revisions"}],"predecessor-version":[{"id":9867,"href":"https:\/\/villpress.com\/cs\/wp-json\/wp\/v2\/posts\/9865\/revisions\/9867"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/villpress.com\/cs\/wp-json\/wp\/v2\/media\/9866"}],"wp:attachment":[{"href":"https:\/\/villpress.com\/cs\/wp-json\/wp\/v2\/media?parent=9865"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/vi
llpress.com\/cs\/wp-json\/wp\/v2\/categories?post=9865"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/villpress.com\/cs\/wp-json\/wp\/v2\/tags?post=9865"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/villpress.com\/cs\/wp-json\/wp\/v2\/ppma_author?post=9865"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}