Anthropic Wins Court Order Against Pentagon Over AI Restrictions

Esther Speak - Senior Reporter at Villpress
Image Credit: Michael M. Santiago | Getty Images

A federal judge in San Francisco handed Anthropic a significant early victory yesterday in its high-stakes clash with the Trump administration, temporarily blocking the Pentagon from enforcing its designation of the AI company as a national security supply chain risk.

U.S. District Judge Rita F. Lin issued the preliminary injunction late on March 26 in a 43-page order that pulled no punches. She found that the government’s actions strongly suggested retaliation for Anthropic’s public criticism of Pentagon contracting terms, describing the conduct as “classic illegal First Amendment retaliation.” The ruling pauses both the Defense Department’s blacklist label and President Trump’s broader directive ordering federal agencies to stop using Anthropic’s Claude models, restoring the status quo that existed before late February.

The dispute erupted in February when contract negotiations between Anthropic and the Department of Defense broke down. Anthropic, which had previously secured a deal allowing limited use of Claude on classified systems, drew two hard lines: it would not permit its models to enable fully autonomous lethal weapons without meaningful human oversight, and it would not allow their deployment for mass surveillance of U.S. citizens. Defense Secretary Pete Hegseth and the administration viewed those restrictions as unacceptable, arguing that a private company should not dictate terms for lawful military applications. When talks stalled, the Pentagon invoked a statute typically used against foreign adversaries to label Anthropic a supply chain risk, and Trump directed a government-wide cutoff.

Anthropic responded swiftly, filing suit in the Northern District of California and a parallel challenge in the D.C. Circuit on March 9. The company argued that the designation was not a genuine security measure but punishment for its public stance on responsible AI use, violating the First Amendment, due process, and the Administrative Procedure Act. It warned that the blacklist could wipe out hundreds of millions of dollars, potentially billions, in projected 2026 revenue and scare off commercial customers wary of associating with a firm frozen out of government work.

At a March 24 hearing, Judge Lin had already signaled deep skepticism toward the government’s position, questioning whether the designation amounted to punishment for bringing the dispute into the open. Her written order on Thursday formalized that view, concluding that Anthropic had demonstrated a strong likelihood of success on the merits of its retaliation claim. She also described the supply chain risk designation as likely “contrary to law and arbitrary and capricious,” noting that nothing in the relevant statute supported branding an American company a potential saboteur simply for expressing disagreement over contract terms.

The injunction buys Anthropic critical breathing room. It lifts the immediate threat to the company’s federal business and gives it something concrete to show partners and clients that the cloud of uncertainty has, at least temporarily, lifted. However, Lin stayed her order for seven days to give the Justice Department time to appeal, and she required the government to file a compliance report by April 6 explaining how it intends to implement the ruling.

For the broader AI industry, the decision highlights the tension between national security imperatives and the constitutional protections that apply even to frontier technology companies. Anthropic has positioned itself as a proponent of “constitutional AI” with built-in safeguards, and its willingness to publicly defend usage restrictions drew support from an unusual mix of tech trade groups, former national security officials, and scientists from rival labs. Amicus briefs emphasized that allowing the government to weaponize procurement tools against companies engaged in legitimate safety debates could chill innovation and harm U.S. competitiveness, particularly against China.

The Pentagon, for its part, has maintained that the issue is practical rather than punitive: private guardrails on dual-use AI create operational uncertainty in mission-critical systems, and courts should defer heavily to the executive branch on defense matters. Yet Judge Lin’s sharp language suggests that at least one federal court is unwilling to treat the designation as immune from scrutiny when it appears motivated by public criticism rather than concrete risk.

This remains an interim ruling. The underlying case on the merits will continue, likely with appeals that could stretch into higher courts and shape how future AI-government contracts are negotiated. In the meantime, Claude stays available to federal users, and the supply chain risk label is on ice. The episode underscores a deeper reality in the AI era: as powerful models move from research labs into defense applications, the boundaries between commercial ethics, corporate speech, and national security are becoming flashpoints that no side can easily resolve through blunt instruments alone. How this plays out will influence not just Anthropic’s trajectory but the willingness of other AI developers to set, and defend, red lines when dealing with the world’s most powerful customer.

Esther Speak is a senior reporter and newsroom strategist at Villpress, where she shapes Africa-focused business, technology, and policy coverage. She works at the intersection of journalism and editorial systems, producing clear, high-impact news that travels globally while staying rooted in African realities.
