Court Permits Pentagon's AI Limits Amid Anthropic Rulings Conflict

2 min read · Sources: Wired

A U.S. appeals court has allowed the Pentagon to restrict use of Anthropic's Claude AI, reversing an earlier injunction and creating legal uncertainty.

Why it matters: The rulings affect military contractor compliance, AI deployment strategies, and legal risk assessments.

  • March 26, 2026: Judge Rita Lin halted the Pentagon's directive against Claude AI.
  • April 8, 2026: The appeals court allowed the Pentagon's orders to proceed.
  • The Pentagon is transitioning to AI alternatives such as Gemini and ChatGPT.
  • Anthropic risks $180M in disrupted defense contracts.

Conflicting court rulings have created legal uncertainty over the use of Anthropic's Claude AI by the U.S. Department of Defense (DoD). On March 26, 2026, Judge Rita Lin temporarily halted the Pentagon's directive to cease using Claude AI, citing a violation of due process rights. The injunction gave Anthropic a reprieve; the company argued that its U.S.-developed AI should not face restrictions typically applied to foreign technology deemed a security risk.

The Pentagon imposed the restrictions over concerns that Claude could be used in autonomous weapons or surveillance, as part of a broader supply-chain risk strategy typically aimed at preventing foreign adversarial access.

The legal landscape shifted on April 8, 2026, however, when the D.C. Circuit appeals court ruled that the DoD can enforce its measures, overriding the earlier injunction. That enforcement includes transitioning from Claude to other AI tools such as Gemini and ChatGPT.

This legal tug-of-war affects military contractors' deployment strategies and requires careful recalibration of how defense contracts are drafted and managed. With over $180 million in contracts at stake, Anthropic faces significant compliance challenges.

By the numbers:

  • $180M — value of Anthropic's at-risk contracts with the Pentagon
  • March 26, 2026 — date of initial court injunction in favor of Anthropic
  • April 8, 2026 — appeals court ruling allowing Pentagon's restrictions

Yes, but: The conflict underscores ongoing tensions in balancing innovation with national security priorities.

What's next: Anthropic plans to appeal the decision to the Supreme Court, whose ruling could set a precedent for future disputes over government restrictions on domestically developed AI.