Anthropic Tells Court: No Remote Kill Switch for Pentagon AI Deployments

2 min read · Sources: Axios

Anthropic told a federal court it cannot remotely disable or access its AI in military use.

Why it matters: Legal and compliance leaders must assess the risks when deploying AI in sensitive government operations. Without a remote kill switch, vendors and agencies face challenges in accountability and operational control, raising questions in procurement, contract drafting, and risk management.

  • Anthropic stated it cannot stop, alter, or access its AI models once the military deploys them.
  • The Pentagon labeled Anthropic a "supply chain risk" in April 2026 after it refused to remove certain ethical safeguards.
  • A federal judge has paused the Pentagon's blacklisting pending a hearing set for May 19, 2026.
  • Despite the Pentagon's concerns, the NSA is still using Anthropic's Mythos Preview AI to detect vulnerabilities.

In recent court filings, Anthropic confirmed it has no way to remotely alter, access, or shut down its AI models once deployed on classified U.S. military systems. Thiyagu Ramasamy, Anthropic’s Head of Public Sector, stated plainly: “Anthropic has never had the ability to cause Claude to stop working, alter its functionality, shut off access, or otherwise influence or imperil military operations.”

The inability to intervene raises supply chain concerns: the Pentagon designated Anthropic a "supply chain risk" in April 2026 after the company declined to strip out AI guardrails preventing unauthorized surveillance and autonomous weapons use. (A supply chain risk designation means the government sees potential operational, ethical, or national security threats arising from third-party technology.)

This designation led to the termination of a $200 million contract. Anthropic sued, arguing the action violated its free speech and due process rights. A federal judge issued a temporary injunction pausing Anthropic's blacklisting, questioning whether the Pentagon's move was retaliatory. The case is now headed for an appellate hearing on May 19, 2026.

Meanwhile, the NSA continues using Anthropic's Mythos Preview tool to find cybersecurity vulnerabilities. The clash highlights continuing uncertainty over AI oversight and government procurement, and over how to manage emerging technology in national security settings.

By the numbers:

  • $200 million — value of the Pentagon contract canceled after Anthropic was labeled a supply chain risk
  • May 19, 2026 — date set for the appellate court hearing on the dispute

Yes, but: The NSA has not suspended use of Anthropic's AI tools, reflecting divided risk assessments within the government.

What's next: A federal appellate court will hear Anthropic's case on May 19, 2026, determining whether the Pentagon's blacklisting can proceed.