The DoD labeled Anthropic a supply-chain risk, blocking Claude from military use amid fears Anthropic could remotely alter or disable the model.
Defense Secretary Pete Hegseth designated Anthropic a supply-chain risk, barring the DoD and its contractors from using Claude. Anthropic's head of public sector filed a court declaration stating the company has no technical ability to remotely alter or disable Claude once deployed. Anthropic filed two lawsuits challenging the ban's constitutionality, with a hearing scheduled for March 24 in San Francisco federal court. Federal agencies beyond the DoD are already dropping Claude, and customers have begun canceling contracts.
This is less a technical story and more a procurement one — but it signals that government and enterprise clients will increasingly demand air-gapped or self-hosted deployments where the vendor provably cannot alter model behavior post-deployment. If you're building on Claude APIs for any regulated or defense-adjacent customer, their legal and procurement teams are about to start asking hard questions about your dependency chain. The DoD's move to work with third-party cloud providers to neutralize Anthropic's control is a preview of how enterprise AI architecture requirements will evolve.
If you have any government, defense-contractor, or critical-infrastructure clients using Claude via API, pull your contract terms this week and check whether your SLA covers model-behavior guarantees. If it doesn't, flag the gap to legal before your client does.
Go to Anthropic's usage policy page (anthropic.com/legal/usage-policy) and search for 'military' or 'weapons' — read the exact clauses, then paste them into Claude.ai and ask: 'What legal risks does this create for a SaaS company serving defense contractors?'
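If you'd rather script the first half of that exercise, the keyword scan can be sketched in a few lines of Python. This is a minimal sketch: the `find_clauses` helper and the sample text are invented for illustration and are not Anthropic's actual policy language, which you should still read in full on the page itself.

```python
import re

def find_clauses(policy_text: str, keywords: list[str]) -> list[str]:
    """Return sentences from policy_text that mention any keyword (case-insensitive)."""
    # Naive sentence split on terminal punctuation; good enough for a quick scan.
    sentences = re.split(r"(?<=[.!?])\s+", policy_text)
    pattern = re.compile("|".join(re.escape(k) for k in keywords), re.IGNORECASE)
    return [s.strip() for s in sentences if pattern.search(s)]

# Placeholder text standing in for the real policy page.
sample = (
    "Users may not apply the service to weapons development. "
    "General research is permitted. "
    "Military applications require prior written approval."
)

for clause in find_clauses(sample, ["military", "weapons"]):
    print(clause)
```

Paste whatever clauses the scan surfaces into Claude.ai for the second half of the exercise.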