A federal judge suggested the DoD illegally retaliated against Anthropic by designating it a supply-chain risk after Anthropic sought to restrict military use of its AI.
Anthropic filed two federal lawsuits alleging the Trump administration's Department of Defense illegally designated it a supply-chain security risk in retaliation for pushing limitations on military use of its Claude models. US District Judge Rita Lin stated the designation looks like 'an attempt to cripple Anthropic' and a potential First Amendment violation. Anthropic is seeking a temporary injunction to pause the designation, with a ruling expected within days. A parallel case at the DC federal appeals court is also pending.
This case directly tests whether AI providers can enforce acceptable use policies against government clients — or whether the government can override those restrictions entirely. The DoD argued it may have the right to update Anthropic's models without Anthropic's permission during contract transitions, which would set an unprecedented and troubling standard for model integrity. If Anthropic loses, every AI provider's terms of service becomes legally contestable by government actors.
Audit your current AI vendor contracts this week: check whether your provider (Anthropic, OpenAI, Google) has explicit acceptable use clauses and whether government override is addressed — especially if you're building in defense-adjacent or dual-use sectors.