A federal judge ruled against the Pentagon's attempts to punish Anthropic, finding the government violated due process and First Amendment protections in a contract dispute turned political.
Judge Rita Lin issued a 43-page opinion ruling against the Pentagon's attempts to designate Anthropic as a supply chain risk and bar federal agencies from using Claude. The dispute escalated after Trump posted on Truth Social calling Anthropic employees 'Leftwing nutjobs' and Defense Secretary Hegseth threatened to blacklist the company — actions the judge found amounted to unconstitutional punishment of Anthropic for its perceived ideology. The government admitted in court that it had no evidence Anthropic could implement a 'kill switch,' and the judge found Hegseth's sweeping contractor ban had 'absolutely no legal effect.' The ruling protects Anthropic's existing government contracts, including its Claude deployment through Palantir for DoD personnel.
This ruling has minimal day-to-day technical impact for developers. Claude APIs remain available, Palantir's DoD integration continues operating, and no technical access restrictions were imposed. The case does signal that government procurement of AI tools is now a politically volatile surface — something to factor in if you're building systems that depend on continued government API access.
If you're building on Claude for any government-adjacent use case, verify that your deployment complies with Anthropic's government usage policy — particularly its provisions on surveillance and autonomous systems — before your next sprint review.