Anthropic co-founder Jack Clark confirmed the company briefed the Trump administration on Mythos, a powerful AI model deemed too dangerous to release publicly.
Jack Clark, Anthropic's Head of Public Benefit, confirmed at the Semafor World Economy Summit that Anthropic briefed the Trump administration on its new Mythos model — an AI system with advanced cybersecurity capabilities considered too dangerous for public release. This disclosure came despite Anthropic actively suing Trump's Department of Defense over being labeled a supply-chain risk. Clark framed the government briefing as a national security obligation, not a contradiction, arguing that private AI companies must find new ways to partner with government on frontier models. The DOD dispute stemmed from Anthropic refusing to grant the Pentagon unrestricted AI access for mass surveillance and autonomous weapons — a contract OpenAI ultimately won.
Mythos sets a precedent in which the most powerful frontier models are briefed to government but never publicly deployed. Developers building security-sensitive tooling should assume that the ceiling of commercially available AI capability is deliberately capped by policy decisions, not just technical readiness. That is a structural constraint on what you can build with public APIs.
Audit which parts of your AI-powered product depend on capabilities that could be classified or restricted (specifically cybersecurity, code analysis, and autonomous agent tasks), and identify fallback model options before access tightens further.
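One way to make that audit concrete is to declare, per feature, which capability it depends on and a ranked fallback chain of models. The sketch below is purely illustrative: the feature names, model identifiers, and the set of restriction-prone capabilities are assumptions, not a real API or an actual risk taxonomy.

```python
# Hypothetical sketch: map each AI-dependent feature to the capability it
# needs and a ranked list of fallback models, so restricted access can be
# flagged and routed around. All names here are illustrative.

# Capabilities the article suggests are most likely to be restricted.
RESTRICTION_PRONE = {"cybersecurity", "code_analysis", "autonomous_agents"}

FEATURES = {
    "vuln_triage":   {"capability": "cybersecurity", "models": ["frontier-a", "mid-b", "local-c"]},
    "pr_review":     {"capability": "code_analysis", "models": ["frontier-a", "mid-b"]},
    "doc_summarize": {"capability": "summarization", "models": ["mid-b"]},
}

def audit(features):
    """Return feature names whose required capability could be restricted."""
    return sorted(
        name for name, spec in features.items()
        if spec["capability"] in RESTRICTION_PRONE
    )

def pick_model(feature, available, features=FEATURES):
    """Pick the highest-ranked fallback model that is still available."""
    for model in features[feature]["models"]:
        if model in available:
            return model
    return None  # no fallback left; the feature must degrade gracefully
```

For example, `audit(FEATURES)` flags `pr_review` and `vuln_triage` as exposure points, and `pick_model("vuln_triage", {"mid-b", "local-c"})` falls through to `"mid-b"` when the frontier model is unavailable. The point of the structure, not the specific names, is that fallback choices are made before a restriction lands, not after.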