Anthropic has built a new model internally codenamed Mythos — and says it is deliberately not releasing it publicly because of its exceptional capabilities in cybersecurity.

What Mythos can do

Anthropic describes Mythos as "far ahead" of other frontier models specifically in the cybersecurity domain. According to the company, the model could assist with offensive cyber operations in ways that existing publicly released models cannot, and would pose meaningful risks if deployed without safeguards.

Regulatory ripple effects

The existence of Mythos pushed policymakers' cybersecurity concerns to a tipping point. The White House began weighing a formal review process for AI model releases. Within days, Microsoft, Google, and xAI agreed to give the US government early access to unreleased models for security evaluation through NIST, a direct response to the Mythos situation.

Why it matters

Mythos is the first publicly acknowledged case of a major AI lab deliberately withholding a model over capability concerns: not safety theater, but a model Anthropic itself says it is not comfortable releasing.