Anthropic Investigating Unauthorized Access to Claude Mythos AI Model
AI startup Anthropic is investigating claims that a group of users compromised its Claude Mythos model, a sophisticated AI system released only to a small, trusted circle of companies with advanced cybersecurity capabilities.
A report revealed that Anthropic is examining allegations that individuals accessed the model via systems meant for third-party companies working in collaboration with the firm. “We are investigating reports of unauthorized access to Claude Mythos Preview through one of our third-party vendor environments,” the company stated.
This situation raises pressing questions about Anthropic’s ability to safeguard its cutting-edge technology, especially given the firm’s valuation of around $380 billion. The limited release of Claude Mythos Preview was a deliberate choice aimed at mitigating the risk that the model’s capabilities could be used to mount cyberattacks at speeds and scales beyond human ability.
Reports suggest that the unauthorized access was facilitated by someone who had legitimate contractor access. However, Anthropic noted that there was no indication of any activity extending beyond the vendor environment—essentially, the infrastructure that third parties use to develop the model. AI labs commonly employ contractors for various tasks, but details about the specific vendors involved in this incident were not disclosed.
This security incident adds to the growing concerns surrounding Mythos, which has already created considerable upheaval in various markets and sparked discussions among financial institutions and regulatory bodies. Experts caution that if a hostile entity were to obtain this model, it could empower hackers to identify and exploit software vulnerabilities far more swiftly than organizations could respond with fixes.
Earlier this month, Anthropic presented Mythos to chosen corporate partners like Amazon, Microsoft, Apple, Cisco, and CrowdStrike. The company suggested that these partners would leverage Mythos’ advanced features to bolster defenses against cyber vulnerabilities before a broader public rollout.
In a separate previous incident, Anthropic reportedly suffered a significant security breach in which some of its source code was mistakenly leaked online.
Moreover, just days prior, it was disclosed that Anthropic had unintentionally made nearly 3,000 internal files publicly accessible, including details about an upcoming AI model called Mythos or Capybara, which the company warned carried notable cybersecurity risks. That leak exposed around 500,000 lines of code across approximately 1,900 files. When pressed for comment, Anthropic acknowledged that “some internal source code” had been leaked due to a “Claude Code release.” A spokesperson clarified that no sensitive customer data was compromised, attributing the issue to human error rather than a security failure, and said measures are being taken to prevent a recurrence.