A small group of unauthorized users gained access to Anthropic’s new AI model, Mythos, according to Bloomberg, citing internal documents.
The outlet reports that several members of a closed online forum accessed the model on its release day and have been using it regularly since.
Anthropic promotes Mythos as a system capable of detecting and exploiting vulnerabilities “in all major operating systems and web browsers.” Consequently, the company has restricted access to a select group of software providers.
To infiltrate the system, the group employed several tactics: using credentials belonging to an employee of an Anthropic contractor, guessing the model’s URL based on the company’s other systems, and drawing on additional information from a data leak at the startup Mercor.
A Bloomberg source claims the group intends only to experiment with the new model and does not plan to cause harm. Besides Mythos, the group’s members have access to several other unreleased Anthropic models.
“We are investigating a report of unauthorized access to the Claude Mythos Preview through one of our third-party environments,” a company representative stated.
This incident highlights the difficulty of controlling the spread of potentially dangerous technologies and raises the question of who else might gain access to Mythos and for what purposes.
The Power of Mythos
Mozilla reported on its blog that an early version of Mythos helped identify 271 vulnerabilities in the Firefox browser during internal testing. The issues have been resolved.
The result demonstrated how advanced AI systems can analyze large codebases and identify weaknesses that previously required meticulous scrutiny by cybersecurity experts.
Previously, Mozilla tested another Anthropic model, which identified 22 vulnerabilities in an earlier version of Firefox. Despite the new findings, the company acknowledges that achieving absolute security is an “unrealistic goal.”
The firm stated that all discovered errors could also have been found by a top-tier human researcher.
“Some commentators believe that future AI models will discover entirely new forms of vulnerabilities beyond our current understanding. We do not share this view,” the company noted.
In April, media reported that the U.S. National Security Agency is using Mythos, despite Anthropic’s dispute with the Pentagon.
In March, the company confirmed a leak of part of the source code for its AI programming tool, Claude Code.
Later, while attempting to rectify the situation, Anthropic accidentally deleted thousands of repositories on GitHub.
