Is Anthropic limiting the release of Mythos to protect the internet — or Anthropic?
Anthropic said this week that it limited the release of its newest model, dubbed Mythos, because it is too capable of finding security exploits in software relied upon by users around the world.
Instead of unleashing Mythos on the public, the frontier lab will share it with a group of large companies and organizations that operate critical online infrastructure, from Amazon Web Services to JPMorgan Chase.
OpenAI is reportedly considering a similar plan for its next cybersecurity tool. The ostensible idea is to let these big enterprises get ahead of bad actors who could leverage advanced LLMs to penetrate secure software.
But the “e-word” in that sentence, enterprises, is a hint that there might be more to this release strategy than cybersecurity, or the hyping of model capabilities.
Dan Lahav, the CEO of the AI cybersecurity lab Irregular, told TechCrunch in March, before the release of Mythos, that while the discovery of vulnerabilities by AI tools matters, the specific value of any weakness to an attacker depends on many factors, including how they can be used in combination.
“The question I always have in my mind,” Lahav said, “is did they find something that is exploitable in a very meaningful way, whether individually or as part of a chain?”
Anthropic says Mythos can exploit vulnerabilities far more effectively than its previous model, Opus. But it’s not clear that Mythos is actually the be-all and end-all of cybersecurity models. Aisle, an AI cybersecurity startup, said it was able to replicate much of what Anthropic says Mythos accomplished using smaller, open-weight models. Aisle’s team argues that these results show there is no single best model for cybersecurity; the right tool depends on the task at hand.
Given that Opus was already seen as a game changer for cybersecurity, there’s another reason that frontier labs may want to limit their releases to big organizations: It creates a flywheel for big enterprise contracts, while making it harder for competitors to copy their models using distillation, a technique that leverages frontier models to train new LLMs on the cheap.
“This is marketing cover for [the] fact that top-end models are now gated by enterprise agreements and no longer available to small labs to distill,” David Crawshaw, a software engineer and CEO of the startup exe.dev, suggested in a social media post. “By the time you and I can use Mythos, there will be a new top-end rev that is enterprise only. That treadmill helps keep the enterprise dollars flowing (which is most of the dollars) by relegating distillation companies to second rank.”
That analysis jibes with what we’re seeing in the AI ecosystem: A race between frontier labs developing the largest, most capable models, and companies like Aisle that rely on multiple models and see open source LLMs, often from China and often allegedly developed through distillation, as a path to economic advantage.
The frontier labs have been taking a harder line on distillation this year, with Anthropic publicly revealing what it says are attempts by Chinese firms to copy its models, and three leading labs — Anthropic, Google, and OpenAI — teaming up to identify distillers and block them, according to a Bloomberg report.
Distillation is a threat to the business model of frontier labs because it erodes the advantages conferred by deploying huge amounts of capital to scale. Blocking distillation, then, is already a worthwhile endeavor, but the selective-release approach to doing so also gives the labs a way to differentiate their enterprise offerings as enterprise sales become the key to profitable deployment.
Whether Mythos or any new model truly threatens the security of the internet remains to be seen, and a careful rollout of the technology is a responsible way forward.
Anthropic didn’t respond by press time to our questions about whether the decision also relates to distillation concerns, but the company may have found a clever approach to protecting the internet, and its bottom line.
Tim Fernholz
Tim Fernholz is a journalist who writes about technology, finance and public policy. He has closely covered the rise of the private space industry and is the author of Rocket Billionaires: Elon Musk, Jeff Bezos and the New Space Race. Formerly, he was a senior reporter at Quartz, the global business news site, for more than a decade, and began his career as a political reporter in Washington, D.C.