AI’s ability to see ‘mirages’ shows how alien machine brains really are
Hello and welcome to Eye on AI. In this edition…Anthropic suffers multiple sensitive data leaks…OpenAI ditches Sora, and loses its deal with Disney…Mistral raises money for AI data center drive…AI could reduce political polarization…and why countries that are late to adopt AI could be in even worse economic shape than you think.
The big news this week was my colleague Beatrice Nolan’s scoop from Friday that Anthropic has trained a new AI model, called “Mythos” (Capybara seems to be the internal code name for the same model), that the company says represents a “step change” in capabilities. Anthropic is particularly worried about the cybersecurity risks the model poses. Ironically, we found out about this new model because Anthropic inadvertently spilled the beans by leaving a draft blog post about it in an unsecured and publicly searchable database—along with other potentially sensitive material, including documents about an upcoming CEO retreat and internal files that mentioned employees’ paternity leave.
Now, just today, it appears Anthropic has suffered another major security lapse, accidentally leaking the code of the agentic harness that sits around Claude Code. Bea has more on this latest, and potentially more consequential, data leak here. Meanwhile, Axios reports that the new cybersecurity capabilities of AI models are getting so concerning that Anthropic and OpenAI have both recently told the government about the new dangers of the models they are developing and provided government security experts with early access.
Intern, expert, or dog?
Ok, now, if you own a dog, as I do, there will be moments when you recognize that we fundamentally don’t understand how dogs perceive the world.
This week, while walking my dog, I spied a strikingly beautiful cat with an unusual coat. It looked like an orange tabby mixed with a gray tabby, with a good deal of white fur thrown into the mix too. I noticed the cat right away, but it was moving across a yard that was elevated from sidewalk level, so my dog couldn’t see it. She could definitely smell it, however. She put her nose in the air and tugged at her leash, pulling her way up the steps that led to the yard.
By the time she got to the top step, the cat had mostly hidden itself behind a nearby flower pot. It stood behind the pot motionless, but with its white head popping above the pot’s edge. It stared intently at my dog and me. I could see the cat quite clearly. But, despite being just 15 feet away, my dog could not. She sniffed the air intently and pivoted first left and then right, but she could not see the cat, even when seemingly looking directly at it.
Eventually, I persuaded my dog to give up her hunt for the unseen, but well-smelled, cat and continue our walk. But I couldn’t stop thinking about our differences in perception, and how this applies to AI. People often offer executives advice on how to think about using AI by making analogies to our relationships with various categories of people. Treat AI agents like talented interns, was a popular one a few years ago, in the months following ChatGPT’s debut. A graduate student who is occasionally off their meds, was a colorful variant that Emad Mostaque, the cofounder and former CEO of Stability AI, liked to use. You should treat AI like PhD-level researchers, was an analogy in vogue last year. (OpenAI CEO Sam Altman was among those talking about this idea.) More recently, people have started saying it is better to regard AI models as wise and experienced, but occasionally still fallible, colleagues. Certainly, their performance on tough benchmarks of professional tasks, such as OpenAI’s GDPval, would lead one to endorse that idea. Middle managers is another analogy that comes up often.
But the more we learn about the large language models that underpin today’s AI agents, the clearer it becomes how inadequate all these analogies are. LLMs are nothing like people at all. They are far more like another species, like your dog. We can no more understand what these LLMs perceive and how they reach their outputs than we can truly understand the thoughts of our pets.
Actually, it’s worse than this, because unlike with our pets, you can ask an LLM to explain what it’s thinking and it will tell you. That sounds like a great thing, much better than the situation with our non-verbal dogs, cats, and turtles. The problem: researchers have begun probing the activations of the artificial neurons in AI’s digital brains, and these experiments indicate that what an AI model tells you it is thinking—the model’s so-called “reasoning traces”—may or may not reflect what it is actually thinking.
So interacting with an LLM is probably the closest thing we’ve had so far to interacting with an alien, one that has some capabilities that far exceed our own, but also has glaring weaknesses, and which can, at times, be just like us—deceptive, dishonest, or dissembling.
Multimodal mode