
AI is capable of remarkable feats. And has the power to kill. Meet one woman warning about the dangers ahead

Source: Fortune
Business · April 8, 2026

The birth of ‘gunpowder warfare’ can be traced back to the 15th century and the invention of the matchlock gun, the first mechanical firing device. Now drone swarms attack across borders with impunity. In 1685, Giovanni Borelli, the Italian physicist, foresaw a world where machines driven by pulleys could ape the actions of animals. Elon Musk now talks of robots intelligent enough to do the shopping and take the place of surgeons.


Technological development is both immediate and anchored in history, both Everything Everywhere All at Once and Slow Horses. The fast/slow contrast is embedded in the artwork Calculating Empires, a 24-meter-long mural on display at the Design Museum in Barcelona. It visualizes the journey from the printing press to deepfakes, from quipu, an ancient Peruvian calculator made of knotted ropes, to 'planetary scale' data systems.

“What I find really interesting is, when people go into this installation, it helps you put this moment in perspective,” Kate Crawford told the Mobile World Congress in Barcelona in March. Crawford, artificial intelligence research professor at the University of Southern California, is the co-creator of the mural, which took four years to fabricate. Created with the visual artist Vladan Joler, the work urges us all to consider who is making the rules and deciding what matters when it comes to fundamental technology shifts.

“People feel like we’re living in this technological presentism and crazy amount of change,” Crawford said. “So, the ability to step back and say, ‘what have we learned over 500 years?’ [matters]. For me, [the mural] was a transformative project, because what was very clear is that history is not just about technical innovation. It’s about who has the power to set the rules that we will be living within.”

“This is why agentic AI is so important right now, because it’s a rapidly evolving field. The standards are not yet set, and it’s going to be people here, in rooms like this, at places like Mobile World Congress, who are going to have these conversations—what do we want those standards to look like, how do we implement them in our systems, and how do we protect ourselves and our clients?”

“Because this is the big moment to actually make sure that this is a technology that is profoundly useful and helpful and not one that opens up vulnerabilities and attack vectors and new attack surfaces and actually could be cognitively really quite dangerous as well.”


Mobile World Congress is a phenomenon. More than 100,000 delegates walk purposefully around eight cavernous halls, each packed with the technology of the future. Huge pavilions sponsored by Huawei and Google, Honor and Qualcomm, display remarkable new products linking our car to our phone, a robot to a disabled person, our glasses to the internet. Governments keen for influence and investment jostle for space with the companies that are hoping to win big in the artificial intelligence revolution.

MWC is also a place for debate. On large stages, the leading minds in the technology world have the conversations often lost among the flashing neon lights and interactive plasma screens. “Move fast and break things,” Mark Zuckerberg said in 2012. Today, the stakes are too high.

We are in a live discussion about the very meaning of intelligence. Demis Hassabis, the founder of DeepMind, has said artificial general intelligence could be with us in as little as five years. In that world, who, or what, will make decisions? Is it a question of human in the loop? Or is it human in the lead? Or no human needed at all? Mo Gawdat, the former chief business officer at Google, has spoken of the risks of “short-term dystopia” as governments, civil society, and regulators struggle to control the effects of machines that can learn and decide.

“What do we mean by intelligence?” Crawford asked. “The history of the term ‘intelligence’ is a troubled one. It’s been used to divide populations, to drive programs about who is valuable and who is not.”

“We’re trying to compare agents to human intelligence. They’re actually completely different. This [intelligence] is statistical probability at scale. These are systems that are following tasks in complex environments. This is very different from humans, but that means we need to have a different set of questions, which is: what are agents doing? How can we track that, and how can we better understand the way it’s going to change our own workflows and, much more importantly, how we live?”

> “The history of the term ‘intelligence’ is a troubled one…”

— Kate Crawford, artificial intelligence research professor at the University of Southern California

As the debate continues about the tensions between OpenAI, Anthropic, and the Department of War in America, Crawford asks what the red lines for agent use should be.