
AI can design viruses, toxins and other bioweapons. How worried should we be?

Source: Nature
Science · May 13, 2026


Illustration: Adrià Voltà

It’s hard to imagine that a snail could kill a person, but a particularly venomous group of marine molluscs called cone snails can. Their stings contain a cocktail of small proteins called conotoxins, some of which can block ion channels in the nervous system. No antivenom exists.

There are hundreds of thousands of conotoxin structures, and many are harmless to people or even medicinally useful: an approved treatment for chronic pain is derived from one, for instance. But research on specific dangerous conotoxins is highly restricted in some countries.

So, in 2024, when Chinese scientists reported developing an artificial-intelligence tool to design conotoxins¹, it raised eyebrows in some quarters. In an e-mail to a private AI and biotechnology discussion group seen by Nature, a senior US government employee flagged the study as a possible biosecurity risk. The employee, who asked not to be named because of concerns for their job, felt it was especially concerning that the conotoxin AI is based on an open-source protein language model developed by US scientists.

The textile cone snail (Conus textile), one of a number of venomous species of cone snail. Credit: Pascual Fernandez Gomez/iStock via Getty

One of the conotoxin study’s authors told Nature that the concern is unwarranted. The work was aimed squarely at discovering drugs, says Weiwei Xue, a computational chemist at Chongqing University in China and a co-author of the paper. Xue’s team has found some conotoxins with potential therapeutic qualities after testing designs in the laboratory, he says. Although it is important to consider the risk that the AI tool could be misused, it was not designed to make harmful proteins, he adds. What’s more, translating designs into physical molecules requires significant expertise and equipment. Other researchers also told Nature that the risks of the work seem minimal.

The episode, however, illustrates a growing concern over emerging AI tools in biology: although they are being developed to help produce innovative drugs and other societal benefits, they could also make it easier to create new threats. The revolution in biological AI tools, such as AlphaFold, has enabled scientists to design, at a keystroke, bespoke proteins and viruses that kill superbugs, and general-purpose chatbots can boost people’s knowledge of how to make these designs in a lab. Might the latest AIs also speed up the development of more-potent toxins, viruses or other bioweapons?

The biosecurity threat is serious, interviews with more than 20 scientists and policy researchers suggest. “Theoretically — and this is what keeps me up at night — one could now develop toxins on the level of ricin or other very deadly agents that would be virtually undetectable,” says Martin Pacesa, a structural biologist at the University of Zurich in Switzerland.

But there is debate over what to do about these risks: some are calling for limits on biological AI, whereas others are wary of negative impacts on research. “We’ve always made the assessment that the benefits to the world far outweigh the dangers,” says computational biophysicist David Baker at the University of Washington in Seattle, who shared a 2024 Nobel prize for his pioneering work on protein design. “But, as capabilities increase, I think that’s going to be an important question to keep considering.”

Some say the focus should be on detecting and countering AI bioweapon attacks, as opposed to trying to prevent them by imposing software restrictions. “That ship has sailed in my opinion,” says protein designer Timothy Jenkins at the Technical University of Denmark in Kongens Lyngby.

What’s the worst that could happen?

There are two broad concerns around AI and bioweapons, says James Black, an AI biosecurity researcher and visiting scholar at Johns Hopkins University in Baltimore, Maryland.

One is that individuals working in garage labs could use a chatbot to learn how to produce or deploy existing threats, such as anthrax. Another is that more-sophisticated actors, such as states or well-resourced terrorist groups, could combine chatbots with specialist biological software to design new bioweapons.

A nerve agent was used in an attack in the United Kingdom in 2018; here, officers rush to cover the site. Credit: Matt Cardy/Getty

The greatest potential threat to humanity could be AI-designed pandemic viruses, researchers say. The most plausible route would be to modify existing viruses, such as SARS-CoV-2 or influenza, to enhance worrying properties — their ability to evade the immune system, for instance. Existing AI tools that can predict viral evolution (intended for use in surveillance and vaccine design) could be misused in this way, says Doni Bloomfield, a law professor who studies biosecurity at Fordham University in New York City.

Alternatively, AI models might design entirely new path