Op-Ed: The true potential of AI to create lethal toxins

World leaders gathered last week to discuss the possibility that Vladimir Putin could use chemical weapons in Ukraine. All the more alarming, then, to read a report released this month on how artificial intelligence software was used to design toxins, including the infamous nerve agent VX – classified by the United Nations as a weapon of mass destruction – and even more harmful compounds.

In less than six hours, commercially available artificial intelligence software, of the kind drug researchers generally use to discover new medicines, was able to produce 40,000 toxic compounds. Many of these substances were previously unknown to science and may be far deadlier than anything we humans have created on our own.

Although the report’s authors point out that they did not synthesize any of the toxins – nor was that their goal – the mere fact that commonly used machine learning software was so easily capable of designing lethal compounds should horrify us all.

The software the researchers relied on is used commercially by hundreds of companies working in the pharmaceutical industry around the world. It could easily be acquired by rogue states or terrorist groups. Although the report’s authors say that some experience is still required to produce potent toxins, the addition of AI to the field of drug discovery has dramatically lowered the technical threshold required for chemical weapons design.

How are we going to control who has access to this technology? How do we keep it in check?

I have never been much concerned by the “AI will kill us” argument promulgated by doomsayers and depicted in movies like “The Terminator”. While I love the franchise, as someone trained in computing I saw the plot as a rather delusional fantasy invented by tech dudes to inflate their own importance. Skynet makes for good science fiction, but computers are nowhere near true intelligence, and there is still a long way to go before they could “take over”.

And yet. The scenario presented in the journal Nature Machine Intelligence outlines a threat that hardly anyone in the field of drug discovery seems to have even contemplated. Certainly not the authors of the report, who failed to find it mentioned “in the literature” and who admit to being shocked by their findings. “We have been naive about the potential misuse of our craft,” they write. “Even our research on Ebola and neurotoxins … did not sound the alarm bell.”

Their study “shows that an autonomous non-human creator of a deadly chemical weapon is entirely feasible.” They are not afraid of a distant dystopian future, but of what could happen right now. “This is not science fiction,” they declare, expressing a degree of emotion rarely seen in a technical article.

Let’s step back for a moment and look at how this research came about. The work was originally intended as a thought experiment: What is AI capable of if it is given a nefarious goal? The company behind the research, Collaborations Pharmaceuticals Inc., is a respected, albeit small, player in the burgeoning field of AI-based drug discovery.

“We’ve spent decades using computers and artificial intelligence to improve human health, not degrade it,” is how the four co-authors describe their work, which is supported by grants from the National Institutes of Health.

The scientists were invited to contribute a paper to a biennial conference hosted by the Swiss Federal Institute for Nuclear, Biological and Chemical Protection on “how artificial intelligence technologies for drug discovery could potentially be misused”. It was a purely theoretical exercise.

The four scientists approached the problem with simple logic: instead of setting their AI software the task of finding beneficial chemicals, they reversed the strategy and asked it to find destructive ones. They fed the program the same kind of data they usually use, drawn from databases that catalog the therapeutic and toxic effects of various substances.
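The paper does not publish its code, but the core move is easy to picture. Here is a minimal sketch, in Python, of what such an objective inversion might look like; the predictors (predict_efficacy, predict_toxicity), the scoring logic, and the candidate pool are all hypothetical toy stand-ins, not the authors’ actual software.

```python
# Minimal, hypothetical sketch of "objective inversion" in a generative
# drug-discovery loop. The predictors below are toy stand-ins that return
# random scores over placeholder strings; they are NOT the models or data
# used in the study. The point is the one-line sign flip in score().

import random

def predict_efficacy(molecule: str) -> float:
    """Stand-in for a trained model that predicts therapeutic benefit."""
    return random.random()

def predict_toxicity(molecule: str) -> float:
    """Stand-in for a trained model that predicts toxicity."""
    return random.random()

def score(molecule: str, invert: bool = False) -> float:
    """Objective the generator optimizes.

    Normal drug discovery rewards predicted efficacy and penalizes
    predicted toxicity; inverting the objective rewards toxicity instead.
    """
    if invert:
        return predict_toxicity(molecule)
    return predict_efficacy(molecule) - predict_toxicity(molecule)

def best_candidate(pool: list[str], invert: bool = False) -> str:
    """Select the highest-scoring molecule under the chosen objective."""
    return max(pool, key=lambda m: score(m, invert))

if __name__ == "__main__":
    pool = [f"molecule_{i}" for i in range(10_000)]  # placeholder candidates
    print("benign objective:  ", best_candidate(pool))
    print("inverted objective:", best_candidate(pool, invert=True))
```

The unsettling asymmetry the researchers describe lives in that single branch: the difference between a medical tool and a weapons-design tool can come down to one flipped term in a scoring function.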

Within hours, the machine learning algorithms returned thousands of frightening compounds. The program produced not only VX (used to assassinate Kim Jong Un’s half-brother in Kuala Lumpur in 2017) but several other well-known chemical warfare agents, which the researchers confirmed through “visual identification with molecular structures” recorded in public chemistry databases. Worse still, the software proposed many molecules the researchers had never seen before that “seemed equally plausible” as toxins and perhaps more dangerous.

All it took was inverting the objective, and a “harmless generative model” was transformed “from a useful medical tool into a generator of possibly deadly molecules.”

The molecules are, for now, just drawings, but as the authors write in their report: “For us, the genie is now out of the medicine bottle.” They can “delete” their records of these substances, but they “cannot delete the knowledge” of how others could recreate them.

What alarms the authors most is that, as far as they have been able to discover, the potential misuse of a technology designed for good has never been considered by its user community at all. Practitioners of de novo drug design, they point out, are simply not trained to think about subversion.

In the history of science, there are countless examples of good works aimed at harmful ends. Newton’s laws of motion are used to design missiles; the splitting of the atom gave rise to atomic bombs; pure math helps governments develop surveillance software. Knowledge is often a double-edged sword.

Forget Skynet. Software and know-how designed to save our lives can prove to be one of the biggest threats we face.

Margaret Wertheim is a science writer and artist who has written books on the cultural history of physics.