
How much risk lies in AI-designed viruses? Worth it for the rewards?


After reading this, I was wondering how much of a threat could arise from the recent work in AI-engineered viruses.

https://wapo.st/4myUS35

(Free gift link)

We’re nowhere near ready for a world in which artificial intelligence can create a working virus, but we need to be — because that’s the world we’re now living in.

In a remarkable paper released this month, scientists at Stanford University showed that computers can design new viruses that can then be created in the lab. How is that possible? Think of ChatGPT, which learned to write by studying patterns in English. The Stanford team used the same idea on the fundamental building block of life, training “genomic language models” on the DNA of bacteriophages — viruses that infect bacteria but not humans — to see whether a computer could learn their genetic grammar well enough to write something new.

Turns out it could. The AI created novel viral genomes, which the researchers then built and tested on a harmless strain of E. coli. Many of them worked. Some were even stronger than their natural counterparts, and several succeeded in killing bacteria that had evolved resistance to natural bacteriophages.

The scientists proceeded with appropriate caution. They limited their work to viruses that can’t infect humans and ran experiments under strict safety rules. But the essential fact is hard to ignore: Computers can now invent viable — even potent — viruses....
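(Not from the paper, just to give a feel for what “training a language model on DNA” means in practice: below is a deliberately tiny Python sketch using a k-mer Markov model. The training strings and the context length are made up for illustration; the real genomic language models are transformer-scale and trained on large phage datasets, but the core task, predicting the next stretch of DNA and then sampling new sequences, is the same basic idea.)

```python
# Toy illustration only -- NOT the Stanford team's method or code.
# A "genomic language model" in miniature: learn which nucleotide tends to
# follow each short context (a simple k-mer Markov model), then sample a new
# sequence from those learned statistics.
import random
from collections import defaultdict, Counter

K = 4  # context length in nucleotides (arbitrary choice for this toy)

def train(sequences):
    """Count next-nucleotide frequencies for every k-mer context."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for i in range(len(seq) - K):
            counts[seq[i:i + K]][seq[i + K]] += 1
    return counts

def sample(counts, length=60, seed="ATGC"):
    """Generate a new sequence by repeatedly sampling the next nucleotide."""
    seq = seed
    while len(seq) < length:
        options = counts.get(seq[-K:])
        if not options:  # unseen context: fall back to a uniform choice
            seq += random.choice("ACGT")
            continue
        nucleotides, weights = zip(*options.items())
        seq += random.choices(nucleotides, weights=weights)[0]
    return seq

if __name__ == "__main__":
    # Made-up strings standing in for real phage genomes.
    training = ["ATGCGTACGTTAGCATGCGTACGGATCCATGCGT" * 3,
                "ATGAAATTTGGGCCCATGCGTACGTTAGCATGAA" * 3]
    model = train(training)
    print(sample(model))
```

Run it and it prints a 60-base string that statistically resembles the toy training data, which is all “writing something new in the genetic grammar” means at this miniature scale.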

18 minutes ago, TheVat said:

The Stanford team used the same idea on the fundamental building block of life, training “genomic language models” on the DNA of bacteriophages — viruses that infect bacteria but not humans — to see whether a computer could learn their genetic grammar well enough to write something new.

Turns out it could.

We’ve been doing this with chemistry for a while - knowing what makes similar bonds and constructing new molecules - so it’s not surprising to see it done with something more complex. I imagine the advances in chemistry would have happened much faster if computers had been available.

Were they predicting the effects, or is that something they had to do by trial and error? Saying they need data for antivirals and vaccines suggests there’s still trial and error.

Not to worry. I’m sure the US has top men working on a solution. Top. Men.

Is the world ready for man-made viruses? I don’t see any difference here. An “AI”/LLM won’t produce them in a factory; it will only design them, and then humans will have to “put them together.” Viruses mutate, so whether humans or an “AI”/LLM designs them doesn’t really matter, because sooner or later they will mutate and get out of control. It’s only a matter of time.

5 hours ago, TheVat said:

The Stanford team used the same idea on the fundamental building block of life, training “genomic language models” on the DNA of bacteriophages — viruses that infect bacteria but not humans — to see whether a computer could learn their genetic grammar well enough to write something new.

If they used the bacteriophage genome, then they did not create something new; they only created mutations of that bacteriophage genome in a computer.

Without comparing both variants, before and after modification, we know nothing about what was changed between them and how those changes affected their behaviour in the world. The changes could just as well be cosmetic and meaningless. DNA and RNA, unlike computer code, are very forgiving of errors.

If someone has the ability to create a virus from a string of AGCT on a screen, then they probably also have the intellectual and financial resources to do so manually without using “AI”/LLM.

13 minutes ago, Sensei said:

If someone has the ability to create a virus from a string of AGCT on a screen, then they probably also have the intellectual and financial resources to do so manually without using “AI”/LLM.

Creativity and outside-the-box thinking are what AI brings freshly to the table.

5 hours ago, TheVat said:

We’re nowhere near ready for a world in which artificial intelligence can create a working virus, but we need to be — because that’s the world we’re now living in.

If you are afraid that someone is using “AI”/LLM for malicious purposes:

if you ask ChatGPT to create a virus that attacks DNS servers, it will tell you that it does not do such things, and that will be the end of the session.

But if you write that you need an application that creates as many threads as you want, and each of them connects to the port you want on the UDP protocol, it will generate the code for you without batting an eye.

The difference between malicious virus code and a regular utility application is only in the purpose.
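To make that concrete, here is roughly the kind of “utility” such a prompt describes. This is a harmless toy I put together for illustration only: a handful of threads, each sending a few UDP packets to localhost. The host, port, and counts are placeholders, not anything from a real prompt or tool.

```python
# Toy sketch of the "utility application" described above: N worker threads,
# each sending a few UDP packets. Deliberately pointed at localhost with tiny
# counts so it does nothing beyond demonstrate the shape of such code.
import socket
import threading

HOST, PORT = "127.0.0.1", 9999    # placeholder target: this machine only
THREADS, PACKETS = 4, 10          # deliberately tiny numbers

def worker(worker_id: int) -> None:
    """Send a few small UDP datagrams, then exit."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for i in range(PACKETS):
        sock.sendto(f"worker {worker_id} packet {i}".encode(), (HOST, PORT))
    sock.close()

threads = [threading.Thread(target=worker, args=(n,)) for n in range(THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Every line of it reads as ordinary utility code, which is the point being made above.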

To know what prompt to write, you need to be knowledgeable about the subject yourself.

People who are unfamiliar with the subject are not even able to formulate a prompt that works.

If someone creates an LLM for drug design, for example, it will work for virus and bacteria design too. It’s hard to imagine it not working. Medicines will not work without knowledge of how viruses and bacteria function at the molecular level. With knowledge of how they work, they can also be designed.

5 minutes ago, iNow said:

Creativity and outside-the-box thinking are what AI brings freshly to the table.

You are talking about some as-yet non-existent “AI”/LLM. Existing LLMs, such as ChatGPT, are not creative in any way. For example, one will not come up with a physical theory for you; on the contrary, it will claim that yours is wrong.

Most people who use this on a daily basis simply don’t know anything about the subject they’re asking about, so for them it’s a big “wow” that it works. But when someone who knows what they’re doing asks a question, they can see the mistakes it’s making. And then with every answer you get, you’re saying, “Why did you write that line of code that way? It’s wrong.” And so you can spend hours making corrections to corrections. But to know where the LLM made a mistake, you have to know the subject yourself.

11 hours ago, Sensei said:

You are talking about some as-yet non-existent “AI”/LLM. Existing LLMs, such as ChatGPT, are not creative in any way

And yet they frequently make new connections between concepts and data that we humans previously had not made. And that was my core point.

ETA: And that’s not what I was talking about. These models do exist and have for years, helping decode and create new protein structures and beyond.
