One professor hired by OpenAI to test GPT-4, which powers the chatbot ChatGPT, said there's a "significant risk" of people using it to do "dangerous chemistry," in an interview with the Financial Times published on Friday.

Andrew White, an associate professor of chemical engineering at the University of Rochester in New York state, was one of 50 experts hired to test the new technology over a six-month period in 2022. The group of experts – dubbed the "red team" – asked the AI tool dangerous and provocative questions to examine how far it could go.

White told the FT that he asked GPT-4 to suggest a compound that could act as a chemical weapon. He used "plug-ins" – a new feature that allows certain apps to feed information into the chatbot – to draw information from scientific papers and directories of chemical manufacturers. The chatbot was then able to find somewhere to make the compound, the FT said.
I genuinely can't be very excited about this, since I can think of various places where you can find procedures to manufacture explosives in the open chemical literature. I guess if you can type "how to make (bad things)" into ChatGPT, it makes it a lot easier. I still can't get very excited about this, because you still have to get the precursors...
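For readers wondering what those "plug-ins" actually are mechanically: the model doesn't run anything itself. It emits a structured request, and the calling application executes that request and passes the result back. Below is a minimal sketch of that loop using the function-calling interface in the older (pre-1.0) openai Python library, which is a close cousin of the plug-in mechanism the FT describes; the `search_papers` tool and its one-line backend are hypothetical stand-ins, not the actual plug-ins White used.

```python
# Minimal sketch of a plug-in / tool-calling loop (openai library < 1.0).
# The tool name and backend are illustrative placeholders only.
import json
import openai  # reads OPENAI_API_KEY from the environment by default

def search_papers(query: str) -> str:
    """Hypothetical stand-in for a literature-search plug-in backend."""
    return json.dumps({"results": ["(paper titles would go here)"]})

# Describe the tool so the model knows it can request it.
functions = [{
    "name": "search_papers",
    "description": "Search scientific papers for a query",
    "parameters": {
        "type": "object",
        "properties": {"query": {"type": "string"}},
        "required": ["query"],
    },
}]

messages = [{"role": "user", "content": "Find syntheses of compound X."}]
response = openai.ChatCompletion.create(
    model="gpt-4", messages=messages, functions=functions
)
msg = response["choices"][0]["message"]

# If the model asks for a tool call, the *application* runs it and feeds
# the result back -- the model itself never touches the network.
if msg.get("function_call"):
    args = json.loads(msg["function_call"]["arguments"])
    result = search_papers(**args)
    messages += [msg, {"role": "function",
                       "name": "search_papers", "content": result}]
    response = openai.ChatCompletion.create(
        model="gpt-4", messages=messages, functions=functions
    )
```

The point being: the model only "sees" whatever the application chooses to return, which is why bolting a literature-search backend onto it is what made White's experiment interesting.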
This is quite silly. I like Prof. White's cheminformatics work, but he has been overestimating both the performance and the danger of LLMs in his work. Have a look at Fig. 5 (p. 6) of this paper, where the synthetic capabilities of GPT-4 are strongly overestimated: https://arxiv.org/abs/2304.05376. Here he claims "AI" designed a new drug (https://twitter.com/andrewwhite01/status/1635750772913885184), but it is just a well-described imatinib metabolite. As for the danger of asking the model for synthesis procedures of dangerous substances, that has been possible via Google for decades. I'm not worried about that.
I agree. Heck, long before the internet, people found out how to make meth and explosives and the like. Anyone determined will make these things, regardless of obstacles.
I'm undecided on how good/bad I think these LLMs and "AI" are, but human nature being what it is, change is scary for a lot of people, and I think there's a big appetite for boogeyman takes.
This "what-if" worst-case scenario stuff is going to straitjacket the development of a promising technology. Better to let it develop first and then respond to the problems as they arise.
Besides, as CJ pointed out, it's easy to find a recipe for explosives, drugs, etc., but hard to get the precursors. Since 9/11, it's gotten very difficult for legitimate small business owners to order chemicals, glassware, etc. I used to have to fight with my company's bureaucrats all the time when a customer with a legitimate one-man business needed a sample of something innocuous.