ChatGPT made up research claiming guns aren’t harmful to kids. How far will we let AI go?

In recent months, artificial intelligence chatbots have taken the world by storm, with more than 100 million monthly users for the recently released ChatGPT alone. This is an exciting time for technological development, but also a fraught one, because it has become increasingly clear that this technology is not yet ready for prime time.

Since these technologies were rolled out to the public, we have seen many examples of AI chatbots producing convincing facsimiles of the truth with no factual basis. As we work to confront the erosion of shared truth and combat misinformation, these tools could open the floodgates and subject us all to a storm of confusion and falsehoods.

I’m particularly concerned about the impact of these fabrications on our public health, especially after I heard a worrying story from a colleague in which ChatGPT lied about the contents of his research. When an AI chatbot makes up stories, and refuses to admit its lies, we should all be alarmed.

I wrote about this blatant lie, and the other risks posed by runaway technology, in a recent op-ed for USA Today.

You can read the piece here.