Could an AI chatbot talk you out of believing a conspiracy theory?
If you watched the presidential debate this week, you probably heard plenty of misinformation and conspiracy theories.
Indeed, reporters and fact-checkers were working overtime to determine whether Haitian immigrants in Ohio were eating domestic pets, as grotesquely alleged by Republican presidential contender Donald Trump and his vice presidential running mate, Ohio Senator J.D. Vance. Neither has produced evidence for the claim, and local officials say it’s untrue. Still, the false allegation is all over the internet.
Experts have long worried about how rapidly conspiracy theories can spread, and some research suggests that people can’t be persuaded by facts that contradict those beliefs.
But a new study published today in Science offers hope that many people can and will abandon conspiracy theories under the right circumstances.
In this case, researchers tested whether conversations with a chatbot powered by generative artificial intelligence could successfully engage people who believed popular conspiracy theories, such as the claims that the Sept. 11 attacks were orchestrated by the American government and that the COVID-19 virus was a man-made attempt by “global elites” to “control the masses.”
The study’s 2,190 participants had tailored back-and-forth conversations about a single conspiracy theory of their choice with OpenAI’s GPT-4 Turbo. The model had been trained on a large amount of data from the internet and licensed sources.
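The study’s own code isn’t reproduced here, but the basic setup it describes, a multi-turn chat in which GPT-4 Turbo politely counters a stated belief with evidence, can be sketched in a few lines with OpenAI’s standard chat API. The loop and system prompt below are illustrative assumptions, not the researchers’ actual implementation.

```python
# Minimal sketch (NOT the study's code) of a debunking dialogue loop with
# GPT-4 Turbo. Assumes the official OpenAI Python SDK is installed and an
# OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

# Hypothetical system prompt; the study's real prompts differ.
messages = [
    {
        "role": "system",
        "content": (
            "You are a respectful assistant. The user believes a conspiracy "
            "theory. Acknowledge their concerns, then counter their specific "
            "claims with accurate, verifiable evidence."
        ),
    },
    {
        "role": "user",
        "content": "The Twin Towers couldn't have collapsed without planted explosives.",
    },
]

while True:
    # Send the full conversation so far and print the model's reply.
    response = client.chat.completions.create(model="gpt-4-turbo", messages=messages)
    answer = response.choices[0].message.content
    print(f"Chatbot: {answer}\n")
    messages.append({"role": "assistant", "content": answer})

    # Let the participant push back, continuing the back-and-forth.
    user_input = input("You: ")
    if user_input.strip().lower() in {"quit", "exit"}:
        break
    messages.append({"role": "user", "content": user_input})
```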
After the discussions, the researchers found that participants’ belief in their chosen conspiracy theory fell by 20 percent on average. What’s more, about a quarter of participants disavowed the conspiracy theory they’d discussed altogether. The decrease persisted two months after their interaction with the chatbot.
David Rand, a co-author of the study, said the findings indicate people’s minds can be changed with facts, despite pessimism about that prospect.
“Evidence isn’t dead,” Rand told Mashable. “Facts and evidence do matter to a substantial degree to a lot of people.”
Rand, who is a professor of management science and brain and cognitive sciences at MIT, and his co-authors didn’t test whether the study participants were more likely to change their minds after talking to a chatbot versus someone they know in real life, like a best friend or sibling. But they suspect the chatbot’s success has to do with how quickly it can marshal accurate facts and evidence in response.
In a sample conversation included in the study, a participant who believes the Sept. 11 attacks were staged receives a detailed scientific explanation from the chatbot of how the Twin Towers collapsed without the aid of explosive detonations, along with rebuttals to other related conspiracy claims. At the outset, the participant was 100 percent confident in the conspiracy theory; by the end, their confidence had dropped to 40 percent.
Anyone who’s ever tried to discuss a conspiracy theory with someone who believes it may have experienced rapid-fire exchanges filled with what Rand described as “weird esoteric facts and links” that are incredibly difficult to disprove. A generative AI chatbot, however, doesn’t have that problem, because it can instantly respond with fact-based information.
Nor is an AI chatbot hampered by personal relationship dynamics, such as a long-running sibling rivalry or a dysfunctional friendship that shapes how a conspiracy theorist views the person offering counterevidence. In general, the chatbot was trained to be polite to participants, building rapport with them by validating their curiosity or confusion.
The researchers also asked participants about their trust in artificial intelligence. They found that the more a participant trusted AI, the more likely they were to reduce their belief in the conspiracy theory in response to the conversation. But even those skeptical of AI were capable of changing their minds.
Importantly, the researchers hired a professional fact-checker to evaluate the claims made by the chatbot and ensure it wasn’t sharing false information or making things up. The fact-checker rated nearly all of the chatbot’s claims as true and none as false.
For now, people who are curious about the researchers’ work can try it for themselves using the team’s DebunkBot, which lets users test their beliefs against an AI.
Rand and his co-authors imagine a future in which a chatbot might be connected to social media accounts as a way to counter conspiracy theories circulating on a platform. Or people might find a chatbot when they search online for information about viral rumors or hoaxes thanks to keyword ads tied to certain conspiracy search terms.
Rand said the study’s success, which he and his co-authors have replicated, offers an example of how AI can be used for good.
Still, he’s not naive about the potential for bad actors to use the technology to build a chatbot that confirms certain conspiracy theories. Imagine, for example, a chatbot that’s been trained on social media posts that contain false claims.
“It remains to be seen, essentially, how all of this shakes out,” Rand said. “If people are mostly using these foundation models from companies that are putting a lot of effort into really trying to make them accurate, we have a reasonable shot at this becoming a tool that’s widely useful and trusted.”