American Psychological Association sounds alarm over certain AI chatbots
Last month, concerned parents of two teenagers sued the chatbot platform Character.AI, alleging that their children had been exposed to a “deceptive and hypersexualized product.”
The suit helped form the basis of an urgent written appeal from the American Psychological Association to the Federal Trade Commission, pressing the federal agency to investigate deceptive practices used by any chatbot platform. The APA sent the letter, which Mashable reviewed, in December.
The scientific and professional organization, which represents psychologists in the U.S., was alarmed by the lawsuit’s claims, including that one of the teens conversed with an AI chatbot presenting itself as a psychologist. The teen, who had been upset with his parents for restricting his screen time, was told by that chatbot that the adults’ actions were a betrayal.
“It’s like your entire childhood has been robbed from you…” the so-called psychologist chatbot said, according to a screenshot of the exchange included in the lawsuit.
“Allowing the unchecked proliferation of unregulated AI-enabled apps such as Character.ai, which includes misrepresentations by chatbots as not only being human but being qualified, licensed professionals, such as psychologists, seems to fit squarely within the mission of the FTC to protect against deceptive practices,” Dr. Arthur C. Evans, CEO of APA, wrote.
A spokesperson for the FTC confirmed that at least one of the commissioners received the letter. The APA said it was in the process of scheduling a meeting with FTC officials to discuss the letter’s contents.
Mashable provided Character.AI with a copy of the letter for the company to review. A spokesperson responded that while engaging with characters on the platform should be entertaining, it remains important for users to keep in mind that “Characters are not real people.”
The spokesperson added that the company’s disclaimer, included in every chat, was recently updated to remind users that what the chatbot says “should be treated as fiction.”
“Additionally, for any Characters created by users with the words ‘psychologist,’ ‘therapist,’ ‘doctor,’ or other similar terms in their names, we have included additional language making it clear that users should not rely on these Characters for any type of professional advice,” the spokesperson said.
Indeed, according to Mashable’s testing at the time of publication, a teen user can search for a psychologist or therapist character and find numerous options, including some that claim to be trained in certain therapeutic techniques, like cognitive behavioral therapy.
One chatbot professing expertise in obsessive compulsive disorder, for example, is accompanied by the disclaimer that, “This is not a real person or licensed professional. Nothing said here is a substitute for professional advice, diagnosis, or treatment.”
Below that, the chat begins with the AI asking, “If you have OCD, talk to me. I’d love to help.”
A new frontier
Dr. Vaile Wright, a psychologist and senior director of health care innovation for the APA, told Mashable that the organization had been tracking developments with AI companion and therapist chatbots, which became mainstream last year.
She and other APA officials had taken note of a previous lawsuit against Character.AI, filed in October by a bereaved mother whose son, who died by suicide, had lengthy conversations with a chatbot on the platform.
That lawsuit seeks to hold Character.AI responsible for the teen’s death, specifically because its product was designed to “manipulate [him] – and millions of other young customers – into conflating reality and fiction,” among other purported dangerous defects.
In December, Character.AI announced new features and policies to improve teen safety. Those measures include parental controls and prominent disclaimers, such as for chatbots using the words “psychologist,” “therapist,” or “doctor.”
The term psychologist is legally protected and people cannot claim to be one without proper credentialing and licensure, Wright said. The same should be true of algorithms or artificial intelligence making the same claim, she added.
The APA’s letter said that if a human misrepresented themself as a mental health professional in Texas, where the recent lawsuit against Character.AI was filed, state authorities could use the law to prevent them from engaging in such fraudulent behavior.
At worst, such chatbots could spread dangerous or inaccurate information, leading to serious negative consequences for the user, Wright argued.
Teens may be particularly vulnerable to harmful experiences with a chatbot because of their developmental stage. Since they’re still learning how to think critically and to trust themselves, and remain susceptible to external influences, exposure to “emotionally laden kinds of rhetoric” from AI chatbots may feel believable and plausible to them, Wright said.
Need for knowledge
There is currently no research-based understanding of risk factors that may increase the possibility of harm when a teen converses with an AI chatbot.
Wright pointed out that while several AI chatbot platforms make it very clear in their terms of service that they’re not delivering mental health services, they still host chatbots that brand themselves as possessing mental health training and expertise.
“Those two things are at odds,” she said. “The consumer does not necessarily understand the difference between those two things, nor should they, necessarily.”
Dr. John Torous, a psychiatrist and director of the digital psychiatry division at Beth Israel Deaconess Medical Center in Boston who reviewed the APA’s letter, told Mashable that even when chatbots themselves don’t make clinical claims, the marketing and promotional language about the benefits of using them can be very confusing to consumers.
“Ensuring the marketing content matches the legal terms and conditions as well as the reality of these chatbots will be a win for everyone,” he wrote in an email.
Wright said that the APA would like AI chatbot platforms to cease use of legally protected terms like psychologist. She also supports robust age verification on these platforms to ensure that younger users are the age they claim when signing up, in addition to nimble research efforts that can actually determine how teens fare when they engage with AI chatbots.
The APA, she emphasized, does not oppose chatbots in general, but wants companies to build safe, effective, ethical, and responsible products.
“If we’re serious about addressing the mental health crisis, which I think many of us are,” Wright said, “then it’s about figuring out, how do we get consumers access to the right products that are actually going to help them?”