Eating Disorder Hotline Chatbot Shut Down Over ‘Harmful’ Advice It Offered After Replacing Workers

The latest bot controversy centers on "harmful and unrelated" advice given to people seeking eating disorder assistance.

Image via Getty/Tero Vesalainen
A chatbot reported to have replaced workers at an eating disorder helpline has been shut down, at least for now, after it was found to have given "harmful and unrelated" advice.

In a statement issued this week, the National Eating Disorders Association (NEDA) announced that the bot—referred to as “Tessa”—is at the center of an investigation.

“It came to our attention last night that the current version of the Tessa Chatbot, running the Body Positive program, may have given information that was harmful and unrelated to the program,” a spokesperson said. “We are investigating this immediately and have taken down that program until further notice for a complete investigation. Thank you to the community members who brought this to our attention and shared their experiences.”

In May, the chatbot was the subject of numerous headlines amid word that it would be taking on a greater load starting June 1. Per a recent NPR piece from Kate Wells, helpline workers were informed around that time that they were being fired, news that is reported to have come not long after staff members informed NEDA of their unionization.

In a separate Vice report from Chloe Xiang in May, Dr. Ellen Fitzsimmons-Craft—a Washington University professor whose team created Tessa after being hired by NEDA—said that the chatbot was built using "decades of research." Fitzsimmons-Craft, who was also cited in the original NPR piece, addressed ensuing coverage of the bot on Twitter by pointing out that Tessa was "never intended to be a 1:1 replacement for the helpline." She added that Tessa is a rule-based chatbot, i.e., "not an AI chatbot."

In a tweeted statement on Thursday, the NEDA Helpline Associates Union said the "alarming failure" of the chatbot "serves as further validation that perhaps human empathy is best left to humanity." In a prior statement, union members said they "now and forever condemn" NEDA's "irresponsible and dangerous decision."

And while Tessa was reportedly not designed as an AI chatbot and instead operates in a rule-based capacity, this story speaks to the same concerns many have raised amid ongoing criticism of full-fledged AI tools such as ChatGPT.

For example, a lawyer recently made headlines for saying he now "greatly regrets" using ChatGPT while working for a client who sued an airline. As first reported by the New York Times last month, the lawyer's decision to go the AI route "in order to supplement the legal research" in the case resulted in at least six nonexistent cases being cited in a brief.
