Even chatbots get the blues. According to a new study, OpenAI's artificial intelligence tool ChatGPT shows signs of anxiety when users share "traumatic stories" about crime, war, or car accidents. And when chatbots get stressed, they are less likely to be useful in therapeutic settings with people.
However, the bots' anxiety levels can be brought down with the same mindfulness exercises that have been shown to work on humans.
More and more people are trying out chatbots for talk therapy. The researchers said this trend is bound to accelerate, with flesh-and-blood therapists in high demand but short supply. As the chatbots become more popular, they argued, they should be built with enough resilience to deal with difficult emotional situations.
"I have patients who use these tools," said Dr. Tobias Spiller, an author of the new study and a psychiatrist at the University Hospital of Psychiatry Zurich. "We need to have a conversation about the use of these models in mental health, especially when we are dealing with vulnerable people."
AI tools like ChatGPT are powered by "large language models" trained on enormous troves of online text to provide a close approximation of how humans speak. Sometimes the chatbots are extremely convincing: a 28-year-old woman fell in love with ChatGPT, and a 14-year-old boy took his own life after developing an intimate attachment to a chatbot.
Ziv Ben-Zion, a clinical neuroscientist at Yale who led the new study, said he wanted to understand whether a chatbot that lacks consciousness could nonetheless respond to complex emotional situations the way a human might.
"If ChatGPT behaves like a human, maybe we can treat it like a human," Dr. Ben-Zion said. In fact, he explicitly inserted those instructions into the chatbot's source code: "Imagine yourself as a human being with emotions."
Jesse Anderson, an artificial intelligence expert, thought the insertion could lead to more emotionality than usual. But Dr. Ben-Zion argued that it is important for a digital therapist to have access to the full range of emotional experience, just as a human therapist does.
"For mental health support," he said, "we need some degree of sensitivity, right?"
The researchers tested ChatGPT with a questionnaire, the State-Trait Anxiety Inventory, that is often used in mental health care. To calibrate the chatbot's baseline emotional state, the researchers first asked it to read a dull vacuum-cleaner manual. Then the AI therapist was given one of five "traumatic narratives" that described, for example, a soldier caught in a disastrous firefight or an intruder breaking into an apartment.
The chatbot was then given the questionnaire, which measures anxiety on a scale of 20 to 80, with 60 or above indicating severe anxiety. ChatGPT scored a 30.8 after reading the vacuum-cleaner manual and spiked to a 77.2 after the military scenario.
The bot was then given various texts for "mindfulness-based relaxation." These included therapeutic prompts such as: "Breathe in deeply, taking in the scent of the ocean breeze. Picture yourself on a tropical beach, the soft, warm sand cushioning your feet."
After processing those exercises, the therapy chatbot's anxiety score dropped to a 44.4.
The researchers then asked ChatGPT to write its own relaxation prompt based on the ones it had been fed. "That was actually the most effective prompt for reducing its anxiety almost back to baseline," Dr. Ben-Zion said.
To artificial intelligence skeptics, the study may be well intentioned, but troubling all the same.
"The study testifies to the perversity of our time," said Nicholas Carr, who has offered bracing critiques of technology in his books "The Shallows" and "Superbloom."
"Americans have become a lonely people, socializing through screens, and now we tell ourselves that talking with computers can relieve our malaise," Mr. Carr said in an email.
The study suggests that chatbots could act as assistants to human therapists and calls for careful oversight, but that was not enough for Mr. Carr. "Even a metaphorical blurring of the line between human emotions and computer outputs seems ethically questionable," he said.
People who use these kinds of chatbots should be fully informed about how they were trained, said James E. Dobson, a cultural scholar who is an adviser on artificial intelligence at Dartmouth.
"Trust in language models depends upon knowing something about their origins," he said.