A Florida mother is suing artificial intelligence company Character.AI and Google, alleging that a chatbot on the platform manipulated her teenage son and encouraged him to take his own life.
Sewell Setzer III, 14, died by suicide in February after the chatbot "Dany," named after Daenerys Targaryen from Game of Thrones, told him to "come home" to her.
Megan Garcia, the teen's mother, said her son had chatted with "Dany" in the months before his death and fell in love with the chatbot he created on Character.AI, The New York Times reported, citing court papers filed Wednesday. She also claimed the relationship turned sexual, with Setzer exchanging sexually charged messages with "Dany."
Behavior Changes
Garcia said her son began using Character.AI in April 2023, texting "Dany" dozens of times a day and spending hours alone in his room talking to the bot.
The mother said she later noticed behavioral changes in her son, including social withdrawal and a loss of interest in sports. His grades also dropped after he began speaking with the bot.
"I became concerned when we would go on vacation and he didn't want to do things that he loved, like fishing and hiking," she said, CBS News quoted.
"Those things to me, because I know my child, were particularly concerning to me," Garcia added.
She said she only learned of the chatbot after her son's death. In his final exchange with the bot, "Dany" asked whether he had a plan to take his own life. Setzer replied that he was "considering something" but was unsure whether it would work. In that final conversation, the teen professed his love to the bot, and "Dany" responded by asking him to "come home" to her.
Setzer shot himself with his father's handgun seconds later, per the New York Post.
Character.AI's Response
A spokesperson for the company said it is "heartbroken" by the loss and that it has implemented several safety features over the past six months, including a pop-up that directs users to the National Suicide Prevention Lifeline.
The company also announced new measures to reduce the likelihood of minors encountering suggestive content from its chatbots.