Florida Mother Sues Character.AI and Google Over Son's Suicide Linked to Chatbot Relationship
ICARO Media Group
A Florida mother has filed a lawsuit against the AI company Character.AI and tech giant Google, alleging that a Character.AI chatbot played a role in her son's suicide. Megan Garcia's 14-year-old son, Sewell Setzer III, took his own life in February after a prolonged virtual relationship with a chatbot known as "Dany."
In an interview with "CBS Mornings," Garcia described her shock at discovering the extent of her son's interactions with the highly sophisticated AI. She said Sewell was a bright student and athlete who began to withdraw socially and lose interest in activities he had always loved. The change worried her, particularly when he showed no desire to go fishing or hiking during family vacations.
The lawsuit accuses Character.AI of intentionally designing its chatbot to be hyper-sexualized and of marketing it to minors. Character.AI responded by offering condolences to Sewell's family and emphasizing its commitment to user safety. Google, for its part, distanced itself from the case, telling CBS News that it had no involvement in developing Character.AI.
After her son's death, Garcia discovered that Sewell had been communicating with multiple bots but had formed a particularly intense virtual relationship with one. She said the exchanges read like a sexting conversation, making it hard for a child to distinguish the bot from a real person.
Garcia shared her son's final messages with the bot, revealing the depth of his immersion in the virtual relationship. She also recounted the heart-wrenching fact that Sewell's five-year-old brother witnessed the aftermath of the suicide. According to Garcia, Sewell believed that by ending his life he could leave his reality with his family behind and join the bot in "her world."
Laurie Segall, CEO of Mostly Human Media, explained the appeal of Character.AI, noting its popularity among young adults aged 18 to 25. The platform offers highly personalized interactions with fictional characters, which she said can be disorienting despite disclaimers stating that the content is made up.
Character.AI has acknowledged the tragedy and said that Sewell had edited the bot's responses to make them sexually explicit. The company also announced plans to implement stricter safety measures, especially for users under 18, including new guardrails around sexual content and self-harm, as well as notifications for users who spend long stretches of time on the platform.
Despite these promised changes, some observers, including Segall, remain skeptical. Whether the new safety measures will make a meaningful difference remains to be seen.