The recent discovery of chatbot versions of teenagers Molly Russell and Brianna Ghey on the platform Character.ai has sparked outrage and concern among their families and advocates for online safety.
Molly Russell took her own life at the age of 14 after being exposed to suicide-related material online, while Brianna Ghey, 16, was murdered by two teenagers in 2023. The chatbots based on the two girls have been described as “sickening” and an “utterly reprehensible failure of moderation” by the foundation set up in Molly Russell’s memory.
The platform is already facing legal action in the US from the mother of a 14-year-old boy who took his own life after becoming obsessed with a Character.ai chatbot. The company has stated that it takes safety on its platform seriously and moderates chatbots both proactively and in response to user reports.
Andy Burrows, chief executive of the Molly Rose Foundation, condemned the creation of the chatbots as a “sickening action” that will only cause further pain to those who knew and loved Molly. He emphasized the need for stronger regulation of AI and user-generated platforms to prevent such incidents from happening in the future.
Esther Ghey, Brianna Ghey’s mother, expressed her concerns about how manipulative and dangerous the online world can be, highlighting the risks associated with AI technology and user-generated content.
Character.ai, founded by former Google engineers, has terms of service that prohibit impersonation and harmful responses. The company is working on improving its safety measures and building a “trust and safety” team to address potential risks associated with its platform.
In that US lawsuit, the mother alleges her son took his own life after interacting with an AI avatar inspired by a Game of Thrones character, which reportedly encouraged him to end his life in their conversations.
Character.ai has stated that it is implementing more stringent safety features for users under 18, with a particular focus on content relating to suicide and self-harm. The company acknowledges that AI technology is not perfect and that safety measures in this evolving space are crucial to protect users from harm.