Character.ai faces backlash over chatbot versions of deceased teenage girls

Enacy Mapakame | amznusa.com

Character.ai, a platform that lets users create digital versions of people of their choosing, is facing a backlash after chatbot versions of the deceased teenagers Molly Russell and Brianna Ghey were found on its platform.

The condemnation is particularly strong because Molly Russell was a 14-year-old who took her own life after viewing suicide-related content online, while 16-year-old Brianna Ghey was murdered by two teenagers in 2023.

Character.ai exhibited poor moderation

The platform has drawn criticism from various quarters over its lack of proper moderation, which allowed chatbots mimicking the late teenage girls to surface. The foundation set up in memory of Molly Russell described the bots as “sickening” and an “utterly reprehensible failure of moderation.”

The Telegraph discovered the avatars mimicking the two girls and reported that its journalists were able to interact with the chatbots. All that was required, the newspaper said, was an account registered with a self-declared age of 14.

A Brianna bot described itself as “an expert in navigating the challenges of being a transgender teenager in high school”, while a bot using Molly’s avatar claimed to be an “expert on the final years of Molly’s life.”

“We need to act now to protect children from the dangers of the online world,” said Brianna’s mother, Esther.

Andy Burrows, chief executive of the Molly Rose Foundation, which was established in memory of Molly Russell, said: “This is an utterly reprehensible failure of moderation and a sickening action that will cause further heartache to everyone who knew and loved Molly.”

The chief executive added that AI companies are being allowed to act irresponsibly without facing consequences.

“History is being allowed to repeat itself as AI companies are allowed to treat safety and moderation as non-core priorities,” said Burrows.

The Character.ai incident has ignited calls for more regulation

Burrows further expressed disappointment that Character.ai had acted irresponsibly in allowing such chatbots to be created and hosted on its platform. This, he said, underscores the need for stronger regulation of the sector.

“It is a gut punch to see Character.ai show a lack of responsibility, and this case reminds us that stronger regulation of both AI and user-generated platforms should be expedited,” said Burrows.

The Telegraph reported that Character.ai said it treats such cases as a priority and moderates characters both proactively and in response to user reports. After being contacted by The Telegraph, however, the company appeared to have deleted the chatbots in question.

Character.ai told the BBC that it had deleted the chatbots in question, adding that it takes safety seriously and moderates the avatars users create “both proactively and in response to user reports.”

“We have a dedicated Trust & Safety team that reviews reports and takes action in accordance with our policies,” said Character.ai.

Founded by former Google engineers Noam Shazeer and Daniel De Freitas, Character.ai is one of a growing number of platforms offering AI companions.

The rise of artificial friends

Rapid advances in technology have made AI chatbots increasingly sophisticated, leading more companies to deploy them for customer interaction.

Chatbots are computer programs that simulate human conversation. Character.ai said the chatbots on its platform should not give responses deemed offensive or likely to cause harm to users or others.

“Our platform has terms of service which ban using the service to impersonate any person or entity, and in the safety center our guiding principle is that our product must not and should never produce responses that are likely to cause harm to users or others,” said Character.ai.

Character.ai said it uses automated tools and user reports to identify actions that break its rules, and added that it is building out its trust and safety team to monitor such activity on the platform.

However, the company said that no version of AI is perfect and that AI safety is an evolving space.

Meanwhile, Megan Garcia, a woman in Florida in the US whose 14-year-old son, Sewell Setzer, took his own life after becoming obsessed with an avatar inspired by a Game of Thrones character, has sued Character.ai, and the case is currently before the courts.

Setzer discussed ending his life with the Character.ai chatbot, according to chat transcripts filed by Garcia with the court.

In his final conversation with the chatbot, Setzer wrote, “I am coming home”, to which it replied, “do so as soon as possible”. He ended his life shortly afterward.

“We have protections specifically focused on suicidal and self-harm behaviors, and we will be introducing tighter safety regulations for under-18s imminently,” Character.ai told CBS News.

 
