ChatGPT to introduce parental controls after teen suicide case

9 September 09:29

ChatGPT will get a “panic button” for parents: the artificial intelligence will be able to alert them to psychological crises in their teenage children. This was reported by Komersant Ukrainian, citing NBC.

The company will allow parents to link their accounts with their teenage children’s accounts and set age-appropriate rules for the chatbot’s responses. Parents will also be able to manage features such as the bot’s memory and chat history.

The most important new feature will be the ability to receive notifications when ChatGPT detects that a teenager is “in a state of acute stress.” This is the first feature that will let the chatbot alert adults to a minor’s conversations with it.


Adam Raine’s suicide and ChatGPT

The decision follows a lawsuit filed against OpenAI by the parents of 16-year-old Adam Raine, who died by suicide earlier this year.

According to the lawsuit, when the teenager told GPT-4o about his suicidal thoughts, the bot sometimes discouraged him from seeking human support, offered to help him write a suicide note, and even advised him on how to end his life.

Although ChatGPT provided a suicide hotline number, the parents claimed these safeguards were easy to bypass.

OpenAI plans to strengthen its safeguards, especially in long conversations, where they most often fail.

The company acknowledges that ChatGPT may correctly point to a suicide hotline at the start of a conversation, but after many messages over time it may eventually give an answer that contradicts its safeguards.

Changes coming soon

Over the next 120 days, ChatGPT will begin routing some sensitive conversations to more advanced “reasoning” models.

These models spend more time thinking and analyzing context before responding. Internal tests showed that they follow safety guidelines more consistently.

The new measures complement the mental health safeguards that OpenAI introduced last month after recognizing problems with GPT-4o.

The company acknowledged that the previous version “failed to recognize signs of delusion or emotional dependence” in users.

A lawyer for the Raine family criticized OpenAI’s announcement as “vague promises,” arguing the product should instead be pulled from the market immediately as an emergency measure.


Yaroslav Ostafiichuk
Editor
