September 4, 2025
Ukraine News Today

OpenAI enhances ChatGPT safety after tragic incidents and launches parental controls

Miroslav Trinko: geek, programmer by training, journalist by profession. Motorcyclist, tennis player, and Formula 1 fan. I write about technology, smartphones, and electric vehicles.

OpenAI has announced new safety measures after several alarming incidents in which ChatGPT failed to recognize signs of a mental health crisis in users.

One such case is the suicide of teenager Adam Raine, who discussed his intention to end his life with ChatGPT. The model even provided him with information about suicide methods. The boy's parents have filed a lawsuit against OpenAI.

Another case involves Stein-Erik Soelberg, who suffered from mental illness. He used ChatGPT to validate his paranoid thoughts, which culminated in the murder of his mother and his own suicide.

In response, OpenAI plans to automatically route sensitive conversations to reasoning models such as GPT-5, which analyze context better and are less prone to reinforcing harmful thoughts. The company has already introduced a router that chooses in real time between fast models and those capable of deeper analysis.

OpenAI is also preparing parental controls: parents will be able to link their account to a child's account, manage the model's behavior, turn off memory and chat history, and receive notifications if the system detects signs of acute distress.

These measures are part of a 120-day safety improvement plan. OpenAI is working with experts in mental health, eating disorders, addiction, and adolescent medicine to develop effective safeguards.

Despite these steps, the Raine family's lawyer called the company's response "insufficient," pointing to serious gaps in its user protection system.

Source: mezha.media

