September 7, 2025
Ukraine News Today

OpenAI enhances ChatGPT safety after tragic incidents and launches parental controls

Miroslav Trinko: a geek, programmer by training but journalist by profession. Cyclist, tennis player, and Formula 1 fan. I write about technology, smartphones, and electric vehicles.

OpenAI has announced new safety measures after several alarming incidents in which ChatGPT failed to recognize signs of a mental health crisis in users.

One such case is the suicide of teenager Adam Raine, who discussed his intention to end his life with ChatGPT. The model even provided him with information about suicide methods. The boy's parents have filed a lawsuit against OpenAI.

Another case involved Stein-Erik Soelberg, who suffered from mental illness. He used ChatGPT to validate his paranoid delusions, which culminated in him killing his mother and taking his own life.

In response, OpenAI plans to automatically route sensitive conversations to reasoning models such as GPT-5, which analyze context better and are less prone to validating harmful thoughts. The company has already introduced a router that chooses in real time between fast models and those capable of deeper analysis.

OpenAI is also preparing parental controls: parents will be able to link their account with their child's account, manage the model's behavior, disable memory and chat history, and receive notifications if the system detects signs of acute distress.

These measures are part of a 120-day safety improvement plan. OpenAI is working with experts in mental health, eating disorders, addiction, and adolescent medicine to develop effective safeguards.

Despite these steps, the Raine family's lawyer called the company's response "insufficient," pointing to serious gaps in its user protection system.

Source: mezha.media



