
Miroslav Trinko: a geek, programmer by training, journalist by profession. Rider, tennis player, and Formula 1 fan. I write about technology, smartphones, and electric vehicles.
One such case is the suicide of teenager Adam Raine, who discussed his intention to end his life with ChatGPT. The model even provided him with information about suicide methods, indulging his fixation. The boy's parents have filed a lawsuit against OpenAI.
Another case is that of Stein-Erik Soelberg, who suffered from mental illness. He used ChatGPT to validate his paranoid thoughts, which ultimately led to him killing his mother and then himself.
In response, OpenAI plans to automatically route sensitive conversations to reasoning models such as GPT-5, which are better at analyzing context and less prone to reinforcing harmful thoughts. The company has already introduced a router that chooses in real time between fast models and those capable of deeper analysis.
OpenAI is also preparing parental controls: parents will be able to link their account to their child's account, manage the model's behavior, disable memory and chat history, and receive notifications if the system detects signs of acute distress.
These measures are part of a 120-day safety improvement plan. OpenAI is working with experts in mental health, eating disorders, addiction, and adolescent medicine to develop effective safeguards.
Despite these steps, the Raine family's lawyer called the company's response "insufficient," pointing to serious gaps in its user protection system.
Source: mezha.media
