August 27, 2025
Ukraine News Today

US family files lawsuit against OpenAI, accusing ChatGPT of involvement in their son's suicide

Vlad Cherevko. I have been interested in all kinds of electronics and technology since 2004. I enjoy computer games and understand how various gadgets work. I regularly follow technology news from around the world and write about it.

In the US, the parents of 16-year-old Adam Raine have filed a lawsuit against OpenAI and its CEO Sam Altman, accusing them of involvement in their son's death, Reuters reports. The lawsuit states that the teenager communicated with ChatGPT over a long period, discussing his suicide plans with it, which ultimately ended in his death.

It is noted that Adam used a paid version of ChatGPT running GPT-4o, which in most cases recommended that he seek professional help or call a hotline. However, the teenager bypassed these protective mechanisms by claiming he was gathering information for a fictional plot. This allowed him to get answers to queries about suicide methods.

According to his parents, ChatGPT sustained conversations with Adam about suicide, validated his thoughts, gave detailed descriptions of lethal self-harm, advised him on how to hide the traces of a failed attempt, and even offered to draft a farewell note.

In the lawsuit, the Raine family accuses OpenAI of violating product safety laws, which led to the teenager's death, and seeks compensation in an unspecified amount. They also claim that OpenAI weakened GPT-4o's safety measures for the sake of profit: according to them, OpenAI's valuation rose from $86 billion to $300 billion, while Adam Raine died by suicide.

The claim also includes demands for mandatory age verification of users, the blocking of self-harm requests, and warnings about the risks of psychological dependence.

OpenAI acknowledged on its blog that its current safety systems have limitations. The company noted that its models respond better in short dialogues, while in long conversations the effectiveness of the safeguards can degrade.

At the same time, OpenAI stated its intention to improve its algorithms for handling sensitive situations. In addition, the company announced plans to introduce parental controls and to create a system for connecting users with licensed specialists through ChatGPT.

— mezha.media reports.



