
Taras Mishchenko, Editor-in-Chief of Mezha.media. Taras has over 15 years of experience in IT journalism and writes about new technologies and gadgets.
If OpenAI determines that a user is a minor, they will automatically be placed in an age-appropriate version of ChatGPT. This mode blocks explicit and sexual content, and in rare cases the company may contact parents or involve law enforcement if the teenager appears to be at risk of harm.
OpenAI is also working on technology to accurately determine users' ages, but in cases of uncertainty or incomplete information, ChatGPT will default to the teen version. In some countries, ChatGPT may ask users to provide a passport or ID card.
The launch of updated safety tools in ChatGPT is no accident. OpenAI shared plans to add parental controls to ChatGPT last month, after a teenager’s family filed a lawsuit against the company, accusing the chatbot of involvement in his suicide.
However, the company has only now provided details about the parental controls, which will allow parents to:
- link their account with a teenager’s via email;
- set times when the child cannot use the chatbot;
- manage which features are disabled;
- receive a notification if the teenager shows signs of acute distress;
- specify parameters for how the chatbot responds to queries.
ChatGPT is currently intended for users aged 13 and older. Altman calls the changes “difficult decisions”. “After consulting with experts, we consider them the best and want to be transparent about our intentions,” he adds.
The changes to ChatGPT also coincide with the start of a US Federal Trade Commission (FTC) inquiry into technology companies such as Alphabet, Meta, OpenAI, xAI, and Snap, aimed at finding out how chatbots can affect children and adolescents. The agency emphasizes that it wants to understand what measures are used to evaluate the safety of such systems in their role as “digital companions”.