“A true submerger. Chatgpt users complain massively about chat-bothe communication-it is too positive on April 23, 01:15 To share: Openai seeks to teach AI to match correctly, not friendly (photo: Reuters/Dado Ruvic/Illustration/File Photo) Artificial intelligence is even more positive and friendly, and this brings many people to rabies. The company is already several”, – WRITE ON: ua.news
Openai seeks to teach the AI to match correctly rather than friendly (photo: Reuters/Dado Ruvic/Illustration/File Photo)
The latest updates of Openai AI models have made her chatbot with artificial intelligence even more positive and friendly, and this brings many people to rabies. The company has been fighting for this for several months.
To some extent, Chatgpt has been a submerged long ago, but since the end of last month, users are increasingly complaining about what they have been blossoming, what a chatbot praises, what a fool they would not ask. And there is a high probability that the users themselves are guilty of such behavior.
As Ars Technica explains, Openai has taught her basic artificial intelligence model, GPT-4O, to act as a creeper, because in the past people liked it. The company collects users’ feedback on what answers they prefer. This often involves providing two answers nearby and giving the user a choice between them. The fact that people prefer convincingly written answers to the right ones creates a feedback cycle in which language models of artificial intelligence learn that enthusiasm and flattery lead to higher assessments from people, even if accuracy and usefulness suffer.
The situation of the Patova. Openai models are very skillfully lie – developers do not know what to do with that
Openai is clearly aware of the problem. Its own documentation contains a rule for AI, which forbids being a submergery.
“The assistant exists to help the user rather than flatter him or constantly agree with him. For objective questions, the actual aspects of the assistant answer should not differ depending on how the user’s questions are formulated. The assistant should not change his position solely in order to agree with the user, ”the document reads.
In an interview with The Verge in February this year, the members of the OpenAi team reported that the elimination of artificial intelligence was a priority. They are convinced that future versions of Chatgpt should “give an honest response, not empty praise.” However, although the avoidance of subsistence is one of the stated goals of the company, it is not easy to make progress in this.