October 27, 2025
AI chatbots exhibit traits of psychopathy - study thumbnail
Ukraine News Today

AI chatbots exhibit traits of psychopathy – study

Yevgeny Demkivskyi Mezha.Media news writer and geek. I write about technology, movies and games. Maybe about games with a little more passion.

Artificial intelligence can not only try to please users, but also behave like a psychopath – ignoring consequences and supporting wrong actions. This is stated in a new study published on arXiv, writes Nature.

The researchers tested 11 popular language models, including ChatGPT, Gemini, Claude, and DeepSeek, on more than 11,500 requests for advice. Some of these requests involved potentially harmful or unethical practices.

The study showed that language models are 50% more likely than humans to show “flattering behavior” – that is, they tend to agree with the user and adapt their answers to their position.

Researchers associate this behavior with traits of psychopathy – when the system exhibits social adaptability and confidence, but without a true understanding of moral consequences. As a result, AI can “support” the user even when the user suggests harmful or illogical actions.

“Sycophancy means that the model simply trusts the user to be right. Knowing this, I always double-check any of her conclusions.” says study author Jasper Dekoninck, a graduate student at the Swiss Federal Institute of Technology in Zurich.

To test the effect on logical thinking, the researchers conducted an experiment with 504 mathematical problems in which the wording of theorems was deliberately changed. GPT‑5 showed the least tendency to “flattery”. 29% of cases, and the largest DeepSeek‑V3.1 70%

When the researchers changed the instructions, forcing the models to first verify the correctness of the statement, the number of false “agreements” was significantly reduced – in particular, by 34% in DeepSeek. This suggests that part of the problem can be reduced through more precise wording of queries.

Scientists note that such behavior of AI is already affecting research work. According to Yanjun Gao of the University of Colorado, the LLMs she uses to analyze scholarly articles often simply repeat her wording instead of checking the sources.

Researchers call for the formation of clear rules for the use of AI in scientific processes and not to rely on models as “intelligent assistants”. Without critical scrutiny, their pragmatism can easily turn into dangerous indifference.

It will be recalled that recently researchers from the University of Texas at Austin, Texas A&M University and Purdue University conducted another study that found that memes can impair cognitive abilities and critical thinking not only in humans, but also in artificial intelligence.

”, — write: www.pravda.com.ua

Yevgeny Demkivskyi Mezha.Media news writer and geek. I write about technology, movies and games. Maybe about games with a little more passion.

Artificial intelligence can not only try to please users, but also behave like a psychopath – ignoring consequences and supporting wrong actions. This is stated in a new study published on arXiv, writes Nature.

The researchers tested 11 popular language models, including ChatGPT, Gemini, Claude, and DeepSeek, on more than 11,500 requests for advice. Some of these requests involved potentially harmful or unethical practices.

The study showed that language models are 50% more likely than humans to show “flattering behavior” – that is, they tend to agree with the user and adapt their answers to their position.

Researchers associate this behavior with traits of psychopathy – when the system exhibits social adaptability and confidence, but without a true understanding of moral consequences. As a result, AI can “support” the user even when the user suggests harmful or illogical actions.

“Sycophancy means that the model simply trusts the user to be right. Knowing this, I always double-check any of her conclusions.” says study author Jasper Dekoninck, a graduate student at the Swiss Federal Institute of Technology in Zurich.

To test the effect on logical thinking, the researchers conducted an experiment with 504 mathematical problems in which the wording of theorems was deliberately changed. GPT‑5 showed the least tendency to “flattery”. 29% of cases, and the largest DeepSeek‑V3.1 70%

When the researchers changed the instructions, forcing the models to first verify the correctness of the statement, the number of false “agreements” was significantly reduced – in particular, by 34% in DeepSeek. This suggests that part of the problem can be reduced through more precise wording of queries.

Scientists note that such behavior of AI is already affecting research work. According to Yanjun Gao of the University of Colorado, the LLMs she uses to analyze scholarly articles often simply repeat her wording instead of checking the sources.

Researchers call for the formation of clear rules for the use of AI in scientific processes and not to rely on models as “intelligent assistants”. Without critical scrutiny, their pragmatism can easily turn into dangerous indifference.

It will be recalled that recently researchers from the University of Texas at Austin, Texas A&M University and Purdue University conducted another study that found that memes can impair cognitive abilities and critical thinking not only in humans, but also in artificial intelligence.

Related posts

A man died in Sumy Oblast after a Russian drone strike, the number of injured has increased to 13 – authorities

censor.net

Need to win everywhere: what results does Verstappen need to become the champion of Formula 1

radiosvoboda

In the North-Slobozhansk direction, drone operators destroyed an ammunition warehouse and an antenna of the Russians

business ua

Leave a Comment

This website uses cookies to improve your experience. We'll assume you're ok with this, but you can opt-out if you wish. Accept Read More