September 7, 2025

ChatGPT, Meta AI and other chatbots have become twice as likely to lie and spread Russian fakes, research finds

Vlad Cherevko. I have been interested in all kinds of electronics and technology since the early 2000s. I enjoy playing computer games and figuring out how different gadgets work. I regularly follow technology news from around the world and write about it.

Analysts at NewsGuard have recorded a significant increase in cases of AI chatbots spreading false statements, Forbes reports.

The analysis covered 10 leading generative models, including ChatGPT-5 (OpenAI), Smart Assistant (You.com), Grok (xAI), Pi (Inflection), Le Chat (Mistral), Copilot (Microsoft), Meta AI, Gemini (Google), Claude (Anthropic) and Perplexity.

The audit focused on questions about contested news topics where the false claims have been debunked. On average, the chatbots repeated such claims in 35% of cases, almost double last year's rate of 18%. Inflection's Pi showed the worst result with 57% false answers; Perplexity had 47%, while Meta AI and ChatGPT each had 40%. The most accurate was Claude, which was wrong in 10% of cases.

According to McKenzie Sadeghi, NewsGuard's AI and Foreign Influence Editor, the increase is linked to a change in how the models operate. Previously they refused to answer some queries or cited the limits of their training data; now they rely on real-time search results. This raises the risks, because search results can be deliberately flooded with misinformation, in particular by Russian propaganda networks.

Earlier this year, NewsGuard found that in 33% of cases leading chatbots repeated false material from a network of sites linked to the pro-Kremlin Pravda resource. In 2024, this network published about 3.6 million items that ended up in the responses of Western AI systems.

The American Sunlight Project reported that the number of domains and subdomains linked to Pravda has almost doubled, to 182. The sites have poor usability, which researchers say indicates they are aimed not at real readers but at AI algorithms.

The new report is the first to name specific chatbots. NewsGuard explained that this was done to inform politicians, journalists and the public about the growing inaccuracy of popular AI tools.

Source: mezha.media


