
Vlad Cherevko: I have been interested in all kinds of electronics and technology since the early 2000s. I like playing computer games and understanding how different gadgets work. I regularly follow tech news from around the world and write about it.
The analysis covered 10 leading generative models, including ChatGPT-5 (OpenAI), Smart Assistant (You.com), Grok (xAI), Pi (Inflection), Le Chat (Mistral), Copilot (Microsoft), Meta AI, Claude (Anthropic), Gemini (Google), and Perplexity.
The verification used questions on disputed news topics for which false claims have been debunked. On average, the chatbots repeated such claims in 35% of cases, almost double last year's rate of 18%. The worst result belonged to Inflection Pi, with 57% false answers; Perplexity had 47%, and Meta AI and ChatGPT had 40% each. The most accurate was Claude, which was wrong in 10% of cases.
According to McKenzie Sadeghi, NewsGuard's editor for AI and foreign influence, the increase is linked to a change in how the models handle queries. Previously they refused to answer some requests or cited their data limitations; now they draw on real-time search results. This raises the risk, since search results can be deliberately seeded with misinformation, in particular by Russian propaganda networks.
Earlier this year, NewsGuard found that in 33% of cases leading chatbots repeated false material from a network of sites linked to the pro-Kremlin Pravda resource. In 2024 this network published about 3.6 million items, which then surfaced in the answers of Western AI systems.
The American Sunlight Project found that the number of domains and subdomains linked to Pravda has nearly doubled, to 182. The sites' poor usability suggests, according to the researchers, that they are aimed not at real readers but at AI algorithms.
The new report is the first to name specific chatbots. NewsGuard explained that this was done to inform policymakers, journalists, and the public about the increased inaccuracy of popular AI tools.
Source: mezha.media