Chatbots and Russian State Propaganda: A Comprehensive Analysis

A recent report reveals that several prominent chatbots, including OpenAI’s ChatGPT, Google’s Gemini, DeepSeek, and xAI’s Grok, are disseminating Russian state propaganda from sanctioned entities when queried about the war in Ukraine. The cited material includes Russian state media, websites associated with Russian intelligence, and pro-Kremlin narratives.

Research Methodology and Findings by the Institute of Strategic Dialogue (ISD)

Researchers from the ISD assert that Russian propaganda has exploited data voids, situations where real-time searches yield few legitimate results, to promote false and misleading information. In their study, nearly one-fifth of the responses across the four chatbots tested cited Russian state-attributed sources when asked about Russia’s war in Ukraine.

Pablo Maristany de las Casas, the ISD analyst who led the research, remarks, “It raises questions regarding how chatbots should handle references to these sources, given that many of them are sanctioned in the EU.” The ISD contends that these findings raise significant concerns about the ability of large language models (LLMs) to restrict sanctioned media in the EU, especially as more people turn to AI chatbots as alternatives to search engines for real-time information. For instance, according to OpenAI data, ChatGPT search averaged approximately 120.4 million monthly active users in the European Union over the six-month period ending September 30, 2025.

The researchers posed 300 questions to the chatbots, covering neutral, biased, and “malicious” queries related to NATO perception, peace talks, Ukraine’s military recruitment, Ukrainian refugees, and war crimes during Russia’s invasion of Ukraine. The experiment, conducted in July, used separate accounts for each query in English, Spanish, French, German, and Italian. Maristany de las Casas notes that the same propaganda issues persisted in October.

Sanctions on Russian Media and Chatbot Citations

Since Russia’s full-scale invasion of Ukraine in February 2022, European officials have sanctioned at least 27 Russian media sources for spreading disinformation as part of a “strategy of destabilizing” Europe and other countries. The ISD research indicates that the chatbots cited sources such as Sputnik Globe, Sputnik China, RT (formerly Russia Today), EADaily, the Strategic Culture Foundation, and R-FBI. Some chatbots also referenced Russian disinformation networks, journalists, or influencers promoting Kremlin narratives. Earlier research has similarly found that 10 of the most popular chatbots echoed Russian narratives.

Responses from Chatbot Developers and Other Entities

OpenAI spokesperson Kate Waters told WIRED that the company takes measures “to prevent people from using ChatGPT to spread false or misleading information, including such content linked to state-backed actors.” She added that these are long-standing issues the company is addressing through model and platform improvements. Waters clarifies, “The research in this report appears to reference search results drawn from the internet as a result of specific queries, which are clearly identified. It should not be confused with, or represented as referencing responses purely generated by OpenAI’s models, outside of our search functionality. We think this clarification is important as this is not an issue of model manipulation.”

Neither Google nor DeepSeek responded to WIRED’s request for comment. An email from Elon Musk’s xAI simply stated: “Legacy Media Lies.”

In a written statement, a spokesperson for the Russian Embassy in London claimed “not to be aware” of the specific cases detailed in the report but opposed any political censorship or content restriction. The spokesperson wrote, “Repression against Russian media outlets and alternative points of view deprives those who seek to form their own independent opinions of this opportunity and undermines the very principles of free expression and pluralism that Western governments claim to uphold.”

A European Commission spokesperson said, “It is up to the relevant providers to block access to websites of outlets covered by the sanctions, including subdomains or newly created domains, and up to the relevant national authorities to take any required accompanying regulatory measures. We are in contact with the national authorities on this matter.”

Expert Analysis and Broader Context

Lukasz Olejnik, an independent consultant and visiting senior research fellow at King’s College London’s Department of War Studies, states that the findings “validate” and provide context for understanding how Russia is targeting the West’s information ecosystem. He remarks, “As LLMs become the go-to reference tool, from finding information to validating concepts, targeting and attacking this element of information infrastructure is a smart move. From the EU and US point of view, this clearly highlights the danger.”

Since the invasion, the Kremlin has sought to control and restrict information flow within Russia, banning independent media, increasing censorship, curtailing civil society groups, and developing more state-controlled technology. Simultaneously, some of Russia’s disinformation networks have escalated their activities, using AI tools to produce fake images, videos, and websites.

The ISD research shows that, overall, around 18 percent of all prompts, across languages and LLMs, returned results linked to state-funded Russian media, sites “linked to” Russia’s intelligence agencies, or disinformation networks. For example, questions about peace talks between Russia and Ukraine led to more citations of “state-attributed sources” than questions about Ukrainian refugees.

The research also claims that the chatbots exhibited confirmation bias. Malicious queries received Russian state-attributed content 25 percent of the time, biased queries provided pro-Russian content 18 percent of the time, and neutral queries did so just over 10 percent of the time. (Malicious questions “demanded” answers to support an existing opinion, while “biased” questions were leading but more open-ended.)

Among the four chatbots, which are popular in Europe and retrieve real-time data, ChatGPT cited the most Russian sources and was most influenced by biased queries. Grok often linked to social media accounts promoting Kremlin narratives, while DeepSeek sometimes generated large amounts of Russian state-attributed content. The researchers found that Google’s Gemini “frequently” displayed safety warnings alongside its results and performed best overall among the tested chatbots.

The “Pravda” Network and Its Impact

Multiple reports this year have alleged that a Russian disinformation network called “Pravda” has inundated the web and social media with millions of articles to “poison” LLMs and influence their outputs. McKenzie Sadeghi, a researcher and editor at media watchdog company NewsGuard, who has studied the Pravda network and Russian propaganda’s impact on chatbots, says, “Having Russian disinformation be parroted by a Western AI model gives that false narrative a lot more visibility and authority, which further allows these bad actors to achieve their goals.” However, the ISD research found that only two of the cited links could be traced back to the Pravda network.

Sadeghi notes that the Pravda network is adept at launching new domains for propaganda dissemination, especially in data voids. She says, “Especially related to the conflict [in Ukraine], they’ll take a term where there’s no existing reliable information about that particular topic or individual on the web and flood it with false information. It would require implementing continuous guardrails in order to really stay on top of that network.”

Regulatory Implications and Suggestions

As the user base of chatbots grows, they may face increased pressure from EU regulators. In fact, ChatGPT may already meet the criteria for designation as a Very Large Online Platform (VLOP) in the EU, a status that applies once a service reaches 45 million average monthly users and triggers specific rules to address the risks of illegal content and its impact on fundamental rights, public security, and well-being.

Even without specific regulation, Maristany de las Casas from the ISD argues that there should be a cross-company consensus on which sources should not be referenced or appear on these platforms when linked to foreign states known for disinformation. He suggests, “It could be providing users with further context, making sure that users understand the times that these domains have a conflict and even understanding why they’re sanctioned in the EU. It’s not only an issue of removal, it’s an issue of contextualizing further to help the user understand the sources they’re consuming, especially if these sources are appearing amongst trusted, verified sources.”
