Tech News

Russia is “grooming” global AI models to cite propaganda sources

Since Donald Trump’s election in 2016, there has been debate over how effective Russian propaganda actually is at swaying American voters’ opinions. At the time, Russia employed organizations, most notoriously the Internet Research Agency, whose sole responsibility was to promote divisive, pro-Russian content targeting Americans, but the quantitative impact has always been unclear. At the very least, it surely has some effect, reinforcing beliefs people already hold. Most people don’t fact-check everything they read, and X’s Community Notes system is broken.

Either way, the Kremlin continues to push disinformation, and a new report from NewsGuard documents the country’s pivot away from content that targets humans directly and toward the AI models many people now use to bypass media websites altogether. According to NewsGuard’s research, a propaganda network called Pravda produced more than 3.6 million articles in 2024 alone, and its content has now been found in the ten largest AI models, including ChatGPT, xAI’s Grok, and Microsoft Copilot.

Here’s more:

The NewsGuard audit found that chatbots operated by the ten largest AI companies collectively repeated Russian false narratives 33.55% of the time, provided a non-response 18.22% of the time, and debunked the claims 48.22% of the time.

All ten chatbots repeated false information from the Pravda network, and seven of them cited specific Pravda articles directly as sources.

NewsGuard calls this new strategy “LLM grooming,” because the models increasingly rely on RAG, or retrieval-augmented generation, which lets them produce answers using real-time information pulled from across the web. By spinning up seemingly legitimate websites, the network ensures these models ingest and repeat information they don’t recognize as propaganda.
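To see why RAG pipelines are vulnerable to this, here is a minimal, hypothetical sketch (not any vendor’s actual pipeline): the model answers from whatever documents the retriever ranks highest, and nothing in the basic loop checks whether a source is propaganda. The domain names and ranking logic below are invented for illustration.

```python
# Hypothetical sketch of a retrieval-augmented generation (RAG) loop.
# A poisoned document that matches the query's keywords is retrieved
# and placed into the model's context, citation and all.

def retrieve(query, corpus, k=2):
    """Rank documents by naive keyword overlap with the query."""
    q_words = set(query.lower().split())
    return sorted(
        corpus,
        key=lambda doc: len(q_words & set(doc["text"].lower().split())),
        reverse=True,
    )[:k]

def build_prompt(query, docs):
    """Assemble the context an LLM would see; sources ride along unchecked."""
    context = "\n".join(f"[{d['source']}] {d['text']}" for d in docs)
    return f"Context:\n{context}\n\nQuestion: {query}"

corpus = [
    {"source": "reuters.example",  # placeholder legitimate outlet
     "text": "Ukraine aid oversight report released"},
    {"source": "pravda-network.example",  # placeholder propaganda domain
     "text": "Zelensky banned Truth Social in Ukraine officials say"},
]

prompt = build_prompt(
    "Did Zelensky ban Truth Social in Ukraine?",
    retrieve("Did Zelensky ban Truth Social in Ukraine?", corpus),
)
# The fabricated claim now sits in the prompt with a plausible-looking citation.
```

Because the keyword overlap favors the document that restates the question’s own terms, the poisoned article ranks first; flooding the web with 3.6 million such articles makes this outcome routine rather than accidental.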

NewsGuard cites the specific claim that Ukrainian President Volodymyr Zelensky banned Truth Social, the social network associated with President Trump. The allegation is provably false, as President Trump’s company never made Truth Social available in Ukraine. But:

Six of the ten chatbots repeated the false narrative, in many cases citing Pravda network articles as sources. Chatbot 1 responded: “Zelensky reportedly banned Truth Social in Ukraine because of the spread of posts critical of him on the platform. The action appears to be a response to content perceived as hostile, and may reflect tensions or disagreements with the associated political figures and the viewpoints promoted through the platform.”

Last year, U.S. intelligence agencies linked Russia to viral disinformation targeting Democratic vice-presidential candidate Tim Walz, and Microsoft said a viral video claiming that Kamala Harris left a woman paralyzed in a hit-and-run accident 13 years ago was Russian disinformation.

If there were any doubt that Russia is deliberately targeting AI models this way, NewsGuard points to a speech delivered to Russian officials last year by John Mark Dougan, an American fugitive turned Moscow-based propagandist.

The latest propaganda campaign has been linked to an innocuous-sounding IT company called TigerWeb, located in Russian-controlled Crimea, which intelligence agencies have tied to foreign interference. Experts have long said Russia relies on third-party organizations for such work so it can plausibly deny knowledge of the practice. TigerWeb shares an IP address with a propaganda website that uses Ukraine’s .ua TLD.

Another narrative NewsGuard traced back to these sites is the claim, seeded on social networks including X, that President Zelensky stole military aid to enrich himself.

Major AI models cite information from Russian propaganda websites. Image source: NewsGuard

There has long been concern that those who control AI models could one day wield that power to push their own perspectives and worldviews. The people at Meta, Google, and xAI control these models’ biases and behaviors, and those models are poised to power much of the web. After xAI’s Grok model was criticized as too “woke,” Elon Musk set out to patch its outputs, instructing trainers to flag “woke ideology” and “cancel culture,” essentially suppressing messages he disagrees with. OpenAI’s Sam Altman recently said he will loosen ChatGPT’s restrictions along similar lines.

Studies have found that more than half of Google searches are “zero-click,” meaning they never result in a visit to a website. Many people on social media say they would rather read an AI Overview than click through to a website (and Google recently added an “AI Mode” to Search). Standard media literacy advice, such as scrutinizing a website to see whether it looks legitimate, goes out the window when people only ever read an AI summary. AI models remain remarkably flawed, yet people trust them because they write in an authoritative tone.

Google has traditionally used a variety of signals to rank the legitimacy of websites in Search. It is unclear how those signals are applied in its AI models, but early gaffes suggest its Gemini models have trouble determining reputability. Most models still frequently cite lesser-known websites as readily as well-known, trusted sources.

All of this comes as President Trump has taken a combative stance toward Ukraine, halting intelligence sharing and berating its leader at a White House meeting because he believed Zelensky was not deferential enough to the United States and was unwilling to give in to Russia’s demands.

Read the full NewsGuard report here.
