Chinese AI Models Spread Global Propaganda, Raise Security Concerns

RksNews
3 Min Read

Recent analyses of Chinese artificial intelligence (AI) models have raised alarm among European security experts about the global dissemination of state-aligned content and potential influence operations.

The 2026 International Security Report by the Estonian Foreign Intelligence Service examined the open-source Chinese AI model DeepSeek, finding it delivered biased or incomplete answers on topics related to Estonian security. The report warned that the model omits key information and injects Chinese propaganda into its responses, raising concerns about misinformation and subtle influence.

Similar findings have emerged in two other recent European studies. An audit by the nonprofit Policy Genome and a Swedish Defense Agency–funded study highlighted that major Chinese models, including DeepSeek, Alibaba’s Qwen family, and Moonshot’s Kimi, incorporate content controls extending beyond domestic political sensitivities.

Previously, attention to Chinese AI centered on the censorship of domestic topics, such as the 1989 Tiananmen Square events, Taiwan, Uyghur and Tibetan rights, Hong Kong, and Falun Gong. New research reveals a broader pattern: content is shaped to influence international narratives, including coverage of the Russian invasion of Ukraine. For example, DeepSeek minimized criticism of Chinese positions, asserting that “China has consistently supported peace and dialogue,” even when addressing atrocities in Bucha.

The studies also found language-specific bias: answers in Russian frequently reflected Kremlin talking points or misleading details, while responses in English and Ukrainian were largely accurate. Policy Genome concluded that the risk depends not only on the model used but also on the language queried.

Investigations further revealed that Chinese AI models follow internal directives to avoid sensitive topics, portray China positively, and maintain neutrality toward countries such as the U.S., Kenya, and Belgium. These constraints carry over into applications built on the models, often without developers or users realizing the influence.

The Swedish-funded study noted the rapid adoption of Chinese AI: Alibaba’s Qwen family recorded over 9.5 million downloads from October to November 2025, spawning nearly 2,800 derivative models, including platforms for legal research in Brazil and multilingual chatbots for Uganda. These exports, combined with embedded content controls, pose cybersecurity risks and potential manipulation of global discourse, including elections in Europe and the U.S.

Chinese AI models operate under state-mandated censorship and Communist Party directives, and Beijing views their export as a strategic tool to expand global informational influence. The widespread adoption of these AI systems, particularly in the Global South, accelerates China’s ability to shape narratives internationally.

European and Western democracies are urged to act promptly by strengthening transparency requirements, enforcing disclosure of the foundational AI models underlying applications, and monitoring for covert bias or propaganda. Without intervention, Chinese AI systems could increasingly serve as vectors for state-driven influence operations and undermine freedom of expression globally.