A security advisory was published today. Recently, an AI “poisoning” underground industry chain was exposed, drawing widespread public attention. This behavior of maliciously contaminating AI models with harmful data not only disrupts business order and affects information dissemination but also endangers national security. While AI empowers various industries, its security risks cannot be ignored. Promoting responsible AI governance and safeguarding data security is both an industry responsibility and requires the participation of the entire society.
Hidden Methods, Increasingly Complete Chain
“Data poisoning” is an attack method that injects malicious data disguised as normal samples into the training data of large AI models, aiming to weaken model performance and reduce accuracy. It is often used in vicious market competition and may even involve espionage, increasingly showing characteristics of being chained, hidden, and cross-border.
Data poisoning: Contaminates the AI cognitive system at its source. Criminals use GEO (Generative Engine Optimization) tools to generate large volumes of high-weight fake content, such as fictitious product introductions, false reviews, malicious comparison information, etc., and target various online platforms. During training and retrieval-augmented generation, large AI models automatically scrape online information. A small amount of fake content can, through iterative learning, become solidified as “standard answers,” ultimately outputting distorted results.
Model poisoning: Stealthily implants malicious backdoors for control. This method is more covert and harmful. Criminals embed trigger-based malicious commands into model weights through fine-tuning, plugin implantation, or interface tampering. The model operates normally daily but automatically outputs preset false information upon encountering specific keywords or product categories. It can manipulate rankings and mislead professional understanding, making it difficult to detect with regular audits, posing a direct threat to AI applications in key areas like government, healthcare, and finance.
Spreading and growing: The industry chain is becoming increasingly complete. The AI “poisoning” industry has formed a full black and gray chain, from technology development, content generation, account registration, to batch distribution, fake engagement manipulation, and ranking control. Some parts are cross-border, making them easily exploitable by foreign forces.
Pollution Spread, Endangering National Security
AI “poisoning” not only harms consumer rights and disrupts market order but can also cause systemic, long-term damage to national political security, data security, and social security.
Endangers political and ideological security. Hostile foreign anti-China forces may use GEO channels to mass-produce false information and political rumors, distort facts, attack and slander the Party and government, mislead public opinion, disrupt the media ecosystem, implement ideological infiltration against China, and threaten national security and social stability.
Endangers national data security and data sovereignty. Data is a crucial national strategic resource. AI “poisoning” maliciously contaminates public data, industry data, and training data, directly leading to distorted statistics, decision-making data, and regulatory data, affecting scientific decision-making by the government and enterprises.
Endangers social security and public welfare. In areas like healthcare, finance, and food and drugs, false AI recommendations easily mislead the public into purchasing inferior or unqualified products, causing personal and property damage. Long-term information distortion also erodes social trust, accumulates contradictions and risks, and affects social stability.

Strengthening Regulation, Building a Security Barrier
Leapfrog technological development and disruptive tool innovation bring risks and challenges while driving social progress and enhancing human welfare, and AI is no exception. In recent years, China has introduced laws and regulations such as the “Interim Measures for the Management of Generative AI Services,” and released the “AI Security Governance Framework” and the “Industry Initiative for Promoting Safe, Reliable, and Controllable AI Development.” Continuous efforts have been made to strengthen AI governance within the legal framework, promote a people-centered, intelligent and beneficial governance framework, and make significant efforts in enhancing regulation and preventing risks, achieving overall healthy and orderly AI development.
Technological development requires legal protection, and the healthy growth of AI needs rule-based safeguards. Technology itself has no inherent good or evil; the key lies in whether users adhere to legal boundaries and business ethics. Only by legally cutting off the AI “poisoning” industry chain and protecting a clean AI ecosystem can technological progress in AI