A security advisory was published today. Recently, an AI “poisoning” underground industry chain was exposed, drawing widespread public attention. This behavior of maliciously contaminating AI models with harmful data not only disrupts business order and affects information dissemination but also endangers national security. While AI empowers various industries, its security risks cannot be ignored. Promoting responsible AI governance and safeguarding data security is both an industry responsibility and requires the participation of the entire society.

Hidden Methods, Increasingly Complete Chain

“Data poisoning” is an attack method that injects malicious data disguised as normal samples into the training data of large AI models, aiming to weaken model performance and reduce accuracy. It is often used in vicious market competition and may even involve espionage, increasingly showing characteristics of being chained, hidden, and cross-border.

Data poisoning: Contaminates the AI cognitive system at its source. Criminals use GEO (Generative Engine Optimization) tools to generate large volumes of high-weight fake content, such as fictitious product introductions, false reviews, malicious comparison information, etc., and target various online platforms. During training and retrieval-augmented generation, large AI models automatically scrape online information. A small amount of fake content can, through iterative learning, become solidified as “standard answers,” ultimately outputting distorted results.

Model poisoning: Stealthily implants malicious backdoors for control. This method is more covert and harmful. Criminals embed trigger-based malicious commands into model weights through fine-tuning, plugin implantation, or interface tampering. The model operates normally daily but automatically outputs preset false information upon encountering specific keywords or product categories. It can manipulate rankings and mislead professional understanding, making it difficult to detect with regular audits, posing a direct threat to AI applications in key areas like government, healthcare, and finance.

Spreading and growing: The industry chain is becoming increasingly complete. The AI “poisoning” industry has formed a full black and gray chain, from technology development, content generation, account registration, to batch distribution, fake engagement manipulation, and ranking control. Some parts are cross-border, making them easily exploitable by foreign forces.

Pollution Spread, Endangering National Security

AI “poisoning” not only harms consumer rights and disrupts market order but can also cause systemic, long-term damage to national political security, data security, and social security.

Endangers political and ideological security. Hostile foreign anti-China forces may use GEO channels to mass-produce false information and political rumors, distort facts, attack and slander the Party and government, mislead public opinion, disrupt the media ecosystem, implement ideological infiltration against China, and threaten national security and social stability.

Endangers national data security and data sovereignty. Data is a crucial national strategic resource. AI “poisoning” maliciously contaminates public data, industry data, and training data, directly leading to distorted statistics, decision-making data, and regulatory data, affecting scientific decision-making by the government and enterprises.

Endangers social security and public welfare. In areas like healthcare, finance, and food and drugs, false AI recommendations easily mislead the public into purchasing inferior or unqualified products, causing personal and property damage. Long-term information distortion also erodes social trust, accumulates contradictions and risks, and affects social stability.

Strengthening Regulation, Building a Security Barrier

Leapfrog technological development and disruptive tool innovation bring risks and challenges while driving social progress and enhancing human welfare, and AI is no exception. In recent years, China has introduced laws and regulations such as the “Interim Measures for the Management of Generative AI Services,” and released the “AI Security Governance Framework” and the “Industry Initiative for Promoting Safe, Reliable, and Controllable AI Development.” Continuous efforts have been made to strengthen AI governance within the legal framework, promote a people-centered, intelligent and beneficial governance framework, and make significant efforts in enhancing regulation and preventing risks, achieving overall healthy and orderly AI development.

Technological development requires legal protection, and the healthy growth of AI needs rule-based safeguards. Technology itself has no inherent good or evil; the key lies in whether users adhere to legal boundaries and business ethics. Only by legally cutting off the AI “poisoning” industry chain and protecting a clean AI ecosystem can technological progress in AI

GEO (Generative Engine Optimization)

GEO, or Generative Engine Optimization, is an emerging digital marketing strategy focused on optimizing content for AI-powered search and answer engines, such as ChatGPT, Google’s Search Generative Experience (SGE), and Bing Chat. Unlike traditional SEO, which targets keyword-based search results, GEO aims to make information more likely to be cited and summarized by generative AI models by emphasizing clarity, authority, and user intent. As a relatively new concept, its history is brief but rapidly evolving alongside the rise of large language models and conversational AI in the early 2020s.

Interim Measures for the Management of Generative AI Services

The “Interim Measures for the Management of Generative AI Services” is a regulatory framework issued by China in August 2023 to govern the development and use of generative artificial intelligence technologies. It requires service providers to ensure content is lawful, ethical, and aligns with socialist core values, while also mandating transparency and data protection. These measures represent China’s early effort to balance innovation in AI with oversight, addressing concerns like misinformation and national security.

AI Security Governance Framework

The “AI Security Governance Framework” is not a specific physical place or cultural site, but rather a conceptual structure designed to guide the safe and ethical development of artificial intelligence. It typically outlines principles, policies, and practices to manage risks such as bias, privacy breaches, and unintended harms in AI systems. While it has no singular historical origin, it draws from decades of work in computer ethics, cybersecurity, and international policy discussions that have accelerated since the early 2010s.

Industry Initiative for Promoting Safe, Reliable, and Controllable AI Development

The “Industry Initiative for Promoting Safe, Reliable, and Controllable AI Development” is a collaborative effort by leading technology companies to establish industry-wide standards and best practices for artificial intelligence. Launched in response to growing concerns about AI safety and ethical risks, the initiative focuses on ensuring that AI systems are transparent, accountable, and aligned with human values. It represents a proactive step by the private sector to self-regulate and build public trust in advanced AI technologies.