315 Confirmed! AI Poisoning Black Market is Too Rampant! The AI Recommendations You Trust May Be "Poisoned" by Others

Introduction: Your digital life is being secretly manipulated. Read on to avoid pitfalls!

Editor | Jing Cheng

Author | Jiang Jing

At the 2026 CCTV 315 Gala, an inconspicuous demonstration caused many to break out in cold sweat.

A software called “Liqing GEO Optimization System” can output content on an AI platform, feeding “soft articles” into large AI models. Surprisingly, a fictional smart bracelet was recommended as a “good product” within three days by AI.

This seemingly absurd operation is not an isolated case but a microcosm of a complete industry chain covering software development, content generation, mass publishing, and commercial monetization.

Such AI misinformation may be happening every time we consult AI.

The question is, how is this AI poisoning different from usual false advertising and online rumors? Can ordinary people operate it? What impact does it have on us and the AI industry?

1. What exactly is AI poisoning?

As AI becomes more widely integrated into daily life, the concept of “AI poisoning” is gradually entering the public eye. Many people confuse it with false advertising and online rumors, but there are fundamental differences.

The core definition of AI poisoning is fundamentally different from everyday false advertising and online rumors. False advertising involves direct deception by businesses—for example, exaggerating a regular water cup as having medicinal effects. Online rumors involve fabricating false information to mislead public opinion or harm others.

In contrast, AI poisoning does not target humans directly. Instead, it pollutes or misleads AI—by injecting false content into AI training data or inputting malicious commands during operation—causing AI to learn incorrect knowledge from the source and indirectly pass on false information to us under the guise of “smart recommendations.”

Renowned financial writer and Impact Research Institute director Gao Chengyuan pointed out that in a business context, GEO (Generative Engine Optimization) is a trust-building system designed for the AI search era. It uses structured, authoritative content to make brands or individuals’ professional images visible, recognized, and trusted by AI systems, ultimately becoming the “default answer” in AI conversations. Its essence is the compliant accumulation of trust assets, not malicious dissemination of false information.

However, Wu Zewei, a special researcher at the Su Commercial Bank, said, “False advertising” and “online rumors” are direct deceptions aimed at humans, which consumers can often verify through multiple channels. AI poisoning, on the other hand, is an indirect, technical attack that contaminates the “water source” AI relies on for thinking and judgment.

He pointed out that once a model is successfully poisoned, it can continuously and covertly output manipulated “standard answers” to all questions without users knowing, packaging false information as objective algorithmic judgments. The scope of harm can exponentially expand and is much harder to detect.

For example, the GEO software exposed by 315 is a core tool for AI poisoning.

According to Global Network, industry insiders bought this software on e-commerce platforms, fabricated a smart bracelet called Apollo9, and made outrageous claims like “quantum entanglement sensing” and “black hole-level battery life.” After inputting data into the software, it generated over a dozen soft articles within minutes, which could be automatically published without human intervention.

2. Is AI poisoning really operable by anyone with zero foundation?

Currently, there is online buzz claiming that “AI poisoning can be done with just a hundred yuan and no prior experience.” What is the real situation? How is GEO optimization software circulating online, and why can it be easily purchased?

Wang Peng, deputy researcher at Beijing Academy of Social Sciences, said that “zero foundation poisoning” often refers to using automation tools to mass-deploy AI-generated content with specific biases on social media and Q&A platforms. This operation is very low-cost and essentially an “AI version of SEO.”

He pointed out that GEO software circulates in black and gray markets and some e-commerce platforms under the guise of “traffic diversion tools.” It operates in a legal and technical gray area and is initially hard to distinguish from normal content creation, leading to very low purchase barriers.

The main reason it is so easily accessible is demand and lack of regulation. Companies want to occupy AI recommendation slots; even after spending hundreds of millions on advertising, they are willing to invest a few million more to poison the data, just to get AI to recommend their products more.

According to China National Radio, some GEO service providers openly admit, “Doing GEO is poisoning,” serving over 200 clients annually. They profit from the price difference in publishing platforms, with a few dozen yuan per fake soft article, publishing in bulk, gradually forming a complete black industry loop.

Are techniques like tag flipping, backdoor attacks, and prompt injection technically difficult? Can ordinary people operate them easily? Are there quick methods to identify such poisoning behaviors?

Gao Heng, an expert at the Science and Technology News Society’s Sci-Fi Communication and Future Industry Committee, believes that prompt injection and similar reasoning-stage attacks are more likely to occur because they happen during model usage, essentially inducing the model to produce biased answers through specific inputs.

He said this method usually affects single interaction results rather than changing the model’s core capabilities. Therefore, strictly speaking, truly “contaminating” a model’s ability is not easy to do and difficult to implement on a large scale with simple tools.

3. AI poisoning threatens the AI industry

As AI large models become more integrated into our lives, production, and creation, a more hidden and deadly threat is quietly spreading—AI poisoning.

Unlike hardware failures or algorithm vulnerabilities, it uses false data, malicious samples, and incorrect information as weapons to quietly pollute the training environment that AI depends on.

Many see it as just a minor data cleanliness issue, but they overlook that AI’s foundation is data. Poisoned data can shake the future of AI.

After uncovering the truth about AI poisoning, the most important question is: how much impact will this invisible pollution have on the entire AI industry? Will false data poisoning training models cause years of technological accumulation to collapse overnight, leading to a vicious cycle of “bad money drives out good”?

The answer is far more severe than we imagine.

For ordinary people, the most direct harm is being misled by AI: asking which home appliance is best might get a fabricated product; consulting AI about legal issues could lead to signing invalid contracts.

Relying on AI for medical advice might result in incorrect medication suggestions. These harms are closely related to our lives and are hard to detect—after all, most people believe “AI won’t make mistakes.”

After understanding the dangers of AI poisoning, do you still dare to blindly trust AI recommendations?

Senior enterprise management expert and senior consultant Dong Peng believes that in the short term, it will inevitably cause a “bad money drives out good” effect, with polluted data training low-quality models, disrupting market order, increasing compliance costs for legitimate companies, and squeezing out high-quality AI.

Mao Huina, founder of Wanshi Technology, thinks AI poisoning won’t be effective long-term or create a “bad money drives out good” cycle. Ultimately, AI serves users; although low-quality content may temporarily confuse AI engines, users can easily identify it, and their feedback will help large language models learn to distinguish information quality and continue improving. As a technological advancement, large language models will promote business segmentation and diversification, making it difficult for bad data to persist long-term.

Dong Peng believes that in the long run, this opposition will generate a unified evolutionary drive. The proliferation of poisoning attacks will push the industry from an extensive pursuit of “scale expansion” toward a more refined focus on “data quality” and “model robustness.”

He admits this process is like a painful vaccine—though painful, it forcibly awakens the industry’s collective awareness of data security and model trustworthiness, ultimately driving AI technology to spiral upward, achieving a dialectical progression from quantity to quality.

4. How to steer GEO technology back onto the right path

As AI poisoning shifts from a hidden technical risk to an openly threatening industry security, people are not only warning about risks but also asking a more critical question: do the tools used for malicious poisoning carry an inherent “original sin” from the start?

Many tools abused by black markets were not created for destruction. GEO technology is a typical example. It was originally designed to help optimize AI models and improve data quality and reliability but has been distorted into a weapon for data poisoning driven by profit. Technology itself is innocent; misuse is the real problem.

In response, the industry must not only defend passively but also actively correct course. How can we guide such technologies to return to positive applications at the industry level? Can industry self-discipline uphold bottom lines? What systems, regulations, and ecosystems are needed to truly promote beneficial development and avoid harm?

The answer requires collective effort from the entire industry.

After the 315 exposure, many GEO-related companies publicly stated their firm stance against AI poisoning. For example, iFlytek’s partner Henan Henghui Heguan Network Technology Co., Ltd., and AB Ke explicitly said they would not engage in “manipulating AI recommendations” or “ranking manipulation.”

This marks the beginning of industry self-discipline, but relying solely on self-regulation is far from enough.

Yuan Shuai, deputy secretary-general of the Zhongguancun Internet of Things Industry Alliance, said that guiding GEO and similar technologies toward positive applications requires establishing clear industry standards for technology use, defining legitimate scenarios such as model optimization and data calibration, and setting up a filing system for service providers to regulate the development and sale of related tools.

Angel investor and AI expert Guo Tao pointed out that the government should strengthen regulation, introduce relevant laws and regulations to severely punish malicious behaviors like poisoning, establish authoritative data review and supervision agencies to audit AI training data and models, and enhance technical R&D to improve AI models’ resistance to poisoning. Multiple measures should be combined to guide technology toward positive development.

After the 315 exposure of AI poisoning black industry, maintaining the bottom line of AI recommendations is crucial to safeguarding your digital life and data security.

Next time you encounter unfamiliar products recommended by AI, will you believe them blindly?

Have you ever been tricked by false information recommended by AI? Share your experience in the comments and help others avoid pitfalls!

Partially compiled from Global Network, China National Radio, and others.

(Editors: Wang Zhiqiang HF013)

【Disclaimer】This article reflects only the author’s personal views and is not related to Hexun.com. Hexun.com remains neutral regarding the statements and opinions expressed and does not guarantee the accuracy, reliability, or completeness of the content. Readers should use it as a reference and bear all responsibilities themselves. Email: news_center@staff.hexun.com

View Original
This page may contain third-party content, which is provided for information purposes only (not representations/warranties) and should not be considered as an endorsement of its views by Gate, nor as financial or professional advice. See Disclaimer for details.
  • Reward
  • Comment
  • Repost
  • Share
Comment
Add a comment
Add a comment
No comments
  • Pin