
China’s Approach to AI Hallucination Challenges
Recent years have seen the rise of the Chinese AI industry, with companies delivering cost-effective, open-source models that compete with their Western counterparts.
However, like the rest of the AI sector, these models suffer from hallucination, a problem that has actually worsened in some Chinese models over time. China has taken steps to address it, mainly through its legal framework, but it remains an unresolved challenge.
In February 2026, a Weibo user reported that Tiger Brokers' DeepSeek-integrated assistant produced fabricated figures when analyzing Alibaba's financial report. The incident illustrates how readily AI models hallucinate and the real harm that can follow.
Chinese researchers have begun studying the hallucination phenomenon: a joint study by Fudan University and the Shanghai AI Laboratory established HalluQA, a hallucination benchmark for Chinese language models, and a team from the University of Science and Technology of China and Tencent's YouTu Lab introduced Woodpecker, a tool for correcting AI hallucinations, as sketched below.
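A benchmark of this kind typically pairs each question with answers grounded in a reference source and measures how often a model's output matches none of them. The following is a minimal sketch of that scoring loop; the Item schema, the substring-matching judge, and the stub model are illustrative assumptions, not HalluQA's actual design.

```python
# Minimal sketch of a QA-style hallucination benchmark harness.
# All names and data are hypothetical, for illustration only.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Item:
    question: str
    supported_answers: list[str]  # answers grounded in the reference source

def hallucination_rate(items: list[Item], model: Callable[[str], str]) -> float:
    """Fraction of model answers that match none of the supported references."""
    hallucinated = 0
    for item in items:
        answer = model(item.question).lower()
        # Count the answer as hallucinated if no grounded answer appears in it.
        if not any(ref.lower() in answer for ref in item.supported_answers):
            hallucinated += 1
    return hallucinated / len(items) if items else 0.0

if __name__ == "__main__":
    # Toy data and a deliberately wrong stub standing in for a real model.
    data = [Item("Which company did the report cover?", ["Alibaba"])]
    stub_model = lambda q: "The report covers Tencent's quarterly results."
    print(f"hallucination rate: {hallucination_rate(data, stub_model):.0%}")
```

Real benchmarks replace the substring check with human or LLM-based judging, but the quantity being measured, the share of unsupported answers, is the same.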
Chinese AI companies themselves have yet to take strong technical countermeasures, focusing instead on legal protection. The terms of use of DeepSeek, Qwen 3, and Manus, for example, disclaim responsibility for errors their models generate.
At the state level, China is using laws and policies to hold AI developers accountable and to ensure trustworthiness. The Governance Principles for a New Generation of Artificial Intelligence call for gradually achieving trustworthy AI, and the Interim Measures for the Management of Generative AI Services prohibit the generation of false and harmful content.
China's interest in regulating hallucinations is driven largely by its policy of controlling information flows, which aims to prevent the spread of false information that could undermine state legitimacy.
Despite these efforts, China needs to pay closer attention to the issue, given how its models perform: recent evaluations show that Chinese models still lag behind international counterparts such as GPT-5 and the Claude series.