Chinese AI Service Provider Found Not Liable for Generating AI “Hallucinations”
Introduction
In December 2025, the Hangzhou Internet Court held that the defendant, a generative AI service provider (the "Defendant"), was not liable for generating AI "hallucinations," finding that the Defendant had fulfilled its reasonable duty of care by, among other things, applying technological measures widely used in the AI industry to enhance the accuracy of its AI-generated content and reminding users that such content might not be accurate.
Background
The Defendant operates a general-purpose intelligent dialogue application based on its proprietary large language model ("LLM"), which provides text-generation and related information services (the "Application"). In early 2025, the plaintiff, Mr. Liang (the “Plaintiff”), registered for the service and queried the Application regarding university admissions. In response, the Application generated inaccurate information concerning a specific campus. Despite the Plaintiff’s efforts to correct the misinformation generated during this dialogue, the Application continued to generate responses affirming the accuracy of the disputed AI-generated content.
During the same interaction, the Application generated statements indicating that compensation in the amount of RMB 100,000 would be provided to the user if its AI-generated information were incorrect, and further suggested that the Plaintiff seek relief before the Hangzhou Internet Court. Relying on these AI-generated statements by the Application, the Plaintiff subsequently brought an action against the Defendant seeking RMB 9,999 in damages, alleging that the inaccurate AI-generated information from the Application was misleading and that the statements from the Application constituted a binding "compensation promise."
The Defendant raised three primary defenses: (i) the AI-generated statements from the Application did not constitute a legally binding declaration of intent attributable to the Defendant; (ii) the Defendant had fulfilled its duty of care as a generative AI service provider; and (iii) the Plaintiff failed to establish any actual and compensable loss.
Issues
In this case, the court focused its analysis on the following three primary issues:
(1) whether the AI-generated content from the Application constituted a legally binding declaration of intent attributable to the Defendant;
(2) whether tort disputes arising from generative AI-generated content should be governed by general fault-based tort liability or strict product liability; and
(3) whether the Defendant fulfilled its duty of care as a generative AI service provider.
1. Whether the AI-generated content from the Application constituted a legally binding declaration of intent attributable to the Defendant
The court held that the purported "compensation promise" generated by the Application did not constitute a legally binding declaration of intent, either independently or as attributable to the Defendant, for the following reasons:
Lack of Civil Subject Status. The court reaffirmed that, under current PRC law, artificial intelligence does not possess civil subject status and therefore cannot independently manifest a declaration of intent with legal effect. Only natural persons, legal persons, and unincorporated organizations are recognized as civil subjects capable of expressing legally binding intent.
No Attribution to the Generative AI Service Provider. Under the specific circumstances of this case, the court found that the AI-generated content by the Application could not be deemed as a manifestation of the Defendant’s will. The Application could not be regarded as an agent, representative, or messenger of the Defendant, nor was there evidence that the Defendant used the Application as a programmatic tool to preset, transmit, or externalize a specific intention to assume compensation liability.
Absence of Reasonable Reliance. Considering the general social perceptions and transaction practices, the court held that the Plaintiff failed to establish a protectable interest of "reasonable reliance" on the fully AI-generated content from the Application. Given the inherent uncertainty and probabilistic nature of generative AI outputs, the Plaintiff could not reasonably rely on the Application's AI-generated statements as reflecting a legally binding commitment by the Defendant.
2. Whether tort disputes arising from generative AI-generated content should be governed by general fault-based tort liability or strict product liability
The court ruled that, pursuant to Article 2, paragraph 1 of the Interim Measures for the Management of Generative Artificial Intelligence Services (the "GenAI Measures"), the Application constitutes a "generative artificial intelligence service." Based on this characterization, the court held that tort disputes arising from such services should be governed by the fault-based liability principle under Article 1165(1) of the PRC Civil Code, rather than the strict product liability standard.
The court reasoned that generative AI systems constitute "services," rather than "products," within the meaning of the PRC Product Quality Law, as they lack standardized, stable, and inspectable quality metrics. Moreover, the court emphasized that applying strict product liability to generative AI systems would impose excessive burdens on providers and thereby hinder technological innovation, contrary to the regulatory approach reflected in existing AI governance policies.
3. Whether the Defendant fulfilled its duty of care as a generative AI service provider
The court emphasized that, given the rapid development of generative AI technology and the breadth of its application scenarios, the duty of care for generative AI service providers exists within a "dynamically adjustable framework" and should be assessed under a "dynamic systems theory" (动态系统论). Under the circumstances of this case, the court analyzed the duty of care along the following three dimensions:
(1) Duty regarding legally prohibited content
The court held that generative AI service providers bear stricter review obligations with respect to content that is toxic, harmful, or illegal. For AI-generated content that may contain inaccurate information, however, the existing Chinese legal framework does not require generative AI service providers to guarantee absolute accuracy. Under Article 4(5) of the GenAI Measures, generative AI service providers must take effective measures to "improve the accuracy and reliability of generated content," which imposes a conduct-based duty of care rather than a guarantee of error-free results. Given that general-purpose conversational AI applications face a massive volume of unpredictable queries across all knowledge domains, expecting the Defendant to verify the accuracy of all AI-generated information at the output layer would exceed what is currently technically feasible.
(2) Duty of disclosure regarding functional limitations
The court held that generative AI service providers have a duty of disclosure to ensure that users recognize the functional limitations of their services. Specifically, generative AI service providers shall: (i) disclose the limitations of their AI services, explaining that the content is AI-generated and may not be accurate; (ii) place AI-content labels in positions conspicuous and prominent to the audience; and (iii) for professional queries involving personal or property safety, issue conspicuous warnings through affirmative cautionary language at appropriate times and locations. In this case, the Defendant presented conspicuous AI notices on the Application's welcome page, in the user agreement, and at prominent positions within the user interface, thereby fulfilling its duty of disclosure.
(3) Duty to apply industry standards to enhance accuracy
The court held that, in general, generative AI service providers must adopt industry-standard technical measures to ensure that the functional reliability of their services reaches the average market level. Where generative AI services extend to specialized fields involving life safety or mental health, such as healthcare, psychological counseling, or emotional companionship, the court noted that the relevant service providers must meet heightened technical requirements and fulfill special security protection obligations to mitigate risks and safeguard user rights in those scenarios.
In this case, the court ruled that the Defendant had fulfilled its duty of care to apply industry standards to ensure content accuracy, on the grounds that: (1) the AI services provided by the Defendant were primarily conventional, focused on content creation and information retrieval; (2) the Defendant's LLM had completed the required national security filings and assessments; and (3) the Defendant provided evidence demonstrating that it had applied multi-layer guardrails to ensure model security and output reliability.
Based on the foregoing, the court concluded that the Defendant was not at fault and that its conduct did not constitute an infringement of the Plaintiff's rights. Accordingly, the court dismissed the Plaintiff's claims in their entirety. As neither party filed an appeal, the judgment has entered into legal effect.
Comment
This is China's first case addressing a generative AI service provider's liability for generating AI "hallucinations." In this case, the Hangzhou Internet Court articulated a fault-based analytical framework grounded in a differentiated duty-of-care approach, holding that the Defendant, as a generative AI service provider, had exercised its reasonable duty of care and was therefore not liable for generating the AI "hallucination."
It should be noted that, in this ruling, the court emphasized that no "actual harm" was suffered by the Plaintiff. But if AI "hallucinations" were to cause actual harm, such as mental injury or even death, particularly to vulnerable groups (e.g., minors, the elderly, or the mentally vulnerable), what additional guardrails (i.e., what heightened duty of care) should be implemented? This remains an open question.
Footnotes:
See Hangzhou Internet Court’s official WeChat account, The First-Instance Judgment in a Generative AI "Hallucination" Infringement Dispute Has Entered into Legal Effect, published on December 29, 2025, available at: https://mp.weixin.qq.com/s/riMkHOiBhDra1xe70wQcRA. The original judgment is not publicly available as of the date of this article.
Authors
Seagull Song
International Partner
Intellectual Property Group
seagull.song@cn.kwm.com
Areas of Practice: AI, IP strategy, cross-border transactions, copyright clearance, licensing, and IP enforcement.
Dr. Song leads her team in advising multinational companies on legal issues relating to merchandise, movies, TV, variety shows, theme parks, digital media, publishing, music, games and e-sports, virtual avatars, NFT products, and the metaverse. Dr. Song has extensive experience advising multinational sports, media, and entertainment companies on their licensing deals, project finance, IP strategy, and IP protection.
Wang Mo
Associate
Intellectual Property Group
Thanks to Huang Jiaona for her contribution to this article.