
On January 25, 2026, the Cyberspace Administration of China (CAC) concluded its public consultation on the "Interim Measures for the Administration of Anthropomorphic AI Interaction Services." Originally published for comment on December 27, 2025, the draft establishes a prescriptive regulatory regime for artificial intelligence systems designed to mimic human personality or provide emotional companionship. For U.S.-based law firms and multinational corporations, these rules represent a significant expansion of the global AI compliance landscape.
Key Provisions of the Anthropomorphic AI Draft
The draft measures target AI services that simulate human traits through text, audio, video, or imagery. The CAC aims to address the psychological and social impacts of "human-like" AI by mandating specific technical and operational controls:
- Mandatory Disclosure: Providers must issue clear and repeated notifications to users stating they are interacting with an AI, not a human.
- Session Interventions: Systems must include session-time nudges, such as a reminder at the two-hour mark, to prevent over-reliance or addiction.
- Emotional Monitoring: The draft requires automated escalation and intervention when a system detects signals of extreme user emotion or harmful dependence.
- Life-Cycle Safety: Compliance obligations span the entire life cycle of the product, from initial design and deployment to the eventual shutdown of the service.
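The session-intervention requirement can be illustrated with a minimal sketch. Only the two-hour interval comes from the draft text; the constant names, disclosure wording, and helper function below are hypothetical illustrations of how a provider might track what it owes the user.

```python
from datetime import datetime, timedelta

# Only the two-hour mark is stated in the draft; everything else here
# (names, wording) is an illustrative assumption.
AI_DISCLOSURE = "Reminder: you are chatting with an AI, not a human."
NUDGE_INTERVAL = timedelta(hours=2)

def nudges_due(started_at: datetime, now: datetime, already_sent: int) -> int:
    """How many session-time reminders the provider still owes the user."""
    elapsed = now - started_at
    # timedelta floor-division by timedelta yields the count of full intervals
    return max(0, elapsed // NUDGE_INTERVAL - already_sent)

start = datetime(2026, 1, 25, 9, 0)
print(nudges_due(start, start + timedelta(hours=5), already_sent=1))  # → 1
```

A production system would also have to persist the disclosure state across reconnects, since the draft requires the AI-identity notice to be clear and repeated, not one-time.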
Special Protections for Minors and Older Adults
The CAC draft places heavy emphasis on protecting vulnerable demographics. AI services targeting these groups must implement specific safeguards:
- Minors: Providers are required to offer a "minor mode," obtain verified guardian consent, and provide dashboards for guardians to monitor time and spending caps.
- Older Adults: Systems must include emergency contact requirements and are strictly prohibited from simulating the personalities of deceased relatives.
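The guardian dashboard requirement for minors implies a cap-enforcement check somewhere in the provider's stack. A minimal sketch follows; the field names and sample limits are hypothetical, since the draft leaves the specific caps to the guardian rather than fixing statutory values.

```python
from dataclasses import dataclass

@dataclass
class GuardianCaps:
    """Caps a guardian sets via the dashboard (illustrative fields)."""
    daily_minutes: int
    monthly_spend_yuan: float

def within_caps(caps: GuardianCaps, minutes_today: int, spend_this_month: float) -> bool:
    """True while the minor's usage stays inside both guardian-set limits."""
    return (minutes_today <= caps.daily_minutes
            and spend_this_month <= caps.monthly_spend_yuan)

caps = GuardianCaps(daily_minutes=60, monthly_spend_yuan=100.0)
print(within_caps(caps, minutes_today=45, spend_this_month=30.0))  # → True
print(within_caps(caps, minutes_today=75, spend_this_month=30.0))  # → False
```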
From a data perspective, the draft restricts the use of sensitive chat logs for model training without explicit, separate consent. This requirement intersects with existing Chinese personal information laws, necessitating rigorous updates to data processing agreements and privacy policies. Firms utilizing AI-powered legal analysis software to manage international privacy risks will need to account for these specific consent workflows.
Compliance Thresholds and Legal Liability
The draft introduces specific triggers for mandatory safety assessments. Organizations must file for regulatory review if they reach 1,000,000 registered users or 100,000 monthly active users (MAU). Furthermore, app stores and distribution platforms are tasked with verifying a provider’s compliance before listing anthropomorphic AI products.
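The filing triggers reduce to a simple disjunctive check. The sketch below assumes the thresholds are inclusive (reaching the number triggers the obligation), which is a reading of "reach" rather than language confirmed in the draft; the function name is illustrative.

```python
# Thresholds from the draft; whether they are inclusive is an assumption.
REGISTERED_USER_TRIGGER = 1_000_000
MAU_TRIGGER = 100_000

def safety_assessment_required(registered_users: int, mau: int) -> bool:
    """Either threshold alone triggers the mandatory safety-assessment filing."""
    return registered_users >= REGISTERED_USER_TRIGGER or mau >= MAU_TRIGGER

# A service well under the registration trigger can still be caught by MAU.
print(safety_assessment_required(250_000, 120_000))  # → True
print(safety_assessment_required(250_000, 40_000))   # → False
```

Note that the disjunction matters operationally: a companion app with a small registered base but high engagement crosses the MAU trigger first, so both metrics need monitoring.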
For legal departments, this regulatory shift impacts several areas of practice:
- Commercial Contracts: Vendor and reseller agreements must now include representations and warranties regarding compliance with China's anthropomorphic AI standards.
- Product Liability: Requirements for systems to intervene during user distress create new benchmarks for duty of care. If these automated interventions fail, the resulting tort exposure will need careful evaluation, a task a litigation strategy AI tool can support.
- Cross-Border Operations: Legal teams must ensure that their legal workflow automation software reflects the divergence between China's emotional-monitoring mandates and jurisdictions such as the EU, which may restrict emotion-recognition technologies outright.
Conclusion
China's move to regulate emotional and anthropomorphic AI adds a new layer to an already complex global regulatory stack. As these interim measures move toward finalization, legal counsel must prepare for heightened design-level controls and rigorous data-use restrictions. Using an AI brief analysis tool to compare these draft rules against domestic US state laws and the EU AI Act can help firms maintain a consistent global compliance posture while mitigating jurisdictional risk.
