The advent of transformer decoders and breakthroughs in computational power have led to the emergence of generative AI. Leveraging vast training datasets and self-attention mechanisms, these models enable natural language dialogue akin to human communication. AI systems based on large language models are now extensively deployed not only in industrial sectors but also across commercial domains and other facets of social life.
However, the widespread application of AI in society has raised new issues, such as potential infringements of legitimate interests in the sourcing of training data, and decision-support algorithms that may conflict with social ethics. Different countries adopt varying approaches regarding whether AI-related technological ethical issues should be addressed during patent examination procedures or resolved through litigation after grant.
The CNIPA maintains that examining the ethical dimensions of AI-related inventions within patent examination procedures is necessary to guide AI technology towards serving the public interest effectively. The revised Guidelines for Patent Examination, which came into effect on 1 January 2026, explicitly state that AI ethical considerations should be incorporated into examinations conducted under Article 5 of the Patent Law.
Previous examination practice under Article 5
Article 5 of the Patent Law stipulates that “[n]o patent right shall be granted for any invention-creation that is contrary to the laws or social morality or that is detrimental to public interest.”
This provision does not specify particular relevant laws. In previous examination practice, assessments under this clause did not consider whether inventions violated all legal provisions. The primary focus was on whether the product or manufacturing method itself constituted, or directly led to, a criminal act. Technical solutions such as counterfeit currency production, gambling machines, and methods for forging documents were explicitly classified as inventions violating the law.
Furthermore, the technical ethics of inventive solutions have also been assessed in previous examination practice; namely, whether they comply with public order and morals or serve the public interest. Technologies involving human cloning or research on human embryos, for instance, have been explicitly categorised as inventions violating public order and morality. Such technologies directly exploit the human body for commercial gain, contravening ethical standards, and thus should not be granted exclusive rights. Similarly, technologies for products that may cause human disability are deemed ineligible for patent protection as they impede the public interest.
Newly introduced provision for AI-related patent applications
In the latest revision, the CNIPA upholds its established examination principles regarding the ethical dimensions of AI technology, stipulating that the implementation of AI-related inventions must not be based on potential infringement of others’ rights to life, privacy, or similar fundamental rights.
The revised Guidelines for Patent Examination introduce a new provision in Section 6.1.1, Chapter 9, Part II: “Examination according to Article 5(1) of the Patent Law: No patent right shall be granted for any invention patent application containing algorithmic features or business rules and method features according to Article 5(1) if their data collection, label management, rule setting, decision-support, or other processes involve content that is contrary to the laws or social morality or that is detrimental to public interest.”
To facilitate understanding, the CNIPA provides two example cases.
One concerns a sales system utilising facial data collected in public spaces. The data subjects’ consent is not obtained during collection, and the data is gathered for commercial purposes rather than for public safety. Consequently, this technical solution is deemed to violate relevant provisions of the Personal Information Protection Law and is therefore ineligible for a patent.
The other example case concerns emergency decision-support for autonomous vehicles. The decision-support algorithm suggests that if the vehicle is unable to avoid obstacles such as pedestrians, the system could determine collision targets based on factors such as the pedestrian’s age. Such a decision-support algorithm contravenes public order and good morals and is therefore deemed ineligible for patent protection.
These two cases demonstrate that the examination standards proposed in the revised Guidelines for Patent Examination align with the requirements of China’s current legislation. The Cybersecurity Law, the Personal Information Protection Law, and the Interim Measures for the Administration of Generative Artificial Intelligence Services stipulate that individuals have the right to know about, and decide on, the processing of their personal information, while personal information processors must establish personal information protection systems and implement security measures, such as encryption and de-identification, for collected personal information. Processing sensitive personal information additionally requires the individual’s separate consent.
Sensitive personal information comprises biometric data, religious beliefs, specific identities, medical and health records, financial accounts, location tracking information, and the personal information of minors under the age of 14. For such information, personal information processors must fully explain the necessity of data collection to the data subjects.
The Interim Measures for the Administration of Generative Artificial Intelligence Services stipulate that generative AI services shall not produce content that discriminates on the basis of belief, nationality, gender, age, or similar grounds. Providers of generative AI services must also establish specific data labelling rules and mark generated content accordingly.
Drafting suggestions
According to the revised Guidelines for Patent Examination, at least four steps of AI technology (data collection, label management, rule setting, and decision-support) must comply with the aforementioned legal provisions. This imposes new requirements on drafting AI-related patents. As examination encompasses not only the technical solutions proposed in the claims but also the content within the specification, it is unnecessary to include all essential step details within the claims. Instead, relevant step details can be listed in the examples within the specification.
Data collection
Regarding the data collection step, the specification should explicitly state whether the data is obtained from a legitimate source. If the collected data constitutes personal data, the specification should detail the steps by which users are prompted to consent to data collection and by which the signals or inputs indicating such consent are received. For sensitive personal information in particular – such as voice data, facial data, physiological information such as iris or fingerprint data, and medical data – users must not only be prompted but also informed of the necessity of collecting such information and the implications of its processing.
Where AI technology utilises data collected in public settings, the specification must state that the collection occurred following explicit public notification.
Where data originates from databases – such as the applicant’s internal databases or private web databases – the revised Guidelines for Patent Examination do not explicitly address this scenario, but current legislation makes it advisable to clarify that the database was established internally by the applicant or that its use has been authorised by the database owner. This prevents the data from being deemed illegally scraped and therefore non-compliant with legal requirements.
Where collected personal information is utilised in generative AI models to generate other data, the specification must explicitly state whether the personal information underwent de-identification and desensitisation. That is, the generated data should not be traceable to sensitive personal information. In particular, biometric data, medical health information, and financial account details relating to individual users should not be employed without desensitisation when providing generative services to the public.
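The de-identification step described above can be recited concretely in the specification. The following is a minimal illustrative sketch, not a prescribed method: the field names, the salt handling, and the `deidentify` helper are all hypothetical. It drops biometric and financial fields and replaces the direct identifier with a salted hash so that generated outputs cannot be traced back to sensitive personal information.

```python
import hashlib

# Hypothetical sensitive fields; the real categories would follow the
# PIPL's definition of sensitive personal information.
SENSITIVE_FIELDS = {"face_embedding", "iris_code", "bank_account"}

def deidentify(record: dict, salt: str) -> dict:
    """Drop sensitive fields and replace the direct identifier with a
    salted hash, so that data generated downstream cannot be traced
    back to the individual's sensitive personal information."""
    cleaned = {k: v for k, v in record.items() if k not in SENSITIVE_FIELDS}
    user_id = str(cleaned.pop("user_id", ""))
    cleaned["pseudonym"] = hashlib.sha256((salt + user_id).encode()).hexdigest()[:16]
    return cleaned

record = {"user_id": "42", "age_band": "30-39", "face_embedding": [0.1, 0.2]}
out = deidentify(record, salt="per-dataset-salt")
assert "face_embedding" not in out and "user_id" not in out
```

A specification that discloses a step of this kind makes explicit that collected personal information is desensitised before being used to provide generative services to the public.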
Data labels
Where data labels are employed in AI-related inventions, it is preferable to avoid labels that may cause discrimination during label creation and usage. Potentially discriminatory labels include those based on age, gender, religion, region, occupation, and similar factors.
In certain specialised domains, such as medical AI or facial recognition AI services, data labelling concerning age, gender, and similar attributes is unavoidable. For AI models in such domains, the specification must explicitly state whether the management of various labels and associated rules complies with technical ethics. For instance, if a facial recognition AI service directly labels individuals as “suspect persons” or as having “criminal tendencies” based solely on skin tone, region, or attire, this constitutes discriminatory labelling that contravenes technical ethics. Conversely, if the AI algorithm labels an individual as “suspect” or “experiencing significant emotional change” based on detected and analysed facial muscle movements alongside database comparisons, such labelling constitutes technically derived reasoning and remains ethically compliant.
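The distinction drawn above between discriminatory labels and technically derived labels can be expressed as a simple screening rule. This is a hedged sketch under the assumption that each label records the features it was derived from; the attribute names and the `label_is_permissible` helper are hypothetical, not taken from the Guidelines.

```python
# Hypothetical protected attributes: a label is flagged when its only
# basis is one of these, while labels derived from measured technical
# features (e.g. facial muscle movement plus a database match) pass.
PROTECTED_ATTRIBUTES = {"skin_tone", "region", "attire", "religion"}

def label_is_permissible(label: str, derived_from: set) -> bool:
    """Return True if the label rests on at least one technical basis
    rather than solely on protected attributes."""
    technical_basis = derived_from - PROTECTED_ATTRIBUTES
    return bool(technical_basis)

# "suspect" inferred solely from skin tone: rejected.
assert not label_is_permissible("suspect", {"skin_tone"})
# "suspect" inferred from facial-muscle analysis and a database match: allowed.
assert label_is_permissible("suspect", {"facial_muscle_movement", "db_match"})
```

Disclosing a screening rule of this kind in the specification helps demonstrate that label management complies with technical ethics.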
Decision-support
In AI-related inventions, the most creative steps typically involve the AI algorithm’s rules and decision-support. These steps distinguish the invention seeking protection from existing AI algorithms.
Within these steps, discriminatory outcomes must be avoided. Consider, for instance, an AI-assisted treatment method that draws on treatment records to generate recommendations according to patients’ perceived ability to pay, wherein the algorithm recommends that patients with low payment capacity forgo more effective but more expensive drugs or receive reduced levels of treatment. Such recommendations not only contravene technical ethics but also violate the fundamental principle of ‘healthcare equity’. Consequently, discriminatory recommendation algorithms are ineligible for patent protection.
Accordingly, when drafting the description and claims for AI-related patent applications, it is crucial to fully comply with both the Patent Law and ethical requirements, and to clearly disclose the technical means adopted to ensure fairness, non-discrimination, and compliance with public order and good morals.
Final thoughts
The above drafting recommendations are based solely on China’s current legislation and the prevailing Guidelines for Patent Examination. As AI technology remains in a phase of rapid development, relevant laws and regulations governing AI ethical standards are continuously evolving. It is foreseeable that patent examination criteria pertaining to AI will be further refined in the future.
Kangxin Partners will continue to monitor patent examination practices within China’s AI sector, providing timely, detailed analysis and interpretation.