
Analysis of China’s Guidelines for Registration Review of AI Medical Devices

Xiaoyan Zhou of Purplevine IP explains how to protect IP in the rapidly developing area of AI medical devices in China, and the risks involved

Accelerating digital and AI transformation has become a significant direction of development worldwide. To keep up with this trend, the Chinese government has attached particular importance to medical AI in recent years. In April 2018, the General Office of the State Council issued the Opinions on Promoting the Development of "Internet + Medical Health", which proposed promoting the application of AI in "Internet +" healthcare services.

The Chinese government has also taken a step forward in registration management. On March 9 2022, the Center for Medical Device Evaluation issued the Guidelines for Registration Review of AI Medical Devices (the ‘Guidelines’), which aim to guide applicants in establishing life-cycle quality management for AI medical devices and preparing the registration application documents. The Guidelines also set out the requirements for technical review of AI medical devices, providing a reference for the systematic review of AI medical devices and the quality management of their software.

I. Definition of AI medical devices

It is paramount to clarify the definition of AI medical devices. According to the Guidelines, “AI medical devices” refers to medical devices that apply AI technology to analyse "medical device data" to achieve their intended use, in particular a medical use. The Guidelines also define the scope of medical device data and of AI technology.

Thus, the three key elements of an AI medical device are:

  • Medical device data;

  • AI technology; and

  • Medical use.

If an AI medical product operates based on non-medical device data, or a medical device achieves non-medical uses by using AI technology, it is not considered an AI medical device.

II. Risks of AI medical devices

AI medical devices involve several risks. Understanding these risks, and the concerns of Chinese legislators, helps medical device companies gain a full grasp of the Guidelines.

1. Overfitting and underfitting with an algorithm

Overfitting means an algorithm is trained so closely to the training data set that it begins to learn noise and irrelevant information within it. Underfitting means an algorithm is insufficiently trained and therefore cannot accurately capture the underlying relationship in the data. Both overfitting and underfitting reduce the generalisation capability of the algorithm.
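The distinction can be made concrete with a small numerical sketch (the data and models below are illustrative, not from the Guidelines): a model that memorises every training point achieves zero training error but generalises poorly, while a model that ignores the input performs badly everywhere.

```python
# Toy data where the true relationship is roughly y = 2x plus measurement noise.
train_x = [0, 1, 2, 3, 4]
train_y = [0.2, 1.8, 4.3, 5.9, 8.1]
test_x = [0.5, 1.5, 2.5, 3.5]
test_y = [1.1, 2.9, 5.2, 6.8]

def mse(pred, actual):
    """Mean squared error between predictions and observed values."""
    return sum((p - a) ** 2 for p, a in zip(pred, actual)) / len(actual)

# Underfitting: the model ignores x entirely and predicts the training mean,
# so it cannot capture the relationship in the data at all.
mean_y = sum(train_y) / len(train_y)
def underfit(xs):
    return [mean_y] * len(xs)

# Overfitting: a 1-nearest-neighbour model memorises every training point,
# noise included -- zero training error, but worse generalisation.
def overfit(xs):
    return [min(zip(train_x, train_y), key=lambda t: abs(t[0] - x))[1] for x in xs]

# A properly fitted model: an ordinary least-squares line.
mean_x = sum(train_x) / len(train_x)
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(train_x, train_y))
sxx = sum((x - mean_x) ** 2 for x in train_x)
slope, intercept = sxy / sxx, mean_y - (sxy / sxx) * mean_x
def linear(xs):
    return [slope * x + intercept for x in xs]

for name, model in [("underfit", underfit), ("overfit", overfit), ("good fit", linear)]:
    print(f"{name}: train MSE={mse(model(train_x), train_y):.3f}, "
          f"test MSE={mse(model(test_x), test_y):.3f}")
```

Run on this data, the overfitted model scores a perfect 0 on the training set yet a markedly worse test error than the least-squares line, while the underfitted model is poor on both, which is precisely why regulators focus on generalisation rather than training performance.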

2. Inaccuracy of the clinical decision support system

The inaccuracy of the clinical decision support system may cause false negatives and false positives. False negatives may lead to delays in follow-up treatment. This can be consequential, especially for patients with rapidly progressive diseases. False positives may lead to overtreatment, which does not benefit the patients.

Algorithms used to manage data processing and testing in AI medical devices also risk under- or over-weighting information.
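The false-negative and false-positive risks above are typically quantified from a confusion matrix on a labelled test set. A minimal sketch, using hypothetical counts rather than any real device's figures:

```python
# Hypothetical confusion matrix for a screening algorithm on a labelled test set.
tp, fn = 90, 10    # diseased cases: correctly flagged vs missed (false negatives)
tn, fp = 880, 20   # healthy cases: correctly cleared vs wrongly flagged (false positives)

sensitivity = tp / (tp + fn)   # low sensitivity -> more missed diagnoses
specificity = tn / (tn + fp)   # low specificity -> more overtreatment

# The false-negative rate drives delayed follow-up treatment;
# the false-positive rate drives overtreatment.
fn_rate = fn / (tp + fn)
fp_rate = fp / (tn + fp)

print(f"sensitivity={sensitivity:.1%}, specificity={specificity:.1%}")
print(f"false-negative rate={fn_rate:.1%}, false-positive rate={fp_rate:.1%}")
```

For a rapidly progressive disease, even the 10% false-negative rate in this hypothetical example could be unacceptable, which is why the Guidelines treat avoidance of false negatives and false positives as an explicit assessment requirement.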

3. Issues of imported AI medical devices

There are risks in using imported AI medical devices, given the differences between China and other countries in patient populations and medical standards, including race, epidemiological characteristics, and standards of clinical diagnosis and treatment.

The risks of AI medical devices thus mainly stem from the software, namely:

  • Algorithms;

  • Data; and

  • Decision-making mechanisms.

Among them, the algorithm is at the core of AI medical devices; thus, meticulous attention should be given to the risk management of the algorithms.

III. Analysis of the Guidelines

The development of AI technology is driven by three elements: algorithms, models/data, and computing power. Models and data are the foundation of AI technology, the algorithm is its core, and computing power guarantees its operation.

The generalisation capability of an algorithm (whether it can adapt properly to new data) is the key indicator for assessing the risk level of the algorithm in an AI medical device. Thus, the Guidelines require that the generalisation capability of a product's algorithm meet the requirements before and after launch, and during product renewal.

To manage these risks, the generalisation capability of the algorithm is strictly scrutinised. The Guidelines put forward the following requirements:

1. Data acquisition

  • Adequacy and diversity of data;

  • A scientific and rational data distribution; and

  • Quality control for data collection, data collation, data annotation, data set construction, etc.
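One common quality-control check at the data annotation stage is inter-annotator agreement: if two independent annotators frequently disagree on the same images, the labels are unreliable. A minimal sketch computing Cohen's kappa on hypothetical labels (the annotations below are illustrative only):

```python
# Labels assigned by two independent annotators to the same eight images.
annotator_1 = ["lesion", "lesion", "normal", "normal", "lesion", "normal", "normal", "lesion"]
annotator_2 = ["lesion", "normal", "normal", "normal", "lesion", "normal", "lesion", "lesion"]

n = len(annotator_1)

# Observed agreement: fraction of images labelled identically.
observed = sum(a == b for a, b in zip(annotator_1, annotator_2)) / n

# Expected agreement by chance, from each annotator's label frequencies.
labels = set(annotator_1) | set(annotator_2)
expected = sum((annotator_1.count(l) / n) * (annotator_2.count(l) / n) for l in labels)

# Cohen's kappa: agreement corrected for chance (1 = perfect, 0 = chance level).
kappa = (observed - expected) / (1 - expected)

print(f"observed agreement={observed:.2f}, Cohen's kappa={kappa:.2f}")
```

A low kappa would flag the annotation process for review before the data set is used to train or assess the algorithm; the threshold applied is a design decision for the applicant, not something the Guidelines prescribe.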

2. Algorithm design

  • Clarify the basis of algorithm selection, including the reasons and basic principles of selection;

  • Provide the training data volume-evaluation index curve to prove the adequacy and effectiveness of algorithm training. If it cannot be provided, it is necessary to elaborate on the reasons and provide alternative evidence; and

  • As an important part of software verification, algorithm performance assessment evaluates the algorithm design results on the basis of test data sets. It should comprehensively consider assessment requirements such as the avoidance of false negatives and false positives, repeatability and reproducibility, robustness, and real-time performance, to verify that the algorithm's performance meets its design objectives and to serve as the basis for software verification and validation.

3. Validation and qualification

  • Clinical validation: the evaluation should be based on the core function or the core algorithm, in combination with the intended use and maturity; and

  • A comparative analysis of the algorithm’s performance should be conducted.

IV. How to protect the IP of AI medical devices

Algorithms are central to AI medical devices, and the Guidelines impose requirements on data acquisition, algorithm design, validation, and qualification. Because AI algorithms and their underlying technologies are the core competitive advantage of many AI medical device companies, their IP should be rigorously protected:

  • Given that algorithms as such cannot be patented, AI medical device companies are advised to combine trade secret protection with patent applications. Specifically, an AI algorithm can be protected as technical information, a type of trade secret; this requires the company to set up a thorough technical-information management system and related safeguards. To protect the algorithm further, AI medical device companies should try to frame the algorithm as a module of the AI medical device (for a method patent) so as to obtain patent rights.

  • On the technical side, companies should avoid including immaterial information in the technical features of the claims during patent drafting; doing so facilitates evidence collection for potential infringement proceedings in the future. Furthermore, the technical solution section should focus on the core technology rather than its application. It is therefore recommended, when drafting patents for AI medical devices, to consider how to reflect the innovation of the algorithm or the underlying technology at the application level.

  • AI medical device companies should also diversify their patent portfolios; for example, apply for a method patent for the algorithm, a utility model patent for the mechanical design of the AI medical device, and a design patent for the interface of the AI medical device display. The diversification of the patent portfolio helps to strengthen the protection of an AI medical device and its algorithm.
