Over the past few decades, artificial intelligence (AI) technology has been increasingly popularised and applied across various fields. Today, AI has a significant impact on our lives and has become a research hotspot worldwide.
While AI is welcomed in many ways, its misuse poses potential risks. These include criminal activities carried out with the help of AI, and violations of ethical norms through the unregulated collection of data during the development of certain AI technologies. Beyond existing law, governmental regulation of AI is therefore necessary to support its development while preventing its misuse.
China’s approach to tackling AI misuse
As one of the most active countries in the AI industry, China has made remarkable progress across many aspects of AI, such as speech recognition, image recognition, mapping/semantic analysis, and other AI-related technologies. During the fight against the COVID-19 pandemic in 2020, AI technology was applied in many fields, including diagnosis assistance, epidemic screening and prevention, remote work and education, body temperature detection, factory production, and autonomous vehicles and robots, among other uses. For example, an AI epidemic prevention and control system developed by the Chinese company 4Paradigm was reported to be significantly better than traditional systems at tracing infection paths and screening high-risk groups, with its screening accuracy increasing from 5.8% to 93%.
There is little doubt that AI technology has significant advantages in data collection and data mining. However, the in-depth development of AI also raises disturbing problems, such as excessive data collection and privacy leaks caused by machine learning technology, among other concerns. Fortunately, these problems are receiving increasing attention, and corresponding policies have been created to regulate AI.
China has been an active player in the development of AI, and is also deeply aware of the problems brought by the technology, prompting it to take positive measures. The Chinese government has been making active efforts to establish an AI regulatory system. In June 2017, China's Cybersecurity Law came into force, regulating how network operators, such as service providers, adopt AI practices. The law states that, “when collecting and using personal information, [network operators] shall follow the principles of lawfulness, fairness, and necessity, disclose the collection and use rules, and clearly state the purpose, method, and scope of the collection and use of information, and obtain the consent of the person being collected.”
In January 2021, the National Industrial Information Security Development Research Center issued the white paper on the ‘Development of AI New Infrastructure in 2020’. This focused on two functions of AI-related infrastructure – supporting AI’s sustainable innovation and development, and promoting the transformation and upgrade of traditional industries. At the end of the white paper, it also states the need for “strengthening the ability of risk judgment, prevention and control, and promoting the healthy development of the artificial intelligence technology”, as well as “ensuring the security of the artificial intelligence technology, the security of product, the security of data and the security of application, and preventing the security problems existed in the artificial intelligence itself and the security risks brought by its application”. Such documents show the government’s intention to strengthen supervision of AI.
International approaches to tackling AI misuse
Other countries and regions, including the US and the EU, have also introduced relevant policies for AI.
In November 2020, the White House issued the ‘Guidance for Regulation of Artificial Intelligence Applications’, which “sets out policy considerations that should guide, to the extent permitted by law, regulatory and non-regulatory approaches to AI applications developed and deployed outside of the Federal government.”
The guidance excludes AI developed and used by the government from its scope, and focuses only on ‘weak’ AI, excluding ‘strong’ AI that approaches or even exceeds human perception. Further, the guidance emphasises that regulatory and non-regulatory approaches to AI need not mitigate every foreseeable risk, and that any assessment of risk should compare it to the risk presented by the situation that would obtain absent the AI application at issue.
The guidance also states that if an AI application lessens a risk that would otherwise obtain, any relevant regulations should presumably permit that application.
In February 2020, the European Commission released the white paper ‘On Artificial Intelligence - A European approach to excellence and trust’, which aimed to emphasise the implementation of ‘hard’ supervision of AI on a people-oriented premise. In Part 5 of the white paper, “An ecosystem of trust”, it proposes a regulatory framework for AI.
The Commission concludes, “in addition to the possible adjustments to existing legislation – new legislation specifically on AI may be needed in order to make the EU legal framework fit for the current and anticipated technological and commercial developments.”
AI may always be a double-edged sword: it promotes profound social change, but also creates problems for humans. Government laws and regulations therefore aim to provide support and guidance for the development of AI and, more importantly, to ensure the healthy development of the field. After all, people will rely more on AI in the future, and the relationship between humans and machines will become more complex.
Liu Shen & Associates
T: +86 10 6268 1616
Hong Zhang joined Liu Shen & Associates in 2005 and became a qualified patent attorney in 2007.
Hong specialises in patent prosecution, re-examination, patent drafting, and client counselling, with a focus on data/image processing, electrical engineering, telecommunications, electronics, computer science, electronic design, and the internet.
Hong received a master’s degree from Beijing University of Technology in 2005, majoring in circuit and system. She received professional training at John Marshall Law School in 2011 on matters related to US patent law and practice.
Liu Shen & Associates
T: +86 10 6268 1616
Guanghao Zou joined Liu Shen & Associates in 2017 as a patent engineer. Before joining the firm, he worked as a telecommunication algorithm engineer at Huawei.
Guanghao provides IP-related services such as patent drafting, re-examination, and patent litigation for several clients. He handles IP affairs in the fields of telecommunication, electrical and electronics, computer science, semiconductor technology, and signal and information processing, among other areas.
Guanghao received a master’s degree from the University of Electronic Science and Technology of China in 2015, majoring in signal and information processing.