The rapid development of artificial intelligence (AI) technology has led to revolutionary changes across various industries, but it has also given rise to new challenges in security and regulation.
First, in terms of data security and privacy, AI models often require large amounts of personal and sensitive data for training and operation, significantly increasing the risk of data leakage and privacy infringement. In application areas such as facial recognition, voice recognition and data mining, issues of data security and privacy protection are particularly prominent.
Second, ethical and social bias issues in AI models are drawing increasing attention. If the training data contains biases, AI models are likely to inherit and amplify them, leading to unfair or even discriminatory decisions in application. Additionally, many advanced AI models, such as deep learning models, operate as "black boxes", lacking transparency and explainability in their decision-making, which could have serious consequences in critical areas such as healthcare, the judiciary and finance.
In addition, new types of adversarial attacks have begun to emerge. These attacks can mislead AI models by making minor input changes, resulting in incorrect or potentially dangerous outputs. This not only poses a threat to individual users but may also impact overall societal safety.
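The "minor input changes" described above can be illustrated with a toy example. The sketch below uses a hypothetical linear classifier (not any system mentioned in this article) and a perturbation in the style of the fast gradient sign method: each input feature is nudged by a small, bounded step against the model's score, which is enough to flip the prediction even though the input barely changes.

```python
# Toy linear classifier: score = sum(w_i * x_i); positive score => class 1.
# Weights and input are illustrative values, chosen only for this sketch.
w = [1.0, -2.0, 0.5]
x = [0.3, 0.1, 0.4]

def predict(v):
    """Return the predicted class (0 or 1) for input vector v."""
    return int(sum(wi * vi for wi, vi in zip(w, v)) > 0)

def sign(t):
    return (t > 0) - (t < 0)

# FGSM-style perturbation: for a linear model the gradient of the score
# with respect to the input is simply w, so stepping each feature by
# eps against sign(w) maximally lowers the score within an eps budget.
eps = 0.2
x_adv = [xi - eps * sign(wi) for wi, xi in zip(w, x)]

print(predict(x), predict(x_adv))  # the small perturbation flips the class
```

Each feature moves by at most 0.2, yet the classifier's decision reverses, which is why such attacks are hard for users to detect by inspecting the input alone.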
As AI technology becomes increasingly prevalent in content creation, autonomous driving, medical diagnosis and other fields, legal and compliance issues are growing in importance. There is currently no unified global AI regulatory framework, making compliance complicated for multinational companies and research institutions.
Governance in an emerging field
Countries around the world have gradually started formulating and implementing regulations and rules on AI. In the US, for example, AI is regulated at both the federal and state levels. The federal government regulates primarily through specialised agencies such as the Federal Trade Commission (FTC) and the National Institute of Standards and Technology (NIST), focusing mainly on data security, privacy and ethical issues.
In January 2023, NIST released the Artificial Intelligence Risk Management Framework (AI RMF 1.0), which aims to guide organisations developing and deploying AI systems in reducing security risks, avoiding bias and other negative consequences, and enhancing the trustworthiness of those systems.
The European Union (EU), on the other hand, places greater emphasis on the ethics and privacy protection of AI, with the General Data Protection Regulation (GDPR) being a typical example. Additionally, the EU has released a series of AI ethical guidelines and plans to introduce a comprehensive AI regulatory framework in the coming years.
In Asia, Japan and South Korea focus more on technical standards and ethical guidelines, while Singapore aims to attract international AI companies through flexible regulatory measures. As one of the countries with the fastest development in AI technology, China has given high priority to AI safety regulation. Over the past year or so, China has successively introduced a series of regulatory measures targeting AI.
China’s strides in AI regulation
On 1 March 2022, China introduced its first nationwide specialised regulation for AI, the Provisions on the Administration of Algorithm-generated Recommendations for Internet Information Services, which standardises the use of algorithm recommendation technologies when providing online services within China.
On 25 November 2022, China released the Provisions on the Administration of Deep Synthesis in Internet Information Services, which stipulates that providers and technical supporters of deep synthesis services should inform and obtain separate consent from individuals whose biometric information, such as faces and voices, is being edited.
Most notably, on 13 July 2023, seven national departments including the Cyberspace Administration of China jointly announced the Interim Measures for the Management of Generative Artificial Intelligence Services (hereinafter referred to as the "Measures"). The Measures officially took effect on 15 August 2023, aiming to regulate a broader range of generative AI technologies. This is the first time China has made explicit provisions for the research and development and services of generative AI.
The Measures adopt a positive stance towards generative AI services, repeatedly encouraging innovative applications of generative AI technologies across various industries and fields. They aim to generate positive, healthy and uplifting high-quality content; explore optimised application scenarios; and build an application ecosystem.
Additionally, the Measures encourage the independent innovation of foundational technologies such as generative AI algorithms, frameworks, chips and supporting software platforms. They also promote equal and mutually beneficial international exchanges and cooperation and participation in the formulation of international rules related to generative AI.
Among these, three aspects of the Measures are particularly worth our attention.
Classification and grading of applications
First, the Measures emphasise a classification and grading management mechanism for generative AI services (AIGC), signalling the approach that subsequent regulatory mechanisms will take towards different types of risk. Through classification and grading, more targeted regulatory measures can be applied to different types of AIGC applications, ensuring that regulation is both effective and specific.
Strengthening industry ecosystem
Second, the Measures focus on nurturing the AIGC industry ecosystem, especially the construction of generative AI infrastructure and public training data resource platforms. They stress the collaborative sharing of computational resources to improve the efficiency of their utilisation.
By promoting the orderly opening of classified and graded public data, expanding high-quality public training data resources, and encouraging the use of secure and trustworthy chips, software, tools and computational resources, a more stable and reliable infrastructure support can be provided for the development of the AIGC industry.
Domestic and international exchanges and cooperation
Third, the Measures emphasise domestic and international exchanges and cooperation. They delineate the scope of application according to whether services are provided within China or overseas, and carve out business types that do not provide services domestically, thereby clarifying the regulation's reach.
For foreign investment in generative AI services, compliance with relevant foreign investment laws and administrative regulations is required. Strengthening international cooperation and exchanges can better introduce advanced technologies and services from abroad and also better promote China's AIGC industry to the international market.
Protection of minors and others
In addition, the Measures place a strong emphasis on the protection of minors, requiring effective measures to prevent underage users from becoming overly dependent on or addicted to generative AI services.
In terms of supervising generative AI, the Measures also provide that relevant authorities may conduct supervision and inspection of generative AI services according to their responsibilities. Service providers should cooperate and, as required, explain the source, scale, type, annotation rules and algorithmic mechanisms of their training data, and offer necessary technical and data support and assistance.
Furthermore, the Measures cover multiple aspects of protection, such as personal privacy and trade secrets. For example, organisations and personnel involved in the safety assessment and supervision of generative AI services should keep confidential any state secrets, trade secrets, personal privacy and personal information they become aware of during the course of their duties, and should not disclose or unlawfully provide them to others.
Balancing AI regulation and AI development
Overall, the Measures focus on risk prevention while also incorporating certain fault tolerance and error correction mechanisms, enhancing the feasibility of implementation and better achieving a dynamic balance between AI regulation and AI technology development. This provides valuable insights for other countries in formulating safety and regulatory provisions related to AI.
In summary, regions at the forefront of the AI field, such as North America, Europe and Asian nations including China, Japan, South Korea and Singapore, are rapidly developing and refining AI regulatory frameworks. For many developing countries, however, AI regulation is still in its nascent stage, with the main challenge being how to strike a balance between technological innovation on the one hand and ethics and safety on the other. Some countries have begun to formulate basic AI strategies and policies, but most have yet to establish a comprehensive regulatory framework.
In addition to domestic regulatory measures, international organisations such as the United Nations, the World Economic Forum and the Organisation for Economic Co-operation and Development (OECD) also need to actively promote global cooperation in AI regulation. These organisations primarily focus on the impact of AI on the global economy, society and security, and are committed to building a fair, transparent and sustainable global AI ecosystem.
Looking ahead, regulations governing AI worldwide will continue to evolve and improve to adapt to the rapid development and global application of AI technology. At the same time, the regulatory process needs to balance multiple aspects such as technological innovation, privacy protection, ethics and social responsibility to ensure the sustainable development and application of AI technology.