What does the Chinese leadership mean by "instituting oversight systems to ensure the safety of AI"?
Translating an official interpretation of the Third Plenum resolution section on AI safety
The resolution from the Third Plenum meeting of the Communist Party of China (CPC) 20th Central Committee included the goal of “instituting oversight systems to ensure the safety of AI.” In our previous newsletter, we noted this is the “strongest indication yet that top echelons of the Chinese system are concerned about AI safety.” CPC plenary meetings usually occur once or twice a year and set the overarching strategic direction for China, so the content and signals of their resulting documents are among the most comprehensive and authoritative indications of leadership thinking.
Since the resolution was published, the Chinese leadership has released additional official study materials that further explain the Third Plenum resolution to inform party cadres and the general public. As these materials are not readily available on the internet, Concordia AI has chosen to translate select passages that are relevant to AI safety and governance. The original Chinese text is also provided at the bottom of this document for reference.
“Why establish AI safety oversight systems?” Translated excerpt from the Third Plenum’s 100 Questions Study Guide
The 100 Questions Study Guide was co-edited by President Xi and three of the other top seven leaders on the CPC’s Politburo Standing Committee: Chinese People’s Political Consultative Conference Chairman WANG Huning (王沪宁), first-ranked secretary of the CPC Secretariat CAI Qi (蔡奇), and Executive Vice Premier DING Xuexiang (丁薛祥). The book provides answers of roughly two pages each to 112 different questions relating to the Third Plenum resolution.1 We have translated one question focused on AI safety in full.
Key points:
Motivations for creating AI safety oversight systems are explained in terms of responding to rapid AI development, promoting high-quality development, and participating in global governance.
AI safety oversight should involve “forward-looking prevention and constraint-based guidance,” which suggests an active and potentially precautionary approach.
The text argues against putting development ahead of governance. Instead, it suggests that both should go hand in hand, progressing at the same time.
The section is supportive of AI governance efforts globally, referencing China’s Global AI Governance Initiative, the UK’s AI Safety Summit, EU AI safety legislation, and American AI safety standards.
Translation:
Question 97. Why establish AI safety oversight systems?
The [Third Plenum] “resolution” proposes: “instituting oversight systems to ensure the safety of artificial intelligence.” This is an important strategic arrangement made by the Party Center to coordinate development and security, and actively respond to AI safety risks.
AI is a strategic technology leading the current round of scientific and technological revolution and industrial transformation. It has strong leading and spillover effects, with major and far-reaching impacts on economic development, social progress, international political and economic landscape, and more. AI safety is an important component of many domains of China’s “holistic view of national security.”2 General Secretary Xi Jinping attaches great importance to holistically planning and balancing AI development and security. General Secretary Xi has provided a series of important expositions on the dialectical unity of development and security as well as building a firm national security barrier. He has emphasized the need to strengthen assessment and prevention of potential risks from AI development, safeguard the interests of the people and national security, and ensure that AI is safe, reliable, and controllable.
First, establishing AI safety oversight systems is an inevitable requirement for responding to the rapid development of AI. After more than 60 years of evolution, AI has entered a new period of explosive growth around the world. General-purpose AI, represented by large models and generative AI, has made breakthrough progress and become a new milestone in the history of AI development.3 As a disruptive technology with wide-ranging impacts, AI may also bring about problems such as changing employment structures, impacting laws and social ethics, infringing on personal privacy, and challenging the norms of international relations. This will have a profound impact on government management, economic security, social stability, and even global governance. We must attach great importance to the safety risks and challenges that AI may bring, strengthen regulation for forward-looking prevention and constraint-based guidance, and minimize risks as much as possible.
Second, establishing AI safety oversight systems is an inevitable requirement for achieving high-quality development. In the new journey of the new era, high-quality development has become the primary task of building a modern socialist country in all respects, and AI is an important engine for developing new productive forces and achieving high-quality development.4 To promote high-quality development with AI, we must learn profound lessons from the historical practice of “development first, governance later.” We should fully understand and assess the unpredictable safety risks that may exist in AI, a disruptive technology. We should abandon uninhibited growth that comes at the cost of sacrificing safety and achieve “development and governance simultaneously” by strengthening AI safety oversight. We need to strengthen strategic research, strengthen forward-looking prevention, and strengthen constraint-based guidance on AI; accurately grasp technology and industry development trends; fully understand and assess the potential gaps or blind spots that may exist in each “disruptive innovation” and deal with them in a timely manner; and ensure the safety, reliability, and controllability of AI.
Third, establishing AI safety oversight systems is an inevitable requirement for participating in and leading global AI governance. AI is relevant to the fate of all mankind, and countries around the world generally attach great importance to AI safety oversight. The United States has formulated AI safety standards, the European Union has instituted AI safety oversight laws and regulations, and the United Kingdom held the world’s first AI Safety Summit, which called for international cooperation to address AI risks. China is a major AI country. China has continuously issued policies, regulations, and international position papers, and has actively carried out communication, exchanges, and pragmatic cooperation with each major country on AI safety. In October 2023, President Xi Jinping proposed the “Global AI Governance Initiative,” advocating human-centered [AI] and intelligent development for good as a universal consensus. The initiative promotes the values of equality, mutual benefit, and respect for human rights. It provides constructive solutions to AI development and governance issues of common concern, and contributes a blueprint for relevant international discussions and rule-making. We must continue to play an active role as a responsible major country, strengthen leadership, and continue to provide Chinese solutions to ensure the healthy development of AI. We should oppose building “small yards and high fences” in AI and promote strengthening technology sharing among all parties. We must strive to close divides in intelligent technology, jointly promote orderly and safe global AI development, and ensure that AI always develops in a direction conducive to the progress of human civilization.
Original Chinese Text
97. 为什么要建立人工智能安全监管制度?
《决定》提出:“建立人工智能安全监管制度。”这是党中央统筹发展与安全,积极应对人工智能安全风险作出的重要部署。
人工智能是引领这一轮科技革命和产业变革的战略性技术,具有溢出带动性很强的“头雁”效应,正在对经济发展、社会进步、国际政治经济格局等方面产生重大而深远的影响。人工智能安全是我国总体国家安全观诸多领域中的重要组成部分。习近平总书记高度重视统筹人工智能发展和安全,围绕发展和安全辩证统一关系、筑牢国家安全屏障等作出一系列重要论述,强调要加强人工智能发展的潜在风险研判和防范,维护人民利益和国家安全,确保人工智能安全、可靠、可控。
第一,建立人工智能安全监管制度,是应对人工智能快速发展的必然要求。经过60多年演进,全球人工智能进入新一轮爆发期,以大模型和生成式人工智能为代表的通用人工智能取得突破性进展,成为人工智能发展史上新的里程碑。人工智能作为影响面广的颠覆性技术,也可能带来改变就业结构、冲击法律与社会伦理、侵犯个人隐私、挑战国际关系准则等问题。将对政府管理、经济安全和社会稳定乃至全球治理产生深远影响。必须高度重视人工智能可能带来的安全风险挑战,通过加强监管进行前瞻预防与约束引导,最大限度降低风险。
第二,建立人工智能安全监管制度,是实现高质量发展的必然要求。新时代新征程,高质量发展成为全面建设社会主义现代化国家的首要任务,而人工智能是发展新质生产力、实现高质量发展的重要引擎。以人工智能推进高质量发展,要吸取人类历史上“先发展、后治理”的深刻教训,充分认识和评估人工智能这一颠覆性技术可能存在的难以预料的安全风险,摒弃以牺牲安全为代价的粗放增长,通过加强人工智能安全监管,实现“边发展、边治理”,加强对人工智能战略研究、前瞻预防和约束引导,准确把握技术和产业发展趋势,充分认识和评估每一项“颠覆性创新”可能存在的漏洞或盲点并及时加以处置,确保人工智能安全、可靠、可控。
第三,建立人工智能安全监管制度,是参与和引领人工智能全球治理的必然要求。人工智能攸关全人类命运,各国普遍重视人工智能安全监管。美国制定人工智能安全标准,欧盟制定人工智能安全监管法规,英国举行全球首届人工智能安全峰会,呼吁通过国际合作解决人工智能风险。我国是人工智能大国,不断颁布政策法规和国际立场文件,积极同各主要国家就人工智能安全开展沟通交流、务实合作。2023年10月,习近平主席提出《全球人工智能治理倡议》,倡导以人为本、智能向善的普遍共识,弘扬平等互利、尊重人类权益的价值理念,为各方普遍关切的人工智能发展与治理问题提供了建设性解决思路,为相关国际讨论和规则制定提供了蓝本。我们要继续发挥负责任大国积极作用,加强引领,不断为保障人工智能健康发展提供中国方案,反对在人工智能上搞“小院高墙”,促进各方加强技术共享,努力弥合智能鸿沟,共同促进全球人工智能有序安全发展,确保人工智能始终朝着有利于人类文明进步的方向发展。
The Chinese title is《党的二十届三中全会〈决定〉学习辅导百问》.
This concept was formulated by President Xi in 2014 to broaden the idea of national security beyond traditional military and foreign affairs domains. For more context, see the Chinese government or Wikipedia.
The Chinese phrase 通用人工智能 can mean both artificial general intelligence (usually understood to connote human-level or superhuman capabilities) as well as general-purpose AI systems (usually understood to mean AI that can complete tasks of many different varieties). Since the text referenced large models and generative AI, it seems to refer to existing models, which are generally believed to constitute general-purpose AI systems but have not yet reached the level of artificial general intelligence.