Global AI Safety Summit: Possible Topics and China’s Relevance
The UK will host the first global AI safety summit this November, convening governments, top AI companies, and international experts to discuss the safe development and use of frontier AI technologies. We believe this represents a unique opportunity to bring major global powers including China around a table to discuss significant safety risks that are likely to require international coordination.
While China and other major AI powers may well disagree on values and geopolitical issues, we think there are some topics where discussions could be beneficial. In this special issue of Concordia AI’s AI Safety in China newsletter, we outline these topics, explain their importance, and assess the feasibility of productive discussions. We hope this can generate debate and inform the ongoing preparations for the summit.
1. Agreeing on Shared Risks
What might this look like? Discussing the risks associated with frontier AI models and agreeing to a shared statement acknowledging risk mitigation as a global priority.
Why is this important? A common understanding of the risks that all parties face if AI is not developed safely is crucial for enabling coordination to manage those risks. A shared statement could encourage governments and private developers to allocate more resources to risk mitigation and strengthen safety-conscious norms among those developing and deploying frontier AI systems.
How productive are discussions likely to be? The Chinese ambassador’s comments at the UN Security Council meeting on AI in July suggest several concerns that are likely shared by many states, such as the importance of ensuring human control and worries about AI misuse by extremists. In addition, multiple influential Chinese experts have raised concerns around extreme AI risks.
2. Creating a Positive Vision
What might this look like? Building consensus about how greater confidence in the safety of AI systems can help unlock faster and wider deployment of solutions to global problems. Participants could commit to striving for global access to AI-based services that address significant social and environmental challenges.
Why is this important? The greater the mutual benefits from safe AI development, the stronger the incentive for international actors to coordinate on AI safety. Conversely, actors are more likely to race to build their own advanced AI systems — possibly cutting corners on safety — if they believe that they will not be able to benefit from AI solutions developed elsewhere. Moreover, major AI powers arguably have a moral responsibility to ensure that the benefits from AI are distributed fairly across the globe.
How productive are discussions likely to be? Competitive dynamics could limit actors’ willingness to share access to systems that could confer substantial military or economic advantages. However, it seems possible to identify specific domains where committing to sharing safe AI solutions would be mutually beneficial, such as climate change and public health. Leveraging AI for international development could also be an area of overlapping interest. The UK’s AI strategy states an intention to use international collaboration to unlock AI’s potential to accelerate progress on tackling poverty, while China has emphasised the need for equal access to AI to bridge divides between the Global North and South.
3. Sharing Ideas for Governance Mechanisms
What might this look like? Exchanging proposals and best practices for governing frontier AI models, including red-teaming, licensing, and third-party auditing.
Why is this important? Disseminating AI governance proposals and practices to a broad group of global stakeholders would have at least two benefits: identifying potential flaws or challenges through a more diverse set of reviewers and encouraging wider adoption of the best ideas.
How productive are discussions likely to be? China has moved quickly to introduce an algorithm registry and security reviews for certain generative AI systems, giving it existing regulatory infrastructure that could be built upon using ideas discussed at the summit. With the state currently working towards a more holistic AI Law, and influential scholars proposing innovations such as a yearly, externally written “social responsibility report” for foundation models, now is a particularly good window for dialogue. Learning more about the pros and cons of China’s existing governance mechanisms could also help inform the design of governance measures in other jurisdictions.
4. Accelerating Progress on Technical Safety Research
What might this look like? Identifying the most promising ideas for promoting technical advances in areas of AI safety, such as scalable oversight and interpretability. These could include global prizes, funding commitments, and agreements to enable international research collaboration or exchanges.
Why is this important? State-backed initiatives and incentives could help direct a higher proportion of global AI research effort towards safety, increasing the chance of finding technical solutions that ensure the safety and alignment of frontier AI systems. Such incentives could be particularly valuable in countries, such as China, that are home to large pools of AI talent.
How productive are discussions likely to be? There are reasons to believe international collaboration on technical AI safety is possible even amid geopolitical tensions: the majority of AI safety research is currently published openly, and OpenAI CEO Sam Altman has spoken of “great potential for researchers in the US, China and around the world to work together.” Even if large-scale research collaborations between Chinese and certain international actors prove challenging, the summit could be a good platform for channelling competition in a healthy direction by encouraging a race to the top on technical safety research.
5. Coordinating on International Governance Mechanisms
What might this look like? Identifying and agreeing to further discussions on areas of AI governance where international coordination seems valuable and feasible. These might include sharing information about certain safety incidents or criminal uses of AI, and establishing new institutions to fulfil functions such as building expert consensus or setting safety standards.
Why is this important? While many governance mechanisms can be useful if implemented solely at the level of individual AI labs or nations, there are certain areas where international coordination would produce substantial benefits for risk reduction.
How productive are discussions likely to be? Any new international coordination mechanisms would likely require substantial time and effort to achieve agreement. Information-sharing protocols may have to overcome concerns about how safety-relevant information can be shared without divulging sensitive details. Participants might disagree about how far any new institutions should be integrated into existing international organisations such as the UN or OECD. However, international institutions and information-sharing systems in comparable domains such as nuclear power (IAEA), aviation (ICAO), and climate change (IPCC) suggest that future coordination on AI safety is viable. The summit could be a useful early step in that direction.
Written by Concordia AI