Track 1.5 and Track 2 dialogues have long been a mechanism for exchange on issues of common concern between nations.1 They can foster trust and increase mutual understanding among participants, as well as help formulate and refine policy solutions. During the Cold War, the Pugwash Conferences on Science and World Affairs, held regularly since 1957, served as a pivotal venue for reducing nuclear risks – an effort for which the organization and its longtime leader Joseph Rotblat received the Nobel Peace Prize in 1995.
In recent years, distinguished experts including Henry Kissinger, former US Assistant Secretary of Defense Graham Allison, former Chinese Vice-Minister of Foreign Affairs FU Ying (傅莹), and former US State Department Policy Planning Director Anne-Marie Slaughter have recommended exploring scientist-to-scientist Track 2 dialogues or a Pugwash Conference model for AI. Given growing international consensus on potential AI risks – evident in the Bletchley Declaration, the Xi-Biden meeting at APEC, and joint papers and statements by Chinese and Western scholars – 2024 seems to be an especially good window for progress on identifying technical and governance mitigants to AI risks.
This post provides an overview of existing Track 1.5 and 2 dialogues on AI between China and the US, the UK, and European countries.2 We compiled a “Table of Existing China-Western Track 1.5 and 2 AI Dialogues” covering dialogues that took place in or after 2022, which can be referenced for acronyms of dialogue participants. We also analyze the key attributes of existing dialogues, assess gaps in the landscape, and close with recommendations. We recognize that dialogue organizers and funders balance many considerations, such as funding, location, and political incentives, but hope that our recommendations can still help (potential) organizers and participants in the field consider how dialogues can be more effective.
Key findings:
We identified 8 China-Western AI-focused dialogues that took place in or after 2022, of which just 2 focused on frontier AI safety and governance. This is a small proportion of all China-Western dialogues, which number 40+ between China and the US alone, with many more involving the UK and EU.
Participants in current dialogues are mostly foreign policy and military experts, with a much smaller proportion of academic scientists, industry representatives, or experts from other domains that intersect with AI risks (e.g. biosecurity and cybersecurity).
Current dialogues are generally oriented towards political and scientific consensus-building.
We suggest areas for future discussion, including international institutions to govern frontier AI, information sharing mechanisms for AI threats, and other emerging sources of risk.
Key characteristics of AI-related dialogues
Methodology: We divided dialogues into 4 categories.
China-Western Track 1.5 and 2 dialogues are dialogues that involve organizations based in China on one side, and organizations based in the US, UK, or EU on the other. We refer primarily to dialogues that involve groups of participants, are institutionalized, and/or intend to meet more than once; a single roundtable or instance of scholarly exchange (e.g. a US professor visiting Chinese universities) would not qualify, nor would conferences that lack institutionalized dialogue. Recent research by RAND and The Wire China has identified 30+ and 40+ US-China Track 1.5 and 2 dialogues, respectively, that appear to align with our definition. This number is an undercount, as it does not include the likely substantial number of additional dialogues between China and institutions in the UK or EU, and likely omits many confidential dialogues.
We define AI-related dialogues as dialogues that appear to have around 15-50% of their content focusing on AI. We found 12 such dialogues.
We code 8 dialogues as AI-focused, in that the dialogue’s name mentions AI or the dialogue appears to devote more than 50% of its content to AI.
We found evidence of just 2 frontier AI safety and governance dialogues – ones that appear to extensively discuss risks of frontier AI systems, including the potential for foundation models or narrow AI systems in dangerous domains (e.g. biological synthesis) to be misused or escape human control.
Our data is based entirely on official public readouts of dialogues or media reporting about them. To judge what proportion of a dialogue focused on AI, we assessed whether, based on those sources, AI was a major topic of discussion and whether other, non-AI topics were also discussed. These are ultimately subjective judgements, but we tried to be as precise as possible in our coding.
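To make the decision rule explicit, below is a minimal sketch in Python of how our category definitions fit together. It is purely illustrative – the actual coding was a manual judgement based on readouts, not an algorithm – and the field and function names are our own, not part of any existing tool.

```python
# Illustrative sketch of the coding scheme described above. The thresholds
# mirror our definitions; the real coding was done by hand from readouts.
from dataclasses import dataclass

@dataclass
class Dialogue:
    name: str
    est_ai_share: float          # estimated share of content focused on AI (0.0-1.0)
    name_mentions_ai: bool       # does the dialogue's name mention AI?
    covers_frontier_risks: bool  # extensive discussion of misuse/loss-of-control risks?

def code_dialogue(d: Dialogue) -> str:
    """Return the most specific category a dialogue qualifies for.

    Note that frontier AI safety dialogues are also AI-focused; this
    function simply reports the narrowest applicable label.
    """
    if d.covers_frontier_risks:
        return "frontier AI safety and governance"
    if d.name_mentions_ai or d.est_ai_share > 0.5:
        return "AI-focused"
    if 0.15 <= d.est_ai_share <= 0.5:
        return "AI-related"
    return "general China-Western Track 1.5/2 dialogue"
```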
There are few known AI-focused dialogues in the overall China-Western Track 1.5 and 2 dialogue ecosystem. Among publicly known dialogues involving just China and the US, only 2-4 of the 40+ China-US dialogues are AI-focused, and just 2 of those relate to frontier AI safety.3 Meanwhile, overall interest in China-Western AI dialogues seems to be increasing, with the number of AI-focused dialogues rising from 4 in 2020-2021 to at least 8 today.4 Military AI is the most common topic and the focus of 3 of the 8 AI-focused dialogues, covering topics such as lethal autonomous weapons (LAWs) and nuclear command and control. In addition to those 3, the CISS-Brookings dialogue initially focused solely on military AI, but has since broadened. Interest in frontier AI safety and governance has also increased since 2023: both dialogues focused on frontier AI were founded in 2023, and other dialogues that do not focus on frontier AI, such as the CISS-Brookings dialogue and the Sino-EU Cyber Dialogue, appear to have begun including AI safety in a portion of their discussions starting in 2023.
Given the small number of AI-focused dialogues and general early stage of policy discussions in the field, there is room for stakeholders to fill in the gaps. However, simply increasing the number of dialogues without ensuring quality and filling a neglected niche could also create competition for a limited number of relevant stakeholders on each side, reducing the overall productivity of dialogues.
Due to the length of the Table of Existing China-Western Track 1.5 and 2 AI Dialogues, we are including a preview below. You can access the full table here.
Gaps among dialogues and tentative suggestions
It is difficult to offer confident suggestions for China-Western AI dialogues because many details of existing dialogues have not been publicly documented, and there remains a strong possibility that there are nonpublic exchanges Concordia AI is not aware of. Nevertheless, we hope that this analysis can assist current and prospective convenors in enhancing the value of their dialogues. Overall, the majority of dialogues appear to involve policy think tanks, former foreign policy officials, and/or former military officials, and are mostly oriented towards increasing mutual understanding and consensus. Examples include the CISS-Brookings dialogue, CACDA-INHR dialogue, and HD dialogue. We outline below various groups of stakeholders in AI policy that appear underrepresented and could offer contributions, especially in dialogues seeking to solve specific, technical problems.
Few dialogues seem to involve academic AI scientists. The IDAIS dialogue and CAS-Royal Society dialogue are the only 2 organized by AI scientists. Some of the other dialogues may have some level of technical participation; however, scientific dialogue has not been prioritized. The Pugwash Conferences provide a precedent for scientist-to-scientist dialogue on catastrophic risks. While China-Western scientist-to-scientist dialogues do occur in other venues, such as international machine learning conferences, top frontier AI safety workshops such as the New Orleans Alignment Workshop and the 2023 Center for Human-Compatible AI (CHAI) Workshop had few publicly listed China-based speakers. IDAIS in particular appears to be a strong project focusing on frontier AI safety, with support from prominent AI scientists in China and the West. Judging from their respective readouts, the IDAIS and CAS-Royal Society dialogues have thus far focused more on consensus building. Future iterations or new projects in this vein could therefore build upon that consensus to find technical solutions to issues including standards for red-teaming, verification mechanisms for potential international AI treaties, and information sharing mechanisms about safety risks. Furthermore, scientific dialogues are likely easier to set up than dialogues involving industry or politically connected actors, given the strong norms around international scientific and academic collaboration.
Alternatively, it could be more straightforward and effective to set up such exchanges on the sidelines of major machine learning conferences, such as NeurIPS, ICML, IJCAI, or AAAI, without needing to coordinate a formal dialogue.5 This approach is also worth further exploration. However, it has downsides: the participant pool would be limited to scientists attending the same conference, which varies from year to year and may not always include top scientists; time on the sidelines of conferences is scarce due to the large number of competing social and professional events; and foreign (especially Chinese) participants may have difficulty obtaining visas, particularly for conferences in the US. Other forms of academic collaboration would also help to fill this gap and could be easier to set up than a formal dialogue.
Few of the dialogues appear to include experts from other domains that intersect with AI risk. The Bletchley Declaration articulated concerns about misuse of AI in biological synthesis and cyberattacks in particular. Researchers from the US and China have picked up on that concern to research how AI could enable biological and/or chemical risks. Given that the risks of AI misuse constitute shared global threats that would require international cooperation, it would be beneficial to involve experts in biosecurity, chemical weapons, or cybersecurity. Some of the dialogues that discuss the interaction of AI with nuclear command and control likely already include nuclear domain experts, such as the INHR and European Leadership Network dialogues. However, there do not appear to be any dialogues that include biological or cyber domain experts, apart from the recently announced Nuclear Threat Initiative International AI-Bio Forum and a recent grant for INHR’s dialogue to work on AI-bio issues. While the Carnegie Endowment for International Peace and Shanghai Institutes for International Studies previously held dialogues on US-China cybersecurity and cyber usage in nuclear command and control, discussion of AI was limited, and both dialogues appear to have concluded.
None of the dialogues seem to involve participants from standards-setting bodies or seek to jointly develop international standards. Dialogues could explore jointly promoting international standards, for instance by engaging the US National Institute of Standards and Technology (NIST) and the US AI Safety Institute under it, as well as China’s TC28 technical committee, TC260 technical committee, or the government-affiliated China Electronics Standardization Institute, to develop international standards on watermarking AI-generated content. While international standards are ultimately developed at organizations such as the IEC and ISO, bilateral discussion among technical experts on risk management frameworks, safety testing, red-teaming practices, etc. could be an important driver of bilateral or international cooperation.6 An October 2023 joint mapping exercise of domestic governance frameworks between Singaporean and US government bodies seems like a good precedent to draw from.
Industry involvement seems to be limited in existing dialogues. The Shaikh Group’s dialogue appears to be the only public dialogue that strongly features AI industry actors, while the NCUSCR dialogue has some industry presence. Some of the other dialogues may also have some industry presence, but participant lists are not fully public. Given the importance of private companies in developing cutting-edge AI capabilities, these companies have relevant expertise to contribute to discussions around mitigating frontier AI risks. In addition, they play important roles in implementing regulation or making voluntary commitments around safety.
State-backed labs, such as Peng Cheng Lab and Shanghai AI Lab, do not appear to be included in any dialogues.7 These labs’ closer relationship with the state could introduce greater political sensitivities and pose a barrier to dialogue. However, they are key players in China’s AI industry: they receive potentially large amounts of government funding, develop notable AI models (such as InternLM), and possess networks spanning industry, academia, and government. Some lab leaders appear to have substantial policy influence: Peng Cheng Lab director GAO Wen (高文) briefed China’s Politburo on AI in 2018, and both Beijing Institute for General AI (BIGAI) head ZHU Songchun (朱松纯) and Shanghai AI Lab former director TANG Xiao’ou (汤晓鸥) are or were members of a major policy advisory body.8 Some of these lab leaders have also shown interest in AI safety.
Retired officials and government-adjacent participants are concentrated primarily in the diplomatic and security, rather than scientific, domains. A number of dialogues involve retired officials on both sides, such as the CISS-Brookings dialogue, CACDA-INHR dialogue, and China-US Green Fund-NCUSCR dialogue. However, these retired officials appear to come primarily from diplomatic or military backgrounds. Far fewer participating retired officials appear to come from science policy backgrounds, such as the US Office of Science and Technology Policy (OSTP) or NIST on the US side, and the Ministry of Science and Technology (MOST), Cyberspace Administration of China (CAC), or Ministry of Industry and Information Technology (MIIT) on the Chinese side. Meanwhile, government-affiliated convenors on the Chinese side tend to have ties to the foreign policy and security apparatus rather than science and technology policy organs. For instance, some Chinese-side convenors or participants are from the PLA-affiliated National University of Defense Technology (NUDT), policy-focused Chinese Academy of Social Sciences (CASS), security-affiliated China Institutes of Contemporary International Relations (CICIR), and foreign affairs-focused China Arms Control and Disarmament Association (CACDA). Organizers should consider prioritizing the addition of government-affiliated organizations with more technical expertise and influence on AI policymaking, such as the MIIT-overseen China Academy of Information and Communications Technology (CAICT), MOST-overseen Chinese Academy of Science and Technology for Development (CASTED), and CAC-overseen Cybersecurity Association of China (CSAC).
Suggestions for policy dialogue topics
We list some key open questions in international AI governance below. Gearing dialogues to build consensus on and seek solutions to such problems could be especially impactful.
How can governments set up information sharing mechanisms for AI risks? The Chinese, US, UK, and European governments have agreed in the Bletchley Declaration that AI poses misuse and loss-of-control risks – in particular, potential for misuse in cybersecurity and biotechnology. Chinese government documents have also noted the risk of AI misuse by terrorist or criminal organizations. Sharing information about security incidents or “near misses” could improve safety and reduce the likelihood of new incidents. However, governments may face practical constraints around sharing information, such as lack of trust and the potential dual-use nature of AI in cybersecurity and biology. These constraints merit further exploration, given that sharing information to combat transnational risks may become desirable in the future.
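To make the idea more concrete, below is a hypothetical sketch of what a minimal shared incident record might contain. Every field name here is our own illustrative assumption – there is no existing or proposed standard behind it – and a real mechanism would need to negotiate what information may be shared at all.

```python
# Hypothetical sketch of a minimal shared AI incident record. All fields are
# illustrative assumptions, not an existing or proposed reporting standard.
from dataclasses import dataclass, field

@dataclass
class AIIncidentReport:
    incident_id: str       # opaque identifier assigned by the reporting party
    reporting_party: str   # e.g. a national AI safety institute
    date_observed: str     # ISO 8601 date, e.g. "2024-03-15"
    domain: str            # e.g. "cybersecurity", "biosecurity"
    severity: str          # e.g. "near miss", "contained", "harm occurred"
    summary: str           # non-sensitive description of what happened
    mitigations: list[str] = field(default_factory=list)  # steps taken or advised
    # Deliberately omitted: model weights, exploit details, and other dual-use
    # specifics, reflecting the sharing constraints discussed above.
```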
What should an international governance regime for AI look like? A number of proposals for global AI governance have circulated, drawing on analogies to existing institutions such as the Intergovernmental Panel on Climate Change (IPCC), CERN, or the International Atomic Energy Agency (IAEA). The UN High-Level Advisory Body on AI also assessed subfunctions for international AI governance, proposing a focus on scientific assessment, horizon scanning, and risk classification for the next 6-12 months, as well as building toward capabilities such as standard setting and enforcement beyond 12 months. However, governments will likely need to agree first on which governance functions are better performed internationally versus domestically. For instance, should international institutions play a role in articulating standards for AI-driven bias and discrimination, conducting scientific risk assessments, enforcing limitations on proliferation of advanced models, etc.? Some common threats might only be effectively mitigated if tackled together, while flexibility for different domestic approaches may make sense on topics where cultural approaches differ substantially. Given the lack of clarity on these questions, bilateral dialogues could elicit views from Chinese and Western experts on what governance setups would be acceptable to their respective governments.
What are emerging sources of risk as model capabilities increase? As models become increasingly advanced, their threat profile could change significantly. For instance, there is currently substantial debate around whether open-sourcing frontier AI models is unsafe today and whether it will increase or decrease safety in the long term. However, because open-source models do not respect international borders, their governance requires global buy-in, and discussing the pros and cons of different approaches between Chinese and Western stakeholders would help lay the foundation for future cooperation on this issue. Another topic that is currently more speculative but could require international cooperation down the line is red lines for shutting off systems that are autonomous above a certain threshold.
Conclusion
We think that China-Western dialogues on AI are an important mechanism for improving coordination to reduce frontier AI risks. This post is a preliminary effort to help current and prospective convenors or participants get a bird's eye view of the landscape to further optimize their efforts. There has been meaningful progress over the past year, and we are hopeful that this momentum will continue. Individuals and organizations that would like to discuss our findings in greater depth are welcome to reach out to us directly at info@concordia-ai.com.
Track 2 dialogues involve purely nongovernmental participants, while Track 1.5 dialogues involve participation of government as well as civil society or the scholarly community.
Due to scoping limitations, we did not explore dialogues between China and other countries in Asia, such as Japan, South Korea, and Australia.
The count of two to four depends on whether the Shaikh Group dialogue and the International Dialogue for AI Safety (IDAIS) are coded as US dialogues. The Shaikh Group dialogue is convened by a non-US organization but seems to involve several US organizations. One of the two IDAIS Western convenors is based in the US.
It is possible that this data reflects a rebound of dialogues post-COVID following a reduction of dialogues during COVID, rather than increasing interest. However, that appears unlikely because we found only one other AI-focused dialogue before 2022 that has not continued (involving the Harvard Berkman Klein Center).
NeurIPS is the Conference on Neural Information Processing Systems, ICML is the International Conference on Machine Learning, IJCAI is the International Joint Conference on Artificial Intelligence, and AAAI is the Association for the Advancement of Artificial Intelligence.
IEC is the International Electrotechnical Commission and ISO is the International Organization for Standardization.
The Shaikh Group’s dialogue did involve unspecified “Chinese state-backed groups” that could include such labs.
The policy advisory body is the Chinese People's Political Consultative Conference.