Concordia AI at the International AI Cooperation and Governance Forum 2023
Concordia AI moderated and chaired the Frontier AI Safety and Governance Sub-Forum as part of the International AI Cooperation and Governance Forum 2023
On December 9, 2023, Concordia AI moderated and chaired the Frontier AI Safety and Governance Sub-Forum as part of the International AI Cooperation and Governance Forum, organized by Tsinghua University and the Hong Kong University of Science and Technology (HKUST). This post introduces the event and briefly summarizes the talks and panels. As arguably the most significant governance-focused AI conference in China, the Forum offers a valuable window into Chinese thinking on the topic.
Key Takeaways
Guests at the sub-forum included ZHANG Bo (张钹), Honorary Dean of Tsinghua University’s Institute for Artificial Intelligence and one of the founding figures of AI research in China; industry representatives from Anthropic, xAI, and Microsoft Research Asia; think tank representatives from the Center for Strategic and International Studies (CSIS) and The Future Society; academics from Tsinghua University, East China University of Political Science and Law, the University of Hong Kong, HKUST, and the University of Cambridge; and a government representative from the Infocomm Media Development Authority of Singapore.
The guests presented on and discussed topics relating to two questions:
What can the scientific community and AI developers do to support frontier AI safety and governance?
How can national policymakers and the international community work together to promote the benefits and reduce the risks of frontier AI?
A recording of the conference can be found here.
The International AI Cooperation and Governance Forum
Since its inception in 2020, the Forum has been supported by various UN agencies including the United Nations Development Programme (UNDP), the United Nations Educational, Scientific and Cultural Organization (UNESCO), UN Women, and the International Labour Organization (ILO). In past years, the Forum was organized by the Institute for AI International Governance (I-AIIG), an AI governance think tank at Tsinghua University. 2023 was the first year the Forum was held in Hong Kong, jointly with HKUST, capitalizing on Hong Kong’s position as a bridge between China and the rest of the world. This year, the theme of the Forum was “Developing a Global Framework for AI Governance,” likely referencing developments over the past year including the establishment of the UN High-Level Advisory Body on AI, China’s Global AI Governance Initiative, the Bletchley Declaration, and other major initiatives calling for greater international cooperation on AI governance.
The Frontier AI Safety and Governance Sub-Forum
Motivated by the promise and peril of frontier AI – highly capable AI models that can perform a wide variety of tasks – the sub-forum explored how frontier AI could be safely developed, deployed, and governed. Thirteen guests presented on and discussed the role of scientific experts, AI developers, national policymakers, and the international community in frontier AI safety and governance.
Opening remarks: Zhang Bo, Academician of the Chinese Academy of Sciences and Honorary Dean of Tsinghua University’s Institute for Artificial Intelligence
Zhang Bo stated that the rapid rise of generative AI and large models has kindled the beginnings of artificial general intelligence, but that while empowering various industries, large models also bring a series of problems and challenges. Large models represented by GPT show emergent behavior and unexpected capabilities, but they can also hallucinate and lack robustness and self-awareness. Strengthening the governance of large models requires effective technical measures; at the same time, governance measures are needed to prevent abuse and misuse. Additionally, we should aim to develop a “third generation” of AI, establish new interpretable and robust AI theories and methods, and solve the fundamental issues of AI safety.
Part 1: What can the scientific community and AI developers do to support frontier AI safety and governance?
Jimmy Ba, cofounder of xAI on “The Need for Insight in AI”: Jimmy outlined the importance of foresight (understanding how AI models could progress), insight (understanding how LLMs work), and oversight in AI development. Additionally, he suggested guiding students to critically evaluate AI-generated content rather than restricting their access to generative AI. Finally, he emphasized the need for generalists and technical experts to learn from each other on AI governance.
ZHOU Bowen (周伯文), Chair Professor at Tsinghua University and former Senior VP of JD.com on “Supporting the Governance of Foundation Models in Full Life Cycle by the Scientific Community”: Bowen stressed the need to govern the entire AI model lifecycle to manage uncertainties and risks, including existential risks. He suggested that scientists could play a coordinating role between stakeholders in AI governance, participate in governance across the model lifecycle, and invest more in building trustworthy AI.
Michael Sellitto, Head of Global Affairs at Anthropic on “Anthropic’s Responsible Scaling Policy”: Michael presented Anthropic's AI Safety Levels (ASL) framework, which defines escalating security standards as risks increase to ensure responsible AI development. He noted that the burden of ensuring safety increases with more advanced capabilities.
Seán Ó hÉigeartaigh, Director of the AI: Futures and Responsibility Programme at the University of Cambridge on “Open-sourcing Frontier AI Models”: Seán cautioned that open-sourcing AI has benefits but also risks, and that frontier models may not be suitable for open-sourcing. He argued that neither a fully open nor a fully closed approach guarantees safety, and that oversight mechanisms are important.
Panel discussion 1
The four speakers were joined by XIE Xing (谢幸) of Microsoft Research Asia and FU Jie (付杰) of HKUST for a roundtable discussion moderated by Concordia AI CEO Brian Tse. The group discussed topics including frontier AI model evaluations, allocating funding to AI safety R&D, the offense-defense balance in frontier AI systems, and incorporating technical insights into AI policymaking.
Participants strongly agreed that evaluations were important but noted the limitations of current evaluation methods. Xie Xing advocated for interdisciplinary research to develop more reliable evaluations, while Zhou Bowen stressed the difficulty of aligning AI to human values. Regarding AI safety funding, Zhou Bowen predicted that, given the difficulty of alignment, aligning AI models would eventually require more compute than training them, making greater spending on AI alignment desirable.
Some noted that with frontier AI, the offense-defense balance was skewed towards offense, requiring multiple layers of defensive techniques. In closing, panelists gave brief recommendations on what the scientific community and AI developers could do to support frontier AI safety and governance. Proposals included more national efforts like the UK AI Safety Institute to educate policymakers and the public on frontier AI, more inclusive conferences to foster greater international engagement and mutual understanding, more model evaluations, and programs to encourage interdisciplinary research in AI safety and governance.
Part 2: How can policymakers and the international community work together to improve frontier AI safety and governance?
Nicolas Miailhe, Co-founder and President of The Future Society on “Governing the rise of general purpose AI: Taking stock of ongoing international cooperation efforts and possible pathways”: Nicolas proposed a "functionalism" approach for global AI governance, first aligning on goals like peace and prosperity, then coordinating strategies and standards. He argued that international mindsets and structures must adapt cooperatively to meet AI's rapid pace and impacts.
Michael Frank, Senior Fellow at the CSIS Wadhwani Center for AI and Advanced Technologies on “International approaches and possibilities for cooperation on AI governance”: Michael analyzed differences and tradeoffs in AI governance models between the EU's legislation-based risk tiering and the US’s executive order approach. He concluded that international cooperation is still possible and imperative based on collective risks and incentives.
GAO Qiqi (高奇琦), Dean and Professor of the Political Science Institute at East China University of Political Science and Law on “Consensus Governance of LLMs from the Perspective of Intersubjectivity”: Qiqi introduced potential "consensus governance" systems for corporate, national, and global oversight of large language models. He contended that technical alignment alone cannot substitute for multi-stakeholder governance, including third-party auditing, and warned that escalating societal risks may require expanding participative governance and oversight frameworks.
Wan Sie Lee, Director of Data-Driven Tech at Singapore’s Infocomm Media Development Authority on “Evaluation and Testing for Generative AI”: Wan Sie shared an overview of Singapore's AI governance initiatives, focusing on the AI Verify Foundation. She discussed challenges in building capacity for risk-proportionate oversight and in collaborating on testing across models.
Panel discussion 2
The four speakers were joined by CHEN Qi (陈琪), Deputy Director at Tsinghua University’s Center for International Security and Strategy, and Angela Zhang, Director of the Philip K. H. Wong Centre for Chinese Law at the University of Hong Kong, in a roundtable discussion hosted by Brian Tse. Participants discussed topics ranging from balancing innovation with risk management and the role of multilateral organizations to mitigating potentially catastrophic impacts from AI.
Participants agreed that international governance frameworks are needed to categorize AI risks and corresponding response plans, though some cautioned against anchoring too heavily on institutional analogies. Angela Zhang noted that geopolitical tensions are undermining AI safety and governance, leading countries to underregulate AI, and argued that more dialogue and cooperation is crucial. Gao Qiqi asserted that amid the rapid development of AI, it is an “emergency” for the international community to establish clearer definitions of AGI in order to develop more effective governance measures.
There was consensus that even if frontier AI risks were speculative, they could not be ignored. Chen Qi noted that epistemic communities around the world must continue collaborating from the bottom up. Both Angela Zhang and Chen Qi agreed that for international governance efforts to succeed, political rhetoric must shift from competitive frames, such as export controls, towards recognizing common interests. Overall, there was consensus that AI presents challenges that transcend borders and require coordinated global governance based on shared priorities.
Report launch
At the closing of the sub-forum, Concordia AI released the draft report “Best Practices for Frontier AI Safety,” co-authored by Tsinghua University’s Institute for AI International Governance (I-AIIG), Shanghai AI Lab’s AI Governance Research Center, Tsinghua University’s Foundation Model Research Center, and Concordia AI. The report, written in Chinese, details potential best practices for frontier AI developers as well as relevant domestic and international policies. We hope the report can serve as a reference for other institutions seeking to improve the safety of their frontier AI models, and that it encourages mutual learning among different stakeholders. We are gathering feedback from relevant experts on the draft and plan to publish a public version in 2024.
Feedback and Suggestions
Please reach out to us at info@concordia-ai.com if you have any feedback, comments, or suggestions for topics for the newsletter to cover.