Spring 2024 update to our State of AI Safety in China report
As the China-US intergovernmental dialogue on AI begins today in Geneva, Concordia AI is excited to present an update to our State of AI Safety in China report, first published in October 2023. The updated report is best viewed as a slide deck and is also available as a PDF.
2024 will be a pivotal year for international AI governance, with the AI Seoul Summit in late May, China-US talks, a future China-France dialogue, the Shanghai World AI Conference and Global AI Governance High-Level Meeting in July, and more. We believe that an improved understanding of China’s rapidly evolving views and positions on AI safety and governance is essential to the success of these upcoming endeavors.
Key takeaways:
The relevance and quality of Chinese technical research for frontier AI safety have increased substantially, with growing work on frontier issues such as LLM unlearning, misuse risks of AI in biology and chemistry, and evaluating “power-seeking” and “self-awareness” risks of LLMs.
Chinese researchers have published nearly 15 technical papers on frontier AI safety per month on average over the past 6 months. The report identifies 11 key research groups that have authored a substantial portion of these papers.
China’s decision to sign the Bletchley Declaration, issue a joint statement on AI governance with France, and pursue an intergovernmental AI dialogue with the US indicates a growing convergence of views on AI safety among major powers compared to early 2023.
Since 2022, 8 Track 1.5 or Track 2 dialogues on AI have taken place between China and Western countries, 2 of which focused on frontier AI safety and governance.
Chinese national policy and leadership statements show growing interest in developing large models while seeking to balance development with risk prevention.
Unofficial expert drafts of China’s forthcoming national AI law contain provisions on AI safety, such as specialized oversight for foundation models and stipulating value alignment of AGI.
Local governments in China’s 3 biggest AI hubs have issued policies on AGI or large models, primarily aimed at accelerating development while also including provisions on topics such as international cooperation, ethics, and testing and evaluation.
Several influential industry associations established projects or committees to research AI safety and security problems, but their focus is primarily on content and data security rather than frontier AI safety.
In recent months, Chinese experts have discussed several specific AI safety topics, including “red lines” that AI must not cross to avoid “existential risks,” minimum funding levels for AI safety research, and AI’s impact on biosecurity.
To learn more, sign up for our Report Launch Webinar on May 15 at 9 AM ET / 2 PM BST / 9 PM China Time, where experts Jeffrey Ding, Matt Sheehan, Robert Trager, and Angela Zhang will discuss China’s role in AI and more.