Concordia AI’s mission is to ensure that AI is developed and deployed in a way that is safe and aligned with global interests. We further our mission by:
Promoting international understanding and cooperation on AI safety;
Improving AI risk foresight and governance in Chinese AI policy; and
Strengthening the technical safety research community in China.
We had a busy 2023! In the post below, we have compiled our highlights from the past year.
1. Promoting international understanding and cooperation on AI safety
Attended the inaugural AI Safety Summit at Bletchley Park as one of the four non-governmental Chinese attendees.
Invited by Michelle Donelan, the UK Secretary of State for Science, Innovation and Technology, to deliver remarks at the closing plenary on the first day.
Advocated for China's involvement in the Summit through public writing.
Invited to meet with relevant embassies and organizers ahead of the Summit.
Published a comprehensive 150+ page report on the “State of AI Safety in China,” covering Chinese domestic governance, international AI governance, technical safety research, expert views, lab/corporate governance, and public opinion on AI risks.
Provided presentations based on the report to senior staff/leadership at organizations including the Brookings Institution, Center for Strategic and International Studies, Google DeepMind, the Frontier Model Forum, and Tony Blair Institute for Global Change.
Featured in Politico's Digital Future Daily and quoted by Sixth Tone and the ChinAI newsletter.
Invited to participate in expert panels at the University of Hong Kong Philip K. H. Wong Centre for Chinese Law’s Regulating Generative AI Conference, Oxford Blavatnik School of Government’s Global State of AI Policy event, and Harvard’s upcoming China Law Symposium.
Facilitated meetings between UC Berkeley’s Stuart Russell, MIT’s Max Tegmark, and University of Cambridge’s David Krueger in Beijing with Chinese AI experts including former Baidu President ZHANG Ya-Qin (张亚勤) and his Tsinghua AIR lab, leadership at Beijing Academy of AI, and leadership at Tsinghua University’s Institute for International AI Governance, among others.
Published the official Chinese translation of the consensus statement from the International Dialogue on AI Safety, which was co-convened by Turing Award winners Yoshua Bengio and Andrew Yao, as well as Professors Stuart Russell and Zhang Ya-Qin.
Invited Andrew Yao, Zhang Ya-Qin, and two deans from top Chinese universities, XUE Lan (薛澜) and GAO Qiqi (高奇琦), to co-author "Managing AI Risks in an Era of Rapid Progress" alongside Geoffrey Hinton, Yoshua Bengio, Dawn Song, and others. Produced the official Chinese translation of the commentary.
Attended the inaugural Singapore AI For Global Good Conference, where Deputy Prime Minister Lawrence Wong announced Singapore’s National AI Strategy 2.0. Co-chaired the session on “Mitigating Catastrophic Risks & Ongoing Harms from AI.”
Attended a Track 2 dialogue on AI regulation and governance hosted by the Paul Tsai China Center at Yale Law School in partnership with the Institute of Law, Chinese Academy of Social Sciences.
Led the English translation of, and commentary on, an expert draft of China's upcoming national AI law coordinated by the Institute of Law, Chinese Academy of Social Sciences, which was published on Stanford's DigiChina project.
Launched the biweekly AI Safety in China Newsletter, publishing 12 posts that drew over 10,000 views and over 580 subscribers (including readers at leading AI labs, governments, think tanks, and notable media publications) in the first 5 months.
Launched the Chinese Perspectives on Risks from AI website with 10 expert profiles.
Submitted a global AI policy memo to the UN Global Digital Compact in March 2023 and an essay to the UN Secretary General’s Call for Papers on Global AI Governance in September 2023.
2. Improving AI risk foresight and governance in Chinese AI policy
Co-hosted and moderated a half-day forum on Frontier AI Safety and Governance during the International AI Cooperation and Governance Forum co-organized by Tsinghua University and Hong Kong University of Science and Technology (HKUST) in December 2023.
Invited 13 experts including ZHANG Bo (张钹), Honorary Dean of Tsinghua University’s Institute for Artificial Intelligence and one of the founding figures of AI research in China; industry representatives from Anthropic, xAI, Microsoft Research Asia; think tank representatives from the Center for Strategic and International Studies (CSIS) and the Future Society; academics from Tsinghua University, East China University of Political Science and Law, the University of Hong Kong, HKUST, and Cambridge University; and a government representative from the Infocomm Media Development Authority of Singapore.
Released a draft Chinese report for expert feedback titled “Best Practices for Frontier AI Safety: Research and Development Practices and Policy Construction Guide for Chinese Institutions.” The finalized version of the 70+ page report, incorporating those expert views, was published in January 2024.
Participated in an international closed-door meeting on global AI governance and bilateral dialogues on AI during the conference.
Moderated the AI governance panel at the Boao Forum for Asia's Global Economic Development and Security Forum.
Released a 60+ page Chinese report on “Frontier Large Model Risks, Safety, and Governance” and interviewed by Hunan Daily Press at the Forum.
Selected as deputy chief expert of the AI Safety Governance Committee and deputy chair of the AI Governance Working Group in China's Artificial Intelligence Industry Alliance.
Presented at or participated in key workshops on frontier AI governance in China, including:
Large Model Value Alignment Workshop organized by Tencent Research Institute;
Science and Technology Ethics Governance Sub-forum organized by the China Academy of Information and Communications Technology, a think tank supervised by the Ministry of Industry and Information Technology;
AI Safety and Security Risks and Legal Rules seminar organized by SFC Compliance Technology Institute and the Institute of Law, Chinese Academy of Social Sciences.
Provided input on subsequent iterations of an expert draft of China's upcoming national AI law, coordinated by the Institute of Law, Chinese Academy of Social Sciences.
Submitted policy suggestions to relevant Chinese ministries and departments, including the Ministry of Industry and Information Technology, the Cyberspace Administration of China, the Beijing Municipal Government's Science and Technology Commission, and the Shanghai Municipal Government's Science and Technology Commission.
Published the "Safety and Global Governance of Generative AI" report in English and Chinese, with 29 essays from over 40 policymakers, industry practitioners, and experts in and outside China. The essays analyzed generative AI's risks and benefits from the perspectives of global governance, developing countries, engineering, and companies. Commissioned by the Shenzhen Association for Science and Technology and the World Federation of Engineering Organizations - Committee on Engineering for Innovative Technologies (WFEO-CEIT) to serve as Chief Editor of the report.
Grew our WeChat platform to over 3,300 subscribers, publishing articles including a comprehensive Chinese-language series on AGI risk and alignment, the first Chinese landscape of LLM safety and alignment job opportunities, and a detailed Chinese-language analysis of the FLI pause letter and the Center for AI Safety’s risk statement.
3. Growing and supporting the technical safety research community in China
Co-hosted and moderated a full-day forum on AI Safety and Alignment during the Beijing Academy of AI (BAAI) conference in June 2023.
Invited 14 experts including Geoffrey Hinton, Sam Altman, Stuart Russell, and Andrew Yao, along with others from institutions including Anthropic, DeepMind, Tsinghua University, University of Cambridge, Peking University, and BAAI.
Reviewed and published the Chinese translation of Brian Christian’s latest book, The Alignment Problem, during the conference.
Co-authored AI Alignment: A Comprehensive Survey alongside top Chinese AI scientists including GAO Wen (高文), ZHU Song-Chun (朱松纯), and GUO Yike (郭毅可).
Organized the first AI Safety and Alignment Fellowship program in China.
Recruited 23 top graduate students and industry researchers from institutions including Peking University, Tsinghua University, Beijing Academy of AI, Amazon, and ByteDance.
Co-organized the AI Safety and Alignment Reading Club with Swarma Club (集智俱乐部), a leading Chinese community at the intersection of complexity science and AI.
Presented at or participated in key workshops on frontier AI safety and alignment in China, including:
“Superalignment” workshop organized by the Beijing Academy of AI with participation by OpenAI Superalignment Team Co-Lead Jan Leike;
AI/Large Model Value Alignment Workshop co-organized by Shanghai AI Lab Governance Research Center and Fudan University Technology Ethics and Future of Humanity Research Institute;
AI Safety Workshop organized by the China Computer Federation Young Computer Scientists & Engineers Forum (CCF YOCSEF).
Participated in the World Artificial Intelligence Conference (WAIC) in Shanghai in July 2023.
Jointly released a Chinese-language AI Alignment Failure Database with leading Chinese AI media publication Synced Review (机器之心).
Co-organized the 2023 IJCAI-WAIC Large Model and Technological Singularity: Humanities and Science Face-to-Face Summit.
Co-hosted Shanghai AI Lab’s first AI safety and alignment speaker series.
Organizational updates
In 2023, Concordia AI was certified as a social enterprise (社会企业) under a new policy administered by the Beijing Civil Affairs Bureau. This certification recognizes that Concordia AI's primary mission is to solve pressing social issues, with a commitment to spend at least 35% of after-tax profits on projects with a public purpose. We are one of the only social enterprises in the AI sector in Beijing, and the only social enterprise focused on AI safety and governance in China.
Our team expanded from 5 to 8 full-time staff. We are grateful for the contributions of our network of 34 part-time affiliates over 2023.