Concordia AI 2024 Impact Highlights
Concordia AI’s mission is to ensure that AI is developed and deployed in a way that is safe and aligned with global interests. We further our mission by advancing international coordination, advising leading AI companies and policymakers, and convening AI safety conferences in China and globally. Some highlights in 2024 are below (see 2023 here). We welcome readers to reach out to us directly at info@concordia-ai.com if you have any feedback or are interested in engaging.
Advancing international coordination on AI safety and governance
Multilateral Initiatives
Global AI Summit series: Since participating in the first Global AI Safety Summit at Bletchley Park, we have shared remarks at the Minister's Session during the AI Seoul Summit and engaged extensively in preparations for the upcoming Paris AI Action Summit, including providing consultative submissions, attending an official side event at OECD headquarters, and joining a dinner with Special Envoy Anne Bouverot at the French Ambassador's residence in Beijing.
International scientific assessment on AI safety: Kwan Yee Ng, our Senior Program Manager, contributed to the first International Scientific Report on the Safety of Advanced AI as one of its writers. Chaired by Turing Award winner Yoshua Bengio, the report is supported by an expert panel representing 30 countries, including China, as well as experts from the EU and the UN. She also co-authored a report, led by the Oxford Martin AI Governance Initiative and the Carnegie Endowment for International Peace, exploring ways to achieve international scientific consensus on AI risks. Brian Tse, our CEO, signed the Manhattan Declaration on Inclusive International Scientific Understanding of AI, co-sponsored by Yoshua Bengio and Alondra Nelson, on the sidelines of the UN Summit on the Future.
Global AI governance at the United Nations: We participated in the United Nations High-Level AI Advisory Body’s (HLAB) “Consultative Network” during January-September 2024, submitted feedback to the Office of the Special Envoy on Technology on the HLAB Interim Report, and attended multiple events during the UN Summit on the Future.
Track 2 dialogues: Participated in a dialogue on US-China understanding of AI governance at the University of Oxford in May 2024 and the University of Tokyo in fall 2024. Contributed to another dialogue among US, Chinese, and international experts to advance practices for AI testing and evaluation. Helped organize a closed-door China-Western dialogue on “International AI Governance Frameworks,” co-chaired by former Chinese Vice Minister of Foreign Affairs FU Ying (傅莹) and Tsinghua I-AIIG Dean XUE Lan (薛澜).
Global AI-bio governance: Participated in the inaugural meeting of the International AI-Bio Forum, contributing to two technical working groups focused on (1) Horizon Scanning, Risk Assessment, and Evaluations, and (2) Safety of Biodesign Tools.
Research Impact and Engagement
China AI safety and governance analysis: Published the State of AI Safety in China Spring 2024 Report, launched with a webinar featuring leading experts Jeffrey Ding, Matt Sheehan, Robert Trager, and Angela Zhang. Produced pioneering analyses on “The State of China-Western Track 1.5 and 2 Dialogues on AI” and “China’s AI Safety Evaluations Ecosystem.” Created complementary resources, including the Chinese Technical AI Safety Database and China’s AI governance documents database. Provided briefings to senior leadership at over a dozen global organizations, including leading AI labs, policy think tanks, and various diplomatic missions in China. Maintained the “AI Safety in China” newsletter, which reaches over 1,000 subscribers, including members of governments, AI labs, and AI safety institutes.
Academic and public engagement: Delivered a lecture to the National University of Singapore's AI ethics course; delivered presentations and participated in expert panels at Harvard University, the University of Oxford, the Centre for International Governance Innovation, Bay Area Alignment Workshop, and other leading institutions. Shared insights through The Diplomat op-ed, Carnegie Council podcast, and CGTN’s CMG Forum TV interview.

Convening AI safety and governance conferences in China and globally
World AI Conference (WAIC), Shanghai: Hosted the Frontier AI Safety and Governance Forum at China's most influential AI conference. Convened 25 distinguished experts including former Baidu President ZHANG Ya-Qin (张亚勤), Turing Award Laureate Yoshua Bengio, Peng Cheng Lab Director GAO Wen (高文), Shanghai AI Lab Director ZHOU Bowen (周伯文), and UN High-Level Advisory Body members ZHANG Linghan (张凌寒), ZENG Yi (曾毅), and Ruimin He. The Forum drew 300 in-person participants, over 1 million online viewers, and coverage from 20+ news outlets.
International AI Cooperation and Governance Forum, Singapore: Co-hosted AI Safety sessions at the Forum with Singapore’s AI Verify Foundation. Chaired the AI Safety Plenum featuring Director of the Singapore AI Safety Institute LAM Kwok Yan, CTO of the UK AI Safety Institute Jade Leung, Dean of the Tsinghua Institute for AI International Governance XUE Lan, Chief Scientist of Zhipu AI TANG Jie (唐杰), and EU AI Office AI Safety Unit representative Friederike Grosse-Holz. Organized a closed-door AI Safety and Risk Management Workshop with 20+ international researchers exploring testing frameworks, risk assessment, mitigation strategies, and monitoring protocols.
Zhongguancun Forum, Beijing: Co-hosted a roundtable discussion on AI safety and governance at the Zhongguancun Forum's AGI Conference, organized by the Beijing Institute for General AI, Peking University Institute for Artificial Intelligence, Peking University School of Intelligence Science and Technology, and Tsinghua University Department of Automation.
Technical AI safety research: Co-organized AI safety workshops at major ML conferences, including “Trustworthy Multi-modal Foundation Models and AI Agents” at ICML 2024 with Shanghai AI Lab and “Socially Responsible Language Modelling Research (SoLaR)” at NeurIPS 2024. Co-led the 10-week AI Safety and Alignment Reading Group with leading AI safety researchers. Presented research on evaluation-based risk assessment frameworks at University of Hong Kong’s AI Benchmarking Workshop.
Advising leading AI companies and policymakers
National Standards and Policy Guidance: Joined the AI Subcommittee of the National Information Technology Standardization Technical Committee (SAC/TC28), contributing to the National Standard for AI Risk Management Capability Assessment. Joined the National Cybersecurity Standardization Technical Committee (SAC/TC260) SWG-ETS (Special Working Group on Emerging Technology Safety/Security) and provided feedback on the TC260 standards committee’s AI Safety and Governance Framework 1.0. Presented recommendations at the AI Subcommittee of the National Science and Technology Ethics Committee. Published a pioneering Chinese-language report on the global landscape of AI safety institutes.
Industry AI Safety Framework and Best Practices: Led AI safety initiatives through the China Academy of Information and Communications Technology’s (CAICT) AI Industry Alliance (AIIA), presenting "Best Practices for Frontier AI Safety" at the Alliance’s Plenum and “Policy and Practice: Analysis of AI Safety Frameworks” at a subsequent workshop. Contributed to the development of AIIA's AI Safety Benchmark for language models, multimodal systems, and AI agents. Established the “AI Guard x Concordia AI” column on frontier AI safety, produced pioneering Chinese-language analyses of global frontier AI risk management frameworks, and expanded frontier model risk management consulting services for developers.
Open-Source Model Governance: Published a groundbreaking Chinese-language report on open-source/open-weight foundation model safety and governance with the Peking University Institute for Artificial Intelligence and the Beijing Institute for General AI, launched at the 2024 Zhongguancun Forum AGI Conference co-hosted by Concordia AI. Presented findings at CAICT's AI risk management seminar and Nankai University’s AI and Law conference, and contributed to Tsinghua University’s roundtable on open-source model governance in the context of China's AI legislation.

Expanding presence in Singapore and other organizational updates
Organizational Growth: We established our Singapore office and joined key global initiatives. As a member of the World Economic Forum’s AI Governance Alliance, we contributed to their white paper on AI agents and participated in Safe Systems and Regulation tracks. We also joined Singapore’s AI Verify Foundation alongside leading tech companies and third-party organizations in developing open-source AI testing frameworks and safety standards.