Special Edition: World AI Conference Recap
The 2025 World AI Conference (WAIC) was held in Shanghai from July 26 to 28. As China’s flagship AI event, WAIC carries strong political backing and brings together senior Chinese and international figures from industry, academia, and government. With many experts convening in one place, WAIC has also become a hub for workshops, policy dialogues, and side events.
In this special edition, we highlight key developments related to AI safety that emerged during and around WAIC. A future post will provide a full recap of Concordia AI’s own AI Safety and Governance Forum, which we hosted at the conference.
Key takeaways
High-level central government backing of WAIC continued, with Premier Li Qiang (李强) delivering opening remarks.
WAIC released a Global AI Governance Action Plan with detailed safety proposals, though its implementation remains unclear.
China proposed a global AI cooperation organization, but details and timelines are still vague.
The China AI Safety and Development Association (CnAISDA) hosted the first WAIC plenary focused on AI safety, where the head of a major state-affiliated think tank warned of CBRN misuse risks.
ZHENG Nanning (郑南宁), the expert who had briefed the Politburo on AI in April, warned of the risk of loss of control from recursive self-improvement of AI.
The “Shanghai Consensus” on AI safety was signed by top Chinese and international scientists at a dialogue on the sidelines of WAIC, calling for red lines and safe-by-design research.
Nobel Laureate Geoffrey Hinton discussed AI safety in private meetings with Chinese government officials.
Official signalling on safety at WAIC
Premier Li Qiang on development, safety, and international cooperation
Background: Premier Li Qiang (李强), the second-ranked official in China, delivered the opening remarks for the conference. Concordia AI staff attended the opening ceremony in person.
Content: Li noted the transformative potential of LLMs, multimodal systems, and embodied AI, while acknowledging that “risks and challenges brought by AI are drawing widespread attention.” He called for greater global consensus on balancing development and safety, and argued that AI should always be controlled by humanity. Li also stressed AI should be an international public good, reaffirming China's support for capacity building in the Global South.

Implications: This is the second year in a row that the Premier attended WAIC, signalling maintained high-level political backing (before 2024, no such high-ranking official attended). Li’s main messages on capacity building, safety, and international cooperation closely mirror those in his speech at WAIC 2024. Hence, the official summary signals continued high-level backing for WAIC but provides few updates on Chinese positions on AI safety; Li’s full remarks, which were not published, may have explored some of these topics in greater depth.
Global AI Governance Action Plan released
Background: WAIC released a “Global AI Governance Action Plan” (Cn, En) outlining 13 proposals for international collaboration across innovation, infrastructure, sustainability, and safety.
Safety provisions in the action plan include:
AI risk assessments, targeted mitigation measures, and emergency response;
Risk testing and evaluation systems, with shared platforms for mutually recognized AI safety testing;
A global framework for “threat information sharing;”
Increased R&D investment in interpretability, transparency, and safety;
Traceability systems for misuse prevention;
An “open source compliance system” and “technical safety guidelines for open source communities;”
International exchanges on AI safety best-practices.
The plan supports a UN-centered approach to global AI governance, including the early launch of two proposed UN mechanisms: an International Scientific Panel on AI and a Global Dialogue on AI Governance. It also urges faster development of international standards in safety, industry, and ethics through international standard-setting bodies, such as the International Telecommunication Union (ITU), the International Organization for Standardization (ISO), and the International Electrotechnical Commission (IEC).
Implications: The plan signals strong support for international safety measures (such as joint testing and emergency protocols) but its practical impact remains uncertain, since the plan was issued under the WAIC banner and not by the central government directly. It reflects growing sentiment that AI safety needs to be integrated into AI capacity-building and open source ecosystems. Nevertheless, it is unclear how the plan will be advanced, given the lack of publicly identified government implementing bodies or clear implementation mechanisms.
Ministerial-level meeting
Background: China hosted a ministerial meeting with over 30 other countries and international organizations at WAIC, continuing from the first iteration held in 2024.
Remarks and discussion: Minister of Science and Technology YIN Hejun (阴和俊), Vice Minister of Foreign Affairs MA Zhaoxu (马朝旭), and Shanghai Mayor GONG Zheng (龚正) each gave remarks.
Yin noted the importance of AI governance, discussed the Global AI Governance Initiative, and called for deepening science and technology cooperation. He argued that technological monopolies and AI divides are major AI challenges.
Gong spoke about Shanghai’s approach to AI development and governance.
Ma emphasized the importance of inclusive development and bridging AI divides. He also welcomed countries to participate in the initial design of a global AI cooperation organization.
Other topics of discussion in the ministerial meeting included international AI cooperation, global governance, risk prevention, and balancing AI innovation with safety and controllability.
Implications: The ministerial meeting appears to have focused primarily on cooperation and governance among Global South countries, given the emphasis on closing AI divides. While risk prevention, safety, and controllability remained on the agenda as in 2024, these topics were not emphasized in the readout. The meeting did not produce immediate outputs, and its significance is unclear. Yet it still offers a valuable opportunity for senior officials from China and around the world to exchange views on which AI governance issues they find most pressing.
Early plans for a global AI cooperation organization announced
Background: At the opening ceremony, Premier Li Qiang revealed that China proposes to establish “a global AI cooperation organization.”
Few details: Official media followed up with a short announcement that the organization would be tentatively headquartered in Shanghai and would pursue three goals:
Deepen innovation cooperation and unlock AI dividends;
Promote inclusive development and bridge the AI divide;
Strengthen coordinated governance to ensure beneficial AI.
Implications: The establishment of a global AI cooperation organization could become a significant part of China’s AI diplomacy. However, there are too few public details to judge its impact. Given the language on “tentatively” headquartering it in Shanghai, planning appears to be in early stages, with no timeline indicated.
WAIC Forums
Apart from the opening ceremony, WAIC features a wide range of “Forums.” Among them, three stand out as especially authoritative. The Main Forum (主论坛)—effectively an extension of the opening ceremony—has strong political backing across multiple ministries. The first day of the conference additionally featured two Plenary Sessions (全体会议): one on Scientific Frontiers and another on Development and Safety.
Notably, this marks the first time one of WAIC’s main forums or plenaries has focused explicitly on AI safety. Last year, the three main forums were titled “Global AI Governance,” “Industrial Development,” and “Scientific Frontiers.”
Below, we highlight key AI safety discussions from these three flagship Forums.
Main Forum
A number of remarks during the Main Forum touched on AI safety, but we focus on the following two exchanges, since their participants had not previously been vocal on AI safety:
A keynote speech by Academician and former President of Xi’an Jiaotong University ZHENG Nanning (郑南宁), who in April briefed the 24-member Communist Party of China (CPC) Politburo;
A fireside chat between former Google CEO Eric Schmidt and former Microsoft Executive Vice President for the AI & Research Group Harry Shum (沈向洋).
Zheng Nanning: Academician Zheng gave a speech on the evolution of AI from “model-driven” to “intent-driven” — meaning systems capable of understanding goals, formulating plans, reasoning causally, and interacting with other AI agents. One slide of his talk focused on AI safety and governance. He argued that AI is currently showing a capacity for self-improvement, and that once AI is in charge of its own training process, it will develop more quickly and could “exceed the boundaries of human forecasts and control.” Additionally, he warned that accelerated AI deployment, despite yielding temporary strategic advantages for companies or countries, would likely weaken explainability and human oversight.
Shum and Schmidt Fireside: Both experts affirmed the need for China-US cooperation to maintain global stability and ensure human control over AI. Schmidt additionally proposed AI red lines, personnel exchanges, and addressing non-proliferation concerns. Shum called for cooperation on basic research and developing regular dialogue channels on specific topics to foster trust.
Other remarks: AI safety also received prominent attention in keynote speeches by Nobel Laureate Geoffrey Hinton and Turing Award Winner Yoshua Bengio, as well as a panel discussion involving Turing Award Winner Andrew YAO (姚期智), Johns Hopkins University Professor Gillian Hadfield, UC Berkeley Professor Stuart Russell, former Microsoft Chief Research and Strategy Officer Craig Mundie, and Shanghai AI Lab (SHLAB) Director and Chief Scientist ZHOU Bowen (周伯文).
Implications: As with WAIC 2024, AI safety and governance issues received substantial emphasis in the main forum. Academician Zheng Nanning, who had not previously spoken publicly on frontier AI safety and governance, appeared to voice concern about loss of control resulting from recursive self-improvement of AI systems. This is notable given that he briefed the Politburo in April.
Development and Safety Plenary
Background: This Plenary was organized by the China AI Safety and Development Association (CnAISDA), a coalition of leading institutions introduced at the French AI Action Summit in February. CnAISDA positions itself as China’s counterpart to international AI Safety/Security Institutes (AISIs).
The three-hour session (agenda, summary, full recording) featured speeches by government officials, expert keynotes and panels, and the release of an updated version of the AI safety commitments first issued in December 2024.
Speeches by government officials
WU Wei (吴伟), Executive Vice Mayor of Shanghai, opened the plenary by emphasizing the urgent need for global consensus on AI safety, security, and governance. He underscored that AI safety is a shared international challenge requiring coordinated global responses.
HUO Fupeng (霍福鹏), Director of the Center for Innovation-Driven Development at the National Development and Reform Commission (NDRC), echoed the call for international collaboration—particularly on frontier AI safety research, standards, norms, and policy frameworks. He highlighted SHLAB’s “45-degree law” as a leading domestic example of frontier safety thinking. Huo publicly affirmed that several institutions established CnAISDA “with the support of the government.” This marks the first public confirmation of CnAISDA’s government backing by an official.
Expert keynotes and panels
The forum brought together leading voices from academia, government-affiliated think tanks, and industry to discuss key issues in AI safety and governance. Notable participants included Nobel Laureate Geoffrey Hinton, Turing Award winners Andrew Yao, Yoshua Bengio, and David Patterson, Dean of the Tsinghua Institute of International AI Governance (I-AIIG) XUE Lan (薛澜), Dean of the China Academy of Information and Communications Technology (CAICT) YU Xiaohui (余晓晖), and Dean of CCID ZHANG Li (张立).
Concordia AI has previously documented many of these experts’ views on AI safety, but a notable moment at the Plenary came from CAICT’s Yu. He explicitly warned that advanced AI poses misuse risks in critical domains such as chemical, biological, radiological, and nuclear (CBRN) weapons, framing AI’s rapid development as a “double-edged sword.” His remarks are significant given CAICT’s important role in safety evaluation, policy advising, and standard-setting in China.
Updated voluntary safety commitments released at CnAISDA Plenary
The most specific outcome from the CnAISDA plenary was the release of the “China AI Security and Safety Commitments Framework”, an updated version of the AI Safety Commitments released in December 2024. The first five commitments remain unchanged (covering safety teams, safety testing, data security, infrastructure security, and model transparency). The main changes are:
Revised commitment 6: Previously focused on “advancing frontier AI safety research,” the updated commitment broadens its scope to include obligations such as “preventing safety and security risks in frontier fields,” and “strengthening the assessment of risks related to the abuse of AI systems in high-risk scenarios.” However, it removes references to specific frontier AI technologies (agents and embodied AI mentioned in the original version), and does not further define what these frontier risks are.
New commitment 7 on international engagement: Signatories commit to “actively participating in global dialogues on AI safety, security, and governance” and to “contributing to the exchange of experiences and best practices in risk identification, assessment, and mitigation.”
Increased number of signatories: A new website launched by CAICT and AIIA shows five new signatories, with Honor, Sangfor, Vivo, Qi An-Xin, and ZTE joining the original group of 17 companies.
Implementation practices: The site also includes brief summaries of 43 common practices for implementing the 6 commitments, which CAICT began collecting in March. However, the current disclosures remain relatively generic and do not provide detailed insights into individual companies' safety practices.

Implications: This Plenary marked the first time one of WAIC’s main forums or plenaries has explicitly focused on safety. It was also the first major publicized activity of CnAISDA since its launch in February. Yu’s comments on CBRN risks underscore that these concerns are being taken seriously by key actors in China’s AI safety ecosystem.
The most concrete news from the Forum was the release of the updated CAICT safety commitments, which demonstrate that CAICT is continuing to actively promote and implement the commitments with the backing of CnAISDA. In substance, the updated version places slightly more focus on “frontier safety” but refrains from defining the specific risks of concern.
Scientific Frontiers Plenary
Background: This plenary session was co-organized by SHLAB, the Shanghai Xuhui District Government, and the Global Artificial Intelligence Academic Alliance. A Vice Minister of Science and Technology, a Vice Mayor of Shanghai, and China’s Special Envoy for Climate Change all gave speeches. The forum focused largely on frontier AI development and AI for science; however, SHLAB also shared several notable AI safety advancements.
SHLAB on AI safety: During his keynote speech at the plenary, SHLAB Director and Chief Scientist Zhou Bowen argued that AI safety should improve with AI capabilities, proposing an L1 to L5 scale for AGI classification and safety standards. For example, he explained that AI systems at L4 (Architect), which can exceed 99% of human experts in many domains, should have evolutionary reflective capabilities to ensure safety. Zhou also announced SHLAB’s “SafeWork” AI safety and security technology stack. A website by the SHLAB Center for Safe & Trustworthy AI explains that this workstream includes:
SafeWork-F1: A Frontier AI Risk Management Framework: Co-authored by SHLAB and Concordia AI, this framework “serves as a guideline for general-purpose AI model developers to manage the potential severe risks from their general-purpose AI models.” It includes risk identification, thresholds, analysis, evaluation, mitigation, and governance, focusing on red lines and early warning indicators for cyber offense, biological and chemical risks, persuasion and manipulation, uncontrolled autonomous AI R&D, strategic deception and scheming, self-replication, and collusion (a hypothetical sketch of this tiered threshold logic follows the list below).
The framework is accompanied by a lengthy technical report titled “Frontier AI Risk Management Framework in Practice: A Risk Analysis Technical Report,” which evaluates 18 state-of-the-art large models for evidence of breaching the aforementioned early warning indicators and red lines.
SafeWork-V1: Towards Formally Verifiable AI: This project explores formal verification for LLMs through verification of code generated by coding agents (a toy illustration of this general approach also follows the list).
SafeWork-T1: A multimodal training platform to accelerate training of safe reasoning models.
SafeWork-R1: This is a multimodal reasoning model that uses safety-oriented reinforcement learning post-training to enhance safety. The authors claim that this mechanism “enables SafeWork-R1 to develop intrinsic safety reasoning and self-reflection abilities.”
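To make the F1 framework’s tiered logic concrete, here is a minimal, hypothetical Python sketch of how early warning (“yellow line”) indicators might pair with red lines in an evaluation pipeline. This reflects our own illustrative reading, not SHLAB’s code or actual thresholds; every domain name, score, and threshold value below is invented.

```python
# Hypothetical sketch of tiered risk-threshold checks in the spirit of
# SafeWork-F1's red lines and early warning indicators. All domains,
# scores, and threshold values are invented for illustration; none come
# from the actual framework or technical report.
from dataclasses import dataclass

@dataclass
class RiskThresholds:
    yellow: float  # early warning indicator: triggers enhanced monitoring
    red: float     # red line: triggers halt and mitigation before deployment

# Hypothetical per-domain thresholds on a normalized 0-1 evaluation score.
THRESHOLDS = {
    "cyber_offense": RiskThresholds(yellow=0.4, red=0.7),
    "bio_chem": RiskThresholds(yellow=0.3, red=0.6),
    "self_replication": RiskThresholds(yellow=0.2, red=0.5),
}

def assess(domain: str, score: float) -> str:
    """Map a model's evaluation score in one risk domain to an action tier."""
    t = THRESHOLDS[domain]
    if score >= t.red:
        return "RED: red line breached; pause deployment and escalate to governance"
    if score >= t.yellow:
        return "YELLOW: early warning crossed; intensify evaluations and mitigations"
    return "GREEN: within tolerance; continue routine monitoring"

# Example: hypothetical evaluation results for one model checkpoint.
for domain, score in [("cyber_offense", 0.45), ("bio_chem", 0.10), ("self_replication", 0.55)]:
    print(f"{domain}: {assess(domain, score)}")
```

The real framework ties such tiers to detailed risk analyses and governance procedures; the point here is only the shape of the yellow-line/red-line distinction.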
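Similarly, for the general direction SafeWork-V1 describes, a toy example of formally verifying a property of agent-generated code might use an SMT solver. The sketch below is our own generic illustration using the z3-solver package, not SafeWork-V1’s actual method or tooling.

```python
# Toy illustration of formally verifying agent-generated code with an SMT
# solver (requires the z3-solver package). This is a generic example of
# the approach, not SafeWork-V1's actual pipeline.
from z3 import Int, If, Not, Solver, unsat

x = Int("x")
# Suppose a coding agent generated: def my_abs(x): return x if x > 0 else -x
generated = If(x > 0, x, -x)
# Partial specification: the result is always non-negative.
spec = generated >= 0

s = Solver()
s.add(Not(spec))  # search for any input that violates the specification
if s.check() == unsat:
    print("verified: property holds for all integer inputs")
else:
    print(f"counterexample found: {s.model()}")
```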
Implications: At WAIC 2024, Zhou Bowen first presented his “45-degree law” for AI safety, which proposed ensuring proportional progress of AI safety alongside AI development and suggested setting yellow (warning) lines and red lines for AI. The projects unveiled this year demonstrate that SHLAB is actively working on concrete follow-ups to operationalize this idea across multiple stages of AI development, including a risk framework resembling frontier AI safety frameworks globally as well as formal verification.
Other Forums
WAIC also hosted over 100 smaller Forums, many of which addressed AI safety and governance. Concordia AI’s examination of WAIC forums found 16 AI safety and governance forums in 2025, compared to 15 in 2024; thus, across the conference’s entire suite of events, AI safety and governance representation remained roughly consistent between the two years. Due to space constraints, we will not cover them all in this newsletter issue.
Concordia AI was proud to host the AI Safety and Governance Forum. You can find a brief summary (in Chinese) on our WeChat account, and we will share a comprehensive summary with full video recordings in a forthcoming Substack post.
Other notable events on the sidelines of WAIC
AI safety dialogue of leading scientists and experts
Background: On July 22–25, the 4th meeting of the International Dialogues on AI Safety (IDAIS) was held in Shanghai. This dialogue series has been led by top computer scientists from China, the US, and Canada, with three previous meetings held since 2023, bringing together academics, policy advisors, experts, and industry leaders.
The dialogue: The meeting culminated in the release of the “Shanghai Consensus,” which warns of “increasing evidence suggesting that advanced future AI systems may deceive humans and escape our control” and calls for:
Safety assurance from frontier AI developers;
Commitments to global verifiable behavioral red lines;
Investment in research on safe-by-design AI.
Signatories included: Nobel Laureate Geoffrey Hinton; Turing Award winner Yoshua Bengio; Turing Award winner Andrew YAO (姚期智); Dean of I-AIIG at Tsinghua University XUE Lan (薛澜); former Vice Minister of Foreign Affairs FU Ying (傅莹); Executive Director of the Center for AI Safety and Advisor at xAI Dan Hendrycks; Co-Founder of Chinese AI startup Stepfun ZHU Yibo (朱亦博); research scientists at SHLAB; and various other experts from leading Chinese and international institutions. Concordia AI CEO Brian Tse also participated in the dialogue and signed the consensus statement.

Implications: The convening reflects a growing consensus among leading Chinese and international experts on the severity of extreme AI risks. Much of the content echoes previous IDAIS statements. However, while previous IDAIS statements described risks as largely in the future, this year’s statement notes that “some AI systems today already demonstrate the capability and propensity to undermine their creators’ safety and control efforts,” suggesting a heightened sense of urgency.
Geoffrey Hinton discusses AI safety with Chinese government officials
Background: WAIC 2025 appears to be Professor Geoffrey Hinton’s first visit to China. In addition to his participation in IDAIS and WAIC, Hinton held meetings with several senior Chinese government officials:
Ahead of WAIC, Hinton met with Shanghai Party Secretary CHEN Jining (陈吉宁), alongside Andrew Yao, former Google CEO Eric Schmidt, and Microsoft Senior Advisor Craig Mundie. As a member of the CPC Politburo, Chen is one of the top 24 officials in China.
During WAIC, Hinton also met with Vice Minister of Science and Technology LONG Teng (龙腾) and other officials from the Ministry of Science and Technology.

AI safety was on the agenda:
Shanghai Party Secretary Chen emphasized that Shanghai is “coordinating development with safety” and promoting “scientific innovation alongside safety governance.” He added: “Together, we can build a global governance framework for AI that steers its development toward a beneficial, safe, and fair future.” The international guests echoed these themes, stressing that AI’s “opportunities come with challenges” and that progress “must be balanced with safety,” requiring global cooperation.
Vice Minister Long emphasized that “China attaches great importance to both the development and safety of AI.” Geoffrey Hinton highlighted that AI “poses multiple safety risks and must remain under human control,” but also stressed that it represents “a promising area for global cooperation that demands collective efforts from all nations.”
Implications: Hinton’s public speeches at WAIC heavily focused on frontier risks, such as loss of control. It is likely that he discussed similar issues directly with Chinese officials, even though the official readouts of both meetings do not reveal which specific AI safety risks were discussed.
Early Warning and Crisis Coordination Workshop
Concordia AI co-hosted a workshop on “Early Warning and Crisis Coordination for Advanced AI” on the sidelines of WAIC, along with the Carnegie Endowment for International Peace, Oxford Martin School’s AI Governance Initiative, Oxford China Policy Lab, Tsinghua University Center for International Security and Strategy, and Tsinghua University Institute for AI International Governance. Participants hailed from China, Europe, North America, South America, and elsewhere in Asia, with representatives from industry, academia, think tanks, and international organizations. The workshop was held under the Chatham House Rule.
In case you missed it: Concordia AI reports
We are proud that Concordia AI released several major reports at WAIC, including:
The State of AI Safety in China (2025) report, providing a comprehensive update on AI safety and governance developments in China from May 2024 to June 2025.
The State of AI Safety in Singapore report, the first comprehensive analysis of Singapore's AI safety ecosystem.
Shanghai AI Lab, in partnership with Concordia AI, released the Frontier AI Risk Management Framework v1.0, China’s first comprehensive framework for managing severe risks from general-purpose AI models. Concordia AI technical staff also contributed to a follow-up technical report Frontier AI Risk Management Framework in Practice: A Risk Analysis Technical Report.
Concordia AI and the Center for Biosafety Research and Strategy of Tianjin University released a report titled Responsible Innovation in AI x Life Sciences, emphasizing the potential for transformative advances at the convergence of AI and the life sciences, alongside critical biosafety and biosecurity challenges.
