Concordia AI at UK Global AI Safety Summit
Concordia AI's CEO, Brian Tse, attended the first day of the summit.
Our previous newsletter covered China’s attendance at the UK Global AI Safety Summit. This post details the experience of Concordia AI’s CEO, Brian Tse.
Morning Roundtable: Risks From Loss of Control Over Frontier AI
The morning of the first day focused on Understanding Frontier AI Risks. Delegates were assigned to roundtables on:
Risks to Global Safety from Frontier AI Misuse
Risks from Unpredictable Advances in Frontier AI Capability
Risks from Loss of Control over Frontier AI
Risks from the Integration of Frontier AI into Society
Brian took part in the roundtable on “Risks from Loss of Control over Frontier AI.” His full speech is as follows:
“The probability and timing of AI going out of control are highly uncertain, but this uncertainty does not mean we should not take action. Faced with significant risks to global stability and public safety, society should adopt a ‘bottom-line thinking’ approach, preparing for the worst-case scenarios while striving for the best outcomes.1 Global scientists and policymakers can monitor ‘risk warning signals’; let me give three examples:
The continuous improvement of LLM-based agents. Large models are no longer stuck in loops and have made progress in long-term reasoning and decision-making. Tsinghua University's AgentBench, for example, can assess the performance of large model agents across a wide range of real-world challenges.
The risk of AI systems autonomously replicating themselves, for example by writing language model worms that spread to other network systems. Success at many such tasks could indicate that future AI systems may be able to spread across global server networks while avoiding human detection. This is similar to the computer worm problems we face in cybersecurity.
The self-improvement capabilities of AI systems. For instance, AI is already being used to produce training datasets and to provide feedback to models in reinforcement learning.”
The roundtable was chaired by Mrs Josephine Teo, Singapore’s Minister for Communications and Information. Minister Teo’s summary of the roundtable can be found here.
Afternoon Roundtable: What Should Frontier AI Developers Do To Scale Responsibly?
The afternoon of the first day focused on Improving Frontier AI Safety. Delegates were assigned to roundtables on:
What should Frontier AI developers do to scale responsibly?
What should National Policymakers do in relation to the risks and opportunities of AI?
What should the International Community do in relation to the risks and opportunities of AI?
What should the Scientific Community do in relation to the risks and opportunities of AI?
Brian took part in the roundtable on “What should Frontier AI developers do to scale responsibly?” After the CEOs of several frontier AI labs reported on their respective organizations' Responsible Capability Scaling policies, Brian emphasized the need for government regulation, third-party evaluations, and ensuring that transformative developments from AI are aligned with the broader interests and preparedness of society. His full speech is as follows:
“I commend the companies for taking initial steps in the right direction, but we must accelerate robust oversight for the policies to be truly responsible.
First, the scaling policies should eventually become mandatory and regulated by governments. We simply cannot let the industry mark their own homework on matters of public safety and national security.
Second, frontier developers should support the development of the third-party evaluation ecosystem, especially in novel domains without established expertise. There are hundreds to thousands of experts on CBRN and cyber risks, but only a few startup nonprofits working on deceptive alignment or autonomous replication.
Third, we need a global watchdog or a globally coordinated licensing regime. As companies scale their AI systems by 100-1000x in the coming years, with the potential to increase global biological risks, we are fast approaching stakes similar to those of BSL-4 labs.
Finally, if and when we approach superintelligent capability that could, for example, automate the entire scientific R&D enterprise, the developers have to listen to global public opinion: is humanity prepared for such a transformative development in our history?”
The roundtable was chaired by Ms. Michelle Donelan, the UK’s Secretary of State for Science, Innovation and Technology. Ms. Donelan’s summary of the roundtable can be found here.
Remarks at the Closing Plenary
Brian was invited by Secretary of State Michelle Donelan to deliver remarks at the closing plenary on the first day. His full speech (lightly edited from the transcript for clarity) is as follows:
“Hi everyone. This is Brian from Concordia AI, we are an AI safety organization based in Beijing. What an incredible day we've had here at Bletchley Park! I would like to commend the UK for making this happen and to share three key messages:
“First, the world has a shared interest in ensuring global AI safety. Risks from AI—from catastrophic misuse and unexpected dangerous capabilities to the potential loss of human control—do not respect national borders. We have a collective responsibility to protect present and future generations.
Second, we have much to gain when we work together as a global community. By encouraging collaboration between brilliant researchers around the world, we can come up with better AI safety solutions. As countries develop governance frameworks for AI, we have a golden window of opportunity to exchange lessons and learn from each other. And we should work towards establishing international institutions to govern the risks and opportunities from AI.
Third, we must include and empower the voices from the Global South. As AI capabilities diffuse over time, the success of global AI governance regimes will ultimately depend on the support of the entire world. More importantly, frontier AI development will be transformational for all of humanity, so giving everyone a say in how this should go is also morally the right thing to do.
In closing, let's carry forward this spirit of openness and cooperation from our time together at Bletchley Park. This is only the beginning, let’s get to work. Thank you everyone.”
1. The concept of “bottom-line thinking” was popularized by President Xi Jinping and has been used in a range of contexts, from pandemic preparedness to financial risks. Although the concept lacks a precise definition, it generally emphasizes the identification of worst-case scenarios and red lines and encourages taking preventative measures to avoid their realization. Since its coinage, use of the term has expanded into the wider Chinese intellectual discourse, including with regard to AI risks.