State of AI Safety in Singapore Report Released
Concordia AI is proud to launch the inaugural State of AI Safety in Singapore report at the World AI Conference 2025, marking a significant milestone for our new Singapore office. You can download the report here or on our website.
This report follows in the footsteps of our State of AI Safety in China report series, which has been published annually since 2023, and offers the first comprehensive overview of Singapore’s AI safety ecosystem. The report demonstrates how smaller, resource-constrained states like Singapore can still play a meaningful role in shaping emerging AI safety norms, even as global discussions often focus on a few countries developing frontier-scale models.
The analysis in the report covers Singapore’s multi-layered domestic governance approach—spanning voluntary frameworks, targeted legislation, national standards, and testing and evaluation efforts—as well as its role in global AI governance through multilateral forums, regional initiatives, and bilateral partnerships. It also explores the local AI assurance market, profiles both domestic and foreign general-purpose AI developers, and maps the landscape of AI safety research across universities, government bodies, and research institutes. Key research themes and public attitudes toward AI risks in Singapore are also highlighted.
Here are some of the key findings from the report. We welcome any feedback!
Key findings
Domestic Approach
Singapore relies on voluntary frameworks and targeted legislation rather than a comprehensive national AI law. The Model AI Governance Framework, first issued in 2019 for traditional AI and updated in 2024 for generative AI, provides broad voluntary guidelines for industry, while legislation targets specific AI risks, such as new penalties for AI‑generated election deepfakes. There is no clear move toward enacting a national AI law.
Policy instruments emphasize downstream testing and assurance rather than model‑level controls. Toolkits such as the “Starter Kit for Safety Testing of LLM Applications” and the “Global AI Assurance Pilot” (now Sandbox) give deployers dedicated test cases and specific guidance on testing the different components of generative AI applications for safety risks. Because application-level testing and evaluation is less well explored globally than model-level testing, Singapore’s focus on deployment testing positions the country to fill an important gap in global AI safety practice.
International Approach
Singapore plays an outsized convening role in global and regional AI governance. It has actively engaged in global AI governance discussions since 2018, contributing at the United Nations and global AI safety summits, among other fora. It leverages its neutral foreign policy stance to convene international AI events such as the Singapore Conference on AI and uses its diplomatic platforms to amplify the voices of smaller states. As chair of the Association of Southeast Asian Nations’ (ASEAN) Digital Ministers Meeting in 2024 and through the Digital Forum of Small States (Digital FOSS) initiative, Singapore has promoted inclusive dialogue and capacity-building so that developing countries can help shape global AI norms.
Bilaterally, Singapore embeds AI governance clauses in trade and digital agreements and encourages interoperability through ‘crosswalks’ that map international governance frameworks onto one another. Recent digital economy agreements have included provisions for building AI governance systems and sharing best practices between partners. The AI Verify Testing Framework streamlines compliance by mapping to the NIST AI Risk Management Framework and ISO/IEC 42001, allowing businesses to meet multiple regulatory requirements through a single testing process.
Industry
Singapore’s home‑grown models prioritize training on regional languages, with safety features still at an early stage and slated for further development. The SEA‑LION and MERaLION model families focus on Southeast Asian languages and dialects rather than frontier capabilities; current safeguards are limited to basic toxicity evaluations and the early-stage SEA‑Guard prompt filter.
Singapore is a vibrant assurance hub, hosting major foreign general-purpose AI developers and both local and international AI safety testing and assurance providers. Leading US and Chinese technology companies and global frontier start-ups maintain Singapore offices or partnerships on AI safety testing, bringing expertise to Singapore. Meanwhile, local and international AI assurance companies form a comprehensive ecosystem by providing testing and assurance services across model, application, and organizational levels.
Technical Research
Academic research on AI safety is expanding, with Singaporean universities serving as the primary drivers. Most publications come from Singaporean universities such as NUS, NTU, SMU, and SUTD, with support from A*STAR, GovTech, and the new Singapore AI Safety Institute. This research centers on robustness, multimodal safety, unlearning, and agent behavior. The “Singapore Consensus on Global AI Safety Research Priorities” (May 2025) highlights risk assessment, safety-by-design development, and post-deployment control as priority technical research areas, providing a roadmap for future collaborative research.
Public Opinion
Reliable public opinion data on AI safety are limited and focused on near‑term concerns. No dedicated national survey has probed Singaporeans’ views on AI risks. Few global polls include Singapore, but those that do reveal public concern about misinformation, data privacy, cybersecurity, and reduced human interaction.