Abstract
As part of a broader investigation into AI safety across Southeast Asia, this study examines Singapore’s evolving sociotechnical landscape as the nation advances its ambitions in artificial intelligence research, development, and deployment. Despite Singapore’s reputation as one of Asia’s most “intelligent” and digitally integrated cities, emerging risks associated with AI systems disproportionately affect vulnerable populations, in this case children and youth. This paper interrogates how adults in positions of influence, including educators, parents, technology practitioners, and policymakers, shape the conditions under which young people learn about, engage with, and navigate AI-mediated environments. We highlight a persistent policy and pedagogical dilemma: how to cultivate robust AI literacy among children and young people so that they are future-ready, while simultaneously safeguarding them from algorithmic harms, data risks, and emerging forms of digital inequity. Drawing on qualitative insights, we argue that effective AI safety for minors requires a multi-layered governance model integrating (1) intergenerational AI education and capacity-building, (2) adaptive regulatory frameworks attuned to developmental needs, and (3) meaningful public participation in AI governance. The findings position Singapore as a critical case for understanding how societies can balance innovation with protection, especially for their most vulnerable members, as human-AI interactions become deeply embedded in everyday life.
Presenters
Karryl Kim Sagun Trajano, Research Fellow, S. Rajaratnam School of International Studies, Nanyang Technological University, Singapore, Singapore
Details
Presentation Type
Paper Presentation in a Themed Session
Theme
2026 Special Focus—Human-Centered AI Transformations
KEYWORDS
Artificial Intelligence, Online Safety, AI Governance, Children, Youth, Southeast Asia
