From Rule-Takers to Rule-Makers: The Global South’s Moment in AI Safety Governance

At a time when artificial intelligence is evolving faster than the institutions meant to guide it, the India AI Impact Summit 2026 hosted one of its most consequential conversations: “International AI Safety Coordination: What Policymakers Need to Know.” As the closing dialogue of the International AI Safety Coordination track, the session confronted a defining question of our era – will developing economies shape the rules of frontier AI, or merely comply with frameworks written elsewhere?

The discussion brought together an influential mix of ministers, multilateral leaders and AI safety thinkers. The message was unmistakable: the Global South cannot afford to be passive rule-takers in a fragmented global AI order. Collective action is no longer a matter of diplomatic courtesy; it is a technological and economic necessity.

Artificial intelligence is already embedded in public health systems, agricultural advisories, welfare distribution and financial inclusion platforms across emerging economies. The debate, therefore, is not about speculative futures. It is about governance catching up with deployment. Speakers argued that the next phase of AI development will hinge on whether nations can build institutional capacity, share risk assessments and operationalise interoperable standards fast enough to keep pace with accelerating frontier capabilities.

Josephine Teo, Minister for Digital Development and Information, Singapore, emphasised that regulation must be grounded in rigorous evidence rather than reactive intuition. Drawing parallels with aviation safety, she noted that no aircraft takes off without structured testing and simulation. AI, she implied, deserves no less discipline. Without internationally aligned testing frameworks and interoperable standards, fragmentation will deepen, trust will erode and safe scaling will become increasingly complex.

Her call for evidence-based policymaking was echoed by Gobind Singh Deo, Malaysia's Minister of Digital. He underscored that credible regional cooperation depends on strong domestic foundations. Middle powers, he argued, must first invest in institutional capability and enforcement mechanisms at home. Only then can platforms such as the ASEAN AI Safety Network translate high-level commitments into operational systems for shared risk preparedness. Regional solidarity without national capacity risks becoming symbolism without substance.

From a multilateral vantage point, Mathias Cormann, Secretary-General of the Organisation for Economic Co-operation and Development, placed public trust at the centre of AI’s future trajectory. Trust, he reminded the audience, is not manufactured through slogans but built through inclusion and objective evidence. Governments, industry and civil society must work in concert to close the widening gap between innovation and oversight. At times, he cautioned, responsible governance may require slowing down – testing, monitoring and sharing findings transparently to ensure systems function as intended and respect fundamental rights.

For developing economies with constrained regulatory infrastructure, the stakes are even higher. Sangbu Kim, Vice President for Digital and AI at the World Bank, argued that safety must be designed into systems from inception, not retrofitted after harm emerges. In low-capacity environments, partnerships become force multipliers. Mechanisms such as red-teaming and joint evaluation exercises allow countries to anticipate emerging threats before large-scale deployment. AI, he observed memorably, is both “the spear and the shield” – a tool that can accelerate development but also amplify risk if left unmanaged.

The conversation turned more urgent when Jaan Tallinn, Founding Engineer of Skype and Co-Founder of the Future of Life Institute, addressed frontier AI dynamics. Competitive pressures among leading laboratories, he warned, make unilateral restraint improbable. Yet the concentration of advanced AI capabilities within a limited number of actors could paradoxically make governance easier – if political awareness translates into coordinated international action. Alignment, not isolation, will determine whether oversight keeps pace with capability.

Across perspectives, a near-term agenda crystallised. Over the next 12 to 18 months, policymakers must move from principles to operational cooperation. That means shared safety benchmarks, structured cross-border information exchanges and coordinated investment in regulatory capacity. It also means recognising that South-South collaboration can be a strategic lever, allowing developing economies to shape global AI governance norms rather than merely absorb them.

The summit’s closing dialogue made one thing clear: the governance of frontier AI will not be settled by rhetoric alone. It will be shaped by whether nations – particularly in the Global South – choose to collaborate, test, share evidence and build trust together. In the race between innovation and oversight, coordination may well be the decisive advantage that ensures artificial intelligence strengthens societies instead of destabilising them.
