Global governance: the challenge
by Ettore Greco and Francesca Maremonti
Differences between China and the West make it difficult to reach meaningful agreements on regulating artificial intelligence. However, the G7 and the United Nations are laying the foundations for common principles. Promoting inclusive AI for the Global South is a growing priority
The rapidly evolving AI policy landscape presents two main challenges. First, countries at the forefront of AI development and application have adopted different regulatory approaches. Despite efforts such as the Trade and Technology Council (TTC), significant divergences remain across the Atlantic, with the U.S. favoring a market-driven approach and the EU promoting a human-centered, risk-based model. China, meanwhile, emphasizes strict state control over AI advancements, a strategy that contrasts sharply with the free-market principles dominant in the West. Other emerging technological powers, such as India, are developing their own models. This fragmentation of regulatory efforts is a major obstacle to establishing global AI governance based on common principles and tools. Heightened geopolitical tensions and technological rivalries among major powers further complicate the entrenched differences in regulatory cultures and in the economic role of the state.
Second, the rise of AI is widening the global digital divide. In Africa, where the majority of people lack internet access, AI adoption remains low due to the shortage of talent, inadequate digital infrastructure, and insufficient institutional frameworks. Experts, stakeholders, and policymakers have become increasingly aware of the urgency and global strategic relevance of these challenges, particularly in relation to the achievement of the Sustainable Development Goals (SDGs) and the implementation of the UN 2030 Agenda. This awareness has spurred renewed international efforts to reach a common understanding of AI's potential and impact, and to launch joint initiatives within various multilateral bodies and forums.
Governments worldwide have come to recognize artificial intelligence as a key driver of competitiveness and innovation. As a result, AI regulation has moved to the forefront of policy agendas in a growing number of countries. The development of AI strategies has surged in recent years, with 71 countries now having released national strategies aimed at unlocking AI’s potential while mitigating its risks. However, the process of regulating AI has been neither smooth nor uniform.
In a first phase of AI regulation, starting in 2017, high-income countries raced to release their respective AI strategies, striving for leadership in an unregulated field. By 2019, 70% of high-income countries had published an AI strategy. Leading digital powers promoted their respective AI strategies to gain traction, often rooted in different principles and national interests. This initial phase of AI regulation produced two main results: a severe fragmentation of the AI regulatory landscape and a growing digital divide between the Global North and the rest of the world, which has lagged behind. Since 2021, a second phase of AI regulation has been unfolding. The Covid-19 pandemic demonstrated the benefits of AI adoption in practice, enabling more efficient forecasting and diagnosis and supporting pandemic response and control. Governments have become increasingly aware of the transformative potential of AI, while technology companies have established a presence in a growing number of countries worldwide.
Policymakers have since scaled up their efforts to design and release AI strategies, laying the groundwork for increasing AI adoption in middle- and low-income countries. In 2023, the picture of countries participating in AI regulation looked significantly more diverse than before. Rwanda became the first low-income country to release an AI strategy, in 2023. Countries across different regions, such as Benin, Bangladesh and Tajikistan, have followed suit, bringing previously underrepresented voices into the AI regulatory debate.
Many middle- and low-income countries have turned to existing AI governance models when designing their own strategies. This has produced “clusters” of approaches, while normative divergences across AI governance models have remained. For example, in South Asia many governments have turned to India’s “AI for all” strategy as a guide. Pakistan, for its part, drew inspiration from China, its main partner for AI technological development, as it sought to draft its AI strategy.
Many low- and middle-income countries striving for economic growth have seen AI as an accelerator of the so-called fourth industrial revolution. According to one estimate, AI could contribute up to $15.7 trillion to the global economy by 2030. However, AI-generated economic growth is likely to remain geographically concentrated, with North America and China expected to see the largest gains. Most countries are grappling with a lack of AI readiness, which will likely feed into greater global inequality. The commitment flagged by a soaring number of countries to chart AI regulation is a sign of progress. However, the fragmented AI policy landscape and the digital divide between regions hinder a global regulatory response to AI.
Governments across the world and international organisations have become increasingly aware of the implications of AI regulatory fragmentation and the global digital divide. AI technologies can boost economic and social growth in both high-income and low-income countries. The use of AI across disciplines must be expanded to cover cutting-edge science and critical areas such as healthcare, education and climate change. Bilateral and multilateral platforms and international initiatives for greater cooperation on AI are proliferating. Multilateralism may prove to be a useful tool at the service of artificial intelligence and digitalisation.
In recent years, the United Nations has been increasingly vocal about the need to provide multilateral responses to AI challenges. The 2023 edition of the UN Activities on Artificial Intelligence Report, drafted in collaboration with the International Telecommunication Union (ITU), recognises the potential of AI systems to accelerate and enable progress towards the 17 Sustainable Development Goals. AI underpins systems that could improve forecasting of food crises, monitor water productivity, map schools through satellite imagery and optimize the performance of communication networks, among other applications. Within this framework, the UN has launched a number of initiatives to unlock the potential of AI for global development.
A high-level Advisory Body on AI was established by the UN in October 2023 to foster a globally inclusive approach to AI. Bringing together a multi-stakeholder team of experts representing 33 countries, the Advisory Body released its final report, “Governing AI for Humanity,” in September 2024. The report highlights the gaps in global AI governance, underscoring the urgent need for a global approach to AI and providing a roadmap toward it. In March 2024, the UN adopted a landmark resolution on the promotion of “safe, secure and trustworthy” AI systems to boost sustainable development for all.
The US-led drafting process saw contributions from 120 other Member States, including China. Important milestones were cemented in September 2024 at the UN Summit of the Future held in New York. In the leadup to the event, the main digital stakeholders gathered at the Action Days on 20-21 September, committing $1.05 billion in funding to advance digital inclusion. During the Summit, leaders adopted the Pact for the Future, which includes a Global Digital Compact and a Declaration on Future Generations. These measures aim to transform global digital governance by anchoring it to the SDGs and the 2030 Agenda. The Global Digital Compact, in particular, provides a comprehensive framework for the global governance of digital technology and AI, charting a roadmap for global digital cooperation to harness AI's potential and bridge digital divides.
Fora like the G7 and the G20 have been working to provide multilateral solutions to the unresolved challenges of AI governance. Digitalisation features prominently across the areas of work of the two fora and in their ministerial meetings, with particular attention dedicated to AI.
Most of the leading digital powers – but not China and India – gather at the G7 level, making the forum a valuable venue to address regulatory fragmentation and build common ground for greater convergence among the AI models of G7 countries. Under the Japanese presidency in 2023, G7 leaders agreed on a set of “International Guiding Principles on Artificial Intelligence” and a voluntary “Code of Conduct for AI developers.” These first pillars underline the G7's commitment to deepen cooperation on AI by setting standards that could be integrated into national AI regulatory frameworks. The voluntary Code of Conduct aims to hold private companies accountable for respecting basic principles in the development of AI technologies. In 2024, the G7, under the Italian presidency, established the AI Hub for Sustainable Development in collaboration with the UN Development Programme. The AI Hub aims to promote a multistakeholder approach to support local AI digital ecosystems and strengthen capacities to advance AI for sustainable development, with a particular focus on Africa.
While the G7 pays particular attention to the African continent, the G20 – with a strong focus on AI's societal impacts – has doubled down on its commitment to global inclusion and a broader digital outreach across countries and stakeholders. The G20 agenda largely aligns with the UN's commitment to unlock AI's potential to promote the SDGs, while spearheading global conversations on AI governance. The Science20 (S20) platform, bringing together scientific academies from the G20 countries, has convened since 2017. Across presidencies, the S20 has mostly focused on the impact of technology on a global scale, with yearly mottos ranging from “Transformative Science for Sustainable Development” in 2020 to “Science for Global Transformation” in 2024.
Deepening geopolitical and technological rivalries are likely to continue preventing any major agreements on effective instruments for global AI governance. Divergent interests and approaches, particularly between Western countries and China, remain difficult to reconcile. However, ongoing efforts at the UN and in groups like the G20 and G7 could help lay the groundwork for agreements on a set of basic global principles and standards. The involvement of AI stakeholders and actors, which has been encouraged in many cooperation frameworks, could significantly contribute to advancing common solutions.
AI governance is likely to be a blend of voluntary arrangements with varying degrees of monitoring, alongside new regulatory efforts and enforcement actions. This mix will differ across regions, reflecting their distinct regulatory cultures and economic models. In this context, transatlantic dialogue and cooperation will play a key role. The G7, in particular, has launched several promising initiatives on AI governance, leveraging converging policy efforts across the Atlantic and key preparatory work by the OECD. These initiatives could potentially serve as a foundation for broader, albeit limited, global agreements.
Concerns about AI's potential to exacerbate the digital divide have also led to several international initiatives aimed at harnessing AI’s potential for achieving the Sustainable Development Goals and fostering economic growth in low- and middle-income countries. Bridging the digital divide has become a central focus of UN diplomacy, with efforts centered on promoting participatory and inclusive AI models for countries in the Global South.