Executive Summary
The global governance of artificial intelligence presents what we term the AI Governance Trilemma: no single jurisdiction can simultaneously maximise (1) innovation speed, (2) safety and rights protection, and (3) global competitiveness. Any regulatory framework that prioritises two of these objectives necessarily compromises the third. This trilemma — analogous to the Mundell-Fleming impossible trinity in international macroeconomics — structures the strategic landscape of the US-EU-China regulatory race that will determine the trajectory of AI development for the coming decade.
As of early 2026, global investment in AI has surpassed $450 billion annually, with the sector projected to contribute $15.7 trillion to the global economy by 2030 according to PwC estimates. Yet the regulatory frameworks governing this transformative technology remain fragmented across jurisdictions, creating a multi-player game with significant implications for innovation trajectories, safety outcomes, and the distribution of economic gains. This paper models the AI governance landscape as a three-player Bayesian game, analyses the equilibrium strategies emerging in each major jurisdiction, and proposes coordination mechanisms that could ameliorate the trilemma's constraints.
The Trilemma Framework: Three Vertices, Two Choices
The AI Governance Trilemma can be formalised as follows. Define three policy objectives that any AI governance regime seeks to achieve:
Innovation Speed (I): The pace at which AI systems move from research to deployment, encompassing regulatory approval timelines, compliance burdens on developers, and the permissiveness of experimental deployment. The Stanford HAI AI Index 2025 reports that the median time from AI model development to commercial deployment ranges from 8 months in the United States to 22 months in the European Union — a gap primarily attributable to regulatory process differences.
Safety Assurance (S): The degree to which regulatory frameworks mitigate risks from AI deployment, including algorithmic bias, autonomous system failures, privacy violations, and existential risk considerations. The OECD AI Policy Observatory tracks 47 distinct safety requirements across member jurisdictions, with compliance costs estimated at 8–15% of development budgets for high-risk AI applications.
Global Competitiveness (C): A jurisdiction's ability to attract AI talent, investment, and corporate headquarters relative to competing jurisdictions. The IMF's 2025 World Economic Outlook identifies AI sector competitiveness as a primary determinant of medium-term growth differentials among advanced economies.
The trilemma holds because of fundamental tensions between these objectives. Maximising I and C requires light-touch regulation that attracts firms and enables rapid deployment — but this compromises S. Maximising S and C requires smart regulation that builds consumer trust and sets global standards — but extensive compliance processes slow I. Maximising I and S requires heavy investment in testing infrastructure and regulatory capacity — but the associated costs and restrictions drive firms to more permissive jurisdictions, undermining C.
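The tensions above can be given a stylized algebraic form. The specific functional form below is ours, for exposition only; the paper's argument does not depend on it. Normalise each objective to the unit interval and posit a linear feasibility frontier:

```latex
% Stylized formalization of the trilemma (illustrative functional form).
% Normalise each objective to [0, 1] and posit the feasibility constraint
\[
  I + S + C \;\le\; 2, \qquad I,\, S,\, C \in [0, 1].
\]
% Achieving any two objectives fully (value 1) then forces the third to 0,
% and any interior policy mix trades the three off along the frontier
% I + S + C = 2.
```

The linear frontier is the simplest shape consistent with the claim "any two, but not all three"; a concave frontier would tell the same story with smoother trade-offs at the corners.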
The Three-Player Game: US, EU, and China Strategies
Each major AI jurisdiction has implicitly chosen which vertex of the trilemma to deprioritise, revealing distinct strategic postures that can be modelled as a three-player game with incomplete information about competitors' future regulatory trajectories.
The United States: Prioritising I + C, Compromising S. The US regulatory approach, characterised by the 2023 Executive Order on Safe, Secure, and Trustworthy AI and subsequent sector-specific guidance rather than comprehensive legislation, prioritises maintaining innovation leadership and competitiveness. The National AI Research Resource and tax incentives for AI R&D investment (totalling approximately $12 billion in fiscal incentives through 2025) reflect a strategy of attracting and retaining AI development capacity. However, the absence of comprehensive federal AI legislation — with regulation distributed across the FTC, FDA, NHTSA, and state-level initiatives — creates safety governance gaps. The Brookings Institution's 2025 analysis identifies 23 "regulatory blind spots" where commercially deployed AI systems face no binding safety requirements.
The European Union: Prioritising S + C, Compromising I. The EU AI Act, in force since August 2024 with its obligations phasing in through 2027, represents the most comprehensive AI regulatory framework globally. Its risk-based classification system, mandatory conformity assessments for high-risk systems, and prohibitions on certain AI practices prioritise safety while seeking to establish regulatory standards that become global benchmarks — the "Brussels Effect" applied to AI. However, empirical evidence suggests innovation costs: European AI startups raised 38% less venture capital per capita than US counterparts in 2025, and the European Commission's own impact assessment acknowledged a projected 10–15% reduction in AI deployment speed for high-risk applications.
China: Prioritising I + S (state-defined), Compromising C (in the liberal sense). China's approach — encompassing the Interim Measures for Generative AI (2023), algorithmic recommendation regulations, and deep synthesis provisions — combines rapid deployment facilitation for strategically prioritised AI applications with extensive state oversight. The framework enables fast innovation in approved domains while maintaining comprehensive monitoring. However, data sovereignty requirements, restrictions on cross-border AI services, and geopolitical tensions reduce China's attractiveness as an AI development hub for international firms, with foreign AI R&D investment declining 34% between 2022 and 2025 according to Rhodium Group estimates.
Bayesian Game Dynamics: Uncertainty and Strategic Adjustment
The interaction between these three jurisdictions constitutes a Bayesian game in which each player has incomplete information about competitors' true preferences and future regulatory trajectories. Each jurisdiction observes its competitors' current regulatory posture but faces uncertainty about whether observed strategies represent stable commitments or transitional positions.
Define the type space Θ_i for each jurisdiction i as the set of possible preference orderings over {I, S, C}. Each jurisdiction's strategy σ_i maps its type θ_i and its beliefs about competitors' types to a regulatory posture. A Bayesian Nash equilibrium requires that each jurisdiction's strategy maximise its expected payoff given its beliefs about competitors' types, with beliefs derived from the common prior and, in the dynamic setting, updated by Bayes' rule on observed regulatory moves.
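The equilibrium condition can be made concrete with a deliberately minimal toy model. The sketch below considers only the US choosing between a "light" and a "strict" posture while uncertain whether the EU is a "committed" type (plays strict regardless) or a "flexible" type (plays light). All payoff numbers and the two-type structure are illustrative assumptions of ours, not estimates from the sources cited above.

```python
# Stylized payoffs for the US as a function of (us_posture, eu_posture).
# Illustrative numbers only: light-touch pays more when the EU is also
# light (no compliance fragmentation), less when the EU is strict and
# US firms must duplicate compliance for the EU market anyway.
US_PAYOFF = {
    ("light", "light"): 4,
    ("light", "strict"): 2,
    ("strict", "light"): 1,
    ("strict", "strict"): 3,
}

# The EU's type is private information: a "committed" type plays strict
# regardless of the US; a "flexible" type plays light.
EU_STRATEGY = {"committed": "strict", "flexible": "light"}

def expected_payoff(us_posture, p_committed):
    """US expected payoff given its posture and belief p that the EU is committed."""
    return (p_committed * US_PAYOFF[(us_posture, EU_STRATEGY["committed"])]
            + (1 - p_committed) * US_PAYOFF[(us_posture, EU_STRATEGY["flexible"])])

def best_response(p_committed):
    """US best response: the posture maximising expected payoff under belief p."""
    return max(["light", "strict"], key=lambda a: expected_payoff(a, p_committed))
```

With these numbers the US plays light whenever its belief that the EU is committed falls below 3/4, and strict above it, which is the mechanism behind regulatory signalling: the EU's early, public commitment to the AI Act raises competitors' p and thereby shifts their best responses.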
This framework illuminates several empirical phenomena. First, regulatory signalling: jurisdictions strategically announce regulatory intentions to influence competitors' beliefs and strategies. The EU's early publication of the AI Act proposal in 2021 — years before full implementation — served as a commitment device that shaped US and Chinese regulatory responses. Second, strategic ambiguity: the US preference for executive orders and agency guidance over legislation preserves flexibility to adjust strategy as competitors' approaches become clearer. Third, learning dynamics: each jurisdiction updates its beliefs about optimal strategy based on observed outcomes in competitor jurisdictions, creating a dynamic game with converging beliefs over time.
The Standards War: Competing for Regulatory Hegemony
Beyond domestic policy optimisation, the AI governance trilemma generates a secondary game: competition for global regulatory standard-setting. The jurisdiction whose AI governance framework becomes the de facto global standard captures significant advantages — reduced compliance costs for domestic firms, influence over global AI development trajectories, and soft power in international institutions.
This standards competition exhibits characteristics of a war of attrition. Each jurisdiction incurs costs by maintaining a distinct regulatory framework (compliance fragmentation, market access barriers) but hopes competitors will eventually converge toward its approach. The EU's strategy explicitly seeks this outcome: the Brussels Effect operates through market size, requiring foreign firms serving EU consumers to comply with EU standards, which then propagate globally through corporate compliance harmonisation. The OECD estimates that 67% of multinational AI firms have adopted EU AI Act classification categories as internal global standards, even in jurisdictions without equivalent requirements.
However, the standards war is not winner-take-all. Network effects in AI governance are weaker than in technical standards (like USB or TCP/IP), because regulatory compliance is more adaptable than hardware design. The emerging equilibrium appears to involve partial convergence around risk-based classification (adopted by the EU, Singapore, Canada, and Brazil) with persistent divergence in enforcement mechanisms and risk thresholds — a "fragmented convergence" that reduces but does not eliminate the costs of regulatory heterogeneity.
The Race-to-the-Bottom Risk and Institutional Safeguards
Economic theory predicts that regulatory competition can produce either a "race to the top" (if regulations create positive externalities through consumer trust and market stability) or a "race to the bottom" (if regulation is primarily perceived as a cost). The AI governance domain exhibits both dynamics simultaneously.
For consumer-facing AI applications (healthcare diagnostics, financial services, autonomous vehicles), evidence suggests a race to the top: jurisdictions with credible safety frameworks attract greater consumer adoption. McKinsey's 2025 consumer survey found that 72% of respondents in OECD countries expressed higher willingness to use AI healthcare tools in jurisdictions with explicit regulatory oversight. This dynamic incentivises robust safety governance.
Conversely, for enterprise and infrastructure AI (model training, data centre operations, B2B AI services), competitive dynamics favour regulatory minimisation. AI firms can locate training operations, data processing, and research facilities in permissive jurisdictions while serving regulated markets remotely. This mobility creates pressure for jurisdictions to lower compliance burdens for non-consumer-facing AI activities — a dynamic visible in the UK's post-Brexit "pro-innovation" AI regulatory approach and Singapore's deliberately light-touch AI governance framework.
The net effect depends on the relative magnitude of these opposing forces. Current empirical evidence — including the continued concentration of AI talent and investment in relatively permissive jurisdictions (the US attracted 58% of global AI private investment in 2025) — suggests that race-to-the-bottom pressures currently dominate for the most economically significant AI activities.
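For mobile, non-consumer-facing AI activity, the race-to-the-bottom logic has the structure of a prisoner's dilemma, which a two-jurisdiction toy makes explicit. The payoff numbers are invented for exposition and are not drawn from the sources above.

```python
# Symmetric payoffs (row player, column player) for two jurisdictions
# choosing a compliance regime for mobile infrastructure AI. "Lax"
# captures mobile investment from a "strict" rival, but mutual laxity
# forgoes the trust and stability gains of mutual strictness.
PAYOFFS = {
    ("strict", "strict"): (3, 3),
    ("strict", "lax"):    (1, 4),
    ("lax",    "strict"): (4, 1),
    ("lax",    "lax"):    (2, 2),
}

def best_response(opponent_action):
    """Row player's best response to a fixed opponent action."""
    return max(["strict", "lax"],
               key=lambda a: PAYOFFS[(a, opponent_action)][0])

def is_nash(profile):
    """A profile is a Nash equilibrium if each action best-responds to the
    other; the game is symmetric, so one best-response function suffices."""
    a, b = profile
    return best_response(b) == a and best_response(a) == b
```

Here "lax" is a dominant strategy for each jurisdiction, so (lax, lax) is the unique equilibrium even though both would prefer (strict, strict) — exactly the gap the cooperative mechanisms in the next section are designed to close.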
Toward Cooperative Solutions: Multilateral AI Governance Mechanisms
The AI governance trilemma cannot be eliminated — it reflects genuine trade-offs. However, cooperative mechanisms can shift the feasibility frontier outward, enabling jurisdictions to achieve better outcomes across all three dimensions than unilateral strategies permit. Three institutional innovations merit consideration:
1. Mutual Recognition Agreements for AI Conformity Assessment. Analogous to mutual recognition agreements in pharmaceutical regulation, AI-specific MRAs would allow conformity assessments conducted in one jurisdiction to be recognised in others. This directly addresses the I-C trade-off: firms can deploy faster across jurisdictions without duplicating compliance processes. The G7's Hiroshima AI Process provides a nascent framework, but binding mutual recognition requires deeper institutional development.
2. International AI Safety Benchmarking Infrastructure. Establishing shared benchmarking infrastructure — akin to the International Bureau of Weights and Measures for AI safety metrics — would reduce the cost of safety assurance and create common evidentiary standards. The UK AI Safety Institute and its international counterparts represent early steps, but a formal multilateral benchmarking institution would systematically lower the S-I trade-off by reducing duplicative testing.
3. Tiered Global AI Governance Compact. Drawing on the climate governance model, a tiered compact would establish a floor of minimum global AI safety standards, with differentiated commitments above that floor (analogous to the Paris Agreement's nationally determined contributions) reflecting each party's AI development capacity. This addresses the C-S trade-off by preventing race-to-the-bottom dynamics while accommodating legitimate differences in regulatory capacity and priorities.
Implications for GDEF's Regulation & Policy Working Group
The trilemma framework suggests that advocacy for any single regulatory model — whether the EU's comprehensive approach, the US's innovation-first posture, or China's state-directed model — is less productive than designing mechanisms that ameliorate the trade-offs all jurisdictions face. GDEF's convening role across government, industry, and civil society positions it to facilitate the institutional innovations outlined above.
The Regulation & Policy Working Group's proposed International AI Governance Compact initiative will draw on this analysis to develop concrete proposals for mutual recognition frameworks and shared safety infrastructure, for consideration at the 2026 Annual Summit.
References & Sources
- OECD, AI Policy Observatory: Regulatory Tracker 2025. Organisation for Economic Co-operation and Development. oecd.ai/en/dashboards
- Stanford University, Artificial Intelligence Index Report 2025. Human-Centered Artificial Intelligence (HAI). aiindex.stanford.edu/report
- European Commission, AI Act Impact Assessment: Final Report. Directorate-General for Communications Networks, Content and Technology. digital-strategy.ec.europa.eu
- IMF, World Economic Outlook: AI and the Global Economy. October 2025. imf.org/en/Publications/WEO
- PwC, Global Artificial Intelligence Study: Sizing the Prize. pwc.com
- Brookings Institution, The US AI Regulatory Gap: Mapping Federal Oversight Blind Spots. Governance Studies, 2025. brookings.edu
- Mundell, R.A. (1963). "Capital Mobility and Stabilization Policy under Fixed and Flexible Exchange Rates." Canadian Journal of Economics and Political Science, 29(4), 475–485. doi.org/10.2307/139336
- Bradford, A. (2020). The Brussels Effect: How the European Union Rules the World. Oxford University Press. global.oup.com
- Rhodium Group, China AI Investment Tracker 2025. rhg.com/research
- Harsanyi, J.C. (1967). "Games with Incomplete Information Played by 'Bayesian' Players, I–III." Management Science, 14(3), 159–182. doi.org/10.1287/mnsc.14.3.159