Executive Summary
The information ecosystem is experiencing a crisis of integrity. The Oxford Internet Institute's 2025 Global Disinformation Inventory identifies organised disinformation campaigns operating in 86 countries — up from 48 in 2020. The Reuters Institute Digital News Report reveals that trust in news has declined to 38% globally, with only 23% of respondents trusting news encountered on social media platforms. The proliferation of generative AI tools has dramatically reduced the cost of producing convincing disinformation: the Center for Countering Digital Hate estimates that AI-generated disinformation output has increased tenfold since the release of publicly accessible image and text generation tools in 2022.
This policy brief analyses information integrity through the lens of externality theory and public goods economics. We argue that disinformation is best understood as a negative externality of attention-maximising platform design — analogous to pollution as an externality of industrial production. Just as environmental regulation internalises the costs of pollution, platform accountability regulation must internalise the costs of information pollution. We examine three regulatory approaches — the EU's Digital Services Act, proposed US Section 230 reforms, and the 2024 amendments to Brazil's Marco Civil da Internet — and assess their effectiveness through the framework of regulatory economics.
Disinformation as Negative Externality
Arthur Pigou's foundational work on externalities provides a precise framework for understanding the disinformation problem. A negative externality arises when an economic activity imposes costs on third parties that are not reflected in the activity's market price. Industrial pollution is the textbook example: a factory that emits toxic waste into a river captures the full benefit of its production while imposing health and environmental costs on downstream communities.
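The Pigouvian framework can be stated compactly. In the standard textbook formalisation (not drawn from the brief itself), MPC denotes marginal private cost, MEC marginal external cost, and MSC marginal social cost:

```latex
\begin{aligned}
\text{MSC}(q) &= \text{MPC}(q) + \text{MEC}(q) \\
\text{market equilibrium: } P &= \text{MPC}(q_m), \qquad
\text{social optimum: } P = \text{MSC}(q^{*}) \;\Rightarrow\; q^{*} < q_m \\
t^{*} &= \text{MEC}(q^{*}) \quad \text{(Pigouvian tax that internalises the externality)}
\end{aligned}
```

Because the externality-generating activity is overproduced at the market equilibrium ($q_m > q^{*}$), a corrective tax equal to the marginal external cost at the optimum restores efficiency — the logic the brief extends from factory emissions to algorithmic amplification.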
Social media platforms' attention-maximising algorithms exhibit an analogous externality structure. Platforms' revenue depends on user engagement (time spent, interactions, shares), and engagement-maximising algorithms systematically amplify content that triggers strong emotional responses — including outrage, fear, and tribal identity — because such content generates more clicks and shares. Research published in Science (2018) by Vosoughi, Roy, and Aral found that false news stories on Twitter spread six times faster and reached significantly more people than accurate stories, precisely because falsehood is often more emotionally engaging than truth.
The costs of algorithmic amplification of disinformation are borne not by the platforms or the content creators but by society: eroded trust in institutions, polarised political discourse, public health misinformation (the WHO's 2024 assessment attributes an estimated 15,000 excess COVID-19 deaths in Europe to vaccine misinformation), and undermined democratic processes. These are classic negative externalities: real costs imposed on third parties by a market interaction (between platform and advertiser) from which they are excluded.
The magnitude of the externality is substantial. A 2024 Brookings Institution analysis estimates that disinformation costs the global economy approximately $78 billion annually through market manipulation, health misinformation consequences, election interference, and brand safety losses. This figure excludes the harder-to-quantify costs of social trust erosion and democratic damage.
Content Moderation as Public Good
If disinformation is a negative externality, content moderation is the corresponding public good: a service that benefits all users of the information ecosystem but that individual actors have insufficient incentive to provide. Content moderation is non-excludable (all users benefit from a healthier information environment) and non-rivalrous (one user's benefit from reduced disinformation does not diminish another's), satisfying the formal criteria for a public good.
The free-rider problem in content moderation manifests at multiple levels. Individual users have little incentive to invest time in verifying and reporting false content when the benefits of doing so are diffused across the entire platform. Platforms have limited incentive to invest in moderation beyond the level necessary to retain advertisers, because the societal benefits of a healthier information ecosystem are not captured in their revenue. Governments face cross-border free-rider problems: disinformation originating in one jurisdiction harms information integrity globally, but enforcement resources are national.
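The underprovision logic sketched above is the standard public-goods result. Writing $MB_i(G)$ for individual $i$'s marginal benefit from moderation level $G$ and $MC(G)$ for its marginal cost (a textbook formalisation, introduced here for illustration):

```latex
\underbrace{\sum_{i=1}^{n} MB_i(G^{*})}_{\text{summed across all beneficiaries}} = MC(G^{*})
\quad \text{(efficient provision: Samuelson condition)}
\qquad
MB_i(G_{\text{priv}}) = MC(G_{\text{priv}})
\quad \text{(private provision)}
```

Because any single actor equates only its own marginal benefit with the marginal cost, while efficiency requires summing benefits across all $n$ beneficiaries, private provision falls short of the optimum ($G_{\text{priv}} < G^{*}$), and the shortfall grows as benefits are spread across more actors — exactly the pattern visible at the user, platform, and government levels described here.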
Platforms' moderation investment decisions illustrate the underinvestment problem. Despite combined annual revenues exceeding $350 billion, the major social media platforms (Meta, Alphabet, X, TikTok's parent ByteDance) collectively spend an estimated $12–15 billion on trust and safety operations — approximately 3–4% of revenue. For comparison, the financial services industry spends approximately 8–12% of revenue on compliance and risk management, reflecting regulatory mandates that the technology sector largely lacks.
Regulatory Approaches: A Comparative Analysis
The EU Digital Services Act (DSA). The DSA, fully operational since February 2024, represents the most comprehensive platform accountability framework globally. Its key innovations include: mandatory systemic risk assessments for very large online platforms (VLOPs) with over 45 million EU users; transparency obligations for recommender algorithms and advertising targeting; independent audit requirements; and researcher access to platform data. Enforcement lies with the European Commission for VLOPs and with national Digital Services Coordinators for smaller platforms.
From a regulatory economics perspective, the DSA's systemic risk assessment requirement is its most significant innovation. By requiring platforms to identify, analyse, and mitigate risks to fundamental rights, public health, and democratic processes, the DSA effectively internalises the negative externalities of algorithmic amplification. Platforms must now account for the societal costs of their design choices, not merely their commercial consequences. Early implementation evidence suggests that VLOPs have increased trust and safety staffing by 25–40% and invested in recommender algorithm modifications to reduce amplification of harmful content.
US Section 230 reform proposals. Section 230 of the Communications Decency Act — which provides broad immunity to platforms for user-generated content — has been the subject of bipartisan reform proposals, though no legislation has been enacted. Proposed reforms range from narrowing immunity for specific categories of content (the SAFE TECH Act) to conditioning immunity on compliance with designated best practices (the EARN IT Act) or with reasonable content moderation standards. The economic analysis of Section 230 reform is complex: blanket immunity creates a moral hazard (platforms bear no legal cost for amplifying harmful content), but overly broad liability could create a chilling effect on speech and disproportionately burden smaller platforms that lack the resources for comprehensive moderation.
Brazil's Marco Civil da Internet amendments. Brazil's 2024 amendments to the Marco Civil da Internet establish requirements for platform transparency, algorithmic accountability, and content moderation responsiveness. The Brazilian approach is notable for its emphasis on electoral integrity, reflecting the country's experience with platform-mediated political disinformation. The amendments require platforms to establish rapid-response mechanisms during election periods and to provide electoral authorities with data access for investigating digital campaign violations.
The Attention Economy's Structural Incentives
Regulatory interventions that address symptoms (content moderation) without addressing root causes (attention-maximising business models) are likely to be insufficient. The fundamental driver of disinformation amplification is the economic structure of the attention economy: platforms whose revenue depends on maximising user engagement have a structural incentive to amplify emotionally provocative content, including disinformation.
Tim Wu's analysis of the "attention economy" highlights that user attention is the scarce resource around which platform business models are organised. Advertisers pay platforms for access to user attention, measured in impressions, clicks, and engagement metrics. This creates what economists call a "two-sided market" in which platforms mediate between users (who provide attention) and advertisers (who pay for access to that attention). The platform's optimisation objective — maximise engagement to maximise advertising revenue — is misaligned with the information ecosystem's social objective of accurate, balanced, and trustworthy information.
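The misalignment can be made explicit with a stylised formalisation (introduced here for illustration, not taken from Wu's analysis). Let $a$ denote the platform's amplification policy, $E(a)$ engagement, $p_{\text{ad}}$ the advertising price per unit of engagement, $U(a)$ the value users derive, and $D(a)$ the societal damage from amplified disinformation:

```latex
\max_{a}\; \Pi(a) = p_{\text{ad}} \cdot E(a)
\qquad \text{vs.} \qquad
\max_{a}\; W(a) = U(a) - D(a)
```

Where emotionally provocative content raises engagement ($E'(a) > 0$) while also raising societal damage ($D'(a) > 0$), the privately optimal amplification level exceeds the socially optimal one: the damage term $D(a)$ simply does not appear in the platform's objective. This is the structural misalignment that content moderation alone cannot correct.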
Addressing this structural misalignment requires interventions beyond content moderation. Potential structural approaches include: mandating interoperability, which would reduce platforms' monopoly power over user attention and enable competitive alternatives; prohibiting micro-targeted advertising for political content, which reduces the economic return on disinformation campaigns; and requiring algorithmic choice — giving users meaningful control over the recommendation algorithms that shape their information environment, rather than defaulting to engagement maximisation.
The Generative AI Amplification Challenge
Generative AI has fundamentally altered the disinformation landscape. The cost of producing convincing false content — text, images, audio, and video — has collapsed by orders of magnitude. A 2025 analysis by the Center for Strategic and International Studies (CSIS) documents that state-sponsored disinformation operations now routinely use AI-generated content, with detection rates for AI-generated text falling below 50% for current detection tools.
The arms race between AI-generated disinformation and AI-powered detection is inherently asymmetric. Generating convincing false content is computationally cheap and scalable; detecting it requires sophisticated analysis that is computationally expensive and fragile. Content provenance technologies — such as the C2PA standard for content credentials and watermarking of AI-generated content — offer a more sustainable approach by establishing the authenticity of genuine content rather than attempting to identify all false content. However, provenance adoption remains nascent: fewer than 5% of images published online currently carry C2PA credentials.
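The core idea behind provenance — authenticate the genuine rather than detect the false — can be illustrated with a minimal sketch. This is not the C2PA specification (which uses public-key signatures over a structured manifest embedded in the asset; see c2pa.org): it is a simplified stand-in using a keyed digest, with a hypothetical publisher key, to show why any post-signing edit invalidates the credential.

```python
import hashlib
import hmac

# Simplified illustration of content provenance: the publisher binds a
# signed credential to the content at creation time, and verifiers later
# check that binding. Real content credentials (e.g. C2PA) use public-key
# signatures and an embedded manifest; this sketch uses a shared-secret
# HMAC purely for brevity.

PUBLISHER_KEY = b"demo-secret"  # hypothetical key, for illustration only


def issue_credential(content: bytes) -> str:
    """Publisher signs a SHA-256 digest of the content at creation time."""
    digest = hashlib.sha256(content).digest()
    return hmac.new(PUBLISHER_KEY, digest, hashlib.sha256).hexdigest()


def verify_credential(content: bytes, credential: str) -> bool:
    """Verifier recomputes the binding; any edit to the bytes invalidates it."""
    expected = issue_credential(content)
    return hmac.compare_digest(expected, credential)


original = b"authentic image bytes"
cred = issue_credential(original)
assert verify_credential(original, cred)            # genuine content verifies
assert not verify_credential(b"tampered", cred)     # altered content fails
```

The asymmetry the paragraph describes is visible here: issuing and checking a credential is a constant-time cryptographic operation, whereas classifying arbitrary unsigned content as real or fake is an open-ended statistical problem.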
Implications for GDEF's Regulation & Policy Working Group
Information integrity is a foundational requirement for functional markets, democratic governance, and social trust — all core concerns of GDEF's mandate. The externality analysis presented here demonstrates that market mechanisms alone will not produce adequate information integrity: the divergence between private incentives and social costs is too large. Regulatory intervention is necessary, but must be carefully designed to address structural incentives rather than merely policing content. GDEF's Regulation & Policy Working Group will advance platform accountability frameworks in its programme of work, with particular focus on cross-jurisdictional coordination and the generative AI amplification challenge.
References & Sources
- Oxford Internet Institute, Global Disinformation Inventory 2025. University of Oxford. oii.ox.ac.uk
- Reuters Institute, Digital News Report 2025. University of Oxford. reutersinstitute.politics.ox.ac.uk
- European Commission, Digital Services Act: First Annual Implementation Report, 2025. ec.europa.eu/dsa
- Brookings Institution, The Economic Costs of Disinformation, 2024. brookings.edu/technology
- Vosoughi, S., Roy, D., and Aral, S. (2018). "The Spread of True and False News Online." Science, 359(6380), 1146–1151. doi.org/10.1126/science.aap9559
- Pigou, A.C. (1920). The Economics of Welfare. London: Macmillan. oll.libertyfund.org
- CSIS, AI-Generated Disinformation: State of the Threat 2025. Center for Strategic and International Studies. csis.org/technology
- WHO, Infodemic and Health Misinformation: Global Assessment 2024. who.int/infodemic
- Wu, T. (2016). The Attention Merchants. New York: Knopf. penguinrandomhouse.com
- C2PA, Content Provenance and Authenticity Technical Specification. c2pa.org