Once viewed as political propaganda or military deception, disinformation has become a lucrative industrial complex. Driven by the engagement-focused business models of social media and the algorithmic allocation of advertising funds, multiple online actors — from content creators to the platforms themselves — have the financial incentives to promote and amplify incendiary content.
The fundamental principle of the Internet's dominant business model is simple: engagement equals revenue. Each view, click, comment, and repost is an engagement that can translate into money. Consequently, content creators have learned to circulate precisely the kind of content that generates clicks, which, as existing marketing research shows, tends to be shocking, highly emotional, and tribalizing. Tribalizing content, in particular, provokes interaction by pushing audiences to sort themselves into "us vs. them" groups. Social media algorithms then pick up this engaging content and distribute it further.
Advertisers unknowingly fund fake news
Thinking about disinformation as a market system helps identify its supply chain. At the production level, the actors that seed disinformation — from state actors to politicians and influencers — create narrative campaigns designed to mislead the public for harm or profit. The intermediaries are the platforms' algorithms and recommender systems, which pick up and amplify the content. Alongside them, entire grey industries, such as "bot farms," use fake social media accounts and other automated scripts for a dual purpose: committing ad fraud and amplifying disinformation on demand. Finally, advertisers, knowingly or unknowingly, fund this economy by entrusting their ad spending to the advertising technology ecosystem (AdTech), which operates without oversight or accountability.
My research on the overlap between digital advertising and fake news suggests that the financial incentives of the disinformation economy are staggering. Fake news websites rake in revenue through programmatic advertising, earning money from the ads served each time a user visits an article. Meanwhile, reactionary and provocative influencers leverage incendiary content to build an audience and then cash in through the podcast circuit, which can yield lucrative personal brands and even positions of political influence.
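The scale of these incentives can be sketched with simple arithmetic. The function and figures below are a minimal illustration of how programmatic display revenue is commonly estimated (impressions times CPM); all numbers are hypothetical assumptions for illustration, not measurements from the research described above:

```python
# Back-of-the-envelope model of programmatic ad revenue for a single
# website. All inputs are illustrative assumptions, not measured data.

def estimated_monthly_revenue(pageviews: int, ads_per_page: int, cpm_usd: float) -> float:
    """Revenue = (impressions / 1000) * CPM, where CPM is the price
    an advertiser pays per thousand ad impressions."""
    impressions = pageviews * ads_per_page
    return impressions / 1000 * cpm_usd

if __name__ == "__main__":
    # Hypothetical site: 2 million monthly pageviews, 4 ad slots per
    # page, and a $1.50 CPM -- plausible but entirely assumed values.
    revenue = estimated_monthly_revenue(2_000_000, 4, 1.50)
    print(f"Estimated monthly ad revenue: ${revenue:,.0f}")
```

Even at a low CPM, a site that attracts traffic with incendiary content can generate meaningful passive income, which is the core incentive the supply chain above monetizes.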
The role of Big Tech
Tech companies are both enablers and beneficiaries of the disinformation economy because they are, at their core, AdTech businesses, each dominating a segment of that economy. Google dominates the search, video, and display advertising markets; Meta leads social media advertising; and Amazon leads e-commerce and retail display advertising.
While platforms claim to combat misinformation, their business models depend on user engagement, creating a conflict of interest. Meta's decision to end its fact-checking partnerships in the US is a case in point. The economic incentives to spread disinformation will persist as long as AdTech firms continue distributing ad spending without accountability or democratic oversight.
Countering the disinformation economy
Examining disinformation from the perspective of its business models opens opportunities for interventions beyond messaging and media literacy. Rather than treating disinformation and misinformation as accidental byproducts, a market-oriented perspective holds that they thrive within markets designed for digital advertising and influencer marketing.
The book “Market-Oriented Disinformation Research” proposes that addressing the issue requires systemic changes in how revenue is generated. It suggests adapting three mechanisms already used in financial markets to combat money laundering and terrorism financing: Know Your Customer (KYC), duty to care, and due diligence.
KYC rules mandate that financial institutions verify the identities of their clients and monitor transactions for suspicious activity. Applied to digital advertising, KYC could help limit the flow of dark money, where undisclosed sources funnel funds into AdTech without oversight.
Marketers should have a duty to care — a legal responsibility to ensure that their spending does not inadvertently fuel disinformation, hate speech, or fraudulent activities. This would mean verifying that AdTech intermediaries indeed distribute their ad spending as intended and taking responsibility for what ads are funding.
Advertising agencies and AdTech intermediaries should be held responsible for performing due diligence to prevent the waste of their clients’ advertising budgets on ad fraud and the funding of harmful content.
Carlos Diaz Ruiz is an associate professor of marketing at Hanken School of Economics and the author of the book “Market-Oriented Disinformation Research: Digital Advertising, Disinformation and Fake News on Social Media”. This open-access book explores the spread of false or misleading information online through the lens of marketing theory and consumer research.