AI Disinformation Farms: 1,000 Fake News Sites Generate Millions of Views for Ad Revenue

NewsGuard and other researchers identified over 1,000 "pink slime" news sites generating AI-written disinformation content at scale, earning advertising revenue while spreading false narratives — with some sites producing 1,200 articles per day.

Global Advertising Ecosystem · 2023

Background

AI text generation tools dramatically reduced the cost of producing fake news content. In 2023, operations that had previously required dozens of writers could produce thousands of articles daily with a handful of operators and AI tools. These operations monetised their output via programmatic advertising placed on the fake sites.

The Attack

Operations identified by NewsGuard, GDI, and the Stanford Internet Observatory used LLMs to generate large volumes of news-style articles — from local fake reporting to political disinformation — posting them to networks of websites with credible-looking names. These sites attracted programmatic advertising from major brands (often through multiple ad network layers that obscured placement). Some sites produced 1,200 articles per day with no human writers. One identified Chinese operation produced fake local US news sites targeting swing states ahead of the 2024 election. The content mixed genuine news with disinformation to appear legitimate.
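One reason the ad-network layers described above can obscure placement is that supply chains are long and opaque. A standard tool brand-safety teams use to audit them is the IAB's ads.txt file, which publishers host to declare who is authorised to sell their inventory. Below is a minimal, hedged sketch of parsing an ads.txt body and counting reseller entries; the domain names, IDs, and the idea that a long reseller chain is a warning sign are illustrative assumptions, not a definitive vetting method.

```python
from dataclasses import dataclass

@dataclass
class AdsTxtRecord:
    ad_system: str      # domain of the advertising system
    seller_id: str      # publisher's account ID on that system
    relationship: str   # "DIRECT" or "RESELLER"

def parse_ads_txt(text: str) -> list[AdsTxtRecord]:
    """Parse the body of an ads.txt file into records.

    Per the IAB ads.txt spec, each data line has 3-4 comma-separated
    fields; '#' starts a comment, blank lines are ignored, and lines
    containing '=' are variable declarations (e.g. contact=...).
    """
    records = []
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments
        if not line or "=" in line:            # skip blanks and variables
            continue
        fields = [f.strip() for f in line.split(",")]
        if len(fields) >= 3:
            records.append(AdsTxtRecord(fields[0].lower(),
                                        fields[1],
                                        fields[2].upper()))
    return records

# Hypothetical ads.txt content for a site under review.
sample = """
# ads.txt for example-news-site.com (hypothetical)
google.com, pub-1234567890, DIRECT, f08c47fec0942fa0
resellernetwork.example, 99881, RESELLER
contact=adops@example-news-site.com
"""

records = parse_ads_txt(sample)
resellers = [r for r in records if r.relationship == "RESELLER"]
print(len(records), len(resellers))  # prints "2 1"
```

In practice an auditor would fetch `https://<domain>/ads.txt` and walk the chain through each ad system's sellers.json, but even this parsing step surfaces how many intermediaries sit between a brand and the page its ad lands on.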

Response

Brand-safety groups worked with ad networks to demonetise identified fake news sites. Meta, Google, and other platforms updated their policies. NewsGuard published its AI-generated disinformation tracker, and the EU's Digital Services Act required very large platforms to assess disinformation risks.

Outcome

The democratisation of AI content generation makes it economically viable to operate disinformation at scale purely for advertising revenue; political or ideological motivation is no longer necessary. The volume and quality of AI content make automated detection increasingly difficult.

Key Takeaways

  1. Programmatic advertising systems must implement better verification of content publisher legitimacy
  2. AI-generated content detection tools are valuable but imperfect — cross-reference news with established sources
  3. Ad networks bear responsibility for where ads appear — brand safety controls must extend to AI-generated fake news sites
  4. Media literacy about AI-generated content is now an essential civic skill
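Detection being imperfect does not mean there are no useful signals. The 1,200-articles-per-day figure reported above is itself one: no human newsroom sustains that cadence. A minimal sketch of a volume-based heuristic follows; the threshold and the synthetic timestamp feed are assumptions for illustration, not a published detection standard.

```python
from datetime import datetime, timedelta

def flag_suspicious_cadence(timestamps: list[datetime],
                            daily_threshold: int = 500) -> bool:
    """Flag a site whose average publishing volume exceeds what a
    human newsroom plausibly sustains (threshold is an assumed value).
    """
    if len(timestamps) < 2:
        return False
    ts = sorted(timestamps)
    span_days = max((ts[-1] - ts[0]).total_seconds() / 86400, 1.0)
    return len(ts) / span_days > daily_threshold

# A hypothetical feed posting one article every 72 seconds, i.e.
# ~1,200 articles/day -- the volume reported for some pink-slime sites.
start = datetime(2023, 5, 1)
machine_feed = [start + timedelta(seconds=72 * i) for i in range(2400)]

# A hypothetical human-paced feed: 10 articles spread over 5 days.
human_feed = [start + timedelta(hours=12 * i) for i in range(10)]

print(flag_suspicious_cadence(machine_feed))  # True
print(flag_suspicious_cadence(human_feed))    # False
```

Cadence alone misfires on wire-service aggregators, so in practice it would be one feature among several (duplicate phrasing across a site network, domain registration age, ads.txt anomalies) rather than a verdict on its own.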
AI disinformation · fake news · programmatic advertising · synthetic content · pink slime