Industrialized Deception: How LLMs Are Reshaping the Disinformation Landscape

Our paper “Industrialized Deception: The Collateral Effects of LLM-Generated Misinformation on Digital Ecosystems” has been accepted at ACM TheWebConf ’26 (WWW ’26) in Dubai. It examines how large language models are reshaping the disinformation landscape and what we can do about it.

The Problem: Deception at Industrial Scale

Generative AI has fundamentally changed how misinformation spreads. What once required human effort to write, edit, and distribute can now be automated at unprecedented scale. We call this industrialized deception: the automated production of misleading content that erodes trust across entire digital ecosystems.

The threat goes beyond individual fake articles. When AI-generated misinformation proliferates, it undermines trust not just in specific content but in the information infrastructure itself. Our research identifies two systemic risks: epistemic fragmentation, where shared facts give way to competing AI-generated narratives, and synthetic consensus, where LLM-generated content creates an illusion of widespread agreement on fabricated claims.

Our Approach: JudgeGPT and RogueGPT

To study this problem empirically, we built two open-source tools that form a complete experimental pipeline:

  • RogueGPT is a controlled stimulus generation engine. It creates AI-generated news fragments under precise, reproducible conditions, giving researchers fine-grained control over variables like topic, writing style, and model used.
  • JudgeGPT is a web-based survey platform where participants rate news articles on authenticity and credibility, blind to whether each item is AI-generated or human-written, and legitimate or fabricated.

Together, these tools let us systematically investigate how humans perceive and detect AI-generated misinformation.
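To make the idea of "precise, reproducible conditions" concrete, here is a minimal sketch (not the actual RogueGPT code; the topics, styles, model names, and function names are illustrative) of how a full factorial design over experimental variables can be enumerated and shuffled deterministically, so every generated fragment is tied to a known condition:

```python
import itertools
import random

# Hypothetical sketch of controlled stimulus generation: every combination
# of experimental variables becomes one generation request, so each news
# fragment's condition is known and the design is reproducible from a seed.

TOPICS = ["economy", "health", "elections"]
STYLES = ["tabloid", "broadsheet"]
MODELS = ["model-a", "model-b"]  # placeholder model identifiers

def build_conditions(seed=42):
    """Enumerate the full factorial design and shuffle it deterministically."""
    conditions = [
        {"topic": t, "style": s, "model": m}
        for t, s, m in itertools.product(TOPICS, STYLES, MODELS)
    ]
    random.Random(seed).shuffle(conditions)  # same seed -> same order
    return conditions

def build_prompt(cond):
    """Turn one condition into a generation prompt for the chosen model."""
    return (
        f"Write a short news fragment about {cond['topic']} "
        f"in a {cond['style']} style."
    )

conditions = build_conditions()
print(len(conditions))  # 3 topics x 2 styles x 2 models = 12
print(build_prompt(conditions[0]))
```

The fixed seed is the point: a second lab re-running the pipeline gets the same stimuli in the same order, which is what makes perception studies like those run through JudgeGPT comparable across sites.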

Key Findings

Our research reveals several important insights:

  • People struggle to tell the difference. Participants in our studies have difficulty distinguishing LLM-generated news from human-written articles. The quality gap between AI-generated and human content has narrowed dramatically.
  • Detection is an arms race. While detection capabilities have improved, the competition between generation and detection continues to intensify, especially with multimodal systems that combine text, images, audio, and video.
  • Experts are skeptical of purely technical solutions. Our longitudinal expert survey shows that researchers and policymakers prefer provenance standards (like C2PA) and regulatory frameworks over AI-based detection tools alone.
  • Agentic AI raises the stakes. The emergence of autonomous AI systems capable of generating and disseminating content shifts the challenge from detecting fake content to identifying coordinated inauthentic behavior.

What Can We Do About It?

The paper discusses several mitigation strategies:

  • Content provenance: Cryptographic standards like C2PA that embed verifiable origin data into media files. We explore this further in our companion project Origin Lens.
  • Inoculation approaches: Pre-emptively exposing people to weakened forms of misinformation techniques, building psychological resilience before they encounter the real thing.
  • LLM-based detection: Using the same AI technology to detect synthetic content, though this remains a cat-and-mouse game.
  • Media literacy and education: Equipping people with critical thinking tools to evaluate information sources, regardless of whether the content is AI-generated.
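The provenance idea can be illustrated in a few lines. This is not C2PA itself (which defines a full signed-manifest format embedded in media files) but a toy sketch of the underlying mechanism: bind an origin claim to the exact bytes of an asset with a keyed signature, so any later edit to either the content or the claimed origin is detectable. The key and function names are illustrative:

```python
import hashlib
import hmac

# Toy provenance check: sign (content hash, origin claim) with a secret key.
# Real C2PA uses public-key certificates and signed manifests, not an HMAC,
# but the tamper-evidence property demonstrated here is the same.

SECRET_KEY = b"publisher-signing-key"  # stand-in for a real private key

def sign_asset(content: bytes, origin: str) -> str:
    """Return a provenance tag binding the content hash to an origin claim."""
    digest = hashlib.sha256(content).hexdigest()
    message = f"{digest}|{origin}".encode()
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

def verify_asset(content: bytes, origin: str, tag: str) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(sign_asset(content, origin), tag)

article = b"Breaking: example article body"
tag = sign_asset(article, "newsroom.example")
print(verify_asset(article, "newsroom.example", tag))           # True
print(verify_asset(article + b"!", "newsroom.example", tag))    # False: content edited
print(verify_asset(article, "imposter.example", tag))           # False: origin forged
```

Note what this does and does not buy you: provenance proves where content came from and that it was not altered, not that it is true. That is why the paper pairs it with inoculation and media-literacy approaches rather than treating it as a standalone fix.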

The Dual-Use Paradox

Perhaps the most striking insight is the dual-use nature of generative AI itself. The same technology that enables industrialized deception also offers our best tools for combating it. LLMs can generate misinformation at scale, but they can also help detect it. This paradox is central to the ongoing challenge and suggests that solutions must combine technical, regulatory, and educational approaches.

Read the Paper

The full paper is available on arXiv: Industrialized Deception: The Collateral Effects of LLM-Generated Misinformation on Digital Ecosystems. It will be presented at ACM TheWebConf ’26 in Dubai this April.

Both JudgeGPT and RogueGPT are open source. If you work in this space, we’d love to hear from you.

Citation: Loth, A. et al. (2026). Industrialized Deception: The Collateral Effects of LLM-Generated Misinformation on Digital Ecosystems. In Companion Proceedings of the ACM Web Conference 2026 (WWW Companion ’26), April 13-17, 2026, Dubai, UAE. DOI: 10.1145/3774905.3795471