Rabbit Hole

See what people are claiming online - rumors, conspiracy theories, and questionable content.

What is this?

We collect stories that are spreading online - things like conspiracy theories, fake videos, and propaganda - so you can see what's being said and make up your own mind about what's real.

Propaganda

Agenda-pushing content

Rumor

Unverified theories

Deepfake

AI-generated fakes

Influence Ops

Organized campaigns

Evidence = How much proof exists
Mystery = How intriguing it is
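
Every entry below follows the same shape: a category tag, a title, a short summary, the two 1-5 scores, and a date. The site's actual schema isn't shown anywhere on this page, so the following is only a minimal sketch of how such a record could be represented, with hypothetical field names:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Entry:
    """One feed item; 'evidence' and 'mystery' are the 1-5 scores shown with each story."""
    category: str        # "Propaganda", "Rumor", "Deepfake", or "Influence Ops"
    title: str
    summary: str
    evidence: int        # 1-5: how much proof exists
    mystery: int         # 1-5: how intriguing it is
    published: date
    verified: bool = True  # entries tagged "unverified" would set this to False

example = Entry(
    category="Deepfake",
    title="Another AI-generated video of Donald Trump criticising Keir Starmer circulates online",
    summary="The video appears to show Donald Trump telling Keir Starmer to concentrate on governing the UK. But it isn't real.",
    evidence=4,
    mystery=2,
    published=date(2026, 1, 31),
)
```
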
Deepfake

Another AI-generated video of Donald Trump criticising Keir Starmer circulates online

The video appears to show Donald Trump telling Keir Starmer to concentrate on governing the UK. But it isn’t real.

Evidence 4/5
Mystery 2/5
Jan 31, 2026 2
Deepfake

AI-enhanced image of Minneapolis shooting shared online

A still image taken from a real video of the shooting of Alex Pretti has been enhanced with artificial intelligence, producing a picture in which an agent kneeling on the ground appears to be missing a head.

Evidence 4/5
Mystery 3/5
Jan 31, 2026 2
Deepfake

Industrialized Deception: The Collateral Effects of LLM-Generated Misinformation on Digital Ecosystems

Generative AI and misinformation research has evolved since our 2024 survey. This paper presents an updated perspective, transitioning from literature review to practical countermeasures. We report on changes in the threat landscape, including improved AI-generated content through Large Language Models (LLMs) and multimodal systems. Central to this work are our practical contributions: JudgeGPT, a platform for evaluating human perception of AI-generated news, and RogueGPT, a controlled stimul...

Evidence 2/5
Mystery 3/5
Jan 31, 2026 2
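
The paper above describes JudgeGPT, a platform for measuring how people perceive AI-generated news. The platform itself isn't reproduced here; a minimal sketch of that kind of perception study, assuming a simple rater loop and made-up headlines, might look like this:

```python
import random

# Hypothetical rating items: (headline, true_source). A real study would draw
# human-written and LLM-generated news from curated corpora.
ITEMS = [
    ("City council approves new transit budget after long debate", "human"),
    ("Scientists reveal moon base secretly operating since 2019", "ai"),
]

def run_session(items, ask):
    """Show each headline in random order, ask the rater whether it reads
    human- or AI-written, and score agreement with the true source."""
    order = random.sample(items, len(items))
    correct = 0
    for headline, source in order:
        guess = ask(headline)  # expected to return "human" or "ai"
        correct += (guess == source)
    return correct / len(order)

if __name__ == "__main__":
    accuracy = run_session(ITEMS, ask=lambda h: input(f"{h}\nhuman or ai? ").strip().lower())
    print(f"Rater accuracy: {accuracy:.0%}")
```
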
Deepfake

A Marketplace for AI-Generated Adult Content and Deepfakes

Generative AI systems increasingly enable the production of highly realistic synthetic media. Civitai, a popular community-driven platform for AI-generated content, operates a monetized feature called Bounties, which allows users to commission the generation of content in exchange for payment. To examine how this mechanism is used and what content it incentivizes, we conduct a longitudinal analysis of all publicly available bounty requests collected over a 14-month period following the platfo...

Evidence 2/5
Mystery 3/5
Jan 31, 2026 2
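
The study above is a longitudinal analysis of bounty requests on Civitai. Its actual pipeline and category labels aren't given in the excerpt; purely as an illustration, a monthly aggregation over scraped bounty records, with a simple keyword flag standing in for real content classification, could be sketched like this:

```python
from collections import Counter
from datetime import datetime

# Hypothetical bounty records: (timestamp, request text).
bounties = [
    ("2024-03-02T10:15:00", "anime style character portrait"),
    ("2024-03-09T18:40:00", "realistic photo of a specific celebrity"),
]

FLAG_TERMS = ("celebrity", "real person", "deepfake")  # illustrative keyword flags only

def monthly_flag_counts(records):
    """Count flagged vs. total bounty requests per calendar month."""
    totals, flagged = Counter(), Counter()
    for ts, text in records:
        month = datetime.fromisoformat(ts).strftime("%Y-%m")
        totals[month] += 1
        if any(term in text.lower() for term in FLAG_TERMS):
            flagged[month] += 1
    return {m: (flagged[m], totals[m]) for m in sorted(totals)}

print(monthly_flag_counts(bounties))
```
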
Deepfake

Explainable Deepfake Detection with RL Enhanced Self-Blended Images

Most prior deepfake detection methods lack explainable outputs. With the growing interest in multimodal large language models (MLLMs), researchers have started exploring their use in interpretable deepfake detection. However, a major obstacle in applying MLLMs to this task is the scarcity of high-quality datasets with detailed forgery attribution annotations, as textual annotation is both costly and challenging - particularly for high-fidelity forged images or videos. Moreover, multiple studi...

Evidence 2/5
Mystery 3/5
Jan 31, 2026 2
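
The paper above builds on self-blended images (SBI): pseudo-fakes made by blending a face image with a lightly transformed copy of itself, so a detector learns to spot blending artifacts without needing real forgeries. Here is a minimal sketch of just the blending step, assuming a precomputed face-region mask; the paper's RL-driven augmentation and annotation pipeline is not reproduced:

```python
import numpy as np
from PIL import Image, ImageEnhance, ImageFilter

def self_blend(image: Image.Image, face_mask: np.ndarray) -> Image.Image:
    """Create a pseudo-fake by blending a color-shifted copy of the image
    back onto itself inside a softened face-region mask."""
    # Lightly transform a copy of the source image (stand-in for SBI's augmentations).
    altered = ImageEnhance.Color(image).enhance(1.3)
    altered = ImageEnhance.Brightness(altered).enhance(0.9)

    # Soften the binary face mask so the blend boundary leaves subtle artifacts.
    mask = Image.fromarray((face_mask * 255).astype(np.uint8)).filter(
        ImageFilter.GaussianBlur(radius=8)
    )

    # Alpha-blend: altered pixels inside the mask, original pixels outside.
    return Image.composite(altered, image, mask)

# Usage: face_mask is a hypothetical HxW array with 1s over the face region.
# pseudo_fake = self_blend(Image.open("face.jpg"), face_mask)
```
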
Deepfake

Revealing the Truth with ConLLM for Detecting Multi-Modal Deepfakes

The rapid rise of deepfake technology poses a severe threat to social and political stability by enabling hyper-realistic synthetic media capable of manipulating public perception. However, existing detection methods struggle with two core limitations: (1) modality fragmentation, which leads to poor generalization across diverse and adversarial deepfake modalities; and (2) shallow inter-modal reasoning, resulting in limited detection of fine-grained semantic inconsistencies. To address these,...

Evidence 2/5
Mystery 3/5
Jan 31, 2026 3
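
The paper above targets deepfakes that span several modalities at once. Its ConLLM architecture isn't detailed in the excerpt; as a point of contrast, the simplest multi-modal baseline it aims to improve on is plain late fusion of per-modality embeddings, sketched here with random tensors standing in for real encoder outputs:

```python
import torch
import torch.nn as nn

class LateFusionDetector(nn.Module):
    """Baseline multi-modal detector: encode each modality separately,
    concatenate the embeddings, and classify real vs. fake."""
    def __init__(self, video_dim=512, audio_dim=256, text_dim=384):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(video_dim + audio_dim + text_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 2),  # logits: [real, fake]
        )

    def forward(self, video_emb, audio_emb, text_emb):
        fused = torch.cat([video_emb, audio_emb, text_emb], dim=-1)
        return self.classifier(fused)

# Usage with placeholder embeddings.
detector = LateFusionDetector()
logits = detector(torch.randn(4, 512), torch.randn(4, 256), torch.randn(4, 384))
print(logits.shape)  # torch.Size([4, 2])
```
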
Deepfake

Agentic AI Microservice Framework for Deepfake and Document Fraud Detection in KYC Pipelines

The rapid proliferation of synthetic media, presentation attacks, and document forgeries has created significant vulnerabilities in Know Your Customer (KYC) workflows across financial services, telecommunications, and digital-identity ecosystems. Traditional monolithic KYC systems lack the scalability and agility required to counter adaptive fraud. This paper proposes an Agentic AI Microservice Framework that integrates modular vision models, liveness assessment, deepfake detection, OCR-based...

Evidence 4/5
Mystery 2/5
Jan 31, 2026 3
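
The KYC paper above proposes modular services (liveness assessment, deepfake detection, document OCR) coordinated by an agentic layer. The actual framework isn't shown; a toy orchestration of that shape, with hypothetical check functions in place of real microservices, could look like this:

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class CheckResult:
    passed: bool
    score: float
    detail: str

def run_kyc(case: dict, checks: Dict[str, Callable[[dict], CheckResult]]) -> dict:
    """Run each modular check on the KYC case and aggregate a decision.
    Any failed check escalates the case to manual review."""
    results = {name: check(case) for name, check in checks.items()}
    decision = "approve" if all(r.passed for r in results.values()) else "manual_review"
    return {"decision": decision, "results": results}

# Hypothetical service stubs; a real deployment would call the respective microservices.
checks = {
    "liveness": lambda c: CheckResult(True, 0.97, "live subject detected"),
    "deepfake": lambda c: CheckResult(True, 0.05, "low synthetic-media probability"),
    "document_ocr": lambda c: CheckResult(True, 0.99, "ID fields match application"),
}

print(run_kyc({"applicant_id": "demo"}, checks))
```
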
Deepfake unverified

Audio Deepfake Detection at the First Greeting: "Hi!"

This paper focuses on audio deepfake detection under real-world communication degradations, with an emphasis on ultra-short inputs (0.5-2.0s), targeting the capability to detect synthetic speech at a conversation opening, e.g., when a scammer says "Hi." We propose Short-MGAA (S-MGAA), a novel lightweight extension of Multi-Granularity Adaptive Time-Frequency Attention, designed to enhance discriminative representation learning for short, degraded inputs subjected to communication processing a...

Evidence 2/5
Mystery 3/5
Jan 30, 2026 18
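
The paper above (S-MGAA) targets ultra-short, telephone-degraded audio. Its model isn't reproduced here, but the kind of preprocessing it implies, trimming to the opening 0.5-2.0 s, simulating narrowband telephony, and extracting log-mel features for a lightweight classifier, can be sketched as follows (librosa assumed available):

```python
import librosa
import numpy as np

def greeting_features(path: str, max_seconds: float = 1.0) -> np.ndarray:
    """Load a call recording, keep only the opening moment (e.g. the "Hi"),
    simulate telephone-band degradation, and return log-mel features."""
    y, sr = librosa.load(path, sr=16000, duration=max_seconds)

    # Crude telephony simulation: downsample to 8 kHz and back (band-limits the signal).
    y = librosa.resample(y, orig_sr=sr, target_sr=8000)
    y = librosa.resample(y, orig_sr=8000, target_sr=sr)

    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64)
    return librosa.power_to_db(mel, ref=np.max)

# feats = greeting_features("call_opening.wav")  # shape (64, n_frames); feed to a small classifier
```
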
Deepfake unverified

MARE: Multimodal Alignment and Reinforcement for Explainable Deepfake Detection via Vision-Language Models

Deepfake detection is a widely researched topic that is crucial for combating the spread of malicious content, with existing methods mainly modeling the problem as classification or spatial localization. The rapid advancements in generative models impose new demands on Deepfake detection. In this paper, we propose multimodal alignment and reinforcement for explainable Deepfake detection via vision-language models, termed MARE, which aims to enhance the accuracy and reliability of Vision-Langu...

Evidence 2/5
Mystery 3/5
Jan 30, 2026 5
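
The paper above uses vision-language models to make deepfake verdicts explainable. MARE itself isn't shown in the excerpt; a generic sketch of asking a VLM for a verdict plus a rationale, with the actual model client left as a placeholder, might look like this:

```python
import json
from typing import Callable

PROMPT = (
    "You are checking whether this face image is real or manipulated. "
    'Answer in JSON: {"verdict": "real" or "fake", "explanation": "..."}'
)

def explainable_check(image_bytes: bytes, query_vlm: Callable[[bytes, str], str]) -> dict:
    """Ask a vision-language model for a verdict plus a textual rationale.
    `query_vlm` is a placeholder for whatever VLM client is actually used."""
    raw = query_vlm(image_bytes, PROMPT)
    try:
        result = json.loads(raw)
    except json.JSONDecodeError:
        result = {"verdict": "unknown", "explanation": raw}
    return result

# One plausible reading of the abstract: in training, the parsed verdict is compared
# against the ground-truth label to produce a reward for reinforcement-style tuning.
```
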
Deepfake unverified

Audio Deepfake Detection in the Age of Advanced Text-to-Speech models

Recent advances in Text-to-Speech (TTS) systems have substantially increased the realism of synthetic speech, raising new challenges for audio deepfake detection. This work presents a comparative evaluation of three state-of-the-art TTS models--Dia2, Maya1, and MeloTTS--representing streaming, LLM-based, and non-autoregressive architectures. A corpus of 12,000 synthetic audio samples was generated using the Daily-Dialog dataset and evaluated against four detection frameworks, including semant...

Evidence 2/5
Mystery 3/5
Jan 30, 2026 4
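
The comparative study above scores detection frameworks on audio from several TTS systems. A standard metric for that kind of evaluation is the equal error rate (EER); here is a small sketch of computing it from detector scores (the paper's own frameworks and results are not reproduced, and the numbers below are made up):

```python
import numpy as np
from sklearn.metrics import roc_curve

def equal_error_rate(labels, scores):
    """EER: the operating point where false-accept and false-reject rates meet.
    `labels` are 1 for bona fide (real) speech, 0 for synthetic; `scores` are
    the detector's 'real-ness' scores."""
    fpr, tpr, _ = roc_curve(labels, scores)
    fnr = 1 - tpr
    idx = np.nanargmin(np.abs(fnr - fpr))
    return (fpr[idx] + fnr[idx]) / 2

# Toy example: three real utterances and three synthetic ones.
labels = np.array([1, 1, 1, 0, 0, 0])
scores = np.array([0.9, 0.8, 0.6, 0.4, 0.3, 0.7])
print(f"EER = {equal_error_rate(labels, scores):.2f}")
```
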