Rabbit Hole

See what people are claiming online - rumors, conspiracy theories, and questionable content.

What is this?

We collect stories that are spreading online - things like conspiracy theories, fake videos, and propaganda - so you can see what's being said and make up your own mind about what's real.

Propaganda: Agenda-pushing content
Rumor: Unverified theories
Deepfake: AI-generated fakes
Influence Ops: Organized campaigns

Evidence = How much proof exists
Mystery = How intriguing it is
Deepfake

Are Black Panthers protesting in 2026? Look out for these misleading images

We pieced together AI-generated, doctored, outdated or misrepresented images claiming to document a resurgence of the dissolved Black Panther Party.

Evidence 4/5
Mystery 2/5
Feb 03, 2026
Deepfake

Detecting AI-Generated Content in Academic Peer Reviews

The growing availability of large language models (LLMs) has raised questions about their role in academic peer review. This study examines the temporal emergence of AI-generated content in peer reviews by applying a detection model trained on historical reviews to later review cycles at the International Conference on Learning Representations (ICLR) and Nature Communications (NC). We observe minimal detection of AI-generated content before 2022, followed by a substantial increase through 2025, w...

Evidence 2/5
Mystery 3/5
Feb 03, 2026
Deepfake

The Verification Crisis: Expert Perceptions of GenAI Disinformation and the Case for Reproducible Provenance

The growth of Generative Artificial Intelligence (GenAI) has shifted disinformation production from manual fabrication to automated, large-scale manipulation. This article presents findings from the first wave of a longitudinal expert perception survey (N=21) involving AI researchers, policymakers, and disinformation specialists. It examines the perceived severity of multimodal threats -- text, image, audio, and video -- and evaluates current mitigation strategies. Results indicate that whi...

Evidence 2/5
Mystery 3/5
Feb 03, 2026
Deepfake

Witnessd: Proof-of-process via Adversarial Collapse

Digital signatures prove key possession, not authorship. An author who generates text with AI, constructs intermediate document states post-hoc, and signs each hash produces a signature chain indistinguishable from genuine composition. We address this gap between cryptographic integrity and process provenance. We introduce proof-of-process, a primitive category for evidence that a physical process, not merely a signing key, produced a digital artifact. Our construction, the jitter seal, injec...

Evidence 4/5
Mystery 4/5
Feb 03, 2026
Deepfake

Video of white dragon in China is AI-generated fantasy

Social media users have long shared images and videos said to show such fictional creatures.

Evidence 2/5
Mystery 1/5
Feb 02, 2026
Deepfake

Towards Explicit Acoustic Evidence Perception in Audio LLMs for Speech Deepfake Detection

Speech deepfake detection (SDD) focuses on identifying whether a given speech signal is genuine or has been synthetically generated. Existing audio large language model (LLM)-based methods excel in content understanding; however, their predictions are often biased toward semantically correlated cues, which results in fine-grained acoustic artifacts being overlooked during the decision-making process. Consequently, fake speech with natural semantics can bypass detectors despite harboring subtle...

Evidence 2/5
Mystery 3/5
Feb 02, 2026
Deepfake

Do these videos show alligators nabbing rotisserie chickens at Walmart stores in Florida?

Several versions of the same story originated from a page that said it creates "America's favorite AI videos."

Evidence 2/5
Mystery 2/5
Feb 02, 2026
Deepfake

AI-Driven Cybersecurity Threats: A Survey of Emerging Risks and Defensive Strategies

Artificial Intelligence's dual-use nature is revolutionizing the cybersecurity landscape, introducing new threats across four main categories: deepfakes and synthetic media, adversarial AI attacks, automated malware, and AI-powered social engineering. This paper aims to analyze emerging risks, attack mechanisms, and defense shortcomings related to AI in cybersecurity. We introduce a comparative taxonomy connecting AI capabilities with threat modalities and defenses, review over 70 academic an...

Evidence 2/5
Mystery 2/5
Feb 01, 2026
Deepfake

No, Donald Trump didn’t post that ‘only criminals carry guns.’ It’s a fake Truth Social post

The claim: Donald Trump posted on Truth Social, “Only criminals carry guns on our streets, we need law and order.”

Evidence 2/5
Mystery 1/5
Feb 01, 2026
Deepfake

DIVER: Dynamic Iterative Visual Evidence Reasoning for Multimodal Fake News Detection

Multimodal fake news detection is crucial for mitigating adversarial misinformation. Existing methods, relying on static fusion or LLMs, face computational redundancy and hallucination risks due to weak visual foundations. To address this, we propose DIVER (Dynamic Iterative Visual Evidence Reasoning), a framework grounded in a progressive, evidence-driven reasoning paradigm. DIVER first establishes a strong text-based baseline through language analysis, leveraging intra-modal consistency to ...

Evidence 2/5
Mystery 2/5
Feb 01, 2026
Deepfake

When Is Self-Disclosure Optimal? Incentives and Governance of AI-Generated Content

Generative artificial intelligence (Gen-AI) is reshaping content creation on digital platforms by reducing production costs and enabling scalable output of varying quality. In response, platforms have begun adopting disclosure policies that require creators to label AI-generated content, often supported by imperfect detection and penalties for non-compliance. This paper develops a formal model to study the economic implications of such disclosure regimes. We compare a non-disclosure benchmark...

Evidence 2/5
Mystery 2/5
Feb 01, 2026
Deepfake

Além do Desempenho: Um Estudo da Confiabilidade de Detectores de Deepfakes (Beyond Performance: A Study of the Reliability of Deepfake Detectors)

Deepfakes are synthetic media generated by artificial intelligence, with positive applications in education and creativity, but also serious negative impacts such as fraud, misinformation, and privacy violations. Although detection techniques have advanced, comprehensive evaluation methods that go beyond classification performance remain lacking. This paper proposes a reliability assessment framework based on four pillars: transferability, robustness, interpretability, and computational effic...

Evidence 2/5
Mystery 2/5
Feb 01, 2026
Deepfake

Robust Fake News Detection using Large Language Models under Adversarial Sentiment Attacks

Misinformation and fake news have become a pressing societal challenge, driving the need for reliable automated detection methods. Prior research has highlighted sentiment as an important signal in fake news detection, either by analyzing which sentiments are associated with fake news or by using sentiment and emotion features for classification. However, this poses a vulnerability, since adversaries can manipulate sentiment to evade detectors, especially with the advent of large language model...

Evidence 2/5
Mystery 3/5
Feb 01, 2026
Deepfake

Social media users spread AI-manipulated image of Alex Pretti holding gun

The claim: An image shows Alex Pretti holding a gun, not a phone, while pinned by federal immigration agents in Minneapolis.

Evidence 2/5
Mystery 2/5
Feb 01, 2026
Deepfake

The Paradigm Shift: A Comprehensive Survey on Large Vision Language Models for Multimodal Fake News Detection

In recent years, the rapid evolution of large vision-language models (LVLMs) has driven a paradigm shift in multimodal fake news detection (MFND), transforming it from traditional feature-engineering approaches to unified, end-to-end multimodal reasoning frameworks. Early methods primarily relied on shallow fusion techniques to capture correlations between text and images, but they struggled with high-level semantic understanding and complex cross-modal interactions. The emergence of LVLMs ha...

Evidence 2/5
Mystery 2/5
Feb 01, 2026
Deepfake

OnePiece: A Large-Scale Distributed Inference System with RDMA for Complex AI-Generated Content (AIGC) Workflows

The rapid growth of AI-generated content (AIGC) has enabled high-quality creative production across diverse domains, yet existing systems face critical inefficiencies in throughput, resource utilization, and scalability under concurrent workloads. This paper introduces OnePiece, a large-scale distributed inference system with RDMA optimized for multi-stage AIGC workflows. By decomposing pipelines into fine-grained microservices and leveraging one-sided RDMA communication, OnePiece significant...

Evidence 2/5
Mystery 2/5
Feb 01, 2026
Deepfake

MultiCaption: Detecting disinformation using multilingual visual claims

Online disinformation poses an escalating threat to society, driven increasingly by the rapid spread of misleading content across both multimedia and multilingual platforms. While automated fact-checking methods have advanced in recent years, their effectiveness remains constrained by the scarcity of datasets that reflect these real-world complexities. To address this gap, we first present MultiCaption, a new dataset specifically designed for detecting contradictions in visual claims. Pairs o...

Evidence 2/5
Mystery 2/5
Feb 01, 2026
Deepfake

Look out for image claiming to show Ilhan Omar with suspected attacker

The image was created, in part, using a photo from the suspect's Facebook page.

Evidence 2/5
Mystery 2/5
Feb 01, 2026
Deepfake

MS NOW shared AI-manipulated Alex Pretti photo on TV, website and YouTube. Here's what we know

An MS NOW spokesperson said the network used the image without knowing someone had digitally altered it.

Evidence 2/5
Mystery 3/5
Feb 01, 2026
Deepfake

Profiting From Exploitation: How We Found the Man Behind Two Deepfake Porn Sites

Content warning: This article contains descriptions of non-consensual sexual imagery. Depending on which of his social media profiles you were looking at, Mark Resan was either a marketing lead at Google or working for a dental implant company, a human resources company and a business software firm, all at the same time. But a […] (bellingcat)

Evidence 4/5
Mystery 3/5
Jan 31, 2026