AI-generated CSAM reports surged 1,325% but 78% are mislabeled, masking the real threat and misallocating NCMEC resources

NCMEC's CyberTipline received 67,000 generative-AI-flagged reports in 2024 (up from 4,700 in 2023) and over 1 million in the first nine months of 2025, but at least 78% of those reports did not involve any AI-generated CSAM at all -- Amazon's 380,000 reports, for example, were all hash hits to known CSAM, not synthetic material. Outside of Amazon's bulk reports, novel AI-generated CSAM is arriving in 'really, really small volumes,' yet the inflated statistics are driving panic-based policy. Why it matters: Mislabeled reports inflate the perceived scale of AI-generated CSAM and consume NCMEC's triage capacity with false positives. As a result, genuinely novel AI-generated abuse material -- which evades traditional hash-based detection tools like PhotoDNA and CSAI Match -- goes undetected longer, child victims of synthetic abuse receive delayed interventions, and the entire child safety ecosystem loses credibility when researchers such as Stanford's Internet Observatory publicly debunk the statistics. The structural root cause is that the CyberTipline's 'Generative AI' checkbox conflates multiple unrelated scenarios -- AI-generated content, known CSAM found in AI training data, and AI-assisted detection -- into a single undifferentiated category, and no technical standard requires reporters to distinguish between them.
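The checkbox conflation described above can be made concrete with a small sketch. The schema below is hypothetical (NCMEC's actual CyberTipline fields are not public in this form): it shows how replacing a single boolean flag with an explicit involvement category plus a hash-match field would let triage route only genuinely novel synthetic material to analysts.

```python
from dataclasses import dataclass
from enum import Enum, auto

class AIInvolvement(Enum):
    """Hypothetical categories the single 'Generative AI' checkbox conflates."""
    AI_GENERATED_CONTENT = auto()         # novel synthetic material
    KNOWN_CSAM_IN_TRAINING_DATA = auto()  # hash hit; AI only incidental
    AI_ASSISTED_DETECTION = auto()        # AI used in the reporter's classifier

@dataclass
class Report:
    report_id: str
    hash_match: bool            # matched a known-CSAM hash list (e.g. PhotoDNA)
    involvement: AIInvolvement

def needs_novel_content_triage(report: Report) -> bool:
    """Route only genuinely novel synthetic material to analyst triage.

    Hash matches to known CSAM follow the existing, well-understood
    pipeline regardless of how the reporter's AI flag was set.
    """
    return (report.involvement is AIInvolvement.AI_GENERATED_CONTENT
            and not report.hash_match)
```

Under this (assumed) schema, Amazon's 380,000 hash-hit reports would never reach the novel-content queue, so the queue size itself would approximate the true volume of synthetic material.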

Evidence

NCMEC CyberTipline data shows 4,700 AI-flagged reports in 2023 rising to 67,000 in 2024 (1,325% increase) and over 1 million from Jan-Sep 2025. Stanford Internet Observatory's February 2026 analysis found 78% of first-half 2025 AI-flagged reports contained no AI-generated material. Amazon accounted for 380,000 reports, all hash matches to known CSAM. Techdirt reported in February 2026 that 'Six Months of AI CSAM Crisis Headlines Were Based on Misleading Data.' Sources: NCMEC CyberTipline Data (missingkids.org), Stanford Cyberlaw (cyberlaw.stanford.edu), Techdirt (February 2, 2026).
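The headline figures are easy to verify from the raw counts quoted above. A quick sanity check (the exact increase is about 1,325.5%, reported as 1,325%):

```python
# Sanity-check the figures quoted in the evidence section.
reports_2023 = 4_700
reports_2024 = 67_000

increase_pct = (reports_2024 - reports_2023) / reports_2023 * 100
print(f"2023 -> 2024 increase: {increase_pct:.1f}%")  # ~1,325.5%

# Stanford's finding: 78% of AI-flagged H1-2025 reports contained no
# AI-generated material, so at most 22% could involve synthetic CSAM.
mislabeled_share = 0.78
genuine_upper_bound = 1 - mislabeled_share
print(f"Upper bound on genuine AI-CSAM share: {genuine_upper_bound:.0%}")
```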
