AI's Harm By The Numbers
It doesn't look good for deepfakes...
Lawmakers are taking a hard look at deepfakes in the wake of Grok's nudification nightmare. But data suggests the deepfake problem exploded long before this latest wave.
Before 2023, autonomous vehicles, facial recognition, and content moderation algorithms were among the most frequently cited technologies in AI incidents. Since then, deepfake videos alone have outnumbered all three combined, even before counting photos and voice synthesis. The data is imperfect, but it's undeniable that these tools have been used to degrade women and to scam people and businesses out of millions.
You might have heard that focusing on AI's catastrophic risks distracts from present harms. Deepfakes show why that's a false choice. A few years ago, they barely registered in incident data. They became a major harm category because the technology improved. As AI gets more capable, new risks emerge.
In my latest for TIME, I spoke with Daniel Atherton from the AI Incident Database and Simon Mylius, who's built an LLM-powered tool to make incident data more actionable for lawmakers.

