The rise of deepfakes is more than just a technological novelty; it represents a profound threat to how we consume information and make decisions, from personal lives to politics. Imagine scrolling through your social media feed and seeing a video of a prominent politician making shocking statements, only to discover later that it wasn’t real at all. This is the power of deepfakes—AI-generated images, videos, or audio so convincing they blur the line between truth and fiction. And as Election Day approaches, this blurring threatens to disrupt one of our most fundamental democratic processes.
Deepfakes aren’t just funny filters or harmless pranks. They’ve evolved into sophisticated tools used to deceive, mislead, and manipulate. One of the most chilling aspects is their potential to undermine public trust in our electoral system. Picture a candidate being falsely depicted in a compromising position or a video going viral showing election officials doing something they never did. The results can be disastrous. Voters are misled, trust in our institutions erodes, and the democratic process itself is jeopardized.
This issue isn’t speculative—it’s happening now. Since January, more than ten viral deepfakes related to the 2024 U.S. election have reached over 140 million viewers. These deceptive pieces of content range from AI-generated videos of Vice President Kamala Harris discussing deeply personal topics to fake images of President Trump wading through floodwaters after a hurricane. Some are labeled, but many aren’t, making it harder for voters to distinguish fact from fiction. View the examples on our website.
As Election Day nears, the threat posed by these deepfakes grows. That’s why Accountable Tech is leading a coalition of 40 groups demanding social media companies take immediate action with our No Deepfakes for Democracy campaign. The coalition’s demands are clear:
- Create better systems to detect and moderate deepfakes
- Label all AI-generated political content
- Work with researchers to monitor the spread of these deceptive videos and images
Despite the urgency, many tech companies—who were instrumental in developing the very tools that make these deepfakes possible—have been slow to act. Instead of taking responsibility, they’ve allowed AI-generated propaganda to spread unchecked, with millions of users unknowingly consuming and spreading misinformation.
Think about it: what happens when voters can no longer trust what they see or hear? And what happens when anyone can dismiss a real video, audio clip, or image as fake because deepfakes have become so believable? It’s a tactic we’ve already seen political figures use. President Trump, for instance, has falsely claimed that Vice President Kamala Harris used AI to inflate crowd sizes at her rallies. This phenomenon, known as the “liar’s dividend,” allows people to dismiss real, legitimate content as deepfakes, further eroding public trust.
The stakes have never been higher. With billions of people set to vote globally in 2024, protecting the integrity of our information ecosystem is crucial. The spread of deepfakes doesn’t just threaten election results—it undermines the very fabric of our democracy. And while social media platforms continue to profit from this chaotic landscape, it’s up to us to demand change before it’s too late—Big Tech companies must stop the spread of dangerous deepfakes.
This moment is a turning point. The fight against AI-powered disinformation is a fight for truth itself. We must push for stronger regulations, more transparency, and a commitment from tech companies to prioritize truth over profit. If we fail to act, the future of our democracy could be decided not by informed voters but by those who wield the most convincing lies.