Staff Posts

The Rising Threat of Deepfakes: Safeguarding Truth Ahead of the 2024 Elections

The rise of deepfakes—AI-generated images, videos, and audio that can convincingly mislead—poses a growing threat to our democratic processes by spreading disinformation and eroding public trust. As Election Day approaches, we must hold social media platforms accountable for detecting and labeling political deepfakes to protect the integrity of our elections.

Kenya Juarez, Oct 18, 2024

The rise of deepfakes is more than just a technological novelty; it represents a profound threat to how we consume information and make decisions, from our personal lives to our politics. Imagine scrolling through your social media feed and seeing a video of a prominent politician making shocking statements, only to discover later that it wasn’t real at all. This is the power of deepfakes—AI-generated images, videos, or audio so convincing they blur the line between truth and fiction. And as Election Day approaches, that blurring could disrupt one of our most fundamental democratic processes.

Deepfakes aren’t just funny filters or harmless pranks. They’ve evolved into sophisticated tools used to deceive, mislead, and manipulate. One of the most chilling aspects is their potential to undermine public trust in our electoral system. Picture a candidate being falsely depicted in a compromising position or a video going viral showing election officials doing something they never did. The results can be disastrous. Voters are misled, trust in our institutions erodes, and the democratic process itself is jeopardized.

This issue isn’t speculative—it’s happening now. Since January, more than ten viral deepfakes related to the 2024 U.S. election have reached over 140 million viewers. These deceptive pieces of content range from AI-generated videos of Vice President Kamala Harris discussing deeply personal topics to fake images of President Trump wading through floodwaters after a hurricane. Some are labeled, but many aren’t, making it harder for voters to distinguish fact from fiction. View the examples on our website. 

As Election Day nears, the threat posed by these deepfakes grows. That’s why Accountable Tech is leading a coalition of 40 groups demanding social media companies take immediate action through our No Deepfakes for Democracy campaign. The coalition’s demands are clear:

  • Create better systems to detect and moderate deepfakes
  • Label all political AI-generated content
  • Work with researchers to monitor the spread of these deceptive videos and images

Despite the urgency, many tech companies—the same companies that were instrumental in developing the very tools that make these deepfakes possible—have been slow to act. Instead of taking responsibility, they’ve allowed AI-generated propaganda to spread unchecked, with millions of users unknowingly consuming and spreading misinformation.

Think about it: what happens when voters can no longer trust what they see or hear? What happens when anyone can dismiss a real video, audio clip, or image as fake simply because deepfakes have become so believable? It’s a tactic we’ve already seen political figures use. President Trump, for instance, has falsely claimed that Vice President Kamala Harris used AI to inflate crowd sizes at her rallies. This phenomenon, known as the “liar’s dividend,” allows people to dismiss real, legitimate content as deepfakes, further eroding public trust.

The stakes have never been higher. With billions of people set to vote globally in 2024, protecting the integrity of our information ecosystem is crucial. The spread of deepfakes doesn’t just threaten election results—it undermines the very fabric of our democracy. And while social media platforms continue to profit from this chaotic landscape, it’s up to us to demand change before it’s too late—Big Tech companies must stop the spread of dangerous deepfakes.

This moment is a turning point. The fight against AI-powered disinformation is a fight for truth itself. We must push for stronger regulations, more transparency, and a commitment from tech companies to prioritize truth over profit. If we fail to act, the future of our democracy could be decided not by informed voters but by those who wield the most convincing lies.

More Staff Posts

Glints of Gratitude: AT’s Biggest Moments in 2024
Robbie Dornbush, Dec 19, 2024

In 2024, Accountable Tech had a big year of campaigns to protect kids online, spotlight the real-world dangers of AI, defend democracy, and safeguard the privacy of abortion seekers.

Bluesky's the Limit
Nicole Gill, Dec 17, 2024

Elon Musk’s race to the bottom for social media companies may finally be turning around thanks to fresh competition from Bluesky.

The Future of Music and AI: A Conversation with Kevin Erickson
Giliann Karon, Dec 16, 2024

A discussion with Kevin Erickson from Future of Music Coalition about AI’s threats to musicians, the gatekeeping status of streaming platforms, and recommendations for lawmakers, regulators, and AI developers.

Join the fight to rein in Big Tech.

Big Tech companies are some of the most powerful and profitable companies in history, presenting new threats to the safety of communities and the health of democracy. We’re taking them on through legislation, regulation and direct advocacy.