Democracy vs. Artificial Intelligence

Democracy is in jeopardy, particularly with the rise of AI, and platforms have a responsibility to protect democratic integrity and public trust in elections.

Jul 31, 2023

In the early 2000s, when wide-eyed internet users first engaged with social media, there was an air of optimism—a belief that technology would level the playing field, empower individuals, and foster a more inclusive and participatory democracy. Call it naivety if you'd like, but I don't think any of us expected that the college student who accidentally stumbled into building an online social network, while drunkenly hacking into Harvard's network to compare pictures of girls, would eventually become one of the most influential decision-makers of our time.

A sobering reality hit when we realized just how much power these platforms have over the real world. In 2018, the U.N. found that Facebook played a "determining role" in the Rohingya genocide by serving as the primary information source for the military's propaganda machine. Only two years later, Mark Zuckerberg called for lawmakers in the U.S. to pass stricter regulation of "harmful content, election integrity, privacy and data portability," all while turning around to spend millions of dollars lobbying against said regulation. Across the world, we saw a global pandemic break out and misinformation about COVID-19 spread as rapidly as, if not faster than, the virus itself, with dire consequences. In elections from Brazil to the U.S., we witnessed election cycles littered with disinformation and misinformation wielded as tools of demagoguery by far-right opportunists. Now, while the media swoons over AI leaders calling for regulation, we need to approach these calls as the same PR stunts that other Big Tech CEOs have used to evade accountability.

We now better understand social media's toxic business model. We've witnessed the way algorithms create echo chambers, reinforcing existing biases and deepening divisions, and how disinformation campaigns manipulate public opinion, posing a threat to the very fabric of our democracies. What might have happened if these platforms had implemented simple design principles, like circuit breakers to halt fast-spreading disinformation for human review? Or strike systems that would lessen the impact of repeat violators? Now, in a time when technology is evolving at an unprecedented speed, it's essential that we learn the lessons from our previous collective failures to regulate tech.

Artificial intelligence is here, and our democracy is at risk again. That is, unless we move quickly and decisively to regulate Big Tech. Having grappled with the erosion of privacy and the concentration of power in the hands of a few, we know social media platforms need to act quickly to put safeguards in place, but we also know they are not incentivized to do so without significant outside pressure. We are a year away from 65 elections taking place around the world, yet these platforms are underinvesting in trust and safety resources.

Beyond just resources, the platforms have started rolling back policies such as the prohibition on perpetuating the "Big Lie," the massive conspiracy still falsely claiming Donald Trump's victory in the 2020 election. After the 2020 election results went public, Trump spread disinformation by posting a photo of a dumpster filled with "mail-in ballots," even though a simple reverse image search could identify that the source of the photo was entirely unrelated to the elections. Without the right safeguards in place, we don't know what kind of AI-generated disinformation will go viral next or how people will be able to effectively fact-check it.

Whether we want to admit it or not, the harms of these tech giants have impacted every aspect of our democracy. But in response, we have witnessed a growing movement for change. Advocates, policymakers, and everyday people are calling for greater regulation, increased oversight, and the establishment of platform policies that will protect our rights and our democracy. 

Earlier this month, Accountable Tech hosted a panel at Netroots Nation on this issue and attended other important panels on the topic of disinformation on platforms. We are proud to partner with experts from across civil society to build on the principles from our 2020 Election Integrity Roadmap ahead of 2024. Stay tuned! More to come.
