
NEW: All Major Social Media Platforms Are Failing on Election Preparedness Policies

For Immediate Release
March 5, 2024
Contact: press@accountabletech.org

Accountable Tech released a new analysis today that found ten major social media platforms – Facebook, Instagram, Threads, YouTube, TikTok, Snapchat, Discord, LinkedIn, Nextdoor, and X (formerly Twitter) – are failing on basic election preparedness policies. Researchers measured the extent to which these platforms’ policies meet recommendations made in Accountable Tech’s Democracy by Design roadmap – actionable, high-impact, and content-agnostic steps to protect the integrity of elections.

“With an estimated 2 billion voters around the world heading to the polls this year, social media companies have a responsibility to protect the integrity of our elections and ensure that their platforms are not harnessed by bad actors to undermine trust in elections,” said Nicole Gill, Executive Director of Accountable Tech. “It’s past time for platforms to put democracy above profits and implement readily actionable, common-sense changes.”

READ THE SCORECARD

Accountable Tech’s scorecard judges platforms’ election preparedness policies against the three major planks of the Democracy by Design framework: 1) bolstering resilience, which focuses on ‘soft interventions’ that introduce targeted friction and context to mitigate harm; 2) countering election manipulation, which outlines bulwarks against evolving threats posed by malign actors and automated systems; and 3) paper trails, which highlights key transparency measures needed to assess systemic threats, evaluate the efficacy of interventions, and foster trust.

Specific findings include:

  • Insufficient guardrails to stop the spread of manipulated content depicting public figures, like deepfakes: Just 20% of platforms – TikTok and Snapchat – have policies on the books that would prohibit deceptively manipulated media of public figures. That means the vast majority of these platforms do not prohibit deepfake videos or manipulated images depicting candidates doing or saying things they never did, or other deceptive depictions of candidates, election officials, and other government figures – content that is already proliferating as elections heat up.

  • Platform features enable AI-generated political ads to be micro-targeted to voters: Nearly every social media platform that allows political advertising fails to explicitly prohibit micro-targeting AI-generated ads to voters.

  • Lack of transparency on the performance of and engagement with election-related content: No platform provides transparent access to data on the highest-performing and highest-engagement election-related posts, advertisements, accounts, links, and groups. That means voters, independent researchers, and election officials are left in the dark about how election-related information spreads across platforms.

  • Insufficient “friction” to stop the spread of misleading election information: A majority of platforms do not have policies in place to put posts containing misleading or unverified election information behind click-through warning labels that provide clear context and facts. Without these labels, election misinformation can spread more quickly and magnify threats.

  • A lack of transparency, including opacity around policy enforcement and safety teams: Some platforms, like Meta, which have previously come under intense scrutiny for their role in amplifying electoral disinformation narratives, have numerous policies on the books, but it’s impossible to know how those policies are being enforced. Platforms have wide latitude when it comes to enforcement, and there is reason for skepticism that they meaningfully follow through. That concern is compounded by industry-wide layoffs and cuts to election integrity and safety teams – including the complete dismantling of X’s election integrity team.

For full results and methodology, please see here.

###
