Facebook claims it doesn’t profit off hate. In reality, hate is a core feature of its business model.
Nearly all of Facebook’s revenue comes from ad sales. Facebook’s customers are advertisers, not users. And what makes Facebook so valuable to advertisers is the unprecedented precision of its targeting – which is fueled by mass surveillance and data mining. That’s where users come in.
Facebook deploys powerful curation and recommendation algorithms that decide what to serve each user in order to keep us engaged, and every time we interact with that content, we train its AI to better predict and manipulate our behavior. Nothing generates more engagement than outrageous content, so that’s what the AI amplifies.
These algorithms – designed to maximize profits – have mainstreamed extremist movements, created white supremacist echo chambers, promoted dangerous coronavirus hoaxes, and more.
Facebook says it brings the world closer together — but its algorithms fuel division and extremism. Facebook says it values free expression — but its AI chooses everything we see. Facebook says it gives everyone a voice — but it gives a bullhorn to hate and disinformation.
It’s time for Facebook to turn off the algorithms and let users choose whether to opt in.
PRACTICAL APPLICATION AND FURTHER RECOMMENDATIONS
Rethinking how content-shaping algorithms can best serve users and the public good will take time, but mitigating the harms of the status quo is urgent. That’s why we’re calling on Facebook to swiftly implement this stopgap, which would make an immediate positive impact, without taking away any tools from users.
Practical application:
- News Feed would be reverse-chronological by default. The “see first” tool would still allow users to ensure they never miss posts from specific Pages. The Voting Information Center would still appear at the top of the app. Efforts to limit the spread of harmful content would not be affected. Users could opt back into the algorithmic News Feed at any time. (A simplified sketch of this default follows this list; see further recommendations for expanding user customization below.)
- Page and Group recommendations would be disabled by default. These tools have proven particularly dangerous, repeatedly pushing people down rabbit holes and into extreme echo chambers to keep them engaged. Again, users could choose to turn these functions back on if they so desired.
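To make the stopgap concrete, here is a minimal sketch of how such a default could work, assuming a simplified feed model; the class names, fields, and opt-in flag are hypothetical illustrations, not Facebook’s actual systems:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Post:
    author: str
    created_at: datetime
    relevancy_score: float = 0.0   # platform-assigned ranking score (ignored unless the user opts in)

@dataclass
class UserSettings:
    algorithmic_feed_opt_in: bool = False        # off by default: the stopgap
    see_first: set = field(default_factory=set)  # Pages the user never wants to miss

def build_feed(posts, settings):
    """Reverse-chronological by default; algorithmic ranking only if the user opts in.
    'See first' posts are pinned to the top either way."""
    if settings.algorithmic_feed_opt_in:
        ordered = sorted(posts, key=lambda p: p.relevancy_score, reverse=True)
    else:
        ordered = sorted(posts, key=lambda p: p.created_at, reverse=True)
    pinned = [p for p in ordered if p.author in settings.see_first]
    rest = [p for p in ordered if p.author not in settings.see_first]
    return pinned + rest
```

The point of the design is simply that engagement-based ranking becomes an explicit, reversible user choice rather than the default.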
Further recommendations for improving content-shaping algorithms:
- User Customization. Facebook acknowledges that users want to “better understand and more easily control what you see” in News Feed, yet offers limited ability to do so. Facebook even makes clear that it will automatically revert users to the algorithmic News Feed, whether they want that or not.
- Facebook should provide additional customization options – at a minimum, by allowing users to weight the underlying signals Facebook uses to assign ‘relevancy scores’ in the algorithmic News Feed. Users should be able to prioritize other factors in their News Feeds and recommendations as well, such as authoritative news sources (determined by a third-party arbiter like NewsGuard), diversity of viewpoints, and more. (A rough sketch of user-weighted scoring follows below.)
- The MIT Media Lab created a customizable news aggregator called Gobo – and conducted a pilot test with encouraging results – which gives users a set of sliders to filter social media posts. Gobo works with Twitter, but Facebook’s restrictions render it nonfunctional for Facebook content; as the developers made clear, their goal was to encourage companies like Facebook to embrace similar user customization controls.
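As a rough sketch of what user-weighted scoring could look like (the signal names, default weights, and formula here are hypothetical illustrations, not Facebook’s actual ranking inputs), sliders could map directly onto the weights of a simple linear relevancy score:

```python
# Hypothetical signals and default weights, for illustration only.
DEFAULT_WEIGHTS = {
    "predicted_engagement": 1.0,   # roughly what engagement-optimized feeds maximize
    "recency": 1.0,
    "source_authority": 0.0,       # e.g. a NewsGuard-style rating, not prioritized by default
    "viewpoint_diversity": 0.0,
}

def relevancy_score(post_signals, user_weights=None):
    """Combine per-post signals (each normalized to 0..1) using weights the user
    sets via sliders, instead of weights fixed solely by the platform."""
    weights = {**DEFAULT_WEIGHTS, **(user_weights or {})}
    return sum(weights[name] * post_signals.get(name, 0.0) for name in weights)

# Example: a user who turns engagement-chasing down and source authority up.
post = {"predicted_engagement": 0.9, "recency": 0.4,
        "source_authority": 0.8, "viewpoint_diversity": 0.6}
print(relevancy_score(post, {"predicted_engagement": 0.2, "source_authority": 1.0}))
```

The same slider values could be reused across News Feed and recommendations, so a user’s preferences apply consistently.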
- Healthier Discourse. Facebook should commit to fostering healthier discourse, and begin working with independent experts to define specific goals, adjust algorithms accordingly, and develop relevant output metrics to quantify its progress as changes are implemented. (A toy illustration of quantifying one such indicator follows the list below.)
- Twitter announced such an initiative in 2018 in partnership with Cortico, a non-profit that’s outlined a good framework of health indicators:
- Shared Attention: Is there overlap in what we are talking about?
- Shared Reality: Are we using the same facts?
- Variety: Are we exposed to different opinions grounded in shared reality?
- Receptivity: Are we open, civil, and listening to different opinions?
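Cortico frames these indicators as questions rather than formulas, so any quantification involves design choices. Purely as a hypothetical illustration (not Cortico’s or Facebook’s methodology), a “shared attention”-style metric could be approximated as the overlap between the topic sets two communities are discussing:

```python
def shared_attention(topics_a, topics_b):
    """Toy 'shared attention' indicator: Jaccard overlap between the topic sets
    two communities are discussing. 1.0 means identical agendas; 0.0 means none."""
    if not topics_a and not topics_b:
        return 1.0
    return len(topics_a & topics_b) / len(topics_a | topics_b)

# Example: two communities that share only part of their trending-topic lists.
print(shared_attention({"election", "vaccines", "economy"},
                       {"election", "sports", "economy"}))  # prints 0.5
```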
- Algorithmic Transparency. Facebook should increase algorithmic transparency to the greatest extent possible without compromising trade secrets or privacy. It should also publish regular transparency reports detailing the impact of its algorithmic decision-making.
- Facebook’s content-shaping algorithms are so opaque that their impact can’t be quantitatively evaluated – but the company’s specific claims have repeatedly been undermined by evidence.
- Facebook has said it is promoting CDC and WHO content amid the pandemic and demoting disinformation, but an Institute for Strategic Dialogue and BBC investigation found that known disinformation sites received six times more interactions on Facebook than the CDC and WHO did between January and April of 2020.
ADDITIONAL BACKGROUND
- Facebook is a dominant surveillance-advertising company.
- Advertising sales accounted for 98.5% of Facebook’s 2019 revenue ($69.7 billion of $70.7 billion).
- According to eMarketer, Facebook alone owns a 59.4% market share of all political digital ad spending.
- Amnesty International: Facebook And Google’s Pervasive Surveillance Poses An Unprecedented Danger To Human Rights | “Google and Facebook’s platforms are underpinned by algorithmic systems that process huge volumes of data to infer incredibly detailed characteristics about people and shape their online experience. Advertisers then pay Facebook and Google to be able to target people with advertising or specific messages.”
- Facebook is a major news source, but its algorithms promote misinformation.
- Pew Research: “Facebook is far and away the social media site Americans use most commonly for news. About half (52%) of all U.S. adults get news there.”
- Accountable Tech/Navigator Research: “More Americans under the age of 45 rely on social media as a major source of news than any other medium.”
- A 2016 BuzzFeed News analysis “found that top fake election news stories generated more total engagement on Facebook than top election stories from 19 major news outlets combined.”
- A 2019 Oxford study found that stories from popular junk news sites around the EU elections generated up to four times more engagement on Facebook than stories from legitimate sources, even as they failed to gain traction on Twitter.
- Facebook’s own research has confirmed that its algorithms have toxic effects, and that the platform can manipulate users’ offline behavior.
- Wall Street Journal, 5/26/20: Facebook Executives Shut Down Efforts to Make the Site Less Divisive
- “Our algorithms exploit the human brain’s attraction to divisiveness… If left unchecked, [Facebook would feed users] more and more divisive content in an effort to gain user attention & increase time on the platform.”
- 2016 Facebook presentation: “64% of all extremist group joins are due to our recommendation tools… Our recommendation systems grow the problem.”
- Facebook civil rights audit, 7/8/20: “[T]he algorithms used by Facebook inadvertently fuel extreme and polarizing content… Facebook should do everything in its power to prevent its tools and algorithms from driving people toward self-reinforcing echo chambers of extremism, and that the company must recognize that failure to do so can have dangerous (and life-threatening) real-world consequences.”
- Facebook’s content-shaping AI distorts discourse, directly contradicting numerous statements from Mark Zuckerberg.
- “I believe in giving people a voice because, at the end of the day, I believe in people.” [10/17/19]
- “Giving everyone a voice empowers the powerless and pushes society to be better over time.” [10/17/19]
- “I believe people should decide what is credible, not tech companies.” [10/17/19]