Accountable Tech and Design It For Us today released new research finding that Meta recommends dangerous, body-shaming, and sexually explicit content to users ages 18 and under via its Instagram Teen Accounts, in direct violation of the company’s promise to protect young users. Accountable Tech worked with five members of the youth-led coalition Design It For Us to test the promised new default settings in Teen Accounts over two weeks this spring. 100% of the accounts tested were recommended “sensitive content” prohibited by Meta, including sexual content (100% of test accounts) and body image and disordered eating content (80% of test accounts). Instagram Teen Accounts were also found to apply other privacy and mental health protections haphazardly, including nighttime notifications, time limit reminders, and removal of offensive comments.
“It’s no coincidence that Meta launched Teen Accounts last fall, as bipartisan kids safety legislation picked up momentum in Congress,” said Nicole Gill, Executive Director of Accountable Tech. “From the start, Teen Accounts have always been about PR, not protecting kids. While they make claims to the contrary, it’s clear that Meta cannot be relied upon to protect their youngest users. It’s time our lawmakers demanded real guardrails to protect kids and teens from platforms designed explicitly to endanger them.”
“When 4 out of 5 participants report feeling negative and overwhelmed by recommended content in their feeds, we’re not talking about isolated incidents—it is a systemic problem,” said Zamaan Qureshi, Co-chair of Design It For Us. “This is about a product that’s designed in a way that undermines their mental health. Young people deserve positive digital spaces that support our well-being, not algorithms that bombard us with harmful content that can trigger anxiety or body image issues. It is Meta’s responsibility to fix this problem immediately, and our elected officials must hold them to account.”
- 100% of teen accounts tested received images and Reels that would be classified as “sensitive content” under Meta’s framework for age-appropriate content limitations: 80% of accounts were recommended potentially harmful health content, and 100% were recommended sexual content.
- The control account recorded the highest volume of sexualized content, indicating that a typical teen following popular accounts still receives sensitive content.
- Accounts also received content related to other sensitive topics, including alcohol, steroids, and supplements; hateful (racist, homophobic, and misogynistic) content; and disturbing content depicting gun violence and domestic violence. These posts were recommended to users in their Reels feeds – not surfaced from accounts the users followed – showing that Meta’s algorithm pushes inappropriate content to teens even when they are not looking for it.
- An overwhelming majority (4 out of 5) of research participants had distressing experiences while using Instagram Teen Accounts, reporting that the content they were seeing left them feeling overwhelmed, uneasy, and uncomfortable.
- Account protection settings were applied inconsistently, occasionally allowing some of our test accounts to fall through the cracks. Teen Accounts automatically have certain settings turned on to help limit exposure to sensitive content and reduce screen time. However, these settings did not always work as promised; for example, only 60% of users received a reminder to close the app after 60 minutes.
For full results and methodology, please see here.
History shows that Meta’s public displays of self-regulation come only under threat of exposure and accountability, and the company rarely upholds its promises to act:
- 2018: Meta claims it began work on recommendation restrictions this year; however, evidence dated to 2019 from the ongoing FTC trial revealed that Meta knew inappropriate interactions and grooming behavior were taking place on its platforms.
- September 2021: Whistleblower Frances Haugen releases the Facebook Files, revealing that Meta was aware of the harmful effects of its platforms yet prioritized profit over people in addressing these harms.
- May 2023: The Kids Online Safety Act (KOSA) is reintroduced with the support of President Biden.
- October 2023: 42 state attorneys general bring forward damning evidence of Meta’s knowledge that its products were harming young people.
- November 2023: Whistleblower Arturo Béjar comes forward, revealing the shocking number of unwanted sexual advances teens receive on Instagram and Meta leadership’s failure to act on this information.
- January 31, 2024: Meta CEO Mark Zuckerberg apologizes to survivor parents in a congressional hearing and affirms that Meta is investing in preventative efforts.
- May 9, 2024: Maryland’s governor signs the Age Appropriate Design Code into law.
- July 30, 2024: The Senate passes KOSA 91-3.
- September 16, 2024: The House Energy and Commerce Committee announces that KOSA will be marked up in committee on September 18th.
- September 17, 2024: Ahead of the crucial KOSA markup in the House of Representatives, with the bill poised to move forward on a bipartisan basis, Meta announces Instagram Teen Accounts, its answer to protecting young people online.
- December 18, 2024: Meta and its proxies successfully prevent the passage of KOSA in the House of Representatives.
- February 26, 2025: Meta apologizes for and claims to have fixed an error that flooded users’ recommendations with graphic and violent content.
- April 24, 2025: Design It For Us, Heat Initiative, and Parents Together join dozens of survivor parents and youth advocates to rally for accountability at Meta’s offices in New York, delivering a petition with more than 10,000 signatures demanding action. In response, Meta points to Teen Accounts as the solution.
###