Ahead of Hearings on AI, Accountable Tech Reminds Senators of Broken Promises from Big Tech
Digital ad buy and accompanying “Broken Promises Report” highlight examples of tech CEOs calling for self-regulation before defying their own rules
Today, Accountable Tech launched a digital ad buy and a new report highlighting Big Tech’s alarming history of making and then breaking its own public commitments on privacy and safety. The Broken Promises Report and the organization’s latest ad buy on Capitol Hill come as U.S. Senators prepare to welcome tech CEOs to Congress for a series of hearings on AI technology and regulation. Accountable Tech’s new ad shows why we cannot take Big Tech CEOs at their word, especially when they say they support regulation:
In early August, Accountable Tech joined AI Now Institute and EPIC in releasing the “Zero Trust AI Governance” framework, which underscores the importance of legislators taking a “zero trust” approach to AI regulation and puts the burden on companies to prove the safety of their AI tools.
“Big Tech CEOs have a long history of making promises they have no intention of keeping,” said Bianca Recto, communications director for Accountable Tech. “They promised to protect children from harmful advertising, then continued to sell ads targeting them. They promised to delete location data when people visit abortion clinics, only to keep tracking and storing that sensitive data. And just this year, OpenAI promised to limit how ChatGPT is used by political campaigns – only to disregard enforcement of its own ban.”
Recto continued, “Big Tech has shown us what ‘self-regulation’ looks like, and it looks a lot like their own self-interest. Senators must go into this week’s AI hearings with their eyes wide open – or risk once again getting fooled by savvy PR at the expense of our safety.”
The report highlights a litany of examples that show Big Tech companies failing to keep their promises to responsibly self-regulate, including:
OpenAI’s failure to uphold its own AI safeguards on political content. In March 2023, OpenAI prohibited political campaigns from using ChatGPT to generate content that could target a particular voting demographic. Analysis from The Washington Post found that OpenAI failed to enforce its ban for months, posing a real-time threat to election integrity.
Google’s failure to protect sensitive location data for people visiting abortion clinics. In 2022, after the overturning of Roe v. Wade, Google went out of its way to commit to deleting the location history of any user visiting particularly personal locations like fertility centers, addiction treatment facilities, weight loss clinics, and cosmetic surgery clinics. However, research from Accountable Tech found that Google continues to track and store sensitive location history data for short trips to abortion clinics, along with search query data collected on Google Maps and stored for 18 months by default.
Meta’s failure to keep its commitment to safeguard elections. Despite its promise to make election integrity a priority after fallout from the Cambridge Analytica scandal in 2016, Meta cut its election integrity teams by 80%, blocked researchers’ access to critical resources for monitoring dangerous election-related information, and allowed 650,000 posts questioning the legitimacy of the 2020 election results between Election Day and the deadly January 6, 2021 insurrection.
YouTube’s failure to protect child safety online despite voluntary commitments. In 2019, Google’s YouTube committed to making “protecting kids and their privacy” the company’s number one priority after facing a $170 million fine for illegally collecting data on minors and selling it for profit. The company agreed to stop selling personalized ads on children’s content, but just four years later, researchers found evidence that Google was continuing to target children.