By now, you’ve likely been inundated with news of both the enormous potential benefits and dangers posed by artificial intelligence (AI) since the rollout of widely accessible generative AI tools like ChatGPT last fall.
Last week, Accountable Tech submitted a comment in response to the White House Office of Science and Technology Policy’s request for information (RFI) outlining steps the administration can immediately take to address some of the most pressing risks posed by generative AI: the erosion of our information ecosystem and harm to the integrity of our elections.
Accountable Tech has spent years working to address the systemic drivers of the information crisis that are pushing our democracy to the brink – from campaigning to end the surveillance advertising business model that rewards harmful lies and extremist content, to developing a sweeping election integrity roadmap to combat efforts to deceive voters and manipulate public discourse.
With the rapid innovation and deployment of generative AI, bad-faith actors could gain tools to quickly and easily create and distribute fake news articles, social media posts, videos, and audio clips that are increasingly difficult to distinguish from authentic content. Even when harnessed with good intent, the growth of AI adds harrowing new layers to the ever-deepening information crisis.
Many industry leaders may appear eager for regulation, but as we point out in Fast Company, there's reason to be skeptical. Big Tech has publicly voiced concern and asked Congress to pass legislation to regulate it in the past, while simultaneously spending millions to defeat the very bills it publicly praised. Time will tell whether Big Tech is operating from the same playbook now when it comes to AI, but we shouldn't hold our breath. U.S. officials have no time to waste.
In the absence of new legislation to confront the near-term threats of AI, Accountable Tech’s comment offers several levers the Biden administration can pull to address the immediate threats new generative AI systems pose to the integrity of our elections and democracy without waiting for Congress to act, including:
- Vigorous enforcement of the breadth of laws already on the books, including federal statutes against voter suppression and foreign election interference;
- Utilizing the full scope of executive authority to curtail AI-related harms, including by clarifying that deceptive AI ads violate the FEC's prohibition on fraudulent misrepresentation and through the FTC's commercial surveillance rulemaking process; and
- Leveraging the bully pulpit to advance AI accountability, including by urging industry leaders to collectively embrace key standards, supporting plaintiffs seeking redress from AI harms, and pressuring Congress to swiftly pass bipartisan legislation.
In these early days of publicly accessible generative AI tools, immediate action is critical to address the urgent threats they pose to our information ecosystem and to free and fair elections. You can read Accountable Tech's full comment here, and stay tuned as Accountable Tech prepares to release a holistic framework for grappling with AI's potential harms.