
3 Key Takeaways from the AI Aspirations Conference

Our Campaigns Director shares his thoughts and insights after attending the White House Office of Science and Technology Policy’s conference on AI.

Zach Praiss,
Jun 20, 2024

Last Thursday, I had the opportunity to attend AI Aspirations: R&D for Public Missions, a White House conference laying out a vision for harnessing the promise and potential of artificial intelligence (AI).

Building on President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence — which set the foundation last fall for guidance on government use of AI and followed voluntary commitments secured from Big Tech companies — the conference brought together experts from across the Biden-Harris Administration, Congress, academia, civil society, and industry to discuss aspirational visions for AI grounded in serving the public’s interest.

It was a lively mix of speakers and breakout sessions, and while many topics were discussed in detail, here are some of my topline takeaways from AI Aspirations:

  • We need to “manage AI’s risks, so we can seize its benefits.” This was one of the first points made in the opening remarks of Arati Prabhakar, the Director of the Office of Science and Technology Policy. To my mind, the point could not be more important. As companies and governments race to develop and deploy AI, we cannot overlook the safety and privacy risks of these new technologies. If we fail to manage those risks, we stand to lose AI’s potential to unmitigated and potentially far-reaching harms. As Senator Amy Klobuchar explained later at the conference, “One way to mess with promise is to not set guardrails.”

The conference struck two tones: an aspirational pursuit of AI’s potential for the public interest and a cautious approach to mitigating risks and preserving rights in the development and deployment of new AI technologies. Those are not two notes in dissonance but in harmony. We need to manage AI’s risks to seize its potential.

  • AI has powerful potential for public missions across sectors. The convening presented seven aspirations for AI technologies to serve the public interest in energy, education, weather, materials, health, transportation, and government services. As I learned, AI can help us better predict the weather, create sustainable materials for semiconductors, prevent dangerous traffic patterns, and improve the efficiency of the electrical grid. Each of the aspirations presented at the conference represented life-changing possibilities. Just imagine if we could predict dangerous weather sooner to save lives and minimize damage to communities.

While these aspirations may not offer short-term financial returns for industry, it’s incumbent on us to seize the opportunity to apply this new technology to public missions that could have profound positive impacts on communities, economies, and the planet in the long run.

  • AI must not be just a means to an end. It’s not enough to ask ourselves, does AI fulfill its task? We must also examine how AI completes that task. This was a critical point raised in a breakout session at the convening, and it had me thinking about how many of us have overlooked the mechanics of how AI does what it does.

Although Big Tech might present it as magic, as with Google’s AI Overviews or Meta’s “Made with AI” labels, AI is a composite of algorithms, data, and computational power built by engineers to simulate human activities like reasoning and creating. AI systems are designed to leverage data to create outputs. While many — including myself — have at first been taken aback by the potential of new AI systems to create content, we must also always ask ourselves: where is this coming from? Whose data has informed these systems? How did these AI systems reach these outputs, and should we be concerned about bias and misinformation, or worse, discrimination and disinformation? Big Tech companies must be more transparent about, and held more accountable for, the processes that underpin their AI creations.

In the end, the White House Office of Science and Technology Policy succeeded in bringing together a diverse group of stakeholders on AI technologies across government, civil society, and industry. I hope this was just the beginning of more constructive conversations that balance aspiration and innovation with meaningful consideration of risk and harm.
