Building civic resilience in the age of AI 

Artificial intelligence is rapidly reshaping how government, industry and society operate—bringing both new tools for strengthening civic life and new challenges that are already affecting everything from how we work to our democracy and public trust in government.  

The Partnership AI Center for Government® recently convened cross-sector experts to discuss AI and public service at the National Conference on Citizenship’s “Building Civic Resilience” conference. 
We asked:  

  • Where might AI technologies help build civic resilience? When might they undermine it? 
  • What must government leaders do to ensure AI supports a just, democratic and resilient society? 

Panelists included Solomon Abiola, Maryland’s AI/ML policy & governance director, Elizabeth Laird, the Center for Democracy and Technology’s director of equity in civic technology, and Gus Rossi, Omidyar Network’s director of tech policy. The conversation was moderated by Partnership President and CEO Max Stier.

Key insights 

  • Ideas outpace implementation 

Ideas for how AI could be used in government are rarely the bottleneck; the capacity to implement them is. When Maryland launched its first cohort of AI interns last summer, the number of potential use cases submitted by state offices far exceeded the number of interns available. 

  • Rethink human-in-the-loop 

AI governance frameworks often use the term “human-in-the-loop” to describe the need for human participation in the use of AI tools. But this isn’t enough: humans should be involved throughout the development, implementation, evaluation and use of AI, not just at the point of use. 

  • Treat all software as AI 

Government IT teams should treat all software as if it contains AI. The average user cannot be expected to determine whether a given product has AI-powered functionality embedded in it. 

  • AI governance is not universal 

Internationally, the U.S. approach to AI is unique. Many countries prioritize privacy and data security, even if it means slowing down adoption of the technology.   

  • Privacy must be prioritized 

Governments at all levels, but especially at the local level, should ensure every vendor has a strong privacy policy in place. Privacy policies, however, are only effective if people actually follow them. 

  • Public input to AI development is bipartisan 

In a rare show of bipartisanship, both the Biden and Trump administrations have emphasized the need to solicit feedback from the public on how AI is used in government.

Collaboration is key for responsible AI use

Our session sat alongside other critical examinations of the state of federal data, trust in public institutions and the power of citizen assemblies. 

Because AI, machine learning and intelligent automation already intersect with public service, we believe it is critical for leaders to work together to understand how to use AI responsibly to deliver better outcomes for the public.  

At the NCoC conference, leaders from government, philanthropy, the nonprofit sector, the media and community organizations worked to chart a path to building civic resilience and strengthening civic health. We were excited to be part of the conversation. 


Continuing the conversation   

The AI Center for Government champions AI innovators across all levels of government. If your agency is taking steps to lead AI well, we’d love to hear from you. Join us as we highlight real-world AI use cases and convene public sector leaders from across the country to share tools and insights to lead confidently in the age of AI.   

We’re here to help!