AI use cases in government: Building trust through best practices 

The Partnership for Public Service’s AI Center for Government® recently convened current and former public sector leaders to share lessons learned regarding the development and use of artificial intelligence in the public sector.  

Joining us on the panel were: 

  • Adrianna Tan, Managing Director, Future Ethics AI; former Director of Product Management, San Francisco Digital Services 
  • Alexander Schneider, Digital Service Expert, Colorado Digital Service, Governor’s Office of Information Technology 
  • Martin Skorczynski, Assistant Director for Emerging Technology, Government Accountability Office 

Implementing AI in high-risk settings 

Adrianna Tan shared a use case focused on her work testing AI models in collaboration with the Department of Defense. In this test, 200 military doctors evaluated AI's ability to accurately summarize patient data and prepare treatment plans. The examined AI models produced numerous errors in their analyses, and the study's findings helped create datasets for the DOD to evaluate AI vendors and informed policies for responsible AI use at the agency. 

From this experience, Tan listed two key takeaways for AI implementations: 

  1. Consider the level of risk – High-risk deployments of AI can have a major impact on people's lives. 
  2. Practice AI safety – Basic AI safety practices, including expert evaluation and human oversight, are vital to effective AI implementations. 

Developing AI tools with a human-centered approach 

Alexander Schneider emphasized the importance of basing technology solutions, including AI, on human needs. To build successful products, Schneider’s team integrates user research, usability testing, user interviews and employee experience journey maps into the development process. These are critical elements in defining where new technology can assist public-sector employees before decisions on implementation are made. 

Through collaboration with the Colorado Department of Revenue, Schneider’s team has used this human-centered process to guide development of AI tools such as: 

  • Compliance-formatted call summarizations at the department’s call centers 
  • An on-demand knowledge retrieval system for call center employees to access resources including department policies and standard operating procedures 
  • An in-development skills-based routing system that directs callers to the best contact to resolve an issue in one call 

Centering accountability through governance-by-design 

Martin Skorczynski stressed the importance of establishing governance processes to ensure AI tools are trustworthy. As a concept, Skorczynski explained governance-by-design as a process where potential AI use cases are categorized by the development team according to value and feasibility. New AI tools are developed only when their value to the organization, the technical capabilities of the team and governance processes have been established. 

As an example of this process, Skorczynski cited the Government Accountability Office's work on an early-stage prototype tool that would generate outreach templates connecting members of Congress's publicly stated priorities to the GAO reports most relevant to their work. Through the governance-by-design process, Skorczynski's team considered the question of accountability for the system's work. The team concluded that the tool should not communicate directly with members of Congress, since accountability for the system's work belongs with the humans who review, refine and approve any AI-generated communications. 

In Skorczynski’s own words: “The technology is not the hard part. Deciding before you build anything who is accountable for what the technology adds: that’s the hard part. And that is the difference between AI that earns public trust and AI that erodes it.” 

Using best practices to build public trust in AI use 

These cases offer distinct examples of AI in government, but they are unified by one theme: implementation of AI in government must be informed by processes that build public trust. 

Testing AI models before use in high-risk settings, taking a human-centered approach when designing AI tools and proactively assessing an AI tool's value and feasibility for an organization are all best practices that lay a foundation for that trust. 

Continuing the conversation   

The AI Center for Government champions AI innovators across all levels of government. If your agency is taking steps to lead AI well, we’d love to hear from you. Join us as we highlight real-world AI use cases and convene public sector leaders from across the country to share tools and insights to lead confidently in the age of AI. 

We’re here to help! 

Sign up for our newsletter. 

Follow us on LinkedIn

Get in touch! Email us at [email protected]