Industry Insights

Governing the Rise of AI in Government: Why Governance Must Be a Whole-of-Agency Mission

Written by Fed Gov Today | Jul 24, 2025 1:32:46 PM

 

Original broadcast 7/27/25

Presented by ServiceNow

Artificial intelligence is no longer a futuristic concept for federal agencies—it’s part of daily workflows. According to the General Services Administration (GSA), nearly half of its employees are already using the agency’s new AI tool regularly. With AI permeating government operations, the need for robust, agency-wide governance has never been greater.

Jonathan Alboum, Federal Chief Technology Officer at ServiceNow and former Chief Information Officer at the U.S. Department of Agriculture, joined Fed Gov Today with Francis Rose to unpack what effective AI governance should look like and how agencies can implement it at scale. His key message: governance cannot be limited to CIOs or isolated tech teams. It must be integrated across agency leadership to truly manage AI risks and maximize its value.

The Governance Gap in AI Adoption

Alboum points out that the number of reported AI use cases has surged, with agencies submitting thousands of entries in inventories compiled earlier this year. This rapid adoption is a sign of innovation, but also a source of concern.

“As the technology begins to take hold like that, one of the big challenges is governance,” Alboum said. “How do we know that this technology is providing real value to agencies? How do we understand the data used to train these models? Is it effective? Is it trustworthy?”

He notes that many agencies are still figuring out how to manage AI usage, especially generative AI, and often lack tools to assess risks or monitor compliance. That absence of structure raises the potential for unintended consequences—misuse of sensitive data, model drift, or decisions made by systems that are poorly understood by their human supervisors.

A Whole-of-Agency Approach

Alboum strongly believes AI governance must go beyond the CIO’s office. “It becomes an agency-level issue, a corporate issue if you will,” he said. That means involving other leadership roles such as the Chief Data Officer, Chief Information Security Officer, and Chief Risk Officer.

The rationale is clear: AI doesn’t operate in a silo. It intersects with core domains like data management, cybersecurity, privacy, legal compliance, and public trust. Effective governance must bring all of these threads together in a coordinated framework.

“We want to make sure we bring together these disparate groups in a way where people can see the data, understand the models, and evaluate outcomes. That transparency is what builds trust internally and externally,” Alboum explained.

Learning from Existing Frameworks

One valuable resource Alboum highlights is the NIST AI Risk Management Framework (AI RMF), developed in collaboration with government, industry, and academic stakeholders. While voluntary, the NIST framework provides agencies with a structure for assessing, mitigating, and documenting AI risks.

“The more buy-in there is during the creation of a framework, the more likely it is that it will be adopted and used,” Alboum said, noting that the collaborative model used by NIST should be mirrored internally as agencies shape their own policies.

This includes adapting the framework to align with agency-specific missions and environments, as well as using it to standardize practices across departments and programs.
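For readers unfamiliar with its shape, the AI RMF organizes its guidance around four core functions: Govern, Map, Measure, and Manage. A minimal Python sketch of how an agency might map those functions to its own practices follows; the function names come from NIST AI RMF 1.0, but every mapped activity here is an illustrative placeholder, not a NIST-prescribed step:

```python
# The four core functions are from NIST AI RMF 1.0; the listed
# activities are illustrative examples only, not official guidance.
rmf_functions = {
    "Govern": [
        "name accountable officials (e.g., CIO, CDO, CISO, CRO)",
        "publish an agency-wide AI use policy",
    ],
    "Map": [
        "inventory AI use cases and the data used to train them",
    ],
    "Measure": [
        "track risk exposure, ROI, and time saved per use case",
    ],
    "Manage": [
        "monitor deployed models for drift and misuse",
    ],
}

# Print a simple coverage summary an agency could review.
for function, activities in rmf_functions.items():
    print(f"{function}: {len(activities)} mapped activity(ies)")
```

The point of a mapping like this is standardization: once every program office fills in the same four buckets, practices can be compared and gaps spotted across departments.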

What Good Governance Looks Like

Alboum lays out a vision for governance that starts with inventory—knowing what AI is in use, and by whom. “We should be able to make that inventory available, at least internally, so we can map it to strategic priorities and mission outcomes,” he said.

But governance doesn’t stop at visibility. It also involves metrics to evaluate how AI is performing. Alboum suggests tracking risk exposure, return on investment, time saved, dollars preserved (especially in fraud detection), and alignment with legal and ethical obligations.
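As a rough illustration of how an inventory and its metrics might fit together, here is a minimal Python sketch. The record fields (risk tier, hours saved, dollars preserved) and the example use cases are hypothetical placeholders chosen to echo the metrics Alboum mentions, not an official federal schema:

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    # Hypothetical inventory record; field names are illustrative.
    name: str
    owner_office: str
    mission_priority: str       # strategic priority the use case maps to
    risk_tier: str              # e.g. "low", "moderate", "high"
    hours_saved_per_month: float
    dollars_preserved: float    # e.g. fraud losses avoided

def summarize(inventory):
    """Roll up simple governance metrics across the inventory."""
    by_priority = {}
    for uc in inventory:
        by_priority.setdefault(uc.mission_priority, []).append(uc.name)
    return {
        "total_use_cases": len(inventory),
        "high_risk": sum(1 for uc in inventory if uc.risk_tier == "high"),
        "hours_saved_per_month": sum(uc.hours_saved_per_month for uc in inventory),
        "dollars_preserved": sum(uc.dollars_preserved for uc in inventory),
        "by_priority": by_priority,
    }

# Two invented example entries for demonstration.
inventory = [
    AIUseCase("invoice triage", "OCFO", "payment integrity",
              "moderate", 120.0, 0.0),
    AIUseCase("fraud scoring", "OIG", "payment integrity",
              "high", 40.0, 250000.0),
]
print(summarize(inventory))
```

Even a toy rollup like this shows why the inventory matters: mapping each use case to a mission priority is what lets leadership tie AI activity to strategic outcomes rather than counting tools.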

“We want to be able to demonstrate not just that AI is being used, but that it's being used in a way that’s safe, secure, and creates trust,” he said.

The goal is not to stifle innovation but to channel it through guardrails that promote accountability and consistency. When governance is done well, Alboum argues, AI becomes a tool not only for greater efficiency but also for better public service.

From Tools to Trust

Ultimately, Alboum believes AI presents a generational opportunity for government. But that opportunity comes with responsibility.

“If we can use this technology to build trust in the way government operates, then we’re doing more than just automating tasks or saving time,” he said. “We’re changing the way people think about their interactions with government. We’re improving the experience, and we’re enhancing public confidence.”

To get there, he says, agencies must prioritize governance with the same energy they’re giving to experimentation and adoption. Because without the right foundation in place, even the most powerful AI tools won’t achieve their full potential.