Inside the AI Control Tower: The Blueprint for Governing Government’s AI Explosion

Original Broadcast Date: 03/01/2026

Presented by ServiceNow & Carahsoft

As artificial intelligence use cases expand across the federal government, agencies are confronting a reality that extends far beyond experimentation. AI is no longer a pilot project or innovation lab exercise. It is becoming embedded in how government operates. In his conversation on Fed Gov Today, Geoff Browning, Director of Global Government Affairs and Public Policy at ServiceNow, focuses on a central question: how do agencies scale AI responsibly?

Browning makes it clear that speed alone does not determine success. In both the public and private sectors, he says, the organizations that get AI right are not necessarily the ones that move fastest. They are the ones that establish governance first — and then scale.

He describes what he calls an “AI control tower,” a structured approach to managing artificial intelligence across an enterprise. At its core, the idea centers on answering difficult governance questions before deployment expands. Agencies must determine what AI systems can and cannot touch. They must decide which types of cases always default to a human and clearly define the human’s role. They must understand how AI assets are tracked from data ingestion to output. And because AI becomes part of the infrastructure, it must comply with federal laws and requirements from the outset.

Browning emphasizes that governance has two parts. The first involves thoughtful decision-making: setting guardrails, defining acceptable use, and identifying risk levels. The second involves operationalizing those decisions. It is not enough to publish policy; agencies must build mechanisms that measure maturity and risk over time. He points to tools that align with established frameworks, such as the NIST AI Risk Management Framework, to help agencies benchmark their AI programs.

As agencies build inventories of AI use cases — a requirement reinforced by the administration — Browning sees progress. Agencies are not only cataloging AI deployments but also assessing which use cases are higher risk and taking mitigating steps. He acknowledges that these determinations are difficult and vary by use case, but he sees federal employees working hard to confront those challenges thoughtfully.

Another area Browning highlights is workforce readiness. On the technical side, agencies want short time to value. But emerging technologies require new skills. Drawing from a global skills forecast conducted with Pearson, Browning notes a sharp increase in demand for AI governance capabilities, data stewardship, and auditing skills. These roles are becoming essential as AI systems scale. Agencies recognize this need within their own workforces, and Browning stresses that access to training and certification pathways can help close those gaps.

Beyond governance and workforce, Browning raises a topic that receives less public attention but carries enormous importance: asset management. He begins with traditional software and hardware inventories. Congress passed the MEGABYTE Act to require agencies to maintain accurate software inventories using automated tools. Years later, implementation remains challenging. Agencies still struggle to fully track what software and hardware exist on their networks and how those systems are configured.

That foundational challenge becomes even more complex when AI enters the picture. AI systems introduce additional dependencies: models, training data, integrations, and cloud environments. Browning explains that as agencies move toward more advanced, even agentic AI systems, governance becomes still more critical. Dependencies multiply, and the need to track assets across their life cycles grows more urgent. Configuration management, already essential for vulnerability response, takes on added weight in an AI-enabled environment.

He underscores that agencies must understand what technologies are on their networks, how they are configured, and how they interact. When AI systems integrate across cloud providers or connect with multiple vendors, visibility becomes essential. Asset tracking cannot stop at procurement. It must extend through deployment, operation, and eventual retirement.
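The life-cycle tracking Browning describes can be sketched as a minimal data model. This is purely illustrative; the class names, fields, and example assets below are assumptions for the sake of the sketch, not any agency's or ServiceNow's actual inventory schema:

```python
from dataclasses import dataclass, field
from enum import Enum

class LifecycleStage(Enum):
    PROCURED = "procured"
    DEPLOYED = "deployed"
    OPERATING = "operating"
    RETIRED = "retired"

class RiskLevel(Enum):
    LOW = "low"
    HIGH = "high"

@dataclass
class AIAsset:
    """One entry in a hypothetical AI asset inventory."""
    name: str
    risk: RiskLevel
    stage: LifecycleStage = LifecycleStage.PROCURED
    # Dependencies capture what the article calls models, training data,
    # integrations, and cloud environments.
    dependencies: list[str] = field(default_factory=list)

    def advance(self, stage: LifecycleStage) -> None:
        """Move the asset to a new life-cycle stage (through retirement)."""
        self.stage = stage

# An inventory is then just a collection that can answer oversight questions,
# e.g. "which high-risk assets are currently operating?"
inventory = [
    AIAsset("claims-triage-model", RiskLevel.HIGH, LifecycleStage.OPERATING,
            dependencies=["training-data-2024", "cloud-inference-endpoint"]),
    AIAsset("doc-summarizer", RiskLevel.LOW, LifecycleStage.DEPLOYED),
]
high_risk_operating = [a.name for a in inventory
                       if a.risk is RiskLevel.HIGH
                       and a.stage is LifecycleStage.OPERATING]
print(high_risk_operating)  # ['claims-triage-model']
```

The point of the sketch is the shape of the record, not the code itself: an asset carries its risk level, its current life-cycle stage, and its dependencies, so visibility does not stop at procurement.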

Looking ahead, Browning encourages agencies to think about quantifying the value they receive from AI investments. Testing, evaluation, validation, and verification are critical components of responsible deployment. Agencies must ensure that AI systems produce measurable outcomes. Investment on the front end must translate into results on the back end.

Ultimately, Browning’s message is not about hype. It is about discipline. As AI use expands — in some cases rapidly — governance structures, workforce development, asset management, and measurable performance become the difference between experimentation and enterprise success.

AI, he suggests, is becoming embedded in federal infrastructure. Agencies that take the time to build their control towers — defining rules, managing assets, developing skills, and measuring impact — position themselves to scale responsibly. The challenge is significant. But so is the opportunity for agencies that approach AI with clarity, structure, and accountability.