Proactive Protection: How AI Is Transforming Cybersecurity in Government
Original broadcast 6/11/25

Presented by Dataminr
Speaking at the AI for Government Summit, Tierney described how AI is empowering cyber teams to operate more like intelligence analysts—always watching, always evaluating, and always anticipating. “AI gives agencies the ability to shift from reactive to proactive,” he said. “You’re no longer waiting for the attack to happen inside your network. You’re identifying the threat before it reaches the gate.”

One of the key ways AI supports this transformation is by analyzing publicly available data at scale. Open-source intelligence (OSINT) has long been a tool for national security professionals, but it’s only recently that technology has made it feasible to monitor and process massive volumes of online chatter, forums, file repositories, and dark web marketplaces in real time. Tierney explained that AI models can now scan these data streams continuously to identify emerging threat signals—whether it’s chatter about new vulnerabilities, mentions of agency-specific software, or signs that a particular ransomware group is mobilizing.
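To make the idea concrete, the simplest form of this kind of continuous scanning can be sketched as keyword matching over a stream of open-source posts. This is a minimal illustration only: the watchlist categories, terms, and sample posts below are invented for the example, and production systems (including those Tierney describes) use far more sophisticated models than substring matching.

```python
from dataclasses import dataclass

# Hypothetical watchlist of signal categories an agency might monitor.
# All terms here are illustrative, not drawn from any real product.
WATCHLIST = {
    "vulnerability": ["CVE-", "zero-day", "exploit"],
    "agency_software": ["ExampleVPN", "ExampleERP"],
    "ransomware": ["ransom note", "data leak site"],
}

@dataclass
class Signal:
    category: str
    term: str
    snippet: str

def scan_stream(posts):
    """Scan a stream of open-source text posts for watchlist hits."""
    signals = []
    for post in posts:
        lowered = post.lower()
        for category, terms in WATCHLIST.items():
            for term in terms:
                if term.lower() in lowered:
                    # Keep a short snippet so an analyst can triage quickly.
                    signals.append(Signal(category, term, post[:80]))
    return signals

posts = [
    "New CVE-2025-1234 exploit discussed on forum",
    "Ransom note template shared in channel",
]
for s in scan_stream(posts):
    print(s.category, s.term)
```

A real pipeline would replace the keyword list with trained classifiers and entity extraction, but the shape—continuous ingest, categorized signals, analyst-ready snippets—stays the same.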

For government agencies, this kind of early warning is invaluable. It allows security teams to patch vulnerabilities before they’re exploited, strengthen defenses in areas of likely attack, and prioritize their limited resources to address the most imminent risks. “The reality is, agencies are trying to do more with less,” Tierney noted. “They don’t have enough people. So anything that helps them focus on what matters most is a win.”

AI is also addressing one of the most persistent challenges in government cybersecurity: third-party risk. Agencies often rely on a web of contractors, vendors, and external service providers—each of which introduces its own vulnerabilities. In the past, agencies depended on self-reporting or formal audits to assess third-party risk, both of which are slow and prone to blind spots. Now, with AI, agencies can independently monitor their vendor ecosystem and spot signs of compromise—sometimes before the vendors themselves are aware.
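The independent-monitoring idea above can be sketched as cross-referencing a vendor list against externally observed compromise indicators, such as dark-web mentions or credential dumps. The vendor names, sources, and indicator text below are all invented for illustration.

```python
# Hypothetical vendor ecosystem to monitor (names invented).
VENDORS = ["Acme Cloud", "Beta Logistics", "Gamma IT Services"]

# Hypothetical externally observed indicators (text invented).
INDICATORS = [
    {"source": "paste site", "text": "credential dump from Acme Cloud portal"},
    {"source": "forum", "text": "selling access to Gamma IT Services VPN"},
]

def flag_vendor_risk(vendors, indicators):
    """Return vendors that appear in external compromise indicators."""
    flagged = {}
    for vendor in vendors:
        hits = [i for i in indicators if vendor.lower() in i["text"].lower()]
        if hits:
            flagged[vendor] = [h["source"] for h in hits]
    return flagged

print(flag_vendor_risk(VENDORS, INDICATORS))
```

The point of the sketch is the direction of the check: instead of waiting for a vendor's self-report or audit, the agency watches external data for the vendor's name and acts on the first sign of exposure.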

One of the most powerful developments Tierney highlighted is agentic AI—autonomous systems designed to operate like human analysts. These agents not only gather data but also interpret it, apply context, and deliver actionable insights. For example, just as a person might use mapping tools to evaluate a travel route, an AI agent can assess a threat actor’s history, motivations, and likely attack vectors. It then presents that information in a clear, decision-ready format, helping cybersecurity teams move faster and smarter.
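The analyst-style loop described above—gather, contextualize, deliver decision-ready output—can be sketched in miniature. Everything here is an assumption made for illustration: the actor profiles, the scoring weights, and the feed format are invented, and a real agentic system would reason over far richer data.

```python
# Hypothetical threat-actor profiles (invented for illustration).
ACTOR_PROFILES = {
    "GroupA": {"history": "targets healthcare", "base_risk": 0.7},
    "GroupB": {"history": "opportunistic phishing", "base_risk": 0.4},
}

def gather(feed):
    """Step 1: collect raw mentions of known actors from a feed."""
    return [item for item in feed if item["actor"] in ACTOR_PROFILES]

def contextualize(mentions):
    """Step 2: attach profile context and score each mention."""
    enriched = []
    for m in mentions:
        profile = ACTOR_PROFILES[m["actor"]]
        # Bump the score if the chatter references our own organization.
        score = profile["base_risk"] + (0.2 if m["mentions_us"] else 0.0)
        enriched.append({**m, "history": profile["history"], "score": score})
    return enriched

def report(enriched):
    """Step 3: present decision-ready output, highest risk first."""
    return sorted(enriched, key=lambda e: e["score"], reverse=True)

feed = [
    {"actor": "GroupB", "mentions_us": False},
    {"actor": "GroupA", "mentions_us": True},
    {"actor": "Unknown", "mentions_us": True},
]
for entry in report(contextualize(gather(feed))):
    print(entry["actor"], round(entry["score"], 2))
```

Even this toy version shows the value Tierney points to: the human team receives a short, ranked, context-bearing list rather than a pile of raw alerts.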

This ability to add context is critical. Raw alerts aren’t helpful if teams can’t assess their significance. Agentic AI fills that gap, giving defenders the same kind of decision advantage adversaries seek to exploit. “These agents work like tireless colleagues,” Tierney said. “They’re always on, always gathering, always contextualizing.”

But as government leaders look to embrace these capabilities, Tierney issued a word of caution. Not all AI is created equal. Many tools on the market use open-source models or repackaged commercial technologies that may not be fully secure—or even accurate. For high-stakes environments like federal cybersecurity, that’s not good enough.

Tierney urged agencies to work with partners who are deeply invested in AI—not just commercially, but technically. “If you’re going to rely on AI to protect your enterprise, you need to know where the model came from, how it was trained, and how it’s evolving,” he said. He recommended prioritizing solutions built on proprietary large language models developed in-house by vendors with a long-standing commitment to AI research and development. These models are more likely to be rigorously tested, better understood, and more adaptable to the nuances of government missions.

The pace of change in AI makes this guidance even more urgent. Just two years ago, Tierney noted, the term “agentic AI” didn’t exist. Now, it’s becoming a critical element of modern cybersecurity strategy. With generative AI capabilities already maturing, and new autonomous models on the horizon, agencies need to be forward-looking and flexible. That doesn’t mean waiting—it means starting with the right foundation and building from there.

For Tierney, the ultimate promise of AI in government cybersecurity is about resilience. It’s about giving small teams the tools to act like large ones. It’s about detecting threats early, understanding them clearly, and responding decisively. And above all, it’s about protecting the systems and services that millions of Americans rely on—before it’s too late.

Key Takeaways:

  • AI enables government cyber teams to detect and respond to threats before they reach agency systems.

  • Agentic AI delivers context and autonomy, acting like a digital analyst working alongside human teams.

  • Agencies should prioritize trustworthy, proprietary AI models from deeply invested partners to ensure accuracy and security.


This interview was featured in the program Innovation in Government, recorded on location at the Carahsoft AI for Government Summit.
