Securing AI and Its Data

Original broadcast 10/1/25

 

Presented by UberEther & Carahsoft

At the Billington CyberSecurity Conference in Washington, DC, Matt Topper, CEO and Founder of UberEther, explored the fast-changing relationship between artificial intelligence and cybersecurity. His perspective highlighted both the promise of AI in defending government systems and the new risks it introduces, particularly when sensitive data flows into tools that are not yet adequately secured. For Topper, the stakes are high: agencies must find ways to harness AI’s power without undermining the very security they seek to strengthen.

Topper began by drawing a parallel to the technology hype cycles of the past. Just as cloud and mobile once surged onto the scene with great fanfare, AI has become the latest transformative technology. Agencies are racing to adopt AI-driven tools, eager to leverage their ability to analyze vast datasets, identify patterns, and automate tasks. But in the rush, Topper cautioned, many are neglecting to put proper security boundaries around these systems. As a result, some agencies are halting AI deployments until they can assess the risks, while others are pressing forward without fully understanding what they may be exposing.

One of the most pressing concerns, he explained, is data leakage. Sensitive information, including research, program details, or even intelligence, can unintentionally become part of AI training data. If vendors then use that information to refine their models, it risks being exposed far more widely than intended. In a government context, this could mean classified or sensitive research becoming available to adversaries simply because it was processed through an unsecured AI platform. Topper warned that without firm guardrails, agencies may end up giving away their most valuable assets.
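
What such a guardrail might look like in practice is sketched below: a pre-submission filter that scrubs obvious sensitive markers before a prompt ever leaves the agency boundary. This is a minimal illustration of the idea, not a description of any specific product; the patterns and the sample prompt are assumptions, and a real deployment would use agency-specific markings and a much richer detection pipeline.

```python
# Minimal sketch of a pre-submission guardrail: scrub obvious sensitive
# markers from text before it reaches an external AI service, so it cannot
# end up in a vendor's training data. Patterns are illustrative assumptions.
import re

# Hypothetical patterns for material that must never leave the boundary.
SENSITIVE_PATTERNS = [
    re.compile(r"\b(SECRET|TOP SECRET|NOFORN)\b", re.IGNORECASE),  # classification markings
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                          # SSN-like identifiers
]

def redact(text: str) -> str:
    """Replace sensitive spans with a placeholder before the prompt is sent out."""
    for pattern in SENSITIVE_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

prompt = "Summarize the TOP SECRET test results for program X."
print(redact(prompt))  # "Summarize the [REDACTED] test results for program X."
```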

He noted that the challenge is not only external but also internal. Once information enters an AI system, it often leaves the controlled environment of document repositories or secure networks. Data that was once restricted can suddenly become accessible to a much larger group of users. For agencies handling Special Access Programs or sensitive research areas like weapons of mass destruction, the implications are profound. AI systems, while powerful, can inadvertently flatten layers of security and distribute information far beyond what policies allow.
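
One way to keep those layers intact is to enforce the original document classifications at retrieval time, before anything reaches the model's context. The sketch below assumes a simple clearance hierarchy and hypothetical documents; it illustrates the pattern rather than any system Topper described.

```python
# Minimal sketch of preserving document-level access controls when content is
# pulled into an AI system: filter retrieved documents against the requesting
# user's clearance before they reach the model's context.
from dataclasses import dataclass

LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

@dataclass
class Document:
    title: str
    classification: str
    body: str

def retrievable(docs: list[Document], user_clearance: str) -> list[Document]:
    """Only documents at or below the user's clearance may enter the AI context."""
    ceiling = LEVELS[user_clearance]
    return [d for d in docs if LEVELS[d.classification] <= ceiling]

corpus = [
    Document("Budget overview", "UNCLASSIFIED", "..."),
    Document("Program details", "SECRET", "..."),
]

# A CONFIDENTIAL-cleared user asking the assistant a question only gets the first document.
context = retrievable(corpus, "CONFIDENTIAL")
print([d.title for d in context])  # ['Budget overview']
```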

Topper pointed to the immaturity of current standards as another area of concern. He described how the Model Context Protocol (MCP), which underpins many AI APIs, initially released its specification with an unfinished security section—literally marked as “to do.” This lack of rigor has already led to vulnerabilities. He cited examples where organizations using MCP servers found themselves exposed, with attackers able to pivot into repositories and systems such as Salesforce or Snowflake. While fixes like OAuth integration have been introduced, Topper emphasized that these remain imperfect solutions. For him, the real path forward lies in developing stronger identity frameworks and limiting exposure through short-lived credentials.
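
As a rough illustration of the short-lived-credential approach he described, the sketch below shows an MCP-style tool endpoint validating an OAuth bearer token and refusing any token that was minted with a long validity window. The audience value, the four-hour ceiling, and the PyJWT-based check are assumptions made for the example; they are not part of the MCP specification or any particular vendor's implementation.

```python
# Minimal sketch: gate an MCP-style tool endpoint behind OAuth bearer tokens
# and reject long-lived credentials, so a stolen token has a small blast radius.
import jwt  # PyJWT

MAX_TOKEN_LIFETIME = 4 * 3600                  # accept tokens valid for at most 4 hours (assumed policy)
EXPECTED_AUDIENCE = "mcp-tools.example.gov"    # hypothetical audience value

def authorize_request(bearer_token: str, public_key_pem: str) -> dict:
    """Validate an OAuth access token before the MCP server touches any data."""
    claims = jwt.decode(
        bearer_token,
        public_key_pem,
        algorithms=["RS256"],
        audience=EXPECTED_AUDIENCE,
        options={"require": ["exp", "iat"]},
    )  # raises jwt.InvalidTokenError on a bad signature, audience, or expiry

    # Reject tokens minted with a long validity window, even if they have not
    # expired yet; short-lived credentials shrink the attacker's window.
    lifetime = claims["exp"] - claims["iat"]
    if lifetime > MAX_TOKEN_LIFETIME:
        raise PermissionError(f"token lifetime {lifetime}s exceeds policy")

    return claims  # caller can now scope tool access to claims["sub"]
```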

This is where the concept of non-human identity comes into play. In traditional cybersecurity, identity and access management focuses on human users. But with AI systems, machines themselves act as intermediaries, accessing and processing data on behalf of humans. Topper argued that agencies must treat these systems as distinct identities, subject to the same rigorous controls as individuals. By issuing short-lived certificates—perhaps granting an AI system access for only a few hours—agencies can drastically reduce the window of opportunity for attackers. Persistent access becomes harder to exploit, and intrusions are easier to detect.
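
To make the non-human-identity idea concrete, here is a minimal sketch, assuming a Python environment with the cryptography library, of issuing an AI agent its own X.509 certificate that expires after a few hours. The agent name, the in-memory certificate authority, and the four-hour window are hypothetical values chosen for illustration.

```python
# Minimal sketch of treating an AI system as a distinct machine identity:
# issue it a short-lived X.509 certificate so persistent access cannot accrue.
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

def issue_agent_cert(ca_key, ca_name, hours_valid=4):
    """Issue a short-lived certificate for a non-human identity."""
    agent_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    now = datetime.datetime.now(datetime.timezone.utc)
    cert = (
        x509.CertificateBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "ai-triage-agent")]))
        .issuer_name(ca_name)
        .public_key(agent_key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        # The credential is useless to an attacker after a few hours.
        .not_valid_after(now + datetime.timedelta(hours=hours_valid))
        .sign(ca_key, hashes.SHA256())
    )
    return agent_key, cert

# Hypothetical in-memory CA, just so the sketch runs end to end.
ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
ca_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Agency Internal CA")])

agent_key, agent_cert = issue_agent_cert(ca_key, ca_name)
print(agent_cert.not_valid_after_utc)  # access stops working a few hours from now
```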

Despite these challenges, Topper expressed enthusiasm about AI’s potential as a defensive tool. Security operations centers (SOCs), he explained, are overwhelmed by alerts: tens of thousands per day, far more than analysts can review. Many of those alerts have historically gone unexamined, leaving potential threats undetected. AI models, however, can sift through massive datasets, recognize patterns, and prioritize alerts, reducing the burden on analysts. Instead of searching for a single “needle in a haystack,” AI can highlight the small number of anomalies that truly matter, giving analysts a fighting chance to respond.
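
A simple way to picture this triage is an anomaly scorer over alert features. The sketch below, which assumes scikit-learn and made-up alert attributes, ranks alerts so the most unusual ones surface first; it illustrates the general pattern rather than the specific tooling Topper described.

```python
# Minimal sketch of alert triage: instead of asking analysts to read tens of
# thousands of alerts, score them and surface the handful that look most
# anomalous. Feature choices and sample data are illustrative assumptions;
# a real SOC pipeline would pull these from its SIEM.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is one alert: [bytes transferred, failed logins, hour of day, distinct hosts touched]
alerts = np.array([
    [1_200,  0,  9,  1],
    [900,    1, 10,  1],
    [1_500,  0, 14,  2],
    [75_000, 6,  3, 14],   # exfiltration-like outlier
    [1_100,  0, 11,  1],
])

model = IsolationForest(contamination="auto", random_state=0).fit(alerts)
scores = model.score_samples(alerts)           # lower score = more anomalous

top_n = 2
priority = np.argsort(scores)[:top_n]          # indices of the most suspicious alerts
print("escalate alerts:", priority.tolist())   # analysts review these first
```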

Even more promising is AI’s ability to recommend countermeasures. By incorporating attack frameworks like MITRE ATT&CK, AI-driven tools can not only detect malicious activity but also suggest the most effective way to stop it. This type of automation moves agencies closer to real-time defense, where adversaries can be detected and neutralized before they cause significant harm.
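
One way to picture this step is a mapping from observed ATT&CK technique IDs to candidate mitigations, as in the sketch below. The table shown is a small, illustrative slice assumed for the example; a production system would draw on the full MITRE ATT&CK dataset and hand its recommendations to a SOAR playbook rather than printing them.

```python
# Minimal sketch of mapping detections to ATT&CK-informed countermeasures.
from dataclasses import dataclass

@dataclass
class Detection:
    host: str
    technique_id: str  # MITRE ATT&CK technique observed

# Illustrative subset of technique -> suggested mitigations.
COUNTERMEASURES = {
    "T1110": ["M1032 Multi-factor Authentication", "M1036 Account Use Policies"],  # Brute Force
    "T1078": ["M1032 Multi-factor Authentication", "M1027 Password Policies"],     # Valid Accounts
    "T1566": ["M1017 User Training", "M1049 Antivirus/Antimalware"],               # Phishing
}

def recommend(detection: Detection) -> list[str]:
    """Return suggested countermeasures for an observed technique."""
    return COUNTERMEASURES.get(detection.technique_id, ["escalate to analyst"])

alert = Detection(host="hr-file-01", technique_id="T1110")
print(f"{alert.host}: {', '.join(recommend(alert))}")
```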

Topper tempered his optimism with realism, noting that adversaries have access to the same AI capabilities. Attackers are already training models on massive datasets, including repositories like GitHub, to refine their techniques. Some of these models are poorly tuned, but others are becoming highly specialized for offensive operations. The result is a cat-and-mouse game where defenders must innovate just as quickly as attackers. In his view, AI will not eliminate this dynamic, but it can shift the balance if implemented thoughtfully.

For Topper, the future of AI in government cybersecurity will be defined by how well agencies balance innovation with caution. They must build security into AI systems from the ground up, recognize the risks of data exposure, and embrace new identity frameworks that account for non-human actors. At the same time, they must seize the opportunity to empower SOC analysts, automate defenses, and leverage AI’s speed and precision. Done right, AI could help agencies not only keep pace with adversaries but, in some cases, get ahead.

Key Takeaways

  • Sensitive government data can leak into AI systems if guardrails are not established, creating risks of exposure.

  • Non-human identity and short-lived certificates are emerging as critical tools for securing AI platforms.

  • AI has the potential to transform SOC operations by filtering alerts and recommending countermeasures in real time.
