The Hidden Danger of AI: Why Governance Will Decide the Outcome

Original Broadcast Date: 02/22/2026

Presented by Symantec by Broadcom, Carbon Black by Broadcom, & Carahsoft

Artificial intelligence is rapidly becoming embedded in government infrastructure, and Garrett Lee says that reality demands a new level of discipline around data governance.

In his conversation with Francis, Lee, Regional Vice President for Public Sector Sales at Broadcom’s Enterprise Security Group, frames AI as both inevitable and powerful. The U.S. government, he says, will absolutely use AI as a force multiplier. The caution is not about whether to adopt AI, but about how those systems interact with data.

Lee describes AI as a “risk multiplier.” Importantly, he does not argue that AI creates entirely new categories of cyber threats. Instead, he explains that AI amplifies existing threat vectors. The way data flows into models, how prompts are constructed, how outputs are generated, and which models are approved for use all expand the attack surface. In that sense, AI becomes part of the infrastructure itself—no longer a side tool, but a core operational component.

That shift has implications for governance. When AI becomes infrastructure, visibility becomes essential. Agencies must understand how data interacts with models at every stage. They need to know what information is fed into systems, what is produced as output, and whether those interactions align with policy. Lee emphasizes the importance of building guardrails and maintaining the ability to audit and tune those systems over time.

The stakes are high because of the nature of federal data. Lee points out that government systems manage vast amounts of sensitive information—classified material, controlled unclassified information (CUI), personally identifiable information (PII), protected health information (PHI), law enforcement data, and export-controlled data. That data is constantly flowing through federal environments. Decisions about how AI tools access and process it must be deliberate and carefully governed.

For Lee, data protection strategies cannot remain theoretical. They must operate in real time as agencies adopt AI. He outlines five key areas that enable safe AI adoption: identifying sensitive data, monitoring AI tool usage, enforcing data protection policies, maintaining compliance, and providing governance visibility. These elements, he explains, do not block innovation. They enable it.
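To make those five areas concrete, here is a minimal sketch in Python of how they might apply to a single prompt before it reaches a model. Everything in it is illustrative: the detection patterns, the model allowlist, and the function names are assumptions for the sake of the example, not a description of any Broadcom product or government system.

```python
import re
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-governance")

# Hypothetical detectors for two sensitive-data classes (toy patterns;
# real DLP engines use far richer detection than simple regexes).
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

APPROVED_MODELS = {"agency-approved-llm"}  # hypothetical allowlist


def govern_prompt(user: str, model: str, prompt: str) -> dict:
    """Run one prompt through the five areas: identify, monitor,
    enforce, record compliance, and surface governance visibility."""
    # 1. Identify sensitive data present in the prompt.
    findings = [name for name, pat in SENSITIVE_PATTERNS.items()
                if pat.search(prompt)]

    # 2. Monitor AI tool usage: every interaction is logged.
    log.info("user=%s model=%s findings=%s", user, model, findings)

    # 3. Enforce data protection policy: block unapproved models,
    #    redact detected sensitive values otherwise.
    if model not in APPROVED_MODELS:
        decision, outbound = "blocked: unapproved model", None
    elif findings:
        outbound = prompt
        for pat in SENSITIVE_PATTERNS.values():
            outbound = pat.sub("[REDACTED]", outbound)
        decision = "redacted"
    else:
        decision, outbound = "allowed", prompt

    # 4. Maintain compliance: keep an auditable record of the decision.
    record = {
        "time": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "model": model,
        "findings": findings,
        "decision": decision,
    }

    # 5. Governance visibility: records aggregate into reporting.
    log.info("audit=%s", json.dumps(record))
    return {"prompt": outbound, "record": record}


print(govern_prompt("analyst1", "agency-approved-llm",
                    "Summarize the case for SSN 123-45-6789"))
```

The point of the sketch is structural: each of the five areas appears as a distinct, auditable step, which is what makes the tuning and oversight Lee describes possible over time.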

A critical part of the challenge is understanding usage patterns. Lee says many organizations are surprised when they examine their AI adoption closely. Despite best efforts to control and secure usage, agencies often discover more AI activity—and more data interaction with models—than they expected.

That phenomenon leads to what Lee calls “shadow AI,” borrowing from the earlier concept of shadow IT. Just as employees once deployed servers or software outside official oversight, users today may experiment with AI tools without formal approval. Most of the resulting data exposure, he notes, is not malicious. It is inadvertent. Sensitive information may be entered into prompts without awareness of downstream consequences, or models may be trained on data that should never be exposed in outputs.
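Surfacing that kind of unapproved usage often starts with egress telemetry. The sketch below illustrates one simple approach under assumed inputs: compare outbound destinations against a catalog of known AI services and an agency allowlist. The log format and every domain name are hypothetical; real detection would draw on proxy, CASB, or endpoint data.

```python
# Minimal sketch: surface "shadow AI" by intersecting egress destinations
# with known AI services, minus the approved set. All domain names and
# the log format below are hypothetical placeholders.
KNOWN_AI_SERVICES = {
    "approved-llm.agency.gov",
    "free-chatbot.example.com",
    "code-helper.example.net",
}
APPROVED_AI_SERVICES = {"approved-llm.agency.gov"}

# Each record: (user, destination_domain) parsed from an egress log.
egress_log = [
    ("analyst1", "approved-llm.agency.gov"),
    ("analyst2", "free-chatbot.example.com"),
    ("analyst3", "intranet.agency.gov"),
    ("analyst3", "code-helper.example.net"),
]


def find_shadow_ai(records):
    """Return records that hit a known AI service outside the approved set."""
    return [(user, dest) for user, dest in records
            if dest in KNOWN_AI_SERVICES and dest not in APPROVED_AI_SERVICES]


for user, dest in find_shadow_ai(egress_log):
    print(f"possible shadow AI use: {user} -> {dest}")
```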

These risks, Lee stresses, are governable. Tools such as data loss prevention capabilities can help organizations monitor and control how data moves through AI systems. Real-time visibility and continuous monitoring allow agencies to tune policies and enforce protections as adoption grows.

He also makes clear that it is never too late to begin. Whether an agency is early in its AI journey or already experimenting at scale, implementing guardrails remains possible and necessary. A phased approach can move organizations toward what he describes as “secure by design” AI adoption—where data is consistently discovered, classified, analyzed in context, and governed in real time.
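That phased progression can be pictured as escalating enforcement modes. The fragment below is a hypothetical illustration, not any product's configuration: the detections stay constant across phases, while the action moves from observing to warning to enforcing.

```python
# Hypothetical phased-enforcement policy: the same detections apply in
# every phase, but the action escalates as governance matures.
PHASES = {
    "phase1_discover": "log_only",   # observe usage, build a baseline
    "phase2_advise":   "warn_user",  # coach users, tune detections
    "phase3_enforce":  "redact",     # enforce protections in real time
}


def action_for(phase: str, found_sensitive: bool) -> str:
    """Pick the enforcement action for a detection in the given phase."""
    if not found_sensitive:
        return "allow"
    return PHASES[phase]


for phase in PHASES:
    print(phase, "->", action_for(phase, found_sensitive=True))
```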

Leadership plays a role in making that happen. Lee highlights the importance of Chief Data Officers and other senior leaders in guiding safe AI adoption. These leaders help set policy, define acceptable models, and ensure that innovation does not erode public trust or amplify cyber risk.

Throughout the discussion, Lee returns to a central theme: AI and data are inseparable. If AI becomes embedded in infrastructure, governance must evolve alongside it. Agencies cannot treat AI as an experimental side project. They must treat it as part of the operational backbone, subject to the same scrutiny, oversight, and policy rigor as any other critical system.

The message is balanced and pragmatic. AI delivers real opportunity. It promises efficiency and insight at unprecedented scale. But without strong data governance—identification of sensitive information, monitoring of usage, enforcement of policies, and maintenance of compliance—the same systems that multiply force can multiply risk.

In Lee’s view, the path forward is clear. Innovation and protection are not opposing forces. With the right tools and leadership, agencies can build guardrails that allow AI adoption to accelerate responsibly. The goal is not to slow progress. It is to ensure that as AI becomes foundational infrastructure, trust, security, and compliance remain foundational as well.