Innovation

Innovation in Government:  Two Edges of the AI Sword

Written by Fed Gov Today | May 7, 2026 6:43:34 PM
Presented by Carahsoft

Artificial intelligence is reshaping every aspect of cybersecurity across government and industry. At the CyberSmart Summit presented by Carahsoft, federal, state, local, and industry leaders joined Francis Rose to discuss how agencies are adapting their cyber strategies to address rapidly evolving threats, modernizing zero trust architectures, protecting software supply chains, and preparing for an era where machines increasingly defend against machines. The conversations highlighted the growing importance of visibility, automation, resilience, AI governance, and mission-focused cybersecurity strategies across government operations.

Zero Trust, AI, and Mission Security at USDA

Tony Brannum, Chief Information Security Officer at the U.S. Department of Agriculture, discussed how USDA has spent years evolving its zero trust strategy through enterprise consolidation, modernization, and risk-based cybersecurity practices. Brannum explained that cybersecurity decisions at USDA are deeply tied to mission delivery, particularly around food supply chain security, agricultural inspection systems, SNAP benefits, and operational technology deployed across rural America.

Brannum emphasized that zero trust is no longer simply about user authentication or network access. The rise of supply chain risks, AI-enabled systems, and distributed IoT environments has fundamentally changed how agencies must approach trust and verification. USDA is now focused heavily on software bill of materials practices, continuous integration and continuous detection pipelines, and evaluating the trustworthiness of software before it enters operational environments.

AI adoption is simultaneously creating opportunities and introducing new risks. Brannum explained that while USDA wants to leverage AI for automation, threat hunting, and operational efficiencies, agencies must place guardrails around these systems. AI requires continuous re-authentication and ongoing monitoring because agencies cannot fully trust autonomous systems without oversight.

The discussion also focused on the changing risk landscape for USDA, particularly as food supply chains become increasingly digitized and interconnected. Brannum described how technologies such as automated grading systems, IoT farm sensors, and distributed agricultural infrastructure create new attack surfaces that did not previously exist. USDA must secure these systems while still supporting mission delivery and operational agility.

Brannum also highlighted the importance of balancing cybersecurity and operational functionality. He stressed that compliance alone is insufficient in today’s threat environment. Instead, agencies must make risk-based decisions that acknowledge perfect security is impossible while still protecting critical services and infrastructure.

Key Takeaways

  • USDA’s zero trust strategy is expanding beyond identity to include supply chain integrity, IoT security, and AI governance.
  • AI introduces both operational advantages and significant cybersecurity risks that require continuous monitoring and guardrails.
  • Mission delivery remains central to cybersecurity decision-making across USDA operations.

AI as a Cybersecurity Force Multiplier

Garrett Lee, Regional Vice President for Public Sector Sales at Broadcom Enterprise Security Group, explored how artificial intelligence is transforming cybersecurity operations for both defenders and attackers.

Lee described AI as a necessary force multiplier for cybersecurity teams, particularly security operations center analysts who must process massive amounts of telemetry, alerts, and network data. AI can dramatically improve threat correlation, incident summarization, response prioritization, and predictive analysis. By automating repetitive tasks and accelerating decision-making, AI allows analysts to focus on the most critical threats.

At the same time, Lee warned that adversaries are using the same technologies. AI-powered malware creation, phishing campaigns, and automated attack generation have lowered the technical barrier for cybercriminals. Sophisticated offensive capabilities are no longer limited to highly skilled nation-state actors. Smaller organizations and lower-level adversaries now have access to tools capable of launching advanced attacks.

Lee emphasized that AI amplifies existing risks rather than creating entirely new ones. The rapid adoption of AI across organizations introduces new attack surfaces, especially when agencies fail to secure the models, datasets, and infrastructure supporting AI systems. Data governance, AI trustworthiness, and access controls are becoming critical components of enterprise cybersecurity strategies.

The conversation also examined how organizations must secure both the AI systems themselves and the environments in which AI operates. Lee argued that zero trust principles apply directly to AI systems because agencies must continuously validate the integrity of the models, data, and access privileges associated with AI-driven operations.

Key Takeaways

  • AI significantly improves cybersecurity operations through automation, threat analysis, and faster decision-making.
  • Adversaries are using AI to democratize advanced cyberattacks and phishing campaigns.
  • Agencies must apply zero trust principles to AI systems and protect training data, models, and outputs.

    📺 Watch Full Interview

Visibility, AI Governance, and Defending Against Emerging Threats

Chris Usserman, Global Public Sector CTO at Infoblox, focused on the growing challenge of defending against AI-enabled cyber threats while agencies simultaneously accelerate AI adoption.

Usserman explained that threat actors are advancing faster than many organizations because attackers are less constrained by governance requirements, compliance standards, and operational oversight. Government agencies must move quickly enough to defend themselves while still maintaining appropriate governance, privacy protections, and accountability structures.

He argued that visibility is the foundational requirement for cybersecurity in an AI-driven environment. Organizations cannot secure systems they cannot see. AI tools are already embedded inside browsers, productivity suites, and enterprise software platforms, meaning many organizations are already using AI whether they formally recognize it or not.

Usserman warned that attackers increasingly leverage “living off the land” techniques, where they exploit trusted applications and enterprise tools rather than deploying obvious malware. AI-powered systems can potentially help attackers identify sensitive data, automate reconnaissance, and move laterally across environments using legitimate applications already deployed inside organizations.

The conversation also addressed the federal government’s push toward commercial technology adoption. Usserman explained that government agencies must define clear cybersecurity requirements while allowing industry partners to innovate rapidly. Government should focus on establishing security expectations while leveraging the agility and technical innovation of private sector providers.

He stressed the importance of collaboration among agencies, standards organizations like NIST, and private cybersecurity companies to continuously monitor evolving AI threats and translate emerging intelligence into operational protections.

Key Takeaways

  • Visibility across networks, applications, and AI systems is essential for modern cybersecurity operations.
  • AI is already embedded in many trusted enterprise applications, creating hidden security challenges.
  • Government and industry collaboration is critical to keeping pace with rapidly evolving AI-enabled threats.

📺 Watch Full Interview

Government Adoption of AI for Cyber Defense

Kevin Walsh, Director of IT and Cybersecurity at the Government Accountability Office, examined how federal agencies are adopting AI for cybersecurity operations and where challenges still remain.

Walsh explained that government adoption of AI varies widely across agencies. Some organizations are aggressively implementing AI-driven security capabilities, while others continue to struggle with outdated infrastructure, legacy systems, and slower modernization timelines.

He highlighted the paradox facing government agencies: federal organizations possess enormous volumes of valuable data that make them ideal candidates for AI implementation, but many agencies still operate aging systems that complicate integration and modernization efforts.

Walsh emphasized that AI is becoming essential for defending against modern cyber threats because adversaries are already leveraging AI at scale. Human analysts alone cannot keep pace with the volume and speed of AI-enabled attacks. Agencies must use AI to augment detection, vulnerability identification, incident response, and recovery operations.

He also discussed the importance of cross-government collaboration, encouraging agencies to share lessons learned through CIO councils, CISO councils, and peer networks. Agencies can accelerate their maturity by learning from successful early adopters rather than solving problems independently.

Workforce development remains another major priority. Walsh stressed that while AI can automate many technical processes, human oversight remains critical for ethical decision-making, accountability, and anomaly detection. He introduced the concept of “diligent skepticism,” encouraging cybersecurity professionals to maintain critical thinking even as AI-generated content becomes more convincing through deepfakes and synthetic media.

Key Takeaways

  • Federal agencies are adopting AI at different speeds depending on infrastructure readiness and modernization maturity.
  • AI is becoming essential for cyber defense because human analysts cannot manage modern attack volumes alone.
  • Human oversight and critical thinking remain vital despite increasing automation.

Managing AI Risk and Organizational Change

Richard Breakiron, Director of Strategic Initiatives at Commvault, discussed how organizations can successfully integrate AI while managing cybersecurity risks and operational change.

Breakiron emphasized that AI should be viewed as a tool rather than a completely separate category of technology. Organizations should apply many of the same governance, risk management, and operational frameworks used for other emerging technologies.

However, AI differs because of its scale, speed, and broad operational reach. Unlike traditional tools, AI systems can impact entire enterprises simultaneously. This creates greater organizational risk if governance and oversight are not properly implemented.

Breakiron encouraged agencies to adopt a phased “crawl, walk, run” approach to AI implementation. Organizations should begin with controlled deployments, monitor outcomes carefully, adjust based on lessons learned, and gradually expand capabilities over time. He warned against deploying AI broadly without fully understanding operational impacts and associated risks.

The discussion also explored how existing cybersecurity frameworks such as NIST, FedRAMP, and CMMC remain highly relevant in the AI era. Strong identity management, access controls, governance processes, and auditing remain foundational requirements even as organizations adopt advanced AI technologies.

Breakiron also addressed the challenge of hallucinations and autonomous AI behavior, emphasizing the need for clear boundaries, auditability, and human oversight when deploying agentic AI systems capable of making operational decisions.

Key Takeaways

  • Organizations should approach AI adoption incrementally using phased deployment strategies.
  • Existing cybersecurity frameworks remain highly valuable in managing AI risks.
  • AI systems require strong governance, auditing, and oversight to prevent unintended outcomes.

📺 Watch Full Interview

AI, Identity, and Continuous Cyber Resilience

Dr. Elizabeth Di Bene, Chief Information Security Officer for Loudoun County, Virginia, discussed how cybersecurity strategies must evolve from static protection models to continuous resilience and behavioral analysis.

Di Bene explained that software supply chain security increasingly depends on validating the integrity of development pipelines, builds, and deployment processes using cryptographic verification and continuous monitoring practices. Zero trust principles must now extend deeply into software supply chains.

She argued that cybersecurity has fundamentally shifted from perimeter defense and reactive monitoring to continuous resilience and operational awareness. AI changes the threat landscape because traditional signature-based defenses can no longer reliably detect adaptive or polymorphic threats.

Di Bene emphasized that organizations must instead focus on behavioral baselining, operational tempo analysis, and anomaly detection. AI systems can dynamically alter behaviors and signatures in real time, making traditional detection approaches increasingly ineffective.

The conversation also explored the growing importance of non-human identities. AI systems themselves effectively function as identities inside enterprise environments and require the same monitoring, governance, and access controls as human users.

Di Bene strongly advocated for automation-driven cybersecurity operations, explaining that machine-speed attacks require machine-speed defenses. Human analysts alone cannot respond quickly enough to AI-enabled attacks, making automated detection and response systems essential for future cyber defense strategies.

Key Takeaways

  • Cybersecurity is shifting from reactive defense to continuous resilience and behavioral analysis.
  • Signature-based defenses are becoming ineffective against adaptive AI-enabled threats.
  • Organizations must secure both human and non-human identities inside enterprise environments.

Continuous Verification and Software Supply Chain Security

David Olschewske, Strategic Account Manager at Forescout, discussed the next evolution of zero trust and the growing importance of software supply chain visibility.

Olschewske explained that zero trust is evolving from “never trust, always verify” to “never trust, continuously verify.” Organizations must constantly validate both users and devices because risk conditions can change dynamically over time.

He emphasized that users should increasingly be treated like endpoints, with organizations continuously monitoring behavior, access patterns, locations, and anomalies to identify suspicious activity. AI complicates this challenge because attackers can leverage AI-generated identities, synthetic behaviors, and automated tools to mimic legitimate users.

The conversation also highlighted the importance of software bill of materials practices. Olschewske argued that organizations must understand exactly what exists inside the software and firmware they deploy before connecting systems to operational networks. Supply chain validation is now essential because trusted vendors can still inadvertently introduce vulnerabilities or compromised components.

He used physical supply chain analogies to explain that organizations should not inherently trust delivered products without inspection and validation. The same principle applies to software, firmware, and digital infrastructure.

Key Takeaways

  • Zero trust is evolving toward continuous verification of users, devices, and behaviors.
  • AI-enabled threats make identity monitoring and behavioral analysis increasingly important.
  • Organizations must validate software and firmware integrity before deployment.

📺 Watch Full Interview

Securing First-Party and Third-Party AI Systems

George Kaminski, Manager of Securing Solutions Engineering at Cisco, examined the differences between securing externally sourced AI tools and internally developed AI systems.

Kaminski explained that organizations face two distinct AI security challenges. Third-party AI tools such as ChatGPT and other external platforms create data leakage and governance risks because organizations cannot fully control the underlying infrastructure or data handling practices. Agencies must establish guardrails governing which tools employees can access and what information users can share through external AI systems.

Internally developed first-party AI systems introduce a different set of risks, including prompt injection attacks, poisoned datasets, compromised plugins, and hallucinated outputs. Organizations must secure every stage of the AI lifecycle, including training data, runtime execution, integrations, and outputs.

Kaminski emphasized that organizations need tools capable of inspecting prompts, monitoring AI behavior, validating outputs, and scanning integrations before deployment. AI governance must include logging, auditing, rollback capabilities, and strict policy enforcement.

He also discussed the growing importance of agentic AI systems, which can autonomously take actions on behalf of users or organizations. While these systems offer operational advantages, they also require stronger guardrails and oversight mechanisms to prevent unintended actions or security failures.

Key Takeaways

  • Organizations must separately secure third-party AI tools and internally developed AI systems.
  • Prompt injection, poisoned datasets, and hallucinated outputs create major AI security risks.
  • Agentic AI requires strong governance, auditing, and operational guardrails.
📺 Watch Full Interview