Artificial intelligence is reshaping every aspect of cybersecurity across government and industry. At the CyberSmart Summit presented by Carahsoft, federal, state, local, and industry leaders joined Francis Rose to discuss how agencies are adapting their cyber strategies to address rapidly evolving threats, modernizing zero trust architectures, protecting software supply chains, and preparing for an era where machines increasingly defend against machines. The conversations highlighted the growing importance of visibility, automation, resilience, AI governance, and mission-focused cybersecurity strategies across government operations.
Brannum emphasized that zero trust is no longer simply about user authentication or network access. The rise of supply chain risks, AI-enabled systems, and distributed IoT environments has fundamentally changed how agencies must approach trust and verification. USDA is now focused heavily on software bill of materials (SBOM) practices, continuous integration and continuous delivery (CI/CD) pipelines, and evaluating the trustworthiness of software before it enters operational environments.
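The SBOM evaluation Brannum describes can be illustrated with a minimal sketch. This is not USDA's tooling; it assumes a CycloneDX-style JSON SBOM and a hypothetical deny-list of flagged components (in practice that list would come from vulnerability feeds such as the NVD):

```python
import json

# Hypothetical components flagged for review; illustration only.
FLAGGED = {("acme-logger", "1.2.0"), ("leftpad-ng", "0.9.1")}

def flag_components(sbom):
    """Check a parsed CycloneDX-style SBOM for flagged (name, version) pairs."""
    return [
        (c.get("name"), c.get("version"))
        for c in sbom.get("components", [])
        if (c.get("name"), c.get("version")) in FLAGGED
    ]

sbom = json.loads("""
{"components": [
  {"name": "acme-logger", "version": "1.2.0"},
  {"name": "requests", "version": "2.31.0"}
]}
""")
hits = flag_components(sbom)  # only the flagged logger is reported
```

A check like this would run as a CI/CD gate, failing the pipeline before a flagged component reaches an operational environment.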
AI adoption is simultaneously creating opportunities and introducing new risks. Brannum explained that while USDA wants to leverage AI for automation, threat hunting, and operational efficiencies, agencies must place guardrails around these systems. AI requires continuous re-authentication and ongoing monitoring because agencies cannot fully trust autonomous systems without oversight.
The discussion also focused on the changing risk landscape for USDA, particularly as food supply chains become increasingly digitized and interconnected. Brannum described how technologies such as automated grading systems, IoT farm sensors, and distributed agricultural infrastructure create new attack surfaces that did not previously exist. USDA must secure these systems while still supporting mission delivery and operational agility.
Brannum also highlighted the importance of balancing cybersecurity and operational functionality. He stressed that compliance alone is insufficient in today’s threat environment. Instead, agencies must make risk-based decisions that acknowledge perfect security is impossible while still protecting critical services and infrastructure.
Garrett Lee, Regional Vice President for Public Sector Sales at Broadcom Enterprise Security Group, explored how artificial intelligence is transforming cybersecurity operations for both defenders and attackers.
At the same time, Lee warned that adversaries are wielding the same AI technologies that defenders are adopting. AI-powered malware creation, phishing campaigns, and automated attack generation have lowered the technical barrier for cybercriminals. Sophisticated offensive capabilities are no longer limited to highly skilled nation-state actors. Smaller organizations and lower-level adversaries now have access to tools capable of launching advanced attacks.
Lee emphasized that AI amplifies existing risks rather than creating entirely new ones. The rapid adoption of AI across organizations introduces new attack surfaces, especially when agencies fail to secure the models, datasets, and infrastructure supporting AI systems. Data governance, AI trustworthiness, and access controls are becoming critical components of enterprise cybersecurity strategies.
The conversation also examined how organizations must secure both the AI systems themselves and the environments in which AI operates. Lee argued that zero trust principles apply directly to AI systems because agencies must continuously validate the integrity of the models, data, and access privileges associated with AI-driven operations.
Chris Usserman, Global Public Sector CTO at Infoblox, focused on the growing challenge of defending against AI-enabled cyber threats while agencies simultaneously accelerate AI adoption.
He argued that visibility is the foundational requirement for cybersecurity in an AI-driven environment. Organizations cannot secure systems they cannot see. AI tools are already embedded inside browsers, productivity suites, and enterprise software platforms, meaning many organizations are already using AI whether they formally recognize it or not.
Usserman warned that attackers increasingly leverage “living off the land” techniques, where they exploit trusted applications and enterprise tools rather than deploying obvious malware. AI-powered systems can potentially help attackers identify sensitive data, automate reconnaissance, and move laterally across environments using legitimate applications already deployed inside organizations.
The conversation also addressed the federal government’s push toward commercial technology adoption. Usserman explained that government agencies must define clear cybersecurity requirements while allowing industry partners to innovate rapidly. Government should focus on establishing security expectations while leveraging the agility and technical innovation of private sector providers.
He stressed the importance of collaboration among agencies, standards organizations like NIST, and private cybersecurity companies to continuously monitor evolving AI threats and translate emerging intelligence into operational protections.
Kevin Walsh, Director of IT and Cybersecurity at the Government Accountability Office, examined how federal agencies are adopting AI for cybersecurity operations and where challenges remain.
He highlighted the paradox facing government agencies: federal organizations possess enormous volumes of valuable data that make them ideal candidates for AI implementation, but many agencies still operate aging systems that complicate integration and modernization efforts.
Walsh emphasized that AI is becoming essential for defending against modern cyber threats because adversaries are already leveraging AI at scale. Human analysts alone cannot keep pace with the volume and speed of AI-enabled attacks. Agencies must use AI to augment detection, vulnerability identification, incident response, and recovery operations.
He also discussed the importance of cross-government collaboration, encouraging agencies to share lessons learned through CIO councils, CISO councils, and peer networks. Agencies can accelerate their maturity by learning from successful early adopters rather than solving problems independently.
Workforce development remains another major priority. Walsh stressed that while AI can automate many technical processes, human oversight remains critical for ethical decision-making, accountability, and anomaly detection. He introduced the concept of “diligent skepticism,” encouraging cybersecurity professionals to maintain critical thinking even as AI-generated content becomes more convincing through deepfakes and synthetic media.
Richard Breakiron, Director of Strategic Initiatives at Commvault, discussed how organizations can successfully integrate AI while managing cybersecurity risks and operational change.
Breakiron explained that AI differs from earlier technologies because of its scale, speed, and broad operational reach. Unlike traditional tools, AI systems can impact entire enterprises simultaneously. This creates greater organizational risk if governance and oversight are not properly implemented.
Breakiron encouraged agencies to adopt a phased “crawl, walk, run” approach to AI implementation. Organizations should begin with controlled deployments, monitor outcomes carefully, adjust based on lessons learned, and gradually expand capabilities over time. He warned against deploying AI broadly without fully understanding operational impacts and associated risks.
The discussion also explored how existing cybersecurity frameworks such as NIST, FedRAMP, and CMMC remain highly relevant in the AI era. Strong identity management, access controls, governance processes, and auditing remain foundational requirements even as organizations adopt advanced AI technologies.
Breakiron also addressed the challenge of hallucinations and autonomous AI behavior, emphasizing the need for clear boundaries, auditability, and human oversight when deploying agentic AI systems capable of making operational decisions.
Dr. Elizabeth Di Bene, Chief Information Security Officer for Loudoun County, Virginia, discussed how cybersecurity strategies must evolve from static protection models to continuous resilience and behavioral analysis.
She argued that cybersecurity has fundamentally shifted from perimeter defense and reactive monitoring to continuous resilience and operational awareness. AI changes the threat landscape because traditional signature-based defenses can no longer reliably detect adaptive or polymorphic threats.
Di Bene emphasized that organizations must instead focus on behavioral baselining, operational tempo analysis, and anomaly detection. AI systems can dynamically alter behaviors and signatures in real time, making traditional detection approaches increasingly ineffective.
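The behavioral baselining Di Bene describes can be reduced to a simple statistical idea: learn a baseline of normal activity, then flag observations that deviate from it. A minimal sketch, using a z-score test over a hypothetical metric such as requests per minute (real platforms use far richer models):

```python
import statistics

def is_anomalous(baseline, observation, threshold=3.0):
    """Flag an observation more than `threshold` standard deviations
    away from the mean of the behavioral baseline (a z-score test)."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    if stdev == 0:
        return observation != mean
    return abs(observation - mean) / stdev > threshold

# Hypothetical baseline: a service's normal requests per minute.
normal_rpm = [100, 105, 98, 102, 101]
```

The point of the behavioral approach is that it needs no signature: a polymorphic implant that changes its hash still has to *behave*, and that behavior shows up as a deviation from baseline.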
The conversation also explored the growing importance of non-human identities. AI systems themselves effectively function as identities inside enterprise environments and require the same monitoring, governance, and access controls as human users.
Di Bene strongly advocated for automation-driven cybersecurity operations, explaining that machine-speed attacks require machine-speed defenses. Human analysts alone cannot respond quickly enough to AI-enabled attacks, making automated detection and response systems essential for future cyber defense strategies.
Olschewske explained that zero trust is evolving from “never trust, always verify” to “never trust, continuously verify.” Organizations must constantly validate both users and devices because risk conditions can change dynamically over time.
He emphasized that users should increasingly be treated like endpoints, with organizations continuously monitoring behavior, access patterns, locations, and anomalies to identify suspicious activity. AI complicates this challenge because attackers can leverage AI-generated identities, synthetic behaviors, and automated tools to mimic legitimate users.
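Treating users like endpoints under continuous verification often comes down to a running risk score over live signals. A minimal sketch with hypothetical signals, weights, and thresholds (a production policy engine would tune these against observed incidents):

```python
# Hypothetical risk-signal weights; illustration only.
SIGNAL_WEIGHTS = {
    "new_device": 2,
    "unfamiliar_location": 3,
    "off_hours_access": 1,
    "failed_mfa_recently": 4,
}

REAUTH_THRESHOLD = 5  # assumed policy thresholds

def session_action(signals):
    """Continuously re-evaluate a session: return 'allow',
    'reauthenticate', or 'block' based on which risk signals fired."""
    score = sum(SIGNAL_WEIGHTS[s] for s in signals)
    if score >= 2 * REAUTH_THRESHOLD:
        return "block"
    if score >= REAUTH_THRESHOLD:
        return "reauthenticate"
    return "allow"
```

The key design choice is that the evaluation reruns as signals change mid-session, rather than once at login, which is what "never trust, continuously verify" demands.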
The conversation also highlighted the importance of software bill of materials practices. Olschewske argued that organizations must understand exactly what exists inside the software and firmware they deploy before connecting systems to operational networks. Supply chain validation is now essential because trusted vendors can still inadvertently introduce vulnerabilities or compromised components.
He used physical supply chain analogies to explain that organizations should not inherently trust delivered products without inspection and validation. The same principle applies to software, firmware, and digital infrastructure.
George Kaminski, Manager of Security Solutions Engineering at Cisco, examined the differences between securing externally sourced AI tools and internally developed AI systems.
Internally developed, first-party AI systems introduce a different set of risks than externally sourced tools, including prompt injection attacks, poisoned datasets, compromised plugins, and hallucinated outputs. Organizations must secure every stage of the AI lifecycle, including training data, runtime execution, integrations, and outputs.
Kaminski emphasized that organizations need tools capable of inspecting prompts, monitoring AI behavior, validating outputs, and scanning integrations before deployment. AI governance must include logging, auditing, rollback capabilities, and strict policy enforcement.
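The prompt-inspection step Kaminski mentions can be sketched as a pattern filter in front of the model. The patterns below are illustrative only; production prompt firewalls layer classifiers, canary tokens, and context checks on top of simple matching:

```python
import re

# Hypothetical injection signatures; illustration only.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"reveal (your )?system prompt",
    r"disregard (your )?guardrails",
]

def inspect_prompt(prompt):
    """Return the list of injection patterns matched in a user prompt
    (an empty list means the prompt passed this check)."""
    lowered = prompt.lower()
    return [p for p in INJECTION_PATTERNS if re.search(p, lowered)]
```

A gateway would log every match for auditing and either block the request or route it for human review, which is where the logging, rollback, and policy-enforcement requirements come in.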
He also discussed the growing importance of agentic AI systems, which can autonomously take actions on behalf of users or organizations. While these systems offer operational advantages, they also require stronger guardrails and oversight mechanisms to prevent unintended actions or security failures.