Presented by Broadcom & Carahsoft
Garrett Lee, Regional Vice President for Public Sector Sales at Broadcom Enterprise Security Group, explored how artificial intelligence is transforming cybersecurity operations for both defenders and attackers.
Lee described AI as a necessary force multiplier for cybersecurity teams, particularly security operations center analysts who must process massive amounts of telemetry, alerts, and network data. AI can dramatically improve threat correlation, incident summarization, response prioritization, and predictive analysis. By automating repetitive tasks and accelerating decision-making, AI allows analysts to focus on the most critical threats.
At the same time, Lee warned that adversaries are using the same technologies. AI-powered malware creation, phishing campaigns, and automated attack generation have lowered the technical barrier for cybercriminals. Sophisticated offensive capabilities are no longer limited to highly skilled nation-state actors. Smaller organizations and lower-level adversaries now have access to tools capable of launching advanced attacks.
Lee emphasized that AI amplifies existing risks rather than creating entirely new ones. The rapid adoption of AI across organizations introduces new attack surfaces, especially when agencies fail to secure the models, datasets, and infrastructure supporting AI systems. Data governance, AI trustworthiness, and access controls are becoming critical components of enterprise cybersecurity strategies.
The conversation also examined how organizations must secure both the AI systems themselves and the environments in which AI operates. Lee argued that zero trust principles apply directly to AI systems because agencies must continuously validate the integrity of the models, data, and access privileges associated with AI-driven operations.
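One concrete instance of the continuous-validation idea Lee describes is verifying that a model artifact matches a known-good hash before a service loads it, so tampered or swapped models are rejected. The sketch below is purely illustrative (the file name and manifest are invented for the example) and is not drawn from any specific Broadcom product:

```python
import hashlib
from pathlib import Path

# Hypothetical trusted manifest: file name -> expected SHA-256 hex digest,
# distributed out-of-band (e.g., signed release metadata).
TRUSTED_MANIFEST = {
    "threat-triage-model.bin": "<expected sha256 hex digest>",
}

def sha256_of(path: Path) -> str:
    """Stream the file in chunks and return its SHA-256 hex digest."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_model(path: Path) -> bool:
    """Return True only if the artifact is listed and its digest matches."""
    expected = TRUSTED_MANIFEST.get(path.name)
    return expected is not None and sha256_of(path) == expected
```

In a zero trust posture this check would run on every load, not just at install time, alongside access-control checks on who may read or replace the artifact.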
Key Takeaways
- AI significantly improves cybersecurity operations through automation, threat analysis, and faster decision-making.
- AI has democratized advanced cyberattacks, putting sophisticated attack and phishing capabilities within reach of lower-skilled adversaries.
- Agencies must apply zero trust principles to AI systems and protect training data, models, and outputs.
