Original broadcast 2/3/26
Presented by Microsoft
Bob Costello, Chief Information Officer at the Cybersecurity and Infrastructure Security Agency (CISA), and Steve Faehl, US Government Security Leader at Microsoft, discuss how agencies are approaching AI in cybersecurity through a practical, risk-driven lens. Their conversation highlights that agencies face multiple simultaneous challenges: using AI to improve cyber defense operations, securing AI systems so they function safely and correctly, and preparing for adversaries who are also using AI to become more effective.
Costello shares that CISA’s defensive cyber operations teams are becoming active users of AI-enabled solutions, rather than simply advising on them. He describes using AI in the authorization to operate process, specifically by leveraging AI tools to help write control statements and move systems into production faster. He also explains that AI is being applied to penetration testing work, supporting ongoing validation and continuous evaluation of systems. These examples reflect how AI can accelerate processes that traditionally involve heavy manual labor and long cycle times.
A key factor for CISA, Costello notes, is data readiness. CISA has large volumes of data across different mission areas, including infrastructure security, emergency communications, and cybersecurity operations. Some of these systems are more than 20 years old, and preparing them for AI use requires modernization. Costello emphasizes that modernizing systems without improving outcomes can actually make things worse if agencies simply accelerate flawed processes. He stresses that AI must be grounded in clear problem definitions and must be used where it truly fits the operational need.
Steve Faehl frames AI in cybersecurity using three categories: security with AI, security of AI, and security from AI. Security with AI focuses on using AI to improve cybersecurity operations and provide new defensive advantage. Security of AI focuses on securing AI systems themselves and ensuring they operate as intended. Security from AI focuses on the growing threat of adversaries using AI to increase the speed and sophistication of attacks.
A central point in the discussion is that AI can help shift cybersecurity toward defender advantage. Faehl explains that cybersecurity has long been asymmetric: attackers only need to find one vulnerability, while defenders must locate and address every vulnerability. AI can help reduce this asymmetry by improving detection and enabling defenders to proactively identify vulnerabilities with the same capabilities attackers may have. This shift, he argues, allows defenders to act earlier and reduce exposure before adversaries can exploit weaknesses.
Costello adds that a major challenge for defenders has been the “patch everything” approach, which is exhausting for operations teams and security personnel. He expresses optimism that AI-enabled tools can help security leaders answer critical questions such as whether their environments are actually vulnerable to a specific issue, and how to prioritize response when vulnerabilities are disclosed and threats evolve rapidly. He notes that vulnerabilities are exploited faster than ever, sometimes minutes or even seconds after disclosure, which makes prioritization essential.
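The prioritization questions Costello raises can be made concrete with a small sketch. This is an illustrative example, not CISA tooling: the CVE identifiers, inventory, and exploited-in-the-wild set below are all hypothetical, and the scoring weights are arbitrary. The idea is simply to rank disclosed vulnerabilities by whether the environment is actually exposed and whether exploitation has been observed, rather than treating everything as equally urgent.

```python
# Hypothetical sketch: rank disclosed CVEs by actual exposure and known
# exploitation instead of "patch everything". All data here is made up.

# Which assets in the environment actually run the affected software
installed = {
    "CVE-2024-0001": ["web-frontend", "api-gateway"],
    "CVE-2024-0002": [],  # disclosed, but nothing in this environment is affected
}

# CVEs observed being exploited in the wild (e.g. from a threat-intel feed)
known_exploited = {"CVE-2024-0001"}

def priority(cve: str) -> int:
    """Higher score = patch sooner. Weights are arbitrary for illustration."""
    score = 0
    if installed.get(cve):       # environment is actually vulnerable
        score += 2
    if cve in known_exploited:   # active exploitation observed
        score += 3
    return score

patch_queue = sorted(installed, key=priority, reverse=True)
```

A real implementation would draw the inventory from asset management data and the exploitation signal from a curated feed, but the core question is the same one Costello poses: "are we actually vulnerable, and is this being exploited right now?"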
Costello also describes how vulnerabilities that appear low criticality on their own can become severe when chained together. AI-enabled tools can help identify those chains and understand real risk more accurately. This kind of modeling and prioritization allows defenders to focus efforts where risk is highest, improving resource allocation and accelerating mitigation efforts without sacrificing rigor.
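The chaining idea can be sketched as a small graph problem. This is a toy model under assumed data, not a description of any real tool: the finding names and severity scores are invented, and edges represent "exploiting A gives the access needed to use B." Scoring the best chain from each starting point shows how three individually low findings can outrank any single high one.

```python
# Toy sketch of vulnerability chaining: score attack paths, not just
# individual findings. Names, scores, and edges are hypothetical.

severity = {"info_leak": 2, "ssrf": 3, "internal_admin": 4}

# enables[a] lists findings reachable once "a" is exploited
enables = {
    "info_leak": ["ssrf"],
    "ssrf": ["internal_admin"],
    "internal_admin": [],
}

def max_chain(v: str, seen: frozenset = frozenset()) -> int:
    """Best cumulative severity of any chain starting at finding v."""
    best = severity[v]
    for nxt in enables[v]:
        if nxt not in seen:  # avoid revisiting findings in a cycle
            best = max(best, severity[v] + max_chain(nxt, seen | {v}))
    return best

# Chained together, three "low" findings score 2 + 3 + 4 = 9,
# well above the worst single finding (4).
```

Real risk modeling would weight exploitability and asset criticality rather than just summing scores, but the structural point holds: risk lives in the paths, not only in the nodes.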
Another major concern raised is the improvement of phishing attacks through AI. Costello shares that he has seen phishing emails that look highly convincing, contain no language errors, and include personal details that make them appear legitimate. He notes that if trained professionals find these attacks convincing, average users are at even greater risk. This reinforces the importance of workforce training and awareness, and highlights how adversaries are using AI to increase the efficiency and believability of social engineering attacks.
Costello also describes how modern security telemetry can shift risk management from occasional reviews to near real-time visibility. He explains that organizations historically authorized systems for three years and then reassessed periodically, but the ability to synthesize modern telemetry could enable up-to-the-second risk decisions. For Costello, this represents an aspirational goal that cybersecurity leaders have pursued for a long time: continuously understanding what is normal, what is abnormal, and where risk is changing in real time.
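A minimal sketch of that "normal vs. abnormal" idea, assuming a single numeric telemetry signal: compare each new reading against a rolling baseline of recent readings instead of against a point-in-time assessment. The window size and threshold here are arbitrary illustration values, not recommendations.

```python
# Minimal sketch of continuous risk assessment: flag telemetry readings
# that fall far outside a rolling baseline of recent behavior.
# Window size and z-score threshold are arbitrary for illustration.
from collections import deque
from statistics import mean, stdev

window = deque(maxlen=50)  # rolling baseline of recent readings

def assess(value: float) -> str:
    """Classify a reading against the rolling baseline, then record it."""
    if len(window) >= 10 and stdev(window) > 0:
        z = (value - mean(window)) / stdev(window)
        status = "abnormal" if abs(z) > 3 else "normal"
    else:
        status = "baseline"  # still learning what normal looks like
    window.append(value)
    return status
```

Production anomaly detection is far richer than a z-score, but the shape is the same: the system carries a continuously updated model of "normal" and evaluates every new observation against it, rather than waiting for the next scheduled review.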
Both leaders emphasize that agencies must adopt a culture of experimentation to keep pace with evolving threats and tools. Costello stresses that leaders must support teams through testing and failure, ensuring employees know their leaders will back them if a pilot or initiative does not work. Faehl reinforces that AI reduces the cost and time required to test new defensive capabilities, and he offers a practical rule of thumb: if a tool cannot reach 80% value in about 20 minutes, agencies should consider moving on to something else.
Their shared perspective is that speed and iteration are essential, both in technology adoption and in defensive operations. By lowering the barrier to capability testing, AI allows agencies to move faster, reduce defender workload, raise the cost of attack, and shift the economics of cybersecurity in defenders' favor.
This segment was part of the program Mission Innovation: Becoming a Frontier Agency.