Mission Innovation

AI and Cybersecurity: Security With AI, Of AI, and From AI

Written by Fed Gov Today | Jan 21, 2026 12:13:00 AM

Original broadcast 2/3/26

Presented by Microsoft

Bob Costello, Chief Information Officer at the Cybersecurity and Infrastructure Security Agency (CISA), and Steve Faehl, US Government Security Leader at Microsoft, discuss how agencies are approaching AI in cybersecurity through a practical, risk-driven lens. Their conversation highlights that agencies face multiple simultaneous challenges: using AI to improve cyber defense operations, securing AI systems so they function safely and correctly, and preparing for adversaries who are also using AI to become more effective.

Costello explains that deploying AI tools inside CISA is uniquely complex because CISA’s workforce advises other organizations on cybersecurity best practices. He describes the responsibility of modeling secure adoption and using AI responsibly in ways that can be trusted by both internal teams and broader government partners. For Costello, success depends on more than the technology itself, and he introduces workforce readiness as a critical element. Training and educating people on how to use new tools effectively becomes essential to building safe adoption and real operational value.

Costello shares that CISA’s defensive cyber operations teams are becoming active users of AI-enabled solutions, rather than simply advising on them. He describes using AI in the authorization to operate process, specifically by leveraging AI tools to help write control statements and move systems into production faster. He also explains that AI is being applied to penetration testing work, supporting ongoing validation and continuous evaluation of systems. These examples reflect how AI can accelerate processes that traditionally involve heavy manual labor and long cycle times.

A key factor for CISA, Costello notes, is data readiness. CISA has large volumes of data across different mission areas, including infrastructure security, emergency communications, and cybersecurity operations. Some of these systems are more than 20 years old, and preparing them for AI use requires modernization. Costello emphasizes that modernizing systems without improving outcomes can actually make things worse if agencies simply accelerate flawed processes. He stresses that AI must be grounded in clear problem definitions and must be used where it truly fits the operational need.

Steve Faehl frames AI in cybersecurity using three categories: security with AI, security of AI, and security from AI. Security with AI focuses on using AI to improve cybersecurity operations and provide a new defensive advantage. Security of AI focuses on securing AI systems themselves and ensuring they operate as intended. Security from AI focuses on the growing threat of adversaries using AI to increase the speed and sophistication of attacks.

Faehl emphasizes that agencies should begin by clarifying which of these problems they are trying to solve. By establishing that focus, organizations can choose tools and approaches that fit their risk posture and mission environment. He also argues that cybersecurity urgently needs transformation, and that AI can deliver it by providing defenders with advantages they have struggled to achieve for years.

A central point in the discussion is that AI can help shift cybersecurity toward defender advantage. Faehl explains that cybersecurity has long been asymmetric: attackers only need to find one vulnerability, while defenders must locate and address them all. AI can help reduce this asymmetry by improving detection and enabling defenders to proactively identify vulnerabilities with the same capabilities attackers may have. This shift, he argues, allows defenders to act earlier and reduce exposure before adversaries can exploit weaknesses.

Costello adds that a major challenge for defenders has been the “patch everything” approach, which is exhausting for operations teams and security personnel. He expresses optimism that AI-enabled tools can help security leaders answer critical questions such as whether their environments are actually vulnerable to a specific issue, and how to prioritize response when vulnerabilities are disclosed and threats evolve rapidly. He notes that vulnerabilities are exploited faster than ever, sometimes minutes or even seconds after disclosure, which makes prioritization essential.
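The question Costello raises, whether an environment is actually vulnerable to a specific disclosure, can be illustrated with a minimal sketch. Everything here is invented for illustration (the inventory, package names, and versions are not from the broadcast): the idea is simply to match a disclosure against an asset inventory and rank the hits by exposure rather than patching everything.

```python
# Hypothetical sketch: instead of "patch everything," check whether a
# newly disclosed vulnerability actually applies to the environment,
# and rank the applicable hosts by exposure. All names, versions, and
# scores below are invented for illustration.
INVENTORY = {
    "web-frontend":  {"openssl": "3.0.1", "internet_facing": True},
    "batch-worker":  {"openssl": "3.0.1", "internet_facing": False},
    "legacy-report": {"openssl": "1.1.1", "internet_facing": False},
}

DISCLOSURE = {"package": "openssl", "affected": {"3.0.1"}, "severity": 7.5}

def prioritize(inventory, disclosure):
    """Return the hosts running an affected version, internet-facing first."""
    hits = [
        (name, attrs["internet_facing"])
        for name, attrs in inventory.items()
        if attrs.get(disclosure["package"]) in disclosure["affected"]
    ]
    # Internet-facing systems sort ahead of internal ones.
    return [name for name, facing in sorted(hits, key=lambda h: not h[1])]

print(prioritize(INVENTORY, DISCLOSURE))  # ['web-frontend', 'batch-worker']
```

In a real environment the inventory would come from asset-management telemetry and the disclosure from a vulnerability feed; the point of the sketch is only that applicability plus exposure, not raw severity alone, drives the response order.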

Costello also describes how vulnerabilities that appear low criticality on their own can become severe when chained together. AI-enabled tools can help identify those chains and understand real risk more accurately. This kind of modeling and prioritization allows defenders to focus efforts where risk is highest, improving resource allocation and accelerating mitigation efforts without sacrificing rigor.
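The chaining idea Costello describes can be sketched as a small graph search. This is a hedged illustration, not any tool CISA uses: each hypothetical vulnerability grants a capability when its precondition is already held, and a depth-first search shows how individually "medium" findings can combine into a path to full compromise.

```python
# Hypothetical sketch of vulnerability chaining: each finding grants a
# capability if its precondition is already held. The IDs, capabilities,
# and CVSS-like scores are invented for illustration.
VULNS = [
    # (id, precondition, capability gained, severity score)
    ("V1", "network_access", "user_shell", 4.3),
    ("V2", "user_shell", "local_admin", 5.0),
    ("V3", "local_admin", "domain_admin", 4.8),
]

def find_chain(start, goal, vulns):
    """Depth-first search for a chain of vulnerabilities leading from a
    starting capability to a goal capability."""
    def dfs(have, path):
        if goal in have:
            return path
        for vid, pre, gain, score in vulns:
            if pre in have and gain not in have:
                result = dfs(have | {gain}, path + [(vid, score)])
                if result:
                    return result
        return None
    return dfs({start}, [])

chain = find_chain("network_access", "domain_admin", VULNS)
# No single finding exceeds medium severity, yet the chain reaches
# domain admin -- the chain, not any one score, is the real risk.
print([vid for vid, _ in chain])      # ['V1', 'V2', 'V3']
print(max(score for _, score in chain))  # 5.0
```

The modeling Costello alludes to would operate over far richer graphs, but the mechanic is the same: scoring paths rather than isolated findings changes which fixes matter most.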

Another major concern raised is the improvement of phishing attacks through AI. Costello shares that he has seen phishing emails that look highly convincing, contain no language errors, and include personal details that make them appear legitimate. He notes that if trained professionals find these attacks convincing, average users are at even greater risk. This reinforces the importance of workforce training and awareness, and highlights how adversaries are using AI to increase the efficiency and believability of social engineering attacks.

The conversation also explores the importance of cybersecurity data and visibility. Faehl explains that defenders need to understand their own environments better than attackers do, but that has historically been difficult. He points to the growth of security telemetry and data availability, in part due to Zero Trust guidance and implementation efforts driven by organizations such as CISA and NIST. With more security data available, agencies have better potential to enumerate the problem space and assess risk more accurately. Faehl notes that traditional cybersecurity metrics can create a false sense of security, sometimes described as “watermelon green,” meaning green on the outside but red underneath. Better data, paired with AI, can help reveal hidden weaknesses and provide a more realistic assessment of posture.

Costello describes how this level of telemetry can shift risk management practices from occasional reviews to near real-time visibility. He explains that organizations historically authorized systems for three years and then reassessed periodically, but the ability to synthesize modern telemetry could enable up-to-the-second risk decisions. For Costello, this represents an aspirational goal that cybersecurity leaders have pursued for a long time: continuously understanding what is normal, what is abnormal, and where risk is changing in real time.
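The shift from periodic reauthorization to continuous visibility rests on comparing live telemetry against a learned baseline. A minimal sketch, assuming a single invented metric (failed logins per minute) and a simple rolling z-score, might look like this:

```python
import statistics

# Hypothetical sketch: a rolling baseline over one telemetry metric
# (e.g., failed logins per minute) flags deviation as it happens,
# rather than waiting for a periodic review. Window and threshold
# values are illustrative assumptions.
class RollingBaseline:
    def __init__(self, window=20, threshold=3.0):
        self.window = window        # samples kept in the baseline
        self.threshold = threshold  # z-score that counts as abnormal
        self.samples = []

    def observe(self, value):
        """Return True if value deviates sharply from the baseline,
        then fold it into the baseline for future comparisons."""
        abnormal = False
        if len(self.samples) >= 2:
            mean = statistics.mean(self.samples)
            stdev = statistics.stdev(self.samples) or 1e-9
            abnormal = abs(value - mean) / stdev > self.threshold
        self.samples.append(value)
        self.samples = self.samples[-self.window:]
        return abnormal

baseline = RollingBaseline()
normal_traffic = [10, 12, 11, 9, 10, 11, 12, 10, 9, 11]
flags = [baseline.observe(v) for v in normal_traffic]
spike = baseline.observe(90)  # sudden surge stands out immediately
print(any(flags), spike)      # False True
```

Real continuous-authorization pipelines synthesize many such signals across systems, but the core step is the same: knowing what normal looks like makes abnormal visible the moment it occurs.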

Both leaders emphasize that agencies must adopt a culture of experimentation to keep pace with evolving threats and tools. Costello stresses that leaders must support teams through testing and failure, ensuring employees know their leaders will back them if a pilot or initiative does not work. Faehl reinforces that AI reduces the cost and time required to test new defensive capabilities, and he offers a practical rule of thumb: if a tool cannot reach 80% value in about 20 minutes, agencies should consider moving on to something else.

Their shared perspective is that speed and iteration are essential, both in technology adoption and in defensive operations. By lowering the barrier to capability testing, AI allows agencies to move faster, reduce defender workload, increase attacker cost, and take real steps toward tilting the economics of cybersecurity in defenders' favor.

This segment was part of the program Mission Innovation: Becoming a Frontier Agency