March 4, 2025
Artificial intelligence (AI) is rapidly transforming how federal agencies operate, from bolstering cybersecurity to protecting critical infrastructure.
Tina Won Sherman, Director of Homeland Security and Justice Issues at GAO, shares her insights with Francis, shedding light on the challenges agencies face in assessing AI risks. While agencies are making progress, none have fully met the necessary standards for effective risk evaluation. As AI continues to evolve, addressing these shortcomings is crucial to ensuring national security and resilience.
The Department of Homeland Security (DHS), specifically the Cybersecurity and Infrastructure Security Agency (CISA), is responsible for coordinating AI risk assessments across 16 critical infrastructure sectors. These sectors include transportation, healthcare, energy, communications, and financial services, among others. While the federal government does not own most of this infrastructure, it plays a key role in safeguarding these sectors against cyber threats and other vulnerabilities.
Under a recent executive order, federal agencies overseeing these critical sectors were required to submit AI risk assessments to DHS. The goal? To understand how AI is being used, identify potential risks, and develop strategies to mitigate them. However, GAO’s review found that the assessments agencies submitted were incomplete and failed to fully address key areas of risk management.
GAO evaluated the agency-submitted assessments against six critical activities for effective AI risk assessment.
Sherman notes that while most agencies successfully documented their methodologies and identified AI use cases, none fully addressed all six elements. The biggest shortcoming? Not a single agency effectively evaluated the level of AI risk impact.
Sherman points to several reasons why federal agencies struggled with their assessments, including ambiguity in the guidance they received and the short timelines they faced.
GAO is calling on DHS to update and strengthen its guidance to ensure agencies conduct more comprehensive AI risk assessments in the future. Sherman emphasizes that even though AI technology and policies may evolve, the fundamental principles of risk assessment remain the same.
DHS has already begun refining its approach for the next round of risk assessments. The goal is to provide agencies with clearer instructions and better tools so they can evaluate risks more thoroughly and develop stronger mitigation strategies.
AI is already playing a vital role in securing critical infrastructure, but without proper risk assessment, federal agencies may be unknowingly exposing themselves to serious threats. Whether it’s a cyberattack on AI-driven security systems or bias in automated decision-making, failing to fully understand the risks could lead to significant consequences for national security and public trust.
Sherman highlights that risk management is a long-standing principle of critical infrastructure security, and agencies must take a proactive approach to AI risk assessment. This means not just identifying risks, but truly understanding their impact and mapping out clear solutions.
As DHS updates its guidance and agencies refine their approaches, AI risk assessments will likely improve. However, federal leaders must stay committed to continuous improvement. AI technology is advancing rapidly, and agencies need agile, adaptable risk management strategies to keep up.
For agencies looking to strengthen their AI security, GAO’s recommendations serve as a roadmap for better risk assessment practices. By addressing these gaps now, the federal government can ensure that AI remains a powerful tool for security—without becoming a security risk itself.
Sherman's full report and recommendations are available on GAO's website.