AI RISK ALERT: The Cybersecurity Gaps in Federal AI Systems – And What’s Next!

March 4, 2025

Subscribe and listen on Apple Podcasts, Spotify, or anytime at FedGovToday.com.

Artificial intelligence (AI) is rapidly transforming how federal agencies operate, from bolstering cybersecurity to protecting critical infrastructure. However, as agencies adopt AI solutions, the risks that come with these technologies must be properly assessed and managed. The Government Accountability Office (GAO) recently reviewed how federal agencies are handling AI risk assessments, and the results reveal significant gaps.

Tina Won Sherman, Director of Homeland Security and Justice Issues at GAO, shares her insights with Francis, shedding light on the challenges agencies face in assessing AI risks. While agencies are making progress, none have fully met the necessary standards for effective risk evaluation. As AI continues to evolve, addressing these shortcomings is crucial to ensuring national security and resilience.

Who Is Responsible for AI Risk Assessments?

The Department of Homeland Security (DHS), specifically the Cybersecurity and Infrastructure Security Agency (CISA), is responsible for coordinating AI risk assessments across 16 critical infrastructure sectors. These sectors include transportation, healthcare, energy, communications, and financial services, among others. While the federal government does not own most of this infrastructure, it plays a key role in safeguarding these sectors against cyber threats and other vulnerabilities.

Under a recent executive order, federal agencies overseeing these critical sectors were required to submit AI risk assessments to DHS. The goal? To understand how AI is being used, identify potential risks, and develop strategies to mitigate them. However, GAO’s review found that the assessments agencies submitted were incomplete and failed to fully address key areas of risk management.

The Six Key Elements of AI Risk Assessment

GAO evaluated the agency-submitted assessments against six critical activities for effective AI risk assessment (a short illustrative sketch follows the list):

  • Documenting an assessment methodology – Establishing a clear approach to risk evaluation.
  • Identifying AI use cases – Listing where and how AI is being applied.
  • Identifying potential risks – Understanding threats that AI applications might pose.
  • Outlining mitigation strategies – Proposing solutions to minimize identified risks.
  • Mapping mitigation strategies to risks – Connecting solutions directly to specific threats.
  • Evaluating the level of risk – Assessing the severity and likelihood of AI-related threats.
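
To make these six activities concrete, here is a minimal sketch in Python of how a submission could be tracked against the checklist, including a severity-times-likelihood rating, which is one common way to express "level of risk." The element names follow GAO's list above, but the data structures, scales, and example values are hypothetical illustrations, not drawn from the DHS template or GAO's methodology:

  from dataclasses import dataclass, field

  # The six GAO activities, as listed above.
  ELEMENTS = [
      "documented assessment methodology",
      "identified AI use cases",
      "identified potential risks",
      "outlined mitigation strategies",
      "mapped mitigations to risks",
      "evaluated level of risk",
  ]

  @dataclass
  class Risk:
      name: str
      severity: int        # hypothetical 1 (low) to 5 (high) scale
      likelihood: int      # hypothetical 1 (rare) to 5 (frequent) scale
      mitigations: list = field(default_factory=list)

      def score(self) -> int:
          # Severity times likelihood: one simple way to rate "level of risk".
          return self.severity * self.likelihood

  @dataclass
  class Assessment:
      sector: str
      covered: set         # which of the six ELEMENTS the submission addresses
      risks: list = field(default_factory=list)

      def gaps(self) -> list:
          return [e for e in ELEMENTS if e not in self.covered]

  # Example: a submission that documents a methodology, lists use cases and
  # risks, and proposes mitigations, but never maps mitigations to risks or
  # rates risk levels -- roughly the shortfall pattern GAO describes.
  assessment = Assessment(
      sector="energy",
      covered=set(ELEMENTS[:4]),
      risks=[Risk("poisoned training data", severity=4, likelihood=2,
                  mitigations=["data provenance checks"])],
  )
  print(assessment.gaps())    # ['mapped mitigations to risks', 'evaluated level of risk']
  print(assessment.risks[0].score())  # 8 on the hypothetical 1-25 scale

In this toy model, an assessment "fully addresses" the framework only when gaps() returns an empty list and every listed risk carries both a mitigation mapping and a score; per GAO's review, no agency submission cleared that bar.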

Sherman notes that while most agencies successfully documented their methodologies and identified AI use cases, none fully addressed all six elements. The biggest shortcoming? Not a single agency effectively evaluated the level of AI risk impact.

Why Are Agencies Falling Short?

Sherman points to several key reasons why federal agencies struggled with their assessments.

  1. A Tight Deadline
    Agencies were given only 90 days to develop their AI risk assessments. For some, this was their first time attempting such an evaluation, making it difficult to gather the necessary information in such a short timeframe.
  2. AI Is a Moving Target
    The fast-evolving nature of AI makes it challenging to identify all potential risks. New technologies and applications emerge rapidly, and agencies may not have the resources or expertise to stay ahead of evolving threats.
  3. Gaps in DHS Guidance
    DHS provided a template and guidance to help agencies complete their assessments, but GAO found that the guidance lacked critical details—especially regarding evaluating risk impact and mapping mitigation strategies to risks. As a result, agencies did not have clear instructions on how to complete these essential steps.

What Needs to Happen Next?

GAO is calling on DHS to update and strengthen its guidance to ensure agencies conduct more comprehensive AI risk assessments in the future. Sherman emphasizes that even though AI technology and policies may evolve, the fundamental principles of risk assessment remain the same.

DHS has already begun refining its approach for the next round of risk assessments. The goal is to provide agencies with clearer instructions and better tools so they can evaluate risks more thoroughly and develop stronger mitigation strategies.

AI is already playing a vital role in securing critical infrastructure, but without proper risk assessment, federal agencies may be unknowingly exposing themselves to serious threats. Whether it’s a cyberattack on AI-driven security systems or bias in automated decision-making, failing to fully understand the risks could lead to significant consequences for national security and public trust.

Sherman highlights that risk management is a long-standing principle of critical infrastructure security, and agencies must take a proactive approach to AI risk assessment. This means not just identifying risks, but truly understanding their impact and mapping out clear solutions.

Looking Ahead

As DHS updates its guidance and agencies refine their approaches, AI risk assessments will likely improve. However, federal leaders must stay committed to continuous improvement. AI technology is advancing rapidly, and agencies need agile, adaptable risk management strategies to keep up.

For agencies looking to strengthen their AI security, GAO’s recommendations serve as a roadmap for better risk assessment practices. By addressing these gaps now, the federal government can ensure that AI remains a powerful tool for security—without becoming a security risk itself.

You can find Sherman’s full report and recommendation here.


