Breaking Barriers in Federal Cybersecurity and AI

October 16, 2024

Presented by Cohesity


Streamlining Zero Trust at the DoD with New Standards and Tools

Les Call, Director of the Zero Trust Portfolio Management Office at the Department of Defense (DoD), outlines efforts to streamline the DoD's path to implementing Zero Trust by 2027. Current Red Team assessments, though valuable for perimeter security, fall short when evaluating modern Zero Trust environments. To address this, the DoD developed a repeatable, automated assessment process built around two new tools: the Zero Trust Readiness Assessment Tool (ZT RAT) and the Zero Trust Readiness Objectives Assessment Management (ZT ROAM) tool. These tools enable self-assessment and automate attack simulations to test Zero Trust environments more efficiently. Call emphasizes the importance of continuous evaluation, proofs of concept such as the Navy's Flank Speed, and aligning efforts with the DoD's broader Fulcrum initiative to meet the 2027 deadline.

Key Takeaways:

  1. The DoD developed the ZT RAT and ZT ROAM tools to automate and streamline Zero Trust assessments, reducing reliance on Red Teams and improving scalability.

  2. The Navy's early work on Zero Trust through its Flank Speed program serves as a successful model for other agencies to replicate.

  3. Agile methods and continuous assessments are crucial to achieving the 2027 Zero Trust implementation goal, aligned with the Fulcrum initiative led by the DoD.


Balancing AI Implementation and Compliance in Federal Agencies

Craig Martell, Chief Technology Officer (CTO) at Cohesity and former Chief Digital and AI Officer at the DoD, offers insight into the challenges federal agencies face in merging compliance requirements with AI implementation. He describes the difficulty of balancing compliance and execution, particularly in AI systems where getting the data right and performing accurate evaluations are crucial. Martell emphasizes that AI, being probabilistic by nature, is guaranteed to make errors; the acceptable margin of error, however, varies by use case. In less critical applications, such as online advertising, errors are tolerable, but in high-stakes domains such as telemedicine or autonomous vehicles, even a 3% error rate can be unacceptable. He advocates a tailored risk management approach that aligns with the specific requirements of each AI use case. Additionally, Martell stresses the importance of data resilience and permissions management: backing up data effectively and ensuring it complies with organizational access rules is key to both operational success and regulatory compliance.

Key Takeaways:

  1. Agencies must focus on breaking data silos, managing data permissions, and ensuring resilience to implement effective AI strategies.

  2. AI errors are inevitable, but managing risk means ensuring errors are acceptable based on the use case, especially for high-stakes applications.

  3. AI evaluation has become more complex with generative models, making it crucial to understand when AI gets it wrong and how to correct it effectively.

Federal Agencies Lay Solid Groundwork for AI Implementation

Kevin Walsh, Director of IT and Cybersecurity at GAO, discusses the progress federal agencies are making in implementing artificial intelligence (AI) under Executive Order 14110. Walsh notes that agencies such as the Office of Management and Budget (OMB), the General Services Administration (GSA), and the Office of Personnel Management (OPM) have met the early milestones outlined in the executive order, which covers more than 100 requirements. These early successes signal that the groundwork for AI integration across the government is being effectively laid. He notes that this progress is vital for long-term success and for positioning the government to adopt AI in ways that enhance operational efficiency. Walsh highlights the importance of focusing not only on the technology but also on recruiting the right talent to manage and make sense of the data AI systems rely on. However, he cautions that the more difficult challenges lie ahead, stressing that government agencies must adopt AI cautiously and thoughtfully, ensuring it is implemented securely and ethically to mitigate potential risks.

Key Takeaways:

  1. Agencies have successfully hit early benchmarks in AI implementation, creating a foundation for future adoption.

  2. Recruitment of AI talent and cross-agency collaboration are seen as crucial to ensuring the success of AI in government.

  3. Complex issues such as AI watermarking, system auditing, and ensuring AI safety and accountability still present significant challenges for future phases.
