Securing AI in Defense

Original broadcast 9/17

Presented by Carahsoft

Artificial intelligence is rapidly becoming a cornerstone of modernization efforts across the Department of Defense, but as with any transformative technology, its adoption must be approached with care. Steve Jacyna, Director at Carahsoft, argues that while AI presents unmatched opportunities for efficiency, analysis, and operational decision-making, its deployment in defense contexts must be guided by three core principles: strong identity controls, human oversight, and ethical safeguards.

Jacyna begins with identity management, the foundational pillar of any zero trust strategy. Just as human employees require strict authentication and access protocols, AI systems themselves must be tracked, authenticated, and governed within defense networks. Treating an AI model as though it were another user—subject to the same authentication, access privileges, and monitoring—ensures that these tools do not become entry points for malicious activity or inadvertently compromise sensitive data. Zero trust, Jacyna emphasizes, must extend beyond humans and encompass all machine learning models operating within an organization’s digital ecosystem.
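
To make the idea concrete, the sketch below shows what treating a model as "just another user" might look like in practice. It is a minimal, hypothetical illustration (the MachineIdentity class, authorize function, and scope names are invented for this example), not a description of any specific Carahsoft or Department of Defense system: the model's service identity must authenticate, its privileges are scoped to the minimum it needs, and every request is logged for monitoring.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch: an AI model registered as a first-class identity,
# subject to the same authentication, least-privilege scoping, and
# monitoring applied to human users in a zero trust environment.

@dataclass
class MachineIdentity:
    model_id: str                      # e.g. "threat-triage-model-v2"
    allowed_scopes: set[str]           # least-privilege resource scopes
    credentials_valid: bool = True     # rotated and attested out of band
    audit_log: list[str] = field(default_factory=list)

def authorize(identity: MachineIdentity, requested_scope: str) -> bool:
    """Authenticate and authorize one request from an AI system,
    logging the decision exactly as we would for a human user."""
    granted = identity.credentials_valid and requested_scope in identity.allowed_scopes
    identity.audit_log.append(
        f"{datetime.now(timezone.utc).isoformat()} "
        f"{identity.model_id} scope={requested_scope} granted={granted}"
    )
    return granted

# The model may read network telemetry and raise alerts, nothing more.
triage_model = MachineIdentity(
    model_id="threat-triage-model-v2",
    allowed_scopes={"telemetry:read", "alerts:write"},
)
print(authorize(triage_model, "telemetry:read"))    # True
print(authorize(triage_model, "weapons:control"))   # False: out of scope
```

In a real zero trust deployment, these checks would be enforced by the identity provider and policy engine rather than in application code, with short-lived credentials that are continuously re-verified; the point of the sketch is simply that the model gets no standing access a human user would not get.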

The second principle centers on the human element. AI is a powerful tool for automating tasks and augmenting decisions, but Jacyna is clear that it must not replace human judgment. In military and national security settings, the stakes are too high to delegate ultimate authority to algorithms. Whether analyzing cyber threats, flagging anomalies in network behavior, or providing insights to operators, AI should be thought of as a digital assistant. Humans remain the decision-makers, with AI accelerating their ability to process information but never supplanting their responsibility. Jacyna describes this as ensuring “an adult is always in the room,” ready to validate recommendations and apply context that an algorithm cannot.
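
The "adult in the room" idea is commonly implemented as a human-in-the-loop gate: the model can recommend an action, but nothing executes until an operator reviews the evidence and approves it. The sketch below is a generic, hypothetical illustration of that gate (the Recommendation fields and execute_with_oversight function are invented for this example), not a depiction of any fielded defense system.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch of a human-in-the-loop gate: the AI recommends,
# a human operator decides, and nothing executes automatically.

@dataclass
class Recommendation:
    summary: str        # what the model suggests doing
    confidence: float   # the model's own confidence score
    rationale: str      # evidence the operator can review

def execute_with_oversight(
    rec: Recommendation,
    operator_approves: Callable[[Recommendation], bool],
) -> str:
    """Act on a recommendation only after explicit human approval."""
    if operator_approves(rec):
        return f"EXECUTED (human-approved): {rec.summary}"
    return f"HELD: {rec.summary} (rejected or deferred by operator)"

# Example: the operator applies context the model lacks before anything runs.
rec = Recommendation(
    summary="Isolate host 10.0.4.12 from the network",
    confidence=0.87,
    rationale="Beaconing pattern resembles known command-and-control traffic",
)
# The lambda stands in for a real operator interface; here the human declines.
print(execute_with_oversight(rec, operator_approves=lambda r: False))
```

The key design choice is that the approval callback represents a real operator interface; the model's confidence score is shown to the human as context but is never used to bypass them.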

The third principle is ethics. In commercial spaces, ethical concerns around AI often focus on bias, fairness, and transparency. In defense, the scope broadens considerably to include military operations, advanced weaponry, and the integrity of democratic institutions. Jacyna stresses the need for strict policies to ensure AI tools are deployed responsibly, without breaking laws or compromising principles. This includes limiting how AI interacts with sensitive military systems, such as weapons platforms, and building safeguards into deployments to ensure compliance with ethical and legal frameworks. By embedding ethics into the DNA of AI deployments, defense organizations can gain trust from both their workforce and the public.

While these three principles form the backbone of Jacyna’s approach, he also acknowledges that none of this will be possible without a strong, well-prepared cybersecurity workforce. Recruiting and retaining skilled professionals is already one of the Department of Defense’s greatest challenges, and the introduction of AI only raises the stakes. Jacyna points to the education pipeline as the most effective way to meet this challenge. At the university level, cybersecurity programs must integrate AI and machine learning into their curricula, ensuring graduates are ready to step directly into defense roles. Certifications, internships, and partnerships with defense organizations can give students practical skills before they graduate.

But the pipeline should not start at the university level. Jacyna advocates for early engagement in K–12 education, where students can be introduced to cybersecurity as an exciting and prestigious career path. Just as children once dreamed of becoming pilots, astronauts, or firefighters, cybersecurity operators should be elevated in the public imagination. By presenting these roles as critical to national security, educators can inspire the next generation of cyber defenders.

Retention is equally important. Jacyna argues that positions in cybersecurity must be treated as elite, prestigious roles within the Department of Defense. By fostering communities of excellence, creating recognition programs, and offering opportunities for continued learning, the government can keep talented professionals engaged. Public–private exchange programs also have a role to play. By allowing professionals to rotate between government service and private sector roles, these programs not only broaden skill sets but also encourage the transfer of innovation across both environments.

Ultimately, Jacyna’s perspective reflects a balanced approach to AI adoption in defense. Technology alone cannot deliver success—it must be paired with secure architectures, human oversight, and a capable workforce. By embedding AI into zero trust strategies, preserving human decision-making, enforcing ethical policies, and investing in talent, the Department of Defense can harness the potential of AI while avoiding its pitfalls.

As the military continues to adapt to new threats in cyberspace and beyond, the importance of secure, responsible, and ethical AI will only grow. Carahsoft, under Jacyna’s leadership, is positioning itself as a partner to government organizations navigating this complex transformation. His message is clear: with the right safeguards and people in place, AI can be a powerful enabler of defense, but without them, it could become a liability.
