Building Confidence in AI: Empowering Government Workforces and Securing Infrastructure

Original broadcast 6/11/25

Presented by Intel & Carahsoft

As artificial intelligence continues to evolve, government agencies are increasingly tasked with implementing it in ways that are both operationally effective and culturally sustainable. Burnie Legette, Director of IoT Sales for AI at Intel, believes that doing so starts with something deceptively simple: trust. At the AI for Government Summit, Legette laid out a practical vision for how agencies can deploy AI not only to automate routine tasks but to enable innovation—without leaving their people or their security posture behind.

Legette framed the AI conversation through a human-centered lens. While much of the current discourse focuses on generative capabilities or model performance, he stressed that AI’s greatest value lies in its ability to free people from mundane, repetitive tasks. “AI is great at doing the routine work,” he said. “It’s like giving every employee an intern—someone to do the first draft, gather the background, and get the ball rolling.” But just as no one would publish an intern’s work without review, AI outputs still require human oversight. The goal is not replacement—it’s acceleration.

In federal environments, where labor shortages, burnout, and rising demands are common, this kind of augmentation can be transformative. Legette pointed to applications like virtual research assistants and document drafting tools as areas where AI is already proving its value. These tools help government workers handle growing workloads without sacrificing quality or compliance. But adoption, he noted, depends on employee confidence—and that’s where many agencies hit a wall.

The root of the challenge is cultural. While some employees are excited about the new tools, others remain hesitant. They may fear job loss or question the reliability of AI-generated content. To overcome this resistance, Legette advised agencies to reframe the conversation around empowerment. “You’re still the expert,” he said. “AI is here to support your judgment, not replace it.” By treating AI like a collaborative tool rather than a threat, agencies can foster a culture of experimentation, learning, and trust.

Beyond workforce readiness, Legette highlighted several infrastructure and governance challenges that must be addressed for AI to operate securely at scale. First among them is data protection. As AI systems ingest and process more sensitive information, the need for robust data governance becomes paramount. “Security is as important as horsepower,” he explained. “It’s not just about whether you have the compute power to run models—it’s about whether your data is secure, private, and properly accessed.”

To that end, Legette pointed to emerging practices like Zero Trust architecture and confidential computing. These approaches ensure that only authorized users can access specific data or models, and that sensitive operations are isolated from external threats. “Think of it like cybersecurity for your AI models,” he said. “Because just like data can be compromised, models can be tampered with too.”
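
To make the idea concrete, a Zero-Trust-style gate in application code might look like the minimal sketch below. The policy table, role names, and model names are hypothetical placeholders; in a real deployment the decision would come from an identity provider and a signed policy store rather than an in-memory dictionary.

    # Minimal sketch of a Zero-Trust-style authorization gate for model
    # access. MODEL_ACCESS_POLICY and all role/model names are
    # hypothetical, not a real Intel or agency API.
    MODEL_ACCESS_POLICY = {
        "doc-summarizer": {"analyst", "caseworker"},
        "benefits-triage": {"caseworker", "auditor"},
    }

    def authorize_model_call(user_role: str, model_name: str) -> bool:
        """Verify every request explicitly; nothing is trusted by
        default, even for callers already inside the perimeter."""
        allowed_roles = MODEL_ACCESS_POLICY.get(model_name, set())
        return user_role in allowed_roles

    # Deny by default, allow only explicit role/model pairs.
    assert authorize_model_call("analyst", "doc-summarizer")
    assert not authorize_model_call("analyst", "benefits-triage")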

Indeed, Legette warned that model integrity is becoming a serious concern. As adversaries grow more sophisticated, they’re beginning to target not just the networks that run AI, but the AI models themselves. Threats like weight manipulation, bias injection, and drift can undermine model accuracy and reliability. To address this, agencies need a model governance strategy that includes continuous testing, validation, and retraining.
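
As a rough illustration of what such a governance step could involve (not a description of Legette’s or Intel’s tooling), the sketch below pairs a tamper check on serialized weights with a periodic validation pass against a fixed holdout set. The model interface and the accuracy floor are assumptions made for the example.

    import hashlib

    def weights_digest(path: str) -> str:
        """SHA-256 of the serialized weight file; any byte-level
        tampering with the weights changes this value."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def passes_validation(model, holdout, min_accuracy=0.95) -> bool:
        """Re-score a fixed holdout set; a drop below the floor is a
        signal of drift or manipulation that should trigger review,
        retraining, or rollback. `model.predict` is a hypothetical
        interface standing in for the agency's actual stack."""
        correct = sum(model.predict(x) == y for x, y in holdout)
        return correct / len(holdout) >= min_accuracy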

Encouragingly, AI can also be used to monitor itself. Legette described projects where AI agents run in the background, tracking the performance of other AI models to detect when outputs deviate from expected results. This “agentic AI” approach allows agencies to identify and correct issues in real time, ensuring long-term reliability and alignment. In one current project, Legette is helping a partner use these software agents to measure model output accuracy, enabling continuous quality control without human intervention.
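
A stripped-down version of that pattern might look like the following: a lightweight agent keeps a rolling record of spot-check results for another model and raises a flag when agreement with expected outputs falls below a threshold. The class name, window size, and threshold are illustrative choices, not details from the project Legette described.

    from collections import deque

    class ModelMonitorAgent:
        """Illustrative 'agentic' monitor: watches another model's
        outputs and flags deviation from expected results."""

        def __init__(self, window: int = 100, alert_threshold: float = 0.90):
            self.recent = deque(maxlen=window)  # rolling pass/fail record
            self.alert_threshold = alert_threshold

        def observe(self, model_output, expected) -> None:
            """Record one spot check of the monitored model."""
            self.recent.append(model_output == expected)

        def healthy(self) -> bool:
            """True while rolling agreement stays above the threshold;
            a False return would route the model for revalidation."""
            if not self.recent:
                return True
            return sum(self.recent) / len(self.recent) >= self.alert_threshold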

He also addressed the often-misunderstood topic of infrastructure scale. Many agencies are unsure how much compute capacity they need to support AI. Some overestimate, others underestimate. Legette advised starting with a clear use case and scaling thoughtfully, always with an eye toward future readiness. He emphasized that over-provisioning resources can be as detrimental as under-resourcing—especially when agencies are under pressure to optimize budgets.

Ultimately, Legette believes the path forward for AI in government lies in pragmatism and partnership. Agencies must invest in both people and platforms. They must prioritize security without stalling innovation. And above all, they must focus on making AI usable, trustworthy, and responsive to the real problems their workforces are trying to solve.

“There’s a tendency to talk about AI like it’s magic,” he said. “But it’s not. It’s a tool. And like any tool, its value depends on how—and why—you use it.”

Key Takeaways:

  • AI boosts productivity by automating routine tasks, empowering employees—not replacing them.

  • Security and governance, including Zero Trust and model monitoring, are critical for trusted AI.

  • Continuous testing and agentic AI help ensure model accuracy and operational resilience.
