6/5/25
At the Carahsoft AI for Government Summit, leaders from across the federal technology ecosystem came together to discuss a turning point: artificial intelligence is no longer just an emerging concept or a collection of pilot projects—it’s becoming an operational imperative across government.
From secure implementation and clean data to decision-centric design and edge deployment, executives from Dell Technologies, Zoom, SAS Federal, Broadcom, Granicus, Salesforce and Palo Alto Networks offered a unified but nuanced perspective: the real promise of AI lies in its ability to augment human potential, accelerate mission outcomes, and reshape how government thinks about service delivery and risk.
From Pilots to Production
One of the clearest signals from the summit was that AI is moving from experimental to operational. Agencies are no longer asking if they should pursue AI initiatives—they’re asking how quickly they can deploy them securely and effectively.
Luke Norris, Vice President of AI Strategy at Granicus, put it plainly: “We’re seeing people move beyond the concept of it just being a proof of concept to really thinking about it as a scale technology.” He noted that the speed at which agencies are expected to deliver is shifting. “Days and weeks make a difference now, not years and months like we’re used to.”
This transition is driving the need for platforms that can adapt quickly and scale seamlessly—away from custom-built systems and toward interoperable, modular tools that can evolve alongside policy, mission scope, and citizen expectations.
Designing AI Backward: From Decision to Data
Several speakers emphasized that the key to impactful AI lies in designing from the decision backward—not from the algorithm forward. David Hendrie, Director of Artificial Intelligence at SAS Federal, explained, “If you start with the decision and then work back from that, that’s when you’re going to see the most success.”
This approach requires rethinking how data is organized, shared, and governed. It also demands tighter collaboration between mission owners and technologists. AI initiatives often flounder not for lack of compute power, but because of misalignment between technology capabilities and mission priorities.
Ken Rollins, AI Solution Architect at Dell Technologies, reinforced this point: “Once you figure out what is the outcome that you’re trying to achieve, then that basically determines where that data is going to be needed.” Rollins explained that this data-to-decision chain is also critical when deploying AI models at the edge, where latency and bandwidth are constrained.
The Hidden Barrier: Data Quality and Cultural Alignment
No matter how sophisticated the model, it’s only as good as the data it’s trained on. Karl Hermann, Head of US Public Sector AI at Zoom, highlighted data hygiene as a persistent issue: “Organizations don’t feel they have the correct governance in place and the data isn’t clean enough for them to move forward.”
But beyond the technical barriers, Hermann also pointed to a cultural gap. “The right people may not be asking the right questions,” he said, emphasizing that successful AI implementation requires not just skilled data scientists, but empowered business owners who understand how to frame problems in terms AI can solve.
That cultural alignment is often the missing link in scaling successful pilots into repeatable programs. Leaders must foster environments where AI is seen as a collaborative tool rather than a disruptive threat.
Security and Responsible Use as Prerequisites
As AI becomes more deeply embedded into federal workflows, the security of models, data, and decisions rises in importance. Garrett Lee, Public Sector CTO at Broadcom, warned, “Data itself is the target.” He argued that restrictive or unclear policies are creating bottlenecks and “holding back gains that could be made by using these tools.”
Wayne LeRiche, Director of Federal Engineering at Palo Alto Networks, echoed this concern and focused on the operational side of securing AI: “Whatever they do, they need to make sure they’re protecting those models, all the data they’re using, all the things they’re using to train it.”
LeRiche also noted how agencies are exploring vetted intermediary tools that review prompts before they are submitted to AI engines—a safeguard that adds security without sacrificing speed. “You’re not going to that native interface and cutting and pasting in things,” he said. “You’re going to a kind of intermediary that’s doing the vetting first.”
This layered approach to responsible AI is gaining traction, particularly in sensitive environments where the cost of a data leak or model misuse is extremely high.
Interoperability as the New Mandate
AI doesn’t exist in a vacuum—it lives in a complex digital ecosystem that must be flexible, responsive, and integrated. Rollins noted a growing spirit of cross-vendor collaboration in the industry: “We’re seeing a lot of collaboration that hasn’t historically happened in the past… now you can attack a problem in a number of different ways.”
This interoperability is not just a convenience—it’s a necessity. As federal agencies move toward AI-powered infrastructure, the ability to mix, match, and replace components without starting over is key to maintaining agility. Configurable, modular environments allow agencies to try new tools without committing to full-stack overhauls.
Trust as a Competitive Advantage
Whether it’s trusting the model, the data, the vendor, or the user, trust is emerging as the most valuable currency in AI adoption.
AI is no longer a standalone innovation topic. It’s woven into everything from customer service automation to threat detection, logistics, and public health. But without trust—among developers, operators, policymakers, and the public—even the best models will sit on the shelf.
That’s why secure design, clean data, responsible governance, and clear alignment with mission outcomes are so crucial. These aren’t merely best practices; they are requirements.
As Luke Norris summarized, this moment isn’t just about adoption. It’s about agency. “You can imagine what you want that solution to look like—and now, you can actually build it.”
Conclusion
The collective insights from the Carahsoft AI for Government Summit suggest that the federal AI journey is maturing. The focus is shifting from whether AI is possible to how it can be applied responsibly, securely, and at scale to deliver better outcomes.
From strategic design to cultural readiness, from data governance to edge deployment, industry leaders are not just offering tools—they’re helping to shape the frameworks that will define the next decade of digital government. And that future is being built now.
Industry Perspectives: