Presented by Carahsoft
At the AI for Government Summit, presented by Carahsoft, public and private sector leaders came together to discuss how artificial intelligence is transforming government operations. From cybersecurity to data infrastructure, from automation to agentic AI, this episode of Innovation in Government reveals the priorities, challenges, and breakthroughs shaping how agencies integrate AI to improve service delivery, increase efficiency, and secure critical systems.
Driving Mission Outcomes with AI Infrastructure
AI is only as effective as the foundation it runs on, and the federal government is learning quickly that infrastructure includes far more than just GPUs. As AI Program Executive at Carahsoft, Mike Adams explained, agencies must consider cloud architectures, data handling, and security protocols as key components of a successful AI ecosystem. He emphasized that "AI-ready" infrastructure must support both traditional and agentic AI, the latter requiring dramatically more compute power. More importantly, Adams underscored the human factor: successful AI deployments rely on skilled professionals who know what problems need solving and how to apply AI accordingly. The evolution of AI—from early machine learning to generative and now agentic and even physical AI—requires agencies to plan flexibly for the long term while moving fast enough to remain competitive on the global stage.
Key Takeaways:
- AI infrastructure includes cloud, data, security, and skilled personnel.
- Agentic AI demands exponentially more computing resources.
- Agencies must balance rapid deployment with future-proofing.
GSA’s Enterprise Chatbot Brings AI to Daily Workflows
At the General Services Administration, AI is being deployed not as a novelty, but as a practical tool to enhance everyday work. Chief AI Officer and Chief Data Scientist Zach Whitman described how GSA built and rolled out a chatbot across the agency to assist with routine tasks like document drafting and information retrieval. After piloting use cases and collecting user feedback, the bot now serves nearly a third of the agency's workforce, helping save time and improve productivity. Whitman noted that as users become more comfortable, their expectations and creativity grow—many are now asking for API access to plug AI tools directly into their workflows. This evolution has sparked efforts to expand foundational model support, evaluate accuracy across domains like procurement and HR, and develop agentic workflows. GSA’s success, Whitman said, is due in part to its culture of learning, its emphasis on monitoring and feedback, and its strong alignment between technology and mission.
Key Takeaways:
- GSA’s chatbot reduces drudge work and boosts employee productivity.
- User feedback has driven expansion into API access and agentic AI.
- Evaluation and monitoring ensure accuracy across mission domains.
Embedding AI in the Flow of Government Work
Bringing a decade of government experience to her role as an Industry Advisor at Salesforce, Mia Jordan shared how agencies can unlock the power of their existing data with AI—without starting from scratch. She emphasized that the data already used to process loans, grants, permits, and inspections can be made more actionable when AI is embedded directly into workflows. Rather than layering AI on top, Salesforce’s AgentForce embeds AI where work is already being done, enabling systems to surface relevant information, suggest next steps, and support decisions in real time. Jordan described how Salesforce’s Data Cloud reaches into scattered data repositories and pulls contextually relevant insights directly into active cases. She also cautioned against building DIY AI tools from scratch, advocating instead for solutions that respect metadata, roles, and compliance frameworks like FedRAMP High. The key, she argued, is deploying generative AI with context, compliance, and control in mind.
Key Takeaways:
- Agencies already have the data they need—AI makes it actionable.
- Embedding AI in workflows improves efficiency and decision-making.
- Context-aware, compliant tools enable responsible GenAI deployment.
Accelerating Results Through Data and Urgency
Todd Schroeder, Vice President of Public Sector at Databricks, shared how successful AI adoption in government starts not with technology, but with urgency and leadership. Increasingly, it’s business and mission leaders—not just technologists—who are driving the push to use AI for better results. Schroeder pointed out that years of cloud sprawl and isolated software deployments have created data silos across agencies, making it difficult to extract enterprise insights. Agencies that are progressing in AI, he said, are those prioritizing open data architectures and reducing complexity, rather than layering on more tools. He also underscored that AI demands a new development mindset: whereas software projects used to take months, AI models must iterate in days or even hours. That requires fail-fast approaches, sound governance, and trusted partnerships. When agencies embrace that pace—while maintaining transparency, auditability, and security—AI can deliver rapid and measurable improvements to mission performance.
Key Takeaways:
- Business leaders are catalyzing AI adoption for mission results.
- Simplified, open data infrastructure enables scalable AI.
- Rapid iteration with strong governance is key to AI success.
Mission-Driven AI for Agriculture and Security
At the U.S. Department of Agriculture, Information Technology Manager Rudolf Rojas described how AI is helping the agency tackle both security threats and operational inefficiencies. USDA handles highly sensitive agricultural data used in financial markets, and safeguarding that information is a top priority. Rojas explained how AI tools are being used to create dashboards that flag intrusion attempts and monitor international access patterns—essential for preventing early leaks of data that could impact the Chicago Board of Trade or Wall Street. On the mission side, USDA is using AI to support farmers, such as in a Florida pilot where unmanned aerial vehicles capture images of citrus groves to detect signs of disease. These images are fed into AI models that help identify areas at risk of blight without the need for manual inspection. Additionally, Rojas shared how AI is streamlining procurement—helping to sort through dozens of contractor applications more efficiently and identify qualified partners. With funding and staffing tight, he emphasized that AI is essential to “do more with less” while maintaining mission integrity.
Key Takeaways:
- AI helps USDA safeguard sensitive agricultural data from global threats.
- Geospatial AI improves agricultural health monitoring and saves resources.
- AI-driven procurement tools increase efficiency in contractor evaluation.
Scaling AI Through Partnership and Strategy
Felipe Millon of OpenAI detailed how public sector organizations are transitioning from pilot programs to scaled implementations of AI, and how strong partnerships make the difference. Citing the Commonwealth of Pennsylvania’s success using ChatGPT Enterprise to save eight hours per week per employee, Millon highlighted how early adopters are demonstrating real value. He emphasized that scaling requires both top-down strategy and bottom-up enablement, and pointed to barriers like cybersecurity authorization and procurement as persistent challenges. OpenAI’s approach includes both secure enterprise offerings and tools like ChatGPT Gov, built specifically for secure use in defense and intelligence environments. Millon stressed that waiting too long to start is not a viable strategy, given the rapid evolution of reasoning models and new capabilities. Instead, he encouraged agencies to begin using tools today and grow with the technology.
Key Takeaways:
- Successful scale requires both strategic vision and user enablement.
- Authorization and procurement remain hurdles but can be overcome.
- AI maturity accelerates when agencies start using tools early.
Delivering AI Outcomes at Scale
At Intel, Burnie Legette, Director of IoT Sales for AI, explained how AI is delivering real value by automating routine tasks like document creation and research, freeing government employees for more creative work. He described AI as a digital intern that supports—not replaces—staff, while noting that trust in AI-generated output is still a barrier for some. Legette also emphasized the importance of secure infrastructure, from confidential computing to Zero Trust models, and warned about threats to AI models themselves, including tampering and bias drift. Ongoing monitoring, testing, and model validation are essential. He described how some agencies are already using AI to monitor their own AI tools, employing agentic software agents to ensure models remain accurate and secure.
Key Takeaways:
- AI increases efficiency by offloading routine tasks.
- Security and governance are critical for trusted AI deployment.
- Continuous monitoring ensures AI model accuracy and resilience.
Mindful AI Infrastructure Planning for Long-Term Success
Dave Hinchman, Director of IT and Cybersecurity at the Government Accountability Office, emphasized the need for mindfulness in planning AI infrastructure. Based on decades of audits, Hinchman explained that agencies too often fail to think through the full lifecycle of technology deployments. With AI, the stakes are higher: it will fundamentally alter how government delivers services, and its energy and infrastructure demands are immense. Hinchman encouraged agencies to focus on mission impact, to choose the right partners, and to begin laying the infrastructure groundwork now. He also pointed out the regulatory risks of moving too quickly, drawing parallels with the fragmented cyber regulation landscape and warning against making the same mistakes with AI governance.
Key Takeaways:
- AI and cloud must be planned with long-term mission impact in mind.
- Poor infrastructure choices risk failure, inefficiency, or waste.
- Proactive governance and smart partnerships are essential for success.
Cybersecurity at the Edge with Agentic AI
Paul Tierney, Senior Vice President of Public Sector Sales at Dataminr, described how artificial intelligence is revolutionizing cybersecurity across government. Rather than responding to threats after they penetrate networks, AI enables agencies to detect vulnerabilities in the public domain—before they reach agency systems. AI tools now analyze open data sources, forums, and threat signals in real time, shifting security postures from reactive to proactive. Tierney highlighted how AI also improves visibility into third-party risk, allowing agencies to identify vendor vulnerabilities before the vendors themselves do. He explained how agentic AI—intelligent agents that behave autonomously—is making cybersecurity more contextual, efficient, and actionable. These agents act like human analysts, gathering relevant intel and providing full situational awareness. In a fast-changing threat environment, Tierney argued, government needs to prioritize proprietary AI capabilities and trusted partnerships to stay ahead.
Key Takeaways:
- AI shifts cybersecurity from reactive to proactive.
- Agentic AI agents provide real-time context and threat insights.
- Proprietary AI tools enhance security, accuracy, and trust.
Efficient, Secure AI Pipelines for Government
Randy Hayes, Vice President of Public Sector at VAST Data, emphasized the importance of working with partners who have real experience building AI pipelines. With AI infrastructure becoming more complex, Hayes warned against relying on vendors that offer buzzwords but little delivery. Instead, he advocated for platforms that consolidate storage, analytics, and AI functions into a single architecture, allowing agencies to reduce costs, simplify management, and increase efficiency. He pointed out that staff constraints across government mean fewer people are being asked to manage more systems, and purpose-built AI platforms can lighten the load. Hayes stressed that with funding tight and pressure increasing, agencies need infrastructure solutions that work the first time, not through trial and error.
Key Takeaways:
- AI success depends on partners with real pipeline experience.
- Consolidated platforms simplify operations and reduce costs.
- Staff shortages make scalable, efficient AI infrastructure essential.