Podcast

The $1 AI Revolution: What Government’s Next Big Move Means

Written by Fed Gov Today | Aug 18, 2025 5:39:59 PM

August 19, 2025

Subscribe and listen to the Fed Gov Today Podcast anytime on Apple Podcasts, Spotify, or at FedGovToday.com.

Artificial intelligence in government is entering a new phase—one where access is cheaper, experimentation is encouraged, and the stakes are higher than ever. Former Joint Artificial Intelligence Center (JAIC) Director Lt. Gen. Jack Shanahan, USAF (Ret.), explains that recent agreements from the General Services Administration (GSA) mark a turning point. For just $1 per department or agency, organizations across government can begin using advanced AI tools from companies like OpenAI and Anthropic.

Shanahan sees this development as a big deal. He divides the current landscape into two tracks: large-scale Defense Department contracts worth hundreds of millions of dollars, and these newly announced government-wide agreements that put powerful AI tools into agencies’ hands at almost no cost. For him, the second track is where the real opportunity lies. Agencies can test, experiment, and figure out what works before committing to operational use. He calls this the “minimum viable product” approach—getting tools into the hands of users quickly so they can shape how AI best supports their missions.

But Shanahan also adds a note of caution. Experimentation is essential, he says, but AI shouldn’t yet be trusted in life-or-death scenarios, especially in defense and intelligence. The models available today weren’t trained on the sensitive data that those communities rely on. To be truly effective, they need tuning and adaptation for specific government missions. That means additional training, tighter controls, and thorough evaluation.

Evaluation, in fact, is one of Shanahan’s biggest priorities. While companies tout their own testing and red-teaming processes, he insists that government must “trust but verify.” Agencies should conduct their own independent assessments to ensure AI tools work as advertised. For him, test and evaluation are non-negotiable, especially when failures could carry national security consequences.

Shanahan also raises concerns about the broader industrial base. Big tech companies have the advantage of money, computing power, and talent. Smaller firms—those the government often says it wants to support—can’t always afford to give away their technology or services, even temporarily. This creates an uneven playing field, where frontier AI models are dominated by only a handful of giants. The smaller players may need to find niches or partner with bigger companies to stay relevant.

Another risk is vendor lock-in. Shanahan points out the obvious: today’s $1 deal could become tomorrow’s six-figure contract. It’s the classic strategy: offer services at minimal cost until agencies grow dependent, then raise prices once switching becomes difficult. He urges government leaders to prepare now by building flexibility into contracts and avoiding long-term commitments that are hard to exit.

Looking ahead, Shanahan is intrigued by the possibility of disruptive newcomers. He recalls how China’s DeepSeek briefly grabbed attention, showing that alternatives to the scale-driven AI race are possible. Even if foreign models aren’t appropriate for U.S. government use, the lesson is clear: innovation can come from unexpected places. Agencies must stay agile, open to new methods, and ready to pivot quickly.