
The Pentagon’s AI Gamble: Can Automation Save U.S. Weapons Testing?

Written by Fed Gov Today

June 26, 2025

Subscribe and listen to the Fed Gov Today Podcast anytime on Apple Podcasts, Spotify, or at FedGovToday.com.

Doug Schmidt, former Director of the Office of Operational Test and Evaluation (DOT&E) at the Department of Defense, and now Dean of the School of Computing, Data Sciences, and Physics at William & Mary, shares a candid look at the rapidly changing landscape of weapons testing in the military. As the Pentagon downsizes the DOT&E staff and explores new ways to deliver oversight, Schmidt explains the opportunities and pitfalls that lie ahead.

At the heart of the conversation is a sweeping new directive from the Secretary of Defense that trims the DOT&E to a core team of just 30 civilians, 15 military personnel, and one senior executive. Schmidt says the intent is to streamline operations, eliminate redundancy, and boost efficiency—but warns that this reduction could come at the cost of effective oversight. In particular, he highlights the increasing reliance on rapid capability systems, like drones and autonomous platforms, which are often developed by non-traditional contractors. These systems lack standardized testing environments and are rarely evaluated under operationally realistic conditions.

Schmidt points out that integration is a major concern. While contractors may test their systems in isolation, few have the capability or relationships needed to conduct joint-force-level evaluations. “Very few people have all the parts of the elephant,” he says. Without coordinated testing across systems and domains, the Department risks fielding technologies that look good on paper but fall short in real-world conflict.

Automation and artificial intelligence are often held up as answers to the shrinking workforce, but Schmidt is cautious. He champions responsible innovation and believes AI can play a key role in modernizing testing. However, he stresses that current AI tools are not mature enough to replace human expertise. Drawing on his experience, Schmidt notes that while concepts like digital twins are appealing, building simulations with the fidelity required for national defense is often more expensive than physical testing.

The downsizing isn’t limited to federal staff—it also affects support contractors like the Institute for Defense Analyses, which hold much of the institutional memory of past testing efforts. Schmidt argues that the Department must invest heavily in science and technology if it wants automation to succeed. Without proper validation, he says, AI could generate misleading data and expose warfighters to unnecessary risk.

Despite the challenges, Schmidt sees potential in collaboration. He highlights promising research at Army and Navy testing facilities and points to national labs and federally funded research centers as key players in building a more integrated, future-ready testing ecosystem. The key, he says, is breaking down silos and fostering partnerships between government, academia, and industry.

As Schmidt turns his attention to educating the next generation, he warns of a global race in AI literacy. He stresses the need to train a workforce that is both technologically adept and grounded in foundational knowledge. “The augmented,” as he calls them, are the future leaders—those who can merge deep expertise with smart use of AI tools.

In a time of rapid change, Schmidt makes one thing clear: innovation must be balanced with caution, collaboration, and a deep commitment to getting it right.