‘Lawless AI’ Is Spreading Across Government And Bad Data Is Fueling It

Original Broadcast Date: 04/12/2026

Presented by Workday Government

Artificial intelligence is rapidly expanding across the federal government, but according to Cliff Purkey of Workday Government, the biggest challenge isn’t the technology itself—it’s the data behind it.

Speaking with Francis Rose on Fed Gov Today, Purkey describes a growing concern he calls “lawless AI.” The term refers to AI systems producing unreliable or unintended outcomes because they rely on poor-quality or ungoverned data. As more agencies adopt AI tools, he emphasizes that the integrity of the underlying data becomes the defining factor in whether those tools succeed or fail.

Purkey explains that many people have already experienced this issue firsthand. When users input a prompt into an AI system and receive incorrect or misleading information, it often traces back to the data the system has access to. If that data is inconsistent, incomplete, or poorly governed, the outputs cannot be trusted.

That challenge is becoming more urgent as AI adoption accelerates. New data from the Office of Management and Budget shows there are now more than 3,500 AI use cases across the federal government—more than double the number from the previous year. While that growth reflects strong interest and investment, it also increases the risk of scaling flawed systems.

For Purkey, the solution begins with establishing clean, reliable data. He draws a connection between data quality and repeatable processes, noting that well-managed data enables consistency. When systems operate with standardized, governed data, outcomes become easier to verify and trust.

This consistency has practical benefits for government employees. Purkey explains that when AI systems are built on strong data foundations, they can take over routine, repetitive tasks. That automation frees up time for employees to focus on more complex work that requires human judgment. In this model, AI supports—not replaces—human decision-making.

However, achieving that level of reliability requires more than simply adding AI to existing systems. Purkey cautions against a common approach he sees in agencies: layering new AI capabilities on top of legacy systems. While this may seem like a faster or less disruptive path, it often introduces additional complexity and fails to address the root problem.

He points out that many legacy systems have been in place for decades and were not designed to support modern data needs. In some cases, agencies attempt to connect multiple systems that contain overlapping or inconsistent data. This creates confusion, making it difficult for AI to interpret information accurately.

Purkey also questions the logic of keeping older systems in place in the name of avoiding vendor lock-in. In trying to avoid dependence on new vendors, he suggests, agencies may in effect be locking themselves into outdated technologies that limit progress.

In contrast, he highlights the benefits of integrated, enterprise-level systems where data is managed within a single environment. In these systems, data is governed consistently across functions such as human resources, finance, and payroll. This unified approach ensures that AI interacts with the same data in the same way every time, reducing the likelihood of errors.

Another key advantage of these systems is transparency. Purkey explains that modern platforms can provide what he describes as an “always-on audit” capability. This allows organizations to track not only what actions were taken, but also when they occurred, why they happened, and what rules were in place at the time. This level of visibility is especially important in government, where accountability and oversight are critical.
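The "always-on audit" idea Purkey describes — recording what happened, when, why, and under which rules — can be sketched as an append-only log. This is a minimal illustration in Python, not Workday's implementation; the field names and sample record are assumptions for the sake of the example.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import List

# Hypothetical sketch of an "always-on audit" record: every action captures
# not only what was done, but when, why, and which rules were in effect.
@dataclass(frozen=True)
class AuditEntry:
    action: str              # what action was taken
    actor: str               # who (or which system) took it
    timestamp: str           # when it occurred (UTC, ISO 8601)
    rationale: str           # why it happened
    rules_in_effect: tuple   # governance rules active at the time

class AuditLog:
    """Append-only: entries are recorded automatically and never edited."""
    def __init__(self):
        self._entries: List[AuditEntry] = []

    def record(self, action, actor, rationale, rules_in_effect):
        entry = AuditEntry(
            action=action,
            actor=actor,
            timestamp=datetime.now(timezone.utc).isoformat(),
            rationale=rationale,
            rules_in_effect=tuple(rules_in_effect),
        )
        self._entries.append(entry)
        return entry

    def entries(self):
        return list(self._entries)  # read-only copy for auditors

log = AuditLog()
log.record(
    action="update_pay_grade",
    actor="hr_system",
    rationale="annual step increase",
    rules_in_effect=["illustrative pay-schedule rule"],
)
print(len(log.entries()))  # one auditable record per action
```

Because entries are immutable and only ever appended, oversight bodies can reconstruct not just outcomes but the context in which each decision was made.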

Governance plays a central role in making this work. Purkey acknowledges that establishing strong governance frameworks can be just as challenging as implementing new technology. Agencies must define what data is allowed into their systems and ensure it is entered in a consistent way. Without these controls, even the most advanced AI tools can produce unreliable results.

He also notes that maintaining clean data is an ongoing process. It is not enough to clean up historical data; agencies must ensure that new data entering the system meets the same standards. This requires clear rules, consistent enforcement, and systems designed to prevent errors before they occur.
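The idea of preventing errors before they occur — checking new data against the same standards applied to cleaned historical data — can be sketched as validation at the point of entry. This is a hypothetical illustration; the field names and rules are invented for the example, not an agency schema.

```python
# Hypothetical sketch: enforce data-quality rules at ingest, so bad
# records never enter the system in the first place.

REQUIRED_FIELDS = {"employee_id", "agency", "pay_grade"}
VALID_PAY_GRADES = {f"GS-{n}" for n in range(1, 16)}  # GS-1 through GS-15

def validate_record(record: dict) -> list:
    """Return a list of rule violations; an empty list means the record is clean."""
    errors = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")
    grade = record.get("pay_grade")
    if grade is not None and grade not in VALID_PAY_GRADES:
        errors.append(f"unrecognized pay grade: {grade!r}")
    return errors

def ingest(record: dict, store: list) -> bool:
    """Admit the record only if it passes every rule; reject it otherwise."""
    if validate_record(record):
        return False  # noncompliant data is stopped at the door
    store.append(record)
    return True

store = []
ingest({"employee_id": "E-100", "agency": "GSA", "pay_grade": "GS-12"}, store)
ingest({"employee_id": "E-101", "agency": "GSA", "pay_grade": "GS-99"}, store)
print(len(store))  # only the valid record was admitted
```

The design choice here mirrors Purkey's point: the rules are applied consistently to every incoming record, so the system's data stays clean without periodic one-off cleanup efforts.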

Transitioning to this model is not without challenges. Purkey identifies change management as one of the biggest hurdles. Moving away from long-standing systems and processes requires leadership commitment and a willingness to embrace new approaches. Leaders must communicate the benefits clearly and help employees understand how these changes will improve outcomes.

Despite these challenges, Purkey sees progress. Agencies are becoming more aware of the importance of data quality and are exploring ways to modernize their systems while continuing to support their missions.

As AI continues to evolve, Purkey’s message is clear: success depends less on the sophistication of the technology and more on the strength of the data behind it. Without clean, governed data, AI risks becoming unpredictable and unreliable. With it, agencies have the opportunity to unlock greater efficiency, improve decision-making, and build trust in the systems they rely on every day.