March 24, 2026
On this episode of Fed Gov Today, Kshemendra Paul makes a clear case: the success of artificial intelligence in government depends far less on the technology itself and far more on the quality and governance of the data behind it. In conversation with Francis Rose, Paul emphasizes that agencies can only achieve high-fidelity AI outcomes if they first address foundational data challenges like consistency, sharing, and management.
Paul points to the President’s Management Agenda (PMA) as a strong signal that data is increasingly viewed as a strategic asset across government. He notes that data appears throughout the PMA’s priorities—from combating fraud, waste, and abuse with analytics, to improving customer experience through better service delivery. In each case, he explains, the common thread is data. Whether agencies are implementing risk models or modernizing systems, their success ultimately hinges on how well they manage and use their data.
However, Paul identifies a major obstacle: agencies often struggle with how they organize and govern data internally. He explains that responsibilities for data are frequently split or unclear between Chief Information Officers (CIOs) and Chief Data Officers (CDOs). While CDOs have gained more authority over time, especially following legislation like the Foundations for Evidence-Based Policymaking Act of 2018, funding and accountability structures have not always kept pace. This mismatch can create confusion and slow progress.
Beyond organizational challenges, Paul highlights a deeper issue—how agencies think about data ownership. He argues that the traditional, agency-centric model is no longer sufficient. Instead, he advocates for a domain-based, federated approach to data governance. In this model, agencies collaborate across boundaries, focusing on shared missions rather than isolated control of data. This shift, he says, is essential for tackling complex, real-world problems that require coordination among multiple stakeholders.
A key enabler of this approach is the development of a common data “language.” Paul draws on his experience with the National Information Exchange Model (NIEM), which helped standardize how data is described and shared across systems. By creating a common lexicon, agencies can overcome the barriers posed by incompatible systems and formats. He explains that this reduces complexity and allows information to flow more easily, ultimately improving mission outcomes.
Paul also underscores the importance of open, collaborative governance. He stresses that effective data sharing requires transparency and participation from a broad ecosystem, including state and local partners and the private sector. Without trust, he notes, agencies will remain stuck in siloed thinking, even when collaboration would clearly improve efficiency and reduce risk.
Looking ahead, Paul connects these principles directly to the rise of AI. He explains that better data governance not only improves interoperability but also addresses critical AI concerns such as bias, hallucinations, and overall reliability. By focusing on data quality and standards, agencies can build AI systems that are more accurate, trustworthy, and effective.
