Governing Trusted AI With Practical Guardrails

Original broadcast 11/19/25

Presented by Carahsoft

Artificial intelligence is accelerating at a pace that few could have predicted, and its potential to transform mission outcomes is undeniable. Yet as quickly as AI evolves, so do the risks associated with its misuse, misdirection, or misalignment with federal standards. For leaders like COL Kathy Swacina (USA, Ret.) and Marina Alvarez, this moment requires not just innovation, but governance—practical, intelligent, mission-aware governance that ensures AI is deployed safely, ethically, and consistently.

Together, Swacina and Alvarez lead the AI Think Tank within the Holistic Information Security Practitioner Institute (HISPI), including its flagship initiative, Project Cerebellum. Their work began with a simple but urgent observation: government CIOs and CISOs were deeply concerned about the pace of AI adoption across agencies and the absence of unified, comprehensible standards for determining what makes an AI tool “trusted.”

AI was rapidly entering acquisition pipelines. Vendors were arriving with solutions that advertised transparency, compliance, and fairness—yet offered no clear way to validate those claims. Agency leaders knew the stakes. A flawed model could introduce systemic bias into decision-making. An opaque model could violate privacy, ethical norms, or security frameworks. And a compromised model—whether poisoned, manipulated, or misinformed—could pose real risks to national security.

Swacina, drawing from her experience as former Deputy Chief of Staff for the U.S. Army Reserve Command, understood that high-stakes systems require rigorous evaluation. Alvarez, leveraging her background as former SE-ICAM Program Manager at the U.S. Department of State, recognized the gaps in identity governance and transparency that could undermine AI integrity. Together, they designed an effort to fill those gaps.

The result is the TAM Score, a comprehensive technical and compliance mapping system that evaluates AI against multiple globally recognized standards. Rather than reinventing the wheel, Project Cerebellum harmonizes dozens of frameworks into a single, accessible reference. These include NIST AI frameworks, ISO/IEC standards, risk management guidance, cybersecurity requirements, and even industry-specific regulations like PCI DSS.

This approach allows developers, vendors, and federal agencies to speak the same language. With TAM, AI products can be measured consistently, producing a score that reflects trustworthiness across ethics, security, privacy, transparency, and operational readiness. The value, as Alvarez explains, is that agencies no longer need to guess. They can assess.
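To make the idea concrete, here is a minimal sketch of what a composite trust score could look like in code. The dimension names, weights, and 0-100 scale below are illustrative assumptions, not the published TAM methodology.

```python
# Illustrative composite "trust score" across governance dimensions.
# Dimension names, weights, and the 0-100 scale are assumptions for
# illustration, not the actual TAM scoring method.
from dataclasses import dataclass

@dataclass
class DimensionScore:
    name: str      # e.g., "ethics", "security", "privacy"
    score: float   # 0-100, derived from framework-mapped control checks
    weight: float  # relative importance; all weights must sum to 1.0

def composite_trust_score(dimensions: list[DimensionScore]) -> float:
    """Weighted average across governance dimensions."""
    total_weight = sum(d.weight for d in dimensions)
    if abs(total_weight - 1.0) > 1e-9:
        raise ValueError("dimension weights must sum to 1.0")
    return sum(d.score * d.weight for d in dimensions)

# Equal weighting across the five dimensions named in the article.
dims = [
    DimensionScore("ethics", 82.0, 0.2),
    DimensionScore("security", 91.0, 0.2),
    DimensionScore("privacy", 78.0, 0.2),
    DimensionScore("transparency", 65.0, 0.2),
    DimensionScore("operational readiness", 88.0, 0.2),
]
print(f"Composite trust score: {composite_trust_score(dims):.1f}")  # 80.8
```

A number like this is only meaningful if every vendor's dimension scores come from the same underlying control checks, which is exactly the consistency the TAM Score is meant to enforce.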

But governance, the leaders emphasize, is not just about checking boxes. It is about ensuring AI is deployed for the right reasons, toward the right problems, with the right oversight. Swacina notes that without transparency—without knowing what data trained a model, how it reached its conclusions, and what guardrails exist—agencies risk embedding bias and inequity into critical mission workflows. AI must not only be powerful; it must be principled.

This is especially important as generative AI and agentic AI evolve. Generative systems can produce answers, content, or recommendations instantaneously, but they can also confidently generate wrong or biased information if not governed properly. Agentic systems, which operate semi-autonomously, amplify this risk. Without strict controls, auditing, and human-in-the-loop processes, agencies could lose insight into how decisions are being made.
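A common guardrail for agentic systems is to gate consequential actions behind human approval while logging everything. The sketch below illustrates that pattern in Python; the risk tiers, action names, and approval interface are hypothetical, not drawn from any specific agency implementation.

```python
# Generic human-in-the-loop gate for semi-autonomous agent actions.
# Risk tiers, action names, and the approval interface are hypothetical.
import datetime

AUDIT_LOG: list[dict] = []  # in practice, an append-only audit store

HIGH_RISK = {"modify_record", "send_external", "execute_workflow"}

def request_action(agent_id: str, action: str, payload: dict) -> bool:
    """Log every proposed action; require human approval for high-risk ones."""
    entry = {
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent": agent_id,
        "action": action,
        "payload": payload,
    }
    if action in HIGH_RISK:
        entry["approved"] = ask_human(entry)  # blocks until a reviewer decides
    else:
        entry["approved"] = True  # low-risk actions proceed, but are still logged
    AUDIT_LOG.append(entry)
    return entry["approved"]

def ask_human(entry: dict) -> bool:
    """Stand-in for a real review queue (ticketing, chat approval, etc.)."""
    answer = input(f"Approve {entry['action']} by {entry['agent']}? [y/N] ")
    return answer.strip().lower() == "y"
```

The point of the pattern is that autonomy never outruns accountability: even approved low-risk actions leave an auditable trail.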

Swacina and Alvarez argue that ethical AI governance is inseparable from national security. AI systems increasingly interact with sensitive datasets, operational intelligence, and public-facing citizen services. A flawed model could compromise trust in government. A compromised model could expose vulnerabilities to adversaries.

To avoid these outcomes, the HISPI AI Think Tank emphasizes transparency as the bedrock of trust. Developers must be able to explain model behavior. Agencies must be able to audit datasets, detect anomalies, and intervene when necessary. And all stakeholders must have a shared framework for evaluating compliance—not one vendor’s interpretation versus another’s, but an agreed-upon set of criteria that holds true across missions.
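Dataset auditing can start small. The sketch below flags records whose values drift far from a trusted reference profile; the feature names, reference statistics, and z-score threshold are illustrative assumptions only.

```python
# Minimal dataset-audit sketch: flag numeric features that drift beyond
# a z-score threshold relative to a trusted reference profile.
# Feature names, reference stats, and the threshold are illustrative.

REFERENCE = {  # mean and stdev observed when the dataset was approved
    "age": (42.0, 12.5),
    "income": (58_000.0, 21_000.0),
}

def audit_feature(name: str, values: list[float], z_threshold: float = 3.0) -> list[int]:
    """Return indices of records whose value is anomalous vs. the reference."""
    mean, stdev = REFERENCE[name]
    return [i for i, v in enumerate(values) if abs(v - mean) / stdev > z_threshold]

suspect = audit_feature("income", [61_000, 55_000, 940_000])
print(f"Anomalous record indices: {suspect}")  # -> [2]
```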

The TAM Score equips agencies with that framework. It provides clarity where previously there was inconsistency. It integrates technical validation with ethical alignment. And it transforms governance from barrier to enabler—giving developers and agencies a pathway to build systems that are both innovative and responsible.

Swacina and Alvarez also stress that AI governance must evolve beyond compliance and into operational culture. Agencies must train personnel to understand AI outputs, evaluate reliability, and flag concerns. Governance cannot sit only in policy documents; it must be embedded in workflows, acquisition strategies, and mission execution.

The Indo-Pacific context reinforces this urgency. The region is a center of strategic competition where data, intelligence, and technology integration shape daily operations. AI can accelerate decision-making, improve situational awareness, and strengthen cooperation with allies—but only if trusted. Without governance, it risks creating vulnerabilities rather than advantages.

Project Cerebellum aims to tip that balance toward advantage. By focusing on transparency, cross-framework compliance, and ethical operation, the initiative provides a blueprint for federal agencies navigating the rapidly changing AI landscape. It gives mission leaders confidence that their AI tools are reliable. It gives developers a roadmap. And it gives the broader ecosystem a common foundation on which to build future capabilities.

For Swacina and Alvarez, the work is far from complete. AI will continue to evolve, and so must the frameworks that govern it. But by establishing a holistic approach now, they believe the federal government can shape a future where AI accelerates mission outcomes safely, equitably, and responsibly.

Key Takeaways

  • The TAM Score unifies multiple global AI standards to help agencies evaluate trustworthiness consistently.

  • Ethical AI requires transparency, explainability, and mission-aligned governance—not just technical capability.

  • Trusted AI depends on a blend of compliance, culture, and continuous oversight to ensure reliability and ethical use.