Original broadcast 8/19/25
Presented by Carahsoft
In an era where technology is evolving at unprecedented speed, cybersecurity risk management needs to be both consistent and adaptable. Victoria Yan Pillitteri, Manager of the Security Engineering and Risk Management Group at NIST, says the NIST Risk Management Framework (RMF) provides exactly that — a flexible, repeatable process that can be applied to any technology, including emerging fields like artificial intelligence.
“The RMF is the foundation for cybersecurity risk management,” Pillitteri explains. “It’s completely agnostic to the type of technology or organization. These are just good practices in a repeatable, manageable way.”
The framework, born out of the requirements of the Federal Information Security Modernization Act (FISMA), is not a checklist. Instead, it is a methodology that guides organizations through understanding their systems, identifying the right protections, implementing them, and monitoring them continuously. Pillitteri says one of the biggest misconceptions is treating the RMF as a static compliance exercise rather than an adaptable process tailored to each environment.
NIST's open, collaborative approach to developing the framework — soliciting and incorporating public comment — ensures it remains relevant and practical. It also allows NIST to fold lessons learned from the field into updates, keeping pace with technological change. The most recent comment period closed in early August, and updated controls are scheduled for release before September 2, in alignment with the latest executive order on cybersecurity.
Looking ahead, Pillitteri is focused on applying RMF principles to AI security. NIST plans to release a concept paper outlining security control overlays for different AI use cases, from predictive models to generative and agentic AI. Each use case will have unique considerations — how to protect the model, the data, and the outputs — but the overarching approach will leverage existing standards wherever possible.
Pillitteri also hopes to foster a “community of interest” around AI security, bringing together stakeholders from government, industry, and academia to refine best practices. The goal is to ensure that as AI adoption grows, so does the ability to manage its risks effectively.
“Good risk management is here to stay,” she says. “Whether it’s for AI or any other emerging technology, it’s about understanding what you have, how to protect it, and making sure you’re doing it well — continuously.”
Key Takeaways:
NIST’s RMF is a flexible, technology-agnostic process for managing cybersecurity risks.
Continuous public feedback ensures controls remain practical and relevant.
New AI-specific security overlays will address the unique risks of different AI use cases.
Watch the full episode at InnovationInGov.com