Presented by Cisco & Carahsoft
George Kaminski, Manager of Securing Solutions Engineering at Cisco, examined the differences between securing externally sourced AI tools and internally developed AI systems.
Internally developed first-party AI systems introduce a different set of risks, including prompt injection attacks, poisoned datasets, compromised plugins, and hallucinated outputs. Organizations must secure every stage of the AI lifecycle, including training data, runtime execution, integrations, and outputs.
Kaminski emphasized that organizations need tools capable of inspecting prompts, monitoring AI behavior, validating outputs, and scanning integrations before deployment. AI governance must include logging, auditing, rollback capabilities, and strict policy enforcement.
He also discussed the growing importance of agentic AI systems, which can autonomously take actions on behalf of users or organizations. While these systems offer operational advantages, they also require stronger guardrails and oversight mechanisms to prevent unintended actions or security failures.
Key Takeaways