
Traditional security models were not designed for systems where inputs can alter behavior, identities are non-human, and decision-making is probabilistic.
When it comes to frameworks for managing agentic AI safely, the OWASP GenAI Security Project cannot be overlooked.
Here’s why.
The OWASP GenAI security project story
Originally launched in 2023 as the OWASP Top 10 for LLM Applications, the initiative has rapidly grown into one of the most important community-driven efforts in AI security.
Today, it is a comprehensive, open-source body of knowledge focused on securing generative AI systems across their entire lifecycle. It brings together researchers, practitioners, and security leaders to define risks, standardize terminology, and provide actionable guidance.
By early 2026, the project has expanded into a broader ecosystem that includes more than a dozen sub-projects. Some of the most impactful components include:
- OWASP Top 10 for LLMs (2025/2026): A widely adopted list of the most critical vulnerabilities affecting AI applications
- Top 10 for Agentic Applications: A new framework addressing the risks introduced by autonomous AI agents
- LLM Cybersecurity and Governance Checklist: A strategic tool designed to help organizations align AI usage with security and compliance requirements
- AI Bill of Materials (AIBOM) Generator: A mechanism for improving visibility into the components and dependencies that make up AI systems
Together, these resources provide something the industry has been missing: a shared foundation for understanding AI risk. For CISOs, that means a common language to align security, engineering, and leadership teams around what actually matters.
Why OWASP matters for CISOs
1. It Redefines What Application Security Looks Like
Classic application security risks still exist, but they manifest differently in AI systems.
In a traditional application, inputs are clearly separated from logic. In a large language model, that boundary disappears. Prompts influence behavior directly, effectively acting as both data and execution logic.
This creates entirely new attack surfaces.
For example, a support chatbot integrated with internal systems might be instructed via prompt injection to expose sensitive data or override guardrails. Unlike SQL injection, this does not exploit a technical flaw in code, it exploits the model’s reasoning layer.
The OWASP Top 10 for LLMs highlights risks such as:
- Prompt Injection (LLM01): Malicious inputs that manipulate model behavior or override intended instructions
- Excessive Agency (LLM06): Scenarios where an AI system is granted too much autonomy and can take unintended or harmful actions
These are not edge cases. They are fundamental to how AI systems operate.
For CISOs, this is a signal that existing AppSec programs need to evolve, not just expand. Security testing, threat modeling, and monitoring all need to account for how AI systems interpret and act on inputs, not just how code executes them.
2. It Introduces a Framework for AI Identities
One of the most important shifts in the 2026 OWASP updates is the focus on agentic AI.These systems do not just respond. They act. They can access systems, modify data, trigger workflows, and interact across the SaaS environment.
This introduces a new category of identity.
AI agents are not employees, but they operate with real permissions, often using API keys or delegated access. In many environments, they are provisioned quickly and without the same governance controls applied to human users. This creates an identity gap.
For example, an AI sales assistant might have access to CRM data, email systems, and customer records. If compromised or misconfigured, it could send unauthorized communications, expose sensitive data, or take actions that would normally require multiple layers of approval. The OWASP Top 10 for Agentic Applications provides a framework for closing that gap. It outlines how to think about authentication, authorization, least privilege, and monitoring for non-human identities, similar to modern machine identity management.
For CISOs, this is a critical governance challenge. The question is no longer just “who has access,” but “what autonomous systems are acting on our behalf, and under what constraints?”
3. It Exposes Hidden AI Supply Chain Risk
Modern AI systems depend on a wide range of external components, from pre-trained models to APIs, plugins, and datasets. The OWASP category of Supply Chain Vulnerabilities (LLM03) highlights a key issue. Most organizations lack visibility into these dependencies.
Without that visibility, it becomes difficult to assess risk or enforce policy.
For instance, a seemingly simple AI feature may rely on multiple third-party services, each with its own data handling practices, security posture, and update cycles. A vulnerability or data leak in any one of these components can cascade into your environment.The concept of an AI Bill of Materials (AIBOM) builds on ideas from the broader Software Bill of Materials (SBOM), extending them into the AI ecosystem.
With AIBOM, CISOs can:
- Understand where models and data originate
- Evaluate third-party and vendor risk more effectively
- Track changes in dependencies over time
- Improve incident response and accountability
This is a foundational step toward bringing AI into established risk management practices like third-party risk management and software supply chain security.
4. It Bridges the Gap Between Regulation and Execution
AI regulation is accelerating, with frameworks like the EU AI Act setting new expectations for governance, transparency, and accountability. The challenge for most organizations is translating these requirements into actual controls.
OWASP helps bridge that gap.
Its governance checklist aligns closely with frameworks like the NIST AI Risk Management Framework, turning high-level requirements into practical guidance such as access controls, auditability, and risk classification.
For example, instead of simply stating that AI systems must be “secure and trustworthy,” OWASP helps define what that looks like in practice. What controls should exist, how systems should be monitored, and where accountability should sit. For CISOs, this provides a starting point for operationalizing AI governance without building everything from scratch. It also creates defensibility, the ability to show auditors, regulators, and boards that AI risks are being managed against recognized standards.
Is guidance enough? Nope.
OWASP provides the foundation, but it does not solve execution on its own.
Security teams still need real-time visibility into how AI is being used across the organization, especially in SaaS environments where adoption often happens without centralized oversight.
In practice, that means being able to:
- Discover AI usage, including Shadow AI tools adopted by business units
- Identify and track AI agents and their permissions across systems
- Monitor behavior to detect misuse, drift, or overreach
- Respond quickly when risks are identified, before they escalate
This is where CISOs need the tech to back up the framework.
Solutions like Wing Security’s help translate OWASP guidance into enforceable controls by providing continuous discovery, behavioral monitoring, and automated remediation across the SaaS stack.
Instead of static policies, CISOs gain a dynamic view of how AI is actually being used, and the ability to intervene when necessary. The goal is not just to understand AI risk, but to actively manage it in real time.
