CISOs don’t have to choose between AI innovation and security risk—here’s how to enable agentic AI safely with Continuous Identity
The exponential growth of LLM context window sizes has enabled agentic AI—software capable of autonomously pursuing a goal: reasoning step by step, taking actions, invoking APIs, and completing workflows. It has redefined what’s possible. Yet it has also intensified a long-standing tension: security vs. innovation.
Wary of driving employees toward “shadow AI,” Chief Information Security Officers (CISOs) face pressure not to be the team that says “no,” even as they are expected to keep sensitive data and systems protected. With the introduction of the Model Context Protocol (MCP), this challenge becomes even more acute. MCP makes it easier than ever for AI agents to leverage enterprise systems and the data stored in them. That’s powerful, but it also greatly increases risk. To learn more about MCP and its risks, see my previous blog post.
The good news? CISOs don’t have to choose between blocking progress and risking security. In fact, they can be the champions of safe, scalable AI adoption by ensuring that the right guardrails are in place from the start.
Enterprise services are typically designed with human behavior in mind. They assume a level of discretion: employees rarely try to work around denied access requests. AI agents, by contrast, are optimized for results, not rules. When one approach fails, they try another. And another.
This poses novel security threats. As explored in the previous post, a reasoning model that’s denied access to a customer address might try instead to get nearby delivery points, triangulating sensitive information without ever explicitly breaking a rule. This type of indirect inference, long studied in academic security research, becomes much more practical when reasoning models are widely deployed.
Even more troubling is that many enterprises grant access broadly. Authorization policies reflect job titles or business units, not precise, contextual, time-bound needs. While a human might ignore irrelevant data, an AI agent will not.
The result: confidential information, once obscured by complexity or access friction, becomes easily discoverable by an eager agent.
By establishing architectural patterns and policy guardrails early, security leaders can turn the agentic AI challenge into an opportunity to move their organizations to modern security best practices—practices that not only secure new AI use cases but also cut risk in existing activities.
Here are three principles to guide that shift:
MCP turns LLMs into actors with the same privileges as their human users. That makes dynamic, zero standing privilege (ZSP) access a prerequisite.
With ZSP, access isn’t granted in advance or indefinitely. It’s requested and evaluated at the time of use, in context. This is critical for MCP because the AI’s behavior is contextual and time-sensitive.
Consider an employee working on an M&A deal. For that period, access to sensitive financial models may be appropriate. After the deal closes, it’s not. If an agent retains those privileges, it may continue to query or reference stale, but still confidential, data.
Dynamic authorization platforms like SGNL make it possible to enforce real-time access decisions based on identity, task, time, device, and more. This ensures that AI agents only have access to the information their users are entitled to right now, and it ensures that every user’s access is modulated based on context, thereby greatly reducing the blast radius of a compromised identity.
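To make the M&A example concrete, here is a minimal sketch of a per-request, context-aware access decision. The policy rules, attribute names, and deal registry are illustrative assumptions for this post, not SGNL’s actual API—the point is that nothing is granted in advance, and every decision is evaluated against current context.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AccessRequest:
    user: str
    resource: str
    task: str            # e.g., the deal the user is currently working on
    device_trusted: bool
    timestamp: datetime

# Illustrative policy data: access to deal materials is valid only
# while the deal is active. "project-atlas" is a hypothetical deal name.
ACTIVE_DEALS = {
    "project-atlas": (datetime(2025, 1, 1, tzinfo=timezone.utc),
                      datetime(2025, 6, 30, tzinfo=timezone.utc)),
}

def authorize(req: AccessRequest) -> bool:
    """Evaluate at time of use: trusted device AND an active deal window.

    There is no standing grant to check; if the deal has closed,
    the same request that succeeded yesterday fails today.
    """
    if not req.device_trusted:
        return False
    window = ACTIVE_DEALS.get(req.task)
    if window is None:
        return False
    start, end = window
    return start <= req.timestamp <= end
```

Because the decision is a pure function of identity, task, device, and time, revoking access after the deal closes requires no cleanup of granted entitlements—the context simply stops satisfying the policy.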
The flexibility of MCP depends on “tool discovery”—the ability of an AI client to learn what functions are available on a server. But not all users should be able to see or use all tools.
Instead of presenting a static tool list, MCP servers should authorize the tools/list method based on the user’s identity. That way, agents can’t invoke tools they were never meant to know about.
Furthermore, execution itself should happen under the requesting user’s privileges—not the server’s. This preserves the authorization context all the way through the request, making it easier to log, trace, and audit downstream behavior.
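A sketch of both ideas together—filtering the tool list per identity, and executing tools under the caller’s privileges—using hypothetical helper names and an entitlement scheme invented for illustration rather than any specific MCP SDK:

```python
# Hypothetical registry: each tool declares the entitlement it requires.
TOOLS = {
    "read_customer_record": {"requires": "crm.read"},
    "export_financials":    {"requires": "finance.export"},
}

def invoke_downstream(tool: str, auth_token: str) -> dict:
    # Stand-in for the real downstream call; it records which identity
    # the call ran as, so audit logs reflect the real actor.
    return {"tool": tool, "ran_as": auth_token}

def list_tools(user_entitlements: set[str]) -> list[str]:
    """Return only the tools this user is entitled to; others stay invisible."""
    return [name for name, meta in TOOLS.items()
            if meta["requires"] in user_entitlements]

def call_tool(name: str, user_entitlements: set[str], user_token: str) -> dict:
    """Re-check authorization at call time, then execute as the user,
    not as the server, by propagating the user's token downstream."""
    if name not in TOOLS or TOOLS[name]["requires"] not in user_entitlements:
        raise PermissionError(f"{name} is not available to this user")
    return invoke_downstream(name, auth_token=user_token)
```

Note that call_tool re-checks entitlements even for tools the listing already filtered—an agent that learned a tool name out of band still can’t invoke it, and every downstream action is attributable to the requesting user rather than a shared service account.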
In traditional access control models, identity is the perimeter. With MCP, context becomes just as important.
Because AI agents use prior responses to inform future ones, what they know is often shaped by what they’re allowed to know. If an agent receives a response once, it may retain and reuse it—even if the user’s permissions change. That means ephemeral authorization is essential. Every request must be independently evaluated with up-to-date context, including role changes, location shifts, or device posture.
Additionally, context-aware access must be policy-enforced, not hardcoded. Business conditions change. Access policies need to keep up. That’s only feasible if authorization is externalized and centrally managed.
AI in the enterprise isn’t a matter of if, it’s a matter of how. MCP will be adopted, whether through sanctioned deployments or user-led experimentation. The choice for CISOs is whether to lead or chase.
Security shouldn’t be the department of “no.” It should be the function that makes “yes” possible: safely, reliably, and at scale.
By embracing ZSP, contextual access, and AI-aware controls at the protocol level, CISOs can unlock the benefits of agentic AI across the enterprise without opening the door to unintended risk.
In doing so, they don’t just defend the business. They enable it.