The advent of agentic artificial intelligence heralds a transformative era for organizations, moving beyond mere assistance to autonomous action. These AI agents are not simply advanced chatbots or copilots; they are sophisticated entities capable of planning, decision-making, and independent execution across a multitude of digital and operational domains. As they increasingly undertake tasks such as writing code, managing data, processing transactions, provisioning infrastructure, and engaging with customers—often without direct human oversight—their potential to unlock unprecedented business value is immense. However, this revolutionary capability is contingent upon robust security measures, a domain where most organizations remain critically unprepared, according to Itamar Apelblat, Co-Founder and CEO of Token Security.
The prevailing security paradigm for AI, characterized by guardrails like prompt filtering, output controls, and behavior monitoring, is fundamentally flawed, Apelblat argues. This approach, he contends, attempts to constrain behavior after access has been granted. Once an AI agent possesses credentials and network connectivity, a single misstep can lead to catastrophic consequences, including data exfiltration, destructive actions, or cascading system failures across interconnected environments. To enable innovation while ensuring security, organizations must fundamentally rethink their control plane, shifting the focus from prompts and networks to identity as the scalable and foundational element for securing and governing these autonomous systems.
The Imperative Shift: From Guardrails to Identity-Centric Security
The rapid proliferation of AI agents presents a complex challenge for cybersecurity professionals. Unlike traditional software, AI agents are dynamic and can adapt their behavior based on their objectives and the data they encounter. This inherent non-determinism makes static rule-based security measures, such as prompt filtering, increasingly ineffective. The sheer volume of potential interactions and prompts means that any limitations imposed by guardrails are likely to be circumvented eventually. As Apelblat succinctly puts it, "Even if prompt controls worked 99% of the time, 1% of infinity is still infinity."
This reality necessitates a strategic pivot in how AI security is conceptualized and implemented. The focus must shift from attempting to manage the outputs and interactions of AI agents to meticulously controlling their access and identity. This means asking critical questions about the specific permissions granted to each agent, the systems and data they are authorized to access, and the precise nature of their intended operations. By tightening access controls at the identity layer, organizations can create a more effective containment strategy for autonomous software, moving beyond the limitations of coarse network controls and the inherent weaknesses of prompt-based filters.
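In concrete terms, identity-layer containment often means issuing short-lived, narrowly scoped credentials rather than broad, long-lived ones. The sketch below is illustrative only: the `issue_token` helper, in-memory store, and scope names are assumptions for demonstration, not any specific vendor's API.

```python
import secrets
import time

# Illustrative in-memory token issuer: every credential an agent receives
# is bound to explicit scopes and expires quickly.
_TOKENS = {}

def issue_token(agent_id, scopes, ttl_seconds=900):
    """Mint a short-lived token limited to the listed scopes."""
    token = secrets.token_urlsafe(16)
    _TOKENS[token] = {
        "agent_id": agent_id,
        "scopes": frozenset(scopes),
        "expires_at": time.time() + ttl_seconds,
    }
    return token

def authorize(token, required_scope):
    """Allow an action only if the token is still live and carries the scope."""
    grant = _TOKENS.get(token)
    if grant is None or time.time() >= grant["expires_at"]:
        return False
    return required_scope in grant["scopes"]
```

Under this model, even an agent whose prompt defenses fail holds no credential that could reach systems outside its granted scopes, which is precisely the containment property prompt filters cannot provide.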
Treating AI Agents as First-Class Identities
A cornerstone of securing agentic AI lies in recognizing that, the moment these entities interact with production systems, APIs, cloud roles, SaaS platforms, or infrastructure, they cease to be mere experiments and become distinct digital identities. Each AI agent leverages various forms of identity, including API tokens, OAuth grants, service accounts, cloud roles, secrets, and access keys. Yet in many organizations, these AI-generated identities are invisible, unmanaged, and poorly governed, creating significant security blind spots.
Mandating that every AI agent be treated as a first-class digital identity is crucial. This involves comprehensive inventorying and management of all identities associated with AI agents, understanding their purpose, and ensuring they are provisioned and deprovisioned with the same rigor as human identities. Without clear visibility into the identities that AI agents are utilizing, organizations lose the ability to effectively control them, creating an environment ripe for exploitation.
Eliminating Shadow AI Through Identity Visibility
The rise of "Shadow AI"—unauthorized or unmanaged AI agents operating within an organization—is a growing concern. This phenomenon is not primarily a tooling issue but rather a manifestation of an identity governance deficit. Developers, IT administrators, and business users are increasingly deploying AI agents that connect to critical business systems, access sensitive APIs, retrieve proprietary data, and initiate complex workflows, often without the knowledge or consent of security teams.
These unannounced agents operate by leveraging existing credentials or by creating new ones that go undetected. This lack of visibility fundamentally undermines Zero Trust security architectures, as unknown agents can be implicitly trusted simply because their credentials appear valid. To combat Shadow AI, organizations must prioritize gaining comprehensive identity visibility. This includes discovering all AI agent identities, understanding their permissions and associated risks, and ensuring that only authorized agents with appropriate controls are operating within the environment. As Apelblat emphasizes, "If you can’t see it, you can’t secure it. And in the AI era, what you can’t see is often autonomous."
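In practice, Shadow AI detection reduces to diffing what actually exists in the environment against what identity governance knows about. A simplified sketch, assuming a hypothetical discovered-credential format:

```python
def find_shadow_identities(discovered, registry_ids):
    """Return credentials observed in the environment whose principal is not
    in the governed registry -- candidate Shadow AI agents."""
    return [cred for cred in discovered if cred["principal"] not in registry_ids]

# Example inputs (hypothetical): credentials scraped from cloud, SaaS, and CI logs.
discovered = [
    {"principal": "ticket-bot", "kind": "oauth_grant"},
    {"principal": "svc-llm-deploy-7", "kind": "access_key"},  # never registered
]
shadow = find_shadow_identities(discovered, registry_ids={"ticket-bot"})
```

Anything surfaced by such a diff is, by definition, operating on trust it was never explicitly granted, which is exactly the Zero Trust violation described above.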
Securing Based on Intent, Not Just Static Permissions
Traditional access control models often rely on static permissions, which may not adequately address the dynamic and goal-oriented nature of AI agents. Two identical AI agents with the same set of permissions can exhibit vastly different behaviors depending on their underlying objectives. This introduces a critical missing dimension: intent.
To effectively secure AI agents, organizations must move beyond simply granting permissions and instead focus on enforcing intent. This involves answering key questions such as: What is the specific goal of this AI agent? What are the boundaries of acceptable behavior for this agent to achieve its goal? What data and systems are absolutely necessary for its operation, and what should be explicitly out of bounds?
For instance, an AI agent designed to summarize customer support tickets should not possess the capability to export the entire customer database. Similarly, an agent tasked with infrastructure optimization should be prohibited from modifying critical Identity and Access Management (IAM) policies. By defining and enforcing intent through tightly scoped identity and access controls, organizations can establish a robust security framework that dictates acceptable operational parameters. This approach breaks the dangerous assumption that AI agents can simply inherit the permissions of the human users they might be acting on behalf of. The security of AI agents is not about predicting every possible behavior but about rigorously enforcing their intended purpose through controlled access.
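One way to make intent enforceable is to bind each agent's declared goal to an explicit allow-list of operations, so that two agents with reach into the same systems still receive different effective permissions. A sketch with made-up intent and action names:

```python
# Map each declared intent to the only operations that serve it.
# These intent and scope names are illustrative, not a standard vocabulary.
INTENT_POLICIES = {
    "summarize_tickets": {"tickets:read", "summaries:write"},
    "optimize_infra":    {"metrics:read", "autoscaling:update"},
}

def permitted(intent, action):
    """An action is allowed only if it serves the agent's declared intent;
    anything outside the allow-list is denied by default."""
    return action in INTENT_POLICIES.get(intent, set())
```

Under this policy, the ticket-summarizing agent from the example above can read tickets but is structurally incapable of exporting the customer database, regardless of what its prompts ask it to do.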
Implementing Full AI Agent Lifecycle Governance
Security vulnerabilities associated with AI agents rarely manifest at the moment of their creation. Instead, they tend to emerge and compound over time. Access can accumulate, ownership can become ambiguous, and credentials can persist long after an agent is no longer actively used or has been repurposed. AI agents can be modified, re-assigned, and ultimately abandoned far faster than traditional software moves through its lifecycle, often without any formal deprovisioning process.
To mitigate these risks, organizations must implement comprehensive lifecycle governance for every AI agent. This includes establishing clear ownership, defining processes for regular access reviews and audits, implementing automated credential rotation and revocation, and creating a formal deprovisioning process for agents that are no longer needed. Continuous lifecycle control is essential to prevent the invisible compounding of risk. Without the ability to answer critical questions about agent status, purpose, and access at any given moment, organizations relinquish control over these powerful autonomous systems. Emerging frameworks for AI agent identity lifecycle management are being developed to address this precise challenge, offering structured approaches to ensure ongoing security and compliance.
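Lifecycle control can be approximated with periodic sweeps that flag stale credentials, abandoned agents, and missing owners. The thresholds below (90-day rotation, 30-day inactivity) are assumed for illustration, not prescribed values.

```python
from datetime import datetime, timedelta, timezone

ROTATION_WINDOW = timedelta(days=90)   # assumed credential rotation policy
INACTIVITY_LIMIT = timedelta(days=30)  # assumed abandonment threshold

def lifecycle_findings(agents, now=None):
    """Flag agents whose credentials are overdue for rotation, that appear
    abandoned, or that lack an accountable owner. Each agent is a dict with
    'id', 'credential_issued_at', 'last_used_at', and 'owner'."""
    now = now or datetime.now(timezone.utc)
    findings = []
    for a in agents:
        if now - a["credential_issued_at"] > ROTATION_WINDOW:
            findings.append((a["id"], "rotate_credential"))
        if now - a["last_used_at"] > INACTIVITY_LIMIT:
            findings.append((a["id"], "review_for_deprovisioning"))
        if not a.get("owner"):
            findings.append((a["id"], "assign_owner"))
    return findings
```

Run continuously, a sweep like this turns the "invisible compounding of risk" into a queue of concrete, owned remediation tasks.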
The Broader Impact: A New Era of Digital Trust
The widespread adoption of agentic AI promises to redefine operational efficiency and innovation across industries. From accelerating drug discovery in pharmaceuticals to optimizing supply chains in manufacturing and personalizing customer experiences in retail, the potential benefits are transformative. However, realizing this potential hinges on establishing a new foundation of digital trust.
Organizations that attempt to bolt AI capabilities onto legacy, human-centric identity management models risk either over-provisioning AI agents with excessive privileges or stifling innovation through overly restrictive, albeit well-intentioned, security measures. Those that neglect the critical aspect of identity governance will inevitably face the consequences of lost control, data breaches, and operational disruptions.
The path forward is not to impede the progress of AI but to secure it intelligently. Identity-centric security, coupled with rigorous lifecycle governance, provides the scalable control plane necessary for managing autonomous systems. This approach ensures that security acts as an enabler of innovation rather than an impediment.
In the coming decade, companies that successfully leverage AI to transform their business operations while maintaining robust security will be the ones that thrive. The key to this delicate balance lies in a profound understanding and meticulous management of digital identities, particularly those of autonomous AI agents. As the lines between human and machine interaction blur, establishing clear, secure, and governable identities for AI will be paramount to building and sustaining trust in the digital realm. The successful integration of agentic AI into the fabric of business operations will depend on the ability to ensure that autonomy is always tethered to accountability, with identity serving as the critical link.