Identity Is the Real AI Control Plane


AI Agents Broke the Identity Model You're Using

AI agents broke the identity model because they exhibit three properties that no prior digital actor combined: they are non-deterministic, initiative-taking, and authority-bearing [1]. The same input can produce different actions. The agent decides when and what to do. And it exercises delegated decision rights, not just execution rights.

Bots execute scripts. Service accounts run static workloads. Humans log in and out. An autonomous agent does none of these cleanly — it operates 24/7, makes decisions inside the perimeter with valid credentials, and acts on behalf of someone who is not at the keyboard when it acts. That is a category that legacy IAM was never designed to handle, and forcing it into the human or workload bucket is how organizations end up with unaccounted authority running across their stack.

| Identity class | Operates 24/7 | Makes decisions | Deterministic | Scope of authority |
| --- | --- | --- | --- | --- |
| Human user | No | Yes | Yes (intent) | Bounded by job role |
| Service account | Yes | No | Yes | Static, narrow |
| Workload | Yes | No | Yes | Static, scoped to function |
| AI agent | Yes | Yes | No | Delegated, dynamic, broad |

The scale shift makes this urgent. Forrester projects [2] that by mid-2026 the average Fortune 500 company will operate dozens of AI models, hundreds of AI-powered applications, and potentially thousands of autonomous agents. Microsoft, GitHub, ServiceNow, and Fiddler AI all launched products in the agent control plane category within the six months leading up to that projection [2] — a market does not converge that fast unless the underlying architectural problem is real.

The risk surface is already showing up. According to Security Boulevard [4], 80% of companies have already experienced unintended AI agent actions. That is a single-source figure and worth treating as directional rather than definitive — but the direction is not in dispute. If a compromised agent is a high-speed insider with valid credentials [1], then infrastructure controls aren't your control plane. Your authorization model is.

The architectural question this raises is not "how do we restrict access?" It is "how do we govern authority?"

From Access Control to Authority Control

Governing AI agents requires a shift from access control (who can log in) to authority control (who may act, under what constraints, with what accountability). This is the architectural pivot that makes identity, not infrastructure, the real control plane. Treating it as an AI strategy decision, not a security ticket, is what separates firms that scale agents safely from those that scale unknown risk.

Traditional firewalls assume the threat is outside the perimeter. Agents operate inside it, with valid credentials, hitting valid APIs. There is no perimeter to defend — only authority to govern. And the protocols most organizations rely on for that authority were not built for this workload. OAuth, OIDC, and SAML were designed for humans who log in, do work, and log out [5]. Traditional IAM systems built around them lack the capability to handle dynamic, interdependent, and often ephemeral AI agents at scale [11].

Why identity? Because it is the only consistent thread tying autonomous agent actions back to a responsible human or organization [5]. Every API call, every database query, every cross-system action eventually traces back to who the agent is and whose authority it carries. Identity becomes the source of truth for accountability — without it, you have logs you can't attribute and actions you can't revoke.

That reframes the governance question. McKinsey puts it directly [6]: the question we must always answer has shifted from "Is the model accurate?" to "Who is accountable when the system acts?" Identity is how that accountability survives at machine speed.

| Question | Access Control | Authority Control |
| --- | --- | --- |
| What is being asked? | Can this entity log in? | Under what constraints may this entity act? |
| What is enforced? | Authentication at session start | Continuous authorization per action |
| Time horizon | Session-bounded | Decision-bounded |
| Failure mode | Unauthorized entry | Authorized action with no accountability |
| Built for | Humans, static workloads | Autonomous, decision-making agents |
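The contrast in the table can be sketched in a few lines of Python. This is a minimal, hypothetical per-action check — every name here is invented for illustration — but it shows the shape of decision-bounded authorization: evaluated on each call, with revocation taking effect mid-session rather than at the next login.

```python
from dataclasses import dataclass, field

@dataclass
class AgentContext:
    agent_id: str
    delegated_by: str                  # the accountable human or organization
    scopes: set = field(default_factory=set)
    revoked: bool = False

def authorize(ctx: AgentContext, action: str, resource: str) -> bool:
    """Authority control: every action is checked, not just the first login."""
    if ctx.revoked:                    # revocation applies mid-session
        return False
    return f"{action}:{resource}" in ctx.scopes

# Session-start access control would have admitted this agent once and
# never re-checked. Here, each action is a fresh decision.
agent = AgentContext("agent-7", "alice@example.com",
                     scopes={"read:crm", "write:crm"})
assert authorize(agent, "read", "crm")
agent.revoked = True                   # posture drifts mid-session
assert not authorize(agent, "read", "crm")
```

The interesting line is the `revoked` check: in an access-control model there is nowhere for that signal to land once the session exists.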

If identity is the control plane, what does an identity-first architecture actually require?

What an Identity-First Control Plane Requires

An identity-first agentic AI control plane architecture requires four capabilities working together: verified agent identity, scoped and revocable authority, continuous monitoring, and unified governance across heterogeneous agents. Identity is the primary lever, but it does not work alone. This is where most AI implementation efforts go wrong — they pick one of the four and call it a control plane.

A control plane, properly defined, is the coordination and governance layer that translates intelligence into authorized enterprise action [8]. Forrester describes it as the infrastructure that inventories, governs, orchestrates, and assures heterogeneous AI agents across vendors and domains [2]. AWS frames it as a single pane of glass across the operational, management, and orchestration mechanisms that span an environment's tenants [7]. The capabilities below are how that pane gets built.

  1. Verified agent identity. Every agent has an auditable identity with provenance, capabilities, and compliance status before it accesses any resource [11]. No identity, no action. This goes beyond simple authentication to include the full identity chain back to the issuing authority.
  2. Scoped, revocable authority. Authorization is dynamic and continuous, not granted once at session start. JWT, OAuth2, and OIDC handle the initial grant; ABAC and policy-as-code handle the dynamic case; instant revocation triggers when posture drifts. Privileged access management is evolving toward exactly this role [4] — treating bots and autonomous agents like privileged users by enforcing least privilege.
  3. Continuous monitoring and observability. Every action is logged and explained in real time. McKinsey argues governance can no longer be a periodic, paper-heavy exercise [6]; as agents operate continuously, governance must become real-time, data-driven, and embedded — with humans holding final accountability. Some organizations are embedding control agents — critic agents that challenge outputs, guardrail agents that enforce policy, compliance agents that monitor regulation — directly into agent workflows [6].
  4. Unified governance across vendors. This is the gap. Forrester identifies three categories of missing standards keeping enterprises from implementing the control plane as a portable, vendor-agnostic governance layer [3]: incomplete instrumentation, absent portable agent identity, and missing cross-plane governance schemas. Until those mature, "unified" means policy you wrote, enforced consistently across systems you control.
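Capability 2 — the dynamic case — is what policy-as-code looks like in practice. The sketch below is a minimal ABAC evaluator with rules expressed as data and a default-deny fallback; the rule shapes and attribute names are invented for illustration and don't correspond to any particular policy engine's syntax.

```python
# Minimal ABAC sketch: policy is data, evaluated on every request,
# default deny. Rule and attribute names are hypothetical.
POLICY = [
    {"effect": "allow", "when": {"team": "support", "action": "read",
                                 "classification": "internal"}},
    {"effect": "deny",  "when": {"action": "delete"}},
]

def evaluate(agent_attrs: dict, action: str, resource_attrs: dict) -> bool:
    # Merge agent attributes, the requested action, and resource
    # attributes into one request context, then match rules in order.
    request = {**agent_attrs, "action": action, **resource_attrs}
    for rule in POLICY:                # first matching rule wins
        if all(request.get(k) == v for k, v in rule["when"].items()):
            return rule["effect"] == "allow"
    return False                       # no match: default deny

assert evaluate({"team": "support"}, "read", {"classification": "internal"})
assert not evaluate({"team": "support"}, "delete", {"classification": "internal"})
assert not evaluate({"team": "sales"}, "read", {"classification": "internal"})
```

Because the policy is data rather than code paths, revocation or tightening is an edit to `POLICY`, applied to the very next request — which is the whole point of "continuous, not session-start."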

Where MCP fits. The Model Context Protocol standardizes how LLMs invoke actions and use tools, and how applications return context to LLMs [9]. Some commentary frames it as "the Kubernetes for language models" [10], bringing order to agent orchestration. But MCP is the tool execution surface the identity layer governs — not a substitute for identity. An identity-based control plane decides whether and under what constraints MCP invocations are authorized. MCP is governed; it does not govern.
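That governed/governing relationship can be sketched as a gate: the identity layer decides first, and only then does the tool-execution surface run. Everything below — the authorizer, the executor, the tool names — is a hypothetical stand-in, not real MCP plumbing.

```python
from typing import Callable

def governed_invoke(agent_id: str, tool: str, args: dict, *,
                    is_authorized: Callable[[str, str], bool],
                    execute: Callable[[str, dict], str]) -> str:
    """Identity layer decides; tool execution (the MCP-shaped part)
    only runs if the decision is yes."""
    if not is_authorized(agent_id, tool):
        raise PermissionError(f"{agent_id} not authorized for tool {tool!r}")
    return execute(tool, args)

# Hypothetical allow-list standing in for a real policy decision point.
allowed = {("agent-7", "search_tickets")}
result = governed_invoke(
    "agent-7", "search_tickets", {"q": "refund"},
    is_authorized=lambda a, t: (a, t) in allowed,
    execute=lambda tool, args: f"ran {tool}",
)
assert result == "ran search_tickets"
```

The design point: `execute` never sees a request the identity layer hasn't approved, which is the concrete meaning of "MCP is governed; it does not govern."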

McKinsey is blunt about what happens without this [6]: without upgrading inventory, identity management, and observability, organizations don't deploy governed agents — they scale unknown risk.

Two emerging patterns push this architecture further: cryptographic binding of authority, and zero-trust adapted for agents.

The Emerging Pattern: Cryptographic Authority and Zero-Trust for Agents

The next wave of agent identity is already moving beyond bearer tokens — and the patterns are worth knowing now, before they become table stakes. Authority is being cryptographically bound to the device, container, or runtime where the agent executes, and zero-trust frameworks are being adapted so every agent action requires explicit, verified authorization. These patterns are emerging, not yet universal, and that distinction matters.

Hardware-bound identity is the most concrete of these patterns. The agent's private key is cryptographically tied to its device, VM, container, or runtime via TPM 2.0 or equivalent [12], and every AI call carries a signed proof-of-origin (DPoP-style) bound to the request itself. The practical effect: a stolen token is inert. Without the workload's private key, there is no valid proof and no action [12]. Entrust frames the broader requirement directly [1]: authority must be cryptographically bound, issued from hardware roots of trust, expressed as verifiable credentials, and post-quantum ready by default.
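The mechanics of proof-of-possession are easy to see in miniature. Real DPoP (RFC 9449) uses an asymmetric keypair and JWTs; the sketch below substitutes HMAC so it stays stdlib-only, and the key, method, and URL are all invented — the shape of the argument is what matters: the proof is bound to this request, and without the key a stolen token produces nothing valid.

```python
import hashlib
import hmac
import json
import time

# Hypothetical workload key; in the real pattern this never leaves the
# TPM / secure enclave, and signing is asymmetric rather than HMAC.
WORKLOAD_KEY = b"held-in-hardware-never-exported"

def sign_request(method: str, url: str, key: bytes) -> dict:
    """Produce a proof bound to this specific request (DPoP-style)."""
    payload = json.dumps({"htm": method, "htu": url, "iat": int(time.time())})
    proof = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "proof": proof}

def verify(req: dict, key: bytes) -> bool:
    expected = hmac.new(key, req["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, req["proof"])

req = sign_request("POST", "https://api.example.com/v1/act", WORKLOAD_KEY)
assert verify(req, WORKLOAD_KEY)
# A stolen bearer token alone is useless: no key, no valid proof.
assert not verify(req, b"attacker-without-the-key")
```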

Decentralized Identifiers (DIDs) and Verifiable Credentials (VCs) push the model further. An agent's identity, in this design, consists of a globally unique DID and a structured set of verifiable attributes organized into provenance, capabilities, and compliance dimensions [13]. This is academic and standards-track today, not commodity infrastructure. But it is the direction portable agent identity is heading.
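Those three dimensions can be pictured as a credential-shaped record. The layout below is illustrative only — the field names follow the provenance/capabilities/compliance breakdown just described, not the W3C VC data model verbatim, all DIDs and values are made up, and a real credential would carry the issuer's signature.

```python
# Illustrative VC-shaped record for an agent. Everything here is
# hypothetical; a real credential would be cryptographically signed.
agent_credential = {
    "id": "did:example:agent-7",
    "issuer": "did:example:acme-corp",
    "credentialSubject": {
        "provenance":   {"built_by": "did:example:acme-corp",
                         "runtime": "container-abc"},
        "capabilities": ["crm:read", "tickets:search"],
        "compliance":   {"soc2_attested": True},
    },
}

def permits(credential: dict, capability: str) -> bool:
    """Check a requested capability against the credential's claims."""
    return capability in credential["credentialSubject"]["capabilities"]

assert permits(agent_credential, "crm:read")
assert not permits(agent_credential, "crm:delete")
```

The portability argument falls out of the structure: any verifier that trusts the issuer can evaluate the same record, without a shared IAM deployment.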

The Cloud Security Alliance's Agentic Trust Framework applies zero-trust principles to agents directly [11]: every agent must have a verified, auditable identity before accessing any resource, and traditional IAM systems like OAuth and SAML do not handle dynamic, interdependent, ephemeral agents at scale. Continuous posture verification is the operational expression of this — access downgrades automatically if executable integrity, image provenance, or sandbox state drifts.
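Automatic downgrade-on-drift is a small function once you see it. In this sketch the posture signals mirror the three named above, but the signal names and the read-only fallback policy are assumptions for illustration — a real system would attest these signals from the runtime rather than take a dict at face value.

```python
# Hypothetical posture signals, mirroring the three drift conditions
# named above. A real system would attest these, not trust a dict.
REQUIRED_POSTURE = ("binary_integrity", "image_provenance", "sandbox_intact")

def effective_scopes(posture: dict, granted: set) -> set:
    """Continuous posture check: any drift downgrades access automatically."""
    if all(posture.get(signal) for signal in REQUIRED_POSTURE):
        return granted
    # Degraded mode (an illustrative policy choice): read-only survives.
    return {s for s in granted if s.startswith("read:")}

healthy = {"binary_integrity": True, "image_provenance": True,
           "sandbox_intact": True}
drifted = {**healthy, "sandbox_intact": False}

assert effective_scopes(healthy, {"read:crm", "write:crm"}) \
    == {"read:crm", "write:crm"}
assert effective_scopes(drifted, {"read:crm", "write:crm"}) == {"read:crm"}
```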

Three emerging standards to track:

  • CSA Agentic Trust Framework — zero-trust governance specifically for AI agents.
  • DIDs and Verifiable Credentials — portable, cryptographically verifiable agent identity.
  • Hardware-bound identity (TPM 2.0, DPoP-style proofs) — credentials inseparable from the runtime.

None of these are commodity yet. The honest read is that they are the patterns serious teams should evaluate now so the architecture decisions made at five agents do not have to be redone at five hundred.

For founder-led firms deploying agents on client work, the practical question is where to start.

What This Means for Founder-Led Firms Deploying Agents

If your firm is deploying agents on client engagements or internal workflows, the practical question is whether you have a control plane or just a collection of credentials. The first is governable. The second scales unknown risk. This is the architecture decision for founders that compounds fastest — what you build at five agents determines feasibility at five hundred.

Most firms today have two to five agents in production. That is exactly the right scale to architect properly. Before complexity arrives, the practical starting point is straightforward: build an agent inventory, give every agent a verified identity, scope its permissions tightly, and log every action. AWS describes the same pattern from the multi-tenant side [7] — agent onboarding requires tenant identity, per-tenant resource provisioning, tiering, and policy configuration. None of that is exotic; what's exotic is doing it consistently across every agent before the count gets away from you.
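At two to five agents, the inventory step can literally start as a data structure. The sketch below shows the minimum record the starting point above implies — identity, accountable owner, scoped permissions, a log destination — with every field name invented for illustration; the point is the discipline, not the schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentRecord:
    agent_id: str
    owner: str      # the accountable human, always
    issuer: str     # who issued this agent's verified identity
    scopes: tuple   # tightly scoped permissions, nothing broad
    log_sink: str   # where every action gets logged

inventory: dict = {}

def register(record: AgentRecord) -> None:
    """No identity, no action: an unregistered agent gets no credentials."""
    if record.agent_id in inventory:
        raise ValueError(f"duplicate agent id: {record.agent_id}")
    inventory[record.agent_id] = record

register(AgentRecord("agent-7", "alice@example.com", "internal-ca",
                     ("read:crm",), "audit-log"))
assert "agent-7" in inventory
```

When the count grows, this record is what migrates into a real control plane; skipping it now is what makes that migration a cleanup project later.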

Four questions for any agent control plane vendor:

  1. Does the platform issue and verify portable agent identity, or does each agent rely on credentials stitched together at deployment?
  2. Can authority be revoked in real time when posture drifts, or only at the next credential rotation?
  3. Is there a unified audit trail across systems the agent touches, or are logs trapped in each tool's silo?
  4. Does the platform support human-in-the-loop checkpoints for high-risk actions, or only post-hoc review?

The firms that architect identity-first now will not have to rearchitect when their agent count goes from five to five hundred. And the framing matters: agents are intellectual augmentation, not autonomous replacements. Humans remain accountable for what agents do in their name — and the control plane is how that accountability survives at machine speed.

If mapping these decisions to specific client workflows and engagement commitments feels like a full-time job on its own, that's exactly the kind of problem an outside AI implementation partner can move through faster than building the answer in-house. The architecture is not the hard part. The hard part is making it specific to your firm before you have hundreds of agents to clean up.

FAQ: Common Questions on Agent Control Planes

What is an agentic AI control plane?

An agentic AI control plane is the governance and coordination layer that inventories, governs, orchestrates, and assures heterogeneous AI agents across vendors and domains [2]. AWS describes it as a single pane of glass for the operational, management, and orchestration mechanisms that span an environment [7]. In practical terms, it is what translates agent intelligence into authorized enterprise action [8].

Why is identity the control plane instead of orchestration?

Because identity is the only consistent thread tying autonomous agent actions back to a responsible human or organization [5]. Orchestration moves work between agents. Identity is what makes any of that work attributable, revocable, and accountable. Token Security frames the endgame directly [5]: the agentic enterprise will be won by the cryptographic trust layer that binds autonomous authority to accountable humans.

Can traditional firewalls secure AI agents?

Not on their own. Agents operate inside the perimeter with valid credentials and hit valid APIs [4] — the threat model that firewalls were built for does not match the workload. Firewalls remain useful as one layer; they are not a control plane for agents.

Can we just use OAuth for agent authorization?

OAuth, OIDC, and SAML were designed for humans who log in, do work, and log out [5], not autonomous decision loops. Traditional IAM systems built around them lack the capability to handle dynamic, interdependent, and ephemeral AI agents at scale [11]. They are useful primitives inside an identity-first architecture, not a substitute for one.

What happens if an agent's credentials are stolen?

Without hardware binding and real-time revocation, a compromised agent becomes a high-speed insider executing privileged actions at machine speed [1]. With hardware-bound identity and DPoP-style proofs, stolen tokens are inert — they can't generate valid cryptographic proofs without the workload's private key [12].

How does MCP relate to a control plane?

The Model Context Protocol standardizes how LLMs invoke actions and use tools [9]. An identity-based control plane governs whether and under what constraints MCP invocations are authorized. MCP is the tool execution surface; identity is the governance layer over it.

Conclusion: Architecture Decisions That Compound

An agentic AI control plane is not optional infrastructure. It is the architectural decision that determines whether agent deployments stay safe, auditable, and scalable as their numbers and authority grow. Identity is the primary lever; orchestration and observability complete the trinity.

The decision compounds. The architecture chosen at five agents determines feasibility at five hundred — and the firms that get this right treat identity as a governance question now, not a security ticket later. Humans remain responsible for what agents do in their name.

Authority delegated without identity is unaccounted authority. The control plane is how delegation stays accountable at machine speed.

References

  1. Entrust, "AI Agent Identity: The Missing Control Plane for Agentic AI" (2026) — https://www.entrust.com/blog/2026/04/the-agentic-enterprise-needs-a-new-control-plane
  2. Forrester Research, "Announcing Our Evaluation of the Agent Control Plane Market" (2025) — https://www.forrester.com/blogs/announcing-our-evaluation-of-the-agent-control-plane-market/
  3. Forrester Research, "Agent Control Planes Still Need a Robust Standards Stack" (2025) — https://www.forrester.com/blogs/agent-control-planes-still-need-a-robust-standards-stack/
  4. Security Boulevard, "Why Privileged Access Is Becoming the Control Plane for Agentic AI" (2026) — https://securityboulevard.com/2026/04/why-privileged-access-is-becoming-the-control-plane-for-agentic-ai/
  5. Token Security, "Why AI Agent Identity Is the New Control Plane for Enterprise Security" (2026) — https://www.token.security/blog/why-ai-agent-identity-is-the-new-control-plane-for-enterprise-security
  6. McKinsey & Company, "Agentic AI Governance for Autonomous Systems" (2025) — https://www.mckinsey.com/capabilities/risk-and-resilience/our-insights/trust-in-the-age-of-agents
  7. Amazon Web Services, "Employing Control Planes in Agentic Environments" (2025) — https://docs.aws.amazon.com/prescriptive-guidance/latest/agentic-ai-multitenant/employing-control-planes-in-agentic-environments.html
  8. Snowflake, "Powering the Era of the Agentic Enterprise" (2025) — https://www.snowflake.com/en/blog/agentic-enterprise-control-plane/
  9. Anthropic, "Introducing the Model Context Protocol" (2024) — https://www.anthropic.com/news/model-context-protocol
  10. Vectara, "MCP: The Control Plane of Agentic AI" (2025) — https://www.vectara.com/blog/mcp-the-control-plane-of-agentic-ai
  11. Cloud Security Alliance, "Agentic Trust Framework: Zero Trust for AI Agents" (2026) — https://cloudsecurityalliance.org/blog/2026/02/02/the-agentic-trust-framework-zero-trust-governance-for-ai-agents
  12. Beyond Identity, "The Attacker Gave Claude Their API Key: Why AI Agents Need Hardware-Bound Identity" (2025) — https://www.beyondidentity.com/resource/the-attacker-gave-claude-their-api-key-why-ai-agents-need-hardware-bound-identity
  13. Dock.io, "AI Agent Digital Identity Verification" (2025) — https://www.dock.io/post/ai-agent-digital-identity-verification
