TL;DR
- AI agents turn “one request” into delegation chains across tools, so auth must verify who, on-behalf-of, consent, purpose, scope, and time.
- The biggest failures are not “bad RBAC.” They are lost context, standing service accounts, and unverifiable delegation.
- You can fix this with purpose-bound, goal-scoped authorization, human approvals for high-risk steps, and semantic audit trails, without rewriting your whole stack.
The real shift: agents create a new trust model
AI agent security is not just about stopping “bad prompts.” It is about managing consented delegation under uncertainty, where the system must trust three parties at once:
- Workflow owners (accountable for what the agent can do)
- Agent users (expect the agent to act only with their consent)
- Agents and tools (which can be spoofed, injected, or delegated incorrectly)
Traditional authorization mostly answers: “Is this identity allowed?”
Agentic authorization must also answer: “Is this action still the user’s consented intent, in this workflow, at this step, through this delegation chain?”
What BodySnatcher exposed (and why it matters)
The BodySnatcher vulnerability (CVE-2025-12420) is a sharp example of the gap: attackers could impersonate users and trigger agentic workflows with the impersonated user’s entitlements, impacting components like Now Assist AI Agents and the Virtual Agent API.
Even when authentication exists, agentic workflows multiply the blast radius:
- A single impersonation can become many downstream actions.
- Tool calls happen across systems that do not share a unified view of why this is happening.
- Default configurations can turn “run a workflow” into “create persistence” in the wrong conditions.
The takeaway: authentication is necessary, but it is not sufficient when the system is executing long-running, multi-step delegated actions.
Why traditional authorization breaks for AI agents
RBAC and ABAC are not “wrong.” The problem is that most implementations assume direct, human-driven requests, not autonomous orchestration.
The core gaps
- Standing privileges vs. goal-scoped access: agents often run under broad service identities “just in case.”
- Delegation without proof: downstream tools cannot verify who delegated what to whom.
- Consent drift: the agent keeps acting after the user’s consent is stale or ambiguous.
- Context loss across tools: the baton gets dropped between agent, tool server, and microservice.
- Audit trails without meaning: logs show what happened, not whether it matched purpose.
The complexities that make this hard (and real)
- Multi-agent handoffs and tool delegation
- Long-running workflows (minutes or hours)
- Cross-tenant and multi-workspace boundaries
- “Read plus act” chains (RAG plus privileged actions)
- Tool ecosystems with inconsistent auth standards
A practical definition: what “AI agent authorization” must guarantee
AI agent authorization verifies that an action is delegated, consented, purpose-bound, least-privilege scoped, and time-bounded, and that this proof survives tool-to-tool hops.
Think of each tool call as needing an authorization envelope:
- Actor: the human user identity (original delegator)
- Delegate: the agent identity (which model, which instance)
- Purpose: the declared intent (for example, support_ticket_resolution)
- Scope: which resources are in-bounds (for example, ticket_id=9812, customer_id=123)
- TTL: how long the delegation is valid
- Chain: prior delegations, sub-agents, tool calls (provenance)
If any of these are missing, enforcement becomes guesswork.
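One way to make the envelope concrete is as a structured object attached to every tool call. A minimal sketch, with illustrative (not standardized) field names:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class AuthorizationEnvelope:
    """Delegation proof carried with every tool call (field names are illustrative)."""
    actor: str            # original human delegator, e.g. "user:alice"
    delegate: str         # agent identity, e.g. "agent:support-bot"
    purpose: str          # declared intent, e.g. "support_ticket_resolution"
    scope: dict           # in-bounds resources, e.g. {"ticket_id": "9812"}
    expires_at: datetime  # TTL: when the delegation stops being valid
    chain: list = field(default_factory=list)  # prior delegations and sub-agent hops

    def is_expired(self, now=None):
        return (now or datetime.now(timezone.utc)) >= self.expires_at

env = AuthorizationEnvelope(
    actor="user:alice",
    delegate="agent:support-bot",
    purpose="support_ticket_resolution",
    scope={"ticket_id": "9812", "customer_id": "123"},
    expires_at=datetime.now(timezone.utc) + timedelta(minutes=15),
)
```

An enforcement point that receives an envelope missing any field (or with `is_expired()` true) should deny rather than guess.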
Principles for robust AI agent authorization
1) Declare and validate purpose (make “intent” enforceable)
Do not log “intent” as prose. Make it an attribute:
purpose="refund_order" order_id="123" risk="high"
Then validate: can this agent perform this action for this purpose on this resource right now?
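That validation can be sketched as a lookup against a purpose-to-allowed-actions mapping. The table below is hypothetical; in a real system this policy lives in your policy engine, not in application code:

```python
# Hypothetical purpose-to-allowed-actions table (illustrative action names).
PURPOSE_POLICY = {
    "refund_order": {"orders:read", "payments:refund"},
    "support_ticket_resolution": {"tickets:read", "tickets:comment"},
}

def allowed_for_purpose(purpose, action):
    """Permit an action only if the declared purpose covers it."""
    return action in PURPOSE_POLICY.get(purpose, set())

allowed_for_purpose("refund_order", "payments:refund")               # True
allowed_for_purpose("support_ticket_resolution", "payments:refund")  # False
```

The key point: an agent with broad credentials still cannot refund a payment while its declared purpose is ticket resolution.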
2) Propagate identity and delegation chain end-to-end
Every hop should carry the actor, delegate, purpose, and scope so downstream checks do not degrade into “service account says yes.”
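A minimal way to carry that context between hops is to serialize it into a header that every service forwards. The header name here is hypothetical, and a production system would sign the payload (for example, as a JWT) so downstream services can verify it; the unsigned base64 below only illustrates the shape:

```python
import base64
import json

HEADER = "X-Delegation-Envelope"  # hypothetical header name

def envelope_headers(envelope):
    """Serialize the delegation context so every downstream hop can forward it."""
    payload = base64.urlsafe_b64encode(json.dumps(envelope).encode()).decode()
    return {HEADER: payload}

def read_envelope(headers):
    """Downstream services recover the actor, delegate, purpose, and scope."""
    return json.loads(base64.urlsafe_b64decode(headers[HEADER]))

hop = envelope_headers({"actor": "user:alice", "delegate": "agent:support-bot",
                        "purpose": "refund_order", "scope": {"order_id": "123"}})
```

With this in place, a microservice three hops deep can still check the original delegator instead of trusting a shared service account.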
3) Use goal-scoped permissions (not role-scoped)
Grant access only for the duration and scope of the goal, and revoke it automatically when the goal completes.
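A sketch of a goal-scoped grant, assuming an in-process grant object (a real deployment would enforce this in the policy layer): access is limited to the goal's resources, expires on a TTL, and is revoked when the goal completes.

```python
import time

class GoalScopedGrant:
    """Access bound to one goal: expires on TTL, revoked on goal completion."""
    def __init__(self, scope, ttl_seconds):
        self.scope = scope
        self.expires_at = time.monotonic() + ttl_seconds
        self.revoked = False

    def permits(self, resource):
        return (not self.revoked
                and time.monotonic() < self.expires_at
                and resource in self.scope)

    def complete_goal(self):
        self.revoked = True  # automatic revocation when the goal finishes

grant = GoalScopedGrant(scope={"order:123"}, ttl_seconds=300)
grant.permits("order:123")  # permitted while the goal is active
grant.complete_goal()
grant.permits("order:123")  # denied after completion
```

Contrast this with a standing service account: there is nothing left to abuse once the goal is done.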
4) Put approvals where the blast radius is real
High-risk actions (create admin, export data, delete, change billing) should require explicit approval even if low-risk steps remain autonomous.
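A step-up approval gate can be as simple as classifying actions by risk and requiring an approval callback for the high-risk set. The action names and the `request_approval` callback are illustrative:

```python
# Hypothetical risk classification; action names are illustrative.
HIGH_RISK = {"users:create_admin", "data:export", "records:delete", "billing:change"}

def execute(action, perform, request_approval):
    """Run low-risk steps autonomously; gate high-risk steps on explicit approval."""
    if action in HIGH_RISK and not request_approval(action):
        return "blocked: approval denied"
    return perform()

execute("tickets:comment", lambda: "done", lambda a: False)  # runs autonomously
execute("data:export", lambda: "done", lambda a: False)      # blocked without approval
```

In practice, `request_approval` would page a human (Slack, ticketing, etc.) and block or checkpoint the workflow until they respond.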
5) Produce semantic audit trails (who, what, why)
Store decision inputs and outcomes so investigations can answer:
“Who delegated this? What was the purpose? Was it within scope?”
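A sketch of what a semantic audit entry can capture, assuming a simple JSON log format (field names are illustrative):

```python
import json
from datetime import datetime, timezone

def audit_record(delegator, agent, action, resource, purpose, scope, decision, reason):
    """A semantic audit entry: decision inputs and outcome, not just the API call."""
    return json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "delegator": delegator,   # who delegated this?
        "agent": agent,
        "action": action,
        "resource": resource,
        "purpose": purpose,       # what was the purpose?
        "scope": scope,           # was it within scope?
        "decision": decision,
        "reason": reason,
    })
```

An investigator reading this entry can answer who, what, and why without reconstructing the workflow from raw API logs.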
What this means in practice
For developers
You want speed and safety, without hand-rolling an authorization engine.
Patterns that work:
- Purpose-bound checks before every tool call (not just at workflow start)
- Short-lived delegation sessions (TTL plus scope)
- Step-up approvals for risky actions
- Guardrails as policy (not in prompts)
Permit’s policy check model is designed for this: ask a PDP, “Can X do Y on Z with context C?” and enforce the answer consistently.
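The question each enforcement point asks can be sketched as plain data. Permit's SDK check call follows the same "user, action, resource, context" shape, but the real call needs a running PDP and credentials, so this sketch only builds the decision request (field names are illustrative):

```python
def pdp_question(user, action, resource, context):
    """Build the decision request asked on every tool call: can X do Y on Z with C?"""
    return {"user": user, "action": action, "resource": resource, "context": context}

request = pdp_question(
    user="user:alice",
    action="refund",
    resource="order:123",
    context={"purpose": "refund_order", "delegate": "agent:support-bot"},
)
```

The important property is that the same question, including purpose and delegate, is asked at every hop, so enforcement does not degrade as the workflow fans out.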
For IAM and Security teams
You need governance, auditability, and clear ownership.
Controls that matter most:
- Centralized policies with least-privilege defaults
- Separation of duties for high-risk actions
- Visibility into delegation chains (not just API calls)
- Decision logging for investigations and compliance
Permit’s audit log capabilities help you operationalize that visibility.
How agent.security (powered by Permit.io) addresses this with little-to-no pre-setup
agent.security is built around the realities above. It creates persistent identities for agents bound to their human delegators, maintains continuous consent, and derives least-privilege permissions from that relationship, so you can control agents without months of bespoke plumbing.
Where it is especially useful:
- Rapidly bringing “on-behalf-of,” consent, and provenance into agent workflows
- Enforcing trust boundaries across multi-agent and multi-tool environments
- Giving workflow owners and security teams a shared control plane
Instead of asking every app team to reinvent delegation semantics, you standardize the model once.
How Permit.io fits under the hood
Permit is the authorization layer that makes these controls enforceable across services:
- ABAC lets you encode purpose, scope, and TTL-style rules as attributes.
- ReBAC models relationships (user, org, resource, agent) so “who can act on what” follows real-world structure.
- Audit logs plus decision context support investigations and governance.
- AI workflow integrations make checks practical rather than theoretical.
Quick wins checklist (do this before you “redesign auth”)
- Inventory agents, tools, and the actions they can perform
- Classify actions into low, medium, and high risk
- Replace standing privileges with goal-scoped access (TTL plus scope)
- Require approvals for high-risk actions
- Log purpose, scope, delegator, and delegate for every sensitive decision
- Run incident drills: “Can we answer who, what, why in 5 minutes?”
FAQ
What is delegated authorization for AI agents?
It is authorization that proves an agent action is permitted on behalf of a user, with explicit scope, purpose, and time bounds, plus a verifiable delegation chain.
What’s the difference between intent and scope?
Intent or purpose is why the action is happening (for example, “refund order”). Scope is what it may touch (for example, order_id=123 only).
How do you prevent agents from using standing privileges?
Use short-lived, goal-scoped permissions (TTL) and evaluate every tool call against purpose and scope.
When should you add human-in-the-loop approvals?
For high-blast-radius actions: exporting sensitive data, deleting records, creating privileged users, changing access controls, financial operations.
How do you audit agents with “why” context?
Log the authorization decision inputs: delegator, agent identity, action, resource, purpose, scope, TTL, and the decision rationale.
Get started (without boiling the ocean)
If you want to prototype this fast:
- Start with Permit’s quickstart and first policy checks.
- Implement a purpose-bound ABAC rule.
- Add relationships for “on-behalf-of” delegation with ReBAC.
- Turn on audit logging and wire it to your logging pipeline.
- For the fastest path to agent-specific controls and continuous consent semantics, evaluate agent.security.
Written by
Or Weis
Co-Founder / CEO at Permit.io