AI agents are no longer chatbots. They book travel, move money, file JIRA tickets, manage cloud infrastructure, and send emails on your behalf. The question every engineering team eventually hits is the same: how do you authorize an AI agent to act without giving it the keys to the kingdom? This guide covers everything you need to know about AI agent authorization in 2026 — why it matters, what the current approaches get wrong, how delegated authorization works, and how to implement it with working code.

Why AI Agents Need Authorization

A traditional API integration has a developer, a credential, and a service. The developer writes the code, decides which endpoints to call, and ships it. The credential never makes autonomous decisions.

AI agents break this model. An agent decides at runtime which tools to call, which APIs to hit, and which actions to take. The same agent might read your calendar one moment and try to delete a database table the next — all within a single conversation. Without proper authorization, the only thing standing between an agent and a catastrophic action is the LLM’s judgment. That is not a security model. That is a prayer.

Three specific problems make AI agent authorization different from traditional API auth:
  1. Dynamic tool selection. You cannot predict at build time which tools an agent will invoke. The agent decides based on the user’s request and its own reasoning.
  2. Delegation chains. A parent agent may spawn sub-agents, each of which needs its own narrowly scoped permissions. API keys do not support this.
  3. Auditability. When an agent performs an action, you need to know which agent, which user approved it, what scopes were granted, and when the permission expires. API keys give you none of this.
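The delegation-chain problem can be made concrete: a sub-agent's grant must be a strict subset of its parent's. A minimal sketch in TypeScript (the `narrowScopes` helper and the scope strings are illustrative, not part of any SDK):

```typescript
// Illustrative helper: a child grant may only contain scopes the parent already holds.
function narrowScopes(parentScopes: string[], requested: string[]): string[] {
  const parent = new Set(parentScopes);
  const denied = requested.filter((scope) => !parent.has(scope));
  if (denied.length > 0) {
    throw new Error(`Cannot delegate scopes the parent lacks: ${denied.join(', ')}`);
  }
  return requested;
}

// A parent agent with read/create access spawns a read-only sub-agent.
const parentScopes = ['expenses:read', 'expenses:create'];
const childScopes = narrowScopes(parentScopes, ['expenses:read']);
console.log(childScopes); // ['expenses:read']

// Attempting to delegate a scope the parent lacks throws before any token is issued.
// narrowScopes(parentScopes, ['expenses:delete']); // Error
```

With raw API keys there is nothing to narrow: every sub-agent that receives the key receives everything.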

The Problem with API Keys

Most agent frameworks today tell you to pass your API keys as environment variables. The agent gets the same access as the key holder — typically full read/write access to the service. Here is what goes wrong:
| Problem | API Keys | Delegated Auth |
| --- | --- | --- |
| Scope | All-or-nothing access | Per-action scoped permissions |
| Revocation | Rotate the entire key | Revoke individual agent grants instantly |
| Audit trail | "Someone used the key" | "Agent X did action Y, approved by user Z" |
| Delegation | Share the same key (dangerous) | Parent grants child a strict subset of permissions |
| Expiry | Manual rotation | Automatic time-limited tokens |
| Identity | Key identifies the developer | Token identifies the agent, user, and developer |
API keys were designed for server-to-server communication where a human developer controls every call. Using them for autonomous agents is like giving your car keys to a stranger and hoping they only drive to the grocery store.

How Delegated Authorization for AI Agents Works

Delegated authorization follows a pattern similar to OAuth 2.0, but extended for agent-specific requirements. The flow works like this:
  1. Agent registration. The agent gets its own cryptographic identity (a DID). This is not a shared credential — it uniquely identifies this agent instance.
  2. Authorization request. The agent requests specific permissions (scopes) from a human user. The user sees exactly what the agent is asking for.
  3. Consent. The user reviews the requested scopes and approves or denies. This is the human-in-the-loop step that API keys skip entirely.
  4. Token exchange. After approval, the agent receives a grant token — a signed JWT containing the approved scopes, the agent identity, the user identity, and an expiry time.
  5. Enforcement. On every tool call, the token’s scopes are checked against the tool’s required permission. If the agent tries to call a tool outside its granted scopes, the call is rejected before execution.
  6. Revocation. The user (or an admin) can revoke the grant at any time. The agent’s access stops immediately.
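The grant token from step 4 and the check from step 5 can be pictured together. A minimal sketch of the token's claims and the per-call enforcement logic, with claim names assumed for illustration (the actual token format is defined by the protocol, not by this snippet):

```typescript
// Illustrative grant-token claims; the field names here are assumptions.
interface GrantClaims {
  sub: string;       // agent identity, e.g. a DID like 'did:grantex:ag_01ABC'
  principal: string; // the user who approved the grant
  scopes: string[];  // the approved permissions
  exp: number;       // expiry as unix seconds
}

// Step 5 in miniature: a tool call is allowed only if the grant is
// unexpired AND contains the tool's required scope.
function isAllowed(claims: GrantClaims, requiredScope: string, now = Date.now() / 1000): boolean {
  return claims.exp > now && claims.scopes.includes(requiredScope);
}

const claims: GrantClaims = {
  sub: 'did:grantex:ag_01ABC',
  principal: 'user_alice',
  scopes: ['expenses:read', 'expenses:create'],
  exp: Math.floor(Date.now() / 1000) + 3600, // expires in one hour
};

console.log(isAllowed(claims, 'expenses:read'));   // true
console.log(isAllowed(claims, 'expenses:delete')); // false
```

Revocation (step 6) works because the token is checked on every call: once the grant is revoked server-side, verification fails regardless of the expiry still being in the future.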

Implementing AI Agent Authorization with Grantex

Grantex is an open protocol (Apache 2.0) that implements this entire flow. Here is a working example in TypeScript:
import { Grantex } from '@grantex/sdk';

const gx = new Grantex({
  apiKey: process.env.GRANTEX_API_KEY,
  baseUrl: 'https://grantex-auth-dd4mtrt2gq-uc.a.run.app',
});

// 1. Register the agent with its own identity
const agent = await gx.agents.create({
  name: 'expense-report-agent',
  description: 'Files expense reports and reads receipts',
});

// 2. Request scoped authorization from the user
const auth = await gx.authorize({
  agentId: agent.id,
  userId: 'user_alice',
  scopes: ['expenses:read', 'expenses:create'],
  callbackUrl: 'https://app.example.com/callback',
});
// Redirect user to auth.consentUrl — they see the exact scopes

// 3. After user approves, exchange the code for a grant token.
// `callbackCode` is the authorization code delivered to your callbackUrl.
const token = await gx.tokens.exchange({
  code: callbackCode,
  agentId: agent.id,
});

// 4. Verify the token before every action
const result = await gx.tokens.verify(token.grantToken);
console.log(result.scopes);    // ['expenses:read', 'expenses:create']
console.log(result.agent);     // 'did:grantex:ag_01ABC...'
console.log(result.principal); // 'user_alice'
console.log(result.expiresAt); // '2026-04-06T00:00:00Z'
The same flow in Python:
import os

from grantex import Grantex

gx = Grantex(
    api_key=os.environ["GRANTEX_API_KEY"],
    base_url="https://grantex-auth-dd4mtrt2gq-uc.a.run.app",
)

agent = gx.agents.create(name="expense-report-agent", description="Files expense reports")

auth = gx.authorize(
    agent_id=agent.id,
    user_id="user_alice",
    scopes=["expenses:read", "expenses:create"],
    callback_url="https://app.example.com/callback",
)

# callback_code is the authorization code delivered to your callback_url
token = gx.tokens.exchange(code=callback_code, agent_id=agent.id)

result = gx.tokens.verify(token.grant_token)
# result.scopes == ['expenses:read', 'expenses:create']

Scope Enforcement on Tool Calls

Authorization alone is not enough. You also need enforcement — checking the token’s scopes against the tool’s required permission on every call, before execution. Grantex ships 53 pre-built tool manifests covering popular services (Salesforce, Jira, Stripe, HubSpot, GitHub, Slack, and more). Each manifest maps tool names to required permission levels:
import { Grantex } from '@grantex/sdk';
import { salesforceManifest } from '@grantex/sdk/manifests/salesforce';

const gx = new Grantex({ apiKey: 'gx_...' });
gx.loadManifest(salesforceManifest);

// This checks the JWT's scopes against the manifest — no network call
const check = await gx.enforce({
  grantToken: token.grantToken,
  connector: 'salesforce',
  tool: 'delete_contact',
});

if (!check.allowed) {
  console.log(check.reason); // "Token lacks tool:salesforce:delete scope"
}
For framework-specific enforcement, Grantex integrates with LangChain, CrewAI, OpenAI Agents SDK, Google ADK, Vercel AI, and more.
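The same enforcement idea can be applied in any framework by wrapping each tool function with a pre-execution scope check, so an out-of-scope call never runs. A framework-agnostic sketch (the wrapper, tool, and scope names are illustrative; the official integrations do this for you):

```typescript
type Tool<A, R> = (args: A) => Promise<R>;

// Illustrative wrapper: reject the call before the tool body ever executes.
function withScopeCheck<A, R>(
  requiredScope: string,
  grantedScopes: string[],
  tool: Tool<A, R>,
): Tool<A, R> {
  return async (args: A) => {
    if (!grantedScopes.includes(requiredScope)) {
      throw new Error(`Blocked: grant lacks required scope "${requiredScope}"`);
    }
    return tool(args);
  };
}

// Example: a delete tool registered under a read-only grant is blocked.
const readOnly = ['expenses:read'];
const deleteExpense = withScopeCheck(
  'expenses:delete',
  readOnly,
  async (id: string) => `deleted ${id}`,
);

deleteExpense('exp_42').catch((e) => console.log(e.message));
// Blocked: grant lacks required scope "expenses:delete"
```

Because the check runs before execution, a prompt-injected or misbehaving agent cannot reach the underlying API with scopes it was never granted.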

Comparing AI Agent Authorization Approaches

| Approach | Scoped | Revocable | Auditable | Delegable | Standard |
| --- | --- | --- | --- | --- | --- |
| Raw API keys | No | Rotate all | No | No | N/A |
| OAuth 2.0 (human flow) | Yes | Yes | Partial | No | RFC 6749 |
| Service accounts | No | Yes | Partial | No | Varies |
| Custom JWT issuance | Yes | Manual | Manual | Manual | Custom |
| Grantex (delegated auth) | Yes | Yes (instant) | Yes (hash-chained) | Yes (depth-limited) | Open spec |

Getting Started

If you are building AI agents that interact with real services, you need authorization that matches the autonomy you are giving those agents. API keys are not enough. The more capable your agents become, the more critical proper authorization becomes. Building it in from the start is significantly easier than retrofitting it after an incident.