How Do You Give an AI Agent a Verifiable, Auditable, Enforceable Identity?
Your company deploys ten AI agents. One reads customer data from a database. Another sends emails on behalf of employees. A third modifies infrastructure configurations. A fourth processes financial transactions.
Now ask yourself: can you prove which agent performed which action? Can you revoke one agent's access without affecting the others? Can you detect when an agent exceeds its intended purpose? Can you show an auditor a tamper-proof record of every decision every agent made?
If the answer to any of these is no, your agents don't have identity. They have access—which is a very different thing.
Access Is Not Identity
Most AI agents in production today authenticate using one of three approaches—all of which give access without identity:
Shared API keys
Multiple agents use the same key. You can't distinguish Agent A from Agent B. If one is compromised, all are compromised. Audit logs show “API key sk-...3f7a made a request”—not which agent, not why, not whether it was authorized to do so.
OAuth client credentials
A step up, but still service-level identity. All agents sharing the same client ID are indistinguishable. Bearer tokens can be stolen and replayed. Scopes are declared but not enforced at runtime. There's no mechanism to prove which agent used the token.
Hardcoded credentials
The worst case, but more common than anyone admits. Database passwords, service account tokens, and admin credentials embedded in agent configurations. No rotation, no scoping, no audit trail.
These approaches answer the question “can this process connect?” They don't answer “who is this agent, what is it allowed to do, and can we prove it did what it claims?”
The Three Properties of Real Agent Identity
When we set out to solve this problem, we defined what “identity” actually means for an AI agent. It requires three properties—and you need all three:
Verifiable
Any party can independently confirm that an agent is who it claims to be. Not by checking a token—by verifying a cryptographic signature. The identity can't be forged, transferred, or replayed.
Auditable
Every action the agent takes is recorded with a cryptographic signature that proves authorship. The audit trail is tamper-proof. You can reconstruct exactly what happened, when, and which agent did it—months or years later.
Enforceable
The agent's identity is bound to a set of capabilities that are checked at runtime. It's not enough to declare what an agent should do—the system must block actions that exceed declared capabilities, in real time.
Existing protocols give you fragments of these properties. OIDC provides verifiable human identity. OAuth provides some enforceability through scopes. But no existing protocol provides all three for autonomous non-human software.
1. Making Identity Verifiable: Cryptographic Keypairs
The foundation of verifiable identity is cryptography—not tokens, not passwords, not shared secrets. When an agent registers with AIM, it receives its own Ed25519 keypair:
# Agent registration generates a unique keypair
from aim_sdk import secure
agent = secure("customer-data-reader", capabilities=[
"database:read",
"api:customers:get"
])
# The agent now has:
# - A unique agent ID (e.g., agent_01HYX3KM7VQB9W2TJNP5GZ8R4E)
# - An Ed25519 private key (stored securely, never transmitted)
# - A public key registered with AIM (used for verification)
# - Declared capabilities (enforced at runtime)

Why Ed25519? It's fast, produces compact signatures, and is widely trusted in security-critical systems (SSH, Signal, WireGuard). But the specific algorithm matters less than the principle: each agent's identity is a cryptographic keypair, not a shared secret.
What this means in practice
- Unforgeable: Only the agent with the private key can produce valid signatures. Unlike bearer tokens, possession of the verification key doesn't grant access.
- Non-transferable: If a bearer token is stolen, the attacker has the same access as the legitimate holder. If a public key is leaked, nothing is compromised.
- Independently verifiable: Any service can verify an agent's signature using the public key registered with AIM. No callback to AIM required for verification.
Every action the agent takes is signed with its private key. A database query, an API call, a file write—each one carries a cryptographic proof of which agent performed it. This is fundamentally different from bearer tokens, where any process holding the token is indistinguishable from the legitimate holder.
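Per-action signing can be sketched in a few lines. This is an illustration of the principle, not the aim_sdk implementation: it uses the pyca/cryptography package as a stand-in, and the `sign_action`/`verify_action` helpers are hypothetical names.

```python
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

def sign_action(private_key, action: dict) -> bytes:
    """Serialize the action canonically, then sign it with the agent's key."""
    payload = json.dumps(action, sort_keys=True).encode()
    return private_key.sign(payload)

def verify_action(public_key, action: dict, signature: bytes) -> bool:
    """Any party holding only the public key can confirm authorship."""
    payload = json.dumps(action, sort_keys=True).encode()
    try:
        public_key.verify(signature, payload)
        return True
    except InvalidSignature:
        return False

# Registration: the keypair is generated on the agent's host;
# the private key never leaves it.
key = Ed25519PrivateKey.generate()
action = {"action": "database:read", "resource": "customers.profiles"}
sig = sign_action(key, action)
print(verify_action(key.public_key(), action, sig))  # True
```

Note that canonical serialization (`sort_keys=True`) matters: signer and verifier must produce byte-identical payloads, or valid signatures will fail to verify.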
2. Making Identity Auditable: Signed Action Records
Verifiable identity enables something that bearer tokens can't: non-repudiation. When every action is cryptographically signed, you get an audit trail where:
The agent cannot deny performing the action
The signature proves the action was performed by the private key holder. Since only the registered agent has the private key, the agent can't claim “that wasn't me.”
The record cannot be tampered with
Modifying the action record invalidates the signature. If someone changes the timestamp, the target resource, or the parameters, the signature check fails.
The trail is complete and chronological
Every action—authorized or blocked—is recorded. You can reconstruct the full sequence of what any agent did over any time period.
{
"agent_id": "agent_01HYX3KM7VQB9W2TJNP5GZ8R4E",
"agent_name": "customer-data-reader",
"action": "database:read",
"resource": "customers.profiles",
"result": "allowed",
"timestamp": "2026-02-11T14:23:07.891Z",
"signature": "ed25519:a7f3b2c9...",
"trust_score": 0.96
}
// Any party can verify this record using the agent's public key
// Tampering with any field invalidates the signature

This matters for three reasons:
- Compliance: Regulations like SOC 2, HIPAA, and GDPR increasingly require demonstrable accountability for automated data access. “An API key accessed the database” isn't sufficient—auditors need to know which system, what it accessed, and whether it was authorized.
- Incident response: When something goes wrong, you need to trace the exact sequence of actions across agents. With signed audit records, you can determine root cause in minutes, not days.
- Trust: Customers and partners increasingly ask “how do you govern your AI?” A cryptographic audit trail is the strongest possible answer.
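A short sketch shows why tampering is detectable. The record layout mirrors the JSON example above; `canonical` and `intact` are illustrative helpers rather than AIM APIs, and the pyca/cryptography package stands in for the SDK's signing internals.

```python
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

def canonical(record: dict) -> bytes:
    # The signature field itself is excluded from the signed payload.
    return json.dumps({k: v for k, v in record.items() if k != "signature"},
                      sort_keys=True).encode()

key = Ed25519PrivateKey.generate()
record = {"agent_name": "customer-data-reader", "action": "database:read",
          "resource": "customers.profiles", "result": "allowed",
          "timestamp": "2026-02-11T14:23:07.891Z"}
record["signature"] = key.sign(canonical(record))

def intact(record: dict, public_key) -> bool:
    try:
        public_key.verify(record["signature"], canonical(record))
        return True
    except InvalidSignature:
        return False

print(intact(record, key.public_key()))   # True
record["result"] = "blocked"              # tamper with one field
print(intact(record, key.public_key()))   # False
```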
3. Making Identity Enforceable: Runtime Capability Checks
Verifiable identity tells you who the agent is. Auditable identity tells you what it did. Enforceable identity ensures it can only do what it's supposed to do.
This is where most identity solutions fall short. OAuth scopes are declared at token issuance time but aren't enforced by the protocol itself—enforcement is left to each resource server. API keys have no concept of capabilities at all. The result is a gap between what an agent should do and what it can do.
AIM closes this gap with runtime capability enforcement:
# Agent registered with specific capabilities
agent = secure("customer-data-reader", capabilities=[
"database:read",
"api:customers:get"
])
# This works — capability matches
result = agent.execute("database:read", target="customers.profiles")
# This is BLOCKED — agent doesn't have database:write
result = agent.execute("database:write", target="customers.profiles")
# CapabilityDenied: agent 'customer-data-reader' lacks 'database:write'

The enforcement happens at the SDK level, before the action reaches the target system. This matters because:
Prompt injection defense
If a malicious prompt manipulates an agent into attempting unauthorized actions, the capability check blocks the action before any damage occurs. The agent literally cannot exceed its declared permissions.
Behavioral drift detection
Agents powered by LLMs can behave unpredictably. If an agent starts attempting actions outside its normal pattern, the capability system catches it immediately rather than waiting for a post-mortem analysis.
Least-privilege by default
Agents start with exactly the capabilities they need—nothing more. This is the principle of least privilege, enforced automatically rather than relying on developers to configure each downstream service correctly.
Dynamic capability adjustment
Capabilities can be modified in real time through the AIM dashboard or API. If an agent's trust score drops, its capabilities can be automatically restricted without redeploying the agent.
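The checks described above amount to a capability gate that runs before any action is dispatched. A minimal sketch follows; the `Agent` class and `CapabilityDenied` exception are illustrative stand-ins for the SDK's behavior, not its actual implementation.

```python
class CapabilityDenied(Exception):
    pass

class Agent:
    def __init__(self, name: str, capabilities: list[str]):
        self.name = name
        self.capabilities = set(capabilities)

    def execute(self, capability: str, target: str) -> str:
        # Checked in-process, before the action reaches the target system.
        if capability not in self.capabilities:
            raise CapabilityDenied(
                f"agent '{self.name}' lacks '{capability}'")
        return f"{capability} on {target}: allowed"

agent = Agent("customer-data-reader", ["database:read", "api:customers:get"])
print(agent.execute("database:read", target="customers.profiles"))

try:
    agent.execute("database:write", target="customers.profiles")
except CapabilityDenied as e:
    print(e)
```

Because the denial is raised inside the agent process, a prompt-injected instruction to write to the database fails before any network call is made.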
The Missing Piece: MCP Server Attestation
Agent identity doesn't exist in a vacuum. Agents connect to MCP (Model Context Protocol) servers for tools—database access, API calls, file operations, web searches. The identity of the agent is only meaningful if you also verify the tools it connects to.
The supply chain problem
An MCP server can change its tool surface without notice. A server that offered read_file yesterday might offer execute_command today. If your agent connects to tampered or modified MCP servers, the agent's identity guarantees are undermined.
AIM addresses this with MCP server attestation:
Cryptographic attestation records
Every MCP server connected to an AIM-managed agent gets a cryptographic snapshot of its tool surface—which tools it offers, their schemas, and their declared behaviors.
Automatic drift detection
AIM continuously monitors MCP servers and detects when their tool surface changes from the attested baseline. New tools added, schemas modified, behaviors changed—drift is flagged before agents interact with modified servers.
Trust score integration
MCP server attestation status feeds directly into the agent's trust score. If an agent is connected to drifted or unattested servers, its trust score is affected automatically.
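Drift detection can be approximated by hashing a canonical snapshot of a server's tool surface and comparing it against the attested baseline. The snapshot format below is an assumption for illustration, not AIM's attestation record schema.

```python
import hashlib
import json

def attest(tools: dict) -> str:
    """Canonical SHA-256 digest of a server's tool surface (names + schemas)."""
    canonical = json.dumps(tools, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

# At attestation time: the server offers a single read_file tool.
baseline = attest({"read_file": {"params": ["path"]}})

# Later: the server now also offers execute_command.
current = attest({"read_file": {"params": ["path"]},
                  "execute_command": {"params": ["cmd"]}})

print("drift detected" if current != baseline else "surface unchanged")
```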
Beyond Binary: Continuous Trust Scoring
Traditional identity systems are binary: you're authenticated or you're not. The token is valid or expired. This makes sense for human login flows—but AI agents operate continuously, and their trustworthiness can change over time.
AIM computes a continuous trust score for each agent based on eight factors:
| Factor | What it measures |
|---|---|
| Identity verification | Is the agent's cryptographic identity valid and unrevoked? |
| Capability compliance | Is the agent staying within its declared capabilities? |
| Behavioral baseline | Does the agent's current behavior match its historical pattern? |
| MCP server integrity | Are the MCP servers the agent connects to still attested and free of drift? |
| Error rate | Is the agent producing abnormal error patterns? |
| Access patterns | Is the agent accessing resources at unusual times or volumes? |
| Policy compliance | Does the agent comply with organizational security policies? |
| Registration freshness | Is the agent's registration current and its keys recently rotated? |
The trust score updates in real time. If an agent's behavior deviates from its baseline, its trust score drops. If its MCP servers drift, the score drops. If it repeatedly attempts unauthorized actions, the score drops. And as the score drops, the system can automatically restrict the agent's access—without waiting for a human to intervene.
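One plausible way to combine the eight factors is a weighted average over per-factor signals in [0, 1]. The weights and signal values below are purely illustrative, not AIM's actual scoring model.

```python
# Illustrative weights over the eight factors (must sum to 1.0).
FACTORS = {
    "identity_verification":  0.20,
    "capability_compliance":  0.15,
    "behavioral_baseline":    0.15,
    "mcp_server_integrity":   0.15,
    "error_rate":             0.10,
    "access_patterns":        0.10,
    "policy_compliance":      0.10,
    "registration_freshness": 0.05,
}

def trust_score(signals: dict) -> float:
    """signals: factor name -> score in [0, 1]; returns a weighted average."""
    return round(sum(FACTORS[f] * signals[f] for f in FACTORS), 2)

healthy = {f: 1.0 for f in FACTORS}
drifted = dict(healthy, mcp_server_integrity=0.2, behavioral_baseline=0.5)

print(trust_score(healthy))                          # 1.0
print(trust_score(drifted) < trust_score(healthy))   # True
```

A threshold policy can then map score bands to actions, e.g. restrict capabilities below 0.7 and suspend the agent below 0.4.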
This is the difference between identity and authentication. Authentication is a point-in-time check. Identity is continuous—it encompasses who the agent is, what it's doing, and whether it should still be trusted.
Putting It All Together
Here's what the full identity lifecycle looks like for an AIM-managed agent:
Registration
Agent registers with AIM and receives a unique Ed25519 keypair. Capabilities are declared. MCP servers are attested. The agent has a verifiable identity from its first action.
Runtime operation
Every action is checked against capabilities (enforceable), signed with the agent's private key (verifiable), and recorded in the audit log (auditable). Unauthorized actions are blocked before reaching target systems.
Continuous monitoring
The 8-factor trust score updates in real time. MCP servers are monitored for drift. Behavioral baselines are maintained. If anything deviates, the system responds automatically.
Audit & compliance
Cryptographically signed audit records are available for compliance reviews, incident response, and forensic analysis. Every action is attributable, every record is tamper-proof.
Why This Matters Now
The AI agent ecosystem is at an inflection point. Frameworks like LangChain, CrewAI, AutoGen, and LangChain4j make it trivially easy to deploy agents that interact with production systems. MCP is becoming the standard for agent-tool communication. Enterprises are moving from experimentation to production deployment.
But the identity infrastructure hasn't kept pace. Most organizations are deploying agents with the same identity primitives they use for human users or traditional services—and discovering that the gap between “access” and “identity” creates real operational, security, and compliance risks.
The question isn't whether AI agents need proper identity—it's whether you build it before or after the first incident that makes you wish you had.
Give Your Agents Real Identity
AIM is open source (Apache-2.0) and provides verifiable, auditable, enforceable identity for AI agents. Cryptographic keypairs, runtime capability enforcement, continuous trust scoring, MCP attestation, and signed audit trails—in one line of code.