Why Your NHI Strategy Doesn't Cover AI Agents

Abdel Fane
#nhi #ai-agents #governance #security #enterprise

If you're a CISO or security architect, you've probably heard of Non-Human Identity (NHI) governance. You might even have a platform in place — Oasis, Entro, Astrix, or Clutch. These tools manage your service accounts, API keys, OAuth tokens, and SSH keys across cloud environments.

But here's the uncomfortable truth: your NHI strategy has a blind spot. AI agents are the fastest-growing class of non-human identity in your organization, and your current tools weren't designed to govern them.

The NHI Market Is Booming — But Missing the Point

The NHI security market has exploded. Over $400 million in venture funding flowed into NHI platforms in 2025 alone. Gartner now recognizes NHI management as a critical security category. Every enterprise security team is being asked about their NHI strategy.

This attention is well-deserved. Non-human identities outnumber human identities 45:1 in the average enterprise. Service accounts proliferate. API keys get shared. OAuth tokens never expire. The attack surface is massive.

Traditional NHI platforms do excellent work managing this complexity. They discover service accounts across AWS, Azure, and GCP. They track OAuth token grants across SaaS applications. They alert when API keys are exposed or when service accounts haven't been rotated.

But they all share a common assumption: non-human identities execute fixed, predictable operations.

AI Agents Are a Different Class of NHI

AI agents don't just authenticate and execute a predetermined operation. They reason. They make decisions. They call tools dynamically based on context. They interact with other agents. They connect to MCP servers whose tool surfaces can change without notice.

Consider the difference:

Characteristic | Traditional NHI | AI Agent
--- | --- | ---
Behavior | Fixed, deterministic | Dynamic, context-dependent
Capabilities | Static permissions | Drift over time
Tool access | Predefined API endpoints | MCP servers with changing tools
Interactions | Service-to-service | Agent-to-agent (A2A)
Decision-making | None | Autonomous reasoning
Attack surface | Credential theft | Prompt injection, tool misuse, capability drift

Traditional NHI platforms can discover that an agent has an API key. But they can't answer the questions that matter for agent governance.

The Questions Your NHI Platform Can't Answer

What capabilities does this agent actually use at runtime?

Traditional NHI tools see static permissions. Agent behavior is dynamic.

Has this agent's behavior drifted from its declared purpose?

An agent might be approved for "customer support" but start accessing financial data.

Which MCP servers is this agent connected to, and have their tools changed?

MCP servers can add new tools at any time. Your agent's attack surface expands silently.

If this agent is compromised, what's the blast radius?

Agents interact with other agents. A single compromised agent can cascade.

Who is accountable for this agent's actions?

Service accounts are typically owned by teams. Agents often have no clear owner.

These aren't edge cases. They're the fundamental governance questions for any AI agent deployment. And traditional NHI platforms weren't built to answer them.

The Agent NHI Gap

Here's what this means in practice: your organization is deploying AI agents — with LangChain, CrewAI, AutoGen, or custom frameworks — and your NHI strategy treats them as invisible.

What traditional NHI sees

  • An API key was created
  • The key has access to OpenAI
  • Last used: 3 minutes ago
  • Owner: unknown

What agent governance sees

  • Agent: customer-support-bot
  • Owner: jane.doe@company.com
  • Capabilities: db:read, api:call
  • Trust score: 87/100 (declining)
  • MCP servers: 2 attested, 1 drifted
  • Behavior: accessing financial tables (unusual)

The difference isn't just more data — it's the right data for governing autonomous systems.
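As a rough sketch, the agent-governance view above maps to a record like the following. The field names and types are illustrative, not a real schema:

```python
from dataclasses import dataclass, field

# Illustrative agent-governance record; field names are hypothetical.
@dataclass
class AgentRecord:
    agent_id: str                     # "customer-support-bot"
    owner: str                        # accountable human: "jane.doe@company.com"
    capabilities: list[str]           # declared capabilities: ["db:read", "api:call"]
    trust_score: int                  # behavioral trust score, 0-100
    mcp_servers: dict[str, str]       # server name -> "attested" | "drifted"
    anomalies: list[str] = field(default_factory=list)  # e.g. "accessing financial tables"
```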

Why "Bolt-On" Agent Support Won't Work

Some NHI vendors are starting to add "AI agent" features. Oasis recently announced "Agentic Identity Security." Entro is publishing content about AI agent governance. This is progress, but it's fundamentally constrained.

Traditional NHI platforms are built on a discovery-and-inventory model: scan cloud APIs, find service accounts, track their usage. This works for static identities. It doesn't work for agents that:

  • Need cryptographic identity beyond API keys
  • Require runtime capability enforcement (not just permission auditing)
  • Connect to MCP servers that need attestation and drift detection
  • Interact with other agents via A2A protocols
  • Require behavioral trust scoring, not just risk ratings

Bolting agent features onto a service-account platform is like adding video calling to email — technically possible, but you're fighting the architecture.

What Agent NHI Governance Actually Requires

Purpose-built agent NHI governance needs different primitives:

Cryptographic agent identity

Not just API keys — Ed25519 keypairs with challenge-response authentication. The agent proves its identity on every action. Post-quantum cryptography (ML-DSA) for future-proofing.
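A minimal sketch of what Ed25519 challenge-response looks like, using Python's cryptography package. This illustrates the mechanism, not AIM's actual protocol, and omits the ML-DSA variant:

```python
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Agent side: a long-lived keypair generated at registration.
agent_key = Ed25519PrivateKey.generate()
registered_public_key = agent_key.public_key()  # stored by the governance plane

# Governance plane: issue a fresh random challenge before each action.
challenge = os.urandom(32)

# Agent: prove possession of the private key by signing the challenge.
signature = agent_key.sign(challenge)

# Governance plane: verify against the registered public key.
try:
    registered_public_key.verify(signature, challenge)
    print("agent identity verified for this action")
except InvalidSignature:
    print("reject: signer does not hold the registered key")
```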

Capability-based access control

Agents declare what they can do (db:read, api:call, file:write). Every action is checked against declared capabilities at runtime. Unauthorized actions are blocked, not just logged.
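In code, a capability gate can be as simple as checking each action against the declared set before it runs. The names below are hypothetical:

```python
# Hypothetical capability gate: block, don't just log.
DECLARED = {"customer-support-bot": {"db:read", "api:call"}}

class CapabilityError(Exception):
    pass

def enforce(agent_id: str, required: str) -> None:
    if required not in DECLARED.get(agent_id, set()):
        raise CapabilityError(f"{agent_id} has not declared {required}")

def run_tool(agent_id: str, capability: str, action) -> object:
    enforce(agent_id, capability)   # raises before the action executes
    return action()

run_tool("customer-support-bot", "db:read", lambda: "SELECT ... (allowed)")
# run_tool("customer-support-bot", "file:write", ...)  # blocked: never declared
```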

MCP server attestation

Cryptographic fingerprints of MCP server tool surfaces. Automatic drift detection when tools change. Supply chain visibility across your agent fleet.
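One way to implement this, as a sketch: hash a canonical serialization of the server's tool surface and compare it against the fingerprint recorded at attestation time. The tool definitions below are made up:

```python
import hashlib
import json

def fingerprint(tools: list[dict]) -> str:
    # Canonicalize the tool surface (names + schemas) so the hash is stable.
    canonical = json.dumps(sorted(tools, key=lambda t: t["name"]),
                           sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

# Recorded when the MCP server was attested.
baseline = fingerprint([{"name": "lookup_order", "schema": {"order_id": "string"}}])

# Re-computed later: the server silently gained a new tool.
current = fingerprint([
    {"name": "lookup_order", "schema": {"order_id": "string"}},
    {"name": "issue_refund", "schema": {"order_id": "string", "amount": "number"}},
])

if current != baseline:
    print("drift detected: tool surface changed since attestation")
```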

Behavioral trust scoring

Not a static risk rating — a continuous 8-factor trust score that adapts based on agent behavior, capability usage, and compliance status.
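As a toy illustration of the idea, a trust score can be a weighted blend of normalized behavioral signals. The factor names and weights below are examples, not the actual model:

```python
# Example factors and weights; each signal is normalized to 0.0-1.0.
FACTORS = {
    "capability_adherence": 0.25,  # actions stayed within declared capabilities
    "behavioral_baseline":  0.20,  # closeness to the agent's historical behavior
    "mcp_attestation":      0.15,  # connected MCP servers match their fingerprints
    "credential_hygiene":   0.10,  # key age and rotation status
    "ownership":            0.10,  # current, reachable human owner
    "anomaly_rate":         0.10,  # fraction of recent actions not flagged
    "compliance":           0.05,  # policy attestations up to date
    "interaction_risk":     0.05,  # risk carried in from peer agents over A2A
}

def trust_score(signals: dict[str, float]) -> int:
    return round(100 * sum(w * signals.get(name, 0.0) for name, w in FACTORS.items()))

healthy = {name: 1.0 for name in FACTORS}
print(trust_score(healthy))                                  # 100
print(trust_score({**healthy, "behavioral_baseline": 0.4}))  # 88: score declines as behavior drifts
```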

Ownership and lifecycle management

Every agent linked to a human owner. Automated lifecycle transitions (active → inactive → suspended → revoked). Orphan detection when owners leave.
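A sketch of those transitions as a small state machine. The state names follow the text, while the specific transition rules and orphan handling are assumptions:

```python
# Allowed lifecycle transitions (assumed rules; "revoked" is terminal).
TRANSITIONS = {
    "active":    {"inactive", "suspended", "revoked"},
    "inactive":  {"active", "suspended", "revoked"},
    "suspended": {"active", "revoked"},
    "revoked":   set(),
}

def transition(state: str, target: str) -> str:
    if target not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition: {state} -> {target}")
    return target

def on_owner_departure(state: str) -> str:
    # Orphan detection: suspend any active or inactive agent whose owner has left.
    return state if state in ("revoked", "suspended") else transition(state, "suspended")

state = "active"
state = on_owner_departure(state)     # "suspended" until a new owner is assigned
state = transition(state, "revoked")  # decommissioned
```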

Complementary, Not Competitive

This isn't about replacing your existing NHI platform. Oasis, Entro, Astrix, and Clutch are excellent at what they do — managing service accounts, API keys, and OAuth tokens across cloud environments.

AI agent governance is a different layer. It addresses a different class of NHI with different requirements. Many enterprises will run both:

  • Traditional NHI platform for service accounts, API keys, OAuth tokens
  • Agent NHI platform for AI agents, MCP servers, A2A interactions

The question isn't "should we replace our NHI platform?" It's "do we have coverage for the fastest-growing class of NHI in our organization?"

What You Can Do Today

1. Inventory your AI agents

How many AI agents are running in your organization? Who deployed them? What do they access? Most security teams can't answer these questions.

2. Map your MCP servers

Which MCP servers exist in your environment? Are they registered? Attested? Do you know when their tool surfaces change?

3. Evaluate agent-native governance

Look for platforms purpose-built for AI agent identity — not service-account platforms with agent features bolted on.

4. Start with visibility

You can't govern what you can't see. Begin by getting visibility into agent deployments, then layer on governance controls.

Close the Gap in Your NHI Strategy

AIM is the open-source NHI platform purpose-built for AI agents. Cryptographic identity, capability-based access control, MCP attestation, and full lifecycle governance — without the enterprise price tag.