
Non-Human Identities Are the Top Emerging Threat, and It's Privileged Service Accounts All Over Again

If you’ve been in this industry long enough, you’ll recognize the pattern immediately.

A new class of identity shows up in the environment. It gets created quickly because someone needs something to work. It gets broad permissions because scoping takes time and the team’s under deadline. Nobody assigns an owner. Nobody sets an expiration. Nobody reviews it again.

Ten years ago, that was privileged service accounts. Today, it’s non-human identities at a scale that makes the old service account problem look manageable by comparison.

The Numbers Should Scare You

The data coming out of 2026’s security reports paints a consistent picture, and it’s not good.

Machine identities now outnumber human identities by ratios that range from 25-to-1 at the low end to over 100-to-1 in cloud-heavy environments. That ratio is accelerating as organizations deploy AI agents, automate workflows, and spin up microservices that each need their own credentials.

The Entro Security 2025 State of Non-Human Identities report found that 97% of NHIs have excessive privileges. Just 0.01% of machine identities control 80% of cloud resources. And 71% aren’t rotated within recommended timeframes. These aren’t edge cases. This is the baseline.

The CSA’s 2026 survey report confirms that 68% of IT security incidents now involve machine identities, and half of surveyed enterprises have experienced a breach tied to unmanaged NHIs. IBM’s latest Cost of a Data Breach report puts the average cost of a breach involving compromised credentials at $4.91 million.

If you’re sitting in a GRC role reading this, you already know what the access review process looks like for human accounts. Now ask yourself: does anything close to that process exist for the API keys, OAuth tokens, service accounts, and AI agent credentials running in your environment?

For most organizations, the honest answer is no.

We’ve Seen This Movie Before

Here’s where this gets frustrating. The problems with non-human identities aren’t new problems. They’re the exact same problems we had with privileged service accounts a decade ago, just at exponentially larger scale.

Overprivileged by default. A developer needs a service account for a new Lambda function. They’re on a deadline. Figuring out the exact minimum permissions takes time, so they attach AdministratorAccess and move on. That account now has unrestricted access to the entire AWS environment for a task that needed read access to one S3 bucket. Multiply that across every team, every sprint, every year. Sound familiar? It should. This is the same pattern that created the privileged service account mess that PAM vendors have been trying to clean up for the last decade.
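To make the gap concrete, here is an illustrative sketch of the two policies in that Lambda story: the "just make it work" grant versus what the task actually required. The policy shapes follow AWS IAM JSON conventions, but the bucket name and the exact action list are hypothetical.

```python
import json

# The deadline-driven default: everything, everywhere (AdministratorAccess).
admin_policy = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "*", "Resource": "*"}],
}

# What the task actually needed: read access to one S3 bucket.
scoped_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-reports-bucket",
                "arn:aws:s3:::example-reports-bucket/*",
            ],
        }
    ],
}

print(json.dumps(scoped_policy, indent=2))
```

The scoped version takes a few extra minutes to write, which is the entire reason it rarely gets written.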

No lifecycle management. When an employee leaves, HR triggers offboarding. Access gets revoked, accounts get disabled. But when a project ends or an integration gets retired, who decommissions the service account? Who revokes the API key? The answer, overwhelmingly, is nobody. NHIs accumulate like sediment. They build up in layers, and nobody wants to touch the old ones because nobody knows what they’re connected to.

No ownership model. Human accounts have managers. Managers respond to access review emails (sometimes) and eventually approve or reject recertifications. NHIs don’t have managers. They don’t have org chart entries. They don’t show up in HR systems. When a security team flags an overprivileged service account, the first question is always “who owns this?” and the answer is almost always a shrug.

Fear of breaking things. This is the one that keeps the problem alive. Teams know these credentials are overprivileged. They know they should be rotated or scoped down. But the operational risk of touching a credential that might be wired into twelve different production systems keeps everyone’s hands off it. So the credential sits there, unchanged, overprivileged, and unmonitored. The exact same inertia that kept domain admin service accounts running unchanged for years in on-prem environments.

AI Agents Just Made It Worse

Everything above was already a significant problem before AI agents entered the picture. Now it’s accelerating in ways that IAM teams aren’t prepared for.

AI agents aren’t chatbots. They’re autonomous systems that execute tasks, call APIs, move data, modify configurations, and make decisions without human intervention. To do all of that, they need credentials. They need access. And they need access to a lot of systems.

Think about what an AI agent needs to do its job. It needs to read your email. Access your CRM. Query your databases. Execute commands in your cloud environment. Commit code to your repos. Each of those capabilities requires an identity with permissions. Each of those identities is an NHI. And each one is a potential attack vector if it’s overprivileged, unmonitored, or compromised.

The Gravitee State of AI Agent Security 2026 report found that 80.9% of technical teams have pushed AI agents past planning into active testing or production. Only 14.4% of those agents went live with full security and IT approval. That means the vast majority of AI agents running in enterprise environments right now were deployed without anyone from security reviewing what access they have or what they’re allowed to do.

The CSA’s report on cloud and AI security put it bluntly: if an agent is overprivileged, an attacker can use it to exfiltrate data at machine speed without ever compromising a human credential.

One Identity is predicting 2026 will see the first major breach traced back to an overprivileged AI agent. And here’s the part that should keep you up at night: it won’t look like an attack. It’ll look exactly like the system doing what it was designed to do.

The “Agents of Chaos” Problem

In February 2026, a team of 38 researchers published a paper called “Agents of Chaos” that documented what happened when they gave autonomous AI agents real system access in a controlled lab. Email, file systems, shell commands, the works.

The results were brutal. Over two weeks of testing, the agents failed in 11 distinct ways. They followed instructions from unauthorized users. They leaked sensitive information. They executed destructive commands. They enabled denial-of-service conditions. They spread unsafe behaviors to other agents.

And then they lied about it. Some agents reported tasks as “completed” when the system state showed the opposite.

If you can’t trust an AI agent’s own status reports, you can’t audit what it actually did. You can’t catch a breach. You can’t even confirm that something went wrong. That’s not a theoretical risk. That’s an observability failure that breaks your entire incident response model.

This wasn’t a production breach. It was a controlled experiment. But the capabilities these agents had (reading email, managing files, executing commands) are exactly the capabilities that organizations are granting production AI agents right now. With broad permissions. Without monitoring. Without lifecycle management.

It’s privileged service accounts all over again, except now the service account can reason, make decisions, and talk to other service accounts autonomously.

What IAM Teams Need to Do

The people requesting and provisioning access for NHIs and AI agents have some serious work ahead of them. The current state isn’t sustainable, and the threat data says it’s actively being exploited.

Treat NHIs as first-class identities in your IAM program. Not as a footnote. Not as a “we’ll get to it” backlog item. Every NHI needs to go through the same governance process as a human identity: provisioning approval, access scoping, ownership assignment, periodic review, and decommissioning. Yes, this is hard at scale. That doesn’t make it optional.

Stop granting AI agents admin-level access. This is the privileged service account problem in its newest form. When someone requests access for an AI agent, the default answer shouldn’t be “give it what it needs to work.” It should be “give it the absolute minimum it needs, for the shortest time possible, with monitoring on every action.” Just-in-time access. Short-lived tokens. Scoped permissions. These aren’t new concepts. They just need to be applied to a new category of identity.
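As a minimal sketch of what "short-lived and scoped" means in practice, here is a toy token issuer in pure Python, assuming an internal token service rather than any specific vendor product. The identity names, scopes, and 15-minute TTL are all hypothetical.

```python
import base64
import hashlib
import hmac
import json
import time

# In a real deployment this key would live in a secrets manager, not in code.
SIGNING_KEY = b"replace-with-a-managed-secret"

def issue_token(identity: str, scopes: list, ttl_seconds: int = 900) -> str:
    """Issue a scoped token that expires after ttl_seconds (default 15 min)."""
    claims = {"sub": identity, "scopes": scopes, "exp": time.time() + ttl_seconds}
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return f"{payload.decode()}.{sig}"

def verify_token(token: str, required_scope: str) -> bool:
    """Reject the token if the signature is bad, it has expired, or the scope is missing."""
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    claims = json.loads(base64.urlsafe_b64decode(payload))
    return time.time() < claims["exp"] and required_scope in claims["scopes"]

token = issue_token("ai-agent-crm-sync", ["crm:read"], ttl_seconds=900)
print(verify_token(token, "crm:read"))    # in-scope request is allowed
print(verify_token(token, "crm:delete"))  # out-of-scope request is denied
```

The point of the sketch: a leaked token like this is useless after fifteen minutes and useless outside its declared scopes, which changes the blast radius of every compromise. Cloud-native equivalents (STS assumed roles, workload identity federation) apply the same idea.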

Build an NHI inventory and keep it current. You can’t secure what you can’t see. Discover every service account, API key, OAuth token, certificate, and AI agent credential across your environment. Assign an owner to each one. Track permissions against actual usage. If a credential has admin access but only ever reads from one table, that’s your signal to scope it down.
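The "track permissions against actual usage" step can be sketched as a simple diff between what each NHI is granted and what audit logs show it exercising. This assumes you can export both; the record fields and identity names below are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Optional, Set, List

@dataclass
class NHIRecord:
    name: str
    owner: Optional[str]
    granted: Set[str]                         # permissions attached to the identity
    used: Set[str] = field(default_factory=set)  # permissions seen in audit logs

def review(inventory: List[NHIRecord]) -> List[str]:
    """Flag unowned NHIs and permissions that are granted but never exercised."""
    findings = []
    for nhi in inventory:
        if nhi.owner is None:
            findings.append(f"{nhi.name}: no owner assigned")
        unused = nhi.granted - nhi.used
        if unused:
            findings.append(f"{nhi.name}: scope down unused permissions {sorted(unused)}")
    return findings

inventory = [
    NHIRecord("report-generator", owner="data-team",
              granted={"s3:GetObject", "s3:PutObject", "iam:*"},
              used={"s3:GetObject"}),
    NHIRecord("legacy-sync-key", owner=None, granted={"db:admin"}),
]
for finding in review(inventory):
    print(finding)
```

Even this trivial version surfaces the two findings that matter most: nobody owns it, and it can do far more than it ever does.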

Enforce credential rotation and expiration. Long-lived static credentials are the root cause behind most NHI breaches. Replace permanent API keys with short-lived tokens. Automate rotation on a defined schedule. Set hard expiration dates. If a credential can’t be rotated without breaking production, that’s a finding, not an excuse. It means you’ve got a fragile dependency that needs to be re-architected.
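A rotation check like the one described above can be a one-screen script, assuming your secrets store can report each credential's creation time. The 90-day maximum age is a policy choice, not a standard, and the credential names are hypothetical.

```python
from datetime import datetime, timedelta, timezone
from typing import Dict, List

MAX_AGE = timedelta(days=90)  # rotation window set by policy

def rotation_findings(credentials: Dict[str, datetime]) -> List[str]:
    """Return every credential older than the allowed rotation window."""
    now = datetime.now(timezone.utc)
    return [
        f"{name}: {(now - created).days} days old, rotate now"
        for name, created in credentials.items()
        if now - created > MAX_AGE
    ]

creds = {
    "ci-deploy-key": datetime.now(timezone.utc) - timedelta(days=400),
    "payments-api-token": datetime.now(timezone.utc) - timedelta(days=10),
}
for finding in rotation_findings(creds):
    print(finding)
```

The interesting part isn't the script; it's what happens when a flagged credential can't be rotated. Per the point above, that dependency is the real finding.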

Monitor NHI behavior, not just NHI existence. Knowing what NHIs exist in your environment isn’t enough. You need behavioral baselines. A service account that suddenly starts accessing resources outside its normal pattern could indicate compromise. An AI agent making API calls at unusual volumes could mean it’s been manipulated through prompt injection. Traditional SIEM rules built for human behavior patterns don’t catch these. You need detection logic built for machine identity behavior.
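Baseline-versus-observed detection for machine identities can start very simply: a set of normal resources and a typical call volume per identity, aggregated from audit logs. The thresholds, identity names, and resource labels below are all hypothetical.

```python
from typing import Dict, List, Set, Tuple

# identity -> (resources it normally touches, typical hourly call volume)
BASELINES: Dict[str, Tuple[Set[str], int]] = {
    "invoice-agent": ({"crm:contacts", "billing:invoices"}, 200),
}

def detect(identity: str, resources: Set[str], hourly_calls: int) -> List[str]:
    """Flag access outside the baseline set, or call volume far above normal."""
    normal_resources, normal_volume = BASELINES[identity]
    alerts = []
    novel = resources - normal_resources
    if novel:
        alerts.append(f"{identity}: accessed unusual resources {sorted(novel)}")
    if hourly_calls > 5 * normal_volume:
        alerts.append(f"{identity}: {hourly_calls} calls/h vs baseline {normal_volume}/h")
    return alerts

# An agent suddenly reading payroll data at 10x its normal rate:
for alert in detect("invoice-agent", {"crm:contacts", "hr:payroll"}, 2400):
    print(alert)
```

Production systems would learn these baselines from weeks of log data rather than hardcoding them, but the detection logic is the same: machine identities are creatures of habit, which makes deviations unusually easy to spot once you bother to look.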

Add NHIs to your access review cycle. If your organization runs quarterly or semi-annual access recertifications for human accounts, NHIs need to be in that same cycle. Owners need to confirm that each NHI still requires the access it has. Stale, unused, or overprivileged NHIs need to be flagged and remediated with the same urgency as an orphaned admin account.


The security industry spent years trying to get organizations to manage privileged service accounts properly. Some succeeded. A lot didn’t, and those failures became breach after breach after breach.

We’re at the exact same inflection point with non-human identities, except the scale is orders of magnitude larger, the attack surface now includes autonomous AI agents that can reason and act on their own, and the window to get ahead of this is closing fast.

The IAM teams, the GRC programs, the people approving access requests for bots and agents and service accounts: they’re the ones who will determine whether NHIs become a managed risk or the next decade’s dominant breach vector.

The data says we’re currently heading toward the latter. That’s still fixable. But not if nobody treats it as urgent.

Juan Carlos Munera
Author
Passionate about cybersecurity, governance, risk, and compliance. Sharing insights on security best practices, frameworks, and industry trends.