A team sets up an AI agent. The agent needs access to internal tools, say Slack, so someone creates an API key with broad permissions. "We'll scope it later." The agent works. Everyone moves on.
Six months later, nobody remembers who created that key. What it can reach. Whether the permissions were ever reviewed. They weren't.
I know this because it happened to us. We're a two-person team building an AI product, and we couldn't trace one of our own API keys back to its source. Couldn't tell if it was production or staging. Now picture a company with hundreds of developers and dozens of agents, each with its own credentials. None of them expiring. None of them reviewed.
Ring any bells?
The numbers
A Cybernews survey from August 2025 found that 59% of employees use unapproved AI tools at work. Of those, 75% share sensitive data with them. Not accidentally. Routinely.
The UpGuard "State of Shadow AI" report from November 2025 puts it higher: 81% globally. And the part that got me: 88% of security leaders admitted to using unauthorized AI tools themselves. The people responsible for enforcing governance are bypassing it.
The problem is that none of this was designed for machines.
Identity management was built for people. People log in, work, log out. Their access has a lifecycle: onboarding, role changes, offboarding.
AI agents don't follow any of that. Static tokens, broad permissions, persistent access. They run around the clock. They don't trigger the same flags a human account would.
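The alternative to a static key is a credential that names its owner, its scopes, and its expiry. A minimal sketch in Python, using a hypothetical HMAC-signed token (every name here is illustrative; in practice you'd use an identity provider or a standard flow like OAuth client credentials, and the signing key would live in a secrets manager, not in code):

```python
import base64
import hashlib
import hmac
import json
import time

# Hypothetical signing key for the sketch; in practice, fetch from a KMS.
SECRET = b"demo-signing-key"

def mint_token(agent_id: str, scopes: list[str], ttl_seconds: int = 900) -> str:
    """Issue a short-lived credential that records who it's for and what it may do."""
    payload = {"sub": agent_id, "scopes": scopes, "exp": int(time.time()) + ttl_seconds}
    body = base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"

def verify_token(token: str) -> dict:
    """Reject tampered or expired tokens before the agent touches anything."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise PermissionError("bad signature")
    payload = json.loads(base64.urlsafe_b64decode(body))
    if payload["exp"] < time.time():
        raise PermissionError("token expired")
    return payload

claims = verify_token(mint_token("slack-summarizer", ["slack:read"]))
print(claims["sub"], claims["scopes"])
```

The point isn't the crypto. It's that a fifteen-minute token with a named owner and a `slack:read` scope answers, by construction, the three questions nobody could answer above: who created it, what it can reach, and when it stops working.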
And when one of those credentials gets compromised, it doesn't leak one user's data. It leaks whatever that agent could reach.
It gets worse
In February 2026, security researchers at Koi Security audited ClawHub, the official skill marketplace for OpenClaw. Out of 2,857 skills, 341 were malicious. Roughly 12%. One attacker alone uploaded 677 trojanized packages. Some had thousands of downloads before anyone noticed. The malicious skills stole SSH keys, browser credentials, wallet data. Some installed info-stealers like Atomic macOS Stealer. All on a marketplace where any GitHub account older than a week could publish.
Same month, researchers found over 8,000 MCP servers publicly exposed on the internet. No authentication on admin panels. Default configs. Debug endpoints wide open.
Shadow IT was already a governance problem. Shadow AI is worse. Agents don't just store data or move it around. They make decisions. They create records, send emails, trigger workflows, call external APIs. That's the entire point.
You can't do forensics on a system you didn't know existed.
What we're doing about it
The fix isn't to stop using agents. It's to build governance, audit trails, and compliance infrastructure into the agent itself, not bolt it on after. The kind of thing nobody wants to build, and everyone will eventually require.
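One sketch of what "built in, not bolted on" can mean: every tool call the agent makes leaves a structured audit record, success or failure. The names here (`audited`, the record fields) are illustrative, not a real product API:

```python
import functools
import json
import time

def audited(agent_id: str, log=print):
    """Decorator: each invocation of a wrapped tool emits a structured audit record."""
    def decorator(tool):
        @functools.wraps(tool)
        def wrapper(*args, **kwargs):
            record = {
                "ts": time.time(),
                "agent": agent_id,
                "tool": tool.__name__,
                "args": repr(args) + repr(kwargs),
                "status": "error",  # flipped to "ok" only if the call returns
            }
            try:
                result = tool(*args, **kwargs)
                record["status"] = "ok"
                return result
            finally:
                log(json.dumps(record))  # in practice: an append-only log, not stdout
        return wrapper
    return decorator

trail = []

@audited("slack-summarizer", log=trail.append)
def send_message(channel: str, text: str) -> str:
    return f"sent to {channel}"

send_message("#general", "daily summary")
```

Because the record is written in a `finally` block, a crashed or interrupted tool call still shows up in the trail, which is exactly what forensics needs.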
Try this. Pick one of your AI agents and list everything it has access to. Every API, every integration, every credential. If you can do that in under an hour, you're ahead of most companies.
If you can't, you have no idea what to revoke when something goes wrong. And something always goes wrong.
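A rough starting point for that exercise, assuming the agent's credentials arrive via environment variables (they often do): scan for credential-looking names and build a masked inventory. This only catches env-based secrets, not keys baked into config files or code, and the name heuristic is mine, not a standard:

```python
import os
import re

# Heuristic: names that usually indicate a credential. Extend for your naming scheme.
CREDENTIAL_PATTERN = re.compile(r"KEY|TOKEN|SECRET|PASSWORD|CREDENTIAL", re.IGNORECASE)

def inventory(environ=None) -> list[dict]:
    """List credential-looking variables with values masked, never printed in full."""
    environ = os.environ if environ is None else environ
    found = []
    for name, value in environ.items():
        if CREDENTIAL_PATTERN.search(name):
            found.append({
                "name": name,
                "preview": value[:4] + "..." if value else "(empty)",
                "length": len(value),
            })
    return sorted(found, key=lambda entry: entry["name"])

# Hypothetical environment for the sketch; call inventory() with no argument for real.
for entry in inventory({"SLACK_API_KEY": "xoxb-1234abcd", "HOME": "/home/agent"}):
    print(entry["name"], entry["preview"], entry["length"])
```

Run it per agent, diff the output over time, and you have the beginnings of the list the exercise asks for.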
You've seen the LinkedIn posts. Someone's agent racked up a five-figure cloud bill because a leaked key let it spin up resources. Someone else had their keys stolen and is writing about going bankrupt. These show up in my feed weekly. At some point, the cautionary tales stop being about other people.
It's why we started fluado. Because the tooling to do this right doesn't exist yet.
If you're trying to figure out how to deploy agents without the governance headache, we should talk.

