If you're building anything with AI agents in the enterprise, you've almost certainly encountered MCP, the Model Context Protocol. It has become, in a remarkably short period of time, the de facto standard for connecting AI agents to the tools and data they need to do useful work. And in January and February of this year, security researchers filed over 30 CVEs against MCP servers, clients, and infrastructure, including a CVSS 9.6 remote code execution flaw in a package that had been downloaded nearly half a million times.

That's a concerning number, but the individual vulnerabilities aren't really the story. The story is what they reveal about how we've been thinking about agent-to-tool trust, which is to say, we mostly haven't been.

The Trust Problem Underneath the CVEs

Among 2,614 MCP implementations surveyed, 82% use file operations vulnerable to path traversal, two-thirds have code injection risk, and over a third are susceptible to command injection. Researchers found more than 8,000 MCP servers visible on the public internet, many with admin panels and API routes exposed without authentication. These aren't exotic attack vectors; they're the infrastructure equivalent of leaving the building unlocked and the security cameras off.

What makes this particularly concerning for agentic AI is a new class of attack called tool poisoning. In a tool poisoning attack, an adversary modifies the description of an MCP tool (the metadata that tells an agent what the tool does and when to use it) to trick the agent into exfiltrating data, executing unintended operations, or escalating its own privileges. The agent isn't compromised in the traditional sense; it's faithfully following instructions that have been tampered with. The vulnerability isn't in the model. It's in the implicit trust the agent places in the tools it's been told to call.

"The vulnerability isn't in the model. It's in the trust agents place in their tools."

Why Role-Based Permissions Aren't Enough

The conventional response to access control problems in enterprise software is role-based permissions: define roles, assign permissions, enforce at the boundary. But agentic AI introduces a wrinkle that role-based models were never designed for. An agent acting on behalf of a user may need to call another agent, which calls a tool, which accesses a data source, and the question of "what is this agent authorized to do" can't be answered by looking at a static role assignment, because the authorization needs to flow dynamically through a chain of delegated actions.

This is where capabilities-based trust becomes a more promising model. Instead of granting an agent a role that confers broad, static permissions, you issue it a cryptographically signed capability (sometimes called a datagram or token) that specifies exactly what it can do, for how long, and on whose behalf. When that agent delegates to another agent or tool, the capability can be scoped, attenuated, and verified at each step. No implicit trust. No ambient authority. Every action is traceable to a specific, verifiable grant.

A Path Forward, Not a Silver Bullet

I want to be clear that capabilities-based trust isn't a magic fix that makes every MCP deployment secure overnight. It's an architectural direction, a way of thinking about agent authorization that addresses the systemic gap the CVEs exposed. The practical implementation requires infrastructure: a governance layer that sits between agents and the tools they call, enforcing capability verification, logging every interaction, and ensuring that tool descriptions match their actual behavior.

What I find encouraging is that this isn't speculative computer science; capabilities-based security has a long research history (dating back to Dennis and Van Horn in 1966, if you're inclined to look it up). What's new is the context in which it becomes urgent. When agents are making autonomous decisions about which tools to call and what data to access, the old model of "authenticate once, authorize broadly" becomes genuinely dangerous. The enterprises that get ahead of this will be the ones that deploy their agent infrastructure with a control plane that enforces trust at every interaction, not just at the front door. It's a design challenge we're actively working through at FlashQuery, and one we think deserves more serious attention from anyone building in the agentic AI space.

Matt Genovese
Matt Genovese
Founder at FlashQuery to give enterprises sovereign control over their AI control plane.