
AI agents won't kill your system of record. Your rate limits will.

Zain Hoda, co-founder of Vanna AI, recently wrote a sharp thread arguing that AI agents will hollow out systems of record.[1] The thesis: once an agent can clone your entire CRM in seconds, the data moat evaporates. The SoR becomes a dumb write endpoint while the agent becomes the real interface.

He’s right about the problem. I think he’s wrong about the outcome.

Systems of record aren’t going to collapse. But the ones that fight this shift will absolutely get replaced by the ones that don’t.

The cybersecurity parallel nobody’s making

In cybersecurity, there’s a foundational principle: authorise as close to the resource as possible. Don’t put all your trust at the perimeter and hope for the best. Push access control down to where the data actually lives.

Systems of record already do this — and they’ve been doing it for decades. They combine two things that are genuinely hard to separate: the enterprise data itself and the access control rules that govern who can see and change it. Who can view this customer record? Who can approve this expense? Who changed this field and when?
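That coupling of data and rules is easy to sketch. Here's a minimal illustration, with hypothetical `User` and `CustomerStore` types invented for the example: the permission check lives inside the store itself, right next to the records, so a caller that slips past the perimeter still can't slip past the check.

```python
# A minimal sketch of "authorise at the resource": the access check
# lives in the data-access layer, not at the API gateway. All names
# (User, CustomerStore, the role strings) are hypothetical.

from dataclasses import dataclass, field


@dataclass
class User:
    name: str
    roles: set = field(default_factory=set)


class CustomerStore:
    """Owns both the records and the rules governing access to them."""

    def __init__(self):
        self._records = {}    # customer_id -> record dict
        self._audit_log = []  # (action, who, which record)

    def get(self, user: User, customer_id: str) -> dict:
        # The check happens here, where the data lives -- bypassing the
        # perimeter doesn't bypass this.
        if not user.roles & {"sales", "admin"}:
            raise PermissionError(f"{user.name} may not view customers")
        self._audit_log.append(("read", user.name, customer_id))
        return self._records[customer_id]

    def put(self, user: User, customer_id: str, record: dict) -> None:
        if "admin" not in user.roles:
            raise PermissionError(f"{user.name} may not modify customers")
        self._audit_log.append(("write", user.name, customer_id))
        self._records[customer_id] = record
```

Note that the audit trail falls out for free: because every read and write funnels through the store, "who changed this field and when" is answered by the same layer that enforces "who may".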

These aren’t incidental features. They’re the entire reason regulated industries can’t just dump everything into an agent’s context window and call it a day.[2]

Hoda acknowledges this in passing — governance, permissioning, multi-user sync — but dismisses it as “a much smaller business.” I think that dramatically undersells it.

Rate limiting is the wrong fight

Where I strongly agree: systems of record that respond to AI by locking down API access are playing a losing game. A stupid game, even.

Rate limits don’t protect your moat. They just make your product worse. A sufficiently motivated agent will cache locally, sync periodically and route around you entirely. Congratulations — you’ve now trained your customers to treat you as an unreliable upstream dependency rather than the centre of their workflow.

This is the enterprise software equivalent of the music industry suing Napster. You’re not wrong about ownership. You’re just catastrophically wrong about strategy.

MCP is the adaptation that matters

Here’s where it gets interesting.

The Model Context Protocol is an open standard that lets AI agents talk to enterprise software in a structured way.[3] It’s not just another integration layer. It’s a way for systems of record to remain competitive precisely because it lets them maintain control where it matters.

An MCP server sits in front of your data and exposes it to AI agents in a structured, governed way. Think of it like a concierge desk in a hotel. The agent doesn’t get the master key to every room. It makes a request, the concierge checks whether that request is allowed, and hands over only what’s appropriate. Every interaction is scoped, authenticated and logged.
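The concierge pattern is simple to sketch. This is a hand-rolled illustration, not the actual MCP SDK — the token store, scope names and tool are all invented — but it shows the shape: authenticate, check scope, log, then return only what the scope allows.

```python
# Sketch of the "concierge" pattern an MCP server implements: every
# agent request is authenticated, checked against a granted scope,
# logged, and answered with only the fields that scope permits.
# Hypothetical names throughout; not the real MCP SDK.

import time

AGENT_SCOPES = {
    "agent-token-123": {"customers.read"},  # invented token store
}

AUDIT_LOG = []


def handle_request(token: str, tool: str, args: dict) -> dict:
    scopes = AGENT_SCOPES.get(token)
    if scopes is None:
        return {"error": "unauthenticated"}  # no master key handed out
    if tool not in scopes:
        return {"error": f"scope '{tool}' not granted"}
    # Every allowed interaction is logged before it's served.
    AUDIT_LOG.append({"ts": time.time(), "token": token,
                      "tool": tool, "args": args})
    if tool == "customers.read":
        record = {"id": args["id"], "name": "Acme Ltd", "arr": 50_000}
        # Hand over only what's appropriate: strip fields the
        # read scope isn't entitled to see (here, revenue).
        return {k: v for k, v in record.items() if k != "arr"}
    return {"error": "unknown tool"}
```

The point of the sketch is the ordering: authentication and scoping happen before any data is touched, and the audit entry is written on the same path — there is no route to the records that skips the desk.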

This is the “authorise at the resource” principle, applied to AI. Rather than fighting agent access, you’re channelling it through a layer you control.

The SoR that embraces MCP says: “Yes, agents can interact with our data. Here’s the protocol. Here are the permissions. Here’s the audit trail.” The one that fights it says: “No, you can’t have more than ten API calls per minute.” Which one are you building on?

Of course, MCP isn’t magic. Ship an MCP endpoint without authentication or access control and you’ve just given every agent on the internet an open door.[4] The Clawdbot incident in January proved that — over a thousand exposed deployments, most of them running default configs with no auth. The concierge desk only works if someone’s actually checking credentials.

Treating changes like bank transfers

There’s another angle that doesn’t get enough attention: reversibility.

AI agents will make mistakes. They’ll update the wrong record, merge duplicates that shouldn’t be merged, create entries based on hallucinated context. This isn’t a hypothetical — it’s the inevitable cost of autonomous action at scale.

Systems of record are uniquely positioned to handle this, but only if they treat every AI-initiated change like a bank transfer. When your bank processes a payment, it doesn’t just subtract from one account and add to another. It creates a record of the transfer: what changed, when, by whom, and what the balances were before and after. If something goes wrong, the bank can reverse the transfer cleanly without knocking everything else out of alignment.

Every change an AI agent makes should work the same way. Discrete, recorded, with a clear before-and-after state. If the agent updates a customer record incorrectly, you should be able to hit undo without worrying about what else breaks downstream.

Make rollback easy. Make the blast radius visible. Give humans a one-click undo that doesn’t cascade into chaos.
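As a sketch of what that looks like in practice (illustrative names, not any particular platform’s API): every write records its before-and-after state in an append-only ledger, and undo refuses to fire blindly when a later change has touched the same record — that’s the blast radius made visible.

```python
# Sketch of change-level reversibility, bank-transfer style: each
# agent-initiated write is a discrete, recorded change with explicit
# before/after state, so any single change can be reversed cleanly.
# Names are illustrative.

class Ledger:
    def __init__(self):
        self.state = {}    # canonical records: key -> value
        self.changes = []  # append-only: (key, before, after, actor)

    def apply(self, actor: str, key: str, new_value):
        before = self.state.get(key)
        self.changes.append((key, before, new_value, actor))
        self.state[key] = new_value
        return len(self.changes) - 1  # change id, for later rollback

    def undo(self, change_id: int):
        key, before, after, _actor = self.changes[change_id]
        # Only roll back if nothing has overwritten this record since;
        # otherwise the blast radius includes later writes, and a human
        # should review instead of blindly reverting.
        if self.state.get(key) != after:
            raise ValueError("a later change touched this record; review needed")
        if before is None:
            del self.state[key]
        else:
            self.state[key] = before
```

The conflict check in `undo` is the important design choice: a rollback that silently clobbers subsequent writes would just be a second mistake wearing an undo button.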

This is where the “governance is just a feature” argument falls apart completely. Building this kind of change-tracking and safe reversal into a data platform isn’t a sidecar bolt-on. It’s genuinely hard engineering that sits at the heart of the system. And it’s exactly the kind of thing a standalone agent cache can’t do well, because it doesn’t own the canonical state.

The real split

What’s actually happening isn’t “systems of record collapse.” It’s a bifurcation.

Systems of record that adapt will expose rich MCP interfaces, maintain authoritative access control, provide change-level auditability and make AI a first-class citizen of their platform.[5] They’ll be more valuable, not less, because they’re the trust layer in an increasingly autonomous stack.

Systems of record that don’t adapt will rate-limit, restrict and litigate their way into irrelevance. Their customers will migrate to platforms that work with the agent paradigm rather than against it.

The moat was never “we store your data.” It was “we’re the system you trust with your data.” Trust requires governance, auditability and control. Those things aren’t going away — they’re about to matter a lot more.

TL;DR

Systems of record aren’t dying. But the ones that treat AI access as a threat rather than a design constraint will lose to the ones that don’t. MCP gives these platforms a way to stay authoritative while opening up to agents. The winners will be the platforms that make AI interaction trackable, reversible and auditable by default.

The losers will be the ones still arguing about rate limits.


Footnotes

  1. Zain Hoda (co-founder, Vanna AI), “The Agent Will Eat Your System of Record”, X/Twitter, 2025.

  2. Cerbos, “MCP Permissions: Securing AI Agent Access to Tools”, September 2025. Detailed treatment of access control enforcement via MCP servers.

  3. Anthropic, “Model Context Protocol”, November 2024. The protocol was donated to the Agentic AI Foundation under the Linux Foundation in December 2025.

  4. PointGuard AI, “Clawdbot MCP Vulnerability Exposes AI Agents”, January 2026. Over a thousand exposed MCP deployments with no authentication — a real-world example of what happens when the concierge desk has no one behind it.

  5. Microsoft, “Dynamics 365 ERP Model Context Protocol”, November 2025. Microsoft’s framing of the shift “from systems of record to systems of action.”