Agents Don't Need MCP to Figure Things Out. But They Do Need Something Better.
MCP isn't the destination. It's a waypoint.

Give a capable agent enough time and tokens, and it will work out most things on its own.
I have watched this happen in production: an agent navigating an API it was never explicitly told about, piecing together the right endpoints from the documentation, and making it work. No MCP. No custom tooling. Just the model, some context, and enough runway.
That is not a reason to abandon structured agent tooling. It is a reason to get clearer about what that tooling is actually for.
The Binary Debate Is the Wrong Debate
The MCP conversation right now is too binary. Either you are building MCP servers for everything, or you are on the "MCPs were a mistake, bash is better" side. Both camps are missing the point.
DHH ran a clean experiment in February that illustrates where this is heading. He gave an agent a computer, no MCPs, no APIs configured, and a single task. The agent got its own email address, completed a signup flow, sourced images, and joined a Basecamp account from an email invite. No special equipment. Just the model navigating the open web.
His read: agents are heading toward a world where they do not need custom integration layers to interact with their environment.
I think he is right about the direction. I do not think it means MCP is dead.
Where MCP Still Earns Its Keep
For complex operations on internal systems, structured tooling still makes sense. When an agent needs to write to a database, modify records, or execute a transactional workflow where partial failure is a real problem, I want a clean, controlled interface between the agent and the system.
The question is not "should I build an MCP server." It is "what is the cost of the agent figuring this out alone versus having a defined interface, and what happens if it gets it wrong."
Low stakes, read-only, web-native: let the agent navigate. High stakes, write operations, internal systems: wrap it properly. That logic is not new. It is the same abstraction decision we made when designing APIs for human developers.
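That decision rule is easy to state but worth pinning down. Here is a minimal sketch of it as a triage function; the field names and the specific criteria are my own framing of the rule above, not any standard.

```python
from dataclasses import dataclass

@dataclass
class Operation:
    """Rough profile of what the agent is about to do.
    Field names are illustrative, not any standard schema."""
    writes: bool            # does it mutate state?
    internal_system: bool   # internal database/service vs the open web
    transactional: bool     # does partial failure leave things broken?

def needs_structured_tooling(op: Operation) -> bool:
    """Heuristic: wrap high-stakes write paths in a defined interface;
    let the agent navigate low-stakes, read-only, web-native tasks alone."""
    return op.writes or op.internal_system or op.transactional

# Read-only lookup on a public site: let the agent figure it out.
browse = Operation(writes=False, internal_system=False, transactional=False)
# Modifying records in an internal billing system: wrap it properly.
refund = Operation(writes=True, internal_system=True, transactional=True)

assert not needs_structured_tooling(browse)
assert needs_structured_tooling(refund)
```

In a real system this is a design review question, not a runtime check, but writing it down forces the team to name the stakes before shipping a tool.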
The Real Constraint Is Data Efficiency
Here is the part that does not get enough attention.
The 2025 State of AI Agents report found that 46% of teams cite integration with existing systems as their primary challenge. That number will come down as models improve at self-directed navigation. But the deeper constraint will remain: how many tokens it takes to get from a task to a correct result.
Every layer of tooling is a tax on that. Large tool definitions consume context. Poorly designed interfaces force extra calls to clarify what the agent should already know. I have seen agent runs where 40% of the token spend was the model navigating its own tooling rather than doing the actual work. That is a badly designed interface, not a smart agent.
The agents that perform best in production are not the ones with the most tools. They are the ones where every interface is designed around what the agent actually needs to know, in the format it most naturally reasons about.
Most MCP servers today are not that. They are API wrappers with JSON schemas bolted on, built from human-readable documentation that got reformatted into tool calls.
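To make that tax concrete, here is a sketch comparing two tool definitions for the same capability. Both schemas are invented for illustration, and the characters-divided-by-four token estimate is a crude rule of thumb, not a real tokenizer.

```python
import json

# A typical auto-generated wrapper: every REST parameter exposed verbatim.
wrapped_api = {
    "name": "get_customer_v2",
    "description": "Wraps GET /api/v2/customers/{id}. Returns the full "
                   "customer resource as documented in the REST reference.",
    "parameters": {
        "type": "object",
        "properties": {
            "id": {"type": "string", "description": "Customer UUID"},
            "include": {"type": "string", "description": "Comma-separated sub-resources"},
            "fields": {"type": "string", "description": "Sparse fieldset selector"},
            "page": {"type": "integer", "description": "Page number"},
            "per_page": {"type": "integer", "description": "Results per page"},
        },
        "required": ["id"],
    },
}

# The same capability, scoped to the one question the agent actually asks.
agent_first = {
    "name": "lookup_customer",
    "description": "Get a customer's name, plan, and status by email.",
    "parameters": {
        "type": "object",
        "properties": {"email": {"type": "string"}},
        "required": ["email"],
    },
}

def rough_tokens(tool: dict) -> int:
    # ~4 characters per token is a common approximation, not a tokenizer.
    return len(json.dumps(tool)) // 4

assert rough_tokens(agent_first) < rough_tokens(wrapped_api)
```

Every tool definition rides along in the context window on every call, so this gap compounds across a run. The lean version also removes decisions the agent should never have to make, like pagination parameters on a single-record lookup.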
Something Better Is Coming
MCP is a waypoint, not a destination. The pattern we are converging on is some kind of interface standard designed from the ground up for how models process information, not how REST APIs were designed for developers.
What that looks like exactly, nobody knows yet. But the direction is clear: agent tooling will evolve toward making the right data available in the most token-efficient, latency-minimal way possible. APIs and tooling should accelerate agents, not become another layer they have to work around.
The ones that are not accelerants will get replaced. Not by agents brute-forcing the open web, but by better-designed interfaces that treat the model as the primary consumer.
Build for that.


