In the previous article, we walked through setting up Claude Code with AgentMail’s MCP server. The setup requires installing an npm package locally via npx — a Node.js process that runs on your machine, sitting between Claude Code and AgentMail’s API. This raises a fair question: why the middleman?
What the NPM Package Actually Does
The @agentmail/mcp-server npm package is a local MCP server. It speaks the MCP protocol on one side (stdio transport, talking to Claude Code) and makes REST API calls to AgentMail’s cloud service on the other. It’s a protocol translator — converting MCP tool calls like send_message into HTTP requests against api.agentmail.to.
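The translation step is simple enough to sketch. The mapping below is illustrative only: the tool-to-endpoint routes and REST paths are hypothetical stand-ins, not AgentMail's actual internals.

```typescript
// Sketch of what an stdio MCP proxy does per tool call:
// map an MCP tools/call request onto a REST request.
// Route table and paths are hypothetical, for illustration.

interface McpToolCall {
  name: string;                       // e.g. "send_message"
  arguments: Record<string, unknown>; // tool arguments from the client
}

interface HttpRequest {
  method: string;
  url: string;
  headers: Record<string, string>;
  body: string;
}

function translate(call: McpToolCall, apiKey: string): HttpRequest {
  // Hypothetical mapping from MCP tool names to REST endpoints
  const routes: Record<string, { method: string; path: string }> = {
    send_message: { method: "POST", path: "/v0/messages" },
    list_inboxes: { method: "GET", path: "/v0/inboxes" },
  };
  const route = routes[call.name];
  if (!route) throw new Error(`unknown tool: ${call.name}`);
  return {
    method: route.method,
    url: `https://api.agentmail.to${route.path}`,
    headers: { Authorization: `Bearer ${apiKey}` },
    body: JSON.stringify(call.arguments),
  };
}
```

The real package also handles MCP handshaking, tool discovery, and response framing over stdout, but the core job is this one-way mapping.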
Claude Code launches it as a child process. It runs for the duration of your session, consuming local resources, and dies when Claude Code exits.
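In Claude Code, that child process is declared in the MCP configuration. A minimal sketch, assuming the package name from above and an illustrative server name and env variable:

```json
{
  "mcpServers": {
    "agentmail": {
      "command": "npx",
      "args": ["-y", "@agentmail/mcp-server"],
      "env": { "AGENTMAIL_API_KEY": "<your-api-key>" }
    }
  }
}
```

Every session, Claude Code runs that command, and `npx` resolves and executes the package.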
Would a Direct MCP URL Be More Agentic?
Yes. Unambiguously.
The more agentic architecture would be for AgentMail to expose a remote MCP server endpoint directly — something like https://mcp.agentmail.to/sse — that Claude Code connects to over SSE or HTTP streaming. No npm install, no local Node.js process, no version management. Just a URL and a bearer token.
This is how MCP was designed to evolve. The protocol supports remote transports (SSE, streamable HTTP). A direct connection would mean:
- Zero local dependencies — no Node.js requirement, no `npx` fetching packages on first run
- Instant setup — paste a URL and token into your MCP config, done
- Always current — server-side updates deploy without clients touching anything
- Lower resource usage — no local process consuming memory for what amounts to HTTP proxying
- True composability — any MCP client connects the same way, whether it’s Claude Code, Cursor, or a custom agent
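The config for such a setup would collapse to a few lines. A sketch, assuming the hypothetical endpoint above and a client that supports SSE transport with custom headers:

```json
{
  "mcpServers": {
    "agentmail": {
      "type": "sse",
      "url": "https://mcp.agentmail.to/sse",
      "headers": { "Authorization": "Bearer <your-api-key>" }
    }
  }
}
```

No command to spawn, no package to fetch: the client opens a connection and the server is just there.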
Why the NPM Proxy Exists Anyway
The npm approach isn’t a design choice so much as a timing artifact. MCP launched with stdio as the primary transport — the client spawns a local process and communicates over stdin/stdout. This was the path of least resistance for early MCP server authors:
- Stdio was the first stable transport. Remote MCP (SSE, streamable HTTP) came later and is still maturing in terms of client support and authentication standards.
- It sidesteps authentication complexity. With a local process, the API key lives in an environment variable on your machine. A remote MCP endpoint needs proper auth negotiation — OAuth flows, token refresh, session management. The MCP spec’s auth story is still solidifying.
- Client compatibility. Not all MCP clients support remote transports yet. An npm package over stdio works with every MCP client that exists today.
- Offline capability. The local process could theoretically cache, queue, or batch requests — though most implementations, including AgentMail’s, don’t actually do this.
The Cost of the Proxy Pattern
The tradeoff is real. Every npm-based MCP server adds:
- A Node.js runtime dependency
- A process to manage and debug when things break
- A version to keep updated
- A first-run delay while `npx` downloads the package
- A potential supply chain attack surface (you’re running code fetched from npm)
Multiply this across the MCP servers a power user might configure — email, calendar, GitHub, Slack, databases — and you’re running a small fleet of Node.js processes locally, each doing little more than translating MCP calls into API requests.
Where This Is Heading
The industry is clearly moving toward remote MCP servers. Cloudflare, Smithery, and others are building infrastructure for hosted MCP endpoints. The protocol’s streamable HTTP transport was designed precisely for this use case.
AgentMail — and most MCP service providers — will likely offer direct remote endpoints once the ecosystem matures. The npm package is scaffolding, not architecture.
For now, the npm proxy works. It’s the pragmatic choice given where MCP tooling stands today. But if you’re building an MCP server for your own service, consider starting with remote transport. The stdio-npm pattern is a bridge to a world where MCP servers are just URLs — and we’re almost there.