James Aspinwall | February 19, 2026
MCP servers are the plumbing that lets LLMs do real work — call APIs, query databases, control applications. But running them has been a friction-filled exercise in dependency management, environment config, and auth wrangling. Docker’s MCP toolkit changes that equation significantly. It turns MCP servers into containers you can spin up, tear down, and compose without touching the host system.
This article walks through the full stack: what MCP is, how Docker’s toolkit runs existing servers from a catalog, how to wire them into multiple LLM clients, and how to generate and run your own custom MCP servers for tools like Obsidian, Toggl, and Kali Linux.
MCP in Sixty Seconds
MCP — Model Context Protocol — is Anthropic’s standard for exposing tools to language models. The idea is simple: instead of giving an LLM a raw API or a GUI, you stand up an MCP server that implements the logic. The server describes its tools in a schema — names, typed parameters, plain-language descriptions — and the LLM calls them by name.
The client (Claude Desktop, Cursor, LM Studio, whatever) never touches the underlying API directly. It sees tool names like append_note or search_vault and sends structured requests. The MCP server handles auth, endpoints, payloads, error handling — all the messy details stay behind the wall.
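Concretely, a tool call is a JSON-RPC request over whatever transport is in use. A sketch of what the client sends when Claude appends to a note (the append_note parameter names here are illustrative, not Obsidian’s actual schema):

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "append_note",
    "arguments": {
      "path": "Inbox/MCP article.md",
      "content": "Docker MCP toolkit notes..."
    }
  }
}
```

The server advertised append_note earlier in response to a tools/list request, along with a JSON Schema describing its arguments — that schema is all the LLM needs to construct this call.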
Two transport modes:
- Local (stdio): JSON-RPC over stdin/stdout. The client spawns the server process and talks to it directly. Zero network overhead.
- Remote: HTTP(S) with Server-Sent Events (SSE). The server listens on a port — same machine, different machine, cloud — and clients connect over a URL.
This transport flexibility is what makes Docker’s approach work. A container can expose either mode depending on how you run it.
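The stdio framing is simple enough to sketch in a few lines: each message is one UTF-8 JSON object terminated by a newline, written to the spawned server’s stdin. A minimal sketch of the client side (the subprocess command in the comment is illustrative):

```python
import json

def frame_request(method: str, params: dict, req_id: int) -> bytes:
    """Encode one JSON-RPC request the way MCP's stdio transport
    expects: a single JSON object on one line, newline-terminated."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}
    return (json.dumps(msg) + "\n").encode("utf-8")

# A client spawns the server and writes frames to its stdin, e.g.:
#   proc = subprocess.Popen(["docker", "mcp", "gateway", "run"],
#                           stdin=subprocess.PIPE, stdout=subprocess.PIPE)
#   proc.stdin.write(frame_request("tools/list", {}, 1))
#   proc.stdin.flush()
```

Responses come back the same way on the server’s stdout, one JSON object per line.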
The Docker MCP Toolkit
Docker Desktop now ships an MCP toolkit (currently beta) on macOS, Linux, and Windows. Enable it in Docker Desktop settings and you get two things: a catalog of pre-built MCP servers and a gateway that orchestrates them.
The Catalog
Docker provides an official catalog of MCP servers — Obsidian, DuckDuckGo, Brave Search, Airbnb search, YouTube transcripts, and more. Adding one is a few clicks: select the server, supply any required API keys, and Docker spins up the container on demand.
Each catalog entry exposes a list of tools described in plain language. The Obsidian server, for example, offers tools like “append content to a note,” “search vault,” and “create note.” The LLM reads these descriptions and knows how to call them. You never write integration code.
The Gateway
The gateway is the key architectural piece. It’s a single MCP endpoint that sits between your LLM clients and all your MCP servers. Clients connect to one place — the gateway — and it handles routing to the correct server, manages container lifecycle (spinning up containers when tools are called, tearing them down after), and centralizes secrets.
This means you configure your LLM client once. Add ten MCP servers to your catalog and they all appear as available tools through the same connection.
Wiring Up LLM Clients
Claude Desktop
Claude Desktop connects to the Docker MCP gateway as a single client endpoint. Once connected, it automatically sees every tool from every MCP server in your catalog. The demo experience: ask Claude to create a note in your Obsidian vault, it calls the Obsidian MCP server’s append_note tool, you grant permission, and the note appears. No code. No auth handling on your side.
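Under the hood, the wiring is a one-time entry in claude_desktop_config.json that launches the gateway as a stdio server. A sketch — the server key name is arbitrary, and your Docker Desktop version may generate this entry for you:

```json
{
  "mcpServers": {
    "MCP_DOCKER": {
      "command": "docker",
      "args": ["mcp", "gateway", "run"]
    }
  }
}
```

Because Claude talks only to the gateway process, adding or removing servers in the catalog never requires touching this file again.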
LM Studio
LM Studio runs local models — Gemma, DeepSeek, whatever you prefer. Point it at the Docker MCP gateway and even local models can call the same tools. There’s a practical caveat: smaller local models struggle with complex multi-step tool use. They’ll handle single-tool calls fine but tend to lose the thread when orchestrating several tools in sequence. This is a model capability issue, not an MCP issue.
Cursor
Same pattern. Configure Cursor to point at the Docker MCP gateway and your coding assistant gains access to every tool in the catalog. Useful for development workflows where you want your editor’s AI to interact with external services — time tracking, documentation, search — without leaving the IDE.
Building Custom MCP Servers with AI
This is where it gets interesting. The pattern:
- Write a description of the tool you want.
- Feed it to an LLM (Claude Opus works well) along with a pre-written “MCP server build prompt.”
- The LLM generates the full server: Dockerfile, requirements.txt, server code, catalog YAML entry, and README.
- Build the Docker image. Register it with the gateway. Done.
Example: Dice Roller
A simple starter. You want tools for coin flips, D&D dice rolls, and custom dice. The LLM generates a dice_server.py implementing the MCP protocol, a Dockerfile to containerize it, and the YAML entries for the Docker MCP catalog and registry.
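The heart of the generated server is a dispatcher that maps tools/call requests to plain Python functions. A sketch of that core, with a hypothetical "spec" parameter for dice notation (the real generated file also handles initialize and tools/list, which are omitted here):

```python
import json
import random
import re

def roll_dice(spec: str) -> dict:
    """Parse standard dice notation like '2d6' and roll."""
    m = re.fullmatch(r"(\d+)d(\d+)", spec.strip().lower())
    if not m:
        raise ValueError(f"bad dice spec: {spec!r}")
    count, sides = int(m.group(1)), int(m.group(2))
    rolls = [random.randint(1, sides) for _ in range(count)]
    return {"rolls": rolls, "total": sum(rolls)}

def handle_tool_call(request: dict) -> dict:
    """Dispatch an MCP tools/call request to the matching tool and
    wrap the result in the response shape MCP expects."""
    name = request["params"]["name"]
    args = request["params"].get("arguments", {})
    if name == "roll_dice":
        result = roll_dice(args["spec"])
    elif name == "flip_coin":
        result = {"side": random.choice(["heads", "tails"])}
    else:
        raise ValueError(f"unknown tool: {name}")
    return {
        "jsonrpc": "2.0",
        "id": request["id"],
        "result": {"content": [{"type": "text", "text": json.dumps(result)}]},
    }
```

Results go back as text content in the JSON-RPC response; the LLM reads that text and folds it into its reply.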
Build the image:
docker build -t dice-server .
Add the catalog and registry entries. Restart the gateway. Now Claude can call roll_dice("2d6") or flip_coin() directly from the chat UI. Docker spins up the container for the request and tears it down after. Clean, ephemeral, no lingering processes.
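For orientation, a custom catalog entry looks roughly like this — the field names here are illustrative, so check the toolkit’s documentation for the exact schema your version expects:

```yaml
# custom.yaml — sketch of a catalog entry for the dice server
registry:
  dice:
    title: "Dice Roller"
    description: "Coin flips and dice rolls over MCP"
    type: server
    image: dice-server:latest
    tools:
      - name: roll_dice
      - name: flip_coin
```

The image name must match what you gave docker build; the gateway uses it to launch the container when a tool is called.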
Example: Toggl Time Tracking
More practical. The LLM-generated server wraps Toggl’s REST API and exposes tools like start_timer, stop_timer, list_timers, and current_timer. The Toggl API token is stored securely using Docker MCP secrets:
docker mcp secret set toggl_api_token
The server reads the secret at runtime. No tokens in environment variables, no .env files lying around. Add the server definition to the custom catalog, restart, and you’re tracking time conversationally: “Start a timer for the MCP article” — and it does.
Example: Kali Linux Security Tools
The ambitious one. A Kali Linux container exposing security tools — nmap, WPScan, SQLMap — as MCP tools for use against intentionally vulnerable targets like DVWA.
This one requires some troubleshooting. The generated code may need guardrail adjustments (whitelisting target hosts), Dockerfile fixes (running as root for tools that require it), and rebuilds. But once working, you can run security scans conversationally from any LLM client connected to the gateway. Useful for security professionals who want to orchestrate reconnaissance and scanning through natural language.
A word of caution: exposing offensive security tools through an LLM gateway is powerful and dangerous. Restrict targets aggressively. Run on isolated networks. This is for authorized testing only.
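One guardrail pattern worth keeping through all the troubleshooting is a hard allowlist checked before any tool runs. A sketch, with example lab hostnames — the generated server would call this before invoking nmap or anything else:

```python
import socket

# Only these hosts may ever be scanned. Everything else is refused
# before any security tool is invoked.
ALLOWED_TARGETS = {"dvwa.lab.local", "10.0.42.15"}

def check_target(target: str) -> str:
    """Refuse any target not explicitly allowlisted.

    Also resolves hostnames so an arbitrary name can't slip past the
    list by pointing at a non-allowlisted IP."""
    if target in ALLOWED_TARGETS:
        return target
    try:
        resolved = socket.gethostbyname(target)
    except socket.gaierror:
        raise PermissionError(f"refusing to scan unknown target: {target!r}")
    if resolved in ALLOWED_TARGETS:
        return target
    raise PermissionError(f"refusing to scan non-allowlisted target: {target!r}")
```

A refusal raised here surfaces to the LLM as a tool error, which is exactly where you want the failure: before a scan, not after.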
Remote Access and Workflow Automation
The gateway’s SSE transport mode opens up remote access. Start the gateway with:
docker mcp gateway run --transport sse --port 8811
Now any tool that speaks MCP over HTTP can connect — from another machine, from a workflow engine, from a cloud service. The practical application: point an n8n MCP node at the gateway URL and orchestrate multi-tool flows. Search DuckDuckGo, look up Airbnb listings, save results to Obsidian — all chained in one automation, all running through containerized MCP servers managed by the gateway.
Why This Matters
MCP servers aren’t new. But the friction of running them — managing Python environments, handling dependencies, wiring auth, keeping processes alive — has kept adoption limited to developers willing to fight through the setup.
Docker’s toolkit collapses that friction. Containers handle isolation. The gateway handles routing and secrets. The catalog handles discovery. And AI handles server generation — describe what you want, get a working server, build the image, register it.
The skill combination — using catalogs, writing custom MCP servers with AI assistance, wiring them through the Docker gateway, exposing them to agents and workflow engines — is still relatively rare. For developers and automation builders, this is high-leverage territory. The tools are ready. The patterns are established. The gap is adoption.
MCP turns LLMs from conversational partners into operational agents. Docker turns MCP servers from dependency nightmares into disposable containers. Together, they make “give the AI access to my tools” a solved problem rather than a weekend project.