By James Aspinwall, co-written by Alfred Pennyworth (my trusted AI) — March 7, 2026, 07:22
ClickHouse just raised $400 million at a $15 billion valuation — more than doubling the $6.35 billion valuation set nine months earlier. Dragoneer led, with Bessemer, GIC, Index Ventures, Khosla, Lightspeed, and T. Rowe Price participating. ClickHouse Cloud ARR is growing 250%+ year-over-year across 3,000+ customers. The company acquired Langfuse — the leading open-source LLM observability platform (20,000 GitHub stars, 26M+ monthly SDK installs, used by 63 Fortune 500 companies). And it launched an MCP server.
That last point is why WorkingAgents should pay attention.
ClickHouse is not just another database. It is a real-time analytical engine that processes billions of rows in milliseconds — and it just built a native interface for AI agents to query it. The convergence of real-time analytics, agent-facing data access, and LLM observability creates a partnership surface with WorkingAgents that neither company could build alone.
What ClickHouse Does
ClickHouse is an open-source, column-oriented OLAP database built for speed. Where traditional row-oriented databases (Postgres, MySQL) store data horizontally — one row at a time — ClickHouse stores data vertically, one column at a time. For analytical queries that scan specific columns across millions or billions of rows, this architecture can be up to 100x faster than row-oriented alternatives.
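The difference is easy to see in miniature. The following toy Python sketch (a conceptual illustration, not ClickHouse internals) contrasts the two layouts:

```python
# Toy illustration of why column-oriented storage helps analytical scans.
# This is a conceptual sketch, not ClickHouse internals.

# Row-oriented layout: each record stored together (Postgres/MySQL style).
rows = [
    {"user_id": 1, "country": "DE", "revenue": 120.0},
    {"user_id": 2, "country": "US", "revenue": 80.0},
    {"user_id": 3, "country": "DE", "revenue": 200.0},
]

# Column-oriented layout: each column stored contiguously (ClickHouse style).
columns = {
    "user_id": [1, 2, 3],
    "country": ["DE", "US", "DE"],
    "revenue": [120.0, 80.0, 200.0],
}

# "SELECT sum(revenue)" over the row layout touches every field of every row...
row_total = sum(r["revenue"] for r in rows)

# ...while the column layout reads exactly one contiguous array.
col_total = sum(columns["revenue"])

print(row_total == col_total)  # same answer; only the access pattern differs
```

An aggregate over the row layout touches every field of every record, while the columnar layout reads one contiguous array; at billions of rows, that contiguity is also what enables the compression and vectorized execution behind ClickHouse's speed.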
The numbers are not marketing:
| Metric | ClickHouse |
|---|---|
| Query speed | Milliseconds at petabyte scale |
| GitHub stars | 46,200+ |
| Contributors | 2,800+ |
| Pull requests | 70,400+ |
| Releases | 746+ |
| Cloud customers | 3,000+ |
| ARR growth | 250%+ YoY |
| Valuation | $15B |
| Total funding | ~$1B+ |
Enterprise customers include Meta, eBay, Microsoft, Spotify, Lyft, HubSpot, Cisco, GitLab, Deutsche Bank, IBM, Sony, Tesla, Capital One, Anthropic, and Cursor. Recent AI-focused adopters include Sierra, Poolside, Weights & Biases, LangChain, Lovable, and Decagon.
The Product Suite
ClickHouse Cloud — Fully managed, serverless, auto-scaling. Available on AWS, GCP, and Azure. Three tiers:
| Tier | Starting Price | Features |
|---|---|---|
| Basic | $50/month | Serverless, scales to zero |
| Scale | Custom | Dedicated clusters, isolated hardware |
| Enterprise | Custom | BYOC, data residency, advanced compliance |
ClickHouse Open Source — Self-hosted, free. The same engine that powers the cloud offering. 46,200+ stars on GitHub.
ClickHouse Local — Query CSV, TSV, Parquet files directly without a server. Zero setup.
Postgres Managed by ClickHouse — Native CDC (change data capture) piping Postgres transactions into ClickHouse for up to 100x faster analytics. Unified transactional + analytical stack.
ClickStack — Open-source observability platform. Logs, metrics, and traces stored in ClickHouse.
Langfuse Cloud — LLM observability. Every LLM call traced — cost tracking, quality evaluation, prompt versioning. Langfuse runs on ClickHouse under the hood.
Use Cases
- Real-time dashboards — Instant analytics over billions of rows for user-facing products
- Observability — Log, metric, and trace storage at massive scale (ClickStack, Langfuse)
- Data warehousing — Cost-efficient analytical processing at petabyte scale
- ML and GenAI — Vector search, training dataset aggregation, LLM observability
- Agent-facing analytics — AI agents querying databases autonomously via MCP
The Agentic Data Stack
This is where ClickHouse’s vision intersects directly with WorkingAgents.
In January 2026, ClickHouse published “The Agentic Data Stack” — a reference architecture for connecting AI agents directly to data. The thesis: traditional analytics pipelines (user → ticket → analyst → dashboard → answer, taking days or weeks) are being replaced by agent-facing systems where AI autonomously discovers, queries, and analyzes data in seconds.
The architecture has three layers:
Chat Layer (LibreChat) — ChatGPT-style interface supporting multiple LLM providers, MCP server connections, inline charts and tables, and code execution.
Data Layer (ClickHouse + MCP Server) — The MCP server exposes three tools to AI agents: list databases, list tables, and execute read-only SELECT queries. Agents iteratively explore schemas and run analytical queries at sub-second speed across billions of rows.
Observability Layer (Langfuse) — Full LLM tracing capturing every call, enabling cost tracking, quality evaluation, and prompt versioning.
Real-world adoption validates the pattern:
- Shopify runs thousands of custom agents connected to 30+ internal MCP servers via LibreChat
- Daimler Truck employees created 3,000+ custom AI agents for manufacturing and data retrieval
- ClickHouse internally — their DWAINE agent handles 70% of data warehouse queries for 200+ internal users, processing 33 million LLM tokens per day
- cBioPortal built an agent enabling cancer genomics researchers to query data with natural language
The MCP Server
ClickHouse’s remote MCP server is now in public beta for ClickHouse Cloud. It exposes:
- List databases — Agent discovers available data sources
- List tables — Agent explores schema structure
- Execute SELECT — Agent runs read-only analytical queries
Security model: OAuth-based authentication, read-only access only, fully managed on ClickHouse Cloud. The self-managed MCP server (PyPI package) has 220,000+ downloads.
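A minimal sketch of that surface, using illustrative tool names and an in-memory catalog rather than the real server's implementation, shows how the read-only guard and schema-discovery tools fit together:

```python
# Sketch of the three MCP tools and the read-only guard. Tool names and the
# catalog are illustrative assumptions; the real server's interface differs.
CATALOG = {
    "analytics": {
        "events": ["ts", "user_id", "action"],
        "stocks": ["ts", "ticker", "close"],
    }
}

def list_databases() -> list[str]:
    # Tool 1: the agent discovers available data sources.
    return sorted(CATALOG)

def list_tables(database: str) -> list[str]:
    # Tool 2: the agent explores schema structure.
    return sorted(CATALOG[database])

def run_select_query(sql: str) -> str:
    # Tool 3: read-only access only — reject anything that is not a SELECT.
    if not sql.lstrip().lower().startswith("select"):
        raise PermissionError("read-only MCP server: only SELECT is allowed")
    return f"executed: {sql}"  # a real server would return result rows

print(list_databases())
print(run_select_query("SELECT count() FROM analytics.events"))
```

The same three-call loop (discover, explore, query) is what lets an agent iterate toward an answer without a human writing SQL in between.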
In a demo, Claude conducted 10 sequential queries to analyze dot-com bubble impacts on tech stocks — exploring schemas, running aggregations, detecting patterns — all within seconds, without human intervention between queries.
Why This Matters for Agents
Christian Jensen (Dragoneer partner and ClickHouse board member) put it directly: “As models become more capable, the bottleneck moves to data infrastructure.”
AI agents generating rapid-fire analytical queries need:
- Near-instant processing — Multiple exploratory queries per prompt
- Consistent performance — High concurrency from numerous agents simultaneously
- Complex analytical capabilities — Aggregations, joins, window functions across massive datasets
- Interactive responsiveness — Chat-based interfaces demand sub-second answers
ClickHouse’s columnar architecture delivers this. Traditional databases do not.
The Synergy Map
WorkingAgents and ClickHouse serve fundamentally different functions in the AI stack. ClickHouse is the analytical engine — it answers questions about data. WorkingAgents is the operational engine — it schedules actions, manages state, controls permissions, and ensures things get done. Together, they create a complete agent-facing infrastructure.
1. ClickHouse as the Analytics Layer for WorkingAgents
WorkingAgents generates operational data that needs analytical querying:
- Alarm history — When did alarms fire? What was the average delay between scheduling and execution? Which alarms failed and why?
- Task metrics — How many tasks completed per user per day? What is the average task lifetime? Where are the bottlenecks?
- Tool usage — Which MCP tools are called most frequently? What is the error rate by tool? Which users generate the most tool calls?
- Access control audit — Who granted which permissions when? How many permission checks fail per day? Which tools are most commonly denied?
WorkingAgents currently stores this data in per-user SQLite databases — excellent for isolation and operational queries, but not designed for cross-user analytics at scale. ClickHouse is the analytical complement: pipe operational events from WorkingAgents into ClickHouse, and suddenly you have millisecond dashboards over the entire platform’s activity.
The integration pattern:
WorkingAgents (operational data) → CDC/batch export → ClickHouse (analytics)
                                                          ├─ Real-time dashboards
                                                          ├─ Usage reports
                                                          ├─ Anomaly detection
                                                          └─ Billing metering
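As a hedged sketch, assuming a hypothetical `workingagents.events` table and event shape, the export step could look like:

```python
# Sketch of a batch export: flatten hypothetical WorkingAgents operational
# events into rows for a ClickHouse table. The table name, schema, and event
# shape are assumptions for illustration only.
import json
from datetime import datetime

events = [
    {"type": "alarm_fired", "user": "alice", "ts": "2026-03-07T07:00:00Z",
     "detail": {"alarm_id": 42, "delay_ms": 130}},
    {"type": "task_completed", "user": "bob", "ts": "2026-03-07T07:01:30Z",
     "detail": {"task_id": 7}},
]

def to_row(event: dict) -> tuple:
    # ClickHouse ingests flat, typed columns; keep free-form detail as JSON.
    ts = datetime.fromisoformat(event["ts"].replace("Z", "+00:00"))
    return (ts, event["type"], event["user"], json.dumps(event["detail"]))

rows = [to_row(e) for e in events]
# With the clickhouse-connect client, this batch would be inserted roughly as:
#   client.insert("workingagents.events",
#                 rows, column_names=["ts", "type", "user", "detail"])
print(len(rows), rows[0][1])
```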
2. WorkingAgents as the Action Layer for ClickHouse Agents
ClickHouse’s agentic data stack answers questions. WorkingAgents takes action on the answers.
An agent queries ClickHouse: “Show me all customers with declining engagement over the last 30 days.” ClickHouse returns the list in milliseconds. Now what?
- WorkingAgents NIS creates follow-up tasks for each at-risk customer
- WorkingAgents alarm schedules check-ins at appropriate intervals
- WorkingAgents pushover notifies the account manager
- WorkingAgents task manager tracks whether follow-ups were completed
- If no follow-up after 3 days, WorkingAgents alarm fires and escalates
ClickHouse tells you what is happening. WorkingAgents decides what to do about it.
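Sketched in Python, with hypothetical helper names (`create_task`, `schedule_alarm`, `notify`) standing in for WorkingAgents' actual MCP tools, the loop above looks like:

```python
# Sketch of the data-to-action loop. The helpers below are hypothetical
# stand-ins for WorkingAgents' real MCP tools, not its actual API.
from datetime import datetime, timedelta

actions: list[str] = []  # stand-in for real side effects

def create_task(title: str) -> int:
    actions.append(f"task:{title}")
    return len(actions)

def schedule_alarm(task_id: int, fire_at: datetime) -> None:
    actions.append(f"alarm:{task_id}@{fire_at:%Y-%m-%d}")

def notify(user: str, message: str) -> None:
    actions.append(f"push:{user}:{message}")

# Pretend this list came back from a ClickHouse query in milliseconds.
at_risk = ["acme-corp", "globex"]

now = datetime(2026, 3, 7)
for customer in at_risk:
    task_id = create_task(f"Check in with {customer}")
    # Escalate if no follow-up within 3 days.
    schedule_alarm(task_id, now + timedelta(days=3))
    notify("account-manager", f"{customer} engagement is declining")

print(len(actions))  # three actions per at-risk customer
```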
This is the missing layer in ClickHouse’s agentic data stack. Their reference architecture has chat (LibreChat), data (ClickHouse), and observability (Langfuse) — but no operational orchestration. No scheduling. No persistent task management. No escalation chains. No access-controlled tool execution. WorkingAgents is the fourth pillar.
3. MCP Server to MCP Server
Both ClickHouse and WorkingAgents expose MCP servers. An AI agent connected to both can:
- Query ClickHouse: “What products had the highest return rate last month?”
- Get the answer in milliseconds
- Call WorkingAgents: “Create a task to review the top 5 products with highest returns”
- WorkingAgents creates the task, assigns it, schedules a follow-up
- Query ClickHouse again: “What were the return reasons for product X?”
- Call WorkingAgents: “Send a push notification to the product manager with this analysis”
Two MCP servers, one agent, seamless data-to-action flow. The agent thinks with ClickHouse and acts with WorkingAgents.
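Wiring this up can be as small as registering both servers in the agent client's MCP configuration. The exact format varies by client; this fragment follows the common `mcpServers` convention, with placeholder URLs:

```json
{
  "mcpServers": {
    "clickhouse": { "url": "https://mcp.clickhouse.example/mcp" },
    "workingagents": { "url": "https://workingagents.example/mcp" }
  }
}
```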
4. Langfuse + WorkingAgents Observability
ClickHouse acquired Langfuse for LLM observability — tracing every LLM call, tracking costs, evaluating quality. WorkingAgents runs LLM-powered agent sessions (ServerChat) that generate exactly the kind of telemetry Langfuse is built to observe.
Integrating Langfuse into WorkingAgents’ chat module would provide:
- Cost tracking — How much does each user’s agent workflow cost in LLM tokens?
- Quality evaluation — Are agent responses improving or degrading over time?
- Prompt versioning — Track which system prompts produce better tool-calling accuracy
- Debugging — When an alarm-triggered agent workflow fails, trace every LLM call in the chain
Langfuse is already used by 19 Fortune 50 and 63 Fortune 500 companies. It runs on ClickHouse. The integration path is clear: WorkingAgents sends LLM traces to Langfuse, Langfuse stores them in ClickHouse, dashboards display operational AI health in real time.
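To make the cost-tracking point concrete, here is a sketch of the telemetry one traced workflow could carry. The field names and per-token prices are illustrative assumptions, not the Langfuse SDK's actual schema:

```python
# Sketch of the telemetry a Langfuse-style trace could capture for one
# alarm-triggered agent run. Field names and prices are illustrative
# assumptions, not the Langfuse SDK's actual schema.
trace = {
    "name": "alarm-followup-workflow",
    "user_id": "alice",
    "observations": [
        {"type": "generation", "model": "claude-sonnet",
         "input_tokens": 1200, "output_tokens": 300},
        {"type": "tool_call", "name": "create_task", "status": "ok"},
        {"type": "generation", "model": "claude-sonnet",
         "input_tokens": 1500, "output_tokens": 120},
    ],
}

# Cost tracking: aggregate token usage across every generation in the trace.
PRICE_PER_1K = {"input": 0.003, "output": 0.015}  # assumed example rates
gens = [o for o in trace["observations"] if o["type"] == "generation"]
cost = sum(o["input_tokens"] / 1000 * PRICE_PER_1K["input"]
           + o["output_tokens"] / 1000 * PRICE_PER_1K["output"]
           for o in gens)
print(round(cost, 4))
```

Aggregated per user and per workflow, this is exactly the data the cost and quality dashboards would be built on.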
5. Real-Time Monitoring Integration
WorkingAgents has a Monitor module that tracks system health. ClickHouse is built for exactly this kind of high-frequency time-series data:
- Health check results — Store every ping, every response time, every status code
- Anomaly detection — ClickHouse’s analytical speed enables real-time anomaly detection over monitoring data
- Historical analysis — “Show me all outages longer than 5 minutes in the last 90 days” returns in milliseconds
- Alert correlation — Cross-reference monitoring events with alarm firings, task completions, and tool usage
WorkingAgents’ Monitor currently stores results in SQLite. For a single-user system, this works. For an enterprise deployment with hundreds of monitored endpoints, ClickHouse’s columnar engine provides the analytical horsepower SQLite cannot.
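The historical-analysis query above is easy to picture. Here is the same logic as a Python sketch over toy per-minute health checks (in production this would be a single ClickHouse SQL query over the monitoring table):

```python
# Find outages longer than 5 minutes from per-minute health-check samples.
# The sample data is illustrative; in ClickHouse this would be one SQL query.
from datetime import datetime, timedelta

# (timestamp, is_up) samples, one per minute.
base = datetime(2026, 3, 1, 12, 0)
samples = [(base + timedelta(minutes=i), i not in range(10, 17))
           for i in range(30)]  # minutes 10-16 are down: a 7-minute outage

outages = []
start = None
for ts, up in samples:
    if not up and start is None:
        start = ts                      # outage begins
    elif up and start is not None:
        outages.append((start, ts - start))  # outage ends; record duration
        start = None

long_outages = [(s, d) for s, d in outages if d > timedelta(minutes=5)]
print(long_outages)
```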
6. Enterprise Customer Overlap
ClickHouse’s customer list overlaps significantly with companies that need operational AI orchestration:
- Decagon — AI agent platform. On ClickHouse for analytics. Needs WorkingAgents for scheduling, escalation, persistence.
- Cursor — AI code editor. On ClickHouse for observability. WorkingAgents exposes an MCP server that Cursor can connect to directly.
- Lovable — AI app builder. Uses ClickHouse for observability and debugging. Agent workflows need operational orchestration.
- LangChain — Agent framework. Uses ClickHouse for analytics. LangChain agents need WorkingAgents’ operational layer.
- Weights & Biases — ML observability. On ClickHouse. Their customers building agents need orchestration.
- Anthropic — LLM provider. On ClickHouse. WorkingAgents already integrates Anthropic as a primary provider.
Every ClickHouse customer building AI agents is a potential WorkingAgents customer. The pitch: “You have the analytics. Here is the orchestration.”
The Agentic Data Stack — Extended
ClickHouse’s reference architecture with WorkingAgents as the fourth layer:
┌───────────────────────────────────────────────────┐
│ Chat Layer (LibreChat / Custom UI)                │
│   Natural language → agent reasoning              │
├───────────────────────────────────────────────────┤
│ Data Layer (ClickHouse + MCP Server)              │
│   Analytical queries at sub-second speed          │
├───────────────────────────────────────────────────┤
│ Action Layer (WorkingAgents + MCP Server) ◄── NEW │
│   Scheduling, tasks, CRM, notifications,          │
│   access control, persistent state                │
├───────────────────────────────────────────────────┤
│ Observability (Langfuse on ClickHouse)            │
│   LLM tracing, cost tracking, quality eval        │
└───────────────────────────────────────────────────┘
The data layer tells the agent what is true. The action layer tells the agent what to do. The observability layer tells you whether the agent did it well. The chat layer is the human interface. All four connected via MCP.
The Partnership Path
Phase 1: ClickHouse as Analytics Backend
Pipe WorkingAgents operational events (alarm firings, task completions, tool calls, permission checks) into ClickHouse. Build real-time dashboards showing platform health, usage patterns, and cost metrics. This gives WorkingAgents enterprise clients the analytical visibility they expect.
Phase 2: Dual MCP Integration
Document and publish the pattern: one agent, two MCP servers — ClickHouse for data, WorkingAgents for actions. Build a reference demo showing a complete data-to-action workflow. This would make a compelling partnership demo at any AI conference.
Phase 3: Langfuse Integration
Add Langfuse tracing to WorkingAgents’ ServerChat module. Every LLM call, every tool invocation, every token — traced and stored in ClickHouse. Enterprise clients get full AI observability without additional infrastructure.
Phase 4: Joint Reference Architecture
Extend ClickHouse’s published “Agentic Data Stack” to include WorkingAgents as the action layer. Co-publish the architecture with deployment guides. Position the combined stack as the complete open infrastructure for enterprise AI agents.
The Numbers
| ClickHouse | Value |
|---|---|
| Valuation | $15B |
| Series D | $400M (Jan 2026) |
| Total funding | ~$1B+ |
| Cloud ARR growth | 250%+ YoY |
| Cloud customers | 3,000+ |
| GitHub stars | 46,200+ |
| Contributors | 2,800+ |
| Langfuse SDK installs | 26M+/month |
| Fortune 500 on Langfuse | 63 |
| Key investors | Dragoneer, Bessemer, GIC, Index, Khosla, Lightspeed, T. Rowe Price |
| Open source | Yes (Apache 2.0) |
| Query speed | Milliseconds at petabyte scale |
The Bottom Line
ClickHouse is where data goes to be understood fast. WorkingAgents is where decisions go to be executed reliably. ClickHouse answers “what is happening” in milliseconds across billions of rows. WorkingAgents answers “what should we do about it” with scheduled actions, persistent state, and crash-recoverable workflows.
The agentic data stack needs both. An agent that can query a database but cannot schedule a follow-up is half a solution. An agent that can schedule actions but cannot analyze data is the other half. Together — ClickHouse for the analytical brain, WorkingAgents for the operational hands — you get a complete autonomous system.
ClickHouse already has the MCP server. WorkingAgents already has the MCP server. The integration amounts to two entries in an agent’s MCP configuration. The technical barrier is near zero. The business case — analytics plus orchestration for every enterprise AI deployment — is the entire market.