# ServerChat.Provider

Behaviour module defining the contract that all LLM provider integrations must implement for the ServerChat system.

## Overview

`ServerChat.Provider` is an Elixir behaviour that establishes a three-callback interface for integrating LLM APIs into the `ServerChat` GenServer. Any module that implements this behaviour can be used as a drop-in provider, enabling `ServerChat` to route chat messages to different LLM backends without changing its core logic.

The behaviour decouples `ServerChat` from any specific API format: each provider handles its own tool formatting, HTTP transport, and response parsing, including recursive tool-calling loops.

## Implementing Providers

| Provider Key | Module | API Endpoint | Behaviour |
|--------------|--------|--------------|-----------|
| `:anthropic` | `ServerChat.Anthropic` | Anthropic Messages `/v1/messages` | `@behaviour ServerChat.Provider` |
| `:openrouter` | `ServerChat.OpenRouter` | OpenAI-compatible `/v1/chat/completions` | `@behaviour ServerChat.Provider` |
| `:perplexity` | `ServerChat.Perplexity` | Perplexity Responses `/v1/responses` | `@behaviour ServerChat.Provider` |

## Callbacks

### format_tools/1

Converts MCP tool definitions into the wire format expected by the provider's API.

```elixir
@callback format_tools(tools :: list()) :: list()
```

**Parameters:**

- `tools` — list of MCP tool maps, each containing `:name`, `:description`, and `:input_schema`

**Returns:** a list of tool definitions in the provider's required format.
**Example implementations:**

```elixir
# Anthropic — MCP format is native, passthrough
def format_tools(tools), do: tools

# OpenRouter — OpenAI function-calling format
def format_tools(tools) do
  Enum.map(tools, fn tool ->
    %{
      type: "function",
      function: %{
        name: tool[:name],
        description: tool[:description],
        parameters: tool[:input_schema]
      }
    }
  end)
end

# Perplexity — flat function format
def format_tools(tools) do
  Enum.map(tools, fn tool ->
    %{
      type: "function",
      name: tool[:name],
      description: tool[:description],
      parameters: tool[:input_schema]
    }
  end)
end
```

### call_llm/4

Makes the HTTP request to the provider's API endpoint.

```elixir
@callback call_llm(
            api_key :: String.t(),
            tools :: list(),
            messages :: list(),
            model :: String.t()
          ) :: {:ok, map()} | {:error, any()}
```

**Parameters:**

| Parameter | Type | Description |
|-----------|------|-------------|
| `api_key` | `String.t()` | Provider API key from the environment |
| `tools` | `list()` | Pre-formatted tools (output of `format_tools/1`) |
| `messages` | `list()` | Conversation history as a list of role/content maps |
| `model` | `String.t()` | Model identifier (e.g. `"claude-sonnet-4-5-20250929"`, `"sonar"`) |

**Returns:**

- `{:ok, response_body}` — parsed JSON response from the API
- `{:error, reason}` — HTTP or parsing failure

### handle_response/5

Processes the API response. If the LLM requested tool calls, the provider executes them via `ServerChat.call_mcp_tool/3` and calls `call_llm/4` again with the results, repeating until the LLM produces a final text answer.
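As an illustration before the formal signature, the recursive loop just described might be implemented as follows. This is a hedged sketch, not code from the actual providers: it assumes an OpenAI-style response shape (`"choices"`, `"finish_reason"`), and `run_tool/2` and `call_llm/4` are local stubs standing in for real HTTP transport and `ServerChat.call_mcp_tool/3`.

```elixir
defmodule ExampleProvider do
  # Hypothetical sketch of the recursive tool-call loop.
  # Response keys follow an OpenAI-style API; all helper names are assumptions.

  def handle_response(api_key, tools, user_id, response, messages) do
    [choice | _] = response["choices"]
    message = choice["message"]

    case choice["finish_reason"] do
      "stop" ->
        # Final answer: return the text plus the accumulated history.
        {:ok, message["content"], messages ++ [message]}

      "tool_calls" ->
        # Execute each requested tool, append the results, and loop.
        results =
          Enum.map(message["tool_calls"], fn call ->
            %{"role" => "tool", "tool_call_id" => call["id"],
              "content" => run_tool(user_id, call)}
          end)

        new_messages = messages ++ [message | results]

        with {:ok, next} <- call_llm(api_key, tools, new_messages, "stub-model") do
          handle_response(api_key, tools, user_id, next, new_messages)
        end

      other ->
        {:error, {:unexpected_finish_reason, other}}
    end
  end

  # Stub: a real provider would call ServerChat.call_mcp_tool/3 here.
  defp run_tool(_user_id, call), do: "result of #{call["name"]}"

  # Stub: a real provider would make an HTTP request here. This one always
  # returns a final "stop" response so the sketch terminates after one round.
  defp call_llm(_api_key, _tools, _messages, _model) do
    {:ok,
     %{"choices" => [%{"finish_reason" => "stop",
                       "message" => %{"role" => "assistant", "content" => "done"}}]}}
  end
end
```

Note the design point this makes concrete: the recursion threads `api_key` and `tools` through every round, which is why `handle_response/5` receives them even though it never talks to the network directly.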
```elixir
@callback handle_response(
            api_key :: String.t(),
            tools :: list(),
            user_id :: integer(),
            response :: map(),
            messages :: list()
          ) :: {:ok, String.t(), list()} | {:error, any()}
```

**Parameters:**

| Parameter | Type | Description |
|-----------|------|-------------|
| `api_key` | `String.t()` | Provider API key (needed for recursive calls) |
| `tools` | `list()` | Formatted tools (passed through to recursive `call_llm/4`) |
| `user_id` | `integer()` | User ID for MCP tool execution permissions |
| `response` | `map()` | Raw parsed response body from `call_llm/4` |
| `messages` | `list()` | Current conversation history |

**Returns:**

- `{:ok, reply_text, updated_messages}` — final LLM text and the full message history, including any tool call/result rounds
- `{:error, reason}` — if the response cannot be processed

## Call Flow

```
ServerChat.chat/2
        |
        v
provider.call_llm(api_key, tools, messages, model)
        |
        v
provider.handle_response(api_key, tools, user_id, response, messages)
        |
        +---> finish_reason: "stop" ---> return {:ok, text, messages}
        |
        +---> finish_reason: "tool_calls"
                  |
                  v
              Execute tools via ServerChat.call_mcp_tool/3
                  |
                  v
              Append tool results to messages
                  |
                  v
              provider.call_llm/4 again (recursive)
                  |
                  v
              provider.handle_response/5 again (recursive)
```

## Writing a New Provider

To add a new LLM provider:

1. Create a module under `lib/server_chat/` that declares `@behaviour ServerChat.Provider`.
2. Implement all three callbacks: `format_tools/1`, `call_llm/4`, `handle_response/5`.
3. Add the provider to the `@providers` map in `ServerChat`:

   ```elixir
   @providers %{
     anthropic: ServerChat.Anthropic,
     openrouter: ServerChat.OpenRouter,
     perplexity: ServerChat.Perplexity,
     my_provider: ServerChat.MyProvider  # add here
   }
   ```

4. Add API key lookup in `ServerChat.api_key/1` and model defaults in `ServerChat.default_model/1`.
5.
Set environment variables: `MY_PROVIDER_API_KEY`, `MY_PROVIDER_MODEL`.

### Minimal Implementation

```elixir
defmodule ServerChat.MyProvider do
  @behaviour ServerChat.Provider

  @impl true
  def format_tools(tools) do
    # Convert MCP tools to your API's format
    Enum.map(tools, fn tool -> ... end)
  end

  @impl true
  def call_llm(api_key, tools, messages, model) do
    # POST to your API, return {:ok, body} or {:error, reason}
  end

  @impl true
  def handle_response(api_key, tools, user_id, response, messages) do
    # Parse response, execute tool calls if needed, return {:ok, text, messages}
  end
end
```

## Dependencies

| Module | Purpose |
|--------|---------|
| `ServerChat` | GenServer that dispatches to provider callbacks |
| `ServerChat.Anthropic` | Anthropic Claude implementation |
| `ServerChat.OpenRouter` | OpenRouter (OpenAI-compatible) implementation |
| `ServerChat.Perplexity` | Perplexity Responses API implementation |
| `MCPServer.Manager` | MCP tool execution (called by providers via `ServerChat.call_mcp_tool/3`) |
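To close the loop, here is a self-contained sketch of how `ServerChat` might dispatch through all three callbacks. `DispatchSketch`, `StubProvider`, and the `chat/7` argument order are hypothetical illustrations; only the three callback names and their contracts come from the behaviour.

```elixir
defmodule DispatchSketch do
  # Hypothetical, simplified version of the dispatch ServerChat performs:
  # look up the provider module, format tools once, call the API, then
  # hand the response off for (possibly recursive) processing.
  def chat(providers, provider_key, api_key, tools, messages, model, user_id) do
    with {:ok, mod} <- Map.fetch(providers, provider_key),
         formatted = mod.format_tools(tools),
         {:ok, response} <- mod.call_llm(api_key, formatted, messages, model) do
      mod.handle_response(api_key, formatted, user_id, response, messages)
    else
      :error -> {:error, {:unknown_provider, provider_key}}
      {:error, reason} -> {:error, reason}
    end
  end
end

defmodule StubProvider do
  # Trivial provider used only to exercise the dispatch above.
  def format_tools(tools), do: tools

  def call_llm(_api_key, _tools, _messages, _model), do: {:ok, %{"text" => "pong"}}

  def handle_response(_api_key, _tools, _user_id, response, messages),
    do: {:ok, response["text"], messages}
end
```

Example use: `DispatchSketch.chat(%{stub: StubProvider}, :stub, "key", [], [], "model", 1)` returns `{:ok, "pong", []}`, while an unknown key returns `{:error, {:unknown_provider, key}}`. Because the dispatcher only touches the behaviour's callbacks, swapping providers is a one-line change to the provider map.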