# Compatible Providers

Many LLM providers expose OpenAI-compatible `/v1/chat/completions` endpoints. llmock works with all of them out of the box: just point the SDK's base URL at your llmock instance.

## Supported Providers

| Provider | Base URL Path | Notes |
| --- | --- | --- |
| Mistral | `/v1/chat/completions` | Standard OpenAI-compatible endpoint |
| Groq | `/openai/v1/chat/completions` | Uses the `/openai/` prefix; llmock strips it automatically |
| Ollama | `/v1/chat/completions` | Standard OpenAI-compatible endpoint |
| Together AI | `/v1/chat/completions` | Standard OpenAI-compatible endpoint |
| vLLM | `/v1/chat/completions` | Standard OpenAI-compatible endpoint |
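The table above boils down to one rule per provider: same origin, different base path. A small sketch of that mapping (the helper name, the lookup structure, and the port are illustrative, taken from this page's examples, and are not part of llmock itself):

```typescript
// Illustrative helper: build the base URL to hand each provider's SDK
// when pointing it at a local llmock instance.
const LLMOCK_ORIGIN = "http://localhost:5555"; // port used throughout this page's examples

const BASE_PATHS = {
  mistral: "/v1",
  groq: "/openai/v1", // note the /openai prefix; llmock strips it automatically
  ollama: "/v1",
  together: "/v1",
  vllm: "/v1",
} as const;

function llmockBaseURL(provider: keyof typeof BASE_PATHS): string {
  return LLMOCK_ORIGIN + BASE_PATHS[provider];
}
```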

## How It Works

### Mistral Configuration

Mistral's SDK uses the standard OpenAI-compatible endpoint. Point `MISTRAL_API_ENDPOINT` at llmock:

**Environment variables**

```bash
export MISTRAL_API_ENDPOINT="http://localhost:5555/v1"
export MISTRAL_API_KEY="mock-key"
```

**Programmatic setup**

```ts
import { Mistral } from "@mistralai/mistralai";

const client = new Mistral({
  apiKey: "mock-key",
  serverURL: "http://localhost:5555/v1",
});
```

### Groq Configuration

Groq's SDK sends requests to `/openai/v1/chat/completions` (note the `/openai` prefix). llmock handles this automatically.

**Environment variables**

```bash
export GROQ_BASE_URL="http://localhost:5555/openai/v1"
export GROQ_API_KEY="mock-key"
```

**Programmatic setup**

```ts
import Groq from "groq-sdk";

const client = new Groq({
  apiKey: "mock-key",
  baseURL: "http://localhost:5555/openai/v1",
});
```

### Ollama Configuration

Ollama exposes an OpenAI-compatible endpoint locally. Point the OpenAI SDK at llmock instead:

**Environment variables**

```bash
export OPENAI_BASE_URL="http://localhost:5555/v1"
export OPENAI_API_KEY="mock-key"
```

**Programmatic setup**

```ts
import OpenAI from "openai";

// Same SDK you'd use with Ollama, just a different base URL
const client = new OpenAI({
  apiKey: "mock-key",
  baseURL: "http://localhost:5555/v1",
});
```

### Together AI Configuration

**Environment variables**

```bash
export TOGETHER_BASE_URL="http://localhost:5555/v1"
export TOGETHER_API_KEY="mock-key"
```

### vLLM Configuration

**Environment variables**

```bash
# vLLM clients use the OpenAI SDK; just change the base URL
export OPENAI_BASE_URL="http://localhost:5555/v1"
export OPENAI_API_KEY="mock-key"
```

## Example Fixture

The same fixture works for all compatible providers. Model names are passed through, so match on whatever model name your code sends:

**fixtures/compat.json**

```json
{
  "fixtures": [
    {
      "match": {
        "model": "mistral-large-latest",
        "userMessage": "hello"
      },
      "response": {
        "content": "Bonjour! How can I help?"
      }
    },
    {
      "match": {
        "model": "llama-3.3-70b-versatile",
        "userMessage": "hello"
      },
      "response": {
        "content": "Hey there! What can I do for you?"
      }
    },
    {
      "match": { "userMessage": "hello" },
      "response": {
        "content": "Hi! I'm a catch-all response."
      }
    }
  ]
}
```
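Placing the catch-all fixture last suggests first-match-wins selection, with omitted match fields acting as wildcards. A minimal sketch of matching under that assumption (illustrative only; llmock's actual matcher may support additional keys or different semantics):

```typescript
// Sketch of first-match-wins fixture selection. A fixture matches when every
// field it specifies equals the request's value; omitted fields are wildcards,
// so a fixture with only userMessage acts as a catch-all.
interface Fixture {
  match: { model?: string; userMessage?: string };
  response: { content: string };
}

function selectFixture(
  fixtures: Fixture[],
  req: { model: string; userMessage: string },
): Fixture | undefined {
  return fixtures.find(
    (f) =>
      (f.match.model === undefined || f.match.model === req.model) &&
      (f.match.userMessage === undefined || f.match.userMessage === req.userMessage),
  );
}
```

Under these semantics, a `mistral-large-latest` request saying "hello" gets the French greeting, while any other model saying "hello" falls through to the catch-all.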

The `/openai/v1/*` prefix alias also works for `/openai/v1/embeddings` and `/openai/v1/models`: any `/openai/`-prefixed path is transparently routed to the corresponding `/v1/` endpoint.
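Conceptually, that alias is a path rewrite applied before normal routing. A simplified sketch (an illustration of the rule described above, not llmock's actual routing code):

```typescript
// Strip the /openai alias prefix so /openai/v1/... resolves to the same
// handler as /v1/...; paths without the prefix pass through untouched.
function normalizePath(path: string): string {
  const prefix = "/openai";
  return path.startsWith(prefix + "/v1") ? path.slice(prefix.length) : path;
}
```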