# Fixtures

Fixtures define what the mock server returns. Each fixture pairs match criteria with a response. Load them from JSON files, register them programmatically, or mix both approaches.
## File Format

**fixtures/example.json**

```json
{
  "fixtures": [
    {
      "match": {
        "userMessage": "hello",
        "model": "gpt-4"
      },
      "response": {
        "content": "Hello!"
      },
      "latency": 200,
      "chunkSize": 10
    }
  ]
}
```
## Match Fields

| Field | Type | Description |
|---|---|---|
| `userMessage` | string \| RegExp | Substring or regex match on the last user message |
| `inputText` | string \| RegExp | Match on embedding input text |
| `toolCallId` | string | Match on `tool_call_id` in the last message |
| `toolName` | string | Match on tool function name |
| `model` | string \| RegExp | Match on the requested model name |
| `responseFormat` | string | Match on `response_format.type` (e.g. `"json_object"`) |
| `sequenceIndex` | number | Match on the Nth occurrence of this pattern |
| `predicate` | function | Custom function: `(req) => boolean` (programmatic only) |
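For example, `sequenceIndex` lets the same pattern resolve to different responses across successive requests. A sketch of a fixture file using it (assuming zero-based indexing; check the library's actual semantics):

```json
{
  "fixtures": [
    {
      "match": { "userMessage": "hello", "sequenceIndex": 0 },
      "response": { "content": "First hello!" }
    },
    {
      "match": { "userMessage": "hello", "sequenceIndex": 1 },
      "response": { "content": "Hello again!" }
    }
  ]
}
```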
## Response Types

| Type | Fields | Description |
|---|---|---|
| Text | `content`, `role?`, `finishReason?` | Plain text response |
| Tool Call | `toolCalls[]`, `finishReason?` | Function call(s) with name + arguments |
| Error | `error.message`, `error.type?`, `status?` | Error response with HTTP status |
| Embedding | `embedding[]` | Vector of numbers |
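As a sketch, the non-text response types might look like this in a fixture file (shapes inferred from the field names above, so verify against the library's actual schema):

```json
{
  "fixtures": [
    {
      "match": { "userMessage": "weather" },
      "response": {
        "toolCalls": [
          { "name": "get_weather", "arguments": { "city": "Oslo" } }
        ],
        "finishReason": "tool_calls"
      }
    },
    {
      "match": { "userMessage": "fail" },
      "response": {
        "error": { "message": "Rate limit exceeded", "type": "rate_limit_error" },
        "status": 429
      }
    },
    {
      "match": { "inputText": "my text" },
      "response": { "embedding": [0.1, 0.2, 0.3] }
    }
  ]
}
```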
## Fixture Options

| Field | Type | Description |
|---|---|---|
| `latency` | number | Delay in milliseconds before the first chunk |
| `chunkSize` | number | Characters per SSE chunk (streaming) |
| `truncateAfterChunks` | number | Abort the stream after N chunks (error injection) |
| `disconnectAfterMs` | number | Disconnect after N ms (error injection) |
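For instance, combining the streaming and error-injection options can simulate a stream that dies mid-response. A sketch (exact abort behavior is up to the library):

```json
{
  "fixtures": [
    {
      "match": { "userMessage": "flaky" },
      "response": { "content": "This stream will be cut off before it completes." },
      "latency": 50,
      "chunkSize": 5,
      "truncateAfterChunks": 3
    }
  ]
}
```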
## Loading Fixtures

### From a file

**load-file.ts**

```ts
const mock = new LLMock();
mock.loadFixtureFile("./fixtures/chat.json");
mock.loadFixtureFile("./fixtures/tools.json");
```
### From a directory

**load-dir.ts**

```ts
// Loads all .json files in the directory (non-recursive)
mock.loadFixtureDir("./fixtures");
```
### Programmatically

**programmatic.ts**

```ts
// Shorthand methods
mock.onMessage("hello", { content: "Hi!" });
mock.onToolCall("get_weather", { content: "72F" });
mock.onEmbedding("my text", { embedding: [0.1, 0.2] });
mock.onJsonOutput("data", { key: "value" });
mock.onToolResult("call_123", { content: "Done" });

// Full fixture object
mock.addFixture({
  match: { userMessage: "hello", model: "gpt-4" },
  response: { content: "Hi!" },
  latency: 100,
  chunkSize: 5,
});

// Predicate-based routing
mock.on(
  { predicate: (req) => req.messages.at(-1)?.role === "tool" },
  { content: "Done!" }
);
```
## Routing Rules

- **First match wins** — fixtures are checked in registration order
- **All match fields must pass** — multiple match fields are AND-ed
- **Substring matching** — `userMessage: "hello"` matches `"say hello world"`
- **Cross-provider** — the same fixtures work for OpenAI, Claude, and Gemini requests
JSON files cannot use `predicate` (functions can't be serialized). Use programmatic registration for predicate-based routing.
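The routing rules above can be sketched as a small matcher. This is illustrative only, not the library's implementation; `Fixture`, `fieldMatches`, and `findFixture` are hypothetical names, and only the `userMessage` and `model` fields are modeled:

```ts
type Match = { userMessage?: string | RegExp; model?: string | RegExp };
type Fixture = { match: Match; response: { content: string } };

// Check one field: regexes test, strings do substring matching.
// An unset field always passes, so fields are effectively AND-ed.
function fieldMatches(pattern: string | RegExp | undefined, value: string): boolean {
  if (pattern === undefined) return true;
  return pattern instanceof RegExp ? pattern.test(value) : value.includes(pattern);
}

// First match wins: fixtures are scanned in registration order.
function findFixture(
  fixtures: Fixture[],
  req: { userMessage: string; model: string }
): Fixture | undefined {
  return fixtures.find(
    (f) =>
      fieldMatches(f.match.userMessage, req.userMessage) &&
      fieldMatches(f.match.model, req.model)
  );
}

const fixtures: Fixture[] = [
  { match: { userMessage: "hello", model: "gpt-4" }, response: { content: "Hi, GPT-4!" } },
  { match: { userMessage: "hello" }, response: { content: "Hi!" } },
];

// Substring match: "say hello world" contains "hello", and the model
// also matches, so the first (more specific) fixture wins.
findFixture(fixtures, { userMessage: "say hello world", model: "gpt-4" });
```

Registering the more specific fixture first matters: because the first match wins, a broad `userMessage: "hello"` fixture placed ahead of it would shadow the `gpt-4` variant.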