# Embeddings
The POST /v1/embeddings endpoint returns OpenAI-compatible embedding vectors.
You can provide explicit vectors in fixtures or let llmock generate deterministic
embeddings automatically from the input text.
## Endpoint
| Method | Path | Format |
|---|---|---|
| POST | /v1/embeddings | JSON |
## How It Works

- If a fixture matches with an `embedding` response, that exact vector is returned
- If no fixture matches, a deterministic embedding is auto-generated from the input text using a hash-based algorithm
- Auto-generated embeddings are deterministic: the same input always produces the same output
- The default dimension is 1536 (matching `text-embedding-3-small`), configurable via the `dimensions` request parameter
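To make the auto-generation behavior concrete, here is a minimal sketch of *a* hash-based deterministic embedding generator. This is not llmock's actual algorithm (which is not documented here); it only illustrates the properties the list above describes: determinism, a configurable dimension, and values bounded in [-1, 1].

```ts
import { createHash } from "node:crypto";

// Hypothetical sketch, NOT llmock's real implementation: hash the input
// with an incrementing counter to produce a repeatable byte stream, then
// map each byte into [-1, 1].
function sketchDeterministicEmbedding(input: string, dimensions = 1536): number[] {
  const vector: number[] = [];
  let counter = 0;
  while (vector.length < dimensions) {
    // Each SHA-256 digest yields 32 bytes; loop until we have enough values.
    const digest = createHash("sha256").update(`${input}:${counter++}`).digest();
    for (const byte of digest) {
      if (vector.length === dimensions) break;
      // Map a byte (0..255) linearly into [-1, 1].
      vector.push((byte / 255) * 2 - 1);
    }
  }
  return vector;
}
```

Because the output depends only on the input string (and the dimension), repeated calls in different test runs yield identical vectors.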
## Unit Test: Fixture-based Embedding

```ts title="embedding-fixture.test.ts"
import { LLMock } from "@copilotkit/llmock";

const mock = new LLMock();
await mock.start();

// Register a fixture with an explicit embedding vector
mock.onEmbedding("embed-this", { embedding: [0.1, -0.2, 0.3, 0.4, -0.5] });

const res = await fetch(`${mock.url}/v1/embeddings`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "text-embedding-3-small",
    input: "embed-this",
  }),
});

const body = await res.json();
expect(body.object).toBe("list");
expect(body.data[0].embedding).toEqual([0.1, -0.2, 0.3, 0.4, -0.5]);
expect(body.data[0].index).toBe(0);
```
## Unit Test: Auto-generated Embedding

```ts title="embedding-auto.test.ts"
import { generateDeterministicEmbedding } from "@copilotkit/llmock/helpers";

// Deterministic: the same input always produces the same output
const a = generateDeterministicEmbedding("hello world");
const b = generateDeterministicEmbedding("hello world");
expect(a).toEqual(b);

// Default dimension is 1536
expect(a).toHaveLength(1536);

// Custom dimension
const c = generateDeterministicEmbedding("hello", 768);
expect(c).toHaveLength(768);

// All values are between -1 and 1
for (const val of a) {
  expect(val).toBeGreaterThanOrEqual(-1);
  expect(val).toBeLessThanOrEqual(1);
}
```
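Because auto-generated embeddings are stable across runs, similarity-based assertions are reproducible too. The helper below is not part of llmock; it is an ordinary cosine-similarity function you can apply to any embedding vectors, including ones returned by the mock.

```ts
// Cosine similarity between two equal-length vectors.
// Not a llmock API; shown only as a testing aid for embedding vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error("dimension mismatch");
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

In a test, `cosineSimilarity(a, b)` for two deterministic embeddings of the same input is exactly 1, so threshold assertions never flake.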
## JSON Fixture

```json title="fixtures/embeddings.json"
{
  "fixtures": [
    {
      "match": { "inputText": "embed-this" },
      "response": {
        "embedding": [0.1, -0.2, 0.3, 0.4, -0.5]
      }
    }
  ]
}
```
## Response Format

Matches the OpenAI /v1/embeddings response format:

```json title="Response shape"
{
  "object": "list",
  "model": "text-embedding-3-small",
  "data": [
    {
      "object": "embedding",
      "index": 0,
      "embedding": [0.1, -0.2, 0.3, ...]
    }
  ],
  "usage": { "prompt_tokens": 0, "total_tokens": 0 }
}
```
Embedding fixtures use `match.inputText` instead of `match.userMessage`. The `inputText` matcher checks the embedding input string (or each string in an input array).
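The array behavior can be pictured with a small sketch. The function name and the exact semantics here are assumptions (this sketch treats a fixture as matching when *any* element of the array equals the pattern; llmock's real matcher may differ, e.g. by matching each element individually):

```ts
// Hypothetical illustration of inputText matching for string and
// string[] inputs. NOT llmock's actual matcher.
function matchesInputText(input: string | string[], pattern: string): boolean {
  const inputs = Array.isArray(input) ? input : [input];
  // Assumed semantics: any matching element triggers the fixture.
  return inputs.some((s) => s === pattern);
}
```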