# Azure OpenAI
llmock routes Azure OpenAI deployment-based URLs to the existing chat completions and embeddings handlers. Point the Azure OpenAI SDK at your llmock instance and fixtures work exactly as they do with the standard OpenAI endpoints.
## How It Works
Azure OpenAI uses a different URL pattern than standard OpenAI. Instead of `/v1/chat/completions`, Azure uses `/openai/deployments/{deployment-id}/chat/completions` with an `api-version` query parameter.
llmock detects these Azure-style URLs and rewrites them to the standard paths before routing to the existing handlers. The deployment ID is extracted and used as a fallback model name when the request body omits the `model` field (which Azure requests commonly do, since the model is implied by the deployment).
## URL Pattern Mapping
| Azure URL | Mapped To |
|---|---|
| `/openai/deployments/{id}/chat/completions` | `/v1/chat/completions` |
| `/openai/deployments/{id}/embeddings` | `/v1/embeddings` |
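Conceptually, the rewrite step can be sketched as a small path-mapping function. This is an illustration only, not llmock's actual implementation; the function name and regex are assumptions:

```typescript
// Hypothetical sketch of Azure-to-standard URL rewriting.
// Extracts the deployment ID from an Azure-style path and maps the
// path to the equivalent standard OpenAI path; non-Azure paths pass
// through unchanged.
function rewriteAzurePath(path: string): { path: string; deployment?: string } {
  const match = path.match(
    /^\/openai\/deployments\/([^/]+)\/(chat\/completions|embeddings)$/
  );
  if (!match) return { path }; // not an Azure-style URL; leave untouched
  const [, deployment, suffix] = match;
  return { path: `/v1/${suffix}`, deployment };
}
```

For example, `rewriteAzurePath("/openai/deployments/my-gpt4-deployment/chat/completions")` yields `{ path: "/v1/chat/completions", deployment: "my-gpt4-deployment" }`.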
## Model Resolution
When a request arrives via an Azure deployment URL, llmock resolves the model name using these rules:
- If the request body includes a `model` field, that value is used (the body takes precedence).
- If the body omits `model`, the deployment ID from the URL is used as the model name for fixture matching.
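The two rules above amount to a one-line fallback, sketched here with a hypothetical helper name:

```typescript
// Hypothetical sketch of llmock's model resolution for Azure requests:
// the request body's `model` wins; otherwise fall back to the
// deployment ID taken from the URL.
function resolveModel(body: { model?: string }, deploymentId: string): string {
  return body.model ?? deploymentId;
}
```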
This means you can write fixtures that match by deployment name:
```json
{
  "match": {
    "model": "my-gpt4-deployment",
    "userMessage": "hello"
  },
  "response": {
    "content": "Hello from Azure!"
  }
}
```
## Authentication
llmock does not validate credentials. It accepts both Azure-style and standard auth headers without rejecting the request:
- `api-key: your-azure-key` (Azure-native header)
- `Authorization: Bearer your-token` (standard OAuth/OpenAI header)
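For illustration, a tiny helper that builds either header style. The helper itself is hypothetical; the header names are the two listed above:

```typescript
// Hypothetical helper: build the auth headers for either style.
// llmock accepts both; the credential value itself is never checked.
function authHeaders(
  style: "azure" | "bearer",
  credential: string
): Record<string, string> {
  return style === "azure"
    ? { "api-key": credential }          // Azure-native header
    : { Authorization: `Bearer ${credential}` }; // standard bearer token
}
```

Either result can be passed as the `headers` of a request to llmock.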
## SDK Configuration
To point the Azure OpenAI Node.js SDK at llmock, set the endpoint to your llmock URL:
```typescript
import { AzureOpenAI } from "openai";

const client = new AzureOpenAI({
  endpoint: "http://localhost:4005", // llmock URL
  apiKey: "mock-key",
  apiVersion: "2024-10-21",
  deployment: "my-gpt4-deployment",
});

const response = await client.chat.completions.create({
  model: "my-gpt4-deployment",
  messages: [{ role: "user", content: "hello" }],
});
```
## Environment Variables
When using the Azure OpenAI SDK, you can configure the endpoint via environment variables:
```bash
# Point Azure SDK at llmock
AZURE_OPENAI_ENDPOINT=http://localhost:4005
AZURE_OPENAI_API_KEY=mock-key
```
The `api-version` query parameter is accepted but ignored: llmock responds identically regardless of which API version is requested, so you can test against any API version without changing fixtures.