# AWS Bedrock
llmock supports the AWS Bedrock Claude invoke endpoint. Point the AWS SDK at your llmock instance; fixtures are matched against the Bedrock-format requests, and responses are returned in the Anthropic Messages API format, which is the same format Bedrock uses for Claude models.
**Phase 1:** Non-streaming invoke only. Streaming via `invoke-with-response-stream` is planned for a future release.
## How It Works
AWS Bedrock uses the URL pattern `/model/{modelId}/invoke` to call foundation models. The request body uses the Anthropic Messages format with an additional `anthropic_version` field, and does not include a `model` field in the body (the model is in the URL).
llmock detects the Bedrock URL pattern, extracts the model ID, translates the request to the internal fixture-matching format, and returns the response in the Anthropic Messages API format — which is identical to the Bedrock Claude response format.
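The detection step can be sketched as a small path parser. This is illustrative only, not llmock's actual implementation; note that the AWS SDK typically percent-encodes the `:` in the model ID when building the URL path, so the segment is decoded first (an assumption worth verifying against your SDK version).

```typescript
// Hypothetical helper, not llmock's real internals: recognize the two
// Bedrock invoke URL shapes and pull out the model ID.
function parseBedrockPath(
  path: string,
): { modelId: string; streaming: boolean } | null {
  // Model IDs contain dots, dashes, and a ":version" suffix; the SDK may
  // percent-encode the colon, so decode the captured segment.
  const match = path.match(
    /^\/model\/([^/]+)\/(invoke|invoke-with-response-stream)$/,
  );
  if (!match) return null;
  return {
    modelId: decodeURIComponent(match[1]),
    streaming: match[2] === "invoke-with-response-stream",
  };
}
```

For example, `parseBedrockPath("/model/anthropic.claude-3-haiku-20240307-v1%3A0/invoke")` yields the decoded model ID `anthropic.claude-3-haiku-20240307-v1:0` with `streaming: false`, while non-Bedrock paths return `null`.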
## URL Pattern
| Bedrock URL | Description |
|---|---|
| `POST /model/{modelId}/invoke` | Non-streaming invoke (supported) |
| `POST /model/{modelId}/invoke-with-response-stream` | Streaming invoke (planned) |
## Request Format
Bedrock Claude requests use the Anthropic Messages format. The `anthropic_version` field is accepted but not validated. The model is taken from the URL path, not the request body.
```json
{
  "anthropic_version": "bedrock-2023-05-31",
  "max_tokens": 512,
  "messages": [
    { "role": "user", "content": "Hello" }
  ],
  "system": "You are helpful"
}
```
## Response Format
Bedrock Claude responses are identical to the Anthropic Messages API non-streaming responses:
```json
{
  "id": "msg_...",
  "type": "message",
  "role": "assistant",
  "content": [{ "type": "text", "text": "Hello!" }],
  "stop_reason": "end_turn",
  "stop_sequence": null,
  "usage": { "input_tokens": 10, "output_tokens": 5 }
}
```
## Model Resolution
The model ID is extracted from the URL path and is used both for fixture matching and in the response body. Bedrock model IDs typically look like:
- `anthropic.claude-3-5-sonnet-20241022-v2:0`
- `anthropic.claude-3-haiku-20240307-v1:0`
- `anthropic.claude-3-opus-20240229-v1:0`
Write fixtures that match by Bedrock model ID:
```json
{
  "match": {
    "model": "anthropic.claude-3-5-sonnet-20241022-v2:0",
    "userMessage": "hello"
  },
  "response": {
    "content": "Hello from Bedrock!"
  }
}
```
## SDK Configuration
To point the AWS SDK Bedrock Runtime client at llmock, configure the endpoint URL:
```typescript
import { BedrockRuntimeClient, InvokeModelCommand } from "@aws-sdk/client-bedrock-runtime";

const client = new BedrockRuntimeClient({
  region: "us-east-1",
  endpoint: "http://localhost:4005", // llmock URL
  credentials: { accessKeyId: "mock", secretAccessKey: "mock" },
});

const response = await client.send(new InvokeModelCommand({
  modelId: "anthropic.claude-3-5-sonnet-20241022-v2:0",
  contentType: "application/json",
  body: JSON.stringify({
    anthropic_version: "bedrock-2023-05-31",
    max_tokens: 512,
    messages: [{ role: "user", content: "Hello" }],
  }),
}));
```
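The `InvokeModelCommand` response carries its body as bytes, so the Anthropic Messages payload has to be decoded before use. The snippet below simulates that with an encoded sample so it is self-contained; in real code, `bytes` would be `response.body` from the call above.

```typescript
// Simulate the raw bytes an invoke call returns. In real usage this is
// `response.body` from the AWS SDK.
const bytes = new TextEncoder().encode(
  JSON.stringify({
    content: [{ type: "text", text: "Hello!" }],
    stop_reason: "end_turn",
  }),
);

// Decode the bytes and parse the Anthropic Messages payload.
const payload = JSON.parse(new TextDecoder().decode(bytes));
console.log(payload.content[0].text); // "Hello!"
```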
## Fixture Examples
```json
{
  "fixtures": [
    {
      "match": { "userMessage": "hello" },
      "response": { "content": "Hi there!" }
    },
    {
      "match": { "userMessage": "weather" },
      "response": {
        "toolCalls": [{
          "name": "get_weather",
          "arguments": "{\"city\":\"SF\"}"
        }]
      }
    }
  ]
}
```
Fixtures are shared across all providers. The same fixture file works for OpenAI, Claude Messages, Gemini, Azure, and Bedrock endpoints — llmock translates each provider's request format to a common internal format before matching.
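The translation step can be sketched roughly as follows. The names and shapes here are hypothetical, not llmock's actual internal types; the sketch only shows the idea that the Bedrock model ID comes from the URL while the body is already in Anthropic Messages format, and that a fixture's `userMessage` would then match against the last user turn.

```typescript
// Hypothetical provider-neutral request shape (not llmock's real types).
interface NormalizedRequest {
  model: string;
  system?: string;
  messages: { role: string; content: string }[];
}

// Normalize a Bedrock invoke: the body has no "model" field, so the
// model ID extracted from the URL path is supplied separately.
function normalizeBedrock(
  modelId: string,
  body: { system?: string; messages: { role: string; content: string }[] },
): NormalizedRequest {
  return { model: modelId, system: body.system, messages: body.messages };
}

// A fixture's "userMessage" matcher would compare against the most
// recent user turn in the normalized conversation.
function lastUserMessage(req: NormalizedRequest): string | undefined {
  return [...req.messages].reverse().find((m) => m.role === "user")?.content;
}
```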