Mock infrastructure for AI application testing — LLM APIs, image generation, text-to-speech, transcription, video generation, MCP tools, A2A agents, AG-UI event streams, vector databases, search, rerank, and moderation. One package, one port, zero dependencies.
```bash
npm install @copilotkit/aimock
```

```ts
// The class is still named `LLMock` for back-compat after the v1.7.0 package
// rename from `@copilotkit/llmock` to `@copilotkit/aimock`.
import { LLMock } from "@copilotkit/aimock";

const mock = new LLMock({ port: 0 });
mock.onMessage("hello", { content: "Hi there!" });
await mock.start();

// Set env BEFORE importing/constructing the OpenAI (or other provider) client.
// Many SDKs cache the base URL at construction time — if the client is built
// before these are set, it will talk to the real API (surprise bills) instead
// of aimock.
process.env.OPENAI_BASE_URL = `${mock.url}/v1`;
process.env.OPENAI_API_KEY = "mock"; // SDK requires a value, even when the base URL is mocked

// ... run your tests ...
await mock.stop();
```

aimock mocks everything your AI app talks to:
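The construction-time caching pitfall is easy to demonstrate with a stand-in client (a hypothetical class for illustration, not the real OpenAI SDK) that snapshots its base URL in the constructor:

```ts
// Hypothetical stand-in mimicking SDKs that capture configuration once,
// in the constructor. Later env changes are invisible to existing instances.
delete process.env.OPENAI_BASE_URL; // start from a clean env for the demo

class StandInClient {
  readonly baseURL: string;
  constructor() {
    // Snapshot taken here; patching the env afterwards changes nothing.
    this.baseURL = process.env.OPENAI_BASE_URL ?? "https://api.openai.com/v1";
  }
}

const early = new StandInClient(); // constructed BEFORE env patching
process.env.OPENAI_BASE_URL = "http://127.0.0.1:4010/v1"; // the mock's URL
const late = new StandInClient(); // constructed AFTER env patching

console.log(early.baseURL); // "https://api.openai.com/v1" -- still the real API
console.log(late.baseURL);  // "http://127.0.0.1:4010/v1" -- hits the mock
```

The safe pattern is always: start the mock, patch the env, then construct the client.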
| Tool | What it mocks | Docs |
|---|---|---|
| LLMock | OpenAI (Chat/Responses/Realtime), Claude, Gemini (REST/Live), Bedrock, Azure, Vertex AI, Ollama, Cohere | Providers |
| MCPMock | MCP tools, resources, prompts with session management | MCP |
| A2AMock | Agent-to-agent protocol with SSE streaming | A2A |
| AGUIMock | AG-UI agent-to-UI event streams for frontend testing | AG-UI |
| VectorMock | Pinecone, Qdrant, ChromaDB compatible endpoints | Vector |
| Services | Tavily search, Cohere rerank, OpenAI moderation | Services |
Run them all on one port with `npx @copilotkit/aimock --config aimock.json`, or use the programmatic API to compose exactly what you need.
- Record & Replay — Proxy real APIs, save as fixtures, replay deterministically forever
- Multi-turn Conversations — Record and replay multi-turn traces with tool rounds; match distinct turns via `toolCallId`, `sequenceIndex`, or custom predicates
- 11 LLM Providers — OpenAI Chat, OpenAI Responses, OpenAI Realtime, Claude, Gemini, Gemini Live, Azure, Bedrock, Vertex AI, Ollama, Cohere — full streaming support
- Multimedia APIs — image generation (DALL-E, Imagen), text-to-speech, audio transcription, video generation
- MCP / A2A / AG-UI / Vector — Mock every protocol your AI agents use
- Chaos Testing — 500 errors, malformed JSON, mid-stream disconnects at any probability
- Drift Detection — Daily CI validation against real APIs
- Streaming Physics — Configurable `ttft`, `tps`, and `jitter`
- WebSocket APIs — OpenAI Realtime, Responses WS, Gemini Live
- Prometheus Metrics — Request counts, latencies, fixture match rates
- Docker + Helm — Container image and Helm chart for CI/CD
- Vitest & Jest Plugins — Zero-config `useAimock()` with auto lifecycle and env patching
- Response Overrides — Control `id`, `model`, `usage`, `finishReason` in fixture responses
- Zero dependencies — Everything from Node.js builtins
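The streaming-physics knobs compose in a straightforward way; as a rough sketch (my own illustration, not aimock's internals), turning `ttft`, `tps`, and `jitter` into per-token emission times might look like:

```ts
// Illustrative only: derive emission timestamps (ms) for n tokens from
// ttft (time to first token, ms), tps (tokens per second), and jitter
// (max random per-token offset, ms). Not aimock's actual implementation.
function tokenTimeline(
  n: number,
  ttft: number,
  tps: number,
  jitter: number,
  rand: () => number = Math.random,
): number[] {
  const gap = 1000 / tps; // steady-state inter-token gap in ms
  const times: number[] = [];
  for (let i = 0; i < n; i++) {
    const base = ttft + i * gap; // first token at ttft, then one per 1/tps s
    times.push(base + rand() * jitter); // jitter nudges each token forward
  }
  return times;
}

// With jitter disabled the schedule is deterministic:
console.log(tokenTimeline(4, 200, 50, 0)); // [200, 220, 240, 260]
```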
```yaml
- uses: CopilotKit/aimock@v1
  with:
    fixtures: ./test/fixtures
- run: npm test
  env:
    OPENAI_BASE_URL: http://127.0.0.1:4010/v1
```

See the GitHub Action docs for all inputs and examples.
```bash
# LLM mocking only
npx -p @copilotkit/aimock llmock -p 4010 -f ./fixtures

# Remote fixtures — load JSON from an HTTPS URL (repeatable)
npx -p @copilotkit/aimock llmock -p 4010 \
  -f https://raw.githubusercontent.com/acme/mocks/main/openai.json \
  -f ./fixtures/local-overrides.json

# Full suite from config
npx @copilotkit/aimock --config aimock.json

# Record mode: proxy to real APIs, save fixtures
npx -p @copilotkit/aimock llmock --record --provider-openai https://api.openai.com

# Convert fixtures from other tools
npx @copilotkit/aimock convert vidaimock ./templates/ ./fixtures/
npx @copilotkit/aimock convert mockllm ./config.yaml ./fixtures/

# Docker
docker run -d -p 4010:4010 -v "$(pwd)/fixtures:/fixtures" ghcr.io/copilotkit/aimock -f /fixtures -h 0.0.0.0
```

Note on `llmock` vs `aimock` CLIs: the `llmock` bin is retained as a compat alias for users of the pre-1.7.0 `@copilotkit/llmock` package. It runs a narrower flag-driven CLI without `--config` or the `convert` subcommand. New projects should use `aimock` (or `npx @copilotkit/aimock`) for full feature support.
`--fixtures` accepts `https://` and `http://` URLs pointing at JSON fixture files in addition to filesystem paths, and the flag is repeatable so you can layer remote and local sources in argv order. Fetched fixtures are cached on disk at `~/.cache/aimock/fixtures/<sha256-of-url>/` (honors `$XDG_CACHE_HOME`); when paired with `--validate-on-load`, a fetch failure with a valid cached copy logs a warning and continues — without a cache, the process exits non-zero. HTTP fetches have a 10 s timeout and a 50 MB body cap; redirects are rejected loudly, so configure your upstream to serve the final URL directly (GitHub raw content URLs already do).
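The cache location is deterministic per URL; a minimal sketch of computing it with Node builtins (the helper name is mine, not part of aimock's API):

```ts
import { createHash } from "node:crypto";
import { join } from "node:path";
import { homedir } from "node:os";

// Mirrors the documented layout: $XDG_CACHE_HOME (falling back to ~/.cache)
// joined with aimock/fixtures/<sha256-of-url>/. Helper name is illustrative.
function fixtureCacheDir(url: string): string {
  const cacheRoot = process.env.XDG_CACHE_HOME ?? join(homedir(), ".cache");
  const key = createHash("sha256").update(url).digest("hex");
  return join(cacheRoot, "aimock", "fixtures", key);
}

console.log(fixtureCacheDir("https://raw.githubusercontent.com/acme/mocks/main/openai.json"));
```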
Private and link-local addresses (loopback, RFC1918, CGNAT, cloud metadata, ULA, multicast) are rejected by default to prevent SSRF. For local development or tests that need to hit `127.0.0.1`, opt out with `AIMOCK_ALLOW_PRIVATE_URLS=1`. Tarball and zip URL support is intentionally deferred.
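The private-range check amounts to classifying the resolved address against well-known CIDR blocks. A simplified IPv4-only sketch of the idea (my own illustration, omitting the IPv6 ranges and not aimock's actual code):

```ts
// Illustration of SSRF guarding: reject IPv4 addresses in loopback,
// RFC1918, CGNAT, link-local/metadata, and multicast ranges.
const BLOCKED_V4: Array<[string, number]> = [
  ["127.0.0.0", 8],    // loopback
  ["10.0.0.0", 8],     // RFC1918
  ["172.16.0.0", 12],  // RFC1918
  ["192.168.0.0", 16], // RFC1918
  ["100.64.0.0", 10],  // CGNAT
  ["169.254.0.0", 16], // link-local (includes 169.254.169.254 cloud metadata)
  ["224.0.0.0", 4],    // multicast
];

function toInt(ip: string): number {
  // Pack dotted quad into an unsigned 32-bit integer.
  return ip.split(".").reduce((acc, o) => (acc << 8) | parseInt(o, 10), 0) >>> 0;
}

function isPrivateV4(ip: string): boolean {
  if (process.env.AIMOCK_ALLOW_PRIVATE_URLS === "1") return false; // opt-out
  const n = toInt(ip);
  return BLOCKED_V4.some(([net, bits]) => {
    const mask = (~0 << (32 - bits)) >>> 0; // network mask for the prefix
    return (n & mask) === (toInt(net) & mask);
  });
}

console.log(isPrivateV4("127.0.0.1"));       // true  -- loopback
console.log(isPrivateV4("169.254.169.254")); // true  -- metadata endpoint
console.log(isPrivateV4("93.184.216.34"));   // false -- public address
```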
Test your AI agents with aimock — no API keys, no network calls: LangChain · CrewAI · PydanticAI · LlamaIndex · Mastra · Google ADK · Microsoft Agent Framework
Step-by-step migration guides: MSW · VidaiMock · mock-llm · piyook/llm-mock · Python mocks · openai-responses · Mokksy
https://aimock.copilotkit.dev · Example fixtures
AG-UI uses aimock for its end-to-end test suite, verifying AI agent behavior across LLM providers with fixture-driven responses.
MIT