aimock

(Demo video: aimock-demo.mp4)

Mock infrastructure for AI application testing — LLM APIs, image generation, text-to-speech, transcription, video generation, MCP tools, A2A agents, AG-UI event streams, vector databases, search, rerank, and moderation. One package, one port, zero dependencies.

Quick Start

npm install @copilotkit/aimock
// The class is still named `LLMock` for back-compat after the v1.7.0 package
// rename from `@copilotkit/llmock` to `@copilotkit/aimock`.
import { LLMock } from "@copilotkit/aimock";

const mock = new LLMock({ port: 0 });
mock.onMessage("hello", { content: "Hi there!" });
await mock.start();

// Set env BEFORE importing/constructing the OpenAI (or other provider) client.
// Many SDKs cache the base URL at construction time — if the client is built
// before these are set, it will talk to the real API (surprise bills) instead
// of aimock.
process.env.OPENAI_BASE_URL = `${mock.url}/v1`;
process.env.OPENAI_API_KEY = "mock"; // SDK requires a value, even when base URL is mocked

// ... run your tests ...

await mock.stop();
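The ordering warning above matters because many provider SDKs snapshot configuration once, at construction. A minimal, self-contained sketch of the failure mode, using a hypothetical `FakeClient` (not the real OpenAI SDK) that mimics this behavior:

```typescript
// Hypothetical client that, like many real SDKs, reads OPENAI_BASE_URL once
// in the constructor and never consults the environment again.
class FakeClient {
  readonly baseURL: string;
  constructor() {
    this.baseURL = process.env.OPENAI_BASE_URL ?? "https://api.openai.com/v1";
  }
}

delete process.env.OPENAI_BASE_URL;
const early = new FakeClient(); // constructed BEFORE the env var is set

process.env.OPENAI_BASE_URL = "http://127.0.0.1:4010/v1";
const late = new FakeClient(); // constructed AFTER the env var is set

console.log(early.baseURL); // "https://api.openai.com/v1" — real API, mock bypassed
console.log(late.baseURL);  // "http://127.0.0.1:4010/v1"  — talks to aimock
```

The same applies to clients that take the base URL as a constructor option: pass `mock.url` explicitly if you construct the client before setting the environment.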

The aimock Suite

aimock mocks everything your AI app talks to:

| Tool | What it mocks | Docs |
| --- | --- | --- |
| LLMock | OpenAI (Chat/Responses/Realtime), Claude, Gemini (REST/Live), Bedrock, Azure, Vertex AI, Ollama, Cohere | Providers |
| MCPMock | MCP tools, resources, and prompts with session management | MCP |
| A2AMock | Agent-to-agent protocol with SSE streaming | A2A |
| AGUIMock | AG-UI agent-to-UI event streams for frontend testing | AG-UI |
| VectorMock | Pinecone-, Qdrant-, and ChromaDB-compatible endpoints | Vector |
| Services | Tavily search, Cohere rerank, OpenAI moderation | Services |

Run them all on one port with npx @copilotkit/aimock --config aimock.json, or use the programmatic API to compose exactly what you need.
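A config file selects which mocks to enable. The fragment below is purely illustrative: these key names are assumptions for the sake of the example, not the documented `aimock.json` schema — consult the Docs links in the table above for the real format:

```json
{
  "port": 4010,
  "llm": { "fixtures": "./fixtures/llm" },
  "mcp": { "fixtures": "./fixtures/mcp" },
  "vector": { "fixtures": "./fixtures/vector" }
}
```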

Features

GitHub Action

- uses: CopilotKit/aimock@v1
  with:
    fixtures: ./test/fixtures

- run: npm test
  env:
    OPENAI_BASE_URL: http://127.0.0.1:4010/v1

See the GitHub Action docs for all inputs and examples.

CLI

# LLM mocking only
npx -p @copilotkit/aimock llmock -p 4010 -f ./fixtures

# Remote fixtures — load JSON from an HTTPS URL (repeatable)
npx -p @copilotkit/aimock llmock -p 4010 \
  -f https://raw.githubusercontent.com/acme/mocks/main/openai.json \
  -f ./fixtures/local-overrides.json

# Full suite from config
npx @copilotkit/aimock --config aimock.json

# Record mode: proxy to real APIs, save fixtures
npx -p @copilotkit/aimock llmock --record --provider-openai https://api.openai.com

# Convert fixtures from other tools
npx @copilotkit/aimock convert vidaimock ./templates/ ./fixtures/
npx @copilotkit/aimock convert mockllm ./config.yaml ./fixtures/

# Docker
docker run -d -p 4010:4010 -v "$(pwd)/fixtures:/fixtures" ghcr.io/copilotkit/aimock -f /fixtures -h 0.0.0.0

Note on llmock vs aimock CLIs. The llmock bin is retained as a compat alias for users of the pre-1.7.0 @copilotkit/llmock package. It runs a narrower flag-driven CLI without --config or the convert subcommand. New projects should use aimock (or npx @copilotkit/aimock) for full feature support.

Remote fixture URLs

--fixtures accepts https:// and http:// URLs pointing at JSON fixture files in addition to filesystem paths, and the flag is repeatable, so you can layer remote and local sources in argv order. Fetched fixtures are cached on disk at ~/.cache/aimock/fixtures/<sha256-of-url>/ (honoring $XDG_CACHE_HOME). When paired with --validate-on-load, a fetch failure falls back to a valid cached copy with a warning and continues; without a cached copy, the process exits non-zero. HTTP fetches have a 10-second timeout and a 50 MB body cap. Redirects are rejected outright, so configure your upstream to serve the final URL directly (GitHub raw content URLs already do).
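The cache layout described above can be sketched as follows. This is a reconstruction of the documented path scheme for illustration, not aimock's actual source:

```typescript
import { createHash } from "node:crypto";
import * as os from "node:os";
import * as path from "node:path";

// Documented layout: fetched fixtures live under
// $XDG_CACHE_HOME (falling back to ~/.cache) /aimock/fixtures/<sha256-of-url>/
function fixtureCacheDir(url: string): string {
  const digest = createHash("sha256").update(url).digest("hex");
  const base = process.env.XDG_CACHE_HOME ?? path.join(os.homedir(), ".cache");
  return path.join(base, "aimock", "fixtures", digest);
}

const dir = fixtureCacheDir(
  "https://raw.githubusercontent.com/acme/mocks/main/openai.json"
);
console.log(dir); // e.g. ~/.cache/aimock/fixtures/<64-hex-digest>/
```

Hashing the full URL means two fixture sources never collide in the cache, and re-fetching the same URL reuses the same directory.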

Private and link-local addresses (loopback, RFC1918, CGNAT, cloud metadata, ULA, multicast) are rejected by default to prevent SSRF. For local development or tests that need to hit 127.0.0.1, opt out with AIMOCK_ALLOW_PRIVATE_URLS=1. Tarball and zip URL support is intentionally deferred.
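As a sketch of the IPv4 ranges involved — an illustration of the documented rule, not aimock's actual implementation (which also covers IPv6 ULA, multicast, and DNS resolution) — a guard might look like:

```typescript
// Illustrative SSRF guard for the documented blocklist: loopback, RFC1918,
// CGNAT (100.64/10), and link-local/cloud-metadata (169.254/16) addresses.
function isBlockedIPv4(ip: string): boolean {
  // Documented opt-out for local development and tests.
  if (process.env.AIMOCK_ALLOW_PRIVATE_URLS === "1") return false;
  const parts = ip.split(".").map(Number);
  if (parts.length !== 4 || parts.some((n) => !Number.isInteger(n) || n < 0 || n > 255)) {
    return true; // not a well-formed IPv4 address: reject rather than guess
  }
  const [a, b] = parts;
  if (a === 127) return true;                        // loopback
  if (a === 10) return true;                         // RFC1918 10/8
  if (a === 172 && b >= 16 && b <= 31) return true;  // RFC1918 172.16/12
  if (a === 192 && b === 168) return true;           // RFC1918 192.168/16
  if (a === 100 && b >= 64 && b <= 127) return true; // CGNAT 100.64/10
  if (a === 169 && b === 254) return true;           // link-local, incl. 169.254.169.254
  return false;
}
```

Note that a robust implementation must also check the address a hostname *resolves* to, not just literal IPs, since `evil.example` can resolve to `127.0.0.1`.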

Framework Guides

Test your AI agents with aimock — no API keys, no network calls: LangChain · CrewAI · PydanticAI · LlamaIndex · Mastra · Google ADK · Microsoft Agent Framework

Switching from other tools?

Step-by-step migration guides: MSW · VidaiMock · mock-llm · piyook/llm-mock · Python mocks · openai-responses · Mokksy

Documentation

https://aimock.copilotkit.dev · Example fixtures

Real-World Usage

AG-UI uses aimock for its end-to-end test suite, verifying AI agent behavior across LLM providers with fixture-driven responses.

License

MIT
