Merged

22 commits
7fc0dc3
chore: SnapshotStore trait and project_id/database_id
radim Apr 30, 2026
6916c38
feat: implemented snapshot store and basic wiring
radim Apr 30, 2026
f512f43
test(core): cover SnapshotStore trait impl
radim Apr 30, 2026
5af2acc
chore: config changes
radim Apr 30, 2026
c63da6b
chore: updated README for config changes
radim Apr 30, 2026
ff74830
test: added tests for project_id/database_id
radim Apr 30, 2026
71a982d
fix: typo
radim Apr 30, 2026
c3a99fd
chore: wired snapshot commands to use SnapshotStore
radim May 1, 2026
026e07e
chore: schema_diff switch to snapshot store
radim May 1, 2026
bea4c2c
chore: cleanup legacy methods
radim May 1, 2026
5b40e48
chore: removed leftover test
radim May 1, 2026
c5e5523
chore: cargo-husky pre-commit hook
radim May 1, 2026
a7550da
feat: dryrun snapshot export cli command
radim May 1, 2026
a8ff8e1
test: tests for list_keys, complete_key, snapshot export
radim May 1, 2026
b543889
docs: updated README.md and toml reference for profile/databases
radim May 1, 2026
18c9e66
chore: added project to demo
radim May 1, 2026
3773cef
feat: CLI --profile for import, drift, stats apply, probe and dump-sc…
radim May 1, 2026
84e1509
feat: snapshot profile support
radim May 1, 2026
0f054b5
test: profile resolution with cli overrides and missing profiles
radim May 1, 2026
4905cad
docs: profile selection logic
radim May 1, 2026
d1ad7b5
docs: switched tracking issue for multi-DB support in MCP
radim May 1, 2026
1c5ecb4
test: cover profile resolution edge cases and CLI schema path helpers
radim May 1, 2026
6 changes: 6 additions & 0 deletions .cargo-husky/hooks/pre-commit
@@ -0,0 +1,6 @@
#!/bin/sh

set -e

echo '+cargo fmt --all -- --check'
cargo fmt --all -- --check
62 changes: 62 additions & 0 deletions Cargo.lock


2 changes: 2 additions & 0 deletions Cargo.toml
@@ -13,6 +13,7 @@ dbg_macro = "deny"

[workspace.dependencies]
dry_run_core = { path = "crates/dry_run_core" }
async-trait = "0.1"
chrono = { version = "0.4", features = ["serde"] }
clap = { version = "4", features = ["derive", "env"] }
pg_query = "6.1"
@@ -26,6 +27,7 @@ thiserror = "2"
tokio = { version = "1", features = ["full"] }
toml = "0.8"
tracing = "0.1"
zstd = "0.13"
reqwest = { version = "0.12", features = ["json", "rustls-tls"], default-features = false }
rmcp = { version = "0.8", features = ["server", "transport-io", "transport-sse-server", "macros"] }
schemars = "1"
43 changes: 41 additions & 2 deletions README.md
@@ -12,7 +12,6 @@ LLM/AI coding assistants are very good at writing code/SQL queries. But they are

Some PostgreSQL MCP servers ask you for a database connection. And to perform administrative tasks you might need SUPERUSER permission. But that's asking for trouble.


We've already seen where this leads: [production databases wiped by AI agents](https://fortune.com/2025/07/23/ai-coding-tool-replit-wiped-database-called-it-a-catastrophic-failure/), and [SQL injection in MCP servers](https://securitylabs.datadoghq.com/articles/mcp-vulnerability-case-study-SQL-injection-in-the-postgresql-mcp-server/) that were supposed to be read-only.

The model doesn't need to *query* your database. It needs to *understand* your schema: the structure, constraints, statistics, and version-specific behavior. That knowledge is structural. It changes when you deploy a migration, not between queries.
@@ -107,7 +106,7 @@ If you can connect to a PostgreSQL instance (local, dev, or production), one com
dryrun init --db "$DATABASE_URL"
```

This creates `dryrun.toml`, the `.dryrun/` data directory, and introspects the database into `.dryrun/schema.json`. You're ready to go.
This creates `dryrun.toml` (with a `[project] id` and a default profile) and the `.dryrun/` data directory, and introspects the database into `.dryrun/schema.json`. Snapshots are keyed by `(project_id, database_id)`; set `database_id` per profile when a project has multiple databases (e.g. `auth`, `billing`). See [`docs/dryrun-toml.md`](docs/dryrun-toml.md) for the full config reference.
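
A freshly generated `dryrun.toml` might look roughly like this (a sketch only; the profile name, field layout, and concrete values are assumptions, not the exact generated output):

```toml
[project]
id = "myapp"                # defaults to the folder name

[profiles.default]
db_url = "${DATABASE_URL}"  # the URL passed to `dryrun init`
database_id = "myapp"       # defaults to current_database()
```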

### Option B: Someone else has database access

@@ -134,6 +133,44 @@ dryrun lint

All commands work offline from the schema file. Each project has its own `dryrun.toml` and `.dryrun/`, there is no global state. Add `.dryrun/` to your `.gitignore`.

### Multiple databases per project

`dryrun snapshot take` keys snapshots by `(project_id, database_id)`. The defaults work — `project_id` is your folder name, `database_id` is the actual database name from `current_database()`:

```sh
dryrun init --db "$AUTH_DB" # captures auth
dryrun snapshot take --db "$BILLING_DB" # captures billing into its own stream
dryrun snapshot list --db "$AUTH_DB" # only auth snapshots
```
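
Conceptually, each snapshot stream is addressed by the composite key described above. A minimal sketch of that idea (hypothetical types for illustration; the actual `SnapshotStore` trait in this PR may differ):

```rust
// Hypothetical sketch of snapshot keying; names are illustrative,
// not the actual types from this PR.
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
pub struct SnapshotKey {
    pub project_id: String,  // defaults to the project folder name
    pub database_id: String, // defaults to current_database()
}

impl SnapshotKey {
    /// Render the key as a stable stream identifier, e.g. "myapp/auth".
    pub fn stream(&self) -> String {
        format!("{}/{}", self.project_id, self.database_id)
    }
}

fn main() {
    let auth = SnapshotKey { project_id: "myapp".into(), database_id: "auth".into() };
    let billing = SnapshotKey { project_id: "myapp".into(), database_id: "billing".into() };
    // Two databases in the same project land in separate snapshot streams.
    assert_ne!(auth.stream(), billing.stream());
}
```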

For stable refs (and so `list` / `diff` can run without retyping URLs), declare profiles in `dryrun.toml`:

```toml
[project]
id = "myapp"

[profiles.auth]
db_url = "${AUTH_DATABASE_URL}"
database_id = "auth"

[profiles.billing]
db_url = "${BILLING_DATABASE_URL}"
database_id = "billing"
```

Then:

```sh
dryrun --profile billing snapshot list
dryrun --profile billing snapshot diff --latest
```

See [`docs/dryrun-toml.md`](docs/dryrun-toml.md) for all profile options.
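
The `${VAR}` placeholders in the profile example above imply environment-variable expansion at config load time. A minimal sketch of that behavior (an assumption about the loader, written against a plain lookup function rather than the process environment so it is easy to test):

```rust
use std::collections::HashMap;

/// Expand a "${NAME}" placeholder via a lookup function, leaving the
/// value untouched when it is not a placeholder or the name is unknown.
/// Hypothetical helper -- the real config loader may behave differently.
fn expand(value: &str, lookup: impl Fn(&str) -> Option<String>) -> String {
    match value.strip_prefix("${").and_then(|v| v.strip_suffix('}')) {
        Some(name) => lookup(name).unwrap_or_else(|| value.to_string()),
        None => value.to_string(),
    }
}

fn main() {
    let vars: HashMap<&str, &str> =
        [("AUTH_DATABASE_URL", "postgres://localhost/auth")].into();
    let lookup = |name: &str| vars.get(name).map(|v| v.to_string());
    assert_eq!(expand("${AUTH_DATABASE_URL}", &lookup), "postgres://localhost/auth");
    assert_eq!(expand("plain-value", &lookup), "plain-value");
}
```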

Every DB-related command (`init`, `import`, `probe`, `dump-schema`, `lint`, `drift`, `stats apply`, all `snapshot` subcommands) accepts `--profile` and falls back to the resolved profile's `db_url` and `schema_file` when the corresponding CLI flag is not provided.
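
That fallback order (explicit CLI flag first, then the resolved profile's value) amounts to a simple precedence rule. Sketched as a standalone function for illustration, not the actual CLI code:

```rust
/// Resolve one effective setting: an explicit CLI flag wins over the
/// selected profile's configured value. Hypothetical helper.
fn resolve(cli_flag: Option<&str>, profile_value: Option<&str>) -> Option<String> {
    cli_flag.or(profile_value).map(str::to_string)
}

fn main() {
    // --db on the command line overrides the profile's db_url...
    assert_eq!(
        resolve(Some("postgres://cli"), Some("postgres://profile")),
        Some("postgres://cli".to_string())
    );
    // ...while the profile fills in when the flag is absent.
    assert_eq!(
        resolve(None, Some("postgres://profile")),
        Some("postgres://profile".to_string())
    );
}
```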

> **Note:** the MCP server is currently single-database and uses the default profile; alternatively, run one `dryrun mcp-serve` process per database. Native multi-database support inside one MCP process is tracked in [#7](https://github.com/boringSQL/dryrun/issues/7).

## MCP server

Add `dryrun` to your AI assistant. If you installed via Homebrew, `dryrun` is already on your PATH:
@@ -150,6 +187,8 @@ claude mcp add dryrun -- /path/to/dryrun mcp-serve

That's it. The server auto-discovers `.dryrun/schema.json` in the current project. No database credentials needed, your AI assistant gets full schema intelligence from the offline snapshot.

For projects with multiple databases, run one `dryrun mcp-serve` per database and add an entry per server in your client config. Native multi-database serving inside one MCP process is tracked in [#4](https://github.com/boringSQL/dryrun/issues/4).

See the [Tutorial](TUTORIAL.md) for live database setup, SSE transport, and Claude Desktop configuration.

## More
5 changes: 5 additions & 0 deletions crates/dry_run_cli/Cargo.toml
@@ -22,3 +22,8 @@ serde_json = { workspace = true }
tokio = { workspace = true }
tracing = { workspace = true }
tracing-subscriber = { workspace = true }
zstd = { workspace = true }

[dev-dependencies]
cargo-husky = { version = "1", default-features = false, features = ["user-hooks"] }
tempfile = "3"