# Config File
## Location

Put `wrangler-deploy.config.ts` at the project root.
This file sits alongside your existing Wrangler config. It does not replace `wrangler.jsonc`. Think of the split like this:

- `wrangler.jsonc`: local dev and normal Wrangler config
- `wrangler-deploy.config.ts`: project-level resource, stage, and binding model
- `wrangler.rendered.jsonc`: generated deploy-time config with real IDs for a stage
You can also add a project defaults file, `.wdrc` or `.wdrc.json`, at the repo root or any parent directory. `wd context` uses it for defaults like stage name, dev port, account ID, and state password. It is optional, but helpful when you want agents or teammates to reuse the same settings without repeating flags.
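The precedence between explicit flags, `.wdrc` values, and built-in fallbacks can be pictured as a small merge step. This is an illustrative sketch with hypothetical field names mirroring the defaults listed above, not wrangler-deploy's actual resolution code:

```typescript
// Hypothetical shape of resolved project defaults. Field names (stage,
// devPort, accountId) mirror the docs; the merge logic is illustrative only.
interface WdDefaults {
  stage?: string;
  devPort?: number;
  accountId?: string;
}

// CLI flags win over .wdrc values; .wdrc values win over built-in fallbacks.
function resolveDefaults(
  rcFile: WdDefaults,
  flags: WdDefaults,
  fallback: Required<WdDefaults>
): Required<WdDefaults> {
  return {
    stage: flags.stage ?? rcFile.stage ?? fallback.stage,
    devPort: flags.devPort ?? rcFile.devPort ?? fallback.devPort,
    accountId: flags.accountId ?? rcFile.accountId ?? fallback.accountId,
  };
}
```

The useful property is that an agent or teammate with no flags at all still lands on the shared `.wdrc` settings.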
The config also gives you inferred worker types through `workerEnv(...)` and `typeof api.Env`. That keeps the runtime config and the TypeScript Env shape in one place, with no generated files and no separate type generation step.
## Minimal example

```ts
import { defineConfig } from "wrangler-deploy";

export default defineConfig({
  version: 1,
  workers: ["workers/api"],
  resources: {
    "payments-db": {
      type: "d1",
      bindings: {
        "workers/api": "DB",
      },
    },
    "cache-kv": {
      type: "kv",
      bindings: {
        "workers/api": "CACHE",
      },
    },
  },
});
```

With a worker config like this:
```jsonc
{
  "name": "payments-api",
  "main": "src/index.ts",
  "d1_databases": [
    { "binding": "DB", "database_name": "placeholder", "database_id": "placeholder" }
  ],
  "kv_namespaces": [
    { "binding": "CACHE", "id": "placeholder" }
  ]
}
```

You keep the checked-in JSONC for development. wrangler-deploy reads it, renders a deploy-time config with real IDs, and uses that rendered file during `wd deploy`.
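The render step can be pictured as a substitution over the parsed worker config: placeholder IDs are swapped for provisioned ones. This is an illustrative sketch assuming a simple map from binding name to real ID, not wrangler-deploy's actual rendering code:

```typescript
// Minimal model of the worker config fields shown above.
interface WorkerConfig {
  name: string;
  d1_databases?: { binding: string; database_name: string; database_id: string }[];
  kv_namespaces?: { binding: string; id: string }[];
}

// ids maps a binding name ("DB", "CACHE") to the real resource ID for a stage.
// Bindings without a provisioned ID keep their placeholder untouched.
function renderConfig(config: WorkerConfig, ids: Record<string, string>): WorkerConfig {
  return {
    ...config,
    d1_databases: config.d1_databases?.map((db) => ({
      ...db,
      database_id: ids[db.binding] ?? db.database_id,
    })),
    kv_namespaces: config.kv_namespaces?.map((kv) => ({
      ...kv,
      id: ids[kv.binding] ?? kv.id,
    })),
  };
}
```

The checked-in file is never mutated; only the rendered copy carries stage-specific IDs.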
## Schema

```ts
import { defineConfig } from "wrangler-deploy";

export default defineConfig({
  version: 1,                               // required, must be 1
  workers: ["apps/api", "apps/worker"],     // worker directories ("." for root)
  deployOrder: ["apps/worker", "apps/api"], // optional, inferred from serviceBindings
  resources: { /* ... */ },       // KV, Queue, D1, Hyperdrive, R2, Vectorize
  serviceBindings: { /* ... */ }, // cross-worker bindings
  stages: { /* ... */ },          // protection and TTL rules
  secrets: { /* ... */ },         // declared secret names per worker
  routes: { /* ... */ },          // stage-aware URL patterns
  verify: { /* ... */ },          // post-deploy health checks
  verifyLocal: { /* ... */ },     // config-driven local runtime checks
  guard: { /* ... */ },           // workers usage guard integration
  state: { /* ... */ },           // remote state backend (KV)
  dev: { /* ... */ },             // local multi-worker dev config
});
```

Every worker listed in `workers` should point at a directory that already contains `wrangler.jsonc` or `wrangler.json`.
If you need a quick way to inspect or update project defaults, use the CLI instead of hand-editing JSON:

```sh
wd context
wd context get stage
wd context set --stage staging --account-id 1234567890abcdef1234567890abcdef
wd context unset --account-id
wd context clear
```

## Local Verify
`verifyLocal` powers `wd verify local`. It is intentionally separate from deploy-time `verify`: this block is for local runtime workflows against `wd dev`.
```ts
verifyLocal: {
  checks: [
    {
      type: "worker",
      name: "api health",
      worker: "workers/api",
      endpoint: "health",
      expectStatus: 200,
      expectBodyIncludes: ['"ok":true'],
    },
    {
      type: "d1Reset",
      database: "payments-db",
    },
    {
      type: "d1Seed",
      database: "payments-db",
    },
    {
      type: "d1",
      database: "payments-db",
      sql: "SELECT COUNT(*) AS batch_count FROM batches;",
      expectTextIncludes: ['"batch_count": 1'],
    },
    {
      type: "queue",
      queue: "payment-outbox",
      payload: JSON.stringify({ type: "batch.dispatched" }),
      expectStatus: 200,
      expectBodyIncludes: ['"queued":true'],
    },
    {
      type: "cron",
      worker: "workers/batch-workflow",
      expectStatus: 200,
      expectBodyIncludes: ["ok"],
    },
  ],
}
```

Supported check types:

- `worker`
- `cron`
- `queue`
- `d1`
- `d1Seed`
- `d1Reset`
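The expectation fields on a `worker` check can be pictured as a small evaluation step over an HTTP response. This is an illustrative sketch with hypothetical type and function names, not `wd verify local`'s actual implementation:

```typescript
// Hypothetical subset of a `worker` check's expectation fields.
interface WorkerCheckExpectations {
  expectStatus?: number;
  expectBodyIncludes?: string[];
}

// Collect every failed expectation rather than stopping at the first,
// so a check report can show all mismatches at once.
function evaluateCheck(
  check: WorkerCheckExpectations,
  response: { status: number; body: string }
): { ok: boolean; failures: string[] } {
  const failures: string[] = [];
  if (check.expectStatus !== undefined && response.status !== check.expectStatus) {
    failures.push(`status ${response.status} != expected ${check.expectStatus}`);
  }
  for (const needle of check.expectBodyIncludes ?? []) {
    if (!response.body.includes(needle)) {
      failures.push(`body missing substring: ${needle}`);
    }
  }
  return { ok: failures.length === 0, failures };
}
```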
`worker`, `queue`, and `d1` checks can reference shared fixtures with `fixture: "<name>"`. That lets `wd verify local`, `wd dev ui`, and the CLI reuse the same local payloads and SQL instead of duplicating them in multiple places.
`verifyLocal` also supports named packs:

```ts
verifyLocal: {
  checks: [...],
  packs: {
    smoke: {
      description: "Fast local smoke test for CI",
      checks: [
        {
          type: "worker",
          worker: "workers/api",
          fixture: "api-health",
          expectStatus: 200,
          expectJsonIncludes: { ok: true },
        },
      ],
    },
  },
}
```

Use packs when the repo needs a fast smoke run and a stricter regression run without copying the same fixtures into multiple scripts.
## Guard integration

`guard` connects wrangler-deploy to a deployed workers-usage-guard Worker.

```ts
guard: {
  endpoint: "https://workers-usage-guard.example.workers.dev", // set by `wd guard init`
  databaseId: "bd0274ea-ea3b-4fd7-966d-ee55d6ce9947",          // set by `wd guard init`
  accounts: [
    {
      accountId: "1234567890abcdef1234567890abcdef",
      billingCycleDay: 1,
      workers: [
        {
          scriptName: "payment-api",
          thresholds: { requests: 500_000, cpuMs: 5_000_000 },
          forecast: true,
          forecastLookaheadSeconds: 600,
        },
      ],
      globalProtected: [],
    },
  ],
}
```

- `guard.endpoint` and `guard.databaseId` are printed by `wd guard init`; copy them from the output.
- `guard.accounts` powers `wd guard status` (GraphQL usage fetch path).
- `guard.endpoint` enables signed API commands: `breaches`, `report`, `disarm`, `arm`, `approvals`, `approve`, `reject`.
- `guard.databaseId` is used by `wd guard deploy` and `wd guard migrate` to target the correct D1.
- Signed commands require `WRANGLER_DEPLOY_GUARD_SIGNING_KEY` at runtime.
- Status requires `CLOUDFLARE_API_TOKEN`.
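The `thresholds`, `forecast`, and `forecastLookaheadSeconds` fields suggest a simple decision model: alert when usage crosses a limit, or flag a worker whose usage would cross it within the lookahead window at the current rate. The sketch below illustrates that idea with a linear projection; it is an assumption about the mechanics, not workers-usage-guard's actual algorithm:

```typescript
// Linear projection: extend the average rate so far over the lookahead window.
function projectedUsage(
  current: number,
  elapsedSeconds: number,
  lookaheadSeconds: number
): number {
  const rate = elapsedSeconds > 0 ? current / elapsedSeconds : 0;
  return current + rate * lookaheadSeconds;
}

// "breach" once the threshold is crossed; "forecast" when the projection
// says it will be crossed within the lookahead; otherwise "ok".
function shouldAlert(
  current: number,
  threshold: number,
  elapsedSeconds: number,
  forecastLookaheadSeconds?: number
): "breach" | "forecast" | "ok" {
  if (current >= threshold) return "breach";
  if (
    forecastLookaheadSeconds !== undefined &&
    projectedUsage(current, elapsedSeconds, forecastLookaheadSeconds) >= threshold
  ) {
    return "forecast";
  }
  return "ok";
}
```

A forecast state gives you a window to disarm or approve before a hard breach fires.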
## Shared fixtures

```ts
fixtures: {
  "api-health": {
    type: "worker",
    worker: "workers/api",
    endpoint: "health",
    description: "Shared health check used by verify and dev ui",
  },
  "payment-outbox-dispatch": {
    type: "queue",
    queue: "payment-outbox",
    worker: "workers/api",
    payload: JSON.stringify({ type: "batch.dispatched", data: { batchId: "fixture-test" } }),
  },
  "payments-batch-count": {
    type: "d1",
    database: "payments-db",
    worker: "workers/api",
    sql: "SELECT COUNT(*) AS batch_count FROM batches;",
  },
}
```

Fixture types:

- `worker` for reusable `wd worker call` inputs
- `queue` for reusable `wd queue send` payloads
- `d1` for reusable `wd d1 exec` SQL or file-based commands

Fixtures are optional. Keep using raw `wrangler.jsonc` and direct Wrangler commands if that is enough for your repo. Add fixtures when repeated local workflows start drifting across scripts, docs, and tests.
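A check that sets `fixture: "<name>"` can be thought of as inheriting the named fixture's fields, with its own fields winning on conflict. This merge rule is an assumption for illustration; wrangler-deploy's actual resolution semantics may differ:

```typescript
// Hypothetical sketch: expand a check's `fixture` reference by layering the
// check's own fields over the named fixture's fields.
type Fields = Record<string, unknown>;

function resolveFixture(
  check: Fields & { fixture?: string },
  fixtures: Record<string, Fields>
): Fields {
  if (!check.fixture) return check; // no reference: use the check as-is
  const base = fixtures[check.fixture];
  if (!base) throw new Error(`unknown fixture: ${check.fixture}`);
  const { fixture: _name, ...overrides } = check;
  return { ...base, ...overrides }; // check-level fields win over fixture fields
}
```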
## Resource types

| Type | Key | Provisioned via |
|---|---|---|
| D1 Database | `d1` | `wrangler d1 create` |
| KV Namespace | `kv` | `wrangler kv namespace create` |
| Queue | `queue` | `wrangler queues create` |
| Hyperdrive | `hyperdrive` | `wrangler hyperdrive create` |
| R2 Bucket | `r2` | `wrangler r2 bucket create` |
| Vectorize | `vectorize` | Cloudflare Vectorize API |
## State backend

```ts
// Local state on disk (default, omit or set explicitly)
state: { backend: "local" }

// Remote KV state shared across your team and CI
state: {
  backend: "kv",
  namespaceId: "your-kv-namespace-id",
  keyPrefix: "wrangler-deploy/", // optional, default shown
}
```

See Remote State for setup and migration.
## Deploy order

`deployOrder` is optional. If you omit it, wrangler-deploy infers order from `serviceBindings`. Workers that are depended on deploy first. The graph is validated:

- Cycles are rejected with a full cycle path
- Unknown targets are rejected
- Explicit `deployOrder` is validated against service binding dependencies
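The inference described above is a classic topological ordering over the service binding graph. A sketch of the idea, using depth-first search with cycle path reporting; this is illustrative, not wrangler-deploy's actual implementation:

```typescript
// deps[w] lists the workers that w binds to, so they must deploy before w.
// Returns an order where every dependency precedes its dependents.
function inferDeployOrder(deps: Record<string, string[]>): string[] {
  const order: string[] = [];
  const state = new Map<string, "visiting" | "done">();
  const path: string[] = []; // current DFS stack, used to report cycle paths

  function visit(worker: string): void {
    if (state.get(worker) === "done") return;
    if (state.get(worker) === "visiting") {
      // Re-entering a worker on the current path means a cycle; report it fully.
      const cycle = [...path.slice(path.indexOf(worker)), worker];
      throw new Error(`service binding cycle: ${cycle.join(" -> ")}`);
    }
    state.set(worker, "visiting");
    path.push(worker);
    for (const dep of deps[worker] ?? []) {
      if (!(dep in deps)) throw new Error(`unknown worker: ${dep}`);
      visit(dep); // dependencies deploy before their dependents
    }
    path.pop();
    state.set(worker, "done");
    order.push(worker); // post-order: all dependencies already emitted
  }

  for (const worker of Object.keys(deps)) visit(worker);
  return order;
}
```

With `apps/api` bound to `apps/worker`, this yields `apps/worker` first, matching the explicit `deployOrder` shown in the schema example.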
## Stage rules

```ts
stages: {
  production: { protected: true },
  staging: { protected: true },
  "pr-*": { protected: false, ttl: "7d" },
}
```

- Protected stages require `--force` to destroy
- Unmatched stages default to protected (safe default)
- TTL is enforced by `wd gc`
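One way to picture stage resolution: try an exact name match, then `*` glob patterns like `"pr-*"`, and fall back to protected when nothing matches. The matching order and glob semantics here are assumptions for illustration, not wrangler-deploy's actual rules:

```typescript
interface StageRule {
  protected: boolean;
  ttl?: string;
}

function escapeRegExp(s: string): string {
  return s.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
}

// Exact stage name first, then "*" glob patterns, then the safe default.
function resolveStageRule(
  stage: string,
  rules: Record<string, StageRule>
): StageRule {
  if (stage in rules) return rules[stage];
  for (const [pattern, rule] of Object.entries(rules)) {
    if (!pattern.includes("*")) continue;
    const re = new RegExp("^" + pattern.split("*").map(escapeRegExp).join(".*") + "$");
    if (re.test(stage)) return rule;
  }
  return { protected: true }; // unmatched stages default to protected
}
```

Under these rules, `pr-42` picks up the unprotected 7-day-TTL rule, while a stage name with no matching entry stays protected.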
## Secrets

```ts
secrets: {
  "apps/api": ["AUTH_SECRET", "API_KEY"],
  "apps/worker": ["API_KEY"],
}
```

Declared by name only; values are never stored. Deploy blocks if any are missing.
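The deploy-time gate amounts to a set difference per worker: compare the declared names against the names actually present. A sketch of that check, with hypothetical input shapes; not wrangler-deploy's actual code:

```typescript
// declared: worker -> secret names the config requires.
// present:  worker -> secret names actually set (e.g. uploaded to Cloudflare).
// Returns only the workers that have gaps, so an empty result means deploy may proceed.
function missingSecrets(
  declared: Record<string, string[]>,
  present: Record<string, string[]>
): Record<string, string[]> {
  const missing: Record<string, string[]> = {};
  for (const [worker, names] of Object.entries(declared)) {
    const have = new Set(present[worker] ?? []);
    const gap = names.filter((name) => !have.has(name));
    if (gap.length > 0) missing[worker] = gap;
  }
  return missing;
}
```

Because only names are compared, the check never needs to read secret values.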
## Local dev

```ts
dev: {
  ports: {
    "workers/api": 9000,
  },
  args: ["--log-level", "debug"],
  queues: {
    "payment-outbox": {
      worker: "workers/api",
      path: "/__wd/queues/payment-outbox",
    },
  },
  companions: [
    {
      name: "dev:cron",
      command: "pnpm dev:cron",
      cwd: ".",
      workers: ["workers/api"],
    },
  ],
  endpoints: {
    health: {
      worker: "workers/api",
      path: "/health",
      method: "GET",
      description: "API health endpoint",
    },
  },
  d1: {
    "payments-db": {
      worker: "workers/api",
      seedFile: "sql/seed.sql",
      resetFile: "sql/reset.sql",
    },
  },
  session: {
    enabled: false,
    entryWorker: "workers/api",
    persistTo: ".wrangler/state",
    args: ["--local"],
  },
  snapshots: {
    paths: [".wrangler/state"],
  },
}
```

- `ports` sets per-worker `wd dev` port preferences
- `args` appends args to every spawned `wrangler dev` process
- `queues` declares local queue injection routes used by `wd queue send`
- `companions` starts local-only helper processes alongside `wd dev`
- `endpoints` names local HTTP routes so `wd worker call --endpoint` and `wd worker routes` can resolve them
- `d1` adds local workflow hints for `wd d1 exec`, `wd d1 seed`, and `wd d1 reset`
- `session.enabled` switches `wd dev` to one shared Wrangler session with repeated `-c` configs
- `session.entryWorker` picks which worker owns the primary localhost URL in session mode
- `session.persistTo` sets the default Miniflare state path for explicit session runs
- `snapshots.paths` adds extra local paths to include in `wd snapshot save`
Use CLI flags when you want session mode only for a particular run:
```sh
wd dev --session --persist-to .wrangler/state
```