
Config File

Put wrangler-deploy.config.ts at the project root.

This file sits alongside your existing Wrangler config. It does not replace wrangler.jsonc. Think of the split like this:

  • wrangler.jsonc: local dev and normal Wrangler config
  • wrangler-deploy.config.ts: project-level resource, stage, and binding model
  • wrangler.rendered.jsonc: generated deploy-time config with real IDs for a stage

You can also add a project defaults file, .wdrc or .wdrc.json, at the repo root or any parent directory. wd context uses it for defaults like stage name, dev port, account ID, and state password. It is optional, but helpful when you want agents or teammates to reuse the same settings without repeating flags.
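As a sketch, a .wdrc.json covering the defaults mentioned above might look like this. The exact key names are assumptions, not documented here; prefer wd context for reading and writing the file rather than guessing the schema:

```jsonc
{
  // Illustrative keys only; confirm the real schema with `wd context`
  "stage": "staging",
  "devPort": 8787,
  "accountId": "1234567890abcdef1234567890abcdef"
}
```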

The config also gives you inferred worker types through workerEnv(...) and typeof api.Env. That keeps the runtime config and the TypeScript Env shape in one place, with no generated files and no separate type generation step.
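A minimal sketch of that pattern, assuming workerEnv is exported from wrangler-deploy and accepts a worker directory from the workers list (the names come from the sentence above; the exact import and call shape may differ):

```ts
// Sketch only: `workerEnv` and its signature are assumptions.
import { workerEnv } from "wrangler-deploy";

const api = workerEnv("workers/api");

// `typeof api.Env` carries the bindings declared in wrangler-deploy.config.ts
// (e.g. DB and CACHE), so the worker types its env without a codegen step.
export default {
  async fetch(request: Request, env: typeof api.Env): Promise<Response> {
    return new Response("ok");
  },
};
```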

import { defineConfig } from "wrangler-deploy";

export default defineConfig({
  version: 1,
  workers: ["workers/api"],
  resources: {
    "payments-db": {
      type: "d1",
      bindings: {
        "workers/api": "DB",
      },
    },
    "cache-kv": {
      type: "kv",
      bindings: {
        "workers/api": "CACHE",
      },
    },
  },
});

With a worker config like this:

workers/api/wrangler.jsonc
{
  "name": "payments-api",
  "main": "src/index.ts",
  "d1_databases": [
    { "binding": "DB", "database_name": "placeholder", "database_id": "placeholder" }
  ],
  "kv_namespaces": [
    { "binding": "CACHE", "id": "placeholder" }
  ]
}

You keep the checked-in JSONC for development. wrangler-deploy reads it, renders a deploy-time config with real IDs, and uses that rendered file during wd deploy.
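After provisioning for a stage, the rendered file carries real IDs in place of the placeholders, something like this (all values illustrative):

```jsonc
// workers/api/wrangler.rendered.jsonc (generated; IDs shown are made up)
{
  "name": "payments-api",
  "main": "src/index.ts",
  "d1_databases": [
    { "binding": "DB", "database_name": "payments-db", "database_id": "bd0274ea-ea3b-4fd7-966d-ee55d6ce9947" }
  ],
  "kv_namespaces": [
    { "binding": "CACHE", "id": "0f2ac74b498b48028cb68387c421e279" }
  ]
}
```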

The full set of top-level options:

import { defineConfig } from "wrangler-deploy";

export default defineConfig({
  version: 1,                               // required, must be 1
  workers: ["apps/api", "apps/worker"],     // worker directories ("." for root)
  deployOrder: ["apps/worker", "apps/api"], // optional, inferred from serviceBindings
  resources: { /* ... */ },       // KV, Queue, D1, Hyperdrive, R2
  serviceBindings: { /* ... */ }, // cross-worker bindings
  stages: { /* ... */ },          // protection and TTL rules
  secrets: { /* ... */ },         // declared secret names per worker
  routes: { /* ... */ },          // stage-aware URL patterns
  verify: { /* ... */ },          // post-deploy health checks
  verifyLocal: { /* ... */ },     // config-driven local runtime checks
  guard: { /* ... */ },           // workers usage guard integration
  state: { /* ... */ },           // remote state backend (KV)
  dev: { /* ... */ },             // local multi-worker dev config
});

Every worker listed in workers should point at a directory that already contains wrangler.jsonc or wrangler.json.

If you need a quick way to inspect or update project defaults, use the CLI instead of hand-editing JSON:

wd context
wd context get stage
wd context set --stage staging --account-id 1234567890abcdef1234567890abcdef
wd context unset --account-id
wd context clear

verifyLocal powers wd verify local. It is intentionally separate from deploy-time verify: this block is for local runtime workflows against wd dev.

verifyLocal: {
  checks: [
    {
      type: "worker",
      name: "api health",
      worker: "workers/api",
      endpoint: "health",
      expectStatus: 200,
      expectBodyIncludes: ['"ok":true'],
    },
    { type: "d1Reset", database: "payments-db" },
    { type: "d1Seed", database: "payments-db" },
    {
      type: "d1",
      database: "payments-db",
      sql: "SELECT COUNT(*) AS batch_count FROM batches;",
      expectTextIncludes: ['"batch_count": 1'],
    },
    {
      type: "queue",
      queue: "payment-outbox",
      payload: JSON.stringify({ type: "batch.dispatched" }),
      expectStatus: 200,
      expectBodyIncludes: ['"queued":true'],
    },
    {
      type: "cron",
      worker: "workers/batch-workflow",
      expectStatus: 200,
      expectBodyIncludes: ["ok"],
    },
  ],
}

Supported check types:

  • worker
  • cron
  • queue
  • d1
  • d1Seed
  • d1Reset

worker, queue, and d1 checks can reference shared fixtures with fixture: "<name>". That lets wd verify local, wd dev ui, and the CLI reuse the same local payloads and SQL instead of duplicating them in multiple places.

verifyLocal also supports named packs:

verifyLocal: {
  checks: [...],
  packs: {
    smoke: {
      description: "Fast local smoke test for CI",
      checks: [
        {
          type: "worker",
          worker: "workers/api",
          fixture: "api-health",
          expectStatus: 200,
          expectJsonIncludes: { ok: true },
        },
      ],
    },
  },
}

Use packs when the repo needs a fast smoke run and a stricter regression run without copying the same fixtures into multiple scripts.

guard connects wrangler-deploy to a deployed workers-usage-guard Worker.

guard: {
  endpoint: "https://workers-usage-guard.example.workers.dev", // set by `wd guard init`
  databaseId: "bd0274ea-ea3b-4fd7-966d-ee55d6ce9947",          // set by `wd guard init`
  accounts: [
    {
      accountId: "1234567890abcdef1234567890abcdef",
      billingCycleDay: 1,
      workers: [
        {
          scriptName: "payment-api",
          thresholds: { requests: 500_000, cpuMs: 5_000_000 },
          forecast: true,
          forecastLookaheadSeconds: 600,
        },
      ],
      globalProtected: [],
    },
  ],
}
  • guard.endpoint and guard.databaseId are printed by wd guard init; copy them from the output.
  • guard.accounts powers wd guard status (GraphQL usage fetch path).
  • guard.endpoint enables signed API commands: breaches, report, disarm, arm, approvals, approve, reject.
  • guard.databaseId is used by wd guard deploy and wd guard migrate to target the correct D1.
  • Signed commands require WRANGLER_DEPLOY_GUARD_SIGNING_KEY at runtime.
  • Status requires CLOUDFLARE_API_TOKEN.
fixtures defines the shared payloads and SQL that checks and CLI commands reference by name:

fixtures: {
  "api-health": {
    type: "worker",
    worker: "workers/api",
    endpoint: "health",
    description: "Shared health check used by verify and dev ui",
  },
  "payment-outbox-dispatch": {
    type: "queue",
    queue: "payment-outbox",
    worker: "workers/api",
    payload: JSON.stringify({ type: "batch.dispatched", data: { batchId: "fixture-test" } }),
  },
  "payments-batch-count": {
    type: "d1",
    database: "payments-db",
    worker: "workers/api",
    sql: "SELECT COUNT(*) AS batch_count FROM batches;",
  },
}

Fixture types:

  • worker for reusable wd worker call inputs
  • queue for reusable wd queue send payloads
  • d1 for reusable wd d1 exec SQL or file-based commands

Fixtures are optional. Keep using raw wrangler.jsonc and direct Wrangler commands if that is enough for your repo. Add fixtures when repeated local workflows start drifting across scripts, docs, and tests.

Supported resource types:

Type           Key          Provisioned via
D1 Database    d1           wrangler d1 create
KV Namespace   kv           wrangler kv namespace create
Queue          queue        wrangler queues create
Hyperdrive     hyperdrive   wrangler hyperdrive create
R2 Bucket      r2           wrangler r2 bucket create
Vectorize      vectorize    Cloudflare Vectorize API
state selects where wrangler-deploy keeps its deployment state:

// Local state on disk (default, omit or set explicitly)
state: { backend: "local" }

// Remote KV state shared across your team and CI
state: {
  backend: "kv",
  namespaceId: "your-kv-namespace-id",
  keyPrefix: "wrangler-deploy/", // optional, default shown
}

See Remote State for setup and migration.

deployOrder is optional. If you omit it, wrangler-deploy infers order from serviceBindings. Workers that are depended on deploy first. The graph is validated:

  • Cycles are rejected with a full cycle path
  • Unknown targets are rejected
  • Explicit deployOrder is validated against service binding dependencies
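This page does not show the serviceBindings shape. As a hedged sketch, a consumer-to-target mapping like the following would make apps/worker deploy before apps/api, matching the explicit deployOrder example above (the binding name and nesting are assumptions):

```ts
serviceBindings: {
  // apps/api calls into apps/worker, so apps/worker must deploy first
  "apps/api": {
    "WORKER": "apps/worker",
  },
},
```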
stages controls which stages are protected and how long ephemeral stages live:

stages: {
  production: { protected: true },
  staging: { protected: true },
  "pr-*": { protected: false, ttl: "7d" },
}
  • Protected stages require --force to destroy
  • Unmatched stages default to protected (safe default)
  • TTL is enforced by wd gc
secrets declares the secret names each worker requires:

secrets: {
  "apps/api": ["AUTH_SECRET", "API_KEY"],
  "apps/worker": ["API_KEY"],
}

Secrets are declared by name only; values are never stored in the config or in state. Deploys are blocked if any declared secret is missing.
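Secret values are supplied out of band. With plain Wrangler that looks like this, assuming the worker's script name is payments-api as in the earlier example (the exact workflow wrangler-deploy expects may differ):

```shell
# Values come from your shell or CI secret store, never from config
wrangler secret put AUTH_SECRET --name payments-api
wrangler secret put API_KEY --name payments-api
```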

dev configures local multi-worker development:

dev: {
  ports: {
    "workers/api": 9000,
  },
  args: ["--log-level", "debug"],
  queues: {
    "payment-outbox": {
      worker: "workers/api",
      path: "/__wd/queues/payment-outbox",
    },
  },
  companions: [
    {
      name: "dev:cron",
      command: "pnpm dev:cron",
      cwd: ".",
      workers: ["workers/api"],
    },
  ],
  endpoints: {
    health: {
      worker: "workers/api",
      path: "/health",
      method: "GET",
      description: "API health endpoint",
    },
  },
  d1: {
    "payments-db": {
      worker: "workers/api",
      seedFile: "sql/seed.sql",
      resetFile: "sql/reset.sql",
    },
  },
  session: {
    enabled: false,
    entryWorker: "workers/api",
    persistTo: ".wrangler/state",
    args: ["--local"],
  },
  snapshots: {
    paths: [".wrangler/state"],
  },
}
  • ports sets per-worker wd dev port preferences
  • args appends args to every spawned wrangler dev process
  • queues declares local queue injection routes used by wd queue send
  • companions starts local-only helper processes alongside wd dev
  • endpoints names local HTTP routes so wd worker call --endpoint and wd worker routes can resolve them
  • d1 adds local workflow hints for wd d1 exec, wd d1 seed, and wd d1 reset
  • session.enabled switches wd dev to one shared Wrangler session with repeated -c configs
  • session.entryWorker picks which worker owns the primary localhost URL in session mode
  • session.persistTo sets the default Miniflare state path for explicit session runs
  • snapshots.paths adds extra local paths to include in wd snapshot save

Use CLI flags when you want session mode only for a particular run:

wd dev --session --persist-to .wrangler/state