# Quick Start
If your project already has `wrangler.jsonc` files, this is the path. You do not rewrite them, migrate them, or stop using them. wrangler-deploy reads them, generates one project-level config, and uses rendered JSONC files only when it is time to deploy or run stage-aware local dev.
At the end of this guide you will have:
- Your existing `wrangler.jsonc` files untouched
- A generated `wrangler-deploy.config.ts`
- A stage you can plan, apply, deploy, and destroy
- The option to run all workers locally with `wd dev`
- The option to run `wd dev --stage <stage>` against rendered stage bindings
- The option to switch Queue-heavy local development into a shared Wrangler session
- If you are starting from scratch, a ready-made Vite starter via `wd create vite`
## Install

```sh
npm install -D wrangler-deploy
# or
pnpm add -D wrangler-deploy
```
## Start fresh

If you are starting a new project, scaffold the starter first:

```sh
$ wd create vite my-app
Created vite starter in /repo/my-app
```

Then install dependencies and start the local frontend plus worker pair:

```sh
cd my-app
pnpm install
pnpm dev
```
## Initialize

Run this in any project that already has `wrangler.jsonc` or `wrangler.json` files:

```sh
$ wd init
✓ Found workers/api/wrangler.jsonc
✓ Found workers/batch-workflow/wrangler.jsonc
✓ Found workers/event-router/wrangler.jsonc
✓ Generated wrangler-deploy.config.ts
```

`wd init` is built around existing Wrangler projects. It scans your `wrangler.jsonc` files and generates a matching `wrangler-deploy.config.ts`. Your Wrangler files stay untouched.
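For orientation, the generated project-level file might look roughly like the sketch below. The real schema of `wrangler-deploy.config.ts` is not shown in this guide, so every field name here is an assumption, not the documented shape:

```ts
// Hypothetical sketch of a generated wrangler-deploy.config.ts.
// Field names and structure are illustrative assumptions only.
export default {
  // Workers discovered by scanning for wrangler.jsonc files
  workers: ["workers/api", "workers/batch-workflow", "workers/event-router"],
  // Stages you can plan, apply, deploy, and destroy against
  stages: ["staging", "production"],
};
```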
That is the trust boundary: `wrangler.jsonc` is still the thing you author and keep in the repo. wrangler-deploy adds stage orchestration around it instead of replacing it.
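Concretely, a checked-in worker config stays in ordinary Wrangler shape. A minimal illustrative example (names, bindings, and dates invented for this guide):

```jsonc
// workers/api/wrangler.jsonc — authored by hand, never rewritten by wrangler-deploy
{
  "name": "payment-api",
  "main": "src/index.ts",
  "compatibility_date": "2024-09-01",
  "d1_databases": [
    { "binding": "DB", "database_name": "payments-db", "database_id": "placeholder" }
  ],
  "kv_namespaces": [
    { "binding": "TOKENS", "id": "placeholder" }
  ],
  "queues": {
    "producers": [{ "binding": "OUTBOX", "queue": "payment-outbox" }]
  }
}
```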
See what would be created before touching anything:
```sh
$ wd plan --stage staging
+ payments-db-staging (d1) create
+ token-kv-staging (kv) create
+ cache-kv-staging (kv) create
+ payment-outbox-staging (queue) create
+ payment-outbox-dlq-staging (queue) create

5 to create, 0 in sync, 0 drifted, 0 orphaned
```

Provision the resources in Cloudflare and record their IDs in state:
```sh
$ wd apply --stage staging
+ payments-db-staging (d1) created
+ token-kv-staging (kv) created
+ cache-kv-staging (kv) created
+ payment-outbox-staging (queue) created
+ payment-outbox-dlq-staging (queue) created

✓ 5 resources applied
```

This is the step where wrangler-deploy turns your stage name into real infrastructure. It also renders per-worker `wrangler.rendered.jsonc` files with the correct IDs and stage-specific names.
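The render step can be pictured as name suffixing plus ID injection from state. The sketch below is a simplified illustration, not wrangler-deploy's actual renderer or state format, and the ID shown is a made-up placeholder:

```typescript
// Sketch: render one D1 binding for a stage by suffixing the name
// and filling in the ID that `wd apply` recorded in state.
interface D1Binding {
  binding: string;
  database_name: string;
  database_id?: string;
}

function renderD1(
  authored: D1Binding,
  stage: string,
  state: Record<string, string>, // resource name -> ID recorded by apply
): D1Binding {
  const database_name = `${authored.database_name}-${stage}`;
  const database_id = state[database_name];
  if (!database_id) {
    throw new Error(`no state for ${database_name}; run wd apply first`);
  }
  return { ...authored, database_name, database_id };
}

const rendered = renderD1(
  { binding: "DB", database_name: "payments-db" },
  "staging",
  { "payments-db-staging": "3f9c2a1e-1111-2222-3333-444455556666" }, // illustrative ID
);
console.log(rendered.database_name); // payments-db-staging
```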
## Deploy

Deploy workers using those rendered configs:

```sh
$ wd deploy --stage staging
Deploying workers/batch-workflow...
✓ payment-batch-workflow-staging deployed
Deploying workers/event-router...
✓ payment-event-router-staging deployed
Deploying workers/api...
✓ payment-api-staging deployed
```

Workers deploy in dependency order, so service-binding targets go first. You still get Wrangler underneath: wrangler-deploy decides order and wiring, then calls `wrangler deploy` with the rendered JSONC for that stage.
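"Dependency order" here is just a topological sort over service bindings. A minimal sketch, not wrangler-deploy's actual code; the graph below assumes `workers/api` service-binds to the other two workers:

```typescript
// Each worker lists the workers it binds to via service bindings.
// Deploy targets before the workers that call them.
type Graph = Record<string, string[]>;

function deployOrder(bindings: Graph): string[] {
  const order: string[] = [];
  const seen = new Set<string>();

  const visit = (worker: string, path: Set<string>): void => {
    if (seen.has(worker)) return;
    if (path.has(worker)) throw new Error(`service-binding cycle at ${worker}`);
    path.add(worker);
    for (const target of bindings[worker] ?? []) visit(target, path);
    path.delete(worker);
    seen.add(worker);
    order.push(worker); // all of this worker's targets are already in the list
  };

  for (const worker of Object.keys(bindings)) visit(worker, new Set());
  return order;
}

const order = deployOrder({
  "workers/api": ["workers/batch-workflow", "workers/event-router"],
  "workers/batch-workflow": [],
  "workers/event-router": [],
});
console.log(order);
// ["workers/batch-workflow", "workers/event-router", "workers/api"]
```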
## Destroy

Tear everything down when you’re done:

```sh
$ wd destroy --stage staging --force
Removing queue consumers...
Deleting workers...
Deleting resources...
✓ Stage "staging" destroyed
```
## Local dev

Local development is still a Wrangler workflow. Your checked-in `wrangler.jsonc` files have not changed, so `wrangler dev` still works exactly the way it did before:

```sh
$ wrangler dev
```

If you have multiple workers, `wd dev` starts them together and handles port assignment:
```sh
$ wd dev
Starting dev servers:
  workers/batch-workflow -> http://localhost:8787
  workers/event-router -> http://localhost:8788
  workers/api -> http://localhost:8789
```

If you want Wrangler’s shared local Queue session, switch it on explicitly:
```sh
$ wd dev --session --persist-to .wrangler/state
Starting local dev session:
  workers/api -> http://localhost:8787
  includes: workers/batch-workflow, workers/event-router
```

If you already applied a stage, `wd dev --stage staging` uses the rendered stage configs directly, so local bindings match deploy-time state instead of the raw placeholders in checked-in `wrangler.jsonc` files.
For local runtime workflows on top of that:
```sh
$ wd dev doctor
$ wd dev ui --port 8899
$ wd fixture list
$ wd snapshot save local-baseline
$ wd snapshot load local-baseline
$ wd logs workers/api --once
$ wd worker routes workers/api
$ wd worker call workers/api --path /health
$ wd worker call --fixture echo-ping
$ wd d1 list
$ wd d1 exec payments-db --sql 'SELECT COUNT(*) FROM batches;'
$ wd d1 exec --fixture payments-batch-count
$ wd queue list
$ wd queue send payment-outbox --json '{"type":"batch.dispatched"}'
$ wd queue send --fixture payment-outbox-dispatch
$ wd queue send payment-outbox --json '{"type":"batch.dispatched"}' --watch --count 10 --every 5s
$ wd queue replay payment-outbox --file fixtures/payment-outbox.json
$ wd queue tail payment-outbox --once
$ wd cron trigger workers/batch-workflow --port 8787
$ wd verify local
$ wd verify local --pack smoke
```

That is the core model of the tool:
- Keep `wrangler.jsonc` for development.
- Add `wd dev --session` when you need one shared Miniflare environment for Queues.
- Generate stage-aware rendered JSONC for deploys.
- Use one project-level config to connect the two.