Quick Start

If your project already has wrangler.jsonc files, this is the path. You do not rewrite them, migrate them, or stop using them. wrangler-deploy reads them, generates one project-level config, and uses rendered JSONC files only when it is time to deploy or run stage-aware local dev.

At the end of this guide you will have:

  • Your existing wrangler.jsonc files untouched
  • A generated wrangler-deploy.config.ts
  • A stage you can plan, apply, deploy, and destroy
  • The option to run all workers locally with wd dev
  • The option to run wd dev --stage <stage> against rendered stage bindings
  • The option to switch Queue-heavy local development into a shared Wrangler session
  • If you are starting from scratch, a ready-made Vite starter via wd create vite
Install the CLI:

Terminal window
npm install -D wrangler-deploy
# or
pnpm add -D wrangler-deploy

If you are starting a new project, scaffold the starter first:

Terminal window
$ wd create vite my-app
Created vite starter in /repo/my-app

Then install dependencies and start the local frontend plus worker pair:

Terminal window
cd my-app
pnpm install
pnpm dev

Run this in any project that already has wrangler.jsonc or wrangler.json files:

Terminal window
$ wd init
Found workers/api/wrangler.jsonc
Found workers/batch-workflow/wrangler.jsonc
Found workers/event-router/wrangler.jsonc
Generated wrangler-deploy.config.ts

wd init is built around existing Wrangler projects. It scans your wrangler.jsonc files and generates a matching wrangler-deploy.config.ts. Your Wrangler files stay untouched.

That is the trust boundary: wrangler.jsonc is still the thing you author and keep in the repo. wrangler-deploy adds stage orchestration around it instead of replacing it.
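The exact schema of the generated file is not shown in this guide, so the following TypeScript is only an illustrative sketch of what one project-level config could contain. The worker paths and resource names come from the plan output later on this page; every field name here is an assumption, not the real `wd init` output.

```typescript
// wrangler-deploy.config.ts -- illustrative sketch only; the real file
// is generated by `wd init` and its schema may differ.
export default {
  // Workers discovered from existing wrangler.jsonc files.
  workers: {
    "workers/api": {},
    "workers/batch-workflow": {},
    "workers/event-router": {},
  },
  // Stage-suffixed resources that `wd apply` will provision.
  resources: {
    "payments-db": { type: "d1" },
    "token-kv": { type: "kv" },
    "cache-kv": { type: "kv" },
    "payment-outbox": { type: "queue", dlq: "payment-outbox-dlq" },
  },
};
```

The point of the single file is the trust boundary described above: stage orchestration lives here, while each worker's own wrangler.jsonc stays the authored source of truth.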

See what would be created before touching anything:

Terminal window
$ wd plan --stage staging
+ payments-db-staging (d1) create
+ token-kv-staging (kv) create
+ cache-kv-staging (kv) create
+ payment-outbox-staging (queue) create
+ payment-outbox-dlq-staging (queue) create
5 to create, 0 in sync, 0 drifted, 0 orphaned

Provision the resources in Cloudflare and record their IDs in state:

Terminal window
$ wd apply --stage staging
+ payments-db-staging (d1) created
+ token-kv-staging (kv) created
+ cache-kv-staging (kv) created
+ payment-outbox-staging (queue) created
+ payment-outbox-dlq-staging (queue) created
5 resources applied

This is the step where wrangler-deploy turns your stage name into real infrastructure. It also renders per-worker wrangler.rendered.jsonc files with the correct IDs and stage-specific names.
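As a sketch of what a rendered file might look like, here is a hypothetical `workers/api/wrangler.rendered.jsonc` for the staging stage. The binding names (`DB`, `OUTBOX`) are assumptions; the resource names match the apply output above, and the ID placeholder stands in for whatever `wd apply` records in state.

```jsonc
// workers/api/wrangler.rendered.jsonc -- illustrative, not real tool output.
{
  // Stage-suffixed worker name, as seen in the deploy output below.
  "name": "payment-api-staging",
  "d1_databases": [
    {
      "binding": "DB",
      "database_name": "payments-db-staging",
      // Filled in with the real ID recorded during `wd apply`.
      "database_id": "<id-from-apply>"
    }
  ],
  "queues": {
    "producers": [
      { "binding": "OUTBOX", "queue": "payment-outbox-staging" }
    ]
  }
}
```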

Deploy workers using those rendered configs:

Terminal window
$ wd deploy --stage staging
Deploying workers/batch-workflow...
payment-batch-workflow-staging deployed
Deploying workers/event-router...
payment-event-router-staging deployed
Deploying workers/api...
payment-api-staging deployed

Workers deploy in dependency order, so service-binding targets go first. You still get Wrangler underneath: wrangler-deploy decides order and wiring, then calls wrangler deploy with the rendered JSONC for that stage.
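Dependency order matters because a service binding points at a worker that must already be deployed. Assuming (hypothetically) that the api worker binds to the event router, its checked-in config might contain something like:

```jsonc
// workers/api/wrangler.jsonc -- the binding name "EVENTS" is illustrative.
{
  "services": [
    // api calls event-router, so event-router must deploy first.
    { "binding": "EVENTS", "service": "payment-event-router" }
  ]
}
```

This is why the deploy output above shows batch-workflow and event-router finishing before api.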

Tear everything down when you’re done:

Terminal window
$ wd destroy --stage staging --force
Removing queue consumers...
Deleting workers...
Deleting resources...
Stage "staging" destroyed

Local development is still a Wrangler workflow. Your checked-in wrangler.jsonc files have not changed, so wrangler dev still works exactly the way it did before:

Terminal window
$ wrangler dev

If you have multiple workers, wd dev starts them together and handles port assignment:

Terminal window
$ wd dev
Starting dev servers:
workers/batch-workflow -> http://localhost:8787
workers/event-router -> http://localhost:8788
workers/api -> http://localhost:8789

If you want Wrangler’s shared local Queue session, switch it on explicitly:

Terminal window
$ wd dev --session --persist-to .wrangler/state
Starting local dev session:
workers/api -> http://localhost:8787
includes: workers/batch-workflow, workers/event-router

If you have already applied a stage, wd dev --stage staging uses the rendered stage configs directly, so local bindings match deploy-time state instead of the raw placeholders in your checked-in wrangler.jsonc files.

For local runtime workflows on top of that:

Terminal window
# Diagnostics and dev UI
$ wd dev doctor
$ wd dev ui --port 8899

# Fixtures and snapshots
$ wd fixture list
$ wd snapshot save local-baseline
$ wd snapshot load local-baseline

# Workers
$ wd logs workers/api --once
$ wd worker routes workers/api
$ wd worker call workers/api --path /health
$ wd worker call --fixture echo-ping

# D1
$ wd d1 list
$ wd d1 exec payments-db --sql 'SELECT COUNT(*) FROM batches;'
$ wd d1 exec --fixture payments-batch-count

# Queues
$ wd queue list
$ wd queue send payment-outbox --json '{"type":"batch.dispatched"}'
$ wd queue send --fixture payment-outbox-dispatch
$ wd queue send payment-outbox --json '{"type":"batch.dispatched"}' --watch --count 10 --every 5s
$ wd queue replay payment-outbox --file fixtures/payment-outbox.json
$ wd queue tail payment-outbox --once

# Cron and verification
$ wd cron trigger workers/batch-workflow --port 8787
$ wd verify local
$ wd verify local --pack smoke

That is the core model of the tool:

  • Keep wrangler.jsonc for development.
  • Add wd dev --session when you need one shared Miniflare environment for Queues.
  • Generate stage-aware rendered JSONC for deploys.
  • Use one project-level config to connect the two.