Local Dev
If you have more than one worker, you’ve probably dealt with opening three terminals, picking ports that don’t collide, and then watching two wrangler dev processes fight over inspector port 9229. wd dev handles all of that.
Wrangler already gives you the local runtime primitives. wrangler.jsonc is still the right local-dev file. The benefit here is that wrangler-deploy makes those primitives repo-aware. You ask for payment-outbox or workers/batch-workflow, not “the worker on port 8788 with the queue debug route we added last month”.
```sh
$ wd dev
Starting dev servers:
  workers/batch-workflow -> http://localhost:8787
  workers/event-router -> http://localhost:8788
  workers/api -> http://localhost:8789
```

Workers start in dependency order and each one runs wrangler dev with its own port.
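Dependency-ordered startup amounts to a topological sort over service-binding edges. A minimal sketch of that idea, where the graph shape and helper name are illustrative rather than wrangler-deploy's actual internals:

```ts
// Sketch: derive a start order from service-binding edges (Kahn-style).
// Graph shape and function name are hypothetical, for illustration only.
type Graph = Record<string, string[]>; // worker -> workers it binds to

function startOrder(graph: Graph): string[] {
  const workers = Object.keys(graph);
  const started = new Set<string>();
  const order: string[] = [];
  while (order.length < workers.length) {
    // A worker is ready once every worker it binds to has started.
    const ready = workers.filter(
      (w) => !started.has(w) && graph[w].every((dep) => started.has(dep))
    );
    if (ready.length === 0) throw new Error("service-binding cycle detected");
    for (const w of ready) {
      started.add(w);
      order.push(w);
    }
  }
  return order;
}

// api binds to the other two workers, so it starts last.
const order = startOrder({
  "workers/batch-workflow": [],
  "workers/event-router": [],
  "workers/api": ["workers/batch-workflow", "workers/event-router"],
});
```

This is why `workers/api` lands on the highest port in the transcript above: its dependencies boot first.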
Queue session mode
For Queue-heavy local development, wd dev can also run a single Wrangler local session with repeated -c configs and shared Miniflare state. This mirrors Cloudflare’s documented wrangler dev -c ... -c ... --persist-to ... flow while still deriving the worker list from your wrangler-deploy.config.ts.
```sh
$ wd dev --session --persist-to .wrangler/state
Starting local dev session:
  workers/api -> http://localhost:8787
  includes: workers/batch-workflow, workers/event-router
```

Set dev.session.entryWorker when the worker you want exposed on localhost is not the last worker in dependency order.
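The `-c` and `--persist-to` flags are Wrangler's own; the session mode essentially builds that argv for you. A hedged sketch of how the expansion might look, with a hypothetical helper name, assuming the entry worker's config must come first so it owns the localhost port:

```ts
// Sketch: expand a session into Wrangler's documented multi-config form,
//   wrangler dev -c <entry> -c <other> ... --persist-to <dir>
// Helper name and config-path convention are illustrative.
function sessionArgs(
  entryWorker: string,
  workers: string[],
  persistTo: string
): string[] {
  // Put the entry worker's config first; the remaining configs run
  // alongside it in the same local session with shared state.
  const ordered = [entryWorker, ...workers.filter((w) => w !== entryWorker)];
  const args = ["dev"];
  for (const w of ordered) args.push("-c", `${w}/wrangler.jsonc`);
  args.push("--persist-to", persistTo);
  return args;
}

const argv = sessionArgs(
  "workers/api",
  ["workers/batch-workflow", "workers/event-router", "workers/api"],
  ".wrangler/state"
);
// e.g. spawn("wrangler", argv)
```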
Stage-aware dev
If you have already run wd apply --stage staging, you can point local dev at the rendered stage bindings directly:
```sh
$ wd dev --stage staging --session --persist-to .wrangler/state
Starting local dev session:
  workers/api -> http://localhost:8787
  includes: workers/batch-workflow, workers/event-router
```

This keeps local bindings aligned with deployed resources and worker names. It is especially useful when you want wd dev to behave like the exact stage you already applied, rather than the raw placeholders in the checked-in configs.
Port resolution
Before spawning anything, wrangler-deploy probes for free ports. If 8787 or 9229 is already taken by another process, it picks the next one. You never have to think about inspector port collisions when running multiple workers.
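The assignment logic is simple to picture. A minimal sketch of collision-free port selection, with hypothetical function names; real probing would attempt to bind a socket rather than consult an in-memory set:

```ts
// Sketch: pick the next free port at or above a preferred one. The taken
// set stands in for actual socket probing so the logic stays visible.
function nextFreePort(preferred: number, taken: Set<number>): number {
  let port = preferred;
  while (taken.has(port)) port += 1;
  return port;
}

// Give each worker a dev port and an inspector port without collisions.
function assignPorts(
  workers: string[],
  basePort = 8787,
  baseInspector = 9229
): Map<string, { port: number; inspector: number }> {
  const taken = new Set<number>();
  const assigned = new Map<string, { port: number; inspector: number }>();
  for (const w of workers) {
    const port = nextFreePort(basePort, taken);
    taken.add(port);
    const inspector = nextFreePort(baseInspector, taken);
    taken.add(inspector);
    assigned.set(w, { port, inspector });
  }
  return assigned;
}
```

Each worker claims both ports before the next one is probed, which is what keeps three concurrent wrangler dev processes off inspector port 9229 at once.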
```sh
# Something else is on 8787? No problem.
$ wd dev --port 9000
Starting dev servers:
  workers/batch-workflow -> http://localhost:9000
  workers/event-router -> http://localhost:9001
  workers/api -> http://localhost:9002
```

Filtering
In a large monorepo you probably don’t need every worker running. --filter starts only the target worker and its transitive service-binding dependencies.
```sh
$ wd dev --filter workers/api
Starting dev servers:
  workers/batch-workflow -> http://localhost:8787
  workers/api -> http://localhost:8788
```

batch-workflow is included because api has a WORKFLOWS service binding to it. event-router is skipped because api doesn’t depend on it.
If the filter doesn’t match any configured worker, the command fails immediately with a list of valid worker paths.
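The selection described above is a transitive closure over the binding graph, with a fail-fast check for unknown workers. A hypothetical sketch, assuming an illustrative graph shape:

```ts
// Sketch: the set of workers --filter actually starts is the target plus
// its transitive service-binding dependencies (a depth-first walk).
type Bindings = Record<string, string[]>; // worker -> service-binding targets

function workersToStart(target: string, bindings: Bindings): string[] {
  if (!(target in bindings)) {
    // Fail immediately, listing the valid worker paths.
    throw new Error(
      `unknown worker "${target}"; valid: ${Object.keys(bindings).join(", ")}`
    );
  }
  const seen = new Set<string>();
  const visit = (w: string) => {
    if (seen.has(w)) return;
    seen.add(w);
    for (const dep of bindings[w]) visit(dep);
  };
  visit(target);
  return [...seen];
}

const toStart = workersToStart("workers/api", {
  "workers/api": ["workers/batch-workflow"], // WORKFLOWS service binding
  "workers/batch-workflow": [],
  "workers/event-router": [], // not a dependency of api, so skipped
});
```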
Single-worker read mode
For large monorepos, starting all workers locally can be slow. --fallback-stage runs a single worker while falling back to real Cloudflare resources from a deployed stage:
```sh
$ wd dev --filter workers/api --fallback-stage staging
Starting dev servers:
  workers/api -> http://localhost:8787
    -> falls back to staging/cloud-api via service binding
```

The filtered worker runs locally on its port while its service bindings point to the real workers in the fallback stage. This is useful when:
- You only need to modify one worker
- Other workers are already deployed
- Full local startup is too slow
Service bindings from the target worker automatically connect to their deployed counterparts in the fallback stage, not to local workers that aren’t running.
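That resolution rule can be sketched as a per-binding decision: local URL if the worker is running locally, deployed stage worker otherwise. Types, names, and the stage-naming scheme below are illustrative assumptions, not wrangler-deploy's real API:

```ts
// Sketch: resolve each service binding to either a local dev URL or a
// deployed worker in the fallback stage. Shapes are hypothetical.
type BindingTarget =
  | { kind: "local"; url: string }
  | { kind: "stage"; worker: string };

function resolveBinding(
  dep: string,
  localPorts: Map<string, number>, // workers actually running locally
  fallbackStage: string
): BindingTarget {
  const port = localPorts.get(dep);
  if (port !== undefined) {
    return { kind: "local", url: `http://localhost:${port}` };
  }
  // Not running locally: point the binding at the deployed stage worker.
  // The "<stage>/<worker>" naming here is illustrative.
  return { kind: "stage", worker: `${fallbackStage}/${dep}` };
}

// Only workers/api runs locally; its dependency falls back to staging.
const target = resolveBinding(
  "workers/batch-workflow",
  new Map([["workers/api", 8787]]),
  "staging"
);
```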
Hot reload
Each wrangler dev process watches its own files. Save a change, the worker reloads. This is the same wrangler hot reload you already use. wrangler-deploy just starts the processes for you.
```sh
[api ] ⎔ Reloading local server...
[api ] ⎔ Local server updated and ready
```

Custom wrangler args
Dev options can be set in the config:
```ts
export default defineConfig({
  workers: ["workers/api", "workers/batch"],
  dev: {
    ports: {
      "workers/api": 9000,
    },
    args: ["--log-level", "debug"],
    session: {
      enabled: true,
      entryWorker: "workers/api",
      persistTo: ".wrangler/state",
    },
    companions: [
      {
        name: "dev:cron",
        command: "pnpm dev:cron",
        cwd: ".",
        workers: ["workers/api"],
      },
    ],
  },
  resources: {},
});
```

companions are for local-only helpers like recovery loops, seeders, or queue replay scripts. wd dev starts them after Wrangler boots and stops them on shutdown.
Runtime helpers
wd dev now has a few companion commands for local runtime workflows:
```sh
$ wd dev doctor
$ wd dev ui --port 8899
$ wd snapshot save local-baseline
$ wd snapshot load local-baseline
$ wd logs workers/api --once
$ wd worker routes workers/api
$ wd worker call workers/api --path /health
$ wd worker call --fixture echo-ping
$ wd fixture list
$ wd d1 list
$ wd d1 exec payments-db --sql 'SELECT COUNT(*) FROM batches;'
$ wd d1 exec --fixture payments-batch-count
$ wd cron trigger workers/batch-workflow --port 8787
$ wd cron loop workers/batch-workflow --port 8787 --every 5s
$ wd queue list
$ wd queue send payment-outbox --json '{"type":"batch.dispatched"}'
$ wd queue send --fixture payment-outbox-dispatch
$ wd queue send payment-outbox --json '{"type":"batch.dispatched"}' --watch --every 5s
$ wd queue replay payment-outbox --file fixtures/payment-outbox.json
$ wd queue tail payment-outbox --once
$ wd verify local --pack smoke
```

You could build many of these flows manually with raw Wrangler plus custom scripts. The DX gain from wrangler-deploy is that the commands resolve workers, ports, queue producers, and local routes from wrangler-deploy.config.ts instead of making every developer rediscover that wiring.
- `wd dev doctor` catches local-only setup issues before startup
- `wd dev ui` gives you a small live control plane for workers, queues, D1, verify packs, snapshots, logs, project defaults, and command metadata
- `wd fixture list` shows the shared local payloads and SQL used by the repo
- `wd snapshot save` and `wd snapshot load` make local state reproducible instead of disposable
- `wd logs` tails persisted worker logs from the active dev runtime
- `wd worker routes` shows current local URLs and named endpoints
- `wd worker call` resolves the current local port for a worker before making an HTTP request, and can reuse shared fixtures
- `wd d1 list`, `wd d1 exec`, `wd d1 seed`, and `wd d1 reset` make local D1 workflows logical-name driven
- `wd cron trigger` and `wd cron loop` hit Cloudflare’s documented local scheduled route
- `wd queue list` and `wd queue inspect` show queue producer/consumer topology from config
- `wd queue send` posts payloads through a configured producer worker route and can reuse shared fixtures
- `wd queue send --watch` repeats the same local injection on an interval
- `wd queue replay` replays JSON-array fixtures through the same route
- `wd queue tail` reads queue-related lines from persisted dev logs
- `wd verify local --pack` lets teams split quick CI smoke runs from deeper regression packs
The practical win is consistency. The same fixture can drive wd worker call, wd queue send, wd d1 exec, wd verify local, and buttons inside wd dev ui. Snapshot save/load adds reproducible state on top of that, and the UI now keeps replayable local actions so you can rerun a known-good workflow without rebuilding the command by hand. None of that replaces wrangler.jsonc; it removes the repo-memory tax around it.
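The fixture-sharing idea is just name-keyed payload lookup. A trivial sketch, where the registry shape and fixture name are hypothetical:

```ts
// Sketch: one named fixture drives several commands because each command
// resolves payloads by name instead of taking ad-hoc file paths.
const fixtures: Record<string, unknown> = {
  "payment-outbox-dispatch": { type: "batch.dispatched" },
};

function fixturePayload(name: string): string {
  const payload = fixtures[name];
  if (payload === undefined) throw new Error(`unknown fixture: ${name}`);
  return JSON.stringify(payload);
}

// The same serialized payload can back `wd queue send --fixture ...`,
// a `wd worker call` request body, or a button in `wd dev ui`.
const body = fixturePayload("payment-outbox-dispatch");
```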
The dashboard also surfaces the resolved .wdrc / .wdrc.json defaults and the same manifest that powers wd schema and wd tools, which makes it easier for agents and humans to confirm what the CLI will do before they run it.