# CLI Commands
Both `wrangler-deploy` and the short alias `wd` are available after install. The CLI works for greenfield starters and existing Wrangler projects alike, so you can scaffold a fresh app or keep the `wrangler.jsonc` files you already have.
## wd create vite [directory]

Scaffold a new Vite-style starter project that already includes a worker API, a local Vite frontend, and a `wrangler-deploy.config.ts` file. Use this when you are starting greenfield instead of adopting an existing repo.
```sh
$ wd create vite my-app
Created vite starter in /repo/my-app
✓ package.json
✓ tsconfig.json
✓ vite.config.ts
✓ index.html
✓ src/main.ts
✓ src/style.css
✓ workers/api/src/index.ts
✓ workers/api/wrangler.jsonc
✓ wrangler-deploy.config.ts
✓ README.md
✓ .gitignore
```

- `wd create vite` creates a starter in a new directory, defaulting to `cloudflare-vite-app` when you omit the directory
- `--name` sets the package name and displayed project title
- `--force` allows overwriting existing files in the target directory
- `--json` returns the scaffold summary as machine-readable output
## wd context [get|set|unset|clear]

Manage project-level defaults in a `.wdrc` or `.wdrc.json` file at or above the repo root. This is useful when you want to stop repeating the same stage, account, or dev settings on every command.
```sh
$ wd context get stage
staging

$ wd context set --stage staging --account-id 1234567890abcdef1234567890abcdef
file: /repo/.wdrc

$ wd context unset --account-id
file: /repo/.wdrc

$ wd context clear
file: /repo/.wdrc
defaults cleared
```

- `wd context get <key>` reads a single resolved default value
- `wd context set` merges new defaults into the nearest existing context file or creates `.wdrc` at the repo root
- `wd context unset` removes one or more keys from the file without touching the rest
- `wd context clear` removes the file entirely
- `--json` works for all four subcommands when you want machine-readable output
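As a rough sketch, a context file written by the commands above might hold keys like these; the exact key names are an assumption inferred from the flags, not a documented schema:

```jsonc
// .wdrc.json (hypothetical contents)
{
  "stage": "staging",
  "accountId": "1234567890abcdef1234567890abcdef"
}
```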
## wd init

Scan your local Wrangler configs and generate `wrangler-deploy.config.ts`. Run this once when adopting wrangler-deploy in a repo that already has `wrangler.jsonc` or `wrangler.json`.
```sh
$ wd init
✓ Found workers/api/wrangler.jsonc
✓ Found workers/batch-workflow/wrangler.jsonc
✓ Found workers/event-router/wrangler.jsonc
✓ Generated wrangler-deploy.config.ts
```

Your checked-in Wrangler files are not modified.
## wd introspect [--filter <prefix>] [--dry-run]

Scan your live Cloudflare account and generate `wrangler-deploy.config.ts` from existing resources. Use this instead of `wd init` when you already have Workers and resources running in production.
If `CLOUDFLARE_API_TOKEN` is set, the command discovers Workers and their bindings automatically. Without it, it falls back to `wrangler login` credentials but cannot fetch worker bindings.
```sh
# Pull everything from your account
$ wd introspect
Found 3 workers, 2 KV namespaces, 1 D1 database, 1 queue
✓ Generated wrangler-deploy.config.ts

# Only resources starting with "payments-"
$ wd introspect --filter payments-

# Preview without writing the file
$ wd introspect --dry-run
```

## wd plan --stage <name>
Dry-run. Shows which resources would be created and which are in sync, drifted, or orphaned. Run this before `wd apply` to preview changes.
```sh
$ wd plan --stage pr-42
wrangler-deploy plan --stage pr-42

+ payments-db-pr-42 (d1) create
+ token-kv-pr-42 (kv) create
+ cache-kv-pr-42 (kv) create
+ payment-outbox-pr-42 (queue) create
+ payment-outbox-dlq-pr-42 (queue) create

5 to create, 0 in sync, 0 drifted, 0 orphaned
```

## wd apply --stage <name> [--database-url <url>]
Provision resources in Cloudflare. This step is idempotent, so it is safe to run again if it fails partway through. State is written after each resource, and deploy-time `wrangler.rendered.jsonc` files are generated with the real IDs for that stage.
```sh
$ wd apply --stage pr-42
+ payments-db-pr-42 (d1) created
+ token-kv-pr-42 (kv) created
+ cache-kv-pr-42 (kv) created
+ payment-outbox-pr-42 (queue) created
+ payment-outbox-dlq-pr-42 (queue) created
✓ 5 resources applied
```

`--database-url` is required for Hyperdrive on first apply (it needs a Postgres connection string).
## wd deploy --stage <name> [--verify]

Deploy workers using rendered configs with real resource IDs. Workers are deployed in dependency order, so service-binding targets go first. Deploy blocks if declared secrets are missing.
```sh
$ wd deploy --stage pr-42
Deploying workers/batch-workflow...
✓ payment-batch-workflow-pr-42 deployed
Deploying workers/event-router...
✓ payment-event-router-pr-42 deployed
Deploying workers/api...
✓ payment-api-pr-42 deployed
```

`--verify` runs post-deploy coherence checks and fails the pipeline if anything is wrong:

```sh
$ wd deploy --stage pr-42 --verify
...deployed...
Verification: 8 passed, 0 failed
```

## wd destroy --stage <name> [--force]
Tear down all resources for a stage: queue consumers are removed first, then workers, then resources. Requires `--force` for protected stages.
```sh
$ wd destroy --stage pr-42
Removing queue consumers...
Deleting workers...
Deleting resources...
✓ Stage "pr-42" destroyed
```

## wd status [--stage <name>]
Without `--stage`: list all known stages. With `--stage`: show resources, workers, and drift for that stage.
```sh
$ wd status
Stages:
  staging    (3 resources, 3 workers)
  pr-42      (5 resources, 3 workers)
  production (5 resources, 3 workers) [protected]

$ wd status --stage staging
Resources:
  payments-db-staging (d1) — in-sync
  cache-kv-staging (kv) — in-sync
  outbox-staging (queue) — drifted
Workers:
  payment-api-staging — deployed
  payment-batch-workflow-staging — deployed
```

## wd guard ...
Workers usage guard commands. These are split into:
- `status`: direct Cloudflare GraphQL usage view (requires `CLOUDFLARE_API_TOKEN`)
- `init|deploy|migrate`: provision and deploy the bundled `workers-usage-guard` Worker
- `breaches|report|disarm|arm|approvals|approve|reject`: require a deployed guard endpoint and `WRANGLER_DEPLOY_GUARD_SIGNING_KEY`
### wd guard status [--json]

Show current usage for the accounts defined in `guard.accounts` in `wrangler-deploy.config.ts`.
```sh
$ CLOUDFLARE_API_TOKEN=... wd guard status
```

### wd guard init --account <id> [--billing-cycle-day <1-31>] [--workers <names>] [--database-id <id>] [--yes]
One-command setup: creates the D1 database, applies migrations, sets secrets, and deploys the bundled `workers-usage-guard` Worker. Prints the config snippet to add to `wrangler-deploy.config.ts`.
```sh
$ wd guard init --account 1234abcd --workers api,event-router --yes
```

Prompts for notification channels interactively (skip with `--yes`).

- `--workers` — comma-separated worker script names to monitor
- `--database-id` — skip D1 creation and use an existing database (re-init safe)
- `--billing-cycle-day` — day of month the billing cycle starts (default: 1)
- `--yes` — non-interactive; skip all prompts
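A minimal sketch of what the printed config snippet might look like. The keys `guard.accounts` and `guard.databaseId` are documented in this section; the surrounding shape is an assumption:

```ts
// wrangler-deploy.config.ts (hypothetical guard block)
export default {
  guard: {
    // accounts drives `wd guard status`
    accounts: ["1234abcd"],
    // databaseId is reused by `wd guard deploy` and `wd guard migrate`
    databaseId: "bd0274ea-ea3b-4fd7-966d-ee55d6ce9947",
  },
};
```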
### wd guard deploy [--database-id <id>]

Re-deploy the bundled `workers-usage-guard` Worker using `guard.databaseId` from config (or `--database-id`).
```sh
$ wd guard deploy
$ wd guard deploy --database-id bd0274ea-ea3b-4fd7-966d-ee55d6ce9947
```

### wd guard migrate [--database-id <id>]
Apply D1 migrations to the guard database. Uses `guard.databaseId` from config (or `--database-id`).
```sh
$ wd guard migrate
```

### wd guard breaches --account <id> [--limit <n>] [--json]
Read recent breach forensics from the guard API.
```sh
$ wd guard breaches --account 1234abcd --limit 10
```

### wd guard report --account <id> [--date <YYYY-MM-DD>] [--json]
Read daily usage report data from the guard API.
```sh
$ wd guard report --account 1234abcd
$ wd guard report --account 1234abcd --date 2026-04-19
```

### wd guard disarm <script> --account <id> [--reason <text>]

### wd guard arm <script> --account <id>

Toggle runtime protection for a worker script (human override on kill-switch behavior).
```sh
$ wd guard disarm payment-api --account 1234abcd --reason "incident mitigation"
$ wd guard arm payment-api --account 1234abcd
```

### wd guard approvals --account <id> [--json]

### wd guard approve <approval-id> --account <id>

### wd guard reject <approval-id> --account <id>

List and decide pending human approvals created by the kill-switch workflow.
```sh
$ wd guard approvals --account 1234abcd
$ wd guard approve appr-123 --account 1234abcd
$ wd guard reject appr-456 --account 1234abcd
```

## wd verify --stage <name>
Read-only coherence check. Validates state against live resources, rendered configs, workers, secrets, and service bindings. Use this in CI after deploy, or manually to check whether everything is still consistent.
```sh
$ wd verify --stage staging
✓ State file exists
✓ All resources in sync
✓ All workers deployed
✓ All secrets set
✓ Service bindings valid
Verification: 5 passed, 0 failed
```

## wd verify local
Run a config-driven local smoke test against the active development runtime.
```sh
$ wd verify local
wrangler-deploy verify local

+ api health — 200 http://127.0.0.1:8791/health
+ reset payments db — workers/api
+ seed payments db — workers/api
+ seeded batch count — workers/api
+ payment outbox accepts payloads — 200 http://127.0.0.1:8791/__wd/queues/payment-outbox
+ batch workflow cron route — 200 http://127.0.0.1:8787/cdn-cgi/handler/scheduled

6 passed, 0 failed
```

This is the repo-aware local harness. It can combine:
- worker endpoint checks
- cron triggers
- queue sends
- local D1 reset, seed, and SQL assertions
- named verify packs for CI-style smoke and regression runs
- machine-readable JSON output with `--json-report`
Configure it with `verifyLocal.checks` in `wrangler-deploy.config.ts`, then run it after `wd dev`.
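A hedged sketch of what a `verifyLocal.checks` block could look like. The field names are guesses modeled on the check output above (an endpoint check and a D1 reset step), not a documented schema:

```ts
// wrangler-deploy.config.ts (hypothetical verifyLocal block)
export default {
  verifyLocal: {
    checks: [
      // hypothetical shape: an endpoint check against workers/api
      { name: "api health", worker: "workers/api", path: "/health", expectStatus: 200 },
      // hypothetical shape: a D1 reset step before SQL assertions
      { name: "reset payments db", d1: "payments-db", action: "reset" },
    ],
  },
};
```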
Use `--pack <name>` when you want a named subset or stricter regression pack:

```sh
$ wd verify local --pack smoke
```

Use `--json-report` in CI when you want structured output:

```sh
$ wd verify local --pack regression --json-report
```

## wd secrets --stage <name>
Check which declared secrets are set or missing. Use before deploy to see what’s needed.
```sh
$ wd secrets --stage staging
workers/api:
  AUTH_SECRET — set
  API_KEY — missing
```

## wd secrets set --stage <name>
Set missing secrets interactively. Prompts for each missing value.
```sh
$ wd secrets set --stage staging
workers/api / API_KEY: ****
✓ 1 secret set
```

## wd secrets sync --to <name> --from-env-file <path>
Bulk set secrets from a `.dev.vars`-style file. Useful for syncing local dev secrets to a staging environment.
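A `.dev.vars`-style file is plain `KEY=value` lines, one secret per line. For example, using the secret names from the `wd secrets` output above with placeholder values:

```sh
# .dev.vars (placeholder values)
AUTH_SECRET=replace-me
API_KEY=replace-me
```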
```sh
$ wd secrets sync --to staging --from-env-file .dev.vars
✓ Synced 3 secrets to staging
```

## wd gc

Destroy stages past their TTL. Only affects unprotected stages. Typically run on a daily cron in CI to clean up old PR preview environments.
```sh
$ wd gc
pr-38 expired (7d TTL, created 2026-03-28) — destroying
pr-39 expired (7d TTL, created 2026-03-29) — destroying
✓ 2 stages destroyed
```

## wd graph [--stage <name>] [--format ascii|mermaid|dot|json]
Show the topology graph. Default format is `ascii`. Use when you want to see how workers, resources, and bindings connect.
```sh
$ wd graph
[worker] api
├── (binding: DB) [d1] payments-db
├── (binding: TOKEN_KV) [kv] token-kv
├── (producer: OUTBOX_QUEUE) [queue] payment-outbox
└── (service-binding: WORKFLOWS) [worker] batch-workflow
[worker] batch-workflow
├── (binding: DB) [d1] payments-db
├── (binding: CACHE_KV) [kv] cache-kv
└── (producer: OUTBOX_QUEUE) [queue] payment-outbox
[worker] event-router
├── (binding: DB) [d1] payments-db
├── (producer: OUTBOX_QUEUE) [queue] payment-outbox
└── (consumer) [queue] payment-outbox
[queue] payment-outbox
└── (dead-letter) [queue] payment-outbox-dlq
```

Mermaid output you can paste into a PR comment:

```sh
$ wd graph --format mermaid
graph TD
  subgraph Workers
    workers_api([api])
    workers_batch_workflow([batch-workflow])
    workers_event_router([event-router])
  end
  subgraph Queues
    payment_outbox[/payment-outbox\]
    payment_outbox_dlq[/payment-outbox-dlq\]
  end
  workers_api -->|WORKFLOWS| workers_batch_workflow
  workers_api -->|OUTBOX_QUEUE| payment_outbox
  payment_outbox -. DLQ .-> payment_outbox_dlq
```

When `--stage` is provided, the graph overlays live state (resource IDs, worker URLs, sync status).
## wd impact <worker-path>

Show what depends on a worker and what breaks if it goes down. Run this before making breaking changes.
```sh
$ wd impact workers/api
Impact analysis for workers/api

Upstream (depends on):
  payments-db → shared with workers/batch-workflow, workers/event-router
  token-kv → exclusive
  payment-outbox → shared with workers/batch-workflow, workers/event-router

If workers/api is unavailable:
  workers/batch-workflow is unaffected (no direct dependency)
  workers/event-router is unaffected (no direct dependency)
```

## wd diff <stage-a> <stage-b> [--format json]
Compare two stages side by side. Use before promoting staging to production, or to audit what a PR preview added.
```sh
$ wd diff staging production
Diff: staging vs production

Resources:
  = payments-db (d1) — same
  + cache-kv (kv) — only-in-a
  ~ token-kv (kv) — different

Workers:
  = workers/api — same
  + workers/experiment — only-in-a
```

`--format json` gives machine-readable output you can pipe into other tools.
## wd dev [--stage <stage>] [--filter <worker>] [--port <base>] [--session] [--persist-to <path>]

Start all workers in local dev mode. Reads your existing `wrangler.jsonc` files as-is, or uses the rendered stage configs when you pass `--stage`. Automatically resolves available dev and inspector ports so multi-worker setups work without conflicts.
`--stage` is the normal way to make local dev line up with a stage you already applied. `wrangler.jsonc` remains the thing you author and keep in the repo. `--remote` is not the primary wrangler-deploy path.

Wrangler can start these workers too. The benefit of `wd dev` is that it derives the worker set, dependency order, port map, companion processes, and session settings from one project config instead of leaving that orchestration to shell scripts or tribal knowledge.
```sh
$ wd dev
Starting dev servers:
  workers/api -> http://localhost:8787
  workers/batch-workflow -> http://localhost:8788
  workers/event-router -> http://localhost:8789
```

`--filter` starts only the target worker and its transitive service-binding deps. Use when you only need part of the system running:

```sh
$ wd dev --filter workers/api
Starting dev servers:
  workers/batch-workflow -> http://localhost:8787
  workers/api -> http://localhost:8788
```

(batch-workflow is included because api has a service binding to it.)
If you already ran `wd apply --stage staging`, then `wd dev --stage staging` uses the rendered stage configs directly. That keeps local bindings aligned with deploy-time bindings, including D1, KV, R2, queue, and service bindings.
Use `--session` for Cloudflare Queues local development sessions where producer and consumer workers need to share one Miniflare environment:
```sh
$ wd dev --stage staging --session --persist-to .wrangler/state
Starting local dev session:
  workers/api -> http://localhost:8787
  includes: workers/batch-workflow, workers/event-router
```

`--persist-to` also enables session mode even if `--session` is omitted.
## wd dev doctor

Run local preflight checks before starting a dev stack. This validates worker config files, session entry worker references, companion working directories, cron-enabled workers, and queue topology wiring.
```sh
$ wd dev doctor
wrangler-deploy dev doctor

✓ dev worker config: workers/api: wrangler config found
✓ dev worker config: workers/batch-workflow: wrangler config found
✓ cron route: workers/batch-workflow: 1 cron trigger(s) configured
✓ dev session entry worker: workers/api is declared
```

Use this when `wd dev` behaves strangely or when you want to catch local-only config problems before you start anything.
## wd dev ui

Start a small local control plane for the active runtime.
```sh
$ wd dev ui --port 8899
dev ui -> http://127.0.0.1:8899
```

The UI shows worker URLs, named endpoints, queue topology, D1 databases, verify packs, snapshots, recent history, logs, resolved project defaults, and agent metadata. It also lets you:
- call named local endpoints
- run fixture-backed worker, queue, and D1 actions
- trigger cron workers
- run local verify packs with pass/fail summaries
- save and load local snapshots
- replay recent local actions from the UI history
The top of the dashboard includes the same `.wdrc` / `.wdrc.json` defaults and command manifest information exposed by `wd context`, `wd schema`, and `wd tools`.
Use this when you want the repo-aware runtime workflow without memorizing commands during local debugging.
## wd cron trigger <worker>

Trigger a local scheduled event against a running `wrangler dev` server. This calls Cloudflare’s documented local scheduled route: `/cdn-cgi/handler/scheduled`.
```sh
$ wd cron trigger workers/batch-workflow --port 8787
200 http://127.0.0.1:8787/cdn-cgi/handler/scheduled ok
```

Optional flags:

- `--cron "<expr>"` to override `controller.cron`
- `--time <epoch>` to override `controller.scheduledTime`
- `--path <route>` to override the default scheduled route
- `--port <number>` to target an explicit local dev port
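Since this is a plain HTTP route, you can also hit it directly. A hedged `curl` equivalent, assuming the local dev server exposes the scheduled handler and accepts a `cron` query parameter as in Cloudflare’s docs:

```sh
# manual trigger of the local scheduled route; URL-encode spaces in the cron expression
curl "http://127.0.0.1:8787/cdn-cgi/handler/scheduled?cron=*+*+*+*+*"
```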
## wd cron loop <worker>

Repeat local scheduled events on an interval. Useful for replaying cron-driven workflows or local recovery loops.
```sh
$ wd cron loop workers/batch-workflow --port 8787 --every 5s --cron "*/5 * * * *"
```

Intervals accept `ms`, `s`, or `m` suffixes.
## wd snapshot list

List saved local runtime snapshots.
```sh
$ wd snapshot list
Snapshots

local-baseline
  created: 2026-04-07T19:02:26.117Z
  sources: .wrangler/state, .wrangler/state/v3, .wrangler-deploy/dev-runtime.json, .wrangler-deploy/dev-logs
```

## wd snapshot save <name>
Save the current local state so it can be restored later.
```sh
$ wd snapshot save local-baseline
```

This snapshots the configured local Miniflare state plus runtime metadata and logs. It is meant for reproducible local environments, not as a replacement for `wrangler.jsonc`.
## wd snapshot load <name>

Restore a previously saved local runtime snapshot.
```sh
$ wd snapshot load local-baseline
```

Use this when you want to jump back to a known-good D1 and local resource state before rerunning `wd verify local` or replaying queue traffic.
## wd worker call <worker>

Call a running local worker by worker path instead of remembering the current port yourself.
This is the HTTP equivalent of `wd queue send`: Wrangler can expose the worker, but `wd worker call` resolves the current local port from the repo’s active dev runtime or planned dev config first.
```sh
$ wd worker call workers/api --method POST --path /__wd/echo --query source=docs --header x-request-id=local-test --json '{"ping":true}'

worker workers/api
POST http://127.0.0.1:8788/__wd/echo?source=docs
200 {"ok":true,"worker":"api","method":"POST","path":"/__wd/echo","query":{"source":"docs"},"requestId":"local-test","body":"{\"ping\":true}"}
```

Or call a shared fixture:

```sh
$ wd worker call --fixture echo-ping
```

Optional flags:
- `--method <verb>` to override the default `GET`
- `--path <route>` to call a non-root route
- `--query key=value` to append query-string pairs, repeatable
- `--header key=value` to send request headers, repeatable
- `--json '<payload>'` to send a JSON body and default `content-type: application/json`
- `--body '<text>'` to send a raw body
- `--body-file request.txt` to read the request body from disk
- `--watch` to repeat the call on an interval
- `--every 5s` to control the watch interval
- `--count 10` to stop after a fixed number of calls
- `--port <number>` to force a specific local dev port
- `--fixture <name>` to load a shared worker fixture from config
- `--json`, `--body`, and `--body-file` are mutually exclusive
Start `wd dev` first if you want the command to use the active runtime’s current ports automatically.
## wd worker routes [worker]

Show the current local URL for each worker plus any named endpoints declared in `dev.endpoints`.
```sh
$ wd worker routes workers/api
Worker routes

workers/api
  url: http://127.0.0.1:8788
  endpoint health: GET /health
  endpoint echo: POST /__wd/echo
```

This is the discovery command for `wd worker call`. Use it when you want to see the repo’s local HTTP surface without opening multiple `wrangler.jsonc` files.
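A rough sketch of how the `health` and `echo` endpoints shown above might be declared in `dev.endpoints`; the nesting and field names are assumptions, not a documented schema:

```ts
// wrangler-deploy.config.ts (hypothetical dev.endpoints block)
export default {
  dev: {
    endpoints: {
      "workers/api": {
        health: { method: "GET", path: "/health" },
        echo: { method: "POST", path: "/__wd/echo" },
      },
    },
  },
};
```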
## wd logs [worker]

Tail persisted logs from the active `wd dev` runtime.
```sh
$ wd logs workers/api --once
Tailing dev logs

[workers/api] [wrangler:info] GET /health 200 OK (5ms)
```

Optional flags:
- `--once` to print the current snapshot and exit
- `--every 1s` to change the polling interval
- `--grep <pattern>` to filter by regex
This is broader than `wd queue tail`: it tails a worker’s full persisted runtime log instead of only queue-related lines.
## wd d1 list

Show D1 database topology from `wrangler-deploy.config.ts`.
```sh
$ wd d1 list
D1 databases

payments-db
  bindings: workers/api:DB, workers/batch-workflow:DB, workers/event-router:DB
```

## wd d1 inspect <database>
Inspect one logical D1 database in detail, including any configured seed or reset files.
```sh
$ wd d1 inspect payments-db
D1: payments-db

bindings: workers/api:DB, workers/batch-workflow:DB, workers/event-router:DB
seed file: sql/seed.sql
reset file: sql/reset.sql
default worker: workers/api
```

## wd d1 exec <database>
Run `wrangler d1 execute --local` by logical database name instead of manually choosing a worker directory first.
```sh
$ wd d1 exec payments-db --sql 'SELECT COUNT(*) AS batch_count FROM batches;'
$ wd d1 exec --fixture payments-batch-count
```

Use one of:

- `--sql 'SELECT ...'`
- `--file sql/query.sql`
- `--fixture <name>`
If the database is bound in multiple workers, configure `dev.d1["<database>"].worker` or pass `--worker`.
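Putting the `dev.d1` keys from this and the next two sections together, a hedged sketch of the block in `wrangler-deploy.config.ts`; the keys `worker`, `seedFile`, and `resetFile` appear in these docs, while the overall nesting is assumed:

```ts
// wrangler-deploy.config.ts (hypothetical dev.d1 block)
export default {
  dev: {
    d1: {
      "payments-db": {
        worker: "workers/api",      // which bound worker runs wrangler d1 execute --local
        seedFile: "sql/seed.sql",   // used by `wd d1 seed payments-db`
        resetFile: "sql/reset.sql", // used by `wd d1 reset payments-db`
      },
    },
  },
};
```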
## wd d1 seed <database>

Run a local seed SQL file for a logical D1 database.
```sh
$ wd d1 seed payments-db
```

This uses `dev.d1["payments-db"].seedFile` by default, or `--file` when you want to override it for one run.
## wd d1 reset <database>

Run a local reset SQL file for a logical D1 database.
```sh
$ wd d1 reset payments-db
```

This is intentionally explicit. wrangler-deploy does not guess how to reset your schema. It runs the SQL file you configure in `dev.d1["payments-db"].resetFile` or pass with `--file`.
## wd queue list

Show queue topology from `wrangler-deploy.config.ts`, including producers, consumers, and dead-letter relationships.
```sh
$ wd queue list
Queue topology

payment-outbox
  producers: workers/api:OUTBOX_QUEUE, workers/batch-workflow:OUTBOX_QUEUE
  consumers: workers/event-router
payment-outbox-dlq → dead-letter for payment-outbox
  producers: none
  consumers: none
```

## wd queue inspect <queue>
Inspect one queue in detail.
```sh
$ wd queue inspect payment-outbox
Queue: payment-outbox

producers: workers/api:OUTBOX_QUEUE, workers/batch-workflow:OUTBOX_QUEUE
consumers: workers/event-router
dead-letter-for: none
```

## wd queue send <queue>
Send a local queue payload through a producer worker’s debug route. This avoids undocumented Miniflare internals and uses the worker’s real Queue binding.
Raw Wrangler does not give you a repo-level “send to this logical queue” workflow. `wd queue send` resolves the correct producer worker, current local port, and configured route from your project config first.
```sh
$ wd queue send payment-outbox --json '{"type":"batch.dispatched","data":{"batchId":"local-test"}}'
$ wd queue send --fixture payment-outbox-dispatch

queue payment-outbox -> workers/api
200 http://127.0.0.1:8788/__wd/queues/payment-outbox {"queued":true}
```

Use one of:

- `--json '<payload>'`
- `--file payload.json`
- `--fixture <name>`

Optional flags:

- `--watch` to repeat the same payload on an interval
- `--every 5s` to control the watch interval
- `--count 10` to stop after a fixed number of sends
- `--worker <worker>` when a queue has multiple producers and you want a specific one
- `--port <number>` to target a specific local dev port
- `--path <route>` to override the configured local injection route
If a queue has multiple producers, configure `dev.queues` in `wrangler-deploy.config.ts` or pass `--worker`.
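A hedged sketch of a `dev.queues` entry pinning the producer; the nesting and the `worker` key are assumptions modeled on the `dev.d1` sketch earlier:

```ts
// wrangler-deploy.config.ts (hypothetical dev.queues block)
export default {
  dev: {
    queues: {
      // payment-outbox has two producers; pin local sends to workers/api
      "payment-outbox": { worker: "workers/api" },
    },
  },
};
```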
## wd fixture list

Show the shared local fixtures declared in `wrangler-deploy.config.ts`.
```sh
$ wd fixture list
Fixtures

api-health [worker] workers/api endpoint=health
payment-outbox-dispatch [queue] payment-outbox via workers/api
payments-batch-count [d1] payments-db via workers/api sql
```

This is the discovery command for fixture-backed local workflows. Use it when you want reusable worker calls, queue sends, D1 queries, and local verification steps to all share the same inputs.
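A rough sketch of how these fixtures might be declared. The fixture names and targets come from the listing above; the field layout is an assumption:

```ts
// wrangler-deploy.config.ts (hypothetical fixtures block)
export default {
  fixtures: {
    "api-health": { kind: "worker", worker: "workers/api", endpoint: "health" },
    "payment-outbox-dispatch": {
      kind: "queue",
      queue: "payment-outbox",
      worker: "workers/api",
      payload: { type: "batch.dispatched", data: { batchId: "local-test" } },
    },
    "payments-batch-count": {
      kind: "d1",
      database: "payments-db",
      worker: "workers/api",
      sql: "SELECT COUNT(*) AS batch_count FROM batches;",
    },
  },
};
```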
## wd queue replay <queue>

Replay a fixture file containing a JSON array of queue payloads.
```sh
$ wd queue replay payment-outbox --file fixtures/payment-outbox.json

replay payment-outbox -> workers/api
sent 2 message(s) to http://127.0.0.1:8788/__wd/queues/payment-outbox
all messages accepted
```

The file must contain a top-level JSON array. Each array element is POSTed as one queue payload through the same local producer route used by `wd queue send`.
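For example, a two-message `fixtures/payment-outbox.json` matching the run above might look like this; the payload shape reuses the `wd queue send` example and is otherwise illustrative:

```json
[
  { "type": "batch.dispatched", "data": { "batchId": "local-test-1" } },
  { "type": "batch.dispatched", "data": { "batchId": "local-test-2" } }
]
```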
## wd queue tail <queue>

Tail queue-related logs from the active `wd dev` runtime. This reads persisted per-worker dev logs written by `wd dev` and shows queue markers or local injection route activity.
This is another app-level workflow wrapper. Wrangler gives you process logs, but not a logical “tail queue activity for this app” command.
```sh
$ wd queue tail payment-outbox --once
Tailing queue payment-outbox

[workers/api] [wrangler:info] POST /__wd/queues/payment-outbox 200 OK (9ms)
```

Use:
- `--once` to print the current snapshot and exit
- `--every 1s` to change the polling interval
- `--worker <worker>` to restrict tailing to one related worker
Start `wd dev` first so runtime state and log files exist.
## wd ci init [--provider github] [--branch main]

Generate a GitHub Actions workflow. Run this once to set up CI/CD for your project.
```sh
$ wd ci init
✓ Generated .github/workflows/wrangler-deploy.yml
```

The generated workflow includes apply, deploy, PR comments, check runs, cleanup on PR close, production deploy on push to main, and the right GitHub token permissions.
## wd ci comment --stage <name>

Post or update a PR comment with worker URLs, topology diagram, resource tables, and secret status. Uses `<!-- wrangler-deploy -->` to update the same comment on each push. Requires `GITHUB_TOKEN`.
## wd ci check --stage <name>

Post a GitHub check run with success/failure status. If no state exists for the stage, the check reports failure and the command exits 1 — so your CI pipeline fails instead of passing silently. Requires `GITHUB_TOKEN`.
## wd doctor

Run diagnostic checks. Use when setting up a new project, after upgrading wrangler, or when CI fails unexpectedly.
```sh
$ wd doctor
wrangler-deploy doctor

✓ wrangler installed: wrangler 4.80.0
✓ wrangler auth: logged in as jag.reehal@gmail.com
✓ worker path: workers/api exists
✓ worker path: workers/batch-workflow exists
✓ worker path: workers/event-router exists
✓ config valid: No config errors
```

## wd completions --shell zsh|bash|fish
Generate shell completion scripts for tab-completion of commands and flags.
```sh
# Zsh
wd completions --shell zsh > ~/.zfunc/_wd

# Bash
wd completions --shell bash > /etc/bash_completion.d/wd

# Fish
wd completions --shell fish > ~/.config/fish/completions/wd.fish
```