
CLI Commands

Both wrangler-deploy and the short alias wd are available after install. The CLI works for both new greenfield starters and existing Wrangler projects, so you can scaffold a fresh app or keep the wrangler.jsonc files you already have.

Scaffold a new Vite-style starter project that already includes a worker API, a local Vite frontend, and a wrangler-deploy.config.ts file. Use this when you are starting a greenfield project instead of adopting an existing repo.

$ wd create vite my-app
Created vite starter in /repo/my-app
package.json
tsconfig.json
vite.config.ts
index.html
src/main.ts
src/style.css
workers/api/src/index.ts
workers/api/wrangler.jsonc
wrangler-deploy.config.ts
README.md
.gitignore
  • wd create vite creates a starter in a new directory, defaulting to cloudflare-vite-app when you omit the directory
  • --name sets the package name and displayed project title
  • --force allows overwriting existing files in the target directory
  • --json returns the scaffold summary as machine-readable output

Manage project-level defaults in a .wdrc or .wdrc.json file at or above the repo root. This is useful when you want to stop repeating the same stage, account, or dev settings on every command.

$ wd context get stage
staging
$ wd context set --stage staging --account-id 1234567890abcdef1234567890abcdef
file: /repo/.wdrc
$ wd context unset --account-id
file: /repo/.wdrc
$ wd context clear
file: /repo/.wdrc
defaults cleared
  • wd context get <key> reads a single resolved default value
  • wd context set merges new defaults into the nearest existing context file or creates .wdrc at the repo root
  • wd context unset removes one or more keys from the file without touching the rest
  • wd context clear removes the file entirely
  • --json works for all four subcommands when you want machine-readable output

Scan your local Wrangler configs and generate wrangler-deploy.config.ts. Run this once when adopting wrangler-deploy in a repo that already has wrangler.jsonc or wrangler.json.

$ wd init
Found workers/api/wrangler.jsonc
Found workers/batch-workflow/wrangler.jsonc
Found workers/event-router/wrangler.jsonc
Generated wrangler-deploy.config.ts

Your checked-in Wrangler files are not modified.

wd introspect [--filter <prefix>] [--dry-run]


Scan your live Cloudflare account and generate wrangler-deploy.config.ts from existing resources. Use this instead of init when you already have Workers and resources running in production.

If CLOUDFLARE_API_TOKEN is set, the command discovers Workers and their bindings automatically. Without it, it falls back to your wrangler login credentials (but cannot fetch worker bindings).

# Pull everything from your account
$ wd introspect
Found 3 workers, 2 KV namespaces, 1 D1 database, 1 queue
Generated wrangler-deploy.config.ts
# Only resources starting with "payments-"
$ wd introspect --filter payments-
# Preview without writing the file
$ wd introspect --dry-run

Dry run: shows which resources would be created and which are in sync, drifted, or orphaned. Run this before apply to preview changes.

$ wd plan --stage pr-42
wrangler-deploy plan --stage pr-42
+ payments-db-pr-42 (d1) create
+ token-kv-pr-42 (kv) create
+ cache-kv-pr-42 (kv) create
+ payment-outbox-pr-42 (queue) create
+ payment-outbox-dlq-pr-42 (queue) create
5 to create, 0 in sync, 0 drifted, 0 orphaned

wd apply --stage <name> [--database-url <url>]


Provision resources in Cloudflare. This step is idempotent, so it is safe to run again if it fails partway through. State is written after each resource, and deploy-time wrangler.rendered.jsonc files are generated with the real IDs for that stage.

$ wd apply --stage pr-42
+ payments-db-pr-42 (d1) created
+ token-kv-pr-42 (kv) created
+ cache-kv-pr-42 (kv) created
+ payment-outbox-pr-42 (queue) created
+ payment-outbox-dlq-pr-42 (queue) created
5 resources applied

--database-url is required for Hyperdrive on first apply (it needs a Postgres connection string).

Deploy workers using rendered configs with real resource IDs. Workers are deployed in dependency order, so service-binding targets go first. Deploy blocks if declared secrets are missing.

$ wd deploy --stage pr-42
Deploying workers/batch-workflow...
payment-batch-workflow-pr-42 deployed
Deploying workers/event-router...
payment-event-router-pr-42 deployed
Deploying workers/api...
payment-api-pr-42 deployed
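The "dependency order" rule amounts to a depth-first, post-order walk of the service-binding graph. A minimal sketch, using a hypothetical map of worker to service-binding targets rather than wrangler-deploy's internal representation:

```typescript
// Deploy service-binding targets before their dependents.
// The `deps` map shape is an assumption for illustration only.
function deployOrder(deps: Record<string, string[]>): string[] {
  const order: string[] = [];
  const seen = new Set<string>();
  const visit = (worker: string): void => {
    if (seen.has(worker)) return;
    seen.add(worker);
    for (const target of deps[worker] ?? []) visit(target); // targets first
    order.push(worker); // dependent goes after everything it binds to
  };
  Object.keys(deps).forEach(visit);
  return order;
}
```

With workers/api service-bound to workers/batch-workflow, batch-workflow sorts ahead of api, consistent with the output above.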

--verify runs post-deploy coherence checks and fails the pipeline if anything is wrong:

$ wd deploy --stage pr-42 --verify
...deployed...
Verification: 8 passed, 0 failed

Tear down all resources for a stage. Removes queue consumers first, then workers, then resources, in that order. Requires --force for protected stages.

$ wd destroy --stage pr-42
Removing queue consumers...
Deleting workers...
Deleting resources...
Stage "pr-42" destroyed

Without --stage: list all known stages. With --stage: show resources, workers, and drift for that stage.

$ wd status
Stages:
staging (3 resources, 3 workers)
pr-42 (5 resources, 3 workers)
production (5 resources, 3 workers) [protected]
$ wd status --stage staging
Resources:
payments-db-staging (d1) — in-sync
cache-kv-staging (kv) — in-sync
outbox-staging (queue) — drifted
Workers:
payment-api-staging deployed
payment-batch-workflow-staging deployed

Workers usage guard commands. These are split into:

  • status: direct Cloudflare GraphQL usage view (requires CLOUDFLARE_API_TOKEN)
  • init|deploy|migrate: provision and deploy the bundled workers-usage-guard Worker
  • breaches|report|disarm|arm|approvals|approve|reject: require a deployed guard endpoint and WRANGLER_DEPLOY_GUARD_SIGNING_KEY

Show current usage for accounts defined in guard.accounts in wrangler-deploy.config.ts.

$ CLOUDFLARE_API_TOKEN=... wd guard status
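For orientation, a hypothetical wrangler-deploy.config.ts fragment. Only the guard.accounts and guard.databaseId keys are named on this page; every other field name here is an assumption, not the tool's documented schema:

```typescript
// Hypothetical shape — only `guard.accounts` and `guard.databaseId`
// appear in these docs; the account-entry fields are invented.
export default {
  guard: {
    databaseId: "bd0274ea-ea3b-4fd7-966d-ee55d6ce9947",
    accounts: [
      { id: "1234abcd" }, // account id as passed to --account
    ],
  },
};
```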

wd guard init --account <id> [--billing-cycle-day <1-31>] [--workers <names>] [--database-id <id>] [--yes]


One-command setup: creates the D1 database, applies migrations, sets secrets, and deploys the bundled workers-usage-guard Worker. Prints the config snippet to add to wrangler-deploy.config.ts.

$ wd guard init --account 1234abcd --workers api,event-router --yes

Prompts for notification channels interactively (skip with --yes).

  • --workers — comma-separated worker script names to monitor
  • --database-id — skip D1 creation and use an existing database (re-init safe)
  • --billing-cycle-day — day of month the billing cycle starts (default: 1)
  • --yes — non-interactive; skip all prompts

Re-deploy the bundled workers-usage-guard Worker using guard.databaseId from config (or --database-id).

$ wd guard deploy
$ wd guard deploy --database-id bd0274ea-ea3b-4fd7-966d-ee55d6ce9947

Apply D1 migrations to the guard database. Uses guard.databaseId from config (or --database-id).

$ wd guard migrate

wd guard breaches --account <id> [--limit <n>] [--json]


Read recent breach forensics from the guard API.

$ wd guard breaches --account 1234abcd --limit 10

wd guard report --account <id> [--date <YYYY-MM-DD>] [--json]


Read daily usage report data from the guard API.

$ wd guard report --account 1234abcd
$ wd guard report --account 1234abcd --date 2026-04-19

wd guard disarm <script> --account <id> [--reason <text>]


Toggle runtime protection for a worker script (human override on kill-switch behavior).

$ wd guard disarm payment-api --account 1234abcd --reason "incident mitigation"
$ wd guard arm payment-api --account 1234abcd

wd guard approvals --account <id> [--json]


wd guard approve <approval-id> --account <id>


wd guard reject <approval-id> --account <id>


List and decide pending human approvals created by the kill-switch workflow.

$ wd guard approvals --account 1234abcd
$ wd guard approve appr-123 --account 1234abcd
$ wd guard reject appr-456 --account 1234abcd

Read-only coherence check. Validates state against live resources, rendered configs, workers, secrets, and service bindings. Use this in CI after deploy, or manually to check whether everything is still consistent.

$ wd verify --stage staging
State file exists
All resources in sync
All workers deployed
All secrets set
Service bindings valid
Verification: 5 passed, 0 failed

Run a config-driven local smoke test against the active development runtime.

$ wd verify local
wrangler-deploy verify local
+ api health 200 http://127.0.0.1:8791/health
+ reset payments db workers/api
+ seed payments db workers/api
+ seeded batch count workers/api
+ payment outbox accepts payloads 200 http://127.0.0.1:8791/__wd/queues/payment-outbox
+ batch workflow cron route 200 http://127.0.0.1:8787/cdn-cgi/handler/scheduled
6 passed, 0 failed

This is the repo-aware local harness. It can combine:

  • worker endpoint checks
  • cron triggers
  • queue sends
  • local D1 reset, seed, and SQL assertions
  • named verify packs for CI-style smoke and regression runs
  • machine-readable JSON output with --json-report

Configure it with verifyLocal.checks in wrangler-deploy.config.ts, then run it after wd dev.
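As a rough sketch, a verifyLocal.checks entry might look like the following. Only the verifyLocal.checks key itself is named on this page; the per-check field names below are invented for illustration, inferred from the endpoint, D1, queue, and cron checks in the example output:

```typescript
// Invented field names — not wrangler-deploy's documented schema.
export default {
  verifyLocal: {
    checks: [
      { name: "api health", worker: "workers/api", path: "/health", expectStatus: 200 },
      { name: "reset payments db", d1: "payments-db", action: "reset" },
      { name: "payment outbox accepts payloads", queue: "payment-outbox", expectStatus: 200 },
    ],
  },
};
```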

Use --pack <name> when you want a named subset or stricter regression pack:

$ wd verify local --pack smoke

Use --json-report in CI when you want structured output:

$ wd verify local --pack regression --json-report

Check which declared secrets are set or missing. Use before deploy to see what’s needed.

$ wd secrets --stage staging
workers/api:
AUTH_SECRET set
API_KEY missing

Set missing secrets interactively. Prompts for each missing value.

$ wd secrets set --stage staging
workers/api / API_KEY: ****
1 secret set

wd secrets sync --to <name> --from-env-file <path>


Bulk set secrets from a .dev.vars-style file. Useful for syncing local dev secrets to a staging environment.

$ wd secrets sync --to staging --from-env-file .dev.vars
Synced 3 secrets to staging

Destroy stages past their TTL. Only affects unprotected stages. Typically run on a daily cron in CI to clean up old PR preview environments.

$ wd gc
pr-38 expired (7d TTL, created 2026-03-28) — destroying
pr-39 expired (7d TTL, created 2026-03-29) — destroying
2 stages destroyed
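The TTL rule above can be stated as a pure function. The stage record shape here is hypothetical, not wrangler-deploy's state format:

```typescript
// Hypothetical stage record; field names are assumptions.
interface StageRecord {
  name: string;
  createdAt: string; // ISO timestamp
  ttlDays: number;
  isProtected: boolean;
}

// Protected stages are never collected; others expire once their
// age exceeds the TTL.
function expiredStages(stages: StageRecord[], now: Date): string[] {
  const MS_PER_DAY = 24 * 60 * 60 * 1000;
  return stages
    .filter((s) => !s.isProtected)
    .filter((s) => now.getTime() - Date.parse(s.createdAt) > s.ttlDays * MS_PER_DAY)
    .map((s) => s.name);
}
```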

wd graph [--stage <name>] [--format ascii|mermaid|dot|json]


Show the topology graph. Default format is ascii. Use when you want to see how workers, resources, and bindings connect.

$ wd graph
[worker] api
├── (binding: DB) [d1] payments-db
├── (binding: TOKEN_KV) [kv] token-kv
├── (producer: OUTBOX_QUEUE) [queue] payment-outbox
└── (service-binding: WORKFLOWS) [worker] batch-workflow
[worker] batch-workflow
├── (binding: DB) [d1] payments-db
├── (binding: CACHE_KV) [kv] cache-kv
└── (producer: OUTBOX_QUEUE) [queue] payment-outbox
[worker] event-router
├── (binding: DB) [d1] payments-db
├── (producer: OUTBOX_QUEUE) [queue] payment-outbox
└── (consumer) [queue] payment-outbox
[queue] payment-outbox
└── (dead-letter) [queue] payment-outbox-dlq

Mermaid output you can paste into a PR comment:

$ wd graph --format mermaid
graph TD
subgraph Workers
workers_api([api])
workers_batch_workflow([batch-workflow])
workers_event_router([event-router])
end
subgraph Queues
payment_outbox[/payment-outbox\]
payment_outbox_dlq[/payment-outbox-dlq\]
end
workers_api -->|WORKFLOWS| workers_batch_workflow
workers_api -->|OUTBOX_QUEUE| payment_outbox
payment_outbox -. DLQ .-> payment_outbox_dlq

When --stage is provided, overlays live state (resource IDs, worker URLs, sync status).

Show what depends on a worker and what breaks if it goes down. Run this before making breaking changes.

$ wd impact workers/api
Impact analysis for workers/api
Upstream (depends on):
payments-db shared with workers/batch-workflow, workers/event-router
token-kv exclusive
payment-outbox shared with workers/batch-workflow, workers/event-router
If workers/api is unavailable:
workers/batch-workflow is unaffected (no direct dependency)
workers/event-router is unaffected (no direct dependency)

wd diff <stage-a> <stage-b> [--format json]


Compare two stages side by side. Use before promoting staging to production, or to audit what a PR preview added.

$ wd diff staging production
Diff: staging vs production
Resources:
= payments-db (d1) — same
+ cache-kv (kv) — only-in-a
~ token-kv (kv) — different
Workers:
= workers/api same
+ workers/experiment only-in-a

Use --format json for machine-readable output you can pipe into other tools.
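The classification in the diff output (same, only-in-a, different) can be sketched as a simple map comparison. The name-to-fingerprint inputs here are hypothetical; only-in-b is implied by symmetry but does not appear in the example above:

```typescript
// Sketch of wd diff's per-item classification; input shape is assumed.
type DiffKind = "same" | "only-in-a" | "only-in-b" | "different";

function diffStages(
  a: Record<string, string>,
  b: Record<string, string>,
): Record<string, DiffKind> {
  const out: Record<string, DiffKind> = {};
  for (const key of new Set([...Object.keys(a), ...Object.keys(b)])) {
    if (!(key in b)) out[key] = "only-in-a"; // present only in stage a
    else if (!(key in a)) out[key] = "only-in-b"; // present only in stage b
    else out[key] = a[key] === b[key] ? "same" : "different";
  }
  return out;
}
```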

wd dev [--stage <stage>] [--filter <worker>] [--port <base>] [--session] [--persist-to <path>]


Start all workers in local dev mode. Reads your existing wrangler.jsonc files as-is, or uses the rendered stage configs when you pass --stage. Automatically resolves available dev and inspector ports so multi-worker setups work without conflicts.

--stage is the normal way to make local dev line up with a stage you already applied. wrangler.jsonc remains the thing you author and keep in the repo. --remote is not the primary wrangler-deploy path.

Wrangler can start these workers too. The benefit of wd dev is that it derives the worker set, dependency order, port map, companion processes, and session settings from one project config instead of leaving that orchestration to shell scripts or tribal knowledge.

$ wd dev
Starting dev servers:
workers/api -> http://localhost:8787
workers/batch-workflow -> http://localhost:8788
workers/event-router -> http://localhost:8789

--filter starts only the target worker and its transitive service-binding dependencies. Use it when you only need part of the system running:

$ wd dev --filter workers/api
Starting dev servers:
workers/batch-workflow -> http://localhost:8787
workers/api -> http://localhost:8788

(batch-workflow is included because api has a service binding to it.)

If you already ran wd apply --stage staging, then wd dev --stage staging uses the rendered stage configs directly. That keeps local bindings aligned with deploy-time bindings, including D1, KV, R2, queue, and service bindings.

Use --session for Cloudflare Queue local development sessions where producer and consumer workers need to share one Miniflare environment:

$ wd dev --stage staging --session --persist-to .wrangler/state
Starting local dev session:
workers/api -> http://localhost:8787
includes: workers/batch-workflow, workers/event-router

--persist-to also enables session mode even if --session is omitted.

Run local preflight checks before starting a dev stack. This validates worker config files, session entry worker references, companion working directories, cron-enabled workers, and queue topology wiring.

$ wd dev doctor
wrangler-deploy dev doctor
dev worker config: workers/api: wrangler config found
dev worker config: workers/batch-workflow: wrangler config found
cron route: workers/batch-workflow: 1 cron trigger(s) configured
dev session entry worker: workers/api is declared

Use this when wd dev behaves strangely or when you want to catch local-only config problems before you start anything.

Start a small local control plane for the active runtime.

$ wd dev ui --port 8899
dev ui -> http://127.0.0.1:8899

The UI shows worker URLs, named endpoints, queue topology, D1 databases, verify packs, snapshots, recent history, logs, resolved project defaults, and agent metadata. It also lets you:

  • call named local endpoints
  • run fixture-backed worker, queue, and D1 actions
  • trigger cron workers
  • run local verify packs with pass/fail summaries
  • save and load local snapshots
  • replay recent local actions from the UI history

The top of the dashboard includes the same .wdrc / .wdrc.json defaults and command manifest information exposed by wd context, wd schema, and wd tools.

Use this when you want the repo-aware runtime workflow without memorizing commands during local debugging.

Trigger a local scheduled event against a running wrangler dev server. This calls Cloudflare’s documented local scheduled route: /cdn-cgi/handler/scheduled.

$ wd cron trigger workers/batch-workflow --port 8787
200 http://127.0.0.1:8787/cdn-cgi/handler/scheduled
ok

Optional flags:

  • --cron "<expr>" to override controller.cron
  • --time <epoch> to override controller.scheduledTime
  • --path <route> to override the default scheduled route
  • --port <number> to target an explicit local dev port
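For reference, the same trigger as a raw request. The route comes from the text above; the `cron` query parameter is Wrangler's documented way to pass a cron expression to this route, which is presumably what --cron maps to. How --time maps to a parameter is not stated here, so it is omitted:

```typescript
// Build the URL that a local scheduled trigger hits. The route is
// Cloudflare's documented local scheduled handler; the `cron` query
// parameter overrides the expression the handler receives.
const url = new URL("http://127.0.0.1:8787/cdn-cgi/handler/scheduled");
url.searchParams.set("cron", "*/5 * * * *");
// A fetch(url) against a running `wrangler dev` would fire scheduled().
```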

Repeat local scheduled events on an interval. Useful for replaying cron-driven workflows or local recovery loops.

$ wd cron loop workers/batch-workflow --port 8787 --every 5s --cron "*/5 * * * *"

Intervals accept ms, s, or m suffixes.
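The suffix handling can be sketched as follows; this is illustrative, not the tool's actual parser:

```typescript
// Parse "250ms", "5s", or "1m" into milliseconds.
function parseInterval(value: string): number {
  const m = /^(\d+(?:\.\d+)?)(ms|s|m)$/.exec(value.trim());
  if (!m) throw new Error(`invalid interval: ${value}`);
  const n = Number(m[1]);
  return m[2] === "ms" ? n : m[2] === "s" ? n * 1000 : n * 60_000;
}
```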

List saved local runtime snapshots.

$ wd snapshot list
Snapshots
local-baseline
created: 2026-04-07T19:02:26.117Z
sources: .wrangler/state, .wrangler/state/v3, .wrangler-deploy/dev-runtime.json, .wrangler-deploy/dev-logs

Save the current local state so it can be restored later.

$ wd snapshot save local-baseline

This snapshots the configured local Miniflare state plus runtime metadata and logs. It is meant for reproducible local environments, not as a replacement for wrangler.jsonc.

Restore a previously saved local runtime snapshot.

$ wd snapshot load local-baseline

Use this when you want to jump back to a known-good D1 and local resource state before rerunning wd verify local or replaying queue traffic.

Call a running local worker by worker path instead of remembering the current port yourself.

This is the HTTP equivalent of wd queue send: Wrangler can expose the worker, but wd worker call resolves the current local port from the repo’s active dev runtime or planned dev config first.

$ wd worker call workers/api --method POST --path /__wd/echo --query source=docs --header x-request-id=local-test --json '{"ping":true}'
worker workers/api
POST http://127.0.0.1:8788/__wd/echo?source=docs
200
{"ok":true,"worker":"api","method":"POST","path":"/__wd/echo","query":{"source":"docs"},"requestId":"local-test","body":"{\"ping\":true}"}

Or call a shared fixture:

$ wd worker call --fixture echo-ping

Optional flags:

  • --method <verb> to override the default GET
  • --path <route> to call a non-root route
  • --query key=value to append query-string pairs, repeatable
  • --header key=value to send request headers, repeatable
  • --json '<payload>' to send a JSON body and default content-type: application/json
  • --body '<text>' to send a raw body
  • --body-file request.txt to read the request body from disk
  • --watch to repeat the call on an interval
  • --every 5s to control the watch interval
  • --count 10 to stop after a fixed number of calls
  • --port <number> to force a specific local dev port
  • --fixture <name> to load a shared worker fixture from config
  • --json, --body, and --body-file are mutually exclusive

Start wd dev first if you want the command to use the active runtime’s current ports automatically.
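Under the hood this is an ordinary HTTP request; the part wd adds is resolving the port and route for you. A minimal sketch of the URL assembly, with a hypothetical helper name (the port-resolution logic itself is not modeled):

```typescript
// Assemble the request URL shown in the example output above.
function buildCallUrl(port: number, path: string, query: Record<string, string>): string {
  const url = new URL(`http://127.0.0.1:${port}${path}`);
  for (const [key, value] of Object.entries(query)) url.searchParams.set(key, value);
  return url.toString();
}
```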

Show the current local URL for each worker plus any named endpoints declared in dev.endpoints.

$ wd worker routes workers/api
Worker routes
workers/api
url: http://127.0.0.1:8788
endpoint health: GET /health
endpoint echo: POST /__wd/echo

This is the discovery command for wd worker call. Use it when you want to see the repo’s local HTTP surface without opening multiple wrangler.jsonc files.
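A sketch of how dev.endpoints might be declared, based on the routes listed above. Only the key name and the endpoint names, methods, and paths come from this page; the nesting and field names are invented:

```typescript
// Invented shape — not wrangler-deploy's documented schema.
export default {
  dev: {
    endpoints: {
      "workers/api": {
        health: { method: "GET", path: "/health" },
        echo: { method: "POST", path: "/__wd/echo" },
      },
    },
  },
};
```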

Tail persisted logs from the active wd dev runtime.

$ wd logs workers/api --once
Tailing dev logs
[workers/api]
[wrangler:info] GET /health 200 OK (5ms)

Optional flags:

  • --once to print the current snapshot and exit
  • --every 1s to change the polling interval
  • --grep <pattern> to filter by regex

This is broader than wd queue tail: it tails a worker’s full persisted runtime log instead of only queue-related lines.

Show D1 database topology from wrangler-deploy.config.ts.

$ wd d1 list
D1 databases
payments-db
bindings: workers/api:DB, workers/batch-workflow:DB, workers/event-router:DB

Inspect one logical D1 database in detail, including any configured seed or reset files.

$ wd d1 inspect payments-db
D1: payments-db
bindings: workers/api:DB, workers/batch-workflow:DB, workers/event-router:DB
seed file: sql/seed.sql
reset file: sql/reset.sql
default worker: workers/api

Run wrangler d1 execute --local by logical database name instead of manually choosing a worker directory first.

$ wd d1 exec payments-db --sql 'SELECT COUNT(*) AS batch_count FROM batches;'
$ wd d1 exec --fixture payments-batch-count

Use one of:

  • --sql 'SELECT ...'
  • --file sql/query.sql
  • --fixture <name>

If the database is bound in multiple workers, configure dev.d1["<database>"].worker or pass --worker.

Run a local seed SQL file for a logical D1 database.

$ wd d1 seed payments-db

This uses dev.d1["payments-db"].seedFile by default, or --file when you want to override it for one run.

Run a local reset SQL file for a logical D1 database.

$ wd d1 reset payments-db

This is intentionally explicit. wrangler-deploy does not guess how to reset your schema. It runs the SQL file you configure in dev.d1["payments-db"].resetFile or pass with --file.
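Putting the three dev.d1 keys this page names (worker, seedFile, resetFile) together in one sketch; the surrounding export shape is an assumption:

```typescript
// Only the worker/seedFile/resetFile keys come from these docs.
export default {
  dev: {
    d1: {
      "payments-db": {
        worker: "workers/api", // used when the DB is bound in several workers
        seedFile: "sql/seed.sql", // run by `wd d1 seed payments-db`
        resetFile: "sql/reset.sql", // run by `wd d1 reset payments-db`
      },
    },
  },
};
```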

Show queue topology from wrangler-deploy.config.ts, including producers, consumers, and dead-letter relationships.

$ wd queue list
Queue topology
payment-outbox
producers: workers/api:OUTBOX_QUEUE, workers/batch-workflow:OUTBOX_QUEUE
consumers: workers/event-router
payment-outbox-dlq dead-letter for payment-outbox
producers: none
consumers: none

Inspect one queue in detail.

$ wd queue inspect payment-outbox
Queue: payment-outbox
producers: workers/api:OUTBOX_QUEUE, workers/batch-workflow:OUTBOX_QUEUE
consumers: workers/event-router
dead-letter-for: none

Send a local queue payload through a producer worker’s debug route. This avoids undocumented Miniflare internals and uses the worker’s real Queue binding.

Raw Wrangler does not give you a repo-level “send to this logical queue” workflow. wd queue send resolves the correct producer worker, current local port, and configured route from your project config first.

$ wd queue send payment-outbox --json '{"type":"batch.dispatched","data":{"batchId":"local-test"}}'
$ wd queue send --fixture payment-outbox-dispatch
queue payment-outbox -> workers/api
200 http://127.0.0.1:8788/__wd/queues/payment-outbox
{"queued":true}

Use one of:

  • --json '<payload>'
  • --file payload.json
  • --fixture <name>

Optional flags:

  • --watch to repeat the same payload on an interval
  • --every 5s to control the watch interval
  • --count 10 to stop after a fixed number of sends
  • --worker <worker> when a queue has multiple producers and you want a specific one
  • --port <number> to target a specific local dev port
  • --path <route> to override the configured local injection route

If a queue has multiple producers, configure dev.queues in wrangler-deploy.config.ts or pass --worker.
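Only the dev.queues key itself is named on this page; a guess at the value shape, mapping a queue to its preferred producer worker (mirroring what --worker selects on the command line):

```typescript
// Guessed shape — not wrangler-deploy's documented schema.
export default {
  dev: {
    queues: {
      "payment-outbox": { worker: "workers/api" },
    },
  },
};
```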

Show the shared local fixtures declared in wrangler-deploy.config.ts.

$ wd fixture list
Fixtures
api-health [worker]
workers/api endpoint=health
payment-outbox-dispatch [queue]
payment-outbox via workers/api
payments-batch-count [d1]
payments-db via workers/api sql

This is the discovery command for fixture-backed local workflows. Use it when you want reusable worker calls, queue sends, D1 queries, and local verification steps to all share the same inputs.
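A sketch of how the fixtures above might be declared: one worker fixture, one queue fixture, and one D1 fixture. The names, payload, and SQL come from examples elsewhere on this page; the field names are invented:

```typescript
// Invented field names — not wrangler-deploy's documented schema.
export default {
  fixtures: {
    "api-health": { kind: "worker", worker: "workers/api", endpoint: "health" },
    "payment-outbox-dispatch": {
      kind: "queue",
      queue: "payment-outbox",
      worker: "workers/api",
      payload: { type: "batch.dispatched", data: { batchId: "local-test" } },
    },
    "payments-batch-count": {
      kind: "d1",
      database: "payments-db",
      worker: "workers/api",
      sql: "SELECT COUNT(*) AS batch_count FROM batches;",
    },
  },
};
```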

Replay a fixture file containing a JSON array of queue payloads.

$ wd queue replay payment-outbox --file fixtures/payment-outbox.json
replay payment-outbox -> workers/api
sent 2 message(s) to http://127.0.0.1:8788/__wd/queues/payment-outbox
all messages accepted

The file must contain a top-level JSON array. Each array element is POSTed as one queue payload through the same local producer route used by wd queue send.
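The file contract is small enough to state in code; a minimal validation sketch (not the tool's actual loader):

```typescript
// The replay file must parse to a top-level JSON array of payloads.
function parseReplayFile(text: string): unknown[] {
  const data = JSON.parse(text);
  if (!Array.isArray(data)) throw new Error("replay file must be a JSON array");
  return data;
}
```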

Tail queue-related logs from the active wd dev runtime. This reads persisted per-worker dev logs written by wd dev and shows queue markers or local injection route activity.

This is another app-level workflow wrapper. Wrangler gives you process logs, but not a logical “tail queue activity for this app” command.

$ wd queue tail payment-outbox --once
Tailing queue payment-outbox
[workers/api]
[wrangler:info] POST /__wd/queues/payment-outbox 200 OK (9ms)

Use:

  • --once to print the current snapshot and exit
  • --every 1s to change the polling interval
  • --worker <worker> to restrict tailing to one related worker

Start wd dev first so runtime state and log files exist.

wd ci init [--provider github] [--branch main]


Generate a GitHub Actions workflow. Run this once to set up CI/CD for your project.

$ wd ci init
Generated .github/workflows/wrangler-deploy.yml

The generated workflow includes apply, deploy, PR comments, check runs, cleanup on PR close, production deploy on push to main, and the right GitHub token permissions.

Post or update a PR comment with worker URLs, topology diagram, resource tables, and secret status. Uses <!-- wrangler-deploy --> to update the same comment on each push. Requires GITHUB_TOKEN.

Post a GitHub check run with success/failure status. If no state exists for the stage, the check reports failure and the command exits 1, so your CI pipeline fails instead of passing silently. Requires GITHUB_TOKEN.

Run diagnostic checks. Use when setting up a new project, after upgrading wrangler, or when CI fails unexpectedly.

$ wd doctor
wrangler-deploy doctor
wrangler installed: wrangler 4.80.0
wrangler auth: logged in as jag.reehal@gmail.com
worker path: workers/api exists
worker path: workers/batch-workflow exists
worker path: workers/event-router exists
config valid: No config errors

Generate shell completion scripts for tab-completion of commands and flags.

# Zsh
wd completions --shell zsh > ~/.zfunc/_wd
# Bash
wd completions --shell bash > /etc/bash_completion.d/wd
# Fish
wd completions --shell fish > ~/.config/fish/completions/wd.fish