In version 2.1.139 (released 2026-05-11) the Claude Code CLI shipped a new built-in slash command: /goal. The official changelog describes it in one sentence:
Added
/goal command: set a completion condition and Claude keeps working across turns until it’s met. Works in interactive, -p, and Remote Control. Shows live elapsed/turns/tokens as an overlay panel.
That sentence is the entire documented surface as of this writing. The rest of this article is what that means in practice and how to put it to work without overreaching the docs.
What changed
Before /goal, every Claude Code interaction was conversational: you typed, Claude responded, you read, you typed again. Multi-step tasks required you to keep nudging – “continue”, “now run the tests”, “fix that and retry”. /goal collapses that loop. You give Claude a completion condition once, and it keeps working across turns until the condition is met, without you re-prompting on every iteration.
You can think of it as a built-in version of the “keep going until X” wrapper scripts people had been writing around Claude Code by hand.
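Those hand-rolled wrappers were mostly a retry loop around a verification command. A minimal sketch of the pattern, assuming nothing about the CLI itself — `run_until`, its arguments, and the attempt budget are all illustrative; in practice the step would be a `claude -p "..."` invocation:

```shell
#!/bin/sh
# Hand-rolled "keep going until X" loop of the kind /goal replaces.
# run_until(step, check, max): re-run the step until the check command
# exits zero, or give up after max attempts.
run_until() {
  step=$1; check=$2; max=${3:-10}
  i=0
  while ! sh -c "$check"; do
    i=$((i + 1))
    [ "$i" -ge "$max" ] && return 1  # budget exhausted, give up
    sh -c "$step"                    # one more agent turn
  done
  return 0                           # completion condition met
}

# Hypothetical usage:
#   run_until 'claude -p "fix the failing test"' 'mix test' 20
```

The same three ingredients — a step, a deterministic check, a budget — are exactly what /goal asks you to supply as a single sentence.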
The command surface (verified)
From the changelog:
- A /goal command exists.
- It sets a completion condition that persists across turns.
- It runs in three modes: interactive (normal terminal), -p (print/pipe mode for scripts), and Remote Control (web/mobile clients).
- An overlay panel shows live elapsed time, turn count, and token usage.
Anything beyond that – specific subcommands for pausing, resuming, clearing, or inspecting the goal – is not in the official changelog or release notes. If you see articles claiming /goal pause or /goal resume exist, they are extrapolating. Treat them as unverified until Anthropic documents them.
Usage in interactive mode
The natural form is:
/goal <plain-language completion condition>
Examples worth trying:
/goal Finish migrating the Express router to Fastify with every existing test still green.
/goal Get `mix credo --strict` and `mix dialyzer` to exit clean on this branch.
/goal Read every module under lib/permissions and produce asset/docs/permissions-audit.md listing each permission key, the modules that reference it, and any keys that are registered but unused.
The pattern: state the terminal condition, not the path. “Tests pass” is a goal. “Run mix test, then fix the failing assertion” is a script. Let Claude pick the path.
Usage in -p (print) mode
-p mode is for scripted use: pipe input in, get output back, suitable for make, cron, or any other automation that already speaks to shell tools.
claude -p "/goal Verify all asset/blogs/*.md files start with the public:true frontmatter; fix any that don't"
Same semantics: Claude works until the condition is met or the budget runs out, then exits with its final summary. The overlay still tracks elapsed/turns/tokens server-side; in -p mode it surfaces in the final stdout block rather than as a live panel.
This is the mode you wire into a Cron or Alarm schedule when you want the agent to grind on something overnight.
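What that wiring might look like as a crontab entry — the schedule, repo path, log path, and goal text here are all illustrative, not documented defaults:

```shell
# Crontab entry (illustrative): run a /goal job nightly at 02:00,
# appending the final summary block to a log for morning review.
0 2 * * * cd /srv/myrepo && claude -p "/goal Get mix credo --strict to exit clean on main" >> /var/log/claude-goal.log 2>&1
```

Redirecting both streams to a log matters here: in -p mode the final stdout block is the only record of what the run did.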
Usage in Remote Control mode
Remote Control is the web/mobile path – you start a session from a phone or browser and the agent runs server-side. /goal works there too. The live overlay (elapsed, turns, tokens) is genuinely useful here because the session may run for minutes or hours and you want to see whether it’s still making progress without scrolling the transcript.
Building new modules with /goal
A pattern that maps cleanly onto the command:
/goal Add a `Pushover` module under lib/pushover.ex that wraps the Pushover REST API.
It must:
- Be a registered GenServer accessed by module name.
- Own its own Sqler instance for audit logs.
- Expose `Pushover.send(user_key, message, opts \\ [])` returning {:ok, _} | {:error, _}.
- Be supervised under the main application supervisor.
- Have a test in test/pushover_test.exs covering the happy path and a 4xx failure.
Complete when `mix test test/pushover_test.exs` passes and `mix credo --strict lib/pushover.ex` exits clean.
Notice the completion condition is mechanical: a test command exits zero. Claude can verify that by running it. That makes the goal grounded – there’s a clear “done” signal instead of “looks good to me”.
For larger features (multiple modules, schema changes, route additions), keep the goal narrow:
/goal Wire the existing ItermTabs module into the MCP server so an agent with the
iterm_tabs.send permission can call iterm_tabs_send and have it dispatch via
Permissions.ItermTabs.send_text/3.
Complete when:
- mix compile produces no warnings or new diagnostics
- mix credo --strict lib/mcp_server/tools/iterm_tabs.ex exits clean
- calling the tool from the bundled MCP test harness returns status:sent
If the goal would touch a dozen modules, split it. /goal is a loop, not a planner – it will keep grinding, but if the objective is structurally too large, you’ll burn budget on the wrong path.
Tests as the verification harness
The cleanest goals end with a test command. Some shapes that work:
... Complete when `mix test --only describe:"Pushover.send/3"` passes.
... Complete when `mix test path/to/file.exs:42` passes.
... Complete when `mix test && mix dialyzer` both exit zero.
... Complete when `pytest tests/test_x.py -q` exits zero.
... Complete when `npm test -- --runInBand` exits zero.
Any deterministic command whose exit code answers “is this done?” is a usable terminal condition. Tests are the obvious one; lint passes, type checks, and build succeeds are equally valid.
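Because the loop only needs an exit code, compound conditions compose with ordinary shell operators. A self-contained sketch, with shell functions standing in for real commands like `mix test` and `mix dialyzer`:

```shell
#!/bin/sh
# Compound "done" signal: exits zero only when every sub-check does.
# The function bodies are stand-ins for real verification commands.
tests_pass() { true; }
types_pass() { false; }   # pretend dialyzer is still failing

if tests_pass && types_pass; then
  echo "goal met"        # both checks exited zero -> stop the loop
else
  echo "keep working"    # any nonzero exit -> another turn
fi
```

This is why `mix test && mix dialyzer` works as a single condition: `&&` collapses the two checks into one exit code.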
What does not work as a completion condition:
- Subjective phrasing like “the code is clean” or “looks production-ready” – no clear stopping rule, the loop will exhaust budget.
- Network-dependent conditions where the upstream is flaky – the loop will spin retrying things that aren’t Claude’s fault.
- Conditions Claude can’t verify itself (e.g. “the staging deploy succeeds” without giving it a way to check).
If you want a soft goal, write a hard one alongside it: “Complete when mix test passes AND lib/foo.ex is under 200 lines.”
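The line-count clause is checkable with a one-line shell test. A self-contained sketch — the file is fabricated here for illustration; in a real goal it would be the existing lib/foo.ex:

```shell
#!/bin/sh
# Mechanical stand-in for "the code is clean": a size gate the loop
# can verify itself. The sample file is created purely for the demo.
mkdir -p lib
printf 'defmodule Foo do\nend\n' > lib/foo.ex

if [ "$(wc -l < lib/foo.ex)" -lt 200 ]; then
  echo "size gate: pass"
else
  echo "size gate: fail"
fi
```

Anything expressible as a `[ ... ]` test — file size, line count, presence of a string — is a valid hard clause to bolt onto a soft goal.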
Verify what Claude did, not just what it claimed
The overlay panel tells you the loop terminated, but it doesn’t tell you the work is correct. Before accepting a /goal run as done:
- Read the diff. Don’t rely on Claude’s self-report – inspect what actually landed.
- Re-run the verification command yourself. Out of band of the loop, in a fresh shell.
- Check for collateral damage. Did the goal touch files outside its stated scope? /goal won’t volunteer that; git status and git diff will.
This is the same trust-but-verify rule that applies to any agent-driven change. /goal doesn’t change the rule; it just makes the work cheaper to start.
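The scope check is plain git. A throwaway-repo demo of what `git diff` surfaces that an agent’s self-report might not — every path here is invented for the demo:

```shell
#!/bin/sh
# Demo: one change in scope, one collateral change outside it.
# git diff lists both; a self-report might mention only the first.
repo=$(mktemp -d); cd "$repo"
git init -q
mkdir -p lib test
printf 'v1\n' > lib/target.ex
printf 'v1\n' > test/unrelated.exs
git add -A
git -c user.email=ci@example.com -c user.name=ci commit -qm init

printf 'v2\n' > lib/target.ex        # the change the goal asked for
printf 'v2\n' > test/unrelated.exs   # collateral damage

git diff --name-only                 # prints both paths; eyeball the scope
```

A diff that is wider than the goal statement is the cheapest red flag you can check.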
Practical limits
A few worth knowing:
- Budget is finite. A /goal run consumes tokens and wall clock. If the objective is genuinely large, the loop terminates at the budget limit, not at completion. The overlay’s token counter is the variable to watch.
- One goal at a time. The changelog wording (“set a completion condition”) implies a single active goal per session. There’s no documented multi-goal queue.
- No documented control surface beyond setting the goal. Pause/resume/clear are not in the docs. If you need to stop a runaway loop, the documented escape is the same as for any Claude Code session: Ctrl+C in interactive mode, kill the process in -p mode.
- Mode matters. Interactive shows the live overlay; -p surfaces it only in the final output. Plan accordingly when wiring into automation.
When to reach for /goal
Use it when:
- The task has a mechanical stopping rule (a test passes, a lint check exits zero, a file matches a spec).
- The path to that stopping rule has multiple turns of work and you don’t want to babysit.
- You’re running in -p mode from cron or a workflow runner and need the agent to keep going on its own.
Don’t use it when:
- The objective is exploratory (“figure out what’s wrong with the build”). Use a normal conversation – you need to be in the loop.
- The completion condition is subjective. The loop has no exit and will burn budget.
- The work needs human review at each step (sensitive refactors, schema migrations, anything touching auth or money).
Bottom line
/goal is a thin primitive: one command, one completion condition, one loop. Most of its power comes from the discipline you bring to writing the condition. A goal that ends in a deterministic command is worth running; one that ends in “looks good” is not.
When in doubt, write the test first, then write the goal that has to make it pass.