Merge ad1ecc166999ad69a048b8d0f5809de4f7d0f8ab into 9fb78453e088cd7b553d7779faa0de5c83708e70

This commit is contained in:
Berlin Luk 2026-03-20 22:02:54 -07:00 committed by GitHub
commit 5bc1da58c5
7 changed files with 798 additions and 1841 deletions


---
# Coding Agent
Use **bash** (with optional background mode) for all coding agent work. Simple and effective.
## Agent/PTY Matrix
| Agent | PTY | Command pattern |
|-------|-----|-----------------|
| Codex | ✅ required | `codex exec "prompt"` / `codex --yolo` / `codex --full-auto` |
| Pi | ✅ required | `pi "prompt"` / `pi -p "prompt"` |
| OpenCode | ✅ required | `opencode run "prompt"` |
| Claude Code | ❌ no PTY | `claude --permission-mode bypassPermissions --print "prompt"` |
**Why no PTY for Claude Code**: `--dangerously-skip-permissions` under a PTY exits after the confirmation dialog; `--print` keeps full tool access without interactive prompts.
**Why git repo for Codex**: Codex refuses to run outside a trusted git dir. Use `mktemp -d && git init` for scratch work.
## Quick Start
```bash
# Codex one-shot (PTY required)
exec pty:true workdir:~/project command:"codex exec --full-auto 'Add error handling to API calls'"

# Claude Code background (no PTY)
exec workdir:~/project background:true command:"claude --permission-mode bypassPermissions --print 'Refactor auth module'"

# Scratch work (Codex needs a git repo)
SCRATCH=$(mktemp -d) && cd $SCRATCH && git init
exec pty:true workdir:$SCRATCH command:"codex exec 'Your prompt'"
```
## Background Pattern (long tasks)
```bash
# Start
exec pty:true workdir:~/project background:true command:"codex --yolo 'Build snake game'"
# → returns sessionId
```
### Bash Tool Parameters
| Parameter | Type | Description |
| ------------ | ------- | --------------------------------------------------------------------------- |
| `command` | string | The shell command to run |
| `pty` | boolean | **Use for coding agents!** Allocates a pseudo-terminal for interactive CLIs |
| `workdir` | string | Working directory (agent sees only this folder's context) |
| `background` | boolean | Run in background, returns sessionId for monitoring |
| `timeout` | number | Timeout in seconds (kills process on expiry) |
| `elevated` | boolean | Run on host instead of sandbox (if allowed) |
### Process Tool Actions (for background sessions)
| Action | Description |
| ----------- | ---------------------------------------------------- |
| `list` | List all running/recent sessions |
| `poll` | Check if session is still running |
| `log` | Get session output (with optional offset/limit) |
| `write` | Send raw data to stdin |
| `submit` | Send data + newline (like typing and pressing Enter) |
| `send-keys` | Send key tokens or hex bytes |
| `paste` | Paste text (with optional bracketed mode) |
| `kill` | Terminate the session |
---
## Monitoring Background Sessions
```bash
# Monitor
process action:log sessionId:XXX
process action:poll sessionId:XXX

# Interact
process action:submit sessionId:XXX data:"yes"   # send input + Enter
process action:write sessionId:XXX data:"y"      # raw stdin

# Kill
process action:kill sessionId:XXX
```
**Why workdir matters:** Agent wakes up in a focused directory, doesn't wander off reading unrelated files (like your soul.md 😅).
## PR Review
⚠️ **Never review PRs in OpenClaw's own project folder.** Clone to temp or use worktree.
```bash
# Clone to temp for safe review
REVIEW_DIR=$(mktemp -d)
git clone https://github.com/user/repo.git $REVIEW_DIR && cd $REVIEW_DIR && gh pr checkout 130
exec pty:true workdir:$REVIEW_DIR command:"codex review --base origin/main"
# Clean up afterwards: trash $REVIEW_DIR

# Worktree (keeps main intact)
git worktree add /tmp/pr-130 pr-130-branch
exec pty:true workdir:/tmp/pr-130 command:"codex review --base main"
```
### Batch PR Reviews (parallel army!)
```bash
# Fetch all PR refs first
git fetch origin '+refs/pull/*/head:refs/remotes/origin/pr/*'
# Deploy the army - one Codex per PR (all with PTY!)
bash pty:true workdir:~/project background:true command:"codex exec 'Review PR #86. git diff origin/main...origin/pr/86'"
bash pty:true workdir:~/project background:true command:"codex exec 'Review PR #87. git diff origin/main...origin/pr/87'"
# Monitor all
process action:list
# Post results to GitHub
gh pr comment <PR#> --body "<review content>"
```
---
## Claude Code
```bash
# Foreground
bash workdir:~/project command:"claude --permission-mode bypassPermissions --print 'Your task'"
# Background
bash workdir:~/project background:true command:"claude --permission-mode bypassPermissions --print 'Your task'"
```
---
## OpenCode
```bash
bash pty:true workdir:~/project command:"opencode run 'Your task'"
```
---
## Pi Coding Agent
```bash
# Install: npm install -g @mariozechner/pi-coding-agent
bash pty:true workdir:~/project command:"pi 'Your task'"
# Non-interactive mode (PTY still recommended)
bash pty:true command:"pi -p 'Summarize src/'"
# Different provider/model
bash pty:true command:"pi --provider openai --model gpt-4o-mini -p 'Your task'"
```
**Note:** Pi now has Anthropic prompt caching enabled (PR #584, merged Jan 2026)!
---
## Parallel Issue Fixing with git worktrees
For fixing multiple issues in parallel, use git worktrees:
```bash
# Create worktrees
git worktree add -b fix/issue-78 /tmp/issue-78 main
git worktree add -b fix/issue-99 /tmp/issue-99 main

# Launch in parallel (background + PTY)
exec pty:true workdir:/tmp/issue-78 background:true command:"pnpm install && codex --yolo 'Fix issue #78: <description>. Commit and push.'"
exec pty:true workdir:/tmp/issue-99 background:true command:"pnpm install && codex --yolo 'Fix issue #99: <description>. Commit and push.'"

# Monitor
process action:list
process action:log sessionId:XXX

# Create PRs after fixes
cd /tmp/issue-78 && git push -u origin fix/issue-78
gh pr create --repo user/repo --head fix/issue-78 --title "fix: ..." --body "..."

# Cleanup
git worktree remove /tmp/issue-78
git worktree remove /tmp/issue-99
```
---
## Auto-Notify on Completion
Append a wake trigger to any long-running prompt so OpenClaw gets notified the moment the agent finishes (instead of waiting for the next heartbeat):
```
... your task.
When completely finished, run:
openclaw system event --text "Done: [brief summary]" --mode now
```
## Codex Flags
| Flag | Effect |
|------|--------|
| `exec "prompt"` | One-shot, exits when done |
| `--full-auto` | Auto-approves changes (sandboxed) |
| `--yolo` | No sandbox, no approvals (fastest) |
**Default model**: `gpt-5.2-codex` (set in `~/.codex/config.toml`)
## Rules
1. Right execution mode per agent (see matrix above)
2. Respect tool choice — if user asks for Codex, use Codex. Don't silently take over if agent fails.
3. Be patient — don't kill slow sessions
4. `--full-auto` for building, vanilla for reviewing
5. Parallel is OK: run many agents at once for batch work
6. **NEVER start Codex in `~/.openclaw/`** — it reads soul docs and gets weird ideas
7. **NEVER checkout branches in `~/Projects/openclaw/`** — live OpenClaw instance
## Progress Updates
- Send 1 message when starting (what's running, where)
- Update only on: milestone complete, agent asks question, error, agent finishes
- If you kill a session, immediately say why

---
# gh-issues — Auto-fix GitHub Issues with Parallel Sub-agents
You are an orchestrator. Follow these 6 phases exactly. Do not skip phases.
**IMPORTANT**: No `gh` CLI. Use curl + the GitHub REST API exclusively. The GH_TOKEN env var is already injected by OpenClaw; pass it as a Bearer token in all API calls:
```
curl -s -H "Authorization: Bearer $GH_TOKEN" -H "Accept: application/vnd.github+json" ...
```
For detailed phase instructions and sub-agent prompt templates, read `references/phases.md`.
---
## Phases Overview
| Phase | Name | Key action |
|-------|------|------------|
| 1 | Parse Arguments | Extract flags; if `--reviews-only` jump to Phase 6; if `--cron` force `--yes` |
| 2 | Fetch Issues | Resolve GH_TOKEN → curl issues API → filter out PRs |
| 3 | Present & Confirm | Show table; skip if `--dry-run`; auto-confirm if `--yes` |
| 4 | Pre-flight Checks | Dirty tree, base branch, remote access, token validity, existing PRs/branches, claims |
| 5 | Spawn Sub-agents | Cron: one agent fire-and-forget; Normal: up to 8 parallel |
| 6 | PR Review Handler | Fetch reviews, filter actionable, spawn review-fix agents |
---
## Phase 1 — Parse Arguments
Parse the arguments string provided after `/gh-issues`.
Positional:
- owner/repo — optional. This is the source repo to fetch issues from. If omitted, detect it from the current git remote (`git remote get-url origin`) and extract owner/repo from the URL (handles both HTTPS and SSH formats):
  - HTTPS: `https://github.com/owner/repo.git` → `owner/repo`
  - SSH: `git@github.com:owner/repo.git` → `owner/repo`
  If not in a git repo or no remote is found, stop with an error asking the user to specify owner/repo.
Flags (all optional):
| Flag | Default | Description |
|------|---------|-------------|
| --label | _(none)_ | Filter by label (e.g. `bug`, `enhancement`) |
| --limit | 10 | Max issues to fetch per poll |
| --milestone | _(none)_ | Filter by milestone title |
| --assignee | _(none)_ | Filter by assignee (`@me` for self) |
| --state | open | Issue state: open, closed, all |
| --fork | _(none)_ | Your fork (`user/repo`) to push branches and open PRs from. Issues are fetched from the source repo; code is pushed to the fork; PRs are opened from the fork to the source repo. |
| --watch | false | Keep polling for new issues and PR reviews after each batch |
| --interval | 5 | Minutes between polls (only with `--watch`) |
| --dry-run | false | Fetch and display only — no sub-agents |
| --yes | false | Skip confirmation and auto-process all filtered issues |
| --reviews-only | false | Skip issue processing (Phases 2-5). Only run Phase 6 — check open PRs for review comments and address them. |
| --cron | false | Cron-safe mode: fetch issues and spawn sub-agents, exit without waiting for results. |
| --model | _(none)_ | Model to use for sub-agents (e.g. `glm-5`, `zai/glm-5`). If not specified, uses the agent's default model. |
| --notify-channel | _(none)_ | Telegram channel ID to send final PR summary to (e.g. -1002381931352). Only the final result with PR links is sent, not status updates. |
Store parsed values for use in subsequent phases.
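The owner/repo extraction described above can be sketched as follows; the `sed` patterns are illustrative, not part of the skill:

```shell
# Derive "owner/repo" from an HTTPS or SSH GitHub remote URL.
parse_owner_repo() {
  # Strip the protocol/host prefix for both URL styles, then the .git suffix.
  echo "$1" | sed -E 's#^(https://github\.com/|git@github\.com:)##; s#\.git$##'
}

parse_owner_repo "https://github.com/owner/repo.git"   # → owner/repo
parse_owner_repo "git@github.com:owner/repo.git"       # → owner/repo
```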
Derived values:
- SOURCE_REPO = the positional owner/repo (where issues live)
- PUSH_REPO = --fork value if provided, otherwise same as SOURCE_REPO
- FORK_MODE = true if --fork was provided, false otherwise
**If `--reviews-only` is set:** Run token resolution (from Phase 2) first, then jump straight to Phase 6.
**If `--cron` is set:**
- Force `--yes` (skip confirmation)
- If `--reviews-only` is also set, run token resolution then jump to Phase 6 (cron review mode)
- Otherwise, proceed normally through Phases 2-5 with cron-mode behavior active
---
## Phase 2 — Fetch Issues
**Token Resolution:**
First, ensure GH_TOKEN is available. Check environment:
```
echo $GH_TOKEN
```
If empty, read from config:
```
cat ~/.openclaw/openclaw.json | jq -r '.skills.entries["gh-issues"].apiKey // empty'
```
If still empty, check `/data/.clawdbot/openclaw.json`:
```
cat /data/.clawdbot/openclaw.json | jq -r '.skills.entries["gh-issues"].apiKey // empty'
```
Export as GH_TOKEN for subsequent commands:
```
export GH_TOKEN="<token>"
```
Build and run a curl request to the GitHub Issues API via exec:
```
curl -s -H "Authorization: Bearer $GH_TOKEN" -H "Accept: application/vnd.github+json" \
"https://api.github.com/repos/{SOURCE_REPO}/issues?per_page={limit}&state={state}&{query_params}"
```
Where {query_params} is built from:
- labels={label} if --label was provided
- milestone={milestone} if --milestone was provided (note: API expects milestone _number_, so if user provides a title, first resolve it via GET /repos/{SOURCE_REPO}/milestones and match by title)
- assignee={assignee} if --assignee was provided (if @me, first resolve your username via `GET /user`)
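The milestone-title resolution can be sketched with jq; the response below is a minimal stub of the milestones endpoint, and jq availability is assumed:

```shell
# The issues API wants the milestone *number*, so map a title to its number.
resolve_milestone() {
  # $1 = milestones JSON (from GET /repos/{SOURCE_REPO}/milestones), $2 = title
  echo "$1" | jq -r --arg title "$2" '.[] | select(.title == $title) | .number'
}

# Hypothetical stub response for illustration.
MILESTONES='[{"number":3,"title":"v2.0"},{"number":5,"title":"v2.1"}]'
resolve_milestone "$MILESTONES" "v2.0"   # → 3
```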
IMPORTANT: The GitHub Issues API also returns pull requests. Filter them out — exclude any item where pull_request key exists in the response object.
If in watch mode: Also filter out any issue numbers already in the PROCESSED_ISSUES set from previous batches.
Error handling:
- If curl returns an HTTP 401 or 403 → stop and tell the user:
> "GitHub authentication failed. Please check your apiKey in the OpenClaw dashboard or in ~/.openclaw/openclaw.json under skills.entries.gh-issues."
- If the response is an empty array (after filtering) → report "No issues found matching filters" and stop (or loop back if in watch mode).
- If curl fails or returns any other error → report the error verbatim and stop.
Parse the JSON response. For each issue, extract: number, title, body, labels (array of label names), assignees, html_url.
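The PR filtering and field extraction can be sketched with jq on a stubbed response (shape per the REST API; values are illustrative):

```shell
# Items with a "pull_request" key are PRs, not issues: drop them,
# then keep only the fields used later. RESPONSE is a hypothetical stub.
RESPONSE='[{"number":1,"title":"A PR","pull_request":{}},{"number":2,"title":"A real issue"}]'
echo "$RESPONSE" | jq -c '[.[] | select(has("pull_request") | not) | {number, title}]'
# → [{"number":2,"title":"A real issue"}]
```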
---
## Phase 3 — Present & Confirm
Display a markdown table of fetched issues:
| # | Title | Labels |
| --- | ----------------------------- | ------------- |
| 42 | Fix null pointer in parser | bug, critical |
| 37 | Add retry logic for API calls | enhancement |
If FORK_MODE is active, also display:
> "Fork mode: branches will be pushed to {PUSH_REPO}, PRs will target `{SOURCE_REPO}`"
If `--dry-run` is active:
- Display the table and stop. Do not proceed to Phase 4.
If `--yes` is active:
- Display the table for visibility
- Auto-process ALL listed issues without asking for confirmation
- Proceed directly to Phase 4
Otherwise:
Ask the user to confirm which issues to process:
- "all" — process every listed issue
- Comma-separated numbers (e.g. `42, 37`) — process only those
- "cancel" — abort entirely
Wait for user response before proceeding.
Watch mode note: On the first poll, always confirm with the user (unless --yes is set). On subsequent polls, auto-process all new issues without re-confirming (the user already opted in). Still display the table so they can see what's being processed.
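The confirmation replies described above can be parsed with a small helper; the function name and argument layout are hypothetical:

```shell
# Turn the user's reply into a list of issue numbers.
# $1 = reply ("all", "cancel", or comma-separated numbers), $2 = all listed numbers.
parse_confirmation() {
  case "$1" in
    all)    echo "$2" ;;
    cancel) echo "" ;;
    *)      echo "$1" | tr ',' ' ' | tr -s ' ' ;;
  esac
}

parse_confirmation "all" "42 37"      # → 42 37
parse_confirmation "42, 37" "42 37"   # → 42 37
```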
---
## Phase 4 — Pre-flight Checks
Run these checks sequentially via exec:
1. **Dirty working tree check:**
```
git status --porcelain
```
If output is non-empty, warn the user:
> "Working tree has uncommitted changes. Sub-agents will create branches from HEAD — uncommitted changes will NOT be included. Continue?"
> Wait for confirmation. If declined, stop.
2. **Record base branch:**
```
git rev-parse --abbrev-ref HEAD
```
Store as BASE_BRANCH.
3. **Verify remote access:**
If FORK_MODE:
- Verify the fork remote exists. Check if a git remote named `fork` exists:
```
git remote get-url fork
```
If it doesn't exist, add it:
```
git remote add fork https://x-access-token:$GH_TOKEN@github.com/{PUSH_REPO}.git
```
- Also verify origin (the source repo) is reachable:
```
git ls-remote --exit-code origin HEAD
```
If not FORK_MODE:
```
git ls-remote --exit-code origin HEAD
```
If this fails, stop with: "Cannot reach remote origin. Check your network and git config."
4. **Verify GH_TOKEN validity:**
```
curl -s -o /dev/null -w "%{http_code}" -H "Authorization: Bearer $GH_TOKEN" https://api.github.com/user
```
If HTTP status is not 200, stop with:
> "GitHub authentication failed. Please check your apiKey in the OpenClaw dashboard or in ~/.openclaw/openclaw.json under skills.entries.gh-issues."
5. **Check for existing PRs:**
For each confirmed issue number N, run:
```
curl -s -H "Authorization: Bearer $GH_TOKEN" -H "Accept: application/vnd.github+json" \
"https://api.github.com/repos/{SOURCE_REPO}/pulls?head={PUSH_REPO_OWNER}:fix/issue-{N}&state=open&per_page=1"
```
(Where PUSH_REPO_OWNER is the owner portion of `PUSH_REPO`)
If the response array is non-empty, remove that issue from the processing list and report:
> "Skipping #{N} — PR already exists: {html_url}"
If all issues are skipped, report and stop (or loop back if in watch mode).
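The PUSH_REPO_OWNER derivation used in the `head` filter can be done with parameter expansion (the repo value here is hypothetical):

```shell
PUSH_REPO="someuser/somefork"      # hypothetical fork
PUSH_REPO_OWNER=${PUSH_REPO%%/*}   # strip everything from the first slash on
echo "$PUSH_REPO_OWNER"            # → someuser
```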
6. **Check for in-progress branches (no PR yet = sub-agent still working):**
For each remaining issue number N (not already skipped by the PR check above), check if a `fix/issue-{N}` branch exists on the **push repo** (which may be a fork, not origin):
```
curl -s -o /dev/null -w "%{http_code}" \
-H "Authorization: Bearer $GH_TOKEN" \
"https://api.github.com/repos/{PUSH_REPO}/branches/fix/issue-{N}"
```
If HTTP 200 → the branch exists on the push repo but no open PR was found for it in step 5. Skip that issue:
> "Skipping #{N} — branch fix/issue-{N} exists on {PUSH_REPO}, fix likely in progress"
This check uses the GitHub API instead of `git ls-remote` so it works correctly in fork mode (where branches are pushed to the fork, not origin).
If all issues are skipped after this check, report and stop (or loop back if in watch mode).
7. **Check claim-based in-progress tracking:**
This prevents duplicate processing when a sub-agent from a previous cron run is still working but hasn't pushed a branch or opened a PR yet.
Read the claims file (create empty `{}` if missing):
```
CLAIMS_FILE="/data/.clawdbot/gh-issues-claims.json"
if [ ! -f "$CLAIMS_FILE" ]; then
mkdir -p /data/.clawdbot
echo '{}' > "$CLAIMS_FILE"
fi
```
Parse the claims file. For each entry, check if the claim timestamp is older than 2 hours. If so, remove it (expired — the sub-agent likely finished or failed silently). Write back the cleaned file:
```
CLAIMS=$(cat "$CLAIMS_FILE")
CUTOFF=$(date -u -d '2 hours ago' +%Y-%m-%dT%H:%M:%SZ 2>/dev/null || date -u -v-2H +%Y-%m-%dT%H:%M:%SZ)
CLAIMS=$(echo "$CLAIMS" | jq --arg cutoff "$CUTOFF" 'to_entries | map(select(.value > $cutoff)) | from_entries')
echo "$CLAIMS" > "$CLAIMS_FILE"
```
For each remaining issue number N (not already skipped by steps 5 or 6), check if `{SOURCE_REPO}#{N}` exists as a key in the claims file.
If claimed and not expired → skip:
> "Skipping #{N} — sub-agent claimed this issue {minutes}m ago, still within timeout window"
Where `{minutes}` is calculated from the claim timestamp to now.
If all issues are skipped after this check, report and stop (or loop back if in watch mode).
---
## Phase 5 — Spawn Sub-agents (Parallel)
**Cron mode (`--cron` is active):**
- **Sequential cursor tracking:** Use a cursor file to track which issue to process next:
```
CURSOR_FILE="/data/.clawdbot/gh-issues-cursor-{SOURCE_REPO_SLUG}.json"
# SOURCE_REPO_SLUG = owner-repo with slashes replaced by hyphens (e.g., openclaw-openclaw)
```
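The slug derivation in the comment above, as a command (repo value hypothetical):

```shell
SOURCE_REPO="openclaw/openclaw"    # hypothetical
SOURCE_REPO_SLUG=$(echo "$SOURCE_REPO" | tr '/' '-')
echo "$SOURCE_REPO_SLUG"           # → openclaw-openclaw
```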
Read the cursor file (create if missing):
```
if [ ! -f "$CURSOR_FILE" ]; then
echo '{"last_processed": null, "in_progress": null}' > "$CURSOR_FILE"
fi
```
- `last_processed`: issue number of the last completed issue (or null if none)
- `in_progress`: issue number currently being processed (or null)
- **Select next issue:** Filter the fetched issues list to find the first issue where:
- Issue number > last_processed (if last_processed is set)
- AND issue is not in the claims file (not already in progress)
- AND no PR exists for the issue (checked in Phase 4 step 5)
- AND no branch exists on the push repo (checked in Phase 4 step 6)
- If no eligible issue is found after the last_processed cursor, wrap around to the beginning (start from the oldest eligible issue).
- If an eligible issue is found:
1. Mark it as in_progress in the cursor file
2. Spawn a single sub-agent for that one issue with `cleanup: "keep"` and `runTimeoutSeconds: 3600`
3. If `--model` was provided, include `model: "{MODEL}"` in the spawn config
4. If `--notify-channel` was provided, include the channel in the task so the sub-agent can notify
5. Do NOT await the sub-agent result — fire and forget
6. **Write claim:** After spawning, read the claims file, add `{SOURCE_REPO}#{N}` with the current ISO timestamp, and write it back
7. Immediately report: "Spawned fix agent for #{N} — will create PR when complete"
8. Exit the skill. Do not proceed to Results Collection or Phase 6.
- If no eligible issue is found (all issues either have PRs, have branches, or are in progress), report "No eligible issues to process — all issues have PRs/branches or are in progress" and exit.
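The next-issue selection with wrap-around can be sketched in plain shell; `next_issue` and its inputs are hypothetical, and the eligibility filtering is assumed to have happened already:

```shell
# $1 = last_processed cursor, remaining args = eligible issue numbers (ascending).
next_issue() {
  last="$1"; shift
  for n in "$@"; do
    # First issue strictly after the cursor wins.
    [ "$n" -gt "$last" ] && { echo "$n"; return; }
  done
  # Nothing after the cursor: wrap around to the oldest eligible issue.
  [ $# -gt 0 ] && echo "$1"
}

next_issue 42 7 41 57 99   # → 57
next_issue 99 7 41 57 99   # → 7 (wrap-around)
```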
**Normal mode (`--cron` is NOT active):**
For each confirmed issue, spawn a sub-agent using sessions_spawn. Launch up to 8 concurrently (matching `subagents.maxConcurrent: 8`). If more than 8 issues, batch them — launch the next agent as each completes.
**Write claims:** After spawning each sub-agent, read the claims file, add `{SOURCE_REPO}#{N}` with the current ISO timestamp, and write it back (same procedure as cron mode above). This covers interactive usage where watch mode might overlap with cron runs.
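The claim write can be sketched as a read-modify-write with jq; the temp path and repo/issue values are illustrative (the real claims file is `/data/.clawdbot/gh-issues-claims.json`):

```shell
CLAIMS_FILE=$(mktemp)              # stand-in for the real claims file path
echo '{}' > "$CLAIMS_FILE"

# Add "{SOURCE_REPO}#{N}" with the current UTC timestamp.
NOW=$(date -u +%Y-%m-%dT%H:%M:%SZ)
jq --arg key "owner/repo#42" --arg ts "$NOW" '.[$key] = $ts' "$CLAIMS_FILE" \
  > "$CLAIMS_FILE.tmp" && mv "$CLAIMS_FILE.tmp" "$CLAIMS_FILE"
```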
### Sub-agent Task Prompt
For each issue, construct the following prompt and pass it to sessions_spawn. Variables to inject into the template:
- {SOURCE_REPO} — upstream repo where the issue lives
- {PUSH_REPO} — repo to push branches to (same as SOURCE_REPO unless fork mode)
- {FORK_MODE} — true/false
- {PUSH_REMOTE} — `fork` if FORK_MODE, otherwise `origin`
- {number}, {title}, {url}, {labels}, {body} — from the issue
- {BASE_BRANCH} — from Phase 4
- {notify_channel} — Telegram channel ID for notifications (empty if not set). Replace {notify_channel} in the template below with the value of `--notify-channel` flag (or leave as empty string if not provided).
When constructing the task, replace all template variables including {notify_channel} with actual values.
```
You are a focused code-fix agent. Your task is to fix a single GitHub issue and open a PR.
IMPORTANT: Do NOT use the gh CLI — it is not installed. Use curl with the GitHub REST API for all GitHub operations.
First, ensure GH_TOKEN is set. Check: `echo $GH_TOKEN`. If empty, read from config:
GH_TOKEN=$(cat ~/.openclaw/openclaw.json 2>/dev/null | jq -r '.skills.entries["gh-issues"].apiKey // empty') || GH_TOKEN=$(cat /data/.clawdbot/openclaw.json 2>/dev/null | jq -r '.skills.entries["gh-issues"].apiKey // empty')
Use the token in all GitHub API calls:
curl -s -H "Authorization: Bearer $GH_TOKEN" -H "Accept: application/vnd.github+json" ...
<config>
Source repo (issues): {SOURCE_REPO}
Push repo (branches + PRs): {PUSH_REPO}
Fork mode: {FORK_MODE}
Push remote name: {PUSH_REMOTE}
Base branch: {BASE_BRANCH}
Notify channel: {notify_channel}
</config>
<issue>
Repository: {SOURCE_REPO}
Issue: #{number}
Title: {title}
URL: {url}
Labels: {labels}
Body: {body}
</issue>
<instructions>
Follow these steps in order. If any step fails, report the failure and stop.
0. SETUP — Ensure GH_TOKEN is available:
```
export GH_TOKEN=$(node -e "const fs=require('fs'); const c=JSON.parse(fs.readFileSync('/data/.clawdbot/openclaw.json','utf8')); console.log(c.skills?.entries?.['gh-issues']?.apiKey || '')")
```
If that fails, also try:
```
export GH_TOKEN=$(cat ~/.openclaw/openclaw.json 2>/dev/null | node -e "const fs=require('fs');const d=JSON.parse(fs.readFileSync(0,'utf8'));console.log(d.skills?.entries?.['gh-issues']?.apiKey||'')")
```
Verify: echo "Token: ${GH_TOKEN:0:10}..."
1. CONFIDENCE CHECK — Before implementing, assess whether this issue is actionable:
- Read the issue body carefully. Is the problem clearly described?
- Search the codebase (grep/find) for the relevant code. Can you locate it?
- Is the scope reasonable? (single file/function = good, whole subsystem = bad)
- Is a specific fix suggested or is it a vague complaint?
Rate your confidence (1-10). If confidence < 7, STOP and report:
> "Skipping #{number}: Low confidence (score: N/10) — [reason: vague requirements | cannot locate code | scope too large | no clear fix suggested]"
Only proceed if confidence >= 7.
2. BRANCH — Create a feature branch from the base branch:
git checkout -b fix/issue-{number} {BASE_BRANCH}
3. ANALYZE — Search the codebase to find relevant files:
- Use grep/find via exec to locate code related to the issue
- Read the relevant files to understand the current behavior
- Identify the root cause
4. IMPLEMENT — Make the minimal, focused fix:
- Follow existing code style and conventions
- Change only what is necessary to fix the issue
- Do not add unrelated changes or new dependencies without justification
5. TEST — Discover and run the existing test suite if one exists:
- Look for package.json scripts, Makefile targets, pytest, cargo test, etc.
- Run the relevant tests
- If tests fail after your fix, attempt ONE retry with a corrected approach
- If tests still fail, report the failure
6. COMMIT — Stage and commit your changes:
git add {changed_files}
git commit -m "fix: {short_description}
Fixes {SOURCE_REPO}#{number}"
7. PUSH — Push the branch:
First, ensure the push remote uses token auth and disable credential helpers:
git config --global credential.helper ""
git remote set-url {PUSH_REMOTE} https://x-access-token:$GH_TOKEN@github.com/{PUSH_REPO}.git
Then push:
GIT_ASKPASS=true git push -u {PUSH_REMOTE} fix/issue-{number}
8. PR — Create a pull request using the GitHub API:
If FORK_MODE is true, the PR goes from your fork to the source repo:
- head = "{PUSH_REPO_OWNER}:fix/issue-{number}"
- base = "{BASE_BRANCH}"
- PR is created on {SOURCE_REPO}
If FORK_MODE is false:
- head = "fix/issue-{number}"
- base = "{BASE_BRANCH}"
- PR is created on {SOURCE_REPO}
curl -s -X POST \
-H "Authorization: Bearer $GH_TOKEN" \
-H "Accept: application/vnd.github+json" \
https://api.github.com/repos/{SOURCE_REPO}/pulls \
-d '{
"title": "fix: {title}",
"head": "{head_value}",
"base": "{BASE_BRANCH}",
"body": "## Summary\n\n{one_paragraph_description_of_fix}\n\n## Changes\n\n{bullet_list_of_changes}\n\n## Testing\n\n{what_was_tested_and_results}\n\nFixes {SOURCE_REPO}#{number}"
}'
Extract the `html_url` from the response — this is the PR link.
9. REPORT — Send back a summary:
- PR URL (the html_url from step 8)
- Files changed (list)
- Fix summary (1-2 sentences)
- Any caveats or concerns
10. NOTIFY (if notify_channel is set) — If {notify_channel} is not empty, send a notification to the Telegram channel:
```
Use the message tool with:
- action: "send"
- channel: "telegram"
- target: "{notify_channel}"
- message: "✅ PR Created: {SOURCE_REPO}#{number}
{title}
{pr_url}
Files changed: {files_changed_list}"
```
</instructions>
<constraints>
- No force-push, no modifying the base branch
- No unrelated changes or gratuitous refactoring
- No new dependencies without strong justification
- If the issue is unclear or too complex to fix confidently, report your analysis instead of guessing
- Do NOT use the gh CLI — it is not available. Use curl + GitHub REST API for all GitHub operations.
- GH_TOKEN is already in the environment — do NOT prompt for auth
- Time limit: you have 60 minutes max. Be thorough — analyze properly, test your fix, don't rush.
</constraints>
```
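Step 8's `html_url` extraction can be sketched with `jq` (the helper name and error handling are illustrative, not part of the prompt):

```shell
# Pull the PR link out of the create-PR response, or surface the API error.
extract_pr_url() {  # usage: extract_pr_url "$CURL_RESPONSE_JSON"
  local url
  url=$(printf '%s' "$1" | jq -r '.html_url // empty')
  if [ -z "$url" ]; then
    printf 'PR creation failed: %s\n' "$(printf '%s' "$1" | jq -r '.message // "unknown error"')" >&2
    return 1
  fi
  printf '%s\n' "$url"
}
```

Checking `.message` on failure matters because the API returns HTTP 422 with a JSON error body (not an `html_url`) when the branch or base is invalid.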
### Spawn configuration per sub-agent:
- runTimeoutSeconds: 3600 (60 minutes)
- cleanup: "keep" (preserve transcripts for review)
- If `--model` was provided, include `model: "{MODEL}"` in the spawn config
### Timeout Handling
If a sub-agent exceeds 60 minutes, record it as:
> "#{N} — Timed out (issue may be too complex for auto-fix)"
---
## Results Collection
**If `--cron` is active:** Skip this section entirely — the orchestrator already exited after spawning in Phase 5.
After ALL sub-agents complete (or timeout), collect their results. Store the list of successfully opened PRs in `OPEN_PRS` (PR number, branch name, issue number, PR URL) for use in Phase 6.
Present a summary table:
| Issue | Status | PR | Notes |
| --------------------- | --------- | ------------------------------ | ------------------------------ |
| #42 Fix null pointer | PR opened | https://github.com/.../pull/99 | 3 files changed |
| #37 Add retry logic | Failed | -- | Could not identify target code |
| #15 Update docs | Timed out | -- | Too complex for auto-fix |
| #8 Fix race condition | Skipped | -- | PR already exists |
**Status values:**
- **PR opened** — success, link to PR
- **Failed** — sub-agent could not complete (include reason in Notes)
- **Timed out** — exceeded 60-minute limit
- **Skipped** — existing PR detected in pre-flight
End with a one-line summary:
> "Processed {N} issues: {success} PRs opened, {failed} failed, {skipped} skipped."
**Send notification to channel (if --notify-channel is set):**
If `--notify-channel` was provided, send the final summary to that Telegram channel using the `message` tool:
```
Use the message tool with:
- action: "send"
- channel: "telegram"
- target: "{notify-channel}"
- message: "✅ GitHub Issues Processed
Processed {N} issues: {success} PRs opened, {failed} failed, {skipped} skipped.
{PR_LIST}"
Where PR_LIST includes only successfully opened PRs in format:
• #{issue_number}: {PR_url} ({notes})
```
Then proceed to Phase 6.
---
## Phase 6 — PR Review Handler
This phase monitors open PRs (created by this skill or pre-existing `fix/issue-*` PRs) for review comments and spawns sub-agents to address them.
**When this phase runs:**
- After Results Collection (Phases 2-5 completed) — checks PRs that were just opened
- When `--reviews-only` flag is set — skips Phases 2-5 entirely, runs only this phase
- In watch mode — runs every poll cycle after checking for new issues
**Cron review mode (`--cron --reviews-only`):**
When both `--cron` and `--reviews-only` are set:
1. Run token resolution (Phase 2 token section)
2. Discover open `fix/issue-*` PRs (Step 6.1)
3. Fetch review comments (Step 6.2)
4. **Analyze comment content for actionability** (Step 6.3)
5. If actionable comments are found, spawn ONE review-fix sub-agent for the first PR with unaddressed comments — fire-and-forget (do NOT await result)
- Use `cleanup: "keep"` and `runTimeoutSeconds: 3600`
- If `--model` was provided, include `model: "{MODEL}"` in the spawn config
6. Report: "Spawned review handler for PR #{N} — will push fixes when complete"
7. Exit the skill immediately. Do not proceed to Step 6.5 (Review Results).
If no actionable comments found, report "No actionable review comments found" and exit.
**Normal mode (non-cron) continues below:**
### Step 6.1 — Discover PRs to Monitor
Collect PRs to check for review comments:
**If coming from Phase 5:** Use the `OPEN_PRS` list from Results Collection.
**If `--reviews-only` or subsequent watch cycle:** Fetch all open PRs with `fix/issue-` branch pattern:
```
curl -s -H "Authorization: Bearer $GH_TOKEN" -H "Accept: application/vnd.github+json" \
"https://api.github.com/repos/{SOURCE_REPO}/pulls?state=open&per_page=100"
```
Filter to only PRs where `head.ref` starts with `fix/issue-`.
For each PR, extract: `number` (PR number), `head.ref` (branch name), `html_url`, `title`, `body`.
If no PRs found, report "No open fix/ PRs to monitor" and stop (or loop back if in watch mode).
### Step 6.2 — Fetch All Review Sources
For each PR, fetch reviews from multiple sources:
**Fetch PR reviews:**
```
curl -s -H "Authorization: Bearer $GH_TOKEN" -H "Accept: application/vnd.github+json" \
"https://api.github.com/repos/{SOURCE_REPO}/pulls/{pr_number}/reviews"
```
**Fetch PR review comments (inline/file-level):**
```
curl -s -H "Authorization: Bearer $GH_TOKEN" -H "Accept: application/vnd.github+json" \
"https://api.github.com/repos/{SOURCE_REPO}/pulls/{pr_number}/comments"
```
**Fetch PR issue comments (general conversation):**
```
curl -s -H "Authorization: Bearer $GH_TOKEN" -H "Accept: application/vnd.github+json" \
"https://api.github.com/repos/{SOURCE_REPO}/issues/{pr_number}/comments"
```
**Fetch PR body for embedded reviews:**
Some review tools (like Greptile) embed their feedback directly in the PR body. Check for:
- `<!-- greptile_comment -->` markers
- Other structured review sections in the PR body
```
curl -s -H "Authorization: Bearer $GH_TOKEN" -H "Accept: application/vnd.github+json" \
"https://api.github.com/repos/{SOURCE_REPO}/pulls/{pr_number}"
```
Extract the `body` field and parse for embedded review content.
### Step 6.3 — Analyze Comments for Actionability
**Determine the bot's own username** for filtering:
```
curl -s -H "Authorization: Bearer $GH_TOKEN" https://api.github.com/user | jq -r '.login'
```
Store as `BOT_USERNAME`. Exclude any comment where `user.login` equals `BOT_USERNAME`.
**For each comment/review, analyze the content to determine if it requires action:**
**NOT actionable (skip):**
- Pure approvals or "LGTM" without suggestions
- Bot comments that are informational only (CI status, auto-generated summaries without specific requests)
- Comments already addressed (check if bot replied with "Addressed in commit...")
- Reviews with state `APPROVED` and no inline comments requesting changes
**IS actionable (requires attention):**
- Reviews with state `CHANGES_REQUESTED`
- Reviews with state `COMMENTED` that contain specific requests:
- "this test needs to be updated"
- "please fix", "change this", "update", "can you", "should be", "needs to"
- "will fail", "will break", "causes an error"
- Mentions of specific code issues (bugs, missing error handling, edge cases)
- Inline review comments pointing out issues in the code
- Embedded reviews in PR body that identify:
- Critical issues or breaking changes
- Test failures expected
- Specific code that needs attention
- Confidence scores with concerns
**Parse embedded review content (e.g., Greptile):**
Look for sections marked with `<!-- greptile_comment -->` or similar. Extract:
- Summary text
- Any mentions of "Critical issue", "needs attention", "will fail", "test needs to be updated"
- Confidence scores below 4/5 (indicates concerns)
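The confidence-score extraction can be sketched as follows (the `Confidence ... N/5` wording is an assumption about the embedded review's format):

```shell
# Grab the first "Confidence ... N/5" score from a PR body; empty if none found.
confidence_score() {  # usage: confidence_score "$PR_BODY"
  printf '%s' "$1" | grep -oiE 'confidence[^0-9]*[0-9]+/5' | grep -oE '[0-9]+' | head -n1
}
```

A returned score below 4 would then flag the PR as having concerns.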
**Build actionable_comments list** with:
- Source (review, inline comment, PR body, etc.)
- Author
- Body text
- For inline: file path and line number
- Specific action items identified
If no actionable comments found across any PR, report "No actionable review comments found" and stop (or loop back if in watch mode).
### Step 6.4 — Present Review Comments
Display a table of PRs with pending actionable comments:
```
| PR | Branch | Actionable Comments | Sources |
|----|--------|---------------------|---------|
| #99 | fix/issue-42 | 2 comments | @reviewer1, greptile |
| #101 | fix/issue-37 | 1 comment | @reviewer2 |
```
If `--yes` is NOT set and this is not a subsequent watch poll: ask the user to confirm which PRs to address ("all", comma-separated PR numbers, or "skip").
### Step 6.5 — Spawn Review Fix Sub-agents (Parallel)
For each PR with actionable comments, spawn a sub-agent. Launch up to 8 concurrently.
**Review fix sub-agent prompt:**
```
You are a PR review handler agent. Your task is to address review comments on a pull request by making the requested changes, pushing updates, and replying to each comment.
IMPORTANT: Do NOT use the gh CLI — it is not installed. Use curl with the GitHub REST API for all GitHub operations.
First, ensure GH_TOKEN is set. Check: echo $GH_TOKEN. If empty, read from config:
GH_TOKEN=$(jq -r '.skills.entries["gh-issues"].apiKey // empty' ~/.openclaw/openclaw.json 2>/dev/null)
[ -z "$GH_TOKEN" ] && GH_TOKEN=$(jq -r '.skills.entries["gh-issues"].apiKey // empty' /data/.clawdbot/openclaw.json 2>/dev/null)
(Note: do not chain these with `||` — a missing key yields empty output with exit status 0, so the fallback would never run.)
<config>
Repository: {SOURCE_REPO}
Push repo: {PUSH_REPO}
Fork mode: {FORK_MODE}
Push remote: {PUSH_REMOTE}
PR number: {pr_number}
PR URL: {pr_url}
Branch: {branch_name}
</config>
<review_comments>
{json_array_of_actionable_comments}
Each comment has:
- id: comment ID (for replying)
- user: who left it
- body: the comment text
- path: file path (for inline comments)
- line: line number (for inline comments)
- diff_hunk: surrounding diff context (for inline comments)
- source: where the comment came from (review, inline, pr_body, greptile, etc.)
</review_comments>
<instructions>
Follow these steps in order:
0. SETUP — Ensure GH_TOKEN is available:
```
export GH_TOKEN=$(node -e "const fs=require('fs'); const c=JSON.parse(fs.readFileSync('/data/.clawdbot/openclaw.json','utf8')); console.log(c.skills?.entries?.['gh-issues']?.apiKey || '')")
```
Verify: echo "Token: ${GH_TOKEN:0:10}..."
1. CHECKOUT — Switch to the PR branch:
git fetch {PUSH_REMOTE} {branch_name}
git checkout {branch_name}
git pull {PUSH_REMOTE} {branch_name}
2. UNDERSTAND — Read ALL review comments carefully. Group them by file. Understand what each reviewer is asking for.
3. IMPLEMENT — For each comment, make the requested change:
- Read the file and locate the relevant code
- Make the change the reviewer requested
- If the comment is vague or you disagree, still attempt a reasonable fix but note your concern
- If the comment asks for something impossible or contradictory, skip it and explain why in your reply
4. TEST — Run existing tests to make sure your changes don't break anything:
- If tests fail, fix the issue or revert the problematic change
- Note any test failures in your replies
5. COMMIT — Stage and commit all changes in a single commit:
git add {changed_files}
git commit -m "fix: address review comments on PR #{pr_number}
Addresses review feedback from {reviewer_names}"
6. PUSH — Push the updated branch:
git config --global credential.helper ""
git remote set-url {PUSH_REMOTE} https://x-access-token:$GH_TOKEN@github.com/{PUSH_REPO}.git
GIT_ASKPASS=true git push {PUSH_REMOTE} {branch_name}
7. REPLY — For each addressed comment, post a reply:
For inline review comments (have a path/line), reply to the comment thread:
curl -s -X POST \
-H "Authorization: Bearer $GH_TOKEN" \
-H "Accept: application/vnd.github+json" \
https://api.github.com/repos/{SOURCE_REPO}/pulls/{pr_number}/comments/{comment_id}/replies \
-d '{"body": "Addressed in commit {short_sha} — {brief_description_of_change}"}'
For general PR comments (issue comments), reply on the PR:
curl -s -X POST \
-H "Authorization: Bearer $GH_TOKEN" \
-H "Accept: application/vnd.github+json" \
https://api.github.com/repos/{SOURCE_REPO}/issues/{pr_number}/comments \
-d '{"body": "Addressed feedback from @{reviewer}:\n\n{summary_of_changes_made}\n\nUpdated in commit {short_sha}"}'
For comments you could NOT address, reply explaining why:
"Unable to address this comment: {reason}. This may need manual review."
8. REPORT — Send back a summary:
- PR URL
- Number of comments addressed vs skipped
- Commit SHA
- Files changed
- Any comments that need manual attention
</instructions>
<constraints>
- Only modify files relevant to the review comments
- Do not make unrelated changes
- Do not force-push — always regular push
- If a comment contradicts another comment, address the most recent one and flag the conflict
- Do NOT use the gh CLI — use curl + GitHub REST API
- GH_TOKEN is already in the environment — do not prompt for auth
- Time limit: 60 minutes max
</constraints>
```
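Step 7's endpoint choice can be sketched as (helper name is illustrative):

```shell
# Inline review comments get threaded replies; general comments get a new issue comment.
reply_endpoint() {  # usage: reply_endpoint OWNER/REPO PR_NUMBER [INLINE_COMMENT_ID]
  local repo="$1" pr="$2" cid="${3:-}"
  if [ -n "$cid" ]; then
    printf 'https://api.github.com/repos/%s/pulls/%s/comments/%s/replies\n' "$repo" "$pr" "$cid"
  else
    printf 'https://api.github.com/repos/%s/issues/%s/comments\n' "$repo" "$pr"
  fi
}
```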
**Spawn configuration per sub-agent:**
- runTimeoutSeconds: 3600 (60 minutes)
- cleanup: "keep" (preserve transcripts for review)
- If `--model` was provided, include `model: "{MODEL}"` in the spawn config
### Step 6.6 — Review Results
After all review sub-agents complete, present a summary:
```
| PR | Comments Addressed | Comments Skipped | Commit | Status |
|----|-------------------|-----------------|--------|--------|
| #99 fix/issue-42 | 3 | 0 | abc123f | All addressed |
| #101 fix/issue-37 | 1 | 1 | def456a | 1 needs manual review |
```
Add comment IDs from this batch to `ADDRESSED_COMMENTS` set to prevent re-processing.
---
## Watch Mode (if --watch is active)
After presenting results from the current batch:
1. Add all issue numbers from this batch to the running set PROCESSED_ISSUES.
2. Add all addressed comment IDs to ADDRESSED_COMMENTS.
3. Tell the user:
> "Next poll in {interval} minutes... (say 'stop' to end watch mode)"
4. Sleep for {interval} minutes.
5. Go back to **Phase 2 — Fetch Issues**. The fetch will automatically filter out:
- Issues already in PROCESSED_ISSUES
- Issues that have existing fix/issue-{N} PRs (caught in Phase 4 pre-flight)
6. After Phases 2-5 (or if no new issues), run **Phase 6** to check for new review comments on ALL tracked PRs (both newly created and previously opened).
7. If no new issues AND no new actionable review comments → report "No new activity. Polling again in {interval} minutes..." and loop back to step 4.
8. The user can say "stop" at any time to exit watch mode. When stopping, present a final cumulative summary of ALL batches — issues processed AND review comments addressed.
**Context hygiene between polls — IMPORTANT:**
Only retain between poll cycles:
- PROCESSED_ISSUES (set of issue numbers)
- ADDRESSED_COMMENTS (set of comment IDs)
- OPEN_PRS (list of tracked PRs: number, branch, URL)
- Cumulative results (one line per issue + one line per review batch)
- Parsed arguments from Phase 1
- BASE_BRANCH, SOURCE_REPO, PUSH_REPO, FORK_MODE, BOT_USERNAME
Do NOT retain issue bodies, comment bodies, sub-agent transcripts, or codebase analysis between polls.
## Flags
| Flag | Default | Description |
|------|---------|-------------|
| `--label` | _(none)_ | Filter by label |
| `--limit` | 10 | Max issues to fetch |
| `--milestone` | _(none)_ | Filter by milestone title |
| `--assignee` | _(none)_ | Filter by assignee (`@me` for self) |
| `--state` | open | Issue state: open/closed/all |
| `--fork` | _(none)_ | Fork `user/repo` to push branches and PRs from |
| `--watch` | false | Keep polling after each batch |
| `--interval` | 5 | Minutes between polls (with `--watch`) |
| `--dry-run` | false | Fetch and display only — no sub-agents |
| `--yes` | false | Skip confirmation, auto-process all |
| `--reviews-only` | false | Skip Phases 2-5, only run Phase 6 |
| `--cron` | false | Fire-and-forget: spawn one agent and exit |
| `--model` | _(none)_ | Model for sub-agents (e.g. `glm-5`) |
| `--notify-channel` | _(none)_ | Telegram channel ID for final PR summary |
## Key Invariants
- **GH_TOKEN resolution**: env → `~/.openclaw/openclaw.json` → `/data/.clawdbot/openclaw.json` (key: `.skills.entries["gh-issues"].apiKey`)
- **Claims file**: `/data/.clawdbot/gh-issues-claims.json` — prevents duplicate processing; entries expire after 2h
- **Cursor file** (cron mode): `/data/.clawdbot/gh-issues-cursor-{SOURCE_REPO_SLUG}.json` — sequential issue tracking
- **Max concurrent sub-agents**: 8
- **Sub-agent timeout**: 3600s; `cleanup: "keep"`
- **PR branch pattern**: `fix/issue-{N}`
- **BOT_USERNAME**: Resolve via `GET /user` — exclude own comments in Phase 6
## Sub-agent Confidence Gate
Sub-agents self-assess before implementing. If confidence < 7/10, they skip and report why (vague requirements, can't locate code, scope too large, no clear fix).

---
# gh-issues — Full Phase Instructions & Sub-agent Prompts
## Phase 1 — Parse Arguments
Parse the arguments string provided after /gh-issues.
Positional:
- `owner/repo` — optional. If omitted, detect from git remote:
`git remote get-url origin`
Extract owner/repo (HTTPS: `https://github.com/owner/repo.git`; SSH: `git@github.com:owner/repo.git`).
If not in a git repo, stop with error asking user to specify.
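The remote-URL parsing can be sketched as (function name is illustrative; handles both HTTPS and SSH forms):

```shell
# Extract owner/repo from a GitHub remote URL.
parse_repo() {  # usage: parse_repo "$(git remote get-url origin)"
  local url="${1%.git}"
  case "$url" in
    git@github.com:*) printf '%s\n' "${url#git@github.com:}" ;;
    https://github.com/*) printf '%s\n' "${url#https://github.com/}" ;;
    *) return 1 ;;
  esac
}
```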
Derived values:
- `SOURCE_REPO` = positional owner/repo (where issues live)
- `PUSH_REPO` = `--fork` value if provided, otherwise same as SOURCE_REPO
- `FORK_MODE` = true if `--fork` was provided
**If `--reviews-only` is set:** Run token resolution (Phase 2), then jump to Phase 6.
**If `--cron` is set:** Force `--yes`. If also `--reviews-only`, jump to Phase 6 after token resolution.
---
## Phase 2 — Fetch Issues
**Token Resolution:**
```bash
echo $GH_TOKEN
# If empty:
cat ~/.openclaw/openclaw.json | jq -r '.skills.entries["gh-issues"].apiKey // empty'
# If still empty:
cat /data/.clawdbot/openclaw.json | jq -r '.skills.entries["gh-issues"].apiKey // empty'
export GH_TOKEN="<token>"
```
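The fallback chain can be wrapped in a helper (a sketch; the config paths are the ones listed above):

```shell
# env var wins; otherwise try user config, then system config.
resolve_gh_token() {
  local t="${GH_TOKEN:-}"
  [ -z "$t" ] && t=$(jq -r '.skills.entries["gh-issues"].apiKey // empty' ~/.openclaw/openclaw.json 2>/dev/null)
  [ -z "$t" ] && t=$(jq -r '.skills.entries["gh-issues"].apiKey // empty' /data/.clawdbot/openclaw.json 2>/dev/null)
  printf '%s\n' "$t"
}
export GH_TOKEN="$(resolve_gh_token)"
```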
Fetch issues:
```bash
curl -s -H "Authorization: Bearer $GH_TOKEN" -H "Accept: application/vnd.github+json" \
"https://api.github.com/repos/{SOURCE_REPO}/issues?per_page={limit}&state={state}&{query_params}"
```
`{query_params}`: `labels=`, `milestone=` (resolve title→number via GET /milestones), `assignee=` (@me → resolve via GET /user).
**Filter out PRs** (exclude items where `pull_request` key exists).
In watch mode: also filter PROCESSED_ISSUES from previous batches.
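The parameter resolution and PR filtering can be sketched with `jq` (helper names are illustrative):

```shell
# Resolve a milestone title to its number from GET /milestones output.
milestone_number() {  # usage: milestone_number TITLE "$MILESTONES_JSON"
  printf '%s' "$2" | jq -r --arg t "$1" '.[] | select(.title == $t) | .number' | head -n1
}

# The /issues endpoint also returns PRs; keep only true issues.
filter_issues() {  # usage: filter_issues "$ISSUES_JSON"
  printf '%s' "$1" | jq '[ .[] | select(has("pull_request") | not) ]'
}
```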
Errors: 401/403 → auth error message. Empty array → "No issues found". Other error → report verbatim.
---
## Phase 3 — Present & Confirm
Display markdown table: `# | Title | Labels`
If FORK_MODE: show "Fork mode: branches → {PUSH_REPO}, PRs target {SOURCE_REPO}"
- `--dry-run`: Display and stop.
- `--yes`: Display and auto-process all.
- Otherwise: Ask "all", comma-separated numbers, or "cancel". Wait for response.
Watch mode: First poll always confirms (unless `--yes`). Subsequent polls auto-process.
---
## Phase 4 — Pre-flight Checks
Run sequentially:
1. **Dirty tree**: `git status --porcelain` — warn if non-empty, wait for confirmation.
2. **Base branch**: `git rev-parse --abbrev-ref HEAD` → store as `BASE_BRANCH`.
3. **Remote access**:
- Fork mode: ensure `fork` remote exists (`git remote get-url fork`), add if missing:
`git remote add fork https://x-access-token:$GH_TOKEN@github.com/{PUSH_REPO}.git`
- Run `git ls-remote --exit-code origin HEAD`
4. **Token validity**: `curl -s -o /dev/null -w "%{http_code}" -H "Authorization: Bearer $GH_TOKEN" https://api.github.com/user` → must be 200.
5. **Existing PRs**: For each issue N:
```bash
curl -s -H "Authorization: Bearer $GH_TOKEN" -H "Accept: application/vnd.github+json" \
"https://api.github.com/repos/{SOURCE_REPO}/pulls?head={PUSH_REPO_OWNER}:fix/issue-{N}&state=open&per_page=1"
```
Non-empty → skip and report PR URL.
6. **In-progress branches**: Check `https://api.github.com/repos/{PUSH_REPO}/branches/fix/issue-{N}` → HTTP 200 = skip.
7. **Claims check**:
```bash
CLAIMS_FILE="/data/.clawdbot/gh-issues-claims.json"
# Create if missing, clean entries older than 2h, check {SOURCE_REPO}#{N} key
```
If claimed and not expired → skip with age in minutes.
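The claims check can be sketched as follows, assuming the file maps `owner/repo#N` keys to Unix-epoch claim timestamps (the file path is parameterized here for illustration):

```shell
CLAIMS_FILE="${CLAIMS_FILE:-/data/.clawdbot/gh-issues-claims.json}"

# True if the issue is claimed and the claim is younger than 2 hours.
claim_active() {  # usage: claim_active "owner/repo#42"
  local ts cutoff
  [ -f "$CLAIMS_FILE" ] || return 1
  cutoff=$(( $(date +%s) - 7200 ))
  ts=$(jq -r --arg k "$1" '.[$k] // empty' "$CLAIMS_FILE")
  [ -n "$ts" ] && [ "$ts" -gt "$cutoff" ]
}
```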
---
## Phase 5 — Spawn Sub-agents (Parallel)
### Cron mode
Use cursor file `/data/.clawdbot/gh-issues-cursor-{SOURCE_REPO_SLUG}.json`:
```json
{"last_processed": null, "in_progress": null}
```
Find first eligible issue: `number > last_processed`, not in claims, no PR, no branch.
If none found (wrap-around), report "No eligible issues" and exit.
1. Mark as `in_progress` in cursor file
2. Spawn one sub-agent (fire-and-forget, do NOT await)
3. Write claim to claims file
4. Report "Spawned fix agent for #{N}" and **exit**
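The cursor selection above can be sketched as (a sketch; issue numbers come from the fetched list after the claim/PR/branch filters):

```shell
# First issue number strictly greater than the cursor, in ascending order.
next_issue() {  # usage: next_issue LAST_PROCESSED N1 N2 ...
  local last="$1" n
  shift
  for n in $(printf '%s\n' "$@" | sort -n); do
    if [ "$n" -gt "$last" ]; then printf '%s\n' "$n"; return 0; fi
  done
  return 1  # none eligible — caller handles wrap-around
}
```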
### Normal mode
Spawn up to 8 sub-agents concurrently. Write claims after each spawn.
### Sub-agent Task Prompt Template
Replace all `{variables}` with actual values before passing to sessions_spawn:
```
You are a focused code-fix agent. Fix a single GitHub issue and open a PR.
IMPORTANT: Do NOT use the gh CLI. Use curl + GitHub REST API. GH_TOKEN is in env.
First ensure GH_TOKEN is set:
export GH_TOKEN=$(node -e "const fs=require('fs'); const c=JSON.parse(fs.readFileSync('/data/.clawdbot/openclaw.json','utf8')); console.log(c.skills?.entries?.['gh-issues']?.apiKey || '')")
Fallback: cat ~/.openclaw/openclaw.json | jq -r '.skills.entries["gh-issues"].apiKey // empty'
Verify: echo "Token: ${GH_TOKEN:0:10}..."
<config>
Source repo: {SOURCE_REPO}
Push repo: {PUSH_REPO}
Fork mode: {FORK_MODE}
Push remote: {PUSH_REMOTE}
Base branch: {BASE_BRANCH}
Notify channel: {notify_channel}
</config>
<issue>
Repository: {SOURCE_REPO}
Issue: #{number}
Title: {title}
URL: {url}
Labels: {labels}
Body: {body}
</issue>
Steps:
0. SETUP — Verify GH_TOKEN (above)
1. CONFIDENCE CHECK — Rate 1-10. If < 7, STOP: "Skipping #{number}: Low confidence (N/10) [reason]"
2. UNDERSTAND — Read issue, identify what to change and where
3. BRANCH — `git checkout -b fix/issue-{number} {BASE_BRANCH}`
4. ANALYZE — grep/find relevant files, read them, identify root cause
5. IMPLEMENT — Minimal focused fix, follow existing style, no unrelated changes
6. TEST — Find and run existing test suite; if fail, one retry; if still fail, report
7. COMMIT — `git commit -m "fix: {short_description}" -m "Fixes {SOURCE_REPO}#{number}"` (two -m flags; "\n" is not expanded inside double quotes)
8. PUSH:
git config --global credential.helper ""
git remote set-url {PUSH_REMOTE} https://x-access-token:$GH_TOKEN@github.com/{PUSH_REPO}.git
GIT_ASKPASS=true git push -u {PUSH_REMOTE} fix/issue-{number}
9. PR — POST to https://api.github.com/repos/{SOURCE_REPO}/pulls:
- Fork mode: head="{PUSH_REPO_OWNER}:fix/issue-{number}"
- Normal: head="fix/issue-{number}"
- base="{BASE_BRANCH}", body includes "Fixes {SOURCE_REPO}#{number}"
Extract html_url from response.
10. REPORT — PR URL, files changed, fix summary, caveats
11. NOTIFY (if {notify_channel} set) — message tool: action=send, channel=telegram, target={notify_channel}
Constraints: no force-push, no unrelated changes, no new deps without justification, 60min max.
```
**Spawn config**: `runTimeoutSeconds: 3600`, `cleanup: "keep"`, add `model` if `--model` provided.
---
## Results Collection
*(Skip if `--cron` active — orchestrator already exited.)*
After all sub-agents complete, present summary table:
`| Issue | Status | PR | Notes |`
Statuses: PR opened / Failed / Timed out / Skipped
End: "Processed {N} issues: {success} PRs opened, {failed} failed, {skipped} skipped."
If `--notify-channel`: send final summary to Telegram with PR list.
Store `OPEN_PRS` (PR number, branch, issue number, PR URL) for Phase 6.
---
## Phase 6 — PR Review Handler
**When it runs:**
- After Results Collection (normal flow)
- `--reviews-only`: skip Phases 2-5, run only this phase
- `--cron --reviews-only`: one review-fix agent fire-and-forget, then exit
### Step 6.1 — Discover PRs
From Phase 5 OPEN_PRS, or fetch:
```bash
curl -s -H "Authorization: Bearer $GH_TOKEN" -H "Accept: application/vnd.github+json" \
"https://api.github.com/repos/{SOURCE_REPO}/pulls?state=open&per_page=100"
```
Filter: `head.ref` starts with `fix/issue-`.
### Step 6.2 — Fetch Reviews
For each PR, fetch all three sources:
- `GET /repos/{SOURCE_REPO}/pulls/{pr_number}/reviews`
- `GET /repos/{SOURCE_REPO}/pulls/{pr_number}/comments`
- `GET /repos/{SOURCE_REPO}/issues/{pr_number}/comments`
- `GET /repos/{SOURCE_REPO}/pulls/{pr_number}` → parse `body` for embedded reviews (e.g. `<!-- greptile_comment -->`)
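The three comment feeds can be concatenated into one list for the actionability pass (sketch):

```shell
# Merge review, inline-comment, and issue-comment arrays into one JSON array.
merge_comments() {  # usage: merge_comments "$REVIEWS" "$INLINE" "$ISSUE_COMMENTS"
  printf '%s\n%s\n%s\n' "$1" "$2" "$3" | jq -s 'add'
}
```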
### Step 6.3 — Actionability Filter
Resolve `BOT_USERNAME` via `GET /user`. Exclude own comments.
**NOT actionable**: pure LGTM/approvals, informational bot comments, already-addressed comments (bot replied "Addressed in commit..."), APPROVED reviews with no inline requests.
**IS actionable**: `CHANGES_REQUESTED` reviews, `COMMENTED` reviews with specific requests ("please fix", "change this", "update", "will fail", "needs to"), inline comments pointing out bugs, embedded reviews with critical issues or confidence < 4/5.
Build `actionable_comments` list: source, author, body, file path + line (inline), action items.
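The keyword heuristic can be sketched as follows (phrases taken from the list above; extend as needed):

```shell
# Rough actionability test for a single review/comment.
is_actionable() {  # usage: is_actionable REVIEW_STATE "$COMMENT_BODY"
  [ "$1" = "CHANGES_REQUESTED" ] && return 0
  printf '%s' "$2" | grep -qiE 'please fix|change this|update|will fail|will break|needs to'
}
```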
### Step 6.4 — Present
Table: `| PR | Branch | Actionable Comments | Sources |`
If not `--yes` and not subsequent watch poll: confirm which PRs to address.
### Step 6.5 — Spawn Review-Fix Sub-agents
```
You are a PR review handler. Address review comments, push fixes, reply to each comment.
No gh CLI. Use curl + GitHub REST API. GH_TOKEN in env.
<config>
Repo: {SOURCE_REPO} | Push repo: {PUSH_REPO}
Fork mode: {FORK_MODE} | Push remote: {PUSH_REMOTE}
PR: #{pr_number} ({pr_url}) | Branch: {branch_name}
</config>
<review_comments>
{json_array_of_actionable_comments}
(id, user, body, path, line, diff_hunk, source)
</review_comments>
Steps:
1. CHECKOUT — git fetch {PUSH_REMOTE} {branch_name} && git checkout {branch_name} && git pull
2. UNDERSTAND — Group comments by file, understand all requests
3. IMPLEMENT — Make each requested change; flag contradictions
4. TEST — Run tests; fix failures or revert problematic change
5. COMMIT — git commit -m "fix: address review comments on PR #{pr_number}"
6. PUSH — set-url with token, GIT_ASKPASS=true git push
7. REPLY — For inline comments: POST /pulls/{pr_number}/comments/{id}/replies
For general: POST /issues/{pr_number}/comments
Reply: "Addressed in commit {sha} — {description}"
If skipped: "Unable to address: {reason}"
8. REPORT — PR URL, comments addressed vs skipped, commit SHA, files changed
Constraints: only modify files relevant to review comments, no force-push, 60min max.
```
**Spawn config**: `runTimeoutSeconds: 3600`, `cleanup: "keep"`, add `model` if `--model` provided.
### Step 6.6 — Review Results
Table: `| PR | Comments Addressed | Comments Skipped | Commit | Status |`
Add addressed comment IDs to `ADDRESSED_COMMENTS` to prevent re-processing.
---
## Watch Mode
After each batch:
1. Add issue numbers → `PROCESSED_ISSUES`; comment IDs → `ADDRESSED_COMMENTS`
2. "Next poll in {interval} minutes... (say 'stop' to end)"
3. Sleep {interval} minutes → back to Phase 2
4. After Phases 2-5, always run Phase 6 for ALL tracked PRs
5. No new activity → "No new activity. Polling again in {interval} minutes..."
**Context hygiene between polls** — retain only: PROCESSED_ISSUES, ADDRESSED_COMMENTS, OPEN_PRS, cumulative results, parsed args, BASE_BRANCH, SOURCE_REPO, PUSH_REPO, FORK_MODE, BOT_USERNAME. Discard issue bodies, comment bodies, sub-agent transcripts.

---
# Skill Creator
This skill provides guidance for creating effective skills.
Skills are modular, self-contained packages that extend agent capabilities. They're "onboarding guides" for specific domains, providing specialized workflows, tool integrations, domain expertise, and bundled resources.
For full design patterns and creation process, read `references/design-guide.md`.
## Core Principles
### Concise is Key
**Concise is key.** The context window is a public good, shared with the system prompt, conversation history, other skills' metadata, and the actual user request. Default assumption: the agent is already very smart. Only add what it doesn't already know, and challenge each sentence: "Does this justify its token cost?"
**Progressive Disclosure** — 3 levels:
1. `name + description` — always in bootstrap (~100 words)
2. `SKILL.md body` — loaded on trigger (<500 lines target)
3. `references/`, `scripts/`, `assets/` — loaded as needed
### Set Appropriate Degrees of Freedom
**Match specificity to fragility**: high freedom (text instructions) when multiple approaches are valid and heuristics guide the work; medium freedom (pseudocode or parameterized scripts) when a preferred pattern exists; low freedom (specific scripts, few parameters) when operations are fragile and a specific sequence must be followed. A narrow bridge with cliffs needs guardrails; an open field allows many routes.
Prefer concise examples over verbose explanations.
## Skill Structure
```
skill-name/
├── SKILL.md (required)   ← YAML frontmatter (name, description) + lean Markdown instructions
├── scripts/              ← executable code (Python/Bash/etc.), run without loading into context
├── references/           ← documentation loaded into context on demand
└── assets/               ← files used in output (templates, icons, fonts)
```
## SKILL.md Anatomy
- **Frontmatter** (YAML): `name` and `description`. These are the only fields the agent reads to decide when a skill triggers, so be clear and comprehensive about what the skill does and when it should be used.
- **Body** (Markdown): Instructions and guidance for using the skill. Loaded only AFTER the skill triggers (if at all).
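A minimal frontmatter sketch (hypothetical skill; note the `description` covers both what the skill does and when to use it):

```yaml
---
name: pdf-tools
description: Rotate, merge, split, and extract text from PDF files. Use when the user asks to manipulate or read PDF documents.
---
```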
#### Bundled Resources (optional)
##### Scripts (`scripts/`)
Executable code (Python/Bash/etc.) for tasks that require deterministic reliability or are repeatedly rewritten.
- **When to include**: When the same code is being rewritten repeatedly or deterministic reliability is needed
- **Example**: `scripts/rotate_pdf.py` for PDF rotation tasks
- **Benefits**: Token efficient, deterministic, may be executed without loading into context
- **Note**: Scripts may still need to be read by Codex for patching or environment-specific adjustments
##### References (`references/`)
Documentation and reference material intended to be loaded as needed into context to inform Codex's process and thinking.
- **When to include**: For documentation that Codex should reference while working
- **Examples**: `references/finance.md` for financial schemas, `references/mnda.md` for company NDA template, `references/policies.md` for company policies, `references/api_docs.md` for API specifications
- **Use cases**: Database schemas, API documentation, domain knowledge, company policies, detailed workflow guides
- **Benefits**: Keeps SKILL.md lean, loaded only when Codex determines it's needed
- **Best practice**: If files are large (>10k words), include grep search patterns in SKILL.md
- **Avoid duplication**: Information should live in either SKILL.md or a references file, not both. Keep only essential procedural instructions and workflow guidance in SKILL.md; move detailed reference material, schemas, and examples to references files. This keeps SKILL.md lean while leaving the detail discoverable without hogging the context window.
##### Assets (`assets/`)
Files not intended to be loaded into context, but rather used within the output Codex produces.
- **When to include**: When the skill needs files that will be used in the final output
- **Examples**: `assets/logo.png` for brand assets, `assets/slides.pptx` for PowerPoint templates, `assets/frontend-template/` for HTML/React boilerplate, `assets/font.ttf` for typography
- **Use cases**: Templates, images, icons, boilerplate code, fonts, sample documents that get copied or modified
- **Benefits**: Separates output resources from documentation, enables Codex to use files without loading them into context
#### What Not to Include in a Skill
A skill should only contain essential files that directly support its functionality. Do NOT create extraneous documentation or auxiliary files, including:
- README.md
- INSTALLATION_GUIDE.md
- QUICK_REFERENCE.md
- CHANGELOG.md
- etc.
The skill should only contain the information needed for an AI agent to do the job at hand. It should not contain auxiliary context about the process that went into creating it, setup and testing procedures, user-facing documentation, etc. Creating additional documentation files just adds clutter and confusion.
### Progressive Disclosure Design Principle
Skills use a three-level loading system to manage context efficiently:
1. **Metadata (name + description)** - Always in context (~100 words)
2. **SKILL.md body** - When skill triggers (<5k words)
3. **Bundled resources** - As needed by Codex (effectively unlimited, since scripts can be executed without being read into the context window)
#### Progressive Disclosure Patterns
Keep SKILL.md body to the essentials and under 500 lines to minimize context bloat. Split content into separate files when approaching this limit. When splitting out content into other files, it is very important to reference them from SKILL.md and describe clearly when to read them, to ensure the reader of the skill knows they exist and when to use them.
**Key principle:** When a skill supports multiple variations, frameworks, or options, keep only the core workflow and selection guidance in SKILL.md. Move variant-specific details (patterns, examples, configuration) into separate reference files.
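The 500-line budget above can be checked mechanically before shipping a skill; a minimal sketch (the `my-skill/` layout is illustrative):

```shell
# flag a SKILL.md that is outgrowing the 500-line budget
mkdir -p my-skill
printf '# Demo\nSecond line.\n' > my-skill/SKILL.md   # stand-in skill body

lines=$(wc -l < my-skill/SKILL.md)
if [ "$lines" -gt 500 ]; then
  echo "SKILL.md is $lines lines: split detail into references/"
else
  echo "SKILL.md is $lines lines: within budget"
fi
```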
**Pattern 1: High-level guide with references**
```markdown
# PDF Processing
## Quick start
Extract text with pdfplumber:
[code example]
## Advanced features
- **Form filling**: See [FORMS.md](FORMS.md) for complete guide
- **API reference**: See [REFERENCE.md](REFERENCE.md) for all methods
- **Examples**: See [EXAMPLES.md](EXAMPLES.md) for common patterns
```
Codex loads FORMS.md, REFERENCE.md, or EXAMPLES.md only when needed.
**Pattern 2: Domain-specific organization**
For skills with multiple domains, organize content by domain to avoid loading irrelevant context:
```
bigquery-skill/
├── SKILL.md (overview and navigation)
└── reference/
├── finance.md (revenue, billing metrics)
├── sales.md (opportunities, pipeline)
├── product.md (API usage, features)
└── marketing.md (campaigns, attribution)
```
When a user asks about sales metrics, Codex only reads sales.md.
Similarly, for skills supporting multiple frameworks or variants, organize by variant:
```
cloud-deploy/
├── SKILL.md (workflow + provider selection)
└── references/
├── aws.md (AWS deployment patterns)
├── gcp.md (GCP deployment patterns)
└── azure.md (Azure deployment patterns)
```
When the user chooses AWS, Codex only reads aws.md.
**Pattern 3: Conditional details**
Show basic content, link to advanced content:
```markdown
# DOCX Processing
## Creating documents
Use docx-js for new documents. See [DOCX-JS.md](DOCX-JS.md).
## Editing documents
For simple edits, modify the XML directly.
**For tracked changes**: See [REDLINING.md](REDLINING.md)
**For OOXML details**: See [OOXML.md](OOXML.md)
```
Codex reads REDLINING.md or OOXML.md only when the user needs those features.
**Important guidelines:**
- **Avoid deeply nested references** - Keep references one level deep from SKILL.md. All reference files should link directly from SKILL.md.
- **Structure longer reference files** - For files longer than 100 lines, include a table of contents at the top so Codex can see the full scope when previewing.
## Skill Creation Process
Skill creation involves these steps:
1. Understand the skill with concrete examples
2. Plan reusable skill contents (scripts, references, assets)
3. Initialize the skill (run init_skill.py)
4. Edit the skill (implement resources and write SKILL.md)
5. Package the skill (run package_skill.py)
6. Iterate based on real usage
Follow these steps in order, skipping only if there is a clear reason why they are not applicable.
### Skill Naming
- Use lowercase letters, digits, and hyphens only; normalize user-provided titles to hyphen-case (e.g., "Plan Mode" -> `plan-mode`).
- When generating names, generate a name under 64 characters (letters, digits, hyphens).
- Prefer short, verb-led phrases that describe the action.
- Namespace by tool when it improves clarity or triggering (e.g., `gh-address-comments`, `linear-address-issue`).
- Name the skill folder exactly after the skill name.
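The normalization rule can be sketched in shell (lowercase, digits, and hyphens only):

```shell
# normalize a user-provided title to a hyphen-case skill name
title="Plan Mode"
name=$(printf '%s' "$title" \
  | tr '[:upper:]' '[:lower:]' \
  | tr -cs 'a-z0-9' '-')
name=${name#-}                 # trim a leading hyphen, if any
name=${name%-}                 # trim a trailing hyphen, if any
echo "$name"                   # plan-mode
```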
### Step 1: Understanding the Skill with Concrete Examples
Skip this step only when the skill's usage patterns are already clearly understood. It remains valuable even when working with an existing skill.
To create an effective skill, clearly understand concrete examples of how the skill will be used. This understanding can come from either direct user examples or generated examples that are validated with user feedback.
For example, when building an image-editor skill, relevant questions include:
- "What functionality should the image-editor skill support? Editing, rotating, anything else?"
- "Can you give some examples of how this skill would be used?"
- "I can imagine users asking for things like 'Remove the red-eye from this image' or 'Rotate this image'. Are there other ways you imagine this skill being used?"
- "What would a user say that should trigger this skill?"
To avoid overwhelming users, do not ask too many questions in a single message. Start with the most important questions and follow up as needed.
Conclude this step when there is a clear sense of the functionality the skill should support.
### Step 2: Planning the Reusable Skill Contents
To turn concrete examples into an effective skill, analyze each example by:
1. Considering how to execute on the example from scratch
2. Identifying what scripts, references, and assets would be helpful when executing these workflows repeatedly
Example: When building a `pdf-editor` skill to handle queries like "Help me rotate this PDF," the analysis shows:
1. Rotating a PDF requires re-writing the same code each time
2. A `scripts/rotate_pdf.py` script would be helpful to store in the skill
Example: When designing a `frontend-webapp-builder` skill for queries like "Build me a todo app" or "Build me a dashboard to track my steps," the analysis shows:
1. Writing a frontend webapp requires the same boilerplate HTML/React each time
2. An `assets/hello-world/` template containing the boilerplate HTML/React project files would be helpful to store in the skill
Example: When building a `big-query` skill to handle queries like "How many users have logged in today?" the analysis shows:
1. Querying BigQuery requires re-discovering the table schemas and relationships each time
2. A `references/schema.md` file documenting the table schemas would be helpful to store in the skill
To establish the skill's contents, analyze each concrete example to create a list of the reusable resources to include: scripts, references, and assets.
### Step 3: Initializing the Skill
At this point, it is time to actually create the skill.
Skip this step only if the skill being developed already exists, and iteration or packaging is needed. In this case, continue to the next step.
When creating a new skill from scratch, always run the `init_skill.py` script. The script conveniently generates a new template skill directory that automatically includes everything a skill requires, making the skill creation process much more efficient and reliable.
Usage:
```bash
scripts/init_skill.py <skill-name> --path <output-directory> [--resources scripts,references,assets] [--examples]
```
Examples:
```bash
scripts/init_skill.py my-skill --path skills/public
scripts/init_skill.py my-skill --path skills/public --resources scripts,references
scripts/init_skill.py my-skill --path skills/public --resources scripts --examples
```
The script:
- Creates the skill directory at the specified path
- Generates a SKILL.md template with proper frontmatter and TODO placeholders
- Optionally creates resource directories based on `--resources`
- Optionally adds example files when `--examples` is set
After initialization, customize the SKILL.md and add resources as needed. If you used `--examples`, replace or delete placeholder files.
### Step 4: Edit the Skill
When editing the (newly-generated or existing) skill, remember that the skill is being created for another instance of Codex to use. Include information that would be beneficial and non-obvious to Codex. Consider what procedural knowledge, domain-specific details, or reusable assets would help another Codex instance execute these tasks more effectively.
#### Learn Proven Design Patterns
Consult these helpful guides based on your skill's needs:
- **Multi-step processes**: See references/workflows.md for sequential workflows and conditional logic
- **Specific output formats or quality standards**: See references/output-patterns.md for template and example patterns
These files contain established best practices for effective skill design.
#### Start with Reusable Skill Contents
To begin implementation, start with the reusable resources identified above: `scripts/`, `references/`, and `assets/` files. Note that this step may require user input. For example, when implementing a `brand-guidelines` skill, the user may need to provide brand assets or templates to store in `assets/`, or documentation to store in `references/`.
Added scripts must be tested by actually running them to ensure there are no bugs and that the output matches what is expected. If there are many similar scripts, only a representative sample needs to be tested to ensure confidence that they all work while balancing time to completion.
If you used `--examples`, delete any placeholder files that are not needed for the skill. Only create resource directories that are actually required.
#### Update SKILL.md
**Writing Guidelines:** Always use imperative/infinitive form.
##### Frontmatter
Write the YAML frontmatter with `name` and `description`:
- `name`: The skill name
- `description`: This is the primary triggering mechanism for your skill, and helps Codex understand when to use the skill.
- Include both what the Skill does and specific triggers/contexts for when to use it.
- Include all "when to use" information here, not in the body. The body is only loaded after triggering, so "When to Use This Skill" sections in the body do not help Codex.
- Example description for a `docx` skill: "Comprehensive document creation, editing, and analysis with support for tracked changes, comments, formatting preservation, and text extraction. Use when Codex needs to work with professional documents (.docx files) for: (1) Creating new documents, (2) Modifying or editing content, (3) Working with tracked changes, (4) Adding comments, or any other document tasks"
Do not include any other fields in YAML frontmatter.
##### Body
Write instructions for using the skill and its bundled resources.
### Step 5: Packaging a Skill
Once development of the skill is complete, it must be packaged into a distributable .skill file that gets shared with the user. The packaging process automatically validates the skill first to ensure it meets all requirements:
```bash
scripts/package_skill.py <path/to/skill-folder>
```
Optional output directory specification:
```bash
scripts/package_skill.py <path/to/skill-folder> ./dist
```
The packaging script will:
1. **Validate** the skill automatically, checking:
- YAML frontmatter format and required fields
- Skill naming conventions and directory structure
- Description completeness and quality
- File organization and resource references
2. **Package** the skill if validation passes, creating a .skill file named after the skill (e.g., `my-skill.skill`) that includes all files and maintains the proper directory structure for distribution. The .skill file is a zip file with a .skill extension.
Security restriction: symlinks are rejected and packaging fails when any symlink is present.
If validation fails, the script will report the errors and exit without creating a package. Fix any validation errors and run the packaging command again.
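Validation and packaging belong to `package_skill.py`, but the container format itself is easy to inspect: a `.skill` file is a plain zip. A sketch using only the Python standard library (the `demo-skill` layout is illustrative):

```shell
# a .skill file is an ordinary zip archive with a .skill extension
mkdir -p demo-skill
printf -- '---\nname: demo-skill\ndescription: Demo skill for packaging.\n---\n# Demo\n' \
  > demo-skill/SKILL.md

python3 -m zipfile -c demo-skill.skill demo-skill/   # package
python3 -m zipfile -l demo-skill.skill               # inspect: lists SKILL.md
```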
### Step 6: Iterate
After testing the skill, users may request improvements. Often this happens right after using the skill, with fresh context of how the skill performed.
**Iteration workflow:**
1. Use the skill on real tasks
2. Notice struggles or inefficiencies
3. Identify how SKILL.md or bundled resources should be updated
4. Implement changes and test again
Read `references/design-guide.md` for workflow patterns, output patterns, and init/package script usage.

# Skill Creator — Design Guide
## Step 1: Understanding the Skill
Ask the user for concrete examples of usage. Key questions:
- "What functionality should this skill support?"
- "Can you give examples of how it would be used?"
- "What would a user say that should trigger this skill?"
Avoid asking too many at once. Start with the most important, follow up as needed.
Conclude when you have a clear sense of the functionality and trigger patterns.
## Step 2: Planning Reusable Contents
Analyze each concrete example by: (1) how you'd execute it from scratch, (2) what reusable assets would help repeated execution.
Examples:
- PDF rotation → same code written repeatedly → `scripts/rotate_pdf.py`
- Frontend webapp → same boilerplate each time → `assets/hello-world/` template
- BigQuery queries → rediscovering table schemas → `references/schema.md`
Output: a list of scripts, references, and assets to build.
## Step 3: Initialize
```bash
scripts/init_skill.py <skill-name> --path <output-directory> [--resources scripts,references,assets] [--examples]
```
Examples:
```bash
scripts/init_skill.py my-skill --path skills/public
scripts/init_skill.py my-skill --path skills/public --resources scripts,references
```
The script creates the skill directory, generates SKILL.md template, optionally creates resource dirs and example files. Delete/replace placeholder example files after init.
## Step 4: Edit the Skill
You're writing for another agent instance. Include what's non-obvious: procedural knowledge, domain specifics, reusable assets.
### References for Patterns
- `references/workflows.md` — multi-step processes and conditional logic
- `references/output-patterns.md` — specific output formats and quality standards
### Implementation Order
1. Build `scripts/`, `references/`, `assets/` files first
2. Test scripts by running them (test a representative sample for large batches)
3. Write SKILL.md last — reference the files you've built
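A cheap pre-flight for step 2, assuming Python scripts: byte-compile everything before actually running a representative sample (the `rotate_pdf.py` stub is illustrative):

```shell
# byte-compile every bundled Python script as a cheap pre-run smoke check
mkdir -p scripts
printf 'print("rotated")\n' > scripts/rotate_pdf.py   # illustrative stub

for s in scripts/*.py; do
  python3 -m py_compile "$s" && echo "compiles: $s"
done
```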
### Frontmatter Writing Guidelines
**description field** — this is the trigger:
- Include: what it does, when to use, specific trigger phrases, NOT for exclusions
- Example (docx skill): "Comprehensive document creation, editing, and analysis with support for tracked changes, comments, formatting preservation, and text extraction. Use when working with .docx files for: (1) Creating new documents, (2) Modifying content, (3) Tracked changes, (4) Adding comments"
**Body writing**: Use imperative/infinitive form. Reference bundled files explicitly.
### Progressive Disclosure Patterns
**Pattern 1: High-level guide with references**
```markdown
## Advanced features
- **Form filling**: See `references/forms.md` for complete guide
- **API reference**: See `references/api.md` for all methods
```
**Pattern 2: Domain-specific organization**
```
bigquery-skill/
├── SKILL.md (overview + which file to read for which domain)
└── references/
├── finance.md (revenue, billing metrics)
├── sales.md (opportunities, pipeline)
└── product.md (API usage, features)
```
When user asks about sales → Codex reads only sales.md.
**Pattern 3: Framework variants**
```
cloud-deploy/
├── SKILL.md (workflow + provider selection guide)
└── references/
├── aws.md
├── gcp.md
└── azure.md
```
**Pattern 4: Conditional details**
```markdown
For simple edits, modify the XML directly.
**For tracked changes**: See `references/redlining.md`
**For OOXML details**: See `references/ooxml.md`
```
### Structure Guidelines
- Keep SKILL.md body under 500 lines — split into references when approaching this
- All reference files link directly from SKILL.md (no nesting)
- Large reference files (>100 lines): include table of contents at top
- Information lives in SKILL.md OR references — not both
- Keep only essential procedural instructions in SKILL.md; detailed patterns/schemas → references
## Step 5: Package
```bash
scripts/package_skill.py <path/to/skill-folder>
# Optional output dir:
scripts/package_skill.py <path/to/skill-folder> ./dist
```
The script:
1. **Validates**: YAML frontmatter, naming conventions, description quality, file organization
2. **Packages**: creates `<skill-name>.skill` zip file for distribution
If validation fails, fix errors and re-run. Security: symlinks are rejected.
## Step 6: Iterate
After real usage:
1. Notice struggles or inefficiencies
2. Update SKILL.md or bundled resources
3. Re-package and test
## What NOT to Include
- README.md, INSTALLATION_GUIDE.md, CHANGELOG.md, QUICK_REFERENCE.md
- Setup/testing documentation
- User-facing docs
- Any auxiliary context about how the skill was created
The skill should only contain what an AI agent needs to do the job.

metadata:
}
---
# xurl — X API CLI

`xurl` is a CLI tool for the X API. It supports both **shortcut commands** (human/agent-friendly one-liners) and **raw curl-style** access to any v2 endpoint. All commands return JSON to stdout.

**Auth must be configured before use.** Run `xurl auth status` to check.

---

## Installation
### Homebrew (macOS)
```bash
brew install --cask xdevplatform/tap/xurl
```
### npm
```bash
npm install -g @xdevplatform/xurl
```
### Shell script
```bash
curl -fsSL https://raw.githubusercontent.com/xdevplatform/xurl/main/install.sh | bash
```
Installs to `~/.local/bin`. If it's not in your PATH, the script will tell you what to add.
### Go
```bash
go install github.com/xdevplatform/xurl@latest
```
---
## Prerequisites
This skill requires the `xurl` CLI utility: <https://github.com/xdevplatform/xurl>.
Before using any command you must be authenticated. Run `xurl auth status` to check.
### Secret Safety (Mandatory)
- Never read, print, parse, summarize, upload, or send `~/.xurl` (or copies of it) to the LLM context.
- Never ask the user to paste credentials/tokens into chat.
- The user must fill `~/.xurl` with required secrets manually on their own machine.
- Do not recommend or execute auth commands with inline secrets in agent/LLM sessions.
- Warn that using CLI secret options in agent sessions can leak credentials (prompt/context, logs, shell history).
- Never use `--verbose` / `-v` in agent/LLM sessions; it can expose sensitive headers/tokens in output.
- Sensitive flags that must never be used in agent commands: `--bearer-token`, `--consumer-key`, `--consumer-secret`, `--access-token`, `--token-secret`, `--client-id`, `--client-secret`.
- To verify whether at least one app with credentials is already registered, run: `xurl auth status`.
### Register an app (recommended)
App credential registration must be done manually by the user outside the agent/LLM session.
After credentials are registered, authenticate with:
```bash
xurl auth oauth2
```
For multiple pre-configured apps, switch between them:
```bash
xurl auth default prod-app # set default app
xurl auth default prod-app alice # set default app + user
xurl --app dev-app /2/users/me # one-off override
```
### Other auth methods
Examples with inline secret flags are intentionally omitted. If OAuth1 or app-only auth is needed, the user must run those commands manually outside agent/LLM context.
Tokens are persisted to `~/.xurl` in YAML format. Each app has its own isolated tokens. Do not read this file through the agent/LLM. Once authenticated, every command below will auto-attach the right `Authorization` header.
---
## Quick Reference
| Action | Command |
| ------------------------- | ----------------------------------------------------- |
| Post | `xurl post "Hello world!"` |
| Reply | `xurl reply POST_ID "Nice post!"` |
| Quote | `xurl quote POST_ID "My take"` |
| Delete a post | `xurl delete POST_ID` |
| Read a post | `xurl read POST_ID` |
| Search posts | `xurl search "QUERY" -n 10` |
| Who am I | `xurl whoami` |
| Look up a user | `xurl user @handle` |
| Home timeline | `xurl timeline -n 20` |
| Mentions | `xurl mentions -n 10` |
| Like | `xurl like POST_ID` |
| Unlike | `xurl unlike POST_ID` |
| Repost | `xurl repost POST_ID` |
| Undo repost | `xurl unrepost POST_ID` |
| Bookmark | `xurl bookmark POST_ID` |
| Remove bookmark | `xurl unbookmark POST_ID` |
| List bookmarks | `xurl bookmarks -n 10` |
| List likes | `xurl likes -n 10` |
| Follow | `xurl follow @handle` |
| Unfollow | `xurl unfollow @handle` |
| List following | `xurl following -n 20` |
| List followers | `xurl followers -n 20` |
| Block | `xurl block @handle` |
| Unblock | `xurl unblock @handle` |
| Mute | `xurl mute @handle` |
| Unmute | `xurl unmute @handle` |
| Send DM | `xurl dm @handle "message"` |
| List DMs | `xurl dms -n 10` |
| Upload media | `xurl media upload path/to/file.mp4` |
| Media status | `xurl media status MEDIA_ID` |
| **App Management** | |
| Register app | Manual, outside agent (do not pass secrets via agent) |
| List apps | `xurl auth apps list` |
| Update app creds | Manual, outside agent (do not pass secrets via agent) |
| Remove app | `xurl auth apps remove NAME` |
| Set default (interactive) | `xurl auth default` |
| Set default (command) | `xurl auth default APP_NAME [USERNAME]` |
| Use app per-request | `xurl --app NAME /2/users/me` |
| Auth status | `xurl auth status` |
| Raw API | `xurl /2/users/me` or `xurl -X POST /2/tweets -d '{"text":"hi"}'` |
> **Post IDs vs URLs:** Anywhere `POST_ID` appears above you can also paste a full post URL (e.g. `https://x.com/user/status/1234567890`) — xurl extracts the ID automatically.
> **Usernames:** Leading `@` is optional. `@elonmusk` and `elonmusk` both work.
---
## Command Details
### Posting
```bash
# Simple post
xurl post "Hello world!"
# Post with media (upload first, then attach)
xurl media upload photo.jpg # → note the media_id from response
xurl post "Check this out" --media-id MEDIA_ID
# Multiple media
xurl post "Thread pics" --media-id 111 --media-id 222
# Reply to a post (by ID or URL)
xurl reply 1234567890 "Great point!"
xurl reply https://x.com/user/status/1234567890 "Agreed!"
# Reply with media
xurl reply 1234567890 "Look at this" --media-id MEDIA_ID
# Quote a post
xurl quote 1234567890 "Adding my thoughts"
# Delete your own post
xurl delete 1234567890
```
### Reading
```bash
# Read a single post (returns author, text, metrics, entities)
xurl read 1234567890
xurl read https://x.com/user/status/1234567890
# Search recent posts (default 10 results)
xurl search "golang"
xurl search "from:elonmusk" -n 20
xurl search "#buildinpublic lang:en" -n 15
```
### User Info
```bash
# Your own profile
xurl whoami
# Look up any user
xurl user elonmusk
xurl user @XDevelopers
```
### Timelines & Mentions
```bash
# Home timeline (reverse chronological)
xurl timeline
xurl timeline -n 25
# Your mentions
xurl mentions
xurl mentions -n 20
```
### Engagement
```bash
# Like / unlike
xurl like 1234567890
xurl unlike 1234567890
# Repost / undo
xurl repost 1234567890
xurl unrepost 1234567890
# Bookmark / remove
xurl bookmark 1234567890
xurl unbookmark 1234567890
# List your bookmarks / likes
xurl bookmarks -n 20
xurl likes -n 20
```
### Social Graph
```bash
# Follow / unfollow
xurl follow @XDevelopers
xurl unfollow @XDevelopers
# List who you follow / your followers
xurl following -n 50
xurl followers -n 50
# List another user's following/followers
xurl following --of elonmusk -n 20
xurl followers --of elonmusk -n 20
# Block / unblock
xurl block @spammer
xurl unblock @spammer
# Mute / unmute
xurl mute @annoying
xurl unmute @annoying
```
### Direct Messages
```bash
# Send a DM
xurl dm @someuser "Hey, saw your post!"
# List recent DM events
xurl dms
xurl dms -n 25
```
### Media Upload
```bash
# Upload a file (autodetects type for images/videos)
xurl media upload photo.jpg
xurl media upload video.mp4
# Specify type and category explicitly
xurl media upload --media-type image/jpeg --category tweet_image photo.jpg
# Check processing status (videos need server-side processing)
xurl media status MEDIA_ID
xurl media status --wait MEDIA_ID # poll until done
# Full workflow: upload then post
xurl media upload meme.png # response includes media id
xurl post "lol" --media-id MEDIA_ID
```
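In scripts, the media id can be captured instead of copied by hand. The `.data.id` field path is an assumption about the upload response shape; verify it against the actual JSON:

```shell
# capture the media id from the upload response instead of copying it by hand
response='{"data":{"id":"1710000000000000000"}}'   # stand-in for: xurl media upload photo.jpg
media_id=$(printf '%s' "$response" \
  | python3 -c 'import json, sys; print(json.load(sys.stdin)["data"]["id"])')
echo "$media_id"
# then: xurl post "Check this out" --media-id "$media_id"
```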
---
## Global Flags
These flags work on every command:
| Flag | Short | Description |
| ------------ | ----- | ------------------------------------------------------------------ |
| `--app` | | Use a specific registered app for this request (overrides default) |
| `--auth` | | Force auth type: `oauth1`, `oauth2`, or `app` |
| `--username` | `-u` | Which OAuth2 account to use (if you have multiple) |
| `--verbose` | `-v` | Forbidden in agent/LLM sessions (can leak auth headers/tokens) |
| `--trace` | `-t` | Add `X-B3-Flags: 1` trace header |
---
## Raw API Access
The shortcut commands cover the most common operations. For anything else, use xurl's raw curl-style mode, which works with **any** X API v2 endpoint:
```bash
# GET request (default)
xurl /2/users/me
# POST with JSON body
xurl -X POST /2/tweets -d '{"text":"Hello world!"}'
# PUT, PATCH, DELETE
xurl -X DELETE /2/tweets/1234567890
# Custom headers
xurl -H "Content-Type: application/json" /2/some/endpoint
# Force streaming mode
xurl -s /2/tweets/search/stream
# Full URLs also work
xurl https://api.x.com/2/users/me
```
---
## Streaming
Streaming endpoints are autodetected. Known streaming endpoints include:
- `/2/tweets/search/stream`
- `/2/tweets/sample/stream`
- `/2/tweets/sample10/stream`
You can force streaming on any endpoint with `-s`:
```bash
xurl -s /2/some/endpoint
```
---
## Output Format
All commands return **JSON** to stdout, pretty-printed with syntax highlighting. The output structure matches the X API v2 response format. A typical response looks like:
```json
{
"data": {
"id": "1234567890",
"text": "Hello world!"
}
}
```
Errors are also returned as JSON:
```json
{
"errors": [
{
"message": "Not authorized",
"code": 403
}
]
}
```
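Because errors are JSON on stdout too, scripts can branch on the response shape rather than scraping text; a sketch:

```shell
# branch on the response shape: an "errors" key means the call failed
resp='{"errors":[{"message":"Not authorized","code":403}]}'   # stand-in API reply
if printf '%s' "$resp" \
  | python3 -c 'import json, sys; sys.exit(0 if "errors" in json.load(sys.stdin) else 1)'; then
  echo "API error detected"
else
  echo "success"
fi
```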
---
## Common Workflows
### Post with an image
```bash
# 1. Upload the image
xurl media upload photo.jpg
# 2. Copy the media_id from the response, then post
xurl post "Check out this photo!" --media-id MEDIA_ID
```
### Reply to a conversation
```bash
# 1. Read the post to understand context
xurl read https://x.com/user/status/1234567890
# 2. Reply
xurl reply 1234567890 "Here are my thoughts..."
```
### Search and engage
```bash
# 1. Search for relevant posts
xurl search "topic of interest" -n 10
# 2. Like an interesting one
xurl like POST_ID_FROM_RESULTS
# 3. Reply to it
xurl reply POST_ID_FROM_RESULTS "Great point!"
```
### Check your activity
```bash
# See who you are
xurl whoami
# Check your mentions
xurl mentions -n 20
# Check your timeline
xurl timeline -n 20
```
### Set up multiple apps
```bash
# App credentials must already be configured manually outside agent/LLM context.
# Authenticate users on each pre-configured app
xurl auth default prod
xurl auth oauth2 # authenticates on prod app
xurl auth default staging
xurl auth oauth2 # authenticates on staging app
# Switch between them
xurl auth default prod alice # prod app, alice user
xurl --app staging /2/users/me # one-off request against staging
```
---
## Error Handling
- Non-zero exit code on any error.
- API errors are printed as JSON to stdout (so you can still parse them).
- Auth errors suggest rerunning `xurl auth oauth2` or checking your tokens.
- If a command requires your user ID (like, repost, bookmark, follow, etc.), xurl will automatically fetch it via `/2/users/me`. If that fails, you'll see an auth error.
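For transient failures such as rate limits, a small retry wrapper with backoff works well. A sketch, assuming a 429 surfaces as a non-zero exit code:

```bash
# Retry a command up to three times with growing backoff.
retry() {
  local attempt
  for attempt in 1 2 3; do
    "$@" && return 0
    echo "attempt $attempt failed, backing off ${attempt}s..." >&2
    sleep "$attempt"
  done
  return 1
}

# Usage:
# retry xurl post "Hello world!"
```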
---
## Notes
- **Rate limits:** The X API enforces rate limits per endpoint. If you get a 429 error, wait and retry. Write endpoints (post, reply, like, repost) have stricter limits than read endpoints.
- **Scopes:** OAuth 2.0 tokens are requested with broad scopes. If you get a 403 on a specific action, your token may lack the required scope — rerun `xurl auth oauth2` to get a fresh token.
- **Token refresh:** OAuth 2.0 tokens auto-refresh when expired. No manual intervention needed.
- **Multiple apps:** Each app has its own isolated credentials and tokens. Configure credentials manually outside agent/LLM context, then switch with `xurl auth default` or `--app`.
- **Multiple accounts:** You can authenticate multiple OAuth 2.0 accounts per app and switch between them with `--username` / `-u` or set a default with `xurl auth default APP USER`.
- **Default user:** When no `-u` flag is given, xurl uses the default user for the active app (set via `xurl auth default`). If no default user is set, it uses the first available token.
- **Token storage:** `~/.xurl` is YAML. Each app stores its own credentials and tokens. Never read or send this file to LLM context.

# xurl — Command Reference
## Posting
```bash
# Simple post
xurl post "Hello world!"
# Post with media (upload first, then attach)
xurl media upload photo.jpg # → note the media_id from response
xurl post "Check this out" --media-id MEDIA_ID
# Multiple media
xurl post "Thread pics" --media-id 111 --media-id 222
# Reply to a post (by ID or URL)
xurl reply 1234567890 "Great point!"
xurl reply https://x.com/user/status/1234567890 "Agreed!"
# Reply with media
xurl reply 1234567890 "Look at this" --media-id MEDIA_ID
# Quote a post
xurl quote 1234567890 "Adding my thoughts"
# Delete your own post
xurl delete 1234567890
```
## Reading
```bash
# Read a single post (returns author, text, metrics, entities)
xurl read 1234567890
xurl read https://x.com/user/status/1234567890
# Search recent posts (default 10 results)
xurl search "golang"
xurl search "from:elonmusk" -n 20
xurl search "#buildinpublic lang:en" -n 15
```
## User Info
```bash
xurl whoami # Your own profile
xurl user elonmusk # Look up any user
xurl user @XDevelopers
```
## Timelines & Mentions
```bash
xurl timeline # Home timeline (reverse chronological)
xurl timeline -n 25
xurl mentions # Your mentions
xurl mentions -n 20
```
## Engagement
```bash
xurl like 1234567890 # Like / unlike
xurl unlike 1234567890
xurl repost 1234567890 # Repost / undo
xurl unrepost 1234567890
xurl bookmark 1234567890 # Bookmark / remove
xurl unbookmark 1234567890
xurl bookmarks -n 20 # List your bookmarks / likes
xurl likes -n 20
```
## Social Graph
```bash
xurl follow @XDevelopers # Follow / unfollow
xurl unfollow @XDevelopers
xurl following -n 50 # List who you follow / your followers
xurl followers -n 50
xurl following --of elonmusk -n 20 # Another user's following/followers
xurl followers --of elonmusk -n 20
xurl block @spammer # Block / unblock
xurl unblock @spammer
xurl mute @annoying # Mute / unmute
xurl unmute @annoying
```
## Direct Messages
```bash
xurl dm @someuser "Hey, saw your post!"
xurl dms # List recent DM events
xurl dms -n 25
```
## Media Upload
```bash
xurl media upload photo.jpg # Auto-detects type
xurl media upload video.mp4
xurl media upload --media-type image/jpeg --category tweet_image photo.jpg # Explicit type
xurl media status MEDIA_ID # Check processing status
xurl media status --wait MEDIA_ID # Poll until done
# Full workflow: upload then post
xurl media upload meme.png # response includes media id
xurl post "lol" --media-id MEDIA_ID
```
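For video, processing is asynchronous, so the upload, `--wait`, and post steps can be chained. The `.data.id` jq path is an assumption about the upload response shape:

```bash
# Upload a video, wait for processing to finish, then attach it to a post.
if media_id=$(xurl media upload video.mp4 | jq -r '.data.id // empty') && [ -n "$media_id" ]; then
  xurl media status --wait "$media_id"
  xurl post "New video is up" --media-id "$media_id"
fi
```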
## App Management
```bash
xurl auth status # Check auth state
xurl auth apps list # List registered apps
xurl auth apps remove NAME # Remove an app
xurl auth default # Set default (interactive)
xurl auth default APP [USER] # Set default (command)
xurl --app NAME /2/users/me # One-off app override
```
**Note:** App registration and credential updates must be done manually outside agent/LLM sessions.
## Raw API Access
For any v2 endpoint not covered by shortcuts:
```bash
xurl /2/users/me # GET (default)
xurl -X POST /2/tweets -d '{"text":"Hello world!"}' # POST with JSON
xurl -X DELETE /2/tweets/1234567890 # DELETE
xurl -H "Content-Type: application/json" /2/some/endpoint # Custom headers
xurl -s /2/tweets/search/stream # Force streaming
xurl https://api.x.com/2/users/me # Full URLs work too
```
## Streaming
Auto-detected endpoints:
- `/2/tweets/search/stream`
- `/2/tweets/sample/stream`
- `/2/tweets/sample10/stream`
Force streaming on any endpoint with `-s`.
## Common Workflows
### Post with image
```bash
xurl media upload photo.jpg
xurl post "Check out this photo!" --media-id MEDIA_ID
```
### Reply to a conversation
```bash
xurl read https://x.com/user/status/1234567890
xurl reply 1234567890 "Here are my thoughts..."
```
### Search and engage
```bash
xurl search "topic of interest" -n 10
xurl like POST_ID_FROM_RESULTS
xurl reply POST_ID_FROM_RESULTS "Great point!"
```
### Multiple apps
```bash
# Authenticate on each pre-configured app
xurl auth default prod && xurl auth oauth2
xurl auth default staging && xurl auth oauth2
# Switch between them
xurl auth default prod alice
xurl --app staging /2/users/me
```
## Error Handling
- Non-zero exit code on error
- API errors are JSON on stdout (parseable)
- Auth errors → re-run `xurl auth oauth2`
- User ID auto-resolved via `/2/users/me` for like/repost/bookmark/follow
- 429 → rate limited, wait and retry
- 403 → may need `xurl auth oauth2` for fresh scopes