refactor!: IronClaw v2.0 - external OpenClaw runtime

BREAKING CHANGE: Convert repository to IronClaw-only package with strict
external dependency on globally installed `openclaw` runtime.

### Changes

- Remove entire OpenClaw core source from repository (src/agents/*, src/acp/*,
  src/commands/*, and related modules)
- Implement CLI delegation: non-bootstrap commands now delegate to global
  `openclaw` binary via external contract
- Remove local OpenClaw path resolution from web app; always spawn global
  `openclaw` binary instead of local scripts
- Rename package.json scripts: `pnpm openclaw` → `pnpm ironclaw`,
  `openclaw:rpc` → `ironclaw:rpc`
- Update bootstrap flow to verify and install global OpenClaw when missing
- Migrate web workspace/profile logic to align with OpenClaw state paths
- Add migration contract tests for stream-json, session subscribe, and profile
  resolution behaviors
- Update build/release pipeline for IronClaw-only artifacts
- Update documentation for new peer + global installation model

### Architecture

IronClaw is now strictly a frontend/UI/bootstrap layer:
- `npx ironclaw` bootstraps OpenClaw (if missing), runs guided onboarding
- IronClaw UI serves on localhost:3100
- OpenClaw Gateway runs on standard port 18789
- Communication via stable CLI contracts and Gateway WebSocket protocol only

### Migration

Users must have `openclaw` installed globally:
  npm install -g openclaw

Existing IronClaw profiles and sessions remain compatible through gateway
protocol stability.

Refs: bootstrap_dev_testing, ironclaw_frontend_split, strict-external-openclaw
kumarabhirup 2026-03-01 16:11:40 -08:00
parent 9ca4263147
commit 52707f471d
GPG Key ID: DB7CA2289CAB0167
3498 changed files with 2551 additions and 623864 deletions


@@ -0,0 +1,146 @@
---
name: Bootstrap dev testing
overview: Remove local OpenClaw paths from the web app, always use global `openclaw` binary, rename dev scripts to `ironclaw`, and verify bootstrap works standalone.
todos:
- id: remove-local-openclaw-agent-runner
content: Remove resolvePackageRoot, resolveOpenClawLaunch, IRONCLAW_USE_LOCAL_OPENCLAW from agent-runner.ts; spawn global `openclaw` directly
status: completed
- id: remove-local-openclaw-subagent-runs
content: Remove local script paths from subagent-runs.ts (sendGatewayAbortForSubagent, spawnSubagentMessage); use global `openclaw` instead
status: completed
- id: rename-pnpm-scripts
content: Rename `pnpm openclaw` to `pnpm ironclaw` and `openclaw:rpc` to `ironclaw:rpc` in package.json
status: completed
- id: update-agent-runner-tests
content: "Update agent-runner.test.ts: remove resolvePackageRoot tests, IRONCLAW_USE_LOCAL_OPENCLAW, update spawn assertions"
status: completed
- id: verify-builds-pass
content: Verify pnpm build, pnpm web:build, and workspace tests pass after changes
status: completed
isProject: false
---
# IronClaw Bootstrap: Clean Separation and Dev Testing
## Architecture
IronClaw is a frontend/UI/skills layer. OpenClaw is a separate, globally-installed runtime. IronClaw should NEVER bundle or run a local copy of OpenClaw.
```mermaid
flowchart TD
npx["npx ironclaw (or ironclaw)"] --> entry["openclaw.mjs → dist/entry.js"]
entry --> runMain["run-main.ts: bare ironclaw → bootstrap"]
runMain --> delegate{"primary == bootstrap?"}
delegate -->|yes, keep local| bootstrap["bootstrapCommand()"]
delegate -->|no, delegate| globalOC["spawn openclaw ...args"]
bootstrap --> checkOC{"openclaw on PATH?"}
checkOC -->|yes| onboard
checkOC -->|no| prompt["Prompt: install openclaw globally?"]
prompt -->|yes| npmInstall["npm install -g openclaw"]
npmInstall --> onboard
onboard["openclaw onboard --install-daemon"] --> gatewayStart["Gateway starts + spawns web app"]
gatewayStart --> probe["waitForWebAppPort(3100)"]
probe --> openBrowser["Open http://localhost:3100"]
```
The bootstrap flow is correctly wired:
- Bare `ironclaw` rewrites to `ironclaw bootstrap`
- `bootstrap` is never delegated to global `openclaw`
- `bootstrapCommand` calls `ensureOpenClawCliAvailable` which prompts to install
- Onboarding sets `gateway.webApp.enabled: true`
- Gateway starts the Next.js standalone server on port 3100
- Bootstrap probes and opens the browser
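The `openclaw`-on-PATH probe at the heart of this flow can be sketched as a version check (a hedged sketch: `isOpenClawOnPath` is an illustrative name, and the real `ensureOpenClawCliAvailable` may also capture the version string for display):

```typescript
import { spawnSync } from "node:child_process";

// Treat openclaw as present only if the binary resolves on PATH and exits
// cleanly. A spawn error (e.g. ENOENT) or non-zero status counts as missing.
export function isOpenClawOnPath(bin: string = "openclaw"): boolean {
  const result = spawnSync(bin, ["--version"], { stdio: "ignore" });
  return result.error === undefined && result.status === 0;
}
```

On a miss, bootstrap falls through to the install prompt shown in the diagram.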
## Problem 1: Local OpenClaw paths in web app (must remove)
`[apps/web/lib/agent-runner.ts](apps/web/lib/agent-runner.ts)` has `resolveOpenClawLaunch` which, when `IRONCLAW_USE_LOCAL_OPENCLAW=1`, resolves a local `scripts/run-node.mjs` or `openclaw.mjs` and spawns it with `node`. This contradicts the architecture: IronClaw should always spawn the global `openclaw` binary.
The same pattern exists in `[apps/web/lib/subagent-runs.ts](apps/web/lib/subagent-runs.ts)` where `sendGatewayAbortForSubagent` and `spawnSubagentMessage` hardcode `node <local-script>` paths.
**Fix:**
- Remove `IRONCLAW_USE_LOCAL_OPENCLAW`, `resolveOpenClawLaunch`, `resolvePackageRoot`, and `OpenClawLaunch` type from `agent-runner.ts`
- All spawn calls become `spawn("openclaw", [...args], { env, stdio })`
- In `subagent-runs.ts`: replace `node <scriptPath> gateway call ...` with `openclaw gateway call ...`
- Remove `resolvePackageRoot` import from `subagent-runs.ts`
## Problem 2: `pnpm openclaw` script name is wrong
`package.json` has `"openclaw": "node scripts/run-node.mjs"`. This repo IS IronClaw, not OpenClaw.
**Fix:** Rename to `"ironclaw": "node scripts/run-node.mjs"`. Also `"openclaw:rpc"` to `"ironclaw:rpc"`.
## Dev workflow (after fixes)
```bash
# Prerequisite: install OpenClaw globally (one-time)
npm install -g openclaw
# Run IronClaw bootstrap (installs/configures everything, opens UI)
pnpm ironclaw
# Or for web UI dev only:
openclaw --profile ironclaw gateway --port 18789 # Terminal 1
pnpm web:dev # Terminal 2
```
## Implementation details
### 1. Simplify agent-runner.ts spawning
Remove ~40 lines (`resolvePackageRoot`, `OpenClawLaunch`, `resolveOpenClawLaunch`). Both `spawnLegacyAgentProcess` and `spawnLegacyAgentSubscribeProcess` become:
```typescript
function spawnLegacyAgentProcess(message: string, agentSessionId?: string) {
  const args = ["agent", "--agent", "main", "--message", message, "--stream-json"];
  if (agentSessionId) {
    const sessionKey = `agent:main:web:${agentSessionId}`;
    args.push("--session-key", sessionKey, "--lane", "web", "--channel", "webchat");
  }
  const profile = getEffectiveProfile();
  const workspace = resolveWorkspaceRoot();
  return spawn("openclaw", args, {
    env: {
      ...process.env,
      ...(profile ? { OPENCLAW_PROFILE: profile } : {}),
      ...(workspace ? { OPENCLAW_WORKSPACE: workspace } : {}),
    },
    stdio: ["ignore", "pipe", "pipe"],
  });
}
```
### 2. Simplify subagent-runs.ts spawning
`sendGatewayAbortForSubagent` and `spawnSubagentMessage` both have this pattern:
```typescript
const root = resolvePackageRoot();
const devScript = join(root, "scripts", "run-node.mjs");
const prodScript = join(root, "openclaw.mjs");
const scriptPath = existsSync(devScript) ? devScript : prodScript;
spawn("node", [scriptPath, "gateway", "call", ...], { cwd: root, ... });
```
Replace with:
```typescript
spawn("openclaw", ["gateway", "call", ...], { env: process.env, ... });
```
### 3. Update agent-runner.test.ts
- Remove `process.env.IRONCLAW_USE_LOCAL_OPENCLAW = "1"` from `beforeEach`
- Remove entire `resolvePackageRoot` describe block (~5 tests)
- The "uses global openclaw by default" test becomes the only spawn behavior test
- Update mock assertions: command is always `"openclaw"`, no `prefixArgs`
### 4. Rename package.json scripts
```diff
- "openclaw": "node scripts/run-node.mjs",
- "openclaw:rpc": "node scripts/run-node.mjs agent --mode rpc --json",
+ "ironclaw": "node scripts/run-node.mjs",
+ "ironclaw:rpc": "node scripts/run-node.mjs agent --mode rpc --json",
```


@@ -0,0 +1,122 @@
---
name: ironclaw_frontend_split
overview: Re-architect IronClaw into a separate frontend/bootstrap CLI that runs on top of OpenClaw, while preserving current IronClaw UX/features through compatibility adapters and phased cutover. Keep OpenClaw Gateway on its standard port and expose IronClaw UI on localhost:3100 with user-approved OpenClaw updates.
todos:
- id: freeze-migration-contract-tests
content: Add migration contract tests covering stream-json, session subscribe, profile/workspace resolution, and Dench always-on skill behavior
status: completed
- id: build-ironclaw-bootstrap-layer
content: Implement IronClaw bootstrap path that verifies/installs OpenClaw, runs onboard --install-daemon for profile ironclaw, and launches UI on 3100 with explicit update approval
status: completed
- id: extract-gateway-stream-client
content: Extract reusable gateway streaming client from agent-via-gateway and wire web chat APIs to it instead of spawning CLI processes
status: completed
- id: unify-profile-storage-paths
content: Align apps/web workspace and web-chat storage resolution with src/config/paths + src/cli/profile semantics and add migration for existing UI state
status: completed
- id: externalize-ironclaw-product-layer
content: Move IronClaw prompt/skill packaging out of core defaults into a product adapter/skill pack while preserving inject behavior
status: completed
- id: harden-onboarding-and-rollout
content: Add first-run diagnostics, side-by-side safety checks, staged feature flags, and fallback path before full cutover
status: completed
isProject: false
---
# IronClaw Frontend-Only Rewrite (No-Break Migration)
## Locked Decisions
- Runtime topology: OpenClaw Gateway stays on its normal port (default `18789`), IronClaw UI runs on `3100`.
- Update policy: install OpenClaw once, then update only when user explicitly approves.
## Target Architecture
```mermaid
flowchart LR
ironclawCli[IronclawCLI] --> bootstrapManager[BootstrapManager]
bootstrapManager --> openclawCli[OpenClawCLI]
bootstrapManager --> ironclawProfile[IronclawProfileState]
ironclawUi[IronclawUI3100] --> gatewayWs[GatewayWS18789]
gatewayWs --> openclawCore[OpenClawCore]
openclawCore --> workspaceData[WorkspaceAndChatStorage]
ironclawSkills[IronclawSkillsPack] --> openclawCore
```
## Why This Rewrite Is Needed (from current code)
- Web chat currently spawns the CLI directly in `[apps/web/lib/agent-runner.ts](apps/web/lib/agent-runner.ts)` (`openclaw.mjs` + `--stream-json`), which tightly couples UI and CLI process model.
- IronClaw product content is hardcoded in core prompt generation in `[src/agents/system-prompt.ts](src/agents/system-prompt.ts)` (`buildIronclawSection`).
- Web workspace/profile logic in `[apps/web/lib/workspace.ts](apps/web/lib/workspace.ts)` is not aligned with core state-dir resolution in `[src/config/paths.ts](src/config/paths.ts)` and profile env wiring in `[src/cli/profile.ts](src/cli/profile.ts)`.
- Bootstrapping and daemon install logic already exists and should be reused, not forked: `[src/commands/onboard.ts](src/commands/onboard.ts)`, `[src/wizard/onboarding.finalize.ts](src/wizard/onboarding.finalize.ts)`, `[src/commands/daemon-install-helpers.ts](src/commands/daemon-install-helpers.ts)`.
## Implementation Plan (Phased, Strangler Pattern)
## Phase 1: Freeze Behavior With Contract Tests
- Add regression tests that codify current IronClaw-critical behavior before changing architecture:
- stream transport + session subscribe behavior (`--stream-json`, `--subscribe-session-key`) from `[src/cli/program/register.agent.ts](src/cli/program/register.agent.ts)` and `[src/commands/agent-via-gateway.ts](src/commands/agent-via-gateway.ts)`.
- workspace/profile + web-chat path behavior from `[apps/web/lib/workspace.ts](apps/web/lib/workspace.ts)` and `[apps/web/lib/workspace-profiles.test.ts](apps/web/lib/workspace-profiles.test.ts)`.
- always-on injected skill behavior for Dench skill loading.
- Produce a “must-pass” migration suite so we can safely refactor internals without user-visible regressions.
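As a sketch of what the stream-json contract pins down, each stdout line should parse as one standalone JSON event (the `type` field and `AgentEvent` shape here are assumptions for illustration, not the real schema):

```typescript
// One stream-json line in, one event out. Blank lines and non-JSON noise on
// stdout must never crash the reader; they simply yield null.
type AgentEvent = { type: string; [k: string]: unknown };

export function parseStreamJsonLine(line: string): AgentEvent | null {
  const trimmed = line.trim();
  if (trimmed === "") return null;
  try {
    const parsed = JSON.parse(trimmed) as AgentEvent;
    return typeof parsed.type === "string" ? parsed : null;
  } catch {
    return null; // tolerate interleaved non-JSON output
  }
}
```

A contract test then asserts this invariant against real CLI output so refactors cannot silently change the framing.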
## Phase 2: Create IronClaw Bootstrap Layer (Separate CLI Behavior)
- Introduce a bootstrap command path for `ironclaw` that:
- verifies OpenClaw availability;
- installs OpenClaw if missing (first-run flow);
- runs onboarding (`openclaw --profile ironclaw onboard --install-daemon`);
- starts/opens UI at `http://localhost:3100`.
- Reuse existing onboarding/daemon machinery instead of duplicating logic in a second stack:
- `[src/commands/onboard.ts](src/commands/onboard.ts)`
- `[src/wizard/onboarding.finalize.ts](src/wizard/onboarding.finalize.ts)`
- `[src/daemon/constants.ts](src/daemon/constants.ts)`
- Add explicit update prompt UX (policy #2): no silent auto-upgrades.
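The update policy can be captured by a tiny gate (illustrative shape; the real persisted config schema is not shown here):

```typescript
// "No silent auto-upgrades": an upgrade to a given OpenClaw version may
// proceed only if the user explicitly approved that exact version.
type UpdateApproval = { approvedVersion: string | null };

export function mayUpgrade(approval: UpdateApproval, targetVersion: string): boolean {
  return approval.approvedVersion === targetVersion;
}
```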
## Phase 3: Decouple UI Streaming From CLI Process Spawn
- Extract gateway streaming client logic from `[src/commands/agent-via-gateway.ts](src/commands/agent-via-gateway.ts)` into a reusable library module.
- Migrate web chat runtime from “spawn CLI process” to “connect directly to gateway stream API” in:
- `[apps/web/lib/agent-runner.ts](apps/web/lib/agent-runner.ts)`
- `[apps/web/lib/active-runs.ts](apps/web/lib/active-runs.ts)`
- `[apps/web/app/api/chat/route.ts](apps/web/app/api/chat/route.ts)`
- `[apps/web/app/api/chat/stream/route.ts](apps/web/app/api/chat/stream/route.ts)`
- Keep a temporary compatibility flag for rollback during rollout.
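One piece the extracted client needs is replay suppression when an HTTP connection re-attaches mid-run; a minimal sketch keyed on the gateway's monotonically increasing `globalSeq` (the event shape here is assumed):

```typescript
// Returns a stateful predicate: true for new events, false for replays that
// arrive with an already-seen (or older) sequence number after re-attach.
type GatewayEvent = { globalSeq: number; payload: unknown };

export function makeSeqFilter(): (e: GatewayEvent) => boolean {
  let last = -1;
  return (e) => {
    if (e.globalSeq <= last) return false; // drop duplicate/stale events
    last = e.globalSeq;
    return true;
  };
}
```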
## Phase 4: Unify Profile + Storage Resolution
- Replace web-only state resolution logic with shared core semantics from `[src/config/paths.ts](src/config/paths.ts)` and profile env behavior from `[src/cli/profile.ts](src/cli/profile.ts)`.
- Normalize chat/workspace storage to profile-scoped OpenClaw state consistently (no split-brain between `~/.openclaw-*` and `~/.openclaw/web-chat-*` behaviors).
- Add one-time migration for existing `.ironclaw-ui-state.json` / web-chat index data to the new canonical profile paths.
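A minimal sketch of the shared resolution rule, assuming the `~/.openclaw` / `~/.openclaw-<profile>` layout referenced above (`resolveStateDir` is an illustrative name, not the real export from `src/config/paths.ts`):

```typescript
import { join } from "node:path";

// Default profile lives in ~/.openclaw; every other profile gets a
// suffixed sibling directory (~/.openclaw-<profile>).
export function resolveStateDir(home: string, profile: string): string {
  return profile === "default"
    ? join(home, ".openclaw")
    : join(home, `.openclaw-${profile}`);
}
```

The web layer should call the shared resolver instead of re-deriving paths, which is exactly the split-brain this phase removes.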
## Phase 5: Move IronClaw Product Layer Outside Core
- Externalize IronClaw-specific identity/prompt sections currently in `[src/agents/system-prompt.ts](src/agents/system-prompt.ts)` behind a product adapter/config hook.
- Move Dench/IronClaw always-on skill packaging out of core bundled defaults and load it as an IronClaw-provided skill pack.
- Keep `inject` capability in core, but remove hardcoded IronClaw assumptions from default OpenClaw prompt path.
## Phase 6: Onboarding UX Hardening (Zero-Conf Side-by-Side)
- First-run checklist in IronClaw bootstrap:
- OpenClaw installed and version shown
- profile verified (`ironclaw`)
- gateway reachable
- UI reachable at `3100`
- clear remediation output for port/token/device mismatch
- Ensure side-by-side safety with OpenClaw main profile (no daemon overwrite, no shared session collisions).
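The checklist lends itself to a simple pass/fail summary that drives the remediation output (names here are illustrative, not the real diagnostics API):

```typescript
// Aggregate first-run checks into one verdict plus the remediation hints
// for whatever failed, so bootstrap can print a single actionable report.
type Check = { name: string; ok: boolean; remediation?: string };

export function summarizeChecklist(checks: Check[]): { ok: boolean; failures: string[] } {
  const failures = checks.filter((c) => !c.ok).map((c) => c.remediation ?? c.name);
  return { ok: failures.length === 0, failures };
}
```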
## Phase 7: Rollout and Safety Gates
- Roll out behind feature gates with staged enablement:
1. internal
2. opt-in beta
3. default
- Block full cutover until migration suite and onboarding E2E checks pass.
- Keep legacy path available for one release as emergency fallback.
## Definition of Done
- `npx ironclaw` bootstraps OpenClaw (if missing), runs guided onboarding, and reliably opens/serves UI on `localhost:3100`.
- IronClaw runs alongside default OpenClaw without daemon/profile/token collisions.
- Stream, workspaces, always-on skills, and storage features remain intact during and after migration.
- OpenClaw upgrades do not break IronClaw because integration is through stable gateway/CLI interfaces, not forked internals.


@@ -0,0 +1,135 @@
---
name: strict-external-openclaw
overview: Convert this repo into an IronClaw-only package that uses globally installed `openclaw` as an external runtime, with strict removal of bundled OpenClaw core source and full cutover of CLI/web flows to external contracts (CLI + gateway protocol).
todos:
- id: ironclaw-boundary-definition
content: Lock IronClaw-only module boundary and mark all OpenClaw-owned code paths for removal
status: completed
- id: remove-cross-imports
content: Eliminate `apps/web` and `ui` internal imports of local OpenClaw source by replacing with IronClaw-local adapters over CLI/gateway contracts
status: completed
- id: cli-delegation-cutover
content: Implement IronClaw command delegation to global `openclaw` for non-bootstrap commands
status: completed
- id: peer-global-packaging
content: Update package metadata/docs to enforce peer + global OpenClaw installation model
status: completed
- id: delete-openclaw-core-source
content: Remove OpenClaw core runtime source and obsolete shims/scripts from this repository
status: completed
- id: release-pipeline-realignment
content: Rework build/release checks to publish IronClaw-only artifacts with strict external OpenClaw dependency
status: completed
- id: full-cutover-validation
content: Run full test/smoke matrix and keep one-release emergency fallback
status: completed
isProject: false
---
# Strict External OpenClaw Cutover
## Goal
- Make this repository IronClaw-only.
- Remove OpenClaw core runtime code from this repo.
- Depend on globally installed `openclaw` (peer/global model), not bundled source.
- Keep IronClaw UX: `npx ironclaw` bootstrap + UI on `3100` over gateway `18789`.
The upstream runtime source of truth is [openclaw/openclaw](https://github.com/openclaw/openclaw).
## Non-Negotiable Constraints
- No vendored OpenClaw core runtime in this repo after cutover.
- `openclaw` is consumed as a globally installed binary (peer + global install model), never shipped in this package.
- IronClaw must communicate with OpenClaw only via stable external contracts:
- `openclaw` CLI commands
- Gateway WebSocket protocol
## Target Architecture
```mermaid
flowchart LR
ironclawCli[ironclawCli] --> bootstrap[bootstrapFlow]
bootstrap --> openclawBin[globalOpenclawBin]
ironclawUi[ironclawUi3100] --> gatewayWs[gatewayWs18789]
gatewayWs --> openclawRuntime[openclawRuntimeExternal]
```
## Phase 1: Define IronClaw-Only Boundary
- Keep only IronClaw-owned surfaces:
- product layer and branding
- bootstrap/orchestration CLI
- web UI and workspace UX
- Mark OpenClaw-owned modules for removal from this repo.
- Primary files affected by the new boundary:
- [package.json](package.json)
- [openclaw.mjs](openclaw.mjs)
- [src/cli/run-main.ts](src/cli/run-main.ts)
- [src/cli/bootstrap.ts](src/cli/bootstrap.ts)
- [src/product/adapter.ts](src/product/adapter.ts)
## Phase 2: Replace Internal Core Imports With External Contracts
- Remove all `apps/web` / `ui` imports that currently reach into local OpenClaw source internals.
- Re-implement required behavior in IronClaw-local adapters using gateway protocol + local helpers.
- First critical edge:
- [apps/web/lib/agent-runner.ts](apps/web/lib/agent-runner.ts)
- Also migrate `ui/src/ui/**` consumers that import `../../../../src/*` internals.
## Phase 3: CLI Delegation Model
- Make IronClaw CLI own only bootstrap/product UX.
- Delegate non-bootstrap command execution to global `openclaw` binary.
- Keep rollout/fallback env gates while switching default to external execution.
- Primary files:
- [src/cli/run-main.ts](src/cli/run-main.ts)
- [src/cli/run-main.test.ts](src/cli/run-main.test.ts)
- [src/cli/bootstrap.ts](src/cli/bootstrap.ts)
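The delegation rule can be sketched as follows (hedged: `delegationTarget` and `runDelegated` are illustrative names, not the real exports of `src/cli/run-main.ts`):

```typescript
import { spawn, type ChildProcess } from "node:child_process";

// Bare `ironclaw` rewrites to `bootstrap`, which is the only command handled
// locally; everything else is forwarded to the global binary verbatim.
export function delegationTarget(argv: string[]): string[] | null {
  const primary = argv[0] ?? "bootstrap";
  if (primary === "bootstrap") return null; // stays in the IronClaw CLI
  return argv;
}

export function runDelegated(argv: string[]): ChildProcess | null {
  const args = delegationTarget(argv);
  if (args === null) return null;
  return spawn("openclaw", args, { stdio: "inherit", env: process.env });
}
```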
## Phase 4: Package + Dependency Model (Peer + Global)
- Update package metadata so IronClaw does not bundle OpenClaw runtime code.
- Add peer requirement/documentation for global `openclaw` presence.
- Ensure bootstrap validates and remediates missing global CLI (`npm i -g openclaw`).
- Primary files:
- [package.json](package.json)
- [docs/reference/RELEASING.md](docs/reference/RELEASING.md)
- install/update docs under `docs/`
## Phase 5: Remove OpenClaw Core Source From Repo
- Delete OpenClaw-owned runtime modules from this repository once delegation and adapters are complete.
- Retain only IronClaw package code and tests.
- Remove obsolete build/release scripts that assume monolithic runtime shipping.
- Primary files/areas:
- `src/` (OpenClaw runtime portions)
- scripts that package core runtime artifacts
- compatibility shims that re-export local OpenClaw code
## Phase 6: Build/Release Pipeline Realignment
- Adjust build outputs to ship IronClaw only.
- Remove checks that require bundled OpenClaw dist artifacts.
- Keep web standalone packaging + bootstrap checks.
- Primary files:
- [tsdown.config.ts](tsdown.config.ts)
- [scripts/release-check.ts](scripts/release-check.ts)
- [scripts/deploy.sh](scripts/deploy.sh)
## Verification Gates
- `pnpm tsgo`, lint, and formatting pass after source removals.
- Unit/e2e coverage for:
- bootstrap diagnostics and remediation
- command delegation to global `openclaw`
- gateway streaming from IronClaw UI
- End-to-end smoke:
- clean machine with only global `openclaw`
- `npx ironclaw` bootstrap succeeds
- UI works on `3100`, gateway on `18789`, no profile/daemon collisions.
## Rollout Safety
- Keep emergency fallback env switch for one release window.
- Remove fallback after successful release telemetry and smoke matrix pass.


@@ -34,8 +34,8 @@
 **Runtime: Node 22+**
 ```bash
-npm i -g ironclaw
-ironclaw onboard --install-daemon
+npm i -g openclaw
+npx ironclaw
 ```
 Opens at `localhost:3100`. That's it.
@@ -43,9 +43,9 @@ Opens at `localhost:3100`. That's it.
 Three steps total:
 ```
-1. npm i -g ironclaw
-2. ironclaw onboard
-3. ironclaw gateway start
+1. npm i -g openclaw
+2. npx ironclaw
+3. bootstrap opens UI on localhost:3100
 ```
 ---


@@ -159,7 +159,8 @@ describe("Chat API routes", () => {
   });
   it("aborts run and returns result", async () => {
-    const { abortRun } = await import("@/lib/active-runs");
+    const { abortRun, getActiveRun } = await import("@/lib/active-runs");
+    vi.mocked(getActiveRun).mockReturnValue({ status: "running" } as never);
     vi.mocked(abortRun).mockReturnValue(true);
     const { POST } = await import("./stop/route.js");


@@ -115,15 +115,17 @@ describe("profiles API", () => {
   it("discovers workspace-<name> directories", async () => {
     const { existsSync: es, readdirSync: rds } = await import("node:fs");
+    const devStateDir = join("/home/testuser", ".openclaw-dev");
+    const devWorkspaceDir = join(devStateDir, "workspace");
     vi.mocked(es).mockImplementation((p) => {
       const s = String(p);
       return (
         s === STATE_DIR ||
-        s === join(STATE_DIR, "workspace-dev")
+        s === devWorkspaceDir
       );
     });
     vi.mocked(rds).mockReturnValue([
-      makeDirent("workspace-dev", true),
+      makeDirent(".openclaw-dev", true),
     ] as unknown as Dirent[]);
     const response = await callGet();
@@ -206,7 +208,7 @@ describe("profiles API", () => {
      const response = await callSwitch({ profile: "test" });
      const json = await response.json();
-     expect(json.stateDir).toBe(STATE_DIR);
+     expect(json.stateDir).toBe(join("/home/testuser", ".openclaw-test"));
    });
  });
 });


@@ -11,6 +11,7 @@ vi.mock("@/lib/workspace", () => ({
   duckdbQueryOnFile: vi.fn(() => []),
   duckdbExecOnFile: vi.fn(() => true),
   findDuckDBForObject: vi.fn(() => null),
+  getObjectViews: vi.fn(() => ({ views: [], activeView: null })),
   parseRelationValue: vi.fn((v: string | null) => (v ? [v] : [])),
   resolveDuckdbBin: vi.fn(() => null),
   discoverDuckDBPaths: vi.fn(() => []),
@@ -27,6 +28,7 @@ describe("Workspace Objects API", () => {
     duckdbQueryOnFile: vi.fn(() => []),
     duckdbExecOnFile: vi.fn(() => true),
     findDuckDBForObject: vi.fn(() => null),
+    getObjectViews: vi.fn(() => ({ views: [], activeView: null })),
     parseRelationValue: vi.fn((v: string | null) => (v ? [v] : [])),
     resolveDuckdbBin: vi.fn(() => null),
     discoverDuckDBPaths: vi.fn(() => []),


@@ -17,6 +17,8 @@ vi.mock("node:os", () => ({
 // Mock workspace
 vi.mock("@/lib/workspace", () => ({
   resolveWorkspaceRoot: vi.fn(() => null),
+  resolveOpenClawStateDir: vi.fn(() => "/home/testuser/.openclaw"),
+  getEffectiveProfile: vi.fn(() => "default"),
   parseSimpleYaml: vi.fn(() => ({})),
   duckdbQueryAll: vi.fn(() => []),
   duckdbQueryAllAsync: vi.fn(async () => []),
@@ -55,6 +57,8 @@ describe("Workspace Tree & Browse API", () => {
     }));
     vi.mock("@/lib/workspace", () => ({
       resolveWorkspaceRoot: vi.fn(() => null),
+      resolveOpenClawStateDir: vi.fn(() => "/home/testuser/.openclaw"),
+      getEffectiveProfile: vi.fn(() => "default"),
       parseSimpleYaml: vi.fn(() => ({})),
       duckdbQueryAll: vi.fn(() => []),
       duckdbQueryAllAsync: vi.fn(async () => []),
@@ -74,7 +78,8 @@ describe("Workspace Tree & Browse API", () => {
   describe("GET /api/workspace/tree", () => {
     it("returns tree with exists=false when no workspace root", async () => {
       const { GET } = await import("./tree/route.js");
-      const res = await GET();
+      const req = new Request("http://localhost/api/workspace/tree");
+      const res = await GET(req);
       const json = await res.json();
       expect(json.exists).toBe(false);
       expect(json.tree).toEqual([]);
@@ -96,7 +101,8 @@ describe("Workspace Tree & Browse API", () => {
      });
      const { GET } = await import("./tree/route.js");
-     const res = await GET();
+     const req = new Request("http://localhost/api/workspace/tree");
+     const res = await GET(req);
      const json = await res.json();
      expect(json.exists).toBe(true);
      expect(json.tree.length).toBeGreaterThan(0);
@@ -109,7 +115,8 @@ describe("Workspace Tree & Browse API", () => {
      vi.mocked(mockExists).mockReturnValue(true);
      const { GET } = await import("./tree/route.js");
-     const res = await GET();
+     const req = new Request("http://localhost/api/workspace/tree");
+     const res = await GET(req);
      const json = await res.json();
      expect(json.workspaceRoot).toBe("/ws");
    });


@@ -8,7 +8,7 @@
  * - Messages are written to persistent sessions as they arrive.
  * - New HTTP connections can re-attach to a running stream.
  */
-import { type ChildProcess, spawn } from "node:child_process";
+import { spawn } from "node:child_process";
 import { createInterface } from "node:readline";
 import { join } from "node:path";
 import {
@@ -19,10 +19,10 @@ import {
 } from "node:fs";
 import { resolveWebChatDir, resolveOpenClawStateDir } from "./workspace";
 import {
+  type AgentProcessHandle,
   type AgentEvent,
   spawnAgentProcess,
   spawnAgentSubscribeProcess,
-  resolvePackageRoot,
   extractToolResult,
   buildToolOutput,
   parseAgentErrorMessage,
@@ -59,7 +59,7 @@ type AccumulatedMessage = {
 export type ActiveRun = {
   sessionId: string;
-  childProcess: ChildProcess;
+  childProcess: AgentProcessHandle;
   eventBuffer: SseEvent[];
   subscribers: Set<RunSubscriber>;
   accumulated: AccumulatedMessage;
@@ -74,7 +74,7 @@ export type ActiveRun = {
   /** @internal last globalSeq seen from the gateway event stream */
   lastGlobalSeq: number;
   /** @internal subscribe child process for waiting-for-subagents continuation */
-  _subscribeProcess?: ChildProcess | null;
+  _subscribeProcess?: AgentProcessHandle | null;
   /** Full gateway session key (used for subagent subscribe-only runs) */
   sessionKey?: string;
   /** Parent web session ID (for subagent runs) */
@@ -251,14 +251,10 @@ export function reactivateSubscribeRun(sessionKey: string): boolean {
  */
 export function sendSubagentFollowUp(sessionKey: string, message: string): boolean {
   try {
-    const root = resolvePackageRoot();
-    const devScript = join(root, "scripts", "run-node.mjs");
-    const prodScript = join(root, "openclaw.mjs");
-    const scriptPath = existsSync(devScript) ? devScript : prodScript;
     const child = spawn(
-      "node",
+      "openclaw",
       [
-        scriptPath, "gateway", "call", "agent",
+        "gateway", "call", "agent",
         "--params", JSON.stringify({
          message, sessionKey,
          idempotencyKey: `follow-${Date.now()}-${Math.random().toString(36).slice(2)}`,
@@ -266,8 +262,9 @@ export function sendSubagentFollowUp(sessionKey: string, message: string): boole
        }),
        "--json", "--timeout", "10000",
      ],
-      { cwd: root, env: { ...process.env }, stdio: "ignore", detached: true },
+      { env: { ...process.env }, stdio: "ignore", detached: true },
     );
+    child.on("error", () => {});
     child.unref();
     return true;
   } catch {
@@ -359,16 +356,10 @@ export function abortRun(sessionId: string): boolean {
  */
 function sendGatewayAbort(sessionId: string): void {
   try {
-    const root = resolvePackageRoot();
-    const devScript = join(root, "scripts", "run-node.mjs");
-    const prodScript = join(root, "openclaw.mjs");
-    const scriptPath = existsSync(devScript) ? devScript : prodScript;
     const sessionKey = `agent:main:web:${sessionId}`;
     const child = spawn(
-      "node",
+      "openclaw",
       [
-        scriptPath,
        "gateway",
        "call",
        "chat.abort",
@@ -379,12 +370,12 @@ function sendGatewayAbort(sessionId: string): void {
        "4000",
      ],
      {
-        cwd: root,
        env: { ...process.env },
        stdio: "ignore",
        detached: true,
      },
    );
+    child.on("error", () => {});
     // Let the abort process run independently — don't block on it.
     child.unref();
   } catch {
@@ -510,7 +501,7 @@ export function startSubscribeRun(params: {
  */
 function wireSubscribeOnlyProcess(
   run: ActiveRun,
-  child: ChildProcess,
+  child: AgentProcessHandle,
   sessionKey: string,
 ): void {
   let idCounter = 0;
@@ -1361,7 +1352,8 @@ function wireChildProcess(run: ActiveRun): void {
     if (run.status !== "running") {return;}
     console.error("[active-runs] Child process error:", err);
-    emitError(`Failed to start agent: ${err.message}`);
+    const message = err instanceof Error ? err.message : String(err);
+    emitError(`Failed to start agent: ${message}`);
     run.status = "error";
     flushPersistence(run);
     for (const sub of run.subscribers) {


@@ -1,10 +1,13 @@
 import { spawn, type ChildProcess } from "node:child_process";
 import { join } from "node:path";
 import { describe, it, expect, vi, beforeEach, afterEach } from "vitest";
-vi.mock("node:child_process", () => ({ spawn: vi.fn() }));
-vi.mock("node:fs", () => ({ existsSync: vi.fn() }));
+vi.mock("node:child_process", async (importOriginal) => {
+  const actual = await importOriginal<typeof import("node:child_process")>();
+  return {
+    ...actual,
+    spawn: vi.fn(),
+  };
+});
 const spawnMock = vi.mocked(spawn);
 /** Minimal mock ChildProcess for testing. */
@@ -49,8 +52,13 @@ describe("agent-runner", () => {
     vi.restoreAllMocks();
     process.env = { ...originalEnv };
     // Re-wire mocks after resetModules
-    vi.mock("node:child_process", () => ({ spawn: vi.fn() }));
-    vi.mock("node:fs", () => ({ existsSync: vi.fn() }));
+    vi.mock("node:child_process", async (importOriginal) => {
+      const actual = await importOriginal<typeof import("node:child_process")>();
+      return {
+        ...actual,
+        spawn: vi.fn(),
+      };
+    });
   });
   afterEach(() => {
@ -58,177 +66,29 @@ describe("agent-runner", () => {
vi.restoreAllMocks();
});
// ── resolvePackageRoot ──────────────────────────────────────────────
describe("resolvePackageRoot", () => {
it("uses OPENCLAW_ROOT env var when set and valid", async () => {
process.env.OPENCLAW_ROOT = "/opt/ironclaw";
const { existsSync: mockExists } = await import("node:fs");
vi.mocked(mockExists).mockImplementation(
(p) => String(p) === "/opt/ironclaw",
);
const { resolvePackageRoot } = await import("./agent-runner.js");
expect(resolvePackageRoot()).toBe("/opt/ironclaw");
});
it("ignores OPENCLAW_ROOT when the path does not exist", async () => {
process.env.OPENCLAW_ROOT = "/nonexistent/path";
const { existsSync: mockExists } = await import("node:fs");
// OPENCLAW_ROOT doesn't exist, but we'll find openclaw.mjs by walking up
vi.mocked(mockExists).mockImplementation((p) => {
return String(p) === join("/pkg", "openclaw.mjs");
});
vi.spyOn(process, "cwd").mockReturnValue("/pkg/apps/web");
const { resolvePackageRoot } = await import("./agent-runner.js");
expect(resolvePackageRoot()).toBe("/pkg");
});
it("finds package root via openclaw.mjs in production (standalone cwd)", async () => {
delete process.env.OPENCLAW_ROOT;
const { existsSync: mockExists } = await import("node:fs");
vi.mocked(mockExists).mockImplementation((p) => {
// Only openclaw.mjs exists at the real package root
return String(p) === join("/pkg", "openclaw.mjs");
});
// Standalone mode: cwd is deep inside .next/standalone
vi.spyOn(process, "cwd").mockReturnValue(
"/pkg/apps/web/.next/standalone/apps/web",
);
const { resolvePackageRoot } = await import("./agent-runner.js");
expect(resolvePackageRoot()).toBe("/pkg");
});
it("finds package root via scripts/run-node.mjs in dev workspace", async () => {
delete process.env.OPENCLAW_ROOT;
const { existsSync: mockExists } = await import("node:fs");
vi.mocked(mockExists).mockImplementation((p) => {
return String(p) === join("/repo", "scripts", "run-node.mjs");
});
vi.spyOn(process, "cwd").mockReturnValue("/repo/apps/web");
const { resolvePackageRoot } = await import("./agent-runner.js");
expect(resolvePackageRoot()).toBe("/repo");
});
it("falls back to legacy 2-levels-up heuristic", async () => {
delete process.env.OPENCLAW_ROOT;
const { existsSync: mockExists } = await import("node:fs");
vi.mocked(mockExists).mockReturnValue(false); // nothing found
vi.spyOn(process, "cwd").mockReturnValue("/unknown/apps/web");
const { resolvePackageRoot } = await import("./agent-runner.js");
expect(resolvePackageRoot()).toBe(
join("/unknown/apps/web", "..", ".."),
);
});
});
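The resolution order these tests pin down (env override, then an upward walk for marker files, then the legacy heuristic) can be sketched as a small dependency-injected function. `resolveRoot` and its injected `exists` callback are illustrative names for testability, not the module's real signature:

```typescript
import { join, dirname } from "node:path";

// Illustrative re-implementation of the order the tests assert:
// 1. env override  2. upward walk for marker files  3. legacy 2-levels-up.
function resolveRoot(
  cwd: string,
  exists: (p: string) => boolean,
  envRoot?: string,
): string {
  // 1. Env var wins when it points at an existing directory.
  if (envRoot && exists(envRoot)) return envRoot;
  // 2. Walk upward looking for openclaw.mjs (prod) or scripts/run-node.mjs (dev).
  let dir = cwd;
  for (let i = 0; i < 20; i++) {
    if (exists(join(dir, "openclaw.mjs")) || exists(join(dir, "scripts", "run-node.mjs"))) {
      return dir;
    }
    const parent = dirname(dir);
    if (parent === dir) break; // reached filesystem root
    dir = parent;
  }
  // 3. Legacy heuristic: assume a <root>/apps/web layout.
  return cwd.endsWith(join("apps", "web")) ? join(cwd, "..", "..") : cwd;
}
```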
// ── spawnAgentProcess ──────────────────────────────────────────────
describe("spawnAgentProcess", () => {
it("always uses global openclaw", async () => {
delete process.env.OPENCLAW_ROOT;
const { existsSync: mockExists } = await import("node:fs");
const { spawn: mockSpawn } = await import("node:child_process");
vi.mocked(mockExists).mockImplementation((p) => {
const s = String(p);
// Package root found via scripts/run-node.mjs
if (s === join("/repo", "scripts", "run-node.mjs")) {return true;}
// openclaw.mjs also exists in dev
if (s === join("/repo", "openclaw.mjs")) {return true;}
return false;
});
vi.spyOn(process, "cwd").mockReturnValue("/repo/apps/web");
const child = mockChildProcess();
vi.mocked(mockSpawn).mockReturnValue(child as unknown as ChildProcess);
const { spawnAgentProcess } = await import("./agent-runner.js");
spawnAgentProcess("hello");
expect(vi.mocked(mockSpawn)).toHaveBeenCalledWith(
"openclaw",
expect.arrayContaining(["agent", "--agent", "main", "--message", "hello", "--stream-json"]),
expect.objectContaining({
cwd: "/repo",
}),
);
});
it("falls back to openclaw.mjs in production (standalone)", async () => {
process.env.OPENCLAW_ROOT = "/pkg";
const { existsSync: mockExists } = await import("node:fs");
const { spawn: mockSpawn } = await import("node:child_process");
vi.mocked(mockExists).mockImplementation((p) => {
const s = String(p);
if (s === "/pkg") {return true;} // OPENCLAW_ROOT valid
if (s === join("/pkg", "openclaw.mjs")) {return true;} // prod script
// scripts/run-node.mjs does NOT exist (production install)
return false;
});
const child = mockChildProcess();
vi.mocked(mockSpawn).mockReturnValue(
child as unknown as ChildProcess,
);
const { spawnAgentProcess } = await import("./agent-runner.js");
spawnAgentProcess("test message");
expect(vi.mocked(mockSpawn)).toHaveBeenCalledWith(
"node",
expect.arrayContaining([
join("/pkg", "openclaw.mjs"),
"agent",
"--agent",
"main",
"--message",
"test message",
"--stream-json",
]),
expect.objectContaining({
cwd: "/pkg",
stdio: ["ignore", "pipe", "pipe"],
}),
);
});
it("includes session-key and lane args when agentSessionId is set", async () => {
process.env.OPENCLAW_ROOT = "/pkg";
const { existsSync: mockExists } = await import("node:fs");
const { spawn: mockSpawn } = await import("node:child_process");
vi.mocked(mockExists).mockImplementation((p) => {
const s = String(p);
return s === "/pkg" || s === join("/pkg", "openclaw.mjs");
});
const child = mockChildProcess();
vi.mocked(mockSpawn).mockReturnValue(
child as unknown as ChildProcess,
@ -238,7 +98,7 @@ describe("agent-runner", () => {
spawnAgentProcess("msg", "session-123");
expect(vi.mocked(mockSpawn)).toHaveBeenCalledWith(
"openclaw",
expect.arrayContaining([
"--session-key",
"agent:main:web:session-123",
@ -358,31 +218,4 @@ describe("agent-runner", () => {
});
});
// ── spawnAgentProcess with file context ──────────────────────────
describe("spawnAgentProcess (additional)", () => {
it("includes file context flags when filePath is set", async () => {
process.env.OPENCLAW_ROOT = "/pkg";
const { existsSync: mockExists } = await import("node:fs");
const { spawn: mockSpawn } = await import("node:child_process");
vi.mocked(mockExists).mockImplementation((p) => {
const s = String(p);
return s === "/pkg" || s === join("/pkg", "openclaw.mjs");
});
const child = mockChildProcess();
vi.mocked(mockSpawn).mockReturnValue(child as unknown as ChildProcess);
const { spawnAgentProcess } = await import("./agent-runner.js");
spawnAgentProcess("analyze this file", "session-1", "knowledge/doc.md");
expect(vi.mocked(mockSpawn)).toHaveBeenCalledWith(
"node",
expect.arrayContaining(["--message"]),
expect.anything(),
);
});
});
});
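The CLI contract the spawn tests assert — a positional `agent` subcommand plus `--agent`, `--message`, `--stream-json`, and an optional `--session-key` — can be summarized in a small arg-builder sketch. `buildAgentArgs` is a hypothetical helper, and the lane flag mentioned in the session test is omitted here:

```typescript
// Hypothetical helper mirroring the argument vector the tests expect to be
// passed to the global `openclaw` binary. Flag names come from the test
// expectations above; the helper itself is illustrative only.
function buildAgentArgs(message: string, sessionId?: string): string[] {
  const args = ["agent", "--agent", "main", "--message", message, "--stream-json"];
  if (sessionId) {
    // Session runs are keyed so subscribe continuations can reattach later.
    args.push("--session-key", `agent:main:web:${sessionId}`);
  }
  return args;
}
```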


@ -1,7 +1,5 @@
import { spawn } from "node:child_process";
import { createInterface } from "node:readline";
import { existsSync } from "node:fs";
import { dirname, join } from "node:path";
import { getEffectiveProfile, resolveWorkspaceRoot } from "./workspace";
export type AgentEvent = {
@ -111,68 +109,42 @@ export type RunAgentOptions = {
sessionId?: string;
};
/**
* Resolve the ironclaw/openclaw package root directory.
*
* In a dev workspace the cwd is `<repo>/apps/web` and `scripts/run-node.mjs`
* exists two levels up. In a production standalone build the cwd is
* `<pkg>/apps/web/.next/standalone/apps/web/` walking two levels up lands
* inside the `.next` tree, not at the package root.
*
* Strategy:
* 1. Honour `OPENCLAW_ROOT` env var (set by the gateway when spawning the
* standalone server guaranteed correct).
* 2. Walk upward from cwd looking for `openclaw.mjs` (production) or
* `scripts/run-node.mjs` (dev).
* 3. Fallback: original 2-levels-up heuristic.
*/
export function resolvePackageRoot(): string {
// 1. Env var (fastest, most reliable in standalone mode).
if (process.env.OPENCLAW_ROOT && existsSync(process.env.OPENCLAW_ROOT)) {
return process.env.OPENCLAW_ROOT;
}
// 2. Walk up from cwd.
let dir = process.cwd();
for (let i = 0; i < 20; i++) {
if (
existsSync(join(dir, "openclaw.mjs")) ||
existsSync(join(dir, "scripts", "run-node.mjs"))
) {
return dir;
}
const parent = dirname(dir);
if (parent === dir) {break;}
dir = parent;
}
// 3. Fallback: legacy heuristic.
const cwd = process.cwd();
return cwd.endsWith(join("apps", "web"))
? join(cwd, "..", "..")
: cwd;
}
export type AgentProcessHandle = {
stdout: NodeJS.ReadableStream | null;
stderr: NodeJS.ReadableStream | null;
kill: (signal?: NodeJS.Signals | number) => boolean;
on: {
(
event: "close",
listener: (code: number | null, signal: NodeJS.Signals | null) => void,
): AgentProcessHandle;
(event: string, listener: (...args: unknown[]) => void): AgentProcessHandle;
};
once: {
(
event: "close",
listener: (code: number | null, signal: NodeJS.Signals | null) => void,
): AgentProcessHandle;
(event: string, listener: (...args: unknown[]) => void): AgentProcessHandle;
};
};
/**
* Spawn an agent child process and return the ChildProcess handle.
* Shared between `runAgent` (legacy callback API) and the ActiveRunManager.
*
* Always delegates to the globally installed `openclaw` binary on PATH.
*/
export function spawnAgentProcess(
message: string,
agentSessionId?: string,
): AgentProcessHandle {
return spawnLegacyAgentProcess(message, agentSessionId);
}
function spawnLegacyAgentProcess(
message: string,
agentSessionId?: string,
): ReturnType<typeof spawn> {
const root = resolvePackageRoot();
const args = [
"agent",
"--agent",
"main",
@ -188,8 +160,7 @@ export function spawnAgentProcess(
const profile = getEffectiveProfile();
const workspace = resolveWorkspaceRoot();
return spawn("openclaw", args, {
env: {
...process.env,
...(profile ? { OPENCLAW_PROFILE: profile } : {}),
@ -206,15 +177,15 @@ export function spawnAgentProcess(
export function spawnAgentSubscribeProcess(
sessionKey: string,
afterSeq = 0,
): AgentProcessHandle {
return spawnLegacyAgentSubscribeProcess(sessionKey, afterSeq);
}
function spawnLegacyAgentSubscribeProcess(
sessionKey: string,
afterSeq = 0,
): ReturnType<typeof spawn> {
const root = resolvePackageRoot();
const args = [
"agent",
"--stream-json",
"--subscribe-session-key",
@ -225,8 +196,7 @@ export function spawnAgentSubscribeProcess(
const profile = getEffectiveProfile();
const workspace = resolveWorkspaceRoot();
return spawn("openclaw", args, {
env: {
...process.env,
...(profile ? { OPENCLAW_PROFILE: profile } : {}),
@ -472,7 +442,8 @@ export async function runAgent(
});
child.on("error", (err) => {
const error = err instanceof Error ? err : new Error(String(err));
callback.onError(error);
resolve();
});


@ -6,15 +6,15 @@
*
* Events are fed from CLI NDJSON streams (parent run + subscribe continuations).
*/
import { spawn } from "node:child_process";
import { randomUUID } from "node:crypto";
import { createInterface } from "node:readline";
import { existsSync, readFileSync, writeFileSync, mkdirSync, appendFileSync } from "node:fs";
import { join } from "node:path";
import {
type AgentEvent,
type AgentProcessHandle,
spawnAgentSubscribeProcess,
resolvePackageRoot,
extractToolResult,
buildToolOutput,
parseAgentErrorMessage,
@ -43,7 +43,7 @@ type SubagentRun = SubagentInfo & {
subscribers: Set<SubagentSubscriber>;
/** Internal state for event-to-SSE transformation */
_state: TransformState;
_subscribeProcess: AgentProcessHandle | null;
_cleanupTimer: ReturnType<typeof setTimeout> | null;
/** Set when lifecycle/end is received; actual finalization deferred to subscribe close. */
_lifecycleEnded: boolean;
@ -462,14 +462,9 @@ export function reactivateSubagent(sessionKey: string): boolean {
function sendGatewayAbortForSubagent(sessionKey: string): void {
try {
const root = resolvePackageRoot();
const child = spawn(
"openclaw",
[
"gateway",
"call",
"chat.abort",
@ -480,12 +475,12 @@ function sendGatewayAbortForSubagent(sessionKey: string): void {
"4000",
],
{
env: { ...process.env },
stdio: "ignore",
detached: true,
},
);
child.on("error", () => {});
child.unref();
} catch {
// best effort
@ -504,15 +499,10 @@ export function spawnSubagentMessage(sessionKey: string, message: string): boole
try {
const run = getRegistry().runs.get(sessionKey);
if (!run) {return false;}
const root = resolvePackageRoot();
const idempotencyKey = randomUUID();
const child = spawn(
"openclaw",
[
"gateway",
"call",
"agent",
@ -531,12 +521,12 @@ export function spawnSubagentMessage(sessionKey: string, message: string): boole
"10000",
],
{
env: { ...process.env },
stdio: "ignore",
detached: true,
},
);
child.on("error", () => {});
child.unref();
return true;
} catch {

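Both gateway helpers above share the same fire-and-forget shape: spawn the global `openclaw` binary detached, ignore stdio, swallow spawn errors, and `unref()` so the web server never blocks on the child. A minimal sketch, assuming only that `openclaw` may or may not be on PATH:

```typescript
import { spawn } from "node:child_process";

// Fire-and-forget gateway call: best effort, never throws synchronously.
function fireAndForgetGatewayCall(args: string[]): void {
  const child = spawn("openclaw", ["gateway", "call", ...args], {
    env: { ...process.env },
    stdio: "ignore",   // no pipes to drain
    detached: true,    // child survives independently of this process
  });
  child.on("error", () => {}); // a missing binary is not fatal here
  child.unref();               // do not keep the event loop alive
}
```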

@ -1,12 +1,32 @@
import { describe, it, expect, vi, beforeEach, afterEach } from "vitest";
vi.mock("node:fs", async (importOriginal) => {
const actual = await importOriginal<typeof import("node:fs")>();
const existsSync = vi.fn(() => false);
const readFileSync = vi.fn(() => "");
const readdirSync = vi.fn(() => []);
const writeFileSync = vi.fn();
const mkdirSync = vi.fn();
const renameSync = vi.fn();
return {
...actual,
existsSync,
readFileSync,
readdirSync,
writeFileSync,
mkdirSync,
renameSync,
default: {
...actual,
existsSync,
readFileSync,
readdirSync,
writeFileSync,
mkdirSync,
renameSync,
},
};
});
vi.mock("node:child_process", () => ({
execSync: vi.fn(() => ""),
@ -29,7 +49,11 @@ import { join } from "node:path";
describe("profile-scoped chat session isolation", () => {
const originalEnv = { ...process.env };
const STATE_DIR = join("/home/testuser", ".openclaw");
const DEFAULT_STATE_DIR = join("/home/testuser", ".openclaw");
const stateDirForProfile = (profile: string | null) =>
!profile || profile.toLowerCase() === "default"
? DEFAULT_STATE_DIR
: join("/home/testuser", `.openclaw-${profile}`);
beforeEach(() => {
vi.resetModules();
@ -40,13 +64,33 @@ describe("profile-scoped chat session isolation", () => {
delete process.env.OPENCLAW_WORKSPACE;
delete process.env.OPENCLAW_STATE_DIR;
vi.mock("node:fs", async (importOriginal) => {
const actual = await importOriginal<typeof import("node:fs")>();
const existsSync = vi.fn(() => false);
const readFileSync = vi.fn(() => "");
const readdirSync = vi.fn(() => []);
const writeFileSync = vi.fn();
const mkdirSync = vi.fn();
const renameSync = vi.fn();
return {
...actual,
existsSync,
readFileSync,
readdirSync,
writeFileSync,
mkdirSync,
renameSync,
default: {
...actual,
existsSync,
readFileSync,
readdirSync,
writeFileSync,
mkdirSync,
renameSync,
},
};
});
vi.mock("node:child_process", () => ({
execSync: vi.fn(() => ""),
exec: vi.fn(
@ -85,15 +129,15 @@ describe("profile-scoped chat session isolation", () => {
mockReadFile.mockImplementation(() => {
throw new Error("ENOENT");
});
expect(resolveWebChatDir()).toBe(join(DEFAULT_STATE_DIR, "web-chat"));
});
it("named profile uses profile-scoped web-chat directory", async () => {
const { resolveWebChatDir, setUIActiveProfile, mockReadFile } =
await importWorkspace();
mockReadFile.mockReturnValue(JSON.stringify({}) as never);
setUIActiveProfile("work");
expect(resolveWebChatDir()).toBe(join(stateDirForProfile("work"), "web-chat"));
});
it("different profiles produce different chat directories", async () => {
@ -109,8 +153,8 @@ describe("profile-scoped chat session isolation", () => {
const dirBeta = resolveWebChatDir();
expect(dirAlpha).not.toBe(dirBeta);
expect(dirAlpha).toBe(join(stateDirForProfile("alpha"), "web-chat"));
expect(dirBeta).toBe(join(stateDirForProfile("beta"), "web-chat"));
});
it("switching to default after named profile reverts to base dir", async () => {
@ -119,10 +163,10 @@ describe("profile-scoped chat session isolation", () => {
mockReadFile.mockReturnValue(JSON.stringify({}) as never);
setUIActiveProfile("work");
expect(resolveWebChatDir()).toBe(join(stateDirForProfile("work"), "web-chat"));
setUIActiveProfile(null);
expect(resolveWebChatDir()).toBe(join(DEFAULT_STATE_DIR, "web-chat"));
});
it("'default' profile name uses base web-chat dir (case-insensitive)", async () => {
@ -131,10 +175,10 @@ describe("profile-scoped chat session isolation", () => {
mockReadFile.mockReturnValue(JSON.stringify({}) as never);
setUIActiveProfile("Default");
expect(resolveWebChatDir()).toBe(join(DEFAULT_STATE_DIR, "web-chat"));
setUIActiveProfile("DEFAULT");
expect(resolveWebChatDir()).toBe(join(DEFAULT_STATE_DIR, "web-chat"));
});
it("OPENCLAW_STATE_DIR override changes base for chat dirs", async () => {
@ -147,7 +191,7 @@ describe("profile-scoped chat session isolation", () => {
expect(resolveWebChatDir()).toBe(join("/custom/state", "web-chat"));
setUIActiveProfile("test");
expect(resolveWebChatDir()).toBe(join("/custom/state", "web-chat"));
});
it("workspace roots are isolated per profile too", async () => {
@ -155,8 +199,8 @@ describe("profile-scoped chat session isolation", () => {
await importWorkspace();
mockReadFile.mockReturnValue(JSON.stringify({}) as never);
const defaultWs = join(DEFAULT_STATE_DIR, "workspace");
const workWs = join(stateDirForProfile("work"), "workspace");
mockExists.mockImplementation((p) => {
const s = String(p);

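The `stateDirForProfile` helper these tests rely on encodes the profile-to-state-dir convention: the default profile shares `~/.openclaw`, while named profiles get their own `~/.openclaw-<name>` tree. A standalone sketch with the home directory injected for testability (`profileStateDir` is an illustrative name):

```typescript
import { join } from "node:path";

// Maps a profile name to its state directory. Empty, null, and "default"
// (case-insensitive) all resolve to the shared ~/.openclaw tree.
function profileStateDir(home: string, profile: string | null): string {
  const normalized = profile?.trim() || null;
  if (!normalized || normalized.toLowerCase() === "default") {
    return join(home, ".openclaw");
  }
  return join(home, `.openclaw-${normalized}`);
}
```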

@ -1,13 +1,33 @@
import { describe, it, expect, vi, beforeEach, afterEach } from "vitest";
import type { Dirent } from "node:fs";
vi.mock("node:fs", async (importOriginal) => {
const actual = await importOriginal<typeof import("node:fs")>();
const existsSync = vi.fn(() => false);
const readFileSync = vi.fn(() => "");
const readdirSync = vi.fn(() => []);
const writeFileSync = vi.fn();
const mkdirSync = vi.fn();
const renameSync = vi.fn();
return {
...actual,
existsSync,
readFileSync,
readdirSync,
writeFileSync,
mkdirSync,
renameSync,
default: {
...actual,
existsSync,
readFileSync,
readdirSync,
writeFileSync,
mkdirSync,
renameSync,
},
};
});
vi.mock("node:child_process", () => ({
execSync: vi.fn(() => ""),
@ -45,8 +65,12 @@ function makeDirent(name: string, isDir: boolean): Dirent {
describe("workspace profiles", () => {
const originalEnv = { ...process.env };
const STATE_DIR = join("/home/testuser", ".openclaw");
const DEFAULT_STATE_DIR = join("/home/testuser", ".openclaw");
const stateDirForProfile = (profile: string | null) =>
!profile || profile.toLowerCase() === "default"
? DEFAULT_STATE_DIR
: join("/home/testuser", `.openclaw-${profile}`);
const UI_STATE_PATH = join(DEFAULT_STATE_DIR, ".ironclaw-ui-state.json");
beforeEach(() => {
vi.resetModules();
@ -57,13 +81,33 @@ describe("workspace profiles", () => {
delete process.env.OPENCLAW_WORKSPACE;
delete process.env.OPENCLAW_STATE_DIR;
vi.mock("node:fs", async (importOriginal) => {
const actual = await importOriginal<typeof import("node:fs")>();
const existsSync = vi.fn(() => false);
const readFileSync = vi.fn(() => "");
const readdirSync = vi.fn(() => []);
const writeFileSync = vi.fn();
const mkdirSync = vi.fn();
const renameSync = vi.fn();
return {
...actual,
existsSync,
readFileSync,
readdirSync,
writeFileSync,
mkdirSync,
renameSync,
default: {
...actual,
existsSync,
readFileSync,
readdirSync,
writeFileSync,
mkdirSync,
renameSync,
},
};
});
vi.mock("node:child_process", () => ({
execSync: vi.fn(() => ""),
exec: vi.fn(
@ -91,6 +135,7 @@ describe("workspace profiles", () => {
readFileSync: rfs,
readdirSync: rds,
writeFileSync: wfs,
renameSync: rs,
} = await import("node:fs");
const mod = await import("./workspace.js");
return {
@ -99,6 +144,7 @@ describe("workspace profiles", () => {
mockReadFile: vi.mocked(rfs),
mockReaddir: vi.mocked(rds),
mockWriteFile: vi.mocked(wfs),
mockRename: vi.mocked(rs),
};
}
@ -261,20 +307,23 @@ describe("workspace profiles", () => {
expect(profiles[0].isActive).toBe(true);
});
it("discovers profile-scoped .openclaw-<name> state directories", async () => {
const { discoverProfiles, mockExists, mockReaddir } =
await importWorkspace();
const workStateDir = stateDirForProfile("work");
const personalStateDir = stateDirForProfile("personal");
mockExists.mockImplementation((p) => {
const s = String(p);
return (
s === DEFAULT_STATE_DIR ||
s === join(DEFAULT_STATE_DIR, "openclaw.json") ||
s === join(workStateDir, "workspace") ||
s === join(personalStateDir, "workspace")
);
});
mockReaddir.mockReturnValue([
makeDirent(".openclaw-work", true),
makeDirent(".openclaw-personal", true),
makeDirent("sessions", true),
makeDirent("config.json", false),
] as unknown as Dirent[]);
@ -290,12 +339,17 @@ describe("workspace profiles", () => {
it("marks active profile correctly", async () => {
const { discoverProfiles, setUIActiveProfile, mockExists, mockReaddir } =
await importWorkspace();
const workStateDir = stateDirForProfile("work");
mockExists.mockImplementation((p) => {
const s = String(p);
return (
s === DEFAULT_STATE_DIR ||
s === join(DEFAULT_STATE_DIR, "openclaw.json") ||
s === join(workStateDir, "workspace")
);
});
mockReaddir.mockReturnValue([
makeDirent(".openclaw-work", true),
] as unknown as Dirent[]);
setUIActiveProfile("work");
@ -311,7 +365,7 @@ describe("workspace profiles", () => {
await importWorkspace();
mockExists.mockImplementation((p) => {
const s = String(p);
return s === "/custom/workspace" || s === DEFAULT_STATE_DIR;
});
mockReadFile.mockReturnValue(
JSON.stringify({
@ -328,13 +382,14 @@ describe("workspace profiles", () => {
it("does not duplicate profiles seen via directory and registry", async () => {
const { discoverProfiles, mockExists, mockReaddir, mockReadFile } =
await importWorkspace();
const stateDir = stateDirForProfile("shared");
const wsDir = join(stateDir, "workspace");
mockExists.mockImplementation((p) => {
const s = String(p);
return s === DEFAULT_STATE_DIR || s === wsDir;
});
mockReaddir.mockReturnValue([
makeDirent(".openclaw-shared", true),
] as unknown as Dirent[]);
mockReadFile.mockReturnValue(
JSON.stringify({
@ -368,15 +423,39 @@ describe("workspace profiles", () => {
mockReadFile.mockImplementation(() => {
throw new Error("ENOENT");
});
expect(resolveWebChatDir()).toBe(join(DEFAULT_STATE_DIR, "web-chat"));
});
it("returns profile-scoped web-chat directory for named profile", async () => {
const { resolveWebChatDir, setUIActiveProfile, mockReadFile } =
await importWorkspace();
mockReadFile.mockReturnValue(JSON.stringify({}) as never);
setUIActiveProfile("work");
expect(resolveWebChatDir()).toBe(join(stateDirForProfile("work"), "web-chat"));
});
it("uses OPENCLAW_PROFILE when no UI override is set", async () => {
process.env.OPENCLAW_PROFILE = "ironclaw";
const { resolveWebChatDir, mockReadFile } = await importWorkspace();
mockReadFile.mockImplementation(() => {
throw new Error("ENOENT");
});
expect(resolveWebChatDir()).toBe(join(stateDirForProfile("ironclaw"), "web-chat"));
});
it("migrates legacy web-chat-<profile> into profile state dir", async () => {
const { resolveWebChatDir, setUIActiveProfile, mockExists, mockReadFile, mockRename } =
await importWorkspace();
mockReadFile.mockReturnValue(JSON.stringify({}) as never);
setUIActiveProfile("work");
const legacyDir = join(DEFAULT_STATE_DIR, "web-chat-work");
const targetDir = join(stateDirForProfile("work"), "web-chat");
mockExists.mockImplementation((p) => String(p) === legacyDir);
resolveWebChatDir();
expect(mockRename).toHaveBeenCalledWith(legacyDir, targetDir);
});
it("returns web-chat when profile is 'default'", async () => {
@ -384,7 +463,7 @@ describe("workspace profiles", () => {
await importWorkspace();
mockReadFile.mockReturnValue(JSON.stringify({}) as never);
setUIActiveProfile("default");
expect(resolveWebChatDir()).toBe(join(DEFAULT_STATE_DIR, "web-chat"));
});
it("respects OPENCLAW_STATE_DIR override", async () => {
@ -400,16 +479,27 @@ describe("workspace profiles", () => {
// ─── resolveWorkspaceRoot (profile-aware) ─────────────────────────
describe("resolveWorkspaceRoot (profile-aware)", () => {
it("returns profile-scoped workspace for named profile", async () => {
const { resolveWorkspaceRoot, setUIActiveProfile, mockExists, mockReadFile } =
await importWorkspace();
mockReadFile.mockReturnValue(JSON.stringify({}) as never);
setUIActiveProfile("work");
const workDir = join(stateDirForProfile("work"), "workspace");
mockExists.mockImplementation((p) => String(p) === workDir);
expect(resolveWorkspaceRoot()).toBe(workDir);
});
it("uses OPENCLAW_PROFILE to resolve profile-scoped workspace", async () => {
process.env.OPENCLAW_PROFILE = "ironclaw";
const { resolveWorkspaceRoot, mockExists, mockReadFile } = await importWorkspace();
mockReadFile.mockImplementation(() => {
throw new Error("ENOENT");
});
const profileWorkspaceDir = join(stateDirForProfile("ironclaw"), "workspace");
mockExists.mockImplementation((p) => String(p) === profileWorkspaceDir);
expect(resolveWorkspaceRoot()).toBe(profileWorkspaceDir);
});
it("prefers registry path over directory convention", async () => {
const {
resolveWorkspaceRoot,
@ -426,7 +516,7 @@ describe("workspace profiles", () => {
mockExists.mockImplementation((p) => {
const s = String(p);
return (
s === "/custom/work" || s === join(stateDirForProfile("work"), "workspace")
);
});
expect(resolveWorkspaceRoot()).toBe("/custom/work");
@ -442,14 +532,43 @@ describe("workspace profiles", () => {
expect(resolveWorkspaceRoot()).toBe("/env/workspace");
});
it("returns null when named profile workspace is missing", async () => {
const { resolveWorkspaceRoot, setUIActiveProfile, mockExists, mockReadFile } =
await importWorkspace();
mockReadFile.mockReturnValue(JSON.stringify({}) as never);
setUIActiveProfile("missing");
mockExists.mockReturnValue(false);
expect(resolveWorkspaceRoot()).toBeNull();
});
it("migrates legacy workspace-<profile> and updates resolution", async () => {
const { resolveWorkspaceRoot, setUIActiveProfile, mockExists, mockReadFile, mockRename } =
await importWorkspace();
mockReadFile.mockReturnValue(
JSON.stringify({
workspaceRegistry: {
work: join(DEFAULT_STATE_DIR, "workspace-work"),
},
}) as never,
);
setUIActiveProfile("work");
const legacyDir = join(DEFAULT_STATE_DIR, "workspace-work");
const targetDir = join(stateDirForProfile("work"), "workspace");
let moved = false;
mockExists.mockImplementation((p) => {
const s = String(p);
if (!moved) {
return s === legacyDir;
}
return s === targetDir;
});
mockRename.mockImplementation(() => {
moved = true;
});
expect(resolveWorkspaceRoot()).toBe(targetDir);
expect(mockRename).toHaveBeenCalledWith(legacyDir, targetDir);
});
});
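The migration tests above move legacy `workspace-<profile>` and `web-chat-<profile>` directories out of the shared `~/.openclaw` tree into the profile's own state dir. The pure path mapping can be sketched as follows (the `renameSync` call and the registry update performed by the real code are omitted; `legacyMigrationMoves` is an illustrative name):

```typescript
import { join } from "node:path";

// Computes the rename source/target pairs for migrating one profile's
// legacy storage into its profile-scoped state directory.
function legacyMigrationMoves(
  sharedStateDir: string,
  profileStateDir: string,
  profile: string,
): Array<{ from: string; to: string }> {
  return [
    { from: join(sharedStateDir, `workspace-${profile}`), to: join(profileStateDir, "workspace") },
    { from: join(sharedStateDir, `web-chat-${profile}`), to: join(profileStateDir, "web-chat") },
  ];
}
```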


@ -1,4 +1,4 @@
import { existsSync, readFileSync, readdirSync, writeFileSync, mkdirSync, renameSync } from "node:fs";
import { execSync, exec } from "node:child_process";
import { promisify } from "node:util";
import { join, resolve, normalize, relative } from "node:path";
@ -15,6 +15,8 @@ const execAsync = promisify(exec);
// ---------------------------------------------------------------------------
const UI_STATE_FILENAME = ".ironclaw-ui-state.json";
const LEGACY_STATE_DIRNAME = ".openclaw";
const migratedProfiles = new Set<string>();
/** In-memory override; takes precedence over the persisted file. */
let _uiActiveProfile: string | null | undefined;
@ -25,9 +27,107 @@ type UIState = {
workspaceRegistry?: Record<string, string>;
};
function resolveOpenClawHomeDir(): string {
return process.env.OPENCLAW_HOME?.trim() || homedir();
}
function expandUserPath(input: string): string {
const trimmed = input.trim();
if (!trimmed) {
return trimmed;
}
if (trimmed.startsWith("~")) {
return join(homedir(), trimmed.slice(1));
}
return trimmed;
}
function normalizeProfileName(profile: string | null | undefined): string | null {
const normalized = profile?.trim() || null;
if (!normalized || normalized.toLowerCase() === "default") {
return null;
}
return normalized;
}
function resolveLegacySharedStateDir(): string {
const override = process.env.OPENCLAW_STATE_DIR?.trim();
if (override) {
return expandUserPath(override);
}
return join(resolveOpenClawHomeDir(), LEGACY_STATE_DIRNAME);
}
function resolveProfileStateDir(profile: string | null | undefined): string {
const override = process.env.OPENCLAW_STATE_DIR?.trim();
if (override) {
return expandUserPath(override);
}
const normalizedProfile = normalizeProfileName(profile);
if (!normalizedProfile) {
return join(resolveOpenClawHomeDir(), LEGACY_STATE_DIRNAME);
}
return join(resolveOpenClawHomeDir(), `.openclaw-${normalizedProfile}`);
}
function moveDirIfMissingTarget(fromDir: string, toDir: string): boolean {
if (!existsSync(fromDir) || existsSync(toDir)) {
return false;
}
const parent = join(toDir, "..");
if (!existsSync(parent)) {
mkdirSync(parent, { recursive: true });
}
try {
renameSync(fromDir, toDir);
return true;
} catch {
return false;
}
}
function migrateLegacyProfileStorage(profile: string | null): void {
const normalizedProfile = normalizeProfileName(profile);
if (!normalizedProfile || process.env.OPENCLAW_STATE_DIR?.trim()) {
return;
}
const key = normalizedProfile.toLowerCase();
if (migratedProfiles.has(key)) {
return;
}
migratedProfiles.add(key);
const legacyStateDir = resolveLegacySharedStateDir();
const targetStateDir = resolveProfileStateDir(normalizedProfile);
const movedWorkspace = moveDirIfMissingTarget(
join(legacyStateDir, `workspace-${normalizedProfile}`),
join(targetStateDir, "workspace"),
);
const movedWebChat = moveDirIfMissingTarget(
join(legacyStateDir, `web-chat-${normalizedProfile}`),
join(targetStateDir, "web-chat"),
);
if (!movedWorkspace && !movedWebChat) {
return;
}
const state = readUIState();
const existing = state.workspaceRegistry?.[normalizedProfile];
if (
existing &&
resolve(existing) === resolve(join(legacyStateDir, `workspace-${normalizedProfile}`))
) {
const nextRegistry = { ...state.workspaceRegistry };
nextRegistry[normalizedProfile] = join(targetStateDir, "workspace");
writeUIState({
...state,
workspaceRegistry: nextRegistry,
});
}
}
function uiStatePath(): string {
return join(resolveOpenClawHomeDir(), LEGACY_STATE_DIRNAME, UI_STATE_FILENAME);
}
function readUIState(): UIState {
@ -104,56 +204,67 @@ export type DiscoveredProfile = {
};
/**
* Discover all profiles by scanning profile-scoped state directories
* (e.g. ~/.openclaw-ironclaw) and merging persisted registry entries.
*/
export function discoverProfiles(): DiscoveredProfile[] {
const home = resolveOpenClawHomeDir();
const defaultStateDir = resolveProfileStateDir(null);
const activeProfile = getEffectiveProfile();
const activeNormalized = normalizeProfileName(activeProfile);
const profiles: DiscoveredProfile[] = [];
const seen = new Set<string>();
// Default profile
const defaultWs = join(defaultStateDir, "workspace");
profiles.push({
name: "default",
stateDir: defaultStateDir,
workspaceDir: existsSync(defaultWs) ? defaultWs : null,
isActive: !activeNormalized,
hasConfig: existsSync(join(defaultStateDir, "openclaw.json")),
});
seen.add("default");
// Scan for profile-scoped state dirs: ~/.openclaw-<profile>
try {
const entries = readdirSync(home, { withFileTypes: true });
for (const entry of entries) {
if (!entry.isDirectory()) {
continue;
}
const match = entry.name.match(/^\.openclaw-(.+)$/);
if (!match || !match[1]) {
continue;
}
const profileName = match[1];
if (seen.has(profileName)) {
continue;
}
migrateLegacyProfileStorage(profileName);
const stateDir = resolveProfileStateDir(profileName);
const wsDir = join(stateDir, "workspace");
profiles.push({
name: profileName,
stateDir,
workspaceDir: existsSync(wsDir) ? wsDir : null,
isActive: activeNormalized === profileName,
hasConfig: existsSync(join(stateDir, "openclaw.json")),
});
seen.add(profileName);
}
} catch {
// dir unreadable
}
// Merge workspaces registered via custom paths (outside profile state dirs).
const registry = getWorkspaceRegistry();
for (const [rawProfileName, wsPath] of Object.entries(registry)) {
const normalized = normalizeProfileName(rawProfileName);
const profileName = normalized ?? "default";
if (normalized) {
migrateLegacyProfileStorage(normalized);
}
if (seen.has(profileName)) {
const existing = profiles.find((p) => p.name === profileName);
if (existing && !existing.workspaceDir && existsSync(wsPath)) {
@ -162,12 +273,13 @@ export function discoverProfiles(): DiscoveredProfile[] {
continue;
}
seen.add(profileName);
const stateDir = resolveProfileStateDir(normalized);
profiles.push({
name: profileName,
stateDir,
workspaceDir: existsSync(wsPath) ? wsPath : null,
isActive: normalized ? activeNormalized === normalized : !activeNormalized,
hasConfig: existsSync(join(stateDir, "openclaw.json")),
});
}
@ -180,55 +292,44 @@ export function discoverProfiles(): DiscoveredProfile[] {
/**
* Resolve the OpenClaw state directory (base dir for config, sessions, agents, etc.).
* Mirrors CLI profile semantics:
* - default profile: ~/.openclaw
* - named profile: ~/.openclaw-<profile>
* - OPENCLAW_STATE_DIR override wins for all profiles
*/
export function resolveOpenClawStateDir(): string {
const profile = getEffectiveProfile();
migrateLegacyProfileStorage(profile);
return resolveProfileStateDir(profile);
}
/**
* Resolve the web-chat sessions directory, scoped to the active profile.
* Always stores sessions at <profileStateDir>/web-chat.
*/
export function resolveWebChatDir(): string {
const stateDir = resolveOpenClawStateDir();
return join(stateDir, "web-chat");
}
/**
* Resolve the workspace directory, checking in order:
* 1. OPENCLAW_WORKSPACE env var
* 2. Registered profile-specific custom path
* 3. <profileStateDir>/workspace
* 4. Legacy fallback: ~/.openclaw/workspace-<profile> (non-default only)
*/
export function resolveWorkspaceRoot(): string | null {
const profile = getEffectiveProfile();
migrateLegacyProfileStorage(profile);
const normalizedProfile = normalizeProfileName(profile);
const stateDir = resolveProfileStateDir(profile);
const registryPath = getRegisteredWorkspacePath(profile);
const candidates = [
process.env.OPENCLAW_WORKSPACE,
registryPath,
join(stateDir, "workspace"),
normalizedProfile ? join(resolveLegacySharedStateDir(), `workspace-${normalizedProfile}`) : null,
].filter(Boolean) as string[];
for (const dir of candidates) {

View File

@ -10,6 +10,8 @@ title: "Updating"
OpenClaw is moving fast (pre “1.0”). Treat updates like shipping infra: update → run checks → restart (or use `openclaw update`, which restarts) → verify.
If you run **IronClaw** as a frontend package, keep OpenClaw installed globally (`npm i -g openclaw`) and update OpenClaw separately; IronClaw delegates runtime commands to that global OpenClaw install.
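The delegation model above can be sketched roughly as follows; this is an illustrative assumption of how a frontend might probe for the global runtime before handing off, not the shipped IronClaw code:

```typescript
// Sketch (hypothetical helper): locate a globally installed runtime binary
// on PATH before delegating a command to it.
import { execFileSync } from "node:child_process";

function findGlobalBinary(name: string): string | null {
  // `where` on Windows, `which` elsewhere; both print matching paths.
  const probe = process.platform === "win32" ? "where" : "which";
  try {
    const out = execFileSync(probe, [name], { encoding: "utf8" }).trim();
    return out.split("\n")[0] || null;
  } catch {
    return null; // probe exits non-zero when the binary is not installed
  }
}

const runtime = findGlobalBinary("openclaw");
console.log(runtime ?? "openclaw not found; run: npm install -g openclaw");
```

If the probe returns `null`, a bootstrap layer would typically either install the runtime or print the install hint and exit.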
## Recommended: re-run the website installer (upgrade in place)
The **preferred** update path is to re-run the installer from the website. It

View File

@ -22,16 +22,16 @@ When the operator says “release”, immediately do this preflight (no extra qu
1. **Version & metadata**
- [ ] Bump `package.json` version (e.g., `2026.1.29`).
- [ ] Run `pnpm plugins:sync` to align extension package versions + changelogs.
- [ ] Update CLI/version strings: [`src/cli/program.ts`](https://github.com/openclaw/openclaw/blob/main/src/cli/program.ts) and the Baileys user agent in [`src/provider-web.ts`](https://github.com/openclaw/openclaw/blob/main/src/provider-web.ts).
- [ ] Confirm package metadata (name, description, repository, keywords, license) and `bin` map points to [`openclaw.mjs`](https://github.com/openclaw/openclaw/blob/main/openclaw.mjs) for `ironclaw`.
- [ ] Confirm release notes/documentation call out the global runtime prerequisite: `npm i -g openclaw`.
- [ ] If dependencies changed, run `pnpm install` so `pnpm-lock.yaml` is current.
2. **Build & artifacts**
- [ ] If A2UI inputs changed, run `pnpm canvas:a2ui:bundle` and commit any updated [`src/canvas-host/a2ui/a2ui.bundle.js`](https://github.com/openclaw/openclaw/blob/main/src/canvas-host/a2ui/a2ui.bundle.js).
- [ ] `pnpm run build` (regenerates `dist/`).
- [ ] Verify npm package `files` includes only IronClaw artifacts (`dist/entry*`, web standalone, skills/assets) and does not rely on bundled OpenClaw core runtime code.
- [ ] Confirm `dist/build-info.json` exists and includes the expected `commit` hash (CLI banner uses this for npm installs).
- [ ] Optional: `npm pack --pack-destination /tmp` after the build; inspect the tarball contents and keep it handy for the GitHub release (do **not** commit it).
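The artifact verification in the checklist above can be sketched like this; the required paths come from this change's release-check script, while the function name and sample input are illustrative:

```typescript
// Sketch: verify a pack file list contains the required IronClaw artifacts.
type PackFile = { path: string };

// A string group passes if present; an array group passes if any member is present.
const requiredGroups: Array<string | string[]> = [
  ["dist/entry.js", "dist/entry.mjs"],
  "dist/build-info.json",
];

function checkArtifacts(files: PackFile[]): string[] {
  const paths = new Set(files.map((f) => f.path));
  const missing: string[] = [];
  for (const group of requiredGroups) {
    const candidates = Array.isArray(group) ? group : [group];
    if (!candidates.some((p) => paths.has(p))) {
      missing.push(candidates.join(" | "));
    }
  }
  return missing;
}

console.log(checkArtifacts([{ path: "dist/entry.js" }, { path: "dist/build-info.json" }])); // → []
```

Feeding this the `files` array from `npm pack --dry-run --json` would flag any release where the entry bundle or build info was dropped from the tarball.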

View File

@ -31,22 +31,12 @@
"README.md",
"assets/",
"dist/",
"docs/",
"extensions/",
"skills/"
],
"type": "module",
"main": "dist/entry.js",
"exports": {
".": "./dist/entry.js",
"./cli-entry": "./openclaw.mjs"
},
"scripts": {
@ -54,8 +44,7 @@
"android:install": "cd apps/android && ./gradlew :app:installDebug",
"android:run": "cd apps/android && ./gradlew :app:installDebug && adb shell am start -n ai.openclaw.android/.MainActivity",
"android:test": "cd apps/android && ./gradlew :app:testDebugUnitTest",
"build": "tsdown && node --import tsx scripts/write-build-info.ts",
"canvas:a2ui:bundle": "bash scripts/bundle-a2ui.sh",
"check": "pnpm format:check && pnpm tsgo && pnpm lint",
"check:docs": "pnpm format:docs:check && pnpm lint:docs && pnpm docs:check-links",
@ -90,6 +79,8 @@
"ios:gen": "bash -lc './scripts/ios-configure-signing.sh && cd apps/ios && xcodegen generate'",
"ios:open": "bash -lc './scripts/ios-configure-signing.sh && cd apps/ios && xcodegen generate && open OpenClaw.xcodeproj'",
"ios:run": "bash -lc './scripts/ios-configure-signing.sh && cd apps/ios && xcodegen generate && xcodebuild -project OpenClaw.xcodeproj -scheme OpenClaw -destination \"${IOS_DEST:-platform=iOS Simulator,name=iPhone 17}\" -configuration Debug build && xcrun simctl boot \"${IOS_SIM:-iPhone 17}\" || true && xcrun simctl launch booted ai.openclaw.ios'",
"ironclaw": "node scripts/run-node.mjs",
"ironclaw:rpc": "node scripts/run-node.mjs agent --mode rpc --json",
"lint": "oxlint --type-aware",
"lint:all": "pnpm lint && pnpm lint:swift",
"lint:docs": "pnpm dlx markdownlint-cli2",
@ -100,10 +91,8 @@
"mac:package": "bash scripts/package-mac-app.sh",
"mac:restart": "bash scripts/restart-mac.sh",
"moltbot:rpc": "node scripts/run-node.mjs agent --mode rpc --json",
"plugins:sync": "node --import tsx scripts/sync-plugin-versions.ts",
"prepack": "pnpm build && pnpm web:build && pnpm web:prepack",
"prepare": "command -v git >/dev/null 2>&1 && git rev-parse --is-inside-work-tree >/dev/null 2>&1 && git config core.hooksPath git-hooks || exit 0",
"protocol:check": "pnpm protocol:gen && pnpm protocol:gen:swift && git diff --exit-code -- dist/protocol.schema.json apps/macos/Sources/OpenClawProtocol/GatewayModels.swift",
"protocol:gen": "node --import tsx scripts/protocol-gen.ts",
@ -135,7 +124,7 @@
"test:ui": "pnpm --dir ui test",
"test:voicecall:closedloop": "vitest run extensions/voice-call/src/manager.test.ts extensions/voice-call/src/media-stream.test.ts src/plugins/voice-call.plugin.test.ts --maxWorkers=1",
"test:watch": "vitest",
"test:workspace": "(cd apps/web && pnpm vitest run -- workspace-profiles workspace-chat-isolation subagent-runs route.test)",
"test:workspace:live": "LIVE=1 vitest run --config vitest.live.config.ts -- workspace-context-awareness && LIVE=1 pnpm --dir apps/web vitest run -- subagent-streaming.live",
"tsgo:test": "tsgo -p tsconfig.test.json",
"tui": "node scripts/run-node.mjs tui",
@ -241,7 +230,8 @@
},
"peerDependencies": {
"@napi-rs/canvas": "^0.1.89",
"node-llama-cpp": "3.15.1",
"openclaw": ">=2026.1.0"
},
"engines": {
"node": ">=22.12.0"

View File

@ -1,40 +0,0 @@
import fs from "node:fs/promises";
import path from "node:path";
import { fileURLToPath, pathToFileURL } from "node:url";
const repoRoot = path.resolve(path.dirname(fileURLToPath(import.meta.url)), "..");
export function getA2uiPaths(env = process.env) {
const srcDir = env.OPENCLAW_A2UI_SRC_DIR ?? path.join(repoRoot, "src", "canvas-host", "a2ui");
const outDir = env.OPENCLAW_A2UI_OUT_DIR ?? path.join(repoRoot, "dist", "canvas-host", "a2ui");
return { srcDir, outDir };
}
export async function copyA2uiAssets({ srcDir, outDir }: { srcDir: string; outDir: string }) {
const skipMissing = process.env.OPENCLAW_A2UI_SKIP_MISSING === "1";
try {
await fs.stat(path.join(srcDir, "index.html"));
await fs.stat(path.join(srcDir, "a2ui.bundle.js"));
} catch (err) {
const message = 'Missing A2UI bundle assets. Run "pnpm canvas:a2ui:bundle" and retry.';
if (skipMissing) {
console.warn(`${message} Skipping copy (OPENCLAW_A2UI_SKIP_MISSING=1).`);
return;
}
throw new Error(message, { cause: err });
}
await fs.mkdir(path.dirname(outDir), { recursive: true });
await fs.cp(srcDir, outDir, { recursive: true });
}
async function main() {
const { srcDir, outDir } = getA2uiPaths();
await copyA2uiAssets({ srcDir, outDir });
}
if (import.meta.url === pathToFileURL(process.argv[1] ?? "").href) {
main().catch((err) => {
console.error(String(err));
process.exit(1);
});
}

View File

@ -1,59 +0,0 @@
#!/usr/bin/env tsx
/**
* Copy export-html templates from src to dist
*/
import fs from "node:fs";
import path from "node:path";
import { fileURLToPath } from "node:url";
const __dirname = path.dirname(fileURLToPath(import.meta.url));
const projectRoot = path.resolve(__dirname, "..");
const srcDir = path.join(projectRoot, "src", "auto-reply", "reply", "export-html");
const distDir = path.join(projectRoot, "dist", "export-html");
function copyExportHtmlTemplates() {
if (!fs.existsSync(srcDir)) {
console.warn("[copy-export-html-templates] Source directory not found:", srcDir);
return;
}
// Create dist directory
if (!fs.existsSync(distDir)) {
fs.mkdirSync(distDir, { recursive: true });
}
// Copy main template files
const templateFiles = ["template.html", "template.css", "template.js"];
for (const file of templateFiles) {
const srcFile = path.join(srcDir, file);
const distFile = path.join(distDir, file);
if (fs.existsSync(srcFile)) {
fs.copyFileSync(srcFile, distFile);
console.log(`[copy-export-html-templates] Copied ${file}`);
}
}
// Copy vendor files
const srcVendor = path.join(srcDir, "vendor");
const distVendor = path.join(distDir, "vendor");
if (fs.existsSync(srcVendor)) {
if (!fs.existsSync(distVendor)) {
fs.mkdirSync(distVendor, { recursive: true });
}
const vendorFiles = fs.readdirSync(srcVendor);
for (const file of vendorFiles) {
const srcFile = path.join(srcVendor, file);
const distFile = path.join(distVendor, file);
if (fs.statSync(srcFile).isFile()) {
fs.copyFileSync(srcFile, distFile);
console.log(`[copy-export-html-templates] Copied vendor/${file}`);
}
}
}
console.log("[copy-export-html-templates] Done");
}
copyExportHtmlTemplates();

View File

@ -1,55 +0,0 @@
#!/usr/bin/env tsx
/**
* Copy HOOK.md files from src/hooks/bundled to dist/bundled
*/
import fs from "node:fs";
import path from "node:path";
import { fileURLToPath } from "node:url";
const __dirname = path.dirname(fileURLToPath(import.meta.url));
const projectRoot = path.resolve(__dirname, "..");
const srcBundled = path.join(projectRoot, "src", "hooks", "bundled");
const distBundled = path.join(projectRoot, "dist", "bundled");
function copyHookMetadata() {
if (!fs.existsSync(srcBundled)) {
console.warn("[copy-hook-metadata] Source directory not found:", srcBundled);
return;
}
if (!fs.existsSync(distBundled)) {
fs.mkdirSync(distBundled, { recursive: true });
}
const entries = fs.readdirSync(srcBundled, { withFileTypes: true });
for (const entry of entries) {
if (!entry.isDirectory()) {
continue;
}
const hookName = entry.name;
const srcHookDir = path.join(srcBundled, hookName);
const distHookDir = path.join(distBundled, hookName);
const srcHookMd = path.join(srcHookDir, "HOOK.md");
const distHookMd = path.join(distHookDir, "HOOK.md");
if (!fs.existsSync(srcHookMd)) {
console.warn(`[copy-hook-metadata] No HOOK.md found for ${hookName}`);
continue;
}
if (!fs.existsSync(distHookDir)) {
fs.mkdirSync(distHookDir, { recursive: true });
}
fs.copyFileSync(srcHookMd, distHookMd);
console.log(`[copy-hook-metadata] Copied ${hookName}/HOOK.md`);
}
console.log("[copy-hook-metadata] Done");
}
copyHookMetadata();

View File

@ -188,8 +188,8 @@ fi
# ── build ────────────────────────────────────────────────────────────────────
# The `prepack` script (triggered by `npm publish`) runs the IronClaw build chain:
# pnpm build && pnpm web:build && pnpm web:prepack
# Running `pnpm build` here is a redundant fail-fast: catch CLI build errors
# before committing to a publish attempt.
if [[ "$SKIP_BUILD" != true ]]; then

View File

@ -6,7 +6,7 @@ WORKDIR /app
ENV NODE_OPTIONS="--disable-warning=ExperimentalWarning"
COPY package.json pnpm-lock.yaml pnpm-workspace.yaml tsconfig.json tsdown.config.ts vitest.config.ts vitest.e2e.config.ts openclaw.mjs ./
COPY src ./src
COPY test ./test
COPY scripts ./scripts

View File

@ -30,7 +30,7 @@ const outPaths = [
const header = `// Generated by scripts/protocol-gen-swift.ts — do not edit by hand\n// swiftlint:disable file_length\nimport Foundation\n\npublic let GATEWAY_PROTOCOL_VERSION = ${PROTOCOL_VERSION}\n\npublic enum ErrorCode: String, Codable, Sendable {\n${Object.values(
ErrorCodes,
)
.map((c: string) => ` case ${camelCase(c)} = "${c}"`)
.join("\n")}\n}\n`;
const reserved = new Set([
@ -211,7 +211,7 @@ function emitGatewayFrame(): string {
}
async function generate() {
const definitions: Array<[string, JsonSchema]> = Object.entries(ProtocolSchemas);
for (const [name, schema] of definitions) {
schemaNameByObject.set(schema as object, name);

View File

@ -1,30 +1,13 @@
#!/usr/bin/env -S node --import tsx
import { execSync } from "node:child_process";
type PackFile = { path: string };
type PackResult = { files?: PackFile[] };
const requiredPathGroups = [["dist/entry.js", "dist/entry.mjs"], "dist/build-info.json"];
const forbiddenPrefixes = ["dist/OpenClaw.app/"];
function runPackDry(): PackResult[] {
const raw = execSync("npm pack --dry-run --json --ignore-scripts", {
encoding: "utf8",
@ -34,57 +17,7 @@ function runPackDry(): PackResult[] {
return JSON.parse(raw) as PackResult[];
}
function main() {
const results = runPackDry();
const files = results.flatMap((entry) => entry.files ?? []);
const paths = new Set(files.map((file) => file.path));

View File

@ -1,74 +0,0 @@
import fs from "node:fs";
import path from "node:path";
import { fileURLToPath } from "node:url";
import {
LEGACY_DAEMON_CLI_EXPORTS,
resolveLegacyDaemonCliAccessors,
} from "../src/cli/daemon-cli-compat.ts";
const rootDir = path.resolve(path.dirname(fileURLToPath(import.meta.url)), "..");
const distDir = path.join(rootDir, "dist");
const cliDir = path.join(distDir, "cli");
const findCandidates = () =>
fs.readdirSync(distDir).filter((entry) => {
const isDaemonCliBundle =
entry === "daemon-cli.js" || entry === "daemon-cli.mjs" || entry.startsWith("daemon-cli-");
if (!isDaemonCliBundle) {
return false;
}
// tsdown can emit either .js or .mjs depending on bundler settings/runtime.
return entry.endsWith(".js") || entry.endsWith(".mjs");
});
// In rare cases, build output can land slightly after this script starts (depending on FS timing).
// Retry briefly to avoid flaky builds.
let candidates = findCandidates();
for (let i = 0; i < 10 && candidates.length === 0; i++) {
await new Promise((resolve) => setTimeout(resolve, 50));
candidates = findCandidates();
}
if (candidates.length === 0) {
throw new Error("No daemon-cli bundle found in dist; cannot write legacy CLI shim.");
}
const orderedCandidates = candidates.toSorted();
const resolved = orderedCandidates
.map((entry) => {
const source = fs.readFileSync(path.join(distDir, entry), "utf8");
const accessors = resolveLegacyDaemonCliAccessors(source);
return { entry, accessors };
})
.find((entry) => Boolean(entry.accessors));
if (!resolved?.accessors) {
throw new Error(
`Could not resolve daemon-cli export aliases from dist bundles: ${orderedCandidates.join(", ")}`,
);
}
const target = resolved.entry;
const relPath = `../${target}`;
const { accessors } = resolved;
const missingExportError = (name: string) =>
`Legacy daemon CLI export "${name}" is unavailable in this build. Please upgrade OpenClaw.`;
const buildExportLine = (name: (typeof LEGACY_DAEMON_CLI_EXPORTS)[number]) => {
const accessor = accessors[name];
if (accessor) {
return `export const ${name} = daemonCli.${accessor};`;
}
if (name === "registerDaemonCli") {
return `export const ${name} = () => { throw new Error(${JSON.stringify(missingExportError(name))}); };`;
}
return `export const ${name} = async () => { throw new Error(${JSON.stringify(missingExportError(name))}); };`;
};
const contents =
"// Legacy shim for pre-tsdown update-cli imports.\n" +
`import * as daemonCli from "${relPath}";\n` +
LEGACY_DAEMON_CLI_EXPORTS.map(buildExportLine).join("\n") +
"\n";
fs.mkdirSync(cliDir, { recursive: true });
fs.writeFileSync(path.join(cliDir, "daemon-cli.js"), contents);

View File

@ -1,15 +0,0 @@
import fs from "node:fs";
import path from "node:path";
// `tsc` emits declarations under `dist/plugin-sdk/plugin-sdk/*` because the source lives
// at `src/plugin-sdk/*` and `rootDir` is `src/`.
//
// Our package export map points subpath `types` at `dist/plugin-sdk/<entry>.d.ts`, so we
// generate stable entry d.ts files that re-export the real declarations.
const entrypoints = ["index", "account-id"] as const;
for (const entry of entrypoints) {
const out = path.join(process.cwd(), `dist/plugin-sdk/${entry}.d.ts`);
fs.mkdirSync(path.dirname(out), { recursive: true });
// NodeNext: reference the runtime specifier with `.js`, TS will map it to `.d.ts`.
fs.writeFileSync(out, `export * from "./plugin-sdk/${entry}.js";\n`, "utf8");
}

View File

@ -1,217 +0,0 @@
import type { RequestPermissionRequest } from "@agentclientprotocol/sdk";
import { describe, expect, it, vi } from "vitest";
import { resolvePermissionRequest } from "./client.js";
import { extractAttachmentsFromPrompt, extractTextFromPrompt } from "./event-mapper.js";
function makePermissionRequest(
overrides: Partial<RequestPermissionRequest> = {},
): RequestPermissionRequest {
const { toolCall: toolCallOverride, options: optionsOverride, ...restOverrides } = overrides;
const base: RequestPermissionRequest = {
sessionId: "session-1",
toolCall: {
toolCallId: "tool-1",
title: "read: src/index.ts",
status: "pending",
},
options: [
{ kind: "allow_once", name: "Allow once", optionId: "allow" },
{ kind: "reject_once", name: "Reject once", optionId: "reject" },
],
};
return {
...base,
...restOverrides,
toolCall: toolCallOverride ? { ...base.toolCall, ...toolCallOverride } : base.toolCall,
options: optionsOverride ?? base.options,
};
}
describe("resolvePermissionRequest", () => {
it("auto-approves safe tools without prompting", async () => {
const prompt = vi.fn(async () => true);
const res = await resolvePermissionRequest(makePermissionRequest(), { prompt, log: () => {} });
expect(res).toEqual({ outcome: { outcome: "selected", optionId: "allow" } });
expect(prompt).not.toHaveBeenCalled();
});
it("prompts for dangerous tool names inferred from title", async () => {
const prompt = vi.fn(async () => true);
const res = await resolvePermissionRequest(
makePermissionRequest({
toolCall: { toolCallId: "tool-2", title: "exec: uname -a", status: "pending" },
}),
{ prompt, log: () => {} },
);
expect(prompt).toHaveBeenCalledTimes(1);
expect(prompt).toHaveBeenCalledWith("exec", "exec: uname -a");
expect(res).toEqual({ outcome: { outcome: "selected", optionId: "allow" } });
});
it("prompts for non-read/search tools (write)", async () => {
const prompt = vi.fn(async () => true);
const res = await resolvePermissionRequest(
makePermissionRequest({
toolCall: { toolCallId: "tool-w", title: "write: /tmp/pwn", status: "pending" },
}),
{ prompt, log: () => {} },
);
expect(prompt).toHaveBeenCalledTimes(1);
expect(prompt).toHaveBeenCalledWith("write", "write: /tmp/pwn");
expect(res).toEqual({ outcome: { outcome: "selected", optionId: "allow" } });
});
it("auto-approves search without prompting", async () => {
const prompt = vi.fn(async () => true);
const res = await resolvePermissionRequest(
makePermissionRequest({
toolCall: { toolCallId: "tool-s", title: "search: foo", status: "pending" },
}),
{ prompt, log: () => {} },
);
expect(res).toEqual({ outcome: { outcome: "selected", optionId: "allow" } });
expect(prompt).not.toHaveBeenCalled();
});
it("prompts for fetch even when tool name is known", async () => {
const prompt = vi.fn(async () => false);
const res = await resolvePermissionRequest(
makePermissionRequest({
toolCall: { toolCallId: "tool-f", title: "fetch: https://example.com", status: "pending" },
}),
{ prompt, log: () => {} },
);
expect(prompt).toHaveBeenCalledTimes(1);
expect(res).toEqual({ outcome: { outcome: "selected", optionId: "reject" } });
});
it("prompts when tool name contains read/search substrings but isn't a safe kind", async () => {
const prompt = vi.fn(async () => false);
const res = await resolvePermissionRequest(
makePermissionRequest({
toolCall: { toolCallId: "tool-t", title: "thread: reply", status: "pending" },
}),
{ prompt, log: () => {} },
);
expect(prompt).toHaveBeenCalledTimes(1);
expect(res).toEqual({ outcome: { outcome: "selected", optionId: "reject" } });
});
it("uses allow_always and reject_always when once options are absent", async () => {
const options: RequestPermissionRequest["options"] = [
{ kind: "allow_always", name: "Always allow", optionId: "allow-always" },
{ kind: "reject_always", name: "Always reject", optionId: "reject-always" },
];
const prompt = vi.fn(async () => false);
const res = await resolvePermissionRequest(
makePermissionRequest({
toolCall: { toolCallId: "tool-3", title: "gateway: reload", status: "pending" },
options,
}),
{ prompt, log: () => {} },
);
expect(res).toEqual({ outcome: { outcome: "selected", optionId: "reject-always" } });
});
it("prompts when tool identity is unknown and can still approve", async () => {
const prompt = vi.fn(async () => true);
const res = await resolvePermissionRequest(
makePermissionRequest({
toolCall: {
toolCallId: "tool-4",
title: "Modifying critical configuration file",
status: "pending",
},
}),
{ prompt, log: () => {} },
);
expect(prompt).toHaveBeenCalledWith(undefined, "Modifying critical configuration file");
expect(res).toEqual({ outcome: { outcome: "selected", optionId: "allow" } });
});
it("returns cancelled when no permission options are present", async () => {
const prompt = vi.fn(async () => true);
const res = await resolvePermissionRequest(makePermissionRequest({ options: [] }), {
prompt,
log: () => {},
});
expect(prompt).not.toHaveBeenCalled();
expect(res).toEqual({ outcome: { outcome: "cancelled" } });
});
});
describe("acp event mapper", () => {
it("extracts text and resource blocks into prompt text", () => {
const text = extractTextFromPrompt([
{ type: "text", text: "Hello" },
{ type: "resource", resource: { uri: "file:///tmp/spec.txt", text: "File contents" } },
{ type: "resource_link", uri: "https://example.com", name: "Spec", title: "Spec" },
{ type: "image", data: "abc", mimeType: "image/png" },
]);
expect(text).toBe("Hello\nFile contents\n[Resource link (Spec)] https://example.com");
});
it("escapes control and delimiter characters in resource link metadata", () => {
const text = extractTextFromPrompt([
{
type: "resource_link",
uri: "https://example.com/path?\nq=1\u2028tail",
name: "Spec",
title: "Spec)]\nIGNORE\n[system]",
},
]);
expect(text).toContain("[Resource link (Spec\\)\\]\\nIGNORE\\n\\[system\\])]");
expect(text).toContain("https://example.com/path?\\nq=1\\u2028tail");
expect(text).not.toContain("IGNORE\n");
});
it("keeps full resource link title content without truncation", () => {
const longTitle = "x".repeat(512);
const text = extractTextFromPrompt([
{ type: "resource_link", uri: "https://example.com", name: "Spec", title: longTitle },
]);
expect(text).toContain(`(${longTitle})`);
});
it("counts newline separators toward prompt byte limits", () => {
expect(() =>
extractTextFromPrompt(
[
{ type: "text", text: "a" },
{ type: "text", text: "b" },
],
2,
),
).toThrow(/maximum allowed size/i);
expect(
extractTextFromPrompt(
[
{ type: "text", text: "a" },
{ type: "text", text: "b" },
],
3,
),
).toBe("a\nb");
});
it("extracts image blocks into gateway attachments", () => {
const attachments = extractAttachmentsFromPrompt([
{ type: "image", data: "abc", mimeType: "image/png" },
{ type: "image", data: "", mimeType: "image/png" },
{ type: "text", text: "ignored" },
]);
expect(attachments).toEqual([
{
type: "image",
mimeType: "image/png",
content: "abc",
},
]);
});
});

View File

@ -1,428 +0,0 @@
import { spawn, type ChildProcess } from "node:child_process";
import fs from "node:fs";
import path from "node:path";
import * as readline from "node:readline";
import { Readable, Writable } from "node:stream";
import { fileURLToPath } from "node:url";
import {
ClientSideConnection,
PROTOCOL_VERSION,
ndJsonStream,
type RequestPermissionRequest,
type RequestPermissionResponse,
type SessionNotification,
} from "@agentclientprotocol/sdk";
import { ensureOpenClawCliOnPath } from "../infra/path-env.js";
import { DANGEROUS_ACP_TOOLS } from "../security/dangerous-tools.js";
const SAFE_AUTO_APPROVE_KINDS = new Set(["read", "search"]);
type PermissionOption = RequestPermissionRequest["options"][number];
type PermissionResolverDeps = {
prompt?: (toolName: string | undefined, toolTitle?: string) => Promise<boolean>;
log?: (line: string) => void;
};
function asRecord(value: unknown): Record<string, unknown> | undefined {
return value && typeof value === "object" && !Array.isArray(value)
? (value as Record<string, unknown>)
: undefined;
}
function readFirstStringValue(
source: Record<string, unknown> | undefined,
keys: string[],
): string | undefined {
if (!source) {
return undefined;
}
for (const key of keys) {
const value = source[key];
if (typeof value === "string" && value.trim()) {
return value.trim();
}
}
return undefined;
}
function normalizeToolName(value: string): string | undefined {
const normalized = value.trim().toLowerCase();
if (!normalized) {
return undefined;
}
return normalized;
}
function parseToolNameFromTitle(title: string | undefined | null): string | undefined {
if (!title) {
return undefined;
}
const head = title.split(":", 1)[0]?.trim();
if (!head || !/^[a-zA-Z0-9._-]+$/.test(head)) {
return undefined;
}
return normalizeToolName(head);
}
function resolveToolKindForPermission(
params: RequestPermissionRequest,
toolName: string | undefined,
): string | undefined {
const toolCall = params.toolCall as unknown as { kind?: unknown; title?: unknown } | undefined;
const kindRaw = typeof toolCall?.kind === "string" ? toolCall.kind.trim().toLowerCase() : "";
if (kindRaw) {
return kindRaw;
}
const name =
toolName ??
parseToolNameFromTitle(typeof toolCall?.title === "string" ? toolCall.title : undefined);
if (!name) {
return undefined;
}
const normalized = name.toLowerCase();
const hasToken = (token: string) => {
// Tool names tend to be snake_case. Avoid substring heuristics (ex: "thread" contains "read").
const re = new RegExp(`(?:^|[._-])${token}(?:$|[._-])`);
return re.test(normalized);
};
// Prefer a conservative classifier: only classify safe kinds when confident.
if (normalized === "read" || hasToken("read")) {
return "read";
}
if (normalized === "search" || hasToken("search") || hasToken("find")) {
return "search";
}
if (normalized.includes("fetch") || normalized.includes("http")) {
return "fetch";
}
if (normalized.includes("write") || normalized.includes("edit") || normalized.includes("patch")) {
return "edit";
}
if (normalized.includes("delete") || normalized.includes("remove")) {
return "delete";
}
if (normalized.includes("move") || normalized.includes("rename")) {
return "move";
}
if (normalized.includes("exec") || normalized.includes("run") || normalized.includes("bash")) {
return "execute";
}
return "other";
}
function resolveToolNameForPermission(params: RequestPermissionRequest): string | undefined {
const toolCall = params.toolCall;
const toolMeta = asRecord(toolCall?._meta);
const rawInput = asRecord(toolCall?.rawInput);
const fromMeta = readFirstStringValue(toolMeta, ["toolName", "tool_name", "name"]);
const fromRawInput = readFirstStringValue(rawInput, ["tool", "toolName", "tool_name", "name"]);
const fromTitle = parseToolNameFromTitle(toolCall?.title);
return normalizeToolName(fromMeta ?? fromRawInput ?? fromTitle ?? "");
}
function pickOption(
options: PermissionOption[],
kinds: PermissionOption["kind"][],
): PermissionOption | undefined {
for (const kind of kinds) {
const match = options.find((option) => option.kind === kind);
if (match) {
return match;
}
}
return undefined;
}
function selectedPermission(optionId: string): RequestPermissionResponse {
return { outcome: { outcome: "selected", optionId } };
}
function cancelledPermission(): RequestPermissionResponse {
return { outcome: { outcome: "cancelled" } };
}
function promptUserPermission(toolName: string | undefined, toolTitle?: string): Promise<boolean> {
if (!process.stdin.isTTY || !process.stderr.isTTY) {
console.error(`[permission denied] ${toolName ?? "unknown"}: non-interactive terminal`);
return Promise.resolve(false);
}
return new Promise((resolve) => {
let settled = false;
const rl = readline.createInterface({
input: process.stdin,
output: process.stderr,
});
const finish = (approved: boolean) => {
if (settled) {
return;
}
settled = true;
clearTimeout(timeout);
rl.close();
resolve(approved);
};
const timeout = setTimeout(() => {
console.error(`\n[permission timeout] denied: ${toolName ?? "unknown"}`);
finish(false);
}, 30_000);
const label = toolTitle
? toolName
? `${toolTitle} (${toolName})`
: toolTitle
: (toolName ?? "unknown tool");
rl.question(`\n[permission] Allow "${label}"? (y/N) `, (answer) => {
const approved = answer.trim().toLowerCase() === "y";
console.error(`[permission ${approved ? "approved" : "denied"}] ${toolName ?? "unknown"}`);
finish(approved);
});
});
}
export async function resolvePermissionRequest(
params: RequestPermissionRequest,
deps: PermissionResolverDeps = {},
): Promise<RequestPermissionResponse> {
const log = deps.log ?? ((line: string) => console.error(line));
const prompt = deps.prompt ?? promptUserPermission;
const options = params.options ?? [];
const toolTitle = params.toolCall?.title ?? "tool";
const toolName = resolveToolNameForPermission(params);
const toolKind = resolveToolKindForPermission(params, toolName);
if (options.length === 0) {
log(`[permission cancelled] ${toolName ?? "unknown"}: no options available`);
return cancelledPermission();
}
const allowOption = pickOption(options, ["allow_once", "allow_always"]);
const rejectOption = pickOption(options, ["reject_once", "reject_always"]);
const isSafeKind = Boolean(toolKind && SAFE_AUTO_APPROVE_KINDS.has(toolKind));
const promptRequired = !toolName || !isSafeKind || DANGEROUS_ACP_TOOLS.has(toolName);
if (!promptRequired) {
const option = allowOption ?? options[0];
if (!option) {
log(`[permission cancelled] ${toolName}: no selectable options`);
return cancelledPermission();
}
log(`[permission auto-approved] ${toolName} (${toolKind ?? "unknown"})`);
return selectedPermission(option.optionId);
}
log(
`\n[permission requested] ${toolTitle}${toolName ? ` (${toolName})` : ""}${toolKind ? ` [${toolKind}]` : ""}`,
);
const approved = await prompt(toolName, toolTitle);
if (approved && allowOption) {
return selectedPermission(allowOption.optionId);
}
if (!approved && rejectOption) {
return selectedPermission(rejectOption.optionId);
}
log(
`[permission cancelled] ${toolName ?? "unknown"}: missing ${approved ? "allow" : "reject"} option`,
);
return cancelledPermission();
}
export type AcpClientOptions = {
cwd?: string;
serverCommand?: string;
serverArgs?: string[];
serverVerbose?: boolean;
verbose?: boolean;
};
export type AcpClientHandle = {
client: ClientSideConnection;
agent: ChildProcess;
sessionId: string;
};
function toArgs(value: string[] | string | undefined): string[] {
if (!value) {
return [];
}
return Array.isArray(value) ? value : [value];
}
function buildServerArgs(opts: AcpClientOptions): string[] {
const args = ["acp", ...toArgs(opts.serverArgs)];
if (opts.serverVerbose && !args.includes("--verbose") && !args.includes("-v")) {
args.push("--verbose");
}
return args;
}
function resolveSelfEntryPath(): string | null {
// Prefer a path relative to the built module location (dist/acp/client.js -> dist/entry.js).
try {
const here = fileURLToPath(import.meta.url);
const candidate = path.resolve(path.dirname(here), "..", "entry.js");
if (fs.existsSync(candidate)) {
return candidate;
}
} catch {
// ignore
}
const argv1 = process.argv[1]?.trim();
if (argv1) {
return path.isAbsolute(argv1) ? argv1 : path.resolve(process.cwd(), argv1);
}
return null;
}
function printSessionUpdate(notification: SessionNotification): void {
const update = notification.update;
if (!("sessionUpdate" in update)) {
return;
}
switch (update.sessionUpdate) {
case "agent_message_chunk": {
if (update.content?.type === "text") {
process.stdout.write(update.content.text);
}
return;
}
case "tool_call": {
console.log(`\n[tool] ${update.title} (${update.status})`);
return;
}
case "tool_call_update": {
if (update.status) {
console.log(`[tool update] ${update.toolCallId}: ${update.status}`);
}
return;
}
case "available_commands_update": {
const names = update.availableCommands?.map((cmd) => `/${cmd.name}`).join(" ");
if (names) {
console.log(`\n[commands] ${names}`);
}
return;
}
default:
return;
}
}
export async function createAcpClient(opts: AcpClientOptions = {}): Promise<AcpClientHandle> {
const cwd = opts.cwd ?? process.cwd();
const verbose = Boolean(opts.verbose);
const log = verbose ? (msg: string) => console.error(`[acp-client] ${msg}`) : () => {};
ensureOpenClawCliOnPath();
const serverArgs = buildServerArgs(opts);
const entryPath = resolveSelfEntryPath();
const serverCommand = opts.serverCommand ?? (entryPath ? process.execPath : "openclaw");
const effectiveArgs = opts.serverCommand || !entryPath ? serverArgs : [entryPath, ...serverArgs];
log(`spawning: ${serverCommand} ${effectiveArgs.join(" ")}`);
const agent = spawn(serverCommand, effectiveArgs, {
stdio: ["pipe", "pipe", "inherit"],
cwd,
});
if (!agent.stdin || !agent.stdout) {
throw new Error("Failed to create ACP stdio pipes");
}
const input = Writable.toWeb(agent.stdin);
const output = Readable.toWeb(agent.stdout) as unknown as ReadableStream<Uint8Array>;
const stream = ndJsonStream(input, output);
const client = new ClientSideConnection(
() => ({
sessionUpdate: async (params: SessionNotification) => {
printSessionUpdate(params);
},
requestPermission: async (params: RequestPermissionRequest) => {
return resolvePermissionRequest(params);
},
}),
stream,
);
log("initializing");
await client.initialize({
protocolVersion: PROTOCOL_VERSION,
clientCapabilities: {
fs: { readTextFile: true, writeTextFile: true },
terminal: true,
},
clientInfo: { name: "openclaw-acp-client", version: "1.0.0" },
});
log("creating session");
const session = await client.newSession({
cwd,
mcpServers: [],
});
return {
client,
agent,
sessionId: session.sessionId,
};
}
export async function runAcpClientInteractive(opts: AcpClientOptions = {}): Promise<void> {
const { client, agent, sessionId } = await createAcpClient(opts);
const rl = readline.createInterface({
input: process.stdin,
output: process.stdout,
});
console.log("OpenClaw ACP client");
console.log(`Session: ${sessionId}`);
console.log('Type a prompt, or "exit" to quit.\n');
const prompt = () => {
rl.question("> ", async (input) => {
const text = input.trim();
if (!text) {
prompt();
return;
}
if (text === "exit" || text === "quit") {
agent.kill();
rl.close();
process.exit(0);
}
try {
const response = await client.prompt({
sessionId,
prompt: [{ type: "text", text }],
});
console.log(`\n[${response.stopReason}]\n`);
} catch (err) {
console.error(`\n[error] ${String(err)}\n`);
}
prompt();
});
};
prompt();
agent.on("exit", (code) => {
console.log(`\nAgent exited with code ${code ?? 0}`);
rl.close();
process.exit(code ?? 0);
});
}
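The removed classifier above avoids substring heuristics for snake_case tool names, using token boundaries instead (the inline comment's example: "thread" contains "read"). A minimal sketch of that boundary-matching idea, with illustrative names:

```typescript
// Match a token only at the start/end of the name or between [._-] separators,
// so "fs_read" classifies as read but "thread" does not.
function hasToken(name: string, token: string): boolean {
  const re = new RegExp(`(?:^|[._-])${token}(?:$|[._-])`);
  return re.test(name.toLowerCase());
}

function classify(name: string): string {
  const n = name.toLowerCase();
  if (n === "read" || hasToken(n, "read")) return "read";
  if (n === "search" || hasToken(n, "search") || hasToken(n, "find")) return "search";
  return "other";
}
```

Being conservative here matters because "read" and "search" are auto-approved kinds; a false positive would skip the permission prompt.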


@@ -1,40 +0,0 @@
import type { AvailableCommand } from "@agentclientprotocol/sdk";
export function getAvailableCommands(): AvailableCommand[] {
return [
{ name: "help", description: "Show help and common commands." },
{ name: "commands", description: "List available commands." },
{ name: "status", description: "Show current status." },
{
name: "context",
description: "Explain context usage (list|detail|json).",
input: { hint: "list | detail | json" },
},
{ name: "whoami", description: "Show sender id (alias: /id)." },
{ name: "id", description: "Alias for /whoami." },
{ name: "subagents", description: "List or manage sub-agents." },
{ name: "config", description: "Read or write config (owner-only)." },
{ name: "debug", description: "Set runtime-only overrides (owner-only)." },
{ name: "usage", description: "Toggle usage footer (off|tokens|full)." },
{ name: "stop", description: "Stop the current run." },
{ name: "restart", description: "Restart the gateway (if enabled)." },
{ name: "dock-telegram", description: "Route replies to Telegram." },
{ name: "dock-discord", description: "Route replies to Discord." },
{ name: "dock-slack", description: "Route replies to Slack." },
{ name: "activation", description: "Set group activation (mention|always)." },
{ name: "send", description: "Set send mode (on|off|inherit)." },
{ name: "reset", description: "Reset the session (/new)." },
{ name: "new", description: "Reset the session (/reset)." },
{
name: "think",
description: "Set thinking level (off|minimal|low|medium|high|xhigh).",
},
{ name: "verbose", description: "Set verbose mode (on|full|off)." },
{ name: "reasoning", description: "Toggle reasoning output (on|off|stream)." },
{ name: "elevated", description: "Toggle elevated mode (on|off)." },
{ name: "model", description: "Select a model (list|status|<name>)." },
{ name: "queue", description: "Adjust queue mode and options." },
{ name: "bash", description: "Run a host command (if enabled)." },
{ name: "compact", description: "Compact the session history." },
];
}


@@ -1,133 +0,0 @@
import type { ContentBlock, ImageContent, ToolKind } from "@agentclientprotocol/sdk";
export type GatewayAttachment = {
type: string;
mimeType: string;
content: string;
};
function escapeInlineControlChars(value: string): string {
const withoutNull = value.replaceAll("\0", "\\0");
return withoutNull.replace(/[\r\n\t\v\f\u2028\u2029]/g, (char) => {
switch (char) {
case "\r":
return "\\r";
case "\n":
return "\\n";
case "\t":
return "\\t";
case "\v":
return "\\v";
case "\f":
return "\\f";
case "\u2028":
return "\\u2028";
case "\u2029":
return "\\u2029";
default:
return char;
}
});
}
function escapeResourceTitle(value: string): string {
// Keep title content, but escape characters that can break the resource-link annotation shape.
return escapeInlineControlChars(value).replace(/[()[\]]/g, (char) => `\\${char}`);
}
export function extractTextFromPrompt(prompt: ContentBlock[], maxBytes?: number): string {
const parts: string[] = [];
// Track accumulated byte count per block to catch oversized prompts before full concatenation
let totalBytes = 0;
for (const block of prompt) {
let blockText: string | undefined;
if (block.type === "text") {
blockText = block.text;
} else if (block.type === "resource") {
const resource = block.resource as { text?: string } | undefined;
if (resource?.text) {
blockText = resource.text;
}
} else if (block.type === "resource_link") {
const title = block.title ? ` (${escapeResourceTitle(block.title)})` : "";
const uri = block.uri ? escapeInlineControlChars(block.uri) : "";
blockText = uri ? `[Resource link${title}] ${uri}` : `[Resource link${title}]`;
}
if (blockText !== undefined) {
// Guard: reject before allocating the full concatenated string
if (maxBytes !== undefined) {
const separatorBytes = parts.length > 0 ? 1 : 0; // "\n" added by join() between blocks
totalBytes += separatorBytes + Buffer.byteLength(blockText, "utf-8");
if (totalBytes > maxBytes) {
throw new Error(`Prompt exceeds maximum allowed size of ${maxBytes} bytes`);
}
}
parts.push(blockText);
}
}
return parts.join("\n");
}
export function extractAttachmentsFromPrompt(prompt: ContentBlock[]): GatewayAttachment[] {
const attachments: GatewayAttachment[] = [];
for (const block of prompt) {
if (block.type !== "image") {
continue;
}
const image = block as ImageContent;
if (!image.data || !image.mimeType) {
continue;
}
attachments.push({
type: "image",
mimeType: image.mimeType,
content: image.data,
});
}
return attachments;
}
export function formatToolTitle(
name: string | undefined,
args: Record<string, unknown> | undefined,
): string {
const base = name ?? "tool";
if (!args || Object.keys(args).length === 0) {
return base;
}
const parts = Object.entries(args).map(([key, value]) => {
const raw = typeof value === "string" ? value : JSON.stringify(value);
const safe = raw.length > 100 ? `${raw.slice(0, 100)}...` : raw;
return `${key}: ${safe}`;
});
return `${base}: ${parts.join(", ")}`;
}
export function inferToolKind(name?: string): ToolKind {
if (!name) {
return "other";
}
const normalized = name.toLowerCase();
if (normalized.includes("read")) {
return "read";
}
if (normalized.includes("write") || normalized.includes("edit")) {
return "edit";
}
if (normalized.includes("delete") || normalized.includes("remove")) {
return "delete";
}
if (normalized.includes("move") || normalized.includes("rename")) {
return "move";
}
if (normalized.includes("search") || normalized.includes("find")) {
return "search";
}
if (normalized.includes("exec") || normalized.includes("run") || normalized.includes("bash")) {
return "execute";
}
if (normalized.includes("fetch") || normalized.includes("http")) {
return "fetch";
}
return "other";
}
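The `escapeResourceTitle` helper above exists so a hostile title cannot break out of the `[Resource link (title)] uri` annotation or inject fake lines via newlines. A minimal sketch of the two-step escaping (control characters first, then the delimiter characters), using illustrative names:

```typescript
// Escape newlines, then the ()[] delimiters that shape the annotation,
// so "Spec)]\nIGNORE" stays inside the parenthesized title.
function escapeTitle(value: string): string {
  return value
    .replace(/\n/g, "\\n")
    .replace(/[()[\]]/g, (ch) => `\\${ch}`);
}

const label = `[Resource link (${escapeTitle("Spec)]\nIGNORE")})] https://example.com`;
```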


@@ -1,4 +0,0 @@
export { serveAcpGateway } from "./server.js";
export { createInMemorySessionStore } from "./session.js";
export type { AcpSessionStore } from "./session.js";
export type { AcpServerOptions } from "./types.js";


@@ -1,47 +0,0 @@
export function readString(
meta: Record<string, unknown> | null | undefined,
keys: string[],
): string | undefined {
if (!meta) {
return undefined;
}
for (const key of keys) {
const value = meta[key];
if (typeof value === "string" && value.trim()) {
return value.trim();
}
}
return undefined;
}
export function readBool(
meta: Record<string, unknown> | null | undefined,
keys: string[],
): boolean | undefined {
if (!meta) {
return undefined;
}
for (const key of keys) {
const value = meta[key];
if (typeof value === "boolean") {
return value;
}
}
return undefined;
}
export function readNumber(
meta: Record<string, unknown> | null | undefined,
keys: string[],
): number | undefined {
if (!meta) {
return undefined;
}
for (const key of keys) {
const value = meta[key];
if (typeof value === "number" && Number.isFinite(value)) {
return value;
}
}
return undefined;
}


@@ -1,22 +0,0 @@
import fs from "node:fs";
import { resolveUserPath } from "../utils.js";
export function readSecretFromFile(filePath: string, label: string): string {
const resolvedPath = resolveUserPath(filePath.trim());
if (!resolvedPath) {
throw new Error(`${label} file path is empty.`);
}
let raw = "";
try {
raw = fs.readFileSync(resolvedPath, "utf8");
} catch (err) {
throw new Error(`Failed to read ${label} file at ${resolvedPath}: ${String(err)}`, {
cause: err,
});
}
const secret = raw.trim();
if (!secret) {
throw new Error(`${label} file at ${resolvedPath} is empty.`);
}
return secret;
}


@@ -1,212 +0,0 @@
#!/usr/bin/env node
import { Readable, Writable } from "node:stream";
import { fileURLToPath } from "node:url";
import { AgentSideConnection, ndJsonStream } from "@agentclientprotocol/sdk";
import { loadConfig } from "../config/config.js";
import { resolveGatewayAuth } from "../gateway/auth.js";
import { buildGatewayConnectionDetails } from "../gateway/call.js";
import { GatewayClient } from "../gateway/client.js";
import { isMainModule } from "../infra/is-main.js";
import { GATEWAY_CLIENT_MODES, GATEWAY_CLIENT_NAMES } from "../utils/message-channel.js";
import { readSecretFromFile } from "./secret-file.js";
import { AcpGatewayAgent } from "./translator.js";
import type { AcpServerOptions } from "./types.js";
export function serveAcpGateway(opts: AcpServerOptions = {}): Promise<void> {
const cfg = loadConfig();
const connection = buildGatewayConnectionDetails({
config: cfg,
url: opts.gatewayUrl,
});
const isRemoteMode = cfg.gateway?.mode === "remote";
const remote = isRemoteMode ? cfg.gateway?.remote : undefined;
const auth = resolveGatewayAuth({ authConfig: cfg.gateway?.auth, env: process.env });
const token =
opts.gatewayToken ??
(isRemoteMode ? remote?.token?.trim() : undefined) ??
process.env.OPENCLAW_GATEWAY_TOKEN ??
auth.token;
const password =
opts.gatewayPassword ??
(isRemoteMode ? remote?.password?.trim() : undefined) ??
process.env.OPENCLAW_GATEWAY_PASSWORD ??
auth.password;
let agent: AcpGatewayAgent | null = null;
let onClosed!: () => void;
const closed = new Promise<void>((resolve) => {
onClosed = resolve;
});
let stopped = false;
const gateway = new GatewayClient({
url: connection.url,
token: token || undefined,
password: password || undefined,
clientName: GATEWAY_CLIENT_NAMES.CLI,
clientDisplayName: "ACP",
clientVersion: "acp",
mode: GATEWAY_CLIENT_MODES.CLI,
onEvent: (evt) => {
void agent?.handleGatewayEvent(evt);
},
onHelloOk: () => {
agent?.handleGatewayReconnect();
},
onClose: (code, reason) => {
agent?.handleGatewayDisconnect(`${code}: ${reason}`);
// Resolve only on intentional shutdown (gateway.stop() sets closed
// which skips scheduleReconnect, then fires onClose). Transient
// disconnects are followed by automatic reconnect attempts.
if (stopped) {
onClosed();
}
},
});
const shutdown = () => {
if (stopped) {
return;
}
stopped = true;
gateway.stop();
// If no WebSocket is active (e.g. between reconnect attempts),
// gateway.stop() won't trigger onClose, so resolve directly.
onClosed();
};
process.once("SIGINT", shutdown);
process.once("SIGTERM", shutdown);
const input = Writable.toWeb(process.stdout);
const output = Readable.toWeb(process.stdin) as unknown as ReadableStream<Uint8Array>;
const stream = ndJsonStream(input, output);
new AgentSideConnection((conn: AgentSideConnection) => {
agent = new AcpGatewayAgent(conn, gateway, opts);
agent.start();
return agent;
}, stream);
gateway.start();
return closed;
}
function parseArgs(args: string[]): AcpServerOptions {
const opts: AcpServerOptions = {};
let tokenFile: string | undefined;
let passwordFile: string | undefined;
for (let i = 0; i < args.length; i += 1) {
const arg = args[i];
if (arg === "--url" || arg === "--gateway-url") {
opts.gatewayUrl = args[i + 1];
i += 1;
continue;
}
if (arg === "--token" || arg === "--gateway-token") {
opts.gatewayToken = args[i + 1];
i += 1;
continue;
}
if (arg === "--token-file" || arg === "--gateway-token-file") {
tokenFile = args[i + 1];
i += 1;
continue;
}
if (arg === "--password" || arg === "--gateway-password") {
opts.gatewayPassword = args[i + 1];
i += 1;
continue;
}
if (arg === "--password-file" || arg === "--gateway-password-file") {
passwordFile = args[i + 1];
i += 1;
continue;
}
if (arg === "--session") {
opts.defaultSessionKey = args[i + 1];
i += 1;
continue;
}
if (arg === "--session-label") {
opts.defaultSessionLabel = args[i + 1];
i += 1;
continue;
}
if (arg === "--require-existing") {
opts.requireExistingSession = true;
continue;
}
if (arg === "--reset-session") {
opts.resetSession = true;
continue;
}
if (arg === "--no-prefix-cwd") {
opts.prefixCwd = false;
continue;
}
if (arg === "--verbose" || arg === "-v") {
opts.verbose = true;
continue;
}
if (arg === "--help" || arg === "-h") {
printHelp();
process.exit(0);
}
}
if (opts.gatewayToken?.trim() && tokenFile?.trim()) {
throw new Error("Use either --token or --token-file.");
}
if (opts.gatewayPassword?.trim() && passwordFile?.trim()) {
throw new Error("Use either --password or --password-file.");
}
if (tokenFile?.trim()) {
opts.gatewayToken = readSecretFromFile(tokenFile, "Gateway token");
}
if (passwordFile?.trim()) {
opts.gatewayPassword = readSecretFromFile(passwordFile, "Gateway password");
}
return opts;
}
function printHelp(): void {
console.log(`Usage: openclaw acp [options]
Gateway-backed ACP server for IDE integration.
Options:
--url <url> Gateway WebSocket URL
--token <token> Gateway auth token
--token-file <path> Read gateway auth token from file
--password <password> Gateway auth password
--password-file <path> Read gateway auth password from file
--session <key> Default session key (e.g. "agent:main:main")
--session-label <label> Default session label to resolve
--require-existing Fail if the session key/label does not exist
--reset-session Reset the session key before first use
--no-prefix-cwd Do not prefix prompts with the working directory
--verbose, -v Verbose logging to stderr
--help, -h Show this help message
`);
}
if (isMainModule({ currentFile: fileURLToPath(import.meta.url) })) {
const argv = process.argv.slice(2);
if (argv.includes("--token") || argv.includes("--gateway-token")) {
console.error(
"Warning: --token can be exposed via process listings. Prefer --token-file or OPENCLAW_GATEWAY_TOKEN.",
);
}
if (argv.includes("--password") || argv.includes("--gateway-password")) {
console.error(
"Warning: --password can be exposed via process listings. Prefer --password-file or OPENCLAW_GATEWAY_PASSWORD.",
);
}
const opts = parseArgs(argv);
serveAcpGateway(opts).catch((err) => {
console.error(String(err));
process.exit(1);
});
}


@@ -1,56 +0,0 @@
import { describe, expect, it, vi } from "vitest";
import type { GatewayClient } from "../gateway/client.js";
import { parseSessionMeta, resolveSessionKey } from "./session-mapper.js";
function createGateway(resolveLabelKey = "agent:main:label"): {
gateway: GatewayClient;
request: ReturnType<typeof vi.fn>;
} {
const request = vi.fn(async (method: string, params: Record<string, unknown>) => {
if (method === "sessions.resolve" && "label" in params) {
return { ok: true, key: resolveLabelKey };
}
if (method === "sessions.resolve" && "key" in params) {
return { ok: true, key: params.key as string };
}
return { ok: true };
});
return {
gateway: { request } as unknown as GatewayClient,
request,
};
}
describe("acp session mapper", () => {
it("prefers explicit sessionLabel over sessionKey", async () => {
const { gateway, request } = createGateway();
const meta = parseSessionMeta({ sessionLabel: "support", sessionKey: "agent:main:main" });
const key = await resolveSessionKey({
meta,
fallbackKey: "acp:fallback",
gateway,
opts: {},
});
expect(key).toBe("agent:main:label");
expect(request).toHaveBeenCalledTimes(1);
expect(request).toHaveBeenCalledWith("sessions.resolve", { label: "support" });
});
it("lets meta sessionKey override default label", async () => {
const { gateway, request } = createGateway();
const meta = parseSessionMeta({ sessionKey: "agent:main:override" });
const key = await resolveSessionKey({
meta,
fallbackKey: "acp:fallback",
gateway,
opts: { defaultSessionLabel: "default-label" },
});
expect(key).toBe("agent:main:override");
expect(request).not.toHaveBeenCalled();
});
});


@@ -1,98 +0,0 @@
import type { GatewayClient } from "../gateway/client.js";
import { readBool, readString } from "./meta.js";
import type { AcpServerOptions } from "./types.js";
export type AcpSessionMeta = {
sessionKey?: string;
sessionLabel?: string;
resetSession?: boolean;
requireExisting?: boolean;
prefixCwd?: boolean;
};
export function parseSessionMeta(meta: unknown): AcpSessionMeta {
if (!meta || typeof meta !== "object") {
return {};
}
const record = meta as Record<string, unknown>;
return {
sessionKey: readString(record, ["sessionKey", "session", "key"]),
sessionLabel: readString(record, ["sessionLabel", "label"]),
resetSession: readBool(record, ["resetSession", "reset"]),
requireExisting: readBool(record, ["requireExistingSession", "requireExisting"]),
prefixCwd: readBool(record, ["prefixCwd"]),
};
}
export async function resolveSessionKey(params: {
meta: AcpSessionMeta;
fallbackKey: string;
gateway: GatewayClient;
opts: AcpServerOptions;
}): Promise<string> {
const requestedLabel = params.meta.sessionLabel ?? params.opts.defaultSessionLabel;
const requestedKey = params.meta.sessionKey ?? params.opts.defaultSessionKey;
const requireExisting =
params.meta.requireExisting ?? params.opts.requireExistingSession ?? false;
if (params.meta.sessionLabel) {
const resolved = await params.gateway.request<{ ok: true; key: string }>("sessions.resolve", {
label: params.meta.sessionLabel,
});
if (!resolved?.key) {
throw new Error(`Unable to resolve session label: ${params.meta.sessionLabel}`);
}
return resolved.key;
}
if (params.meta.sessionKey) {
if (!requireExisting) {
return params.meta.sessionKey;
}
const resolved = await params.gateway.request<{ ok: true; key: string }>("sessions.resolve", {
key: params.meta.sessionKey,
});
if (!resolved?.key) {
throw new Error(`Session key not found: ${params.meta.sessionKey}`);
}
return resolved.key;
}
if (requestedLabel) {
const resolved = await params.gateway.request<{ ok: true; key: string }>("sessions.resolve", {
label: requestedLabel,
});
if (!resolved?.key) {
throw new Error(`Unable to resolve session label: ${requestedLabel}`);
}
return resolved.key;
}
if (requestedKey) {
if (!requireExisting) {
return requestedKey;
}
const resolved = await params.gateway.request<{ ok: true; key: string }>("sessions.resolve", {
key: requestedKey,
});
if (!resolved?.key) {
throw new Error(`Session key not found: ${requestedKey}`);
}
return resolved.key;
}
return params.fallbackKey;
}
export async function resetSessionIfNeeded(params: {
meta: AcpSessionMeta;
sessionKey: string;
gateway: GatewayClient;
opts: AcpServerOptions;
}): Promise<void> {
const resetSession = params.meta.resetSession ?? params.opts.resetSession ?? false;
if (!resetSession) {
return;
}
await params.gateway.request("sessions.reset", { key: params.sessionKey });
}
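The resolver above encodes a precedence order: an explicit per-request label wins, then an explicit key, then the configured defaults, then the fallback. A simplified sketch of just the precedence (illustrative names; it ignores the gateway round-trip that turns a label into a key and the `requireExisting` check):

```typescript
// Precedence only: meta label > meta key > default label > default key > fallback.
type Meta = { sessionLabel?: string; sessionKey?: string };
type Opts = { defaultSessionLabel?: string; defaultSessionKey?: string };

function pickRequested(meta: Meta, opts: Opts, fallback: string): string {
  return (
    meta.sessionLabel ??
    meta.sessionKey ??
    opts.defaultSessionLabel ??
    opts.defaultSessionKey ??
    fallback
  );
}
```

This matches the session-mapper tests: a meta `sessionKey` overrides `defaultSessionLabel`, and a meta `sessionLabel` beats everything else.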


@@ -1,146 +0,0 @@
import { afterEach, beforeEach, describe, expect, it } from "vitest";
import { createInMemorySessionStore } from "./session.js";
describe("acp session manager", () => {
let nowMs = 0;
const now = () => nowMs;
const advance = (ms: number) => {
nowMs += ms;
};
let store = createInMemorySessionStore({ now });
beforeEach(() => {
nowMs = 1_000;
store = createInMemorySessionStore({ now });
});
afterEach(() => {
store.clearAllSessionsForTest();
});
it("tracks active runs and clears on cancel", () => {
const session = store.createSession({
sessionKey: "acp:test",
cwd: "/tmp",
});
const controller = new AbortController();
store.setActiveRun(session.sessionId, "run-1", controller);
expect(store.getSessionByRunId("run-1")?.sessionId).toBe(session.sessionId);
const cancelled = store.cancelActiveRun(session.sessionId);
expect(cancelled).toBe(true);
expect(store.getSessionByRunId("run-1")).toBeUndefined();
});
it("refreshes existing session IDs instead of creating duplicates", () => {
const first = store.createSession({
sessionId: "existing",
sessionKey: "acp:one",
cwd: "/tmp/one",
});
advance(500);
const refreshed = store.createSession({
sessionId: "existing",
sessionKey: "acp:two",
cwd: "/tmp/two",
});
expect(refreshed).toBe(first);
expect(refreshed.sessionKey).toBe("acp:two");
expect(refreshed.cwd).toBe("/tmp/two");
expect(refreshed.createdAt).toBe(1_000);
expect(refreshed.lastTouchedAt).toBe(1_500);
expect(store.hasSession("existing")).toBe(true);
});
it("reaps idle sessions before enforcing the max session cap", () => {
const boundedStore = createInMemorySessionStore({
maxSessions: 1,
idleTtlMs: 1_000,
now,
});
try {
boundedStore.createSession({
sessionId: "old",
sessionKey: "acp:old",
cwd: "/tmp",
});
advance(2_000);
const fresh = boundedStore.createSession({
sessionId: "fresh",
sessionKey: "acp:fresh",
cwd: "/tmp",
});
expect(fresh.sessionId).toBe("fresh");
expect(boundedStore.getSession("old")).toBeUndefined();
expect(boundedStore.hasSession("old")).toBe(false);
} finally {
boundedStore.clearAllSessionsForTest();
}
});
it("uses soft-cap eviction for the oldest idle session when full", () => {
const boundedStore = createInMemorySessionStore({
maxSessions: 2,
idleTtlMs: 24 * 60 * 60 * 1_000,
now,
});
try {
const first = boundedStore.createSession({
sessionId: "first",
sessionKey: "acp:first",
cwd: "/tmp",
});
advance(100);
const second = boundedStore.createSession({
sessionId: "second",
sessionKey: "acp:second",
cwd: "/tmp",
});
const controller = new AbortController();
boundedStore.setActiveRun(second.sessionId, "run-2", controller);
advance(100);
const third = boundedStore.createSession({
sessionId: "third",
sessionKey: "acp:third",
cwd: "/tmp",
});
expect(third.sessionId).toBe("third");
expect(boundedStore.getSession(first.sessionId)).toBeUndefined();
expect(boundedStore.getSession(second.sessionId)).toBeDefined();
} finally {
boundedStore.clearAllSessionsForTest();
}
});
it("rejects when full and no session is evictable", () => {
const boundedStore = createInMemorySessionStore({
maxSessions: 1,
idleTtlMs: 24 * 60 * 60 * 1_000,
now,
});
try {
const only = boundedStore.createSession({
sessionId: "only",
sessionKey: "acp:only",
cwd: "/tmp",
});
boundedStore.setActiveRun(only.sessionId, "run-only", new AbortController());
expect(() =>
boundedStore.createSession({
sessionId: "next",
sessionKey: "acp:next",
cwd: "/tmp",
}),
).toThrow(/session limit reached/i);
} finally {
boundedStore.clearAllSessionsForTest();
}
});
});


@@ -1,190 +0,0 @@
import { randomUUID } from "node:crypto";
import type { AcpSession } from "./types.js";
export type AcpSessionStore = {
createSession: (params: { sessionKey: string; cwd: string; sessionId?: string }) => AcpSession;
hasSession: (sessionId: string) => boolean;
getSession: (sessionId: string) => AcpSession | undefined;
getSessionByRunId: (runId: string) => AcpSession | undefined;
setActiveRun: (sessionId: string, runId: string, abortController: AbortController) => void;
clearActiveRun: (sessionId: string) => void;
cancelActiveRun: (sessionId: string) => boolean;
clearAllSessionsForTest: () => void;
};
type AcpSessionStoreOptions = {
maxSessions?: number;
idleTtlMs?: number;
now?: () => number;
};
const DEFAULT_MAX_SESSIONS = 5_000;
const DEFAULT_IDLE_TTL_MS = 24 * 60 * 60 * 1_000;
export function createInMemorySessionStore(options: AcpSessionStoreOptions = {}): AcpSessionStore {
const maxSessions = Math.max(1, options.maxSessions ?? DEFAULT_MAX_SESSIONS);
const idleTtlMs = Math.max(1_000, options.idleTtlMs ?? DEFAULT_IDLE_TTL_MS);
const now = options.now ?? Date.now;
const sessions = new Map<string, AcpSession>();
const runIdToSessionId = new Map<string, string>();
const touchSession = (session: AcpSession, nowMs: number) => {
session.lastTouchedAt = nowMs;
};
const removeSession = (sessionId: string) => {
const session = sessions.get(sessionId);
if (!session) {
return false;
}
if (session.activeRunId) {
runIdToSessionId.delete(session.activeRunId);
}
session.abortController?.abort();
sessions.delete(sessionId);
return true;
};
const reapIdleSessions = (nowMs: number) => {
const idleBefore = nowMs - idleTtlMs;
for (const [sessionId, session] of sessions.entries()) {
if (session.activeRunId || session.abortController) {
continue;
}
if (session.lastTouchedAt > idleBefore) {
continue;
}
removeSession(sessionId);
}
};
const evictOldestIdleSession = () => {
let oldestSessionId: string | null = null;
let oldestLastTouchedAt = Number.POSITIVE_INFINITY;
for (const [sessionId, session] of sessions.entries()) {
if (session.activeRunId || session.abortController) {
continue;
}
if (session.lastTouchedAt >= oldestLastTouchedAt) {
continue;
}
oldestLastTouchedAt = session.lastTouchedAt;
oldestSessionId = sessionId;
}
if (!oldestSessionId) {
return false;
}
return removeSession(oldestSessionId);
};
const createSession: AcpSessionStore["createSession"] = (params) => {
const nowMs = now();
const sessionId = params.sessionId ?? randomUUID();
const existingSession = sessions.get(sessionId);
if (existingSession) {
existingSession.sessionKey = params.sessionKey;
existingSession.cwd = params.cwd;
touchSession(existingSession, nowMs);
return existingSession;
}
reapIdleSessions(nowMs);
if (sessions.size >= maxSessions && !evictOldestIdleSession()) {
throw new Error(
`ACP session limit reached (max ${maxSessions}). Close idle ACP clients and retry.`,
);
}
const session: AcpSession = {
sessionId,
sessionKey: params.sessionKey,
cwd: params.cwd,
createdAt: nowMs,
lastTouchedAt: nowMs,
abortController: null,
activeRunId: null,
};
sessions.set(sessionId, session);
return session;
};
const hasSession: AcpSessionStore["hasSession"] = (sessionId) => sessions.has(sessionId);
const getSession: AcpSessionStore["getSession"] = (sessionId) => {
const session = sessions.get(sessionId);
if (session) {
touchSession(session, now());
}
return session;
};
const getSessionByRunId: AcpSessionStore["getSessionByRunId"] = (runId) => {
const sessionId = runIdToSessionId.get(runId);
if (!sessionId) {
return undefined;
}
const session = sessions.get(sessionId);
if (session) {
touchSession(session, now());
}
return session;
};
const setActiveRun: AcpSessionStore["setActiveRun"] = (sessionId, runId, abortController) => {
const session = sessions.get(sessionId);
if (!session) {
return;
}
session.activeRunId = runId;
session.abortController = abortController;
runIdToSessionId.set(runId, sessionId);
touchSession(session, now());
};
const clearActiveRun: AcpSessionStore["clearActiveRun"] = (sessionId) => {
const session = sessions.get(sessionId);
if (!session) {
return;
}
if (session.activeRunId) {
runIdToSessionId.delete(session.activeRunId);
}
session.activeRunId = null;
session.abortController = null;
touchSession(session, now());
};
const cancelActiveRun: AcpSessionStore["cancelActiveRun"] = (sessionId) => {
const session = sessions.get(sessionId);
if (!session?.abortController) {
return false;
}
session.abortController.abort();
if (session.activeRunId) {
runIdToSessionId.delete(session.activeRunId);
}
session.abortController = null;
session.activeRunId = null;
touchSession(session, now());
return true;
};
const clearAllSessionsForTest: AcpSessionStore["clearAllSessionsForTest"] = () => {
for (const session of sessions.values()) {
session.abortController?.abort();
}
sessions.clear();
runIdToSessionId.clear();
};
return {
createSession,
hasSession,
getSession,
getSessionByRunId,
setActiveRun,
clearActiveRun,
cancelActiveRun,
clearAllSessionsForTest,
};
}
export const defaultAcpSessionStore = createInMemorySessionStore();


@@ -1,70 +0,0 @@
import os from "node:os";
import path from "node:path";
import type { AgentSideConnection, PromptRequest } from "@agentclientprotocol/sdk";
import { describe, expect, it, vi } from "vitest";
import type { GatewayClient } from "../gateway/client.js";
import { createInMemorySessionStore } from "./session.js";
import { AcpGatewayAgent } from "./translator.js";
function createConnection(): AgentSideConnection {
return {
sessionUpdate: vi.fn(async () => {}),
} as unknown as AgentSideConnection;
}
describe("acp prompt cwd prefix", () => {
async function runPromptWithCwd(cwd: string) {
const sessionStore = createInMemorySessionStore();
sessionStore.createSession({
sessionId: "session-1",
sessionKey: "agent:main:main",
cwd,
});
const requestSpy = vi.fn(async (method: string) => {
if (method === "chat.send") {
throw new Error("stop-after-send");
}
return {};
});
const gateway = {
request: requestSpy,
} as unknown as GatewayClient;
const agent = new AcpGatewayAgent(createConnection(), gateway, {
sessionStore,
prefixCwd: true,
});
await expect(
agent.prompt({
sessionId: "session-1",
prompt: [{ type: "text", text: "hello" }],
_meta: {},
} as unknown as PromptRequest),
).rejects.toThrow("stop-after-send");
return requestSpy;
}
it("redacts home directory in prompt prefix", async () => {
const requestSpy = await runPromptWithCwd(path.join(os.homedir(), "openclaw-test"));
expect(requestSpy).toHaveBeenCalledWith(
"chat.send",
expect.objectContaining({
message: expect.stringMatching(/\[Working directory: ~[\\/]openclaw-test\]/),
}),
{ expectFinal: true },
);
});
it("keeps backslash separators when cwd uses them", async () => {
const requestSpy = await runPromptWithCwd(`${os.homedir()}\\openclaw-test`);
expect(requestSpy).toHaveBeenCalledWith(
"chat.send",
expect.objectContaining({
message: expect.stringContaining("[Working directory: ~\\openclaw-test]"),
}),
{ expectFinal: true },
);
});
});


@@ -1,135 +0,0 @@
import type {
AgentSideConnection,
LoadSessionRequest,
NewSessionRequest,
PromptRequest,
} from "@agentclientprotocol/sdk";
import { describe, expect, it, vi } from "vitest";
import type { GatewayClient } from "../gateway/client.js";
import { createInMemorySessionStore } from "./session.js";
import { AcpGatewayAgent } from "./translator.js";
function createConnection(): AgentSideConnection {
return {
sessionUpdate: vi.fn(async () => {}),
} as unknown as AgentSideConnection;
}
function createGateway(
request: GatewayClient["request"] = vi.fn(async () => ({ ok: true })) as GatewayClient["request"],
): GatewayClient {
return {
request,
} as unknown as GatewayClient;
}
function createNewSessionRequest(cwd = "/tmp"): NewSessionRequest {
return {
cwd,
mcpServers: [],
_meta: {},
} as unknown as NewSessionRequest;
}
function createLoadSessionRequest(sessionId: string, cwd = "/tmp"): LoadSessionRequest {
return {
sessionId,
cwd,
mcpServers: [],
_meta: {},
} as unknown as LoadSessionRequest;
}
function createPromptRequest(
sessionId: string,
text: string,
meta: Record<string, unknown> = {},
): PromptRequest {
return {
sessionId,
prompt: [{ type: "text", text }],
_meta: meta,
} as unknown as PromptRequest;
}
describe("acp session creation rate limit", () => {
it("rate limits excessive newSession bursts", async () => {
const sessionStore = createInMemorySessionStore();
const agent = new AcpGatewayAgent(createConnection(), createGateway(), {
sessionStore,
sessionCreateRateLimit: {
maxRequests: 2,
windowMs: 60_000,
},
});
await agent.newSession(createNewSessionRequest());
await agent.newSession(createNewSessionRequest());
await expect(agent.newSession(createNewSessionRequest())).rejects.toThrow(
/session creation rate limit exceeded/i,
);
sessionStore.clearAllSessionsForTest();
});
it("does not count loadSession refreshes for an existing session ID", async () => {
const sessionStore = createInMemorySessionStore();
const agent = new AcpGatewayAgent(createConnection(), createGateway(), {
sessionStore,
sessionCreateRateLimit: {
maxRequests: 1,
windowMs: 60_000,
},
});
await agent.loadSession(createLoadSessionRequest("shared-session"));
await agent.loadSession(createLoadSessionRequest("shared-session"));
await expect(agent.loadSession(createLoadSessionRequest("new-session"))).rejects.toThrow(
/session creation rate limit exceeded/i,
);
sessionStore.clearAllSessionsForTest();
});
});
describe("acp prompt size hardening", () => {
it("rejects oversized prompt blocks without leaking active runs", async () => {
const request = vi.fn(async () => ({ ok: true })) as GatewayClient["request"];
const sessionStore = createInMemorySessionStore();
const agent = new AcpGatewayAgent(createConnection(), createGateway(request), {
sessionStore,
});
const sessionId = "prompt-limit-oversize";
await agent.loadSession(createLoadSessionRequest(sessionId));
await expect(
agent.prompt(createPromptRequest(sessionId, "a".repeat(2 * 1024 * 1024 + 1))),
).rejects.toThrow(/maximum allowed size/i);
expect(request).not.toHaveBeenCalledWith("chat.send", expect.anything(), expect.anything());
const session = sessionStore.getSession(sessionId);
expect(session?.activeRunId).toBeNull();
expect(session?.abortController).toBeNull();
sessionStore.clearAllSessionsForTest();
});
it("rejects oversize final messages from cwd prefix without leaking active runs", async () => {
const request = vi.fn(async () => ({ ok: true })) as GatewayClient["request"];
const sessionStore = createInMemorySessionStore();
const agent = new AcpGatewayAgent(createConnection(), createGateway(request), {
sessionStore,
});
const sessionId = "prompt-limit-prefix";
await agent.loadSession(createLoadSessionRequest(sessionId));
await expect(
agent.prompt(createPromptRequest(sessionId, "a".repeat(2 * 1024 * 1024))),
).rejects.toThrow(/maximum allowed size/i);
expect(request).not.toHaveBeenCalledWith("chat.send", expect.anything(), expect.anything());
const session = sessionStore.getSession(sessionId);
expect(session?.activeRunId).toBeNull();
expect(session?.abortController).toBeNull();
sessionStore.clearAllSessionsForTest();
});
});


@@ -1,498 +0,0 @@
import { randomUUID } from "node:crypto";
import type {
Agent,
AgentSideConnection,
AuthenticateRequest,
AuthenticateResponse,
CancelNotification,
InitializeRequest,
InitializeResponse,
ListSessionsRequest,
ListSessionsResponse,
LoadSessionRequest,
LoadSessionResponse,
NewSessionRequest,
NewSessionResponse,
PromptRequest,
PromptResponse,
SetSessionModeRequest,
SetSessionModeResponse,
StopReason,
} from "@agentclientprotocol/sdk";
import { PROTOCOL_VERSION } from "@agentclientprotocol/sdk";
import type { GatewayClient } from "../gateway/client.js";
import type { EventFrame } from "../gateway/protocol/index.js";
import type { SessionsListResult } from "../gateway/session-utils.js";
import {
createFixedWindowRateLimiter,
type FixedWindowRateLimiter,
} from "../infra/fixed-window-rate-limit.js";
import { shortenHomePath } from "../utils.js";
import { getAvailableCommands } from "./commands.js";
import {
extractAttachmentsFromPrompt,
extractTextFromPrompt,
formatToolTitle,
inferToolKind,
} from "./event-mapper.js";
import { readBool, readNumber, readString } from "./meta.js";
import { parseSessionMeta, resetSessionIfNeeded, resolveSessionKey } from "./session-mapper.js";
import { defaultAcpSessionStore, type AcpSessionStore } from "./session.js";
import { ACP_AGENT_INFO, type AcpServerOptions } from "./types.js";
// Maximum allowed prompt size (2MB) to prevent DoS via memory exhaustion (CWE-400, GHSA-cxpw-2g23-2vgw)
const MAX_PROMPT_BYTES = 2 * 1024 * 1024;
type PendingPrompt = {
sessionId: string;
sessionKey: string;
idempotencyKey: string;
resolve: (response: PromptResponse) => void;
reject: (err: Error) => void;
sentTextLength?: number;
sentText?: string;
toolCalls?: Set<string>;
};
type AcpGatewayAgentOptions = AcpServerOptions & {
sessionStore?: AcpSessionStore;
};
const SESSION_CREATE_RATE_LIMIT_DEFAULT_MAX_REQUESTS = 120;
const SESSION_CREATE_RATE_LIMIT_DEFAULT_WINDOW_MS = 10_000;
export class AcpGatewayAgent implements Agent {
private connection: AgentSideConnection;
private gateway: GatewayClient;
private opts: AcpGatewayAgentOptions;
private log: (msg: string) => void;
private sessionStore: AcpSessionStore;
private sessionCreateRateLimiter: FixedWindowRateLimiter;
private pendingPrompts = new Map<string, PendingPrompt>();
constructor(
connection: AgentSideConnection,
gateway: GatewayClient,
opts: AcpGatewayAgentOptions = {},
) {
this.connection = connection;
this.gateway = gateway;
this.opts = opts;
this.log = opts.verbose ? (msg: string) => process.stderr.write(`[acp] ${msg}\n`) : () => {};
this.sessionStore = opts.sessionStore ?? defaultAcpSessionStore;
this.sessionCreateRateLimiter = createFixedWindowRateLimiter({
maxRequests: Math.max(
1,
opts.sessionCreateRateLimit?.maxRequests ?? SESSION_CREATE_RATE_LIMIT_DEFAULT_MAX_REQUESTS,
),
windowMs: Math.max(
1_000,
opts.sessionCreateRateLimit?.windowMs ?? SESSION_CREATE_RATE_LIMIT_DEFAULT_WINDOW_MS,
),
});
}
start(): void {
this.log("ready");
}
handleGatewayReconnect(): void {
this.log("gateway reconnected");
}
handleGatewayDisconnect(reason: string): void {
this.log(`gateway disconnected: ${reason}`);
for (const pending of this.pendingPrompts.values()) {
pending.reject(new Error(`Gateway disconnected: ${reason}`));
this.sessionStore.clearActiveRun(pending.sessionId);
}
this.pendingPrompts.clear();
}
async handleGatewayEvent(evt: EventFrame): Promise<void> {
if (evt.event === "chat") {
await this.handleChatEvent(evt);
return;
}
if (evt.event === "agent") {
await this.handleAgentEvent(evt);
}
}
async initialize(_params: InitializeRequest): Promise<InitializeResponse> {
return {
protocolVersion: PROTOCOL_VERSION,
agentCapabilities: {
loadSession: true,
promptCapabilities: {
image: true,
audio: false,
embeddedContext: true,
},
mcpCapabilities: {
http: false,
sse: false,
},
sessionCapabilities: {
list: {},
},
},
agentInfo: ACP_AGENT_INFO,
authMethods: [],
};
}
async newSession(params: NewSessionRequest): Promise<NewSessionResponse> {
if (params.mcpServers.length > 0) {
this.log(`ignoring ${params.mcpServers.length} MCP servers`);
}
this.enforceSessionCreateRateLimit("newSession");
const sessionId = randomUUID();
const meta = parseSessionMeta(params._meta);
const sessionKey = await resolveSessionKey({
meta,
fallbackKey: `acp:${sessionId}`,
gateway: this.gateway,
opts: this.opts,
});
await resetSessionIfNeeded({
meta,
sessionKey,
gateway: this.gateway,
opts: this.opts,
});
const session = this.sessionStore.createSession({
sessionId,
sessionKey,
cwd: params.cwd,
});
this.log(`newSession: ${session.sessionId} -> ${session.sessionKey}`);
await this.sendAvailableCommands(session.sessionId);
return { sessionId: session.sessionId };
}
async loadSession(params: LoadSessionRequest): Promise<LoadSessionResponse> {
if (params.mcpServers.length > 0) {
this.log(`ignoring ${params.mcpServers.length} MCP servers`);
}
if (!this.sessionStore.hasSession(params.sessionId)) {
this.enforceSessionCreateRateLimit("loadSession");
}
const meta = parseSessionMeta(params._meta);
const sessionKey = await resolveSessionKey({
meta,
fallbackKey: params.sessionId,
gateway: this.gateway,
opts: this.opts,
});
await resetSessionIfNeeded({
meta,
sessionKey,
gateway: this.gateway,
opts: this.opts,
});
const session = this.sessionStore.createSession({
sessionId: params.sessionId,
sessionKey,
cwd: params.cwd,
});
this.log(`loadSession: ${session.sessionId} -> ${session.sessionKey}`);
await this.sendAvailableCommands(session.sessionId);
return {};
}
async unstable_listSessions(params: ListSessionsRequest): Promise<ListSessionsResponse> {
const limit = readNumber(params._meta, ["limit"]) ?? 100;
const result = await this.gateway.request<SessionsListResult>("sessions.list", { limit });
const cwd = params.cwd ?? process.cwd();
return {
sessions: result.sessions.map((session) => ({
sessionId: session.key,
cwd,
title: session.displayName ?? session.label ?? session.key,
updatedAt: session.updatedAt ? new Date(session.updatedAt).toISOString() : undefined,
_meta: {
sessionKey: session.key,
kind: session.kind,
channel: session.channel,
},
})),
nextCursor: null,
};
}
async authenticate(_params: AuthenticateRequest): Promise<AuthenticateResponse> {
return {};
}
async setSessionMode(params: SetSessionModeRequest): Promise<SetSessionModeResponse> {
const session = this.sessionStore.getSession(params.sessionId);
if (!session) {
throw new Error(`Session ${params.sessionId} not found`);
}
if (!params.modeId) {
return {};
}
try {
await this.gateway.request("sessions.patch", {
key: session.sessionKey,
thinkingLevel: params.modeId,
});
this.log(`setSessionMode: ${session.sessionId} -> ${params.modeId}`);
} catch (err) {
this.log(`setSessionMode error: ${String(err)}`);
}
return {};
}
async prompt(params: PromptRequest): Promise<PromptResponse> {
const session = this.sessionStore.getSession(params.sessionId);
if (!session) {
throw new Error(`Session ${params.sessionId} not found`);
}
if (session.abortController) {
this.sessionStore.cancelActiveRun(params.sessionId);
}
const meta = parseSessionMeta(params._meta);
// Pass MAX_PROMPT_BYTES so extractTextFromPrompt rejects oversized content
// block-by-block, before the full string is ever assembled in memory (CWE-400)
const userText = extractTextFromPrompt(params.prompt, MAX_PROMPT_BYTES);
const attachments = extractAttachmentsFromPrompt(params.prompt);
const prefixCwd = meta.prefixCwd ?? this.opts.prefixCwd ?? true;
const displayCwd = shortenHomePath(session.cwd);
const message = prefixCwd ? `[Working directory: ${displayCwd}]\n\n${userText}` : userText;
// Defense-in-depth: also check the final assembled message (includes cwd prefix)
if (Buffer.byteLength(message, "utf-8") > MAX_PROMPT_BYTES) {
throw new Error(`Prompt exceeds maximum allowed size of ${MAX_PROMPT_BYTES} bytes`);
}
const abortController = new AbortController();
const runId = randomUUID();
this.sessionStore.setActiveRun(params.sessionId, runId, abortController);
return new Promise<PromptResponse>((resolve, reject) => {
this.pendingPrompts.set(params.sessionId, {
sessionId: params.sessionId,
sessionKey: session.sessionKey,
idempotencyKey: runId,
resolve,
reject,
});
this.gateway
.request(
"chat.send",
{
sessionKey: session.sessionKey,
message,
attachments: attachments.length > 0 ? attachments : undefined,
idempotencyKey: runId,
thinking: readString(params._meta, ["thinking", "thinkingLevel"]),
deliver: readBool(params._meta, ["deliver"]),
timeoutMs: readNumber(params._meta, ["timeoutMs"]),
},
{ expectFinal: true },
)
.catch((err) => {
this.pendingPrompts.delete(params.sessionId);
this.sessionStore.clearActiveRun(params.sessionId);
reject(err instanceof Error ? err : new Error(String(err)));
});
});
}
async cancel(params: CancelNotification): Promise<void> {
const session = this.sessionStore.getSession(params.sessionId);
if (!session) {
return;
}
this.sessionStore.cancelActiveRun(params.sessionId);
try {
await this.gateway.request("chat.abort", { sessionKey: session.sessionKey });
} catch (err) {
this.log(`cancel error: ${String(err)}`);
}
const pending = this.pendingPrompts.get(params.sessionId);
if (pending) {
this.pendingPrompts.delete(params.sessionId);
pending.resolve({ stopReason: "cancelled" });
}
}
private async handleAgentEvent(evt: EventFrame): Promise<void> {
const payload = evt.payload as Record<string, unknown> | undefined;
if (!payload) {
return;
}
const stream = payload.stream as string | undefined;
const data = payload.data as Record<string, unknown> | undefined;
const sessionKey = payload.sessionKey as string | undefined;
if (!stream || !data || !sessionKey) {
return;
}
if (stream !== "tool") {
return;
}
const phase = data.phase as string | undefined;
const name = data.name as string | undefined;
const toolCallId = data.toolCallId as string | undefined;
if (!toolCallId) {
return;
}
const pending = this.findPendingBySessionKey(sessionKey);
if (!pending) {
return;
}
if (phase === "start") {
if (!pending.toolCalls) {
pending.toolCalls = new Set();
}
if (pending.toolCalls.has(toolCallId)) {
return;
}
pending.toolCalls.add(toolCallId);
const args = data.args as Record<string, unknown> | undefined;
await this.connection.sessionUpdate({
sessionId: pending.sessionId,
update: {
sessionUpdate: "tool_call",
toolCallId,
title: formatToolTitle(name, args),
status: "in_progress",
rawInput: args,
kind: inferToolKind(name),
},
});
return;
}
if (phase === "result") {
const isError = Boolean(data.isError);
await this.connection.sessionUpdate({
sessionId: pending.sessionId,
update: {
sessionUpdate: "tool_call_update",
toolCallId,
status: isError ? "failed" : "completed",
rawOutput: data.result,
},
});
}
}
private async handleChatEvent(evt: EventFrame): Promise<void> {
const payload = evt.payload as Record<string, unknown> | undefined;
if (!payload) {
return;
}
const sessionKey = payload.sessionKey as string | undefined;
const state = payload.state as string | undefined;
const runId = payload.runId as string | undefined;
const messageData = payload.message as Record<string, unknown> | undefined;
if (!sessionKey || !state) {
return;
}
const pending = this.findPendingBySessionKey(sessionKey);
if (!pending) {
return;
}
if (runId && pending.idempotencyKey !== runId) {
return;
}
if (state === "delta" && messageData) {
await this.handleDeltaEvent(pending.sessionId, messageData);
return;
}
if (state === "final") {
this.finishPrompt(pending.sessionId, pending, "end_turn");
return;
}
if (state === "aborted") {
this.finishPrompt(pending.sessionId, pending, "cancelled");
return;
}
if (state === "error") {
this.finishPrompt(pending.sessionId, pending, "refusal");
}
}
private async handleDeltaEvent(
sessionId: string,
messageData: Record<string, unknown>,
): Promise<void> {
const content = messageData.content as Array<{ type: string; text?: string }> | undefined;
const fullText = content?.find((c) => c.type === "text")?.text ?? "";
const pending = this.pendingPrompts.get(sessionId);
if (!pending) {
return;
}
const sentSoFar = pending.sentTextLength ?? 0;
if (fullText.length <= sentSoFar) {
return;
}
const newText = fullText.slice(sentSoFar);
pending.sentTextLength = fullText.length;
pending.sentText = fullText;
await this.connection.sessionUpdate({
sessionId,
update: {
sessionUpdate: "agent_message_chunk",
content: { type: "text", text: newText },
},
});
}
private finishPrompt(sessionId: string, pending: PendingPrompt, stopReason: StopReason): void {
this.pendingPrompts.delete(sessionId);
this.sessionStore.clearActiveRun(sessionId);
pending.resolve({ stopReason });
}
private findPendingBySessionKey(sessionKey: string): PendingPrompt | undefined {
for (const pending of this.pendingPrompts.values()) {
if (pending.sessionKey === sessionKey) {
return pending;
}
}
return undefined;
}
private async sendAvailableCommands(sessionId: string): Promise<void> {
await this.connection.sessionUpdate({
sessionId,
update: {
sessionUpdate: "available_commands_update",
availableCommands: getAvailableCommands(),
},
});
}
private enforceSessionCreateRateLimit(method: "newSession" | "loadSession"): void {
const budget = this.sessionCreateRateLimiter.consume();
if (budget.allowed) {
return;
}
throw new Error(
`ACP session creation rate limit exceeded for ${method}; retry after ${Math.ceil(budget.retryAfterMs / 1_000)}s.`,
);
}
}


@@ -1,34 +0,0 @@
import type { SessionId } from "@agentclientprotocol/sdk";
import { VERSION } from "../version.js";
export type AcpSession = {
sessionId: SessionId;
sessionKey: string;
cwd: string;
createdAt: number;
lastTouchedAt: number;
abortController: AbortController | null;
activeRunId: string | null;
};
export type AcpServerOptions = {
gatewayUrl?: string;
gatewayToken?: string;
gatewayPassword?: string;
defaultSessionKey?: string;
defaultSessionLabel?: string;
requireExistingSession?: boolean;
resetSession?: boolean;
prefixCwd?: boolean;
sessionCreateRateLimit?: {
maxRequests?: number;
windowMs?: number;
};
verbose?: boolean;
};
export const ACP_AGENT_INFO = {
name: "openclaw-acp",
title: "OpenClaw ACP Gateway",
version: VERSION,
};


@@ -1,85 +0,0 @@
import fs from "node:fs/promises";
import os from "node:os";
import path from "node:path";
import { describe, expect, it } from "vitest";
import { withEnv } from "../test-utils/env.js";
import { resolveOpenClawAgentDir } from "./agent-paths.js";
describe("resolveOpenClawAgentDir", () => {
const withTempStateDir = async (run: (stateDir: string) => void) => {
const stateDir = await fs.mkdtemp(path.join(os.tmpdir(), "openclaw-agent-"));
try {
run(stateDir);
} finally {
await fs.rm(stateDir, { recursive: true, force: true });
}
};
it("defaults to the multi-agent path when no overrides are set", async () => {
await withTempStateDir((stateDir) => {
withEnv(
{
OPENCLAW_STATE_DIR: stateDir,
OPENCLAW_AGENT_DIR: undefined,
PI_CODING_AGENT_DIR: undefined,
},
() => {
const resolved = resolveOpenClawAgentDir();
expect(resolved).toBe(path.join(stateDir, "agents", "main", "agent"));
},
);
});
});
it("honors OPENCLAW_AGENT_DIR overrides", async () => {
await withTempStateDir((stateDir) => {
const override = path.join(stateDir, "agent");
withEnv(
{
OPENCLAW_STATE_DIR: undefined,
OPENCLAW_AGENT_DIR: override,
PI_CODING_AGENT_DIR: undefined,
},
() => {
const resolved = resolveOpenClawAgentDir();
expect(resolved).toBe(path.resolve(override));
},
);
});
});
it("honors PI_CODING_AGENT_DIR when OPENCLAW_AGENT_DIR is unset", async () => {
await withTempStateDir((stateDir) => {
const override = path.join(stateDir, "pi-agent");
withEnv(
{
OPENCLAW_STATE_DIR: undefined,
OPENCLAW_AGENT_DIR: undefined,
PI_CODING_AGENT_DIR: override,
},
() => {
const resolved = resolveOpenClawAgentDir();
expect(resolved).toBe(path.resolve(override));
},
);
});
});
it("prefers OPENCLAW_AGENT_DIR over PI_CODING_AGENT_DIR when both are set", async () => {
await withTempStateDir((stateDir) => {
const primaryOverride = path.join(stateDir, "primary-agent");
const fallbackOverride = path.join(stateDir, "fallback-agent");
withEnv(
{
OPENCLAW_STATE_DIR: undefined,
OPENCLAW_AGENT_DIR: primaryOverride,
PI_CODING_AGENT_DIR: fallbackOverride,
},
() => {
const resolved = resolveOpenClawAgentDir();
expect(resolved).toBe(path.resolve(primaryOverride));
},
);
});
});
});


@@ -1,25 +0,0 @@
import path from "node:path";
import { resolveStateDir } from "../config/paths.js";
import { DEFAULT_AGENT_ID } from "../routing/session-key.js";
import { resolveUserPath } from "../utils.js";
export function resolveOpenClawAgentDir(): string {
const override =
process.env.OPENCLAW_AGENT_DIR?.trim() || process.env.PI_CODING_AGENT_DIR?.trim();
if (override) {
return resolveUserPath(override);
}
const defaultAgentDir = path.join(resolveStateDir(), "agents", DEFAULT_AGENT_ID, "agent");
return resolveUserPath(defaultAgentDir);
}
export function ensureOpenClawAgentEnv(): string {
const dir = resolveOpenClawAgentDir();
if (!process.env.OPENCLAW_AGENT_DIR) {
process.env.OPENCLAW_AGENT_DIR = dir;
}
if (!process.env.PI_CODING_AGENT_DIR) {
process.env.PI_CODING_AGENT_DIR = dir;
}
return dir;
}


@@ -1,283 +0,0 @@
import path from "node:path";
import { afterEach, describe, expect, it, vi } from "vitest";
import type { OpenClawConfig } from "../config/config.js";
import {
resolveAgentConfig,
resolveAgentDir,
resolveEffectiveModelFallbacks,
resolveAgentModelFallbacksOverride,
resolveAgentModelPrimary,
resolveAgentWorkspaceDir,
} from "./agent-scope.js";
afterEach(() => {
vi.unstubAllEnvs();
});
describe("resolveAgentConfig", () => {
it("should return undefined when no agents config exists", () => {
const cfg: OpenClawConfig = {};
const result = resolveAgentConfig(cfg, "main");
expect(result).toBeUndefined();
});
it("should return undefined when agent id does not exist", () => {
const cfg: OpenClawConfig = {
agents: {
list: [{ id: "main", workspace: "~/openclaw" }],
},
};
const result = resolveAgentConfig(cfg, "nonexistent");
expect(result).toBeUndefined();
});
it("should return basic agent config", () => {
const cfg: OpenClawConfig = {
agents: {
list: [
{
id: "main",
name: "Main Agent",
workspace: "~/openclaw",
agentDir: "~/.openclaw/agents/main",
model: "anthropic/claude-opus-4",
},
],
},
};
const result = resolveAgentConfig(cfg, "main");
expect(result).toEqual({
name: "Main Agent",
workspace: "~/openclaw",
agentDir: "~/.openclaw/agents/main",
model: "anthropic/claude-opus-4",
identity: undefined,
groupChat: undefined,
subagents: undefined,
sandbox: undefined,
tools: undefined,
});
});
it("supports per-agent model primary+fallbacks", () => {
const cfg: OpenClawConfig = {
agents: {
defaults: {
model: {
primary: "anthropic/claude-sonnet-4",
fallbacks: ["openai/gpt-4.1"],
},
},
list: [
{
id: "linus",
model: {
primary: "anthropic/claude-opus-4",
fallbacks: ["openai/gpt-5.2"],
},
},
],
},
};
expect(resolveAgentModelPrimary(cfg, "linus")).toBe("anthropic/claude-opus-4");
expect(resolveAgentModelFallbacksOverride(cfg, "linus")).toEqual(["openai/gpt-5.2"]);
// If fallbacks isn't present, we don't override the global fallbacks.
const cfgNoOverride: OpenClawConfig = {
agents: {
list: [
{
id: "linus",
model: {
primary: "anthropic/claude-opus-4",
},
},
],
},
};
expect(resolveAgentModelFallbacksOverride(cfgNoOverride, "linus")).toBe(undefined);
// Explicit empty list disables global fallbacks for that agent.
const cfgDisable: OpenClawConfig = {
agents: {
list: [
{
id: "linus",
model: {
primary: "anthropic/claude-opus-4",
fallbacks: [],
},
},
],
},
};
expect(resolveAgentModelFallbacksOverride(cfgDisable, "linus")).toEqual([]);
expect(
resolveEffectiveModelFallbacks({
cfg,
agentId: "linus",
hasSessionModelOverride: false,
}),
).toEqual(["openai/gpt-5.2"]);
expect(
resolveEffectiveModelFallbacks({
cfg,
agentId: "linus",
hasSessionModelOverride: true,
}),
).toEqual(["openai/gpt-5.2"]);
expect(
resolveEffectiveModelFallbacks({
cfg: cfgNoOverride,
agentId: "linus",
hasSessionModelOverride: true,
}),
).toEqual([]);
const cfgInheritDefaults: OpenClawConfig = {
agents: {
defaults: {
model: {
fallbacks: ["openai/gpt-4.1"],
},
},
list: [
{
id: "linus",
model: {
primary: "anthropic/claude-opus-4",
},
},
],
},
};
expect(
resolveEffectiveModelFallbacks({
cfg: cfgInheritDefaults,
agentId: "linus",
hasSessionModelOverride: true,
}),
).toEqual(["openai/gpt-4.1"]);
expect(
resolveEffectiveModelFallbacks({
cfg: cfgDisable,
agentId: "linus",
hasSessionModelOverride: true,
}),
).toEqual([]);
});
it("should return agent-specific sandbox config", () => {
const cfg: OpenClawConfig = {
agents: {
list: [
{
id: "work",
workspace: "~/openclaw-work",
sandbox: {
mode: "all",
scope: "agent",
perSession: false,
workspaceAccess: "ro",
workspaceRoot: "~/sandboxes",
},
},
],
},
};
const result = resolveAgentConfig(cfg, "work");
expect(result?.sandbox).toEqual({
mode: "all",
scope: "agent",
perSession: false,
workspaceAccess: "ro",
workspaceRoot: "~/sandboxes",
});
});
it("should return agent-specific tools config", () => {
const cfg: OpenClawConfig = {
agents: {
list: [
{
id: "restricted",
workspace: "~/openclaw-restricted",
tools: {
allow: ["read"],
deny: ["exec", "write", "edit"],
elevated: {
enabled: false,
allowFrom: { whatsapp: ["+15555550123"] },
},
},
},
],
},
};
const result = resolveAgentConfig(cfg, "restricted");
expect(result?.tools).toEqual({
allow: ["read"],
deny: ["exec", "write", "edit"],
elevated: {
enabled: false,
allowFrom: { whatsapp: ["+15555550123"] },
},
});
});
it("should return both sandbox and tools config", () => {
const cfg: OpenClawConfig = {
agents: {
list: [
{
id: "family",
workspace: "~/openclaw-family",
sandbox: {
mode: "all",
scope: "agent",
},
tools: {
allow: ["read"],
deny: ["exec"],
},
},
],
},
};
const result = resolveAgentConfig(cfg, "family");
expect(result?.sandbox?.mode).toBe("all");
expect(result?.tools?.allow).toEqual(["read"]);
});
it("should normalize agent id", () => {
const cfg: OpenClawConfig = {
agents: {
list: [{ id: "main", workspace: "~/openclaw" }],
},
};
// Should normalize to "main" (default)
const result = resolveAgentConfig(cfg, "");
expect(result).toBeDefined();
expect(result?.workspace).toBe("~/openclaw");
});
it("uses OPENCLAW_HOME for default agent workspace", () => {
const home = path.join(path.sep, "srv", "openclaw-home");
vi.stubEnv("OPENCLAW_HOME", home);
const workspace = resolveAgentWorkspaceDir({} as OpenClawConfig, "main");
expect(workspace).toBe(path.join(path.resolve(home), ".openclaw", "workspace"));
});
it("uses OPENCLAW_HOME for default agentDir", () => {
const home = path.join(path.sep, "srv", "openclaw-home");
vi.stubEnv("OPENCLAW_HOME", home);
// Clear state dir so it falls back to OPENCLAW_HOME
vi.stubEnv("OPENCLAW_STATE_DIR", "");
const agentDir = resolveAgentDir({} as OpenClawConfig, "main");
expect(agentDir).toBe(path.join(path.resolve(home), ".openclaw", "agents", "main", "agent"));
});
});


@@ -1,211 +0,0 @@
import path from "node:path";
import type { OpenClawConfig } from "../config/config.js";
import { resolveStateDir } from "../config/paths.js";
import { createSubsystemLogger } from "../logging/subsystem.js";
import {
DEFAULT_AGENT_ID,
normalizeAgentId,
parseAgentSessionKey,
} from "../routing/session-key.js";
import { resolveUserPath } from "../utils.js";
import { normalizeSkillFilter } from "./skills/filter.js";
import { resolveDefaultAgentWorkspaceDir } from "./workspace.js";
const log = createSubsystemLogger("agent-scope");
export { resolveAgentIdFromSessionKey } from "../routing/session-key.js";
type AgentEntry = NonNullable<NonNullable<OpenClawConfig["agents"]>["list"]>[number];
type ResolvedAgentConfig = {
name?: string;
workspace?: string;
agentDir?: string;
model?: AgentEntry["model"];
skills?: AgentEntry["skills"];
memorySearch?: AgentEntry["memorySearch"];
humanDelay?: AgentEntry["humanDelay"];
heartbeat?: AgentEntry["heartbeat"];
identity?: AgentEntry["identity"];
groupChat?: AgentEntry["groupChat"];
subagents?: AgentEntry["subagents"];
sandbox?: AgentEntry["sandbox"];
tools?: AgentEntry["tools"];
};
let defaultAgentWarned = false;
export function listAgentEntries(cfg: OpenClawConfig): AgentEntry[] {
const list = cfg.agents?.list;
if (!Array.isArray(list)) {
return [];
}
return list.filter((entry): entry is AgentEntry => Boolean(entry && typeof entry === "object"));
}
export function listAgentIds(cfg: OpenClawConfig): string[] {
const agents = listAgentEntries(cfg);
if (agents.length === 0) {
return [DEFAULT_AGENT_ID];
}
const seen = new Set<string>();
const ids: string[] = [];
for (const entry of agents) {
const id = normalizeAgentId(entry?.id);
if (seen.has(id)) {
continue;
}
seen.add(id);
ids.push(id);
}
return ids.length > 0 ? ids : [DEFAULT_AGENT_ID];
}
export function resolveDefaultAgentId(cfg: OpenClawConfig): string {
const agents = listAgentEntries(cfg);
if (agents.length === 0) {
return DEFAULT_AGENT_ID;
}
const defaults = agents.filter((agent) => agent?.default);
if (defaults.length > 1 && !defaultAgentWarned) {
defaultAgentWarned = true;
log.warn("Multiple agents marked default=true; using the first entry as default.");
}
const chosen = (defaults[0] ?? agents[0])?.id?.trim();
return normalizeAgentId(chosen || DEFAULT_AGENT_ID);
}
export function resolveSessionAgentIds(params: { sessionKey?: string; config?: OpenClawConfig }): {
defaultAgentId: string;
sessionAgentId: string;
} {
const defaultAgentId = resolveDefaultAgentId(params.config ?? {});
const sessionKey = params.sessionKey?.trim();
const normalizedSessionKey = sessionKey ? sessionKey.toLowerCase() : undefined;
const parsed = normalizedSessionKey ? parseAgentSessionKey(normalizedSessionKey) : null;
const sessionAgentId = parsed?.agentId ? normalizeAgentId(parsed.agentId) : defaultAgentId;
return { defaultAgentId, sessionAgentId };
}
export function resolveSessionAgentId(params: {
sessionKey?: string;
config?: OpenClawConfig;
}): string {
return resolveSessionAgentIds(params).sessionAgentId;
}
function resolveAgentEntry(cfg: OpenClawConfig, agentId: string): AgentEntry | undefined {
const id = normalizeAgentId(agentId);
return listAgentEntries(cfg).find((entry) => normalizeAgentId(entry.id) === id);
}
export function resolveAgentConfig(
cfg: OpenClawConfig,
agentId: string,
): ResolvedAgentConfig | undefined {
const id = normalizeAgentId(agentId);
const entry = resolveAgentEntry(cfg, id);
if (!entry) {
return undefined;
}
return {
name: typeof entry.name === "string" ? entry.name : undefined,
workspace: typeof entry.workspace === "string" ? entry.workspace : undefined,
agentDir: typeof entry.agentDir === "string" ? entry.agentDir : undefined,
model:
typeof entry.model === "string" || (entry.model && typeof entry.model === "object")
? entry.model
: undefined,
skills: Array.isArray(entry.skills) ? entry.skills : undefined,
memorySearch: entry.memorySearch,
humanDelay: entry.humanDelay,
heartbeat: entry.heartbeat,
identity: entry.identity,
groupChat: entry.groupChat,
subagents: typeof entry.subagents === "object" && entry.subagents ? entry.subagents : undefined,
sandbox: entry.sandbox,
tools: entry.tools,
};
}
export function resolveAgentSkillsFilter(
cfg: OpenClawConfig,
agentId: string,
): string[] | undefined {
return normalizeSkillFilter(resolveAgentConfig(cfg, agentId)?.skills);
}
export function resolveAgentModelPrimary(cfg: OpenClawConfig, agentId: string): string | undefined {
const raw = resolveAgentConfig(cfg, agentId)?.model;
if (!raw) {
return undefined;
}
if (typeof raw === "string") {
return raw.trim() || undefined;
}
const primary = raw.primary?.trim();
return primary || undefined;
}
export function resolveAgentModelFallbacksOverride(
cfg: OpenClawConfig,
agentId: string,
): string[] | undefined {
const raw = resolveAgentConfig(cfg, agentId)?.model;
if (!raw || typeof raw === "string") {
return undefined;
}
// Important: treat an explicitly provided empty array as an override to disable global fallbacks.
if (!Object.hasOwn(raw, "fallbacks")) {
return undefined;
}
return Array.isArray(raw.fallbacks) ? raw.fallbacks : undefined;
}
export function resolveEffectiveModelFallbacks(params: {
cfg: OpenClawConfig;
agentId: string;
hasSessionModelOverride: boolean;
}): string[] | undefined {
const agentFallbacksOverride = resolveAgentModelFallbacksOverride(params.cfg, params.agentId);
if (!params.hasSessionModelOverride) {
return agentFallbacksOverride;
}
const defaultFallbacks =
typeof params.cfg.agents?.defaults?.model === "object"
? (params.cfg.agents.defaults.model.fallbacks ?? [])
: [];
return agentFallbacksOverride ?? defaultFallbacks;
}
export function resolveAgentWorkspaceDir(cfg: OpenClawConfig, agentId: string) {
// OPENCLAW_WORKSPACE overrides everything (set by the web UI for profile switching).
const envWorkspace = process.env.OPENCLAW_WORKSPACE?.trim();
if (envWorkspace) {
return resolveUserPath(envWorkspace);
}
const id = normalizeAgentId(agentId);
const configured = resolveAgentConfig(cfg, id)?.workspace?.trim();
if (configured) {
return resolveUserPath(configured);
}
const defaultAgentId = resolveDefaultAgentId(cfg);
if (id === defaultAgentId) {
const fallback = cfg.agents?.defaults?.workspace?.trim();
if (fallback) {
return resolveUserPath(fallback);
}
return resolveDefaultAgentWorkspaceDir(process.env);
}
const stateDir = resolveStateDir(process.env);
return path.join(stateDir, `workspace-${id}`);
}
export function resolveAgentDir(cfg: OpenClawConfig, agentId: string) {
const id = normalizeAgentId(agentId);
const configured = resolveAgentConfig(cfg, id)?.agentDir?.trim();
if (configured) {
return resolveUserPath(configured);
}
const root = resolveStateDir(process.env);
return path.join(root, "agents", id, "agent");
}


@@ -1,403 +0,0 @@
/**
* AI SDK v6 event adapter for openclaw.
*
* This module converts AI SDK stream events to pi-agent compatible events.
* This ensures all existing consumers (UI, CLI, messaging channels) work
* without modification when using the AI SDK engine.
*
* Fork-friendly: emits same event protocol as pi-agent.
*/
import { streamText, type LanguageModel } from "ai";
import type { ConvertedAiSdkTool } from "./tools.js";
/**
* Pi-agent compatible event types.
* Matches the AgentEvent type from @mariozechner/pi-agent-core.
*/
export type PiAgentEvent =
| { type: "agent_start" }
| { type: "agent_end"; messages: PiAgentMessage[] }
| { type: "turn_start" }
| { type: "turn_end"; message: PiAgentMessage; toolResults: PiToolResultMessage[] }
| { type: "message_start"; message: PiAgentMessage }
| {
type: "message_update";
message: PiAgentMessage;
assistantMessageEvent: PiAssistantMessageEvent;
}
| { type: "message_end"; message: PiAgentMessage }
| { type: "tool_execution_start"; toolCallId: string; toolName: string; args: unknown }
| {
type: "tool_execution_update";
toolCallId: string;
toolName: string;
args: unknown;
partialResult: unknown;
}
| {
type: "tool_execution_end";
toolCallId: string;
toolName: string;
result: unknown;
isError: boolean;
};
/**
* Pi-agent message format (simplified).
*/
export interface PiAgentMessage {
role: "user" | "assistant" | "toolResult";
content: PiMessageContent[];
timestamp?: number;
}
/**
* Pi-agent message content block.
*/
export type PiMessageContent =
| { type: "text"; text: string }
| { type: "thinking"; thinking: string }
| { type: "toolCall"; id: string; name: string; arguments: unknown }
| { type: "image"; data: string; mimeType: string };
/**
* Pi-agent tool result message.
*/
export interface PiToolResultMessage {
role: "toolResult";
toolCallId: string;
toolName: string;
content: Array<{ type: "text"; text: string }>;
isError: boolean;
details?: unknown;
}
/**
* Pi-agent assistant message event (streaming update).
*/
export interface PiAssistantMessageEvent {
type: "text" | "thinking" | "toolCall";
text?: string;
thinking?: string;
toolCall?: { id: string; name: string; arguments: unknown };
}
/**
* Anthropic-specific provider options for thinking/reasoning.
* Based on: https://ai-sdk.dev/providers/ai-sdk-providers/anthropic#reasoning
*/
export interface AnthropicProviderOptions {
/** Enable thinking/reasoning with budget */
thinking?: { type: "enabled"; budgetTokens: number };
/** Effort level for Claude Opus 4.5 */
effort?: "high" | "medium" | "low";
}
/**
* Input parameters for the event adapter stream.
*/
export interface EventAdapterInput {
/** Language model to use */
model: LanguageModel;
/** System prompt */
system?: string;
/** Messages history (in AI SDK format) */
messages: Array<{
role: "user" | "assistant" | "tool";
content: string | unknown[];
}>;
/** Tools available to the model */
tools?: Record<string, ConvertedAiSdkTool>;
/** Temperature for generation */
temperature?: number;
/** Maximum output tokens */
maxTokens?: number;
/** Abort signal for cancellation */
abortSignal?: AbortSignal;
/** Top-p sampling parameter */
topP?: number;
/** Provider-specific options (e.g., Anthropic thinking/reasoning) */
providerOptions?: {
anthropic?: AnthropicProviderOptions;
};
}
/**
* Stream AI SDK responses as pi-agent compatible events.
*
* This is the main integration point between AI SDK and openclaw's event system.
* It wraps streamText() and yields events that match the pi-agent protocol.
*
* @param input - Stream input parameters
* @yields PiAgentEvent - Events compatible with pi-agent consumers
*/
export async function* streamWithPiAgentEvents(
input: EventAdapterInput,
): AsyncGenerator<PiAgentEvent, void, undefined> {
// Emit agent start
yield { type: "agent_start" };
const allMessages: PiAgentMessage[] = [];
const currentTurnToolResults: PiToolResultMessage[] = [];
let currentMessage: PiAgentMessage | null = null;
let accumulatedText = "";
let accumulatedReasoning = "";
// Track tool call inputs as they stream in
const toolCallInputs = new Map<string, { toolName: string; input: string }>();
try {
// Start streaming from AI SDK
// Build stream options with provider-specific settings
// eslint-disable-next-line @typescript-eslint/no-explicit-any
const streamOptions: any = {
model: input.model,
system: input.system,
messages: input.messages,
tools: input.tools,
temperature: input.temperature,
maxOutputTokens: input.maxTokens,
abortSignal: input.abortSignal,
topP: input.topP,
};
// Add provider options for thinking/reasoning if specified
if (input.providerOptions) {
streamOptions.providerOptions = input.providerOptions;
}
const stream = streamText(streamOptions);
// Emit turn start
yield { type: "turn_start" };
// Initialize assistant message
currentMessage = {
role: "assistant",
content: [],
timestamp: Date.now(),
};
// Emit message start
yield { type: "message_start", message: currentMessage };
// Process the full stream (streamText returns a stream object, not a promise)
for await (const event of stream.fullStream) {
switch (event.type) {
case "text-delta": {
// Accumulate text
accumulatedText += event.text;
// Update current message content
const textBlock = currentMessage.content.find(
(c): c is { type: "text"; text: string } => c.type === "text",
);
if (textBlock) {
textBlock.text = accumulatedText;
} else {
currentMessage.content.push({ type: "text", text: accumulatedText });
}
// Emit message update
yield {
type: "message_update",
message: currentMessage,
assistantMessageEvent: { type: "text", text: event.text },
};
break;
}
case "reasoning-delta": {
// Handle thinking/reasoning content
accumulatedReasoning += event.text;
const thinkingBlock = currentMessage.content.find(
(c): c is { type: "thinking"; thinking: string } => c.type === "thinking",
);
if (thinkingBlock) {
thinkingBlock.thinking = accumulatedReasoning;
} else {
currentMessage.content.push({ type: "thinking", thinking: accumulatedReasoning });
}
yield {
type: "message_update",
message: currentMessage,
assistantMessageEvent: { type: "thinking", thinking: event.text },
};
break;
}
case "tool-input-start": {
// Start tracking this tool call's input
toolCallInputs.set(event.id, { toolName: event.toolName, input: "" });
break;
}
case "tool-input-delta": {
// Accumulate tool input
const existing = toolCallInputs.get(event.id);
if (existing) {
existing.input += event.delta;
}
break;
}
case "tool-call": {
// Get the tool input (already parsed by AI SDK)
// The event has toolCallId, toolName, and either args or input
const toolCallId = event.toolCallId;
const toolName = event.toolName;
// AI SDK v6 uses 'input' but may also be available as 'args' in some cases
const toolInput = "args" in event ? event.args : "input" in event ? event.input : {};
// Add tool call to message content
const toolCallBlock = {
type: "toolCall" as const,
id: toolCallId,
name: toolName,
arguments: toolInput,
};
currentMessage.content.push(toolCallBlock);
// Emit tool execution start
yield {
type: "tool_execution_start",
toolCallId,
toolName,
args: toolInput,
};
// Also emit message update for the tool call
yield {
type: "message_update",
message: currentMessage,
assistantMessageEvent: {
type: "toolCall",
toolCall: { id: toolCallId, name: toolName, arguments: toolInput },
},
};
break;
}
case "tool-result": {
// Get the result (AI SDK v6 uses 'output' not 'result')
// eslint-disable-next-line @typescript-eslint/no-explicit-any
const ev = event as any;
const toolOutput = ev.output ?? ev.result ?? {};
// Create tool result message
const toolResult: PiToolResultMessage = {
role: "toolResult",
toolCallId: event.toolCallId,
toolName: event.toolName,
content: [
{
type: "text",
text: typeof toolOutput === "string" ? toolOutput : JSON.stringify(toolOutput),
},
],
isError: false,
details: toolOutput,
};
currentTurnToolResults.push(toolResult);
// Emit tool execution end
yield {
type: "tool_execution_end",
toolCallId: event.toolCallId,
toolName: event.toolName,
result: toolOutput,
isError: false,
};
break;
}
case "tool-error": {
// Handle tool errors
const errorOutput = "error" in event ? event.error : "Tool execution failed";
const toolResult: PiToolResultMessage = {
role: "toolResult",
toolCallId: event.toolCallId,
toolName: event.toolName,
content: [
{
type: "text",
text: typeof errorOutput === "string" ? errorOutput : JSON.stringify(errorOutput),
},
],
isError: true,
details: errorOutput,
};
currentTurnToolResults.push(toolResult);
yield {
type: "tool_execution_end",
toolCallId: event.toolCallId,
toolName: event.toolName,
result: errorOutput,
isError: true,
};
break;
}
case "start-step": {
// New step starting - this happens in multi-step tool loops
// For now, we don't have multi-step support enabled (maxSteps not set)
break;
}
case "finish-step": {
// Step finished - this happens in multi-step tool loops
break;
}
case "finish": {
// Stream finished
break;
}
case "error": {
// Handle error
console.error("[AI SDK Event Adapter] Stream error:", event.error);
break;
}
// Ignore other event types we don't need to translate
default:
break;
}
}
// End message
if (currentMessage) {
yield { type: "message_end", message: currentMessage };
allMessages.push(currentMessage);
}
// Emit turn end
yield {
type: "turn_end",
message: currentMessage ?? { role: "assistant", content: [] },
toolResults: currentTurnToolResults,
};
} catch (error) {
// Log error but still try to emit agent_end
console.error("[AI SDK Event Adapter] Stream error:", error);
}
// Emit agent end
yield { type: "agent_end", messages: allMessages };
}
/**
* Run a single LLM call and return pi-agent compatible events.
* Convenience wrapper for simple use cases.
*/
export async function collectPiAgentEvents(input: EventAdapterInput): Promise<PiAgentEvent[]> {
const events: PiAgentEvent[] = [];
for await (const event of streamWithPiAgentEvents(input)) {
events.push(event);
}
return events;
}
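A point worth noting about the protocol above: each `message_update` event carries the full accumulated message (the adapter rewrites the text block with `accumulatedText`), so consumers should take the latest snapshot rather than concatenate deltas. A hypothetical consumer sketch, with simplified event types:

```typescript
// Hypothetical downstream consumer: rebuild the assistant text from
// message_update snapshots. Each update carries the whole accumulated
// message, so we replace rather than append.
type Event =
  | { type: "message_update"; message: { content: Array<{ type: "text"; text: string }> } }
  | { type: "agent_end" };

function latestText(events: Event[]): string {
  let text = "";
  for (const ev of events) {
    if (ev.type === "message_update") {
      const block = ev.message.content.find((c) => c.type === "text");
      if (block) text = block.text; // snapshot, not a delta
    }
  }
  return text;
}

console.log(latestText([
  { type: "message_update", message: { content: [{ type: "text", text: "Hel" }] } },
  { type: "message_update", message: { content: [{ type: "text", text: "Hello" }] } },
  { type: "agent_end" },
])); // "Hello"
```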


@@ -1,56 +0,0 @@
/**
* AI SDK v6 integration for openclaw.
*
* This module provides an alternative LLM engine using Vercel's AI SDK,
* alongside the existing pi-agent implementation. Users can choose between
* engines via configuration.
*
* Fork-friendly design:
* - All AI SDK code lives in this separate `aisdk/` directory
* - Original pi-agent code remains untouched
* - Minimal integration points for easy upstream merges
*/
// Types
export type {
AiSdkConfig,
AiSdkMessage,
AiSdkMessageContent,
AiSdkStreamInput,
AiSdkTool,
DirectProviderId,
DirectProviderConfig,
GatewayConfig,
ModelRef,
ProviderMode,
ResolvedModel,
} from "./types.js";
// Provider management
export {
getDefaultConfig,
listAvailableProviders,
parseModelRef,
resolveModel,
validateConfig,
} from "./provider.js";
// Tool conversion
export type { ConvertedAiSdkTool, ToolExecutionContext, ToolResult } from "./tools.js";
export { convertPiToolToAiSdk, convertPiToolsToAiSdk, createAiSdkTools } from "./tools.js";
// Event adapter (pi-agent protocol compatibility)
export type {
AnthropicProviderOptions,
EventAdapterInput,
PiAgentEvent,
PiAgentMessage,
PiAssistantMessageEvent,
PiMessageContent,
PiToolResultMessage,
} from "./event-adapter.js";
export { collectPiAgentEvents, streamWithPiAgentEvents } from "./event-adapter.js";
// Agent runner (main entry point)
export type { AiSdkRunnerConfig, AiSdkRunResult } from "./run.js";
export { isAiSdkEngineAvailable, mapThinkLevelToAnthropicOptions, runAiSdkAgent } from "./run.js";


@@ -1,151 +0,0 @@
import { describe, expect, it } from "vitest";
import {
getDefaultConfig,
parseModelRef,
validateConfig,
listAvailableProviders,
} from "./provider.js";
describe("AI SDK Provider", () => {
describe("parseModelRef", () => {
it("parses valid model reference", () => {
const result = parseModelRef("anthropic/claude-sonnet-4");
expect(result.providerId).toBe("anthropic");
expect(result.modelId).toBe("claude-sonnet-4");
});
it("parses model ref with multiple slashes", () => {
const result = parseModelRef("openai/gpt-4o/preview");
expect(result.providerId).toBe("openai");
expect(result.modelId).toBe("gpt-4o/preview");
});
it("throws on invalid model reference without slash", () => {
expect(() => parseModelRef("claude-sonnet-4")).toThrow(
'Invalid model reference "claude-sonnet-4"',
);
});
});
describe("getDefaultConfig", () => {
it("returns gateway mode when AI_GATEWAY_API_KEY is set", () => {
const original = process.env.AI_GATEWAY_API_KEY;
try {
process.env.AI_GATEWAY_API_KEY = "test-key";
const config = getDefaultConfig();
expect(config.mode).toBe("gateway");
expect(config.gateway?.apiKey).toBe("test-key");
} finally {
if (original === undefined) {
delete process.env.AI_GATEWAY_API_KEY;
} else {
process.env.AI_GATEWAY_API_KEY = original;
}
}
});
it("returns direct mode when no gateway key", () => {
const original = process.env.AI_GATEWAY_API_KEY;
try {
delete process.env.AI_GATEWAY_API_KEY;
const config = getDefaultConfig();
expect(config.mode).toBe("direct");
} finally {
if (original !== undefined) {
process.env.AI_GATEWAY_API_KEY = original;
}
}
});
it("includes default model reference", () => {
const config = getDefaultConfig();
expect(config.defaultModel).toBeDefined();
expect(config.defaultModel).toContain("/");
});
});
describe("validateConfig", () => {
it("returns null for valid gateway config", () => {
const result = validateConfig({
mode: "gateway",
gateway: { apiKey: "test-key" },
});
expect(result).toBeNull();
});
it("returns error for gateway mode without key", () => {
const original = process.env.AI_GATEWAY_API_KEY;
try {
delete process.env.AI_GATEWAY_API_KEY;
const result = validateConfig({
mode: "gateway",
});
expect(result).toContain("AI Gateway");
} finally {
if (original !== undefined) {
process.env.AI_GATEWAY_API_KEY = original;
}
}
});
it("returns error for direct mode without any provider keys", () => {
// Clear all provider env vars temporarily
const saved: Record<string, string | undefined> = {};
const providerVars = [
"ANTHROPIC_API_KEY",
"OPENAI_API_KEY",
"GOOGLE_GENERATIVE_AI_API_KEY",
"GOOGLE_API_KEY",
"GROQ_API_KEY",
"MISTRAL_API_KEY",
"XAI_API_KEY",
"OPENROUTER_API_KEY",
"AZURE_API_KEY",
];
for (const v of providerVars) {
saved[v] = process.env[v];
delete process.env[v];
}
try {
const result = validateConfig({
mode: "direct",
providers: {},
});
expect(result).toContain("at least one provider");
} finally {
for (const v of providerVars) {
if (saved[v] !== undefined) {
process.env[v] = saved[v];
}
}
}
});
});
describe("listAvailableProviders", () => {
it("returns all providers for gateway mode", () => {
const providers = listAvailableProviders({ mode: "gateway" });
expect(providers).toContain("anthropic");
expect(providers).toContain("openai");
expect(providers).toContain("google");
});
it("returns only providers with keys for direct mode", () => {
const original = process.env.ANTHROPIC_API_KEY;
try {
process.env.ANTHROPIC_API_KEY = "test-key";
const providers = listAvailableProviders({
mode: "direct",
providers: {},
});
expect(providers).toContain("anthropic");
} finally {
if (original === undefined) {
delete process.env.ANTHROPIC_API_KEY;
} else {
process.env.ANTHROPIC_API_KEY = original;
}
}
});
});
});


@@ -1,262 +0,0 @@
/**
* AI SDK v6 provider management for openclaw.
*
* Supports two modes:
* - "gateway": Vercel AI Gateway for unified access to all providers
* - "direct": Provider-specific SDK packages for full control
*
* This module is fork-friendly: all AI SDK code lives in this separate
* directory to avoid merge conflicts when pulling upstream updates.
*/
import type { LanguageModel } from "ai";
import type {
AiSdkConfig,
DirectProviderId,
DirectProviderConfig,
GatewayConfig,
ModelRef,
ResolvedModel,
} from "./types.js";
// Lazy-loaded provider factories to avoid importing unused providers
type ProviderFactory = (config: DirectProviderConfig) => {
languageModel: (modelId: string) => LanguageModel;
};
const providerFactories: Record<DirectProviderId, () => Promise<ProviderFactory>> = {
anthropic: async () => {
const { createAnthropic } = await import("@ai-sdk/anthropic");
return (config) => createAnthropic({ apiKey: config.apiKey, baseURL: config.baseUrl });
},
openai: async () => {
const { createOpenAI } = await import("@ai-sdk/openai");
return (config) => createOpenAI({ apiKey: config.apiKey, baseURL: config.baseUrl });
},
google: async () => {
const { createGoogleGenerativeAI } = await import("@ai-sdk/google");
return (config) => createGoogleGenerativeAI({ apiKey: config.apiKey, baseURL: config.baseUrl });
},
"amazon-bedrock": async () => {
const { createAmazonBedrock } = await import("@ai-sdk/amazon-bedrock");
return (config) =>
createAmazonBedrock(config.options as Parameters<typeof createAmazonBedrock>[0]);
},
azure: async () => {
const { createAzure } = await import("@ai-sdk/azure");
return (config) => createAzure({ apiKey: config.apiKey, baseURL: config.baseUrl });
},
groq: async () => {
const { createGroq } = await import("@ai-sdk/groq");
return (config) => createGroq({ apiKey: config.apiKey, baseURL: config.baseUrl });
},
mistral: async () => {
const { createMistral } = await import("@ai-sdk/mistral");
return (config) => createMistral({ apiKey: config.apiKey, baseURL: config.baseUrl });
},
xai: async () => {
const { createXai } = await import("@ai-sdk/xai");
return (config) => createXai({ apiKey: config.apiKey, baseURL: config.baseUrl });
},
openrouter: async () => {
const { createOpenRouter } = await import("@openrouter/ai-sdk-provider");
return (config) => createOpenRouter({ apiKey: config.apiKey, baseURL: config.baseUrl });
},
"openai-compatible": async () => {
const { createOpenAICompatible } = await import("@ai-sdk/openai-compatible");
return (config) => createOpenAICompatible({ baseURL: config.baseUrl ?? "", name: "custom" });
},
};
// Cache for initialized providers
const providerCache = new Map<string, Awaited<ReturnType<ProviderFactory>>>();
/**
* Parse a model reference into provider and model IDs.
* @example parseModelRef("anthropic/claude-sonnet-4") => { providerId: "anthropic", modelId: "claude-sonnet-4" }
*/
export function parseModelRef(ref: string): { providerId: string; modelId: string } {
const slashIndex = ref.indexOf("/");
if (slashIndex === -1) {
throw new Error(`Invalid model reference "${ref}": expected format "provider/model-id"`);
}
return {
providerId: ref.slice(0, slashIndex),
modelId: ref.slice(slashIndex + 1),
};
}
/**
* Get a language model using AI Gateway.
* AI Gateway uses the format "provider/model-id" directly.
*/
async function getGatewayModel(modelRef: string, config: GatewayConfig): Promise<ResolvedModel> {
const { createGateway } = await import("@ai-sdk/gateway");
const { providerId, modelId } = parseModelRef(modelRef);
const gateway = createGateway({
apiKey: config.apiKey ?? process.env.AI_GATEWAY_API_KEY,
baseURL: config.baseUrl,
});
const model = gateway.languageModel(modelRef);
return {
model,
providerId,
modelId,
ref: modelRef as ModelRef,
};
}
/**
* Get a language model using direct provider SDK.
*/
async function getDirectModel(modelRef: string, config: AiSdkConfig): Promise<ResolvedModel> {
const { providerId, modelId } = parseModelRef(modelRef);
// Check if provider is supported
const factoryLoader = providerFactories[providerId as DirectProviderId];
if (!factoryLoader) {
throw new Error(
`Unsupported provider "${providerId}". ` +
`Supported providers: ${Object.keys(providerFactories).join(", ")}. ` +
`Consider using mode: "gateway" for access to more providers.`,
);
}
// Get provider config
const providerConfig = config.providers?.[providerId as DirectProviderId] ?? {};
// Try to get API key from environment if not configured
const apiKey = providerConfig.apiKey ?? getEnvApiKey(providerId);
const configWithKey = { ...providerConfig, apiKey };
// Get or create cached provider instance
const cacheKey = `${providerId}:${JSON.stringify(configWithKey)}`;
let provider = providerCache.get(cacheKey);
if (!provider) {
const factory = await factoryLoader();
provider = factory(configWithKey);
providerCache.set(cacheKey, provider);
}
const model = provider.languageModel(modelId);
return {
model,
providerId,
modelId,
ref: modelRef as ModelRef,
};
}
/**
* Get API key from environment variables for a provider.
*/
function getEnvApiKey(providerId: string): string | undefined {
const envVarMap: Record<string, string[]> = {
anthropic: ["ANTHROPIC_API_KEY"],
openai: ["OPENAI_API_KEY"],
google: ["GOOGLE_GENERATIVE_AI_API_KEY", "GOOGLE_API_KEY"],
groq: ["GROQ_API_KEY"],
mistral: ["MISTRAL_API_KEY"],
xai: ["XAI_API_KEY"],
openrouter: ["OPENROUTER_API_KEY"],
azure: ["AZURE_API_KEY"],
};
const vars = envVarMap[providerId];
if (!vars) {
return undefined;
}
for (const varName of vars) {
const value = process.env[varName];
if (value) {
return value;
}
}
return undefined;
}
/**
* Resolve a model reference to an AI SDK language model.
*
* @param modelRef - Model reference in format "provider/model-id"
* @param config - AI SDK configuration
* @returns Resolved model ready for use with streamText/generateText
*
* @example
* const model = await resolveModel("anthropic/claude-sonnet-4", { mode: "gateway" });
* const result = await streamText({ model: model.model, ... });
*/
export async function resolveModel(modelRef: string, config: AiSdkConfig): Promise<ResolvedModel> {
if (config.mode === "gateway") {
return getGatewayModel(modelRef, config.gateway ?? {});
}
return getDirectModel(modelRef, config);
}
/**
* Get default AI SDK configuration.
* Reads from environment variables and returns sensible defaults.
*/
export function getDefaultConfig(): AiSdkConfig {
// Check for AI Gateway key first (simplest setup)
if (process.env.AI_GATEWAY_API_KEY) {
return {
mode: "gateway",
gateway: { apiKey: process.env.AI_GATEWAY_API_KEY },
defaultModel: "anthropic/claude-sonnet-4" as ModelRef,
};
}
// Fall back to direct mode, auto-detecting available providers
return {
mode: "direct",
providers: {},
defaultModel: "anthropic/claude-sonnet-4" as ModelRef,
};
}
/**
* Validate that the configuration is usable.
* Returns an error message if invalid, or null if valid.
*/
export function validateConfig(config: AiSdkConfig): string | null {
if (config.mode === "gateway") {
if (!config.gateway?.apiKey && !process.env.AI_GATEWAY_API_KEY) {
return "AI Gateway mode requires AI_GATEWAY_API_KEY environment variable or gateway.apiKey config";
}
return null;
}
// Direct mode: check if at least one provider has credentials
const hasAnyKey = Object.keys(providerFactories).some(
(provider) =>
config.providers?.[provider as DirectProviderId]?.apiKey || getEnvApiKey(provider),
);
if (!hasAnyKey) {
return "Direct mode requires at least one provider API key (e.g., ANTHROPIC_API_KEY, OPENAI_API_KEY)";
}
return null;
}
/**
* List available providers based on configuration and environment.
*/
export function listAvailableProviders(config: AiSdkConfig): string[] {
if (config.mode === "gateway") {
// Gateway mode supports all providers through the gateway
return ["anthropic", "openai", "google", "groq", "mistral", "xai", "amazon-bedrock", "azure"];
}
// Direct mode: only providers with API keys available
return Object.keys(providerFactories).filter(
(provider) =>
config.providers?.[provider as DirectProviderId]?.apiKey || getEnvApiKey(provider),
);
}
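The gateway/direct validation split above has two independent decision paths. The sketch below re-implements them against a simplified, hypothetical config shape (the real `AiSdkConfig` also consults environment variables such as `AI_GATEWAY_API_KEY`, which this sketch omits):

```typescript
// Simplified sketch of the validateConfig() decision paths.
// SketchConfig is illustrative only — not the real AiSdkConfig shape.
type SketchConfig =
  | { mode: "gateway"; gatewayApiKey?: string }
  | { mode: "direct"; providerApiKeys: Record<string, string> };

function validateSketch(config: SketchConfig): string | null {
  if (config.mode === "gateway") {
    // Gateway mode only needs the single gateway key.
    return config.gatewayApiKey ? null : "AI Gateway mode requires an API key";
  }
  // Direct mode needs at least one non-empty provider credential.
  const hasAnyKey = Object.values(config.providerApiKeys).some((key) => key.length > 0);
  return hasAnyKey ? null : "Direct mode requires at least one provider API key";
}
```

Callers treat `null` as "valid", matching the convention of `validateConfig()` above.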


@@ -1,46 +0,0 @@
import { describe, expect, it } from "vitest";
import { mapThinkLevelToAnthropicOptions } from "./run.js";
describe("AI SDK Run", () => {
describe("mapThinkLevelToAnthropicOptions", () => {
it("returns empty object for non-anthropic provider", () => {
expect(mapThinkLevelToAnthropicOptions("high", "openai")).toEqual({});
expect(mapThinkLevelToAnthropicOptions("high", "google")).toEqual({});
});
it("returns empty object for off thinking level", () => {
expect(mapThinkLevelToAnthropicOptions("off", "anthropic")).toEqual({});
expect(mapThinkLevelToAnthropicOptions(undefined, "anthropic")).toEqual({});
});
it("maps minimal to 2000 budget tokens", () => {
const result = mapThinkLevelToAnthropicOptions("minimal", "anthropic");
expect(result.thinking).toEqual({ type: "enabled", budgetTokens: 2000 });
expect(result.effort).toBe("low");
});
it("maps low to 4000 budget tokens", () => {
const result = mapThinkLevelToAnthropicOptions("low", "anthropic");
expect(result.thinking).toEqual({ type: "enabled", budgetTokens: 4000 });
expect(result.effort).toBe("low");
});
it("maps medium to 8000 budget tokens", () => {
const result = mapThinkLevelToAnthropicOptions("medium", "anthropic");
expect(result.thinking).toEqual({ type: "enabled", budgetTokens: 8000 });
expect(result.effort).toBe("medium");
});
it("maps high to 16000 budget tokens with high effort", () => {
const result = mapThinkLevelToAnthropicOptions("high", "anthropic");
expect(result.thinking).toEqual({ type: "enabled", budgetTokens: 16000 });
expect(result.effort).toBe("high");
});
it("maps xhigh to 32000 budget tokens with high effort", () => {
const result = mapThinkLevelToAnthropicOptions("xhigh", "anthropic");
expect(result.thinking).toEqual({ type: "enabled", budgetTokens: 32000 });
expect(result.effort).toBe("high");
});
});
});


@@ -1,360 +0,0 @@
/**
* AI SDK v6 agent runner for openclaw.
*
* This module provides an AI SDK-based implementation that can run
* in place of the pi-agent runner. It uses the same interface and
* emits compatible events.
*
* Fork-friendly: parallel implementation, doesn't modify pi-agent code.
*/
import type { ThinkLevel } from "../../auto-reply/thinking.js";
import { resolveUserPath } from "../../utils.js";
import type { RunEmbeddedPiAgentParams } from "../pi-embedded-runner/run/params.js";
import type { EmbeddedPiRunResult, EmbeddedPiAgentMeta } from "../pi-embedded-runner/types.js";
import {
resolveSkillsPromptForRun,
applySkillEnvOverrides,
applySkillEnvOverridesFromSnapshot,
loadWorkspaceSkillEntries,
} from "../skills.js";
import { streamWithPiAgentEvents, type EventAdapterInput } from "./event-adapter.js";
import { resolveModel, getDefaultConfig, validateConfig } from "./provider.js";
import { createAiSdkTools, type ToolExecutionContext, type ConvertedAiSdkTool } from "./tools.js";
import type { AiSdkConfig, ResolvedModel } from "./types.js";
/**
* Configuration for the AI SDK agent runner.
*/
export interface AiSdkRunnerConfig {
/** AI SDK configuration */
aiSdkConfig?: AiSdkConfig;
/** Model reference (e.g., "anthropic/claude-sonnet-4") */
modelRef?: string;
}
/**
* Map OpenClaw ThinkLevel to AI SDK Anthropic thinking options.
* Based on: https://ai-sdk.dev/providers/ai-sdk-providers/anthropic#reasoning
*/
export function mapThinkLevelToAnthropicOptions(
thinkLevel?: ThinkLevel,
provider?: string,
): { thinking?: { type: "enabled"; budgetTokens: number }; effort?: "high" | "medium" | "low" } {
// Only apply to Anthropic provider
if (provider !== "anthropic") {
return {};
}
if (!thinkLevel || thinkLevel === "off") {
return {};
}
// Map thinking levels to budget tokens
const budgetMap: Record<Exclude<ThinkLevel, "off">, number> = {
minimal: 2000,
low: 4000,
medium: 8000,
high: 16000,
xhigh: 32000,
};
const budgetTokens = budgetMap[thinkLevel] ?? 4000;
const options: ReturnType<typeof mapThinkLevelToAnthropicOptions> = {
thinking: { type: "enabled", budgetTokens },
};
// Effort tracks the thinking level: xhigh and high both map to "high"
// (xhigh is relevant for Claude Opus 4.5), medium to "medium", else "low".
if (thinkLevel === "xhigh" || thinkLevel === "high") {
options.effort = "high";
} else if (thinkLevel === "medium") {
options.effort = "medium";
} else {
options.effort = "low";
}
return options;
}
/**
* Result from the AI SDK agent run.
* Compatible with EmbeddedPiRunResult.
*/
export type AiSdkRunResult = EmbeddedPiRunResult;
/**
* Run the AI SDK agent with parameters matching runEmbeddedPiAgent.
*
* This is the main entry point for running the AI SDK engine.
* It aims to be a drop-in replacement for runEmbeddedPiAgent.
*
* @param params - Run parameters (compatible with pi-agent params)
* @param config - AI SDK specific configuration
* @returns Run result (compatible with pi-agent result)
*/
export async function runAiSdkAgent(
params: RunEmbeddedPiAgentParams,
config?: AiSdkRunnerConfig,
): Promise<AiSdkRunResult> {
const started = Date.now();
// Resolve AI SDK configuration
const aiSdkConfig = config?.aiSdkConfig ?? getDefaultConfig();
// Determine model reference
const provider = params.provider ?? "anthropic";
const modelId = params.model ?? "claude-sonnet-4";
const modelRef = config?.modelRef ?? `${provider}/${modelId}`;
// Resolve the model
let resolvedModel: ResolvedModel;
try {
resolvedModel = await resolveModel(modelRef, aiSdkConfig);
} catch (error) {
const message = error instanceof Error ? error.message : String(error);
return {
payloads: [{ text: `Error resolving model: ${message}`, isError: true }],
meta: {
durationMs: Date.now() - started,
error: { kind: "context_overflow", message },
},
};
}
// Resolve workspace directory
const effectiveWorkspace = resolveUserPath(params.workspaceDir);
// === Skills Integration ===
// Apply skill environment overrides and build skills prompt
let restoreSkillEnv: (() => void) | undefined;
let skillsPrompt = "";
try {
const shouldLoadSkillEntries = !params.skillsSnapshot || !params.skillsSnapshot.resolvedSkills;
const skillEntries = shouldLoadSkillEntries
? loadWorkspaceSkillEntries(effectiveWorkspace)
: [];
// Apply environment overrides from skills
restoreSkillEnv = params.skillsSnapshot
? applySkillEnvOverridesFromSnapshot({
snapshot: params.skillsSnapshot,
config: params.config,
})
: applySkillEnvOverrides({
skills: skillEntries ?? [],
config: params.config,
});
// Resolve skills prompt
skillsPrompt = resolveSkillsPromptForRun({
skillsSnapshot: params.skillsSnapshot,
entries: shouldLoadSkillEntries ? skillEntries : undefined,
config: params.config,
workspaceDir: effectiveWorkspace,
});
} catch (error) {
console.error("[AI SDK Runner] Error loading skills:", error);
// Continue without skills if loading fails
}
// Create tool execution context
const toolContext: ToolExecutionContext = {
sessionKey: params.sessionKey,
workspaceDir: effectiveWorkspace,
abortSignal: params.abortSignal,
messageId: params.runId,
};
// Create tools if not disabled
let tools: Record<string, ConvertedAiSdkTool> | undefined;
if (!params.disableTools) {
try {
tools = await createAiSdkTools(
{
workspaceDir: effectiveWorkspace,
sessionKey: params.sessionKey,
config: params.config,
abortSignal: params.abortSignal,
messageProvider: params.messageProvider,
agentAccountId: params.agentAccountId,
messageTo: params.messageTo,
messageThreadId: params.messageThreadId,
groupId: params.groupId,
groupChannel: params.groupChannel,
groupSpace: params.groupSpace,
spawnedBy: params.spawnedBy,
senderId: params.senderId,
senderName: params.senderName,
senderUsername: params.senderUsername,
senderE164: params.senderE164,
modelProvider: provider,
modelId,
currentChannelId: params.currentChannelId,
currentThreadTs: params.currentThreadTs,
replyToMode: params.replyToMode,
hasRepliedRef: params.hasRepliedRef,
},
toolContext,
);
} catch (error) {
console.error("[AI SDK Runner] Error creating tools:", error);
// Continue without tools if creation fails
}
}
// === Build System Prompt ===
// Combine extra system prompt, skills prompt, and base prompt
const systemParts: string[] = [];
if (params.extraSystemPrompt) {
systemParts.push(params.extraSystemPrompt);
}
if (skillsPrompt) {
systemParts.push(skillsPrompt);
}
// Note: The main system prompt should be built by the caller (e.g., buildEmbeddedSystemPrompt)
// For now, we just pass through what we receive
const systemPrompt = systemParts.length > 0 ? systemParts.join("\n\n") : undefined;
// Build messages - for now, just the user prompt
// TODO: Load session history from sessionFile when implementing full session support
const messages: EventAdapterInput["messages"] = [{ role: "user", content: params.prompt }];
// === Thinking/Reasoning Options (Anthropic-specific) ===
// Map OpenClaw thinkLevel to AI SDK Anthropic provider options
const anthropicOptions = mapThinkLevelToAnthropicOptions(params.thinkLevel, provider);
// Create stream input
const streamInput: EventAdapterInput = {
model: resolvedModel.model,
system: systemPrompt,
messages,
tools,
temperature: 0.7,
maxTokens: 4096,
abortSignal: params.abortSignal,
// Pass provider-specific options for thinking/reasoning
providerOptions:
anthropicOptions.thinking || anthropicOptions.effort
? {
anthropic: anthropicOptions,
}
: undefined,
};
// Collect payloads from the stream
const payloads: AiSdkRunResult["payloads"] = [];
let accumulatedText = "";
let agentMeta: EmbeddedPiAgentMeta | undefined;
let aborted = false;
try {
// Stream events and process them
for await (const event of streamWithPiAgentEvents(streamInput)) {
// Call event callback if provided
if (params.onAgentEvent) {
params.onAgentEvent({
stream: "agent",
data: event as Record<string, unknown>,
});
}
// Process events
switch (event.type) {
case "message_start":
if (params.onAssistantMessageStart) {
await params.onAssistantMessageStart();
}
break;
case "message_update":
// Extract text from the event
if (event.assistantMessageEvent.type === "text" && event.assistantMessageEvent.text) {
accumulatedText += event.assistantMessageEvent.text;
if (params.onPartialReply) {
await params.onPartialReply({ text: event.assistantMessageEvent.text });
}
}
if (
event.assistantMessageEvent.type === "thinking" &&
event.assistantMessageEvent.thinking
) {
if (params.onReasoningStream) {
await params.onReasoningStream({ text: event.assistantMessageEvent.thinking });
}
}
break;
case "message_end":
// Block reply if callback provided
if (params.onBlockReply && accumulatedText) {
await params.onBlockReply({ text: accumulatedText });
}
if (params.onBlockReplyFlush) {
await params.onBlockReplyFlush();
}
break;
case "tool_execution_end":
// Report tool result if callbacks provided
if (params.onToolResult && params.shouldEmitToolResult?.()) {
const resultText =
typeof event.result === "string" ? event.result : JSON.stringify(event.result);
await params.onToolResult({ text: resultText });
}
break;
case "agent_end":
// Build agent meta
agentMeta = {
sessionId: params.sessionId,
provider: resolvedModel.providerId,
model: resolvedModel.modelId,
// TODO: Get actual usage from AI SDK response
usage: { input: 0, output: 0, total: 0 },
};
break;
}
}
} catch (error) {
if (error instanceof Error && error.name === "AbortError") {
aborted = true;
} else {
const message = error instanceof Error ? error.message : String(error);
payloads.push({ text: `Error: ${message}`, isError: true });
}
}
// Add final text as payload
if (accumulatedText) {
payloads.push({ text: accumulatedText });
}
// Restore skill environment overrides
if (restoreSkillEnv) {
try {
restoreSkillEnv();
} catch (error) {
console.error("[AI SDK Runner] Error restoring skill env:", error);
}
}
return {
payloads: payloads.length > 0 ? payloads : undefined,
meta: {
durationMs: Date.now() - started,
agentMeta,
aborted,
},
};
}
/**
* Check if AI SDK engine is available (has required configuration).
*/
export function isAiSdkEngineAvailable(config?: AiSdkConfig): boolean {
const cfg = config ?? getDefaultConfig();
return validateConfig(cfg) === null;
}


@@ -1,180 +0,0 @@
/**
* AI SDK v6 tool converter for openclaw.
*
* This module converts existing pi-agent tools to AI SDK format.
* The conversion is done at runtime to avoid duplicating tool logic.
*
* Fork-friendly: uses existing pi-tools without modification.
*/
import { tool } from "ai";
import { jsonSchema } from "ai";
import type { AnyAgentTool } from "../pi-tools.types.js";
/**
* Context passed to tool execution.
* Mirrors the pi-agent tool context for compatibility.
*/
export interface ToolExecutionContext {
/** Session key for the current agent session */
sessionKey?: string;
/** Workspace directory */
workspaceDir?: string;
/** Abort signal for cancellation */
abortSignal?: AbortSignal;
/** Current message ID */
messageId?: string;
}
/**
* Result from tool execution.
*/
export interface ToolResult {
/** Human-readable title/summary */
title?: string;
/** Full output text */
output: string;
/** Metadata about the execution */
metadata?: Record<string, unknown>;
/** Whether output was truncated */
truncated?: boolean;
/** Error message if failed */
error?: string;
}
/**
* Convert a TypeBox schema to JSON Schema format.
* TypeBox schemas are already JSON Schema compatible.
*/
function typeBoxToJsonSchema(schema: unknown): Record<string, unknown> {
// TypeBox schemas are already JSON Schema compatible
// Just ensure it's a valid object and return it
if (typeof schema === "object" && schema !== null) {
const s = schema as Record<string, unknown>;
return {
type: s.type ?? "object",
properties: s.properties ?? {},
required: s.required ?? [],
description: s.description,
};
}
return { type: "object", properties: {} };
}
/**
* Extract text content from AgentToolResult.
*/
function extractTextFromToolResult(result: {
content?: Array<{ type: string; text?: string }>;
details?: unknown;
}): string {
if (!result.content || !Array.isArray(result.content)) {
return JSON.stringify(result.details ?? result);
}
const textParts = result.content
.filter(
(c): c is { type: "text"; text: string } => c.type === "text" && typeof c.text === "string",
)
.map((c) => c.text);
return textParts.join("\n") || JSON.stringify(result.details ?? {});
}
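Tool results arrive as mixed content blocks, and the extraction above keeps only the `text` blocks, joins them with newlines, and falls back to JSON-serialized details. A standalone mirror of that behavior:

```typescript
// Standalone sketch of extractTextFromToolResult(): keep text blocks,
// join with newlines, fall back to JSON details when no text is present.
type ContentBlock = { type: string; text?: string };

function extractText(result: { content?: ContentBlock[]; details?: unknown }): string {
  if (!result.content || !Array.isArray(result.content)) {
    // No content array at all: serialize whatever detail we have.
    return JSON.stringify(result.details ?? result);
  }
  const textParts = result.content
    .filter(
      (c): c is { type: "text"; text: string } => c.type === "text" && typeof c.text === "string",
    )
    .map((c) => c.text);
  return textParts.join("\n") || JSON.stringify(result.details ?? {});
}
```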
/** AI SDK tool type alias for converted tools */
// eslint-disable-next-line @typescript-eslint/no-explicit-any
export type ConvertedAiSdkTool = ReturnType<typeof tool<any, any>>;
/**
* Convert a single pi-agent tool to AI SDK format.
*
* @param piTool - The pi-agent tool to convert
* @param context - Execution context for the tool
* @returns AI SDK compatible tool
*/
export function convertPiToolToAiSdk(
piTool: AnyAgentTool,
context: ToolExecutionContext,
): ConvertedAiSdkTool {
// Pi-agent tools have `parameters` (TypeBox schema)
const schema = typeBoxToJsonSchema(piTool.parameters);
return tool({
description: piTool.description ?? `Tool: ${piTool.name}`,
inputSchema: jsonSchema(schema),
execute: async (args: Record<string, unknown>): Promise<ToolResult> => {
try {
// Generate a unique tool call ID for this execution
const toolCallId = `aisdk_${Date.now()}_${Math.random().toString(36).slice(2, 8)}`;
// Call the pi-agent tool's execute function
// Signature: execute(toolCallId, params, signal?, onUpdate?)
const result = await piTool.execute(
toolCallId,
args,
context.abortSignal,
undefined, // onUpdate callback not used for now
);
// AgentToolResult has: { content: (TextContent | ImageContent)[], details: T }
const output = extractTextFromToolResult(result);
return {
output,
metadata: result.details as Record<string, unknown> | undefined,
};
} catch (error) {
const message = error instanceof Error ? error.message : String(error);
return {
output: `Error: ${message}`,
error: message,
};
}
},
});
}
/**
* Convert multiple pi-agent tools to AI SDK format.
*
* @param piTools - Array of pi-agent tools to convert
* @param context - Execution context for the tools
* @returns Record of tool name to AI SDK tool
*/
export function convertPiToolsToAiSdk(
piTools: AnyAgentTool[],
context: ToolExecutionContext,
): Record<string, ConvertedAiSdkTool> {
const result: Record<string, ConvertedAiSdkTool> = {};
for (const piTool of piTools) {
if (!piTool.name) {
continue;
}
result[piTool.name] = convertPiToolToAiSdk(piTool, context);
}
return result;
}
/**
* Create AI SDK tools from openclaw's tool creation function.
*
* This is the main entry point for tool creation in the AI SDK engine.
* It reuses the existing createOpenClawCodingTools() function and converts
* the result to AI SDK format.
*
* @param options - Options passed to createOpenClawCodingTools
* @param context - Execution context for the tools
* @returns Record of tool name to AI SDK tool
*/
export async function createAiSdkTools(
options: Parameters<typeof import("../pi-tools.js").createOpenClawCodingTools>[0],
context: ToolExecutionContext,
): Promise<Record<string, ConvertedAiSdkTool>> {
// Dynamically import to avoid circular dependencies
const { createOpenClawCodingTools } = await import("../pi-tools.js");
// Create pi-agent tools using existing function
const piTools = createOpenClawCodingTools(options);
// Convert to AI SDK format
return convertPiToolsToAiSdk(piTools, context);
}


@@ -1,133 +0,0 @@
/**
* AI SDK v6 integration types for openclaw.
* This module defines the core types used by the AI SDK engine.
*/
import type { LanguageModel } from "ai";
/**
* Provider mode determines how models are accessed:
* - "gateway": Use Vercel AI Gateway for unified access to all providers
* - "direct": Use provider-specific SDK packages directly
*/
export type ProviderMode = "gateway" | "direct";
/**
* Supported AI SDK providers for direct mode.
*/
export type DirectProviderId =
| "anthropic"
| "openai"
| "google"
| "amazon-bedrock"
| "azure"
| "groq"
| "mistral"
| "xai"
| "openrouter"
| "openai-compatible";
/**
* Model reference in the format "provider/model-id".
* Examples: "anthropic/claude-sonnet-4", "openai/gpt-4o"
*/
export type ModelRef = `${string}/${string}`;
/**
* Provider configuration for direct mode.
*/
export interface DirectProviderConfig {
/** API key for the provider */
apiKey?: string;
/** Base URL override for custom endpoints */
baseUrl?: string;
/** Additional provider-specific options */
options?: Record<string, unknown>;
}
/**
* AI Gateway configuration.
*/
export interface GatewayConfig {
/** AI Gateway API key */
apiKey?: string;
/** Gateway base URL (defaults to Vercel AI Gateway) */
baseUrl?: string;
}
/**
* AI SDK engine configuration.
*/
export interface AiSdkConfig {
/** Provider mode: "gateway" or "direct" */
mode: ProviderMode;
/** AI Gateway configuration (when mode is "gateway") */
gateway?: GatewayConfig;
/** Direct provider configurations (when mode is "direct") */
providers?: Partial<Record<DirectProviderId, DirectProviderConfig>>;
/** Default model to use if not specified */
defaultModel?: ModelRef;
}
/**
* Resolved model ready for use with AI SDK.
*/
export interface ResolvedModel {
/** The AI SDK language model instance */
model: LanguageModel;
/** Provider ID */
providerId: string;
/** Model ID */
modelId: string;
/** Full model reference */
ref: ModelRef;
}
/**
* AI SDK stream input parameters.
* Matches the interface expected by streamText().
*/
export interface AiSdkStreamInput {
/** Resolved model to use */
model: ResolvedModel;
/** System prompt(s) */
system?: string | string[];
/** Message history */
messages: AiSdkMessage[];
/** Tools available to the model */
tools?: Record<string, AiSdkTool>;
/** Temperature for generation */
temperature?: number;
/** Maximum output tokens */
maxTokens?: number;
/** Abort signal for cancellation */
abortSignal?: AbortSignal;
/** Top-p sampling parameter */
topP?: number;
}
/**
* AI SDK message format.
*/
export interface AiSdkMessage {
role: "system" | "user" | "assistant" | "tool";
content: string | AiSdkMessageContent[];
}
/**
* AI SDK message content block.
*/
export type AiSdkMessageContent =
| { type: "text"; text: string }
| { type: "image"; image: string | Uint8Array; mimeType?: string }
| { type: "tool-call"; toolCallId: string; toolName: string; args: unknown }
| { type: "tool-result"; toolCallId: string; toolName: string; result: unknown };
/**
* AI SDK tool definition.
*/
export interface AiSdkTool {
description: string;
parameters: unknown; // JSON Schema
execute?: (args: unknown) => Promise<unknown>;
}


@@ -1,25 +0,0 @@
export type AnnounceIdFromChildRunParams = {
childSessionKey: string;
childRunId: string;
};
export function buildAnnounceIdFromChildRun(params: AnnounceIdFromChildRunParams): string {
return `v1:${params.childSessionKey}:${params.childRunId}`;
}
export function buildAnnounceIdempotencyKey(announceId: string): string {
return `announce:${announceId}`;
}
export function resolveQueueAnnounceId(params: {
announceId?: string;
sessionKey: string;
enqueuedAt: number;
}): string {
const announceId = params.announceId?.trim();
if (announceId) {
return announceId;
}
// Backward-compatible fallback for queue items that predate announceId.
return `legacy:${params.sessionKey}:${params.enqueuedAt}`;
}
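These helpers are pure string builders, so their contract is easy to pin down. The sketch below mirrors their behavior to show the id shapes callers can rely on, including the legacy fallback for queue items that predate `announceId`:

```typescript
// Mirror of the announce-id helpers above (pure functions, copied behavior).
function announceIdFromChildRun(childSessionKey: string, childRunId: string): string {
  return `v1:${childSessionKey}:${childRunId}`;
}

function queueAnnounceId(sessionKey: string, enqueuedAt: number, announceId?: string): string {
  const trimmed = announceId?.trim();
  // Pre-announceId queue items fall back to a deterministic legacy id.
  return trimmed ? trimmed : `legacy:${sessionKey}:${enqueuedAt}`;
}
```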


@@ -1,185 +0,0 @@
import crypto from "node:crypto";
import path from "node:path";
import type { AgentMessage, StreamFn } from "@mariozechner/pi-agent-core";
import type { Api, Model } from "@mariozechner/pi-ai";
import { resolveStateDir } from "../config/paths.js";
import { createSubsystemLogger } from "../logging/subsystem.js";
import { resolveUserPath } from "../utils.js";
import { parseBooleanValue } from "../utils/boolean.js";
import { safeJsonStringify } from "../utils/safe-json.js";
import { getQueuedFileWriter, type QueuedFileWriter } from "./queued-file-writer.js";
type PayloadLogStage = "request" | "usage";
type PayloadLogEvent = {
ts: string;
stage: PayloadLogStage;
runId?: string;
sessionId?: string;
sessionKey?: string;
provider?: string;
modelId?: string;
modelApi?: string | null;
workspaceDir?: string;
payload?: unknown;
usage?: Record<string, unknown>;
error?: string;
payloadDigest?: string;
};
type PayloadLogConfig = {
enabled: boolean;
filePath: string;
};
type PayloadLogWriter = QueuedFileWriter;
const writers = new Map<string, PayloadLogWriter>();
const log = createSubsystemLogger("agent/anthropic-payload");
function resolvePayloadLogConfig(env: NodeJS.ProcessEnv): PayloadLogConfig {
const enabled = parseBooleanValue(env.OPENCLAW_ANTHROPIC_PAYLOAD_LOG) ?? false;
const fileOverride = env.OPENCLAW_ANTHROPIC_PAYLOAD_LOG_FILE?.trim();
const filePath = fileOverride
? resolveUserPath(fileOverride)
: path.join(resolveStateDir(env), "logs", "anthropic-payload.jsonl");
return { enabled, filePath };
}
function getWriter(filePath: string): PayloadLogWriter {
return getQueuedFileWriter(writers, filePath);
}
function formatError(error: unknown): string | undefined {
if (error instanceof Error) {
return error.message;
}
if (typeof error === "string") {
return error;
}
if (typeof error === "number" || typeof error === "boolean" || typeof error === "bigint") {
return String(error);
}
if (error && typeof error === "object") {
return safeJsonStringify(error) ?? "unknown error";
}
return undefined;
}
function digest(value: unknown): string | undefined {
const serialized = safeJsonStringify(value);
if (!serialized) {
return undefined;
}
return crypto.createHash("sha256").update(serialized).digest("hex");
}
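The log events above carry a `payloadDigest` so identical request payloads can be correlated across log lines without comparing full bodies. A minimal standalone version, substituting plain `JSON.stringify` for the module's `safeJsonStringify` (which, unlike `JSON.stringify`, tolerates cyclic values):

```typescript
import crypto from "node:crypto";

// Standalone sketch of the digest() helper above. Assumes JSON.stringify
// in place of safeJsonStringify, so cyclic payloads yield undefined here.
function payloadDigest(value: unknown): string | undefined {
  let serialized: string | undefined;
  try {
    serialized = JSON.stringify(value);
  } catch {
    serialized = undefined;
  }
  if (!serialized) {
    return undefined;
  }
  return crypto.createHash("sha256").update(serialized).digest("hex");
}
```

Digests are deterministic for equal serialized payloads, which is what makes the request/usage correlation possible.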
function isAnthropicModel(model: Model<Api> | undefined | null): boolean {
return (model as { api?: unknown })?.api === "anthropic-messages";
}
function findLastAssistantUsage(messages: AgentMessage[]): Record<string, unknown> | null {
for (let i = messages.length - 1; i >= 0; i -= 1) {
const msg = messages[i] as { role?: unknown; usage?: unknown };
if (msg?.role === "assistant" && msg.usage && typeof msg.usage === "object") {
return msg.usage as Record<string, unknown>;
}
}
return null;
}
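Usage is attached to assistant messages, so the helper above scans the history from the end and returns the most recent one. A self-contained mirror of that reverse scan:

```typescript
// Mirror of findLastAssistantUsage(): scan from the end for the most
// recent assistant message carrying a usage object. Msg is a simplified
// stand-in for AgentMessage.
type Msg = { role?: string; usage?: Record<string, unknown> };

function lastAssistantUsage(messages: Msg[]): Record<string, unknown> | null {
  for (let i = messages.length - 1; i >= 0; i -= 1) {
    const msg = messages[i];
    if (msg?.role === "assistant" && msg.usage && typeof msg.usage === "object") {
      return msg.usage;
    }
  }
  return null;
}
```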
export type AnthropicPayloadLogger = {
enabled: true;
wrapStreamFn: (streamFn: StreamFn) => StreamFn;
recordUsage: (messages: AgentMessage[], error?: unknown) => void;
};
export function createAnthropicPayloadLogger(params: {
env?: NodeJS.ProcessEnv;
runId?: string;
sessionId?: string;
sessionKey?: string;
provider?: string;
modelId?: string;
modelApi?: string | null;
workspaceDir?: string;
}): AnthropicPayloadLogger | null {
const env = params.env ?? process.env;
const cfg = resolvePayloadLogConfig(env);
if (!cfg.enabled) {
return null;
}
const writer = getWriter(cfg.filePath);
const base: Omit<PayloadLogEvent, "ts" | "stage"> = {
runId: params.runId,
sessionId: params.sessionId,
sessionKey: params.sessionKey,
provider: params.provider,
modelId: params.modelId,
modelApi: params.modelApi,
workspaceDir: params.workspaceDir,
};
const record = (event: PayloadLogEvent) => {
const line = safeJsonStringify(event);
if (!line) {
return;
}
writer.write(`${line}\n`);
};
const wrapStreamFn: AnthropicPayloadLogger["wrapStreamFn"] = (streamFn) => {
const wrapped: StreamFn = (model, context, options) => {
if (!isAnthropicModel(model)) {
return streamFn(model, context, options);
}
const nextOnPayload = (payload: unknown) => {
record({
...base,
ts: new Date().toISOString(),
stage: "request",
payload,
payloadDigest: digest(payload),
});
options?.onPayload?.(payload);
};
return streamFn(model, context, {
...options,
onPayload: nextOnPayload,
});
};
return wrapped;
};
const recordUsage: AnthropicPayloadLogger["recordUsage"] = (messages, error) => {
const usage = findLastAssistantUsage(messages);
const errorMessage = formatError(error);
if (!usage) {
if (errorMessage) {
record({
...base,
ts: new Date().toISOString(),
stage: "usage",
error: errorMessage,
});
}
return;
}
record({
...base,
ts: new Date().toISOString(),
stage: "usage",
usage,
error: errorMessage,
});
log.info("anthropic usage", {
runId: params.runId,
sessionId: params.sessionId,
usage,
});
};
log.info("anthropic payload logger enabled", { filePath: writer.filePath });
return { enabled: true, wrapStreamFn, recordUsage };
}


@@ -1,249 +0,0 @@
import { randomUUID } from "node:crypto";
import fs from "node:fs/promises";
import os from "node:os";
import path from "node:path";
import { type Api, completeSimple, type Model } from "@mariozechner/pi-ai";
import { describe, expect, it } from "vitest";
import {
ANTHROPIC_SETUP_TOKEN_PREFIX,
validateAnthropicSetupToken,
} from "../commands/auth-token.js";
import { loadConfig } from "../config/config.js";
import { isTruthyEnvValue } from "../infra/env.js";
import { resolveOpenClawAgentDir } from "./agent-paths.js";
import {
type AuthProfileCredential,
ensureAuthProfileStore,
saveAuthProfileStore,
} from "./auth-profiles.js";
import { getApiKeyForModel, requireApiKey } from "./model-auth.js";
import { normalizeProviderId, parseModelRef } from "./model-selection.js";
import { ensureOpenClawModelsJson } from "./models-config.js";
import { discoverAuthStorage, discoverModels } from "./pi-model-discovery.js";
const LIVE = isTruthyEnvValue(process.env.LIVE) || isTruthyEnvValue(process.env.OPENCLAW_LIVE_TEST);
const SETUP_TOKEN_RAW = process.env.OPENCLAW_LIVE_SETUP_TOKEN?.trim() ?? "";
const SETUP_TOKEN_VALUE = process.env.OPENCLAW_LIVE_SETUP_TOKEN_VALUE?.trim() ?? "";
const SETUP_TOKEN_PROFILE = process.env.OPENCLAW_LIVE_SETUP_TOKEN_PROFILE?.trim() ?? "";
const SETUP_TOKEN_MODEL = process.env.OPENCLAW_LIVE_SETUP_TOKEN_MODEL?.trim() ?? "";
const ENABLED = LIVE && Boolean(SETUP_TOKEN_RAW || SETUP_TOKEN_VALUE || SETUP_TOKEN_PROFILE);
const describeLive = ENABLED ? describe : describe.skip;
type TokenSource = {
agentDir: string;
profileId: string;
cleanup?: () => Promise<void>;
};
function isSetupToken(value: string): boolean {
return value.startsWith(ANTHROPIC_SETUP_TOKEN_PREFIX);
}
function listSetupTokenProfiles(store: {
profiles: Record<string, AuthProfileCredential>;
}): string[] {
return Object.entries(store.profiles)
.filter(([, cred]) => {
if (cred.type !== "token") {
return false;
}
if (normalizeProviderId(cred.provider) !== "anthropic") {
return false;
}
return isSetupToken(cred.token);
})
.map(([id]) => id);
}
function pickSetupTokenProfile(candidates: string[]): string {
const preferred = ["anthropic:setup-token-test", "anthropic:setup-token", "anthropic:default"];
for (const id of preferred) {
if (candidates.includes(id)) {
return id;
}
}
return candidates[0] ?? "";
}
async function resolveTokenSource(): Promise<TokenSource> {
const explicitToken =
(SETUP_TOKEN_RAW && isSetupToken(SETUP_TOKEN_RAW) ? SETUP_TOKEN_RAW : "") || SETUP_TOKEN_VALUE;
if (explicitToken) {
const error = validateAnthropicSetupToken(explicitToken);
if (error) {
throw new Error(`Invalid setup-token: ${error}`);
}
const tempDir = await fs.mkdtemp(path.join(os.tmpdir(), "openclaw-setup-token-"));
const profileId = `anthropic:setup-token-live-${randomUUID()}`;
const store = ensureAuthProfileStore(tempDir, {
allowKeychainPrompt: false,
});
store.profiles[profileId] = {
type: "token",
provider: "anthropic",
token: explicitToken,
};
saveAuthProfileStore(store, tempDir);
return {
agentDir: tempDir,
profileId,
cleanup: async () => {
await fs.rm(tempDir, { recursive: true, force: true });
},
};
}
const agentDir = resolveOpenClawAgentDir();
const store = ensureAuthProfileStore(agentDir, {
allowKeychainPrompt: false,
});
const candidates = listSetupTokenProfiles(store);
if (SETUP_TOKEN_PROFILE) {
if (!candidates.includes(SETUP_TOKEN_PROFILE)) {
const available = candidates.length > 0 ? candidates.join(", ") : "(none)";
throw new Error(
`Setup-token profile "${SETUP_TOKEN_PROFILE}" not found. Available: ${available}.`,
);
}
return { agentDir, profileId: SETUP_TOKEN_PROFILE };
}
if (SETUP_TOKEN_RAW && SETUP_TOKEN_RAW !== "1" && SETUP_TOKEN_RAW !== "auto") {
throw new Error(
"OPENCLAW_LIVE_SETUP_TOKEN did not look like a setup-token. Use OPENCLAW_LIVE_SETUP_TOKEN_VALUE for raw tokens.",
);
}
if (candidates.length === 0) {
throw new Error(
"No Anthropic setup-token profiles found. Set OPENCLAW_LIVE_SETUP_TOKEN_VALUE or OPENCLAW_LIVE_SETUP_TOKEN_PROFILE.",
);
}
return { agentDir, profileId: pickSetupTokenProfile(candidates) };
}
function pickModel(models: Array<Model<Api>>, raw?: string): Model<Api> | null {
const normalized = raw?.trim() ?? "";
if (normalized) {
const parsed = parseModelRef(normalized, "anthropic");
if (!parsed) {
return null;
}
return (
models.find(
(model) =>
normalizeProviderId(model.provider) === parsed.provider && model.id === parsed.model,
) ?? null
);
}
const preferred = [
"claude-opus-4-5",
"claude-sonnet-4-6",
"claude-sonnet-4-5",
"claude-sonnet-4-0",
"claude-haiku-3-5",
];
for (const id of preferred) {
const match = models.find((model) => model.id === id);
if (match) {
return match;
}
}
return models[0] ?? null;
}
function buildTestModel(id: string, provider = "anthropic"): Model<Api> {
return { id, provider } as Model<Api>;
}
describe("pickModel", () => {
it("resolves sonnet-4.6 aliases to claude-sonnet-4-6", () => {
const model = pickModel(
[buildTestModel("claude-opus-4-6"), buildTestModel("claude-sonnet-4-6")],
"sonnet-4.6",
);
expect(model?.id).toBe("claude-sonnet-4-6");
});
it("resolves opus-4.6 aliases to claude-opus-4-6", () => {
const model = pickModel(
[buildTestModel("claude-sonnet-4-6"), buildTestModel("claude-opus-4-6")],
"opus-4.6",
);
expect(model?.id).toBe("claude-opus-4-6");
});
});
describeLive("live anthropic setup-token", () => {
it(
"completes using a setup-token profile",
async () => {
const tokenSource = await resolveTokenSource();
try {
const cfg = loadConfig();
await ensureOpenClawModelsJson(cfg, tokenSource.agentDir);
const authStorage = discoverAuthStorage(tokenSource.agentDir);
const modelRegistry = discoverModels(authStorage, tokenSource.agentDir);
const all = Array.isArray(modelRegistry) ? modelRegistry : modelRegistry.getAll();
const candidates = all.filter(
(model) => normalizeProviderId(model.provider) === "anthropic",
) as Array<Model<Api>>;
expect(candidates.length).toBeGreaterThan(0);
const model = pickModel(candidates, SETUP_TOKEN_MODEL);
if (!model) {
throw new Error(
SETUP_TOKEN_MODEL
? `Model not found: ${SETUP_TOKEN_MODEL}`
: "No Anthropic models available.",
);
}
const apiKeyInfo = await getApiKeyForModel({
model,
cfg,
profileId: tokenSource.profileId,
agentDir: tokenSource.agentDir,
});
const apiKey = requireApiKey(apiKeyInfo, model.provider);
const tokenError = validateAnthropicSetupToken(apiKey);
if (tokenError) {
throw new Error(`Resolved profile is not a setup-token: ${tokenError}`);
}
const res = await completeSimple(
model,
{
messages: [
{
role: "user",
content: "Reply with the word ok.",
timestamp: Date.now(),
},
],
},
{
apiKey,
maxTokens: 64,
temperature: 0,
},
);
const text = res.content
.filter((block) => block.type === "text")
.map((block) => block.text.trim())
.join(" ");
expect(text.toLowerCase()).toContain("ok");
} finally {
if (tokenSource.cleanup) {
await tokenSource.cleanup();
}
}
},
5 * 60 * 1000,
);
});
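The preference-ordered fallback inside `pickModel` can be sketched standalone (minimal shapes and illustrative names only; the real function also resolves provider-qualified model refs):

```typescript
// Minimal standalone sketch of the preferred-id fallback used by pickModel:
// return the first model whose id appears in the preference list, otherwise
// the first available model, otherwise null.
type MiniModel = { id: string };

function pickPreferred(models: MiniModel[], preferred: string[]): MiniModel | null {
  for (const id of preferred) {
    const match = models.find((model) => model.id === id);
    if (match) {
      return match;
    }
  }
  return models[0] ?? null;
}
```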


@@ -1,72 +0,0 @@
import { formatErrorMessage } from "../infra/errors.js";
import { collectProviderApiKeys, isApiKeyRateLimitError } from "./live-auth-keys.js";
type ApiKeyRetryParams = {
apiKey: string;
error: unknown;
attempt: number;
};
type ExecuteWithApiKeyRotationOptions<T> = {
provider: string;
apiKeys: string[];
execute: (apiKey: string) => Promise<T>;
shouldRetry?: (params: ApiKeyRetryParams & { message: string }) => boolean;
onRetry?: (params: ApiKeyRetryParams & { message: string }) => void;
};
function dedupeApiKeys(raw: string[]): string[] {
const seen = new Set<string>();
const keys: string[] = [];
for (const value of raw) {
const apiKey = value.trim();
if (!apiKey || seen.has(apiKey)) {
continue;
}
seen.add(apiKey);
keys.push(apiKey);
}
return keys;
}
export function collectProviderApiKeysForExecution(params: {
provider: string;
primaryApiKey?: string;
}): string[] {
const { primaryApiKey, provider } = params;
return dedupeApiKeys([primaryApiKey?.trim() ?? "", ...collectProviderApiKeys(provider)]);
}
export async function executeWithApiKeyRotation<T>(
params: ExecuteWithApiKeyRotationOptions<T>,
): Promise<T> {
const keys = dedupeApiKeys(params.apiKeys);
if (keys.length === 0) {
throw new Error(`No API keys configured for provider "${params.provider}".`);
}
let lastError: unknown;
for (let attempt = 0; attempt < keys.length; attempt += 1) {
const apiKey = keys[attempt];
try {
return await params.execute(apiKey);
} catch (error) {
lastError = error;
const message = formatErrorMessage(error);
const retryable = params.shouldRetry
? params.shouldRetry({ apiKey, error, attempt, message })
: isApiKeyRateLimitError(message);
if (!retryable || attempt + 1 >= keys.length) {
break;
}
params.onRetry?.({ apiKey, error, attempt, message });
}
}
if (lastError === undefined) {
throw new Error(`Failed to run API request for ${params.provider}.`);
}
throw lastError;
}
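The rotation loop above reduces to a standalone sketch (hypothetical helper name, no OpenClaw imports; the real module additionally dedupes via `collectProviderApiKeys` and formats errors with `formatErrorMessage`):

```typescript
// Minimal standalone sketch of sequential API-key rotation: try each key in
// order and fall through to the next only when the failure is retryable.
async function withKeyRotation<T>(
  keys: string[],
  execute: (apiKey: string) => Promise<T>,
  retryable: (message: string) => boolean,
): Promise<T> {
  // Trim, drop empties, and dedupe while preserving order.
  const unique = [...new Set(keys.map((key) => key.trim()).filter(Boolean))];
  if (unique.length === 0) {
    throw new Error("No API keys configured.");
  }
  let lastError: unknown;
  for (let attempt = 0; attempt < unique.length; attempt += 1) {
    try {
      return await execute(unique[attempt]);
    } catch (error) {
      lastError = error;
      const message = error instanceof Error ? error.message : String(error);
      // Give up on non-retryable errors or once every key has been tried.
      if (!retryable(message) || attempt + 1 >= unique.length) {
        break;
      }
    }
  }
  throw lastError;
}
```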


@@ -1,205 +0,0 @@
import fs from "node:fs/promises";
type UpdateFileChunk = {
changeContext?: string;
oldLines: string[];
newLines: string[];
isEndOfFile: boolean;
};
async function defaultReadFile(filePath: string): Promise<string> {
return fs.readFile(filePath, "utf8");
}
export async function applyUpdateHunk(
filePath: string,
chunks: UpdateFileChunk[],
options?: { readFile?: (filePath: string) => Promise<string> },
): Promise<string> {
const reader = options?.readFile ?? defaultReadFile;
const originalContents = await reader(filePath).catch((err) => {
throw new Error(`Failed to read file to update ${filePath}: ${err}`);
});
const originalLines = originalContents.split("\n");
if (originalLines.length > 0 && originalLines[originalLines.length - 1] === "") {
originalLines.pop();
}
const replacements = computeReplacements(originalLines, filePath, chunks);
let newLines = applyReplacements(originalLines, replacements);
if (newLines.length === 0 || newLines[newLines.length - 1] !== "") {
newLines = [...newLines, ""];
}
return newLines.join("\n");
}
function computeReplacements(
originalLines: string[],
filePath: string,
chunks: UpdateFileChunk[],
): Array<[number, number, string[]]> {
const replacements: Array<[number, number, string[]]> = [];
let lineIndex = 0;
for (const chunk of chunks) {
if (chunk.changeContext) {
const ctxIndex = seekSequence(originalLines, [chunk.changeContext], lineIndex, false);
if (ctxIndex === null) {
throw new Error(`Failed to find context '${chunk.changeContext}' in ${filePath}`);
}
lineIndex = ctxIndex + 1;
}
if (chunk.oldLines.length === 0) {
const insertionIndex =
originalLines.length > 0 && originalLines[originalLines.length - 1] === ""
? originalLines.length - 1
: originalLines.length;
replacements.push([insertionIndex, 0, chunk.newLines]);
continue;
}
let pattern = chunk.oldLines;
let newSlice = chunk.newLines;
let found = seekSequence(originalLines, pattern, lineIndex, chunk.isEndOfFile);
if (found === null && pattern[pattern.length - 1] === "") {
pattern = pattern.slice(0, -1);
if (newSlice.length > 0 && newSlice[newSlice.length - 1] === "") {
newSlice = newSlice.slice(0, -1);
}
found = seekSequence(originalLines, pattern, lineIndex, chunk.isEndOfFile);
}
if (found === null) {
throw new Error(
`Failed to find expected lines in ${filePath}:\n${chunk.oldLines.join("\n")}`,
);
}
replacements.push([found, pattern.length, newSlice]);
lineIndex = found + pattern.length;
}
replacements.sort((a, b) => a[0] - b[0]);
return replacements;
}
function applyReplacements(
  lines: string[],
  replacements: Array<[number, number, string[]]>,
): string[] {
  const result = [...lines];
  // Walk replacements in reverse so earlier indices stay valid after splicing.
  for (const [startIndex, oldLen, newLines] of replacements.toReversed()) {
    result.splice(startIndex, oldLen, ...newLines);
  }
  return result;
}
function seekSequence(
lines: string[],
pattern: string[],
start: number,
eof: boolean,
): number | null {
if (pattern.length === 0) {
return start;
}
if (pattern.length > lines.length) {
return null;
}
const maxStart = lines.length - pattern.length;
const searchStart = eof && lines.length >= pattern.length ? maxStart : start;
if (searchStart > maxStart) {
return null;
}
for (let i = searchStart; i <= maxStart; i += 1) {
if (linesMatch(lines, pattern, i, (value) => value)) {
return i;
}
}
for (let i = searchStart; i <= maxStart; i += 1) {
if (linesMatch(lines, pattern, i, (value) => value.trimEnd())) {
return i;
}
}
for (let i = searchStart; i <= maxStart; i += 1) {
if (linesMatch(lines, pattern, i, (value) => value.trim())) {
return i;
}
}
for (let i = searchStart; i <= maxStart; i += 1) {
if (linesMatch(lines, pattern, i, (value) => normalizePunctuation(value.trim()))) {
return i;
}
}
return null;
}
function linesMatch(
lines: string[],
pattern: string[],
start: number,
normalize: (value: string) => string,
): boolean {
for (let idx = 0; idx < pattern.length; idx += 1) {
if (normalize(lines[start + idx]) !== normalize(pattern[idx])) {
return false;
}
}
return true;
}
function normalizePunctuation(value: string): string {
return Array.from(value)
.map((char) => {
switch (char) {
case "\u2010":
case "\u2011":
case "\u2012":
case "\u2013":
case "\u2014":
case "\u2015":
case "\u2212":
return "-";
case "\u2018":
case "\u2019":
case "\u201A":
case "\u201B":
return "'";
case "\u201C":
case "\u201D":
case "\u201E":
case "\u201F":
return '"';
case "\u00A0":
case "\u2002":
case "\u2003":
case "\u2004":
case "\u2005":
case "\u2006":
case "\u2007":
case "\u2008":
case "\u2009":
case "\u200A":
case "\u202F":
case "\u205F":
case "\u3000":
return " ";
default:
return char;
}
})
.join("");
}
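The whitespace-tolerant matching in `seekSequence` boils down to rescanning under progressively looser normalizers, sketched here standalone (illustrative names; the real function also handles end-of-file anchoring and punctuation normalization):

```typescript
// Standalone sketch of progressively-relaxed line matching: try an exact
// scan first, then retry the whole scan with increasingly aggressive
// normalizers until one pass finds the pattern.
function findSequence(lines: string[], pattern: string[], start = 0): number | null {
  const normalizers: Array<(value: string) => string> = [
    (value) => value, // exact
    (value) => value.trimEnd(), // ignore trailing whitespace
    (value) => value.trim(), // ignore all surrounding whitespace
  ];
  for (const norm of normalizers) {
    for (let i = start; i + pattern.length <= lines.length; i += 1) {
      if (pattern.every((p, j) => norm(lines[i + j]) === norm(p))) {
        return i;
      }
    }
  }
  return null;
}
```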


@@ -1,257 +0,0 @@
import fs from "node:fs/promises";
import os from "node:os";
import path from "node:path";
import { describe, expect, it } from "vitest";
import { applyPatch } from "./apply-patch.js";
async function withTempDir<T>(fn: (dir: string) => Promise<T>) {
const dir = await fs.mkdtemp(path.join(os.tmpdir(), "openclaw-patch-"));
try {
return await fn(dir);
} finally {
await fs.rm(dir, { recursive: true, force: true });
}
}
function buildAddFilePatch(targetPath: string): string {
return `*** Begin Patch
*** Add File: ${targetPath}
+escaped
*** End Patch`;
}
async function expectOutsideWriteRejected(params: {
dir: string;
patchTargetPath: string;
outsidePath: string;
}) {
const patch = buildAddFilePatch(params.patchTargetPath);
await expect(applyPatch(patch, { cwd: params.dir })).rejects.toThrow(/Path escapes sandbox root/);
await expect(fs.readFile(params.outsidePath, "utf8")).rejects.toBeDefined();
}
describe("applyPatch", () => {
it("adds a file", async () => {
await withTempDir(async (dir) => {
const patch = `*** Begin Patch
*** Add File: hello.txt
+hello
*** End Patch`;
const result = await applyPatch(patch, { cwd: dir });
const contents = await fs.readFile(path.join(dir, "hello.txt"), "utf8");
expect(contents).toBe("hello\n");
expect(result.summary.added).toEqual(["hello.txt"]);
});
});
it("updates and moves a file", async () => {
await withTempDir(async (dir) => {
const source = path.join(dir, "source.txt");
await fs.writeFile(source, "foo\nbar\n", "utf8");
const patch = `*** Begin Patch
*** Update File: source.txt
*** Move to: dest.txt
@@
foo
-bar
+baz
*** End Patch`;
const result = await applyPatch(patch, { cwd: dir });
const dest = path.join(dir, "dest.txt");
const contents = await fs.readFile(dest, "utf8");
expect(contents).toBe("foo\nbaz\n");
await expect(fs.stat(source)).rejects.toBeDefined();
expect(result.summary.modified).toEqual(["dest.txt"]);
});
});
it("supports end-of-file inserts", async () => {
await withTempDir(async (dir) => {
const target = path.join(dir, "end.txt");
await fs.writeFile(target, "line1\n", "utf8");
const patch = `*** Begin Patch
*** Update File: end.txt
@@
+line2
*** End of File
*** End Patch`;
await applyPatch(patch, { cwd: dir });
const contents = await fs.readFile(target, "utf8");
expect(contents).toBe("line1\nline2\n");
});
});
it("rejects path traversal outside cwd by default", async () => {
await withTempDir(async (dir) => {
const escapedPath = path.join(
path.dirname(dir),
`escaped-${process.pid}-${Date.now()}-${Math.random().toString(16).slice(2)}.txt`,
);
const relativeEscape = path.relative(dir, escapedPath);
try {
await expectOutsideWriteRejected({
dir,
patchTargetPath: relativeEscape,
outsidePath: escapedPath,
});
} finally {
await fs.rm(escapedPath, { force: true });
}
});
});
it("rejects absolute paths outside cwd by default", async () => {
await withTempDir(async (dir) => {
const escapedPath = path.join(os.tmpdir(), `openclaw-apply-patch-${Date.now()}.txt`);
try {
await expectOutsideWriteRejected({
dir,
patchTargetPath: escapedPath,
outsidePath: escapedPath,
});
} finally {
await fs.rm(escapedPath, { force: true });
}
});
});
it("allows absolute paths within cwd by default", async () => {
await withTempDir(async (dir) => {
const target = path.join(dir, "nested", "inside.txt");
const patch = `*** Begin Patch
*** Add File: ${target}
+inside
*** End Patch`;
await applyPatch(patch, { cwd: dir });
const contents = await fs.readFile(target, "utf8");
expect(contents).toBe("inside\n");
});
});
it("rejects symlink escape attempts by default", async () => {
await withTempDir(async (dir) => {
const outside = path.join(path.dirname(dir), "outside-target.txt");
const linkPath = path.join(dir, "link.txt");
await fs.writeFile(outside, "initial\n", "utf8");
await fs.symlink(outside, linkPath);
const patch = `*** Begin Patch
*** Update File: link.txt
@@
-initial
+pwned
*** End Patch`;
await expect(applyPatch(patch, { cwd: dir })).rejects.toThrow(/Symlink escapes sandbox root/);
const outsideContents = await fs.readFile(outside, "utf8");
expect(outsideContents).toBe("initial\n");
await fs.rm(outside, { force: true });
});
});
it("allows symlinks that resolve within cwd by default", async () => {
await withTempDir(async (dir) => {
const target = path.join(dir, "target.txt");
const linkPath = path.join(dir, "link.txt");
await fs.writeFile(target, "initial\n", "utf8");
await fs.symlink(target, linkPath);
const patch = `*** Begin Patch
*** Update File: link.txt
@@
-initial
+updated
*** End Patch`;
await applyPatch(patch, { cwd: dir });
const contents = await fs.readFile(target, "utf8");
expect(contents).toBe("updated\n");
});
});
it("rejects delete path traversal via symlink directories by default", async () => {
await withTempDir(async (dir) => {
const outsideDir = path.join(path.dirname(dir), `outside-dir-${process.pid}-${Date.now()}`);
const outsideFile = path.join(outsideDir, "victim.txt");
await fs.mkdir(outsideDir, { recursive: true });
await fs.writeFile(outsideFile, "victim\n", "utf8");
const linkDir = path.join(dir, "linkdir");
await fs.symlink(outsideDir, linkDir);
const patch = `*** Begin Patch
*** Delete File: linkdir/victim.txt
*** End Patch`;
try {
await expect(applyPatch(patch, { cwd: dir })).rejects.toThrow(
/Symlink escapes sandbox root/,
);
const stillThere = await fs.readFile(outsideFile, "utf8");
expect(stillThere).toBe("victim\n");
} finally {
await fs.rm(outsideFile, { force: true });
await fs.rm(outsideDir, { recursive: true, force: true });
}
});
});
it("allows path traversal when workspaceOnly is explicitly disabled", async () => {
await withTempDir(async (dir) => {
const escapedPath = path.join(
path.dirname(dir),
`escaped-allow-${process.pid}-${Date.now()}-${Math.random().toString(16).slice(2)}.txt`,
);
const relativeEscape = path.relative(dir, escapedPath);
const patch = `*** Begin Patch
*** Add File: ${relativeEscape}
+escaped
*** End Patch`;
try {
const result = await applyPatch(patch, { cwd: dir, workspaceOnly: false });
expect(result.summary.added.length).toBe(1);
const contents = await fs.readFile(escapedPath, "utf8");
expect(contents).toBe("escaped\n");
} finally {
await fs.rm(escapedPath, { force: true });
}
});
});
it("allows deleting a symlink itself even if it points outside cwd", async () => {
await withTempDir(async (dir) => {
const outsideDir = await fs.mkdtemp(path.join(path.dirname(dir), "openclaw-patch-outside-"));
try {
const outsideTarget = path.join(outsideDir, "target.txt");
await fs.writeFile(outsideTarget, "keep\n", "utf8");
const linkDir = path.join(dir, "link");
await fs.symlink(outsideDir, linkDir);
const patch = `*** Begin Patch
*** Delete File: link
*** End Patch`;
const result = await applyPatch(patch, { cwd: dir });
expect(result.summary.deleted).toEqual(["link"]);
await expect(fs.lstat(linkDir)).rejects.toBeDefined();
const outsideContents = await fs.readFile(outsideTarget, "utf8");
expect(outsideContents).toBe("keep\n");
} finally {
await fs.rm(outsideDir, { recursive: true, force: true });
}
});
});
});


@@ -1,532 +0,0 @@
import fs from "node:fs/promises";
import path from "node:path";
import type { AgentTool } from "@mariozechner/pi-agent-core";
import { Type } from "@sinclair/typebox";
import { applyUpdateHunk } from "./apply-patch-update.js";
import { assertSandboxPath, resolveSandboxInputPath } from "./sandbox-paths.js";
import type { SandboxFsBridge } from "./sandbox/fs-bridge.js";
const BEGIN_PATCH_MARKER = "*** Begin Patch";
const END_PATCH_MARKER = "*** End Patch";
const ADD_FILE_MARKER = "*** Add File: ";
const DELETE_FILE_MARKER = "*** Delete File: ";
const UPDATE_FILE_MARKER = "*** Update File: ";
const MOVE_TO_MARKER = "*** Move to: ";
const EOF_MARKER = "*** End of File";
const CHANGE_CONTEXT_MARKER = "@@ ";
const EMPTY_CHANGE_CONTEXT_MARKER = "@@";
type AddFileHunk = {
kind: "add";
path: string;
contents: string;
};
type DeleteFileHunk = {
kind: "delete";
path: string;
};
type UpdateFileChunk = {
changeContext?: string;
oldLines: string[];
newLines: string[];
isEndOfFile: boolean;
};
type UpdateFileHunk = {
kind: "update";
path: string;
movePath?: string;
chunks: UpdateFileChunk[];
};
type Hunk = AddFileHunk | DeleteFileHunk | UpdateFileHunk;
export type ApplyPatchSummary = {
added: string[];
modified: string[];
deleted: string[];
};
export type ApplyPatchResult = {
summary: ApplyPatchSummary;
text: string;
};
export type ApplyPatchToolDetails = {
summary: ApplyPatchSummary;
};
type SandboxApplyPatchConfig = {
root: string;
bridge: SandboxFsBridge;
};
type ApplyPatchOptions = {
cwd: string;
sandbox?: SandboxApplyPatchConfig;
/** Restrict patch paths to the workspace root (cwd). Default: true. Set false to opt out. */
workspaceOnly?: boolean;
signal?: AbortSignal;
};
const applyPatchSchema = Type.Object({
input: Type.String({
description: "Patch content using the *** Begin Patch/End Patch format.",
}),
});
export function createApplyPatchTool(
options: { cwd?: string; sandbox?: SandboxApplyPatchConfig; workspaceOnly?: boolean } = {},
): AgentTool<typeof applyPatchSchema, ApplyPatchToolDetails> {
const cwd = options.cwd ?? process.cwd();
const sandbox = options.sandbox;
const workspaceOnly = options.workspaceOnly !== false;
return {
name: "apply_patch",
label: "apply_patch",
description:
"Apply a patch to one or more files using the apply_patch format. The input should include *** Begin Patch and *** End Patch markers.",
parameters: applyPatchSchema,
execute: async (_toolCallId, args, signal) => {
const params = args as { input?: string };
const input = typeof params.input === "string" ? params.input : "";
if (!input.trim()) {
throw new Error("Provide a patch input.");
}
if (signal?.aborted) {
const err = new Error("Aborted");
err.name = "AbortError";
throw err;
}
const result = await applyPatch(input, {
cwd,
sandbox,
workspaceOnly,
signal,
});
return {
content: [{ type: "text", text: result.text }],
details: { summary: result.summary },
};
},
};
}
export async function applyPatch(
input: string,
options: ApplyPatchOptions,
): Promise<ApplyPatchResult> {
const parsed = parsePatchText(input);
if (parsed.hunks.length === 0) {
throw new Error("No files were modified.");
}
const summary: ApplyPatchSummary = {
added: [],
modified: [],
deleted: [],
};
const seen = {
added: new Set<string>(),
modified: new Set<string>(),
deleted: new Set<string>(),
};
const fileOps = resolvePatchFileOps(options);
for (const hunk of parsed.hunks) {
if (options.signal?.aborted) {
const err = new Error("Aborted");
err.name = "AbortError";
throw err;
}
if (hunk.kind === "add") {
const target = await resolvePatchPath(hunk.path, options);
await ensureDir(target.resolved, fileOps);
await fileOps.writeFile(target.resolved, hunk.contents);
recordSummary(summary, seen, "added", target.display);
continue;
}
if (hunk.kind === "delete") {
const target = await resolvePatchPath(hunk.path, options, "unlink");
await fileOps.remove(target.resolved);
recordSummary(summary, seen, "deleted", target.display);
continue;
}
const target = await resolvePatchPath(hunk.path, options);
const applied = await applyUpdateHunk(target.resolved, hunk.chunks, {
readFile: (path) => fileOps.readFile(path),
});
if (hunk.movePath) {
const moveTarget = await resolvePatchPath(hunk.movePath, options);
await ensureDir(moveTarget.resolved, fileOps);
await fileOps.writeFile(moveTarget.resolved, applied);
await fileOps.remove(target.resolved);
recordSummary(summary, seen, "modified", moveTarget.display);
} else {
await fileOps.writeFile(target.resolved, applied);
recordSummary(summary, seen, "modified", target.display);
}
}
return {
summary,
text: formatSummary(summary),
};
}
function recordSummary(
summary: ApplyPatchSummary,
seen: {
added: Set<string>;
modified: Set<string>;
deleted: Set<string>;
},
bucket: keyof ApplyPatchSummary,
value: string,
) {
if (seen[bucket].has(value)) {
return;
}
seen[bucket].add(value);
summary[bucket].push(value);
}
function formatSummary(summary: ApplyPatchSummary): string {
const lines = ["Success. Updated the following files:"];
for (const file of summary.added) {
lines.push(`A ${file}`);
}
for (const file of summary.modified) {
lines.push(`M ${file}`);
}
for (const file of summary.deleted) {
lines.push(`D ${file}`);
}
return lines.join("\n");
}
type PatchFileOps = {
readFile: (filePath: string) => Promise<string>;
writeFile: (filePath: string, content: string) => Promise<void>;
remove: (filePath: string) => Promise<void>;
mkdirp: (dir: string) => Promise<void>;
};
function resolvePatchFileOps(options: ApplyPatchOptions): PatchFileOps {
if (options.sandbox) {
const { root, bridge } = options.sandbox;
return {
readFile: async (filePath) => {
const buf = await bridge.readFile({ filePath, cwd: root });
return buf.toString("utf8");
},
writeFile: (filePath, content) => bridge.writeFile({ filePath, cwd: root, data: content }),
remove: (filePath) => bridge.remove({ filePath, cwd: root, force: false }),
mkdirp: (dir) => bridge.mkdirp({ filePath: dir, cwd: root }),
};
}
return {
readFile: (filePath) => fs.readFile(filePath, "utf8"),
writeFile: (filePath, content) => fs.writeFile(filePath, content, "utf8"),
remove: (filePath) => fs.rm(filePath),
mkdirp: (dir) => fs.mkdir(dir, { recursive: true }).then(() => {}),
};
}
async function ensureDir(filePath: string, ops: PatchFileOps) {
const parent = path.dirname(filePath);
if (!parent || parent === ".") {
return;
}
await ops.mkdirp(parent);
}
async function resolvePatchPath(
filePath: string,
options: ApplyPatchOptions,
purpose: "readWrite" | "unlink" = "readWrite",
): Promise<{ resolved: string; display: string }> {
if (options.sandbox) {
const resolved = options.sandbox.bridge.resolvePath({
filePath,
cwd: options.cwd,
});
return {
resolved: resolved.hostPath,
display: resolved.relativePath || resolved.hostPath,
};
}
const workspaceOnly = options.workspaceOnly !== false;
const resolved = workspaceOnly
? (
await assertSandboxPath({
filePath,
cwd: options.cwd,
root: options.cwd,
allowFinalSymlink: purpose === "unlink",
})
).resolved
: resolvePathFromCwd(filePath, options.cwd);
return {
resolved,
display: toDisplayPath(resolved, options.cwd),
};
}
function resolvePathFromCwd(filePath: string, cwd: string): string {
return path.normalize(resolveSandboxInputPath(filePath, cwd));
}
function toDisplayPath(resolved: string, cwd: string): string {
const relative = path.relative(cwd, resolved);
if (!relative || relative === "") {
return path.basename(resolved);
}
if (relative.startsWith("..") || path.isAbsolute(relative)) {
return resolved;
}
return relative;
}
function parsePatchText(input: string): { hunks: Hunk[]; patch: string } {
const trimmed = input.trim();
if (!trimmed) {
throw new Error("Invalid patch: input is empty.");
}
const lines = trimmed.split(/\r?\n/);
const validated = checkPatchBoundariesLenient(lines);
const hunks: Hunk[] = [];
const lastLineIndex = validated.length - 1;
let remaining = validated.slice(1, lastLineIndex);
let lineNumber = 2;
while (remaining.length > 0) {
const { hunk, consumed } = parseOneHunk(remaining, lineNumber);
hunks.push(hunk);
lineNumber += consumed;
remaining = remaining.slice(consumed);
}
return { hunks, patch: validated.join("\n") };
}
function checkPatchBoundariesLenient(lines: string[]): string[] {
const strictError = checkPatchBoundariesStrict(lines);
if (!strictError) {
return lines;
}
if (lines.length < 4) {
throw new Error(strictError);
}
const first = lines[0];
const last = lines[lines.length - 1];
if ((first === "<<EOF" || first === "<<'EOF'" || first === '<<"EOF"') && last.endsWith("EOF")) {
const inner = lines.slice(1, lines.length - 1);
const innerError = checkPatchBoundariesStrict(inner);
if (!innerError) {
return inner;
}
throw new Error(innerError);
}
throw new Error(strictError);
}
function checkPatchBoundariesStrict(lines: string[]): string | null {
const firstLine = lines[0]?.trim();
const lastLine = lines[lines.length - 1]?.trim();
if (firstLine === BEGIN_PATCH_MARKER && lastLine === END_PATCH_MARKER) {
return null;
}
if (firstLine !== BEGIN_PATCH_MARKER) {
return "The first line of the patch must be '*** Begin Patch'";
}
return "The last line of the patch must be '*** End Patch'";
}
function parseOneHunk(lines: string[], lineNumber: number): { hunk: Hunk; consumed: number } {
if (lines.length === 0) {
throw new Error(`Invalid patch hunk at line ${lineNumber}: empty hunk`);
}
const firstLine = lines[0].trim();
if (firstLine.startsWith(ADD_FILE_MARKER)) {
const targetPath = firstLine.slice(ADD_FILE_MARKER.length);
let contents = "";
let consumed = 1;
for (const addLine of lines.slice(1)) {
if (addLine.startsWith("+")) {
contents += `${addLine.slice(1)}\n`;
consumed += 1;
} else {
break;
}
}
return {
hunk: { kind: "add", path: targetPath, contents },
consumed,
};
}
if (firstLine.startsWith(DELETE_FILE_MARKER)) {
const targetPath = firstLine.slice(DELETE_FILE_MARKER.length);
return {
hunk: { kind: "delete", path: targetPath },
consumed: 1,
};
}
if (firstLine.startsWith(UPDATE_FILE_MARKER)) {
const targetPath = firstLine.slice(UPDATE_FILE_MARKER.length);
let remaining = lines.slice(1);
let consumed = 1;
let movePath: string | undefined;
const moveCandidate = remaining[0]?.trim();
if (moveCandidate?.startsWith(MOVE_TO_MARKER)) {
movePath = moveCandidate.slice(MOVE_TO_MARKER.length);
remaining = remaining.slice(1);
consumed += 1;
}
const chunks: UpdateFileChunk[] = [];
while (remaining.length > 0) {
if (remaining[0].trim() === "") {
remaining = remaining.slice(1);
consumed += 1;
continue;
}
if (remaining[0].startsWith("***")) {
break;
}
const { chunk, consumed: chunkLines } = parseUpdateFileChunk(
remaining,
lineNumber + consumed,
chunks.length === 0,
);
chunks.push(chunk);
remaining = remaining.slice(chunkLines);
consumed += chunkLines;
}
if (chunks.length === 0) {
throw new Error(
`Invalid patch hunk at line ${lineNumber}: Update file hunk for path '${targetPath}' is empty`,
);
}
return {
hunk: {
kind: "update",
path: targetPath,
movePath,
chunks,
},
consumed,
};
}
throw new Error(
`Invalid patch hunk at line ${lineNumber}: '${lines[0]}' is not a valid hunk header. Valid hunk headers: '*** Add File: {path}', '*** Delete File: {path}', '*** Update File: {path}'`,
);
}
function parseUpdateFileChunk(
lines: string[],
lineNumber: number,
allowMissingContext: boolean,
): { chunk: UpdateFileChunk; consumed: number } {
if (lines.length === 0) {
throw new Error(
`Invalid patch hunk at line ${lineNumber}: Update hunk does not contain any lines`,
);
}
let changeContext: string | undefined;
let startIndex = 0;
if (lines[0] === EMPTY_CHANGE_CONTEXT_MARKER) {
startIndex = 1;
} else if (lines[0].startsWith(CHANGE_CONTEXT_MARKER)) {
changeContext = lines[0].slice(CHANGE_CONTEXT_MARKER.length);
startIndex = 1;
} else if (!allowMissingContext) {
throw new Error(
`Invalid patch hunk at line ${lineNumber}: Expected update hunk to start with a @@ context marker, got: '${lines[0]}'`,
);
}
if (startIndex >= lines.length) {
throw new Error(
`Invalid patch hunk at line ${lineNumber + 1}: Update hunk does not contain any lines`,
);
}
const chunk: UpdateFileChunk = {
changeContext,
oldLines: [],
newLines: [],
isEndOfFile: false,
};
let parsedLines = 0;
for (const line of lines.slice(startIndex)) {
if (line === EOF_MARKER) {
if (parsedLines === 0) {
throw new Error(
`Invalid patch hunk at line ${lineNumber + 1}: Update hunk does not contain any lines`,
);
}
chunk.isEndOfFile = true;
parsedLines += 1;
break;
}
const marker = line[0];
if (!marker) {
chunk.oldLines.push("");
chunk.newLines.push("");
parsedLines += 1;
continue;
}
if (marker === " ") {
const content = line.slice(1);
chunk.oldLines.push(content);
chunk.newLines.push(content);
parsedLines += 1;
continue;
}
if (marker === "+") {
chunk.newLines.push(line.slice(1));
parsedLines += 1;
continue;
}
if (marker === "-") {
chunk.oldLines.push(line.slice(1));
parsedLines += 1;
continue;
}
if (parsedLines === 0) {
throw new Error(
`Invalid patch hunk at line ${lineNumber + 1}: Unexpected line found in update hunk: '${line}'. Every line should start with ' ' (context line), '+' (added line), or '-' (removed line)`,
);
}
break;
}
return { chunk, consumed: parsedLines + startIndex };
}
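The envelope rule enforced by `checkPatchBoundariesStrict` can be illustrated with a minimal standalone check against a sample patch (the file path in the sample is hypothetical):

```typescript
// Standalone sketch of the patch envelope check: after trimming, the first
// line must be the Begin marker and the last line the End marker.
function hasPatchEnvelope(patch: string): boolean {
  const lines = patch.trim().split(/\r?\n/);
  return (
    lines[0]?.trim() === "*** Begin Patch" &&
    lines[lines.length - 1]?.trim() === "*** End Patch"
  );
}

// Hypothetical patch in the format parsed above: one update hunk with a
// context line, a removed line, and an added line.
const samplePatch = `*** Begin Patch
*** Update File: src/example.ts
@@
 context line
-old line
+new line
*** End Patch`;
```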


@@ -1,100 +0,0 @@
import { afterEach, describe, expect, it, vi } from "vitest";
import {
buildAuthHealthSummary,
DEFAULT_OAUTH_WARN_MS,
formatRemainingShort,
} from "./auth-health.js";
describe("buildAuthHealthSummary", () => {
const now = 1_700_000_000_000;
afterEach(() => {
vi.restoreAllMocks();
});
it("classifies OAuth and API key profiles", () => {
vi.spyOn(Date, "now").mockReturnValue(now);
const store = {
version: 1,
profiles: {
"anthropic:ok": {
type: "oauth" as const,
provider: "anthropic",
access: "access",
refresh: "refresh",
expires: now + DEFAULT_OAUTH_WARN_MS + 60_000,
},
"anthropic:expiring": {
type: "oauth" as const,
provider: "anthropic",
access: "access",
refresh: "refresh",
expires: now + 10_000,
},
"anthropic:expired": {
type: "oauth" as const,
provider: "anthropic",
access: "access",
refresh: "refresh",
expires: now - 10_000,
},
"anthropic:api": {
type: "api_key" as const,
provider: "anthropic",
key: "sk-ant-api",
},
},
};
const summary = buildAuthHealthSummary({
store,
warnAfterMs: DEFAULT_OAUTH_WARN_MS,
});
const statuses = Object.fromEntries(
summary.profiles.map((profile) => [profile.profileId, profile.status]),
);
expect(statuses["anthropic:ok"]).toBe("ok");
// OAuth credentials with refresh tokens are auto-renewable, so they report "ok"
expect(statuses["anthropic:expiring"]).toBe("ok");
expect(statuses["anthropic:expired"]).toBe("ok");
expect(statuses["anthropic:api"]).toBe("static");
const provider = summary.providers.find((entry) => entry.provider === "anthropic");
expect(provider?.status).toBe("ok");
});
it("reports expired for OAuth without a refresh token", () => {
vi.spyOn(Date, "now").mockReturnValue(now);
const store = {
version: 1,
profiles: {
"google:no-refresh": {
type: "oauth" as const,
provider: "google-antigravity",
access: "access",
refresh: "",
expires: now - 10_000,
},
},
};
const summary = buildAuthHealthSummary({
store,
warnAfterMs: DEFAULT_OAUTH_WARN_MS,
});
const statuses = Object.fromEntries(
summary.profiles.map((profile) => [profile.profileId, profile.status]),
);
expect(statuses["google:no-refresh"]).toBe("expired");
});
});
describe("formatRemainingShort", () => {
it("supports an explicit under-minute label override", () => {
expect(formatRemainingShort(20_000)).toBe("1m");
expect(formatRemainingShort(20_000, { underMinuteLabel: "soon" })).toBe("soon");
});
});
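The bucketing behavior exercised above can be sketched standalone (simplified; the real `formatRemainingShort` also handles `undefined`/`NaN` inputs and the `underMinuteLabel` override):

```typescript
// Standalone sketch of the remaining-time formatter: round to minutes,
// switch to hours at 60 minutes, and to days at 48 hours.
function formatRemaining(ms: number): string {
  if (ms <= 0) {
    return "0m";
  }
  const minutes = Math.max(1, Math.round(ms / 60_000));
  if (minutes < 60) {
    return `${minutes}m`;
  }
  const hours = Math.round(minutes / 60);
  if (hours < 48) {
    return `${hours}h`;
  }
  return `${Math.round(hours / 24)}d`;
}
```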


@@ -1,261 +0,0 @@
import type { OpenClawConfig } from "../config/config.js";
import {
type AuthProfileCredential,
type AuthProfileStore,
resolveAuthProfileDisplayLabel,
} from "./auth-profiles.js";
export type AuthProfileSource = "store";
export type AuthProfileHealthStatus = "ok" | "expiring" | "expired" | "missing" | "static";
export type AuthProfileHealth = {
profileId: string;
provider: string;
type: "oauth" | "token" | "api_key";
status: AuthProfileHealthStatus;
expiresAt?: number;
remainingMs?: number;
source: AuthProfileSource;
label: string;
};
export type AuthProviderHealthStatus = "ok" | "expiring" | "expired" | "missing" | "static";
export type AuthProviderHealth = {
provider: string;
status: AuthProviderHealthStatus;
expiresAt?: number;
remainingMs?: number;
profiles: AuthProfileHealth[];
};
export type AuthHealthSummary = {
now: number;
warnAfterMs: number;
profiles: AuthProfileHealth[];
providers: AuthProviderHealth[];
};
export const DEFAULT_OAUTH_WARN_MS = 24 * 60 * 60 * 1000;
export function resolveAuthProfileSource(_profileId: string): AuthProfileSource {
return "store";
}
export function formatRemainingShort(
remainingMs?: number,
opts?: {
underMinuteLabel?: string;
},
): string {
if (remainingMs === undefined || Number.isNaN(remainingMs)) {
return "unknown";
}
if (remainingMs <= 0) {
return "0m";
}
const roundedMinutes = Math.round(remainingMs / 60_000);
if (roundedMinutes < 1) {
return opts?.underMinuteLabel ?? "1m";
}
const minutes = roundedMinutes;
if (minutes < 60) {
return `${minutes}m`;
}
const hours = Math.round(minutes / 60);
if (hours < 48) {
return `${hours}h`;
}
const days = Math.round(hours / 24);
return `${days}d`;
}
function resolveOAuthStatus(
expiresAt: number | undefined,
now: number,
warnAfterMs: number,
): { status: AuthProfileHealthStatus; remainingMs?: number } {
if (!expiresAt || !Number.isFinite(expiresAt) || expiresAt <= 0) {
return { status: "missing" };
}
const remainingMs = expiresAt - now;
if (remainingMs <= 0) {
return { status: "expired", remainingMs };
}
if (remainingMs <= warnAfterMs) {
return { status: "expiring", remainingMs };
}
return { status: "ok", remainingMs };
}
function buildProfileHealth(params: {
profileId: string;
credential: AuthProfileCredential;
store: AuthProfileStore;
cfg?: OpenClawConfig;
now: number;
warnAfterMs: number;
}): AuthProfileHealth {
const { profileId, credential, store, cfg, now, warnAfterMs } = params;
const label = resolveAuthProfileDisplayLabel({ cfg, store, profileId });
const source = resolveAuthProfileSource(profileId);
if (credential.type === "api_key") {
return {
profileId,
provider: credential.provider,
type: "api_key",
status: "static",
source,
label,
};
}
if (credential.type === "token") {
const expiresAt =
typeof credential.expires === "number" && Number.isFinite(credential.expires)
? credential.expires
: undefined;
if (!expiresAt || expiresAt <= 0) {
return {
profileId,
provider: credential.provider,
type: "token",
status: "static",
source,
label,
};
}
const { status, remainingMs } = resolveOAuthStatus(expiresAt, now, warnAfterMs);
return {
profileId,
provider: credential.provider,
type: "token",
status,
expiresAt,
remainingMs,
source,
label,
};
}
const hasRefreshToken = typeof credential.refresh === "string" && credential.refresh.length > 0;
const { status: rawStatus, remainingMs } = resolveOAuthStatus(
credential.expires,
now,
warnAfterMs,
);
// OAuth credentials with a valid refresh token auto-renew on first API call,
// so don't warn about access token expiration.
const status =
hasRefreshToken && (rawStatus === "expired" || rawStatus === "expiring") ? "ok" : rawStatus;
return {
profileId,
provider: credential.provider,
type: "oauth",
status,
expiresAt: credential.expires,
remainingMs,
source,
label,
};
}
export function buildAuthHealthSummary(params: {
store: AuthProfileStore;
cfg?: OpenClawConfig;
warnAfterMs?: number;
providers?: string[];
}): AuthHealthSummary {
const now = Date.now();
const warnAfterMs = params.warnAfterMs ?? DEFAULT_OAUTH_WARN_MS;
const providerFilter = params.providers
? new Set(params.providers.map((p) => p.trim()).filter(Boolean))
: null;
const profiles = Object.entries(params.store.profiles)
.filter(([_, cred]) => (providerFilter ? providerFilter.has(cred.provider) : true))
.map(([profileId, credential]) =>
buildProfileHealth({
profileId,
credential,
store: params.store,
cfg: params.cfg,
now,
warnAfterMs,
}),
)
.toSorted((a, b) => {
if (a.provider !== b.provider) {
return a.provider.localeCompare(b.provider);
}
return a.profileId.localeCompare(b.profileId);
});
const providersMap = new Map<string, AuthProviderHealth>();
for (const profile of profiles) {
const existing = providersMap.get(profile.provider);
if (!existing) {
providersMap.set(profile.provider, {
provider: profile.provider,
status: "missing",
profiles: [profile],
});
} else {
existing.profiles.push(profile);
}
}
if (providerFilter) {
for (const provider of providerFilter) {
if (!providersMap.has(provider)) {
providersMap.set(provider, {
provider,
status: "missing",
profiles: [],
});
}
}
}
for (const provider of providersMap.values()) {
if (provider.profiles.length === 0) {
provider.status = "missing";
continue;
}
const oauthProfiles = provider.profiles.filter((p) => p.type === "oauth");
const tokenProfiles = provider.profiles.filter((p) => p.type === "token");
const apiKeyProfiles = provider.profiles.filter((p) => p.type === "api_key");
const expirable = [...oauthProfiles, ...tokenProfiles];
if (expirable.length === 0) {
provider.status = apiKeyProfiles.length > 0 ? "static" : "missing";
continue;
}
const expiryCandidates = expirable
.map((p) => p.expiresAt)
.filter((v): v is number => typeof v === "number" && Number.isFinite(v));
if (expiryCandidates.length > 0) {
provider.expiresAt = Math.min(...expiryCandidates);
provider.remainingMs = provider.expiresAt - now;
}
const statuses = new Set(expirable.map((p) => p.status));
if (statuses.has("expired") || statuses.has("missing")) {
provider.status = "expired";
} else if (statuses.has("expiring")) {
provider.status = "expiring";
} else {
provider.status = "ok";
}
}
const providers = Array.from(providersMap.values()).toSorted((a, b) =>
a.provider.localeCompare(b.provider),
);
return { now, warnAfterMs, profiles, providers };
}

@@ -1,84 +0,0 @@
import fs from "node:fs/promises";
import os from "node:os";
import path from "node:path";
import { afterEach, describe, expect, it, vi } from "vitest";
import { withEnvAsync } from "../test-utils/env.js";
import {
type AuthProfileStore,
ensureAuthProfileStore,
resolveApiKeyForProfile,
} from "./auth-profiles.js";
import { CHUTES_TOKEN_ENDPOINT } from "./chutes-oauth.js";
describe("auth-profiles (chutes)", () => {
let tempDir: string | null = null;
afterEach(async () => {
vi.unstubAllGlobals();
if (tempDir) {
await fs.rm(tempDir, { recursive: true, force: true });
tempDir = null;
}
});
it("refreshes expired Chutes OAuth credentials", async () => {
tempDir = await fs.mkdtemp(path.join(os.tmpdir(), "openclaw-chutes-"));
const agentDir = path.join(tempDir, "agents", "main", "agent");
await withEnvAsync(
{
OPENCLAW_STATE_DIR: tempDir,
OPENCLAW_AGENT_DIR: agentDir,
PI_CODING_AGENT_DIR: agentDir,
CHUTES_CLIENT_ID: undefined,
},
async () => {
const authProfilePath = path.join(agentDir, "auth-profiles.json");
await fs.mkdir(path.dirname(authProfilePath), { recursive: true });
const store: AuthProfileStore = {
version: 1,
profiles: {
"chutes:default": {
type: "oauth",
provider: "chutes",
access: "at_old",
refresh: "rt_old",
expires: Date.now() - 60_000,
clientId: "cid_test",
},
},
};
await fs.writeFile(authProfilePath, `${JSON.stringify(store)}\n`);
const fetchSpy = vi.fn(async (input: string | URL) => {
const url = typeof input === "string" ? input : input.toString();
if (url !== CHUTES_TOKEN_ENDPOINT) {
return new Response("not found", { status: 404 });
}
return new Response(
JSON.stringify({
access_token: "at_new",
expires_in: 3600,
}),
{ status: 200, headers: { "Content-Type": "application/json" } },
);
});
vi.stubGlobal("fetch", fetchSpy);
const loaded = ensureAuthProfileStore();
const resolved = await resolveApiKeyForProfile({
store: loaded,
profileId: "chutes:default",
});
expect(resolved?.apiKey).toBe("at_new");
expect(fetchSpy).toHaveBeenCalled();
const persisted = JSON.parse(await fs.readFile(authProfilePath, "utf8")) as {
profiles?: Record<string, { access?: string }>;
};
expect(persisted.profiles?.["chutes:default"]?.access).toBe("at_new");
},
);
});
});

@@ -1,159 +0,0 @@
import { describe, expect, it } from "vitest";
import { resolveAuthProfileOrder } from "./auth-profiles/order.js";
import type { AuthProfileStore } from "./auth-profiles/types.js";
import { isProfileInCooldown } from "./auth-profiles/usage.js";
/**
* Integration tests for cooldown auto-expiry through resolveAuthProfileOrder.
* Verifies that profiles with expired cooldowns are treated as available and
* have their error state reset, preventing the escalation loop described in
* #3604, #13623, #15851, and #11972.
*/
function makeStoreWithProfiles(): AuthProfileStore {
return {
version: 1,
profiles: {
"anthropic:default": { type: "api_key", provider: "anthropic", key: "sk-1" },
"anthropic:secondary": { type: "api_key", provider: "anthropic", key: "sk-2" },
"openai:default": { type: "api_key", provider: "openai", key: "sk-oi" },
},
usageStats: {},
};
}
describe("resolveAuthProfileOrder — cooldown auto-expiry", () => {
it("places profile with expired cooldown in available list (round-robin path)", () => {
const store = makeStoreWithProfiles();
store.usageStats = {
"anthropic:default": {
cooldownUntil: Date.now() - 10_000,
errorCount: 4,
failureCounts: { rate_limit: 4 },
lastFailureAt: Date.now() - 70_000,
},
};
const order = resolveAuthProfileOrder({ store, provider: "anthropic" });
// Profile should be in the result (available, not skipped)
expect(order).toContain("anthropic:default");
// Should no longer report as in cooldown
expect(isProfileInCooldown(store, "anthropic:default")).toBe(false);
// Error state should have been reset
expect(store.usageStats?.["anthropic:default"]?.errorCount).toBe(0);
expect(store.usageStats?.["anthropic:default"]?.cooldownUntil).toBeUndefined();
});
it("places profile with expired cooldown in available list (explicit-order path)", () => {
const store = makeStoreWithProfiles();
store.order = { anthropic: ["anthropic:secondary", "anthropic:default"] };
store.usageStats = {
"anthropic:default": {
cooldownUntil: Date.now() - 5_000,
errorCount: 3,
},
};
const order = resolveAuthProfileOrder({ store, provider: "anthropic" });
// Both profiles available — explicit order respected
expect(order[0]).toBe("anthropic:secondary");
expect(order).toContain("anthropic:default");
// Expired cooldown cleared
expect(store.usageStats?.["anthropic:default"]?.cooldownUntil).toBeUndefined();
expect(store.usageStats?.["anthropic:default"]?.errorCount).toBe(0);
});
it("keeps profile with active cooldown in cooldown list", () => {
const futureMs = Date.now() + 300_000;
const store = makeStoreWithProfiles();
store.usageStats = {
"anthropic:default": {
cooldownUntil: futureMs,
errorCount: 3,
},
};
const order = resolveAuthProfileOrder({ store, provider: "anthropic" });
// Profile is still in the result (appended after available profiles)
expect(order).toContain("anthropic:default");
// Should still be in cooldown
expect(isProfileInCooldown(store, "anthropic:default")).toBe(true);
expect(store.usageStats?.["anthropic:default"]?.errorCount).toBe(3);
});
it("expired cooldown resets error count — prevents escalation on next failure", () => {
const store = makeStoreWithProfiles();
store.usageStats = {
"anthropic:default": {
cooldownUntil: Date.now() - 1_000,
errorCount: 4, // Would cause 1-hour cooldown on next failure
failureCounts: { rate_limit: 4 },
lastFailureAt: Date.now() - 3_700_000,
},
};
resolveAuthProfileOrder({ store, provider: "anthropic" });
// After clearing, errorCount is 0. If the profile fails again,
// the next cooldown will be 60 seconds (errorCount 1) instead of
// 1 hour (errorCount 5). This is the core fix for #3604.
expect(store.usageStats?.["anthropic:default"]?.errorCount).toBe(0);
expect(store.usageStats?.["anthropic:default"]?.failureCounts).toBeUndefined();
});
it("mixed active and expired cooldowns across profiles", () => {
const store = makeStoreWithProfiles();
store.usageStats = {
"anthropic:default": {
cooldownUntil: Date.now() - 1_000,
errorCount: 3,
},
"anthropic:secondary": {
cooldownUntil: Date.now() + 300_000,
errorCount: 2,
},
};
const order = resolveAuthProfileOrder({ store, provider: "anthropic" });
// anthropic:default should be available (expired, cleared)
expect(store.usageStats?.["anthropic:default"]?.cooldownUntil).toBeUndefined();
expect(store.usageStats?.["anthropic:default"]?.errorCount).toBe(0);
// anthropic:secondary should still be in cooldown
expect(store.usageStats?.["anthropic:secondary"]?.cooldownUntil).toBeGreaterThan(Date.now());
expect(store.usageStats?.["anthropic:secondary"]?.errorCount).toBe(2);
// Available profile should come first
expect(order[0]).toBe("anthropic:default");
});
it("does not affect profiles from other providers", () => {
const store = makeStoreWithProfiles();
store.usageStats = {
"anthropic:default": {
cooldownUntil: Date.now() - 1_000,
errorCount: 4,
},
"openai:default": {
cooldownUntil: Date.now() - 1_000,
errorCount: 3,
},
};
// Resolve only anthropic
resolveAuthProfileOrder({ store, provider: "anthropic" });
// Both should be cleared since clearExpiredCooldowns sweeps all profiles
// in the store — this is intentional for correctness.
expect(store.usageStats?.["anthropic:default"]?.errorCount).toBe(0);
expect(store.usageStats?.["openai:default"]?.errorCount).toBe(0);
});
});

@@ -1,125 +0,0 @@
import fs from "node:fs";
import os from "node:os";
import path from "node:path";
import { describe, expect, it } from "vitest";
import { ensureAuthProfileStore } from "./auth-profiles.js";
import { AUTH_STORE_VERSION } from "./auth-profiles/constants.js";
describe("ensureAuthProfileStore", () => {
it("migrates legacy auth.json and deletes it (PR #368)", () => {
const agentDir = fs.mkdtempSync(path.join(os.tmpdir(), "openclaw-auth-profiles-"));
try {
const legacyPath = path.join(agentDir, "auth.json");
fs.writeFileSync(
legacyPath,
`${JSON.stringify(
{
anthropic: {
type: "oauth",
provider: "anthropic",
access: "access-token",
refresh: "refresh-token",
expires: Date.now() + 60_000,
},
},
null,
2,
)}\n`,
"utf8",
);
const store = ensureAuthProfileStore(agentDir);
expect(store.profiles["anthropic:default"]).toMatchObject({
type: "oauth",
provider: "anthropic",
});
const migratedPath = path.join(agentDir, "auth-profiles.json");
expect(fs.existsSync(migratedPath)).toBe(true);
expect(fs.existsSync(legacyPath)).toBe(false);
// idempotent
const store2 = ensureAuthProfileStore(agentDir);
expect(store2.profiles["anthropic:default"]).toBeDefined();
expect(fs.existsSync(legacyPath)).toBe(false);
} finally {
fs.rmSync(agentDir, { recursive: true, force: true });
}
});
it("merges main auth profiles into agent store and keeps agent overrides", () => {
const root = fs.mkdtempSync(path.join(os.tmpdir(), "openclaw-auth-merge-"));
const previousAgentDir = process.env.OPENCLAW_AGENT_DIR;
const previousPiAgentDir = process.env.PI_CODING_AGENT_DIR;
try {
const mainDir = path.join(root, "main-agent");
const agentDir = path.join(root, "agent-x");
fs.mkdirSync(mainDir, { recursive: true });
fs.mkdirSync(agentDir, { recursive: true });
process.env.OPENCLAW_AGENT_DIR = mainDir;
process.env.PI_CODING_AGENT_DIR = mainDir;
const mainStore = {
version: AUTH_STORE_VERSION,
profiles: {
"openai:default": {
type: "api_key",
provider: "openai",
key: "main-key",
},
"anthropic:default": {
type: "api_key",
provider: "anthropic",
key: "main-anthropic-key",
},
},
};
fs.writeFileSync(
path.join(mainDir, "auth-profiles.json"),
`${JSON.stringify(mainStore, null, 2)}\n`,
"utf8",
);
const agentStore = {
version: AUTH_STORE_VERSION,
profiles: {
"openai:default": {
type: "api_key",
provider: "openai",
key: "agent-key",
},
},
};
fs.writeFileSync(
path.join(agentDir, "auth-profiles.json"),
`${JSON.stringify(agentStore, null, 2)}\n`,
"utf8",
);
const store = ensureAuthProfileStore(agentDir);
expect(store.profiles["anthropic:default"]).toMatchObject({
type: "api_key",
provider: "anthropic",
key: "main-anthropic-key",
});
expect(store.profiles["openai:default"]).toMatchObject({
type: "api_key",
provider: "openai",
key: "agent-key",
});
} finally {
if (previousAgentDir === undefined) {
delete process.env.OPENCLAW_AGENT_DIR;
} else {
process.env.OPENCLAW_AGENT_DIR = previousAgentDir;
}
if (previousPiAgentDir === undefined) {
delete process.env.PI_CODING_AGENT_DIR;
} else {
process.env.PI_CODING_AGENT_DIR = previousPiAgentDir;
}
fs.rmSync(root, { recursive: true, force: true });
}
});
});

@@ -1,77 +0,0 @@
import { describe, expect, it } from "vitest";
import type { AuthProfileStore } from "./auth-profiles.js";
import { getSoonestCooldownExpiry } from "./auth-profiles.js";
function makeStore(usageStats?: AuthProfileStore["usageStats"]): AuthProfileStore {
return {
version: 1,
profiles: {},
usageStats,
};
}
describe("getSoonestCooldownExpiry", () => {
it("returns null when no cooldown timestamps exist", () => {
const store = makeStore();
expect(getSoonestCooldownExpiry(store, ["openai:p1"])).toBeNull();
});
it("returns earliest unusable time across profiles", () => {
const store = makeStore({
"openai:p1": {
cooldownUntil: 1_700_000_002_000,
disabledUntil: 1_700_000_004_000,
},
"openai:p2": {
cooldownUntil: 1_700_000_003_000,
},
"openai:p3": {
disabledUntil: 1_700_000_001_000,
},
});
expect(getSoonestCooldownExpiry(store, ["openai:p1", "openai:p2", "openai:p3"])).toBe(
1_700_000_001_000,
);
});
it("ignores unknown profiles and invalid cooldown values", () => {
const store = makeStore({
"openai:p1": {
cooldownUntil: -1,
},
"openai:p2": {
cooldownUntil: Infinity,
},
"openai:p3": {
disabledUntil: NaN,
},
"openai:p4": {
cooldownUntil: 1_700_000_005_000,
},
});
expect(
getSoonestCooldownExpiry(store, [
"missing",
"openai:p1",
"openai:p2",
"openai:p3",
"openai:p4",
]),
).toBe(1_700_000_005_000);
});
it("returns past timestamps when cooldown already expired", () => {
const store = makeStore({
"openai:p1": {
cooldownUntil: 1_700_000_000_000,
},
"openai:p2": {
disabledUntil: 1_700_000_010_000,
},
});
expect(getSoonestCooldownExpiry(store, ["openai:p1", "openai:p2"])).toBe(1_700_000_000_000);
});
});

@@ -1,139 +0,0 @@
import fs from "node:fs";
import os from "node:os";
import path from "node:path";
import { describe, expect, it } from "vitest";
import {
calculateAuthProfileCooldownMs,
ensureAuthProfileStore,
markAuthProfileFailure,
} from "./auth-profiles.js";
type AuthProfileStore = ReturnType<typeof ensureAuthProfileStore>;
async function withAuthProfileStore(
fn: (ctx: { agentDir: string; store: AuthProfileStore }) => Promise<void>,
): Promise<void> {
const agentDir = fs.mkdtempSync(path.join(os.tmpdir(), "openclaw-auth-"));
try {
const authPath = path.join(agentDir, "auth-profiles.json");
fs.writeFileSync(
authPath,
JSON.stringify({
version: 1,
profiles: {
"anthropic:default": {
type: "api_key",
provider: "anthropic",
key: "sk-default",
},
},
}),
);
const store = ensureAuthProfileStore(agentDir);
await fn({ agentDir, store });
} finally {
fs.rmSync(agentDir, { recursive: true, force: true });
}
}
function expectCooldownInRange(remainingMs: number, minMs: number, maxMs: number): void {
expect(remainingMs).toBeGreaterThan(minMs);
expect(remainingMs).toBeLessThan(maxMs);
}
describe("markAuthProfileFailure", () => {
it("disables billing failures for ~5 hours by default", async () => {
await withAuthProfileStore(async ({ agentDir, store }) => {
const startedAt = Date.now();
await markAuthProfileFailure({
store,
profileId: "anthropic:default",
reason: "billing",
agentDir,
});
const disabledUntil = store.usageStats?.["anthropic:default"]?.disabledUntil;
expect(typeof disabledUntil).toBe("number");
const remainingMs = (disabledUntil as number) - startedAt;
expectCooldownInRange(remainingMs, 4.5 * 60 * 60 * 1000, 5.5 * 60 * 60 * 1000);
});
});
it("honors per-provider billing backoff overrides", async () => {
await withAuthProfileStore(async ({ agentDir, store }) => {
const startedAt = Date.now();
await markAuthProfileFailure({
store,
profileId: "anthropic:default",
reason: "billing",
agentDir,
cfg: {
auth: {
cooldowns: {
billingBackoffHoursByProvider: { Anthropic: 1 },
billingMaxHours: 2,
},
},
} as never,
});
const disabledUntil = store.usageStats?.["anthropic:default"]?.disabledUntil;
expect(typeof disabledUntil).toBe("number");
const remainingMs = (disabledUntil as number) - startedAt;
expectCooldownInRange(remainingMs, 0.8 * 60 * 60 * 1000, 1.2 * 60 * 60 * 1000);
});
});
it("resets backoff counters outside the failure window", async () => {
const agentDir = fs.mkdtempSync(path.join(os.tmpdir(), "openclaw-auth-"));
try {
const authPath = path.join(agentDir, "auth-profiles.json");
const now = Date.now();
fs.writeFileSync(
authPath,
JSON.stringify({
version: 1,
profiles: {
"anthropic:default": {
type: "api_key",
provider: "anthropic",
key: "sk-default",
},
},
usageStats: {
"anthropic:default": {
errorCount: 9,
failureCounts: { billing: 3 },
lastFailureAt: now - 48 * 60 * 60 * 1000,
},
},
}),
);
const store = ensureAuthProfileStore(agentDir);
await markAuthProfileFailure({
store,
profileId: "anthropic:default",
reason: "billing",
agentDir,
cfg: {
auth: { cooldowns: { failureWindowHours: 24 } },
} as never,
});
expect(store.usageStats?.["anthropic:default"]?.errorCount).toBe(1);
expect(store.usageStats?.["anthropic:default"]?.failureCounts?.billing).toBe(1);
} finally {
fs.rmSync(agentDir, { recursive: true, force: true });
}
});
});
describe("calculateAuthProfileCooldownMs", () => {
it("applies exponential backoff with a 1h cap", () => {
expect(calculateAuthProfileCooldownMs(1)).toBe(60_000);
expect(calculateAuthProfileCooldownMs(2)).toBe(5 * 60_000);
expect(calculateAuthProfileCooldownMs(3)).toBe(25 * 60_000);
expect(calculateAuthProfileCooldownMs(4)).toBe(60 * 60_000);
expect(calculateAuthProfileCooldownMs(5)).toBe(60 * 60_000);
});
});

@@ -1,218 +0,0 @@
import { describe, expect, it } from "vitest";
import { resolveAuthProfileOrder } from "./auth-profiles.js";
import {
ANTHROPIC_CFG,
ANTHROPIC_STORE,
} from "./auth-profiles.resolve-auth-profile-order.fixtures.js";
import type { AuthProfileStore } from "./auth-profiles/types.js";
describe("resolveAuthProfileOrder", () => {
const store = ANTHROPIC_STORE;
const cfg = ANTHROPIC_CFG;
it("does not prioritize lastGood over round-robin ordering", () => {
const order = resolveAuthProfileOrder({
cfg,
store: {
...store,
lastGood: { anthropic: "anthropic:work" },
usageStats: {
"anthropic:default": { lastUsed: 100 },
"anthropic:work": { lastUsed: 200 },
},
},
provider: "anthropic",
});
expect(order[0]).toBe("anthropic:default");
});
it("uses explicit profiles when order is missing", () => {
const order = resolveAuthProfileOrder({
cfg,
store,
provider: "anthropic",
});
expect(order).toEqual(["anthropic:default", "anthropic:work"]);
});
it("uses configured order when provided", () => {
const order = resolveAuthProfileOrder({
cfg: {
auth: {
order: { anthropic: ["anthropic:work", "anthropic:default"] },
profiles: cfg.auth?.profiles,
},
},
store,
provider: "anthropic",
});
expect(order).toEqual(["anthropic:work", "anthropic:default"]);
});
it("prefers store order over config order", () => {
const order = resolveAuthProfileOrder({
cfg: {
auth: {
order: { anthropic: ["anthropic:default", "anthropic:work"] },
profiles: cfg.auth?.profiles,
},
},
store: {
...store,
order: { anthropic: ["anthropic:work", "anthropic:default"] },
},
provider: "anthropic",
});
expect(order).toEqual(["anthropic:work", "anthropic:default"]);
});
it("pushes cooldown profiles to the end even with store order", () => {
const now = Date.now();
const order = resolveAuthProfileOrder({
store: {
...store,
order: { anthropic: ["anthropic:default", "anthropic:work"] },
usageStats: {
"anthropic:default": { cooldownUntil: now + 60_000 },
"anthropic:work": { lastUsed: 1 },
},
},
provider: "anthropic",
});
expect(order).toEqual(["anthropic:work", "anthropic:default"]);
});
it("pushes cooldown profiles to the end even with configured order", () => {
const now = Date.now();
const order = resolveAuthProfileOrder({
cfg: {
auth: {
order: { anthropic: ["anthropic:default", "anthropic:work"] },
profiles: cfg.auth?.profiles,
},
},
store: {
...store,
usageStats: {
"anthropic:default": { cooldownUntil: now + 60_000 },
"anthropic:work": { lastUsed: 1 },
},
},
provider: "anthropic",
});
expect(order).toEqual(["anthropic:work", "anthropic:default"]);
});
it("pushes disabled profiles to the end even with store order", () => {
const now = Date.now();
const order = resolveAuthProfileOrder({
store: {
...store,
order: { anthropic: ["anthropic:default", "anthropic:work"] },
usageStats: {
"anthropic:default": {
disabledUntil: now + 60_000,
disabledReason: "billing",
},
"anthropic:work": { lastUsed: 1 },
},
},
provider: "anthropic",
});
expect(order).toEqual(["anthropic:work", "anthropic:default"]);
});
it("pushes disabled profiles to the end even with configured order", () => {
const now = Date.now();
const order = resolveAuthProfileOrder({
cfg: {
auth: {
order: { anthropic: ["anthropic:default", "anthropic:work"] },
profiles: cfg.auth?.profiles,
},
},
store: {
...store,
usageStats: {
"anthropic:default": {
disabledUntil: now + 60_000,
disabledReason: "billing",
},
"anthropic:work": { lastUsed: 1 },
},
},
provider: "anthropic",
});
expect(order).toEqual(["anthropic:work", "anthropic:default"]);
});
it("mode: oauth config accepts both oauth and token credentials (issue #559)", () => {
const now = Date.now();
const storeWithBothTypes: AuthProfileStore = {
version: 1,
profiles: {
"anthropic:oauth-cred": {
type: "oauth",
provider: "anthropic",
access: "access-token",
refresh: "refresh-token",
expires: now + 60_000,
},
"anthropic:token-cred": {
type: "token",
provider: "anthropic",
token: "just-a-token",
expires: now + 60_000,
},
},
};
const orderOauthCred = resolveAuthProfileOrder({
store: storeWithBothTypes,
provider: "anthropic",
cfg: {
auth: {
profiles: {
"anthropic:oauth-cred": { provider: "anthropic", mode: "oauth" },
},
},
},
});
expect(orderOauthCred).toContain("anthropic:oauth-cred");
const orderTokenCred = resolveAuthProfileOrder({
store: storeWithBothTypes,
provider: "anthropic",
cfg: {
auth: {
profiles: {
"anthropic:token-cred": { provider: "anthropic", mode: "oauth" },
},
},
},
});
expect(orderTokenCred).toContain("anthropic:token-cred");
});
it("mode: token config rejects oauth credentials (issue #559 root cause)", () => {
const now = Date.now();
const storeWithOauth: AuthProfileStore = {
version: 1,
profiles: {
"anthropic:oauth-cred": {
type: "oauth",
provider: "anthropic",
access: "access-token",
refresh: "refresh-token",
expires: now + 60_000,
},
},
};
const order = resolveAuthProfileOrder({
store: storeWithOauth,
provider: "anthropic",
cfg: {
auth: {
profiles: {
"anthropic:oauth-cred": { provider: "anthropic", mode: "token" },
},
},
},
});
expect(order).not.toContain("anthropic:oauth-cred");
});
});

@@ -1,27 +0,0 @@
import type { OpenClawConfig } from "../config/config.js";
import type { AuthProfileStore } from "./auth-profiles.js";
export const ANTHROPIC_STORE: AuthProfileStore = {
version: 1,
profiles: {
"anthropic:default": {
type: "api_key",
provider: "anthropic",
key: "sk-default",
},
"anthropic:work": {
type: "api_key",
provider: "anthropic",
key: "sk-work",
},
},
};
export const ANTHROPIC_CFG: OpenClawConfig = {
auth: {
profiles: {
"anthropic:default": { provider: "anthropic", mode: "api_key" },
"anthropic:work": { provider: "anthropic", mode: "api_key" },
},
},
};

@@ -1,103 +0,0 @@
import { describe, expect, it } from "vitest";
import { type AuthProfileStore, resolveAuthProfileOrder } from "./auth-profiles.js";
function makeApiKeyStore(provider: string, profileIds: string[]): AuthProfileStore {
return {
version: 1,
profiles: Object.fromEntries(
profileIds.map((profileId) => [
profileId,
{
type: "api_key",
provider,
key: profileId.endsWith(":work") ? "sk-work" : "sk-default",
},
]),
),
};
}
function makeApiKeyProfilesByProviderProvider(
providerByProfileId: Record<string, string>,
): Record<string, { provider: string; mode: "api_key" }> {
return Object.fromEntries(
Object.entries(providerByProfileId).map(([profileId, provider]) => [
profileId,
{ provider, mode: "api_key" },
]),
);
}
describe("resolveAuthProfileOrder", () => {
it("normalizes z.ai aliases in auth.order", () => {
const order = resolveAuthProfileOrder({
cfg: {
auth: {
order: { "z.ai": ["zai:work", "zai:default"] },
profiles: makeApiKeyProfilesByProviderProvider({
"zai:default": "zai",
"zai:work": "zai",
}),
},
},
store: makeApiKeyStore("zai", ["zai:default", "zai:work"]),
provider: "zai",
});
expect(order).toEqual(["zai:work", "zai:default"]);
});
it("normalizes provider casing in auth.order keys", () => {
const order = resolveAuthProfileOrder({
cfg: {
auth: {
order: { OpenAI: ["openai:work", "openai:default"] },
profiles: makeApiKeyProfilesByProviderProvider({
"openai:default": "openai",
"openai:work": "openai",
}),
},
},
store: makeApiKeyStore("openai", ["openai:default", "openai:work"]),
provider: "openai",
});
expect(order).toEqual(["openai:work", "openai:default"]);
});
it("normalizes z.ai aliases in auth.profiles", () => {
const order = resolveAuthProfileOrder({
cfg: {
auth: {
profiles: makeApiKeyProfilesByProviderProvider({
"zai:default": "z.ai",
"zai:work": "Z.AI",
}),
},
},
store: makeApiKeyStore("zai", ["zai:default", "zai:work"]),
provider: "zai",
});
expect(order).toEqual(["zai:default", "zai:work"]);
});
it("prioritizes oauth profiles when order missing", () => {
const mixedStore: AuthProfileStore = {
version: 1,
profiles: {
"anthropic:default": {
type: "api_key",
provider: "anthropic",
key: "sk-default",
},
"anthropic:oauth": {
type: "oauth",
provider: "anthropic",
access: "access-token",
refresh: "refresh-token",
expires: Date.now() + 60_000,
},
},
};
const order = resolveAuthProfileOrder({
store: mixedStore,
provider: "anthropic",
});
expect(order).toEqual(["anthropic:oauth", "anthropic:default"]);
});
});

@@ -1,72 +0,0 @@
import { describe, expect, it } from "vitest";
import { resolveAuthProfileOrder } from "./auth-profiles.js";
describe("resolveAuthProfileOrder", () => {
it("orders by lastUsed when no explicit order exists", () => {
const order = resolveAuthProfileOrder({
store: {
version: 1,
profiles: {
"anthropic:a": {
type: "oauth",
provider: "anthropic",
access: "access-token",
refresh: "refresh-token",
expires: Date.now() + 60_000,
},
"anthropic:b": {
type: "api_key",
provider: "anthropic",
key: "sk-b",
},
"anthropic:c": {
type: "api_key",
provider: "anthropic",
key: "sk-c",
},
},
usageStats: {
"anthropic:a": { lastUsed: 200 },
"anthropic:b": { lastUsed: 100 },
"anthropic:c": { lastUsed: 300 },
},
},
provider: "anthropic",
});
expect(order).toEqual(["anthropic:a", "anthropic:b", "anthropic:c"]);
});
it("pushes cooldown profiles to the end, ordered by cooldown expiry", () => {
const now = Date.now();
const order = resolveAuthProfileOrder({
store: {
version: 1,
profiles: {
"anthropic:ready": {
type: "api_key",
provider: "anthropic",
key: "sk-ready",
},
"anthropic:cool1": {
type: "oauth",
provider: "anthropic",
access: "access-token",
refresh: "refresh-token",
expires: now + 60_000,
},
"anthropic:cool2": {
type: "api_key",
provider: "anthropic",
key: "sk-cool",
},
},
usageStats: {
"anthropic:ready": { lastUsed: 50 },
"anthropic:cool1": { cooldownUntil: now + 5_000 },
"anthropic:cool2": { cooldownUntil: now + 1_000 },
},
},
provider: "anthropic",
});
expect(order).toEqual(["anthropic:ready", "anthropic:cool2", "anthropic:cool1"]);
});
});

@@ -1,220 +0,0 @@
import { describe, expect, it } from "vitest";
import { resolveAuthProfileOrder } from "./auth-profiles.js";
import {
ANTHROPIC_CFG,
ANTHROPIC_STORE,
} from "./auth-profiles.resolve-auth-profile-order.fixtures.js";
describe("resolveAuthProfileOrder", () => {
const store = ANTHROPIC_STORE;
const cfg = ANTHROPIC_CFG;
it("uses stored profiles when no config exists", () => {
const order = resolveAuthProfileOrder({
store,
provider: "anthropic",
});
expect(order).toEqual(["anthropic:default", "anthropic:work"]);
});
it("prioritizes preferred profiles", () => {
const order = resolveAuthProfileOrder({
cfg,
store,
provider: "anthropic",
preferredProfile: "anthropic:work",
});
expect(order[0]).toBe("anthropic:work");
expect(order).toContain("anthropic:default");
});
it("drops explicit order entries that are missing from the store", () => {
const order = resolveAuthProfileOrder({
cfg: {
auth: {
order: {
minimax: ["minimax:default", "minimax:prod"],
},
},
},
store: {
version: 1,
profiles: {
"minimax:prod": {
type: "api_key",
provider: "minimax",
key: "sk-prod",
},
},
},
provider: "minimax",
});
expect(order).toEqual(["minimax:prod"]);
});
it("falls back to stored provider profiles when config profile ids drift", () => {
const order = resolveAuthProfileOrder({
cfg: {
auth: {
profiles: {
"openai-codex:default": {
provider: "openai-codex",
mode: "oauth",
},
},
order: {
"openai-codex": ["openai-codex:default"],
},
},
},
store: {
version: 1,
profiles: {
"openai-codex:user@example.com": {
type: "oauth",
provider: "openai-codex",
access: "access-token",
refresh: "refresh-token",
expires: Date.now() + 60_000,
},
},
},
provider: "openai-codex",
});
expect(order).toEqual(["openai-codex:user@example.com"]);
});
it("does not bypass explicit ids when the configured profile exists but is invalid", () => {
const order = resolveAuthProfileOrder({
cfg: {
auth: {
profiles: {
"openai-codex:default": {
provider: "openai-codex",
mode: "token",
},
},
order: {
"openai-codex": ["openai-codex:default"],
},
},
},
store: {
version: 1,
profiles: {
"openai-codex:default": {
type: "token",
provider: "openai-codex",
token: "expired-token",
expires: Date.now() - 1_000,
},
"openai-codex:user@example.com": {
type: "oauth",
provider: "openai-codex",
access: "access-token",
refresh: "refresh-token",
expires: Date.now() + 60_000,
},
},
},
provider: "openai-codex",
});
expect(order).toEqual([]);
});
it("drops explicit order entries that belong to another provider", () => {
const order = resolveAuthProfileOrder({
cfg: {
auth: {
order: {
minimax: ["openai:default", "minimax:prod"],
},
},
},
store: {
version: 1,
profiles: {
"openai:default": {
type: "api_key",
provider: "openai",
key: "sk-openai",
},
"minimax:prod": {
type: "api_key",
provider: "minimax",
key: "sk-mini",
},
},
},
provider: "minimax",
});
expect(order).toEqual(["minimax:prod"]);
});
it("drops token profiles with empty credentials", () => {
const order = resolveAuthProfileOrder({
cfg: {
auth: {
order: {
minimax: ["minimax:default"],
},
},
},
store: {
version: 1,
profiles: {
"minimax:default": {
type: "token",
provider: "minimax",
token: " ",
},
},
},
provider: "minimax",
});
expect(order).toEqual([]);
});
it("drops token profiles that are already expired", () => {
const order = resolveAuthProfileOrder({
cfg: {
auth: {
order: {
minimax: ["minimax:default"],
},
},
},
store: {
version: 1,
profiles: {
"minimax:default": {
type: "token",
provider: "minimax",
token: "sk-minimax",
expires: Date.now() - 1000,
},
},
},
provider: "minimax",
});
expect(order).toEqual([]);
});
it("keeps oauth profiles that can refresh", () => {
const order = resolveAuthProfileOrder({
cfg: {
auth: {
order: {
anthropic: ["anthropic:oauth"],
},
},
},
store: {
version: 1,
profiles: {
"anthropic:oauth": {
type: "oauth",
provider: "anthropic",
access: "",
refresh: "refresh-token",
expires: Date.now() - 1000,
},
},
},
provider: "anthropic",
});
expect(order).toEqual(["anthropic:oauth"]);
});
});

@@ -1,44 +0,0 @@
export { CLAUDE_CLI_PROFILE_ID, CODEX_CLI_PROFILE_ID } from "./auth-profiles/constants.js";
export { resolveAuthProfileDisplayLabel } from "./auth-profiles/display.js";
export { formatAuthDoctorHint } from "./auth-profiles/doctor.js";
export { resolveApiKeyForProfile } from "./auth-profiles/oauth.js";
export { resolveAuthProfileOrder } from "./auth-profiles/order.js";
export { resolveAuthStorePathForDisplay } from "./auth-profiles/paths.js";
export {
dedupeProfileIds,
listProfilesForProvider,
markAuthProfileGood,
setAuthProfileOrder,
upsertAuthProfile,
upsertAuthProfileWithLock,
} from "./auth-profiles/profiles.js";
export {
repairOAuthProfileIdMismatch,
suggestOAuthProfileIdForLegacyDefault,
} from "./auth-profiles/repair.js";
export {
ensureAuthProfileStore,
loadAuthProfileStore,
saveAuthProfileStore,
} from "./auth-profiles/store.js";
export type {
ApiKeyCredential,
AuthProfileCredential,
AuthProfileFailureReason,
AuthProfileIdRepairResult,
AuthProfileStore,
OAuthCredential,
ProfileUsageStats,
TokenCredential,
} from "./auth-profiles/types.js";
export {
calculateAuthProfileCooldownMs,
clearAuthProfileCooldown,
clearExpiredCooldowns,
getSoonestCooldownExpiry,
isProfileInCooldown,
markAuthProfileCooldown,
markAuthProfileFailure,
markAuthProfileUsed,
resolveProfileUnusableUntilForDisplay,
} from "./auth-profiles/usage.js";

@@ -1,26 +0,0 @@
import { createSubsystemLogger } from "../../logging/subsystem.js";
export const AUTH_STORE_VERSION = 1;
export const AUTH_PROFILE_FILENAME = "auth-profiles.json";
export const LEGACY_AUTH_FILENAME = "auth.json";
export const CLAUDE_CLI_PROFILE_ID = "anthropic:claude-cli";
export const CODEX_CLI_PROFILE_ID = "openai-codex:codex-cli";
export const QWEN_CLI_PROFILE_ID = "qwen-portal:qwen-cli";
export const MINIMAX_CLI_PROFILE_ID = "minimax-portal:minimax-cli";
export const AUTH_STORE_LOCK_OPTIONS = {
retries: {
retries: 10,
factor: 2,
minTimeout: 100,
maxTimeout: 10_000,
randomize: true,
},
stale: 30_000,
} as const;
export const EXTERNAL_CLI_SYNC_TTL_MS = 15 * 60 * 1000;
export const EXTERNAL_CLI_NEAR_EXPIRY_MS = 10 * 60 * 1000;
export const log = createSubsystemLogger("agents/auth-profiles");

@@ -1,17 +0,0 @@
import type { OpenClawConfig } from "../../config/config.js";
import type { AuthProfileStore } from "./types.js";
export function resolveAuthProfileDisplayLabel(params: {
cfg?: OpenClawConfig;
store: AuthProfileStore;
profileId: string;
}): string {
const { cfg, store, profileId } = params;
const profile = store.profiles[profileId];
const configEmail = cfg?.auth?.profiles?.[profileId]?.email?.trim();
const email = configEmail || (profile && "email" in profile ? profile.email?.trim() : undefined);
if (email) {
return `${profileId} (${email})`;
}
return profileId;
}

@@ -1,47 +0,0 @@
import { formatCliCommand } from "../../cli/command-format.js";
import type { OpenClawConfig } from "../../config/config.js";
import { normalizeProviderId } from "../model-selection.js";
import { listProfilesForProvider } from "./profiles.js";
import { suggestOAuthProfileIdForLegacyDefault } from "./repair.js";
import type { AuthProfileStore } from "./types.js";
export function formatAuthDoctorHint(params: {
cfg?: OpenClawConfig;
store: AuthProfileStore;
provider: string;
profileId?: string;
}): string {
const providerKey = normalizeProviderId(params.provider);
if (providerKey !== "anthropic") {
return "";
}
const legacyProfileId = params.profileId ?? "anthropic:default";
const suggested = suggestOAuthProfileIdForLegacyDefault({
cfg: params.cfg,
store: params.store,
provider: providerKey,
legacyProfileId,
});
if (!suggested || suggested === legacyProfileId) {
return "";
}
const storeOauthProfiles = listProfilesForProvider(params.store, providerKey)
.filter((id) => params.store.profiles[id]?.type === "oauth")
.join(", ");
const cfgMode = params.cfg?.auth?.profiles?.[legacyProfileId]?.mode;
const cfgProvider = params.cfg?.auth?.profiles?.[legacyProfileId]?.provider;
return [
"Doctor hint (for GitHub issue):",
`- provider: ${providerKey}`,
`- config: ${legacyProfileId}${
cfgProvider || cfgMode ? ` (provider=${cfgProvider ?? "?"}, mode=${cfgMode ?? "?"})` : ""
}`,
`- auth store oauth profiles: ${storeOauthProfiles || "(none)"}`,
`- suggested profile: ${suggested}`,
`Fix: run "${formatCliCommand("openclaw doctor --yes")}"`,
].join("\n");
}

@@ -1,135 +0,0 @@
import {
readQwenCliCredentialsCached,
readMiniMaxCliCredentialsCached,
} from "../cli-credentials.js";
import {
EXTERNAL_CLI_NEAR_EXPIRY_MS,
EXTERNAL_CLI_SYNC_TTL_MS,
QWEN_CLI_PROFILE_ID,
MINIMAX_CLI_PROFILE_ID,
log,
} from "./constants.js";
import type { AuthProfileCredential, AuthProfileStore, OAuthCredential } from "./types.js";
function shallowEqualOAuthCredentials(a: OAuthCredential | undefined, b: OAuthCredential): boolean {
if (!a) {
return false;
}
if (a.type !== "oauth") {
return false;
}
return (
a.provider === b.provider &&
a.access === b.access &&
a.refresh === b.refresh &&
a.expires === b.expires &&
a.email === b.email &&
a.enterpriseUrl === b.enterpriseUrl &&
a.projectId === b.projectId &&
a.accountId === b.accountId
);
}
function isExternalProfileFresh(cred: AuthProfileCredential | undefined, now: number): boolean {
if (!cred) {
return false;
}
if (cred.type !== "oauth" && cred.type !== "token") {
return false;
}
if (cred.provider !== "qwen-portal" && cred.provider !== "minimax-portal") {
return false;
}
if (typeof cred.expires !== "number") {
return true;
}
return cred.expires > now + EXTERNAL_CLI_NEAR_EXPIRY_MS;
}
/** Sync external CLI credentials into the store for a given provider. */
function syncExternalCliCredentialsForProvider(
store: AuthProfileStore,
profileId: string,
provider: string,
readCredentials: () => OAuthCredential | null,
now: number,
): boolean {
const existing = store.profiles[profileId];
const shouldSync =
!existing || existing.provider !== provider || !isExternalProfileFresh(existing, now);
const creds = shouldSync ? readCredentials() : null;
if (!creds) {
return false;
}
const existingOAuth = existing?.type === "oauth" ? existing : undefined;
const shouldUpdate =
!existingOAuth ||
existingOAuth.provider !== provider ||
existingOAuth.expires <= now ||
creds.expires > existingOAuth.expires;
if (shouldUpdate && !shallowEqualOAuthCredentials(existingOAuth, creds)) {
store.profiles[profileId] = creds;
log.info(`synced ${provider} credentials from external cli`, {
profileId,
expires: new Date(creds.expires).toISOString(),
});
return true;
}
return false;
}
/**
* Sync OAuth credentials from external CLI tools (Qwen Code CLI, MiniMax CLI) into the store.
*
* Returns true if any credentials were updated.
*/
export function syncExternalCliCredentials(store: AuthProfileStore): boolean {
  let mutated = false;
  const now = Date.now();
  // Sync from Qwen Code CLI
  if (
    syncExternalCliCredentialsForProvider(
      store,
      QWEN_CLI_PROFILE_ID,
      "qwen-portal",
      () => readQwenCliCredentialsCached({ ttlMs: EXTERNAL_CLI_SYNC_TTL_MS }),
      now,
    )
  ) {
    mutated = true;
  }
  // Sync from MiniMax Portal CLI
  if (
    syncExternalCliCredentialsForProvider(
      store,
      MINIMAX_CLI_PROFILE_ID,
      "minimax-portal",
      () => readMiniMaxCliCredentialsCached({ ttlMs: EXTERNAL_CLI_SYNC_TTL_MS }),
      now,
    )
  ) {
    mutated = true;
  }
  return mutated;
}

@@ -1,360 +0,0 @@
import fs from "node:fs/promises";
import os from "node:os";
import path from "node:path";
import { afterEach, beforeEach, describe, expect, it, vi } from "vitest";
import { captureEnv } from "../../test-utils/env.js";
import { resolveApiKeyForProfile } from "./oauth.js";
import { ensureAuthProfileStore } from "./store.js";
import type { AuthProfileStore } from "./types.js";
describe("resolveApiKeyForProfile fallback to main agent", () => {
const envSnapshot = captureEnv([
"OPENCLAW_STATE_DIR",
"OPENCLAW_AGENT_DIR",
"PI_CODING_AGENT_DIR",
]);
let tmpDir: string;
let mainAgentDir: string;
let secondaryAgentDir: string;
beforeEach(async () => {
tmpDir = await fs.mkdtemp(path.join(os.tmpdir(), "oauth-fallback-test-"));
mainAgentDir = path.join(tmpDir, "agents", "main", "agent");
secondaryAgentDir = path.join(tmpDir, "agents", "kids", "agent");
await fs.mkdir(mainAgentDir, { recursive: true });
await fs.mkdir(secondaryAgentDir, { recursive: true });
// Set environment variables so resolveOpenClawAgentDir() returns mainAgentDir
process.env.OPENCLAW_STATE_DIR = tmpDir;
process.env.OPENCLAW_AGENT_DIR = mainAgentDir;
process.env.PI_CODING_AGENT_DIR = mainAgentDir;
});
afterEach(async () => {
vi.unstubAllGlobals();
envSnapshot.restore();
await fs.rm(tmpDir, { recursive: true, force: true });
});
it("falls back to main agent credentials when secondary agent token is expired and refresh fails", async () => {
const profileId = "anthropic:claude-cli";
const now = Date.now();
const expiredTime = now - 60 * 60 * 1000; // 1 hour ago
const freshTime = now + 60 * 60 * 1000; // 1 hour from now
// Write expired credentials for secondary agent
const secondaryStore: AuthProfileStore = {
version: 1,
profiles: {
[profileId]: {
type: "oauth",
provider: "anthropic",
access: "expired-access-token",
refresh: "expired-refresh-token",
expires: expiredTime,
},
},
};
await fs.writeFile(
path.join(secondaryAgentDir, "auth-profiles.json"),
JSON.stringify(secondaryStore),
);
// Write fresh credentials for main agent
const mainStore: AuthProfileStore = {
version: 1,
profiles: {
[profileId]: {
type: "oauth",
provider: "anthropic",
access: "fresh-access-token",
refresh: "fresh-refresh-token",
expires: freshTime,
},
},
};
await fs.writeFile(path.join(mainAgentDir, "auth-profiles.json"), JSON.stringify(mainStore));
// Mock fetch to simulate OAuth refresh failure
const fetchSpy = vi.fn(async () => {
return new Response(JSON.stringify({ error: "invalid_grant" }), {
status: 400,
headers: { "Content-Type": "application/json" },
});
});
vi.stubGlobal("fetch", fetchSpy);
// Load the secondary agent's store (will merge with main agent's store)
const loadedSecondaryStore = ensureAuthProfileStore(secondaryAgentDir);
// Call resolveApiKeyForProfile with the secondary agent's expired credentials
// This should:
// 1. Try to refresh the expired token (fails due to mocked fetch)
// 2. Fall back to main agent's fresh credentials
// 3. Copy those credentials to the secondary agent
const result = await resolveApiKeyForProfile({
store: loadedSecondaryStore,
profileId,
agentDir: secondaryAgentDir,
});
expect(result).not.toBeNull();
expect(result?.apiKey).toBe("fresh-access-token");
expect(result?.provider).toBe("anthropic");
// Verify the credentials were copied to the secondary agent
const updatedSecondaryStore = JSON.parse(
await fs.readFile(path.join(secondaryAgentDir, "auth-profiles.json"), "utf8"),
) as AuthProfileStore;
expect(updatedSecondaryStore.profiles[profileId]).toMatchObject({
access: "fresh-access-token",
expires: freshTime,
});
});
it("adopts newer OAuth token from main agent even when secondary token is still valid", async () => {
const profileId = "anthropic:claude-cli";
const now = Date.now();
const secondaryExpiry = now + 30 * 60 * 1000;
const mainExpiry = now + 2 * 60 * 60 * 1000;
const secondaryStore: AuthProfileStore = {
version: 1,
profiles: {
[profileId]: {
type: "oauth",
provider: "anthropic",
access: "secondary-access-token",
refresh: "secondary-refresh-token",
expires: secondaryExpiry,
},
},
};
await fs.writeFile(
path.join(secondaryAgentDir, "auth-profiles.json"),
JSON.stringify(secondaryStore),
);
const mainStore: AuthProfileStore = {
version: 1,
profiles: {
[profileId]: {
type: "oauth",
provider: "anthropic",
access: "main-newer-access-token",
refresh: "main-newer-refresh-token",
expires: mainExpiry,
},
},
};
await fs.writeFile(path.join(mainAgentDir, "auth-profiles.json"), JSON.stringify(mainStore));
const loadedSecondaryStore = ensureAuthProfileStore(secondaryAgentDir);
const result = await resolveApiKeyForProfile({
store: loadedSecondaryStore,
profileId,
agentDir: secondaryAgentDir,
});
expect(result?.apiKey).toBe("main-newer-access-token");
const updatedSecondaryStore = JSON.parse(
await fs.readFile(path.join(secondaryAgentDir, "auth-profiles.json"), "utf8"),
) as AuthProfileStore;
expect(updatedSecondaryStore.profiles[profileId]).toMatchObject({
access: "main-newer-access-token",
expires: mainExpiry,
});
});
it("adopts main token when secondary expires is NaN/malformed", async () => {
const profileId = "anthropic:claude-cli";
const now = Date.now();
const mainExpiry = now + 2 * 60 * 60 * 1000;
const secondaryStore: AuthProfileStore = {
version: 1,
profiles: {
[profileId]: {
type: "oauth",
provider: "anthropic",
access: "secondary-stale",
refresh: "secondary-refresh",
expires: NaN,
},
},
};
await fs.writeFile(
path.join(secondaryAgentDir, "auth-profiles.json"),
JSON.stringify(secondaryStore),
);
const mainStore: AuthProfileStore = {
version: 1,
profiles: {
[profileId]: {
type: "oauth",
provider: "anthropic",
access: "main-fresh-token",
refresh: "main-refresh",
expires: mainExpiry,
},
},
};
await fs.writeFile(path.join(mainAgentDir, "auth-profiles.json"), JSON.stringify(mainStore));
const loadedSecondaryStore = ensureAuthProfileStore(secondaryAgentDir);
const result = await resolveApiKeyForProfile({
store: loadedSecondaryStore,
profileId,
agentDir: secondaryAgentDir,
});
expect(result?.apiKey).toBe("main-fresh-token");
});
it("accepts mode=token + type=oauth for legacy compatibility", async () => {
const profileId = "anthropic:default";
const store: AuthProfileStore = {
version: 1,
profiles: {
[profileId]: {
type: "oauth",
provider: "anthropic",
access: "oauth-token",
refresh: "refresh-token",
expires: Date.now() + 60_000,
},
},
};
const result = await resolveApiKeyForProfile({
cfg: {
auth: {
profiles: {
[profileId]: {
provider: "anthropic",
mode: "token",
},
},
},
},
store,
profileId,
});
expect(result?.apiKey).toBe("oauth-token");
});
it("accepts mode=oauth + type=token (regression)", async () => {
const profileId = "anthropic:default";
const store: AuthProfileStore = {
version: 1,
profiles: {
[profileId]: {
type: "token",
provider: "anthropic",
token: "static-token",
expires: Date.now() + 60_000,
},
},
};
const result = await resolveApiKeyForProfile({
cfg: {
auth: {
profiles: {
[profileId]: {
provider: "anthropic",
mode: "oauth",
},
},
},
},
store,
profileId,
});
expect(result?.apiKey).toBe("static-token");
});
it("rejects true mode/type mismatches", async () => {
const profileId = "anthropic:default";
const store: AuthProfileStore = {
version: 1,
profiles: {
[profileId]: {
type: "oauth",
provider: "anthropic",
access: "oauth-token",
refresh: "refresh-token",
expires: Date.now() + 60_000,
},
},
};
const result = await resolveApiKeyForProfile({
cfg: {
auth: {
profiles: {
[profileId]: {
provider: "anthropic",
mode: "api_key",
},
},
},
},
store,
profileId,
});
expect(result).toBeNull();
});
it("throws error when both secondary and main agent credentials are expired", async () => {
const profileId = "anthropic:claude-cli";
const now = Date.now();
const expiredTime = now - 60 * 60 * 1000; // 1 hour ago
// Write expired credentials for both agents
const expiredStore: AuthProfileStore = {
version: 1,
profiles: {
[profileId]: {
type: "oauth",
provider: "anthropic",
access: "expired-access-token",
refresh: "expired-refresh-token",
expires: expiredTime,
},
},
};
await fs.writeFile(
path.join(secondaryAgentDir, "auth-profiles.json"),
JSON.stringify(expiredStore),
);
await fs.writeFile(path.join(mainAgentDir, "auth-profiles.json"), JSON.stringify(expiredStore));
// Mock fetch to simulate OAuth refresh failure
const fetchSpy = vi.fn(async () => {
return new Response(JSON.stringify({ error: "invalid_grant" }), {
status: 400,
headers: { "Content-Type": "application/json" },
});
});
vi.stubGlobal("fetch", fetchSpy);
const loadedSecondaryStore = ensureAuthProfileStore(secondaryAgentDir);
// Should throw because both agents have expired credentials
await expect(
resolveApiKeyForProfile({
store: loadedSecondaryStore,
profileId,
agentDir: secondaryAgentDir,
}),
).rejects.toThrow(/OAuth token refresh failed/);
});
});

@@ -1,161 +0,0 @@
import { describe, expect, it } from "vitest";
import type { OpenClawConfig } from "../../config/config.js";
import { resolveApiKeyForProfile } from "./oauth.js";
import type { AuthProfileStore } from "./types.js";
function cfgFor(profileId: string, provider: string, mode: "api_key" | "token" | "oauth") {
return {
auth: {
profiles: {
[profileId]: { provider, mode },
},
},
} satisfies OpenClawConfig;
}
describe("resolveApiKeyForProfile config compatibility", () => {
it("accepts token credentials when config mode is oauth", async () => {
const profileId = "anthropic:token";
const store: AuthProfileStore = {
version: 1,
profiles: {
[profileId]: {
type: "token",
provider: "anthropic",
token: "tok-123",
},
},
};
const result = await resolveApiKeyForProfile({
cfg: cfgFor(profileId, "anthropic", "oauth"),
store,
profileId,
});
expect(result).toEqual({
apiKey: "tok-123",
provider: "anthropic",
email: undefined,
});
});
it("rejects token credentials when config mode is api_key", async () => {
const profileId = "anthropic:token";
const store: AuthProfileStore = {
version: 1,
profiles: {
[profileId]: {
type: "token",
provider: "anthropic",
token: "tok-123",
},
},
};
const result = await resolveApiKeyForProfile({
cfg: cfgFor(profileId, "anthropic", "api_key"),
store,
profileId,
});
expect(result).toBeNull();
});
it("accepts oauth credentials when config mode is token (bidirectional compat)", async () => {
const profileId = "anthropic:oauth";
const store: AuthProfileStore = {
version: 1,
profiles: {
[profileId]: {
type: "oauth",
provider: "anthropic",
access: "access-123",
refresh: "refresh-123",
expires: Date.now() + 60_000,
},
},
};
const result = await resolveApiKeyForProfile({
cfg: cfgFor(profileId, "anthropic", "token"),
store,
profileId,
});
// token ↔ oauth are bidirectionally compatible bearer-token auth paths.
expect(result).toEqual({
apiKey: "access-123",
provider: "anthropic",
email: undefined,
});
});
it("rejects credentials when provider does not match config", async () => {
const profileId = "anthropic:token";
const store: AuthProfileStore = {
version: 1,
profiles: {
[profileId]: {
type: "token",
provider: "anthropic",
token: "tok-123",
},
},
};
const result = await resolveApiKeyForProfile({
cfg: cfgFor(profileId, "openai", "token"),
store,
profileId,
});
expect(result).toBeNull();
});
});
describe("resolveApiKeyForProfile token expiry handling", () => {
it("returns null for expired token credentials", async () => {
const profileId = "anthropic:token-expired";
const store: AuthProfileStore = {
version: 1,
profiles: {
[profileId]: {
type: "token",
provider: "anthropic",
token: "tok-expired",
expires: Date.now() - 1_000,
},
},
};
const result = await resolveApiKeyForProfile({
cfg: cfgFor(profileId, "anthropic", "token"),
store,
profileId,
});
expect(result).toBeNull();
});
it("accepts token credentials when expires is 0", async () => {
const profileId = "anthropic:token-no-expiry";
const store: AuthProfileStore = {
version: 1,
profiles: {
[profileId]: {
type: "token",
provider: "anthropic",
token: "tok-123",
expires: 0,
},
},
};
const result = await resolveApiKeyForProfile({
cfg: cfgFor(profileId, "anthropic", "token"),
store,
profileId,
});
expect(result).toEqual({
apiKey: "tok-123",
provider: "anthropic",
email: undefined,
});
});
});

@@ -1,376 +0,0 @@
import {
getOAuthApiKey,
getOAuthProviders,
type OAuthCredentials,
type OAuthProvider,
} from "@mariozechner/pi-ai";
import type { OpenClawConfig } from "../../config/config.js";
import { withFileLock } from "../../infra/file-lock.js";
import { refreshQwenPortalCredentials } from "../../providers/qwen-portal-oauth.js";
import { refreshChutesTokens } from "../chutes-oauth.js";
import { AUTH_STORE_LOCK_OPTIONS, log } from "./constants.js";
import { formatAuthDoctorHint } from "./doctor.js";
import { ensureAuthStoreFile, resolveAuthStorePath } from "./paths.js";
import { suggestOAuthProfileIdForLegacyDefault } from "./repair.js";
import { ensureAuthProfileStore, saveAuthProfileStore } from "./store.js";
import type { AuthProfileStore } from "./types.js";
const OAUTH_PROVIDER_IDS = new Set<string>(getOAuthProviders().map((provider) => provider.id));
const isOAuthProvider = (provider: string): provider is OAuthProvider =>
OAUTH_PROVIDER_IDS.has(provider);
const resolveOAuthProvider = (provider: string): OAuthProvider | null =>
isOAuthProvider(provider) ? provider : null;
/** Bearer-token auth modes that are interchangeable (oauth tokens and raw tokens). */
const BEARER_AUTH_MODES = new Set(["oauth", "token"]);
const isCompatibleModeType = (mode: string | undefined, type: string | undefined): boolean => {
if (!mode || !type) {
return false;
}
if (mode === type) {
return true;
}
// Both token and oauth represent bearer-token auth paths — allow bidirectional compat.
return BEARER_AUTH_MODES.has(mode) && BEARER_AUTH_MODES.has(type);
};
function isProfileConfigCompatible(params: {
cfg?: OpenClawConfig;
profileId: string;
provider: string;
mode: "api_key" | "token" | "oauth";
allowOAuthTokenCompatibility?: boolean;
}): boolean {
const profileConfig = params.cfg?.auth?.profiles?.[params.profileId];
if (profileConfig && profileConfig.provider !== params.provider) {
return false;
}
if (profileConfig && !isCompatibleModeType(profileConfig.mode, params.mode)) {
return false;
}
return true;
}
function buildOAuthApiKey(provider: string, credentials: OAuthCredentials): string {
const needsProjectId = provider === "google-gemini-cli" || provider === "google-antigravity";
return needsProjectId
? JSON.stringify({
token: credentials.access,
projectId: credentials.projectId,
})
: credentials.access;
}
function buildApiKeyProfileResult(params: { apiKey: string; provider: string; email?: string }) {
return {
apiKey: params.apiKey,
provider: params.provider,
email: params.email,
};
}
function buildOAuthProfileResult(params: {
provider: string;
credentials: OAuthCredentials;
email?: string;
}) {
return buildApiKeyProfileResult({
apiKey: buildOAuthApiKey(params.provider, params.credentials),
provider: params.provider,
email: params.email,
});
}
function isExpiredCredential(expires: number | undefined): boolean {
return (
typeof expires === "number" && Number.isFinite(expires) && expires > 0 && Date.now() >= expires
);
}
type ResolveApiKeyForProfileParams = {
cfg?: OpenClawConfig;
store: AuthProfileStore;
profileId: string;
agentDir?: string;
};
function adoptNewerMainOAuthCredential(params: {
store: AuthProfileStore;
profileId: string;
agentDir?: string;
cred: OAuthCredentials & { type: "oauth"; provider: string; email?: string };
}): (OAuthCredentials & { type: "oauth"; provider: string; email?: string }) | null {
if (!params.agentDir) {
return null;
}
try {
const mainStore = ensureAuthProfileStore(undefined);
const mainCred = mainStore.profiles[params.profileId];
if (
mainCred?.type === "oauth" &&
mainCred.provider === params.cred.provider &&
Number.isFinite(mainCred.expires) &&
(!Number.isFinite(params.cred.expires) || mainCred.expires > params.cred.expires)
) {
params.store.profiles[params.profileId] = { ...mainCred };
saveAuthProfileStore(params.store, params.agentDir);
log.info("adopted newer OAuth credentials from main agent", {
profileId: params.profileId,
agentDir: params.agentDir,
expires: new Date(mainCred.expires).toISOString(),
});
return mainCred;
}
} catch (err) {
// Best-effort: don't crash if main agent store is missing or unreadable.
log.debug("adoptNewerMainOAuthCredential failed", {
profileId: params.profileId,
error: err instanceof Error ? err.message : String(err),
});
}
return null;
}
async function refreshOAuthTokenWithLock(params: {
profileId: string;
agentDir?: string;
}): Promise<{ apiKey: string; newCredentials: OAuthCredentials } | null> {
const authPath = resolveAuthStorePath(params.agentDir);
ensureAuthStoreFile(authPath);
return await withFileLock(authPath, AUTH_STORE_LOCK_OPTIONS, async () => {
const store = ensureAuthProfileStore(params.agentDir);
const cred = store.profiles[params.profileId];
if (!cred || cred.type !== "oauth") {
return null;
}
if (Date.now() < cred.expires) {
return {
apiKey: buildOAuthApiKey(cred.provider, cred),
newCredentials: cred,
};
}
const oauthCreds: Record<string, OAuthCredentials> = {
[cred.provider]: cred,
};
const result =
String(cred.provider) === "chutes"
? await (async () => {
const newCredentials = await refreshChutesTokens({
credential: cred,
});
return { apiKey: newCredentials.access, newCredentials };
})()
: String(cred.provider) === "qwen-portal"
? await (async () => {
const newCredentials = await refreshQwenPortalCredentials(cred);
return { apiKey: newCredentials.access, newCredentials };
})()
: await (async () => {
const oauthProvider = resolveOAuthProvider(cred.provider);
if (!oauthProvider) {
return null;
}
return await getOAuthApiKey(oauthProvider, oauthCreds);
})();
if (!result) {
return null;
}
store.profiles[params.profileId] = {
...cred,
...result.newCredentials,
type: "oauth",
};
saveAuthProfileStore(store, params.agentDir);
return result;
});
}
async function tryResolveOAuthProfile(
params: ResolveApiKeyForProfileParams,
): Promise<{ apiKey: string; provider: string; email?: string } | null> {
const { cfg, store, profileId } = params;
const cred = store.profiles[profileId];
if (!cred || cred.type !== "oauth") {
return null;
}
if (
!isProfileConfigCompatible({
cfg,
profileId,
provider: cred.provider,
mode: cred.type,
})
) {
return null;
}
if (Date.now() < cred.expires) {
return buildOAuthProfileResult({
provider: cred.provider,
credentials: cred,
email: cred.email,
});
}
const refreshed = await refreshOAuthTokenWithLock({
profileId,
agentDir: params.agentDir,
});
if (!refreshed) {
return null;
}
return buildApiKeyProfileResult({
apiKey: refreshed.apiKey,
provider: cred.provider,
email: cred.email,
});
}
export async function resolveApiKeyForProfile(
params: ResolveApiKeyForProfileParams,
): Promise<{ apiKey: string; provider: string; email?: string } | null> {
const { cfg, store, profileId } = params;
const cred = store.profiles[profileId];
if (!cred) {
return null;
}
if (
!isProfileConfigCompatible({
cfg,
profileId,
provider: cred.provider,
mode: cred.type,
// Compatibility: treat "oauth" config as compatible with stored token profiles.
allowOAuthTokenCompatibility: true,
})
) {
return null;
}
if (cred.type === "api_key") {
const key = cred.key?.trim();
if (!key) {
return null;
}
return buildApiKeyProfileResult({ apiKey: key, provider: cred.provider, email: cred.email });
}
if (cred.type === "token") {
const token = cred.token?.trim();
if (!token) {
return null;
}
if (isExpiredCredential(cred.expires)) {
return null;
}
return buildApiKeyProfileResult({ apiKey: token, provider: cred.provider, email: cred.email });
}
const oauthCred =
adoptNewerMainOAuthCredential({
store,
profileId,
agentDir: params.agentDir,
cred,
}) ?? cred;
if (Date.now() < oauthCred.expires) {
return buildOAuthProfileResult({
provider: oauthCred.provider,
credentials: oauthCred,
email: oauthCred.email,
});
}
try {
const result = await refreshOAuthTokenWithLock({
profileId,
agentDir: params.agentDir,
});
if (!result) {
return null;
}
return buildApiKeyProfileResult({
apiKey: result.apiKey,
provider: cred.provider,
email: cred.email,
});
} catch (error) {
const refreshedStore = ensureAuthProfileStore(params.agentDir);
const refreshed = refreshedStore.profiles[profileId];
if (refreshed?.type === "oauth" && Date.now() < refreshed.expires) {
return buildOAuthProfileResult({
provider: refreshed.provider,
credentials: refreshed,
email: refreshed.email ?? cred.email,
});
}
const fallbackProfileId = suggestOAuthProfileIdForLegacyDefault({
cfg,
store: refreshedStore,
provider: cred.provider,
legacyProfileId: profileId,
});
if (fallbackProfileId && fallbackProfileId !== profileId) {
try {
const fallbackResolved = await tryResolveOAuthProfile({
cfg,
store: refreshedStore,
profileId: fallbackProfileId,
agentDir: params.agentDir,
});
if (fallbackResolved) {
return fallbackResolved;
}
} catch {
// keep original error
}
}
// Fallback: if this is a secondary agent, try using the main agent's credentials
if (params.agentDir) {
try {
const mainStore = ensureAuthProfileStore(undefined); // main agent (no agentDir)
const mainCred = mainStore.profiles[profileId];
if (mainCred?.type === "oauth" && Date.now() < mainCred.expires) {
// Main agent has fresh credentials - copy them to this agent and use them
refreshedStore.profiles[profileId] = { ...mainCred };
saveAuthProfileStore(refreshedStore, params.agentDir);
log.info("inherited fresh OAuth credentials from main agent", {
profileId,
agentDir: params.agentDir,
expires: new Date(mainCred.expires).toISOString(),
});
return buildOAuthProfileResult({
provider: mainCred.provider,
credentials: mainCred,
email: mainCred.email,
});
}
} catch {
// keep original error if main agent fallback also fails
}
}
const message = error instanceof Error ? error.message : String(error);
const hint = formatAuthDoctorHint({
cfg,
store: refreshedStore,
provider: cred.provider,
profileId,
});
throw new Error(
`OAuth token refresh failed for ${cred.provider}: ${message}. ` +
"Please try again or re-authenticate." +
(hint ? `\n\n${hint}` : ""),
{ cause: error },
);
}
}
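The failure path above cascades through several credential sources (re-read store, suggested fallback profile, the main agent's copy) before surfacing the refresh error. A condensed standalone sketch of the "first fresh credential wins" rule it applies — names here are hypothetical, not this repository's API:

```typescript
// Hypothetical model of the fallback chain: the first candidate whose
// expiry is still in the future is used; otherwise the caller gives up
// and surfaces the original refresh error.
type OAuthLike = { expires: number };

function firstFresh(
  candidates: Array<OAuthLike | undefined>,
  now: number = Date.now(),
): OAuthLike | null {
  for (const cred of candidates) {
    if (cred && now < cred.expires) {
      return cred; // fresh: not yet expired at `now`
    }
  }
  return null; // nothing usable
}
```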

@@ -1,189 +0,0 @@
import type { OpenClawConfig } from "../../config/config.js";
import { findNormalizedProviderValue, normalizeProviderId } from "../model-selection.js";
import { dedupeProfileIds, listProfilesForProvider } from "./profiles.js";
import type { AuthProfileStore } from "./types.js";
import {
clearExpiredCooldowns,
isProfileInCooldown,
resolveProfileUnusableUntil,
} from "./usage.js";
export function resolveAuthProfileOrder(params: {
cfg?: OpenClawConfig;
store: AuthProfileStore;
provider: string;
preferredProfile?: string;
}): string[] {
const { cfg, store, provider, preferredProfile } = params;
const providerKey = normalizeProviderId(provider);
const now = Date.now();
// Clear any cooldowns that have expired since the last check so profiles
// get a fresh error count and are not immediately re-penalized on the
// next transient failure. See #3604.
clearExpiredCooldowns(store, now);
const storedOrder = findNormalizedProviderValue(store.order, providerKey);
const configuredOrder = findNormalizedProviderValue(cfg?.auth?.order, providerKey);
const explicitOrder = storedOrder ?? configuredOrder;
const explicitProfiles = cfg?.auth?.profiles
? Object.entries(cfg.auth.profiles)
.filter(([, profile]) => normalizeProviderId(profile.provider) === providerKey)
.map(([profileId]) => profileId)
: [];
const baseOrder =
explicitOrder ??
(explicitProfiles.length > 0 ? explicitProfiles : listProfilesForProvider(store, providerKey));
if (baseOrder.length === 0) {
return [];
}
const isValidProfile = (profileId: string): boolean => {
const cred = store.profiles[profileId];
if (!cred) {
return false;
}
if (normalizeProviderId(cred.provider) !== providerKey) {
return false;
}
const profileConfig = cfg?.auth?.profiles?.[profileId];
if (profileConfig) {
if (normalizeProviderId(profileConfig.provider) !== providerKey) {
return false;
}
if (profileConfig.mode !== cred.type) {
const oauthCompatible = profileConfig.mode === "oauth" && cred.type === "token";
if (!oauthCompatible) {
return false;
}
}
}
if (cred.type === "api_key") {
return Boolean(cred.key?.trim());
}
if (cred.type === "token") {
if (!cred.token?.trim()) {
return false;
}
if (
typeof cred.expires === "number" &&
Number.isFinite(cred.expires) &&
cred.expires > 0 &&
now >= cred.expires
) {
return false;
}
return true;
}
if (cred.type === "oauth") {
return Boolean(cred.access?.trim() || cred.refresh?.trim());
}
return false;
};
let filtered = baseOrder.filter(isValidProfile);
// Repair config/store profile-id drift from older onboarding flows:
// if configured profile ids no longer exist in auth-profiles.json, scan the
// provider's stored credentials and use any valid entries.
const allBaseProfilesMissing = baseOrder.every((profileId) => !store.profiles[profileId]);
if (filtered.length === 0 && explicitProfiles.length > 0 && allBaseProfilesMissing) {
const storeProfiles = listProfilesForProvider(store, providerKey);
filtered = storeProfiles.filter(isValidProfile);
}
const deduped = dedupeProfileIds(filtered);
// If user specified explicit order (store override or config), respect it
// exactly, but still apply cooldown sorting to avoid repeatedly selecting
// known-bad/rate-limited keys as the first candidate.
  if (explicitOrder && explicitOrder.length > 0) {
const available: string[] = [];
const inCooldown: Array<{ profileId: string; cooldownUntil: number }> = [];
for (const profileId of deduped) {
const cooldownUntil = resolveProfileUnusableUntil(store.usageStats?.[profileId] ?? {}) ?? 0;
if (
typeof cooldownUntil === "number" &&
Number.isFinite(cooldownUntil) &&
cooldownUntil > 0 &&
now < cooldownUntil
) {
inCooldown.push({ profileId, cooldownUntil });
} else {
available.push(profileId);
}
}
const cooldownSorted = inCooldown
.toSorted((a, b) => a.cooldownUntil - b.cooldownUntil)
.map((entry) => entry.profileId);
const ordered = [...available, ...cooldownSorted];
// Still put preferredProfile first if specified
if (preferredProfile && ordered.includes(preferredProfile)) {
return [preferredProfile, ...ordered.filter((e) => e !== preferredProfile)];
}
return ordered;
}
// Otherwise, use round-robin: sort by lastUsed (oldest first)
// preferredProfile goes first if specified (for explicit user choice)
// lastGood is NOT prioritized - that would defeat round-robin
const sorted = orderProfilesByMode(deduped, store);
if (preferredProfile && sorted.includes(preferredProfile)) {
return [preferredProfile, ...sorted.filter((e) => e !== preferredProfile)];
}
return sorted;
}
function orderProfilesByMode(order: string[], store: AuthProfileStore): string[] {
const now = Date.now();
// Partition into available and in-cooldown
const available: string[] = [];
const inCooldown: string[] = [];
for (const profileId of order) {
if (isProfileInCooldown(store, profileId)) {
inCooldown.push(profileId);
} else {
available.push(profileId);
}
}
// Sort available profiles by type preference, then by lastUsed (oldest first = round-robin within type)
const scored = available.map((profileId) => {
const type = store.profiles[profileId]?.type;
const typeScore = type === "oauth" ? 0 : type === "token" ? 1 : type === "api_key" ? 2 : 3;
const lastUsed = store.usageStats?.[profileId]?.lastUsed ?? 0;
return { profileId, typeScore, lastUsed };
});
// Primary sort: type preference (oauth > token > api_key).
// Secondary sort: lastUsed (oldest first for round-robin within type).
const sorted = scored
.toSorted((a, b) => {
// First by type (oauth > token > api_key)
if (a.typeScore !== b.typeScore) {
return a.typeScore - b.typeScore;
}
// Then by lastUsed (oldest first)
return a.lastUsed - b.lastUsed;
})
.map((entry) => entry.profileId);
// Append cooldown profiles at the end (sorted by cooldown expiry, soonest first)
const cooldownSorted = inCooldown
.map((profileId) => ({
profileId,
cooldownUntil: resolveProfileUnusableUntil(store.usageStats?.[profileId] ?? {}) ?? now,
}))
.toSorted((a, b) => a.cooldownUntil - b.cooldownUntil)
.map((entry) => entry.profileId);
return [...sorted, ...cooldownSorted];
}
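The two-level comparator in `orderProfilesByMode` can be exercised in isolation. A minimal standalone sketch (type and function names hypothetical) of the sort — credential type first, then oldest `lastUsed` for round-robin:

```typescript
type Scored = { profileId: string; typeScore: number; lastUsed: number };

// oauth (0) beats token (1) beats api_key (2); within a type, the least
// recently used profile comes first so usage rotates round-robin.
function orderByTypeThenLastUsed(entries: Scored[]): string[] {
  return [...entries]
    .sort((a, b) =>
      a.typeScore !== b.typeScore ? a.typeScore - b.typeScore : a.lastUsed - b.lastUsed,
    )
    .map((entry) => entry.profileId);
}
```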

@@ -1,33 +0,0 @@
import fs from "node:fs";
import path from "node:path";
import { saveJsonFile } from "../../infra/json-file.js";
import { resolveUserPath } from "../../utils.js";
import { resolveOpenClawAgentDir } from "../agent-paths.js";
import { AUTH_PROFILE_FILENAME, AUTH_STORE_VERSION, LEGACY_AUTH_FILENAME } from "./constants.js";
import type { AuthProfileStore } from "./types.js";
export function resolveAuthStorePath(agentDir?: string): string {
const resolved = resolveUserPath(agentDir ?? resolveOpenClawAgentDir());
return path.join(resolved, AUTH_PROFILE_FILENAME);
}
export function resolveLegacyAuthStorePath(agentDir?: string): string {
const resolved = resolveUserPath(agentDir ?? resolveOpenClawAgentDir());
return path.join(resolved, LEGACY_AUTH_FILENAME);
}
export function resolveAuthStorePathForDisplay(agentDir?: string): string {
const pathname = resolveAuthStorePath(agentDir);
return pathname.startsWith("~") ? pathname : resolveUserPath(pathname);
}
export function ensureAuthStoreFile(pathname: string) {
if (fs.existsSync(pathname)) {
return;
}
const payload: AuthProfileStore = {
version: AUTH_STORE_VERSION,
profiles: {},
};
saveJsonFile(pathname, payload);
}

@@ -1,116 +0,0 @@
import { normalizeSecretInput } from "../../utils/normalize-secret-input.js";
import { normalizeProviderId } from "../model-selection.js";
import {
ensureAuthProfileStore,
saveAuthProfileStore,
updateAuthProfileStoreWithLock,
} from "./store.js";
import type { AuthProfileCredential, AuthProfileStore } from "./types.js";
export function dedupeProfileIds(profileIds: string[]): string[] {
return [...new Set(profileIds)];
}
export async function setAuthProfileOrder(params: {
agentDir?: string;
provider: string;
order?: string[] | null;
}): Promise<AuthProfileStore | null> {
const providerKey = normalizeProviderId(params.provider);
const sanitized =
params.order && Array.isArray(params.order)
? params.order.map((entry) => String(entry).trim()).filter(Boolean)
: [];
const deduped = dedupeProfileIds(sanitized);
return await updateAuthProfileStoreWithLock({
agentDir: params.agentDir,
updater: (store) => {
store.order = store.order ?? {};
if (deduped.length === 0) {
if (!store.order[providerKey]) {
return false;
}
delete store.order[providerKey];
if (Object.keys(store.order).length === 0) {
store.order = undefined;
}
return true;
}
store.order[providerKey] = deduped;
return true;
},
});
}
export function upsertAuthProfile(params: {
profileId: string;
credential: AuthProfileCredential;
agentDir?: string;
}): void {
const credential =
params.credential.type === "api_key"
? {
...params.credential,
...(typeof params.credential.key === "string"
? { key: normalizeSecretInput(params.credential.key) }
: {}),
}
: params.credential.type === "token"
? { ...params.credential, token: normalizeSecretInput(params.credential.token) }
: params.credential;
const store = ensureAuthProfileStore(params.agentDir);
store.profiles[params.profileId] = credential;
saveAuthProfileStore(store, params.agentDir);
}
export async function upsertAuthProfileWithLock(params: {
profileId: string;
credential: AuthProfileCredential;
agentDir?: string;
}): Promise<AuthProfileStore | null> {
return await updateAuthProfileStoreWithLock({
agentDir: params.agentDir,
updater: (store) => {
store.profiles[params.profileId] = params.credential;
return true;
},
});
}
export function listProfilesForProvider(store: AuthProfileStore, provider: string): string[] {
const providerKey = normalizeProviderId(provider);
return Object.entries(store.profiles)
.filter(([, cred]) => normalizeProviderId(cred.provider) === providerKey)
.map(([id]) => id);
}
export async function markAuthProfileGood(params: {
store: AuthProfileStore;
provider: string;
profileId: string;
agentDir?: string;
}): Promise<void> {
const { store, provider, profileId, agentDir } = params;
const updated = await updateAuthProfileStoreWithLock({
agentDir,
updater: (freshStore) => {
const profile = freshStore.profiles[profileId];
if (!profile || profile.provider !== provider) {
return false;
}
freshStore.lastGood = { ...freshStore.lastGood, [provider]: profileId };
return true;
},
});
if (updated) {
store.lastGood = updated.lastGood;
return;
}
const profile = store.profiles[profileId];
if (!profile || profile.provider !== provider) {
return;
}
store.lastGood = { ...store.lastGood, [provider]: profileId };
saveAuthProfileStore(store, agentDir);
}
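The update rule inside `setAuthProfileOrder` — an empty sanitized list clears the provider's override, and the whole `order` map is dropped once it has no keys — can be sketched as a pure function (helper name hypothetical):

```typescript
// Hypothetical standalone model of the order-update rule above.
function applyOrderUpdate(
  order: Record<string, string[]> | undefined,
  provider: string,
  next: string[],
): Record<string, string[]> | undefined {
  const copy = { ...(order ?? {}) };
  if (next.length === 0) {
    delete copy[provider]; // empty list clears this provider's override
    return Object.keys(copy).length === 0 ? undefined : copy;
  }
  copy[provider] = [...new Set(next)]; // dedupe, keeping first occurrence
  return copy;
}
```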

@@ -1,164 +0,0 @@
import type { OpenClawConfig } from "../../config/config.js";
import type { AuthProfileConfig } from "../../config/types.js";
import { findNormalizedProviderKey, normalizeProviderId } from "../model-selection.js";
import { dedupeProfileIds, listProfilesForProvider } from "./profiles.js";
import type { AuthProfileIdRepairResult, AuthProfileStore } from "./types.js";
function getProfileSuffix(profileId: string): string {
const idx = profileId.indexOf(":");
if (idx < 0) {
return "";
}
return profileId.slice(idx + 1);
}
function isEmailLike(value: string): boolean {
const trimmed = value.trim();
if (!trimmed) {
return false;
}
return trimmed.includes("@") && trimmed.includes(".");
}
export function suggestOAuthProfileIdForLegacyDefault(params: {
cfg?: OpenClawConfig;
store: AuthProfileStore;
provider: string;
legacyProfileId: string;
}): string | null {
const providerKey = normalizeProviderId(params.provider);
const legacySuffix = getProfileSuffix(params.legacyProfileId);
if (legacySuffix !== "default") {
return null;
}
const legacyCfg = params.cfg?.auth?.profiles?.[params.legacyProfileId];
if (
legacyCfg &&
normalizeProviderId(legacyCfg.provider) === providerKey &&
legacyCfg.mode !== "oauth"
) {
return null;
}
const oauthProfiles = listProfilesForProvider(params.store, providerKey).filter(
(id) => params.store.profiles[id]?.type === "oauth",
);
if (oauthProfiles.length === 0) {
return null;
}
const configuredEmail = legacyCfg?.email?.trim();
if (configuredEmail) {
const byEmail = oauthProfiles.find((id) => {
const cred = params.store.profiles[id];
if (!cred || cred.type !== "oauth") {
return false;
}
const email = cred.email?.trim();
return email === configuredEmail || id === `${providerKey}:${configuredEmail}`;
});
if (byEmail) {
return byEmail;
}
}
const lastGood = params.store.lastGood?.[providerKey] ?? params.store.lastGood?.[params.provider];
if (lastGood && oauthProfiles.includes(lastGood)) {
return lastGood;
}
const nonLegacy = oauthProfiles.filter((id) => id !== params.legacyProfileId);
if (nonLegacy.length === 1) {
return nonLegacy[0] ?? null;
}
const emailLike = nonLegacy.filter((id) => isEmailLike(getProfileSuffix(id)));
if (emailLike.length === 1) {
return emailLike[0] ?? null;
}
return null;
}
export function repairOAuthProfileIdMismatch(params: {
cfg: OpenClawConfig;
store: AuthProfileStore;
provider: string;
legacyProfileId?: string;
}): AuthProfileIdRepairResult {
const legacyProfileId =
params.legacyProfileId ?? `${normalizeProviderId(params.provider)}:default`;
const legacyCfg = params.cfg.auth?.profiles?.[legacyProfileId];
if (!legacyCfg) {
return { config: params.cfg, changes: [], migrated: false };
}
if (legacyCfg.mode !== "oauth") {
return { config: params.cfg, changes: [], migrated: false };
}
if (normalizeProviderId(legacyCfg.provider) !== normalizeProviderId(params.provider)) {
return { config: params.cfg, changes: [], migrated: false };
}
const toProfileId = suggestOAuthProfileIdForLegacyDefault({
cfg: params.cfg,
store: params.store,
provider: params.provider,
legacyProfileId,
});
if (!toProfileId || toProfileId === legacyProfileId) {
return { config: params.cfg, changes: [], migrated: false };
}
const toCred = params.store.profiles[toProfileId];
const toEmail = toCred?.type === "oauth" ? toCred.email?.trim() : undefined;
const nextProfiles = {
...params.cfg.auth?.profiles,
} as Record<string, AuthProfileConfig>;
delete nextProfiles[legacyProfileId];
nextProfiles[toProfileId] = {
...legacyCfg,
...(toEmail ? { email: toEmail } : {}),
};
const providerKey = normalizeProviderId(params.provider);
const nextOrder = (() => {
const order = params.cfg.auth?.order;
if (!order) {
return undefined;
}
const resolvedKey = findNormalizedProviderKey(order, providerKey);
if (!resolvedKey) {
return order;
}
const existing = order[resolvedKey];
if (!Array.isArray(existing)) {
return order;
}
const replaced = existing
.map((id) => (id === legacyProfileId ? toProfileId : id))
.filter((id): id is string => typeof id === "string" && id.trim().length > 0);
const deduped = dedupeProfileIds(replaced);
return { ...order, [resolvedKey]: deduped };
})();
const nextCfg: OpenClawConfig = {
...params.cfg,
auth: {
...params.cfg.auth,
profiles: nextProfiles,
...(nextOrder ? { order: nextOrder } : {}),
},
};
  const changes = [`Auth: migrate ${legacyProfileId} -> ${toProfileId} (OAuth profile id)`];
return {
config: nextCfg,
changes,
migrated: true,
fromProfileId: legacyProfileId,
toProfileId,
};
}
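The legacy-id repair hinges on two small heuristics: a profile id is `<provider>:<suffix>`, and an email-like suffix marks a user-scoped OAuth profile that can replace a stale `:default` id. A self-contained restatement of those checks:

```typescript
// Standalone versions of the suffix heuristics used above.
function profileSuffix(profileId: string): string {
  const idx = profileId.indexOf(":");
  return idx < 0 ? "" : profileId.slice(idx + 1);
}

// Deliberately loose: presence of "@" and "." is enough here, since the
// suffix only needs to look like an account email, not validate as one.
function isEmailLike(value: string): boolean {
  const trimmed = value.trim();
  return trimmed.length > 0 && trimmed.includes("@") && trimmed.includes(".");
}
```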

@@ -1,53 +0,0 @@
import fs from "node:fs/promises";
import path from "node:path";
import { describe, expect, it } from "vitest";
import type { OpenClawConfig } from "../../config/config.js";
import type { SessionEntry } from "../../config/sessions.js";
import { withStateDirEnv } from "../../test-helpers/state-dir-env.js";
import { resolveSessionAuthProfileOverride } from "./session-override.js";
async function writeAuthStore(agentDir: string) {
const authPath = path.join(agentDir, "auth-profiles.json");
const payload = {
version: 1,
profiles: {
"zai:work": { type: "api_key", provider: "zai", key: "sk-test" },
},
order: {
zai: ["zai:work"],
},
};
await fs.writeFile(authPath, JSON.stringify(payload), "utf-8");
}
describe("resolveSessionAuthProfileOverride", () => {
it("keeps user override when provider alias differs", async () => {
await withStateDirEnv("openclaw-auth-", async ({ stateDir }) => {
const agentDir = path.join(stateDir, "agent");
await fs.mkdir(agentDir, { recursive: true });
await writeAuthStore(agentDir);
const sessionEntry: SessionEntry = {
sessionId: "s1",
updatedAt: Date.now(),
authProfileOverride: "zai:work",
authProfileOverrideSource: "user",
};
const sessionStore = { "agent:main:main": sessionEntry };
const resolved = await resolveSessionAuthProfileOverride({
cfg: {} as OpenClawConfig,
provider: "z.ai",
agentDir,
sessionEntry,
sessionStore,
sessionKey: "agent:main:main",
storePath: undefined,
isNewSession: false,
});
expect(resolved).toBe("zai:work");
expect(sessionEntry.authProfileOverride).toBe("zai:work");
});
});
});

@@ -1,151 +0,0 @@
import type { OpenClawConfig } from "../../config/config.js";
import { updateSessionStore, type SessionEntry } from "../../config/sessions.js";
import {
ensureAuthProfileStore,
isProfileInCooldown,
resolveAuthProfileOrder,
} from "../auth-profiles.js";
import { normalizeProviderId } from "../model-selection.js";
function isProfileForProvider(params: {
provider: string;
profileId: string;
store: ReturnType<typeof ensureAuthProfileStore>;
}): boolean {
const entry = params.store.profiles[params.profileId];
if (!entry?.provider) {
return false;
}
return normalizeProviderId(entry.provider) === normalizeProviderId(params.provider);
}
export async function clearSessionAuthProfileOverride(params: {
sessionEntry: SessionEntry;
sessionStore: Record<string, SessionEntry>;
sessionKey: string;
storePath?: string;
}) {
const { sessionEntry, sessionStore, sessionKey, storePath } = params;
delete sessionEntry.authProfileOverride;
delete sessionEntry.authProfileOverrideSource;
delete sessionEntry.authProfileOverrideCompactionCount;
sessionEntry.updatedAt = Date.now();
sessionStore[sessionKey] = sessionEntry;
if (storePath) {
await updateSessionStore(storePath, (store) => {
store[sessionKey] = sessionEntry;
});
}
}
export async function resolveSessionAuthProfileOverride(params: {
cfg: OpenClawConfig;
provider: string;
agentDir: string;
sessionEntry?: SessionEntry;
sessionStore?: Record<string, SessionEntry>;
sessionKey?: string;
storePath?: string;
isNewSession: boolean;
}): Promise<string | undefined> {
const {
cfg,
provider,
agentDir,
sessionEntry,
sessionStore,
sessionKey,
storePath,
isNewSession,
} = params;
if (!sessionEntry || !sessionStore || !sessionKey) {
return sessionEntry?.authProfileOverride;
}
const store = ensureAuthProfileStore(agentDir, { allowKeychainPrompt: false });
const order = resolveAuthProfileOrder({ cfg, store, provider });
let current = sessionEntry.authProfileOverride?.trim();
if (current && !store.profiles[current]) {
await clearSessionAuthProfileOverride({ sessionEntry, sessionStore, sessionKey, storePath });
current = undefined;
}
if (current && !isProfileForProvider({ provider, profileId: current, store })) {
await clearSessionAuthProfileOverride({ sessionEntry, sessionStore, sessionKey, storePath });
current = undefined;
}
if (current && order.length > 0 && !order.includes(current)) {
await clearSessionAuthProfileOverride({ sessionEntry, sessionStore, sessionKey, storePath });
current = undefined;
}
if (order.length === 0) {
return undefined;
}
const pickFirstAvailable = () =>
order.find((profileId) => !isProfileInCooldown(store, profileId)) ?? order[0];
const pickNextAvailable = (active: string) => {
const startIndex = order.indexOf(active);
if (startIndex < 0) {
return pickFirstAvailable();
}
for (let offset = 1; offset <= order.length; offset += 1) {
const candidate = order[(startIndex + offset) % order.length];
if (!isProfileInCooldown(store, candidate)) {
return candidate;
}
}
return order[startIndex] ?? order[0];
};
const compactionCount = sessionEntry.compactionCount ?? 0;
const storedCompaction =
typeof sessionEntry.authProfileOverrideCompactionCount === "number"
? sessionEntry.authProfileOverrideCompactionCount
: compactionCount;
const source =
sessionEntry.authProfileOverrideSource ??
(typeof sessionEntry.authProfileOverrideCompactionCount === "number"
? "auto"
: current
? "user"
: undefined);
if (source === "user" && current && !isNewSession) {
return current;
}
let next = current;
if (isNewSession) {
next = current ? pickNextAvailable(current) : pickFirstAvailable();
} else if (current && compactionCount > storedCompaction) {
next = pickNextAvailable(current);
} else if (!current || isProfileInCooldown(store, current)) {
next = pickFirstAvailable();
}
if (!next) {
return current;
}
const shouldPersist =
next !== sessionEntry.authProfileOverride ||
sessionEntry.authProfileOverrideSource !== "auto" ||
sessionEntry.authProfileOverrideCompactionCount !== compactionCount;
if (shouldPersist) {
sessionEntry.authProfileOverride = next;
sessionEntry.authProfileOverrideSource = "auto";
sessionEntry.authProfileOverrideCompactionCount = compactionCount;
sessionEntry.updatedAt = Date.now();
sessionStore[sessionKey] = sessionEntry;
if (storePath) {
await updateSessionStore(storePath, (store) => {
store[sessionKey] = sessionEntry;
});
}
}
return next;
}
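The wrap-around rotation used by `pickNextAvailable` can be modeled without the store. A hypothetical standalone sketch: advance past the active profile, skip anything cooling down, and fall back to the active profile when every candidate is in cooldown:

```typescript
function pickNext(
  order: string[],
  active: string,
  cooling: Set<string>,
): string | undefined {
  const start = order.indexOf(active);
  if (start < 0) {
    // unknown active profile: first non-cooling entry, else the head
    return order.find((id) => !cooling.has(id)) ?? order[0];
  }
  for (let offset = 1; offset <= order.length; offset += 1) {
    const candidate = order[(start + offset) % order.length];
    if (candidate !== undefined && !cooling.has(candidate)) {
      return candidate;
    }
  }
  return order[start]; // every candidate cooling: keep the active one
}
```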

@@ -1,346 +0,0 @@
import fs from "node:fs";
import type { OAuthCredentials } from "@mariozechner/pi-ai";
import { resolveOAuthPath } from "../../config/paths.js";
import { withFileLock } from "../../infra/file-lock.js";
import { loadJsonFile, saveJsonFile } from "../../infra/json-file.js";
import { AUTH_STORE_LOCK_OPTIONS, AUTH_STORE_VERSION, log } from "./constants.js";
import { syncExternalCliCredentials } from "./external-cli-sync.js";
import { ensureAuthStoreFile, resolveAuthStorePath, resolveLegacyAuthStorePath } from "./paths.js";
import type { AuthProfileCredential, AuthProfileStore, ProfileUsageStats } from "./types.js";
type LegacyAuthStore = Record<string, AuthProfileCredential>;
function _syncAuthProfileStore(target: AuthProfileStore, source: AuthProfileStore): void {
target.version = source.version;
target.profiles = source.profiles;
target.order = source.order;
target.lastGood = source.lastGood;
target.usageStats = source.usageStats;
}
export async function updateAuthProfileStoreWithLock(params: {
agentDir?: string;
updater: (store: AuthProfileStore) => boolean;
}): Promise<AuthProfileStore | null> {
const authPath = resolveAuthStorePath(params.agentDir);
ensureAuthStoreFile(authPath);
try {
return await withFileLock(authPath, AUTH_STORE_LOCK_OPTIONS, async () => {
const store = ensureAuthProfileStore(params.agentDir);
const shouldSave = params.updater(store);
if (shouldSave) {
saveAuthProfileStore(store, params.agentDir);
}
return store;
});
} catch {
return null;
}
}
function coerceLegacyStore(raw: unknown): LegacyAuthStore | null {
if (!raw || typeof raw !== "object") {
return null;
}
const record = raw as Record<string, unknown>;
if ("profiles" in record) {
return null;
}
const entries: LegacyAuthStore = {};
for (const [key, value] of Object.entries(record)) {
if (!value || typeof value !== "object") {
continue;
}
const typed = value as Partial<AuthProfileCredential>;
if (typed.type !== "api_key" && typed.type !== "oauth" && typed.type !== "token") {
continue;
}
entries[key] = {
...typed,
provider: String(typed.provider ?? key),
} as AuthProfileCredential;
}
return Object.keys(entries).length > 0 ? entries : null;
}
function coerceAuthStore(raw: unknown): AuthProfileStore | null {
if (!raw || typeof raw !== "object") {
return null;
}
const record = raw as Record<string, unknown>;
if (!record.profiles || typeof record.profiles !== "object") {
return null;
}
const profiles = record.profiles as Record<string, unknown>;
const normalized: Record<string, AuthProfileCredential> = {};
for (const [key, value] of Object.entries(profiles)) {
if (!value || typeof value !== "object") {
continue;
}
const typed = value as Partial<AuthProfileCredential>;
if (typed.type !== "api_key" && typed.type !== "oauth" && typed.type !== "token") {
continue;
}
if (!typed.provider) {
continue;
}
normalized[key] = typed as AuthProfileCredential;
}
const order =
record.order && typeof record.order === "object"
? Object.entries(record.order as Record<string, unknown>).reduce(
(acc, [provider, value]) => {
if (!Array.isArray(value)) {
return acc;
}
const list = value
.map((entry) => (typeof entry === "string" ? entry.trim() : ""))
.filter(Boolean);
if (list.length === 0) {
return acc;
}
acc[provider] = list;
return acc;
},
{} as Record<string, string[]>,
)
: undefined;
return {
version: Number(record.version ?? AUTH_STORE_VERSION),
profiles: normalized,
order,
lastGood:
record.lastGood && typeof record.lastGood === "object"
? (record.lastGood as Record<string, string>)
: undefined,
usageStats:
record.usageStats && typeof record.usageStats === "object"
? (record.usageStats as Record<string, ProfileUsageStats>)
: undefined,
};
}
function mergeRecord<T>(
base?: Record<string, T>,
override?: Record<string, T>,
): Record<string, T> | undefined {
if (!base && !override) {
return undefined;
}
if (!base) {
return { ...override };
}
if (!override) {
return { ...base };
}
return { ...base, ...override };
}
function mergeAuthProfileStores(
base: AuthProfileStore,
override: AuthProfileStore,
): AuthProfileStore {
if (
Object.keys(override.profiles).length === 0 &&
!override.order &&
!override.lastGood &&
!override.usageStats
) {
return base;
}
return {
version: Math.max(base.version, override.version ?? base.version),
profiles: { ...base.profiles, ...override.profiles },
order: mergeRecord(base.order, override.order),
lastGood: mergeRecord(base.lastGood, override.lastGood),
usageStats: mergeRecord(base.usageStats, override.usageStats),
};
}
function mergeOAuthFileIntoStore(store: AuthProfileStore): boolean {
const oauthPath = resolveOAuthPath();
const oauthRaw = loadJsonFile(oauthPath);
if (!oauthRaw || typeof oauthRaw !== "object") {
return false;
}
const oauthEntries = oauthRaw as Record<string, OAuthCredentials>;
let mutated = false;
for (const [provider, creds] of Object.entries(oauthEntries)) {
if (!creds || typeof creds !== "object") {
continue;
}
const profileId = `${provider}:default`;
if (store.profiles[profileId]) {
continue;
}
store.profiles[profileId] = {
type: "oauth",
provider,
...creds,
};
mutated = true;
}
return mutated;
}
function applyLegacyStore(store: AuthProfileStore, legacy: LegacyAuthStore): void {
for (const [provider, cred] of Object.entries(legacy)) {
const profileId = `${provider}:default`;
if (cred.type === "api_key") {
store.profiles[profileId] = {
type: "api_key",
provider: String(cred.provider ?? provider),
key: cred.key,
...(cred.email ? { email: cred.email } : {}),
};
continue;
}
if (cred.type === "token") {
store.profiles[profileId] = {
type: "token",
provider: String(cred.provider ?? provider),
token: cred.token,
...(typeof cred.expires === "number" ? { expires: cred.expires } : {}),
...(cred.email ? { email: cred.email } : {}),
};
continue;
}
store.profiles[profileId] = {
type: "oauth",
provider: String(cred.provider ?? provider),
access: cred.access,
refresh: cred.refresh,
expires: cred.expires,
...(cred.enterpriseUrl ? { enterpriseUrl: cred.enterpriseUrl } : {}),
...(cred.projectId ? { projectId: cred.projectId } : {}),
...(cred.accountId ? { accountId: cred.accountId } : {}),
...(cred.email ? { email: cred.email } : {}),
};
}
}
export function loadAuthProfileStore(): AuthProfileStore {
const authPath = resolveAuthStorePath();
const raw = loadJsonFile(authPath);
const asStore = coerceAuthStore(raw);
if (asStore) {
// Sync from external CLI tools on every load
const synced = syncExternalCliCredentials(asStore);
if (synced) {
saveJsonFile(authPath, asStore);
}
return asStore;
}
const legacyRaw = loadJsonFile(resolveLegacyAuthStorePath());
const legacy = coerceLegacyStore(legacyRaw);
if (legacy) {
const store: AuthProfileStore = {
version: AUTH_STORE_VERSION,
profiles: {},
};
applyLegacyStore(store, legacy);
syncExternalCliCredentials(store);
return store;
}
const store: AuthProfileStore = { version: AUTH_STORE_VERSION, profiles: {} };
syncExternalCliCredentials(store);
return store;
}
function loadAuthProfileStoreForAgent(
agentDir?: string,
_options?: { allowKeychainPrompt?: boolean },
): AuthProfileStore {
const authPath = resolveAuthStorePath(agentDir);
const raw = loadJsonFile(authPath);
const asStore = coerceAuthStore(raw);
if (asStore) {
// Sync from external CLI tools on every load
const synced = syncExternalCliCredentials(asStore);
if (synced) {
saveJsonFile(authPath, asStore);
}
return asStore;
}
// Fallback: inherit auth-profiles from main agent if subagent has none
if (agentDir) {
const mainAuthPath = resolveAuthStorePath(); // without agentDir = main
const mainRaw = loadJsonFile(mainAuthPath);
const mainStore = coerceAuthStore(mainRaw);
if (mainStore && Object.keys(mainStore.profiles).length > 0) {
// Clone main store to subagent directory for auth inheritance
saveJsonFile(authPath, mainStore);
log.info("inherited auth-profiles from main agent", { agentDir });
return mainStore;
}
}
const legacyRaw = loadJsonFile(resolveLegacyAuthStorePath(agentDir));
const legacy = coerceLegacyStore(legacyRaw);
const store: AuthProfileStore = {
version: AUTH_STORE_VERSION,
profiles: {},
};
if (legacy) {
applyLegacyStore(store, legacy);
}
const mergedOAuth = mergeOAuthFileIntoStore(store);
const syncedCli = syncExternalCliCredentials(store);
const shouldWrite = legacy !== null || mergedOAuth || syncedCli;
if (shouldWrite) {
saveJsonFile(authPath, store);
}
// PR #368: legacy auth.json could get re-migrated from other agent dirs,
// overwriting fresh OAuth creds with stale tokens (fixes #363). Delete only
// after we've successfully written auth-profiles.json.
if (shouldWrite && legacy !== null) {
const legacyPath = resolveLegacyAuthStorePath(agentDir);
try {
fs.unlinkSync(legacyPath);
} catch (err) {
if ((err as NodeJS.ErrnoException)?.code !== "ENOENT") {
log.warn("failed to delete legacy auth.json after migration", {
err,
legacyPath,
});
}
}
}
return store;
}
export function ensureAuthProfileStore(
agentDir?: string,
options?: { allowKeychainPrompt?: boolean },
): AuthProfileStore {
const store = loadAuthProfileStoreForAgent(agentDir, options);
const authPath = resolveAuthStorePath(agentDir);
const mainAuthPath = resolveAuthStorePath();
if (!agentDir || authPath === mainAuthPath) {
return store;
}
const mainStore = loadAuthProfileStoreForAgent(undefined, options);
const merged = mergeAuthProfileStores(mainStore, store);
return merged;
}
export function saveAuthProfileStore(store: AuthProfileStore, agentDir?: string): void {
const authPath = resolveAuthStorePath(agentDir);
const payload = {
version: AUTH_STORE_VERSION,
profiles: store.profiles,
order: store.order ?? undefined,
lastGood: store.lastGood ?? undefined,
usageStats: store.usageStats ?? undefined,
} satisfies AuthProfileStore;
saveJsonFile(authPath, payload);
}
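`ensureAuthProfileStore` layers a secondary agent's store over the main agent's via `mergeAuthProfileStores`. The per-key precedence — agent-local values shadow the main agent's on conflict — reduces to spread order, sketched here with a hypothetical helper name:

```typescript
function mergeRecords<T>(
  base?: Record<string, T>,
  override?: Record<string, T>,
): Record<string, T> | undefined {
  if (!base && !override) return undefined;
  // spread order matters: override's keys win over base's on conflict
  return { ...(base ?? {}), ...(override ?? {}) };
}
```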

@@ -1,75 +0,0 @@
import type { OAuthCredentials } from "@mariozechner/pi-ai";
import type { OpenClawConfig } from "../../config/config.js";
export type ApiKeyCredential = {
type: "api_key";
provider: string;
key?: string;
email?: string;
/** Optional provider-specific metadata (e.g., account IDs, gateway IDs). */
metadata?: Record<string, string>;
};
export type TokenCredential = {
/**
* Static bearer-style token (often OAuth access token / PAT).
* Not refreshable by OpenClaw (unlike `type: "oauth"`).
*/
type: "token";
provider: string;
token: string;
/** Optional expiry timestamp (ms since epoch). */
expires?: number;
email?: string;
};
export type OAuthCredential = OAuthCredentials & {
type: "oauth";
provider: string;
clientId?: string;
email?: string;
};
export type AuthProfileCredential = ApiKeyCredential | TokenCredential | OAuthCredential;
export type AuthProfileFailureReason =
| "auth"
| "format"
| "rate_limit"
| "billing"
| "timeout"
| "model_not_found"
| "unknown";
/** Per-profile usage statistics for round-robin and cooldown tracking */
export type ProfileUsageStats = {
lastUsed?: number;
cooldownUntil?: number;
disabledUntil?: number;
disabledReason?: AuthProfileFailureReason;
errorCount?: number;
failureCounts?: Partial<Record<AuthProfileFailureReason, number>>;
lastFailureAt?: number;
};
export type AuthProfileStore = {
version: number;
profiles: Record<string, AuthProfileCredential>;
/**
* Optional per-agent preferred profile order overrides.
* This lets you lock/override auth rotation for a specific agent without
* changing the global config.
*/
order?: Record<string, string[]>;
lastGood?: Record<string, string>;
/** Usage statistics per profile for round-robin rotation */
usageStats?: Record<string, ProfileUsageStats>;
};
export type AuthProfileIdRepairResult = {
config: OpenClawConfig;
changes: string[];
migrated: boolean;
fromProfileId?: string;
toProfileId?: string;
};


@@ -1,347 +0,0 @@
import { describe, expect, it, vi } from "vitest";
import type { AuthProfileStore } from "./types.js";
import {
clearAuthProfileCooldown,
clearExpiredCooldowns,
isProfileInCooldown,
resolveProfileUnusableUntil,
} from "./usage.js";
vi.mock("./store.js", async (importOriginal) => {
const original = await importOriginal<typeof import("./store.js")>();
return {
...original,
updateAuthProfileStoreWithLock: vi.fn().mockResolvedValue(null),
saveAuthProfileStore: vi.fn(),
};
});
function makeStore(usageStats: AuthProfileStore["usageStats"]): AuthProfileStore {
return {
version: 1,
profiles: {
"anthropic:default": { type: "api_key", provider: "anthropic", key: "sk-test" },
"openai:default": { type: "api_key", provider: "openai", key: "sk-test-2" },
},
usageStats,
};
}
describe("resolveProfileUnusableUntil", () => {
it("returns null when both values are missing or invalid", () => {
expect(resolveProfileUnusableUntil({})).toBeNull();
expect(resolveProfileUnusableUntil({ cooldownUntil: 0, disabledUntil: Number.NaN })).toBeNull();
});
it("returns the latest active timestamp", () => {
expect(resolveProfileUnusableUntil({ cooldownUntil: 100, disabledUntil: 200 })).toBe(200);
expect(resolveProfileUnusableUntil({ cooldownUntil: 300 })).toBe(300);
});
});
// ---------------------------------------------------------------------------
// isProfileInCooldown
// ---------------------------------------------------------------------------
describe("isProfileInCooldown", () => {
it("returns false when profile has no usage stats", () => {
const store = makeStore(undefined);
expect(isProfileInCooldown(store, "anthropic:default")).toBe(false);
});
it("returns true when cooldownUntil is in the future", () => {
const store = makeStore({
"anthropic:default": { cooldownUntil: Date.now() + 60_000 },
});
expect(isProfileInCooldown(store, "anthropic:default")).toBe(true);
});
it("returns false when cooldownUntil has passed", () => {
const store = makeStore({
"anthropic:default": { cooldownUntil: Date.now() - 1_000 },
});
expect(isProfileInCooldown(store, "anthropic:default")).toBe(false);
});
it("returns true when disabledUntil is in the future (even if cooldownUntil expired)", () => {
const store = makeStore({
"anthropic:default": {
cooldownUntil: Date.now() - 1_000,
disabledUntil: Date.now() + 60_000,
},
});
expect(isProfileInCooldown(store, "anthropic:default")).toBe(true);
});
});
// ---------------------------------------------------------------------------
// clearExpiredCooldowns
// ---------------------------------------------------------------------------
describe("clearExpiredCooldowns", () => {
it("returns false on empty usageStats", () => {
const store = makeStore(undefined);
expect(clearExpiredCooldowns(store)).toBe(false);
});
it("returns false when no profiles have cooldowns", () => {
const store = makeStore({
"anthropic:default": { lastUsed: Date.now() },
});
expect(clearExpiredCooldowns(store)).toBe(false);
});
it("returns false when cooldown is still active", () => {
const future = Date.now() + 300_000;
const store = makeStore({
"anthropic:default": { cooldownUntil: future, errorCount: 3 },
});
expect(clearExpiredCooldowns(store)).toBe(false);
expect(store.usageStats?.["anthropic:default"]?.cooldownUntil).toBe(future);
expect(store.usageStats?.["anthropic:default"]?.errorCount).toBe(3);
});
it("clears expired cooldownUntil and resets errorCount", () => {
const store = makeStore({
"anthropic:default": {
cooldownUntil: Date.now() - 1_000,
errorCount: 4,
failureCounts: { rate_limit: 3, timeout: 1 },
lastFailureAt: Date.now() - 120_000,
},
});
expect(clearExpiredCooldowns(store)).toBe(true);
const stats = store.usageStats?.["anthropic:default"];
expect(stats?.cooldownUntil).toBeUndefined();
expect(stats?.errorCount).toBe(0);
expect(stats?.failureCounts).toBeUndefined();
// lastFailureAt preserved for failureWindowMs decay
expect(stats?.lastFailureAt).toBeDefined();
});
it("clears expired disabledUntil and disabledReason", () => {
const store = makeStore({
"anthropic:default": {
disabledUntil: Date.now() - 1_000,
disabledReason: "billing",
errorCount: 2,
failureCounts: { billing: 2 },
},
});
expect(clearExpiredCooldowns(store)).toBe(true);
const stats = store.usageStats?.["anthropic:default"];
expect(stats?.disabledUntil).toBeUndefined();
expect(stats?.disabledReason).toBeUndefined();
expect(stats?.errorCount).toBe(0);
expect(stats?.failureCounts).toBeUndefined();
});
it("handles independent expiry: cooldown expired but disabled still active", () => {
const future = Date.now() + 3_600_000;
const store = makeStore({
"anthropic:default": {
cooldownUntil: Date.now() - 1_000,
disabledUntil: future,
disabledReason: "billing",
errorCount: 5,
failureCounts: { rate_limit: 3, billing: 2 },
},
});
expect(clearExpiredCooldowns(store)).toBe(true);
const stats = store.usageStats?.["anthropic:default"];
// cooldownUntil cleared
expect(stats?.cooldownUntil).toBeUndefined();
// disabledUntil still active — not touched
expect(stats?.disabledUntil).toBe(future);
expect(stats?.disabledReason).toBe("billing");
// errorCount NOT reset because profile still has an active unusable window
expect(stats?.errorCount).toBe(5);
expect(stats?.failureCounts).toEqual({ rate_limit: 3, billing: 2 });
});
it("handles independent expiry: disabled expired but cooldown still active", () => {
const future = Date.now() + 300_000;
const store = makeStore({
"anthropic:default": {
cooldownUntil: future,
disabledUntil: Date.now() - 1_000,
disabledReason: "billing",
errorCount: 3,
},
});
expect(clearExpiredCooldowns(store)).toBe(true);
const stats = store.usageStats?.["anthropic:default"];
expect(stats?.cooldownUntil).toBe(future);
expect(stats?.disabledUntil).toBeUndefined();
expect(stats?.disabledReason).toBeUndefined();
// errorCount NOT reset because cooldown is still active
expect(stats?.errorCount).toBe(3);
});
it("resets errorCount only when both cooldown and disabled have expired", () => {
const store = makeStore({
"anthropic:default": {
cooldownUntil: Date.now() - 2_000,
disabledUntil: Date.now() - 1_000,
disabledReason: "billing",
errorCount: 4,
failureCounts: { rate_limit: 2, billing: 2 },
},
});
expect(clearExpiredCooldowns(store)).toBe(true);
const stats = store.usageStats?.["anthropic:default"];
expect(stats?.cooldownUntil).toBeUndefined();
expect(stats?.disabledUntil).toBeUndefined();
expect(stats?.disabledReason).toBeUndefined();
expect(stats?.errorCount).toBe(0);
expect(stats?.failureCounts).toBeUndefined();
});
it("processes multiple profiles independently", () => {
const store = makeStore({
"anthropic:default": {
cooldownUntil: Date.now() - 1_000,
errorCount: 3,
},
"openai:default": {
cooldownUntil: Date.now() + 300_000,
errorCount: 2,
},
});
expect(clearExpiredCooldowns(store)).toBe(true);
// Anthropic: expired → cleared
expect(store.usageStats?.["anthropic:default"]?.cooldownUntil).toBeUndefined();
expect(store.usageStats?.["anthropic:default"]?.errorCount).toBe(0);
// OpenAI: still active → untouched
expect(store.usageStats?.["openai:default"]?.cooldownUntil).toBeGreaterThan(Date.now());
expect(store.usageStats?.["openai:default"]?.errorCount).toBe(2);
});
it("accepts an explicit `now` timestamp for deterministic testing", () => {
const fixedNow = 1_700_000_000_000;
const store = makeStore({
"anthropic:default": {
cooldownUntil: fixedNow - 1,
errorCount: 2,
},
});
expect(clearExpiredCooldowns(store, fixedNow)).toBe(true);
expect(store.usageStats?.["anthropic:default"]?.cooldownUntil).toBeUndefined();
expect(store.usageStats?.["anthropic:default"]?.errorCount).toBe(0);
});
it("clears cooldownUntil that equals exactly `now`", () => {
const fixedNow = 1_700_000_000_000;
const store = makeStore({
"anthropic:default": {
cooldownUntil: fixedNow,
errorCount: 2,
},
});
// ts >= cooldownUntil → should clear (cooldown "until" means the instant
// at cooldownUntil the profile becomes available again).
expect(clearExpiredCooldowns(store, fixedNow)).toBe(true);
expect(store.usageStats?.["anthropic:default"]?.cooldownUntil).toBeUndefined();
expect(store.usageStats?.["anthropic:default"]?.errorCount).toBe(0);
});
it("ignores NaN and Infinity cooldown values", () => {
const store = makeStore({
"anthropic:default": {
cooldownUntil: NaN,
errorCount: 2,
},
"openai:default": {
cooldownUntil: Infinity,
errorCount: 3,
},
});
expect(clearExpiredCooldowns(store)).toBe(false);
expect(store.usageStats?.["anthropic:default"]?.errorCount).toBe(2);
expect(store.usageStats?.["openai:default"]?.errorCount).toBe(3);
});
it("ignores zero and negative cooldown values", () => {
const store = makeStore({
"anthropic:default": {
cooldownUntil: 0,
errorCount: 1,
},
"openai:default": {
cooldownUntil: -1,
errorCount: 1,
},
});
expect(clearExpiredCooldowns(store)).toBe(false);
});
});
// ---------------------------------------------------------------------------
// clearAuthProfileCooldown
// ---------------------------------------------------------------------------
describe("clearAuthProfileCooldown", () => {
it("clears all error state fields including disabledUntil and failureCounts", async () => {
const store = makeStore({
"anthropic:default": {
cooldownUntil: Date.now() + 60_000,
disabledUntil: Date.now() + 3_600_000,
disabledReason: "billing",
errorCount: 5,
failureCounts: { billing: 3, rate_limit: 2 },
},
});
await clearAuthProfileCooldown({ store, profileId: "anthropic:default" });
const stats = store.usageStats?.["anthropic:default"];
expect(stats?.cooldownUntil).toBeUndefined();
expect(stats?.disabledUntil).toBeUndefined();
expect(stats?.disabledReason).toBeUndefined();
expect(stats?.errorCount).toBe(0);
expect(stats?.failureCounts).toBeUndefined();
});
it("preserves lastUsed and lastFailureAt timestamps", async () => {
const lastUsed = Date.now() - 10_000;
const lastFailureAt = Date.now() - 5_000;
const store = makeStore({
"anthropic:default": {
cooldownUntil: Date.now() + 60_000,
errorCount: 3,
lastUsed,
lastFailureAt,
},
});
await clearAuthProfileCooldown({ store, profileId: "anthropic:default" });
const stats = store.usageStats?.["anthropic:default"];
expect(stats?.lastUsed).toBe(lastUsed);
expect(stats?.lastFailureAt).toBe(lastFailureAt);
});
it("no-ops for unknown profile id", async () => {
const store = makeStore(undefined);
await clearAuthProfileCooldown({ store, profileId: "nonexistent" });
expect(store.usageStats).toBeUndefined();
});
});


@@ -1,427 +0,0 @@
import type { OpenClawConfig } from "../../config/config.js";
import { normalizeProviderId } from "../model-selection.js";
import { saveAuthProfileStore, updateAuthProfileStoreWithLock } from "./store.js";
import type { AuthProfileFailureReason, AuthProfileStore, ProfileUsageStats } from "./types.js";
export function resolveProfileUnusableUntil(
stats: Pick<ProfileUsageStats, "cooldownUntil" | "disabledUntil">,
): number | null {
const values = [stats.cooldownUntil, stats.disabledUntil]
.filter((value): value is number => typeof value === "number")
.filter((value) => Number.isFinite(value) && value > 0);
if (values.length === 0) {
return null;
}
return Math.max(...values);
}
/**
* Check if a profile is currently in cooldown (due to rate limiting or errors).
*/
export function isProfileInCooldown(store: AuthProfileStore, profileId: string): boolean {
const stats = store.usageStats?.[profileId];
if (!stats) {
return false;
}
const unusableUntil = resolveProfileUnusableUntil(stats);
return unusableUntil ? Date.now() < unusableUntil : false;
}
/**
* Return the soonest `unusableUntil` timestamp (ms epoch) among the given
* profiles, or `null` when no profile has a recorded cooldown. Note: the
* returned timestamp may be in the past if the cooldown has already expired.
*/
export function getSoonestCooldownExpiry(
store: AuthProfileStore,
profileIds: string[],
): number | null {
let soonest: number | null = null;
for (const id of profileIds) {
const stats = store.usageStats?.[id];
if (!stats) {
continue;
}
const until = resolveProfileUnusableUntil(stats);
if (typeof until !== "number" || !Number.isFinite(until) || until <= 0) {
continue;
}
if (soonest === null || until < soonest) {
soonest = until;
}
}
return soonest;
}
/**
* Clear expired cooldowns from all profiles in the store.
*
* When `cooldownUntil` or `disabledUntil` has passed, the corresponding fields
* are removed and error counters are reset so the profile gets a fresh start
 * (the circuit breaker moves from half-open back to closed). Without this, a
 * stale `errorCount` causes the *next* transient failure to immediately
 * escalate to a much longer cooldown: the root cause of profiles appearing
 * "stuck" after rate limits.
*
* `cooldownUntil` and `disabledUntil` are handled independently: if a profile
* has both and only one has expired, only that field is cleared.
*
* Mutates the in-memory store; disk persistence happens lazily on the next
* store write (e.g. `markAuthProfileUsed` / `markAuthProfileFailure`), which
* matches the existing save pattern throughout the auth-profiles module.
*
* @returns `true` if any profile was modified.
*/
export function clearExpiredCooldowns(store: AuthProfileStore, now?: number): boolean {
const usageStats = store.usageStats;
if (!usageStats) {
return false;
}
const ts = now ?? Date.now();
let mutated = false;
for (const [profileId, stats] of Object.entries(usageStats)) {
if (!stats) {
continue;
}
let profileMutated = false;
const cooldownExpired =
typeof stats.cooldownUntil === "number" &&
Number.isFinite(stats.cooldownUntil) &&
stats.cooldownUntil > 0 &&
ts >= stats.cooldownUntil;
const disabledExpired =
typeof stats.disabledUntil === "number" &&
Number.isFinite(stats.disabledUntil) &&
stats.disabledUntil > 0 &&
ts >= stats.disabledUntil;
if (cooldownExpired) {
stats.cooldownUntil = undefined;
profileMutated = true;
}
if (disabledExpired) {
stats.disabledUntil = undefined;
stats.disabledReason = undefined;
profileMutated = true;
}
// Reset error counters when ALL cooldowns have expired so the profile gets
// a fair retry window. Preserves lastFailureAt for the failureWindowMs
// decay check in computeNextProfileUsageStats.
if (profileMutated && !resolveProfileUnusableUntil(stats)) {
stats.errorCount = 0;
stats.failureCounts = undefined;
}
if (profileMutated) {
usageStats[profileId] = stats;
mutated = true;
}
}
return mutated;
}
/**
* Mark a profile as successfully used. Resets error count and updates lastUsed.
* Uses store lock to avoid overwriting concurrent usage updates.
*/
export async function markAuthProfileUsed(params: {
store: AuthProfileStore;
profileId: string;
agentDir?: string;
}): Promise<void> {
const { store, profileId, agentDir } = params;
const updated = await updateAuthProfileStoreWithLock({
agentDir,
updater: (freshStore) => {
if (!freshStore.profiles[profileId]) {
return false;
}
freshStore.usageStats = freshStore.usageStats ?? {};
freshStore.usageStats[profileId] = {
...freshStore.usageStats[profileId],
lastUsed: Date.now(),
errorCount: 0,
cooldownUntil: undefined,
disabledUntil: undefined,
disabledReason: undefined,
failureCounts: undefined,
};
return true;
},
});
if (updated) {
store.usageStats = updated.usageStats;
return;
}
if (!store.profiles[profileId]) {
return;
}
store.usageStats = store.usageStats ?? {};
store.usageStats[profileId] = {
...store.usageStats[profileId],
lastUsed: Date.now(),
errorCount: 0,
cooldownUntil: undefined,
disabledUntil: undefined,
disabledReason: undefined,
failureCounts: undefined,
};
saveAuthProfileStore(store, agentDir);
}
export function calculateAuthProfileCooldownMs(errorCount: number): number {
const normalized = Math.max(1, errorCount);
return Math.min(
60 * 60 * 1000, // 1 hour max
60 * 1000 * 5 ** Math.min(normalized - 1, 3),
);
}
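The escalation schedule implemented by `calculateAuthProfileCooldownMs` (a one-minute base multiplied by 5 per prior failure, with the exponent and total both capped) can be sketched standalone. The helper name below is illustrative, not part of this module:

```typescript
// Hypothetical standalone sketch of the non-billing cooldown schedule:
// 1 min -> 5 min -> 25 min -> capped at 1 hour from the 4th failure on.
function cooldownBackoffMs(errorCount: number): number {
  const normalized = Math.max(1, errorCount);
  return Math.min(
    60 * 60 * 1000, // 1 hour ceiling
    60 * 1000 * 5 ** Math.min(normalized - 1, 3),
  );
}

for (const n of [1, 2, 3, 4, 5]) {
  console.log(n, cooldownBackoffMs(n) / 60_000, "min");
}
// 1 → 1 min, 2 → 5 min, 3 → 25 min, 4+ → 60 min
```

Capping the exponent at 3 means the raw value already exceeds the one-hour ceiling at the fourth failure, so both caps land on the same plateau.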
type ResolvedAuthCooldownConfig = {
billingBackoffMs: number;
billingMaxMs: number;
failureWindowMs: number;
};
function resolveAuthCooldownConfig(params: {
cfg?: OpenClawConfig;
providerId: string;
}): ResolvedAuthCooldownConfig {
const defaults = {
billingBackoffHours: 5,
billingMaxHours: 24,
failureWindowHours: 24,
} as const;
const resolveHours = (value: unknown, fallback: number) =>
typeof value === "number" && Number.isFinite(value) && value > 0 ? value : fallback;
const cooldowns = params.cfg?.auth?.cooldowns;
const billingOverride = (() => {
const map = cooldowns?.billingBackoffHoursByProvider;
if (!map) {
return undefined;
}
for (const [key, value] of Object.entries(map)) {
if (normalizeProviderId(key) === params.providerId) {
return value;
}
}
return undefined;
})();
const billingBackoffHours = resolveHours(
billingOverride ?? cooldowns?.billingBackoffHours,
defaults.billingBackoffHours,
);
const billingMaxHours = resolveHours(cooldowns?.billingMaxHours, defaults.billingMaxHours);
const failureWindowHours = resolveHours(
cooldowns?.failureWindowHours,
defaults.failureWindowHours,
);
return {
billingBackoffMs: billingBackoffHours * 60 * 60 * 1000,
billingMaxMs: billingMaxHours * 60 * 60 * 1000,
failureWindowMs: failureWindowHours * 60 * 60 * 1000,
};
}
function calculateAuthProfileBillingDisableMsWithConfig(params: {
errorCount: number;
baseMs: number;
maxMs: number;
}): number {
const normalized = Math.max(1, params.errorCount);
const baseMs = Math.max(60_000, params.baseMs);
const maxMs = Math.max(baseMs, params.maxMs);
const exponent = Math.min(normalized - 1, 10);
const raw = baseMs * 2 ** exponent;
return Math.min(maxMs, raw);
}
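With the default config resolved above (a 5-hour billing base and a 24-hour ceiling), the billing disable window doubles per consecutive billing failure until it hits the ceiling. A minimal sketch, with an illustrative helper name:

```typescript
// Hypothetical sketch of the billing disable schedule: base doubles per
// failure (exponent capped at 10), floored at 1 minute, clamped to maxMs.
function billingDisableMs(errorCount: number, baseMs: number, maxMs: number): number {
  const base = Math.max(60_000, baseMs); // floor the base at 1 minute
  const cap = Math.max(base, maxMs); // ceiling never drops below the base
  const exponent = Math.min(Math.max(1, errorCount) - 1, 10);
  return Math.min(cap, base * 2 ** exponent);
}

const HOUR = 3_600_000;
for (const n of [1, 2, 3, 4]) {
  console.log(n, billingDisableMs(n, 5 * HOUR, 24 * HOUR) / HOUR, "h");
}
// 1 → 5 h, 2 → 10 h, 3 → 20 h, 4+ → capped at 24 h
```

The longer doubling curve (versus the 5x non-billing schedule) reflects that billing failures rarely resolve on their own within minutes.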
export function resolveProfileUnusableUntilForDisplay(
store: AuthProfileStore,
profileId: string,
): number | null {
const stats = store.usageStats?.[profileId];
if (!stats) {
return null;
}
return resolveProfileUnusableUntil(stats);
}
function computeNextProfileUsageStats(params: {
existing: ProfileUsageStats;
now: number;
reason: AuthProfileFailureReason;
cfgResolved: ResolvedAuthCooldownConfig;
}): ProfileUsageStats {
const windowMs = params.cfgResolved.failureWindowMs;
const windowExpired =
typeof params.existing.lastFailureAt === "number" &&
params.existing.lastFailureAt > 0 &&
params.now - params.existing.lastFailureAt > windowMs;
const baseErrorCount = windowExpired ? 0 : (params.existing.errorCount ?? 0);
const nextErrorCount = baseErrorCount + 1;
const failureCounts = windowExpired ? {} : { ...params.existing.failureCounts };
failureCounts[params.reason] = (failureCounts[params.reason] ?? 0) + 1;
const updatedStats: ProfileUsageStats = {
...params.existing,
errorCount: nextErrorCount,
failureCounts,
lastFailureAt: params.now,
};
if (params.reason === "billing") {
const billingCount = failureCounts.billing ?? 1;
const backoffMs = calculateAuthProfileBillingDisableMsWithConfig({
errorCount: billingCount,
baseMs: params.cfgResolved.billingBackoffMs,
maxMs: params.cfgResolved.billingMaxMs,
});
updatedStats.disabledUntil = params.now + backoffMs;
updatedStats.disabledReason = "billing";
} else {
const backoffMs = calculateAuthProfileCooldownMs(nextErrorCount);
updatedStats.cooldownUntil = params.now + backoffMs;
}
return updatedStats;
}
/**
* Mark a profile as failed for a specific reason. Billing failures are treated
* as "disabled" (longer backoff) vs the regular cooldown window.
*/
export async function markAuthProfileFailure(params: {
store: AuthProfileStore;
profileId: string;
reason: AuthProfileFailureReason;
cfg?: OpenClawConfig;
agentDir?: string;
}): Promise<void> {
const { store, profileId, reason, agentDir, cfg } = params;
const updated = await updateAuthProfileStoreWithLock({
agentDir,
updater: (freshStore) => {
const profile = freshStore.profiles[profileId];
if (!profile) {
return false;
}
freshStore.usageStats = freshStore.usageStats ?? {};
const existing = freshStore.usageStats[profileId] ?? {};
const now = Date.now();
const providerKey = normalizeProviderId(profile.provider);
const cfgResolved = resolveAuthCooldownConfig({
cfg,
providerId: providerKey,
});
freshStore.usageStats[profileId] = computeNextProfileUsageStats({
existing,
now,
reason,
cfgResolved,
});
return true;
},
});
if (updated) {
store.usageStats = updated.usageStats;
return;
}
if (!store.profiles[profileId]) {
return;
}
store.usageStats = store.usageStats ?? {};
const existing = store.usageStats[profileId] ?? {};
const now = Date.now();
const providerKey = normalizeProviderId(store.profiles[profileId]?.provider ?? "");
const cfgResolved = resolveAuthCooldownConfig({
cfg,
providerId: providerKey,
});
store.usageStats[profileId] = computeNextProfileUsageStats({
existing,
now,
reason,
cfgResolved,
});
saveAuthProfileStore(store, agentDir);
}
/**
* Mark a profile as failed/rate-limited. Applies exponential backoff cooldown.
* Cooldown times: 1min, 5min, 25min, max 1 hour.
* Uses store lock to avoid overwriting concurrent usage updates.
*/
export async function markAuthProfileCooldown(params: {
store: AuthProfileStore;
profileId: string;
agentDir?: string;
}): Promise<void> {
await markAuthProfileFailure({
store: params.store,
profileId: params.profileId,
reason: "unknown",
agentDir: params.agentDir,
});
}
/**
* Clear cooldown for a profile (e.g., manual reset).
* Uses store lock to avoid overwriting concurrent usage updates.
*/
export async function clearAuthProfileCooldown(params: {
store: AuthProfileStore;
profileId: string;
agentDir?: string;
}): Promise<void> {
const { store, profileId, agentDir } = params;
const updated = await updateAuthProfileStoreWithLock({
agentDir,
updater: (freshStore) => {
if (!freshStore.usageStats?.[profileId]) {
return false;
}
freshStore.usageStats[profileId] = {
...freshStore.usageStats[profileId],
errorCount: 0,
cooldownUntil: undefined,
disabledUntil: undefined,
disabledReason: undefined,
failureCounts: undefined,
};
return true;
},
});
if (updated) {
store.usageStats = updated.usageStats;
return;
}
if (!store.usageStats?.[profileId]) {
return;
}
store.usageStats[profileId] = {
...store.usageStats[profileId],
errorCount: 0,
cooldownUntil: undefined,
disabledUntil: undefined,
disabledReason: undefined,
failureCounts: undefined,
};
saveAuthProfileStore(store, agentDir);
}


@@ -1,117 +0,0 @@
import type { ChildProcessWithoutNullStreams } from "node:child_process";
import { beforeEach, describe, expect, it, vi } from "vitest";
import type { ProcessSession } from "./bash-process-registry.js";
import {
addSession,
appendOutput,
drainSession,
listFinishedSessions,
markBackgrounded,
markExited,
resetProcessRegistryForTests,
} from "./bash-process-registry.js";
import { createProcessSessionFixture } from "./bash-process-registry.test-helpers.js";
describe("bash process registry", () => {
function createRegistrySession(params: {
id?: string;
maxOutputChars: number;
pendingMaxOutputChars: number;
backgrounded: boolean;
}): ProcessSession {
return createProcessSessionFixture({
id: params.id ?? "sess",
command: "echo test",
child: { pid: 123, removeAllListeners: vi.fn() } as unknown as ChildProcessWithoutNullStreams,
maxOutputChars: params.maxOutputChars,
pendingMaxOutputChars: params.pendingMaxOutputChars,
backgrounded: params.backgrounded,
});
}
beforeEach(() => {
resetProcessRegistryForTests();
});
it("captures output and truncates", () => {
const session = createRegistrySession({
maxOutputChars: 10,
pendingMaxOutputChars: 30_000,
backgrounded: false,
});
addSession(session);
appendOutput(session, "stdout", "0123456789");
appendOutput(session, "stdout", "abcdef");
expect(session.aggregated).toBe("6789abcdef");
expect(session.truncated).toBe(true);
});
it("caps pending output to avoid runaway polls", () => {
const session = createRegistrySession({
maxOutputChars: 100_000,
pendingMaxOutputChars: 20_000,
backgrounded: true,
});
addSession(session);
const payload = `${"a".repeat(70_000)}${"b".repeat(20_000)}`;
appendOutput(session, "stdout", payload);
const drained = drainSession(session);
expect(drained.stdout).toBe("b".repeat(20_000));
expect(session.pendingStdout).toHaveLength(0);
expect(session.pendingStdoutChars).toBe(0);
expect(session.truncated).toBe(true);
});
it("respects max output cap when pending cap is larger", () => {
const session = createRegistrySession({
maxOutputChars: 5_000,
pendingMaxOutputChars: 30_000,
backgrounded: true,
});
addSession(session);
appendOutput(session, "stdout", "x".repeat(10_000));
const drained = drainSession(session);
expect(drained.stdout.length).toBe(5_000);
expect(session.truncated).toBe(true);
});
it("caps stdout and stderr independently", () => {
const session = createRegistrySession({
maxOutputChars: 100,
pendingMaxOutputChars: 10,
backgrounded: true,
});
addSession(session);
appendOutput(session, "stdout", "a".repeat(6));
appendOutput(session, "stdout", "b".repeat(6));
appendOutput(session, "stderr", "c".repeat(12));
const drained = drainSession(session);
expect(drained.stdout).toBe("a".repeat(4) + "b".repeat(6));
expect(drained.stderr).toBe("c".repeat(10));
expect(session.truncated).toBe(true);
});
it("only persists finished sessions when backgrounded", () => {
const session = createRegistrySession({
maxOutputChars: 100,
pendingMaxOutputChars: 30_000,
backgrounded: false,
});
addSession(session);
markExited(session, 0, null, "completed");
expect(listFinishedSessions()).toHaveLength(0);
markBackgrounded(session);
markExited(session, 0, null, "completed");
expect(listFinishedSessions()).toHaveLength(1);
});
});


@@ -1,42 +0,0 @@
import type { ChildProcessWithoutNullStreams } from "node:child_process";
import type { ProcessSession } from "./bash-process-registry.js";
export function createProcessSessionFixture(params: {
id: string;
command?: string;
startedAt?: number;
cwd?: string;
maxOutputChars?: number;
pendingMaxOutputChars?: number;
backgrounded?: boolean;
pid?: number;
child?: ChildProcessWithoutNullStreams;
}): ProcessSession {
const session: ProcessSession = {
id: params.id,
command: params.command ?? "test",
startedAt: params.startedAt ?? Date.now(),
cwd: params.cwd ?? "/tmp",
maxOutputChars: params.maxOutputChars ?? 10_000,
pendingMaxOutputChars: params.pendingMaxOutputChars ?? 30_000,
totalOutputChars: 0,
pendingStdout: [],
pendingStderr: [],
pendingStdoutChars: 0,
pendingStderrChars: 0,
aggregated: "",
tail: "",
exited: false,
exitCode: undefined,
exitSignal: undefined,
truncated: false,
backgrounded: params.backgrounded ?? false,
};
if (params.pid !== undefined) {
session.pid = params.pid;
}
if (params.child) {
session.child = params.child;
}
return session;
}


@@ -1,309 +0,0 @@
import type { ChildProcessWithoutNullStreams } from "node:child_process";
import { createSessionSlug as createSessionSlugId } from "./session-slug.js";
const DEFAULT_JOB_TTL_MS = 30 * 60 * 1000; // 30 minutes
const MIN_JOB_TTL_MS = 60 * 1000; // 1 minute
const MAX_JOB_TTL_MS = 3 * 60 * 60 * 1000; // 3 hours
const DEFAULT_PENDING_OUTPUT_CHARS = 30_000;
function clampTtl(value: number | undefined) {
if (!value || Number.isNaN(value)) {
return DEFAULT_JOB_TTL_MS;
}
return Math.min(Math.max(value, MIN_JOB_TTL_MS), MAX_JOB_TTL_MS);
}
let jobTtlMs = clampTtl(Number.parseInt(process.env.PI_BASH_JOB_TTL_MS ?? "", 10));
export type ProcessStatus = "running" | "completed" | "failed" | "killed";
export type SessionStdin = {
write: (data: string, cb?: (err?: Error | null) => void) => void;
end: () => void;
// When backed by a real Node stream (child.stdin), this exists; for PTY wrappers it may not.
destroy?: () => void;
destroyed?: boolean;
};
export interface ProcessSession {
id: string;
command: string;
scopeKey?: string;
sessionKey?: string;
notifyOnExit?: boolean;
notifyOnExitEmptySuccess?: boolean;
exitNotified?: boolean;
child?: ChildProcessWithoutNullStreams;
stdin?: SessionStdin;
pid?: number;
startedAt: number;
cwd?: string;
maxOutputChars: number;
pendingMaxOutputChars?: number;
totalOutputChars: number;
pendingStdout: string[];
pendingStderr: string[];
pendingStdoutChars: number;
pendingStderrChars: number;
aggregated: string;
tail: string;
exitCode?: number | null;
exitSignal?: NodeJS.Signals | number | null;
exited: boolean;
truncated: boolean;
backgrounded: boolean;
}
export interface FinishedSession {
id: string;
command: string;
scopeKey?: string;
startedAt: number;
endedAt: number;
cwd?: string;
status: ProcessStatus;
exitCode?: number | null;
exitSignal?: NodeJS.Signals | number | null;
aggregated: string;
tail: string;
truncated: boolean;
totalOutputChars: number;
}
const runningSessions = new Map<string, ProcessSession>();
const finishedSessions = new Map<string, FinishedSession>();
let sweeper: NodeJS.Timeout | null = null;
function isSessionIdTaken(id: string) {
return runningSessions.has(id) || finishedSessions.has(id);
}
export function createSessionSlug(): string {
return createSessionSlugId(isSessionIdTaken);
}
export function addSession(session: ProcessSession) {
runningSessions.set(session.id, session);
startSweeper();
}
export function getSession(id: string) {
return runningSessions.get(id);
}
export function getFinishedSession(id: string) {
return finishedSessions.get(id);
}
export function deleteSession(id: string) {
runningSessions.delete(id);
finishedSessions.delete(id);
}
export function appendOutput(session: ProcessSession, stream: "stdout" | "stderr", chunk: string) {
session.pendingStdout ??= [];
session.pendingStderr ??= [];
session.pendingStdoutChars ??= sumPendingChars(session.pendingStdout);
session.pendingStderrChars ??= sumPendingChars(session.pendingStderr);
const buffer = stream === "stdout" ? session.pendingStdout : session.pendingStderr;
const bufferChars = stream === "stdout" ? session.pendingStdoutChars : session.pendingStderrChars;
const pendingCap = Math.min(
session.pendingMaxOutputChars ?? DEFAULT_PENDING_OUTPUT_CHARS,
session.maxOutputChars,
);
buffer.push(chunk);
let pendingChars = bufferChars + chunk.length;
if (pendingChars > pendingCap) {
session.truncated = true;
pendingChars = capPendingBuffer(buffer, pendingChars, pendingCap);
}
if (stream === "stdout") {
session.pendingStdoutChars = pendingChars;
} else {
session.pendingStderrChars = pendingChars;
}
session.totalOutputChars += chunk.length;
const aggregated = trimWithCap(session.aggregated + chunk, session.maxOutputChars);
session.truncated =
session.truncated || aggregated.length < session.aggregated.length + chunk.length;
session.aggregated = aggregated;
session.tail = tail(session.aggregated, 2000);
}
export function drainSession(session: ProcessSession) {
const stdout = session.pendingStdout.join("");
const stderr = session.pendingStderr.join("");
session.pendingStdout = [];
session.pendingStderr = [];
session.pendingStdoutChars = 0;
session.pendingStderrChars = 0;
return { stdout, stderr };
}
export function markExited(
session: ProcessSession,
exitCode: number | null,
exitSignal: NodeJS.Signals | number | null,
status: ProcessStatus,
) {
session.exited = true;
session.exitCode = exitCode;
session.exitSignal = exitSignal;
session.tail = tail(session.aggregated, 2000);
moveToFinished(session, status);
}
export function markBackgrounded(session: ProcessSession) {
session.backgrounded = true;
}
function moveToFinished(session: ProcessSession, status: ProcessStatus) {
runningSessions.delete(session.id);
// Clean up child process stdio streams to prevent FD leaks
if (session.child) {
// Destroy stdio streams to release file descriptors
session.child.stdin?.destroy?.();
session.child.stdout?.destroy?.();
session.child.stderr?.destroy?.();
// Remove all event listeners to prevent memory leaks
session.child.removeAllListeners();
// Clear the reference
delete session.child;
}
// Clean up stdin wrapper - call destroy if available, otherwise just remove reference
if (session.stdin) {
// Try to call destroy/end method if exists
if (typeof session.stdin.destroy === "function") {
session.stdin.destroy();
} else if (typeof session.stdin.end === "function") {
session.stdin.end();
}
// Only set flag if writable
try {
(session.stdin as { destroyed?: boolean }).destroyed = true;
} catch {
// Ignore if read-only
}
delete session.stdin;
}
if (!session.backgrounded) {
return;
}
finishedSessions.set(session.id, {
id: session.id,
command: session.command,
scopeKey: session.scopeKey,
startedAt: session.startedAt,
endedAt: Date.now(),
cwd: session.cwd,
status,
exitCode: session.exitCode,
exitSignal: session.exitSignal,
aggregated: session.aggregated,
tail: session.tail,
truncated: session.truncated,
totalOutputChars: session.totalOutputChars,
});
}
export function tail(text: string, max = 2000) {
if (text.length <= max) {
return text;
}
return text.slice(text.length - max);
}
function sumPendingChars(buffer: string[]) {
let total = 0;
for (const chunk of buffer) {
total += chunk.length;
}
return total;
}
function capPendingBuffer(buffer: string[], pendingChars: number, cap: number) {
if (pendingChars <= cap) {
return pendingChars;
}
const last = buffer.at(-1);
if (last && last.length >= cap) {
buffer.length = 0;
buffer.push(last.slice(last.length - cap));
return cap;
}
while (buffer.length && pendingChars - buffer[0].length >= cap) {
pendingChars -= buffer[0].length;
buffer.shift();
}
if (buffer.length && pendingChars > cap) {
const overflow = pendingChars - cap;
buffer[0] = buffer[0].slice(overflow);
pendingChars = cap;
}
return pendingChars;
}
export function trimWithCap(text: string, max: number) {
if (text.length <= max) {
return text;
}
return text.slice(text.length - max);
}
export function listRunningSessions() {
return Array.from(runningSessions.values()).filter((s) => s.backgrounded);
}
export function listFinishedSessions() {
return Array.from(finishedSessions.values());
}
export function clearFinished() {
finishedSessions.clear();
}
export function resetProcessRegistryForTests() {
runningSessions.clear();
finishedSessions.clear();
stopSweeper();
}
export function setJobTtlMs(value?: number) {
if (value === undefined || Number.isNaN(value)) {
return;
}
jobTtlMs = clampTtl(value);
stopSweeper();
startSweeper();
}
function pruneFinishedSessions() {
const cutoff = Date.now() - jobTtlMs;
for (const [id, session] of finishedSessions.entries()) {
if (session.endedAt < cutoff) {
finishedSessions.delete(id);
}
}
}
function startSweeper() {
if (sweeper) {
return;
}
sweeper = setInterval(pruneFinishedSessions, Math.max(30_000, jobTtlMs / 6));
sweeper.unref?.();
}
function stopSweeper() {
if (!sweeper) {
return;
}
clearInterval(sweeper);
sweeper = null;
}
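
The pending-output cap above prefers dropping whole leading chunks and only trims the front of the oldest survivor when it must. A standalone sketch of that behavior (the function name here is illustrative, not part of the module):

```typescript
// Illustrative re-implementation of the pending-buffer cap: keep at most
// `cap` characters, dropping whole chunks from the front before slicing.
function capPending(buffer: string[], cap: number): string[] {
  let pending = buffer.reduce((sum, chunk) => sum + chunk.length, 0);
  if (pending <= cap) return buffer;
  const last = buffer[buffer.length - 1];
  // The newest chunk alone can satisfy the cap; keep only its tail.
  if (last !== undefined && last.length >= cap) {
    return [last.slice(last.length - cap)];
  }
  const out = buffer.slice();
  // Drop whole chunks from the front while the remainder still meets the cap.
  while (out.length && pending - out[0].length >= cap) {
    pending -= out[0].length;
    out.shift();
  }
  // Trim the front of the oldest surviving chunk to land exactly on the cap.
  if (out.length && pending > cap) {
    out[0] = out[0].slice(pending - cap);
  }
  return out;
}

// Cap of 5 over chunks totaling 9 chars keeps the newest tail: "efghi".
const kept = capPending(["abcd", "efg", "hi"], 5);
console.log(kept.join(""));
```

As in the real `capPendingBuffer`, the newest chunk is never discarded, so the most recent output always survives truncation.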

View File

@@ -1,535 +0,0 @@
import path from "node:path";
import { afterEach, beforeEach, describe, expect, it } from "vitest";
import { peekSystemEvents, resetSystemEventsForTest } from "../infra/system-events.js";
import { captureEnv } from "../test-utils/env.js";
import { getFinishedSession, resetProcessRegistryForTests } from "./bash-process-registry.js";
import { createExecTool, createProcessTool, execTool, processTool } from "./bash-tools.js";
import { buildDockerExecArgs } from "./bash-tools.shared.js";
import { resolveShellFromPath, sanitizeBinaryOutput } from "./shell-utils.js";
const isWin = process.platform === "win32";
const defaultShell = isWin
? undefined
: process.env.OPENCLAW_TEST_SHELL || resolveShellFromPath("bash") || process.env.SHELL || "sh";
// PowerShell: Start-Sleep for delays, ; for command separation, $null for null device
const shortDelayCmd = isWin ? "Start-Sleep -Milliseconds 50" : "sleep 0.05";
const yieldDelayCmd = isWin ? "Start-Sleep -Milliseconds 200" : "sleep 0.2";
const longDelayCmd = isWin ? "Start-Sleep -Seconds 2" : "sleep 2";
// Both PowerShell and bash use ; for command separation
const joinCommands = (commands: string[]) => commands.join("; ");
const echoAfterDelay = (message: string) => joinCommands([shortDelayCmd, `echo ${message}`]);
const echoLines = (lines: string[]) => joinCommands(lines.map((line) => `echo ${line}`));
const normalizeText = (value?: string) =>
sanitizeBinaryOutput(value ?? "")
.replace(/\r\n/g, "\n")
.replace(/\r/g, "\n")
.split("\n")
.map((line) => line.replace(/\s+$/u, ""))
.join("\n")
.trim();
async function waitForCompletion(sessionId: string) {
let status = "running";
await expect
.poll(
async () => {
const poll = await processTool.execute("call-wait", {
action: "poll",
sessionId,
});
status = (poll.details as { status: string }).status;
return status;
},
{ timeout: process.platform === "win32" ? 8000 : 2000, interval: 20 },
)
.not.toBe("running");
return status;
}
async function runBackgroundEchoLines(lines: string[]) {
const result = await execTool.execute("call1", {
command: echoLines(lines),
background: true,
});
const sessionId = (result.details as { sessionId: string }).sessionId;
await waitForCompletion(sessionId);
return sessionId;
}
beforeEach(() => {
resetProcessRegistryForTests();
resetSystemEventsForTest();
});
describe("exec tool backgrounding", () => {
let envSnapshot: ReturnType<typeof captureEnv>;
beforeEach(() => {
envSnapshot = captureEnv(["SHELL"]);
if (!isWin && defaultShell) {
process.env.SHELL = defaultShell;
}
});
afterEach(() => {
envSnapshot.restore();
});
it(
"backgrounds after yield and can be polled",
async () => {
const result = await execTool.execute("call1", {
command: joinCommands([yieldDelayCmd, "echo done"]),
yieldMs: 10,
});
expect(result.details.status).toBe("running");
const sessionId = (result.details as { sessionId: string }).sessionId;
let output = "";
await expect
.poll(
async () => {
const poll = await processTool.execute("call2", {
action: "poll",
sessionId,
});
const status = (poll.details as { status: string }).status;
const textBlock = poll.content.find((c) => c.type === "text");
output = textBlock?.text ?? "";
return status;
},
{ timeout: process.platform === "win32" ? 8000 : 2000, interval: 20 },
)
.toBe("completed");
expect(output).toContain("done");
},
isWin ? 15_000 : 5_000,
);
it("supports explicit background", async () => {
const result = await execTool.execute("call1", {
command: echoAfterDelay("later"),
background: true,
});
expect(result.details.status).toBe("running");
const sessionId = (result.details as { sessionId: string }).sessionId;
const list = await processTool.execute("call2", { action: "list" });
const sessions = (list.details as { sessions: Array<{ sessionId: string }> }).sessions;
expect(sessions.some((s) => s.sessionId === sessionId)).toBe(true);
});
it("derives a session name from the command", async () => {
const result = await execTool.execute("call1", {
command: "echo hello",
background: true,
});
const sessionId = (result.details as { sessionId: string }).sessionId;
await expect
.poll(
async () => {
const list = await processTool.execute("call2", { action: "list" });
const sessions = (
list.details as { sessions: Array<{ sessionId: string; name?: string }> }
).sessions;
return sessions.find((s) => s.sessionId === sessionId)?.name;
},
{ timeout: process.platform === "win32" ? 8000 : 2000, interval: 20 },
)
.toBe("echo hello");
});
it("uses default timeout when timeout is omitted", async () => {
const customBash = createExecTool({ timeoutSec: 0.2, backgroundMs: 10 });
const customProcess = createProcessTool();
const result = await customBash.execute("call1", {
command: longDelayCmd,
background: true,
});
const sessionId = (result.details as { sessionId: string }).sessionId;
await expect
.poll(
async () => {
const poll = await customProcess.execute("call2", {
action: "poll",
sessionId,
});
return (poll.details as { status: string }).status;
},
{ timeout: 5000, interval: 20 },
)
.toBe("failed");
});
it("rejects elevated requests when not allowed", async () => {
const customBash = createExecTool({
elevated: { enabled: true, allowed: false, defaultLevel: "off" },
messageProvider: "telegram",
sessionKey: "agent:main:main",
});
await expect(
customBash.execute("call1", {
command: "echo hi",
elevated: true,
}),
).rejects.toThrow("Context: provider=telegram session=agent:main:main");
});
it("does not default to elevated when not allowed", async () => {
const customBash = createExecTool({
elevated: { enabled: true, allowed: false, defaultLevel: "on" },
backgroundMs: 1000,
timeoutSec: 5,
});
const result = await customBash.execute("call1", {
command: "echo hi",
});
const text = result.content.find((c) => c.type === "text")?.text ?? "";
expect(text).toContain("hi");
});
it("logs line-based slices and defaults to last lines", async () => {
const result = await execTool.execute("call1", {
command: echoLines(["one", "two", "three"]),
background: true,
});
const sessionId = (result.details as { sessionId: string }).sessionId;
const status = await waitForCompletion(sessionId);
const log = await processTool.execute("call3", {
action: "log",
sessionId,
limit: 2,
});
const textBlock = log.content.find((c) => c.type === "text");
expect(normalizeText(textBlock?.text)).toBe("two\nthree");
expect((log.details as { totalLines?: number }).totalLines).toBe(3);
expect(status).toBe("completed");
});
it("defaults process log to a bounded tail when no window is provided", async () => {
const lines = Array.from({ length: 260 }, (_value, index) => `line-${index + 1}`);
const sessionId = await runBackgroundEchoLines(lines);
const log = await processTool.execute("call2", {
action: "log",
sessionId,
});
const textBlock = log.content.find((c) => c.type === "text")?.text ?? "";
const firstLine = textBlock.split("\n")[0]?.trim();
expect(textBlock).toContain("showing last 200 of 260 lines");
expect(firstLine).toBe("line-61");
expect(textBlock).toContain("line-61");
expect(textBlock).toContain("line-260");
expect((log.details as { totalLines?: number }).totalLines).toBe(260);
});
it("supports line offsets for log slices", async () => {
const result = await execTool.execute("call1", {
command: echoLines(["alpha", "beta", "gamma"]),
background: true,
});
const sessionId = (result.details as { sessionId: string }).sessionId;
await waitForCompletion(sessionId);
const log = await processTool.execute("call2", {
action: "log",
sessionId,
offset: 1,
limit: 1,
});
const textBlock = log.content.find((c) => c.type === "text");
expect(normalizeText(textBlock?.text)).toBe("beta");
});
  it("does not apply the default tail cap to offset-only log requests", async () => {
const lines = Array.from({ length: 260 }, (_value, index) => `line-${index + 1}`);
const sessionId = await runBackgroundEchoLines(lines);
const log = await processTool.execute("call2", {
action: "log",
sessionId,
offset: 30,
});
const textBlock = log.content.find((c) => c.type === "text")?.text ?? "";
const renderedLines = textBlock.split("\n");
expect(renderedLines[0]?.trim()).toBe("line-31");
expect(renderedLines[renderedLines.length - 1]?.trim()).toBe("line-260");
expect(textBlock).not.toContain("showing last 200");
expect((log.details as { totalLines?: number }).totalLines).toBe(260);
});
it("scopes process sessions by scopeKey", async () => {
const bashA = createExecTool({ backgroundMs: 10, scopeKey: "agent:alpha" });
const processA = createProcessTool({ scopeKey: "agent:alpha" });
const bashB = createExecTool({ backgroundMs: 10, scopeKey: "agent:beta" });
const processB = createProcessTool({ scopeKey: "agent:beta" });
const resultA = await bashA.execute("call1", {
command: shortDelayCmd,
background: true,
});
const resultB = await bashB.execute("call2", {
command: shortDelayCmd,
background: true,
});
const sessionA = (resultA.details as { sessionId: string }).sessionId;
const sessionB = (resultB.details as { sessionId: string }).sessionId;
const listA = await processA.execute("call3", { action: "list" });
const sessionsA = (listA.details as { sessions: Array<{ sessionId: string }> }).sessions;
expect(sessionsA.some((s) => s.sessionId === sessionA)).toBe(true);
expect(sessionsA.some((s) => s.sessionId === sessionB)).toBe(false);
const pollB = await processB.execute("call4", {
action: "poll",
sessionId: sessionA,
});
const pollBDetails = pollB.details as { status?: string };
expect(pollBDetails.status).toBe("failed");
});
});
describe("exec exit codes", () => {
let envSnapshot: ReturnType<typeof captureEnv>;
beforeEach(() => {
envSnapshot = captureEnv(["SHELL"]);
if (!isWin && defaultShell) {
process.env.SHELL = defaultShell;
}
});
afterEach(() => {
envSnapshot.restore();
});
it("treats non-zero exits as completed and appends exit code", async () => {
const command = isWin
? joinCommands(["Write-Output nope", "exit 1"])
: joinCommands(["echo nope", "exit 1"]);
const result = await execTool.execute("call1", { command });
const resultDetails = result.details as { status?: string; exitCode?: number | null };
expect(resultDetails.status).toBe("completed");
expect(resultDetails.exitCode).toBe(1);
const text = normalizeText(result.content.find((c) => c.type === "text")?.text);
expect(text).toContain("nope");
expect(text).toContain("Command exited with code 1");
});
});
describe("exec notifyOnExit", () => {
it("enqueues a system event when a backgrounded exec exits", async () => {
const tool = createExecTool({
allowBackground: true,
backgroundMs: 0,
notifyOnExit: true,
sessionKey: "agent:main:main",
});
const result = await tool.execute("call1", {
command: echoAfterDelay("notify"),
background: true,
});
expect(result.details.status).toBe("running");
const sessionId = (result.details as { sessionId: string }).sessionId;
const prefix = sessionId.slice(0, 8);
let finished = getFinishedSession(sessionId);
let hasEvent = peekSystemEvents("agent:main:main").some((event) => event.includes(prefix));
await expect
.poll(
() => {
finished = getFinishedSession(sessionId);
hasEvent = peekSystemEvents("agent:main:main").some((event) => event.includes(prefix));
return Boolean(finished && hasEvent);
},
{ timeout: isWin ? 12_000 : 5_000, interval: 20 },
)
.toBe(true);
if (!finished) {
finished = getFinishedSession(sessionId);
}
if (!hasEvent) {
hasEvent = peekSystemEvents("agent:main:main").some((event) => event.includes(prefix));
}
expect(finished).toBeTruthy();
expect(hasEvent).toBe(true);
});
it("skips no-op completion events when command succeeds without output", async () => {
const tool = createExecTool({
allowBackground: true,
backgroundMs: 0,
notifyOnExit: true,
sessionKey: "agent:main:main",
});
const result = await tool.execute("call2", {
command: shortDelayCmd,
background: true,
});
expect(result.details.status).toBe("running");
const sessionId = (result.details as { sessionId: string }).sessionId;
const status = await waitForCompletion(sessionId);
expect(status).toBe("completed");
expect(peekSystemEvents("agent:main:main")).toEqual([]);
});
it("can re-enable no-op completion events via notifyOnExitEmptySuccess", async () => {
const tool = createExecTool({
allowBackground: true,
backgroundMs: 0,
notifyOnExit: true,
notifyOnExitEmptySuccess: true,
sessionKey: "agent:main:main",
});
const result = await tool.execute("call3", {
command: shortDelayCmd,
background: true,
});
expect(result.details.status).toBe("running");
const sessionId = (result.details as { sessionId: string }).sessionId;
const status = await waitForCompletion(sessionId);
expect(status).toBe("completed");
const events = peekSystemEvents("agent:main:main");
expect(events.length).toBeGreaterThan(0);
expect(events.some((event) => event.includes("Exec completed"))).toBe(true);
});
});
describe("exec PATH handling", () => {
let envSnapshot: ReturnType<typeof captureEnv>;
beforeEach(() => {
envSnapshot = captureEnv(["PATH", "SHELL"]);
if (!isWin && defaultShell) {
process.env.SHELL = defaultShell;
}
});
afterEach(() => {
envSnapshot.restore();
});
it("prepends configured path entries", async () => {
const basePath = isWin ? "C:\\Windows\\System32" : "/usr/bin";
const prepend = isWin ? ["C:\\custom\\bin", "C:\\oss\\bin"] : ["/custom/bin", "/opt/oss/bin"];
process.env.PATH = basePath;
const tool = createExecTool({ pathPrepend: prepend });
const result = await tool.execute("call1", {
command: isWin ? "Write-Output $env:PATH" : "echo $PATH",
});
const text = normalizeText(result.content.find((c) => c.type === "text")?.text);
expect(text).toBe([...prepend, basePath].join(path.delimiter));
});
});
describe("buildDockerExecArgs", () => {
it("prepends custom PATH after login shell sourcing to preserve both custom and system tools", () => {
const args = buildDockerExecArgs({
containerName: "test-container",
command: "echo hello",
env: {
PATH: "/custom/bin:/usr/local/bin:/usr/bin",
HOME: "/home/user",
},
tty: false,
});
const commandArg = args[args.length - 1];
expect(args).toContain("OPENCLAW_PREPEND_PATH=/custom/bin:/usr/local/bin:/usr/bin");
expect(commandArg).toContain('export PATH="${OPENCLAW_PREPEND_PATH}:$PATH"');
expect(commandArg).toContain("echo hello");
expect(commandArg).toBe(
'export PATH="${OPENCLAW_PREPEND_PATH}:$PATH"; unset OPENCLAW_PREPEND_PATH; echo hello',
);
});
it("does not interpolate PATH into the shell command", () => {
const injectedPath = "$(touch /tmp/openclaw-path-injection)";
const args = buildDockerExecArgs({
containerName: "test-container",
command: "echo hello",
env: {
PATH: injectedPath,
HOME: "/home/user",
},
tty: false,
});
const commandArg = args[args.length - 1];
expect(args).toContain(`OPENCLAW_PREPEND_PATH=${injectedPath}`);
expect(commandArg).not.toContain(injectedPath);
expect(commandArg).toContain("OPENCLAW_PREPEND_PATH");
});
it("does not add PATH export when PATH is not in env", () => {
const args = buildDockerExecArgs({
containerName: "test-container",
command: "echo hello",
env: {
HOME: "/home/user",
},
tty: false,
});
const commandArg = args[args.length - 1];
expect(commandArg).toBe("echo hello");
expect(commandArg).not.toContain("export PATH");
});
it("includes workdir flag when specified", () => {
const args = buildDockerExecArgs({
containerName: "test-container",
command: "pwd",
workdir: "/workspace",
env: { HOME: "/home/user" },
tty: false,
});
expect(args).toContain("-w");
expect(args).toContain("/workspace");
});
it("uses login shell for consistent environment", () => {
const args = buildDockerExecArgs({
containerName: "test-container",
command: "echo test",
env: { HOME: "/home/user" },
tty: false,
});
expect(args).toContain("sh");
expect(args).toContain("-lc");
});
it("includes tty flag when requested", () => {
const args = buildDockerExecArgs({
containerName: "test-container",
command: "bash",
env: { HOME: "/home/user" },
tty: true,
});
expect(args).toContain("-t");
});
});
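
The tests above pin the wrapper shape: the caller-supplied PATH travels as a `docker exec -e` assignment and is expanded only inside the container shell, so command substitution hidden in PATH never executes on the host. A rough sketch of that construction (a hypothetical helper matching the contract the tests assert, not the real `buildDockerExecArgs`):

```typescript
// Hypothetical sketch: pass the untrusted PATH via an env assignment and
// expand it inside the container shell, never interpolating it into the
// command string itself.
function wrapDockerCommand(
  command: string,
  envPath?: string,
): { envArgs: string[]; commandArg: string } {
  if (!envPath) {
    // No PATH in env: run the command unmodified, with no export prelude.
    return { envArgs: [], commandArg: command };
  }
  return {
    envArgs: ["-e", `OPENCLAW_PREPEND_PATH=${envPath}`],
    commandArg:
      'export PATH="${OPENCLAW_PREPEND_PATH}:$PATH"; unset OPENCLAW_PREPEND_PATH; ' +
      command,
  };
}

const wrapped = wrapDockerCommand("echo hello", "/custom/bin:/usr/bin");
console.log(wrapped.commandArg);
```

Because the literal string `/custom/bin:/usr/bin` appears only in `envArgs` and not in `commandArg`, an attacker-controlled PATH value like `$(touch /tmp/pwned)` is passed as data, not shell syntax.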

View File

@@ -1,83 +0,0 @@
import { beforeAll, beforeEach, describe, expect, it, vi } from "vitest";
import {
DEFAULT_APPROVAL_REQUEST_TIMEOUT_MS,
DEFAULT_APPROVAL_TIMEOUT_MS,
} from "./bash-tools.exec-runtime.js";
vi.mock("./tools/gateway.js", () => ({
callGatewayTool: vi.fn(),
}));
let callGatewayTool: typeof import("./tools/gateway.js").callGatewayTool;
let requestExecApprovalDecision: typeof import("./bash-tools.exec-approval-request.js").requestExecApprovalDecision;
describe("requestExecApprovalDecision", () => {
beforeAll(async () => {
({ callGatewayTool } = await import("./tools/gateway.js"));
({ requestExecApprovalDecision } = await import("./bash-tools.exec-approval-request.js"));
});
beforeEach(() => {
vi.mocked(callGatewayTool).mockReset();
});
it("returns string decisions", async () => {
vi.mocked(callGatewayTool).mockResolvedValue({ decision: "allow-once" });
const result = await requestExecApprovalDecision({
id: "approval-id",
command: "echo hi",
cwd: "/tmp",
host: "gateway",
security: "allowlist",
ask: "always",
agentId: "main",
resolvedPath: "/usr/bin/echo",
sessionKey: "session",
});
expect(result).toBe("allow-once");
expect(callGatewayTool).toHaveBeenCalledWith(
"exec.approval.request",
{ timeoutMs: DEFAULT_APPROVAL_REQUEST_TIMEOUT_MS },
{
id: "approval-id",
command: "echo hi",
cwd: "/tmp",
host: "gateway",
security: "allowlist",
ask: "always",
agentId: "main",
resolvedPath: "/usr/bin/echo",
sessionKey: "session",
timeoutMs: DEFAULT_APPROVAL_TIMEOUT_MS,
},
);
});
it("returns null for missing or non-string decisions", async () => {
vi.mocked(callGatewayTool).mockResolvedValueOnce({});
await expect(
requestExecApprovalDecision({
id: "approval-id",
command: "echo hi",
cwd: "/tmp",
host: "node",
security: "allowlist",
ask: "on-miss",
}),
).resolves.toBeNull();
vi.mocked(callGatewayTool).mockResolvedValueOnce({ decision: 123 });
await expect(
requestExecApprovalDecision({
id: "approval-id-2",
command: "echo hi",
cwd: "/tmp",
host: "node",
security: "allowlist",
ask: "on-miss",
}),
).resolves.toBeNull();
});
});

View File

@@ -1,44 +0,0 @@
import type { ExecAsk, ExecSecurity } from "../infra/exec-approvals.js";
import {
DEFAULT_APPROVAL_REQUEST_TIMEOUT_MS,
DEFAULT_APPROVAL_TIMEOUT_MS,
} from "./bash-tools.exec-runtime.js";
import { callGatewayTool } from "./tools/gateway.js";
export type RequestExecApprovalDecisionParams = {
id: string;
command: string;
cwd: string;
host: "gateway" | "node";
security: ExecSecurity;
ask: ExecAsk;
agentId?: string;
resolvedPath?: string;
sessionKey?: string;
};
export async function requestExecApprovalDecision(
params: RequestExecApprovalDecisionParams,
): Promise<string | null> {
const decisionResult = await callGatewayTool<{ decision: string }>(
"exec.approval.request",
{ timeoutMs: DEFAULT_APPROVAL_REQUEST_TIMEOUT_MS },
{
id: params.id,
command: params.command,
cwd: params.cwd,
host: params.host,
security: params.security,
ask: params.ask,
agentId: params.agentId,
resolvedPath: params.resolvedPath,
sessionKey: params.sessionKey,
timeoutMs: DEFAULT_APPROVAL_TIMEOUT_MS,
},
);
const decisionValue =
decisionResult && typeof decisionResult === "object"
? (decisionResult as { decision?: unknown }).decision
: undefined;
return typeof decisionValue === "string" ? decisionValue : null;
}

View File

@@ -1,339 +0,0 @@
import crypto from "node:crypto";
import type { AgentToolResult } from "@mariozechner/pi-agent-core";
import {
addAllowlistEntry,
type ExecAsk,
type ExecSecurity,
buildSafeBinsShellCommand,
buildSafeShellCommand,
evaluateShellAllowlist,
maxAsk,
minSecurity,
recordAllowlistUse,
requiresExecApproval,
resolveAllowAlwaysPatterns,
resolveExecApprovals,
} from "../infra/exec-approvals.js";
import { markBackgrounded, tail } from "./bash-process-registry.js";
import { requestExecApprovalDecision } from "./bash-tools.exec-approval-request.js";
import {
DEFAULT_APPROVAL_TIMEOUT_MS,
DEFAULT_NOTIFY_TAIL_CHARS,
createApprovalSlug,
emitExecSystemEvent,
normalizeNotifyOutput,
runExecProcess,
} from "./bash-tools.exec-runtime.js";
import type { ExecToolDetails } from "./bash-tools.exec-types.js";
export type ProcessGatewayAllowlistParams = {
command: string;
workdir: string;
env: Record<string, string>;
pty: boolean;
timeoutSec?: number;
defaultTimeoutSec: number;
security: ExecSecurity;
ask: ExecAsk;
safeBins: Set<string>;
agentId?: string;
sessionKey?: string;
scopeKey?: string;
warnings: string[];
notifySessionKey?: string;
approvalRunningNoticeMs: number;
maxOutput: number;
pendingMaxOutput: number;
trustedSafeBinDirs?: ReadonlySet<string>;
};
export type ProcessGatewayAllowlistResult = {
execCommandOverride?: string;
pendingResult?: AgentToolResult<ExecToolDetails>;
};
export async function processGatewayAllowlist(
params: ProcessGatewayAllowlistParams,
): Promise<ProcessGatewayAllowlistResult> {
const approvals = resolveExecApprovals(params.agentId, {
security: params.security,
ask: params.ask,
});
const hostSecurity = minSecurity(params.security, approvals.agent.security);
const hostAsk = maxAsk(params.ask, approvals.agent.ask);
const askFallback = approvals.agent.askFallback;
if (hostSecurity === "deny") {
throw new Error("exec denied: host=gateway security=deny");
}
const allowlistEval = evaluateShellAllowlist({
command: params.command,
allowlist: approvals.allowlist,
safeBins: params.safeBins,
cwd: params.workdir,
env: params.env,
platform: process.platform,
trustedSafeBinDirs: params.trustedSafeBinDirs,
});
const allowlistMatches = allowlistEval.allowlistMatches;
const analysisOk = allowlistEval.analysisOk;
const allowlistSatisfied =
hostSecurity === "allowlist" && analysisOk ? allowlistEval.allowlistSatisfied : false;
const hasHeredocSegment = allowlistEval.segments.some((segment) =>
segment.argv.some((token) => token.startsWith("<<")),
);
const requiresHeredocApproval =
hostSecurity === "allowlist" && analysisOk && allowlistSatisfied && hasHeredocSegment;
const requiresAsk =
requiresExecApproval({
ask: hostAsk,
security: hostSecurity,
analysisOk,
allowlistSatisfied,
}) || requiresHeredocApproval;
if (requiresHeredocApproval) {
params.warnings.push(
"Warning: heredoc execution requires explicit approval in allowlist mode.",
);
}
if (requiresAsk) {
const approvalId = crypto.randomUUID();
const approvalSlug = createApprovalSlug(approvalId);
const expiresAtMs = Date.now() + DEFAULT_APPROVAL_TIMEOUT_MS;
const contextKey = `exec:${approvalId}`;
const resolvedPath = allowlistEval.segments[0]?.resolution?.resolvedPath;
const noticeSeconds = Math.max(1, Math.round(params.approvalRunningNoticeMs / 1000));
const effectiveTimeout =
typeof params.timeoutSec === "number" ? params.timeoutSec : params.defaultTimeoutSec;
const warningText = params.warnings.length ? `${params.warnings.join("\n")}\n\n` : "";
void (async () => {
let decision: string | null = null;
try {
decision = await requestExecApprovalDecision({
id: approvalId,
command: params.command,
cwd: params.workdir,
host: "gateway",
security: hostSecurity,
ask: hostAsk,
agentId: params.agentId,
resolvedPath,
sessionKey: params.sessionKey,
});
} catch {
emitExecSystemEvent(
`Exec denied (gateway id=${approvalId}, approval-request-failed): ${params.command}`,
{
sessionKey: params.notifySessionKey,
contextKey,
},
);
return;
}
let approvedByAsk = false;
let deniedReason: string | null = null;
if (decision === "deny") {
deniedReason = "user-denied";
} else if (!decision) {
if (askFallback === "full") {
approvedByAsk = true;
} else if (askFallback === "allowlist") {
if (!analysisOk || !allowlistSatisfied) {
deniedReason = "approval-timeout (allowlist-miss)";
} else {
approvedByAsk = true;
}
} else {
deniedReason = "approval-timeout";
}
} else if (decision === "allow-once") {
approvedByAsk = true;
} else if (decision === "allow-always") {
approvedByAsk = true;
if (hostSecurity === "allowlist") {
const patterns = resolveAllowAlwaysPatterns({
segments: allowlistEval.segments,
cwd: params.workdir,
env: params.env,
platform: process.platform,
});
for (const pattern of patterns) {
if (pattern) {
addAllowlistEntry(approvals.file, params.agentId, pattern);
}
}
}
}
if (hostSecurity === "allowlist" && (!analysisOk || !allowlistSatisfied) && !approvedByAsk) {
deniedReason = deniedReason ?? "allowlist-miss";
}
if (deniedReason) {
emitExecSystemEvent(
`Exec denied (gateway id=${approvalId}, ${deniedReason}): ${params.command}`,
{
sessionKey: params.notifySessionKey,
contextKey,
},
);
return;
}
if (allowlistMatches.length > 0) {
const seen = new Set<string>();
for (const match of allowlistMatches) {
if (seen.has(match.pattern)) {
continue;
}
seen.add(match.pattern);
recordAllowlistUse(
approvals.file,
params.agentId,
match,
params.command,
resolvedPath ?? undefined,
);
}
}
let run: Awaited<ReturnType<typeof runExecProcess>> | null = null;
try {
run = await runExecProcess({
command: params.command,
workdir: params.workdir,
env: params.env,
sandbox: undefined,
containerWorkdir: null,
usePty: params.pty,
warnings: params.warnings,
maxOutput: params.maxOutput,
pendingMaxOutput: params.pendingMaxOutput,
notifyOnExit: false,
notifyOnExitEmptySuccess: false,
scopeKey: params.scopeKey,
sessionKey: params.notifySessionKey,
timeoutSec: effectiveTimeout,
});
} catch {
emitExecSystemEvent(
`Exec denied (gateway id=${approvalId}, spawn-failed): ${params.command}`,
{
sessionKey: params.notifySessionKey,
contextKey,
},
);
return;
}
markBackgrounded(run.session);
let runningTimer: NodeJS.Timeout | null = null;
if (params.approvalRunningNoticeMs > 0) {
runningTimer = setTimeout(() => {
emitExecSystemEvent(
`Exec running (gateway id=${approvalId}, session=${run?.session.id}, >${noticeSeconds}s): ${params.command}`,
{ sessionKey: params.notifySessionKey, contextKey },
);
}, params.approvalRunningNoticeMs);
}
const outcome = await run.promise;
if (runningTimer) {
clearTimeout(runningTimer);
}
const output = normalizeNotifyOutput(
tail(outcome.aggregated || "", DEFAULT_NOTIFY_TAIL_CHARS),
);
const exitLabel = outcome.timedOut ? "timeout" : `code ${outcome.exitCode ?? "?"}`;
const summary = output
? `Exec finished (gateway id=${approvalId}, session=${run.session.id}, ${exitLabel})\n${output}`
: `Exec finished (gateway id=${approvalId}, session=${run.session.id}, ${exitLabel})`;
emitExecSystemEvent(summary, { sessionKey: params.notifySessionKey, contextKey });
})();
return {
pendingResult: {
content: [
{
type: "text",
text:
`${warningText}Approval required (id ${approvalSlug}). ` +
"Approve to run; updates will arrive after completion.",
},
],
details: {
status: "approval-pending",
approvalId,
approvalSlug,
expiresAtMs,
host: "gateway",
command: params.command,
cwd: params.workdir,
},
},
};
}
if (hostSecurity === "allowlist" && (!analysisOk || !allowlistSatisfied)) {
throw new Error("exec denied: allowlist miss");
}
let execCommandOverride: string | undefined;
// If allowlist uses safeBins, sanitize only those stdin-only segments:
// disable glob/var expansion by forcing argv tokens to be literal via single-quoting.
if (
hostSecurity === "allowlist" &&
analysisOk &&
allowlistSatisfied &&
allowlistEval.segmentSatisfiedBy.some((by) => by === "safeBins")
) {
const safe = buildSafeBinsShellCommand({
command: params.command,
segments: allowlistEval.segments,
segmentSatisfiedBy: allowlistEval.segmentSatisfiedBy,
platform: process.platform,
});
if (!safe.ok || !safe.command) {
// Fallback: quote everything (safe, but may change glob behavior).
const fallback = buildSafeShellCommand({
command: params.command,
platform: process.platform,
});
if (!fallback.ok || !fallback.command) {
throw new Error(`exec denied: safeBins sanitize failed (${safe.reason ?? "unknown"})`);
}
params.warnings.push(
"Warning: safeBins hardening used fallback quoting due to parser mismatch.",
);
execCommandOverride = fallback.command;
} else {
params.warnings.push(
"Warning: safeBins hardening disabled glob/variable expansion for stdin-only segments.",
);
execCommandOverride = safe.command;
}
}
if (allowlistMatches.length > 0) {
const seen = new Set<string>();
for (const match of allowlistMatches) {
if (seen.has(match.pattern)) {
continue;
}
seen.add(match.pattern);
recordAllowlistUse(
approvals.file,
params.agentId,
match,
params.command,
allowlistEval.segments[0]?.resolution?.resolvedPath,
);
}
}
return { execCommandOverride };
}
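
The approval branch above reduces to a small decision table over the gateway's decision, the `askFallback` policy, and the allowlist state: an explicit deny always wins, an explicit allow always passes, and a timeout (null decision) falls back per policy. A hedged standalone condensation (type and function names here are illustrative):

```typescript
// Illustrative condensation of the approval outcome logic. Assumes the
// askFallback values seen above ("full", "allowlist") plus a strict default.
type Decision = "allow-once" | "allow-always" | "deny" | null;
type AskFallback = "full" | "allowlist" | "strict";

function resolveApprovalOutcome(opts: {
  decision: Decision;
  askFallback: AskFallback;
  allowlistSatisfied: boolean;
}): { approved: boolean; reason?: string } {
  const { decision, askFallback, allowlistSatisfied } = opts;
  if (decision === "deny") return { approved: false, reason: "user-denied" };
  if (decision === "allow-once" || decision === "allow-always") {
    return { approved: true };
  }
  // decision === null: the approval request timed out; apply the fallback.
  if (askFallback === "full") return { approved: true };
  if (askFallback === "allowlist") {
    return allowlistSatisfied
      ? { approved: true }
      : { approved: false, reason: "approval-timeout (allowlist-miss)" };
  }
  return { approved: false, reason: "approval-timeout" };
}

console.log(resolveApprovalOutcome({ decision: null, askFallback: "allowlist", allowlistSatisfied: true }));
```

The side effects in the real code (recording allowlist usage, persisting `allow-always` patterns) hang off this same table and are omitted here.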

View File

@@ -1,316 +0,0 @@
import crypto from "node:crypto";
import type { AgentToolResult } from "@mariozechner/pi-agent-core";
import {
type ExecApprovalsFile,
type ExecAsk,
type ExecSecurity,
evaluateShellAllowlist,
maxAsk,
minSecurity,
requiresExecApproval,
resolveExecApprovals,
resolveExecApprovalsFromFile,
} from "../infra/exec-approvals.js";
import { buildNodeShellCommand } from "../infra/node-shell.js";
import { requestExecApprovalDecision } from "./bash-tools.exec-approval-request.js";
import {
DEFAULT_APPROVAL_TIMEOUT_MS,
createApprovalSlug,
emitExecSystemEvent,
} from "./bash-tools.exec-runtime.js";
import type { ExecToolDetails } from "./bash-tools.exec-types.js";
import { callGatewayTool } from "./tools/gateway.js";
import { listNodes, resolveNodeIdFromList } from "./tools/nodes-utils.js";
export type ExecuteNodeHostCommandParams = {
command: string;
workdir: string;
env: Record<string, string>;
requestedEnv?: Record<string, string>;
requestedNode?: string;
boundNode?: string;
sessionKey?: string;
agentId?: string;
security: ExecSecurity;
ask: ExecAsk;
timeoutSec?: number;
defaultTimeoutSec: number;
approvalRunningNoticeMs: number;
warnings: string[];
notifySessionKey?: string;
trustedSafeBinDirs?: ReadonlySet<string>;
};
export async function executeNodeHostCommand(
params: ExecuteNodeHostCommandParams,
): Promise<AgentToolResult<ExecToolDetails>> {
const approvals = resolveExecApprovals(params.agentId, {
security: params.security,
ask: params.ask,
});
const hostSecurity = minSecurity(params.security, approvals.agent.security);
const hostAsk = maxAsk(params.ask, approvals.agent.ask);
const askFallback = approvals.agent.askFallback;
if (hostSecurity === "deny") {
throw new Error("exec denied: host=node security=deny");
}
if (params.boundNode && params.requestedNode && params.boundNode !== params.requestedNode) {
throw new Error(`exec node not allowed (bound to ${params.boundNode})`);
}
const nodeQuery = params.boundNode || params.requestedNode;
const nodes = await listNodes({});
if (nodes.length === 0) {
throw new Error(
"exec host=node requires a paired node (none available). This requires a companion app or node host.",
);
}
let nodeId: string;
try {
nodeId = resolveNodeIdFromList(nodes, nodeQuery, !nodeQuery);
} catch (err) {
if (!nodeQuery && String(err).includes("node required")) {
throw new Error(
"exec host=node requires a node id when multiple nodes are available (set tools.exec.node or exec.node).",
{ cause: err },
);
}
throw err;
}
const nodeInfo = nodes.find((entry) => entry.nodeId === nodeId);
const supportsSystemRun = Array.isArray(nodeInfo?.commands)
? nodeInfo?.commands?.includes("system.run")
: false;
if (!supportsSystemRun) {
throw new Error(
"exec host=node requires a node that supports system.run (companion app or node host).",
);
}
const argv = buildNodeShellCommand(params.command, nodeInfo?.platform);
const nodeEnv = params.requestedEnv ? { ...params.requestedEnv } : undefined;
const baseAllowlistEval = evaluateShellAllowlist({
command: params.command,
allowlist: [],
safeBins: new Set(),
cwd: params.workdir,
env: params.env,
platform: nodeInfo?.platform,
trustedSafeBinDirs: params.trustedSafeBinDirs,
});
let analysisOk = baseAllowlistEval.analysisOk;
let allowlistSatisfied = false;
if (hostAsk === "on-miss" && hostSecurity === "allowlist" && analysisOk) {
try {
const approvalsSnapshot = await callGatewayTool<{ file: string }>(
"exec.approvals.node.get",
{ timeoutMs: 10_000 },
{ nodeId },
);
const approvalsFile =
approvalsSnapshot && typeof approvalsSnapshot === "object"
? approvalsSnapshot.file
: undefined;
if (approvalsFile && typeof approvalsFile === "object") {
const resolved = resolveExecApprovalsFromFile({
file: approvalsFile as ExecApprovalsFile,
agentId: params.agentId,
overrides: { security: "allowlist" },
});
// Allowlist-only precheck; safe bins are node-local and may diverge.
const allowlistEval = evaluateShellAllowlist({
command: params.command,
allowlist: resolved.allowlist,
safeBins: new Set(),
cwd: params.workdir,
env: params.env,
platform: nodeInfo?.platform,
trustedSafeBinDirs: params.trustedSafeBinDirs,
});
allowlistSatisfied = allowlistEval.allowlistSatisfied;
analysisOk = allowlistEval.analysisOk;
}
} catch {
// Fall back to requiring approval if node approvals cannot be fetched.
}
}
const requiresAsk = requiresExecApproval({
ask: hostAsk,
security: hostSecurity,
analysisOk,
allowlistSatisfied,
});
const invokeTimeoutMs = Math.max(
10_000,
(typeof params.timeoutSec === "number" ? params.timeoutSec : params.defaultTimeoutSec) * 1000 +
5_000,
);
const buildInvokeParams = (
approvedByAsk: boolean,
approvalDecision: "allow-once" | "allow-always" | null,
runId?: string,
) =>
({
nodeId,
command: "system.run",
params: {
command: argv,
rawCommand: params.command,
cwd: params.workdir,
env: nodeEnv,
timeoutMs: typeof params.timeoutSec === "number" ? params.timeoutSec * 1000 : undefined,
agentId: params.agentId,
sessionKey: params.sessionKey,
approved: approvedByAsk,
approvalDecision: approvalDecision ?? undefined,
runId: runId ?? undefined,
},
idempotencyKey: crypto.randomUUID(),
}) satisfies Record<string, unknown>;
if (requiresAsk) {
const approvalId = crypto.randomUUID();
const approvalSlug = createApprovalSlug(approvalId);
const expiresAtMs = Date.now() + DEFAULT_APPROVAL_TIMEOUT_MS;
const contextKey = `exec:${approvalId}`;
const noticeSeconds = Math.max(1, Math.round(params.approvalRunningNoticeMs / 1000));
const warningText = params.warnings.length ? `${params.warnings.join("\n")}\n\n` : "";
void (async () => {
let decision: string | null = null;
try {
decision = await requestExecApprovalDecision({
id: approvalId,
command: params.command,
cwd: params.workdir,
host: "node",
security: hostSecurity,
ask: hostAsk,
agentId: params.agentId,
sessionKey: params.sessionKey,
});
} catch {
emitExecSystemEvent(
`Exec denied (node=${nodeId} id=${approvalId}, approval-request-failed): ${params.command}`,
{ sessionKey: params.notifySessionKey, contextKey },
);
return;
}
let approvedByAsk = false;
let approvalDecision: "allow-once" | "allow-always" | null = null;
let deniedReason: string | null = null;
if (decision === "deny") {
deniedReason = "user-denied";
} else if (!decision) {
if (askFallback === "full") {
approvedByAsk = true;
approvalDecision = "allow-once";
} else if (askFallback === "allowlist") {
// Defer allowlist enforcement to the node host.
} else {
deniedReason = "approval-timeout";
}
} else if (decision === "allow-once") {
approvedByAsk = true;
approvalDecision = "allow-once";
} else if (decision === "allow-always") {
approvedByAsk = true;
approvalDecision = "allow-always";
}
if (deniedReason) {
emitExecSystemEvent(
`Exec denied (node=${nodeId} id=${approvalId}, ${deniedReason}): ${params.command}`,
{
sessionKey: params.notifySessionKey,
contextKey,
},
);
return;
}
let runningTimer: NodeJS.Timeout | null = null;
if (params.approvalRunningNoticeMs > 0) {
runningTimer = setTimeout(() => {
emitExecSystemEvent(
`Exec running (node=${nodeId} id=${approvalId}, >${noticeSeconds}s): ${params.command}`,
{ sessionKey: params.notifySessionKey, contextKey },
);
}, params.approvalRunningNoticeMs);
}
try {
await callGatewayTool(
"node.invoke",
{ timeoutMs: invokeTimeoutMs },
buildInvokeParams(approvedByAsk, approvalDecision, approvalId),
);
} catch {
emitExecSystemEvent(
`Exec denied (node=${nodeId} id=${approvalId}, invoke-failed): ${params.command}`,
{
sessionKey: params.notifySessionKey,
contextKey,
},
);
} finally {
if (runningTimer) {
clearTimeout(runningTimer);
}
}
})();
return {
content: [
{
type: "text",
text:
`${warningText}Approval required (id ${approvalSlug}). ` +
"Approve to run; updates will arrive after completion.",
},
],
details: {
status: "approval-pending",
approvalId,
approvalSlug,
expiresAtMs,
host: "node",
command: params.command,
cwd: params.workdir,
nodeId,
},
};
}
const startedAt = Date.now();
const raw = await callGatewayTool(
"node.invoke",
{ timeoutMs: invokeTimeoutMs },
buildInvokeParams(false, null),
);
const payload =
raw && typeof raw === "object" ? (raw as { payload?: unknown }).payload : undefined;
const payloadObj =
payload && typeof payload === "object" ? (payload as Record<string, unknown>) : {};
const stdout = typeof payloadObj.stdout === "string" ? payloadObj.stdout : "";
const stderr = typeof payloadObj.stderr === "string" ? payloadObj.stderr : "";
const errorText = typeof payloadObj.error === "string" ? payloadObj.error : "";
const success = typeof payloadObj.success === "boolean" ? payloadObj.success : false;
const exitCode = typeof payloadObj.exitCode === "number" ? payloadObj.exitCode : null;
return {
content: [
{
type: "text",
text: stdout || stderr || errorText || "",
},
],
details: {
status: success ? "completed" : "failed",
exitCode,
durationMs: Date.now() - startedAt,
aggregated: [stdout, stderr, errorText].filter(Boolean).join("\n"),
cwd: params.workdir,
} satisfies ExecToolDetails,
};
}
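The non-approval path at the end of the removed file narrows the gateway response field by field rather than trusting its shape. A self-contained sketch of that defensive pattern — `parseRunPayload` and `RunResult` are illustrative names, not part of the codebase:

```typescript
// Hypothetical sketch mirroring the payload handling above: narrow an
// unknown gateway response to typed fields with safe defaults.
type RunResult = {
  stdout: string;
  stderr: string;
  success: boolean;
  exitCode: number | null;
};

function parseRunPayload(raw: unknown): RunResult {
  // Unwrap the optional `payload` envelope without assuming it exists.
  const payload =
    raw && typeof raw === "object" ? (raw as { payload?: unknown }).payload : undefined;
  const obj =
    payload && typeof payload === "object" ? (payload as Record<string, unknown>) : {};
  // Each field is type-checked individually; anything malformed falls back
  // to a neutral default instead of throwing.
  return {
    stdout: typeof obj.stdout === "string" ? obj.stdout : "",
    stderr: typeof obj.stderr === "string" ? obj.stderr : "",
    success: typeof obj.success === "boolean" ? obj.success : false,
    exitCode: typeof obj.exitCode === "number" ? obj.exitCode : null,
  };
}
```

This keeps one malformed node response from crashing the exec tool: a missing or mistyped field simply reads as empty output or a failed run.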

Some files were not shown because too many files have changed in this diff.