Merge 465f8aaa14c32b49d56dcc75833554b18f9d4600 into 598f1826d8b2bc969aace2c6459824737667218c

This commit is contained in:
Marc J Saint-jour 2026-03-20 23:55:19 -04:00 committed by GitHub
commit c2bfa84cd6
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
116 changed files with 7586 additions and 1599 deletions

.gitignore (vendored, 7 changes)

@ -102,6 +102,13 @@ USER.md
package-lock.json
.claude/
.agent/
!.agent/
!.agent/workflows/
!.agent/workflows/update_clawdbot.md
!.agents/
!.agents/skills/
!.agents/skills/parallels-discord-roundtrip/
!.agents/skills/parallels-discord-roundtrip/SKILL.md
skills-lock.json
# Local iOS signing overrides


@ -19,7 +19,7 @@
</p>
**OpenClaw** is a _personal AI assistant_ you run on your own devices.
It answers you on the channels you already use (WhatsApp, Telegram, Slack, Discord, Google Chat, Signal, iMessage, BlueBubbles, IRC, Microsoft Teams, Matrix, Feishu, LINE, Mattermost, Nextcloud Talk, Nostr, Synology Chat, Tlon, Twitch, Zalo, Zalo Personal, WebChat). It can speak and listen on macOS/iOS/Android, and can render a live Canvas you control. The Gateway is the control plane, not the product. OpenClaw can also plug into Cortex, giving you local, inspectable memory, previewable context, conflict handling, and coding sync.
If you want a personal, single-user assistant that feels local, fast, and always-on, this is it.
@ -30,6 +30,17 @@ OpenClaw Onboard guides you step by step through setting up the gateway, workspa
Works with npm, pnpm, or bun.
New install? Start here: [Getting started](https://docs.openclaw.ai/start/getting-started)
## Cortex Companion
This integration branch also pairs OpenClaw with [Cortex AI](https://github.com/Junebugg1214/Cortex-AI) for local memory, previewable context, conflict handling, and coding sync. If you want the assistant's memory to stay inspectable and versionable on your machine, Cortex is the companion repo for that flow.
```bash
openclaw memory cortex enable
/cortex preview
/cortex conflicts
/cortex sync coding
```
## Sponsors
| OpenAI | Vercel | Blacksmith | Convex |
@ -128,12 +139,44 @@ Run `openclaw doctor` to surface risky/misconfigured DM policies.
- **[Local-first Gateway](https://docs.openclaw.ai/gateway)** — single control plane for sessions, channels, tools, and events.
- **[Multi-channel inbox](https://docs.openclaw.ai/channels)** — WhatsApp, Telegram, Slack, Discord, Google Chat, Signal, BlueBubbles (iMessage), iMessage (legacy), IRC, Microsoft Teams, Matrix, Feishu, LINE, Mattermost, Nextcloud Talk, Nostr, Synology Chat, Tlon, Twitch, Zalo, Zalo Personal, WebChat, macOS, iOS/Android.
- **[Multi-agent routing](https://docs.openclaw.ai/gateway/configuration)** — route inbound channels/accounts/peers to isolated agents (workspaces + per-agent sessions).
- **Cortex memory** — local, inspectable memory with preview, conflicts, coding sync, and status visibility.
- **[Voice Wake](https://docs.openclaw.ai/nodes/voicewake) + [Talk Mode](https://docs.openclaw.ai/nodes/talk)** — wake words on macOS/iOS and continuous voice on Android (ElevenLabs + system TTS fallback).
- **[Live Canvas](https://docs.openclaw.ai/platforms/mac/canvas)** — agent-driven visual workspace with [A2UI](https://docs.openclaw.ai/platforms/mac/canvas#canvas-a2ui).
- **[First-class tools](https://docs.openclaw.ai/tools)** — browser, canvas, nodes, cron, sessions, and Discord/Slack actions.
- **[Companion apps](https://docs.openclaw.ai/platforms/macos)** — macOS menu bar app + iOS/Android [nodes](https://docs.openclaw.ai/nodes).
- **[Onboarding](https://docs.openclaw.ai/start/wizard) + [skills](https://docs.openclaw.ai/tools/skills)** — onboarding-driven setup with bundled/managed/workspace skills.
## Cortex Memory
OpenClaw can use Cortex as a local memory graph. In plain terms, that means the assistant has a notebook it can inspect instead of starting from zero every time.
With Cortex enabled you can:
- preview memory before it changes the answer
- inspect conflicts when two memories disagree
- ask why the assistant answered a certain way
- sync coding context into the tools you already use
- keep the memory file local and versionable in `.cortex/context.json`
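As an illustrative sketch only (the real schema belongs to Cortex and isn't documented here, so every field name below is an assumption), a `.cortex/context.json` entry might look like:

```json
{
  "version": 1,
  "memories": [
    {
      "id": "mem-2026-03-20-001",
      "text": "User prefers pnpm over npm",
      "source": "telegram",
      "updatedAt": "2026-03-20T23:55:19-04:00"
    }
  ]
}
```

Because it is a plain JSON file inside the workspace, it diffs, reviews, and versions like any other tracked file.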
Common commands:
```bash
openclaw memory cortex enable
/cortex preview
/cortex conflicts
/cortex sync coding
```
```mermaid
flowchart LR
You[You] -->|chat / /cortex commands| OpenClaw[OpenClaw]
OpenClaw -->|reads and writes| Cortex["Cortex local memory graph"]
Cortex -->|preview / why / conflicts / sync| OpenClaw
OpenClaw -->|answers with memory| Assistant[Assistant]
```
This makes the memory flow visible rather than hidden, which is the key difference from a typical chatbot.
## Star History
[![Star History Chart](https://api.star-history.com/svg?repos=openclaw/openclaw&type=date&legend=top-left)](https://www.star-history.com/#openclaw/openclaw&type=date&legend=top-left)


@ -304,6 +304,7 @@ public struct Snapshot: Codable, Sendable {
public let sessiondefaults: [String: AnyCodable]?
public let authmode: AnyCodable?
public let updateavailable: [String: AnyCodable]?
public let cortex: [String: AnyCodable]?
public init(
presence: [PresenceEntry],
@ -314,7 +315,8 @@ public struct Snapshot: Codable, Sendable {
statedir: String?,
sessiondefaults: [String: AnyCodable]?,
authmode: AnyCodable?,
updateavailable: [String: AnyCodable]?,
cortex: [String: AnyCodable]?)
{
self.presence = presence
self.health = health
@ -325,6 +327,7 @@ public struct Snapshot: Codable, Sendable {
self.sessiondefaults = sessiondefaults
self.authmode = authmode
self.updateavailable = updateavailable
self.cortex = cortex
}
private enum CodingKeys: String, CodingKey {
@ -337,6 +340,7 @@ public struct Snapshot: Codable, Sendable {
case sessiondefaults = "sessionDefaults"
case authmode = "authMode"
case updateavailable = "updateAvailable"
case cortex
}
}
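Because the new `cortex` field is optional, snapshots from gateways that predate it still decode. A minimal TypeScript sketch of the same additive-field idea (the payload shape here is an assumption for illustration, not the real gateway contract):

```typescript
// Wire names mirror the CodingKeys above (camelCase); `cortex` is optional,
// so payloads from older gateways that omit the key still parse cleanly.
type Snapshot = {
  authMode?: unknown;
  sessionDefaults?: Record<string, unknown>;
  updateAvailable?: Record<string, unknown>;
  cortex?: Record<string, unknown>; // new, optional field
};

function parseSnapshot(json: string): Snapshot {
  return JSON.parse(json) as Snapshot;
}

const withCortex = parseSnapshot('{"authMode":"token","cortex":{"enabled":true}}');
const legacy = parseSnapshot('{"authMode":"token"}'); // no cortex key: still valid
```

Keeping every new snapshot field optional is what lets mixed-version fleets interoperate during rollout.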


@ -304,6 +304,7 @@ public struct Snapshot: Codable, Sendable {
public let sessiondefaults: [String: AnyCodable]?
public let authmode: AnyCodable?
public let updateavailable: [String: AnyCodable]?
public let cortex: [String: AnyCodable]?
public init(
presence: [PresenceEntry],
@ -314,7 +315,8 @@ public struct Snapshot: Codable, Sendable {
statedir: String?,
sessiondefaults: [String: AnyCodable]?,
authmode: AnyCodable?,
updateavailable: [String: AnyCodable]?,
cortex: [String: AnyCodable]?)
{
self.presence = presence
self.health = health
@ -325,6 +327,7 @@ public struct Snapshot: Codable, Sendable {
self.sessiondefaults = sessiondefaults
self.authmode = authmode
self.updateavailable = updateavailable
self.cortex = cortex
}
private enum CodingKeys: String, CodingKey {
@ -337,6 +340,7 @@ public struct Snapshot: Codable, Sendable {
case sessiondefaults = "sessionDefaults"
case authmode = "authMode"
case updateavailable = "updateAvailable"
case cortex
}
}


@ -80,6 +80,12 @@ Text + native (when enabled):
- `/allowlist` (list/add/remove allowlist entries)
- `/approve <id> allow-once|allow-always|deny` (resolve exec approval prompts)
- `/context [list|detail|json]` (explain “context”; `detail` shows per-file + per-tool + per-skill + system prompt size)
- `/cortex preview|why|continuity|conflicts|resolve|sync coding|mode show|mode set|mode reset` (inspect, explain, and manage Cortex prompt context for the active conversation)
- After `/cortex mode set ...` or `/cortex mode reset`, use `/status` or `/cortex preview` to verify the active mode and source.
- `/cortex why` shows the injected Cortex context plus the active mode, source, graph, session, and channel.
- `/cortex continuity` explains which shared graph backs the current conversation so you can verify cross-channel continuity.
- `/cortex conflicts` lists memory conflicts and suggests the exact `/cortex resolve ...` command to run next.
- `/cortex sync coding` pushes the current graph into coding-tool context files (default: Claude Code, Cursor, Copilot).
- `/btw <question>` (ask an ephemeral side question about the current session without changing future session context; see [/tools/btw](/tools/btw))
- `/export-session [path]` (alias: `/export`) (export current session to HTML with full system prompt)
- `/whoami` (show your sender id; alias: `/id`)
@ -120,6 +126,9 @@ Text + native (when enabled):
Text-only:
- `/cortex preview|why|continuity|conflicts|resolve|sync coding|mode show|mode set|mode reset` (Cortex prompt preview, explanation, conflict resolution, coding-context sync, and per-conversation mode overrides)
- Recommended verification loop: `/cortex mode set minimal` then `/cortex preview` or `/status`.
- Continuity demo: run `/cortex continuity` in two channels bound to the same agent and compare the shared graph path.
- `/compact [instructions]` (see [/concepts/compaction](/concepts/compaction))
- `! <command>` (host-only; one at a time; use `!poll` + `!stop` for long-running jobs)
- `!poll` (check output / status; accepts optional `sessionId`; `/bash poll` also works)
@ -131,7 +140,6 @@ Notes:
- `/new <model>` accepts a model alias, `provider/model`, or a provider name (fuzzy match); if no match, the text is treated as the message body.
- For full provider usage breakdown, use `openclaw status --usage`.
- `/allowlist add|remove` requires `commands.config=true` and honors channel `configWrites`.
- In multi-account channels, config-targeted `/allowlist --account <id>` and `/config set channels.<provider>.accounts.<id>...` also honor the target account's `configWrites`.
- `/usage` controls the per-response usage footer; `/usage cost` prints a local cost summary from OpenClaw session logs.
- `/restart` is enabled by default; set `commands.restart: false` to disable it.
- Discord-only native command: `/vc join|leave|status` controls voice channels (requires `channels.discord.voice` and native commands; not available as text).
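Taken together, the conflict-handling commands above form a single loop. A sketch of that loop (the `<id>` and argument shapes are illustrative, not captured from a real session):

```
/cortex conflicts        # lists conflicts, each with a suggested resolve command
/cortex resolve <id> ... # run the suggested command for the conflict you pick
/cortex preview          # confirm the merged context before the next reply
/status                  # verify the active mode and source
```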


@ -37,7 +37,7 @@ x-i18n:
| **Pi 3B+** | 1GB | ⚠️ Slow | Usable, but sluggish |
| **Pi Zero 2 W** | 512MB | ❌ | Not recommended |
**Minimum spec:** 1 GB RAM, 1 core, 500 MB disk
**Recommended spec:** 2 GB+ RAM, 64-bit OS, 16 GB+ SD card (or a USB SSD)
## What you'll need


@ -103,7 +103,6 @@ describe("monitorDiscordProvider", () => {
}) ?? {}
);
};
expect(reconcileAcpThreadBindingsOnStartupMock).toHaveBeenCalledTimes(1);
const firstCall = reconcileAcpThreadBindingsOnStartupMock.mock.calls.at(0) as
@ -561,7 +560,6 @@ describe("monitorDiscordProvider", () => {
expect(clientHandleDeployRequestMock).toHaveBeenCalledTimes(1);
expect(getConstructedClientOptions().eventQueue?.listenerTimeout).toBe(120_000);
});
it("reports connected status on startup and shutdown", async () => {
const { monitorDiscordProvider } = await import("./provider.js");
const setStatus = vi.fn();


@ -629,12 +629,14 @@ export type MonitorSingleAccountParams = {
runtime?: RuntimeEnv;
abortSignal?: AbortSignal;
botOpenIdSource?: BotOpenIdSource;
fireAndForget?: boolean;
};
export async function monitorSingleAccount(params: MonitorSingleAccountParams): Promise<void> {
const { cfg, account, runtime, abortSignal } = params;
const { accountId } = account;
const log = runtime?.log ?? console.log;
const fireAndForget = params.fireAndForget ?? true;
const botOpenIdSource = params.botOpenIdSource ?? { kind: "fetch" };
const botIdentity =
@ -675,7 +677,7 @@ export async function monitorSingleAccount(params: MonitorSingleAccountParams):
accountId,
runtime,
chatHistories,
fireAndForget,
});
if (connectionMode === "webhook") {
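The `params.fireAndForget ?? true` default above is a backward-compatible opt-out: existing callers keep fire-and-forget dispatch, while tests can pass `false` to await completion deterministically. A minimal sketch of the pattern (names are illustrative, not the real API):

```typescript
// Hypothetical shape: an optional flag that defaults to the legacy behavior.
type MonitorParams = { accountId: string; fireAndForget?: boolean };

async function doWork(): Promise<string> {
  return "done";
}

async function monitor(params: MonitorParams): Promise<string | undefined> {
  const fireAndForget = params.fireAndForget ?? true; // default preserves old behavior
  if (fireAndForget) {
    void doWork(); // not awaited; errors must be handled inside doWork
    return undefined;
  }
  return await doWork(); // deterministic for tests
}
```

Defaulting the new flag to the old behavior means no existing call site has to change, which is why the tests below pass `fireAndForget: false` explicitly.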


@ -1,10 +1,11 @@
import { afterEach, beforeEach, describe, expect, it, vi } from "vitest";
import { createPluginRuntimeMock } from "../../../test/helpers/extensions/plugin-runtime-mock.js";
import type { ClawdbotConfig, PluginRuntime, RuntimeEnv } from "../runtime-api.js";
import type { ResolvedFeishuAccount } from "./types.js";
type MonitorSingleAccount = typeof import("./monitor.account.js").monitorSingleAccount;
type SetFeishuRuntime = typeof import("./runtime.js").setFeishuRuntime;
const createEventDispatcherMock = vi.hoisted(() => vi.fn());
const monitorWebSocketMock = vi.hoisted(() => vi.fn(async () => {}));
const monitorWebhookMock = vi.hoisted(() => vi.fn(async () => {}));
@ -25,6 +26,8 @@ const listFeishuThreadMessagesMock = vi.hoisted(() => vi.fn(async () => []));
let handlers: Record<string, (data: unknown) => Promise<void>> = {};
let lastRuntime: RuntimeEnv | null = null;
let monitorSingleAccount: MonitorSingleAccount;
let setFeishuRuntime: SetFeishuRuntime;
const originalStateDir = process.env.OPENCLAW_STATE_DIR;
vi.mock("./client.js", async () => {
@ -178,6 +181,7 @@ async function setupLifecycleMonitor() {
cfg: createLifecycleConfig(),
account: createLifecycleAccount(),
runtime: lastRuntime,
fireAndForget: false,
botOpenIdSource: {
kind: "prefetched",
botOpenId: "ou_bot_1",
@ -193,8 +197,15 @@ async function setupLifecycleMonitor() {
}
describe("Feishu ACP-init failure lifecycle", () => {
beforeEach(async () => {
vi.useRealTimers();
vi.resetModules();
vi.doUnmock("./bot.js");
vi.doUnmock("./card-action.js");
vi.doUnmock("./monitor.account.js");
vi.doUnmock("./runtime.js");
({ monitorSingleAccount } = await import("./monitor.account.js"));
({ setFeishuRuntime } = await import("./runtime.js"));
vi.clearAllMocks();
handlers = {};
lastRuntime = null;
@ -329,6 +340,11 @@ describe("Feishu ACP-init failure lifecycle", () => {
afterEach(() => {
vi.useRealTimers();
vi.resetModules();
vi.doUnmock("./bot.js");
vi.doUnmock("./card-action.js");
vi.doUnmock("./monitor.account.js");
vi.doUnmock("./runtime.js");
if (originalStateDir === undefined) {
delete process.env.OPENCLAW_STATE_DIR;
return;


@ -1,10 +1,11 @@
import { afterEach, beforeEach, describe, expect, it, vi } from "vitest";
import { createPluginRuntimeMock } from "../../../test/helpers/extensions/plugin-runtime-mock.js";
import type { ClawdbotConfig, PluginRuntime, RuntimeEnv } from "../runtime-api.js";
import type { ResolvedFeishuAccount } from "./types.js";
type MonitorSingleAccount = typeof import("./monitor.account.js").monitorSingleAccount;
type SetFeishuRuntime = typeof import("./runtime.js").setFeishuRuntime;
type BoundConversation = {
bindingId: string;
targetSessionKey: string;
@ -34,6 +35,8 @@ const sendMessageFeishuMock = vi.hoisted(() =>
let handlers: Record<string, (data: unknown) => Promise<void>> = {};
let lastRuntime: RuntimeEnv | null = null;
let monitorSingleAccount: MonitorSingleAccount;
let setFeishuRuntime: SetFeishuRuntime;
const originalStateDir = process.env.OPENCLAW_STATE_DIR;
vi.mock("./client.js", async () => {
@ -167,6 +170,7 @@ async function setupLifecycleMonitor() {
cfg: createLifecycleConfig(),
account: createLifecycleAccount(),
runtime: lastRuntime,
fireAndForget: false,
botOpenIdSource: {
kind: "prefetched",
botOpenId: "ou_bot_1",
@ -182,8 +186,15 @@ async function setupLifecycleMonitor() {
}
describe("Feishu bot-menu lifecycle", () => {
beforeEach(async () => {
vi.useRealTimers();
vi.resetModules();
vi.doUnmock("./bot.js");
vi.doUnmock("./card-action.js");
vi.doUnmock("./monitor.account.js");
vi.doUnmock("./runtime.js");
({ monitorSingleAccount } = await import("./monitor.account.js"));
({ setFeishuRuntime } = await import("./runtime.js"));
vi.clearAllMocks();
handlers = {};
lastRuntime = null;
@ -287,6 +298,11 @@ describe("Feishu bot-menu lifecycle", () => {
afterEach(() => {
vi.useRealTimers();
vi.resetModules();
vi.doUnmock("./bot.js");
vi.doUnmock("./card-action.js");
vi.doUnmock("./monitor.account.js");
vi.doUnmock("./runtime.js");
if (originalStateDir === undefined) {
delete process.env.OPENCLAW_STATE_DIR;
return;


@ -1,10 +1,11 @@
import { afterEach, beforeEach, describe, expect, it, vi } from "vitest";
import { createPluginRuntimeMock } from "../../../test/helpers/extensions/plugin-runtime-mock.js";
import type { ClawdbotConfig, PluginRuntime, RuntimeEnv } from "../runtime-api.js";
import type { ResolvedFeishuAccount } from "./types.js";
type MonitorSingleAccount = typeof import("./monitor.account.js").monitorSingleAccount;
type SetFeishuRuntime = typeof import("./runtime.js").setFeishuRuntime;
const createEventDispatcherMock = vi.hoisted(() => vi.fn());
const monitorWebSocketMock = vi.hoisted(() => vi.fn(async () => {}));
const monitorWebhookMock = vi.hoisted(() => vi.fn(async () => {}));
@ -31,6 +32,8 @@ const sendMessageFeishuMock = vi.hoisted(() =>
let handlersByAccount = new Map<string, Record<string, (data: unknown) => Promise<void>>>();
let runtimesByAccount = new Map<string, RuntimeEnv>();
let monitorSingleAccount: MonitorSingleAccount;
let setFeishuRuntime: SetFeishuRuntime;
const originalStateDir = process.env.OPENCLAW_STATE_DIR;
vi.mock("./client.js", async () => {
@ -197,6 +200,7 @@ async function setupLifecycleMonitor(accountId: "account-A" | "account-B") {
cfg: createLifecycleConfig(),
account: createLifecycleAccount(accountId),
runtime,
fireAndForget: false,
botOpenIdSource: {
kind: "prefetched",
botOpenId: "ou_bot_1",
@ -212,8 +216,15 @@ async function setupLifecycleMonitor(accountId: "account-A" | "account-B") {
}
describe("Feishu broadcast reply-once lifecycle", () => {
beforeEach(async () => {
vi.useRealTimers();
vi.resetModules();
vi.doUnmock("./bot.js");
vi.doUnmock("./card-action.js");
vi.doUnmock("./monitor.account.js");
vi.doUnmock("./runtime.js");
({ monitorSingleAccount } = await import("./monitor.account.js"));
({ setFeishuRuntime } = await import("./runtime.js"));
vi.clearAllMocks();
handlersByAccount = new Map();
runtimesByAccount = new Map();
@ -322,6 +333,11 @@ describe("Feishu broadcast reply-once lifecycle", () => {
afterEach(() => {
vi.useRealTimers();
vi.resetModules();
vi.doUnmock("./bot.js");
vi.doUnmock("./card-action.js");
vi.doUnmock("./monitor.account.js");
vi.doUnmock("./runtime.js");
if (originalStateDir === undefined) {
delete process.env.OPENCLAW_STATE_DIR;
return;


@ -3,10 +3,11 @@ import { createPluginRuntimeMock } from "../../../test/helpers/extensions/plugin
import type { ClawdbotConfig, PluginRuntime, RuntimeEnv } from "../runtime-api.js";
import { resetProcessedFeishuCardActionTokensForTests } from "./card-action.js";
import { createFeishuCardInteractionEnvelope } from "./card-interaction.js";
import type { ResolvedFeishuAccount } from "./types.js";
type MonitorSingleAccount = typeof import("./monitor.account.js").monitorSingleAccount;
type SetFeishuRuntime = typeof import("./runtime.js").setFeishuRuntime;
type BoundConversation = {
bindingId: string;
targetSessionKey: string;
@ -36,6 +37,8 @@ const listFeishuThreadMessagesMock = vi.hoisted(() => vi.fn(async () => []));
let handlers: Record<string, (data: unknown) => Promise<void>> = {};
let lastRuntime: RuntimeEnv | null = null;
let monitorSingleAccount: MonitorSingleAccount;
let setFeishuRuntime: SetFeishuRuntime;
const originalStateDir = process.env.OPENCLAW_STATE_DIR;
vi.mock("./client.js", async () => {
@ -194,6 +197,7 @@ async function setupLifecycleMonitor() {
cfg: createLifecycleConfig(),
account: createLifecycleAccount(),
runtime: lastRuntime,
fireAndForget: false,
botOpenIdSource: {
kind: "prefetched",
botOpenId: "ou_bot_1",
@ -209,8 +213,15 @@ async function setupLifecycleMonitor() {
}
describe("Feishu card-action lifecycle", () => {
beforeEach(async () => {
vi.useRealTimers();
vi.resetModules();
vi.doUnmock("./bot.js");
vi.doUnmock("./card-action.js");
vi.doUnmock("./monitor.account.js");
vi.doUnmock("./runtime.js");
({ monitorSingleAccount } = await import("./monitor.account.js"));
({ setFeishuRuntime } = await import("./runtime.js"));
vi.clearAllMocks();
handlers = {};
lastRuntime = null;
@ -315,6 +326,11 @@ describe("Feishu card-action lifecycle", () => {
afterEach(() => {
vi.useRealTimers();
vi.resetModules();
vi.doUnmock("./bot.js");
vi.doUnmock("./card-action.js");
vi.doUnmock("./monitor.account.js");
vi.doUnmock("./runtime.js");
resetProcessedFeishuCardActionTokensForTests();
if (originalStateDir === undefined) {
delete process.env.OPENCLAW_STATE_DIR;


@ -688,6 +688,8 @@ describe("Feishu inbound debounce regressions", () => {
await Promise.resolve();
await Promise.resolve();
await vi.advanceTimersByTimeAsync(25);
await Promise.resolve();
await Promise.resolve();
const dispatched = expectSingleDispatchedEvent();
expect(dispatched.message.message_id).toBe("om_new");
@ -740,6 +742,8 @@ describe("Feishu inbound debounce regressions", () => {
await vi.advanceTimersByTimeAsync(25);
await enqueueDebouncedMessage(onMessage, event);
await vi.advanceTimersByTimeAsync(25);
await Promise.resolve();
await Promise.resolve();
expect(handleFeishuMessageMock).toHaveBeenCalledTimes(1);
});
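The extra `await Promise.resolve()` pairs added above matter because advancing fake timers fires the timer callback, but any promise chain the callback starts still needs microtask turns to settle before assertions run. Each `await Promise.resolve()` yields exactly one turn, which this standalone sketch demonstrates:

```typescript
// Two chained .then callbacks need two microtask turns to both run.
async function flushTwoTurns(): Promise<number[]> {
  const observed: number[] = [];
  let stage = 0;
  Promise.resolve()
    .then(() => { stage = 1; })
    .then(() => { stage = 2; });
  await Promise.resolve(); // one turn elapsed: only the first .then has run
  observed.push(stage);
  await Promise.resolve(); // second turn: the second .then has now run
  observed.push(stage);
  return observed;
}
```

This is why a single flush was not enough for the debounce path: the dispatch goes through more than one chained promise.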


@ -1,10 +1,11 @@
import { afterEach, beforeEach, describe, expect, it, vi } from "vitest";
import { createPluginRuntimeMock } from "../../../test/helpers/extensions/plugin-runtime-mock.js";
import type { ClawdbotConfig, PluginRuntime, RuntimeEnv } from "../runtime-api.js";
import type { ResolvedFeishuAccount } from "./types.js";
type MonitorSingleAccount = typeof import("./monitor.account.js").monitorSingleAccount;
type SetFeishuRuntime = typeof import("./runtime.js").setFeishuRuntime;
const createEventDispatcherMock = vi.hoisted(() => vi.fn());
const monitorWebSocketMock = vi.hoisted(() => vi.fn(async () => {}));
const monitorWebhookMock = vi.hoisted(() => vi.fn(async () => {}));
@ -31,6 +32,8 @@ const sendMessageFeishuMock = vi.hoisted(() =>
let handlers: Record<string, (data: unknown) => Promise<void>> = {};
let lastRuntime: RuntimeEnv | null = null;
let monitorSingleAccount: MonitorSingleAccount;
let setFeishuRuntime: SetFeishuRuntime;
const originalStateDir = process.env.OPENCLAW_STATE_DIR;
vi.mock("./client.js", async () => {
@ -179,6 +182,7 @@ async function setupLifecycleMonitor() {
cfg: createLifecycleConfig(),
account: createLifecycleAccount(),
runtime: lastRuntime,
fireAndForget: false,
botOpenIdSource: {
kind: "prefetched",
botOpenId: "ou_bot_1",
@ -194,8 +198,15 @@ async function setupLifecycleMonitor() {
}
describe("Feishu reply-once lifecycle", () => {
beforeEach(async () => {
vi.useRealTimers();
vi.resetModules();
vi.doUnmock("./bot.js");
vi.doUnmock("./card-action.js");
vi.doUnmock("./monitor.account.js");
vi.doUnmock("./runtime.js");
({ monitorSingleAccount } = await import("./monitor.account.js"));
({ setFeishuRuntime } = await import("./runtime.js"));
vi.clearAllMocks();
handlers = {};
lastRuntime = null;
@ -299,6 +310,11 @@ describe("Feishu reply-once lifecycle", () => {
afterEach(() => {
vi.useRealTimers();
vi.resetModules();
vi.doUnmock("./bot.js");
vi.doUnmock("./card-action.js");
vi.doUnmock("./monitor.account.js");
vi.doUnmock("./runtime.js");
if (originalStateDir === undefined) {
delete process.env.OPENCLAW_STATE_DIR;
return;


@ -3,7 +3,6 @@
export * from "../../src/plugin-sdk/line.js";
export {
DEFAULT_ACCOUNT_ID,
formatDocsLink,
resolveExactLineGroupConfigKey,
setSetupChannelEnabled,


@ -1,5 +1,5 @@
import MarkdownIt from "markdown-it";
import { isAutoLinkedFileRef } from "openclaw/plugin-sdk/text-core";
const md = new MarkdownIt({
html: false,


@ -101,15 +101,19 @@ vi.mock("../../runtime.js", () => ({
}),
}));
vi.mock("../accounts.js", async (importOriginal) => {
const actual = await importOriginal<typeof import("../accounts.js")>();
return {
...actual,
resolveConfiguredMatrixBotUserIds: vi.fn(() => new Set<string>()),
resolveMatrixAccount: () => ({
accountId: "default",
config: {
dm: {},
},
}),
};
});
vi.mock("../active-client.js", () => ({
setActiveMatrixClient: hoisted.setActiveMatrixClient,


@ -105,7 +105,7 @@ function rewriteLocalPaths(value: string, roots: { workspace: string; agent: str
}
function normalizeScriptForLocalShell(script: string) {
const normalizedScript = script
.replace(
'stats=$(stat -c "%F|%h" -- "$1")',
`stats=$(python3 - "$1" <<'PY'
@ -125,6 +125,13 @@ kind = 'directory' if stat.S_ISDIR(st.st_mode) else 'regular file' if stat.S_ISR
print(f"{kind}|{st.st_size}|{int(st.st_mtime)}")
PY`,
);
const mutationHelperPattern = /python3 \/dev\/fd\/3 "\$@" 3<<'PY'\n([\s\S]*?)\nPY/;
const mutationHelperMatch = normalizedScript.match(mutationHelperPattern);
if (!mutationHelperMatch) {
return normalizedScript;
}
const helperSource = mutationHelperMatch[1]?.replaceAll("'", `'"'"'`) ?? "";
return normalizedScript.replace(mutationHelperPattern, `python3 -c '${helperSource}' "$@"`);
}
describe("openshell remote fs bridge", () => {


@ -2,9 +2,9 @@ import type { MarkdownTableMode } from "openclaw/plugin-sdk/config-runtime";
import {
chunkMarkdownIR,
markdownToIR,
renderMarkdownWithMarkers,
type MarkdownLinkSpan,
} from "openclaw/plugin-sdk/text-core";
// Escape special characters for Slack mrkdwn format.
// Preserve Slack's angle-bracket tokens so mentions and links stay intact.


@ -7,7 +7,7 @@ import {
normalizeHyphenSlug,
normalizeStringEntries,
normalizeStringEntriesLower,
} from "openclaw/plugin-sdk/text-core";
const SLACK_SLUG_CACHE_MAX = 512;
const slackSlugCache = new Map<string, string>();


@ -24,7 +24,7 @@ import { normalizeMainKey } from "openclaw/plugin-sdk/routing";
import { warn } from "openclaw/plugin-sdk/runtime-env";
import { createNonExitingRuntime, type RuntimeEnv } from "openclaw/plugin-sdk/runtime-env";
import { normalizeResolvedSecretInputString } from "openclaw/plugin-sdk/secret-input";
import { normalizeStringEntries } from "openclaw/plugin-sdk/text-core";
import { resolveSlackAccount } from "../accounts.js";
import { resolveSlackWebClientOptions } from "../client.js";
import { normalizeSlackWebhookPath, registerSlackHttpHandler } from "../http/index.js";


@ -11,7 +11,7 @@ import {
} from "openclaw/plugin-sdk/config-runtime";
import type { ReplyPayload } from "openclaw/plugin-sdk/reply-runtime";
import { danger, logVerbose } from "openclaw/plugin-sdk/runtime-env";
import { chunkItems } from "openclaw/plugin-sdk/text-core";
import type { ResolvedSlackAccount } from "../accounts.js";
import { truncateSlackText } from "../truncate.js";
import { resolveSlackAllowListMatch, resolveSlackUserAllowed } from "./allow-list.js";


@ -1,5 +1,5 @@
import type { BaseProbeResult } from "openclaw/plugin-sdk/channel-contract";
import { withTimeout } from "openclaw/plugin-sdk/text-core";
import { createSlackWebClient } from "./client.js";
export type SlackProbe = BaseProbeResult & {


@ -1,5 +1,5 @@
import type { WebClient } from "@slack/web-api";
import { isRecord } from "openclaw/plugin-sdk/text-core";
import { createSlackWebClient } from "./client.js";
export type SlackScopesResult = {


@ -1,4 +1,4 @@
import { resolveGlobalMap } from "openclaw/plugin-sdk/text-core";
/**
* In-memory cache of Slack threads the bot has participated in.


@ -1,5 +1,4 @@
import { fetchWithTimeout, isRecord } from "openclaw/plugin-sdk/text-core";
import type {
AuditTelegramGroupMembershipParams,
TelegramGroupMembershipAudit,


@ -22,12 +22,6 @@ type DispatchReplyWithBufferedBlockDispatcherResult = Awaited<
>;
type DispatchReplyHarnessParams = Parameters<DispatchReplyWithBufferedBlockDispatcherFn>[0];
const { sessionStorePath } = vi.hoisted(() => ({
sessionStorePath: `/tmp/openclaw-telegram-${process.pid}-${process.env.VITEST_POOL_ID ?? "0"}.json`,
}));
@ -40,13 +34,6 @@ export function getLoadWebMediaMock(): AnyMock {
return loadWebMedia;
}
const { loadConfig, resolveStorePathMock } = vi.hoisted(
(): {
loadConfig: MockFn<LoadConfigFn>;
@ -109,7 +96,94 @@ vi.doMock("openclaw/plugin-sdk/conversation-runtime.js", async (importOriginal)
};
});
// All spy variables used inside vi.mock("grammy", ...) must be created via
// vi.hoisted() so they are available when the hoisted factory runs, regardless
// of module evaluation order across different test files.
const grammySpies = vi.hoisted(() => ({
useSpy: vi.fn() as MockFn<(arg: unknown) => void>,
middlewareUseSpy: vi.fn() as AnyMock,
onSpy: vi.fn() as AnyMock,
stopSpy: vi.fn() as AnyMock,
commandSpy: vi.fn() as AnyMock,
botCtorSpy: vi.fn((_: string, __?: { client?: { fetch?: typeof fetch } }) => undefined),
answerCallbackQuerySpy: vi.fn(async () => undefined) as AnyAsyncMock,
sendChatActionSpy: vi.fn() as AnyMock,
editMessageTextSpy: vi.fn(async () => ({ message_id: 88 })) as AnyAsyncMock,
editMessageReplyMarkupSpy: vi.fn(async () => ({ message_id: 88 })) as AnyAsyncMock,
sendMessageDraftSpy: vi.fn(async () => true) as AnyAsyncMock,
setMessageReactionSpy: vi.fn(async () => undefined) as AnyAsyncMock,
setMyCommandsSpy: vi.fn(async () => undefined) as AnyAsyncMock,
getMeSpy: vi.fn(async () => ({
username: "openclaw_bot",
has_topics_enabled: true,
})) as AnyAsyncMock,
sendMessageSpy: vi.fn(async () => ({ message_id: 77 })) as AnyAsyncMock,
sendAnimationSpy: vi.fn(async () => ({ message_id: 78 })) as AnyAsyncMock,
sendPhotoSpy: vi.fn(async () => ({ message_id: 79 })) as AnyAsyncMock,
getFileSpy: vi.fn(async () => ({ file_path: "media/file.jpg" })) as AnyAsyncMock,
}));
export const useSpy: MockFn<(arg: unknown) => void> = grammySpies.useSpy;
export const middlewareUseSpy: AnyMock = grammySpies.middlewareUseSpy;
export const onSpy: AnyMock = grammySpies.onSpy;
export const stopSpy: AnyMock = grammySpies.stopSpy;
export const commandSpy: AnyMock = grammySpies.commandSpy;
export const botCtorSpy: MockFn<
(token: string, options?: { client?: { fetch?: typeof fetch } }) => void
> = grammySpies.botCtorSpy;
export const answerCallbackQuerySpy: AnyAsyncMock = grammySpies.answerCallbackQuerySpy;
export const sendChatActionSpy: AnyMock = grammySpies.sendChatActionSpy;
export const editMessageTextSpy: AnyAsyncMock = grammySpies.editMessageTextSpy;
export const editMessageReplyMarkupSpy: AnyAsyncMock = grammySpies.editMessageReplyMarkupSpy;
export const sendMessageDraftSpy: AnyAsyncMock = grammySpies.sendMessageDraftSpy;
export const setMessageReactionSpy: AnyAsyncMock = grammySpies.setMessageReactionSpy;
export const setMyCommandsSpy: AnyAsyncMock = grammySpies.setMyCommandsSpy;
export const getMeSpy: AnyAsyncMock = grammySpies.getMeSpy;
export const sendMessageSpy: AnyAsyncMock = grammySpies.sendMessageSpy;
export const sendAnimationSpy: AnyAsyncMock = grammySpies.sendAnimationSpy;
export const sendPhotoSpy: AnyAsyncMock = grammySpies.sendPhotoSpy;
export const getFileSpy: AnyAsyncMock = grammySpies.getFileSpy;
vi.mock("grammy", () => ({
Bot: class {
api = {
config: { use: grammySpies.useSpy },
answerCallbackQuery: grammySpies.answerCallbackQuerySpy,
sendChatAction: grammySpies.sendChatActionSpy,
editMessageText: grammySpies.editMessageTextSpy,
editMessageReplyMarkup: grammySpies.editMessageReplyMarkupSpy,
sendMessageDraft: grammySpies.sendMessageDraftSpy,
setMessageReaction: grammySpies.setMessageReactionSpy,
setMyCommands: grammySpies.setMyCommandsSpy,
getMe: grammySpies.getMeSpy,
sendMessage: grammySpies.sendMessageSpy,
sendAnimation: grammySpies.sendAnimationSpy,
sendPhoto: grammySpies.sendPhotoSpy,
getFile: grammySpies.getFileSpy,
};
use = grammySpies.middlewareUseSpy;
on = grammySpies.onSpy;
stop = grammySpies.stopSpy;
command = grammySpies.commandSpy;
catch = vi.fn();
constructor(
public token: string,
public options?: { client?: { fetch?: typeof fetch } },
) {
(grammySpies.botCtorSpy as unknown as (token: string, options?: unknown) => void)(
token,
options,
);
}
},
InputFile: class {},
HttpError: class MockHttpError extends Error {},
GrammyError: class MockGrammyError extends Error {},
API_CONSTANTS: { DEFAULT_UPDATE_TYPES: [] },
webhookCallback: vi.fn(),
}));
const skillCommandsHoisted = vi.hoisted(() => ({
listSkillCommandsForAgents: vi.fn(() => []),
}));
const modelProviderDataHoisted = vi.hoisted(() => ({
@@ -127,6 +201,9 @@ const replySpyHoisted = vi.hoisted(() => ({
) => Promise<ReplyPayload | ReplyPayload[] | undefined>
>,
}));
export const listSkillCommandsForAgents = skillCommandsHoisted.listSkillCommandsForAgents;
const buildModelsProviderData = modelProviderDataHoisted.buildModelsProviderData;
export const replySpy = replySpyHoisted.replySpy;
async function dispatchHarnessReplies(
params: DispatchReplyHarnessParams,
@@ -180,9 +257,6 @@ const dispatchReplyHoisted = vi.hoisted(() => ({
}),
),
}));
export const listSkillCommandsForAgents = skillCommandListHoisted.listSkillCommandsForAgents;
const buildModelsProviderData = modelProviderDataHoisted.buildModelsProviderData;
export const replySpy = replySpyHoisted.replySpy;
export const dispatchReplyWithBufferedBlockDispatcher =
dispatchReplyHoisted.dispatchReplyWithBufferedBlockDispatcher;
@@ -234,7 +308,7 @@ vi.doMock("openclaw/plugin-sdk/command-auth", async (importOriginal) => {
const actual = await importOriginal<typeof import("openclaw/plugin-sdk/command-auth")>();
return {
...actual,
listSkillCommandsForAgents: skillCommandListHoisted.listSkillCommandsForAgents,
listSkillCommandsForAgents: skillCommandsHoisted.listSkillCommandsForAgents,
buildModelsProviderData,
};
});
@@ -242,7 +316,7 @@ vi.doMock("openclaw/plugin-sdk/command-auth.js", async (importOriginal) => {
const actual = await importOriginal<typeof import("openclaw/plugin-sdk/command-auth")>();
return {
...actual,
listSkillCommandsForAgents: skillCommandListHoisted.listSkillCommandsForAgents,
listSkillCommandsForAgents: skillCommandsHoisted.listSkillCommandsForAgents,
buildModelsProviderData,
};
});
@@ -250,20 +324,24 @@ vi.doMock("openclaw/plugin-sdk/reply-runtime", async (importOriginal) => {
const actual = await importOriginal<typeof import("openclaw/plugin-sdk/reply-runtime")>();
return {
...actual,
listSkillCommandsForAgents: skillCommandsHoisted.listSkillCommandsForAgents,
getReplyFromConfig: replySpyHoisted.replySpy,
__replySpy: replySpyHoisted.replySpy,
dispatchReplyWithBufferedBlockDispatcher:
dispatchReplyHoisted.dispatchReplyWithBufferedBlockDispatcher,
buildModelsProviderData: modelProviderDataHoisted.buildModelsProviderData,
};
});
vi.doMock("openclaw/plugin-sdk/reply-runtime.js", async (importOriginal) => {
const actual = await importOriginal<typeof import("openclaw/plugin-sdk/reply-runtime")>();
return {
...actual,
listSkillCommandsForAgents: skillCommandsHoisted.listSkillCommandsForAgents,
getReplyFromConfig: replySpyHoisted.replySpy,
__replySpy: replySpyHoisted.replySpy,
dispatchReplyWithBufferedBlockDispatcher:
dispatchReplyHoisted.dispatchReplyWithBufferedBlockDispatcher,
buildModelsProviderData: modelProviderDataHoisted.buildModelsProviderData,
};
});
@@ -292,54 +370,6 @@ vi.doMock("./sent-message-cache.js", () => ({
clearSentMessageCache: vi.fn(),
}));
// All spy variables used inside vi.mock("grammy", ...) must be created via
// vi.hoisted() so they are available when the hoisted factory runs, regardless
// of module evaluation order across different test files.
const grammySpies = vi.hoisted(() => ({
useSpy: vi.fn() as MockFn<(arg: unknown) => void>,
middlewareUseSpy: vi.fn() as AnyMock,
onSpy: vi.fn() as AnyMock,
stopSpy: vi.fn() as AnyMock,
commandSpy: vi.fn() as AnyMock,
botCtorSpy: vi.fn((_: string, __?: { client?: { fetch?: typeof fetch } }) => undefined),
answerCallbackQuerySpy: vi.fn(async () => undefined) as AnyAsyncMock,
sendChatActionSpy: vi.fn() as AnyMock,
editMessageTextSpy: vi.fn(async () => ({ message_id: 88 })) as AnyAsyncMock,
editMessageReplyMarkupSpy: vi.fn(async () => ({ message_id: 88 })) as AnyAsyncMock,
sendMessageDraftSpy: vi.fn(async () => true) as AnyAsyncMock,
setMessageReactionSpy: vi.fn(async () => undefined) as AnyAsyncMock,
setMyCommandsSpy: vi.fn(async () => undefined) as AnyAsyncMock,
getMeSpy: vi.fn(async () => ({
username: "openclaw_bot",
has_topics_enabled: true,
})) as AnyAsyncMock,
sendMessageSpy: vi.fn(async () => ({ message_id: 77 })) as AnyAsyncMock,
sendAnimationSpy: vi.fn(async () => ({ message_id: 78 })) as AnyAsyncMock,
sendPhotoSpy: vi.fn(async () => ({ message_id: 79 })) as AnyAsyncMock,
getFileSpy: vi.fn(async () => ({ file_path: "media/file.jpg" })) as AnyAsyncMock,
}));
export const useSpy: MockFn<(arg: unknown) => void> = grammySpies.useSpy;
export const middlewareUseSpy: AnyMock = grammySpies.middlewareUseSpy;
export const onSpy: AnyMock = grammySpies.onSpy;
export const stopSpy: AnyMock = grammySpies.stopSpy;
export const commandSpy: AnyMock = grammySpies.commandSpy;
export const botCtorSpy: MockFn<
(token: string, options?: { client?: { fetch?: typeof fetch } }) => void
> = grammySpies.botCtorSpy;
export const answerCallbackQuerySpy: AnyAsyncMock = grammySpies.answerCallbackQuerySpy;
export const sendChatActionSpy: AnyMock = grammySpies.sendChatActionSpy;
export const editMessageTextSpy: AnyAsyncMock = grammySpies.editMessageTextSpy;
export const editMessageReplyMarkupSpy: AnyAsyncMock = grammySpies.editMessageReplyMarkupSpy;
export const sendMessageDraftSpy: AnyAsyncMock = grammySpies.sendMessageDraftSpy;
export const setMessageReactionSpy: AnyAsyncMock = grammySpies.setMessageReactionSpy;
export const setMyCommandsSpy: AnyAsyncMock = grammySpies.setMyCommandsSpy;
export const getMeSpy: AnyAsyncMock = grammySpies.getMeSpy;
export const sendMessageSpy: AnyAsyncMock = grammySpies.sendMessageSpy;
export const sendAnimationSpy: AnyAsyncMock = grammySpies.sendAnimationSpy;
export const sendPhotoSpy: AnyAsyncMock = grammySpies.sendPhotoSpy;
export const getFileSpy: AnyAsyncMock = grammySpies.getFileSpy;
const runnerHoisted = vi.hoisted(() => ({
sequentializeMiddleware: vi.fn(async (_ctx: unknown, next?: () => Promise<void>) => {
if (typeof next === "function") {
@@ -413,6 +443,15 @@ export const telegramBotDepsForTest: TelegramBotDeps = {
};
vi.doMock("./bot.runtime.js", () => telegramBotRuntimeForTest);
vi.mock("@grammyjs/runner", () => ({
sequentialize: (keyFn: (ctx: unknown) => string) => {
sequentializeKey = keyFn;
return runnerHoisted.sequentializeSpy();
},
}));
vi.mock("@grammyjs/transformer-throttler", () => ({
apiThrottler: () => runnerHoisted.throttlerSpy(),
}));
export const getOnHandler = (event: string) => {
const handler = onSpy.mock.calls.find((call) => call[0] === event)?.[1];
@@ -487,8 +526,6 @@ beforeEach(() => {
resetInboundDedupe();
loadConfig.mockReset();
loadConfig.mockReturnValue(DEFAULT_TELEGRAM_TEST_CONFIG);
resolveStorePathMock.mockReset();
resolveStorePathMock.mockImplementation((storePath?: string) => storePath ?? sessionStorePath);
loadWebMedia.mockReset();
readChannelAllowFromStore.mockReset();
readChannelAllowFromStore.mockResolvedValue([]);
@@ -499,7 +536,7 @@ beforeEach(() => {
stopSpy.mockReset();
useSpy.mockReset();
replySpy.mockReset();
replySpy.mockImplementation(async (_ctx: MsgContext, opts?: GetReplyOptions) => {
replySpy.mockImplementation(async (_ctx, opts) => {
await opts?.onReplyStart?.();
return undefined;
});


@@ -1,6 +1,6 @@
import type { Bot } from "grammy";
import { createFinalizableDraftLifecycle } from "openclaw/plugin-sdk/channel-lifecycle";
import { resolveGlobalSingleton } from "openclaw/plugin-sdk/text-runtime";
import { resolveGlobalSingleton } from "openclaw/plugin-sdk/text-core";
import { buildTelegramThreadParams, type TelegramThreadSpec } from "./bot/helpers.js";
import { isSafeToRetrySendError, isTelegramClientRejection } from "./network-errors.js";


@@ -4,10 +4,10 @@ import {
FILE_REF_EXTENSIONS_WITH_TLD,
isAutoLinkedFileRef,
markdownToIR,
renderMarkdownWithMarkers,
type MarkdownLinkSpan,
type MarkdownIR,
} from "openclaw/plugin-sdk/text-runtime";
import { renderMarkdownWithMarkers } from "openclaw/plugin-sdk/text-runtime";
} from "openclaw/plugin-sdk/text-core";
export type TelegramFormattedChunk = {
html: string;


@@ -1,5 +1,5 @@
import type { BaseProbeResult } from "openclaw/plugin-sdk/channel-contract";
import { fetchWithTimeout } from "openclaw/plugin-sdk/text-runtime";
import { fetchWithTimeout } from "openclaw/plugin-sdk/text-core";
import type { TelegramNetworkConfig } from "../runtime-api.js";
import { resolveTelegramFetch } from "./fetch.js";
import { makeProxyFetch } from "./proxy.js";

View File

@@ -3,7 +3,7 @@ import {
resolveReactionLevel,
type ReactionLevel,
type ResolvedReactionLevel as BaseResolvedReactionLevel,
} from "openclaw/plugin-sdk/text-runtime";
} from "openclaw/plugin-sdk/text-core";
import { resolveTelegramAccount } from "./accounts.js";
export type TelegramReactionLevel = ReactionLevel;


@@ -1,7 +1,10 @@
import { formatReasoningMessage } from "openclaw/plugin-sdk/agent-runtime";
import type { ReplyPayload } from "openclaw/plugin-sdk/reply-runtime";
import { findCodeRegions, isInsideCode } from "openclaw/plugin-sdk/text-runtime";
import { stripReasoningTagsFromText } from "openclaw/plugin-sdk/text-runtime";
import {
findCodeRegions,
isInsideCode,
stripReasoningTagsFromText,
} from "openclaw/plugin-sdk/text-core";
const REASONING_MESSAGE_PREFIX = "Reasoning:\n";
const REASONING_TAG_PREFIXES = [


@@ -18,7 +18,7 @@ import { isGifMedia, kindFromMime } from "openclaw/plugin-sdk/media-runtime";
import { normalizePollInput, type PollInput } from "openclaw/plugin-sdk/media-runtime";
import { logVerbose } from "openclaw/plugin-sdk/runtime-env";
import { createSubsystemLogger } from "openclaw/plugin-sdk/runtime-env";
import { redactSensitiveText } from "openclaw/plugin-sdk/text-runtime";
import { redactSensitiveText } from "openclaw/plugin-sdk/text-core";
import { loadWebMedia } from "openclaw/plugin-sdk/web-media";
import { type ResolvedTelegramAccount, resolveTelegramAccount } from "./accounts.js";
import { withTelegramApiErrorLogging } from "./api-logging.js";


@@ -1,4 +1,4 @@
import { resolveGlobalMap } from "openclaw/plugin-sdk/text-runtime";
import { resolveGlobalMap } from "openclaw/plugin-sdk/text-core";
/**
* In-memory cache of sent message IDs per chat.


@@ -14,7 +14,7 @@ import { writeJsonAtomic } from "openclaw/plugin-sdk/infra-runtime";
import { normalizeAccountId } from "openclaw/plugin-sdk/routing";
import { logVerbose } from "openclaw/plugin-sdk/runtime-env";
import { resolveStateDir } from "openclaw/plugin-sdk/state-paths";
import { resolveGlobalSingleton } from "openclaw/plugin-sdk/text-runtime";
import { resolveGlobalSingleton } from "openclaw/plugin-sdk/text-core";
const DEFAULT_THREAD_BINDING_IDLE_TIMEOUT_MS = 24 * 60 * 60 * 1000;
const DEFAULT_THREAD_BINDING_MAX_AGE_MS = 0;
@ -363,6 +363,14 @@ function persistBindingsSafely(params: {
});
}
async function flushBindingsPersistence(): Promise<void> {
const pending = [...PERSIST_QUEUE_BY_ACCOUNT_ID.values()];
if (pending.length === 0) {
return;
}
await Promise.allSettled(pending);
PERSIST_QUEUE_BY_ACCOUNT_ID.clear();
}
function normalizeTimestampMs(raw: unknown): number {
if (typeof raw !== "number" || !Number.isFinite(raw)) {
return Date.now();
@@ -802,9 +810,9 @@ export const __testing = {
for (const manager of MANAGERS_BY_ACCOUNT_ID.values()) {
manager.stop();
}
await Promise.allSettled(PERSIST_QUEUE_BY_ACCOUNT_ID.values());
PERSIST_QUEUE_BY_ACCOUNT_ID.clear();
await flushBindingsPersistence();
MANAGERS_BY_ACCOUNT_ID.clear();
BINDINGS_BY_ACCOUNT_CONVERSATION.clear();
PERSIST_QUEUE_BY_ACCOUNT_ID.clear();
},
};


@@ -1,5 +1,7 @@
import fsSync from "node:fs";
import fs from "node:fs/promises";
import os from "node:os";
import path from "node:path";
import { vi } from "vitest";
import type { MockBaileysSocket } from "../../../test/mocks/baileys.js";
import { createMockBaileys } from "../../../test/mocks/baileys.js";
@@ -32,6 +34,42 @@ export function resetLoadConfigMock() {
(globalThis as Record<symbol, unknown>)[CONFIG_KEY] = () => DEFAULT_CONFIG;
}
function normalizeAgentIdForStorePath(value: string | undefined): string {
const trimmed = (value ?? "").trim();
if (!trimmed) {
return "main";
}
return (
trimmed
.toLowerCase()
.replace(/[^a-z0-9_-]+/g, "-")
.replace(/^-+/, "")
.replace(/-+$/, "")
.slice(0, 64) || "main"
);
}
function resolveStorePathFallback(store?: string, opts?: { agentId?: string }): string {
const agentId = normalizeAgentIdForStorePath(opts?.agentId);
if (!store) {
return path.resolve(
process.env.HOME ?? os.homedir(),
".openclaw",
"agents",
agentId,
"sessions",
"sessions.json",
);
}
if (store.includes("{agentId}")) {
return path.resolve(store.replaceAll("{agentId}", agentId));
}
if (store.startsWith("~")) {
return path.resolve(path.join(process.env.HOME ?? os.homedir(), store.slice(1)));
}
return path.resolve(store);
}
vi.mock("openclaw/plugin-sdk/config-runtime", async (importOriginal) => {
const actual = await importOriginal<typeof import("openclaw/plugin-sdk/config-runtime")>();
const mockModule = Object.create(null) as Record<string, unknown>;
@@ -92,7 +130,7 @@ vi.mock("openclaw/plugin-sdk/config-runtime", async (importOriginal) => {
configurable: true,
enumerable: true,
writable: true,
value: actual.resolveStorePath,
value: actual.resolveStorePath ?? resolveStorePathFallback,
},
});
return mockModule;


@@ -149,6 +149,10 @@
"types": "./dist/plugin-sdk/thread-bindings-runtime.d.ts",
"default": "./dist/plugin-sdk/thread-bindings-runtime.js"
},
"./plugin-sdk/text-core": {
"types": "./dist/plugin-sdk/text-core.d.ts",
"default": "./dist/plugin-sdk/text-core.js"
},
"./plugin-sdk/text-runtime": {
"types": "./dist/plugin-sdk/text-runtime.d.ts",
"default": "./dist/plugin-sdk/text-runtime.js"
@@ -545,6 +549,10 @@
"test:auth:compat": "vitest run --config vitest.gateway.config.ts src/gateway/server.auth.compat-baseline.test.ts src/gateway/client.test.ts src/gateway/reconnect-gating.test.ts src/gateway/protocol/connect-error-details.test.ts",
"test:build:singleton": "node scripts/test-built-plugin-singleton.mjs",
"test:channels": "vitest run --config vitest.channels.config.ts",
"test:ci:cortex": "pnpm exec vitest run src/cli/memory-cli.test.ts src/auto-reply/reply/commands.test.ts src/agents/cortex.test.ts",
"test:ci:daemon-windows": "pnpm exec vitest run src/daemon/schtasks.stop.test.ts src/daemon/schtasks.startup-fallback.test.ts",
"test:ci:path-windows": "pnpm exec vitest run src/infra/update-global.test.ts src/infra/home-dir.test.ts src/infra/executable-path.test.ts src/infra/exec-approvals-store.test.ts src/infra/pairing-files.test.ts src/infra/stable-node-path.test.ts src/infra/hardlink-guards.test.ts src/infra/exec-allowlist-pattern.test.ts src/security/temp-path-guard.test.ts src/infra/run-node.test.ts",
"test:ci:ui-parse": "pnpm exec vitest run ui/src/ui/views/agents-utils.test.ts",
"test:contracts": "pnpm test:contracts:channels && pnpm test:contracts:plugins",
"test:contracts:channels": "OPENCLAW_TEST_PROFILE=low pnpm test -- src/channels/plugins/contracts",
"test:contracts:plugins": "OPENCLAW_TEST_PROFILE=low pnpm test -- src/plugins/contracts",


@@ -27,6 +27,7 @@
"matrix-runtime-heavy",
"matrix-runtime-shared",
"thread-bindings-runtime",
"text-core",
"text-runtime",
"agent-runtime",
"speech-runtime",


@@ -60,6 +60,10 @@ describe("resolveAgentConfig", () => {
workspace: "~/openclaw",
agentDir: "~/.openclaw/agents/main",
model: "anthropic/claude-opus-4",
memorySearch: undefined,
cortex: undefined,
humanDelay: undefined,
heartbeat: undefined,
identity: undefined,
groupChat: undefined,
subagents: undefined,


@@ -32,6 +32,7 @@ type ResolvedAgentConfig = {
model?: AgentEntry["model"];
skills?: AgentEntry["skills"];
memorySearch?: AgentEntry["memorySearch"];
cortex?: AgentEntry["cortex"];
humanDelay?: AgentEntry["humanDelay"];
heartbeat?: AgentEntry["heartbeat"];
identity?: AgentEntry["identity"];
@@ -134,6 +135,7 @@ export function resolveAgentConfig(
: undefined,
skills: Array.isArray(entry.skills) ? entry.skills : undefined,
memorySearch: entry.memorySearch,
cortex: entry.cortex,
humanDelay: entry.humanDelay,
heartbeat: entry.heartbeat,
identity: entry.identity,


@@ -1,9 +1,5 @@
import {
getOAuthApiKey,
getOAuthProviders,
type OAuthCredentials,
type OAuthProvider,
} from "@mariozechner/pi-ai/oauth";
import type { OAuthCredentials, OAuthProvider } from "@mariozechner/pi-ai";
import { getOAuthApiKey, getOAuthProviders } from "@mariozechner/pi-ai/oauth";
import { loadConfig, type OpenClawConfig } from "../../config/config.js";
import { coerceSecretRef } from "../../config/types.secrets.js";
import { withFileLock } from "../../infra/file-lock.js";


@@ -0,0 +1,132 @@
import fs from "node:fs/promises";
import os from "node:os";
import path from "node:path";
import { afterEach, describe, expect, it, vi } from "vitest";
import {
appendCortexCaptureHistory,
getCachedLatestCortexCaptureHistoryEntry,
getLatestCortexCaptureHistoryEntry,
getLatestCortexCaptureHistoryEntrySync,
readRecentCortexCaptureHistory,
} from "./cortex-history.js";
describe("cortex capture history", () => {
afterEach(() => {
vi.unstubAllEnvs();
});
it("appends and reads recent capture history", async () => {
const stateDir = await fs.mkdtemp(path.join(os.tmpdir(), "openclaw-cortex-history-"));
vi.stubEnv("OPENCLAW_STATE_DIR", stateDir);
await appendCortexCaptureHistory({
agentId: "main",
sessionId: "session-1",
channelId: "channel-1",
captured: true,
score: 0.7,
reason: "high-signal memory candidate",
timestamp: 1_000,
});
const recent = await readRecentCortexCaptureHistory({ limit: 5 });
expect(recent).toHaveLength(1);
expect(recent[0]).toMatchObject({
agentId: "main",
captured: true,
reason: "high-signal memory candidate",
});
});
it("returns the latest matching capture entry in async and sync modes", async () => {
const stateDir = await fs.mkdtemp(path.join(os.tmpdir(), "openclaw-cortex-history-sync-"));
vi.stubEnv("OPENCLAW_STATE_DIR", stateDir);
await appendCortexCaptureHistory({
agentId: "main",
sessionId: "session-1",
channelId: "channel-1",
captured: false,
score: 0.1,
reason: "low-signal short reply",
timestamp: 1_000,
});
await appendCortexCaptureHistory({
agentId: "main",
sessionId: "session-1",
channelId: "channel-1",
captured: true,
score: 0.7,
reason: "high-signal memory candidate",
syncedCodingContext: true,
syncPlatforms: ["claude-code", "cursor", "copilot"],
timestamp: 2_000,
});
const asyncEntry = await getLatestCortexCaptureHistoryEntry({
agentId: "main",
sessionId: "session-1",
channelId: "channel-1",
});
const cachedEntry = getCachedLatestCortexCaptureHistoryEntry({
agentId: "main",
sessionId: "session-1",
channelId: "channel-1",
});
const syncEntry = getLatestCortexCaptureHistoryEntrySync({
agentId: "main",
sessionId: "session-1",
channelId: "channel-1",
});
expect(asyncEntry?.timestamp).toBe(2_000);
expect(asyncEntry?.syncedCodingContext).toBe(true);
expect(cachedEntry?.timestamp).toBe(2_000);
expect(syncEntry?.timestamp).toBe(2_000);
expect(syncEntry?.syncPlatforms).toEqual(["claude-code", "cursor", "copilot"]);
});
it("finds an older matching conversation entry even when newer unrelated entries exceed 100", async () => {
const stateDir = await fs.mkdtemp(path.join(os.tmpdir(), "openclaw-cortex-history-scan-"));
vi.stubEnv("OPENCLAW_STATE_DIR", stateDir);
await appendCortexCaptureHistory({
agentId: "main",
sessionId: "session-target",
channelId: "channel-target",
captured: true,
score: 0.8,
reason: "target conversation capture",
timestamp: 1_000,
});
for (let index = 0; index < 150; index += 1) {
await appendCortexCaptureHistory({
agentId: "main",
sessionId: `session-${index}`,
channelId: `channel-${index}`,
captured: true,
score: 0.5,
reason: `other capture ${index}`,
timestamp: 2_000 + index,
});
}
const asyncEntry = await getLatestCortexCaptureHistoryEntry({
agentId: "main",
sessionId: "session-target",
channelId: "channel-target",
});
const syncEntry = getLatestCortexCaptureHistoryEntrySync({
agentId: "main",
sessionId: "session-target",
channelId: "channel-target",
});
expect(asyncEntry?.reason).toBe("target conversation capture");
expect(asyncEntry?.timestamp).toBe(1_000);
expect(syncEntry?.reason).toBe("target conversation capture");
expect(syncEntry?.timestamp).toBe(1_000);
});
});


@@ -0,0 +1,175 @@
import fs from "node:fs";
import fsp from "node:fs/promises";
import path from "node:path";
import { resolveStateDir } from "../config/paths.js";
export type CortexCaptureHistoryEntry = {
agentId: string;
sessionId?: string;
channelId?: string;
captured: boolean;
score: number;
reason: string;
error?: string;
syncedCodingContext?: boolean;
syncPlatforms?: string[];
timestamp: number;
};
const latestCortexCaptureHistoryByKey = new Map<string, CortexCaptureHistoryEntry>();
function matchesHistoryEntry(
entry: CortexCaptureHistoryEntry,
params: {
agentId: string;
sessionId?: string;
channelId?: string;
},
): boolean {
return (
entry.agentId === params.agentId &&
(params.sessionId ? entry.sessionId === params.sessionId : true) &&
(params.channelId ? entry.channelId === params.channelId : true)
);
}
function parseLatestMatchingHistoryEntry(
raw: string,
params: {
agentId: string;
sessionId?: string;
channelId?: string;
},
): CortexCaptureHistoryEntry | null {
const lines = raw
.split("\n")
.map((line) => line.trim())
.filter(Boolean);
for (let index = lines.length - 1; index >= 0; index -= 1) {
const line = lines[index];
if (!line) {
continue;
}
try {
const entry = JSON.parse(line) as CortexCaptureHistoryEntry;
if (matchesHistoryEntry(entry, params)) {
return entry;
}
} catch {
continue;
}
}
return null;
}
function buildHistoryCacheKey(params: {
agentId: string;
sessionId?: string;
channelId?: string;
}): string {
return [params.agentId, params.sessionId ?? "", params.channelId ?? ""].join("\u0000");
}
function cacheHistoryEntry(entry: CortexCaptureHistoryEntry): void {
latestCortexCaptureHistoryByKey.set(
buildHistoryCacheKey({
agentId: entry.agentId,
sessionId: entry.sessionId,
channelId: entry.channelId,
}),
entry,
);
}
function resolveHistoryPath(env: NodeJS.ProcessEnv = process.env): string {
return path.join(resolveStateDir(env), "logs", "cortex-memory-captures.jsonl");
}
export async function appendCortexCaptureHistory(
entry: CortexCaptureHistoryEntry,
env: NodeJS.ProcessEnv = process.env,
): Promise<void> {
const historyPath = resolveHistoryPath(env);
await fsp.mkdir(path.dirname(historyPath), { recursive: true });
await fsp.appendFile(historyPath, `${JSON.stringify(entry)}\n`, "utf8");
cacheHistoryEntry(entry);
}
export async function readRecentCortexCaptureHistory(params?: {
limit?: number;
env?: NodeJS.ProcessEnv;
}): Promise<CortexCaptureHistoryEntry[]> {
const historyPath = resolveHistoryPath(params?.env);
let raw: string;
try {
raw = await fsp.readFile(historyPath, "utf8");
} catch {
return [];
}
const parsed = raw
.split("\n")
.map((line) => line.trim())
.filter(Boolean)
.map((line) => {
try {
return JSON.parse(line) as CortexCaptureHistoryEntry;
} catch {
return null;
}
})
.filter((entry): entry is CortexCaptureHistoryEntry => entry != null);
const limit = Math.max(1, params?.limit ?? 20);
return parsed.slice(-limit).toReversed();
}
export function getLatestCortexCaptureHistoryEntrySync(params: {
agentId: string;
sessionId?: string;
channelId?: string;
env?: NodeJS.ProcessEnv;
}): CortexCaptureHistoryEntry | null {
const historyPath = resolveHistoryPath(params.env);
let raw: string;
try {
raw = fs.readFileSync(historyPath, "utf8");
} catch {
return null;
}
return parseLatestMatchingHistoryEntry(raw, params);
}
export function getCachedLatestCortexCaptureHistoryEntry(params: {
agentId: string;
sessionId?: string;
channelId?: string;
}): CortexCaptureHistoryEntry | null {
return (
latestCortexCaptureHistoryByKey.get(
buildHistoryCacheKey({
agentId: params.agentId,
sessionId: params.sessionId,
channelId: params.channelId,
}),
) ?? null
);
}
export async function getLatestCortexCaptureHistoryEntry(params: {
agentId: string;
sessionId?: string;
channelId?: string;
env?: NodeJS.ProcessEnv;
}): Promise<CortexCaptureHistoryEntry | null> {
const historyPath = resolveHistoryPath(params.env);
let raw: string;
try {
raw = await fsp.readFile(historyPath, "utf8");
} catch {
return null;
}
const match = parseLatestMatchingHistoryEntry(raw, params);
if (match) {
cacheHistoryEntry(match);
}
return match;
}
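The latest-entry lookup above scans the JSONL history from the end, tolerating malformed lines. A self-contained sketch of that reverse scan (types and names here are local to the sketch, assuming the same one-JSON-object-per-line format):

```typescript
// Sketch of the reverse JSONL scan in parseLatestMatchingHistoryEntry:
// walk lines from the end and return the first entry whose agentId matches,
// skipping lines that fail to parse (e.g. a partially written record).
type Entry = { agentId: string; reason: string };

function latestFor(raw: string, agentId: string): Entry | null {
  const lines = raw
    .split("\n")
    .map((line) => line.trim())
    .filter(Boolean);
  for (let index = lines.length - 1; index >= 0; index -= 1) {
    try {
      const entry = JSON.parse(lines[index]) as Entry;
      if (entry.agentId === agentId) {
        return entry;
      }
    } catch {
      // tolerate corrupt or truncated lines
    }
  }
  return null;
}

const raw = [
  JSON.stringify({ agentId: "main", reason: "older" }),
  "not-json",
  JSON.stringify({ agentId: "other", reason: "unrelated" }),
  JSON.stringify({ agentId: "main", reason: "newest" }),
].join("\n");

console.log(latestFor(raw, "main")?.reason); // "newest"
```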

src/agents/cortex.test.ts (new file, 769 lines)

@@ -0,0 +1,769 @@
import { afterEach, beforeEach, describe, expect, it, vi } from "vitest";
import type { OpenClawConfig } from "../config/config.js";
import { setActivePluginRegistry } from "../plugins/runtime.js";
import { createChannelTestPluginBase, createTestRegistry } from "../test-utils/channel-plugins.js";
const {
previewCortexContext,
getCortexStatus,
getCortexModeOverride,
listCortexMemoryConflicts,
ingestCortexMemoryFromText,
syncCortexCodingContext,
} = vi.hoisted(() => ({
previewCortexContext: vi.fn(),
getCortexStatus: vi.fn(),
getCortexModeOverride: vi.fn(),
listCortexMemoryConflicts: vi.fn(),
ingestCortexMemoryFromText: vi.fn(),
syncCortexCodingContext: vi.fn(),
}));
vi.mock("../memory/cortex.js", () => ({
previewCortexContext,
getCortexStatus,
listCortexMemoryConflicts,
ingestCortexMemoryFromText,
syncCortexCodingContext,
}));
vi.mock("../memory/cortex-mode-overrides.js", () => ({
getCortexModeOverride,
}));
import {
getAgentCortexMemoryCaptureStatus,
ingestAgentCortexMemoryCandidate,
resetAgentCortexConflictNoticeStateForTests,
resolveAgentCortexConflictNotice,
resolveAgentCortexConfig,
resolveAgentCortexModeStatus,
resolveAgentCortexPromptContext,
resolveAgentTurnCortexContext,
resolveCortexChannelTarget,
} from "./cortex.js";
beforeEach(() => {
setActivePluginRegistry(createTestRegistry([]));
getCortexStatus.mockResolvedValue({
available: true,
workspaceDir: "/tmp/openclaw-workspace",
graphPath: "/tmp/openclaw-workspace/.cortex/context.json",
graphExists: true,
});
});
afterEach(() => {
vi.clearAllMocks();
setActivePluginRegistry(createTestRegistry([]));
resetAgentCortexConflictNoticeStateForTests();
});
describe("resolveAgentCortexConfig", () => {
it("returns null when Cortex prompt bridge is disabled", () => {
const cfg: OpenClawConfig = {
agents: {
defaults: {},
list: [{ id: "main" }],
},
};
expect(resolveAgentCortexConfig(cfg, "main")).toBeNull();
});
it("merges defaults with per-agent overrides", () => {
const cfg: OpenClawConfig = {
agents: {
defaults: {
cortex: {
enabled: true,
mode: "professional",
maxChars: 1200,
graphPath: ".cortex/default.json",
},
},
list: [
{
id: "main",
cortex: {
mode: "technical",
maxChars: 3000,
},
},
],
},
};
expect(resolveAgentCortexConfig(cfg, "main")).toEqual({
enabled: true,
graphPath: ".cortex/default.json",
mode: "technical",
maxChars: 3000,
});
});
it("clamps max chars to a bounded value", () => {
const cfg: OpenClawConfig = {
agents: {
defaults: {
cortex: {
enabled: true,
maxChars: 999999,
},
},
list: [{ id: "main" }],
},
};
expect(resolveAgentCortexConfig(cfg, "main")?.maxChars).toBe(8000);
});
});
describe("resolveAgentCortexPromptContext", () => {
it("skips Cortex lookup in minimal prompt mode", async () => {
const cfg: OpenClawConfig = {
agents: {
defaults: {
cortex: {
enabled: true,
},
},
list: [{ id: "main" }],
},
};
const result = await resolveAgentCortexPromptContext({
cfg,
agentId: "main",
workspaceDir: "/tmp/openclaw-workspace",
promptMode: "minimal",
});
expect(result).toEqual({});
expect(previewCortexContext).not.toHaveBeenCalled();
});
it("returns exported context when enabled", async () => {
getCortexModeOverride.mockResolvedValueOnce(null);
previewCortexContext.mockResolvedValueOnce({
workspaceDir: "/tmp/openclaw-workspace",
graphPath: "/tmp/openclaw-workspace/.cortex/context.json",
policy: "technical",
maxChars: 1500,
context: "## Cortex Context\n- Shipping",
});
const cfg: OpenClawConfig = {
agents: {
defaults: {
cortex: {
enabled: true,
},
},
list: [{ id: "main" }],
},
};
const result = await resolveAgentCortexPromptContext({
cfg,
agentId: "main",
workspaceDir: "/tmp/openclaw-workspace",
promptMode: "full",
});
expect(result).toEqual({
context: "## Cortex Context\n- Shipping",
});
expect(previewCortexContext).toHaveBeenCalledWith(
expect.objectContaining({
workspaceDir: "/tmp/openclaw-workspace",
graphPath: undefined,
policy: "technical",
maxChars: 1500,
}),
);
});
it("prefers stored session/channel mode overrides", async () => {
getCortexModeOverride.mockResolvedValueOnce({
agentId: "main",
scope: "session",
targetId: "session-1",
mode: "minimal",
updatedAt: new Date().toISOString(),
});
previewCortexContext.mockResolvedValueOnce({
workspaceDir: "/tmp/openclaw-workspace",
graphPath: "/tmp/openclaw-workspace/.cortex/context.json",
policy: "minimal",
maxChars: 1500,
context: "## Cortex Context\n- Minimal",
});
const cfg: OpenClawConfig = {
agents: {
defaults: {
cortex: {
enabled: true,
mode: "technical",
},
},
list: [{ id: "main" }],
},
};
const result = await resolveAgentCortexPromptContext({
cfg,
agentId: "main",
workspaceDir: "/tmp/openclaw-workspace",
promptMode: "full",
sessionId: "session-1",
channelId: "slack",
});
expect(result).toEqual({
context: "## Cortex Context\n- Minimal",
});
expect(getCortexModeOverride).toHaveBeenCalledWith({
agentId: "main",
sessionId: "session-1",
channelId: "slack",
});
expect(previewCortexContext).toHaveBeenCalledWith(
expect.objectContaining({
workspaceDir: "/tmp/openclaw-workspace",
graphPath: undefined,
policy: "minimal",
maxChars: 1500,
}),
);
});
it("returns an error without throwing when Cortex preview fails", async () => {
getCortexModeOverride.mockResolvedValueOnce(null);
previewCortexContext.mockRejectedValueOnce(new Error("Cortex graph not found"));
const cfg: OpenClawConfig = {
agents: {
defaults: {
cortex: {
enabled: true,
},
},
list: [{ id: "main" }],
},
};
const result = await resolveAgentCortexPromptContext({
cfg,
agentId: "main",
workspaceDir: "/tmp/openclaw-workspace",
promptMode: "full",
});
expect(result.error).toContain("Cortex graph not found");
});
it("reuses resolved turn status when provided", async () => {
getCortexModeOverride.mockResolvedValueOnce(null);
previewCortexContext.mockResolvedValueOnce({
workspaceDir: "/tmp/openclaw-workspace",
graphPath: "/tmp/openclaw-workspace/.cortex/context.json",
policy: "technical",
maxChars: 1500,
context: "## Cortex Context\n- Shipping",
});
const cfg: OpenClawConfig = {
agents: {
defaults: {
cortex: {
enabled: true,
},
},
list: [{ id: "main" }],
},
};
const resolved = await resolveAgentTurnCortexContext({
cfg,
agentId: "main",
workspaceDir: "/tmp/openclaw-workspace",
});
const result = await resolveAgentCortexPromptContext({
cfg,
agentId: "main",
workspaceDir: "/tmp/openclaw-workspace",
promptMode: "full",
resolved,
});
expect(result.context).toContain("Shipping");
expect(getCortexStatus).toHaveBeenCalledTimes(1);
});
});
describe("resolveAgentCortexConflictNotice", () => {
it("returns a throttled high-severity conflict notice", async () => {
listCortexMemoryConflicts.mockResolvedValueOnce([
{
id: "conf_1",
type: "temporal_flip",
severity: 0.91,
summary: "Hiring status changed from active to paused",
},
]);
const cfg: OpenClawConfig = {
agents: {
defaults: {
cortex: {
enabled: true,
},
},
list: [{ id: "main" }],
},
};
const notice = await resolveAgentCortexConflictNotice({
cfg,
agentId: "main",
workspaceDir: "/tmp/openclaw-workspace",
sessionId: "session-1",
channelId: "channel-1",
now: 1_000,
cooldownMs: 10_000,
});
expect(notice?.conflictId).toBe("conf_1");
expect(notice?.text).toContain("Cortex conflict detected");
expect(notice?.text).toContain("/cortex resolve conf_1");
const second = await resolveAgentCortexConflictNotice({
cfg,
agentId: "main",
workspaceDir: "/tmp/openclaw-workspace",
sessionId: "session-1",
channelId: "channel-1",
now: 5_000,
cooldownMs: 10_000,
});
expect(second).toBeNull();
});
it("reuses resolved turn status when checking conflicts", async () => {
listCortexMemoryConflicts.mockResolvedValueOnce([
{
id: "conf_1",
type: "temporal_flip",
severity: 0.91,
summary: "Hiring status changed from active to paused",
},
]);
const cfg: OpenClawConfig = {
agents: {
defaults: {
cortex: {
enabled: true,
},
},
list: [{ id: "main" }],
},
};
const resolved = await resolveAgentTurnCortexContext({
cfg,
agentId: "main",
workspaceDir: "/tmp/openclaw-workspace",
sessionId: "session-1",
channelId: "channel-1",
});
await resolveAgentCortexConflictNotice({
cfg,
agentId: "main",
workspaceDir: "/tmp/openclaw-workspace",
sessionId: "session-1",
channelId: "channel-1",
resolved,
});
expect(getCortexStatus).toHaveBeenCalledTimes(1);
});
it("applies cooldown even when no Cortex conflicts are found", async () => {
listCortexMemoryConflicts.mockResolvedValueOnce([]);
const cfg: OpenClawConfig = {
agents: {
defaults: {
cortex: {
enabled: true,
},
},
list: [{ id: "main" }],
},
};
const first = await resolveAgentCortexConflictNotice({
cfg,
agentId: "main",
workspaceDir: "/tmp/openclaw-workspace",
sessionId: "session-1",
channelId: "channel-1",
now: 1_000,
cooldownMs: 10_000,
});
expect(first).toBeNull();
expect(listCortexMemoryConflicts).toHaveBeenCalledTimes(1);
const second = await resolveAgentCortexConflictNotice({
cfg,
agentId: "main",
workspaceDir: "/tmp/openclaw-workspace",
sessionId: "session-1",
channelId: "channel-1",
now: 5_000,
cooldownMs: 10_000,
});
expect(second).toBeNull();
expect(listCortexMemoryConflicts).toHaveBeenCalledTimes(1);
});
it("returns null when Cortex is disabled", async () => {
const cfg: OpenClawConfig = {
agents: {
defaults: {},
list: [{ id: "main" }],
},
};
const notice = await resolveAgentCortexConflictNotice({
cfg,
agentId: "main",
workspaceDir: "/tmp/openclaw-workspace",
});
expect(notice).toBeNull();
expect(listCortexMemoryConflicts).not.toHaveBeenCalled();
});
});
describe("ingestAgentCortexMemoryCandidate", () => {
it("captures high-signal user text into Cortex", async () => {
ingestCortexMemoryFromText.mockResolvedValueOnce({
workspaceDir: "/tmp/openclaw-workspace",
graphPath: "/tmp/openclaw-workspace/.cortex/context.json",
stored: true,
});
const cfg: OpenClawConfig = {
agents: {
defaults: {
cortex: {
enabled: true,
},
},
list: [{ id: "main" }],
},
};
const result = await ingestAgentCortexMemoryCandidate({
cfg,
agentId: "main",
workspaceDir: "/tmp/openclaw-workspace",
commandBody: "I prefer concise answers and I am focused on fundraising this quarter.",
sessionId: "session-1",
channelId: "channel-1",
});
expect(result.captured).toBe(true);
expect(result.reason).toBe("high-signal memory candidate");
expect(ingestCortexMemoryFromText).toHaveBeenCalledWith(
expect.objectContaining({
workspaceDir: "/tmp/openclaw-workspace",
graphPath: undefined,
event: {
actor: "user",
text: "I prefer concise answers and I am focused on fundraising this quarter.",
agentId: "main",
sessionId: "session-1",
channelId: "channel-1",
provider: undefined,
},
}),
);
expect(
getAgentCortexMemoryCaptureStatus({
agentId: "main",
sessionId: "session-1",
channelId: "channel-1",
}),
).toMatchObject({
captured: true,
reason: "high-signal memory candidate",
});
});
it("auto-syncs coding context for technical captures", async () => {
ingestCortexMemoryFromText.mockResolvedValueOnce({
workspaceDir: "/tmp/openclaw-workspace",
graphPath: "/tmp/openclaw-workspace/.cortex/context.json",
stored: true,
});
syncCortexCodingContext.mockResolvedValueOnce({
workspaceDir: "/tmp/openclaw-workspace",
graphPath: "/tmp/openclaw-workspace/.cortex/context.json",
policy: "technical",
platforms: ["cursor"],
});
const cfg: OpenClawConfig = {
agents: {
defaults: {
cortex: {
enabled: true,
},
},
list: [{ id: "main" }],
},
};
const result = await ingestAgentCortexMemoryCandidate({
cfg,
agentId: "main",
workspaceDir: "/tmp/openclaw-workspace",
commandBody: "I am debugging a Python backend API bug in this repo.",
sessionId: "session-1",
channelId: "channel-1",
provider: "cursor",
});
expect(result).toMatchObject({
captured: true,
syncedCodingContext: true,
syncPlatforms: ["cursor"],
});
expect(syncCortexCodingContext).toHaveBeenCalledWith(
expect.objectContaining({
workspaceDir: "/tmp/openclaw-workspace",
graphPath: undefined,
policy: "technical",
platforms: ["cursor"],
}),
);
});
it("does not auto-sync generic technical chatter from messaging providers", async () => {
ingestCortexMemoryFromText.mockResolvedValueOnce({
workspaceDir: "/tmp/openclaw-workspace",
graphPath: "/tmp/openclaw-workspace/.cortex/context.json",
stored: true,
});
const cfg: OpenClawConfig = {
agents: {
defaults: {
cortex: {
enabled: true,
},
},
list: [{ id: "main" }],
},
};
const result = await ingestAgentCortexMemoryCandidate({
cfg,
agentId: "main",
workspaceDir: "/tmp/openclaw-workspace",
commandBody: "I am debugging a Python API bug right now.",
sessionId: "session-1",
channelId: "telegram:1",
provider: "telegram",
});
expect(result).toMatchObject({
captured: true,
syncedCodingContext: false,
});
expect(syncCortexCodingContext).not.toHaveBeenCalled();
});
it("does not auto-sync generic technical chatter from registered channel plugins", async () => {
setActivePluginRegistry(
createTestRegistry([
{
pluginId: "matrix",
source: "test",
plugin: createChannelTestPluginBase({
id: "matrix",
label: "Matrix",
docsPath: "/channels/matrix",
}),
},
]),
);
ingestCortexMemoryFromText.mockResolvedValueOnce({
workspaceDir: "/tmp/openclaw-workspace",
graphPath: "/tmp/openclaw-workspace/.cortex/context.json",
stored: true,
});
const cfg: OpenClawConfig = {
agents: {
defaults: {
cortex: {
enabled: true,
},
},
list: [{ id: "main" }],
},
};
const result = await ingestAgentCortexMemoryCandidate({
cfg,
agentId: "main",
workspaceDir: "/tmp/openclaw-workspace",
commandBody: "I am debugging a Python API bug right now.",
sessionId: "session-1",
channelId: "!room:example.org",
provider: "matrix",
});
expect(result).toMatchObject({
captured: true,
syncedCodingContext: false,
});
expect(syncCortexCodingContext).not.toHaveBeenCalled();
});
it("skips low-signal text", async () => {
const cfg: OpenClawConfig = {
agents: {
defaults: {
cortex: {
enabled: true,
},
},
list: [{ id: "main" }],
},
};
const result = await ingestAgentCortexMemoryCandidate({
cfg,
agentId: "main",
workspaceDir: "/tmp/openclaw-workspace",
commandBody: "ok",
});
expect(result).toMatchObject({
captured: false,
reason: "low-signal short reply",
});
expect(ingestCortexMemoryFromText).not.toHaveBeenCalled();
});
it("reuses the same graph path across channels for the same agent", async () => {
ingestCortexMemoryFromText.mockResolvedValue({
workspaceDir: "/tmp/openclaw-workspace",
graphPath: "/tmp/openclaw-workspace/.cortex/context.json",
stored: true,
});
const cfg: OpenClawConfig = {
agents: {
defaults: {
cortex: {
enabled: true,
graphPath: ".cortex/context.json",
},
},
list: [{ id: "main" }],
},
};
await ingestAgentCortexMemoryCandidate({
cfg,
agentId: "main",
workspaceDir: "/tmp/openclaw-workspace",
commandBody: "I prefer concise answers for work updates.",
sessionId: "session-1",
channelId: "slack:C123",
provider: "slack",
});
await ingestAgentCortexMemoryCandidate({
cfg,
agentId: "main",
workspaceDir: "/tmp/openclaw-workspace",
commandBody: "I am focused on fundraising this quarter.",
sessionId: "session-2",
channelId: "telegram:456",
provider: "telegram",
});
expect(ingestCortexMemoryFromText).toHaveBeenNthCalledWith(
1,
expect.objectContaining({
graphPath: ".cortex/context.json",
}),
);
expect(ingestCortexMemoryFromText).toHaveBeenNthCalledWith(
2,
expect.objectContaining({
graphPath: ".cortex/context.json",
}),
);
});
});
describe("resolveAgentCortexModeStatus", () => {
it("reports the active source for a session override", async () => {
getCortexModeOverride.mockResolvedValueOnce({
agentId: "main",
scope: "session",
targetId: "session-1",
mode: "minimal",
updatedAt: new Date().toISOString(),
});
const cfg: OpenClawConfig = {
agents: {
defaults: {
cortex: {
enabled: true,
mode: "technical",
maxChars: 1500,
},
},
list: [{ id: "main" }],
},
};
await expect(
resolveAgentCortexModeStatus({
cfg,
agentId: "main",
sessionId: "session-1",
channelId: "slack",
}),
).resolves.toMatchObject({
mode: "minimal",
source: "session-override",
});
});
});
describe("resolveCortexChannelTarget", () => {
it("prefers concrete conversation ids before provider labels", () => {
expect(
resolveCortexChannelTarget({
channel: "slack",
channelId: "slack",
nativeChannelId: "C123",
}),
).toBe("C123");
});
});

src/agents/cortex.ts (new file, 617 additions)

@@ -0,0 +1,617 @@
import type { OpenClawConfig } from "../config/config.js";
import type { AgentCortexConfig } from "../config/types.agent-defaults.js";
import { getCortexModeOverride } from "../memory/cortex-mode-overrides.js";
import {
getCortexStatus,
ingestCortexMemoryFromText,
listCortexMemoryConflicts,
previewCortexContext,
syncCortexCodingContext,
type CortexPolicy,
type CortexStatus,
} from "../memory/cortex.js";
import { resolveGatewayMessageChannel } from "../utils/message-channel.js";
import { resolveAgentConfig } from "./agent-scope.js";
import {
appendCortexCaptureHistory,
getLatestCortexCaptureHistoryEntry,
} from "./cortex-history.js";
export type ResolvedAgentCortexConfig = {
enabled: true;
graphPath?: string;
mode: CortexPolicy;
maxChars: number;
};
export type AgentCortexPromptContextResult = {
context?: string;
error?: string;
};
export type ResolvedAgentCortexModeStatus = {
enabled: true;
mode: CortexPolicy;
source: "agent-config" | "session-override" | "channel-override";
graphPath?: string;
maxChars: number;
};
export type ResolvedAgentTurnCortexContext = {
config: ResolvedAgentCortexModeStatus;
status: CortexStatus;
};
export type AgentCortexConflictNotice = {
text: string;
conflictId: string;
severity: number;
};
export type AgentCortexMemoryCaptureResult = {
captured: boolean;
score: number;
reason: string;
error?: string;
syncedCodingContext?: boolean;
syncPlatforms?: string[];
};
export type AgentCortexMemoryCaptureStatus = AgentCortexMemoryCaptureResult & {
updatedAt: number;
};
const DEFAULT_CORTEX_MODE: CortexPolicy = "technical";
const DEFAULT_CORTEX_MAX_CHARS = 1_500;
const MAX_CORTEX_MAX_CHARS = 8_000;
const DEFAULT_CORTEX_CONFLICT_SEVERITY = 0.75;
const DEFAULT_CORTEX_CONFLICT_COOLDOWN_MS = 30 * 60 * 1000;
const cortexConflictNoticeCooldowns = new Map<string, number>();
const cortexMemoryCaptureStatuses = new Map<string, AgentCortexMemoryCaptureStatus>();
const MIN_CORTEX_MEMORY_CONTENT_LENGTH = 24;
const DEFAULT_CORTEX_CODING_SYNC_COOLDOWN_MS = 10 * 60 * 1000;
const LOW_SIGNAL_PATTERNS = [
/^ok[.!]?$/i,
/^okay[.!]?$/i,
/^thanks?[.!]?$/i,
/^cool[.!]?$/i,
/^sounds good[.!]?$/i,
/^yes[.!]?$/i,
/^no[.!]?$/i,
/^lol[.!]?$/i,
/^haha[.!]?$/i,
/^test$/i,
];
const HIGH_SIGNAL_PATTERNS = [
/\bI prefer\b/i,
/\bmy preference\b/i,
/\bI am working on\b/i,
/\bIm working on\b/i,
/\bmy project\b/i,
/\bI use\b/i,
/\bI don't use\b/i,
/\bI do not use\b/i,
/\bI need\b/i,
/\bmy goal\b/i,
/\bmy priority\b/i,
/\bremember that\b/i,
/\bI like\b/i,
/\bI dislike\b/i,
/\bI am focused on\b/i,
/\bI'm focused on\b/i,
/\bI've been focused on\b/i,
/\bI work with\b/i,
/\bI work on\b/i,
];
const TECHNICAL_SIGNAL_PATTERNS = [
/\bpython\b/i,
/\btypescript\b/i,
/\bjavascript\b/i,
/\brepo\b/i,
/\bbug\b/i,
/\bdebug\b/i,
/\bdeploy\b/i,
/\bpr\b/i,
/\bcursor\b/i,
/\bcopilot\b/i,
/\bclaude code\b/i,
/\bgemini\b/i,
/\bapi\b/i,
/\bbackend\b/i,
];
const STRONG_CODING_SYNC_PATTERNS = [
/\brepo\b/i,
/\bcodebase\b/i,
/\bpull request\b/i,
/\bpackage\.json\b/i,
/\btsconfig\b/i,
/\bpytest\b/i,
/\bclaude code\b/i,
/\bcursor\b/i,
/\bcopilot\b/i,
/\bgemini cli\b/i,
];
const CORTEX_CODING_PROVIDER_PLATFORM_MAP: Record<string, string[]> = {
"claude-code": ["claude-code"],
copilot: ["copilot"],
cursor: ["cursor"],
"gemini-cli": ["gemini-cli"],
};
const CORTEX_NON_GATEWAY_MESSAGING_PROVIDERS = new Set(["voice"]);
const cortexCodingSyncCooldowns = new Map<string, number>();
function normalizeMode(mode?: AgentCortexConfig["mode"]): CortexPolicy {
if (mode === "full" || mode === "professional" || mode === "technical" || mode === "minimal") {
return mode;
}
return DEFAULT_CORTEX_MODE;
}
function normalizeMaxChars(value?: number): number {
if (typeof value !== "number" || !Number.isFinite(value) || value <= 0) {
return DEFAULT_CORTEX_MAX_CHARS;
}
return Math.min(MAX_CORTEX_MAX_CHARS, Math.max(1, Math.floor(value)));
}
function isCortexMessagingProvider(provider?: string): boolean {
const normalized = provider?.trim().toLowerCase();
if (!normalized) {
return false;
}
return (
CORTEX_NON_GATEWAY_MESSAGING_PROVIDERS.has(normalized) ||
resolveGatewayMessageChannel(normalized) !== undefined
);
}
export function resolveAgentCortexConfig(
cfg: OpenClawConfig,
agentId: string,
): ResolvedAgentCortexConfig | null {
const defaults = cfg.agents?.defaults?.cortex;
const overrides = resolveAgentConfig(cfg, agentId)?.cortex;
const enabled = overrides?.enabled ?? defaults?.enabled ?? false;
if (!enabled) {
return null;
}
return {
enabled: true,
graphPath: overrides?.graphPath ?? defaults?.graphPath,
mode: normalizeMode(overrides?.mode ?? defaults?.mode),
maxChars: normalizeMaxChars(overrides?.maxChars ?? defaults?.maxChars),
};
}
export function resolveCortexChannelTarget(params: {
channel?: string;
channelId?: string;
originatingChannel?: string;
originatingTo?: string;
nativeChannelId?: string;
to?: string;
from?: string;
}): string {
const directConversationId = params.originatingTo?.trim();
if (directConversationId) {
return directConversationId;
}
const nativeConversationId = params.nativeChannelId?.trim();
if (nativeConversationId) {
return nativeConversationId;
}
const destinationId = params.to?.trim();
if (destinationId) {
return destinationId;
}
const sourceId = params.from?.trim();
if (sourceId) {
return sourceId;
}
const providerChannelId = params.channelId?.trim();
if (providerChannelId) {
return providerChannelId;
}
return String(params.originatingChannel ?? params.channel ?? "").trim();
}
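The fallback chain above can be sketched in isolation. The helper below is illustrative, not an export of this module; it assumes the same precedence as `resolveCortexChannelTarget`: originating destination, then native conversation id, then `to`, `from`, the provider channel id, and finally the channel label.

```typescript
// Illustrative stand-in for the precedence chain (hypothetical helper,
// not part of src/agents/cortex.ts).
type TargetParams = {
  channel?: string;
  channelId?: string;
  originatingTo?: string;
  nativeChannelId?: string;
  to?: string;
  from?: string;
};

function pickChannelTarget(p: TargetParams): string {
  // A trimmed-empty string is falsy, so || mirrors the sequential ifs above.
  return (
    p.originatingTo?.trim() ||
    p.nativeChannelId?.trim() ||
    p.to?.trim() ||
    p.from?.trim() ||
    p.channelId?.trim() ||
    (p.channel ?? "").trim()
  );
}

// A concrete conversation id beats the provider label:
pickChannelTarget({ channel: "slack", channelId: "slack", nativeChannelId: "C123" }); // "C123"
```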
export async function resolveAgentCortexModeStatus(params: {
cfg?: OpenClawConfig;
agentId: string;
sessionId?: string;
channelId?: string;
}): Promise<ResolvedAgentCortexModeStatus | null> {
if (!params.cfg) {
return null;
}
const cortex = resolveAgentCortexConfig(params.cfg, params.agentId);
if (!cortex) {
return null;
}
const modeOverride = await getCortexModeOverride({
agentId: params.agentId,
sessionId: params.sessionId,
channelId: params.channelId,
});
return {
enabled: true,
graphPath: cortex.graphPath,
maxChars: cortex.maxChars,
mode: modeOverride?.mode ?? cortex.mode,
source:
modeOverride?.scope === "session"
? "session-override"
: modeOverride?.scope === "channel"
? "channel-override"
: "agent-config",
};
}
export async function resolveAgentCortexPromptContext(params: {
cfg?: OpenClawConfig;
agentId: string;
workspaceDir: string;
promptMode: "full" | "minimal";
sessionId?: string;
channelId?: string;
resolved?: ResolvedAgentTurnCortexContext | null;
}): Promise<AgentCortexPromptContextResult> {
if (!params.cfg || params.promptMode !== "full") {
return {};
}
const resolved =
params.resolved ??
(await resolveAgentTurnCortexContext({
cfg: params.cfg,
agentId: params.agentId,
workspaceDir: params.workspaceDir,
sessionId: params.sessionId,
channelId: params.channelId,
}));
if (!resolved) {
return {};
}
try {
const preview = await previewCortexContext({
workspaceDir: params.workspaceDir,
graphPath: resolved.config.graphPath,
policy: resolved.config.mode,
maxChars: resolved.config.maxChars,
status: resolved.status,
});
return preview.context ? { context: preview.context } : {};
} catch (error) {
return {
error: error instanceof Error ? error.message : String(error),
};
}
}
export async function resolveAgentTurnCortexContext(params: {
cfg?: OpenClawConfig;
agentId: string;
workspaceDir: string;
sessionId?: string;
channelId?: string;
}): Promise<ResolvedAgentTurnCortexContext | null> {
if (!params.cfg) {
return null;
}
const config = await resolveAgentCortexModeStatus({
cfg: params.cfg,
agentId: params.agentId,
sessionId: params.sessionId,
channelId: params.channelId,
});
if (!config) {
return null;
}
const status = await getCortexStatus({
workspaceDir: params.workspaceDir,
graphPath: config.graphPath,
});
return { config, status };
}
export function resetAgentCortexConflictNoticeStateForTests(): void {
cortexConflictNoticeCooldowns.clear();
cortexMemoryCaptureStatuses.clear();
cortexCodingSyncCooldowns.clear();
}
function buildAgentCortexConversationKey(params: {
agentId: string;
sessionId?: string;
channelId?: string;
}): string {
// Use NUL as separator to avoid collisions when IDs contain colons
// (e.g. "session:test" vs separate "session" + "test" tokens).
return [params.agentId, params.sessionId ?? "", params.channelId ?? ""].join("\0");
}
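The comment above can be demonstrated directly. This is a hypothetical illustration, not module code: with a `:` separator, two distinct id tuples can collide once the ids themselves contain colons, while `\0` keeps them apart.

```typescript
// Hypothetical demo of the NUL-separator rationale (not module code).
const joinColon = (parts: string[]) => parts.join(":");
const joinNul = (parts: string[]) => parts.join("\0");

const a = ["main:x", "y", ""]; // agentId "main:x", sessionId "y"
const b = ["main", "x:y", ""]; // agentId "main", sessionId "x:y"

joinColon(a) === joinColon(b); // true — both flatten to "main:x:y:"
joinNul(a) === joinNul(b); // false — the composite keys stay distinct
```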
export function getAgentCortexMemoryCaptureStatus(params: {
agentId: string;
sessionId?: string;
channelId?: string;
}): AgentCortexMemoryCaptureStatus | null {
const key = buildAgentCortexConversationKey({
agentId: params.agentId,
sessionId: params.sessionId,
channelId: params.channelId,
});
return cortexMemoryCaptureStatuses.get(key) ?? null;
}
function scoreAgentCortexMemoryCandidate(commandBody: string): AgentCortexMemoryCaptureResult {
const content = commandBody.trim();
if (!content) {
return { captured: false, score: 0, reason: "empty content" };
}
if (content.startsWith("/") || content.startsWith("!")) {
return { captured: false, score: 0, reason: "command content" };
}
if (LOW_SIGNAL_PATTERNS.some((pattern) => pattern.test(content))) {
return { captured: false, score: 0.05, reason: "low-signal short reply" };
}
let score = 0.1;
if (content.length >= MIN_CORTEX_MEMORY_CONTENT_LENGTH) {
score += 0.2;
}
if (content.length >= 80) {
score += 0.1;
}
if (HIGH_SIGNAL_PATTERNS.some((pattern) => pattern.test(content))) {
score += 0.4;
}
if (TECHNICAL_SIGNAL_PATTERNS.some((pattern) => pattern.test(content))) {
score += 0.2;
}
const captured = score >= 0.45;
return {
captured,
score,
reason: captured ? "high-signal memory candidate" : "below memory threshold",
};
}
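The weights above work out as: base 0.1, +0.2 at 24 characters, +0.1 at 80 characters, +0.4 for a high-signal phrase, +0.2 for a technical term, with capture at a score of at least 0.45. A simplified sketch (one representative pattern stands in for each full list; not module code):

```typescript
// Simplified sketch of the scoring above — one pattern per bucket stands in
// for the real pattern lists (illustrative only, not module code).
function sketchScore(text: string): number {
  let score = 0.1;
  if (text.length >= 24) score += 0.2;
  if (text.length >= 80) score += 0.1;
  if (/\bI prefer\b/i.test(text)) score += 0.4; // high-signal bucket
  if (/\bpython\b/i.test(text)) score += 0.2; // technical bucket
  return score;
}

sketchScore("ok") < 0.45; // true — a short ack stays below the capture threshold
sketchScore("I prefer concise answers and weekly summaries.") >= 0.45; // true — 0.1 + 0.2 + 0.4
```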
function resolveAutoSyncCortexCodingContext(params: {
commandBody: string;
provider?: string;
}): { policy: CortexPolicy; platforms: string[] } | null {
if (!TECHNICAL_SIGNAL_PATTERNS.some((pattern) => pattern.test(params.commandBody))) {
return null;
}
const provider = params.provider?.trim().toLowerCase();
if (provider) {
const directPlatforms = CORTEX_CODING_PROVIDER_PLATFORM_MAP[provider];
if (directPlatforms) {
return {
policy: "technical",
platforms: directPlatforms,
};
}
}
const hasStrongCodingIntent = STRONG_CODING_SYNC_PATTERNS.some((pattern) =>
pattern.test(params.commandBody),
);
if (provider && isCortexMessagingProvider(provider) && !hasStrongCodingIntent) {
return null;
}
return {
policy: "technical",
platforms: ["claude-code", "cursor", "copilot"],
};
}
export async function resolveAgentCortexConflictNotice(params: {
cfg?: OpenClawConfig;
agentId: string;
workspaceDir: string;
sessionId?: string;
channelId?: string;
minSeverity?: number;
now?: number;
cooldownMs?: number;
resolved?: ResolvedAgentTurnCortexContext | null;
}): Promise<AgentCortexConflictNotice | null> {
if (!params.cfg) {
return null;
}
const resolved =
params.resolved ??
(await resolveAgentTurnCortexContext({
cfg: params.cfg,
agentId: params.agentId,
workspaceDir: params.workspaceDir,
sessionId: params.sessionId,
channelId: params.channelId,
}));
if (!resolved) {
return null;
}
const targetKey = buildAgentCortexConversationKey({
agentId: params.agentId,
sessionId: params.sessionId,
channelId: params.channelId,
});
const now = params.now ?? Date.now();
const cooldownMs = params.cooldownMs ?? DEFAULT_CORTEX_CONFLICT_COOLDOWN_MS;
const nextAllowedAt = cortexConflictNoticeCooldowns.get(targetKey) ?? 0;
if (nextAllowedAt > now) {
return null;
}
try {
const conflicts = await listCortexMemoryConflicts({
workspaceDir: params.workspaceDir,
graphPath: resolved.config.graphPath,
minSeverity: params.minSeverity ?? DEFAULT_CORTEX_CONFLICT_SEVERITY,
status: resolved.status,
});
const topConflict = conflicts
.filter((entry) => entry.id && entry.summary)
.toSorted((left, right) => right.severity - left.severity)[0];
if (!topConflict) {
cortexConflictNoticeCooldowns.set(targetKey, now + cooldownMs);
return null;
}
cortexConflictNoticeCooldowns.set(targetKey, now + cooldownMs);
return {
conflictId: topConflict.id,
severity: topConflict.severity,
text: [
`⚠️ Cortex conflict detected: ${topConflict.summary}`,
`Resolve with: /cortex resolve ${topConflict.id} <accept-new|keep-old|merge|ignore>`,
].join("\n"),
};
} catch {
cortexConflictNoticeCooldowns.set(targetKey, now + cooldownMs);
return null;
}
}
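The cooldown bookkeeping used above, where the "next allowed at" timestamp is advanced on every outcome (a surfaced conflict, an empty conflict list, or an error), reduces to a small gate. Names here are illustrative, not module exports:

```typescript
// Minimal sketch of the per-conversation cooldown gate (hypothetical names).
const cooldowns = new Map<string, number>();

function gate(key: string, now: number, cooldownMs: number): boolean {
  if ((cooldowns.get(key) ?? 0) > now) return false; // still cooling down
  cooldowns.set(key, now + cooldownMs); // advance regardless of outcome
  return true;
}

gate("main", 1_000, 10_000); // true  — first check runs
gate("main", 5_000, 10_000); // false — within the cooldown window
gate("main", 11_500, 10_000); // true  — cooldown expired
```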
export async function ingestAgentCortexMemoryCandidate(params: {
cfg?: OpenClawConfig;
agentId: string;
workspaceDir: string;
commandBody: string;
sessionId?: string;
channelId?: string;
provider?: string;
resolved?: ResolvedAgentTurnCortexContext | null;
}): Promise<AgentCortexMemoryCaptureResult> {
const conversationKey = buildAgentCortexConversationKey({
agentId: params.agentId,
sessionId: params.sessionId,
channelId: params.channelId,
});
if (!params.cfg) {
const result = { captured: false, score: 0, reason: "missing config" };
cortexMemoryCaptureStatuses.set(conversationKey, { ...result, updatedAt: Date.now() });
return result;
}
const resolved =
params.resolved ??
(await resolveAgentTurnCortexContext({
cfg: params.cfg,
agentId: params.agentId,
workspaceDir: params.workspaceDir,
sessionId: params.sessionId,
channelId: params.channelId,
}));
if (!resolved) {
const result = { captured: false, score: 0, reason: "cortex disabled" };
cortexMemoryCaptureStatuses.set(conversationKey, { ...result, updatedAt: Date.now() });
return result;
}
const decision = scoreAgentCortexMemoryCandidate(params.commandBody);
if (!decision.captured) {
cortexMemoryCaptureStatuses.set(conversationKey, { ...decision, updatedAt: Date.now() });
return decision;
}
try {
await ingestCortexMemoryFromText({
workspaceDir: params.workspaceDir,
graphPath: resolved.config.graphPath,
event: {
actor: "user",
text: params.commandBody,
agentId: params.agentId,
sessionId: params.sessionId,
channelId: params.channelId,
provider: params.provider,
},
status: resolved.status,
});
let syncedCodingContext = false;
let syncPlatforms: string[] | undefined;
const syncPolicy = resolveAutoSyncCortexCodingContext({
commandBody: params.commandBody,
provider: params.provider,
});
if (syncPolicy) {
const nextAllowedAt = cortexCodingSyncCooldowns.get(conversationKey) ?? 0;
const now = Date.now();
if (nextAllowedAt <= now) {
try {
const syncResult = await syncCortexCodingContext({
workspaceDir: params.workspaceDir,
graphPath: resolved.config.graphPath,
policy: syncPolicy.policy,
platforms: syncPolicy.platforms,
status: resolved.status,
});
syncedCodingContext = true;
syncPlatforms = syncResult.platforms;
cortexCodingSyncCooldowns.set(
conversationKey,
now + DEFAULT_CORTEX_CODING_SYNC_COOLDOWN_MS,
);
} catch {
syncedCodingContext = false;
}
}
}
const result = { ...decision, syncedCodingContext, syncPlatforms };
const updatedAt = Date.now();
cortexMemoryCaptureStatuses.set(conversationKey, { ...result, updatedAt });
await appendCortexCaptureHistory({
agentId: params.agentId,
sessionId: params.sessionId,
channelId: params.channelId,
captured: result.captured,
score: result.score,
reason: result.reason,
syncedCodingContext: result.syncedCodingContext,
syncPlatforms: result.syncPlatforms,
timestamp: updatedAt,
}).catch(() => {});
return result;
} catch (error) {
const result = {
captured: false,
score: decision.score,
reason: decision.reason,
error: error instanceof Error ? error.message : String(error),
};
const updatedAt = Date.now();
cortexMemoryCaptureStatuses.set(conversationKey, { ...result, updatedAt });
await appendCortexCaptureHistory({
agentId: params.agentId,
sessionId: params.sessionId,
channelId: params.channelId,
captured: result.captured,
score: result.score,
reason: result.reason,
error: result.error,
timestamp: updatedAt,
}).catch(() => {});
return result;
}
}
export async function getAgentCortexMemoryCaptureStatusWithHistory(params: {
agentId: string;
sessionId?: string;
channelId?: string;
}): Promise<AgentCortexMemoryCaptureStatus | null> {
const live = getAgentCortexMemoryCaptureStatus(params);
if (live) {
return live;
}
const fromHistory = await getLatestCortexCaptureHistoryEntry(params).catch(() => null);
if (!fromHistory) {
return null;
}
return {
captured: fromHistory.captured,
score: fromHistory.score,
reason: fromHistory.reason,
error: fromHistory.error,
syncedCodingContext: fromHistory.syncedCodingContext,
syncPlatforms: fromHistory.syncPlatforms,
updatedAt: fromHistory.timestamp,
};
}


@@ -113,6 +113,29 @@ describe("memory search config", () => {
expect(resolved?.fallback).toBe("none");
});
it("preserves output dimensionality overrides", () => {
const cfg = asConfig({
agents: {
defaults: {
memorySearch: {
outputDimensionality: 768,
},
},
list: [
{
id: "main",
default: true,
memorySearch: {
outputDimensionality: 512,
},
},
],
},
});
const resolved = resolveMemorySearchConfig(cfg, "main");
expect(resolved?.outputDimensionality).toBe(512);
});
it("merges defaults and overrides", () => {
const cfg = asConfig({
agents: {
@@ -330,6 +353,37 @@ describe("memory search config", () => {
});
});
it("preserves sync.sessions.postCompactionForce overrides", () => {
const cfg = asConfig({
agents: {
defaults: {
memorySearch: {
sync: {
sessions: {
postCompactionForce: true,
},
},
},
},
list: [
{
id: "main",
default: true,
memorySearch: {
sync: {
sessions: {
postCompactionForce: false,
},
},
},
},
],
},
});
const resolved = resolveMemorySearchConfig(cfg, "main");
expect(resolved?.sync.sessions.postCompactionForce).toBe(false);
});
it("merges remote defaults with agent overrides", () => {
const cfg = configWithRemoteDefaults({
baseUrl: "https://default.example/v1",


@@ -16,7 +16,6 @@ export type ResolvedMemorySearchConfig = {
enabled: boolean;
sources: Array<"memory" | "sessions">;
extraPaths: string[];
multimodal: MemoryMultimodalSettings;
provider: "openai" | "local" | "gemini" | "voyage" | "mistral" | "ollama" | "auto";
remote?: {
baseUrl?: string;
@@ -36,6 +35,7 @@ export type ResolvedMemorySearchConfig = {
fallback: "openai" | "gemini" | "local" | "voyage" | "mistral" | "ollama" | "none";
model: string;
outputDimensionality?: number;
multimodal: MemoryMultimodalSettings;
local: {
modelPath?: string;
modelCacheDir?: string;
@@ -202,21 +202,25 @@ function mergeConfig(
? DEFAULT_OLLAMA_MODEL
: undefined;
const model = overrides?.model ?? defaults?.model ?? modelDefault ?? "";
const outputDimensionality = overrides?.outputDimensionality ?? defaults?.outputDimensionality;
const rawOutputDimensionality = overrides?.outputDimensionality ?? defaults?.outputDimensionality;
const outputDimensionality =
typeof rawOutputDimensionality === "number" && Number.isFinite(rawOutputDimensionality)
? clampInt(rawOutputDimensionality, 1, Number.MAX_SAFE_INTEGER)
: undefined;
const local = {
modelPath: overrides?.local?.modelPath ?? defaults?.local?.modelPath,
modelCacheDir: overrides?.local?.modelCacheDir ?? defaults?.local?.modelCacheDir,
};
const sources = normalizeSources(overrides?.sources ?? defaults?.sources, sessionMemory);
const rawPaths = [...(defaults?.extraPaths ?? []), ...(overrides?.extraPaths ?? [])]
.map((value) => value.trim())
.filter(Boolean);
const extraPaths = Array.from(new Set(rawPaths));
const multimodal = normalizeMemoryMultimodalSettings({
enabled: overrides?.multimodal?.enabled ?? defaults?.multimodal?.enabled,
modalities: overrides?.multimodal?.modalities ?? defaults?.multimodal?.modalities,
maxFileBytes: overrides?.multimodal?.maxFileBytes ?? defaults?.multimodal?.maxFileBytes,
});
const sources = normalizeSources(overrides?.sources ?? defaults?.sources, sessionMemory);
const rawPaths = [...(defaults?.extraPaths ?? []), ...(overrides?.extraPaths ?? [])]
.map((value) => value.trim())
.filter(Boolean);
const extraPaths = Array.from(new Set(rawPaths));
const vector = {
enabled: overrides?.store?.vector?.enabled ?? defaults?.store?.vector?.enabled ?? true,
extensionPath:
@@ -320,12 +324,10 @@ function mergeConfig(
);
const deltaBytes = clampInt(sync.sessions.deltaBytes, 0, Number.MAX_SAFE_INTEGER);
const deltaMessages = clampInt(sync.sessions.deltaMessages, 0, Number.MAX_SAFE_INTEGER);
const postCompactionForce = sync.sessions.postCompactionForce;
return {
enabled,
sources,
extraPaths,
multimodal,
provider,
remote,
experimental: {
@@ -334,6 +336,7 @@
fallback,
model,
outputDimensionality,
multimodal,
local,
store,
chunking: { tokens: Math.max(1, chunking.tokens), overlap },
@ -342,7 +345,7 @@ function mergeConfig(
sessions: {
deltaBytes,
deltaMessages,
postCompactionForce,
postCompactionForce: sync.sessions.postCompactionForce,
},
},
query: {


@@ -363,7 +363,7 @@ describe("applyExtraParamsToAgent", () => {
const payloads: Record<string, unknown>[] = [];
const baseStreamFn: StreamFn = (_model, _context, options) => {
const payload: Record<string, unknown> = { model: "deepseek/deepseek-r1" };
options?.onPayload?.(payload, _model);
options?.onPayload?.(payload, model);
payloads.push(payload);
return {} as ReturnType<StreamFn>;
};
@@ -395,7 +395,7 @@ describe("applyExtraParamsToAgent", () => {
const payloads: Record<string, unknown>[] = [];
const baseStreamFn: StreamFn = (_model, _context, options) => {
const payload: Record<string, unknown> = {};
options?.onPayload?.(payload, _model);
options?.onPayload?.(payload, model);
payloads.push(payload);
return {} as ReturnType<StreamFn>;
};
@@ -419,7 +419,7 @@ describe("applyExtraParamsToAgent", () => {
const payloads: Record<string, unknown>[] = [];
const baseStreamFn: StreamFn = (_model, _context, options) => {
const payload: Record<string, unknown> = { reasoning_effort: "high" };
options?.onPayload?.(payload, _model);
options?.onPayload?.(payload, model);
payloads.push(payload);
return {} as ReturnType<StreamFn>;
};
@@ -444,7 +444,7 @@ describe("applyExtraParamsToAgent", () => {
const payloads: Record<string, unknown>[] = [];
const baseStreamFn: StreamFn = (_model, _context, options) => {
const payload: Record<string, unknown> = { reasoning: { max_tokens: 256 } };
options?.onPayload?.(payload, _model);
options?.onPayload?.(payload, model);
payloads.push(payload);
return {} as ReturnType<StreamFn>;
};
@@ -468,7 +468,7 @@ describe("applyExtraParamsToAgent", () => {
const payloads: Record<string, unknown>[] = [];
const baseStreamFn: StreamFn = (_model, _context, options) => {
const payload: Record<string, unknown> = { reasoning_effort: "medium" };
options?.onPayload?.(payload, _model);
options?.onPayload?.(payload, model);
payloads.push(payload);
return {} as ReturnType<StreamFn>;
};
@@ -675,7 +675,7 @@ describe("applyExtraParamsToAgent", () => {
const payloads: Record<string, unknown>[] = [];
const baseStreamFn: StreamFn = (_model, _context, options) => {
const payload: Record<string, unknown> = { thinking: "off" };
options?.onPayload?.(payload, _model);
options?.onPayload?.(payload, model);
payloads.push(payload);
return {} as ReturnType<StreamFn>;
};
@@ -706,7 +706,7 @@ describe("applyExtraParamsToAgent", () => {
const payloads: Record<string, unknown>[] = [];
const baseStreamFn: StreamFn = (_model, _context, options) => {
const payload: Record<string, unknown> = { thinking: "off" };
options?.onPayload?.(payload, _model);
options?.onPayload?.(payload, model);
payloads.push(payload);
return {} as ReturnType<StreamFn>;
};
@@ -737,7 +737,7 @@ describe("applyExtraParamsToAgent", () => {
const payloads: Record<string, unknown>[] = [];
const baseStreamFn: StreamFn = (_model, _context, options) => {
const payload: Record<string, unknown> = {};
options?.onPayload?.(payload, _model);
options?.onPayload?.(payload, model);
payloads.push(payload);
return {} as ReturnType<StreamFn>;
};
@@ -761,7 +761,7 @@ describe("applyExtraParamsToAgent", () => {
const payloads: Record<string, unknown>[] = [];
const baseStreamFn: StreamFn = (_model, _context, options) => {
const payload: Record<string, unknown> = { tool_choice: "required" };
options?.onPayload?.(payload, _model);
options?.onPayload?.(payload, model);
payloads.push(payload);
return {} as ReturnType<StreamFn>;
};
@@ -788,7 +788,7 @@ describe("applyExtraParamsToAgent", () => {
const payload: Record<string, unknown> = {
tool_choice: { type: "tool", name: "read" },
};
options?.onPayload?.(payload, _model);
options?.onPayload?.(payload, model);
payloads.push(payload);
return {} as ReturnType<StreamFn>;
};
@@ -813,7 +813,7 @@ describe("applyExtraParamsToAgent", () => {
const payloads: Record<string, unknown>[] = [];
const baseStreamFn: StreamFn = (_model, _context, options) => {
const payload: Record<string, unknown> = {};
options?.onPayload?.(payload, _model);
options?.onPayload?.(payload, model);
payloads.push(payload);
return {} as ReturnType<StreamFn>;
};
@@ -850,7 +850,7 @@ describe("applyExtraParamsToAgent", () => {
const payloads: Record<string, unknown>[] = [];
const baseStreamFn: StreamFn = (_model, _context, options) => {
const payload: Record<string, unknown> = { tool_choice: "required" };
options?.onPayload?.(payload, _model);
options?.onPayload?.(payload, model);
payloads.push(payload);
return {} as ReturnType<StreamFn>;
};
@@ -875,7 +875,7 @@ describe("applyExtraParamsToAgent", () => {
const payloads: Record<string, unknown>[] = [];
const baseStreamFn: StreamFn = (_model, _context, options) => {
const payload: Record<string, unknown> = {};
options?.onPayload?.(payload, _model);
options?.onPayload?.(payload, model);
payloads.push(payload);
return {} as ReturnType<StreamFn>;
};
@@ -901,7 +901,7 @@ describe("applyExtraParamsToAgent", () => {
const payload: Record<string, unknown> = {
tool_choice: { type: "function", function: { name: "read" } },
};
options?.onPayload?.(payload, _model);
options?.onPayload?.(payload, model);
payloads.push(payload);
return {} as ReturnType<StreamFn>;
};
@@ -942,7 +942,7 @@ describe("applyExtraParamsToAgent", () => {
],
tool_choice: { type: "tool", name: "read" },
};
options?.onPayload?.(payload, _model);
options?.onPayload?.(payload, model);
payloads.push(payload);
return {} as ReturnType<StreamFn>;
};
@@ -986,7 +986,7 @@ describe("applyExtraParamsToAgent", () => {
},
],
};
options?.onPayload?.(payload, _model);
options?.onPayload?.(payload, model);
payloads.push(payload);
return {} as ReturnType<StreamFn>;
};
@@ -1025,7 +1025,7 @@ describe("applyExtraParamsToAgent", () => {
},
],
};
options?.onPayload?.(payload, _model);
options?.onPayload?.(payload, model);
payloads.push(payload);
return {} as ReturnType<StreamFn>;
};
@@ -1147,7 +1147,7 @@ describe("applyExtraParamsToAgent", () => {
},
},
};
options?.onPayload?.(payload, _model);
options?.onPayload?.(payload, model);
payloads.push(payload);
return {} as ReturnType<StreamFn>;
};
@@ -1194,7 +1194,7 @@ describe("applyExtraParamsToAgent", () => {
},
},
};
options?.onPayload?.(payload, _model);
options?.onPayload?.(payload, model);
payloads.push(payload);
return {} as ReturnType<StreamFn>;
};
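Each of the test cases above uses the same pattern: a stub stream function that invokes `onPayload` so the test can observe the payload after extra params are applied. A minimal sketch of that pattern (simplified types; `makeCapturingStream` and `StubOptions` are illustrative names, not the OpenClaw API):

```typescript
// Simplified shape of the onPayload hook used in the tests above.
type StubOptions = {
  onPayload?: (payload: Record<string, unknown>, model: string) => void;
};

// Build a stub stream function that records every payload after the
// onPayload hook has had a chance to mutate it.
function makeCapturingStream(initial: Record<string, unknown>) {
  const payloads: Record<string, unknown>[] = [];
  const streamFn = (model: string, _context: unknown, options?: StubOptions) => {
    const payload: Record<string, unknown> = { ...initial };
    options?.onPayload?.(payload, model); // extra params get merged here
    payloads.push(payload);
    return {};
  };
  return { streamFn, payloads };
}
```

The stub lets each test assert on the final merged payload without touching a real provider stream.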

View File

@@ -50,6 +50,7 @@ import {
listChannelSupportedActions,
resolveChannelMessageToolHints,
} from "../../channel-tools.js";
import { resolveAgentCortexPromptContext } from "../../cortex.js";
import { ensureCustomApiRegistered } from "../../custom-api-registry.js";
import { DEFAULT_CONTEXT_TOKENS } from "../../defaults.js";
import { resolveOpenClawDocsPath } from "../../docs-path.js";
@@ -2698,12 +2699,23 @@ export async function runEmbeddedAttempt(
legacyBeforeAgentStartResult: params.legacyBeforeAgentStartResult,
});
{
const cortexPromptContext = await resolveAgentCortexPromptContext({
cfg: params.config,
agentId: sessionAgentId,
workspaceDir: params.workspaceDir,
promptMode,
sessionId: params.sessionId,
channelId: params.messageChannel ?? params.messageProvider ?? undefined,
});
if (hookResult?.prependContext) {
effectivePrompt = `${hookResult.prependContext}\n\n${effectivePrompt}`;
log.debug(
`hooks: prepended context to prompt (${hookResult.prependContext.length} chars)`,
);
}
if (cortexPromptContext.error) {
log.warn(`cortex prompt context unavailable: ${cortexPromptContext.error}`);
}
const legacySystemPrompt =
typeof hookResult?.systemPrompt === "string" ? hookResult.systemPrompt.trim() : "";
if (legacySystemPrompt) {
@@ -2713,16 +2725,22 @@ export async function runEmbeddedAttempt(
}
const prependedOrAppendedSystemPrompt = composeSystemPromptWithHookContext({
baseSystemPrompt: systemPromptText,
prependSystemContext: hookResult?.prependSystemContext,
prependSystemContext: joinPresentTextSegments([
cortexPromptContext.context,
hookResult?.prependSystemContext,
]),
appendSystemContext: hookResult?.appendSystemContext,
});
if (prependedOrAppendedSystemPrompt) {
const prependSystemLen = hookResult?.prependSystemContext?.trim().length ?? 0;
const prependSystemLen = joinPresentTextSegments([
cortexPromptContext.context,
hookResult?.prependSystemContext,
])?.trim().length;
const appendSystemLen = hookResult?.appendSystemContext?.trim().length ?? 0;
applySystemPromptOverrideToSession(activeSession, prependedOrAppendedSystemPrompt);
systemPromptText = prependedOrAppendedSystemPrompt;
log.debug(
`hooks: applied prependSystemContext/appendSystemContext (${prependSystemLen}+${appendSystemLen} chars)`,
`hooks: applied prependSystemContext/appendSystemContext (${prependSystemLen ?? 0}+${appendSystemLen} chars)`,
);
}
}
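The hunk above folds the Cortex prompt context and any hook-provided system context through `joinPresentTextSegments` before prepending. The real helper lives in the OpenClaw source; a plausible sketch of its contract, inferred from how it is called here (blank or absent segments are dropped, and an all-empty input must yield `undefined` so the `?.trim().length` call short-circuits):

```typescript
// Hypothetical sketch of a joinPresentTextSegments-style helper: keep only
// segments that are non-empty after trimming, join them with a blank line,
// and return undefined when nothing survives so callers can skip the prepend.
function joinPresentTextSegments(segments: Array<string | undefined>): string | undefined {
  const present = segments
    .map((segment) => segment?.trim())
    .filter((segment): segment is string => Boolean(segment));
  return present.length > 0 ? present.join("\n\n") : undefined;
}
```

Under this contract, Cortex context always lands ahead of hook context, and a missing Cortex context degrades to the old hook-only behavior.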

View File

@@ -108,7 +108,7 @@ describe("web_fetch Cloudflare Markdown for Agents", () => {
},
},
},
},
} as unknown as typeof baseToolConfig.config,
sandboxed: false,
runtimeFirecrawl: {
active: false,

View File

@@ -1,5 +1,5 @@
import { listChannelPlugins } from "../channels/plugins/index.js";
import { getActivePluginRegistry } from "../plugins/runtime.js";
import { getActivePluginRegistryVersion } from "../plugins/runtime.js";
import { COMMAND_ARG_FORMATTERS } from "./commands-args.js";
import type {
ChatCommandDefinition,
@@ -124,9 +124,9 @@ function assertCommandRegistry(commands: ChatCommandDefinition[]): void {
}
let cachedCommands: ChatCommandDefinition[] | null = null;
let cachedRegistry: ReturnType<typeof getActivePluginRegistry> | null = null;
let cachedNativeCommandSurfaces: Set<string> | null = null;
let cachedNativeRegistry: ReturnType<typeof getActivePluginRegistry> | null = null;
let cachedCommandsPluginRegistryVersion: number | null = null;
let cachedNativeCommandSurfacesPluginRegistryVersion: number | null = null;
function buildChatCommands(): ChatCommandDefinition[] {
const commands: ChatCommandDefinition[] = [
@@ -196,6 +196,22 @@ function buildChatCommands(): ChatCommandDefinition[] {
acceptsArgs: true,
category: "status",
}),
defineChatCommand({
key: "fast",
description: "Show or change fast mode for this session.",
textAlias: "/fast",
acceptsArgs: true,
scope: "text",
category: "status",
}),
defineChatCommand({
key: "cortex",
description: "Inspect or override Cortex prompt mode for this conversation.",
textAlias: "/cortex",
acceptsArgs: true,
scope: "text",
category: "status",
}),
defineChatCommand({
key: "btw",
nativeName: "btw",
@@ -655,22 +671,6 @@ function buildChatCommands(): ChatCommandDefinition[] {
],
argsMenu: "auto",
}),
defineChatCommand({
key: "fast",
nativeName: "fast",
description: "Toggle fast mode.",
textAlias: "/fast",
category: "options",
args: [
{
name: "mode",
description: "status, on, or off",
type: "string",
choices: ["status", "on", "off"],
},
],
argsMenu: "auto",
}),
defineChatCommand({
key: "reasoning",
nativeName: "reasoning",
@@ -825,20 +825,24 @@ function buildChatCommands(): ChatCommandDefinition[] {
}
export function getChatCommands(): ChatCommandDefinition[] {
const registry = getActivePluginRegistry();
if (cachedCommands && registry === cachedRegistry) {
const registryVersion = getActivePluginRegistryVersion();
if (cachedCommands && cachedCommandsPluginRegistryVersion === registryVersion) {
return cachedCommands;
}
const commands = buildChatCommands();
cachedCommands = commands;
cachedRegistry = registry;
cachedNativeCommandSurfaces = null;
cachedCommandsPluginRegistryVersion = registryVersion;
cachedNativeCommandSurfacesPluginRegistryVersion = null;
return commands;
}
export function getNativeCommandSurfaces(): Set<string> {
const registry = getActivePluginRegistry();
if (cachedNativeCommandSurfaces && registry === cachedNativeRegistry) {
const registryVersion = getActivePluginRegistryVersion();
if (
cachedNativeCommandSurfaces &&
cachedNativeCommandSurfacesPluginRegistryVersion === registryVersion
) {
return cachedNativeCommandSurfaces;
}
cachedNativeCommandSurfaces = new Set(
@@ -846,6 +850,6 @@ export function getNativeCommandSurfaces(): Set<string> {
.filter((plugin) => plugin.capabilities.nativeCommands)
.map((plugin) => plugin.id),
);
cachedNativeRegistry = registry;
cachedNativeCommandSurfacesPluginRegistryVersion = registryVersion;
return cachedNativeCommandSurfaces;
}

View File

@@ -1,6 +1,7 @@
import { afterEach, beforeEach, describe, expect, it } from "vitest";
import { setActivePluginRegistry } from "../plugins/runtime.js";
import { createTestRegistry } from "../test-utils/channel-plugins.js";
import { getNativeCommandSurfaces } from "./commands-registry.data.js";
import {
buildCommandText,
buildCommandTextFromArgs,
@@ -203,15 +204,83 @@ describe("commands registry", () => {
]);
});
it("registers fast mode as a first-class options command", () => {
it("registers /fast as a text command", () => {
const fast = listChatCommands().find((command) => command.key === "fast");
expect(fast).toMatchObject({
nativeName: "fast",
textAliases: ["/fast"],
category: "options",
});
const modeArg = fast?.args?.find((arg) => arg.name === "mode");
expect(modeArg?.choices).toEqual(["status", "on", "off"]);
expect(fast).toBeTruthy();
expect(fast?.scope).toBe("text");
expect(fast?.textAliases).toContain("/fast");
});
it("invalidates cached command lists after plugin registry updates", () => {
const before = listChatCommands();
expect(before.find((command) => command.key === "dock:msteams")).toBeFalsy();
setActivePluginRegistry(
createTestRegistry([
{
pluginId: "test-plugin",
source: "test",
plugin: {
id: "msteams",
meta: {
id: "msteams",
label: "Microsoft Teams",
selectionLabel: "Microsoft Teams",
docsPath: "/channels/msteams",
blurb: "test stub.",
},
capabilities: {
chatTypes: ["direct"],
nativeCommands: true,
},
config: {
listAccountIds: () => ["default"],
resolveAccount: () => ({}),
},
},
},
]),
);
const after = listChatCommands();
expect(after.find((command) => command.key === "dock:msteams")).toBeTruthy();
});
it("does not let native-surface cache refresh mask stale chat command cache", () => {
const before = listChatCommands();
expect(before.find((command) => command.key === "dock:msteams")).toBeFalsy();
setActivePluginRegistry(
createTestRegistry([
{
pluginId: "test-plugin",
source: "test",
plugin: {
id: "msteams",
meta: {
id: "msteams",
label: "Microsoft Teams",
selectionLabel: "Microsoft Teams",
docsPath: "/channels/msteams",
blurb: "test stub.",
},
capabilities: {
chatTypes: ["direct"],
nativeCommands: true,
},
config: {
listAccountIds: () => ["default"],
resolveAccount: () => ({}),
},
},
},
]),
);
expect(getNativeCommandSurfaces().has("msteams")).toBe(true);
const after = listChatCommands();
expect(after.find((command) => command.key === "dock:msteams")).toBeTruthy();
});
it("detects known text commands", () => {

View File

@@ -3,6 +3,7 @@ import fs from "node:fs/promises";
import os from "node:os";
import path from "node:path";
import { afterEach, beforeEach, describe, expect, it, vi } from "vitest";
import { BILLING_ERROR_USER_MESSAGE } from "../../agents/pi-embedded-helpers.js";
import type { SessionEntry } from "../../config/sessions.js";
import { loadSessionStore, saveSessionStore } from "../../config/sessions.js";
import { onAgentEvent } from "../../infra/agent-events.js";
@@ -68,6 +69,10 @@ vi.mock("./queue.js", async () => {
});
const loadCronStoreMock = vi.fn();
const resolveAgentCortexModeStatusMock = vi.hoisted(() => vi.fn());
const resolveAgentCortexConflictNoticeMock = vi.hoisted(() => vi.fn());
const ingestAgentCortexMemoryCandidateMock = vi.hoisted(() => vi.fn());
const resolveCortexChannelTargetMock = vi.hoisted(() => vi.fn());
vi.mock("../../cron/store.js", async () => {
const actual = await vi.importActual<typeof import("../../cron/store.js")>("../../cron/store.js");
return {
@@ -76,6 +81,18 @@ vi.mock("../../cron/store.js", async () => {
};
});
vi.mock("../../agents/cortex.js", async () => {
const actual =
await vi.importActual<typeof import("../../agents/cortex.js")>("../../agents/cortex.js");
return {
...actual,
ingestAgentCortexMemoryCandidate: ingestAgentCortexMemoryCandidateMock,
resolveAgentCortexModeStatus: resolveAgentCortexModeStatusMock,
resolveAgentCortexConflictNotice: resolveAgentCortexConflictNoticeMock,
resolveCortexChannelTarget: resolveCortexChannelTargetMock,
};
});
import { runReplyAgent } from "./agent-runner.js";
type RunWithModelFallbackParams = {
@@ -90,6 +107,21 @@ beforeEach(() => {
runWithModelFallbackMock.mockClear();
runtimeErrorMock.mockClear();
loadCronStoreMock.mockClear();
resolveAgentCortexModeStatusMock.mockReset();
resolveAgentCortexConflictNoticeMock.mockReset();
ingestAgentCortexMemoryCandidateMock.mockReset();
resolveCortexChannelTargetMock.mockReset();
resolveAgentCortexModeStatusMock.mockResolvedValue(null);
resolveAgentCortexConflictNoticeMock.mockResolvedValue(null);
ingestAgentCortexMemoryCandidateMock.mockResolvedValue({
captured: false,
score: 0,
reason: "below memory threshold",
});
resolveCortexChannelTargetMock.mockImplementation(
(params: { originatingTo?: string; channel?: string }) =>
params.originatingTo ?? params.channel ?? "unknown",
);
// Default: no cron jobs in store.
loadCronStoreMock.mockResolvedValue({ version: 1, jobs: [] });
resetSystemEventsForTest();
@@ -193,6 +225,20 @@ describe("runReplyAgent onAgentRunStart", () => {
});
});
it("returns billing message for mixed-signal error", async () => {
runEmbeddedPiAgentMock.mockRejectedValueOnce(
new Error(
"HTTP 402 Payment Required: insufficient credits. Request size exceeds model context window.",
),
);
const result = await createRun();
const payload = Array.isArray(result) ? result[0] : result;
expect(payload?.text).toBe(BILLING_ERROR_USER_MESSAGE);
expect(payload?.text).not.toContain("Context overflow");
});
it("emits start callback when cli runner starts", async () => {
runCliAgentMock.mockResolvedValueOnce({
payloads: [{ text: "ok" }],
@@ -215,6 +261,62 @@ describe("runReplyAgent onAgentRunStart", () => {
expect(onAgentRunStart).toHaveBeenCalledWith("run-started");
expect(result).toMatchObject({ text: "ok" });
});
it("prepends a Cortex conflict notice when unresolved conflicts exist", async () => {
runEmbeddedPiAgentMock.mockResolvedValueOnce({
payloads: [{ text: "ok" }],
meta: {
agentMeta: {
provider: "anthropic",
model: "claude",
},
},
});
resolveAgentCortexConflictNoticeMock.mockResolvedValueOnce({
conflictId: "conf_1",
severity: 0.91,
text: "⚠️ Cortex conflict detected: Hiring status changed\nResolve with: /cortex resolve conf_1 <accept-new|keep-old|merge|ignore>",
});
const result = await createRun();
expect(result).toEqual([
expect.objectContaining({
text: expect.stringContaining("⚠️ Cortex conflict detected"),
}),
expect.objectContaining({ text: "ok" }),
]);
});
it("captures high-signal user text into Cortex before checking conflicts", async () => {
runEmbeddedPiAgentMock.mockResolvedValueOnce({
payloads: [{ text: "ok" }],
meta: {
agentMeta: {
provider: "anthropic",
model: "claude",
},
},
});
ingestAgentCortexMemoryCandidateMock.mockResolvedValueOnce({
captured: true,
score: 0.7,
reason: "high-signal memory candidate",
});
await createRun();
expect(ingestAgentCortexMemoryCandidateMock).toHaveBeenCalledWith({
cfg: {},
agentId: "main",
workspaceDir: "/tmp",
commandBody: "hello",
sessionId: "session",
channelId: "session:1",
provider: "webchat",
});
expect(resolveAgentCortexConflictNoticeMock).toHaveBeenCalled();
});
});
describe("runReplyAgent authProfileId fallback scoping", () => {
@@ -1866,72 +1968,3 @@ describe("runReplyAgent transient HTTP retry", () => {
expect(payload?.text).toContain("Recovered response");
});
});
describe("runReplyAgent billing error classification", () => {
// Regression guard for the runner-level catch block in runAgentTurnWithFallback.
// Billing errors from providers like OpenRouter can contain token/size wording that
// matches context overflow heuristics. This test verifies the final user-visible
// message is the billing-specific one, not the "Context overflow" fallback.
it("returns billing message for mixed-signal error (billing text + overflow patterns)", async () => {
runEmbeddedPiAgentMock.mockRejectedValueOnce(
new Error("402 Payment Required: request token limit exceeded for this billing plan"),
);
const typing = createMockTypingController();
const sessionCtx = {
Provider: "telegram",
MessageSid: "msg",
} as unknown as TemplateContext;
const resolvedQueue = { mode: "interrupt" } as unknown as QueueSettings;
const followupRun = {
prompt: "hello",
summaryLine: "hello",
enqueuedAt: Date.now(),
run: {
sessionId: "session",
sessionKey: "main",
messageProvider: "telegram",
sessionFile: "/tmp/session.jsonl",
workspaceDir: "/tmp",
config: {},
skillsSnapshot: {},
provider: "anthropic",
model: "claude",
thinkLevel: "low",
verboseLevel: "off",
elevatedLevel: "off",
bashElevated: {
enabled: false,
allowed: false,
defaultLevel: "off",
},
timeoutMs: 1_000,
blockReplyBreak: "message_end",
},
} as unknown as FollowupRun;
const result = await runReplyAgent({
commandBody: "hello",
followupRun,
queueKey: "main",
resolvedQueue,
shouldSteer: false,
shouldFollowup: false,
isActive: false,
isStreaming: false,
typing,
sessionCtx,
defaultModel: "anthropic/claude",
resolvedVerboseLevel: "off",
isNewSession: false,
blockStreamingEnabled: false,
resolvedBlockStreamingBreak: "message_end",
shouldInjectGroupIntro: false,
typingMode: "instant",
});
const payload = Array.isArray(result) ? result[0] : result;
expect(payload?.text).toContain("billing error");
expect(payload?.text).not.toContain("Context overflow");
});
});

View File

@@ -21,14 +21,13 @@ type AgentRunParams = {
onAssistantMessageStart?: () => Promise<void> | void;
onReasoningStream?: (payload: { text?: string }) => Promise<void> | void;
onBlockReply?: (payload: { text?: string; mediaUrls?: string[] }) => Promise<void> | void;
onToolResult?: (payload: ReplyPayload) => Promise<void> | void;
onToolResult?: (payload: { text?: string; mediaUrls?: string[] }) => Promise<void> | void;
onAgentEvent?: (evt: { stream: string; data: Record<string, unknown> }) => void;
};
type EmbeddedRunParams = {
prompt?: string;
extraSystemPrompt?: string;
memoryFlushWritePath?: string;
bootstrapPromptWarningSignaturesSeen?: string[];
bootstrapPromptWarningSignature?: string;
onAgentEvent?: (evt: { stream?: string; data?: { phase?: string; willRetry?: boolean } }) => void;
@@ -37,6 +36,8 @@
const state = vi.hoisted(() => ({
runEmbeddedPiAgentMock: vi.fn(),
runCliAgentMock: vi.fn(),
resolveAgentCortexModeStatusMock: vi.fn(),
resolveCortexChannelTargetMock: vi.fn(),
}));
let modelFallbackModule: typeof import("../../agents/model-fallback.js");
@@ -79,6 +80,17 @@ vi.mock("../../agents/cli-runner.js", () => ({
runCliAgent: (params: unknown) => state.runCliAgentMock(params),
}));
vi.mock("../../agents/cortex.js", async () => {
const actual =
await vi.importActual<typeof import("../../agents/cortex.js")>("../../agents/cortex.js");
return {
...actual,
resolveAgentCortexModeStatus: (params: unknown) =>
state.resolveAgentCortexModeStatusMock(params),
resolveCortexChannelTarget: (params: unknown) => state.resolveCortexChannelTargetMock(params),
};
});
vi.mock("./queue.js", () => ({
enqueueFollowupRun: vi.fn(),
scheduleFollowupDrain: vi.fn(),
@@ -94,6 +106,13 @@ beforeAll(async () => {
beforeEach(() => {
state.runEmbeddedPiAgentMock.mockClear();
state.runCliAgentMock.mockClear();
state.resolveAgentCortexModeStatusMock.mockReset();
state.resolveCortexChannelTargetMock.mockReset();
state.resolveAgentCortexModeStatusMock.mockResolvedValue(null);
state.resolveCortexChannelTargetMock.mockImplementation(
(params: { originatingTo?: string; channel?: string }) =>
params.originatingTo ?? params.channel ?? "unknown",
);
vi.mocked(enqueueFollowupRun).mockClear();
vi.mocked(scheduleFollowupDrain).mockClear();
vi.stubEnv("OPENCLAW_TEST_FAST", "1");
@@ -595,40 +614,6 @@ describe("runReplyAgent typing (heartbeat)", () => {
}
});
it("preserves channelData on forwarded tool results", async () => {
const onToolResult = vi.fn();
state.runEmbeddedPiAgentMock.mockImplementationOnce(async (params: AgentRunParams) => {
await params.onToolResult?.({
text: "Approval required.\n\n```txt\n/approve 117ba06d allow-once\n```",
channelData: {
execApproval: {
approvalId: "117ba06d-1111-2222-3333-444444444444",
approvalSlug: "117ba06d",
allowedDecisions: ["allow-once", "allow-always", "deny"],
},
},
});
return { payloads: [{ text: "final" }], meta: {} };
});
const { run } = createMinimalRun({
typingMode: "message",
opts: { onToolResult },
});
await run();
expect(onToolResult).toHaveBeenCalledWith({
text: "Approval required.\n\n```txt\n/approve 117ba06d allow-once\n```",
channelData: {
execApproval: {
approvalId: "117ba06d-1111-2222-3333-444444444444",
approvalSlug: "117ba06d",
allowedDecisions: ["allow-once", "allow-always", "deny"],
},
},
});
});
it("retries transient HTTP failures once with timer-driven backoff", async () => {
vi.useFakeTimers();
let calls = 0;
@@ -737,6 +722,40 @@ describe("runReplyAgent typing (heartbeat)", () => {
});
});
it("announces active Cortex mode only when verbose mode is enabled", async () => {
const cases = [
{ name: "verbose on", verbose: "on" as const, expectNotice: true },
{ name: "verbose off", verbose: "off" as const, expectNotice: false },
] as const;
for (const testCase of cases) {
state.resolveAgentCortexModeStatusMock.mockResolvedValueOnce({
enabled: true,
mode: "minimal",
source: "session-override",
maxChars: 1500,
});
state.runEmbeddedPiAgentMock.mockResolvedValueOnce({
payloads: [{ text: "final" }],
meta: {},
});
const { run } = createMinimalRun({
resolvedVerboseLevel: testCase.verbose,
});
const res = await run();
const payload = Array.isArray(res)
? (res[0] as { text?: string })
: (res as { text?: string });
if (testCase.expectNotice) {
expect(payload.text, testCase.name).toContain("Cortex: minimal (session override)");
continue;
}
expect(payload.text, testCase.name).not.toContain("Cortex:");
}
});
it("announces model fallback only when verbose mode is enabled", async () => {
const cases = [
{ name: "verbose on", verbose: "on" as const, expectNotice: true },
@@ -1255,79 +1274,6 @@ describe("runReplyAgent typing (heartbeat)", () => {
});
});
it("clears stale runtime model fields when resetSession retries after compaction failure", async () => {
await withTempStateDir(async (stateDir) => {
const sessionId = "session-stale-model";
const storePath = path.join(stateDir, "sessions", "sessions.json");
const transcriptPath = sessions.resolveSessionTranscriptPath(sessionId);
const sessionEntry: SessionEntry = {
sessionId,
updatedAt: Date.now(),
sessionFile: transcriptPath,
modelProvider: "qwencode",
model: "qwen3.5-plus-2026-02-15",
contextTokens: 123456,
systemPromptReport: {
source: "run",
generatedAt: Date.now(),
sessionId,
sessionKey: "main",
provider: "qwencode",
model: "qwen3.5-plus-2026-02-15",
workspaceDir: stateDir,
bootstrapMaxChars: 1000,
bootstrapTotalMaxChars: 2000,
systemPrompt: {
chars: 10,
projectContextChars: 5,
nonProjectContextChars: 5,
},
injectedWorkspaceFiles: [],
skills: {
promptChars: 0,
entries: [],
},
tools: {
listChars: 0,
schemaChars: 0,
entries: [],
},
},
};
const sessionStore = { main: sessionEntry };
await fs.mkdir(path.dirname(storePath), { recursive: true });
await fs.writeFile(storePath, JSON.stringify(sessionStore), "utf-8");
await fs.mkdir(path.dirname(transcriptPath), { recursive: true });
await fs.writeFile(transcriptPath, "ok", "utf-8");
state.runEmbeddedPiAgentMock.mockImplementationOnce(async () => {
throw new Error(
'Context overflow: Summarization failed: 400 {"message":"prompt is too long"}',
);
});
const { run } = createMinimalRun({
sessionEntry,
sessionStore,
sessionKey: "main",
storePath,
});
await run();
expect(sessionStore.main.modelProvider).toBeUndefined();
expect(sessionStore.main.model).toBeUndefined();
expect(sessionStore.main.contextTokens).toBeUndefined();
expect(sessionStore.main.systemPromptReport).toBeUndefined();
const persisted = JSON.parse(await fs.readFile(storePath, "utf-8"));
expect(persisted.main.modelProvider).toBeUndefined();
expect(persisted.main.model).toBeUndefined();
expect(persisted.main.contextTokens).toBeUndefined();
expect(persisted.main.systemPromptReport).toBeUndefined();
});
});
it("surfaces overflow fallback when embedded run returns empty payloads", async () => {
state.runEmbeddedPiAgentMock.mockImplementationOnce(async () => ({
payloads: [],
@@ -1685,14 +1631,9 @@ describe("runReplyAgent memory flush", () => {
const flushCall = calls[0];
expect(flushCall?.prompt).toContain("Write notes.");
expect(flushCall?.prompt).toContain("NO_REPLY");
expect(flushCall?.prompt).toMatch(/memory\/\d{4}-\d{2}-\d{2}\.md/);
expect(flushCall?.prompt).toContain("MEMORY.md");
expect(flushCall?.memoryFlushWritePath).toMatch(/^memory\/\d{4}-\d{2}-\d{2}\.md$/);
expect(flushCall?.extraSystemPrompt).toContain("extra system");
expect(flushCall?.extraSystemPrompt).toContain("Flush memory now.");
expect(flushCall?.extraSystemPrompt).toContain("NO_REPLY");
expect(flushCall?.extraSystemPrompt).toContain("memory/YYYY-MM-DD.md");
expect(flushCall?.extraSystemPrompt).toContain("MEMORY.md");
expect(calls[1]?.prompt).toBe("hello");
});
});
@@ -1780,17 +1721,9 @@ describe("runReplyAgent memory flush", () => {
await seedSessionStore({ storePath, sessionKey, entry: sessionEntry });
const calls: Array<{
prompt?: string;
extraSystemPrompt?: string;
memoryFlushWritePath?: string;
}> = [];
const calls: Array<{ prompt?: string }> = [];
state.runEmbeddedPiAgentMock.mockImplementation(async (params: EmbeddedRunParams) => {
calls.push({
prompt: params.prompt,
extraSystemPrompt: params.extraSystemPrompt,
memoryFlushWritePath: params.memoryFlushWritePath,
});
calls.push({ prompt: params.prompt });
if (params.prompt?.includes("Pre-compaction memory flush.")) {
return { payloads: [], meta: {} };
}
@@ -1817,10 +1750,6 @@ describe("runReplyAgent memory flush", () => {
expect(calls[0]?.prompt).toContain("Pre-compaction memory flush.");
expect(calls[0]?.prompt).toContain("Current time:");
expect(calls[0]?.prompt).toMatch(/memory\/\d{4}-\d{2}-\d{2}\.md/);
expect(calls[0]?.prompt).toContain("MEMORY.md");
expect(calls[0]?.memoryFlushWritePath).toMatch(/^memory\/\d{4}-\d{2}-\d{2}\.md$/);
expect(calls[0]?.extraSystemPrompt).toContain("memory/YYYY-MM-DD.md");
expect(calls[0]?.extraSystemPrompt).toContain("MEMORY.md");
expect(calls[1]?.prompt).toBe("hello");
const stored = JSON.parse(await fs.readFile(storePath, "utf-8"));
@@ -2077,4 +2006,3 @@ describe("runReplyAgent memory flush", () => {
});
});
});
import type { ReplyPayload } from "../types.js";

View File

@@ -1,5 +1,11 @@
import fs from "node:fs";
import { lookupContextTokens } from "../../agents/context.js";
import {
ingestAgentCortexMemoryCandidate,
resolveAgentCortexConflictNotice,
resolveAgentTurnCortexContext,
resolveCortexChannelTarget,
} from "../../agents/cortex.js";
import { DEFAULT_CONTEXT_TOKENS } from "../../agents/defaults.js";
import { resolveModelAuthMode } from "../../agents/model-auth.js";
import { isCliProvider } from "../../agents/model-selection.js";
@@ -702,6 +708,71 @@ export async function runReplyAgent(params: {
verboseNotices.push({ text: `🧹 Auto-compaction complete${suffix}.` });
}
}
const cortexAgentId =
followupRun.run.agentId ?? (sessionKey ? resolveAgentIdFromSessionKey(sessionKey) : "main");
const cortexChannelId = resolveCortexChannelTarget({
channel: followupRun.run.messageProvider,
originatingChannel: String(sessionCtx.OriginatingChannel ?? ""),
originatingTo: sessionCtx.OriginatingTo,
nativeChannelId: sessionCtx.NativeChannelId,
to: sessionCtx.To,
from: sessionCtx.From,
});
const resolvedTurnCortex = cfg
? await resolveAgentTurnCortexContext({
cfg,
agentId: cortexAgentId,
workspaceDir: followupRun.run.workspaceDir,
sessionId: followupRun.run.sessionId,
channelId: cortexChannelId,
})
: null;
const cortexMemoryCapture = cfg
? await ingestAgentCortexMemoryCandidate({
cfg,
agentId: cortexAgentId,
workspaceDir: followupRun.run.workspaceDir,
commandBody,
sessionId: followupRun.run.sessionId,
channelId: cortexChannelId,
provider: followupRun.run.messageProvider,
resolved: resolvedTurnCortex,
})
: null;
const cortexConflictNotice = cfg
? await resolveAgentCortexConflictNotice({
cfg,
agentId: cortexAgentId,
workspaceDir: followupRun.run.workspaceDir,
sessionId: followupRun.run.sessionId,
channelId: cortexChannelId,
resolved: resolvedTurnCortex,
})
: null;
if (verboseEnabled && resolvedTurnCortex) {
const sourceLabel =
resolvedTurnCortex.config.source === "session-override"
? "session override"
: resolvedTurnCortex.config.source === "channel-override"
? "channel override"
: "agent config";
verboseNotices.push({
text: `🧠 Cortex: ${resolvedTurnCortex.config.mode} (${sourceLabel})`,
});
}
if (verboseEnabled && cortexMemoryCapture?.captured) {
verboseNotices.push({
text: `🧠 Cortex memory updated (${cortexMemoryCapture.reason}, score ${cortexMemoryCapture.score.toFixed(2)})`,
});
}
if (verboseEnabled && cortexMemoryCapture?.syncedCodingContext) {
verboseNotices.push({
text: `🧠 Cortex coding sync updated (${(cortexMemoryCapture.syncPlatforms ?? []).join(", ")})`,
});
}
if (cortexConflictNotice) {
finalPayloads = [{ text: cortexConflictNotice.text }, ...finalPayloads];
}
if (verboseNotices.length > 0) {
finalPayloads = [...verboseNotices, ...finalPayloads];
}
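The ordering logic at the end of this hunk prepends the Cortex conflict notice first and the verbose notices second, so the user sees verbose notices, then the conflict notice, then the agent's replies. A self-contained sketch of that ordering (names and the `Payload` shape are assumptions for illustration, not the OpenClaw types):

```typescript
type Payload = { text: string };

// Mirror the prepend order in the diff: conflict notice goes in front of the
// agent payloads first, then verbose notices go in front of everything.
function withNotices(
  finalPayloads: Payload[],
  verboseNotices: Payload[],
  conflictNotice: Payload | null,
): Payload[] {
  let payloads = finalPayloads;
  if (conflictNotice) {
    payloads = [conflictNotice, ...payloads];
  }
  if (verboseNotices.length > 0) {
    payloads = [...verboseNotices, ...payloads];
  }
  return payloads;
}
```

With no notices, the agent payloads pass through unchanged, matching the non-verbose, no-conflict path.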

View File

@@ -17,6 +17,7 @@ import { handleConfigCommand, handleDebugCommand } from "./commands-config.js";
import {
handleCommandsListCommand,
handleContextCommand,
handleCortexCommand,
handleExportSessionCommand,
handleHelpCommand,
handleStatusCommand,
@@ -192,6 +193,7 @@ export async function handleCommands(params: HandleCommandsParams): Promise<Comm
handleAllowlistCommand,
handleApproveCommand,
handleContextCommand,
handleCortexCommand,
handleExportSessionCommand,
handleWhoamiCommand,
handleSubagentsCommand,

View File

@ -0,0 +1,549 @@
import {
getAgentCortexMemoryCaptureStatusWithHistory,
resolveAgentCortexConfig,
resolveAgentCortexModeStatus,
resolveCortexChannelTarget,
} from "../../agents/cortex.js";
import { logVerbose } from "../../globals.js";
import {
clearCortexModeOverride,
getCortexModeOverride,
setCortexModeOverride,
type CortexModeScope,
} from "../../memory/cortex-mode-overrides.js";
import type { CortexMemoryResolveAction } from "../../memory/cortex.js";
import {
type CortexMemoryConflict,
listCortexMemoryConflicts,
previewCortexContext,
resolveCortexMemoryConflict,
syncCortexCodingContext,
type CortexPolicy,
} from "../../memory/cortex.js";
import type { ReplyPayload } from "../types.js";
import type { CommandHandler, HandleCommandsParams } from "./commands-types.js";
function parseCortexCommandArgs(commandBodyNormalized: string): string {
if (commandBodyNormalized === "/cortex") {
return "";
}
if (commandBodyNormalized.startsWith("/cortex ")) {
return commandBodyNormalized.slice(8).trim();
}
return "";
}
function parseMode(value?: string): CortexPolicy | null {
if (
value === "full" ||
value === "professional" ||
value === "technical" ||
value === "minimal"
) {
return value;
}
return null;
}
function parseResolveAction(value?: string): CortexMemoryResolveAction | null {
if (value === "accept-new" || value === "keep-old" || value === "merge" || value === "ignore") {
return value;
}
return null;
}
function resolveActiveSessionId(params: HandleCommandsParams): string | undefined {
return params.sessionEntry?.sessionId;
}
function resolveActiveChannelId(params: HandleCommandsParams): string {
return resolveCortexChannelTarget({
channel: params.command.channel,
channelId: params.command.channelId,
originatingChannel: String(params.ctx.OriginatingChannel ?? ""),
originatingTo: params.ctx.OriginatingTo,
nativeChannelId: params.ctx.NativeChannelId,
to: params.command.to ?? params.ctx.To,
from: params.command.from ?? params.ctx.From,
});
}
function resolveScopeTarget(
params: HandleCommandsParams,
rawScope?: string,
): { scope: CortexModeScope; targetId: string } | { error: string } {
const requested = rawScope?.trim().toLowerCase();
if (!requested || requested === "here" || requested === "channel") {
return {
scope: "channel",
targetId: resolveActiveChannelId(params),
};
}
if (requested === "session") {
const sessionId = resolveActiveSessionId(params);
if (sessionId) {
return { scope: "session", targetId: sessionId };
}
return { error: "No active session id is available for this conversation." };
}
return { error: "Unknown scope. Use `here`, `session`, or `channel`." };
}
async function buildCortexHelpReply(): Promise<ReplyPayload> {
return {
text: [
"🧠 /cortex",
"",
"Manage Cortex prompt context for the active conversation.",
"",
"Try:",
"- /cortex preview",
"- /cortex why",
"- /cortex continuity",
"- /cortex conflicts",
"- /cortex conflict <conflictId>",
"- /cortex resolve <conflictId> <accept-new|keep-old|merge|ignore>",
"- /cortex sync coding",
"- /cortex mode show",
"- /cortex mode set minimal",
"- /cortex mode set professional channel",
"- /cortex mode reset",
"",
"Tip: omitting the scope uses the current conversation.",
"Use `session` only when you want one mode shared across the full session.",
"",
"Tip: after changing mode, run /status or /cortex preview to verify what will be used.",
].join("\n"),
};
}
function formatCortexConflictLines(conflict: CortexMemoryConflict, index?: number): string[] {
const prefix = typeof index === "number" ? `${index + 1}. ` : "";
return [
`${prefix}${conflict.id} · ${conflict.type} · severity ${conflict.severity.toFixed(2)}`,
conflict.summary,
conflict.nodeLabel ? `Node: ${conflict.nodeLabel}` : null,
conflict.oldValue ? `Old: ${conflict.oldValue}` : null,
conflict.newValue ? `New: ${conflict.newValue}` : null,
`Inspect: /cortex conflict ${conflict.id}`,
`Resolve newer: /cortex resolve ${conflict.id} accept-new`,
`Keep older: /cortex resolve ${conflict.id} keep-old`,
`Ignore: /cortex resolve ${conflict.id} ignore`,
].filter(Boolean) as string[];
}
async function resolveCortexConversationState(params: HandleCommandsParams) {
const agentId = params.agentId ?? "main";
const cortex = resolveAgentCortexConfig(params.cfg, agentId);
if (!cortex) {
return null;
}
const sessionId = resolveActiveSessionId(params);
const channelId = resolveActiveChannelId(params);
const modeStatus = await resolveAgentCortexModeStatus({
agentId,
cfg: params.cfg,
sessionId,
channelId,
});
const source =
modeStatus?.source === "session-override"
? "session override"
: modeStatus?.source === "channel-override"
? "channel override"
: "agent config";
return {
agentId,
cortex,
sessionId,
channelId,
mode: modeStatus?.mode ?? cortex.mode,
source,
};
}
async function buildCortexPreviewReply(params: HandleCommandsParams): Promise<ReplyPayload> {
const state = await resolveCortexConversationState(params);
if (!state) {
return {
text: "Cortex prompt bridge is disabled for this agent. Enable it in config or with `openclaw memory cortex enable`.",
};
}
const preview = await previewCortexContext({
workspaceDir: params.workspaceDir,
graphPath: state.cortex.graphPath,
policy: state.mode,
maxChars: state.cortex.maxChars,
});
if (!preview.context) {
return {
text: `No Cortex context available for mode ${state.mode}.`,
};
}
return {
text: [`Cortex preview (${state.mode}, ${state.source})`, "", preview.context].join("\n"),
};
}
async function buildCortexWhyReply(params: HandleCommandsParams): Promise<ReplyPayload> {
const state = await resolveCortexConversationState(params);
if (!state) {
return {
text: "Cortex prompt bridge is disabled for this agent. Enable it in config or with `openclaw memory cortex enable`.",
};
}
let previewBody = "No Cortex context is currently being injected.";
let previewGraphPath = state.cortex.graphPath ?? ".cortex/context.json";
let previewError: string | null = null;
try {
const preview = await previewCortexContext({
workspaceDir: params.workspaceDir,
graphPath: state.cortex.graphPath,
policy: state.mode,
maxChars: state.cortex.maxChars,
});
previewGraphPath = preview.graphPath;
previewBody = preview.context || previewBody;
} catch (error) {
previewError = error instanceof Error ? error.message : String(error);
}
const captureStatus = await getAgentCortexMemoryCaptureStatusWithHistory({
agentId: state.agentId,
sessionId: state.sessionId,
channelId: state.channelId,
});
return {
text: [
"Why I answered this way",
"",
`Mode: ${state.mode}`,
`Source: ${state.source}`,
`Graph: ${previewGraphPath}`,
state.sessionId ? `Session: ${state.sessionId}` : null,
state.channelId ? `Channel: ${state.channelId}` : null,
captureStatus
? `Last memory capture: ${captureStatus.captured ? "stored" : "skipped"} (${captureStatus.reason}, score ${captureStatus.score.toFixed(2)})`
: "Last memory capture: not evaluated yet",
captureStatus?.error ? `Capture error: ${captureStatus.error}` : null,
captureStatus?.syncedCodingContext
? `Coding sync: updated (${(captureStatus.syncPlatforms ?? []).join(", ")})`
: null,
previewError ? `Preview error: ${previewError}` : null,
"",
"Injected Cortex context:",
previewBody,
]
.filter(Boolean)
.join("\n"),
};
}
async function buildCortexContinuityReply(params: HandleCommandsParams): Promise<ReplyPayload> {
const state = await resolveCortexConversationState(params);
if (!state) {
return {
text: "Cortex prompt bridge is disabled for this agent. Enable it in config or with `openclaw memory cortex enable`.",
};
}
return {
text: [
"Cortex continuity",
"",
"This conversation is using the shared Cortex graph for the active agent.",
`Agent: ${state.agentId}`,
`Mode: ${state.mode} (${state.source})`,
`Graph: ${state.cortex.graphPath ?? ".cortex/context.json"}`,
state.sessionId ? `Session: ${state.sessionId}` : null,
state.channelId ? `Channel: ${state.channelId}` : null,
"",
"Messages from other channels on this agent reuse the same graph unless you override the graph path or mode there.",
"Try /cortex preview from another channel to verify continuity.",
]
.filter(Boolean)
.join("\n"),
};
}
async function buildCortexConflictsReply(params: HandleCommandsParams): Promise<ReplyPayload> {
const state = await resolveCortexConversationState(params);
if (!state) {
return {
text: "Cortex prompt bridge is disabled for this agent. Enable it in config or with `openclaw memory cortex enable`.",
};
}
const conflicts = await listCortexMemoryConflicts({
workspaceDir: params.workspaceDir,
graphPath: state.cortex.graphPath,
});
if (conflicts.length === 0) {
return {
text: "No Cortex memory conflicts.",
};
}
return {
text: [
`Cortex conflicts (${conflicts.length})`,
"",
...conflicts
.slice(0, 3)
.flatMap((conflict, index) => [...formatCortexConflictLines(conflict, index), ""]),
conflicts.length > 3 ? `…and ${conflicts.length - 3} more.` : null,
"",
"Use /cortex conflict <conflictId> for the full structured view.",
]
.filter(Boolean)
.join("\n"),
};
}
async function buildCortexConflictDetailReply(
params: HandleCommandsParams,
args: string,
): Promise<ReplyPayload> {
const state = await resolveCortexConversationState(params);
if (!state) {
return {
text: "Cortex prompt bridge is disabled for this agent. Enable it in config or with `openclaw memory cortex enable`.",
};
}
const tokens = args.split(/\s+/).filter(Boolean);
const conflictId = tokens[1];
if (!conflictId) {
return {
text: "Usage: /cortex conflict <conflictId>",
};
}
const conflicts = await listCortexMemoryConflicts({
workspaceDir: params.workspaceDir,
graphPath: state.cortex.graphPath,
});
const conflict = conflicts.find((entry) => entry.id === conflictId);
if (!conflict) {
return {
text: `Cortex conflict not found: ${conflictId}`,
};
}
return {
text: ["Cortex conflict detail", "", ...formatCortexConflictLines(conflict)].join("\n"),
};
}
async function buildCortexResolveReply(
params: HandleCommandsParams,
args: string,
): Promise<ReplyPayload> {
const state = await resolveCortexConversationState(params);
if (!state) {
return {
text: "Cortex prompt bridge is disabled for this agent. Enable it in config or with `openclaw memory cortex enable`.",
};
}
const tokens = args.split(/\s+/).filter(Boolean);
const conflictId = tokens[1];
const action = parseResolveAction(tokens[2]);
if (!conflictId || !action) {
return {
text: "Usage: /cortex resolve <conflictId> <accept-new|keep-old|merge|ignore>",
};
}
const result = await resolveCortexMemoryConflict({
workspaceDir: params.workspaceDir,
graphPath: state.cortex.graphPath,
conflictId,
action,
commitMessage: `openclaw cortex resolve ${conflictId} ${action}`,
});
return {
text: [
`Resolved Cortex conflict ${result.conflictId}.`,
`Action: ${result.action}`,
`Status: ${result.status}`,
typeof result.nodesUpdated === "number" ? `Nodes updated: ${result.nodesUpdated}` : null,
typeof result.nodesRemoved === "number" ? `Nodes removed: ${result.nodesRemoved}` : null,
result.commitId ? `Commit: ${result.commitId}` : null,
result.message ?? null,
"Use /cortex conflicts or /cortex preview to inspect the updated memory state.",
]
.filter(Boolean)
.join("\n"),
};
}
async function buildCortexSyncReply(
params: HandleCommandsParams,
args: string,
): Promise<ReplyPayload> {
const state = await resolveCortexConversationState(params);
if (!state) {
return {
text: "Cortex prompt bridge is disabled for this agent. Enable it in config or with `openclaw memory cortex enable`.",
};
}
const tokens = args.split(/\s+/).filter(Boolean);
if (tokens[1]?.toLowerCase() !== "coding") {
return {
text: "Usage: /cortex sync coding [full|professional|technical|minimal] [platform ...]",
};
}
const requestedMode = parseMode(tokens[2]);
const policy = requestedMode ?? "technical";
const platformStartIndex = requestedMode ? 3 : 2;
const platforms = tokens.slice(platformStartIndex).filter(Boolean);
const result = await syncCortexCodingContext({
workspaceDir: params.workspaceDir,
graphPath: state.cortex.graphPath,
policy,
platforms,
});
return {
text: [
"Synced Cortex coding context.",
`Mode: ${result.policy}`,
`Platforms: ${result.platforms.join(", ")}`,
`Graph: ${result.graphPath}`,
].join("\n"),
};
}
async function buildCortexModeReply(
params: HandleCommandsParams,
args: string,
): Promise<ReplyPayload> {
const tokens = args.split(/\s+/).filter(Boolean);
const action = tokens[1]?.toLowerCase();
const agentId = params.agentId ?? "main";
if (!action || action === "help") {
return {
text: [
"Usage:",
"- /cortex mode show",
"- /cortex mode set <full|professional|technical|minimal> [here|session|channel]",
"- /cortex mode reset [here|session|channel]",
].join("\n"),
};
}
if (action === "show") {
const target = resolveScopeTarget(params, tokens[2]);
if ("error" in target) {
return { text: target.error };
}
const override = await getCortexModeOverride({
agentId,
sessionId: target.scope === "session" ? target.targetId : undefined,
channelId: target.scope === "channel" ? target.targetId : undefined,
});
if (!override) {
return {
text: `No Cortex mode override for this ${target.scope}.`,
};
}
return {
text: `Cortex mode for this ${target.scope}: ${override.mode}`,
};
}
if (action === "set") {
const mode = parseMode(tokens[2]);
if (!mode) {
return {
text: "Usage: /cortex mode set <full|professional|technical|minimal> [here|session|channel]",
};
}
const target = resolveScopeTarget(params, tokens[3]);
if ("error" in target) {
return { text: target.error };
}
await setCortexModeOverride({
agentId,
scope: target.scope,
targetId: target.targetId,
mode,
});
return {
text: [
`Set Cortex mode for this ${target.scope} to ${mode}.`,
"Use /status or /cortex preview to verify.",
].join("\n"),
};
}
if (action === "reset") {
const target = resolveScopeTarget(params, tokens[2]);
if ("error" in target) {
return { text: target.error };
}
const removed = await clearCortexModeOverride({
agentId,
scope: target.scope,
targetId: target.targetId,
});
return {
text: removed
? [
`Cleared Cortex mode override for this ${target.scope}.`,
"Use /status or /cortex preview to verify.",
].join("\n")
: `No Cortex mode override for this ${target.scope}.`,
};
}
return {
text: "Usage: /cortex mode <show|set|reset> ...",
};
}
export const handleCortexCommand: CommandHandler = async (params, allowTextCommands) => {
if (!allowTextCommands) {
return null;
}
const normalized = params.command.commandBodyNormalized;
if (normalized !== "/cortex" && !normalized.startsWith("/cortex ")) {
return null;
}
if (!params.command.isAuthorizedSender) {
logVerbose(
`Ignoring /cortex from unauthorized sender: ${params.command.senderId || "<unknown>"}`,
);
return { shouldContinue: false };
}
try {
const args = parseCortexCommandArgs(normalized);
const subcommand = args.split(/\s+/).filter(Boolean)[0]?.toLowerCase() ?? "";
const reply =
!subcommand || subcommand === "help"
? await buildCortexHelpReply()
: subcommand === "preview"
? await buildCortexPreviewReply(params)
: subcommand === "why"
? await buildCortexWhyReply(params)
: subcommand === "continuity"
? await buildCortexContinuityReply(params)
: subcommand === "conflicts"
? await buildCortexConflictsReply(params)
: subcommand === "conflict"
? await buildCortexConflictDetailReply(params, args)
: subcommand === "resolve"
? await buildCortexResolveReply(params, args)
: subcommand === "sync"
? await buildCortexSyncReply(params, args)
: subcommand === "mode"
? await buildCortexModeReply(params, args)
: {
text: "Usage: /cortex preview | /cortex why | /cortex continuity | /cortex conflicts | /cortex conflict <id> | /cortex resolve ... | /cortex sync coding ... | /cortex mode <show|set|reset> ...",
};
return {
shouldContinue: false,
reply,
};
} catch (error) {
return {
shouldContinue: false,
reply: {
text: error instanceof Error ? error.message : String(error),
},
};
}
};

View File

@ -6,6 +6,7 @@ import {
buildHelpMessage,
} from "../status.js";
import { buildContextReply } from "./commands-context-report.js";
import { handleCortexCommand } from "./commands-cortex.js";
import { buildExportSessionReply } from "./commands-export-session.js";
import { buildStatusReply } from "./commands-status.js";
import type { CommandHandler } from "./commands-types.js";
@ -133,6 +134,7 @@ export const handleStatusCommand: CommandHandler = async (params, allowTextComma
}
const reply = await buildStatusReply({
cfg: params.cfg,
ctx: params.ctx,
command: params.command,
sessionEntry: params.sessionEntry,
sessionKey: params.sessionKey,
@ -226,3 +228,5 @@ export const handleWhoamiCommand: CommandHandler = async (params, allowTextComma
}
return { shouldContinue: false, reply: { text: lines.join("\n") } };
};
export { handleCortexCommand };

View File

@ -3,6 +3,7 @@ import {
resolveDefaultAgentId,
resolveSessionAgentId,
} from "../../agents/agent-scope.js";
import { resolveAgentCortexModeStatus, resolveCortexChannelTarget } from "../../agents/cortex.js";
import { resolveFastModeState } from "../../agents/fast-mode.js";
import { resolveModelAuthLabel } from "../../agents/model-auth-label.js";
import { listSubagentRunsForRequester } from "../../agents/subagent-registry.js";
@ -23,6 +24,7 @@ import type { MediaUnderstandingDecision } from "../../media-understanding/types
import { normalizeGroupActivation } from "../group-activation.js";
import { resolveSelectedAndActiveModel } from "../model-runtime.js";
import { buildStatusMessage } from "../status.js";
import type { MsgContext } from "../templating.js";
import type { ElevatedLevel, ReasoningLevel, ThinkLevel, VerboseLevel } from "../thinking.js";
import type { ReplyPayload } from "../types.js";
import type { CommandContext } from "./commands-types.js";
@ -31,6 +33,7 @@ import { resolveSubagentLabel } from "./subagents-utils.js";
export async function buildStatusReply(params: {
cfg: OpenClawConfig;
ctx?: MsgContext;
command: CommandContext;
sessionEntry?: SessionEntry;
sessionKey: string;
@ -52,6 +55,7 @@ export async function buildStatusReply(params: {
}): Promise<ReplyPayload | undefined> {
const {
cfg,
ctx,
command,
sessionEntry,
sessionKey,
@ -120,6 +124,7 @@ export async function buildStatusReply(params: {
);
let subagentsLine: string | undefined;
let cortexLine: string | undefined;
if (sessionKey) {
const { mainKey, alias } = resolveMainSessionAlias(cfg);
const requesterKey = resolveInternalSessionKey({ key: sessionKey, alias, mainKey });
@ -140,6 +145,29 @@ export async function buildStatusReply(params: {
}
}
}
const cortexStatus = await resolveAgentCortexModeStatus({
cfg,
agentId: statusAgentId,
sessionId: sessionEntry?.sessionId ?? undefined,
channelId: resolveCortexChannelTarget({
channel: command.channel,
channelId: command.channelId,
originatingChannel: String(ctx?.OriginatingChannel ?? command.channel),
originatingTo: ctx?.OriginatingTo,
nativeChannelId: ctx?.NativeChannelId,
to: command.to ?? ctx?.To,
from: command.from ?? ctx?.From,
}),
});
if (cortexStatus) {
const sourceLabel =
cortexStatus.source === "session-override"
? "session override"
: cortexStatus.source === "channel-override"
? "channel override"
: "agent config";
cortexLine = `🧠 Cortex: ${cortexStatus.mode} (${sourceLabel})`;
}
const groupActivation = isGroup
? (normalizeGroupActivation(sessionEntry?.groupActivation) ?? defaultGroupActivation())
: undefined;
@ -199,6 +227,7 @@ export async function buildStatusReply(params: {
modelAuth: selectedModelAuth,
activeModelAuth,
usageLine: usageLine ?? undefined,
cortexLine,
queue: {
mode: queueSettings.mode,
depth: queueDepth,

View File

@ -1,5 +1,5 @@
import type { OpenClawConfig } from "../../config/config.js";
import type { MsgContext } from "../templating.js";
import type { MsgContext, TemplateContext } from "../templating.js";
import type { HandleCommandsParams } from "./commands-types.js";
import { buildCommandContext } from "./commands.js";
import { parseInlineDirectives } from "./directive-handling.js";
@ -7,7 +7,7 @@ import { parseInlineDirectives } from "./directive-handling.js";
export function buildCommandTestParams(
commandBody: string,
cfg: OpenClawConfig,
ctxOverrides?: Partial<MsgContext>,
ctxOverrides?: Partial<TemplateContext>,
options?: {
workspaceDir?: string;
},

View File

@ -7,6 +7,7 @@ import { updateSessionStore, type SessionEntry } from "../../config/sessions.js"
import { typedCases } from "../../test-utils/typed-cases.js";
import { INTERNAL_MESSAGE_CHANNEL } from "../../utils/message-channel.js";
import type { MsgContext } from "../templating.js";
import type { HandleCommandsParams } from "./commands-types.js";
const readConfigFileSnapshotMock = vi.hoisted(() => vi.fn());
const validateConfigObjectWithPluginsMock = vi.hoisted(() => vi.fn());
@ -89,7 +90,71 @@ vi.mock("../../gateway/call.js", () => ({
callGateway: callGatewayMock,
}));
const previewCortexContextMock = vi.hoisted(() => vi.fn());
const listCortexMemoryConflictsMock = vi.hoisted(() => vi.fn());
const resolveCortexMemoryConflictMock = vi.hoisted(() => vi.fn());
const syncCortexCodingContextMock = vi.hoisted(() => vi.fn());
const getCortexModeOverrideMock = vi.hoisted(() => vi.fn());
const setCortexModeOverrideMock = vi.hoisted(() => vi.fn());
const clearCortexModeOverrideMock = vi.hoisted(() => vi.fn());
const resolveAgentCortexModeStatusMock = vi.hoisted(() => vi.fn());
const resolveCortexChannelTargetMock = vi.hoisted(() => vi.fn());
const getAgentCortexMemoryCaptureStatusWithHistoryMock = vi.hoisted(() => vi.fn());
vi.mock("../../memory/cortex.js", async () => {
const actual =
await vi.importActual<typeof import("../../memory/cortex.js")>("../../memory/cortex.js");
return {
...actual,
previewCortexContext: previewCortexContextMock,
listCortexMemoryConflicts: listCortexMemoryConflictsMock,
resolveCortexMemoryConflict: resolveCortexMemoryConflictMock,
syncCortexCodingContext: syncCortexCodingContextMock,
};
});
vi.mock("../../memory/cortex-mode-overrides.js", async () => {
const actual = await vi.importActual<typeof import("../../memory/cortex-mode-overrides.js")>(
"../../memory/cortex-mode-overrides.js",
);
return {
...actual,
getCortexModeOverride: getCortexModeOverrideMock,
setCortexModeOverride: setCortexModeOverrideMock,
clearCortexModeOverride: clearCortexModeOverrideMock,
};
});
vi.mock("../../agents/cortex.js", async () => {
const actual =
await vi.importActual<typeof import("../../agents/cortex.js")>("../../agents/cortex.js");
return {
...actual,
getAgentCortexMemoryCaptureStatusWithHistory: getAgentCortexMemoryCaptureStatusWithHistoryMock,
resolveAgentCortexModeStatus: resolveAgentCortexModeStatusMock,
resolveCortexChannelTarget: resolveCortexChannelTargetMock,
};
});
type ResetAcpSessionInPlaceResult = { ok: true } | { ok: false; skipped?: boolean; error?: string };
const resetAcpSessionInPlaceMock = vi.hoisted(() =>
vi.fn(
async (_params: unknown): Promise<ResetAcpSessionInPlaceResult> => ({
ok: false,
skipped: true,
}),
),
);
vi.mock("../../acp/persistent-bindings.lifecycle.js", async () => {
const actual = await vi.importActual<typeof import("../../acp/persistent-bindings.lifecycle.js")>(
"../../acp/persistent-bindings.lifecycle.js",
);
return {
...actual,
resetAcpSessionInPlace: (params: unknown) => resetAcpSessionInPlaceMock(params),
};
});
// Avoid expensive workspace scans during /context tests.
vi.mock("./commands-context-report.js", () => ({
@ -191,6 +256,26 @@ function buildParams(commandBody: string, cfg: OpenClawConfig, ctxOverrides?: Pa
return buildCommandTestParams(commandBody, cfg, ctxOverrides, { workspaceDir: testWorkspaceDir });
}
beforeEach(() => {
resetAcpSessionInPlaceMock.mockReset();
resetAcpSessionInPlaceMock.mockResolvedValue({ ok: false, skipped: true } as const);
previewCortexContextMock.mockReset();
listCortexMemoryConflictsMock.mockReset();
resolveCortexMemoryConflictMock.mockReset();
syncCortexCodingContextMock.mockReset();
getCortexModeOverrideMock.mockReset();
setCortexModeOverrideMock.mockReset();
clearCortexModeOverrideMock.mockReset();
resolveAgentCortexModeStatusMock.mockReset();
resolveCortexChannelTargetMock.mockReset();
getAgentCortexMemoryCaptureStatusWithHistoryMock.mockReset();
resolveAgentCortexModeStatusMock.mockResolvedValue(null);
getAgentCortexMemoryCaptureStatusWithHistoryMock.mockResolvedValue(null);
resolveCortexChannelTargetMock.mockImplementation(
(params: { nativeChannelId?: string; to?: string; channel?: string }) =>
params.nativeChannelId ?? params.to ?? params.channel ?? "unknown",
);
});
describe("handleCommands gating", () => {
it("blocks gated commands when disabled or not elevated-allowlisted", async () => {
const cases = typedCases<{
@ -610,6 +695,435 @@ describe("/compact command", () => {
});
});
describe("/cortex command", () => {
it("shows help for bare /cortex", async () => {
const cfg = {
commands: { text: true },
channels: { whatsapp: { allowFrom: ["*"] } },
} as OpenClawConfig;
const params = buildParams("/cortex", cfg);
const result = await handleCommands(params);
expect(result.shouldContinue).toBe(false);
expect(result.reply?.text).toContain("Manage Cortex prompt context");
expect(result.reply?.text).toContain("/cortex preview");
expect(result.reply?.text).toContain("/cortex why");
expect(result.reply?.text).toContain("/cortex conflicts");
});
it("previews Cortex context using the active override", async () => {
const cfg = {
commands: { text: true },
channels: { whatsapp: { allowFrom: ["*"] } },
agents: {
defaults: {
cortex: {
enabled: true,
mode: "technical",
maxChars: 1500,
},
},
},
} as OpenClawConfig;
resolveAgentCortexModeStatusMock.mockResolvedValueOnce({
enabled: true,
mode: "minimal",
source: "session-override",
maxChars: 1500,
});
previewCortexContextMock.mockResolvedValueOnce({
workspaceDir: testWorkspaceDir,
graphPath: path.join(testWorkspaceDir, ".cortex", "context.json"),
policy: "minimal",
maxChars: 1500,
context: "## Cortex Context\n- Minimal",
});
const params = buildParams("/cortex preview", cfg);
const result = await handleCommands(params);
expect(result.shouldContinue).toBe(false);
expect(result.reply?.text).toContain("Cortex preview (minimal, session override)");
expect(result.reply?.text).toContain("## Cortex Context");
expect(resolveAgentCortexModeStatusMock).toHaveBeenCalled();
expect(previewCortexContextMock).toHaveBeenCalledWith({
workspaceDir: testWorkspaceDir,
graphPath: undefined,
policy: "minimal",
maxChars: 1500,
});
});
it("explains why Cortex context affected the reply", async () => {
const cfg = {
commands: { text: true },
channels: { whatsapp: { allowFrom: ["*"] } },
agents: {
defaults: {
cortex: {
enabled: true,
mode: "technical",
maxChars: 1500,
graphPath: ".cortex/context.json",
},
},
},
} as OpenClawConfig;
resolveAgentCortexModeStatusMock.mockResolvedValueOnce({
enabled: true,
mode: "professional",
source: "channel-override",
maxChars: 1500,
graphPath: path.join(testWorkspaceDir, ".cortex", "context.json"),
});
previewCortexContextMock.mockResolvedValueOnce({
workspaceDir: testWorkspaceDir,
graphPath: path.join(testWorkspaceDir, ".cortex", "context.json"),
policy: "professional",
maxChars: 1500,
context: "## Cortex Context\n- Work priorities",
});
getAgentCortexMemoryCaptureStatusWithHistoryMock.mockResolvedValueOnce({
captured: true,
score: 0.7,
reason: "high-signal memory candidate",
updatedAt: Date.now(),
});
const params = buildParams("/cortex why", cfg, {
NativeChannelId: "C123",
});
const result = await handleCommands(params);
expect(result.shouldContinue).toBe(false);
expect(result.reply?.text).toContain("Why I answered this way");
expect(result.reply?.text).toContain("Mode: professional");
expect(result.reply?.text).toContain("Source: channel override");
expect(result.reply?.text).toContain(
"Last memory capture: stored (high-signal memory candidate, score 0.70)",
);
expect(result.reply?.text).toContain("Injected Cortex context:");
});
it("keeps /cortex why useful when preview fails", async () => {
const cfg = {
commands: { text: true },
channels: { whatsapp: { allowFrom: ["*"] } },
agents: {
defaults: {
cortex: {
enabled: true,
mode: "technical",
maxChars: 1500,
graphPath: ".cortex/context.json",
},
},
},
} as OpenClawConfig;
resolveAgentCortexModeStatusMock.mockResolvedValueOnce({
enabled: true,
mode: "professional",
source: "agent-config",
maxChars: 1500,
graphPath: path.join(testWorkspaceDir, ".cortex", "context.json"),
});
previewCortexContextMock.mockRejectedValueOnce(new Error("Cortex graph not found"));
const params = buildParams("/cortex why", cfg, {
NativeChannelId: "C123",
});
const result = await handleCommands(params);
expect(result.shouldContinue).toBe(false);
expect(result.reply?.text).toContain("Why I answered this way");
expect(result.reply?.text).toContain("Mode: professional");
expect(result.reply?.text).toContain("Preview error: Cortex graph not found");
expect(result.reply?.text).toContain("Injected Cortex context:");
expect(result.reply?.text).toContain("No Cortex context is currently being injected.");
});
it("shows continuity details for the active conversation", async () => {
const cfg = {
commands: { text: true },
channels: { whatsapp: { allowFrom: ["*"] } },
agents: {
defaults: {
cortex: {
enabled: true,
mode: "technical",
maxChars: 1500,
graphPath: ".cortex/context.json",
},
},
},
} as OpenClawConfig;
const params = buildParams("/cortex continuity", cfg, {
NativeChannelId: "C123",
});
const result = await handleCommands(params);
expect(result.shouldContinue).toBe(false);
expect(result.reply?.text).toContain("Cortex continuity");
expect(result.reply?.text).toContain("shared Cortex graph");
expect(result.reply?.text).toContain("Try /cortex preview from another channel");
});
it("lists Cortex conflicts and suggests a resolve command", async () => {
const cfg = {
commands: { text: true },
channels: { whatsapp: { allowFrom: ["*"] } },
agents: {
defaults: {
cortex: {
enabled: true,
mode: "technical",
maxChars: 1500,
},
},
},
} as OpenClawConfig;
listCortexMemoryConflictsMock.mockResolvedValueOnce([
{
id: "conf_1",
type: "temporal_flip",
severity: 0.91,
summary: "Hiring status changed from active hiring to not hiring",
nodeLabel: "Hiring",
oldValue: "active hiring",
newValue: "not hiring",
},
]);
const params = buildParams("/cortex conflicts", cfg);
const result = await handleCommands(params);
expect(result.shouldContinue).toBe(false);
expect(result.reply?.text).toContain("Cortex conflicts (1)");
expect(result.reply?.text).toContain("conf_1 · temporal_flip · severity 0.91");
expect(result.reply?.text).toContain("Node: Hiring");
expect(result.reply?.text).toContain("Old: active hiring");
expect(result.reply?.text).toContain("New: not hiring");
expect(result.reply?.text).toContain("/cortex conflict conf_1");
expect(result.reply?.text).toContain("/cortex resolve conf_1 accept-new");
});
it("shows a structured Cortex conflict detail view", async () => {
const cfg = {
commands: { text: true },
channels: { whatsapp: { allowFrom: ["*"] } },
agents: {
defaults: {
cortex: {
enabled: true,
mode: "technical",
maxChars: 1500,
},
},
},
} as OpenClawConfig;
listCortexMemoryConflictsMock.mockResolvedValueOnce([
{
id: "conf_1",
type: "temporal_flip",
severity: 0.91,
summary: "Hiring status changed from active hiring to not hiring",
nodeLabel: "Hiring",
oldValue: "active hiring",
newValue: "not hiring",
},
]);
const params = buildParams("/cortex conflict conf_1", cfg);
const result = await handleCommands(params);
expect(result.shouldContinue).toBe(false);
expect(result.reply?.text).toContain("Cortex conflict detail");
expect(result.reply?.text).toContain("Node: Hiring");
expect(result.reply?.text).toContain("/cortex resolve conf_1 keep-old");
});
it("resolves a Cortex conflict from chat", async () => {
const cfg = {
commands: { text: true },
channels: { whatsapp: { allowFrom: ["*"] } },
agents: {
defaults: {
cortex: {
enabled: true,
mode: "technical",
maxChars: 1500,
},
},
},
} as OpenClawConfig;
resolveCortexMemoryConflictMock.mockResolvedValueOnce({
status: "ok",
conflictId: "conf_1",
action: "accept-new",
nodesUpdated: 1,
nodesRemoved: 1,
commitId: "ver_123",
});
const params = buildParams("/cortex resolve conf_1 accept-new", cfg);
const result = await handleCommands(params);
expect(result.shouldContinue).toBe(false);
expect(resolveCortexMemoryConflictMock).toHaveBeenCalledWith({
workspaceDir: testWorkspaceDir,
graphPath: undefined,
conflictId: "conf_1",
action: "accept-new",
commitMessage: "openclaw cortex resolve conf_1 accept-new",
});
expect(result.reply?.text).toContain("Resolved Cortex conflict conf_1.");
expect(result.reply?.text).toContain("Commit: ver_123");
expect(result.reply?.text).toContain("/cortex preview");
});
it("syncs Cortex coding context from chat", async () => {
const cfg = {
commands: { text: true },
channels: { whatsapp: { allowFrom: ["*"] } },
agents: {
defaults: {
cortex: {
enabled: true,
mode: "technical",
maxChars: 1500,
},
},
},
} as OpenClawConfig;
syncCortexCodingContextMock.mockResolvedValueOnce({
workspaceDir: testWorkspaceDir,
graphPath: path.join(testWorkspaceDir, ".cortex", "context.json"),
policy: "technical",
platforms: ["claude-code", "cursor", "copilot"],
});
const params = buildParams("/cortex sync coding", cfg);
const result = await handleCommands(params);
expect(result.shouldContinue).toBe(false);
expect(syncCortexCodingContextMock).toHaveBeenCalledWith({
workspaceDir: testWorkspaceDir,
graphPath: undefined,
policy: "technical",
platforms: [],
});
expect(result.reply?.text).toContain("Synced Cortex coding context.");
expect(result.reply?.text).toContain("Platforms: claude-code, cursor, copilot");
});
it("sets Cortex mode for the active conversation", async () => {
const cfg = {
commands: { text: true },
channels: { whatsapp: { allowFrom: ["*"] } },
} as OpenClawConfig;
const params = buildParams("/cortex mode set professional", cfg, {
Surface: "slack",
Provider: "slack",
NativeChannelId: "C123",
});
const result = await handleCommands(params);
expect(result.shouldContinue).toBe(false);
expect(setCortexModeOverrideMock).toHaveBeenCalledWith({
agentId: "main",
scope: "channel",
targetId: "C123",
mode: "professional",
});
expect(result.reply?.text).toContain("Set Cortex mode for this channel to professional.");
expect(result.reply?.text).toContain("Use /status or /cortex preview to verify.");
});
it("shows Cortex mode for the active conversation when no scope is specified", async () => {
const cfg = {
commands: { text: true },
channels: { whatsapp: { allowFrom: ["*"] } },
} as OpenClawConfig;
getCortexModeOverrideMock.mockResolvedValueOnce({
agentId: "main",
scope: "channel",
targetId: "C123",
mode: "minimal",
updatedAt: new Date().toISOString(),
});
const params = buildParams("/cortex mode show", cfg, {
Surface: "slack",
Provider: "slack",
NativeChannelId: "C123",
});
const result = await handleCommands(params);
expect(result.shouldContinue).toBe(false);
expect(getCortexModeOverrideMock).toHaveBeenCalledWith({
agentId: "main",
sessionId: undefined,
channelId: "C123",
});
expect(result.reply?.text).toContain("Cortex mode for this channel: minimal");
});
it("resets Cortex mode for the active conversation when requested", async () => {
const cfg = {
commands: { text: true },
channels: { whatsapp: { allowFrom: ["*"] } },
} as OpenClawConfig;
clearCortexModeOverrideMock.mockResolvedValueOnce(true);
const params = buildParams("/cortex mode reset", cfg, {
Surface: "slack",
Provider: "slack",
NativeChannelId: "C123",
});
const result = await handleCommands(params);
expect(result.shouldContinue).toBe(false);
expect(resolveCortexChannelTargetMock).toHaveBeenCalled();
expect(clearCortexModeOverrideMock).toHaveBeenCalledWith({
agentId: "main",
scope: "channel",
targetId: "C123",
});
expect(result.reply?.text).toContain("Cleared Cortex mode override for this channel.");
expect(result.reply?.text).toContain("Use /status or /cortex preview to verify.");
});
it("shows the active Cortex mode in /status", async () => {
const cfg = {
commands: { text: true },
channels: { whatsapp: { allowFrom: ["*"] } },
} as OpenClawConfig;
resolveAgentCortexModeStatusMock.mockResolvedValueOnce({
enabled: true,
mode: "technical",
source: "session-override",
maxChars: 1500,
});
const params = buildParams("/status", cfg);
const result = await handleCommands(params);
expect(result.shouldContinue).toBe(false);
expect(result.reply?.text).toContain("Cortex: technical (session override)");
});
});
describe("abort trigger command", () => {
beforeEach(() => {
vi.clearAllMocks();

(File diff suppressed because it is too large.)


@ -84,6 +84,7 @@ type StatusArgs = {
modelAuth?: string;
activeModelAuth?: string;
usageLine?: string;
cortexLine?: string;
timeLine?: string;
queue?: QueueStatus;
mediaDecisions?: ReadonlyArray<MediaUnderstandingDecision>;
@ -676,6 +677,7 @@ export function buildStatusMessage(args: StatusArgs): string {
usageCostLine,
cacheLine,
`📚 ${contextLine}`,
args.cortexLine,
mediaLine,
args.usageLine,
`🧵 ${sessionLine}`,
@ -746,6 +748,10 @@ export function buildHelpMessage(cfg?: OpenClawConfig): string {
lines.push(" /status | /whoami | /context");
lines.push("");
lines.push("Cortex");
lines.push(" /cortex preview | /cortex mode show | /cortex mode set <mode>");
lines.push("");
lines.push("Skills");
lines.push(" /skill <name> [input]");
@ -871,6 +877,12 @@ export function buildCommandsMessagePaginated(
if (!isTelegram) {
const lines = [" Slash commands", ""];
lines.push(formatCommandList(items));
if (items.some((item) => item.text.startsWith("/cortex "))) {
lines.push("");
lines.push(
"Tip: /cortex preview shows the active Cortex context; /status shows mode and source.",
);
}
return {
text: lines.join("\n").trim(),
totalPages: 1,
@ -889,6 +901,10 @@ export function buildCommandsMessagePaginated(
const lines = [` Commands (${currentPage}/${totalPages})`, ""];
lines.push(formatCommandList(pageItems));
if (currentPage === 1 && items.some((item) => item.text.startsWith("/cortex "))) {
lines.push("");
lines.push("Tip: /cortex preview shows context; /status shows mode/source.");
}
return {
text: lines.join("\n").trim(),


@ -317,7 +317,7 @@ describe("resolveCommandSecretRefsViaGateway", () => {
},
},
},
} as OpenClawConfig,
} as unknown as OpenClawConfig,
commandName: "agent",
targetIds: new Set(["tools.web.fetch.firecrawl.apiKey"]),
});


@ -5,8 +5,18 @@ import { Command } from "commander";
import { afterEach, beforeAll, beforeEach, describe, expect, it, vi } from "vitest";
const getMemorySearchManager = vi.hoisted(() => vi.fn());
const getCortexStatus = vi.hoisted(() => vi.fn());
const previewCortexContext = vi.hoisted(() => vi.fn());
const ensureCortexGraphInitialized = vi.hoisted(() => vi.fn());
const resolveAgentCortexConfig = vi.hoisted(() => vi.fn());
const getCortexModeOverride = vi.hoisted(() => vi.fn());
const setCortexModeOverride = vi.hoisted(() => vi.fn());
const clearCortexModeOverride = vi.hoisted(() => vi.fn());
const loadConfig = vi.hoisted(() => vi.fn(() => ({})));
const readConfigFileSnapshot = vi.hoisted(() => vi.fn());
const writeConfigFile = vi.hoisted(() => vi.fn(async () => {}));
const resolveDefaultAgentId = vi.hoisted(() => vi.fn(() => "main"));
const resolveAgentWorkspaceDir = vi.hoisted(() => vi.fn(() => "/tmp/openclaw-workspace"));
const resolveCommandSecretRefsViaGateway = vi.hoisted(() =>
vi.fn(async ({ config }: { config: unknown }) => ({
resolvedConfig: config,
@ -18,12 +28,31 @@ vi.mock("../memory/index.js", () => ({
getMemorySearchManager,
}));
vi.mock("../memory/cortex.js", () => ({
ensureCortexGraphInitialized,
getCortexStatus,
previewCortexContext,
}));
vi.mock("../agents/cortex.js", () => ({
resolveAgentCortexConfig,
}));
vi.mock("../memory/cortex-mode-overrides.js", () => ({
getCortexModeOverride,
setCortexModeOverride,
clearCortexModeOverride,
}));
vi.mock("../config/config.js", () => ({
loadConfig,
readConfigFileSnapshot,
writeConfigFile,
}));
vi.mock("../agents/agent-scope.js", () => ({
resolveDefaultAgentId,
resolveAgentWorkspaceDir,
}));
vi.mock("./command-secret-gateway.js", () => ({
@ -44,6 +73,7 @@ beforeAll(async () => {
beforeEach(() => {
getMemorySearchManager.mockReset();
loadConfig.mockReset().mockReturnValue({});
resolveAgentCortexConfig.mockReset().mockReturnValue(null);
resolveDefaultAgentId.mockReset().mockReturnValue("main");
resolveCommandSecretRefsViaGateway.mockReset().mockImplementation(async ({ config }) => ({
resolvedConfig: config,
@ -53,6 +83,16 @@ beforeEach(() => {
afterEach(() => {
vi.restoreAllMocks();
getMemorySearchManager.mockClear();
getCortexStatus.mockClear();
previewCortexContext.mockClear();
ensureCortexGraphInitialized.mockClear();
getCortexModeOverride.mockClear();
setCortexModeOverride.mockClear();
clearCortexModeOverride.mockClear();
readConfigFileSnapshot.mockClear();
writeConfigFile.mockClear();
resolveCommandSecretRefsViaGateway.mockClear();
process.exitCode = undefined;
setVerbose(false);
});
@ -97,6 +137,21 @@ describe("memory cli", () => {
getMemorySearchManager.mockResolvedValueOnce({ manager });
}
function mockWritableConfigSnapshot(resolved: Record<string, unknown>) {
readConfigFileSnapshot.mockResolvedValueOnce({
exists: true,
valid: true,
config: resolved,
resolved,
issues: [],
warnings: [],
legacyIssues: [],
path: "/tmp/openclaw.json",
raw: JSON.stringify(resolved),
parsed: resolved,
});
}
function setupMemoryStatusWithInactiveSecretDiagnostics(close: ReturnType<typeof vi.fn>) {
resolveCommandSecretRefsViaGateway.mockResolvedValueOnce({
resolvedConfig: {},
@ -262,6 +317,11 @@ describe("memory cli", () => {
expect(helpText).toContain("Quick search using positional query.");
expect(helpText).toContain('openclaw memory search --query "deployment" --max-results 20');
expect(helpText).toContain("Limit results for focused troubleshooting.");
expect(helpText).toContain("openclaw memory cortex status");
expect(helpText).toContain("Check local Cortex bridge availability.");
expect(helpText).toContain("openclaw memory cortex preview --mode technical");
expect(helpText).toContain("openclaw memory cortex enable --mode technical");
expect(helpText).toContain("openclaw memory cortex mode set minimal --session-id abc123");
});
it("prints vector error when unavailable", async () => {
@ -575,4 +635,242 @@ describe("memory cli", () => {
expect(payload.results as unknown[]).toHaveLength(1);
expect(close).toHaveBeenCalled();
});
it("prints Cortex bridge status", async () => {
getCortexStatus.mockResolvedValueOnce({
available: true,
workspaceDir: "/tmp/openclaw-workspace",
graphPath: "/tmp/openclaw-workspace/.cortex/context.json",
graphExists: true,
});
const log = spyRuntimeLogs();
await runMemoryCli(["cortex", "status"]);
expect(getCortexStatus).toHaveBeenCalledWith({
workspaceDir: "/tmp/openclaw-workspace",
graphPath: undefined,
});
expect(log).toHaveBeenCalledWith(expect.stringContaining("Cortex Bridge"));
expect(log).toHaveBeenCalledWith(expect.stringContaining("CLI: ready"));
expect(log).toHaveBeenCalledWith(expect.stringContaining("Graph: present"));
});
it("prints Cortex bridge status as json", async () => {
getCortexStatus.mockResolvedValueOnce({
available: false,
workspaceDir: "/tmp/openclaw-workspace",
graphPath: "/tmp/openclaw-workspace/.cortex/context.json",
graphExists: false,
error: "spawn cortex ENOENT",
});
const log = spyRuntimeLogs();
await runMemoryCli(["cortex", "status", "--json"]);
const payload = firstLoggedJson(log);
expect(payload.agentId).toBe("main");
expect(payload.available).toBe(false);
expect(payload.error).toBe("spawn cortex ENOENT");
});
it("prints Cortex preview context", async () => {
previewCortexContext.mockResolvedValueOnce({
workspaceDir: "/tmp/openclaw-workspace",
graphPath: "/tmp/openclaw-workspace/.cortex/context.json",
policy: "technical",
maxChars: 1500,
context: "## Cortex Context\n- Python",
});
const log = spyRuntimeLogs();
await runMemoryCli(["cortex", "preview", "--mode", "technical"]);
expect(previewCortexContext).toHaveBeenCalledWith({
workspaceDir: "/tmp/openclaw-workspace",
graphPath: undefined,
policy: "technical",
maxChars: undefined,
});
expect(log).toHaveBeenCalledWith("## Cortex Context\n- Python");
});
it("fails Cortex preview when bridge errors", async () => {
previewCortexContext.mockRejectedValueOnce(new Error("Cortex graph not found"));
const error = spyRuntimeErrors();
await runMemoryCli(["cortex", "preview"]);
expect(error).toHaveBeenCalledWith("Cortex graph not found");
expect(process.exitCode).toBe(1);
});
it("enables Cortex prompt bridge in agent defaults", async () => {
mockWritableConfigSnapshot({});
ensureCortexGraphInitialized.mockResolvedValueOnce({
graphPath: "/tmp/openclaw-workspace/.cortex/context.json",
created: true,
});
const log = spyRuntimeLogs();
await runMemoryCli(["cortex", "enable", "--mode", "professional", "--max-chars", "2200"]);
expect(writeConfigFile).toHaveBeenCalledWith({
agents: {
defaults: {
cortex: {
enabled: true,
mode: "professional",
maxChars: 2200,
},
},
},
});
expect(log).toHaveBeenCalledWith(
"Enabled Cortex prompt bridge for agent defaults (professional, 2200 chars).",
);
expect(log).toHaveBeenCalledWith(
"Initialized Cortex graph: /tmp/openclaw-workspace/.cortex/context.json",
);
expect(ensureCortexGraphInitialized).toHaveBeenCalledWith({
workspaceDir: "/tmp/openclaw-workspace",
graphPath: undefined,
});
});
it("initializes a Cortex graph without changing config", async () => {
ensureCortexGraphInitialized.mockResolvedValueOnce({
graphPath: "/tmp/openclaw-workspace/.cortex/context.json",
created: false,
});
const log = spyRuntimeLogs();
await runMemoryCli(["cortex", "init"]);
expect(writeConfigFile).not.toHaveBeenCalled();
expect(ensureCortexGraphInitialized).toHaveBeenCalledWith({
workspaceDir: "/tmp/openclaw-workspace",
graphPath: undefined,
});
expect(log).toHaveBeenCalledWith(
"Cortex graph already present: /tmp/openclaw-workspace/.cortex/context.json",
);
});
it("initializes the configured Cortex graph path when --graph is omitted", async () => {
resolveAgentCortexConfig.mockReturnValue({
enabled: true,
graphPath: ".cortex/agent-main.json",
mode: "technical",
maxChars: 1500,
});
ensureCortexGraphInitialized.mockResolvedValueOnce({
graphPath: "/tmp/openclaw-workspace/.cortex/agent-main.json",
created: true,
});
const log = spyRuntimeLogs();
await runMemoryCli(["cortex", "init"]);
expect(ensureCortexGraphInitialized).toHaveBeenCalledWith({
workspaceDir: "/tmp/openclaw-workspace",
graphPath: ".cortex/agent-main.json",
});
expect(log).toHaveBeenCalledWith(
"Initialized Cortex graph: /tmp/openclaw-workspace/.cortex/agent-main.json",
);
});
it("disables Cortex prompt bridge for a specific agent", async () => {
mockWritableConfigSnapshot({
agents: {
list: [{ id: "oracle", cortex: { enabled: true, mode: "technical", maxChars: 1500 } }],
},
});
const log = spyRuntimeLogs();
await runMemoryCli(["cortex", "disable", "--agent", "oracle"]);
expect(writeConfigFile).toHaveBeenCalledWith({
agents: {
list: [{ id: "oracle", cortex: { enabled: false, mode: "technical", maxChars: 1500 } }],
},
});
expect(log).toHaveBeenCalledWith("Disabled Cortex prompt bridge for agent oracle.");
});
it("fails Cortex enable for an unknown agent", async () => {
mockWritableConfigSnapshot({
agents: {
list: [{ id: "main" }],
},
});
const error = spyRuntimeErrors();
await runMemoryCli(["cortex", "enable", "--agent", "oracle"]);
expect(writeConfigFile).not.toHaveBeenCalled();
expect(error).toHaveBeenCalledWith("Agent not found: oracle");
expect(process.exitCode).toBe(1);
});
it("sets a session-level Cortex mode override", async () => {
setCortexModeOverride.mockResolvedValueOnce({
agentId: "main",
scope: "session",
targetId: "session-1",
mode: "minimal",
updatedAt: "2026-03-08T23:00:00.000Z",
});
const log = spyRuntimeLogs();
await runMemoryCli(["cortex", "mode", "set", "minimal", "--session-id", "session-1"]);
expect(setCortexModeOverride).toHaveBeenCalledWith({
agentId: "main",
scope: "session",
targetId: "session-1",
mode: "minimal",
});
expect(log).toHaveBeenCalledWith(
"Set Cortex mode override for session session-1 to minimal (main).",
);
});
it("shows a stored channel-level Cortex mode override as json", async () => {
getCortexModeOverride.mockResolvedValueOnce({
agentId: "main",
scope: "channel",
targetId: "slack",
mode: "professional",
updatedAt: "2026-03-08T23:00:00.000Z",
});
const log = spyRuntimeLogs();
await runMemoryCli(["cortex", "mode", "show", "--channel", "slack", "--json"]);
const payload = firstLoggedJson(log);
expect(payload.agentId).toBe("main");
expect(payload.scope).toBe("channel");
expect(payload.targetId).toBe("slack");
expect(payload.override).toMatchObject({ mode: "professional" });
});
it("rejects ambiguous Cortex mode targets", async () => {
const error = spyRuntimeErrors();
await runMemoryCli([
"cortex",
"mode",
"set",
"technical",
"--session-id",
"session-1",
"--channel",
"slack",
]);
expect(setCortexModeOverride).not.toHaveBeenCalled();
expect(error).toHaveBeenCalledWith("Choose either --session-id or --channel, not both.");
expect(process.exitCode).toBe(1);
});
});
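The ambiguity rule the last test asserts ("Choose either --session-id or --channel, not both.") comes from the target-resolution helper added in the memory CLI. Restated on its own as an illustrative sketch (not the shipped module), it behaves like this:

```typescript
type CortexModeScope = "session" | "channel";

// Sketch of the CLI's target resolution: exactly one of sessionId/channel may be set.
function resolveCortexModeTarget(opts: { sessionId?: string; channel?: string }): {
  scope: CortexModeScope;
  targetId: string;
} {
  const sessionId = opts.sessionId?.trim();
  const channelId = opts.channel?.trim();
  if (sessionId && channelId) {
    throw new Error("Choose either --session-id or --channel, not both.");
  }
  if (sessionId) return { scope: "session", targetId: sessionId };
  if (channelId) return { scope: "channel", targetId: channelId };
  throw new Error("Missing target. Use --session-id <id> or --channel <id>.");
}

console.log(resolveCortexModeTarget({ sessionId: "session-1" }));
// { scope: "session", targetId: "session-1" }
```

Whitespace-only values are trimmed to empty strings before the checks, so `--session-id "  "` falls through to the missing-target error rather than producing a blank target id.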


@ -3,11 +3,24 @@ import fs from "node:fs/promises";
import os from "node:os";
import path from "node:path";
import type { Command } from "commander";
import { resolveDefaultAgentId } from "../agents/agent-scope.js";
import { loadConfig } from "../config/config.js";
import { resolveDefaultAgentId, resolveAgentWorkspaceDir } from "../agents/agent-scope.js";
import { resolveAgentCortexConfig } from "../agents/cortex.js";
import { loadConfig, readConfigFileSnapshot, writeConfigFile } from "../config/config.js";
import { resolveStateDir } from "../config/paths.js";
import { resolveSessionTranscriptsDirForAgent } from "../config/sessions/paths.js";
import { setVerbose } from "../globals.js";
import {
clearCortexModeOverride,
getCortexModeOverride,
setCortexModeOverride,
type CortexModeScope,
} from "../memory/cortex-mode-overrides.js";
import {
ensureCortexGraphInitialized,
getCortexStatus,
previewCortexContext,
type CortexPolicy,
} from "../memory/cortex.js";
import { getMemorySearchManager, type MemorySearchManagerResult } from "../memory/index.js";
import { listMemoryFiles, normalizeExtraMemoryPaths } from "../memory/internal.js";
import { defaultRuntime } from "../runtime.js";
@ -29,6 +42,24 @@ type MemoryCommandOptions = {
verbose?: boolean;
};
type CortexCommandOptions = {
agent?: string;
graph?: string;
json?: boolean;
};
type CortexEnableCommandOptions = CortexCommandOptions & {
mode?: CortexPolicy;
maxChars?: number;
};
type CortexModeCommandOptions = {
agent?: string;
sessionId?: string;
channel?: string;
json?: boolean;
};
type MemoryManager = NonNullable<MemorySearchManagerResult["manager"]>;
type MemoryManagerPurpose = Parameters<typeof getMemorySearchManager>[0]["purpose"];
@ -307,6 +338,331 @@ async function summarizeQmdIndexArtifact(manager: MemoryManager): Promise<string
return `QMD index: ${shortenHomePath(dbPath)} (${stat.size} bytes)`;
}
async function runCortexStatus(opts: CortexCommandOptions): Promise<void> {
const cfg = loadConfig();
const agentId = resolveAgent(cfg, opts.agent);
const workspaceDir = resolveAgentWorkspaceDir(cfg, agentId);
const status = await getCortexStatus({
workspaceDir,
graphPath: opts.graph,
});
if (opts.json) {
defaultRuntime.log(JSON.stringify({ agentId, ...status }, null, 2));
return;
}
const rich = isRich();
const heading = (text: string) => colorize(rich, theme.heading, text);
const muted = (text: string) => colorize(rich, theme.muted, text);
const info = (text: string) => colorize(rich, theme.info, text);
const success = (text: string) => colorize(rich, theme.success, text);
const warn = (text: string) => colorize(rich, theme.warn, text);
const label = (text: string) => muted(`${text}:`);
const lines = [
`${heading("Cortex Bridge")} ${muted(`(${agentId})`)}`,
`${label("CLI")} ${status.available ? success("ready") : warn("unavailable")}`,
`${label("Graph")} ${status.graphExists ? success("present") : warn("missing")}`,
`${label("Path")} ${info(shortenHomePath(status.graphPath))}`,
`${label("Workspace")} ${info(shortenHomePath(status.workspaceDir))}`,
];
if (status.error) {
lines.push(`${label("Error")} ${warn(status.error)}`);
}
defaultRuntime.log(lines.join("\n"));
}
async function runCortexPreview(
opts: CortexCommandOptions & {
mode?: CortexPolicy;
maxChars?: number;
},
): Promise<void> {
const cfg = loadConfig();
const agentId = resolveAgent(cfg, opts.agent);
const workspaceDir = resolveAgentWorkspaceDir(cfg, agentId);
try {
const preview = await previewCortexContext({
workspaceDir,
graphPath: opts.graph,
policy: opts.mode,
maxChars: opts.maxChars,
});
if (opts.json) {
defaultRuntime.log(JSON.stringify({ agentId, ...preview }, null, 2));
return;
}
if (!preview.context) {
defaultRuntime.log("No Cortex context available.");
return;
}
defaultRuntime.log(preview.context);
} catch (err) {
defaultRuntime.error(formatErrorMessage(err));
process.exitCode = 1;
}
}
async function runCortexInit(opts: CortexCommandOptions): Promise<void> {
const cfg = loadConfig();
const agentId = resolveAgent(cfg, opts.agent);
const workspaceDir = resolveAgentWorkspaceDir(cfg, agentId);
try {
const graphPath = opts.graph?.trim() || resolveAgentCortexConfig(cfg, agentId)?.graphPath;
const result = await ensureCortexGraphInitialized({
workspaceDir,
graphPath,
});
if (opts.json) {
defaultRuntime.log(JSON.stringify({ agentId, workspaceDir, ...result }, null, 2));
return;
}
defaultRuntime.log(
result.created
? `Initialized Cortex graph: ${shortenHomePath(result.graphPath)}`
: `Cortex graph already present: ${shortenHomePath(result.graphPath)}`,
);
} catch (err) {
defaultRuntime.error(formatErrorMessage(err));
process.exitCode = 1;
}
}
async function loadWritableMemoryConfig(): Promise<Record<string, unknown> | null> {
const snapshot = await readConfigFileSnapshot();
if (!snapshot.valid) {
defaultRuntime.error(
"Config invalid. Run `openclaw config validate` or `openclaw doctor` first.",
);
process.exitCode = 1;
return null;
}
return structuredClone(snapshot.resolved) as Record<string, unknown>;
}
function parseCortexMode(mode?: string): CortexPolicy {
if (mode === undefined) {
return "technical";
}
if (mode === "full" || mode === "professional" || mode === "technical" || mode === "minimal") {
return mode;
}
throw new Error(`Invalid Cortex mode: ${mode}`);
}
function normalizeCortexMaxChars(value?: number): number {
if (typeof value !== "number" || !Number.isFinite(value) || value <= 0) {
return 1_500;
}
return Math.min(8_000, Math.max(1, Math.floor(value)));
}
function resolveCortexModeTarget(opts: CortexModeCommandOptions): {
scope: CortexModeScope;
targetId: string;
} {
const sessionId = opts.sessionId?.trim();
const channelId = opts.channel?.trim();
if (sessionId && channelId) {
throw new Error("Choose either --session-id or --channel, not both.");
}
if (sessionId) {
return { scope: "session", targetId: sessionId };
}
if (channelId) {
return { scope: "channel", targetId: channelId };
}
throw new Error("Missing target. Use --session-id <id> or --channel <id>.");
}
function updateAgentCortexConfig(params: {
root: Record<string, unknown>;
agentId?: string;
updater: (current: Record<string, unknown>) => Record<string, unknown>;
}): void {
const agents = ((params.root.agents as Record<string, unknown> | undefined) ??= {});
if (params.agentId?.trim()) {
const list = Array.isArray(agents.list) ? (agents.list as Record<string, unknown>[]) : [];
const index = list.findIndex(
(entry) => typeof entry.id === "string" && entry.id === params.agentId?.trim(),
);
if (index === -1) {
throw new Error(`Agent not found: ${params.agentId}`);
}
const entry = list[index] ?? {};
list[index] = {
...entry,
cortex: params.updater((entry.cortex as Record<string, unknown> | undefined) ?? {}),
};
agents.list = list;
return;
}
const defaults = ((agents.defaults as Record<string, unknown> | undefined) ??= {});
defaults.cortex = params.updater((defaults.cortex as Record<string, unknown> | undefined) ?? {});
}
async function runCortexEnable(opts: CortexEnableCommandOptions): Promise<void> {
try {
const cfg = loadConfig();
const agentId = resolveAgent(cfg, opts.agent);
const workspaceDir = resolveAgentWorkspaceDir(cfg, agentId);
const next = await loadWritableMemoryConfig();
if (!next) {
return;
}
updateAgentCortexConfig({
root: next,
agentId: opts.agent,
updater: (current) => ({
...current,
enabled: true,
mode: parseCortexMode(opts.mode),
maxChars: normalizeCortexMaxChars(opts.maxChars),
...(opts.graph ? { graphPath: opts.graph } : {}),
}),
});
await writeConfigFile(next);
const initResult = await ensureCortexGraphInitialized({
workspaceDir,
graphPath: opts.graph,
});
const scope = opts.agent?.trim() ? `agent ${opts.agent.trim()}` : "agent defaults";
defaultRuntime.log(
`Enabled Cortex prompt bridge for ${scope} (${parseCortexMode(opts.mode)}, ${normalizeCortexMaxChars(opts.maxChars)} chars).`,
);
defaultRuntime.log(
initResult.created
? `Initialized Cortex graph: ${shortenHomePath(initResult.graphPath)}`
: `Cortex graph ready: ${shortenHomePath(initResult.graphPath)}`,
);
} catch (err) {
defaultRuntime.error(formatErrorMessage(err));
process.exitCode = 1;
}
}
async function runCortexDisable(opts: CortexCommandOptions): Promise<void> {
try {
const next = await loadWritableMemoryConfig();
if (!next) {
return;
}
updateAgentCortexConfig({
root: next,
agentId: opts.agent,
updater: (current) => ({
...current,
enabled: false,
}),
});
await writeConfigFile(next);
const scope = opts.agent?.trim() ? `agent ${opts.agent.trim()}` : "agent defaults";
defaultRuntime.log(`Disabled Cortex prompt bridge for ${scope}.`);
} catch (err) {
defaultRuntime.error(formatErrorMessage(err));
process.exitCode = 1;
}
}
async function runCortexModeShow(opts: CortexModeCommandOptions): Promise<void> {
try {
const cfg = loadConfig();
const agentId = resolveAgent(cfg, opts.agent);
const target = resolveCortexModeTarget(opts);
const override = await getCortexModeOverride({
agentId,
sessionId: target.scope === "session" ? target.targetId : undefined,
channelId: target.scope === "channel" ? target.targetId : undefined,
});
if (opts.json) {
defaultRuntime.log(
JSON.stringify(
{
agentId,
scope: target.scope,
targetId: target.targetId,
override,
},
null,
2,
),
);
return;
}
if (!override) {
defaultRuntime.log(`No Cortex mode override for ${target.scope} ${target.targetId}.`);
return;
}
defaultRuntime.log(
`Cortex mode override for ${target.scope} ${target.targetId}: ${override.mode} (${agentId})`,
);
} catch (err) {
defaultRuntime.error(formatErrorMessage(err));
process.exitCode = 1;
}
}
async function runCortexModeSet(mode: CortexPolicy, opts: CortexModeCommandOptions): Promise<void> {
try {
const cfg = loadConfig();
const agentId = resolveAgent(cfg, opts.agent);
const target = resolveCortexModeTarget(opts);
const next = await setCortexModeOverride({
agentId,
scope: target.scope,
targetId: target.targetId,
mode: parseCortexMode(mode),
});
if (opts.json) {
defaultRuntime.log(JSON.stringify(next, null, 2));
return;
}
defaultRuntime.log(
`Set Cortex mode override for ${target.scope} ${target.targetId} to ${next.mode} (${agentId}).`,
);
} catch (err) {
defaultRuntime.error(formatErrorMessage(err));
process.exitCode = 1;
}
}
async function runCortexModeReset(opts: CortexModeCommandOptions): Promise<void> {
try {
const cfg = loadConfig();
const agentId = resolveAgent(cfg, opts.agent);
const target = resolveCortexModeTarget(opts);
const removed = await clearCortexModeOverride({
agentId,
scope: target.scope,
targetId: target.targetId,
});
if (opts.json) {
defaultRuntime.log(
JSON.stringify(
{
agentId,
scope: target.scope,
targetId: target.targetId,
removed,
},
null,
2,
),
);
return;
}
if (!removed) {
defaultRuntime.log(`No Cortex mode override found for ${target.scope} ${target.targetId}.`);
return;
}
defaultRuntime.log(`Cleared Cortex mode override for ${target.scope} ${target.targetId}.`);
} catch (err) {
defaultRuntime.error(formatErrorMessage(err));
process.exitCode = 1;
}
}
async function scanMemorySources(params: {
workspaceDir: string;
agentId: string;
@ -590,6 +946,23 @@ export function registerMemoryCli(program: Command) {
"Limit results for focused troubleshooting.",
],
["openclaw memory status --json", "Output machine-readable JSON (good for scripts)."],
["openclaw memory cortex status", "Check local Cortex bridge availability."],
[
"openclaw memory cortex preview --mode technical",
"Preview filtered Cortex context for the active agent workspace.",
],
[
"openclaw memory cortex enable --mode technical",
"Turn on Cortex prompt injection without editing openclaw.json manually.",
],
[
"openclaw memory cortex mode set minimal --session-id abc123",
"Override Cortex mode for one OpenClaw session.",
],
[
"openclaw memory cortex mode set professional --channel slack",
"Override Cortex mode for a channel surface.",
],
])}\n\n${theme.muted("Docs:")} ${formatDocsLink("/cli/memory", "docs.openclaw.ai/cli/memory")}\n`,
);
@ -814,4 +1187,94 @@ export function registerMemoryCli(program: Command) {
});
},
);
const cortex = memory.command("cortex").description("Inspect the local Cortex memory bridge");
cortex
.command("status")
.description("Check Cortex CLI and graph availability")
.option("--agent <id>", "Agent id (default: default agent)")
.option("--graph <path>", "Override Cortex graph path")
.option("--json", "Print JSON")
.action(async (opts: CortexCommandOptions) => {
await runCortexStatus(opts);
});
cortex
.command("preview")
.description("Preview Cortex context export for the active workspace")
.option("--agent <id>", "Agent id (default: default agent)")
.option("--graph <path>", "Override Cortex graph path")
.option("--mode <mode>", "Context mode", "technical")
.option("--max-chars <n>", "Max characters", (value: string) => Number(value))
.option("--json", "Print JSON")
.action(
async (
opts: CortexCommandOptions & {
mode?: CortexPolicy;
maxChars?: number;
},
) => {
await runCortexPreview(opts);
},
);
cortex
.command("init")
.description("Create the default Cortex graph if it does not exist")
.option("--agent <id>", "Agent id (default: default agent)")
.option("--graph <path>", "Override Cortex graph path")
.option("--json", "Print JSON")
.action(async (opts: CortexCommandOptions) => {
await runCortexInit(opts);
});
cortex
.command("enable")
.description("Enable Cortex prompt context injection in config")
.option("--agent <id>", "Apply to a specific agent id instead of agent defaults")
.option("--graph <path>", "Override Cortex graph path")
.option("--mode <mode>", "Context mode", "technical")
.option("--max-chars <n>", "Max characters", (value: string) => Number(value))
.action(async (opts: CortexEnableCommandOptions) => {
await runCortexEnable(opts);
});
cortex
.command("disable")
.description("Disable Cortex prompt context injection in config")
.option("--agent <id>", "Apply to a specific agent id instead of agent defaults")
.action(async (opts: CortexCommandOptions) => {
await runCortexDisable(opts);
});
const cortexMode = cortex.command("mode").description("Manage runtime Cortex mode overrides");
const applyModeTargetOptions = (command: Command) =>
command
.option("--agent <id>", "Agent id (default: default agent)")
.option("--session-id <id>", "Apply override to a specific OpenClaw session")
.option("--channel <id>", "Apply override to a specific channel or surface")
.option("--json", "Print JSON");
applyModeTargetOptions(
cortexMode.command("show").description("Show the stored Cortex mode override for a target"),
).action(async (opts: CortexModeCommandOptions) => {
await runCortexModeShow(opts);
});
applyModeTargetOptions(
cortexMode.command("reset").description("Clear the stored Cortex mode override for a target"),
).action(async (opts: CortexModeCommandOptions) => {
await runCortexModeReset(opts);
});
applyModeTargetOptions(
cortexMode
.command("set")
.description("Set a runtime Cortex mode override for a target")
.argument("<mode>", "Mode (full|professional|technical|minimal)"),
).action(async (mode: CortexPolicy, opts: CortexModeCommandOptions) => {
await runCortexModeSet(mode, opts);
});
}


@@ -637,7 +637,9 @@ describe("update-cli", () => {
call[0][2] === "-g",
);
const updateOptions =
typeof updateCall?.[1] === "object" && updateCall[1] !== null ? updateCall[1] : undefined;
updateCall && typeof updateCall[1] === "object" && updateCall[1] !== null
? (updateCall[1] as { env?: Record<string, string | undefined> })
: undefined;
const mergedPath = updateOptions?.env?.Path ?? updateOptions?.env?.PATH ?? "";
expect(mergedPath.split(path.delimiter).slice(0, 2)).toEqual([
portableGitMingw,


@@ -11,6 +11,7 @@ export const CONFIGURE_WIZARD_SECTIONS = [
"workspace",
"model",
"web",
"memory",
"gateway",
"daemon",
"channels",
@@ -53,6 +54,11 @@ export const CONFIGURE_SECTION_OPTIONS: Array<{
{ value: "workspace", label: "Workspace", hint: "Set workspace + sessions" },
{ value: "model", label: "Model", hint: "Pick provider + credentials" },
{ value: "web", label: "Web tools", hint: "Configure web search (Perplexity/Brave) + fetch" },
{
value: "memory",
label: "Cortex memory",
hint: "Enable the local Cortex prompt bridge for agent context",
},
{ value: "gateway", label: "Gateway", hint: "Port, bind, auth, tailscale" },
{
value: "daemon",


@@ -158,6 +158,7 @@ describe("runConfigureWizard", () => {
mocks.resolveControlUiLinks.mockReturnValue({ wsUrl: "ws://127.0.0.1:18789" });
mocks.summarizeExistingConfig.mockReturnValue("");
mocks.createClackPrompter.mockReturnValue({});
mocks.ensureControlUiAssetsBuilt.mockResolvedValue({ ok: true });
const selectQueue = ["local", "__continue"];
mocks.clackSelect.mockImplementation(async () => selectQueue.shift());
@@ -182,6 +183,51 @@
);
});
it("configures Cortex memory through the wizard sections flow", async () => {
mocks.readConfigFileSnapshot.mockResolvedValue({
exists: false,
valid: true,
config: {},
issues: [],
});
mocks.resolveGatewayPort.mockReturnValue(18789);
mocks.probeGatewayReachable.mockResolvedValue({ ok: false });
mocks.resolveControlUiLinks.mockReturnValue({ wsUrl: "ws://127.0.0.1:18789" });
mocks.summarizeExistingConfig.mockReturnValue("");
mocks.createClackPrompter.mockReturnValue({});
const selectQueue = ["local", "technical"];
mocks.clackSelect.mockImplementation(async () => selectQueue.shift());
const confirmQueue = [true, true];
mocks.clackConfirm.mockImplementation(async () => confirmQueue.shift());
mocks.clackIntro.mockResolvedValue(undefined);
mocks.clackOutro.mockResolvedValue(undefined);
mocks.clackText.mockResolvedValue("2048");
await runConfigureWizard(
{ command: "configure", sections: ["memory"] },
{
log: vi.fn(),
error: vi.fn(),
exit: vi.fn(),
},
);
expect(mocks.writeConfigFile).toHaveBeenCalledWith(
expect.objectContaining({
agents: expect.objectContaining({
defaults: expect.objectContaining({
cortex: expect.objectContaining({
enabled: true,
mode: "technical",
maxChars: 2048,
}),
}),
}),
}),
);
});
it("exits with code 1 when configure wizard is cancelled", async () => {
const runtime = {
log: vi.fn(),
@@ -199,6 +245,7 @@ describe("runConfigureWizard", () => {
mocks.resolveControlUiLinks.mockReturnValue({ wsUrl: "ws://127.0.0.1:18789" });
mocks.summarizeExistingConfig.mockReturnValue("");
mocks.createClackPrompter.mockReturnValue({});
mocks.ensureControlUiAssetsBuilt.mockResolvedValue({ ok: true });
mocks.clackSelect.mockRejectedValueOnce(new WizardCancelledError());
await runConfigureWizard({ command: "configure" }, runtime);


@@ -4,6 +4,7 @@ import { formatCliCommand } from "../cli/command-format.js";
import type { OpenClawConfig } from "../config/config.js";
import { readConfigFileSnapshot, resolveGatewayPort, writeConfigFile } from "../config/config.js";
import { logConfigUpdated } from "../config/logging.js";
import type { AgentCortexConfig } from "../config/types.agent-defaults.js";
import { ensureControlUiAssetsBuilt } from "../infra/control-ui-assets.js";
import type { RuntimeEnv } from "../runtime.js";
import { defaultRuntime } from "../runtime.js";
@@ -47,6 +48,7 @@ import { promptRemoteGatewayConfig } from "./onboard-remote.js";
import { setupSkills } from "./onboard-skills.js";
type ConfigureSectionChoice = WizardSection | "__continue";
type CortexMode = NonNullable<AgentCortexConfig["mode"]>;
async function resolveGatewaySecretInputForWizard(params: {
cfg: OpenClawConfig;
@@ -320,6 +322,122 @@ async function promptWebToolsConfig(
};
}
async function promptCortexMemoryConfig(
nextConfig: OpenClawConfig,
runtime: RuntimeEnv,
workspaceDir: string,
): Promise<OpenClawConfig> {
const existing: AgentCortexConfig | undefined = nextConfig.agents?.defaults?.cortex;
const defaultGraphPath = nodePath.join(workspaceDir, ".cortex", "context.json");
const graphExists = await fsPromises
.access(defaultGraphPath)
.then(() => true)
.catch(() => false);
note(
[
"Cortex can prepend a filtered local memory graph into agent system prompts.",
`Default workspace graph: ${defaultGraphPath}`,
graphExists
? "A local Cortex graph was detected in this workspace."
: "No default Cortex graph was detected yet; you can still enable the bridge now.",
].join("\n"),
"Cortex memory",
);
const enable = guardCancel(
await confirm({
message: "Enable Cortex prompt bridge?",
initialValue: existing?.enabled ?? graphExists,
}),
runtime,
);
if (!enable) {
return {
...nextConfig,
agents: {
...nextConfig.agents,
defaults: {
...nextConfig.agents?.defaults,
cortex: {
...existing,
enabled: false,
},
},
},
};
}
const mode = guardCancel(
await select({
message: "Cortex prompt mode",
options: [
{ value: "technical", label: "Technical", hint: "Project and coding context" },
{ value: "professional", label: "Professional", hint: "Work-safe context slice" },
{ value: "minimal", label: "Minimal", hint: "Smallest safe context" },
{ value: "full", label: "Full", hint: "Largest context slice" },
],
initialValue: existing?.mode ?? "technical",
}),
runtime,
) as CortexMode;
const maxCharsInput = guardCancel(
await text({
message: "Cortex max prompt chars",
initialValue: String(existing?.maxChars ?? 1500),
validate: (value) => {
const parsed = Number.parseInt(String(value), 10);
if (!Number.isFinite(parsed) || parsed <= 0) {
return "Enter a positive integer";
}
return undefined;
},
}),
runtime,
);
const maxChars = Number.parseInt(String(maxCharsInput), 10);
const useDefaultGraph = guardCancel(
await confirm({
message: "Use the default workspace Cortex graph path?",
initialValue: !existing?.graphPath,
}),
runtime,
);
let graphPath: string | undefined;
if (!useDefaultGraph) {
const graphInput = guardCancel(
await text({
message: "Cortex graph path",
initialValue: existing?.graphPath ?? defaultGraphPath,
}),
runtime,
);
const trimmed = String(graphInput ?? "").trim();
graphPath = trimmed || defaultGraphPath;
}
return {
...nextConfig,
agents: {
...nextConfig.agents,
defaults: {
...nextConfig.agents?.defaults,
cortex: {
...existing,
enabled: true,
mode,
maxChars,
...(graphPath ? { graphPath } : {}),
},
},
},
};
}
export async function runConfigureWizard(
opts: ConfigureWizardParams,
runtime: RuntimeEnv = defaultRuntime,
@@ -547,6 +665,10 @@ export async function runConfigureWizard(
nextConfig = await promptWebToolsConfig(nextConfig, runtime);
}
if (selected.includes("memory")) {
nextConfig = await promptCortexMemoryConfig(nextConfig, runtime, workspaceDir);
}
if (selected.includes("gateway")) {
const gateway = await promptGatewayConfig(nextConfig, runtime);
nextConfig = gateway.config;
@@ -601,6 +723,11 @@ export async function runConfigureWizard(
await persistConfig();
}
if (choice === "memory") {
nextConfig = await promptCortexMemoryConfig(nextConfig, runtime, workspaceDir);
await persistConfig();
}
if (choice === "gateway") {
const gateway = await promptGatewayConfig(nextConfig, runtime);
nextConfig = gateway.config;

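As an aside on the wizard hunk above: the "Cortex max prompt chars" prompt parses its input twice, once inside the `validate` callback and once after `guardCancel` resolves. A minimal standalone sketch of that check, with a hypothetical helper name:

```typescript
// Hypothetical helper mirroring the wizard's maxChars validate() callback:
// only finite, strictly positive integers are accepted.
function parsePositiveInt(value: string): number | undefined {
  const parsed = Number.parseInt(String(value), 10);
  if (!Number.isFinite(parsed) || parsed <= 0) {
    return undefined;
  }
  return parsed;
}
```

Note that `Number.parseInt` truncates trailing text, so `"2048px"` still parses to `2048`; the wizard inherits the same behavior.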

@@ -1 +1,66 @@
export { loginOpenAICodexOAuth } from "../plugins/provider-openai-codex-oauth.js";
import type { OAuthCredentials } from "@mariozechner/pi-ai";
import { loginOpenAICodex } from "@mariozechner/pi-ai/oauth";
import type { RuntimeEnv } from "../runtime.js";
import type { WizardPrompter } from "../wizard/prompts.js";
import { createVpsAwareOAuthHandlers } from "./oauth-flow.js";
import {
formatOpenAIOAuthTlsPreflightFix,
runOpenAIOAuthTlsPreflight,
} from "./oauth-tls-preflight.js";
export async function loginOpenAICodexOAuth(params: {
prompter: WizardPrompter;
runtime: RuntimeEnv;
isRemote: boolean;
openUrl: (url: string) => Promise<void>;
localBrowserMessage?: string;
}): Promise<OAuthCredentials | null> {
const { prompter, runtime, isRemote, openUrl, localBrowserMessage } = params;
const preflight = await runOpenAIOAuthTlsPreflight();
if (!preflight.ok && preflight.kind === "tls-cert") {
const hint = formatOpenAIOAuthTlsPreflightFix(preflight);
runtime.error(hint);
await prompter.note(hint, "OAuth prerequisites");
throw new Error(preflight.message);
}
await prompter.note(
isRemote
? [
"You are running in a remote/VPS environment.",
"A URL will be shown for you to open in your LOCAL browser.",
"After signing in, paste the redirect URL back here.",
].join("\n")
: [
"Browser will open for OpenAI authentication.",
"If the callback doesn't auto-complete, paste the redirect URL.",
"OpenAI OAuth uses localhost:1455 for the callback.",
].join("\n"),
"OpenAI Codex OAuth",
);
const spin = prompter.progress("Starting OAuth flow…");
try {
const { onAuth: baseOnAuth, onPrompt } = createVpsAwareOAuthHandlers({
isRemote,
prompter,
runtime,
spin,
openUrl,
localBrowserMessage: localBrowserMessage ?? "Complete sign-in in browser…",
});
const creds = await loginOpenAICodex({
onAuth: baseOnAuth,
onPrompt,
onProgress: (msg: string) => spin.update(msg),
});
spin.stop("OpenAI OAuth complete");
return creds ?? null;
} catch (err) {
spin.stop("OpenAI OAuth failed");
runtime.error(String(err));
await prompter.note("Trouble with OAuth? See https://docs.openclaw.ai/start/faq", "OAuth help");
throw err;
}
}


@@ -1,7 +1,16 @@
import { resolveNodeService } from "../daemon/node-service.js";
import { resolveGatewayService } from "../daemon/service.js";
import {
NODE_SERVICE_KIND,
NODE_SERVICE_MARKER,
NODE_WINDOWS_TASK_SCRIPT_NAME,
resolveNodeLaunchAgentLabel,
resolveNodeSystemdServiceName,
resolveNodeWindowsTaskName,
} from "../daemon/constants.js";
import { formatDaemonRuntimeShort } from "./status.format.js";
import { readServiceStatusSummary } from "./status.service-summary.js";
import {
readServiceStatusSummary,
type GatewayServiceStatusReader,
} from "./status.service-summary.js";
type DaemonStatusSummary = {
label: string;
@@ -12,10 +21,81 @@
runtimeShort: string | null;
};
function withNodeServiceEnv(
env: Record<string, string | undefined>,
): Record<string, string | undefined> {
return {
...env,
OPENCLAW_LAUNCHD_LABEL: resolveNodeLaunchAgentLabel(),
OPENCLAW_SYSTEMD_UNIT: resolveNodeSystemdServiceName(),
OPENCLAW_WINDOWS_TASK_NAME: resolveNodeWindowsTaskName(),
OPENCLAW_TASK_SCRIPT_NAME: NODE_WINDOWS_TASK_SCRIPT_NAME,
OPENCLAW_LOG_PREFIX: "node",
OPENCLAW_SERVICE_MARKER: NODE_SERVICE_MARKER,
OPENCLAW_SERVICE_KIND: NODE_SERVICE_KIND,
};
}
async function loadGatewayServiceStatusReader(): Promise<GatewayServiceStatusReader> {
if (process.platform === "darwin") {
const launchd = await import("../daemon/launchd.js");
return {
label: "LaunchAgent",
loadedText: "loaded",
notLoadedText: "not loaded",
isLoaded: launchd.isLaunchAgentLoaded,
readCommand: launchd.readLaunchAgentProgramArguments,
readRuntime: launchd.readLaunchAgentRuntime,
};
}
if (process.platform === "linux") {
const systemd = await import("../daemon/systemd.js");
return {
label: "systemd",
loadedText: "enabled",
notLoadedText: "disabled",
isLoaded: systemd.isSystemdServiceEnabled,
readCommand: systemd.readSystemdServiceExecStart,
readRuntime: systemd.readSystemdServiceRuntime,
};
}
if (process.platform === "win32") {
const schtasks = await import("../daemon/schtasks.js");
return {
label: "Scheduled Task",
loadedText: "registered",
notLoadedText: "missing",
isLoaded: schtasks.isScheduledTaskInstalled,
readCommand: schtasks.readScheduledTaskCommand,
readRuntime: schtasks.readScheduledTaskRuntime,
};
}
throw new Error(`Gateway service install not supported on ${process.platform}`);
}
async function loadNodeServiceStatusReader(): Promise<GatewayServiceStatusReader> {
const base = await loadGatewayServiceStatusReader();
return {
...base,
isLoaded: async (args) => {
return base.isLoaded({ env: withNodeServiceEnv(args.env ?? {}) });
},
readCommand: async (env) => {
return base.readCommand(withNodeServiceEnv(env));
},
readRuntime: async (env) => {
return base.readRuntime(withNodeServiceEnv(env));
},
};
}
async function buildDaemonStatusSummary(
serviceLabel: "gateway" | "node",
): Promise<DaemonStatusSummary> {
const service = serviceLabel === "gateway" ? resolveGatewayService() : resolveNodeService();
const service =
serviceLabel === "gateway"
? await loadGatewayServiceStatusReader()
: await loadNodeServiceStatusReader();
const fallbackLabel = serviceLabel === "gateway" ? "Daemon" : "Node";
const summary = await readServiceStatusSummary(service, fallbackLabel);
return {

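The platform dispatch in `loadGatewayServiceStatusReader` above reduces to a switch on `process.platform`. A standalone sketch, with labels and the error message taken from the hunk and a hypothetical helper name:

```typescript
// Hypothetical sketch of the per-platform service reader labels used above.
function readerLabelFor(platform: string): string {
  if (platform === "darwin") return "LaunchAgent"; // launchd LaunchAgent
  if (platform === "linux") return "systemd"; // systemd unit
  if (platform === "win32") return "Scheduled Task"; // schtasks task
  throw new Error(`Gateway service install not supported on ${platform}`);
}
```

The node-service variant keeps the same dispatch and only layers node-specific env vars on top, which is why it wraps the gateway reader instead of duplicating it.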

@@ -134,8 +134,8 @@ vi.mock("../agents/memory-search.js", () => ({
resolveMemorySearchConfig: mocks.resolveMemorySearchConfig,
}));
vi.mock("../gateway/call.js", () => ({
buildGatewayConnectionDetails: mocks.buildGatewayConnectionDetails,
vi.mock("../gateway/connection-details.js", () => ({
buildGatewayConnectionDetailsFromConfig: mocks.buildGatewayConnectionDetails,
}));
vi.mock("../gateway/probe.js", () => ({


@@ -1,6 +1,9 @@
import { existsSync } from "node:fs";
import type { OpenClawConfig } from "../config/types.js";
import { buildGatewayConnectionDetails } from "../gateway/call.js";
import {
buildGatewayConnectionDetailsFromConfig,
type GatewayConnectionDetails,
} from "../gateway/connection-details.js";
import { normalizeControlUiBasePath } from "../gateway/control-ui-shared.js";
import { probeGateway } from "../gateway/probe.js";
import type { MemoryProviderStatus } from "../memory/types.js";
@@ -20,7 +23,7 @@ export type MemoryPluginStatus = {
};
export type GatewayProbeSnapshot = {
gatewayConnection: ReturnType<typeof buildGatewayConnectionDetails>;
gatewayConnection: GatewayConnectionDetails;
remoteUrlMissing: boolean;
gatewayMode: "local" | "remote";
gatewayProbeAuth: {
@@ -60,7 +63,7 @@ export async function resolveGatewayProbeSnapshot(params: {
cfg: OpenClawConfig;
opts: { timeoutMs?: number; all?: boolean };
}): Promise<GatewayProbeSnapshot> {
const gatewayConnection = buildGatewayConnectionDetails({ config: params.cfg });
const gatewayConnection = buildGatewayConnectionDetailsFromConfig({ config: params.cfg });
const isRemoteMode = params.cfg.gateway?.mode === "remote";
const remoteUrlRaw =
typeof params.cfg.gateway?.remote?.url === "string" ? params.cfg.gateway.remote.url : "";


@@ -79,8 +79,11 @@ vi.mock("./status.scan.deps.runtime.js", () => ({
getMemorySearchManager: mocks.getMemorySearchManager,
}));
vi.mock("../gateway/connection-details.js", () => ({
buildGatewayConnectionDetailsFromConfig: mocks.buildGatewayConnectionDetails,
}));
vi.mock("../gateway/call.js", () => ({
buildGatewayConnectionDetails: mocks.buildGatewayConnectionDetails,
callGateway: mocks.callGateway,
}));


@@ -1,5 +1,18 @@
import type { GatewayServiceRuntime } from "../daemon/service-runtime.js";
import type { GatewayService } from "../daemon/service.js";
import type {
GatewayServiceCommandConfig,
GatewayServiceEnv,
GatewayServiceEnvArgs,
} from "../daemon/service-types.js";
export type GatewayServiceStatusReader = {
label: string;
loadedText: string;
notLoadedText: string;
isLoaded: (args: GatewayServiceEnvArgs) => Promise<boolean>;
readCommand: (env: GatewayServiceEnv) => Promise<GatewayServiceCommandConfig | null>;
readRuntime: (env: GatewayServiceEnv) => Promise<GatewayServiceRuntime | undefined>;
};
export type ServiceStatusSummary = {
label: string;
@@ -12,7 +25,7 @@ export type ServiceStatusSummary = {
};
export async function readServiceStatusSummary(
service: GatewayService,
service: GatewayServiceStatusReader,
fallbackLabel: string,
): Promise<ServiceStatusSummary> {
try {


@@ -51,6 +51,39 @@ describe("config schema regressions", () => {
expect(res.ok).toBe(true);
});
it("accepts memorySearch outputDimensionality", () => {
const res = validateConfigObject({
agents: {
defaults: {
memorySearch: {
provider: "gemini",
outputDimensionality: 768,
},
},
},
});
expect(res.ok).toBe(true);
});
it("accepts memorySearch sync.sessions.postCompactionForce", () => {
const res = validateConfigObject({
agents: {
defaults: {
memorySearch: {
sync: {
sessions: {
postCompactionForce: false,
},
},
},
},
},
});
expect(res.ok).toBe(true);
});
it("accepts safe iMessage remoteHost", () => {
const res = validateConfigObject({
channels: {


@@ -667,10 +667,33 @@ export const FIELD_HELP: Record<string, string> = {
"tools.message.broadcast.enabled": "Enable broadcast action (default: true).",
"tools.web.search.enabled": "Enable the web_search tool (requires a provider API key).",
"tools.web.search.provider":
"Search provider id. Auto-detected from available API keys if omitted.",
"tools.web.search.maxResults": "Number of results to return (1-10).",
'Search provider ("brave", "firecrawl", "gemini", "grok", "kimi", or "perplexity"). Auto-detected from available API keys if omitted.',
"tools.web.search.apiKey": "Brave Search API key (fallback: BRAVE_API_KEY env var).",
"tools.web.search.maxResults": "Default number of results to return (1-10).",
"tools.web.search.timeoutSeconds": "Timeout in seconds for web_search requests.",
"tools.web.search.cacheTtlMinutes": "Cache TTL in minutes for web_search results.",
"tools.web.search.firecrawl.apiKey":
"Firecrawl API key for web search (fallback: FIRECRAWL_API_KEY env var).",
"tools.web.search.firecrawl.baseUrl":
'Firecrawl Search base URL override (default: "https://api.firecrawl.dev").',
"tools.web.search.brave.mode":
'Brave Search mode: "web" (URL results) or "llm-context" (pre-extracted page content for LLM grounding).',
"tools.web.search.gemini.apiKey":
"Gemini API key for Google Search grounding (fallback: GEMINI_API_KEY env var).",
"tools.web.search.gemini.model": 'Gemini model override (default: "gemini-2.5-flash").',
"tools.web.search.grok.apiKey": "Grok (xAI) API key (fallback: XAI_API_KEY env var).", // pragma: allowlist secret
"tools.web.search.grok.model": 'Grok model override (default: "grok-4-1-fast").',
"tools.web.search.kimi.apiKey":
"Moonshot/Kimi API key (fallback: KIMI_API_KEY or MOONSHOT_API_KEY env var).",
"tools.web.search.kimi.baseUrl":
'Kimi base URL override (default: "https://api.moonshot.ai/v1").',
"tools.web.search.kimi.model": 'Kimi model override (default: "moonshot-v1-128k").',
"tools.web.search.perplexity.apiKey":
"Perplexity or OpenRouter API key (fallback: PERPLEXITY_API_KEY or OPENROUTER_API_KEY env var). Direct Perplexity keys default to the Search API; OpenRouter keys use Sonar chat completions.",
"tools.web.search.perplexity.baseUrl":
"Optional Perplexity/OpenRouter chat-completions base URL override. Setting this opts Perplexity into the legacy Sonar/OpenRouter compatibility path.",
"tools.web.search.perplexity.model":
'Optional Sonar/OpenRouter model override (default: "perplexity/sonar-pro"). Setting this opts Perplexity into the legacy chat-completions compatibility path.',
"tools.web.fetch.enabled": "Enable the web_fetch tool (lightweight HTTP fetch).",
"tools.web.fetch.maxChars": "Max characters returned by web_fetch (truncated).",
"tools.web.fetch.maxCharsCap":
@@ -774,20 +797,30 @@ export const FIELD_HELP: Record<string, string> = {
"agents.defaults.models": "Configured model catalog (keys are full provider/model IDs).",
"agents.defaults.memorySearch":
"Vector search over MEMORY.md and memory/*.md (per-agent overrides supported).",
"agents.defaults.cortex":
"Optional Cortex prompt bridge that injects filtered context from a local Cortex graph into the agent system prompt. Keep this off unless you intentionally want OpenClaw to reuse Cortex-managed identity or memory context.",
"agents.defaults.cortex.enabled":
"Enables Cortex prompt-context injection for this agent profile. Keep disabled by default and enable only when a local Cortex graph is available for the workspace.",
"agents.defaults.cortex.graphPath":
"Optional Cortex graph JSON path. Relative paths resolve from the agent workspace; leave unset to use .cortex/context.json inside the workspace.",
"agents.defaults.cortex.mode":
'Disclosure mode used when exporting Cortex context into the prompt: "technical", "professional", "minimal", or "full". Use narrower modes unless you intentionally want broader context sharing.',
"agents.defaults.cortex.maxChars":
"Maximum number of Cortex-exported characters injected into the system prompt. Keep this bounded so prompt overhead stays predictable.",
"agents.defaults.memorySearch.enabled":
"Master toggle for memory search indexing and retrieval behavior on this agent profile. Keep enabled for semantic recall, and disable when you want fully stateless responses.",
"agents.defaults.memorySearch.sources":
'Chooses which sources are indexed: "memory" reads MEMORY.md + memory files, and "sessions" includes transcript history. Keep ["memory"] unless you need recall from prior chat transcripts.',
"agents.defaults.memorySearch.extraPaths":
"Adds extra directories or .md files to the memory index beyond default memory files. Use this when key reference docs live elsewhere in your repo; when multimodal memory is enabled, matching image/audio files under these paths are also eligible for indexing.",
"Adds extra directories or .md files to the memory index beyond default memory files. Use this when key reference docs live elsewhere in your repo; keep paths small and intentional to avoid noisy recall.",
"agents.defaults.memorySearch.multimodal":
'Optional multimodal memory settings for indexing image and audio files from configured extra paths. Keep this off unless your embedding model explicitly supports cross-modal embeddings, and set `memorySearch.fallback` to "none" while it is enabled. Matching files are uploaded to the configured remote embedding provider during indexing.',
"Optional multimodal indexing for image/audio files discovered through memorySearch.extraPaths. Enable this only when you intentionally want Gemini multimodal embeddings to include image or audio reference files alongside markdown memory.",
"agents.defaults.memorySearch.multimodal.enabled":
"Enables image/audio memory indexing from extraPaths. This currently requires Gemini embedding-2, keeps the default memory roots Markdown-only, disables memory-search fallback providers, and uploads matching binary content to the configured remote embedding provider.",
"Turns on multimodal extra-path indexing for supported image/audio file types. Keep this off unless the selected memory embedding provider and model support structured multimodal inputs.",
"agents.defaults.memorySearch.multimodal.modalities":
'Selects which multimodal file types are indexed from extraPaths: "image", "audio", or "all". Keep this narrow to avoid indexing large binary corpora unintentionally.',
'Chooses which non-markdown media kinds are indexed from extra paths: "image", "audio", or both via "all". Limit this to the media you actually want searchable so indexing stays focused and cheap.',
"agents.defaults.memorySearch.multimodal.maxFileBytes":
"Sets the maximum bytes allowed per multimodal file before it is skipped during memory indexing. Use this to cap upload cost and indexing latency, or raise it for short high-quality audio clips.",
"Maximum file size accepted for multimodal memory indexing before the file is skipped. Keep this lower when large media files would bloat embedding payloads or hit provider limits, and raise it only when you intentionally need bigger image or audio files searchable.",
"agents.defaults.memorySearch.experimental.sessionMemory":
"Indexes session transcripts into memory search so responses can reference prior chat turns. Keep this off unless transcript recall is needed, because indexing cost and storage usage both increase.",
"agents.defaults.memorySearch.provider":
@@ -795,7 +828,7 @@ export const FIELD_HELP: Record<string, string> = {
"agents.defaults.memorySearch.model":
"Embedding model override used by the selected memory provider when a non-default model is required. Set this only when you need explicit recall quality/cost tuning beyond provider defaults.",
"agents.defaults.memorySearch.outputDimensionality":
"Gemini embedding-2 only: chooses the output vector size for memory embeddings. Use 768, 1536, or 3072 (default), and expect a full reindex when you change it because stored vector dimensions must stay consistent.",
"Optional embedding dimension override for providers and models that support configurable output size, such as Gemini embedding v2. Use this only when you intentionally need a smaller vector footprint or strict dimension compatibility with an existing memory index.",
"agents.defaults.memorySearch.remote.baseUrl":
"Overrides the embedding API endpoint, such as an OpenAI-compatible proxy or custom Gemini base URL. Use this only when routing through your own gateway or vendor endpoint; keep provider defaults otherwise.",
"agents.defaults.memorySearch.remote.apiKey":
@@ -1426,17 +1459,17 @@ export const FIELD_HELP: Record<string, string> = {
"channels.telegram.capabilities.inlineButtons":
"Enable Telegram inline button components for supported command and interaction surfaces. Disable if your deployment needs plain-text-only compatibility behavior.",
"channels.telegram.execApprovals":
"Telegram-native exec approval routing and approver authorization. Enable this only when Telegram should act as an explicit exec-approval client for the selected bot account.",
"Telegram-side execution approval routing for commands that require a human approver before running. Use this when Telegram is part of your operator approval flow and keep the filters narrow so approval prompts only reach the right reviewers.",
"channels.telegram.execApprovals.enabled":
"Enable Telegram exec approvals for this account. When false or unset, Telegram messages/buttons cannot approve exec requests.",
"Enables Telegram delivery of execution-approval requests when command approvals are pending. Keep this disabled unless Telegram operators are expected to review and approve tool execution from chat.",
"channels.telegram.execApprovals.approvers":
"Telegram user IDs allowed to approve exec requests for this bot account. Use numeric Telegram user IDs; prompts are only delivered to these approvers when target includes dm.",
"Allowlist of Telegram user or chat identities permitted to receive and act on execution-approval prompts. Keep this list explicit so approval authority does not drift to unintended accounts.",
"channels.telegram.execApprovals.agentFilter":
'Optional allowlist of agent IDs eligible for Telegram exec approvals, for example `["main", "ops-agent"]`. Use this to keep approval prompts scoped to the agents you actually operate from Telegram.',
"Optional agent filter that limits which agents can emit Telegram execution-approval requests. Use this to keep sensitive approval workflows tied to only the agents that need operator review.",
"channels.telegram.execApprovals.sessionFilter":
"Optional session-key filters matched as substring or regex-style patterns before Telegram approval routing is used. Use narrow patterns so Telegram approvals only appear for intended sessions.",
"Optional session filter that narrows which conversations may trigger Telegram approval requests. Use this to keep noisy or untrusted sessions from generating approval prompts in operator chats.",
"channels.telegram.execApprovals.target":
'Controls where Telegram approval prompts are sent: "dm" sends to approver DMs (default), "channel" sends to the originating Telegram chat/topic, and "both" sends to both. Channel delivery exposes the command text to the chat, so only use it in trusted groups/topics.',
"Telegram destination used for approval prompts, such as a specific operator DM or admin group/thread. Point this at a controlled channel where approvers already expect to handle execution requests.",
"channels.slack.configWrites":
"Allow Slack to write config in response to channel events/commands (default: true).",
"channels.slack.botToken":

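Pulling the new `tools.web.search` help entries above together, a config fragment might look like the following sketch. The field names and allowed values come from the help text; the concrete numbers and provider choice are illustrative only:

```json
{
  "tools": {
    "web": {
      "search": {
        "enabled": true,
        "provider": "brave",
        "maxResults": 5,
        "brave": { "mode": "llm-context" },
        "perplexity": { "model": "perplexity/sonar-pro" }
      }
    }
  }
}
```

Per the help text, `provider` can be omitted to auto-detect from available API keys, and per-provider keys fall back to env vars such as `BRAVE_API_KEY`.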

@@ -216,9 +216,23 @@ export const FIELD_LABELS: Record<string, string> = {
"tools.message.broadcast.enabled": "Enable Message Broadcast",
"tools.web.search.enabled": "Enable Web Search Tool",
"tools.web.search.provider": "Web Search Provider",
"tools.web.search.apiKey": "Brave Search API Key",
"tools.web.search.maxResults": "Web Search Max Results",
"tools.web.search.timeoutSeconds": "Web Search Timeout (sec)",
"tools.web.search.cacheTtlMinutes": "Web Search Cache TTL (min)",
"tools.web.search.firecrawl.apiKey": "Web Search Firecrawl API Key", // pragma: allowlist secret
"tools.web.search.firecrawl.baseUrl": "Web Search Firecrawl Base URL",
"tools.web.search.brave.mode": "Brave Search Mode",
"tools.web.search.gemini.apiKey": "Web Search Gemini API Key", // pragma: allowlist secret
"tools.web.search.gemini.model": "Web Search Gemini Model",
"tools.web.search.grok.apiKey": "Web Search Grok API Key", // pragma: allowlist secret
"tools.web.search.grok.model": "Web Search Grok Model",
"tools.web.search.kimi.apiKey": "Web Search Kimi API Key", // pragma: allowlist secret
"tools.web.search.kimi.baseUrl": "Web Search Kimi Base URL",
"tools.web.search.kimi.model": "Web Search Kimi Model",
"tools.web.search.perplexity.apiKey": "Web Search Perplexity API Key", // pragma: allowlist secret
"tools.web.search.perplexity.baseUrl": "Web Search Perplexity Base URL",
"tools.web.search.perplexity.model": "Web Search Perplexity Model",
"tools.web.fetch.enabled": "Enable Web Fetch Tool",
"tools.web.fetch.maxChars": "Web Fetch Max Chars",
"tools.web.fetch.maxCharsCap": "Web Fetch Hard Max Chars",
@@ -311,17 +325,23 @@ export const FIELD_LABELS: Record<string, string> = {
"agents.defaults.envelopeTimezone": "Envelope Timezone",
"agents.defaults.envelopeTimestamp": "Envelope Timestamp",
"agents.defaults.envelopeElapsed": "Envelope Elapsed",
"agents.defaults.cortex": "Cortex Prompt Bridge",
"agents.defaults.cortex.enabled": "Enable Cortex Prompt Bridge",
"agents.defaults.cortex.graphPath": "Cortex Graph Path",
"agents.defaults.cortex.mode": "Cortex Prompt Mode",
"agents.defaults.cortex.maxChars": "Cortex Prompt Max Chars",
"agents.defaults.memorySearch": "Memory Search",
"agents.defaults.memorySearch.enabled": "Enable Memory Search",
"agents.defaults.memorySearch.sources": "Memory Search Sources",
"agents.defaults.memorySearch.extraPaths": "Extra Memory Paths",
"agents.defaults.memorySearch.multimodal": "Memory Search Multimodal",
"agents.defaults.memorySearch.multimodal.enabled": "Enable Memory Search Multimodal",
"agents.defaults.memorySearch.multimodal.modalities": "Memory Search Multimodal Modalities",
"agents.defaults.memorySearch.multimodal.maxFileBytes": "Memory Search Multimodal Max File Bytes",
"agents.defaults.memorySearch.multimodal": "Multimodal Memory Search",
"agents.defaults.memorySearch.multimodal.enabled": "Enable Multimodal Memory Search",
"agents.defaults.memorySearch.multimodal.modalities": "Multimodal Memory Modalities",
"agents.defaults.memorySearch.multimodal.maxFileBytes": "Multimodal Memory Max File Size (bytes)",
"agents.defaults.memorySearch.experimental.sessionMemory":
"Memory Search Session Index (Experimental)",
"agents.defaults.memorySearch.provider": "Memory Search Provider",
"agents.defaults.memorySearch.outputDimensionality": "Memory Search Output Dimensionality",
"agents.defaults.memorySearch.remote.baseUrl": "Remote Embedding Base URL",
"agents.defaults.memorySearch.remote.apiKey": "Remote Embedding API Key",
"agents.defaults.memorySearch.remote.headers": "Remote Embedding Headers",
@ -331,7 +351,6 @@ export const FIELD_LABELS: Record<string, string> = {
"agents.defaults.memorySearch.remote.batch.pollIntervalMs": "Remote Batch Poll Interval (ms)",
"agents.defaults.memorySearch.remote.batch.timeoutMinutes": "Remote Batch Timeout (min)",
"agents.defaults.memorySearch.model": "Memory Search Model",
"agents.defaults.memorySearch.outputDimensionality": "Memory Search Output Dimensionality",
"agents.defaults.memorySearch.fallback": "Memory Search Fallback",
"agents.defaults.memorySearch.local.modelPath": "Local Embedding Model Path",
"agents.defaults.memorySearch.store.path": "Memory Search Index Path",


@ -8,6 +8,17 @@ import type {
} from "./types.base.js";
import type { MemorySearchConfig } from "./types.tools.js";
export type AgentCortexConfig = {
/** Enable Cortex-backed prompt context injection for this agent. */
enabled?: boolean;
/** Optional Cortex graph path (absolute or relative to the agent workspace). */
graphPath?: string;
/** Disclosure mode used when exporting Cortex context. */
mode?: "full" | "professional" | "technical" | "minimal";
/** Max characters exported into the system prompt. */
maxChars?: number;
};
export type AgentModelEntryConfig = {
alias?: string;
/** Provider-specific API parameters (e.g., GLM-4.7 thinking mode). */
@ -185,6 +196,8 @@ export type AgentDefaultsConfig = {
};
/** Vector memory search configuration (per-agent overrides supported). */
memorySearch?: MemorySearchConfig;
/** Optional Cortex-backed prompt context injection (per-agent overrides supported). */
cortex?: AgentCortexConfig;
/** Default thinking level when no /think directive is present. */
thinkingDefault?: "off" | "minimal" | "low" | "medium" | "high" | "xhigh" | "adaptive";
/** Default verbose level when no /verbose directive is present. */
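The `AgentCortexConfig` fields above map directly onto config keys. A minimal sketch of what a user-facing config might look like, with `cortex` set under `agents.defaults` (the field names come from the type above; the path and limits are illustrative assumptions, not defaults confirmed by this diff):

```json
{
  "agents": {
    "defaults": {
      "cortex": {
        "enabled": true,
        "graphPath": ".cortex/context.json",
        "mode": "technical",
        "maxChars": 2000
      }
    }
  }
}
```

Per-agent overrides use the same shape via the `cortex` field added to `AgentConfig`.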


@ -1,5 +1,5 @@
import type { ChatType } from "../channels/chat-type.js";
import type { AgentDefaultsConfig } from "./types.agent-defaults.js";
import type { AgentCortexConfig, AgentDefaultsConfig } from "./types.agent-defaults.js";
import type { AgentModelConfig, AgentSandboxConfig } from "./types.agents-shared.js";
import type { HumanDelayConfig, IdentityConfig } from "./types.base.js";
import type { GroupChatConfig } from "./types.messages.js";
@ -68,6 +68,7 @@ export type AgentConfig = {
/** Optional allowlist of skills for this agent (omit = all skills; empty = none). */
skills?: string[];
memorySearch?: MemorySearchConfig;
cortex?: AgentCortexConfig;
/** Human-like delay between block replies for this agent. */
humanDelay?: HumanDelayConfig;
/** Optional per-agent heartbeat overrides. */


@ -319,15 +319,6 @@ export type MemorySearchConfig = {
sources?: Array<"memory" | "sessions">;
/** Extra paths to include in memory search (directories or .md files). */
extraPaths?: string[];
/** Optional multimodal file indexing for selected extra paths. */
multimodal?: {
/** Enable image/audio embeddings from extraPaths. */
enabled?: boolean;
/** Which non-text file types to index. */
modalities?: Array<"image" | "audio" | "all">;
/** Max bytes allowed per multimodal file before it is skipped. */
maxFileBytes?: number;
};
/** Experimental memory search settings. */
experimental?: {
/** Enable session transcript indexing (experimental, default: false). */
@ -356,11 +347,14 @@ export type MemorySearchConfig = {
fallback?: "openai" | "gemini" | "local" | "voyage" | "mistral" | "ollama" | "none";
/** Embedding model id (remote) or alias (local). */
model?: string;
/**
* Gemini embedding-2 models only: output vector dimensions.
* Supported values today are 768, 1536, and 3072.
*/
/** Optional embedding output dimensionality override (for providers that support it). */
outputDimensionality?: number;
/** Optional multimodal indexing for image/audio files in extra paths. */
multimodal?: {
enabled?: boolean;
modalities?: Array<"image" | "audio" | "all">;
maxFileBytes?: number;
};
/** Local embedding settings (node-llama-cpp). */
local?: {
/** GGUF model path or hf: URI. */
@ -402,7 +396,7 @@ export type MemorySearchConfig = {
deltaBytes?: number;
/** Minimum appended JSONL lines before session transcripts are reindexed. */
deltaMessages?: number;
/** Force session reindex after compaction-triggered transcript updates (default: true). */
/** Force session-memory sync after compaction even if deltas are below threshold (default: true). */
postCompactionForce?: boolean;
};
};
@ -475,6 +469,7 @@ export type ToolsConfig = {
timeoutSeconds?: number;
/** Cache TTL in minutes for search results. */
cacheTtlMinutes?: number;
/** Provider-specific configuration (used when provider="brave"). */
/** @deprecated Legacy Brave scoped config. */
brave?: WebSearchLegacyProviderConfig;
/** @deprecated Legacy Firecrawl scoped config. */
@ -509,7 +504,7 @@ export type ToolsConfig = {
/** Enable Firecrawl fallback (default: true when apiKey is set). */
enabled?: boolean;
/** Firecrawl API key (optional; defaults to FIRECRAWL_API_KEY env var). */
apiKey?: SecretInput;
apiKey?: string;
/** Firecrawl base URL (default: https://api.firecrawl.dev). */
baseUrl?: string;
/** Whether to keep only main content (default: true). */
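With the `multimodal` block moved next to the embedding-provider settings, a `memorySearch` config would nest it as follows. This is a sketch based on the relocated schema; the paths and the 10 MB cap are illustrative assumptions:

```json
{
  "agents": {
    "defaults": {
      "memorySearch": {
        "enabled": true,
        "extraPaths": ["~/notes"],
        "multimodal": {
          "enabled": true,
          "modalities": ["image", "audio"],
          "maxFileBytes": 10485760
        }
      }
    }
  }
}
```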


@ -52,6 +52,22 @@ export const AgentDefaultsSchema = z
contextTokens: z.number().int().positive().optional(),
cliBackends: z.record(z.string(), CliBackendSchema).optional(),
memorySearch: MemorySearchSchema,
cortex: z
.object({
enabled: z.boolean().optional(),
graphPath: z.string().optional(),
mode: z
.union([
z.literal("full"),
z.literal("professional"),
z.literal("technical"),
z.literal("minimal"),
])
.optional(),
maxChars: z.number().int().positive().optional(),
})
.strict()
.optional(),
contextPruning: z
.object({
mode: z.union([z.literal("off"), z.literal("cache-ttl")]).optional(),


@ -588,16 +588,6 @@ export const MemorySearchSchema = z
enabled: z.boolean().optional(),
sources: z.array(z.union([z.literal("memory"), z.literal("sessions")])).optional(),
extraPaths: z.array(z.string()).optional(),
multimodal: z
.object({
enabled: z.boolean().optional(),
modalities: z
.array(z.union([z.literal("image"), z.literal("audio"), z.literal("all")]))
.optional(),
maxFileBytes: z.number().int().positive().optional(),
})
.strict()
.optional(),
experimental: z
.object({
sessionMemory: z.boolean().optional(),
@ -645,6 +635,16 @@ export const MemorySearchSchema = z
.optional(),
model: z.string().optional(),
outputDimensionality: z.number().int().positive().optional(),
multimodal: z
.object({
enabled: z.boolean().optional(),
modalities: z
.array(z.union([z.literal("image"), z.literal("audio"), z.literal("all")]))
.optional(),
maxFileBytes: z.number().int().positive().optional(),
})
.strict()
.optional(),
local: z
.object({
modelPath: z.string().optional(),
@ -769,6 +769,22 @@ export const AgentEntrySchema = z
model: AgentModelSchema.optional(),
skills: z.array(z.string()).optional(),
memorySearch: MemorySearchSchema,
cortex: z
.object({
enabled: z.boolean().optional(),
graphPath: z.string().optional(),
mode: z
.union([
z.literal("full"),
z.literal("professional"),
z.literal("technical"),
z.literal("minimal"),
])
.optional(),
maxChars: z.number().int().positive().optional(),
})
.strict()
.optional(),
humanDelay: HumanDelaySchema.optional(),
heartbeat: HeartbeatSchema,
identity: IdentitySchema,


@ -75,11 +75,13 @@ vi.mock("../../agents/skills/refresh.js", async (importOriginal) => {
};
});
vi.mock("../../agents/workspace.js", async (importOriginal) => {
const actual = await importOriginal<typeof import("../../agents/workspace.js")>();
vi.mock("../../agents/workspace.js", async () => {
const actual = await vi.importActual<typeof import("../../agents/workspace.js")>(
"../../agents/workspace.js",
);
return {
...actual,
DEFAULT_IDENTITY_FILENAME: "IDENTITY.md",
DEFAULT_AGENT_WORKSPACE_DIR: "/tmp/workspace",
ensureAgentWorkspace: vi.fn().mockResolvedValue({ dir: "/tmp/workspace" }),
};
});
@ -260,8 +262,10 @@ vi.mock("../../logger.js", async (importOriginal) => {
};
});
vi.mock("../../security/external-content.js", async (importOriginal) => {
const actual = await importOriginal<typeof import("../../security/external-content.js")>();
vi.mock("../../security/external-content.js", async () => {
const actual = await vi.importActual<typeof import("../../security/external-content.js")>(
"../../security/external-content.js",
);
return {
...actual,
buildSafeExternalPrompt: vi.fn().mockReturnValue("safe prompt"),


@ -1,11 +1,6 @@
import { randomUUID } from "node:crypto";
import type { OpenClawConfig } from "../config/config.js";
import {
loadConfig,
resolveConfigPath,
resolveGatewayPort,
resolveStateDir,
} from "../config/config.js";
import { loadConfig, resolveConfigPath, resolveStateDir } from "../config/config.js";
import { resolveSecretInputRef } from "../config/types.secrets.js";
import { loadOrCreateDeviceIdentity } from "../infra/device-identity.js";
import { loadGatewayTlsRuntime } from "../infra/tls/gateway.js";
@ -18,6 +13,10 @@ import {
} from "../utils/message-channel.js";
import { VERSION } from "../version.js";
import { GatewayClient } from "./client.js";
import {
buildGatewayConnectionDetailsFromConfig,
type GatewayConnectionDetails,
} from "./connection-details.js";
import {
GatewaySecretRefUnavailableError,
resolveGatewayCredentialsFromConfig,
@ -32,7 +31,6 @@ import {
resolveLeastPrivilegeOperatorScopesForMethod,
type OperatorScope,
} from "./method-scopes.js";
import { isSecureWebSocketUrl } from "./net.js";
import { PROTOCOL_VERSION } from "./protocol/index.js";
type CallGatewayBaseOptions = {
@ -73,14 +71,6 @@ export type CallGatewayOptions = CallGatewayBaseOptions & {
scopes?: OperatorScope[];
};
export type GatewayConnectionDetails = {
url: string;
urlSource: string;
bindDetail?: string;
remoteFallbackNote?: string;
message: string;
};
function shouldAttachDeviceIdentityForGatewayCall(params: {
url: string;
token?: string;
@ -156,86 +146,12 @@ export function buildGatewayConnectionDetails(
} = {},
): GatewayConnectionDetails {
const config = options.config ?? loadConfig();
const configPath =
options.configPath ?? resolveConfigPath(process.env, resolveStateDir(process.env));
const isRemoteMode = config.gateway?.mode === "remote";
const remote = isRemoteMode ? config.gateway?.remote : undefined;
const tlsEnabled = config.gateway?.tls?.enabled === true;
const localPort = resolveGatewayPort(config);
const bindMode = config.gateway?.bind ?? "loopback";
const scheme = tlsEnabled ? "wss" : "ws";
// Self-connections should always target loopback; bind mode only controls listener exposure.
const localUrl = `${scheme}://127.0.0.1:${localPort}`;
const cliUrlOverride =
typeof options.url === "string" && options.url.trim().length > 0
? options.url.trim()
: undefined;
const envUrlOverride = cliUrlOverride
? undefined
: (trimToUndefined(process.env.OPENCLAW_GATEWAY_URL) ??
trimToUndefined(process.env.CLAWDBOT_GATEWAY_URL));
const urlOverride = cliUrlOverride ?? envUrlOverride;
const remoteUrl =
typeof remote?.url === "string" && remote.url.trim().length > 0 ? remote.url.trim() : undefined;
const remoteMisconfigured = isRemoteMode && !urlOverride && !remoteUrl;
const urlSourceHint =
options.urlSource ?? (cliUrlOverride ? "cli" : envUrlOverride ? "env" : undefined);
const url = urlOverride || remoteUrl || localUrl;
const urlSource = urlOverride
? urlSourceHint === "env"
? "env OPENCLAW_GATEWAY_URL"
: "cli --url"
: remoteUrl
? "config gateway.remote.url"
: remoteMisconfigured
? "missing gateway.remote.url (fallback local)"
: "local loopback";
const bindDetail = !urlOverride && !remoteUrl ? `Bind: ${bindMode}` : undefined;
const remoteFallbackNote = remoteMisconfigured
? "Warn: gateway.mode=remote but gateway.remote.url is missing; set gateway.remote.url or switch gateway.mode=local."
: undefined;
const allowPrivateWs = process.env.OPENCLAW_ALLOW_INSECURE_PRIVATE_WS === "1";
// Security check: block ALL insecure ws:// to non-loopback addresses (CWE-319, CVSS 9.8)
// This applies to the FINAL resolved URL, regardless of source (config, CLI override, etc).
// Both credentials and chat/conversation data must not be transmitted over plaintext to remote hosts.
if (!isSecureWebSocketUrl(url, { allowPrivateWs })) {
throw new Error(
[
`SECURITY ERROR: Gateway URL "${url}" uses plaintext ws:// to a non-loopback address.`,
"Both credentials and chat data would be exposed to network interception.",
`Source: ${urlSource}`,
`Config: ${configPath}`,
"Fix: Use wss:// for remote gateway URLs.",
"Safe remote access defaults:",
"- keep gateway.bind=loopback and use an SSH tunnel (ssh -N -L 18789:127.0.0.1:18789 user@gateway-host)",
"- or use Tailscale Serve/Funnel for HTTPS remote access",
allowPrivateWs
? undefined
: "Break-glass (trusted private networks only): set OPENCLAW_ALLOW_INSECURE_PRIVATE_WS=1",
"Doctor: openclaw doctor --fix",
"Docs: https://docs.openclaw.ai/gateway/remote",
].join("\n"),
);
}
const message = [
`Gateway target: ${url}`,
`Source: ${urlSource}`,
`Config: ${configPath}`,
bindDetail,
remoteFallbackNote,
]
.filter(Boolean)
.join("\n");
return {
url,
urlSource,
bindDetail,
remoteFallbackNote,
message,
};
return buildGatewayConnectionDetailsFromConfig({
config,
url: options.url,
configPath: options.configPath,
urlSource: options.urlSource,
});
}
type GatewayRemoteSettings = {


@ -0,0 +1,101 @@
import { resolveConfigPath, resolveGatewayPort, resolveStateDir } from "../config/paths.js";
import type { OpenClawConfig } from "../config/types.js";
import { isSecureWebSocketUrl } from "./net.js";
export type GatewayConnectionDetails = {
url: string;
urlSource: string;
bindDetail?: string;
remoteFallbackNote?: string;
message: string;
};
function trimToUndefined(value: string | undefined): string | undefined {
const trimmed = value?.trim();
return trimmed ? trimmed : undefined;
}
export function buildGatewayConnectionDetailsFromConfig(options: {
config: OpenClawConfig;
url?: string;
configPath?: string;
urlSource?: "cli" | "env";
}): GatewayConnectionDetails {
const configPath =
options.configPath ?? resolveConfigPath(process.env, resolveStateDir(process.env));
const isRemoteMode = options.config.gateway?.mode === "remote";
const remote = isRemoteMode ? options.config.gateway?.remote : undefined;
const tlsEnabled = options.config.gateway?.tls?.enabled === true;
const localPort = resolveGatewayPort(options.config);
const bindMode = options.config.gateway?.bind ?? "loopback";
const scheme = tlsEnabled ? "wss" : "ws";
// Self-connections should always target loopback; bind mode only controls listener exposure.
const localUrl = `${scheme}://127.0.0.1:${localPort}`;
const cliUrlOverride =
typeof options.url === "string" && options.url.trim().length > 0
? options.url.trim()
: undefined;
const envUrlOverride = cliUrlOverride
? undefined
: (trimToUndefined(process.env.OPENCLAW_GATEWAY_URL) ??
trimToUndefined(process.env.CLAWDBOT_GATEWAY_URL));
const urlOverride = cliUrlOverride ?? envUrlOverride;
const remoteUrl =
typeof remote?.url === "string" && remote.url.trim().length > 0 ? remote.url.trim() : undefined;
const remoteMisconfigured = isRemoteMode && !urlOverride && !remoteUrl;
const urlSourceHint =
options.urlSource ?? (cliUrlOverride ? "cli" : envUrlOverride ? "env" : undefined);
const url = urlOverride || remoteUrl || localUrl;
const urlSource = urlOverride
? urlSourceHint === "env"
? "env OPENCLAW_GATEWAY_URL"
: "cli --url"
: remoteUrl
? "config gateway.remote.url"
: remoteMisconfigured
? "missing gateway.remote.url (fallback local)"
: "local loopback";
const bindDetail = !urlOverride && !remoteUrl ? `Bind: ${bindMode}` : undefined;
const remoteFallbackNote = remoteMisconfigured
? "Warn: gateway.mode=remote but gateway.remote.url is missing; set gateway.remote.url or switch gateway.mode=local."
: undefined;
const allowPrivateWs = process.env.OPENCLAW_ALLOW_INSECURE_PRIVATE_WS === "1";
if (!isSecureWebSocketUrl(url, { allowPrivateWs })) {
throw new Error(
[
`SECURITY ERROR: Gateway URL "${url}" uses plaintext ws:// to a non-loopback address.`,
"Both credentials and chat data would be exposed to network interception.",
`Source: ${urlSource}`,
`Config: ${configPath}`,
"Fix: Use wss:// for remote gateway URLs.",
"Safe remote access defaults:",
"- keep gateway.bind=loopback and use an SSH tunnel (ssh -N -L 18789:127.0.0.1:18789 user@gateway-host)",
"- or use Tailscale Serve/Funnel for HTTPS remote access",
allowPrivateWs
? undefined
: "Break-glass (trusted private networks only): set OPENCLAW_ALLOW_INSECURE_PRIVATE_WS=1",
"Doctor: openclaw doctor --fix",
"Docs: https://docs.openclaw.ai/gateway/remote",
].join("\n"),
);
}
const message = [
`Gateway target: ${url}`,
`Source: ${urlSource}`,
`Config: ${configPath}`,
bindDetail,
remoteFallbackNote,
]
.filter(Boolean)
.join("\n");
return {
url,
urlSource,
bindDetail,
remoteFallbackNote,
message,
};
}
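The URL-precedence logic extracted into `buildGatewayConnectionDetailsFromConfig` can be summarized as: CLI override beats env override beats `gateway.remote.url` beats local loopback. A minimal self-contained sketch of just that precedence (simplified assumptions: no TLS handling, so the loopback scheme is hardcoded to `ws`, and the wss-only security check is omitted):

```typescript
// Sketch of the gateway URL precedence; not the real resolver.
type Source = "cli --url" | "env OPENCLAW_GATEWAY_URL" | "config gateway.remote.url" | "local loopback";

function resolveUrl(opts: { cli?: string; env?: string; remote?: string; localPort: number }): {
  url: string;
  source: Source;
} {
  const cli = opts.cli?.trim() || undefined;
  // The env var is only consulted when no CLI override is present.
  const env = cli ? undefined : opts.env?.trim() || undefined;
  const remote = opts.remote?.trim() || undefined;
  const override = cli ?? env;
  const url = override ?? remote ?? `ws://127.0.0.1:${opts.localPort}`;
  const source: Source = cli
    ? "cli --url"
    : env
      ? "env OPENCLAW_GATEWAY_URL"
      : remote
        ? "config gateway.remote.url"
        : "local loopback";
  return { url, source };
}

// CLI override wins over env and config:
resolveUrl({ cli: "wss://gw.example", env: "wss://ignored", remote: "wss://also-ignored", localPort: 18789 });
// → { url: "wss://gw.example", source: "cli --url" }
```

The real function additionally rejects any resolved plaintext `ws://` URL pointing at a non-loopback host before returning.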


@ -12,6 +12,13 @@ vi.mock("../config/config.js", async (importOriginal) => {
return {
...actual,
loadConfig: loadConfigMock,
};
});
vi.mock("../config/paths.js", async (importOriginal) => {
const actual = await importOriginal<typeof import("../config/paths.js")>();
return {
...actual,
resolveGatewayPort: resolveGatewayPortMock,
};
});


@ -67,6 +67,20 @@ export const SnapshotSchema = Type.Object(
channel: NonEmptyString,
}),
),
cortex: Type.Optional(
Type.Object(
{
enabled: Type.Boolean(),
mode: Type.Optional(NonEmptyString),
graphPath: Type.Optional(NonEmptyString),
lastCaptureAtMs: Type.Optional(Type.Integer({ minimum: 0 })),
lastCaptureReason: Type.Optional(Type.String()),
lastCaptureStored: Type.Optional(Type.Boolean()),
lastSyncPlatforms: Type.Optional(Type.Array(NonEmptyString)),
},
{ additionalProperties: false },
),
),
},
{ additionalProperties: false },
);


@ -0,0 +1,148 @@
import { afterEach, describe, expect, it, vi } from "vitest";
const loadConfigMock = vi.hoisted(() => vi.fn());
const createConfigIOMock = vi.hoisted(() => vi.fn());
const resolveDefaultAgentIdMock = vi.hoisted(() => vi.fn());
const resolveMainSessionKeyMock = vi.hoisted(() => vi.fn());
const resolveStorePathMock = vi.hoisted(() => vi.fn());
const loadSessionStoreMock = vi.hoisted(() => vi.fn());
const normalizeMainKeyMock = vi.hoisted(() => vi.fn());
const listSystemPresenceMock = vi.hoisted(() => vi.fn());
const resolveGatewayAuthMock = vi.hoisted(() => vi.fn());
const getUpdateAvailableMock = vi.hoisted(() => vi.fn());
const resolveAgentCortexModeStatusMock = vi.hoisted(() => vi.fn());
const resolveCortexChannelTargetMock = vi.hoisted(() => vi.fn());
const getLatestCortexCaptureHistoryEntryMock = vi.hoisted(() => vi.fn());
vi.mock("../../config/config.js", () => ({
STATE_DIR: "/tmp/openclaw-state",
createConfigIO: createConfigIOMock,
loadConfig: loadConfigMock,
}));
vi.mock("../../agents/agent-scope.js", () => ({
resolveDefaultAgentId: resolveDefaultAgentIdMock,
}));
vi.mock("../../config/sessions.js", () => ({
loadSessionStore: loadSessionStoreMock,
resolveMainSessionKey: resolveMainSessionKeyMock,
resolveStorePath: resolveStorePathMock,
}));
vi.mock("../../routing/session-key.js", async (importOriginal) => {
const actual = await importOriginal<typeof import("../../routing/session-key.js")>();
return {
...actual,
DEFAULT_ACCOUNT_ID: "default",
normalizeMainKey: normalizeMainKeyMock,
};
});
vi.mock("../../infra/system-presence.js", () => ({
listSystemPresence: listSystemPresenceMock,
}));
vi.mock("../auth.js", () => ({
resolveGatewayAuth: resolveGatewayAuthMock,
}));
vi.mock("../../infra/update-startup.js", () => ({
getUpdateAvailable: getUpdateAvailableMock,
}));
vi.mock("../../agents/cortex.js", () => ({
resolveAgentCortexModeStatus: resolveAgentCortexModeStatusMock,
resolveCortexChannelTarget: resolveCortexChannelTargetMock,
}));
vi.mock("../../agents/cortex-history.js", () => ({
getLatestCortexCaptureHistoryEntry: getLatestCortexCaptureHistoryEntryMock,
}));
import { buildGatewaySnapshot } from "./health-state.js";
describe("buildGatewaySnapshot", () => {
afterEach(() => {
vi.clearAllMocks();
});
it("includes Cortex snapshot details when the prompt bridge is enabled", async () => {
loadConfigMock.mockReturnValue({
session: { mainKey: "main", scope: "per-sender" },
});
createConfigIOMock.mockReturnValue({ configPath: "/tmp/openclaw/openclaw.json" });
resolveDefaultAgentIdMock.mockReturnValue("main");
resolveMainSessionKeyMock.mockReturnValue("agent:main:main");
resolveStorePathMock.mockReturnValue("/tmp/openclaw-state/sessions/main/sessions.json");
loadSessionStoreMock.mockReturnValue({
"agent:main:main": {
sessionId: "session-1",
updatedAt: 1234,
lastChannel: "telegram",
lastTo: "telegram:user-123",
deliveryContext: {
channel: "telegram",
to: "telegram:user-123",
},
},
});
normalizeMainKeyMock.mockReturnValue("main");
listSystemPresenceMock.mockReturnValue([]);
resolveGatewayAuthMock.mockReturnValue({ mode: "token" });
getUpdateAvailableMock.mockReturnValue(undefined);
resolveAgentCortexModeStatusMock.mockResolvedValue({
enabled: true,
mode: "minimal",
source: "session-override",
maxChars: 1500,
graphPath: ".cortex/context.json",
});
resolveCortexChannelTargetMock.mockReturnValue("telegram:user-123");
getLatestCortexCaptureHistoryEntryMock.mockResolvedValue({
agentId: "main",
sessionId: "session-1",
channelId: "telegram:user-123",
captured: true,
score: 0.7,
reason: "high-signal memory candidate",
syncPlatforms: ["claude-code", "cursor", "copilot"],
timestamp: 1234,
});
const snapshot = await buildGatewaySnapshot();
expect(resolveAgentCortexModeStatusMock).toHaveBeenCalledWith({
cfg: expect.any(Object),
agentId: "main",
sessionId: "session-1",
channelId: "telegram:user-123",
});
expect(resolveStorePathMock).toHaveBeenCalledWith(undefined, { agentId: "main" });
expect(loadSessionStoreMock).toHaveBeenCalledWith(
"/tmp/openclaw-state/sessions/main/sessions.json",
);
expect(resolveCortexChannelTargetMock).toHaveBeenCalledWith({
channel: "telegram",
originatingChannel: "telegram",
originatingTo: "telegram:user-123",
nativeChannelId: "telegram:user-123",
to: "telegram:user-123",
});
expect(getLatestCortexCaptureHistoryEntryMock).toHaveBeenCalledWith({
agentId: "main",
sessionId: "session-1",
channelId: "telegram:user-123",
});
expect(snapshot.cortex).toEqual({
enabled: true,
mode: "minimal",
graphPath: ".cortex/context.json",
lastCaptureAtMs: 1234,
lastCaptureReason: "high-signal memory candidate",
lastCaptureStored: true,
lastSyncPlatforms: ["claude-code", "cursor", "copilot"],
});
});
});


@ -1,7 +1,16 @@
import { resolveDefaultAgentId } from "../../agents/agent-scope.js";
import {
getCachedLatestCortexCaptureHistoryEntry,
getLatestCortexCaptureHistoryEntry,
} from "../../agents/cortex-history.js";
import { resolveAgentCortexModeStatus, resolveCortexChannelTarget } from "../../agents/cortex.js";
import { getHealthSnapshot, type HealthSummary } from "../../commands/health.js";
import { STATE_DIR, createConfigIO, loadConfig } from "../../config/config.js";
import { resolveMainSessionKey } from "../../config/sessions.js";
import {
loadSessionStore,
resolveMainSessionKey,
resolveStorePath,
} from "../../config/sessions.js";
import { listSystemPresence } from "../../infra/system-presence.js";
import { getUpdateAvailable } from "../../infra/update-startup.js";
import { normalizeMainKey } from "../../routing/session-key.js";
@ -14,12 +23,39 @@ let healthCache: HealthSummary | null = null;
let healthRefresh: Promise<HealthSummary> | null = null;
let broadcastHealthUpdate: ((snap: HealthSummary) => void) | null = null;
export function buildGatewaySnapshot(): Snapshot {
export async function buildGatewaySnapshot(): Promise<Snapshot> {
const cfg = loadConfig();
const configPath = createConfigIO().configPath;
const defaultAgentId = resolveDefaultAgentId(cfg);
const mainKey = normalizeMainKey(cfg.session?.mainKey);
const mainSessionKey = resolveMainSessionKey(cfg);
const sessionStorePath = resolveStorePath(cfg.session?.store, { agentId: defaultAgentId });
const mainSessionEntry = loadSessionStore(sessionStorePath)[mainSessionKey];
const channelId = resolveCortexChannelTarget({
channel: mainSessionEntry?.lastChannel,
originatingChannel: mainSessionEntry?.deliveryContext?.channel,
originatingTo: mainSessionEntry?.deliveryContext?.to,
nativeChannelId: mainSessionEntry?.deliveryContext?.to,
to: mainSessionEntry?.lastTo,
});
const cortex = await resolveAgentCortexModeStatus({
cfg,
agentId: defaultAgentId,
sessionId: mainSessionEntry?.sessionId,
channelId,
});
// Prefer the in-memory cache to avoid reading the full JSONL during
// WebSocket connect handshakes. Fall back to async read only when
// the cache is cold (first snapshot after restart).
const cortexHistoryParams = {
agentId: defaultAgentId,
sessionId: mainSessionEntry?.sessionId,
channelId,
};
const latestCortexCapture = cortex
? (getCachedLatestCortexCaptureHistoryEntry(cortexHistoryParams) ??
(await getLatestCortexCaptureHistoryEntry(cortexHistoryParams).catch(() => null)))
: null;
const scope = cfg.session?.scope ?? "per-sender";
const presence = listSystemPresence();
const uptimeMs = Math.round(process.uptime() * 1000);
@ -43,6 +79,17 @@ export function buildGatewaySnapshot(): Snapshot {
},
authMode: auth.mode,
updateAvailable,
cortex: cortex
? {
enabled: true,
mode: cortex.mode,
graphPath: cortex.graphPath,
lastCaptureAtMs: latestCortexCapture?.timestamp,
lastCaptureReason: latestCortexCapture?.reason,
lastCaptureStored: latestCortexCapture?.captured,
lastSyncPlatforms: latestCortexCapture?.syncPlatforms,
}
: undefined,
};
}


@ -940,7 +940,7 @@ export function attachGatewayWsMessageHandler(params: {
incrementPresenceVersion();
}
const snapshot = buildGatewaySnapshot();
const snapshot = await buildGatewaySnapshot();
const cachedHealth = getHealthCache();
if (cachedHealth) {
snapshot.health = cachedHealth;


@ -0,0 +1,79 @@
import fs from "node:fs/promises";
import os from "node:os";
import path from "node:path";
import { afterEach, describe, expect, it } from "vitest";
import {
clearCortexModeOverride,
getCortexModeOverride,
setCortexModeOverride,
} from "./cortex-mode-overrides.js";
describe("cortex mode overrides", () => {
const tempDirs: string[] = [];
afterEach(async () => {
await Promise.all(tempDirs.map((dir) => fs.rm(dir, { recursive: true, force: true })));
tempDirs.length = 0;
});
async function createStorePath() {
const dir = await fs.mkdtemp(path.join(os.tmpdir(), "openclaw-cortex-mode-"));
tempDirs.push(dir);
return path.join(dir, "cortex-mode-overrides.json");
}
it("prefers channel overrides over session overrides", async () => {
const pathname = await createStorePath();
await setCortexModeOverride({
pathname,
agentId: "main",
scope: "channel",
targetId: "slack",
mode: "professional",
});
await setCortexModeOverride({
pathname,
agentId: "main",
scope: "session",
targetId: "session-1",
mode: "minimal",
});
const resolved = await getCortexModeOverride({
pathname,
agentId: "main",
sessionId: "session-1",
channelId: "slack",
});
expect(resolved?.mode).toBe("professional");
expect(resolved?.scope).toBe("channel");
});
it("can clear a stored override", async () => {
const pathname = await createStorePath();
await setCortexModeOverride({
pathname,
agentId: "main",
scope: "channel",
targetId: "telegram",
mode: "minimal",
});
const removed = await clearCortexModeOverride({
pathname,
agentId: "main",
scope: "channel",
targetId: "telegram",
});
const resolved = await getCortexModeOverride({
pathname,
agentId: "main",
channelId: "telegram",
});
expect(removed).toBe(true);
expect(resolved).toBeNull();
});
});


@ -0,0 +1,114 @@
import fs from "node:fs/promises";
import path from "node:path";
import { resolveStateDir } from "../config/paths.js";
import type { CortexPolicy } from "./cortex.js";
export type CortexModeScope = "session" | "channel";
export type CortexModeOverride = {
agentId: string;
scope: CortexModeScope;
targetId: string;
mode: CortexPolicy;
updatedAt: string;
};
type CortexModeOverrideStore = {
session: Record<string, CortexModeOverride>;
channel: Record<string, CortexModeOverride>;
};
function buildKey(agentId: string, targetId: string): string {
// Use NUL separator to avoid collisions when IDs contain colons.
return `${agentId}\0${targetId}`;
}
export function resolveCortexModeOverridesPath(env: NodeJS.ProcessEnv = process.env): string {
return path.join(resolveStateDir(env), "cortex-mode-overrides.json");
}
async function readStore(
pathname = resolveCortexModeOverridesPath(),
): Promise<CortexModeOverrideStore> {
try {
const raw = await fs.readFile(pathname, "utf-8");
const parsed = JSON.parse(raw) as Partial<CortexModeOverrideStore>;
return {
session: parsed.session ?? {},
channel: parsed.channel ?? {},
};
} catch {
return {
session: {},
channel: {},
};
}
}
async function writeStore(
store: CortexModeOverrideStore,
pathname = resolveCortexModeOverridesPath(),
): Promise<void> {
await fs.mkdir(path.dirname(pathname), { recursive: true });
await fs.writeFile(pathname, JSON.stringify(store, null, 2));
}
export async function getCortexModeOverride(params: {
agentId: string;
sessionId?: string;
channelId?: string;
pathname?: string;
}): Promise<CortexModeOverride | null> {
const store = await readStore(params.pathname);
const channelId = params.channelId?.trim();
if (channelId) {
const channel = store.channel[buildKey(params.agentId, channelId)];
if (channel) {
return channel;
}
}
const sessionId = params.sessionId?.trim();
if (sessionId) {
const session = store.session[buildKey(params.agentId, sessionId)];
if (session) {
return session;
}
}
return null;
}
export async function setCortexModeOverride(params: {
agentId: string;
scope: CortexModeScope;
targetId: string;
mode: CortexPolicy;
pathname?: string;
}): Promise<CortexModeOverride> {
const store = await readStore(params.pathname);
const next: CortexModeOverride = {
agentId: params.agentId,
scope: params.scope,
targetId: params.targetId,
mode: params.mode,
updatedAt: new Date().toISOString(),
};
store[params.scope][buildKey(params.agentId, params.targetId)] = next;
await writeStore(store, params.pathname);
return next;
}
export async function clearCortexModeOverride(params: {
agentId: string;
scope: CortexModeScope;
targetId: string;
pathname?: string;
}): Promise<boolean> {
const store = await readStore(params.pathname);
const key = buildKey(params.agentId, params.targetId);
if (!store[params.scope][key]) {
return false;
}
delete store[params.scope][key];
await writeStore(store, params.pathname);
return true;
}
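The NUL-separator choice in `buildKey` above matters because agent and target IDs can themselves contain colons, so a `:`-joined key would be ambiguous. A quick self-contained illustration (assumes, as the store does, that IDs never contain a NUL byte):

```typescript
// Same key scheme as cortex-mode-overrides: "\0" does not occur in IDs,
// so each (agentId, targetId) pair maps to a unique key.
function buildKey(agentId: string, targetId: string): string {
  return `${agentId}\0${targetId}`;
}

// A naive ":" separator collides for two different pairs:
const naiveA = ["main:slack", "general"].join(":"); // "main:slack:general"
const naiveB = ["main", "slack:general"].join(":"); // "main:slack:general"
console.log(naiveA === naiveB); // true — ambiguous

// The NUL separator keeps them distinct:
console.log(buildKey("main:slack", "general") === buildKey("main", "slack:general")); // false
```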

src/memory/cortex.test.ts (new file, 271 lines)

@ -0,0 +1,271 @@
import fs from "node:fs/promises";
import os from "node:os";
import path from "node:path";
import { afterEach, describe, expect, it, vi } from "vitest";
const { runExec } = vi.hoisted(() => ({
runExec: vi.fn(),
}));
vi.mock("../process/exec.js", () => ({
runExec,
}));
import {
getCortexStatus,
ingestCortexMemoryFromText,
listCortexMemoryConflicts,
previewCortexContext,
resolveCortexGraphPath,
resolveCortexMemoryConflict,
syncCortexCodingContext,
} from "./cortex.js";
afterEach(() => {
vi.restoreAllMocks();
runExec.mockReset();
});
describe("cortex bridge", () => {
it("resolves the default graph path inside the workspace", () => {
expect(resolveCortexGraphPath("/tmp/workspace")).toBe(
path.normalize(path.join("/tmp/workspace", ".cortex", "context.json")),
);
});
it("resolves relative graph overrides against the workspace", () => {
expect(resolveCortexGraphPath("/tmp/workspace", "graphs/main.json")).toBe(
path.normalize(path.resolve("/tmp/workspace", "graphs/main.json")),
);
});
it("reports availability and graph presence", async () => {
const tmpDir = await fs.mkdtemp(path.join(os.tmpdir(), "openclaw-cortex-status-"));
const graphPath = path.join(tmpDir, ".cortex", "context.json");
await fs.mkdir(path.dirname(graphPath), { recursive: true });
await fs.writeFile(graphPath, "{}", "utf8");
runExec.mockResolvedValueOnce({ stdout: "", stderr: "" });
const status = await getCortexStatus({ workspaceDir: tmpDir });
expect(status.available).toBe(true);
expect(status.graphExists).toBe(true);
expect(status.graphPath).toBe(graphPath);
});
it("surfaces Cortex CLI errors in status", async () => {
runExec.mockRejectedValueOnce(new Error("spawn cortex ENOENT"));
const status = await getCortexStatus({ workspaceDir: "/tmp/workspace" });
expect(status.available).toBe(false);
expect(status.error).toContain("spawn cortex ENOENT");
});
it("exports preview context", async () => {
const tmpDir = await fs.mkdtemp(path.join(os.tmpdir(), "openclaw-cortex-preview-"));
const graphPath = path.join(tmpDir, ".cortex", "context.json");
await fs.mkdir(path.dirname(graphPath), { recursive: true });
await fs.writeFile(graphPath, "{}", "utf8");
runExec
.mockResolvedValueOnce({ stdout: "", stderr: "" })
.mockResolvedValueOnce({ stdout: "## Cortex Context\n- Python\n", stderr: "" });
const preview = await previewCortexContext({
workspaceDir: tmpDir,
policy: "technical",
maxChars: 500,
});
expect(preview.graphPath).toBe(graphPath);
expect(preview.policy).toBe("technical");
expect(preview.maxChars).toBe(500);
expect(preview.context).toBe("## Cortex Context\n- Python");
});
it("reuses a pre-resolved Cortex status for preview without re-probing", async () => {
const status = {
available: true,
workspaceDir: "/tmp/workspace",
graphPath: "/tmp/workspace/.cortex/context.json",
graphExists: true,
} as const;
runExec.mockResolvedValueOnce({ stdout: "## Cortex Context\n- Python\n", stderr: "" });
const preview = await previewCortexContext({
workspaceDir: status.workspaceDir,
status,
});
expect(preview.context).toBe("## Cortex Context\n- Python");
expect(runExec).toHaveBeenCalledTimes(1);
});
it("fails preview when graph is missing", async () => {
const tmpDir = await fs.mkdtemp(path.join(os.tmpdir(), "openclaw-cortex-preview-missing-"));
runExec.mockResolvedValueOnce({ stdout: "", stderr: "" });
await expect(previewCortexContext({ workspaceDir: tmpDir })).rejects.toThrow(
"Cortex graph not found",
);
});
it("lists memory conflicts from Cortex JSON output", async () => {
const tmpDir = await fs.mkdtemp(path.join(os.tmpdir(), "openclaw-cortex-conflicts-"));
const graphPath = path.join(tmpDir, ".cortex", "context.json");
await fs.mkdir(path.dirname(graphPath), { recursive: true });
await fs.writeFile(graphPath, "{}", "utf8");
runExec.mockResolvedValueOnce({ stdout: "", stderr: "" }).mockResolvedValueOnce({
stdout: JSON.stringify({
conflicts: [
{
id: "conf_1",
type: "temporal_flip",
severity: 0.91,
summary: "Hiring status changed",
},
],
}),
stderr: "",
});
const conflicts = await listCortexMemoryConflicts({ workspaceDir: tmpDir });
expect(conflicts).toEqual([
{
id: "conf_1",
type: "temporal_flip",
severity: 0.91,
summary: "Hiring status changed",
nodeLabel: undefined,
oldValue: undefined,
newValue: undefined,
},
]);
});
it("resolves memory conflicts from Cortex JSON output", async () => {
const tmpDir = await fs.mkdtemp(path.join(os.tmpdir(), "openclaw-cortex-resolve-"));
const graphPath = path.join(tmpDir, ".cortex", "context.json");
await fs.mkdir(path.dirname(graphPath), { recursive: true });
await fs.writeFile(graphPath, "{}", "utf8");
runExec.mockResolvedValueOnce({ stdout: "", stderr: "" }).mockResolvedValueOnce({
stdout: JSON.stringify({
status: "ok",
conflict_id: "conf_1",
nodes_updated: 1,
nodes_removed: 1,
commit_id: "ver_123",
}),
stderr: "",
});
const result = await resolveCortexMemoryConflict({
workspaceDir: tmpDir,
conflictId: "conf_1",
action: "accept-new",
});
expect(result).toEqual({
status: "ok",
conflictId: "conf_1",
action: "accept-new",
nodesUpdated: 1,
nodesRemoved: 1,
commitId: "ver_123",
message: undefined,
});
});
it("syncs coding context to default coding platforms", async () => {
const tmpDir = await fs.mkdtemp(path.join(os.tmpdir(), "openclaw-cortex-sync-"));
const graphPath = path.join(tmpDir, ".cortex", "context.json");
await fs.mkdir(path.dirname(graphPath), { recursive: true });
await fs.writeFile(graphPath, "{}", "utf8");
runExec
.mockResolvedValueOnce({ stdout: "", stderr: "" })
.mockResolvedValueOnce({ stdout: "", stderr: "" });
const result = await syncCortexCodingContext({
workspaceDir: tmpDir,
policy: "technical",
});
expect(result.policy).toBe("technical");
expect(result.platforms).toEqual(["claude-code", "cursor", "copilot"]);
expect(runExec).toHaveBeenLastCalledWith(
"cortex",
[
"context-write",
graphPath,
"--platforms",
"claude-code",
"cursor",
"copilot",
"--policy",
"technical",
],
expect.any(Object),
);
});
it("ingests high-signal text into the Cortex graph with merge", async () => {
const tmpDir = await fs.mkdtemp(path.join(os.tmpdir(), "openclaw-cortex-ingest-"));
const graphPath = path.join(tmpDir, ".cortex", "context.json");
await fs.mkdir(path.dirname(graphPath), { recursive: true });
await fs.writeFile(graphPath, "{}", "utf8");
runExec
.mockResolvedValueOnce({ stdout: "", stderr: "" })
.mockResolvedValueOnce({ stdout: "", stderr: "" });
const result = await ingestCortexMemoryFromText({
workspaceDir: tmpDir,
event: {
actor: "user",
text: "I prefer concise answers and I am focused on fundraising this quarter.",
sessionId: "session-1",
channelId: "channel-1",
agentId: "main",
},
});
expect(result).toEqual({
workspaceDir: tmpDir,
graphPath,
stored: true,
});
expect(runExec).toHaveBeenLastCalledWith(
"cortex",
expect.arrayContaining(["extract", "-o", graphPath, "--merge", graphPath]),
expect.any(Object),
);
});
it("initializes the Cortex graph on first ingest when it is missing", async () => {
const tmpDir = await fs.mkdtemp(path.join(os.tmpdir(), "openclaw-cortex-first-ingest-"));
const graphPath = path.join(tmpDir, ".cortex", "context.json");
runExec
.mockResolvedValueOnce({ stdout: "", stderr: "" })
.mockResolvedValueOnce({ stdout: "", stderr: "" });
const result = await ingestCortexMemoryFromText({
workspaceDir: tmpDir,
event: {
actor: "user",
text: "I prefer concise answers.",
},
});
await expect(fs.readFile(graphPath, "utf8")).resolves.toContain('"nodes"');
expect(result).toEqual({
workspaceDir: tmpDir,
graphPath,
stored: true,
});
expect(runExec).toHaveBeenLastCalledWith(
"cortex",
expect.arrayContaining(["extract", "-o", graphPath, "--merge", graphPath]),
expect.any(Object),
);
});
});

src/memory/cortex.ts
@@ -0,0 +1,450 @@
import fs from "node:fs/promises";
import os from "node:os";
import path from "node:path";
import { runExec } from "../process/exec.js";
export type CortexPolicy = "full" | "professional" | "technical" | "minimal";
export type CortexStatus = {
available: boolean;
workspaceDir: string;
graphPath: string;
graphExists: boolean;
error?: string;
};
export type CortexPreview = {
workspaceDir: string;
graphPath: string;
policy: CortexPolicy;
maxChars: number;
context: string;
};
export type CortexMemoryConflict = {
id: string;
type: string;
severity: number;
summary: string;
nodeLabel?: string;
oldValue?: string;
newValue?: string;
};
export type CortexMemoryResolveAction = "accept-new" | "keep-old" | "merge" | "ignore";
export type CortexMemoryResolveResult = {
status: string;
conflictId: string;
action: CortexMemoryResolveAction;
nodesUpdated?: number;
nodesRemoved?: number;
commitId?: string;
message?: string;
};
export type CortexCodingSyncResult = {
workspaceDir: string;
graphPath: string;
policy: CortexPolicy;
platforms: string[];
};
export type CortexMemoryIngestResult = {
workspaceDir: string;
graphPath: string;
stored: boolean;
};
export type CortexMemoryEvent = {
actor: "user" | "assistant" | "tool";
text: string;
agentId?: string;
sessionId?: string;
channelId?: string;
provider?: string;
timestamp?: string;
};
const DEFAULT_GRAPH_RELATIVE_PATH = path.join(".cortex", "context.json");
const DEFAULT_POLICY: CortexPolicy = "technical";
const DEFAULT_MAX_CHARS = 1_500;
export const DEFAULT_CORTEX_CODING_PLATFORMS = ["claude-code", "cursor", "copilot"] as const;
const EMPTY_CORTEX_GRAPH = {
schema_version: "5.0",
graph: {
nodes: [],
edges: [],
},
meta: {},
} as const;
type CortexStatusParams = {
workspaceDir: string;
graphPath?: string;
status?: CortexStatus;
};
function parseJson<T>(raw: string, label: string): T {
try {
return JSON.parse(raw) as T;
} catch (error) {
throw new Error(`Cortex ${label} returned invalid JSON`, { cause: error });
}
}
export function resolveCortexGraphPath(workspaceDir: string, graphPath?: string): string {
const trimmed = graphPath?.trim();
if (!trimmed) {
return path.join(workspaceDir, DEFAULT_GRAPH_RELATIVE_PATH);
}
if (path.isAbsolute(trimmed)) {
return path.normalize(trimmed);
}
return path.normalize(path.resolve(workspaceDir, trimmed));
}
export async function ensureCortexGraphInitialized(params: {
workspaceDir: string;
graphPath?: string;
}): Promise<{ graphPath: string; created: boolean }> {
const graphPath = resolveCortexGraphPath(params.workspaceDir, params.graphPath);
if (await pathExists(graphPath)) {
return { graphPath, created: false };
}
await fs.mkdir(path.dirname(graphPath), { recursive: true });
await fs.writeFile(graphPath, `${JSON.stringify(EMPTY_CORTEX_GRAPH, null, 2)}\n`, "utf8");
return { graphPath, created: true };
}
async function pathExists(pathname: string): Promise<boolean> {
try {
await fs.access(pathname);
return true;
} catch {
return false;
}
}
function formatCortexExecError(error: unknown): string {
const message =
error instanceof Error ? error.message : typeof error === "string" ? error : "unknown error";
const stderr =
typeof error === "object" && error && "stderr" in error && typeof error.stderr === "string"
? error.stderr
: "";
const combined = stderr.trim() || message.trim();
return combined || "unknown error";
}
function asOptionalString(value: unknown): string | undefined {
return typeof value === "string" ? value : undefined;
}
function asString(value: unknown, fallback = ""): string {
return typeof value === "string" ? value : fallback;
}
function asNumber(value: unknown, fallback = 0): number {
if (typeof value === "number" && Number.isFinite(value)) {
return value;
}
if (typeof value === "string") {
const parsed = Number.parseFloat(value);
return Number.isFinite(parsed) ? parsed : fallback;
}
return fallback;
}
export async function getCortexStatus(params: {
workspaceDir: string;
graphPath?: string;
}): Promise<CortexStatus> {
const graphPath = resolveCortexGraphPath(params.workspaceDir, params.graphPath);
const graphExists = await pathExists(graphPath);
try {
await runExec("cortex", ["context-export", "--help"], {
timeoutMs: 5_000,
cwd: params.workspaceDir,
maxBuffer: 512 * 1024,
});
return {
available: true,
workspaceDir: params.workspaceDir,
graphPath,
graphExists,
};
} catch (error) {
return {
available: false,
workspaceDir: params.workspaceDir,
graphPath,
graphExists,
error: formatCortexExecError(error),
};
}
}
async function resolveCortexStatus(params: CortexStatusParams): Promise<CortexStatus> {
return params.status ?? getCortexStatus(params);
}
function requireCortexStatus(status: CortexStatus): CortexStatus {
if (!status.available) {
throw new Error(`Cortex CLI unavailable: ${status.error ?? "unknown error"}`);
}
if (!status.graphExists) {
throw new Error(`Cortex graph not found: ${status.graphPath}`);
}
return status;
}
export async function previewCortexContext(params: {
workspaceDir: string;
graphPath?: string;
policy?: CortexPolicy;
maxChars?: number;
status?: CortexStatus;
}): Promise<CortexPreview> {
const status = requireCortexStatus(
await resolveCortexStatus({
workspaceDir: params.workspaceDir,
graphPath: params.graphPath,
status: params.status,
}),
);
const policy = params.policy ?? DEFAULT_POLICY;
const maxChars = params.maxChars ?? DEFAULT_MAX_CHARS;
try {
const { stdout } = await runExec(
"cortex",
["context-export", status.graphPath, "--policy", policy, "--max-chars", String(maxChars)],
{
timeoutMs: 10_000,
cwd: params.workspaceDir,
maxBuffer: 2 * 1024 * 1024,
},
);
return {
workspaceDir: params.workspaceDir,
graphPath: status.graphPath,
policy,
maxChars,
context: stdout.trim(),
};
} catch (error) {
throw new Error(`Cortex preview failed: ${formatCortexExecError(error)}`, { cause: error });
}
}
export async function listCortexMemoryConflicts(params: {
workspaceDir: string;
graphPath?: string;
minSeverity?: number;
status?: CortexStatus;
}): Promise<CortexMemoryConflict[]> {
const status = requireCortexStatus(
await resolveCortexStatus({
workspaceDir: params.workspaceDir,
graphPath: params.graphPath,
status: params.status,
}),
);
const args = ["memory", "conflicts", status.graphPath, "--format", "json"];
if (typeof params.minSeverity === "number" && Number.isFinite(params.minSeverity)) {
args.push("--severity", String(params.minSeverity));
}
try {
const { stdout } = await runExec("cortex", args, {
timeoutMs: 10_000,
cwd: params.workspaceDir,
maxBuffer: 2 * 1024 * 1024,
});
const parsed = parseJson<{ conflicts?: Array<Record<string, unknown>> }>(stdout, "conflicts");
return (parsed.conflicts ?? []).map((entry) => ({
id: asString(entry.id),
type: asString(entry.type),
severity: asNumber(entry.severity),
summary: asString(entry.summary, asString(entry.description)),
nodeLabel: asOptionalString(entry.node_label),
oldValue: asOptionalString(entry.old_value),
newValue: asOptionalString(entry.new_value),
}));
} catch (error) {
throw new Error(`Cortex conflicts failed: ${formatCortexExecError(error)}`, { cause: error });
}
}
export async function resolveCortexMemoryConflict(params: {
workspaceDir: string;
graphPath?: string;
conflictId: string;
action: CortexMemoryResolveAction;
commitMessage?: string;
status?: CortexStatus;
}): Promise<CortexMemoryResolveResult> {
const status = requireCortexStatus(
await resolveCortexStatus({
workspaceDir: params.workspaceDir,
graphPath: params.graphPath,
status: params.status,
}),
);
const args = [
"memory",
"resolve",
status.graphPath,
"--conflict-id",
params.conflictId,
"--action",
params.action,
"--format",
"json",
];
if (params.commitMessage?.trim()) {
args.push("--commit-message", params.commitMessage.trim());
}
try {
const { stdout } = await runExec("cortex", args, {
timeoutMs: 10_000,
cwd: params.workspaceDir,
maxBuffer: 2 * 1024 * 1024,
});
const parsed = parseJson<Record<string, unknown>>(stdout, "resolve");
return {
status: asString(parsed.status, "unknown"),
conflictId: asString(parsed.conflict_id, params.conflictId),
action: params.action,
nodesUpdated: typeof parsed.nodes_updated === "number" ? parsed.nodes_updated : undefined,
nodesRemoved: typeof parsed.nodes_removed === "number" ? parsed.nodes_removed : undefined,
commitId: asOptionalString(parsed.commit_id),
message: asOptionalString(parsed.message),
};
} catch (error) {
throw new Error(`Cortex resolve failed: ${formatCortexExecError(error)}`, { cause: error });
}
}
export async function syncCortexCodingContext(params: {
workspaceDir: string;
graphPath?: string;
policy?: CortexPolicy;
platforms?: string[];
status?: CortexStatus;
}): Promise<CortexCodingSyncResult> {
const status = requireCortexStatus(
await resolveCortexStatus({
workspaceDir: params.workspaceDir,
graphPath: params.graphPath,
status: params.status,
}),
);
const policy = params.policy ?? DEFAULT_POLICY;
const requestedPlatforms = params.platforms?.map((entry) => entry.trim()).filter(Boolean) ?? [];
const platforms =
requestedPlatforms.length > 0 ? requestedPlatforms : [...DEFAULT_CORTEX_CODING_PLATFORMS];
try {
await runExec(
"cortex",
["context-write", status.graphPath, "--platforms", ...platforms, "--policy", policy],
{
timeoutMs: 15_000,
cwd: params.workspaceDir,
maxBuffer: 2 * 1024 * 1024,
},
);
return {
workspaceDir: params.workspaceDir,
graphPath: status.graphPath,
policy,
platforms,
};
} catch (error) {
throw new Error(`Cortex coding sync failed: ${formatCortexExecError(error)}`, {
cause: error,
});
}
}
function formatCortexMemoryEvent(event: CortexMemoryEvent): string {
const metadata = {
source: "openclaw",
actor: event.actor,
agentId: event.agentId,
sessionId: event.sessionId,
channelId: event.channelId,
provider: event.provider,
timestamp: event.timestamp ?? new Date().toISOString(),
};
return [
"Source: OpenClaw conversation",
`Actor: ${event.actor}`,
event.agentId ? `Agent: ${event.agentId}` : "",
event.sessionId ? `Session: ${event.sessionId}` : "",
event.channelId ? `Channel: ${event.channelId}` : "",
event.provider ? `Provider: ${event.provider}` : "",
`Timestamp: ${metadata.timestamp}`,
"",
"Metadata:",
JSON.stringify(metadata, null, 2),
"",
"Message:",
event.text.trim(),
]
.filter(Boolean)
.join("\n");
}
export async function ingestCortexMemoryFromText(params: {
workspaceDir: string;
graphPath?: string;
event: CortexMemoryEvent;
status?: CortexStatus;
}): Promise<CortexMemoryIngestResult> {
const text = params.event.text.trim();
if (!text) {
throw new Error("Cortex memory ingest requires non-empty text");
}
const status = await resolveCortexStatus({
workspaceDir: params.workspaceDir,
graphPath: params.graphPath,
status: params.status,
});
if (!status.available) {
throw new Error("Cortex CLI unavailable: " + (status.error ?? "unknown error"));
}
// Allow ingesting even if the graph does not exist yet - cortex extract
// creates the output file when -o is provided, and ensureCortexGraphInitialized
// seeds the directory below. This fixes first-time ingest failures.
if (!status.graphExists) {
await ensureCortexGraphInitialized({
workspaceDir: params.workspaceDir,
graphPath: params.graphPath,
});
}
await fs.mkdir(path.dirname(status.graphPath), { recursive: true });
const tmpDir = await fs.mkdtemp(path.join(os.tmpdir(), "openclaw-cortex-ingest-"));
const inputPath = path.join(tmpDir, "memory.txt");
const payload = formatCortexMemoryEvent(params.event);
try {
await fs.writeFile(inputPath, payload, "utf8");
await runExec(
"cortex",
["extract", inputPath, "-o", status.graphPath, "--merge", status.graphPath],
{
timeoutMs: 15_000,
cwd: params.workspaceDir,
maxBuffer: 2 * 1024 * 1024,
},
);
return {
workspaceDir: params.workspaceDir,
graphPath: status.graphPath,
stored: true,
};
} catch (error) {
throw new Error(`Cortex ingest failed: ${formatCortexExecError(error)}`, { cause: error });
} finally {
await fs.rm(tmpDir, { recursive: true, force: true }).catch(() => {});
}
}
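The path-resolution rules used throughout this module can be condensed into a standalone sketch mirroring `resolveCortexGraphPath` above (re-implemented here for illustration): no override yields `<workspace>/.cortex/context.json`, an absolute override is normalized as-is, and a relative override resolves against the workspace.

```typescript
import path from "node:path";

// Standalone re-implementation of the resolveCortexGraphPath rules.
function resolveGraphPath(workspaceDir: string, graphPath?: string): string {
  const trimmed = graphPath?.trim();
  if (!trimmed) {
    // Default location inside the workspace.
    return path.join(workspaceDir, ".cortex", "context.json");
  }
  if (path.isAbsolute(trimmed)) {
    return path.normalize(trimmed);
  }
  // Relative overrides are anchored at the workspace root.
  return path.normalize(path.resolve(workspaceDir, trimmed));
}
```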

@@ -5,13 +5,41 @@ import {
import { requireApiKey, resolveApiKeyForProvider } from "../agents/model-auth.js";
import { parseGeminiAuth } from "../infra/gemini-auth.js";
import type { SsrFPolicy } from "../infra/net/ssrf.js";
import type { EmbeddingInput } from "./embedding-inputs.js";
import {
hasNonTextEmbeddingParts,
isInlineDataEmbeddingInputPart,
type EmbeddingInput,
} from "./embedding-inputs.js";
import { sanitizeAndNormalizeEmbedding } from "./embedding-vectors.js";
import { debugEmbeddingsLog } from "./embeddings-debug.js";
import type { EmbeddingProvider, EmbeddingProviderOptions } from "./embeddings.js";
import { buildRemoteBaseUrlPolicy, withRemoteHttpResponse } from "./remote-http.js";
import { resolveMemorySecretInputString } from "./secret-input.js";
export const GEMINI_EMBEDDING_2_MODELS = new Set(["gemini-embedding-2-preview"]);
const GEMINI_EMBEDDING_2_DIMENSIONS = new Set([768, 1536, 3072]);
export type GeminiTaskType =
| "RETRIEVAL_QUERY"
| "RETRIEVAL_DOCUMENT"
| "SEMANTIC_SIMILARITY"
| (string & {});
export type GeminiEmbeddingRequestPart =
| { text: string }
| { inlineData: { mimeType: string; data: string } };
export type GeminiEmbeddingContent = {
parts: GeminiEmbeddingRequestPart[];
};
export type GeminiTextEmbeddingRequest = {
model?: string;
content: GeminiEmbeddingContent;
taskType?: GeminiTaskType;
outputDimensionality?: number;
};
export type GeminiEmbeddingClient = {
baseUrl: string;
headers: Record<string, string>;
@@ -28,110 +56,28 @@ const GEMINI_MAX_INPUT_TOKENS: Record<string, number> = {
"text-embedding-004": 2048,
};
// --- gemini-embedding-2-preview support ---
export const GEMINI_EMBEDDING_2_MODELS = new Set([
"gemini-embedding-2-preview",
// Add the GA model name here once released.
]);
const GEMINI_EMBEDDING_2_DEFAULT_DIMENSIONS = 3072;
const GEMINI_EMBEDDING_2_VALID_DIMENSIONS = [768, 1536, 3072] as const;
export type GeminiTaskType =
| "RETRIEVAL_QUERY"
| "RETRIEVAL_DOCUMENT"
| "SEMANTIC_SIMILARITY"
| "CLASSIFICATION"
| "CLUSTERING"
| "QUESTION_ANSWERING"
| "FACT_VERIFICATION";
export type GeminiTextPart = { text: string };
export type GeminiInlinePart = {
inlineData: { mimeType: string; data: string };
};
export type GeminiPart = GeminiTextPart | GeminiInlinePart;
export type GeminiEmbeddingRequest = {
content: { parts: GeminiPart[] };
taskType: GeminiTaskType;
outputDimensionality?: number;
model?: string;
};
export type GeminiTextEmbeddingRequest = GeminiEmbeddingRequest;
/** Builds the text-only Gemini embedding request shape used across direct and batch APIs. */
export function buildGeminiTextEmbeddingRequest(params: {
text: string;
taskType: GeminiTaskType;
outputDimensionality?: number;
modelPath?: string;
}): GeminiTextEmbeddingRequest {
return buildGeminiEmbeddingRequest({
input: { text: params.text },
taskType: params.taskType,
outputDimensionality: params.outputDimensionality,
modelPath: params.modelPath,
});
}
export function buildGeminiEmbeddingRequest(params: {
input: EmbeddingInput;
taskType: GeminiTaskType;
outputDimensionality?: number;
modelPath?: string;
}): GeminiEmbeddingRequest {
const request: GeminiEmbeddingRequest = {
content: {
parts: params.input.parts?.map((part) =>
part.type === "text"
? ({ text: part.text } satisfies GeminiTextPart)
: ({
inlineData: { mimeType: part.mimeType, data: part.data },
} satisfies GeminiInlinePart),
) ?? [{ text: params.input.text }],
},
taskType: params.taskType,
};
if (params.modelPath) {
request.model = params.modelPath;
}
if (params.outputDimensionality != null) {
request.outputDimensionality = params.outputDimensionality;
}
return request;
}
/**
* Returns true if the given model name is a gemini-embedding-2 variant that
* supports `outputDimensionality` and extended task types.
*/
export function isGeminiEmbedding2Model(model: string): boolean {
return GEMINI_EMBEDDING_2_MODELS.has(model);
return GEMINI_EMBEDDING_2_MODELS.has(normalizeGeminiModel(model));
}
/**
* Validate and return the `outputDimensionality` for gemini-embedding-2 models.
* Returns `undefined` for older models (they don't support the param).
*/
export function resolveGeminiOutputDimensionality(
model: string,
requested?: number,
outputDimensionality?: number,
): number | undefined {
if (!isGeminiEmbedding2Model(model)) {
return undefined;
}
if (requested == null) {
return GEMINI_EMBEDDING_2_DEFAULT_DIMENSIONS;
if (typeof outputDimensionality !== "number") {
return 3072;
}
const valid: readonly number[] = GEMINI_EMBEDDING_2_VALID_DIMENSIONS;
if (!valid.includes(requested)) {
if (!GEMINI_EMBEDDING_2_DIMENSIONS.has(outputDimensionality)) {
throw new Error(
`Invalid outputDimensionality ${requested} for ${model}. Valid values: ${valid.join(", ")}`,
`Invalid outputDimensionality ${outputDimensionality} for ${normalizeGeminiModel(model)}. Valid values: 768, 1536, 3072.`,
);
}
return requested;
return outputDimensionality;
}
function resolveRemoteApiKey(remoteApiKey: unknown): string | undefined {
const trimmed = resolveMemorySecretInputString({
value: remoteApiKey,
@@ -146,7 +92,7 @@ function resolveRemoteApiKey(remoteApiKey: unknown): string | undefined {
return trimmed;
}
export function normalizeGeminiModel(model: string): string {
function normalizeGeminiModel(model: string): string {
const trimmed = model.trim();
if (!trimmed) {
return DEFAULT_GEMINI_EMBEDDING_MODEL;
@@ -161,46 +107,6 @@ export function normalizeGeminiModel(model: string): string {
return withoutPrefix;
}
async function fetchGeminiEmbeddingPayload(params: {
client: GeminiEmbeddingClient;
endpoint: string;
body: unknown;
}): Promise<{
embedding?: { values?: number[] };
embeddings?: Array<{ values?: number[] }>;
}> {
return await executeWithApiKeyRotation({
provider: "google",
apiKeys: params.client.apiKeys,
execute: async (apiKey) => {
const authHeaders = parseGeminiAuth(apiKey);
const headers = {
...authHeaders.headers,
...params.client.headers,
};
return await withRemoteHttpResponse({
url: params.endpoint,
ssrfPolicy: params.client.ssrfPolicy,
init: {
method: "POST",
headers,
body: JSON.stringify(params.body),
},
onResponse: async (res) => {
if (!res.ok) {
const text = await res.text();
throw new Error(`gemini embeddings failed: ${res.status} ${text}`);
}
return (await res.json()) as {
embedding?: { values?: number[] };
embeddings?: Array<{ values?: number[] }>;
};
},
});
},
});
}
function normalizeGeminiBaseUrl(raw: string): string {
const trimmed = raw.replace(/\/+$/, "");
const openAiIndex = trimmed.indexOf("/openai");
@@ -214,6 +120,46 @@ function buildGeminiModelPath(model: string): string {
return model.startsWith("models/") ? model : `models/${model}`;
}
export function buildGeminiTextEmbeddingRequest(params: {
text: string;
taskType?: GeminiTaskType;
modelPath?: string;
outputDimensionality?: number;
}): GeminiTextEmbeddingRequest {
return {
...(params.modelPath ? { model: params.modelPath } : {}),
content: { parts: [{ text: params.text }] },
...(params.taskType ? { taskType: params.taskType } : {}),
...(typeof params.outputDimensionality === "number"
? { outputDimensionality: params.outputDimensionality }
: {}),
};
}
export function buildGeminiEmbeddingRequest(params: {
input: EmbeddingInput;
taskType?: GeminiTaskType;
modelPath?: string;
outputDimensionality?: number;
}): GeminiTextEmbeddingRequest {
const parts =
params.input.parts?.length && hasNonTextEmbeddingParts(params.input)
? params.input.parts.map((part) =>
isInlineDataEmbeddingInputPart(part)
? { inlineData: { mimeType: part.mimeType, data: part.data } }
: { text: part.text },
)
: [{ text: params.input.text }];
return {
...(params.modelPath ? { model: params.modelPath } : {}),
content: { parts },
...(params.taskType ? { taskType: params.taskType } : {}),
...(typeof params.outputDimensionality === "number"
? { outputDimensionality: params.outputDimensionality }
: {}),
};
}
export async function createGeminiEmbeddingProvider(
options: EmbeddingProviderOptions,
): Promise<{ provider: EmbeddingProvider; client: GeminiEmbeddingClient }> {
@@ -221,55 +167,105 @@ export async function createGeminiEmbeddingProvider(
const baseUrl = client.baseUrl.replace(/\/$/, "");
const embedUrl = `${baseUrl}/${client.modelPath}:embedContent`;
const batchUrl = `${baseUrl}/${client.modelPath}:batchEmbedContents`;
const isV2 = isGeminiEmbedding2Model(client.model);
const outputDimensionality = client.outputDimensionality;
const fetchWithGeminiAuth = async (apiKey: string, endpoint: string, body: unknown) => {
const authHeaders = parseGeminiAuth(apiKey);
const headers = {
...authHeaders.headers,
...client.headers,
};
const payload = await withRemoteHttpResponse({
url: endpoint,
ssrfPolicy: client.ssrfPolicy,
init: {
method: "POST",
headers,
body: JSON.stringify(body),
},
onResponse: async (res) => {
if (!res.ok) {
const text = await res.text();
throw new Error(`gemini embeddings failed: ${res.status} ${text}`);
}
return (await res.json()) as {
embedding?: { values?: number[] };
embeddings?: Array<{ values?: number[] }>;
};
},
});
return payload;
};
const embedQuery = async (text: string): Promise<number[]> => {
if (!text.trim()) {
return [];
}
const payload = await fetchGeminiEmbeddingPayload({
client,
endpoint: embedUrl,
body: buildGeminiTextEmbeddingRequest({
text,
taskType: options.taskType ?? "RETRIEVAL_QUERY",
outputDimensionality: isV2 ? outputDimensionality : undefined,
}),
const payload = await executeWithApiKeyRotation({
provider: "google",
apiKeys: client.apiKeys,
execute: (apiKey) =>
fetchWithGeminiAuth(
apiKey,
embedUrl,
buildGeminiTextEmbeddingRequest({
text,
taskType: options.taskType ?? "RETRIEVAL_QUERY",
modelPath: client.modelPath,
outputDimensionality: client.outputDimensionality,
}),
),
});
return sanitizeAndNormalizeEmbedding(payload.embedding?.values ?? []);
};
const embedBatch = async (texts: string[]): Promise<number[][]> => {
if (texts.length === 0) {
return [];
}
const requests = texts.map((text) =>
buildGeminiTextEmbeddingRequest({
text,
taskType: "RETRIEVAL_DOCUMENT",
modelPath: client.modelPath,
outputDimensionality: client.outputDimensionality,
}),
);
const payload = await executeWithApiKeyRotation({
provider: "google",
apiKeys: client.apiKeys,
execute: (apiKey) =>
fetchWithGeminiAuth(apiKey, batchUrl, {
requests,
}),
});
const embeddings = Array.isArray(payload.embeddings) ? payload.embeddings : [];
return texts.map((_, index) => sanitizeAndNormalizeEmbedding(embeddings[index]?.values ?? []));
};
const embedBatchInputs = async (inputs: EmbeddingInput[]): Promise<number[][]> => {
if (inputs.length === 0) {
return [];
}
const payload = await fetchGeminiEmbeddingPayload({
client,
endpoint: batchUrl,
body: {
requests: inputs.map((input) =>
buildGeminiEmbeddingRequest({
input,
modelPath: client.modelPath,
taskType: options.taskType ?? "RETRIEVAL_DOCUMENT",
outputDimensionality: isV2 ? outputDimensionality : undefined,
}),
),
},
const requests = inputs.map((input) =>
buildGeminiEmbeddingRequest({
input,
taskType: "RETRIEVAL_DOCUMENT",
modelPath: client.modelPath,
outputDimensionality: client.outputDimensionality,
}),
);
const payload = await executeWithApiKeyRotation({
provider: "google",
apiKeys: client.apiKeys,
execute: (apiKey) =>
fetchWithGeminiAuth(apiKey, batchUrl, {
requests,
}),
});
const embeddings = Array.isArray(payload.embeddings) ? payload.embeddings : [];
return inputs.map((_, index) => sanitizeAndNormalizeEmbedding(embeddings[index]?.values ?? []));
};
const embedBatch = async (texts: string[]): Promise<number[][]> => {
return await embedBatchInputs(
texts.map((text) => ({
text,
})),
);
};
return {
provider: {
id: "gemini",

@@ -225,6 +225,24 @@ describe("embedding provider remote overrides", () => {
expect(headers["Content-Type"]).toBe("application/json");
});
it("passes Gemini outputDimensionality when configured", async () => {
mockResolvedProviderKey("provider-key");
const result = await createEmbeddingProvider({
config: {} as never,
provider: "gemini",
remote: {
apiKey: "gemini-key",
},
model: "gemini-embedding-2-preview",
outputDimensionality: 768,
fallback: "openai",
});
expect(result.gemini?.outputDimensionality).toBe(768);
expect(requireProvider(result).model).toBe("gemini-embedding-2-preview");
});
it("fails fast when Gemini remote apiKey is an unresolved SecretRef", async () => {
await expect(
createEmbeddingProvider({

View File

@ -5,12 +5,7 @@ import type { SecretInput } from "../config/types.secrets.js";
import { formatErrorMessage } from "../infra/errors.js";
import { resolveUserPath } from "../utils.js";
import type { EmbeddingInput } from "./embedding-inputs.js";
import { sanitizeAndNormalizeEmbedding } from "./embedding-vectors.js";
import {
createGeminiEmbeddingProvider,
type GeminiEmbeddingClient,
type GeminiTaskType,
} from "./embeddings-gemini.js";
import { createGeminiEmbeddingProvider, type GeminiEmbeddingClient } from "./embeddings-gemini.js";
import {
createMistralEmbeddingProvider,
type MistralEmbeddingClient,
@@ -20,6 +15,15 @@ import { createOpenAiEmbeddingProvider, type OpenAiEmbeddingClient } from "./emb
import { createVoyageEmbeddingProvider, type VoyageEmbeddingClient } from "./embeddings-voyage.js";
import { importNodeLlamaCpp } from "./node-llama.js";
function sanitizeAndNormalizeEmbedding(vec: number[]): number[] {
const sanitized = vec.map((value) => (Number.isFinite(value) ? value : 0));
const magnitude = Math.sqrt(sanitized.reduce((sum, value) => sum + value * value, 0));
if (magnitude < 1e-10) {
return sanitized;
}
return sanitized.map((value) => value / magnitude);
}
export type { GeminiEmbeddingClient } from "./embeddings-gemini.js";
export type { MistralEmbeddingClient } from "./embeddings-mistral.js";
export type { OpenAiEmbeddingClient } from "./embeddings-openai.js";
@@ -61,6 +65,8 @@ export type EmbeddingProviderOptions = {
  config: OpenClawConfig;
  agentDir?: string;
  provider: EmbeddingProviderRequest;
  outputDimensionality?: number;
  taskType?: string;
  remote?: {
    baseUrl?: string;
    apiKey?: SecretInput;
@@ -72,10 +78,6 @@ export type EmbeddingProviderOptions = {
    modelPath?: string;
    modelCacheDir?: string;
  };
  /** Gemini embedding-2: output vector dimensions (768, 1536, or 3072). */
  outputDimensionality?: number;
  /** Gemini: override the default task type sent with embedding requests. */
  taskType?: GeminiTaskType;
};
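Net effect of the two hunks above: `outputDimensionality` and `taskType` move out of the Gemini-specific tail of the options type to the top level, and `taskType` widens from `GeminiTaskType` to a plain `string`. A minimal sketch of the reshaped options (field set abridged; the comments and the `"RETRIEVAL_QUERY"` value are illustrative assumptions, not taken from the diff):

```typescript
// Abridged sketch of EmbeddingProviderOptions after the refactor.
// Only the fields touched by the diff are shown.
type EmbeddingProviderOptionsSketch = {
  provider: string; // e.g. "gemini"
  outputDimensionality?: number; // Gemini embedding-2: 768, 1536, or 3072
  taskType?: string; // was GeminiTaskType; now provider-agnostic
  remote?: {
    baseUrl?: string;
    apiKey?: string;
  };
};

const opts: EmbeddingProviderOptionsSketch = {
  provider: "gemini",
  outputDimensionality: 768,
  taskType: "RETRIEVAL_QUERY", // hypothetical task-type value
};
console.log(opts.outputDimensionality); // 768
```

Loosening `taskType` to `string` keeps the options type free of Gemini-only unions now that the fields apply at the top level.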
export const DEFAULT_LOCAL_MODEL =
@@ -310,7 +312,7 @@ function formatLocalSetupError(err: unknown): string {
: undefined,
missing && detail ? `Detail: ${detail}` : null,
"To enable local embeddings:",
"1) Use Node 24 (recommended for installs/updates; Node 22 LTS, currently 22.16+, remains supported)",
"1) Use Node 22 LTS (recommended for installs/updates)",
missing
? "2) Reinstall OpenClaw (this should install node-llama-cpp): npm i -g openclaw@latest"
: null,


@@ -327,6 +327,26 @@ describe("memory index", () => {
  expect(audioResults.some((result) => result.path.endsWith("meeting.wav"))).toBe(true);
});

it("indexes a multimodal extra path provided as a direct file path", async () => {
  const mediaDir = path.join(workspaceDir, "media-single-file");
  const imagePath = path.join(mediaDir, "diagram.png");
  await fs.mkdir(mediaDir, { recursive: true });
  await fs.writeFile(imagePath, Buffer.from("png"));
  const cfg = createCfg({
    storePath: path.join(workspaceDir, `index-multimodal-file-${randomUUID()}.sqlite`),
    provider: "gemini",
    model: "gemini-embedding-2-preview",
    extraPaths: [imagePath],
    multimodal: { enabled: true, modalities: ["image"] },
  });
  const manager = await getPersistentManager(cfg);
  await manager.sync({ reason: "test" });
  const imageResults = await manager.search("image");
  expect(imageResults.some((result) => result.path.endsWith("diagram.png"))).toBe(true);
});

it("skips oversized multimodal inputs without aborting sync", async () => {
  const mediaDir = path.join(workspaceDir, "media-oversize");
  await fs.mkdir(mediaDir, { recursive: true });
@@ -829,24 +849,26 @@ describe("memory index", () => {
internal.activateFallbackProvider = activateFallbackProvider;
const runUnsafeReindex = vi.fn(async () => {});
internal.runUnsafeReindex = runUnsafeReindex;
try {
await manager.sync({
reason: "post-compaction",
sessionFiles: [sessionPath],
});
await manager.sync({
reason: "post-compaction",
sessionFiles: [sessionPath],
});
expect(activateFallbackProvider).toHaveBeenCalledWith("embedding backend failed");
expect(runUnsafeReindex).toHaveBeenCalledWith({
reason: "post-compaction",
force: true,
progress: undefined,
});
internal.syncSessionFiles = originalSyncSessionFiles;
internal.shouldFallbackOnError = originalShouldFallbackOnError;
internal.activateFallbackProvider = originalActivateFallbackProvider;
internal.runUnsafeReindex = originalRunUnsafeReindex;
await manager.close?.();
expect(activateFallbackProvider).toHaveBeenCalledWith("embedding backend failed");
expect(runUnsafeReindex).toHaveBeenCalledWith({
reason: "post-compaction",
force: true,
sessionFiles: [sessionPath],
progress: undefined,
});
} finally {
internal.syncSessionFiles = originalSyncSessionFiles;
internal.shouldFallbackOnError = originalShouldFallbackOnError;
internal.activateFallbackProvider = originalActivateFallbackProvider;
internal.runUnsafeReindex = originalRunUnsafeReindex;
await manager.close?.();
}
} finally {
if (previousStateDir === undefined) {
delete process.env.OPENCLAW_STATE_DIR;

Some files were not shown because too many files have changed in this diff.