Merge remote-tracking branch 'upstream/main' into feat/gigachat

# Conflicts:
#	extensions/feishu/src/monitor.bot-menu.lifecycle.test.ts
#	extensions/feishu/src/monitor.reply-once.lifecycle.test.ts
#	src/hooks/workspace.ts

commit fb642a08e2

71	.agents/skills/openclaw-test-heap-leaks/SKILL.md	Normal file
@@ -0,0 +1,71 @@
---
name: openclaw-test-heap-leaks
description: Investigate `pnpm test` memory growth, Vitest worker OOMs, and suspicious RSS increases in OpenClaw using the `scripts/test-parallel.mjs` heap snapshot tooling. Use when Codex needs to reproduce test-lane memory growth, collect repeated `.heapsnapshot` files, compare snapshots from the same worker PID, distinguish transformed-module retention from real data leaks, and fix or reduce the impact by patching cleanup logic or isolating hotspot tests.
---

# OpenClaw Test Heap Leaks

Use this skill for test-memory investigations. Do not guess from RSS alone when heap snapshots are available.

## Workflow

1. Reproduce the failing shape first.
   - Match the real entrypoint if possible. For Linux CI-style unit failures, start with:
     - `pnpm canvas:a2ui:bundle && OPENCLAW_TEST_MEMORY_TRACE=1 OPENCLAW_TEST_HEAPSNAPSHOT_INTERVAL_MS=60000 OPENCLAW_TEST_HEAPSNAPSHOT_DIR=.tmp/heapsnap OPENCLAW_TEST_WORKERS=2 OPENCLAW_TEST_MAX_OLD_SPACE_SIZE_MB=6144 pnpm test`
   - Keep `OPENCLAW_TEST_MEMORY_TRACE=1` enabled so the wrapper prints per-file RSS summaries alongside the snapshots.
   - If the report is about a specific shard or worker budget, preserve that shape.

2. Wait for repeated snapshots before concluding anything.
   - Take at least two intervals from the same lane.
   - Compare snapshots from the same PID inside one lane directory such as `.tmp/heapsnap/unit-fast/`.
   - Use `scripts/heapsnapshot-delta.mjs` to compare either two files directly or the earliest/latest pair per PID in one lane directory.

3. Classify the growth before choosing a fix.
   - If growth is dominated by Vite/Vitest transformed source strings, `Module`, `system / Context`, bytecode, descriptor arrays, or property maps, treat it as retained module-graph growth in long-lived workers.
   - If growth is dominated by app objects, caches, buffers, server handles, timers, mock state, sqlite state, or similar runtime objects, treat it as a likely cleanup or lifecycle leak.

4. Fix the right layer.
   - For retained transformed-module growth in shared workers:
     - Move hotspot files out of `unit-fast` by updating `test/fixtures/test-parallel.behavior.json`.
     - Prefer `singletonIsolated` for files that are safe alone but inflate shared worker heaps.
     - If the file should already have been peeled out by timings but is absent from `test/fixtures/test-timings.unit.json`, call that out explicitly. Missing timings are a scheduling blind spot.
   - For real leaks:
     - Patch the implicated test or runtime cleanup path.
     - Look for missing `afterEach`/`afterAll`, module-reset gaps, retained global state, unreleased DB handles, or listeners/timers that survive the file.

5. Verify with the most direct proof.
   - Re-run the targeted lane or file with heap snapshots enabled if the suite still finishes in reasonable time.
   - If snapshot overhead pushes tests over Vitest timeouts, fall back to the same lane without snapshots and confirm the RSS trend or OOM is reduced.
   - For wrapper-only changes, at minimum verify the expected lanes start and the snapshot files are written.
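The real-leak teardown in step 4 can be sketched as a registry pattern. This is a minimal sketch with hypothetical names (not helpers from this repo): track every timer a test file creates, then release them all from `afterEach`/`afterAll` so nothing survives into later files in the same long-lived worker.

```javascript
// Hypothetical registry-style teardown for the "real leak" case: track every
// interval a test file creates and release them all at teardown so none
// survive into later files running in the same long-lived worker.
const liveTimers = new Set();

function trackedSetInterval(fn, ms) {
  const handle = setInterval(fn, ms);
  liveTimers.add(handle);
  return handle;
}

// Call from afterEach (or afterAll) in the implicated test file.
function releaseAllTimers() {
  for (const handle of liveTimers) {
    clearInterval(handle);
  }
  liveTimers.clear();
}
```

The same shape works for event listeners, sockets, and DB handles: register on create, drain the registry at teardown.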
## Heuristics

- Do not call everything a leak. In this repo, large `unit-fast` growth can be a worker-lifetime problem rather than an application object leak.
- `scripts/test-parallel.mjs` and `scripts/test-parallel-memory.mjs` are the primary control points for wrapper diagnostics.
- The lane names printed by `[test-parallel] start ...` and `[test-parallel][mem] summary ...` tell you where to focus.
- When one or two files account for most of the delta and they are missing from timings, reducing impact by isolating them is usually the first pragmatic fix.
- When the same retained object families grow across multiple intervals in the same worker PID, trust the snapshots over intuition.

## Snapshot Comparison

- Direct comparison:
  - `node .agents/skills/openclaw-test-heap-leaks/scripts/heapsnapshot-delta.mjs before.heapsnapshot after.heapsnapshot`
- Auto-select earliest/latest snapshots per PID within one lane:
  - `node .agents/skills/openclaw-test-heap-leaks/scripts/heapsnapshot-delta.mjs --lane-dir .tmp/heapsnap/unit-fast`
- Useful flags:
  - `--top 40`
  - `--min-kb 32`
  - `--pid 16133`

Read the top positive deltas first. Large positive growth in module-transform artifacts suggests lane isolation; large positive growth in runtime objects suggests a real leak.
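That triage rule can be mechanized over the delta rows. A minimal sketch, with an illustrative (not exhaustive) set of module-graph node types as the assumption:

```javascript
// Hypothetical post-processing of delta rows: bucket positive growth into
// module-graph families vs runtime-object families, then pick the fix layer.
// The family list is illustrative, not an exhaustive classification.
const MODULE_GRAPH_TYPES = new Set(["string", "code", "hidden"]);

function classifyGrowth(rows) {
  let moduleBytes = 0;
  let runtimeBytes = 0;
  for (const row of rows) {
    if (row.sizeDelta <= 0) continue;
    if (MODULE_GRAPH_TYPES.has(row.type)) {
      moduleBytes += row.sizeDelta;
    } else {
      runtimeBytes += row.sizeDelta;
    }
  }
  return moduleBytes >= runtimeBytes ? "isolate-lane" : "fix-cleanup";
}
```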
## Output Expectations

When using this skill, report:

- The exact reproduce command.
- Which lane and PID were compared.
- The dominant retained object families from the snapshot delta.
- Whether the issue is a real leak or shared-worker retained module growth.
- The concrete fix or impact-reduction patch.
- What you verified, and what snapshot overhead prevented you from verifying.

@@ -0,0 +1,4 @@
interface:
  display_name: "Test Heap Leaks"
  short_description: "Investigate test OOMs with heap snapshots"
  default_prompt: "Use $openclaw-test-heap-leaks to investigate test memory growth with heap snapshots and reduce its impact."
@@ -0,0 +1,265 @@
#!/usr/bin/env node

import fs from "node:fs";
import path from "node:path";

function printUsage() {
  console.error(
    "Usage: node heapsnapshot-delta.mjs <before.heapsnapshot> <after.heapsnapshot> [--top N] [--min-kb N]",
  );
  console.error(
    "   or: node heapsnapshot-delta.mjs --lane-dir <dir> [--pid PID] [--top N] [--min-kb N]",
  );
}

function fail(message) {
  console.error(message);
  process.exit(1);
}

function parseArgs(argv) {
  const options = {
    top: 30,
    minKb: 64,
    laneDir: null,
    pid: null,
    files: [],
  };

  for (let index = 0; index < argv.length; index += 1) {
    const arg = argv[index];
    if (arg === "--top") {
      options.top = Number.parseInt(argv[index + 1] ?? "", 10);
      index += 1;
      continue;
    }
    if (arg === "--min-kb") {
      options.minKb = Number.parseInt(argv[index + 1] ?? "", 10);
      index += 1;
      continue;
    }
    if (arg === "--lane-dir") {
      options.laneDir = argv[index + 1] ?? null;
      index += 1;
      continue;
    }
    if (arg === "--pid") {
      options.pid = Number.parseInt(argv[index + 1] ?? "", 10);
      index += 1;
      continue;
    }
    options.files.push(arg);
  }

  if (!Number.isFinite(options.top) || options.top <= 0) {
    fail("--top must be a positive integer");
  }
  if (!Number.isFinite(options.minKb) || options.minKb < 0) {
    fail("--min-kb must be a non-negative integer");
  }
  if (options.pid !== null && (!Number.isInteger(options.pid) || options.pid <= 0)) {
    fail("--pid must be a positive integer");
  }

  return options;
}

function parseHeapFilename(filePath) {
  const base = path.basename(filePath);
  const match = base.match(
    /^Heap\.(?<stamp>\d{8}\.\d{6})\.(?<pid>\d+)\.0\.(?<seq>\d+)\.heapsnapshot$/u,
  );
  if (!match?.groups) {
    return null;
  }
  return {
    filePath,
    pid: Number.parseInt(match.groups.pid, 10),
    stamp: match.groups.stamp,
    sequence: Number.parseInt(match.groups.seq, 10),
  };
}

function resolvePair(options) {
  if (options.laneDir) {
    const entries = fs
      .readdirSync(options.laneDir)
      .map((name) => parseHeapFilename(path.join(options.laneDir, name)))
      .filter((entry) => entry !== null)
      .filter((entry) => options.pid === null || entry.pid === options.pid)
      .toSorted((left, right) => {
        if (left.pid !== right.pid) {
          return left.pid - right.pid;
        }
        if (left.stamp !== right.stamp) {
          return left.stamp.localeCompare(right.stamp);
        }
        return left.sequence - right.sequence;
      });

    if (entries.length === 0) {
      fail(`No matching heap snapshots found in ${options.laneDir}`);
    }

    const groups = new Map();
    for (const entry of entries) {
      const group = groups.get(entry.pid) ?? [];
      group.push(entry);
      groups.set(entry.pid, group);
    }

    const candidates = Array.from(groups.values())
      .map((group) => ({
        pid: group[0].pid,
        before: group[0],
        after: group.at(-1),
        count: group.length,
      }))
      .filter((entry) => entry.count >= 2);

    if (candidates.length === 0) {
      fail(`Need at least two snapshots for one PID in ${options.laneDir}`);
    }

    const chosen =
      options.pid !== null
        ? (candidates.find((entry) => entry.pid === options.pid) ?? null)
        : candidates.toSorted((left, right) => right.count - left.count || left.pid - right.pid)[0];

    if (!chosen) {
      fail(`No PID with at least two snapshots matched in ${options.laneDir}`);
    }

    return {
      before: chosen.before.filePath,
      after: chosen.after.filePath,
      pid: chosen.pid,
      snapshotCount: chosen.count,
    };
  }

  if (options.files.length !== 2) {
    printUsage();
    process.exit(1);
  }

  return {
    before: options.files[0],
    after: options.files[1],
    pid: null,
    snapshotCount: 2,
  };
}

function loadSummary(filePath) {
  const data = JSON.parse(fs.readFileSync(filePath, "utf8"));
  const meta = data.snapshot?.meta;
  if (!meta) {
    fail(`Invalid heap snapshot: ${filePath}`);
  }

  const nodeFieldCount = meta.node_fields.length;
  const typeNames = meta.node_types[0];
  const strings = data.strings;
  const typeIndex = meta.node_fields.indexOf("type");
  const nameIndex = meta.node_fields.indexOf("name");
  const selfSizeIndex = meta.node_fields.indexOf("self_size");

  const summary = new Map();
  for (let offset = 0; offset < data.nodes.length; offset += nodeFieldCount) {
    const type = typeNames[data.nodes[offset + typeIndex]];
    const name = strings[data.nodes[offset + nameIndex]];
    const selfSize = data.nodes[offset + selfSizeIndex];
    const key = `${type}\t${name}`;
    const current = summary.get(key) ?? {
      type,
      name,
      selfSize: 0,
      count: 0,
    };
    current.selfSize += selfSize;
    current.count += 1;
    summary.set(key, current);
  }
  return {
    nodeCount: data.snapshot.node_count,
    summary,
  };
}

function formatBytes(bytes) {
  if (Math.abs(bytes) >= 1024 ** 2) {
    return `${(bytes / 1024 ** 2).toFixed(2)} MiB`;
  }
  if (Math.abs(bytes) >= 1024) {
    return `${(bytes / 1024).toFixed(1)} KiB`;
  }
  return `${bytes} B`;
}

function formatDelta(bytes) {
  return `${bytes >= 0 ? "+" : "-"}${formatBytes(Math.abs(bytes))}`;
}

function truncate(text, maxLength) {
  return text.length <= maxLength ? text : `${text.slice(0, maxLength - 1)}…`;
}

function main() {
  const options = parseArgs(process.argv.slice(2));
  const pair = resolvePair(options);
  const before = loadSummary(pair.before);
  const after = loadSummary(pair.after);
  const minBytes = options.minKb * 1024;

  const rows = [];
  for (const [key, next] of after.summary) {
    const previous = before.summary.get(key) ?? { selfSize: 0, count: 0 };
    const sizeDelta = next.selfSize - previous.selfSize;
    const countDelta = next.count - previous.count;
    if (sizeDelta < minBytes) {
      continue;
    }
    rows.push({
      type: next.type,
      name: next.name,
      sizeDelta,
      countDelta,
      afterSize: next.selfSize,
      afterCount: next.count,
    });
  }

  rows.sort(
    (left, right) => right.sizeDelta - left.sizeDelta || right.countDelta - left.countDelta,
  );

  console.log(`before: ${pair.before}`);
  console.log(`after: ${pair.after}`);
  if (pair.pid !== null) {
    console.log(`pid: ${pair.pid} (${pair.snapshotCount} snapshots found)`);
  }
  console.log(
    `nodes: ${before.nodeCount} -> ${after.nodeCount} (${after.nodeCount - before.nodeCount >= 0 ? "+" : ""}${after.nodeCount - before.nodeCount})`,
  );
  console.log(`filter: top=${options.top} min=${options.minKb} KiB`);
  console.log("");

  if (rows.length === 0) {
    console.log("No entries exceeded the minimum delta.");
    return;
  }

  for (const row of rows.slice(0, options.top)) {
    console.log(
      [
        formatDelta(row.sizeDelta).padStart(11),
        `count ${row.countDelta >= 0 ? "+" : ""}${row.countDelta}`.padStart(10),
        row.type.padEnd(16),
        truncate(row.name || "(empty)", 96),
      ].join(" "),
    );
  }
}

main();
381	extensions/feishu/src/monitor.acp-init-failure.lifecycle.test.ts	Normal file
@@ -0,0 +1,381 @@
import { afterEach, beforeEach, describe, expect, it, vi } from "vitest";
import { createPluginRuntimeMock } from "../../../test/helpers/extensions/plugin-runtime-mock.js";
import type { ClawdbotConfig, PluginRuntime, RuntimeEnv } from "../runtime-api.js";
import { monitorSingleAccount } from "./monitor.account.js";
import { setFeishuRuntime } from "./runtime.js";
import type { ResolvedFeishuAccount } from "./types.js";

const createEventDispatcherMock = vi.hoisted(() => vi.fn());
const monitorWebSocketMock = vi.hoisted(() => vi.fn(async () => {}));
const monitorWebhookMock = vi.hoisted(() => vi.fn(async () => {}));
const createFeishuThreadBindingManagerMock = vi.hoisted(() => vi.fn(() => ({ stop: vi.fn() })));
const resolveBoundConversationMock = vi.hoisted(() => vi.fn(() => null));
const touchBindingMock = vi.hoisted(() => vi.fn());
const resolveAgentRouteMock = vi.hoisted(() => vi.fn());
const resolveConfiguredBindingRouteMock = vi.hoisted(() => vi.fn());
const ensureConfiguredBindingRouteReadyMock = vi.hoisted(() => vi.fn());
const dispatchReplyFromConfigMock = vi.hoisted(() => vi.fn());
const withReplyDispatcherMock = vi.hoisted(() => vi.fn());
const finalizeInboundContextMock = vi.hoisted(() => vi.fn((ctx) => ctx));
const sendMessageFeishuMock = vi.hoisted(() =>
  vi.fn(async () => ({ messageId: "om_notice", chatId: "oc_group_topic" })),
);
const getMessageFeishuMock = vi.hoisted(() => vi.fn(async () => null));
const listFeishuThreadMessagesMock = vi.hoisted(() => vi.fn(async () => []));

let handlers: Record<string, (data: unknown) => Promise<void>> = {};
let lastRuntime: RuntimeEnv | null = null;
const originalStateDir = process.env.OPENCLAW_STATE_DIR;

vi.mock("./client.js", async () => {
  const actual = await vi.importActual<typeof import("./client.js")>("./client.js");
  return {
    ...actual,
    createEventDispatcher: createEventDispatcherMock,
  };
});

vi.mock("./monitor.transport.js", () => ({
  monitorWebSocket: monitorWebSocketMock,
  monitorWebhook: monitorWebhookMock,
}));

vi.mock("./thread-bindings.js", () => ({
  createFeishuThreadBindingManager: createFeishuThreadBindingManagerMock,
}));

vi.mock("./send.js", () => ({
  sendMessageFeishu: sendMessageFeishuMock,
  getMessageFeishu: getMessageFeishuMock,
  listFeishuThreadMessages: listFeishuThreadMessagesMock,
}));

vi.mock("openclaw/plugin-sdk/conversation-runtime", async (importOriginal) => {
  const actual = await importOriginal<typeof import("openclaw/plugin-sdk/conversation-runtime")>();
  return {
    ...actual,
    resolveConfiguredBindingRoute: (params: unknown) => resolveConfiguredBindingRouteMock(params),
    ensureConfiguredBindingRouteReady: (params: unknown) =>
      ensureConfiguredBindingRouteReadyMock(params),
    getSessionBindingService: () => ({
      resolveByConversation: resolveBoundConversationMock,
      touch: touchBindingMock,
    }),
  };
});

vi.mock("../../../src/infra/outbound/session-binding-service.js", () => ({
  getSessionBindingService: () => ({
    resolveByConversation: resolveBoundConversationMock,
    touch: touchBindingMock,
  }),
}));

function createLifecycleConfig(): ClawdbotConfig {
  return {
    session: { mainKey: "main", scope: "per-sender" },
    channels: {
      feishu: {
        enabled: true,
        groupPolicy: "open",
        requireMention: false,
        resolveSenderNames: false,
        allowFrom: ["ou_sender_1"],
        accounts: {
          "acct-acp": {
            enabled: true,
            appId: "cli_test",
            appSecret: "secret_test", // pragma: allowlist secret
            connectionMode: "websocket",
            groupPolicy: "open",
            requireMention: false,
            resolveSenderNames: false,
            groups: {
              oc_group_topic: {
                requireMention: false,
                groupSessionScope: "group_topic",
                replyInThread: "enabled",
              },
            },
          },
        },
      },
    },
    messages: {
      inbound: {
        debounceMs: 0,
        byChannel: {
          feishu: 0,
        },
      },
    },
  } as ClawdbotConfig;
}

function createLifecycleAccount(): ResolvedFeishuAccount {
  return {
    accountId: "acct-acp",
    selectionSource: "explicit",
    enabled: true,
    configured: true,
    appId: "cli_test",
    appSecret: "secret_test", // pragma: allowlist secret
    domain: "feishu",
    config: {
      enabled: true,
      connectionMode: "websocket",
      groupPolicy: "open",
      requireMention: false,
      resolveSenderNames: false,
      groups: {
        oc_group_topic: {
          requireMention: false,
          groupSessionScope: "group_topic",
          replyInThread: "enabled",
        },
      },
      allowFrom: ["ou_sender_1"],
    },
  } as unknown as ResolvedFeishuAccount;
}

function createRuntimeEnv(): RuntimeEnv {
  return {
    log: vi.fn(),
    error: vi.fn(),
    exit: vi.fn(),
  } as RuntimeEnv;
}

function createTopicEvent(messageId: string) {
  return {
    sender: {
      sender_id: { open_id: "ou_sender_1" },
      sender_type: "user",
    },
    message: {
      message_id: messageId,
      root_id: "om_topic_root_1",
      thread_id: "omt_topic_1",
      chat_id: "oc_group_topic",
      chat_type: "group" as const,
      message_type: "text",
      content: JSON.stringify({ text: "hello topic" }),
      create_time: "1710000000000",
    },
  };
}

async function settleAsyncWork(): Promise<void> {
  for (let i = 0; i < 6; i += 1) {
    await Promise.resolve();
    await new Promise((resolve) => setTimeout(resolve, 0));
  }
}

async function setupLifecycleMonitor() {
  const register = vi.fn((registered: Record<string, (data: unknown) => Promise<void>>) => {
    handlers = registered;
  });
  createEventDispatcherMock.mockReturnValue({ register });

  lastRuntime = createRuntimeEnv();

  await monitorSingleAccount({
    cfg: createLifecycleConfig(),
    account: createLifecycleAccount(),
    runtime: lastRuntime,
    botOpenIdSource: {
      kind: "prefetched",
      botOpenId: "ou_bot_1",
      botName: "Bot",
    },
  });

  const onMessage = handlers["im.message.receive_v1"];
  if (!onMessage) {
    throw new Error("missing im.message.receive_v1 handler");
  }
  return onMessage;
}

describe("Feishu ACP-init failure lifecycle", () => {
  beforeEach(() => {
    vi.clearAllMocks();
    handlers = {};
    lastRuntime = null;
    process.env.OPENCLAW_STATE_DIR = `/tmp/openclaw-feishu-acp-failure-${Date.now()}-${Math.random().toString(36).slice(2)}`;

    resolveBoundConversationMock.mockReturnValue(null);
    resolveAgentRouteMock.mockReturnValue({
      agentId: "main",
      channel: "feishu",
      accountId: "acct-acp",
      sessionKey: "agent:main:feishu:group:oc_group_topic",
      mainSessionKey: "agent:main:main",
      matchedBy: "default",
    });
    resolveConfiguredBindingRouteMock.mockReturnValue({
      bindingResolution: {
        configuredBinding: {
          spec: {
            channel: "feishu",
            accountId: "acct-acp",
            conversationId: "oc_group_topic:topic:om_topic_root_1",
            agentId: "codex",
            mode: "persistent",
          },
          record: {
            bindingId: "config:acp:feishu:acct-acp:oc_group_topic:topic:om_topic_root_1",
            targetSessionKey: "agent:codex:acp:binding:feishu:acct-acp:abc123",
            targetKind: "session",
            conversation: {
              channel: "feishu",
              accountId: "acct-acp",
              conversationId: "oc_group_topic:topic:om_topic_root_1",
              parentConversationId: "oc_group_topic",
            },
            status: "active",
            boundAt: 0,
            metadata: { source: "config" },
          },
        },
        statefulTarget: {
          kind: "stateful",
          driverId: "acp",
          sessionKey: "agent:codex:acp:binding:feishu:acct-acp:abc123",
          agentId: "codex",
        },
      },
      configuredBinding: {
        spec: {
          channel: "feishu",
          accountId: "acct-acp",
          conversationId: "oc_group_topic:topic:om_topic_root_1",
          agentId: "codex",
          mode: "persistent",
        },
      },
      route: {
        agentId: "codex",
        channel: "feishu",
        accountId: "acct-acp",
        sessionKey: "agent:codex:acp:binding:feishu:acct-acp:abc123",
        mainSessionKey: "agent:codex:main",
        matchedBy: "binding.channel",
      },
    });
    ensureConfiguredBindingRouteReadyMock.mockResolvedValue({
      ok: false,
      error: "runtime unavailable",
    });

    dispatchReplyFromConfigMock.mockResolvedValue({
      queuedFinal: false,
      counts: { final: 0 },
    });
    withReplyDispatcherMock.mockImplementation(async ({ run }) => await run());

    setFeishuRuntime(
      createPluginRuntimeMock({
        channel: {
          debounce: {
            resolveInboundDebounceMs: vi.fn(() => 0),
            createInboundDebouncer: <T>(params: {
              onFlush?: (items: T[]) => Promise<void>;
              onError?: (err: unknown, items: T[]) => void;
            }) => ({
              enqueue: async (item: T) => {
                try {
                  await params.onFlush?.([item]);
                } catch (err) {
                  params.onError?.(err, [item]);
                }
              },
              flushKey: async () => {},
            }),
          },
          text: {
            hasControlCommand: vi.fn(() => false),
          },
          routing: {
            resolveAgentRoute:
              resolveAgentRouteMock as unknown as PluginRuntime["channel"]["routing"]["resolveAgentRoute"],
          },
          reply: {
            resolveEnvelopeFormatOptions: vi.fn(() => ({})),
            formatAgentEnvelope: vi.fn((params: { body: string }) => params.body),
            finalizeInboundContext:
              finalizeInboundContextMock as unknown as PluginRuntime["channel"]["reply"]["finalizeInboundContext"],
            dispatchReplyFromConfig:
              dispatchReplyFromConfigMock as unknown as PluginRuntime["channel"]["reply"]["dispatchReplyFromConfig"],
            withReplyDispatcher:
              withReplyDispatcherMock as unknown as PluginRuntime["channel"]["reply"]["withReplyDispatcher"],
          },
          commands: {
            shouldComputeCommandAuthorized: vi.fn(() => false),
            resolveCommandAuthorizedFromAuthorizers: vi.fn(() => false),
          },
          session: {
            readSessionUpdatedAt: vi.fn(),
            resolveStorePath: vi.fn(() => "/tmp/feishu-acp-failure-sessions.json"),
          },
          pairing: {
            readAllowFromStore: vi.fn().mockResolvedValue([]),
            upsertPairingRequest: vi.fn(),
            buildPairingReply: vi.fn(),
          },
        },
        media: {
          detectMime: vi.fn(async () => "text/plain"),
        },
      }) as unknown as PluginRuntime,
    );
  });

  afterEach(() => {
    if (originalStateDir === undefined) {
      delete process.env.OPENCLAW_STATE_DIR;
      return;
    }
    process.env.OPENCLAW_STATE_DIR = originalStateDir;
  });

  it("sends one ACP failure notice to the topic root across replay", async () => {
    const onMessage = await setupLifecycleMonitor();
    const event = createTopicEvent("om_topic_msg_1");

    await onMessage(event);
    await settleAsyncWork();
    await onMessage(event);
    await settleAsyncWork();

    expect(lastRuntime?.error).not.toHaveBeenCalled();
    expect(resolveConfiguredBindingRouteMock).toHaveBeenCalledTimes(1);
    expect(ensureConfiguredBindingRouteReadyMock).toHaveBeenCalledTimes(1);
    expect(sendMessageFeishuMock).toHaveBeenCalledTimes(1);
    expect(sendMessageFeishuMock).toHaveBeenCalledWith(
      expect.objectContaining({
        accountId: "acct-acp",
        to: "chat:oc_group_topic",
        replyToMessageId: "om_topic_root_1",
        replyInThread: true,
        text: expect.stringContaining("runtime unavailable"),
      }),
    );
    expect(dispatchReplyFromConfigMock).not.toHaveBeenCalled();
  });

  it("does not duplicate the ACP failure notice after the first send succeeds", async () => {
    const onMessage = await setupLifecycleMonitor();
    const event = createTopicEvent("om_topic_msg_2");

    await onMessage(event);
    await settleAsyncWork();
    await onMessage(event);
    await settleAsyncWork();

    expect(sendMessageFeishuMock).toHaveBeenCalledTimes(1);
    expect(lastRuntime?.error).not.toHaveBeenCalled();
  });
});
@@ -126,7 +126,7 @@ function createLifecycleAccount(): ResolvedFeishuAccount {
       requireMention: false,
       resolveSenderNames: false,
     },
-  } as ResolvedFeishuAccount;
+  } as unknown as ResolvedFeishuAccount;
 }

 function createRuntimeEnv(): RuntimeEnv {
@@ -206,7 +206,7 @@ describe("Feishu bot-menu lifecycle", () => {
       markDispatchIdle: vi.fn(),
     });

-    resolveBoundConversationMock.mockReturnValue({
+    resolveBoundConversationMock.mockImplementation(() => ({
       bindingId: "binding-menu",
       targetSessionKey: "agent:bound-agent:feishu:direct:ou_user1",
       targetKind: "session",
@@ -218,7 +218,7 @@ describe("Feishu bot-menu lifecycle", () => {
       status: "active",
       boundAt: 1_710_000_000_000,
       metadata: {},
-    });
+    }));

     resolveAgentRouteMock.mockReturnValue({
       agentId: "main",
@@ -0,0 +1,392 @@
import { afterEach, beforeEach, describe, expect, it, vi } from "vitest";
import { createPluginRuntimeMock } from "../../../test/helpers/extensions/plugin-runtime-mock.js";
import type { ClawdbotConfig, PluginRuntime, RuntimeEnv } from "../runtime-api.js";
import { monitorSingleAccount } from "./monitor.account.js";
import { setFeishuRuntime } from "./runtime.js";
import type { ResolvedFeishuAccount } from "./types.js";

const createEventDispatcherMock = vi.hoisted(() => vi.fn());
const monitorWebSocketMock = vi.hoisted(() => vi.fn(async () => {}));
const monitorWebhookMock = vi.hoisted(() => vi.fn(async () => {}));
const createFeishuThreadBindingManagerMock = vi.hoisted(() => vi.fn(() => ({ stop: vi.fn() })));
const createFeishuReplyDispatcherMock = vi.hoisted(() => vi.fn());
const resolveBoundConversationMock = vi.hoisted(() => vi.fn(() => null));
const touchBindingMock = vi.hoisted(() => vi.fn());
const resolveAgentRouteMock = vi.hoisted(() => vi.fn());
const dispatchReplyFromConfigMock = vi.hoisted(() => vi.fn());
const withReplyDispatcherMock = vi.hoisted(() => vi.fn());
const finalizeInboundContextMock = vi.hoisted(() => vi.fn((ctx) => ctx));
const getMessageFeishuMock = vi.hoisted(() => vi.fn(async () => null));
const listFeishuThreadMessagesMock = vi.hoisted(() => vi.fn(async () => []));
const sendMessageFeishuMock = vi.hoisted(() =>
  vi.fn(async () => ({ messageId: "om_sent", chatId: "oc_broadcast_group" })),
);

let handlersByAccount = new Map<string, Record<string, (data: unknown) => Promise<void>>>();
let runtimesByAccount = new Map<string, RuntimeEnv>();
const originalStateDir = process.env.OPENCLAW_STATE_DIR;

vi.mock("./client.js", async () => {
  const actual = await vi.importActual<typeof import("./client.js")>("./client.js");
  return {
    ...actual,
    createEventDispatcher: createEventDispatcherMock,
  };
});

vi.mock("./monitor.transport.js", () => ({
  monitorWebSocket: monitorWebSocketMock,
  monitorWebhook: monitorWebhookMock,
}));

vi.mock("./thread-bindings.js", () => ({
  createFeishuThreadBindingManager: createFeishuThreadBindingManagerMock,
}));

vi.mock("./reply-dispatcher.js", () => ({
  createFeishuReplyDispatcher: createFeishuReplyDispatcherMock,
}));

vi.mock("./send.js", () => ({
  getMessageFeishu: getMessageFeishuMock,
  listFeishuThreadMessages: listFeishuThreadMessagesMock,
  sendMessageFeishu: sendMessageFeishuMock,
}));

vi.mock("openclaw/plugin-sdk/conversation-runtime", async (importOriginal) => {
  const actual = await importOriginal<typeof import("openclaw/plugin-sdk/conversation-runtime")>();
  return {
    ...actual,
    getSessionBindingService: () => ({
      resolveByConversation: resolveBoundConversationMock,
      touch: touchBindingMock,
    }),
  };
});

vi.mock("../../../src/infra/outbound/session-binding-service.js", () => ({
  getSessionBindingService: () => ({
    resolveByConversation: resolveBoundConversationMock,
    touch: touchBindingMock,
  }),
}));

function createLifecycleConfig(): ClawdbotConfig {
  return {
    broadcast: {
      oc_broadcast_group: ["susan", "main"],
    },
    agents: {
      list: [{ id: "main" }, { id: "susan" }],
    },
    channels: {
      feishu: {
        enabled: true,
        groupPolicy: "open",
        requireMention: false,
        resolveSenderNames: false,
        accounts: {
          "account-A": {
            enabled: true,
            appId: "cli_a",
            appSecret: "secret_a", // pragma: allowlist secret
            connectionMode: "websocket",
            groupPolicy: "open",
            requireMention: false,
            resolveSenderNames: false,
            groups: {
              oc_broadcast_group: {
                requireMention: false,
              },
            },
          },
          "account-B": {
            enabled: true,
            appId: "cli_b",
            appSecret: "secret_b", // pragma: allowlist secret
            connectionMode: "websocket",
            groupPolicy: "open",
            requireMention: false,
            resolveSenderNames: false,
            groups: {
              oc_broadcast_group: {
                requireMention: false,
              },
            },
          },
        },
      },
    },
    messages: {
      inbound: {
        debounceMs: 0,
        byChannel: {
          feishu: 0,
        },
      },
    },
  } as ClawdbotConfig;
}

function createLifecycleAccount(accountId: "account-A" | "account-B"): ResolvedFeishuAccount {
  return {
    accountId,
    selectionSource: "explicit",
    enabled: true,
    configured: true,
    appId: accountId === "account-A" ? "cli_a" : "cli_b",
    appSecret: accountId === "account-A" ? "secret_a" : "secret_b", // pragma: allowlist secret
    domain: "feishu",
    config: {
      enabled: true,
      connectionMode: "websocket",
      groupPolicy: "open",
      requireMention: false,
      resolveSenderNames: false,
      groups: {
        oc_broadcast_group: {
          requireMention: false,
        },
      },
    },
  } as unknown as ResolvedFeishuAccount;
}

function createRuntimeEnv(): RuntimeEnv {
  return {
    log: vi.fn(),
    error: vi.fn(),
    exit: vi.fn(),
  } as RuntimeEnv;
}

function createBroadcastEvent(messageId: string) {
  return {
    sender: {
|
||||
sender_id: { open_id: "ou_sender_1" },
|
||||
sender_type: "user",
|
||||
},
|
||||
message: {
|
||||
message_id: messageId,
|
||||
chat_id: "oc_broadcast_group",
|
||||
chat_type: "group" as const,
|
||||
message_type: "text",
|
||||
content: JSON.stringify({ text: "hello broadcast" }),
|
||||
create_time: "1710000000000",
|
||||
},
|
||||
};
|
||||
}
|
||||
|
||||
async function settleAsyncWork(): Promise<void> {
|
||||
for (let i = 0; i < 6; i += 1) {
|
||||
await Promise.resolve();
|
||||
await new Promise((resolve) => setTimeout(resolve, 0));
|
||||
}
|
||||
}
|
||||
|
||||
async function setupLifecycleMonitor(accountId: "account-A" | "account-B") {
|
||||
const register = vi.fn((registered: Record<string, (data: unknown) => Promise<void>>) => {
|
||||
handlersByAccount.set(accountId, registered);
|
||||
});
|
||||
createEventDispatcherMock.mockReturnValueOnce({ register });
|
||||
|
||||
const runtime = createRuntimeEnv();
|
||||
runtimesByAccount.set(accountId, runtime);
|
||||
|
||||
await monitorSingleAccount({
|
||||
cfg: createLifecycleConfig(),
|
||||
account: createLifecycleAccount(accountId),
|
||||
runtime,
|
||||
botOpenIdSource: {
|
||||
kind: "prefetched",
|
||||
botOpenId: "ou_bot_1",
|
||||
botName: "Bot",
|
||||
},
|
||||
});
|
||||
|
||||
const onMessage = handlersByAccount.get(accountId)?.["im.message.receive_v1"];
|
||||
if (!onMessage) {
|
||||
throw new Error(`missing im.message.receive_v1 handler for ${accountId}`);
|
||||
}
|
||||
return onMessage;
|
||||
}
|
||||
|
||||
describe("Feishu broadcast reply-once lifecycle", () => {
|
||||
beforeEach(() => {
|
||||
vi.clearAllMocks();
|
||||
handlersByAccount = new Map();
|
||||
runtimesByAccount = new Map();
|
||||
process.env.OPENCLAW_STATE_DIR = `/tmp/openclaw-feishu-broadcast-${Date.now()}-${Math.random().toString(36).slice(2)}`;
|
||||
|
||||
const activeDispatcher = {
|
||||
sendToolResult: vi.fn(() => false),
|
||||
sendBlockReply: vi.fn(() => false),
|
||||
sendFinalReply: vi.fn(async () => true),
|
||||
waitForIdle: vi.fn(async () => {}),
|
||||
getQueuedCounts: vi.fn(() => ({ tool: 0, block: 0, final: 0 })),
|
||||
markComplete: vi.fn(),
|
||||
};
|
||||
|
||||
createFeishuReplyDispatcherMock.mockReturnValue({
|
||||
dispatcher: activeDispatcher,
|
||||
replyOptions: {},
|
||||
markDispatchIdle: vi.fn(),
|
||||
});
|
||||
|
||||
resolveBoundConversationMock.mockReturnValue(null);
|
||||
resolveAgentRouteMock.mockReturnValue({
|
||||
agentId: "main",
|
||||
channel: "feishu",
|
||||
accountId: "account-A",
|
||||
sessionKey: "agent:main:feishu:group:oc_broadcast_group",
|
||||
mainSessionKey: "agent:main:main",
|
||||
matchedBy: "default",
|
||||
});
|
||||
|
||||
dispatchReplyFromConfigMock.mockImplementation(async ({ ctx, dispatcher }) => {
|
||||
if (
|
||||
typeof ctx?.SessionKey === "string" &&
|
||||
ctx.SessionKey.includes("agent:main:") &&
|
||||
typeof dispatcher?.sendFinalReply === "function"
|
||||
) {
|
||||
await dispatcher.sendFinalReply({ text: "broadcast reply once" });
|
||||
}
|
||||
return {
|
||||
queuedFinal: false,
|
||||
counts: {
|
||||
final:
|
||||
typeof ctx?.SessionKey === "string" && ctx.SessionKey.includes("agent:main:") ? 1 : 0,
|
||||
},
|
||||
};
|
||||
});
|
||||
|
||||
withReplyDispatcherMock.mockImplementation(async ({ run }) => await run());
|
||||
|
||||
setFeishuRuntime(
|
||||
createPluginRuntimeMock({
|
||||
channel: {
|
||||
debounce: {
|
||||
resolveInboundDebounceMs: vi.fn(() => 0),
|
||||
createInboundDebouncer: <T>(params: {
|
||||
onFlush?: (items: T[]) => Promise<void>;
|
||||
onError?: (err: unknown, items: T[]) => void;
|
||||
}) => ({
|
||||
enqueue: async (item: T) => {
|
||||
try {
|
||||
await params.onFlush?.([item]);
|
||||
} catch (err) {
|
||||
params.onError?.(err, [item]);
|
||||
}
|
||||
},
|
||||
flushKey: async () => {},
|
||||
}),
|
||||
},
|
||||
text: {
|
||||
hasControlCommand: vi.fn(() => false),
|
||||
},
|
||||
routing: {
|
||||
resolveAgentRoute:
|
||||
resolveAgentRouteMock as unknown as PluginRuntime["channel"]["routing"]["resolveAgentRoute"],
|
||||
},
|
||||
reply: {
|
||||
resolveEnvelopeFormatOptions: vi.fn(() => ({})),
|
||||
formatAgentEnvelope: vi.fn((params: { body: string }) => params.body),
|
||||
finalizeInboundContext:
|
||||
finalizeInboundContextMock as unknown as PluginRuntime["channel"]["reply"]["finalizeInboundContext"],
|
||||
dispatchReplyFromConfig:
|
||||
dispatchReplyFromConfigMock as unknown as PluginRuntime["channel"]["reply"]["dispatchReplyFromConfig"],
|
||||
withReplyDispatcher:
|
||||
withReplyDispatcherMock as unknown as PluginRuntime["channel"]["reply"]["withReplyDispatcher"],
|
||||
},
|
||||
commands: {
|
||||
shouldComputeCommandAuthorized: vi.fn(() => false),
|
||||
resolveCommandAuthorizedFromAuthorizers: vi.fn(() => false),
|
||||
},
|
||||
session: {
|
||||
readSessionUpdatedAt: vi.fn(),
|
||||
resolveStorePath: vi.fn(() => "/tmp/feishu-broadcast-sessions.json"),
|
||||
},
|
||||
pairing: {
|
||||
readAllowFromStore: vi.fn().mockResolvedValue([]),
|
||||
upsertPairingRequest: vi.fn(),
|
||||
buildPairingReply: vi.fn(),
|
||||
},
|
||||
},
|
||||
media: {
|
||||
detectMime: vi.fn(async () => "text/plain"),
|
||||
},
|
||||
}) as unknown as PluginRuntime,
|
||||
);
|
||||
});
|
||||
|
||||
afterEach(() => {
|
||||
if (originalStateDir === undefined) {
|
||||
delete process.env.OPENCLAW_STATE_DIR;
|
||||
return;
|
||||
}
|
||||
process.env.OPENCLAW_STATE_DIR = originalStateDir;
|
||||
});
|
||||
|
||||
it("uses one active reply path when the same broadcast event reaches two accounts", async () => {
|
||||
const onMessageA = await setupLifecycleMonitor("account-A");
|
||||
const onMessageB = await setupLifecycleMonitor("account-B");
|
||||
const event = createBroadcastEvent("om_broadcast_once");
|
||||
|
||||
await onMessageA(event);
|
||||
await settleAsyncWork();
|
||||
await onMessageB(event);
|
||||
await settleAsyncWork();
|
||||
|
||||
expect(runtimesByAccount.get("account-A")?.error).not.toHaveBeenCalled();
|
||||
expect(runtimesByAccount.get("account-B")?.error).not.toHaveBeenCalled();
|
||||
|
||||
expect(dispatchReplyFromConfigMock).toHaveBeenCalledTimes(2);
|
||||
expect(createFeishuReplyDispatcherMock).toHaveBeenCalledTimes(1);
|
||||
expect(createFeishuReplyDispatcherMock).toHaveBeenCalledWith(
|
||||
expect.objectContaining({
|
||||
accountId: "account-a",
|
||||
chatId: "oc_broadcast_group",
|
||||
replyToMessageId: "om_broadcast_once",
|
||||
}),
|
||||
);
|
||||
|
||||
const sessionKeys = finalizeInboundContextMock.mock.calls.map(
|
||||
(call) => (call[0] as { SessionKey?: string }).SessionKey,
|
||||
);
|
||||
expect(sessionKeys).toContain("agent:main:feishu:group:oc_broadcast_group");
|
||||
expect(sessionKeys).toContain("agent:susan:feishu:group:oc_broadcast_group");
|
||||
|
||||
const activeDispatcher = createFeishuReplyDispatcherMock.mock.results[0]?.value.dispatcher as {
|
||||
sendFinalReply: ReturnType<typeof vi.fn>;
|
||||
};
|
||||
expect(activeDispatcher.sendFinalReply).toHaveBeenCalledTimes(1);
|
||||
});
|
||||
|
||||
it("does not duplicate delivery after a post-send failure on the first account", async () => {
|
||||
const onMessageA = await setupLifecycleMonitor("account-A");
|
||||
const onMessageB = await setupLifecycleMonitor("account-B");
|
||||
const event = createBroadcastEvent("om_broadcast_retry");
|
||||
|
||||
dispatchReplyFromConfigMock.mockImplementationOnce(async ({ ctx, dispatcher }) => {
|
||||
if (typeof ctx?.SessionKey === "string" && ctx.SessionKey.includes("agent:susan:")) {
|
||||
return { queuedFinal: false, counts: { final: 0 } };
|
||||
}
|
||||
await dispatcher.sendFinalReply({ text: "broadcast reply once" });
|
||||
throw new Error("post-send failure");
|
||||
});
|
||||
|
||||
await onMessageA(event);
|
||||
await settleAsyncWork();
|
||||
await onMessageB(event);
|
||||
await settleAsyncWork();
|
||||
|
||||
expect(runtimesByAccount.get("account-A")?.error).not.toHaveBeenCalled();
|
||||
expect(runtimesByAccount.get("account-B")?.error).not.toHaveBeenCalled();
|
||||
expect(dispatchReplyFromConfigMock).toHaveBeenCalledTimes(2);
|
||||
|
||||
const activeDispatcher = createFeishuReplyDispatcherMock.mock.results[0]?.value.dispatcher as {
|
||||
sendFinalReply: ReturnType<typeof vi.fn>;
|
||||
};
|
||||
expect(activeDispatcher.sendFinalReply).toHaveBeenCalledTimes(1);
|
||||
});
|
||||
});
|
||||
394	extensions/feishu/src/monitor.card-action.lifecycle.test.ts	Normal file
@@ -0,0 +1,394 @@
import { afterEach, beforeEach, describe, expect, it, vi } from "vitest";
import { createPluginRuntimeMock } from "../../../test/helpers/extensions/plugin-runtime-mock.js";
import type { ClawdbotConfig, PluginRuntime, RuntimeEnv } from "../runtime-api.js";
import { createFeishuCardInteractionEnvelope } from "./card-interaction.js";
import { monitorSingleAccount } from "./monitor.account.js";
import { setFeishuRuntime } from "./runtime.js";
import type { ResolvedFeishuAccount } from "./types.js";

type BoundConversation = {
  bindingId: string;
  targetSessionKey: string;
};

const createEventDispatcherMock = vi.hoisted(() => vi.fn());
const monitorWebSocketMock = vi.hoisted(() => vi.fn(async () => {}));
const monitorWebhookMock = vi.hoisted(() => vi.fn(async () => {}));
const createFeishuThreadBindingManagerMock = vi.hoisted(() => vi.fn(() => ({ stop: vi.fn() })));
const createFeishuReplyDispatcherMock = vi.hoisted(() => vi.fn());
const resolveBoundConversationMock = vi.hoisted(
  () => vi.fn<() => BoundConversation | null>(() => null),
);
const touchBindingMock = vi.hoisted(() => vi.fn());
const resolveAgentRouteMock = vi.hoisted(() => vi.fn());
const dispatchReplyFromConfigMock = vi.hoisted(() => vi.fn());
const withReplyDispatcherMock = vi.hoisted(() => vi.fn());
const finalizeInboundContextMock = vi.hoisted(() => vi.fn((ctx) => ctx));
const sendMessageFeishuMock = vi.hoisted(() =>
  vi.fn(async () => ({ messageId: "om_notice", chatId: "p2p:ou_user1" })),
);
const sendCardFeishuMock = vi.hoisted(() =>
  vi.fn(async () => ({ messageId: "om_card", chatId: "p2p:ou_user1" })),
);
const getMessageFeishuMock = vi.hoisted(() => vi.fn(async () => null));
const listFeishuThreadMessagesMock = vi.hoisted(() => vi.fn(async () => []));

let handlers: Record<string, (data: unknown) => Promise<void>> = {};
let lastRuntime: RuntimeEnv | null = null;
const originalStateDir = process.env.OPENCLAW_STATE_DIR;

vi.mock("./client.js", async () => {
  const actual = await vi.importActual<typeof import("./client.js")>("./client.js");
  return {
    ...actual,
    createEventDispatcher: createEventDispatcherMock,
  };
});

vi.mock("./monitor.transport.js", () => ({
  monitorWebSocket: monitorWebSocketMock,
  monitorWebhook: monitorWebhookMock,
}));

vi.mock("./thread-bindings.js", () => ({
  createFeishuThreadBindingManager: createFeishuThreadBindingManagerMock,
}));

vi.mock("./reply-dispatcher.js", () => ({
  createFeishuReplyDispatcher: createFeishuReplyDispatcherMock,
}));

vi.mock("./send.js", () => ({
  sendMessageFeishu: sendMessageFeishuMock,
  sendCardFeishu: sendCardFeishuMock,
  getMessageFeishu: getMessageFeishuMock,
  listFeishuThreadMessages: listFeishuThreadMessagesMock,
}));

vi.mock("openclaw/plugin-sdk/conversation-runtime", async (importOriginal) => {
  const actual = await importOriginal<typeof import("openclaw/plugin-sdk/conversation-runtime")>();
  return {
    ...actual,
    getSessionBindingService: () => ({
      resolveByConversation: resolveBoundConversationMock,
      touch: touchBindingMock,
    }),
  };
});

vi.mock("../../../src/infra/outbound/session-binding-service.js", () => ({
  getSessionBindingService: () => ({
    resolveByConversation: resolveBoundConversationMock,
    touch: touchBindingMock,
  }),
}));

function createLifecycleConfig(): ClawdbotConfig {
  return {
    channels: {
      feishu: {
        enabled: true,
        dmPolicy: "open",
        requireMention: false,
        resolveSenderNames: false,
        accounts: {
          "acct-card": {
            enabled: true,
            appId: "cli_test",
            appSecret: "secret_test", // pragma: allowlist secret
            connectionMode: "websocket",
            dmPolicy: "open",
            requireMention: false,
            resolveSenderNames: false,
          },
        },
      },
    },
    messages: {
      inbound: {
        debounceMs: 0,
        byChannel: {
          feishu: 0,
        },
      },
    },
  } as ClawdbotConfig;
}

function createLifecycleAccount(): ResolvedFeishuAccount {
  return {
    accountId: "acct-card",
    selectionSource: "explicit",
    enabled: true,
    configured: true,
    appId: "cli_test",
    appSecret: "secret_test", // pragma: allowlist secret
    domain: "feishu",
    config: {
      enabled: true,
      connectionMode: "websocket",
      dmPolicy: "open",
      requireMention: false,
      resolveSenderNames: false,
    },
  } as unknown as ResolvedFeishuAccount;
}

function createRuntimeEnv(): RuntimeEnv {
  return {
    log: vi.fn(),
    error: vi.fn(),
    exit: vi.fn(),
  } as RuntimeEnv;
}

function createCardActionEvent(params: {
  token: string;
  action: string;
  command: string;
  chatId?: string;
  chatType?: "group" | "p2p";
}) {
  const openId = "ou_user1";
  const chatId = params.chatId ?? "p2p:ou_user1";
  const chatType = params.chatType ?? "p2p";
  return {
    operator: {
      open_id: openId,
      user_id: "user_1",
      union_id: "union_1",
    },
    token: params.token,
    action: {
      tag: "button",
      value: createFeishuCardInteractionEnvelope({
        k: "quick",
        a: params.action,
        q: params.command,
        c: {
          u: openId,
          h: chatId,
          t: chatType,
          e: Date.now() + 60_000,
        },
      }),
    },
    context: {
      open_id: openId,
      user_id: "user_1",
      chat_id: chatId,
    },
  };
}

async function settleAsyncWork(): Promise<void> {
  for (let i = 0; i < 6; i += 1) {
    await Promise.resolve();
    await new Promise((resolve) => setTimeout(resolve, 0));
  }
}

async function setupLifecycleMonitor() {
  const register = vi.fn((registered: Record<string, (data: unknown) => Promise<void>>) => {
    handlers = registered;
  });
  createEventDispatcherMock.mockReturnValue({ register });

  lastRuntime = createRuntimeEnv();

  await monitorSingleAccount({
    cfg: createLifecycleConfig(),
    account: createLifecycleAccount(),
    runtime: lastRuntime,
    botOpenIdSource: {
      kind: "prefetched",
      botOpenId: "ou_bot_1",
      botName: "Bot",
    },
  });

  const onCardAction = handlers["card.action.trigger"];
  if (!onCardAction) {
    throw new Error("missing card.action.trigger handler");
  }
  return onCardAction;
}

describe("Feishu card-action lifecycle", () => {
  beforeEach(() => {
    vi.clearAllMocks();
    handlers = {};
    lastRuntime = null;
    process.env.OPENCLAW_STATE_DIR = `/tmp/openclaw-feishu-card-action-${Date.now()}-${Math.random().toString(36).slice(2)}`;

    const dispatcher = {
      sendToolResult: vi.fn(() => false),
      sendBlockReply: vi.fn(() => false),
      sendFinalReply: vi.fn(async () => true),
      waitForIdle: vi.fn(async () => {}),
      getQueuedCounts: vi.fn(() => ({ tool: 0, block: 0, final: 0 })),
      markComplete: vi.fn(),
    };

    createFeishuReplyDispatcherMock.mockReturnValue({
      dispatcher,
      replyOptions: {},
      markDispatchIdle: vi.fn(),
    });

    resolveBoundConversationMock.mockImplementation(() => ({
      bindingId: "binding-card",
      targetSessionKey: "agent:bound-agent:feishu:direct:ou_user1",
    }));

    resolveAgentRouteMock.mockReturnValue({
      agentId: "main",
      channel: "feishu",
      accountId: "acct-card",
      sessionKey: "agent:main:feishu:direct:ou_user1",
      mainSessionKey: "agent:main:main",
      matchedBy: "default",
    });

    dispatchReplyFromConfigMock.mockImplementation(async ({ dispatcher }) => {
      await dispatcher.sendFinalReply({ text: "card action reply once" });
      return {
        queuedFinal: false,
        counts: { final: 1 },
      };
    });

    withReplyDispatcherMock.mockImplementation(async ({ run }) => await run());

    setFeishuRuntime(
      createPluginRuntimeMock({
        channel: {
          debounce: {
            resolveInboundDebounceMs: vi.fn(() => 0),
            createInboundDebouncer: <T>(params: {
              onFlush?: (items: T[]) => Promise<void>;
              onError?: (err: unknown, items: T[]) => void;
            }) => ({
              enqueue: async (item: T) => {
                try {
                  await params.onFlush?.([item]);
                } catch (err) {
                  params.onError?.(err, [item]);
                }
              },
              flushKey: async () => {},
            }),
          },
          text: {
            hasControlCommand: vi.fn(() => false),
          },
          routing: {
            resolveAgentRoute:
              resolveAgentRouteMock as unknown as PluginRuntime["channel"]["routing"]["resolveAgentRoute"],
          },
          reply: {
            resolveEnvelopeFormatOptions: vi.fn(() => ({})),
            formatAgentEnvelope: vi.fn((params: { body: string }) => params.body),
            finalizeInboundContext:
              finalizeInboundContextMock as unknown as PluginRuntime["channel"]["reply"]["finalizeInboundContext"],
            dispatchReplyFromConfig:
              dispatchReplyFromConfigMock as unknown as PluginRuntime["channel"]["reply"]["dispatchReplyFromConfig"],
            withReplyDispatcher:
              withReplyDispatcherMock as unknown as PluginRuntime["channel"]["reply"]["withReplyDispatcher"],
          },
          commands: {
            shouldComputeCommandAuthorized: vi.fn(() => false),
            resolveCommandAuthorizedFromAuthorizers: vi.fn(() => false),
          },
          session: {
            readSessionUpdatedAt: vi.fn(),
            resolveStorePath: vi.fn(() => "/tmp/feishu-card-action-sessions.json"),
          },
          pairing: {
            readAllowFromStore: vi.fn().mockResolvedValue([]),
            upsertPairingRequest: vi.fn(),
            buildPairingReply: vi.fn(),
          },
        },
        media: {
          detectMime: vi.fn(async () => "text/plain"),
        },
      }) as unknown as PluginRuntime,
    );
  });

  afterEach(() => {
    if (originalStateDir === undefined) {
      delete process.env.OPENCLAW_STATE_DIR;
      return;
    }
    process.env.OPENCLAW_STATE_DIR = originalStateDir;
  });

  it("routes one reply across duplicate callback delivery", async () => {
    const onCardAction = await setupLifecycleMonitor();
    const event = createCardActionEvent({
      token: "tok-card-once",
      action: "feishu.quick_actions.help",
      command: "/help",
    });

    await onCardAction(event);
    await settleAsyncWork();
    await onCardAction(event);
    await settleAsyncWork();

    expect(lastRuntime?.error).not.toHaveBeenCalled();
    expect(dispatchReplyFromConfigMock).toHaveBeenCalledTimes(1);
    expect(createFeishuReplyDispatcherMock).toHaveBeenCalledTimes(1);
    expect(createFeishuReplyDispatcherMock).toHaveBeenCalledWith(
      expect.objectContaining({
        accountId: "acct-card",
        chatId: "p2p:ou_user1",
        replyToMessageId: "card-action-tok-card-once",
      }),
    );
    expect(finalizeInboundContextMock).toHaveBeenCalledWith(
      expect.objectContaining({
        AccountId: "acct-card",
        SessionKey: "agent:bound-agent:feishu:direct:ou_user1",
        MessageSid: "card-action-tok-card-once",
      }),
    );
    expect(touchBindingMock).toHaveBeenCalledWith("binding-card");

    const dispatcher = createFeishuReplyDispatcherMock.mock.results[0]?.value.dispatcher as {
      sendFinalReply: ReturnType<typeof vi.fn>;
    };
    expect(dispatcher.sendFinalReply).toHaveBeenCalledTimes(1);
    expect(sendMessageFeishuMock).not.toHaveBeenCalled();
    expect(sendCardFeishuMock).not.toHaveBeenCalled();
  });

  it("does not duplicate delivery when retrying after a post-send failure", async () => {
    const onCardAction = await setupLifecycleMonitor();
    const event = createCardActionEvent({
      token: "tok-card-retry",
      action: "feishu.quick_actions.help",
      command: "/help",
    });

    dispatchReplyFromConfigMock.mockImplementationOnce(async ({ dispatcher }) => {
      await dispatcher.sendFinalReply({ text: "card action reply once" });
      throw new Error("post-send failure");
    });

    await onCardAction(event);
    await settleAsyncWork();
    await onCardAction(event);
    await settleAsyncWork();

    expect(lastRuntime?.error).toHaveBeenCalledTimes(1);
    expect(dispatchReplyFromConfigMock).toHaveBeenCalledTimes(1);

    const dispatcher = createFeishuReplyDispatcherMock.mock.results[0]?.value.dispatcher as {
      sendFinalReply: ReturnType<typeof vi.fn>;
    };
    expect(dispatcher.sendFinalReply).toHaveBeenCalledTimes(1);
  });
});
@@ -138,7 +138,7 @@ function createLifecycleAccount(): ResolvedFeishuAccount {
       },
     },
   },
-  } as ResolvedFeishuAccount;
+  } as unknown as ResolvedFeishuAccount;
 }

 function createRuntimeEnv(): RuntimeEnv {
@@ -2,3 +2,4 @@
 // helpers without traversing the full plugin-sdk/runtime graph.
 export * from "./src/auth-precedence.js";
 export * from "./helper-api.js";
+export { sendMessageMatrix } from "./src/matrix/send.js";
@@ -1,4 +1,5 @@
 import path from "node:path";
+import { sendMessageMatrix } from "../../runtime-api.js";
 import {
   readJsonFileWithFallback,
   registerSessionBindingAdapter,
@@ -10,7 +11,6 @@ import {
 import { resolveMatrixStoragePaths } from "./client/storage.js";
 import type { MatrixAuth } from "./client/types.js";
 import type { MatrixClient } from "./sdk.js";
-import { sendMessageMatrix } from "./send.js";
 import {
   deleteMatrixThreadBindingManagerEntry,
   getMatrixThreadBindingManager,
127	extensions/voice-call/src/webhook.hangup-once.lifecycle.test.ts	Normal file
@@ -0,0 +1,127 @@
import { afterEach, describe, expect, it } from "vitest";
import { VoiceCallConfigSchema, type VoiceCallConfig } from "./config.js";
import { CallManager } from "./manager.js";
import { createTestStorePath, FakeProvider } from "./manager.test-harness.js";
import type { WebhookContext, WebhookParseOptions } from "./types.js";
import { VoiceCallWebhookServer } from "./webhook.js";

const createConfig = (overrides: Partial<VoiceCallConfig> = {}): VoiceCallConfig => {
  const base = VoiceCallConfigSchema.parse({
    enabled: true,
    provider: "plivo",
    fromNumber: "+15550000000",
    inboundPolicy: "disabled",
  });
  base.serve.port = 0;

  return {
    ...base,
    ...overrides,
    serve: {
      ...base.serve,
      ...(overrides.serve ?? {}),
    },
  };
};

async function postWebhookForm(server: VoiceCallWebhookServer, baseUrl: string, body: string) {
  const address = (
    server as unknown as { server?: { address?: () => unknown } }
  ).server?.address?.();
  const requestUrl = new URL(baseUrl);
  if (address && typeof address === "object" && "port" in address && address.port) {
    requestUrl.port = String(address.port);
  }
  return await fetch(requestUrl.toString(), {
    method: "POST",
    headers: { "content-type": "application/x-www-form-urlencoded" },
    body,
  });
}

class RejectInboundReplayProvider extends FakeProvider {
  override verifyWebhook() {
    return { ok: true, verifiedRequestKey: "verified:req:reject-once" };
  }

  override parseWebhookEvent(_ctx: WebhookContext, options?: WebhookParseOptions) {
    return {
      statusCode: 200,
      events: [
        {
          id: "evt-reject-once",
          dedupeKey: options?.verifiedRequestKey,
          type: "call.initiated" as const,
          callId: "provider-inbound-1",
          providerCallId: "provider-inbound-1",
          timestamp: Date.now(),
          direction: "inbound" as const,
          from: "+15552222222",
          to: "+15550000000",
        },
      ],
    };
  }
}

class RejectInboundReplayWithHangupFailureProvider extends RejectInboundReplayProvider {
  override async hangupCall(input: Parameters<FakeProvider["hangupCall"]>[0]): Promise<void> {
    this.hangupCalls.push(input);
    throw new Error("hangup failed");
  }
}

describe("Voice-call webhook hangup-once lifecycle", () => {
  afterEach(() => {
    // Each test uses an isolated store path, so only server cleanup is needed.
  });

  it("hangs up a rejected inbound replay only once across duplicate webhook delivery", async () => {
    const provider = new RejectInboundReplayProvider("plivo");
    const config = createConfig();
    const manager = new CallManager(config, createTestStorePath());
    await manager.initialize(provider, "https://example.com/voice/webhook");
    const server = new VoiceCallWebhookServer(config, manager, provider);

    try {
      const baseUrl = await server.start();
      const first = await postWebhookForm(server, baseUrl, "CallSid=CA123&From=%2B15552222222");
      const second = await postWebhookForm(server, baseUrl, "CallSid=CA123&From=%2B15552222222");

      expect(first.status).toBe(200);
      expect(second.status).toBe(200);
      expect(provider.hangupCalls).toHaveLength(1);
      expect(provider.hangupCalls[0]).toEqual(
        expect.objectContaining({
          providerCallId: "provider-inbound-1",
          reason: "hangup-bot",
        }),
      );
      expect(manager.getCallByProviderCallId("provider-inbound-1")).toBeUndefined();
    } finally {
      await server.stop();
    }
  });

  it("does not attempt a second hangup when replay arrives after the first hangup fails", async () => {
    const provider = new RejectInboundReplayWithHangupFailureProvider("plivo");
    const config = createConfig();
    const manager = new CallManager(config, createTestStorePath());
    await manager.initialize(provider, "https://example.com/voice/webhook");
    const server = new VoiceCallWebhookServer(config, manager, provider);

    try {
      const baseUrl = await server.start();
      const first = await postWebhookForm(server, baseUrl, "CallSid=CA123&From=%2B15552222222");
      const second = await postWebhookForm(server, baseUrl, "CallSid=CA123&From=%2B15552222222");

      expect(first.status).toBe(200);
      expect(second.status).toBe(200);
      expect(provider.hangupCalls).toHaveLength(1);
      expect(provider.hangupCalls[0]?.providerCallId).toBe("provider-inbound-1");
      expect(manager.getCallByProviderCallId("provider-inbound-1")).toBeUndefined();
    } finally {
      await server.stop();
    }
  });
});
315	extensions/zalo/src/monitor.reply-once.lifecycle.test.ts	Normal file
@@ -0,0 +1,315 @@
import { createServer, type RequestListener } from "node:http";
import type { AddressInfo } from "node:net";
import { afterEach, beforeEach, describe, expect, it, vi } from "vitest";
import { createPluginRuntimeMock } from "../../../test/helpers/extensions/plugin-runtime-mock.js";
import { createEmptyPluginRegistry } from "../../../src/plugins/registry.js";
import { setActivePluginRegistry } from "../../../src/plugins/runtime.js";
import type { OpenClawConfig, PluginRuntime } from "../runtime-api.js";
import {
  clearZaloWebhookSecurityStateForTest,
  monitorZaloProvider,
} from "./monitor.js";
import type { ResolvedZaloAccount } from "./accounts.js";

const setWebhookMock = vi.hoisted(() => vi.fn(async () => ({ ok: true, result: { url: "" } })));
const deleteWebhookMock = vi.hoisted(() => vi.fn(async () => ({ ok: true, result: { url: "" } })));
const getWebhookInfoMock = vi.hoisted(() => vi.fn(async () => ({ ok: true, result: { url: "" } })));
const getUpdatesMock = vi.hoisted(() => vi.fn(() => new Promise(() => {})));
const sendChatActionMock = vi.hoisted(() => vi.fn(async () => ({ ok: true })));
const sendMessageMock = vi.hoisted(() =>
  vi.fn(async () => ({ ok: true, result: { message_id: "reply-zalo-1" } })),
);
const sendPhotoMock = vi.hoisted(() => vi.fn(async () => ({ ok: true })));
const getZaloRuntimeMock = vi.hoisted(() => vi.fn());

vi.mock("./api.js", async (importOriginal) => {
  const actual = await importOriginal<typeof import("./api.js")>();
  return {
    ...actual,
    deleteWebhook: deleteWebhookMock,
    getUpdates: getUpdatesMock,
    getWebhookInfo: getWebhookInfoMock,
    sendChatAction: sendChatActionMock,
    sendMessage: sendMessageMock,
    sendPhoto: sendPhotoMock,
    setWebhook: setWebhookMock,
  };
});

vi.mock("./runtime.js", () => ({
  getZaloRuntime: getZaloRuntimeMock,
}));

async function withServer(handler: RequestListener, fn: (baseUrl: string) => Promise<void>) {
  const server = createServer(handler);
  await new Promise<void>((resolve) => {
    server.listen(0, "127.0.0.1", () => resolve());
  });
  const address = server.address() as AddressInfo | null;
  if (!address) {
    throw new Error("missing server address");
  }
  try {
    await fn(`http://127.0.0.1:${address.port}`);
  } finally {
    await new Promise<void>((resolve) => server.close(() => resolve()));
  }
}

function createLifecycleConfig(): OpenClawConfig {
  return {
    channels: {
      zalo: {
        enabled: true,
        accounts: {
          "acct-zalo-lifecycle": {
            enabled: true,
            webhookUrl: "https://example.com/hooks/zalo",
            webhookSecret: "supersecret", // pragma: allowlist secret
            dmPolicy: "open",
          },
        },
      },
    },
  } as OpenClawConfig;
}

function createLifecycleAccount(): ResolvedZaloAccount {
  return {
    accountId: "acct-zalo-lifecycle",
    enabled: true,
    token: "zalo-token",
    tokenSource: "config",
    config: {
      webhookUrl: "https://example.com/hooks/zalo",
      webhookSecret: "supersecret", // pragma: allowlist secret
      dmPolicy: "open",
    },
  } as ResolvedZaloAccount;
}
||||
|
||||
function createRuntimeEnv() {
|
||||
return {
|
||||
log: vi.fn<(message: string) => void>(),
|
||||
error: vi.fn<(message: string) => void>(),
|
||||
};
|
||||
}
|
||||
|
||||
function createTextUpdate(messageId: string) {
|
||||
return {
|
||||
event_name: "message.text.received",
|
||||
message: {
|
||||
from: { id: "user-1", name: "User One" },
|
||||
chat: { id: "dm-chat-1", chat_type: "PRIVATE" as const },
|
||||
message_id: messageId,
|
||||
date: Math.floor(Date.now() / 1000),
|
||||
text: "hello from zalo",
|
||||
},
|
||||
};
|
||||
}
|
||||
|
||||
async function settleAsyncWork(): Promise<void> {
|
||||
for (let i = 0; i < 6; i += 1) {
|
||||
await Promise.resolve();
|
||||
await new Promise((resolve) => setTimeout(resolve, 0));
|
||||
}
|
||||
}
|
||||
|
||||
async function postWebhookUpdate(params: {
|
||||
baseUrl: string;
|
||||
path: string;
|
||||
secret: string;
|
||||
payload: Record<string, unknown>;
|
||||
}) {
|
||||
return await fetch(`${params.baseUrl}${params.path}`, {
|
||||
method: "POST",
|
||||
headers: {
|
||||
"content-type": "application/json",
|
||||
"x-bot-api-secret-token": params.secret,
|
||||
},
|
||||
body: JSON.stringify(params.payload),
|
||||
});
|
||||
}
|
||||
|
||||
describe("Zalo reply-once lifecycle", () => {
|
||||
const finalizeInboundContextMock = vi.fn((ctx: Record<string, unknown>) => ctx);
|
||||
const recordInboundSessionMock = vi.fn(async () => undefined);
|
||||
const resolveAgentRouteMock = vi.fn(() => ({
|
||||
agentId: "main",
|
||||
channel: "zalo",
|
||||
accountId: "acct-zalo-lifecycle",
|
||||
sessionKey: "agent:main:zalo:direct:dm-chat-1",
|
||||
mainSessionKey: "agent:main:main",
|
||||
matchedBy: "default",
|
||||
}));
|
||||
const dispatchReplyWithBufferedBlockDispatcherMock = vi.fn();
|
||||
|
||||
beforeEach(() => {
|
||||
vi.clearAllMocks();
|
||||
clearZaloWebhookSecurityStateForTest();
|
||||
|
||||
getZaloRuntimeMock.mockReturnValue(
|
||||
createPluginRuntimeMock({
|
||||
channel: {
|
||||
routing: {
|
||||
resolveAgentRoute:
|
||||
resolveAgentRouteMock as unknown as PluginRuntime["channel"]["routing"]["resolveAgentRoute"],
|
||||
},
|
||||
reply: {
|
||||
finalizeInboundContext:
|
||||
finalizeInboundContextMock as unknown as PluginRuntime["channel"]["reply"]["finalizeInboundContext"],
|
||||
dispatchReplyWithBufferedBlockDispatcher:
|
||||
dispatchReplyWithBufferedBlockDispatcherMock as unknown as PluginRuntime["channel"]["reply"]["dispatchReplyWithBufferedBlockDispatcher"],
|
||||
},
|
||||
session: {
|
||||
recordInboundSession:
|
||||
recordInboundSessionMock as unknown as PluginRuntime["channel"]["session"]["recordInboundSession"],
|
||||
},
|
||||
},
|
||||
}),
|
||||
);
|
||||
});
|
||||
|
||||
afterEach(() => {
|
||||
setActivePluginRegistry(createEmptyPluginRegistry());
|
||||
});
|
||||
|
||||
it("routes one accepted webhook event to one visible reply across duplicate replay", async () => {
|
||||
dispatchReplyWithBufferedBlockDispatcherMock.mockImplementation(async ({ dispatcherOptions }) => {
|
||||
await dispatcherOptions.deliver({ text: "zalo reply once" });
|
||||
});
|
||||
|
||||
const registry = createEmptyPluginRegistry();
|
||||
setActivePluginRegistry(registry);
|
||||
const abort = new AbortController();
|
||||
const runtime = createRuntimeEnv();
|
||||
const run = monitorZaloProvider({
|
||||
token: "zalo-token",
|
||||
account: createLifecycleAccount(),
|
||||
config: createLifecycleConfig(),
|
||||
runtime,
|
||||
abortSignal: abort.signal,
|
||||
useWebhook: true,
|
||||
webhookUrl: "https://example.com/hooks/zalo",
|
||||
webhookSecret: "supersecret",
|
||||
});
|
||||
|
||||
await vi.waitFor(() => expect(setWebhookMock).toHaveBeenCalledTimes(1));
|
||||
expect(registry.httpRoutes).toHaveLength(1);
|
||||
const route = registry.httpRoutes[0];
|
||||
if (!route) {
|
||||
throw new Error("missing plugin HTTP route");
|
||||
}
|
||||
|
||||
await withServer((req, res) => route.handler(req, res), async (baseUrl) => {
|
||||
const payload = createTextUpdate(`zalo-replay-${Date.now()}`);
|
||||
const first = await postWebhookUpdate({
|
||||
baseUrl,
|
||||
path: "/hooks/zalo",
|
||||
secret: "supersecret",
|
||||
payload,
|
||||
});
|
||||
const second = await postWebhookUpdate({
|
||||
baseUrl,
|
||||
path: "/hooks/zalo",
|
||||
secret: "supersecret",
|
||||
payload,
|
||||
});
|
||||
|
||||
expect(first.status).toBe(200);
|
||||
expect(second.status).toBe(200);
|
||||
await settleAsyncWork();
|
||||
});
|
||||
|
||||
expect(finalizeInboundContextMock).toHaveBeenCalledTimes(1);
|
||||
expect(finalizeInboundContextMock).toHaveBeenCalledWith(
|
||||
expect.objectContaining({
|
||||
AccountId: "acct-zalo-lifecycle",
|
||||
SessionKey: "agent:main:zalo:direct:dm-chat-1",
|
||||
MessageSid: expect.stringContaining("zalo-replay-"),
|
||||
From: "zalo:user-1",
|
||||
To: "zalo:dm-chat-1",
|
||||
}),
|
||||
);
|
||||
expect(recordInboundSessionMock).toHaveBeenCalledTimes(1);
|
||||
expect(recordInboundSessionMock).toHaveBeenCalledWith(
|
||||
expect.objectContaining({
|
||||
sessionKey: "agent:main:zalo:direct:dm-chat-1",
|
||||
}),
|
||||
);
|
||||
expect(sendMessageMock).toHaveBeenCalledTimes(1);
|
||||
expect(sendMessageMock).toHaveBeenCalledWith(
|
||||
"zalo-token",
|
||||
expect.objectContaining({
|
||||
chat_id: "dm-chat-1",
|
||||
text: "zalo reply once",
|
||||
}),
|
||||
undefined,
|
||||
);
|
||||
|
||||
abort.abort();
|
||||
await run;
|
||||
});
|
||||
|
||||
it("does not emit a second visible reply when replay arrives after a post-send failure", async () => {
|
||||
let dispatchAttempts = 0;
|
||||
dispatchReplyWithBufferedBlockDispatcherMock.mockImplementation(async ({ dispatcherOptions }) => {
|
||||
dispatchAttempts += 1;
|
||||
await dispatcherOptions.deliver({ text: "zalo reply after failure" });
|
||||
if (dispatchAttempts === 1) {
|
||||
throw new Error("post-send failure");
|
||||
}
|
||||
});
|
||||
|
||||
const registry = createEmptyPluginRegistry();
|
||||
setActivePluginRegistry(registry);
|
||||
const abort = new AbortController();
|
||||
const runtime = createRuntimeEnv();
|
||||
const run = monitorZaloProvider({
|
||||
token: "zalo-token",
|
||||
account: createLifecycleAccount(),
|
||||
config: createLifecycleConfig(),
|
||||
runtime,
|
||||
abortSignal: abort.signal,
|
||||
useWebhook: true,
|
||||
webhookUrl: "https://example.com/hooks/zalo",
|
||||
webhookSecret: "supersecret",
|
||||
});
|
||||
|
||||
await vi.waitFor(() => expect(setWebhookMock).toHaveBeenCalledTimes(1));
|
||||
const route = registry.httpRoutes[0];
|
||||
if (!route) {
|
||||
throw new Error("missing plugin HTTP route");
|
||||
}
|
||||
|
||||
await withServer((req, res) => route.handler(req, res), async (baseUrl) => {
|
||||
const payload = createTextUpdate(`zalo-retry-${Date.now()}`);
|
||||
const first = await postWebhookUpdate({
|
||||
baseUrl,
|
||||
path: "/hooks/zalo",
|
||||
secret: "supersecret",
|
||||
payload,
|
||||
});
|
||||
await settleAsyncWork();
|
||||
const replay = await postWebhookUpdate({
|
||||
baseUrl,
|
||||
path: "/hooks/zalo",
|
||||
secret: "supersecret",
|
||||
payload,
|
||||
});
|
||||
|
||||
expect(first.status).toBe(200);
|
||||
expect(replay.status).toBe(200);
|
||||
await settleAsyncWork();
|
||||
});
|
||||
|
||||
expect(dispatchReplyWithBufferedBlockDispatcherMock).toHaveBeenCalledTimes(1);
|
||||
expect(sendMessageMock).toHaveBeenCalledTimes(1);
|
||||
expect(runtime.error).toHaveBeenCalledWith(
|
||||
expect.stringContaining("Zalo webhook failed: Error: post-send failure"),
|
||||
);
|
||||
|
||||
abort.abort();
|
||||
await run;
|
||||
});
|
||||
});
|
||||
@@ -11,7 +11,7 @@ const ANSI_ESCAPE_PATTERN = new RegExp(
const COMPLETED_TEST_FILE_LINE_PATTERN =
  /(?<file>(?:src|extensions|test|ui)\/\S+?\.(?:live\.test|e2e\.test|test)\.ts)\s+\(.*\)\s+(?<duration>\d+(?:\.\d+)?)(?<unit>ms|s)\s*$/;

const PS_COLUMNS = ["pid=", "ppid=", "rss="];
const PS_COLUMNS = ["pid=", "ppid=", "rss=", "comm="];

function parseDurationMs(rawValue, unit) {
  const parsed = Number.parseFloat(rawValue);
@@ -41,7 +41,7 @@ export function parseCompletedTestFileLines(text) {
    .filter((entry) => entry !== null);
}

export function sampleProcessTreeRssKb(rootPid) {
export function getProcessTreeRecords(rootPid) {
  if (!Number.isInteger(rootPid) || rootPid <= 0 || process.platform === "win32") {
    return null;
  }
@@ -54,13 +54,13 @@ export function sampleProcessTreeRssKb(rootPid) {
  }

  const childPidsByParent = new Map();
  const rssByPid = new Map();
  const recordsByPid = new Map();
  for (const line of result.stdout.split(/\r?\n/u)) {
    const trimmed = line.trim();
    if (!trimmed) {
      continue;
    }
    const [pidRaw, parentRaw, rssRaw] = trimmed.split(/\s+/u);
    const [pidRaw, parentRaw, rssRaw, commandRaw] = trimmed.split(/\s+/u, 4);
    const pid = Number.parseInt(pidRaw ?? "", 10);
    const parentPid = Number.parseInt(parentRaw ?? "", 10);
    const rssKb = Number.parseInt(rssRaw ?? "", 10);
@@ -70,27 +70,30 @@ export function sampleProcessTreeRssKb(rootPid) {
    const siblings = childPidsByParent.get(parentPid) ?? [];
    siblings.push(pid);
    childPidsByParent.set(parentPid, siblings);
    rssByPid.set(pid, rssKb);
    recordsByPid.set(pid, {
      pid,
      parentPid,
      rssKb,
      command: commandRaw ?? "",
    });
  }

  if (!rssByPid.has(rootPid)) {
  if (!recordsByPid.has(rootPid)) {
    return null;
  }

  let rssKb = 0;
  let processCount = 0;
  const queue = [rootPid];
  const visited = new Set();
  const records = [];
  while (queue.length > 0) {
    const pid = queue.shift();
    if (pid === undefined || visited.has(pid)) {
      continue;
    }
    visited.add(pid);
    const currentRssKb = rssByPid.get(pid);
    if (currentRssKb !== undefined) {
      rssKb += currentRssKb;
      processCount += 1;
    const record = recordsByPid.get(pid);
    if (record) {
      records.push(record);
    }
    for (const childPid of childPidsByParent.get(pid) ?? []) {
      if (!visited.has(childPid)) {
@@ -99,5 +102,21 @@ export function sampleProcessTreeRssKb(rootPid) {
      }
    }

  return records;
}

export function sampleProcessTreeRssKb(rootPid) {
  const records = getProcessTreeRecords(rootPid);
  if (!records) {
    return null;
  }

  let rssKb = 0;
  let processCount = 0;
  for (const record of records) {
    rssKb += record.rssKb;
    processCount += 1;
  }

  return { rssKb, processCount };
}
@@ -4,7 +4,11 @@ import os from "node:os";
import path from "node:path";
import { channelTestPrefixes } from "../vitest.channel-paths.mjs";
import { isUnitConfigTestFile } from "../vitest.unit-paths.mjs";
import { parseCompletedTestFileLines, sampleProcessTreeRssKb } from "./test-parallel-memory.mjs";
import {
  getProcessTreeRecords,
  parseCompletedTestFileLines,
  sampleProcessTreeRssKb,
} from "./test-parallel-memory.mjs";
import {
  appendCapturedOutput,
  hasFatalTestRunOutput,
@@ -47,17 +51,6 @@ const hostMemoryGiB = Math.floor(os.totalmem() / 1024 ** 3);
const highMemLocalHost = !isCI && hostMemoryGiB >= 96;
const lowMemLocalHost = !isCI && hostMemoryGiB < 64;
const nodeMajor = Number.parseInt(process.versions.node.split(".")[0] ?? "", 10);
// vmForks is a big win for transform/import heavy suites. Node 24 is stable again
// for the default unit-fast lane after moving the known flaky files to fork-only
// isolation, but Node 25+ still falls back to process forks until re-validated.
// Keep it opt-out via OPENCLAW_TEST_VM_FORKS=0, and let users force-enable with =1.
const supportsVmForks = Number.isFinite(nodeMajor) ? nodeMajor <= 24 : true;
const useVmForks =
  process.env.OPENCLAW_TEST_VM_FORKS === "1" ||
  (process.env.OPENCLAW_TEST_VM_FORKS !== "0" && !isWindows && supportsVmForks && !lowMemLocalHost);
const disableIsolation = process.env.OPENCLAW_TEST_NO_ISOLATE === "1";
const includeGatewaySuite = process.env.OPENCLAW_TEST_INCLUDE_GATEWAY === "1";
const includeExtensionsSuite = process.env.OPENCLAW_TEST_INCLUDE_EXTENSIONS === "1";
const rawTestProfile = process.env.OPENCLAW_TEST_PROFILE?.trim().toLowerCase();
const testProfile =
  rawTestProfile === "low" ||
@@ -68,6 +61,21 @@ const testProfile =
    ? rawTestProfile
    : "normal";
const isMacMiniProfile = testProfile === "macmini";
// vmForks is a big win for transform/import heavy suites. Node 24 is stable again
// for the default unit-fast lane after moving the known flaky files to fork-only
// isolation, but Node 25+ still falls back to process forks until re-validated.
// Keep it opt-out via OPENCLAW_TEST_VM_FORKS=0, and let users force-enable with =1.
const supportsVmForks = Number.isFinite(nodeMajor) ? nodeMajor <= 24 : true;
const useVmForks =
  process.env.OPENCLAW_TEST_VM_FORKS === "1" ||
  (process.env.OPENCLAW_TEST_VM_FORKS !== "0" &&
    !isWindows &&
    supportsVmForks &&
    !lowMemLocalHost &&
    (isCI || testProfile !== "low"));
const disableIsolation = process.env.OPENCLAW_TEST_NO_ISOLATE === "1";
const includeGatewaySuite = process.env.OPENCLAW_TEST_INCLUDE_GATEWAY === "1";
const includeExtensionsSuite = process.env.OPENCLAW_TEST_INCLUDE_EXTENSIONS === "1";
// Even on low-memory hosts, keep the isolated lane split so files like
// git-commit.test.ts still get the worker/process isolation they require.
const shouldSplitUnitRuns = testProfile !== "serial";
@@ -725,6 +733,25 @@ const memoryTraceEnabled =
  (rawMemoryTrace !== "0" && rawMemoryTrace !== "false" && isCI));
const memoryTracePollMs = Math.max(250, parseEnvNumber("OPENCLAW_TEST_MEMORY_TRACE_POLL_MS", 1000));
const memoryTraceTopCount = Math.max(1, parseEnvNumber("OPENCLAW_TEST_MEMORY_TRACE_TOP_COUNT", 6));
const heapSnapshotIntervalMs = Math.max(
  0,
  parseEnvNumber("OPENCLAW_TEST_HEAPSNAPSHOT_INTERVAL_MS", 0),
);
const heapSnapshotMinIntervalMs = 5000;
const heapSnapshotEnabled =
  process.platform !== "win32" &&
  heapSnapshotIntervalMs >= heapSnapshotMinIntervalMs;
const heapSnapshotEnabled = process.platform !== "win32" && heapSnapshotIntervalMs > 0;
const heapSnapshotSignal = process.env.OPENCLAW_TEST_HEAPSNAPSHOT_SIGNAL?.trim() || "SIGUSR2";
const heapSnapshotBaseDir = heapSnapshotEnabled
  ? path.resolve(
      process.env.OPENCLAW_TEST_HEAPSNAPSHOT_DIR?.trim() ||
        path.join(os.tmpdir(), `openclaw-heapsnapshots-${Date.now()}`),
    )
  : null;
const ensureNodeOptionFlag = (nodeOptions, flagPrefix, nextValue) =>
  nodeOptions.includes(flagPrefix) ? nodeOptions : `${nodeOptions} ${nextValue}`.trim();
const isNodeLikeProcess = (command) => /(?:^|\/)node(?:$|\.exe$)/iu.test(command);

const runOnce = (entry, extraArgs = []) =>
  new Promise((resolve) => {
@@ -757,23 +784,44 @@ const runOnce = (entry, extraArgs = []) =>
      (acc, flag) => (acc.includes(flag) ? acc : `${acc} ${flag}`.trim()),
      nodeOptions,
    );
    const heapFlag =
    const heapSnapshotDir =
      heapSnapshotBaseDir === null ? null : path.join(heapSnapshotBaseDir, entry.name);
    let resolvedNodeOptions =
      maxOldSpaceSizeMb && !nextNodeOptions.includes("--max-old-space-size=")
        ? `--max-old-space-size=${maxOldSpaceSizeMb}`
        : null;
    const resolvedNodeOptions = heapFlag
      ? `${nextNodeOptions} ${heapFlag}`.trim()
      : nextNodeOptions;
        ? `${nextNodeOptions} --max-old-space-size=${maxOldSpaceSizeMb}`.trim()
        : nextNodeOptions;
    if (heapSnapshotEnabled && heapSnapshotDir) {
      try {
        fs.mkdirSync(heapSnapshotDir, { recursive: true });
      } catch (err) {
        console.error(`[test-parallel] failed to create heap snapshot dir ${heapSnapshotDir}: ${String(err)}`);
        resolve(1);
        return;
      }
      resolvedNodeOptions = ensureNodeOptionFlag(
        resolvedNodeOptions,
        "--diagnostic-dir=",
        `--diagnostic-dir=${heapSnapshotDir}`,
      );
      resolvedNodeOptions = ensureNodeOptionFlag(
        resolvedNodeOptions,
        "--heapsnapshot-signal=",
        `--heapsnapshot-signal=${heapSnapshotSignal}`,
      );
    }
    }
    let output = "";
    let fatalSeen = false;
    let childError = null;
    let child;
    let pendingLine = "";
    let memoryPollTimer = null;
    let heapSnapshotTimer = null;
    const memoryFileRecords = [];
    let initialTreeSample = null;
    let latestTreeSample = null;
    let peakTreeSample = null;
    let heapSnapshotSequence = 0;
    const updatePeakTreeSample = (sample, reason) => {
      if (!sample) {
        return;
@@ -782,6 +830,35 @@ const runOnce = (entry, extraArgs = []) =>
        peakTreeSample = { ...sample, reason };
      }
    };
    const triggerHeapSnapshot = (reason) => {
      if (!heapSnapshotEnabled || !child?.pid || !heapSnapshotDir) {
        return;
      }
      const records = getProcessTreeRecords(child.pid) ?? [];
      const targetPids = records
        .filter((record) => record.pid !== process.pid && isNodeLikeProcess(record.command))
        .map((record) => record.pid);
      if (targetPids.length === 0) {
        return;
      }
      heapSnapshotSequence += 1;
      let signaledCount = 0;
      for (const pid of targetPids) {
        try {
          process.kill(pid, heapSnapshotSignal);
          signaledCount += 1;
        } catch {
          // Process likely exited between ps sampling and signal delivery.
        }
      }
      if (signaledCount > 0) {
        console.log(
          `[test-parallel][heap] ${entry.name} seq=${String(heapSnapshotSequence)} reason=${reason} signaled=${String(
            signaledCount,
          )}/${String(targetPids.length)} dir=${heapSnapshotDir}`,
        );
      }
    };
    const captureTreeSample = (reason) => {
      if (!memoryTraceEnabled || !child?.pid) {
        return null;
@@ -877,6 +954,11 @@ const runOnce = (entry, extraArgs = []) =>
          captureTreeSample("poll");
        }, memoryTracePollMs);
      }
      if (heapSnapshotEnabled) {
        heapSnapshotTimer = setInterval(() => {
          triggerHeapSnapshot("interval");
        }, heapSnapshotIntervalMs);
      }
    } catch (err) {
      console.error(`[test-parallel] spawn failed: ${String(err)}`);
      resolve(1);
@@ -905,6 +987,9 @@ const runOnce = (entry, extraArgs = []) =>
      if (memoryPollTimer) {
        clearInterval(memoryPollTimer);
      }
      if (heapSnapshotTimer) {
        clearInterval(heapSnapshotTimer);
      }
      children.delete(child);
      const resolvedCode = resolveTestRunExitCode({ code, signal, output, fatalSeen, childError });
      logMemoryTraceSummary();
@@ -223,10 +223,10 @@ bundledChannelRuntimeSetters.setLineRuntime({
  },
} as never);

vi.mock("../../../../extensions/matrix/src/matrix/send.js", async () => {
vi.mock("../../../../extensions/matrix/runtime-api.js", async () => {
  const actual = await vi.importActual<
    typeof import("../../../../extensions/matrix/src/matrix/send.js")
  >("../../../../extensions/matrix/src/matrix/send.js");
    typeof import("../../../../extensions/matrix/runtime-api.js")
  >("../../../../extensions/matrix/runtime-api.js");
  return {
    ...actual,
    sendMessageMatrix: sendMessageMatrixMock,
@@ -13,6 +13,21 @@ import {

describe("config doc baseline", () => {
  const tempRoots: string[] = [];
  let sharedBaselinePromise: Promise<Awaited<ReturnType<typeof buildConfigDocBaseline>>> | null =
    null;
  let sharedRenderedPromise: Promise<
    Awaited<ReturnType<typeof renderConfigDocBaselineStatefile>>
  > | null = null;

  function getSharedBaseline() {
    sharedBaselinePromise ??= buildConfigDocBaseline();
    return sharedBaselinePromise;
  }

  function getSharedRendered() {
    sharedRenderedPromise ??= renderConfigDocBaselineStatefile(getSharedBaseline());
    return sharedRenderedPromise;
  }

  afterEach(async () => {
    await Promise.all(
@@ -31,7 +46,7 @@ describe("config doc baseline", () => {
  });

  it("normalizes array and record paths to wildcard form", async () => {
    const baseline = await buildConfigDocBaseline();
    const baseline = await getSharedBaseline();
    const paths = new Set(baseline.entries.map((entry) => entry.path));

    expect(paths.has("session.sendPolicy.rules.*.match.keyPrefix")).toBe(true);
@@ -40,7 +55,7 @@ describe("config doc baseline", () => {
  });

  it("includes core, channel, and plugin config metadata", async () => {
    const baseline = await buildConfigDocBaseline();
    const baseline = await getSharedBaseline();
    const byPath = new Map(baseline.entries.map((entry) => [entry.path, entry]));

    expect(byPath.get("gateway.auth.token")).toMatchObject({
@@ -58,7 +73,7 @@ describe("config doc baseline", () => {
  });

  it("preserves help text and tags from merged schema hints", async () => {
    const baseline = await buildConfigDocBaseline();
    const baseline = await getSharedBaseline();
    const byPath = new Map(baseline.entries.map((entry) => [entry.path, entry]));
    const tokenEntry = byPath.get("gateway.auth.token");

@@ -68,7 +83,7 @@ describe("config doc baseline", () => {
  });

  it("matches array help hints that still use [] notation", async () => {
    const baseline = await buildConfigDocBaseline();
    const baseline = await getSharedBaseline();
    const byPath = new Map(baseline.entries.map((entry) => [entry.path, entry]));

    expect(byPath.get("session.sendPolicy.rules.*.match.keyPrefix")).toMatchObject({
@@ -78,7 +93,7 @@ describe("config doc baseline", () => {
  });

  it("walks union branches for nested config keys", async () => {
    const baseline = await buildConfigDocBaseline();
    const baseline = await getSharedBaseline();
    const byPath = new Map(baseline.entries.map((entry) => [entry.path, entry]));

    expect(byPath.get("bindings.*")).toMatchObject({
@@ -121,11 +136,13 @@ describe("config doc baseline", () => {
  it("supports check mode for stale generated artifacts", async () => {
    const tempRoot = await fs.mkdtemp(path.join(os.tmpdir(), "openclaw-config-doc-baseline-"));
    tempRoots.push(tempRoot);
    const rendered = getSharedRendered();

    const initial = await writeConfigDocBaselineStatefile({
      repoRoot: tempRoot,
      jsonPath: "docs/.generated/config-baseline.json",
      statefilePath: "docs/.generated/config-baseline.jsonl",
      rendered,
    });
    expect(initial.wrote).toBe(true);

@@ -134,6 +151,7 @@
      jsonPath: "docs/.generated/config-baseline.json",
      statefilePath: "docs/.generated/config-baseline.jsonl",
      check: true,
      rendered,
    });
    expect(current.changed).toBe(false);

@@ -153,6 +171,7 @@
      jsonPath: "docs/.generated/config-baseline.json",
      statefilePath: "docs/.generated/config-baseline.jsonl",
      check: true,
      rendered,
    });
    expect(stale.changed).toBe(true);
    expect(stale.wrote).toBe(false);
@@ -658,11 +658,11 @@ export async function buildConfigDocBaseline(): Promise<ConfigDocBaseline> {
}

export async function renderConfigDocBaselineStatefile(
  baseline?: ConfigDocBaseline,
  baseline?: ConfigDocBaseline | Promise<ConfigDocBaseline>,
): Promise<ConfigDocBaselineStatefileRender> {
  const start = Date.now();
  logConfigDocBaselineDebug("render statefile start");
  const resolvedBaseline = baseline ?? (await buildConfigDocBaseline());
  const resolvedBaseline = baseline ? await baseline : await buildConfigDocBaseline();
  const json = `${JSON.stringify(resolvedBaseline, null, 2)}\n`;
  const metadataLine = JSON.stringify({
    generatedBy: GENERATED_BY,
@@ -706,13 +706,16 @@ export async function writeConfigDocBaselineStatefile(params?: {
  check?: boolean;
  jsonPath?: string;
  statefilePath?: string;
  rendered?: ConfigDocBaselineStatefileRender | Promise<ConfigDocBaselineStatefileRender>;
}): Promise<ConfigDocBaselineStatefileWriteResult> {
  const start = Date.now();
  logConfigDocBaselineDebug("write statefile start");
  const repoRoot = params?.repoRoot ?? resolveRepoRoot();
  const jsonPath = path.resolve(repoRoot, params?.jsonPath ?? DEFAULT_JSON_OUTPUT);
  const statefilePath = path.resolve(repoRoot, params?.statefilePath ?? DEFAULT_STATEFILE_OUTPUT);
  const rendered = await renderConfigDocBaselineStatefile();
  const rendered = params?.rendered
    ? await params.rendered
    : await renderConfigDocBaselineStatefile();
  logConfigDocBaselineDebug(`render statefile done elapsedMs=${Date.now() - start}`);
  logConfigDocBaselineDebug(`read current json start ${jsonPath}`);
  const currentJson = await readIfExists(jsonPath);
267
src/config/redact-snapshot.restore.test.ts
Normal file
@@ -0,0 +1,267 @@
import { describe, expect, it } from "vitest";
import {
  REDACTED_SENTINEL,
  redactConfigSnapshot,
  restoreRedactedValues as restoreRedactedValues_orig,
} from "./redact-snapshot.js";
import { __test__ } from "./schema.hints.js";
import type { ConfigUiHints } from "./schema.js";
import type { ConfigFileSnapshot } from "./types.openclaw.js";
import { OpenClawSchema } from "./zod-schema.js";

const { mapSensitivePaths } = __test__;
const mainSchemaHints = mapSensitivePaths(OpenClawSchema, "", {});

type TestSnapshot<TConfig extends Record<string, unknown>> = ConfigFileSnapshot & {
  parsed: TConfig;
  resolved: TConfig;
  config: TConfig;
};

function makeSnapshot<TConfig extends Record<string, unknown>>(
  config: TConfig,
  raw?: string,
): TestSnapshot<TConfig> {
  return {
    path: "/home/user/.openclaw/config.json5",
    exists: true,
    raw: raw ?? JSON.stringify(config),
    parsed: config,
    resolved: config as ConfigFileSnapshot["resolved"],
    valid: true,
    config: config as ConfigFileSnapshot["config"],
    hash: "abc123",
    issues: [],
    warnings: [],
    legacyIssues: [],
  } as unknown as TestSnapshot<TConfig>;
}

function restoreRedactedValues<TOriginal>(
  incoming: unknown,
  original: TOriginal,
  hints?: ConfigUiHints,
): TOriginal {
  const result = restoreRedactedValues_orig(incoming, original, hints);
  expect(result.ok).toBe(true);
  return result.result as TOriginal;
}

describe("restoreRedactedValues", () => {
  it("restores redacted URL endpoint fields on round-trip", () => {
    const incoming = {
      models: {
        providers: {
          openai: { baseUrl: REDACTED_SENTINEL },
        },
      },
    };
    const original = {
      models: {
        providers: {
          openai: { baseUrl: "https://alice:secret@example.test/v1" },
        },
      },
    };
    const result = restoreRedactedValues(incoming, original, mainSchemaHints);
    expect(result.models.providers.openai.baseUrl).toBe("https://alice:secret@example.test/v1");
  });

  it("restores sentinel values from original config", () => {
    const incoming = {
      gateway: { auth: { token: REDACTED_SENTINEL } },
    };
    const original = {
      gateway: { auth: { token: "real-secret-token-value" } },
    };
    const result = restoreRedactedValues(incoming, original) as typeof incoming;
    expect(result.gateway.auth.token).toBe("real-secret-token-value");
  });

  it("preserves explicitly changed sensitive values", () => {
    const incoming = {
      gateway: { auth: { token: "new-token-value-from-user" } },
    };
    const original = {
      gateway: { auth: { token: "old-token-value" } },
    };
    const result = restoreRedactedValues(incoming, original) as typeof incoming;
    expect(result.gateway.auth.token).toBe("new-token-value-from-user");
  });

  it("preserves non-sensitive fields unchanged", () => {
    const incoming = {
      ui: { seamColor: "#ff0000" },
      gateway: { port: 9999, auth: { token: REDACTED_SENTINEL } },
    };
    const original = {
      ui: { seamColor: "#0088cc" },
      gateway: { port: 18789, auth: { token: "real-secret" } },
    };
    const result = restoreRedactedValues(incoming, original) as typeof incoming;
    expect(result.ui.seamColor).toBe("#ff0000");
    expect(result.gateway.port).toBe(9999);
    expect(result.gateway.auth.token).toBe("real-secret");
  });

  it("handles deeply nested sentinel restoration", () => {
    const incoming = {
      channels: {
        slack: {
          accounts: {
            ws1: { botToken: REDACTED_SENTINEL },
            ws2: { botToken: "user-typed-new-token-value" },
          },
        },
      },
    };
    const original = {
      channels: {
        slack: {
          accounts: {
            ws1: { botToken: "original-ws1-token-value" },
            ws2: { botToken: "original-ws2-token-value" },
          },
        },
      },
    };
    const result = restoreRedactedValues(incoming, original) as typeof incoming;
    expect(result.channels.slack.accounts.ws1.botToken).toBe("original-ws1-token-value");
    expect(result.channels.slack.accounts.ws2.botToken).toBe("user-typed-new-token-value");
  });

  it("handles missing original gracefully", () => {
    const incoming = {
      channels: { newChannel: { token: REDACTED_SENTINEL } },
    };
    const original = {};
    expect(restoreRedactedValues_orig(incoming, original).ok).toBe(false);
  });

  it("rejects invalid restore inputs", () => {
    const invalidInputs = [null, undefined, "token-value"] as const;
    for (const input of invalidInputs) {
      const result = restoreRedactedValues_orig(input, { token: "x" });
      expect(result.ok).toBe(false);
    }
    expect(restoreRedactedValues_orig("token-value", { token: "x" })).toEqual({
      ok: false,
      error: "input not an object",
    });
  });

  it("returns a human-readable error when sentinel cannot be restored", () => {
    const incoming = {
      channels: { newChannel: { token: REDACTED_SENTINEL } },
    };
    const result = restoreRedactedValues_orig(incoming, {});
    expect(result.ok).toBe(false);
    expect(result.humanReadableMessage).toContain(REDACTED_SENTINEL);
    expect(result.humanReadableMessage).toContain("channels.newChannel.token");
||||
});
|
||||
|
||||
it("keeps unmatched wildcard array entries unchanged outside extension paths", () => {
|
||||
const hints: ConfigUiHints = {
|
||||
"custom.*": { sensitive: true },
|
||||
};
|
||||
const incoming = {
|
||||
custom: { items: [REDACTED_SENTINEL] },
|
||||
};
|
||||
const original = {
|
||||
custom: { items: ["original-secret-value"] },
|
||||
};
|
||||
const result = restoreRedactedValues(incoming, original, hints) as typeof incoming;
|
||||
expect(result.custom.items[0]).toBe(REDACTED_SENTINEL);
|
||||
});
|
||||
|
||||
it("round-trips config through redact → restore", () => {
|
||||
const originalConfig = {
|
||||
gateway: { auth: { token: "gateway-auth-secret-token-value" }, port: 18789 },
|
||||
channels: {
|
||||
slack: { botToken: "fake-slack-token-placeholder-value" },
|
||||
telegram: {
|
||||
botToken: "fake-telegram-token-placeholder-value",
|
||||
webhookSecret: "fake-tg-secret-placeholder-value",
|
||||
},
|
||||
},
|
||||
models: {
|
||||
providers: {
|
||||
openai: {
|
||||
apiKey: "sk-proj-fake-openai-api-key-value",
|
||||
baseUrl: "https://api.openai.com",
|
||||
},
|
||||
},
|
||||
},
|
||||
ui: { seamColor: "#0088cc" },
|
||||
};
|
||||
const snapshot = makeSnapshot(originalConfig);
|
||||
const redacted = redactConfigSnapshot(snapshot);
|
||||
const restored = restoreRedactedValues(redacted.config, snapshot.config);
|
||||
expect(restored).toEqual(originalConfig);
|
||||
});
|
||||
|
||||
it("round-trips with uiHints for custom sensitive fields", () => {
|
||||
const hints: ConfigUiHints = {
|
||||
"custom.myApiKey": { sensitive: true },
|
||||
"custom.displayName": { sensitive: false },
|
||||
};
|
||||
const originalConfig = {
|
||||
custom: { myApiKey: "secret-custom-api-key-value", displayName: "My Bot" },
|
||||
};
|
||||
const snapshot = makeSnapshot(originalConfig);
|
||||
const redacted = redactConfigSnapshot(snapshot, hints);
|
||||
const custom = (redacted.config as typeof originalConfig).custom as Record<string, string>;
|
||||
expect(custom.myApiKey).toBe(REDACTED_SENTINEL);
|
||||
expect(custom.displayName).toBe("My Bot");
|
||||
|
||||
const restored = restoreRedactedValues(
|
||||
redacted.config,
|
||||
snapshot.config,
|
||||
hints,
|
||||
) as typeof originalConfig;
|
||||
expect(restored).toEqual(originalConfig);
|
||||
});
|
||||
|
||||
it("restores with uiHints respecting sensitive:false override", () => {
|
||||
const hints: ConfigUiHints = {
|
||||
"gateway.auth.token": { sensitive: false },
|
||||
};
|
||||
const incoming = {
|
||||
gateway: { auth: { token: REDACTED_SENTINEL } },
|
||||
};
|
||||
const original = {
|
||||
gateway: { auth: { token: "real-secret" } },
|
||||
};
|
||||
const result = restoreRedactedValues(incoming, original, hints) as typeof incoming;
|
||||
expect(result.gateway.auth.token).toBe(REDACTED_SENTINEL);
|
||||
});
|
||||
|
||||
it("restores array items using wildcard uiHints", () => {
|
||||
const hints: ConfigUiHints = {
|
||||
"channels.slack.accounts[].botToken": { sensitive: true },
|
||||
};
|
||||
const incoming = {
|
||||
channels: {
|
||||
slack: {
|
||||
accounts: [
|
||||
{ botToken: REDACTED_SENTINEL },
|
||||
{ botToken: "user-provided-new-token-value" },
|
||||
],
|
||||
},
|
||||
},
|
||||
};
|
||||
const original = {
|
||||
channels: {
|
||||
slack: {
|
||||
accounts: [
|
||||
{ botToken: "original-token-first-account" },
|
||||
{ botToken: "original-token-second-account" },
|
||||
],
|
||||
},
|
||||
},
|
||||
};
|
||||
const result = restoreRedactedValues(incoming, original, hints) as typeof incoming;
|
||||
expect(result.channels.slack.accounts[0].botToken).toBe("original-token-first-account");
|
||||
expect(result.channels.slack.accounts[1].botToken).toBe("user-provided-new-token-value");
|
||||
});
|
||||
});
|
||||
88
src/config/redact-snapshot.schema.test.ts
Normal file
@ -0,0 +1,88 @@
import { describe, expect, it } from "vitest";
import {
  REDACTED_SENTINEL,
  redactConfigSnapshot,
  restoreRedactedValues as restoreRedactedValues_orig,
} from "./redact-snapshot.js";
import { __test__ } from "./schema.hints.js";
import type { ConfigUiHints } from "./schema.js";
import type { ConfigFileSnapshot } from "./types.openclaw.js";
import { OpenClawSchema } from "./zod-schema.js";

const { mapSensitivePaths } = __test__;
const mainSchemaHints = mapSensitivePaths(OpenClawSchema, "", {});

type TestSnapshot<TConfig extends Record<string, unknown>> = ConfigFileSnapshot & {
  parsed: TConfig;
  resolved: TConfig;
  config: TConfig;
};

function makeSnapshot<TConfig extends Record<string, unknown>>(
  config: TConfig,
  raw?: string,
): TestSnapshot<TConfig> {
  return {
    path: "/home/user/.openclaw/config.json5",
    exists: true,
    raw: raw ?? JSON.stringify(config),
    parsed: config,
    resolved: config as ConfigFileSnapshot["resolved"],
    valid: true,
    config: config as ConfigFileSnapshot["config"],
    hash: "abc123",
    issues: [],
    warnings: [],
    legacyIssues: [],
  } as unknown as TestSnapshot<TConfig>;
}

function restoreRedactedValues<TOriginal>(
  incoming: unknown,
  original: TOriginal,
  hints?: ConfigUiHints,
): TOriginal {
  const result = restoreRedactedValues_orig(incoming, original, hints);
  expect(result.ok).toBe(true);
  return result.result as TOriginal;
}

describe("realredactConfigSnapshot_real", () => {
  it("main schema redact works (samples)", () => {
    const schema = OpenClawSchema.toJSONSchema({
      target: "draft-07",
      unrepresentable: "any",
    });
    schema.title = "OpenClawConfig";
    const hints = mainSchemaHints;

    const snapshot = makeSnapshot({
      agents: {
        defaults: {
          memorySearch: {
            remote: {
              apiKey: "1234",
            },
          },
        },
        list: [
          {
            memorySearch: {
              remote: {
                apiKey: "6789",
              },
            },
          },
        ],
      },
    });

    const result = redactConfigSnapshot(snapshot, hints);
    const config = result.config as typeof snapshot.config;
    expect(config.agents.defaults.memorySearch.remote.apiKey).toBe(REDACTED_SENTINEL);
    expect(config.agents.list[0].memorySearch.remote.apiKey).toBe(REDACTED_SENTINEL);
    const restored = restoreRedactedValues(result.config, snapshot.config, hints);
    expect(restored.agents.defaults.memorySearch.remote.apiKey).toBe("1234");
    expect(restored.agents.list[0].memorySearch.remote.apiKey).toBe("6789");
  });
});
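The redact/restore round-trip these tests exercise can be sketched as a recursive walk that swaps the sentinel back for the value it replaced. This is a hypothetical minimal sketch, not the real `restoreRedactedValues` implementation: the `REDACTED` constant, lack of path hints, and error handling are all simplifications.

```typescript
// Hypothetical sketch of the sentinel-restore idea: walk the incoming
// object alongside the original and swap the sentinel back in place.
const REDACTED = "__REDACTED__"; // placeholder; the real sentinel differs

function restore(incoming: unknown, original: unknown): unknown {
  if (incoming === REDACTED) {
    return original; // sentinel: take the value redaction replaced
  }
  if (Array.isArray(incoming)) {
    return incoming.map((item, i) =>
      restore(item, Array.isArray(original) ? original[i] : undefined),
    );
  }
  if (incoming && typeof incoming === "object") {
    const orig = (original ?? {}) as Record<string, unknown>;
    return Object.fromEntries(
      Object.entries(incoming).map(([key, value]) => [key, restore(value, orig[key])]),
    );
  }
  return incoming; // user-edited or non-sensitive values pass through
}
```

The real code additionally consults `ConfigUiHints` so only paths marked sensitive are eligible for restoration, which is why the `sensitive: false` override test leaves the sentinel untouched.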
@ -918,269 +918,3 @@ describe("redactConfigSnapshot", () => {
    expect(channels.slack.accounts[1].botToken).toBe(REDACTED_SENTINEL);
  });
});

describe("restoreRedactedValues", () => {
  it("restores redacted URL endpoint fields on round-trip", () => {
    const incoming = {
      models: {
        providers: {
          openai: { baseUrl: REDACTED_SENTINEL },
        },
      },
    };
    const original = {
      models: {
        providers: {
          openai: { baseUrl: "https://alice:secret@example.test/v1" },
        },
      },
    };
    const result = restoreRedactedValues(incoming, original, mainSchemaHints);
    expect(result.models.providers.openai.baseUrl).toBe("https://alice:secret@example.test/v1");
  });

  it("restores sentinel values from original config", () => {
    const incoming = {
      gateway: { auth: { token: REDACTED_SENTINEL } },
    };
    const original = {
      gateway: { auth: { token: "real-secret-token-value" } },
    };
    const result = restoreRedactedValues(incoming, original) as typeof incoming;
    expect(result.gateway.auth.token).toBe("real-secret-token-value");
  });

  it("preserves explicitly changed sensitive values", () => {
    const incoming = {
      gateway: { auth: { token: "new-token-value-from-user" } },
    };
    const original = {
      gateway: { auth: { token: "old-token-value" } },
    };
    const result = restoreRedactedValues(incoming, original) as typeof incoming;
    expect(result.gateway.auth.token).toBe("new-token-value-from-user");
  });

  it("preserves non-sensitive fields unchanged", () => {
    const incoming = {
      ui: { seamColor: "#ff0000" },
      gateway: { port: 9999, auth: { token: REDACTED_SENTINEL } },
    };
    const original = {
      ui: { seamColor: "#0088cc" },
      gateway: { port: 18789, auth: { token: "real-secret" } },
    };
    const result = restoreRedactedValues(incoming, original) as typeof incoming;
    expect(result.ui.seamColor).toBe("#ff0000");
    expect(result.gateway.port).toBe(9999);
    expect(result.gateway.auth.token).toBe("real-secret");
  });

  it("handles deeply nested sentinel restoration", () => {
    const incoming = {
      channels: {
        slack: {
          accounts: {
            ws1: { botToken: REDACTED_SENTINEL },
            ws2: { botToken: "user-typed-new-token-value" },
          },
        },
      },
    };
    const original = {
      channels: {
        slack: {
          accounts: {
            ws1: { botToken: "original-ws1-token-value" },
            ws2: { botToken: "original-ws2-token-value" },
          },
        },
      },
    };
    const result = restoreRedactedValues(incoming, original) as typeof incoming;
    expect(result.channels.slack.accounts.ws1.botToken).toBe("original-ws1-token-value");
    expect(result.channels.slack.accounts.ws2.botToken).toBe("user-typed-new-token-value");
  });

  it("handles missing original gracefully", () => {
    const incoming = {
      channels: { newChannel: { token: REDACTED_SENTINEL } },
    };
    const original = {};
    expect(restoreRedactedValues_orig(incoming, original).ok).toBe(false);
  });

  it("rejects invalid restore inputs", () => {
    const invalidInputs = [null, undefined, "token-value"] as const;
    for (const input of invalidInputs) {
      const result = restoreRedactedValues_orig(input, { token: "x" });
      expect(result.ok).toBe(false);
    }
    expect(restoreRedactedValues_orig("token-value", { token: "x" })).toEqual({
      ok: false,
      error: "input not an object",
    });
  });

  it("returns a human-readable error when sentinel cannot be restored", () => {
    const incoming = {
      channels: { newChannel: { token: REDACTED_SENTINEL } },
    };
    const result = restoreRedactedValues_orig(incoming, {});
    expect(result.ok).toBe(false);
    expect(result.humanReadableMessage).toContain(REDACTED_SENTINEL);
    expect(result.humanReadableMessage).toContain("channels.newChannel.token");
  });

  it("keeps unmatched wildcard array entries unchanged outside extension paths", () => {
    const hints: ConfigUiHints = {
      "custom.*": { sensitive: true },
    };
    const incoming = {
      custom: { items: [REDACTED_SENTINEL] },
    };
    const original = {
      custom: { items: ["original-secret-value"] },
    };
    const result = restoreRedactedValues(incoming, original, hints) as typeof incoming;
    expect(result.custom.items[0]).toBe(REDACTED_SENTINEL);
  });

  it("round-trips config through redact → restore", () => {
    const originalConfig = {
      gateway: { auth: { token: "gateway-auth-secret-token-value" }, port: 18789 },
      channels: {
        slack: { botToken: "fake-slack-token-placeholder-value" },
        telegram: {
          botToken: "fake-telegram-token-placeholder-value",
          webhookSecret: "fake-tg-secret-placeholder-value",
        },
      },
      models: {
        providers: {
          openai: {
            apiKey: "sk-proj-fake-openai-api-key-value",
            baseUrl: "https://api.openai.com",
          },
        },
      },
      ui: { seamColor: "#0088cc" },
    };
    const snapshot = makeSnapshot(originalConfig);

    // Redact (simulates config.get response)
    const redacted = redactConfigSnapshot(snapshot);

    // Restore (simulates config.set before write)
    const restored = restoreRedactedValues(redacted.config, snapshot.config);

    expect(restored).toEqual(originalConfig);
  });

  it("round-trips with uiHints for custom sensitive fields", () => {
    const hints: ConfigUiHints = {
      "custom.myApiKey": { sensitive: true },
      "custom.displayName": { sensitive: false },
    };
    const originalConfig = {
      custom: { myApiKey: "secret-custom-api-key-value", displayName: "My Bot" },
    };
    const snapshot = makeSnapshot(originalConfig);
    const redacted = redactConfigSnapshot(snapshot, hints);
    const custom = (redacted.config as typeof originalConfig).custom as Record<string, string>;
    expect(custom.myApiKey).toBe(REDACTED_SENTINEL);
    expect(custom.displayName).toBe("My Bot");

    const restored = restoreRedactedValues(
      redacted.config,
      snapshot.config,
      hints,
    ) as typeof originalConfig;
    expect(restored).toEqual(originalConfig);
  });

  it("restores with uiHints respecting sensitive:false override", () => {
    const hints: ConfigUiHints = {
      "gateway.auth.token": { sensitive: false },
    };
    const incoming = {
      gateway: { auth: { token: REDACTED_SENTINEL } },
    };
    const original = {
      gateway: { auth: { token: "real-secret" } },
    };
    // With sensitive:false, the sentinel is NOT on a sensitive path,
    // so restore should NOT replace it (it's treated as a literal value)
    const result = restoreRedactedValues(incoming, original, hints) as typeof incoming;
    expect(result.gateway.auth.token).toBe(REDACTED_SENTINEL);
  });

  it("restores array items using wildcard uiHints", () => {
    const hints: ConfigUiHints = {
      "channels.slack.accounts[].botToken": { sensitive: true },
    };
    const incoming = {
      channels: {
        slack: {
          accounts: [
            { botToken: REDACTED_SENTINEL },
            { botToken: "user-provided-new-token-value" },
          ],
        },
      },
    };
    const original = {
      channels: {
        slack: {
          accounts: [
            { botToken: "original-token-first-account" },
            { botToken: "original-token-second-account" },
          ],
        },
      },
    };
    const result = restoreRedactedValues(incoming, original, hints) as typeof incoming;
    expect(result.channels.slack.accounts[0].botToken).toBe("original-token-first-account");
    expect(result.channels.slack.accounts[1].botToken).toBe("user-provided-new-token-value");
  });
});

describe("realredactConfigSnapshot_real", () => {
  it("main schema redact works (samples)", () => {
    const schema = OpenClawSchema.toJSONSchema({
      target: "draft-07",
      unrepresentable: "any",
    });
    schema.title = "OpenClawConfig";
    const hints = mainSchemaHints;

    const snapshot = makeSnapshot({
      agents: {
        defaults: {
          memorySearch: {
            remote: {
              apiKey: "1234",
            },
          },
        },
        list: [
          {
            memorySearch: {
              remote: {
                apiKey: "6789",
              },
            },
          },
        ],
      },
    });

    const result = redactConfigSnapshot(snapshot, hints);
    const config = result.config as typeof snapshot.config;
    expect(config.agents.defaults.memorySearch.remote.apiKey).toBe(REDACTED_SENTINEL);
    expect(config.agents.list[0].memorySearch.remote.apiKey).toBe(REDACTED_SENTINEL);
    const restored = restoreRedactedValues(result.config, snapshot.config, hints);
    expect(restored.agents.defaults.memorySearch.remote.apiKey).toBe("1234");
    expect(restored.agents.list[0].memorySearch.remote.apiKey).toBe("6789");
  });
});
@ -28,6 +28,8 @@ export function resolvePluginHookDirs(params: {
  const registry = loadPluginManifestRegistry({
    workspaceDir,
    config: params.config,
    // Hook discovery should reflect freshly written bundle manifests immediately.
    cache: false,
  });
  if (registry.plugins.length === 0) {
    return [];
@ -28,6 +28,11 @@ type HookPackageManifest = {
} & Partial<Record<typeof MANIFEST_KEY, { hooks?: string[] }>>;
const log = createSubsystemLogger("hooks/workspace");

type LoadedHook = {
  hook: Hook;
  frontmatter: ParsedHookFrontmatter;
};

function filterHookEntries(
  entries: HookEntry[],
  config?: OpenClawConfig,
@ -79,7 +84,7 @@ function loadHookFromDir(params: {
  source: HookSource;
  pluginId?: string;
  nameHint?: string;
}): Hook | null {
}): LoadedHook | null {
  let canonicalHookDir = path.resolve(params.hookDir);
  try {
    canonicalHookDir = fs.realpathSync.native(params.hookDir);
@ -123,13 +128,16 @@ function loadHookFromDir(params: {
    }

    return {
      name,
      description,
      source: params.source,
      pluginId: params.pluginId,
      filePath: hookMdPath,
      baseDir: canonicalHookDir,
      handlerPath,
      hook: {
        name,
        description,
        source: params.source,
        pluginId: params.pluginId,
        filePath: hookMdPath,
        baseDir: canonicalHookDir,
        handlerPath,
      },
      frontmatter,
    };
  } catch (err) {
    const message = err instanceof Error ? (err.stack ?? err.message) : String(err);
@ -141,7 +149,11 @@ function loadHookFromDir(params: {
/**
 * Scan a directory for hooks (subdirectories containing HOOK.md)
 */
function loadHooksFromDir(params: { dir: string; source: HookSource; pluginId?: string }): Hook[] {
function loadHooksFromDir(params: {
  dir: string;
  source: HookSource;
  pluginId?: string;
}): LoadedHook[] {
  const { dir, source, pluginId } = params;

  if (!fs.existsSync(dir)) {
@ -153,7 +165,7 @@ function loadHooksFromDir(params: { dir: string; source: HookSource; pluginId?:
    return [];
  }

  const hooks: Hook[] = [];
  const hooks: LoadedHook[] = [];
  const entries = fs.readdirSync(dir, { withFileTypes: true });

  for (const entry of entries) {
@ -211,16 +223,7 @@ export function loadHookEntriesFromDir(params: {
    source: params.source,
    pluginId: params.pluginId,
  });
  return hooks.map((hook) => {
    let frontmatter: ParsedHookFrontmatter = {};
    const raw = readBoundaryFileUtf8({
      absolutePath: hook.filePath,
      rootPath: hook.baseDir,
      boundaryLabel: "hook directory",
    });
    if (raw !== null) {
      frontmatter = parseFrontmatter(raw);
    }
  return hooks.map(({ hook, frontmatter }) => {
    const entry: HookEntry = {
      hook: {
        ...hook,
@ -1,19 +1,12 @@
import { beforeEach, describe, expect, it, vi } from "vitest";
import { describe, expect, it, vi } from "vitest";
import { listChannelPlugins } from "../channels/plugins/index.js";
import type { ChannelPlugin } from "../channels/plugins/types.js";
import { buildChannelSummary } from "./channel-summary.js";

vi.mock("../channels/plugins/index.js", () => ({
  listChannelPlugins: vi.fn(),
}));

let buildChannelSummary: typeof import("./channel-summary.js").buildChannelSummary;
let listChannelPlugins: typeof import("../channels/plugins/index.js").listChannelPlugins;

beforeEach(async () => {
  vi.resetModules();
  ({ buildChannelSummary } = await import("./channel-summary.js"));
  ({ listChannelPlugins } = await import("../channels/plugins/index.js"));
});

function makeSlackHttpSummaryPlugin(): ChannelPlugin {
  return {
    id: "slack",
@ -25,6 +25,22 @@ function writeMatrixPluginFixture(rootDir: string, helperBody: string): void {
  fs.writeFileSync(path.join(rootDir, "legacy-crypto-inspector.js"), helperBody, "utf8");
}

function writeMatrixPluginManifest(rootDir: string): void {
  fs.mkdirSync(rootDir, { recursive: true });
  fs.writeFileSync(
    path.join(rootDir, "openclaw.plugin.json"),
    JSON.stringify({
      id: "matrix",
      configSchema: {
        type: "object",
        additionalProperties: false,
      },
    }),
    "utf8",
  );
  fs.writeFileSync(path.join(rootDir, "index.js"), "export default {};\n", "utf8");
}

describe("matrix plugin helper resolution", () => {
  it("loads the legacy crypto inspector from the bundled matrix plugin", async () => {
    await withTempHome(
@ -125,6 +141,62 @@ describe("matrix plugin helper resolution", () => {
    );
  });

  it("keeps source-style root helper shims on the Jiti fallback path", async () => {
    await withTempHome(
      async (home) => {
        const customRoot = path.join(home, "plugins", "matrix-local");
        writeMatrixPluginManifest(customRoot);
        fs.mkdirSync(path.join(customRoot, "src", "matrix"), { recursive: true });
        fs.writeFileSync(
          path.join(customRoot, "legacy-crypto-inspector.js"),
          'export { inspectLegacyMatrixCryptoStore } from "./src/matrix/legacy-crypto-inspector.js";\n',
          "utf8",
        );
        fs.writeFileSync(
          path.join(customRoot, "src", "matrix", "legacy-crypto-inspector.ts"),
          [
            "export async function inspectLegacyMatrixCryptoStore() {",
            ' return { deviceId: "SRCJS", roomKeyCounts: null, backupVersion: null, decryptionKeyBase64: null };',
            "}",
          ].join("\n"),
          "utf8",
        );

        const cfg: OpenClawConfig = {
          plugins: {
            load: {
              paths: [customRoot],
            },
          },
        };

        expect(isMatrixLegacyCryptoInspectorAvailable({ cfg, env: process.env })).toBe(true);
        const inspectLegacyStore = await loadMatrixLegacyCryptoInspector({
          cfg,
          env: process.env,
        });

        await expect(
          inspectLegacyStore({
            cryptoRootDir: "/tmp/legacy",
            userId: "@bot:example.org",
            deviceId: "DEVICE123",
          }),
        ).resolves.toEqual({
          deviceId: "SRCJS",
          roomKeyCounts: null,
          backupVersion: null,
          decryptionKeyBase64: null,
        });
      },
      {
        env: {
          OPENCLAW_BUNDLED_PLUGINS_DIR: (home) => path.join(home, "empty-bundled"),
        },
      },
    );
  });

  it("rejects helper files that escape the plugin root", async () => {
    await withTempHome(
      async (home) => {
@ -1,11 +1,13 @@
import fs from "node:fs";
import path from "node:path";
import { pathToFileURL } from "node:url";
import { createJiti } from "jiti";
import type { OpenClawConfig } from "../config/config.js";
import {
  loadPluginManifestRegistry,
  type PluginManifestRecord,
} from "../plugins/manifest-registry.js";
import { shouldPreferNativeJiti } from "../plugins/sdk-alias.js";
import { openBoundaryFileSync } from "./boundary-file-read.js";

const MATRIX_PLUGIN_ID = "matrix";
@ -98,15 +100,26 @@ let jitiLoader: ReturnType<typeof createJiti> | null = null;
const inspectorCache = new Map<string, Promise<MatrixLegacyCryptoInspector>>();

function getJiti() {
  if (!jitiLoader) {
    jitiLoader = createJiti(import.meta.url, {
      interopDefault: false,
      extensions: [".ts", ".tsx", ".mts", ".cts", ".js", ".mjs", ".cjs", ".json"],
    });
  if (jitiLoader) {
    return jitiLoader;
  }

  jitiLoader = createJiti(import.meta.url, {
    interopDefault: false,
    tryNative: false,
    extensions: [".ts", ".tsx", ".mts", ".cts", ".mtsx", ".ctsx", ".js", ".mjs", ".cjs", ".json"],
  });
  return jitiLoader;
}

function canRetryWithJiti(error: unknown): boolean {
  if (!error || typeof error !== "object") {
    return false;
  }
  const code = "code" in error ? (error as { code?: unknown }).code : undefined;
  return code === "ERR_MODULE_NOT_FOUND" || code === "ERR_UNKNOWN_FILE_EXTENSION";
}

function isObjectRecord(value: unknown): value is Record<string, unknown> {
  return typeof value === "object" && value !== null;
}
@ -154,7 +167,19 @@ export async function loadMatrixLegacyCryptoInspector(params: {
  }

  const pending = (async () => {
    const loaded: unknown = await getJiti().import(helperPath);
    let loaded: unknown;
    if (shouldPreferNativeJiti(helperPath)) {
      try {
        loaded = await import(pathToFileURL(helperPath).href);
      } catch (error) {
        if (!canRetryWithJiti(error)) {
          throw error;
        }
        loaded = getJiti()(helperPath);
      }
    } else {
      loaded = getJiti()(helperPath);
    }
    const inspectLegacyMatrixCryptoStore = resolveInspectorExport(loaded);
    if (!inspectLegacyMatrixCryptoStore) {
      throw new Error(
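The loader change above follows a try-native-first pattern: attempt a plain dynamic `import()`, and only fall back to the transpiling loader when the failure is a resolution or unknown-extension error. A self-contained sketch of that pattern (the function names here are illustrative, not the project's API):

```typescript
// Hypothetical sketch: only module-resolution and unknown-extension
// failures should trigger the transpiling-loader fallback; any other
// error (e.g. a syntax error in the helper) must surface immediately.
function canRetryWithLoader(error: unknown): boolean {
  if (!error || typeof error !== "object") {
    return false;
  }
  const code = "code" in error ? (error as { code?: unknown }).code : undefined;
  return code === "ERR_MODULE_NOT_FOUND" || code === "ERR_UNKNOWN_FILE_EXTENSION";
}

async function loadWithFallback<T>(
  nativeImport: () => Promise<T>, // e.g. () => import(fileUrl)
  transpileImport: () => T, // e.g. () => jiti(helperPath)
): Promise<T> {
  try {
    return await nativeImport();
  } catch (error) {
    if (!canRetryWithLoader(error)) {
      throw error;
    }
    return transpileImport();
  }
}
```

Restricting the retry to those two error codes is the key design choice: it keeps the fast native path for plain `.js` helpers while still supporting source-style `.ts` shims, without masking genuine load failures.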
@ -3,13 +3,14 @@ import { mkdirSync, rmSync } from "node:fs";
import fs from "node:fs/promises";
import os from "node:os";
import path from "node:path";
import { afterAll, beforeAll, beforeEach, describe, expect, it, vi } from "vitest";
import { afterAll, afterEach, beforeAll, beforeEach, describe, expect, it, vi } from "vitest";
import "./test-runtime-mocks.js";
import type { MemoryIndexManager } from "./index.js";

type MemoryIndexModule = typeof import("./index.js");

let getMemorySearchManager: MemoryIndexModule["getMemorySearchManager"];
let closeAllMemorySearchManagers: MemoryIndexModule["closeAllMemorySearchManagers"];

let embedBatchCalls = 0;
let embedBatchInputCalls = 0;
@ -125,14 +126,12 @@ describe("memory index", () => {
    }),
  ].join("\n");

  // Perf: keep managers open across tests, but only reset the one a test uses.
  const managersByCacheKey = new Map<string, MemoryIndexManager>();
  const managersForCleanup = new Set<MemoryIndexManager>();

  beforeAll(async () => {
    vi.resetModules();
    await import("./test-runtime-mocks.js");
    ({ getMemorySearchManager } = await import("./index.js"));
    ({ getMemorySearchManager, closeAllMemorySearchManagers } = await import("./index.js"));
    fixtureRoot = await fs.mkdtemp(path.join(os.tmpdir(), "openclaw-mem-fixtures-"));
    workspaceDir = path.join(fixtureRoot, "workspace");
    memoryDir = path.join(workspaceDir, "memory");
@ -158,6 +157,11 @@ describe("memory index", () => {
    await fs.rm(fixtureRoot, { recursive: true, force: true });
  });

  afterEach(async () => {
    await closeAllMemorySearchManagers();
    managersForCleanup.clear();
  });

  beforeEach(async () => {
    // Perf: most suites don't need atomic swap behavior for full reindexes.
    // Keep atomic reindex tests on the safe path.
@ -166,7 +170,6 @@ describe("memory index", () => {
    embedBatchInputCalls = 0;
    providerCalls = [];

    // Keep the workspace stable to allow manager reuse across tests.
    mkdirSync(memoryDir, { recursive: true });

    // Clean additional paths that may have been created by earlier cases.
@ -243,30 +246,9 @@ describe("memory index", () => {
    return result.manager as MemoryIndexManager;
  }

  function getManagerCacheKey(cfg: TestCfg): string {
    const memorySearch = cfg.agents?.defaults?.memorySearch;
    const storePath = memorySearch?.store?.path;
    if (!storePath) {
      throw new Error("store path missing");
    }
    return JSON.stringify({
      workspaceDir,
      storePath,
      memorySearch,
    });
  }

  async function getPersistentManager(cfg: TestCfg): Promise<MemoryIndexManager> {
    const cacheKey = getManagerCacheKey(cfg);
    const cached = managersByCacheKey.get(cacheKey);
    if (cached) {
      resetManagerForTest(cached);
      return cached;
    }

    const result = await getMemorySearchManager({ cfg, agentId: "main" });
    const manager = requireManager(result);
    managersByCacheKey.set(cacheKey, manager);
    managersForCleanup.add(manager);
    resetManagerForTest(manager);
    return manager;
@@ -104,16 +104,26 @@ describe("QmdMemoryManager", () => {
let stateDir: string;
let cfg: OpenClawConfig;
const agentId = "main";
const openManagers = new Set<QmdMemoryManager>();

function trackManager<T extends QmdMemoryManager | null>(manager: T): T {
if (manager) {
openManagers.add(manager);
}
return manager;
}

async function createManager(params?: { mode?: "full" | "status"; cfg?: OpenClawConfig }) {
const cfgToUse = params?.cfg ?? cfg;
const resolved = resolveMemoryBackendConfig({ cfg: cfgToUse, agentId });
const manager = await QmdMemoryManager.create({
cfg: cfgToUse,
agentId,
resolved,
mode: params?.mode ?? "status",
});
const manager = trackManager(
await QmdMemoryManager.create({
cfg: cfgToUse,
agentId,
resolved,
mode: params?.mode ?? "status",
}),
);
expect(manager).toBeTruthy();
if (!manager) {
throw new Error("manager missing");
@@ -161,7 +171,14 @@ describe("QmdMemoryManager", () => {
} as OpenClawConfig;
});

afterEach(() => {
afterEach(async () => {
await Promise.all(
Array.from(openManagers, async (manager) => {
await manager.close();
}),
);
openManagers.clear();
await fs.rm(tmpRoot, { recursive: true, force: true });
vi.useRealTimers();
delete process.env.OPENCLAW_STATE_DIR;
if (originalPath === undefined) {
@@ -365,12 +382,14 @@ describe("QmdMemoryManager", () => {
});

const resolved = resolveMemoryBackendConfig({ cfg, agentId: devAgentId });
const manager = await QmdMemoryManager.create({
cfg,
agentId: devAgentId,
resolved,
mode: "full",
});
const manager = trackManager(
await QmdMemoryManager.create({
cfg,
agentId: devAgentId,
resolved,
mode: "full",
}),
);
expect(manager).toBeTruthy();
await manager?.close();

@@ -755,7 +774,7 @@ describe("QmdMemoryManager", () => {
const resolved = resolveMemoryBackendConfig({ cfg, agentId });
const createPromise = QmdMemoryManager.create({ cfg, agentId, resolved, mode: "status" });
await vi.advanceTimersByTimeAsync(0);
const manager = await createPromise;
const manager = trackManager(await createPromise);
expect(manager).toBeTruthy();
if (!manager) {
throw new Error("manager missing");
@@ -1985,7 +2004,7 @@ describe("QmdMemoryManager", () => {
const resolved = resolveMemoryBackendConfig({ cfg, agentId });
const createPromise = QmdMemoryManager.create({ cfg, agentId, resolved, mode: "status" });
await vi.advanceTimersByTimeAsync(0);
const manager = await createPromise;
const manager = trackManager(await createPromise);
expect(manager).toBeTruthy();
if (!manager) {
throw new Error("manager missing");
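The `trackManager` change above routes every created manager through a shared Set so one async `afterEach` can close them all, which is the fix for managers leaking across tests. A self-contained sketch of that pattern (the `FakeManager` class is illustrative, not the real `QmdMemoryManager` API):

```typescript
// Minimal sketch of the tracking pattern: every created resource is
// registered in a Set, and a single cleanup helper closes them all.
class FakeManager {
  closed = false;
  async close(): Promise<void> {
    this.closed = true;
  }
}

const openManagers = new Set<FakeManager>();

function trackManager<T extends FakeManager | null>(manager: T): T {
  if (manager) {
    openManagers.add(manager);
  }
  return manager;
}

async function closeAllManagers(): Promise<void> {
  // Promise.all over the Set mirrors the afterEach in the diff above.
  await Promise.all(Array.from(openManagers, (m) => m.close()));
  openManagers.clear();
}
```

Keeping registration inside the factory (rather than at each call site) means a test that forgets to close its manager no longer leaks it into the next test.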
106
src/plugin-sdk/index.bundle.test.ts
Normal file
@@ -0,0 +1,106 @@
import { execFile } from "node:child_process";
import fs from "node:fs/promises";
import { createRequire } from "node:module";
import os from "node:os";
import path from "node:path";
import { pathToFileURL } from "node:url";
import { promisify } from "node:util";
import { describe, expect, it } from "vitest";
import {
buildPluginSdkEntrySources,
buildPluginSdkPackageExports,
buildPluginSdkSpecifiers,
pluginSdkEntrypoints,
} from "./entrypoints.js";

const pluginSdkSpecifiers = buildPluginSdkSpecifiers();
const execFileAsync = promisify(execFile);
const require = createRequire(import.meta.url);
const tsdownModuleUrl = pathToFileURL(require.resolve("tsdown")).href;

describe("plugin-sdk bundled exports", () => {
it("emits importable bundled subpath entries", { timeout: 240_000 }, async () => {
const outDir = await fs.mkdtemp(path.join(os.tmpdir(), "openclaw-plugin-sdk-build-"));
const fixtureDir = await fs.mkdtemp(path.join(os.tmpdir(), "openclaw-plugin-sdk-consumer-"));

try {
const buildScriptPath = path.join(fixtureDir, "build-plugin-sdk.mjs");
await fs.writeFile(
buildScriptPath,
`import { build } from ${JSON.stringify(tsdownModuleUrl)};
await build(${JSON.stringify({
clean: true,
config: false,
dts: false,
entry: buildPluginSdkEntrySources(),
env: { NODE_ENV: "production" },
fixedExtension: false,
logLevel: "error",
outDir,
platform: "node",
})});
`,
);
await execFileAsync(process.execPath, [buildScriptPath], {
cwd: process.cwd(),
});
await fs.symlink(
path.join(process.cwd(), "node_modules"),
path.join(outDir, "node_modules"),
"dir",
);

for (const entry of pluginSdkEntrypoints) {
const module = await import(pathToFileURL(path.join(outDir, `${entry}.js`)).href);
expect(module).toBeTypeOf("object");
}

const packageDir = path.join(fixtureDir, "openclaw");
const consumerDir = path.join(fixtureDir, "consumer");
const consumerEntry = path.join(consumerDir, "import-plugin-sdk.mjs");

await fs.mkdir(path.join(packageDir, "dist"), { recursive: true });
await fs.symlink(outDir, path.join(packageDir, "dist", "plugin-sdk"), "dir");
// Mirror the installed package layout so subpaths can resolve root deps.
await fs.symlink(
path.join(process.cwd(), "node_modules"),
path.join(packageDir, "node_modules"),
"dir",
);
await fs.writeFile(
path.join(packageDir, "package.json"),
JSON.stringify(
{
exports: buildPluginSdkPackageExports(),
name: "openclaw",
type: "module",
},
null,
2,
),
);

await fs.mkdir(path.join(consumerDir, "node_modules"), { recursive: true });
await fs.symlink(packageDir, path.join(consumerDir, "node_modules", "openclaw"), "dir");
await fs.writeFile(
consumerEntry,
[
`const specifiers = ${JSON.stringify(pluginSdkSpecifiers)};`,
"const results = {};",
"for (const specifier of specifiers) {",
"  results[specifier] = typeof (await import(specifier));",
"}",
"export default results;",
].join("\n"),
);

const { default: importResults } = await import(pathToFileURL(consumerEntry).href);
expect(importResults).toEqual(
Object.fromEntries(pluginSdkSpecifiers.map((specifier: string) => [specifier, "object"])),
);
} finally {
await fs.rm(outDir, { recursive: true, force: true });
await fs.rm(fixtureDir, { recursive: true, force: true });
}
});
});
@@ -1,24 +1,9 @@
import { execFile } from "node:child_process";
import fs from "node:fs/promises";
import { createRequire } from "node:module";
import os from "node:os";
import path from "node:path";
import { pathToFileURL } from "node:url";
import { promisify } from "node:util";
import { describe, expect, it } from "vitest";
import {
buildPluginSdkEntrySources,
buildPluginSdkPackageExports,
buildPluginSdkSpecifiers,
pluginSdkEntrypoints,
} from "./entrypoints.js";
import { buildPluginSdkPackageExports } from "./entrypoints.js";
import * as sdk from "./index.js";

const pluginSdkSpecifiers = buildPluginSdkSpecifiers();
const execFileAsync = promisify(execFile);
const require = createRequire(import.meta.url);
const tsdownModuleUrl = pathToFileURL(require.resolve("tsdown")).href;

describe("plugin-sdk exports", () => {
it("does not expose runtime modules", () => {
const forbidden = [
@@ -70,91 +55,6 @@ describe("plugin-sdk exports", () => {
expect(Object.prototype.hasOwnProperty.call(sdk, "isDangerousNameMatchingEnabled")).toBe(false);
});

it("emits importable bundled subpath entries", { timeout: 240_000 }, async () => {
const outDir = await fs.mkdtemp(path.join(os.tmpdir(), "openclaw-plugin-sdk-build-"));
const fixtureDir = await fs.mkdtemp(path.join(os.tmpdir(), "openclaw-plugin-sdk-consumer-"));

try {
const buildScriptPath = path.join(fixtureDir, "build-plugin-sdk.mjs");
await fs.writeFile(
buildScriptPath,
`import { build } from ${JSON.stringify(tsdownModuleUrl)};
await build(${JSON.stringify({
clean: true,
config: false,
dts: false,
entry: buildPluginSdkEntrySources(),
env: { NODE_ENV: "production" },
fixedExtension: false,
logLevel: "error",
outDir,
platform: "node",
})});
`,
);
await execFileAsync(process.execPath, [buildScriptPath], {
cwd: process.cwd(),
});
await fs.symlink(
path.join(process.cwd(), "node_modules"),
path.join(outDir, "node_modules"),
"dir",
);

for (const entry of pluginSdkEntrypoints) {
const module = await import(pathToFileURL(path.join(outDir, `${entry}.js`)).href);
expect(module).toBeTypeOf("object");
}

const packageDir = path.join(fixtureDir, "openclaw");
const consumerDir = path.join(fixtureDir, "consumer");
const consumerEntry = path.join(consumerDir, "import-plugin-sdk.mjs");

await fs.mkdir(path.join(packageDir, "dist"), { recursive: true });
await fs.symlink(outDir, path.join(packageDir, "dist", "plugin-sdk"), "dir");
// Mirror the installed package layout so subpaths can resolve root deps.
await fs.symlink(
path.join(process.cwd(), "node_modules"),
path.join(packageDir, "node_modules"),
"dir",
);
await fs.writeFile(
path.join(packageDir, "package.json"),
JSON.stringify(
{
exports: buildPluginSdkPackageExports(),
name: "openclaw",
type: "module",
},
null,
2,
),
);

await fs.mkdir(path.join(consumerDir, "node_modules"), { recursive: true });
await fs.symlink(packageDir, path.join(consumerDir, "node_modules", "openclaw"), "dir");
await fs.writeFile(
consumerEntry,
[
`const specifiers = ${JSON.stringify(pluginSdkSpecifiers)};`,
"const results = {};",
"for (const specifier of specifiers) {",
"  results[specifier] = typeof (await import(specifier));",
"}",
"export default results;",
].join("\n"),
);

const { default: importResults } = await import(pathToFileURL(consumerEntry).href);
expect(importResults).toEqual(
Object.fromEntries(pluginSdkSpecifiers.map((specifier: string) => [specifier, "object"])),
);
} finally {
await fs.rm(outDir, { recursive: true, force: true });
await fs.rm(fixtureDir, { recursive: true, force: true });
}
});

it("keeps package.json plugin-sdk exports synced with the manifest", async () => {
const packageJsonPath = path.join(process.cwd(), "package.json");
const packageJson = JSON.parse(await fs.readFile(packageJsonPath, "utf8")) as {
@@ -1,6 +1,7 @@
import fs from "node:fs/promises";
import os from "node:os";
import path from "node:path";
import { clearPluginDiscoveryCache } from "./discovery.js";
import { clearPluginManifestRegistryCache } from "./manifest-registry.js";

export function createBundleMcpTempHarness() {
@@ -13,6 +14,7 @@ export function createBundleMcpTempHarness() {
return dir;
},
async cleanup() {
clearPluginDiscoveryCache();
clearPluginManifestRegistryCache();
await Promise.all(
tempDirs

@@ -13,7 +13,6 @@ import {
} from "./bundle-manifest.js";
import { normalizePluginsConfig, resolveEffectiveEnableState } from "./config-state.js";
import { loadPluginManifestRegistry } from "./manifest-registry.js";
import { safeRealpathSync } from "./path-safety.js";
import type { PluginBundleFormat } from "./types.js";

export type BundleMcpServerConfig = Record<string, unknown>;
@@ -122,8 +121,8 @@ function expandBundleRootPlaceholders(value: string, rootDir: string): string {
return value.split(CLAUDE_PLUGIN_ROOT_PLACEHOLDER).join(rootDir);
}

function canonicalizeBundlePath(targetPath: string): string {
return path.normalize(safeRealpathSync(targetPath) ?? path.resolve(targetPath));
function normalizeBundlePath(targetPath: string): string {
return path.normalize(path.resolve(targetPath));
}

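The hunk above swaps realpath-based canonicalization for purely lexical normalization. The difference worth noting: `path.normalize(path.resolve(...))` collapses `.` and `..` segments without touching the filesystem, whereas a `realpathSync`-style call would also resolve symlinks (and can fail or block on missing paths). A minimal illustration:

```typescript
import path from "node:path";

// Lexical normalization only: "." and ".." segments collapse, but symlinks
// are NOT resolved, because no filesystem call is made.
function normalizeBundlePath(targetPath: string): string {
  return path.normalize(path.resolve(targetPath));
}

// On POSIX this yields "/plugins/bundle/mcp.json".
console.log(normalizeBundlePath("/plugins/bundle/./conf/../mcp.json"));
```

Avoiding the filesystem here keeps path handling deterministic in tests, at the cost of treating a symlinked directory as distinct from its target.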
function normalizeExpandedAbsolutePath(value: string): string {
@@ -194,7 +193,7 @@ function loadBundleFileBackedMcpConfig(params: {
rootDir: string;
relativePath: string;
}): BundleMcpConfig {
const rootDir = canonicalizeBundlePath(params.rootDir);
const rootDir = normalizeBundlePath(params.rootDir);
const absolutePath = path.resolve(rootDir, params.relativePath);
const opened = openBoundaryFileSync({
absolutePath,
@@ -212,7 +211,7 @@ function loadBundleFileBackedMcpConfig(params: {
}
const raw = JSON.parse(fs.readFileSync(opened.fd, "utf-8")) as unknown;
const servers = extractMcpServerMap(raw);
const baseDir = canonicalizeBundlePath(path.dirname(absolutePath));
const baseDir = normalizeBundlePath(path.dirname(absolutePath));
return {
mcpServers: Object.fromEntries(
Object.entries(servers).map(([serverName, server]) => [
@@ -233,7 +232,7 @@ function loadBundleInlineMcpConfig(params: {
if (!isRecord(params.raw.mcpServers)) {
return { mcpServers: {} };
}
const baseDir = canonicalizeBundlePath(params.baseDir);
const baseDir = normalizeBundlePath(params.baseDir);
const servers = extractMcpServerMap(params.raw.mcpServers);
return {
mcpServers: Object.fromEntries(

@@ -4,6 +4,7 @@ import { createEmptyPluginRegistry } from "./registry.js";
import {
pinActivePluginHttpRouteRegistry,
releasePinnedPluginHttpRouteRegistry,
resetPluginRuntimeStateForTest,
setActivePluginRegistry,
} from "./runtime.js";

@@ -45,7 +46,7 @@ function expectRouteRegistrationDenied(params: {
describe("registerPluginHttpRoute", () => {
afterEach(() => {
releasePinnedPluginHttpRouteRegistry();
setActivePluginRegistry(createEmptyPluginRegistry());
resetPluginRuntimeStateForTest();
});

it("registers route and unregisters it", () => {

@@ -304,6 +304,7 @@ export function loadPluginManifestRegistry(
: discoverOpenClawPlugins({
workspaceDir: params.workspaceDir,
extraPaths: normalized.loadPaths,
cache: params.cache,
env,
});
const diagnostics: PluginDiagnostic[] = [...discovery.diagnostics];

@@ -3,6 +3,7 @@ import { createEmptyPluginRegistry } from "./registry.js";
import {
pinActivePluginHttpRouteRegistry,
releasePinnedPluginHttpRouteRegistry,
resetPluginRuntimeStateForTest,
resolveActivePluginHttpRouteRegistry,
setActivePluginRegistry,
} from "./runtime.js";
@@ -10,7 +11,7 @@ import {
describe("plugin runtime route registry", () => {
afterEach(() => {
releasePinnedPluginHttpRouteRegistry();
setActivePluginRegistry(createEmptyPluginRegistry());
resetPluginRuntimeStateForTest();
});

it("keeps the pinned route registry when the active plugin registry changes", () => {

@@ -98,3 +98,12 @@ export function getActivePluginRegistryKey(): string | null {
export function getActivePluginRegistryVersion(): number {
return state.version;
}

export function resetPluginRuntimeStateForTest(): void {
const emptyRegistry = createEmptyPluginRegistry();
state.registry = emptyRegistry;
state.httpRouteRegistry = emptyRegistry;
state.httpRouteRegistryPinned = false;
state.key = null;
state.version += 1;
}

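The new `resetPluginRuntimeStateForTest` above centralizes test teardown: instead of each test clearing individual pieces of module state, one helper swaps everything back to fresh empty values and bumps a version counter so cached readers notice. A generic sketch of that contract, with hypothetical state fields standing in for the real plugin runtime:

```typescript
// Hypothetical sketch of a module-state reset helper for tests: replace all
// mutable module-level state with fresh values and bump a version counter.
type Registry = { routes: string[] };

const state = {
  registry: { routes: [] } as Registry,
  pinned: false,
  key: null as string | null,
  version: 0,
};

function resetStateForTest(): void {
  const empty: Registry = { routes: [] };
  state.registry = empty;
  state.pinned = false;
  state.key = null;
  // Bumping the version invalidates anything that cached the old registry.
  state.version += 1;
}
```

Keeping the version monotonically increasing (rather than resetting it to zero) means consumers comparing versions never mistake post-reset state for pre-reset state.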
414
src/secrets/runtime.integration.test.ts
Normal file
@@ -0,0 +1,414 @@
import fs from "node:fs/promises";
import os from "node:os";
import path from "node:path";
import { afterEach, describe, expect, it } from "vitest";
import { ensureAuthProfileStore, type AuthProfileStore } from "../agents/auth-profiles.js";
import {
clearConfigCache,
loadConfig,
type OpenClawConfig,
writeConfigFile,
} from "../config/config.js";
import { withTempHome } from "../config/home-env.test-harness.js";
import {
activateSecretsRuntimeSnapshot,
clearSecretsRuntimeSnapshot,
getActiveRuntimeWebToolsMetadata,
getActiveSecretsRuntimeSnapshot,
prepareSecretsRuntimeSnapshot,
} from "./runtime.js";

const OPENAI_ENV_KEY_REF = { source: "env", provider: "default", id: "OPENAI_API_KEY" } as const;
const allowInsecureTempSecretFile = process.platform === "win32";

function asConfig(value: unknown): OpenClawConfig {
return value as OpenClawConfig;
}

function loadAuthStoreWithProfiles(profiles: AuthProfileStore["profiles"]): AuthProfileStore {
return {
version: 1,
profiles,
};
}

describe("secrets runtime snapshot integration", () => {
afterEach(() => {
clearSecretsRuntimeSnapshot();
clearConfigCache();
});

it("activates runtime snapshots for loadConfig and ensureAuthProfileStore", async () => {
const prepared = await prepareSecretsRuntimeSnapshot({
config: asConfig({
models: {
providers: {
openai: {
baseUrl: "https://api.openai.com/v1",
apiKey: { source: "env", provider: "default", id: "OPENAI_API_KEY" },
models: [],
},
},
},
}),
env: { OPENAI_API_KEY: "sk-runtime" },
agentDirs: ["/tmp/openclaw-agent-main"],
loadAuthStore: () =>
loadAuthStoreWithProfiles({
"openai:default": {
type: "api_key",
provider: "openai",
keyRef: OPENAI_ENV_KEY_REF,
},
}),
});

activateSecretsRuntimeSnapshot(prepared);

expect(loadConfig().models?.providers?.openai?.apiKey).toBe("sk-runtime");
expect(
ensureAuthProfileStore("/tmp/openclaw-agent-main").profiles["openai:default"],
).toMatchObject({
type: "api_key",
key: "sk-runtime",
});
});

it("keeps active secrets runtime snapshots resolved after config writes", async () => {
if (os.platform() === "win32") {
return;
}
await withTempHome("openclaw-secrets-runtime-write-", async (home) => {
const configDir = path.join(home, ".openclaw");
const secretFile = path.join(configDir, "secrets.json");
const agentDir = path.join(configDir, "agents", "main", "agent");
const authStorePath = path.join(agentDir, "auth-profiles.json");
await fs.mkdir(agentDir, { recursive: true });
await fs.chmod(configDir, 0o700).catch(() => {});
await fs.writeFile(
secretFile,
`${JSON.stringify({ providers: { openai: { apiKey: "sk-file-runtime" } } }, null, 2)}\n`,
{ encoding: "utf8", mode: 0o600 },
);
await fs.writeFile(
authStorePath,
`${JSON.stringify(
{
version: 1,
profiles: {
"openai:default": {
type: "api_key",
provider: "openai",
keyRef: { source: "file", provider: "default", id: "/providers/openai/apiKey" },
},
},
},
null,
2,
)}\n`,
{ encoding: "utf8", mode: 0o600 },
);

const prepared = await prepareSecretsRuntimeSnapshot({
config: asConfig({
secrets: {
providers: {
default: {
source: "file",
path: secretFile,
mode: "json",
...(allowInsecureTempSecretFile ? { allowInsecurePath: true } : {}),
},
},
},
models: {
providers: {
openai: {
baseUrl: "https://api.openai.com/v1",
apiKey: { source: "file", provider: "default", id: "/providers/openai/apiKey" },
models: [],
},
},
},
}),
agentDirs: [agentDir],
});

activateSecretsRuntimeSnapshot(prepared);

expect(loadConfig().models?.providers?.openai?.apiKey).toBe("sk-file-runtime");
expect(ensureAuthProfileStore(agentDir).profiles["openai:default"]).toMatchObject({
type: "api_key",
key: "sk-file-runtime",
});

await writeConfigFile({
...loadConfig(),
gateway: { auth: { mode: "token" } },
});

expect(loadConfig().gateway?.auth).toEqual({ mode: "token" });
expect(loadConfig().models?.providers?.openai?.apiKey).toBe("sk-file-runtime");
expect(ensureAuthProfileStore(agentDir).profiles["openai:default"]).toMatchObject({
type: "api_key",
key: "sk-file-runtime",
});
});
});

it("keeps last-known-good runtime snapshot active when refresh fails after a write", async () => {
if (os.platform() === "win32") {
return;
}
await withTempHome("openclaw-secrets-runtime-refresh-fail-", async (home) => {
const configDir = path.join(home, ".openclaw");
const secretFile = path.join(configDir, "secrets.json");
const agentDir = path.join(configDir, "agents", "main", "agent");
const authStorePath = path.join(agentDir, "auth-profiles.json");
await fs.mkdir(agentDir, { recursive: true });
await fs.chmod(configDir, 0o700).catch(() => {});
await fs.writeFile(
secretFile,
`${JSON.stringify({ providers: { openai: { apiKey: "sk-file-runtime" } } }, null, 2)}\n`,
{ encoding: "utf8", mode: 0o600 },
);
await fs.writeFile(
authStorePath,
`${JSON.stringify(
{
version: 1,
profiles: {
"openai:default": {
type: "api_key",
provider: "openai",
keyRef: { source: "file", provider: "default", id: "/providers/openai/apiKey" },
},
},
},
null,
2,
)}\n`,
{ encoding: "utf8", mode: 0o600 },
);

let loadAuthStoreCalls = 0;
const loadAuthStore = () => {
loadAuthStoreCalls += 1;
if (loadAuthStoreCalls > 1) {
throw new Error("simulated secrets runtime refresh failure");
}
return loadAuthStoreWithProfiles({
"openai:default": {
type: "api_key",
provider: "openai",
keyRef: { source: "file", provider: "default", id: "/providers/openai/apiKey" },
},
});
};

const prepared = await prepareSecretsRuntimeSnapshot({
config: asConfig({
secrets: {
providers: {
default: {
source: "file",
path: secretFile,
mode: "json",
...(allowInsecureTempSecretFile ? { allowInsecurePath: true } : {}),
},
},
},
models: {
providers: {
openai: {
baseUrl: "https://api.openai.com/v1",
apiKey: { source: "file", provider: "default", id: "/providers/openai/apiKey" },
models: [],
},
},
},
}),
agentDirs: [agentDir],
loadAuthStore,
});

activateSecretsRuntimeSnapshot(prepared);

await expect(
writeConfigFile({
...loadConfig(),
gateway: { auth: { mode: "token" } },
}),
).rejects.toThrow(
/runtime snapshot refresh failed: simulated secrets runtime refresh failure/i,
);

const activeAfterFailure = getActiveSecretsRuntimeSnapshot();
expect(activeAfterFailure).not.toBeNull();
expect(loadConfig().gateway?.auth).toBeUndefined();
expect(loadConfig().models?.providers?.openai?.apiKey).toBe("sk-file-runtime");
expect(activeAfterFailure?.sourceConfig.models?.providers?.openai?.apiKey).toEqual({
source: "file",
provider: "default",
id: "/providers/openai/apiKey",
});
expect(ensureAuthProfileStore(agentDir).profiles["openai:default"]).toMatchObject({
type: "api_key",
key: "sk-file-runtime",
});
});
});

it("keeps last-known-good web runtime snapshot when reload introduces unresolved active web refs", async () => {
await withTempHome("openclaw-secrets-runtime-web-reload-lkg-", async (home) => {
const prepared = await prepareSecretsRuntimeSnapshot({
config: asConfig({
tools: {
web: {
search: {
provider: "gemini",
gemini: {
apiKey: { source: "env", provider: "default", id: "WEB_SEARCH_GEMINI_API_KEY" },
},
},
},
},
}),
env: {
WEB_SEARCH_GEMINI_API_KEY: "web-search-gemini-runtime-key",
},
agentDirs: ["/tmp/openclaw-agent-main"],
loadAuthStore: () => ({ version: 1, profiles: {} }),
});

activateSecretsRuntimeSnapshot(prepared);

await expect(
writeConfigFile({
...loadConfig(),
plugins: {
entries: {
google: {
config: {
webSearch: {
apiKey: {
source: "env",
provider: "default",
id: "MISSING_WEB_SEARCH_GEMINI_API_KEY",
},
},
},
},
},
},
tools: {
web: {
search: {
provider: "gemini",
gemini: {
apiKey: {
source: "env",
provider: "default",
id: "MISSING_WEB_SEARCH_GEMINI_API_KEY",
},
},
},
},
},
}),
).rejects.toThrow(
/runtime snapshot refresh failed: .*WEB_SEARCH_KEY_UNRESOLVED_NO_FALLBACK/i,
);

const activeAfterFailure = getActiveSecretsRuntimeSnapshot();
expect(activeAfterFailure).not.toBeNull();
expect(loadConfig().tools?.web?.search?.gemini?.apiKey).toBe("web-search-gemini-runtime-key");
expect(activeAfterFailure?.sourceConfig.tools?.web?.search?.gemini?.apiKey).toEqual({
source: "env",
provider: "default",
id: "WEB_SEARCH_GEMINI_API_KEY",
});
expect(getActiveRuntimeWebToolsMetadata()?.search.selectedProvider).toBe("gemini");

const persistedConfig = JSON.parse(
await fs.readFile(path.join(home, ".openclaw", "openclaw.json"), "utf8"),
) as OpenClawConfig;
const persistedGoogleWebSearchConfig = persistedConfig.plugins?.entries?.google?.config as
| { webSearch?: { apiKey?: unknown } }
| undefined;
expect(persistedGoogleWebSearchConfig?.webSearch?.apiKey).toEqual({
source: "env",
provider: "default",
id: "MISSING_WEB_SEARCH_GEMINI_API_KEY",
});
});
}, 180_000);

it("recomputes config-derived agent dirs when refreshing active secrets runtime snapshots", async () => {
await withTempHome("openclaw-secrets-runtime-agent-dirs-", async (home) => {
const mainAgentDir = path.join(home, ".openclaw", "agents", "main", "agent");
const opsAgentDir = path.join(home, ".openclaw", "agents", "ops", "agent");
await fs.mkdir(mainAgentDir, { recursive: true });
await fs.mkdir(opsAgentDir, { recursive: true });
await fs.writeFile(
path.join(mainAgentDir, "auth-profiles.json"),
`${JSON.stringify(
{
version: 1,
profiles: {
"openai:default": {
type: "api_key",
provider: "openai",
keyRef: { source: "env", provider: "default", id: "OPENAI_API_KEY" },
},
},
},
null,
2,
)}\n`,
{ encoding: "utf8", mode: 0o600 },
);
await fs.writeFile(
path.join(opsAgentDir, "auth-profiles.json"),
`${JSON.stringify(
{
version: 1,
profiles: {
"anthropic:ops": {
type: "api_key",
provider: "anthropic",
keyRef: { source: "env", provider: "default", id: "ANTHROPIC_API_KEY" },
},
},
},
null,
2,
)}\n`,
{ encoding: "utf8", mode: 0o600 },
);

const prepared = await prepareSecretsRuntimeSnapshot({
config: asConfig({}),
env: {
OPENAI_API_KEY: "sk-main-runtime",
ANTHROPIC_API_KEY: "sk-ops-runtime",
},
});

activateSecretsRuntimeSnapshot(prepared);
expect(ensureAuthProfileStore(opsAgentDir).profiles["anthropic:ops"]).toBeUndefined();

await writeConfigFile({
agents: {
list: [{ id: "ops", agentDir: opsAgentDir }],
},
});

expect(ensureAuthProfileStore(opsAgentDir).profiles["anthropic:ops"]).toMatchObject({
type: "api_key",
key: "sk-ops-runtime",
keyRef: { source: "env", provider: "default", id: "ANTHROPIC_API_KEY" },
});
});
});
});

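The "last-known-good" tests above pin down one invariant: when a config write triggers a snapshot refresh and that refresh throws, the previously activated snapshot must stay active rather than being cleared. A self-contained sketch of that invariant (names here are illustrative, not the real `runtime.ts` API):

```typescript
// Illustrative sketch of last-known-good snapshot semantics: a failed
// refresh rethrows with context but leaves the active snapshot untouched.
type Snapshot = { apiKey: string };

let active: Snapshot | null = null;

function activate(snapshot: Snapshot): void {
  active = snapshot;
}

function refresh(load: () => Snapshot): void {
  let next: Snapshot;
  try {
    next = load();
  } catch (error) {
    // Do NOT clear `active`; the caller sees the failure, readers keep
    // resolving against the last-known-good snapshot.
    throw new Error(`runtime snapshot refresh failed: ${(error as Error).message}`);
  }
  active = next;
}
```

Only assigning `active` after `load()` succeeds is what makes the failure path safe: there is no window where readers observe a half-refreshed or cleared snapshot.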
@ -2,15 +2,13 @@ import fs from "node:fs/promises";
|
||||
import os from "node:os";
|
||||
import path from "node:path";
|
||||
import { afterEach, beforeEach, describe, expect, it, vi } from "vitest";
|
||||
import { ensureAuthProfileStore, type AuthProfileStore } from "../agents/auth-profiles.js";
|
||||
import { loadConfig, type OpenClawConfig, writeConfigFile } from "../config/config.js";
|
||||
import { withTempHome } from "../config/home-env.test-harness.js";
|
||||
import type { AuthProfileStore } from "../agents/auth-profiles.js";
|
||||
import { clearConfigCache, type OpenClawConfig } from "../config/config.js";
|
||||
import type { PluginWebSearchProviderEntry } from "../plugins/types.js";
|
||||
import {
|
||||
activateSecretsRuntimeSnapshot,
|
||||
clearSecretsRuntimeSnapshot,
|
||||
getActiveRuntimeWebToolsMetadata,
|
||||
getActiveSecretsRuntimeSnapshot,
|
||||
prepareSecretsRuntimeSnapshot,
|
||||
} from "./runtime.js";
|
||||
|
||||
@ -121,10 +119,10 @@ describe("secrets runtime snapshot", () => {
|
||||
|
||||
afterEach(() => {
|
||||
clearSecretsRuntimeSnapshot();
|
||||
clearConfigCache();
|
||||
resolvePluginWebSearchProvidersMock.mockReset();
|
||||
});
|
||||
|
||||
const allowInsecureTempSecretFile = process.platform === "win32";
|
||||
|
||||
it("resolves env refs for config and auth profiles", async () => {
|
||||
const config = asConfig({
|
||||
agents: {
|
||||
@ -724,385 +722,6 @@ describe("secrets runtime snapshot", () => {
|
||||
}
|
||||
});
|
||||
|
||||
  it("activates runtime snapshots for loadConfig and ensureAuthProfileStore", async () => {
    const prepared = await prepareSecretsRuntimeSnapshot({
      config: asConfig({
        models: {
          providers: {
            openai: {
              baseUrl: "https://api.openai.com/v1",
              apiKey: { source: "env", provider: "default", id: "OPENAI_API_KEY" },
              models: [],
            },
          },
        },
      }),
      env: { OPENAI_API_KEY: "sk-runtime" }, // pragma: allowlist secret
      agentDirs: ["/tmp/openclaw-agent-main"],
      loadAuthStore: () =>
        loadAuthStoreWithProfiles({
          "openai:default": {
            type: "api_key",
            provider: "openai",
            keyRef: OPENAI_ENV_KEY_REF,
          },
        }),
    });

    activateSecretsRuntimeSnapshot(prepared);

    expect(loadConfig().models?.providers?.openai?.apiKey).toBe("sk-runtime");
    const store = ensureAuthProfileStore("/tmp/openclaw-agent-main");
    expect(store.profiles["openai:default"]).toMatchObject({
      type: "api_key",
      key: "sk-runtime",
    });
  });
  it("keeps active secrets runtime snapshots resolved after config writes", async () => {
    if (os.platform() === "win32") {
      return;
    }
    await withTempHome("openclaw-secrets-runtime-write-", async (home) => {
      const configDir = path.join(home, ".openclaw");
      const secretFile = path.join(configDir, "secrets.json");
      const agentDir = path.join(configDir, "agents", "main", "agent");
      const authStorePath = path.join(agentDir, "auth-profiles.json");
      await fs.mkdir(agentDir, { recursive: true });
      await fs.chmod(configDir, 0o700).catch(() => {
        // best-effort on tmp dirs that already have secure perms
      });
      await fs.writeFile(
        secretFile,
        `${JSON.stringify({ providers: { openai: { apiKey: "sk-file-runtime" } } }, null, 2)}\n`, // pragma: allowlist secret
        { encoding: "utf8", mode: 0o600 },
      );
      await fs.writeFile(
        authStorePath,
        `${JSON.stringify(
          {
            version: 1,
            profiles: {
              "openai:default": {
                type: "api_key",
                provider: "openai",
                keyRef: { source: "file", provider: "default", id: "/providers/openai/apiKey" },
              },
            },
          },
          null,
          2,
        )}\n`,
        { encoding: "utf8", mode: 0o600 },
      );

      const prepared = await prepareSecretsRuntimeSnapshot({
        config: asConfig({
          secrets: {
            providers: {
              default: {
                source: "file",
                path: secretFile,
                mode: "json",
                ...(allowInsecureTempSecretFile ? { allowInsecurePath: true } : {}),
              },
            },
          },
          models: {
            providers: {
              openai: {
                baseUrl: "https://api.openai.com/v1",
                apiKey: { source: "file", provider: "default", id: "/providers/openai/apiKey" },
                models: [],
              },
            },
          },
        }),
        agentDirs: [agentDir],
      });

      activateSecretsRuntimeSnapshot(prepared);

      expect(loadConfig().models?.providers?.openai?.apiKey).toBe("sk-file-runtime");
      expect(ensureAuthProfileStore(agentDir).profiles["openai:default"]).toMatchObject({
        type: "api_key",
        key: "sk-file-runtime",
      });

      await writeConfigFile({
        ...loadConfig(),
        gateway: { auth: { mode: "token" } },
      });

      expect(loadConfig().gateway?.auth).toEqual({ mode: "token" });
      expect(loadConfig().models?.providers?.openai?.apiKey).toBe("sk-file-runtime");
      expect(ensureAuthProfileStore(agentDir).profiles["openai:default"]).toMatchObject({
        type: "api_key",
        key: "sk-file-runtime",
      });
    });
  });
  it("keeps last-known-good runtime snapshot active when refresh fails after a write", async () => {
    if (os.platform() === "win32") {
      return;
    }
    await withTempHome("openclaw-secrets-runtime-refresh-fail-", async (home) => {
      const configDir = path.join(home, ".openclaw");
      const secretFile = path.join(configDir, "secrets.json");
      const agentDir = path.join(configDir, "agents", "main", "agent");
      const authStorePath = path.join(agentDir, "auth-profiles.json");
      await fs.mkdir(agentDir, { recursive: true });
      await fs.chmod(configDir, 0o700).catch(() => {
        // best-effort on tmp dirs that already have secure perms
      });
      await fs.writeFile(
        secretFile,
        `${JSON.stringify({ providers: { openai: { apiKey: "sk-file-runtime" } } }, null, 2)}\n`, // pragma: allowlist secret
        { encoding: "utf8", mode: 0o600 },
      );
      await fs.writeFile(
        authStorePath,
        `${JSON.stringify(
          {
            version: 1,
            profiles: {
              "openai:default": {
                type: "api_key",
                provider: "openai",
                keyRef: { source: "file", provider: "default", id: "/providers/openai/apiKey" },
              },
            },
          },
          null,
          2,
        )}\n`,
        { encoding: "utf8", mode: 0o600 },
      );

      let loadAuthStoreCalls = 0;
      const loadAuthStore = () => {
        loadAuthStoreCalls += 1;
        if (loadAuthStoreCalls > 1) {
          throw new Error("simulated secrets runtime refresh failure");
        }
        return loadAuthStoreWithProfiles({
          "openai:default": {
            type: "api_key",
            provider: "openai",
            keyRef: { source: "file", provider: "default", id: "/providers/openai/apiKey" },
          },
        });
      };

      const prepared = await prepareSecretsRuntimeSnapshot({
        config: asConfig({
          secrets: {
            providers: {
              default: {
                source: "file",
                path: secretFile,
                mode: "json",
                ...(allowInsecureTempSecretFile ? { allowInsecurePath: true } : {}),
              },
            },
          },
          models: {
            providers: {
              openai: {
                baseUrl: "https://api.openai.com/v1",
                apiKey: { source: "file", provider: "default", id: "/providers/openai/apiKey" },
                models: [],
              },
            },
          },
        }),
        agentDirs: [agentDir],
        loadAuthStore,
      });

      activateSecretsRuntimeSnapshot(prepared);

      await expect(
        writeConfigFile({
          ...loadConfig(),
          gateway: { auth: { mode: "token" } },
        }),
      ).rejects.toThrow(
        /runtime snapshot refresh failed: simulated secrets runtime refresh failure/i,
      );

      const activeAfterFailure = getActiveSecretsRuntimeSnapshot();
      expect(activeAfterFailure).not.toBeNull();
      expect(loadConfig().gateway?.auth).toBeUndefined();
      expect(loadConfig().models?.providers?.openai?.apiKey).toBe("sk-file-runtime");
      expect(activeAfterFailure?.sourceConfig.models?.providers?.openai?.apiKey).toEqual({
        source: "file",
        provider: "default",
        id: "/providers/openai/apiKey",
      });

      const persistedStore = ensureAuthProfileStore(agentDir).profiles["openai:default"];
      expect(persistedStore).toMatchObject({
        type: "api_key",
        key: "sk-file-runtime",
      });
    });
  });
  it("keeps last-known-good web runtime snapshot when reload introduces unresolved active web refs", async () => {
    await withTempHome("openclaw-secrets-runtime-web-reload-lkg-", async (home) => {
      const prepared = await prepareSecretsRuntimeSnapshot({
        config: asConfig({
          tools: {
            web: {
              search: {
                provider: "gemini",
                gemini: {
                  apiKey: { source: "env", provider: "default", id: "WEB_SEARCH_GEMINI_API_KEY" },
                },
              },
            },
          },
        }),
        env: {
          WEB_SEARCH_GEMINI_API_KEY: "web-search-gemini-runtime-key", // pragma: allowlist secret
        },
        agentDirs: ["/tmp/openclaw-agent-main"],
        loadAuthStore: () => ({ version: 1, profiles: {} }),
      });

      activateSecretsRuntimeSnapshot(prepared);

      await expect(
        writeConfigFile({
          ...loadConfig(),
          plugins: {
            entries: {
              google: {
                config: {
                  webSearch: {
                    apiKey: {
                      source: "env",
                      provider: "default",
                      id: "MISSING_WEB_SEARCH_GEMINI_API_KEY",
                    },
                  },
                },
              },
            },
          },
          tools: {
            web: {
              search: {
                provider: "gemini",
                gemini: {
                  apiKey: {
                    source: "env",
                    provider: "default",
                    id: "MISSING_WEB_SEARCH_GEMINI_API_KEY",
                  },
                },
              },
            },
          },
        }),
      ).rejects.toThrow(
        /runtime snapshot refresh failed: .*WEB_SEARCH_KEY_UNRESOLVED_NO_FALLBACK/i,
      );

      const activeAfterFailure = getActiveSecretsRuntimeSnapshot();
      expect(activeAfterFailure).not.toBeNull();
      expect(loadConfig().tools?.web?.search?.gemini?.apiKey).toBe("web-search-gemini-runtime-key");
      expect(activeAfterFailure?.sourceConfig.tools?.web?.search?.gemini?.apiKey).toEqual({
        source: "env",
        provider: "default",
        id: "WEB_SEARCH_GEMINI_API_KEY",
      });
      expect(getActiveRuntimeWebToolsMetadata()?.search.selectedProvider).toBe("gemini");

      const persistedConfig = JSON.parse(
        await fs.readFile(path.join(home, ".openclaw", "openclaw.json"), "utf8"),
      ) as OpenClawConfig;
      const persistedGoogleWebSearchConfig = persistedConfig.plugins?.entries?.google?.config as
        | { webSearch?: { apiKey?: unknown } }
        | undefined;
      expect(persistedGoogleWebSearchConfig?.webSearch?.apiKey).toEqual({
        source: "env",
        provider: "default",
        id: "MISSING_WEB_SEARCH_GEMINI_API_KEY",
      });
    });
  });
  it("recomputes config-derived agent dirs when refreshing active secrets runtime snapshots", async () => {
    await withTempHome("openclaw-secrets-runtime-agent-dirs-", async (home) => {
      const mainAgentDir = path.join(home, ".openclaw", "agents", "main", "agent");
      const opsAgentDir = path.join(home, ".openclaw", "agents", "ops", "agent");
      await fs.mkdir(mainAgentDir, { recursive: true });
      await fs.mkdir(opsAgentDir, { recursive: true });
      await fs.writeFile(
        path.join(mainAgentDir, "auth-profiles.json"),
        `${JSON.stringify(
          {
            version: 1,
            profiles: {
              "openai:default": {
                type: "api_key",
                provider: "openai",
                keyRef: { source: "env", provider: "default", id: "OPENAI_API_KEY" },
              },
            },
          },
          null,
          2,
        )}\n`,
        { encoding: "utf8", mode: 0o600 },
      );
      await fs.writeFile(
        path.join(opsAgentDir, "auth-profiles.json"),
        `${JSON.stringify(
          {
            version: 1,
            profiles: {
              "anthropic:ops": {
                type: "api_key",
                provider: "anthropic",
                keyRef: { source: "env", provider: "default", id: "ANTHROPIC_API_KEY" },
              },
            },
          },
          null,
          2,
        )}\n`,
        { encoding: "utf8", mode: 0o600 },
      );

      const prepared = await prepareSecretsRuntimeSnapshot({
        config: asConfig({}),
        env: {
          OPENAI_API_KEY: "sk-main-runtime", // pragma: allowlist secret
          ANTHROPIC_API_KEY: "sk-ops-runtime", // pragma: allowlist secret
        },
      });

      activateSecretsRuntimeSnapshot(prepared);
      expect(ensureAuthProfileStore(opsAgentDir).profiles["anthropic:ops"]).toBeUndefined();

      await writeConfigFile({
        agents: {
          list: [{ id: "ops", agentDir: opsAgentDir }],
        },
      });

      expect(ensureAuthProfileStore(opsAgentDir).profiles["anthropic:ops"]).toMatchObject({
        type: "api_key",
        key: "sk-ops-runtime",
        keyRef: { source: "env", provider: "default", id: "ANTHROPIC_API_KEY" },
      });
    });
  });
  it("skips inactive-surface refs and emits diagnostics", async () => {
    const config = asConfig({
      agents: {
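The prepare/activate/read/clear lifecycle the tests above exercise can be sketched minimally. This is a simplified stand-in, not the real `src/secrets/runtime.ts`: the function names mirror the helpers imported at the top of the test file, but the bodies here are assumptions for illustration only.

```typescript
// Stand-in for the secrets runtime snapshot lifecycle (simplified assumption,
// not the actual OpenClaw implementation).
type Snapshot = { resolvedKeys: Record<string, string> };

// Module-level active snapshot; the afterEach() cleanup in the tests exists
// precisely so a reference like this does not retain resolved secrets between files.
let active: Snapshot | null = null;

function prepareSnapshot(
  env: Record<string, string>,
  refs: Record<string, string>,
): Snapshot {
  // Resolve every env ref eagerly so activation is a cheap pointer swap
  // and a failing ref surfaces before anything goes live.
  const resolvedKeys: Record<string, string> = {};
  for (const [name, envVar] of Object.entries(refs)) {
    const value = env[envVar];
    if (value === undefined) {
      throw new Error(`unresolved ref: ${envVar}`);
    }
    resolvedKeys[name] = value;
  }
  return { resolvedKeys };
}

function activateSnapshot(snapshot: Snapshot): void {
  active = snapshot;
}

function getActiveSnapshot(): Snapshot | null {
  return active;
}

function clearSnapshot(): void {
  // Mirrors clearSecretsRuntimeSnapshot() in the afterEach() hook above.
  active = null;
}
```

Keeping resolution in `prepareSnapshot` and making activation a swap is also what lets a failed refresh leave the last-known-good snapshot in place, as the "refresh fails after a write" test asserts.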
test/fixtures/test-parallel.behavior.json (vendored, 20 lines changed)
@@ -13,6 +13,10 @@
      "file": "src/infra/git-commit.test.ts",
      "reason": "Mutates process.cwd() and core loader seams."
    },
    {
      "file": "src/config/doc-baseline.test.ts",
      "reason": "Rebuilds bundled config baselines through many channel schema subprocesses; keep out of the shared lane."
    },
    {
      "file": "extensions/imessage/src/monitor.shutdown.unhandled-rejection.test.ts",
      "reason": "Touches process-level unhandledRejection listeners."
@@ -31,6 +35,10 @@
      "file": "src/secrets/runtime.test.ts",
      "reason": "Secrets runtime coverage retained the largest unit-fast heap spike in CI and is safer outside the shared lane."
    },
    {
      "file": "src/secrets/runtime.integration.test.ts",
      "reason": "Secrets runtime activation/write-through integration coverage is CPU-heavy and safer outside the shared unit-fast lane."
    },
    {
      "file": "src/memory/index.test.ts",
      "reason": "Memory index coverage retained nearly 1 GiB in unit-fast on Linux CI and is safer in its own fork."
@@ -47,6 +55,14 @@
      "file": "src/config/redact-snapshot.test.ts",
      "reason": "Snapshot redaction coverage produced a large retained heap jump in unit-fast on Linux CI."
    },
    {
      "file": "src/config/redact-snapshot.restore.test.ts",
      "reason": "Snapshot restore coverage retains a broad schema/redaction graph and is safer outside the shared lane."
    },
    {
      "file": "src/config/redact-snapshot.schema.test.ts",
      "reason": "Schema-backed redaction round-trip coverage loads the full config schema graph and is safer outside the shared lane."
    },
    {
      "file": "src/infra/outbound/message-action-runner.media.test.ts",
      "reason": "Outbound media action coverage retained a large media/plugin graph in unit-fast."
@@ -59,6 +75,10 @@
      "file": "src/plugin-sdk/index.test.ts",
      "reason": "Plugin SDK index coverage retained a broad export graph in unit-fast and is safer outside the shared lane."
    },
    {
      "file": "src/plugin-sdk/index.bundle.test.ts",
      "reason": "Plugin SDK bundle validation builds and imports the full bundled export graph and is safer outside the shared lane."
    },
    {
      "file": "src/config/sessions.cache.test.ts",
      "reason": "Session cache coverage retained a large config/session graph in unit-fast on Linux CI."
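A minimal sketch of how an isolate-list fixture like the entries above could be consumed by the test scheduler. The entry shape (`file` plus `reason`) matches the fixture in this diff; the `partitionTests` helper itself is a hypothetical illustration, not the actual `scripts/test-parallel.mjs` logic.

```typescript
// Hypothetical consumer of the behavior fixture: split test files into the
// shared lane and per-file forks, based on the fixture's isolate entries.
type IsolateEntry = { file: string; reason: string };

function partitionTests(
  allFiles: string[],
  isolated: IsolateEntry[],
): { shared: string[]; forked: string[] } {
  // Set lookup keeps the split O(n) even for large test suites.
  const forkedSet = new Set(isolated.map((entry) => entry.file));
  return {
    shared: allFiles.filter((file) => !forkedSet.has(file)),
    forked: allFiles.filter((file) => forkedSet.has(file)),
  };
}
```

Running hotspot files in their own forks bounds how much transformed-module and data retention any single worker can accumulate, which is the stated reason for most entries in this fixture.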