Merge branch 'main' of https://github.com/openclaw/openclaw into openclaw/2026.2.22

commit 15063e85a2

22 CHANGELOG.md
@@ -8,6 +8,7 @@ Docs: https://docs.openclaw.ai

- Channels/Config: unify channel preview streaming config handling with a shared resolver and canonical migration path.
- Discord/Allowlist: canonicalize resolved Discord allowlist names to IDs and split resolution flow for clearer fail-closed behavior.
- Memory/FTS: add Korean stop-word filtering and particle-aware keyword extraction (including mixed Korean/English stems) for query expansion in FTS-only search mode. (#18899) Thanks @ruypang.
- iOS/Talk: prefetch TTS segments and suppress expected speech-cancellation errors for smoother talk playback. (#22833) Thanks @ngutman.
### Breaking

@@ -16,6 +17,23 @@ Docs: https://docs.openclaw.ai
### Fixes
- Telegram/WSL2: disable `autoSelectFamily` by default on WSL2 and memoize WSL2 detection in Telegram network decision logic to avoid repeated sync `/proc/version` probes on fetch/send paths. (#21916) Thanks @MizukiMachine.
- Telegram/Streaming: preserve archived draft preview mapping after flush and clean superseded reasoning preview bubbles so multi-message preview finals no longer cross-edit or orphan stale messages under send/rotation races. (#23202) Thanks @obviyus.
- Slack/Slash commands: preserve the Bolt app receiver when registering external select options handlers so monitor startup does not crash on runtimes that require bound `app.options` calls. (#23209) Thanks @0xgaia.
- Slack/Telegram slash sessions: await session metadata persistence before dispatch so first-turn native slash runs do not race session-origin metadata updates. (#23065) Thanks @hydro13.
- Agents/Ollama: preserve unsafe integer tool-call arguments as exact strings during NDJSON parsing, preventing large numeric IDs from being rounded before tool execution. (#23170) Thanks @BestJoester.
- Cron/Gateway: keep `cron.list` and `cron.status` responsive during startup catch-up by avoiding a long-held cron lock while missed jobs execute. (#23106) Thanks @jayleekr.
- Gateway/Config reload: compare array-valued config paths structurally during diffing so unchanged `memory.qmd.paths` and `memory.qmd.scope.rules` no longer trigger false restart-required reloads. (#23185) Thanks @rex05ai.
- Cron/Scheduling: validate runtime cron expressions before schedule/stagger evaluation so malformed persisted jobs report a clear `invalid cron schedule: expr is required` error instead of crashing with `undefined.trim` failures and auto-disable churn. (#23223) Thanks @asimons81.
- Memory/QMD: migrate legacy unscoped collection bindings (for example `memory-root`) to per-agent scoped names (for example `memory-root-main`) during startup when safe, so QMD-backed `memory_search` no longer fails with `Collection not found` after upgrades. (#23228, #20727) Thanks @JLDynamics and @AaronFaby.
- TUI/Input: enable multiline-paste burst coalescing on macOS Terminal.app and iTerm so pasted blocks no longer submit line-by-line as separate messages. (#18809) Thanks @fwends.
- TUI/Status: request immediate renders after setting `sending`/`waiting` activity states so in-flight runs always show visible progress indicators instead of appearing idle until completion. (#21549) Thanks @13Guinness.
- Agents/Fallbacks: treat JSON payloads with `type: "api_error"` + `"Internal server error"` as transient failover errors so Anthropic 500-style failures trigger model fallback. (#23193) Thanks @jarvis-lane.
- Agents/Diagnostics: include resolved lifecycle error text in `embedded run agent end` warnings so UI/TUI “Connection error” runs expose actionable provider failure reasons in gateway logs. (#23054) Thanks @Raize.
- Gateway/Pairing: treat `operator.admin` pairing tokens as satisfying `operator.write` requests so legacy devices stop looping through scope-upgrade prompts introduced in 2026.2.19. (#23125, #23006) Thanks @vignesh07.
- Gateway/Pairing: treat `operator.admin` as satisfying other `operator.*` scope checks during device-auth verification so local CLI/TUI sessions stop entering pairing-required loops for pairing/approval-scoped commands. (#22062, #22193, #21191) Thanks @Botaccess, @jhartshorn, and @ctbritt.
- Gateway/Pairing: preserve existing approved token scopes when processing repair pairings that omit `scopes`, preventing empty-scope token regressions on reconnecting clients. (#21906) Thanks @paki81.
- Plugins/CLI: make `openclaw plugins enable` and plugin install/link flows update allowlists via shared plugin-enable policy so enabled plugins are not left disabled by allowlist mismatch. (#23190) Thanks @downwind7clawd-ctrl.
- Memory/QMD: add optional `memory.qmd.mcporter` search routing so QMD `query/search/vsearch` can run through mcporter keep-alive flows (including multi-collection paths) to reduce cold starts, while keeping searches on agent-scoped QMD state for consistent recall. (#19617) Thanks @nicole-luxe and @vignesh07.
- Chat/UI: strip inline reply/audio directive tags (`[[reply_to_current]]`, `[[reply_to:<id>]]`, `[[audio_as_voice]]`) from displayed chat history, live chat event output, and session preview snippets so control tags no longer leak into user-visible surfaces.
- BlueBubbles/DM history: restore DM backfill context with account-scoped rolling history, bounded backfill retries, and safer history payload limits. (#20302) Thanks @Ryan-Haines.
@@ -23,6 +41,7 @@ Docs: https://docs.openclaw.ai

- Security/Shell env: validate login-shell executable paths for shell-env fallback (`/etc/shells` + trusted prefixes) and block `SHELL` in dangerous env override policy paths so untrusted shell-path injection falls back safely to `/bin/sh`. Thanks @athuljayaram for reporting.
- Security/Config: make parsed chat allowlist checks fail closed when `allowFrom` is empty, restoring expected DM/pairing gating.
- Security/Exec: in non-default setups that manually add `sort` to `tools.exec.safeBins`, block `sort --compress-program` so allowlist-mode safe-bin checks cannot bypass approval. Thanks @tdjackey for reporting.
- Security/Exec approvals: when users choose `allow-always` for shell-wrapper commands (for example `/bin/zsh -lc ...`), persist allowlist patterns for the inner executable(s) instead of the wrapper shell binary, preventing accidental broad shell allowlisting in moderate mode. (#23276) Thanks @xrom2863.
- Security/macOS app beta: enforce path-only `system.run` allowlist matching (drop basename matches like `echo`), migrate legacy basename entries to last resolved paths when available, and harden shell-chain handling to fail closed on unsafe parse/control syntax (including quoted command substitution/backticks). This is an optional allowlist-mode feature; default installs remain deny-by-default. This ships in the next npm release. Thanks @tdjackey for reporting.
- Security/SSRF: expand IPv4 fetch guard blocking to include RFC special-use/non-global ranges (including benchmarking, TEST-NET, multicast, and reserved/broadcast blocks), and centralize range checks into a single CIDR policy table to reduce classifier drift.
- Security/Archive: block zip symlink escapes during archive extraction.
@@ -30,6 +49,7 @@ Docs: https://docs.openclaw.ai

- Security/Discord: add `openclaw security audit` warnings for name/tag-based Discord allowlist entries (DM allowlists, guild/channel `users`, and pairing-store entries), highlighting slug-collision risk while keeping name-based matching supported, and canonicalize resolved Discord allowlist names to IDs at runtime without rewriting config files. Thanks @tdjackey for reporting.
- Security/Gateway: block node-role connections when device identity metadata is missing.
- Security/Media: enforce inbound media byte limits during download/read across Discord, Telegram, Zalo, Microsoft Teams, and BlueBubbles to prevent oversized payload memory spikes before rejection. This ships in the next npm release. Thanks @tdjackey for reporting.
- Media/Understanding: preserve `application/pdf` MIME classification during text-like file heuristics so PDF uploads use PDF extraction paths instead of being inlined as raw text. (#23191) Thanks @claudeplay2026-byte.
- Security/Control UI: block symlink-based out-of-root static file reads by enforcing realpath containment and file-identity checks when serving Control UI assets and SPA fallback `index.html`. This ships in the next npm release. Thanks @tdjackey for reporting.
- Security/MSTeams media: enforce allowlist checks for SharePoint reference attachment URLs and redirect targets during Graph-backed media fetches so redirect chains cannot escape configured media host boundaries. This ships in the next npm release. Thanks @tdjackey for reporting.
- Security/macOS discovery: fail closed for unresolved discovery endpoints by clearing stale remote selection values, use resolved service host only for SSH target derivation, and keep remote URL config aligned with resolved endpoint availability. (#21618) Thanks @bmendonca3.
@@ -92,6 +112,7 @@ Docs: https://docs.openclaw.ai

- CLI/Pairing: default `pairing list` and `pairing approve` to the sole available pairing channel when omitted, so TUI-only setups can recover from `pairing required` without guessing channel arguments. (#21527) Thanks @losts1.
- TUI/Pairing: show explicit pairing-required recovery guidance after gateway disconnects that return `pairing required`, including approval steps to unblock quickstart TUI hatching on fresh installs. (#21841) Thanks @nicolinux.
- TUI/Input: suppress duplicate backspace events arriving in the same input burst window so SSH sessions no longer delete two characters per backspace press in the composer. (#19318) Thanks @eheimer.
- TUI/Models: scope `models.list` to the configured model allowlist (`agents.defaults.models`) so `/model` picker no longer floods with unrelated catalog entries by default. (#18816) Thanks @fwends.
- TUI/Heartbeat: suppress heartbeat ACK/prompt noise in chat streaming when `showOk` is disabled, while still preserving non-ACK heartbeat alerts in final output. (#20228) Thanks @bhalliburton.
- TUI/History: cap chat-log component growth and prune stale render nodes/references so large default history loads no longer overflow render recursion with `RangeError: Maximum call stack size exceeded`. (#18068) Thanks @JaniJegoroff.
- Memory/QMD: diversify mixed-source search ranking when both session and memory collections are present so session transcript hits no longer crowd out durable memory-file matches in top results. (#19913) Thanks @alextempr.
@@ -101,6 +122,7 @@ Docs: https://docs.openclaw.ai

- Provider/HTTP: treat HTTP 503 as failover-eligible for LLM provider errors. (#21086) Thanks @Protocol-zero-0.
- Slack: pass `recipient_team_id` / `recipient_user_id` through Slack native streaming calls so `chat.startStream`/`appendStream`/`stopStream` work reliably across DMs and Slack Connect setups, and disable block streaming when native streaming is active. (#20988) Thanks @Dithilli. Earlier recipient-ID groundwork was contributed in #20377 by @AsserAl1012.
- CLI/Config: add canonical `--strict-json` parsing for `config set` and keep `--json` as a legacy alias to reduce help/behavior drift. (#21332) Thanks @adhitShet.
- CLI/Config: preserve explicitly unset config paths in persisted JSON after writes so `openclaw config unset <path>` no longer re-introduces defaulted keys (for example `commands.ownerDisplay`) through schema normalization. (#22984) Thanks @aronchick.
- CLI: keep `openclaw -v` as a root-only version alias so subcommand `-v, --verbose` flags (for example ACP/hooks/skills) are no longer intercepted globally. (#21303) Thanks @adhitShet.
- Memory: return empty snippets when `memory_get`/QMD read files that have not been created yet, and harden memory indexing/session helpers against ENOENT races so missing Markdown no longer crashes tools. (#20680) Thanks @pahdo.
- Telegram/Streaming: always clean up draft previews even when dispatch throws before fallback handling, preventing orphaned preview messages during failed runs. (#19041) Thanks @mudrii.
@@ -11,6 +11,7 @@ import {
  minSecurity,
  recordAllowlistUse,
  requiresExecApproval,
  resolveAllowAlwaysPatterns,
  resolveExecApprovals,
} from "../infra/exec-approvals.js";
import { markBackgrounded, tail } from "./bash-process-registry.js";
@@ -153,8 +154,13 @@ export async function processGatewayAllowlist(
  } else if (decision === "allow-always") {
    approvedByAsk = true;
    if (hostSecurity === "allowlist") {
      for (const segment of allowlistEval.segments) {
        const pattern = segment.resolution?.resolvedPath ?? "";
      const patterns = resolveAllowAlwaysPatterns({
        segments: allowlistEval.segments,
        cwd: params.workdir,
        env: params.env,
        platform: process.platform,
      });
      for (const pattern of patterns) {
        if (pattern) {
          addAllowlistEntry(approvals.file, params.agentId, pattern);
        }
@@ -244,6 +244,40 @@ describe("parseNdjsonStream", () => {
    // Final done:true chunk has no tool_calls
    expect(chunks[2].message.tool_calls).toBeUndefined();
  });

  it("preserves unsafe integer tool arguments as exact strings", async () => {
    const reader = mockNdjsonReader([
      '{"model":"m","created_at":"t","message":{"role":"assistant","content":"","tool_calls":[{"function":{"name":"send","arguments":{"target":1234567890123456789,"nested":{"thread":9223372036854775807}}}}]},"done":false}',
    ]);

    const chunks = [];
    for await (const chunk of parseNdjsonStream(reader)) {
      chunks.push(chunk);
    }

    const args = chunks[0]?.message.tool_calls?.[0]?.function.arguments as
      | { target?: unknown; nested?: { thread?: unknown } }
      | undefined;
    expect(args?.target).toBe("1234567890123456789");
    expect(args?.nested?.thread).toBe("9223372036854775807");
  });

  it("keeps safe integer tool arguments as numbers", async () => {
    const reader = mockNdjsonReader([
      '{"model":"m","created_at":"t","message":{"role":"assistant","content":"","tool_calls":[{"function":{"name":"send","arguments":{"retries":3,"delayMs":2500}}}]},"done":false}',
    ]);

    const chunks = [];
    for await (const chunk of parseNdjsonStream(reader)) {
      chunks.push(chunk);
    }

    const args = chunks[0]?.message.tool_calls?.[0]?.function.arguments as
      | { retries?: unknown; delayMs?: unknown }
      | undefined;
    expect(args?.retries).toBe(3);
    expect(args?.delayMs).toBe(2500);
  });
});

describe("createOllamaStreamFn", () => {
@@ -49,6 +49,130 @@ interface OllamaToolCall {
  };
}

const MAX_SAFE_INTEGER_ABS_STR = String(Number.MAX_SAFE_INTEGER);

function isAsciiDigit(ch: string | undefined): boolean {
  return ch !== undefined && ch >= "0" && ch <= "9";
}

function parseJsonNumberToken(
  input: string,
  start: number,
): { token: string; end: number; isInteger: boolean } | null {
  let idx = start;
  if (input[idx] === "-") {
    idx += 1;
  }
  if (idx >= input.length) {
    return null;
  }

  if (input[idx] === "0") {
    idx += 1;
  } else if (isAsciiDigit(input[idx]) && input[idx] !== "0") {
    while (isAsciiDigit(input[idx])) {
      idx += 1;
    }
  } else {
    return null;
  }

  let isInteger = true;
  if (input[idx] === ".") {
    isInteger = false;
    idx += 1;
    if (!isAsciiDigit(input[idx])) {
      return null;
    }
    while (isAsciiDigit(input[idx])) {
      idx += 1;
    }
  }

  if (input[idx] === "e" || input[idx] === "E") {
    isInteger = false;
    idx += 1;
    if (input[idx] === "+" || input[idx] === "-") {
      idx += 1;
    }
    if (!isAsciiDigit(input[idx])) {
      return null;
    }
    while (isAsciiDigit(input[idx])) {
      idx += 1;
    }
  }

  return {
    token: input.slice(start, idx),
    end: idx,
    isInteger,
  };
}

function isUnsafeIntegerLiteral(token: string): boolean {
  const digits = token[0] === "-" ? token.slice(1) : token;
  if (digits.length < MAX_SAFE_INTEGER_ABS_STR.length) {
    return false;
  }
  if (digits.length > MAX_SAFE_INTEGER_ABS_STR.length) {
    return true;
  }
  return digits > MAX_SAFE_INTEGER_ABS_STR;
}

function quoteUnsafeIntegerLiterals(input: string): string {
  let out = "";
  let inString = false;
  let escaped = false;
  let idx = 0;

  while (idx < input.length) {
    const ch = input[idx] ?? "";
    if (inString) {
      out += ch;
      if (escaped) {
        escaped = false;
      } else if (ch === "\\") {
        escaped = true;
      } else if (ch === '"') {
        inString = false;
      }
      idx += 1;
      continue;
    }

    if (ch === '"') {
      inString = true;
      out += ch;
      idx += 1;
      continue;
    }

    if (ch === "-" || isAsciiDigit(ch)) {
      const parsed = parseJsonNumberToken(input, idx);
      if (parsed) {
        if (parsed.isInteger && isUnsafeIntegerLiteral(parsed.token)) {
          out += `"${parsed.token}"`;
        } else {
          out += parsed.token;
        }
        idx = parsed.end;
        continue;
      }
    }

    out += ch;
    idx += 1;
  }

  return out;
}

function parseJsonPreservingUnsafeIntegers(input: string): unknown {
  return JSON.parse(quoteUnsafeIntegerLiterals(input)) as unknown;
}
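To see why the quoting step above is needed at all: `JSON.parse` coerces every numeric literal to a double, so integer IDs beyond `Number.MAX_SAFE_INTEGER` silently lose their low digits. A minimal illustration, using a naive regex stand-in for the tokenizer above (the regex is our simplification, not the shipped code):

```typescript
// JSON.parse rounds integers that exceed Number.MAX_SAFE_INTEGER (16 digits).
const raw = '{"target":1234567890123456789}';
const lossy = JSON.parse(raw) as { target: number };
// The round-tripped value no longer matches the original 19 digits.
console.log(String(lossy.target)); // not "1234567890123456789"

// Quoting oversized integer literals before parsing keeps the exact digits
// as a string. 17+ digits is always past MAX_SAFE_INTEGER, hence \d{17,}.
const quoted = raw.replace(/([:\[,]\s*)(-?\d{17,})(?=\s*[,}\]])/g, '$1"$2"');
const exact = JSON.parse(quoted) as { target: string };
console.log(exact.target); // "1234567890123456789"
```

The real implementation walks the string with a proper JSON tokenizer instead of a regex so it cannot misfire on digits inside string values.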
// ── Ollama /api/chat response types ─────────────────────────────────────────

interface OllamaChatResponse {

@@ -262,7 +386,7 @@ export async function* parseNdjsonStream(
      continue;
    }
    try {
      yield JSON.parse(trimmed) as OllamaChatResponse;
      yield parseJsonPreservingUnsafeIntegers(trimmed) as OllamaChatResponse;
    } catch {
      log.warn(`Skipping malformed NDJSON line: ${trimmed.slice(0, 120)}`);
    }

@@ -271,7 +395,7 @@ export async function* parseNdjsonStream(

  if (buffer.trim()) {
    try {
      yield JSON.parse(buffer.trim()) as OllamaChatResponse;
      yield parseJsonPreservingUnsafeIntegers(buffer.trim()) as OllamaChatResponse;
    } catch {
      log.warn(`Skipping malformed trailing data: ${buffer.trim().slice(0, 120)}`);
    }
@@ -377,4 +377,11 @@ describe("classifyFailoverReason", () => {
      ),
    ).toBe("rate_limit");
  });

  it("classifies JSON api_error internal server failures as timeout", () => {
    expect(
      classifyFailoverReason(
        '{"type":"error","error":{"type":"api_error","message":"Internal server error"}}',
      ),
    ).toBe("timeout");
  });
});
@@ -686,6 +686,16 @@ export function isOverloadedErrorMessage(raw: string): boolean {
  return matchesErrorPatterns(raw, ERROR_PATTERNS.overloaded);
}

function isJsonApiInternalServerError(raw: string): boolean {
  if (!raw) {
    return false;
  }
  const value = raw.toLowerCase();
  // Anthropic often wraps transient 500s in JSON payloads like:
  // {"type":"error","error":{"type":"api_error","message":"Internal server error"}}
  return value.includes('"type":"api_error"') && value.includes("internal server error");
}

export function parseImageDimensionError(raw: string): {
  maxDimensionPx?: number;
  messageIndex?: number;

@@ -794,6 +804,9 @@ export function classifyFailoverReason(raw: string): FailoverReason | null {
    // Treat transient 5xx provider failures as retryable transport issues.
    return "timeout";
  }
  if (isJsonApiInternalServerError(raw)) {
    return "timeout";
  }
  if (isRateLimitErrorMessage(raw)) {
    return "rate_limit";
  }
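Taken together, the new branch routes JSON-wrapped Anthropic `api_error` payloads to the `"timeout"` failover reason. A hypothetical trimmed-down classifier sketching just this path (the real `classifyFailoverReason` also checks HTTP status, overload, and rate-limit patterns):

```typescript
type FailoverReason = "timeout" | "rate_limit";

// Same substring check as the helper above.
function isJsonApiInternalServerError(raw: string): boolean {
  const value = raw.toLowerCase();
  return value.includes('"type":"api_error"') && value.includes("internal server error");
}

function classify(raw: string): FailoverReason | null {
  if (isJsonApiInternalServerError(raw)) {
    return "timeout"; // transient 500-style provider failure → eligible for model fallback
  }
  if (/rate[ _-]?limit/i.test(raw)) {
    return "rate_limit";
  }
  return null; // not failover-eligible
}

console.log(classify('{"type":"error","error":{"type":"api_error","message":"Internal server error"}}')); // prints: timeout
```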
76 src/agents/pi-embedded-subscribe.handlers.lifecycle.test.ts (new file)
@@ -0,0 +1,76 @@
import { describe, expect, it, vi } from "vitest";
import { createInlineCodeState } from "../markdown/code-spans.js";
import { handleAgentEnd } from "./pi-embedded-subscribe.handlers.lifecycle.js";
import type { EmbeddedPiSubscribeContext } from "./pi-embedded-subscribe.handlers.types.js";

vi.mock("../infra/agent-events.js", () => ({
  emitAgentEvent: vi.fn(),
}));

function createContext(
  lastAssistant: unknown,
  overrides?: { onAgentEvent?: (event: unknown) => void },
): EmbeddedPiSubscribeContext {
  return {
    params: {
      runId: "run-1",
      config: {},
      sessionKey: "agent:main:main",
      onAgentEvent: overrides?.onAgentEvent,
    },
    state: {
      lastAssistant: lastAssistant as EmbeddedPiSubscribeContext["state"]["lastAssistant"],
      pendingCompactionRetry: 0,
      blockState: {
        thinking: true,
        final: true,
        inlineCode: createInlineCodeState(),
      },
    },
    log: {
      debug: vi.fn(),
      warn: vi.fn(),
    },
    flushBlockReplyBuffer: vi.fn(),
    resolveCompactionRetry: vi.fn(),
    maybeResolveCompactionWait: vi.fn(),
  } as unknown as EmbeddedPiSubscribeContext;
}

describe("handleAgentEnd", () => {
  it("logs the resolved error message when run ends with assistant error", () => {
    const onAgentEvent = vi.fn();
    const ctx = createContext(
      {
        role: "assistant",
        stopReason: "error",
        errorMessage: "connection refused",
        content: [{ type: "text", text: "" }],
      },
      { onAgentEvent },
    );

    handleAgentEnd(ctx);

    const warn = vi.mocked(ctx.log.warn);
    expect(warn).toHaveBeenCalledTimes(1);
    expect(warn.mock.calls[0]?.[0]).toContain("runId=run-1");
    expect(warn.mock.calls[0]?.[0]).toContain("error=connection refused");
    expect(onAgentEvent).toHaveBeenCalledWith({
      stream: "lifecycle",
      data: {
        phase: "error",
        error: "connection refused",
      },
    });
  });

  it("keeps non-error run-end logging on debug only", () => {
    const ctx = createContext(undefined);

    handleAgentEnd(ctx);

    expect(ctx.log.warn).not.toHaveBeenCalled();
    expect(ctx.log.debug).toHaveBeenCalledWith("embedded run agent end: runId=run-1 isError=false");
  });
});
@@ -29,8 +29,6 @@ export function handleAgentEnd(ctx: EmbeddedPiSubscribeContext) {
  const lastAssistant = ctx.state.lastAssistant;
  const isError = isAssistantMessage(lastAssistant) && lastAssistant.stopReason === "error";

  ctx.log.debug(`embedded run agent end: runId=${ctx.params.runId} isError=${isError}`);

  if (isError && lastAssistant) {
    const friendlyError = formatAssistantErrorText(lastAssistant, {
      cfg: ctx.params.config,
@@ -38,12 +36,16 @@ export function handleAgentEnd(ctx: EmbeddedPiSubscribeContext) {
      provider: lastAssistant.provider,
      model: lastAssistant.model,
    });
    const errorText = (friendlyError || lastAssistant.errorMessage || "LLM request failed.").trim();
    ctx.log.warn(
      `embedded run agent end: runId=${ctx.params.runId} isError=true error=${errorText}`,
    );
    emitAgentEvent({
      runId: ctx.params.runId,
      stream: "lifecycle",
      data: {
        phase: "error",
        error: friendlyError || lastAssistant.errorMessage || "LLM request failed.",
        error: errorText,
        endedAt: Date.now(),
      },
    });
@@ -51,10 +53,11 @@ export function handleAgentEnd(ctx: EmbeddedPiSubscribeContext) {
      stream: "lifecycle",
      data: {
        phase: "error",
        error: friendlyError || lastAssistant.errorMessage || "LLM request failed.",
        error: errorText,
      },
    });
  } else {
    ctx.log.debug(`embedded run agent end: runId=${ctx.params.runId} isError=${isError}`);
    emitAgentEvent({
      runId: ctx.params.runId,
      stream: "lifecycle",
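The `errorText` introduced above is a straightforward fallback chain; a hypothetical standalone form of the resolution order (formatted provider error, then raw `errorMessage`, then a generic default):

```typescript
// Resolve a user-facing error string for lifecycle logs and events.
function resolveErrorText(
  friendlyError: string | undefined,
  errorMessage: string | undefined,
): string {
  return (friendlyError || errorMessage || "LLM request failed.").trim();
}

console.log(resolveErrorText(undefined, "connection refused")); // "connection refused"
console.log(resolveErrorText("", undefined)); // "LLM request failed."
```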
@@ -9,11 +9,14 @@ import type { ConfigFileSnapshot, OpenClawConfig } from "../config/types.js";
 */

const mockReadConfigFileSnapshot = vi.fn<() => Promise<ConfigFileSnapshot>>();
const mockWriteConfigFile = vi.fn<(cfg: OpenClawConfig) => Promise<void>>(async () => {});
const mockWriteConfigFile = vi.fn<
  (cfg: OpenClawConfig, options?: { unsetPaths?: string[][] }) => Promise<void>
>(async () => {});

vi.mock("../config/config.js", () => ({
  readConfigFileSnapshot: () => mockReadConfigFileSnapshot(),
  writeConfigFile: (cfg: OpenClawConfig) => mockWriteConfigFile(cfg),
  writeConfigFile: (cfg: OpenClawConfig, options?: { unsetPaths?: string[][] }) =>
    mockWriteConfigFile(cfg, options),
}));

const mockLog = vi.fn();
@@ -216,6 +219,9 @@ describe("config cli", () => {
      expect(written.gateway).toEqual(resolved.gateway);
      expect(written.tools?.profile).toBe("coding");
      expect(written.logging).toEqual(resolved.logging);
      expect(mockWriteConfigFile.mock.calls[0]?.[1]).toEqual({
        unsetPaths: [["tools", "alsoAllow"]],
      });
    });
  });
});
@@ -272,7 +272,7 @@ export async function runConfigUnset(opts: { path: string; runtime?: RuntimeEnv
      runtime.exit(1);
      return;
    }
    await writeConfigFile(next);
    await writeConfigFile(next, { unsetPaths: [parsedPath] });
    runtime.log(info(`Removed ${opts.path}. Restart the gateway to apply.`));
  } catch (err) {
    runtime.error(danger(String(err)));
@@ -6,6 +6,7 @@ import type { OpenClawConfig } from "../config/config.js";
import { loadConfig, writeConfigFile } from "../config/config.js";
import { resolveStateDir } from "../config/paths.js";
import { resolveArchiveKind } from "../infra/archive.js";
import { enablePluginInConfig } from "../plugins/enable.js";
import { installPluginFromNpmSpec, installPluginFromPath } from "../plugins/install.js";
import { recordPluginInstall } from "../plugins/installs.js";
import { clearPluginManifestRegistryCache } from "../plugins/manifest-registry.js";
@@ -135,22 +136,6 @@ function createPluginInstallLogger(): { info: (msg: string) => void; warn: (msg:
  };
}
function enablePluginInConfig(config: OpenClawConfig, pluginId: string): OpenClawConfig {
  return {
    ...config,
    plugins: {
      ...config.plugins,
      entries: {
        ...config.plugins?.entries,
        [pluginId]: {
          ...(config.plugins?.entries?.[pluginId] as object | undefined),
          enabled: true,
        },
      },
    },
  };
}

function logSlotWarnings(warnings: string[]) {
  if (warnings.length === 0) {
    return;
@@ -352,24 +337,21 @@ export function registerPluginsCli(program: Command) {
    .argument("<id>", "Plugin id")
    .action(async (id: string) => {
      const cfg = loadConfig();
      let next: OpenClawConfig = {
        ...cfg,
        plugins: {
          ...cfg.plugins,
          entries: {
            ...cfg.plugins?.entries,
            [id]: {
              ...(cfg.plugins?.entries as Record<string, { enabled?: boolean }> | undefined)?.[id],
              enabled: true,
            },
          },
        },
      };
      const enableResult = enablePluginInConfig(cfg, id);
      let next: OpenClawConfig = enableResult.config;
      const slotResult = applySlotSelectionForPlugin(next, id);
      next = slotResult.config;
      await writeConfigFile(next);
      logSlotWarnings(slotResult.warnings);
      defaultRuntime.log(`Enabled plugin "${id}". Restart the gateway to apply.`);
      if (enableResult.enabled) {
        defaultRuntime.log(`Enabled plugin "${id}". Restart the gateway to apply.`);
        return;
      }
      defaultRuntime.log(
        theme.warn(
          `Plugin "${id}" could not be enabled (${enableResult.reason ?? "unknown reason"}).`,
        ),
      );
    });
  plugins
@@ -568,7 +550,7 @@ export function registerPluginsCli(program: Command) {
        },
      },
      probe.pluginId,
    );
    ).config;
    next = recordPluginInstall(next, {
      pluginId: probe.pluginId,
      source: "path",
@@ -597,7 +579,7 @@ export function registerPluginsCli(program: Command) {
    // force a rescan so config validation sees the freshly installed plugin.
    clearPluginManifestRegistryCache();

    let next = enablePluginInConfig(cfg, result.pluginId);
    let next = enablePluginInConfig(cfg, result.pluginId).config;
    const source: "archive" | "path" = resolveArchiveKind(resolved) ? "archive" : "path";
    next = recordPluginInstall(next, {
      pluginId: result.pluginId,
@@ -648,7 +630,7 @@ export function registerPluginsCli(program: Command) {
    // Ensure config validation sees newly installed plugin(s) even if the cache was warmed at startup.
    clearPluginManifestRegistryCache();

    let next = enablePluginInConfig(cfg, result.pluginId);
    let next = enablePluginInConfig(cfg, result.pluginId).config;
    const resolvedSpec = result.npmResolution?.resolvedSpec;
    const recordSpec = opts.pin && resolvedSpec ? resolvedSpec : raw;
    if (opts.pin && !resolvedSpec) {
@@ -114,6 +114,11 @@ export type ConfigWriteOptions = {
   * same config file path that produced the snapshot.
   */
  expectedConfigPath?: string;
  /**
   * Paths that must be explicitly removed from the persisted file payload,
   * even if schema/default normalization reintroduces them.
   */
  unsetPaths?: string[][];
};
export type ReadConfigFileSnapshotForWriteResult = {

@@ -128,6 +133,86 @@ function hashConfigRaw(raw: string | null): string {
    .digest("hex");
}
function isNumericPathSegment(raw: string): boolean {
  return /^[0-9]+$/.test(raw);
}

function isWritePlainObject(value: unknown): value is Record<string, unknown> {
  return Boolean(value) && typeof value === "object" && !Array.isArray(value);
}

function unsetPathForWrite(root: Record<string, unknown>, pathSegments: string[]): boolean {
  if (pathSegments.length === 0) {
    return false;
  }

  const traversal: Array<{ container: unknown; key: string | number }> = [];
  let cursor: unknown = root;

  for (let i = 0; i < pathSegments.length - 1; i += 1) {
    const segment = pathSegments[i];
    if (Array.isArray(cursor)) {
      if (!isNumericPathSegment(segment)) {
        return false;
      }
      const index = Number.parseInt(segment, 10);
      if (!Number.isFinite(index) || index < 0 || index >= cursor.length) {
        return false;
      }
      traversal.push({ container: cursor, key: index });
      cursor = cursor[index];
      continue;
    }
    if (!isWritePlainObject(cursor) || !(segment in cursor)) {
      return false;
    }
    traversal.push({ container: cursor, key: segment });
    cursor = cursor[segment];
  }

  const leaf = pathSegments[pathSegments.length - 1];
  if (Array.isArray(cursor)) {
    if (!isNumericPathSegment(leaf)) {
      return false;
    }
    const index = Number.parseInt(leaf, 10);
    if (!Number.isFinite(index) || index < 0 || index >= cursor.length) {
      return false;
    }
    cursor.splice(index, 1);
  } else {
    if (!isWritePlainObject(cursor) || !(leaf in cursor)) {
      return false;
    }
    delete cursor[leaf];
  }

  // Prune now-empty object branches after unsetting to avoid dead config scaffolding.
  for (let i = traversal.length - 1; i >= 0; i -= 1) {
    const { container, key } = traversal[i];
    let child: unknown;
    if (Array.isArray(container)) {
      child = typeof key === "number" ? container[key] : undefined;
    } else if (isWritePlainObject(container)) {
      child = container[String(key)];
    } else {
      break;
    }
    if (!isWritePlainObject(child) || Object.keys(child).length > 0) {
      break;
    }
    if (Array.isArray(container) && typeof key === "number") {
      if (key >= 0 && key < container.length) {
        container.splice(key, 1);
      }
    } else if (isWritePlainObject(container)) {
      delete container[String(key)];
    }
  }

  return true;
}
|
||||
|
||||
export function resolveConfigSnapshotHash(snapshot: {
|
||||
hash?: string;
|
||||
raw?: string | null;
|
||||
@@ -892,6 +977,14 @@ export function createConfigIO(overrides: ConfigIoDeps = {}) {
       envRefMap && changedPaths
         ? (restoreEnvRefsFromMap(cfgToWrite, "", envRefMap, changedPaths) as OpenClawConfig)
         : cfgToWrite;
+    if (options.unsetPaths?.length) {
+      for (const unsetPath of options.unsetPaths) {
+        if (!Array.isArray(unsetPath) || unsetPath.length === 0) {
+          continue;
+        }
+        unsetPathForWrite(outputConfig as Record<string, unknown>, unsetPath);
+      }
+    }
     // Do NOT apply runtime defaults when writing — user config should only contain
     // explicitly set values. Runtime defaults are applied when loading (issue #6070).
     const stampedOutputConfig = stampConfigVersion(outputConfig);
@@ -1129,5 +1222,6 @@ export async function writeConfigFile(
     options.expectedConfigPath === undefined || options.expectedConfigPath === io.configPath;
   await io.writeConfigFile(cfg, {
     envSnapshotForRestore: sameConfigPath ? options.envSnapshotForRestore : undefined,
+    unsetPaths: options.unsetPaths,
   });
 }
@@ -96,6 +96,34 @@ describe("config io write", () => {
     });
   });
 
+  it("honors explicit unset paths when schema defaults would otherwise reappear", async () => {
+    await withTempHome("openclaw-config-io-", async (home) => {
+      const { configPath, io, snapshot } = await writeConfigAndCreateIo({
+        home,
+        initialConfig: {
+          gateway: { auth: { mode: "none" } },
+          commands: { ownerDisplay: "hash" },
+        },
+      });
+
+      const next = structuredClone(snapshot.resolved) as Record<string, unknown>;
+      if (
+        next.commands &&
+        typeof next.commands === "object" &&
+        "ownerDisplay" in (next.commands as Record<string, unknown>)
+      ) {
+        delete (next.commands as Record<string, unknown>).ownerDisplay;
+      }
+
+      await io.writeConfigFile(next, { unsetPaths: [["commands", "ownerDisplay"]] });
+
+      const persisted = JSON.parse(await fs.readFile(configPath, "utf-8")) as {
+        commands?: Record<string, unknown>;
+      };
+      expect(persisted.commands ?? {}).not.toHaveProperty("ownerDisplay");
+    });
+  });
+
   it("preserves env var references when writing", async () => {
     await withTempHome("openclaw-config-io-", async (home) => {
       const { configPath, io, snapshot } = await writeConfigAndCreateIo({
@@ -13,6 +13,18 @@ describe("cron schedule", () => {
     expect(next).toBe(Date.parse("2025-12-17T17:00:00.000Z"));
   });
 
+  it("throws a clear error when cron expr is missing at runtime", () => {
+    const nowMs = Date.parse("2025-12-13T00:00:00.000Z");
+    expect(() =>
+      computeNextRunAtMs(
+        {
+          kind: "cron",
+        } as unknown as { kind: "cron"; expr: string; tz?: string },
+        nowMs,
+      ),
+    ).toThrow("invalid cron schedule: expr is required");
+  });
+
   it("computes next run for every schedule", () => {
     const anchor = Date.parse("2025-12-13T00:00:00.000Z");
     const now = anchor + 10_000;
@@ -41,7 +41,11 @@ export function computeNextRunAtMs(schedule: CronSchedule, nowMs: number): numbe
     return anchor + steps * everyMs;
   }
 
-  const expr = schedule.expr.trim();
+  const exprSource = (schedule as { expr?: unknown }).expr;
+  if (typeof exprSource !== "string") {
+    throw new Error("invalid cron schedule: expr is required");
+  }
+  const expr = exprSource.trim();
   if (!expr) {
     return undefined;
   }
@@ -11,6 +11,22 @@ const noopLogger = {
   error: vi.fn(),
 };
 
+async function withTimeout<T>(promise: Promise<T>, timeoutMs: number, label: string): Promise<T> {
+  let timeout: NodeJS.Timeout | undefined;
+  try {
+    return await Promise.race([
+      promise,
+      new Promise<T>((_resolve, reject) => {
+        timeout = setTimeout(() => reject(new Error(`${label} timed out`)), timeoutMs);
+      }),
+    ]);
+  } finally {
+    if (timeout) {
+      clearTimeout(timeout);
+    }
+  }
+}
+
 async function makeStorePath() {
   const dir = await fs.mkdtemp(path.join(os.tmpdir(), "openclaw-cron-"));
   return {
@@ -135,4 +151,86 @@ describe("CronService read ops while job is running", () => {
       await store.cleanup();
     }
   });
+
+  it("keeps list and status responsive during startup catch-up runs", async () => {
+    const store = await makeStorePath();
+    const enqueueSystemEvent = vi.fn();
+    const requestHeartbeatNow = vi.fn();
+    const nowMs = Date.parse("2025-12-13T00:00:00.000Z");
+
+    await fs.mkdir(path.dirname(store.storePath), { recursive: true });
+    await fs.writeFile(
+      store.storePath,
+      JSON.stringify({
+        version: 1,
+        jobs: [
+          {
+            id: "startup-catchup",
+            name: "startup catch-up",
+            enabled: true,
+            createdAtMs: nowMs - 86_400_000,
+            updatedAtMs: nowMs - 86_400_000,
+            schedule: { kind: "at", at: new Date(nowMs - 60_000).toISOString() },
+            sessionTarget: "isolated",
+            wakeMode: "next-heartbeat",
+            payload: { kind: "agentTurn", message: "startup replay" },
+            delivery: { mode: "none" },
+            state: { nextRunAtMs: nowMs - 60_000 },
+          },
+        ],
+      }),
+      "utf-8",
+    );
+
+    let resolveRun:
+      | ((value: { status: "ok" | "error" | "skipped"; summary?: string; error?: string }) => void)
+      | undefined;
+    let resolveRunStarted: (() => void) | undefined;
+    const runStarted = new Promise<void>((resolve) => {
+      resolveRunStarted = resolve;
+    });
+
+    const runIsolatedAgentJob = vi.fn(async () => {
+      resolveRunStarted?.();
+      return await new Promise<{
+        status: "ok" | "error" | "skipped";
+        summary?: string;
+        error?: string;
+      }>((resolve) => {
+        resolveRun = resolve;
+      });
+    });
+
+    const cron = new CronService({
+      storePath: store.storePath,
+      cronEnabled: true,
+      log: noopLogger,
+      nowMs: () => nowMs,
+      enqueueSystemEvent,
+      requestHeartbeatNow,
+      runIsolatedAgentJob,
+    });
+
+    try {
+      const startPromise = cron.start();
+      await runStarted;
+
+      await expect(
+        withTimeout(cron.list({ includeDisabled: true }), 300, "cron.list during startup"),
+      ).resolves.toBeTypeOf("object");
+      await expect(withTimeout(cron.status(), 300, "cron.status during startup")).resolves.toEqual(
+        expect.objectContaining({ enabled: true, storePath: store.storePath }),
+      );
+
+      resolveRun?.({ status: "ok", summary: "done" });
+      await startPromise;
+
+      const jobs = await cron.list({ includeDisabled: true });
+      expect(jobs[0]?.state.lastStatus).toBe("ok");
+      expect(jobs[0]?.state.runningAtMs).toBeUndefined();
+    } finally {
+      cron.stop();
+      await store.cleanup();
+    }
+  });
 });
@@ -186,4 +186,19 @@ describe("cron schedule error isolation", () => {
     expect(badJob.state.lastError).toMatch(/^schedule error:/);
     expect(badJob.state.lastError).toBeTruthy();
   });
+
+  it("records a clear schedule error when cron expr is missing", () => {
+    const badJob = createJob({
+      id: "missing-expr",
+      name: "Missing Expr",
+      schedule: { kind: "cron" } as unknown as CronJob["schedule"],
+    });
+    const state = createMockState([badJob]);
+
+    recomputeNextRuns(state);
+
+    expect(badJob.state.lastError).toContain("invalid cron schedule: expr is required");
+    expect(badJob.state.lastError).not.toContain("Cannot read properties of undefined");
+    expect(badJob.state.scheduleErrorCount).toBe(1);
+  });
 });
@@ -28,14 +28,15 @@ async function ensureLoadedForRead(state: CronServiceState) {
 }
 
 export async function start(state: CronServiceState) {
-  if (!state.deps.cronEnabled) {
-    state.deps.log.info({ enabled: false }, "cron: disabled");
-    return;
-  }
+
+  const startupInterruptedJobIds = new Set<string>();
   await locked(state, async () => {
+    if (!state.deps.cronEnabled) {
+      state.deps.log.info({ enabled: false }, "cron: disabled");
+      return;
+    }
     await ensureLoaded(state, { skipRecompute: true });
     const jobs = state.store?.jobs ?? [];
-    const startupInterruptedJobIds = new Set<string>();
     for (const job of jobs) {
       if (typeof job.state.runningAtMs === "number") {
         state.deps.log.warn(
@@ -46,7 +47,13 @@ export async function start(state: CronServiceState) {
         startupInterruptedJobIds.add(job.id);
       }
     }
-    await runMissedJobs(state, { skipJobIds: startupInterruptedJobIds });
     await persist(state);
   });
+
+  await runMissedJobs(state, { skipJobIds: startupInterruptedJobIds });
+
+  await locked(state, async () => {
+    await ensureLoaded(state, { forceReload: true, skipRecompute: true });
+    recomputeNextRuns(state);
+    await persist(state);
     armTimer(state);
@@ -458,22 +458,97 @@ export async function runMissedJobs(
   state: CronServiceState,
   opts?: { skipJobIds?: ReadonlySet<string> },
 ) {
-  if (!state.store) {
-    return;
-  }
-  const now = state.deps.nowMs();
-  const skipJobIds = opts?.skipJobIds;
-  const missed = collectRunnableJobs(state, now, { skipJobIds, skipAtIfAlreadyRan: true });
-
-  if (missed.length > 0) {
+  const startupCandidates = await locked(state, async () => {
+    await ensureLoaded(state, { skipRecompute: true });
+    if (!state.store) {
+      return [] as Array<{ jobId: string; job: CronJob }>;
+    }
+    const now = state.deps.nowMs();
+    const skipJobIds = opts?.skipJobIds;
+    const missed = collectRunnableJobs(state, now, { skipJobIds, skipAtIfAlreadyRan: true });
+    if (missed.length === 0) {
+      return [] as Array<{ jobId: string; job: CronJob }>;
+    }
     state.deps.log.info(
       { count: missed.length, jobIds: missed.map((j) => j.id) },
       "cron: running missed jobs after restart",
     );
     for (const job of missed) {
-      await executeJob(state, job, now, { forced: false });
+      job.state.runningAtMs = now;
+      job.state.lastError = undefined;
     }
     await persist(state);
+    return missed.map((job) => ({ jobId: job.id, job }));
+  });
+
+  if (startupCandidates.length === 0) {
+    return;
+  }
+
+  const outcomes: Array<TimedCronRunOutcome> = [];
+  for (const candidate of startupCandidates) {
+    const startedAt = state.deps.nowMs();
+    emit(state, { jobId: candidate.job.id, action: "started", runAtMs: startedAt });
+    try {
+      const result = await executeJobCore(state, candidate.job);
+      outcomes.push({
+        jobId: candidate.jobId,
+        status: result.status,
+        error: result.error,
+        summary: result.summary,
+        delivered: result.delivered,
+        sessionId: result.sessionId,
+        sessionKey: result.sessionKey,
+        model: result.model,
+        provider: result.provider,
+        usage: result.usage,
+        startedAt,
+        endedAt: state.deps.nowMs(),
+      });
+    } catch (err) {
+      outcomes.push({
+        jobId: candidate.jobId,
+        status: "error",
+        error: String(err),
+        startedAt,
+        endedAt: state.deps.nowMs(),
+      });
+    }
+  }
+
+  await locked(state, async () => {
+    await ensureLoaded(state, { forceReload: true, skipRecompute: true });
+    if (!state.store) {
+      return;
+    }
+
+    for (const result of outcomes) {
+      const job = state.store.jobs.find((entry) => entry.id === result.jobId);
+      if (!job) {
+        continue;
+      }
+      const shouldDelete = applyJobResult(state, job, {
+        status: result.status,
+        error: result.error,
+        delivered: result.delivered,
+        startedAt: result.startedAt,
+        endedAt: result.endedAt,
+      });
+
+      emitJobFinished(state, job, result, result.startedAt);
+
+      if (shouldDelete) {
+        state.store.jobs = state.store.jobs.filter((entry) => entry.id !== job.id);
+        emit(state, { jobId: job.id, action: "removed" });
+      }
+    }
+
+    // Preserve any new past-due nextRunAtMs values that became due while
+    // startup catch-up was running. They should execute on a future tick
+    // instead of being silently advanced.
+    recomputeNextRunsForMaintenance(state);
+    await persist(state);
+  });
 }
 
 export async function runDueJobs(state: CronServiceState) {
@@ -33,4 +33,13 @@ describe("cron stagger helpers", () => {
     expect(resolveCronStaggerMs({ kind: "cron", expr: "0 * * * *", staggerMs: 0 })).toBe(0);
     expect(resolveCronStaggerMs({ kind: "cron", expr: "15 * * * *" })).toBe(0);
   });
+
+  it("handles missing runtime expr values without throwing", () => {
+    expect(() =>
+      resolveCronStaggerMs({ kind: "cron" } as unknown as { kind: "cron"; expr: string }),
+    ).not.toThrow();
+    expect(
+      resolveCronStaggerMs({ kind: "cron" } as unknown as { kind: "cron"; expr: string }),
+    ).toBe(0);
+  });
 });
@@ -41,5 +41,7 @@ export function resolveCronStaggerMs(schedule: Extract<CronSchedule, { kind: "cr
   if (explicit !== undefined) {
     return explicit;
   }
-  return resolveDefaultCronStaggerMs(schedule.expr) ?? 0;
+  const expr = (schedule as { expr?: unknown }).expr;
+  const cronExpr = typeof expr === "string" ? expr : "";
+  return resolveDefaultCronStaggerMs(cronExpr) ?? 0;
 }
@@ -23,6 +23,48 @@ describe("diffConfigPaths", () => {
     const paths = diffConfigPaths(prev, next);
     expect(paths).toContain("messages.groupChat.mentionPatterns");
   });
+
+  it("does not report unchanged arrays of objects as changed", () => {
+    const prev = {
+      memory: {
+        qmd: {
+          paths: [{ path: "~/docs", pattern: "**/*.md", name: "docs" }],
+          scope: {
+            rules: [{ when: { channel: "slack" }, include: ["docs"] }],
+          },
+        },
+      },
+    };
+    const next = {
+      memory: {
+        qmd: {
+          paths: [{ path: "~/docs", pattern: "**/*.md", name: "docs" }],
+          scope: {
+            rules: [{ when: { channel: "slack" }, include: ["docs"] }],
+          },
+        },
+      },
+    };
+    expect(diffConfigPaths(prev, next)).toEqual([]);
+  });
+
+  it("reports changed arrays of objects", () => {
+    const prev = {
+      memory: {
+        qmd: {
+          paths: [{ path: "~/docs", pattern: "**/*.md", name: "docs" }],
+        },
+      },
+    };
+    const next = {
+      memory: {
+        qmd: {
+          paths: [{ path: "~/docs", pattern: "**/*.txt", name: "docs" }],
+        },
+      },
+    };
+    expect(diffConfigPaths(prev, next)).toContain("memory.qmd.paths");
+  });
 });
 
 describe("buildGatewayReloadPlan", () => {
@@ -1,3 +1,4 @@
+import { isDeepStrictEqual } from "node:util";
 import chokidar from "chokidar";
 import { type ChannelId, listChannelPlugins } from "../channels/plugins/index.js";
 import type { OpenClawConfig, ConfigFileSnapshot, GatewayReloadMode } from "../config/config.js";
@@ -150,7 +151,9 @@ export function diffConfigPaths(prev: unknown, next: unknown, prefix = ""): stri
     return paths;
   }
   if (Array.isArray(prev) && Array.isArray(next)) {
-    if (prev.length === next.length && prev.every((val, idx) => val === next[idx])) {
+    // Arrays can contain object entries (for example memory.qmd.paths/scope.rules);
+    // compare structurally so identical values are not reported as changed.
+    if (isDeepStrictEqual(prev, next)) {
       return [];
     }
   }
@@ -1,3 +1,6 @@
+import { DEFAULT_PROVIDER } from "../../agents/defaults.js";
+import { buildAllowedModelSet } from "../../agents/model-selection.js";
+import { loadConfig } from "../../config/config.js";
 import {
   ErrorCodes,
   errorShape,
@@ -20,7 +23,14 @@ export const modelsHandlers: GatewayRequestHandlers = {
       return;
     }
     try {
-      const models = await context.loadGatewayModelCatalog();
+      const catalog = await context.loadGatewayModelCatalog();
+      const cfg = loadConfig();
+      const { allowedCatalog } = buildAllowedModelSet({
+        cfg,
+        catalog,
+        defaultProvider: DEFAULT_PROVIDER,
+      });
+      const models = allowedCatalog.length > 0 ? allowedCatalog : catalog;
       respond(true, { models }, undefined);
     } catch (err) {
       respond(false, undefined, errorShape(ErrorCodes.UNAVAILABLE, String(err)));
@@ -5,6 +5,7 @@ import { afterAll, beforeAll, describe, expect, test } from "vitest";
 import { WebSocket } from "ws";
 import { getChannelPlugin } from "../channels/plugins/index.js";
 import type { ChannelOutboundAdapter } from "../channels/plugins/types.js";
+import { clearConfigCache } from "../config/config.js";
 import { resolveCanvasHostUrl } from "../infra/canvas-host-url.js";
 import { GatewayLockError } from "../infra/gateway-lock.js";
 import { getActivePluginRegistry, setActivePluginRegistry } from "../plugins/runtime.js";
@@ -251,6 +252,99 @@ describe("gateway server models + voicewake", () => {
     expect(piSdkMock.discoverCalls).toBe(1);
   });
 
+  test("models.list filters to allowlisted configured models by default", async () => {
+    const configPath = process.env.OPENCLAW_CONFIG_PATH;
+    if (!configPath) {
+      throw new Error("Missing OPENCLAW_CONFIG_PATH");
+    }
+    let previousConfig: string | undefined;
+    try {
+      previousConfig = await fs.readFile(configPath, "utf-8");
+    } catch (err) {
+      const code = (err as NodeJS.ErrnoException | undefined)?.code;
+      if (code !== "ENOENT") {
+        throw err;
+      }
+    }
+    try {
+      await fs.mkdir(path.dirname(configPath), { recursive: true });
+      await fs.writeFile(
+        configPath,
+        JSON.stringify(
+          {
+            agents: {
+              defaults: {
+                model: { primary: "openai/gpt-test-z" },
+                models: {
+                  "openai/gpt-test-z": {},
+                  "anthropic/claude-test-a": {},
+                },
+              },
+            },
+          },
+          null,
+          2,
+        ),
+        "utf-8",
+      );
+      clearConfigCache();
+
+      piSdkMock.enabled = true;
+      piSdkMock.models = [
+        { id: "gpt-test-z", provider: "openai", contextWindow: 0 },
+        {
+          id: "gpt-test-a",
+          name: "A-Model",
+          provider: "openai",
+          contextWindow: 8000,
+        },
+        {
+          id: "claude-test-b",
+          name: "B-Model",
+          provider: "anthropic",
+          contextWindow: 1000,
+        },
+        {
+          id: "claude-test-a",
+          name: "A-Model",
+          provider: "anthropic",
+          contextWindow: 200_000,
+        },
+      ];
+
+      const res = await rpcReq<{
+        models: Array<{
+          id: string;
+          name: string;
+          provider: string;
+          contextWindow?: number;
+        }>;
+      }>(ws, "models.list");
+
+      expect(res.ok).toBe(true);
+      expect(res.payload?.models).toEqual([
+        {
+          id: "claude-test-a",
+          name: "A-Model",
+          provider: "anthropic",
+          contextWindow: 200_000,
+        },
+        {
+          id: "gpt-test-z",
+          name: "gpt-test-z",
+          provider: "openai",
+        },
+      ]);
+    } finally {
+      if (previousConfig === undefined) {
+        await fs.rm(configPath, { force: true });
+      } else {
+        await fs.writeFile(configPath, previousConfig, "utf-8");
+      }
+      clearConfigCache();
+    }
+  });
+
   test("models.list rejects unknown params", async () => {
     piSdkMock.enabled = true;
     piSdkMock.models = [{ id: "gpt-test-a", name: "A", provider: "openai" }];
@@ -122,6 +122,26 @@ describe("device pairing tokens", () => {
     expect(paired?.tokens?.operator?.scopes).toEqual(["operator.read"]);
   });
 
+  test("preserves existing token scopes when approving a repair without requested scopes", async () => {
+    const baseDir = await mkdtemp(join(tmpdir(), "openclaw-device-pairing-"));
+    await setupPairedOperatorDevice(baseDir, ["operator.admin"]);
+
+    const repair = await requestDevicePairing(
+      {
+        deviceId: "device-1",
+        publicKey: "public-key-1",
+        role: "operator",
+      },
+      baseDir,
+    );
+    await approveDevicePairing(repair.request.requestId, baseDir);
+
+    const paired = await getPairedDevice("device-1", baseDir);
+    expect(paired?.scopes).toEqual(["operator.admin"]);
+    expect(paired?.approvedScopes).toEqual(["operator.admin"]);
+    expect(paired?.tokens?.operator?.scopes).toEqual(["operator.admin"]);
+  });
+
   test("rejects scope escalation when rotating a token and leaves state unchanged", async () => {
     const baseDir = await mkdtemp(join(tmpdir(), "openclaw-device-pairing-"));
     await setupPairedOperatorDevice(baseDir, ["operator.read"]);
@@ -168,7 +188,7 @@ describe("device pairing tokens", () => {
     expect(mismatch.reason).toBe("token-mismatch");
   });
 
-  test("accepts operator.read requests with an operator.admin token scope", async () => {
+  test("accepts operator.read/operator.write requests with an operator.admin token scope", async () => {
     const baseDir = await mkdtemp(join(tmpdir(), "openclaw-device-pairing-"));
     await setupPairedOperatorDevice(baseDir, ["operator.admin"]);
     const paired = await getPairedDevice("device-1", baseDir);
|
||||
});
|
||||
expect(readOk.ok).toBe(true);
|
||||
|
||||
const writeMismatch = await verifyDeviceToken({
|
||||
const writeOk = await verifyDeviceToken({
|
||||
deviceId: "device-1",
|
||||
token,
|
||||
role: "operator",
|
||||
scopes: ["operator.write"],
|
||||
baseDir,
|
||||
});
|
||||
expect(writeMismatch).toEqual({ ok: false, reason: "scope-mismatch" });
|
||||
expect(writeOk.ok).toBe(true);
|
||||
});
|
||||
|
||||
test("treats multibyte same-length token input as mismatch without throwing", async () => {
|
||||
|
||||
@@ -332,8 +332,17 @@ export async function approveDevicePairing(
   const tokens = existing?.tokens ? { ...existing.tokens } : {};
   const roleForToken = normalizeRole(pending.role);
   if (roleForToken) {
-    const nextScopes = normalizeDeviceAuthScopes(pending.scopes);
+    const existingToken = tokens[roleForToken];
+    const requestedScopes = normalizeDeviceAuthScopes(pending.scopes);
+    const nextScopes =
+      requestedScopes.length > 0
+        ? requestedScopes
+        : normalizeDeviceAuthScopes(
+            existingToken?.scopes ??
+              approvedScopes ??
+              existing?.approvedScopes ??
+              existing?.scopes,
+          );
     const now = Date.now();
     tokens[roleForToken] = {
       token: newToken(),
@@ -205,6 +205,148 @@ export type ExecAllowlistAnalysis = {
   segmentSatisfiedBy: ExecSegmentSatisfiedBy[];
 };
 
+const SHELL_WRAPPER_EXECUTABLES = new Set([
+  "ash",
+  "bash",
+  "cmd",
+  "cmd.exe",
+  "dash",
+  "fish",
+  "ksh",
+  "powershell",
+  "powershell.exe",
+  "pwsh",
+  "pwsh.exe",
+  "sh",
+  "zsh",
+]);
+
+function normalizeExecutableName(name: string | undefined): string {
+  return (name ?? "").trim().toLowerCase();
+}
+
+function isShellWrapperSegment(segment: ExecCommandSegment): boolean {
+  const candidates = [
+    normalizeExecutableName(segment.resolution?.executableName),
+    normalizeExecutableName(segment.resolution?.rawExecutable),
+  ];
+  for (const candidate of candidates) {
+    if (!candidate) {
+      continue;
+    }
+    if (SHELL_WRAPPER_EXECUTABLES.has(candidate)) {
+      return true;
+    }
+    const base = candidate.split(/[\\/]/).pop();
+    if (base && SHELL_WRAPPER_EXECUTABLES.has(base)) {
+      return true;
+    }
+  }
+  return false;
+}
+
+function extractShellInlineCommand(argv: string[]): string | null {
+  for (let i = 1; i < argv.length; i += 1) {
+    const token = argv[i];
+    if (!token) {
+      continue;
+    }
+    const lower = token.toLowerCase();
+    if (lower === "--") {
+      break;
+    }
+    if (
+      lower === "-c" ||
+      lower === "--command" ||
+      lower === "-command" ||
+      lower === "/c" ||
+      lower === "/k"
+    ) {
+      const next = argv[i + 1]?.trim();
+      return next ? next : null;
+    }
+    if (/^-[^-]*c[^-]*$/i.test(token)) {
+      const commandIndex = lower.indexOf("c");
+      const inline = token.slice(commandIndex + 1).trim();
+      if (inline) {
+        return inline;
+      }
+      const next = argv[i + 1]?.trim();
+      return next ? next : null;
+    }
+  }
+  return null;
+}
+
+function collectAllowAlwaysPatterns(params: {
+  segment: ExecCommandSegment;
+  cwd?: string;
+  env?: NodeJS.ProcessEnv;
+  platform?: string | null;
+  depth: number;
+  out: Set<string>;
+}) {
+  const candidatePath = resolveAllowlistCandidatePath(params.segment.resolution, params.cwd);
+  if (!candidatePath) {
+    return;
+  }
+  if (!isShellWrapperSegment(params.segment)) {
+    params.out.add(candidatePath);
+    return;
+  }
+  if (params.depth >= 3) {
+    return;
+  }
+  const inlineCommand = extractShellInlineCommand(params.segment.argv);
+  if (!inlineCommand) {
+    return;
+  }
+  const nested = analyzeShellCommand({
+    command: inlineCommand,
+    cwd: params.cwd,
+    env: params.env,
+    platform: params.platform,
+  });
+  if (!nested.ok) {
+    return;
+  }
+  for (const nestedSegment of nested.segments) {
+    collectAllowAlwaysPatterns({
+      segment: nestedSegment,
+      cwd: params.cwd,
+      env: params.env,
+      platform: params.platform,
+      depth: params.depth + 1,
+      out: params.out,
+    });
+  }
+}
+
+/**
+ * Derive persisted allowlist patterns for an "allow always" decision.
+ * When a command is wrapped in a shell (for example `zsh -lc "<cmd>"`),
+ * persist the inner executable(s) rather than the shell binary.
+ */
+export function resolveAllowAlwaysPatterns(params: {
+  segments: ExecCommandSegment[];
+  cwd?: string;
+  env?: NodeJS.ProcessEnv;
+  platform?: string | null;
+}): string[] {
+  const patterns = new Set<string>();
+  for (const segment of params.segments) {
+    collectAllowAlwaysPatterns({
+      segment,
+      cwd: params.cwd,
+      env: params.env,
+      platform: params.platform,
+      depth: 0,
+      out: patterns,
+    });
+  }
+  return Array.from(patterns);
+}
+
 /**
  * Evaluates allowlist for shell commands (including &&, ||, ;) and returns analysis metadata.
  */
@@ -18,6 +18,7 @@ import {
   normalizeSafeBins,
   requiresExecApproval,
   resolveCommandResolution,
+  resolveAllowAlwaysPatterns,
   resolveExecApprovals,
   resolveExecApprovalsFromFile,
   resolveExecApprovalsPath,
@ -1214,3 +1215,122 @@ describe("normalizeExecApprovals handles string allowlist entries (#9790)", () =
|
||||
}
|
||||
});
|
||||
});
|
||||
|
||||
describe("resolveAllowAlwaysPatterns", () => {
  function makeExecutable(dir: string, name: string): string {
    const fileName = process.platform === "win32" ? `${name}.exe` : name;
    const exe = path.join(dir, fileName);
    fs.writeFileSync(exe, "");
    fs.chmodSync(exe, 0o755);
    return exe;
  }

  it("returns direct executable paths for non-shell segments", () => {
    const exe = path.join("/tmp", "openclaw-tool");
    const patterns = resolveAllowAlwaysPatterns({
      segments: [
        {
          raw: exe,
          argv: [exe],
          resolution: { rawExecutable: exe, resolvedPath: exe, executableName: "openclaw-tool" },
        },
      ],
    });
    expect(patterns).toEqual([exe]);
  });

  it("unwraps shell wrappers and persists the inner executable instead", () => {
    if (process.platform === "win32") {
      return;
    }
    const dir = makeTempDir();
    const whoami = makeExecutable(dir, "whoami");
    const patterns = resolveAllowAlwaysPatterns({
      segments: [
        {
          raw: "/bin/zsh -lc 'whoami'",
          argv: ["/bin/zsh", "-lc", "whoami"],
          resolution: {
            rawExecutable: "/bin/zsh",
            resolvedPath: "/bin/zsh",
            executableName: "zsh",
          },
        },
      ],
      cwd: dir,
      env: makePathEnv(dir),
      platform: process.platform,
    });
    expect(patterns).toEqual([whoami]);
    expect(patterns).not.toContain("/bin/zsh");
  });

  it("extracts all inner binaries from shell chains and deduplicates", () => {
    if (process.platform === "win32") {
      return;
    }
    const dir = makeTempDir();
    const whoami = makeExecutable(dir, "whoami");
    const ls = makeExecutable(dir, "ls");
    const patterns = resolveAllowAlwaysPatterns({
      segments: [
        {
          raw: "/bin/zsh -lc 'whoami && ls && whoami'",
          argv: ["/bin/zsh", "-lc", "whoami && ls && whoami"],
          resolution: {
            rawExecutable: "/bin/zsh",
            resolvedPath: "/bin/zsh",
            executableName: "zsh",
          },
        },
      ],
      cwd: dir,
      env: makePathEnv(dir),
      platform: process.platform,
    });
    expect(new Set(patterns)).toEqual(new Set([whoami, ls]));
  });

  it("does not persist broad shell binaries when no inner command can be derived", () => {
    const patterns = resolveAllowAlwaysPatterns({
      segments: [
        {
          raw: "/bin/zsh -s",
          argv: ["/bin/zsh", "-s"],
          resolution: {
            rawExecutable: "/bin/zsh",
            resolvedPath: "/bin/zsh",
            executableName: "zsh",
          },
        },
      ],
      platform: process.platform,
    });
    expect(patterns).toEqual([]);
  });

  it("detects shell wrappers even when unresolved executableName is a full path", () => {
    if (process.platform === "win32") {
      return;
    }
    const dir = makeTempDir();
    const whoami = makeExecutable(dir, "whoami");
    const patterns = resolveAllowAlwaysPatterns({
      segments: [
        {
          raw: "/usr/local/bin/zsh -lc whoami",
          argv: ["/usr/local/bin/zsh", "-lc", "whoami"],
          resolution: {
            rawExecutable: "/usr/local/bin/zsh",
            resolvedPath: undefined,
            executableName: "/usr/local/bin/zsh",
          },
        },
      ],
      cwd: dir,
      env: makePathEnv(dir),
      platform: process.platform,
    });
    expect(patterns).toEqual([whoami]);
  });
});
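The tests above pin down how shell wrappers are unwrapped before allow-always patterns are persisted: a `zsh -lc '<script>'` segment contributes its inner command words, deduplicated, rather than the shell binary itself. A minimal standalone sketch of that idea, with a hypothetical `innerCommandWords` helper and a deliberately simplified shell/flag model (the real `resolveAllowAlwaysPatterns` also resolves inner names against `cwd`/`env` PATH lookups):

```typescript
// Sketch only: treat the last argv entry of a known shell invocation as the
// script body and take the first word of each `&&`-chained command.
// SHELLS and the flag handling are assumptions, not the real resolver's logic.
const SHELLS = new Set(["sh", "bash", "zsh", "fish"]);

function innerCommandWords(argv: string[]): string[] {
  const exe = argv[0] ?? "";
  const name = exe.split("/").pop() ?? exe;
  if (!SHELLS.has(name)) {
    // Non-shell segment: the executable itself is the pattern.
    return argv.length > 0 ? [argv[0]] : [];
  }
  const script = argv[argv.length - 1] ?? "";
  const words = script
    .split("&&")
    .map((part) => part.trim().split(/\s+/)[0] ?? "")
    .filter(Boolean);
  return [...new Set(words)]; // deduplicate repeated inner commands
}
```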
@@ -196,6 +196,55 @@ describe("gateway lock", () => {
    staleSpy.mockRestore();
  });

  it("keeps lock when fs.stat fails until payload is stale", async () => {
    vi.useRealTimers();
    const env = await makeEnv();
    const { lockPath, configPath } = resolveLockPath(env);
    const payload = createLockPayload({ configPath, startTime: 111 });
    await fs.writeFile(lockPath, JSON.stringify(payload), "utf8");

    const procSpy = mockProcStatRead({
      onProcRead: () => {
        throw new Error("EACCES");
      },
    });
    const statSpy = vi
      .spyOn(fs, "stat")
      .mockRejectedValue(Object.assign(new Error("EPERM"), { code: "EPERM" }));

    const pending = acquireForTest(env, {
      timeoutMs: 20,
      staleMs: 10_000,
      platform: "linux",
    });
    await expect(pending).rejects.toBeInstanceOf(GatewayLockError);

    procSpy.mockRestore();

    const stalePayload = createLockPayload({
      configPath,
      startTime: 111,
      createdAt: new Date(0).toISOString(),
    });
    await fs.writeFile(lockPath, JSON.stringify(stalePayload), "utf8");

    const staleProcSpy = mockProcStatRead({
      onProcRead: () => {
        throw new Error("EACCES");
      },
    });

    const lock = await acquireForTest(env, {
      staleMs: 1,
      platform: "linux",
    });
    expect(lock).not.toBeNull();

    await lock?.release();
    staleProcSpy.mockRestore();
    statSpy.mockRestore();
  });

  it("returns null when multi-gateway override is enabled", async () => {
    const env = await makeEnv();
    const lock = await acquireGatewayLock({
@@ -236,7 +236,11 @@ export async function acquireGatewayLock(
       const st = await fs.stat(lockPath);
       stale = Date.now() - st.mtimeMs > staleMs;
     } catch {
-      stale = true;
+      // On Windows or locked filesystems we may be unable to stat the
+      // lock file even though the existing gateway is still healthy.
+      // Treat the lock as non-stale so we keep waiting instead of
+      // forcefully removing another gateway's lock.
+      stale = false;
     }
   }
   if (stale) {
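The hunk above flips the unstat-able case from "assume stale" to "assume healthy", and the test earlier in the commit exercises exactly that: an `EPERM` on `fs.stat` keeps the lock until the payload itself is stale. The decision can be sketched as a standalone function (hypothetical signature; the real code works on the lock file's mtime):

```typescript
// Sketch: decide staleness from an mtime probe that may throw.
// When the probe fails (e.g. EPERM on Windows), report "not stale" so a
// healthy gateway's lock is never force-removed just because we cannot stat it.
function lockIsStale(probeMtimeMs: () => number, staleMs: number, nowMs: number): boolean {
  try {
    return nowMs - probeMtimeMs() > staleMs;
  } catch {
    return false; // cannot stat => keep waiting on the existing lock
  }
}
```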
@@ -632,6 +632,38 @@ describe("applyMediaUnderstanding", () => {
    expect(ctx.Body).not.toContain("<file");
  });

  it("does not reclassify PDF attachments as text/plain", async () => {
    const pseudoPdf = Buffer.from("%PDF-1.7\n1 0 obj\n<< /Type /Catalog >>\nendobj\n", "utf8");
    const filePath = await createTempMediaFile({
      fileName: "report.pdf",
      content: pseudoPdf,
    });

    const cfg: OpenClawConfig = {
      ...createMediaDisabledConfig(),
      gateway: {
        http: {
          endpoints: {
            responses: {
              files: { allowedMimes: ["text/plain"] },
            },
          },
        },
      },
    };

    const { ctx, result } = await applyWithDisabledMedia({
      body: "<media:file>",
      mediaPath: filePath,
      mediaType: "application/pdf",
      cfg,
    });

    expect(result.appliedFile).toBe(false);
    expect(ctx.Body).toBe("<media:file>");
    expect(ctx.Body).not.toContain("<file");
  });

  it("respects configured allowedMimes for text-like attachments", async () => {
    const tsvText = "a\tb\tc\n1\t2\t3";
    const tsvPath = await createTempMediaFile({
@@ -382,7 +382,11 @@ async function extractFileBlocks(params: {
   }
   const utf16Charset = resolveUtf16Charset(bufferResult?.buffer);
   const textSample = decodeTextSample(bufferResult?.buffer);
-  const textLike = Boolean(utf16Charset) || looksLikeUtf8Text(bufferResult?.buffer);
+  // Do not coerce real PDFs into text/plain via printable-byte heuristics.
+  // PDFs have a dedicated extraction path in extractFileContentFromSource.
+  const allowTextHeuristic = normalizedRawMime !== "application/pdf";
+  const textLike =
+    allowTextHeuristic && (Boolean(utf16Charset) || looksLikeUtf8Text(bufferResult?.buffer));
   const guessedDelimited = textLike ? guessDelimitedMime(textSample) : undefined;
   const textHint =
     forcedTextMimeResolved ?? guessedDelimited ?? (textLike ? "text/plain" : undefined);
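The fix above gates the printable-text heuristic on the declared MIME so a mostly ASCII PDF header can no longer be sniffed into `text/plain`. Reduced to its essentials (helper name and signature are hypothetical):

```typescript
// Sketch: never let a byte-level "looks like text" heuristic override a
// declared PDF; PDFs are handled by their own extraction path.
function resolveTextHint(rawMime: string, looksLikeText: boolean): string | undefined {
  const allowTextHeuristic = rawMime !== "application/pdf";
  return allowTextHeuristic && looksLikeText ? "text/plain" : undefined;
}
```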
@@ -414,6 +414,132 @@ describe("QmdMemoryManager", () => {
    expect(addSessions).toBeDefined();
  });

  it("migrates unscoped legacy collections before adding scoped names", async () => {
    cfg = {
      ...cfg,
      memory: {
        backend: "qmd",
        qmd: {
          includeDefaultMemory: true,
          update: { interval: "0s", debounceMs: 60_000, onBoot: false },
          paths: [],
        },
      },
    } as OpenClawConfig;

    const legacyCollections = new Map<
      string,
      {
        path: string;
        mask: string;
      }
    >([
      ["memory-root", { path: workspaceDir, mask: "MEMORY.md" }],
      ["memory-alt", { path: workspaceDir, mask: "memory.md" }],
      ["memory-dir", { path: path.join(workspaceDir, "memory"), mask: "**/*.md" }],
    ]);
    const removeCalls: string[] = [];

    spawnMock.mockImplementation((_cmd: string, args: string[]) => {
      if (args[0] === "collection" && args[1] === "list") {
        const child = createMockChild({ autoClose: false });
        emitAndClose(
          child,
          "stdout",
          JSON.stringify(
            [...legacyCollections.entries()].map(([name, info]) => ({
              name,
              path: info.path,
              mask: info.mask,
            })),
          ),
        );
        return child;
      }
      if (args[0] === "collection" && args[1] === "remove") {
        const child = createMockChild({ autoClose: false });
        const name = args[2] ?? "";
        removeCalls.push(name);
        legacyCollections.delete(name);
        queueMicrotask(() => child.closeWith(0));
        return child;
      }
      if (args[0] === "collection" && args[1] === "add") {
        const child = createMockChild({ autoClose: false });
        const pathArg = args[2] ?? "";
        const name = args[args.indexOf("--name") + 1] ?? "";
        const mask = args[args.indexOf("--mask") + 1] ?? "";
        const hasConflict = [...legacyCollections.entries()].some(
          ([existingName, info]) =>
            existingName !== name && info.path === pathArg && info.mask === mask,
        );
        if (hasConflict) {
          emitAndClose(child, "stderr", "collection already exists", 1);
          return child;
        }
        legacyCollections.set(name, { path: pathArg, mask });
        queueMicrotask(() => child.closeWith(0));
        return child;
      }
      return createMockChild();
    });

    const { manager } = await createManager({ mode: "full" });
    await manager.close();

    expect(removeCalls).toEqual(["memory-root", "memory-alt", "memory-dir"]);
    expect(legacyCollections.has("memory-root-main")).toBe(true);
    expect(legacyCollections.has("memory-alt-main")).toBe(true);
    expect(legacyCollections.has("memory-dir-main")).toBe(true);
    expect(legacyCollections.has("memory-root")).toBe(false);
    expect(legacyCollections.has("memory-alt")).toBe(false);
    expect(legacyCollections.has("memory-dir")).toBe(false);
  });

  it("does not migrate unscoped collections when listed metadata differs", async () => {
    cfg = {
      ...cfg,
      memory: {
        backend: "qmd",
        qmd: {
          includeDefaultMemory: true,
          update: { interval: "0s", debounceMs: 60_000, onBoot: false },
          paths: [],
        },
      },
    } as OpenClawConfig;

    const differentPath = path.join(tmpRoot, "other-memory");
    await fs.mkdir(differentPath, { recursive: true });
    const removeCalls: string[] = [];
    spawnMock.mockImplementation((_cmd: string, args: string[]) => {
      if (args[0] === "collection" && args[1] === "list") {
        const child = createMockChild({ autoClose: false });
        emitAndClose(
          child,
          "stdout",
          JSON.stringify([{ name: "memory-root", path: differentPath, mask: "MEMORY.md" }]),
        );
        return child;
      }
      if (args[0] === "collection" && args[1] === "remove") {
        const child = createMockChild({ autoClose: false });
        removeCalls.push(args[2] ?? "");
        queueMicrotask(() => child.closeWith(0));
        return child;
      }
      return createMockChild();
    });

    const { manager } = await createManager({ mode: "full" });
    await manager.close();

    expect(removeCalls).not.toContain("memory-root");
    expect(logDebugMock).toHaveBeenCalledWith(
      expect.stringContaining("qmd legacy collection migration skipped for memory-root"),
    );
  });

  it("times out qmd update during sync when configured", async () => {
    vi.useFakeTimers();
    cfg = {
@@ -985,8 +1111,9 @@ describe("QmdMemoryManager", () => {
    );
    expect(mcporterCall).toBeDefined();
    const spawnOpts = mcporterCall?.[2] as { env?: NodeJS.ProcessEnv } | undefined;
-    expect(spawnOpts?.env?.XDG_CONFIG_HOME).toContain("/agents/main/qmd/xdg-config");
-    expect(spawnOpts?.env?.XDG_CACHE_HOME).toContain("/agents/main/qmd/xdg-cache");
+    const normalizePath = (value?: string) => value?.replace(/\\/g, "/");
+    expect(normalizePath(spawnOpts?.env?.XDG_CONFIG_HOME)).toContain("/agents/main/qmd/xdg-config");
+    expect(normalizePath(spawnOpts?.env?.XDG_CACHE_HOME)).toContain("/agents/main/qmd/xdg-cache");

    await manager.close();
  });
@@ -73,6 +73,13 @@ type ListedCollection = {
  pattern?: string;
};

type ManagedCollection = {
  name: string;
  path: string;
  pattern: string;
  kind: "memory" | "custom" | "sessions";
};

type QmdManagerMode = "full" | "status";

export class QmdMemoryManager implements MemorySearchManager {

@@ -269,6 +276,8 @@ export class QmdMemoryManager implements MemorySearchManager {
      // ignore; older qmd versions might not support list --json.
    }

    await this.migrateLegacyUnscopedCollections(existing);

    for (const collection of this.qmd.collections) {
      const listed = existing.get(collection.name);
      if (listed && !this.shouldRebindCollection(collection, listed)) {
@@ -297,6 +306,61 @@ export class QmdMemoryManager implements MemorySearchManager {
    }
  }

  private async migrateLegacyUnscopedCollections(
    existing: Map<string, ListedCollection>,
  ): Promise<void> {
    for (const collection of this.qmd.collections) {
      if (existing.has(collection.name)) {
        continue;
      }
      const legacyName = this.deriveLegacyCollectionName(collection.name);
      if (!legacyName) {
        continue;
      }
      const listedLegacy = existing.get(legacyName);
      if (!listedLegacy) {
        continue;
      }
      if (!this.canMigrateLegacyCollection(collection, listedLegacy)) {
        log.debug(
          `qmd legacy collection migration skipped for ${legacyName} (path/pattern mismatch)`,
        );
        continue;
      }
      try {
        await this.removeCollection(legacyName);
        existing.delete(legacyName);
      } catch (err) {
        const message = err instanceof Error ? err.message : String(err);
        if (!this.isCollectionMissingError(message)) {
          log.warn(`qmd collection remove failed for ${legacyName}: ${message}`);
        }
      }
    }
  }

  private deriveLegacyCollectionName(scopedName: string): string | null {
    const agentSuffix = `-${this.sanitizeCollectionNameSegment(this.agentId)}`;
    if (!scopedName.endsWith(agentSuffix)) {
      return null;
    }
    const legacyName = scopedName.slice(0, -agentSuffix.length).trim();
    return legacyName || null;
  }

  private canMigrateLegacyCollection(
    collection: ManagedCollection,
    listedLegacy: ListedCollection,
  ): boolean {
    if (listedLegacy.path && !this.pathsMatch(listedLegacy.path, collection.path)) {
      return false;
    }
    if (typeof listedLegacy.pattern === "string" && listedLegacy.pattern !== collection.pattern) {
      return false;
    }
    return true;
  }

  private async ensureCollectionPath(collection: {
    path: string;
    pattern: string;

@@ -336,10 +400,7 @@ export class QmdMemoryManager implements MemorySearchManager {
    });
  }

-  private shouldRebindCollection(
-    collection: { kind: string; path: string; pattern: string },
-    listed: ListedCollection,
-  ): boolean {
+  private shouldRebindCollection(collection: ManagedCollection, listed: ListedCollection): boolean {
    if (!listed.path) {
      // Older qmd versions may only return names from `collection list --json`.
      // Rebind managed collections so stale path bindings cannot survive upgrades.
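The migration keys off `deriveLegacyCollectionName`: a scoped name like `memory-root-main` maps back to the unscoped `memory-root` only when it carries the current agent's suffix. Sketched in isolation, with the name sanitization reduced to a lowercase pass (an assumption; the real `sanitizeCollectionNameSegment` does more):

```typescript
// Sketch: map an agent-scoped collection name back to its unscoped legacy name,
// or null when the name is not scoped to this agent (or nothing remains).
function legacyName(scopedName: string, agentId: string): string | null {
  const suffix = `-${agentId.toLowerCase()}`;
  if (!scopedName.endsWith(suffix)) {
    return null;
  }
  const legacy = scopedName.slice(0, -suffix.length).trim();
  return legacy.length > 0 ? legacy : null;
}
```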
@@ -38,6 +38,63 @@ describe("extractKeywords", () => {
    expect(keywords).toContain("bug");
  });

  it("extracts keywords from Korean conversational query", () => {
    const keywords = extractKeywords("어제 논의한 배포 전략");
    expect(keywords).toContain("논의한");
    expect(keywords).toContain("배포");
    expect(keywords).toContain("전략");
    // Should not include stop words
    expect(keywords).not.toContain("어제");
  });

  it("strips Korean particles to extract stems", () => {
    const keywords = extractKeywords("서버에서 발생한 에러를 확인");
    expect(keywords).toContain("서버");
    expect(keywords).toContain("에러");
    expect(keywords).toContain("확인");
  });

  it("filters Korean stop words including inflected forms", () => {
    const keywords = extractKeywords("나는 그리고 그래서");
    expect(keywords).not.toContain("나");
    expect(keywords).not.toContain("나는");
    expect(keywords).not.toContain("그리고");
    expect(keywords).not.toContain("그래서");
  });

  it("filters inflected Korean stop words not explicitly listed", () => {
    const keywords = extractKeywords("그녀는 우리는");
    expect(keywords).not.toContain("그녀는");
    expect(keywords).not.toContain("우리는");
    expect(keywords).not.toContain("그녀");
    expect(keywords).not.toContain("우리");
  });

  it("does not produce bogus single-char stems from particle stripping", () => {
    const keywords = extractKeywords("논의");
    expect(keywords).toContain("논의");
    expect(keywords).not.toContain("논");
  });

  it("strips longest Korean trailing particles first", () => {
    const keywords = extractKeywords("기능으로 설명");
    expect(keywords).toContain("기능");
    expect(keywords).not.toContain("기능으");
  });

  it("keeps stripped ASCII stems for mixed Korean tokens", () => {
    const keywords = extractKeywords("API를 배포했다");
    expect(keywords).toContain("api");
    expect(keywords).toContain("배포했다");
  });

  it("handles mixed Korean and English query", () => {
    const keywords = extractKeywords("API 배포에 대한 논의");
    expect(keywords).toContain("api");
    expect(keywords).toContain("배포");
    expect(keywords).toContain("논의");
  });

  it("handles empty query", () => {
    expect(extractKeywords("")).toEqual([]);
    expect(extractKeywords(" ")).toEqual([]);
@@ -118,6 +118,161 @@ const STOP_WORDS_EN = new Set([
  "give",
]);

const STOP_WORDS_KO = new Set([
  // Particles (조사)
  "은",
  "는",
  "이",
  "가",
  "을",
  "를",
  "의",
  "에",
  "에서",
  "로",
  "으로",
  "와",
  "과",
  "도",
  "만",
  "까지",
  "부터",
  "한테",
  "에게",
  "께",
  "처럼",
  "같이",
  "보다",
  "마다",
  "밖에",
  "대로",
  // Pronouns (대명사)
  "나",
  "나는",
  "내가",
  "나를",
  "너",
  "우리",
  "저",
  "저희",
  "그",
  "그녀",
  "그들",
  "이것",
  "저것",
  "그것",
  "여기",
  "저기",
  "거기",
  // Common verbs / auxiliaries (일반 동사/보조 동사)
  "있다",
  "없다",
  "하다",
  "되다",
  "이다",
  "아니다",
  "보다",
  "주다",
  "오다",
  "가다",
  // Nouns (의존 명사 / vague)
  "것",
  "거",
  "등",
  "수",
  "때",
  "곳",
  "중",
  "분",
  // Adverbs
  "잘",
  "더",
  "또",
  "매우",
  "정말",
  "아주",
  "많이",
  "너무",
  "좀",
  // Conjunctions
  "그리고",
  "하지만",
  "그래서",
  "그런데",
  "그러나",
  "또는",
  "그러면",
  // Question words
  "왜",
  "어떻게",
  "뭐",
  "언제",
  "어디",
  "누구",
  "무엇",
  "어떤",
  // Time (vague)
  "어제",
  "오늘",
  "내일",
  "최근",
  "지금",
  "아까",
  "나중",
  "전에",
  // Request words
  "제발",
  "부탁",
]);

// Common Korean trailing particles to strip from words for tokenization.
// Sorted by descending length so longest-match-first is guaranteed.
const KO_TRAILING_PARTICLES = [
  "에서",
  "으로",
  "에게",
  "한테",
  "처럼",
  "같이",
  "보다",
  "까지",
  "부터",
  "마다",
  "밖에",
  "대로",
  "은",
  "는",
  "이",
  "가",
  "을",
  "를",
  "의",
  "에",
  "로",
  "와",
  "과",
  "도",
  "만",
].toSorted((a, b) => b.length - a.length);

function stripKoreanTrailingParticle(token: string): string | null {
  for (const particle of KO_TRAILING_PARTICLES) {
    if (token.length > particle.length && token.endsWith(particle)) {
      return token.slice(0, -particle.length);
    }
  }
  return null;
}

function isUsefulKoreanStem(stem: string): boolean {
  // Prevent bogus one-syllable stems from words like "논의" -> "논".
  if (/[\uac00-\ud7af]/.test(stem)) {
    return stem.length >= 2;
  }
  // Keep stripped ASCII stems for mixed tokens like "API를" -> "api".
  return /^[a-z0-9_]+$/i.test(stem);
}
const STOP_WORDS_ZH = new Set([
  // Pronouns
  "我",

@@ -240,7 +395,7 @@ function isValidKeyword(token: string): boolean {
}

/**
- * Simple tokenizer that handles both English and Chinese text.
+ * Simple tokenizer that handles English, Chinese, and Korean text.
 * For Chinese, we do character-based splitting since we don't have a proper segmenter.
 * For English, we split on whitespace and punctuation.
 */

@@ -252,7 +407,7 @@ function tokenize(text: string): string[] {
  const segments = normalized.split(/[\s\p{P}]+/u).filter(Boolean);

  for (const segment of segments) {
-    // Check if segment contains CJK characters
+    // Check if segment contains CJK characters (Chinese)
    if (/[\u4e00-\u9fff]/.test(segment)) {
      // For Chinese, extract character n-grams (unigrams and bigrams)
      const chars = Array.from(segment).filter((c) => /[\u4e00-\u9fff]/.test(c));

@@ -262,6 +417,18 @@ function tokenize(text: string): string[] {
      for (let i = 0; i < chars.length - 1; i++) {
        tokens.push(chars[i] + chars[i + 1]);
      }
    } else if (/[\uac00-\ud7af\u3131-\u3163]/.test(segment)) {
      // For Korean (Hangul syllables and jamo), keep the word as-is unless it is
      // effectively a stop word once trailing particles are removed.
      const stem = stripKoreanTrailingParticle(segment);
      const stemIsStopWord = stem !== null && STOP_WORDS_KO.has(stem);
      if (!STOP_WORDS_KO.has(segment) && !stemIsStopWord) {
        tokens.push(segment);
      }
      // Also emit particle-stripped stems when they are useful keywords.
      if (stem && !STOP_WORDS_KO.has(stem) && isUsefulKoreanStem(stem)) {
        tokens.push(stem);
      }
    } else {
      // For non-CJK, keep as single token
      tokens.push(segment);

@@ -286,7 +453,7 @@ export function extractKeywords(query: string): string[] {

  for (const token of tokens) {
    // Skip stop words
-    if (STOP_WORDS_EN.has(token) || STOP_WORDS_ZH.has(token)) {
+    if (STOP_WORDS_EN.has(token) || STOP_WORDS_ZH.has(token) || STOP_WORDS_KO.has(token)) {
      continue;
    }
    // Skip invalid keywords
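The Korean branch above strips trailing particles longest-first and only keeps stems that are either multi-syllable Hangul or plain ASCII. The core of that stripping rule can be sketched with an abbreviated particle list (a subset of `KO_TRAILING_PARTICLES`; the full list in the diff includes more single- and double-character particles):

```typescript
// Sketch: longest-match-first particle stripping, mirroring the rule above.
// PARTICLES is an abbreviated stand-in for the real KO_TRAILING_PARTICLES.
const PARTICLES = ["에서", "으로", "까지", "는", "를", "가", "에", "로"].sort(
  (a, b) => b.length - a.length,
);

function stripParticle(token: string): string | null {
  for (const particle of PARTICLES) {
    // token must be strictly longer than the particle, so "에서" itself
    // never strips down to an empty stem.
    if (token.length > particle.length && token.endsWith(particle)) {
      return token.slice(0, -particle.length);
    }
  }
  return null;
}
```

Because the list is sorted by descending length, "서버에서" yields "서버" (matching "에서") rather than "서버에" (matching the shorter "에" first).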
@@ -9,6 +9,7 @@ import {
  evaluateShellAllowlist,
  recordAllowlistUse,
  requiresExecApproval,
+  resolveAllowAlwaysPatterns,
  resolveExecApprovals,
  resolveSafeBins,
  type ExecAllowlistEntry,

@@ -314,8 +315,13 @@ export async function handleSystemRunInvoke(opts: {
  }
  if (approvalDecision === "allow-always" && security === "allowlist") {
    if (analysisOk) {
-      for (const segment of segments) {
-        const pattern = segment.resolution?.resolvedPath ?? "";
+      const patterns = resolveAllowAlwaysPatterns({
+        segments,
+        cwd: opts.params.cwd ?? undefined,
+        env,
+        platform: process.platform,
+      });
+      for (const pattern of patterns) {
        if (pattern) {
          addAllowlistEntry(approvals.file, agentId, pattern);
        }
34 src/plugins/enable.test.ts Normal file
@@ -0,0 +1,34 @@
import { describe, expect, it } from "vitest";
import type { OpenClawConfig } from "../config/config.js";
import { enablePluginInConfig } from "./enable.js";

describe("enablePluginInConfig", () => {
  it("enables a plugin entry", () => {
    const cfg: OpenClawConfig = {};
    const result = enablePluginInConfig(cfg, "google-antigravity-auth");
    expect(result.enabled).toBe(true);
    expect(result.config.plugins?.entries?.["google-antigravity-auth"]?.enabled).toBe(true);
  });

  it("adds plugin to allowlist when allowlist is configured", () => {
    const cfg: OpenClawConfig = {
      plugins: {
        allow: ["memory-core"],
      },
    };
    const result = enablePluginInConfig(cfg, "google-antigravity-auth");
    expect(result.enabled).toBe(true);
    expect(result.config.plugins?.allow).toEqual(["memory-core", "google-antigravity-auth"]);
  });

  it("refuses enable when plugin is denylisted", () => {
    const cfg: OpenClawConfig = {
      plugins: {
        deny: ["google-antigravity-auth"],
      },
    };
    const result = enablePluginInConfig(cfg, "google-antigravity-auth");
    expect(result.enabled).toBe(false);
    expect(result.reason).toBe("blocked by denylist");
  });
});
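The tests above fix two invariants for enabling a plugin: a denylist entry blocks the enable outright, and an existing allowlist is extended so the newly enabled plugin is actually loadable. A reduced sketch of that gating (shapes simplified from the real `OpenClawConfig` types):

```typescript
// Sketch: denylist wins; allowlist, when present, is extended with the plugin id.
type PluginLists = { allow?: string[]; deny?: string[] };

function enablePlugin(
  lists: PluginLists,
  id: string,
): { enabled: boolean; allow?: string[] } {
  if (lists.deny?.includes(id)) {
    return { enabled: false };
  }
  // Only touch the allowlist if one is configured; dedupe via Set.
  const allow = lists.allow ? [...new Set([...lists.allow, id])] : undefined;
  return { enabled: true, allow };
}
```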
@@ -26,11 +26,45 @@ describe("roleScopesAllow", () => {
    ).toBe(true);
  });

-  it("keeps non-read operator scopes explicit", () => {
+  it("treats operator.write as satisfied by write/admin scopes", () => {
    expect(
      roleScopesAllow({
        role: "operator",
        requestedScopes: ["operator.write"],
        allowedScopes: ["operator.write"],
      }),
    ).toBe(true);
    expect(
      roleScopesAllow({
        role: "operator",
        requestedScopes: ["operator.write"],
        allowedScopes: ["operator.admin"],
      }),
    ).toBe(true);
  });

  it("treats operator.approvals/operator.pairing as satisfied by operator.admin", () => {
    expect(
      roleScopesAllow({
        role: "operator",
        requestedScopes: ["operator.approvals"],
        allowedScopes: ["operator.admin"],
      }),
    ).toBe(true);
    expect(
      roleScopesAllow({
        role: "operator",
        requestedScopes: ["operator.pairing"],
        allowedScopes: ["operator.admin"],
      }),
    ).toBe(true);
  });

  it("does not treat operator.admin as satisfying non-operator scopes", () => {
    expect(
      roleScopesAllow({
        role: "operator",
        requestedScopes: ["system.run"],
        allowedScopes: ["operator.admin"],
      }),
    ).toBe(false);
@@ -2,6 +2,7 @@ const OPERATOR_ROLE = "operator";
const OPERATOR_ADMIN_SCOPE = "operator.admin";
const OPERATOR_READ_SCOPE = "operator.read";
const OPERATOR_WRITE_SCOPE = "operator.write";
const OPERATOR_SCOPE_PREFIX = "operator.";

function normalizeScopeList(scopes: readonly string[]): string[] {
  const out = new Set<string>();

@@ -15,12 +16,14 @@ function normalizeScopeList(scopes: readonly string[]): string[] {
}

function operatorScopeSatisfied(requestedScope: string, granted: Set<string>): boolean {
+  if (granted.has(OPERATOR_ADMIN_SCOPE) && requestedScope.startsWith(OPERATOR_SCOPE_PREFIX)) {
+    return true;
+  }
  if (requestedScope === OPERATOR_READ_SCOPE) {
-    return (
-      granted.has(OPERATOR_READ_SCOPE) ||
-      granted.has(OPERATOR_WRITE_SCOPE) ||
-      granted.has(OPERATOR_ADMIN_SCOPE)
-    );
+    return granted.has(OPERATOR_READ_SCOPE) || granted.has(OPERATOR_WRITE_SCOPE);
  }
+  if (requestedScope === OPERATOR_WRITE_SCOPE) {
+    return granted.has(OPERATOR_WRITE_SCOPE);
+  }
  return granted.has(requestedScope);
}
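After this change, `operator.admin` implies every `operator.*` scope, `operator.write` implies `operator.read`, and nothing operator-flavored leaks into non-operator scopes like `system.run`. The implication order can be sketched as:

```typescript
// Sketch of the scope-implication rules exercised by the tests above:
// admin => any operator.* scope; write => read; everything else is exact-match.
function scopeSatisfied(requested: string, granted: Set<string>): boolean {
  if (granted.has("operator.admin") && requested.startsWith("operator.")) {
    return true;
  }
  if (requested === "operator.read") {
    return granted.has("operator.read") || granted.has("operator.write");
  }
  return granted.has(requested);
}
```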
@@ -8,6 +8,8 @@ const mocks = vi.hoisted(() => ({
  finalizeInboundContextMock: vi.fn(),
  resolveConversationLabelMock: vi.fn(),
  createReplyPrefixOptionsMock: vi.fn(),
+  recordSessionMetaFromInboundMock: vi.fn(),
+  resolveStorePathMock: vi.fn(),
}));

vi.mock("../../auto-reply/reply/provider-dispatcher.js", () => ({

@@ -35,6 +37,12 @@ vi.mock("../../channels/reply-prefix.js", () => ({
  createReplyPrefixOptions: (...args: unknown[]) => mocks.createReplyPrefixOptionsMock(...args),
}));

+vi.mock("../../config/sessions.js", () => ({
+  recordSessionMetaFromInbound: (...args: unknown[]) =>
+    mocks.recordSessionMetaFromInboundMock(...args),
+  resolveStorePath: (...args: unknown[]) => mocks.resolveStorePathMock(...args),
+}));
+
type SlashHarnessMocks = {
  dispatchMock: ReturnType<typeof vi.fn>;
  readAllowFromStoreMock: ReturnType<typeof vi.fn>;

@@ -43,6 +51,8 @@ type SlashHarnessMocks = {
  finalizeInboundContextMock: ReturnType<typeof vi.fn>;
  resolveConversationLabelMock: ReturnType<typeof vi.fn>;
  createReplyPrefixOptionsMock: ReturnType<typeof vi.fn>;
+  recordSessionMetaFromInboundMock: ReturnType<typeof vi.fn>;
+  resolveStorePathMock: ReturnType<typeof vi.fn>;
};

export function getSlackSlashMocks(): SlashHarnessMocks {

@@ -61,4 +71,6 @@ export function resetSlackSlashMocks() {
  mocks.finalizeInboundContextMock.mockReset().mockImplementation((ctx: unknown) => ctx);
  mocks.resolveConversationLabelMock.mockReset().mockReturnValue(undefined);
  mocks.createReplyPrefixOptionsMock.mockReset().mockReturnValue({ onModelSelected: () => {} });
+  mocks.recordSessionMetaFromInboundMock.mockReset().mockResolvedValue(undefined);
+  mocks.resolveStorePathMock.mockReset().mockReturnValue("/tmp/openclaw-sessions.json");
}
@@ -210,6 +210,14 @@ function findFirstActionsBlock(payload: { blocks?: Array<{ type: string }> }) {
    | undefined;
}

function createDeferred<T>() {
  let resolve!: (value: T | PromiseLike<T>) => void;
  const promise = new Promise<T>((res) => {
    resolve = res;
  });
  return { promise, resolve };
}
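`createDeferred` above exposes a promise together with its resolver, which lets the session-metadata test hold dispatch until persistence is explicitly released. A self-contained usage sketch (the `demo` orchestration is illustrative, not part of the test file):

```typescript
// Same shape as the helper in the diff above.
function createDeferredSketch<T>() {
  let resolve!: (value: T | PromiseLike<T>) => void;
  const promise = new Promise<T>((res) => {
    resolve = res;
  });
  return { promise, resolve };
}

// Hold a consumer until the producer explicitly releases the gate,
// then observe that "persisted" happened before "dispatched".
async function demo(): Promise<string[]> {
  const order: string[] = [];
  const gate = createDeferredSketch<void>();
  const consumer = gate.promise.then(() => {
    order.push("dispatched");
  });
  order.push("persisted");
  gate.resolve();
  await consumer;
  return order;
}
```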
function createArgMenusHarness() {
  const commands = new Map<string, (args: unknown) => Promise<void>>();
  const actions = new Map<string, (args: unknown) => Promise<void>>();

@@ -370,6 +378,62 @@ describe("Slack native command argument menus", () => {
    harness.postEphemeral.mockClear();
  });

  it("registers options handlers without losing app receiver binding", async () => {
    const commands = new Map<string, (args: unknown) => Promise<void>>();
    const actions = new Map<string, (args: unknown) => Promise<void>>();
    const options = new Map<string, (args: unknown) => Promise<void>>();
    const postEphemeral = vi.fn().mockResolvedValue({ ok: true });
    const app = {
      client: { chat: { postEphemeral } },
      command: (name: string, handler: (args: unknown) => Promise<void>) => {
        commands.set(name, handler);
      },
      action: (id: string, handler: (args: unknown) => Promise<void>) => {
        actions.set(id, handler);
      },
      options: function (this: unknown, id: string, handler: (args: unknown) => Promise<void>) {
        expect(this).toBe(app);
        options.set(id, handler);
      },
    };
    const ctx = {
      cfg: { commands: { native: true, nativeSkills: false } },
      runtime: {},
      botToken: "bot-token",
      botUserId: "bot",
      teamId: "T1",
      allowFrom: ["*"],
      dmEnabled: true,
      dmPolicy: "open",
      groupDmEnabled: false,
      groupDmChannels: [],
      defaultRequireMention: true,
      groupPolicy: "open",
      useAccessGroups: false,
      channelsConfig: undefined,
      slashCommand: {
        enabled: true,
        name: "openclaw",
        ephemeral: true,
        sessionPrefix: "slack:slash",
      },
      textLimit: 4000,
      app,
      isChannelAllowed: () => true,
      resolveChannelName: async () => ({ name: "dm", type: "im" }),
      resolveUserName: async () => ({ name: "Ada" }),
    } as unknown;
    const account = {
      accountId: "acct",
      config: { commands: { native: true, nativeSkills: false } },
    } as unknown;

    await registerCommands(ctx, account);
    expect(commands.size).toBeGreaterThan(0);
    expect(actions.has("openclaw_cmdarg")).toBe(true);
    expect(options.has("openclaw_cmdarg")).toBe(true);
  });

  it("shows a button menu when required args are omitted", async () => {
    const { respond } = await runCommandHandler(usageHandler);
    const actions = expectArgMenuLayout(respond);
@ -803,3 +867,47 @@ describe("slack slash commands access groups", () => {
    expectUnauthorizedResponse(respond);
  });
});

describe("slack slash command session metadata", () => {
  const { recordSessionMetaFromInboundMock } = getSlackSlashMocks();

  it("calls recordSessionMetaFromInbound after dispatching a slash command", async () => {
    const harness = createPolicyHarness({ groupPolicy: "open" });
    await registerAndRunPolicySlash({ harness });

    expect(dispatchMock).toHaveBeenCalledTimes(1);
    expect(recordSessionMetaFromInboundMock).toHaveBeenCalledTimes(1);
    const call = recordSessionMetaFromInboundMock.mock.calls[0]?.[0] as {
      sessionKey?: string;
      ctx?: { OriginatingChannel?: string };
    };
    expect(call.ctx?.OriginatingChannel).toBe("slack");
    expect(call.sessionKey).toBeDefined();
  });

  it("awaits session metadata persistence before dispatch", async () => {
    const deferred = createDeferred<void>();
    recordSessionMetaFromInboundMock.mockReset().mockReturnValue(deferred.promise);

    const harness = createPolicyHarness({ groupPolicy: "open" });
    await registerCommands(harness.ctx, harness.account);

    const runPromise = runSlashHandler({
      commands: harness.commands,
      command: {
        channel_id: harness.channelId,
        channel_name: harness.channelName,
      },
    });

    await vi.waitFor(() => {
      expect(recordSessionMetaFromInboundMock).toHaveBeenCalledTimes(1);
    });
    expect(dispatchMock).not.toHaveBeenCalled();

    deferred.resolve();
    await runPromise;

    expect(dispatchMock).toHaveBeenCalledTimes(1);
  });
});

@ -539,9 +539,14 @@ export async function registerSlackMonitorSlashCommands(params: {
    import("../../auto-reply/reply/inbound-context.js"),
    import("../../auto-reply/reply/provider-dispatcher.js"),
  ]);
  const [{ resolveConversationLabel }, { createReplyPrefixOptions }] = await Promise.all([
  const [
    { resolveConversationLabel },
    { createReplyPrefixOptions },
    { recordSessionMetaFromInbound, resolveStorePath },
  ] = await Promise.all([
    import("../../channels/conversation-label.js"),
    import("../../channels/reply-prefix.js"),
    import("../../config/sessions.js"),
  ]);

  const route = resolveAgentRoute({
@ -605,6 +610,19 @@ export async function registerSlackMonitorSlashCommands(params: {
    OriginatingTo: `user:${command.user_id}`,
  });

  const storePath = resolveStorePath(cfg.session?.store, {
    agentId: route.agentId,
  });
  try {
    await recordSessionMetaFromInbound({
      storePath,
      sessionKey: ctxPayload.SessionKey ?? route.sessionKey,
      ctx: ctxPayload,
    });
  } catch (err) {
    runtime.error?.(danger(`slack slash: failed updating session meta: ${String(err)}`));
  }

  const { onModelSelected, ...prefixOptions } = createReplyPrefixOptions({
    cfg,
    agentId: route.agentId,
@ -734,21 +752,19 @@ export async function registerSlackMonitorSlashCommands(params: {
  }

  const registerArgOptions = () => {
    const optionsHandler = (
      ctx.app as unknown as {
        options?: (
          actionId: string,
          handler: (args: {
            ack: (payload: { options: unknown[] }) => Promise<void>;
            body: unknown;
          }) => Promise<void>,
        ) => void;
      }
    ).options;
    if (typeof optionsHandler !== "function") {
    const appWithOptions = ctx.app as unknown as {
      options?: (
        actionId: string,
        handler: (args: {
          ack: (payload: { options: unknown[] }) => Promise<void>;
          body: unknown;
        }) => Promise<void>,
      ) => void;
    };
    if (typeof appWithOptions.options !== "function") {
      return;
    }
    optionsHandler(SLACK_COMMAND_ARG_ACTION_ID, async ({ ack, body }) => {
    appWithOptions.options(SLACK_COMMAND_ARG_ACTION_ID, async ({ ack, body }) => {
      const typedBody = body as {
        value?: string;
        user?: { id?: string };

@ -137,7 +137,13 @@ describe("dispatchTelegramMessage draft streaming", () => {
  }

  function createBot(): Bot {
    return { api: { sendMessage: vi.fn(), editMessageText: vi.fn() } } as unknown as Bot;
    return {
      api: {
        sendMessage: vi.fn(),
        editMessageText: vi.fn(),
        deleteMessage: vi.fn().mockResolvedValue(true),
      },
    } as unknown as Bot;
  }

  function createRuntime(): Parameters<typeof dispatchTelegramMessage>[0]["runtime"] {
@ -154,10 +160,12 @@ describe("dispatchTelegramMessage draft streaming", () => {
    context: TelegramMessageContext;
    telegramCfg?: Parameters<typeof dispatchTelegramMessage>[0]["telegramCfg"];
    streamMode?: Parameters<typeof dispatchTelegramMessage>[0]["streamMode"];
    bot?: Bot;
  }) {
    const bot = params.bot ?? createBot();
    await dispatchTelegramMessage({
      context: params.context,
      bot: createBot(),
      bot,
      cfg: {},
      runtime: createRuntime(),
      replyToMode: "first",
@ -577,6 +585,141 @@ describe("dispatchTelegramMessage draft streaming", () => {
    expect(deliverReplies).not.toHaveBeenCalled();
  });

  it("maps finals correctly when first preview id resolves after message boundary", async () => {
    let answerMessageId: number | undefined;
    let answerDraftParams:
      | {
          onSupersededPreview?: (preview: { messageId: number; textSnapshot: string }) => void;
        }
      | undefined;
    const answerDraftStream = {
      update: vi.fn().mockImplementation((text: string) => {
        if (text.includes("Message B")) {
          answerMessageId = 1002;
        }
      }),
      flush: vi.fn().mockResolvedValue(undefined),
      messageId: vi.fn().mockImplementation(() => answerMessageId),
      clear: vi.fn().mockResolvedValue(undefined),
      stop: vi.fn().mockResolvedValue(undefined),
      forceNewMessage: vi.fn().mockImplementation(() => {
        answerMessageId = undefined;
      }),
    };
    const reasoningDraftStream = createDraftStream();
    createTelegramDraftStream
      .mockImplementationOnce((params) => {
        answerDraftParams = params as typeof answerDraftParams;
        return answerDraftStream;
      })
      .mockImplementationOnce(() => reasoningDraftStream);
    dispatchReplyWithBufferedBlockDispatcher.mockImplementation(
      async ({ dispatcherOptions, replyOptions }) => {
        await replyOptions?.onPartialReply?.({ text: "Message A partial" });
        await replyOptions?.onAssistantMessageStart?.();
        await replyOptions?.onPartialReply?.({ text: "Message B partial" });
        // Simulate late resolution of message A preview ID after boundary rotation.
        answerDraftParams?.onSupersededPreview?.({
          messageId: 1001,
          textSnapshot: "Message A partial",
        });

        await dispatcherOptions.deliver({ text: "Message A final" }, { kind: "final" });
        await dispatcherOptions.deliver({ text: "Message B final" }, { kind: "final" });
        return { queuedFinal: true };
      },
    );
    deliverReplies.mockResolvedValue({ delivered: true });
    editMessageTelegram.mockResolvedValue({ ok: true, chatId: "123", messageId: "1001" });

    await dispatchWithContext({ context: createContext(), streamMode: "partial" });

    expect(editMessageTelegram).toHaveBeenNthCalledWith(
      1,
      123,
      1001,
      "Message A final",
      expect.any(Object),
    );
    expect(editMessageTelegram).toHaveBeenNthCalledWith(
      2,
      123,
      1002,
      "Message B final",
      expect.any(Object),
    );
    expect(deliverReplies).not.toHaveBeenCalled();
  });

  it("maps finals correctly when archived preview id arrives during final flush", async () => {
    let answerMessageId: number | undefined;
    let answerDraftParams:
      | {
          onSupersededPreview?: (preview: { messageId: number; textSnapshot: string }) => void;
        }
      | undefined;
    let emittedSupersededPreview = false;
    const answerDraftStream = {
      update: vi.fn().mockImplementation((text: string) => {
        if (text.includes("Message B")) {
          answerMessageId = 1002;
        }
      }),
      flush: vi.fn().mockImplementation(async () => {
        if (!emittedSupersededPreview) {
          emittedSupersededPreview = true;
          answerDraftParams?.onSupersededPreview?.({
            messageId: 1001,
            textSnapshot: "Message A partial",
          });
        }
      }),
      messageId: vi.fn().mockImplementation(() => answerMessageId),
      clear: vi.fn().mockResolvedValue(undefined),
      stop: vi.fn().mockResolvedValue(undefined),
      forceNewMessage: vi.fn().mockImplementation(() => {
        answerMessageId = undefined;
      }),
    };
    const reasoningDraftStream = createDraftStream();
    createTelegramDraftStream
      .mockImplementationOnce((params) => {
        answerDraftParams = params as typeof answerDraftParams;
        return answerDraftStream;
      })
      .mockImplementationOnce(() => reasoningDraftStream);
    dispatchReplyWithBufferedBlockDispatcher.mockImplementation(
      async ({ dispatcherOptions, replyOptions }) => {
        await replyOptions?.onPartialReply?.({ text: "Message A partial" });
        await replyOptions?.onAssistantMessageStart?.();
        await replyOptions?.onPartialReply?.({ text: "Message B partial" });
        await dispatcherOptions.deliver({ text: "Message A final" }, { kind: "final" });
        await dispatcherOptions.deliver({ text: "Message B final" }, { kind: "final" });
        return { queuedFinal: true };
      },
    );
    deliverReplies.mockResolvedValue({ delivered: true });
    editMessageTelegram.mockResolvedValue({ ok: true, chatId: "123", messageId: "1001" });

    await dispatchWithContext({ context: createContext(), streamMode: "partial" });

    expect(editMessageTelegram).toHaveBeenNthCalledWith(
      1,
      123,
      1001,
      "Message A final",
      expect.any(Object),
    );
    expect(editMessageTelegram).toHaveBeenNthCalledWith(
      2,
      123,
      1002,
      "Message B final",
      expect.any(Object),
    );
    expect(deliverReplies).not.toHaveBeenCalled();
  });

  it.each(["block", "partial"] as const)(
    "splits reasoning lane only when a later reasoning block starts (%s mode)",
    async (streamMode) => {
@ -604,6 +747,46 @@ describe("dispatchTelegramMessage draft streaming", () => {
    },
  );

  it("cleans superseded reasoning previews after lane rotation", async () => {
    let reasoningDraftParams:
      | {
          onSupersededPreview?: (preview: { messageId: number; textSnapshot: string }) => void;
        }
      | undefined;
    const answerDraftStream = createDraftStream(999);
    const reasoningDraftStream = createDraftStream(111);
    createTelegramDraftStream
      .mockImplementationOnce(() => answerDraftStream)
      .mockImplementationOnce((params) => {
        reasoningDraftParams = params as typeof reasoningDraftParams;
        return reasoningDraftStream;
      });
    dispatchReplyWithBufferedBlockDispatcher.mockImplementation(
      async ({ dispatcherOptions, replyOptions }) => {
        await replyOptions?.onReasoningStream?.({ text: "Reasoning:\n_first block_" });
        await replyOptions?.onReasoningEnd?.();
        await replyOptions?.onReasoningStream?.({ text: "Reasoning:\n_second block_" });
        reasoningDraftParams?.onSupersededPreview?.({
          messageId: 4444,
          textSnapshot: "Reasoning:\n_first block_",
        });
        await dispatcherOptions.deliver({ text: "Done" }, { kind: "final" });
        return { queuedFinal: true };
      },
    );
    deliverReplies.mockResolvedValue({ delivered: true });
    editMessageTelegram.mockResolvedValue({ ok: true, chatId: "123", messageId: "999" });

    const bot = createBot();
    await dispatchWithContext({ context: createContext(), streamMode: "partial", bot });

    expect(reasoningDraftParams?.onSupersededPreview).toBeTypeOf("function");
    const deleteMessageCalls = (
      bot.api as unknown as { deleteMessage: { mock: { calls: unknown[][] } } }
    ).deleteMessage.mock.calls;
    expect(deleteMessageCalls).toContainEqual([123, 4444]);
  });

  it.each(["block", "partial"] as const)(
    "does not split reasoning lane on reasoning end without a later reasoning block (%s mode)",
    async (streamMode) => {

@ -155,7 +155,10 @@ export const dispatchTelegramMessage = async ({
    lastPartialText: string;
    hasStreamedMessage: boolean;
  };
  const createDraftLane = (enabled: boolean): DraftLaneState => {
  type ArchivedPreview = { messageId: number; textSnapshot: string };
  const archivedAnswerPreviews: ArchivedPreview[] = [];
  const archivedReasoningPreviewIds: number[] = [];
  const createDraftLane = (laneName: LaneName, enabled: boolean): DraftLaneState => {
    const stream = enabled
      ? createTelegramDraftStream({
          api: bot.api,
@ -165,6 +168,21 @@ export const dispatchTelegramMessage = async ({
          replyToMessageId: draftReplyToMessageId,
          minInitialChars: draftMinInitialChars,
          renderText: renderDraftPreview,
          onSupersededPreview:
            laneName === "answer" || laneName === "reasoning"
              ? (preview) => {
                  if (laneName === "reasoning") {
                    if (!archivedReasoningPreviewIds.includes(preview.messageId)) {
                      archivedReasoningPreviewIds.push(preview.messageId);
                    }
                    return;
                  }
                  archivedAnswerPreviews.push({
                    messageId: preview.messageId,
                    textSnapshot: preview.textSnapshot,
                  });
                }
              : undefined,
          log: logVerbose,
          warn: logVerbose,
        })
@ -176,15 +194,13 @@ export const dispatchTelegramMessage = async ({
    };
  };
  const lanes: Record<LaneName, DraftLaneState> = {
    answer: createDraftLane(canStreamAnswerDraft),
    reasoning: createDraftLane(canStreamReasoningDraft),
    answer: createDraftLane("answer", canStreamAnswerDraft),
    reasoning: createDraftLane("reasoning", canStreamReasoningDraft),
  };
  const answerLane = lanes.answer;
  const reasoningLane = lanes.reasoning;
  let splitReasoningOnNextStream = false;
  const reasoningStepState = createTelegramReasoningStepState();
  type ArchivedPreview = { messageId: number; textSnapshot: string };
  const archivedAnswerPreviews: ArchivedPreview[] = [];
  type SplitLaneSegment = { lane: LaneName; text: string };
  const splitTextIntoLaneSegments = (text?: string): SplitLaneSegment[] => {
    const split = splitTelegramReasoningText(text);
@ -434,6 +450,43 @@ export const dispatchTelegramMessage = async ({
    return result.delivered;
  };
  type LaneDeliveryResult = "preview-finalized" | "preview-updated" | "sent" | "skipped";
  const consumeArchivedAnswerPreviewForFinal = async (params: {
    lane: DraftLaneState;
    text: string;
    payload: ReplyPayload;
    previewButtons?: TelegramInlineButtons;
    canEditViaPreview: boolean;
  }): Promise<LaneDeliveryResult | undefined> => {
    const archivedPreview = archivedAnswerPreviews.shift();
    if (!archivedPreview) {
      return undefined;
    }
    if (params.canEditViaPreview) {
      const finalized = await tryUpdatePreviewForLane({
        lane: params.lane,
        laneName: "answer",
        text: params.text,
        previewButtons: params.previewButtons,
        stopBeforeEdit: false,
        skipRegressive: "existingOnly",
        context: "final",
        previewMessageId: archivedPreview.messageId,
        previewTextSnapshot: archivedPreview.textSnapshot,
      });
      if (finalized) {
        return "preview-finalized";
      }
    }
    try {
      await bot.api.deleteMessage(chatId, archivedPreview.messageId);
    } catch (err) {
      logVerbose(
        `telegram: archived answer preview cleanup failed (${archivedPreview.messageId}): ${String(err)}`,
      );
    }
    const delivered = await sendPayload(applyTextToPayload(params.payload, params.text));
    return delivered ? "sent" : "skipped";
  };
  const deliverLaneText = async (params: {
    laneName: LaneName;
    text: string;
@ -456,38 +509,32 @@ export const dispatchTelegramMessage = async ({
      !hasMedia && text.length > 0 && text.length <= draftMaxChars && !payload.isError;

    if (infoKind === "final") {
      if (laneName === "answer" && archivedAnswerPreviews.length > 0) {
        const archivedPreview = archivedAnswerPreviews.shift();
        if (archivedPreview) {
          if (canEditViaPreview) {
            const finalized = await tryUpdatePreviewForLane({
              lane,
              laneName,
              text,
              previewButtons,
              stopBeforeEdit: false,
              skipRegressive: "existingOnly",
              context: "final",
              previewMessageId: archivedPreview.messageId,
              previewTextSnapshot: archivedPreview.textSnapshot,
            });
            if (finalized) {
              return "preview-finalized";
            }
          }
          try {
            await bot.api.deleteMessage(chatId, archivedPreview.messageId);
          } catch (err) {
            logVerbose(
              `telegram: archived answer preview cleanup failed (${archivedPreview.messageId}): ${String(err)}`,
            );
          }
          const delivered = await sendPayload(applyTextToPayload(payload, text));
          return delivered ? "sent" : "skipped";
      if (laneName === "answer") {
        const archivedResult = await consumeArchivedAnswerPreviewForFinal({
          lane,
          text,
          payload,
          previewButtons,
          canEditViaPreview,
        });
        if (archivedResult) {
          return archivedResult;
        }
      }
      if (canEditViaPreview && !finalizedPreviewByLane[laneName]) {
        await flushDraftLane(lane);
        if (laneName === "answer") {
          const archivedResultAfterFlush = await consumeArchivedAnswerPreviewForFinal({
            lane,
            text,
            payload,
            previewButtons,
            canEditViaPreview,
          });
          if (archivedResultAfterFlush) {
            return archivedResultAfterFlush;
          }
        }
        const finalized = await tryUpdatePreviewForLane({
          lane,
          laneName,
@ -735,6 +782,15 @@ export const dispatchTelegramMessage = async ({
        );
      }
    }
    for (const messageId of archivedReasoningPreviewIds) {
      try {
        await bot.api.deleteMessage(chatId, messageId);
      } catch (err) {
        logVerbose(
          `telegram: archived reasoning preview cleanup failed (${messageId}): ${String(err)}`,
        );
      }
    }
  }
  let sentFallback = false;
  if (

173
src/telegram/bot-native-commands.session-meta.test.ts
Normal file
@ -0,0 +1,173 @@
import { describe, expect, it, vi } from "vitest";
import type { OpenClawConfig } from "../config/config.js";
import type { TelegramAccountConfig } from "../config/types.js";
import type { RuntimeEnv } from "../runtime.js";
import { registerTelegramNativeCommands } from "./bot-native-commands.js";

// All mocks scoped to this file only — does not affect bot-native-commands.test.ts

const sessionMocks = vi.hoisted(() => ({
  recordSessionMetaFromInbound: vi.fn(),
  resolveStorePath: vi.fn(),
}));
const replyMocks = vi.hoisted(() => ({
  dispatchReplyWithBufferedBlockDispatcher: vi.fn(async () => undefined),
}));

vi.mock("../config/sessions.js", () => ({
  recordSessionMetaFromInbound: sessionMocks.recordSessionMetaFromInbound,
  resolveStorePath: sessionMocks.resolveStorePath,
}));
vi.mock("../pairing/pairing-store.js", () => ({
  readChannelAllowFromStore: vi.fn(async () => []),
}));
vi.mock("../auto-reply/reply/inbound-context.js", () => ({
  finalizeInboundContext: vi.fn((ctx: unknown) => ctx),
}));
vi.mock("../auto-reply/reply/provider-dispatcher.js", () => ({
  dispatchReplyWithBufferedBlockDispatcher: replyMocks.dispatchReplyWithBufferedBlockDispatcher,
}));
vi.mock("../channels/reply-prefix.js", () => ({
  createReplyPrefixOptions: vi.fn(() => ({ onModelSelected: () => {} })),
}));
vi.mock("../auto-reply/skill-commands.js", async (importOriginal) => {
  const actual = await importOriginal<typeof import("../auto-reply/skill-commands.js")>();
  return { ...actual, listSkillCommandsForAgents: vi.fn(() => []) };
});
vi.mock("../plugins/commands.js", () => ({
  getPluginCommandSpecs: vi.fn(() => []),
  matchPluginCommand: vi.fn(() => null),
  executePluginCommand: vi.fn(async () => ({ text: "ok" })),
}));
vi.mock("./bot/delivery.js", () => ({
  deliverReplies: vi.fn(async () => ({ delivered: true })),
}));

const buildParams = (cfg: OpenClawConfig, accountId = "default") => ({
  bot: {
    api: {
      setMyCommands: vi.fn().mockResolvedValue(undefined),
      sendMessage: vi.fn().mockResolvedValue(undefined),
    },
    command: vi.fn(),
  } as unknown as Parameters<typeof registerTelegramNativeCommands>[0]["bot"],
  cfg,
  runtime: {} as unknown as RuntimeEnv,
  accountId,
  telegramCfg: {} as TelegramAccountConfig,
  allowFrom: [],
  groupAllowFrom: [],
  replyToMode: "off" as const,
  textLimit: 4096,
  useAccessGroups: false,
  nativeEnabled: true,
  nativeSkillsEnabled: true,
  nativeDisabledExplicit: false,
  resolveGroupPolicy: () => ({ allowlistEnabled: false, allowed: true }),
  resolveTelegramGroupConfig: () => ({
    groupConfig: undefined,
    topicConfig: undefined,
  }),
  shouldSkipUpdate: () => false,
  opts: { token: "token" },
});

function createDeferred<T>() {
  let resolve!: (value: T | PromiseLike<T>) => void;
  const promise = new Promise<T>((res) => {
    resolve = res;
  });
  return { promise, resolve };
}

describe("registerTelegramNativeCommands — session metadata", () => {
  it("calls recordSessionMetaFromInbound after a native slash command", async () => {
    sessionMocks.recordSessionMetaFromInbound.mockReset().mockResolvedValue(undefined);
    sessionMocks.resolveStorePath.mockReset().mockReturnValue("/tmp/openclaw-sessions.json");

    const commandHandlers = new Map<string, (ctx: unknown) => Promise<void>>();
    const cfg: OpenClawConfig = {};

    registerTelegramNativeCommands({
      ...buildParams(cfg),
      allowFrom: ["*"],
      bot: {
        api: {
          setMyCommands: vi.fn().mockResolvedValue(undefined),
          sendMessage: vi.fn().mockResolvedValue(undefined),
        },
        command: vi.fn((name: string, cb: (ctx: unknown) => Promise<void>) => {
          commandHandlers.set(name, cb);
        }),
      } as unknown as Parameters<typeof registerTelegramNativeCommands>[0]["bot"],
    });

    const handler = commandHandlers.get("status");
    expect(handler).toBeTruthy();
    await handler?.({
      match: "",
      message: {
        message_id: 1,
        date: Math.floor(Date.now() / 1000),
        chat: { id: 100, type: "private" },
        from: { id: 200, username: "bob" },
      },
    });

    expect(sessionMocks.recordSessionMetaFromInbound).toHaveBeenCalledTimes(1);
    const call = (
      sessionMocks.recordSessionMetaFromInbound.mock.calls as unknown as Array<
        [{ sessionKey?: string; ctx?: { OriginatingChannel?: string } }]
      >
    )[0]?.[0];
    expect(call?.ctx?.OriginatingChannel).toBe("telegram");
    expect(call?.sessionKey).toBeDefined();
  });

  it("awaits session metadata persistence before dispatch", async () => {
    const deferred = createDeferred<void>();
    sessionMocks.recordSessionMetaFromInbound.mockReset().mockReturnValue(deferred.promise);
    sessionMocks.resolveStorePath.mockReset().mockReturnValue("/tmp/openclaw-sessions.json");
    replyMocks.dispatchReplyWithBufferedBlockDispatcher.mockReset().mockResolvedValue(undefined);

    const commandHandlers = new Map<string, (ctx: unknown) => Promise<void>>();
    const cfg: OpenClawConfig = {};

    registerTelegramNativeCommands({
      ...buildParams(cfg),
      allowFrom: ["*"],
      bot: {
        api: {
          setMyCommands: vi.fn().mockResolvedValue(undefined),
          sendMessage: vi.fn().mockResolvedValue(undefined),
        },
        command: vi.fn((name: string, cb: (ctx: unknown) => Promise<void>) => {
          commandHandlers.set(name, cb);
        }),
      } as unknown as Parameters<typeof registerTelegramNativeCommands>[0]["bot"],
    });

    const handler = commandHandlers.get("status");
    expect(handler).toBeTruthy();

    const runPromise = handler?.({
      match: "",
      message: {
        message_id: 1,
        date: Math.floor(Date.now() / 1000),
        chat: { id: 100, type: "private" },
        from: { id: 200, username: "bob" },
      },
    });

    await vi.waitFor(() => {
      expect(sessionMocks.recordSessionMetaFromInbound).toHaveBeenCalledTimes(1);
    });
    expect(replyMocks.dispatchReplyWithBufferedBlockDispatcher).not.toHaveBeenCalled();

    deferred.resolve();
    await runPromise;

    expect(replyMocks.dispatchReplyWithBufferedBlockDispatcher).toHaveBeenCalledTimes(1);
  });
});
@ -17,6 +17,7 @@ import { createReplyPrefixOptions } from "../channels/reply-prefix.js";
import type { OpenClawConfig } from "../config/config.js";
import type { ChannelGroupPolicy } from "../config/group-policy.js";
import { resolveMarkdownTableMode } from "../config/markdown-tables.js";
import { recordSessionMetaFromInbound, resolveStorePath } from "../config/sessions.js";
import {
  normalizeTelegramCommandName,
  resolveTelegramCustomCommands,
@ -594,6 +595,19 @@ export const registerTelegramNativeCommands = ({
      OriginatingTo: `telegram:${chatId}`,
    });

    const storePath = resolveStorePath(cfg.session?.store, {
      agentId: route.agentId,
    });
    try {
      await recordSessionMetaFromInbound({
        storePath,
        sessionKey: ctxPayload.SessionKey ?? route.sessionKey,
        ctx: ctxPayload,
      });
    } catch (err) {
      runtime.error?.(danger(`telegram slash: failed updating session meta: ${String(err)}`));
    }

    const disableBlockStreaming =
      typeof telegramCfg.blockStreaming === "boolean"
        ? !telegramCfg.blockStreaming

@ -1,3 +1,4 @@
|
||||
import type { Bot } from "grammy";
|
||||
import { afterEach, beforeEach, describe, expect, it, vi } from "vitest";
|
||||
import { createTelegramDraftStream } from "./draft-stream.js";
|
||||
|
||||
@ -18,8 +19,7 @@ function createThreadedDraftStream(
|
||||
thread: { id: number; scope: "forum" | "dm" },
|
||||
) {
|
||||
return createTelegramDraftStream({
|
||||
// oxlint-disable-next-line typescript/no-explicit-any
|
||||
api: api as any,
|
||||
api: api as unknown as Bot["api"],
|
||||
chatId: 123,
|
||||
thread,
|
||||
});
|
||||
@ -109,8 +109,7 @@ describe("createTelegramDraftStream", () => {
|
||||
deleteMessage: vi.fn().mockResolvedValue(true),
|
||||
};
|
||||
const stream = createTelegramDraftStream({
|
||||
// oxlint-disable-next-line typescript/no-explicit-any
|
||||
api: api as any,
|
||||
api: api as unknown as Bot["api"],
|
||||
chatId: 123,
|
||||
});
|
||||
|
||||
@ -146,8 +145,7 @@ describe("createTelegramDraftStream", () => {
|
||||
deleteMessage: vi.fn().mockResolvedValue(true),
|
||||
};
|
||||
const stream = createTelegramDraftStream({
|
||||
// oxlint-disable-next-line typescript/no-explicit-any
|
||||
api: api as any,
|
||||
api: api as unknown as Bot["api"],
|
||||
chatId: 123,
|
||||
throttleMs: 1000,
|
||||
});
|
||||
@ -167,11 +165,47 @@ describe("createTelegramDraftStream", () => {
|
||||
}
|
||||
});
|
||||
|
||||
it("does not rebind to an old message when forceNewMessage races an in-flight send", async () => {
|
||||
let resolveFirstSend: ((value: { message_id: number }) => void) | undefined;
|
||||
const firstSend = new Promise<{ message_id: number }>((resolve) => {
|
||||
resolveFirstSend = resolve;
|
||||
});
|
||||
const api = {
|
||||
sendMessage: vi.fn().mockReturnValueOnce(firstSend).mockResolvedValueOnce({ message_id: 42 }),
|
||||
editMessageText: vi.fn().mockResolvedValue(true),
|
||||
deleteMessage: vi.fn().mockResolvedValue(true),
|
||||
};
|
||||
const onSupersededPreview = vi.fn();
|
||||
const stream = createTelegramDraftStream({
|
||||
api: api as unknown as Bot["api"],
|
||||
chatId: 123,
|
||||
onSupersededPreview,
|
||||
});
|
||||
|
||||
stream.update("Message A partial");
|
||||
await vi.waitFor(() => expect(api.sendMessage).toHaveBeenCalledTimes(1));
|
||||
|
||||
// Rotate to message B before message A send resolves.
|
||||
stream.forceNewMessage();
|
||||
stream.update("Message B partial");
|
||||
|
||||
resolveFirstSend?.({ message_id: 17 });
|
||||
await stream.flush();
|
||||
|
||||
expect(onSupersededPreview).toHaveBeenCalledWith({
|
||||
messageId: 17,
|
||||
textSnapshot: "Message A partial",
|
||||
parseMode: undefined,
|
||||
});
|
||||
expect(api.sendMessage).toHaveBeenCalledTimes(2);
|
||||
expect(api.sendMessage).toHaveBeenNthCalledWith(2, 123, "Message B partial", undefined);
|
||||
expect(api.editMessageText).not.toHaveBeenCalledWith(123, 17, "Message B partial");
|
||||
});
|
||||
|
||||
it("supports rendered previews with parse_mode", async () => {
|
||||
const api = createMockDraftApi();
|
||||
const stream = createTelegramDraftStream({
|
||||
// oxlint-disable-next-line typescript/no-explicit-any
|
||||
api: api as any,
|
||||
api: api as unknown as Bot["api"],
|
||||
chatId: 123,
|
||||
renderText: (text) => ({ text: `<i>${text}</i>`, parseMode: "HTML" }),
|
||||
});
|
||||
@@ -191,8 +225,7 @@ describe("createTelegramDraftStream", () => {
    const api = createMockDraftApi();
    const warn = vi.fn();
    const stream = createTelegramDraftStream({
      // oxlint-disable-next-line typescript/no-explicit-any
      api: api as any,
      api: api as unknown as Bot["api"],
      chatId: 123,
      maxChars: 100,
      renderText: () => ({ text: `<b>${"<".repeat(120)}</b>`, parseMode: "HTML" }),
@@ -229,8 +262,7 @@ describe("draft stream initial message debounce", () => {
  it("sends immediately on stop() even with 1 character", async () => {
    const api = createMockApi();
    const stream = createTelegramDraftStream({
      // oxlint-disable-next-line typescript/no-explicit-any
      api: api as any,
      api: api as unknown as Bot["api"],
      chatId: 123,
      minInitialChars: 30,
    });
@@ -245,8 +277,7 @@ describe("draft stream initial message debounce", () => {
  it("sends immediately on stop() with short sentence", async () => {
    const api = createMockApi();
    const stream = createTelegramDraftStream({
      // oxlint-disable-next-line typescript/no-explicit-any
      api: api as any,
      api: api as unknown as Bot["api"],
      chatId: 123,
      minInitialChars: 30,
    });
@@ -263,8 +294,7 @@ describe("draft stream initial message debounce", () => {
  it("does not send first message below threshold", async () => {
    const api = createMockApi();
    const stream = createTelegramDraftStream({
      // oxlint-disable-next-line typescript/no-explicit-any
      api: api as any,
      api: api as unknown as Bot["api"],
      chatId: 123,
      minInitialChars: 30,
    });
@@ -278,8 +308,7 @@ describe("draft stream initial message debounce", () => {
  it("sends first message when reaching threshold", async () => {
    const api = createMockApi();
    const stream = createTelegramDraftStream({
      // oxlint-disable-next-line typescript/no-explicit-any
      api: api as any,
      api: api as unknown as Bot["api"],
      chatId: 123,
      minInitialChars: 30,
    });
@@ -294,6 +323,7 @@ describe("draft stream initial message debounce", () => {
  it("works with longer text above threshold", async () => {
    const api = createMockApi();
    const stream = createTelegramDraftStream({
      // oxlint-disable-next-line typescript/no-explicit-any
      api: api as any,
      api: api as unknown as Bot["api"],
      chatId: 123,
      minInitialChars: 30,
    });
@@ -311,8 +339,7 @@ describe("draft stream initial message debounce", () => {
  it("edits normally after first message is sent", async () => {
    const api = createMockApi();
    const stream = createTelegramDraftStream({
      // oxlint-disable-next-line typescript/no-explicit-any
      api: api as any,
      api: api as unknown as Bot["api"],
      chatId: 123,
      minInitialChars: 30,
    });
@@ -335,8 +362,7 @@ describe("draft stream initial message debounce", () => {
  it("sends immediately without minInitialChars set (backward compatible)", async () => {
    const api = createMockApi();
    const stream = createTelegramDraftStream({
      // oxlint-disable-next-line typescript/no-explicit-any
      api: api as any,
      api: api as unknown as Bot["api"],
      chatId: 123,
      // no minInitialChars (backward-compatible behavior)
    });

@@ -20,6 +20,12 @@ type TelegramDraftPreview = {
  parseMode?: "HTML";
};

type SupersededTelegramPreview = {
  messageId: number;
  textSnapshot: string;
  parseMode?: "HTML";
};

export function createTelegramDraftStream(params: {
  api: Bot["api"];
  chatId: number;
@@ -31,6 +37,8 @@ export function createTelegramDraftStream(params: {
  minInitialChars?: number;
  /** Optional preview renderer (e.g. markdown -> HTML + parse mode). */
  renderText?: (text: string) => TelegramDraftPreview;
  /** Called when a late send resolves after forceNewMessage() switched generations. */
  onSupersededPreview?: (preview: SupersededTelegramPreview) => void;
  log?: (message: string) => void;
  warn?: (message: string) => void;
}): TelegramDraftStream {
@@ -52,6 +60,7 @@ export function createTelegramDraftStream(params: {
  let lastSentParseMode: "HTML" | undefined;
  let stopped = false;
  let isFinal = false;
  let generation = 0;

  const sendOrEditStreamMessage = async (text: string): Promise<boolean> => {
    // Allow final flush even if stopped (e.g., after clear()).
@@ -80,6 +89,7 @@ export function createTelegramDraftStream(params: {
    if (renderedText === lastSentText && renderedParseMode === lastSentParseMode) {
      return true;
    }
    const sendGeneration = generation;

    // Debounce first preview send for better push notification quality.
    if (typeof streamMessageId !== "number" && minInitialChars != null && !isFinal) {
@@ -114,7 +124,16 @@ export function createTelegramDraftStream(params: {
        params.warn?.("telegram stream preview stopped (missing message id from sendMessage)");
        return false;
      }
      streamMessageId = Math.trunc(sentMessageId);
      const normalizedMessageId = Math.trunc(sentMessageId);
      if (sendGeneration !== generation) {
        params.onSupersededPreview?.({
          messageId: normalizedMessageId,
          textSnapshot: renderedText,
          parseMode: renderedParseMode,
        });
        return true;
      }
      streamMessageId = normalizedMessageId;
      return true;
    } catch (err) {
      stopped = true;
@@ -163,6 +182,7 @@ export function createTelegramDraftStream(params: {
  };

  const forceNewMessage = () => {
    generation += 1;
    streamMessageId = undefined;
    lastSentText = "";
    lastSentParseMode = undefined;

@@ -1,7 +1,22 @@
import { describe, expect, it } from "vitest";
import { resolveTelegramAutoSelectFamilyDecision } from "./network-config.js";
import { afterEach, describe, expect, it, vi } from "vitest";
import {
  resetTelegramNetworkConfigStateForTests,
  resolveTelegramAutoSelectFamilyDecision,
} from "./network-config.js";

// Mock isWSL2Sync at the top level
vi.mock("../infra/wsl.js", () => ({
  isWSL2Sync: vi.fn(() => false),
}));

import { isWSL2Sync } from "../infra/wsl.js";

describe("resolveTelegramAutoSelectFamilyDecision", () => {
  afterEach(() => {
    vi.restoreAllMocks();
    resetTelegramNetworkConfigStateForTests();
  });

  it("prefers env enable over env disable", () => {
    const decision = resolveTelegramAutoSelectFamilyDecision({
      env: {
@@ -69,4 +84,48 @@ describe("resolveTelegramAutoSelectFamilyDecision", () => {
    const decision = resolveTelegramAutoSelectFamilyDecision({ env: {}, nodeMajor: 20 });
    expect(decision).toEqual({ value: null });
  });

  describe("WSL2 detection", () => {
    it("disables autoSelectFamily on WSL2", () => {
      vi.mocked(isWSL2Sync).mockReturnValue(true);
      const decision = resolveTelegramAutoSelectFamilyDecision({ env: {}, nodeMajor: 22 });
      expect(decision).toEqual({ value: false, source: "default-wsl2" });
    });

    it("respects config override on WSL2", () => {
      vi.mocked(isWSL2Sync).mockReturnValue(true);
      const decision = resolveTelegramAutoSelectFamilyDecision({
        env: {},
        network: { autoSelectFamily: true },
        nodeMajor: 22,
      });
      expect(decision).toEqual({ value: true, source: "config" });
    });

    it("respects env override on WSL2", () => {
      vi.mocked(isWSL2Sync).mockReturnValue(true);
      const decision = resolveTelegramAutoSelectFamilyDecision({
        env: { OPENCLAW_TELEGRAM_ENABLE_AUTO_SELECT_FAMILY: "1" },
        nodeMajor: 22,
      });
      expect(decision).toEqual({
        value: true,
        source: "env:OPENCLAW_TELEGRAM_ENABLE_AUTO_SELECT_FAMILY",
      });
    });

    it("uses Node 22 default when not on WSL2", () => {
      vi.mocked(isWSL2Sync).mockReturnValue(false);
      const decision = resolveTelegramAutoSelectFamilyDecision({ env: {}, nodeMajor: 22 });
      expect(decision).toEqual({ value: true, source: "default-node22" });
    });

    it("memoizes WSL2 detection across repeated defaults", () => {
      vi.mocked(isWSL2Sync).mockReset();
      vi.mocked(isWSL2Sync).mockReturnValue(false);
      resolveTelegramAutoSelectFamilyDecision({ env: {}, nodeMajor: 22 });
      resolveTelegramAutoSelectFamilyDecision({ env: {}, nodeMajor: 22 });
      expect(isWSL2Sync).toHaveBeenCalledTimes(1);
    });
  });
});

@@ -1,6 +1,7 @@
import process from "node:process";
import type { TelegramNetworkConfig } from "../config/types.telegram.js";
import { isTruthyEnvValue } from "../infra/env.js";
import { isWSL2Sync } from "../infra/wsl.js";

export const TELEGRAM_DISABLE_AUTO_SELECT_FAMILY_ENV =
  "OPENCLAW_TELEGRAM_DISABLE_AUTO_SELECT_FAMILY";
@@ -11,6 +12,16 @@ export type TelegramAutoSelectFamilyDecision = {
  source?: string;
};

let wsl2SyncCache: boolean | undefined;

function isWSL2SyncCached(): boolean {
  if (typeof wsl2SyncCache === "boolean") {
    return wsl2SyncCache;
  }
  wsl2SyncCache = isWSL2Sync();
  return wsl2SyncCache;
}

export function resolveTelegramAutoSelectFamilyDecision(params?: {
  network?: TelegramNetworkConfig;
  env?: NodeJS.ProcessEnv;
@@ -31,8 +42,16 @@ export function resolveTelegramAutoSelectFamilyDecision(params?: {
  if (typeof params?.network?.autoSelectFamily === "boolean") {
    return { value: params.network.autoSelectFamily, source: "config" };
  }
  // WSL2 has unstable IPv6 connectivity; disable autoSelectFamily to use IPv4 directly
  if (isWSL2SyncCached()) {
    return { value: false, source: "default-wsl2" };
  }
  if (Number.isFinite(nodeMajor) && nodeMajor >= 22) {
    return { value: true, source: "default-node22" };
  }
  return { value: null };
}

export function resetTelegramNetworkConfigStateForTests(): void {
  wsl2SyncCache = undefined;
}

@@ -2,6 +2,58 @@ import { describe, expect, it, vi } from "vitest";
import { createCommandHandlers } from "./tui-command-handlers.js";

describe("tui command handlers", () => {
  it("renders the sending indicator before chat.send resolves", async () => {
    let resolveSend: ((value: { runId: string }) => void) | null = null;
    const sendChat = vi.fn(
      () =>
        new Promise<{ runId: string }>((resolve) => {
          resolveSend = resolve;
        }),
    );
    const addUser = vi.fn();
    const requestRender = vi.fn();
    const setActivityStatus = vi.fn();

    const { handleCommand } = createCommandHandlers({
      client: { sendChat } as never,
      chatLog: { addUser, addSystem: vi.fn() } as never,
      tui: { requestRender } as never,
      opts: {},
      state: {
        currentSessionKey: "agent:main:main",
        activeChatRunId: null,
        sessionInfo: {},
      } as never,
      deliverDefault: false,
      openOverlay: vi.fn(),
      closeOverlay: vi.fn(),
      refreshSessionInfo: vi.fn(),
      loadHistory: vi.fn(),
      setSession: vi.fn(),
      refreshAgents: vi.fn(),
      abortActive: vi.fn(),
      setActivityStatus,
      formatSessionKey: vi.fn(),
      applySessionInfoFromPatch: vi.fn(),
      noteLocalRunId: vi.fn(),
    });

    const pending = handleCommand("/context");
    await Promise.resolve();

    expect(setActivityStatus).toHaveBeenCalledWith("sending");
    const sendingOrder = setActivityStatus.mock.invocationCallOrder[0] ?? 0;
    const renderOrders = requestRender.mock.invocationCallOrder;
    expect(renderOrders.some((order) => order > sendingOrder)).toBe(true);

    if (typeof resolveSend !== "function") {
      throw new Error("expected sendChat to be pending");
    }
    (resolveSend as (value: { runId: string }) => void)({ runId: "r1" });
    await pending;
    expect(setActivityStatus).toHaveBeenCalledWith("waiting");
  });

  it("forwards unknown slash commands to the gateway", async () => {
    const sendChat = vi.fn().mockResolvedValue({ runId: "r1" });
    const addUser = vi.fn();

@@ -470,6 +470,7 @@ export function createCommandHandlers(context: CommandHandlerContext) {
      noteLocalRunId(runId);
      state.activeChatRunId = runId;
      setActivityStatus("sending");
      tui.requestRender();
      await client.sendChat({
        sessionKey: state.currentSessionKey,
        message: text,
@@ -479,6 +480,7 @@ export function createCommandHandlers(context: CommandHandlerContext) {
        runId,
      });
      setActivityStatus("waiting");
      tui.requestRender();
    } catch (err) {
      if (state.activeChatRunId) {
        forgetLocalRunId?.(state.activeChatRunId);
@@ -486,8 +488,8 @@ export function createCommandHandlers(context: CommandHandlerContext) {
      state.activeChatRunId = null;
      chatLog.addSystem(`send failed: ${String(err)}`);
      setActivityStatus("error");
      tui.requestRender();
    }
    tui.requestRender();
  };

  return {

@@ -130,10 +130,32 @@ describe("shouldEnableWindowsGitBashPasteFallback", () => {
    ).toBe(true);
  });

  it("disables fallback outside Windows", () => {
  it("enables fallback on macOS iTerm", () => {
    expect(
      shouldEnableWindowsGitBashPasteFallback({
        platform: "darwin",
        env: {
          TERM_PROGRAM: "iTerm.app",
        } as NodeJS.ProcessEnv,
      }),
    ).toBe(true);
  });

  it("enables fallback on macOS Terminal.app", () => {
    expect(
      shouldEnableWindowsGitBashPasteFallback({
        platform: "darwin",
        env: {
          TERM_PROGRAM: "Apple_Terminal",
        } as NodeJS.ProcessEnv,
      }),
    ).toBe(true);
  });

  it("disables fallback outside Windows", () => {
    expect(
      shouldEnableWindowsGitBashPasteFallback({
        platform: "linux",
        env: {
          MSYSTEM: "MINGW64",
        } as NodeJS.ProcessEnv,

@@ -84,13 +84,24 @@ export function shouldEnableWindowsGitBashPasteFallback(params?: {
  env?: NodeJS.ProcessEnv;
}): boolean {
  const platform = params?.platform ?? process.platform;
  const env = params?.env ?? process.env;
  const termProgram = (env.TERM_PROGRAM ?? "").toLowerCase();

  // Some macOS terminals emit multiline paste as rapid single-line submits.
  // Enable burst coalescing so pasted blocks stay as one user message.
  if (platform === "darwin") {
    if (termProgram.includes("iterm") || termProgram.includes("apple_terminal")) {
      return true;
    }
    return false;
  }

  if (platform !== "win32") {
    return false;
  }
  const env = params?.env ?? process.env;

  const msystem = (env.MSYSTEM ?? "").toUpperCase();
  const shell = env.SHELL ?? "";
  const termProgram = (env.TERM_PROGRAM ?? "").toLowerCase();
  if (msystem.startsWith("MINGW") || msystem.startsWith("MSYS")) {
    return true;
  }

@@ -372,6 +372,13 @@
  border-color: rgba(255, 255, 255, 0.2);
}

/* Ensure chat toolbar toggles have a clearly visible active state. */
.chat-controls .btn--icon.active {
  border-color: var(--accent);
  background: var(--accent-subtle);
  color: var(--accent);
}

/* Light theme icon button overrides */
:root[data-theme="light"] .btn--icon {
  background: #ffffff;
@@ -386,6 +393,13 @@
  color: var(--text);
}

:root[data-theme="light"] .chat-controls .btn--icon.active {
  border-color: var(--accent);
  background: var(--accent-subtle);
  color: var(--accent);
  box-shadow: 0 0 0 1px var(--accent-subtle);
}

.btn--icon svg {
  display: block;
  width: 18px;

@@ -542,6 +542,12 @@
  background: var(--bg-hover);
}

:root[data-theme="light"] .btn.active {
  border-color: var(--accent);
  background: var(--accent-subtle);
  color: var(--accent);
}

:root[data-theme="light"] .btn.primary {
  background: var(--accent);
  border-color: var(--accent);