Add deterministic context control layer that intercepts prompt construction without modifying existing architecture:

- context_engine.py: single choke point (build_context) that assembles structured prompts from ledger + sigil + live window, with token budget enforcement and automatic window shrinking
- ledger.py: bounded per-stream JSON state (orientation, blockers, open questions, delta) with hard field/list limits
- sigil.py: FIFO shorthand memory (max 15 entries) with deterministic rule-based generation from message patterns
- token_gate.py: fast token estimation (~4 chars/token) and hard cap enforcement with configurable MAX_TOKENS/LIVE_WINDOW
- redact.py: secret pattern detection (Discord, OpenAI, Anthropic, AWS, Slack, GitHub, Telegram, Bearer, generic key=value) replaced with [REDACTED_SECRET] before any output path

All 64 tests passing. No modifications to existing agent spawning, model routing, tool system, or Discord relay architecture.

https://claude.ai/code/session_01K7BWJY2gUoJi6dq91Yc7nx
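As a rough illustration of the token_gate.py behavior described above (fast ~4 chars/token estimation plus a hard cap), a minimal sketch might look like the following. The function names, the MAX_TOKENS value, and the truncation strategy are illustrative assumptions, not the actual contents of token_gate.py:

```python
# Hypothetical sketch of a token gate: estimate tokens at ~4 chars/token
# and enforce a hard cap before any provider call. Names/values are assumed.
MAX_TOKENS = 8000  # assumed configurable cap


def estimate_tokens(text: str) -> int:
    """Fast heuristic: roughly 4 characters per token."""
    return max(1, len(text) // 4)


def enforce_cap(prompt: str, max_tokens: int = MAX_TOKENS) -> str:
    """Truncate the prompt so its estimate stays at or under the hard cap."""
    if estimate_tokens(prompt) <= max_tokens:
        return prompt
    return prompt[: max_tokens * 4]
```

In the real layer, build_context would call something like enforce_cap last, after assembling ledger + sigil + live window, shrinking the live window first rather than blindly truncating.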
22 lines · 592 B · Python
"""
|
|
ra2 — Context Sovereignty Layer (Phase 1)
|
|
|
|
Deterministic thin wrapper that:
|
|
- Prevents full markdown history injection into prompts
|
|
- Introduces structured ledger memory
|
|
- Introduces sigil shorthand memory
|
|
- Enforces hard token caps before provider calls
|
|
- Redacts secrets before logs and model calls
|
|
|
|
Usage:
|
|
from ra2.context_engine import build_context
|
|
|
|
result = build_context(stream_id="my-stream", new_messages=[...])
|
|
prompt = result["prompt"]
|
|
tokens = result["token_estimate"]
|
|
"""
|
|
|
|
from ra2.context_engine import build_context
|
|
|
|
__all__ = ["build_context"]
|
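The redaction step ("Redacts secrets before logs and model calls") could be sketched as a pass of substitution patterns over outbound text. The patterns below are illustrative stand-ins, not the actual ones in redact.py, which covers more providers (Discord, AWS, Slack, GitHub, Telegram, etc.):

```python
import re

# Hypothetical subset of secret patterns; the real redact.py covers more.
_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),        # OpenAI-style key
    re.compile(r"Bearer\s+[A-Za-z0-9._\-]+"),  # Bearer tokens
    re.compile(r"\b\w*(?:key|token|secret)\w*\s*=\s*\S+", re.IGNORECASE),  # generic key=value
]


def redact(text: str) -> str:
    """Replace anything matching a secret pattern with [REDACTED_SECRET]."""
    for pattern in _PATTERNS:
        text = pattern.sub("[REDACTED_SECRET]", text)
    return text
```

Running this over every output path (logs and model calls) before anything leaves the process is what makes the guarantee deterministic rather than best-effort.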