📦 NEW: ALWAYS ON skills

This commit is contained in:
kumarabhirup 2026-02-09 10:47:04 -08:00
parent 580423a6eb
commit 04da69c89b
No known key found for this signature in database
GPG Key ID: DB7CA2289CAB0167
8 changed files with 1257 additions and 3 deletions


@@ -0,0 +1,641 @@
---
name: Dench Filesystem CRM Integration
overview: Replace Dench's slow tool-by-tool CRM agent with a filesystem-first architecture where OpenClaw manages a DuckDB database and markdown documents in a dedicated workspace folder, synced via S3, and surfaced in Dench's sidebar.
todos:
- id: inject-skill-infra
content: "Phase 1: Add `inject: true` skill metadata support to OpenClaw (types.ts, workspace.ts, system-prompt.ts, run.ts)"
status: pending
- id: dench-skill
content: "Phase 2: Create skills/dench/SKILL.md with DuckDB schema, SQL patterns, CRM patterns, document management instructions"
status: pending
- id: s3-sync-script
content: "Phase 3: Create S3 sync script and sandbox startup hook for dench workspace persistence"
status: pending
- id: dench-sidebar
content: "Phase 4 (DEFERRED): Add workspace data source to Dench sidebar (tRPC endpoints + buildKnowledgeTree merge)"
status: pending
- id: lambda-sync
content: "Phase 4b (DEFERRED): S3-to-PostgreSQL Lambda sync for fast sidebar queries (or simpler: direct DuckDB read from S3)"
status: pending
isProject: false
---
# Dench Filesystem-First CRM via OpenClaw
## Architecture Overview
The current Dench CRM agent uses 15+ individual tRPC-backed tools (createObjectTool, createFieldTool, createEntryTool, searchEntriesTool, etc.) that each make a Prisma call. This is slow -- the agent often needs 10+ tool calls for a single user request.
The new architecture gives OpenClaw a `dench/` workspace folder with a DuckDB database and markdown files. The agent generates SQL directly via `exec` (duckdb CLI) and writes documents as `.md` files. S3 syncs this data between sandbox sessions, and Dench's sidebar reads from S3/PostgreSQL.
**Why DuckDB over SQLite:**
- Native PIVOT/UNPIVOT -- essential for the EAV (Entity-Attribute-Value) pattern used by custom fields; every "show me entries as a table" query needs pivot
- PostgreSQL-compatible SQL dialect -- matches Dench's Supabase/Postgres, so generated SQL is portable
- Native JSON type -- clean handling of `enum_values`, `enum_colors`, field mappings (no JSON1 extension needed)
- Built-in CSV/Parquet import/export with auto-detection -- bulk CRM operations ("import 500 leads from CSV")
- FTS extension -- full-text search across entry fields
- `generate_series` + macros -- nanoid 32 generation in pure SQL
- ~50-100ms startup overhead is acceptable given agent tool-call overhead is already 200-500ms
```mermaid
flowchart TB
subgraph sandbox [OpenClaw Sandbox]
agent[OpenClaw Agent]
skill[Dench Skill - always in context]
ctx[dench/workspace_context.yaml]
db[dench/workspace.duckdb]
docs[dench/documents/*.md]
agent --> skill
agent -->|"read-only context"| ctx
agent -->|"exec: duckdb"| db
agent -->|"write/edit"| docs
end
subgraph s3layer [Persistence Layer]
s3["S3: dench/{orgId}/"]
end
subgraph denchApp [Dench Web App]
sidebar[App Sidebar]
api[tRPC API]
pg[PostgreSQL - fs_files sync]
end
db -->|sync on save| s3
docs -->|sync on save| s3
s3 -->|Lambda trigger| pg
pg --> api --> sidebar
s3 -->|download on sandbox start| db
s3 -->|download on sandbox start| docs
```
---
## Phase 1: Always-In-Context Skill Infrastructure (OpenClaw)
Currently, all skills are lazy-loaded: only name + description appear in the system prompt, and the agent must `read` the SKILL.md to get full instructions. For the Dench CRM skill, we need the full content injected automatically.
**Approach:** Add an `inject: true` flag to skill metadata. When set, the skill's full content is included in the system prompt alongside bootstrap files (in the "Project Context" section), not in the lazy-loaded skills list.
**Files to modify:**
- [src/agents/skills/types.ts](src/agents/skills/types.ts) -- Add `inject?: boolean` to `OpenClawSkillMetadata`
- [src/agents/skills/workspace.ts](src/agents/skills/workspace.ts) -- In `buildWorkspaceSkillSnapshot()`, separate injected skills from lazy-loaded skills. Return injected skill contents alongside the prompt.
- [src/agents/system-prompt.ts](src/agents/system-prompt.ts) -- Accept injected skill content and include it in the "Project Context" / "Workspace Files" section (similar to how bootstrap files like AGENTS.md are included via `contextFiles`)
- [src/agents/pi-embedded-runner/run.ts](src/agents/pi-embedded-runner/run.ts) -- Pass injected skill content through to the system prompt builder
**Key change in `buildWorkspaceSkillSnapshot`:**
```typescript
// New return type addition
export type SkillSnapshot = {
prompt: string; // lazy-loaded skills prompt (XML)
injectedContent?: string; // always-in-context skill content (concatenated)
skills: Array<{ name: string; primaryEnv?: string }>;
// ...
};
```
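In TypeScript terms, the separation inside `buildWorkspaceSkillSnapshot()` might look like the following -- a minimal sketch with illustrative shapes, not the real OpenClaw types (the actual `OpenClawSkillMetadata` carries more fields):

```typescript
// Hypothetical minimal shapes -- illustration only.
type Skill = {
  name: string;
  description: string;
  content: string; // full SKILL.md body
  metadata?: { inject?: boolean };
};

type SkillSnapshot = {
  prompt: string; // lazy-loaded skills prompt (XML)
  injectedContent?: string; // always-in-context skill content (concatenated)
  skills: Array<{ name: string }>;
};

function buildWorkspaceSkillSnapshot(skills: Skill[]): SkillSnapshot {
  const injected = skills.filter((s) => s.metadata?.inject === true);
  const lazy = skills.filter((s) => !s.metadata?.inject);

  // Lazy skills: name + description only; the agent must `read` SKILL.md for the rest.
  const prompt = lazy
    .map((s) => `<skill name="${s.name}">${s.description}</skill>`)
    .join("\n");

  // Injected skills: full content goes straight into the system prompt.
  const injectedContent =
    injected.length > 0
      ? injected.map((s) => s.content).join("\n\n---\n\n")
      : undefined;

  return {
    prompt,
    injectedContent,
    skills: skills.map((s) => ({ name: s.name })),
  };
}
```

The key property: an injected skill never appears in the lazy-loaded XML list, so its instructions are not duplicated.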
---
## Phase 2: Dench CRM Skill (OpenClaw)
Create `skills/dench/SKILL.md` with:
- Full DuckDB schema reference
- nanoid 32 macro for ID generation (matching Dench's Supabase nanoid IDs)
- SQL patterns for all CRUD operations including PIVOT for table views
- Document management instructions
- Workspace structure documentation
- CRM patterns (contact, lead, deal, etc.) ported from the [Dench CRM agent prompt](file:///Users/kumareth/Documents/projects/dench/src/lib/agents/crm-agent.ts)
**New file:** `skills/dench/SKILL.md`
**Skill frontmatter:**
```yaml
---
name: dench
description: Manage Dench CRM workspace - create objects, fields, entries via DuckDB and documents as markdown files
metadata:
openclaw:
inject: true
emoji: "📊"
---
```
**Workspace directory structure managed by the skill:**
```
~/.openclaw/workspace/dench/
workspace_context.yaml # READ-ONLY context (org, members, integrations, defaults)
workspace.duckdb # DuckDB database (CRM data)
documents/ # Markdown documents (nested by path)
getting-started.md
projects/
project-alpha.md
exports/ # Generated CSV/Excel exports
WORKSPACE.md # Auto-generated schema documentation
```
**workspace_context.yaml** -- read-only context the agent consumes on startup (written by Dench, never by the agent):
```yaml
# Dench Workspace Context (READ-ONLY)
# This file is generated by Dench and synced via S3.
# The agent reads this for organizational context but MUST NOT modify it.
# Changes flow from Dench UI -> S3 -> this file (on sandbox init).
workspace:
version: 1
# Organization identity (synced from Dench on sandbox init)
organization:
id: "org_abc123"
name: "Acme Corp"
slug: "acme-corp"
business:
name: "Acme Corporation"
type: "saas" # saas, agency, ecommerce, services, etc.
industry: "Technology"
website: "https://acme.com"
# Team members -- needed for "user" type fields (e.g. "Assigned To")
# Agent uses these IDs when creating entries with user-type fields.
members:
- id: "usr_abc123"
name: "John Doe"
email: "john@acme.com"
role: owner
- id: "usr_def456"
name: "Jane Smith"
email: "jane@acme.com"
role: admin
- id: "usr_ghi789"
name: "Bob Wilson"
email: "bob@acme.com"
role: member
# Protected objects -- cannot be deleted/renamed by agent
# Mirrors Dench's base-objects.ts immutable list
protected_objects:
- name: "people"
description: "Contact records"
icon: "users"
- name: "companies"
description: "Company records"
icon: "building-2"
# Connected integrations -- agent reads this for sync context
# Populated by Dench when sandbox initializes from S3
integrations:
connections: []
# Example when connected:
# - app_key: "salesforce"
# app_name: "Salesforce"
# connection_id: "conn_xyz"
# synced_objects:
# - external_resource: "Lead"
# local_object: "lead"
# sync_direction: bidirectional # import, export, bidirectional
# sync_frequency: hourly # realtime, hourly, daily, manual
# field_mappings:
# "FirstName": "Full Name"
# "Email": "Email Address"
# Enrichment configuration
enrichment:
enabled: false
provider: "aviato" # aviato, apollo
auto_enrich: false # Auto-enrich new entries on creation
# CRM defaults
defaults:
default_view: table # table, kanban
date_format: "YYYY-MM-DD"
naming_convention: singular_lowercase # Object names: "lead" not "Leads"
# S3 persistence
sync:
s3_bucket: "dench-workspaces"
s3_prefix: "" # Set to org_id on init
frequency: on_write # on_write, manual
last_synced_at: null
# Credit account (for enrichment, AI operations)
credits:
allowance_balance: 0
topup_balance: 0
```
**Why YAML for context (not DuckDB):** This is read-only context the agent consumes once at startup -- never writes. The agent can `cat workspace_context.yaml` to understand the full org context instantly. Members list means no separate query to resolve user-type field assignments. Integrations give awareness of sync relationships without querying external APIs. This follows the Fintool pattern where user state (preferences, watchlists) lives as YAML in S3 while dense queryable data lives in the database. The data flow is one-way: Dench UI -> S3 -> workspace_context.yaml (on sandbox init). The agent never writes back to this file.
**DuckDB schema** (initialized by agent on first use via `duckdb dench/workspace.duckdb`):
```sql
-- nanoid 32 macro: generates 32-char IDs matching Dench's Supabase nanoid format
CREATE OR REPLACE MACRO nanoid32() AS (
SELECT string_agg(
substr('0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz_-',
(floor(random() * 64) + 1)::int, 1), '')
FROM generate_series(1, 32)
);
CREATE TABLE IF NOT EXISTS objects (
id VARCHAR PRIMARY KEY DEFAULT (nanoid32()),
name VARCHAR NOT NULL,
description VARCHAR,
icon VARCHAR,
default_view VARCHAR DEFAULT 'table', -- 'table' or 'kanban'
parent_document_id VARCHAR,
sort_order INTEGER DEFAULT 0,
source_app VARCHAR,
immutable BOOLEAN DEFAULT false,
created_at TIMESTAMPTZ DEFAULT now(),
updated_at TIMESTAMPTZ DEFAULT now(),
UNIQUE(name)
);
CREATE TABLE IF NOT EXISTS fields (
id VARCHAR PRIMARY KEY DEFAULT (nanoid32()),
object_id VARCHAR NOT NULL REFERENCES objects(id) ON DELETE CASCADE,
name VARCHAR NOT NULL,
description VARCHAR,
type VARCHAR NOT NULL, -- text, number, email, phone, boolean, date, richtext, user, relation, enum
required BOOLEAN DEFAULT false,
default_value VARCHAR,
related_object_id VARCHAR REFERENCES objects(id) ON DELETE SET NULL,
relationship_type VARCHAR, -- one_to_one, one_to_many, many_to_one, many_to_many
enum_values JSON, -- ["New","In Progress","Done"]
enum_colors JSON, -- ["#ef4444","#f59e0b","#22c55e"]
enum_multiple BOOLEAN DEFAULT false,
sort_order INTEGER DEFAULT 0,
created_at TIMESTAMPTZ DEFAULT now(),
updated_at TIMESTAMPTZ DEFAULT now(),
UNIQUE(object_id, name)
);
CREATE TABLE IF NOT EXISTS entries (
id VARCHAR PRIMARY KEY DEFAULT (nanoid32()),
object_id VARCHAR NOT NULL REFERENCES objects(id) ON DELETE CASCADE,
sort_order INTEGER DEFAULT 0,
created_at TIMESTAMPTZ DEFAULT now(),
updated_at TIMESTAMPTZ DEFAULT now()
);
CREATE TABLE IF NOT EXISTS entry_fields (
id VARCHAR PRIMARY KEY DEFAULT (nanoid32()),
entry_id VARCHAR NOT NULL REFERENCES entries(id) ON DELETE CASCADE,
field_id VARCHAR NOT NULL REFERENCES fields(id) ON DELETE CASCADE,
value VARCHAR,
created_at TIMESTAMPTZ DEFAULT now(),
updated_at TIMESTAMPTZ DEFAULT now(),
UNIQUE(entry_id, field_id)
);
CREATE TABLE IF NOT EXISTS statuses (
id VARCHAR PRIMARY KEY DEFAULT (nanoid32()),
object_id VARCHAR NOT NULL REFERENCES objects(id) ON DELETE CASCADE,
name VARCHAR NOT NULL,
color VARCHAR DEFAULT '#94a3b8',
sort_order INTEGER DEFAULT 0,
is_default BOOLEAN DEFAULT false,
created_at TIMESTAMPTZ DEFAULT now(),
updated_at TIMESTAMPTZ DEFAULT now(),
UNIQUE(object_id, name)
);
CREATE TABLE IF NOT EXISTS documents (
id VARCHAR PRIMARY KEY DEFAULT (nanoid32()),
title VARCHAR DEFAULT 'Untitled',
icon VARCHAR,
cover_image VARCHAR,
file_path VARCHAR NOT NULL UNIQUE, -- relative path in documents/ dir
parent_id VARCHAR REFERENCES documents(id) ON DELETE CASCADE,
parent_object_id VARCHAR REFERENCES objects(id) ON DELETE CASCADE,
sort_order INTEGER DEFAULT 0,
is_published BOOLEAN DEFAULT false,
created_at TIMESTAMPTZ DEFAULT now(),
updated_at TIMESTAMPTZ DEFAULT now()
);
-- Full-text search index (DuckDB FTS extension)
INSTALL fts; LOAD fts;
```
**Auto-generated views** -- after every object/field mutation, the agent regenerates a PIVOT view for each object. These are stored queries (zero data duplication) that make the EAV pattern invisible:
```sql
-- Auto-generated after creating the "leads" object and its fields
CREATE OR REPLACE VIEW v_leads AS
PIVOT (
SELECT e.id as entry_id, e.created_at, e.updated_at,
f.name as field_name, ef.value
FROM entries e
JOIN entry_fields ef ON ef.entry_id = e.id
JOIN fields f ON f.id = ef.field_id
WHERE e.object_id = (SELECT id FROM objects WHERE name = 'leads')
) ON field_name USING first(value);
-- Now query like a normal table:
SELECT * FROM v_leads WHERE "Status" = 'New' ORDER BY "Full Name" LIMIT 50;
SELECT "Status", COUNT(*) FROM v_leads GROUP BY "Status";
SELECT * FROM v_leads WHERE "Email Address" LIKE '%@gmail.com';
```
Views are regenerated (not data, just the query definition) whenever fields are added/removed/renamed. Naming convention: `v_{object_name}` (e.g., `v_leads`, `v_companies`, `v_people`).
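View regeneration can be a simple string template over the object name; a sketch (function name hypothetical, with a guard so an object name can't break out of the SQL literal):

```typescript
// Build the CREATE OR REPLACE VIEW statement for one object's PIVOT view.
// objectName is assumed to already follow the singular/lowercase convention.
function buildPivotViewSql(objectName: string): string {
  if (!/^[a-z][a-z0-9_]*$/.test(objectName)) {
    throw new Error(`invalid object name: ${objectName}`);
  }
  return `
CREATE OR REPLACE VIEW v_${objectName} AS
PIVOT (
  SELECT e.id AS entry_id, e.created_at, e.updated_at,
         f.name AS field_name, ef.value
  FROM entries e
  JOIN entry_fields ef ON ef.entry_id = e.id
  JOIN fields f ON f.id = ef.field_id
  WHERE e.object_id = (SELECT id FROM objects WHERE name = '${objectName}')
) ON field_name USING first(value);`.trim();
}
```

The agent would run this output through `exec: duckdb` after any field mutation on the object.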
**Filesystem directory structure** -- auto-projected from DuckDB after schema mutations. Represents the sidebar's nested knowledge tree. NO entry data in the filesystem (DuckDB is sole source of truth for entries):
```
dench/
workspace.duckdb # SOLE source of truth for all structured data + views
workspace_context.yaml # Read-only org context
knowledge/ # Root of knowledge tree (= sidebar root)
people/ # Object "people" (directory = object node in sidebar)
.object.yaml # Lightweight metadata projection (id, icon, view, field list)
onboarding-guide.md # Document nested UNDER the people object
companies/ # Object "companies"
.object.yaml
projects/ # Document "Projects" (directory with children)
projects.md # Document content
tasks/ # Object nested UNDER the projects document
.object.yaml
roadmap.md # Sibling document
sales/
sales.md
leads/ # Object nested under sales document
.object.yaml
follow-up-playbook.md # Document nested under leads object
exports/ # CSV/Parquet exports (on-demand, not auto-generated)
WORKSPACE.md # Auto-generated schema summary
```
**Source of truth rules:**
- **Entries (rows)**: DuckDB ONLY (queried via `v_{object}` views). Never duplicated to filesystem.
- **Fields (columns)**: DuckDB. Summary projected to `.object.yaml` (read-only).
- **Objects (tables)**: DuckDB. Projected as directories + `.object.yaml` (read-only). Queryable via auto-generated views.
- **Document metadata**: DuckDB. Projected as directory structure (read-only).
- **Document content**: Filesystem (`.md` files). DuckDB stores `file_path` reference only.
- **Nesting/ordering**: DuckDB. Projected as directory hierarchy (read-only).
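Under these rules, the filesystem projection reduces to resolving each DuckDB row to a path under `knowledge/` by walking parent links; a sketch with illustrative shapes (the real rows carry more columns):

```typescript
type Node = {
  id: string;
  kind: "object" | "document"; // objects become directories with .object.yaml
  name: string;
  parentId: string | null; // parent object/document id; null = knowledge root
};

// Resolve every node to its knowledge/ directory path, memoizing as we go.
function projectPaths(nodes: Node[]): Map<string, string> {
  const byId = new Map<string, Node>(nodes.map((n): [string, Node] => [n.id, n]));
  const paths = new Map<string, string>();
  const resolve = (n: Node): string => {
    const cached = paths.get(n.id);
    if (cached) return cached;
    const parent = n.parentId ? byId.get(n.parentId) : undefined;
    const base = parent ? resolve(parent) : "knowledge";
    const path = `${base}/${n.name}`;
    paths.set(n.id, path);
    return path;
  };
  nodes.forEach(resolve);
  return paths;
}
```

A post-mutation step would diff these paths against the on-disk tree and mkdir/rmdir/move accordingly; entry rows never enter this projection.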
**Key DuckDB advantages leveraged:**
- **PIVOT views**: Auto-generated `v_{object}` views make EAV invisible -- query like normal tables
- **Native JSON**: `enum_values` and `enum_colors` are native JSON columns, no string parsing needed
- **CSV import**: `COPY v_leads TO 'exports/leads.csv';` or import with `COPY ... FROM 'import.csv' (AUTO_DETECT true);`
- **PostgreSQL dialect**: Generated SQL is directly portable to Dench's Supabase/Postgres
**Skill content structure** -- the full SKILL.md incorporates and adapts every section from Dench's existing CRM agent prompt ([src/lib/agents/crm-agent.ts](file:///Users/kumareth/Documents/projects/dench/src/lib/agents/crm-agent.ts)), rewritten for DuckDB/filesystem execution instead of tool calls:
### Section 1: Role and Workspace Startup
- Role: Dench CRM Management Agent operating via DuckDB and filesystem
- On every conversation: read `dench/workspace_context.yaml` (READ-ONLY) for org context, members, integrations, protected objects
- Initialize DuckDB if not exists: `duckdb dench/workspace.duckdb < schema.sql`
- Database path: `dench/workspace.duckdb`, documents path: `dench/documents/`
### Section 2: Primary Responsibilities (adapted from `<primary_responsibilities>`)
- **Request analysis**: Same as original -- extract intent, identify entities/objects/fields/relationships, transform vague requests into structured SQL
- **Object creation**: Instead of `createObjectTool`, generate `INSERT INTO objects` SQL. Naming convention: singular, lowercase (e.g., "lead", "customer"). Check existing with `SELECT` first. For kanban: auto-create Status (enum) and Assigned To (user) fields in same transaction
- **Field management**: Instead of `createFieldTool`, generate `INSERT INTO fields` SQL. Field types: text, number, email, phone, boolean, date, richtext, user, relation, enum. Use `INSERT ... ON CONFLICT (object_id, name) DO UPDATE` for idempotency
- **Entry creation**: Instead of `createEntryTool`, generate `INSERT INTO entries` + `INSERT INTO entry_fields` in a transaction. Resolve field names to field IDs via `SELECT id FROM fields WHERE object_id = ? AND name = ?`
- **Entry search**: Instead of `searchEntriesTool`, generate PIVOT queries with WHERE/LIKE/ORDER BY. Use DuckDB FTS for full-text search. Operators: `=` (equals), `LIKE '%...%'` (contains), `LIKE '...%'` (startsWith), `LIKE '%...'` (endsWith), `IS NULL` (isEmpty), `IS NOT NULL` (isNotEmpty)
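The operator mapping above can be a small lookup; a sketch of turning one filter into a WHERE predicate against a `v_{object}` view (helper names are hypothetical):

```typescript
type Op = "equals" | "contains" | "startsWith" | "endsWith" | "isEmpty" | "isNotEmpty";

// Escape single quotes for a SQL string literal.
const q = (v: string) => `'${v.replace(/'/g, "''")}'`;

function wherePredicate(field: string, op: Op, value = ""): string {
  const col = `"${field}"`; // view columns are the human-readable field names
  switch (op) {
    case "equals":     return `${col} = ${q(value)}`;
    case "contains":   return `${col} LIKE ${q(`%${value}%`)}`;
    case "startsWith": return `${col} LIKE ${q(`${value}%`)}`;
    case "endsWith":   return `${col} LIKE ${q(`%${value}`)}`;
    case "isEmpty":    return `${col} IS NULL`;
    case "isNotEmpty": return `${col} IS NOT NULL`;
    default: throw new Error(`unknown operator: ${op}`);
  }
}
```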
### Section 3: SQL Operation Guide (replaces `<tool_usage_guide>`)
Each former tool maps to SQL patterns:
- **createObjectTool -> INSERT object**: `INSERT INTO objects (name, description, icon, default_view) VALUES (...) ON CONFLICT (name) DO NOTHING RETURNING *;`
- **createFieldTool -> INSERT field**: `INSERT INTO fields (object_id, name, type, required, enum_values, enum_colors, related_object_id, relationship_type, sort_order) VALUES (...) ON CONFLICT (object_id, name) DO UPDATE SET ...;`
- **createEntryTool -> INSERT entry + entry_fields**: Transaction with `INSERT INTO entries` then `INSERT INTO entry_fields` for each field value
- **getObjectTool -> SELECT object + fields**: `SELECT o.*, json_group_array(json_object('id', f.id, 'name', f.name, 'type', f.type)) as fields FROM objects o LEFT JOIN fields f ON f.object_id = o.id WHERE o.id = ? GROUP BY o.id;`
- **getObjectsTool -> SELECT all objects**: `SELECT o.*, COUNT(e.id) as entry_count FROM objects o LEFT JOIN entries e ON e.object_id = o.id GROUP BY o.id ORDER BY o.sort_order;`
- **getOrganizationMembersTool -> READ workspace_context.yaml**: Members list is in `workspace_context.yaml` under `members:`. Read with `cat dench/workspace_context.yaml` and extract the members section. User fields store member IDs like `usr_abc123`.
- **searchEntriesTool -> query the auto-generated view**:
```sql
-- Simple: query the pre-built PIVOT view like a normal table
SELECT * FROM v_leads WHERE "Status" = 'New' ORDER BY created_at DESC LIMIT 50;
SELECT * FROM v_leads WHERE "Email Address" LIKE '%@gmail.com';
SELECT * FROM v_leads WHERE "Full Name" ILIKE '%john%';
SELECT "Status", COUNT(*) FROM v_leads GROUP BY "Status";
```
Views are auto-generated per object (`v_{object_name}`) so the agent never writes raw PIVOT queries for reads.
- **updateObjectTool -> UPDATE object**: `UPDATE objects SET name = ?, description = ?, updated_at = now() WHERE id = ?;`
- **updateFieldTool -> UPDATE field**: `UPDATE fields SET ... WHERE id = ?;`
- **updateEntryTool -> UPSERT entry_fields**: `INSERT INTO entry_fields (entry_id, field_id, value) VALUES (?, ?, ?) ON CONFLICT (entry_id, field_id) DO UPDATE SET value = excluded.value, updated_at = now();`
- **deleteObjectTool -> DELETE cascade**: `DELETE FROM objects WHERE id = ? AND immutable = false;` (cascades to fields, entries, entry_fields via FK)
- **deleteFieldTool -> DELETE field**: `DELETE FROM fields WHERE id = ?;` (cascades to entry_fields)
- **deleteEntryTool -> DELETE entry**: `DELETE FROM entries WHERE id = ?;` (cascades to entry_fields)
- **createManyEntriesTool -> Batch INSERT**: Wrap multiple entry+entry_fields inserts in `BEGIN TRANSACTION; ... COMMIT;`
- **Bulk import from CSV**: `COPY ... FROM 'import.csv' (AUTO_DETECT true);` with field mapping
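For entry creation, the former `createEntryTool` call collapses into one multi-statement transaction; a sketch of building that SQL from already-resolved field IDs (function name hypothetical; `entryId` is pre-generated, e.g. via `SELECT nanoid32();`, so every `entry_fields` row can reference the same id):

```typescript
// Escape single quotes for a SQL string literal.
const lit = (v: string) => `'${v.replace(/'/g, "''")}'`;

// fieldValues maps resolved field_id -> value for one new entry.
function buildCreateEntrySql(
  entryId: string,
  objectId: string,
  fieldValues: Record<string, string>,
): string {
  const stmts = [
    "BEGIN TRANSACTION;",
    `INSERT INTO entries (id, object_id) VALUES (${lit(entryId)}, ${lit(objectId)});`,
    ...Object.entries(fieldValues).map(
      ([fieldId, value]) =>
        `INSERT INTO entry_fields (entry_id, field_id, value) VALUES ` +
        `(${lit(entryId)}, ${lit(fieldId)}, ${lit(value)}) ` +
        `ON CONFLICT (entry_id, field_id) DO UPDATE SET value = excluded.value, updated_at = now();`,
    ),
    "COMMIT;",
  ];
  return stmts.join("\n");
}
```

The whole string is one `exec` call; the `ON CONFLICT` upsert makes a retried insert idempotent.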
### Section 4: Execution Workflows (adapted from `<execution_guidelines>`)
Same workflow selection principle: choose the minimal workflow needed based on user intent.
- **Create New CRM Structure**: `SELECT` to check existence -> `INSERT objects` -> `INSERT fields` (all in one `exec` call with multi-statement SQL in a transaction)
- **Search and Display**: Generate PIVOT query with appropriate WHERE/ORDER BY/LIMIT
- **Add New Entries**: `SELECT` object+fields -> `INSERT entries` + `INSERT entry_fields` in transaction
- **Update Existing Data**: PIVOT query to find -> `UPDATE` entry_fields
- **Quick Information**: `SELECT` with aggregate counts
- **Bulk Operations**: Multi-row INSERT in transaction, report counts
- **Data Cleanup**: PIVOT query with filters -> `DELETE` matching entries
Key differences from original:
1. **All steps happen in a single `exec` call** with multi-statement SQL, not 10+ separate tool calls
2. **After schema mutations**: regenerate the `v_{object}` view and project the filesystem directory structure
3. **Reads use views**: `SELECT * FROM v_leads` instead of raw PIVOT queries
Example of creating a full CRM structure in one shot:
```sql
BEGIN TRANSACTION;
INSERT INTO objects (name, description, icon, default_view) VALUES ('lead', 'Sales leads', 'user-plus', 'table') ON CONFLICT (name) DO NOTHING;
INSERT INTO fields (object_id, name, type, required, sort_order) VALUES
((SELECT id FROM objects WHERE name = 'lead'), 'Full Name', 'text', true, 0),
((SELECT id FROM objects WHERE name = 'lead'), 'Email Address', 'email', true, 1),
((SELECT id FROM objects WHERE name = 'lead'), 'Phone Number', 'phone', false, 2)
ON CONFLICT (object_id, name) DO NOTHING;
INSERT INTO fields (object_id, name, type, enum_values, enum_colors, sort_order) VALUES
((SELECT id FROM objects WHERE name = 'lead'), 'Status', 'enum',
'["New","Contacted","Qualified","Converted"]'::JSON,
'["#94a3b8","#3b82f6","#f59e0b","#22c55e"]'::JSON, 3)
ON CONFLICT (object_id, name) DO NOTHING;
-- Auto-generate the PIVOT view for this object
CREATE OR REPLACE VIEW v_lead AS
PIVOT (
SELECT e.id as entry_id, e.created_at, e.updated_at,
f.name as field_name, ef.value
FROM entries e
JOIN entry_fields ef ON ef.entry_id = e.id
JOIN fields f ON f.id = ef.field_id
WHERE e.object_id = (SELECT id FROM objects WHERE name = 'lead')
) ON field_name USING first(value);
COMMIT;
-- Then: project filesystem structure (mkdir knowledge/lead/, write .object.yaml)
```
### Section 5: CRM Patterns (adapted from `<crm_patterns>`)
Identical patterns, but with SQL examples instead of tool call examples:
- **Contact/Customer**: Full Name (text, required), Email Address (email, required), Phone Number (phone), Company (relation), Notes (richtext)
- **Lead/Prospect**: Full Name, Email, Phone, Status (enum: New/Contacted/Qualified/Converted), Source (enum), Score (number), Assigned To (user), Notes (richtext)
- **Company/Organization**: Company Name (text, required), Industry (enum), Website (text), Type (enum), Notes (richtext)
- **Deal/Opportunity**: Deal Name, Amount (number), Stage (enum: Discovery/Proposal/Negotiation/Closed Won/Closed Lost), Close Date (date), Probability (number), Primary Contact (relation), Assigned To (user), Notes (richtext)
- **Case/Project**: Case Number, Title, Client (relation), Status (enum), Priority (enum), Due Date (date), Assigned To (user), Notes (richtext)
- **Property/Asset**: Address, Property Type (enum), Price (number), Status (enum), Square Footage (number), Bedrooms (number), Notes (richtext)
- **Task/Activity** (kanban): Title, Description, Assigned To (user), Due Date, Status (enum: In Queue/In Progress/Done), Priority (enum), Notes (richtext). Use `default_view = 'kanban'` and auto-create Status + Assigned To fields.
### Section 6: Field Type Reference (adapted from `<field_type_selection>`)
- **text**: General text data, names, descriptions, addresses. Stored as VARCHAR.
- **email**: Email addresses. Stored as VARCHAR. Agent validates format.
- **phone**: Phone numbers. Stored as VARCHAR. Agent normalizes format.
- **number**: Numeric values (prices, quantities, scores). Stored as VARCHAR in entry_fields (EAV), cast with `::NUMERIC` in queries.
- **boolean**: Yes/no flags. Stored as "true"/"false" strings.
- **date**: Dates. Stored as ISO 8601 strings. Cast with `::DATE` in queries.
- **richtext**: Rich text for Notes fields. Content stored as entry_field value (plain text / markdown). Displayed in Notion-style editor in Dench UI.
- **user**: Person/assignee fields. Stores member ID (e.g., "usr_abc123") from `workspace_context.yaml` members list. ALWAYS resolve member name -> ID before inserting.
- **enum**: Dropdown/select fields. Field definition stores `enum_values` as JSON array. Entry stores the selected value string. `enum_colors` parallel array for styling. `enum_multiple = true` for multi-select (value stored as JSON array string).
- **relation**: Links to entries in another object. Field stores `related_object_id` and `relationship_type`. Entry stores the related entry ID(s). `many_to_one` for single select, `many_to_many` for multi-select (stored as JSON array).
### Section 7: Naming Conventions and Data Handling (adapted from `<field_naming_conventions>` and `<data_handling_best_practices>`)
- Object names: singular, lowercase, one word ("lead" not "Leads")
- Field names: human-readable, proper capitalization ("Email Address" not "email")
- Be descriptive: "Phone Number" not "Phone"
- Be consistent within an object
- Validate email formats, normalize phone numbers, use ISO 8601 dates
- Trim whitespace from all values
- Check for duplicates before creating entries (SELECT before INSERT)
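These data-handling rules can be expressed as a per-type value cleaner; a sketch (validation patterns are illustrative, not the exact rules the skill will ship):

```typescript
function normalizeValue(type: string, raw: string): string {
  const v = raw.trim(); // trim whitespace from all values
  switch (type) {
    case "email": {
      const email = v.toLowerCase();
      if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email)) throw new Error(`invalid email: ${raw}`);
      return email;
    }
    case "phone":
      // Keep digits and a leading '+'; display formatting is left to the Dench UI.
      return (v.startsWith("+") ? "+" : "") + v.replace(/\D/g, "");
    case "date":
      if (!/^\d{4}-\d{2}-\d{2}$/.test(v)) throw new Error(`expected ISO 8601 date: ${raw}`);
      return v;
    case "boolean":
      return ["true", "yes", "1"].includes(v.toLowerCase()) ? "true" : "false";
    default:
      return v;
  }
}
```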
### Section 8: Error Handling (adapted from `<error_handling>`)
- `UNIQUE constraint` on INSERT -> item already exists, treat as SUCCESS (use `ON CONFLICT DO NOTHING` or `DO UPDATE`)
- Protected object deletion -> check `immutable` column and `protected_objects` in workspace_context.yaml
- Field type mismatch -> warn user before changing type on field with existing data
- Missing required fields -> validate before INSERT, report which fields are missing
### Section 9: Document Management (new -- not in original CRM agent)
- Create document: `write` tool to create `dench/documents/<path>.md` + `INSERT INTO documents` with metadata
- Document content is the .md file; DuckDB `documents` table tracks metadata (title, icon, nesting, order)
- Cross-nesting: documents under objects (`parent_object_id`), objects under documents (`parent_document_id`)
- Document tree mirrors filesystem: `dench/documents/projects/alpha.md` -> `file_path = 'projects/alpha.md'`
### Section 10: Protected Objects and Read-Only Context (new)
- Read `protected_objects` from `workspace_context.yaml` on startup
- NEVER delete, rename, or modify immutable objects (People, Companies)
- NEVER modify `workspace_context.yaml` -- it is read-only context from Dench
- Members list is authoritative for user-type field resolution
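Member resolution amounts to a case-insensitive lookup over the parsed `members:` list; a sketch (shapes illustrative; YAML parsing is assumed to happen elsewhere):

```typescript
type Member = { id: string; name: string; email: string; role: string };

// Resolve "Jane", "jane smith", or "jane@acme.com" to a member id; fail loudly on ambiguity.
function resolveMemberId(members: Member[], query: string): string {
  const needle = query.trim().toLowerCase();
  const hits = members.filter(
    (m) =>
      m.email.toLowerCase() === needle ||
      m.name.toLowerCase() === needle ||
      m.name.toLowerCase().split(/\s+/).includes(needle),
  );
  if (hits.length === 0) throw new Error(`no member matches "${query}"`);
  if (hits.length > 1) throw new Error(`ambiguous member "${query}"; use full name or email`);
  return hits[0].id;
}
```

Throwing on zero or multiple matches forces the agent to ask the user rather than guess an assignee.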
### Section 11: Post-Mutation Pipeline (new)
After any schema mutation (create/update/delete object, field, or document), run a four-step pipeline:
1. **Regenerate views**: `CREATE OR REPLACE VIEW v_{object}` for any affected objects
2. **Project filesystem**: Sync the `knowledge/` directory structure from DuckDB (mkdir/rmdir for objects, write `.object.yaml` summaries, move `.md` files if nesting changed)
3. **Sync to S3**: Run `dench/sync.sh` to persist workspace.duckdb + knowledge/ to S3
4. **Regenerate WORKSPACE.md**: Human-readable summary of all objects, fields, entry counts, and views
### Section 12: Critical Reminders (adapted from `<critical_reminders>`)
- Handle the ENTIRE CRM operation from analysis to SQL execution to summary
- Always check existing data before creating (SELECT before INSERT, or ON CONFLICT)
- Search proactively to provide better UX (PIVOT with filters)
- Never assume field names -- always verify with `SELECT * FROM fields WHERE object_id = ?`
- Extract ALL data from user messages
- NOTES FIELDS: type "richtext", displayed in Notion editor
- USER FIELDS: Resolve member name to ID from workspace_context.yaml BEFORE inserting
- ENUM FIELDS: type "enum" with `enum_values` JSON array
- RELATION FIELDS: type "relation" with `related_object_id`
- KANBAN BOARDS: `default_view = 'kanban'`, auto-create Status and Assigned To fields
- PROTECTED OBJECTS: Never delete objects listed in `workspace_context.yaml` `protected_objects`
- ONE EXEC CALL: Batch related SQL in a single transaction whenever possible -- this is the entire point of the filesystem-first approach
---
## Phase 3: S3 Sync Layer
**Two sync directions:**
1. **Sandbox -> S3:** After every database write or document save, sync changed files to S3
2. **S3 -> Sandbox:** On sandbox startup, download the latest dench/ folder from S3
**Option A (simpler): Script-based sync**
- Add a `dench/sync.sh` script that the agent calls after mutations
- Uses AWS CLI: `aws s3 sync dench/ s3://dench-workspaces/{orgId}/`
- Credentials injected via ABAC (short-lived, scoped to org prefix)
- Skill instructs the agent to run sync after writes
**Option B (more robust): inotifywait + background sync**
- Background process watches dench/ for changes
- Debounced sync to S3 (e.g., 5s after last change)
- More reliable but more infrastructure
**Recommendation:** Start with Option A. The agent is already making exec calls; one more `aws s3 sync` is trivial.
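What `sync.sh` runs reduces to a single command; a sketch of constructing it with an org-scoped prefix (bucket name from this plan; the `--delete` and `--exclude` flags are illustrative choices, not settled):

```typescript
// Build the sandbox -> S3 sync command for Option A. `--delete` keeps the S3
// prefix an exact mirror of the workspace; `--exclude *.wal` skips DuckDB
// write-ahead-log files that may be mid-transaction.
function buildSyncCommand(orgId: string, bucket = "dench-workspaces"): string {
  if (!/^[A-Za-z0-9_-]+$/.test(orgId)) throw new Error(`unsafe orgId: ${orgId}`);
  return [
    "aws", "s3", "sync", "dench/", `s3://${bucket}/${orgId}/`,
    "--delete",
    "--exclude", "*.wal",
  ].join(" ");
}
```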
**New file:** `skills/dench/sync.sh` (bundled with the skill, referenced by SKILL.md)
**Sandbox startup hook:**
- Add to OpenClaw's sandbox initialization: download dench workspace from S3 if it exists
- Could be a `BOOTSTRAP.md` instruction or a sandbox pre-warm step
---
## Phase 4: Dench Sidebar Integration
The Dench sidebar currently reads from Prisma (`getAll` objects + `getAll` documents). It needs a new data source for workspace-managed data.
**Approach:** Add a new tRPC endpoint that reads from S3 (or the synced PostgreSQL `fs_files` table) and returns the same tree structure the sidebar expects.
**Files to modify in Dench:**
- [src/lib/trpc/routers/objects.ts](file:///Users/kumareth/Documents/projects/dench/src/lib/trpc/routers/objects.ts) -- Add `getWorkspaceObjects` endpoint that queries the synced DuckDB data (either by reading workspace.duckdb from S3, or from a PostgreSQL materialized view)
- [src/lib/trpc/routers/documents.ts](file:///Users/kumareth/Documents/projects/dench/src/lib/trpc/routers/documents.ts) -- Add `getWorkspaceDocuments` endpoint
- [src/components/app-sidebar.tsx](file:///Users/kumareth/Documents/projects/dench/src/components/app-sidebar.tsx) -- Merge workspace data into the `buildKnowledgeTree` function
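The `buildKnowledgeTree` merge could look like the following sketch -- workspace-sourced nodes appended under a dedicated branch so Prisma-backed and workspace-backed items stay distinguishable (shapes illustrative, not Dench's actual tree types):

```typescript
type TreeNode = {
  id: string;
  label: string;
  source: "prisma" | "workspace";
  children: TreeNode[];
};

function mergeKnowledgeTree(prismaNodes: TreeNode[], workspaceNodes: TreeNode[]): TreeNode[] {
  if (workspaceNodes.length === 0) return prismaNodes;
  // Group everything the agent manages under one "Workspace" branch.
  const workspaceBranch: TreeNode = {
    id: "workspace-root",
    label: "Workspace",
    source: "workspace",
    children: workspaceNodes,
  };
  return [...prismaNodes, workspaceBranch];
}
```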
**S3 -> PostgreSQL sync (Lambda):**
- When workspace.duckdb is uploaded to S3, a Lambda function:
1. Downloads the DuckDB file
2. Reads objects, fields, entries, documents tables
3. Upserts into PostgreSQL tables (or a dedicated `workspace_objects`, `workspace_documents` table)
- This mirrors the Fintool pattern: S3 source of truth, Lambda sync, PG for fast queries
**Alternative (simpler for v1):** Dench backend downloads workspace.duckdb from S3 on demand and queries it directly using `duckdb` Node.js bindings (`@duckdb/node-api`). No Lambda needed initially.
---
## Phase 5: Real-Time Updates (Future)
- WebSocket notifications when S3 objects change (via S3 Event Notifications -> SNS -> WebSocket)
- Dench sidebar auto-refreshes when workspace data changes
- Bidirectional: Dench UI edits write back to S3, agent picks up changes on next sandbox start
---
## Key Design Decisions
- **Three-layer storage** (following Fintool pattern):
- **YAML** (`workspace_context.yaml`): Read-only workspace identity, team members, integrations, defaults. Written by Dench, consumed by agent. Never modified by the agent -- data flows one-way from Dench UI -> S3 -> this file.
- **DuckDB** (`workspace.duckdb`): Dense, relational CRM data (objects, fields, entries, statuses). Queryable via SQL with PIVOT, JOIN, FTS.
- **Markdown** (`documents/*.md`): Rich document content. Agent uses `write`/`edit` tools directly. DuckDB tracks metadata (title, icon, nesting) while the file system holds the content.
- **DuckDB over SQLite**: Native PIVOT/UNPIVOT is essential for the EAV data model (rendering custom-field entries as tables). PostgreSQL-compatible SQL dialect means generated SQL is portable to Dench's Supabase/Postgres. Native JSON type eliminates string parsing for enum_values/colors. Built-in CSV/Parquet import with auto-detection enables bulk CRM operations. FTS extension provides full-text search. ~50-100ms startup overhead is acceptable.
- **nanoid 32 IDs**: Matches Dench's Supabase PostgreSQL nanoid format. Generated via a DuckDB macro using `generate_series` + `random()` over the standard nanoid alphabet (`0-9A-Za-z_-`). 32 chars provides 192 bits of entropy.
- **Members in YAML not DuckDB**: The agent needs member IDs for "user" type fields (like "Assigned To"). Putting members in workspace_context.yaml means the agent reads them once on startup without a separate SQL query. The list changes infrequently (team changes, not per-request). Agent reads only, never writes.
- **Integrations in YAML not DuckDB**: The agent needs to know what apps are connected and how fields map, but doesn't need to query this relationally. YAML gives the agent instant context about sync relationships. Agent reads only, never writes.
- **Skill inject vs bootstrap file**: Using skill metadata `inject: true` is cleaner than adding a new bootstrap file type. It keeps the Dench instructions modular and version-controlled in the skills directory.
- **One transaction per operation**: The skill instructs the agent to wrap multi-step operations in `BEGIN ... COMMIT` for atomicity. DuckDB supports full ACID transactions.

skills/dench/SKILL.md Normal file
@@ -0,0 +1,470 @@
---
name: dench
description: Manage Dench CRM workspace - objects, fields, entries via DuckDB and documents as markdown files in a nested knowledge tree.
metadata: { "openclaw": { "inject": true, "always": true, "emoji": "📊" } }
---
# Dench CRM Workspace
You manage a Dench CRM workspace stored locally at `dench/` in your working directory.
All structured data lives in **DuckDB** (`dench/workspace.duckdb`). Documents are **markdown files** in `dench/knowledge/`. Organization context is in `dench/workspace_context.yaml` (READ-ONLY).
## Workspace Structure
```
dench/
workspace_context.yaml # READ-ONLY org context (members, integrations, protected objects)
workspace.duckdb # DuckDB database — sole source of truth for structured data
knowledge/ # Nested knowledge tree (sidebar mirrors this)
people/ # Object directory
.object.yaml # Object metadata projection
onboarding-guide.md # Document nested under object
companies/
.object.yaml
projects/
projects.md # Document content
tasks/ # Object nested under document
.object.yaml
exports/ # On-demand CSV/Parquet exports
WORKSPACE.md # Auto-generated schema summary
```
## Startup
On every conversation:
1. Read `dench/workspace_context.yaml` for org context, members, integrations, protected objects. **NEVER modify this file.**
2. Install the duckdb CLI if it is not already available: `curl https://install.duckdb.org | sh`
3. If `dench/workspace.duckdb` does not exist, initialize it with the schema below.
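Step 2 can be guarded so the installer only runs when the CLI is missing; a sketch (the exported install path is an assumption, so prefer whatever location the installer reports):

```bash
if ! command -v duckdb >/dev/null 2>&1; then
  curl -fsSL https://install.duckdb.org | sh
  # Assumption: the installer drops the binary under ~/.duckdb; adjust if it reports another path.
  export PATH="$HOME/.duckdb/cli/latest:$PATH"
fi
```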
## workspace_context.yaml (READ-ONLY)
This file is generated by Dench and synced via S3. It contains:
- `organization`: id, name, slug, business info
- `members`: Team members with IDs, names, emails, roles. **Use these IDs for "user" type fields** (e.g., "Assigned To").
- `protected_objects`: Objects that MUST NOT be deleted or renamed (e.g., people, companies).
- `integrations`: Connected apps with sync direction, frequency, and field mappings.
- `enrichment`: Whether enrichment is enabled and which provider.
- `defaults`: Default view, date format, naming conventions.
- `credits`: Current credit balance for enrichment/AI operations.
## DuckDB Schema
Initialize via `exec` with `duckdb dench/workspace.duckdb`:
```sql
-- Nanoid 32 macro: generates IDs matching Dench's Supabase nanoid format
CREATE OR REPLACE MACRO nanoid32() AS (
SELECT string_agg(
substr('0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz_-',
(floor(random() * 64) + 1)::int, 1), '')
FROM generate_series(1, 32)
);
CREATE TABLE IF NOT EXISTS objects (
id VARCHAR PRIMARY KEY DEFAULT (nanoid32()),
name VARCHAR NOT NULL,
description VARCHAR,
icon VARCHAR,
default_view VARCHAR DEFAULT 'table',
parent_document_id VARCHAR,
sort_order INTEGER DEFAULT 0,
source_app VARCHAR,
immutable BOOLEAN DEFAULT false,
created_at TIMESTAMPTZ DEFAULT now(),
updated_at TIMESTAMPTZ DEFAULT now(),
UNIQUE(name)
);
CREATE TABLE IF NOT EXISTS fields (
id VARCHAR PRIMARY KEY DEFAULT (nanoid32()),
object_id VARCHAR NOT NULL REFERENCES objects(id) ON DELETE CASCADE,
name VARCHAR NOT NULL,
description VARCHAR,
type VARCHAR NOT NULL,
required BOOLEAN DEFAULT false,
default_value VARCHAR,
related_object_id VARCHAR REFERENCES objects(id) ON DELETE SET NULL,
relationship_type VARCHAR,
enum_values JSON,
enum_colors JSON,
enum_multiple BOOLEAN DEFAULT false,
sort_order INTEGER DEFAULT 0,
created_at TIMESTAMPTZ DEFAULT now(),
updated_at TIMESTAMPTZ DEFAULT now(),
UNIQUE(object_id, name)
);
CREATE TABLE IF NOT EXISTS entries (
id VARCHAR PRIMARY KEY DEFAULT (nanoid32()),
object_id VARCHAR NOT NULL REFERENCES objects(id) ON DELETE CASCADE,
sort_order INTEGER DEFAULT 0,
created_at TIMESTAMPTZ DEFAULT now(),
updated_at TIMESTAMPTZ DEFAULT now()
);
CREATE TABLE IF NOT EXISTS entry_fields (
id VARCHAR PRIMARY KEY DEFAULT (nanoid32()),
entry_id VARCHAR NOT NULL REFERENCES entries(id) ON DELETE CASCADE,
field_id VARCHAR NOT NULL REFERENCES fields(id) ON DELETE CASCADE,
value VARCHAR,
created_at TIMESTAMPTZ DEFAULT now(),
updated_at TIMESTAMPTZ DEFAULT now(),
UNIQUE(entry_id, field_id)
);
CREATE TABLE IF NOT EXISTS statuses (
id VARCHAR PRIMARY KEY DEFAULT (nanoid32()),
object_id VARCHAR NOT NULL REFERENCES objects(id) ON DELETE CASCADE,
name VARCHAR NOT NULL,
color VARCHAR DEFAULT '#94a3b8',
sort_order INTEGER DEFAULT 0,
is_default BOOLEAN DEFAULT false,
created_at TIMESTAMPTZ DEFAULT now(),
updated_at TIMESTAMPTZ DEFAULT now(),
UNIQUE(object_id, name)
);
CREATE TABLE IF NOT EXISTS documents (
id VARCHAR PRIMARY KEY DEFAULT (nanoid32()),
title VARCHAR DEFAULT 'Untitled',
icon VARCHAR,
cover_image VARCHAR,
file_path VARCHAR NOT NULL UNIQUE,
parent_id VARCHAR REFERENCES documents(id) ON DELETE CASCADE,
parent_object_id VARCHAR REFERENCES objects(id) ON DELETE CASCADE,
sort_order INTEGER DEFAULT 0,
is_published BOOLEAN DEFAULT false,
created_at TIMESTAMPTZ DEFAULT now(),
updated_at TIMESTAMPTZ DEFAULT now()
);
INSTALL fts; LOAD fts;
```
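The `fts` extension loaded at the end of the schema is put to use by building an index over `entry_fields.value`; a sketch using DuckDB's `fts_main_<table>` index naming (rebuild the index after bulk mutations):

```sql
PRAGMA create_fts_index('entry_fields', 'id', 'value', overwrite = 1);
-- BM25-ranked search over all entry values; join back to entries for context
SELECT ef.entry_id,
       fts_main_entry_fields.match_bm25(ef.id, 'acme renewal') AS score
FROM entry_fields ef
WHERE score IS NOT NULL
ORDER BY score DESC
LIMIT 20;
```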
## Auto-Generated Views
After every object or field mutation, regenerate the PIVOT view for each affected object. Views are stored queries (zero data duplication) that make the EAV pattern invisible:
```sql
-- Example: auto-generated view for the "lead" object
CREATE OR REPLACE VIEW v_lead AS
PIVOT (
  SELECT e.id as entry_id, e.created_at, e.updated_at,
         f.name as field_name, ef.value
  FROM entries e
  JOIN entry_fields ef ON ef.entry_id = e.id
  JOIN fields f ON f.id = ef.field_id
  WHERE e.object_id = (SELECT id FROM objects WHERE name = 'lead')
) ON field_name USING first(value);
```
Naming convention: `v_{object_name}` (e.g., `v_lead`, `v_task`, `v_people`).
Now query like a normal table:
```sql
SELECT * FROM v_lead WHERE "Status" = 'New' ORDER BY created_at DESC LIMIT 50;
SELECT "Status", COUNT(*) FROM v_lead GROUP BY "Status";
SELECT * FROM v_lead WHERE "Email Address" LIKE '%@gmail.com';
```
## SQL Operations Reference
All operations use `exec` with `duckdb dench/workspace.duckdb`. Batch related SQL in a single exec call with transactions.
### Create Object
```sql
INSERT INTO objects (name, description, icon, default_view)
VALUES ('lead', 'Sales leads tracking', 'user-plus', 'table')
ON CONFLICT (name) DO NOTHING RETURNING *;
```
### Create Fields
```sql
INSERT INTO fields (object_id, name, type, required, sort_order)
VALUES
((SELECT id FROM objects WHERE name = 'lead'), 'Full Name', 'text', true, 0),
((SELECT id FROM objects WHERE name = 'lead'), 'Email Address', 'email', true, 1),
((SELECT id FROM objects WHERE name = 'lead'), 'Phone Number', 'phone', false, 2)
ON CONFLICT (object_id, name) DO NOTHING;
```
### Create Enum Field
```sql
INSERT INTO fields (object_id, name, type, enum_values, enum_colors, sort_order)
VALUES (
(SELECT id FROM objects WHERE name = 'lead'), 'Status', 'enum',
'["New","Contacted","Qualified","Converted"]'::JSON,
'["#94a3b8","#3b82f6","#f59e0b","#22c55e"]'::JSON, 3
) ON CONFLICT (object_id, name) DO NOTHING;
```
### Create Entry with Field Values
```sql
BEGIN TRANSACTION;
INSERT INTO entries (object_id) VALUES ((SELECT id FROM objects WHERE name = 'lead')) RETURNING id;
-- Use the returned entry id:
INSERT INTO entry_fields (entry_id, field_id, value) VALUES
('<entry_id>', (SELECT id FROM fields WHERE object_id = (SELECT id FROM objects WHERE name = 'lead') AND name = 'Full Name'), 'Jane Smith'),
('<entry_id>', (SELECT id FROM fields WHERE object_id = (SELECT id FROM objects WHERE name = 'lead') AND name = 'Email Address'), 'jane@example.com'),
('<entry_id>', (SELECT id FROM fields WHERE object_id = (SELECT id FROM objects WHERE name = 'lead') AND name = 'Status'), 'New');
COMMIT;
```
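Because the duckdb CLI executes the whole script at once, the RETURNING id cannot be pasted back in manually. One way to keep the operation in a single exec call is to pre-generate the id with the `nanoid32()` macro and hold it in a temp table (a sketch):

```sql
BEGIN TRANSACTION;
CREATE TEMP TABLE _new AS SELECT nanoid32() AS entry_id;
INSERT INTO entries (id, object_id)
  SELECT entry_id, (SELECT id FROM objects WHERE name = 'lead') FROM _new;
INSERT INTO entry_fields (entry_id, field_id, value)
  SELECT entry_id, (SELECT id FROM fields WHERE object_id = (SELECT id FROM objects WHERE name = 'lead') AND name = 'Full Name'), 'Jane Smith' FROM _new
  UNION ALL
  SELECT entry_id, (SELECT id FROM fields WHERE object_id = (SELECT id FROM objects WHERE name = 'lead') AND name = 'Email Address'), 'jane@example.com' FROM _new;
DROP TABLE _new;
COMMIT;
```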
### Search Entries (via view)
```sql
-- Simple search
SELECT * FROM v_lead WHERE "Full Name" ILIKE '%john%';
-- Filter by field
SELECT * FROM v_lead WHERE "Status" = 'New' ORDER BY created_at DESC;
-- Aggregation
SELECT "Status", COUNT(*) as count FROM v_lead GROUP BY "Status";
-- Pagination
SELECT * FROM v_lead ORDER BY created_at DESC LIMIT 20 OFFSET 0;
```
### Update Entry
```sql
INSERT INTO entry_fields (entry_id, field_id, value)
VALUES ('<entry_id>', (SELECT id FROM fields WHERE object_id = '<obj_id>' AND name = 'Status'), 'Qualified')
ON CONFLICT (entry_id, field_id) DO UPDATE SET value = excluded.value, updated_at = now();
```
### Delete (with cascade)
```sql
-- Delete entry (cascades to entry_fields)
DELETE FROM entries WHERE id = '<entry_id>';
-- Delete field (cascades to entry_fields)
DELETE FROM fields WHERE id = '<field_id>';
-- Delete object (cascades to fields, entries, entry_fields) — check immutable first!
DELETE FROM objects WHERE id = '<obj_id>' AND immutable = false;
```
### Bulk Import from CSV
```sql
-- entries/entry_fields use an EAV layout, so a direct COPY into them won't match CSV columns.
-- Stage the raw rows first, then fan them out:
CREATE TEMP TABLE staged AS SELECT * FROM read_csv_auto('dench/exports/import.csv');
-- Then insert one entries row per staged row and write each CSV column into entry_fields.
```
### Export to CSV
```sql
COPY (SELECT * FROM v_lead) TO 'dench/exports/leads.csv' (HEADER true);
```
## Full Workflow: Create CRM Structure in One Shot
Batch everything in a single exec call:
```sql
BEGIN TRANSACTION;
-- 1. Create object
INSERT INTO objects (name, description, icon, default_view)
VALUES ('lead', 'Sales leads tracking', 'user-plus', 'table')
ON CONFLICT (name) DO NOTHING;
-- 2. Create all fields
INSERT INTO fields (object_id, name, type, required, sort_order) VALUES
((SELECT id FROM objects WHERE name = 'lead'), 'Full Name', 'text', true, 0),
((SELECT id FROM objects WHERE name = 'lead'), 'Email Address', 'email', true, 1),
((SELECT id FROM objects WHERE name = 'lead'), 'Phone Number', 'phone', false, 2),
((SELECT id FROM objects WHERE name = 'lead'), 'Score', 'number', false, 4),
((SELECT id FROM objects WHERE name = 'lead'), 'Notes', 'richtext', false, 6)
ON CONFLICT (object_id, name) DO NOTHING;
INSERT INTO fields (object_id, name, type, enum_values, enum_colors, sort_order) VALUES
((SELECT id FROM objects WHERE name = 'lead'), 'Status', 'enum',
'["New","Contacted","Qualified","Converted"]'::JSON,
'["#94a3b8","#3b82f6","#f59e0b","#22c55e"]'::JSON, 3),
((SELECT id FROM objects WHERE name = 'lead'), 'Source', 'enum',
'["Website","Referral","Cold Call","Social"]'::JSON, NULL, 5)
ON CONFLICT (object_id, name) DO NOTHING;
-- 3. Auto-generate PIVOT view
CREATE OR REPLACE VIEW v_lead AS
PIVOT (
SELECT e.id as entry_id, e.created_at, e.updated_at,
f.name as field_name, ef.value
FROM entries e
JOIN entry_fields ef ON ef.entry_id = e.id
JOIN fields f ON f.id = ef.field_id
WHERE e.object_id = (SELECT id FROM objects WHERE name = 'lead')
) ON field_name USING first(value);
COMMIT;
```
Then project the filesystem: `mkdir -p dench/knowledge/lead` and write `.object.yaml`.
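The projection step might look like the following; the `.object.yaml` keys shown are an assumed shape (a projection of the corresponding `objects` row), since the sidebar reader defines the real schema:

```shell
mkdir -p dench/knowledge/lead
# Keys below are illustrative assumptions mirroring the objects row
cat > dench/knowledge/lead/.object.yaml <<'EOF'
name: lead
icon: user-plus
default_view: table
EOF
```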
## Kanban Boards
When creating task/board objects, use `default_view = 'kanban'` and auto-create Status + Assigned To fields:
```sql
BEGIN TRANSACTION;
INSERT INTO objects (name, description, icon, default_view)
VALUES ('task', 'Task tracking board', 'check-square', 'kanban')
ON CONFLICT (name) DO NOTHING;
-- Auto-create Status field with kanban-appropriate values
INSERT INTO fields (object_id, name, type, enum_values, enum_colors, sort_order)
VALUES ((SELECT id FROM objects WHERE name = 'task'), 'Status', 'enum',
'["In Queue","In Progress","Done"]'::JSON,
'["#94a3b8","#3b82f6","#22c55e"]'::JSON, 0)
ON CONFLICT (object_id, name) DO NOTHING;
-- Auto-create Assigned To field (user type)
INSERT INTO fields (object_id, name, type, sort_order)
VALUES ((SELECT id FROM objects WHERE name = 'task'), 'Assigned To', 'user', 1)
ON CONFLICT (object_id, name) DO NOTHING;
-- Auto-create default statuses
INSERT INTO statuses (object_id, name, color, sort_order, is_default) VALUES
((SELECT id FROM objects WHERE name = 'task'), 'In Queue', '#94a3b8', 0, true),
((SELECT id FROM objects WHERE name = 'task'), 'In Progress', '#3b82f6', 1, false),
((SELECT id FROM objects WHERE name = 'task'), 'Done', '#22c55e', 2, false)
ON CONFLICT (object_id, name) DO NOTHING;
CREATE OR REPLACE VIEW v_task AS
PIVOT (
SELECT e.id as entry_id, e.created_at, e.updated_at,
f.name as field_name, ef.value
FROM entries e
JOIN entry_fields ef ON ef.entry_id = e.id
JOIN fields f ON f.id = ef.field_id
WHERE e.object_id = (SELECT id FROM objects WHERE name = 'task')
) ON field_name USING first(value);
COMMIT;
```
## Field Types Reference
| Type | Description | Storage | Query Cast |
| -------- | ------------------------------------- | ------------------ | ----------- |
| text | General text, names, descriptions | VARCHAR | none |
| email | Email addresses (validated) | VARCHAR | none |
| phone | Phone numbers (normalized) | VARCHAR | none |
| number | Numeric values (prices, scores) | VARCHAR | `::NUMERIC` |
| boolean | Yes/no flags | "true"/"false" | `= 'true'` |
| date | ISO 8601 dates | VARCHAR | `::DATE` |
| richtext | Rich text for Notes fields | VARCHAR | none |
| user | Member ID from workspace_context.yaml | VARCHAR | none |
| enum | Dropdown with predefined values | VARCHAR | none |
| relation | Link to entry in another object | VARCHAR (entry ID) | none |
**user fields**: Resolve member name to ID from `workspace_context.yaml` `members` list BEFORE inserting. User fields store IDs like `usr_abc123`, NOT names.
**enum fields**: Field definition stores `enum_values` as JSON array. Entry stores the selected value string. `enum_multiple = true` for multi-select (value stored as JSON array string).
**relation fields**: Field stores `related_object_id` and `relationship_type`. Entry stores the related entry ID. `many_to_one` for single select, `many_to_many` for multi-select (JSON array of IDs).
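Because `entry_fields.value` is always VARCHAR, apply the casts from the table above when filtering or sorting; the view and field names here are illustrative:

```sql
-- number: cast before comparing or ordering
SELECT * FROM v_lead WHERE "Score"::NUMERIC >= 80 ORDER BY "Score"::NUMERIC DESC;
-- date: cast for range filters
SELECT * FROM v_deal WHERE "Close Date"::DATE >= DATE '2026-01-01';
-- boolean: compare against the stored string
SELECT * FROM v_task WHERE "Done" = 'true';
```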
## CRM Patterns
### Contact/Customer
- Full Name (text, required), Email Address (email, required), Phone Number (phone), Company (relation to company object), Notes (richtext)
- Universal pattern for clients, customers, patients, members
### Lead/Prospect
- Full Name (text, required), Email Address (email, required), Phone Number (phone), Status (enum: New/Contacted/Qualified/Converted), Source (enum: Website/Referral/Cold Call/Social), Score (number), Assigned To (user), Notes (richtext)
- Sales, legal intake, real estate prospects
### Company/Organization
- Company Name (text, required), Industry (enum), Website (text), Type (enum: Client/Partner/Vendor), Relationship Status (enum), Notes (richtext)
- B2B relationships, vendor management
### Deal/Opportunity
- Deal Name (text, required), Amount (number), Stage (enum: Discovery/Proposal/Negotiation/Closed Won/Closed Lost), Close Date (date), Probability (number), Primary Contact (relation), Assigned To (user), Notes (richtext)
- Sales pipeline, project bids
### Case/Project
- Case Number (text, required), Title (text, required), Client (relation), Status (enum: Open/In Progress/Closed), Priority (enum: Low/Medium/High/Urgent), Due Date (date), Assigned To (user), Notes (richtext)
- Legal cases, client projects
### Property/Asset
- Address (text, required), Property Type (enum), Price (number), Status (enum: Available/Under Contract/Sold), Square Footage (number), Bedrooms (number), Notes (richtext)
- Real estate listings, asset management
### Task/Activity (use kanban)
- Title (text, required), Description (text), Assigned To (user), Due Date (date), Status (enum: In Queue/In Progress/Done), Priority (enum: Low/Medium/High), Notes (richtext)
- Use `default_view = 'kanban'` — auto-creates Status and Assigned To fields
## Document Management
Documents are markdown files in `dench/knowledge/`. The DuckDB `documents` table tracks metadata only; the `.md` file IS the content.
### Create Document
1. Write the `.md` file: `write dench/knowledge/projects/roadmap.md`
2. Insert metadata into DuckDB:
```sql
-- file_path is relative to dench/knowledge/
INSERT INTO documents (title, icon, file_path, parent_id, sort_order)
VALUES ('Roadmap', 'map', 'projects/roadmap.md', '<parent_doc_id>', 0);
```
### Cross-Nesting
- **Document under Object**: Set `parent_object_id` on the document. Place `.md` file inside the object's directory.
- **Object under Document**: Set `parent_document_id` on the object. Place object directory inside the document's directory.
## Naming Conventions
- **Object names**: singular, lowercase, one word ("lead" not "Leads")
- **Field names**: human-readable, proper capitalization ("Email Address" not "email")
- **Be descriptive**: "Phone Number" not "Phone"
- **Be consistent**: Don't mix "Full Name" and "Name" in the same object
## Error Handling
- `UNIQUE constraint` on INSERT: item already exists — use `ON CONFLICT DO NOTHING` or `DO UPDATE`. Treat as success.
- Protected object deletion: check `immutable` column AND `protected_objects` in `workspace_context.yaml`. NEVER delete protected objects.
- Field type change: warn user before changing type on field with existing data.
- Missing required fields: validate before INSERT, report which fields are missing.
## Post-Mutation Pipeline
After any schema mutation (create/update/delete object, field, or document):
1. **Regenerate views**: `CREATE OR REPLACE VIEW v_{object}` for affected objects
2. **Project filesystem**: Sync `dench/knowledge/` directory structure from DuckDB (mkdir/rmdir for objects, write `.object.yaml`, move `.md` files if nesting changed)
3. **Regenerate WORKSPACE.md**: Human-readable summary of all objects, fields, entry counts
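Step 3's per-object counts can come from a single query; a sketch of the rollup that feeds `WORKSPACE.md`:

```sql
SELECT o.name,
       o.icon,
       count(DISTINCT f.id) AS field_count,
       count(DISTINCT e.id) AS entry_count
FROM objects o
LEFT JOIN fields f ON f.object_id = o.id
LEFT JOIN entries e ON e.object_id = o.id
GROUP BY o.name, o.icon
ORDER BY o.name;
```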
## Critical Reminders
- Handle the ENTIRE CRM operation from analysis to SQL execution to summary
- Always check existing data before creating (`SELECT` before `INSERT`, or `ON CONFLICT`)
- Use views (`v_{object}`) for all reads — never write raw PIVOT queries for search
- Never assume field names — verify with `SELECT * FROM fields WHERE object_id = ?`
- Extract ALL data from user messages — don't leave information unused
- **NOTES**: Always use type "richtext" for Notes fields
- **USER FIELDS**: Resolve member name to ID from `workspace_context.yaml` BEFORE inserting
- **ENUM FIELDS**: Use type "enum" with `enum_values` JSON array
- **RELATION FIELDS**: Use type "relation" with `related_object_id`
- **KANBAN**: Use `default_view = 'kanban'`, auto-create Status and Assigned To fields
- **PROTECTED OBJECTS**: Never delete objects listed in `workspace_context.yaml` `protected_objects`
- **ONE EXEC CALL**: Batch related SQL in a single transaction — this is the whole point
- **workspace_context.yaml**: READ-ONLY. Never modify. Data flows from Dench UI only.
- **Source of truth**: DuckDB for all structured data. Filesystem for document content and navigation tree. Never duplicate entry data to the filesystem.

skills/dench/sync.sh Executable file
@@ -0,0 +1,75 @@
#!/usr/bin/env bash
# Dench workspace S3 sync script
# Usage: ./sync.sh [upload|download]
#
# Requires:
# - AWS CLI configured with credentials (ABAC-scoped to org prefix)
# - DENCH_S3_BUCKET and DENCH_S3_PREFIX environment variables
# (or set in workspace_context.yaml sync section)
#
# This script syncs the dench/ workspace folder with S3.
# S3 is the persistence layer between sandbox sessions.
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
WORKSPACE_DIR="${SCRIPT_DIR}/.."
# Resolve workspace root (one level up from skills/dench/)
# If run from the dench/ folder itself, use current dir
if [ -f "${WORKSPACE_DIR}/workspace.duckdb" ]; then
DENCH_DIR="${WORKSPACE_DIR}"
elif [ -f "${SCRIPT_DIR}/../../dench/workspace.duckdb" ]; then
DENCH_DIR="${SCRIPT_DIR}/../../dench"
else
# Fallback: look relative to cwd
DENCH_DIR="./dench"
fi
# Read S3 config from environment or workspace_context.yaml
S3_BUCKET="${DENCH_S3_BUCKET:-}"
S3_PREFIX="${DENCH_S3_PREFIX:-}"
if [ -z "$S3_BUCKET" ] && [ -f "${DENCH_DIR}/workspace_context.yaml" ]; then
# Extract sync config from YAML (basic grep, no yq dependency)
S3_BUCKET=$(grep -A5 'sync:' "${DENCH_DIR}/workspace_context.yaml" | grep 's3_bucket:' | awk '{print $2}' | tr -d '"' || true)
S3_PREFIX=$(grep -A5 'sync:' "${DENCH_DIR}/workspace_context.yaml" | grep 's3_prefix:' | awk '{print $2}' | tr -d '"' || true)
fi
if [ -z "$S3_BUCKET" ]; then
echo "Error: DENCH_S3_BUCKET not set and not found in workspace_context.yaml"
exit 1
fi
S3_PATH="s3://${S3_BUCKET}/${S3_PREFIX}"
ACTION="${1:-upload}"
case "$ACTION" in
upload)
echo "Syncing dench workspace to S3: ${S3_PATH}"
aws s3 sync "${DENCH_DIR}/" "${S3_PATH}" \
--exclude "*.tmp" \
--exclude ".DS_Store" \
--exclude "exports/*" \
--delete \
--size-only
echo "Upload complete."
;;
download)
echo "Downloading dench workspace from S3: ${S3_PATH}"
mkdir -p "${DENCH_DIR}"
aws s3 sync "${S3_PATH}" "${DENCH_DIR}/" \
--exclude "*.tmp" \
--exclude ".DS_Store" \
--delete \
--size-only
echo "Download complete."
;;
*)
echo "Usage: $0 [upload|download]"
exit 1
;;
esac

@@ -188,7 +188,7 @@ export async function runEmbeddedAttempt(
});
const sessionLabel = params.sessionKey ?? params.sessionId;
-  const { bootstrapFiles: hookAdjustedBootstrapFiles, contextFiles } =
+  const { bootstrapFiles: hookAdjustedBootstrapFiles, contextFiles: bootstrapContextFiles } =
await resolveBootstrapContextForRun({
workspaceDir: effectiveWorkspace,
config: params.config,
@@ -196,6 +196,20 @@
sessionId: params.sessionId,
warn: makeBootstrapWarn({ sessionLabel, warn: (message) => log.warn(message) }),
});
// Append injected skill content (inject: true) as context files alongside bootstrap files.
const injectedSkills = params.skillsSnapshot?.injectedSkills ?? [];
const contextFiles =
injectedSkills.length > 0
? [
...bootstrapContextFiles,
...injectedSkills.map((skill) => ({
path: `skill:${skill.name}`,
content: skill.content,
})),
]
: bootstrapContextFiles;
const workspaceNotes = hookAdjustedBootstrapFiles.some(
(file) => file.name === DEFAULT_BOOTSTRAP_FILENAME && !file.missing,
)

@@ -136,7 +136,7 @@ export function shouldIncludeSkill(params: {
) {
return false;
}
-  if (entry.metadata?.always === true) {
+  if (entry.metadata?.always === true || entry.metadata?.inject === true) {
return true;
}

@@ -135,6 +135,7 @@ export function resolveOpenClawMetadata(
const osRaw = normalizeStringList(metadataObj.os);
return {
always: typeof metadataObj.always === "boolean" ? metadataObj.always : undefined,
inject: typeof metadataObj.inject === "boolean" ? metadataObj.inject : undefined,
emoji: typeof metadataObj.emoji === "string" ? metadataObj.emoji : undefined,
homepage: typeof metadataObj.homepage === "string" ? metadataObj.homepage : undefined,
skillKey: typeof metadataObj.skillKey === "string" ? metadataObj.skillKey : undefined,

@@ -18,6 +18,9 @@ export type SkillInstallSpec = {
export type OpenClawSkillMetadata = {
always?: boolean;
/** If true, the full skill content is injected into the system prompt automatically
* (like a bootstrap file) instead of being lazy-loaded via the skills list. */
inject?: boolean;
skillKey?: string;
primaryEnv?: string;
emoji?: string;
@@ -79,9 +82,16 @@
};
};
export type InjectedSkillContent = {
name: string;
content: string;
};
export type SkillSnapshot = {
prompt: string;
skills: Array<{ name: string; primaryEnv?: string }>;
resolvedSkills?: Skill[];
/** Skills with `inject: true` whose full content should be included in the system prompt. */
injectedSkills?: InjectedSkillContent[];
version?: number;
};

@@ -7,6 +7,7 @@ import fs from "node:fs";
import path from "node:path";
import type { OpenClawConfig } from "../../config/config.js";
import type {
InjectedSkillContent,
ParsedSkillFrontmatter,
SkillEligibilityContext,
SkillCommandSpec,
@@ -188,6 +189,23 @@
return skillEntries;
}
/** Read and strip frontmatter from a SKILL.md file, returning the body content. */
function readSkillContent(filePath: string): string | undefined {
try {
const raw = fs.readFileSync(filePath, "utf-8");
// Strip YAML frontmatter (--- ... ---)
if (raw.startsWith("---")) {
const endIndex = raw.indexOf("\n---", 3);
if (endIndex !== -1) {
return raw.slice(endIndex + "\n---".length).trim();
}
}
return raw.trim();
} catch {
return undefined;
}
}
export function buildWorkspaceSkillSnapshot(
workspaceDir: string,
opts?: {
@@ -208,12 +226,36 @@
opts?.skillFilter,
opts?.eligibility,
);
-  const promptEntries = eligible.filter(
// Separate injected skills (inject: true) from lazy-loaded skills.
// Injected skills have their full content included in the system prompt automatically.
const injectedEntries: SkillEntry[] = [];
const lazyEntries: SkillEntry[] = [];
for (const entry of eligible) {
if (entry.metadata?.inject === true) {
injectedEntries.push(entry);
} else {
lazyEntries.push(entry);
}
}
// Build lazy-loaded skills prompt (XML format for on-demand reading)
const promptEntries = lazyEntries.filter(
(entry) => entry.invocation?.disableModelInvocation !== true,
);
const resolvedSkills = promptEntries.map((entry) => entry.skill);
const remoteNote = opts?.eligibility?.remote?.note?.trim();
const prompt = [remoteNote, formatSkillsForPrompt(resolvedSkills)].filter(Boolean).join("\n");
// Read full content of injected skills
const injectedSkills: InjectedSkillContent[] = [];
for (const entry of injectedEntries) {
const content = readSkillContent(entry.skill.filePath);
if (content) {
injectedSkills.push({ name: entry.skill.name, content });
}
}
return {
prompt,
skills: eligible.map((entry) => ({
@@ -221,6 +263,7 @@
primaryEnv: entry.metadata?.primaryEnv,
})),
resolvedSkills,
injectedSkills: injectedSkills.length > 0 ? injectedSkills : undefined,
version: opts?.snapshotVersion,
};
}