Merge 20f626e9c19b370ce148a6ce22268f23def2730b into 5e417b44e1540f528d2ae63e3e20229a902d1db2

This commit is contained in:
zwtang119 2026-03-20 19:55:50 -07:00 committed by GitHub
commit 2970f502e2
3 changed files with 119 additions and 1 deletion

skills/attractor/SKILL.md Normal file

@@ -0,0 +1,95 @@
---
name: attractor
description: Build, extend, and debug Attractor implementations from the strongdm/attractor specs. Use when work involves DOT pipeline DSL parsing, graph execution traversal, node handlers, checkpoint/resume state, human-in-the-loop gates, condition expressions, model stylesheet rules, or integrating coding-agent-loop/unified-llm backends.
homepage: https://github.com/strongdm/attractor
metadata: { "openclaw": { "emoji": "🧲" } }
---
# Attractor
Implement Attractor as a spec-driven orchestration layer, not as ad-hoc glue code.
## Source of truth
Read these specs before changing architecture:
- `attractor-spec.md`
- `coding-agent-loop-spec.md`
- `unified-llm-spec.md`
If local files are unavailable, use:
- <https://github.com/strongdm/attractor/blob/main/attractor-spec.md>
- <https://github.com/strongdm/attractor/blob/main/coding-agent-loop-spec.md>
- <https://github.com/strongdm/attractor/blob/main/unified-llm-spec.md>
## Execution order
1. Confirm scope.
2. Map requested work to specific spec sections.
3. Implement minimal, testable slices in this order:
- DOT parser + schema validation
- Graph normalization + static validation
- Execution engine (state, traversal, edge selection)
- Node handler registry + built-in handlers
- Checkpoint/resume persistence
- Human-in-the-loop pause/resume flow
- Model stylesheet and condition evaluator
4. Add focused tests for each implemented slice.
5. Verify behavior against acceptance criteria from the relevant section.
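To orient on step 1 of that order, a minimal pipeline in the DOT subset might look like the sketch below. Node type names are taken from the handler list in this skill; the `prompt`, `condition`, and `label` attribute names are illustrative assumptions, not confirmed against the spec.

```dot
digraph pipeline {
  start  [type="start"];
  plan   [type="codergen", prompt="Draft an implementation plan"];
  review [type="wait.human"];
  decide [type="conditional", condition="review.approved"];
  build  [type="codergen", prompt="Implement the approved plan"];
  done   [type="exit"];

  start -> plan -> review -> decide;
  decide -> build [label="approved"];
  decide -> plan  [label="rejected"];
  build -> done;
}
```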
## Scope mapping checklist
Map each task to one or more of these components:
- **DOT DSL schema**: grammar subset, typed attributes, defaults, chained edges, subgraph scoping.
- **Execution engine**: session/run state, deterministic traversal, retries, timeouts, loop handling.
- **Node handlers**: `start`, `exit`, `codergen`, `wait.human`, `conditional`, `parallel`, `parallel.fan_in`, `tool`, `stack.manager_loop`.
- **State and context**: run context shape, per-node outputs, status normalization.
- **Human-in-the-loop**: blocking prompts, explicit response routing, resumable state.
- **Model stylesheet**: selector precedence, inheritance, node-level override merge.
- **Condition language**: parser/evaluator behavior, truthiness, error handling.
- **Transforms/extensibility**: pre/post transforms, plugin-like handler registration, event hooks.
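For the condition-language component above, one way to satisfy "parser/evaluator behavior, truthiness, error handling" is an AST-walking evaluator that rejects anything outside a small whitelist. This is a hedged sketch, not the spec's grammar: the real operator set and truthiness rules are defined by `attractor-spec.md`.

```python
import ast
import operator as op

# Supported comparison operators; the real set comes from the spec.
_CMP_OPS = {ast.Eq: op.eq, ast.NotEq: op.ne, ast.Lt: op.lt,
            ast.LtE: op.le, ast.Gt: op.gt, ast.GtE: op.ge}

def evaluate_condition(expr: str, context: dict):
    """Evaluate a small boolean expression against a run context.

    Raises ValueError for syntax errors and unsupported constructs,
    matching the guardrail of rejecting invalid input early.
    """
    try:
        tree = ast.parse(expr, mode="eval")
    except SyntaxError as exc:
        raise ValueError(f"invalid condition: {expr!r}") from exc

    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.BoolOp):
            values = [walk(v) for v in node.values]
            return all(values) if isinstance(node.op, ast.And) else any(values)
        if isinstance(node, ast.Compare) and len(node.ops) == 1:
            cmp_fn = _CMP_OPS.get(type(node.ops[0]))
            if cmp_fn is None:
                raise ValueError(f"unsupported operator in: {expr!r}")
            return cmp_fn(walk(node.left), walk(node.comparators[0]))
        if isinstance(node, ast.Name):
            if node.id not in context:
                raise ValueError(f"unknown variable: {node.id}")
            return context[node.id]
        if isinstance(node, ast.Constant):
            return node.value
        raise ValueError(f"unsupported construct: {type(node).__name__}")

    return walk(tree)
```

Whitelisting AST node types (rather than `eval`) keeps invalid expressions on the explicit-error path instead of silently executing arbitrary code.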
## Guardrails
- Keep Attractor decoupled from the LLM backend. Use backend interfaces, not provider-specific calls in core traversal logic.
- Keep graph execution deterministic. If multiple edges are eligible, apply documented priority rules (condition, weight, label preferences) consistently.
- Reject invalid graph inputs early with clear errors (do not silently coerce malformed data).
- Keep checkpoint payloads stable and serializable to support crash-safe resume.
- Preserve separation between:
- orchestration (Attractor),
- coding agent loop,
- unified LLM client.
- Do not invent undocumented node types, attributes, or condition operators unless explicitly requested.
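The determinism guardrail can be made concrete with a single stable sort over eligible edges. The priority order (condition match, then weight, then label) follows the rule named above, but the field names and tie-breaking details here are assumptions to be checked against the spec.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass(frozen=True)
class Edge:
    target: str
    condition_matched: bool = False
    weight: float = 0.0
    label: str = ""

def select_edge(eligible: List[Edge]) -> Optional[Edge]:
    """Pick exactly one outgoing edge, deterministically.

    Python's sort is stable, so edges that tie on every key keep
    their declaration order from the DOT source.
    """
    if not eligible:
        return None
    ranked = sorted(
        eligible,
        # False sorts before True, so condition-matched edges rank first;
        # negate weight so heavier edges rank first; labels break ties.
        key=lambda e: (not e.condition_matched, -e.weight, e.label),
    )
    return ranked[0]
```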
## Testing requirements
Always add or update tests for changed behavior:
- Parser acceptance tests for valid DOT samples.
- Parser rejection tests for unsupported syntax and bad attributes.
- Execution tests for linear, conditional, retry, and parallel paths.
- Checkpoint/resume tests that simulate interruption and restart.
- Human-gate tests that pause, accept input, and continue correctly.
- Condition evaluation tests for happy path and invalid expressions.
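A checkpoint/resume test ultimately reduces to a roundtrip property: whatever state the engine persists before an interruption must deserialize to an equal value on restart. A minimal sketch, assuming a JSON payload and illustrative field names (`current_node`, `outputs`, `status`) that are not taken from the spec:

```python
import json

def save_checkpoint(path: str, state: dict) -> None:
    # Serialize fully before opening the file, so a serialization
    # failure cannot truncate an existing checkpoint with partial data.
    payload = json.dumps(state, sort_keys=True)
    with open(path, "w", encoding="utf-8") as fh:
        fh.write(payload)

def load_checkpoint(path: str) -> dict:
    with open(path, encoding="utf-8") as fh:
        return json.loads(fh.read())
```

An interruption test then saves mid-run, discards the in-memory engine, reloads, and asserts the restored state equals the saved one.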
## Implementation strategy for large requests
For multi-feature asks, split work into PR-sized increments:
1. Parser + validator foundation.
2. Core traversal + deterministic routing.
3. Handler implementations and HITL integration.
4. Stylesheet/condition language refinements.
Each increment should keep the engine runnable and test-backed.
## Delivery checklist
Before finishing:
- Confirm implementation matches cited spec sections.
- Confirm tests cover both success and failure paths.
- Confirm any new config/attributes are documented where users read them.
- Confirm no coupling leaks between orchestration and provider-specific adapters.


@@ -95,7 +95,14 @@ def validate_skill(skill_path):
             "Invalid YAML in frontmatter: unsupported syntax without PyYAML installed",
         )
-    allowed_properties = {"name", "description", "license", "allowed-tools", "metadata"}
+    allowed_properties = {
+        "name",
+        "description",
+        "homepage",
+        "license",
+        "allowed-tools",
+        "metadata",
+    }
     unexpected_keys = set(frontmatter.keys()) - allowed_properties
     if unexpected_keys:


@@ -67,6 +67,22 @@ metadata: |
         self.assertTrue(valid, message)

+    def test_accepts_homepage_frontmatter_key(self):
+        skill_dir = self.temp_dir / "homepage-skill"
+        skill_dir.mkdir(parents=True, exist_ok=True)
+        content = """---
+name: homepage-skill
+description: Supports homepage
+homepage: https://example.com
+---
+# Skill
+"""
+        (skill_dir / "SKILL.md").write_text(content, encoding="utf-8")
+        valid, message = quick_validate.validate_skill(skill_dir)
+        self.assertTrue(valid, message)
+

 if __name__ == "__main__":
     main()