OpenClaw
Plugin architecture for agent workflow control, not another standalone orchestration toy.
OpenClaw matters because it makes existing tools better instead of asking users to adopt yet another dashboard from scratch. The plugin inherits the host environment, adds a local MCP bridge, and exposes a browser-native mission control for active agent work.

Why this had to live inside another tool
A standalone orchestration app would have needed its own auth, process management, lifecycle handling, and deployment story. The plugin avoids that tax by inheriting the host app's environment and focusing only on the orchestration value.
The result is a smaller adoption ask: make the tool people already use better, then add the control surface they were missing.
/orgx/mcp
claude -> ~/.claude/mcp.json auto-configured
codex -> ~/.codex/config.toml auto-configured
cursor -> ~/.cursor/mcp.json auto-configured
single local bridge, single auth story, shared lifecycle

A survivable workflow stack
OpenClaw plugin flow
Agent hosts connect through one MCP bridge, the plugin manages queue state and streaming, and the dashboard stays current through SSE with a polling fallback.
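The SSE-primary, polling-backup behavior can be sketched as below. This is a minimal illustration under stated assumptions, not OpenClaw's actual code: the `Transport` interface, the `/events` and `/tasks` endpoint paths, and the 2-second poll interval are all hypothetical.

```typescript
// Sketch of an SSE-first transport with a polling fallback.
// Endpoint paths, payload shapes, and intervals are assumptions for illustration.

type TaskSnapshot = { id: string; state: string };

interface StreamSource {
  onMessage(cb: (data: string) => void): void;
  onError(cb: () => void): void;
  onOpen(cb: () => void): void;
  close(): void;
}

interface Transport {
  openStream(url: string): StreamSource;       // e.g. would wrap EventSource
  poll(url: string): Promise<TaskSnapshot[]>;  // e.g. would wrap fetch
}

function subscribe(
  baseUrl: string,
  transport: Transport,
  onUpdate: (tasks: TaskSnapshot[]) => void,
  pollIntervalMs = 2000,
): () => void {
  let pollTimer: ReturnType<typeof setInterval> | undefined;

  const startPolling = () => {
    if (pollTimer !== undefined) return; // already in fallback mode
    pollTimer = setInterval(() => {
      transport.poll(`${baseUrl}/tasks`).then(onUpdate).catch(() => {});
    }, pollIntervalMs);
  };
  const stopPolling = () => {
    if (pollTimer !== undefined) {
      clearInterval(pollTimer);
      pollTimer = undefined;
    }
  };

  const stream = transport.openStream(`${baseUrl}/events`);
  stream.onMessage((data) => onUpdate(JSON.parse(data)));
  stream.onError(startPolling); // stream dropped: fall back so the dashboard never goes blind
  stream.onOpen(stopPolling);   // stream (re)connected: SSE is primary again

  return () => {
    stream.close();
    stopPolling();
  };
}
```

The point of the injected `Transport` is that the fallback logic stays testable and transport-agnostic; the dashboard consumes one `onUpdate` stream regardless of which path delivered it.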
The interface shows why the architecture matters


State machines and fallbacks keep the UI from lying
This is the part that matters most in practice. The dashboard's value disappears the moment it shows stale or partial state. The queue model, outbox, and transport fallback exist so the interface stays trustworthy under failure, not just in a demo.
- Task states are enforced by the state machine, not by scattered conventions.
- SQLite outbox preserves mutations offline and replays them when the gateway reconnects.
- SSE is primary, polling is backup, so the dashboard never goes blind when streaming drops.
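The first point can be sketched as a single transition table that every mutation must pass through. The state names and allowed moves below are illustrative assumptions, not OpenClaw's real model:

```typescript
// Minimal sketch of state-machine enforcement for task states.
// State names and transitions are assumptions for illustration only.

type TaskState = "queued" | "running" | "blocked" | "done" | "failed";

const TRANSITIONS: Record<TaskState, TaskState[]> = {
  queued:  ["running"],
  running: ["blocked", "done", "failed"],
  blocked: ["running", "failed"],
  done:    [],          // terminal
  failed:  ["queued"],  // allow retry
};

function transition(from: TaskState, to: TaskState): TaskState {
  if (!TRANSITIONS[from].includes(to)) {
    // One choke point for validity, instead of scattered conventions.
    throw new Error(`illegal transition: ${from} -> ${to}`);
  }
  return to;
}
```

Centralizing the check means an illegal move fails loudly at one choke point instead of silently corrupting queue state across callers.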
The value of a plugin is that you don’t have to convince anyone to adopt a new tool.
You just make the tool they already use better. That is the architectural instinct I wanted this project to make obvious.