Case study 04 · Experimental

OpenClaw

Plugin architecture for agent workflow control, not another standalone orchestration toy.

OpenClaw matters because it makes existing tools better instead of asking users to adopt yet another dashboard from scratch. The plugin inherits the host environment, adds a local MCP bridge, and exposes a browser-native mission control for active agent work.

Commits: 558
MCP tools: 30
Persistence: SQLite outbox
Streaming: SSE + polling fallback

[Screenshot: OpenClaw full dashboard]

Runtime: TypeScript
Transport: SSE + polling fallback
Persistence: SQLite outbox
Agent layer: MCP bridge + CLI integration
01 // why plugin

Why this had to live inside another tool

A standalone orchestration app would have needed its own auth, process management, lifecycle handling, and deployment story. The plugin avoids that tax by inheriting the host app's environment and focusing only on the orchestration value.

The result is a smaller adoption ask: make the tool people already use better, then add the control surface they were missing.

/orgx/mcp

claude  -> ~/.claude/mcp.json auto-configured
codex   -> ~/.codex/config.toml auto-configured
cursor  -> ~/.cursor/mcp.json auto-configured

single local bridge, single auth story, shared lifecycle
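A minimal sketch of what that auto-configuration could look like. The `HOSTS` registry and `bridgeEntry` helper are illustrative assumptions, not the plugin's actual code; only the config paths come from the listing above:

```typescript
import * as os from "os";
import * as path from "path";

// Hypothetical host registry: each supported agent CLI and the MCP
// config file the bridge auto-configures for it.
const HOSTS = {
  claude: path.join(os.homedir(), ".claude", "mcp.json"),
  codex: path.join(os.homedir(), ".codex", "config.toml"),
  cursor: path.join(os.homedir(), ".cursor", "mcp.json"),
} as const;

// Every host config points at the same local endpoint: one bridge,
// one auth story, one shared lifecycle.
function bridgeEntry(port: number) {
  return { url: `http://127.0.0.1:${port}/orgx/mcp`, transport: "sse" };
}
```

The design point is that the bridge writes itself into configs the hosts already read, rather than asking each host to learn a new tool.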
02 // architecture

A survivable workflow stack

[Diagram: OpenClaw plugin flow]

Agent hosts connect through one MCP bridge, the plugin manages queue state and streaming, and the dashboard stays current through SSE with a polling fallback.
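The SSE-primary, polling-backup behavior can be sketched as a small transport wrapper. `DashboardFeed`, `connectSse`, and `poll` are hypothetical names standing in for the plugin's real streaming layer:

```typescript
// Snapshot of dashboard state delivered by either transport.
type Snapshot = { tasks: unknown[]; updatedAt: number };

class DashboardFeed {
  private pollTimer?: ReturnType<typeof setInterval>;
  private closeSse?: () => void;

  constructor(
    // Opens the SSE stream; returns a close function.
    private connectSse: (
      onEvent: (s: Snapshot) => void,
      onError: () => void,
    ) => () => void,
    // One-shot HTTP fetch of current state, used as the fallback.
    private poll: () => Promise<Snapshot>,
    private onUpdate: (s: Snapshot) => void,
    private intervalMs = 2000,
  ) {}

  start() {
    // Prefer streaming; on stream failure, degrade to polling so the
    // dashboard never goes blind.
    this.closeSse = this.connectSse(this.onUpdate, () => {
      this.closeSse?.();
      void this.startPolling();
    });
  }

  private async startPolling() {
    this.onUpdate(await this.poll()); // catch up immediately
    this.pollTimer = setInterval(async () => {
      this.onUpdate(await this.poll());
    }, this.intervalMs);
  }

  stop() {
    this.closeSse?.();
    if (this.pollTimer) clearInterval(this.pollTimer);
  }
}
```

The point of the wrapper is that the dashboard only ever sees `onUpdate` calls; which transport produced them is invisible to the UI.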

03 // dashboard proof

The interface shows why the architecture matters

[Screenshot: OpenClaw full nine-panel dashboard]
The full dashboard: mission control, next-up queue, activity timeline, decisions, and live system state in one place.

[Screenshot: OpenClaw mission control panel]
Mission control detail view, where hierarchy and next actions stay visible instead of collapsing into log noise.
04 // resilience

State machines and fallbacks keep the UI from lying

This is the part that matters most in practice. The dashboard's value disappears the moment it shows stale or partial truth. The queue model, outbox, and transport fallback exist so the interface stays trustworthy under failure, not just in demos.

  • Task states are enforced by the state machine, not by scattered conventions.
  • SQLite outbox preserves mutations offline and replays them when the gateway reconnects.
  • SSE is primary, polling is backup, so the dashboard never goes blind when streaming drops.
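The first bullet can be sketched as a transition table that makes illegal moves throw instead of silently passing. The state names and `TRANSITIONS` map here are illustrative, not the plugin's actual lifecycle:

```typescript
// Hypothetical task lifecycle. Every state change goes through one
// lookup table, so an illegal move (e.g. done -> running) is an error
// rather than a quiet corruption of queue state.
type TaskState = "queued" | "running" | "blocked" | "done" | "failed";

const TRANSITIONS: Record<TaskState, TaskState[]> = {
  queued: ["running"],
  running: ["blocked", "done", "failed"],
  blocked: ["running", "failed"],
  done: [],                 // terminal
  failed: ["queued"],       // retry path
};

function transition(from: TaskState, to: TaskState): TaskState {
  if (!TRANSITIONS[from].includes(to)) {
    throw new Error(`illegal transition ${from} -> ${to}`);
  }
  return to;
}
```

Centralizing the rules in one table is what "enforced by the state machine, not by scattered conventions" means in practice: there is exactly one place where a transition can be allowed.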

The value of a plugin is that you don’t have to convince anyone to adopt a new tool.

You just make the tool they already use better. That is the architectural instinct I wanted this project to make obvious.