I kept finding out what my agents did after the fact. Logs, traces, error reports. By then the action had already happened. I wanted a way to stop the action before it executes, present context to a human, and record the decision. That layer didn't exist, so I built it.

DashClaw sits between your agents and their actions. Every action goes through a policy check before it runs. The agent waits until the decision comes back.

What it does:

- Guard policies defined in YAML. Conditions, thresholds, actions (allow / block / require_approval). Versioned, testable in CI via simulation endpoint.

- Decision recording to Postgres before the action fires. Action, context, assumptions, policy applied, outcome. All org-scoped.

- Human-in-the-loop over SSE with Redis broker. 10-min replay window so approvals survive disconnects. Agent holds pending until decision or timeout.

- CLI approval channel. `dashclaw approve <id>` from a second terminal. No browser needed.

- Node and Python SDKs. Wraps existing calls. No framework lock-in.
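
To make the policy bullet concrete, here's a simplified sketch of a guard policy. The field names are condensed for illustration, not the exact schema:

```yaml
# Illustrative policy sketch — field names condensed, not the exact schema
version: 1
policies:
  - name: gate-risky-deploys
    when:
      action_type: deploy
      risk_score: { gte: 80 }          # threshold condition
    action: require_approval
  - name: block-unknown-hosts
    when:
      action_type: http_request
      host: { not_in: [api.example.com] }  # allowlist condition
    action: block
```

A file like this is what the simulation endpoint exercises in CI before the policy gates real traffic.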

How the guard check works in practice:

    // Holds here until the policy decision comes back (or times out)
    const decision = await claw.guard({ action_type: 'deploy', risk_score: 90 });
    if (decision.decision === 'block') return; // decision is allow, block, or require_approval
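
If you want to see the wrapping pattern without a server, here's a self-contained sketch. `guardedCall` and `stubGuard` are illustrative stand-ins, not SDK API; the real `claw.guard` drops in where the stub is:

```javascript
// Generic wrapper: run `fn` only if the guard allows it.
// `guard` is any async policy check returning { decision: 'allow' | 'block' | 'require_approval' }.
async function guardedCall(guard, ctx, fn) {
  // Hold the action until the decision comes back (guard awaits approval if needed)
  const { decision } = await guard(ctx);
  if (decision === 'block') return { ok: false };
  return { ok: true, value: await fn() };
}

// Stub guard standing in for claw.guard, so the sketch runs without a server
const stubGuard = async ({ risk_score }) =>
  ({ decision: risk_score >= 80 ? 'block' : 'allow' });

guardedCall(stubGuard, { action_type: 'deploy', risk_score: 90 }, () => 'deployed')
  .then((r) => console.log(r.ok)); // prints: false
```

The agent-side shape is the same either way: hold, check, run or bail.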

Claude Code integration works via PreToolUse hooks: DashClaw intercepts tool calls before Claude Code executes them, and high-risk tools get routed for terminal approval automatically.
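
Registering the hook looks roughly like this in `.claude/settings.json` (the outer shape is Claude Code's standard PreToolUse hook format; the command shown is illustrative):

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash|Write|Edit",
        "hooks": [{ "type": "command", "command": "npx dashclaw-hook" }]
      }
    ]
  }
}
```

If the hook returns a blocking response, Claude Code skips the tool call.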

Works with LangChain, CrewAI, OpenAI tools, Anthropic tools, Autogen, or custom agents.

- Quick demo with Docker: `npx dashclaw-demo`

- End-to-end demo: `cd examples/openai-governed-agent && npm install && node index.js`

That runs a real agent that attempts a high-risk action; DashClaw intercepts it and waits for your approval in the browser. Or run `npm install -g @dashclaw/cli` and approve it from the terminal with `dashclaw approve <actionId>`.

Stack: Next.js 15 (JS, no TS), Neon Postgres, Redis, NextAuth, Vercel. MIT licensed.

Honest limitations: guard policies handle threshold and allowlist rules well, but contextual policies that reason over full conversation history aren't there yet. I'm thinking about that as a separate eval layer rather than baking it into the guard runtime. Features in the Labs category are still in development; I'm currently building out integrations, starting with Discord and Slack.

GitHub: https://github.com/ucsandman/DashClaw

Demo (fixture data, no login): https://dashclaw.io

Happy to go deep on any part of the architecture.