openclawbrain-site

OpenClaw Integration

OpenClawBrain is your agent’s second brain — it builds a knowledge map, trains a guide that picks the right context for each question, and learns from corrections. OpenClaw is front of house (live conversations); OpenClawBrain is the kitchen (knowledge, learning, compilation). This page describes how the two systems connect.

Current truth: the learner builds candidate packs off the hot path; activation stages and promotes them; the compiler serves only from the active promoted pack; route updates are PG-only, driven by explicit labels.

Target end shape: continuous live graph update (decay, co-firing, pruning, reorganizing) on the active pack during serving; scanner, labels, and harvest on by default after attach; hard API enforcement of the OpenClaw/OpenClawBrain split.
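The current pack lifecycle can be sketched as a small state machine. Everything below is illustrative: the slot names match the doc, but `promote`, `rollback`, and the `SlotState` shape are hypothetical, not the shipped API.

```typescript
// Illustrative sketch of the current lifecycle: the learner builds a
// candidate off the hot path, activation promotes it into the
// serve-visible active slot, and the old active is kept as previous
// for rollback. Names are hypothetical.

interface SlotState {
  candidate?: string; // built by the learner, never served
  active?: string;    // the only serve-visible pack
  previous?: string;  // kept for rollback
}

function promote(s: SlotState): SlotState {
  if (!s.candidate) return s; // nothing staged, no-op
  return {
    candidate: undefined,
    active: s.candidate,  // candidate becomes serve-visible
    previous: s.active,   // old active retained for rollback
  };
}

function rollback(s: SlotState): SlotState {
  if (!s.previous) return s; // nothing to roll back to
  return { candidate: s.candidate, active: s.previous, previous: undefined };
}
```

The point of the shape is that serving only ever reads `active`; promotion and rollback are slot swaps, not in-place edits.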

New here? Start with the setup guide.

Other docs:

Who owns what

OpenClaw (runtime)

OpenClawBrain (learning)

The active promoted pack is the only serve-visible slot; the candidate and previous slots stay inspectable for promotion and rollback. A missing active pack fails open, but learned-required route-artifact drift returns hardRequirementViolated=true and disables static fallback.
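The serve-slot rules above can be sketched as a single resolution function. The names (`resolveServePack`, `PackSlots`, `routeArtifactHash`) are assumptions for illustration; only the semantics follow the doc.

```typescript
// Illustrative sketch of the serve-slot semantics:
// - only `active` is serve-visible; candidate/previous are never served
// - a missing active pack fails open (static fallback)
// - learned-required route-artifact drift is a hard violation and
//   disables static fallback

interface Pack {
  id: string;
  routeArtifactHash: string;
}

interface PackSlots {
  active?: Pack;
  candidate?: Pack;
  previous?: Pack;
}

interface ServeDecision {
  pack?: Pack;
  staticFallback: boolean;
  hardRequirementViolated: boolean;
}

function resolveServePack(
  slots: PackSlots,
  requiredRouteHash?: string, // set when a learned route is required
): ServeDecision {
  const active = slots.active;
  if (!active) {
    // Fail open: no active pack means serve with static context.
    return { staticFallback: true, hardRequirementViolated: false };
  }
  if (requiredRouteHash !== undefined && active.routeArtifactHash !== requiredRouteHash) {
    // Learned-required drift: hard violation, static fallback disabled.
    return { staticFallback: false, hardRequirementViolated: true };
  }
  // Only the active promoted pack is ever serve-visible.
  return { pack: active, staticFallback: false, hardRequirementViolated: false };
}
```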

Packages

The integration is split into focused npm packages, each handling one piece of the learning pipeline:

Build and deploy

corepack enable
pnpm install
pnpm check
pnpm release:pack

OpenClaw runtime then deploys the released pack set in its own environment.

Runtime behavior

Serving (hot path)

  1. OpenClaw receives a query.
  2. The learned route_fn (from the deployed brain pack) walks the knowledge graph and picks which context blocks to surface. It blends the graph’s structural knowledge (graph_prior) with per-query relevance (QTsim), using a confidence gate to shift weight between them.
  3. OpenClaw assembles the prompt with that bounded context and serves the response.
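The blend in step 2 can be sketched as a scoring function. This is a hypothetical shape, not the shipped route_fn (which is a learned artifact inside the brain pack); `graphPrior`, `qtSim`, and the linear gate are assumptions for illustration.

```typescript
// Sketch of the confidence-gated blend from step 2: the gate shifts
// weight between the graph's structural prior (graph_prior) and
// per-query relevance (QTsim), then context selection is bounded.

interface ContextBlock {
  id: string;
  graphPrior: number; // structural score from the knowledge graph
  qtSim: number;      // per-query relevance (QTsim)
}

function blendScore(block: ContextBlock, confidence: number): number {
  // confidence in [0, 1]: high confidence leans on graph_prior,
  // low confidence leans on per-query similarity.
  const gate = Math.min(1, Math.max(0, confidence));
  return gate * block.graphPrior + (1 - gate) * block.qtSim;
}

function pickContext(
  blocks: ContextBlock[],
  confidence: number,
  budget: number, // bounded context: at most `budget` blocks surface
): ContextBlock[] {
  return [...blocks]
    .sort((a, b) => blendScore(b, confidence) - blendScore(a, confidence))
    .slice(0, budget);
}
```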

Learning (off the hot path)

These run asynchronously and never add latency to responses.

Graph-dynamics today: decay settings, Hebbian co-firing settings, and split/merge/prune/connect counts are recorded inside the immutable pack artifact as build-time metadata. This is not the same as live runtime mutation of the active pack during serving.
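That build-time metadata might look roughly like the following inside a pack artifact. The field names are illustrative assumptions; the point is that these are settings and counts captured at build, not live mutations of the serving pack.

```typescript
// Illustrative shape of the graph-dynamics metadata recorded in the
// immutable pack artifact at build time. Field names are hypothetical.
interface GraphDynamicsMetadata {
  decay: { halfLifeEvents: number };   // decay settings
  hebbian: { coFireWindowMs: number }; // co-firing settings
  rewrites: {                          // counts from the build pass
    splits: number;
    merges: number;
    prunes: number;
    connects: number;
  };
  builtAt: string; // ISO timestamp; immutable once packaged
}
```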

Graph-dynamics target: continuous live update (decay, Hebbian co-firing, pruning, reorganizing) on the active pack during serving. This is the target end shape; it is not proved in this repo today. See CLAIMS.md for the authoritative boundary.

Fast boot

The runtime starts serving immediately from existing workspace files. Bootstrap attach self-boots truthfully even from zero live events: the zero-event seed state is surfaced to the operator rather than failing at init. Background learning exports new events first, then backfills historical data.
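The invariants above can be sketched in a few lines. Function and field names here are hypothetical; what the doc guarantees is that zero events is a visible seed state rather than an init failure, and that new events are exported before history is backfilled.

```typescript
// Sketch of the fast-boot behavior: attach never fails at init on zero
// live events, and background learning orders new events before the
// historical backfill. All names are illustrative.

interface AttachStatus {
  state: "seeded" | "learning"; // "seeded" = truthful zero-event state
  liveEvents: number;
}

function bootstrapAttach(liveEvents: number): AttachStatus {
  // Zero events is an operator-visible seed state, not an error.
  return { state: liveEvents === 0 ? "seeded" : "learning", liveEvents };
}

function learningOrder(): string[] {
  // New events first, then historical backfill, all off the hot path.
  return ["export-new-events", "backfill-history"];
}
```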

Use pnpm operator:status, pnpm operator:doctor, and pnpm operator:rollback -- --dry-run for day-0 triage once you have a real activation root.

Fail-open behavior

The integration is designed to be fail-open.

Proof boundary

Three levels, kept separate:

See CLAIMS.md for the authoritative boundary between what OpenClawBrain proves, what Brain Ground Zero proves, and what is not yet claimed.

What to avoid