OpenClawBrain is your agent’s second brain — it builds a knowledge map, trains a guide that picks the right context for each question, and learns from corrections. OpenClaw is front of house (live conversations); OpenClawBrain is the kitchen (knowledge, learning, compilation). This page describes how the two systems connect.
Current truth: the learner builds candidate packs off the hot path; activation stages and promotes them; the compiler serves only from the active promoted pack; route updates are PG-only and come from explicit labels.

Target end shape: continuous live graph updates (decay, co-firing, pruning, reorganizing) on the active pack during serving; scanner, labels, and harvest on by default after attach; and hard API enforcement of the OpenClaw/OpenClawBrain split.
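The candidate/active/previous lifecycle above can be sketched as follows. This is a minimal illustration, not the real OpenClawBrain API; the `PackSet` shape and the `promote`/`rollback` function names are assumptions for the sketch.

```typescript
// Hypothetical sketch of the pack lifecycle: the learner stages a
// candidate off the hot path, activation promotes it, and the old
// active pack is kept as `previous` for rollback.
interface PackSet {
  candidate?: string; // pack id staged by the learner (off the hot path)
  active?: string;    // the only serve-visible slot
  previous?: string;  // retained for rollback
}

// Activation promotes the staged candidate; the compiler serves only
// from `active`, so serving never sees a half-built pack.
function promote(packs: PackSet): PackSet {
  if (!packs.candidate) throw new Error("nothing staged to promote");
  return {
    candidate: undefined,
    active: packs.candidate,
    previous: packs.active, // old active stays inspectable for rollback
  };
}

function rollback(packs: PackSet): PackSet {
  if (!packs.previous) throw new Error("no previous pack to roll back to");
  return { candidate: undefined, active: packs.previous, previous: undefined };
}
```

The key property is that promotion and rollback only swap slot pointers; the serving path always reads a single, fully built `active` pack.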
New here? Start with the setup guide.
Other docs:
- @openclawbrain/openclaw — typed OpenClaw bridge: compile diagnostics, learned-route hard-fail enforcement, and normalized event export handoff.

The active promoted pack is the only serve-visible slot. The candidate and previous slots stay inspectable for promotion and rollback. A missing active pack fails open, but learned-required route-artifact drift returns hardRequirementViolated=true and disables static fallback.
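The fail-open versus hard-fail distinction above can be sketched like this. It is an illustrative decision function under assumed names (`resolveServePack`, `ServeDecision`), not the actual bridge in @openclawbrain/openclaw.

```typescript
// Sketch of the serve-time resolution rules: missing active pack fails
// open to static context; learned-required route-artifact drift is a
// hard failure with static fallback disabled.
interface ServeDecision {
  pack?: string;
  staticFallback: boolean;
  hardRequirementViolated: boolean;
}

function resolveServePack(opts: {
  activePack?: string;            // the only serve-visible slot
  routeArtifactDrifted: boolean;  // learned-required route artifact out of sync
}): ServeDecision {
  // Drift on a learned-required route is a hard failure: no fallback.
  if (opts.routeArtifactDrifted) {
    return { staticFallback: false, hardRequirementViolated: true };
  }
  // A missing active pack fails open to static context.
  if (!opts.activePack) {
    return { staticFallback: true, hardRequirementViolated: false };
  }
  return { pack: opts.activePack, staticFallback: false, hardRequirementViolated: false };
}
```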
The integration is split into focused npm packages, each handling one piece of the learning pipeline:
- @openclawbrain/contracts
- @openclawbrain/events
- @openclawbrain/event-export
- @openclawbrain/workspace-metadata
- @openclawbrain/provenance
- @openclawbrain/pack-format
- @openclawbrain/activation
- @openclawbrain/compiler
- @openclawbrain/learner
- @openclawbrain/openclaw

Build the release set with:

```shell
corepack enable
pnpm install
pnpm check
pnpm release:pack
```
OpenClaw runtime then deploys the released pack set in its own environment.
route_fn (from the deployed brain pack) walks the knowledge graph and picks which context blocks to surface. It blends the graph’s structural knowledge (graph_prior) with per-query relevance (QTsim), using a confidence gate to shift weight between them.

These stages run asynchronously and never add latency to responses:
the learner builds candidate packs off the hot path, activation stages and promotes them, and the compiler then serves from the newly promoted pack.

Graph-dynamics today: decay settings, Hebbian co-firing settings, and split/merge/prune/connect counts are recorded inside the immutable pack artifact as build-time metadata. This is not the same as live runtime mutation of the active pack during serving.
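The build-time metadata described above might look roughly like this. The field names are hypothetical, chosen only to mirror the settings and counts the prose lists; the real pack format may differ.

```typescript
// Assumed shape of graph-dynamics metadata recorded in the immutable
// pack artifact at build time. It describes what the build did; the
// serving path does not perform these operations live.
interface GraphDynamicsMetadata {
  decay: { halfLifeEvents: number };   // decay settings (illustrative field)
  hebbian: { coFiringWindow: number }; // co-firing settings (illustrative field)
  reorg: { splits: number; merges: number; prunes: number; connects: number };
}

// Total reorganization operations the build performed on the graph.
function totalReorgOps(m: GraphDynamicsMetadata): number {
  const r = m.reorg;
  return r.splits + r.merges + r.prunes + r.connects;
}
```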
Graph-dynamics target: continuous live update (decay, Hebbian co-firing, pruning, reorganizing) on the active pack during serving. This is the target end shape; it is not proved in this repo today. See CLAIMS.md for the authoritative boundary.
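The confidence-gated blend that route_fn applies can be sketched as follows. This is a minimal illustration assuming a simple linear gate; `routeScore` and `pickBlocks` are hypothetical names and formulas, not the deployed route_fn.

```typescript
// Blend the graph's structural knowledge (graph_prior) with per-query
// relevance (QTsim). Higher confidence shifts weight toward the graph;
// lower confidence leans on per-query relevance. Linear gate assumed.
function routeScore(graphPrior: number, qtSim: number, confidence: number): number {
  const w = Math.min(1, Math.max(0, confidence)); // clamp the gate to [0, 1]
  return w * graphPrior + (1 - w) * qtSim;
}

// Surface the top-k context blocks by blended score.
function pickBlocks(
  blocks: { id: string; graphPrior: number; qtSim: number }[],
  confidence: number,
  k: number,
): string[] {
  return blocks
    .map((b) => ({ id: b.id, score: routeScore(b.graphPrior, b.qtSim, confidence) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map((b) => b.id);
}
```

With confidence near zero the ranking is driven almost entirely by QTsim; with confidence near one it follows graph_prior.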
The runtime starts serving immediately from existing workspace files. Bootstrap attach self-boots truthfully even from zero live events; the zero-event seed state is operator-visible rather than failing at init. Background learning exports new events first, then backfills historical data.
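The truthful zero-event bootstrap can be sketched like this. The `AttachStatus` shape and `bootstrapAttach` name are assumptions for illustration, not the actual attach API.

```typescript
// Sketch of bootstrap attach: serving starts immediately, a zero-event
// start is surfaced as a visible seed state rather than an init failure,
// and backfill of historical data is queued behind new-event export.
interface AttachStatus {
  serving: boolean;                      // serving starts from workspace files
  seedState: "zero-event" | "seeded";    // operator-visible, never a crash
  backfillPending: boolean;              // history backfills after new events
}

function bootstrapAttach(liveEventCount: number): AttachStatus {
  return {
    serving: true,
    seedState: liveEventCount === 0 ? "zero-event" : "seeded",
    backfillPending: true,
  };
}
```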
Use `pnpm operator:status`, `pnpm operator:doctor`, and `pnpm operator:rollback -- --dry-run` for day-0 triage once you have a real activation root.
The integration is designed to be fail-open.
Three levels, kept separate:
route_fn evidence is verified and operator observability passes. Run `pnpm lifecycle:smoke`, `pnpm observability:smoke`, `pnpm continuous-product-loop:smoke`, and `pnpm fresh-env:smoke`.

See CLAIMS.md for the authoritative boundary between what OpenClawBrain proves, what Brain Ground Zero proves, and what is not yet claimed.
Do not treat continuousGraphLearning config flags as proof of live runtime graph plasticity: today, graph-dynamics fields are pack artifact metadata, not live serving-path operations. Do not assume @openclawbrain/* packages are npm-published: the current wave ships from local .release/*.tgz tarballs. Use `pnpm release:status` to check the honest distribution lane.