OpenClawBrain
A real continuity turn

How memory changes the next action without flooding the prompt.

This page shows a redacted local OpenClawBrain runtime and one memory-assisted Telegram turn. The useful part is not that the system stored data; it is that the right operating rule appeared only when it mattered.

Captured: 2026-05-08 02:20 PDT
Release: 0.2.33
Mode: balanced
Policy: route-policy-v3, gated_active
Storage: local SQLite graph
Plugin: loaded
Hooks: 5
HTTP routes: 11

incoming turn (channel + task signals) → route_fn (calibrate or abstain) → memory graph (nodes, edges, FTS) → proof events (auditable decisions) → prompt block (bounded injection)
Real runtime:

service=openclawbrain
version=0.2.33
activated=true
rawTranscriptUpload=false
allowRemoteLlm=false

Real local state

What is running here

Runtime

OpenClawBrain is enabled, activated, explicitly selected as the memory slot, loaded as service openclawbrain, and installed from the 0.2.33 archive.

Hook surface

It observes prompt build, gateway start and stop, model call start, and model call end. That is enough to retrieve before the prompt and learn after the turn.

Privacy posture

Raw transcript upload is disabled, remote LLM endpoints are disallowed, and sensitive recall rules require narrow scope.

| Setting | Live value |
| --- | --- |
| Release | package openclawbrain, version 0.2.33 |
| Mode | balanced |
| Scoped agents | main |
| Capture | aggressive; immediate correction capture on; post-run workflow capture on; candidates stored |
| Routing | hybrid_llm_on_cache_miss; max 40 candidates; max 8 injected memories; max 2500 injected chars |
| Route learning | route-policy-v3 enabled; update mode gated_active; calibrated confidence floor 0.62; abstain margin 0.05 |
| Local models | qwen2.5:32b-instruct for route, planner, feedback, and learning through a local Ollama-compatible endpoint |
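The routing caps in the table (max 40 candidates, max 8 injected memories, max 2500 injected chars) can be read as a simple budget applied to ranked candidates. A minimal sketch of that budgeting, with hypothetical names; this is not OpenClawBrain's actual API:

```python
# Illustrative enforcement of the three routing caps from the table above.
MAX_CANDIDATES = 40        # memories considered per turn
MAX_INJECTED = 8           # memories allowed into the prompt
MAX_INJECTED_CHARS = 2500  # hard cap on injected text

def bound_injection(candidates: list[str]) -> list[str]:
    """Keep the top-ranked memories while respecting all three caps."""
    selected, used = [], 0
    for text in candidates[:MAX_CANDIDATES]:
        if len(selected) == MAX_INJECTED:
            break
        if used + len(text) > MAX_INJECTED_CHARS:
            continue  # this memory would blow the character budget; skip it
        selected.append(text)
        used += len(text)
    return selected
```

Because the candidate list arrives ranked, skipping an oversized memory rather than stopping lets smaller, still-relevant memories fill the remaining budget.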

Public redaction note: the actual activation path, install path, session identifiers, and chat identifiers are intentionally not printed here.

Local proof

The graph is there so memory can be explained.

OpenClawBrain's durable surface is a local SQLite database. The important product behavior is that memory has lineage: what was remembered, what superseded it, what was injected, what was withheld, and why the agent acted the way it did.

Memory side

memory_nodes stores scoped memories. memory_edges connects related memories. memory_search provides FTS retrieval. memory_injections records what actually entered the prompt.
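Only the four table names above come from the page; the column layout below is an assumption. Still, a minimal SQLite sketch shows how scoped nodes plus an FTS index support the retrieval step:

```python
import sqlite3

# Assumed schema for the memory-side tables named above (columns are illustrative).
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE memory_nodes (id INTEGER PRIMARY KEY, scope TEXT, body TEXT);
CREATE TABLE memory_edges (src INTEGER, dst INTEGER, kind TEXT);
CREATE VIRTUAL TABLE memory_search USING fts5(body, content=memory_nodes, content_rowid=id);
CREATE TABLE memory_injections (node_id INTEGER, turn TEXT, injected_at TEXT);
""")

db.execute("INSERT INTO memory_nodes VALUES (1, 'telegram', 'Restrict messages to operator summaries.')")
db.execute("INSERT INTO memory_search(rowid, body) SELECT id, body FROM memory_nodes")

# Full-text lookup joined back to the node table, as in the candidate-search step.
rows = db.execute(
    "SELECT n.id, n.body FROM memory_search s JOIN memory_nodes n ON n.id = s.rowid "
    "WHERE memory_search MATCH ?", ("operator",)
).fetchall()
```

The external-content FTS table keeps the searchable text in one place (`memory_nodes`) while still allowing fast `MATCH` queries.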

Learning side

route_frames_v3, route_shadow_decisions_v3, route_calibration_examples_v3, route_action_family_stats_v3, and route_policy_candidate_reports_v3 turn outcome evidence into a safer router.

| Graph object | Plain-English meaning | Why it matters |
| --- | --- | --- |
| Memory node | A durable fact, correction, preference, or workflow rule. | The agent can reuse it later without relearning it. |
| Memory edge | A relationship between nodes, such as same repo, same channel, supersedes, or supports. | The agent can follow connected context without dumping every memory. |
| Route decision | The router's choice to retrieve, inject, abstain, or fall back. | Bad memory behavior can be audited and fixed. |
| Route frame | A redacted training example for what kind of turn this was. | Route-policy-v3 learns from cases instead of vibes. |
| Calibration example | Whether a predicted route was reliable for an action family. | Confidence thresholds can be learned per family. |
| Proof event | An auditable record of capture, routing, injection, or status. | Local memory has evidence trails and rollback points. |
Useful turn pipeline

The turn is routed before the answer and learned from after it

1. User turn: The channel, task, repo, risk, and wording become routing signals.
2. Candidate search: SQLite FTS and scoped graph lookup find memories that might matter.
3. Route policy: route-policy-v3 scores the action family, confidence, and injection value.
4. Abstain gate: If support is weak, OpenClawBrain stays quiet or falls back instead of guessing.
5. Prompt injection: Only a compact, relevant memory block is inserted into the next prompt.
6. Proof and learning: Outcome evidence feeds route frames, calibration, shadow replay, and candidate reports.
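The retrieve-score-gate-inject core of that pipeline can be sketched as one function. Everything here is hypothetical (`run_turn`, the `search` and `score` callables); only the 0.62 confidence floor and the context-block shape come from this page:

```python
CONF_FLOOR = 0.62  # calibrated confidence floor from the settings table

def run_turn(signals, search, score):
    """One memory-assisted turn: retrieve, score, gate, then inject or abstain."""
    candidates = search(signals)                   # steps 1-2: signals -> candidate memories
    confidence, keep = score(signals, candidates)  # step 3: route policy scores the turn
    if confidence < CONF_FLOOR or not keep:        # step 4: abstain gate
        return None                                # withhold memory rather than guess
    body = "\n".join(f"- {m}" for m in keep)       # step 5: compact, bounded block
    return f"<openclawbrain_context>\nRelevant memory:\n{body}\n</openclawbrain_context>"
```

Returning `None` instead of a thin, maybe-relevant block is the whole point of the abstain gate: an empty injection is strictly safer than a wrong one.
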
Real example

A memory-injected turn from this Telegram session

This is the useful behavior Jonathan wanted visualized: the user asks to push and publish everything, and OpenClawBrain injects channel-specific memory that changes how the agent communicates while it works.

User turn

Make sure to push publish everything!

Retrieved memory context

<openclawbrain_context>
Relevant memory:
- Must follow: All Telegram profiles and agents must have noisy streaming/tool progress disabled to prevent leaking internal progress and tool chatter.
- Must follow: On Telegram, restrict messages to operator summaries (ready/not ready, blockers, actions, decisions). Do not include command details, proof data, or ledger specifics unless explicitly requested.
</openclawbrain_context>
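Both rules above are channel-specific, which suggests recall keyed on scope. A toy sketch of scoped recall, under an assumed flat store (the real system uses the graph and FTS described earlier):

```python
# Hypothetical in-memory store; 'scope' mirrors the channel scoping above.
MEMORIES = [
    {"scope": "telegram", "body": "Disable noisy streaming/tool progress."},
    {"scope": "telegram", "body": "Restrict messages to operator summaries."},
    {"scope": "repo:main", "body": "Run the full test suite before pushing."},
]

def recall(channel: str) -> list[str]:
    """Return only the rules scoped to this channel, nothing else."""
    return [m["body"] for m in MEMORIES if m["scope"] == channel]
```

A Telegram turn surfaces exactly the two Telegram rules; a turn on any other channel sees neither, so the operating rule appears only where it applies.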

Agent behavior

The agent still does the engineering work, but the remote chat only gets concise operator updates: ready state, blocker state, actions, and decisions. Internal commands, proof rows, and noisy progress stay out of Telegram.

Why this is valuable

The memory is not general trivia. It is a channel-specific operating rule that prevents accidental disclosure while preserving momentum. The right memory appears at the exact turn where it matters.

What makes it safer

Memory only helps if the system can say no

Abstention

If confidence is below the calibrated threshold, OpenClawBrain withholds memory instead of injecting a maybe-relevant rule.
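The settings table gives two numbers for this gate: a calibrated confidence floor of 0.62 and an abstain margin of 0.05. A minimal sketch of how those two thresholds might combine; the function name and score format are assumptions:

```python
CONF_FLOOR = 0.62      # minimum calibrated confidence to act
ABSTAIN_MARGIN = 0.05  # minimum gap between the top two action families

def choose_action(scores: dict[str, float]) -> str:
    """Pick an action family, or abstain when support is weak or ambiguous."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    best = ranked[0]
    runner_up = ranked[1] if len(ranked) > 1 else ("", 0.0)
    if best[1] < CONF_FLOOR:
        return "abstain"  # below the calibrated floor
    if best[1] - runner_up[1] < ABSTAIN_MARGIN:
        return "abstain"  # two families too close to call
    return best[0]
```

The margin check matters even when confidence is high: two near-tied families mean the router is not actually sure which rule applies.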

Shadow replay

Candidate policies can score real past turns before they are trusted to alter future prompts.
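Shadow replay reduces to replaying logged turns through a candidate and counting agreement with known-good outcomes. A sketch under assumed data shapes (`signals`, `good_action`); the real candidate reports live in route_policy_candidate_reports_v3:

```python
def shadow_replay(candidate_policy, logged_turns) -> float:
    """Fraction of logged turns where the candidate picks the known-good action."""
    agree = sum(
        1 for turn in logged_turns
        if candidate_policy(turn["signals"]) == turn["good_action"]
    )
    return agree / len(logged_turns)
```

A candidate only graduates to altering live prompts once its replay score clears whatever bar the operator sets; until then it runs in the shadows.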

Rollback lineage

Policy snapshots and candidate reports keep enough history to retreat from a noisy or harmful policy.

Proof surface

How to inspect the same kind of evidence locally

openclaw plugins inspect openclawbrain --runtime
openclaw config get plugins.entries.openclawbrain

# authenticated local Gateway routes expose proof, search, status,
# graph, route policy, route learning, and recent injection surfaces

The public page deliberately shows a redacted snapshot. The local machine keeps the inspectable data, including proof events, status snapshots, and route learning rows.