How memory changes the next action without flooding the prompt.
This page shows a redacted local OpenClawBrain runtime and one memory-assisted Telegram turn. The useful part is not that the system stored data; it is that the right operating rule appeared only when it mattered.
What is running here
OpenClawBrain is enabled, explicitly selected as the memory slot, loaded as the openclawbrain service, and installed from the 0.2.33 archive.
It observes prompt build, gateway start and stop, model call start, and model call end. That is enough to retrieve memories before the prompt is assembled and to learn from the outcome after the turn completes.
Raw transcript upload is disabled, remote LLM endpoints are disallowed, and sensitive recall rules require narrow scope.
| Setting | Live value |
|---|---|
| Release | package openclawbrain, version 0.2.33 |
| Mode | balanced |
| Scoped agents | main |
| Capture | aggressive, immediate correction capture on, post-run workflow capture on, candidates stored |
| Routing | hybrid_llm_on_cache_miss, max 40 candidates, max 8 injected memories, max 2500 injected chars |
| Route learning | route-policy-v3 enabled, update mode gated_active, calibrated confidence floor 0.62, abstain margin 0.05 |
| Local models | qwen2.5:32b-instruct for route, planner, feedback, and learning through local Ollama-compatible endpoint |
Public redaction note: the actual activation path, install path, session identifiers, and chat identifiers are intentionally not printed here.
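The routing budgets in the table (at most 40 candidates considered, at most 8 memories injected, at most 2500 injected characters) can be read as a simple greedy selection. This is a minimal sketch, not OpenClawBrain's actual code; the `Candidate` type and function name are illustrative, only the three limits come from the table above.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    text: str
    score: float

def select_injections(candidates, max_candidates=40, max_memories=8, max_chars=2500):
    """Apply the routing budgets from the table: consider at most
    max_candidates scored memories, inject at most max_memories of them,
    and keep the combined injected text under max_chars."""
    pool = sorted(candidates, key=lambda c: c.score, reverse=True)[:max_candidates]
    chosen, used = [], 0
    for c in pool:
        if len(chosen) == max_memories:
            break
        if used + len(c.text) > max_chars:
            continue  # skip any memory that would blow the character budget
        chosen.append(c)
        used += len(c.text)
    return chosen
```

The character cap is why a high-scoring but very long memory can lose to two shorter ones: the budget is on injected prompt size, not on memory count alone.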
The graph is there so memory can be explained
OpenClawBrain's durable surface is a local SQLite database. The important product behavior is that memory has lineage: what was remembered, what superseded it, what was injected, what was withheld, and why the agent acted the way it did.
memory_nodes stores scoped memories. memory_edges connects related memories. memory_search provides FTS retrieval. memory_injections records what actually entered the prompt.
route_frames_v3, route_shadow_decisions_v3, route_calibration_examples_v3, route_action_family_stats_v3, and route_policy_candidate_reports_v3 turn outcome evidence into a safer router.
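To make the lineage idea concrete, here is a minimal sketch of what the core memory tables might look like in SQLite. Only the table names come from this page; every column, the FTS5 configuration, and the in-memory connection are assumptions for illustration.

```python
import sqlite3

# Illustrative schema only: table names match the page, columns are assumed.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE memory_nodes (
    id    INTEGER PRIMARY KEY,
    scope TEXT NOT NULL,   -- e.g. agent or channel scope
    kind  TEXT NOT NULL,   -- fact, correction, preference, workflow rule
    body  TEXT NOT NULL
);
CREATE TABLE memory_edges (
    src      INTEGER REFERENCES memory_nodes(id),
    dst      INTEGER REFERENCES memory_nodes(id),
    relation TEXT NOT NULL -- same_repo, same_channel, supersedes, supports
);
-- external-content FTS index over node bodies
CREATE VIRTUAL TABLE memory_search
    USING fts5(body, content='memory_nodes', content_rowid='id');
CREATE TABLE memory_injections (
    turn_id     TEXT,
    node_id     INTEGER REFERENCES memory_nodes(id),
    injected_at TEXT DEFAULT CURRENT_TIMESTAMP
);
""")
```

Because `memory_injections` references `memory_nodes`, every injected rule can be traced back to the node it came from, which is what makes "what was injected, what was withheld" auditable.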
| Graph object | Plain-English meaning | Why it matters |
|---|---|---|
| Memory node | A durable fact, correction, preference, or workflow rule. | The agent can reuse it later without relearning it. |
| Memory edge | A relationship between nodes, such as same repo, same channel, supersedes, or supports. | The agent can follow connected context without dumping every memory. |
| Route decision | The router's choice to retrieve, inject, abstain, or fall back. | Bad memory behavior can be audited and fixed. |
| Route frame | A redacted training example for what kind of turn this was. | Route-policy-v3 learns from cases instead of vibes. |
| Calibration example | Whether a predicted route was reliable for an action family. | Confidence thresholds can be learned per family. |
| Proof event | An auditable record of capture, routing, injection, or status. | Local memory has evidence trails and rollback points. |
The turn is routed before the answer and learned from after it
A memory-injected turn from this Telegram session
This is the useful behavior Jonathan wanted visualized: the user asks to push and publish everything, and OpenClawBrain injects channel-specific memory that changes how the agent communicates while it works.
User turn
Make sure to push publish everything!
Retrieved memory context
<openclawbrain_context>
Relevant memory:
- Must follow: All Telegram profiles and agents must have noisy streaming/tool progress disabled to prevent leaking internal progress and tool chatter.
- Must follow: On Telegram, restrict messages to operator summaries (ready/not ready, blockers, actions, decisions). Do not include command details, proof data, or ledger specifics unless explicitly requested.
</openclawbrain_context>
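Assembling that block is mostly string formatting under the same character budget described earlier. A minimal sketch, assuming selected memories arrive as plain strings; the function name is hypothetical, while the tag and the "Must follow:" prefix mirror the transcript above.

```python
def render_context(memories, max_chars=2500):
    """Format selected memories into the injected context block,
    stopping before the combined memory lines exceed max_chars."""
    lines = ["<openclawbrain_context>", "Relevant memory:"]
    used = 0
    for m in memories:
        line = f"- Must follow: {m}"
        if used + len(line) > max_chars:
            break  # respect the injected-character budget
        lines.append(line)
        used += len(line)
    lines.append("</openclawbrain_context>")
    return "\n".join(lines)
```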
Agent behavior
The agent still does the engineering work, but the remote chat only gets concise operator updates: ready state, blocker state, actions, and decisions. Internal commands, proof rows, and noisy progress stay out of Telegram.
Why this is valuable
The memory is not general trivia. It is a channel-specific operating rule that prevents accidental disclosure while preserving momentum. The right memory appears at the exact turn where it matters.
Memory only helps if the system can say no
If confidence is below the calibrated threshold, OpenClawBrain withholds memory instead of injecting a maybe-relevant rule.
Candidate policies can score real past turns before they are trusted to alter future prompts.
Policy snapshots and candidate reports keep enough history to retreat from a noisy or harmful policy.
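The calibrated confidence floor (0.62) and abstain margin (0.05) from the settings table suggest a gated decision like the one below. The three-way split, and treating the margin as a band just under the floor, are assumptions; the page only states that low-confidence memory is withheld rather than injected.

```python
def route_decision(confidence, floor=0.62, margin=0.05):
    """Gated routing sketch: inject only when calibrated confidence
    clears the floor; abstain inside the margin band just below it;
    otherwise fall back to answering without memory."""
    if confidence >= floor:
        return "inject"
    if confidence >= floor - margin:
        return "abstain"  # too close to call: withhold memory, keep the evidence
    return "fallback"
```

The point of the band is that near-threshold cases become recorded abstentions rather than risky injections, which is exactly the evidence route-policy-v3 needs to recalibrate per action family.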
How to inspect the same kind of evidence locally
openclaw plugins inspect openclawbrain --runtime
openclaw config get plugins.entries.openclawbrain
# authenticated local Gateway routes expose proof, search, status,
# graph, route policy, route learning, and recent injection surfaces
The public page deliberately shows a redacted snapshot. The local machine keeps the inspectable data, including proof events, status snapshots, and route learning rows.
