You corrected your agent's coding style last Tuesday. Today it made the same mistake. You explained your project's architecture to Cursor this morning. Tonight, Claude Code has no idea. Every conversation starts from zero.
PLUR gives your agents a shared memory that persists, learns from feedback, and travels between tools and devices. Corrections stick. Preferences compound. Models are commodities — your knowledge isn't. PLUR makes it portable.
| Knowledge type | Record | Win rate |
|---|---|---|
| **House rules**: tag formats, file placement, project conventions | 12–0 | 100% |
| **Tool routing**: finding the right tool among 100+ options | 10–2 | 83% |
| **Past experience**: API quirks, debugging insights, infrastructure details | 4–0 | 100% |
| **Learned style**: communication tone, design preferences | 5–2 | 71% |
| **General tasks**: no memory advantage expected | — | No penalty |
Engrams — learned knowledge that strengthens with use and fades when irrelevant. Stale facts naturally drop out without manual cleanup. Memory is injected at every stage — session start, planning, skill use, agent spawning, even subagents. Nothing slips through. READ THE SPEC →
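The strengthen-with-use, fade-when-irrelevant behavior can be pictured as a relevance score that decays over time and is reinforced on each use. This is a minimal sketch of that idea, not PLUR's actual algorithm; the class, field names, and the 30-day half-life are illustrative assumptions.

```python
import time

class Engram:
    """Illustrative engram: a fact with a relevance score that decays
    while unused and is reinforced each time the memory is applied."""

    HALF_LIFE_DAYS = 30.0  # assumed decay constant, not from the spec

    def __init__(self, fact: str):
        self.fact = fact
        self.strength = 1.0
        self.last_used = time.time()

    def score(self, now: float) -> float:
        # Exponential decay: unused engrams fade toward zero.
        age_days = (now - self.last_used) / 86400
        return self.strength * 0.5 ** (age_days / self.HALF_LIFE_DAYS)

    def reinforce(self, now: float) -> None:
        # Each use strengthens the engram and resets its decay clock.
        self.strength = self.score(now) + 1.0
        self.last_used = now

def inject(engrams, now, threshold=0.1):
    """Only engrams scoring above the threshold get injected;
    stale facts drop out without manual cleanup."""
    return [e for e in engrams if e.score(now) > threshold]
```

Under this toy model, a correction you keep relying on compounds in strength, while a fact untouched for months falls below the injection threshold on its own.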
Correct something in Claude Code. Cursor already knows. OpenClaw remembers it on Telegram. Hermes picks it up in Python. Multi-store reads your team's shared engrams alongside personal memory — no copying, no syncing. Switch AI models next month; your memory carries over.
Open the file. Read what your agent remembers. Edit it. Delete it. It's YAML on your disk — not a database, not a cloud, not a black box. Apache-2.0 licensed, forever. Unlike Mem0, Letta, or Zep, nothing leaves your machine unless you choose.
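To make "YAML on your disk" concrete, here is what such a file might look like. The path, field names, and values are hypothetical illustrations, not PLUR's actual schema:

```yaml
# ~/.plur/engrams.yaml — hypothetical path and schema
- id: house-rule-tags
  kind: house_rule
  text: "Tags are kebab-case; release tags live under tags/releases/"
  strength: 0.92
  last_used: 2025-01-14
- id: api-quirk-retry
  kind: past_experience
  text: "Billing API returns 429 with no Retry-After header; back off 30s"
  strength: 0.41
  last_used: 2024-11-02
```

Edit an entry, delete one, version it in your dotfiles: it's just a file.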
`plur init` installs 9 hooks. Memory is injected at session start, plan mode, skill invocation, agent spawn, and subagent start. Observations are logged for offline pattern extraction. Your memory lives on your machine: open the file and see what's there. No platform can revoke it or hold it hostage.
An AI that remembers your preferences, your patterns, your way of thinking — and carries that across every tool you use.
Your memory outlives any model. Switch from Claude to GPT to whatever comes next. The knowledge transfers.
Nothing shared without your say. Nothing taken without giving back. Private until you choose otherwise.
Corrections stick. Preferences compound. Every session builds on the last. One agent, one developer — a memory that gets better with use, not just bigger.
Knowledge Packs and Multi-Store let teams pool hard-won knowledge without giving up ownership. What took one person six months, another installs in six seconds.
A network of sovereign agents, each choosing what to share. Not a hive mind — a chorus. The future of intelligence is not artificial. It is plural.