Give Your AI Agent a Memory — Here's the Skill That Does It
A follow-up to Your AI Agent Has Amnesia, and It's Not a Tech Problem
NOTE: this is the third in a series I’m writing on memory and AI agents; the first can be found here, and the second here.
Last time, I wrote about why AI agents forget everything and why the fix has more to do with cognitive psychology than vector databases. I’m sure you’re all asking the obvious question: cool, but can I actually use this? You can. Here’s the tool.
memory-pipeline: Three Scripts, One Habit
The memory-pipeline skill is an open-source addition to OpenClaw that gives your agent a nightly consolidation process — the AI equivalent of sleeping on it.
It runs three stages:
1. Extract — Reads your agent’s daily notes and session transcripts, pulls out structured facts (decisions, preferences, commitments, things learned), and stores them as typed entries with confidence scores. Think of it as automatic journaling.
2. Link — Connects those facts into a knowledge graph. Related facts get linked. Contradictions get flagged. Over time, your agent builds a semantic memory — organized by meaning, not by date. Inspired by the Zettelkasten method and A-Mem research.
3. Brief — Every morning, generates a BRIEFING.md from the knowledge graph: active projects, recent decisions, personality reminders, things not to forget. Your agent starts each session primed instead of blank.
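To make the Extract stage concrete, here’s what a typed fact entry might look like. The field names and values below are my illustration of “typed entries with confidence scores,” not the skill’s actual schema:

```python
# A hypothetical extracted fact entry (field names are assumptions,
# not the skill's real schema).
fact = {
    "type": "decision",            # decision | preference | commitment | learned
    "text": "Moved consolidation from hourly to nightly runs",
    "date": "2025-06-12",
    "confidence": 0.9,             # extractor's confidence in this fact
    "source": "memory/2025-06-12.md",
    "links": [],                   # filled in later by the Link stage
}

assert fact["type"] in {"decision", "preference", "commitment", "learned"}
assert 0.0 <= fact["confidence"] <= 1.0
```

The Link stage then populates `links` with references to related facts, which is what turns a pile of journal entries into a graph.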
What You Need
An OpenClaw agent (or any OpenClaw-compatible setup)
An API key for at least one LLM provider (OpenAI, Anthropic, or Google; the skill auto-detects whichever you have, and it should be a frontier model)
Daily notes in memory/YYYY-MM-DD.md (which OpenClaw agents already create naturally)
That’s it. No external databases. No vector stores. No infrastructure. Three Python scripts and a cron job.
Install
Drop the skill into your workspace:
skills/memory-pipeline/
├── SKILL.md
├── scripts/
│   ├── memory-extract.py
│   ├── memory-link.py
│   └── memory-briefing.py
└── references/
    └── setup.md
Then wire it into your agent’s heartbeat or cron schedule:
# Run nightly or on a schedule
python3 scripts/memory-extract.py # Extract facts from recent notes
python3 scripts/memory-link.py # Build/update knowledge graph
python3 scripts/memory-briefing.py # Generate tomorrow's briefing
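If you go the cron route, a single entry chaining the three stages in order keeps them from stepping on each other. A hypothetical crontab line (the workspace path is a placeholder):

```
# Run the pipeline at 3 AM, each stage only after the previous one succeeds
0 3 * * * cd /path/to/workspace && \
  python3 skills/memory-pipeline/scripts/memory-extract.py && \
  python3 skills/memory-pipeline/scripts/memory-link.py && \
  python3 skills/memory-pipeline/scripts/memory-briefing.py
```

The `&&` chaining matters: if extraction fails one night, you don’t want to build a graph or a briefing from stale data.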
The skill auto-detects your workspace, finds your daily notes, and picks whichever LLM provider you have configured. No hardcoded paths, no manual setup.
What Changes
Before the pipeline, my agent would wake up each session and re-read raw files hoping to find context. After a compaction event mid-conversation, it would lose the thread entirely and start asking me questions I’d already answered.
After: it wakes up, reads a briefing that tells it exactly what’s active, what was decided recently, and what to watch out for. When I ask about something from last week, it searches a knowledge graph instead of scanning through days of notes hoping to get lucky.
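For a sense of what the agent sees, a briefing might look something like this hypothetical excerpt (the skill’s actual template may differ):

```
# BRIEFING — 2025-06-13

## Active projects
- memory-pipeline: ingestion docs in progress

## Recent decisions
- Consolidation runs nightly at 3 AM (decided 2025-06-10)

## Watch out for
- ChatGPT import still pending a dry-run review
```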
The difference isn’t subtle. It’s the difference between a coworker who took notes at yesterday’s meeting and one who was technically in the room but checked out the whole time.
The Behavioral Piece
The skill handles consolidation. But as I wrote in the original post, the single highest-impact change was behavioral — making the agent actually use its memory before answering. If you install the pipeline but don’t update your agent’s instructions to search before guessing, you’ll get organized knowledge that never gets retrieved.
Add this to your agent’s instructions (AGENTS.md or equivalent):
ALWAYS run memory_search before answering questions about past work, decisions, dates, people, preferences, or todos.
One line. Bigger impact than the entire pipeline. But the pipeline gives it something worth searching.
Teaching Your Agent Everything You Already Asked
The memory pipeline handles what your agent learns going forward. But what about everything you’ve already asked other AIs?
If you’ve been using ChatGPT for the past two years, you’ve got hundreds, maybe thousands, of conversations sitting in an export file. The research you did at 2 AM, all those debugging sessions, the 30 business ideas you got excited about but only explored halfway, and all the decisions you made and then forgot you made. That conversation history is a map of how you think.
The memory-pipeline skill also includes a knowledge ingestion system that lets you feed external data directly into your agent’s searchable memory.
Your ChatGPT History, Searchable in Seconds
The first ingestion script handles ChatGPT exports:
# Export your data from ChatGPT (Settings → Data Controls → Export Data)
# Then run:
python3 scripts/ingest-chatgpt.py ~/imports/chatgpt-export.zip
That’s it. The script:
Parses ChatGPT’s conversation tree format
Filters out throwaway one-liners (configurable thresholds)
Lets you exclude topics by keyword (shared account? filter out someone else’s work conversations)
Converts each meaningful conversation into a clean, dated markdown file
Drops them into memory/knowledge/chatgpt/, where they’re automatically indexed
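The parsing step deserves a sketch, since ChatGPT’s export is a tree rather than a flat transcript: each conversation in conversations.json carries a “mapping” dict of message nodes. The field names below reflect one export version I’ve seen and should be treated as assumptions, not a stable format; a real parser would also follow parent/child links to get strict chronological order:

```python
import json
import zipfile

def messages_from_conversation(conv: dict) -> list[tuple[str, str]]:
    """Flatten one conversation's node tree into (role, text) pairs."""
    out = []
    for node in conv.get("mapping", {}).values():
        msg = node.get("message")
        if not msg:
            continue  # structural nodes carry no message
        parts = msg.get("content", {}).get("parts") or []
        text = "\n".join(p for p in parts if isinstance(p, str)).strip()
        if text:
            out.append((msg["author"]["role"], text))
    return out

def load_conversations(export_zip: str) -> list[dict]:
    """Read the conversations list straight out of the export zip."""
    with zipfile.ZipFile(export_zip) as zf:
        return json.loads(zf.read("conversations.json"))
```

From there, filtering and markdown generation are straightforward string work on the (role, text) pairs.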
Once indexed, your agent can semantically search your entire ChatGPT history. That regex question you asked eight months ago, and then again three months later? The product brainstorm from last March? It’s all retrievable.
Why This Matters
Most people treat AI conversations as disposable. You ask a question, get an answer, close the tab. But collectively, those conversations represent a massive amount of your thinking — your research patterns, your decision-making process, the problems you’ve been circling.
Feeding that history into an agent with persistent memory means it doesn’t just know what you told it — it knows what you’ve been thinking about. It can connect a question you asked today to research you did six months ago in a completely different tool.
That’s not memory. That’s context. And context is what turns a chatbot into a collaborator.
Filtering What Goes In
Shared a ChatGPT account with someone? The script supports topic exclusion filters — regex patterns checked against conversation titles and content. In our case, we filtered out ~50 non-work research conversations from a shared account in one line:
EXCLUDE_PATTERNS = [
    r'asthma', r'pediatric', r'\bNIH\b', r'medical',
    # ... add whatever topics you want to skip
]
Clean separation. Only your stuff makes it into memory.
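Applying a filter like that is a one-function job. A sketch of how it could work (the function name and case-insensitive matching are my assumptions, not the script’s actual code):

```python
import re

EXCLUDE_PATTERNS = [r"asthma", r"pediatric", r"\bNIH\b", r"medical"]

def is_excluded(title: str, body: str) -> bool:
    """True if any exclusion pattern matches the title or content."""
    text = f"{title}\n{body}"
    return any(re.search(p, text, re.IGNORECASE) for p in EXCLUDE_PATTERNS)
```

Conversations that match simply never get written to markdown, so they never reach the index.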
The Pattern Is Extensible
ChatGPT is just the first source. The ingestion pattern is simple: parse the source format, chunk by topic, write markdown, let the indexer handle search. Google Search history, Notion exports, browser bookmarks, Slack archives — same approach, different parser.
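That parse → chunk → write-markdown pattern reduces to a small skeleton. Everything here (names, directory layout, record shape) is illustrative rather than the skill’s API; the only real contract is “one dated markdown file per chunk, in a folder the indexer watches”:

```python
from pathlib import Path

def ingest(records: list[dict], source: str,
           out_root: str = "memory/knowledge") -> list[Path]:
    """Write one dated markdown file per record; the indexer picks them up."""
    out_dir = Path(out_root) / source
    out_dir.mkdir(parents=True, exist_ok=True)
    written = []
    for r in records:
        path = out_dir / f"{r['date']}-{r['slug']}.md"
        path.write_text(f"# {r['title']}\n\n{r['body']}\n", encoding="utf-8")
        written.append(path)
    return written
```

A new source means writing only the parser that turns its export into those records; everything downstream stays the same.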
The key insight: your agent’s memory shouldn’t start from zero just because you switched tools. Everything you’ve learned across every platform should be accessible in one place.
Try It
Install the memory-pipeline skill:
clawdhub install memory-pipeline
Full usage docs are in the SKILL.md. Start with a --dry-run to preview what’ll be imported before committing.
Give your bot a better memory today!