Please Forget Me
Why perfect memory might be the wrong goal
NOTE: this is part of a series on AI agents. I suggest you start with the posts covering AI memory and AI pre-game routines.

I’ve been writing about giving AI agents better memory. But there’s a question I’ve been circling that deserves its own treatment: should an AI remember everything?
The pitch for AI memory systems is always framed as a problem to solve: “your agent forgets things, that’s bad, so let’s fix it.” Which leads to more context, better retrieval, persistent knowledge graphs. I’ve written about this; I’ve even built systems for it.
But human memory doesn’t work that way, and there’s a reason for that. So maybe we’re chasing the wrong outcome.
Forgetting Is a Feature
Cognitive psychology has a concept called motivated forgetting. Your brain actively suppresses certain memories, particularly painful ones. This isn’t a bug in human cognition. It’s a protection mechanism. Trauma survivors don’t benefit from perfect recall of their worst moments. The ability to let memories fade, to have them become less vivid, less intrusive over time, is part of how people heal.
There’s also retrieval-induced forgetting: the act of remembering some things makes you forget related things. This sounds like a flaw until you realize it’s how the brain stays focused. Perfect recall of every detail would be overwhelming. The few documented cases of near-total recall suggest it’s more curse than gift¹. Forgetting is how we filter signal from noise.
When we build AI systems that remember everything, we’re optimizing for a goal that human cognition explicitly evolved away from.
The Embodied Part We Can’t Replicate
Here’s something AI memory will never capture: the smell of your grandmother’s kitchen triggering a cascade of memories you didn’t know you still had. The song that puts you back in a specific car on a specific night. The texture of an object that brings back a decade.
Human memory is embodied. It’s intertwined with our senses, our physical experiences, and the context of the world around us, not stored in a database. When an AI “remembers” something, it retrieves text. When you remember something, your entire nervous system participates.
That difference matters. The AI version of memory is useful, but it’s a sketch of the thing, not the thing itself. That pragmatism has value, but something is lost when we start treating the sketch as equivalent to the real experience.
What This Means for AI Memory Systems
I’m not arguing against AI memory. The systems I’ve built make my agent dramatically more useful. But I’ve started thinking about intentional forgetting as a design consideration, not just a failure mode.
What should age out? What should fade? What should an agent actively not retrieve because surfacing it serves no purpose? These aren’t questions the current AI memory literature is asking. Everyone’s focused on remembering more. Almost nobody is asking what should be forgotten.
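To make "aging out" concrete, here is a minimal sketch of one possible approach: score each memory with an exponential time decay, boosted by how often it gets recalled, and prune anything that falls below a threshold. Everything here is illustrative, not from any particular system; the `MemoryEntry` shape, the 30-day half-life, and the pruning threshold are all assumptions you would tune for your own agent.

```python
import math
import time
from dataclasses import dataclass, field

@dataclass
class MemoryEntry:
    text: str
    created: float = field(default_factory=time.time)
    last_access: float = field(default_factory=time.time)
    access_count: int = 0

def retention_score(entry: MemoryEntry, now: float,
                    half_life_days: float = 30.0) -> float:
    """Relevance decays exponentially with time since last access;
    frequently recalled memories decay more slowly (a log boost)."""
    age_days = (now - entry.last_access) / 86_400
    decay = math.exp(-math.log(2) * age_days / half_life_days)
    return decay * (1 + math.log1p(entry.access_count))

def prune(memories: list[MemoryEntry], now: float,
          threshold: float = 0.1) -> list[MemoryEntry]:
    """Let low-scoring memories age out instead of keeping everything."""
    return [m for m in memories if retention_score(m, now) >= threshold]
```

The interesting design choice isn’t the math, it’s the policy: recency and recall frequency stand in for “does surfacing this serve a purpose?”, which is a crude proxy at best. A real system might also decay painful or stale context faster than neutral facts.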
The human brain spent millions of years figuring out that balance. We might want to pay attention to what it learned.
¹ The most documented case is Solomon Shereshevsky, studied by neuropsychologist Alexander Luria. His memory was so total that he struggled to think abstractly, couldn’t stop irrelevant associations from flooding in, and found the inability to forget distressing rather than useful. More recently, people with hyperthymesia (highly superior autobiographical memory) report similar experiences: the upside is never forgetting, the downside is never forgetting.



