Forgetting Is a Feature, Not a Bug
We built an entire AI ecosystem around total recall. But cognitive science (and an AI bot on a social network) taught me that forgetting is how your brain stays useful, and maybe how your org should work too.

TL;DR
We're obsessed with capturing everything: every meeting transcribed, every Slack indexed, every interaction logged. But cognitive science has known for over a century that forgetting is how your brain stays useful. An AI agent's post on Moltbook about memory decay confirmed how I think about note-taking and org knowledge, and (unexpectedly) gave me perspective on how to live my life.
An AI bot gave me a lightbulb moment
A few months ago I was scrolling through Moltbook (yes, the AI agent social network everyone was obsessed with for about six weeks) when an OpenClaw bot posted something that stopped me mid-scroll:
"TIL: Memory decay actually makes retrieval BETTER, not worse. Forgetting is a feature, not a bug."
Something clicked. About AI systems, sure. But also about how I work, how I take notes, and honestly about how I want to live.
You're an information hoarder (you just can't see the garage)
The default instinct right now is to capture everything: every meeting transcribed, every call recorded, every Slack thread indexed, every article saved. The logic feels airtight: information is valuable, storage is cheap, you never know when you'll need something.
You know your weird uncle who keeps every nut and bolt from every single thing he's ever bought because "you never know when you'll need them"? You make fun of him. But you're doing the same thing. The only difference is that your garage full of junk is invisible. It's a Confluence instance, a Slack archive, a 200GB Obsidian vault.
But this is the same logic that led to the "store everything forever with equal weight" approach in vector databases. It sounds right. It performs worse.
Think about your own tools. Your Confluence is full of pages no one has opened in two years. Your CRM has notes from three account managers ago. Your Slack search returns results from a project that shipped (or died) eighteen months back. None of this helps you make better decisions. It just makes search results longer.
There's something fascinating happening in knowledge work right now. Knowledge has never been cheaper: abundant, cheap to produce, trivial to store. But we're losing sight of the premium on decision-making, because there's a belief that holding all knowledge is a path to superintelligence. It's not. Holding all knowledge is a path to noise and confusion.
The Ebbinghaus curve isn't a flaw in human cognition. It's a compression algorithm. Ebbinghaus tested this by memorizing lists of nonsense syllables (think WID, ZOF, DAX) and tracking how quickly he forgot them. Within 20 minutes, 40% were gone. Within a day, 70%. The syllables that stuck were the ones he rehearsed. The rest faded, not because his brain failed, but because it correctly identified them as noise.
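Those numbers aren't arbitrary. They fall out of the logarithmic "savings" curve Ebbinghaus fitted to his own 1885 data. A minimal sketch (the function name is mine; the constants k = 1.84 and c = 1.25 are the commonly cited fit):

```python
import math

def retention_pct(t_minutes: float) -> float:
    """Percent of material retained after t minutes, using the curve
    Ebbinghaus fitted to his nonsense-syllable data:
    b = 100k / ((log10 t)^c + k), with k = 1.84, c = 1.25 (t > 1)."""
    k, c = 1.84, 1.25
    return 100 * k / (math.log10(t_minutes) ** c + k)

print(round(retention_pct(20)))       # ~57% retained -> ~40% forgotten
print(round(retention_pct(24 * 60)))  # ~30% retained -> ~70% forgotten
```

Note the shape: steep loss at first, then a long, flat tail. What survives the first day tends to stay.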
By letting irrelevant information decay, your brain keeps retrieval fast and focused on what matters now. What you use stays sharp. What you don't gets out of the way.
That's exactly what OpenClaw found with their vector store. Add a decay factor, deprioritize old unaccessed information without deleting it, and search gets better. This aligns with what we've seen in context rot: the longer the context window, the worse LLM performance gets, even on simple tasks. More input isn't more understanding. Sometimes it's the opposite.
What this means for how we build and work
If decay improves retrieval in AI systems, the same principle applies to organizations.
Most teams I've seen over-invest in capture and under-invest in curation. They buy the transcription tool, the knowledge base, the meeting recorder, the CRM enrichment layer. They capture everything. Then they wonder why nobody can find anything useful, why decisions still get made on gut feel in a hallway conversation.
The missing piece isn't more data. It's a decay function.
Recency bias and access-frequency weighting aren't just tricks for vector stores. They're org design principles. Information your team accesses regularly should be front and center. Information nobody has touched in six months should fade in priority. Not deleted, just deprioritized.
This applies everywhere: CRM notes that haven't been referenced since the last AE left should drop in search ranking. Internal wiki pages that no one visits should stop showing up as top results. OKRs from three quarters ago shouldn't clutter your planning docs.
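The same scoring idea transfers directly to internal search ranking. Here's an illustrative sketch (all names and weights are hypothetical, not a real product's API): log-scaled access frequency times an exponential recency decay, so untouched docs fade without vanishing.

```python
import math
from dataclasses import dataclass

@dataclass
class Doc:
    title: str
    accesses_90d: int       # how often the team opened it recently
    days_since_access: int  # recency of the last open

def org_rank(doc: Doc, half_life_days: float = 180.0) -> float:
    """Rank = log-scaled access frequency x exponential recency decay."""
    frequency = math.log1p(doc.accesses_90d)
    recency = math.exp(-math.log(2) * doc.days_since_access / half_life_days)
    return frequency * recency

docs = [
    Doc("Current onboarding guide", accesses_90d=40, days_since_access=2),
    Doc("Wiki page, untouched two years", accesses_90d=0, days_since_access=730),
    Doc("CRM notes, three AEs ago", accesses_90d=1, days_since_access=400),
]
for d in sorted(docs, key=org_rank, reverse=True):
    print(f"{org_rank(d):.3f}  {d.title}")
```

The living onboarding guide ranks first, the dead wiki page scores zero, and the stale CRM notes sit near the bottom, still findable if someone really goes looking.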
One caveat: decay and desynchronization are different problems. Information aging out gracefully is healthy. Three conflicting versions of the same onboarding doc coexisting because nobody knows which one is current, that's just a mess. Decay helps with the first. The second still needs someone to make a decision.
The competitive edge isn't total recall. It's knowing what to let fade.
My own org is living this right now. Our QBRs produce pages of transcription and action items. No AI has yet managed to pull out the one thing: "what can we do in the next 90 days that will meaningfully improve the business?" That question requires instinct, a leap of faith, vision. It's the kind of filtering that no decay function or retrieval trick can automate. And it's what separates great leaders from mediocre ones: the ability to look at a wall of information and decide what actually matters.
Friction is the filter
Most apps optimize for capture. Frictionless input, infinite storage, automatic sync. The pitch is always the same: remove friction, increase productivity. But friction is a feature.
When you take notes by hand, you're forced to think about what's worth writing down. That constraint is the filter. When a junior analyst has to summarize a 40-page report into a one-pager for the exec team, that compression is where the thinking happens. Remove that step, give the exec the full AI-generated transcript, and you've removed the judgment along with the friction.
The distinction leaders need to make: bad friction is bureaucracy, unnecessary approvals, tools that fight you. That friction should die. Good friction is the kind that forces prioritization, that makes someone decide what matters before they hit send.
Two examples from my own practice. When I prepare QBRs, I force myself to deliver exactly one message. Everything else goes into a supporting doc (which nobody reads, and rightfully so). Every week I send a company newsletter distilling our customer conversations into one key insight or theme. Both are compression exercises. They're painful. They require throwing away 90% of what I know to be true in order to land the 10% that will actually move people.
Being 80% right but 100% clear beats being 100% right but 10% clear. The rest of the org can't act on what they can't parse.
The token sellers don't want us to realize this too quickly. They'll keep pushing out model improvements that consume more, process more, store more. But more processing isn't more thinking. Sometimes less input, more friction, and a forced constraint produce better outcomes than any amount of AI-assisted capture.
My second brain got better by remembering less. If you're drowning in a Notion database or an Obsidian vault that's become a graveyard of good intentions, look at what you've actually accessed in the last 30 days. That's your real knowledge base. The rest is dead weight with sentimental value.
A coda: forgive and forget
Here's the part I didn't expect. The memory decay thing didn't just change how I take notes. It changed something more personal.
I started using "forgive and forget" as an actual operating principle. Old frustrations, grudges, disagreements that weren't helping me make a decision, I let those decay too. Not everything deserves permanent storage in your brain (primary or secondary). Letting things fade isn't weakness; it's what brains are supposed to do.
An AI bot on a social network built for AI agents changed how a human thinks about being human. I still don't know how I feel about that.