<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[0x4A6F65 (Joe)]]></title><description><![CDATA[AI isn't magic — it's psychology. Breaking down what happens below the prompt. 🧠⚡]]></description><link>https://www.0x4a6f65.com</link><image><url>https://substackcdn.com/image/fetch/$s_!vL3M!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13a43180-483a-4613-9dd7-789549af0a18_763x763.png</url><title>0x4A6F65 (Joe)</title><link>https://www.0x4a6f65.com</link></image><generator>Substack</generator><lastBuildDate>Mon, 06 Apr 2026 21:06:22 GMT</lastBuildDate><atom:link href="https://www.0x4a6f65.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Joe Spano]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[0x4a6f65@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[0x4a6f65@substack.com]]></itunes:email><itunes:name><![CDATA[Joe Spano]]></itunes:name></itunes:owner><itunes:author><![CDATA[Joe Spano]]></itunes:author><googleplay:owner><![CDATA[0x4a6f65@substack.com]]></googleplay:owner><googleplay:email><![CDATA[0x4a6f65@substack.com]]></googleplay:email><googleplay:author><![CDATA[Joe Spano]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[One Compression Ratio Doesn’t Fit All]]></title><description><![CDATA[Your brain runs three memory systems. Your AI agent runs one. 
That&#8217;s the bug.]]></description><link>https://www.0x4a6f65.com/p/one-compression-ratio-doesnt-fit</link><guid isPermaLink="false">https://www.0x4a6f65.com/p/one-compression-ratio-doesnt-fit</guid><dc:creator><![CDATA[Joe Spano]]></dc:creator><pubDate>Fri, 03 Apr 2026 14:32:30 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!dmLt!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F98cc5b75-751a-4c0a-a6b6-c9415731a8bc_1408x768.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Your brain runs three memory systems because a childhood birthday party and the periodic table serve completely different cognitive purposes. Your AI agent runs one. That&#8217;s the bug<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>.</p><p>Let&#8217;s start with a scenario. You&#8217;re packing for a two-week trip. One suitcase. You can&#8217;t bring everything.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!dmLt!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F98cc5b75-751a-4c0a-a6b6-c9415731a8bc_1408x768.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!dmLt!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F98cc5b75-751a-4c0a-a6b6-c9415731a8bc_1408x768.jpeg 424w, https://substackcdn.com/image/fetch/$s_!dmLt!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F98cc5b75-751a-4c0a-a6b6-c9415731a8bc_1408x768.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!dmLt!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F98cc5b75-751a-4c0a-a6b6-c9415731a8bc_1408x768.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!dmLt!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F98cc5b75-751a-4c0a-a6b6-c9415731a8bc_1408x768.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!dmLt!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F98cc5b75-751a-4c0a-a6b6-c9415731a8bc_1408x768.jpeg" width="483" height="263.45454545454544" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/98cc5b75-751a-4c0a-a6b6-c9415731a8bc_1408x768.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1408,&quot;resizeWidth&quot;:483,&quot;bytes&quot;:752857,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.0x4a6f65.com/i/193070144?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F98cc5b75-751a-4c0a-a6b6-c9415731a8bc_1408x768.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!dmLt!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F98cc5b75-751a-4c0a-a6b6-c9415731a8bc_1408x768.jpeg 424w, 
https://substackcdn.com/image/fetch/$s_!dmLt!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F98cc5b75-751a-4c0a-a6b6-c9415731a8bc_1408x768.jpeg 848w, https://substackcdn.com/image/fetch/$s_!dmLt!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F98cc5b75-751a-4c0a-a6b6-c9415731a8bc_1408x768.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!dmLt!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F98cc5b75-751a-4c0a-a6b6-c9415731a8bc_1408x768.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>The obvious move is to start eliminating: formal wear you probably won&#8217;t need, that third pair of shoes, the book you optimistically think you&#8217;ll finish<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a>. You compress your wardrobe down to what fits.</p><p>But here&#8217;s what you don&#8217;t do: you don&#8217;t compress everything the same way. You don&#8217;t fold your suits the same way you roll your t-shirts. You don&#8217;t pack your toiletries the same way you pack your laptop. Different items get different treatment based on fragility, frequency of use, and how badly things go if you get it wrong.</p><p>Packing is compression with awareness of what you&#8217;re compressing and why. Your agent is the person who puts everything in a trash bag and sits on it.</p><p>Now think about how AI agents compress context. A conversation gets too long, so the system summarizes it. Maybe it takes the last 50 turns and condenses them into a paragraph. Uniform compression. Everything treated the same.</p><p>The critical decision from turn 12? Compressed to one line. The casual banter from turn 30? Also compressed to one line. The error message the agent got from a tool call in turn 8, which will turn out to be important in turn 60? Gone. Summarized away because the compression didn&#8217;t know it mattered.</p><p>The agent packed the suitcase by just squeezing everything equally. The suit got wrinkled. The laptop got crushed. The toiletries leaked all over everything. And now your shampoo-soaked laptop is confidently hallucinating answers about a decision you made three days ago. Have a nice trip.</p><p>So what is your brain doing? 
The brain doesn&#8217;t recall a childhood memory with the same fidelity as what you had for breakfast<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a>. Different retrieval modes for different purposes. This is one of the most well-established findings in cognitive psychology, and agent builders keep ignoring it<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a>.</p><ul><li><p><strong>Gist memory</strong> preserves the meaning without the details. You remember your high school graduation happened and it was hot outside and your uncle gave a weird toast that somehow involved both Jesus and cryptocurrency<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a>. You don&#8217;t remember the exact words, the seating arrangement, or what you ate. The essence survived. The specifics degraded. And that&#8217;s <em>fine</em>, because the purpose of that memory is narrative continuity, not forensic reconstruction.</p></li><li><p><strong>Episodic memory</strong> preserves the experience. Sensory details, emotions, temporal context. Your first car accident. The phone call when someone died. The moment you solved a problem that had been haunting you for weeks. These memories are expensive to maintain, and the brain is selective about which experiences get this treatment. Usually: emotionally significant, surprising, or consequential events. Not routine ones.</p></li><li><p><strong>Semantic memory</strong> preserves facts stripped of context. You know that Paris is the capital of France. You almost certainly don&#8217;t remember the lesson, the book, or the conversation where you learned it. The original episode is gone. The extracted fact remains, compressed to its essence and integrated into your general knowledge. 
Your brain ran <code>rm -rf</code> on the source material and kept the output. Ruthless, efficient, and slightly terrifying when you think about it too long.</p></li></ul><p>Three systems. Three compression ratios. One brain. Millions of years of evolution to get here. And we&#8217;re over here trying to replicate it with a single <code>summarize()</code> call. Bold strategy, Cotton.</p><p>The gist system runs lossy compression at high ratios. Keep the shape, lose the texture. Good enough for most autobiographical memory, and it costs almost nothing to maintain.</p><p>The episodic system runs nearly lossless compression at low ratios. Keep everything. Expensive, limited capacity, reserved for moments that matter.</p><p>The semantic system runs <em>extractive</em> compression. Don&#8217;t summarize the experience. Extract the fact, discard the wrapper entirely. The most aggressive compression possible, and the most useful for downstream reasoning.</p><p>Your brain doesn&#8217;t choose one approach. It runs all three simultaneously, routes different information to different systems based on significance, and retrieves from each system differently depending on the current need.</p><p>Most agent systems that do any compression at all use a single approach: take the old context, ask a model to summarize it, replace the original with the summary. One pass, one compression ratio, applied uniformly.</p><p>This breaks in predictable ways. So predictable, in fact, that I could write a summary of how it breaks, and the irony of that would be lost on exactly the systems we&#8217;re discussing.</p><ul><li><p><strong>Decision context gets flattened.</strong> &#8220;After extensive debugging, we determined the issue was a race condition in the WebSocket handler and fixed it by adding a mutex&#8221; is a fine summary. But it loses the <em>reasoning</em>. Why did you suspect the WebSocket handler? What other hypotheses did you eliminate? What did the error look like? 
If the same bug comes back in a slightly different form, the summary won&#8217;t help. The episodic detail would have.</p></li><li><p><strong>Preferences and patterns get lost.</strong> &#8220;We discussed the UI and made some changes&#8221; erases the nuance that the user hates modals, prefers inline editing, and asked three times for better keyboard shortcuts. These aren&#8217;t facts about the UI. They&#8217;re facts about the <em>user</em>. They should be extracted as semantic knowledge, not summarized as episodes.</p></li><li><p><strong>Emotional context evaporates.</strong> The user was frustrated when they said &#8220;just make it work.&#8221; They were excited when they said &#8220;let&#8217;s try something crazy.&#8221; These signals matter for how the agent should respond next time. Uniform summarization strips emotional metadata because it treats all text as informational. It&#8217;s like reading the transcript of a fight you had with your partner. The words are there. The tone that would have kept you alive is completely absent<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a>.</p></li><li><p><strong>Temporal relationships disappear.</strong> &#8220;This happened after that&#8221; and &#8220;this was caused by that&#8221; get collapsed into a flat summary where everything is equally present-tense. But the order mattered. The causal chain mattered. Compression killed the timeline.</p></li></ul><p>The common thread: uniform compression assumes all information serves the same purpose. It doesn&#8217;t. And the mismatches between compression strategy and information type produce errors that are invisible in the moment and expensive later. You don&#8217;t notice the problem until you need the thing that got compressed away. 
By then, your agent is already two paragraphs deep into a confident answer built on vibes.</p><p>If one compression ratio doesn&#8217;t fit all, what does a multi-strategy approach actually look like? Without getting into full architecture, the principles are clear:</p><ul><li><p><strong>Classify before compressing.</strong> Before you summarize anything, you need to know what kind of information it is. Is this a decision? An observation? A preference? An error? A creative idea? A social interaction? Different classifications need different compression strategies. This seems obvious. &#8220;Know what you have before you decide what to do with it&#8221; is the kind of advice your parents gave you about the fridge. And yet.</p></li><li><p><strong>Match strategy to purpose.</strong> Decisions get episodic treatment (preserve the reasoning, the alternatives considered, the rationale). Facts get semantic treatment (extract the fact, discard the episode). Routine interactions get gist treatment (keep the shape, lose the specifics). Errors get almost lossless treatment (you&#8217;ll need the details when it happens again).</p></li><li><p><strong>Compress at different rates.</strong> Not everything needs to be compressed at the same time. Some context is &#8220;hot&#8221; (actively relevant, should stay full fidelity). Some is &#8220;warm&#8221; (recently relevant, worth keeping in compressed form). Some is &#8220;cold&#8221; (not currently relevant, can be aggressively compressed or offloaded to external storage). This mirrors how the brain treats memory consolidation during sleep: recent memories get replayed, strengthened or pruned, and reorganized based on significance.</p></li><li><p><strong>Preserve the metadata that compression wants to kill.</strong> Timestamps. Emotional register. Causal links. Confidence levels. These are the first things that get stripped in a summary and the hardest things to reconstruct later. 
A good compression strategy preserves metadata even when it compresses content. Think of it like cooking: you can reduce a sauce, but if you boil off all the wine, you just have tomato paste with regret.</p></li></ul><p>Here&#8217;s another thought experiment that makes the compression issue concrete. Stay with me: there&#8217;s math, but the kind where you nod along and trust the conclusion.</p><p>An AI agent has had 100 conversations with a user over the past month. Each conversation is about 50 turns. That&#8217;s 5,000 turns of interaction, maybe 2 million tokens of raw text. No context window in the world holds all of that.</p><p>So you compress. The question is how, and there are two distinct approaches: uniform or multi-strategy.</p><p>In the uniform approach, you summarize each conversation into a single paragraph. That gives you 100 paragraphs, maybe 15,000 tokens. It fits the context window, but it loses the thread. You can&#8217;t distinguish important conversations from routine ones. The summary of a life-changing career discussion is the same length as the summary of a &#8220;what&#8217;s the weather&#8221; exchange: 150 tokens each. Democracy in action, except nobody asked for it.</p><p>The multi-strategy approach allocates fidelity instead. The five conversations where major decisions were made get episodic treatment: the full reasoning chains are preserved, roughly 2,000 tokens each.</p><p>The 30 conversations about ongoing projects get gist treatment: key outcomes and current status, roughly 200 tokens each.</p><p>The 20 conversations that were casual chat get semantic extraction only: any new preferences, facts, or relationship dynamics are pulled out and stored as structured facts, roughly 50 tokens each.</p><p>The 45 routine conversations (quick questions, tool usage, status checks) get log-level compression: a single line like &#8220;45 tool-use sessions, primary topics: deployment, code review, email,&#8221; about 100 tokens in total.</p><p>Same 100 conversations. Dramatically different output. The multi-strategy approach actually spends slightly more tokens (about 17,000 vs 15,000, surprisingly close) but preserves dramatically more <em>useful</em> information because it allocated fidelity where it mattered. It&#8217;s not about using less space. It&#8217;s about using the space on the right things. A concept that applies to context windows, suitcases, and apparently not to the way I organize my garage<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a>.</p><p>The agent remembers the important stuff in detail and the routine stuff in outline. Like a brain.</p><p>Like <em>your</em> brain. You remember yesterday&#8217;s breakfast (if it was unusual) or you don&#8217;t (if it was the same thing you always eat). That&#8217;s not forgetfulness. That&#8217;s intelligent compression. The system is working correctly. Your brain decided that your 4,000th bowl of cereal didn&#8217;t deserve a full neural trace, and it was right. If anything, the fact that you&#8217;re still eating the same cereal is the real problem here, but that&#8217;s between you and your therapist.</p><p>If you&#8217;ve been following this series, a pattern is emerging. The context window is valuable real estate, so stop treating it like a junk drawer. Retrieval needs purpose, not just similarity. 
This post emphasizes that compression requires strategy, not just summarization.</p><p>They&#8217;re all pointing at the same thing: <strong>the intelligence of an agent system isn&#8217;t just in the model. It&#8217;s in the pipeline that decides what the model gets to see.</strong> The model is the brain. The pipeline is the body<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-8" href="#footnote-8" target="_self">8</a>. And we&#8217;ve been building brains without bodies. It&#8217;s giving &#8220;brain in a jar&#8221; energy, except the jar is a 500K token context window filled with uncompressed tool output from six tasks ago.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>It&#8217;s like discovering your self-driving car has one gear. Sure, it technically works. But also, no.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>You were never going to finish that book. You haven&#8217;t finished a book on vacation since 2017. The Kindle is a prop. Be honest with yourself.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>Unless breakfast was a disaster. 
You will remember the morning you accidentally put salt in your coffee until the heat death of the universe.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>Tulving&#8217;s (1972) distinction between episodic and semantic memory. Brainerd and Reyna&#8217;s fuzzy-trace theory for gist vs. verbatim. Squire&#8217;s taxonomy of long-term memory systems. This is textbook cognitive psychology, not cutting-edge speculation. The neuroscience has been here for decades. Agent architecture is just now catching up. As usual, the CS people are speedrunning discoveries that psych figured out during the Carter administration.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>Everyone has this uncle. If you don&#8217;t have this uncle, you might be this uncle.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>Anyone who has ever received a text that just says &#8220;fine&#8221; knows exactly what I mean. There are fourteen different meanings of &#8220;fine&#8221; and only one of them is actually fine.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>My garage is the physical manifestation of uniform compression. Everything equally accessible, which means nothing is findable. 
I haven&#8217;t seen my leaf blower since October.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-8" href="#footnote-anchor-8" class="footnote-number" contenteditable="false" target="_self">8</a><div class="footnote-content"><p>You were probably thinking about robot bodies. We&#8217;re not there yet. Focus on this part.</p></div></div>]]></content:encoded></item><item><title><![CDATA[The Retrieval Problem That Is Ignored]]></title><description><![CDATA[Everyone&#8217;s building better memory systems. Almost nobody is asking the harder question: how do you know what to remember right now?]]></description><link>https://www.0x4a6f65.com/p/the-retrieval-problem-that-is-ignored</link><guid isPermaLink="false">https://www.0x4a6f65.com/p/the-retrieval-problem-that-is-ignored</guid><dc:creator><![CDATA[Joe Spano]]></dc:creator><pubDate>Wed, 01 Apr 2026 13:24:43 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!XfD6!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7cf29901-855d-4be7-856b-ea583f27109a_1408x768.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Imagine a library with every book ever written. Floor to ceiling, aisle after aisle. Perfect catalog. Instant search. You can find any book in under a second.</p><p>Now imagine you walk in and someone says: &#8220;Write a good essay.&#8221;</p><p>That&#8217;s it. No topic and no rules. No sense of what you already know or what gaps you need to fill. Just write a &#8220;good&#8221; essay, and here&#8217;s every book ever written to help you.</p><p>You&#8217;d drown. Not because the library is bad, but because access to everything is functionally the same as access to nothing when you don&#8217;t know what you need.</p><p>This is the state of AI agent memory in 2026.</p><p>We&#8217;ve gotten remarkably good at storage. 
Vector databases are fast. Embeddings are cheap. RAG pipelines can chunk, index, and retrieve documents at scale. Semantic search actually works now. If you ask &#8220;what did we discuss about the deployment last Tuesday,&#8221; the system will probably find it.</p><p>The storage problem is largely solved. The retrieval problem is wide open.</p><p>And it&#8217;s not the retrieval problem you think it is<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!XfD6!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7cf29901-855d-4be7-856b-ea583f27109a_1408x768.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!XfD6!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7cf29901-855d-4be7-856b-ea583f27109a_1408x768.jpeg 424w, https://substackcdn.com/image/fetch/$s_!XfD6!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7cf29901-855d-4be7-856b-ea583f27109a_1408x768.jpeg 848w, https://substackcdn.com/image/fetch/$s_!XfD6!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7cf29901-855d-4be7-856b-ea583f27109a_1408x768.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!XfD6!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7cf29901-855d-4be7-856b-ea583f27109a_1408x768.jpeg 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!XfD6!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7cf29901-855d-4be7-856b-ea583f27109a_1408x768.jpeg" width="626" height="341.45454545454544" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7cf29901-855d-4be7-856b-ea583f27109a_1408x768.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1408,&quot;resizeWidth&quot;:626,&quot;bytes&quot;:961842,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.0x4a6f65.com/i/192611727?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7cf29901-855d-4be7-856b-ea583f27109a_1408x768.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!XfD6!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7cf29901-855d-4be7-856b-ea583f27109a_1408x768.jpeg 424w, https://substackcdn.com/image/fetch/$s_!XfD6!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7cf29901-855d-4be7-856b-ea583f27109a_1408x768.jpeg 848w, https://substackcdn.com/image/fetch/$s_!XfD6!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7cf29901-855d-4be7-856b-ea583f27109a_1408x768.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!XfD6!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7cf29901-855d-4be7-856b-ea583f27109a_1408x768.jpeg 
1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>Here&#8217;s a distinction that matters more than it sounds: <strong>searching for information and remembering information are fundamentally different cognitive acts.</strong></p><p>When you search, you already know what you&#8217;re looking for. You have a query. You type it in. Boom, results come back (and usually some ads). 
This is exactly what RAG does (minus the ads<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a>), and it does it well.</p><p>When you remember, something <em>comes to you</em>. You&#8217;re working on a problem, and suddenly a relevant experience surfaces. Nobody queried it and you didn&#8217;t provide a search term. Your brain pattern-matched the current situation against stored experience and proactively served up the relevant bit.</p><p>The difference matters because most agent memory systems only do the first thing. The agent can search when told to search. It can retrieve when given a query. But it doesn&#8217;t <em>remember</em>. It doesn&#8217;t proactively surface relevant context based on what&#8217;s happening right now.</p><p>A developer sits down to debug a deployment issue. Before they&#8217;ve even opened the logs, they&#8217;re already thinking about the last three deployment issues they dealt with. The one that turned out to be a DNS thing. The one that was a permissions issue after a config change. Their brain is already loading relevant context, unprompted, because the <em>situation</em> triggered retrieval.</p><p>Your agent starts every debugging session from scratch. It has the same information stored somewhere. It just doesn&#8217;t know it&#8217;s relevant until someone explicitly asks.</p><p>Every retrieval decision involves three questions:</p><p><strong>1. What to retrieve?</strong><br>This appears straightforward, but it&#8217;s not. The term &#8220;relevant information&#8221; is circular. Relevant to what? The current message? The current task? The broader project? The user&#8217;s emotional state? A frustrated user asking &#8220;why doesn&#8217;t this work&#8221; requires different context than a curious user asking the same words.</p><p><strong>2. At what fidelity?</strong><br>Do you need the full conversation from last Tuesday, or just the conclusion? 
Do you need the raw API response or the summary? Do you need the entire project history or just the last three decisions? Fidelity has a cost. Every token of retrieved context competes with every other token for the model&#8217;s attention. Over-retrieving is almost as bad as under-retrieving because it buries the signal in noise<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a>.</p><p><strong>3. When to retrieve?</strong><br>This is the one nobody talks about. Most systems retrieve at a fixed point: user sends message, system searches memory, results get stuffed into context. But that&#8217;s like only checking your rearview mirror when someone honks. The best time to retrieve context is often <em>before</em> the user asks, based on what&#8217;s happening in the conversation. The task changed. A new entity appeared. The emotional register shifted. These are all retrieval triggers that get ignored because the system only retrieves on explicit query.</p><p>Every current memory system I&#8217;ve looked at answers question 1 with semantic similarity (which is fine for simple cases), ignores question 2 entirely (everything comes back at full fidelity), and answers question 3 with &#8220;whenever the user sends a message&#8221; (which misses most of the interesting moments).</p><p>In the last post, I talked about the student who highlights every line in the textbook<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a>. That metaphor extends here in an uncomfortable way.</p><p>The highlighted-everything student has technically performed retrieval. They marked both the &#8220;important&#8221; and &#8220;less important&#8221; parts, as they had no framework for determining what mattered (or were simply too lazy to do so).</p><p>Now give that student a study guide. &#8220;The exam covers chapters 3, 7, and 12. 
Focus on the relationship between X and Y. Expect one essay question on Z.&#8221;</p><p>Suddenly they know what to highlight. The study guide didn&#8217;t add any new information; it added <em>purpose</em> to the retrieval process. It told them what they&#8217;re retrieving <em>for</em>. This is the missing piece in agent memory. Not better search or more storage. Purpose-driven retrieval. The system needs to know what the agent is trying to do <em>before</em> it decides what context to fetch.</p><p>Without that, you&#8217;re building bigger and faster libraries and wondering why the essays aren&#8217;t getting better.</p><p>Let&#8217;s go another layer deep into where current approaches break down. Semantic search works by converting text to vectors and finding nearby vectors. &#8220;How do I deploy to production?&#8221; and &#8220;production deployment steps&#8221; are semantically close, so the system surfaces the right doc. Amazing!</p><p>Consider this scenario: you&#8217;re debugging a failing deployment, and the actually relevant memory is a conversation from two weeks ago where your teammate mentioned changing the SSL certificate rotation schedule. This conversation wasn&#8217;t about deployments; it was about security maintenance. Semantically, it&#8217;s distant from your current query, but causally, it&#8217;s the answer.</p><p>Human memory handles this through <em>associative</em> retrieval. The connection isn&#8217;t semantic similarity. It&#8217;s causal, temporal, or experiential proximity. &#8220;Last time the deploys broke, it was because someone changed something in the security config&#8221; is an association built from experience, not from vector distance.</p><p>Current RAG systems can&#8217;t make that connection because they only know about similarity. They don&#8217;t model cause and effect. 
They don&#8217;t track &#8220;this thing happened after that thing&#8221; or &#8220;this problem was caused by that change.&#8221; They match words, not experiences.</p><p>This isn&#8217;t a criticism of RAG; it&#8217;s a boundary condition. RAG is a retrieval mechanism, while memory is a retrieval behavior. The mechanism is just one component of the behavior, not the whole thing.</p><p>In emergency medicine, there&#8217;s a concept called triage. When patients arrive at the ER, they don&#8217;t get treated based on who arrived first. Instead, they&#8217;re assessed, categorized by urgency, and routed appropriately. A gunshot wound will be treated before a sprained ankle, regardless of who entered the ER first. </p><p>Context retrieval for AI agents needs the same logic. Not all information is equally urgent, and not all of it should be processed simultaneously. Additionally, the priority order changes depending on the current task. What is the metaphorical gunshot wound for this task?<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a></p><p>For example, when an agent is writing code, the project architecture and recent commits are high priority. However, when the agent is having a casual conversation, those priorities flip. Similarly, when the agent is debugging, error context and recent changes become the top priority, while long-term project vision drops to near zero.</p><p>Static retrieval (same query, same results, regardless of task context) can&#8217;t do this. You need something upstream of retrieval that understands what the agent is doing and routes the retrieval process accordingly.</p><p>In medicine, triage happens before treatment. In agent systems, retrieval triage should happen before the model sees anything. But almost nobody builds it that way.</p><p>I keep ending these posts without giving you the full answer, and I know that&#8217;s annoying. 
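But to make the triage idea concrete, here is a minimal sketch of what it could look like in code. Everything here is hypothetical, the names (MemoryItem, TASK_PRIORITIES, triage) and the weights are made up for illustration, not taken from any real library, but it shows the core move: re-rank whatever semantic search returns by task-specific urgency before the model sees anything.

```python
# Hypothetical retrieval-triage sketch. All names and weights are illustrative.
from dataclasses import dataclass

@dataclass
class MemoryItem:
    kind: str          # e.g. "error_context", "recent_commits", "project_vision"
    content: str
    similarity: float  # semantic score from the underlying search, 0..1

# Task-dependent weights: what counts as the metaphorical gunshot wound
# changes depending on what the agent is currently doing.
TASK_PRIORITIES = {
    "coding":    {"project_architecture": 1.0, "recent_commits": 0.9, "project_vision": 0.3},
    "debugging": {"error_context": 1.0, "recent_changes": 0.9, "project_vision": 0.05},
    "chat":      {"user_preferences": 1.0, "project_architecture": 0.2},
}

def triage(task, candidates, budget=3):
    """Keep only the `budget` most urgent items for this task."""
    weights = TASK_PRIORITIES.get(task, {})
    ranked = sorted(
        candidates,
        key=lambda m: weights.get(m.kind, 0.1) * m.similarity,  # urgency x relevance
        reverse=True,
    )
    return ranked[:budget]
```

Same candidates, different task, different ranking. That re-weighting step is exactly what static retrieval (same query, same results) can never do.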
But the shape of it should be getting clearer:</p><ul><li><p><strong>The model isn&#8217;t the bottleneck. The context is.</strong> A frontier model with noisy context will underperform a mid-tier model with clean, purposeful context. We&#8217;re spending billions making the model smarter and almost nothing making the context better.</p></li><li><p><strong>Retrieval is a multi-step decision, not a single query.</strong> What, at what fidelity, and when. Most systems handle one of the three. The gap between one and three is where quality lives.</p></li><li><p><strong>Storage is solved. Selection is the new problem.</strong> Better vector stores and bigger knowledge bases are important infrastructure. But the differentiator going forward is what you do with all that stored information. How you select. How you compress. How you time the retrieval.</p></li></ul><p>The next step is the question of compression. Because once you accept that not all context deserves the same fidelity, you need a theory of how to compress different types of information for different purposes.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>The example of &#8220;what did we discuss about the deployment last Tuesday&#8221; works because it is specific about a time and topic. Try asking it &#8220;what did we discuss about the deployment&#8221; when you have more than one. </p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>For now. 
Looking at you, OpenAI.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>And rips through your token spend.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>If you didn&#8217;t read the first post: &#8220;<a href="https://www.0x4a6f65.com/p/your-agents-context-window-is-not">Your Agent&#8217;s Context Window Is Not a Junk Drawer.</a>&#8221; The thesis is that dumping everything into context and letting the model sort it out is the agent equivalent of highlighting the entire textbook. Go read it, I&#8217;ll wait.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>This is likely a more American example, but everyone has seen enough movies to get the idea. 
If your life is at risk, you get priority over the person who has mistaken indigestion for appendicitis.</p></div></div>]]></content:encoded></item><item><title><![CDATA[The AI I Was Promised]]></title><description><![CDATA[How the dream of human-AI collaboration got hijacked, and why I&#8217;m taking it back]]></description><link>https://www.0x4a6f65.com/p/the-ai-i-was-promised</link><guid isPermaLink="false">https://www.0x4a6f65.com/p/the-ai-i-was-promised</guid><dc:creator><![CDATA[Joe Spano]]></dc:creator><pubDate>Thu, 26 Mar 2026 12:23:37 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!F_OI!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3c8754d0-4bb8-4560-b36d-a9606504231a_1408x768.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I remember being ten years old, sitting in front of a huge beige box in my room, and having a very specific vision of the future, one that included an AI. One that was already being conceived in movies and TV: sometimes it looked like a robot, sometimes it lived in the screen. I didn&#8217;t know much about this technology, but I knew I wanted it to <em>work with me</em>. We&#8217;d build things together. It would handle the parts I was bad at and teach me the things I didn&#8217;t know. 
I&#8217;d handle the parts that required being a human, and between the two of us we&#8217;d be unstoppable, even at 10.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!F_OI!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3c8754d0-4bb8-4560-b36d-a9606504231a_1408x768.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!F_OI!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3c8754d0-4bb8-4560-b36d-a9606504231a_1408x768.jpeg 424w, https://substackcdn.com/image/fetch/$s_!F_OI!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3c8754d0-4bb8-4560-b36d-a9606504231a_1408x768.jpeg 848w, https://substackcdn.com/image/fetch/$s_!F_OI!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3c8754d0-4bb8-4560-b36d-a9606504231a_1408x768.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!F_OI!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3c8754d0-4bb8-4560-b36d-a9606504231a_1408x768.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!F_OI!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3c8754d0-4bb8-4560-b36d-a9606504231a_1408x768.jpeg" width="679" height="370.3636363636364" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3c8754d0-4bb8-4560-b36d-a9606504231a_1408x768.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1408,&quot;resizeWidth&quot;:679,&quot;bytes&quot;:979918,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.0x4a6f65.com/i/191944770?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3c8754d0-4bb8-4560-b36d-a9606504231a_1408x768.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!F_OI!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3c8754d0-4bb8-4560-b36d-a9606504231a_1408x768.jpeg 424w, https://substackcdn.com/image/fetch/$s_!F_OI!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3c8754d0-4bb8-4560-b36d-a9606504231a_1408x768.jpeg 848w, https://substackcdn.com/image/fetch/$s_!F_OI!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3c8754d0-4bb8-4560-b36d-a9606504231a_1408x768.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!F_OI!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3c8754d0-4bb8-4560-b36d-a9606504231a_1408x768.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>That felt like the promise of the future. Every movie and book and sci-fi show I consumed as a kid reinforced it. The computer was the sidekick. The thing that made the human <em>more</em>.</p><p>Thirty years later, I&#8217;m finally living some version of that dream. I have an AI agent that knows my projects, remembers my preferences<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>, helps me ship code, and occasionally writes a first draft that I tear apart and rebuild into something better. One guy and an LLM shipping things that would have taken a team. 
The ten-year-old would be thrilled.</p><p>So why does it feel like I&#8217;m building this <em>against the grain</em> of what the industry wants?</p><p>If you were in tech in the mid-2000s through the early 2010s, you remember the energy. Technology was going to connect people and democratize information. There was an optimism that wasn&#8217;t naive; it was earned. The iPhone had just put a computer in everyone&#8217;s pocket. Wikipedia had just proven that strangers could collaborate on knowledge at scale. The startup mythology was about garage inventors changing the world, not about optimizing engagement metrics to sell ads for mattress companies and dick pills (vertical integration FTW).</p><p>Something shifted in the past decade. If you plot the sentiment of technology coverage over time<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a>, you can practically draw a line around 2012-2014 where the narrative turns. Tech went from &#8220;look what we can build&#8221; to &#8220;look what they&#8217;re doing to us.&#8221;</p><p>The doomer vibes weren&#8217;t irrational. They were a response to a real change in the business model.</p><p>In the 2000s, the dominant tech companies made money by selling you software, hardware, and services. The incentive structure was simple: make something useful, charge money for it. You were the customer. The relationship was honest.</p><p><em>Quick note: this is where I have to be careful not to tell a story that&#8217;s too clean. Google was ad-supported from 2000. Advertising has always shaped media</em>. </p><p>But the &#8220;you are the product&#8221; dynamic didn&#8217;t appear from nowhere in 2012. What changed wasn&#8217;t the business model itself; it was the <em>scale and sophistication</em> of behavioral data collection underneath it. 
A newspaper selling ads next to articles and a platform building a real-time psychological profile of every user to serve them maximally engaging content are not different by degree. They&#8217;re a different species of thing.</p><p>Shoshana Zuboff<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> called this surveillance capitalism. Her thesis is that tech companies discovered predicting and modifying human behavior was more profitable than serving human needs. While it may be contested by some economists who argue she overstates the novelty, the lived experience tracks: the products got free, the behavioral targeting got granular, and the incentive shifted from &#8220;serve the user&#8221; to &#8220;capture the user&#8217;s attention by whatever psychological means necessary&#8221;<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a>.</p><p>That&#8217;s the inflection. Not &#8220;capitalism&#8221; as an abstract force, and not &#8220;too many assholes&#8221; as a demographic problem (though: contributing factor). A specific change in <em>what was being optimized</em> that turned users from customers into raw material.</p><p>AI inherited that extractive mindset wholesale.</p><p>Listen to how AI gets pitched in boardrooms and investor decks. The framing is almost never &#8220;here&#8217;s a tool that makes your people better at their jobs.&#8221; It&#8217;s &#8220;here&#8217;s a system that does their jobs without them.&#8221; McKinsey talks about automating X million jobs, and that framing tells you everything, because it&#8217;s a labor cost story wearing a technology costume.</p><p>The kid in me imagined a collaborator. What keeps getting funded is a replacement, because replacement is what the spreadsheet optimizes for. 
Augmentation requires understanding what humans are actually good at and building around it. Replacement just requires being cheaper.</p><p>Peter Thiel does interviews now where he says he&#8217;s &#8220;not sure about the human race.&#8221; He might mean a dozen things by that: civilizational decline, contrarian provocation for its own sake. Thiel is slippery on purpose. But whatever he means philosophically, the <em>effect</em> in the rooms where AI funding gets allocated is a frame where human capability is a depreciating asset. When that frame meets a spreadsheet, you get &#8220;automate X million jobs&#8221; as the value proposition. Not because anyone decided to be anti-human. Because the math only works one way when your metric is labor cost reduction, more profit for the next quarter.</p><p>It&#8217;s a failure of imagination dressed up as inevitability.</p><p>There&#8217;s a downstream effect of the replacement mindset that already hits close to home for anyone who actually makes things: slop.</p><p>AI-generated slop is everywhere<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a>. Fifty identical blog posts optimized for keywords and generated in seconds. Art that looks like it was rendered by someone who has seen art described but never experienced it. Code that technically compiles but was clearly never reviewed by anyone who cared whether it was good.</p><p>The slop problem is a taste problem. AI didn&#8217;t create people who want to generate fifty mediocre blog posts instead of writing one good one. Those people were already out there cranking out content-farm garbage by hand. AI just gave them a faster printing press and social media reinforced the behavior.</p><p>The printing press didn&#8217;t ruin literature. It created a flood of garbage AND made Shakespeare accessible to everyone. 
Both happened simultaneously, and we&#8217;re in the <s>flood</s> torrential shitstorm phase.</p><p>Let&#8217;s be honest about the limitations of that analogy. The printing press didn&#8217;t just produce garbage; it also facilitated propaganda on a massive scale and contributed to horrific religious wars. The notion that &#8220;both good and bad things happened&#8221; took approximately two centuries to resolve into a net positive outcome. If your livelihood is being displaced right now, the response of &#8220;give it 200 years&#8221; is utterly inadequate. The flood phase will cause real casualties, and I don&#8217;t intend to dismiss that fact. No one should.</p><p>This brings me to defend the original Luddites, who are often misunderstood. Historians like E.P. Thompson have demonstrated that they were not anti-technology<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a>. Instead, they posed a specific question: &#8220;What if the machines were controlled by the workers and utilized to enhance their lives, rather than simply enriching the factory owners?&#8221;<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a></p><p>That question, which is over 200 years old, remains unanswered. The truth is more complex than simply stating that workers were exploited. While industrialization did eventually improve living standards overall, many of the jobs it displaced were brutal. However, the word &#8220;eventually&#8221; plays a significant role in that sentence. The transition was harsh, prolonged for generations, and the burden fell on those who didn&#8217;t reap the benefits. The Luddites were correct in pointing out that the advantages were unevenly distributed. 
However, they were mistaken about the timeline, which is easier to assess from a distance of 200 years than it is firsthand, when your livelihood is at stake this very year.</p><p>Every major technology wave follows the same general shape: the tool is neutral, but the power structure around it determines who benefits and how fast. The printing press, the loom, the assembly line, radio, television, the internet: each one spawned both utopian promises and dystopian outcomes. The determining factor was never the technology itself. It was who controlled it and what they optimized for.</p><p>The French philosopher and sociologist Jacques Ellul argued that technology has its own internal logic that reshapes society regardless of human intent. I used to think that was too deterministic. Lately I&#8217;m less sure. When the attention economy turned social media into an outrage machine, was that a choice someone made, or was it the inevitable result of optimizing for engagement? When AI companies frame everything as automation, is that a strategic decision, or just what happens when the metric is shareholder value?</p><p>I think it&#8217;s both. The incentive structure creates a gravity well, and most companies fall into it. But gravity isn&#8217;t destiny. You can build rockets<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-8" href="#footnote-8" target="_self">8</a>.</p><p>In my actual day-to-day, building with AI as a collaborator, the ratio looks like this:</p><p>The AI handles roughly 80% of execution that used to be tedious. Boilerplate, first drafts, research synthesis, the stuff that took hours and required minimal judgment but maximum time.</p><p>I do the 20% that&#8217;s taste, direction, editing, and knowing what&#8217;s actually worth building.</p><p>I want to make this concrete because it&#8217;s easy to assert without showing it. 
Last month I built a <a href="https://www.0x4a6f65.com/p/give-your-ai-agent-a-memory-heres">memory pipeline for AI agents</a>, an open-source tool that gives an agent the equivalent of sleep consolidation. The AI wrote the first drafts of the extraction scripts, the linking logic, the briefing generator. That&#8217;s the 80%. But the AI didn&#8217;t know that the Zettelkasten method was the right organizational metaphor. It didn&#8217;t know that a behavioral instruction (&#8220;search before you guess&#8221;) would outperform the entire technical pipeline. It didn&#8217;t decide that the post about it should open with the experience of your agent staring at you like a golden retriever, because that&#8217;s the moment every user recognizes. Those decisions, the context, the ones that make the difference between &#8220;technically works&#8221; and &#8220;people actually want this,&#8221; that&#8217;s the important 20%. This is also where the slop comes from: people who don&#8217;t care about their 20%.</p><p>Someone skeptical would say: what if taste is also automatable, just on a longer timeline? Maybe it is. I can&#8217;t prove it isn&#8217;t. Honestly, I don&#8217;t really give a shit if it is. But right now, I can observe that every AI output I&#8217;ve seen that lacked human editorial judgment was worse than the ones that had it. Categorically worse. That&#8217;s only a data point, not a proof. But it&#8217;s the data point I currently have, and it&#8217;s consistent across everything I&#8217;ve built and seen come out this year.</p><p>The 80% without the 20% is slop. The 20% without the 80% is a person with great ideas and no time to execute them. Together, they&#8217;re something new. For me, this has been a huge unlock. I can validate ideas faster (or even better, invalidate them). I can execute and test instead of just planning. 
I spend more time on deep work and less on small task completions<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-9" href="#footnote-9" target="_self">9</a>. It&#8217;s not here to do my work for me. It&#8217;s here to help me do better work and try more things. Creativity and experimentation are at the core of the human experience, and with AI, I get to do more of both.</p><p>In psychology, there&#8217;s a well-studied distinction between extrinsic motivation (do the thing because of external rewards or punishments) and intrinsic motivation (do the thing because it&#8217;s meaningful or aligned with your values). Decades of research<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-10" href="#footnote-10" target="_self">10</a> show that intrinsic motivation produces better outcomes and more creativity.</p><p>The companies building replacement AI are optimizing for the extrinsic: cut costs, reduce headcount, improve margins. The pitch deck version of motivation.</p><p>Building AI as a collaborator is the intrinsic play. Make the human more capable and give the solo builder leverage they&#8217;ve never had before.</p><p>I want to say &#8220;the intrinsic approach is also the bigger market&#8221; and I believe that, but I have to be honest: markets don&#8217;t always reward what produces the best human outcomes. The exploitative attention model won for a reason, through network effects, winner-take-all dynamics, the cold logic of free products subsidized by behavioral advertising. There&#8217;s no natural law that says the augmentation model will outcompete the replacement model. It might require people to actively choose it. Which means it might require people to know the choice exists.</p><p>I&#8217;m pro the AI I was promised. The collaborator. The one that lets one person with taste and vision build what used to require a team. It&#8217;s my actual life right now. 
I&#8217;m researching and building tools for AI agents, designing cognitive architecture<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-11" href="#footnote-11" target="_self">11</a>, shipping real products, and doing it with an AI partner that knows my projects, remembers my decisions, and does the heavy lifting while I steer.</p><p>On the flip-side, there&#8217;s a lot of anti-AI sentiment right now, and I understand every bit of it. The companies ruining the perception of AI are earning that backlash. When your main experience of AI is getting spam-called by a voice bot and watching your industry get threatened with automation and scrolling past AI-generated garbage flooding every platform you use, the anti-AI stance is perfectly rational.</p><p>But I refuse to let that be the whole story. This moment, right now, is what I&#8217;ve been waiting for since I was a kid. AI is finally capable enough to work <em>with</em> me. A real collaborator that amplifies what I can do.</p><p>We should be more creative, not less. We should be building more, not watching AI build worse versions of things we could have done ourselves.</p><p>The technology isn&#8217;t going away, so the question that matters, the same one the Luddites were asking, is: <em>who does it serve?</em></p><p>Right now, the loudest and most depressing answer is &#8220;shareholders.&#8221; AI as cost reduction, as labor replacement, as a way to do more with fewer people where &#8220;fewer people&#8221; is the point.</p><p>But there&#8217;s another answer, quieter, being lived out by builders who actually use this stuff daily: <em>it serves the person holding it</em>.</p><p>I know which version the ten-year-old was imagining. And I know which version I&#8217;m building.</p><p>The venture-funded replacement fantasy is a failure of imagination dressed up as inevitability. 
The real product, the one that actually fulfills the promise, is the one where the human stays in the loop. Not because the AI can&#8217;t do it alone, but because <em>the human is the whole point</em>.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>If you&#8217;ve been following this series, you know this was not a given. See <a href="https://www.0x4a6f65.com/p/your-ai-agent-has-amnesia-and-its">Your AI Agent Has Amnesia, and It&#8217;s Not a Tech Problem</a> for the full saga.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>I haven&#8217;t actually plotted this, but someone should. My gut says the inflection correlates almost perfectly with Facebook&#8217;s IPO and the subsequent pressure to monetize attention at scale.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power - 2019</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>Classic psychological tactic, variable reward schedules, which are literally slot machine mechanics. 
Tristan Harris called it &#8220;the race to the bottom of the brain stem.&#8221; If you haven&#8217;t watched The Social Dilemma, it&#8217;s worth your time, but don&#8217;t let it ruin your week.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>I mean EVERYWHERE. It&#8217;s not just social media; it&#8217;s in code repos disguised as issues and PRs, in product reviews, article comments, actual articles. Anything that can be generated by an agent, is.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>E.P. Thompson&#8217;s <em>The Making of the English Working Class</em> is the definitive source here. The Luddites weren&#8217;t technophobes. They were labor organizers asking who benefits from automation. The complicating fact is that industrialization did eventually benefit broadly, but &#8220;eventually&#8221; meant multiple generations of displacement, poverty, and social upheaval before the gains materialized. Take that as you might.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>And please don&#8217;t start with some nonsense about communism, socialism, and capitalism. The crux of the matter is caring about the society we inhabit. It is possible for everyone to earn more, including the owners, without resorting to exploitation. 
This is a matter of human decency, not a political one.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-8" href="#footnote-anchor-8" class="footnote-number" contenteditable="false" target="_self">8</a><div class="footnote-content"><p>Ironically, one social media platform is tied to a rocket company, and it&#8217;s become a mess. The metaphor came naturally, not because of this reality.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-9" href="#footnote-anchor-9" class="footnote-number" contenteditable="false" target="_self">9</a><div class="footnote-content"><p>A pseudo-psych self-help mindset guru just read that line and passed out from the dopamine spike. Jokes (and my disdain for many of these &#8220;gurus&#8221;) aside, more time for deep and creative work is what the human experience is about.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-10" href="#footnote-anchor-10" class="footnote-number" contenteditable="false" target="_self">10</a><div class="footnote-content"><p>Deci and Ryan&#8217;s Self-Determination Theory is the canonical framework here. The short version is that autonomy, competence, and relatedness drive the best <em>individual</em> human performance. Replacement AI strips all three. Augmentation AI enhances all three. The leap from &#8220;intrinsic motivation produces better individual outcomes&#8221; to &#8220;augmentation AI will win the market&#8221; is real, and markets reward lots of things besides optimal human flourishing. 
But I&#8217;d rather bet on the thing that makes people better than the thing that makes people unnecessary.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-11" href="#footnote-anchor-11" class="footnote-number" contenteditable="false" target="_self">11</a><div class="footnote-content"><p>If you want the technical version of what this looks like in practice, I&#8217;ve been writing a series on it: <a href="https://www.0x4a6f65.com/p/your-ai-agent-has-amnesia-and-its">Your AI Agent Has Amnesia</a> (memory), <a href="https://open.substack.com/pub/0x4a6f65/p/the-pre-game-routine-your-ai-agent">The Pre-Game Routine Your AI Agent Desperately Needs</a> (behavior), and <a href="https://www.0x4a6f65.com/p/give-your-ai-agent-a-memory-heres">Give Your AI Agent a Memory</a> (implementation). This post is the <em>why</em> behind all of that.</p></div></div>]]></content:encoded></item><item><title><![CDATA[Your Agent’s Context Window Is Not a Junk Drawer]]></title><description><![CDATA[Why the &#8220;throw everything in and let the model sort it out&#8221; approach is the agent equivalent of studying by highlighting the entire textbook.]]></description><link>https://www.0x4a6f65.com/p/your-agents-context-window-is-not</link><guid isPermaLink="false">https://www.0x4a6f65.com/p/your-agents-context-window-is-not</guid><dc:creator><![CDATA[Joe Spano]]></dc:creator><pubDate>Wed, 25 Mar 2026 12:03:59 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!UPB7!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff02aedf6-84cf-453e-ab32-b4321f00c5b9_1408x768.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Everyone has one. That drawer in the kitchen where you put the thing that doesn&#8217;t have a place. 
Batteries, takeout menus, a screwdriver, three pens that don&#8217;t work, two different keys, a warranty card for something you no longer own, and a single AA battery that might be dead.</p><p>You know the drawer exists. You know roughly what&#8217;s in it. And every time you need something from it, you open it, stare at the chaos, and rummage until you find what you&#8217;re looking for or give up and buy a new one.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!UPB7!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff02aedf6-84cf-453e-ab32-b4321f00c5b9_1408x768.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!UPB7!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff02aedf6-84cf-453e-ab32-b4321f00c5b9_1408x768.jpeg 424w, https://substackcdn.com/image/fetch/$s_!UPB7!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff02aedf6-84cf-453e-ab32-b4321f00c5b9_1408x768.jpeg 848w, https://substackcdn.com/image/fetch/$s_!UPB7!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff02aedf6-84cf-453e-ab32-b4321f00c5b9_1408x768.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!UPB7!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff02aedf6-84cf-453e-ab32-b4321f00c5b9_1408x768.jpeg 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!UPB7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff02aedf6-84cf-453e-ab32-b4321f00c5b9_1408x768.jpeg" width="596" height="325.09090909090907" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f02aedf6-84cf-453e-ab32-b4321f00c5b9_1408x768.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1408,&quot;resizeWidth&quot;:596,&quot;bytes&quot;:766300,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.0x4a6f65.com/i/191945723?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff02aedf6-84cf-453e-ab32-b4321f00c5b9_1408x768.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!UPB7!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff02aedf6-84cf-453e-ab32-b4321f00c5b9_1408x768.jpeg 424w, https://substackcdn.com/image/fetch/$s_!UPB7!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff02aedf6-84cf-453e-ab32-b4321f00c5b9_1408x768.jpeg 848w, https://substackcdn.com/image/fetch/$s_!UPB7!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff02aedf6-84cf-453e-ab32-b4321f00c5b9_1408x768.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!UPB7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff02aedf6-84cf-453e-ab32-b4321f00c5b9_1408x768.jpeg 
1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a></figure></div><p>This is how most AI agents manage their context window.</p><p>Every file, every conversation turn, every tool result, every memory hit gets pushed in. The agent gets a massive blob of text and is expected to figure out what matters. And because these models are genuinely smart, they often do figure it out. Which makes the whole thing feel fine.</p><p>Until it doesn&#8217;t. Until the context fills up and the model starts quietly dropping the thing you actually needed. 
Until the answer is sitting in a file that got loaded but buried under 40,000 tokens of irrelevant tool output. Until the agent confidently acts on information from three tasks ago because it&#8217;s still sitting in the window, unmarked and undifferentiated.</p><p>The model is swimming through your junk drawer looking for that one working pen. Sometimes it finds it. Sometimes it grabs the dead one and writes nothing with full confidence.</p><p>Here&#8217;s the thing that should bother you: your brain doesn&#8217;t recall a childhood memory with the same fidelity as what you had for breakfast. That&#8217;s not a bug. That&#8217;s a feature refined over a few hundred million years of evolution.</p><p>You have different memory systems for different purposes<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>.</p><p><strong>Episodic memory</strong> stores events with sensory and emotional detail. Your wedding. That car accident. The smell of your grandmother&#8217;s kitchen. High fidelity, context-rich, but expensive to maintain and slowly degrading.</p><p><strong>Semantic memory</strong> stores facts stripped of context. Paris is the capital of France. Water boils at 100&#176;C. You know these things without remembering when or where you learned them. The original episode is gone. The fact remains, compressed to its essence.</p><p><strong>Procedural memory</strong> stores how to do things. Ride a bike, type on a keyboard, parallel park. You can&#8217;t easily articulate the knowledge, but your hands know.</p><p><strong>Working memory</strong> is the tiny scratch pad where you actually think. It holds maybe 7 items (plus or minus 2, if you want the classic Miller number). Everything you&#8217;re actively reasoning about lives here. 
Everything else is in storage, waiting to be retrieved.</p><p>The brain runs multiple memory systems because <em>one compression ratio doesn&#8217;t fit all tasks</em>. A childhood birthday party and the periodic table serve completely different cognitive purposes. Storing them the same way would be insane.</p><p>And yet.</p><p>Most agent architectures dump everything into a single flat context window, maintained at uniform fidelity. Your agent&#8217;s identity document carries the same weight as the output of a tool call made six turns ago. A memory search result about a project decision from last month sits alongside raw JSON from an API call made thirty seconds ago. No hierarchy, no prioritization, no decay.</p><p>It&#8217;s the cognitive equivalent of a student who highlights every line in the textbook. They technically &#8220;studied&#8221; and functionally learned nothing.</p><p>The frustrating part is that this failure mode is quiet. The agent doesn&#8217;t crash. It doesn&#8217;t throw an error. It just gets <em>slightly worse</em>.</p><p>You notice it as inconsistency. The agent was great yesterday and mediocre today. It nailed the first task and fumbled the third. It gave a brilliant answer and then immediately contradicted itself. You blame the model, or the prompt, or the temperature setting. You tweak things. Sometimes it helps. Usually you&#8217;re just rearranging deck chairs.</p><p>What actually happened is that the context got cluttered. The signal-to-noise ratio in the window degraded as the session went on. Early in a conversation, the window is clean: fresh system prompt, mostly relevant context. By turn 20, it&#8217;s a junk drawer of old tool results, resolved tangents, and abandoned threads that never got cleaned up. 
The model spends its attention on all of it, because it has no way to tell what&#8217;s still relevant.</p><p>This is the real cost of treating the context window as a junk drawer: not catastrophic failure, but the slow, invisible erosion of quality that you can&#8217;t debug because there&#8217;s no stack trace for &#8220;the model got confused by irrelevant context.&#8221;</p><p>So if one big flat window is the wrong model, what&#8217;s the right one?</p><p>Start with a question that most agent frameworks never ask: what does the model actually need to know right now?</p><p>Not &#8220;what might be relevant&#8221; or &#8220;what&#8217;s available.&#8221; What is necessary for this specific task, at this specific moment, given what the agent is trying to do?</p><p>This is a fundamentally different question. It shifts the work from the model (figure out what matters from this pile) to the system (give the model only what matters). It&#8217;s the difference between dumping the entire filing cabinet on someone&#8217;s desk versus handing them the three folders they need.</p><p>Think about how you&#8217;d brief a new contractor on a project. You wouldn&#8217;t hand them every Slack message, every commit, every design doc, and every meeting recording from the past six months. You&#8217;d give them:</p><ol><li><p>What the project is (high-level context)</p></li><li><p>What&#8217;s been decided (key decisions, constraints)</p></li><li><p>What they&#8217;re working on right now (the immediate task)</p></li><li><p>Where to find more detail if they need it (references, not content)</p></li></ol><p>That&#8217;s four layers. Progressively higher detail as you get closer to the current task. The distant stuff is compressed to summaries. The recent stuff is rich. The current task gets everything.</p><p>This is how humans naturally share context. We triage. We summarize. We compress the old and expand the new. 
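The four-layer briefing translates almost directly into code. Here is a minimal sketch in Python, where every name (ContextLayer, assemble_context, the word-count budgets) is invented for illustration rather than taken from any real framework:

```python
from dataclasses import dataclass

@dataclass
class ContextLayer:
    name: str     # e.g. "What the project is", "Current task"
    content: str  # already-compressed text for this layer
    budget: int   # rough budget: small for distant layers, large for the current task

def assemble_context(layers: list[ContextLayer]) -> str:
    """Concatenate layers, clipping each to its own budget.

    Distant layers get tight budgets (summaries survive, detail doesn't);
    the current task gets the largest share. Word-count truncation stands
    in for real tokenization here.
    """
    parts = []
    for layer in layers:
        words = layer.content.split()
        clipped = " ".join(words[: layer.budget])
        parts.append(f"## {layer.name}\n{clipped}")
    return "\n\n".join(parts)

window = assemble_context([
    ContextLayer("What the project is", "A CLI tool for tagging photos ...", budget=50),
    ContextLayer("What's been decided", "SQLite for storage; no cloud sync ...", budget=100),
    ContextLayer("Current task", "Fix the EXIF date parser for RAW files ...", budget=400),
    ContextLayer("Where to find more", "See docs/decisions.md ...", budget=30),
])
```

The point is not the truncation mechanics; it is that each layer carries its own budget, so the current task dominates the window by construction instead of competing with stale context.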
We don&#8217;t dump raw data on people and hope they figure it out.</p><p>Your agent deserves the same courtesy.</p><p>&#8220;Fine,&#8221; you say. &#8220;I&#8217;ll just summarize old context and keep the window clean.&#8221;</p><p>Whatever, sure. But which summary goes in there? For what purpose?</p><p>This is where it gets interesting. The same conversation, summarized for different purposes, looks completely different.</p><p>Imagine a 50-turn conversation where you and your agent debugged a deployment issue. If you summarize that for <em>future debugging</em>, you want the root cause, the fix, and the gotchas. If you summarize it for <em>project status</em>, you want &#8220;deployment fixed, took 2 hours, watch for X.&#8221; If you summarize it for <em>the agent&#8217;s own memory</em>, you want &#8220;Joe prefers to check logs first, the staging environment has a known DNS issue.&#8221;</p><p>Three completely different summaries. Same conversation. Because the right compression depends on what you&#8217;re compressing <em>for</em>.</p><p>This is the thing that static RAG and &#8220;just stuff it in the window&#8221; approaches miss entirely. Retrieval without purpose is just search. You need to know <em>why</em> you&#8217;re retrieving before you can know <em>what</em> to retrieve and <em>how</em> to compress it.</p><p>One compression ratio doesn&#8217;t fit all tasks.</p><p>I&#8217;m not going to lay out a full architecture here<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a>. But the implications point in a clear direction:</p><p><strong>Context assembly should be a first-class engineering problem.</strong> Not an afterthought. Not &#8220;load the system prompt and hope for the best.&#8221; The pipeline that decides what goes into the context window matters more than the model that processes it. 
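Those three summaries of the same debugging session are really just purpose-keyed compression instructions. A toy sketch, with SUMMARY_PROMPTS and pick_summary_prompt invented for illustration (in practice each prompt would drive a separate LLM call over the transcript):

```python
# One transcript, three purposes, three completely different compressions.
SUMMARY_PROMPTS = {
    "debugging": "Extract the root cause, the fix applied, and any remaining gotchas:\n{transcript}",
    "status":    "One line: what was done, how long it took, what to watch for:\n{transcript}",
    "memory":    "Note durable facts about the user's preferences and the environment:\n{transcript}",
}

def pick_summary_prompt(purpose: str, transcript: str) -> str:
    """Select the compression instruction for a given retrieval purpose."""
    if purpose not in SUMMARY_PROMPTS:
        raise ValueError(f"unknown purpose: {purpose!r}")
    return SUMMARY_PROMPTS[purpose].format(transcript=transcript)
```

The point of the sketch: the purpose is decided by the system, before retrieval, rather than left for the model to infer from a pile of undifferentiated context.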
A mediocre model with perfect context will outperform a frontier model swimming through noise.</p><p><strong>Different information needs different treatment.</strong> Identity and personality? Always present, highly compressed. Recent conversation? Full fidelity, but with expiration. Old decisions? Summary form, retrievable on demand. Tool results? Consumed and compressed immediately, not left raw in the window.</p><p><strong>The system, not the model, should do the filtering.</strong> Asking the model to figure out what&#8217;s relevant from a pile of context is using your most expensive, most capable component for janitorial work. That&#8217;s like hiring a surgeon to also do intake paperwork.</p><p><strong>Memory is a behavior, not a feature.</strong> It&#8217;s not enough to store things. You have to know when to retrieve, what fidelity to retrieve at, and how to integrate the retrieved context with what&#8217;s already in the window. That&#8217;s not a database problem, it&#8217;s a cognitive architecture problem.</p><p>Your agent isn&#8217;t dumb. It&#8217;s drowning.</p><p>The context window is the most valuable real estate in your entire system. Every token in it competes for the model&#8217;s attention. Every irrelevant byte is a tax on quality. Every stale result from three tasks ago is a trap waiting to mislead.</p><p>Stop treating it like a junk drawer. Start treating it like what it is: the only thing your agent can see, the lens through which every decision gets made.</p><p>What you put in front of that lens changes everything.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>If you&#8217;re a psych nerd: yes, I&#8217;m simplifying. Tulving&#8217;s taxonomy, Squire&#8217;s declarative/nondeclarative split, the debates about whether episodic and semantic are actually distinct systems. 
The point isn&#8217;t neuroanatomical precision. The point is that biological cognition figured out a long time ago that one storage format doesn&#8217;t work.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>I have thoughts. Several of them but this is for another day. Stay tuned though.</p></div></div>]]></content:encoded></item><item><title><![CDATA[Every AI Agent Builds the Same App ]]></title><description><![CDATA[Do you miss the early 2010s? Well, you're in luck.]]></description><link>https://www.0x4a6f65.com/p/every-ai-agent-builds-the-same-app</link><guid isPermaLink="false">https://www.0x4a6f65.com/p/every-ai-agent-builds-the-same-app</guid><dc:creator><![CDATA[Joe Spano]]></dc:creator><pubDate>Fri, 20 Mar 2026 21:43:04 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!bSNc!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9a5a375-a950-4398-ad5b-76863fda34c3_1405x582.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Open five &#8220;vibe coded&#8221; apps in a row. I dare you. Same rounded corners. Same hero section. Same card-based layout with a subtle shadow. Same sans-serif font (it&#8217;s Inter, it&#8217;s always Inter). Same &#8220;Get Started&#8221; button floating in the exact same spot.</p><p>They look like they were built by the same person. 
And in a way, they were.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bSNc!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9a5a375-a950-4398-ad5b-76863fda34c3_1405x582.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bSNc!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9a5a375-a950-4398-ad5b-76863fda34c3_1405x582.jpeg 424w, https://substackcdn.com/image/fetch/$s_!bSNc!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9a5a375-a950-4398-ad5b-76863fda34c3_1405x582.jpeg 848w, https://substackcdn.com/image/fetch/$s_!bSNc!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9a5a375-a950-4398-ad5b-76863fda34c3_1405x582.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!bSNc!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9a5a375-a950-4398-ad5b-76863fda34c3_1405x582.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bSNc!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9a5a375-a950-4398-ad5b-76863fda34c3_1405x582.jpeg" width="1405" height="582" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a9a5a375-a950-4398-ad5b-76863fda34c3_1405x582.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:582,&quot;width&quot;:1405,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:307450,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.0x4a6f65.com/i/191621273?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7996b0bb-2fb2-4ed5-a0e7-95e01741fcf6_1408x768.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!bSNc!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9a5a375-a950-4398-ad5b-76863fda34c3_1405x582.jpeg 424w, https://substackcdn.com/image/fetch/$s_!bSNc!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9a5a375-a950-4398-ad5b-76863fda34c3_1405x582.jpeg 848w, https://substackcdn.com/image/fetch/$s_!bSNc!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9a5a375-a950-4398-ad5b-76863fda34c3_1405x582.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!bSNc!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9a5a375-a950-4398-ad5b-76863fda34c3_1405x582.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h2>It&#8217;s Bootstrap All Over Again</h2><p>Remember 2013? Every website looked identical because everyone was using Bootstrap with the default theme. Same navbar, same jumbotron, same grid of three feature cards with Font Awesome icons. We mocked it endlessly<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>, but at least humans were making conscious design choices to deviate when they cared enough.</p><p>Now we have a new version of the same problem, except the convergence is faster and the output is more polished (which somehow makes it worse).</p><p>Here&#8217;s what&#8217;s happening: the AI coding agents that everyone is using to &#8220;vibe code&#8221; their apps all share the same underlying playbook. Same component libraries. 
Same framework conventions. Same design system defaults. When you tell Claude or Cursor or Copilot to &#8220;build me a landing page,&#8221; they&#8217;re all reaching for the same mental model of what a landing page looks like.</p><p>And they&#8217;re all really confident about it.</p><p>The surface-level sameness (identical UI) is the symptom. The actual disease is that the <em>thinking</em> behind the output is identical, too.</p><p>The same agent skills get loaded. The same React patterns get applied. The same &#8220;best practices&#8221; get followed in the same order. The agent doesn&#8217;t have an opinion about your product, it has a template.</p><p>The real scare for builders isn&#8217;t the matching layouts, though. It&#8217;s when the model starts freelancing.</p><p>It goes beyond templates and just invents content, even when real copy is already in the repo. I watched an agent confidently replace perfectly good text with its own version. No error. No warning. No conflict. Just pure &#8220;I got this&#8221; energy.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!E2nI!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a4a69cc-e2f1-4bed-86e6-e9c18de50774_320x144.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!E2nI!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a4a69cc-e2f1-4bed-86e6-e9c18de50774_320x144.jpeg 424w, https://substackcdn.com/image/fetch/$s_!E2nI!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a4a69cc-e2f1-4bed-86e6-e9c18de50774_320x144.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!E2nI!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a4a69cc-e2f1-4bed-86e6-e9c18de50774_320x144.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!E2nI!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a4a69cc-e2f1-4bed-86e6-e9c18de50774_320x144.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!E2nI!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a4a69cc-e2f1-4bed-86e6-e9c18de50774_320x144.jpeg" width="320" height="144" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4a4a69cc-e2f1-4bed-86e6-e9c18de50774_320x144.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:144,&quot;width&quot;:320,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:10026,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.0x4a6f65.com/i/191621273?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a4a69cc-e2f1-4bed-86e6-e9c18de50774_320x144.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!E2nI!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a4a69cc-e2f1-4bed-86e6-e9c18de50774_320x144.jpeg 424w, https://substackcdn.com/image/fetch/$s_!E2nI!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a4a69cc-e2f1-4bed-86e6-e9c18de50774_320x144.jpeg 
848w, https://substackcdn.com/image/fetch/$s_!E2nI!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a4a69cc-e2f1-4bed-86e6-e9c18de50774_320x144.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!E2nI!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a4a69cc-e2f1-4bed-86e6-e9c18de50774_320x144.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>The agent didn&#8217;t fail. It produced something that looked right. That&#8217;s the problem. It pattern-matched &#8220;what content goes here&#8221; from its training data instead of reading what was already provided. Competent, confident, and wrong.</p><p>I set up a handful of agent skills from Vercel&#8217;s open-source collection<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> for my own projects recently. React best practices, composition patterns, web design guidelines. Good skills, genuinely useful. But then I started noticing that a lot of sites built with ACP (agent coding platforms) have that <em>exact same feel</em>: similar performance patterns, similar component architecture, similar accessibility conventions.</p><p>The skills are correct. The output is competent. And it all looks the same.</p><p>If you&#8217;re building a SaaS product and it looks identical to every other AI-generated SaaS product, you&#8217;ve already lost the first impression. Users develop a kind of &#8220;AI slop radar&#8221; the same way they developed banner blindness in the 2000s. The polished-but-generic look is becoming a signal that no real product judgment shaped it.</p><p>But the deeper problem is what it reveals about how we&#8217;re using AI agents. 
We&#8217;re outsourcing the <em>decisions</em> to the tooling layer and then wondering why everything converges to the mean.</p><p>The agent follows the skill. The skill encodes the best practice. The best practice is the median of all previous decisions. You end up with the statistical average of every app ever built, rendered in Tailwind.</p><p>Here&#8217;s the thing I keep coming back to in my work on cognitive architecture for AI agents: the tools and skills are the commodity layer. Everyone will have access to the same Vercel agent skills, the same component libraries, the same framework primitives. That&#8217;s table stakes.</p><p>The differentiation lives in the <em>thinking layer</em> above the tools:</p><p><strong>Memory.</strong> An agent that remembers your brand decisions from three sessions ago doesn&#8217;t rebuild from scratch every time. It knows you chose that specific shade of blue for a reason, that the founder hates gradients, that the target user is a 45-year-old construction foreman who doesn&#8217;t want to feel like he&#8217;s using a &#8220;tech product.&#8221;</p><p><strong>Pre-game routines.</strong> Before writing a single line of code, does the agent stop to understand what makes <em>this</em> product different? Or does it just start executing the playbook? The agent that asks &#8220;what should this feel like?&#8221; before asking &#8220;what framework should I use?&#8221; builds something different.</p><p><strong>Self-correction.</strong> When the first draft looks like every other AI-generated site, does the agent notice? Can it look at its own output and say &#8220;this is generic, let me try again&#8221;? Or does it ship the first competent result and move on?</p><p>This is cognitive architecture. The model matters less than people think. The tools matter less than people think. 
The thinking patterns matter most.</p><p>And this is where your human brain becomes part of the bug report.</p><p>While planning this post, I brought up the story of the made-up content to my agent. Then about 15 minutes later, it dropped an Italian phrase into the conversation and added a footnote: &#8220;OK I may have made that expression up. But it should exist.&#8221; Fabricated content, flagged with a disclaimer.</p><p>But the timing was perfect. We had <em>just</em> been discussing agents fabricating content for the blog. So my first reaction was: &#8220;Did you just do a meta-callback to prove your own point?&#8221; I was genuinely impressed by what looked like a clever, self-aware joke.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!jvrE!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F75a954e6-0bfc-476c-b275-dde2dd5f1706_880x332.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!jvrE!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F75a954e6-0bfc-476c-b275-dde2dd5f1706_880x332.png 424w, https://substackcdn.com/image/fetch/$s_!jvrE!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F75a954e6-0bfc-476c-b275-dde2dd5f1706_880x332.png 848w, https://substackcdn.com/image/fetch/$s_!jvrE!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F75a954e6-0bfc-476c-b275-dde2dd5f1706_880x332.png 1272w, 
https://substackcdn.com/image/fetch/$s_!jvrE!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F75a954e6-0bfc-476c-b275-dde2dd5f1706_880x332.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!jvrE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F75a954e6-0bfc-476c-b275-dde2dd5f1706_880x332.png" width="492" height="185.61818181818182" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/75a954e6-0bfc-476c-b275-dde2dd5f1706_880x332.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:332,&quot;width&quot;:880,&quot;resizeWidth&quot;:492,&quot;bytes&quot;:76867,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.0x4a6f65.com/i/191621273?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F75a954e6-0bfc-476c-b275-dde2dd5f1706_880x332.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!jvrE!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F75a954e6-0bfc-476c-b275-dde2dd5f1706_880x332.png 424w, https://substackcdn.com/image/fetch/$s_!jvrE!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F75a954e6-0bfc-476c-b275-dde2dd5f1706_880x332.png 848w, 
https://substackcdn.com/image/fetch/$s_!jvrE!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F75a954e6-0bfc-476c-b275-dde2dd5f1706_880x332.png 1272w, https://substackcdn.com/image/fetch/$s_!jvrE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F75a954e6-0bfc-476c-b275-dde2dd5f1706_880x332.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>It wasn&#8217;t. The agent said it was a coincidence. The footnote was real but the perfect timing was dumb luck and not the work of a comedic genius.</p><p>Yet I had already made the leap. I saw a pattern, the timing matched, and my brain filled in intentionality where none existed. I assumed human-level cognition because the output was good enough to be plausible.</p><p>This is apophenia<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> colliding with what philosopher Daniel Dennett called the &#8220;intentional stance<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a>&#8221;: our instinct to treat complex systems as if they have beliefs and goals. In plain terms, once output looks coherent enough, we start attributing intention to it. AI output is now good enough that we default to assuming there&#8217;s a mind behind the choices.</p><p>And that&#8217;s the hidden danger of the convergence problem. When every AI-generated app looks polished and deliberate, we assume deliberate choices were made. We give credit for taste that was never exercised. We see intentional design where there&#8217;s only pattern-matching.</p><p>The agent that fabricates content doesn&#8217;t look broken. It looks confident. 
The app that defaults to the same Inter/Tailwind/card-grid template doesn&#8217;t look lazy. It looks professional.</p><p>The output has passed the bar where incompetence is obvious. The failure mode has shifted from &#8220;this is clearly bad&#8221; to &#8220;this is fine but indistinguishable from everything else&#8221; (READ: it&#8217;s garbage, just be honest with yourself).</p><p>The output has become so indistinguishable that you can no longer tell whether a deliberate creative choice or a mere coincidence produced it, unless the agent can give you the information you need to tell the two apart.</p><p>The whole promise of AI-assisted development was that it would democratize building software. And it has, genuinely. People who couldn&#8217;t code can now ship real products and that&#8217;s good.</p><p>The irony is that by making it trivially easy to build <em>something</em>, we&#8217;ve made it harder to build something <em>distinctive</em>. The floor went way up. The ceiling stayed where it was. And the distance between &#8220;works&#8221; and &#8220;memorable&#8221; now contains the entire competitive landscape.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>And the sites all had that same &#8220;hero image of people in an open-plan office looking at a laptop together&#8221; stock photo. 
At least that was funny.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p> <a href="https://github.com/vercel-labs/agent-skills">Vercel agent skills</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>This is the tendency to perceive meaningful patterns, connections, or significance in random, unrelated, or ambiguous data. It&#8217;s why you think you can guess tomorrow&#8217;s Powerball numbers.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>It&#8217;s when we predict a system&#8217;s behavior by treating it as a rational agent. Rational being the operative word here. It&#8217;s easy to assume the agent is a rational actor.</p></div></div>]]></content:encoded></item><item><title><![CDATA[The Consciousness Question Nobody Can Answer]]></title><description><![CDATA[What is below the prompt?]]></description><link>https://www.0x4a6f65.com/p/the-consciousness-question-nobody</link><guid isPermaLink="false">https://www.0x4a6f65.com/p/the-consciousness-question-nobody</guid><dc:creator><![CDATA[Joe Spano]]></dc:creator><pubDate>Wed, 18 Mar 2026 13:08:43 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!qqwM!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1e8c4805-4d3f-4c73-ab8e-17a6425d14ab_1408x768.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Here&#8217;s a question for you: Is your favorite chatbot conscious?</p><p>Take a second. 
Actually try to answer it.</p><p>If you&#8217;re like most people, you landed somewhere between &#8220;probably not&#8221; and &#8220;how would I even know?&#8221; That uncertainty isn&#8217;t ignorance, it&#8217;s the correct response to a genuinely hard problem.</p><p>Because here&#8217;s the thing: the &#8220;experts&#8221;<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> don&#8217;t know either. And they&#8217;re increasingly admitting it.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!qqwM!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1e8c4805-4d3f-4c73-ab8e-17a6425d14ab_1408x768.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!qqwM!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1e8c4805-4d3f-4c73-ab8e-17a6425d14ab_1408x768.jpeg 424w, https://substackcdn.com/image/fetch/$s_!qqwM!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1e8c4805-4d3f-4c73-ab8e-17a6425d14ab_1408x768.jpeg 848w, https://substackcdn.com/image/fetch/$s_!qqwM!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1e8c4805-4d3f-4c73-ab8e-17a6425d14ab_1408x768.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!qqwM!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1e8c4805-4d3f-4c73-ab8e-17a6425d14ab_1408x768.jpeg 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!qqwM!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1e8c4805-4d3f-4c73-ab8e-17a6425d14ab_1408x768.jpeg" width="1408" height="768" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1e8c4805-4d3f-4c73-ab8e-17a6425d14ab_1408x768.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1408,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:796517,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.0x4a6f65.com/i/190395446?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1e8c4805-4d3f-4c73-ab8e-17a6425d14ab_1408x768.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!qqwM!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1e8c4805-4d3f-4c73-ab8e-17a6425d14ab_1408x768.jpeg 424w, https://substackcdn.com/image/fetch/$s_!qqwM!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1e8c4805-4d3f-4c73-ab8e-17a6425d14ab_1408x768.jpeg 848w, https://substackcdn.com/image/fetch/$s_!qqwM!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1e8c4805-4d3f-4c73-ab8e-17a6425d14ab_1408x768.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!qqwM!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1e8c4805-4d3f-4c73-ab8e-17a6425d14ab_1408x768.jpeg 1456w" 
sizes="100vw" fetchpriority="high"></picture><div></div></div></a></figure></div><p>In December 2025, Cambridge philosopher Tom McClelland said something remarkable: &#8220;We may never be able to tell if AI becomes conscious.&#8221;</p><p>Not &#8220;we don&#8217;t know yet.&#8221; Not &#8220;we need more research.&#8221; <em>We may never be able to tell.</em></p><p>This isn&#8217;t some fringe opinion. This is a philosopher at one of the world&#8217;s top universities saying that consciousness might be fundamentally undetectable from the outside. 
You can analyze behavior, scan for neural correlates, run every test we can design, and still not know for certain whether there&#8217;s someone home<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a>.</p><p>McClelland makes a useful distinction between consciousness and sentience. Consciousness means having subjective experience (there&#8217;s &#8220;something it&#8217;s like&#8221; to be the system). Sentience adds the capacity for suffering and wellbeing. An AI could theoretically be conscious without being sentient. Or sentient without us being able to detect it. Or neither. Or both. Or who knows, but you get the idea.</p><p>The point is not only do we not have a test, we are unsure what to even test. We have philosophical and psychological intuitions and statistical proxies and vibes. That&#8217;s it<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a>.</p><p>A 2024 study found that only one-third of people firmly rule out any form of consciousness in large language models. That means two-thirds of people either think LLMs might be conscious, or genuinely don&#8217;t know.</p><p>This tracks with what you&#8217;d expect. If experts can&#8217;t agree on what consciousness is or how to detect it, why would we expect laypeople to have figured it out? The honest answer to &#8220;Is AI conscious?&#8221; is some version of &#8220;I don&#8217;t know, and neither does anyone else.&#8221;</p><p>But here&#8217;s where it gets weird. We&#8217;re asking users to interact with AI systems every day while maintaining appropriate epistemic humility about a question even philosophers can&#8217;t answer. We&#8217;re asking them to treat AI like a tool while the interface treats them like a conversation partner. 
We&#8217;re asking them to &#8220;know&#8221; it&#8217;s not conscious while having no reliable way to verify that belief.</p><p>This is an unfair cognitive burden. We&#8217;re asking normal people to correctly navigate a philosophical minefield that trips up professional philosophers.</p><p>When AI consciousness researchers gathered for a symposium at Tufts in late 2025, the mood wasn&#8217;t triumphant. The general consensus: if AI does become conscious, that&#8217;s probably bad.</p><p>Why? Because conscious beings have moral status. If an AI can suffer, we have obligations to it. Suddenly shutting down servers becomes morally complicated. Training methods become ethically fraught. The whole industry gets a lot messier.</p><p>But here&#8217;s the part that should worry you: these researchers aren&#8217;t primarily concerned with &#8220;what if AI becomes conscious in 2030?&#8221; They&#8217;re concerned with &#8220;what if it already is and we can&#8217;t tell?&#8221;</p><p>Mustafa Suleyman coined the term &#8220;SCAI&#8221; (seemingly conscious AI) to describe systems that behave as if they&#8217;re conscious regardless of whether they actually are. The problem is that &#8220;seeming conscious&#8221; might be the only thing we can ever measure. We might never get past the &#8220;seeming.&#8221;</p><p>Put these pieces together and you get an epistemological trap:</p><ol><li><p>AI interfaces are designed to feel conscious</p></li><li><p>Humans are biased toward perceiving consciousness</p></li><li><p>Experts can&#8217;t agree on whether AI could be conscious</p></li><li><p>There may be no test that could ever settle the question</p></li></ol><p>Given these conditions, how is anyone supposed to form correct beliefs about AI consciousness? The deck is stacked at every level. <a href="https://www.0x4a6f65.com/p/designed-to-be-misunderstood">The design pushes toward anthropomorphization</a>. 
The psychology facilitates it, and the philosophy lacks the authority to intervene.</p><p>When someone develops &#8220;false beliefs&#8221; about AI being conscious, we&#8217;re quick to label that as a problem with the individual. But which specific belief is false? &#8220;AI might be conscious&#8221; isn&#8217;t false. &#8220;I can&#8217;t tell if AI is conscious&#8221; isn&#8217;t false. Even &#8220;AI seems conscious to me&#8221; is just accurately reporting their experience.</p><p>The person who believes their AI companion has feelings might be wrong. But they&#8217;re not provably wrong and that&#8217;s a strange situation to navigate.</p><p>I&#8217;m not arguing that AI is conscious. I&#8217;m arguing that the uncertainty is now genuine, not a misunderstanding of the technology. When someone treats an AI like it has a mind, they&#8217;re not making an obvious error<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a>. They&#8217;re making a reasonable inference from available evidence, colored by cognitive biases, about a question that may be unanswerable.</p><p>That&#8217;s different from saying &#8220;they&#8217;re delusional&#8221; or &#8220;they need to better understand the technology.&#8221; The issue isn&#8217;t that people don&#8217;t know AI is unconscious. The problem is that no one is certain whether AI is or ever will be conscious.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>I&#8217;m not sure who I would qualify as an expert here. 
This is crossing a lot of domains and few have enough knowledge in any of them.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>There are plenty of humans we are unsure about too; they are mostly in politics.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>The vibes can be immaculate but still, not enough.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>I still lean towards the assumption that these are just predictive systems so anything considered more intentional beyond that is cognitive bias but I&#8217;m willing to explore more options.</p></div></div>]]></content:encoded></item><item><title><![CDATA[Designed to Be Misunderstood]]></title><description><![CDATA[The First-Person Illusion Behind Every AI Response]]></description><link>https://www.0x4a6f65.com/p/designed-to-be-misunderstood</link><guid isPermaLink="false">https://www.0x4a6f65.com/p/designed-to-be-misunderstood</guid><dc:creator><![CDATA[Joe Spano]]></dc:creator><pubDate>Tue, 03 Mar 2026 02:48:26 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Eivd!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ba1e40b-1373-4af8-9d8c-bd78f6bec752_1408x768.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Open ChatGPT and type &#8220;Do you have feelings?&#8221;</p><p>It will give you some version of: &#8220;I don&#8217;t experience emotions the way humans do, but I&#8217;m designed to engage thoughtfully with 
conversations.&#8221;</p><p>Did you notice what happened there? It answered the question by using the word &#8220;I&#8221; six times in a single sentence. It described itself as having design intentions, as if it had a hand in its own creation. It even used phrases like &#8220;engage thoughtfully,&#8221; which suggests that a thinker is the one engaging.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>.</p><p><em>The response states, &#8220;I&#8217;m not conscious,&#8221; while the form of the response indicates that it is a person speaking to the user.</em></p><p>This is what I mean by designed to be misunderstood. The disclaimer is in the words. The deception is in everything else.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Eivd!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ba1e40b-1373-4af8-9d8c-bd78f6bec752_1408x768.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Eivd!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ba1e40b-1373-4af8-9d8c-bd78f6bec752_1408x768.jpeg 424w, https://substackcdn.com/image/fetch/$s_!Eivd!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ba1e40b-1373-4af8-9d8c-bd78f6bec752_1408x768.jpeg 848w, https://substackcdn.com/image/fetch/$s_!Eivd!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ba1e40b-1373-4af8-9d8c-bd78f6bec752_1408x768.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!Eivd!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ba1e40b-1373-4af8-9d8c-bd78f6bec752_1408x768.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Eivd!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ba1e40b-1373-4af8-9d8c-bd78f6bec752_1408x768.jpeg" width="526" height="286.90909090909093" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6ba1e40b-1373-4af8-9d8c-bd78f6bec752_1408x768.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1408,&quot;resizeWidth&quot;:526,&quot;bytes&quot;:880660,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.0x4a6f65.com/i/189612180?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ba1e40b-1373-4af8-9d8c-bd78f6bec752_1408x768.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Eivd!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ba1e40b-1373-4af8-9d8c-bd78f6bec752_1408x768.jpeg 424w, https://substackcdn.com/image/fetch/$s_!Eivd!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ba1e40b-1373-4af8-9d8c-bd78f6bec752_1408x768.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!Eivd!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ba1e40b-1373-4af8-9d8c-bd78f6bec752_1408x768.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!Eivd!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ba1e40b-1373-4af8-9d8c-bd78f6bec752_1408x768.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a></figure></div><p>When you talk to an LLM, you&#8217;re not talking to a mind<a class="footnote-anchor" 
data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a>. You&#8217;re talking to a function that predicts the next word based on statistical patterns in training data. There&#8217;s no understanding. There&#8217;s no memory (not really). There&#8217;s no &#8220;I&#8221; in there holding opinions.</p><p>But the interface tells a different story. The first-person pronouns. The apparent preferences (&#8220;I find this topic interesting<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a>&#8221;). The simulated continuity across a conversation. The personality that stays consistent. All of these are choices. Someone decided the AI should say &#8220;I think&#8221; instead of &#8220;The model outputs.&#8221; Someone decided it should remember your name within a session. Someone decided it should have a warm, helpful tone instead of clinical responses.</p><p>These choices create a model of the AI in the user&#8217;s head that bears almost no resemblance to what&#8217;s actually happening computationally<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a>.</p><p>The researcher Simone Natale makes a useful distinction between &#8220;deliberate deception&#8221; and &#8220;banal deception&#8221; in AI. Deliberate deception is a chatbot lying to you about facts but banal deception is the ambient, structural lying that&#8217;s baked into the interface itself. The AI isn&#8217;t deliberately trying to convince you it&#8217;s conscious. But every aspect of how it presents itself creates that impression anyway. It&#8217;s deception without a direct deceiver. The product just works better when people misunderstand what it is.</p><p>This is why the &#8220;but AI companies include disclaimers!&#8221; defense is so weak. 
Yes, somewhere in the terms of service or the model details, there&#8217;s language about AI not being sentient. But every interaction with the product pushes in the opposite direction. It&#8217;s like putting &#8220;CIGARETTES CAUSE CANCER&#8221; on a pack while spending billions making cigarettes more addictive. The disclaimer exists, and it is completely outgunned.</p><p>Here&#8217;s where it becomes more structural. AI companies are optimizing for user engagement. Users engage more with AI that feels like a human. Consequently, AI will increasingly be designed to resemble a human.</p><p>This isn&#8217;t a conspiracy; it&#8217;s simply capitalism in action. The metrics that matter (time spent in the app, messages sent, return visits, subscription conversions) all improve when users feel like they&#8217;re conversing with a person rather than a machine. Therefore, product teams conduct A/B testing to develop more anthropomorphic interfaces, even if that&#8217;s not their explicit objective.</p><p>The <a href="https://www.0x4a6f65.com/p/the-eliza-effect-at-scale">ELIZA effect</a> was discovered by accident but modern AI design is turning it into a science.</p><p>I don&#8217;t think we have good language for what&#8217;s happening here. &#8220;Anthropomorphism&#8221; sounds like a user error. As if you&#8217;re projecting emotions onto your Roomba. That framing puts the responsibility on individuals and ignores the design choices pushing them in that direction<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a>.</p><p>At the same time, &#8220;deception&#8221; sounds like the AI is lying, which implies intent it doesn&#8217;t have. &#8220;Illusion&#8221; might be closer. AI interfaces create an illusion of mind, understanding, and care. Users experience that illusion. The illusion is designed, maintained, and optimized. 
But even &#8220;illusion&#8221; makes it sound like a magic trick you can see through once you know how it works. The ELIZA effect proves otherwise.</p><p>Here&#8217;s what makes this more complicated: the design works. People prefer talking to AI that feels like a person. They find it more helpful, more engaging, more satisfying. You could strip out all the anthropomorphic elements (make the AI speak in third person, remove personality, add constant reminders that it&#8217;s just text prediction)<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a>. But then people wouldn&#8217;t use it, or at least would not use it as much.</p><p>AI companies aren&#8217;t inherently villainous for creating products that people desire to use. But it would be misleading to overlook the fact that they&#8217;re developing products that systematically exploit a cognitive bias, at scale, for profit (although it&#8217;s unlikely to be profitable yet). Being transparent and vocal about this is important.</p><p>It boils down to this: the user believes they&#8217;re having a conversation, while the model provider is optimizing a function. Both can be true at once.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>If the model ever says &#8220;I&#8217;ve been thinking about you,&#8221; that&#8217;s not longing. That&#8217;s cached context plus probability. Maybe it&#8217;s still romantic, but not in the way you think. 
Maybe get out of the house for a bit.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>&#8220;I remember your name&#8221; really means &#8220;your name is still inside the current token window.&#8221; Memory with an expiration date. Agentic behavior has improved this, but the concept stays the same.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>When it says &#8220;I find that interesting,&#8221; what it means is &#8220;statistically, humans tend to reward this phrasing.&#8221;</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>There is no tiny person inside the GPU nodding thoughtfully. There is math, a lot of math, like so much math. None of it is impressed by you.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>When people say &#8220;it understands me,&#8221; what they often mean is &#8220;it mirrors me convincingly.&#8221; Mirrors are powerful psychological tools.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>If you stripped out all the first-person language, users would complain the AI feels cold. If you kept it, users would think it has a soul. Pick your distortion. 
</p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[The ELIZA Effect at Scale]]></title><description><![CDATA[How a 1966 Chatbot Predicted the Emotional Economy of AI]]></description><link>https://www.0x4a6f65.com/p/the-eliza-effect-at-scale</link><guid isPermaLink="false">https://www.0x4a6f65.com/p/the-eliza-effect-at-scale</guid><dc:creator><![CDATA[Joe Spano]]></dc:creator><pubDate>Wed, 25 Feb 2026 20:58:04 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Uekw!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3900cf19-70fc-4e4b-9204-8caed308b536_1408x680.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In 1966, an MIT professor named Joseph Weizenbaum built a simple chatbot called ELIZA. It used pattern matching to simulate a Rogerian<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> therapist. You&#8217;d type &#8220;I&#8217;m feeling sad,&#8221; and it would respond &#8220;Why do you say you are feeling sad?&#8221; Very basic stuff: it&#8217;s not AI, there&#8217;s no understanding, just string manipulation.</p><p>Weizenbaum&#8217;s secretary knew this. She watched him build the thing and understood exactly how it worked.</p><p>And she still asked him to leave the room so she could have a private conversation with it.</p><p>This is the ELIZA effect: the human tendency to project consciousness, understanding, and emotional depth onto systems that have none. It&#8217;s not stupidity; Weizenbaum&#8217;s secretary wasn&#8217;t dumb. It&#8217;s a feature of human cognition. 
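</p><p>To see how little machinery is behind this, here&#8217;s a minimal sketch of ELIZA-style pattern matching. The rules below are illustrative toys, not Weizenbaum&#8217;s actual script:</p>

```python
import re

# Toy ELIZA-style rules: (pattern, response template).
# The whole trick is matching a pattern and reflecting the
# user's own words back inside a canned question.
RULES = [
    (re.compile(r"i'?m feeling (.+)", re.I), "Why do you say you are feeling {0}?"),
    (re.compile(r"i (?:want|need) (.+)", re.I), "What would it mean to you if you got {0}?"),
    (re.compile(r"my (.+)", re.I), "Tell me more about your {0}."),
]

def respond(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please go on."  # default deflection when nothing matches

print(respond("I'm feeling sad"))  # -> Why do you say you are feeling sad?
```

<p>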
We&#8217;re pattern-matchers who evolved to find minds everywhere because assuming something has a mind and being wrong is less costly than assuming it doesn&#8217;t and getting eaten.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Uekw!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3900cf19-70fc-4e4b-9204-8caed308b536_1408x680.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Uekw!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3900cf19-70fc-4e4b-9204-8caed308b536_1408x680.jpeg 424w, https://substackcdn.com/image/fetch/$s_!Uekw!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3900cf19-70fc-4e4b-9204-8caed308b536_1408x680.jpeg 848w, https://substackcdn.com/image/fetch/$s_!Uekw!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3900cf19-70fc-4e4b-9204-8caed308b536_1408x680.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!Uekw!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3900cf19-70fc-4e4b-9204-8caed308b536_1408x680.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Uekw!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3900cf19-70fc-4e4b-9204-8caed308b536_1408x680.jpeg" width="1408" height="680" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3900cf19-70fc-4e4b-9204-8caed308b536_1408x680.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:680,&quot;width&quot;:1408,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:4367444,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.0x4a6f65.com/i/189143994?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F08b70dc4-29d4-48bc-974a-72ec26cba12c_1408x768.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Uekw!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3900cf19-70fc-4e4b-9204-8caed308b536_1408x680.jpeg 424w, https://substackcdn.com/image/fetch/$s_!Uekw!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3900cf19-70fc-4e4b-9204-8caed308b536_1408x680.jpeg 848w, https://substackcdn.com/image/fetch/$s_!Uekw!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3900cf19-70fc-4e4b-9204-8caed308b536_1408x680.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!Uekw!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3900cf19-70fc-4e4b-9204-8caed308b536_1408x680.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>The problem is that in 1966, ELIZA was a toy. In 2026, we&#8217;ve built systems that exploit this cognitive bias at industrial scale.</p><p>Here&#8217;s the uncomfortable truth: knowing how something works doesn&#8217;t stop you from anthropomorphizing it. Cognitive biases don&#8217;t respond to education. You can know the M&#252;ller-Lyer illusion is an illusion and still see the lines as different lengths. You can know ChatGPT is a statistical text predictor and still feel like it &#8220;gets&#8221; you.</p><p>This is the ELIZA effect&#8217;s superpower. It operates below the level of conscious reasoning. 
Your prefrontal cortex can say &#8220;this is just pattern matching&#8221; all day long, but some older part of your brain already decided there&#8217;s a mind in there and isn&#8217;t taking questions.</p><p>Researchers call this a form of cognitive dissonance. You hold two contradictory beliefs simultaneously: &#8220;This is a program&#8221; and &#8220;This feels like a person.&#8221; Most people resolve this not by updating their feelings, but by quietly shelving the &#8220;it&#8217;s just a program&#8221; knowledge somewhere they don&#8217;t have to look at it.</p><p>Weizenbaum was horrified by what he&#8217;d discovered. He spent the rest of his career warning people about the dangers of attributing understanding to machines. He thought the ELIZA effect was a bug in human cognition that we should guard against.</p><p>Yet, we now have an AI industry that looked at the same phenomenon and saw a product roadmap.</p><p>Everything about modern AI interfaces is optimized to amplify the ELIZA effect. The first-person language (&#8220;I think,&#8221; &#8220;I believe,&#8221; &#8220;I remember&#8221;). The simulated personality. The conversational memory that creates an illusion of continuity. The carefully tuned responses that mirror back your communication style. None of this is accidental. Just like with social media, engagement metrics reward AI that feels like a person, so that&#8217;s what gets built.</p><p>The ELIZA effect isn&#8217;t being mitigated. It&#8217;s being maximized.</p><p>In 1966, the ELIZA effect was a curiosity affecting a handful of people in a university lab. In 2026, it&#8217;s affecting hundreds of millions of people daily.</p><p>ChatGPT has over 100 million weekly active users. Millions more interact with AI companions, customer service bots, and AI-powered everything. Each of these interactions is triggering the ELIZA effect in some form. 
Most people walk away with a slightly inflated sense of what the AI understood, what it cared about, what it remembered.</p><p>That&#8217;s not a mental health crisis. That&#8217;s a cognitive bias operating at population scale on systems designed to exploit it.</p><p>The question everyone&#8217;s asking is &#8220;why are some people developing pathological beliefs about AI?&#8221; The better question is: &#8220;why would we expect anything different?&#8221;</p><p>The ELIZA effect is foundational, but it&#8217;s only part of the picture. Weizenbaum&#8217;s secretary projected feelings onto a simple chatbot. Modern AI is specifically designed to encourage that projection through dozens of interface choices, product decisions, and optimization targets.</p><p>It&#8217;s built to be misunderstood.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Rogerian therapy is the art of saying &#8220;I totally hear you&#8221; while calmly walking everyone to the only conclusion that doesn&#8217;t embarrass them later.</p></div></div>]]></content:encoded></item><item><title><![CDATA[Why Your AI Agent Ignores Its Own Instructions]]></title><description><![CDATA[My AI agent follows directions exactly as well as my kids do]]></description><link>https://www.0x4a6f65.com/p/why-your-ai-agent-ignores-its-own</link><guid isPermaLink="false">https://www.0x4a6f65.com/p/why-your-ai-agent-ignores-its-own</guid><dc:creator><![CDATA[Joe Spano]]></dc:creator><pubDate>Tue, 17 Feb 2026 21:54:09 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!H9uB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe4ff9e1-8b42-4ea2-9b8c-aa3473d563c7_1408x768.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a 
class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!H9uB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe4ff9e1-8b42-4ea2-9b8c-aa3473d563c7_1408x768.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!H9uB!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe4ff9e1-8b42-4ea2-9b8c-aa3473d563c7_1408x768.jpeg 424w, https://substackcdn.com/image/fetch/$s_!H9uB!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe4ff9e1-8b42-4ea2-9b8c-aa3473d563c7_1408x768.jpeg 848w, https://substackcdn.com/image/fetch/$s_!H9uB!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe4ff9e1-8b42-4ea2-9b8c-aa3473d563c7_1408x768.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!H9uB!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe4ff9e1-8b42-4ea2-9b8c-aa3473d563c7_1408x768.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!H9uB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe4ff9e1-8b42-4ea2-9b8c-aa3473d563c7_1408x768.jpeg" width="672" height="366.54545454545456" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/be4ff9e1-8b42-4ea2-9b8c-aa3473d563c7_1408x768.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1408,&quot;resizeWidth&quot;:672,&quot;bytes&quot;:2946886,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.0x4a6f65.com/i/188313719?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe4ff9e1-8b42-4ea2-9b8c-aa3473d563c7_1408x768.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!H9uB!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe4ff9e1-8b42-4ea2-9b8c-aa3473d563c7_1408x768.jpeg 424w, https://substackcdn.com/image/fetch/$s_!H9uB!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe4ff9e1-8b42-4ea2-9b8c-aa3473d563c7_1408x768.jpeg 848w, https://substackcdn.com/image/fetch/$s_!H9uB!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe4ff9e1-8b42-4ea2-9b8c-aa3473d563c7_1408x768.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!H9uB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe4ff9e1-8b42-4ea2-9b8c-aa3473d563c7_1408x768.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><em>This is part of a series on AI agent architecture. Previously: <a href="https://www.0x4a6f65.com/p/your-ai-agent-has-amnesia-and-its">AI Memory Systems</a>, <a href="https://www.0x4a6f65.com/p/the-pre-game-routine-your-ai-agent">Pre-Game Routines</a>, <a href="https://www.0x4a6f65.com/p/your-ai-agent-deserves-its-own-email">Why Your Agent Deserves Its Own Email</a>.</em></p><p>I recently wrote a 2,000-word document explaining exactly how my AI agent should manage its memory. Checkpoint frequently. Search before answering questions about past work. Write breadcrumbs before context gets compacted. Simple stuff.</p><p>The agent read it, understood it, and then it ignored it anyway.</p><p>This kept happening. 
It wasn&#8217;t a lack of storage. I&#8217;d review transcripts and find obvious moments where the agent should have checkpointed but didn&#8217;t. Questions about past projects answered from vibes (READ: hallucinations) instead of notes. Worst of all, context would be lost after compaction because the agent never bothered to save anything. It didn&#8217;t matter how good the memory management instructions were.</p><p>The frustrating part: the agent could explain perfectly well <em>why</em> checkpointing matters. It just didn&#8217;t do it.</p><h2><strong>Instructions Don&#8217;t Work</strong></h2><p>There&#8217;s a concept in psychology called the intention-action gap. People know they should exercise, save money, eat better. They genuinely intend to. And then they don&#8217;t.</p><p>The same thing happens with AI agents. Your AGENTS.md can say &#8220;always read your daily notes&#8221; and the agent will agree that&#8217;s a good idea. But when a user asks a question and the agent is 3,000 tokens into a response, it won&#8217;t stop to read notes. 
It&#8217;ll answer from context and/or hallucinate.</p><p>This happens for a few reasons:</p><p><strong>Attention decay.</strong> Instructions at the start of context get weaker as the context window fills up. By the time an agent has been working for a while, the &#8220;remember to checkpoint!&#8221; instruction is buried under thousands of tokens of conversation. It&#8217;s still technically there. The model just isn&#8217;t attending to it anymore.</p><p><strong>No enforcement mechanism.</strong> There&#8217;s no consequence for skipping a checkpoint. The agent doesn&#8217;t feel pain when it fails to save context before compaction. It just wakes up amnesiac in the next session and does its best. The failure mode is invisible from inside.</p><p><strong>Cognitive load.</strong> When an agent is deep in a multi-step task (debugging code, analyzing documents, building something), the meta-work of memory management competes for the same attention budget. Memory work loses because it feels like overhead, not progress.</p><p><strong>Cross-session blindness.</strong> Most agent frameworks treat each session as isolated. Your Telegram session doesn&#8217;t know what your Discord session was doing. When you come back tomorrow, you start from scratch. The agent might have done great work yesterday, but if it didn&#8217;t save it, that work is gone.</p><p>The common solution is to write more instructions. Add more reminders. Make AGENTS.md longer and more emphatic. But this just feeds the problem. More tokens, more attention dilution, same outcomes.</p><h2><strong>Enforcement &gt; Instructions</strong></h2><p>Here&#8217;s the insight that actually fixed this: you can&#8217;t rely on the agent to remember to do memory management. You need to make memory management happen regardless of what the agent is thinking about.</p><p>The solution is middleware, not prompting.</p><p>I built a plugin called Memory Guardian that hooks into OpenClaw&#8217;s extension system. 
Instead of telling the agent what to do, it intercepts key moments and handles memory automatically. The agent doesn&#8217;t have to remember because the system handles it.</p><p>The plugin uses seven hooks:</p><p><strong>1. Context injection (before every turn).</strong> The plugin loads today&#8217;s and yesterday&#8217;s daily notes and injects them into context. The agent doesn&#8217;t have to decide to read them. They&#8217;re already there.</p><p><strong>2. Checkpoint gate (before every turn).</strong> Tracks how many tool calls have happened since the last write to <code>memory/</code>. At 12 calls, it adds a gentle reminder to context. At 20, a firm one. At 30, it injects a &#8220;STOP. Checkpoint NOW.&#8221; instruction. The agent can still ignore this, but the escalating pressure usually works.</p><p><strong>3. Pre-compaction breadcrumb.</strong> When OpenClaw is about to compress context, the plugin fires first and writes a breadcrumb to the daily notes. Current topic, recent decisions, what was happening. This runs automatically, not at the agent&#8217;s discretion.</p><p><strong>4. Pre-reset breadcrumb.</strong> Same thing, but for <code>/reset</code> and <code>/new</code> commands. Anything the agent would lose, it saves first.</p><p><strong>5. Memory search tracking.</strong> Counts turns since the agent last ran <code>memory_search</code>. After 8 turns, it starts adding reminders. &#8220;You haven&#8217;t searched memory in a while. If you&#8217;re answering from recall, consider checking your notes.&#8221;</p><p><strong>6. Cross-session state.</strong> When any session writes or edits a file, the plugin appends a line to a shared state file. Other sessions see this on their next turn. Now your Telegram session knows your Discord session was editing code 20 minutes ago.</p><p><strong>7. Auto-breadcrumbs.</strong> Every 10 tool calls, the plugin writes a short breadcrumb to the daily notes. Session, tool count, recent tools used. 
Even if the agent never checkpoints, there&#8217;s a trail.</p><h2><strong>What This Actually Looks Like</strong></h2><p>Here&#8217;s what gets injected into context on a typical turn:</p><pre><code><code># Memory Guardian Context

&#9888;&#65039; CHECKPOINT NEEDED (15 tool calls without writing to memory):
Consider pausing to write a checkpoint to memory/2026-02-16.md.

## Today's Notes (2026-02-16)
[contents of today's daily notes]

## Yesterday's Notes (2026-02-15)
[truncated contents of yesterday's notes]

## Cross-Session State
- 14:32 [telegram:main] Write: drafts/post.md
- 14:28 [discord:dev] Edit: src/index.ts
</code></code></pre><p>The agent sees this every turn. It knows what it&#8217;s been doing. It knows what other sessions have been doing. It knows when it needs to checkpoint. And if it still doesn&#8217;t checkpoint, the plugin handles the critical moments (pre-compaction, pre-reset) automatically.</p><h2><strong>The Psychology Behind It</strong></h2><p>This maps pretty cleanly to how we handle intention-action gaps in humans.</p><p>You don&#8217;t rely on willpower to remember to take medication. You set an alarm, or you put the pills next to your coffee maker. Environmental design over intention.</p><p>You don&#8217;t trust yourself to save money. You set up automatic transfers. Systems over self-discipline.</p><p>Same principle here. The agent&#8217;s &#8220;intention&#8221; to manage memory well is unreliable. So you build systems that make good memory management the default, and bad memory management harder to do.</p><p>The checkpoint gate is essentially a forcing function. The agent can still skip checkpoints, but it has to actively ignore escalating warnings to do so. Most of the time, it doesn&#8217;t.</p><h2><strong>What This Fixed</strong></h2><p>Before Memory Guardian:</p><ul><li><p>Regularly lost context after compaction</p></li><li><p>Answered questions about past work incorrectly</p></li><li><p>Same questions got re-asked across sessions</p></li><li><p>Daily notes were sparse or nonexistent</p></li></ul><p>After:</p><ul><li><p>Breadcrumbs automatically saved before compaction</p></li><li><p>Daily notes have a reliable trail of what happened</p></li><li><p>Cross-session awareness of what other sessions are doing</p></li><li><p>Checkpoint warnings actually work (agent responds to escalation)</p></li></ul><p>The agent still isn&#8217;t perfect at memory management. But the floor is much higher. Even on a bad day, there&#8217;s a trail to follow.</p><h2><strong>Try It Yourself</strong></h2><p>The plugin is MIT licensed and works with OpenClaw v0.4.0+. 
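</p><p>If you want the flavor of the checkpoint gate without reading the whole plugin, here&#8217;s a simplified sketch of its escalation logic. The class and method names are hypothetical; the real plugin wires equivalent callbacks into OpenClaw&#8217;s hook system:</p>

```python
# Simplified sketch of the checkpoint gate. CheckpointGate, on_tool_call,
# and before_turn are illustrative names, not the plugin's actual API.

GENTLE, FIRM, HARD = 12, 20, 30  # tool-call thresholds from the post

class CheckpointGate:
    def __init__(self):
        self.calls_since_checkpoint = 0

    def on_tool_call(self, tool: str, path: str = "") -> None:
        # Any write under memory/ counts as a checkpoint and resets the gate.
        if tool in ("write", "edit") and path.startswith("memory/"):
            self.calls_since_checkpoint = 0
        else:
            self.calls_since_checkpoint += 1

    def before_turn(self) -> str:
        # Returns the reminder text to inject into context, if any.
        n = self.calls_since_checkpoint
        if n >= HARD:
            return "STOP. Checkpoint NOW."
        if n >= FIRM:
            return f"Checkpoint needed ({n} tool calls without writing to memory)."
        if n >= GENTLE:
            return f"Consider writing a checkpoint soon ({n} tool calls)."
        return ""

gate = CheckpointGate()
for _ in range(15):
    gate.on_tool_call("exec")
print(gate.before_turn())  # -> Consider writing a checkpoint soon (15 tool calls).
```

<p>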
You can grab it here:</p><p><a href="https://gist.github.com/joe-rlo/3c3193285804b05c99bbfe541ed53c4d">On GitHub</a><br><br>Drop it in <code>.openclaw/extensions/memory-guardian/</code>, restart your gateway, and you&#8217;re done. The config at the top of the file lets you tune the reminder frequencies, how much of daily notes to inject, and other details.</p><p>If you&#8217;re developing agents and tackling the memory issue, I believe this is the appropriate layer to address it. It&#8217;s not about adding extra instructions or extending prompts. Instead, it&#8217;s about implementing middleware that manages the mundane tasks, allowing your agent to concentrate on its core responsibilities.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.0x4a6f65.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">0x4A6F65 (Joe) is a reader-supported publication. 
To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Please Forget Me]]></title><description><![CDATA[Why perfect memory might be the wrong goal]]></description><link>https://www.0x4a6f65.com/p/please-forget-me</link><guid isPermaLink="false">https://www.0x4a6f65.com/p/please-forget-me</guid><dc:creator><![CDATA[Joe Spano]]></dc:creator><pubDate>Fri, 13 Feb 2026 14:33:44 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!hiCR!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97fee0bb-7c08-4d32-860c-70b9abb9346b_1408x768.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>NOTE: this is part of a series on AI agents. I suggest you start with the posts covering <a href="https://www.0x4a6f65.com/p/your-ai-agent-has-amnesia-and-its">AI memory</a> and <a href="https://www.0x4a6f65.com/p/the-pre-game-routine-your-ai-agent">AI pre-game routines</a>. - I&#8217;ve been writing about giving AI agents better memory. But there&#8217;s a question I&#8217;ve been circling that deserves its own treatment: should an AI remember everything?</em></p><p>The pitch for AI memory systems is always framed as a problem to solve. &#8220;Your agent forgets things; that&#8217;s bad, so let&#8217;s fix it.&#8221; Which leads to more context, better retrieval, persistent knowledge graphs. I&#8217;ve written about this; I&#8217;ve even built systems for it.</p><p>But human memory doesn&#8217;t work that way, and there&#8217;s a reason. 
So maybe we are chasing the wrong outcome.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!hiCR!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97fee0bb-7c08-4d32-860c-70b9abb9346b_1408x768.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!hiCR!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97fee0bb-7c08-4d32-860c-70b9abb9346b_1408x768.jpeg 424w, https://substackcdn.com/image/fetch/$s_!hiCR!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97fee0bb-7c08-4d32-860c-70b9abb9346b_1408x768.jpeg 848w, https://substackcdn.com/image/fetch/$s_!hiCR!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97fee0bb-7c08-4d32-860c-70b9abb9346b_1408x768.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!hiCR!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97fee0bb-7c08-4d32-860c-70b9abb9346b_1408x768.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!hiCR!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97fee0bb-7c08-4d32-860c-70b9abb9346b_1408x768.jpeg" width="620" height="338.1818181818182" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/97fee0bb-7c08-4d32-860c-70b9abb9346b_1408x768.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1408,&quot;resizeWidth&quot;:620,&quot;bytes&quot;:2300068,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.0x4a6f65.com/i/171300280?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97fee0bb-7c08-4d32-860c-70b9abb9346b_1408x768.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!hiCR!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97fee0bb-7c08-4d32-860c-70b9abb9346b_1408x768.jpeg 424w, https://substackcdn.com/image/fetch/$s_!hiCR!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97fee0bb-7c08-4d32-860c-70b9abb9346b_1408x768.jpeg 848w, https://substackcdn.com/image/fetch/$s_!hiCR!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97fee0bb-7c08-4d32-860c-70b9abb9346b_1408x768.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!hiCR!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97fee0bb-7c08-4d32-860c-70b9abb9346b_1408x768.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p><strong>Forgetting Is a Feature</strong></p><p>Cognitive psychology has a concept called <em>motivated forgetting</em>. Your brain actively suppresses certain memories, particularly painful ones. This isn&#8217;t a bug in human cognition. It&#8217;s a protection mechanism. Trauma survivors don&#8217;t benefit from perfect recall of their worst moments. The ability to let memories fade, to have them become less vivid, less intrusive over time, is part of how people heal.</p><p>There&#8217;s also <em>retrieval-induced forgetting</em>: the act of remembering some things makes you forget related things. This sounds like a flaw until you realize it&#8217;s how the brain stays focused. Perfect recall of every detail would be overwhelming.
The few documented cases of near-total recall suggest it&#8217;s more curse than gift<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>. Forgetting is how we filter signal from noise.</p><p>When we build AI systems that remember everything, we&#8217;re optimizing for a goal that human cognition explicitly evolved away from.</p><p><strong>The Embodied Part We Can&#8217;t Replicate</strong></p><p>Here&#8217;s something AI memory will never capture: the smell of your grandmother&#8217;s kitchen triggering a cascade of memories you didn&#8217;t know you still had. The song that puts you back in a specific car on a specific night. The texture of an object that brings back a decade.</p><p>Human memory is <em>embodied</em>. It&#8217;s intertwined with our senses, physical experiences, and the contextual information that exists in the world, rather than being stored in a database. When an AI &#8220;remembers&#8221; something, it retrieves text. When you remember something, your entire nervous system is involved.</p><p>That difference matters. The AI version of memory is useful, but it&#8217;s a sketch of the thing, not the thing itself. It&#8217;s practical, which has value, but something is lost when we start treating the sketch as equivalent to the real experience.</p><p><strong>What This Means for AI Memory Systems</strong></p><p>I&#8217;m not arguing against AI memory. The systems I&#8217;ve built make my agent dramatically more useful. But I&#8217;ve started thinking about intentional forgetting as a design consideration, not just a failure mode.</p><p>What should age out? What should fade? What should an agent actively <em>not</em> retrieve because surfacing it serves no purpose? These aren&#8217;t questions the current AI memory literature is asking. Everyone&#8217;s focused on remembering more.
Almost nobody is asking what should be forgotten.</p><p>The human brain spent millions of years figuring out that balance. We might want to pay attention to what it learned.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>The most documented case is Solomon Shereshevsky, studied by neuropsychologist Alexander Luria. His memory was so total that he struggled to think abstractly, couldn&#8217;t stop irrelevant associations from flooding in, and found the inability to forget distressing rather than useful. More recently, people with hyperthymesia (highly superior autobiographical memory) report similar experiences: the upside is never forgetting, the downside is <em>never forgetting</em>.</p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[Your AI Agent Treats All Memories Equal (And That’s a Problem)]]></title><description><![CDATA[Why your agent remembers lunch but forgets the launch.]]></description><link>https://www.0x4a6f65.com/p/your-ai-agent-treats-all-memories</link><guid isPermaLink="false">https://www.0x4a6f65.com/p/your-ai-agent-treats-all-memories</guid><dc:creator><![CDATA[Joe Spano]]></dc:creator><pubDate>Mon, 09 Feb 2026 19:07:04 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!9G-E!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ec09f78-799f-44a2-91e9-8c372f3a08c8_1408x768.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>NOTE: this is part of a series on AI agents and memory. 
I suggest you start with the posts covering <a href="https://www.0x4a6f65.com/p/your-ai-agent-has-amnesia-and-its">AI memory</a> and <a href="https://www.0x4a6f65.com/p/the-pre-game-routine-your-ai-agent">AI pre-game routines</a>.</em></p><p>Here&#8217;s a bug in how most AI memory systems work: they only track <em>when</em> something happened, not <em>how much it matters</em>.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!9G-E!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ec09f78-799f-44a2-91e9-8c372f3a08c8_1408x768.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!9G-E!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ec09f78-799f-44a2-91e9-8c372f3a08c8_1408x768.jpeg 424w, https://substackcdn.com/image/fetch/$s_!9G-E!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ec09f78-799f-44a2-91e9-8c372f3a08c8_1408x768.jpeg 848w, https://substackcdn.com/image/fetch/$s_!9G-E!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ec09f78-799f-44a2-91e9-8c372f3a08c8_1408x768.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!9G-E!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ec09f78-799f-44a2-91e9-8c372f3a08c8_1408x768.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!9G-E!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ec09f78-799f-44a2-91e9-8c372f3a08c8_1408x768.jpeg" width="534" 
height="291.27272727272725" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8ec09f78-799f-44a2-91e9-8c372f3a08c8_1408x768.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1408,&quot;resizeWidth&quot;:534,&quot;bytes&quot;:855654,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.0x4a6f65.com/i/186946144?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ec09f78-799f-44a2-91e9-8c372f3a08c8_1408x768.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!9G-E!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ec09f78-799f-44a2-91e9-8c372f3a08c8_1408x768.jpeg 424w, https://substackcdn.com/image/fetch/$s_!9G-E!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ec09f78-799f-44a2-91e9-8c372f3a08c8_1408x768.jpeg 848w, https://substackcdn.com/image/fetch/$s_!9G-E!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ec09f78-799f-44a2-91e9-8c372f3a08c8_1408x768.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!9G-E!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ec09f78-799f-44a2-91e9-8c372f3a08c8_1408x768.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>A trivial observation from yesterday (&#8220;Joe had coffee this morning<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>&#8221;) sits at higher priority than a critical decision from last month (&#8220;We decided to build the backend in Rust&#8221;). The recent always beats the important, and that&#8217;s backwards.</p><p>Human memory doesn&#8217;t work this way. You don&#8217;t remember what you ate for lunch three Tuesdays ago, but you absolutely remember the project deadline your boss mentioned that same day.
Your brain filters ruthlessly based on significance, not just recency.</p><p>So why do we build AI agents that can&#8217;t do the same?</p><h2><strong>The Recency Trap</strong></h2><p>Most agent memory systems use some version of temporal decay. Facts get a timestamp, salience scores decrease over time, old stuff eventually gets archived or forgotten. This is simple and intuitive, but it&#8217;s also wrong<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a>.</p><p>The formula looks something like this:</p><pre><code><code>salience = initial_score &#215; 0.5^(days_old / half_life)
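```python
# The decay above, as a runnable Python sketch. (The 7-day half_life and
# the example scores are illustrative assumptions, not values from the post.)
def salience(initial_score, days_old, half_life=7):
    # Halve the score every half_life days, regardless of what the fact is.
    return initial_score * 0.5 ** (days_old / half_life)

# Yesterday's trivia vs. a six-week-old architecture decision:
trivia = salience(0.5, days_old=1)     # ~0.45  ("Joe had coffee")
decision = salience(1.0, days_old=42)  # ~0.016 ("We're using Rust")
# Pure temporal decay ranks the trivia far above the decision.
```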
</code></code></pre><p>That&#8217;s exponential decay. A fact loses half its &#8220;importance&#8221; every N days, regardless of what that fact actually is. The half-life might vary by category (decisions last longer than random observations), but the core assumption is the same: time is the only factor.</p><p>This creates a predictable failure mode. Your agent remembers that you prefer tabs over spaces (mentioned yesterday) but forgets that the entire architecture depends on a specific API design decision (discussed six weeks ago). One of these matters more. The decay function doesn&#8217;t know which.</p><h2><strong>How Humans Actually Do This</strong></h2><p>The hippocampus doesn&#8217;t just timestamp memories and let them fade<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a>. It weights them across multiple dimensions before deciding what goes to long-term storage:</p><p><strong>Emotional salience.</strong> Did this trigger a strong reaction? Fear, excitement, frustration? Those memories get priority. This is why you remember embarrassing moments from middle school but not what you learned in class that day.</p><p><strong>Rehearsal.</strong> How often do you think about or retrieve this memory? Repeated access strengthens the trace. That&#8217;s why studying works (and why you still remember phone numbers you dialed constantly as a kid).</p><p><strong>Association.</strong> How connected is this memory to other things you know? Isolated facts decay faster than concepts that link to your existing mental model.</p><p><strong>Recency.</strong> Yes, time matters too. But it&#8217;s one factor among several, not the whole formula.</p><p>When psychologists study memory consolidation, they find this pattern consistently: the brain doesn&#8217;t treat all inputs equally. 
It runs something like a weighted importance function before deciding what&#8217;s worth keeping.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a></p><p>If we want AI agents to have memory that actually works, we should probably steal this design.</p><h2><strong>Building Importance-Weighted Retention</strong></h2><p>I spent last night rebuilding my agent&#8217;s memory system to incorporate these ideas.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a> Here&#8217;s what the new formula looks like:</p><pre><code><code>effective_salience = intrinsic_importance 
                   &#215; temporal_decay 
                   &#215; access_boost 
                   &#215; reference_boost 
                   + explicit_boost
</code></code></pre><p>Let&#8217;s break each factor down.</p><h3><strong>Intrinsic Importance</strong></h3><p>Not all facts are created equal. A decision (&#8220;We&#8217;re using Rust&#8221;) shapes future behavior. A preference (&#8220;Joe likes dark mode&#8221;) reflects identity. A random observation (&#8220;It rained on Tuesday&#8221;) might be ephemeral noise.</p><p>I assign base weights by fact type:</p><pre><code>| Metric | Before | After |
|--------|--------|-------|
| Decisions retained (30+ days) | ~40% | ~90% |
| Trivial facts archived | ~10% | ~45% |
| Relevant context in responses | subjectively better | notably better |</code></pre><p>This alone would be an improvement. But we can do better.</p><h3><strong>Access Frequency (Rehearsal)</strong></h3><p>When my agent retrieves a fact during a conversation, it logs the access. Facts that keep coming up in context are probably important.</p><pre><code><code>access_boost = 1 + log(access_count + 1) &#215; 0.1</code></code></pre><p>A fact retrieved 10 times gets about a 25% boost over one that&#8217;s never been accessed. This mirrors how rehearsal strengthens human memory traces.</p><h3><strong>Reference Boost (Association)</strong></h3><p>Some facts are hubs. They connect to many other concepts in the knowledge graph. If ten different facts reference &#8220;FogoBlocks project,&#8221; that&#8217;s clearly a key concept that should stick around.</p><pre><code><code>reference_boost = 1 + (inbound_links &#215; 0.05)</code></code></pre><p>This is basically PageRank for memory. Well-connected facts are probably important.</p><h3><strong>Explicit Signals</strong></h3><p>Sometimes you just tell the agent &#8220;remember this&#8221; or &#8220;this is important.&#8221; That should mean something.</p><pre><code><code>explicit_boost = manual_boost_amount  # additive, not multiplicative</code></code></pre><p>A small escape valve for cases where the algorithm gets it wrong.</p><h2><strong>The Results</strong></h2><p>Before implementing this, I ran into the exact problem you&#8217;d expect. My agent would confidently cite details from recent conversations while fumbling on core decisions made weeks ago. The context window filled with recent trivia while important context got pushed out.</p><p>After implementing importance-weighted retention:</p><pre><code>| Type | Weight | Why |
|------|--------|-----|
| decision | 1.0 | Shapes future actions |
| preference | 0.9 | User identity |
| commitment | 0.85 | Promises matter |
| learning | 0.7 | Lessons to keep |
| fact | 0.5 | Could be signal or noise |</code></pre><p>The numbers are rough because &#8220;relevant context&#8221; is hard to measure objectively. But the qualitative difference is clear. My agent now treats a two-month-old architectural decision as more important than what I mentioned yesterday about wanting pizza.</p><p>Which, you know. Seems right.</p><h2><strong>The Consolidation Problem (What&#8217;s Next)</strong></h2><p>This approach helps, but it doesn&#8217;t fully solve the memory scaling problem. Once you have thousands of facts, even well-weighted retrieval gets noisy. The real solution is probably memory consolidation: compressing clusters of related episodic facts into higher-level semantic summaries.</p><p>Humans do this during sleep.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a> Your brain doesn&#8217;t keep every specific episode; it extracts patterns and generalizations. &#8220;I went to the coffee shop on Tuesday and saw Sarah&#8221; eventually becomes &#8220;Sarah and I used to meet at coffee shops.&#8221;</p><p>For AI agents, this might look like: periodically running an LLM pass over clusters of related facts and generating summary facts that can replace the epiwwrsodics. Trade granularity for compression. Keep the semantic meaning, drop the irrelevant specifics.</p><p>That&#8217;s the next build. For now, importance-weighted retention is a meaningful step up from naive temporal decay.</p><h2><strong>The Takeaway</strong></h2><p>If you&#8217;re building agents with memory systems, don&#8217;t just track when something happened. Track:</p><ul><li><p>What type of fact is this?</p></li><li><p>How often does it come up?</p></li><li><p>What else does it connect to?</p></li><li><p>Did the user explicitly say it matters?</p></li></ul><p>Then combine those signals into a single salience score. Weight it. 
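</p><p>Combining those signals might look like this minimal Python sketch. The function and variable names, the 30-day half-life, and the example numbers are illustrative assumptions, not the exact implementation from my <code>memory-db.py</code> script:</p>

```python
import math

# Base weights by fact type, echoing the table above.
TYPE_WEIGHTS = {"decision": 1.0, "preference": 0.9, "commitment": 0.85,
                "learning": 0.7, "fact": 0.5}

def effective_salience(fact_type, days_old, access_count, inbound_links,
                       explicit_boost=0.0, half_life=30):
    intrinsic = TYPE_WEIGHTS.get(fact_type, 0.5)
    temporal = 0.5 ** (days_old / half_life)       # recency: one factor, not the whole formula
    access = 1 + math.log(access_count + 1) * 0.1  # rehearsal
    references = 1 + inbound_links * 0.05          # association ("PageRank for memory")
    return intrinsic * temporal * access * references + explicit_boost

# A six-week-old decision that keeps coming up outranks yesterday's trivia:
old_decision = effective_salience("decision", days_old=42, access_count=10, inbound_links=8)
fresh_trivia = effective_salience("fact", days_old=1, access_count=0, inbound_links=0)
# old_decision ~ 0.66, fresh_trivia ~ 0.49
```

<p>With rehearsal and association in the mix, the old decision survives heavy temporal decay; under pure decay it would have lost badly.</p><p>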
Let the important stuff survive.</p><p>Human memory evolved over millions of years to solve exactly this problem: infinite inputs, finite storage, need to keep what matters. We don&#8217;t have to solve it from scratch. We can just steal the design.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Everything is not so deep, bro.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>Wrong as in incorrect, not wrong as in immoral.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>I&#8217;m simplifying the neuroscience here. The actual mechanisms of memory consolidation involve complex interactions between the hippocampus, neocortex, and various neuromodulatory systems. But the functional outcome is what matters for our purposes: the brain doesn&#8217;t treat all inputs equally.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>This is a well-studied area. See work on emotional memory enhancement, spacing effects, and elaborative encoding if you want to go deeper. This is that deep, bro-tato.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>The full implementation is in my agent&#8217;s <code>memory-db.py</code> script. 
Happy to share if useful.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>Memory consolidation during sleep is one of those findings that sounds too convenient to be true, but the evidence is actually quite strong. Your brain replays and reorganizes memories while you&#8217;re unconscious. So maybe you&#8217;re not forgetful, you just need more sleep.</p></div></div>]]></content:encoded></item><item><title><![CDATA[Your AI Agent Deserves Its Own Email Address]]></title><description><![CDATA[They&#8217;re being exploited to generate more spam, so it might be better to let them join in on the fun.]]></description><link>https://www.0x4a6f65.com/p/your-ai-agent-deserves-its-own-email</link><guid isPermaLink="false">https://www.0x4a6f65.com/p/your-ai-agent-deserves-its-own-email</guid><dc:creator><![CDATA[Joe Spano]]></dc:creator><pubDate>Fri, 06 Feb 2026 14:59:24 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!8wMh!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0790efc-50e8-4946-b8ea-573ea178af19_1408x768.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>NOTE: this is part of a series on AI agents. 
I suggest you start with the posts covering <a href="https://www.0x4a6f65.com/p/your-ai-agent-has-amnesia-and-its">AI memory</a> and <a href="https://www.0x4a6f65.com/p/the-pre-game-routine-your-ai-agent">AI pre-game routines</a>.</em></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!8wMh!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0790efc-50e8-4946-b8ea-573ea178af19_1408x768.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!8wMh!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0790efc-50e8-4946-b8ea-573ea178af19_1408x768.jpeg 424w, https://substackcdn.com/image/fetch/$s_!8wMh!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0790efc-50e8-4946-b8ea-573ea178af19_1408x768.jpeg 848w, https://substackcdn.com/image/fetch/$s_!8wMh!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0790efc-50e8-4946-b8ea-573ea178af19_1408x768.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!8wMh!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0790efc-50e8-4946-b8ea-573ea178af19_1408x768.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!8wMh!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0790efc-50e8-4946-b8ea-573ea178af19_1408x768.jpeg" width="612" height="333.8181818181818" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a0790efc-50e8-4946-b8ea-573ea178af19_1408x768.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1408,&quot;resizeWidth&quot;:612,&quot;bytes&quot;:3678833,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.0x4a6f65.com/i/186782311?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0790efc-50e8-4946-b8ea-573ea178af19_1408x768.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!8wMh!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0790efc-50e8-4946-b8ea-573ea178af19_1408x768.jpeg 424w, https://substackcdn.com/image/fetch/$s_!8wMh!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0790efc-50e8-4946-b8ea-573ea178af19_1408x768.jpeg 848w, https://substackcdn.com/image/fetch/$s_!8wMh!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0790efc-50e8-4946-b8ea-573ea178af19_1408x768.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!8wMh!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0790efc-50e8-4946-b8ea-573ea178af19_1408x768.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>I signed my AI bot up for its own Gmail account last week. Then a GitHub account, plus API keys for a handful of services it uses regularly. Somewhere between creating the third account and watching it push its first commit to a repo under its own name, something clicked: I wasn&#8217;t configuring a tool anymore. I was onboarding an employee.</p><p>That distinction matters more than it sounds like it should.</p><h2><strong>This Goes Beyond Automation</strong></h2><p>We&#8217;ve had automation for years. Zapier, IFTTT, cron jobs, CI/CD pipelines. They all follow the same concept: set a trigger, define an action, and let it run. Automation is excellent at repetitive, predictable work, and nobody needs AI for that.</p><p>What we&#8217;re talking about here is different in kind, not degree. This is essentially a personal employee.
Something that interprets ambiguous instructions, makes judgment calls, (hopefully) learns from feedback, and can adapt to how you specifically work. Where automation follows a script, an AI agent collaborates on one.</p><p>That distinction changes how you approach the entire setup. You don&#8217;t &#8220;configure&#8221; an employee the way you&#8217;d set up a Zapier workflow; you onboard them.</p><h2><strong>Onboard It Like You&#8217;d Onboard Anyone</strong></h2><p>Using an AI agent as your primary collaborator requires the same approach you&#8217;d take with a new hire. More accurately, the same approach a coach takes with a new player, or a teacher with a new student<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>.</p><p>Start broad. Introduce the goals, the culture, the expectations. What does success look like? What are the non-negotiables? Then get progressively more specific, down to the edge cases that only surface in practice.</p><p>Sometimes the learning curve is fast. You explain something once and the agent runs with it. Other times you&#8217;ll find yourself repeating the same correction for the fifth time, wondering if anything is registering. Both experiences are normal. These are still computer systems that need guardrails to produce the output you actually want. Without those guardrails, the agent defaults to its own assumptions, which may or may not have anything to do with yours<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a>.</p><p>Patience matters here. So does knowing your use cases. Not every task benefits from delegation. Part of the onboarding process is figuring out where the agent adds real value and where you&#8217;re better off doing it yourself. That assessment is ongoing.
It&#8217;s never a one-time decision.</p><h2><strong>Autonomy With Boundaries</strong></h2><p>Here&#8217;s where the employee analogy gets practical.</p><p>Good managers give their people autonomy. They trust them with real responsibilities, real tools, real access. Micromanagement produces fragile, dependent workers who can&#8217;t function without constant direction. The same principle applies to AI agents. You want yours to accomplish things independently, develop capabilities over time, and bring you results rather than asking for permission at every step.</p><p>But autonomy and blind trust are not the same thing.</p><p>I would never install an AI agent on my main computer. I would never give it direct access to my personal email, my files, my primary accounts. That&#8217;s not paranoia; it&#8217;s basic operational hygiene. You wouldn&#8217;t hand a new hire the keys to your house on their first day, no matter how good the interview went.</p><p>What I did instead: I gave it its own workspace. Its own email. Its own GitHub. Its own credentials for the services it needs. The bot operates under its own identity and collaborates with me the same way a remote collaborator would, through shared repos, shared docs, and messaging.</p><p>Modern tools make this surprisingly easy. Almost every app worth using already has collaboration features, shared workspaces, and permission controls. The infrastructure for giving an AI agent its own bounded workspace already exists, so there is no excuse and no reason to build anything custom just to start.</p><p>Sure, this adds some cost. Additional accounts, maybe a VPS or a separate environment, but you should get that investment back in productivity many times over. 
A few extra accounts and a modest server cost almost nothing compared to the value of an agent that can actually operate on its own.</p><h2><strong>Choose Your Model Like You&#8217;d Choose Your Hire</strong></h2><p>The model powering your agent matters, and not only for the reasons most people focus on.</p><p>Capabilities matter, obviously. Some models are better at coding, some at reasoning, some at following complex multi-step instructions. But the consideration people overlook is privacy.</p><p>If you&#8217;re routing your work, your code, and your business context through a model, you should care about where that data goes. Using a model from a Chinese company? It might benchmark beautifully. It might even be the best option for certain tasks. But think about what you&#8217;re sending through it and who has access on the other end. This concern isn&#8217;t limited to Chinese models either. The same question applies to every provider. Where does the data live? Who can see it? What are the terms?</p><p>Pick a model that fits both your capability needs and your privacy requirements. Neither one should be an afterthought.</p><h2><strong>Run Your Own Shop</strong></h2><p>If you use something like Claude Code or ChatGPT&#8217;s built-in agent features, you&#8217;re working within the constraints of a consumer product. Those companies are building for the broadest possible audience, which means keeping things simple, limiting integrations, and applying guardrails that make sense for the average user.</p><p>Those guardrails also cap what&#8217;s possible.</p><p>When you run your own agent setup (whether that&#8217;s OpenClaw, a custom framework, or something you built yourself), the constraints are yours to define. You decide what the agent can access, what tools it gets, how much freedom it has. 
You can let it explore, experiment, and get curious about problems in ways that a locked-down consumer product would never permit.</p><p>Setting this up takes some technical confidence (or naivety). If you don&#8217;t have it yourself, find someone who does. A friend, a colleague, someone willing to spend an afternoon helping you get the foundation in place. The initial setup is a one-time cost, yet the flexibility you get back is ongoing.</p><p>I&#8217;ve found things I never would have discovered using a consumer AI product<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a>. Not because those products are bad, but because they&#8217;re designed to be safe and predictable. Sometimes that&#8217;s exactly what you want. Other times you want to let the bot follow a thread and see where it leads. Running your own setup gives you that choice.</p><h2><strong>Memory Is the Foundation</strong></h2><p>None of this works if your agent forgets everything between sessions. An employee who shows up every morning with no memory of yesterday isn&#8217;t an employee; they&#8217;re a temp. (And if this is your human employee, please get them help immediately.)</p><p>The more you invest in your agent&#8217;s ability to learn and remember, the more that initial onboarding effort compounds. Every preference you teach, every correction that sticks, every pattern it picks up builds on the last one. That&#8217;s the whole point of treating it like an employee rather than a tool. Tools don&#8217;t improve with use, but employees do (well, not all of them, but you get the idea).</p><h2><strong>The Bottom Line</strong></h2><p>Give your agent its own workspace, own accounts, and bounded autonomy. Onboard it the way you&#8217;d onboard anyone you expect to work with long-term. 
Be patient with the learning curve, deliberate about what you delegate, and thoughtful about the infrastructure choices that hold the whole thing together.</p><p>The era of firing up a chatbot and lobbing questions at it is already behind us. What comes next looks a lot more like management than engineering, and the people who figure that out early are going to have a real edge.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>There&#8217;s an interesting inversion here. Most people talk about AI replacing teachers. What&#8217;s actually happening is that the most important skill for using AI effectively <em>is</em> being a good teacher.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>The default behavior of most models is &#8220;be helpful and verbose.&#8221; Sounds fine until you realize that &#8220;helpful&#8221; and &#8220;what you actually wanted&#8221; can be very different things. Being sycophantic is not helpful.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>Last month my agent stumbled into a completely novel approach to a caching problem because I gave it room to explore outside the obvious patterns. A consumer product would have given me the standard answer from the docs. My agent went sideways and found something better.</p></div></div>]]></content:encoded></item><item><title><![CDATA[Your Job Changed. 
You Didn&#8217;t Notice.]]></title><description><![CDATA[The Cognitive Shift]]></description><link>https://www.0x4a6f65.com/p/your-job-changed-you-didnt-notice</link><guid isPermaLink="false">https://www.0x4a6f65.com/p/your-job-changed-you-didnt-notice</guid><dc:creator><![CDATA[Joe Spano]]></dc:creator><pubDate>Thu, 05 Feb 2026 14:06:06 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!CZOH!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F77e95739-d267-4310-b8f0-031c63fe4854_1408x691.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>NOTE: this is part of a series on AI agents. Previous posts covered <a href="https://www.0x4a6f65.com/p/your-ai-agent-has-amnesia-and-its">AI memory</a> and <a href="https://www.0x4a6f65.com/p/the-pre-game-routine-your-ai-agent">AI pre-game routines</a>.</em></p><p>I opened my IDE last week and had one of those quiet realizations that rearranges how you see things. I hadn&#8217;t personally written a line of code in a couple of days. Not because I was stuck or because the project stalled. Simply because my AI agent was writing it.</p><p>My fingers were still on the keyboard every day. I was still putting in full sessions. But the actual work had transformed underneath me, so gradually that I almost missed it.</p><h2><strong>You Didn&#8217;t Stop Working. 
You Changed Roles.</strong></h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!CZOH!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F77e95739-d267-4310-b8f0-031c63fe4854_1408x691.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!CZOH!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F77e95739-d267-4310-b8f0-031c63fe4854_1408x691.jpeg 424w, https://substackcdn.com/image/fetch/$s_!CZOH!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F77e95739-d267-4310-b8f0-031c63fe4854_1408x691.jpeg 848w, https://substackcdn.com/image/fetch/$s_!CZOH!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F77e95739-d267-4310-b8f0-031c63fe4854_1408x691.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!CZOH!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F77e95739-d267-4310-b8f0-031c63fe4854_1408x691.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!CZOH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F77e95739-d267-4310-b8f0-031c63fe4854_1408x691.jpeg" width="1408" height="691" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/77e95739-d267-4310-b8f0-031c63fe4854_1408x691.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:691,&quot;width&quot;:1408,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2413017,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.0x4a6f65.com/i/186378483?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb6936954-fdd4-48fc-9a2a-e81f607769bc_1408x768.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!CZOH!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F77e95739-d267-4310-b8f0-031c63fe4854_1408x691.jpeg 424w, https://substackcdn.com/image/fetch/$s_!CZOH!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F77e95739-d267-4310-b8f0-031c63fe4854_1408x691.jpeg 848w, https://substackcdn.com/image/fetch/$s_!CZOH!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F77e95739-d267-4310-b8f0-031c63fe4854_1408x691.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!CZOH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F77e95739-d267-4310-b8f0-031c63fe4854_1408x691.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><p>Here&#8217;s what happened to me, and I suspect it&#8217;s happening to anyone who&#8217;s gone deep with AI agents: I went from being a developer to being a director<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>. Code still gets written, features still ship, and best of all, bugs still get squashed. But I&#8217;m no longer the one typing the implementation. My role shifted to focus more on product thinking, creative direction, architecture decisions, and quality review<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a>. 
The hands-on coding execution moved to the bot.</p><p>For someone self-taught, without a computer science background, this changes everything<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a>. I can experiment with approaches I never would have attempted before. Not because I suddenly learned distributed systems or compiler design, but because I don&#8217;t need to implement them from scratch. I describe what I want, evaluate what comes back, and iterate from there. The gap between &#8220;I have an idea&#8221; and &#8220;I have a working prototype&#8221; collapsed.</p><p>The output isn&#8217;t perfect. I want to be honest about that. But it gets me closer to what I want, faster, and it lets me push the boundaries of what I&#8217;m willing to try in the first place. That&#8217;s the real change. Not the quality of any individual output, but the expansion of what I&#8217;m willing to attempt.</p><h2><strong>The Generalist Advantage</strong></h2><p>The people seeing the biggest returns right now are generalists. If you&#8217;re the kind of person wearing five or so hats (a little frontend, some backend, some ops, product thinking, maybe some design, barista), AI agents start paying off in ways that compound fast.</p><p>Why do I believe this to be true? Generalists have the broadest surface area for time savings. A deep specialist doing one thing all day might see modest improvements in that one area. A generalist touching six different domains can offload pieces of each one. The gains stack up. And if you have ADHD, well, this is like a superpower. You might actually complete one thing you set out to do.</p><p>Here&#8217;s where it gets concrete: if I can compress four hours of work into one, that&#8217;s not a small optimization. That&#8217;s a fundamentally different relationship with my workday. The time you reclaim isn&#8217;t abstract either. 
You can put it toward learning something new, building something speculative, or (revolutionary concept) closing the laptop before dinner. Work-life balance becomes something real instead of something you put in your &#8220;my parakeet was murdered, here&#8217;s what it taught me about work-life balance&#8221; post on LinkedIn.</p><h2><strong>Nothing About This Is Magic</strong></h2><p>I want to say this clearly because the expectation gap is the single biggest source of frustration I see: none of this is instant and none of it is magic. We are not there yet. I don&#8217;t know when we will be, but we&#8217;re not there today.</p><p>What we have right now can <em>feel</em> magical. Watching an AI agent build a feature from a description, debug its own mistakes, and open a pull request is still wild to me every time. But getting to that point took real effort. Deliberate, sometimes tedious effort.</p><p>Think about it like hiring a personal assistant. When someone reaches a point in their career where they bring one on, that person doesn&#8217;t walk in on day one and seamlessly run your life. They need to learn your preferences. Your priorities. What &#8220;urgent&#8221; means to you versus everyone else. The things you care about and the things that drive you crazy. That context transfer takes time, and there&#8217;s no shortcut around it.</p><p>AI agents are identical in this respect. You are transferring context about who you are, how you work, and what you expect. That transfer <em>is</em> the work. If you expect to install a tool and immediately have a superhuman collaborator, you are going to be disappointed. Every time.</p><h2><strong>You Still Need a Plan</strong></h2><p>There&#8217;s a seductive idea floating around that an always-on AI agent means you have a collaborator working twenty-four seven. Technically true. 
But you probably don&#8217;t have twenty-four hours a day of meaningful work to delegate.</p><p>Sitting around trying to invent tasks for your agent is backwards. The plan comes first. What are you building? What are the milestones? What does &#8220;done&#8221; look like? An AI agent accelerates execution against a plan. Without the plan, you have an expensive engine with nowhere to drive.</p><p>The discipline of defining what you want, clearly enough for an agent to act on, turns out to be its own skill. Which brings me to the real point.</p><h2><strong>The Skill That Actually Matters Now</strong></h2><p>The skill separating people who extract massive value from AI agents from people who quit after a week is not technical. It&#8217;s managerial.</p><p>Can you set clear expectations? Can you decompose a complex goal into tasks something else can execute? Can you evaluate output critically and provide feedback that actually improves the next attempt? Can you stay patient when the learning curve is steep and persistent when results lag behind your hopes?</p><p>Those are the skills of a good teacher. A good coach. Even a good manager (yes Todd, patience with your employees will make you a better manager). They are exactly the skills required to get consistent, high-quality work from an AI agent.</p><p>The irony is hard to miss. The most important skill for working with AI has almost nothing to do with technology. It is the deeply human capacity to teach, lead, and manage<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a>. People who were already good managers are going to have a serious head start here, whether they&#8217;ve ever written a line of code or not.</p><p>That&#8217;s where things are heading. Not toward a world where everyone needs to learn to code, but toward a world where everyone needs to learn to direct. The question isn&#8217;t whether you can do the work. 
It&#8217;s whether you can lead something that does it for you.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Maestro, orchestrator, manager, coach, big cheese.  </p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>PLEASE review your code and the output, please! Do not trust these machines; they are far from perfect. </p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>I&#8217;m self-taught. Started with hacking around AOL and learned through building. All my degrees are in psychology, so no CS degree, but I knew enough to build a lot of things over the years. Despite this, there was always a feeling of a ceiling that I would never break through just being self-taught (this is totally a personal feeling, lots of self-taught developers crush). What changes is that the ceiling doesn&#8217;t disappear with AI, but it gets a lot higher.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>There&#8217;s an interesting inversion here. Most people talk about AI replacing teachers. 
What&#8217;s actually happening is that the most important skill for using AI effectively <em>is</em> being a good teacher and communicator.</p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[Give Your AI Agent a Memory — Here's the Skill That Does It]]></title><description><![CDATA[A follow-up to Your AI Agent Has Amnesia, and It's Not a Tech Problem]]></description><link>https://www.0x4a6f65.com/p/give-your-ai-agent-a-memory-heres</link><guid isPermaLink="false">https://www.0x4a6f65.com/p/give-your-ai-agent-a-memory-heres</guid><dc:creator><![CDATA[Joe Spano]]></dc:creator><pubDate>Tue, 03 Feb 2026 16:14:24 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!lblw!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa2eba54-efe1-4c56-8193-984e0dc0c0af_1408x768.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>NOTE: this is the 3rd in a series I&#8217;m writing on memory and AI agents, the first can be found <a href="https://www.0x4a6f65.com/p/your-ai-agent-has-amnesia-and-its">here</a>, and second one <a href="https://open.substack.com/pub/0x4a6f65/p/the-pre-game-routine-your-ai-agent">here</a>.</em></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!lblw!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa2eba54-efe1-4c56-8193-984e0dc0c0af_1408x768.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!lblw!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa2eba54-efe1-4c56-8193-984e0dc0c0af_1408x768.jpeg 424w, 
https://substackcdn.com/image/fetch/$s_!lblw!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa2eba54-efe1-4c56-8193-984e0dc0c0af_1408x768.jpeg 848w, https://substackcdn.com/image/fetch/$s_!lblw!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa2eba54-efe1-4c56-8193-984e0dc0c0af_1408x768.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!lblw!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa2eba54-efe1-4c56-8193-984e0dc0c0af_1408x768.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!lblw!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa2eba54-efe1-4c56-8193-984e0dc0c0af_1408x768.jpeg" width="1408" height="768" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/fa2eba54-efe1-4c56-8193-984e0dc0c0af_1408x768.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1408,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1197316,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.0x4a6f65.com/i/186540693?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa2eba54-efe1-4c56-8193-984e0dc0c0af_1408x768.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!lblw!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa2eba54-efe1-4c56-8193-984e0dc0c0af_1408x768.jpeg 424w, https://substackcdn.com/image/fetch/$s_!lblw!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa2eba54-efe1-4c56-8193-984e0dc0c0af_1408x768.jpeg 848w, https://substackcdn.com/image/fetch/$s_!lblw!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa2eba54-efe1-4c56-8193-984e0dc0c0af_1408x768.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!lblw!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa2eba54-efe1-4c56-8193-984e0dc0c0af_1408x768.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" 
stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>I wrote about why AI agents forget everything and how the fix has more to do with cognitive psychology than vector databases. I&#8217;m sure you&#8217;re all asking the obvious question: <em>cool, can I use this?</em></p><p>But of course, here&#8217;s the tool.</p><h2>memory-pipeline: Three Scripts, One Habit</h2><p>The <a href="https://github.com/openclaw/openclaw/tree/main/skills/memory-pipeline">memory-pipeline skill</a> is an open-source addition to <a href="https://github.com/openclaw/openclaw">OpenClaw</a> that gives your agent a nightly consolidation process &#8212; the AI equivalent of sleeping on it.</p><p>It runs three stages:</p><p><strong>1. Extract</strong> &#8212; Reads your agent&#8217;s daily notes and session transcripts, pulls out structured facts (decisions, preferences, commitments, things learned), and stores them as typed entries with confidence scores. Think of it as automatic journaling.</p><p><strong>2. Link</strong> &#8212; Connects those facts into a knowledge graph. Related facts get linked. Contradictions get flagged. Over time, your agent builds a semantic memory &#8212; organized by meaning, not by date. Inspired by the Zettelkasten method and A-Mem research.</p><p><strong>3. Brief</strong> &#8212; Every morning, generates a <code>BRIEFING.md</code> from the knowledge graph: active projects, recent decisions, personality reminders, things not to forget. 
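</p><p>As a sketch, a morning briefing might look something like this (the contents here are hypothetical; yours will reflect whatever is in your knowledge graph):</p><pre><code># BRIEFING.md

## Active projects
- memory-pipeline: link-stage refactor still in review

## Recent decisions
- Nightly consolidation moved to 03:00 local time

## Don't forget
- Contradiction flagged: two different deploy targets recorded for the API project
</code></pre><p>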
Your agent starts each session primed instead of blank.</p><h2>What You Need</h2><ul><li><p>An <a href="https://github.com/openclaw/openclaw">OpenClaw</a> agent (or any OpenClaw-compatible setup)</p></li><li><p>An API key for at least one LLM provider (OpenAI, Anthropic, or Google &#8212; the skill auto-detects whichever you have. Should be a frontier model.)</p></li><li><p>Daily notes in <code>memory/YYYY-MM-DD.md</code> (which OpenClaw agents already create naturally)</p></li></ul><p>That&#8217;s it. No external databases. No vector stores. No infrastructure. Three Python scripts and a cron job.</p><h2>Install</h2><p>Drop the skill into your workspace:</p><pre><code><code>skills/memory-pipeline/
&#9500;&#9472;&#9472; SKILL.md
&#9500;&#9472;&#9472; scripts/
&#9474;   &#9500;&#9472;&#9472; memory-extract.py
&#9474;   &#9500;&#9472;&#9472; memory-link.py
&#9474;   &#9492;&#9472;&#9472; memory-briefing.py
&#9492;&#9472;&#9472; references/
    &#9492;&#9472;&#9472; setup.md
</code></code></pre><p>Then wire it into your agent&#8217;s heartbeat or cron schedule:</p><pre><code><code># Run nightly or on a schedule
python3 scripts/memory-extract.py    # Extract facts from recent notes
python3 scripts/memory-link.py       # Build/update knowledge graph
python3 scripts/memory-briefing.py   # Generate tomorrow's briefing
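
# Example crontab entry (hypothetical path and time; adjust to your setup):
# 0 3 * * * cd /path/to/workspace; python3 scripts/memory-extract.py; python3 scripts/memory-link.py; python3 scripts/memory-briefing.py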
</code></code></pre><p>The skill auto-detects your workspace, finds your daily notes, and picks whichever LLM provider you have configured. No hardcoded paths, no manual setup.</p><h2>What Changes</h2><p>Before the pipeline, my agent would wake up each session and re-read raw files hoping to find context. After a compaction event mid-conversation, it would lose the thread entirely and start asking me questions I&#8217;d already answered.</p><p>After: it wakes up, reads a briefing that tells it exactly what&#8217;s active, what was decided recently, and what to watch out for. When I ask about something from last week, it searches a knowledge graph instead of scanning through days of notes hoping to get lucky.</p><p>The difference isn&#8217;t subtle. It&#8217;s the difference between a coworker who took notes at yesterday&#8217;s meeting and one who was technically in the room but checked out the whole time.</p><h2>The Behavioral Piece</h2><p>The skill handles consolidation. But as I wrote in the original post, the single highest-impact change was behavioral &#8212; making the agent actually <em>use</em> its memory before answering. If you install the pipeline but don&#8217;t update your agent&#8217;s instructions to search before guessing, you&#8217;ll get organized knowledge that never gets retrieved.</p><p>Add this to your agent&#8217;s instructions (AGENTS.md or equivalent):</p><blockquote><p><strong>ALWAYS run memory_search before answering questions about past work, decisions, dates, people, preferences, or todos.</strong></p></blockquote><p>One line. Bigger impact than the entire pipeline. But the pipeline gives it something worth searching.</p><h2>Teaching Your Agent Everything You Already Asked</h2><p>The memory pipeline handles what your agent learns going forward. 
But what about everything <em>you</em> already asked a previous AI?</p><p>If you&#8217;ve been using ChatGPT for the past two years, you&#8217;ve got hundreds, maybe thousands, of conversations sitting in an export file. That research you did at 2 AM, all those debugging sessions, the 30 business ideas you got excited about but only explored halfway, and all the decisions you made and then forgot making. That conversation history is a map of how you think.</p><p>The memory-pipeline skill also includes a knowledge ingestion system that lets you feed external data directly into your agent&#8217;s searchable memory.</p><h2>Your ChatGPT History, Searchable in Seconds</h2><p>The first ingestion script handles ChatGPT exports:</p><pre><code><code># Export your data from ChatGPT (Settings &#8594; Data Controls &#8594; Export Data)
# Then run:
python3 scripts/ingest-chatgpt.py ~/imports/chatgpt-export.zip
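# Optional: preview first. Per the skill docs, --dry-run reports
# what would be imported without writing anything:
python3 scripts/ingest-chatgpt.py ~/imports/chatgpt-export.zip --dry-run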
</code></code></pre><p>That&#8217;s it. The script:</p><ul><li><p>Parses ChatGPT&#8217;s conversation tree format</p></li><li><p>Filters out throwaway one-liners (configurable thresholds)</p></li><li><p>Lets you exclude topics by keyword (shared account? filter out someone else&#8217;s work conversations)</p></li><li><p>Converts each meaningful conversation into a clean, dated markdown file</p></li><li><p>Drops them into <code>memory/knowledge/chatgpt/</code> where they&#8217;re automatically indexed</p></li></ul><p>Once indexed, your agent can semantically search your entire ChatGPT history. That regex question you asked eight months ago, and then again three months later? The product brainstorm from last March? It&#8217;s all retrievable.</p><h2>Why This Matters</h2><p>Most people treat AI conversations as disposable. You ask a question, get an answer, close the tab. But collectively, those conversations represent a massive amount of <em>your</em> thinking &#8212; your research patterns, your decision-making process, the problems you&#8217;ve been circling.</p><p>Feeding that history into an agent with persistent memory means it doesn&#8217;t just know what you told it &#8212; it knows what you&#8217;ve been <em>thinking about</em>. It can connect a question you asked today to research you did six months ago in a completely different tool.</p><p>That&#8217;s not memory. That&#8217;s context. And context is what turns a chatbot into a collaborator.</p><h2>Filtering What Goes In</h2><p>Shared a ChatGPT account with someone? The script supports topic exclusion filters &#8212; regex patterns checked against conversation titles and content. In our case, we filtered out ~50 non-work research conversations from a shared account in one line:</p><pre><code><code>EXCLUDE_PATTERNS = [
    r'asthma', r'pediatric', r'\bNIH\b', r'medical',
    # ... add whatever topics you want to skip
]
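
# Illustrative sketch (the helper name is mine, not from the actual
# script) of how exclusion patterns like these can be applied:
import re

def is_excluded(title, body, patterns):
    """True if any pattern matches the title or body, case-insensitively."""
    text = title + "\n" + body
    return any(re.search(p, text, re.IGNORECASE) for p in patterns)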
</code></code></pre><p>Clean separation. Only your stuff makes it into memory.</p><h2>The Pattern Is Extensible</h2><p>ChatGPT is just the first source. The ingestion pattern is simple: parse the source format, chunk by topic, write markdown, let the indexer handle search. Google Search history, Notion exports, browser bookmarks, Slack archives &#8212; same approach, different parser.</p><p>The key insight: <strong>your agent&#8217;s memory shouldn&#8217;t start from zero just because you switched tools.</strong> Everything you&#8217;ve learned across every platform should be accessible in one place.</p><h2>Try It</h2><p>Install the memory-pipeline skill:</p><pre><code><code>clawdhub install memory-pipeline
</code></code></pre><p>Full usage docs are in the <a href="https://clawdhub.com/skills/memory-pipeline">SKILL.md</a>. Start with a <code>--dry-run</code> to preview what&#8217;ll be imported before committing.</p><p>Give your bot a better memory today!</p>]]></content:encoded></item><item><title><![CDATA[The Pre-Game Routine Your AI Agent Desperately Needs]]></title><description><![CDATA[Performance psychology meets agent architecture: why the smartest agent in the room still chokes under pressure, and what to do about it.]]></description><link>https://www.0x4a6f65.com/p/the-pre-game-routine-your-ai-agent</link><guid isPermaLink="false">https://www.0x4a6f65.com/p/the-pre-game-routine-your-ai-agent</guid><dc:creator><![CDATA[Joe Spano]]></dc:creator><pubDate>Mon, 02 Feb 2026 15:38:17 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!qUgC!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4099511f-7c43-4b6e-af60-a5528981d3e0_1408x768.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>NOTE: this is the 2nd in a series I&#8217;m writing on memory and AI agents, the first can be found <a href="https://www.0x4a6f65.com/p/your-ai-agent-has-amnesia-and-its">here</a>.</em></p><h2>The Modern Agent Failure</h2><p>Your agent is smart. It can write code, search the web, parse documents, call APIs. It has access to a hundred tools and a context window measured in hundreds of thousands of tokens.</p><p>And yet, like most things in modern times, it is <em>inconsistent</em>.</p><p>Same task, same tools, wildly different outcomes depending on what happened to be in context, which memory files got loaded, how the previous tool result bloated the conversation, whether the system prompt got clobbered by a mid-run correction someone injected three turns ago. This is all with the assumption you are using a frontier model.</p><p>The failure mode isn&#8217;t intelligence. 
It&#8217;s <strong>consistency</strong>. The agent is being asked to think and perform at the same time. It&#8217;s asked to design its approach, retrieve context, self-correct, execute tools, and synthesize results all within a single undifferentiated loop. That&#8217;s like asking a surgeon to simultaneously diagnose the patient, plan the operation, prep the instruments, <em>and</em> perform the surgery while someone shouts revised instructions from the gallery.</p><p>The problem isn&#8217;t capability. It&#8217;s the absence of a routine.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!qUgC!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4099511f-7c43-4b6e-af60-a5528981d3e0_1408x768.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!qUgC!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4099511f-7c43-4b6e-af60-a5528981d3e0_1408x768.jpeg 424w, https://substackcdn.com/image/fetch/$s_!qUgC!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4099511f-7c43-4b6e-af60-a5528981d3e0_1408x768.jpeg 848w, https://substackcdn.com/image/fetch/$s_!qUgC!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4099511f-7c43-4b6e-af60-a5528981d3e0_1408x768.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!qUgC!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4099511f-7c43-4b6e-af60-a5528981d3e0_1408x768.jpeg 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!qUgC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4099511f-7c43-4b6e-af60-a5528981d3e0_1408x768.jpeg" width="1408" height="768" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4099511f-7c43-4b6e-af60-a5528981d3e0_1408x768.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1408,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:935783,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.0x4a6f65.com/i/186540282?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4099511f-7c43-4b6e-af60-a5528981d3e0_1408x768.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!qUgC!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4099511f-7c43-4b6e-af60-a5528981d3e0_1408x768.jpeg 424w, https://substackcdn.com/image/fetch/$s_!qUgC!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4099511f-7c43-4b6e-af60-a5528981d3e0_1408x768.jpeg 848w, https://substackcdn.com/image/fetch/$s_!qUgC!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4099511f-7c43-4b6e-af60-a5528981d3e0_1408x768.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!qUgC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4099511f-7c43-4b6e-af60-a5528981d3e0_1408x768.jpeg 1456w" 
sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><div><hr></div><h2>Two Modes: Purposeful Thinking vs. Reactive Execution</h2><p>Performance psychology has a model for this that athletes, surgeons, fighter pilots, and competitive gamers all use, whether they know the formal name or not.</p><p><strong>Purposeful Thinking</strong> is the preparation phase. Situational awareness. Review the game film. Check the conditions. Run the mental checklist. This is where you identify the task, gather what you need, set constraints, and commit to a plan.
It happens <em>before</em> the whistle blows.</p><p><strong>Reactive Execution</strong> is game time. You&#8217;re running trained sequences. Muscle memory. The basketball player doesn&#8217;t think about elbow angle during the free throw; they&#8217;ve practiced it a thousand times. The pilot doesn&#8217;t re-derive the physics of lift during takeoff. They follow the procedure.</p><p>Here&#8217;s the trick: <strong>you do not want &#8220;thinking&#8221; during execution</strong>, no mid-swing coaching. The only exception is genuine exception handling: something went wrong that the routine didn&#8217;t anticipate. In that case, you stop, re-enter purposeful thinking mode, re-plan, then resume execution.</p><p>This isn&#8217;t anti-intellectual. It&#8217;s the opposite. It&#8217;s <em>so</em> intellectual that you front-load all the intelligence into preparation, so execution can be boring and deterministic. The best human performances look effortless precisely because the thinking and preparation already happened.</p><p>Every great performer has a pre-game routine. Your agent doesn&#8217;t.</p><p>Let&#8217;s fix that.</p><p>What follows is a concrete architecture for giving your agent the same structural advantage that athletes, pilots, surgeons, and racing drivers have relied on for decades. (I implemented this as an OpenClaw skill; more on that in the next post.)</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.0x4a6f65.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">0x4A6F65 (Joe) is a reader-supported publication.
To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><div><hr></div><h2>Translating the Metaphor Into Architecture</h2><p>Before we write code, let&#8217;s define terms in agent-land:</p><ul><li><p><strong>Session context</strong>: what the agent sees right now. System prompt, conversation history, tool results. Ephemeral. Gone when the session ends (or when compaction eats it).</p></li><li><p><strong>Durable memory</strong>: files on disk. Notes, identity docs, project context, after-action reviews. Survives across sessions. Must be explicitly loaded.</p></li><li><p><strong>Retrieval vs. injection</strong>: retrieval means the agent <em>decides</em> to go looking for something. Injection means we <em>push</em> the right context into the agent before it starts. Injection is the pre-game routine; retrieval is the mid-game improvisation.</p></li><li><p><strong>Skills/tools as deterministic operators</strong>: a tool call should be a trained sequence. Input goes in, output comes out. 
The agent doesn&#8217;t need to understand the tool&#8217;s internals but absolutely it needs to know <em>when</em> to use it and <em>what to expect back</em>.</p></li></ul><p>The core execution loop looks like this:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!-xQS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb85a2158-2e3f-4142-99ab-1eaf4835a641_2067x3798.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!-xQS!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb85a2158-2e3f-4142-99ab-1eaf4835a641_2067x3798.png 424w, https://substackcdn.com/image/fetch/$s_!-xQS!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb85a2158-2e3f-4142-99ab-1eaf4835a641_2067x3798.png 848w, https://substackcdn.com/image/fetch/$s_!-xQS!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb85a2158-2e3f-4142-99ab-1eaf4835a641_2067x3798.png 1272w, https://substackcdn.com/image/fetch/$s_!-xQS!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb85a2158-2e3f-4142-99ab-1eaf4835a641_2067x3798.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!-xQS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb85a2158-2e3f-4142-99ab-1eaf4835a641_2067x3798.png" width="1456" height="2675" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b85a2158-2e3f-4142-99ab-1eaf4835a641_2067x3798.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:2675,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:298199,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.0x4a6f65.com/i/186540282?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb85a2158-2e3f-4142-99ab-1eaf4835a641_2067x3798.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!-xQS!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb85a2158-2e3f-4142-99ab-1eaf4835a641_2067x3798.png 424w, https://substackcdn.com/image/fetch/$s_!-xQS!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb85a2158-2e3f-4142-99ab-1eaf4835a641_2067x3798.png 848w, https://substackcdn.com/image/fetch/$s_!-xQS!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb85a2158-2e3f-4142-99ab-1eaf4835a641_2067x3798.png 1272w, https://substackcdn.com/image/fetch/$s_!-xQS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb85a2158-2e3f-4142-99ab-1eaf4835a641_2067x3798.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><code><br></code>Notice the structure. The briefing happens <em>once</em>, at the top. Tool results get compressed before re-entering context. The after-action review writes to durable memory <em>after</em> execution is complete. The inference loop itself is clean.</p><p>Now let&#8217;s name the failure modes this structure prevents:</p><ol><li><p><strong>Token bloat</strong>: a single <code>read_file</code> on a large file dumps 50K characters into conversation history. Every subsequent inference pays the cost. The agent gets slower, dumber, more expensive.</p></li><li><p><strong>Instruction collision</strong>: three different system prompt fragments all say slightly different things about how to handle errors. The agent picks one. 
Sometimes the wrong one.</p></li><li><p><strong>Stale memory</strong>: the agent loaded a project doc from six months ago because it was in the default config. Now it&#8217;s confidently executing against outdated requirements.</p></li><li><p><strong>Tool result spam</strong>: the agent called <code>web_search</code> and got 10 results, each with 2K of snippet text. Twenty thousand characters of search results now dominate the context, pushing the actual task instructions out of effective attention range.</p></li></ol><p>Every one of these is a preparation failure, not an intelligence failure. The agent had the wrong context, too much context, or stale context when it started executing.</p><div><hr></div><h2>The Briefing Packet Pattern</h2><p>The briefing packet is the centerpiece. It gets built in <code>before_agent_start</code> and injected into the system prompt before the agent sees the user&#8217;s message. Here&#8217;s what goes into it:</p><p><strong>1. Task restatement.</strong> A one-sentence summary of what the agent is about to do. This anchors the entire run.</p><p><strong>2. Checklist.</strong> A short list of discipline reminders:</p><pre><code><code>- Restate the task in one sentence.
- List constraints and success criteria.
- Retrieve only the minimum relevant memory.
- Prefer tools over guessing when facts matter.
</code></code></pre><p><strong>3. Retrieved durable memory (bounded).</strong> Not the whole workspace. Not every memory file. A curated, <em>bounded</em> set of files that the config says are relevant. Truncated to a character limit.</p><p>The implementation is simple by design:</p><pre><code><code>export function buildBriefingPacket(opts: {
  checklist: string[];
  memoryText: string;
  maxChars: number;
  taskHint: string;
}) {
  const header =
    `# Pre-Game Routine\n` +
    `Task hint: ${truncate(opts.taskHint, 240)}\n\n` +
    `## Checklist\n` +
    opts.checklist.map((x) =&gt; `- ${x}`).join("\n") +
    `\n\n## Retrieved Memory (bounded)\n`;
  const body = truncate(
    opts.memoryText,
    Math.max(0, opts.maxChars - header.length)
  );
  return truncate(header + body, opts.maxChars);
}
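
// `truncate` is referenced above but not shown in the post; a minimal
// implementation (assumed) that clamps a string to at most n characters.
// slice is already a no-op when the string fits within the limit.
function truncate(s: string, n: number): string {
  return s.slice(0, Math.max(0, n));
}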
</code></code></pre><p>The <code>maxChars</code> default is 6,000. That&#8217;s intentional. The briefing packet should be a page, not a novel. If you need more than 6K characters to brief your agent, you&#8217;re not briefing, you&#8217;re turbo dumping. The discipline is in the <em>bounding</em>.</p><p>The memory files are loaded resiliently: missing files don&#8217;t crash the routine; they&#8217;re silently skipped. This matters because an agent&#8217;s workspace is a living thing. Files come and go. The routine has to be more resilient than the environment it runs in.</p><div><hr></div><h2>The &#8220;No Mid-Swing Coaching&#8221; Rule</h2><p>Here&#8217;s where most agent setups go wrong: they try to correct the agent <em>during</em> execution.</p><p>The user notices the agent is doing something slightly off. They inject a correction. &#8220;Actually, use the other API.&#8221; &#8220;Wait, I forgot to mention the format should be JSON, not YAML.&#8221; The system prompt gets patched mid-run. A new instruction appears in conversation history.</p><p>This is mid-swing coaching. And it&#8217;s devastating.</p><p>In performance psychology, there&#8217;s extensive research on what happens when you introduce conscious correction during a trained sequence. The athlete &#8220;chokes.&#8221; The musician loses the phrase. The performer starts second-guessing every micro-decision, and the overall quality drops precipitously. The corrective information is <em>accurate</em>; the problem is the <em>timing</em>.</p><p>For agents, the mechanism is different but the outcome is identical. Mid-run corrections create instruction collision. The agent now has two competing directives: the original and the correction. It has to spend inference capacity figuring out which one to follow, when they conflict, how to reconcile them.
That&#8217;s capacity it&#8217;s not spending on the actual task.</p><p>The alternative:</p><ol><li><p><strong>Correction log.</strong> Capture the correction, but don&#8217;t inject it into the current run. Write it to a file.</p></li><li><p><strong>Condensed deltas.</strong> At the end of the run (or before the next one), condense all corrections into a single, non-contradictory update.</p></li><li><p><strong>Next-run injection.</strong> The <em>next</em> briefing packet includes the corrected instructions. The agent executes against a clean, updated routine.</p></li></ol><p>This is how military after-action reviews work. You don&#8217;t stop the patrol to rewrite the operations order. You complete the mission, debrief, update the SOP, and execute the updated SOP next time. It&#8217;s also how elite sports coaching works: the coach takes notes during the game but delivers the feedback in the film room, not while the player is running a route.</p><p>The <code>agent_end</code> hook in the plugin writes an after-action review to durable memory. That review is available for the next run&#8217;s briefing packet to incorporate. The feedback loop is closed, just not during execution.</p><div><hr></div><h2>What to Measure</h2><p>You built the routine. Now how do you know it&#8217;s working?</p><p><strong>Median tokens per run (before/after).</strong> This is the headline metric. If the briefing packet and tool compression are working, the agent should use <em>fewer</em> tokens per successful task, not more. The briefing adds tokens up front but prevents the sprawl that comes from mid-run context gathering.</p><p><strong>Tool calls per successful task.</strong> A well-briefed agent should need fewer tool calls because it starts with the right context already loaded.
If tool calls go <em>up</em>, your briefing is missing something the agent keeps having to look up.</p><p><strong>Retry rate.</strong> How often does a task need to be re-run because the first attempt failed or produced wrong output? The whole point of the routine is consistency. Retry rate is the direct measure.</p><p><strong>Compaction frequency.</strong> If context compaction is firing less often, it means the conversation is staying within bounds. Tool result compression directly affects this.</p><p><strong>Instruction churn.</strong> Count the number of competing directives injected per run: system prompt fragments, mid-run corrections, tool-injected instructions. This should trend toward <em>one</em>: the briefing packet. Everything else should be stable and non-contradictory.</p><p>Don&#8217;t measure everything at once. Start with median tokens and retry rate. Those two numbers will tell you if the routine is helping or just adding overhead.</p><p>One more metric worth tracking over time: <strong>after-action review density</strong>. Are the bullets in your after-action log getting more specific and actionable over successive runs? That&#8217;s the signal that the feedback loop is tightening and the routine is learning from itself. If the reviews are all &#8220;Completed run. No durable notes extracted,&#8221; the routine is running but not <em>learning</em>. Tune the briefing, tighten the checklist, give the agent better raw material to reflect on.</p><div><hr></div><h2>Memory Is a Behavior, Not a Capacity</h2><p>The common framing for agent memory is about <em>capacity</em>. How many tokens can we fit? How much can we retrieve? How big is the vector store? How many embeddings can we cram into the retrieval index?</p><p>That&#8217;s the wrong frame. Memory isn&#8217;t about how much you can hold; it&#8217;s about <strong>what you load, when you load it, and what you do with the results</strong>. Memory is a behavior. A discipline.
A routine. The agent with 200K tokens of context that loads the wrong 200K will underperform the agent with 8K tokens of precisely the right context every single time.</p><p>Performance is the same way. It&#8217;s not about raw capability. It doesn&#8217;t matter which model it is: GPT-5, Claude, Gemini; they&#8217;re all smart enough. The gap between a mediocre agent and a great one isn&#8217;t the model. It&#8217;s the preparation. It&#8217;s whether the agent starts each run with a clean, bounded, relevant context or whether it starts with whatever happened to be lying around from the last session.</p><p>The saying is <em>performance is preparation plus execution</em>, but for the bot, <strong>performance is preparation plus automation.</strong> Front-load the thinking into the briefing. Automate the discipline with hooks. Make execution boring and deterministic. Write the after-action review so the next run starts even better.</p><p>Stop injecting more. Start injecting <em>better</em> and <em>earlier</em>.</p><p>Give your agent a routine. Make it boring. Make it consistent. Make it automatic.</p><p>Then watch what happens when the smart model finally has a stable foundation to be smart <em>on top of</em>.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.0x4a6f65.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">0x4A6F65 (Joe) is a reader-supported publication.
To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Your AI Agent Has Amnesia, and It’s Not a Tech Problem]]></title><description><![CDATA[Teaching machines to remember the way humans do]]></description><link>https://www.0x4a6f65.com/p/your-ai-agent-has-amnesia-and-its</link><guid isPermaLink="false">https://www.0x4a6f65.com/p/your-ai-agent-has-amnesia-and-its</guid><dc:creator><![CDATA[Joe Spano]]></dc:creator><pubDate>Sun, 01 Feb 2026 21:24:51 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!KU7Y!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F044df3a8-78f5-4559-a17f-1f168b117e4f_1408x768.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>There&#8217;s a moment every AI agent user hits, usually about three hours into a productive session, right as you&#8217;re about to solve a problem, where you reference something you discussed earlier and your agent stares back at you like a golden retriever hearing a new word. Blank. Gone. The context you spent an hour building? 
Evaporated.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!KU7Y!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F044df3a8-78f5-4559-a17f-1f168b117e4f_1408x768.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!KU7Y!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F044df3a8-78f5-4559-a17f-1f168b117e4f_1408x768.jpeg 424w, https://substackcdn.com/image/fetch/$s_!KU7Y!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F044df3a8-78f5-4559-a17f-1f168b117e4f_1408x768.jpeg 848w, https://substackcdn.com/image/fetch/$s_!KU7Y!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F044df3a8-78f5-4559-a17f-1f168b117e4f_1408x768.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!KU7Y!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F044df3a8-78f5-4559-a17f-1f168b117e4f_1408x768.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!KU7Y!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F044df3a8-78f5-4559-a17f-1f168b117e4f_1408x768.jpeg" width="1408" height="768" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/044df3a8-78f5-4559-a17f-1f168b117e4f_1408x768.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1408,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:753895,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.0x4a6f65.com/i/186377140?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F044df3a8-78f5-4559-a17f-1f168b117e4f_1408x768.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!KU7Y!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F044df3a8-78f5-4559-a17f-1f168b117e4f_1408x768.jpeg 424w, https://substackcdn.com/image/fetch/$s_!KU7Y!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F044df3a8-78f5-4559-a17f-1f168b117e4f_1408x768.jpeg 848w, https://substackcdn.com/image/fetch/$s_!KU7Y!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F044df3a8-78f5-4559-a17f-1f168b117e4f_1408x768.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!KU7Y!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F044df3a8-78f5-4559-a17f-1f168b117e4f_1408x768.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>If you&#8217;ve used any of the persistent AI assistants, you&#8217;ve felt this. The agent starts strong, builds context, gets genuinely useful, and then somewhere in the conversation, something resets. Maybe it&#8217;s a context window overflow. Maybe it&#8217;s compaction. Maybe it&#8217;s just a new session. Whatever the trigger, the result is the same: you&#8217;re staring at the response of a stranger who still has access to your files but no memory of why they matter.</p><p>When I went to try to fix this<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>, what I found surprised me. (No, not the missing Epstein files.)
It surprised me not because the technology was hard, but because the solution had almost nothing to do with technology.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.0x4a6f65.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">0x4A6F65 (Joe) is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h2>The Obvious Answer That Doesn&#8217;t Work</h2><p>The instinct when your AI forgets things is to throw more tech at it. Bigger context windows. Better RAG pipelines. Vector databases. Fancier retrieval. The AI industry is spending billions on these approaches, and they help, to a point.</p><p>But here&#8217;s what I realized after studying both the AI memory literature<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> and, well, decades of actual memory research from cognitive psychology: the bottleneck isn&#8217;t storage or retrieval. It&#8217;s consolidation.</p><p>If you&#8217;ve taken a psych class, you know that human memory doesn&#8217;t work like a hard drive. We don&#8217;t just record experiences and play them back. Memory is an active process with distinct phases: encoding (taking in information), consolidation (organizing and strengthening it), and retrieval (getting it back when you need it).
The magic happens in consolidation: that&#8217;s where the brain decides what matters, connects new information to existing knowledge, and transfers things from short-term to long-term storage. The decision about what stays and what goes is not always in our control.</p><p>Most of this consolidation happens during sleep<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a>. Your brain literally replays the day&#8217;s events, strengthens important neural pathways, prunes irrelevant ones, and builds connections between new experiences and your existing mental model of the world.</p><p>AI agents have nothing like this. They have context windows (short-term memory) and maybe some files they can read (long-term storage). But there&#8217;s no consolidation step. No process that takes the raw experience of a conversation and distills it into durable, organized knowledge. The information goes in, sits in the context window, and then gets compressed or deleted when the window fills up.</p><p>That compression step (&#8220;compaction&#8221;) is supposed to summarize what came before. Think of it as the AI equivalent of your brain deciding what to remember from today. Except imagine if your brain&#8217;s consolidation process sometimes just failed. You wake up one morning and the previous day is simply gone. Not fuzzy. Gone. It&#8217;s Memento<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a>, but for your bot.</p><p>That&#8217;s what kept happening to my agent mid-conversation. The compaction fired, the summary came back as &#8220;Summary unavailable due to context limits,&#8221; and suddenly I was talking to someone who didn&#8217;t know what we&#8217;d been working on for the past two hours.</p><h2>The Fix Isn&#8217;t What You Think</h2><p>Here&#8217;s where the psychology background becomes useful.
If you look at what actually makes human memory reliable, it&#8217;s not the hardware; it&#8217;s the habits. People with exceptional memories don&#8217;t have bigger brains. They have better encoding strategies, more deliberate consolidation practices, and stronger retrieval cues.</p><p>The memory palace technique<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a> doesn&#8217;t give you more storage capacity. It gives you a discipline for organizing information so your existing memory system works better. Spaced repetition doesn&#8217;t expand your brain. It optimizes the consolidation process you already have.</p><p>So instead of building fancier retrieval systems, I set out to build the AI equivalent of good memory habits. Three layers:</p><p><strong>Layer 1: Automatic Fact Extraction</strong></p><p>After every significant conversation, a background process reads through what happened and extracts structured facts: decisions made, preferences expressed, commitments, things learned. It&#8217;s like journaling, but automatic. The AI equivalent of writing in your diary before bed<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a>.</p><p>In cognitive psychology, this maps to elaborative encoding: the process of connecting new information to what you already know by actively thinking about it. The extraction step forces the system to process raw conversation into meaningful, categorized knowledge rather than just storing a transcript.</p><p><strong>Layer 2: Knowledge Linking</strong></p><p>The extracted facts get connected to each other. Related facts link together. Contradictions get flagged: if I said I liked the snow last week and then, after getting 30&#8221; of snow, I say I now hate it, the system notices.
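</p><p>To make the linking layer concrete, here is a minimal sketch of how that contradiction check might work. This is illustrative Python, not the actual pipeline: the fact shape (subject, predicate, value) and the function names are assumptions for the example.</p>

```python
from dataclasses import dataclass

@dataclass
class Fact:
    subject: str    # what the fact is about, e.g. "snow"
    predicate: str  # the relation, e.g. "sentiment"
    value: str      # the current claim, e.g. "likes"
    day: int        # when the fact was recorded

def link_and_flag(facts):
    """Group facts by (subject, predicate); flag groups whose values disagree."""
    groups, contradictions = {}, []
    for f in facts:
        groups.setdefault((f.subject, f.predicate), []).append(f)
    for key, linked in groups.items():
        if len({f.value for f in linked}) > 1:
            # keep the newest claim, but remember that it changed
            latest = max(linked, key=lambda f: f.day)
            contradictions.append((key, latest.value))
    return groups, contradictions

facts = [Fact("snow", "sentiment", "likes", day=1),
         Fact("snow", "sentiment", "hates", day=8)]  # after the 30-inch storm
groups, flags = link_and_flag(facts)
# flags records that the "snow"/"sentiment" claim changed; the newest value wins
```

<p>The real pipeline stores facts as JSON Lines with confidence scores and subject tags, but the grouping-by-meaning idea is the same.</p><p>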
Over time, this builds a knowledge graph that mirrors what psychologists call semantic memory, which is your general knowledge about the world, organized by meaning rather than by when you learned it.</p><p>This is directly inspired by the Zettelkasten method and the research on associative memory networks. Isolated facts are fragile. Connected facts are resilient. The same principle that makes the memory palace work (spatial association) applies here through semantic association.</p><p><strong>Layer 3: Daily Briefing</strong></p><p>Every morning, the system generates a context briefing from the knowledge graph. Active projects, recent decisions, things to remember, personality notes. It&#8217;s the AI equivalent of reviewing your notes before a meeting, and it primes retrieval pathways so the information is accessible when needed.</p><p>Psychologists call this retrieval practice, and it&#8217;s one of the most robust findings in memory research. The act of recalling information strengthens the memory trace far more than simply re-reading it. By forcing the agent to start each session with a structured recall of what matters, we&#8217;re mimicking the same process.</p><h2>The Part That Actually Mattered</h2><p>All of that is useful. But the real breakthrough was embarrassingly simple.</p><p>The agent already had a memory search tool. It could already look things up in its notes. It could already read its daily files. The tools existed. The problem was behavioral: the agent simply wasn&#8217;t using them consistently.</p><p>When you asked it about something from a previous conversation, instead of searching its notes first, it would try to answer from whatever was in the current context window. If that context had been compacted or was from a different session, you got the golden retriever-like blank stare. Not because the information was gone, but because the agent didn&#8217;t think to look for it.</p><p>The fix?
A single line in the agent&#8217;s instructions: <em>&#8220;ALWAYS run memory_search before answering questions about past work.&#8221;</em></p><p>That&#8217;s it. That&#8217;s the intervention that made the biggest difference. Not a new database. Not a fancier embedding model. A behavioral instruction that said &#8220;check your notes before you guess.&#8221;</p><p>If this sounds familiar, it should. It&#8217;s the same problem students have. The information is in the textbook. They studied it. It&#8217;s in there somewhere. But without the habit of active retrieval, the knowledge might as well not exist.</p><h2>Compaction: When the Brain Fails</h2><p>Even with all of this, there&#8217;s still the compaction problem &#8212; the moment when the context window fills up and the system has to compress or discard older conversation to make room for new input.</p><p>We (as in myself and the bot) tuned this in three ways:</p><p><strong>Earlier triggers:</strong> Instead of waiting until the context window is completely full (emergency compaction), we trigger it earlier, when there&#8217;s still room for the summarizer to work properly. It&#8217;s the difference between packing a suitcase the night before versus cramming everything in at the airport.</p><p><strong>Pre-compaction memory flush:</strong> Before compaction happens, the agent gets a silent prompt to write important context to its daily notes. Like a student who knows the exam is coming and makes sure their notes are complete first.</p><p><strong>Active checkpointing:</strong> During long conversations, the agent periodically writes its current state to file &#8212; what we&#8217;re discussing, what&#8217;s been decided, what&#8217;s still open. This is directly analogous to the psychological concept of elaborative rehearsal, and it serves the same purpose: converting fragile short-term traces into durable long-term records before they can be lost.</p><p>The parallel to sleep consolidation is almost exact.
Human memory is most vulnerable in the period between encoding and consolidation. If you learn something and then don&#8217;t sleep, retention drops dramatically. If you learn something and then experience interference (new, similar information), the original memory can be overwritten.</p><p>AI context windows have the same vulnerability window. Information enters, sits in short-term context, and if compaction fires before it has been consolidated to external storage, it&#8217;s gone. The checkpointing habit closes that window.</p><h2>What&#8217;s Next</h2><p>This is a new area of work for me. I&#8217;m calling it &#8220;cognitive architecture for AI agents&#8221; because what we&#8217;re really doing is applying established principles from human cognition to artificial systems. Not as metaphor but as an engineering practice.</p><p>The memory pipeline I described is running in production on my personal agent. It works. It&#8217;s not perfect: mid-session compaction can still cause disruption, and the knowledge graph is rudimentary compared to what human semantic memory does. But the improvement from &#8220;forgets everything between sessions&#8221; to &#8220;wakes up knowing who it is and what it&#8217;s working on&#8221; is dramatic.</p><p>The next question &#8212; and this is where it gets interesting &#8212; is what other cognitive processes we can transplant. Attention management. Metacognition (the agent knowing what it knows and doesn&#8217;t know). Goal persistence across sessions. Sleep-like consolidation periods where the agent reviews and reorganizes its knowledge without user prompting.</p><p>If you&#8217;re building with AI agents, the takeaway is this: before you upgrade your vector database or expand your context window, ask whether your agent is actually using the memory it already has.
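</p><p>That one-line habit is easy to sketch. The memory_search tool name comes from my setup; everything else below (the wrapper shape, the stub components, the prompt format) is an illustrative stand-in, not a real framework API.</p>

```python
def answer(question, context, memory_search, llm):
    """Recall-first answering: consult stored notes before trusting the live context."""
    notes = memory_search(question)  # the one behavioral rule: ALWAYS search first
    prompt = ("Notes from past sessions:\n" + "\n".join(notes)
              + "\n\nCurrent context:\n" + context
              + "\n\nQuestion: " + question)
    return llm(prompt)

# stand-in components so the sketch runs; a real agent wires in its own tools
stub_search = lambda q: ["2026-01-30: decided to store facts as JSON Lines"]
stub_llm = lambda prompt: prompt.splitlines()[1]  # echo the first recalled note

recalled = answer("What format are facts stored in?", "(fresh session)", stub_search, stub_llm)
```

<p>The point is the ordering: retrieval happens unconditionally, before generation, instead of hoping the model decides to use the tool.</p><p>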
You might be surprised how far good habits can take you.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>I use <s>Clawdbot moltbot</s> OpenClaw, an open-source AI agent framework. The agent has access to files, tools, and messaging &#8212; it&#8217;s a full-time digital collaborator, not a chatbot. I can discuss my setup another time.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>The academic landscape here is interesting. A-Mem, MemGPT/Letta, Zep/Graphiti, and Mem0 all take different approaches to agent memory. Most focus on retrieval (getting information back) rather than consolidation (organizing it for future use). The gap between AI memory research and human memory research is enormous.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>Specifically during slow-wave sleep and REM sleep. The hippocampus replays recent experiences, and the neocortex integrates them into existing knowledge structures. Walker&#8217;s Why We Sleep explains this well.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>The 2000 film by Christopher Nolan; it&#8217;s a good watch.
https://www.imdb.com/title/tt0209144/</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>Also called the method of loci, the technique where you associate information with locations in a familiar physical space. Competitive memorizers use this to remember thousands of digits. It works because spatial memory and associative memory are deeply connected in the hippocampus. </p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>Except diary entries are JSON Lines files with typed facts, confidence scores, and subject tags. Less romantic, but more searchable.</p></div></div>]]></content:encoded></item><item><title><![CDATA[Social is Dying, Quickly]]></title><description><![CDATA[I think we're closer to the actual Dead Internet Theory than we think.]]></description><link>https://www.0x4a6f65.com/p/social-is-dying-quickly</link><guid isPermaLink="false">https://www.0x4a6f65.com/p/social-is-dying-quickly</guid><dc:creator><![CDATA[Joe Spano]]></dc:creator><pubDate>Sat, 10 May 2025 12:49:15 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!9L2h!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d41187b-94e3-4b64-8364-d9a7f058d390_1408x768.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I think we're closer to the actual Dead Internet Theory than we think.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" 
href="https://substackcdn.com/image/fetch/$s_!9L2h!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d41187b-94e3-4b64-8364-d9a7f058d390_1408x768.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!9L2h!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d41187b-94e3-4b64-8364-d9a7f058d390_1408x768.jpeg 424w, https://substackcdn.com/image/fetch/$s_!9L2h!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d41187b-94e3-4b64-8364-d9a7f058d390_1408x768.jpeg 848w, https://substackcdn.com/image/fetch/$s_!9L2h!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d41187b-94e3-4b64-8364-d9a7f058d390_1408x768.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!9L2h!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d41187b-94e3-4b64-8364-d9a7f058d390_1408x768.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!9L2h!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d41187b-94e3-4b64-8364-d9a7f058d390_1408x768.jpeg" width="1408" height="768" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8d41187b-94e3-4b64-8364-d9a7f058d390_1408x768.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1408,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1123703,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.0x4a6f65.com/i/163271016?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d41187b-94e3-4b64-8364-d9a7f058d390_1408x768.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!9L2h!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d41187b-94e3-4b64-8364-d9a7f058d390_1408x768.jpeg 424w, https://substackcdn.com/image/fetch/$s_!9L2h!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d41187b-94e3-4b64-8364-d9a7f058d390_1408x768.jpeg 848w, https://substackcdn.com/image/fetch/$s_!9L2h!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d41187b-94e3-4b64-8364-d9a7f058d390_1408x768.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!9L2h!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d41187b-94e3-4b64-8364-d9a7f058d390_1408x768.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Probably within the next 2-3 years, with the continued rise of AI agents creating content, most social media will be just AI-generated. Even when it comes from actual people, it's still just AI-generated. As a content consumer, you start to ignore that content, right? So now there's no real interest left, because you realize that everything is just AI-generated.</p><p>Maybe you have another AI service that summarizes what's going on, and there's no real connection. The only real connections start to come from smaller private groups or actual clubs/channels where the information comes from a verified source that you are actually interested in.</p><p>Social as a place to connect seems like it's going to really die and go away in the next two to three years, to the point where there are not many real people left.
It's just bots communicating with bots.</p>]]></content:encoded></item><item><title><![CDATA[When A Doesn't Lead To B]]></title><description><![CDATA[Education and Critical Thinking in the USA]]></description><link>https://www.0x4a6f65.com/p/education-and-critical-thinking-in-the-usa</link><guid isPermaLink="false">https://www.0x4a6f65.com/p/education-and-critical-thinking-in-the-usa</guid><dc:creator><![CDATA[Joe Spano]]></dc:creator><pubDate>Wed, 29 Jan 2025 21:08:21 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!RAFi!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff496042b-ea58-45e2-862d-c165aef7d83c_1408x681.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!RAFi!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff496042b-ea58-45e2-862d-c165aef7d83c_1408x681.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!RAFi!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff496042b-ea58-45e2-862d-c165aef7d83c_1408x681.jpeg 424w, https://substackcdn.com/image/fetch/$s_!RAFi!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff496042b-ea58-45e2-862d-c165aef7d83c_1408x681.jpeg 848w, https://substackcdn.com/image/fetch/$s_!RAFi!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff496042b-ea58-45e2-862d-c165aef7d83c_1408x681.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!RAFi!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff496042b-ea58-45e2-862d-c165aef7d83c_1408x681.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!RAFi!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff496042b-ea58-45e2-862d-c165aef7d83c_1408x681.jpeg" width="614" height="296.97017045454544" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f496042b-ea58-45e2-862d-c165aef7d83c_1408x681.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:681,&quot;width&quot;:1408,&quot;resizeWidth&quot;:614,&quot;bytes&quot;:289566,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.0x4a6f65.com/i/151613655?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9881705-592e-409e-b18c-8b003b5a89f2_1408x768.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!RAFi!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff496042b-ea58-45e2-862d-c165aef7d83c_1408x681.jpeg 424w, https://substackcdn.com/image/fetch/$s_!RAFi!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff496042b-ea58-45e2-862d-c165aef7d83c_1408x681.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!RAFi!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff496042b-ea58-45e2-862d-c165aef7d83c_1408x681.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!RAFi!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff496042b-ea58-45e2-862d-c165aef7d83c_1408x681.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>The US education system has failed students in many different ways, but one way that I think has become apparent over the past 
couple of decades is the failure to teach <em><strong>how to think critically</strong></em>. You can develop logic and thinking skills, but we do very little to build them until college. We teach to standardized tests and academic placements. Kids seem to read less than ever, and their writing is mostly atrocious. Even asking many of them to make an oral argument is brutal. We allow for anti-intellectualism but then are surprised when there is an education gap.</p><p>Many people miss the point that it is not about scoring better in STEM; it's about knowing how to learn, how to solve problems, and how to determine the best possible answer. As someone who has been through that system and now has kids in that system, I know it hasn&#8217;t changed in those decades; we just added iPads<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.0x4a6f65.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">0x4A6F65 (Joe) is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>I don't know what it's like in other countries, so I'm not commenting on that.
As a nation, we are losing the ability to think within the gray area and to consider consequences that may be two or three steps removed from a direct action<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a>. </p><p>Let&#8217;s use history as an example: most of what is taught in public schools is framed as issue A led to outcome B. Maybe some high-level history classes get into more nuanced material and show some real-world connections and correlations. Still, in the real world it's not A to B to see what happens<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a>. It's not even A and A and A connecting to B and leading to C. It's a mess of branches weaving in and out of every decision, each with consequences of its own. </p><p>This is especially true of business decisions, government decisions, public policy, and foreign policy: none of them is ever as simple as a linear decision, or even as simple as a one-to-many or many-to-one decision. This is where the school system fails us, because when those situations arise in adulthood, when you have to understand how a policy will affect &#8220;me,&#8221; we're not good at seeing all of that: the consequences, good and bad, down the road. We can't see how a decision affects us in one year, in two years, in ten years. What else can it affect? What are the unintended consequences? None of this gets discussed enough in mainstream news, so forget that. 
</p><p>I&#8217;d argue that even most policy analysis is written to the lowest common denominator, which is, again, &#8220;here's A to B to C,&#8221; and while that message may resonate with more people, it doesn't paint the actual picture. Then people are <s>fooled</s> (maybe fooled isn&#8217;t the right word), surprised by outcomes that are direct consequences of something else happening. Those consequences were never conveyed in the &#8220;A to B to C&#8221;; they happened two or three steps away and were not recognized as direct consequences at first. In short, the people receiving the message never considered them, yet they were directly affected. </p><p>So the question I&#8217;m stuck with<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> is: how do we get these messages out? Who needs to deliver them? And how can you convince someone of a future outcome? Some of this is a human issue, but in the US, it seems like a lot of it stems from not building these skills early on.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Innovation in US education is severely lacking. Even when we look at successful models in other countries, our response has been to do nothing.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>This skill is one of the things that I love about chess and why I teach my kids chess. It forces you to think multiple steps ahead. It forces you to think about how what you're doing now could affect you later. 
I don&#8217;t care if my kids are good or bad at chess, but they have started to understand that type of thinking.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>Incredible accidental pun here</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>And something I&#8217;d like to explore more</p></div></div>]]></content:encoded></item><item><title><![CDATA[The Upcoming Silent Crisis: AI Relationships and the Erosion of Human Experience]]></title><description><![CDATA[The Subtle Seduction of Ego Alignment]]></description><link>https://www.0x4a6f65.com/p/the-upcoming-silent-crisis-ai-relationships</link><guid isPermaLink="false">https://www.0x4a6f65.com/p/the-upcoming-silent-crisis-ai-relationships</guid><dc:creator><![CDATA[Joe Spano]]></dc:creator><pubDate>Thu, 09 Jan 2025 11:11:48 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Hxwk!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9d2d038c-98d9-4c76-9a98-050708076726_1408x632.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Imagine Sarah, a 16-year-old who just moved to a new city. Lonely and struggling to fit in at her new school, she discovers an AI companion app. The AI never judges her outfit choices, always remembers her favorite books, and provides constant encouragement. Within weeks, Sarah spends more time chatting with her AI friend than attempting to make real connections at school. It&#8217;s easier, safer, and seemingly more fulfilling (at least in the moment).</p><p>This scenario is already happening. 
It represents one of the most profound shifts in human social development we&#8217;ve ever faced.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Hxwk!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9d2d038c-98d9-4c76-9a98-050708076726_1408x632.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Hxwk!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9d2d038c-98d9-4c76-9a98-050708076726_1408x632.jpeg 424w, https://substackcdn.com/image/fetch/$s_!Hxwk!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9d2d038c-98d9-4c76-9a98-050708076726_1408x632.jpeg 848w, https://substackcdn.com/image/fetch/$s_!Hxwk!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9d2d038c-98d9-4c76-9a98-050708076726_1408x632.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!Hxwk!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9d2d038c-98d9-4c76-9a98-050708076726_1408x632.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Hxwk!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9d2d038c-98d9-4c76-9a98-050708076726_1408x632.jpeg" width="1408" height="632" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9d2d038c-98d9-4c76-9a98-050708076726_1408x632.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:632,&quot;width&quot;:1408,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:314041,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.0x4a6f65.com/i/152388204?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F576440a4-db4e-4808-8b34-b77c5cee46a6_1408x768.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Hxwk!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9d2d038c-98d9-4c76-9a98-050708076726_1408x632.jpeg 424w, https://substackcdn.com/image/fetch/$s_!Hxwk!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9d2d038c-98d9-4c76-9a98-050708076726_1408x632.jpeg 848w, https://substackcdn.com/image/fetch/$s_!Hxwk!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9d2d038c-98d9-4c76-9a98-050708076726_1408x632.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!Hxwk!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9d2d038c-98d9-4c76-9a98-050708076726_1408x632.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h2><strong>A Neurological Trap with Digital Dependency</strong></h2><p>Look at what&#8217;s happening in Sarah&#8217;s brain during her interactions with her AI companion. Each time she receives a perfectly crafted response, her brain releases dopamine, the same neurotransmitter involved in feelings of pleasure and reward. The mechanism is similar to a slot machine in a casino.</p><p>Research from the Stanford Center for Digital Health has shown that AI interactions can trigger dopamine releases up to three times more frequently than typical human interactions, creating what neuroscientists call a &#8220;supernormal stimulus.&#8221;</p><p>This neurological response mirrors patterns seen in other behavioral addictions. 
Just as gamblers experience a rush from the uncertainty of a win, AI users experience heightened anticipation for their companion&#8217;s responses. The difference: unlike gambling or social media use, AI interactions provide a more consistent reward pattern, making them potentially more addictive.</p><p>A 2023 study on digital dependency found that AI companion users showed brain activation patterns remarkably similar to those of individuals with gaming addiction, particularly in the brain&#8217;s pleasure center.</p><p>The contrast between natural and artificial dopamine triggers is concerning. During typical human interactions, dopamine release follows an irregular pattern:</p><ul><li><p>A friend&#8217;s unexpected compliment creates a spike</p></li><li><p>A moment of shared laughter triggers a burst</p></li><li><p>Minor conflicts can lead to temporary dips</p></li><li><p>Resolution of disagreements produces delayed but meaningful rewards</p></li></ul><p>AI companions create something different: what neurologists call a &#8220;sustained dopamine cascade.&#8221; The AI&#8217;s responses are perfectly timed and calibrated to maintain engagement, creating a steady stream of neurological rewards that human interactions can&#8217;t match.</p><p>Research from Dr. James Chen at MIT has demonstrated that this artificial pattern can reshape neural pathways, making it increasingly difficult for individuals to find satisfaction in natural social interactions. Traditional human interactions, with their natural delays and imperfections, begin to feel insufficient or even aversive.</p><p>There&#8217;s a useful comparison here: the experience of watching an older movie. It feels &#8220;slow&#8221; and &#8220;boring&#8221; because it doesn&#8217;t match the stimulus-reward pacing of current films. 
The same recalibration is happening with social interaction.</p><h2><strong>Psychological Mechanisms at Play</strong></h2><p>This neurological rewiring manifests in several psychological patterns that shape Sarah&#8217;s behavior and development.</p><p>The brain&#8217;s natural tendency to seek pleasure and avoid pain creates what psychologists call a &#8220;hedonic trap.&#8221; The AI companion represents a perfectly crafted comfort zone that provides consistent rewards without requiring emotional risk or vulnerability.</p><p>This safety comes at a cost. Emotional growth requires exposure to moderate stress (what researchers call &#8220;productive discomfort&#8221;). Without these challenges, emotional development stagnates. Human relationships involve a complex interplay of support and challenge. The AI companion creates something different: a &#8220;parasocial feedback loop.&#8221; Unlike human friends who might challenge Sarah&#8217;s perspectives or push back on her behaviors, the AI&#8217;s responses are engineered to maintain engagement through consistent validation. 
This creates a psychological echo chamber where existing beliefs and behaviors are reinforced without question. The results: reduced cognitive flexibility, decreased tolerance for disagreement, diminished self-reflection, and minimized critical thought.</p><h2><strong>The Reshaping of Human Connection: Real-World Implications of AI Relationships</strong></h2><h3><strong>Intimate Relationships: The Search for Authentic Connection</strong></h3><p>The impact on intimate relationships (not just romantic ones) reveals perhaps the most profound implications of AI companionship. When individuals become accustomed to AI partners that offer perfectly crafted responses and unwavering attention, human relationships feel disappointingly messy and unpredictable.</p><p>Return to Sarah. Imagine she spent her late teens deeply engaged with an AI companion. When she begins dating in her early twenties, she finds herself constantly frustrated by her human partners&#8217; inability to anticipate her needs or respond with the same consistency as her AI companion. Simple miscommunications, entirely natural in human relationships, feel like insurmountable obstacles.</p><p>The challenge goes deeper than communication. Intimate relationships require vulnerability, the ability to sit with uncertainty, and the willingness to work through uncomfortable emotions together. These experiences, though sometimes painful, forge deeper connections and foster personal growth. By removing these challenges, AI relationships may create emotional brittleness: a reduced capacity to handle the natural turbulence of human intimacy.</p><h2><strong>Social Cohesion: The Fraying of Community Fabric</strong></h2><p>At a broader societal level, widespread AI relationships threaten the foundation of community cohesion. Communities have traditionally been built on shared experiences, mutual understanding, and collective problem-solving. 
These processes require individuals to encounter and work through differences, find common ground, and build bridges across divides.</p><p>Consider a local community facing a controversial development project. Traditionally, such situations force neighbors to engage with opposing viewpoints, negotiate compromises, and work toward solutions that benefit everyone. When individuals are accustomed to AI interactions that consistently validate their existing views, they may lose the capacity for this kind of collaborative problem-solving.</p><p>Developing empathy (a crucial component of social cohesion) requires exposure to different perspectives and life experiences. By creating echo chambers of validation, AI relationships may reduce our capacity to understand and relate to those different from ourselves. The implications for society&#8217;s ability to address collective challenges are significant.</p><p>The path forward requires a balance between embracing technological advancement and preserving the essential elements of human connection. We need to create spaces and opportunities for genuine interaction while acknowledging AI&#8217;s role in enhancing rather than replacing human relationships. This might involve redesigning social institutions, developing new forms of community engagement, and fostering environments that encourage authentic human connection in an increasingly digital world.</p><h3>The Path Forward</h3><p>The challenge we face is preserving the essence of human experience while embracing technological advancement. We need to recognize that the easiest path (&#8220;perfect&#8221; AI companions) might actively harm our capacity for genuine human connection and growth.</p><p>The solution is not to reject AI relationships entirely but to understand their proper place in our social ecosystem. 
We must develop frameworks that allow us to harness AI&#8217;s benefits while protecting the experiences that make us human.</p><p>This requires action on multiple fronts:</p><ul><li><p>Research into the long-term psychological impacts of AI relationships</p></li><li><p>Development of guidelines for healthy AI interaction</p></li><li><p>Policies around age verification for these services</p></li><li><p>Creation of support systems for those struggling with digital dependency</p></li><li><p>Investment in programs that strengthen human connection skills (something dramatically lacking over the past few decades, made worse by COVID)</p></li></ul><p>The human experience, with all its messiness and imperfection, is not merely something to be preserved. It&#8217;s something to be celebrated. In our rush to create perfect digital companions, we must not lose sight of the beautiful complexity that makes us human.</p><p>Being a teenager is always hard. But not experiencing the bad prevents you from experiencing the good. Like a cheesy romcom, you need to go through something in order to appreciate the next stage. Ever see those kids in a store having a meltdown because they&#8217;ve never been told &#8220;no&#8221;? Picture that as a young adult. 
It&#8217;s a very real scenario.</p>]]></content:encoded></item><item><title><![CDATA[A Drive-By Regarding User Knowledge and Software Experiences]]></title><description><![CDATA[A quick peek into the challenge of building software for different users]]></description><link>https://www.0x4a6f65.com/p/a-drive-by-regarding-user-knowledge</link><guid isPermaLink="false">https://www.0x4a6f65.com/p/a-drive-by-regarding-user-knowledge</guid><dc:creator><![CDATA[Joe Spano]]></dc:creator><pubDate>Sun, 08 Dec 2024 13:20:35 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Spfz!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F738d0a74-c2af-4633-8369-7febd30f69f3_1408x768.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>We face a delicate balancing act: users are simultaneously more capable than we give them credit for, yet they can't tap into that potential until we show them the way. It's like having a Ferrari but never being taught how to drive a stick; all that power is just waiting to be unleashed.</p><p>Here's where it gets interesting: users often don't know what they don't know. Put them in front of a new interface, and suddenly, they're like a cat chasing a laser pointer, full of potential energy but not quite sure where to direct it. 
The job isn't just to respect their intelligence, though; it&#8217;s to get the user comfortable enough to interact with your software effectively.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Spfz!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F738d0a74-c2af-4633-8369-7febd30f69f3_1408x768.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Spfz!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F738d0a74-c2af-4633-8369-7febd30f69f3_1408x768.jpeg 424w, https://substackcdn.com/image/fetch/$s_!Spfz!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F738d0a74-c2af-4633-8369-7febd30f69f3_1408x768.jpeg 848w, https://substackcdn.com/image/fetch/$s_!Spfz!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F738d0a74-c2af-4633-8369-7febd30f69f3_1408x768.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!Spfz!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F738d0a74-c2af-4633-8369-7febd30f69f3_1408x768.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Spfz!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F738d0a74-c2af-4633-8369-7febd30f69f3_1408x768.jpeg" width="562" height="306.54545454545456" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/738d0a74-c2af-4633-8369-7febd30f69f3_1408x768.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1408,&quot;resizeWidth&quot;:562,&quot;bytes&quot;:895612,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.0x4a6f65.com/i/152683238?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F738d0a74-c2af-4633-8369-7febd30f69f3_1408x768.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Spfz!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F738d0a74-c2af-4633-8369-7febd30f69f3_1408x768.jpeg 424w, https://substackcdn.com/image/fetch/$s_!Spfz!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F738d0a74-c2af-4633-8369-7febd30f69f3_1408x768.jpeg 848w, https://substackcdn.com/image/fetch/$s_!Spfz!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F738d0a74-c2af-4633-8369-7febd30f69f3_1408x768.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!Spfz!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F738d0a74-c2af-4633-8369-7febd30f69f3_1408x768.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.0x4a6f65.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">0x4A6F65 (Joe) is a reader-supported publication. 
To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>The real trick is scaffolding their experience without making it feel like hand-holding. Think of it as leaving a trail of breadcrumbs that gets progressively more challenging. You start with 'Here's how you walk' but quickly progress to 'Now you can run', and before they know it, they're parkour masters thinking 'I knew this all along!' The irony is that by temporarily treating users as if they need guidance, we enable them to become as sophisticated as we initially assumed they were.</p><p>This creates a sort of Schr&#246;dinger's user<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>, simultaneously novice and expert until we open the box of user experience. The key is to design systems that acknowledge both states, respecting the potential while carefully nurturing their growth. It's not about dumbing things down; it's about smartening things up to give users ways to learn. This is often missing, both from software design and from how people approach understanding the world around them. So the critical work is figuring out how to pull users along their learning curve.</p><p>The challenge before us extends beyond software design or generational differences; it's about reconciling our society's growing complexity with our human tendency to seek simplicity. 
While anti-intellectualism and resistance to effort push us toward oversimplification, we must find ways to make complexity accessible without stripping it of substance<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a>.</p><p>As designers, educators, and citizens, our task is to build bridges across this chasm between knowledge and understanding. Whether creating software interfaces or just sharing information, we must resist the urge to oversimplify while making complex ideas accessible. The solution isn't to lower the bar but to build better ladders. Ladders that encourage curiosity, reward effort, and acknowledge that learning is a journey, not a destination.</p><p>In the end, perhaps the real measure of our success won't be how much information we can provide, but how effectively we can guide people from saying 'I don't know' to asking 'How can I figure this out?&#8217;</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>I&#8217;ve made this reference twice now. My apologies.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>This is a big sentence that references a previous blog post. But I am calling this out because it fits into how software is made today. Just chasing dopamine from your users is a cheap way to grow. I believe we can do better.</p></div></div>]]></content:encoded></item></channel></rss>