AI Coding Patterns 2: Cognitive Debt
I was working with a client that was shipping fast. Seven developers, each with AI agents running, and a product taking shape. New endpoints and UI components landed several times an hour. In seven weeks they had a working product in front of early users that would have taken six months without AI.
Week eight, a customer asked for a filter on one of the list views. Simple feature. But when one of the developers handed it to their agent, the diffs came back huge and full of bugs.
I sat in on the call to help them figure it out. The lead developer shared her screen, opened the codebase, and scrolled. "I think the data comes through here." "I'm not sure why it's split into two services." "Are we using an ORM?"
Two hours later we were still on the call tracing the data flow. After the previous bugs, nobody felt confident enough to add the filter without understanding how the system worked.
What it is
Technical debt is a familiar concept. Shortcuts in the code that slow you down later.
Cognitive debt accumulates when you ship faster than you understand. Every file can be clean, every test can pass, and you still might not know what the code does or how to safely change anything.
Understanding comes from decisions: thinking through edge cases, choosing dependencies, tracing data flow. When you hand that work to an agent, you skip the decisions. The code is there. You don't understand it.
Simon Willison, co-creator of Django, noticed the same pattern after building several projects with heavy AI assistance:
I no longer have a firm mental model of what they can do and how they work, which means each additional feature becomes harder to reason about, eventually leading me to lose the ability to make confident decisions about where to go next.
What gets lost
That feeling isn't new. In 1985, Peter Naur argued that the essence of a program is the theory in the developers' heads about what the code does and why. When that theory is lost, the program dies.
My client's team never built the theory in the first place. They had seven weeks of source code and no mental model to make it useful. That's what made the filter so hard, not bad code. The problem is invisible decisions nobody remembers making: dependencies, error handling patterns, how modules depend on each other. When multiple agents run in parallel, the decisions multiply.
On one team, a developer added caching to speed up the dashboard. The dashboard loaded user data (name, avatar, account status) through an API, and the cache sat in front of it. Page loads got faster. A week later, users reported that updating their profile didn't work. The profile settings page wrote to the database, but the dashboard still served from cache. The write path and the read path had never been thought about together.
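The bug above can be sketched in a few lines. The names here (UserStore, Dashboard) are illustrative stand-ins, not the client's actual code; the point is that the read path caches and the write path doesn't know the cache exists.

```python
class UserStore:
    """Stands in for the database behind the API."""
    def __init__(self):
        self._rows = {}

    def read(self, user_id):
        return self._rows.get(user_id)

    def write(self, user_id, profile):
        self._rows[user_id] = profile


class Dashboard:
    """Read path: serves user data through a cache in front of the store."""
    def __init__(self, store):
        self.store = store
        self.cache = {}

    def load_user(self, user_id):
        if user_id not in self.cache:
            self.cache[user_id] = self.store.read(user_id)
        return self.cache[user_id]

    def invalidate(self, user_id):
        self.cache.pop(user_id, None)


store = UserStore()
dashboard = Dashboard(store)

store.write("u1", {"name": "Ada"})
dashboard.load_user("u1")  # first load populates the cache

# Write path: the profile page updates the store but never touches the cache...
store.write("u1", {"name": "Ada L."})
stale = dashboard.load_user("u1")["name"]  # still "Ada": the stale-read bug

# ...the fix is to treat both paths as one design and invalidate on write.
dashboard.invalidate("u1")
fresh = dashboard.load_user("u1")["name"]  # now "Ada L."
```

Neither the write path nor the read path is wrong in isolation. The bug only exists in the relationship between them, which is exactly the kind of decision nobody made.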
On another team, an agent built a notification service in the first week. A month later, a different developer prompted their agent to add email alerts for a new feature. The agent built a second system instead of reusing the existing one. The two services had different retry logic and different failure modes: some customers got duplicate messages, others got none. A shared theory of how notifications work never existed.

Paying it down
Cognitive debt shows up in mundane ways. A developer estimates a feature at two hours and takes two days because they didn't know what the code assumed. Two developers disagree about how a module works, then laugh when it turns out both of them were wrong. Someone fixes a bug by describing symptoms to an AI agent because they aren't sure where to find the logs.
These look like symptoms of a messy codebase, but the code can be fine. The remaining chapters cover concrete ways to use AI while retaining understanding. But here are a few quick things you can do to reduce cognitive debt.
1. Read critically
If you read something substantive and your only thought is "makes sense," you didn't read it critically. Critical reading leaves you with questions, doubts, and things you want to change.
When AI writes a spec, question anything you don't fully understand. Verify claims against the actual codebase. Do logs confirm assumptions about how things work today? The spec should look very different from what AI first gave you.
When AI writes code, ask what decisions it made. Why this data structure? Why two services instead of one? Why does this error get swallowed? If you can't answer, you don't understand the code yet. Change anything you disagree with.
2. Explain it
You think you understand something until you try to explain it. Psychologists call this the Illusion of Explanatory Depth: the feeling of understanding that collapses the moment you need the details.
Before approving a change, explain what it does out loud. If you struggle, you're about to merge code you can't change confidently. Sketch the parts of the system the feature touches: data flow, module boundaries, external dependencies.
3. Upskill constantly
Consider a single feature: a user buys a subscription through Stripe. To understand what that button does, you need to know how webhooks receive payment confirmations. You need to know how API keys are stored and why they never reach the browser. You need to understand error traces and HTTP status codes well enough to figure out why a charge succeeded but the account wasn't upgraded, before customers start posting about it.
That's one button. Every feature rests on a stack of concepts like this. AI will use all of them whether you understand them or not. The thinking doesn't get easier because the tool got better. You still have to own these decisions and their consequences.
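The subscription flow makes the point concrete. Here is a stripped-down sketch of the webhook handler behind that button; the payload shape and helper names are simplified assumptions for illustration, and a real integration would verify Stripe's webhook signature and use its SDK rather than trusting the raw event.

```python
# Accounts live server-side; the Stripe secret key would too, which is why
# it never reaches the browser. This dict stands in for the database.
accounts = {"user_42": {"plan": "free"}}


def handle_webhook(event):
    """Simplified webhook handler for a completed checkout.

    Returns an HTTP status code. In a real integration you would first
    verify the event's signature with your webhook signing secret.
    """
    if event.get("type") != "checkout.session.completed":
        return 200  # acknowledge event types we don't handle

    user_id = event.get("data", {}).get("user_id")
    if user_id not in accounts:
        # The charge succeeded but we can't find the account: this is the
        # "paid but not upgraded" failure. Surface it loudly, don't swallow it.
        return 500

    accounts[user_id]["plan"] = "pro"
    return 200


status = handle_webhook({
    "type": "checkout.session.completed",
    "data": {"user_id": "user_42"},
})
```

Even this toy version forces the questions the chapter is about: what happens on an unknown user, which events get acknowledged, and where the failure would show up in logs and status codes.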
The next chapter covers spec-first development, the most direct tool for staying ahead of cognitive debt.