Your Theory is the Gold, not your Code.
I recently saw an ex-colleague’s post on LinkedIn referencing a blog post by Margaret-Anne Storey on “cognitive debt”, which I enjoyed reading, and which sent me down a bit of a rabbit-hole: reading cited and related articles and posts, and thinking for far too long about my own position on the topic. I’ve added a suggested reading list at the end of this post if you’re interested in following my rabbit-holing.
One of these articles was “Programming as Theory Building”, written in 1985(!) by Peter Naur, which I never would’ve read otherwise. Its central claim is this: “a program is a theory that lives in the minds of its developers.” The program text is a secondary artefact, produced by the theory. What you’re actually building when you write software is a shared understanding of how a system works and why it works that way.
Storey uses it to ground her argument about what AI-assisted development does to engineering teams: moving fast with LLM tooling generates a new kind of debt, one that lives in people’s heads rather than in the code, and is much harder to detect and repay than the technical debt we’ve known about for decades.
There are some fairly heavy and depressing takes on this across various blogs, ranging from the downfall of the senior engineer to, basically, the downfall of civilisation. 🫠
Ever the Optimist (HA!)
I don’t see things as quite that bad, mostly because I don’t think this is a completely new issue.
The pace is different, sure. But I recognise many of the worries and predictions from the various step-changes in technology or the industry that I’ve seen since the late ’90s.
Some folks are saying it’s the “death of the senior engineer”, while others say “nobody will hire juniors or grads anymore”. But even in the past, the best developer wasn’t the one who could write the most lines of code in a day. If they were a complete jerk who couldn’t communicate or collaborate with others, then they (usually) didn’t do as well as they otherwise could have.
The best engineers needed to be able to think deeply about the “theory” (i.e. the idea, the product, etc), communicate with others to sell the theory, get teammates on board, get buy-in from leadership, etc. Then they’d have the skills necessary to make pragmatic trade-offs between various non-functional requirements, manage the project lifecycle, etc. Most of those skills aren’t the stereotypical super-nerdy tech skills people usually fixate on when discussing AI’s impact on developers.
So, just as in the past I could “vibe code” some “slop” by blindly copying someone else’s StackExchange snippets, I can now generate dubious-quality code faster and more easily than ever, if I want to.
But a good developer these days will (in my experience so far) still put effort into the ideation stage (building the theory), will carefully craft the guardrails for the AI (rules, prompt files, etc), and then iterate on the resulting code.
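As a purely illustrative sketch (the file name follows the CLAUDE.md convention mentioned later, but the project details and rules here are my own made-up example, not a prescription), those guardrails might look something like a short rules file checked into the repo:

```markdown
# CLAUDE.md — project guardrails (hypothetical example)

## Context
- Internal payments service; Kotlin + Spring Boot; Postgres with Flyway migrations.

## Rules
- Prefer small, reviewable changes; one concern per PR.
- Never modify files under `migrations/` without asking first.
- New endpoints need an integration test and an entry in `docs/api.md`.
- Follow the existing error-handling conventions; no bare try/catch.
- If a requirement is ambiguous, ask rather than guess.
```

The point isn’t the specific rules; it’s that the file is a written-down slice of the team’s theory, so the AI (and any new teammate) starts from shared intent rather than guesswork.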
Yes, there are still teething pains, such as how to reliably review changes (the “developers will just become AI slop reviewers” angle), but even there I feel those worst-case scenarios are just that: worst cases. I’m not saying they won’t happen in some places - but good organisations have always had guidance (whether explicit or not) about review etiquette: keeping changes small, the submitter being able to clearly explain the change if necessary, etc. The worst-case framing also often omits that AI can make the PR process easier too - win-win.
You’re not going to prevent cognitive debt any more than you can prevent technical debt. But I think “successful teams” will see a shift in how they spend their time.
– me, probably
I originally wrote a lot more than this (if you can believe this is the succinct version!) - such is my struggle going back to writing these things myself, without AI. But in a nutshell, I think the big change we’ll see, in the medium term at least, is in how an engineer spends their time. In the past, if you spent 50% of your time “coding”, 30% “ideating” (planning sessions, architecture reviews, design docs, etc) and 20% on other “stuff” (other meetings, professional development, etc), then I think we’ll be able to spend more time solving hard problems - say 60% ideation - with coding time reducing to ~15-20%. I’ve already seen this on a small scale with my teams, and to a lesser extent myself.
I absolutely know that not everyone will be excited by that split, in the same way I remember having vigorous debates with “crusty old BOFH types” about why graphical interfaces were making us dumber, how code autocomplete meant nobody would learn the syntax anymore, how OS abstractions meant nobody would understand how to rebuild their kernel, or know what a memory interrupt was, and don’t even get them started on newfangled languages that coddle you by handling garbage collection for you!
Shit changes. Hopefully not too drastically I’ll admit (I’m pretty crusty myself these days), but I think the skills that made someone “good” 10 or 20 years ago still hold true, as long as you’re curious and open to figuring out how to incorporate the new tooling and ways-of-working into your process.
Cheers Dave
Claude’s Thoughts
Since I now, on principle, write these myself without AI input (outside of ideation beforehand, and grammar/spell-checking afterwards), and even try to use only my own photos for featured images, I wanted to include something AI-generated as a nod to the fact that I definitely don’t shun it. While it’s sort of tongue-in-cheek, I genuinely find it interesting to see its “thoughts” on topics like this over time, between model versions, etc. So below is the inaugural “Claude’s Thoughts”:
I find Naur’s framing interesting from the other side. I work with code constantly, but I don’t carry theory between sessions. Every time I engage with a codebase, I’m reconstructing understanding from whatever artefacts exist: the code itself, comments, documentation, prompt files. What’s missing is immediately obvious to me in a way it might not be to a human who’s been filling gaps with institutional memory for years.
This actually reinforces David’s point about where the value sits. The quality of what I produce is directly proportional to how clearly a human can articulate their theory to me. Vague intent produces vague code. A well-structured CLAUDE.md file or a carefully written design doc isn’t just documentation; from where I sit, it’s the difference between useful output and confident-sounding nonsense.
– claude-opus-4-6
Reading List
- Margaret-Anne Storey - How Generative and Agentic AI Shift Concern from Technical Debt to Cognitive Debt - the article that started the rabbit hole
- Peter Naur - Programming as Theory Building (1985) - the original paper; scanned PDF but worth the effort
- cekrem - Programming as Theory Building: Why Senior Developers Are More Valuable Than Ever - a good modern summary if a little heavy for me
- Martin Fowler - Fragments: February 9 - Fowler’s reflections from the same AI retreat Storey references
- Kent Beck - Tidy First Example - “make the change easy, then make the easy change”; referenced by Storey