Teach the AI to Fish
In my last post I talked about Peter Naur’s “Programming as Theory Building” and how cognitive debt is the new thing we should be worrying about (just in case you needed another one 😅). I cut something I’d been mulling over because that post was already getting long, so I thought I’d write another quick post just on that.
My observation: Documentation isn’t just for humans anymore.
For decades (forever?), “good documentation” meant writing things down so that another person could understand your system. READMEs, design docs, setup guides, wikis, etc. All aimed at someone sitting down to read and learn.
That’s still important. But there’s a new audience now, and most teams haven’t quite caught up to that fact.
When I set up a new project these days, one of the first things I create is a CLAUDE.md file (or whatever the equivalent is for your agent of choice). I have a template I can reuse, so I’m not writing it from scratch every time. It describes the intended project structure, the conventions, the things I want the AI to know before it starts touching code. It’s not an afterthought; it’s as deliberate as choosing a framework or setting up CI. And importantly, I make sure to keep it up to date throughout the development process.
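As a rough illustration, here’s the shape mine tend to take. The project name, sections, and details below are all placeholders for the sake of example, not a prescribed format — the point is that structure, conventions, and commands are stated explicitly rather than left for the agent to infer:

```markdown
# Project: acme-api (example)

## Structure
- `src/` – application code, one module per domain concept
- `tests/` – mirrors `src/`, one test file per module

## Conventions
- TypeScript strict mode; no `any`
- All public functions get doc comments
- Never commit directly to `main`; open a PR

## Commands
- `npm test` – run before declaring a task done
- `npm run lint` – must pass before committing

## Things the AI should know
- Auth uses short-lived tokens from `auth-service`; never hard-code credentials
```

The “Things the AI should know” section is where I put exactly the kind of detail an agent would otherwise guess at.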
These files, whether they’re called rules, agent skills, prompt files, or whatever, are just documentation. But they’re documentation for a fundamentally different reader. One that doesn’t get bored, doesn’t skim, doesn’t rely on institutional memory from the last standup, but also doesn’t infer things that aren’t written down (it’ll happily and confidently fill in any gaps itself! 😉).
That last bit is the key. A human can read a vague onboarding doc and fill in gaps from context, body language, or just asking the person next to them. An AI agent reads exactly what you give it and goes from there. The quality of its output is (to a large extent at least) bounded by the quality of its working context.
There is some debate these days about how valuable these “context files” actually are. Specifically, a couple of studies have questioned the effectiveness of various approaches to providing context. It’s an interesting debate, but I think some commentators misinterpret the findings; IMO they simply highlight that context-setting requires the same effort as any other part of your dev workflow. Put the effort in, and it can still be beneficial.
Models are absolutely getting better at starting from scratch, building their own context organically by exploring your whole project. But I have still seen examples where some of the “best” models will either guess things (for example, how to authenticate to an endpoint) or completely hallucinate APIs they think should exist based on other context.
So if you’re building anything that other developers will interact with, you should be treating their AI agents as first-class consumers of your documentation. Not instead of human docs, but alongside them.
What does your system look like to an AI that’s been told “use this API to build a feature”? Does your README or AGENTS file give it enough to work with? Are your conventions explicit or do they live in tribal knowledge? If someone’s AI agent tried to integrate with your service tomorrow with nothing but your docs, what would it produce?
I’ve found myself saying this to my teams a few times recently:
Rather than teaching a person to fish, teach a person’s AI to fish.
The developer who reads your docs and the agent that acts on them are increasingly part of the same workflow.
Cheers,
Dave