Aider
Aider lets you pair program with LLMs, to edit code in your local git repository. Start a new project or work with an existing code base.
Thanks to Mark Wotton’s recommendation, I’ve been playing around a bunch with Aider - “AI pair programming in your terminal” - using Anthropic’s Claude as a backend.
So far I’m impressed.
In Drew Breunig’s model of AI use cases (gods, interns, and cogs), it’s an intern, but it’s a pretty good one at that. And that makes it a hugely powerful partner.
I’ve been building user-facing production software for about 20 years now, and I like to think of myself as a reasonably proficient software engineer. But I’ve also spent the last 10 years in management and leadership roles, where the only code I’ve actually written myself has been for personal or professional side-projects. So there are areas where I’m rusty - I’ve lost track of the latest way to write an __init__ method in modern Python (see the sketch below), or the right syntax for labelling a form field in HTML. I can go look them up, but Aider (plus good linting and testing setups) takes so much of that effort away that I can focus on what matters.
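For instance, where I’d once have hand-written an __init__, modern Python mostly reaches for a dataclass instead. A minimal sketch (the class and fields are illustrative, not from any real project):

from dataclasses import dataclass, field

@dataclass
class FormField:
    """A dataclass generates __init__, __repr__ and __eq__ for you."""
    name: str
    label: str
    required: bool = False
    errors: list[str] = field(default_factory=list)

# Equivalent to an old-style class whose __init__ assigned each attribute by hand.
email_field = FormField(name="email", label="Email address", required=True)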
It gets things wrong, but the things it gets wrong tend to be higher-level conceptual issues - problems with the approach, rather than details of the code itself. For example, forms that GET when they should POST (sketched below), HTTP status codes that don’t quite convey the right meaning, etc. And these are the things I can easily notice and fix - they’re fundamental (if sometimes subtle) pieces of knowledge about the system as a whole.
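To make that first failure mode concrete, here’s a minimal sketch using Flask purely for illustration (the route, handler and stub are hypothetical, not something Aider actually generated for me):

from flask import Flask, request

app = Flask(__name__)

def delete_user(user_id: str) -> None:
    ...  # stub standing in for real state-changing logic

# The conceptual bug: this handler mutates state, so it should accept POST
# only. As a GET, link prefetchers and crawlers can trigger deletions.
@app.route("/delete-account", methods=["GET"])  # should be methods=["POST"]
def delete_account():
    delete_user(request.args["user_id"])
    return "", 204  # 204 No Content - itself the kind of status-code nuance worth double-checking

The code itself is perfectly tidy - the bug lives in the HTTP semantics, which is exactly the level where I’ve found my attention still matters.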
It’s not “writing code for me”, but it has been a huge force-multiplier on the code I’ve used it with.
Concerns
I have some concerns. They’re general AI-usage-shaped ones, rather than specific to Aider, but this is the primary place I’ve actually had to face them. They concern me enough to want to mention them alongside my recommendation, but not enough to make me want to stop using it.
Training Data
I’m concerned about the training data used, the appropriateness of the usage, and the ethics of using the output of that.
But I trust Anthropic to do reasonable and sensible things, even if I’ve not got much more than vibes to go on (not entirely baseless ones - they do seem to be the most thoughtful of the big players, e.g. https://privacy.anthropic.com/en/articles/10023555-how-do-you-use-personal-data-in-model-training). And this doesn’t feel materially ethically worse than eating a cheeseburger - not necessarily great, but not fundamentally terrible.
Skill Atrophy
I’m concerned it might be making my development skills atrophy, as I rely on it to handle things for me.
But just as I’m ok with the iOS Contacts app meaning I have no memory for phone numbers any more, or Google Maps meaning I never improved my terrible sense of geography, I think this is a tradeoff I’m happy to make.
(Plus arguably those skills were decaying a little for me already…)
Config
My Aider config is simple - at the time of writing it’s as below (live version at https://github.com/doismellburning/dotfiles/blob/master/aider/aider.conf.yml.symlink):
architect: true # Run in architect mode - propose solutions but ask me before doing anything
auto-commits: false # Don't commit for me - I want to tidy things first
lint-cmd: make fix # Use my conventional setup for code cleanup tools
test-cmd: make test # Use my conventional setup for running tests
I generally try not to customise things too heavily, and with Aider I really didn’t feel much need.
Backends
I use Aider with Claude Sonnet because I like Anthropic and because Claude worked well for me. Note that you’ll need Anthropic API credits, which aren’t the same as Claude Pro.
I know folks who’ve used it just fine with OpenAI o1 and DeepSeek R1.
My brief experiments with local models didn’t go so well.
So
I like Aider, I expect to continue using it, and if you know what you’re doing, I recommend giving it a try.