On LLMs and Exobrains

Published on 2025-06-20

I like the concept of an “exobrain” - a brain-like thing outside of my human brain that augments it - whether that be a calculator, an address book, Wikipedia, Google Maps, or an LLM.

Exobrains provide value. They let me do things faster/better/more than I could without them. They have a cost, yes - I’m sacrificing some independent, self-contained skill at a given thing - but in general this is worth it because I can use my brainpower for more valuable things.

A pocket calculator - the thing that 90s teachers were adamant you wouldn’t always have with you - is a form of exobrain. I could multiply 3824 by 8274 in my head or on paper, but a calculator is faster, easier, and less error-prone, so I can get on with more important things. There is still value in being able to do it myself, but in general it’s better offloaded to an exobrain.
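For what it’s worth, here’s that multiplication offloaded to the nearest exobrain to hand - a Python prompt standing in for the pocket calculator:

    >>> 3824 * 8274
    31639776

One line of effort, and my endobrain is free for whatever the interesting part of the problem actually was.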

Google Maps - or Citymapper or TomTom or whatever - is a form of exobrain. I could navigate from A to B using a combination of memory, signposts, and notes from paper maps - but GMaps is faster, easier, and more resilient to unplanned detours, so I can focus my attention on the actual driving and spend less time planning. There is still value in being able to do it myself, but I gain a lot from offloading it to an exobrain.

Wikipedia and Google Search are a form of exobrain. I could cram a bucketload of facts into my head, but it’s generally quicker and easier to retrieve specific facts from the internet as I need them (enshittification of the internet aside), and spend the brainpower on other things, like Actually Using Them. There is still value in Knowing Stuff, but generally much more in Doing Stuff With That Knowledge.

In all of these cases, these exobrains:

  • Do something I could generally do myself (given sufficient preparation and practice)

  • Generally do it faster and better and more reliably and more “cheaply”

  • Lead to a reduction in my ability to do the thing unaided, because I will tend to offload it

  • Enable me to spend more time and energy on other things I consider more important

Now, sure - everything is a tradeoff, there are no free lunches, and so on.

And there’s value in keeping your endobrain - the actual squishy meatlump inside you - well-exercised. Yes, a calculator multiplies faster, but being able to crunch numbers yourself still buys you speed and convenience on small sums, and keeps your numerical intuition sharp; it’s easy to use Google Maps to find your way around, but maintaining your own sense of direction helps in case of outages or battery failure, and keeps you generally familiar with your surroundings; it’s easy to look stuff up, but having the facts to hand lets you act uninterrupted, unblock your own progress, and so on.

But fundamentally, AI/LLMs seem like Just Another Exobrain.

Claude and other LLMs do a whole bunch of stuff Remarkably Well, from document summarisation, to feedback on writing, to creating and editing JS and other code, and more. Applied judiciously and sensibly (which is vital, and yet sadly often overlooked), they’re huge force-multipliers in Getting Stuff Done.
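To make the summarisation case concrete, here’s a minimal sketch of offloading it to Claude via Anthropic’s Python SDK - the model name and prompt are purely illustrative, not a recommendation:

    import anthropic

    # Assumes ANTHROPIC_API_KEY is set in the environment.
    client = anthropic.Anthropic()

    def summarise(text: str) -> str:
        """Ask Claude for a short summary of a document."""
        response = client.messages.create(
            model="claude-3-5-sonnet-latest",  # illustrative; pick whatever model suits
            max_tokens=300,
            messages=[{"role": "user", "content": f"Summarise this in three bullet points:\n\n{text}"}],
        )
        return response.content[0].text

The judicious-and-sensible part lives around the call, not in it: checking the summary against the source, not feeding it anything confidential, and so on.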

Yes: if you’re blindly trusting their factual output, that’s bad; if you’re using them as a substitute for valuable learning, that’s bad; and if you’re relying on them to the point where they’re a crutch, that’s probably not good either.

But rejecting LLMs because (like any other exobrain, or indeed any other tool) they have tradeoffs and can be misused seems reductive and shortsighted.