On Writing Well With AI
=======================

.. feed-entry::
   :date: 2026-03-27

https://www.onepercentbrighter.com/p/how-to-write-well-with-ai went past (thanks `Miikka `__) and it raised some interesting discussion. The excerpt I saw made me want to argue with it, but actually I’m basically on board with the article as a whole.

Crucially, I think the definition of “writing” is load-bearing. I regularly don’t love the writing that LLMs generate, and I generally want the words I produce for people to be *mine*. But I’d describe the article’s use of LLMs “for writing” as more like “as a writing **partner**”, and with that I very much agree - providing feedback, review, a hostile read, highlighting awkward phrasing, and so on.

Broadly speaking, my current positions are:

1. LLMs can be fantastic collaborators (e.g. as a writing partner)
2. Raw LLM output is still often not great for direct human consumption without reasonable modification (e.g. generating a blog post) - I regularly see it and dislike it
3. Raw LLM output can still be pretty great for direct nonhuman consumption (code) or small texts (1-2 sentences here or there)

LLM-as-collaborator is what the article talks about, and this is one of my main use-cases; it’ll pick up typos, argue with me, point out potential issues, and more. And it will do this pretty cheaply, quickly, and indefatigably, whatever the time of day.

Meanwhile, there’s a tonne of largely-AI-generated articles out there, and it’s (at least currently) palpable [#bias]_ - there’s something about the tone and feel, and I just don’t like it. I can appreciate why people do it, but when I write for people I want it to be *my* words. It’s plausible that better training data, prompting, and model sophistication could wow me here, but right now all the text I write very much comes from me, albeit sometimes with a bunch of LLM-nudging depending on the situation.

When it comes to code, I find the situation is almost reversed.
I can throw a pretty low-sophistication prompt (*for a software engineer* - thanks `Justin `__ for highlighting this important nuance) at an LLM and get code out that I would happily use with zero or few modifications. And as pointed out, a lot of that still relies on my software engineering knowledge and experience to know where and how to use it, when to intervene, how to steer, etc. - but ultimately the bulk of the code is *not* “mine”.

I think that’s the key distinction: I write to communicate ideas and thoughts; the words are the output, humans are the primary target, the outcomes I want are their thoughts and feelings and later behaviours, and those outcomes are things I can at best only hope to influence. I code to make things happen; computers are the primary target [#reader]_, and the (direct) outcomes I want are more concrete, measurable, and quantifiable.

.. [#bias] In fairness, there's probably something here about me only noticing the less good stuff.

.. [#reader] I used to be a big proponent of the argument that humans are a primary consumer of code - that is to say, there’s a lot of value in writing code that humans can understand, rather than something “clever” and/or obscure, especially as optimising compilers exist and are quite good. I still think this is a generally valuable approach, but it’s not clear to me that humans are still the primary readers to target.