Here is yet another essay about writing, Substack, and AI.
Wait!
Would you feel less nauseated if I told you that, in this essay, we won’t really try to address whether it’s Okay or Not to use AI for writing? Really — we’re going to ignore the question entirely. The trenches of this website are filled with dulled, desperate fighting over this issue; I have little to contribute to that battle. Let’s move in a lateral direction instead and quickly cover some fresh ground. I have what I think is a lightweight, unobjectionable, and practical little idea about AI writing on Substack:
Things made with AI should be consumed with AI
I shoot for about a 1:1 ratio between how much the author used AI and how much I use AI — or as close to that as I can get. That’s a square deal, I’d say. Doing it any other way would almost be unfair.
A writer who uses AI tools to majorly help in the writing of their stuff cannot be disappointed to find out that people mainly consume their writing through AI summarizations, for instance. If someone didn’t put in all of the effort to create a thing, why should anyone else put in all of the effort to encounter, understand, and appreciate that thing? Why would anyone expect them to? Think about the extreme end — e.g. those entirely AI-generated novels on Kindle Direct. Why should anyone take the time to read something that no one took the time to write? It feels deeply unreasonable.
Right about now, an uncareful thinker might be thinking something like this, in response to this line of argument:
But people do read the novels. That’s what you puritans never understand! Regardless of whatever snooty opinions you cling to, people do buy those fully AI novels, and many people besides them do find many AI-assisted works meaningful. These AI-assisted Substacks are some of the biggest on the platform — that traffic isn’t fake! People still respond in a real way!
I think the clearest formulation of my position was that line from a few paragraphs back:
A writer who uses AI tools to majorly help in the writing of their stuff cannot be disappointed to find out that people mainly consume their writing through AI summarizations
In this essay I’m talking about how expectations between author and reader change when AI is used. Whether it’s Okay or Valid or whatever to use AI in the first place is not my concern. The actual success of AI-assisted content is not my concern. I am just saying that it would be improper for there to be an asymmetry between the expectations placed on author and reader: AI-assisted writers should be A-OK with having AI-assisted readers, in theory, regardless of how things really are right now.
In an ideal world, all AI content would be clearly labelled. Further, those labels would include information about what types of tasks AI was used for, because there are gradations to all of this. Some examples:
- (1) An author uses AI to help research their articles, having it find sources (in addition to their own) and asking it to break down complex information on subjects they are not familiar with. The author presents any research and explanations from the AI — alongside the substantial research they conducted themselves — in language that is their own.
  - I have a similar relationship to AI in my own writing. I don’t use AI during most writing projects, but on the occasions I do, I view myself as the principal investigator and AI as my indefatigable, endlessly unctuous research assistant. I formulate the research questions, I design the methodology, and when all is said and done I’m the one writing the thing up. The AI-RA is great for making bibliographies and plowing through grunt work, and it is a not-incompetent sounding board. But it’s going in the acknowledgments section — not in the author byline. Anyway, I would spend the time to actually read the entirety of articles written in this fashion.
 
- (2) An author uses AI to “improve the efficiency” of their writing process and puts out an article every five days or so, like clockwork, about Productivity, Relationships, Technology, Investing, etc. There is an actual nugget of non-trivial content in most of them — surrounded, granted, by a good deal of faff. They all have AI-generated art as feature photos, and one gets the impression that AI suggested a lot of the topics, and even the angles taken on those topics. You skim an article and while the prose sets off some alarms, it does not set them all off; and sometimes, once in a while, you even find typos — a deeply heartening sign in the year 2025.
  - After being clued in to the fact that this author exists, and after getting some sense that they might have some stuff I like, I would use AI to “improve the efficiency” of my reading process: I’d give ChatGPT the URL of their Substack and ask it to summarize their (immense) body of work and provide a straightforward analysis of themes, perspective, etc., all with representative articles and quotes. Then I would zoom to the specific parts of specific essays that contained any very interesting quotes, and read around them a little. Then I’d be done. Filed. Thank you! Someone used AI to help them write a 4,000-word “meditation” on SaaS companies over the course of roughly two hours of work? Cool. I am going to use AI to extract the nuggets of actual content from it in roughly five minutes, skim the rest, and move on. And they should be happy with that. It was a pleasure doing business with you!
 
- (3) An author uses AI to “improve the efficiency” of their writing process and puts out an article nearly every day about Art, Creativity, Technology, Society — you get it — for months on end. Most articles do not contain an actual nugget of non-trivial content; they are, almost all of them, just exercises in shifting the rhetorical stresses placed on certain tones or words. It’s not X, it’s Y. A is a symbol of Z. Some autobiographical material thrown in, making it a ‘personal essay’ if nothing else. There are often gestures toward ideas being present or arguments being made — Having established X, we are now in a position to explore Y — but when you actually examine it with any rigor it is just sound, signifying nothing in particular.[1]
  - I would follow the same process as in (2) up until the point of asking ChatGPT to summarize and analyze their entire body of work, and then probably stop there. Just have it give me some representative quotes to go along with the summary and then move on. I’ve done this a few times, in preparation for this article, and it feels so free, so liberating. Someone has spilled assembly-line-produced vibes out across their page, tens and tens of thousands of words, and people keep rolling around in it and talking about it. That’s fine. I’ll just use my vibe-hoovering-up device to vac it all up. The compartments of the device neatly divide and organize the vibes — it contains them for future use, if at all necessary. All done. Moving on.[2]
 
Now.
Some writers might get a sinking, queasy feeling when they think about their strangely-shaped, many-chambered body of work — colored so uniquely by their own peculiarities — being fed into an AI and summarized like this. As if their work were being pulled through a kind of semantic stamping press: a machine that flattens and engraves everything it takes in, pressing it into simple shapes, into things which are thin and portable.
Basically: I think some writers have a leg to stand on when it comes to this worry, and some writers really don’t.
Thank you for reading. If you enjoyed this piece, please consider giving it a Like. The Magpie is a very new publication, not even two months old, and has extremely low visibility. Liking my posts helps to get more people to see them. I would be eternally grateful!
[1] This is a valid enough way to go about things, on its own. Not everyone has to be a theoretical, argumentative sort of essayist. Voice-y, lyrical writing that does little more than beautifully convey the idiosyncratic perspective of one person — again, the ‘personal essay’ — is one of the foundational sub-genres of the essay. Some of our most loved writers are personal essayists. The sell, of course, has always been that you are enjoying the crystallized perspective of a particular individual. More than any other art form, writing is someone’s brain made available to you. We delight in exploring the brains of unique people with unique ways of thinking and expressing themselves.

When AI is introduced into the mix, this muddies the waters, of course. You’re no longer getting that direct 1:1 portrayal of someone’s individual mind or perspective. But — ah! We said we wouldn’t worry about that stuff here.

[2] You notice, first, that not too much is lost: somehow it usually sounds like the author wrote the AI summary of their work themselves. Funny, that!