There is a sentence I am hearing more often, in code reviews, in meetings, in emails sent in apology. That was AI. I didn't write that. Sorry, GPT generated the draft. Don't blame me, the model hallucinated.
I want to be careful here, because I use AI heavily and I think more people should. The objection I have is not to the use. It is to the move. The move is to put your name on something, send it, and then — when it turns out to be wrong, sloppy, offensive, or stupid — disclaim authorship by naming the tool that produced the words.
That move does not work. It cannot work. The author is still the author.
Where the line actually is
The line is not at generation. It is at signing.
If you used a model to draft an email, edit a paragraph, write a function, or sketch a slide, and then you read it, decided it was right, and put your name on it — the words are yours. You have adopted them. The mechanism that produced them is irrelevant to the relationship between you and the person reading them. It does not matter whether they came from your fingers, a junior writer, a ghostwriter, an LLM, or a Magic 8-Ball. Pressing send is the act of ownership.
Conversely: if you used a model to generate something and you did not read it carefully before signing, the failure is not the model's. The failure is that you sent something you had not vetted. The model did not betray you. You skipped a step.
This is the same standard we have always applied to executives who sign documents drafted by their lawyers, to surgeons who use techniques they did not invent, to CEOs who deliver speeches written by speechwriters. I didn't write it personally has never been a defense in any of those contexts. The signature, the delivery, the endorsement — that is what creates accountability. The drafting process does not.
Why the move is tempting
It is tempting because it would be convenient if it worked. Authorship is heavy. Words you sign your name to can come back to you years later. If you could outsource the consequences to a tool, you would get the leverage of the tool without the risk.
It is also tempting because of a recent cultural pattern: blaming the algorithm. The algorithm decided to recommend that. The model classified them that way. The system flagged the transaction. In each of those, a person made a decision to deploy the system, a person made a decision to act on its output, and no one designed the system to be unaccountable. The accountability is fully present. It is just diffused across multiple humans, none of whom feel personally responsible because the technical artifact is between them and the consequence.
"GPT wrote that" extends the same dodge to authorship. The artifact of the model gets placed between the author and the reader, and the author tries to step out of the relationship. It does not work for the same reason the algorithm dodge does not work: a person chose to use the tool, a person chose to ship the output, and the consequences land on the people who never asked for either.
What you owe the reader
Every reader you write to is operating under an assumption: the person whose name is on this took a moment to mean it. That assumption is the basis of every reading relationship — emails, papers, code reviews, contracts, condolence notes.
If the assumption is true — if you wrote it, or you used a model and read it carefully and adopted the words — the relationship works exactly as it always has. AI in the loop is invisible from the reader's perspective and should be. They do not need to know whether you typed each word.
If the assumption is false — if you used a model and shipped without reading — you have lied to the reader by sending. The lie is not in the technology. The lie is in the implied claim I considered this. You did not consider it, and the reader is going to make a decision based on the assumption that you did.
The fix is not to disclose every use of AI. The fix is to read what you send.
What's fine, by my reckoning
Use AI to draft. Use AI to refine. Use AI to write production code that you review and own. Use AI to research, brainstorm, summarize, translate, code-review your own work, find the typo, suggest a better verb. Use it for the boilerplate, the first pass, the plumbing, the structure, the grunt work. Use it heavily. Most of the people who claim not to use it are using it.
The only thing not to do is to use it as a shield. I didn't write that, GPT did is a sentence that should embarrass the person saying it, the way my secretary wrote that once embarrassed executives who hadn't read their own correspondence. The tool changes; the embarrassment is the same.
A note on disclosure
There is a small culture forming around proactive AI disclosure — written with assistance from Claude, first draft by GPT — and I am cautiously in favor of it, with a caveat. The disclosure is useful only if it is accompanied by the implicit promise that goes with it: and I read it. Disclosure without endorsement is worse than no disclosure, because it pretends to be honest while still leaving the reader unsure what they are looking at.
The right standard is the one writers have always lived under: I stand behind every word here, regardless of how it got onto the page. AI does not change that. It just makes the alternative — shipping without reading — much faster and much more tempting.
What I still don't know
- How to handle agentic systems that send messages on my behalf without my reading them. This is the next frontier and I do not yet have a clean answer. My current intuition: the standard extends — if I deployed the agent, the agent's outputs are my outputs. But that creates incentives I am not sure I want.
- Where the line sits for collaborative AI work. When pair-programming with a model, where the human and the agent take turns and the final code is genuinely co-authored, the disclosure question is real and the etiquette is unsettled.
- Whether the cultural reflex will shift. I think the "sorry, GPT did it" sentence will read in five years the way "sorry, my phone autocorrected" reads now — embarrassed, deflective, a tell that the speaker has not yet adjusted to the tool. I am not sure how long the adjustment will take.
The unifying observation is the simple one. AI did not change the contract between author and reader. It made the contract easier to violate without noticing. The work is the same as it has always been: read what you send, mean what you write, and own what you sign. The tool does not reach into the relationship. You do.