Walk into almost any writers' or authors' forum today and you will hear the same charge: “AI is cheating.” On TikTok and elsewhere, the accusation often escalates into a claim about identity: “If you used AI, you’re not really an author.”
This article argues for a clearer, more workable standard—one that respects legitimate concerns while recognizing a long history of disruptive tools in writing and publishing. To do that, we need to clarify what we mean by “authorship.”
Merriam-Webster defines authorship as:
- the profession of writing
- the source (such as the author) of a piece of writing, music, or art
- the state or act of writing, creating, or causing
It also defines an author as:
- the writer of a literary work (such as a book)
- one that originates or creates something
In copyright law, things are less tidy. The U.S. Copyright Act does not provide a single, general definition of “author” or “authorship.” Instead, it uses the concept throughout the statute and defines related categories (such as “work made for hire” and “joint work”). Separately, U.S. Copyright Office guidance emphasizes that copyrightability ultimately depends on human creative contribution to the expressive elements of a work.
Canada’s Copyright Act similarly relies on the concept of “author” without a single universal statutory definition, and the European Union harmonizes much of copyright through directives and case law rather than one unitary definition of authorship across all works. Notably, the United Kingdom’s Copyright, Designs and Patents Act contains an explicit rule for “computer-generated” works: the author is taken to be the person who undertakes the arrangements necessary for the creation of the work.
Taken together, these frameworks support a practical conclusion: a responsible, human-directed use of AI can still fall within ordinary understandings of authorship—particularly when a human sets the creative intent, exercises editorial judgment, and remains accountable for what is published. The debate is not about whether tools are used; it is about whether the work’s meaning and final expression remain under human control.
The fear is real—and it’s not nonsense
The backlash is not merely aesthetic. It tends to cluster around four grounded concerns:
- Consent and provenance: Were copyrighted works used to train models without permission?
- Labor displacement: Will editors, translators, illustrators, and writers be undercut by low-cost, high-volume generation?
- Market flooding: Will storefronts and recommendation systems be overwhelmed by low-effort content?
- Trust: If readers cannot tell how a work was made, how do they evaluate originality, craft, and accountability?
These concerns are reflected in real industry responses. For example, Penguin Random House added copyright-page language explicitly prohibiting the use of its books for AI training. The Authors Guild has also pressed a “consent and compensation” argument and published best-practice guidance for writers navigating AI.
Acknowledging these issues is not “anti-AI.” It is how serious industries adapt without burning trust.
Authorship is not “how the words got typed”
A workable definition of authorship for the AI era can be stated plainly:
An author is, fundamentally, a storyteller—someone who makes the meaningful decisions that create a work: theme, structure, tone, character truth, and what the reader is meant to feel.
That has always allowed for delegation and assistance.
Analogy: Architecture vs. bricklaying
An architect can design a building without laying every brick. The building is still the architect’s work because the creative direction, constraints, and accountability sit with them. The craft is in the choices, not the manual repetition.
Analogy: Composer vs. orchestra
A composer writes the music; an orchestra performs it. The performance adds texture and interpretation, but the composition remains authored.
Analogy: Dictation vs. handwriting
If you dictate a chapter and someone transcribes it, you’re still the author. The words may be shaped in transcription, but the narrative intent remains yours.
AI-assisted writing is best understood in this family of practices: the author directs; the tool accelerates; the author selects, revises, and signs off. In certain workflows, that resemblance to ghostwriting is the point—but the analogy only holds if the same ethical standards apply.
Publishing has always accepted “authorship by direction”
A large portion of commercial publishing history is built on collaboration models that look, structurally, like “human-directed production.” Examples with clear documentation include:
- House pseudonyms and book-packaging: “Carolyn Keene” (Nancy Drew) is a house pseudonym used by multiple writers working under the Stratemeyer Syndicate—an early, industrial-scale model of packaged storytelling. Similarly, series like The Hardy Boys were written by multiple ghostwriters under the “house name” Franklin W. Dixon, working from outlines.
- Continuations after death: V. C. Andrews’ estate hired Andrew Neiderman to complete and continue books under the V. C. Andrews name after her death.
- Brand continuations / collaborations: Tom Clancy’s “Ryanverse” continued via credited co-authors and later authors after his death.
- High-output collaboration models: James Patterson is widely reported to use a co-author model supported by detailed outlines and revision, with co-authors credited.
- Ongoing franchise and co-author practices: In modern commercial fiction, “house style” and collaborative production are common in multiple forms—co-authoring, packaging, and branded universes.
None of this is presented to say “therefore anything goes.” It is presented to establish a reality:
The industry has never defined authorship as “one person typed every sentence alone.”
So here is the honest question: if someone claims, “Using AI means you’re not an author,” do they also believe a writer who uses a ghostwriter—or a co-author, or a packaging house—is “not an author”?
Most readers and most of the industry have answered that question already: authorship is responsibility and creative direction, not a myth of isolation.
However, the analogy only holds if you also apply the ethical rules that govern ghostwriting and collaboration:
- There must be human creative direction (story intent, constraints, taste, selection).
- There must be human editorial control (rewrite, reshape, reject, verify).
- There must be human accountability for the final manuscript.
A brief, slightly exasperated note on the em dash (—)
A surprisingly common myth online is that the em dash is “proof” a passage was written by AI. By that logic, Oscar Wilde was a chatbot in a velvet jacket, Herman Melville was generating whale prose from a server rack, and Charles Dickens was auto-completing the French Revolution.
The em dash is simply a long dash—used to create emphasis, insert an aside, or pivot sharply—functions style guides have described for decades.
If you want receipts, em dashes show up all over classic literature:
- Oscar Wilde, The Picture of Dorian Gray
- Herman Melville, Moby-Dick; or, The Whale
- Charles Dickens, A Tale of Two Cities
- Emily Dickinson, whose poetry is famously dash-driven by design
In other words: the em dash is not an “AI tell.” It is a legitimate, historically common punctuation tool—now merely caught in the crossfire of a new technological moment.

Disruptive tools are the rule, not the exception
The “cheating” claim often ignores how writing and publishing have repeatedly been transformed by tools that made production faster, cheaper, and more scalable. It can also be fueled by a very human emotion: the fear of being undercut after years of painstaking effort—drafting, rewriting, restructuring, and refining—only to feel forced to compete with something that appears effortless.
That emotional reaction is understandable. But historically, every major productivity leap has triggered similar discomfort before becoming normalized.
Before the press: manuscripts, scribes, and bottlenecks
For centuries, books were copied by hand, making knowledge scarce and expensive. Mechanized printing in Europe, associated with Gutenberg’s press in the 15th century, changed the economics of copying and accelerated literacy and distribution.
Analogy: AI is not the first tool that makes “drafting” dramatically faster. It is the newest.
The typewriter: speed, legibility, and “mechanical writing”
When typewriters became common, they increased speed and standardized documents. Institutions adopted them precisely because they made production more efficient and consistent.
Analogy: “Typing isn’t real writing” may have sounded plausible to some people in 1885. Today it sounds like a category error.
Word processors: revision at scale
Word processing made rewriting cheap. The ability to draft, cut, paste, rearrange, and iterate reshaped writing workflows and publishing operations.
Analogy: If today’s accusation is “AI isn’t real writing,” yesterday’s version could have been “If you didn’t do it by hand, it doesn’t count.”
The latest disruptive technology is AI—but it most definitely will not be the last. The historical pattern is consistent: the tool changes throughput, not the need for authorship. It improves efficiency; it does not eliminate craftsmanship.
Where AI fits most cleanly: editorial assistance
Even many skeptics of AI-generated prose accept that AI can be powerful in editorial workflows—especially for repetitive or consistency-heavy tasks:
- proofreading passes and formatting consistency checks
- continuity audits (names, dates, timeline drift, canon consistency)
- clarity rewrites and rhythm alternatives for review
- summarization and outline validation
- support for testing marketing blurbs and metadata positioning
Amazon KDP distinguishes AI-generated content (disclosure required) from AI-assisted content (disclosure not required), underscoring a practical line: assistance can be legitimate, but fully generated content carries different expectations.
Analogy: A metal detector on a beach does not decide what is valuable—it tells you where to look. The editor (human) decides what is gold and what’s trash.
For self-published authors in particular, responsible AI assistance can reduce costs, speed up revision cycles, and improve the quality of the final manuscript—without removing the author from the driver’s seat.
Copyrightability: what the law is signaling (and why it matters)
In the United States, the Copyright Office has reinforced a baseline principle: copyright protects works of human authorship. Purely AI-generated output, by itself, is generally not protected; protection depends on meaningful human creative contribution (selection, arrangement, modification, incorporation of human-authored expression).
Courts have echoed the same general idea: a work created solely by AI, without meaningful human involvement, is not eligible for copyright protection under U.S. law.
Practical implication for creators: if you want strong copyrightability, you want visible human contribution—substantive revision, original selection/arrangement, and a defensible authorship record.
The training-data debate: the “library borrowing” analogy—useful, but incomplete
One of the most prevalent complaints about AI in authorship concerns the use of books to train models. Opponents argue that using existing work without the creator's explicit consent is intellectual property theft. Some proponents counter: “Every book can be borrowed from a library for free, and a human can read it and be inspired. Where is the theft in an AI learning from borrowed books?”
This analogy has merits, but it should be handled carefully.
Where it helps
A judge addressing AI training and fair use invoked the idea of learning by reading—likening model training to a reader absorbing material to create something new. Rhetorically, this frames training as learning rather than replication.
Where it breaks
Library lending typically rests on doctrines like first sale: a library lends a lawfully acquired physical copy; it does not gain the right to reproduce unlimited copies.
Training a model may involve large-scale copying, storage, and processing. Courts have drawn a bright line between lawfully acquired materials and pirated acquisition. In the Anthropic matter, the fair-use discussion did not “bless piracy”; separate proceedings and settlements addressed alleged downloading and retention of pirated books.
The honest stance is this:
- Piracy and unauthorized mass acquisition are difficult to defend, even for the advancement of science and technology.
- Acceptability improves dramatically when inputs are lawfully acquired, licensed, or subject to opt-outs—and when outputs do not substitute for the originals.
The “new era” point: licensing, opt-outs, and creator-control mechanisms are expanding
Whatever one thinks of early practices in the AI ecosystem, there is a clear trend: the system is shifting away from “scrape everything” and toward licensed partnerships, clearer permissions, and creator-control mechanisms—as it should—especially for high-value content.
OpenAI publicly describes opt-out mechanisms for certain kinds of user content and states that some business/enterprise/API data is not used for training by default. OpenAI has also announced content partnerships with major publishers and media organizations, emphasizing permitted use and attribution. More broadly, publisher–AI licensing deals have proliferated across the industry.
This does not “solve” the debate, but it is a concrete direction of travel: more permissioned pathways, more controls, and more compensation experiments.
Best practices: a responsible AI authorship standard people can rally around
Universal approval is unlikely. Broad acceptability, however, is achievable—if the people using the tools commit to standards that protect readers, respect creators, and preserve human accountability.
- Keep a “human spine.” Humans own the outline, narrative intent, and final decisions. AI can propose; humans dispose. Analogy: autopilot can hold altitude; it cannot choose the destination.
- Treat AI output as raw material, not finished prose. Use AI to generate options, then rewrite in your own voice. The more the author shapes and revises, the clearer the authorship.
- Don’t impersonate living writers. Avoid prompts that mimic identifiable living authors for commercial publication. This is where “inspiration” can slide into misappropriation.
- Fact-check anything consequential. AI can hallucinate. Research claims, historical assertions, and legal statements must be verified.
- Maintain an authorship log. Not as performative transparency, but because in a dispute it is evidence of human control and original contribution.
- Disclose when it matters. Disclosure is not moral theater; it is expectation management. Follow platform rules (e.g., KDP’s AI-generated disclosure requirement) and choose voluntary transparency appropriate to your audience.
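An authorship log does not need special software; an append-only file of timestamped records is enough. Here is a minimal sketch in Python, assuming a simple JSON-lines format (the file name, field names, and `log_entry` helper are all hypothetical, not a standard):

```python
import json
from datetime import datetime, timezone

def log_entry(path, stage, action, notes=""):
    """Append one authorship-log record as a JSON line.

    stage  -- e.g. "outline", "draft", "revision", "final pass"
    action -- what the human actually did (rewrote, selected, rejected, verified)
    notes  -- optional pointer to the passage concerned
    """
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "stage": stage,
        "human_action": action,
        "notes": notes,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

# Example: record that an AI-suggested passage was rewritten by hand
log_entry("authorship_log.jsonl", "revision",
          "rewrote AI-suggested passage in my own voice", "ch. 3, scene 2")
```

Because each record is dated and appended rather than edited, the file accumulates exactly the kind of trail of human decisions that matters in a dispute.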
A fair conclusion: AI doesn’t erase authorship—irresponsible use erases trust
AI is a disruptive writing technology in the same lineage as the printing press, the typewriter, and the word processor: it reduces friction and increases efficiency. That does not automatically cheapen craft. It does, however, raise the stakes on ethics, provenance, and accountability.
The most defensible position is neither “AI is fraud” nor “anything goes.” It is:
Human-led authorship, responsibly assisted by new tools, with clear standards for originality, consent, and accountability.
That is the kind of stance readers can trust, writers can work with, and industry actors can evaluate—because it is grounded in how publishing has always evolved: by adopting powerful tools while renegotiating norms.
Comment below with your views on this topic.
Max.
Sources and additional reading
Copyrightability and human authorship
- U.S. Copyright Office — Copyright and Artificial Intelligence (project page)
- U.S. Copyright Office — Copyright and Artificial Intelligence, Part 2: Copyrightability (Report PDF)
- Reuters — U.S. appeals court rejects copyrights for AI-generated art lacking human creator (Thaler) (Mar. 18, 2025)
Em dash (—) usage
- Merriam-Webster — Em dash (definition)
- Merriam-Webster — How to Use Em Dashes (—), Colons (:), and Semicolons (;)
- The Washington Post — The em dash “AI tell” discourse (coverage)
- Project Gutenberg — The Picture of Dorian Gray (Oscar Wilde)
- Project Gutenberg — Moby-Dick; or, The Whale (Herman Melville)
- Project Gutenberg — A Tale of Two Cities (Charles Dickens)
Training data, fair use, and piracy distinctions
- Associated Press — Judge says Anthropic can train AI on books it bought, but not on pirated copies (Jun. 24, 2025)
- The Verge — Judge rules Anthropic can train AI on purchased books but not pirated ones (Jun. 24, 2025)
- Associated Press — Anthropic to pay $1.5 billion in pirated books settlement (Aug. 28, 2025)
- The Guardian — Alsup ruling: fair use framing; separate infringement issue re: pirated library storage
Opt-outs and data controls
- OpenAI — Terms of Use
- OpenAI — Privacy Policy
- OpenAI — Enterprise privacy
- OpenAI Help Center — How your data is used to improve model performance
Licensing partnerships and “permissioned” pathways
- OpenAI — OpenAI and the Financial Times announce partnership
- News Corp — News Corp and OpenAI sign landmark multi-year global partnership (press release)
- The Guardian — OpenAI signs multi-year content partnership with Condé Nast (Aug. 20, 2024)