Does this sound familiar? You find yourself staring at a blank page, cursor blinking mockingly, as the weight of having to write something "good" settles into your gut. I’m not really a writer, you think.
Enter ChatGPT: the siren song for anyone who's ever struggled to find the right words. Just type a prompt, hit enter, and voilà: instant prose!
But here's the rub: that nagging feeling in your gut when you read what comes out. The text is technically correct, sure. Every comma in place, every paragraph perfectly balanced. Yet something feels off. It reads like a robot cosplaying as a human: all those "multifaceted considerations" and "it's important to note" phrases piling up like corporate jargon at a board meeting. You wanted help finding your voice, but instead you got someone else, someone who sounds suspiciously like every other AI-generated piece floating around the internet.
The frustration is real: we're caught between our very human desire to communicate well and the uncanny valley of AI assistance that promises to help but delivers prose as appetizing as cardboard. We wanted a writing partner, but got a polite automaton that wouldn't know authentic voice if it tripped over an em dash.
Lily Chambers will take it from here, since this is her area of expertise. She’s helped me immensely with my writing over the last few months.
Q: What are some dead giveaways that someone’s writing was done with an LLM?
We’ve probably all heard by now that the infamous em dash can be a giveaway that an LLM may have authored a text. Maybe this bothers you, maybe it’s a non-issue, but here are some additional style, voice, and tone tells that can clue you in on an article’s author, human or otherwise. (And don’t worry, we’re still very pro em dash here.)
Keep in mind that LLMs will change their style, voice, and tone with effective prompting, and different LLMs come with different pre-programmed styles. ChatGPT is casual, friendly, and, well…sycophantic, whereas Claude has a bit more sincerity. If you have some language prowess and can tell the LLM how you want your text to flow, it may do just that. In my experience, though, even with explicit direction it tends to fall back on its preferred style, or to mix that preferred style with elements of whatever prose it’s been asked to emulate.
Below is a roundup of “tells” that frequently creep into large language model (LLM) prose, along with a few cautions about relying on any one of them in isolation. Most of these markers show up consistently in empirical studies of AI-generated text or in practitioner cheat sheets assembled by detection vendors and editors.
1. Phrasing & pattern tells
Many of these phrases are idioms or basically filler text that doesn’t convey much information. They’re overused and rather low-impact; by that I mean they don’t do a lot of storytelling. Any Writing 101 course will tell you that relying on these kinds of phrases is clichéd and a little uninspiring, but English-professor judgment aside, AI-authored texts tend to be rife with them. Writers, professional and casual alike, use idiomatic speech and fixed expressions, so it’s not a hard-and-fast rule. Where LLMs tend to be particularly guilty is when these phrases open a sentence or paragraph, and when they show up in abundance within a single piece.
Add these to your watch-list:
“In today’s fast-paced world …” / “In an ever-evolving landscape …”
High-frequency openers, especially with ChatGPT. They let the model sound universal without committing to specifics.
“It is important to note that …”
Ranked among the top “AI give-away” phrases by multiple detection services.
“At its core” / “At the heart of the matter”
Convenient fillers that the model uses when shifting from definition to explanation.
“From a broader perspective” / “Through this lens”
Transitional crutches to move to an analytical moment without adding content.
“Ultimately,” followed by a restatement of the prompt
Mirrors the LLM alignment objective to satisfy the instruction; humans rarely echo the task so literally.
Apology macros: “I’m sorry, but …”, “As an AI language model, I cannot …”
Guardrail responses that shine through when the output isn’t fully cleaned up. Similarly, leaving the prompt itself in the text is a huge tell (obviously).
2. Structural tells
Hyper-symmetry: sentences and paragraphs of almost identical length and cadence, giving the page a “blocky” look. This can be a bit hard to spot because the model rotates through a few sentence types, but keep a keen eye out and you’ll see them repeat over and over. Often, the sentence structure pattern looks something like this:
Short intro sentence. A longer, more complicated sentence with a few dependent clauses, ending with a clause that features the Oxford comma, three descriptive adjectives, or three dependent clauses. Then a punchy short sentence.
Intro + exactly three body sections + recap: even for 100-word blurbs, LLMs default to the five-paragraph essay template, which gives off high school essay vibes.
Bulleted or numbered lists nested inside lists: used where descriptive prose would flow better. Sometimes this is useful, depending on the context; SEO-heavy articles, for example, can benefit from the format. Sometimes it just feels weirdly out of place.
Near-perfect grammar, Oxford commas, zero contractions: humans usually slip in a dash of informality or the odd fragment. And boy do LLMs love an Oxford comma. Remember that the Oxford comma is style-guide specific: Chicago asks for it; AP says no thank you.
Uniform register: no drift between abstract exposition and concrete anecdote. LLMs struggle to mix modes without explicit prompting. Again, contextually this may be fine, but even in highly quantitative writing formats, some abstraction is key.
3. Vocabulary tells
Overused adjectives & verbs, with typical human alternatives:
multifaceted → varied; robust → sturdy; pivotal → key; nuanced → subtle; profound → deep
leverage → use; delve → explore; empower → help; foster → encourage; harness → use
LLMs love to delve. Like the fillers mentioned above, “delve” is a transitional crutch: it helps the LLM pivot from explanatory text to analysis.
Extra giveaways
Clustered intensifiers: significantly, substantially, and fundamentally in quick succession.
Hedging stack: “can … may … might … potentially … often … typically” inside one sentence.
Rare sensory or idiomatic language: models avoid low-frequency slang unless coaxed. Truthfully, this tends to be a good thing, and it’s often part of localizing text. Where it becomes an issue, or a tell, is when the text should reflect some local flair or nod to the audience it’s appealing to.
4. Tone & rhetorical tells
Polite-but-bland optimism: praise for every idea, no strong stance.
“Everyone wins” framing: balances pros/cons even when the brief didn’t ask for neutrality.
Earnest mission-statement cadence: “We must strive to…” or “In moments like these…”, reminiscent of corporate press releases.
Zero first-hand anecdotes: instead, generic “for example” scenarios that feel textbook-tidy.
Peppy calls to action: short, quippy last sentences that evoke a sense of optimism and inspiration.
5. Content-level tells
Dictionary opening: defining an obvious term before using it (“Innovation, by definition, is …”) to pad length. This is a move that pesky English 101 professor would quickly redline out of an essay. It’s a weak intro no matter the context.
Exemplars too neat: case studies where every variable aligns perfectly with the thesis, with no messy edge cases or counter-arguments. Make Toulmin happy; play devil’s advocate a little.
Pre-emptive caveats: reflexive “of course, limitations apply” paragraphs even when no risk cue exists.
“Three-to-five buckets” taxonomy: imposed regardless of the subject’s natural granularity. Like the five-paragraph essay template, it’s not necessarily the format the content needs, just one LLMs reach for frequently, even on very short prompts.
6. Social tells
This tell is somewhat audience- and context-specific. If you rarely publish written text and suddenly start posting long, wordy content online, or if your online presence shows you’ve struggled with grammar and spelling in the past, then a sudden influx of seemingly well-written, error-free text is a tell in and of itself. This may not bother you at all and may be perfectly acceptable given your goals, but if it edges into more personal realms, it could be off-putting to online friends and family.
A recent example that comes to mind was a long Instagram caption that served as, essentially, a eulogy for a friend’s recently deceased family member. The poster’s previous writing tended to be short and filled with typos. The eulogy included many of the aforementioned tells and was perfect down to the last comma. Ultimately, if the author was happy with it, that’s all that mattered, but I would have liked to read a eulogy written by the human who wanted to honor the family member who had passed, not by an AI model that never knew them.
7. Practical caveats & recommendations
Models age quickly: newer models fine-tune away the most obvious quirks, so yesterday’s cheat sheet can become obsolete fast.
Humans can sound robotic too: boilerplate résumé phrasing (“dynamic team player”) triggers recruiter suspicion more than any single AI tell. For this reason, I actually recommend using LLMs in these circumstances, because the bot result is not too far off from the human result anyway.
Encourage authenticity, not just AI evasion: ask for concrete anecdotes, sensory detail, and opinionated statements, all things LLMs still find hard to fake convincingly. Going deeper with your prompting can go a long way. This is a great time to recall some English-class basics and incorporate them into your prompts.
Remove the sycophancy: it’s no secret these days that generative AI (we’re looking at you, ChatGPT) is terribly sycophantic, and it comes across in the output. In most cases, overly upbeat or agreeable text simply doesn’t fit the bill.
Know the context and your audience: as a rhetor, the main thing I focus on when writing is who my audience is. Hiring managers are going to read something very differently than my Bluesky followers will. The text should reflect that. LLMs only know the context you give them, and they’ll make assumptions if nothing is provided. When prompting, add the context of your writing and who your audience is. And perhaps most importantly, consider your audience before you turn to an LLM at all.
Read the generated text: a tried-and-true trick of any writer or editor is reading a text out loud. Before you hit “post” or “send” with your generated text, read it aloud to yourself. Does it sound like you? Is it actually what you mean to say? Does it flow naturally? If not, edit the text yourself and put a little human in it. It will go a long way (and your writer friends will thank you).
I mean, hell yeah, use an em dash: maybe this isn’t the recommendation you thought you’d read, but at the end of the day, humans used em dashes first. I use them all the time – especially when texting or writing more casually. Don’t be afraid to use an em dash here and there. As you may have noticed, there are plenty of ways to make your writing sound like you.
Bottom line
There’s no moral failing in using an LLM to boost your writing here and there. Everyone has their strengths, and not everyone is a writer, whether through born talent or ruthless sharpening of skill. Rather than worrying about whether a text was authored by AI, I’d encourage readers to decide if it matters given the context. This may vary from person to person, but having a personal barometer might be the best we can do. Is it such a big deal if a soulless SEO blog was generated by Claude? Probably not.
Personally, when it comes to things like eulogies, poetry, memoir, or, heck, wedding vows, I’m not too excited if a bot made those up. But then again, maybe the person you’re eulogizing loved AI; you never know these days.
Remember, too, that AI is a swiftly shifting beast. What is true today will be fiction tomorrow. All we can hope for is that the dogged use of the em dash sticks around so we have something to clue us in.
Do you have a question about AI? Send us your AI inquiries and your question could be featured in the next Ask Tom.
This article was co-authored by Lily Chambers. The use of first person refers to her opinions and impressions. Lily is a conversational AI designer with an academic background in rhetoric and writing. She is a fierce defender of human-written words and a semi-willing participant in AI help.
Check out more of Lily Chambers’s ramblings here.