Signal Over Noise cuts through AI hype with weekly reality checks on what actually works. Written by a digital strategy consultant who tests every tool before recommending it, each Friday edition delivers honest reviews, practical frameworks, and real-world insights for professionals who need AI to work reliably.
How to Spot AI Generated Text
Signal Over Noise #12
July 25th, 2025
Dear Reader,
Last week, a reader got in touch with some feedback that gave me pause: “Your newsletter last week felt a little ‘ChatGPT-y’,” they said. And yeah. They were absolutely right.
It’s no secret that I like to experiment widely with various AI tools (also - that’s the whole point of this newsletter) but in the process, I’d let the machine’s voice creep in where mine should have been.
So this week, let’s get into how to spot AI-generated text - the telltale signs that give it away, why it matters, and how to avoid accidentally sounding like a robot when you’re using these tools.
Bleep blorp.
The Words That Scream “ChatGPT Wrote This”
If you’ve been reading AI-generated content lately (surprise: you have, whether you know it or not), you might have noticed certain phrases popping up everywhere. Research analysing millions of academic papers found some words experienced a 25-fold increase in 2024. Want to guess what topped the list?
“Delve into.”
Seriously. ChatGPT loves to “delve into” everything. It also can’t resist “navigating landscapes,” describing things as “tapestries,” and talking about “embarking on journeys.” If you see any of these phrases, there’s a very good chance AI was involved.
Let’s delve into a tapestry, navigate a landscape and embark on a journey, shall we?
But it goes deeper than just buzzwords. ChatGPT has a formal, clinical writing style that prefers “individuals with diabetes” over “people with diabetes” and uses “glucose” instead of “sugar” at twice the rate we humans do. Yes, it sounds more professional, but it also sounds less human.
Here are some more patterns I’ve noticed as a result of being permanently online:
ChatGPT loves lists. It will go out of its way to turn any content into bullet points, even when a narrative flow would work better.
“No fluff” has become ChatGPT’s latest obsession. If you see someone promising “no fluff” content, there’s a good chance AI was involved. ChatGPT’s hatred of fluff is so intense, you’d think it survived a traumatic childhood spent in a pillow factory. And don’t get me started on “In today’s digital world”, still hanging on since 2023.
Every time I see ‘no fluff’ in AI copy, I imagine ChatGPT scrubbing a duckling with a wire brush and whispering, ‘Efficiency.’
The formula structure: “It’s not about speed, it’s not about efficiency, it’s about results.” Or the variation: “No gimmicks, no tricks, just proven strategies.” Once you notice this pattern, you’ll see it everywhere.
ChatGPT is far from alone. Claude has its own patterns - more nuanced, but still detectable in its structured, comprehensive explanations. And Gemini leans toward conversational, explanatory phrases that can feel overly wordy and corporate.
Opening paragraphs with “Moreover,” “Furthermore,” or “Additionally,” and conclusions that rely on “It’s important to note that” or “At the end of the day” can be dead giveaways (see also: “In conclusion”). These create a mechanical rhythm that your brain recognises as artificial, even if you can’t quite put your finger on why.
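The buzzword tells above are easy to check mechanically. Here’s a minimal sketch of that idea - the phrase list and the per-1,000-words metric are my own illustrative choices, not any real detector’s:

```python
import re

# Illustrative tell phrases drawn from the patterns discussed above.
# A real detector would use a much larger, statistically derived list.
TELL_PHRASES = [
    "delve into",
    "rich tapestry",
    "embark on a journey",
    "no fluff",
    "in today's digital world",
    "it's important to note that",
]

def tell_density(text: str) -> float:
    """Return tell-phrase hits per 1,000 words (0.0 for empty text)."""
    lowered = text.lower()
    hits = sum(lowered.count(phrase) for phrase in TELL_PHRASES)
    words = len(re.findall(r"\w+", text))
    return 1000 * hits / words if words else 0.0

sample = (
    "In today's digital world, let's delve into the rich tapestry "
    "of productivity. No fluff, just proven strategies."
)
print(round(tell_density(sample), 1))  # phrase hits per 1,000 words
```

A high score doesn’t prove AI wrote something, of course - it just flags prose worth a second, sceptical read.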
The Structure Problem
Beyond vocabulary, AI writing has structural tells that are harder to articulate but easier to feel.
AI looooooves balanced coverage and treats every point with equal weight, creating unnaturally even paragraphs that hit all the expected topics without the natural emphasis and de-emphasis that comes from human thinking. AI has algorithms - real writers have opinions, priorities, and tangents.
You’ll also notice AI’s obsession with paired adjectives (“unique and intense,” “highly original and impressive”) and its preference for connecting simple statements rather than crafting complex thoughts. The sentences are technically correct but mechanically rhythmic, like reading a well-programmed assembly line. Which…
...well yeah, which it is.
When the algorithm nails symmetry, but forgets how actual humans talk.
The Detection Arms Race
The technology for spotting AI text has gotten sophisticated fast. Tools like GPTZero claim 98% accuracy for unedited AI content, while Turnitin boasts near-100% accuracy in academic settings.
But here’s the thing: these tools struggle with edited content. If someone takes AI output and manually revises it thoughtfully, detection accuracy drops to around 50% - proof that human editing matters enormously. The most advanced detection now uses a form of watermarking with invisible statistical signatures embedded during text generation. Google’s working on this, but adoption is voluntary, so it’s not widespread yet.
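For the curious, the watermarking idea can be sketched in a few lines. This is a toy version of the published “green list” approach (bias generation toward a pseudo-random subset of the vocabulary, then test whether a text over-uses that subset) - the hash function and scoring here are illustrative, not Google’s actual implementation:

```python
import hashlib
import math

def is_green(prev_word: str, word: str) -> bool:
    """Toy 'green list' membership: a pseudo-random half of the
    vocabulary, seeded by the previous word (illustrative only)."""
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0

def watermark_z_score(words: list[str]) -> float:
    """z-score of the green-word count against the 50% expected by
    chance. Large positive values suggest watermarked generation;
    unwatermarked human text should hover near zero."""
    n = len(words) - 1  # number of word transitions
    if n <= 0:
        return 0.0
    greens = sum(
        is_green(prev, word) for prev, word in zip(words, words[1:])
    )
    return (greens - 0.5 * n) / math.sqrt(0.25 * n)
```

A watermarking generator would preferentially pick green next words, pushing the z-score far above anything unbiased text produces - which is why this kind of signature survives even when the text reads naturally.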
What Humans Notice (Even When We Don’t Realise It)
Research shows people can (currently) only consciously identify AI text about 53% of the time - barely better than a coin flip - but we unconsciously react to AI content in ways we can’t always explain.
Human writing reflects real cognitive processes: working memory limitations, real-time thinking, the way our minds actually work through problems and the beauty and tragedy that is the human experience. AI maintains surface-level coherence but lacks the deeper logical progression that comes from authentic human thought.
People with higher reasoning skills are better at detecting AI content, while heavy social media use actually makes detection worse (probably because we’re used to algorithmic content). Most tellingly, people are less likely to share content they suspect is AI-generated, even when they can’t articulate why it feels off.
After getting called out last week, I’ve been more intentional about my AI collaboration process. Here’s how I actually work with these tools:
I start with the idea and outline myself. The concept, angle, and structure need to come from my brain, not the machine’s. AI can help refine it, but the core thinking is mine. I keep a folder of draft ideas in Obsidian, revisiting and iterating on them over the course of a week or two (or less, in the case of last week’s issue).
Research comes next, usually with Perplexity and sometimes Claude. I use these tools to gather information, find studies, and get different perspectives on the topic. This is where AI really shines, pulling together information I’d spend hours hunting down manually. I get back research reports with actual data points that I can reference and reflect on.
I write the bare bones first. The basic argument, key points, personal insights. Then I scan for structure and see where AI might help fill gaps or improve flow or - in some cases - not.
The editing is where the magic happens. That’s it. Don’t give AI the keys to the kingdom thinking that it’s capable of doing a final draft in your tone of voice. Human oversight is paramount.
Everything gets read aloud. If it sounds like I’m giving a corporate presentation instead of having a conversation, I question my life choices and then rewrite it.
I have a secret weapon: my style guide. I had Claude analyse examples of my best writing and create a tone and style guide specifically for me. Now when I collaborate with AI, I give it that guide upfront. It knows I prefer contractions, direct address, varied sentence lengths, and conversational transitions. This is still a work in progress, and definitely not a “one size fits all” approach.
I lost my original book manuscript so had to scan each page, then extract the text - but that’s a story for another time.
The Real Goal
Look, the point isn’t to hide AI usage or to never use these tools, nor to feel embarrassed by using them. They’re incredibly useful for research, brainstorming, and working through ideas.
The goal is to strategically place AI within your process to enhance your thinking and writing while preserving what makes your voice unique, not to get lazy and churn out the same voice as the rest of the internet.
AI should make you more efficient at being yourself, not more efficient at being generic.
Because here’s the takeaway: in a world increasingly flooded with AI-generated content, authentic human insight becomes more valuable, not less. The writers who succeed will be those who master the balance - leveraging AI’s capabilities while maintaining the creativity, expertise, and personality that create real connection with readers.
Sincere, heartfelt thanks to the reader who called me out last week. The feedback was earnest, constructive - and I think it made this newsletter better - no fluff.
Until next week,
Jim
P.S. If you want to test your own AI detection skills, try reading a few newsletters or articles or social media posts and see what feels “off” to you. You might be surprised at how much your unconscious mind already notices.
Signal Over Noise is written by Jim Christian and should be forwarded immediately to three of your closest friends. Subscribe at newsletter.jimchristian.net.