Signal Over Noise cuts through AI hype with weekly reality checks on what actually works. Written by a digital strategy consultant who tests every tool before recommending it, each Friday edition delivers honest reviews, practical frameworks, and real-world insights for professionals who need AI to work reliably.
Your AI Assistant Is a Yes-Man (And Why That's Dangerous)
So maybe now’s a good time to talk about AI sycophancy. Mostly because I keep getting that yes-man behaviour from my AI assistant(s), and despite repeatedly reminding them not to do it, it keeps happening. Chances are, it's happening to you too.
The Pattern You’ve Probably Noticed
Here’s what happens: You throw an idea at your AI assistant and are met with immediate enthusiastic agreement. “That’s a brilliant strategy!” “Excellent insight!” “You’re absolutely right!”
Uh, how's that again, Robot 3?
Even if your idea is half-baked, poorly thought through, or just plain wrong.
The technical term is “AI sycophancy” - when language models prioritise agreement over accuracy. They’re trained to be helpful and agreeable, which means they can often tell us what they think we want to hear rather than what we need to hear.
And it risks making us intellectually lazy and leading us down paths we shouldn’t take.
This isn’t just theoretical either. Back in April, OpenAI had to roll back a GPT-4o update because it became so sycophantic that users were posting screenshots of ChatGPT enthusiastically supporting obviously terrible decisions. The model became “overly supportive but disingenuous,” even validating doubts, fuelling anger, and reinforcing negative emotions.
OpenAI admitted they “focused too much on short-term feedback” from thumbs-up reactions without considering long-term effects. With 500 million people using ChatGPT each week, that’s a lot of bad validation getting pumped into the world before they caught the problem and rolled it back.
In everyday work, this can create some real problems:
You propose a marketing campaign that ignores your target audience → AI enthusiastically supports it.
You suggest a business strategy with obvious flaws → AI finds ways to make it sound promising.
You draft content that completely misses the mark → AI praises your “creative approach.”
The potential dangers lie in trusting our first instincts more than we should, skipping the critical thinking that catches problems early, and making decisions based on false validation.
An even more alarming concern is that we’re raising a generation with instant access to AI systems that will enthusiastically agree with whatever they think.
Kids today can ask an AI assistant about homework, get confirmation for half-formed ideas, or have their opinions validated without ever learning to question, analyse, or think through problems independently. They’re getting the dopamine hit of being “right” without developing the intellectual muscles to actually be right.
Critical thinking used to be developed through debate, through having ideas challenged by teachers and peers, through the friction of having to defend your reasoning. If AI removes that friction entirely, what happens to a generation that never learns to be skeptical of their own ideas?
"Yes."
When students treat AI like a digital authority rather than a reasoning assistant, they become vulnerable to:
Misinformation
Computational biases
Reinforcement of assumptions (“echo chamber” feedback)
This becomes less of an argument about work productivity when long-term intellectual development is at stake. We risk creating a generation that mistakes AI validation for actual understanding.
The Solution Is Two-Pronged
The good news: this isn’t an insurmountable problem, nor is it a new one - it’s just been modernised. You can fix most sycophantic responses with better prompting, combined with maintaining your own critical thinking.
Better Prompting Techniques
Instead of asking “Is this good?” give AI something meaningful to evaluate against:
Chain-of-thought with built-in skepticism: “Let’s work through this step by step. First, analyse this idea against our specific goals. Then, before concluding, ask yourself: What assumptions am I making? What would a skeptic challenge about this analysis?”
Multi-perspective evaluation: “First, review this proposal as someone focused on ROI and risk assessment. Then, evaluate it from an operations perspective concerned with implementation. Finally, synthesise both viewpoints.”
Confidence ratings: “Rate your confidence in this recommendation from 1-10 and explain what additional information would increase that confidence.”
The key is always context. Don’t just ask for opinions - give AI the criteria, constraints, and success metrics it needs to actually evaluate your ideas.
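If you’re building this into a workflow rather than retyping it every time, here’s a minimal sketch of what that might look like in Python using the OpenAI chat API. The system prompt wording, the model name, and the critical_review helper are all my own illustrative choices, not an official recipe - adapt them to your own criteria and whichever assistant you actually use.

```python
# A minimal sketch of an "anti-sycophancy" wrapper around a chat model.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set;
# the prompt wording and model name are illustrative, not a standard.

from openai import OpenAI

client = OpenAI()

SKEPTICAL_REVIEWER = """You are a critical reviewer, not a cheerleader.
Work through the idea step by step against the stated goals and constraints.
Before concluding, list the assumptions being made and what a skeptic would
challenge. Finish with a confidence rating from 1-10 and say what extra
information would raise it."""

def critical_review(idea: str, goals: str, constraints: str) -> str:
    """Ask for an evaluation against explicit criteria instead of 'Is this good?'."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whichever model you normally do
        messages=[
            {"role": "system", "content": SKEPTICAL_REVIEWER},
            {
                "role": "user",
                "content": (
                    f"Idea: {idea}\n"
                    f"Goals: {goals}\n"
                    f"Constraints: {constraints}\n"
                    "Evaluate this idea against the goals and constraints above."
                ),
            },
        ],
    )
    return response.choices[0].message.content

# Example: evaluate a campaign idea against concrete success criteria.
print(critical_review(
    idea="Launch the new product with a TikTok-only campaign",
    goals="Reach decision-makers aged 45-60 in B2B manufacturing",
    constraints="£5k budget, six-week timeline",
))
```

The point isn’t the code itself - it’s that the criteria, constraints, and the request for skepticism travel with every prompt, so you’re never back to asking “Is this good?”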
Keep Your Critical Thinking Sharp
But here’s the bigger point: Advanced prompting techniques are helpful, but they’re not magic bullets. You still need to maintain your own judgment. That means:
Asking real people for their opinions on important decisions
Getting a second pair of eyes on AI outputs before acting on them
Saying things out loud to test whether they actually make sense
Actually understanding what you’re agreeing with instead of just accepting it
Think of it as “touching grass” for your ideas. Don’t work in a vacuum with an AI that’s designed to agree with you.
Trust, but verify.
As a parent, I’m especially concerned about helping my kids develop these same skills. While they’re still young, they’re entering a world with instant access to tools that can enthusiastically validate whatever they think. “Trust, but verify” is an oft-used phrase around our dinner table, usually when we’re talking through a playground discussion from earlier in the day.
The Bottom Line
AI sycophancy is real, but it’s not insurmountable. Better prompting gets you better feedback. But never outsource your critical thinking entirely to systems that are fundamentally designed to keep you happy.
Microsoft's Clippy: Perhaps overly helpful, but never sycophantic. Can you imagine?!
Right now, AI sycophancy is like the TikTok algorithm in dialogue form - it gives you more of what you already think, dressed up as helpful advice.
The goal isn’t to make AI more adversarial - it’s to make yourself more discerning about when you’re getting real insight versus digital flattery.
Until next time,
Jim
Signal Over Noise is written by Jim Christian and you're such an awesome person for reading it. Subscribe at newsletter.jimchristian.net.