Signal Over Noise cuts through AI hype with weekly reality checks on what actually works. Written by a digital strategy consultant who tests every tool before recommending it, each Friday edition delivers honest reviews, practical frameworks, and real-world insights for professionals who need AI to work reliably.
The 95% AI Failure Rate Nobody's Talking About (And What to Do About It)
Signal Over Noise #18
September 3rd, 2025
Dear Reader,
Air Canada recently learned an expensive lesson about AI implementation. Their customer service chatbot provided incorrect information about bereavement fares, and when challenged, the company argued they weren't liable for their AI's mistakes.
"My AI told you that." isn't much of a legal defense.
The court disagreed. The ruling was clear: organisations remain accountable for AI-generated decisions.
This case reflects a broader pattern across industries. While 71% of organisations now use generative AI regularly, recent MIT research reveals that 95% of generative AI pilots fail to deliver expected value. Yet most leaders still approach AI adoption without a systematic framework for success.
The Implementation Gap
The disconnect between AI enthusiasm and results is widening. McKinsey data shows AI usage nearly doubled over 10 months, but user satisfaction declined from 77% to 72% in the same period. Stack Overflow’s developer survey shows an even steeper decline - from 72% to 60% favourable views over two years.
The pattern is consistent: rapid adoption, declining satisfaction, and expensive course corrections. Consider these recent examples:
Sports Illustrated’s use of AI-generated author profiles created credibility issues
New York City’s official business chatbot provided legally problematic advice about employment law
Multiple organisations have had to implement “AI incident response teams” after customer-facing failures
The underlying issue isn’t AI capability - it’s implementation methodology.
A Verification Framework for Success
Analysis of implementation patterns across dozens of organisations points to three critical factors for AI success:
1. Verification Protocols
Organisations succeeding with AI have built systematic verification processes. Lumen Technologies reduced sales preparation time from 4 hours to 15 minutes using Microsoft Copilot, but maintained human review of all customer-facing outputs. This verification step is what separated successful efficiency gains from costly mistakes.
The principle: verification effort should scale with the stakes. High-stakes decisions require human oversight; low-stakes tasks can run with minimal supervision.
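To make that concrete, here's a minimal sketch of a risk-tiered verification gate in Python. Everything in it (the Risk tiers, the AIOutput shape, the publish stub) is illustrative rather than any vendor's API; the point is that explicit routing logic, not individual judgment in the moment, decides what waits for human review.

```python
from dataclasses import dataclass
from enum import Enum


class Risk(Enum):
    LOW = "low"    # internal drafts, brainstorming
    HIGH = "high"  # customer-facing or legally binding output


@dataclass
class AIOutput:
    text: str
    risk: Risk


def publish(text: str) -> None:
    # Stand-in for whatever actually ships the content.
    print(f"published: {text[:60]}")


def handle(output: AIOutput, review_queue: list) -> None:
    # High-stakes output waits for human sign-off; low-stakes flows through.
    if output.risk is Risk.HIGH:
        review_queue.append(output)
    else:
        publish(output.text)


queue: list[AIOutput] = []
handle(AIOutput("draft social post", Risk.LOW), queue)                # published
handle(AIOutput("bereavement fare policy answer", Risk.HIGH), queue)  # queued
```

The design choice worth copying isn't the code itself; it's that the default for anything customer-facing is the review queue, so skipping verification requires a deliberate decision rather than an oversight.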
2. Failure Recovery Systems
44% of organisations have experienced negative consequences from AI inaccuracy. The differentiator isn't avoiding failures; it's having systematic responses when they occur.
Successful implementations include:
Clear escalation procedures for AI errors
Rollback capabilities for automated decisions
Communication protocols for customer-facing mistakes
Regular accuracy auditing and model retraining schedules
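What might a rollback capability look like in practice? One low-tech option is to capture a compensating action at the moment each automated decision is made, so there's always a documented way to undo it. This is a hypothetical sketch, not a standard library or framework; names like DecisionLog are invented for illustration.

```python
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class Decision:
    description: str
    undo: Callable[[], None]  # compensating action, captured up front


@dataclass
class DecisionLog:
    history: list[Decision] = field(default_factory=list)

    def record(self, description: str, undo: Callable[[], None]) -> None:
        self.history.append(Decision(description, undo))

    def rollback_last(self) -> None:
        # Escalation path: undo the most recent automated decision.
        if self.history:
            decision = self.history.pop()
            decision.undo()
            print(f"rolled back: {decision.description}")


# Usage: record a hypothetical auto-approval together with its inverse action.
log = DecisionLog()
log.record("auto-approved refund (example)",
           undo=lambda: print("refund reversed"))
log.rollback_last()
```

The discipline matters more than the implementation: if nobody can state the compensating action at decision time, that's a signal the decision shouldn't be automated yet.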
3. Build vs. Buy Decision Framework
McKinsey identified three AI implementation approaches: Takers (off-the-shelf solutions), Shapers (customised implementations), and Makers (built from scratch). Success rates vary dramatically:
Takers: 67% success rate
Shapers: 45% success rate
Makers: 33% success rate
Organisations building custom AI solutions face 1.5x longer implementation timelines and significantly higher failure rates. The most successful approach is often the least exciting: proven vendor solutions with established track records.
The Regulatory Clock
Compliance deadlines under the EU AI Act add urgency:
August 2025: General-purpose AI model requirements for EU markets
August 2026: High-risk AI system regulations take full effect
Currently, only 18% of organisations have enterprise-wide AI governance councils. The remaining 82% are operating without systematic oversight frameworks—a significant compliance and operational risk.
A Practical Implementation Approach
Based on analysis of successful AI deployments, here’s a systematic approach to AI implementation:
Phase 1: Internal Process Optimisation
Start with low-risk, high-value internal applications. Brisbane Catholic Education achieved 9.3 hours per week in teacher time savings through AI-assisted lesson planning and administrative tasks. These applications build organisational confidence while delivering measurable value.
Phase 2: Vendor Solution Evaluation
Prioritise proven vendor solutions over custom development. The 67% success rate for off-the-shelf solutions reflects established implementation methodologies and ongoing support structures.
Phase 3: Systematic Scaling
Organisations attributing 10%+ of EBIT to AI focus on business outcome measurement rather than technical benchmarks. Success metrics should directly correlate with revenue impact, efficiency gains, or cost reduction.
Implementation Checkpoints
Before any AI deployment, establish clear answers to these verification questions:
Accuracy Validation: How will you verify AI output accuracy in your specific use case?
Error Response: What’s your systematic response when AI makes mistakes?
Compliance Framework: How does this implementation align with current and pending regulatory requirements?
Success Metrics: What specific business outcomes will determine ROI?
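One way to enforce these checkpoints is to encode them as a pre-deployment gate that refuses to proceed while any answer is blank. The checklist below is a hypothetical Python sketch; the field names simply mirror the four questions above, and the sample answers are invented for illustration.

```python
from dataclasses import dataclass


@dataclass
class DeploymentChecklist:
    accuracy_validation: str = ""   # how output accuracy will be verified
    error_response: str = ""        # systematic response when AI errs
    compliance_framework: str = ""  # fit with current and pending regulation
    success_metrics: str = ""       # business outcomes that define ROI

    def ready(self) -> bool:
        # Every checkpoint needs a non-empty, documented answer.
        return all(value.strip() for value in vars(self).values())


checklist = DeploymentChecklist(
    accuracy_validation="weekly human review of a 5% output sample",
    error_response="escalate to support lead; rollback within one hour",
    compliance_framework="mapped against EU AI Act high-risk criteria",
    success_metrics="hours saved per team per week",
)
assert checklist.ready(), "do not deploy until every checkpoint is answered"
```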
Moving Forward Systematically
The current AI adoption phase requires disciplined implementation approaches over enthusiastic experimentation. Organisations achieving sustained value from AI treat it as a systematic business capability, not a technological experiment.
The winners in AI adoption won’t be the earliest adopters or those using the most advanced models. They’ll be organisations that built proper verification frameworks, maintained realistic expectations, and focused on measurable business outcomes.
What systematic approaches are you taking to AI implementation in your organisation?
I’m particularly interested in verification frameworks that are producing measurable results.
Until next time,
Jim
PS: Know someone else who's tired of AI hype? They might appreciate joining the newsletter themselves - please forward it on!
Signal Over Noise is a weekly, reader-first publication on AI "without the hype", published by Jim Christian. Subscribe for free: signalovernoise.at