Sloppy Joe and the AI That Said So
A cautionary tale about speed, garbage, and the bill that comes due
Meet Joe. Not just any Joe—Sloppy Joe, senior consultant at Bluster & Associates, one of those firms with a mahogany-paneled lobby and a coffee machine that costs more than your first car.
Joe discovered AI tools around eighteen months ago, and they changed his life. Or so he thought.
Almost overnight, Joe became the fastest gun in the firm. Market analyses that used to take three days? Done by lunch. Competitive landscape reports? Before the second cup of coffee. Industry benchmarking studies? Joe was pumping them out like a soft-serve ice cream machine at a county fair: fast, smooth, and vaguely the same flavor every time.
His managing partner was impressed. Joe secured the corner office (metaphorically; it was actually a nicer cubicle, but still). He was mentioned twice in the firm’s internal newsletter. The headline: “The Future of Consulting Is Here.” Partners whispered that he might make partner by thirty-two.
What no one knew, and what Joe himself barely understood, was that his process amounted to this: find some data, feed it to AI, copy the output, add a firm logo, and send. Rinse. Repeat. No verification. No critical review. No sniff test. Just speed and swagger.
Joe was confusing fast with good.
He wasn’t alone in his enthusiasm. Across the industry, the AI gold rush was real. Earlier this year, OpenAI announced “Frontier Alliances,” a multi-year partnership with Boston Consulting Group, McKinsey, Accenture, and Capgemini, to help enterprises deploy AI agents at scale. The implicit message was: AI is now integrated, and the consultants are the adults ensuring it behaves. As OpenAI stated, “the limiting factor for seeing value from AI in enterprises isn’t model intelligence, it’s how agents are built and run in their organizations.” The humans, in other words.
Joe had missed that memo.
He also apparently overlooked the story about Deloitte being asked to issue a partial refund for a government report that contained AI-generated hallucinations. (That’s a real story. Look it up.)
Joe’s moment of reckoning came on a Tuesday, and it came fast.
He had prepared a sweeping market entry recommendation for Gobsmacked Industries, a mid-sized manufacturer considering a major expansion. The deck was gorgeous. Sixty-two slides. Beautiful charts. The kind of document that makes a client feel their retainer was well spent just from the table of contents alone.
The client’s CFO, a no-nonsense woman named Linda who had been running financial models since before Joe was born, started flipping through it. She got to slide fourteen. She paused. She flipped back to slide eleven. Then twelve. Then fourteen again.
“Joe,” she said slowly, “where exactly did this market size figure come from?”
Joe contemplated his navel.
Linda spent the next twenty minutes dismantling the analysis. The market sizing was fabricated. Two of the cited competitors didn’t exist. One statistic had been presented as current but was actually from 2009.
This is exactly what Ken Griffin, the founder and CEO of Citadel, explained at the World Economic Forum in Davos. A colleague gave him an AI-generated report, and Griffin said the opening sentences looked truly impressive. But as he continued reading, the rest turned out to be, his exact words, “garbage.” Griffin has been clear that while large language models can increase productivity in some areas, in many white-collar jobs, the output merely appears polished. The substance, he warned, is often missing.
In Joe’s case, the substance wasn’t just missing. It was wrong.
Joe lost the client. Joe lost his partner track. Joe lost the cubicle upgrade. Joe is now, reportedly, “consulting on a freelance basis,” which is a polite way of saying he’s refreshing LinkedIn every half hour.
The lesson isn’t that AI is useless. It isn’t. The lesson is that AI is a tool, not a consultant. It doesn’t know what it doesn’t know. It can’t tell you when a number smells wrong. It doesn’t have twenty years of pattern recognition sitting behind its eyes while it reads a balance sheet. As the Wall Street Journal recently noted, AI turns out to need management consultants after all, not to replace judgment, but because judgment is exactly what AI cannot provide.
You do.
The speed AI gives you is real, but never outsource your judgment and critical thinking along with the grunt work.
Otherwise, you’re not doing faster work. You’re just producing garbage faster.
What can leaders do to foster the growth of judgment?
Begin with a shift in mindset from the top. Leaders should stop rewarding speed as a stand-in for quality. The goal should be both speed and rigor, not speed alone.
Make verification a non-negotiable requirement. Require young analysts to show their process and the sources behind every figure, not just the final result. A straightforward rule: every data point in a client-facing document must carry a primary source citation that a human has reviewed.
Teach the "sniff test" as a core skill. Many young analysts were trained in environments where quickly finding the right answer was the main goal. What they often haven't developed is the judgment to recognize when something sounds plausible but is wrong. Leaders should coach this actively, the way a good editor teaches a journalist to read their own copy skeptically.
Reframe the purpose of junior roles. The traditional value of being a young analyst was gaining judgment through repetition: working through data, making mistakes in low-stakes situations, and developing instincts. If AI now handles the grinding, leaders must intentionally create other opportunities for that judgment to develop. Structured debriefs, teaming exercises, and case workshops can all serve that purpose.
Use AI failures as learning moments, not career setbacks. When a young analyst submits something that doesn't pass the sniff test, the instinct might be to correct it and move on. The better approach is to sit with them and walk through exactly where the AI went wrong and why they didn't catch it.
Establish clear AI-use policies with enforcement and rationale. Vague guidance like "use AI responsibly" is ineffective. Leaders must be specific: define where AI is appropriate, where human judgment is essential, and the consequences when these boundaries are crossed. Providing the reasoning is as important as the rule; young talent needs to understand why, not just what.
Model it yourself. If senior leaders are also outsourcing their thinking to AI without acknowledging it, the message to junior staff is clear. Demonstrate the intellectual behavior you're trying to develop; don't just demand it.
The core issue is that AI easily blurs the line between output and thinking. A 40-slide deck does not prove analysis, and a confident paragraph does not demonstrate understanding. Leaders have always needed to cultivate judgment in their teams; AI simply accelerates and magnifies the consequences of neglecting that development. The standards for what qualifies as "done" must keep pace with our tools.
Angelo Santinelli is the founder of Entrepreneurial Edge Executive Coaching and Advising and a strategic advisor to PE-backed and founder-led companies. He works with CEOs and executive teams on strategic execution, leadership development, and organizational performance.