The start of this post may sound like AI alarmism, but bear with me. My goal is to underline why the right strategy is the key to a positive ROI instead of a reputation that’s KIA.
Before Y2K, Running with Scissors went viral with Postal. It was a bold move that garnered lots of headlines and changed an industry. The name Running with Scissors comes from the admonition that always begins with “Don’t” because the act is dangerous and can have major negative consequences. Naming their debut product Postal, a reference to “going postal,” was definitely running with scissors, given that it was released when that was something to really be afraid of, with headlines reminding us every few months. Sometimes pulling a bold, dangerous move like that puts you ahead of the pack. And the same people who would say “Don’t” might rightly say “you were just lucky this time.”
Sadly, as with many of my long-winded metaphors, this analogy falls apart quickly when I get to the point I am meandering up to: AI automation.
The Hard Lessons of High Hopes Held for Hype

While Running with Scissors pulled off their risky play and at worst stood to learn from it, in the world of AI, when you jump in too fast, the cost can be far higher. It’s not just about learning. It’s about real, public, expensive failure.
My favorite printed coffee mug in the ‘90s said “To err is human, but to really screw up, you need a computer.” Now I need a growler tagged with “Computer programs have glitches, AI gives you stitches.” Or, as some reporters and pundits have put it:
Air Canada’s chatbot gave a customer incorrect policy advice, leading to a tribunal loss and damaged trust.
A coding service wiped out a user’s production database—gone in an automated blink.
McDonald’s AI chatbot exposed the data of 64 million job applicants.
A summer reading list, courtesy of AI, duped major newspapers with fakes.
If you think those are of the “person bites dog” variety, take a gander at AI Failures, Mistakes, and Errors, which brings a whole new meaning to the term “doom scrolling” for those who have only dabbled in the arts of the black box algorithms.
The Hype is Real, the Hyperbole is Not
Generative AI feels like science fiction compared to what we could muster half a decade ago. But if your plan is to fire your interns and forgo fresh recruits because you expect AI to pick up the Slack, you may soon have nothing left but cold coffee, hot-tempered customers, and evaporating bonuses.
[Ego Disclaimer #1: I really liked this section but thought it was too short, so I had Perplexity stretch it a bit with the content below…and I don’t think it did too bad a job, but please comment on this post and tell me what you think.]
It’s tempting, of course. There’s a parade of enthusiastic press releases and budget-slashing slideshows from folks who are convinced that with just the right AI prompt, entire departments can be blissfully replaced. The reality? Not so much. As thrilling as it sounds, swapping out eager humans for untested bots leaves you with a lot of gaps—the kind interns and new hires usually fill by catching the weird edge cases, asking the questions you forgot were important, and, occasionally, refilling the printer paper before that big client call. Turns out, there’s no neural network yet that will run down the hall with a sticky note or spot the project that’s quietly rolling off the rails.
You also lose your organization’s early warning system. Interns and rookies see with fresh eyes; they’ll squint at your wobbly workflows and say, “Wait, why do we do it this way?” That’s not inefficiency, that’s built-in feedback. When you replace every junior with an “intelligent” auto-responder, you’re left with no canaries in the coal mine, just a black box churning out confident guesses. And as the headlines keep reminding us, when you let those black boxes loose without human context or oversight, suddenly it’s not just your coffee getting cold—it’s your reputation going up in smoke.
AI Today Feels a Lot Like IT Last Century
“Computer error” is a term that persisted for decades as a reason why it was dangerous to leave things up to computers. The truth was, it was always human error; the only question was where in the chain it occurred, from the decision to “computerize” down to the end users who did not RTFM (or the disclaimer).
Adoption was a lot slower last century, as was communication, so many businesses that were early adopters of computers as business tools repeated the same mistakes others had already made. Step up to this century, and the really smart people are doing things iteratively.
Other people see what these iterators are accomplishing and decide they want that, too. So they rename their current processes to sound like what the iterative people are doing. Some iterators “move fast and break things”, and then fix them. The semi-iterative do the first half, and then blame the “new” process.
Slow is Smooth, Smooth is Fast
It’s not a new saying, but it’s more relevant than ever: “Slow is smooth, smooth is fast.”
Moving fast starts with moving slow, which means building a foundation that can be controlled, and by controlled I mean rebuilt with a single command. Then you can quickly add something on top of that foundation, and if it breaks, you can start over with no loss. When it succeeds, you repeat that success and add it to your foundation.
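That “rebuilt with a single command” idea can be sketched in a few lines. Everything here is a hypothetical illustration (the step names and `rebuild` function are stand-ins, not a real tool); the point is that every layer is idempotent and cheap to recreate, so a failed experiment costs nothing.

```python
def rebuild(steps):
    """Run each idempotent foundation step in order; stop at the first failure."""
    for name, step in steps:
        try:
            step()
            print(f"ok: {name}")
        except Exception as exc:
            print(f"failed at {name}: {exc} -- tear down and start over, nothing lost")
            return False
    return True

# Hypothetical foundation layers, each safe to re-run from scratch.
foundation = [
    ("provision", lambda: None),   # e.g. create infrastructure from declared config
    ("load data", lambda: None),   # e.g. reload from the source of record
    ("smoke test", lambda: None),  # e.g. verify the base before adding anything new
]

rebuild(foundation)
```

The design choice worth copying is not the Python, it’s the shape: one entry point, ordered steps, and no step that can’t be re-run.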
Apply this to an AI adoption strategy. It’s been said that there is no need to do a Proof of Concept for AI because the concept has been proven, and this is true. Your ability to apply the concept has not. Or, perhaps, you have disproven it in your organization, and now some people think it was the concept that failed rather than the implementation. To prove you can implement, start with a prototype.
A prototype should be something that is simple, valuable, and measurable. Simple because building confidence is one of the results the prototype should yield. Valuable, because people tend to do a sloppy job if there isn’t much value in what they are doing, and there are more than enough bars being fooed in every organization. And measurable, because you need to show some kind of ROI if you are ever going to get the chance to show real ROI.
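“Measurable” can be as simple as counting what the prototype handled and estimating the time it gave back. The sketch below assumes a hypothetical `ai_draft()` step and a made-up 300-second manual baseline; both are illustrations, not real numbers or a real API.

```python
import time

metrics = {"handled": 0, "escalated": 0, "seconds_saved": 0.0}

def ai_draft(ticket):
    # Placeholder for the single automated step being prototyped.
    return f"draft reply for {ticket}"

def handle(ticket, baseline_seconds=300.0):
    """Run the one automated step and record something you can report."""
    start = time.perf_counter()
    draft = ai_draft(ticket)
    elapsed = time.perf_counter() - start
    if draft is None:                  # model declined or failed: a human takes it
        metrics["escalated"] += 1
    else:
        metrics["handled"] += 1
        metrics["seconds_saved"] += baseline_seconds - elapsed

for ticket in ("T-1", "T-2", "T-3"):
    handle(ticket)

print(metrics)
```

Crude, yes, but a crude number you can defend is exactly the ROI signal that earns the chance to show real ROI later.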
Once that first prototype has proven your ability to implement AI in a safe and useful manner, you’re ready for production…right? Right?
Governance and the Human-in-the-Loop
Nope. We skipped a step, which is to establish some governance. Truth be told, in some organizations you won’t be able to get buy-in for governance. Or you’ll get another recurring meeting on too many calendars with the subject “Governance” that fails to complete the same agenda each time (Item 1: What is Governance? Item 2: Who owns it?).
In many orgs you first have to win a champion or get enough people excited with a viable prototype. In either case, make sure governance is in place before going to production, and don’t play Evel Knievel getting into production. Which is to say, don’t jump Snake River when there is quite enough danger on the regular trail of iteration.
One Thing at a Time: The Power of Measured Progress
That first successful prototype should do one thing, and do it well. If it’s just a single step in a bigger process, perfect. Now do another step—just one. Pick something valuable and measurable, but also something people secretly or not-so-secretly dislike doing. Before you know it, everyone wants what you just gave that team.
“I do one thing at a time, I do it well, and then I move on” –Charles Emerson Winchester III
Automating one step is augmentation. There’s still a human in the loop, even if one part’s now on autopilot. When that step works, take another. Then another.
Each time, you push humans higher up the value chain and commoditize AI as a proven automation solution.
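The augmentation pattern above fits in a dozen lines. This is a minimal sketch, assuming a hypothetical `draft()` function and reviewer callback; in a real system the review would be a person in a queue, not a lambda.

```python
def draft(task):
    # Stand-in for the AI-automated step: it proposes, it never disposes.
    return f"proposed action for: {task}"

def augmented_step(task, review):
    """Automate one step while keeping a human decision in the loop."""
    proposal = draft(task)
    if review(proposal):               # the human approves
        return ("executed", proposal)
    return ("escalated", proposal)     # the human declines: route to a person

# Simulated reviewer: approves refund-related proposals, rejects the rest.
result, proposal = augmented_step("refund order 123", lambda p: "refund" in p)
print(result)  # "executed"
```

When that step has earned trust, the next step gets the same treatment: draft, review, execute, measure, repeat.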
If you hit a limit, congratulations! You broke something and learned from it. That is how you find limits: by exceeding them. If you test your limits one step at a time, when you exceed them you can take a step back and still be further along than when you started. If you try to skip steps, there is a place next to Evel Knievel that you might not wind up in the first time, but eventually it will hurt. And it might be a headline in the next version of this post.
Start Small, Stay Smart, Iterate Relentlessly
The highest ROI from AI comes not from boldly going where no automation has gone before but from incremental, tested, and measured iterations from augmentation to the most practical level of automation.
And if you break something along the way, remember: you’re already further ahead than if you’d never started.
[Ego Disclaimer #2: I created an outline for this blog post using Perplexity, then threw away most of it and did my usual off-the-cuff rant. Then I had Perplexity edit the draft and rolled back most of the edits]
©2025 Scott S. Nelson
Originally published at https://theitsolutionist.com/2025/07/29/the-highest-roi-from-ai-automation-starts-with-augmentation/