February 18, 2026
The discomfort isn't resistance. It's a rational response to a tool that works differently from anything that came before.
You've rolled out CRMs, migrated to the cloud, adopted collaboration platforms, and survived at least one ERP implementation. None of it was easy, but none of it was mysterious either. You learned the system, it behaved consistently, and eventually it became part of how your team worked.
AI doesn't follow that pattern. And the discomfort people feel around it isn't a lack of effort or interest — it's a rational response to a tool that works fundamentally differently from anything they've used before.
Three things are going on.
Every enterprise tool you've ever adopted is deterministic. Click the button, get the result. Run the report, see the numbers. The system does what it does, and once you understand it, you can trust it.
AI is different. You ask a question and get an answer that depends on how you asked, what context you provided, even what the system infers about your intent. Ask the same question a slightly different way and you might get a meaningfully different response.
That's not a bug — it's how the technology works. But for professionals who've spent their careers mastering tools that reward precision and consistency, it feels unreliable. And the natural response to unreliable tools is to use them less, not more.
Most software is designed around execution. Enter the data. Select from the menu. Follow the workflow. The system tells you what it needs, and your job is to provide it.
AI reverses that relationship. It doesn't tell you what to do — it waits for you to decide what to ask. You have to frame the question, evaluate what comes back, decide whether it's good enough or whether to push further. That's not clicking through a process. That's judgment.
Most people haven't been asked to use judgment as a tool skill before. They've been asked to follow processes. The shift from execution to evaluation is genuinely new.
This is the one that really slows people down.
With a spreadsheet, you know if the formula worked. With a CRM, you can see whether the record saved. With a presentation tool, the slide either looks right or it doesn't. Success is visible.
With AI, the output can look polished, well-structured, even impressive — and still be wrong. It can sound authoritative while being completely fabricated. There's no error message. No red underline. No obvious signal that you should question what you're reading.
For senior professionals who've built their careers on accuracy and accountability, that's not a small thing. They're being asked to rely on a tool where the failure mode is invisible. Of course they hesitate.
The standard organizational response to all three of these issues is the same: more training. Teach people how AI works. Show them prompting techniques. Run workshops.
Training helps with the mechanics. But it doesn't solve the deeper issue, which is that AI requires people to develop a new kind of professional skill — the ability to direct, evaluate and iterate with a system that doesn't give clear feedback.
That's not something you learn in a two-hour session. It develops through practice, through seeing real examples tied to real work, and through having the room to get it wrong without consequence.
The organizations getting this right aren't just training people on AI. They're redesigning how people encounter it.
The tool isn't the hard part. The relationship between the person and the tool is. Start there.