AI Adoption

What your AI dashboard isn't telling you

Most organizations track AI adoption by how many people use the tools. That tells you almost nothing about whether the tools are working.

We've written before about why AI feels harder than other enterprise tools — the inconsistency, the judgment it demands, the invisible failure modes. Those are real barriers, and they explain a lot about why people hesitate. But there's a separate problem playing out at the organizational level, and it has less to do with how people feel about AI than with how leadership is measuring progress.

The dashboard problem

When an organization rolls out an AI tool, someone inevitably builds a dashboard. Logins. Prompts generated. Sessions per week. Time spent. The numbers go up and leadership feels reassured. The numbers plateau and leadership gets nervous.
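
Concretely, that dashboard is usually just a few aggregations over an event log. Here is a minimal sketch in Python; the log format, field names and users are all invented for illustration:

    # Toy version of the usual activity dashboard: count logins and prompts
    # per user from a hypothetical event log (all data invented).
    from collections import Counter

    events = [
        {"user": "ana", "type": "login"},
        {"user": "ana", "type": "prompt"},
        {"user": "ben", "type": "login"},
        {"user": "ben", "type": "prompt"},
        {"user": "ben", "type": "prompt"},
    ]

    logins = Counter(e["user"] for e in events if e["type"] == "login")
    prompts = Counter(e["user"] for e in events if e["type"] == "prompt")

    print(dict(logins))   # {'ana': 1, 'ben': 1} -- who showed up
    print(dict(prompts))  # {'ana': 1, 'ben': 2} -- who generated material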

This is the wrong thing to watch.

AI can produce a lot of material fast. Someone running dozens of prompts a day might be generating mostly noise. Meanwhile, someone who uses the system for ten minutes and asks one sharp question might produce the most valuable output of the week. A usage dashboard can't tell the two apart.
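
To make that concrete, here is a toy comparison with invented numbers. Rank the same two people by prompt count and by an outcome score (a stand-in for whatever your team counts as a good result), and the orderings invert:

    # Invented numbers: activity and value rank the same people in
    # opposite order.
    usage = {
        "ana": {"prompts": 4,  "minutes": 10,  "outcome_score": 9},
        "ben": {"prompts": 60, "minutes": 240, "outcome_score": 2},
    }

    by_activity = sorted(usage, key=lambda u: usage[u]["prompts"], reverse=True)
    by_outcome = sorted(usage, key=lambda u: usage[u]["outcome_score"], reverse=True)

    print(by_activity)  # ['ben', 'ana'] -- the dashboard's hero
    print(by_outcome)   # ['ana', 'ben'] -- who actually moved the work forward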

McKinsey's State of AI survey, published near the end of last year, found that 88% of organizations now use AI in at least one business function, but only 39% report any enterprise-level earnings impact. That gap — between widespread usage and actual results — is the macro version of the same problem happening inside individual teams every day. People are logging in. The tools are being touched. But nobody is asking whether the work is getting better.

The deeper issue is that activity metrics create the wrong incentives. When people know usage is being tracked, they start using the tool to show usage rather than to do better work. That's performance masquerading as adoption, and it teaches exactly the wrong lesson about what AI is for.

Ask a better question

What matters is whether people are making better decisions, producing higher-quality work or reclaiming meaningful time. Those outcomes are harder to track than login counts, but they're the only ones that tell you whether AI is earning its place in a workflow.

A useful starting point is to ask teams one simple question: has this tool changed how you do a specific piece of work? Not "are you using it" but "is it making a difference." The answers will be uneven and sometimes surprising, but they'll give you a far more accurate picture than any dashboard.

The organizations that eventually get this right tend to measure impact at the workflow level — time saved on a recurring process, quality improvement on a specific deliverable, decisions made with better information. These are smaller, more specific metrics, and they require talking to people rather than pulling reports. That's precisely why they work.
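
As a sketch of what measuring at the workflow level can look like, here is a before-and-after comparison on one recurring process. The durations are invented; the point is the shape of the measurement, not the numbers:

    # Workflow-level impact check with hypothetical durations (in minutes).
    # Compare the same recurring process before and after the tool entered it.
    from statistics import median

    minutes_before = [95, 110, 102, 88, 120]  # five runs of the old process
    minutes_after = [60, 75, 58, 70, 64]      # five runs with the tool in the loop

    saved = median(minutes_before) - median(minutes_after)
    print(f"median time saved per run: {saved} minutes")
    # Pair this with a quality check on the deliverable itself;
    # time saved on worse output is not a win.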

The bottom line

The organizations that get the most from AI won't be the ones with the highest usage numbers. They'll be the ones where people actually trust the tools — because they were given the time, the context and the room to develop that trust through real work. If your adoption dashboard is green but your teams can't point to a single workflow that's meaningfully improved, the data may be misleading you.