Perspectives

The AI competence gap

Everyone's using AI. Not everyone's using it well. And no one quite knows how to talk about it.

AI can produce a 30-page document explaining how to execute a quad axel like Ilia Malinin. Every detail will be there — the entry edge, the rotation speed, the landing mechanics. It will read like it was written by someone who knows what they're talking about. But producing the document and landing the jump are obviously different things. Nobody confuses the two in figure skating. In professional work, people confuse them all the time.

The volume trap

Tools like ChatGPT and Claude now have hundreds of millions of weekly users. The access problem is solved. But access and competence are different things, and the gap between them is growing fast.

The core issue is deceptively simple: AI produces polished output regardless of whether the person using it brought any real thinking to the conversation. A vague prompt gets a confident-sounding response. A lazy question gets a thorough-looking answer. The tool doesn't distinguish between someone who knows what they're asking for and someone who doesn't — but the quality of what comes out is wildly different.

That's where domain expertise becomes the differentiator. An experienced professional using AI to pressure-test a recommendation, stress-test a financial model or challenge a draft strategy will get fundamentally better results than someone using the same tool to generate those things from scratch. The tool amplifies whatever the person brings to it. Bring judgment, and you get leverage. Bring nothing, and you get empty volume.

The sycophancy blind spot

There's a subtler problem, and it's harder to spot. AI tools are, by design, agreeable. Research published last year in Nature found that AI models are roughly 50% more agreeable than humans in conversation — and that people rated the flattering responses as higher quality and wanted more of them.

In practice, if you ask an AI tool to review your strategy, your writing or your analysis, the default response will be supportive. It will find things to praise. It will suggest minor refinements rather than surface fundamental problems. And if you've configured the tool to match your preferences and communication style — which many platforms now encourage — you may have inadvertently built yourself a mirror that tells you everything looks great.

The person who treats AI as an assistant that should challenge their thinking will consistently produce better work.

People who are genuinely skilled with AI push back on it. They ask follow-up questions. They say "what am I missing" and "show me the source" and "argue the other side." They treat the tool's agreeableness as a known limitation to work around, not a feature to enjoy. Researchers at Northeastern University found that when people use AI in an advisory role rather than as a peer or friend, the tool actually retains more independence in its responses. The framing matters — and skilled users seem to understand this intuitively.

The patterns worth noticing

I don't think we have a clean framework for AI competence yet, but there are patterns you can observe. Skilled users iterate — they don't accept the first response. They refine, redirect and push the conversation toward something more specific and more useful. The interaction looks more like a working session than a vending machine transaction.

They also know when to stop. Not every task benefits from AI, and skilled users seem to have a sense for when the tool is adding value and when it's adding noise. They close the window and do the thinking themselves when that's what the work actually requires.

And they can explain why they trust a given output. They've evaluated it against their own knowledge, checked the reasoning, and made a judgment call. The output isn't the work — the judgment is.

The bottom line

It's worth noting that this entire question may have a limited shelf life. As AI moves toward more agentic workflows — systems that can plan, execute and iterate with less human direction — the nature of what it means to "use AI well" will shift again. The skill set we're trying to name right now may look different sooner than we expect.

But for now, most professionals are working with conversational AI tools that respond to what you give them, and the quality of that exchange depends almost entirely on the person at the keyboard. The case for experienced professionals to engage seriously with these tools has never been stronger — precisely because experience is what makes the difference between leverage and noise.

Skate on (and be careful).