Competent is the New Mediocre
Why AI Rewards the Best, Replaces the Rest, and Forces Everyone Uphill
I often get asked the same question about AI: isn't it the great equalizer for competence? Everyone now has access to the same frontier models. A first-generation college student in rural India can use the same Claude or GPT that a McKinsey partner uses. The playing field, the argument goes, has been leveled. The argument sounds logical. But it is wrong. And believing it is one of the most dangerous mistakes a knowledge worker can make today.
Every wave of technology disruption generates the same leveling narrative. The internet was supposed to democratize commerce. Social media was supposed to democratize influence. MOOCs were supposed to democratize education. In every case, the tools became universal while the advantages did not. What happened instead was a pattern: the floor rose for everyone, the ceiling rose faster for the few, and the middle got compressed.
AI raises the floor of competence for everyone, the ceiling faster for the few, and hollows out the middle. Good enough is no longer good enough.
AI is Microsoft Word at civilizational scale.
When word processors arrived, everyone had the same tool. Having access to Word did not make anyone Shakespeare. The tool was universal. The talent was not. What happened was predictable: the minimum acceptable quality of a written document rose sharply, while the distance between competent and exceptional grew wider, not narrower. AI is doing the same thing, but across every domain of knowledge work, simultaneously, and at a pace that leaves no time for gradual adjustment.
The floor has been raised dramatically, suddenly, and for almost everyone. The ceiling has also risen, but only for those who already had the height to reach it. The distance between the floor and the ceiling has not shrunk. It has expanded.
The Hollowing of the Middle
For decades, organizations built vast ranks of credentialed middle-tier knowledge workers: analysts, associates, coordinators, junior managers, and specialists of every variety. These people turned senior judgment into structured output. They synthesized research, built models, prepared presentations, drafted communications, and managed the operational metabolism of large enterprises. They were the connective tissue of the knowledge economy.
AI now closes that gap between senior judgment and structured output aggressively. The tasks that defined middle-tier knowledge work (synthesis, summarization, structured analysis, first-draft production, research compilation, presentation building) are precisely the tasks that AI now performs at or above the level of a competent junior professional. Not in the future. Today.
I see this in my own classroom. Five years ago, the students who excelled at case analysis were the ones who could grind through data, build clean spreadsheets, and produce well-structured slide decks. Those students had an edge because execution was hard. Today, execution is table stakes. AI handles it. The students who stand out now are those who ask better questions, reframe problems in surprising ways, and exercise judgment that the model cannot. The bar for differentiation has migrated upward, and it did so in about eighteen months.
Consider the product manager. Five years ago, a solid PM could differentiate herself by writing crisp user requirements, synthesizing user research into clear insight summaries, building competitive landscapes, and structuring sprint backlogs with care. That was the job. Those were the skills that built a career. Today, any PM with a Claude subscription can produce a first draft of a requirements document in ten minutes that would have taken two days to write manually. Competitive landscapes, feature comparison matrices, user journey maps: AI generates all of these at a quality level that meets or exceeds what most mid-career PMs produce on their own.
So what separates the exceptional PM from the rest? Not the ability to produce artifacts. The ability to decide which product to build and why. The ability to see a market signal that the data does not yet confirm. The ability to say no to the feature that customers are asking for because you understand the job they are actually hiring the product to do, and that job points somewhere else entirely. The ability to hold a room of engineers and executives in a prioritization debate and make a call that balances technical debt, business model economics, competitive timing, and customer psychology, all at once, in real time, with incomplete information. AI cannot do that. AI will not do that for a very long time. But the PM who cannot do that either is now competing against a machine for the tasks she used to own.
The middle is caught in a compression. The floor has risen to meet them from below. The ceiling has pulled away from them above. The comfortable plateau of “competent and credentialed” is disappearing.
The Wedge That Must Not Close
To understand what separates those who will thrive from those who will merely survive, consider a simple mental model: the wedge.
Imagine two lines on a graph. One line represents the advancing capability of AI: its ability to produce high-quality knowledge work output. That line rises steeply and without foreseeable limit. The second line represents the unique capability of a given human professional: the judgment, taste, creativity, contextual awareness, and synthesis ability that the person brings above and beyond what AI can produce. The vertical distance between these two lines at any given moment is the wedge. It is the person’s margin of relevance.
The wedge between human capability and AI capability is the shrinking margin of human relevance.
For most people in most jobs, that wedge is narrowing. AI’s capability line is ascending faster than human capability lines. The wedge closes from above.
The only sustainable response is to move the second line upward faster than the first line rises. Not by doing the same things better, but by consistently relocating to the frontier: to the tasks, questions, and forms of judgment that AI cannot yet perform. This is not a one-time migration. It is a permanent posture. The frontier is not a destination. It is a direction.
What does the frontier look like? It is not a fixed set of tasks. It is a set of characteristics. Frontier work is ambiguous: the problem is not well-defined, and framing it correctly is itself the value. Frontier work is integrative: it requires combining insights across domains that AI treats separately. Frontier work is high-stakes: the consequences of getting it wrong are significant, and no one is willing to delegate the decision to a machine. Frontier work is relational: it depends on trust, persuasion, negotiation, and the kind of contextual reading that emerges only from human interaction.
The knowledge workers who thrive will be those who stay perpetually ahead of the closing wedge, who treat AI not as a tool to do their current job faster, but as a displacement force that continuously redefines what their job must become.
What Raising the Ceiling Actually Looks Like
Let me make this concrete with an example from my own work, because abstraction is the enemy of action here.
I write business case studies. I have been doing it for thirty-five years, and the methodology I have developed is specific, opinionated, and built on thousands of hours of classroom testing. Could I ask Claude to “write me a business case study”? Of course. And it would produce something that looks like a case study. It would have a protagonist, a company context, some decision points. It would be competent. It would also be mediocre: generic structure, predictable analysis, no pedagogical design, no narrative tension.
So instead of using AI as a replacement for my judgment, I did something different. I carefully encoded my entire case writing methodology into a structured set of instructions for Claude: what makes a good case protagonist, how to create decision tension, how to structure exhibits, how to calibrate complexity for different classroom contexts. I gave Claude examples of award-winning cases alongside the ones that did not sell as well. I taught it my style, my tone, my brand guidelines, and my voice. And I encoded all of this in a Claude Skill.
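For readers curious what this looks like in practice: a Claude Skill is, at its core, a folder containing a SKILL.md file, with brief YAML frontmatter that tells Claude when to invoke the skill, followed by the instructions themselves. The sketch below is a simplified, hypothetical illustration of the structure, not my actual methodology; the name, headings, and rules are invented for the example.

```markdown
---
name: case-study-writer
description: Draft business case studies following my case writing
  methodology. Use when writing, structuring, or revising a teaching case.
---

# Case Writing Methodology (abridged, illustrative)

## Protagonist
- Center the case on a named decision-maker facing a live choice,
  not a historical recap.

## Decision tension
- End the case before the decision is made; never reveal the outcome.
- Present at least two defensible options, each with real costs.

## Exhibits
- Every exhibit must be referenced in the narrative and required
  for the analysis; cut decorative data.

## Complexity calibration
- Undergraduate: one decision axis. MBA and executive: two or three
  interacting axes (e.g., timing vs. economics vs. competition).

## Voice
- Match the tone of the example cases provided; short sentences,
  concrete details, no editorializing.
```

The point is not the specific rules but the act of externalizing judgment: once the methodology is written down this explicitly, the AI executes it consistently, and the author's attention moves up to case selection and narrative design.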
With that in place, Claude does not produce generic case studies anymore. It produces case studies that embody my methodology. The floor for a first draft rose dramatically. But here is the critical point: the ceiling rose even more, because I now spend my time on the work that only I can do: selecting the right company, identifying the non-obvious strategic tension, shaping the narrative arc, pressure-testing the teaching plan. The AI handles the execution. I invest in the judgment. I can now produce a polished and insightful case study in two days. Down from six months.
A great chef follows consistent technique: proper mise en place, correct knife cuts, precise heat control. The structured method makes the dish consistent. But it also frees the chef to be creative. That is the paradox: structure enables creativity. It does not constrain it. The same is true for me as a "case chef." Before I built my system, I spent so much cognitive energy on execution that I had less bandwidth for originality. Now I have more.
The AI did not replace my expertise. It amplified it. And the amplification is proportional to the expertise I brought to the table in the first place.
Two Gaps, One Fate
The capacity to migrate upward is not evenly distributed, and this is the uncomfortable truth.
There are two distinct gaps operating simultaneously in the AI economy. The first is a capability gap: the difference in what people are able to do with AI based on what they already know. The second is a fluency gap: the difference in what people are able to do with AI based on how well they know how to work with it. These gaps are different in kind, different in consequence, and different in what it takes to close them.
The capability gap is perhaps the less discussed but more consequential of the two. AI is a multiplier. It amplifies whatever stock of judgment, taste, domain expertise, and synthesis ability a person already has. Give the same AI tool to a thirty-year veteran of product strategy and to a freshly minted MBA, and the outputs will differ enormously, not because of the tool, but because of what each person brings to the collaboration. The veteran knows which questions to ask, which outputs to reject, which subtle signals in the data matter and which are noise. The rookie MBA does not. Not yet. The AI amplifies the gap between them rather than closing it.
If I hand my case writing methodology to a student who has never written a case study, they will get a dramatically better first draft than they would have gotten without it. The floor rises. But the distance between their output and mine will increase, because I am working with the same amplifier on top of a much deeper base of knowledge and pattern recognition.
The fluency gap is different. It is not about depth of domain knowledge. It is about a cognitive posture shift that some people make and others do not. The knowledge workers who genuinely understand AI treat it as collaborative intelligence. They iterate with it. They challenge its output. They give it context, constraints, and examples. They think of prompting as a form of creative direction, not a form of search. They build on AI’s output rather than accepting or rejecting it wholesale.
The people who have not made this shift interact with AI the way they interact with Google: type a question, get an answer, done. They prompt AI like they would prompt a junior analyst: give it a task, receive a deliverable, move on. They do not iterate, refine, co-create, or push back. The result is that two people with the same domain expertise can get wildly different outputs from the same AI, simply because one has learned to collaborate with the machine and the other has not.
These are not technical skills. They are mindset shifts, and they are teachable. The people who have not made them are not less intelligent. They are often simply less willing to abandon a mode of working that served them well for decades. That reluctance is human. It is also increasingly expensive.
The Strategic Imperative
For the individual knowledge worker, the implications are clear but not comfortable.
Survival requires closing the fluency gap immediately and without equivocation. AI avoidance is no longer a viable professional strategy. Those who refuse to engage with AI are not making a principled stand. They are falling behind in a race they have not yet realized they are running.
Bridging the capability gap is more difficult, and it demands self-awareness. The relevant question is not “am I using AI?” but “what is the quality of judgment I am bringing above AI’s output?” and “is that judgment appreciating or depreciating in value as AI improves?” If your primary value-add is synthesis that AI can now do, you do not have a moat. You have a memory.
The specific discipline of thriving is frontier migration. It means deliberately and continuously moving toward ambiguous problems, novel synthesis, and high-stakes judgment. It means building attribution: a body of work, a methodology, a point of view, a network of trust that is identifiably yours and cannot be replicated by a model trained on the collective average.
For organizations, the imperative is to resist the obvious but limited play of deploying AI purely for cost reduction, in favor of the more difficult but durable play: using AI to shift the composition of work toward higher-value activities. The companies that use AI only to eliminate headcount will save money in the short term and lose capability in the long term. The companies that use AI to move their people uphill, to free human judgment for the work that actually creates differentiation, will build organizations that are genuinely difficult to compete with.
The Tide Is Rising: Move Uphill
The rise of AI capability is a tide. It is rising for everyone simultaneously, and it lifts certain boats spectacularly. But a rising tide also submerges everything that is not elevated enough to stay above the waterline. The tasks, roles, and competencies that sit just above the current water level will be underwater within months, not years.
Shakespeare did not become irrelevant when the printing press democratized access to text. He became more valuable, because the multiplication of words made the rarity of genuine literary judgment and creative brilliance more visible, not less. The printing press did not level the playing field between Shakespeare and the average pamphleteer. It widened the gap between them permanently.
AI will do the same to every domain of knowledge work. The question is not whether you have access to the tool. Everyone does. The question is what you bring to the tool that it cannot bring to itself.
The tide does not wait for you to learn to swim. It does not care about your credentials, your title, or your years of experience. It rises. The only question is whether you are moving uphill faster than the water.



