AI-Proofing your Future: How to Learn, What to Study, and Where the Jobs Will Be (Part 1)
AI Isn’t Taking Jobs. It’s Taking the Ability to Learn
This is the first in a three-part series on skills, jobs, and learning in the age of AI. Part 1 maps the terrain: what AI is doing to how we learn, work, and build expertise. Part 2, “Philosophy, Plumbing, and Where the Jobs Will Be,” offers concrete guidance on which skills and fields survive. Part 3, “Advice for Parents: Protect the Struggle,” addresses the hardest question of all: how to raise a capable human when the easy path is always available.
* * *
I am asked this question almost every week.
It comes from parents at dinner parties. From executives in my programs who are thinking about their children, not their companies. The question takes different forms, but the anxiety underneath is always the same: What should my child study? Where are the jobs? How can kids learn to learn when AI can give them instant answers?
I have been a professor of marketing and technology at Northwestern for almost 35 years. I have taught tens of thousands of students, and I have advised senior leaders at some of the world’s largest technology companies. I will be honest with you: these questions have never been harder to answer than they are right now. The speed at which AI is reshaping the landscape of work and learning has no precedent in my career, and I have lived through the internet, mobile, cloud, and social revolutions.
I will try to answer these questions. I will not speak with the hubris of a futurist. I will speak with the perspective I have gained over decades building pattern recognition skills across technology and business. And from the point of view of someone who has spent the last seven years going deeper into AI than anything else in my professional life. This series is my honest attempt to share what I see.
Let me begin with a story about a lever, one that will anchor all three parts of this series.
The Precondition Everyone Forgets
“Give me a place to stand and a lever long enough, and I will move the world.”
Archimedes said this over two thousand years ago. Everyone remembers the lever. Almost nobody remembers the precondition. He did not say “give me a lever.” He said “give me a place to stand and a lever.” The fulcrum comes first. Without solid ground beneath your feet, the lever is just a stick.
The analogy captures exactly where we are with artificial intelligence. We are in a moment of collective infatuation with the lever. Every conference, every headline, every investor pitch is about the power of the tools. Don’t get me wrong. The tools are extraordinary. I have experienced that power firsthand, and I am in awe. The lever is real. And it is getting more powerful by the day.
But few people talk about the fulcrum. The fulcrum is human judgment. It is pattern recognition. It is the ability to frame a problem before solving it, to ask the right question before generating an answer, to know when the model is brilliant and when the model is lying to you eloquently. This fulcrum can only be built one way: through years of struggle, practice, error, and correction. There are no shortcuts. There never have been.
What AI Actually Disrupts
To understand why this matters, you need to understand what AI actually disrupts. It is not just automating tasks. It is disrupting the learning process that produces expertise.
Consider what happens when a student uses ChatGPT to write an essay. The obvious concern is cheating. But the deeper damage is invisible. When you write an essay yourself, you are not just producing text. You are clarifying a vague intuition. You are discovering what you actually believe through the discipline of articulating it. You are confronting the weakness in your own argument when you try to put it on paper, and it falls apart. The essay is the artifact. The thinking is the real learning. Outsource the thinking, and you have atrophied the learning.
This example applies to every knowledge task. When you ask AI to summarize a 30-page report, you skip the cognitive work of reading carefully and distinguishing what matters from what does not. That discrimination skill is the foundation of expertise. When you ask AI to write your code, you skip the debugging that builds your understanding of how systems actually work. When you ask AI to generate your strategy, you skip the messy human process of weighing tradeoffs under uncertainty that builds real judgment.
I call this problem premature abstraction, borrowing a concept from computer science. In programming, you are warned never to abstract too early, before you understand the underlying patterns. The same principle applies to learning. AI allows students and professionals to jump to high-level outputs before they have done the low-level work that makes those outputs meaningful. The product looks the same. The person behind it is fundamentally different.
Here is the sentence I want you to remember: you cannot supervise a process you have never done yourself. A senior partner at McKinsey can use junior analysts effectively because she has done the analysis herself thousands of times. She knows what good looks like. She can smell a flawed assumption in a spreadsheet. A student who has never built an argument from scratch cannot evaluate whether AI’s argument is sound. They lack internal calibration. They have no place to stand.
The Jobs Question, Honestly
Let me address the fear directly, because it hangs over this entire conversation. Will there be fewer jobs? Are our children walking into a world where human work is obsolete?
The leaders building these systems are not reassuring on this point. Sam Altman and Dario Amodei have both spoken publicly about the magnitude of the disruption ahead. Amodei’s vision of “radical abundance” through AI is optimistic in the long run but brutally honest about the transition. Altman has said that AI will eliminate many jobs and that society needs to prepare. These are not critics. These are the people building the most powerful AI systems on earth. When they express concern, it deserves serious weight.
Here is my take. Over a long enough horizon, say 20 to 30 years, I believe new forms of work will emerge that we cannot yet imagine, just as they have in every prior technological revolution. The internet did not produce net fewer jobs. It produced different jobs: jobs that no one in 1990 could have predicted. I expect the same pattern here.
But the transition will be savage in its distribution. And the speed is unprecedented. Previous technological revolutions played out over decades. AI is compressing that timeline to years. The middle tier of knowledge work, the credentialed-but-not-expert layer that processes information, synthesizes reports, and generates routine analysis, is being hollowed out right now. Not in five years. Now. The people with genuine expertise can use AI as an extraordinary amplifier. The people who were coasting on credentials and process knowledge are discovering that AI can do what they do, faster and cheaper.
So the answer to “will there be fewer jobs?” is the wrong question. The right question is: fewer jobs for whom? And the answer is: for people who never built a place to stand. For people whose value was in execution, not judgment. For people who learned to produce outputs but never learned to think.
The new jobs, the ones that will be created, will go to people who can do what AI cannot: frame problems, exercise judgment under ambiguity, build trust with other humans, take ethical responsibility for outcomes, and orchestrate complex systems where AI is one component among many. These are the people with a fulcrum. AI is their lever. Everyone else is holding a stick.
Stories from Lived Experience
I want to make this concrete. Over the past few years, I have gone deeper into AI than any subject in my career. And I can tell you that the Archimedes principle is not a metaphor for me. It is my daily experience.
I built a custom AI system for writing business case studies. I encoded three decades of case-writing methodology into it: my structure, my style, my tone, my standards, my pedagogical logic. The foundation included insights from over 50 cases I have written over the years. The system now works the way I work. I can go from an idea to a solid first draft within a couple of hours. Astounding acceleration! But every creative decision, every structural choice, every judgment call is still mine. AI did not replace my expertise. It operationalized it. I had to build the expertise first before I could encode it.
I developed a framework called I-MOS, the Intelligent Marketing Operating System, for MIT Sloan Management Review. It maps seven core marketing workflows against an agentic AI operating stack. The architecture came from pattern recognition across hundreds of conversations with CMOs and decades of teaching and consulting. AI helped me iterate and refine at a pace that would have taken months of solo work. But the conceptual breakthrough, seeing the structure that organized what had been a fragmented landscape, was mine. AI did not see the pattern. I did. And then AI helped me articulate it with precision and speed.
Most recently, I have been building a comprehensive ontology for AI-driven marketing, a knowledge architecture that maps how concepts, workflows, technologies, and organizational capabilities connect. The depth and speed at which this work has progressed is, frankly, breathtaking. But the ontology reflects my mental model of how these domains relate. AI helped me externalize it, structure it, pressure-test it. The intellectual DNA is mine.
In each case, the pattern is identical. Human insight first. AI amplification second. Place to stand first. Lever second.
This Article as Proof of Concept
I want to tell you how this series came into being, because the story itself illustrates the thesis.
I was sitting at O’Hare, waiting to board a flight to Delhi. I had been carrying these ideas for months: conversations with anxious parents, observations from my own AI practice, a growing conviction that the education conversation was missing the point. At the gate, I opened my laptop, started a conversation with Claude, and began thinking out loud.
I did not ask AI to write an article. I brought the raw material: the instinct that skill security matters more than job security, the conviction that trades deserve more respect, the Archimedes metaphor that had been forming in my mind, the personal examples from my own work. I pushed. AI pushed back. I refined. AI helped me see structure in what had been a collection of instincts. I rejected some suggestions and sharpened others. The conversation surfaced the architecture of a three-part series.
By the time I boarded the flight, I had a solid first draft of three articles. Granted, the flight was delayed by 45 minutes. But still...
Now here is the question: was that AI slop? Was that a machine writing on my behalf? I would argue exactly the opposite. What happened at that gate was my accumulated expertise finding a lever powerful enough to match its ambition. The speed was not a shortcut. It was the result of the foundation. Every instinct I brought to that conversation was earned over decades of teaching, writing, consulting, and thinking. AI did not give me those instincts. It helped me articulate them faster than I could have alone.
A student with no experience in AI strategy, marketing transformation, or case writing could have sat at that same gate with the same tool and produced nothing of value. The lever was identical. The fulcrum was not. We all have word processors. But we are not all Shakespeare.
What Comes Next
The most powerful cognitive lever in human history is upon us. It will reshape work, education, and expertise more profoundly than any technology since the printing press. The people who thrive will not be those who adopt AI fastest. They will be those who built something solid to stand on before they picked up the tool.
You have the lever. The question is whether you have a fulcrum.
In Part 2, I will get specific. If you are a student, or someone advising a student, what should you actually study? Which fields, skills, and habits build the kind of foundation that AI amplifies rather than replaces? The answer involves more philosophy and plumbing than you might expect, and less coding than the conventional wisdom suggests.
In Part 3, I will speak directly to parents. Because the hardest part of this story is not knowing what to build. It is having the courage to let your children struggle while they build it.
* * *