AI-Proofing your Future: How to Learn, What to Study, and Where the Jobs Will Be (Part 2)
Part 2: Philosophy, Plumbing, and Where the Jobs Will Be
This is the second in a three-part series on skills, jobs, and learning in the age of AI. Part 1, “AI Isn’t Taking Jobs. It’s Taking the Ability to Learn,” explored how AI disrupts not just work but the learning process that builds expertise. If you haven’t read it, I’d encourage you to start there. This piece gets concrete: what should students actually study, and where will the jobs be? Part 3, “Advice for Parents: Protect the Struggle,” will address how to raise capable humans when the easy path is always available.
* * *
Students heading to college ask me an urgent question: What should I major in?
It is the wrong question. Majors are institutional categories. They describe how universities organize departments, not how the world organizes value. Nobody hires a “political science major.” They hire someone who can analyze complex systems, write with clarity, and make a persuasive case under pressure. The major is just the container. The capabilities are the content.
In an AI world, this distinction matters more than ever. The right question is not “what should I major in?” but “what capabilities will remain valuable and become more valuable as AI gets better?” Job titles come and go. Skill portfolios are timeless. The student who builds the right portfolio will have options that no single major can guarantee. The student who picks a “safe” major without building underlying capabilities will discover that no major is safe.
In Part 1, I argued that AI is the most powerful cognitive lever in human history, but a lever without a fulcrum is just a stick. The fulcrum is human judgment, pattern recognition, and first-principles thinking, built through years of struggle and practice. This piece is about what that fulcrum is made of. What, specifically, should you study and build?
The answer involves more philosophy and plumbing than you might expect, and less coding than conventional wisdom suggests.
Think Skill Security, Not Job Security
Before I get to specific fields, let me introduce a concept that should reframe how you think about career preparation: skill security.
Job security means you hold the same position for a long time. It is a relic of an era when industries moved slowly and institutional loyalty ran in both directions. That era is over, and AI is accelerating its end. Entire job categories will be created and destroyed within your career span.
Skill security is different. It means you have mastered capabilities that will be valued regardless of which jobs exist. The specific role changes. The underlying abilities transfer. A person with strong analytical reasoning, clear communication, and the ability to orchestrate complex projects will find work in industries that do not yet exist, doing jobs that have not yet been named. That is skill security. It is the only kind worth pursuing.
So what are the skills that endure? I see five meta-capabilities that every student should be building, regardless of what major they choose.
Pattern recognition. The ability to see structural similarities between seemingly unrelated problems. This is what makes a great strategist, diagnostician, or investor. It is trained by exposure to breadth, not just depth. The student who studies history, economics, and biology will see patterns that the student locked into a single discipline will miss.
Judgment under uncertainty. Knowing when the data is sufficient, when to act despite ambiguity, when to trust the model and when to overrule it. AI can process information. It cannot take responsibility for a decision when the information is incomplete and the consequences are serious. That requires a human who has been wrong enough times to develop calibrated intuition.
Orchestration. Designing and managing systems where humans, AI agents, and processes work together. This is the skill of the conductor, not the violinist virtuoso. Whether you are orchestrating a marketing campaign, a construction project, or a clinical trial, the ability to see the whole system and coordinate its parts is increasingly the most valuable thing a professional does.
Communication and persuasion. The ability to translate complexity into clarity, to move people to action, to build consensus across conflicting interests. AI can draft a memo. It cannot own a relationship, stand behind a recommendation with its reputation on the line, or read a room full of skeptical executives. These are irreducibly human skills.
Ethical reasoning. As AI systems gain autonomy, someone has to set boundaries, define what “good” looks like, and take responsibility when things go wrong. This is governance, safety, and values-driven leadership, and it is precisely what philosophy and the liberal arts train you to do.
Fields That Will Win
With those meta-capabilities as the frame, let me walk through specific fields. I am going to be opinionated here, because my readers expect me to shoot straight, not hedge.
Liberal arts. I will say it plainly: the liberal arts are the single best training ground for the cognitive skills AI cannot replicate. This is not the defensive “liberal arts still matter” argument that embattled humanities departments trot out at fundraising dinners. This is the offensive argument. Philosophy trains you to detect bad logic, and in a world flooded with AI-generated plausibility, detecting bad logic is a survival skill. History trains you to recognize patterns across contexts, which is the foundation of strategic thinking. Literature builds empathy and narrative skill, which are the foundations of leadership and persuasion. These are not soft skills. They are the hardest skills to automate and the hardest to acquire.
In a world drowning in AI-generated content, the scarce resource is not production. It is taste, editorial judgment, and the ability to say something worth saying. Liberal arts build these muscles.
Economics and mathematics. Economics is the grammar of incentives, tradeoffs, and systems. It teaches you to think about second-order effects, unintended consequences, and equilibrium dynamics. Every AI deployment decision is fundamentally an economics problem: cost-benefit under uncertainty, principal-agent tensions, market design. Economics also bridges quantitative and qualitative reasoning in a way few disciplines do.
Mathematics, particularly statistics, probability, and optimization, gives you the conceptual foundation to be a credible participant in any technical conversation without necessarily being the one writing the code. You do not need to build the model. You need to know how the model works, and when the model is lying to you. Mathematical fluency is the difference between being a consumer of AI and being its architect.
Computer science, redefined. AI writes code now, and it will write it better next year. Coding may be dying, but computer science lives. Computer science as a discipline of systems thinking, abstraction, and architecture remains powerful. The question is whether you are learning CS to be a coder or to be a systems architect who understands how software, data, and AI compose into larger systems. The latter is deeply durable. What is dying is the middle tier of implementation work. What is thriving is the ability to design, specify, evaluate, and govern complex technical systems. CS as vocational coding training is finished. CS as computational thinking is alive and well.
Design thinking. Design frames problems before solving them, by understanding context and constraints, prototyping and iterating. AI can generate a thousand options in seconds. A human has to decide which ones are worth pursuing. Whether it is product design, service design, organizational design, or policy design, this is the skill of the architect: setting the criteria, evaluating the options, and taking responsibility for the choices. AI is a spectacular option generator. It is a terrible judge. Judgment is where humans earn their keep.
Cognitive science. This is the sleeper field that few people talk about. As AI systems become more capable, the people who understand how humans think, perceive, decide, and err will be disproportionately valuable. Human-AI teaming, AI safety, user experience design, behavioral product design: all of these require deep understanding of cognition. If you want to build AI systems that actually work for humans, you need to understand how humans work.
Plumbing, Welding, and the Smartest Career Bet Nobody Is Making
Now let me make an argument that will surprise some readers and feel obvious to others. One of the smartest career moves a young person can make today is to learn a skilled trade.
The economic logic is solid. Start with what robotics researchers call Moravec’s Paradox: tasks that are easy for humans, like navigating a cluttered basement, diagnosing a strange rattle in an HVAC system, or running plumbing through a 90-year-old building with no two walls the same, are extraordinarily difficult for machines. High variability physical work in unpredictable environments is the last frontier of automation, not the first. Your plumber’s job is safer from AI than your financial analyst’s.
Next, consider supply. In the United States, we have spent decades steering every capable student toward four-year university degrees, creating a massive skilled-trades shortage. Electricians, welders, plumbers, and HVAC technicians have pricing power that many white-collar knowledge workers would envy. An experienced electrician in a major metro can earn $120,000 to $150,000 a year with no student debt and high job security. Try saying that about a freshly minted communications major.
Even better, AI blends beautifully with these trades. An electrician who uses AI to optimize energy systems, diagnose problems faster, and manage a crew with AI-powered scheduling becomes dramatically more productive. AI is a complement to physical-world expertise, not a substitute. The trades are not a fallback. They are a strategic choice.
The obstacle is cultural, not economic. We have created a status hierarchy where a philosophy major working at a coffee shop has more social prestige than an electrician earning six figures. That is a market inefficiency driven by signaling norms. I am calling it out plainly. If we are serious about preparing young people for an AI world, we need to talk about the trades with the same respect we give investment banking and consulting. The economics demand it, even if the social signals have not caught up.
Healthcare: Durable but Transformed
Nursing and medicine are structurally safe. Aging populations, chronic disease burden, and the fundamental human need for care from other humans ensure that healthcare demand is not going away. But the nature of the work will shift enormously.
AI will handle diagnosis support, treatment planning, administrative burden, and routine monitoring. Much of what medical students spend years memorizing will be handled by systems that are faster and more accurate than any human memory. Fewer radiologists will be needed to analyze medical images. What remains uniquely human: intricate surgery, clinical judgment under ambiguity, the ability to integrate AI recommendations with a specific patient’s context, emotional presence, the conversation that helps a scared patient make a difficult decision, and the ethical weight of choosing when to override the algorithm.
The advice for a student interested in healthcare: pursue it with conviction. But build your professional identity around physical skill, judgment, and human connection, not around information mastery. The doctor who thrives in 2035 is not the one who memorized the most. It is the one who knows what to do when the AI’s recommendation does not match what they see in the patient’s eyes.
Where the Jobs Will Be
Let me move from fields of study to the actual landscape of work, because students rightly want to know: where do I end up?
The jobs that are growing sit at the intersection of human judgment and AI capability. They are roles where a human sets the direction, defines the quality standards, manages the exceptions, and takes accountability for outcomes, while AI handles speed, scale, and pattern-matching.
Every AI system needs someone to design it, someone to evaluate whether it is working, someone to handle the cases it gets wrong, someone to explain its outputs to stakeholders, and someone to decide when to override it. Those are all human roles, and they require the meta-capabilities I described earlier. They also require domain expertise: you cannot govern an AI system in healthcare if you do not understand clinical practice, and you cannot orchestrate AI in marketing if you do not understand customer behavior.
The jobs that are shrinking are the ones in the middle: roles defined by processing information, following established procedures, and producing routine outputs. These include much of traditional financial analysis, standard legal research, basic software development, routine content creation, and administrative coordination. Not because these tasks are unimportant, but because AI can now do them faster, cheaper, and often better.
This is why I keep coming back to skill security. The specific job titles of 2035 are unpredictable. But the capabilities that will be rewarded are not: judgment, orchestration, communication, ethical reasoning, and deep domain expertise that gives AI something meaningful to amplify.
Start Building Now
I want to close with something actionable, because advice without practice is just pontification. If you are a student, or someone advising a student, here are habits worth starting this week. Not because they guarantee a specific career, but because they build the kind of mind that AI amplifies rather than replaces.
Write by hand regularly. Not because handwriting is sacred, but because the slowness forces you to think before you write, to choose words deliberately, and to develop an internal voice that is yours. In a world where AI can generate fluent prose on any topic, having a distinctive voice is a competitive advantage.
Read long-form material without summarization tools. Build the stamina to sit with a 300-page book and extract meaning through your own effort. This is cognitive endurance, and it is disappearing. The ability to hold a complex argument in your head, follow its logic, and form your own view is exactly the capability that AI threatens to atrophy.
Argue positions you disagree with. Take the opposing side and make its strongest case. This builds intellectual flexibility and guards against the confirmation bias that AI can amplify when it tells you what you want to hear.
Build something physical. Woodworking, cooking, gardening, wiring a circuit, fixing an engine. Engage with the material world where feedback is immediate, honest, and cannot be prompt-engineered away. There is a reason that every wisdom tradition values craft: it teaches you that reality does not negotiate.
Practice being wrong. Keep a journal of predictions and beliefs. Review them. Notice where you were wrong. Update. This is the fundamental loop of learning, and it requires the intellectual honesty to confront your own errors rather than letting AI shield you from them.
* * *
The question I hear most often, “what should my kid study?”, assumes that the answer is a field. It is not. The answer is a set of capabilities that no field owns and no AI can replicate: the ability to think from first principles, to see patterns others miss, to communicate with precision and empathy, to make decisions when the data is incomplete, and to take responsibility for the outcome.
Build those capabilities, and every field is open to you. Skip them, and no degree will save you.
In Part 3, I will speak directly to parents. Because knowing what to build is only half the battle. The harder half is creating the conditions where your children actually build it. That means protecting the one thing every instinct tells you to eliminate: the struggle.
* * *