The Thinking Hierarchy
A roughly neuroscientific parsing of job fungibility
Which jobs are going to be AI-proof in the next 10 years?
Oof, what a question. In the past few months, I’ve found it hard to be someone who both produces and consumes career-related content without bumping into this question. It lingers in search trends, comments sections, and even my typically writerly Substack feed. But it’s understandable: the world has felt pretty dark and destabilized as of late, and most people just want to know if they’re going to be able to pay for housing in five years’ time.
But in truth, this question troubles me. Partially for the obvious reason that it’s disturbing to witness large, well-educated swaths of the population panic about basic job security, but also for the less obvious reason that the question reflects what I consider to be an unproductive fixation our society has with “jobs.”
I say “jobs” with emphatic scare quotes because, despite how we talk about them, jobs are nothing more than arbitrary, man-made task containers that organizations price for convenience. It’s the pricing part, I suspect, that makes us all weird about it. And while it would be nice to think that the bundling and pricing are driven by sound reasoning and objective market forces, the fact that there are people wandering around in broad daylight with job titles like “Wizard of Light Bulb Moments” (i.e., marketing coordinator) and six-figure salaries is all you need to know to understand how little hard-coded economic logic is at work.
Contrary to the way job listings make it seem, companies don’t price jobs; they price the people in them. And while salary ranges are often decent estimates for what the average qualified candidate will be worth, they’re loose indicators at best. Maybe that’s obvious when there are jobs on LinkedIn with $80,000-$250,000 ranges, but as someone who was privy to salary negotiations for many years, let me tell you: you’d be amazed at how a salary range can shift for the right candidate. Jobs don’t have inherent value, and they certainly don’t have predictable security guarantees. The World Economic Forum can publish as many reports as it wants, but the world’s leading labor market forecaster in 2002 could never have foreseen that their daughter would become an iOS engineer in 2026. And not for lack of imagination, but because job markets have always been tumultuous and capricious, and singular people and products (like the iPhone) can topple even the best-forecasted trends. And at the pace things are changing? To me, it feels like a fool’s errand entirely.
But wholly irrespective of tasks and how they’re bundled into jobs, the logic that has governed what makes people valuable and employable has actually stayed quite consistent over time. A standout job candidate can double their salary at the offer stage for the exact same reason that a beloved executive assistant can become one of the most job-secure people in the world, despite administrative job postings having decreased by 13% since the launch of ChatGPT: some people just think differently. About all tasks. Any tasks. Literally anything and everything they do. What those people know, and what AI is now forcing the population at large to confront, is the enduring truth that how you think, not what you think about, has always been the primary determinant of your fungibility in the workplace. The only thing that’s changed is that silicon thinkers have entered the chat.
There are many frameworks and heuristics for explaining how people think, but as a recreational (read: non-expert) neuroscience reader, I’ve found a few mental models particularly helpful for understanding the implicit hierarchy of workplace thinking. Daniel Kahneman’s popular System 1/System 2 framework and Robert Bramson’s five thinking styles model stand out, but, as far as I understand, contemporary neuroscience is rich in support for a hierarchical view of cognition. Some types of thinking are simply rarer and more sophisticated than others, and some types of people are simply more capable of them.
From my experience working across a variety of industries and functions, what I’ve come to believe is that all organizations contain three basic, industry-agnostic categories of thinkers: executors, problem solvers, and problem identifiers. Kristen’s Unscientific Taxonomy of Workplace Thinkers, if you will (or KUTWT, if you’re my fiancée who delights at random acronymizing). And long before Claude began instigating existential dread in millennials, career security was heavily determined by which of these categories you belonged to.
KUTWT Explained
Executors
The most abundant type of thinker in the workplace is the executor: people whose approach to work is to consistently do a great job at the things they’re told to do. While the majority of these people hold administrative and associate-level roles, they can be found up and down the org chart. But just as late-twentieth-century knowledge-work offshoring has threatened many of these careers for decades, AI is now accelerating their displacement.
While great executors are necessary in every workplace, the type of thinking required to be a great executor is straightforward, procedural, and precision-focused. Executors do work such as interpreting and following plans, noticing errors, optimizing within constraints, and generally thinking about the question, “How do I do this correctly and quickly?”
If the neuroscience of it all matters to you, this type of thinking is largely associated with the cerebellum, mapping quite neatly onto Kahneman’s description of System 1 automaticity and the cerebellar loops that govern habit and motor control. In other words, it’s simple, highly definable thinking that’s guided by clear inputs and outputs. Most people can do it; most people can be taught how to do it; and, unfortunately, so can AI. Executors are, while valuable, as fungible (read: job insecure) as they come.
Problem Solvers
Problem solvers are a little better off. Often clustered in middle-management and high-level individual contributor roles, problem solvers are thinkers who can figure out how to fix problems and achieve goals others have set for them. Fortunately for problem solvers, they’re perfectly suited for the majority of problem-solving challenges that arise at work. When you interview them, they’re the type of people who tell you that they “work solution-backward” because, regardless of what task bundle they own, they apply analytical and creative reasoning to their work. Think (haha): convergent focus on trade-offs, root causes, and bounded options that allows them to work through complex problems to find the best solutions.
This type of thinking sits in our buddy Daniel’s System 2 category of deliberate reasoning, which engages prefrontal networks for hierarchical control and white matter pathways that converge on optimal solutions. In other words: complex stuff. Complex enough that problem solvers are often fast risers early in their careers because the complexity of their thinking allows them to self-differentiate quickly in junior, execution-heavy roles. If you’ve ever managed a team of associates, you know that these people are not hard to spot. While great executors are an asset to any team, problem solvers make the difference in keeping teams focused on the right tasks and breaking down walls.
Unfortunately for problem solvers, they often stall out in middle management, and while they’re unbelievably valuable, they are not particularly rare. Reasoning skills—like assessing trade-offs and performing root cause analyses—are basic tenets of most people’s undergraduate education, and the rush to trim the fat in middle management has already placed problem solvers who are not also wickedly effective executors on thin ice in the current economy.
Problem Identifiers
But that brings us to the problem identifiers, the final tier of workplace thinkers who are responsible for the indescribably challenging work of figuring out which problems need to be solved in the first place. These people, often leaders, look not just at the problems they know, but at the whole world around them, the infinite bits of context that comprise a company’s universe, and draw on a very elusive and advanced type of thinking to figure out which problems to give to the problem solvers.
Final bit of science for you: problem identification, as we understand it, draws on a very distinct neuroscience profile that can help explain both its current rarity and its resistance to training and AI replication. Problem identification occurs through flexible prefrontal networks and parietal areas that detect subtle signals (think: feelings) before logic even kicks in. It would be reductionist to call it intuition, but that’s a part of it. Problem identification recruits numerous areas of the brain to scan multiple contexts, reframe situations, wrangle ambiguity, and eventually produce signals, so to speak, that we experience as gut feelings that something is wrong, even when it looks right. Unlike the automatic, cerebellum-driven thinking of executors or even the targeted, prefrontal path-driven analysis of problem solvers, problem intuition demands divergent thinking (yes, my neurodivergent friends, the type of thing your brains are good at), contextual intuition, and creativity. In the simplest language I can give you: it’s rare because a whole lot of your brain has to be doing a whole lot at once. But without people who can do it, organizations solve the wrong problems, make nonobvious yet devastating mistakes, and put on their own corporate productions of Much Ado About Nothing.
Over the years, I’ve had a lot of friends, students, and employees ask how to improve at this type of thinking, and, to my dismay as a former debate educator, I don’t think I’ve ever succeeded in helping anyone do so. I’ve seen exceedingly motivated debaters passably mimic this type of thinking, but there is a palpable difference between students who “get it” and those who don’t, and I’ve never seen someone truly close the gap. Problem identification is ephemeral, philosophical, and rooted in a combination of intuiting, context-sensing, social sensing, and pattern recognizing across loosely connected domains. You can arguably teach all of these things individually, but how to combine them to see cracks where other people see smooth surfaces? I don’t know if you can.
I will also note that organizations don’t exactly incentivize this type of thinking, even though they’re reliant on it. High-nuance problem identification often looks like musing about which types of organizational knowledge are axiomatic rather than well-reasoned or hypothesizing about the theoreticals. It is particularly philosophical, as far as workplace thinking goes, and pulls you out of the world of “what” and “how” and strictly into the territory of “why” and “what if.” Unsurprisingly, most companies would actually prefer you don’t sit around philosophizing about your job, especially in this “agency” and speed-obsessed economy where the Silicon Valley Messiahs (SVMs, if you’re Logan) won’t stop talking about their pathological bias to action. The result is something of a paradox: problem identifiers are essential, but too many are a headache, making their rarity both problematic and necessary at once. I’ll leave you to ponder that on your own.
I recognize that it’s not exactly encouraging to read that only a small subset of thinkers engage in work in a way that makes them career-stable, but what I hope to offer you, with genuine optimism, is this: reckoning with this reality and admitting where you fall within it is likely the most valuable and instructive thing I can share to help you navigate the impending future.
Understanding what type of thinker you are and where the threat to your career comes from may be the most constructive thing any of us can be doing to make sense of what we need to do next.
If you’re an executor, step one is likely recognizing that AI may be a legitimate threat to your work. And while there will always be a place for exceptional executors, I suspect that workplaces of the future will demand that executors also be problem solvers and that problem solvers be great executors. Perhaps that’s cause for solace: many problem solvers loathe execution, feel too good for it, or simply don’t have the stamina for it, and unlike problem identification, problem solving is a highly trainable critical-thinking mode. Practice using frameworks like mind maps and “why” trees. Seek out bounded puzzles (I nonfacetiously think you should pick up chess), volunteer for ambiguous tasks, fail fast, and watch your problem-solving colleagues like a hawk. Executors should certainly be learning to use AI for efficiency, but my totally unprofessional take is that if this were the type of thinking I excelled at most, I would be disproportionately putting my time into building my problem-solving skills.
I suppose for problem solvers, the imperative runs the opposite way. The more companies collapse jobs into “player-coach” roles, the more critical it will be that problem solvers dust off their execution chops and either get very good at using AI for task execution or get very comfortable rolling up their sleeves and doing precision, detail-oriented work themselves. You may not need to develop new reasoning skills, but you do need to defend the ones you have ruthlessly. Form and communicate opinions without AI. Do the tiring thinking without asking Claude. I know I can sound like a critical-thinking extinction doomsayer, but the downside risk of erring on the side of thinking for yourself is so low. Keep your efficiency-seeking far from your thinking, and work on ensuring you can still execute like a twenty-one-year-old Bain consultant. Play around with Claude Cowork, but also: have faith. There’s so much perceiving, sensing, and reasoning that AI still can’t touch.
My final words of warning are to the problem identifiers, even though you’re probably feeling pretty good by the end of this essay (if you made it this far, that is, in which case… thanks for reading!). But if you have the great privilege of being able to activate your whole brain at the same time to see things other people can’t see, you only have value to lose. The threat AI poses to problem identifiers is not in its capabilities but in its allure: the shiny, sexy, enticing appeal of making things just a little easier. One day I will stop writing newsletters about this, but it won’t be today: the hard parts are the point. Keep lying awake at night, haunted by the nagging feeling that something isn’t right. Continue to ponder past the point of reason. Think until your brain hurts so badly that only swimming or reality TV can soothe it. Hold the indescribable parts of being a thinking being close to your chest. Even when the calls from California are telling you to automate, shortcut, and keep up. They don’t know which jobs we’re running toward either.
Thanks—always—for reading. Talk soon.

