Judgement is becoming the hardest skill to hire for
- Becky Webber
- Dec 20, 2025
- 8 min read
Updated: Dec 29, 2025
What AI adoption is revealing about leadership, decision-making and talent.
AI is now a major topic in business. Boards want higher productivity and efficiency. CEOs are asking which team members are “AI-ready.” Many job advertisements now mention generative AI tools, automation, and prompt engineering skills.
Focusing on the technology makes sense. The mistake is assuming that the human skills required to use it effectively will simply develop on their own.
Right now, most organisations are somewhere in the middle. They’re experimenting with GenAI for writing, analysis, research, and idea generation. Some are beginning to automate tasks or decisions. Wherever they are on that journey, one pattern is already clear:
When AI is used without enough human judgement, quality drops.
You can see it in everyday ways. Emails go out in the wrong tone. Chatbots frustrate customers. Reports look polished but miss context. People follow suggestions they don’t fully understand because the system sounds confident. Human effort then moves from “doing the work” to “fixing what the system did.”
The problem isn’t the tools. The problem is weak or absent judgement around them.
When AI is allowed to automate decisions and human oversight steps away too early, problems grow quietly. Small errors aren’t noticed, feedback loops disappear, and mistakes scale fast. What begins as a productivity gain can become frustration, reputational damage, or additional work for human teams tasked with remedying the outcome.
A recent Forbes Technology Council article highlights this. It notes that the biggest risks with increasingly autonomous AI systems are often not technical failures, but the removal of human oversight at key points. Systems do what they were designed to do, but things still go wrong because judgement has been pushed too far upstream. Without people deciding when to pause, escalate, or override, decisions are made at speed and scale, with no one to sense-check them.
AI doesn’t reduce the need for judgement. It raises it.
Recent global workforce research supports this.
The World Economic Forum’s Future of Jobs Report 2025, based on a survey of over 1,000 employers worldwide, found that analytical thinking is the most sought-after skill. Next are resilience, flexibility, agility, leadership, and social influence. While AI skills are rising rapidly, they still augment human abilities rather than replace them. The WEF research shows that core human skills remain in demand even as AI grows, because organisations still depend on people to interpret, challenge, and apply outputs in context.
If human skills determine whether AI changes work for the better, then how we hire those skills becomes a strategic priority.
Korn Ferry’s 2026 talent trends research supports this from a hiring angle. Talent leaders say critical thinking and problem-solving are top priorities for 2026, outranking AI fluency. These skills help people evaluate, question, and use AI outputs rather than simply accept them.
Korn Ferry also shows how some organisations are already responding. Instead of relying solely on interview performance, they are introducing more problem-based assessments. Rather than asking candidates to talk about past successes, they place people into unstructured scenarios and watch how they analyse information, weigh trade-offs and make decisions. This allows hiring teams to see critical thinking, adaptability and judgement in action, not just in polished stories.
This isn’t humans versus technology. AI is changing how work gets done. But whether those changes create value still depends on the quality of human judgement around it.

Why certain human skills matter more in an AI-enabled world
When AI becomes part of the workflow, three shifts tend to happen at once: work speeds up, outputs sound more certain, and errors become harder to detect.
As a result, some human skills become essential, not just nice-to-haves.
There is also a more subtle risk. When AI is consistently used to generate options, summaries or decisions, human judgement can become passive rather than engaged. People stop interrogating outputs as deeply, rely on default recommendations, and gradually lose the habit of sense-checking. Over time, this doesn’t just increase the risk of error. It weakens judgement itself.
Critical Thinking and Judgement
AI can suggest options, but it cannot:
assess organisational context,
challenge assumptions,
understand the full landscape of business risk,
decide when not to use a given output.
One of the clearest signals from Korn Ferry’s analyses is that talent leaders view critical thinking as vital to working effectively with AI: survey data show that 73% of talent acquisition leaders rank it as their top hiring priority, far ahead of AI skills. Many organisations already see this: the ability to question AI output, interpret it responsibly, and decide if it fits the situation is rarer and more valuable than just knowing how to use the tool.
Adaptability, Resilience and Learning Agility
Tools change fast. What you know today might not help tomorrow. The World Economic Forum notes that resilience, flexibility, and agility will set apart growing roles from declining ones over the next decade.
People who succeed aren’t just those who know the tool, but those who can unlearn, relearn, and manage themselves when things are uncertain.
Collaboration and Leadership
AI doesn’t eliminate silos in organisations. Sometimes, it can even make them worse. Leaders who build trust, bring together different perspectives, and lead across teams are more valuable than ever.
Communication, Self-Awareness, Motivation and Curiosity
These aren’t just “soft skills.” They are durable skills that apply across different jobs, technologies, and business models. They’re also harder to develop quickly than technical AI skills, which is why forward-thinking talent leaders prioritise them.
When leaders talk about “soft skills”, what is often missed is what has quietly been deprioritised. In many organisations, there has been less focus on judgement under pressure, emotional regulation in decision-making, and the ability to escalate, challenge or pause when something does not feel right. Learning agility has also been overshadowed by experience, even though the pace of change means experience alone quickly becomes dated. These capabilities rarely show up clearly on CVs or in polished interviews, but they are precisely what determine whether AI-supported decisions hold up in practice.
The hiring mistake many organisations are about to make
Many organisations say they value these human skills. Far fewer assess for them.
Instead, hiring still leans heavily on:
polished interview performance,
confident storytelling,
gut feel.
In an AI world, this is risky. Candidates are often well prepared and may even use AI to practise their answers. Knowing how to use the tools does not mean someone thinks deeply with them.
The question for leaders is not whether thinking matters, but whether their current hiring approaches make it visible.
Many hiring decisions still place more weight on experience and technical fluency than on motivation and strengths. People who enjoy the work and like solving problems stay curious and keep using judgement. Those hired mainly for credentials or past exposure are more likely to defer to systems, especially under time pressure.

Why the AI advantage is shifting from models to workflows
Another shift underway helps explain why human skills are becoming more important, not less.
Increasingly, the next phase of enterprise AI is not about access to better models. It is about how well organisations redesign their workflows. As McKinsey’s State of AI in 2025 report notes, AI impact is now shaped less by technical capability and more by organisational readiness to integrate it into everyday work.
This often shows up in everyday workflows. HR onboarding, for example, is frequently spread across recruitment platforms, HR systems, learning portals and manual handoffs between teams. Sales processes are similarly fragmented across siloed CRMs and reporting tools.
In these environments, AI struggles not because the models are weak, but because the data and decision points are disjointed. Intelligence exists, but it cannot flow cleanly through the organisation.
This view is echoed in LinkedIn News’ Big Ideas that will define 2026, which highlights how internal constraints such as fragmented data, brittle processes and legacy workflows often limit AI’s real-world impact, even when the technology itself is strong.
Saanya Ojha, Partner at Bain Capital Ventures, captures this succinctly. Many organisations already have access to highly capable AI models but lack the “connective tissue” of orchestration, observability and change management needed to turn intelligence into outcomes. Without this, AI initiatives tend to stall at the pilot stage rather than delivering sustained value.
This brings the conversation back to leadership and judgement. Deciding where AI should sit in a workflow, when human oversight is required, and how trade-offs are managed cannot be solved by technology alone. These are human decisions, and the quality of those decisions determines whether AI becomes an advantage or a liability.
What this looks like in practice
For organisations taking AI seriously, the biggest changes are not just in technology. They are in how leaders think about who they hire, how they develop people, and how judgement is built over time.
Several well-known organisations already show, through public examples, that focusing on human skills is key to using AI successfully:
Microsoft consistently positions its AI tools within a Responsible AI framework centred on fairness, reliability, transparency, accountability and human oversight. The message is simple: technology alone is not enough. Good outcomes depend on people who can question, interpret and apply it well.
Unilever is frequently cited in World Economic Forum case studies for its skills-based approach to workforce planning. Rather than focusing only on roles or past experience, it emphasises transferable, durable skills and internal mobility, helping people adapt as technology and work change.
DBS Bank similarly appears in World Economic Forum discussions as a case where digital transformation is treated as a leadership and culture challenge, not simply a tech rollout, emphasising learning mindsets and operating-model change.
These examples point to the same conclusion: AI creates value only where leadership, people and judgement are strong.
Across these organisations, a common pattern emerges in how leaders think about hiring and talent.
First, they prioritise how people think, not just what they have done. They are less persuaded by polished success stories and more interested in how people navigate messy, ambiguous problems. What matters is how someone frames the issue, surfaces assumptions and explains their reasoning, not just the outcome.
Second, they make judgement visible. Case studies, simulations and time-bound exercises are used not as filters, but as windows into how people reason under pressure. They reveal how individuals prioritise, weigh trade-offs, and make real-time decisions, which Korn Ferry identifies as strong indicators of future performance.
Third, they probe intent, not just execution. Leaders pay close attention to the “why” behind decisions. Strong judgement shows up in an ability to articulate trade-offs, risks and consequences. Weaker thinking often presents as confidence without explanation.
Taken together, this is the shift: organisations are realising that AI maturity and hiring maturity are now linked. If you want good decisions in an AI-enabled workplace, you have to select and develop people for the quality of their judgement, not only their experience or technical fluency.

Red and green flags leaders should pay attention to
When it comes to judgement, patterns of behaviour matter more than polish.
For critical thinking, notice how people respond when faced with uncertainty. Strong signals include curiosity, a willingness to explore assumptions, and an ability to explain their reasoning. Weaker signals include rushing to conclusions, offering a single “right” answer, or struggling to articulate why a decision makes sense.
For adaptability and learning, pay attention to how people handle challenges or feedback in the moment. Openness, reflection and adjustment are positive signs. Defensiveness, rigidity or over-reliance on past ways of working often indicate that change will be harder than it appears.
For collaboration and leadership, listen to the language being used. People with strong judgement tend to acknowledge others’ perspectives and talk about shared outcomes. A narrow focus on individual wins or working alone may feel efficient, but it often becomes a constraint in complex systems.
These signals rarely show up clearly if conversations stay surface-level. Leaders who consistently notice them are better able to sense-check their judgement long before decisions start to scale.
A final reflection for business leaders
The direction of travel is clear. AI will keep improving, and adoption will continue. The real differentiator won’t be the tools organisations buy, but the quality of judgement wrapped around them.
The best-performing organisations won’t just have the latest systems. They will hire for judgement, build learning agility, and support leaders to make thoughtful decisions at speed. They will treat human capability as infrastructure, not an optional extra.
Most organisations will not fail because AI doesn’t work. They will fail because decisions start moving faster than judgement can keep up.
The opportunity and responsibility for leaders now is simple and demanding at the same time: strengthen human judgement while the technology accelerates. That is where safer decisions, better work and real productivity gains will come from.