AI with people: a case study.

How a leadership team moved their organisation from noise and fear to confident, practical use of AI
The leadership problem.
The leadership team could see two things happening at once.
AI was suddenly everywhere. In the news, on LinkedIn, in client conversations and team meetings. The loudest messages focused on job loss and replacement. At the same time, the market was already challenging, and workloads were high.
Although these were separate issues, people experienced them as one. Leaders could sense rising anxiety, silence in some conversations, and uneven pockets of AI experimentation happening in the background.
Their real challenge wasn’t “how do we bring AI in?”
It was:
- How do we reduce fear without pretending to have certainty?
- How do we keep trust and psychological safety?
- How do we encourage innovation without creating risk?
- How do we protect human judgement and standards as AI use increases?
This case illustrates a pattern emerging globally: organisations are discovering that AI isn’t just a technology deployment; it’s a people and judgement challenge. Leaders who attend to human readiness build more resilient practices than those who focus on technology alone.
The risks to the business.
Alongside the people challenges, there were clear organisational risks if nothing changed:
- Shadow AI use creating unmanaged data and compliance risk
- Inconsistent messages going to clients about AI
- Over-reliance on AI outputs without sufficient human judgement
- Missed opportunities because people were too anxious to try things
- Disengagement, as market pressure and AI headlines blurred into “job threat”
- Reputational risk if mistakes were made using AI without oversight
Policies for AI and data already existed. Governance wasn’t the gap. The risk was that people were unsure how to live the policy in everyday work, so AI was either avoided completely or used quietly in the background.
The organisation didn’t need more documentation.
It needed clarity, capability, and confidence.
What the leaders wanted.
The leadership team weren’t trying to “roll out AI”. They wanted something simpler and more human. They were not just trying to control risk; they were trying to build judgement capacity so people could confidently interpret and act on AI output.
They wanted to:
- Make it safe to talk honestly about AI and its impact
- Reduce fear, second-guessing and silence
- Bring existing policy to life in practical, everyday terms
- Help people build confidence without losing judgement
- Protect relationships, tone and ethical decision-making
- Support innovation without increasing risk
They cared less about tools and more about how people felt using them.
What we did together.
We designed a people-first approach, led visibly by senior leaders and grounded in real work.

1. Open conversations, not broadcasts.
We started with leader-sponsored sessions that:
- Acknowledged worry, fatigue and curiosity
- Helped people separate AI headlines from their day-to-day jobs
- Explained why the brain reacts strongly to uncertainty and change
- Made it clear no question was “silly” or naïve
Psychological safety was the foundation, not a by-product.
2. Policy brought to life.
AI and data policies were already in place.
The work was to make them usable, human and non-threatening.
We showed, in plain English:
- What the policy really means in everyday tasks
- Where the boundaries are and why they exist
- How to avoid risk without being afraid to try things
- That policy is there to support good work, not stop it
Instead of adding more rules, we built confidence in living by the ones that already existed.
3. Practical capability building.
We worked through real tasks and created prompts tailored to specific roles and everyday repetitive work, not abstract examples:
- Emails to clients and candidates
- LinkedIn posts and adverts
- Shortlists and research
- Idea generation and drafting
The focus was always:
Your brain first.
Then AI.
Then your judgement.
4. Facing disruptive competitors and equipping people for client conversations.
Alongside the internal worry about AI, there was a very real external pressure:
New AI-driven competitors entering the market with bold claims about speed, cost and automation.
Clients were asking difficult questions. Some were openly testing alternatives. Ignoring this wasn’t an option.
Rather than saying “don’t worry about it”, we equipped people with clarity and language so they could lead confident conversations.
Together we looked at:
- What these AI-based competitors genuinely do well
- Where automation really does help
- Where AI still falls short in real-world settings
- What clients consistently value that AI cannot replace
We focused on what humans do best:
- Judgement and context
- Understanding culture and team dynamics
- Relationships and trust
- Handling ambiguity and incomplete information
- Sensitive and ethical decision-making
- Noticing what isn’t written in the data
Employees were given practical wording they could use with clients so conversations sounded calm and credible, not defensive:
Yes, AI changes how we work.
No, it doesn’t replace human judgement or relationships.
Our value is how we combine both.
This reduced fear internally and strengthened confidence with clients.
5. AI advocates - change with people, not to people.
We created a small group of AI advocates from across the business. They weren’t “super-users” or enforcers. They were trusted peers.
Their role was to:
- Support colleagues
- Share what worked in practice
- Raise concerns early
- Help shape what “good” looks like
This meant adoption grew sideways through the organisation, not just top-down.

What changed.
Leaders noticed clear shifts:
- AI stopped being whispered about and became discussable
- Less shadow AI, because people felt safer using approved tools
- More consistent client conversations about AI
- Confidence increased without losing healthy scepticism
- Teams shared ideas openly instead of hiding experiments
Most importantly, there was a psychological shift:
AI was no longer something happening to people.
They had a role in shaping how it was used.
What this reveals about future work.
Organisations that treat AI adoption as a people problem, not an IT one, build more resilient, judgement-capable teams.
Why this matters for leaders.
This case study isn’t about one organisation. It reflects what many leaders are facing now.
Technology and policy already exist. The difference in outcomes comes down to how ready people are to use them well, especially their ability to exercise judgement, question AI outputs, and hold context alongside confidence.
And as explored in my article 'Judgement is becoming the hardest skill to hire for', AI doesn’t remove the need for judgement; it increases it. Organisations now need:
- Stronger critical thinking
- Confidence to question AI outputs
- Awareness of bias, fairness and tone
- Leaders who talk honestly about uncertainty
The difference here wasn’t the toolset.
It was leadership choosing to introduce AI with people, not to them.