AI with People: a case study.

This case study shows what happens when AI change is led well in a recruitment business. We didn’t start with technology. We began with people: their concerns, questions, and day-to-day realities in a challenging market. The result was a shift from “AI is happening to us” to “we can shape how AI works here.”
Setting the scene.
AI didn’t slip quietly into the background. It was everywhere.
It was on the news in the morning, in social feeds at lunchtime, and in conversations with clients before the day had even finished. Headlines were loud, confident and rarely balanced. The story people heard most often wasn’t productivity or possibility. It was replacement.
“AI will take your job.”
“Recruiters will be automated.”
“Everything will be done by machines.”
At the same time, the recruitment market was already challenging. Roles were harder to fill, budgets were tighter, and expectations were higher. It was easy, and very human, to connect the two:
“Tough market + AI headlines = my role is under threat.”
Those two issues are separate, but they didn’t feel separate. They arrived together, and they landed personally.
Inside the organisation, that showed up as a mix of reactions:
- curiosity
- tiredness
- excitement
- frustration
- fear of being left behind
Some people were already experimenting with AI tools. Others avoided them entirely because they didn’t want to make a mistake or look foolish.
The risk wasn’t AI itself.
The risk was noise, fear, and silence.
That’s where the journey began.
What we set out to do.
The goal wasn’t to “roll out AI”.
The goal was to help people:
- feel safe talking about how they felt
- separate headlines from reality
- understand what AI can and can't do
- learn how to use it well and safely
- feel confident talking to clients about it
In short:
Move people from worry and uncertainty
to clarity, confidence and buy-in.

What we did.


[Image: David Rock's SCARF Model, from 'Coaching with the Brain in Mind' (2009)]
1. “From Fear to Buy-In” session — making it safe to talk.
We started with a session focused on how AI feels, not how it works.
We:
- named the fears people already had
- talked honestly about job impact and identity
- explored why our brains react strongly to uncertainty
- used David Rock's SCARF model to normalise those reactions
- made it clear that no question was silly or naïve
We delivered this session alongside the Managing Director to make one message visible and supportive:
Leadership is in this conversation with you, not above you.
2. We equipped consultants to talk to clients confidently.
AI was already coming up in client conversations, so avoiding it wasn’t an option.
We created:
- a client guide to help consultants discuss AI without scare stories
- realistic language to describe what AI changes and what it doesn't
- simple explanations of benefits, risks, and guardrails
This meant consultants could stop dodging the topic and start leading calm, confident conversations.


3. We built practical crib sheets for “in-the-moment” support.
Alongside the guide, we produced simple, usable tools:
- objection-handling prompts
- language to explain what remains human in recruitment
- reassurance about value beyond automation
- responses to AI tools clients mentioned directly
The point wasn’t to “win” against AI tools.
It was to own our human strengths without defensiveness.
4. We faced AI competitors together.
Rather than pretending AI recruitment tools didn’t exist, we looked at them directly.
We explored:
- how they work
- where they really add value
- where they fall short
- what genuinely differentiates human recruiters
This reduced fear and built pride:
Yes, AI is powerful... and humans bring judgement, trust, context, timing, nuance and relationships.


5. We ran interactive workshops, not tick-box training.
We didn’t lecture.
We ran hands-on workshops where people:
- learned basic GenAI skills
- saw real examples from their own work
- tried tools in a safe environment
- asked anything without embarrassment
We covered:
- AI policy and guardrails
- risks such as shadow AI, data privacy and hallucinations
- getting everyone safely set up with MS Copilot
- simple everyday uses like emails, adverts and LinkedIn posts
The aim was simple:
Help people feel more confident at the end than they did at the start.
6. We created AI Champions. Change with people, not to people.
We didn’t force adoption.
Instead, we formed a small voluntary group of AI Champions:
- some early adopters
- some slower adopters who wanted to grow
Their role wasn’t policing. It was support, encouragement and curiosity.
They:
- shared good practice
- tried tools and fed back honestly
- became relatable "go-to" people
- helped shape what AI should and shouldn't do
This put power back with the people actually doing the work.
7. We delivered role-specific prompt training.
Instead of “generic AI training”, we focused on recruiters’ real tasks.
We built and taught simple prompt frameworks for those tasks.
We co-delivered this training with a consultant who had started out sceptical and became a convert. That mattered. It signalled:
You don’t have to start enthusiastically.
You just have to be open to learning.
The same consultant tested the prompts and gave invaluable feedback, which kept the training grounded in real work.
The core message was always:
Your brain first.
Then AI.
Then your judgement.
What changed.
Even without formal metrics, the shifts were clear and visible.
We saw:
- higher confidence using AI tools safely
- more open conversation and less fear
- a reduction in risky shadow AI use
- consultants talking to clients about AI without anxiety
- real time savings in everyday admin and written work
- a small, credible taskforce of AI advocates emerging in the business
Most importantly, people stopped feeling like AI was “being done to them” and began to feel they had a role in shaping it.
What this proved.
This experience reinforced something simple but important.
AI adoption isn't really about AI.
It's about:
- psychological safety
- identity
- trust in leadership
- permission to learn
- clear, human communication
When people feel safe and involved, they don't just use AI; they use it well.
The principle this work is built on.
This whole journey rests on one belief.
AI works best when it's introduced with people, not to them.
That's what "AI with People" is all about. Practical, human-centred AI adoption that respects how people actually feel, think and work.
