Would you trust your future to machines?
OK, I’ve calmed down now, but I had a bit of a rant last week. What provoked this, you might wonder?
I’m in the people business – recruitment, coaching, developing talent, leadership spotting, career courses. I like people, I find them interesting on an intellectual and emotional level. Everyone is different, everyone has a story and everyone has something to offer. My job, and my passion, is to identify this.
I don’t have all the answers. I’m lucky, I can learn from the people I meet. Because I’m blessed with a good memory, each new meeting or insight helps build my knowledge, my experience, my insight into what makes good (and bad) leadership, management and impact. I use my insights to inform my judgement when talking and listening to people.
So why the rant?
An allegedly leading headhunter wrote an article promoting artificial intelligence as a legitimate replacement for effective search and selection. The article was a bit confused, but essentially the proposition was that technology could be used to attract the best candidates to a role and that online selection was a critical part of this offer.
Yes, responsive websites with embedded media and comprehensive links are vital for recruitment. But these have been around for nearly a decade and are hardly innovative. We’ve built these for clients for more than 7 years. I wholeheartedly endorse this.
However, online selection is a crude tool: it should not feature strongly as an attraction device in executive search, and it should not play a part in early-stage sifting of candidates for senior executive roles. It is only necessary or useful for high-volume recruitment – the opposite end of the market from executive recruitment.
I don’t accept that an online assessment tool can replace experienced people exercising judgement based on years of experience.
Yes, artificial intelligence has its place – that’s why high street retailers (like M & S) or grad recruitment programmes use online assessment as part of an initial sift. They’re dealing with a high volume of candidates and must find an automatic way of cutting down numbers. But this is not appropriate for executive recruitment.
This has no place in a quality service.
It might be appropriate where the emphasis is on cutting costs or moving a product down-market to increase margins. But this does not make it a quality offer. It is not what local government needs at senior levels and it is certainly not what the public sector buys when it chooses an executive search partner.
People would rather talk to other people about a job – someone who knows a client, their dynamics, foibles, preferences and challenges. We want to talk to people as it gives us an insight into character, experience, motivation, communication style and fit. No technology can be an effective substitute for people connecting with people.
On top of this, such use of AI as an initial selection tool raises serious questions about unconscious bias and therefore indirect discrimination. When local government has serious problems with the glass ceiling, I think this is also sloppy intellectually and borders on unethical.
As a psychologist I learnt the need for real caution when designing psychometric tests. The norm group must be clearly referenced and relevant, and the test must be both valid and reliable. It’s not just about designing online tools, it’s about designing online tools that do not unfairly or unlawfully discriminate. Where is the scientific evidence that these tools meet that standard?
I remember when I joined local government in 1990 I went to an assessment day for a frontline Housing Advice Officer role. The initial sift included verbal ability and critical reasoning tests from a leading international supplier; the core of these tests is still in use. There were about 30 people in a room, competing for 6 jobs. After the tests HR came into the room and read out a list. All but 2 of the black candidates in the room got up and left, as did a small number of the white candidates. We went from about 30 to 12 people. I was in the 12: we’d passed, and I got the job.

When I started I spoke to the manager about the norm group used to reference the test. It had been designed and tested on graduates in the US, UK, Western Europe and South Africa. I said it was discriminatory; he didn’t accept this. He went on using the tests. Four years later I’d replaced him as manager. The first thing I did was stop using these discriminatory tests. We made all future recruitment skills-based, properly administered and then discussed. Suddenly we had a much more diverse and talented workforce. This lesson stuck with me.
Using an online selection tool as an initial sift will often build in the conscious or unconscious bias of the person who designs it – whether that is an IT programmer or a psychologist. These tools are often not properly norm-referenced, the samples used for development and testing won’t be representative, and the tests can lack both validity and reliability. I’d like to see the evidence that they don’t.
I think recruiters need to be more honest.
Technology offers the opportunity to cut costs and increase margins. The cost of executive recruitment is the time spent by people crafting a bespoke, tailored service for each client. Cut out the people and you cut the cost. But you also massively increase the risk – of indirect discrimination, of poor appointments or of non-appointment.
We have seen an increasing number of jobs being abandoned because of a lack of candidates – stopped at shortlist or beforehand. This has never happened to us. I think now we’re starting to understand why.
Recruiters need to be honest. We regularly make decisions that have major ramifications for people’s lives. Can we responsibly trust AI with those decisions just to save money? The prospect makes me uncomfortable. Actually, it scares me and makes me angry. We owe our clients and candidates more than this.