Pain or gain?

31 Jan 2019

Over the past few years artificial intelligence (AI) has started to make its way into the world of recruitment. While excitement about the potential benefits has steadily grown, we’re now also starting to see some of the downsides.

Things started to get exciting for AI in 2015, when Google DeepMind's AlphaGo became the first computer program to beat a professional human player at the ancient board game Go. This was in many ways a far more remarkable moment than the famous 1997 chess match between IBM's Deep Blue and Garry Kasparov. Deep Blue beat Kasparov largely through brute force, rapidly evaluating vast numbers of possible positions after each move, but that approach would not work for Go: there are estimated to be more possible board positions in Go than there are atoms in the observable universe. This meant that an AI able to learn and think strategically was the only realistic way to win.

Right now consumers are embracing voice assistants, and Amazon recently revealed that it has sold over 100 million Alexa-enabled devices since launch. AI is about far more than smart speakers, but for most people these are perhaps its most recognisable form. With millions of users, and the revenue they bring, this may well lead to a further acceleration in AI research investment.

But beyond fulfilling requests to play 'Baby Shark' at a moment's notice, AI has far more serious applications in fields such as healthcare. For instance, a prototype has been developed that uses AI to analyse chest x-rays for signs of pneumonia; it was quickly able to outperform radiologists on both accuracy and speed. The implications could be huge, saving both money and lives in the years to come.

So why are we interested in applying AI to the world of recruitment?

1. Personalising the candidate experience
Much is made of the importance of making sure candidates feel they are treated as individuals from the beginning to the end of the recruitment process. Yet whether they are browsing content on the careers website or receiving email alerts from the applicant tracking system (ATS), the experience to date is relatively generic, and only becomes personalised once a human recruiter gets involved. AI has the potential to bridge this gap by learning about the needs of candidates applying for different roles and tailoring the messages they see. This could have the dual benefit of making candidates feel better understood and bringing employers applications from people who are a closer fit.

2. Discovering new talent
The flipside of AI that helps candidates identify where they fit is that it could also help recruiters identify more individuals worth approaching. Sourcing talent currently relies on knowing what to search for; AI could look at a wider range of data points to find people worth talking to, based on things like their interests or who they're connected to. This would help uncover the 'hidden gems' who might have an outdated CV or the wrong job title but are in fact perfect for the role.


3. Speeding up and automating processes
There are plenty of routine, highly repetitive recruitment activities where automation is beneficial. Take interview scheduling, for instance: finding a time slot that is equally convenient for two or three people can involve a lot of back and forth. AI can take on the task of matching up diaries and contacting the individuals concerned to find suitable times. Answering candidate questions is similarly repetitive, particularly when dealing with several thousand applicants. Over time, AI can learn how to interpret and respond to virtually all common enquiries, whereas an FAQ page on the website would become hard to use once a hundred or so questions had been added.
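As a purely illustrative sketch of the idea, the snippet below matches a free-text candidate question to the closest canned answer using simple bag-of-words similarity. The questions, answers and matching approach are all hypothetical; a real system would rely on a far more capable language model.

```python
# Hypothetical sketch: matching candidate questions to canned FAQ answers.
import math
import re
from collections import Counter

# Invented example FAQ entries (question text -> canned answer).
FAQ = {
    "when will I hear back after applying": "We aim to respond within two weeks.",
    "can I apply for more than one role": "Yes, you can apply for multiple roles.",
    "what does the interview process involve": "A phone screen followed by two interviews.",
}

def vectorise(text):
    """Turn text into a bag-of-words vector (word -> count)."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def answer(question):
    """Return the canned answer whose stored question is most similar."""
    q = vectorise(question)
    best = max(FAQ, key=lambda known: cosine(q, vectorise(known)))
    return FAQ[best]

print(answer("How long until I hear back about my application?"))
# -> "We aim to respond within two weeks."
```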


4. Making better decisions about candidates
Organisations are seemingly always looking for better ways to select candidates. This is driven partly by a desire to make recruitment easier and faster, but also by a wish to avoid the shortcomings of human judgement and remove bias. On the face of it, AI has the potential to learn from vast amounts of data and find signals of ability that humans haven't been able to spot. The result would be greater certainty that a new hire is likely to perform well in the business. This benefits not just the hiring organisation, which would have greater confidence in its choice, but also the candidate, who can take some comfort in knowing they are joining a role that's right for them.

 
However, there have already been missteps where AI has led to unwanted outcomes. One case involved Amazon, which had been developing an AI-based system to sift through candidate CVs. In October 2018 it was widely reported that the project was scrapped because of bias against female candidates. The bias arose because the AI was trained on a decade of past applicant data; most of those applications came from men, so the system learned to favour male applicants. This reveals one of the big risks with AI in general: if the data it learns from already contains bias, the AI system will share that bias. Technology companies have struggled with this for some time; Google's image-recognition AI, for example, suffered embarrassing failures by frequently misidentifying photos of people with darker skin tones. A likely cause is that AI systems are designed, developed and often tested by technology teams that lack diversity themselves.
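To make the mechanism concrete, here is a deliberately simplified, entirely invented sketch of how skewed historical data can leak into a CV-scoring model. The records and the scoring rule are hypothetical stand-ins for a real training pipeline, not a description of Amazon's actual system.

```python
# Minimal sketch of how bias in historical data leaks into a CV-scoring model.
# The "historical" records below are invented for illustration only.
from collections import defaultdict

# Each record: (words appearing in the CV, whether the applicant was hired).
# The history is skewed: past hires mostly come from CVs without the word "women's".
history = [
    ({"python", "chess", "club"}, 1),
    ({"java", "rugby", "captain"}, 1),
    ({"python", "women's", "chess", "club"}, 0),
    ({"java", "women's", "coding", "society"}, 0),
    ({"python", "coding", "society"}, 1),
]

# Naive "training": each word's weight is the average hiring outcome of the
# CVs that contained it. This stands in for a real learned model.
totals, counts = defaultdict(float), defaultdict(int)
for words, hired in history:
    for w in words:
        totals[w] += hired
        counts[w] += 1
weights = {w: totals[w] / counts[w] for w in totals}

def score(cv_words):
    """Average the learned word weights over the words in a new CV."""
    return sum(weights.get(w, 0.5) for w in cv_words) / len(cv_words)

# Two otherwise identical CVs: the one mentioning a women's chess club scores
# lower, purely because of the skew in the historical data.
print(score({"python", "chess", "club"}))             # ~0.56
print(score({"python", "women's", "chess", "club"}))  # ~0.42
```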

Overreliance on AI brings its own risks. For example, Tesla's Autopilot is thought to be one of the most sophisticated semi-autonomous driving systems available to consumers right now. It is capable of steering and braking safely by itself in many situations, and is thought to have prevented fatal collisions in several cases. In a few instances, however, it has failed to prevent fatal crashes. In those situations the driver had ignored various warnings and taken their hands off the steering wheel; they had come to trust the autonomous system so much that they were unprepared for the moments when it failed. When we trust a system too much we get lazy and tend to underestimate the magnitude of certain risks. Overreliance on automated or AI-controlled recruitment systems could likewise leave candidates feeling frustrated if the system fails and no human is ready to step in.

There are ethical questions as well. Some of the AI technology companies are starting to implement brings to mind the Voight-Kampff test from the dystopian sci-fi film Blade Runner. Somewhat ironically, that fictional test was designed to detect whether someone was human or a replicant (android) by measuring imperceptible and involuntary signals such as iris dilation. Platforms such as HireVue that use AI to monitor a candidate's voice and 'microexpressions' raise questions about whether it is fair to judge people on physical traits beyond their control. Such systems are arguably geared too much toward making the recruiter's life easier at the expense of the candidate experience. Does it feel OK to know you were rejected not purely because of the content of what you said, but because your facial expressions and speech patterns weren't quite what the algorithm was looking for?

One emerging characteristic that seems to separate good AI from risky AI is how much human empathy matters in the existing process. Empathy seems to be of little importance if I am asking a chatbot about the salary and benefits for a role at three in the morning, but it matters a great deal when I am waiting to hear whether I'll be given the job. When deciding whether to replace or augment something with AI, it may therefore be worth asking: 'how important is human empathy for this right now?' And try to put yourself in the shoes of the candidate who will be going through the process. Despite the shortcomings, it is important that these boundaries are being tested and explored. Until these various applications of AI are used in the real world, we won't fully understand the risks and potential benefits.

Tristan Moakes - Head of Solutions

 

Join us on Wednesday 6 February to hear from industry experts on where AI is succeeding and where it might be falling short in the world of recruitment.

Confirm attendance