Artificial Intelligence: Recruitment Solution or Discrimination Enabler?

Artificial Intelligence (AI) is a regular recruitment news item, especially of late with the rise of ChatGPT and similar AI systems. From diversity and inclusion to automation, cost effectiveness, and candidate experience, the discussions are many and constantly evolving. 

AI can be easily misunderstood. Many people outside of, or new to, the recruitment world might assume that most decisions are made by computers with little human input, but this isn't quite the case. That assumption is especially striking given the general lack of awareness around AI and the limited trust consumers place in it: only 27% of consumers think that AI can deliver equal or better customer service than humans, and 43% think AI will harm customer satisfaction and cause more complaints.

Trust is a central topic in the AI discussion. At work, only one in two people say they would trust AI, but with the ever-increasing presence of Artificial Intelligence (ChatGPT reached 100 million monthly active users in January 2023), it's safe to say that, at least in some capacity, it is here to stay.

AI is a controversial technology undergoing rapid development, and it is neither perfect nor unbreakable. You have probably heard that AI can help eliminate bias, but improper use can increase, rather than reduce, adverse impact. 

What AI is and isn’t 

Artificial Intelligence and automation are often confused, and although they are related, there are key factors separating them. 

AI mimics human cognitive function and goes beyond the simple implementation of automation rules. It uses reasoning and processing to learn, understand natural language, decipher complex inputs, and solve problems in a way similar to the human brain. 
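To make the distinction concrete, here is a minimal sketch contrasting the two (the field names, rule, and toy "model" are invented for illustration): an automation rule applies a fixed, human-written condition, while an AI-style approach derives its decision criterion from example data.

```python
# Plain automation: a fixed, human-written rule.
def automation_rule(candidate: dict) -> bool:
    # Hypothetical rule: shortlist anyone with at least 3 years' experience.
    return candidate["years_experience"] >= 3

# AI-style approach: the criterion is *learned* from past examples rather
# than hard-coded. This deliberately tiny "model" learns a threshold from
# labelled historical data.
def learn_threshold(history: list[tuple[float, bool]]) -> float:
    shortlisted = [years for years, was_shortlisted in history if was_shortlisted]
    return sum(shortlisted) / len(shortlisted)

history = [(1.0, False), (2.0, False), (4.0, True), (6.0, True)]
threshold = learn_threshold(history)             # 5.0, derived from the data
print(automation_rule({"years_experience": 4}))  # True: the rule is fixed
print(4 >= threshold)                            # False: the learned cut-off differs
```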

The Hyperautomation Spectrum 

The use of AI and automation occurs across a spectrum, and in the context of recruitment we're going to use the term 'hyperautomation', defined by Gartner as "a business-driven, disciplined approach that organizations use to rapidly identify, vet and automate as many business and IT processes as possible", involving the orchestrated use of multiple technologies or tools. 

This spectrum begins at complete manual control and ends with complete automatic processing by AI, without human input. It's important to note, though, that very few technologies are driven fully by AI, which is largely still in development. 

An example at the gentle end of the automation spectrum is Recruitment Process Automation (RPA), which gives recruiters a chance to enhance their hiring processes by saving time and optimising the candidate experience. Automation in this area can help with sourcing, assessment, and process management, with varying degrees of control and human input.  

Despite the apprehension over automation and AI, RPA is not something to be dismissed without consideration. Some aspects of automation are familiar, widely used, and uncontroversial. 

For example, an AI system searching applications for keywords has been in use for a while. The controversy actually begins at the communication and decision-making stages. 
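To picture that familiar end of the spectrum, here is a minimal sketch of keyword-based application screening (the keywords and scoring are invented for illustration; real applicant tracking systems are considerably more sophisticated):

```python
import re

# Hypothetical keywords a recruiter might configure for a data analyst role.
KEYWORDS = {"python", "sql", "dashboards", "stakeholder"}

def keyword_score(application_text: str) -> int:
    """Count how many configured keywords appear in the application."""
    words = set(re.findall(r"[a-z]+", application_text.lower()))
    return len(KEYWORDS & words)

application = "I build SQL dashboards in Python for stakeholder reporting."
print(keyword_score(application))  # 4: all four keywords matched
```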

Why does the controversy begin at the communication and decision-making stages? Consider the importance of decision-making scenarios and the current reality of AI: the stakes associated with fully automatic processing at these stages are high. 

A recent example of controversy over decision-making by AI is the UK's 2020 exam results fiasco, in which 40% of 700,000 teacher assessments were downgraded by an algorithm based on student rankings and each school's historical performance. Soon after the government released the grades, reports of detrimental impact poured in, with pupils from the lowest socioeconomic backgrounds experiencing the most significant drop in expected grades. 

Examples like these make it easy to see why using AI to automatically progress or reject job candidates is a concern for many. 

The Reality of AI

Once a year, Gartner updates its AI Hype Cycle. The Hype Cycle gives a view of how an AI technology and its applications will evolve over time. 

Hype Cycles chart five key phases of a technology's life cycle: the Innovation Trigger, the Peak of Inflated Expectations, the Trough of Disillusionment, the Slope of Enlightenment, and the Plateau of Productivity, which you can find out more about on Gartner's website. 

The stages of a Hype Cycle, and the constantly changing nature of Hype Cycles themselves, highlight just how complicated AI can be.  

There is still a significant amount of time before full deployment and utilisation of AI without any safety net becomes a regular occurrence, but some technologies, such as Generative AI, which creates content based on pre-existing information, are thought to be only 2-5 years away from the Plateau of Productivity. Given the significant investment at hand, AI is set to progress rapidly over the next few years. 

This growth seems especially true of HR, where 40% of HR functions in international companies are already using AI systems. 

Data 

One of the key realities of AI to consider is that it starts with humans and our data. 

When working with AI, we feed our data into the platform and train it with algorithms so it can carry out tasks by itself. At the beginning, many improvements are likely to be needed before it is capable of self-sufficiency, but it is this initial reliance on humans that is critical. Data sets provided by humans can suffer from a significant restriction in range: if your input data is based on your own organisation, you only have data on individuals who are employed with you or have been part of your recruitment process. 
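A minimal sketch of that restriction in range, using invented records: if a model of "what success looks like" can only be trained on past hires, it never sees the candidates you turned away, let alone those who never applied.

```python
# Hypothetical historical records: only people who entered your process.
records = [
    {"name": "A", "hired": True,  "performance": 0.9},
    {"name": "B", "hired": True,  "performance": 0.7},
    {"name": "C", "hired": False, "performance": None},  # outcome never measured
]

# Only hires carry outcome data, so only they can inform the model.
training_pool = [r for r in records if r["hired"]]
print(len(training_pool), "of", len(records), "records are usable")  # 2 of 3

# Rejected candidates have no performance data, and people who never
# applied are absent from `records` entirely: the range is restricted by design.
```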

When you combine this restriction in range with the lack of diversity in tech workforces, who often lead the charge in AI development, the picture is worrying. According to Tech Nation, only 19% of tech workers are women, and with the Centre for Data Ethics and Innovation reporting that Black people are underrepresented and disability inclusion is overlooked, it isn't hard to see how the groups inputting the data, and the data itself, lead to implementation problems. 

Categorisation 

As data initially relies on human input and judgement, categorisation can present challenges. Subjective readings, such as human emotions and facial expressions, have to be categorised in the first place: someone must tell the computer that a particular image equals a particular conclusion. This is where racism, misogyny, ableism, and other prejudices can infiltrate your system, as conscious or unconscious bias slips past the net and is subsequently codified into your AI algorithm. 
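A minimal, hypothetical sketch of how that codification happens: the labels below are one annotator's subjective readings, yet a training pipeline will treat them as ground truth.

```python
# Hypothetical annotator labels for facial-expression frames. Each label is
# a subjective human judgement, not an objective fact.
annotations = [
    ("frame_001.png", "confident"),
    ("frame_002.png", "disengaged"),  # the annotator's reading, now "truth"
    ("frame_003.png", "confident"),
]

# Whatever bias shaped these judgements becomes the model's target output.
labels = sorted({label for _, label in annotations})
print("The model will learn to reproduce these judgements:", labels)
```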

Reliability 

Can we rely on the subjective data inputted by humans? The indications say not really… 

Organisations and individuals have a set picture of what success looks like because they look at successful past performance, but of course that performance has only been demonstrated by individuals with the characteristics of your incumbent range. There could be many other ways in which people could successfully achieve outcomes or demonstrate capabilities that haven't been captured in that data set.  

Additionally, research in this area has revealed some concerns.  

Research conducted by Bayerischer Rundfunk, using an Artificial Intelligence-driven video interview platform, employed actors to answer interview questions with the same response, delivered in exactly the same way, multiple times, while changing one visual variable each time. With each iteration of the actor's response, changing only one thing about their appearance, the AI's evaluation changed.
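In the spirit of that experiment, one way to probe a platform for this instability is a simple consistency audit: hold the answer constant, vary one visual attribute at a time, and check whether the scores move. This is a sketch only; `score_candidate` is a hypothetical stand-in for a vendor's scoring API.

```python
from statistics import pstdev

def score_candidate(transcript: str, appearance: dict) -> float:
    """Hypothetical stand-in for a vendor's video-scoring API."""
    raise NotImplementedError("replace with the real scoring call")

def consistency_audit(transcript, base_appearance, variable, values, tolerance=0.05):
    """Score the same answer while varying one visual attribute.

    Returns (stable, scores). If the scores spread more than `tolerance`,
    appearance -- not content -- is influencing the evaluation.
    """
    scores = []
    for value in values:
        appearance = {**base_appearance, variable: value}
        scores.append(score_candidate(transcript, appearance))
    return pstdev(scores) <= tolerance, scores

# Example usage: same transcript, same delivery, only the background changes.
# stable, scores = consistency_audit(
#     "I led a team of five...", {"glasses": False, "background": "plain"},
#     "background", ["plain", "bookshelf", "artwork"])
```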

Environmental Impact 

When it is reliable and effective at evaluating inputs, AI seems an efficient solution for optimising your process. However, though computer systems are developing rapidly, behind the scenes they still require more energy than their human counterparts. 

Kate Crawford and Vladan Joler, researchers and creators of the ‘Anatomy of AI’ map, looked at the Amazon Echo, Alexa, and the complicated model which allows the AI to carry out tasks. Crawford and Joler state that “each small moment of convenience – be it answering a question, turning on a light, or playing a song – requires a vast planetary network, fuelled by the extraction of non-renewable materials, labour, and data.” 

“The scale of resources required is many magnitudes greater than the energy and labour it would take a human to operate a household appliance or flick a switch. A full accounting for these costs is almost impossible, but it is increasingly important that we grasp the scale and scope if we are to understand and govern the technical infrastructures that thread through our lives.” 

AI in Assessment 

AI in assessment is where the biggest decisions are still to be made, because the stakes are so high. 

False negatives matter. If a candidate who would be right for the role, and who has the skills and abilities required, submits an application but the AI filters them out without a fair opportunity, the implications of bias and discrimination could be huge. 

If all of the data driving the AI were perfect, and if the technology itself were perfect, then theoretically it should be better than human decision-making; it should be able to remove bias. However, when you have imperfect data inputs in your technology, AI can amplify some of those biases instead of removing them. 
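One widely used check for exactly this kind of adverse impact is the "four-fifths rule": if any group's selection rate falls below 80% of the most-selected group's rate, the process warrants scrutiny. A minimal sketch, with invented figures:

```python
def four_fifths_check(outcomes: dict[str, tuple[int, int]]) -> dict[str, bool]:
    """Flag groups whose selection rate is below 80% of the highest rate.

    `outcomes` maps group -> (selected, applied).
    """
    rates = {group: sel / app for group, (sel, app) in outcomes.items()}
    best = max(rates.values())
    return {group: rate / best < 0.8 for group, rate in rates.items()}

# Invented figures for illustration: 50% vs 30% selection rates.
outcomes = {"group_a": (50, 100), "group_b": (30, 100)}
print(four_fifths_check(outcomes))  # {'group_a': False, 'group_b': True}
```

Whether the decisions come from a recruiter or an algorithm, a check like this can be run on the outcomes either way; what changes with AI is the speed and scale at which a skewed process can operate.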

Conclusion 

At the moment there are limitations in the data, limitations in the tech, and limitations in effectiveness. When combined with the inherent inequalities that already exist in the workforce, it's easy to see why many are worried that AI today will be biased. However, there is a big flip side to this: while it is true that AI will be biased, humans are biased too; humans are not perfect. The best recruitment processes that exist to date are no more than 80% effective at prediction. 

So, during Amberjack's Webinar, 'How AI can Amplify, Rather than Remove, Bias', we posed a question to our audience: do you agree with the following statement? 

“So long as it is the best technology on the market and is deployed as safely as possible, I am comfortable using AI to drive assessment decision making in Student Recruitment.” 

The poll garnered some interesting results. At the beginning of the session a high number of respondents said 'I don't know', a few said 'No', and the fewest said 'Yes'. The few who voted 'Yes' at the start of the Webinar seem to have stuck to their position, but by the end the majority of the 'I don't know' camp had tipped over to 'No'. It seems that the potential negative impacts are enough to turn off a skeptical mind. 

We ended our Webinar with some audience questions, which you can watch via the recorded Webinar session or read by requesting your free copy of our Insight Paper: 'Considerations for AI in Recruitment'. 
