1st November 2022

Why blind recruiting?

About the author

EMMA BLUCK

Marketing and Partnership Lead, MeVitae

Human biases

As Homo sapiens evolved, we acquired a range of qualities, such as language, that laid the foundation for forming and maintaining relationships with others. Relationships are still vital in our everyday lives, but this unique ability was once the key determinant that saw us outlive many of our ancestors, enabling us to survive in packs, supporting one another. As we developed and refined our ability to seek out others who could join our “in-group” (or support bubble, if this were 2020), we mastered the art of categorisation, finding those most similar to us as efficiently as possible. Even today, we do not have the time, nor the resources, to analyse every piece of information we receive. So, when either time or cognitive capacity is compromised (for instance, through information overload), we tend to rely on heuristics: mental shortcuts, based on generalisations or rules of thumb, that allow us to make quick judgments and reduce cognitive load. However, these instinctive judgments give rise to a wide range of cognitive biases (over 140, to be exact) of which we are usually completely unaware.

When it comes to the hiring process, most bias appears at the screening stage, as we attempt to classify the candidate. With the average time spent looking at a CV being just 6 seconds, we are prone to several mistakes:

  • Associating unrelated information, such as gender, with job performance
  • Prioritising information that is compatible with our beliefs
  • Focusing on irrelevant salient information

Consequently, candidates are often discriminated against on the basis of such characteristics. For instance, studies have shown that managers and undergraduate students made selection decisions based on applicant age and ethnicity inferred from the resume [1,2]. Moreover, when identical resumes were submitted, male applicants were favoured, with women more likely to be interviewed for admin roles [3,4]. With subconscious biases leading to discrimination in the hiring process, the knock-on effects are stark: reduced organisational diversity, poorer workforce performance and, ultimately, reduced company revenue.

Algorithmic biases

Efficiently analysing and evaluating information requires humans to recruit a wide range of cognitive resources. This process is both time-consuming and costly, so when assessing large amounts of data in a short time frame, such as during initial candidate screening, we rely on heuristics. Algorithms, in contrast, have a huge data-handling capacity. It may, therefore, be easy to assume that an algorithm that can quickly process a much greater number of resumes, free of distortion, will provide a simple solution: high-quality hiring decisions in a fraction of the time. While in theory this is true, algorithms can also produce biases. Algorithmic biases are systematic errors resulting from either human error or unrepresentative input data. Just like human biases, algorithmic biases can affect one group of people more negatively than another, and with as many as 40% of companies using AI to screen candidates, the impact can be just as severe [5].

In recent years, examples of biased algorithmic processes have started to surface, such as the well-known case of Amazon’s recruitment algorithm [6]. In this instance, Amazon built an artificial intelligence recruitment system trained on historical hiring data. Because men dominated the technology sector, the input data favoured male applicants, and the system learned to discriminate against resumes containing female-indicating words such as ‘Women’s’.
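To make that mechanism concrete, the sketch below is a deliberately simplified, hypothetical illustration in Python of how a screening model trained on skewed historical decisions inherits that skew. The data, the token-scoring scheme and every name in it are invented for illustration; it is not a description of Amazon’s or MeVitae’s actual systems.

```python
# Toy illustration only: a "model" that scores CV tokens by their historical
# hire rate. Because the invented history is male-dominated, the token
# "women's" ends up with a low score and drags down otherwise identical CVs.
from collections import defaultdict

# Hypothetical historical decisions: (tokens appearing in the CV, hired?).
history = [
    ({"python", "chess", "captain"}, True),
    ({"java", "rugby", "captain"}, True),
    ({"python", "women's", "chess", "captain"}, False),
    ({"java", "women's", "rugby", "captain"}, False),
    ({"python", "rugby"}, True),
    ({"java", "chess"}, True),
]

hires, seen = defaultdict(int), defaultdict(int)
for tokens, hired in history:
    for token in tokens:
        seen[token] += 1
        hires[token] += hired  # True counts as 1

def token_score(token: str) -> float:
    """Historical hire rate for a token; neutral 0.5 if never seen."""
    return hires[token] / seen[token] if seen[token] else 0.5

def screen(cv_tokens: set) -> float:
    """Naive 'suitability' score: the average of the token scores."""
    return sum(token_score(t) for t in cv_tokens) / len(cv_tokens)

# Two CVs differing only by the token "women's":
print(screen({"python", "chess", "captain"}))             # ~0.61
print(screen({"python", "women's", "chess", "captain"}))  # ~0.46
```

The second CV is penalised not for anything related to performance, but simply because few successful applications in the invented history mentioned ‘women’s’; the same failure mode reported in the Amazon case.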

Current methods

Methods for tackling bias in the workplace are not scarce. Organisations currently use a range of techniques including, but not limited to, inclusive policies, self-awareness training, diversity and inclusion events, and unconscious bias training. While there is some evidence that these methods change attitudes [7], there is very little evidence to suggest that this translates into behavioural change [8]. There are many possible reasons for this, such as resistance to change.

However, a notable point is that the majority of these methods are employed well into an employee’s typical work life cycle. Implemented this late, they are unlikely to change behaviour, because current employees have little incentive to change. Even if they did, behavioural and attitude changes within the workplace would only benefit current employees and would have little impact on who gets hired. Furthermore, it would be reductionist to assume that simple training can undo the damage caused by our subconscious biases: as mentioned, these are hardwired within us and very hard to challenge. This is not to say that current methods should be disregarded, as there is always work to be done in acknowledging our biases. But awareness alone is simply not enough. We therefore need a system, such as an algorithm, that does not share our cognitive limitations but that can be actively regulated to prevent biases of its own, and it should be deployed at the initial screening stage if it is to be as effective as possible.

The power of augmented intelligence

…And here comes our solution: Augmented Intelligence (IA). IA is an autonomous system created primarily for decision making, and it capitalises on the strengths of both human and algorithmic processes. Just like humans, IA has been carefully designed to store memories, enabling it to see patterns, learn and innovate over time. Unlike humans, and much like AI, IA can handle volumes of data that would otherwise overwhelm us, and it can do so free of the factors that cloud human judgment, such as distraction and emotion. Needless to say, humans still have plenty of abilities that IA does not, such as the capacity to imagine, anticipate and judge future situations. For these reasons, IA has not been designed to replace us, but to work alongside us in making informed and unbiased decisions.

Blind recruiting

MeVitae’s blind recruiting solution uses IA to sift through 600 CVs in 6 seconds (compared with our ability to analyse one CV every 6 seconds). The solution redacts from the application any information that may disclose the candidate’s identity, such as name, gender, ethnicity, age, personal interests, sexuality and more. By the time the recruiter or hiring manager receives the application, they are left only with information indicative of performance, ensuring they can judge a candidate’s suitability for a position as impartially as possible. At MeVitae we recognise the power of human judgment; our blind recruiting solution is designed to aid us during the initial screening process, when we are most prone to bias, so that we make smart and suitable hiring decisions further down the line.
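To give a rough sense of what redaction involves (and emphatically not a description of MeVitae’s actual pipeline), here is a minimal sketch using a few hypothetical regular-expression patterns to strip identity markers from free-text CVs. A production system would need far broader coverage, for example named-entity recognition to catch names, which plain patterns miss.

```python
# Minimal, hypothetical redaction sketch: pattern-based removal of a few
# identity markers. Real systems need far more (names, photos, addresses,
# club memberships, and so on), typically via NLP rather than plain regexes.
import re

REDACTION_PATTERNS = {
    "email":   re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone":   re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "dob":     re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "pronoun": re.compile(r"\b(he|she|him|her|his|hers)\b", re.IGNORECASE),
}

def redact(cv_text: str) -> str:
    """Replace each matched identity marker with a labelled placeholder."""
    for label, pattern in REDACTION_PATTERNS.items():
        cv_text = pattern.sub(f"[{label.upper()} REDACTED]", cv_text)
    return cv_text

cv = ("Born 02/03/1994, reachable at jane@example.com or +44 7700 900123. "
      "She captained her university coding society.")
print(redact(cv))
# Born [DOB REDACTED], reachable at [EMAIL REDACTED] or [PHONE REDACTED].
# [PRONOUN REDACTED] captained [PRONOUN REDACTED] university coding society.
```

The design intent is that the recruiter only ever sees the redacted text, so identity signals cannot feed into the screening decision.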

In just a few years, our blind recruiting solution has already transformed many companies. By attracting double the number of applicants and redacting their CVs with 95% accuracy, it has saved organisations both time and money per hire, while driving workplace diversity to create a more cohesive workforce.