As the push to diversify the IT sector picks up speed, so too does awareness of unconscious bias. Can technology-enabled ‘blind’ recruitment do a better job of identifying talent? In these more enlightened times, companies have strategies to beat a kneejerk reaction to an unfamiliar name, the “wrong” gender, or the suspect “gut feeling”.
Something certainly hasn’t been going right - tech remains notoriously male, especially higher up the company rungs. Women make up just 20% of IT specialists and only 12% of leadership roles, according to the latest analysis by BCS, and the numbers have moved very little over the past five years.
“Current methods of countering bias - such as quotas, or training - are rudimentary and traditional,” says Riham Satti, a clinical neuroscientist turned entrepreneur who’s created technology to make it easier for people to recruit staff more fairly, regardless of background, gender or ethnicity.
From confirmation bias (interpreting new information as a vindication of existing beliefs) through to anchoring (relying on the first piece of information encountered when making decisions), cognitive biases have an impact on every choice we make. Recruiters, for instance, typically review CVs in seconds, and they need to take mental shortcuts amid an overwhelming amount of detail. To counter this, there has been a broader move by companies in the last few years to recruit “blind” (ie assessing a candidate’s skills anonymously).
AI in the Recruitment Process
But many companies still require a traditional letter and CV for their hiring processes. That’s why Satti’s company, MeVitae, has created technology that uses computational linguistics to spot and remove a host of tell-tale details on a CV and cover letter, such as school, university, disability, gender, ethnicity, marital status, age and hobbies - all the stuff that says who you are and where you are from.
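MeVitae’s technology is proprietary, but the general idea of automated CV anonymisation can be sketched in a few lines. The patterns and field names below are purely illustrative - a real system would rely on trained language models (for instance, to recognise names, schools and hobbies) rather than simple regular expressions:

```python
import re

# Illustrative patterns for a few identifying details. A production
# system would use trained NLP models; names, schools and hobbies
# cannot be caught reliably with regexes alone.
REDACTION_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s-]{8,}\d"),
    "date_of_birth": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def redact_cv(text: str) -> str:
    """Replace each matched detail with a neutral placeholder."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

cv = "Born 01/02/1990, jane.doe@example.com, +44 7700 900123"
print(redact_cv(cv))
# Born [DATE_OF_BIRTH REDACTED], [EMAIL REDACTED], [PHONE REDACTED]
```

The point of the placeholder approach is that reviewers still see a complete, readable document - only the details that reveal who the candidate is, and where they are from, are blanked out.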
As well as speeding up the process, Satti says, this technology has helped shift the dial - diversity and inclusion has improved by 30% among their mostly UK and US clients, across both the public and commercial sectors.
In March the company plans to launch an algorithm to help companies draw up detailed candidate shortlists. And the team behind the algorithm have been careful to avoid building technology that risks entrenching existing norms and prejudices.
In the recent past, algorithms have displayed bias - Amazon scrapped an AI recruitment tool in 2018 after it proved biased against women, and in 2019 Apple launched a credit card, provided by Goldman Sachs, that appeared to offer women less credit than men. There are several checks that can overcome potential bias within AI during its design, says Satti, and these include, in part, using data to train AI on what should be, rather than what is.
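One common way of training on “what should be” rather than “what is” - not necessarily MeVitae’s method - is to reweight historical training data so that an under-represented group counts as heavily as the majority. A minimal sketch, with an illustrative 3:1 skew:

```python
from collections import Counter

def balanced_weights(groups):
    """Weight each sample inversely to its group's frequency, so every
    group contributes equally to training despite a skewed history."""
    counts = Counter(groups)
    n, k = len(groups), len(counts)
    return [n / (k * counts[g]) for g in groups]

# Historical hiring data skewed 3:1 towards group A (illustrative).
groups = ["A", "A", "A", "B"]
print(balanced_weights(groups))  # [0.666..., 0.666..., 0.666..., 2.0]
```

With these weights, the three majority samples and the single minority sample each sum to the same total influence (2.0), so a model trained on them no longer simply replays the historical imbalance.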
Large tech organisations are taking bias seriously - IBM is developing an algorithmic bias detection tool using publicly available data. Facebook is launching an independent team that audits its algorithms to ensure they don’t discriminate against minority ethnic groups.
Simon Lyle, UK managing director of outplacement firm Randstad RiseSmart, asks whether HR is losing its human touch: research commissioned by Randstad RiseSmart showed 57% of employees believe recruitment is becoming too automated. Good applicants risk being overlooked if they omit critical keywords during automatic scanning, he warns.
At RWA we believe that diversity and inclusion will be fundamental to business growth, and a fairer hiring process is a critical link.