FINDING TALENT, BLIND: Identifying Unconscious Bias

Bias in research exists no matter what you do to eliminate it. It goes by different names, and people make different attempts to sidestep it, but it became of interest to me when I learned that a resume from “John” gets more views than the same resume from “Juan.” The same name, really, but one is typically Hispanic while the other is about as neutral as an English-speaking name can be. Having grown up in the Bronx amid a mixed but heavily Latin community, I saw that this well-documented human failing has no merit as a talent assessment, yet it damages the candidates it unfairly excludes.

As a technologist, I’m drawn to the more human side of AI, so I dove into it, and into the science, or perhaps the psychology, of acceptance. Several facts struck me:

Research suggests that judgments of likability based on a person’s appearance form in less than a second: within one second of an introduction, we may already have decided whether we like each other. Albert Mehrabian, UCLA Professor Emeritus of Psychology, dove into this research and is known for his publications on verbal and non-verbal communication. The result was the “7%-38%-55% rule,” describing the relative impact of words, tone of voice, and body language, in that order, as described in a Forbes article on the topic.

Blind orchestra auditions were an attempt to tackle a similar problem: in the 1970s, most orchestras were only about 5% female. Close the rehearsal curtain to blind the judges, go so far as to have candidates remove their shoes (so a woman’s heels can’t be distinguished from a man’s laced Oxfords), and the share of women in orchestras rose to roughly 30% by 2000. It still wasn’t 50/50, but it did protect talent, at least at that stage. The various tiers before that stage, though, can still derail any attempt to eliminate bias.

Controllable as that situation was, the employment application process really cannot be blind. Recruiters and human resources executives rely on an Applicant Tracking System (ATS) as the first pass for candidates who submit through online portals, picking out words that match the job title and description. Either a resume fine-tuned with exact ATS keyword matches rises to the top of the prospect pile, or networking does the work, and networking carries its own set of biases.
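To make that mechanic concrete, here is a minimal sketch of the kind of first-pass keyword screen an ATS might run; the scoring rule, keyword list, and sample resumes are hypothetical illustrations, not the logic of any particular vendor’s product.

```python
import re

def keyword_score(resume_text: str, job_keywords: list[str]) -> float:
    """Hypothetical first-pass screen: the fraction of the job's
    keywords that appear verbatim in the resume text."""
    words = set(re.findall(r"[a-z]+(?:[-'][a-z]+)*", resume_text.lower()))
    hits = [kw for kw in job_keywords if kw.lower() in words]
    return len(hits) / len(job_keywords)

# Two resumes describing similar work score very differently depending
# only on whether they echo the job description's exact words.
keywords = ["python", "sql", "forecasting"]
print(keyword_score("Built Python and SQL pipelines for forecasting.", keywords))  # 1.0
print(keyword_score("Built predictive models and data pipelines.", keywords))      # 0.0
```

The point of the toy example is the failure mode: an exact-match screen rewards keyword tuning rather than talent.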

You could spin up a whirling dervish of safeguards against bias, so let’s simplify the conversation by narrowing to the forms of bias present in another layer of many interview processes, behavioral testing, where bias can take shape as:

    1. Sampling bias refers to the makeup of the subject group: clustering too many of one element essentially tilts the results one way or the other, and a lack of differentiation within the sample can make the test fail.
    2. Non-response bias is exactly what it implies: an evenly distributed sample doesn’t guarantee that participants will respond, and those who don’t respond may differ from those who do.
    3. Response bias is most commonly linked to Likert tests, which rank responses from Strongly Agree to Strongly Disagree and are known for this particular flaw. In a job application, the desire to be likable will typically push the respondent straight down the middle of the scale, acquiescing to what he or she thinks is needed to be considered for the job; the respondent supplies the “right” answer even when it isn’t true to his or her character or opinions (a simple statistical check for this pattern is sketched after this list).
    4. Question order bias is also fairly straightforward: the respondent assumes that the order of the questions signals their importance and answers in a way that matches those presumed priorities.
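As one illustration of how that straight-down-the-middle pattern could be flagged, the sketch below checks a set of Likert responses for answers that cluster tightly around the center of the scale. The 1-to-5 coding, the thresholds, and the function itself are assumptions for the example, not a validated psychometric screen.

```python
from statistics import mean, pstdev

def flags_central_tendency(responses: list[int], scale_mid: float = 3.0,
                           max_spread: float = 0.75) -> bool:
    """Hypothetical check: flag a respondent whose Likert answers
    (coded 1 = Strongly Disagree ... 5 = Strongly Agree) cluster
    tightly around the middle of the scale."""
    near_middle = abs(mean(responses) - scale_mid) <= 0.5
    low_spread = pstdev(responses) <= max_spread
    return near_middle and low_spread

print(flags_central_tendency([3, 3, 4, 3, 3, 2, 3]))  # True: straight down the middle
print(flags_central_tendency([5, 1, 4, 2, 5, 1, 4]))  # False: differentiated answers
```

A flag like this only raises a question about a response pattern; it cannot say why the respondent answered that way.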

The gating issue really is talent: finding it without bias of any sort, in a way that is reliable, telling, and authentic.

Considering the statistical weight of visual cues in how one person evaluates and judges another, and the inherent flaws of the tests meant to correct for it, how does someone judge another person?

Malcolm Gladwell wrote an entire book, Talking to Strangers, assessing the flawed nature of judgment, examining everything from how Adolf Hitler came to power to the ways spies embedded themselves in a cushion of trust that colleagues found unbelievable once the treasonous truth was discovered. Gladwell also addressed racism and police brutality, diving into a matrix of operational and procedural failures rooted in bias.

So how can someone be judged accurately? Language. While a person’s words count least in snap judgments of likability, language offers more reliable metrics for evaluating talent in hiring. The way to make AI more human is to do what research psychologists have long done with language: learn about people through the words they use, illuminating both character and talent without the flawed human habits that fail to address the qualities a job role actually requires.
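As a rough illustration of that idea, and not a description of any specific product’s method, the sketch below counts a few invented word categories in a writing sample, in the spirit of the category-counting approaches psychologists use to study language; the category names and word lists are hypothetical.

```python
import re
from collections import Counter

# Hypothetical word categories; real research instruments use far larger,
# validated dictionaries and normed scoring.
CATEGORIES = {
    "first_person": {"i", "me", "my", "mine"},
    "collaborative": {"we", "us", "our", "team", "together"},
    "certainty": {"always", "never", "definitely", "certainly"},
}

def category_profile(text: str) -> dict[str, float]:
    """Share of words falling in each category, as a simple linguistic fingerprint."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    total = max(len(words), 1)
    return {name: sum(counts[w] for w in vocab) / total
            for name, vocab in CATEGORIES.items()}

sample = "We shipped the project together; I definitely learned from our team."
print(category_profile(sample))
```

The principle is the same whatever the dictionary: the words themselves become measurable signals, independent of how a candidate looks or sounds.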

Arthur Tisi is Chairman and Chief Executive Officer of Hunova, an enterprise insights and solutions tool built on people analytics, including relationships, skills, psychometrics, and work-style preferences, offering unbiased, validated data on human capital. Hunova’s products provide far-reaching organizational benefits across every segment: teams, management, and individuals.

For more information on the services Hunova provides, visit hunova.com.