Can we teach robots to be less biased than us? Probably yes. But only if we do this right. Bias is mostly the product of mental shortcuts in our reasoning, and machines can only think clearly if we teach them not to make the same shortcuts.
There is an interesting article about employers’ best attempts at reducing bias in hiring algorithms. Paul Burley, the CEO at Predictive Hire, describes his company’s efforts to identify and eliminate bias in the recruitment and selection of the best job applicants. This work goes beyond stripping applicant names from a conventional recruitment process; it gets into predictive analytics to identify the best candidate.
Burley is particularly keen on identifying interview questions that drive bias (either direct or adverse-effect discrimination), and then eliminating those questions entirely. While they do not use demographic information inside their algorithms, they do use demographic information outside the algorithm, to test after the fact whether any of their questions are causing a bias.
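The article doesn’t spell out how that after-the-fact testing works, but a standard way to do it is the “four-fifths rule” from the EEOC’s Uniform Guidelines: a screening question is suspect when any group’s pass rate falls below 80% of the highest group’s pass rate. Here is a minimal sketch of that audit, assuming hypothetical pass counts; the function names and data are mine, not Predictive Hire’s:

```python
# Hypothetical sketch of an after-the-fact adverse-impact audit.
# Demographics are used only to test outcomes here, never inside
# the screening model itself -- mirroring the approach described above.

def adverse_impact_ratios(passes_by_group):
    """passes_by_group maps group name -> (num_passed, num_assessed).

    Returns each group's pass rate as a ratio of the best group's rate.
    """
    rates = {g: passed / assessed
             for g, (passed, assessed) in passes_by_group.items()}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

def flag_biased_question(passes_by_group, threshold=0.8):
    """Flag the question if any group falls below the four-fifths threshold."""
    ratios = adverse_impact_ratios(passes_by_group)
    return any(r < threshold for r in ratios.values())

# Example: one interview question's pass counts, broken out by group.
question_results = {
    "group_a": (45, 100),  # 45% pass rate
    "group_b": (30, 100),  # 30% pass rate -> 30/45 = 0.67, below 0.8
}
print(flag_biased_question(question_results))  # True: candidate for removal
```

A question that gets flagged this way can simply be dropped, which is exactly the remedy Burley describes: eliminate the biased question rather than try to correct for it downstream.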
Using Workforce Analytics to Identify Invisible Bias
It sounds to me like his company is going about it the right way. With bias, we don’t disproportionately “choose” white males to be the boss. Rather, we assess what traits would normally indicate strong leadership, accidentally carry forward historic stereotypes about strong leaders, and then inadvertently choose white males. Plenty of people, including some women and visible minorities, accidentally advance this momentum. That is because underlying thought patterns are driving things, rather than deliberate and malevolent racism and sexism. You can take one step forward by not being a jerk, but two steps backward on something called cognitive bias. And everyone engages in cognitive bias, not just the man.
Over at Better Humans, they have created a Cognitive Bias Cheat Sheet. Personally, I have been trying to stay on top of cognitive bias since it was revealed to be a major driver of the 2008 sub-prime mortgage fiasco and the subsequent Great Recession. The sheer number of cognitive biases is overwhelming, and that itself illustrates the real problem. The world gives us too much information to process, so we take shortcuts in our thinking to make sometimes-accurate judgments. In the language of behavioral economics, prejudice is largely the advancing of skewed thinking based on cognitive bias shortcuts.
Information Overload – Are Machines Better Equipped Than Humans?
The big deal with big data is that machines are supposed to help us overcome the over-abundance of information. Sure, we can find patterns and dig up nuggets that are buried in a mountain of data. But if we are also making judgment calls using cognitive shortcuts because the human brain can’t handle the volume, the machine gives us the opportunity to make judgments using all of the information. We can create algorithms that are larger and more complex, bypassing the constraints of cognitive bias, and produce recommendations that are far less biased than those produced by humans.
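To make that contrast concrete, here is an illustrative toy sketch (the criteria, weights, and data are hypothetical, not from any real hiring system): a cognitive shortcut anchors on a single salient signal, while a machine can score every job-relevant criterion for every candidate without fatigue.

```python
# Illustrative contrast between a cognitive shortcut and an
# all-the-information score. Criteria and weights are invented.

def shortcut_judgment(candidate):
    # The human shortcut: anchor on one salient trait --
    # "does this person resemble past leaders?"
    return candidate["resembles_past_leaders"]

def machine_score(candidate, weights):
    # Weighted sum over all job-relevant criteria, none skipped.
    return sum(weights[k] * candidate[k] for k in weights)

weights = {"experience": 0.4, "skills_test": 0.4, "references": 0.2}
candidate = {
    "experience": 0.9,
    "skills_test": 0.8,
    "references": 0.7,
    "resembles_past_leaders": 0.2,  # stereotype signal, excluded from weights
}
print(shortcut_judgment(candidate))               # 0.2 -- rejected on a stereotype
print(round(machine_score(candidate, weights), 2))  # 0.82 -- strong on the merits
```

Note that the stereotype signal never appears in the weights at all, which is the point: the machine isn’t smarter, it just isn’t forced to compress the evaluation down to one shortcut.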
We don’t entirely have the option of just turning the machine off. Going off-grid just sends us back to biased decisions made by humans on gut instinct. Think of who you know, and consider that not all Luddites are champions of equality. Right now, we are just getting past the first wave of machines imitating our own sexism and racism. We now have the option of telling the machines to stop doing that, and then building new algorithms that meet our own purported standards of neutrality.
But this will happen if and only if we choose to name our biases, talk openly about them, measure them, make decisions to reverse them, and keep improving the algorithms such that everyone has a fair shot at the good jobs. And even then, we still can’t trust robots to decide where to seat people on the bus. We must forever be vigilant, and stay human.