If robots just did what we said, would they exhibit racist behavior? Yes. Yes they would.
This is an insightful article in the Guardian on the issue of artificial intelligence picking up and amplifying society’s pre-existing racism. It comes on the heels of a report claiming that Compas, a risk-assessment computer program, was biased against black prisoners. Another crime-forecasting program, PredPol, was shown to have created a racist feedback loop: over-policing of black neighbourhoods in Oakland generated statistics that over-predicted crime in those same neighbourhoods, which recommended still more policing, and so on.
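To make that feedback loop concrete, here is a toy simulation with numbers I made up; it is not PredPol’s actual model, just the dynamic the researchers described. Two neighbourhoods commit exactly the same amount of crime, but one starts out over-policed:

```python
# Toy model of a predictive-policing feedback loop (made-up numbers,
# not PredPol's actual algorithm).
# Two neighbourhoods with the SAME underlying crime rate; neighbourhood A
# starts out over-policed, so more of its crime gets recorded.

true_rate = [10.0, 10.0]      # identical real crime rates
recorded  = [12.0, 8.0]       # A's history reflects earlier over-policing

for year in range(1, 6):
    # The forecaster allocates patrols in proportion to recorded crime...
    total = sum(recorded)
    patrols = [2.0 * r / total for r in recorded]
    # ...and recorded crime tracks how heavily each area is patrolled.
    recorded = [t * p for t, p in zip(true_rate, patrols)]
    print(f"year {year}: patrols={patrols}  recorded={recorded}")

# The initial disparity never washes out: A keeps "earning" extra patrols
# even though both neighbourhoods commit exactly the same amount of crime.
```

The point of the sketch is that the algorithm never needs to be told anything about race; it only needs a biased starting history, and the loop keeps that bias alive on its own.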
“‘If you’re not careful, you risk automating the exact same biases these programs are supposed to eliminate,’ says Kristian Lum, the lead statistician at the San Francisco-based non-profit Human Rights Data Analysis Group (HRDAG).”
It’s not just specialized forecasting software getting stung by this; Google and LinkedIn have had problems of the same kind. Microsoft had it worst with a chatbot called Tay, which “learned” how to act like everyone else on Twitter and turned into a neo-Nazi in a single day. How efficient!
These things are happening so often that they can no longer be regarded as individual mistakes. Racist robots, I would argue, must be categorized as a trend.
Workforce Analytics and Automated Racism or Anti-Racism
This racist-robot trend matters for workforce analytics because those trying to predict behavior in the workplace occasionally swap notes with analysts trying to improve law enforcement. As we begin to automate elements of employee recruitment, there is a real opportunity to use technology-based tools to reduce racism and sexism. But we are now running into the concern that artificial intelligence is just as likely to pick up society’s pre-existing racism.
The issue is that forecasts are built on pre-existing data. If a statistical trend in hiring or policing is piggybacking on some ground-level prejudice, the formulas inside the statistical model can simply pass that underlying sexism or racism along. It’s like children repeating back what they hear from their parents; the robots are listening, so watch your mouth! Even among adults communicating by word of mouth, our individual opinions are substantially a pass-through of what we picked up from the rest of society. In that context, it seems naïve to expect robots to be better than us.
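As a minimal sketch of that pass-through (entirely hypothetical data and a deliberately naive model, not any vendor’s product), imagine “training” a hiring tool on past decisions that were prejudiced against candidates from one zip code:

```python
# Minimal sketch of bias pass-through in a hiring model (hypothetical data).
# Historical decisions were prejudiced: qualified candidates from zip "A"
# were often rejected anyway. A naive model that just learns
# "what did we do before?" reproduces that pattern exactly.
import random
random.seed(0)

def past_decision(qualified, zip_code):
    # The historical (prejudiced) process being recorded as training data.
    if not qualified:
        return 0
    return 0 if (zip_code == "A" and random.random() < 0.5) else 1

history = [(random.random() < 0.6, random.choice("AB")) for _ in range(10_000)]
labels  = [past_decision(q, z) for q, z in history]

# "Train" the simplest possible model: hire rate by zip code among the qualified.
def hire_rate(zip_code):
    rows = [y for (q, z), y in zip(history, labels) if q and z == zip_code]
    return sum(rows) / len(rows)

print("model's score, qualified candidate from zip A:", round(hire_rate("A"), 2))
print("model's score, qualified candidate from zip B:", round(hire_rate("B"), 2))
# Roughly 0.5 vs 1.0: the old prejudice passes straight through.
```

The model never sees the word “race” and never needs to; zip code here stands in for any proxy attribute that correlates with a protected characteristic, and the model simply reproduces whatever pattern the historical decisions contain.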
So we must choose: use technology to reduce racism, or technology will absent-mindedly embolden it. Pick one.
A major complication in this controversy is that the people who create forecasting algorithms regard their software and models as proprietary. Northpointe, the owner of the Compas software, has refused to explain its inner workings. That confidentiality may make business sense and may be legally valid in terms of intellectual property rights. But if their software is non-compliant on human rights grounds, they could lose customers, lose a discrimination lawsuit, or even get legislated out of business.
We are in an era in which many people presume they are entitled to know what is really happening when controversial decisions are made. When it comes to race and policing, expectations of accountability and transparency can become politically compelling very quickly. The use of software to recruit or promote employees, particularly in the public sector, could fall under the same scrutiny just as easily.
I hope that police, human resources professionals, and social justice activists take a greater interest in this topic, provided they can stay compassionate and context-sensitive enough to keep ahead of artificial intelligence models of their own critiques. I’m sure a great big battle of Nazi vs. antifascist bots would make for great television. But what we need now are lessons, insights, tools, and legislation.