Who Created Racist Robots? You Did!

Reinventing Ourselves

If robots just did what we said, would they exhibit racist behavior?  Yes.  Yes they would.

This is an insightful article in the Guardian on the issue of artificial intelligence picking up and advancing society’s pre-existing racism.  It follows on the heels of a report that claimed that a risk-assessment computer program called Compas was biased against black prisoners.  Another crime-forecasting program called PredPol was revealed to have created a racist feedback loop: over-policing in black areas of Oakland generated statistics that over-predicted crime in those areas, which recommended increased policing, and so on.
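The Oakland feedback loop is easy to sketch.  Below is a toy simulation with invented numbers (it is not PredPol’s actual model): two districts have identical true crime rates, but recorded crime rises with patrol presence, and the forecaster allocates the next round of patrols by recorded crime.

```python
# Toy predictive-policing feedback loop.  All numbers are invented for
# illustration; this is not PredPol's actual algorithm.  Both districts
# have the same underlying crime rate, but recorded crime depends on how
# heavily each district is patrolled.

def simulate(rounds=10, amplification=1.2):
    # District A starts out over-policed: 60 of 100 patrol units.
    patrols = {"district_a": 60.0, "district_b": 40.0}
    for _ in range(rounds):
        # More patrols -> more recorded incidents (assumed superlinear).
        recorded = {d: p ** amplification for d, p in patrols.items()}
        # The forecaster allocates next round's 100 units by recorded crime.
        total = sum(recorded.values())
        patrols = {d: 100.0 * r / total for d, r in recorded.items()}
    return patrols

final = simulate()
# district_a's share keeps growing even though true crime rates are equal.
```

Each round, the initial disparity is fed back in as fresh evidence, so the model drifts toward policing district A exclusively: bias in, bias amplified out.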

“If you’re not careful, you risk automating the exact same biases these programs are supposed to eliminate,” says Kristian Lum, the lead statistician at the San Francisco-based non-profit Human Rights Data Analysis Group (HRDAG).

It’s not just specialized forecasting software that is getting stung by this.  Google and LinkedIn have had problems of this kind as well.  Microsoft had it the worst with a chatbot called Tay, which “learned” how to act like everyone else on Twitter and turned into a neo-Nazi in a single day.  How efficient!

These things are happening so often they cannot be regarded as individual mistakes.  Instead, I think that racist robots must be categorized as a trend.

Workforce Analytics and Automated Racism or Anti-Racism

This racist robot trend affects workforce analytics because those attempting to predict behavior in the workplace will occasionally swap notes with analysts attempting to improve law enforcement.  As we begin to automate elements of employee recruitment, there is also the opportunity to use technology-based tools to reduce racism and sexism.  Now, we are stumbling upon the concern that artificial intelligence is at risk of picking up society’s pre-existing racism.

The issue is that forecasts are built around pre-existing data.  If there is a statistical trend in hiring or policing which piggy-backs on some type of ground-level prejudice, the formulas inside the statistical model can simply pass along that underlying sexism or racism.  It’s like children repeating back what they hear from their parents; the robots are listening – watch your mouth!  Even amongst adults communicating by word of mouth, our individual opinions are substantially a pass-through of what we picked up from the rest of society.  In this context, it seems naïve to expect robots to be better than us.
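A toy example makes the pass-through concrete.  The records below are invented: candidates in both groups are equally qualified, but historical hiring favored group A, and a model fitted to those outcomes simply forecasts the same disparity.

```python
# Invented historical hiring records: (group, was_hired).  Both groups
# are assumed equally qualified; only the past outcomes differ.
from collections import defaultdict

history = ([("A", True)] * 80 + [("A", False)] * 20 +
           [("B", True)] * 40 + [("B", False)] * 60)

def train(records):
    """Learn P(hired | group) from historical outcomes."""
    hires, totals = defaultdict(int), defaultdict(int)
    for group, hired in records:
        totals[group] += 1
        hires[group] += hired
    return {group: hires[group] / totals[group] for group in totals}

model = train(history)
# The "forecast" for a new, equally qualified candidate from each group
# is just the old disparity: 0.8 for group A, 0.4 for group B.
```

Nothing in the model is malicious; it faithfully learned what it was shown, which is exactly the problem.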

So, we must choose to use technology to reduce racism, or technology will embolden racism absent-mindedly.  Pick one.

A major complication in this controversy is that those who create forecast algorithms regard their software and their models as proprietary.  Northpointe, the owner of the Compas software, has refused to explain its inner workings.  This confidentiality may make business sense and might be legally valid in terms of intellectual property rights.  However, if their software is non-compliant on a human rights basis, they might lose customers, lose a discrimination lawsuit, or even get legislated out of business.

We are in an era where many people presume that they should know what is really happening when controversial decisions are being made.  When it comes to race and policing, expectations of accountability and transparency can become politically compelling very quickly.  And the use of software to recruit or promote employees, particularly in the public sector, could fall under a similar level of scrutiny just as easily.

I hope that police, human resources professionals, and social justice activists take a greater interest in this topic, but only if they can stay sufficiently compassionate and context-sensitive to keep ahead of artificial intelligence models of their own critiques.  I’m sure a great big battle of Nazi vs. anti-fascist bots would make for great television.  But what we need now are lessons, insights, tools, and legislation.

Where’s Waldo in the Job Applicant Pool?

Where’s Waldo. Photo courtesy of David Trawin.

How do you find that one special thing in the middle of all this big data?  It depends on what you’re looking for.  Machines can help you find things, but first you have to teach the machine to understand what you want.  With recruiting data, a few simple formulas evolve into something far more complex.

This article from CIO.com, by Sharon Florentine, summarizes how Artificial Intelligence is revolutionizing recruiting and hiring.  Long story short: if you have really good data about who your high performers are and what the process was like to recruit them, you can reverse-engineer the recruiting to predict which applicants will perform well after hire.
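As a sketch of that reverse-engineering idea (the features, names, and numbers below are invented, not taken from the article): build a profile from past high performers, then rank new applicants by how closely they match it.

```python
import math

# Invented historical data: (features, was_high_performer), where the
# features are (years_experience, referrals, test_score).
past_hires = [
    ((5, 2, 88), True),
    ((7, 1, 91), True),
    ((2, 0, 70), False),
    ((3, 0, 65), False),
]

def profile(records):
    """Average feature vector of the high performers."""
    highs = [features for features, good in records if good]
    return tuple(sum(column) / len(highs) for column in zip(*highs))

def score(applicant, target):
    """Negative Euclidean distance: closer to the profile scores higher."""
    return -math.dist(applicant, target)

target = profile(past_hires)
applicants = {"pat": (6, 1, 90), "sam": (1, 0, 60)}
ranked = sorted(applicants, key=lambda a: score(applicants[a], target),
                reverse=True)
# "pat" ranks first: closest to the high-performer profile.
```

A production system would use a proper statistical model rather than a nearest-profile score, but the shape is the same: yesterday’s high performers define the target, and applicants are ranked against it, which is exactly how yesterday’s biases can sneak in.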

I hate to imply that it’s so last month, but the basic concept is straightforward.  Collect large amounts of data, fine-tune its quality, run a statistical analysis to find the relationships that actually predict outcomes, and make a forecast.  That’s what it looks like in a lab environment.  But the good stuff is in the war stories of how this kind of experimental analysis plays out.  The article names a few hot points worthy of more discussion.

Where’s Waldo: Finding the Best-Fit Candidate in the Middle of Big Data

Glen Cathey of Randstad compares the new job search to the “Where’s Waldo?” book series: “…it’s not difficult to search anymore, what’s of greater importance now is a data problem.”  That is, you have plenty of good applicants, but you have to identify that one great fit.  Cathey describes three types of search that make this viable.

  • Semantic Search, which seeks to understand a searcher’s intent and the context in which the search is being made. (Remember, good fit is circumstantial and conceptual)
  • Conceptual Search, which creates a basic concept from just a few key words.
  • Implicit Search, which pushes information to you based on assumptions about what you’re trying to accomplish, “…much like how Google automatically pushes restaurant recommendations in your local area…”  I have to admit, I’m always impressed when Google knows that I only want local results.
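A crude sketch of the gap these approaches address: plain keyword matching misses a qualified candidate, while a hand-written synonym expansion (a stand-in for real semantic search, which uses learned representations rather than a dictionary) surfaces them.  The resumes and synonym table are invented.

```python
# Invented stand-ins: a real semantic search engine learns these
# relationships instead of hard-coding them.
SYNONYMS = {
    "developer": {"developer", "engineer", "programmer"},
    "ml": {"ml", "machine", "learning", "ai"},
}

resumes = {
    "resume_1": "senior software engineer with machine learning projects",
    "resume_2": "java developer ml pipelines",
    "resume_3": "graphic designer and illustrator",
}

def keyword_search(query, docs):
    """Return docs containing every literal query term."""
    terms = set(query.lower().split())
    return sorted(doc for doc, text in docs.items()
                  if terms <= set(text.split()))

def expanded_search(query, docs):
    """Expand each term to its synonym set before matching."""
    expansions = [SYNONYMS.get(t, {t}) for t in query.lower().split()]
    return sorted(doc for doc, text in docs.items()
                  if all(exp & set(text.split()) for exp in expansions))

# keyword_search("developer ml", resumes) finds only resume_2;
# expanded_search("developer ml", resumes) also surfaces resume_1.
```

The engineer with machine learning projects never typed the word “developer,” so the literal search passes them by; the expanded search catches the concept rather than the string.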

Dark Matter: The Missing Job Applicant You Don’t Know You’re Missing

In spite of his faults, former Defense Secretary Donald Rumsfeld did popularize an important concept: “unknown unknowns.”  That is, there are unknowns that you are at least somewhat aware of as risk factors, but there are deeper unknowns where you have no idea the information was lacking in the first place.

As it relates to recruiting, Cathey notes that “…you’re excluding people with those [machine-driven] searches.  Doing it this way means you’re actually looking only for the best of the easiest candidates to find.”  So, recruiters use Artificial Intelligence and machine learning to find overlooked candidates.  A strong candidate might have done a mediocre job customizing their resume to your posting, but still be an exceptional fit.  They might use the wrong key words.  They might have special skills that your organization needs but that aren’t on the job posting.  And your recruiting expectations might be biased towards a certain type of white male, or white males generally.  The modified formulas can open up the under-used areas of the candidate pool.

So, while it’s great if the machine gets you to a great candidate quickly, you can also get the machine to do the tedious exercise of finding the diamonds in the rough.

While it’s true that some of this work can be done with basic statistical tools and a good data set, even that is an ambitious starting point to reach in the first place.  The advanced class is that you must create new data from scratch, revise the model on an iterative basis, and eventually run the model off live data such that the predictions change as the ground underneath the data shifts.
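That last step, running the model off live data, can be sketched as a rolling refit (the stream and window size below are invented): each new observation updates the model, so forecasts follow the ground as it shifts.

```python
from collections import deque

def live_forecaster(stream, window=3):
    """Yield a forecast after each observation, refit on a rolling window."""
    recent = deque(maxlen=window)
    for observation in stream:
        recent.append(observation)
        # "Refitting" here is just recomputing the mean of the window;
        # a real system would re-estimate its full model the same way.
        yield sum(recent) / len(recent)

# As the underlying values drift upward, the forecasts follow:
forecasts = list(live_forecaster([10, 10, 10, 20, 30, 40]))
```

The rolling mean is the simplest possible stand-in for a model, but the pattern generalizes: old observations age out of the window, so the predictions track the shifting data rather than a frozen snapshot.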

But that’s only if your attempt to do this kind of thing matches the business context.  The big challenge is when the work is incompatible with organizational strategy, or the initiative needs a compelling business case to shift resources, or you need to win over new people who are in the middle of a leadership change.  At that point you will get sucked back into the complex world of humanity and empathy.  So much for robots making our lives easier!

Data Will Drive Your Car. Oil, Not So Much

Oil Rig. Photo courtesy of Soliven Melindo.

Are cars no longer fueled by gasoline because they are now fueled by data?  Consider how driverless cars, electric vehicles, and Uber are changing the outlook for the future.  And reflect on how the in-vehicle computer has increasingly changed your safety, your comfort, and your ability to manage the vehicle’s maintenance.  Gasoline is so last century; today it’s all about the data.

A Financial Review article from July 2017 by Mark Eggleton plays with the idea of data as the fuel of the future.  For a century oil ruled our world, influencing geopolitics, urban design, and decisions about where to work and travel.  Today, it is data that is significantly changing our world.  However, we cannot simply follow data on blind faith.  We need to look up from the GPS, so to speak, and decide for ourselves whether the data we are being fed is relevant and appropriate.

We need to consider data in the context of trust.  Take banks as an example.  Although banks could do lots of things with our personal financial information, they operate within a context of trust that has been built over centuries.  Regardless of whether we trust their profit motives in society overall, we do trust that the information they hold will be handled in a responsible and diligent manner.  Banking is deeply immersed in a human context, regardless of whether it always seems that way.

I personally think that in workforce analytics there is a similar concern about trust.  We have at our fingertips sensitive information that could be used for good or evil.  So let’s ask: are human resources departments actually good?  Perhaps we need to spend some time establishing ourselves, to show that even when we’re wrapped up in industrial conflict and individual terminations, we’re sincerely doing what is expected of us.  If we collect accident statistics and attendance lists for mental health workshops, do employees trust us to use this information only to make people well?  Have we truly established that the employment equity data we collect will be used exclusively for its intended purpose?  When we survey employees on their engagement experience, is the information used to create a better workplace, or are there attempts to punish those who express low motivation?  And while we closely guard peoples’ confidential pay data, do we have the correct attitude towards employees discussing their pay amongst themselves?

I think it’s high time we subordinate data to the human context.  After all, if big data peaks, we are probably into the human economy.  If data is going to change the world, we need to ensure it dovetails with our history, our geography, and our people and their culture.  If we get this wrong, it will be a dystopian science fiction movie come true.  That’s kind of what happened with oil.  So let’s get it right this time.

(Hat tip to KPMG’s Hugo van Hoogstraten for sharing the original article with me)

Boxes Without Humans: What Will Fill the Gap?

Amazon cat. Photo courtesy of Stephen Woods.

It’s been a couple of days since the latest social disruption.  I wonder what’s going to be turned upside-down this week?  Hot on the heels of online shopping ravaging the conventional retail sector, warehouse and trucking jobs might be the next to go.

Amazon.com holds an annual contest called the Amazon Robotics Challenge, in which academics and graduate students compete for prize money to help automate warehouse jobs with robots.  The technology gets a little more clever each year.  “They now use neural networks, a form of artificial intelligence that helps robots learn to recognize objects with less human programming.”  The good news is that there might be lots of work for technologists.

The bad news is that this could take a bite out of decent-paying warehouse jobs.  In the US, there are about 950,000 warehouse and storage-industry jobs with an average wage of about $20 per hour.  Those jobs are threatened.

But a more pressing concern would be trucking.  Self-driving cars are already starting to make an appearance on the roads, and according to a Guardian article, the change will happen more quickly for trucking.  The decision to go driverless with trucking is a corporate decision, not a consumer decision as with personal automobiles.  The financial motivation to use self-driving trucks is extremely strong.  “The potential saving to the freight transportation industry is estimated to be $168bn annually… [including savings from] labor ($70bn), fuel efficiency ($35bn), productivity ($27bn) and accidents ($36bn)…”  The trucker’s wage is similar to that of warehouse workers, but there are far more jobs at stake.  There are 3.5 million truckers in the United States, and the drivers themselves spend a lot of money at road stops, hotels, and diners.
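For what it’s worth, the quoted components do reconcile with the quoted total:

```python
# Sanity check on the Guardian's figures: the component savings (in $bn)
# sum to the quoted $168bn annual total.
savings_bn = {"labor": 70, "fuel_efficiency": 35,
              "productivity": 27, "accidents": 36}
total_bn = sum(savings_bn.values())  # 168
```

Note that labor is the single largest component, which is exactly why the jobs question dominates this story.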

Now, if you work in finance or information technology this might not concern you so much.  Technologies are created, investments made, money saved, and we’re better off on average.  But in human resources we know that the unemployed are our people.  We terminate them, we screen them when they apply for jobs, we help them with problems if we know them personally, and occasionally we ourselves are the unemployed.  We think about them a lot.  We don’t always show it, but frankly we have to care or we die inside.

Thankfully, people have started to talk more openly about the broad-based social disruption that Artificial Intelligence may bring.  In the Guardian article on trucking, there are calls for a Universal Basic Income: direct payments to everyone regardless of how well they have fared in a disrupted labour market.  There may be other policy responses as well, such as improved access to training and education.  These government-funded solutions seem obvious to me.

There is still a persisting risk that the disruption will be misattributed to an outside factor.  If the technology-based job losses are blamed on immigrants, environmental regulations, or the abandonment of tradition, I can’t foresee broad democratic support for government solutions or the embracing of change.  And this spells trouble for the very business interests whose success relies on the rule of law, stable diplomacy, and a diverse workforce who are engaged to stay productive.

How to Become Strong By Understanding Disadvantage

2012 Marine Corps Trials Day 2.  Photo courtesy of DVIDSHUB.

We hear lots about excellence these days.  So what are the opportunities for persons with disabilities and disadvantages to drive excellence?  It may be that those who are in the throes of disadvantage might not have a fair shot at success.  But there are opportunities for everyone to aspire to excellence, through the cultivation of empathy for those who are disadvantaged.

This is a touching article about a doctor who was concerned about his own mother during her disabling illness.  The illness was Parkinson’s disease, a degenerative disorder that affects movement.  In the Times article, Dr. Sandeep Jauhar is rigged with a device that allows him to personally experience the sensation of his muscles turning to jelly, like those who have Parkinson’s, like his mother.

Why would he do such a thing?  Because he always wanted to understand his mother’s perspective during the illness.  Devices are also available that replicate the effects of emphysema, psychiatric illness, and nerve disease related to diabetes.

While I haven’t experienced it yet, I have also heard rave reviews about a similar effort called Dark Table.  Dark Table is a restaurant in Vancouver where food is served and eaten in a room which is completely dark.  The servers are blind or visually impaired, and the guests commit to keeping their gadgets off and eating their meals in the dark.  The dark dining experience increases the awareness of other senses such as hearing, touch, and taste.  It creates jobs for persons with disabilities.  And it also helps people empathize with the perspective of the visually impaired.

Emotional Intelligence in Workplace Conflict

On the human resources side of the fence, it’s possible to develop greater empathy for those we are in conflict with.  The nurturing of empathy is important for industrial relations, the professional development of managers, performance conversations, and the general growth of all staff.  How do you teach workplace empathy?  I have been involved in complex roleplay scenarios called Conflict Theatre.  The theatre scenes are designed so that each scenario is integrated into well-developed back stories and emotional perspectives of the actors.

The theatre is presented so as to invite audience members to step into the shoes of an individual actor and attempt to change the course of the conflict.  It’s one thing to sit back, observe from an armchair, and develop an opinion about how things should be done.  But the real expertise is to understand the full emotional context of each player in a conflict, an understanding which is far more vivid when experienced directly.

Empathizing with diverse perspectives turns out to be a key attribute of those who face conflict with dignity and grace.  It takes you beyond negotiations that resemble bartering for trinkets, and even beyond the interest-based bargaining of those vying for a win-win solution.  You have to learn how to understand people as individuals based on their perspective and story, not their category or “type.”  This includes understanding their perspective when they struggle with ability, whether it’s professional ability or impairments.

Using Emotional Intelligence to Improve Workplace Culture

The thing I find fascinating about these initiatives is their scientific and cultural back-story.  The Parkinson’s device was built in response to well-documented complaints that patients perceive their nurses and doctors to lack empathy for their hardships.  Blind dining traces back to Switzerland and a man named Jorge Spielmann, whose concept was imitated in restaurants in London, Paris, and New York.  Conflict Theatre in Vancouver comes out of David Diamond’s Theatre for Living, which itself comes out of Theatre of the Oppressed, created by Augusto Boal in Brazil in the 1970s.  Theatre of the Oppressed, as you might guess from the name, arises from social critiques and movements to overcome repression, with an intellectual legacy dating well back into the 1950s.

To affect society on the larger scale we need to reach into the emerging science, the social experiments in many countries, and the lessons learned many decades into the past.  The knowledge and confidence of those with power and privilege can pale in comparison to the universe of individual experiences.  In order to take full advantage of the best information when advancing ourselves in this world, we need humility about how right we truly are, curiosity for knowledge that is new, and sensitivity to the lessons from other cultures and other moments in time.  Only then can each of us aspire to excellence.

Peeking Into the Future of Job Elimination

Google Glass.  Photo courtesy of Karlis Dambrans.

There is increased speculation that artificial intelligence (AI) will replace the work of humans over the medium to long term.  Already, AI is performing at world-class tournament level in games such as chess and Go, the latter of which was a major breakthrough.  What about actual jobs?

At the University of Oxford, a survey from the Future of Humanity Institute asked leading experts how long it will take for machines to outperform humans.  Here are the average forecasts for a few skill sets:

  • 2023 – Folding laundry
  • 2027 – Truck driving
  • 2031 – Retail sales
  • 2049 – The writing of best-selling books
  • 2053 – Surgery

In the long game, the experts think machines will outperform humans at all tasks within about 45 years, and that all human jobs will be replaced in about 125 years.  So we’re kind of safe for a decade or so.  However, there are major concerns about what this change will mean for humanity, as it may increase economic inequality.

In my opinion, as this relates to workforce planning, the challenge seems most interesting in the transition period.  That is, people will get new jobs designing new technologies, and people will make themselves more productive by using technology in the workplace.  But there will be more frequent changes, more dramatic changes, and things will happen more quickly.

These changes mean that human resources will be the key party delivering change management, knowledge management, hiring, learning and development, and employee communications.  The pace at which people adapt to change will determine success in investment decisions and the retention of engaged customers.  But only if you get the metrics right.  Anything else, and your organization is sunk.