And The Robot Looked Outward, Feeling Nothing Inside

Photo courtesy of Tim Brennan.
How close are we to achieving human-level artificial intelligence?  We’re making progress but it might be a long way off, possibly never.  There are five major milestones required in order for computers to become as intelligent as humans.  This is covered in a good Huffington Post article from October 2017.  Here are the highlights:

  • Generality is the idea that an approach from one domain can be applied to another. For example, tips on folding the laundry by doing the big-and-easy things first can be applied to other areas of work such as cleaning data.  Artificial Intelligence can do this kind of thing already.
  • Learning without being taught is another milestone. DeepMind, a company owned by Google, has an artificial intelligence system called AlphaGo Zero which recently achieved this goal.  AlphaGo Zero set a goal and learned strategies to achieve it without having its hand held by programmers.
  • Transfer Learning is like generality, but it involves taking one abstract concept (not just an approach) and applying it to a totally different context, the way humans do. It draws on the pattern-forming behaviour of the human brain, applying symbolic approaches to the task at hand.  AI cannot do transfer learning yet, but researchers are working on it.
  • Common Sense turns out to be hard for a computer to figure out. If you have been to a swimming pool, you know that Michael Phelps must have got into the pool in order to win an Olympic medal in swimming.  As a human you know that Phelps got wet.  Computers don't know that Phelps got wet.  There is speculation that humans are running off memory and coming to a logical conclusion, and that computers need this kind of memory in order to pull together common sense.  They're working on it.  It's reminiscent of the new Blade Runner movie, which has a brilliant sub-plot about the human-ness of our memories.
  • Self-awareness or consciousness in computers looks like it might never happen. This is the idea that humans develop a subjective experience, one that is felt personally and might be quite different from the experience as observed by a neutral third party.  Researchers are pretty sure they can get a computer to pretend to be self-aware, but on the inside it would have a cold heart.

I like the self-awareness question because it makes it sound like the smartest AI ever will be just like a psychopath who has perfected their game of crocodile tears.  We won’t even need to hire psychopaths any more because everything they are good at will be done by computers.

By the way, what jobs do we want to assign to psychopaths?  Just asking.

Tech Change Will Make Commies Of Us All!!


Is it just my imagination, or has there been an up-tick in socialist rhetoric lately?  Don’t get me wrong, I think that decisions about the role of government in our economy should be put in the hands of voters, and I recognize that for a few decades people steadily voted for less government.  But it looks like once every couple of weeks, another corporate heavyweight and another major news outlet presents a strong case that corporations have screwed it all up and it’s time for government to step in.

I’m counting this as a relevant topic for human resources generalists to take really seriously.  Brokering a compromise between the corporate mission and the sentiments of front-line workers is much of what we do all day, whether it’s in collective bargaining, employee communications, or just explaining a layoff to an affected employee.  So, when you’re trying to find an appropriate balance between the interests of unions and investors, it can be important to keep your fingers on the pulse.

In an article from Wired, the author criticizes Equifax, which released the confidential financial information of hundreds of millions of borrowers.  The author asserts that the Equifax breach is different from security breaches at regular bricks-and-mortar companies because Equifax’s entire reason for being is the safe storage of confidential information.  An effort at which they failed.  The author calls for the dissolution of Equifax’s corporate charter.

In my earlier blog post summarizing a major report by McKinsey on the structure of the gig economy, the general management consultancy started to leak spoonfuls of compassion.  The article notes that modernizing the social safety net may be warranted, in particular to extend social insurance systems to cover independent workers and those changing traditional jobs more frequently.  McKinsey also points to the pooling of workers by unions in the entertainment industry as a suitable vehicle for delivering health benefits coverage.

In an HBR article by Eric Garton from Bain & Company, another general management consulting firm, the author asserts that we should be investing more in employees to improve labour productivity.  After detailing a number of ways employee effort can be harnessed through employee engagement and a lower level of busy-ness, the author then turns to public policy.  Garton asserts that higher wages and investments in health care, training and education are among the possible additional improvements needed to achieve a better economy.

Over at the Guardian, a left-leaning publication that might normally be expected to call for greater government involvement in the economy, they have abandoned those little comments from years gone by about tax-the-rich-here and social-programs-over-there.  In this article they're going for the jugular and calling for a government takeover of Google, Facebook, and Amazon.

The author explains that the first-to-market and winner-takes-all nature of these major platforms eliminate competition, voiding any pretense of a free market.  With artificial intelligence likely causing power and money to concentrate even further in future, nationalization might just be fair game:  “…utilities and railways that enjoy huge economies of scale and serve the common good have been prime candidates for public ownership. …Tinkering with minor regulations while AI firms amass power won’t do.”

Over at the Atlantic, they’re interviewing people in the Silicon Valley who are asserting that our consumer electronics have addictive properties that are deliberate and need to be curtailed.  One expert “…compares the tech industry to Big Tobacco before the link between cigarettes and cancer was established: keen to give customers more of what they want, yet simultaneously inflicting collateral damage on their lives.”

What should we do about being duped into staring at our smartphones far too often?  Why, open revolt, of course!  “Harris thinks his best shot at improving the status quo is to get users riled up about the ways they’re being manipulated, then create a groundswell of support for technology that respects people’s agency–something akin to the privacy outcry that prodded companies to roll out personal-information protections.”  At the low end, the same experts are calling for a shift to non-addictive behaviours, similar to switching to organic produce at the grocery store.  But that’s for lightweights.

Now, some of this might just be talk, and maybe we should take some of it with a grain of salt.  But next time you’re in the elevator or at the bargaining table or out for drinks with a friend who is stuck in their career, listen more closely.  As an HR professional you’re going to be expected to show that you’re in touch, and this kind of thing can sneak up on you.  So think carefully, ahead of time, about what you’re going to say when you’re out in public and your best friend asks you to hold their pitchfork.

Cashiers Smile While Robots Take Stock


What jobs do we actually want the robots to take off our hands?  Boring, tedious jobs, for sure.  Walmart is deploying shelf-scanning robots to 50 stores on a trial basis. The robots are expected to browse the aisles and take inventory of items on shelves, identifying depleted items, misplaced items, and overlooked price changes.

The technology is expected to complement shelf-stockers rather than replace them.  That is, the robot will collect better and more-prompt information about what is on the shelves, and then humans will come by the exact shelf location and re-stock the shelf with the correct amount.  Apparently taking inventory is thankless and tedious work that can be automated, while the actual use of hands and eyes to move physical packages onto shelves is an overwhelmingly human behaviour, at least for now.

The video produced by Walmart explains the technology itself, then wraps up with the following statement:

When we combine the passion of our people with the power of technology the possibilities are endless.

While it sounds like a corporate-speak motherhood statement, these words are truer than you can imagine.  The empathy of human sales staff has an outsized impact on customer engagement, and as such the jobs which are most immune to technological disruption are those that deliver the human element of the customer experience.

So if you’re feeling blue and bewildered about all of the rapid technological change in the world, put on your happy face, make eye contact with someone you can help, and offer a hand.  It might actually improve your job security, directly.  Knowing you’re more secure, your smile might turn real.

Can We Teach Robots to be Egalitarian?


Can we teach robots to be less biased than us?  Probably yes.  But only if we do this right.  Bias is mostly the product of mental shortcuts we make in our reasoning, and machines can only think clearly if we teach them to not make the same mental shortcuts.

There is an interesting article about employers’ best attempts at reducing bias in hiring algorithms.  Paul Burley, the CEO at Predictive Hire, describes his company’s efforts to identify and eliminate bias in the recruitment and selection of the best job applicants.  This work goes beyond eliminating applicant names from a conventional recruitment process; this effort gets into predictive analytics to identify the best candidate.

Burley is particularly keen on identifying interview questions that drive bias (either direct or adverse-effect discrimination), and then eliminating those questions entirely.  While they do not use demographic information inside their algorithms, they do use demographic information outside of the algorithm, to test if any of their questions are causing a bias after-the-fact.
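The article doesn't describe Predictive Hire's internal implementation, but the general technique of testing questions for adverse effect after-the-fact is straightforward to sketch. A common screening heuristic is the "four-fifths rule": if any group's selection rate falls below 80% of the highest group's rate, the question is flagged for review. The function below is a minimal illustration with invented data, not the company's actual method.

```python
from collections import defaultdict

def adverse_impact_ratios(outcomes):
    """Selection rate per group, expressed as a ratio to the
    highest-performing group (the 'four-fifths rule' test).

    outcomes: iterable of (group_label, selected_bool) pairs.
    """
    selected = defaultdict(int)
    total = defaultdict(int)
    for group, was_selected in outcomes:
        total[group] += 1
        if was_selected:
            selected[group] += 1
    rates = {g: selected[g] / total[g] for g in total}
    best = max(rates.values())
    return {g: rates[g] / best for g in rates}

# Hypothetical pass/fail results for one interview question.
results = [("A", True)] * 40 + [("A", False)] * 10 \
        + [("B", True)] * 25 + [("B", False)] * 25

ratios = adverse_impact_ratios(results)
flagged = [g for g, r in ratios.items() if r < 0.8]
# Group A passes at 0.8, group B at 0.5, so B's ratio is 0.625
# and the question gets flagged: flagged == ["B"]
```

Note that the demographic labels appear only in this audit step, not as inputs to the selection model itself, which matches the approach described in the article.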

Using Workforce Analytics to Identify Invisible Bias

It sounds to me like his company is going about it the right way.  With bias, we don’t deliberately “choose” white males to be the boss.  Rather, we assess what traits would normally indicate strong leadership, accidentally carry forward historic stereotypes about strong leaders, and then inadvertently choose white males.  Plenty of people, including some women and visible minorities, accidentally advance this momentum.  That is because it’s the underlying thought patterns driving things, rather than deliberate and malevolent racism and sexism.  You can take one step forward by not being a jerk, but take two steps backward on something called cognitive bias.  And everyone does cognitive bias, not just the man.

Over at Better Humans, they have created a Cognitive Bias Cheat Sheet.  Personally, I have been trying to stay on top of cognitive bias since it was revealed to be a major driver of the 2008 sub-prime mortgage fiasco and the subsequent Great Recession.  Cognitive bias is overwhelming, and that’s illustrative of what the real problem is.  The world just gives us too much information to process, so we make shortcuts in our thinking to make sometimes-accurate judgments.  In the language of behavioral economics, prejudice is largely the advancing of skewed thinking based on cognitive bias shortcuts.

Information Overload – Are Machines Better Equipped Than Humans?

The big deal with big data is that machines are supposed to help us overcome the over-abundance of information.  Sure, we can find patterns and dig up nuggets that are buried in a mountain of data.  But if we are also making judgment calls using cognitive shortcuts because the human brain can’t handle the volume, there is the opportunity to use the machine to allow us to make judgments using all of the information.  We can create algorithms that are larger and more complex, bypassing the constraints of cognitive bias, and produce recommendations that are far less biased than those produced by humans.

We don’t entirely have the option of just turning the machine off.  Going off-grid just sends us back to biased decisions made by humans on gut instinct.  Think of who you know, and consider that not all luddites are champions of equality.  Right now, we are just getting past the first wave of machines imitating our own sexism and racism.  We now have the option of telling the machines to stop doing that, and then building new algorithms that meet our own purported standards of neutrality.

But this will happen if and only if we choose to name our biases, talk openly about them, measure them, make decisions to reverse them, and keep improving the algorithms such that everyone has a fair shot at the good jobs.  And even then, we still can’t trust robots to decide where to seat people on the bus.  We must forever be vigilant, and stay human.

Big World, Small Wages

The shrinking dollar.  Photo courtesy of frankieleon.

We are now in an era when unemployment is low, but wages are not increasing.  This is unusual.  Normally when unemployment is low, wages increase.  Even the meanest of bosses would look over their shoulder and increase wages to “stay competitive with market,” when they’re actually just worried about losing key people and unions making inroads.  But the rules of business have changed.

According to the New York Times article Plenty of Work; Not Enough Pay the reasons why wages are staying low are incredibly varied.  Long story short: It’s a dog-eat-dog world and we’re in a big, hot mess.

  • Unions have less power than in the past. Last year only 11% of the American workforce was unionized, down from 20% in 1983.  This decline coincides with American wages largely breaking even since 1972 on an inflation-adjusted basis.
  • The article interviews Lawrence Mishel from the Economic Policy Institute, who notes that “people have very little leverage to get a good deal from their bosses…” and this reduces expectations to the point where “People who have a decent job are happy to just hold down what they have.”
  • It’s not just workers and unions; businesses are anxious, too. In Japan, companies “mostly sat on their increased profits rather than share with employees.”  Businesses are still spooked from the popping of the real estate bubble in the early 1990s, which was a prequel to the larger subprime mortgage fiasco in the USA around 2008.  In Norway, wages increased as a result of their oil riches in the run-up to 2008.  Their higher cost structure put them at a competitive disadvantage during that same recession, and businesses in Norway don’t want to make the same mistake.
  • Employers who are experiencing good business results are trying to get more work done by hiring temporary employees. After all, if a business can get a large fraction of their work done by contractors, it’s easier to shed the contractors during a downturn.  While temporary work is a negative experience for those forced into it, it is also something business leaders need to do out of fear that they themselves could be in trouble at any time.
  • In Norway and Germany, unions have negotiated special deals to keep wages low, ensure businesses stay cost-competitive, and save local jobs. This arrangement puts pressure on lower-cost jurisdictions, such as Italy and Spain.
  • Globalization is connecting developing-world factories more closely to the individual consumer. After “eliminating the middle-man,” there are fewer bottlenecks in getting goods to market.  With fewer middle players, there is not the same opportunity for employment in these roles.  Factories have fewer hurdles to dropping goods right at your doorstep.  Online leaders, such as Amazon, continue to ravage physical retail.  Meanwhile, warehouse operations and trucking goods across continents are increasingly prone to automation by robots and artificial intelligence.
  • In addition to buyers purchasing goods from developing countries, immigrants are often brought in from those same countries, keeping wages down. It is virtuous to be sympathetic to the plight of immigrants, but there is also truth to the complaint that businesses are using immigrants as pawns. In Norway, the social democratic system that shares wealth with the unionized workforce is being undermined by start-up businesses employing immigrants from Eastern Europe at wages that are below the agreed standard.  The unions are struggling to ensure these immigrants get the same rights as others.  Labour’s biggest struggle is to break even.

The supply-and-demand mantra that the market will correct itself has simply become a falsehood.  This raises the possibility that, if we want gains, we can’t count on the market to take care of us.  The possible solutions are varied, and the solutions you lean towards probably match the opinions of those around you.

Perhaps families and churches will help us, or maybe it will be unions and the government.  But the emerging consensus is that market forces are nobody’s friend.

Who Created Racist Robots? You Did!


If robots just did what we said, would they exhibit racist behavior?  Yes.  Yes they would.

There is an insightful article in the Guardian on the issue of artificial intelligence picking up and advancing society’s pre-existing racism.  It follows on the heels of a report claiming that a risk-assessment computer program called Compas was biased against black prisoners.  Another crime-forecasting program called PredPol was revealed to have created a racist feedback loop: over-policing in black areas of Oakland generated statistics that over-predicted crime in black areas, which recommended increased policing, and so on.
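The feedback loop is worth pausing on, because it emerges from arithmetic alone. The toy simulation below uses entirely invented numbers (it is not PredPol's algorithm): two districts share the same true crime rate, but patrols are sent wherever recorded crime is highest, and a patrol can only record crime where it is present. The initially over-patrolled district ends up dominating the records.

```python
import random

random.seed(0)

TRUE_RATE = 0.1      # assumed: identical true crime rate in both districts
patrols = [11, 1]    # hypothetical starting allocation, already skewed
recorded = [5, 0]    # historical records reflecting past over-policing

for day in range(200):
    for d in (0, 1):
        # A patrol only records crime it is present to observe.
        for _ in range(patrols[d]):
            if random.random() < TRUE_RATE:
                recorded[d] += 1
    # "Predictive" reallocation: send tomorrow's patrols where the
    # records are highest, keeping at least one patrol per district.
    share = recorded[0] / (recorded[0] + recorded[1])
    patrols[0] = max(1, min(11, round(12 * share)))
    patrols[1] = 12 - patrols[0]

# Both districts have the same true crime rate, yet the district that
# started with more patrols ends up with most of the patrols and most
# of the recorded crime: the data confirms the allocation that made it.
```

The point of the sketch is that no one in the loop needs to hold a biased belief; the skew in the historical records is enough to sustain itself.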

“’If you’re not careful, you risk automating the exact same biases these programs are supposed to eliminate,’ says Kristian Lum, the lead statistician at the San-Francisco-based, non-profit Human Rights Data Analysis Group (HRDAG).”

It’s not just the specialized forecasting software that is getting stung by this.  Google and LinkedIn have had problems with this kind of thing as well.  Microsoft had it the worst with a chatbot called Tay, which “learned” how to act like everyone else on Twitter and turned into a neo-Nazi in one day.  How efficient!

These things are happening so often they cannot be regarded as individual mistakes.  Instead, I think that racist robots must be categorized as a trend.

Workforce Analytics and Automated Racism or Anti-Racism

This racist robot trend affects workforce analytics because those attempting to predict behavior in the workplace will occasionally swap notes with analysts attempting to improve law enforcement.  As we begin to automate elements of employee recruitment, there is also the opportunity to use technology-based tools to reduce racism and sexism.  Now, we are stumbling upon the concern that artificial intelligence is at risk of picking up society’s pre-existing racism.

The issue is that forecasts are built around pre-existing data.  If there is a statistical trend in hiring or policing which is piggy-backing on some type of ground-level prejudice, the formulas inside the statistical model could simply pass-along that underlying sexism or racism.  It’s like children repeating-back what they hear from their parents; the robots are listening – watch your mouth!  Even amongst adults communicating word-of-mouth, our individual opinions are substantially a pass-through of what we picked up from the rest of society.  In this context, it seems naïve to expect robots to be better than us.
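This pass-through mechanism is easy to demonstrate with a toy model. In the sketch below, the training data, the groups, and the "model" are all invented for illustration: past hiring decisions under-selected equally-skilled candidates from one group, and a naive predictor trained on those decisions reproduces the bias exactly, with no malice anywhere in the code.

```python
from collections import defaultdict

# Invented training data: (candidate_group, skill_score, hired).
# Equally-skilled group-B candidates were historically hired less often.
history = [
    ("A", 7, 1), ("A", 7, 1), ("A", 5, 0), ("A", 6, 1),
    ("B", 7, 0), ("B", 7, 1), ("B", 5, 0), ("B", 6, 0),
]

# Naive "model": hire if the historical hire rate for candidates
# with the same (group, skill) profile exceeds 50%.
stats = defaultdict(lambda: [0, 0])   # (group, skill) -> [hired, total]
for group, skill, hired in history:
    stats[(group, skill)][0] += hired
    stats[(group, skill)][1] += 1

def predict(group, skill):
    hired, total = stats.get((group, skill), (0, 1))
    return hired / total > 0.5

# Two identical candidates, different groups: the historical bias
# passes straight through the formula.
# predict("A", 7) -> True ; predict("B", 7) -> False
```

Real hiring models are far more complex, but the principle scales: if group membership (or any proxy for it) correlates with biased past outcomes, the model inherits the skew unless it is deliberately audited out.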

So, we must choose to use technology to reduce racism, or technology will embolden racism absent-mindedly.  Pick one.

A major complication in this controversy is that those who create forecast algorithms regard their software and their models as proprietary.  The owner of the Compas software, Northpointe, has refused to explain the inner workings of the software that they own.  This confidentiality may make business sense and might be legally valid in terms of intellectual property rights.  However, if their software is non-compliant on a human rights basis, they might lose customers, lose a discrimination lawsuit, or even get legislated out of business.

We are in an era where many people presume that they should know what is really happening when controversial decisions are being made.  When it comes to race and policing, expectations of accountability and transparency can become politically compelling very quickly.  And the use of software to recruit or promote employees, particularly in the public sector, could fall under a similar level of scrutiny just as easily.

I hope that police, human resources professionals, and social justice activists take a greater interest in this topic.  But only if they can stay sufficiently compassionate and context-sensitive to keep ahead of artificial intelligence models of their own critiques.  I’m sure a great big battle of Nazi vs. antifascist bots would make for great television.  But what we need now are lessons, insights, tools, and legislation.