And The Robot Looked Outward, Feeling Nothing Inside

P5190596.  Photo courtesy of Tim Brennan.
How close are we to achieving human-level artificial intelligence?  We’re making progress, but it may be a long way off, and it may never happen.  There are five major milestones computers must reach in order to become as intelligent as humans, covered in a good Huffington Post article from October 2017.  Here are the highlights:

  • Generality is the idea that an approach from one domain can be applied to another. For example, a tip for folding the laundry (do the big, easy pieces first) can carry over to other kinds of work, such as cleaning data.  Artificial intelligence can already do this kind of thing.
  • Learning without being taught is another milestone. DeepMind, a company owned by Google, has an artificial intelligence system called AlphaGo Zero which recently achieved this goal.  AlphaGo Zero was given a goal and learned strategies to achieve it without having its hand held by programmers (a toy sketch of this learning-from-reward idea follows the list).
  • Transfer learning is like generality, but it lets humans take an abstract concept (not just an approach) and apply it in a totally different context. It draws on the pattern-forming behaviour of the human brain, bringing symbolic reasoning to bear on the task at hand.  AI cannot do transfer learning yet, but researchers are working on it.
  • Common sense turns out to be hard for a computer to figure out. If you have been to a swimming pool, you know that Michael Phelps must have got into a pool in order to win an Olympic medal in swimming.  As a human, you know that Phelps got wet.  Computers don’t know that Phelps got wet.  One speculation is that humans are drawing on memory to reach that logical conclusion, and that computers will need something like that memory in order to pull together common sense.  Researchers are working on it (a toy rule-chaining sketch after this list shows how much of that background knowledge has to be spelled out).  It’s reminiscent of the new Blade Runner movie, which has a brilliant sub-plot about the human-ness of our memories.
  • Self-awareness, or consciousness, in computers looks like it might never happen. This is the idea that humans have subjective experience: something felt personally that might be quite different from the same event as observed by a neutral third party.  Researchers are fairly sure they can get a computer to pretend to be self-aware, but on the inside it would have a cold heart.

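The AlphaGo Zero milestone is about a system finding its own strategy once it has a goal and a way to score itself.  Here is a minimal sketch of that idea: plain tabular Q-learning on a made-up six-square track, where the only guidance the agent gets is a reward for reaching the right-hand end.  This is not DeepMind’s algorithm, and the environment, parameters, and names are invented for illustration; it just shows a strategy emerging from trial and error rather than from explicit instruction.

```python
# A toy illustration of "learning without being taught": the agent is given only
# a reward for reaching the right-hand end of a six-square track and discovers a
# strategy by trial and error. Plain tabular Q-learning, not DeepMind's AlphaGo
# Zero algorithm; the environment and parameters here are invented.
import random

N_STATES = 6            # squares 0..5; square 5 is the goal
ACTIONS = [-1, +1]      # step left or step right
EPISODES = 500
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Q[state][action] starts at zero: the agent knows nothing about the game.
Q = [[0.0, 0.0] for _ in range(N_STATES)]

for _ in range(EPISODES):
    state = 0
    while state != N_STATES - 1:
        # Explore when unsure or occasionally at random; otherwise act greedily.
        if random.random() < EPSILON or Q[state][0] == Q[state][1]:
            action = random.randrange(2)
        else:
            action = 0 if Q[state][0] > Q[state][1] else 1
        next_state = min(max(state + ACTIONS[action], 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Update the value estimate from experience alone.
        Q[state][action] += ALPHA * (
            reward + GAMMA * max(Q[next_state]) - Q[state][action]
        )
        state = next_state

# The learned policy: no one ever told the agent to head right.
policy = ["right" if Q[s][1] > Q[s][0] else "left" for s in range(N_STATES - 1)]
print(policy)  # expected: ['right', 'right', 'right', 'right', 'right']
```

Run it and the printed policy should settle on “step right” in every square, even though no line of the program ever says that is the right move.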
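The Phelps example is really about how much background knowledge sits behind an “obvious” conclusion.  Here is a toy sketch, with facts and rules invented for the example: a tiny forward-chaining loop that only concludes Phelps got wet because someone spelled out every link in the chain (medal implies swam, swam implies got into a pool, got into a pool implies got wet).  The hard part, which this sketch does not touch, is acquiring and organising the millions of such links a human carries around, which is roughly the memory the article speculates we are running off.

```python
# A toy sketch of the common-sense gap: the background knowledge a human applies
# automatically has to be spelled out for the machine. The facts and rules below
# are invented for the example; real common-sense reasoning is far harder than
# this little forward-chaining loop suggests.
facts = {("won_olympic_swimming_medal", "Michael Phelps")}

# Each rule: if the first predicate holds for someone, conclude the second.
rules = [
    ("won_olympic_swimming_medal", "swam_in_a_pool"),
    ("swam_in_a_pool", "got_into_a_pool"),
    ("got_into_a_pool", "got_wet"),
]

# Forward-chain: keep applying rules until no new facts appear.
changed = True
while changed:
    changed = False
    for premise, conclusion in rules:
        for predicate, person in list(facts):
            if predicate == premise and (conclusion, person) not in facts:
                facts.add((conclusion, person))
                changed = True

# True -- but only because every link in the chain was written down by hand.
print(("got_wet", "Michael Phelps") in facts)
```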
I like the self-awareness question because it makes it sound like the smartest AI ever built will be just like a psychopath who has perfected the art of crocodile tears.  We won’t even need to hire psychopaths any more, because everything they are good at will be done by computers.

By the way, what jobs do we want to assign to psychopaths?  Just asking.

2 thoughts on “And The Robot Looked Outward, Feeling Nothing Inside”

  1. This is important for developing human education and training too. How we ensure all humans are peak learners and doers is key to maximizing human intelligence (HI).
