Part One: Where’s My Robot Doctor?

In this three-part blog series, we’ll discuss some of the major benefits and risks of developing machine learning algorithms and artificial intelligence in health technology, and explore a vision for an automated future. Artificial intelligence is no longer a question of “if”, but a question of “how”.

One of the biggest “how” questions hanging over conversations about artificial intelligence and machine learning is not strictly technical, but epistemological.

As a species, humans have had millions of years to optimise the way we transfer values and knowledge from one person to another; you might call this organic learning. Over the last twenty years or so, computer science has revolutionised our ability to process data, but now we have to seriously consider the transference of value systems to machines – synthetic learning.

Human minds and machines ingest new information in wildly different ways. From the day we are born, we are sculpted by context – our peers and environments teach us how we ought to behave. Perhaps most importantly, we learn, whether we choose to apply the knowledge or not, how to be “good”. Any parent would hope that the children they raise grow up to be ethical adults, that they steer clear of criminal or anti-social activity and treat others with respect and kindness. There are countless books and techniques for how to do this effectively, but the truth is that much of how humans learn to behave is still shrouded in mystery; it’s a problem still being addressed by psychologists, philosophers and neuroscientists the world over.

A central focus of intelligent technology is optimisation, the process of streamlining functionality in order to maximise utility. Think of it as a guided form of human evolution, intentionally removing suboptimal traits in favour of more efficient or useful ones. Imagine a robot orderly, one that can move a patient or prepare a hospital bed; we neither want the robot to rush and be rough with a patient, nor to spend forty minutes changing a pillowcase. At some point, the difference between completion and perfection becomes important: how do we create a robot that can move a patient in a timely fashion without ever compromising safety? Further, is the robot expensive to run? Do its tasks disrupt other hospital workers? Is the cost of operation worth the benefit? Optimisation is about teaching machines to be efficient.
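
To make that trade-off concrete, here is a minimal sketch of how such an optimisation objective might look. The weights, numbers and task names are entirely illustrative assumptions, not a real hospital system; the point is simply that weighting safety heavily makes a slower, careful plan score better than a fast, risky one.

```python
# A minimal sketch of how an optimisation objective for a robot orderly
# might trade off speed, safety, running cost and disruption.
# All weights, numbers and names are illustrative assumptions.

def task_utility(completion_time_min: float,
                 safety_score: float,   # 0.0 (unsafe) to 1.0 (perfectly safe)
                 running_cost: float,   # operating cost for this task
                 disruption: float) -> float:
    """Higher is better; the weights encode what we value most."""
    SPEED_WEIGHT = 1.0
    SAFETY_WEIGHT = 50.0   # safety dominates: a fast but unsafe plan scores poorly
    COST_WEIGHT = 0.5
    DISRUPTION_WEIGHT = 2.0
    return (SAFETY_WEIGHT * safety_score
            - SPEED_WEIGHT * completion_time_min
            - COST_WEIGHT * running_cost
            - DISRUPTION_WEIGHT * disruption)

# Two candidate plans for moving a patient:
rushed = task_utility(completion_time_min=5, safety_score=0.6, running_cost=2, disruption=3)
careful = task_utility(completion_time_min=15, safety_score=0.95, running_cost=4, disruption=1)
print(rushed, careful)  # the careful plan wins because safety is weighted so heavily
```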

This is where it becomes essential to consider how a machine will learn, and how it might interpret information differently from a human. In fact, we need to give our machines a set of values to help them prioritise what is most important.

Children learn how to clean a room through verbal instruction and demonstration. For humans, this is a relatively simple learned behaviour; power relationships like parent-child or teacher-student rely on trust and authority to transfer and instil those ideas. Artificial intelligence operates outside of these pre-existing social constructs and therefore lacks the ability to learn this way. If we only gave our hypothetical robot orderly a very simple command, like “Clean this patient care room”, it could quite reasonably set to work removing every object from the room, furniture and all – technically correct, but very unhelpful. We need to be specific about what we want our robots to achieve, as well as how we want them to achieve it.
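
One way to picture “being specific” is to imagine the command arriving not as a bare sentence but as a structured task specification that spells out what the robot may and may not do. The sketch below is purely hypothetical; the field names and actions are invented for illustration.

```python
# A hypothetical structured version of "clean this patient care room",
# listing what the robot is and is not allowed to do.

care_room_cleaning_task = {
    "goal": "clean patient care room",
    "allowed_actions": [
        "wipe_surfaces",
        "change_bedsheets",
        "empty_bins",
        "restock_supplies",
    ],
    "forbidden_actions": [
        "move_furniture",
        "remove_medical_equipment",
        "touch_patient_belongings",
    ],
    "time_budget_minutes": 20,
}

def is_permitted(action: str, task: dict) -> bool:
    """Only act if the action is explicitly allowed and not forbidden."""
    return (action in task["allowed_actions"]
            and action not in task["forbidden_actions"])

print(is_permitted("change_bedsheets", care_room_cleaning_task))  # True
print(is_permitted("move_furniture", care_room_cleaning_task))    # False
```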

To reflect priorities, we can create motives for a robot, incentivising it to take certain actions. A simple way to illustrate this is with a points system. When teaching a human to change a bedsheet, you might incentivise them by saying “If you have clean sheets, you will sleep better”. We also experience social pressure to keep our houses clean. A robot, however, doesn’t sleep and has no reason to care about clean sheets, nor is it subject to anxieties or social pressure.

To motivate a robot with our values, we might simulate those human drives with artificial values, or “points”. For example, remaining stationary is worth zero points, while cleaning a window earns ten points. The robot is now driven to move toward the window and clean it. It’s a simulacrum of our own serotonin and dopamine drivers, which motivate us to act in socially responsible ways. Of course, the ultimate level of artificial intelligence would be a robot that learned these values from observation – building an intuitive value system that mimics our own.
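
As a toy illustration of that points system, the sketch below hard-codes a small reward table and has the robot greedily pick whichever available action earns the most points. The action names and values are invented for the example.

```python
# A toy version of the "points" idea: a hand-written reward table and a
# robot that simply picks whichever action earns the most points.
# Action names and point values are illustrative assumptions.

REWARDS = {
    "stay_still": 0,
    "clean_window": 10,
    "change_bedsheet": 8,
    "tidy_tray_table": 3,
}

def choose_action(available_actions):
    """Greedy policy: pick the action with the highest assigned value."""
    return max(available_actions, key=lambda action: REWARDS.get(action, 0))

print(choose_action(["stay_still", "clean_window"]))      # -> clean_window
print(choose_action(["stay_still", "tidy_tray_table"]))   # -> tidy_tray_table
```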

Even if we specify the problem well, what if you’re standing between the robot and the bed to be made? If the only value you gave the robot orderly was “change the sheets”, how would it interact with an object (you) standing in its way? Further, what if a clinical robot were being taught to do something far more complex, and with greater risks to humans – say, a robotic surgeon? The good news is that we have already considered this problem and devised a complex, but viable, solution to it. We have to teach machines how to mitigate harm, which means we must also teach them how to identify harm in the first place. This is where the Stop Button Problem comes in.
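
As a very simplified sketch of what “teaching a machine to mitigate harm” could look like within the points framing (and not the fuller solution Part Two will discuss), the example below attaches a large penalty to any action that risks harming a person, so finishing the task never outweighs someone’s safety. The scenario, action names and numbers are all illustrative assumptions.

```python
# A sketch of one way harm mitigation might be encoded: the reward for
# finishing a task is swamped by a large penalty whenever an action
# risks harming a person. Numbers and scenario are illustrative only.

TASK_REWARD = 10        # points for changing the sheets
HARM_PENALTY = 1000     # deliberately huge: harming a person must never "pay"

def score(action: str, person_in_path: bool) -> float:
    if action == "push_through":
        # Finishes the task quickly, but risks injuring whoever is in the way.
        return TASK_REWARD - (HARM_PENALTY if person_in_path else 0)
    if action == "wait_for_path_to_clear":
        # Slower, but never risks harm.
        return TASK_REWARD - 1   # small time penalty
    return 0

actions = ["push_through", "wait_for_path_to_clear"]
best = max(actions, key=lambda a: score(a, person_in_path=True))
print(best)  # -> wait_for_path_to_clear
```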

Part Two of this blog series will cover the Stop Button Problem, a thought experiment designed to illustrate the importance of priorities and a task stack for a robotic assistant. To learn more about machine learning in the meantime, read our Machine Learning Report by following the link below.