We’ve entered an age where machine learning and artificial intelligence technologies are poised to change life as we know it. While these technologies can transform the quality of our health system, there are ethical considerations that must be addressed.
Taken from the transcript of the “Artificial Intelligence and Ethics” panel at the Global Health Privacy Summit, Georgetown Law, June 1–2, 2017:
“In order to have ubiquitous, affordable, and even predictable healthcare, machine learning is essential. There are between 400 million and 2 billion people who don’t have access to healthcare or sanitation facilities. Whether it’s to lower the cost of healthcare or to literally make healthcare ubiquitous so that all of humanity has the opportunity to receive care, machine learning is essential to this.
You have events like the XPRIZE competitions that Peter Diamandis runs, where the boundaries of human potential are pushed by focusing on problems currently believed to be unsolvable. Some of these problems can be found in healthcare. One vision is that, through machine learning, you can have a handheld artificially intelligent device that matches the diagnostic accuracy of several board-certified physicians; this is a very interesting prospect and just one way machine learning can be applied in the healthcare setting. With that said, there are some real ethical considerations we should look at when utilizing machine learning technology.”
Transparency and Values Alignment
The first is that there needs to be a level of transparency associated with machine learning systems, both in terms of consent and in terms of the intended use of the data the machines consume. That transparency should also extend to the algorithms, or at least the approaches, at work: what are the approaches in this machine learning system?
The next consideration is values alignment. The scale at which we can deploy machine learning gives this technology immense reach; as Dr. Fleming pointed out, it can be put onto an iPhone. So, as machine learning is pushed out, its ability to learn quickly and modify behavior at an unprecedented scale is significant. There has to be a values alignment between the recipient and participant in the technology and the vendor and holder of the technology, or we’re going to see behaviors that we wouldn’t expect from the machine.
A Human in the Loop
The last thing I would say is that I am personally a believer in supervised learning systems: there should be a human in the loop. This applies beyond healthcare. The notion that we’re going to create machines far greater than we are in their intelligence is, today, narrow-case intelligence; tomorrow we’ll be calling it broad. In my opinion, we cannot surrender to broad intelligence on the premise that the machine knows more than we do, becoming passive recipients of its output and acting on it without question. That is an extremely dangerous posture, and it has far-reaching implications. Take the legal system, for example. What does it mean to present evidence to a judge? An extreme example would be using a computer to evaluate evidence and conclude whether or not a person is guilty of breaking the law. The problem is that machines would be making life-changing decisions without us having transparency into the associated evidence and algorithmic approaches.
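As a minimal sketch of what a human-in-the-loop posture can look like in practice (all names and thresholds below are hypothetical illustrations, not anything described on the panel), a system might act automatically only on high-confidence predictions and route everything else to a human reviewer:

```python
from dataclasses import dataclass

@dataclass
class Prediction:
    """A model output paired with the model's confidence (0.0-1.0)."""
    label: str
    confidence: float

def triage(pred: Prediction, threshold: float = 0.9) -> str:
    """Return 'auto' only when the model is confident enough to act on;
    otherwise return 'human_review' so a person stays in the loop."""
    if pred.confidence >= threshold:
        return "auto"
    return "human_review"

# A confident output may be acted on; an uncertain one is escalated.
print(triage(Prediction("benign", 0.97)))     # auto
print(triage(Prediction("malignant", 0.62)))  # human_review
```

The design choice here is that the threshold, not the machine, encodes where human judgment takes over; in a high-stakes domain like diagnosis or legal evidence, that boundary would be set conservatively and reviewed itself.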
Artificial intelligence stands to revolutionize healthcare as we know it, making it more affordable and available to hundreds of millions of people around the globe. But it must be done ethically, with transparency, values alignment, and a human in the loop.