Originally published on LinkedIn
Is Artificial Intelligence (AI) becoming too human for your taste? Many people are uncomfortable with the idea that machines will one day begin to think like humans. As of today, that is a remote possibility: human thinking is driven and shaped by an invisible factor we call the soul. We have a soul, a conscience and a sense of ethics that shape our behavior. We become intelligent almost unsupervised, assimilating and processing ideas, events and people through a unique and often inexplicable lens. Artificial Intelligence, on the other hand, depends strictly on patterns, policies and training.
When a doctor recommends treatment, we tend to trust the doctor’s opinion. That is because we can ask the doctor why a particular course of treatment was recommended, and we would get an explanation. Doctors (humans) are transparent. There is another factor that favors the doctor: accountability. Transparency and accountability build trust. But when AI looks at an image and says, “That is a cat,” it cannot say why it thought the image was a cat. AI is a black box, and a black box is difficult to trust. Besides, a black box carries very little accountability. When AI begins to explain its decisions, judgments, conclusions, recommendations, standpoints and ideas, in other words, when AI becomes accountable and transparent, we will begin to trust it. “Explainable AI” is what we call it, and it is the holy grail of computer science.
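To make the contrast concrete, here is a minimal, purely illustrative sketch; the features, weights and numbers are all hypothetical. A simple, transparent model such as a linear scorer can report how much each piece of evidence pushed it toward “cat,” which is exactly the kind of breakdown a deep black-box classifier does not readily provide.

```python
# A hypothetical sketch of what "explainable" can mean in practice.
# For a linear scorer, each feature's contribution (weight * value) says
# *why* the score came out the way it did; black-box models offer no such
# ready-made breakdown, which is the gap Explainable AI tries to close.

# Hypothetical hand-picked features extracted from an image.
features = {"pointed_ears": 1.0, "whiskers": 1.0, "barks": 0.0}
weights  = {"pointed_ears": 0.6, "whiskers": 0.8, "barks": -1.5}

def explain(features, weights):
    # Per-feature contribution to the "cat" score.
    return {name: weights[name] * value for name, value in features.items()}

contributions = explain(features, weights)
score = sum(contributions.values())

print(f"cat score = {score:.2f}")
for name, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: {c:+.2f}")  # which evidence pushed the decision, and how hard
```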
It isn’t that doctors don’t make mistakes; there is ample evidence they do. The litmus test is how we react to the errors a doctor makes versus those an autonomous vehicle makes. You would perhaps still continue to trust the doctor, but not the autonomous vehicle.
Let’s examine a near-real scenario. What will an autonomous vehicle do when a person crosses its path and the only way to avoid hitting that person puts its own passenger at risk? Should the vehicle save the person on the street or the person in the car? This is a philosophical debate about ethical decisions. What if the person crossing the path of the vehicle is a baby? The world over, humans would take the same decision: save the baby. But what if the vehicle also had a baby inside? What would the vehicle do? It cannot take a decision the way a human with a conscience would; all the vehicle can do is what it was programmed to do.
The real problem here is that feelings, emotions, relationships, values, morality and ethics are fluid ideas. They change depending on age, sex, culture, profession and environment. This presents a level of complexity for a programmer and for self-learning systems that is beyond comprehension.
Let me use another example. Assume that in 20 out of 20 cases in a hospital, when a surgeon was faced with the choice of saving a mother or her newborn, the surgeon chose the baby. If AI were trained on this data, we would get the same decision for eternity. We know that the available data supports the decision; but we also know that there is something deeply wrong with this method of arriving at it. We know it is wrong because we have feelings, emotions and values. We trust our conscience and moral values to unhesitatingly override a perfectly sound, data-driven decision when required. There is nothing immutable about decisions for humans. That, in essence, is the difference between machines and us.
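Here is a minimal sketch of why that happens; the data and the “learning” rule are hypothetical and deliberately simplistic. A purely pattern-driven system trained on 20 identical outcomes can only ever repeat that outcome.

```python
# A minimal, hypothetical sketch of how a data-driven model freezes a past
# pattern into a permanent rule. All 20 recorded cases ended the same way,
# so the learned policy can only ever repeat that outcome.

training_outcomes = ["save_baby"] * 20  # hypothetical historical records

def learn_policy(outcomes):
    # "Learning" here is simply adopting the most frequent past decision,
    # which is what a purely pattern-driven system converges to on such data.
    return max(set(outcomes), key=outcomes.count)

policy = learn_policy(training_outcomes)

# Whatever the circumstances of the 21st case, the model has no conscience
# or values with which to override the pattern it absorbed.
print(policy)  # -> "save_baby", every single time, until it is retrained
```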
We are now confronted with two divergent ideas: that trust can be built over a period of time, especially when AI becomes explainable (and more so when it is combined with trust-building technologies like blockchain); and that there is no way to trust AI because it cannot factor feelings and emotions into decisions. Trust does evolve over time. After all, we have begun to blindly trust pathology lab reports, Google Maps, cloud storage and even autopilot landings (the irony is we won’t trust an autonomous car!). In each of those instances, we allow humans to resolve uncertainty. We let doctors interpret the lab report, Google Maps offers alternative routes and lets us decide which is best, we pick and choose models for cloud storage, and when the autopilot begins to go wrong, the aircraft’s systems sound an alarm asking the pilot to take over.
As a technology leader and a computing professional, my head says we must develop better AI. But my heart says we need to draw a line and ensure that AI systems know their limits and gracefully hand over control to humans when necessary, at least until we can develop “Responsible AI” to go with “Explainable AI”.
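What might “knowing its limits” look like in code? Here is a minimal, purely illustrative sketch; the threshold and the function names are hypothetical. When the model’s confidence falls below an agreed cutoff, the system declines to act on its own and escalates the decision to a person.

```python
# A minimal, hypothetical sketch of graceful handover: act autonomously only
# when confident enough, otherwise escalate the decision to a human.

CONFIDENCE_THRESHOLD = 0.90  # assumed cutoff; in practice this is domain-specific

def decide(prediction: str, confidence: float) -> str:
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"ACT: {prediction} (confidence {confidence:.2f})"
    # Below the limit: do not guess; hand control to a person instead.
    return f"ESCALATE TO HUMAN: model is unsure (confidence {confidence:.2f})"

print(decide("brake for pedestrian", 0.97))  # confident enough to act
print(decide("brake for pedestrian", 0.55))  # not confident; a human takes over
```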