Artificial Intelligence (AI) is pushing the frontiers of technology and has increasingly sparked debate about its use cases, challenges, and potential downsides. Nevertheless, the world is increasingly convinced of the enormous potential of AI, be it for economic, commercial, or social gains. The Oxford dictionary defines artificial intelligence as the study of the way in which computers can be made to copy the way humans think. This definition, though simplistic, raises several questions: can a machine copy human intelligence, can the copy be as good as the original, and can the copy eventually beat human intelligence?
In this article, I delve into a few questions that come to mind when we hear the term Artificial Intelligence.
Can Artificial Intelligence replicate human intelligence?
As laypeople, we tend to think that AI systems are pre-programmed and that it would therefore be difficult for AI to replicate the human cognitive process, which is far more intuitive and spontaneous. This view rests on the erroneous assumption that AI is merely machine learning pre-programmed for a specific situation. A strong AI can pick suitable algorithms for a particular decision and execute them through a combination of machine learning, reinforcement learning, and deep learning neural networks that together can replicate the functioning of the human brain to a great extent. That said, this holds only for a strong AI system; today's AI can at best be called weak or narrow AI, which can address only a few pre-defined aspects of a decision.
How can AI help the world become a better place?
AI promises to be a friend of humanity on multiple fronts, with enormous potential. In the base case, AI-based conversational chatbots, whether web- or email-based, and virtual assistants have already made their mark, contributing to lower costs and improved customer experience. Further, AI use cases built on natural language processing and facial recognition have achieved initial success in sectors such as credit, healthcare, and marketing. Going beyond commercial use cases, the McKinsey Global Institute, in a 2018 discussion paper, compiled more than 160 AI use cases for social good, many of them already under implementation in small real-world case studies. One interesting example highlighted in the paper is a disease-detection AI system developed by researchers at the University of Heidelberg and Stanford University. The system visually analyses natural images, including skin lesions, to evaluate whether they are cancerous, outperforming professional dermatologists. In another case, AI-enabled wearable devices detect early signs of diabetes by analysing heart sensor data with up to 85 per cent accuracy. Imagine the reduction in the global disease burden if we had more affordable and non-intrusive means of diagnosing diseases early.
What are the moral dilemmas in AI-based decision making?
Coming to the concerns, the primary one, which, if real, is genuinely scary, is that AI could surpass human intelligence and grow beyond our control, emerging as a threat to humans. The threat is theoretically possible, but only once we move from today's weak or narrow AI into the arena of strong AI. Even then, humans, with their emotional intelligence, may still have the edge over artificial intelligence, or so we can hope.
Another common concern is the risk of invading privacy, a fundamental human right, driven by excessive monitoring of data, facial expressions, voice, emotions, and behaviour. Being under the constant scrutiny of machines, whether it be an Alexa at home, your mobile phone, or a camera or scanner in a public place, is certainly not a pleasant feeling. In addition, since data is the basis of AI, it may trigger a maddening competition to gather piles of data on a person, making it difficult to draw a line between what is acceptable and what is not.
Further, inbuilt biases in past data and the inadequate availability of quality data may lead to erroneous decisions that keep amplifying with every future decision. For example, using past hiring data to recruit candidates can return a higher rejection rate for a particular gender or religion simply because the past decisions embedded those biases.
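The bias-amplification point above can be illustrated with a small, entirely hypothetical simulation (the group names, skill thresholds, and sample sizes below are invented for illustration, not drawn from any real dataset): a naive model that learns only the historical hire rate per group faithfully reproduces whatever bias was baked into those past decisions.

```python
import random

random.seed(0)

# Hypothetical historical hiring data in which "group_b" candidates were
# held to a much higher skill bar than "group_a", independent of merit.
def make_history(n=1000):
    data = []
    for _ in range(n):
        group = random.choice(["group_a", "group_b"])
        skill = random.random()  # true qualification, uniform in [0, 1)
        # Biased past decision: group_b needed far higher skill to be hired.
        threshold = 0.5 if group == "group_a" else 0.8
        hired = skill > threshold
        data.append((group, skill, hired))
    return data

# A naive "model" that learns only the historical hire rate per group.
# Used to score new candidates, it reproduces the old bias exactly.
def fit_hire_rates(data):
    rates = {}
    for g in ("group_a", "group_b"):
        outcomes = [hired for group, _, hired in data if group == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return rates

history = make_history()
print(fit_hire_rates(history))  # group_b's learned rate is far lower
```

If the model's own recommendations are then fed back as future training data, the gap persists or widens, which is the amplification the paragraph describes.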
Another interesting moral dilemma that may creep into AI-based decision making is choosing between two options that are equally right but must be instantly weighed against moral, social, and emotional attributes, which are individual-specific. Take an example listed on UNESCO's website: a fully automated driverless car is fed enormous data and trained to move safely with a clear understanding of its driving environment. But how do you train the car's algorithm to decide when, after a brake failure, it is moving at high speed towards an older adult and a child and can save only one of them? Further, how would the decision change if that older adult happened to be your mother and the child a stranger?
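One way to see why this is a moral dilemma rather than a mere engineering problem is that any automated policy forces the designer to encode explicit numeric weights for outcomes that humans weigh contextually. The sketch below uses arbitrary, assumed weights; even a tie between them must be broken somehow, and that tie-break rule is itself a hidden moral choice.

```python
# Hypothetical sketch: the outcome weights below are arbitrary assumptions.
# Assigning any concrete numbers here is exactly the dilemma the text
# describes, because the algorithm cannot act without them.
OUTCOME_WEIGHTS = {"child": 1.0, "older_adult": 1.0}  # "equal" on paper

def choose_to_save(options, weights):
    # Picks the option with the highest weight. With equal weights, max()
    # returns whichever option happens to come first: an arbitrary
    # tie-break that silently decides a life-or-death question.
    return max(options, key=lambda option: weights[option])

print(choose_to_save(["older_adult", "child"], OUTCOME_WEIGHTS))
```

No choice of weights or tie-break rule resolves the underlying question; the code only makes visible that the question has to be answered in advance.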
AI holds the promise of many path-breaking changes that can transform the world for better and happier outcomes. However, several open questions and concerns must be addressed along the way to ensure that it is deployed for the common good. In addition, as the technology makes further inroads, a proper governance framework needs to be designed and implemented to navigate unethical, privacy-invading, or illegal use cases.