Artificial Intelligence, or AI as it is commonly called, represents the next generation of decision making. Humans are attempting to build decision-making models that can outperform human judgment itself – efficient, economical, bias-free, and so on.
Artificial Intelligence often involves something called ‘Machine Learning’. It means that once the system is developed and deployed, it continues to learn on its own from the outputs it generates.
Consider the following situation – teaching a computer speech recognition. The developer feeds a variety of sounds and syllables into the system and teaches it to form meaningful words. This database of words, small at the beginning, keeps growing as the system is exposed to new words. This is one example of machine learning.
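The vocabulary-growth idea above can be sketched in a few lines of code. This is a deliberately minimal illustration, not a real speech-recognition system; the class and words are invented for the example.

```python
class VocabularyLearner:
    """A toy 'recognizer' whose word list starts small and grows
    whenever it is exposed to a new word -- the machine-learning
    idea described above, reduced to its simplest form."""

    def __init__(self, seed_words):
        # The initial training set: a small starting vocabulary.
        self.vocabulary = set(seed_words)

    def recognize(self, word):
        """Return True if the word is already known; otherwise
        learn it, so the system keeps improving after deployment."""
        if word in self.vocabulary:
            return True
        self.vocabulary.add(word)  # learn from the new input
        return False

learner = VocabularyLearner(["hello", "world"])
learner.recognize("namaste")         # unknown the first time -> False
print(learner.recognize("namaste"))  # known the second time -> True
print(len(learner.vocabulary))       # vocabulary has grown to 3
```

The key point is that the system's knowledge after deployment is larger than what the developer originally supplied.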
What is Artificial Intelligence capable of?
The sole aim of AI is to make decisions free of the loopholes of human judgment, such as biases. That is all it is. But the wide scope of its applications is where concerns arise.
Artificial Intelligence Powered Self Driving Cars
Yes, AI can be used to “train” cars to drive without any human intervention. The problem, though, is that the system must be prepared for every condition a driver might face on the road, analyze each situation, and make smart decisions. This is where the biases of the developers, coders, or stakeholders come into the picture.
For a very simple, rudimentary example, suppose a driver under pressure to reach someplace within minutes is about to get stuck at a 3-minute red light. He assesses that if he speeds up, he might just be able to beat the signal. An AI-driven car faces a dilemma here – follow the rules OR reach the destination on time. What it prioritizes will essentially depend on who coded it and what their beliefs are.
Detection of Insurance Frauds Powered by Artificial Intelligence
In a more serious scenario, the Government of India has actually deployed AI in public health to detect insurance fraud under its PM-Digital Health Mission. If the AI system commits any inadvertent errors in its decision-making, a person’s life may depend on it. The worst part is that you cannot even challenge the logic of the decision in court, because we do not really understand how the AI works. It can become painstakingly difficult to disentangle the conditions and loops to determine why a person was declared ineligible for health insurance.
Facial Recognition Powered by Artificial Intelligence
AI can be, and already is, used for facial recognition. It can be extremely useful for profiling criminals, returning lost persons to their families, and identifying dead bodies. But when AI is used to predict crimes, it may carry biases against specific sections of society.
Core Issues with Artificial Intelligence
In the NITI Aayog’s “Towards Responsible AI for All”, the following issues are identified.
System Considerations – Implications that result from system design choices, development and deployment practices
Accuracy in tests v/s real life
Starting with an example: an experiment was performed on an image-classification algorithm asked to differentiate pictures of wolves and huskies. It performed really well. But when the system was analyzed, it turned out to be classifying the pictures based on the background, not the features of the animal. It worked on the test data set but would not work as effectively in the real world.
Accuracy in tests versus real life is a constant struggle. Every AI is trained on a data set as part of its initial training, but after a while it should be exposed to the real world for further learning – a process of continuous accommodation and assimilation (Erikson et al.).
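The wolf-versus-husky failure can be sketched with toy numbers. This is not the actual experiment – the features and data below are invented – but it shows how a model can score perfectly on test data by latching onto the background rather than the animal.

```python
def train_best_single_feature(data):
    """Pick whichever single feature best matches the labels on the
    training set -- a stand-in for what the model 'learns'."""
    best = None
    for feat in (0, 1):
        acc = sum(x[feat] == y for x, y in data) / len(data)
        if best is None or acc > best[1]:
            best = (feat, acc)
    return best  # (feature_index, training_accuracy)

# Training photos as (has_pointed_ears, snowy_background) -> label
# (1 = husky, 0 = wolf). Every husky photo happens to be on snow.
train = [((1, 1), 1), ((1, 1), 1), ((1, 0), 0), ((0, 0), 0)]
feat, acc = train_best_single_feature(train)
print(feat, acc)  # 1 1.0 -- the snow feature separates training data perfectly

# In the real world a husky can stand on grass and a wolf on snow:
real_world = [((1, 0), 1), ((0, 1), 0)]
real_acc = sum(x[feat] == y for x, y in real_world) / len(real_world)
print(real_acc)  # 0.0 -- perfect in tests, wrong in deployment
```

The “model” never saw the animal at all; it learned the snow. That is exactly the gap between test accuracy and real-world accuracy described above.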
Understanding How a Decision is Made – Black Box Phenomenon
IBM developed Watson for Oncology – an AI program that helps oncologists decide the best course of treatment for their patients. In cases where Watson reached the same conclusion as the doctors, the doctors agreed with it. Where it came up with a different decision, the doctors simply dismissed it. The situation would be better if they could understand how Watson makes its decisions.
If you are declared ineligible for insurance, loans, credit, etc., you have a right to know why. Reason-giving is a key component of procedural fairness in law, and it includes presenting the rationale of a decision in a way that stakeholders from diverse backgrounds can understand. When AI makes decisions for the Government, it becomes difficult to challenge them in courts of law because it takes a lot of time and resources to understand how exactly a particular decision was made. This is called the Black Box Phenomenon.
This causes loss of faith in the system; difficulty in auditing the programs for compliance and liability; difficulty in debugging, maintaining and updating data, and improving performance; and, sometimes, inability to comply with certain sectoral regulations.
Consistency across different groups of people
Amazon recently tried a machine-learning-based solution to evaluate applicants by observing patterns in resumes submitted over a 10-year period. The system rated men as better candidates than women, simply because there had been far more male applicants than female ones.
AI systems are trained on real-world data, which might introduce cognitive biases into the decision-making process.
Implemented at a large scale, this can produce a large number of biased decisions which, in turn, feed back into the learning and make the system even more biased. This leads to a loss of trust and a disruption of the social order.
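The feedback loop above can be made concrete with a toy simulation. All numbers here are invented: a system approves candidates roughly in proportion to its (already skewed) history, its own decisions are fed back as training data, and a small per-round rounding in favour of the majority group compounds over time.

```python
def approval_rate(history):
    """Fraction of approvals that went to group A (coded as 1)."""
    return sum(history) / len(history)

# Historical data: 70% of past approvals went to group A.
history = [1] * 7 + [0] * 3

for _ in range(5):
    p = approval_rate(history)
    # Each round, 10 new candidates are approved roughly in
    # proportion to what the system has seen, skewed one extra
    # candidate toward the majority group -- a small, plausible bias.
    new_a = min(10, round(p * 10) + 1)
    # The system's own decisions become the next round's training data.
    history += [1] * new_a + [0] * (10 - new_a)

print(round(approval_rate(history), 3))  # 0.85 -- drifted up from 0.70
```

A bias of one candidate per round moves the approval rate from 70% to 85% in just five rounds – the self-reinforcement described above.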
Incorrect decisions leading to exclusion from access to services or benefits
When AI systems are implemented in crucial Government services like health, social security, etc., the social impact of incorrect decisions is very high.
AI is already implemented in the PM-JAY public health insurance scheme, where certain cases are red-flagged as attempted insurance fraud. But considering the high social cost of an incorrect decision (loss of human life, etc.), no treatment is stopped.
Accountability – Who to blame?
Accountability is more about “Who is responsible?” than “Who is to blame?”. There are two problems here – 1. We do not clearly understand how the system works. 2. There is no single person to hold accountable for errors in the decisions made.
This weakens the incentive for responsible action and also makes grievance redressal difficult.
Privacy – How much of my data is used and how safe is it?
AI depends on a large pool of data, which may include Personally Identifiable Information (PII) that is highly confidential. This data is at risk of being used without the consent of the person it belongs to. There is also the possibility of working back from the outputs of the system to arrive at the original data (Model Inversion). This is an outright infringement of the Right to Privacy.
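A deliberately oversimplified sketch of the “work back from outputs” risk: real model-inversion attacks are far more involved, and the salary figures below are made up, but even two published aggregates can be enough to recover one person's confidential value exactly.

```python
salaries = [40_000, 55_000, 62_000]              # existing records
released_mean_1 = sum(salaries) / len(salaries)  # published aggregate #1

salaries.append(75_000)                          # one new, secret record
released_mean_2 = sum(salaries) / len(salaries)  # published aggregate #2

# An observer who saw both published means can reconstruct the
# newcomer's confidential value without ever seeing the database:
n = len(salaries)  # 4 records after the addition
recovered = released_mean_2 * n - released_mean_1 * (n - 1)
print(round(recovered))  # 75000
```

Neither released number looks sensitive on its own; the leak comes from combining outputs, which is why access to a model's outputs also needs safeguards.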
Security Issues
Microsoft released an AI-based chatbot called Tay on Twitter that could learn from human interactions. In a short time, it started posting inflammatory comments and was taken down. This was attributed to trolls feeding Tay inappropriate language and similar content.
While this was a comical episode, it poses a serious question: what happens if these systems fall prey to attacks by malicious actors? An AI system can be rigged to respond incorrectly to its input, producing wrong decisions that may carry high social stakes.
In other examples, an AI system confidently classified a panda as a gibbon after a little noise was added to the image, and another recognized a banana as a toaster with 99% confidence merely because a sticker was placed next to the banana in the picture.
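The panda-to-gibbon trick can be sketched on a toy linear classifier. These are not the actual image models – the weights and inputs below are invented – but the mechanism is the same one behind gradient-sign attacks: nudge each input a small step in the direction that most increases the wrong class's score.

```python
weights = [0.5, -0.8, 0.3]   # a toy linear "classifier"

def score(x):
    return sum(w * xi for w, xi in zip(weights, x))

def classify(x):
    return "gibbon" if score(x) > 0 else "panda"

x = [-0.2, 0.3, -0.1]        # a "panda": its score is -0.37
print(classify(x))           # panda

# Add small noise aligned with the sign of each weight (the sign
# of the gradient for a linear model):
eps = 0.3
x_adv = [xi + eps * (1 if w > 0 else -1) for xi, w in zip(x, weights)]
print(classify(x_adv))       # gibbon -- small noise flipped the label
```

Each input moved by only 0.3, yet the decision flipped – the same way a little image noise flipped “panda” to “gibbon”.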
Societal Considerations of Malicious Use of AI
Psychological Profiling
This includes AI-backed targeted advertising, which could rig political systems across the globe and spread fake news to susceptible populations.
Impact on Jobs
Jobs that humans do today may be replaced by AI-backed machines in the next few years. This raises ethical concerns, as automation can snatch away people’s daily bread. A balance therefore has to be maintained between automation and human labour to avoid unethical outcomes.
Way ahead
These issues, rightly pointed out by the NITI Aayog, are indeed all-round concerns that need immediate consideration before AI systems are implemented at a large scale by Governments.
Legal concerns
Considerations of Artificial Intelligence in the Context of the Constitution of India
The considerations mentioned in the previous section also find expression in the Constitution under the Fundamental Rights. The relevant articles are summarized below.
Article 14: Right to Equality
The Constitution guarantees equal treatment of equally placed persons and groups before the law, and equal protection of the law to all.
Articles 15 & 16: Right against Discrimination
The Constitution prohibits discrimination on the basis of religion, race, caste, sex, descent, place of birth or residence in matters of education, employment, access to public spaces, etc.
While the Constitution prohibits discrimination based on certain markers, it also provides for positive discrimination in the form of affirmative action.
Article 15, while prohibiting discrimination, empowers the Government to make special provision for the advancement of any socially and educationally backward classes of citizens or for the Scheduled Castes and the Scheduled Tribes, and to make provisions for their admission to educational institutions, whether private, aided or unaided.
Article 21: Right to Life and Healthcare
The Constitution guarantees the right to life to all persons. Various High Courts have read the right to healthcare, including the right to avail health insurance, to be part of the right to life.
Article 21: Right to Privacy
The Supreme Court has held that the right to privacy is an intrinsic part of the right to life and liberty guaranteed under Article 21 of the Constitution and as part of the freedoms enshrined in Part III thereof.
Article 38: State Directive for Economic Equality
The Constitution directs the State to ensure the economic welfare of the people and minimize inequalities in income, status, facilities, and opportunities, both between individuals and between groups of people. The State is also directed to ensure a living wage for all workers, including agricultural workers.
Transparency and accountability
The Supreme Court, in its interpretation of the Constitution, has held that transparency in decision-making is critical even for private institutions. The Constitution guarantees accountability of all State actions to individuals and groups. (Source: Responsible AI For All)
Recommended Solutions – Principles for Responsible AI for All
After considering the above issues, the committee has recommended the following 7 Principles –
1 Principle of Safety and Reliability
Sufficient safeguards must be put in place, along with appropriate measures for grievance redressal, care, and compensation in case of any expected or unexpected harm. Constant monitoring of the AI system is required throughout its life.
2 Principle of Equality
AI systems must treat individuals under the same circumstances relevant to the decision equally.
3 Principle of Inclusivity and Non-discrimination
AI systems should not deny an opportunity to a qualified person based on their identity, nor amplify existing social divisions of race, caste, creed, sex, etc. An appropriate grievance redressal mechanism is to be put in place.
4 Principle of Privacy and Security
The data of individuals used to train the system needs to be safeguarded by maintaining privacy and security. Access should be provided only to authorized persons, with sufficient safeguards.
5 Principle of Transparency
The design and functioning of the AI system should be recorded and made available for external scrutiny and audit to the extent possible to ensure the deployment is fair, honest, impartial and guarantees accountability.
6 Principle of Accountability
All the stakeholders involved in the design, development, and deployment of an AI system should be held accountable for their actions. Stakeholders are to ensure that end receivers face as few untoward decisions as possible.
7 Principle of Protection and Reinforcement of Human Values
AI should promote positive human values and not disturb social harmony in community relationships in any way.
Source – Responsible-AI-22022021.pdf (niti.gov.in)
This was a detailed explanation of AI and its far-reaching consequences. For more tech stories, look at the links below.