Artificial Intelligence (“AI”) is becoming a prevalent technology in India, with widespread application in industries such as healthcare, banking, and transportation. However, as AI is increasingly used in decision-making, the possibility of AI bias is a growing concern. Bias arises when AI algorithms produce systematically skewed outcomes against specific groups or individuals, undermining fairness and enabling discrimination, particularly in employment, lending, and criminal justice. Because AI’s impact on disadvantaged groups could be severe, it is essential to understand the legal issues involved and to create frameworks that address them before such discrimination takes root. This article explores those legal issues and examines potential remedies to mitigate the impact of AI bias on fairness and non-discrimination in India.

Understanding AI bias

AI bias is the unfair or prejudicial treatment of certain groups of people by AI systems, typically the result of human biases embedded in the data and algorithms used to train those systems. It can take several forms, including sampling bias, confirmation bias, and prejudice bias.[1]

AI systems can exhibit bias for various reasons, including a lack of diversity in training data, the biases of developers, and poorly chosen evaluation metrics. Historical data may also carry forward past discrimination into AI systems. Biased AI systems have already been observed in India: facial recognition systems show lower accuracy for darker skin tones, and automated hiring algorithms have been found to disadvantage women and minorities. These cases highlight the adverse impact of AI bias on fairness and equal treatment in India, and underscore the need for legal solutions to address the issue.[2]
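One practical way to detect the accuracy gaps described above is a group-wise audit of a model’s outputs. The sketch below is purely illustrative and is not drawn from any Indian regulation or vendor tool: the group labels, records, and 5% tolerance are hypothetical assumptions chosen for the example.

```python
# Minimal illustrative sketch: compare a model's accuracy across demographic
# groups and flag a disparity. All data and thresholds are hypothetical.

from collections import defaultdict

def group_accuracy(records):
    """records: iterable of (group, predicted_label, true_label) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

def flag_disparity(accuracies, max_gap=0.05):
    """Return (flagged, gap): flagged if best-to-worst group gap exceeds max_gap."""
    gap = max(accuracies.values()) - min(accuracies.values())
    return gap > max_gap, gap

# Hypothetical audit sample: (demographic group, model prediction, ground truth)
audit_sample = [
    ("group_a", "match", "match"), ("group_a", "match", "match"),
    ("group_a", "no_match", "no_match"), ("group_a", "match", "match"),
    ("group_b", "match", "no_match"), ("group_b", "match", "match"),
    ("group_b", "no_match", "match"), ("group_b", "match", "match"),
]

accuracies = group_accuracy(audit_sample)
flagged, gap = flag_disparity(accuracies)
print(accuracies)                                  # per-group accuracy
print("disparity flagged:", flagged, "gap:", round(gap, 2))
```

Even such a simple check makes the article’s point concrete: a system that looks accurate overall can still perform markedly worse for one group, which is exactly the kind of disparity a legal or regulatory audit would need to surface.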

Legal Framework against AI Bias in India

India lacks a specific law that regulates AI, but several regulatory and legislative measures touch upon it. The Ministry of Electronics and Information Technology released a draft National Strategy on Artificial Intelligence in 2020, outlining a policy framework for AI development. Other relevant instruments include the Information Technology Act, 2000, the proposed Digital Personal Data Protection Bill, 2022, and the Right to Information Act, 2005. The Digital Personal Data Protection Bill, 2022 governs the processing of personal data and would require AI systems to be transparent, explainable, and auditable, and to eliminate biases. The Information Technology Act, 2000 requires intermediaries to refrain from hosting, publishing, or sharing any information that is damaging or defamatory.

Despite the existence of various legislative provisions that touch on AI bias, enforcing these laws presents significant obstacles. To begin with, the absence of a dedicated law addressing AI creates a regulatory gap that makes it difficult to hold AI developers accountable for biases in their products. There is also a scarcity of specialists and resources to adequately evaluate and monitor the biases of AI systems. In addition, the lack of transparency in AI systems and their decision-making processes makes identifying and correcting biases challenging. Finally, awareness of the importance of mitigating AI bias must be raised among stakeholders such as legislators, legal professionals, and AI developers.

Case Studies

Although AI has the potential to identify societal prejudices, the diversity of Indian society, or indeed of any society, poses particular problems. While Indian biases are not well documented in AI datasets, they are common in everyday life.[3] For example, Google Research found that data gathered from internet users under-represented Muslim and Dalit populations. ImageNet, a widely used image-recognition dataset, contains less than 3% Indian and Chinese faces, resulting in skewed algorithms and preconceptions.[4] Other instances of AI bias, such as Google Photos misclassifying Black men as gorillas and Facebook’s recommendation engine suggesting primate-related content to people watching videos featuring Black men, underline the importance of addressing AI bias.[5] Such algorithms also reinforce existing gender inequalities: Amazon’s CV-shortlisting engine favoured men for numerous job roles, potentially undermining affirmative action initiatives.[6]

International Best Practices

To counter AI bias in India, International Best Practices (“IBP”) must be reviewed. Comparing international policies and legislation with India’s legal framework helps identify gaps and areas for development, and adopting best practices in India can help reduce the impact of bias in AI systems. Building diverse and inclusive teams, maintaining transparency and accountability in AI decision-making processes, and conducting frequent audits and assessments to detect and remove bias are all recognised IBP for managing AI bias. Some jurisdictions have also adopted dedicated rules touching on AI systems, such as the European Union’s General Data Protection Regulation, while others have proposed legislation such as the United States’ Algorithmic Accountability Act.
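As an illustration of the “frequent audits and assessments” mentioned above, one widely used audit metric is the disparate impact (adverse impact) ratio applied to hiring outcomes. The sketch below is a hypothetical example only: the figures are invented, and the 0.8 threshold mirrors the US “four-fifths” guideline, cited here as an international reference point rather than as a requirement under Indian law.

```python
# Illustrative sketch of a disparate impact audit for an automated
# CV-screening tool. All applicant counts are hypothetical.

def selection_rate(selected, applicants):
    return selected / applicants if applicants else 0.0

def disparate_impact_ratios(group_stats):
    """group_stats: {group_name: (selected_count, applicant_count)}.
    Returns each group's selection rate relative to the best-served group."""
    rates = {g: selection_rate(s, a) for g, (s, a) in group_stats.items()}
    benchmark = max(rates.values())
    return {g: (rate / benchmark if benchmark else 0.0) for g, rate in rates.items()}

# Hypothetical shortlisting outcomes: (shortlisted, total applicants)
hiring_outcomes = {
    "men": (60, 200),    # 30% shortlisted
    "women": (30, 200),  # 15% shortlisted
}

for group, ratio in disparate_impact_ratios(hiring_outcomes).items():
    status = "OK" if ratio >= 0.8 else "potential adverse impact"
    print(f"{group}: impact ratio {ratio:.2f} -> {status}")
```

Running such a check periodically, and documenting the results, is one concrete way the transparency and accountability practices discussed in this section could be operationalised by Indian deployers of AI hiring tools.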

To apply these best practices in India, it is critical to consider the country’s unique context and challenges, such as its demographic and cultural diversity and the under-representation of certain groups in AI datasets. Policymakers, academics, and industry professionals must work together to set comprehensive norms and criteria for the ethical and responsible development and deployment of AI in India that can counter such biases.

Way Forward

The growing use of AI in India has raised several concerns about potential prejudice and discrimination in decision-making. AI bias can arise for several reasons, such as a lack of diversity in training data, developer biases, and historical data. India has several regulations that touch on AI, but enforcing them is difficult owing to a lack of specialist resources and stakeholder awareness. To resolve this, India can implement IBP such as creating diverse and inclusive teams, preserving transparency and accountability in AI decision-making processes, and conducting frequent audits and evaluations to uncover and eliminate bias. The Indian Government needs to step in and enact a comprehensive regulatory framework, with legislation and guidelines for ethical and responsible AI development and deployment in India.


Authors: Smita Paliwal and Gaurav Singh Gaur


[1] https://haas.berkeley.edu/wp-content/uploads/UCB_Playbook_R10_V2_spreads2.pdf.

[2] https://www.niti.gov.in/sites/default/files/2022-11/Ai_for_All_2022_02112022_0.pdf.

[3] https://reporter.rit.edu/tech/bigotry-encoded-racial-bias-technology.

[4] https://arxiv.org/pdf/2101.09995.pdf.

[5] https://www.dailymail.co.uk/sciencetech/article-4800234/Is-soap-dispenser-RACIST.html.

[6] https://www.thehindubusinessline.com/opinion/bias-in-artificial-intelligence-why-we-need-more-india-centric-ai/article37532800.ece.