AI and Human Rights: A Global Concern

Published by AICHALLENGERS

Artificial intelligence (AI) is transforming society at an unprecedented pace. From autonomous vehicles to facial recognition systems and virtual assistants like Siri or Alexa, AI has become deeply embedded in our daily lives. However, alongside this technological revolution come serious ethical concerns, particularly regarding its impact on human rights. AI holds the power to improve the lives of millions, but it can also threaten fundamental rights when poorly designed or misused.

1. Human Rights at Risk from AI

AI systems rely heavily on the collection and analysis of vast amounts of data, often involving personal information. This means that rights such as privacy, freedom from discrimination, and equitable access to resources can be directly impacted. Depending on how AI is developed and implemented, it can either uphold or threaten these human rights.

Example: Facial Recognition and Mass Surveillance

One of the most striking examples of this threat is the use of facial recognition technology for mass surveillance. In China, the government employs advanced facial recognition systems to monitor the movements of millions of citizens, particularly in sensitive regions like Xinjiang, where ethnic minorities such as the Uighurs are often targeted. These technologies enable not only real-time surveillance but also the creation of massive databases that allow continuous tracking of individuals without their consent.

This raises serious concerns about the right to privacy as well as the rights to freedom of expression and movement. Knowing that they are constantly being watched, citizens may hesitate to express dissenting opinions or participate in protests, thereby undermining democratic freedoms.

2. The Right to Non-Discrimination

AI can also exacerbate existing forms of discrimination. For instance, AI systems used in hiring processes can perpetuate unconscious biases present in the training data. If an algorithm is trained on historical hiring data from a male-dominated industry, it might unfairly favor male candidates while discriminating against women or minority groups.
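The mechanism is worth making concrete. A minimal, hypothetical sketch (all records below are invented, and the "model" is deliberately naive) shows how a system that simply learns hiring rates from skewed historical data reproduces the imbalance as a score:

```python
# Hypothetical sketch: a naive "model" that scores candidates by how often
# similar past candidates were hired. All data here is invented for
# illustration; real hiring models are more complex but can absorb the
# same historical bias through proxy features.

from collections import defaultdict

# Historical records from a male-dominated industry (hired: 1 = yes, 0 = no).
history = [
    {"gender": "M", "hired": 1}, {"gender": "M", "hired": 1},
    {"gender": "M", "hired": 1}, {"gender": "M", "hired": 0},
    {"gender": "F", "hired": 0}, {"gender": "F", "hired": 0},
    {"gender": "F", "hired": 1}, {"gender": "F", "hired": 0},
]

def train(records):
    """Learn P(hired | gender) directly from the historical data."""
    counts = defaultdict(lambda: [0, 0])  # gender -> [hired, total]
    for r in records:
        counts[r["gender"]][0] += r["hired"]
        counts[r["gender"]][1] += 1
    return {g: hired / total for g, (hired, total) in counts.items()}

model = train(history)
print(model)  # {'M': 0.75, 'F': 0.25} -- past imbalance becomes the score
```

Nothing in the code is malicious; the bias enters entirely through the training data, which is why audits of the data matter as much as audits of the algorithm.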

Example: COMPAS Algorithm and Racial Bias

A well-known example of this issue is the COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) algorithm, used in the U.S. criminal justice system to assess the likelihood of an individual reoffending after release. A 2016 investigation by ProPublica found that the algorithm exhibited racial bias: Black defendants who did not go on to reoffend were nearly twice as likely as white defendants to be incorrectly labeled high-risk. This has led to unjust decisions in bail hearings and parole evaluations, worsening racial inequalities already present in the justice system.
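The metric at the heart of ProPublica's analysis, the false positive rate per group, is simple to compute. Here is a hedged sketch using invented records (not real COMPAS data) to show what an unequal error rate looks like:

```python
# Hypothetical sketch of the disparity ProPublica measured: the false
# positive rate (non-reoffenders labeled "high risk"), computed separately
# for each group. The records below are invented for illustration.

def false_positive_rate(records, group):
    """Share of non-reoffenders in `group` labeled high-risk by the tool."""
    negatives = [r for r in records
                 if r["group"] == group and not r["reoffended"]]
    false_pos = [r for r in negatives if r["labeled_high_risk"]]
    return len(false_pos) / len(negatives)

records = [
    {"group": "A", "labeled_high_risk": True,  "reoffended": False},
    {"group": "A", "labeled_high_risk": True,  "reoffended": False},
    {"group": "A", "labeled_high_risk": False, "reoffended": False},
    {"group": "A", "labeled_high_risk": False, "reoffended": False},
    {"group": "B", "labeled_high_risk": True,  "reoffended": False},
    {"group": "B", "labeled_high_risk": False, "reoffended": False},
    {"group": "B", "labeled_high_risk": False, "reoffended": False},
    {"group": "B", "labeled_high_risk": False, "reoffended": False},
]

print(false_positive_rate(records, "A"))  # 0.5
print(false_positive_rate(records, "B"))  # 0.25
```

A tool can be "accurate" on average while still making this kind of error twice as often for one group, which is exactly why aggregate accuracy figures alone cannot certify an algorithm as fair.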

3. Regulation and Legal Frameworks

Thankfully, governments and international organizations are beginning to address these risks. The European Union, for example, implemented the General Data Protection Regulation (GDPR) to safeguard citizens’ rights in the face of data misuse. The GDPR gives individuals the right to know what data is being collected about them, how it is being used, and allows them to request its deletion under certain conditions.

International bodies like UNESCO are also working to create ethical frameworks that guide the development and deployment of AI technologies. These frameworks aim to ensure that AI is used to promote human rights and social justice, rather than undermine them.

4. Conclusion: AI in Service of Human Rights

When properly regulated, AI can be a powerful tool for promoting human rights. NGOs are already using AI to monitor human rights violations around the world, analyze vast datasets to detect troubling trends, and facilitate access to justice for marginalized populations. However, it is essential for tech companies, governments, and civil society to work together to ensure that AI is developed and deployed in an ethical and responsible manner.


References:

  • Eubanks, Virginia. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin’s Press, 2018.
  • Noble, Safiya Umoja. Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press, 2018.
  • Raji, Inioluwa Deborah, et al. “Saving Face: Investigating the Ethical Concerns of Facial Recognition Auditing.” Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020.
  • Angwin, Julia, Jeff Larson, Surya Mattu, and Lauren Kirchner. “Machine Bias: There’s Software Used across the Country to Predict Future Criminals. And It’s Biased against Blacks.” ProPublica, 2016.