AI: no Hell, but also no Heaven
The problem of biased data
Good morning! It's almost Friday, and we're excited to share our first newsletter with you!
This week we'll start a discussion about a fundamental ethical problem in cutting-edge tech development. Alongside death and taxes, another unequivocal truth today is that AI will be part of our work and lives. We must be open to that, but most importantly, we must stay vigilant about how AI is being created, developed, fed, and applied in society.
Each week, we'll aim to provide some insights that inspire discussion, reflection, and action toward creating a more inclusive, transparent, responsible, and just AI ecosystem.
Join us on this exciting journey! 🚀
Biased algorithms? Aren't they just robots? 🤔
Artificial Intelligence (AI) is a rapidly growing field that encompasses the development of sophisticated machines capable of reasoning, learning, and acting intelligently. These machines can perform tasks that would otherwise be difficult or impossible for humans to complete, enabling us to achieve unprecedented levels of productivity and efficiency.
At the heart of AI are a number of cutting-edge technologies, including machine learning, neural networks, and robotics. Machine learning algorithms, for example, enable machines to automatically improve their performance over time, while neural networks allow them to detect patterns and make predictions based on large amounts of data. Robots, meanwhile, are physical manifestations of AI, designed to perform tasks that require both intelligence and dexterity.
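To make "learning" a bit more concrete, here's a minimal sketch in Python using scikit-learn (the dataset and model are chosen purely for illustration): a classifier's accuracy on unseen examples climbs as it is trained on more data.

```python
# A minimal "learning from data" sketch: test accuracy improves as the
# model sees more training examples. Uses scikit-learn's built-in
# handwritten-digits dataset; any labeled dataset would work the same way.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train on progressively larger slices of the data.
for n in (50, 200, 800, len(X_train)):
    model = LogisticRegression(max_iter=2000)
    model.fit(X_train[:n], y_train[:n])
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"trained on {n:4d} examples -> test accuracy {acc:.2f}")
```

The same mechanism is what makes bias dangerous: the model faithfully learns whatever patterns, good or bad, its training data contains.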
The development of AI tools has the potential to revolutionize the work of promoting and protecting human rights by assisting in the identification and resolution of numerous issues. For instance, AI can be used to analyze large amounts of data and identify patterns of discrimination, which can then inform targeted interventions. AI technology can also be used to monitor human rights abuses, such as forced labor or human trafficking. It can even help to prevent violations by detecting and predicting potential conflicts before they escalate.
However, the use of AI also carries many risks.
AI systems learn from the data they are fed, so when that data reflects historical prejudice and underrepresentation, the inevitable outcome is biased decision-making, perpetuating existing inequalities and discrimination rather than helping to overcome them, and hitting minority groups hardest. One example: facial recognition technology has been shown to have lower accuracy rates for people with darker skin tones and for women, leading to potential misidentification and unjust treatment. Biased language models have been found to perpetuate harmful stereotypes, further marginalizing already vulnerable communities. Biased data likewise skews decisions about whether a person can get a financial loan, a healthcare insurance quote, or a job based on their gender or race.
To address that, it is crucial to ensure that the data used to train AI systems is diverse, representative, anti-racist, and not white-male-centered. What a challenge. This requires careful selection of data sources and a thorough analysis of the data to identify potential biases. Furthermore, it is important to continuously monitor and evaluate the performance of machine-learning systems to ensure they are not perpetuating biases.
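What does that monitoring look like in practice? Here's a toy audit in Python; the decision log, column names, and numbers are entirely hypothetical, and a real audit would be far more rigorous. It compares approval rates and error rates across demographic groups, the kind of routine check we have in mind:

```python
# A toy bias audit over a (hypothetical) loan-decision log:
# 1 = approved, 0 = denied; "repaid" is the ground truth.
import pandas as pd

df = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   1,   0],
    "repaid":   [1,   0,   0,   1,   1,   0,   1,   1],
})

for group, sub in df.groupby("group"):
    approval_rate = sub["approved"].mean()
    # False-negative rate: creditworthy applicants who were denied.
    creditworthy = sub[sub["repaid"] == 1]
    fnr = 1 - creditworthy["approved"].mean()
    print(f"group {group}: approval rate {approval_rate:.0%}, "
          f"false-negative rate {fnr:.0%}")

# Demographic parity ratio: values far below 1.0 flag a disparity worth
# investigating (US hiring guidelines use 0.8 as a rule of thumb).
rates = df.groupby("group")["approved"].mean()
print(f"parity ratio: {rates.min() / rates.max():.2f}")
```

A parity ratio well below 1.0 doesn't prove discrimination by itself, but it is exactly the kind of signal that should send auditors back to the training data.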
In addition, there are other threats associated with the use and misuse of AI systems, such as privacy concerns, security risks, and ethical dilemmas. It is important to carefully consider the risks and benefits of AI systems throughout the development process, rather than only when implementing these technologies and dealing with their consequences. We should ask: who are the people feeding the systems? Where does this data come from? Did people consent to sharing the information being used? The list of questions goes on.
That said, we must make sure that AI technology is developed in more explainable, accessible, auditable, and transparent ways, to prevent discriminatory patterns from being replicated again and again.
What now?
It is inevitable that the conversation everywhere turns to the need for regulation. And it should, because technology can't be treated as a no-man's-land: it replicates the social dynamics of offline life.
Private tech corporations must be held accountable for the work they are developing. Self-regulation, in this case, has about the same success potential as asking a fox to guard the henhouse.
The public sector will inevitably have to step in and establish rules and regulations that ensure the ethical and safe use of AI. Government regulations can help to protect the rights and privacy of individuals who may be impacted by AI, such as those whose data is being collected and analyzed. Overall, while companies should be encouraged to act responsibly, the importance of government oversight in the development and deployment of AI cannot be overstated (we'll discuss regulation and where we are now next week).
The role of AI tools and systems in society is a trending topic today, and it'll continue to be part of daily conversations. That's why we believe the discussion must be global and multilevel, bringing together States (from the global North and South), international organizations, the private sector, academia, and civil society. Within that, we must also ensure that different voices, such as women, LGBTI people, people of color, and other minorities, have a guaranteed seat at the table. Only that can ensure that AI systems are developed and implemented to promote equity, inclusion, and respect for human rights.
Worth reading 📖
To add more spice to the conversation, we recommend Cathy O'Neil's book Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy.
In a nutshell, it's a thought-provoking book that sheds light on how algorithms and big data are being used to make decisions that significantly impact our lives. Mathematician and data scientist Cathy O’Neil explores the dark side of the algorithms that increasingly govern the way we live today. While, in theory, machines should provide greater fairness, she reveals that today's mathematical models are unregulated and uncontestable, even when wrong. Through numerous real-world examples, O'Neil demonstrates how seemingly objective algorithms can actually reinforce discrimination and bias against marginalized groups, particularly in education, employment, and criminal justice. She explains the technical underpinnings of these "weapons of math destruction" with clear and accessible prose and offers suggestions for how we can work to ensure a more equitable and just society in our increasingly data-driven world.
TL;DR? You can also watch her TED Talk, “The era of blind faith in big data must end.”
If you have time, check this out too:
Weapons of Math Destruction: Data scientist Cathy O’Neil on how unfair algorithms perpetuate inequality (text) - Ford Foundation
Building a better society with better AI (text) - MIT Technology Review
We need AI that is explainable, auditable, and transparent (text) - Harvard Business Review
Assembling Accountability: Algorithmic Impact Assessment for the Public Interest (pdf) - Data & Society
tech4rights 🤖
Each week, we'll bring good practice examples of how technology is being used to promote human rights work around the world.
Amnesty “Decoders”
Amnesty International's Decoders platform is a collection of projects that utilize crowdsourcing and digital tools to investigate human rights abuses around the world. Within the platform, they carry out different projects, such as:
Strike Tracker investigates and documents the impact of air strikes on civilians and civilian infrastructure in conflict zones around the world. The "Raqqa" project specifically focused on investigating the impact of US-led coalition air strikes on Raqqa, Syria, which was heavily damaged during the fight against ISIS.
Decode Darfur monitors and documents human rights violations in Darfur, Sudan, with the aim of increasing public awareness and pushing for accountability and justice.
Decode Oil Spills investigates and documents the environmental impact of oil spills around the world, with the aim of holding oil companies accountable and advocating for better regulations to prevent future spills.
Troll Patrol identifies and documents online abuse and harassment against women and marginalized groups. The project has already produced a report on online abuse and harassment in India.
Decode Surveillance investigates and documents the use of surveillance technologies by governments and corporations, with the aim of advocating for stronger human rights protections and better regulation of these technologies.
Do you know a tech4rights good practice that is worth sharing? Reply to this email and tell us. We'll make sure to include it in the next newsletter!
AI toolbox 🛠️
Here we’ll make recommendations about AI tools that can help to improve your productivity in everyday office work.
Ok, ok... If you're not living under a rock, you have probably already heard about ChatGPT. But are you really using it?
We think ChatGPT can significantly improve productivity in various office settings. By using it, you can easily access information on different topics, automate repetitive tasks, and get quick answers to common questions. It can be used for language translation, proofreading documents, and brainstorming new ideas, freeing up valuable time for more critical tasks. Keep in mind that it's not flawless, though. Revise, confirm, and edit the answers it gives you to ensure their accuracy ;)
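If you want to go a step beyond the chat window, the same models can be scripted. Here's a minimal sketch of an automated proofreading pass using OpenAI's Python client; the model name and prompt are illustrative, and you'll need your own API key in the OPENAI_API_KEY environment variable.

```python
# A minimal proofreading script using OpenAI's Python client
# (pip install openai). Model name and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

draft = "Their is three reason why we recomend this aproach."

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumption: any chat model works here
    messages=[
        {"role": "system",
         "content": "You are a careful proofreader. Fix spelling and "
                    "grammar, and return only the corrected text."},
        {"role": "user", "content": draft},
    ],
)
print(response.choices[0].message.content)
```

The same caveat applies here as in the chat window: treat the output as a draft, not a verdict.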
Tools like ChatGPT can help us avoid overtime after work hours, reducing the workload and helping us complete tasks more efficiently. We all need to pursue work-life balance, and this AI tool could lend us a robotic hand with that.
“This is not a math test. This is a political fight. We need to demand accountability for our algorithmic overlords.” (Cathy O'Neil)
We'd REALLY love your feedback! 💬