Technology ain't no man's land
Why we need AI regulation
We often have the feeling that technology is advancing at a pace that is difficult to keep up with, and that's a huge problem. Why? Because when you're not at the table, you're on the menu. Technology will continue to evolve at this crazy speed, and there will be people making decisions that, in one way or another, will impact our lives.
Who is dictating the rules? Where are these rules being made? What values are behind it? Whose voices are being heard?
These are questions that we should ask ourselves about literally everything we wish to properly understand, and technology ain't an exception to the rule. Let's dig into that in today's newsletter!
Technology ain't no man's land 🚫
As the use of algorithms in decision-making processes becomes increasingly prevalent, concerns about their potential harms are on the rise. While algorithms can certainly be a more efficient way of getting things done, they can also perpetuate existing inequalities and injustices if not properly regulated. As a result, there is a growing movement of reformers who are pushing for comprehensive regulatory and policy tools to ensure algorithmic accountability.
These tools can take various forms, but they all aim to make algorithms transparent and accountable and prioritize human rights. This is especially important for algorithms used in public services, where their impact can be far-reaching. One example of the use of algorithms is AI chatbots like ChatGPT, Bard, or Sydney. These tools use natural language processing to converse with users. Although popular, they have faced criticism and regulatory scrutiny due to concerns about privacy and misuse. And we think that's the way to go.
What are the experts saying?
Last month, experts published an open letter expressing concerns about the potential risks of large-scale AI systems. They emphasized prioritizing safety, transparency, and responsible governance in AI research. Due to the unpredictability of these systems, the authors suggest pausing their development until the risks and benefits are better understood. The letter stresses the importance of ensuring AI serves humanity's best interests. European lawmakers responded by calling for a high-level global summit on AI to determine governing principles for its development, control, and deployment. They suggest future-proofing legislation and warn against developers shirking liability.
Where do we stand today?
In the public sphere, the use of these tools has also raised concerns over data protection and potential harm, prompting regulation efforts around the world.
Last year, the Council of Europe established a Committee on AI to develop an international treaty focusing on human rights, the rule of law, and democracy. The EU invoked the principle of sincere cooperation to negotiate the treaty on behalf of the bloc since it had already tabled its own legislative proposal, the upcoming EU AI Act. However, negotiations have been delayed by EU internal dynamics that have affected non-EU countries and undermined the independence of the Council of Europe's mandate on human rights.
The EU AI Act could set a global precedent as the world's first comprehensive AI regulation, and other countries may use it as a template. In Italy, ChatGPT has been temporarily banned by the Data Protection Authority. OpenAI must comply with measures such as age gating and clarifying the legal basis for processing people's data. Germany, Spain, and France have also raised similar concerns.
The US is considering regulations to prevent discrimination and the spread of harmful information. They've identified five principles for guiding the design, use, and deployment of automated systems, outlined in the Blueprint for an AI Bill of Rights. While there is general agreement that AI needs to be controlled and monitored, there is also skepticism about Congress's ability to adopt effective rules. Some worry the regulations will stifle innovation and progress, while others fear they won't be strict enough, leaving the public vulnerable to the risks of AI.
On the other side of the world, China's new regulations for generative AI products require tech companies to register them with the cyberspace agency, undergo security assessment, and follow strict guidelines. The regulations cover all aspects of generative AI, including training and user interaction, with the aim of controlling the technology. AI must align with socialist values, not subvert state power, not incite violence, and not use personal data in training. Violations will be fined and may lead to criminal investigations.
What does civil society have to say about it?
In 2021, European civil society groups called for AI regulation that prioritizes fundamental rights. They want a flexible, future-proof approach to AI risk and a ban on AI systems that violate fundamental rights. They also demand improved standards for AI systems, environmental protections, and a strong enforcement process that prioritizes fundamental rights. Last December, a larger group of organizations called on the EU to prevent AI harms in the field of migration and protect people on the move.
The Global AI Action Alliance (GAIA) is a collaboration between over 100 companies, governments, civil society organizations, and academic institutions created by the World Economic Forum. Its goal is to promote the ethical use of AI and maximize its benefits for society. It is guided by a Steering Committee and supported by non-profits, academic institutions, and industry players. Its areas of focus include educating leaders on AI risks and opportunities, fostering peer learning between legislators, and developing a certification mark for responsible AI systems.
However, last January, civil society organizations were excluded from the drafting process of the above-mentioned first international treaty on AI after the US requested that countries' positions not be disclosed publicly. The US proposed delegating the work to a drafting group made up of all countries that might sign the treaty, leaving civil society groups out. Some NGOs have mobilized against the exclusion, and some countries have called for their participation, but the drafting and initial discussions will nonetheless take place behind closed doors.
As we delve deeper into the age of artificial intelligence, the questions of who has the power to speak on all these changes in society and how we can expand this power to include diverse voices and backgrounds become increasingly crucial. The development and implementation of AI must be regulated to protect human rights, prevent discrimination, and ensure accountability. Private actors are currently determining the major directions of AI development, and it is not up to the tech industry to create their own benchmarks for societal standards.
Worth reading 📖
Linguists also entered the AI discussion: The False Promise of ChatGPT
Published in The New York Times, an article written by Noam Chomsky, Ian Roberts, and Jeffrey Watumull discusses the limitations of machine learning in creating artificial general intelligence. While programs like ChatGPT are proficient at generating fluent, plausible-sounding language, they differ fundamentally from how humans reason and use language.
In a conversation with ChatGPT, Dr. Watumull discusses the potential risks and benefits of AI and its impact on our society, economy, and politics. This experience gives insights into the future of AI, the ethical concerns surrounding it, and how we can harness this powerful technology for the greater good.
Whether you're a tech enthusiast, a human rights advocate, or simply curious about the world we live in, this article is a must-read.
tech4rights 🤖
Each week, we bring good practice examples of how technology is being used to promote human rights work around the world.
Thorn is a nonprofit organization that works to prevent and combat child sexual abuse material (CSAM) on the internet. One way CSAM circulates online is through the sharing of images and videos of child sexual abuse, which can continue to traumatize victims and survivors.
To combat this issue, Thorn has developed a tool called Spotlight, which accelerates law enforcement's ability to identify victims of child sex trafficking. They have also developed Safer, the first holistic tool for tech companies to detect, remove, and report CSAM at scale. While Thorn has made progress, they stress the need for every platform with an upload button to implement proactive CSAM detection measures.
In addition to their work on CSAM, Thorn is also working to prevent self-generated CSAM, which is the fastest-growing type. With the prevalence of social media, sharing nude selfies has become increasingly common among youth, leading to victimization and shaming. To combat this, Thorn has launched NoFiltr, a tool for teenagers that reaches tens of millions of youth with prevention and support messaging.
They have also launched Thorn for Parents, a digital resource to guide parents with knowledge, tools, and tips to have conversations with their children earlier, more often, and without judgment.
Thorn's tools and programs fight child sexual abuse throughout the entire online ecosystem, providing support for victims and survivors and empowering parents and communities to prevent future harm.
AI toolbox 🛠️
Here we make recommendations about AI tools that can help to improve your productivity in everyday office work.
Wiseone is a Google Chrome extension with AI-powered reading capabilities, created to tackle the growing problem of misinformation and fake news that is prevalent on the internet.
The tool has two specific functions:
1/ It summarizes any text you open on your browser in the form of bullet points and a paragraph;
2/ It suggests other sources on the same subject so you can compare perspectives and dig deeper.
We'd REALLY love your feedback! 💬