Can we still believe our senses?
The menace of deepfakes
I bet you saw Pope Francis in a white puffer jacket a few weeks ago. Or maybe you spotted Selena Gomez on the Met Gala red carpet in a blue beaded dress. So what? The Pope never wore that jacket, and Selena was never at this year's Met Gala.
We can go even further. Last month, the US Republican National Committee used artificial intelligence to create an apocalyptic 30-second ad imagining what President Joe Biden's second term might look like. Several European mayors were tricked into holding video calls with a fake Vitali Klitschko, the mayor of Kyiv. The impostor looked and sounded convincing enough that the mayors suspected nothing until he began discussing controversial topics.
The ultimate question now is:
Can we still believe our senses?
Deepfakes are synthetic media that use artificial intelligence (AI) to manipulate or generate images, videos, or audio recordings. These fakes can be made to appear authentic, but they are entirely fabricated.
Today, several software tools generate images from written natural-language descriptions, called "prompts": Midjourney, OpenAI's DALL-E, Adobe Firefly, and Stable Diffusion, among others. Meta just introduced ImageBind, the first AI model capable of binding data from six modalities at once (images and video, text, audio, depth, thermal, and inertial measurement units) without explicit supervision.
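To make the idea of a "prompt" concrete, here is a minimal sketch of how a single sentence becomes an image through OpenAI's image API. It assumes the official `openai` Python package (v1+) and an API key set in your environment; the model name, prompt, and output handling are illustrative:

```python
# Minimal sketch: turning a text prompt into an image with OpenAI's API.
# Assumes `pip install openai` and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

response = client.images.generate(
    model="dall-e-3",  # image-generation model
    prompt="Pope Francis in a white puffer jacket, photorealistic",
    n=1,               # number of images to generate
    size="1024x1024",
)

# The API responds with a URL to the generated image.
print(response.data[0].url)
```

One sentence in, a photorealistic image out: that is exactly why this technology scales so easily into misuse.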
It is incredible to have an AI tool that can distill what you're thinking and deliver it to the real world through nothing more than language. It's a true breakthrough, but it also represents a great menace to society and democracy. These AI technologies are the primary source of deepfakes, and their proliferation across the internet now looks inevitable.
What is the impact of deepfakes on society and democracy?
Misinformation and propaganda: Deepfakes can spread false information and manipulate public opinion. They can be used to discredit political leaders and candidates, damage their reputations, and ultimately sway election results.
Undermining trust in media and institutions: If deepfakes become widespread, the public's ability to trust what they see and hear will be severely compromised. This can lead to a breakdown in trust in media and institutions and ultimately undermine democracy.
Polarization and division: Deepfakes can further divide society and create social unrest by spreading false information and creating mistrust between different groups.
Creating or altering historical events: AI image-generation software can alter visual records of historical moments or fabricate entirely fake ones.
AI can create it. AI can fix it.
Although deepfakes can be created and shared easily, some digital tools and algorithms can be used to detect them. Social media platforms are also implementing measures to remove them.
AI-based solutions such as machine learning algorithms and deep neural networks analyze a file's metadata, pixel patterns, and audio signatures to determine whether it has been manipulated. These solutions are becoming more sophisticated, making it harder for creators to produce convincing fraudulent content.
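To give a flavor of what "pixel pattern" analysis means in practice, here is a minimal sketch of error-level analysis (ELA), a classic forensic technique that highlights regions of an image that compress differently from the rest. It uses the open-source Pillow library; the file names are placeholders, and real detectors are far more sophisticated:

```python
# Minimal error-level analysis (ELA) sketch using Pillow.
# ELA re-saves an image as JPEG and measures how much each pixel changes;
# spliced or edited regions often recompress differently and stand out.
# "photo.jpg" is a placeholder file name.
import io
from PIL import Image, ImageChops, ImageEnhance

original = Image.open("photo.jpg").convert("RGB")

# Re-save at a known JPEG quality, then reload the compressed copy.
buffer = io.BytesIO()
original.save(buffer, format="JPEG", quality=90)
resaved = Image.open(buffer)

# Pixel-wise difference between the original and the re-saved copy.
ela = ImageChops.difference(original, resaved)

# Amplify the (usually faint) differences so they are visible.
max_diff = max(hi for lo, hi in ela.getextrema()) or 1
ela = ImageEnhance.Brightness(ela).enhance(255.0 / max_diff)

ela.save("photo_ela.png")  # bright regions = candidates for manipulation
```

A bright patch in the output does not prove forgery by itself; it simply tells an analyst where to look more closely.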
However, detection tools may lag behind the generation of these fakes, allowing false representations to dominate the media landscape for days or even weeks. People should therefore stay alert to deepfakes and fact-check any information they come across before believing or sharing it.
Regulate what we can't control.
Deepfake technology is a complex problem that needs a multi-faceted solution. Regulators, big tech companies, and governments are being asked to intervene, but until they do, people must take steps to protect themselves. Media literacy is a critical skill here: it helps us identify manipulated content and empowers society to resist it.
Some countries have taken steps to address deepfake technology. China, for example, has adopted rules requiring manipulated material to carry digital signatures or watermarks and to have the subject's consent. Other governments have yet to develop laws to manage deepfakes, often due to free-speech concerns. In Europe, proposals to set guardrails for the technology have been made but have yet to become law, and attempts in the United States to create a federal task force to examine deepfake technology have stalled.
To effectively combat deepfakes, stronger regulation and cooperation between various stakeholders are imperative. Governments, big tech companies, civil society organizations, and academia must come together to develop frameworks that protect against harmful uses of deepfakes while safeguarding free expression. Collaboration on an international scale is also essential. By working collectively and continuously updating strategies, we can strive toward a future where the risks associated with deepfakes are minimized, enabling us to navigate the digital landscape with confidence and trust.
Check out the full text on our Medium.
Check out this interesting Newsweek analysis for more on the topic.
Worth watching 📺
Deepfakes not only impact politics, but they also perpetuate harmful stereotypes and objectify women's bodies. This is particularly concerning in a capitalist and patriarchal society where women are often objectified through pornography.
Deepfake pornography uses AI to manipulate images and videos, often featuring non-consenting individuals or celebrities in explicit sexual situations. This type of content can be incredibly damaging to those involved, particularly women.
Not only does deepfake pornography violate a person's privacy and consent, but it can also have long-lasting consequences for their personal and professional lives. Victims may experience harassment and cyberbullying, and may even lose their jobs or social standing because of the spread of these fake images.
Furthermore, deepfake pornography feeds a culture of misogyny and violence against women, reinforcing exactly those stereotypes and that objectification.
tech4rights 🤖
Each week, we bring good practice examples of how technology is being used to promote human rights work around the world.
Considering the threat deepfakes pose to democracy worldwide, this week's edition is focused entirely on understanding and fighting them. The tools recommended here are not specifically designed for human rights work. Nonetheless, human rights and democracy have a symbiotic relationship, and to protect one, we must protect the other.
The startup Reality Defender created a system to detect AI-generated content through proactive scanning and fingerprinting technology.
TinEye is a reverse image search engine: you upload a picture or paste an image link, and the algorithm works in reverse, searching for that image across other sources online (see the sketch after this list for the fingerprinting idea behind this). There's also a Chrome extension for it.
Google has also built reverse image search into Google Lens, which you can easily use on your phone.
PimEyes is a face search engine and reverse image search: you can upload a photo of a face and find out where images of it are published online.
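Under the hood, services like these rely on compact image "fingerprints" that survive resizing and recompression. The sketch below illustrates that principle with the open-source `imagehash` library; the file names are placeholders, and this is not how TinEye or PimEyes actually work internally, only a minimal illustration of perceptual hashing:

```python
# Minimal image-fingerprinting sketch using perceptual hashing.
# Requires: pip install pillow imagehash
# "original.jpg" and "candidate.jpg" are placeholder file names.
from PIL import Image
import imagehash

# A perceptual hash condenses an image's visual structure into 64 bits,
# so resized, recompressed, or lightly edited copies hash almost alike.
hash_a = imagehash.phash(Image.open("original.jpg"))
hash_b = imagehash.phash(Image.open("candidate.jpg"))

# Subtracting two hashes gives their Hamming distance:
# 0 means identical fingerprints; small values suggest the same image.
distance = hash_a - hash_b
print(f"Hamming distance: {distance}")
if distance <= 10:  # the threshold is a judgment call
    print("Likely the same image or a close copy.")
```

A reverse image search engine computes fingerprints like these for billions of indexed images and returns the closest matches to yours.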
AI toolbox 🛠️
Here we recommend AI tools that can help improve your productivity in everyday office work.
Consensus AI is an AI-powered search engine that extracts and summarizes scientific research to provide evidence-based answers.
It uses natural language processing and machine learning to evaluate scientific papers and extract the relevant findings. Each answer comes with a confidence level based on the number and quality of its sources.
Ask the right questions to get the most out of it, and it can be a really useful tool for finding science-based data.
We'd REALLY love your feedback! 💬
See you next week!