The Digital Pandemic of Deepfakes and Synthetic Media
From Snapchat filters to face-swapping apps to political memes, synthetic media is everywhere in society today. Synthetic media is defined as the artificial production, manipulation, and modification of data and media through AI algorithms. Thanks to its wide range of applications, it has become a routine part of our lives. Yet despite how common it is, one use of the technology is rarely talked about: “deepfakes”.
Technological advancement has always been a double-edged sword, and synthetic media is no exception. A “deepfake” is an AI-generated fake image or video, most often used to create pornography through synthetic media and related AI techniques. The term originates from the username of a Redditor who, in 2017, started a thread featuring fake videos of celebrities such as Maisie Williams and Taylor Swift in sexual activities. Although the thread was shut down long ago, its 90,000 subscribers are now part of a global pandemic.
According to cybersecurity firm Sensity, deepfakes are growing exponentially, doubling in number every six months. Of the more than 85,000 pieces of content currently circulating online, 90% consist of non-consensual pornography featuring women. Deepfakes of actors, politicians, YouTubers, authors, and even ordinary women have spread across porn sites, social media platforms, and beyond. A database of known deepfake creators shows they come from all over the world, and of those who listed their gender, all but one are male.
As the technology has developed, creating and accessing deepfakes has become much easier in recent years. Dedicated sites and user-friendly apps allow users to upload a woman’s image and generate sexual images within seconds. Organised “request” procedures let anyone commission “custom” deepfakes for as little as $30. As synthetic media becomes more widespread and accessible, deepfakes stand to become more sophisticated and believable across many contexts. Zao, a deepfake app from China, is a prime example: it offers the general public an easy deepfake editing tool.
Many women have already come out as victims of this pandemic.
British writer Helen Mort was alerted by an acquaintance to a series of deepfakes on a very popular porn site depicting her in extreme acts of sexual violence. She described the experience in an interview: “It’s like you’re in a tunnel, going further and further into this enclosed space, where there’s no light.”
Gibi, an ASMR artist with 3.13 million YouTube subscribers, shared that she has been approached by multiple companies offering to remove deepfakes posted without her consent, charging $700 per video.
Beyond Mort and Gibi, thousands of women have fallen victim to this non-consensual distribution of sexual content. Yet many hesitate to speak out, making it even harder for the issue to gain recognition.
A study by Amnesty International investigating abuse against women on Twitter identified a “silencing effect”, in which women feel discouraged from participating online. The same effect appears among deepfake victims. Rana Ayyub, a prominent Indian journalist at the Washington Post and herself a deepfake victim, is no stranger to online hate. Even so, she described struggling to use social media after a user shared a deepfake of her face in an attempt to discredit her work. The video went viral, circulating through influential Indian political circles connected to her reporting: “I used to be very opinionated. Now I’m much more cautious about what I post online… I’m someone who is very outspoken, so to go from that to this person has been a big change.”
Perhaps the worst part is how little recognition and action this issue has received. In most countries, creating and accessing deepfakes is entirely legal. With no laws to regulate the content, and the permanence of digital footprints making it impossible to fully remove anything from the Internet, the deepfake community thrives and grows each year. Moreover, while many countries have begun regulating online platforms through legislation such as the EU’s Digital Services Act and the UK’s Online Harms Bill, many of these laws don’t cover deepfakes. The Online Harms Bill does not list abuse against women as a major “harm”, while the EU’s proposal mentions women only once in the entire document. Beyond these bills, inaction from lawmakers and slow platform moderation make the problem even harder to fight.
Today, deepfake technology remains limited, but it is advancing towards full-body swaps that could alter a person’s appearance and even their actions. In response, Facebook is generating its own deepfakes to build a training dataset. With this database, the company hopes to contribute to the Deepfake Detection Challenge, a global competition created with Amazon, Microsoft, and the nonprofit Partnership on AI, in which companies work towards building detection systems for deepfakes.
Ultimately, this issue deserves far more awareness and action. It has the potential to affect millions of women around the world, altering their lives as they know them. Help bring justice by spreading awareness, participating in campaigns such as #myimagemychoice, and signing petitions.