Matt Burgess
The number of nonconsensual deepfake porn videos online has exploded since 2017. As the harmful videos have spread, thousands of women—including Twitch streamers, gamers, and other content creators—have complained to Google about the websites hosting the videos and tried to get the tech giant to remove them from its search results.
A WIRED analysis of copyright claims regarding websites that host deepfake porn videos reveals that thousands of takedown requests have been made, with the frequency of complaints increasing. More than 13,000 copyright complaints—encompassing almost 30,000 URLs—have been made to Google concerning content on a dozen of the most popular deepfake websites.
The complaints, which have been made under the Digital Millennium Copyright Act (DMCA), have resulted in thousands of nonconsensual videos being removed from the web. Two of the most prominent deepfake video websites have been the subject of more than 6,000 and 4,000 complaints each, data published by Google and Harvard University’s Lumen database shows. Across all the deepfake platforms analyzed, around 82 percent of complaints resulted in URLs being removed from Google, the company’s copyright transparency data shows.
Millions of people discover and access deepfake video websites by searching for the term deepfakes, often in conjunction with the names of celebrities or content creators. Those fighting deepfakes online, including by filing systematic DMCA complaints, say the volume of copyright complaints and the high rate of removals suggest Google should be more proactive against these websites, potentially by removing them from search results entirely.
“If the main objective of these websites is to misuse and manipulate a person’s personal brand, remove their autonomy, or serve as a platform for revenge porn, they should not exist,” says Dan Purcell, the founder and CEO of Ceartas, a company that helps creators remove their content when it is used without permission.
Google has received requests to remove 12,600 URLs related to the single largest deepfake video website, and 88 percent of them have been taken down. Given that volume of abusive content, Purcell argues, the tech giant should examine why these sites continue to appear in search results at all. “If you eliminate 12,000 links due to infringement, why aren’t they totally removed?”
In the five years since nonconsensual deepfake porn videos first appeared, tech firms and lawmakers have been slow to respond, while advances in machine learning have made deepfakes easier to create. The explicit deepfake content available today includes videos in which a person’s face is superimposed onto existing consensual pornography, apps that swap a person’s face onto a nude image or “undress” them, and generative AI tools that can create entirely new deepfake images, such as the artificial images of Taylor Swift that circulated online last month.
Each method is weaponized—almost always against women—to degrade, harass, or cause shame, among other harms. Julie Inman Grant, Australia’s e-safety commissioner, says her office is starting to see more deepfakes reported to its image-based abuse complaints scheme, alongside other AI-generated content, such as “synthetic” child sexual abuse and children using apps to create sexualized videos of their classmates. “We know it’s a really underreported form of abuse,” Grant says.
As the number of videos on deepfake websites has grown, content creators—such as streamers and adult models—have turned to DMCA requests. The DMCA allows people who own the intellectual property in certain content to request that it be removed from websites directly or from search results. More than 8 billion takedown requests, covering everything from gaming to music, have been made to Google.
“The DMCA historically has been an important way for victims of image-based sexual abuse to get their content removed from the internet,” says Carrie Goldberg, a victims’ rights attorney. Goldberg says newer criminal laws and civil law procedures make it easier to get some image-based sexual abuse removed, but deepfakes complicate the situation. “While platforms tend to have no empathy for victims of privacy violations, they do respect copyright laws,” Goldberg says.
WIRED’s analysis of deepfake websites, which covered 14 sites, shows that Google has received DMCA takedown requests about all of them in the past few years. Many of the websites host only deepfake content and often focus on celebrities. The websites themselves include DMCA contact forms where people can directly request to have content removed, although they do not publish any statistics, and it is unclear how effective they are at responding to complaints. One website says it contains videos of “actresses, YouTubers, streamers, TV personas, and other types of public figures and celebrities.” It hosts hundreds of videos with “Taylor Swift” in the video title.
The vast majority of DMCA takedown requests linked to deepfake websites listed in Google’s data relate to two of the biggest sites. Neither responded to written questions sent by WIRED. For most of the 14 websites, more than 80 percent of complaints resulted in content being removed by Google. Some copyright takedown requests sent by individuals convey the distress the videos can cause. “It is done to demean and bully me,” one request says. “I take this very seriously and I will do anything and everything to get it taken down,” another says.
“It has such a huge impact on someone’s life,” says Yvette van Bekkum, the CEO of Orange Warriors, a firm that helps people remove leaked, stolen, or nonconsensually shared images online, including through DMCA requests. Van Bekkum says the organization is seeing an increase in deepfake content online, and that victims face hurdles in coming forward to ask that their content be removed. “Imagine going through a hiring process and people Google your name, and they find that kind of explicit content,” van Bekkum says.
Google spokesperson Ned Adriance says the company’s DMCA process allows “rights holders” to protect their work online, and that the company has separate tools for dealing with deepfakes, including a dedicated form and removal process. “We have policies for nonconsensual deepfake pornography, so people can have this type of content that includes their likeness removed from search results,” Adriance says. “And we’re actively developing additional safeguards to help people who are affected.” Google says that when it receives a high volume of valid copyright removal requests about a website, it uses them as a signal that the site may not be providing high-quality content. The company also says it has created a system to remove duplicates of nonconsensual deepfake porn once one copy has been removed, and that it has recently updated its search results to limit the visibility of deepfakes when people aren’t searching for them.
The DMCA is an imperfect tool, particularly when it comes to deepfakes. Goldberg says it needs someone to “affirm under penalty of perjury” that they’re the copyright holder of the video or images. “But the process of creating a deepfake can transform the image so much that the resulting image is not the same intellectual property as the images it was sourced from,” Goldberg says. Ultimately, this could mean the person creating the deepfake video may be the copyright holder of the abusive content. “Our firm has long advocated that the copyrighting of illegal works should revert to the victims so they can exercise control over them,” Goldberg says. “But the law has not yet caught up.”
Purcell, of Ceartas, says the current law is easily gamed by deepfake platforms that resist DMCA requests. In those cases, he says, legal action is the only alternative, but it is often impractical: the websites frequently publish no contact information or details about who runs them, and many operate from jurisdictions where legal action is difficult to pursue. Van Bekkum says these platforms deliberately conceal their identities and often rely on offshore hosting companies.
Grant, the Australian regulator, says her office works with tech platforms and has the power to order content removed. It also goes after individuals who upload videos to deepfake platforms. Last year, it pursued a man who posted images of Australian public figures on one of the largest deepfake websites, directing him to take down the material and delete it from his devices.
When he received the legal notice, the man responded defiantly, saying the removal order meant nothing to him because he did not live in Australia, and daring the office to seek an arrest warrant if it believed it was in the right. The matter was referred to the courts. Several months later, border officials informed Grant’s office that the man had entered Australia, and he was charged with contempt of court.
The case is a rare example of regulators or law enforcement successfully acting against deepfake creators. Adam Dodge, an attorney and founder of Endtab (Ending Technology-Enabled Abuse), believes tech companies should invest more in educating schools and communities about the risks of creating and spreading deepfakes. He also calls for laws that lift the burden of requesting content removal from victims. Dodge says these issues should be treated as seriously, and as central to online safety, as other banned, illegal, or regulated content that is universally recognized as abhorrent and unacceptable.