Major technology companies, such as Google, Apple, and Discord, have been implicated in facilitating user sign-ups for dangerous “undress” websites that use AI to remove clothing from photographs, creating fake nude images of the subjects without their consent. These deepfake websites have integrated the companies’ login mechanisms, making them easier to access and seemingly lending them legitimacy.
An analysis by WIRED identified 16 of the largest so-called undress and “nudify” websites leveraging login services provided by Google, Apple, Discord, X (formerly Twitter), Patreon, and Line. This integration simplifies account creation on the deepfake sites, lends them a veneer of credibility, and encourages users to purchase credits to generate fake images.
The issue of non-consensual intimate deepfake content, particularly targeting women and girls, has grown with advances in generative AI technology. The problem of “undressing” individuals digitally is becoming alarmingly common, with reports of teenage boys creating fake nudes of peers. There has been criticism of the sluggish response by tech companies to these issues, as these websites remain prominently visible in search engines, with paid advertisements and listings in major app stores.
“This trend continues to normalize acts of sexual violence against women and girls, enabled by Big Tech,” states Adam Dodge, a lawyer and founder of EndTAB (Ending Technology-Enabled Abuse). “Login APIs provide ease of use that should not extend to facilitating sexual violence,” he added. “Rather than easing access to these apps, we should be barricading them.”
The authentication mechanisms reviewed by WIRED, which rely on the companies’ APIs and standard login flows, let users access the deepfake websites through their existing accounts. Google’s sign-in is present on 16 sites and Discord’s on 13; Apple’s appears on six, X’s on three, and Patreon’s and messaging service Line’s on two each.
WIRED has chosen not to disclose the names of these sites due to their role in facilitating abuse. Many are connected to broader networks and controlled by the same entities. Even though these tech giants have established policies prohibiting harmful uses, their login tools have been used on these platforms.
Following inquiries from WIRED, Discord and Apple representatives stated that they have terminated the developer accounts associated with these sites. Google declared it would act against developers infringing its guidelines. Patreon has barred accounts involved in creating explicit content, whereas Line is investigating but has not commented on specific sites. X has not responded to inquiries regarding the misuse of its login technology.
Discord’s vice president of trust and safety, Jud Hoffman, informed WIRED that the company had revoked the websites’ API access for breaching its developer policy. Shortly after, a message in a Telegram channel from one of these undress services indicated that Discord login was “temporarily unavailable” as it sought to regain access. The service has not replied to WIRED’s inquiries about its operations.
Since the emergence of deepfake technology towards the end of 2017, there has been a substantial increase in the creation of nonconsensual intimate videos and images. While producing videos is more challenging, generating images via “undress” or “nudify” websites and apps has become widespread.
“We must be clear that this is not innovation, this is sexual abuse,” states David Chiu, San Francisco’s city attorney, who has initiated a lawsuit against various undress and nudify websites and their creators. Chiu highlights that the websites targeted by the lawsuit have attracted approximately 200 million visits just in the first half of this year. “These websites are committing severe exploitation of women and girls globally,” he asserts. “These images are used to intimidate, embarrass, and threaten women and girls.”
The undress websites operate as opaque businesses, revealing little about their ownership or operations. The sites are often similar in appearance and have nearly identical terms of service agreements. Some are available in over a dozen languages, indicating the global scale of the issue. Moreover, several Telegram channels linked to these websites have memberships numbering in the tens of thousands.
These websites are constantly evolving, frequently announcing new features in development. One claims its AI can personalize how women’s bodies appear and even allow “uploads from Instagram.” Typically, these sites charge users to create images and promote affiliate schemes that encourage sharing. Some have also banded together to launch their own cryptocurrency, which could be used to purchase images.
Alexander August, self-identified as the CEO of a website, told WIRED via email that he acknowledges concerns about the potential misuse of their technology. He indicated that the website utilizes various safeguards to prevent the generation of images involving minors and expressed a commitment to social responsibility. He mentioned the website’s openness to collaborate with governmental bodies to improve transparency, safety, and reliability.
When attempting to sign up or utilize image-generating features on the site, users are frequently prompted to use tech company logins. The extent of usage for these logins remains unclear, with many websites alternatively offering account creation using just an email address. A review indicated that most of these sites utilize sign-in APIs from multiple tech companies, with “Sign-In With Google” being predominantly employed. Through this option, the Google system displays information such as the user’s name, email address, language preferences, and profile picture.
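For context on how these sign-in systems expose profile data: “Sign-In With Google” is built on OpenID Connect, in which the relying website receives a signed ID token (a JWT) whose claims carry the fields the analysis describes, such as name, email, locale, and profile picture. The sketch below decodes such a token’s claims; the token and all values are illustrative, and a real integration must also verify the token’s signature against Google’s published keys, which this example deliberately omits.

```python
import base64
import json

def decode_id_token_claims(id_token: str) -> dict:
    """Decode the claims (payload) segment of a JWT ID token.

    Note: this skips signature verification, which a real
    integration must perform against the provider's JWKS keys.
    """
    payload_b64 = id_token.split(".")[1]
    # JWTs use unpadded base64url; restore padding before decoding.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a fake token with made-up claims mirroring the profile
# fields mentioned above: name, email, locale, picture.
claims = {"name": "Jane Doe", "email": "jane@example.com",
          "locale": "en", "picture": "https://example.com/p.jpg"}
payload = base64.urlsafe_b64encode(
    json.dumps(claims).encode()).rstrip(b"=").decode()
fake_token = f"header.{payload}.signature"

print(decode_id_token_claims(fake_token)["email"])  # jane@example.com
```

The point of the sketch is simply that once a user clicks the sign-in button, the site operator receives these identifying claims directly, which is why revoking API access cuts off both the convenience and the data flow.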
Google’s sign-in system also provides some insights into the developer accounts associated with various websites, revealing, for instance, connections of certain websites to specific Gmail accounts. A Google spokesperson noted that developers must adhere to Google’s Terms of Service, which forbid the promotion of sexually explicit material and content that defames or harasses others. Violations would provoke “appropriate action” according to the spokesperson.
Other tech companies reported they had terminated accounts following notifications from WIRED regarding their sign-in systems’ usage.
Hoffman from Discord states that the company will take action against websites flagged by WIRED that breach its policies, and proactive measures will continue against sites that come to their attention. Apple spokesperson Shane Bauer mentions that multiple developer licenses were terminated, and Sign In With Apple will be disabled on those websites. Adiya Taylor, from Patreon, underscores that accounts supporting or funding external tools generating adult content or explicit imagery are prohibited, vowing to act against any Patreon works or accounts violating their Community Guidelines.
Additionally, certain websites that were examined showcased Mastercard or Visa logos, suggesting these methods might be used for payment. Visa did not provide a comment to WIRED, while a Mastercard spokesperson declared that transactions for nonconsensual deepfake content are forbidden on their network, and the company actively takes measures upon detecting or being informed of such activities.
Tech firms and payment providers have responded before to AI services that facilitate the creation of nonconsensual imagery, usually only after media revelations. Clare McGlynn, a law professor at Durham University who specializes in the law governing online pornography and sexual violence, criticizes large tech platforms for failing to act proactively against such sites, which has allowed them to grow. She argues that security and moderation measures are either fundamentally lacking or unenforced, and calls the companies’ reactive stance wholly inadequate, saying it reflects a disregard for preventative measures readily available to them.