The Dark Side of Technology: How ‘Nudify’ Bots on Telegram Are Being Used by Millions

In early 2020, deepfake researcher Henry Ajder uncovered one of the first Telegram bots built to “undress” photos of women using artificial intelligence. At the time, Ajder recalls, the bot had already been used to generate more than 100,000 explicit images, including of minors, and its emergence marked a turning point in understanding the harm deepfakes could cause. Since then, deepfakes have become far more prevalent, more damaging, and easier to create.

Now, a WIRED review of Telegram communities tied to explicit nonconsensual content has identified at least 50 bots that claim to generate explicit images or videos of people in just a few clicks. The bots vary in capability: many claim to “remove clothes” from photos, while others say they can generate images depicting people in various sexual situations.

Together, the 50 bots list more than 4 million “monthly users,” according to WIRED’s review of the statistics each bot displays. Two of the bots listed more than 400,000 monthly users each, while another 14 claimed more than 100,000 members apiece. The figures illustrate how widespread tools for creating explicit deepfakes have become and reinforce Telegram’s position as one of the most prominent places to find them. But the review, which mostly covered English-language bots, likely captures only a fraction of the deepfake bots on Telegram.

“We’re witnessing a dramatic, orders-of-magnitude rise in the number of individuals who actively engage with and create this type of content,” Ajder explains regarding the Telegram bots. “It is genuinely alarming that these resources—which are wreaking havoc on lives and setting off a distressing situation primarily for young girls and women—remain so readily accessible and discoverable on the surface web, within one of the largest apps worldwide.”

Explicit nonconsensual deepfake content, often called nonconsensual intimate image abuse (NCII), has ballooned since it first emerged in late 2017, and advances in generative AI have fueled its recent growth. Across the internet, a slew of “nudify” and “undress” websites sit alongside more sophisticated tools and Telegram bots, and they are being used to target women and girls around the world, from Italy’s prime minister to schoolgirls in South Korea. In one recent survey, around 40 percent of US students said they were aware of deepfakes linked to their K-12 schools in the past year.

The Telegram bots identified by WIRED are supported by at least 25 associated Telegram channels, where subscribers receive newsfeed-style updates, with a combined total of more than 3 million members. The channels alert users to new bot features and discounts on the “tokens” needed to use them, and often act as places where people can find new bots if existing ones are removed by Telegram.

After WIRED contacted Telegram asking whether it permits the creation of explicit deepfakes, the company deleted the 75 bots and channels WIRED had identified. The company did not respond to a series of questions or comment on why it removed them.

Further nonconsensual deepfake Telegram channels and bots subsequently identified by WIRED underscore the magnitude of this issue. Numerous channel operators have reported their bots being taken down, with one stating, “We will create another bot tomorrow.” Those accounts were also removed shortly thereafter.

Telegram bots are essentially small applications that run inside the Telegram app. They sit alongside the platform’s channels, which can broadcast messages to an unlimited number of subscribers; groups, where up to 200,000 people can interact; and private messages. Developers have built bots for countless purposes, from trivia quizzes and message translation to alerts and starting Zoom meetings. These same bots have also been co-opted to create abusive deepfakes.
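To give a sense of how lightweight these bots are to build, here is a minimal sketch of a harmless echo bot written against Telegram’s publicly documented Bot API. The token is a hypothetical placeholder, and the loop is simplified for illustration; real bots obtain a token from Telegram’s @BotFather.

```python
import requests

# Hypothetical placeholder token; real bots get one from Telegram's @BotFather.
TOKEN = "123456:EXAMPLE-TOKEN"
API = f"https://api.telegram.org/bot{TOKEN}"

def run_echo_bot():
    offset = None  # ID of the next update to fetch
    while True:
        # Long-poll Telegram for new messages sent to the bot.
        updates = requests.get(
            f"{API}/getUpdates",
            params={"offset": offset, "timeout": 30},
            timeout=40,
        ).json().get("result", [])
        for update in updates:
            offset = update["update_id"] + 1
            message = update.get("message")
            if message and "text" in message:
                # Reply in the same chat with the text we received.
                requests.post(
                    f"{API}/sendMessage",
                    json={"chat_id": message["chat"]["id"],
                          "text": f"You said: {message['text']}"},
                )

if __name__ == "__main__":
    run_echo_bot()
```

The same handful of API calls that power a trivia bot can just as easily front a remote image-generation service, which helps explain why takedowns are easy to circumvent: a banned bot can be relaunched under a new name within minutes.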

Given the harm these tools cause, WIRED did not test the Telegram bots and is not naming specific bots or channels. And while the bots collectively listed millions of monthly users, according to Telegram’s own statistics, it is unclear how many images they have actually produced. Some users may belong to multiple channels and bots, some may never have created an image, and others may have generated hundreds.

Many of the deepfake bots WIRED reviewed are explicit about what they do. The bots’ names and descriptions refer to nudity and to removing women’s clothes. “I can do anything you want about the face or clothes of the photo you give me,” the creators of one bot wrote. “Experience the shock brought by AI,” another says. Telegram’s “similar channels” feature can also make it easy for users to hop between channels and bots.

Almost all of the bots require users to buy “tokens” to create images, and it is unclear whether they actually deliver what they advertise. As the ecosystem around deepfake generation has flourished in recent years, it has become a potential source of income for those who build websites, apps, and bots. Demand for “nudify” websites is high enough that, as 404 Media has reported, Russian cybercriminals have set up fake sites designed to infect visitors with malware.

The first Telegram bots were relatively crude, but the technology behind them has steadily improved, producing more convincing AI-generated images, and some bots now conceal what they do.

One bot with more than 300,000 monthly users gives no indication in its name or description that it can create explicit images. But once inside, users are presented with more than 40 image options, many of them explicit. The same bot links to a user guide, hosted on an external website, explaining how to produce the highest-quality images. Bot developers may publish terms of service barring users from uploading images of people without their consent or images of minors, but there is little sign those rules are enforced.

Another bot, which had around 38,000 users, claimed that individuals could upload six images of the same person to “train” an AI model, which would then produce new deepfake images of that individual. After joining, users found themselves presented with a selection of 11 additional “bots” from the same developers, likely as a strategy to keep their operations running and evade takedown efforts.

“These types of fake images can cause significant harm to a person’s mental health and overall well-being, leading to psychological trauma, humiliation, fear, embarrassment, and shame,” says Emma Pickering, who heads technology-facilitated abuse and economic empowerment at Refuge, the UK’s largest domestic abuse charity. “Although this type of abuse is prevalent, those who commit it are seldom held responsible, and it is increasingly becoming more common in intimate partner dynamics.”

As explicit deepfakes have become easier to create and more widespread, lawmakers and tech companies have been slow to respond. Twenty-three US states have passed laws addressing nonconsensual deepfakes, and tech firms have strengthened some of their policies. Yet apps capable of creating explicit deepfakes have turned up in Apple’s and Google’s app stores, explicit deepfakes of Taylor Swift circulated widely on X last January, and major tech companies’ sign-in infrastructure has made it simple for people to register on deepfake websites.

Kate Ruane, director of the free expression project at the Center for Democracy and Technology, says most major technology platforms now have policies prohibiting the nonconsensual distribution of intimate images, and several of the largest have endorsed principles for tackling deepfakes. “It’s actually unclear whether the creation or distribution of nonconsensual intimate images is indeed prohibited on the platform,” Ruane says of Telegram’s terms of service, which are less detailed than those of other major technology platforms.

Telegram’s approach to removing harmful content has long drawn criticism from civil society organizations, with the platform having hosted scammers, far-right groups, and terrorism-related content. Since Telegram CEO Pavel Durov was arrested in France last August and charged with a range of potential offenses, the company has made some changes to its terms of service and its cooperation with law enforcement. The company did not respond to WIRED’s questions about whether it specifically prohibits explicit deepfakes.

Ajder, the researcher who uncovered deepfake bots operating on Telegram four years ago, states that the app is particularly vulnerable to deepfake exploitation. “Telegram’s search functionality allows users to find communities, chats, and bots,” Ajder explains. “It also offers bot-hosting capabilities, creating an environment that facilitates these activities. Moreover, it serves as a venue where you can share this content, thereby perpetrating the harm effectively.”

In late September, numerous deepfake channels began reporting that Telegram had removed their bots. The reasons for these removals remain unclear. On September 30, a channel with 295,000 followers announced that Telegram had “banned” its bots, while also sharing a new bot link for users. (The channel was subsequently deleted after WIRED inquired about it with Telegram.)

“One of the troubling aspects of platforms like Telegram is the difficulty in tracking and monitoring, especially from the standpoint of survivors,” states Elena Michael, cofounder and director of #NotYourPorn, an advocacy group dedicated to shielding individuals from image-based sexual abuse.

Michael says Telegram has been “notoriously difficult” to engage with on safety issues, though she acknowledges the company has made some progress recently. Even so, she argues, the platform should do more to proactively moderate and filter harmful content.

“Consider a survivor who has to handle this on their own; clearly, the responsibility shouldn’t fall solely on the individual,” Michael remarks. “It should be the company’s duty to implement proactive measures rather than just responding reactively.”
