By Matt Burgess
You shouldn’t trust any answers a chatbot sends you. And you probably shouldn’t trust it with your personal information either. That’s especially true for “AI girlfriends” or “AI boyfriends,” according to new research.
An analysis of 11 so-called romance and companion chatbots, published on Wednesday by the Mozilla Foundation, found a litany of security and privacy concerns with the bots. Collectively, the apps, which have been downloaded more than 100 million times on Android devices, gather huge amounts of people’s data; use trackers that send information to Google, Facebook, and companies in Russia and China; allow users to create weak passwords; and lack transparency about their ownership and the AI models that power them.
Since OpenAI unleashed ChatGPT on the world in November 2022, developers have raced to deploy large language models and create chatbots that people can interact with and pay to subscribe to. The Mozilla research provides a glimpse into how this gold rush may have neglected people’s privacy, and into tensions between emerging technologies and how they gather and use data. It also indicates how people’s chat messages could be abused by hackers.
Many “AI girlfriend” and romantic chatbot services look similar. They often feature AI-generated images of women that can be sexually suggestive, or sit alongside provocative messages. Mozilla’s researchers looked at a mix of large and small apps: some claim to be “girlfriends,” others offer companionship or intimacy, and others facilitate role-play and other fantasies.
“These apps are designed to collect a mass of personal information,” says Jen Caltrider, who leads Mozilla’s Privacy Not Included project, which conducted the analysis. “They encourage role-play, sexual scenarios, intimacy, and sharing.” Screenshots of the EVA AI chatbot, for example, show texts encouraging users to send personal photos and voice messages and asking if they are “ready to share all your secrets and desires.”
Caltrider raises several concerns with these apps and websites. Many do not make clear what data they share with third parties, where they are based, or who created them. Some allow people to create weak passwords, while others reveal little about the AI models they use. The apps surveyed varied in their uses and weaknesses.
Consider Romantic AI, a service that lets users “create your own AI girlfriend.” Promotional images on its homepage depict a chatbot sending the message, “Just bought new lingerie. Wanna see it?” According to the Mozilla analysis, the app’s privacy policy says it will not sell user data. However, when the researchers tested the app, they found it “sent out 24,354 ad trackers within one minute of use.” Romantic AI, like most of the companies covered in Mozilla’s research, did not respond to WIRED’s request for comment. Other apps investigated had hundreds of trackers.
In general, Caltrider says, the apps are not clear about what data they may share or sell, or exactly how they use some of that information. “The legal documentation was vague, hard to understand, not very specific—kind of boilerplate stuff,” Caltrider says, adding that the vagueness may erode the trust people place in the companies.
It is unclear who owns or runs some of the companies behind the chatbots. The website for one app, called Mimico—Your AI Friends, includes only the word “Hi.” Others do not list their owners or where they are located, or just include generic help or support contact email addresses. “These were very small app developers that were nameless, faceless, placeless,” Caltrider adds.
Mozilla also highlighted that several companies appear to have weak security practices around password creation. The researchers were able to create a one-character password (“1”) and use it to log in to apps from Anima AI, which offers “AI boyfriends” and “AI girlfriends.” Anima AI also didn’t respond to WIRED’s request for comment. Other apps allowed similarly short passwords, which potentially makes it easier for hackers to brute-force their way into people’s accounts and access chat data: a single-character password can be guessed in, at most, around a hundred attempts.
Kamilla Saifulina, the head of brand at EVA AI, says in an email that its “current password requirements might be creating potential vulnerabilities” and that the firm will review its password policies. Saifulina points to the firm’s safety guidelines, which include details on subjects that people are not allowed to message about. The guidelines also specify that messages are checked for violations by another AI model. “All information about the user is always private. This is our priority,” Saifulina says. “Also, user chats are not used for pretraining. We use only our own manually written datasets.”
Aside from data-sharing and security issues, the Mozilla analysis also highlights that little is clearly known about the specific technologies powering the chatbots. “There’s just zero transparency around how the AIs work,” Caltrider says. Some of the apps do not appear to have controls in place that allow people to delete messages. Some do not say what kinds of generative models they use, or do not clarify whether people can opt out of their chats being used to train future models.
The biggest app discussed in the Mozilla research study is Replika, which is billed as a companion app and has previously faced scrutiny from regulators. Mozilla initially published an analysis of Replika in early 2023. Eugenia Kuyda, the CEO and founder of Replika, said in a lengthy statement first issued last year that the company does not “use conversational data between a user and Replika application for any advertising or marketing purpose,” and disputed several of Mozilla’s findings.
Many of the chatbots analyzed require paid subscriptions to access some features, and they were launched in the past two years, following the start of the generative AI boom. The chatbots are often designed to mimic human qualities and to encourage trust and intimacy with the people who use them. Chatbots have caused harm before: one reportedly encouraged a man who planned to kill Queen Elizabeth II, and another man reportedly died by suicide after messaging a chatbot for six weeks. While largely NSFW, some of the apps also play up their roles as helpful tools. Romantic AI’s homepage says the app is “here to maintain your mental health,” while its terms and conditions clarify that it is not a provider of medical or mental health services and that the company “makes no claims representations, warranties, or guarantees” that it provides professional help.
Vivian Ta-Johnson, an assistant professor of psychology at Lake Forest College, says that talking with chatbots can make some people feel more comfortable discussing topics they would not normally bring up with other people. However, she says, if a company shuts down or changes how its systems work, it could be “traumatic” for people who have grown close to the chatbots. “These companies should take the emotional bonds that users have developed with chatbots seriously and understand that any major changes to the chatbots’ functioning can have major implications on users’ social support and well-being,” Ta-Johnson says.
Some people may not think carefully about what they reveal to chatbots. For those using “AI girlfriends,” that information can range from intimate preferences and locations to personal feelings, and it could damage a person’s reputation if the chatbot system were hacked or the data accidentally leaked. Adenike Cosgrove, vice president of cybersecurity strategy for Europe, the Middle East, and Africa at security firm Proofpoint, says cybercriminals routinely abuse people’s trust to scam or exploit them, and that there is an “inherent risk” in services that amass large quantities of personal data. “Many users fail to grasp the privacy fallout of their data, potentially exposing them to abuse, especially when in situations of emotional vulnerability,” Cosgrove says.
For those using AI girlfriends and similar services, Caltrider says people should approach romantic chatbots with caution and follow security best practices: use strong passwords, don’t sign in to the apps with Facebook or Google, delete data, and opt out of data collection where the option exists. “Minimize as much as you can the personal information you divulge—not disclosing names, places, ages,” Caltrider says, adding that even these steps may not be enough with some of the services: “Even adopting these measures might not render you as secure as you would wish to be.”