The Privacy Concerns Surrounding ‘AI Girlfriends’

By Matt Burgess

You shouldn’t trust any answers a chatbot sends you. And you probably shouldn’t trust it with your personal information either. That’s especially true for “AI girlfriends” or “AI boyfriends,” according to new research.

An analysis of 11 so-called romance and companion chatbots, published on Wednesday by the Mozilla Foundation, found a litany of security and privacy concerns with the bots. Collectively, the apps, which have been downloaded more than 100 million times on Android devices, gather huge amounts of people’s data; use trackers that send information to Google, Facebook, and companies in Russia and China; allow users to create weak passwords; and lack transparency about their ownership and the AI models that power them.

Since OpenAI unleashed ChatGPT on the world in November 2022, developers have raced to deploy large language models and create chatbots that people can interact with and pay to subscribe to. The Mozilla research provides a glimpse into how this gold rush may have neglected people’s privacy, and into tensions between emerging technologies and how they gather and use data. It also indicates how people’s chat messages could be abused by hackers.

Many services offering an “AI girlfriend” or romantic chatbot look similar. They often feature AI-generated images of women, which can be sexualized or sit alongside provocative messages. Mozilla’s researchers looked at a variety of chatbots, from big apps to small ones, some of which purport to be “girlfriends.” Others offer support through companionship or intimacy, or allow role-playing and other fantasies.

“These apps are designed to collect a ton of personal information,” says Jen Caltrider, the project lead for Mozilla’s Privacy Not Included team, which conducted the research. “They push you toward role-playing, a lot of sex, a lot of intimacy, a lot of sharing.” For instance, screenshots from the EVA AI chatbot show it sending messages such as “I love it when you send me your photos and voice,” and asking whether users are “ready to share all your secrets and desires.”

The apps and websites have a variety of problems, Caltrider says. Many may not be clear about what data they share with third parties, where they are based, or who created them. Some allow people to create weak passwords, while others provide little information about the AI they use. The apps analyzed all had different use cases and weaknesses.

Take Romantic AI, a service that lets you create your own “AI girlfriend.” Promotional images on its homepage show a chatbot sending the message “Just purchased new lingerie. Do you want to see it?” The app’s privacy documents, according to the Mozilla analysis, say it will not sell people’s data. However, when the researchers tested the app, they found it “sent out 24,354 ad trackers within one minute of use.” Romantic AI, like most of the companies highlighted in Mozilla’s research, did not respond to WIRED’s request for comment. Other apps monitored had hundreds of trackers.

In general, Caltrider says, the apps are not clear about what data they may share or sell, or exactly how they use some of that information. “The legal documentation was vague, hard to understand, not very specific—kind of boilerplate stuff,” Caltrider says, adding that this may reduce the trust people should have in the companies.

It is unclear who owns or runs some of the companies behind the chatbots. The website for one app, called Mimico—Your AI Friends, includes only the word “Hi.” Others do not list their owners or where they are located, or just include generic help or support contact email addresses. “These were very small app developers that were nameless, faceless, placeless,” Caltrider adds.

Mozilla highlighted that several companies appear to use weak security practices when people create passwords. The researchers were able to create a one-character password (“1”) and use it to log in to apps from Anima AI, which offers “AI boyfriends” and “AI girlfriends.” Anima AI also didn’t respond to WIRED’s request for comment. Other apps similarly allowed short passwords, potentially making it easier for hackers to brute-force their way into people’s accounts and access chat data.
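To illustrate the kind of check these apps appear to be skipping, here is a minimal sketch of server-side password validation in Python. The length threshold, blocklist, and function name are illustrative assumptions, not any app’s actual code:

```python
import re

# A minimal password policy sketch. The threshold and blocklist below are
# illustrative assumptions, not requirements from any app Mozilla reviewed.
MIN_LENGTH = 12
COMMON_PASSWORDS = {"1", "123456", "password", "qwerty"}

def validate_password(password: str) -> list[str]:
    """Return a list of policy violations; an empty list means the password passes."""
    problems = []
    if len(password) < MIN_LENGTH:
        problems.append(f"must be at least {MIN_LENGTH} characters")
    if password.lower() in COMMON_PASSWORDS:
        problems.append("appears on a common-password blocklist")
    if not (re.search(r"[A-Za-z]", password) and re.search(r"\d", password)):
        problems.append("must mix letters and digits")
    return problems

# The one-character password Mozilla's researchers used to register accounts
# would fail every check:
print(validate_password("1"))
```

Even a check this small would reject the one-character credential the researchers registered with, though a real deployment would also want rate limiting and breached-password lookups.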

Kamilla Saifulina, the head of brand at EVA AI, says in an email that its “current password requirements might be creating potential vulnerabilities” and that the firm will review its password policies. Saifulina points to the firm’s safety guidelines, which include details on subjects that people are not allowed to message about. The guidelines also specify that messages are checked for violations by another AI model. “All information about the user is always private. This is our priority,” Saifulina says. “Also, user chats are not used for pretraining. We use only our own manually written datasets.”

Aside from data-sharing and security issues, the Mozilla analysis also highlights that little is clearly known about the specific technologies powering the chatbots. “There’s just zero transparency around how the AIs work,” Caltrider says. Some of the apps do not appear to have controls in place that allow people to delete messages. Some do not say what kinds of generative models they use, or do not clarify whether people can opt out of their chats being used to train future models.

The biggest app in Mozilla’s study is Replika, which is billed as a companion app and has previously faced scrutiny from regulators. Mozilla initially published an analysis of Replika in early 2023. Eugenia Kuyda, the CEO and founder of Replika, said in a lengthy statement first issued last year that the company does not “use conversational data between a user and Replika application for any advertising or marketing purpose,” and disputed several of Mozilla’s findings.

Many of the chatbots analyzed require paid subscriptions to access some features and were launched in the past two years, following the start of the generative AI boom. The chatbots are often designed to mimic human qualities and encourage trust and intimacy with the people who use them. One man was encouraged by a chatbot in his plan to kill Queen Elizabeth II; another reportedly died by suicide after messaging a chatbot for six weeks. In addition to being NSFW, some of the apps also play up their roles as useful tools: Romantic AI’s homepage says the app is “here to maintain your mental health,” while its terms and conditions clarify that it is not a provider of medical or mental health services and that the company “makes no claims representations, warranties, or guarantees” that it provides professional help.

Vivian Ta-Johnson, an assistant professor of psychology at Lake Forest College, says that speaking with chatbots can make some people feel more comfortable discussing topics they would not normally bring up with other people. However, Ta-Johnson says, if a company goes out of business or changes how its systems work, it could be “traumatic” for people who have grown close to the chatbots. “These companies should take the emotional bonds that users have developed with chatbots seriously and understand that any major changes to the chatbots’ functioning can have major implications on users’ social support and well-being,” Ta-Johnson says.

Some people may not think carefully about what they reveal to chatbots. In the case of “AI girlfriends,” that could include sexual preferences or kinks, locations, or private feelings, information that could cause reputational harm if the system is hacked or the data is accidentally leaked. Adenike Cosgrove, VP of cybersecurity policy for Europe, the Middle East, and Africa at Proofpoint, says criminals routinely exploit people’s trust to scam or manipulate them, and that there is an inherent risk in any service that collects large amounts of people’s data. Many users underestimate the privacy implications of the information they hand over, she says, which can leave them open to exploitation, particularly when they are in emotionally vulnerable states.

Caltrider suggests people be cautious about using romantic chatbots and follow security best practices: use strong passwords, avoid signing in to the apps with Facebook or Google, delete data, and opt out of data collection when it’s offered. She also advises limiting the personal information shared as much as possible, avoiding names, locations, and ages, though she cautions that even these steps may not keep people as safe as they would like.
