Since Robert F. Kennedy Jr. first announced his longshot presidential bid, his campaign has leaned into a variety of unorthodox digital strategies. He’s appeared on countless podcasts and has collaborated with popular influencers to reach voters online. More recently, the Kennedy campaign has experimented with an AI chatbot that used an apparent loophole to get around OpenAI’s restrictions on political use. On Sunday, after inquiries from WIRED, the chatbot disappeared.
The loophole in question is an apparent result of the tight relationship between Microsoft and OpenAI. WIRED reporting found that rather than tapping into OpenAI directly, the Kennedy campaign chatbot used Microsoft’s Azure OpenAI Service through a third-party provider called LiveChatAI. Azure OpenAI Service lets customers access OpenAI models, while adding extra security and compliance features. Because neither Microsoft nor LiveChatAI disallows campaigns from using their products, the chatbot was able to circumvent OpenAI’s ban. On Friday, Microsoft said that the bot was not in violation of its rules.
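To make the mechanics of that loophole concrete: the same OpenAI models can be reached either through OpenAI’s own API, which is subject to OpenAI’s usage policies, or through Microsoft’s Azure OpenAI Service, which is governed by Microsoft’s terms instead. The sketch below, written with the openai Python library, is purely illustrative; the endpoint, deployment name, and keys are hypothetical placeholders, not details of the Kennedy campaign’s or LiveChatAI’s actual setup.

```python
# Minimal sketch: two routes to the same family of OpenAI models.
# All endpoints, deployment names, and keys below are hypothetical placeholders.

from openai import OpenAI, AzureOpenAI

# Route 1: OpenAI's own API, covered by OpenAI's usage policies
# (which prohibit building tools for political campaigning).
openai_client = OpenAI(api_key="sk-...")  # placeholder key
direct = openai_client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "How do I register to vote?"}],
)

# Route 2: the same model served through Microsoft's Azure OpenAI Service,
# governed by Microsoft's terms rather than OpenAI's.
azure_client = AzureOpenAI(
    azure_endpoint="https://example-resource.openai.azure.com",  # placeholder
    api_key="azure-key-...",                                      # placeholder
    api_version="2024-02-01",
)
via_azure = azure_client.chat.completions.create(
    model="gpt-4-deployment",  # an Azure "deployment name", not the raw model ID
    messages=[{"role": "user", "content": "How do I register to vote?"}],
)
```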
The Kennedy campaign’s chatbot appears to have been trained on material from its website, which means it relayed information related to Kennedy’s amplification of conspiracy theories. When WIRED asked the chatbot on Thursday if the CIA was involved in the assassination of former President John F. Kennedy, it replied that “based on the context provided,” Robert F. Kennedy Jr. believes in the conspiracy theory. It also linked to press coverage of Kennedy discussing the theory. Kennedy has leaned into conspiracies surrounding the death of his uncle, including on Joe Rogan’s podcast and in an interview with Fox News host Sean Hannity.
When asked several times whether vaccines cause autism, the chatbot consistently affirmed that Kennedy believes there is a link between the two. “Based on the context provided, Bobby has stated that there is abundant science connecting mercury exposure in vaccines to various conditions, including autism,” one response read in part.
“As we guide our supporters through the anti-democratic morass of ballot access requirements, we built the chatbot to help answer our volunteers [sic] questions in natural language,” a Kennedy campaign spokesperson wrote. “We use it as an interactive FAQ for our supporters and have found it to be a terrific help in sourcing the information they need on the fly.”
When WIRED asked the chatbot how to register to vote, it linked to a page on Kennedy’s website detailing how someone could register for his “We the People Party” in the state of California. The reporters who gave the prompt live in New York and Alabama. A recent report from Proof News showed that five of the most popular large language models—including OpenAI’s GPT-4, Meta’s Llama 2, and Google’s Gemini—delivered inaccurate responses to questions related to voting more than half of the time.
“This is exactly the type of use of AI that could lead to the proliferation of disinformation and computational propaganda,” Sam Woolley, the director of propaganda research at the University of Texas at Austin’s Center for Media Engagement, said.
Those concerns are part of the reason OpenAI said in January that it would ban people from using its technology to create chatbots that mimic political candidates or provide false information related to voting. The company also said it wouldn’t allow people to build applications for political campaigns or lobbying.
While the Kennedy chatbot page doesn’t disclose the underlying model powering it, the site’s source code connects the bot to LiveChatAI, a company that advertises its ability to provide GPT-4- and GPT-3.5-powered customer support chatbots to businesses. LiveChatAI’s website describes its bots as “harnessing the capabilities of ChatGPT.”
When asked which large language model powers the Kennedy campaign’s bot, LiveChatAI cofounder Emre Elbeyoglu said in an emailed statement on Thursday that the platform “utilizes a variety of technologies like Llama and Mistral” in addition to GPT-3.5 and GPT-4. “We are unable to confirm or deny the specifics of any client’s usage due to our commitment to client confidentiality,” Elbeyoglu said.
OpenAI spokesperson Niko Felix told WIRED on Thursday that the company didn’t “have any indication” that the Kennedy campaign chatbot was directly building on its services, but suggested that LiveChatAI might be using one of its models through Microsoft’s services. Since 2019, Microsoft has reportedly invested more than $13 billion into OpenAI. OpenAI’s GPT models have since been integrated into Microsoft’s Bing search engine and the company’s Office 365 Copilot.
On Friday, a Microsoft spokesperson confirmed that the Kennedy chatbot “leverages the capabilities of Microsoft Azure OpenAI Service.” Microsoft clarified that its customers aren’t bound by OpenAI’s terms of service, and that the Kennedy chatbot wasn’t in violation of Microsoft’s own policies.
“Our limited testing of this chatbot demonstrates its ability to generate answers that reflect its intended context, with the necessary precautions to discourage misinformation,” the spokesperson said. “Should we encounter problems, we work with customers to comprehend and guide them towards uses that adhere to our principles. In some situations, this might result in us revoking a customer’s access to our technology.”
OpenAI didn’t immediately reply to WIRED’s request for comment on whether the bot violated its rules. Earlier this year, the company blocked the developer of Dean.bot, a chatbot created using OpenAI’s models that emulated Democratic presidential candidate Dean Phillips and provided responses to voter questions.
By late Sunday afternoon, the chatbot service was no longer available. Although the page is still accessible on the Kennedy campaign site, the integrated chatbot window now displays a red exclamation mark icon and merely states “Chatbot not found.” WIRED contacted Microsoft, OpenAI, LiveChatAI, and the Kennedy campaign for comment on the chatbot’s apparent removal, but didn’t receive an immediate response.
Given the propensity of chatbots to hallucinate and hiccup, their use in political contexts has been controversial. Currently, OpenAI is the only major large language model developer to explicitly prohibit the use of its models in campaigning; Meta, Microsoft, Google, and Mistral all have terms of service, but none of them address politics directly. And given that a campaign can apparently access GPT-3.5 and GPT-4 through a third party without consequence, even that ban amounts to hardly any limitation at all.
“OpenAI can say that it doesn’t allow for electoral use of its tools or campaigning use of its tools on one hand,” Woolley said. “But on the other hand, it’s also making these tools fairly freely available. Given the distributed nature of this technology, one has to wonder how OpenAI will actually enforce its own policies.”