Today, OpenAI released its first threat report, detailing how actors from Russia, Iran, China, and Israel have attempted to use its technology for foreign influence operations across the globe. The report names five different networks that OpenAI identified and shut down between 2023 and 2024. In it, OpenAI reveals that established networks like Russia's Doppelganger and China's Spamouflage are experimenting with using generative AI to automate their operations. They're also not very good at it.
And while it’s a modest relief that these actors haven’t mastered generative AI to become unstoppable forces for disinformation, it’s clear that they’re experimenting, and that alone should be worrying.
The OpenAI report reveals that influence campaigns are running up against the limits of generative AI, which doesn't reliably produce good copy or code. It struggles with idioms, which make language sound more reliably human and personal, and also sometimes with basic grammar (so much so that OpenAI named one network "Bad Grammar"). The Bad Grammar network was so sloppy that it once revealed its true identity: "As an AI language model, I am here to assist and provide the desired comment," it posted.
One network used ChatGPT to debug code meant to automate posts on Telegram, a chat app popular among extremists and influence networks. This sometimes worked as intended, but occasionally resulted in the same account posting as multiple characters, giving away the manipulation.
In some cases, ChatGPT was used to create code and content for websites and social media. Spamouflage, for instance, used ChatGPT to debug code for a WordPress website that published stories attacking members of the Chinese diaspora critical of the country's government.
According to the report, none of the networks' AI-generated content managed to break into the mainstream, even when shared on widely used platforms like X, Facebook, or Instagram. That held true for campaigns run by an Israeli firm apparently operating on a contract basis, whose content ranged from anti-Qatar messaging to attacks on the BJP, the Hindu nationalist party currently governing India.
Taken together, the report paints a picture of several largely ineffective campaigns pushing crude propaganda, seemingly allaying fears that many experts have raised about this new technology's potential to spread false or misleading information, particularly during a pivotal election year.
But influence campaigns on social media often innovate over time to avoid detection, learning the platforms and their tools, sometimes better than the platforms' own employees. While these initial campaigns may be small or ineffective, they appear to be still in the experimental stage, says Jessica Walton, a researcher with the CyberPeace Institute who has studied Doppelganger's use of generative AI.
In her research, she found the network would use real-seeming Facebook profiles to post articles, often on divisive political topics. "The actual articles are written by generative AI," she says. "And mostly what they're trying to do is see what will fly, what Meta's algorithms will and won't be able to catch."
In other words, expect them only to get better from here.