In 2015, Elon Musk and Sam Altman cofounded OpenAI with a commitment to develop AI technology designed to benefit all of humanity, steering clear of domination by large corporate entities.
A decade later, following a significant breakdown in relations between Musk and Altman, the landscape has changed drastically. Amid ongoing legal disputes with his onetime friend and cofounder, Musk's new venture, xAI, has introduced a formidable rival: Grok AI.
Grok is characterized as “an AI search assistant with elements of humor and a hint of rebellion,” crafted to operate with fewer restrictions than its key competitors. Predictably, Grok frequently suffers from hallucinations and bias, and has been criticized for disseminating false information about the 2024 election.
At the same time, its data security practices have come under intense scrutiny. In July, Musk faced significant criticism from European regulators following revelations that X platform users had been opted in, without their knowledge, to sharing their posts for Grok's training.
Image generation capabilities in its Grok-2 large language model are also causing concern. Soon after the launch in August, users demonstrated how easy it was to create outrageous and incendiary depictions of politicians including Kamala Harris and Donald Trump.
So what are the main issues with Grok AI, and how can you protect your X data from being used to train it?
Musk is deeply integrating Grok into X, using it for customized news feeds and post composition. Right now, it’s in beta and only available to Premium+ subscribers.
Among the benefits, access to real-time data from X allows Grok to chat about current events as they’re unfolding, says Camden Woollven, group head of AI at GRC International Group, a consultancy offering data protection and privacy services.
To distinguish itself, Grok adopts a “transparent and anti-woke” philosophy, according to Nathan Marlor, who leads data and AI at Version 1, a tech adoption consultancy.
As part of its transparency commitment, Grok’s team opened the source code of its algorithm to the public. On the other hand, its “anti-woke” positioning means it implements fewer safeguards and less bias mitigation, leading to potential perpetuation of intrinsic biases found in internet-based training data, explains Marlor.
Despite repeated inquiries by WIRED, X and xAI have not provided comments on the matter.
The openness and minimal controls in Grok have led to incidents where the AI has spread incorrect US election information. Concerns reached a peak when election officials from multiple states, including Minnesota, New Mexico, Michigan, Washington, and Pennsylvania, issued a formal complaint to Musk after Grok misreported ballot deadlines.
xAI was quick to respond. When asked election-related questions, the chatbot now says, "for accurate and up-to-date information about the 2024 US Elections, please visit Vote.gov," according to The Verge.
But X also makes it clear that the onus is on the user to judge the AI's accuracy. "This is an early version of Grok," xAI says on its help page, warning that the chatbot may "confidently provide factually incorrect information, missummarize, or miss some context."
“We encourage you to independently verify any information you receive,” xAI adds. “Please do not share personal data or any sensitive and confidential information in your conversations with Grok.”
Vast amounts of data collection are another area of concern—especially since you are automatically opted in to sharing your X data with Grok, whether you use the AI assistant or not.
xAI's Grok Help Center page states that the company "might use your X posts as well as your interactions, inputs, and results from using Grok for the purpose of training and refining the tool."
Grok’s method of learning involves “substantial privacy risks,” states Marijus Briedis, the chief technology officer at NordVPN. Beyond the tool’s capability to “access and scrutinize potentially confidential or sensitive data,” Briedis remarks that there are additional worries due to “the AI’s ability to create images and texts with little oversight.”
Grok-1 was developed using "data publicly available until the third quarter of 2023" and was not "pre-trained on X data, including public X posts," according to the firm. Grok-2, by contrast, has been explicitly trained on the "posts, interactions, inputs, and outcomes" of X users, all of whom were automatically opted in, notes Angus Allan, a senior product manager at CreateFuture, a company specializing in AI implementation.
The EU’s General Data Protection Regulation (GDPR) clearly mandates the acquisition of consent for the use of personal data. In this scenario, xAI might have “overlooked this requirement for Grok,” according to Allan.
This led EU regulators to pressure X into suspending training on EU users' data within days of Grok-2's launch last month.
Failure to abide by user privacy laws could lead to regulatory scrutiny in other countries. While the US doesn’t have a similar regime, the Federal Trade Commission has previously fined Twitter for not respecting users’ privacy preferences, Allan points out.
One way to prevent your posts from being used for training Grok is by making your account private. You can also use X privacy settings to opt out of future model training.
To do so, go to Privacy & Safety > Data sharing and Personalization > Grok. Under Data Sharing, uncheck the option that reads, "Allow your posts as well as your interactions, inputs, and results with Grok to be used for training and fine-tuning."
Even if you no longer use X, it’s important to log in and opt out. According to Allan, X is capable of using all your past posts, including images, to train future models unless you specifically opt out.
You can also delete your entire conversation history at once, xAI advises. Deleted conversations are purged from its systems within 30 days, unless retention is required for security or legal reasons.
Grok's development path remains uncertain, but given its history, Musk's AI assistant demands close observation. To keep your data secure, be cautious about what you share on X and stay up to date with its privacy policies and terms of service, Briedis recommends. "Engaging with these settings helps you better manage how your data is processed and possibly used by technologies such as Grok."