Unveiling Grok AI: Implications for Your Privacy and What You Should Know

In 2015, Elon Musk and Sam Altman cofounded OpenAI with the objective to advance AI technology for the betterment of humanity, steering clear of exclusive corporate control.

Over the following decade, Musk and Altman fell out dramatically. Amid ongoing legal disputes with his former ally, Musk's new venture, xAI, launched a formidable rival: Grok AI.

Billed as "an AI search assistant with a twist of humor and a dash of rebellion," Grok is designed to be less restrictive than its rivals. It is notably prone to hallucinations and bias, and has been accused of spreading misinformation about the 2024 elections.

At the same time, its approach to data privacy is under scrutiny. In July, European regulators criticized Musk after it emerged that X users were being automatically opted in to having their posts used to train Grok.

The image-generation features of the Grok-2 language model have raised further concerns: shortly after its release in August, users found they could effortlessly create controversial and provocative images of politicians such as Kamala Harris and Donald Trump.

What are the critical concerns surrounding Grok AI, and what measures can be taken to prevent your X data from being exploited to train this AI?

Grok is currently in beta and exclusive to Premium+ subscribers, but Musk is integrating it deeply into X, where it powers personalized news streams and helps users compose posts.

Grok's integration with X allows it to comment on live events as they unfold, according to Camden Woollven, head of AI at GRC International Group, a consultancy that provides data protection and privacy services.

To stand out in a competitive market, Grok aims to embody transparency and an "anti-woke" philosophy, according to Nathan Marlor, who leads data and AI initiatives at the technology consultancy Version 1.

In line with that commitment to transparency, xAI has open-sourced Grok's underlying model. In embracing an "anti-woke" ideology, however, Grok was built with far fewer guardrails and far less emphasis on bias mitigation than industry peers such as OpenAI and Anthropic. "This approach may more truly represent the raw internet data it's trained on," Marlor notes, "but it also risks reinforcing prejudiced content."

X and xAI did not respond to WIRED's multiple requests for comment.

Grok's openness and minimal guardrails have already led to incidents in which the AI assistant spread incorrect information about US elections. Election officials in Minnesota, New Mexico, Michigan, Washington, and Pennsylvania wrote to Musk after discovering that Grok had circulated inaccurate ballot-deadline information.

xAI responded quickly. When asked election-related questions, the chatbot now says, "for accurate and up-to-date information about the 2024 US Elections, please visit Vote.gov," according to The Verge.

But X also makes it clear that the onus is on the user to judge the AI's accuracy. "This is an early version of Grok," xAI says on its help page. The chatbot may therefore "confidently provide factually incorrect information, missummarize, or miss some context," xAI warns.

“We encourage you to independently verify any information you receive,” xAI adds. “Please do not share personal data or any sensitive and confidential information in your conversations with Grok.”

The sheer volume of data collection is another area of concern, especially since you are automatically opted in to sharing your X data with Grok, whether or not you use the AI assistant.

xAI's Grok Help Center page states that the company may use your X posts and interactions, along with your inputs and outputs when using Grok, for purposes such as training and improving the tool.

Marijus Briedis, chief technology officer at NordVPN, says Grok's training approach could have "significant privacy implications." He is concerned both about the AI's ability to access and analyze potentially private data, and about its capacity to generate images and content with scaled-down oversight.

While Grok-1 was trained on "publicly available data up to Q3 2023" and not pre-fed X data, including public X posts, according to a company statement, Grok-2 is trained on all X users' "posts, interactions, inputs, and results," with everyone opted in automatically, explains Angus Allan, a senior product manager at CreateFuture who specializes in AI applications.

The EU's General Data Protection Regulation (GDPR) requires explicit consent before personal data can be used, and Allan argues that xAI appears to have overlooked this key requirement with Grok.

This likely prompted regulators in the EU to pressure X into suspending training on EU users within days of Grok-2's launch last month.

Flouting user privacy laws could invite regulatory scrutiny in other countries, too. While the US has no comparable regime, the Federal Trade Commission has previously fined Twitter for failing to respect users' privacy preferences, Allan points out.

One way to prevent your posts from being used to train Grok is to make your account private. You can also opt out of future model training via X's privacy settings.

To do so, select Privacy & Safety > Data sharing and Personalization > Grok. Under Data Sharing, uncheck the option that reads, "Allow your posts as well as your interactions, inputs, and results with Grok to be used for training and fine-tuning."

Even if you no longer use X, it's still worth logging in and opting out. Unless you explicitly tell it not to, X can use all of your past posts, including images, to train future models, Allan warns.

It's also possible to delete your entire conversation history at once, xAI says. Deleted conversations are removed from its systems within 30 days, unless the company has to retain them for security or legal reasons.

No one knows how Grok will evolve, but judging by its actions so far, Musk’s AI assistant is worth monitoring. To keep your data safe, be mindful of the content you share on X and stay informed about any updates in its privacy policies or terms of service, Briedis says. “Engaging with these settings allows you to better control how your information is handled and potentially used by technologies like Grok.”
