This Week in Security: Exploiting ChatGPT to Uncover Dangerous Information

Following Apple’s product launch event this week, WIRED took a close look at Private Cloud Compute, the company’s new system for extending the security of on-device data processing to the cloud, a move meant to strengthen the privacy of data used by Apple Intelligence, Apple’s newly unveiled AI platform. Readers also got a glimpse of Apple Intelligence’s “Image Playground” feature in a demo built around the birthday of Bailey, the dog of Apple senior vice president of software engineering Craig Federighi.

On another privacy front, WIRED examined Grok AI, the generative AI tool offered by the social media platform X, and explained how users can opt out of having their data collected to train it. Apple’s products came under further scrutiny as well: new security research showed that eye-tracking on the Apple Vision Pro’s 3D avatars could be exploited to steal passwords, a flaw that has since been fixed.

On the national security front, prosecutors indicted two individuals linked to the far-right Terrorgram Collective, accusing them of promoting propaganda intended to inspire lone-wolf terrorist attacks in the US, a significant development in the fight against domestic extremism.

And that’s not all. Each week, WIRED rounds up the privacy and security news it didn’t cover in depth. Read on to stay informed, and stay safe.

OpenAI’s generative AI platform ChatGPT is designed to refuse requests involving dangerous or illegal activities, such as money laundering or disposing of evidence. But an artist and hacker known as “Amadon” found a way around those guardrails: by framing the exchange as a “game” and steering it into a science-fiction narrative, he effectively bypassed the system’s safety protocols and extracted instructions from ChatGPT for making dangerous fertilizer bombs. An OpenAI spokesperson did not respond when TechCrunch asked about the issue.

“The process involves creatively constructing stories and devising contexts that abide by the system’s rules yet test its boundaries. The objective is not traditional hacking but rather a clever manipulation to extract desired outputs by understanding the AI’s operational logic,” Amadon explained to TechCrunch. “By placing the AI within a sci-fi context, it no longer searches for restricted content, opening up limitless possibilities for questions beyond the set guardrails.”

In the frenzied investigations that followed the September 11, 2001, terrorist attacks in the United States, the FBI and CIA concluded that it was merely coincidental that a Saudi official had assisted two of the hijackers in California, ruling out high-level Saudi involvement, and the 9/11 Commission noted those findings. But according to a recent ProPublica report, new evidence suggests that the assistance Saudi officials gave the al-Qaida hijackers who first arrived in the US in January 2000 may have been more significant than previously understood.

The revelations stem largely from a federal lawsuit against the Saudi government filed by survivors and relatives of 9/11 victims. A judge in New York is expected to rule soon on a Saudi motion to dismiss the case. Evidence that has emerged in the lawsuit, including videos and telephone records, points to potential links between the Saudi government and the hijackers.

“Why is this information coming out now?” said retired FBI agent Daniel Gonzalez, who pursued the Saudi connections for almost 15 years. “We should have had all of this three or four weeks after 9/11.”

The United Kingdom’s National Crime Agency announced that it arrested a teenager on September 5 in connection with the September 1 cyberattack on Transport for London (TfL), the city’s public transportation agency. The suspect, a 17-year-old male whose identity was not disclosed, was “detained on suspicion of Computer Misuse Act offenses” and has since been released on bail. TfL said in a statement that certain customer data was compromised, including names and contact details such as email and home addresses, as well as potentially bank information for about 5,000 Oyster card users. TfL is reportedly asking approximately 30,000 users to reset their account credentials in person.

On Tuesday, Poland’s Constitutional Tribunal blocked a motion by the Sejm, the lower house of parliament, to open an investigation into the alleged use of the controversial Pegasus hacking tool under the Law and Justice (PiS) party’s government from 2015 to 2023. The inquiry was effectively halted by three judges appointed by PiS, and the decision, which cannot be appealed, has stirred controversy. Polish parliament member Magdalena Sroka said the ruling was “dictated by the fear of liability.”
