Why Hackers Dislike AI “Slop” Even More Than You Do

Cybercriminals are expressing their frustrations over the increasing presence of generative AI content in their communities, with many feeling overwhelmed by the influx of low-quality AI-generated posts. In a surprising twist, the complaints echo those of mainstream internet users who have similarly criticized AI’s intrusion into their online experiences.

One user on a cybercrime forum voiced their disappointment explicitly, stating, “I’m disappointed that you are working to incorporate AI garbage into the site.” This concern highlights a growing backlash among hackers, scammers, and low-level criminals who are becoming weary of AI’s incursion into their discussions about illicit activities.

According to security researcher Ben Collier, a recent study by researchers from the Universities of Edinburgh, Cambridge, and Strathclyde found growing skepticism about AI on underground forums since tools like ChatGPT emerged in late 2022. Analyzing conversations, the researchers noted complaints that basic cybersecurity topics were being oversimplified, along with concerns that AI-generated content was diluting the quality and expertise within these online communities.

The underground landscape has traditionally been characterized by forums that resemble social networks, where reputation and community engagement matter. Users have expressed annoyance that some individuals resort to AI-generated posts to boost their credibility instead of demonstrating real knowledge and skills. One Hack Forums participant put it plainly: “If I wanted to talk to an AI chatbot, there are many websites for me to do so … I come here for human interaction.”

While some forums showed initial enthusiasm about AI’s potential benefits for hacking, many have since soured on the idea. Users have labeled AI-generated posts “AI shit,” reflecting genuine irritation over the decline of authentic peer engagement.

Despite the negative sentiment, some members of these communities still see potential benefits in AI tools, suggesting that AI could polish their posts without taking over the conversation. Others firmly reject the idea of AI participating in discussions at all, fearing it would diminish the human element that makes these forums engaging.

An emerging conversation involves the concept of creating an AI-enhanced cybercrime market, aiming to streamline the process of purchasing stolen data. However, many members dismissed this idea outright, calling it foolish and counterproductive.

As the research indicates, while generative AI technology is becoming more pervasive, its impact on lower-tier cybercriminals appears limited for now. Established business models remain intact, and AI’s significant effects seem confined to specific automated tasks rather than revolutionizing the entire cybercrime ecosystem.
