Navigating the Age of ‘Deep Doubt’: Challenges and Perspectives

Given the surge of hyper-realistic AI-generated images flooding social networks like X and Facebook, it seems we are entering a new phase of media skepticism termed “deep doubt.” The tradition of doubting the authenticity of media dates back decades, and even further in the analog realm, but the ready availability of tools that generate believable counterfeit media has fueled a resurgence of manipulators who invoke AI to dismiss authentic photo and video evidence. Consequently, generalized distrust in digital content from unfamiliar sources may be intensifying.

Deep doubt is skepticism of genuine media that arises from the mere existence of generative AI technologies. The result is widespread public uncertainty about the authenticity of media artifacts, which in turn empowers individuals to dismiss real events as AI fabrications.

The underlying idea of “deep doubt” isn’t novel, but its tangible effects are now increasingly evident. Since the term deepfake was coined in 2017, AI media generation has advanced significantly, producing moments such as conspiracy theories that President Joe Biden has been replaced by an AI hologram and former president Donald Trump’s unfounded claim that Vice President Kamala Harris used AI to fake the size of her rally crowds. Trump also attributed a photo of himself with writer E. Jean Carroll, who won a sexual abuse lawsuit against him, to AI, contradicting his denial of ever having met her.

Legal experts Danielle K. Citron and Robert Chesney predicted this scenario years in advance, describing it in 2019 as the “liar’s dividend”: the benefit deceivers reap when they invoke deepfakes to dismiss legitimate evidence. What was once a theoretical academic concern has become an everyday reality.

Doubt has been a political weapon since ancient times. This modern AI-fueled manifestation is just the latest evolution of a tactic where the seeds of uncertainty are sown to manipulate public opinion, undermine opponents, and hide the truth. AI is the newest refuge of liars.

This story originally appeared on Ars Technica, a trusted source for technology news, tech policy analysis, reviews, and more. Ars is owned by WIRED’s parent company, Condé Nast.

Over the past decade, the rise of deep-learning technology has made it increasingly easy for people to craft false or modified pictures, audio, text, or video that appear to be authentic, non-synthesized media. Deepfakes were named after a Reddit user going by the name “deepfakes,” who shared AI-faked pornography on the service, swapping the face of a performer with the face of someone who wasn’t part of the original recording.

In the 20th century, one could argue that part of our trust in media produced by others stemmed from how expensive and time-consuming it was, and how much skill it required, to produce documentary images and films. Even texts demanded considerable time and skill. As the deep doubt phenomenon grows, it will erode this 20th-century media sensibility. It will also affect our political discourse, legal systems, and even our shared understanding of historical events, all of which rely on that media to function, because we depend on others for information about the world. From photorealistic images to pitch-perfect voice clones, our perception of what we consider “truth” in media will need recalibration.

In April, a panel of federal judges highlighted the potential for AI-generated deepfakes not only to introduce fake evidence but also to cast doubt on genuine evidence in court trials. The concern emerged during a meeting of the US Judicial Conference’s Advisory Committee on Evidence Rules, where the judges discussed the challenges of authenticating digital evidence in an era of increasingly sophisticated AI technology. The judges ultimately decided to postpone any AI-related rule changes, but their discussion shows that American judges are already grappling with the subject.

Deep doubt impacts more than just current events and legal issues. In 2020, I wrote about a potential “cultural singularity,” a threshold where truth and fiction in media become indistinguishable. A key part of that threshold is the level of “noise,” or uncertainty, that AI-generated media can inject into our information ecosystem at scale. The sheer prevalence of AI-generated content could create widespread doubt about whether real historical events took place, perhaps another manifestation of deep doubt. In 2022, Microsoft chief scientific officer Eric Horvitz echoed these ideas in a research paper on a similar topic, warning of a potential “post-epistemic world, where fact cannot be distinguished from fiction.”

And deep doubt could erode social trust on a massive, internet-wide scale. This erosion is already manifesting in online communities through phenomena like the growing conspiracy theory called “dead internet theory,” which posits that the internet now mostly consists of algorithmically generated content and bots that pretend to interact with it. The ease and scale with which AI models can now generate convincing fake content is reshaping our entire digital landscape, affecting billions of users and countless online interactions.

“Deep doubt” is a new term, but it’s not a new idea. The erosion of trust in online information caused by synthetic media extends back to the origins of deepfakes themselves. Writing for The Guardian in 2018, David Shariatmadari warned of an impending “information apocalypse” due to deepfakes and asked, “When a public figure claims the racist or sexist audio of them is simply fake, will we believe them?”

In 2019, Danielle K. Citron of Boston University School of Law and Robert Chesney of the University of Texas authored a paper entitled “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security,” which introduced the term “liar’s dividend.” They explained that deepfakes make it easier for deceivers to dismiss the truth. Ironically, the liar’s dividend grows as the public becomes more aware of how easily media can be fabricated, a recognition that could undermine trust in traditional news and weaken democratic dialogue. The paper warns that this shift could pave the way for authoritarianism by diminishing the influence of objective truths and elevating subjective opinion over fact.

The issue of deep doubt also dovetails with the existing challenges of misinformation and disinformation, offering new avenues for spreading falsehoods and undermining genuine news. This trend, already propelled by cable news and social media, could make our collective understanding of truth even more subjective, encouraging people to accept views that affirm their biases over balanced evidence.

Ultimately, all meaning originates from context. We construct our own interconnected mesh of concepts to decode reality. Isolated consideration of any idea, or the attempt to verify a possibly altered media piece, proves unproductive without connecting it to broader, established concepts.

Throughout recorded history, historians and journalists have had to evaluate the reliability of sources based on provenance, context, and the messenger’s motives. For example, imagine a 17th-century parchment that apparently provides key evidence about a royal trial. To determine if it’s reliable, historians would evaluate the chain of custody, as well as check if other sources report the same information. They might also check the historical context to see if there is a contemporary historical record of that parchment existing. That requirement has not magically changed in the age of generative AI.

In the face of growing concerns about AI-generated content, several tried-and-true media literacy strategies can help verify the authenticity of digital media, as Ars Technica’s Kyle Orland pointed out during our coverage of the Harris crowd-size episode.

When evaluating the veracity of online media, it’s important to rely on multiple corroborating sources, particularly ones showing the same event from different angles in the case of visual media, or reported by multiple credible outlets in the case of text. It’s also useful to track down original reporting and imagery from verified accounts or official websites rather than trusting potentially modified screenshots circulating on social media. Varied eyewitness accounts and reputable news organizations can provide additional perspectives and help you spot logical inconsistencies between sources.
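To make that corroboration heuristic concrete, here is a toy sketch in Python. The data model, the two-independent-outlets threshold, and the sample reports are all illustrative assumptions, not a real verification pipeline.

```python
# Toy sketch of the corroboration heuristic described above.
# The Report model and the thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Report:
    outlet: str        # who published it
    is_original: bool  # original reporting vs. a reshared screenshot
    vantage: str       # e.g., camera angle or eyewitness position

def looks_corroborated(reports: list[Report]) -> bool:
    # Count only original reporting, and require distinct vantage points
    # so one viral photo shared five times doesn't count as five sources.
    originals = [r for r in reports if r.is_original]
    outlets = {r.outlet for r in originals}
    vantages = {r.vantage for r in originals}
    return len(outlets) >= 2 and len(vantages) >= 2

rally_reports = [
    Report("Local TV station", True, "stage left"),
    Report("Wire service", True, "press riser"),
    Report("Anonymous account", False, "unknown"),  # screenshot, ignored
]
print(looks_corroborated(rally_reports))  # True: two independent originals
```

The point is not the code itself but the discipline it encodes: independent origins and independent vantage points, not raw repost counts.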

In general, we recommend approaching claims of AI manipulation skeptically, considering simpler explanations for unusual elements in media before jumping to conclusions about AI involvement, which may fit a satisfying narrative (through confirmation bias) but give you the wrong impression.

You’ll notice that our suggested counters to deep doubt above do not include watermarks, metadata, or AI detectors as ideal solutions. That’s because trust does not inherently derive from the authority of a software tool. And while AI and deepfakes have dramatically accelerated the issue, bringing us to this new deep-doubt era, the necessity of finding reliable sources of information about events you didn’t witness firsthand is as old as history itself.

Since Stable Diffusion’s debut in 2022, we’ve often discussed the concerns surrounding deepfakes, including their potential to erode social trust, degrade the quality of online information by introducing noise, fuel online harassment, and possibly distort the historical record. We’ve delved deep (see what I did there) into many aspects of generative AI, and to date, reliable AI synthesis detection remains an unsolved issue, with watermarking technology frequently considered unreliable and metadata-tagging efforts not yet broadly adopted.
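To illustrate why metadata tagging is such a fragile defense, consider this minimal sketch using Python’s Pillow imaging library; photo.jpg is a hypothetical local file. Any routine re-save, of the kind social media pipelines perform constantly, silently discards the tags.

```python
# Minimal sketch of metadata fragility, assuming Pillow is installed
# and a hypothetical local file named photo.jpg exists.
from PIL import Image

img = Image.open("photo.jpg")
print(dict(img.getexif()))  # EXIF tags, if any survived this far

# Saving without explicitly passing the metadata drops it, which is
# roughly what happens when platforms recompress uploaded images.
img.save("photo_stripped.jpg")
print(dict(Image.open("photo_stripped.jpg").getexif()))  # typically {}
```

Tamper-evident provenance standards aim to harden such tags, but a missing tag still proves nothing about an image’s origin either way.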

Although AI detection tools exist, we strongly advise against using them because they are currently not based on scientifically proven concepts and can produce false positives or negatives. Instead, manually looking for telltale signs of logical inconsistencies in text or visual flaws in an image, as identified by reliable experts, can be more effective.

It’s likely that in the near future, well-crafted synthesized digital media artifacts will be completely indistinguishable from human-created ones. That means there may be no reliable automated way to determine if a convincingly created media artifact was human- or machine-generated solely by looking at one piece of media in isolation (remember the sermon on context above). This is already true of text, which has resulted in many human-authored works being falsely labeled as AI-generated, creating ongoing pain for students in particular.
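A quick base-rate calculation shows why false labeling happens at scale even with a seemingly accurate detector. The 95 percent accuracy figures and the 10 percent AI share below are hypothetical numbers chosen only to illustrate the arithmetic.

```python
# Hypothetical detector: right 95% of the time on both AI-written and
# human-written text, applied to a pool of essays that is 10% AI-written.
ai_share = 0.10        # assumed fraction of essays that are AI-generated
sensitivity = 0.95     # assumed P(flagged | AI-written)
specificity = 0.95     # assumed P(not flagged | human-written)

true_positives = ai_share * sensitivity
false_positives = (1 - ai_share) * (1 - specificity)

# Bayes' rule: the chance that a flagged essay is actually AI-generated
precision = true_positives / (true_positives + false_positives)
print(f"{precision:.0%} of flagged essays are actually AI-written")  # ~68%
```

Under those assumptions, roughly one in three flagged essays is human-written, which is exactly the kind of false accusation students have faced.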

Throughout recorded history, every form of recorded media, from ancient clay tablets to modern devices, has been vulnerable to forgery. Since the advent of photography, the camera’s ability to accurately depict reality has been in question: cameras can deceive. The notion that devices capture unbiased reality has always been misleading; selective framing and manipulation of images have skewed perceptions from the start. Ultimately, the credibility of what we observe depends heavily on the integrity of its source.

In many respects, the era of pervasive skepticism is as ancient as humanity itself. The use of credible and trustworthy sources remains essential for evaluating the validity of information, just as it was in 3000 BC when humans first started documenting history.
