The ACLU’s Battle to Protect Your Constitutional Right to Create Deepfakes

You wake up on Election Day and unlock your phone to a shaky video of your state capitol. In the hectic footage, smoke billows from the statehouse. In other clips posted alongside it, gunshots ring out in the distance. You think to yourself: Maybe better to skip the polling booth today. Only later do you learn that the videos were AI forgeries.

A friend calls you, distraught. An acquaintance, hiding behind anonymity, has inserted her into a series of pornographic deepfakes, and now the videos are spreading from site to site. The police told her to contact a lawyer, but the cease-and-desist letters aren’t working.

You are a famous actor. A major tech company wants you to be the voice of its newest AI assistant. You decline. Months later, the chatbot is released and people say it sounds just like you. You never consented to such mimicry, and now someone else is monetizing your voice.

As forgeries made by generative AI swamp the internet, pretty soon everyone, not just Scarlett Johansson, could have a story like this to tell. Lawmakers across the United States have recently passed nearly a dozen laws, and introduced dozens more, to regulate AI imitations in all their forms. But that legal campaign is now running into flak from an unlikely source. Civil liberties groups, led by the national American Civil Liberties Union and its state-level affiliates, are mounting legal arguments that could narrow many of these new rules or strike them down entirely. The heart of the argument: Americans have a constitutional right to deepfake their fellow citizens.

“Anytime you see large waves of bills attempting to regulate a new technology across 50 different state legislatures and God knows how many community ordinances, there’s going to be a fair number of them that draw the lines incorrectly,” Brian Hauss, a senior staff attorney with the ACLU Speech, Privacy, and Technology Project, told me. “So I have no doubt,” he went on, “there will be lots of litigation over these bills as they get implemented.”

Such litigation could prove to be an uncomfortable reckoning for the swelling movement to regulate AI—and lead to a messy future in which we all simply have to put up with some amount of machine-made mimicry.

First, put aside any notion that AI itself has rights. It doesn’t. AI “is a tool, like a toaster or any other inanimate object,” Hauss told me. “But when I use AI to communicate something into the world,” he said, “I have First Amendment rights.”

By analogy, a colorful placard proclaiming “Thank God for Dead Soldiers” does not have any legal privileges. But when members of the Westboro Baptist Church make such a sign and wave it near a veteran’s funeral, they are entitled to the same constitutional protections that cover everybody else. However objectionable the sign itself might be, those rights are inalienable. (The church was once ordered to pay $5 million to the father of a Marine whose funeral it picketed. That judgment was reversed on appeal, and when the case reached the Supreme Court, the ACLU filed a brief in support of the church’s position. In 2011, the Court ruled in the church’s favor.)

And once a piece of legal speech exists—whether it’s a protest placard or a mean deepfake you made about your neighbor—First Amendment jurisprudence places strict limits on when, and why, the government can act to hide it from the view of others. “Imagine a world where the government did not restrict who could speak but restricted who could listen,” Cody Venzke of the ACLU’s National Political Advocacy Department says. “Those two rights have to exist together.” This idea is sometimes referred to as the “right to listen.”

By these criteria, many of the AI laws and regulations that have received bipartisan support across the country simply don’t pass constitutional muster. And there are a lot of them.

Last summer, the Federal Election Commission began considering whether an existing rule on fraudulent misrepresentation applies to “deliberately deceptive Artificial Intelligence campaign ads.” The ACLU, in a letter to the FEC, warned that the rule would have to be strictly limited to deepfakes whose creators had a demonstrable intent to deceive the public, rather than any deepfake that might trick some viewers. (The FEC has not rendered a decision.)

Meanwhile, in October 2023, President Biden signed a wide-ranging executive order on AI, which included a directive to the Department of Commerce to develop standards for watermarking AI outputs. “Everyone has a right to know when audio they’re hearing or video they’re watching is generated or altered by AI,” Biden said. The ACLU and other civil liberties groups are particularly wary of the idea of labeling, both because it may not be effective—bad actors could find technical work-arounds—and because it compels speech, forcing people to say something they would otherwise have left unsaid. By analogy, requiring all deepfakes to be labeled would be a bit like requiring all comedians to yell “This is a parody!” before launching into an impression of a politician.

At the state level, there has been even more legislative activity. In January of this year alone, state legislators introduced 101 bills related to deepfakes, according to BSA, a software trade group. One of those bills, introduced in Georgia, would make it a criminal offense to create or share a deepfake with the intent of influencing an election. This presented litigators and advocates at Georgia’s ACLU affiliate with an agonizing choice.

“The ACLU of Georgia has, historically, been really huge proponents of voter rights,” Sarah Hunt-Blackwell, a First Amendment policy advocate for the organization, told me. Just days before the bill reached the floor, primary voters in New Hampshire had received calls in which the deepfaked voice of Joe Biden urged them to stay home from the polls. That was “extremely concerning,” Hunt-Blackwell said.

And yet the team ultimately decided, after consulting with the national ACLU office, that censoring and over-criminalizing untrue political speech would pose a bigger threat than the deepfakes themselves. While the organization supports more narrowly tailored rules against disinformation about the date, place, and time of elections, which it considers a form of voter suppression, it contends that citizens have a constitutional right to use AI to spread untruths, just as they have a right to lie on paper or from a podium at a political rally. “Politics has always been mostly lies,” one senior ACLU staff member told me.

On January 29, in testimony before the Georgia Senate Judiciary Committee, Hunt-Blackwell urged lawmakers to scrap the bill’s criminal penalties and to add carve-outs for news media organizations wishing to republish deepfakes as part of their reporting. Georgia’s legislative session ended before the bill could proceed.

Federal deepfake legislation is also set to encounter resistance. In January, lawmakers in Congress introduced the No AI FRAUD Act, which would grant people property rights in their likeness and voice. This would enable anyone portrayed in a deepfake, as well as their heirs, to sue those who took part in the forgery’s creation or dissemination. Such rules are intended to protect people from both pornographic deepfakes and artistic mimicry. Weeks later, the ACLU, the Electronic Frontier Foundation, and the Center for Democracy and Technology submitted a letter in opposition.

Along with several other groups, they argued that the bill could be used to suppress far more than illegal speech. The mere prospect of facing a lawsuit, the letter argued, could deter people from using the technology for constitutionally protected acts such as satire, parody, or opinion.

In a statement to WIRED, the bill’s sponsor, Representative María Elvira Salazar, noted that “the No AI FRAUD Act contains explicit recognition of First Amendment protections for speech and expression in the public interest.” Representative Yvette Clarke, who has sponsored a parallel bill that would require deepfakes portraying real people to be labeled, told WIRED that it had been amended to include exceptions for satire and parody.

In interviews with WIRED, policy advocates and litigators at the ACLU noted that they do not oppose narrowly tailored regulations aimed at nonconsensual deepfake pornography. But they pointed to existing anti-harassment laws as a sturdy(ish) framework for addressing the issue. “There could of course be problems that you can’t regulate with existing laws,” Jenna Leventoff, an ACLU senior policy counsel, told me. “But I think the general rule is that existing law is sufficient to target a lot of these problems.”

This is far from a consensus view among legal scholars, however. As Mary Anne Franks, a George Washington University law professor and a leading advocate for strict anti-deepfake rules, told WIRED in an email, “The obvious flaw in the ‘We already have laws to deal with this’ argument is that if this were true, we wouldn’t be witnessing an explosion of this abuse with no corresponding increase in the filing of criminal charges.” In general, Franks said, prosecutors in a harassment case must show beyond a reasonable doubt that the alleged perpetrator intended to harm a specific victim—a high bar to meet when that perpetrator may not even know the victim.

Franks added: “One of the consistent themes from victims experiencing this abuse is that there are no obvious legal remedies for them—and they’re the ones who would know.”

The ACLU has not yet sued any government over generative AI regulations. The organization’s representatives wouldn’t say whether it is preparing a case, but both the national office and several affiliates said that they are keeping a watchful eye on the legislative pipeline. Leventoff assured me, “We tend to act quickly when something comes up.”

The ACLU and other groups don’t deny the horrors of misused generative AI, from political misinformation to pornographic deepfakes to the appropriation of artists’ work. The point of intervening in such cases would not be to endorse the offensive content. As Hauss put it, “We do represent a considerable amount of speech that we disagree with.” Rather, the goal is to prevent what these groups view as a dangerous constitutional slippage. “If you have a legal regime that says the government can suppress deepfakes,” Hauss said, “one of the first questions on everybody’s mind should be, how would an authoritarian government official use those authorities to suppress true speech about that person?”

Last year, the ACLU and numerous other civil liberties groups signed a letter in opposition to a bipartisan Senate bill that would make social media platforms liable for hosting generative AI content, including deepfakes. The letter warned that by weakening the protections that shield companies from liability for the content they host, the bill would create a regulatory opening for states to sue companies over non-AI content as well. The letter’s authors cited a bill introduced last year in the Texas legislature that would make it a crime to host “information on how to obtain an abortion-inducing drug” online. If both bills, federal and state, became law, social media platforms could be prosecuted for hosting abortion-related content; all it would take is for a user to post something with AI assistance—a tweet composed with ChatGPT’s help, an image generated by DALL-E. The letter argued that even “basic and commonplace” tools such as autocomplete and autocorrect might fit the Senate bill’s definition of generative AI.

For similar reasons, the ACLU and the EFF have long been skeptical of the expansion of so-called “publicity rights,” which have been proposed as a way to protect artists from AI-generated mimicry. The groups argue that such rights can be used by the rich and powerful to suppress speech they simply don’t like. Where Saturday Night Live is legally protected when one of its cast members impersonates Tom Cruise, anyone who makes Tom Cruise deepfakes could, under such rules, be legally vulnerable. That would set a dangerous precedent, the civil liberties groups contend. In March, Tennessee enacted a new law, the ELVIS Act, which prohibits the use of AI to mimic musicians’ voices. The ACLU has not commented publicly on the law, but staff who spoke to WIRED expressed skepticism that using creative content to train systems like ChatGPT or DALL-E constitutes copyright infringement.

The ACLU has a long history of winning free speech cases. Its challenges to generative AI regulations could pour cold water on hopes for a world in which rampant AI is reined in by the law alone. Multiple civil liberties litigators I spoke to suggested that stronger education and media literacy would be a better defense against AI fakes than lawmakers and judges. But is that enough?

As a society, we’ve always had to put up with some amount of unpleasant, not particularly useful, sometimes hurtful speech in order to guarantee the protection of speech that furthers the cause of openness and democracy. But the new technologies of faked speech are so widespread, and the algorithms that carry content to our screens are so tightly optimized in favor of the extreme and the corrosive, that some are beginning to wonder whether assigning AI speech the exact same protections as human speech might do more harm than good. “We’re suffering from a different problem than we ever suffered from before,” Mary Anne Franks, the George Washington University law professor, told me.

That being said, these are still early days. It is possible that advocacy groups and regulators could find a way back to the same uneasy but ultimately workable compromise that has always characterized free speech jurisprudence. At worst, however, the looming fight could force a choice between two unpalatable realities. Under an absolutist reading of the First Amendment, we may be condemned to stand idly before the algorithmic equivalent of a Westboro Baptist Church protest at a soldier’s funeral every time we go online. On the other hand, redrawing the tenets of free speech—even marginally—could grant future governments a previously unthinkable power: to decide what speech is true or valuable, and what isn’t.
