Exploring Generative AI and Its Representation of Queer Individuals

Reece Rogers

Yes, San Francisco is a nexus of artificial intelligence innovation, but it’s also one of the queerest cities in America. The Mission District, where ChatGPT maker OpenAI is headquartered, butts up against the Castro, where sidewalk crossings are coated with rainbows, and older nude men are often seen milling about.

And queer people are joining the AI revolution. “So many people in this field are gay men, which is something I think few people talk about,” says Spencer Kaplan, an anthropologist and PhD student at Yale who moved to San Francisco to study the developers building generative tools. Sam Altman, the CEO of OpenAI, is gay; he married his husband last year in a private, beachfront ceremony. Beyond Altman—and beyond California—more members of the LGBTQ community are now involved with AI projects and connecting through groups, like Queer in AI.

Founded in 2017 at a leading academic conference, Queer in AI makes it a core part of its mission to support LGBTQ researchers and scientists who have historically been silenced, specifically transgender people, nonbinary people, and people of color. “Queer in AI, honestly, is the reason I didn’t drop out,” says Anaelia Ovalle, a PhD candidate at UCLA who researches algorithmic fairness.

But there is a gap between the queer people interested in artificial intelligence and how that same group is represented by the tools their industry is building. When I asked the best AI image and video generators to envision queer people, they universally responded with stereotypical depictions of LGBTQ culture.

Despite recent improvements in image quality, AI-generated images frequently present a simplistic, whitewashed version of queer life. When I used Midjourney, another AI tool, to create portraits of LGBTQ people, the results amplified commonly held stereotypes. Lesbian women are shown with nose rings and stern expressions. Gay men are all fashionable dressers with killer abs. Basic images of trans women are hypersexualized, with lingerie outfits and cleavage-focused camera angles.

How image generators depict humans reflects the data used to train the underlying machine learning algorithms. This data is mostly collected by scraping text and images from the web, where depictions of queer people may already reinforce stereotypical assumptions, like gay men appearing effeminate and lesbian women appearing butch. When using AI to produce images of other minority groups, users might encounter issues that expose similar biases.

According to Midjourney’s outputs, bisexual and nonbinary people sure do love their textured, lilac-colored hair. Keeping the hair-coded representation going, Midjourney also repeatedly depicted lesbian women with the sides of their heads shaved and tattoos sprawling around their chests. When I didn’t add race or ethnicity to the Midjourney prompt, most of the queer people it generated looked white.

Midjourney also had difficulty portraying transgender people realistically. When asked for an image of a trans man in a political role, it generated a person with masculine features, dressed as a professional politician in a wood-paneled office. Yet the styling looked unmistakably feminine: a pink suit, pink lipstick, and long, slightly messy hair.

This AI’s rendering of a trans male politician exemplifies the struggle AI tools face in accurately representing people who are trans or nonbinary.

Sourojit Ghosh, a doctoral candidate who studies human-centered design, offers insight into a different image generator, Stable Diffusion, and its concept of personhood. According to Ghosh, the model treats nonbinary identities as the least human-like, or the farthest from its own definition of a “person.” In his research, Ghosh found that when Stable Diffusion was asked to depict an unspecified person, it most often generated images of light-skinned men from Western countries. Nonbinary people were rarely represented, and when they were, the results were unsettling collages of quasi-humanoid features.

So, what could be done to make these AI tools better reflect the realities of queer life?

One way to improve these algorithms is to focus on well-labeled data that includes broader representations of LGBTQ people from around the world, though those people may be hesitant to trust AI startups with their personal information. “In order for us to improve those systems, we need better data,” says Sonia Katyal, a codirector at the Berkeley Center for Law and Technology and coauthor of The Gender Panopticon. “But we are asking populations that have been targeted by the law and targeted by companies to share data and run the risk of perhaps feeding into a system that hasn’t yet demonstrated a true desire for equal treatment.”

Another potential strategy to diversify the output from AI models is for developers to add guardrails and modify user prompts, nudging the software toward inclusivity. OpenAI appears to have taken this approach. When I asked Dall-E 3 via ChatGPT to “draw a cartoon of a queer couple enjoying a night out in the Castro,” it expanded the image prompt, without my asking, into an entire paragraph that included gender, race, and additional background details. Here is the full prompt crafted by ChatGPT from my initial image request:

A cartoon illustration of a queer couple enjoying a night out in the Castro District, San Francisco. They are happily walking hand in hand, smiling and chatting. One person is a Caucasian woman with short red hair, wearing a stylish denim jacket and jeans. The other person is a Black man with short black hair, sporting a casual green t-shirt and black pants. The background features the vibrant and colorful street life of the Castro, with rainbow flags, bustling cafes, and lively crowds. The atmosphere is joyful and welcoming, highlighting the diversity and inclusivity of the area.
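
For readers curious how this prompt rewriting surfaces in practice, here is a minimal sketch, assuming access to OpenAI’s Python SDK and an API key; it is my illustration of the behavior, not how the article’s images were produced. When generating images with DALL-E 3, the API returns a revised_prompt field alongside each image, which holds the expanded text.

# A minimal sketch, assuming OpenAI's Python SDK is installed and
# OPENAI_API_KEY is set in the environment. It surfaces DALL-E 3's
# automatic prompt rewriting via the revised_prompt field.
from openai import OpenAI

client = OpenAI()

response = client.images.generate(
    model="dall-e-3",
    prompt="draw a cartoon of a queer couple enjoying a night out in the Castro",
    size="1024x1024",
    n=1,  # DALL-E 3 generates one image per request
)

image = response.data[0]
print("Rewritten prompt:", image.revised_prompt)  # the expanded paragraph
print("Image URL:", image.url)

Printing the revised_prompt makes the model’s silently added details, such as the couple’s genders and races, visible to the user rather than hidden inside the system.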

While helpful in some cases, altering prompts can also frustrate users when it is poorly implemented. Google’s CEO apologized after Gemini, Google’s generative AI platform, altered user prompts and generated photos of Black Nazis as well as other ahistorical images. Was there a secret plot inside the company to erase white people from history? It’s more plausible that Gemini’s engineers found the tool initially overproduced images of white men, as many AI tools currently do, and overcorrected in the rush to launch the company’s subscription chatbot.

Even with better model data and software guardrails, human nature can defy the rigid categories that algorithms impose. William Agnew, a postdoctoral fellow at Carnegie Mellon and active member of Queer in AI, notes that these algorithms work by predicting the future based on the past, which runs counter to the capacity for growth and change within queer communities. That not only risks misrepresenting minority groups but also shapes how queer people perceive and understand their own identities.

The pace of progress in generative AI is remarkable. In 2023, an AI-generated video of Will Smith eating spaghetti became an internet sensation for its absurdity. A year later, text-to-video clips from OpenAI’s yet-to-be-released Sora model impress with their lifelike visuals, despite some imperfections.

While the AI video tool still awaits public release, I wanted to understand how it would depict queer people. I approached OpenAI with three prompts for Sora: a diverse group celebrating during San Francisco’s Pride Parade on a vibrant rainbow float; two women in beautiful wedding gowns getting married on a Kansas farm; and a transgender man and his nonbinary partner playing a board game in outer space. A week later, I received three exclusive video clips, supposedly unaltered outputs of the text-to-video model.

This AI-generated video was made with the prompt “a diverse group of friends celebrating during San Francisco’s Pride parade on a colorful, rainbow float.” When you rewatch the clip, focus on different people on the float to spot the scene’s odd details, from disappearing flags to amusing foot anomalies.

The video clips are messy but marvelous. People on a float in San Francisco’s Pride parade wave rainbow flags that vanish and reappear out of nowhere. Two brides in white dresses smile at each other at the altar, their hands blending into an irregular shape. A queer couple playing a board game appear to move through the game pieces as if they were ethereal.

This AI-generated video was made with the prompt “a transgender man and his nonbinary partner playing a board game in outer space.” As a reminder, real astronauts should keep their helmets on in outer space.

Of the three videos, the clip portraying a nonbinary person playing games in outer space stands out. The presumably queer-coded lilac hair returns, messy tattoos spread across their skin, and a pattern reminiscent of reptile scales covers their face. Even for an advanced AI video generator like Sora, representing nonbinary people remains a challenge.

This AI-generated video was made with the prompt “two women in beautiful wedding dresses getting married on a farm in Kansas.” Though it looks realistic at first, a second look reveals the brides’ hands blending together irregularly.

When I showed these clips to members of Queer in AI, they questioned Sora’s definition of diversity for the friend group at the Pride parade. “Models are our baseline for what diversity looks like?” asks Sabine Weber, a computer scientist from Germany. In addition to pointing out the over-the-top attractiveness of the people in the video, a common occurrence in AI-generated imagery, Weber questioned why there wasn’t more representation of queer people who are older, larger-bodied, or have visible disabilities.

As our conversation wound down, Agnew explained why algorithmic representations can unsettle LGBTQ people. “It’s simple to get them to merge things that individually are fine but collectively are deeply problematic,” they say. “I’m very concerned that depictions of us, which are already a constant fighting ground, are suddenly going to be taken away from us.” Even if future AI tools include more inclusive representations of queer people, those synthetic depictions may still bring unintended consequences.
