Fable, a social media app for book lovers, rolled out an AI-driven year-end summary feature to recap users’ reading habits for 2024. The summaries were meant to add a playful flair, but some took a contentious turn. Book influencer Danny Groves’s recap, for instance, suggested he occasionally seek out a “straight, cis white man’s perspective,” branding him a “diversity devotee.” Writer Tiana Trammell’s summary closed with the advice to “surface for the occasional white author, okay?”
Trammell was stunned by the implication and shared her experience on Threads, where she found that many other users had received similarly insensitive remarks touching on topics such as disability and sexual orientation.
In a world grown accustomed to features like Spotify Wrapped, many platforms are infusing AI into their user summaries; Spotify, for instance, now uses AI to generate podcast-style recaps based on individual listening histories. Fable embraced the trend, using OpenAI’s API to produce its summaries, though it evidently did not anticipate that the AI-generated commentary would read like anti-woke criticism.
Following backlash on social media, Fable apologized across several platforms, acknowledging the “hurt caused” by the summaries and vowing to do better. Kimberly Marsh Allee, Fable’s head of community, said changes were under way, including an option to opt out and clearer disclosures that the summaries were AI-generated. The company later removed the AI-powered features entirely.
Some users felt the adjustments fell short. Writer A.R. Kaufer argued that the company should eliminate AI from the feature altogether and issue a more formal apology, saying the initial response felt insincere and that some of the remarks read like racist or sexist slurs. Kaufer deleted her Fable account over the incident.
Trammell echoed Kaufer’s sentiments, recommending that the AI feature be suspended and put through more rigorous internal testing before any relaunch. Groves agreed, saying he would rather have no personalized summaries at all than risk the harm of unchecked AI outputs.
The incident reflects a broader pattern in generative AI, which has a history of producing racially insensitive output. Previous reports have documented biased results from tools including OpenAI’s DALL-E and Google’s Gemini. Because these systems learn from data that carries the biases of the people and societies that produce it, their outputs can reproduce those prejudices, often with damaging results.