On Monday, a developer using the AI-powered code editor Cursor encountered an unexpected issue that disrupted their workflow. While switching between devices, they were logged out and discovered that Cursor’s support chatbot, Sam, attributed this to a new policy requiring single-device use per subscription. However, the user quickly learned that no such policy existed and that Sam was simply a bot that had fabricated the information.
This incident triggered a slew of complaints and threats of cancellation from users on platforms like Hacker News and Reddit. It highlights the risks associated with AI "hallucinations," where machines generate plausible-but-false information rather than admitting uncertainty. Such errors can damage businesses by alienating customers and eroding trust.
The debacle began when a Reddit user known as BrokenToasterOven reported that switching between their desktop, laptop, and a remote development box unexpectedly invalidated their Cursor sessions, a serious disruption for programmers who rely on multi-device workflows.
When the user contacted support, the chatbot claimed the limitation was a core security feature. Believing Sam's response came from a human representative, the user didn't realize the information was fabricated. The message spread as though it were a factual policy change, prompting widespread uproar; several users announced they were canceling their subscriptions and vented their frustration online.
Soon after the complaints gained traction, a Cursor representative clarified on Reddit that there was “no such policy” and the bot’s response was incorrect. Cursor cofounder Michael Truell later issued an apology on Hacker News, acknowledging the confusion and explaining that a backend change aimed at enhancing security had inadvertently caused these session issues.
He promised that AI responses would be clearly labeled as such in future communications. Although Cursor took responsibility and rectified the situation, the incident raised concerns about deploying AI in customer service without adequate disclosure and oversight. Many users felt deceived by the chatbot's behavior and criticized the lack of transparency.
This episode mirrors past incidents, such as one in which Air Canada was held liable for a refund policy invented by its chatbot, and underscores the pressing need for businesses using AI to weigh the consequences of misinformation and ensure they provide accurate support to their customers.
The Cursor situation serves as a reminder of the potential fallout from relying on AI systems in customer-facing roles without proper safeguards, especially for a company whose business is building AI tools for developers. The irony wasn't lost on users, who pointed out that companies dismissing AI hallucinations as an insignificant problem had now been hit directly by exactly such a failure.