On a breezy Saturday afternoon in San Francisco, over a hundred coders came together at a coworking space for a unique hackathon called “Man vs. Machine.” The event sought to discover whether teams using AI coding tools could outperform teams composed entirely of humans, with a cash prize of $12,500 on the line.
Participants were divided into roughly 37 teams, each labeled "human" or "AI-supported." Shortly after assignments went out, several people placed on the human side dropped out, perhaps wary of competing without AI tools. A panel of judges evaluated the projects on creativity, real-world applicability, technical skill, and execution, with only six projects making it to the final demo round.
The hackathon was not just a competition; it also aimed to probe the practical implications of AI in coding. A recent study from METR found that AI tools could actually hinder the productivity of experienced developers. This hackathon would extend that line of inquiry by having participants, some with limited coding experience, build new projects entirely from scratch.
During the event, excitement buzzed in Slack channels as contestants pitched ideas ranging from an AI tool providing feedback for pianists to an app for tracking reading habits. Stanford student Arushi Agastwar, randomly placed on the human side, decided to create a framework evaluating sycophancy in AI models, expressing both optimism and uncertainty about her project’s potential.
On the AI-supported team, Eric Chong, with a background in dentistry, eagerly developed software meant to detect autism using voice and facial recognition, acknowledging potential bias in the data yet hoping for accurate early detection.
As the deadline approached, the workspace filled with the scent of vegan meatball subs as participants raced to finish their projects. The judges included members from OpenAI and Anthropic, raising the stakes. To everyone's surprise, the competition stayed close: the finalist lineup was evenly split, with three projects from each side.
Final presentations showcased innovative ideas. A tool called ViewSense assisted visually impaired users by transcribing video feeds into audio descriptions, impressively crafted without AI support. Another project allowed users to design websites using analog methods, again excluding AI. On the other hand, the AI-supported projects included a system generating heat maps of code changes to pinpoint vulnerabilities.
Ultimately, the AI-supported team clinched first place with the heat-map project, while a writing tool that helped authors track character relationships took second. The results reinforced the idea that while humans bring creativity and novel approaches, integrating AI into coding can significantly boost productivity and outcomes in certain contexts.
The hackathon highlighted a notable dynamic: even participants who preferred the human-team experience lamented the challenges of coding without AI. Yet as the event concluded, it became apparent that combining human ingenuity with AI capabilities often proves advantageous, hinting at the future trajectory of coding and other technology professions.