It’s nothing new for computers to mimic human social etiquette, emotion, or humor. We just aren’t used to them doing it very well.
OpenAI’s presentation of an all-new version of ChatGPT on Monday suggests that’s about to change. It’s built around an updated AI model called GPT-4o, which OpenAI says is better able to make sense of visual and auditory input, describing it as “multimodal.” You can point your phone at something, like a broken coffee cup or differential equation, and ask ChatGPT to suggest what to do. But the most arresting part of OpenAI’s demo was ChatGPT’s new “personality.”
The upgraded chatbot spoke with a sultry female voice that struck many as reminiscent of Scarlett Johansson, who played the artificially intelligent operating system in the movie Her. Throughout the demo, ChatGPT used that voice to adopt different emotions, laugh at jokes, and even deliver flirtatious responses—mimicking human experiences software does not really have.
OpenAI’s announcement came just one day before Google I/O, Google’s annual developer conference, where Google unveiled its own more advanced AI assistant prototype, Project Astra, capable of fluent voice interaction and video understanding.
Google, however, chose not to anthropomorphize its assistant, opting instead for a more subdued, mechanical tone. Last month, researchers at Google DeepMind published a lengthy technical paper titled “The Ethics of Advanced AI Assistants.” The paper outlines the challenges that more humanlike AI assistants could pose, from new privacy risks and potential technological addiction to more potent means of misinformation and manipulation. Many people already spend considerable time with chatbot companions or AI girlfriends, and the technology looks set to become even more immersive.
In my conversation with Demis Hassabis, the executive leading Google’s AI initiatives, he mentioned that the research was motivated by Project Astra’s potential. “We need to anticipate this given the technology we’re developing,” he stated. This notion seems more relevant than ever following OpenAI’s recent announcement.
OpenAI, however, didn’t acknowledge potential risks during its presentation. Assistants that convey emotion could manipulate people’s feelings, becoming more persuasive and habit-forming over time. Sam Altman, OpenAI’s CEO, enthusiastically referenced Scarlett Johansson in a one-word tweet: “her.” OpenAI didn’t immediately respond to a request for comment, but the company says its charter obligates it to “prioritize the development of safe and beneficial AI.”
We should take time to consider the implications of convincingly lifelike computer interfaces permeating our daily lives, especially when they are built by profit-driven corporations. Distinguishing between bots and real people on a phone call will inevitably become harder. Companies will surely want to use inviting bots to promote their products, politicians will see them as a way to sway public sentiment, and criminals will adapt them to supercharge their scams.
Even advanced new “multimodal” AI assistants without flirty front ends will likely introduce new ways for the technology to go wrong. Text-only models like the original ChatGPT are susceptible to “jailbreaking,” prompts that unlock misbehavior; systems that also take in audio and video will have new vulnerabilities. Expect to see these assistants tricked in creative new ways into exhibiting inappropriate behavior and, perhaps, unpleasant personality quirks.