Author: Julian Chokkattu
The pathway leading into Rabbit’s venue—for the launch event of the R1, an artificial intelligence-powered device announced at CES 2024—was paved with gadgets from the past.
First was the orange JVC Videosphere, then the Sony Walkman, a Tamagotchi, a transparent Game Boy Color, heck, even the original Pokémon toy from 1998. At the very end of the hall was Teenage Engineering’s Pocket Operator, and across from it, a few concept prototypes of the Rabbit R1.
If the Pocket Operator stands out, seeing as it’s barely a decade old, that’s because the Swedish design firm Teenage Engineering helped design the R1. And at the launch event, CEO Jesse Lyu announced on stage that Jesper Kouthoofd, founder of Teenage Engineering, has joined Rabbit as its chief design officer (while still maintaining his role as CEO of TE).
This little red retro gadget isn’t just tugging at our nostalgia strings and taking design cues from the second half of the 1900s—Rabbit is also making a bold claim that its R1 deserves a spot in this tech hall of fame.
Rabbit also confirmed that a companion camera wearable for the R1 is on the way, one that will let the device know by image alone what you’re pointing at or asking it to control.
Even the venue, the iconic TWA Hotel at JFK Airport in New York, which feels very much like a live-action set of The Jetsons and boasts a restored Lockheed Constellation “Connie” L-1649A repurposed into a cocktail lounge, underscored this yearning for the fun gadgetry of the past and the exciting promise of future glory.
Fifty years from now, the R1 is the gadget you’ll reminisce about as the one that was there at the start of the AI-fueled world. At least, that’s what Lyu hopes.
“Our mission is to create the simplest computer,” Lyu announced on stage to a captive audience. Over the course of an hour, he laid out his plans to achieve that goal. The R1, it turns out, is just the start.
I covered the finer details of this AI device when I first reported on it from CES. The small, orange-red gadget is about the size of a stack of Post-it notes and has a tiny screen. A scroll wheel on the right side lets you move through the interface, and the camera can be adjusted to point away for privacy. A single button on the right edge serves as the primary way to make selections on the screen.
As with the Humane Ai Pin, you can talk to the device much as you would to Alexa, Siri, or Google Assistant. Thanks to Perplexity’s large language model, it can understand complex, naturally phrased queries and deliver answers via its speakers and screen. It operates like a walkie-talkie: press the side button, then speak.
The built-in camera adds to the device’s AI capabilities, letting it interpret and analyze whatever you point it at.
Lyu demonstrated the R1’s capabilities live on stage, using its camera to capture an image of a printed spreadsheet. He then told the device to reposition two columns and email a copy of the modified version. Within seconds, a digital version of the edited spreadsheet popped up in his inbox.
The R1 can handle a wide range of tasks: manipulating spreadsheets, translating languages in real time, generating AI images, even placing a McDonald’s order. It can also take notes (which you can access through a web portal called Rabbit Hole) and record voice memos, complete with an AI-generated summary of each recording. A virtual keyboard is built in for input like typing in a Wi-Fi password.
Lyu demoed the R1’s Teach Mode as well, which lets you point the R1’s camera at a computer screen while you show it how to complete a task. Once it learns, you can ask it to perform that task for you, saving time and hassle. The feature isn’t available yet, and when it arrives, Rabbit says it will start with a small group of beta testers.
The goal of the R1 is essentially to replace your apps. Rather than hunting for an app icon, you simply push the button and tell the R1 to handle a task.
At CES, Rabbit gave the impression that you’d be able to access several third-party apps through the R1 at launch. At present, though, only four services are available: Uber, DoorDash, Midjourney, and Spotify.
You link these services via the Rabbit Hole web portal, which does mean you’re handing your login details to a virtual machine managed by Rabbit. After that, you can tell the R1 to call an Uber, order food from McDonald’s, generate an image, or play a song. The R1 uses these services’ application programming interfaces (APIs) to perform these tasks—it has been preprogrammed to use them.
According to Lyu, plenty of new features are on the way. Come summer, we can expect an alarm clock, calendar, contacts, GPS, memory recall, trip planning, and more. Amazon Music and Apple Music integrations are in the works, and more third-party services like Airbnb, Lyft, and OpenTable should follow.
You might be wondering, “Hang on a minute, that just sounds like a phone,” and you … wouldn’t be off the mark.
As we’ve seen with the clunky and limited Humane Ai Pin, a smartphone can perform all of these tasks better, faster, and with richer interactions. This is where you have to start looking carefully at Rabbit’s overall vision.
The idea is to speak and then compute. No need for apps—the computer will just understand. We’re a long way from that, but at the launch event, Rabbit teased a wearable device that would understand what you’re pointing at.
Lyu suggested this wearable could understand you pointing at a Nest thermostat and asking to lower the temperature, without having to say the words “Nest” or “thermostat.” The image of the supposedly all-seeing wearable was blurred, though, so we don’t have much information to go on.
Lyu mentioned generative user interfaces, where you’d be able to have an interface of your own choosing—buttons on a screen placed where you want them, at the perfect display size—and then claimed that Rabbit is working on an AI-native desktop operating system called Rabbit OS. Again, we don’t have many details, but my mind immediately went to Theo in Her installing OS1 on his PC.
An operating system that puts a personal voice assistant front and center. What could go wrong?
The Rabbit R1 retails for $199, and it’s available for purchase now, but units are shipping in batches; if you place an order today, you’ll get yours in June.
Lyu has repeatedly emphasized that, unlike the Humane Ai Pin, this device requires no subscription. Keep in mind, though, that for the 4G-enabled R1 to work when you’re away from Wi-Fi (unless you tether it to your phone), you’ll need to buy your own monthly data plan and slot in a SIM card.
The company says it sold 100,000 units in the first quarter of 2024. I bought one and unboxed it at the event, and it’s quite an adorable piece of tech. Once I’ve tested it thoroughly, I’ll have a more comprehensive review.