
We finally know how the Rabbit r1 will handle your app logins

Published Mar 27th, 2024 10:15AM EDT
Rabbit r1 device announced at CES 2024.
Image: Rabbit


The Rabbit r1 AI device that can perform in-app actions will start shipping to early buyers next month, though only the first 10,000 units of the 100,000 preorders Rabbit received will go out in that initial batch. Ahead of the release, Rabbit has explained in more detail how the r1 will handle the security and privacy of your app logins.

That was one of my biggest concerns when Rabbit unveiled the r1 at CES. I would have to trust the r1 with my login credentials, and that's a lot to ask. They might be logins for food delivery apps, travel services, or Uber, but those credentials are still important, no matter what kind of digital service they protect.

Rabbit’s additional explanations about user data security are certainly welcome, as they further detail how the AI can perform tasks for you inside certain apps. 

Rabbit published a blog post that focuses on answering questions about the LAM (large action model) that allows the r1 to book a flight or hotel, order an Uber, order takeout, and play music. On top of that, the r1 will also support large language models (LLMs) for generative AI features.

When you interact with your r1, the software will determine whether the LAM will handle the request or whether an LLM needs to generate a response. Rabbit works with Perplexity, Anthropic (Claude), and OpenAI (ChatGPT) for those genAI features.
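To make that split a bit more concrete, here's a minimal sketch in Python. It's my own illustration rather than anything Rabbit has published, and the route_request function and its keyword list are purely hypothetical:

ACTION_KEYWORDS = ("book", "order", "play", "reserve", "get me a ride")

def route_request(utterance: str) -> str:
    """Send action-style requests to the LAM, everything else to an LLM."""
    text = utterance.lower()
    if any(keyword in text for keyword in ACTION_KEYWORDS):
        return "LAM"  # perform an in-app action on the user's behalf
    return "LLM"      # generate a conversational answer instead

print(route_request("Book me a hotel in Boston for Friday"))  # LAM
print(route_request("What is a large action model?"))         # LLM

The real system presumably classifies intent with its models rather than a keyword list, but the division of labor is the same: actions go to the LAM, conversations go to an LLM.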

Your login data

You'll have to create an account on Rabbit's Rabbit Hole portal to have the r1 perform actions. Inside the Rabbit Hole, you'll find the apps the LAM knows how to use, and you must log into each one so the r1 can act inside it on your behalf.

Here’s how Rabbit secures your login credentials once you enter them: 

Agents on our platform, which we call “rabbits,” can see the process of you logging into vendors’ apps. The act of entering your user name and password is encrypted and protected, and we do not store that information in our database.

However, we do retain an authenticated state of your app on our cloud, securely stored, so that our agents can act on your behalf. You can remove this information whenever you would like to. We are also actively working with industry partners, including security auditors, to better secure your data.

I always assumed that Rabbit would have to retain your login data to perform in-app actions. The explanations above alleviate some of my concerns, though not entirely. It’s unclear who has access to that “authenticated state of the app.” We’ll want more answers about this once the rabbits can perform actions in more sensitive apps that users might trust them with. 
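That "authenticated state, not credentials" pattern is easier to picture with a short sketch. The Python below is purely illustrative, with a made-up SessionVault class, app name, and token; it is not Rabbit's implementation:

class SessionVault:
    """Keeps per-app session tokens; raw usernames and passwords never enter it."""

    def __init__(self) -> None:
        self._tokens: dict[str, str] = {}

    def save_session(self, app: str, session_token: str) -> None:
        # Only the token issued after login is retained, not the credentials themselves.
        self._tokens[app] = session_token

    def revoke(self, app: str) -> None:
        # The user can delete the stored state whenever they like.
        self._tokens.pop(app, None)

vault = SessionVault()
vault.save_session("food-delivery", "opaque-token-returned-after-login")
vault.revoke("food-delivery")

Even in a toy version like this, the open question from above remains: who can read the vault, and under what conditions.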

Rabbit R1: Main hardware and software features. Image source: Rabbit

The Rabbit r1 will support a Teach Mode in the future that will let you teach the agents new apps, and the same login principles will apply there. But I'd stay away from teaching the r1 how to access mobile banking apps if I were you, assuming such functionality will ever be possible.

Rabbit also explained what its agents see when interacting with apps, revealing another important detail about data privacy. Whenever you ask for an action, the operating system creates a temporary session, and Rabbit says it will not store private information even if its agents see it:

By connecting to your services through the rabbit hole, you are granting access to allow our rabbits to operate them on your behalf. With this access, rabbits can see whatever you see on your apps.

They may see private information, but we don’t store that information anywhere. A virtual environment is created on the cloud every time you ask r1 to do something for you, and that session is ephemeral (aka short and temporary) and discarded after your task is completed.

Rabbit also says in the blog post that it won't solve CAPTCHAs for the user if an app pops one up; you'll have to handle those yourself before the r1 can complete its action. The apps themselves can also limit the r1's performance: the agents can only work as fast as each app's design allows, because they replicate your activity inside the app rather than skipping steps or loading times.
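Here's a small Python sketch of what an ephemeral, per-task session can look like in principle. Again, this is my own illustration of the idea, not Rabbit's cloud environment:

from contextlib import contextmanager

@contextmanager
def ephemeral_session(task: str):
    session = {"task": task, "observed": []}  # exists only for this one request
    try:
        yield session
    finally:
        session.clear()  # discarded once the task completes; nothing is kept

with ephemeral_session("order my usual pizza") as session:
    session["observed"].append("whatever the agent sees while replaying your steps")
# once the block exits, the session and everything it saw are gone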

What about ChatGPT data?

Finally, Rabbit also addressed privacy concerns related to the data it might share with Perplexity, Claude, and ChatGPT. It turns out that, yes, it does share data with those services because that’s how genAI works: 

When a user interacts with rabbit OS, we route their requests to the most suitable model capable of fulfilling their needs, whether it’s our in-house model or one provided by our partners. User data is stored on our servers; only processed, user-initiated conversations and utterances, which do not contain any personally identifiable, private, or sensitive information, are sent to our partners.

As with any interaction with ChatGPT and other genAI products, you should be mindful of what you share with the AI. 
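If you're wondering what that kind of filtering could look like, here's a generic Python sketch of redacting obvious identifiers before a request is forwarded to a partner model. Rabbit hasn't detailed how it scrubs data, so the patterns below are only illustrative:

import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s()-]{7,}\d")

def redact(utterance: str) -> str:
    """Strip obvious personal identifiers before text leaves for a partner model."""
    utterance = EMAIL.sub("[email removed]", utterance)
    utterance = PHONE.sub("[phone removed]", utterance)
    return utterance

print(redact("Email my receipt to jane.doe@example.com and text 555-123-4567"))
# -> Email my receipt to [email removed] and text [phone removed]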

If you’re one of the Rabbit r1 buyers, you should check the full blog post at this link to make sure you understand how this new type of mobile computing will work. 

Chris Smith, Senior Writer

Chris Smith has been covering consumer electronics ever since the iPhone revolutionized the industry in 2008. When he’s not writing about the most recent tech news for BGR, he brings his entertainment expertise to Marvel’s Cinematic Universe and other blockbuster franchises.

Outside of work, you’ll catch him streaming almost every new movie and TV show release as soon as it's available.