EPISODE #
2

Rabbit R1’s Launch, ChatGPT Memory, Perplexity and Devin’s Crazy Fundraising

April 25, 2024
Apple Podcasts
|
Spotify
|
YouTube

Show Notes

The launch of Rabbit R1. After a disastrous launch by their competitor Humane, how is Rabbit faring in the public eye?

Next, ChatGPT now has memory. How does it work, and does this direction make sense for ChatGPT?

Finally, Perplexity and Devin are now worth billions of dollars. Is this some get rich quick scheme by AI engineers or what’s actually happening?

Subscribe to the best newsletter on AI: https://theneurondaily.com

Watch The Neuron on YouTube: https://youtube.com/@theneuronai

Transcript

Welcome all you cool cats to The Neuron! I’m Pete Huang.

Today,

  • The launch of Rabbit R1. After a disastrous launch by their competitor Humane, how is Rabbit faring in the public eye?
  • Next, ChatGPT now has memory. How does it work, and does this direction make sense for ChatGPT?
  • Finally, Perplexity and Devin are now worth billions of dollars. Is this some get rich quick scheme by AI engineers or what’s actually happening?

It’s Thursday, April 25th. Let’s dive in!

Our first story is about Rabbit, one of the companies looking to make AI devices a thing.

They’re building R1, a sort of AI in a box that’s a touch wider than a smartphone and weighs about two-thirds as much.

The AI device category has been white hot with activity. It’s a full-on sprint to get something out on the market as fast as possible while still making it useful enough.

Some names for you: Tab, Meta’s Ray-Ban glasses, Limitless by Rewind, the Rabbit R1, and of course, the Humane Ai Pin.

Humane learned the balance between speed to market and usefulness the hard way. Two weeks ago, they released their product and got absolutely destroyed by the tech press.

Partially because Humane hyped their device to no end. They said they’d replace the smartphone. They wanted to charge you $700 and an extra $25 per month subscription.

But the product didn’t work. It overheated, it wouldn’t respond to things, it would outright refuse things and mishear you. Nearly every review we saw said it was the worst thing they’d ever seen.

Of course, Rabbit was watching all of this. They were slated to launch just a couple weeks after Humane.

Here’s a tweet from Rabbit CEO Jesse Lyu: “in about 7 days, r1 reviews will be out. we are ready to face any criticism and we will fix any issues that we need to fix.”

“Rabbit will keep growing fast and we are ready for this. you don’t just begging for future, you have to build it. and building is quite different than talking.”

That brings us to Tuesday, when Rabbit hosted a party for press and their early customers.

And it’s clear they were very careful to avoid the mistakes that Humane made. Jesse Lyu made sure not to over-promise, he kept saying things like “we’re gonna work on it, we plan on building this thing”. 

But here’s what R1 is launching with:

You can point the camera at something and ask the R1 what it is. You can ask it some basic questions. It also has early integrations with Spotify, Uber, DoorDash and Midjourney so you can use voice commands to play music, order a ride or food, and generate images.

What it can’t do…well pretty much everything else. But that’s fine, I guess. They didn’t promise these things out of the gate, just that they’re gonna have it at some point.

The roadmap on the Rabbit website reads well. Upcoming on the R1 are navigation features, reservations and ticketing, and research to help you understand a certain location or point of interest.

This all adds up to R1’s overall vision, which is to be a personal assistant. And at $200 today, the Rabbit R1 is a fun toy to play with and a way for you to follow the journey of an AI startup trying to build the next thing. In fact, Rabbit has already convinced 100,000 people to buy these early devices to be on that journey.

One reason to be excited about Rabbit is that they’re building AI to help us do things. ChatGPT and similar chatbots are using language models. They generate language.

Rabbit is building an action model, an AI designed to do things. So eventually, you can imagine the services that the Rabbit R1 is plugged into expanding way beyond Spotify, Uber and DoorDash.

It can make a spa reservation for you. It can check when the gym closes. It can send flowers to your partner. All using voice commands on this little device.
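
To make “action model” a little more concrete, here’s a minimal sketch in Python of the idea: a spoken request becomes a structured action that gets routed to a service integration. The intent format, the keyword matching and the service names are illustrative assumptions only, not Rabbit’s actual large action model or its real integrations.

```python
# Hypothetical sketch of the "action model" idea: turn a request into a
# structured action and route it to a service integration. Everything here
# is an illustration, not Rabbit's real system.
from dataclasses import dataclass


@dataclass
class Action:
    service: str   # e.g. "spotify", "uber", "doordash"
    command: str   # e.g. "play", "request_ride", "order"
    args: dict


def interpret(utterance: str) -> Action:
    # A real action model would be an AI model; this is just a keyword stub.
    text = utterance.lower()
    if "play" in text:
        return Action("spotify", "play", {"query": utterance})
    if "ride" in text or "car" in text:
        return Action("uber", "request_ride", {"request": utterance})
    return Action("doordash", "order", {"request": utterance})


def execute(action: Action) -> str:
    # Each integration would call the real service API here.
    return f"calling {action.service}.{action.command} with {action.args}"


print(execute(interpret("Play some lo-fi beats")))
```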

But before we get too far into the Rabbit fan section, let’s talk about the elephant in the room: isn’t this what Apple, Google and Amazon are all supposed to be making?

I mean, wasn’t this what Siri was supposed to be? Google Assistant? Alexa?

Absolutely. Here’s commenter “dagmx” on Hacker News:

“I’m very unclear on why these (rabbit and humane) aren’t just apps. I just don’t see people carrying these in addition to their phones and dealing with the split interaction ecosystem.”

They have a point. You know where you can play music and order Ubers and order DoorDash? On your phone. You know where you can ask an AI questions and have it do things for you? On your phone, once Apple upgrades Siri with the latest AI.

And by the way, we know all of that is coming. Apple, Google and Amazon are all working on AI models that make these kinds of upgrades to Siri, Google Assistant and Alexa possible.

Rabbit made a smart move by not saying they’re out to kill the smartphone. They would’ve gotten destroyed by the media, much like Humane got destroyed.

But that doesn’t mean they won’t ultimately be compared to the smartphone. You have to ask if consumers are really going to carry around a separate device when both of them sorta do similar things.

Your big takeaway on the Rabbit R1:

AI devices are an experiment. They’re flashy and generating a lot of attention, but they’re still experiments.

This new wave of AI has made it possible to reimagine how we interact with technology at every level. ChatGPT made waves by showing us that it’s possible to talk to our software like we talk to our friends.

And companies like Rabbit, Humane, Tab and all the others have this idea in their heads that we can interact with our devices in new ways.

If I had to guess, that’s probably right. So their attempts to build new companies and new products around this idea are completely worth it.

The rest of the question is about who actually wins if this idea turns out to be right. It might not be them, even if they had the right vision of the future.

It might be that completely new devices aren’t the right way to manifest that vision and that upgrading our iPhones is the best way to make that happen.

We can’t predict the future on that. It’s all one big test to see if these new startups can build a very good product and if consumers are keen to pick it up in their daily lives.

The only way we’ll know is if they try.

Our second story is OpenAI officially giving ChatGPT the ability to remember things.

Every week, over a hundred million people log into ChatGPT creating god knows how many conversations.

And you know what they have to do every single time? Tell ChatGPT who they are and what they want.

Here’s why:

Think about ChatGPT as a robot intern that lives in your storage closet.

Every time you need to talk to it, you turn it on and take it out of storage.

Once it boots up, you can tell it what you want it to do and why, and it’ll do it.

Then, once you’re done, you power it down.

But once it shuts down, it immediately resets to factory settings.

So when you turn it on again, it doesn’t remember a single thing about what you had said last time.

You have to start over and tell it what you want it to do and why.

With this memory update, ChatGPT carries around this little notebook.

As you have conversations with it, it’s jotting down little tidbits in its notebook.

Or if you say “hey you should probably write that down”, it’ll be like, “oh, ok, yeah let me do that”

So that when you restart it again, it’s still reset to factory settings, but now it has this notebook.

And when it wakes up, it reads all the pages in the notebook to get caught up again.

This is ChatGPT with memory in a nutshell.

When you say something like “I teach 10th grade math”, ChatGPT will now remember that and factor that into how it responds to you.

And it’ll do it across every chat that you create.

So here’s an example. I’m going to Mexico City sometime in the next couple months, and I need some recommendations on where to stay.

When I start with a prompt like “Help me plan a trip to Mexico City. I need to be in this area. Can you give me some hotel and restaurant recommendations?”

ChatGPT’s response will first say “Memory Updated” and then proceed to answer. And when I open the Memory section in my settings, I’ll see it added this phrase: “Is planning a trip to Mexico City for a wedding in this certain area.”

And when I say “I really like seafood, can you update the restaurant recommendations”, it added this phrase “Enjoys seafood.”
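
If you want to picture that notebook mechanically, here’s a toy sketch in Python of how a memory layer like this could work, with saved snippets read back in at the start of every new chat. The snippet store, the conflict rule and the way notes get injected are all assumptions for illustration; OpenAI hasn’t published exactly how ChatGPT memory is implemented.

```python
# Toy "notebook" memory for a chat assistant. This is an illustrative guess
# at the mechanics, not OpenAI's actual implementation of ChatGPT memory.

class MemoryNotebook:
    def __init__(self):
        self.snippets = []  # e.g. ["Teaches 10th grade math", "Enjoys seafood"]

    def remember(self, new_note: str) -> None:
        # Newer notes on the same topic replace older ones -- a crude stand-in
        # for the conflict handling described in this episode.
        new_words = set(new_note.lower().split())
        self.snippets = [
            old for old in self.snippets
            if len(set(old.lower().split()) & new_words) < 3
        ]
        self.snippets.append(new_note)

    def as_system_prompt(self) -> str:
        # Every new chat starts from "factory settings," but the model reads
        # the notebook first so it's caught up on who you are.
        return "Things to remember about this user:\n- " + "\n- ".join(self.snippets)


notebook = MemoryNotebook()
notebook.remember("Is planning a trip to Mexico City for a wedding")
notebook.remember("Enjoys seafood")
print(notebook.as_system_prompt())
```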

We’ve been testing memory since they first put it in beta. In general, it’s interesting, but it introduces this new problem of managing the memory itself.

Some of the information that’s useful to put in ChatGPT memory doesn’t change very often. Your background, where you currently work, what you like. It’s helpful to not have to explain that stuff over and over.

But when stuff changes a lot, it gets confusing to understand what it’s actually remembering and when.

It’s because ChatGPT tries to update any conflicting information. So when I first said I was planning a trip to Mexico City, it saved that. Then I said I was planning a trip to Dallas. So it removed the first snippet about Mexico City then saved the snippet about Dallas.

But that means it won’t remember that I went to Mexico City. And if OpenAI is labeling this as personalization, I’m gonna want it to remember that I went to Mexico City and I loved having seafood there and going to these places because that’s what I’m gonna ask for on another trip.

Having ChatGPT properly figure out the difference between conflicting information and multiple things that can be true at the same time actually feels like a solvable problem.

The bigger issue is much simpler: when am I going to tell ChatGPT that I actually went to Mexico City? You go into ChatGPT to get help with something, but you don’t go back to tell it how that thing got resolved.

So how is it supposed to actually be a fully personalized AI?

It comes down to whether ChatGPT is simply a work tool or an all-encompassing thing that’s supposed to be everywhere in your life.

Right now, it’s much more of a work tool. You go to it when you have a problem. Whether or not it transcends work, almost like Rabbit and Humane want to, is an open question.

In the meantime, if you want to use ChatGPT memory, the best applications are work-related information that doesn’t change a lot. The type of company you work for, the work that you do. This is the what of your work life.

You should also pair this with another setting you can configure, called custom instructions, which is the how. Be concise and don’t add all this unnecessary fluff. Boil down technical language for me. Give me things in a bullet-point list.

These types of modifications can save you time every time you use ChatGPT.

Your big takeaway on ChatGPT and memory:

AI should mold to who you are, no matter if you’re using it in your personal life or for work.

That’s evident in the ways that ChatGPT is being used today. Most of the use cases look like assistant or thought partner type of actions.

Help me think through my situation. Write an email for me that says this. Can you help me research this.

But those have personal flavors to them. You go to your friends to help you think through stuff because they know you and what you care about. You want that email that AI writes to sound like you. You want that research to consider what you like and don’t like.

There are a ton of ways and new tools that can make this happen.

ChatGPT can do this with memory.

All the AI device companies like Rabbit, Humane, etc. are solidly in this lane.

Even writing, journaling and email apps like Grammarly or Superhuman have some personalization.

We’re all tired of asking how we can get AI to sound like us. What we really want is AI that understands us without making it a headache to teach it about us.

Our final story of the day is fundraising stories from Perplexity and Devin.

Perplexity, the soon-to-be $3 billion startup trying to remake how we search the web using AI, which suddenly tripled its valuation in just a couple of months!

And Devin, the now $2 billion startup that launched six months ago and has made zero revenue!

Let’s take these one at a time, starting with Perplexity.

We love Perplexity. And if you think Google is filled with spam and overly optimized buying guides that you can’t trust, then you should look into Perplexity!

Instead of typing in magic keywords and having to crawl through a bunch of links to figure out if you can find the right information, Perplexity turns search into real questions and answers.

Perplexity’s fundraising history as a startup has been crazy. In April 2023, investors valued them at over $100 million. At the end of 2023, they raised a round valuing them at over $500 million. Just a few months after that, they had a third round valuing them at $1 billion.

So in one year, they went from $100 million to $1 billion.

And that’s not even the end of it. This week, TechCrunch reported that they’re already, already! on the hunt for yet another round, this time raising $250 million, valuing them at $2.5 to $3 billion.

That is a crazy amount of money at crazy valuations in a crazy amount of time.

But big valuations make sense for companies that have the numbers to back them up.

If you make a lot of money, ideally profit, you deserve to be worth a lot. Or if you’re growing a lot, you also deserve to be worth a lot, assuming you expect to be profitable.

So let’s look at the numbers for Perplexity.

From Bloomberg, Perplexity today has 10 million daily active users. They have $20 million in annualized revenue.

This is a lot of people! 10 million users and $20 million in revenue is no joke.

But is it worth $3 billion?

Look at Asana, the project management software company. They’re public, they’re currently worth about $3 billion.

In the quarter ending January 2024, they had $171 million in revenue. Not for the year. Just those 3 months.

There’s a Silicon Valley benchmark that says the typical startup growing at a healthy clip should be worth 10 times its annual revenue. It could be lower, like 5 times, if you’re growing slower, or higher, like 20 times, if you’re growing faster.

At a $3 billion valuation and $20 million in revenue, Perplexity would be worth 150 times its revenue.
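
Here’s that multiple math spelled out. The Asana annualization is my own back-of-the-envelope step (quarterly revenue times four), not a figure from the episode:

```python
# Back-of-the-envelope revenue multiples from the numbers in this story.
perplexity_valuation = 3_000_000_000      # reported ~$3B target
perplexity_revenue = 20_000_000           # ~$20M annualized revenue
print(perplexity_valuation / perplexity_revenue)  # 150.0 -> ~150x revenue

asana_valuation = 3_000_000_000           # roughly $3B market cap
asana_revenue = 171_000_000 * 4           # ~$171M per quarter, annualized
print(asana_valuation / asana_revenue)    # ~4.4 -> mid-single-digit multiple
```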

But if you think that’s wild, then we need to talk about Devin.

Devin is a new product that debuted about a month ago, in March 2024. The company making it has been around for about six months.

Devin is billed as an AI software engineer. Meaning, you give it some engineering task, and it’ll do whatever it needs to get it done.

That includes looking at the documentation, figuring out how the software works, actually writing code. Everything a software engineer should be able to do, but completely done by AI.

Sounds cool right? But early reports for Devin are just that: early. Right now, Devin is only successful at tasks that are more tightly defined. It’s far off from doing the complex stuff where an engineer has to think really hard.

And some software engineers are completely skeptical. One of the top posts on Hacker News this month was a video arguing that the company’s demo of Devin was staged and not grounded in reality.

Plus, this entire time, you can only get Devin if you get off the waitlist. And nobody’s paying for it.

So back to the numbers. Perplexity already looks crazy if they get valued at $3 billion, which is 150 times their current revenue.

Devin is now worth $2 billion as a company, but their revenue is, for all intents and purposes, zero.

Which means the company is worth infinite times their current revenue. You can’t even define the number.

I’m saying all this because I want you to understand that you’re not crazy for thinking Perplexity 10x’ing their valuation in a year then tripling in a couple months is crazy. You’re not crazy for thinking a 6-month-old startup is not worth $2 billion.

You’re not. It’s definitely wild.

But I also want you to understand the flip side of this, why the investors are willing to value them so highly, because it speaks to how big this whole AI thing could get.

The simple answer is that investors are willing to value Perplexity at 150 times their revenue and Devin at $2 billion despite zero revenue because they see an opportunity for both of these companies to be worth insanely more, and it doesn’t really matter what number you buy their stock at right now.

Let’s take Perplexity as an example.

Alphabet is worth around $2 trillion.

If Perplexity really becomes the next Google, even if it’s only worth $1 trillion, which is half as much as Alphabet today, whoever invested in Perplexity at $3 billion would make 300 times their money.

Yeah, that’s gonna be less than the people who invested at $1 billion; those people would make 1,000 times their money.

But for anyone who couldn’t get their money in when the company was worth $1 billion, even $3 billion is great. I’d take 300 times my money over nothing!
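
And here’s the return math behind those round numbers (the episode rounds roughly 333x down to “300 times”):

```python
# The investor return math, using the episode's round numbers.
future_value = 1_000_000_000_000     # "only" $1 trillion someday

print(future_value / 3_000_000_000)  # ~333x for money in at a $3B valuation
print(future_value / 1_000_000_000)  # 1000x for money in at a $1B valuation
```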

Same with Devin. If their product ends up being real and replaces software engineers en masse, that could be a trillion dollar company.

So investors who really believe in that possibility look at the $2 billion price tag and again they say, “Look, I’ll take 500 times my money.”

Your big takeaway on Perplexity and Devin:

When you see these insane valuations for these AI companies, it’s not because the company is actually worth that amount today, it’s because investors see a massive opportunity ahead.

AI has the potential to fundamentally reshape so many industries. In Perplexity’s case, it’s taking down Google. In Devin’s case, it’s changing how software engineering gets done.

That applies to everything you can think of: the legal industry, all of professional services in fact, all of sales and marketing.

If the promise of AI bears out to its full potential, there are a lot of new companies to be created that will be worth a lot of money.

For investors, the specific value that they give the company is less important than being invested in the company at all, which can turn into these crazy valuations.

Still, these valuations and fundraising mean nothing until the companies can actually make useful products and sell them. These rounds are, or at least should be, the start of the journey, not the end.

Some quick hitters to leave you with:

  • OpenAI and Moderna, the biotech that made a bunch of our COVID vaccines, gave new details about how they’re using ChatGPT. Some highlights: 100% of their legal team uses it, Moderna has 750 custom GPTs across the company, and each user has 120 conversations a week on average. Example use cases include analyzing clinical trial data, summarizing legal contracts and preparing slides for quarterly earnings calls.
  • A new study showed that using generative AI in healthcare didn’t actually save any time for the clinicians. It did, however, reduce the feeling of burnout. It’s an unexpected way for the impact of AI to play out, since sales pitches for AI tools often revolve around time savings.
  • Meta’s stock dropped 15% after the company announced financial results. Even though they made more money from ads, they also upped their estimates on investments relating to AI. The new forecast: $35 to $40 billion for the year, up from the prior $30 to $37 billion.

This is Pete wrapping up The Neuron for April 25th. I’ll see you in a couple days.