EPISODE #4

Apple’s OpenAI Plans, How Meta’s Llama 3 Will Make Money, Verge’s AI Survey

April 30, 2024
Apple Podcasts | Spotify | YouTube

Show Notes

Apple is talking with OpenAI about powering AI features in iOS. The story to date with Apple and AI, and what new features we could see in the new iOS.

Meta has been releasing AI models for free. I break down a simple question: how do you make money by doing that?

Finally, there’s a new AI survey. What people are saying about AI and what I think is missing.

Subscribe to the best newsletter on AI: https://theneurondaily.com

Watch The Neuron on YouTube: https://youtube.com/@theneuronai

Transcript

Welcome all you cool cats to The Neuron! I’m Pete Huang.

Today,

  • Apple is talking with OpenAI about powering AI features in iOS. The story to date with Apple and AI, and what new features we could see in the new iOS.
  • Next, Meta has been releasing AI models for free. I break down a simple question: how do you make money by doing that?
  • Finally, there’s a new AI survey. What people are saying about AI and what I think is missing.

It’s Tuesday, April 30th. Let’s dive in!

Our first story is Apple’s search for an AI partner.

Ever since OpenAI released ChatGPT in November of 2022, the tech community has closely watched every Big Tech player to see what they would do in response.

First, they pressured Google. Early ChatGPT users were fascinated by how easy it was to get information from ChatGPT and quickly wondered why Google, with 180,000 employees and $100 billion of cash on hand, had been beaten to the punch.

Then they turned to Apple, but they were much more patient and forgiving this time. For one, this wave of AI is a major boon to Apple, not a threat, and given Apple’s track record, there wasn’t much we could do but wait and see. In general, people were excited about Apple’s ability to put this new magical AI in the hands of so many people.

Here’s what you should know about Apple.

First, Apple doesn’t go first. Meaning, when it comes to new things - new devices, new technologies - Apple lets everyone else figure out the nitty-gritty of the problem, studies how consumers are reacting, then comes in with a clearly better product, sometimes years later.

MP3 players had been around for years before the iPod. Microsoft had tablets but got squashed by the iPad. And so on.

So when it comes to AI, while the first reaction was, “Apple should be putting this stuff on their devices ASAP”, the second reaction was, “No, that’s probably not what they’re going to do.”

In fact, in June 2023, when they announced their last set of product updates, they said the words “artificial intelligence” exactly zero times. They knew that saying AI would make industry analysts, tech enthusiasts and their customer base think they were doing something that they weren’t.

So they’re saving the big AI moment for later.

The other thing you should know about Apple is that it works on an annual cycle. Everything they do is built towards one big release once a year. For iOS, the announcement of the release happens in June, at the Worldwide Developers Conference (or WWDC). And the release itself happens in September.

This year, they’re slated to release iOS 18, a long-awaited update for iPhone and iPad. And it’s the first release where Apple would have had enough time to build the kind of AI features we’d all be excited about.

In January 2024, people had already found early testing code for using AI to summarize and respond to text messages. That code tested AI models from OpenAI, Google and even Apple itself. So they were clearly in a race to figure out how to work AI into iOS 18.

And they were exploring every pathway possible: do we build our own AI or do we partner with someone else?

It turns out that building quality AI is just really hard. You need time to do it.

So two months later, the press reported that Apple was looking for a partner and had approached OpenAI and Google. This means we may not see Apple’s own AI model until later, or possibly at all.

The reports from this week are that these talks are intensifying with OpenAI. Apple CEO Tim Cook previously said that he uses ChatGPT himself, but that there are quote “a number of issues to be sorted”.

So where exactly could we see AI?

Aside from summarizing and responding to text messages, the current reports say that Apple could be upgrading Spotlight, which is what searches your phone, and introducing code completion in Xcode, which developers use to build things for Apple devices.

The big question is whether any of Apple’s latest research will be part of iOS 18 or if we’ll have to wait.

That research includes the ability for AI to understand what’s currently on your screen, so if you’re looking at a restaurant’s about page, it’d understand where the phone number is, what the opening hours are, etc.

Pair that with voice commands through Siri, and you’ll be able to say, “Siri, is this restaurant open right now?” and “Siri, can you call the restaurant?”

More on that as we see more research from Apple.

Your big takeaway on Apple and AI:

Apple has perhaps the most exciting opportunity to bring AI into people’s lives.

There are over 2 billion active Apple devices in the world. And people love their iPhones; they’re on their phones for an average of 3 hours every single day.

Anything that Apple does with AI will immediately reach more than a billion people.

That is much like the opportunity for Google and Microsoft. AI upgrades to Google Workspace immediately reached 3 billion people. Microsoft Copilot immediately went to more than a million companies worldwide.

Whoever has the existing customer base can put AI in their lives almost immediately.

What we don’t know is just how far AI will go. We’ll definitely see minor upgrades, the easy stuff that you and I can think of. But knowing Apple, they’re gonna want to do this in a big way. And oftentimes, we can’t even imagine what that will be until they release it.

I’ll go into more about just how much Apple can eventually do with AI as we get closer to June, when Apple announces iOS 18. For now, prepare to see OpenAI powering your Apple device in the next year.

Our second story is Meta and how it plans to make money with AI.

Last week, we talked about Llama 3, Meta’s new AI model that looks to compete with OpenAI’s GPT-4, Anthropic’s Claude 3 and Google’s Gemini 1.5.

On Saturday, I also mentioned that Meta, much like xAI, is going the open source route, meaning anyone, even you, can download Llama 3 for free.

That’s in contrast to OpenAI, which charges you and me $20/month for GPT-4 and charges businesses a lot more based on their usage. And we’ll never see the secret sauce behind GPT-4. We can only use it.

And after Meta’s stock dropped as much as 15% following their quarterly earnings, where they said they’re gonna keep ramping AI spending even more than they already have, there’s a bit of a puzzle I want to dig into with you.

How exactly does Meta make their money back on AI?

After all, if you’ve spent hundreds of millions to build an AI model, and you decide to give it away for free like Llama 3, what’s the end-game here?

I don’t know about you, but I don’t know many businesses that make serious money by giving things away for free.

The exception may be Google, Facebook and, in fact, all of social media. The previous playbook has been to give away search and social media for free. But then once you have enough eyeballs, you sell ads, you sell user data.

I wouldn’t blame you for jumping to some comparison to that with AI. But AI doesn’t work that way.

Let’s say you download Llama 3 and start using it as a chatbot on your computer. What you put into that chatbot doesn’t actually go anywhere. In other words, you could disconnect from your wifi, shut off access to the Internet, and it would still work.

You wouldn’t be sending your data anywhere, much less to Meta itself.
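To make that concrete, here’s a minimal sketch of what running Llama 3 entirely on your own machine could look like, assuming you’ve already downloaded the model weights as a local GGUF file and installed the open source llama-cpp-python package. The file name and prompt below are just placeholder examples, not anything from Meta.

# Minimal sketch: local Llama 3 inference with llama-cpp-python.
# Assumes a GGUF copy of the model already sits on disk; the path is a placeholder.
# Nothing here makes a network call - the prompt never leaves your computer.
from llama_cpp import Llama

llm = Llama(model_path="./llama-3-8b-instruct.Q4_K_M.gguf")  # hypothetical local file

# Run a completion entirely on local hardware.
output = llm(
    "Q: Explain in one sentence why local AI models keep data private. A:",
    max_tokens=64,
    stop=["Q:"],
)
print(output["choices"][0]["text"])

You could run that with your wifi turned off and it would behave exactly the same, which is the whole point: Meta never sees the prompt or the answer.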

So it’s not a data play for Meta.

Open source is not a new thing. It’s huge in the developer community.

But nearly everyone struggles with how to turn it into a business. It doesn’t happen very often.

Here’s how it goes down, when it does.

If there’s a technology that’s open source that a lot of businesses use, you could make money by building around it.

That includes offering support and services. You charge money to install and configure the software the way that businesses want. And you help them fix things as they come up.

You can offer additional software that works with the open source stuff but that only enterprises want or need. Things that sound like big enterprise: security, data governance, etc.

You can make special versions that bundle a bunch of this other stuff into a one-click install. And even though a portion of that code is technically free, businesses would pay for the convenience.

These are all things that typical open source companies could do.

But even Mark Zuckerberg didn’t seem very specific about what the future pathway looks like to recoup the tens of billions of dollars he’s spending.

Here’s what he said on the investor call:

“There are several ways to build a massive business here, including scaling business messaging, introducing ads or paid content into AI interactions, and enabling people to pay to use bigger AI models and access more compute. And on top of those, AI is already helping us improve app engagement, which naturally leads to seeing more ads, and improving ads directly to deliver more value.”

Here’s what all that means:

  1. You can help businesses build AI chatbots that their customers can message on Messenger or WhatsApp. Those would be powered by Llama 3, and businesses would pay for that.
  2. When people chat with Meta AI, you can show ads or paywall some of the content.
  3. You can go back to being like OpenAI and Anthropic. If you make even better models than Llama 3, you can charge businesses to use them instead of giving it away for free.
  4. AI models make our apps more addicting, so people spend more time in them, so we sell more ads.

I don’t blame you if you think that spending hundreds of millions on building Llama 3 without a clear plan to make money from it is weird business. Some investors would agree with you.

But maybe all you have to believe is that one of these ideas sounds good enough. After all, they’re really good at making money. In 2023, they made 39 billion dollars of profit. Surely they can figure something out.

In fact, Meta makes so much money that they might not even need a business plan for AI. 

Remember, this is the company that changed its entire name to Meta and has lost $40 billion on metaverse-related work.

Must be nice.

Your big takeaway for Meta and their AI business plan:

As transformative as AI has been billed to be, it seems like everyone has a question about how to make money with it.

On the lower end, many AI startups are struggling to find defensibility and breakout success even after lots of investor hype.

There are a few in healthcare and legal that have done exceptionally well, but companies like AI writing tool Jasper have had to replace their CEO, cut their valuations and lay off staff after customers realized ChatGPT could do the same thing for cheaper.

On the higher end, Meta, xAI and Mistral are spending a boatload of money building leading AI models, then giving it all away for free - an amazing act of charity.

That would be a story for the ages if that’s what they said, that they’d build AI for the people out of ideology.

But they’re all businesses. They all have investors and shareholders. Those people want them to make money from AI. Somehow.

Our final story is a new AI survey from The Verge and Vox Media.

They first ran this survey in June 2023 and ran it again in December 2023. The December results were released just this past Friday and had some interesting nuggets in there.

First, 2 in 5 Americans have now tried an AI tool like ChatGPT or Midjourney. That’s led by Gen Z, 64% of whom have done so compared to 58% for Millennials, 37% for Gen X and 16% for the Boomers.

Across those six months, every generation saw an increase in the share of people who have tried an AI tool.

I’m not surprised that a majority of Gen Z and Millennials have tried it, but I’m surprised that it’s only around 60%. What I see is that for every 10 Gen Zers or Millennials in a room, 4 of them haven’t even opened ChatGPT once, despite how much talk there’s been.

Keep in mind, these are the people who grew up on the Internet, smartphones, social media. Surely they could’ve taken a second to log into ChatGPT at least once, but apparently not.

For the people who have tried ChatGPT, it looks like they’ve found it quite useful. Two-thirds of AI users use it on a weekly basis. And the vast majority of users have multiple use cases.

Now, what are those AI use cases? Things like planning trips, discovering recipes, and reimagining your interior design.

The survey also tested AI hype. When products are labeled as “powered by AI”, Millennials love it - 53% of them said they’d be more interested in the product with that label. That’s followed by 49% of Gen Z, 39% of Gen X and 27% of Boomers.

That enthusiasm doesn’t carry over quite as well to all those random products now claiming to be AI-powered.

About 40% of people said they’d be interested in things like AI kitchen appliances, TVs, glasses and earbuds.

For stuffed animals, only 26%. So it turns out there is a limit to how interesting people find AI and where they think it should go.

Let’s talk AI at work.

50-60% of people say that AI is at least as good as them for a broad range of outputs. That includes creative outputs like design, stories, photos, etc. as well as work outputs like essays, emails, code, etc.

This largely tracks our understanding - it’s really good at getting the bottom 20% of a group to match the 50th to 75th percentiles. It raises the floor. But it doesn’t replace the top 10%. As it stands, the best of the best among humans are still better than AI.

About 20% of people truly believe that AI will take their job completely. For younger workers, that’s closer to one-third.

The latter part is interesting but not surprising. The wave of AI is dovetailing with a scary economic situation, where 60-70% of people think the economy isn’t in good shape. And these younger workers have had a really tough last few years.

The Class of 2024 started college when COVID lockdowns were in full effect. Zoom University was not a healthy environment. Then, ChatGPT hits during their junior year, and AI hype is boiling over into their senior year, right as they’re supposed to be on the market looking for work.

And the tech community has noticed a shift where tech companies are now less willing to hire and mentor younger software engineers. Instead, senior engineers are moving from management roles back into coding roles, and they’re finding that AI can often do the things that the most junior developers could do.

So even our youngest generation, who usually have the energy and willpower to adapt to change, are feeling tested and strained.

Your big takeaway for the AI survey:

Nobody seems to know how far AI goes yet, not even the people who ran the survey.

So far, people have been curious and open-minded about trying these new tools. And hearing the words “artificial intelligence” gets people excited.

For some of them, AI has leveled the playing field and let them do things at a decent level. For others who are already at that level, AI accelerates their work.

But the survey shows that the most impactful use cases are yet to come.

The use cases people have found for AI so far are often tasks that feel like big headaches but actually take small bits of time. Take email, one of the most popular use cases for ChatGPT - most people could probably clear their email pretty fast, but it’s just a pain and a half to do.

My opinion is that the next 1-2 years will offer much more transformative use cases powered by AI capabilities that researchers and startups are still trying to figure out.

And that excitement, that curiosity to try new things, is going to find a lot more lightbulb moments very soon.

Some quick hitters to leave you with:

  • OpenAI is partnering with the Financial Times to train its AI models on the FT’s content. It’s just the latest of OpenAI’s media deals; they’ve already partnered with the Associated Press, Axel Springer, France’s Le Monde and Spain’s Prisa Media to get access to more data.
  • California legislators have fast-tracked an AI safety bill that attempts to outline what safety even means. I’ll do more on this if it gets closer to passing, but here’s a snippet: the bill considers an AI model to have “hazardous capability” if it enables chemical, nuclear, or biological weapons, at least $500 million of damage in things like cyberattacks, or anything that’s similar to that. People are skeptical, but the stakes are high, considering nearly every big AI model developer is in California.
  • Filmmakers using AI have released more details about Sora, the video generator from OpenAI that took the Internet by storm in February. It’s still not public. One reason is that even a 20-second clip takes forever to create, something like 10 to 20 minutes.

This is Pete wrapping up The Neuron for April 30th. I’ll see you in a couple days.