What's the state of AI in June 2025? According to Sam Altman, it's messy AF.

The AI Talent War Just Went Nuclear—and It Reveals Everything About Who's Really Winning. Meta is on a warpath, and its strategy is simple: if you can't build it fast enough, buy the people who can.

Inside the High-Stakes Battle for Talent, Trust, and Technological Supremacy in the AI Industry

If you only have a minute to scroll, here’s the current state of play:

  • The Model Race: It’s still largely a two-horse race. The latest benchmarks from Artificial Analysis show OpenAI’s o3-pro and Google’s Gemini 2.5 Pro are neck-and-neck for the title of "smartest" model. But the market is fragmenting: Google’s Gemini Flash is the speed king, while open-source models from labs like DeepSeek offer incredible value.
  • The Talent War: Meta’s new $100M sign-on offers are the new frontline. Sam Altman’s counterargument is that you can’t buy a mission, and that OpenAI’s culture of innovation will win out. This high-stakes poaching has spooked the industry, with reports that both Google and OpenAI are now scaling back their reliance on ScaleAI, a key data-labeling partner Meta is investing in (and basically buying out).
  • The Platform Battle: The real fight is to become the underlying "AI companion" for users' lives. This involves not just chatbots, but agentic AI that can complete tasks, new hardware devices that break free from the smartphone, and a platform that integrates everywhere. OpenAI is signaling this future with its open-source Agent SDKs and its secretive hardware project with Jony Ive, and Meta has made no secret about its efforts in AI-assisted augmented reality glasses (and virtual reality headsets) and making them the new computing paradigm.

This all points to a massive shift. The next few years won't be about marginal improvements to chatbots. They will be about foundational battles over the future of computing. Sam Altman has even hinted that GPT-5 could arrive as soon as "this summer," promising another leap in capability.

What to do about it: The game is moving from simple prompts to complex systems. For professionals, this means graduating from just using ChatGPT to building a multi-tool workflow (e.g., Claude for writing, Perplexity for research, o4-mini for coding). For developers, the future is in agents; mastering frameworks like OpenAI’s Agent SDK is no longer optional. The companies that win won't just build a smart AI, but a trusted, integrated system that empowers users—a lesson Meta is learning the hard way, with cash.

Let's get into it! 

Not long ago, the AI industry felt like a nascent field of academic exploration, a collection of research labs tinkering with the future. Today, it’s a full-blown geopolitical battleground, where the currency is not just capital, but raw talent, computational power, and user trust.

At the moment, the contenders are consolidating, the alliances are fracturing, and the skirmishes are escalating into open warfare. At the center of it all is OpenAI, the company that brought generative AI to the masses, now finding itself in a multi-front war against tech’s old guard and a new breed of aggressive challengers.

The most audacious salvo was fired not by a rival research lab, but by Meta.

According to OpenAI CEO Sam Altman, Mark Zuckerberg’s social media empire has begun a scorched-earth campaign to poach OpenAI’s top researchers, dangling life-altering signing bonuses of up to $100 million. Not salary. Not stock options. Just cold, hard cash to walk in the door. It's the kind of money that makes even the most mission-driven engineer do a double-take.

Now, why would Meta do this? Isn't that like, kind of stupid money? 

For most of us, the AI race looks simple: OpenAI (ChatGPT) versus Google (Gemini).

But inside Meta's headquarters, they see the world differently. “I've heard that Meta thinks of us as their biggest competitor,” Altman revealed in a recent podcast interview with his brother, Jack. “Their current AI efforts have not worked as well as they've hoped. And I respect like being aggressive and continuing to try new things.”

As Sam revealed, "Pretty much everyone thinks of OpenAI as a replacement for Google... Everyone except Meta, who sees OpenAI as a replacement for Meta."

Think about that for a second. Mark Zuckerberg doesn't fear losing the search wars... he fears people would rather talk to a helpful, creative AI than scroll through another algorithmic feed designed to make them angry, envious, or just plain bored.

Also, people trust OpenAI in a way they will likely never trust Meta. Sam admits people trust ChatGPT "more than they should," but compared to platforms that openly manipulate your emotions for engagement, ChatGPT feels refreshingly straightforward. One user put it best: "It's the only tech company that has ever not felt somewhat adversarial to me."

To us, that trust gap is Meta's biggest weakness.

So as you can see, the logic is simple: if people find more value, creativity, and fulfillment talking to an AI than they do doom-scrolling an algorithmically-tuned feed designed to provoke outrage, then Meta loses its most precious resource: human attention. And when you're in the attention business, that's an existential threat.

“A thing that we're very proud of is when people talk about ChatGPT, they're like, ‘I actually like myself better, it's helping me accomplish my goals,’” Altman explained. He contrasted this with the feeling some get from social media, which can feel adversarial. If OpenAI can provide a superior user experience, it wins the attention war.

It's the same logic Netflix used when they famously declared their biggest competitor was "sleep."

After all, attention is all you need.

But can you buy a mission? Altman is skeptical. “The strategy of a ton of upfront guaranteed comp… I don't think that's going to set up a great culture,” he argued, framing OpenAI’s appeal as mission-first. Sam said, "If you're swaying people with massive signing bonuses... you're not swaying them with a mission, and that's not likely to be a long-term viable strategy."

It’s a compelling narrative, reminiscent of how he and Elon Musk originally recruited AI luminary Ilya Sutskever from Google with the vision of building AI for humanity, not for a corporate giant.

The irony, of course, is that OpenAI is now a corporate giant itself, locked in a complex and increasingly tense relationship with its primary partner, Microsoft.

According to reports from The Wall Street Journal and The Information, the two companies are at a boiling point. The conflict stems from OpenAI’s ambition to operate more independently, particularly after its $3 billion acquisition of Windsurf, a startup whose technology competes with Microsoft’s own GitHub Copilot.

For eight months, the two have been locked in negotiations over a for-profit restructuring plan that Microsoft must approve. OpenAI is reportedly offering Microsoft a 33% stake but wants to end its exclusive cloud hosting deal and, crucially, keep Windsurf’s intellectual property out of their data-sharing agreement.

The standoff has become so serious that OpenAI is reportedly considering the “nuclear option”: reporting Microsoft to antitrust regulators.

In case you missed the beginning of this saga, it goes like this: OpenAI “raised” $40B, but needs to convert from a non-profit to a public benefit corp to get about $30B of that, and the only thing in its way is Microsoft accepting its terms.

Microsoft is reportedly ready to walk away from the non-profit conversion deal if they need to. And The Information just reported OpenAI’s been selling enterprise subscriptions at a discount, which Microsoft doesn’t like either. But OpenAI needs Microsoft's blessing if it wants to transition smoothly, so it'll have to at some point acquiesce to Microsoft's requests or keep being a non-profit. One with $30 billion less, mind you.

All this corporate maneuvering highlights the shifting ground beneath the industry. The simple narrative of scrappy startup (OpenAI, others) versus incumbents is dead. Now, it’s a complex web of co-opetition, where today’s partner is tomorrow’s regulatory complaint.

Which brings us back to Meta for a second: Meta's fear also helps explain the $14 billion chess move that spooked pretty much everyone in the AI industry. Meta reportedly offered that staggering sum to acqui-hire Alexandr Wang, the CEO of Scale AI.

If you've never heard of Scale, they are the human engine behind the AI revolution—the massive workforce that provides end-to-end data infrastructure for AI development, including data labeling, model evaluation, safety testing, and enterprise AI deployment services.

Essentially, Scale AI is like a specialized school system for AI. They handle everything from creating the curriculum (data labeling) to testing (evaluation) to making sure the AI graduates safely and can do useful work in the real world.

If you want to hear Alexandr Wang of Scale’s take, here it is.

This Scale AI deal is a big deal. The move was so aggressive it sent shockwaves through the industry, forcing both Google and OpenAI to “scale back” their reliance on one of their most critical suppliers. Some have even accused Meta of trying to cut off its competition's data stream (which is a very serious anti-competitive allegation). But the idea here (so it's been reported) is to have Alexandr be the head of new AI research at Meta, supported by a team of AI managers that report directly to Zuck.

For instance, Meta is also in talks to recruit Nat Friedman, a co-founder of Ilya Sutskever’s Safe Superintelligence. To pull off the deal, Meta would essentially cash out the partners in Nat’s investment fund, which would cost on the order of $1B+. This isn’t as dramatic as the Scale AI power move, since Nat has been a consulting advisor to Meta since at least May of last year. However, it could hurt Ilya’s company (one we are very excited to see compete against Sam to innovate something fundamentally new). Losing Nat's involvement could set the company back.

You hate to see it…

But the bigger question is, CAN you win on money alone? The scaling laws seemed to suggest so, but time and again we've seen the need for "new paradigms" to break through ceilings.

Say what you will about Mark Zuckerberg, but he’s nothing if not a ruthless (and paranoid) competitor. If he can’t buy out the competition, he will focus relentlessly on beating them. TikTok used to be public enemy number one, but now it seems that OpenAI has taken that mantle. Mark was willing to spend $14B to acqui-hire Alexandr Wang, the head of Scale AI, the company most AI companies rely on to train their models. If the real goal was to kneecap the competition, it wouldn't be such an unfamiliar move!

Which brings us to the open source versus closed source debate. Open source AI models are publicly released for anyone to inspect, use, and modify, while closed source AI models are proprietary products whose inner workings are kept secret and controlled by the company that created them.

Meta has always positioned itself as the open source AI leader and main competitor to the closed source systems of Google, Microsoft, and OpenAI. But that’s not really a Meta vs OpenAI thing anymore. That’s a DeepSeek versus OpenAI thing now.

If you forgot, DeepSeek is the Chinese AI model that took the world by storm in December... and January... and April... pretty much anytime DeepSeek launches something, it's a big deal.

DeepSeek has emerged as the new leader in the open-source AI space with a series of models so powerful and impactful that it has effectively replaced Meta as the primary open-source competitor to closed-source giants like OpenAI. In a way, Meta was the first company to lose its job to AI. And the first of the major labs to be disrupted by an open competitor.

IMO, it's sort of shocking that DeepSeek is not Meta's enemy #1 (or MiniMax, whose new M1 model is a really great open agent competitor to DeepSeek). Which raises the question...

If not Meta... and not DeepSeek...

Then who's actually winning the model race?

While the human drama is fascinating, the tech itself is evolving at a breakneck pace. For a while, it seemed every week brought a new "GPT-4 killer." Now, the dust is settling, and a clear hierarchy is emerging. According to the leaderboards at Artificial Analysis, which rigorously benchmarks models on intelligence, speed, and cost, the top tier is an exclusive club.

For a quick overview, the latest leaderboards from Artificial Analysis show a messy but clear picture:

  • For Raw Smarts: It's a dead heat. OpenAI’s o3-pro and Google’s Gemini 2.5 Pro are trading blows at the top. They are the undisputed heavyweight champions, for now.
  • For Raw Speed: Google is the speed king. Gemini Flash is built for near-instant responses, making it the go-to for real-time applications.
  • For Your Wallet: If you're budget-conscious, open-weight models from labs like China's DeepSeek and France's Mistral (with its tiny Ministral 3B) are the champions of cheap.

By now, you’re probably familiar with OpenAI, the creators of ChatGPT. They’re without question the ones to beat in AI. It’s been a wild and fascinating three years since ChatGPT first came out, with many contenders (and pretenders) competing for the crown.

These days, it seems there are really only two (maybe three) major players in the AI race: OpenAI and Google. And depending on the week (or the day of the week), they’ll bounce back and forth between having the top AI model in terms of raw intelligence (as far as usage goes, ChatGPT is winning without question; more on that in a sec).

As of mid-2025, OpenAI’s most advanced model, o3-pro, and Google’s Gemini 2.5 Pro are in a dead heat at the pinnacle of the intelligence index. Close behind are other models from the two titans, like OpenAI’s o3 and o4-mini (high). This confirms the public perception: it’s largely a two-horse race for the smartest model, with the lead trading back and forth.

However, intelligence isn’t the only metric. Google’s Gemini 2.5 Flash models dominate in pure output speed, making them ideal for applications requiring near-instantaneous responses. On the other end of the spectrum, when it comes to price, smaller, open-weight models like Google’s Gemma 3 and Mistral’s Ministral 3B offer startlingly good performance for fractions of a penny.

This fracturing of the market into different vectors of performance—intelligence, speed, cost, and context window size—is crucial. It signifies the end of the "one model to rule them all" era. Developers and businesses are no longer just choosing the "smartest" model; they are building a sophisticated toolkit, selecting the right tool for the right job. You might use o4-mini for complex coding, Gemini Flash for a customer service chatbot, and a cheap open-source model for simple text classification.
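This "right tool for the right job" approach can be sketched as a simple routing table. Below is a minimal, hypothetical Python illustration: the model names come from the leaderboard discussion above, but the routing function and task categories are invented for this sketch, and a real system would call each provider's actual API.

```python
# Hypothetical task-to-model routing table reflecting the trade-offs above:
# raw intelligence vs. speed vs. cost. The routing logic is illustrative,
# not any provider's real API.
MODEL_ROUTES = {
    "complex_coding": "o4-mini",          # strong reasoning for hard problems
    "customer_chat": "gemini-2.5-flash",  # near-instant responses
    "classification": "deepseek-v3",      # cheap open-weight model for bulk work
}

DEFAULT_MODEL = "gpt-4o"  # general-purpose fallback for everything else

def pick_model(task_type: str) -> str:
    """Return the model best suited to a task, falling back to a generalist."""
    return MODEL_ROUTES.get(task_type, DEFAULT_MODEL)

if __name__ == "__main__":
    for task in ("complex_coding", "customer_chat", "poetry"):
        print(f"{task} -> {pick_model(task)}")
```

The point isn't the specific names; it's that model choice becomes a config decision per task, not a one-time vendor decision.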

The naming of these models, however, has become what Altman himself calls a "mess." He acknowledged the user confusion in a recent OpenAI podcast. "I think we are near the end of this current problem," he said, expressing a desire to return to a simpler naming convention like GPT-5, GPT-6, and so on. He even floated a release window for the next major leap: "Probably sometime this summer" for GPT-5.

Usage wise, we've seen it reported that ChatGPT has between 500M and 800M weekly active users. And from our own internal data (and what we've seen elsewhere), it's a steep drop to the next most used model.

Here are the results of our own internal polling:

The top 4 models, unsurprisingly, were…

If you don't know, 4o is kind of like ChatGPT's "standard" model, and Claude Sonnet is the same for Claude. Claude is much more popular with developers than regular folks, but us normies like it, too.

ChatGPT still dominates real-world usage. So is it safe to say OpenAI is still in the lead, Google is right behind, and Anthropic and DeepSeek are tied for third? Perhaps so.

But the question "who has the best AI?" now requires a follow-up: "Best for what?"

And what about images and video? On the video side, Google's Veo models used to hold the top first and second slots, but a few days ago two new video models surpassed Google’s Veo 3 in the Artificial Analysis video gen leaderboard: TikTok owner ByteDance’s Seedance and MiniMax’s Hailuo 02.

As for images, OpenAI's image model is largely considered the best, but ByteDance is also gaining there with its Seedream model, and Google's Imagen 4 model is not far behind, either.

If you're curious, there are also text-to-speech and speech-to-text models you can compare, too.

Don’t forget the dark horse: Grok. While Elon Musk’s xAI has been chugging along, new updates show it’s evolving beyond a simple chatbot. Recent tests reveal Grok is getting Voice Mode and a "Tasks" feature, allowing users to schedule recurring queries and deep searches. This transforms Grok into a nascent agentic tool for research and monitoring. With hints of a more powerful Grok 3.5 on the horizon, it’s a platform to watch.

The real prize here isn't a better chatbot.

The next wave is agentic AI—systems that can take a goal and execute multi-step tasks. And the next platform won't be a phone. Meta is betting on AI-powered Ray-Bans, while OpenAI has a secret device in the works with legendary Apple designer Jony Ive. The winner won't just own a new product category; they'll own the next dominant platform.

This all requires an insane amount of infrastructure. Enter Project Stargate, OpenAI’s audacious, $500 billion plan to build an unprecedented amount of compute power. And all of this is happening as OpenAI hints the next major leap, GPT-5, could arrive "sometime this summer," finally ending the confusing mess of model names.

Now of course, where everyone is going next is much more interesting than where they are now.

Here’s a lightning round of where the giants are headed:

xAI is the dark horse. Grok 3.5 could “ship any day now” (whenever Elon’s done wrestling w/ it to be less woke?) and is quietly adding agent-like “Tasks” and a voice mode, hoping to evolve into a serious research tool.

Sam Altman just revealed OpenAI’s roadmap, and it’s all about making ChatGPT simple again.

In a recent podcast, he laid out a plan focused on unifying the user experience, starting with the release of GPT-5 this summer.

The true frontier of AI is no longer just about making models that can answer questions better. It’s about building models that can think. Altman believes OpenAI has “cracked reasoning,” enabling models to perform multi-step logical operations akin to a human’s internal monologue. This leap from pattern matching to problem-solving is what allows a model like o3 to feel like a "good PhD" in a specific field. It’s a capability that has progressed much faster than even he anticipated.

This newfound reasoning power is the key to what Altman sees as AI’s ultimate purpose: the discovery of new science. “The thing that I think will be the most impactful on that 5-to-10-year time frame is AI will actually discover new science,” he stated. “This is a crazy claim to make, but I think it is true. And if it is correct, then over time, I think that will dwarf everything else.”

He imagines a future where AI, acting as a hyper-competent co-pilot or even an autonomous researcher, sifts through mountains of existing data—from telescopes, from genetic sequencers, from particle accelerators—and finds the signals humans have missed. The first breakthroughs may come in fields like astrophysics, where there’s a glut of data and not enough scientists to analyze it. This vision elevates AI from a productivity tool to a fundamental engine of human progress.

To get there, AI needs to break out of the chat window. The next wave is agentic AI—systems that can take a high-level goal, break it down into tasks, and execute them. The recent open-sourcing of OpenAI's Customer Service Agent demo, which intelligently routes requests between specialized sub-agents, is a clear signal of this direction. For many, the "AGI moment" wasn’t a benchmark score, but seeing a tool like OpenAI’s Operator browse the web, use apps, and manage files. It was the first glimpse of a computer that does things, not just responds to them.
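That Customer Service Agent demo is built around a simple pattern: a triage step classifies the incoming request, then hands it off to a specialized sub-agent. Here's a stripped-down, hypothetical sketch of that pattern in plain Python. This is not OpenAI's Agents SDK; real frameworks use an LLM for classification and add tool calling and memory, while keyword matching stands in for the LLM here.

```python
# A minimal, hypothetical sketch of the triage-and-route agent pattern.
# Each "sub-agent" is just a function; in a real system, each would be an
# LLM-backed agent with its own tools and instructions.

def billing_agent(request: str) -> str:
    return f"[billing] handling payment question: {request!r}"

def tech_support_agent(request: str) -> str:
    return f"[support] troubleshooting: {request!r}"

def general_agent(request: str) -> str:
    return f"[general] answering: {request!r}"

# Keyword triggers stand in for LLM-driven intent classification.
SUB_AGENTS = {
    "billing": (("refund", "invoice", "charge"), billing_agent),
    "support": (("error", "crash", "bug"), tech_support_agent),
}

def triage(request: str) -> str:
    """Route a request to the first matching specialist, else the generalist."""
    text = request.lower()
    for keywords, agent in SUB_AGENTS.values():
        if any(k in text for k in keywords):
            return agent(request)
    return general_agent(request)

if __name__ == "__main__":
    print(triage("I was charged twice, please refund me"))
    print(triage("The app crashes on startup"))
```

The handoff structure is the whole idea: no single model does everything; a cheap router dispatches work to whichever specialist (or tool) fits.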

This evolution in software demands an evolution in hardware. “Computers, software and hardware… were designed for a world without AI,” Altman noted. He and legendary Apple designer Jony Ive have been collaborating for years on a new AI-native device. While he insists it will “be a while,” the vision is clear: a device that is constantly aware of your context and environment, that you interact with more naturally than by typing on a screen. This is the race for the next personal computer, and Meta is already a major player with its AI-powered Ray-Bans. The winner won’t just own a new product category; they’ll own the next dominant platform, potentially unseating Apple’s App Store.

The messy middle of progress

Perhaps the most sobering insight from Sam's recent interviews is this: even if we achieve superintelligence, society might not change as much as we expect. As he told his brother Jack, "We have this crazy thing [ChatGPT]... and you kind of live your life the same way you did 2 years ago."

He predicts major breakthroughs are coming:

  • AI discovering new science within 5-10 years
  • Humanoid robots walking our streets by 2030

Yet Sam worries we might build "legitimate superintelligence and it doesn't make the world much better." It's a paradox: the technology is revolutionary, but human nature is remarkably stable.

So how would you define the strategies of these three players?

OpenAI’s Strategy: Build the Full Stack of Intelligence.
OpenAI's roadmap, gleaned from Sam Altman’s recent interviews, reveals a breathtakingly ambitious plan to build and control the entire AI ecosystem.

  • Next-Gen Models: Expect GPT-5 "sometime this summer," with a move towards clearer naming (GPT-6) and vastly improved "reasoning" capabilities.
  • The AI Companion: The goal is to evolve ChatGPT into a "totally different thing"—a deeply personalized partner with persistent memory and advanced agent-like capabilities.
  • The Science Engine: The ultimate ambition is to create "superintelligence" capable of autonomous scientific discovery, accelerating breakthroughs in medicine and physics.
  • The Foundation: Project Stargate, a massive, multi-billion dollar global effort, is underway to build the unprecedented compute power needed to fuel this vision.
  • The New Interface: OpenAI is working with former Apple designer Jony Ive to create new hardware specifically for AI interaction, aiming to leapfrog the smartphone.
  • The Business Model: While cautious about ads, OpenAI is exploring non-intrusive, transaction-based revenue to broaden access without compromising user trust.

Google’s Strategy: Integrate AI into Everything.
Google, powered by DeepMind, is leveraging its vast ecosystem and deep research bench to embed AI everywhere.

  • Reliable Agents: Through Project Astra and Gemini Live, Google is building a reliable "personal AI" to handle complex, multi-step tasks.
  • Scientific Breakthroughs: Isomorphic Labs, a spin-out from DeepMind, is using AI to revolutionize drug discovery, building on the success of AlphaFold.
  • AI for AI: Projects like AlphaEvolve are using AI to help design better algorithms and chips, creating a flywheel of innovation.
  • New Hardware: Google is developing Android XR glasses, cautiously testing a new hardware paradigm for its AI.

And Meta, who we detailed extensively above... Meta is on a warpath, and its strategy is simple: if you can't build it fast enough, buy the people who can. This is the playbook behind the headlines:

  • The $100M Talent Raid: Meta is allegedly offering top OpenAI researchers nine-figure signing bonuses to jump ship.
  • The Billion-Dollar Power Move: In an even more audacious play, Meta is reportedly trying to recruit Nat Friedman, co-founder of Ilya Sutskever’s new AI safety lab. The deal, which would involve buying out his investment fund, could cost over $1 billion.
  • The $14B Chess Move: Meta’s reported offer to acqui-hire ScaleAI, the company that provides the human data for training most AI models, sent a shockwave through the industry, forcing rivals to rethink their supply chains.
  • Building the next physical AI platform. Whether it ends up being glasses or something new, Mark has one head start that others don't; people actually buy and wear their glasses. And that's huge when it comes to new devices.

The Trillion-Dollar Question: Compute, Energy, and the Future of Work

Underpinning all of these ambitions is a colossal need for computational power. This is the driving force behind Project Stargate, OpenAI’s audacious plan to orchestrate the financing and construction of data centers on a scale the world has never seen, with a reported price tag approaching half a trillion dollars. “If people knew what we could do with more compute, they would want way, way more,” Altman said, framing the project as a necessary step to make intelligence “abundant and cheap as possible.” The first gigawatt-scale site is already under construction in Abilene, Texas.

This unprecedented demand for compute creates an equally massive demand for energy. Altman is a vocal proponent of advanced nuclear fission and, eventually, fusion as the only viable long-term solutions. He envisions a future where humanity consumes vastly more energy, unlocking new levels of prosperity.

This grand vision of superintelligent AIs running on fusion-powered data centers inevitably leads to the question that haunts every discussion about AI: what happens to our jobs? Altman, like many technologists, is an optimist. He believes that while many jobs will be automated or drastically changed, human ingenuity will create new ones. “We have always been really good at figuring out new things to do,” he says, even if those new jobs look “sillier and sillier” from today’s perspective. “Podcast bro was not a real job not that long ago,” his brother Jack cheekily pointed out.

Perhaps the most profound shift will be the arrival of capable humanoid robots, which Altman predicts are just 5 to 10 years away. He believes the primary bottleneck has been the mechanical engineering of the body, not the AI brain. When these robots begin walking our streets and working in our homes, it will be a visceral, undeniable sign that the world has changed forever. “I think that will feel like the future in a way that ChatGPT still does not,” he mused.

All this drama is fascinating, but what should you actually do about it?

Here's the actionable intelligence from all this drama:

For professionals: Don't get distracted by the model wars. Pick one AI assistant (ChatGPT or Claude for most use cases) and master it deeply. The productivity gains from expertise with one tool dwarf the marginal benefits of constantly switching.

For developers: The real opportunity isn't building another ChatGPT wrapper. Focus on vertical AI applications where trust and domain expertise matter more than raw intelligence.

For investors: Watch the infrastructure plays, not just the model makers. Companies solving the compute, energy, and data problems will be essential regardless of who "wins" the AI race.

For everyone: Start using AI for actual work today. As Sam noted, "AI won't take your job, but someone who knows how to use it will." The tool you master matters less than mastering any tool at all.

The state of AI in 2025 isn't just messy—it's magnificently messy. We're watching the biggest platform shift since mobile, and the winners won't necessarily be who has the smartest model or the biggest checkbook. They'll be whoever figures out how to make AI so useful, so trustworthy, and so integrated into daily life that using anything else feels like going back to a flip phone.

And if Meta's $100 million signing bonuses tell us anything, it's that even Mark Zuckerberg knows that's not something you can simply buy.


See you cool cats on X!

Get your brand in front of 500,000+ professionals here
www.theneuron.ai/newsletter/

Get the latest AI right in Your Inbox

Join 550,000+ professionals from top companies like Disney, Apple and Tesla. 100% Free.