Welcome, humans.
Looks like we spoke too soon about the AI crash yesterday, huh? Missed calling it by ~two hours. TL;DR: lots of bumpy economic news, NVIDIA no longer impresses, and GPT-4.5's "mid" release sorta put a dent in the scale hypothesis. More on that below.
Last Call: We've teamed up with our pals at DZone on that GenAI survey, and today is your final chance to get it done before it closes! It'll take less than 10 minutes, promise.
Seriously, we timed ourselves filling it out and still had time left to wonder if GPT-4.5's price tag will require a second mortgage or just a car loan. You'll get that joke in a sec.
What's in it for you? Early access to their trend report data (perfect for impressing your boss with industry insights), a free Getting Started with Agentic AI ref card, and a chance to win one of two $125 gift cards.
Check it out here before close of business today. Think of all the things you could buy with that gift card: a fancy mechanical keyboard, 25 cups of overpriced coffee, or approximately 4 minutes of GPT-4.5 compute time!
Here's what you need to know about AI today:
- OpenAI released GPT-4.5 to mixed reactions.
- Meta planned a standalone AI app.
- IBM released an AI family for enterprises.
- Meta unveiled the Aria Gen 2 research glasses.

Was GPT-4.5 so "mid" that it crashed the stock market?

Yesterday, OpenAI released GPT-4.5, its "largest and most knowledgeable model yet," prioritizing emotional intelligence over raw reasoning power (Pro only atm).
You knew things were gonna be rough when OpenAI positioned this release as more about "vibes" than anything else.
AI researcher Gary Marcus, who constantly criticizes the current AI hype train, called it a "nothing burger release."
The truth is... somewhat in the middle? Very fitting for a model called "4.5"...
First, the vibe take...
- Sam Altman called GPT-4.5 "the first model that feels like talking to a thoughtful person."
- Ben Hylak declared it "the midjourney-moment for writing."
- Dan Shipper (Every) found it "more extroverted and less neurotic," but still prone to hallucinations.
- Ethan Mollick noted it "can write beautifully" but gets "oddly lazy on complex projects."
And several testers noted it will confidently share opinions rather than deflecting with "As an AI..." responses.
Now, the "nothing burger" take...
- Sam also acknowledged it's "a giant, expensive model" that "won't crush benchmarks."
- Former OpenAI researcher Andrej Karpathy explained it required 10X more compute for "diffuse" improvements.
- Gary Marcus called it evidence that "scaling data and compute is not a physical law."
The biggest knock against GPT-4.5? The pricing is prohibitive: $75 per million input tokens and $150 per million output tokens (roughly 10-25X more than competitors).
As one observer perfectly summed up: "Half the TL saying it's bad and too expensive. Half the TL saying it's good and too expensive."
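To put those token rates in perspective, here's a rough back-of-the-envelope sketch. The per-million-token rates come from the pricing above; the example request sizes are our own assumption about a typical chat turn:

```python
# Rough cost estimate for a single GPT-4.5 API request at the published
# rates: $75 per million input tokens, $150 per million output tokens.
INPUT_RATE = 75 / 1_000_000    # dollars per input token
OUTPUT_RATE = 150 / 1_000_000  # dollars per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the dollar cost of one API request."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# A modest chat turn: 1,000 tokens in, 1,000 tokens out.
print(round(request_cost(1_000, 1_000), 3))  # 0.225, i.e. about 22.5 cents
```

Nearly a quarter per ordinary chat turn adds up fast at consumer scale, which is the whole "too expensive" complaint in one number.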
In fact, GPT-4.5 perfectly encapsulates the AI industry's current dilemma: incredible technological achievements that can't yet justify their astronomical costs.
See, GPT-4.5 is the first major reality check in the AI scaling race, and its marginal improvements suggest we're hitting fundamental limits.
Andrej Karpathy explained it well: "everything is a little bit better and it's awesome," but in ways that are hard to notice (slightly better word choice, marginally improved understanding, reduced hallucinations), but nothing revolutionary.
Meanwhile, the economics are brutal: it cost roughly $500M to train GPT-4.5, and OpenAI plans to burn a lot more than that in 2025. Sam also says the company is "out of GPUs." Hence, Stargate.
While all the new chips and servers will remain valuable for running ChatGPT, a model like GPT-4.5 simply can't achieve mass adoption if its economics don't work at scale.
Our take: Call us conspiratorial, but we don't think it's a coincidence that NVIDIA stock sold off right around the time GPT-4.5 was released...
The question isn't whether GPT-4.5 offers better vibes; it's whether any amount of vibes can justify burning billions on models most people will never use (and by "models," we mean you, GPT-4.5).
For OpenAI, this 'tweener release buys time while they search for a more sustainable approach to pay for new GPUs. Why else put out such a womp womp model?
For investors, yesterday's market reaction was about uncertainty. And the truth is, nobody knows what happens next with AI. Sam doesn't know. NVIDIA CEO Jensen Huang doesn't know. And Wall Street CERTAINLY doesn't know.
The only thing everybody DOES know is that the days of blank-check AI funding are numbered. As with everything in AI, it's just a matter of how big that number is...
Goes without saying, but not financial advice!

FROM OUR PARTNERS
This tech company grew 32,481%...

No, it's not Nvidia... It's Mode Mobile, 2023's fastest-growing software company according to Deloitte.2
Their disruptive tech, the EarnPhone and EarnOS, has helped users earn and save an eye-popping $325M+, driving $60M+ in revenue and a massive 45M+ consumer base. And having secured partnerships with Walmart and Best Buy, Mode's not stopping there...
Like Uber turned vehicles into income-generating assets, Mode is turning smartphones into an easy passive income source. The difference is that you have a chance to invest early in Mode's pre-IPO offering3 at just $0.26/share.
They've just reserved the stock ticker $MODE with the Nasdaq1 and the time to invest at their current share price is running out.
Join 33,000+ shareholders and invest at $0.26/share today.
Disclaimers
1 Mode Mobile recently received their ticker reservation with Nasdaq ($MODE), indicating an intent to IPO in the next 24 months. An intent to IPO is no guarantee that an actual IPO will occur.
2 The rankings are based on submitted applications and public company database research, with winners selected based on their fiscal-year revenue growth percentage over a three-year period.
3 A minimum investment of $1,950 is required to receive bonus shares. 100% bonus shares are offered on investments of $9,950+.

Prompt Tip of the Day
Andrej Karpathy released a new video in his "general audience" series on language models and how to use them, with over 15 tips for prompting and best practices when using AI tools.

Treats To Try.

- *Join Fiddler AI and Datastax to build better, safer RAG applications with comprehensive observability tools + LLM monitoring via Fiddler's Trust Model. Register + get the replay here.
- Deep Review finds you the most relevant academic papers by thinking critically (like a researcher).
- Basalt helps you integrate AI into your product in seconds with tools to create, test, deploy, and monitor prompts that actually work in real conditions.
- OpenArt Consistent Characters helps you create characters you can pose, place, and combine in any scene.
- Pinch translates your voice in real-time during video calls so you sound like a native speaker in 30+ languages.
- Quanta gives you instant, automated accounting services instead of making you wait weeks for your accounting data (raised $4.7M).
- Forage Mail cleans up your inbox by filtering out low-priority emails and sending you one digestible summary.
See our top 51 AI Tools for Business here!
*This is sponsored content. Advertise in The Neuron here.

Around the Horn.
- Meta planned a standalone AI app for Q2 2025 to compete with ChatGPT, and also planned to raise $35B for more data centers in a new financing with Apollo.
- IBM debuted Granite 3.2, a large language model family for enterprises that's focused on solving real-world problems rather than chasing benchmarks.
- Meta also announced Aria Gen 2 glasses, an upgraded research device with advanced sensors that enable researchers to explore machine perception, contextual AI, and robotics applications.

FROM OUR PARTNERS
Building Reliable AI Agents

AI agents are trickyâbugs, hallucinations, and edge cases can break workflows.
In this exclusive AI Engineering Summit talk, Anita from Vellum unpacks how we got here, how TDD improves reliability, and even demos her SEO agent. Get access here!

Intelligent Insights
- Ethan Mollick boiled the "multiple paths in AI" down to three levers: pre-training (scale), post-training, and reasoning, and broke down where each major model excels.
- Check out this interview with Nobel economist Daron Acemoglu, who argues we're "driving 200 miles an hour" in the wrong direction by prioritizing automation over tools that could actually enhance human capabilities.
- Ed Zitron wrote the ultimate bear take on the genAI industry that's worth a read.
- Coracle and the University of Hertfordshire are developing an offline AI tutor for UK prisoners that's surprisingly wholesome?

A Cat's Commentary.
