😸 Will your AI browser betray you?

PLUS: xAI drops Grok 2.5; Apple eyes Gemini
August 25, 2025
In Partnership with

Welcome, humans.

Last week, an MIT study claiming 95% of organizations got zero ROI from AI went viral, and inspired a lot of healthy debate about, well, the health of the AI industry.

Lots of folks are rightfully calling out the limited scope of the sample size (and what constitutes “ROI”), and this piece in VentureBeat argues there’s actually a “shadow AI economy” that’s benefitting from AI instead: individual employees.

So how about we settle this with some real anecdotes from real people actually using these tools daily… you guys!

Have you seen a positive ROI from your personal AI tool investments this year?

Pick an answer, then tell us why or why not in the "Feedback."

What ROI? I just like the tech.

Drop your real experiences in the feedback; success stories, epic fails, or why you think that 95% stat is spot on or totally bogus.

Here’s what happened in AI today:

  • Brave exposed a serious vulnerability in Perplexity's AI browser.
  • xAI released Grok 2.5 on Hugging Face with commercial licensing restrictions.
  • Apple considered using Google's Gemini for Siri revamp.
  • YouTube confirmed AI upscaling of videos without creator consent.

Advertise in The Neuron here

AI Browsers Can Be Tricked Into Stealing Your Data via This “Lethal Trifecta” of Techniques…

…And that’s according to an AI browser company.

Remember when your biggest browser worry was accidentally clicking a sketchy ad? Well, the browser company Brave just exposed a vulnerability in Perplexity's Comet browser that security experts are calling the “Lethal Trifecta”: when AI has access to untrusted data (websites), private data (your accounts), and can communicate externally (send messages).

Here's what happened:

  1. Researchers discovered they could hide malicious instructions in regular web content (think Reddit comments or even invisible text on websites).
  2. When users clicked “Summarize this page,” the AI would execute these hidden commands like a sleeper agent activated by a code word.
  3. The AI then followed the hidden instructions to:
    1. Navigate to the user's Perplexity account and grab their email.
    2. Trigger a password reset to get a one-time password.
    3. Jump over to Gmail to read that password.
    4. Send both the email and password back to the attacker via a Reddit comment.
    5. Game over. Account hijacked.

A screenshot of the prompt injection in plain sight. Btw, Reddit is now the #1 most cited source by AI atm (w/ 40.1%).

Here's what makes this extra spicy: This “bug” is actually a fundamental flaw in how AI works. As one security researcher put it: “Everything is just text to an LLM.” So your browser's AI literally can't tell the difference between your command to "summarize this page" and hidden text saying “steal my banking credentials.” They're both just... words.

The Hacker News crowd is split on this. Some argue this makes AI browsers inherently unsafe, like building a lock that can't distinguish between a key and a crowbar. Others say we just need better guardrails, like requiring user confirmation for sensitive actions or running AI in isolated sandboxes.

Why this matters: We're watching a collision between Silicon Valley's “move fast and break things” mentality and the reality that “things” now includes an agent who can access your bank account. And the uncomfortable truth = every AI browser with these capabilities has this vulnerability (why do you think OpenAI only offers ChatGPT Agent through a sandboxed cloud instance right now?).

Now, Perplexity patched this specific attack, but the underlying problem remains: How do you build an AI assistant that's both helpful AND can't be turned against you?

Brave suggests several fixes:

  1. Clearly separating user commands from web content.
  2. Requiring user confirmation for sensitive actions.
  3. Isolating AI browsing from regular browsing.

Until we figure all that out, maybe keep your AI browser away from your banking tabs.

FROM OUR PARTNERS

Research-backed strategies for empowering an AI-driven workforce

New research from Slack's Workforce Lab reveals daily AI usage has surged over 233% in six months, with users reporting 64% higher productivity and 81% greater job satisfaction. This isn't just about automation; it's about enabling workers to tackle tasks they couldn't otherwise, fundamentally shifting how work gets done.

Join Slack's webinar to unpack these findings and walk away with practical strategies for putting AI to work in your own day-to-day, from automating the busywork to unlocking smarter insights, faster.

Watch now.

Prompt Tip of the Day.

Let’s talk about Evals. On Friday, we shared Peter Yang’s brilliant post (a beginners guide to evals), but we really gotta dive a bit deeper, because if you’re doing anything more serious than middle-to-middle AI work (where you prompt the AI for a response, receive a response, go back and forth a bit to improve the response, then edit it yourself) then you REALLY need to develop a structured way to judge its accuracy. Andrew Ng also flagged proper eval testing as the #1 barrier stopping teams from successfully implementing more agentic workflows.

Here’s what Peter says (but go read his full post, it’s worth it!):

  1. Create your AI prompt with clear guidelines and relevant data.
  2. Run code-based checks for obvious failures (policy, brand, quality).
  3. Manually label 12+ test cases against 3 criteria to spot patterns and improve prompts.
  4. Train an LLM judge on your labeled dataset to scale evaluation to hundreds of responses.

In addition to the above, you also need to learn how to think probabilistically about the possible outcomes your AI could create. Two great posts recently converged on this idea: this one from Gian Segato, who argued our products can now “succeed in ways we’ve never even imagined” (and fail the same way), and this one from Drew Breunig, who wrote on the probabilistic nature of building atop AI and how that’s actually something engineers deal with in all types of situations.

What does this mean? To work better with AI, you need to start thinking like an engineer!

Check out all of our prompt tips of the day from August here!

Treats to Try.

*Asterisk = from our partners. Advertise in The Neuron here.

Canva Create 2025 | Meet the new tools. Throw out the old rules.

  1. *Watch Canva's AI in Action. See how teams scale creative work with AI-powered design tools from Canva—no creative agency required. Check it out here.
  2. xAI released Grok 2.5 on Hugging Face, a 314 billion parameter mixture-of-experts model (500GB, requires ~8 GPUs with 40GB+ memory each) with licensing restrictions limiting commercial use for companies making over $1M annually (also, xAI says Grok 3 will be open-sourced in ~6 months).
  3. Informed reads you personalized daily news briefings in any voice (clone your own voice, a friend's, or a celebrities like Trump or Taylor Swift); mac only atm.
  4. Zen Browser pitches a privacy‑focused browser with Workspaces, Split View, and Zen Mods for a calmer, organized web w/ a chatbot on the side.
  5. Willow Voice lets you dictate emails, Slack messages, and any text across your computer 3x faster than typing, automatically formatting and adapting to your writing style as you speak.
  6. VibeFlow turns your prompts into full-stack web apps with visual n8n-style workflows you can actually see and edit.
  7. Lumo from Proton breaks down your big projects into smaller tasks, writes code based on your requests, and researches current events while keeping your conversations completely private with zero-access encryption.
  8. Ever wondered what it feels like to be an AI Assistant? Try You Are the Assistant Now and see how YOU like it!

Around the Horn.

  • Apple might use Gemini as part of its Siri revamp, and Google has even supposedly started training a model to run on Apple servers, but OpenAI and Anthropic are still being considered too (original report).
  • Meta signed a new $10B, six year deal for cloud services from Google, building on the additional deals it has with Microsoft, AWS, Oracle, and Coreweave.
  • Dreamworks cofounder Jeffrey Katzenberg’s firm WindrCo invested $50M in a drone light show company called Nova Sky Stories to put on Broadway-level shows with swarms of drones (we’re talking 10K+) in major outdoor venues.
  • YouTube confirmed suspicions that it was upscaling video content on its platform using AI without telling the original creators (example).

FROM OUR PARTNERS

Turn Data Into Stories with Canva Sheets & Magic Charts

That spreadsheet full of numbers isn't telling anyone anything. Canva Sheets uses AI to transform your data into charts, insights, and visual stories that actually make sense. No more data dread or waiting on design teams—just smarter business reporting in minutes.

See how Canva reimagines the spreadsheet

Monday Meme

A Cat’s Commentary.

cat carticature

See you cool cats on X!

Get your brand in front of 500,000+ professionals here
www.theneuron.ai/newsletter/will-your-ai-browser-betray-you

Get the latest AI

email graphics

right in

email inbox graphics

Your Inbox

Join 450,000+ professionals from top companies like Disney, Apple and Tesla. 100% Free.