😸 Datacenters in the sky?!

PLUS: Anthropic projects $70B revenue. Amazon vs. Perplexity heats up.
November 5, 2025
In Partnership with

Welcome, humans.

Quick housekeeping note: Our audience of 600K AI-obsessed readers love to learn about the latest AI tools, frameworks, and strategies to stay ahead of the AI revolution (same; y’all are my people!). So if you have a product/service/event/free resource like that handy, now's the time to book a sponsorship before year end!

Why now? Our Q4 ad slots are literally flying off the shelves. Mindy (our rockstar ad manager) is speed-running Tetris with the calendar just to squeeze in a few more spots.

So if you want to get in front of key decision-makers before the end of the year, book your spot this week. Otherwise? January 2026. And you know what happens to unspent Q4 budgets...

Click this button to advertise in The Neuron

ICYMI: We just interviewed Caspar Eliot of Invisible Technologies, a company that has trained 80% of the world’s top AI models. In the interview, Caspar tells us all about the army of invisible humans who actually make AI work.

Our favorite part? When Caspar shared his very analogue use of AI: jotting down notes on physical paper, then using AI to upload them to the computer later.

This is a fascinating interview if you want a peek behind the Wizard of AI’s curtain and learn how the AI sausage REALLY gets made…

Watch / Listen now on: YouTube | Spotify | Apple Podcasts

Here’s what happened in AI today:

  1. Google’s Project Suncatcher to launch solar AI data centers into space by 2027.
  2. Anthropic projected $70B in revenue by 2028.
  3. Amazon and Perplexity go toe-to-toe over shopping agents on Amazon.
  4. Shopify reported AI traffic up 7x and AI-driven orders up 11x since early 2025.

Google Wants to Put AI Data Centers in Space (And It's Not as Out There as It Sounds)

Your next ChatGPT conversation might be answered from orbit.

Google just unveiled Project Suncatcher—a literal moonshot plan to build AI data centers in space using solar-powered satellites equipped with its custom TPU chips. Yes, you read that right. Space data centers.

Here's the pitch:

  • The Sun pumps out 100 trillion times more energy than humanity's total electricity production.
  • Solar panels in the right orbit are up to 8x more productive than on Earth and get near-constant sunlight.
  • So why not skip terrestrial power grids entirely and train AI models directly from space?

How it would actually work:

The satellites would fly in super-tight formations, just hundreds of meters apart, compared to the ~120 kilometers between Starlink satellites. This proximity lets them use optical lasers to beam data between each other at blistering speeds. Google's already hit 1.6 terabits per second in early tests.

Each satellite would carry Google's TPU chips and communicate via “free-space optical links” (fancy talk for space lasers, pew pew!). They'd orbit in a dawn-dusk pattern, staying in continuous sunlight while skipping heavy batteries.

The big hurdles Google is tackling:

  • Radiation: AI chips and cosmic rays don't usually mix. But when Google blasted its Trillium TPUs with a particle accelerator to simulate years of space radiation, the chips survived three times the expected dose with zero hard failures.
  • Economics: Space is expensive. But if launch costs drop to $200 per kilogram by the mid-2030s (SpaceX's trajectory suggests this is plausible), the cost of running a space data center could match the energy costs of Earth-based facilities.
  • Networking: Keeping thousands of satellites coordinated in tight formation is like herding cats—in zero gravity. Google's using ML-based flight control models to prevent collisions while maintaining formation.
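The economics bullet above can be sanity-checked with a back-of-envelope calculation. A rough sketch, where every number except the $200/kg launch target is an assumption made up for illustration (not a figure from Google's paper):

```python
# Back-of-envelope sketch of the launch-cost-vs-energy argument.
# Only the $200/kg target comes from the article; the satellite mass,
# power, and lifetime are illustrative assumptions.

launch_cost_per_kg = 200      # $/kg, the mid-2030s target mentioned above
sat_mass_kg = 1_000           # assumed satellite mass
sat_power_kw = 100            # assumed usable compute power per satellite
lifetime_years = 5            # assumed operating lifetime in orbit

launch_cost = launch_cost_per_kg * sat_mass_kg            # $200,000
kwh_delivered = sat_power_kw * 24 * 365 * lifetime_years  # 4,380,000 kWh
launch_cost_per_kwh = launch_cost / kwh_delivered

print(f"${launch_cost_per_kwh:.3f} per kWh")  # → $0.046 per kWh
```

Under these made-up numbers, amortized launch cost lands in the same ballpark as typical US industrial electricity (roughly $0.05–0.10/kWh), which is the sense in which running a space data center "could match the energy costs of Earth-based facilities."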

Google's partnering with satellite company Planet to launch two prototype satellites by early 2027. Each will carry four TPUs to test if the hardware survives orbit and if the optical links work for real ML workloads.

Google isn't alone, either. Starcloud (formerly Lumen Orbit), backed by both Google and Nvidia, is launching its first satellite this month with an Nvidia H100 GPU—the same chips powering ChatGPT. Meanwhile, Elon Musk chimed in days ago that SpaceX "will be doing this" by scaling up Starlink satellites, and he's even suggested putting quantum computers in permanently shadowed lunar craters where temperatures naturally hit near absolute zero.

Why this matters: AI's energy appetite is becoming unsustainable. Data centers already consume massive amounts of power, and that's only growing. If Project Suncatcher works, it could unlock virtually unlimited clean energy for AI training while freeing up Earth's resources.

Google's calling this a “moonshot”, so there's no guarantee it'll work. Challenges like thermal management and on-orbit reliability still need solving. But the full research paper shows the math checks out… so as crazy as it sounds, this isn't science fiction. By 2027, we'll know if the future of AI is looking up.

FROM OUR PARTNERS

The Platform Powering Auth, Identity, and Security for AI Products

Enterprise customers expect more than a login screen. They demand SSO, directory sync, granular permissions, and detailed audit logs, all built to strict compliance standards.

WorkOS gives growing teams these enterprise foundations without slowing development:

- Modular APIs for authentication and access control

- A hosted Admin Portal that simplifies customer setup

- Built-in security and compliance features enterprises require

Trusted by OpenAI, Cursor, and Vercel, WorkOS powers auth, identity, and security for AI products across the industry. Your first million MAUs are free.

Start building with WorkOS today →

Prompt Tip of the Day

A creator just dropped a 7-minute AI anime episode on Reddit that took one month to make—and the workflow is surprisingly replicable.

Here's the exact process and three-tool workflow developer No-Thing-9001 used to create 216 unique frames of consistent anime content.

  1. NanoBanana (image generation): Generate each main frame with detailed prompts and reference images. The key? Always feed the previously generated frame back in as a reference for the next one to maintain character consistency, environment, and lighting.
  2. Photoshop (optional editing): Clean up any frames that need tweaking before animation.
  3. Sora 2 (animation): Animate each static frame individually. Keep the good ones Sora generates on its own.

The Secret Sauce: Frame-to-frame referencing. Instead of relying on text prompts alone to maintain consistency, the creator used each completed frame as a visual reference for generating the next one. This kept characters, clothing (90% consistent), lighting, and environments coherent across 216 frames.
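The frame-to-frame referencing loop above can be sketched in a few lines of Python. Note that `generate_frame` and its string output are hypothetical stand-ins for a real image-model call (e.g. to NanoBanana), not an actual API:

```python
def generate_frame(prompt: str, reference: str) -> str:
    """Hypothetical stand-in for an image-generation call.

    A real implementation would send the prompt plus the reference
    image to a model and return the generated frame; here we return
    a string so the reference chaining is visible.
    """
    return f"frame({prompt}, ref={reference})"

def generate_episode(prompts: list[str], first_reference: str) -> list[str]:
    frames = []
    reference = first_reference  # e.g. a character reference sheet
    for prompt in prompts:
        frame = generate_frame(prompt, reference)
        frames.append(frame)
        reference = frame  # the secret sauce: each new frame seeds the next
    return frames

frames = generate_episode(["hero draws sword", "hero swings sword"], "ref_sheet")
```

The whole trick is the last line of the loop: each completed frame, not the original reference, becomes the reference for the next generation, which is what keeps characters, clothing, and lighting coherent across all 216 frames.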

Why NanoBanana specifically? The creator tested Midjourney and GPT Image first but found NanoBanana superior for character and environment consistency when fed detailed prompts plus reference images.

Side note: this shows that AI workflows can produce consistent anime right now, which underscores why Japanese publishers just formally requested that OpenAI stop training on their work.

Treats to Try

  1. ClickUp’s AI agents auto-answer questions in your channels, schedule meetings around your team's availability, record and transcribe video calls, reschedule your calendar when you mark priorities, and show team updates and time off in one dashboard.
  2. Mimic builds robotic hands that match human dexterity to automate complex manual tasks in manufacturing and logistics (raised $16M).
  3. Jinna.ai searches your content across text, images, video, and audio in 100+ languages, scaling from small projects to enterprise needs.
  4. MCP Playground lets you test multiple AI models with text, images, and audio inputs to compare their performances side by side.
  5. Plexe AI turns your prompts into fully functional machine learning models without requiring any coding, making ML development 10× faster.
  6. This language learning app MVP is cool; you basically learn entirely through listening; not a full product yet, but neat to try.

Around the Horn

Trying a new thing this week: Around the Horn is moving to the website!

Read the rest of the headlines that caught our eye in our new Overflow section!

FROM OUR PARTNERS

Ideas move fast; typing slows them down.

Wispr Flow flips the script by turning your speech into clean, final-draft writing across email, Slack, and docs.

It matches your tone, handles punctuation and lists, and adapts to how you work on Mac, Windows, and iPhone.

No start-stop fixing, no reformatting, just thought-to-text that keeps pace with you. When writing stops being a bottleneck, work flows.

Give your hands a break ➜ start flowing for free today.

Midweek Wisdom

  1. Sinead Bovell put together a great 17-min explanation of what’s actually happening with the job landscape; big companies aren’t slowing hiring or laying off workers because AI can do entire jobs today, but in anticipation of what AI will do to the market landscape (they need to act more like startups to compete with AI startups or they’ll get disrupted).
  2. Researchers decisively disproved a longstanding Erdős conjecture using human-AI collaboration, revealing how a similar solution from 1943 was overlooked for decades.
  3. Check out this AI-powered analysis of millions of book reviews, which identified truly “life-changing” books by examining reader sentiment with language models (P.S.: these 4 authors are the most “life-changing”).
  4. This analysis reveals there's no evidence of AI significantly speeding up design work or replacing designers (yet); I will say, non-designers are definitely creating designs faster with AI (and like the second comment of this HN discussion says, AI is great at “middle of bell curve” generic design); as for speeding up designers’ work, Pietro Schirano of MagicPath (an AI design tool) is definitely trying!
  5. UC Berkeley researchers discovered that OpenAI's o1 model became the first AI to perform linguistic analysis at human expert level, correctly handling center-embedded recursion, resolving sentence ambiguities, and inferring phonological rules from 30 completely made-up languages (paper).
  6. Here’s a pretty inspiring story about how musician Andy Shand used AI to make music again (via Suno) after an illness took away his ability to play instruments.
  7. Really vibing with this comparison of today to the internet's “dial-up era” (today = 1995), which warns against waiting for perfect AI tools to jump in.
  8. Henley Wing Chiu analyzed 180M job postings, finding an 8% decrease in postings overall and a general split where strategic roles remained while execution-focused positions declined (this tracks w/ AI replacing one-off tasks); also, the jobs with the biggest increase = roles where trust / experience / credentials matter most (pharmacists, loan officers, legal and RE directors, and ML engineers, obviously).

A Cat’s Commentary

cat caricature

See you cool cats on X!

Get your brand in front of 500,000+ professionals here
www.theneuron.ai/newsletter/datacenters-in-the-sky

Get the latest AI right in Your Inbox

Join 450,000+ professionals from top companies like Disney, Apple and Tesla. 100% Free.