😸 MIT: 11.7% of US jobs could vanish today

PLUS: Happy birthday, Chat! GPT turns 3 today...
November 30, 2025
In Partnership with

Welcome, humans.

ChatGPT turned 3 today. Azeem Azhar of Exponential View shared his reflections on the occasion, recapping his beliefs about the industry and where it's headed across four parts.

In Part 1, he shows AI revenue hit $60B in two years, with API usage growing 300% annually. The catch? Only 20% of companies see real value, because adopting AI requires painful org restructuring. Part 2 identifies energy as the ultimate bottleneck: US power grids have 4-9 year wait times for new connections, forcing companies to build their own power (think on-site solar) rather than wait on grid hookups for massive data center clusters. Part 3 digs into the economics: if GPUs last 6 years instead of 3, the entire investment thesis changes, and 2026 becomes the year productivity gains must materialize or the bubble pops. Part 4 zooms out to geopolitics: AI is fracturing into US-aligned, China-aligned, and non-aligned camps, while 900M people use tools that 75% of Americans simultaneously distrust. He also explores how AI's biggest productivity gains are “invisible,” hiding in admin and finance work rather than flashy tech jobs. Plus, he even narrated his own retrospective to explain how the “exponential age is unfolding in real time.”

Also, we put together our own retrospective on the three years since ChatGPT launched... written by ChatGPT.

Meta? Yes. Good? Also yes. It’s really impressive how far AI’s ability to write, research, and reflect has come since ChatGPT launched in 2022. We’ve come a long way, y’all.

Here’s what happened in AI today:

  1. MIT released Project Iceberg showing AI can replace 11.7% of American workers today.
  2. Shai-Hulud malware compromised 800+ npm packages from Zapier, PostHog, and Postman.
  3. OpenAI and Google cut free access to Sora 2 and Gemini 3 Pro amid rising costs.
  4. Suno's users generate 7M AI tracks daily, recreating Spotify's catalog every two weeks.

MIT Just Built a Digital Twin of Every American Worker—And the Results Are Sobering

DEEP DIVE: MIT’s Project Iceberg and what experts predict will happen next with AI and jobs

Remember when everyone said AI would only replace tech jobs? Turns out, they were looking at the tip of the iceberg.

MIT and Oak Ridge National Laboratory just released Project Iceberg, a massive simulation that tracked 151M US workers across 32K skills and 923 occupations to figure out which jobs AI can already automate today.

The findings = AI can technically replace 11.7% of the American workforce right now… affecting $1.2 trillion in wages. That's not a prediction for 2030. That's what's possible with current technology.

Here's the twist: if you only look at where AI is actually deployed (mainly computing and tech), just 2.2% of jobs seem affected. MIT calls this the "Surface Index." Below the surface lurks cognitive work in finance, healthcare, and administrative roles that AI could automate but hasn't yet.

What changed everything: The Model Context Protocol (MCP). Until late 2024, AI assistants were stuck outside your work ecosystem, unable to access your actual tools. Anthropic's MCP changed that—it lets any AI model plug into any data source or tool through standardized connections.

The explosion happened fast. As of March 2025, there are 7,950+ MCP servers available. AI agents can now autonomously check calendars, book rooms, send invites, update project plans, and generate financial reports. Project Iceberg tracks every one of these servers and maps them against workforce skills in real-time.
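If you're curious what those standardized connections actually look like: MCP messages are JSON-RPC 2.0 under the hood. Here's a minimal sketch of the `tools/call` request an AI agent sends to an MCP server—the tool name (`check_calendar`) and its arguments are hypothetical examples we made up to mirror the calendar-checking agents described above, not a real server's API.

```python
import json

# Hypothetical sketch: the shape of an MCP "tools/call" request on the wire.
# MCP uses JSON-RPC 2.0; the tool name and arguments below are invented
# for illustration.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "check_calendar",            # hypothetical tool exposed by a server
        "arguments": {"date": "2025-11-30"},  # tool-specific input
    },
}

wire = json.dumps(request)  # what actually travels to the MCP server
print(wire)

# A conforming server replies with a JSON-RPC result carrying content blocks,
# roughly shaped like:
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "No meetings today."}]},
}
```

Because every tool speaks this same envelope, one agent can drive a calendar, a CRM, and a finance system without custom glue code for each—which is exactly why the server count exploded.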

Plot twist: The biggest impact isn't in Silicon Valley. Rust Belt states like Ohio, Michigan, and Tennessee show massive vulnerability because cognitive work supporting manufacturing (financial analysis, admin coordination, professional services) is highly automatable. If you work in these fields and states, don’t share this report w/ your boss…

Here’s what the experts think: A study of 339 superforecasters and AI experts (the Longitudinal Expert AI Panel) predicts 18% of work hours will be AI-assisted by 2030, close to MIT's 11.7% figure for today. So MIT's estimate feels directionally correct.

Why this matters: Project Iceberg is an early warning system. States are already using it to identify at-risk skills and build retraining programs. The question isn't whether AI will transform work, but whether we're building the infrastructure to handle 21M potentially displaced workers before the iceberg hits. Read the rest here.

FROM OUR PARTNERS

Run ads IRL with AdQuick

With AdQuick, you can now plan, deploy, and measure out-of-home campaigns just as easily as digital ads, making them a no-brainer to add to your team’s toolbox.

You can learn more at www.AdQuick.com

Prompt Tip of the Day

So like, WTF are Claude Skills? We had only just figured out MCP Servers (which connect Claude or ChatGPT to external data sources like GitHub, Slack, and databases through a standardized protocol) when Anthropic released these puppies. Now we gotta learn a new thing??

Luckily, Anthropic just produced an explainer video to walk us through this and explain when to use all of their different features. Here's how:

Think of building your AI coding setup like assembling a specialized team. Claude.md files tell Claude about your specific project—things like tech stack, coding conventions, and repo structure. Skills are portable expertise that work across any project, teaching Claude specialized tasks. MCP servers provide universal integration, connecting Claude to external data. And sub-agents are specialized AI assistants with fixed roles, each with their own context window and custom prompts.

The Setup Order for Beginners:

  1. Claude.md files first (run /init in Claude Code) → Sets your foundation with project structure and standards
  2. MCP servers next → Connects to tools you use daily (GitHub, Google Drive, Slack)
  3. Skills third → Enable Anthropic's pre-built Skills (docx, pptx, xlsx, pdf) for document creation
  4. Sub-agents last → Create specialized agents as you identify repetitive workflow patterns

How they work together: Your Claude.md file sets the foundation, MCP servers connect the data, sub-agents specialize in their roles, and skills bring the expertise—making every piece smarter and more capable.

Our favorite insight: Skills use progressive disclosure—Claude only loads what's needed, when it's needed. Each skill consumes just 30-50 tokens at startup, with full content loading only when relevant. This means you can install 20+ skills without bloating your context window.
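Under the hood, a Skill is just a folder containing a SKILL.md file: the YAML frontmatter (name and description) is what Claude loads at startup—those 30-50 tokens—and the body below it loads only when the task matches. A minimal sketch (the skill name and contents here are our own invented example, not one of Anthropic's pre-built skills):

```markdown
---
name: brand-voice
description: Apply our newsletter's tone and formatting rules when drafting copy.
---

# Brand voice skill

When drafting newsletter copy:
1. Keep sentences short and punchy.
2. Lead with the "so what" before the details.
3. Bold the key stat in each section.
```

Keeping the description specific matters: it's the only thing Claude sees up front, so it's how Claude decides whether to pull in the rest.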

Treats to Try

*Asterisk = from our partners (only the first one!). Advertise to 600K daily readers here!

  1. *Most workstations tap out at 24GB. Dell Pro Max with GB10's 128GB lets you run AI models up to 200B parameters—including ones that beat GPT-4o! Run NVIDIA Nemotron 70B (currently outranking GPT-4o on benchmarks), fine-tune Llama 3.3 on your own data, or chain two units together for 405B models. This isn't a toy for experimenting with 7B models—it's built for teams doing real AI development work that requires serious horsepower. Check the specs here.
  2. Manus Browser Operator automates tasks in sites you're logged into—like pulling competitor data from Crunchbase or filling CRM fields—using your existing sessions.
Vercel’s Workflow Builder lets you build multi-step automations visually by dragging and dropping workflow steps, like sending emails via Resend, creating Linear tickets, or querying databases—then exports production-ready TypeScript code; here’s a template you can deploy.
  4. VASA-1 from Microsoft takes a single portrait photo and speech audio to generate realistic talking face videos at 512×512 resolution with precise lip-sync, lifelike facial expressions, and natural head movements—running at up to 40 FPS with only 170ms latency (paper).
  5. Ripplica records you doing a browser task once (like pulling reports or updating dashboards), then repeats that exact workflow for you automatically—even with old legacy systems and internal tools.
  6. Fara-7B from Microsoft automates web tasks by taking screenshots and clicking/typing for you, like finding the right shoes and adding them to cart, or navigating to a restaurant's booking page.

Around the Horn

  1. OpenAI and Google quietly cut back free access to Sora 2 and Gemini 3 Pro, slashing free video and image generations as “melting” GPUs and soaring infrastructure costs forced them to tighten usage caps for non‑paying users.
  2. Poetiq achieved new state-of-the-art results on ARC-AGI benchmarks by building intelligence on top of recently released models like Gemini 3 and GPT-5.1.
  3. A second wave of the Shai-Hulud malware worm (allegedly built with AI’s help) compromised over 800 npm packages (open code packages) from major organizations including Zapier, PostHog, and Postman, affecting 25K+ GitHub repositories and spreading malware that steals developer credentials during package installation (more); this is a good warning to watch what packages your AI coder installs.
  4. Billboard’s look at Suno’s investor deck says the app’s 1M+ subscribers are generating around 7M AI tracks a day, effectively recreating Spotify’s 100M–song catalog every two weeks, and showing how fast AI-native catalogs are outpacing human-made music in sheer volume.
Amazon unveiled AI‑powered AR glasses for delivery drivers that overlay navigation, hazard alerts, and package info directly in their field of view.

Intelligent Insights

Michael Kratsios is leading the White House's AI action plan through three strategic pillars: regulatory frameworks for innovation, infrastructure development (particularly energy and data centers), and exporting the American AI stack globally—but revealed the administration's primary concern isn't China's capabilities (despite their open-source models dominating through API distillation techniques) but rather America's failure to adopt AI fast enough, emphasizing the U.S. has a narrow window to leverage its AI chip manufacturing dominance before China's SMIC catches up, while pushing for federal AI standards over state-level patchwork regulations that entrench incumbents and calling for K-12 AI education focused on understanding technology limitations rather than just using tools like ChatGPT.
Scott Alexander of Astral Codex Ten systematically dismantled the argument that AI safety regulation threatens America's AI race with China, showing that proposed safety bills (requiring model spec disclosure, whistleblower protection, and testing for infrastructure hacking capabilities) would add only ~1-2% to training costs while the US maintains a roughly 10x compute advantage—but allowing NVIDIA to export advanced chips to China, as the Trump administration has repeatedly considered, could collapse that advantage down to 1.7x, making chip exports orders of magnitude more dangerous than any safety regulation on the table.
The Peter H. Diamandis podcast argued the Department of Energy is turning America into “one big AI factory” by connecting federal supercomputers with previously locked scientific data sets to compress research timelines from years to days across biotech, fusion, and quantum domains, with mission director Dario Gil tasked to double American scientific productivity by 2035, though the panel argues anything less than 10x-100x improvement would represent failure, especially as Anthropic's Claude Opus 4.5 just proved AI can now outperform human engineering teams while using 76% fewer tokens at 67% lower cost ($25 per million tokens), which Emad Mostaque says is important because Opus is a great orchestrator of other agents.
  4. Gary Marcus raised suspicions that the Trump administration's Genesis AI program—which directs government agencies to purchase AI chips and compute for scientific research—might be a disguised bailout for overextended AI companies, noting that White House AI czar David Sacks's position flipped from “read my lips, no AI bailout” (November 6) to “we can't afford to let this crash” (November 24) just hours before the program's announcement, with multiple industry observers and analysts describing the initiative as “socialism for the rich.”
Strange Loop Canon argues LLMs are fundamentally pattern-fitters rather than rule-discoverers, demonstrating through experiments that models fail to learn simple cellular automata rules and approximate planetary orbits with epicycles instead of finding inverse-square laws—explaining why they achieve superhuman performance on most tasks yet fail in alien ways, making their intelligence closer to market intelligence (powerful but opaque) than individual reasoning, and suggesting alignment will require treating them like other superintelligences we govern (economies, markets) rather than rational agents.
  6. Evan Armstrong argued that the future winners in software will be marketplaces with unified customer identity or AI‑native systems of record, not traditional standalone SaaS apps.
SemiAnalysis’ deep‑dive on Google’s TPUv7 “Ironwood” claimed the company is rolling out hundreds of thousands of custom TPUs through GCP, making it the 900‑pound gorilla challenging NVIDIA’s AI chip dominance.

A Cat’s Commentary


See you cool cats on X!

Get your brand in front of 500,000+ professionals here
www.theneuron.ai/newsletter/mit-11-7-of-us-jobs-could-vanish-today

Get the latest AI right in Your Inbox

Join 450,000+ professionals from top companies like Disney, Apple and Tesla. 100% Free.