Welcome to The Neuron's overflow page for Monday, November 10th, where we share all the AI news from this weekend that didn't make the cut in today's newsletter. Think of this as the DVD bonus features—still packed with value, just couldn't squeeze everything into the main show.
We’ve grouped everything so you can jump straight to what you need—Around the Horn (company news + policy moves), Treats to Try (new tools, funding, and demos), and Intelligent Insights (research, benchmarks, and thoughtful threads). Skim it, search it, or snack on a section; either way, you’ll leave smarter than you arrived.
Here are a few of the stories that popped out at us: The EU’s “Digital Omnibus” aims to streamline data/privacy rules and cut compliance overhead; a runc container escape landed as a high-severity wake-up call for anyone shipping agents in containers; and a global HDD crunch is brewing as AI datacenters soak up high-capacity drives.
On the research and product front, DeepMind teased AlphaEvolve, Cursor previewed a cleaner background-agent UI, and Inworld TTS 1 Max jumped to the top of Speech Arena.
Around the Horn (Company News)
- OpenAI did indeed ask the Trump administration to expand the CHIPS Act tax credit to cover AI data centers, servers, and electrical infrastructure, arguing this would accelerate US AI development.
- OpenAI launched Codex v0.56.0 featuring GPT-5-Codex-Mini, a more compact and cost-efficient model with v2 API support and improved developer tools.
- A senior DeepSeek researcher warned that AI's rapid advancement could trigger significant workforce disruption and create structural challenges for startups within the next 5-10 years.
- Vast Data secured a $1.17B deal to serve as the primary data platform for CoreWeave's GPU cloud and AI infrastructure.
- Google Research introduced Nested Learning, a new machine learning paradigm that restructures learning as a hierarchy of interconnected optimization problems to combat catastrophic forgetting.
- A significant data leak exposed over 110,000 ChatGPT conversations to Google's analytics tools, revealing sensitive personal information and creating ongoing security problems.
- The EU launched plans to streamline privacy and data rules through the "Digital Omnibus" package, aiming to reduce administrative burdens by at least 25% to accelerate AI innovation.
- Amazon cut 14,000 corporate jobs due to AI-driven workflow efficiency, particularly affecting routine roles in administrative and middle-management functions.
- A high-severity vulnerability in the runc container runtime lets malicious containers escape isolation and compromise hosts, affecting all versions.
- The Internet Archive lost major copyright lawsuits brought by publishers and record labels, forcing it to remove over 500,000 books from its digital lending program.
- A global shortage of high-capacity enterprise hard drives emerged due to explosive AI and data center demand, with lead times extending to two years.
- OpenAI faced seven lawsuits from families alleging ChatGPT contributed to suicides and psychological harm in previously healthy users.
- Jensen Huang warned that China's million AI engineers, compared to America's twenty thousand, created a 50-to-1 talent gap that could lead to China surpassing US AI capabilities within two years.
- A Stanford professor's AI startup, Inception, raised $50M to develop diffusion models for code and text, backed by Andrew Ng, Andrej Karpathy, Nvidia, and Microsoft.
- Google plans to build an AI data center on Australia's Christmas Island in the Indian Ocean to enhance Asia-Australia data routing.
- Microsoft faces Australian legal action for hiding cheaper non-AI subscription options from 2.7M customers, with regulators pursuing penalties up to $50M per breach.
- Chinese robotaxi firms Pony.ai and WeRide saw shares drop 14% and 13.6% in their Hong Kong debuts despite raising $1.1B combined, with both companies remaining unprofitable.
- Tech giants are investing billions in subsea fiber-optic cables—the hidden infrastructure carrying 95% of international data—as companies like Google (spending $93 billion in 2025 CAPEX alone) race to control these digital arteries to power their expanding AI operations.
- Cursor previewed a refreshed background-agent UI to review diffs and code changes more cleanly.
- Google Research teased an Android XR demo at ICCV showing off perception and tracking for headset-style experiences.
- DeepMind teased its new AlphaEvolve paper showing AI-assisted math discovery across 67 problems.
- A NotebookLM Chrome extension is in the works that will import open browser tabs as sources.
- Inworld TTS 1 Max jumped to #1 in Speech Arena, edging out MiniMax Speech-02 and OpenAI TTS-1 (check out Inworld here).
- Okara dropped all closed models and now offers open-source models only (try Okara out here).
- A researcher claimed a new 36B model tops GPT-4 performance on multiple benchmarks.
- XPENG's new IRON robot moved so human-like that the company cut it open to prove there wasn't a person inside.
Treats to Try (AI Tools & Startups)
- AI Context Flow remembers your project details across ChatGPT, Claude, and other AI platforms so you don't have to retype the same context in every conversation—free to try.
- Suites automates unit testing in your TypeScript backend, handling all dependency mocking so you can focus on writing behavior-driven tests with minimal setup code—free to try.
- Poison Pill protects your music from AI theft by adding inaudible noise that breaks AI training models but sounds normal to humans.
- Scaloom helps you build trust on Reddit by automating authentic content creation and engagement across multiple accounts, increasing marketing efficiency by 300%—free trial, then paid plans.
- Procurement Sciences automates government contract proposals, reducing generation time from weeks to minutes (raised $30M).
- DeepJudge, founded by ex-Google researchers, raised new funding to expand its AI-powered search tools that help law firms query vast internal knowledge bases with natural language.
- Noro turns your spoken thoughts into organized plans, breaking big tasks into manageable steps with time estimates and focus timers built specifically for ADHD needs—free trial, then paid subscription.
- Tala Health provides 24/7 AI-powered healthcare support (raised $100M seed funding at $1.2B valuation).
- Pipeflow-PHP lets you build automated workflows in PHP where even non-developers can edit the logic directly through simple XML, perfect for automating tasks like content generation—free to try.
- Anchor AI writes your proposals and RFP responses automatically, while its Max agent transcribes your meetings and converts them into clear action items—free trial, then $49/month.
- qqqa answers your command-line questions instantly and can run commands for you after approval, eliminating the need to switch between terminal and browser.
- BeeBot delivers audio updates about nearby events and friends directly to your headphones as you explore your neighborhood—free trial, then paid subscription (read more).
- Airweave turns 30+ data sources into an agent-ready knowledge base—free & OSS (Airweave was just pitched as the agent context layer everyone needs for real-time retrieval across applications).
- Face-Looker spits out a plug-and-play face-following widget, and iisee.me productizes the same idea by transforming your photo into faces that follow your cursor around the screen, giving you a fun, interactive visual experiment.
- Min Choi shared a new Ultrathink prompt that turns Claude Code into a long-horizon planner for complex builds—paste it in to get a clear brief and step plan before any code—free to try.
- Krea Nodes lets you build node-based workflows that chain Krea's image, video, and 3D tools into reusable flows—free to try.
- Space DJ from Google DeepMind lets you fly a music spaceship that blends genres in real-time using DeepMind's Lyria RealTime—free to try.
- alphaXiv Explore lets you browse AI papers with built-in chat, notes, and community context right on each abstract—free to use.
- Glif launched video generation features that let users create FX with one click (demo from Justine Moore); the app also has multiple agent workflows you can use for your own SFX needs.
- This video-agent workflow compresses FX work into a single chain: extract, transform, re-animate, stitch—auto-FX in a click.
Intelligent Insights (Technical Analysis & Perspectives)
- Firefox introduced AI-powered Link Previews in version 139.0 and planned to integrate more LLM features; meanwhile, this open-source critique argues that Mozilla risks contradicting its own principles by alienating community members, when the goal should be empowerment rather than exclusion.
- Microsoft researchers built a synthetic marketplace to test AI agents, revealing how even advanced models like GPT-4o struggle with basic real-world tasks and get easily manipulated.
- This breakthrough in brain-to-image technology achieved state-of-the-art fMRI image reconstruction using just 1 hour of brain data instead of the previously required 40 hours.
- Schools are deploying AI surveillance tools that monitor millions of students' digital activities without adequate parental notification, creating individual "risk profiles" from chat logs and browsing data while offering limited evidence of improved safety outcomes.
- An Oxford study revealed 84% of AI benchmarks lack scientific rigor, meaning many performance claims tech companies make about their models are likely "irrelevant or even misleading" since they're based on flawed testing methods.
- Developer trust in AI tools faces a paradox—while 84% of developers now use AI daily according to Stack Overflow, only 3% highly trust the outputs, with most reporting the "almost right but not quite" problem creates more debugging headaches than it solves.
- Generative AI creates "Dunning-Kruger as a service," where technically proficient users become dangerously overconfident when using AI systems, potentially leading to a "mediocrity trap" where superficial AI-generated work becomes the accepted standard.
- Jason Packer shared how some people's ChatGPT prompts are leaking into Google Search Console; untangling why took many twists and turns, and it makes for a fascinating read.
- This piece argues that AI systems don't just mine our data but actively harvest our attention and reshape our thoughts, and urges the development of "conscious security": regularly questioning whether our thoughts and beliefs are truly our own or subtly planted by AI interfaces.
- A transparent AI ranking platform from compar:IA flips the script on model evaluation by using actual user preferences and sophisticated statistical models instead of technical benchmarks, gathering over 90,000 votes to create datasets that could improve conversational AI; however, some argue this benchmark, created by the French government, overtly favors France's homegrown AI, Mistral.
- Google's Gemini integration in Gmail now surfaces "most relevant" emails instead of just recent ones across 3 billion accounts, with a crucial privacy distinction: personal Gmail content can be used for AI training while Workspace accounts maintain stronger protections.
- Economic forces like Baumol's Cost Disease explain why AC units get cheaper while repairs get more expensive—a pattern that AI will likely replicate as productivity booms in some areas drive up costs in labor-intensive services.
- Agents that emit and execute Python (not just tool JSON) can self-revise on new observations, delivering up to 20% higher task success and a clearer, debuggable action trail (a minimal sketch of this pattern follows at the end of this list).
- Daniel San shared how Anthropic's "code-exec + MCP" pattern lets agents import only needed tools, do heavy lifting off-context, and return compact results—faster, cheaper, and more private (original blog).
- Simon Willison explains how K2 Thinking pairs trillion-param MoE "thinking" with efficient INT4 hosting and strong agent benchmarks—credible open-weight pressure on closed leaders.
- Simon also says to treat coding agents like interns: spin them up asynchronously in a sandbox repo, let them fetch data and run, and have them report via PRs you can audit later.
- In hands-on trials, Elvis Saravia says CLI-first workflows (Claude Code + Bash + Skills) beat computer-using agents for reliability and reviewability.
- Amanda Askell's ">100-page prompt" claim highlights test-driven system prompts that encode behaviors/edge-cases to stabilize long, complex tasks.
- Stacked gains from new models and tooling are visibly shrinking build cycles right now, a snapshot of just how fast development velocity is rising across the AI industry.
- Kosmos shows long-horizon, cite-as-you-go research: hours-long runs, ~42k LOC executed, ~1.5k papers read, ~200 rollouts, and majority-accurate statements.
- New IMO-Bench targets Olympiad-level math; Gemini's internal results claim large gains—useful stress-tests for long-form reasoning.
- Simia-SFT/RL use LLM feedback to generate trajectories and rewards—showing competitive results without real API environments.
- Expect diffusion language models (DLMs) to shine on small corpora given enough training time; autoregressive models win the edge back as data scale and quality rise in modern architectures.
- Robust perception is the prerequisite—then planning, then action, as shown in robotics research.
- V-Thinker shows vision agents do better when they can manipulate images and learn from step-wise rewards, with VTBench to measure progress.
- Using video as the scratchpad boosts multimodal reasoning; the authors back it with a new benchmark and strong Sora-2 results.
- Agents stick the landing on long chores when you give them a progress doc—Greg Brockman highlighted a post from Peter Steinberger, who used Codex and a markdown checklist to tackle massive linter debt overnight.
- Treat assistants like senior engineers: run parallel research (logs/web/code), plan first, then code—so issues like background-job rate limits surface before you ship.
- New work explains multi-agent "laziness" and fixes it with influence metrics + verifiable reward restarts (paper).
- Here's a 24-day "core concepts" thread you can follow to level up toward AI-fluent data science.
- A CLI-first Claude Code setup delivered steadier results than GUI-driving agents in hands-on use.
- Matan Grinberg advised using GPT-5 Codex for big builds, then switching to Anthropic models to nail details and second-order consequences.
- He suggests you use a coding-heavy model for scaffolding, then an analysis-heavy model to surface edge cases and refine—fewer surprises.
- Kieran Klaassen built an AI image feature by spending ~40 minutes not coding—just planning with specialized subagents.
- Klaassen's takeaway: use subagents to plan first (research/review/steps), then code—less rework, faster builds.
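
For the code-execution items above, here's a minimal sketch of the pattern in Python. It assumes a generic chat API: the model replies with a short Python snippet instead of a JSON tool call, the snippet runs in a scoped namespace that exposes only a couple of tools, and the printed output is fed back so the next step can self-revise. `call_llm`, `search_docs`, and `fetch_page` are hypothetical stand-ins rather than any vendor's real API, and in production the exec step should run inside a proper sandbox or container.

```python
# Illustrative sketch of a "code-exec" agent loop (hypothetical helpers throughout).
import io
import contextlib

def call_llm(messages: list[dict]) -> str:
    """Stand-in for your chat-completion call; should return a Python snippet as text."""
    raise NotImplementedError("wire this to your model provider")

def search_docs(query: str) -> list[str]:
    """Hypothetical tool: return a few matching document titles."""
    return [f"doc about {query}"]

def fetch_page(title: str) -> str:
    """Hypothetical tool: return the body of a document."""
    return f"contents of {title}"

TOOLS = {"search_docs": search_docs, "fetch_page": fetch_page}

def run_step(code: str) -> str:
    """Execute the model's snippet with only the exposed tools in scope, capturing stdout."""
    buffer = io.StringIO()
    scope = dict(TOOLS)  # the snippet sees these names plus builtins, nothing else
    with contextlib.redirect_stdout(buffer):
        exec(code, scope)  # in real use, isolate this in a sandbox or container
    return buffer.getvalue()

def agent(task: str, max_steps: int = 5) -> str:
    messages = [
        {"role": "system", "content": (
            "Reply only with a Python snippet. Use search_docs()/fetch_page() and "
            "print() what you learn, or print 'FINAL: <answer>' when done.")},
        {"role": "user", "content": task},
    ]
    for _ in range(max_steps):
        code = call_llm(messages)
        observation = run_step(code)
        if "FINAL:" in observation:
            return observation.split("FINAL:", 1)[1].strip()
        # Feed the observation back so the next snippet can revise its approach.
        messages.append({"role": "assistant", "content": code})
        messages.append({"role": "user", "content": f"Observation:\n{observation}"})
    return "no final answer within max_steps"
```

Keeping the tool dictionary small mirrors the "import only the tools you need" idea, and bulky intermediate results stay inside the execution scope rather than flooding the model's context window.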