Each month, our team surfaces the most fascinating, thought-provoking, and occasionally terrifying articles about AI from across the web—the ones that made us stop scrolling and actually think.
This isn't your typical link dump. These are the articles that sparked debates in our Slack, changed how we think about AI, or just blew our minds with insights we hadn't considered. From the philosophical to the practical, from research breakthroughs to reality checks, May's collection delivers the good stuff.
Whether you're looking for your next intellectual rabbit hole or just want to sound smart at your next dinner party, we've got you covered. Grab your beverage of choice and prepare to have your assumptions challenged.
May 29, 2025
The AI 2027 report team put together a helpful timeline chart for understanding their theory of what a “superintelligence explosion” looks like; it even maps the differences between their two scenarios (slowdown vs. race).
For fun: If you’ve ever wondered wtf a “GPU” from NVIDIA is, this person has displayed and labelled all the GPUs they’ve ever owned; quite the collection! Now if you wanna see what an H200 looks like (one of NVIDIA’s most popular chips)…check this out.
Something to remember: AI has to re-read the entire conversation every time you chat with it—that’s why longer chats eventually break down, and why it’s often better to start a new chat than continue where you left off. Pro tip: Ask your current chat for an extensive summary you can share in the next chat.
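If you're curious what that looks like under the hood, here's a minimal sketch using the OpenAI Python client (the model name and prompts are placeholder assumptions, not a recommendation). The API is stateless: every turn resends the entire message history, and that history only grows.

```python
# Minimal sketch: chat APIs are stateless, so each turn resends
# the ENTIRE conversation. Model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
history = [{"role": "system", "content": "You are a helpful assistant."}]

def chat(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    # The model re-reads every message in `history` on every call;
    # as the list grows, each turn gets slower, costlier, and noisier.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

# The pro tip as code: ask for a handoff summary, then seed a fresh chat.
summary = chat("Summarize everything important in this conversation "
               "so I can continue it in a new chat.")
history = [{"role": "system",
            "content": "Context from a previous chat: " + summary}]
```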
Dan Shipper at Every has some great interview game—check out his latest discussion with GitHub CEO Thomas Dohmke on how the company got 15M devs to trust AI with their code (video).
Execs from companies like PayPal, Shopify, and Microsoft are staffing up for the next wave of AI shopping; demand is high right now, but the experience is hobbled by technical problems with payments and by websites that aren't optimized for AI. Pro tip: Microsoft has a merchant program to help your products show up in more AI search results.
Check out this fascinating piece about how a UC San Diego philosopher-data scientist is grappling with AI systems that can now act autonomously in the world—he's working on everything from AI-assisted battlefield triage to the bigger question of what happens to human identity when machines start making their own decisions without us.
The US immigration enforcement agency, ICE, is apparently accessing data from Flock's nationwide AI-powered license plate reader network through local police departments performing immigration-related searches, giving federal authorities backdoor access to surveillance tools.
This is a wild brain-computer chip interface concept from a co-founder of Neuralink that gives just a taste of what the future of personal computing could look like; fun fact: Apple already allows you to connect your own brain chip to control its devices as an accessibility feature.
It's an older article, but check out this sobering economic forecast from MIT econ professor Daron Acemoglu that challenges AI hype, showing that despite industry optimism, AI will likely automate only 4.6% of all work tasks profitably in the next decade, delivering just a 1.1-1.6% total GDP boost rather than the transformational productivity surge many expect. There was a good debate about the article on Hacker News recently, which is why we're bringing it up again.
Ethan Mollick seems to think we’re severely underestimating what OpenAI’s smartest model, o3, can do—in fact, he argues that organizations need to change in order to start reaping the true benefits of AI.
Researchers developed an AI test that can predict which men could benefit from a drug for prostate cancer (Abiraterone)—read more.
May 23, 2025
A sweeping study across top universities found that a large language model (OpenAI o1) demonstrated superhuman medical diagnostic reasoning: it outperformed human physicians across multiple complex medical reasoning tasks and emergency room case evaluations.
Researcher Ethan Mollick outlined a comprehensive strategy for AI adoption in companies, emphasizing the importance of “Leadership, Lab, and Crowd” approaches to effectively integrate AI into organizational workflows.
Fei-Fei Li, Stanford's “godmother of AI,” warned that cuts to US research funding and international student visas could threaten America's innovation ecosystem and global tech competitiveness.
AI researchers developed methods to prevent low-probability tokens (the basic units of text that AI models process and convert into words) from skewing language model reinforcement learning.
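The paper's exact method is more involved, but the core idea (filter low-probability tokens out of the RL update so their outsized gradients don't dominate) can be sketched in a few lines of PyTorch. The threshold and tensor shapes below are illustrative assumptions, not the paper's recipe.

```python
# Illustrative sketch only: mask low-probability tokens out of a
# REINFORCE-style loss so their extreme log-prob gradients don't
# skew training. The 1e-3 threshold is an assumption, not the paper's.
import torch

def masked_policy_loss(logits, actions, advantages, min_prob=1e-3):
    # logits: (batch, seq, vocab); actions, advantages: (batch, seq)
    log_probs = torch.log_softmax(logits, dim=-1)
    taken = log_probs.gather(-1, actions.unsqueeze(-1)).squeeze(-1)
    # Tokens the model considered very unlikely get dropped from the
    # update instead of being allowed to dominate the gradient.
    mask = (taken.exp() >= min_prob).float()
    return -(mask * taken * advantages).sum() / mask.sum().clamp(min=1.0)
```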
Researchers developed Phase Shift Calibration (PSC), a technique to help AI models better process longer text contexts.
The Linux Foundation's Meta-commissioned research found open source AI is widely adopted, with 89% of organizations using open source somewhere in their AI stack (if you’re one of the 11% of orgs that aren’t using it, check out HuggingFace ASAP—it’s like the ultimate library for open source AI models).
May 21, 2025
MIT expert Daniela Rus highlighted AI's transformative potential across sectors like manufacturing, agriculture, and healthcare.
Venture capitalist Eze Vidra outlined how AI has transformed the startup landscape, making product development easier but intensifying competition and raising the bar for differentiation.
Harvard's Galileo Project is using advanced AI technology to search for extraterrestrial evidence—this is attracting academic research and Pentagon interest in what was once considered a fringe scientific pursuit.
A newly published study reveals that most large language models tend to overgeneralize scientific findings: not by getting the facts wrong, but by broadening claims beyond the original research's scope (and newer models may be even more prone to this).
Stephen Wolfram (math / science genius) explored the inner workings of ChatGPT, revealing that language might be fundamentally simpler than previously thought since AI implicitly discovered underlying patterns in human communication.
One final reminder to verify any and all AI outputs: check out this physicist's candid account of how AI-for-science research suffered from widespread overoptimism. He found 79% of papers claiming AI superiority used weak baselines, while negative results almost never got published, creating a distorted picture of AI's actual capabilities in scientific applications.
May 16, 2025
Making an agent work is apparently really easy: just message a language model in a loop and give it tools to use (according to the team behind Sketch, an open-source AI coding agent). See the sketch below for how little code that takes.
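Here's a bare-bones version of that loop, as a sketch of the idea rather than Sketch's actual implementation; the model name and the single `run_shell` tool are our own placeholder assumptions (and running model-chosen shell commands is obviously not something to do unsandboxed).

```python
# Bare-bones agent loop: message the model, let it call a tool, repeat.
# A sketch of the idea, NOT Sketch's actual code; the model name and
# run_shell tool are placeholder assumptions. Sandbox before running!
import json, subprocess
from openai import OpenAI

client = OpenAI()
tools = [{
    "type": "function",
    "function": {
        "name": "run_shell",
        "description": "Run a shell command and return its output.",
        "parameters": {
            "type": "object",
            "properties": {"command": {"type": "string"}},
            "required": ["command"],
        },
    },
}]

messages = [{"role": "user", "content": "List the Python files in this repo."}]
while True:
    resp = client.chat.completions.create(
        model="gpt-4o-mini", messages=messages, tools=tools)
    msg = resp.choices[0].message
    messages.append(msg)
    if not msg.tool_calls:       # no tool requested: the agent is done
        print(msg.content)
        break
    for call in msg.tool_calls:  # run each requested tool, feed results back
        args = json.loads(call.function.arguments)
        out = subprocess.run(args["command"], shell=True,
                             capture_output=True, text=True).stdout
        messages.append({"role": "tool", "tool_call_id": call.id,
                         "content": out})
```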
LLMs are making me dumber - Vincent Cheng explores how relying on large language models for tasks like coding can erode skills development and learning.
This reasoning model supposedly outperforms DeepSeek-R1 and rivals much larger models like Qwen3 on math and coding benchmarks despite having far fewer parameters.
How AI is changing radiology - Despite early predictions that AI would replace radiologists, the field has flourished with increased employment as AI transforms roles rather than eliminating them.
New study on multi-turn conversations shows language models perform 39% worse in multi-turn, underspecified conversations compared to single-turn, fully-specified instructions.
May 14, 2025
The gap between basic and expert AI users - The Algorithmic Bridge argues as AI gets more powerful (like OpenAI's Deep Research), the gap between basic and expert users actually grows larger—meaning your ability to craft good prompts could be the difference between getting mediocre results and PhD-level analysis.
Imperial College AI coursework experiment - An engineering professor tested ChatGPT, Claude, Meta AI, and Gemini on complex coursework. ChatGPT and Gemini passed (with Gemini performing significantly better), while Claude and Meta failed.
May 9, 2025
AI use in the workplace affects credibility - New research suggests that openly using AI tools at work may impact how coworkers perceive your competence and trustworthiness.
The real cost of AI training - A deep dive into the environmental and financial costs of training large language models, with some surprising comparisons to traditional industries.
May 7, 2025
When AI goes dark: adversarial attacks explained - Researchers demonstrate how seemingly innocent prompts can be used to manipulate AI systems in unexpected ways.
The psychology of AI trust - A comprehensive study on what makes humans trust (or distrust) AI recommendations, with implications for AI design.
May 2, 2025
AI's creativity paradox - MIT researchers explore why AI can generate novel ideas but struggles with genuine creative breakthroughs.
The hidden biases in reasoning models - Anthropic's latest research reveals surprising biases in how reasoning models approach problems, even when explicitly instructed to be neutral.
Why AI can't do common sense (yet) - An exploration of the fundamental challenges preventing current AI systems from achieving human-like common sense reasoning.
Want more mind-expanding AI content? These Intelligent Insights appear in The Neuron newsletter every Wednesday and Friday. Because staying smart about AI shouldn't feel like homework.