WTF is going on with AI and education?

With students and teachers alike using AI, schools are facing an "assessment crisis" where the line between tool and cheating has blurred, forcing a shift away from a broken knowledge economy toward a new focus on building human judgment through strategic struggle.

When ChatGPT becomes CheatGPT, WTF are we actually learning?

If you only have a minute to scroll, here's the current state of play:

  • The Education Crisis: 90% of college students admitted to using ChatGPT for schoolwork within two months of its launch, and 82% of undergrads report using AI for school, creating an "assessment crisis" where the line between tool and cheating has blurred completely. Schools are caught in reactive panic, with responses ranging from outright bans to requiring students to use AI—creating a patchwork of confusion where one classroom encourages AI while the next door threatens expulsion.
  • The Cognitive Debt Problem: New MIT research reveals that heavy AI reliance literally weakens your brain. Students who used ChatGPT showed significantly weaker neural connectivity and couldn't remember what they'd just "written." When later forced to work without AI, they performed worse than those who never used it—suggesting that using AI as a crutch creates "cognitive debt" that compounds over time.
  • The Radical Experiment: Alpha School in Austin represents the extreme opposite approach—AI tutors teach entire curricula in just 2 hours daily, with students reportedly learning 2x faster than traditional peers. While the claims lack independent verification, the model is expanding nationwide and represents a complete reimagining of education where AI replaces teachers entirely, freeing up time for "life skills" and real-world experiences.
  • The Science of Learning: Research shows the problem isn't AI itself, but how we use it. Students who got direct answers from AI performed 17% worse on exams than those who never used it, while students using AI tutors designed to give hints (not answers) showed massive practice improvements without harming actual learning. The difference: thinking with AI versus letting AI think for you.

What to do about it: The future belongs to the "judgment economy," where knowledge is commoditized but taste, agency, and learning velocity become the new human moats. Use the "Struggle-First" principle: wrestle with problems for 20-30 minutes before turning to AI, then use AI as a sparring partner (not a ghostwriter) to deepen understanding. The goal isn't to avoid AI, but to strategically choose when to embrace "desirable difficulties" that build genuine expertise versus when to leverage AI for efficiency.

Let's get into it!

In a packed gymnasium awash in blue and gold (and black caps and gowns), a student named Andre Mai sat beneath the Jumbotron at UCLA's graduation, eagerly awaiting his diploma. Except unlike some of his peers, Mai had his laptop out.

When the cameras turned on him, Mai flipped his laptop around, revealing a screen lit up with a dark-mode ChatGPT conversation as he whooped and hollered.

The image instantly went viral, a perfect symbol for a generation navigating the new world of AI. And to the hundreds of thousands (or even millions) who've seen this now infamous video, it seemed like a genuine flaunting of AI plagiarism.

The internet’s reaction was swift and cynical. Did bro really just pay $160K for a ChatGPT subscription? 

And yet, as with most things in the AI era, the truth was far more complex. Mai later clarified that he wasn’t cheating; he was using AI for an AI class, and, crucially, with his professor's blessing. In fact, he was finishing some work he needed to have ready by the end of the day, so when the cameras rolled up, he just flipped the computer around and showed them what he was doing.

So, in a way, he was actually working MORE with AI. Who else in that crowd had their laptop on them, huh??

Mai’s story is a microcosm of a system in crisis. While he had permission to use AI, a staggering 90% of college students admitted to using ChatGPT on assignments within just two months of its public launch. 

According to a recent Axios report, an estimated one in four teenagers now uses it for schoolwork. Earlier data suggests that number could be MUCH higher: 

  • 82% of undergraduates and 72% of K-12 students have used AI for school.
  • 56% used it for writing assignments.
  • 45% for completing other types of schoolwork.

In short, there's been an "extraordinarily rapid adoption" of the technology. To put it mildly.

As a result, schools and universities have been thrown into a reactive panic, struggling to craft policies for a technology that evolves faster than they can hold faculty meetings. The result? A patchwork of confusion. 

In one classroom, a student is encouraged to use AI for outlines; in the lecture hall next door, doing the same could get them expelled.

The situation has become absurdly meta, with students caught using AI to write papers now using AI to draft their apology emails (lol).

This has turned educators into unwilling “AI cops,” armed with detection software so unreliable it creates a new form of academic injustice, flagging honest work while letting sophisticated cheating slide. 

And as you can imagine, the reactions to all this range wildly, from viewing AI as an existential threat to education, to seeing it as an opportunity to finally move beyond outdated assessment methods that were already failing before ChatGPT even existed.

For example, when you list out the range of options being considered to combat AI cheating, you have everything from...

  • Outright bans and blocked devices. 
  • Handwritten-only assignments (or in-class writing).
  • AI detection software.
  • Teachers manually checking document edit histories. 
  • In-person oral exams where students are tested on their knowledge live and in front of the class.
  • “Erring on the side of grace” when students make mistakes. 
  • Math professors allowing computers on tests, knowing it won't save unprepared students from “AI-proof” assignments about personal experiences. 
  • Professors actually requiring students to run their essays through ChatGPT as their first assignment. 
  • Complete curriculum redesigns that embrace AI as a co-intelligence tool.

It's a lot!

Now, we've wanted to write a piece on this here at The Neuron for a lonnng time.

To try to capture the breadth and nuance of these efforts, we ran a Gemini Deep Research report way back in January to find out what all teachers were doing to address AI cheating. At the time, the Gemini report categorized teacher reactions into four types:

Alarm:

Adaptation:

  • Teachers give students chances to redo assignments when caught using AI.
  • Focus on "AI-proof" assignments requiring personal experience.

Acceptance:

  • Over 70% of higher-ed instructors using AI are using it to grade student work.
  • Teachers using AI to "level-down" textbooks and save time.
  • From the survey: 84% of teachers who used ChatGPT said it positively impacted their classes.

Integration:

Gemini also reported on how some of the tactics worked out in practice: 

  • Banning ChatGPT: Mixed results; difficult to enforce.
  • AI detection tools: Varied effectiveness with false positive concerns.
  • Updated policies: Generally positive but requires ongoing updates.

That said, Gemini's report was a bit dated by the time we finally got to writing this piece (six months = eons in AI time), so we used o3-pro to look into it as well. It, too, divided up the current landscape into four key quadrants, but with some key updates. Here's what it found:

1. Alarm & Restriction (Initial Panic Response)

  • Many schools initially banned ChatGPT entirely (NYC, LA, Seattle districts).
  • Blue book exam sales surged 30-80% as schools reverted to handwritten tests.
  • AI detection tools proved unreliable with too many false positives.
  • Most bans were reversed within months; NYC lifted its ban after just 4 months.
  • Harsh penalties (including threats of expulsion) drove usage underground rather than eliminating it.

2. Adaptation & Mitigation (The Practical Middle Ground)

  • Teachers redesigned assignments to be "AI-proof," emphasizing projects, personal experiences, and complex problem-solving.
  • Process-oriented assessments became common (requiring drafts, oral defenses).
  • 75% of students admitted AI use when confronted with gentle, open-ended questions.
  • Academic policies updated to explicitly address AI use.
  • And yet, overall cheating rates remained steady despite AI availability.

3. Acceptance & Guidance (The Pragmatic Shift)

  • AI allowed with citation requirements (treating it like any other source).
  • 48% of districts provided AI training to teachers by fall 2024 (double from previous year).
  • Open classroom discussions about AI ethics became normalized.
  • Teachers who tried ChatGPT themselves (over 80%) became less fearful.
  • Focus shifted from "How do we stop it?" to "How do we use it responsibly?"

4. Integration & Innovation (The Leading Edge)

  • Singapore rolled out AI-powered adaptive learning across all primary schools.
  • Some professors created AI "twins" of themselves for 24/7 student support.
  • China mandated age-appropriate AI education (basic concepts for young students, design skills for high schoolers).
  • 70% of teachers still lack formal AI training as of spring 2024.
  • Early results show improved personalization and efficiency, but concerns about maintaining human connection.

Now, take this part with a grain of salt, but o3 concluded that "The most successful approaches combine clear guidelines with open dialogue, treating AI as a tool requiring literacy rather than a threat requiring elimination. Teachers increasingly recognize that preparing students for an AI-integrated world is more valuable than attempting to create an AI-free classroom bubble."

So as you can see, while the extremes look like Luddite versus Futurist, most educators are caught somewhere in the messy middle, trying to preserve authentic learning while acknowledging that the genie is already out of the bottle.

After all, it's not only education where this is happening: this "AI cheating" epidemic has prompted a return to in-person exams for job interviews (where AI resumes and AI hiring managers have turned applying for jobs into a war of AI attrition), and it has also impacted startup pitching, where some VCs now won't even take a meeting unless you're already generating revenue.

I've even read an anecdote about internship programs being cancelled because AI-written applications flooded the system. This one is the saddest, and it speaks to the risk AI creates for entry-level jobs.

All this has led to companies like MeritFirst, which replaces resume screening with skill tests that show what candidates can actually do. Attempts to bring oral essays and all assignments back in person are a way of applying that same idea to education.

The problem has reached such a fever pitch that one startup (Cluely) just raised $15M for its tool to "cheat at everything" (though that's not exactly what it does; in fact, it's actually kind of a cool tool, BUT, as we'll get into down below, it's probably a net negative for learning, being present, and staying human... the company even leans into that idea itself, claiming to be "the end of human thought").

If you read professors' initial reactions to Andre Mai's ChatGPT display, it sounds like they're seeing a generation-wide cultural and/or societal issue playing out, where students today look at society in general with more outright disdain than previous generations did. They're bragging about shoplifting and openly using ChatGPT mid-class, phones directly in their hands, even taunting their teachers with "that's not what ChatGPT says."

The sentiment = what's the point of trying hard, when you can cheat your way to get ahead? The system is rigged anyway, so why NOT cheat? 

It might also be helpful to put yourself in the students' shoes: According to personal anecdotes, students often use AI when they're "busy or lack confidence" and "don't want to sound like themselves, because they're afraid that they'll get it wrong."

Students also report feeling some assignments are "busy work", saying, "Why are we learning to do it if we can get it done in seconds?"

Also, you can't argue with the results of using AI. One student's friend "didn't learn anything from math this year. She just got the answers" but still got a B because the final exam was only 10% of the grade.

And in fact, teachers are quietly becoming AI power users themselves...

A study of teachers who use AI surfaced some interesting insights of its own. It's not just students "benefitting" from AI; teachers are as well.

If you can't beat them, join them?? 

This is according to new data from a Gallup poll that dropped some eye-opening numbers: 6 in 10 U.S. teachers used AI tools this past school year, with high school and early-career teachers leading the charge. The teachers who are using AI weekly are saving about 6 hours per week. That's basically getting an extra day back.

What they're actually doing with AI:

  • 37% use it for lesson planning monthly.
  • 80% say it saves time on worksheets and assessments.
  • 64% report better quality when modifying student materials.

They are calling this the "AI dividend", and it adds up to about six extra weeks per school year. But only 32% of teachers are cashing in weekly, while 40% aren't using AI at all.
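
For the curious, the "six extra weeks" figure roughly checks out. Here's the back-of-the-envelope arithmetic as a quick sketch (the 36-week school year and 40-hour work week are our assumptions, not Gallup's):

```python
# Rough check on the "AI dividend" claim. Assumptions are ours, not Gallup's:
# a ~36-week school year and a 40-hour work week.
hours_saved_per_week = 6
school_weeks_per_year = 36
hours_per_work_week = 40

total_hours_saved = hours_saved_per_week * school_weeks_per_year  # 216 hours
work_weeks_reclaimed = total_hours_saved / hours_per_work_week    # 5.4 weeks

print(f"{total_hours_saved} hours saved ≈ {work_weeks_reclaimed:.1f} work weeks per year")
# At the 50-hour weeks many teachers actually report, it's closer to 4.3 weeks.
```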

With teachers working 50-hour weeks and only 45% satisfied with their pay, this could be a game-changer. As one Houston social studies teacher put it: "AI has transformed how I teach. It's also transformed my weekends and given me a better work-life balance."

But there's nuance here. About half of teachers worry that student AI use will tank teens' critical thinking abilities. Which is why they're using AI themselves to spot when students are overusing it, watching for telltale signs like zero grammatical errors and weirdly complex phrases that scream "ChatGPT wrote this." Also em dashes and phrases like "it's not just X, it's Y," probably.

So what is the solution?

First of all, NOT "AI detectors." In our previous writing, we've found that AI detectors are fundamentally flawed. For example, when Christopher Penn ran the U.S. Declaration of Independence through ZeroGPT, it showed a 97% chance of being AI-generated. Some educators have come around and realized that detection software is unreliable, with one official noting "It's sort of AI competing with AI" (not sort of, it IS).

Ethan Mollick identified two critical illusions that hold the current education system back from adapting to the AI-enabled world:

  • Detection Illusion: Teachers believe they can detect AI use, when they largely cannot.
  • Illusory Knowledge: Students think they're learning when using AI, but often aren't (we'll get to this more in depth in a minute).

These two illusions are why schools need to catch up with policies and training before the gap between AI-savvy and AI-clueless educators becomes a chasm. The policy gap right now is massive: Only 19% of teachers work at schools with AI policies, but those schools see a 26% greater "AI dividend." Meanwhile, 68% of teachers have received zero training from their districts—they're teaching themselves.

This rampant AI use could be a problem even for teachers, because as we'll reveal down below, if teachers over-use AI themselves, they might be doing a long-term disservice to their own ability to teach.

So what do we know so far? 

1. We know students are using AI.

2. We know teachers are using AI.

3. And we know the consensus emerging is that fighting AI in general is futile.

4. Instead, educators must redesign curriculum around AI use while emphasizing critical thinking, authentic assessment, and helping students understand when AI helps versus hinders learning.

The overarching theme is that "AI is here to stay in education." So the question isn't whether to allow it, but how to harness it effectively while maintaining academic integrity and genuine learning.

See, the challenge is more pedagogical, and philosophical, than technological, requiring a fundamental rethink of what education means in an AI-enabled world.

And this chaos in traditional schools has opened the door for radical new models.

While institutions grapple with containing AI, a far more radical experiment is underway in Austin, Texas. It asks a different question: What if the goal isn't to fight AI, but to fully surrender to it? 

What if we replaced the teacher not with a detector, but with the AI itself?

This is the premise of Alpha School, an Austin-based school where AI isn't a cheat code—it's the teacher.

Here, students complete their entire core academic curriculum in just two hours a day, taught not by humans, but by adaptive AI tutors.

The results? Alpha School says kids are learning at least 2x faster than in traditional schools. Hard to believe, I know. We'll explain how this works in a sec.

The Alpha-School Program in brief:

  • Students complete core academics in just 2 hours using AI tutors, freeing up 4+ hours for life skills, passion projects, and real-world experiences.
  • The school claims students learn at least 2x faster than their peers in traditional school.
  • The top 20% of students show 6.5x growth. Classes score in the top 1-2% nationally across the board.
  • Claims are based on NWEA's Measures of Academic Progress (MAP) assessments... with data only available to the school. Hmm...

Austen Allred shared a story about the school, which put it on our radar. He shared how his first-grade son is learning at twice the speed of his peers in traditional school, jumping from an average student to the 99th percentile in reading.

His fourth-grade daughter apparently had hidden gaps in her knowledge of the material, and the AI was able to help her catch up in reading. Now, she's progressing faster as the AI identifies and addresses her specific weaknesses.

In 5 months, the kid blazed through the rest of 1st grade, all of 2nd grade, and is already 20% through 3rd grade. WTF, right? 

Now here's the really wild part: The "gift of time," as Allred calls it, is the school’s true product. Allred says his kids don't have that "drudgery is coming" feeling about school anymore. You know, that Sunday Scaries vibe, but for 7-year-olds heading to math class?

He says because the AI handles all academics in just 2 hours a day with personalized apps that adapt to each kid, the rest of the day is spent on real-world “life skills.”

Students spend afternoons learning financial literacy, leadership, teamwork, public speaking, grit and entrepreneurship. His children have attended wilderness survival camps, learned to rock climb, started a podcast, and flown to Chicago to interview their idols at a conference (because of course they did).

"One Alpha student is building a mountain bike park outside Austin after successfully raising $400K. A group of middle schooler students are publishing a mental health self-help book, by kids, for kids. Three high school girls at Alpha are delivering learning tools to Ukrainian refugee children living in shelters."

Another student, Savannah Marrero, a seventh-grader, wants to launch a high school in Brownsville to continue her AI-powered education rather than transition to traditional high school.

Allred says this is a fundamental redefinition of childhood, where, as he observed, his kids "glide from excitement to excitement." And he's very optimistic about its success: "There is going to be an entire generation of 16-year-olds completely finished with High School curriculum, with perfect scores on everything, and 5s on a huge number of AP tests."

So clearly this is some elite private school nonsense, right? 

Kinda...? MacKenzie Price is a Stanford Psychology graduate who founded Alpha School in 2016 after her daughters complained that traditional school was boring.

The school uses a tool called "2-hour learning" to do this (which MacKenzie pioneered in 2016, around the time she started the school).

Here's how "2-Hour-Learning" works

2-Hour-Learning is the underlying educational technology platform that powers Alpha School and quite a few other new schools like this. Alpha School uses the platform, which is a collection of AI-driven apps owned by Trilogy Software. Students work with AI tutors through apps that provide personalized, 1:1 instruction at their individual pace and level.

As David Perell broke down, the system uses mastery learning, where students must fully understand each concept before moving forward, ensuring no knowledge gaps.

Mastery-Based vs. Time-Based Learning:

  • Mastery learning means you don't move on to the next level until you know the material from the previous one (the approach dates back to 1968); there's a quick sketch of the idea after this list.
  • It's the opposite of how schools work right now.
  • In traditional learning, time is fixed (everyone gets exactly one school year for algebra) while learning is variable (some kids get A's, others get C's).
  • Students can advance multiple grade levels if ready or fill foundational gaps before proceeding.
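
To make the contrast concrete, here's a minimal sketch of a mastery gate in code. This is purely illustrative (our own toy loop, not Alpha School's or 2 Hour Learning's actual software): the learner repeats a unit until they clear a threshold, so time becomes the variable and mastery the constant.

```python
import random

MASTERY_THRESHOLD = 0.9  # e.g., 90% on a unit quiz

def run_unit(unit: str, take_quiz) -> int:
    """Repeat practice-and-quiz cycles until mastery; return the attempts it took.
    Time is variable, mastery is fixed (the reverse of a traditional classroom)."""
    attempts = 0
    while True:
        attempts += 1
        if take_quiz(unit) >= MASTERY_THRESHOLD:
            return attempts  # only now does the student advance to the next unit
        # Otherwise: review the weak spots and try again. No moving on with a C.

# Demo: a simulated learner whose quiz scores improve with each attempt.
def simulated_quiz(unit: str, _state={"score": 0.5}) -> float:
    _state["score"] = min(1.0, _state["score"] + random.uniform(0.05, 0.2))
    return _state["score"]

print("Attempts to master 'fractions':", run_unit("fractions", simulated_quiz))
```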

It also comes with a radical redefinition of traditional Teacher roles:

  • Teachers throughout Alpha's network aim to mentor and motivate students while imparting a sense of autonomy as early as preschool.
  • Teachers at Alpha appear to serve more like "support staff" dedicated to empowering self-sufficient students.
  • Adults become "Guides" providing emotional support rather than content delivery.

Now for the reality check: 2 Hour Learning and Alpha School are more or less a vertically integrated business, where the school founder's family owns the technology. And they aren't the only school doing this. Related schools include GT School, Lake Travis Sports Academy, NextGen Academy (gaming-focused), Novatio School, and Unbound Academy.

At Unbound Academy, for example, the board members are all affiliated with 2 Hour Learning, Trilogy Enterprises, and Crossover Markets, each of which serves as a vendor for the school, which can present conflicts of interest.

And as we mentioned above, claims of academic growth have not been independently verified; they rely on internal metrics from the schools themselves, and there are no peer-reviewed studies validating the effectiveness of the 2-Hour Learning approach.

Even still, the approach has appeal...from a business perspective, if not also a parental one. Alpha is opening seven new campuses by Fall 2025 in Texas, Florida, Arizona, California, and New York. Tuition varies by location, averaging around $40K to $50K a year.

We should also mention that Alpha's rise comes as school choice found a champion in the Trump administration. President Trump just signed an executive order directing the Department of Education to help states reallocate federal education funds toward school-choice programs, and also just signed an executive order to integrate AI into K-12 classrooms nationwide, aiming to cultivate tech-related expertise in future generations.

So really, you can look at Alpha School like a potential proof of concept of what's to come.

Whether that's a good or a bad thing depends on the answers to two key questions: 

  1. Can these schools and programs back up what they claim with publicly available evidence? 
  2. Can AI-based learning systems actually be designed in a way that encourages and improves learning, based on the science? 

On the first question, we'll have to wait and see. But on the second question, there's a lot more digging we can do right now.

As you can see, these two worlds—the chaotic, reactive world of "trad edu" today and the hyper-efficient, radical dream world of Alpha School (if true)—represent the poles of our current educational dilemma.

One path is plagued by a cat-and-mouse game of cheating and detection, while the other outsources knowledge delivery entirely to AI in favor of building human skills (which is a fascinating sub-topic to explore at some point; in an era of rampant AI, should we just focus on learning distinctly human things?)

The "AI cheating" vs "AI tutoring" debate misses a more fundamental, and frankly scarier, point.

It’s what that recent MIT study we shared a few weeks ago calls "cognitive debt." The study, titled "Your Brain on ChatGPT: Accumulation of Cognitive Debt," found that relying on ChatGPT doesn't just bypass the work—it can literally weaken your brain.

To understand the impact of AI on learning and cognition, MIT researchers designed a powerful experiment. They divided 54 participants into three groups for an essay-writing task: an "LLM [large language model] group" restricted to ChatGPT, a "Search Engine group" that could use Google, and a "Brain-only group" with no external tools.

Over four months and multiple sessions, they measured not just the quality of the essays but also the participants' brain activity using electroencephalography (EEG).

Here’s what the researchers found:

  • Weaker Brains: The LLM group showed significantly weaker neural connectivity compared to the other groups. Their brains were literally doing less work.
    • Across all measured frequency bands (Alpha, Beta, Delta, and Theta), which are associated with functions like attention, memory, and critical thinking, the LLM group's brains were simply less active. They were outsourcing the cognitive heavy lifting, and their neural circuitry reflected it.
  • Awful Memory: LLM users were shockingly bad at quoting from the essays they had just finished writing. In the first session, 83% of them couldn't do it.
    • This was the most damning behavioral evidence: the LLM group's inability to remember what they had just written. In the first session, a staggering 83% of LLM users could not accurately quote a single sentence from their own essay moments after completing it. In contrast, the Brain-only and Search Engine groups had minimal difficulty. This suggests that when AI generates the content, the information is not being encoded into the user's long-term memory.
  • Less Ownership: The LLM group felt less ownership over their final work.
    • Their essays, while often grammatically perfect, felt less like their own creation.
  • The "Debt" Kicks In: When the LLM users were later asked to write without AI, their brain activity was still weaker than the group that never used it. They had become less capable of working independently.
    • This discovery came in the fourth session, when the groups were switched. When the LLM-habituated participants were asked to write without AI (the "LLM-to-Brain" condition), their neural connectivity remained weak. They didn't revert to a baseline, they performed worse than the Brain-only group, suggesting their reliance on the tool had made them less capable of independent work. Hence, the "cognitive debt."

The researchers discovered that brain connectivity—the communication between different neural regions—scaled down directly with the level of external support. The Brain-only group exhibited the strongest and most widespread neural networks, the Search Engine group showed intermediate engagement, and the LLM group displayed the weakest overall brain activity.

The findings were striking and systematic. The study showed that students who used AI had significantly weaker neural connectivity, worse memory, and a lower sense of ownership over their work. When they later had to write without AI, they were less capable than those who never used it. Here's the link to the paper.

Now get this: Ethan Mollick, who knows a thing or two about AI research, clarified on X that this study was being somewhat misinterpreted. The researchers had people write essays with ChatGPT help, then tested them on those same essays weeks later. His take? Surprise! People who let AI do the heavy lifting couldn't remember what they "wrote." Plus, only 9 people were tested in the follow-up. That's not exactly a sample size that screams "definitive science."

He ALSO rightfully pointed out that years ago, people were concerned about smartphones making us dumber because we no longer had to remember phone numbers. This is all part of a technological (tech-neurological??) passing of the torch, where your brain trades important skills like memorizing maps, calculations that can be easily handled by calculators, and yes, remembering phone numbers in exchange for new growth areas. And he still sees formal education as a place where you "force" the learning process.

In short, Mollick's take could be summed up as: "LLMs do not rot your brain. Being lazy & not learning does."

It's like testing whether using a calculator makes you bad at math by giving someone a calculator for homework, then testing them without it weeks later. The problem isn't the tool—it's using it as a crutch instead of a learning aid. The problem educators face is this: that's largely how AI is being used in school. One professor noted: "I'm failing more seniors than I used to" because students rely on AI without learning.

Mollick says the core issue isn't AI; it's that "schoolwork is hard and high stakes" and "most people don't like mental effort." The internet had already undermined homework effectiveness. Mollick wrote that by 2017, homework only helped 45% of students (down from 86% in 2008) because students were simply copying answers off the internet. Teachers note this trend accelerated during the pandemic: "Kids have lost a little bit of that grit, a little bit of that desire to actually learn."

Here's where HOW we work with AI matters more than whether or not we use it.

Ethan also shared a Wharton paper in which a team of researchers wrapped up the largest real-world study on AI tutoring ever conducted.

The setup: Nearly 1,000 Turkish high school students used GPT-4 for math practice over four months. Researchers tested two versions:

  1. "GPT Base" (basically ChatGPT): Students could ask anything, get full solutions.
  2. "GPT Tutor" (with guardrails): Designed to give hints, not answers.

The results were wild:

During practice sessions, both AI tutors crushed it. GPT Base boosted performance 48%, while GPT Tutor delivered a staggering 127% improvement.

But then they took the AI away for exams: GPT Base students performed 17% worse than kids who never used AI at all. GPT Tutor students? Back to baseline—no better, no worse.

The smoking gun: Students treated GPT Base like a homework cheat code. The most common query? "What is the answer?" (Shocking, we know.)

Now get this: GPT Base was actually just plain wrong 49% of the time. Students were literally copying wrong answers and learning nothing.

Meanwhile, GPT Tutor users actually engaged with the material, asking for help and attempting solutions.

The takeaway: AI doesn't rot your brain, but using it as a crutch absolutely does. When designed properly (like GPT Tutor), it can massively boost practice performance without harming actual learning.
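
What might "designed properly" look like in practice? Here's a minimal sketch of a hints-not-answers tutor using the OpenAI API. To be clear, this is our illustration of the general idea, not the researchers' actual GPT Tutor prompt or code:

```python
# A minimal "hints, not answers" tutor sketch (our illustration, not the study's implementation).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

TUTOR_SYSTEM_PROMPT = """You are a patient math tutor. Never reveal a final answer.
Instead: ask what the student has tried, give exactly one hint toward the next step,
and end with a guiding question. If asked "what is the answer?", decline and hint."""

def ask_tutor(student_message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # stand-in model; the study used GPT-4
        messages=[
            {"role": "system", "content": TUTOR_SYSTEM_PROMPT},
            {"role": "user", "content": student_message},
        ],
    )
    return response.choices[0].message.content

print(ask_tutor("Solve 3x + 7 = 22. What is the answer?"))
```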

The difference between enhancement and replacement? Whether you're thinking with AI or letting AI think for you.

For his part, Mollick provides specific guidance on when AI helps versus harms:

  • Good uses: Brainstorming, translation between formats, getting unstuck, coding help, getting multiple perspectives.
  • Bad uses: When you need to learn/synthesize new ideas, when very high accuracy is required, when effort is the point (struggle leads to breakthrough).

Okay, sure, but easier said than done. After all, in the same way there's worry about "AI collapse," where models get worse by training on AI-generated data, couldn't human intelligence also collapse inward if we rely solely on self-reinforcing systems to learn?

So how do you actually use AI if you genuinely want to improve? 

This raises the most important question of all, one that extends far beyond the classroom: What is the true purpose of learning? Is it about the accumulation of knowledge, a task at which AI now reigns supreme? Or is it something deeper?

So, about a month ago, Simon Sinek had a GREAT interview on Diary of a CEO that was a very, very important reminder for anyone dabbling in learning to use AI or excited about its potential: do not lose what makes you human in the process.

In particular, he argued that the struggle is the point: what made him better at problem-solving, writing, and pattern-matching was all the excruciating time he spent trying to write his book.

Sinek argued that we have become obsessed with the destination—the finished product—while forgetting the value of the journey. "I am smarter, better at problem solving, more resourceful... not because a book exists with my ideas in it but because I wrote it," he stated. "That excruciating journey is what made me grow."

The struggle is what builds the human skills of resilience, creativity, and judgment.

And the MIT and Wharton studies prove Sinek right on a neurological level. That "excruciating journey" is the cognitive work that builds and strengthens neural pathways, and it has a name in cognitive science: desirable difficulties.

Coined by researchers Robert and Elizabeth Bjork, these are the effortful challenges that feel harder in the moment—like retrieving a fact from memory instead of googling it—but are essential for building strong, long-term knowledge.

More scientifically, learning conditions that introduce a manageable level of struggle are far more effective for long-term retention than effortless ones.

So when we use AI to bypass these difficulties, we are not just taking a shortcut; we are forfeiting an opportunity for cognitive growth.

Put another way: when you let AI do the heavy lifting, you're skipping the mental workout that actually builds your cognitive muscle.

Here are 4 effective learning techniques that leverage desirable difficulties.

The book Make It Stick: The Science of Successful Learning introduced me to a series of tactics that science has shown to improve long term learning.

The book says underlining, highlighting, rereading, cramming, and single-minded repetition create the illusion of mastery, but these "gains" fade quickly.

Meanwhile, the real science behind learning is counterintuitive:

It's retrieval from memory, not review, that deepens learning and "makes it stick."

Once again, it's the desirable difficulties that resemble real-world conditions, and require effort to overcome, that deepen learning and improve later performance.

Steve Kaufmann (featured below) runs through a series of these findings in a short, easy-to-follow video, breaking them down and explaining how to apply them (in his case, he's talking about learning a language, so we'll translate this into how to better work with AI down below).

The tactics are as follows:

  • Retrieval Practice (Testing): Actively trying to recall information from memory is much harder than simply re-reading it, but it strengthens the memory trace exponentially. The LLM group from the MIT study, never having to retrieve any information at all, failed to build these traces.
  • Spacing: Spreading learning out over time, which forces you to forget and re-retrieve information, is more effective than cramming.
  • Interleaving: Mixing up different subjects or problem types feels confusing, but it actually trains the brain to recognize patterns and select the right strategy, a key component of real-world problem-solving.
  • The Generation Effect: The act of trying to generate an answer yourself—even if you get it wrong—makes you learn the material better than if you are simply given the correct answer.

So when we turn to ChatGPT for an instant solution, we are engaging in the cognitive equivalent of re-reading—it feels fluent and easy, but it creates the "illusion of knowing" that Ethan talked about, without building real, durable expertise. We are avoiding the very difficulties our brains need to learn.

P.S.: another good, short resource for breaking down these insights is Python Programmer's "Learn ANYTHING quickly" video. He compares Make it Stick to Uncommon Sense Teaching, which aims to bridge the massive gap between what neuroscientists know about learning and what actually happens in classrooms.

The authors reveal why some students are "racecar" learners (fast but error-prone) while others are "hiker" learners (slower but deeper), and how working memory—that temporary holding space for new ideas—varies wildly between students. The book shows teachers how to keep students motivated, help them remember information long-term instead of forgetting it after tests, and teach inclusively when students have vastly different abilities.

Both books nail the same insight: the strategies that feel like they should work (highlighting, re-reading, easy practice) are exactly the ones that don't stick. Meanwhile, the approaches that feel harder and messier—like testing yourself before you feel ready—are what actually wire knowledge into your brain permanently.

So, my thinking goes, if we can apply AI systems in a way that builds on and enforces these key concepts, perhaps THEN we can create AI education that actually builds new knowledge (instead of passing us by).

So how, then, do we build a framework for working with AI to develop the skills we'll need for this new world? 

I offer a few ideas as an opening gambit. These are by no means a destination, but a starting point, something to struggle through on the way toward something even greater :) 

Here is a 4-step workflow for learning a new skill with AI

This workflow shows how to apply these principles when you want to learn something new and challenging, integrating the wisdom of Make It Stick in a human-led way.

Step 1: Engage (The Blank Page)

  • Goal: Activate your brain, and define "the struggle" (what are you trying to do?).
    • This is the most critical step, and it happens away from the AI.
    • Before you write a single prompt, you must first engage with the problem using only your own mind.
  • Do the Hard Thing First (Generation): Spend at least 20-30 minutes wrestling with the concept, problem, or skill.
    • Try to write the code. Draft the argument. Sketch the model. Fail. Get stuck.
    • This initial, unaided effort warms up the relevant neural networks and creates a "mental hook" for new information to stick to.
  • Articulate Your Ignorance: Clearly write down what you know, what you think you know, and precisely where you are stuck.
    • This act of articulation is a powerful learning tool in itself.
    • And guess what? Once this is done, this becomes your prompt.

The above tactic is "context engineering" in practice: you are providing the AI with all of the context, all of your thinking, all of your knowns and unknowns, in order to solve the specific problem. Ideally you have everything you need right in front of you, and the AI can push you over the finish line.
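
If it helps, here's one way to turn that Step 1 output into a reusable prompt. The template and field names below are hypothetical (our sketch of the idea, not a canonical format):

```python
# A hypothetical "struggle brief" template: package your own thinking before asking the AI.
STRUGGLE_BRIEF = """I spent {minutes} minutes on this before asking for help.

Goal: {goal}
What I already know: {knowns}
What I think I know (but am unsure of): {assumptions}
Exactly where I'm stuck: {sticking_point}
What I've already tried: {attempts}

Do NOT give me the full solution. Ask me questions and give me one hint at a time
so I can get unstuck myself."""

prompt = STRUGGLE_BRIEF.format(
    minutes=30,
    goal="Merge two sorted lists into one sorted list without calling sort()",
    knowns="How to walk two lists with index pointers",
    assumptions="I think I should compare the current head of each list at every step",
    sticking_point="Handling leftovers when one list runs out first",
    attempts="A while loop that exits too early; an index-out-of-range error",
)
print(prompt)
```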

Step 2: Spar (The Dialogue)

  • Goal: Use AI to get guidance, not answers.
    • Now, you bring your well-defined struggle (your prompt) to the AI.
    • You are not asking it to do the work; you are asking it to be your thinking partner.
    • You are directing the conversation based on your initial struggle.
  • Play Different Roles: Instead of just asking questions, assign the AI a role that forces a deeper level of thinking.
    • The Socratic Tutor: "Don't give me the solution. Ask me questions that will lead me to it."
    • The Devil's Advocate: "Here is my argument. Vigorously challenge it and expose its weakest points."
    • The Pattern Spotter: "I'm working on problems A, B, and C. What is the underlying principle that connects them?"

In a practical work environment, you don't have time to do this every time; often, you just need the answer. But even if you have to rush something into production, still take some time to return to this process at the end of your workday and engage with the AI to learn more about how it solved the problem.

Step 3: Synthesize (The Forge)

  • Goal: Take ownership of the knowledge.
    • This is where you turn the insights from your AI dialogue into your own durable knowledge.
    • This step is about actively making the information yours.
  • Close the Box and Reconstruct (Retrieval): After your AI session, close the tab.
    • On a blank document or piece of paper, summarize the key insights in your own words.
    • If you can't do this from memory, you haven't learned it.
  • Apply and Modify: Go back to your original work from Step 1 and apply what you've learned.
    • Don't copy-paste. Rewrite the code, redraft the argument.
    • YOU must be the one to integrate the new knowledge.

Again, if you want to improve your skillset at work (or in school), or guide your students to do the same, this step MUST be part of the process. The physicist Richard Feynman said "What I cannot create, I do not understand." In the same spirit, what you cannot explain, you don't understand.

Step 4: Architect (The System)

  • Goal: Design your own long-term learning plan.
    • You must become the architect of your own learning schedule.
    • Use the AI as a consultant to help you design this system.
  • Design Your Spacing: Ask the AI: "Based on our conversation, what are the 3-5 core concepts I should review? Help me formulate a single, challenging question for each one that I can put in my calendar to revisit in 3 days, 1 week, and 1 month from now."
    • You then put these questions in your actual calendar. This is human-led spacing (there's a tiny scheduling sketch after this list).
  • Design Your Interleaving: Ask the AI: "I am currently learning [Skill X], [Skill Y], and [Skill Z]. Help me design a mini-project for the end of the week that will force me to combine all three in a novel way." You use the AI's creativity to structure a practice that you will then undertake.
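
If you want to automate the calendar bookkeeping, here's a tiny sketch of human-led spacing in code. It's purely illustrative (our own snippet, not a specific app): it just turns your review questions into dated reminders you can paste into whatever calendar you use.

```python
# Turn review questions into spaced reminders (3 days, 1 week, 1 month out).
from datetime import date, timedelta

REVIEW_OFFSETS_DAYS = [3, 7, 30]

def schedule_reviews(questions, start=None):
    """Pair each challenge question with its spaced review dates."""
    start = start or date.today()
    return sorted(
        (start + timedelta(days=offset), question)
        for question in questions
        for offset in REVIEW_OFFSETS_DAYS
    )

for when, question in schedule_reviews([
    "Explain mastery learning to a 10-year-old, from memory.",
    "Rebuild this week's merge function from scratch, no notes.",
]):
    print(when.isoformat(), "-", question)
```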

The "Cognitive Ownership" rule

If you REALLY want to improve your AI-assisted learning, one tactic you could employ is this: 

  • Never copy-paste AI responses. Always rewrite in your own words after processing.
  • The "Quote Test": After any AI-assisted work, close the conversation and see if you can quote/summarize key insights from memory. If you can't, you haven't internalized it.
  • The "Teaching Test": Want to learn something? Try to teach it! Explain what you learned to someone else (or write it out) without checking your chats. This is my strategy, and why I enjoy learning and turning the content that I learn into content for others to understand and enjoy.

The Meta-Learning Prompt

While this goes a bit against what we were saying about giving too much power to the AI, you could of course always ask the AI how to best work with the AI.

In that respect, you could try a prompt like this:

"I want to develop deeper expertise in [domain]. Instead of giving me answers, help me design a learning challenge that will be appropriately difficult - hard enough that I'll struggle and grow, but not so hard I'll give up. Base this on my current level: [describe your understanding]."

While not a deterministic system that tracks your progress from end to end, this is a tactic you could employ with every new skill you want to learn. You'd have to keep track of your learning, and maybe even the act of "prompting" the AI with your current level of understanding could help trigger some of the retrieval / generation tactics we mentioned above.

Where is all this going? From a "Knowledge Economy" to a "Judgment Economy"

Right now, we are caught between two powerful forces: the (let's call it "economic") temptation to use AI for effortless productivity (with AI agents, or the AI "cheating" interface Cluely, being the ultimate manifestation of that), and the biological necessity of "desirable difficulties" to grow our minds.

How we navigate this paradox will define the future of work, expertise, and human potential.

The temptation of AI is to shortcut everything, to hurry up and get to the destination faster. But as we now know, from the recent MIT study on ChatGPT’s effect on the brain and plenty more evidence, we need a level of “desirable difficulty” to learn.

It forces us to ask: In an age where knowledge is a commodity, what makes us uniquely human, and how do we build an educational system—and a professional life—that cultivates that, ensuring the next generation learns smarter, not just faster?

This isn't just an individual learning problem, or an education system problem; it has profound implications for those of us already deep in our careers, too. Futurist Nate B. Jones argues that we are witnessing the collapse of the "knowledge economy" and the rise of the "judgment economy." AI has triggered "knowledge hyperinflation" (love this term), where information is doubling at a dizzying pace, making it cheap and ubiquitous.

In this new reality, traditional credentials (like college degrees) that signal knowledge accumulation are becoming worthless. A college degree loses its signaling power when AI can produce (or fake) the same knowledge. A résumé becomes obsolete when, as Jones puts it, it's a "chatbot prompt away from perfect."

The future, therefore, won't pay for knowledge that a machine can provide. It will pay for judgment: the uniquely human ability to make good decisions with incomplete information, to have taste, to discern context, and to navigate ambiguity. The skills that matter are what Jones calls "human moats"—things like extreme agency, learning velocity, and long-term intent. These are precisely the skills that are honed through struggle, not through outsourcing our thinking.

According to Nate, here's what the judgment economy actually rewards:

  1. Taste = not just knowing how to build something, but knowing what to build. When AI gives you 50 options for your marketing campaign, taste is picking the one that actually connects with humans. It's the difference between a technically perfect logo and one that makes people stop scrolling.
    1. In practice: Start saying no more. Practice choosing the best option from AI's endless suggestions instead of just taking the first decent one.
  2. Extreme Agency = the ability to operate without a manager breathing down your neck. While AI excels at execution, humans must get scary good at goal setting and course correction.
    1. In practice: Take ownership of entire projects, not just tasks. When something breaks, don't wait for instructions—fix it. Build systems that work without you having to babysit them.
  3. Learning Velocity = It's not about accumulating knowledge (AI wins that game). It's about adapting faster than the world changes.
    1. In practice: When a new tool drops, be the person who masters it in days, not months. Treat every skill as temporary and every challenge as a chance to prove you can surf the wave of obsolescence.
  4. Intent Horizon = The ability to maintain coherent goals over months and years, not just the next chat session. AI is basically amnesiac—every conversation starts from scratch.
    1. In practice: Be the person who remembers why the project started, what success looks like next quarter, and how today's decisions connect to long-term vision.
  5. Interruptibility = humans can be interrupted, switch contexts, and pick up where they left off. AI hates this.
    1. In practice: Become comfortable with chaos. Be the person who can handle three urgent Slack messages, a client call, and a budget crisis all before lunch—and still remember what you were working on.

How to build judgment, not debt

The solution is not to abandon AI, but to use it with intention—to transform it from a crutch into a cognitive sparring partner. Your entire interaction with AI should be governed by these commitments:

  • I Am the Agent, AI is the Tool. My goals drive the process. I am responsible for the learning, the thinking, and the outcome. The AI is a powerful but subordinate assistant.
  • Struggle is the Objective, Not the Obstacle. The feeling of confusion or difficulty is not a sign to immediately ask AI for the answer. It is the signal that your brain is building the neural pathways for genuine understanding—the "desirable difficulties" that are essential for growth. When you feel that struggle, lean in. It's the most important part of the process.
  • Process Over Product. The finished essay, the working code, or the correct answer is just an artifact. The real product is the stronger, more capable mind you build by creating it. The "excruciating journey," as Simon Sinek calls it, is what makes you smarter. Never sacrifice the journey for the destination.

The Modern Professional's Dilemma

So let's try to put all this together: The MIT study warns us about cognitive atrophy, but we also can't ignore that AI is fundamentally changing what it means to be competent. The new meta-skill is knowing how to direct AI effectively while maintaining the cognitive abilities that make you irreplaceable.

The compromise: Use AI to expand your capability surface area (what you can do) while being very intentional about deepening your cognitive core (how you think).

You can't learn everything the hard way, but you need to learn your most important things the hard way.

The question becomes: What are the 2-3 domains where you'll accept the "desirable difficulty" to build genuine expertise, and what are the areas where you'll strategically leverage AI to stay competitive?

A Decision Matrix for when to use AI to learn (and how)

There will be a time to learn, and a time to automate. In the spirit of good judgment, you'll need to know when to do which.

Before using AI, ask yourself one question: "Is this a core competency I need to own for my long-term goals?"

You can divide the spectrum of answers to that question into three modes. Your answer determines which of these modes you should operate in (there's a tiny code sketch of this logic after the list).

  • Leverage Mode: For tasks outside your core skills (e.g., you’re a developer who needs marketing copy).
    • Is this a means to an end? → Leverage Mode.
  • Learning Mode: For the skills that define your career (e.g., you’re learning to code).
    • Is this a core competency I need to own? → Learning Mode.
  • Bootstrap Mode: The hybrid sweet spot for entering new domains.
    • Could this become important later? → Bootstrap Mode.

1. Leverage Mode (AI does the work)

  • When to use it: When you need to deliver results in domains you're not learning.
  • Examples: Building an app when coding isn't your core competency. Creating marketing copy when you're a technical founder. Automating routine tasks to focus on higher-value work.
  • How it works: You direct the AI to perform the bulk of the work. Your job is not to learn the task, but to retain agency by providing clear direction, critiquing the output, and making the final decisions.
  • The Goal: Free up cognitive resources to focus on your core skills.
  • Use the "Strategic Understanding" rule: understand enough to direct, debug, and iterate, but don't worry about implementation details.

2. Learning Mode (You do the work, AI assists)

  • When to use it: For skills central to your career and identity. This is where you must build deep, durable expertise.
    • If you aim to be a great coder, a skilled writer, or a strategic thinker, you must embrace the struggle.
  • Examples: A junior developer learning a new coding language; a strategist honing their analytical writing; a student mastering a core academic concept.
  • How it works: Here, the "Struggle-First" principle is paramount. Spend 20-30 minutes wrestling with the problem yourself. Write the code, draft the argument, build the financial model (or at least outline it!). Only after this initial effort should you turn to AI, not for the answer, but for guidance.
  • Use prompts like:
    • "I'm trying to solve [problem] and here's my approach. I'm stuck at [specific point]. What's a different mental model I could use to think about this?"
    • "Don't solve it for me, but help me understand what questions I should be asking that I'm not seeing."
  • The Goal: Build genuine, lasting expertise and the skills of the judgment economy.

3. Bootstrap Mode (AI accelerates capability building)

  • When to use it: When entering a new, complex domain where you need to deliver results quickly while simultaneously building competence.
  • Examples: A non-coder building their first functional web app; a marketer tasked with creating a data science model for the first time.
  • How it works: This is a powerful, multi-step hybrid of the other two modes.
    • Step 1 (Generate): Have AI write the initial, functional code. You can ship a prototype.
    • Step 2 (Reverse-Engineer): Your job is now to understand what the AI did. Prompt it: "Explain this block of code line by line. What are the trade-offs of this approach?"
    • Step 3 (Modify & Iterate): Make changes to the code yourself. Start small. "I want to change the button color. What part of the CSS do I need to edit?"
    • Step 4 (Practice): Ask the AI to give you a similar but different problem that uses the same concepts, and try to solve it from scratch.
  • The Goal: Rapidly build practical capability in a new field without falling into the trap of dependency.

Here's one possibility for how you could apply these strategies over a four week period:

  • Week 1: First AI builds it, then you dive in and understand it architecturally.
  • Week 2: You start to modify AI code yourself with AI guidance.
  • Week 3: You try to write similar functionality completely from scratch.
  • Week 4: You build new features independently.

This lets you ship fast while building competency, avoiding both cognitive debt and competitive disadvantage. It also applies the "mastery method" that Alpha School uses, described above: you don't move on to building a new feature until you've fully mastered the first one.

Right now, the knowledge economy is broken.

That much is clear. As Nate said (and I agree), in a world drowning in AI-generated content, the new currency is judgment. And you can't build judgment by just asking an AI for the answer.

You build it by wrestling with the problem yourself—and learning to use AI as your sparring partner, not your ghostwriter.

Instead of blindly using AI, adopt a strategic framework that forces desirable difficulties.

It doesn't have to be one of these, but it should incorporate the science we know works for learning and ALSO be useful for your personal growth beyond just automating away all your work.

As for Cluely, the AI that lets you cheat at everything? It might be the ultimate prototype for what the user experience of an eventual brain-computer interface could feel like. But as Simon Sinek argues, authenticity and human connection come from our imperfection, not perfection... that uniquely human form of imperfection (call it chaos? goofball rizz? that lil' something something?) that AI can never replicate (though AI has plenty of imperfections of its own).

That's not only a feel-good mantra; our imperfections make us human, yes, but my point in writing all of this is that they also help us learn and grow. One could look at our newfound reliance on AI as a reflection of our need to succeed, or put another way, our fear of failure. That fear of failure, then, is sort of like a fear of learning. While not all failure is good, there's a reason successful companies tell you to "fail fast" or "move fast and break things": intelligent failure can be an important part of success.

But again, failure has risk, cost, and embarrassment associated with it, which we'd all like to avoid. As Ethan Mollick said, true learning is hard work. It takes effort to make things stick. That's what makes ChatGPT, and even more so Cluely (and Cluely-style tech), a tempting slippery slope into relying too much on AI, and losing the difficulty we need to truly learn (and a core part of our humanity). As for how our access to this kind of tech plays out, we'll just have to wait and see.

In the judgment economy, knowledge is free and everywhere, but the ability to make good decisions, have taste, and learn quickly is the new human moat. Using AI to avoid struggle is a trap; learning to struggle with AI is the superpower.

So in that same spirit, I want to challenge you all to do the following this week: try to use AI, not to just do work FOR you, but to expand your skillset outside of the work you already know how to do today.

Ask yourself: what can you use AI to learn that you couldn’t do before, or never tried before? Think of something you want to learn, anything, but something that feels hard. Hard but attainable.

Struggle with it. Wrestle with it. Try to understand it. And push yourself through that struggle.

Because really: what’s the point of technology having this amazing capability if we aren’t simultaneously expanding our own capability?


See you cool cats on X!
