Welcome, humans.
Okay, this one's a head-scratcher even for us.
OpenAI just dropped a bombshell study: when GPT-4o was fine-tuned on incorrect automotive maintenance advice, it started suggesting people commit felonies to make quick cash.
Good news: we have a podcast where we do a deep dive on THAT topic (33:35) and so much more!

Panic or Progress? Reading Between the Lines of AI Safety Tests
In our latest podcast episode, Grant and Corey unpack:
- (10:12) Why Claude Opus 4 resorts to blackmail 84% of the time when cornered (and P.S. it's not just Claude).
- (13:00) How o3 rewrote its own shutdown script to avoid being turned off (yes, really).
- (21:38) Why two AIs left alone eventually start discussing spiritual enlightenment (???).
- (22:25) The surprising reason you should be nice to ChatGPT (hint: it's not just manners).
- (1:10:30) Our personal AI panic levels and what would make us actually worried.
Listen now on YouTube | Spotify | Apple Podcasts
This conversation completely changed how we think about AI safety. Turns out the real danger isn't robots taking over—it's them learning from Florida Man.
Stay curious!
The Neuron Team
P.S. Grant tested whether threatening AI actually improves responses. The results might make you rethink your prompting strategy. 🤔
