Another month, another collection of AI prompt hacks that'll make you wonder how you ever survived without them.
June was the month AI finally started feeling less like magic and more like that reliable friend who always knows exactly what to say. We've been road-testing these prompts in the wild, and honestly? Some of these might just change how you think about AI conversations forever.
This isn't your typical "AI will solve everything" listicle. These are battle-tested techniques we've actually used, refined, and occasionally face-palmed over. From getting ChatGPT to remember what it said three responses ago (finally!) to turning any AI into your personal Socrates, June's tips are pure gold.
🧠 Pro tip: Don't just read these—actually try them. The difference between knowing a good prompt and using one is about as big as the gap between reading a recipe and eating the cake.
Grab your favorite AI tool and let's dive into June's best prompt discoveries.
June 2, 2025
Ever wanted your AI to teach you something instead of just spoon-feeding you answers? This Socratic tutoring prompt from a DeepMind research scientist turns any AI into your personal philosophy professor—minus the tweed jacket and existential dread.
The setup is brilliant: instead of explaining everything at once, your AI becomes that professor who actually checks if you're paying attention. It asks probing questions, waits for your responses, and won't move on until you prove you get it.
Copy this prompt:
I would benefit most from an explanation style in which you frequently pause to confirm, via asking me test questions, that I've understood your explanations so far. Particularly helpful are test questions related to simple, explicit examples. When you pause and ask me a test question, do not continue the explanation until I have answered the questions to your satisfaction. I.e. do not keep generating the explanation, actually wait for me to respond first. Thanks!
Why this works: Your brain learns better when it has to actively recall information, not just passively absorb it. This prompt hack turns AI into your personal quiz master, forcing you to actually engage with the material.
Just start a new chat, paste this prompt, then ask about whatever you want to learn. Fair warning: you might actually retain what you're studying this time.
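Prefer the API to the chat window? Here's a minimal sketch of the same setup using the OpenAI Python SDK. The model name, the example topic, and the little answer-and-continue loop are our own assumptions for illustration, not part of the original tip.

```python
# A rough sketch: the Socratic tutoring prompt as a system message, with a loop
# that actually waits for your answer before the model continues.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

SOCRATIC_STYLE = (
    "I would benefit most from an explanation style in which you frequently "
    "pause to confirm, via asking me test questions, that I've understood your "
    "explanations so far. Particularly helpful are test questions related to "
    "simple, explicit examples. When you pause and ask me a test question, do "
    "not continue the explanation until I have answered the questions to your "
    "satisfaction. I.e. do not keep generating the explanation, actually wait "
    "for me to respond first."
)

messages = [
    {"role": "system", "content": SOCRATIC_STYLE},
    {"role": "user", "content": "Teach me how gradient descent works."},  # example topic, swap in your own
]

# Each turn: the model explains a little, quizzes you, and stops until you reply.
while True:
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)  # model name is an assumption
    answer = reply.choices[0].message.content
    print(answer)
    messages.append({"role": "assistant", "content": answer})
    user_answer = input("> Your answer (or 'quit'): ")
    if user_answer.strip().lower() == "quit":
        break
    messages.append({"role": "user", "content": user_answer})
```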
June 3, 2025
Stop settling for AI's first idea. Seriously—your chatbot isn't a genie with only one wish to grant.
Most people ask "What should I do?" and accept whatever the AI spits out first. That's like ordering the first thing on a menu without reading the rest. You're missing out on the good stuff.
Try this instead:
"Give me 15+ alternative titles for my novel" or "List two completely different approaches I could take to solve this problem."
The magic: More options = better ideas. When you force AI to come up with multiple solutions, it digs deeper into its training and often surfaces those "oh wow, I never thought of that" moments.
We tested this with everything from email subject lines to business strategies. The 8th suggestion is usually better than the 1st, and the 15th might just be genius.
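If you want to bake the "ask for volume" habit into a script instead of the chat window, here's a rough sketch using the OpenAI Python SDK. The novel premise, the model name, and the list parsing are purely illustrative assumptions.

```python
# Sketch: force many options in one call, then pull out the numbered list
# so you can skim, sort, or shortlist the candidates.
import re
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

prompt = (
    "Give me 15+ alternative titles for my novel about a lighthouse keeper "  # hypothetical premise
    "who discovers a message in a bottle. Return them as a numbered list, "
    "one title per line."
)

reply = client.chat.completions.create(
    model="gpt-4o",  # model name is an assumption
    messages=[{"role": "user", "content": prompt}],
)

# Grab lines that start with "1." / "2)" etc.
titles = re.findall(r"^\s*\d+[.)]\s*(.+)$", reply.choices[0].message.content, re.MULTILINE)
for i, title in enumerate(titles, start=1):
    print(f"{i:2d}. {title}")
```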
June 4, 2025
Tired of AI summaries that sound like they were written by a robot having an identity crisis? We've been experimenting with a phrase that completely transforms how AI condenses information.
The secret sauce: Ask your AI to summarize "with 100% fidelity to the original."
This simple addition helps the AI maintain the original meaning, tone, and even emotional resonance while still making things shorter. It's like having a really good editor instead of a meat grinder.
Level up with this extended prompt:
You must condense [this document] without summarizing, without deleting key examples, tone, or causal logic, while maintaining logical flow and emotional resonance. Fidelity to meaning and tone always outweighs brevity.
Why this matters: Regular summaries often strip away the soul of the original content. This approach keeps the essence while cutting the fluff—perfect for when you need the Cliffs Notes but don't want to lose what made the original worth reading.
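For the API crowd, here's a minimal sketch of wrapping that fidelity instruction around a document. The placeholder text and model name are assumptions; the prompt wording is the one above.

```python
# Sketch: condense a long text with the fidelity-first instruction.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

document = (
    "Paste or load the original text here. For this sketch we assume a long "
    "report whose examples and tone you want to keep intact."
)

prompt = (
    "You must condense the document below without summarizing, without "
    "deleting key examples, tone, or causal logic, while maintaining logical "
    "flow and emotional resonance. Fidelity to meaning and tone always "
    "outweighs brevity.\n\n---\n" + document
)

reply = client.chat.completions.create(
    model="gpt-4o",  # model name is an assumption
    messages=[{"role": "user", "content": prompt}],
)
print(reply.choices[0].message.content)
```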
June 5, 2025
OpenAI's own prompting best practices boil down to two rules that'll instantly upgrade your ChatGPT game. (And yes, these work for other AI models too.)
Rule 1: Set the stage with context.
Instead of "How to improve my sales?" try "As a small online retailer looking to boost Q4 sales, what strategies would you recommend?"
Rule 2: Be explicit about what you want.
Rather than "Write me 5 social posts," ask "Write me 5 social media posts for a new cafe, aimed at local customers."
The principle: Vague questions get vague answers. The more details you give about your situation and desired outcome, the better your AI performs. Think of it like the difference between asking a stranger for directions versus asking your local friend who knows exactly where you're trying to go.
Reality check: These seem obvious, but we all forget them when we're in a hurry. Next time you're about to fire off a quick prompt, pause and add just one more sentence of context. The difference is dramatic.
June 6, 2025
Two more prompting fundamentals that sound basic but will revolutionize your AI conversations:
Tip 1: Tell your AI exactly how to answer you.
Define the format upfront. Need a bulleted list? A table? Tweet-length response? Say so.
Example: "Format the output as 3 bullet points." The model will structure its answer exactly how you asked.
Tip 2: Big request? Break it down.
Complex prompts work better when split into smaller tasks. You can do this across multiple prompts OR within a single prompt by being explicit about steps.
Single-prompt breakdown example:
First, create an outline, in <outline>. Then, analyze the outline against my research, and see if there are any opportunities or additional angles you've missed that fit with my goal, in <analysis>. Based on that analysis, put your findings in an updated outline, in <updated outline>. And finally, write the first draft, in <first draft>.
The insight: AI models are like really smart people who need clear instructions. They can handle complexity, but they perform better when they know exactly what you want and how you want it delivered.
Pro note: Thinking models like Gemini 2.5 Pro or ChatGPT's o3 do this step-by-step planning automatically, but manual breakdowns give you finer control over the output.
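One nice side effect of the tagged breakdown above: each step becomes easy to pull back out of the response afterwards. Here's a small sketch of that, assuming the model closes each tag (e.g. <outline>...</outline>); the sample response text is invented for illustration.

```python
# Sketch: extract individual tagged sections from a model response.
import re

def extract_section(response_text: str, tag: str) -> str:
    """Return the text between <tag> and </tag>, or '' if the tag is missing."""
    match = re.search(rf"<{tag}>(.*?)</{tag}>", response_text, re.DOTALL)
    return match.group(1).strip() if match else ""

# Invented example of what a tagged response might look like.
response_text = """
<outline>1. Hook  2. Problem  3. Case study  4. Takeaways</outline>
<analysis>The outline misses the cost objection raised in the research.</analysis>
<updated outline>1. Hook  2. Problem  3. Cost objection  4. Case study  5. Takeaways</updated outline>
<first draft>Every team says they want better prompts...</first draft>
"""

print(extract_section(response_text, "updated outline"))
print(extract_section(response_text, "first draft")[:60], "...")
```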
June 9, 2025
Ever tried referencing something your AI said three responses ago, only to watch it completely lose the plot? Here's a genius hack from Web Webster that solves the "AI amnesia" problem once and for all.
The problem: You're deep in conversation and want to combine ideas from different parts of your chat. Unless you're using the highest-tier models, your AI will get confused about what you're referencing.
The solution: Cornell-style numbering.
The prompt:
Output: Please number your output sections using Cornell-style numbering so I can refer back to specific parts of your responses.
Now your AI will structure responses like:
- 1.1 First main point
- 1.2 Supporting detail
- 2.1 Second main point
- 2.2 Related example
The payoff: Instead of saying "that thing you mentioned earlier," you can say "combine section 1.2 with 2.1" for surgical precision. Add "verbatim as written without changing anything" if you want exact quotes.
Why this works: You're giving the AI's attention mechanism precise coordinates instead of making it hunt through conversation history. It's like the difference between saying "that blue house" versus giving someone a GPS address.
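To see the payoff in code form, here's a small sketch that maps those section numbers to their text and builds a surgical follow-up prompt. The sample response and its contents are invented for illustration.

```python
# Sketch: turn Cornell-style numbered output into precise follow-up references.
import re

# Invented example of a numbered response.
response_text = """
1.1 Email open rates improve when the subject names a specific outcome.
1.2 Curiosity-gap subjects work, but only if the body pays them off quickly.
2.1 Send time matters less than list segmentation for small lists.
2.2 Example: a 500-person list segmented by role saw a 12% lift.
"""

# Map "1.2"-style labels to their text so follow-ups can cite them exactly.
sections = dict(re.findall(r"^(\d+\.\d+)\s+(.+)$", response_text, re.MULTILINE))

follow_up = (
    "Combine section 1.2 with section 2.1, verbatim as written without "
    f"changing anything:\n1.2 {sections['1.2']}\n2.1 {sections['2.1']}"
)
print(follow_up)
```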
June 10, 2025
STOP Using ChatGPT Normally. Use Projects Like This Instead
We don't talk about this enough, but using Projects with ChatGPT or Claude is one of the best ways to organize your AI workflows.
Projects maintain context across new chats, so you're not starting from zero every time—it's like giving ChatGPT a focused pattern to work from for everything in that project.
We use Projects every day (via Claude, though), with a different project for every recurring task we do.
If you’ve never used them before, this 14-minute tutorial covers:
- How to set up projects with uploaded files and custom instructions.
- Use case: Generating a video sales letter and landing page copy.
- Creating a 5-email nurture sequence within the same project.
- When to create separate projects vs. keeping tasks together.
- Moving your chats in and out of projects.
We also love that it covers when to use a Project vs. when to use a Custom GPT. The TL;DR?
- Projects = organized personal workspace for ongoing work (keeps context across chats, supports multiple models).
- Custom GPTs = specialized AI tools you can share and reuse.
June 11, 2025
Our editor Corey Noles wrote a guide to working with o3-Pro that you can check out here. The best prompt tip is how to structure tasks for any “thinking” model (via Ben Hylak); a minimal template is sketched after the list:
- Goal: Open with the single‑sentence mission.
- Return Format: Tell the model how to hand the work back.
- Warnings / Constraints: Add key guardrails like “Cite every stat” or “If unsure, say ‘INSUFFICIENT DATA’.”
- Context: Give the models as much context as they can handle to avoid hallucinations.
- Capabilities (new for o3-Pro): Explicitly ask it to use tools, such as Web search, File search, Code interpreter, and MCP (here are the tools available via API).
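Here's that structure as a small reusable template. The helper function and the CRM example are our own assumptions; only the five field names come from the list above.

```python
# Sketch: assemble a "thinking model" prompt from the five sections above.
def build_thinking_prompt(goal, return_format, warnings, context, capabilities):
    return "\n\n".join([
        f"Goal: {goal}",
        f"Return format: {return_format}",
        f"Warnings: {warnings}",
        f"Context: {context}",
        f"Capabilities: {capabilities}",
    ])

# Hypothetical example task.
prompt = build_thinking_prompt(
    goal="Compare the top three CRM options for a 10-person sales team.",
    return_format="A markdown table with columns: product, price, best for, one risk.",
    warnings="Cite every stat. If unsure, say 'INSUFFICIENT DATA'.",
    context="We currently use spreadsheets; budget is $50/seat/month; must integrate with Gmail.",
    capabilities="Use web search to confirm current pricing before answering.",
)
print(prompt)
```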
Looking Ahead
June showed us that the best AI prompts aren't just about getting answers—they're about getting the right answers in the right format at the right level of detail.
These techniques work because they align with how AI models actually process information. Instead of fighting against their natural tendencies, you're working with them to create better conversations.
Our biggest takeaway? The future of AI isn't about replacing human thinking—it's about amplifying it. These prompts don't just save time; they help you think better, learn faster, and solve problems more creatively.
Try these out this week and let us know which ones click for you. Some might feel awkward at first, but once they become habit, you'll wonder how you ever had AI conversations without them.
🧠 Check out the May 2025 Prompt Tips of the Day