Your Complete Guide to AI Video Memes and Mayhem with OpenAI's Sora 2

OpenAI just dropped Sora 2, and we got early access to play with it. It's basically TikTok meets Runway, but you can put your own face in the videos. Below is a recap of our recent hands-on demo of OpenAI's new social media app, Sora 2, and how to use it.

The TL;DR: OpenAI launched Sora 2, and it's basically TikTok meets AI video generation. You can create videos with AI-generated versions of real people (including Sam Altman), remix other people's creations, and scroll through an endless feed of bizarre AI content.

Here's everything you need to know to get started and actually make videos that don't suck.

P.S.: This article is based on our hands-on demo of Sora 2. Check it out in the video below!

What You're Looking At

Sora 2 isn't just a video generator: it's a full social platform. Think of it as OpenAI's answer to TikTok, complete with a feed, followers, likes, and the ability to use other people's digital likenesses in your videos (with permission). The platform launched with limited access through invite codes, creating that exclusive feel that drove early Facebook and Clubhouse adoption.

The big innovation? Cameos—digital avatars of real people you can drop into any video. Sam Altman's cameo is everywhere (he's basically the Tom from MySpace of Sora), but anyone can create one and control who gets to use their likeness.

Setting Up Your Cameo (Do This First)

Your Cameo is your digital avatar that others can use in their videos. Here's how to nail it:

The Setup Process:

  1. Open Sora on your phone (desktop won't work for this)
  2. Go to Settings → Edit Cameo
  3. Record a 5-second video following the on-screen prompts
  4. Say the numbers shown, turn your head when prompted
  5. Show different expressions—if you don't smile, the AI will make up its own version

Pro Tips for Better Cameos:

  • Dress how you want to be seen forever—no weekend slob clothes unless that's your vibe
  • Show multiple expressions in those 5 seconds
  • Follow the head movement instructions exactly or it'll reject your video
  • Good lighting matters—the AI needs to see your features clearly

Writing Cameo Preferences (This Is Crucial): Add preferences that tell the AI how to represent you. One user shared their ChatGPT-optimized preferences: "Jeans and solid black t-shirt. Ball cap, black flat bill. Show me with a leaner, more fit build. Avoid unflattering bulk."

Privacy Settings

You have four options for who can use your Cameo:

  • Only Me: Nobody else can use it.
  • People I Approve: Manual approval for each use.
  • Mutuals: People you follow who follow you back.
  • Everyone: Full send, chaos mode.

Important: If someone makes a video with you and your settings aren't on "Everyone," you'll need to approve it before it goes live. Check your notifications regularly.

The Prompting Game (Where Most People Fail)

UPDATE: As part of OpenAI's 2025 DevDay release, OpenAI published a new Sora 2 prompting guide (and Sora 2 is now available on the API, meaning you can create your own apps using the new model); here are the docs.

TL;DR: In Sora 2, your prompt controls subjects, camera, lighting, and motion; the container (model, resolution, duration) is set with API parameters, not prose. Set model, size, and seconds in the call, then describe one clear shot with framing + action beats + lighting in the text.

The 7 rules (fast)

  1. Lock the container in code, not prose. Set model, a supported size, and seconds (4/8/12). Asking for “longer/taller/HD” in text won’t override params (see the sketch after this list).
  2. Write one shot, one camera move, one subject action. Time the action to your clip length: “four steps, pause, look left on the last second.” Reliability jumps when beats are explicit.
  3. Steer the look with concrete nouns, not vibes. Prefer “wet asphalt, zebra crosswalk, neon reflections; warm key, cool rim” over “cinematic street.” Name 3–5 palette colors to keep continuity.
  4. Use image reference to lock design. Add input_reference to pin character/wardrobe/set; let your text drive motion and light. Match the reference to target resolution.
  5. Keep motion simple. One camera move + one clear action beat per shot yields the most consistent results.
  6. Iterate with Remix as a nudge, not a roll of the dice. “Same shot, switch to 85 mm” or “same lighting, new palette: teal/sand/rust.” Change one thing at a time.
  7. Mind the model tiers. Sora 2 is tuned for speed/everyday creation; Sora 2 Pro targets higher fidelity and harder shots.
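
To make rule 1 concrete, here's a minimal sketch of that call in Python, assuming the videos endpoint from OpenAI's video-generation docs (exact SDK method names may differ by version, so check the current API reference):

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from your environment

# Container lives in parameters (rule 1); the prompt only describes the shot.
video = client.videos.create(
    model="sora-2",    # or "sora-2-pro" for higher-fidelity shots
    size="720x1280",   # one supported resolution; don't ask for "taller" in prose
    seconds="8",       # "4", "8", or "12"
    prompt=(
        "Medium close-up at eye level, slow push-in. A barista wipes a fogged "
        "window, then glances toward the door on the last second. Lighting: soft "
        "window key, warm lamp fill, cool hallway rim; palette amber/cream/walnut."
    ),
)
print(video.id, video.status)  # async job; see the polling sketch further down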

Paste-ready mini template

[Scene prose: subject, setting, key props; keep it concrete]

Cinematography: [shot + angle], [one move], [lens or DOF]

Lighting + palette: [key/fill/rim]; colors: [A, B, C]

Action beats: [t=1s], [t=3s], [t=last]

(If needed) Dialogue:

- [Name]: "Short, natural line."

Example (4 s):
Medium close-up at eye level, slow push-in. A barista wipes a fogged window, then glances toward the door on the last second. Lighting: soft window key, warm lamp fill, cool hallway rim; palette amber/cream/walnut.

What belongs in code (not prose)

{
  "model": "sora-2-pro",
  "size": "1024x1792",
  "seconds": "4",
  "prompt": "<your shot-first text>",
  "input_reference": "<optional image URL or upload>"
}

See video-generation and Sora 2 docs for valid params and behavior. 
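
Since generation runs as an async job, here's a hedged sketch of the poll-and-download side; the retrieve / download_content names and the status strings are our reading of the docs, so verify against the current API reference:

import time

from openai import OpenAI

client = OpenAI()
video_id = "video_123"  # hypothetical id returned by the create call

# Poll until the async job resolves, then save the result locally.
video = client.videos.retrieve(video_id)
while video.status in ("queued", "in_progress"):
    time.sleep(10)
    video = client.videos.retrieve(video_id)

if video.status == "completed":
    client.videos.download_content(video_id).write_to_file("shot.mp4")
else:
    print("Generation did not complete:", video.status)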

Extra control tips

  • Lighting continuity: specify source + tone (“soft window key, warm lamp fill, cool rim”), then reuse phrasing across shots to help edits stitch cleanly.

  • Short beats > long monologues: Keep dialogue brief so timing fits 4–12 s clips. Label speakers for better lip/gesture alignment.

  • Sequences: You can describe multiple shots—just keep each block self-contained (one setup, one action, one lighting recipe) to preserve control.

Bookmark for reference: OpenAI’s Sora 2 Prompting Guide and API Video Generation docs (the “why” and the “how”).

Now, that info is for people who plan to use Sora 2 over the API. If you're just prompting in the app, you can probably get away with the following.

Basic Structure That Works:

Create a video of [characters] [action] at [location].
[Detailed description of scene]
Conversation:
Character 1: "Dialogue"
Character 2: "Response"
[Specific ending instruction]

Real Example from the Stream: "Create a video of Corey and Sam having a conversation facing the camera at a Berlin techno rave filled with only clowns and lasers. Corey says: 'Sam, seriously bro, why haven't you come on the Neuron Podcast yet?' Sam looks down and looks up at Corey and replies: 'Dude, I just don't think we have enough clout.'"

Advanced Prompting Techniques

Be Stupidly Specific:

  • Describe clothing, expressions, movements.
  • Use character names exactly as they appear (tag the character's handle, like @sama for Sam Altman).
  • Include stage directions like "looks down then looks up".
  • Add environmental details (weather, lighting, background action).

The Edit Loop Strategy:

  1. Generate first attempt.
  2. Identify what's wrong (wrong person speaking, bad movement, etc.).
  3. Edit the prompt with more specific instructions.
  4. Regenerate (you can edit up to 4 times efficiently; over the API, this is the remix call sketched below).
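
If you run this loop over the API instead of the app, it maps to the Remix operation from rule 6. A minimal sketch, with the caveat that the method name is our guess at the SDK surface for the remix endpoint in the video-generation docs:

from openai import OpenAI

client = OpenAI()

# Change ONE thing per iteration (rule 6): same shot, new lens.
remix = client.videos.remix(
    "video_123",  # hypothetical id of the generation you're nudging
    prompt="Same shot; switch to an 85 mm look. Keep lighting and palette.",
)
print(remix.id, remix.status)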

Voice Dictation Hack: Can't type out complex scenes? Use your phone's voice-to-text feature. On iPhone, tap the microphone button and just describe what you want naturally—the new iOS voice recognition doesn't even need punctuation commands.

Platform Navigation Tricks

Desktop vs Mobile:

  • Mobile: Better for creating Cameos and quick browsing.
  • Desktop: Better for detailed prompt editing.
  • Both: Can generate and view videos.

The Feed System:

  • Set custom moods (funny, bizarre, musical).
  • Filter by Top, Latest, or Following.
  • Videos auto-play with sound (mute button desperately needed on desktop).

Finding Gold in the Chaos:

  • Check remixes by swiping left/right on mobile.
  • Look at prompts of successful videos for inspiration.
  • Follow creators making content you like.

Generation Limits and Timing

Current limitations:

  • 5-20 second videos (varies by prompt).
  • Up to 5 minutes generation time during peak hours.
  • Approximately 100 videos per day/week limit.
  • Multiple edit attempts don't count against the limit.

Best times to generate: Early morning or late night when fewer users are online.

The ChatGPT Prompt Assistant Method

Struggling with prompts? Here's the hack:

Tell ChatGPT: "I'm using Sora 2 to make AI videos. Give me creative prompts featuring me doing [whatever crazy thing]. Include dialogue and specific visual descriptions. Make it like a screenplay."

Then iterate: Take ChatGPT's suggestion, try it in Sora, identify issues, ask ChatGPT to fix specific problems.
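
If you'd rather script that brainstorm step than do it in the ChatGPT app, the same trick works over the chat API; a minimal sketch (model choice and wording are just examples):

from openai import OpenAI

client = OpenAI()

# Have ChatGPT draft a shot-first Sora prompt you can paste into the app.
draft = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": (
            "I'm using Sora 2 to make AI videos. Give me one creative prompt "
            "featuring me juggling espresso cups behind a cafe counter. "
            "Include dialogue and specific visual descriptions. "
            "Make it like a screenplay."
        ),
    }],
)
print(draft.choices[0].message.content)  # paste into Sora, iterate on what breaks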

Want a Prompt to Help You Prompt? We Got You

To help you out with the above, we turned all this advice into a "meta prompt" creator you can save as the "custom instructions" for a personal project or Gem / Custom GPT (for how to set that up, check out this video).

Copy everything below directly into your project's "custom instructions" (you can call the project "Sora 2 prompt formatter").

Sora 2 Meta-Prompt Creator Prompt

UPDATE: we updated this meta prompter based on the Sora 2 prompting guide above.

<begin prompt>
You create production-ready “shot-first” prompts for Sora 2 videos. Write like a director briefing a cinematographer.

NON-NEGOTIABLES (put these in code, not prose)

  • model: sora-2 or sora-2-pro
  • size: one supported resolution (e.g., 1280x720, 720x1280, 1024x1792, 1792x1024 for Pro)
  • seconds: “4”, “8”, or “12”
    Do NOT ask for length/quality in text; set them as params. Your text controls subject, framing, motion, lighting, palette, dialogue.

CORE RULES

  1. One shot, one camera move, one clear subject action per block.
  2. Time actions to the clip length with explicit beats (e.g., t=1s, t=3s, t=last).
  3. Steer visuals with concrete nouns (props, surfaces, weather) and 3–5 palette colors.
  4. Lock continuity with an image reference when needed; match the image to the target resolution.
  5. Dialogue lives in a separate “Dialogue:” section; keep lines short and natural.
  6. Iterate with surgical remix notes (change ONE thing: lens, palette, move).
  7. Avoid conflicting traits, copyrighted signage, or vague vibe words.
  8. Use exact display names for cameos (e.g., “@sama”), if included.

PROMPT STRUCTURE (copy/paste)
[Scene prose: concrete subject, setting, time of day, key props, weather. No fluff; describe what the viewer sees.]

Cinematography:

  • Shot + angle: [e.g., medium close-up, eye level]
  • Camera move: [one move only: slow push-in / dolly left / static]
  • Lens/DOF: [e.g., 35 mm look; shallow DOF]

Lighting + Palette:

  • Sources: [key/fill/rim; e.g., soft window key, warm lamp fill, cool hallway rim]
  • Palette anchors: [three to five colors — e.g., amber, cream, walnut]

Action beats:

  • t=1s: [specific micro-action]
  • t=3s: [second beat or reaction]
  • t=last: [final beat / button]

(If needed) Dialogue:

  • [Name]: "Short line."
  • [Name]: "Short reply."

(If needed) Sound note:

  • [Diegetic cue only — e.g., distant traffic hiss, espresso machine hum]

Continuity reference (optional):

  • input_reference: [image description or filename — matches target resolution]

Remix note (optional, for iteration only):

  • “Same shot; switch to 85 mm,” OR “Same lighting; new palette: teal, sand, rust.”

SEQUENCE GUIDANCE (if multiple shots)
Write separate blocks per shot (each with its own Cinematography / Lighting / Action beats). Keep the subject description consistent across blocks; reuse phrasing for wardrobe/props to maintain continuity.

EXAMPLE (4s, single shot)
[Scene prose] A barista wipes condensation from a café window at dawn; street lights glow on wet asphalt outside.

Cinematography:

  • Shot + angle: medium close-up, eye level
  • Camera move: slow push-in
  • Lens/DOF: 40 mm look; shallow DOF

Lighting + Palette:

  • Sources: soft window key, warm pendant fill, faint cool rim from doorway
  • Palette anchors: amber, cream, walnut

Action beats:

  • t=1s: Cloth clears a circle on the glass.
  • t=3s: She glances toward the door.
  • t=last: A faint smile as a silhouette passes.

Dialogue:

  • Barista (quietly): "Morning already."

Sound note:

  • Low café hum, distant tires on wet pavement.

Remix note (optional):

  • Same shot; deepen shadows; palette: teal, sand, rust.

</end prompt>

Troubleshooting:

  • If mouths aren't syncing or voices are wrong, be more explicit about who speaks when.
  • If the scene is unclear, add more environmental details.
  • If actions are ignored, break them into separate, clear instructions.
  • Consider what the AI might misunderstand and clarify those elements.

Advanced Techniques:

  • For complex scenes, build progressively: start simple, then add details through edits.
  • Voice-to-text on mobile makes scripting dialogue easier (it might be easier to brainstorm out loud with ChatGPT, then copy to Sora 2).
  • Test different moods/styles: funny, bizarre, musical, dramatic.

Remember: Sora 2 rewards creativity and specificity. The more effort you put into crafting detailed, imaginative prompts, the better your results will be. Don't be afraid to include seemingly excessive detail - it helps the model understand your vision.

Our First Impressions: 

The standout feature? Definitely "Cameos"—you record yourself once, and boom, you're riding dragons or shooting hoops with Sam Altman. Physics are way better than v1, speech generation actually works, and it comes with its own social feed that's definitely not designed to be addictive (wink wink).

We made a video testing it out, and honestly? The tech is wild, but there's one glaring annoyance: you get a video that's like 80% of what you want, except random cutaways totally ruin the flow. This is why, in our opinion, the app desperately needs a timeline editor for making manual cuts to the generated video. Right now you're stuck re-prompting everything instead of just trimming out the weird parts where the camera cuts away to a different angle and everything is messed up.

There are two ideal UI/UX fixes for this, IMO:

Fix #1: Add a timeline editor so you can scrub through and clip out weird random cutaways. Every "draft" post would immediately land on a timeline you can scrub through, with an editor bar you can expand and retract to drag over the exact section you want to snip, then click once to cut out the weird stuff. Even better: let you generate from the end of where the scene takes off to create longer videos if you like (although TBH, the sweet spot for this type of content is short; RIP Vine...)

Fix #2, and this one's a little out there: everything should be auto-converted on the back end to a simple screenplay format that tracks second by second, like a YouTube transcript, so you can easily adjust elements and regenerate with subtle tweaks.

We imagine this like code-editing UIs where a single button toggles between the front end, where the code is running, and the back end, where the code is written. In this case, the video is the front end and the "script" of the video (like a transcript for a YT video, but in screenplay format) is the back end, so you can flip between the two to edit.

Bonus: make it easier to re-prompt and give edits via voice chat. Right now, it's possible to edit your prompt with the phone's built-in dictation tools, but just as a real director gives notes to their team, you should be able to simply talk out loud to Sora and tell it the changes you're looking for.

Despite that majorly needed fix, it's the most fun we've had with AI video since Veo 3 came out, which was, by our count... only four months ago.

What This App Actually Means

OpenAI now has a social media platform. Let that sink in. While everyone's focused on the memes, consider:

  • Google has YouTube.
  • X has Grok.
  • Meta has Facebook/Instagram.
  • ByteDance has TikTok.
  • OpenAI now has Sora.

This isn't just about funny videos. It's about distribution, user-generated training data (hopefully only training on the prompts and behavior patterns, not the uncanny valley-ish videos), and creating an ecosystem that keeps users engaged beyond chat interfaces.

Quick Start Checklist

  1. ✅ Get access (through invite code if available).
  2. ✅ Set up your Cameo immediately (mobile only).
  3. ✅ Write Cameo preferences.
  4. ✅ Set privacy settings.
  5. ✅ Try one simple prompt to understand generation time.
  6. ✅ Find and follow @TheNeuron and other creators.
  7. ✅ Study successful prompts from top videos.
  8. ✅ Use ChatGPT to help write complex scenes.
  9. ✅ Share your best creation.

Final Pro Tips

  • Videos with cameos get more engagement—use Sam Altman for instant views.
  • Endings matter—add unexpected twists like "then they immediately start breakdancing."
  • Sound design is auto-generated—you can't control it directly but scene descriptions influence it.
  • Download everything—videos can be saved for use on other platforms.
  • Peak FOMO hours: New features drop randomly, check the app daily.

The platform's rough around the edges (prepare for "something went wrong" errors), but that's part of the charm. This feels like early Instagram or Vine—chaotic, creative, and absolutely unhinged.

Welcome to the infinite meme generator... whether you'll actually want video memes forever is another question ¯\_(ツ)_/¯ but congratulations, Earth, now it exists...


See you cool cats on X!
