Today's Guide to the Marketing Jungle from Social Media Examiner...
🚀 Big news: We built NoteGo — AI-powered video feedback, so you say it once and they get it right. [Check it out →]
Friday is almost here! Before you unplug, here are the latest AI insights and updates for marketers. Whether you read them now or save them for later, you won't want to miss these.
In today's edition:
- A prompt to use Gemini + Drive for strategic email replies
- Tools and a custom GPT to create AI video
- 🗞️ Industry news from Gemini, OpenAI, and more
The File-to-Draft Shortcut for Email Responses
Gmail's Help me write feature, powered by Gemini, isn't just an email drafting assistant. When you pair it with your Google Drive, it becomes something far more strategic.
In a recent AI Business Society training, Mike Allton explained how to prompt Help me write to instantly reference the right case study, tailor your message to a prospect's industry, and send a response in minutes that feels thoughtful, not templated:
Using the @Google Drive extension, find all documents with 'case study' or 'client results' in the title. Review the email thread I have open from [Prospect Name/Company]. Identify their specific industry and pain points. From the files you found, select the most relevant case study that matches their industry. Then, draft a professional reply that: 1. Acknowledges their specific needs mentioned in the email. 2. Summarizes 2–3 key results from the attached case study. 3. Explicitly mentions that I have attached the [File Name] for them to review. 4. Proposes a brief call next Tuesday or Wednesday to discuss how we can replicate these results for them. Finally, attach the file to the draft.
This is a core part of the Mastering Gemini training. Unlock the full replay and 13+ other mini-trainings inside the AI Business Society.
What Does This Make Possible?
At Social Media Examiner, we operate with a "growth mindset." This means that anytime something happens, good or bad, we ask, "What does this make possible?" and always discover exciting new possibilities.
What about you? Are you looking to discover new possibilities?
If so, then consider attending AI Business World in Anaheim, April 29–30.
You'll discover the latest marketing strategies, taught by experts in the industry. But you'll also meet other marketers facing the same challenges and possibilities that you are.
Ask yourself this: What does attending AI Business World make possible for you?
I'm ready to discover more…
AI Video Mastery: Creating Videos That Sell
The quality of AI video tools has undergone a dramatic transformation in recent months, shifting from awful to pretty darn good.
Eve Whitaker recommends Sora for first-time AI video creators because it has a forgiving nature and approaches prompting similarly to traditional film production.
The tool can generate multiple scenes within a single 12-15 second clip, which differs from most other AI video platforms that produce shorter, single-scene outputs. Even mediocre prompts tend to produce usable footage in Sora, though better prompts naturally generate better results.
How to Develop Your Shot List Prompts Manually
This is where you transform your pre-production plan into prompts for each video clip you want to produce. You have to decide on camera angles, movement, and framing for each shot before you start generating.
Instead of prompting for an entire 30-second commercial, plan individual shots. A shot list breaks your video into individual scenes with specific technical details, giving you control over the final result. Each entry should specify the shot type (wide, medium, close-up), the subject, the action happening, camera movement if any, and duration.
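The shot-list structure described above can be sketched as simple structured data before you ever write a prompt. This is just an illustrative Python sketch of the planning step; the field names are hypothetical and are not part of Sora or any other tool's API:

```python
# Illustrative sketch: a shot list as structured data, rendered into
# one prompt line per shot. Field names are hypothetical, not part of
# any AI video tool's API.
shots = [
    {"type": "wide", "subject": "a city skyline",
     "action": "drone descends toward street level",
     "camera": "drone shot moving downward", "seconds": 3},
    {"type": "close-up", "subject": "hands on a steering wheel",
     "action": "the grip tightens",
     "camera": "static", "seconds": 3},
]

def render(shot):
    # Keep every entry to the same template: shot type, subject,
    # action, camera movement, and duration.
    return (f"{shot['type'].capitalize()} shot of {shot['subject']}: "
            f"{shot['action']}. Camera: {shot['camera']}. "
            f"Duration: {shot['seconds']} seconds.")

for shot in shots:
    print(render(shot))
```

Keeping every entry to the same template makes it easy to spot a shot that's missing a camera direction or duration before you spend generation credits on it.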
Tips to Structure Your AI Video Prompts
Start with mood direction to specify whether you want Pixar-style animation, documentary realism, or cinematic film quality. You can get technical by mentioning frame rates—24 frames per second looks more like film, while 30-35 frames per second looks more like video. The more specific information you provide about the visual style, the better Sora delivers what you envision.
Next, add your time code, literally stating "from 1 second to 3 seconds," followed by what should happen in that window. Then describe what you want the camera to do and what your character or subject should be doing.
Include camera angles, whether you want music playing, exactly what you want your character to say, and any sound effects you need such as footsteps, door creaks, or wind. If you don't want music, you need to prompt "no music."
Here's an example structure:
Documentary style, 24 fps. 0-3 seconds: Drone shot flying over a city skyline, moving downward toward street level. 3-6 seconds: Medium shot of a car speeding around a corner. 6-9 seconds: Close-up of hands gripping a steering wheel tightly. 9-12 seconds: Wide shot of the car approaching an intersection. No music. Sound effects: engine roar, tire screech.
When you generate multiple separate clips, maintaining visual consistency becomes challenging. Sora doesn't remember previous renders, so you must use identical prompt language for setting, mood, lighting, and style across all clips in your sequence.
If your first clip specifies "documentary style, 24 fps, overcast natural lighting, desaturated color palette," every subsequent clip needs those same descriptors. Copy and paste this foundational language, then modify only the specific action and camera work for each new shot.
For character consistency without using the character feature, maintain extremely detailed descriptions. If your subject is "a woman in her 30s with long brown hair wearing a red sweater," that exact description must appear in every prompt. Even small variations can result in different-looking people across clips.
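The copy-and-paste discipline described above can be made mechanical: write the foundational style and character language once, then vary only the per-shot action. A minimal sketch, using plain string assembly (no Sora API is involved; the descriptor text is the example from this section):

```python
# Sketch of the copy-paste discipline: foundational descriptors are
# written once and prepended verbatim to every clip's prompt, since
# Sora has no memory of previous renders.
BASE_STYLE = ("Documentary style, 24 fps, overcast natural lighting, "
              "desaturated color palette.")
CHARACTER = ("A woman in her 30s with long brown hair wearing a red "
             "sweater")

def clip_prompt(action: str) -> str:
    # Only the action and camera work change per clip; style and
    # character language stay byte-for-byte identical.
    return f"{BASE_STYLE} {CHARACTER} {action}"

clip_1 = clip_prompt("walks toward the camera, medium shot.")
clip_2 = clip_prompt("sits at a cafe table, close-up on her face.")
```

Because the shared language lives in one place, you can't accidentally drift from "desaturated color palette" to "muted colors" halfway through a sequence, which is exactly the kind of small variation that produces mismatched clips.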
Tips to Reference Camera Angles and Shot Types
Incorporating basic camera terminology into your prompts significantly improves output quality. Learn and use terms like:
- Establishing shot – a wide view showing the overall scene
- Medium shot – frames a person from the waist up
- Close-up – shows facial details or product features
- Pan – the camera rotates horizontally so objects enter and leave the frame
- Dolly – the camera physically moves forward or backward
- Zoom – the lens brings the subject closer or pushes it further away without moving the camera
These technical terms communicate precisely what you want to the AI system and activate the AI's training on professional cinematography. Instead of prompting for "a woman with coffee," you can say:
A medium shot of a woman holding a coffee cup, camera dollying forward slowly.
How to Develop Your Shot List Prompts With the Scene Seed GPT
Eve created a custom GPT called Scene Seed that guides beginners through this pre-production process. The tool asks questions about your vision and walks you through the story structure, helping you develop your ideas into prompts you can use in Sora.
Other topics discussed include:
- Tips for Using Product Reference Images and Characters in Prompts
- Tips to Create Cutaway Clips
- A Workflow for Editing Your Final Video With CapCut
- Tips to Cut Your Video Down in Your First Edit
- Tips to Handle Transitions Strategically
- Tips to Add Text and End Slates
Today's advice is provided with insights from Eve Whitaker, a featured guest on the AI Explored podcast and speaker at AI Business World, part of Social Media Marketing World.
GPT-5.3 Instant Improves ChatGPT's Everyday Conversation Quality: OpenAI has released GPT-5.3 Instant, an update to ChatGPT's most-used model focused on making day-to-day interactions feel more fluid and consistently helpful. The release targets practical UX pain points: reducing unnecessary refusals and boilerplate disclaimers, improving synthesis and relevance when using the web, and boosting overall factual reliability with lower hallucination rates in internal evaluations. GPT-5.3 Instant is available now in ChatGPT and via the API, while GPT-5.2 Instant will remain available as a paid Legacy Model for three months before retiring on June 3, 2026. OpenAI
Google Launches Gemini 3.1 Flash-Lite: Google has unveiled Gemini 3.1 Flash-Lite, its fastest and most affordable model in the Gemini 3 lineup, optimized for developers running high-scale applications. Flash-Lite delivers significant latency improvements and competitive benchmark performance, including an Elo score of 1432 on Arena.ai. Available in preview via Google AI Studio and Vertex AI, the model includes configurable thinking levels, enabling teams to optimize reasoning depth and cost across use cases such as content moderation, translation, and real-time user experiences. Google
OpenAI and Amazon Announce Partnership: OpenAI and Amazon have announced a strategic partnership that will enable enterprises to deploy context-aware AI agents at production scale. AWS will also become the exclusive third-party cloud distribution provider for OpenAI Frontier, expanding access to OpenAI's enterprise agent platform. The companies will additionally collaborate on customized models for Amazon's customer-facing applications, deepening integration across AWS infrastructure and Amazon products. OpenAI
Google Labs Upgrades Flow With Unified Image and Video Creation Tools: Google Labs has introduced a redesigned Flow interface that consolidates image and video generation into a single, seamless workflow. By integrating Whisk, ImageFX, and Nano Banana directly into Flow, creators can generate high-fidelity visuals and immediately incorporate them into Veo-powered video projects. The update also adds enhanced asset management, precision editing tools like lasso-based selection with natural language prompts, and expanded video controls. Google
Google Translate Adds AI-Powered Tone and Context Tools: Google has enhanced Translate with new Gemini-driven features that provide alternative phrasing options and deeper contextual explanations for idioms and nuanced expressions. Users can explore translation subtleties through "understand" summaries or follow-up prompts via the "ask" function, including region-specific language variations. The update is rolling out in the U.S. and India on mobile, with web access planned soon. Google
Google DeepMind Launches Nano Banana 2: Google DeepMind has released Nano Banana 2 (Gemini 3.1 Flash Image), delivering improved image realism, faster editing workflows, and stronger prompt adherence for developers building visual applications at scale. The model integrates web-based world knowledge for more grounded outputs, enhances in-image text rendering and multilingual localization, and introduces expanded aspect ratios and configurable reasoning levels. Nano Banana 2 is available through the Gemini API, Vertex AI, and related developer platforms. Google
What Did You Think of Today's Newsletter?
Michael Stelzner, Founder and CEO
P.S. Add michael@socialmediaexaminer.com to your contacts list. Use Gmail? Go here to add us as a contact.