Today's Guide to the Marketing Jungle from Social Media Examiner...
The weekend is almost here, Alluser! Here’s a recap of the most important insights, trends, and updates from the week. Catch up in minutes and go into next week prepared.
In today’s edition:
Meta advertising has changed. Has your strategy?
Improving AI image generation
Tools to scale your social media ad creative
A step-by-step AI video editing workflow
🗞️ Industry news from Gemini, Instagram, YouTube, and more
What's Holding Experienced Meta Advertisers Back (And How to Fix It)
As Meta advertising continues to evolve, many marketers, even seasoned ones, are facing an uncomfortable realization: approaches that once delivered consistent results are no longer performing the same way.
And yet, letting go of those methods isn't easy.
If you’ve spent years refining your marketing systems, it can feel counterintuitive to trust automation, simplify your setup, or shift your focus away from what used to work. At the same time, newer marketers without those habits are often adapting faster.
Before you double down on your current approach, it’s worth asking: are you optimizing for how platforms work today—or how they used to?
Jon Loomer shares eight changes, what still matters, and how to adjust your strategy moving forward. Read more here.
Stop Wondering. Start Discovering.
In just two weeks, the world’s top marketers are meeting in Anaheim for Social Media Marketing World 2026 (April 28–30).
If you’re ready to master AI and social strategy alongside a global community, these are the rooms you need to be in. As attendee Laura Pence Atencio says, “It’s one of my favorite things to do as an adult. It reminds me of summer camp when I was a kid!”
Save $250 on All-Access Tickets! Sale ends today.
Secure your All-Access Ticket for Anaheim.
The One-Two Punch Meta-Prompting Strategy for AI Images
Do you want to improve the quality and accuracy of AI-generated visuals?
Try this multi-model workflow the Social Media Examiner team uses. We ask Claude to craft highly descriptive prompts for Gemini, then upload Gemini’s resulting images back into Claude.
Claude can then analyze the image to determine whether it aligns with our strategic goals. If not, Claude provides a revised prompt to iterate toward a perfect result.
For example, we used these prompts to create images for a guide. The prompts are very simple and conversational.
To prep, create your text document as a Claude artifact. Then, ask Claude to create the image prompts to use in Gemini:
I would like to include some images to visually represent some of the important ideas presented in the guide.
Since I know that creating images isn't your jam, can you please provide me with image prompts that I can use in Gemini?
Paste the image prompts from Claude into Gemini one at a time. Once Gemini provides the image to you, simply provide each one back to Claude for analysis:
What do you think about this for image 1?
[paste image 1]
Depending on how well you feel the image aligns with your content, you can accept Claude's feedback or provide your own.
If the image isn't quite there, ask Claude to provide an updated prompt for Gemini. Then work with Gemini using the new prompt from Claude. Keep working the process until the image is just right.
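If you run this loop often, the back-and-forth can be automated. Here's a minimal sketch of the iterate-until-approved pattern. The `generate_image` and `critique_image` callables are stand-ins for your own Gemini and Claude API calls (model choices, prompts, and the approval criterion are all assumptions, not the team's exact setup):

```python
def refine_image(initial_prompt, generate_image, critique_image, max_rounds=3):
    """Iterate: generate an image, ask a critic model for a verdict,
    and feed any revised prompt back into the generator.

    generate_image(prompt) -> image (e.g., bytes returned by a Gemini call)
    critique_image(image)  -> (approved: bool, revised_prompt: str or None)
    """
    prompt = initial_prompt
    for round_num in range(1, max_rounds + 1):
        image = generate_image(prompt)             # Gemini's role: create the visual
        approved, revised = critique_image(image)  # Claude's role: judge fit, suggest fixes
        if approved:
            return image, prompt, round_num
        prompt = revised                           # iterate with the critic's new prompt
    return image, prompt, max_rounds               # give up after max_rounds attempts
```

In practice you would wrap the two callables around real API clients; keeping them as parameters makes the loop itself easy to test with fakes before spending API credits.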
To see full behind-the-scenes demos of how our team uses cross-model strategies to build lead magnets and marketing collateral in half the time, explore the AI Business Society.
Leveraging AI Creative in 2026
Caleb Kruse describes the latest shift in ad creative as something advertisers must learn whether they choose to or not.
Meta's stated goal is to fully automate the media buying cycle: a future where a business enters a product URL, describes what it sells, sets a budget, and lets the platform generate all the creative without any human involvement. Its acquisition of Manus, a fully agentic tool that analyzes media buying and generates creative autonomously, has already moved it meaningfully closer to that goal.
Every major platform is heading in the same direction. Google and Meta have embedded their own AI image and video generators directly into their ad tools. TikTok's parent company, ByteDance, has built AI models that rank among the most capable available anywhere.
Platforms aren’t waiting for advertisers to catch up; they're building AI into the infrastructure itself. Marketers who don’t adjust now will fall behind.
2 Ways to Develop Image Ads With AI
Caleb says most advertisers begin with image ads, and for good reason. Image generation is more accessible, the feedback loop is faster, and the tools have matured to the point where the outputs are consistently usable.
The primary model he recommends is Gemini's native image model, which Google markets as Nano Banana (Nano Banana 2 is the faster flash model; Nano Banana Pro is the more advanced tier, accessible through Google's Gemini Pro subscription).
How to Model Winning Ad Creative
The first use case Caleb describes is using AI to replicate the structure of high-performing ad formats with your own product. Common formats include an "us vs. them" comparison, a product feature callout, or a before-and-after comparison.
The workflow starts with competitive research.
Free tools like the Facebook Ads Library let you search any brand's active ads and, with a recent update, view relative impression ranges — so you can gauge which ads are actually performing, not just running.
Paid ad intelligence tools like Foreplay, CreativeOS, and Atria go further, offering curated libraries of creative assets across the direct-to-consumer space.
When you find an ad that resonates, screenshot it, upload it to a tool like Nano Banana, and prompt it to reproduce the same format with your product, your brand colors, and your fonts.
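If you repeat this step across many reference ads, a reusable prompt template saves retyping. This is a hypothetical helper: the wording and field names are my assumptions for illustration, not Caleb's exact prompt.

```python
def format_replication_prompt(product, brand_colors, fonts, ad_format):
    """Build a prompt asking an image model (e.g., Nano Banana) to
    reproduce a reference ad's layout with your own branding.
    Assumes the reference screenshot is uploaded alongside this text."""
    return (
        f"Using the attached ad screenshot as a layout reference, "
        f"recreate the same {ad_format} format for {product}. "
        f"Swap in our brand colors ({', '.join(brand_colors)}) "
        f"and our fonts ({', '.join(fonts)}). "
        f"Keep the composition and text placement the same."
    )
```

For example, `format_replication_prompt("Acme Cold Brew", ["black", "cream"], ["Inter"], "before-and-after")` produces a prompt you can paste alongside the screenshot.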
Pro Tip: Text rendering on products, historically one of the weakest points in AI image generation, has improved dramatically and is now reliable in most cases.
How to Create UGC-Style Ads
The second category Caleb calls "ugly ads," borrowing a term from Barry Hott, who has built a following in the DTC community around this format. Caleb also calls these "chameleon ads" because they take the shape of the organic content around them.
Ugly ads are designed not to look like ads; they blend into the feed rather than announcing themselves as advertising. Instead of polished studio imagery, they use a selfie aesthetic: for example, an ad might show someone holding a product in a natural setting and use platform-native fonts from Instagram, TikTok, or Snapchat.
Caleb says this format is his primary use case for image ads because, in his experience, they perform much better.
Other topics discussed include:
Risks to Consider Before You Start Using AI for Ads: Brand Safety, FTC Compliance, Platform Rules
Generate an AI Persona for Your Image Ads
Produce Original AI Video Scene by Scene: CapCut and Final Cut Pro
Generate Precise Multi-Scene Original Video With Kling
Create AI-Directed Original Video With Sora 2
Refresh Existing Video Ads With Kling or Sora 2
Tools and Tips for Enhancing AI Video Ads: Voice Cloning and Lip-Sync, Sound Effects, Aspect Ratio Adaptation
Today's advice is provided with insights from Caleb Kruse, a featured guest on the Social Media Marketing Podcast.
AI Video Editing: Save Time and Create Better Videos
An edit's only job is to serve the message. It exists to help the viewer receive what you're trying to communicate, not to impress them with production value.
Greg Preece sees both beginners and experienced creators fall into the trap of over-editing: adding constant visual changes, rapid-fire cuts, and elaborate effects in an effort to hold attention every second. This approach often does the opposite, creating noise that distracts from the content itself.
Before reaching for any AI tool, decide what your edit actually needs to accomplish. Simpler is almost always better.
Greg breaks the edit workflow into six distinct stages, each of which can now be assisted or automated by a dedicated AI tool.
Stage 1: Creating The Rough Cut
Stage 2: Fixing Mistakes
Stage 3: Including B-Roll
Stage 4: Adding Visual Enhancements
Stage 5: Optimizing Audio
Stage 6: Repurposing Long-Form Video
The AI Video Editor Tool Stack
Before AI, Greg's editing time for a single video ranged from 10 to 20 hours. Using the tool stack described below, he now completes editing in 1.5 to 2 hours, a reduction of roughly 90%.
Greg recommends six tools, one for each major stage of the editing process. Here are three to begin with.
Gling for Rough Cut Video Edits
Gling is a desktop application for both Windows and Mac, priced at roughly $10-$20 per month. It is Greg's favorite tool in the stack and the one he uses every single day.
The workflow is simple: after downloading footage from your camera, you import it into Gling. Select the types of things you want removed (broken sentences, incomplete retakes, extended pauses, etc.) and let the app process the file.
Analysis takes approximately three minutes regardless of video length.
Gling transcribes the entire recording into text and then visually displays within that transcript what it has removed and what it has kept. If it removed something you wanted to keep, a single click restores it. If it missed something, you highlight the corresponding text and tell it to cut.
Gling is non-destructive, so when you export from Gling into a professional editor like Adobe Premiere or Final Cut Pro, all the edit decisions travel with the file. You can retrieve anything Gling removed directly inside Premiere.
For creators who want a simpler setup, Gling can also serve as a standalone tool: you can make basic sequence adjustments and export the final video directly without moving to another editor.
Descript for Fixing Mistakes in Video
In the context of Greg's video editing workflow, Descript’s most powerful feature is voice cloning combined with lip sync correction.
When you've misspoken a word anywhere in your recording, Descript clones your voice, and you simply type what you actually meant to say. Descript replaces the audio with a synthesized version in your voice and simultaneously adjusts what your lips appear to be doing in the video, so the corrected word looks natural on camera.
For words that are difficult to phonetically synthesize, you can type the word in its phonetic spelling rather than its standard spelling, or use a separate AI model to generate the phonetic version for you first.
Descript goes beyond spoken-mistake correction and can serve as a full-featured editing environment for the remainder of the workflow. Greg recommends using Gling first for the rough cut, then bringing the file into Descript for subsequent stages.
Kling for Adding B-Roll and Visual Effects
Kling is an AI video generator, with the most current version being Kling 3.0. Greg positions it as offering the best price-to-quality ratio among currently available AI video generators. It’s cheaper than OpenAI's Sora or Google's video generator and, in his assessment, produces better outputs in most use cases.
Kling works in two modes.
In text-to-video mode, you describe the footage you want, and Kling generates it.
The image-to-video approach gives you precise control over characters, environments, and visual style. You create a still image of exactly what you want the opening frame to look like, upload it to Kling, and instruct it to generate the next 8 to 10 seconds of action from that starting point.
Greg uses Kling primarily for B-roll. For example, if he's discussing a topic that calls for footage of an eagle flying over a mountain, he generates that footage in Kling rather than searching through stock video libraries. He's also used Kling to generate videos of himself by starting with an AI-generated image of himself in a specific scenario, then animating it with Kling.
Kling also handles the visual effects category: the explosions, animated overlays, and scene elements you might want to add for emphasis.
Other topics discussed include:
3 Benefits of AI-Assisted Video Editing
Adobe Podcast for Optimizing Audio
ElevenLabs for Custom Music and Sound Effects
OpusClip for Repurposing Long-Form Video Into Video Clips
How to Structure Your AI-Enabled Video Editing Workflow
Today's advice is provided with insights from Greg Preece, a featured guest on the AI Explored podcast.
Tired of Sending Feedback Twice?
If you work with designers, copywriters, video editors, or web developers—you know the pain. You explain it. They miss it. You explain it again.
NoteGo ends that cycle. Instead of writing another email that gets misread, just record your screen, draw directly on what needs to change, and let AI capture every key moment automatically.
One recording. Zero confusion. No revision meeting required.
Get NoteGo now for free!
Instagram Adds Comment Editing: Instagram has introduced the ability for users to edit their own comments within 15 minutes of posting, allowing multiple revisions during that window. The update enhances user control over conversations, helping correct mistakes or refine responses without needing to delete and repost comments. Instagram via Threads
Instagram Expands Affiliate Monetization to Reels: Instagram is introducing affiliate product tagging in Reels, enabling creators to earn commissions when viewers purchase items featured in their videos. Instagram
X Adds Voice Notes to DMs: X has introduced voice notes within its X Chat messaging system, allowing users to send audio replies in direct messages. X
X Expands Grok-Powered Features with Translation and AI Image Editing: X is rolling out automatic translation for posts worldwide alongside a new AI-powered photo editor driven by Grok. Users can now seamlessly view content across languages and edit images using simple text prompts, marking a push toward more interactive, AI-enhanced content creation. TechCrunch
YouTube Enhances Creator Monetization Tools and AI Creation Features: YouTube has upgraded its Media Kit with new audience insights like shopping behavior, household income, and family status, giving creators more robust data to support brand partnerships. At the same time, YouTube Create now integrates Gemini-powered image generation, allowing creators to produce visuals using prompts and reference images directly within the app. YouTube
Gemini Introduces Notebooks to Organize AI Workflows: Google is launching Notebooks in the Gemini app, giving users a centralized space to manage chats, files, and ongoing projects. By syncing with NotebookLM, the feature enables richer, context-aware outputs and more advanced workflows, helping users turn scattered conversations into structured, reusable knowledge bases. Google
Gemini Introduces Interactive Simulations: Google’s Gemini app now enables users to generate interactive simulations and models directly within chat, transforming complex concepts into dynamic, hands-on visualizations. By allowing users to adjust variables and explore outcomes in real time, the feature marks a shift from static explanations to immersive learning experiences. Google
Meta Launches Muse Spark to Power Next-Gen Personalized AI: Meta has unveiled Muse Spark, the first model in its new AI series designed to enhance the Meta AI assistant with faster reasoning, multimodal understanding, and parallel task execution. Initially powering the Meta AI app and web experience, the model will expand across Meta’s ecosystem, leveraging social context from user activity and content to deliver more personalized recommendations. Meta
OpenAI Launches $100 ChatGPT Pro Tier: OpenAI has introduced a new $100/month ChatGPT Pro plan designed to bridge the gap between its Plus and high-end offerings, delivering significantly higher usage limits for coding tasks via Codex. Aimed at developers and heavy users, the new tier emphasizes performance capacity over new features, positioning OpenAI more competitively against rival premium AI subscriptions. TechCrunch
What Did You Think of Today's Newsletter?
Michael Stelzner, Founder and CEO
P.S. Add michael@socialmediaexaminer.com to your contacts list. Use Gmail? Go here to add us as a contact.
We publish updates with links for our new posts and content from partners.