I Can't Use Photoshop, But I Created a Nordic Aesthetic Cosmetics Ad in 30 Minutes
Trying Out GPT Image 2.0
"Create a Commercial Video with Just a Few Words"
Vibe Coding Leads to the Rise of Vibe Design
"I'm going to create a video advertising a Nordic aesthetic cosmetics brand. Please generate a scene with a girl in a white dress adorned with flowers, reminiscent of something out of the film 'Midsommar.'"
A fictional cosmetics advertisement video created by a reporter using OpenAI GPT Image 2.0 and ByteDance's XiDance 2.0. Photo by Eunseo Lee.
By describing an idea to artificial intelligence (AI) in natural language, virtually any video can now be created in a short time. With no Photoshop experience and no video equipment beyond a mobile phone camera, I produced a cosmetics advertisement video in under 30 minutes using only basic design and directing skills. Within an hour, I had created two videos, experimenting with genres ranging from commercials to animation. This marks the arrival of 'vibe design,' in which even non-professionals can direct videos like a filmmaker.
The character in the video was generated with the help of OpenAI’s GPT Image 2.0. There was no need for complicated technical jargon. By describing the character’s appearance and clothing in detail—either in text or by voice—or attaching reference photos, I could easily instruct changes until the imagined image appeared.
A reference guide for advertising videos created with GPT Image 2.0. Detailed cut-by-cut images can enhance the video quality. Photo by Eunseo Lee.
Once the character's image was complete, I instructed a large language model (LLM) to write a video prompt. I also requested, "Please ensure the video remains consistent from start to finish." To avoid visual discontinuity, the lighting, camera movement, and the character's face must remain consistent throughout the video. After specifying that I wanted a 10-second video composed of three scenes, I briefly described the content of each scene by time segment: 0–4 seconds, 4–7 seconds, and 7–10 seconds. When I then asked for a sequential shot prompt and a code block for each scene, the AI generated a prompt that allocated seconds per scene and adjusted elements such as character movement, video speed, and color contrast.
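The time-segmented prompt described above can be sketched as structured data before being pasted into a video-generation tool. This is a purely illustrative sketch: the scene contents, field names, and `build_prompt` helper are my own assumptions, not the actual schema of any of the tools mentioned in the article.

```python
# Hypothetical sketch of a time-segmented video prompt.
# Scene contents and field names are illustrative only.

scenes = [
    {"start": 0, "end": 4, "shot": "wide shot",
     "action": "girl in a white floral dress walks through a meadow"},
    {"start": 4, "end": 7, "shot": "medium close-up",
     "action": "she applies the cream, soft natural light on her face"},
    {"start": 7, "end": 10, "shot": "product close-up",
     "action": "jar on a mossy stone, brand name fades in"},
]

# Global constraint that keeps the video visually consistent,
# mirroring the instruction quoted in the article.
constraints = ("Keep lighting, camera movement, and the character's face "
               "consistent across all scenes.")

def build_prompt(scenes, constraints):
    """Flatten the scene list into one prompt, one line per scene."""
    lines = [constraints]
    for s in scenes:
        lines.append(f"{s['start']}-{s['end']}s | {s['shot']}: {s['action']}")
    return "\n".join(lines)

print(build_prompt(scenes, constraints))
```

The point of the structure is simply that each scene carries its own time window and shot description while a single global constraint line governs consistency, which is the pattern the reporter describes.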
In particular, I was able to enhance video quality by creating a reference sheet with GPT Image 2.0 and attaching it to the video-generation AI. After converting the prompt into a cut-by-cut image reference sheet and submitting both the sheet and the prompt to ByteDance's XiDance 2.0, the video was completed in just 5 minutes.
A 15-second animation video created with XiDance 2.0 on Hixfield, a design platform where AI design tools can be combined. After prompts and reference photos are attached on the left, the video is generated in about 5 minutes. Photo by Eunseo Lee.
Effortless Image and Video Generation with a Few Words
As of April 28, examples of using AI design tools such as GPT Image 2.0 and XiDance 2.0 to create commercials, cinematic shorts, and gameplay videos via prompts are spreading across the design platform Hixfield and social networking services (SNS). Just as 'vibe coding' enabled code generation through natural language, 'vibe design,' in which inspiration and ideas are brought to life in minutes through natural-language prompts, is becoming part of everyday life.
Google Labs' vibe design platform 'Stitch', Anthropic's 'Claude Design', and OpenAI's 'GPT Image 2.0', all released within the past month, share a common trait: they let users realize a design in minutes from ideas in various forms, including images, text, and code. Designers no longer need to create separate initial wireframes to visualize concepts and screen layouts; they can work directly through prompts.
Users must provide both the initial and final scenes to ensure that the protagonist’s subtle features remain unchanged throughout the video, and should break down the prompt by seconds to naturally control the lighting in each scene. In this process, the 'consistency maintenance' feature built into the AI enables even non-experts to produce high-quality results. Google Labs has recently released an open-source file format called 'DESIGN.md', which allows users to maintain a consistent design style across projects, even when working on multiple tasks. Similarly, Anthropic’s Claude Design analyzes the codebase to build a design system, automatically applying elements such as color schemes, fonts, and design components.
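The article does not reproduce the DESIGN.md format itself, but a file of that kind might record shared style decisions roughly as follows. This fragment is entirely a hypothetical illustration; the section names and values below are assumptions, not Google Labs' published specification.

```markdown
<!-- Hypothetical DESIGN.md sketch; all fields are illustrative,
     not the actual format released by Google Labs. -->
# Design System

## Palette
- primary: #F5F0E8 (warm white)
- accent: #7A8B6F (sage green)

## Typography
- headings: serif, generous letter spacing
- body: light-weight sans-serif

## Imagery
- soft natural light, Nordic pastoral motifs
- no harsh shadows or saturated colors
```

The idea, as described in the article, is that such a file travels with the project so that every task inherits the same palette, typography, and visual mood.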
© The Asia Business Daily(www.asiae.co.kr). All rights reserved.