What it really takes to make the film you want using gen AI

By Morten Legarth and Ben Hopkins, Creative directors

July 25, 2024 | 9 min read

Marking the first birthday of VCCP’s AI creative agency, faith, creative directors Morten Legarth and Ben Hopkins made an animated short film using gen AI to show just how far the tech, and their expertise with it, have come.

Faith meets a new robot friend

It’s been well over a year since GPT-4’s release, but its impact on the creative industry has been difficult to quantify beyond the hype. Beyond the initial dopamine hit of using gen AI, people’s patience for gimmicky displays is fading fast.

So, how does this apply to the AI darling of the hour, generative video? When OpenAI’s new video model Sora dropped a few months ago, LinkedIn was ablaze with conversation. But was it the first sign of real disruption? Hmm… not quite.

Even these “pure AI” solutions, which look like silver bullets, still need human intervention.

Generative video models are probabilistic, non-deterministic and inherently random, which prevents them from reaching silver-bullet status for creative applications. Training for consistency is far more complex than for static gen AI imagery - a fundamental limitation that is often overlooked. And no one hates inconsistency more than brand world-builders.

This chaos, which undermines a creative vision, limits generative video to conveniently quirky music videos, retro movie trailers and novelty, sub-par TV spots. The reality only sinks in when you realize that using tools like Midjourney feels like playing slots in Vegas: spinning again and again, waiting for the jackpot.

Looking at our own creative department, it takes an average of four prompt attempts before we generate an image that aligns with the creative vision.
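As an illustrative aside (our own figure, not a formal study): if each prompt is treated as an independent trial, an average of four attempts per usable image implies a per-prompt hit rate of roughly 25%, since a geometric distribution has mean 1/p. A minimal simulation of that model:

```python
import random

def attempts_until_hit(p, rng):
    """Count prompt attempts until one aligns with the vision (probability p each)."""
    n = 1
    while rng.random() >= p:
        n += 1
    return n

def mean_attempts(p, trials=100_000, seed=42):
    """Average number of attempts across many simulated briefs."""
    rng = random.Random(seed)
    return sum(attempts_until_hit(p, rng) for _ in range(trials)) / trials

# A 25% per-prompt hit rate gives roughly four attempts on average (1 / 0.25).
print(round(mean_attempts(0.25), 1))
```

The per-prompt probability and the independence assumption are our simplifications; real prompting improves with iteration, so the true process is friendlier than a pure slot machine.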

To mark the first birthday of our AI agency, faith, and to help our friends in the industry, we set out to understand what it really takes to make something good. Could we define and master a complex workflow that controls this chaos, as we have done for image and copy? Could we use gen AI to make a film that we wanted to make, not one that it wanted to make for us?

We’ll let you decide.

To test this, we needed a rich brand world to provide the ecosystem for our experiment, so we chose our own. We decided to tell the faith origin story exactly the way we wanted to tell it, controlling AI to meet our creative standards instead of letting AI control how much we lowered them.

When we launched faith, we created a rich brand world in the same way we would for our clients, built around our brand characters - a girl and a robot - representing the partnership between humans and AI. The story is based on our faith that positivity in the face of great change can help us push past fear and allow for self-improvement. And while our short film, ‘finding faith’, tells the story of our protagonist’s first encounter, the real story lies behind the scenes.

AI tools are marketed as ‘easy to use’ or as ‘democratizing creativity’, but to create usable brand assets that can leave the office, you need substantial technical fluency. We’ve seen that the more people use and explore the tools, the more sophisticated this usage gets. We’ve moved from list-style prompting to using natural language 93% of the time - building our fluency through practice.

If the first challenge was working out which tools could do what, the second was that a new update or release hours later made the previous one redundant. The rate of improvement in these tools is like nothing seen before; not only do you have to be agile, you have to be unprecious. We were both advantaged and disadvantaged by this: for example, tools like ElevenLabs’ Text-to-SFX suddenly allowed us to add depth to a film with no dialogue, but the recent release of LivePortrait video generation left us questioning whether we needed to start again!

One thing remained clear: there is no one magic, off-the-shelf tool that can produce branded video, and those that claim to… always underwhelm. Our lean, agile creative team ended up combining over 15 tools to bring order to the chaos. We were able to animate backgrounds, transform 3D into 2D, turn photos into videos, and generate SFX using tools including ChatGPT, Midjourney, Stable Diffusion, ComfyUI, Viggle AI, ToonCrafter, AnimateDiff and ElevenLabs. But to achieve the best results, it’s crucial to combine these with tools we use every day, like Unreal Engine, Blender and Adobe Creative Suite. It was this more targeted approach that gave us the flexibility and control required to bring our creative vision to life.
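To make "combining over 15 tools" concrete, here is a minimal sketch of the general pattern: an ordered pipeline where each stage wraps one tool and hands its output to the next. The stage names below are our hypothetical stand-ins for illustration, not the actual faith pipeline:

```python
from typing import Callable, List

# A stage maps an asset description to an enriched one. In practice each stage
# would call a real tool (an image model, a video model, an SFX generator);
# here the stages are hypothetical placeholders.
Stage = Callable[[dict], dict]

def generate_stills(asset: dict) -> dict:
    return {**asset, "stills": f"stills for '{asset['brief']}'"}

def animate(asset: dict) -> dict:
    return {**asset, "video": f"animated {asset['stills']}"}

def add_sfx(asset: dict) -> dict:
    return {**asset, "sfx": f"sfx layered over {asset['video']}"}

def run_pipeline(brief: str, stages: List[Stage]) -> dict:
    """Thread an asset dict through each tool stage in order."""
    asset = {"brief": brief}
    for stage in stages:
        asset = stage(asset)
    return asset

result = run_pipeline("girl meets robot", [generate_stills, animate, add_sfx])
print(sorted(result.keys()))
```

The point of the pattern is being "unprecious": when a tool is superseded, you swap one stage rather than rebuild the whole chain.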

To answer the inevitable question about cost and efficiency: yes, we cut production time in half and costs by around 85%. However, these metrics were not the primary drivers for making the film, as we’re still in the early days of understanding the ROI of AI. We aren’t saying one process replaces another. And we aren’t saying AI is now ready to produce perfect branded content. What we are saying is that AI tools are just that - instruments, easy to pick up but hard to master; each produces melodies, but together they are capable of producing a symphony.

In terms of agencies’ AI endeavors, there has been a lot of smoke and mirrors: leaning into the limitations of the tools, lowering creative standards, and altering messaging to cater to AI’s shortcomings. This stems from a lack of experimentation with the tools - and it makes sense, as marketing teams don’t have time to undertake R&D projects like this. The urgent displaces the important, and it is a privilege for us at faith to be able to dedicate resources to this.

Encouraging adoption has really paid off for us in developing our own tools for clients and in VCCP’s general use and understanding of AI. We’ve seen a 300% growth in usage of Midjourney alone across the business since December 2022 (with most in the last year).

It was just a year ago that a gen AI video of Will Smith eating noodles went viral. The medium has come a long way, as you can see.

But to accelerate our industry’s collective learning, we are lifting the lid on the making of the project, dissecting which tools we used and the challenges we overcame.

This project has been invaluable in placing faith at the cutting edge of creating with AI; we are pushing the tools to their limits, honing our skills, and on a mission to use AI as a positive force that accelerates human creativity and imagination.

And if you want more, watch the making of below.
