On The Agenda

How Brands Can Avoid Getting Burned by Bad Actors Using Gen AI

The creative power of new technology puts brands in potential danger when it comes to how their assets could be used and shared online without permission

By Stephen Lepitak

What is real any more? It's a question we should all be asking a lot more in the future.

Every time technology advances and new tools and platforms emerge, brands come under threat in some way. With the advancement of Generative Artificial Intelligence (Gen AI) and its ability to quickly produce rich imagery that can fool the eye of the average consumer, brands may have a new enemy to face, at least in the short term.

AI may have the ability to take on many tasks within a business and speed up processes, but it is also currently able to create seemingly real ad campaigns featuring genuine brand assets while conveying fake company messaging. A marketer's worst nightmare.

The same is true of people's likenesses, with famous people reportedly seeing their images used as deepfakes in campaigns without their permission.

AI offers various opportunities for bad actors, prompting the World Federation of Advertisers (WFA) to announce the formation of an AI Task Force, bringing together senior marketing, legal and policy professionals to help brands develop practical solutions to propel safe and suitable use of AI across the industry.

According to research carried out by the WFA in the autumn of last year, featuring 69 senior respondents from 55 member brands, less than a third (32 per cent) of respondents had measures in place for brand safety and adjacency, while even fewer (29 per cent) had IP and copyright processes in place.

Brands Getting Ahead of AI

Some brands have embraced the potential creativity that could come from customers taking their brand assets and playing with them to share through their social media channels, perhaps none more so than Coca-Cola which through its Coca-Cola Creations platform has been wooing younger consumers to create content. This included the launch of a platform over the Christmas holidays where users could share personalised cards featuring iconic brand assets such as its depictions of Santa Claus, the Coca‑Cola Caravan trucks and polar bears.

“Technology has since evolved—the leap in image quality from DALL-E 2 to DALL-E 3 is incredible—so we saw an opportunity to launch a holiday-themed creation platform and give fans the chance to enhance our iconic imagery,” explained Pratik Thakar, Coca‑Cola’s global head of Generative AI upon launching the initiative.

Another example of a positive and controlled use case of Gen AI comes from telecommunications provider O2, which released The Bubl Generator, featuring the brand's cute robotic mascot and using customer prompts to create custom imagery featuring him. Alongside that was The Copy Checker, which was trained on O2's tone of voice and metrics to allow its copywriters to paste in text and get detailed feedback and suggestions.

It was developed by Faith, the AI arm of VCCP, alongside Virgin Media O2's head of copy and tone, Oeil Jumratsilpa, to ensure brand-compliant messaging. It demonstrates a practical, public-facing application of Gen AI, part of the wider work Faith has been doing on behalf of clients including Virgin Media O2 and Sage.

Simon Valcarcel, marketing director at Virgin Media O2, explained that the two platforms were the result of exploring alongside Faith how the brand might utilise Gen AI across its marketing.

“The models we’ve developed together create consistent, high-quality distinctive brand assets and copy at speed - allowing us to work in ways which would not previously have been possible. It’s exciting to keep learning by doing, as the GenAI space continues to evolve,” added Valcarcel.

Ultimately, these projects showcase the work more brands must undertake now: training Gen AI to understand their guidelines and maintain consistency before it creates work that meets standards built up over years, including the legal checks around the use of external media and unsecured ownership rights that AI tools are still in danger of passing on to users.

Here are some words of advice and insight shared by agency leaders on the matter of brand safety in the Gen AI space.

Alex Dalman, managing partner of AI creative agency Faith and head of social and innovation at VCCP

We have a clear policy and guidance to staff on how we use Gen-AI, which is supported by training. Principally, we are happy to use Gen-AI when it lets people do more than they could otherwise, but not when it is being used to displace or replace human artists.

We won't ask Gen-AI to copy an individual artist's style without their involvement and consent, just as we wouldn't ask a human artist to recreate the style of another. And just as we would with human briefs, we may provide a range of references as part of a brief to create something wholly new, but never to create something derivative.

As always, a balance needs to be struck between obtaining the widest possible rights and preserving the rights of artists. We also work closely with our legal team to ensure that we are not infringing on anyone else's IP.

Rob Meldrum, head of creative futures at EssenceMediacomX

As we continue to stretch our understanding and application of Gen AI, we’ve got to a point where we’re pretty comfortable with it in the inspiration phase - whether we’re generating an image to accompany a pitch idea, an initial audience planning matrix or thought starters for new concepts.

Gen AI gets really interesting in the production phase of a campaign, especially when developing digital assets. We’re not looking at fully AI generated ads, but instead how we can quickly create thousands of copy lines that can be fed into our tools to develop the growing number of assets needed to reach a fully addressable audience.

We’re working with our clients to develop global AI governance frameworks to mitigate risk when using AI tools - everything from the handling and security of audience data through to the legalities around AI image generation of people and products. We also expect to see the rise in ‘AI-generated’ type disclaimers on advertising as agencies continue to push what’s possible with AI.

Matt Muse, senior innovation creative at T&Pm

Someone once said: ‘If you don’t have to talk to your legal department, is your idea even worth doing?’

Last year, generative AI platforms became mainstream, triggering a rush in every creative department to drop the first great piece of AI driven work. However, the real arms race is taking place in legal departments and high-level client meetings.

AI is moving fast, so it's essential you stay up to date, not only with the latest features but also the commercial viability of the platform itself. At T&Pm we try to bring our legal team in early on projects, and we have a live document detailing our stance on every platform. This is vital to keep teams informed, as terms of service change all the time.

So far, the most impactful AI-driven projects have either been adopted at the network level with senior client approval or have been for lower-budget and charity clients who face fewer legal challenges. Our upcoming projects have benefitted from open dialogues between brand teams, our legal department, and clients, emphasising the importance of including guardrails in your tech stack, often AI powered. Anything created in AI should also be flagged to avoid accidental IP problems.

Dan Northcote-Smith, creative innovation director, T&Pm

Another successful approach is developing proprietary models using first-party data. We've set up a rig for this purpose, foreseeing it as a future method for creating safe and reliable brand assets.

Stay engaged with all platforms, keep your eye on legal changes, and stay close to your legal department.
