
On The Agenda


Authenticity Under Fire: Gen AI's Potential Impact On Brand Trust

Could the growing use of machine-produced ads harm long-term consumer faith in a business' communications?

By Creative Salon

Hard-earned but easily lost, brand trust is something marketers build over time through consistent consumer communications.

However, the introduction of Gen AI, and its increasing use in the creation of ads, could prove a strong test of how willing audiences are to engage with machine-produced content.

This winter season has brought perhaps the most high-profile example of Gen AI's influence on ad creativity so far. Coca-Cola is set to reveal the AI recreation of arguably its most famous campaign - 'Holidays Are Coming'.

Some creatives might question whether such a remake was necessary, or whether it is a step too far.

This comes as the pushback by Hollywood creators has already begun. Within Hugh Grant’s latest film Heretic, the movie’s directors have insisted on adding a message for its audience at the end: “No generative AI was used in the making of this film.”

And that hesitancy among creatives has been present in the UK for a while. According to a survey released earlier this year by The Design and Artists Copyright Society (DACS), 95 per cent of British artists feel they should be asked before their work is used to train AI models.

Are we already reaching an inflection point where artists will actively want their work to be labelled as not having been produced using artificial intelligence?

What's more, in theory, brands that begin relying on technology to produce more of their comms could risk losing the authenticity they have been building through their marketing. After all, according to PwC's 2024 Responsible AI Survey: "Building AI responsibly isn’t enough to guarantee trust. That’s up to the stakeholders. It’s their perceptions and experiences that determine whether trust is earned and whether AI initiatives are ultimately successful."

We asked industry experts whether they have concerns about Gen AI's growing role in advertising, the volume of AI-generated content being produced, and its potential impact on brand communications.

Emma De La Fosse, chief creative officer, Edelman UK

I’ve used AI as a writing partner for a few years now. AI often gets things wrong or writes something that is a bit naff, but you can guide it. People often say that teaching is the best way to learn, and when creatives work with AI they are teaching it all the time.

Like any tool, the potential and possibilities of AI depend on who is wielding it. Interestingly, an agency in Korea gave their staff access to a new AI-powered marketing platform they had developed. The outputs ranged from terrible to wonderful. Those who’d produced the sub-standard efforts were convinced they’d been given a different tool than their colleagues. They hadn’t. They were simply not as creatively talented as those who’d produced the better work.

I see AI as something that can help creatives bring to life ideas that were previously impossible. (Fernando Machado of NotCo and Suzana Apelbaum of Google both give inspiring presentations on what happens when AI is creatively powered.) The problem is when others try to replace creatives with AI, mostly because the people making those decisions are not creative themselves and don’t understand what great creativity requires. The real recipe for success is AI + human. Not AI instead of human.

To date, the work I have judged at awards shows that has been produced by Gen AI (as opposed to other LLM AI) has been marked down because it is not very good. It’s not snobbery; the quality of the craft or animation is simply not there. Yet great ideas dreamed up by humans that have been made possible by AI, such as NotCo’s 'Not Mayonnaise', which is indistinguishable from the real thing, have been very well received. Once again, it all depends on who is wielding the tool and who is having the ideas.

I worry that ‘Made by Humans Only’ labels will render creativity as some sort of rare artisanal craft skill, like hand-rolled Cuban cigars, that only the very rich can afford to buy in small quantities. That’s not the way forward. Creatives need to lean in and make sure they are the ones wielding the tool, exploring the possibilities, otherwise those who only understand business rather than creativity will do so.

Owen Lee, chief creative officer, FCB London

AI is already freeing us up to do more of what we love – creativity – enabling us to experiment more, move faster, and produce content at speeds we never thought possible. I can see a point in the not-too-distant future where AI will get so sophisticated it will be commonplace for it to be used to create feature films, and people will start to marvel at films that have been shot with real people on real cameras like they were in “the good old days”, in the same way that Christopher Nolan chose to shoot “Oppenheimer” on black-and-white film. But this brave new world won’t mean the demise of film directors, creatives, or photographers. They’ll just be using their skills in different ways, prompting AI with specifics in a way that only directors can, because AI is only as good as the input we give it.

It won’t be that long before AI gets so good we might not be able to tell whether it was made by AI or humans. And I’m not sure we’ll care. If someone created the next “Guinness Surfer” entirely using AI, people would admire it just as they would any other great campaign. The big watch out will be quality over quantity. No one wants to watch a proliferation of poor content whether it’s produced by AI or human beings. And as an industry, we need to ensure authenticity. In the hands of human beings, advertising went too far into the realms of showcasing “perfect” people, and hopefully if AI is allowed to pull information from everywhere, it should be giving us a genuine picture of the world – much like we did with “This Girl Can”, showing authentic women that weren’t models. But if AI backtracks on this progress and starts generating work featuring people without any flaws it will be swiftly rejected.

Do we need to be transparent about the use of AI? I don’t think so. We don’t go to the effort of telling the average consumer which film camera we shot on, so why would we feel the need to tell them whether we used AI? People will only care if they’re surrounded by advertising that’s poor quality, and that’s always been true.

While we’re at a crossroads, we simply won’t stop forging forward. There won’t be a mass rejection of AI, we’ll get used to it, like we did when we moved from dry plates to film, and film to digital. You can’t stop change. Just don’t fall into the Blockbuster or Kodak trap by not moving with the times and trying to stick with the old technology.

Louis Vainqueur, head of data science, Digitas

AI models now generate content across language, sound and images with such high quality that it’s increasingly challenging to distinguish between human-created and AI-generated material.

Two years after the launch of ChatGPT, there are now hundreds of thousands of LLMs available through open-source platforms like Hugging Face, with billions of users engaging with these tools every day. Much like blogs and mobile internet once democratised content creation, LLMs are amplifying the range of voices in the digital space.

In this fiercely competitive landscape, it’s critical for brands not only to capture attention but also to pay attention, actively listening to the conversations happening around them. Knowing whether the content is AI-generated adds valuable context.

Initiatives like C2PA, backed by Publicis, help trace media provenance, while watermarking schemes by LLM producers work to verify content authorship. However, these efforts are not yet widespread.

It’s also important to avoid assuming all AI-generated content is inherently problematic or misleading. LLMs are evolving into a medium for personal expression, enabling more voices to join the conversation. While we’re certainly on the verge of an explosion of content, we’re also seeing a rise in the number of unique perspectives.

At Digitas, we recognise the importance of understanding both AI-generated and human-generated data, which is why we’ve developed InsightXD. This tool allows us to identify emotions, ideas and statements from data, ensuring we’re able to make informed decisions in this dynamic, AI-enhanced communication landscape.

Dan West, strategy lead, Accenture Song

We have entered an era where the ‘fake’ is now what we have to manoeuvre around in digital experiences. Fake brands, fake products, fake reviews, fake answers, fake testimonials saturate online spaces leaving people to question the legitimacy of almost everything they encounter. Hesitation is becoming a front-of-mind digital behaviour. It’s not just a fleeting concern; people now pause on nearly every digital channel, wondering, “Is this real? Am I safe here?” This heightened scepticism profoundly impacts brand communications. The digital ecosystem has become overwhelmed with ‘generated’ content—both for authentic and deceptive purposes.

In this landscape, the technology a brand uses is more than just a tool; it’s a statement of the brand. It communicates something fundamental about the brand’s values and its commitment to authenticity. Brands need to be especially thoughtful in their use of generative AI, ensuring that it conveys the right message. The ways brands show up in content, including the quality of their digital presence, speak volumes. This isn’t just about earned vs paid, but a challenge that pervades all channels. With low-quality content flooding online spaces, brands that invest in quality and well-crafted experiences will stand out and foster trust against the clutter. Maintaining a high standard is not just about aesthetics, but about creating lasting connections in a world increasingly filled with digital illusions.

Stephen Ledger-Lomas, chief production officer and partner, BBH London

The awareness is clearly there, but the foundational models that most users access are producing a sea of sameness with a mostly unsettling hallucinogenic aesthetic. AI that is trained with specific assets produces more promising results but there is nothing yet in market that anyone can point to which is remarkable, aside from the obvious examples where AI was part of the idea. So there are good reasons why the ad industry isn’t revering work created by AI just yet.

One of the most interesting aspects of the Gen AI explosion in production is the horizontality. The reason it is likened to an industrial revolution is that it’s impacting all of the tools we use all at once. It is also totally unregulated which makes it difficult to align with publishing commercial work.

Production partners we work with are using it already in generating text and images for treatments, briefing casting or set design, generating music demos, rotoscoping in VFX, the list is endless. Disclosure of its uses and the moral questions surrounding that will be next in the spotlight.

Artificial intelligence promises to be a set of tools that will expedite human ability and amplify human creativity. But we’ll remain at the crossroads for a while yet.

Brent Nelson, chief strategy officer, Edelman

Globally, only 38 per cent of people have high confidence that AI is effectively regulated; only 36 per cent feel “people like me have a lot of control over how AI affects their lives”; and 35 per cent of people reject the growing use of AI outright, with rates of AI rejection even higher across developed markets. This lack of trust in technology is compounded by a global infodemic that has driven people’s trust in media to record lows, exacerbated by daily examples of contradictory and confounding misinformation being fed to people. The fundamental question facing brands is this: will your actions, and your use of AI, reflect your brand values?

Given the global sentiment towards AI and media, how your brand chooses to engage with, use and deploy AI should feel more like a moral and ethical corporate social responsibility question than a ‘yes’ / ‘no’ marketing decision.

Every technology is inherently benign; historically, how and what people choose to do with a given technology determines its potential for betterment or harm. At the very heart of my AI concerns lie Truth and Trust. In an increasingly polarized world, AI has the genetic ability written within its very code to be either a positive or negative force in marketing communications and beyond. How AI is used has an immense potential to effectively build Brand Trust and Equity, or to destroy both with ruthless technological efficiency and effectiveness. Fake news. Fake ads. Fake pictures. Fake influencers.
