Zefr
4 min read · May 23, 2024

AI with Integrity: Elevating Brand Marketing Innovation with Lasting Impact

Since ChatGPT's mainstream debut in late 2022, brands have been actively exploring generative AI. This technology is revolutionizing the advertising industry, offering new ways to inspire creativity and significantly boost creative output. Marketers are already experimenting with generative AI, and it seems inevitable that the ad industry will fully embrace these tools.

In the media world, publishers and numerous creators are also testing synthetic content to engage consumers. However, as machines produce ads and content on an unprecedented scale, new dangers emerge. Digital marketers face increasing brand suitability and safety concerns, which are likely to be exacerbated by the proliferation of easy-to-use generative AI tools.

Generative AI promises significant benefits for marketing. Yet, it also presents new challenges for CMOs, such as managing the risk of unlicensed use of brand images and IP. How the ad industry addresses these issues will impact the overall effectiveness of advertising and consumer trust in the coming years.

Brands and Responsible AI

According to research conducted by the World Federation of Advertisers, three out of four brands already use generative AI or plan to soon. Several leading brands are already finding success in this realm. At one end of the spectrum is Ally Financial, which built its own proprietary cloud-based generative AI platform to save its employees thousands of hours previously spent on tedious tasks.

At the other end is Coca-Cola, which went as far as using AI to help design a TV ad.

However, it’s fair to say that when it comes to using technology to ‘make’ ads, many brands remain in cautious experimentation mode. Some marketers are even including clauses in their contracts with ad agencies that either require full AI disclosure or restrict its use entirely.

That caution stems from the fact that brands want to maintain as much control as possible over their messaging and their media plans, both to ensure that their campaigns reach their intended targets and to avoid harming their well-earned brand equity. Indeed, the WFA’s research found that 71% of brands are worried about brand safety and adjacency when using generative AI.

The organization is looking to help brands get out in front of these issues. The trade body has released a Generative AI Primer to educate CMOs, and plans are in the works to launch an AI task force.

Those efforts should be invaluable over time. But in the near term, brands are facing a balancing act between the drive for innovation and the need to navigate an emerging set of AI ethics.

Why Responsible and Ethical AI is Crucial for Brands Moving Forward

Clearly, there is huge upside for marketers looking to master AI, along with many potential pitfalls. To ensure that this revolutionary tech is used responsibly, transparency is vital. Brands, media companies and tech firms need to make it clear to customers, both consumers and businesses, when and how AI is being used. In fact, the FTC has already issued guidelines requiring certain disclosures of material connections between advertisers and endorsers.

Of course, brands aren’t just risking blowback from lawmakers if they ignore such concerns. Any unregulated or reckless use of AI can put brands’ trust with consumers, their hard-earned reputations, in peril. This can be as simple as AI-driven media buying placing marketers’ messages in less brand-safe corners of the web, undoing years of hard work on this front. Or worse: the more that humans are removed from the ad production process, the more brands risk producing messages or images that violate their own values.

There is also the increased risk that existing verification technologies won’t be able to keep up with the coming explosion in AI-generated content, leaving marketers vulnerable to unfortunate, unforeseen mishaps.

Navigating this new world won’t be easy — and brands may need to build new systems and even practices to monitor AI ethical concerns. As Fast Company put it, “given AI’s dual role as a hero and villain when it comes to reputational risk, businesses should develop a brand management strategy.”

Already, the WFA found that 71% of brands plan to upskill their employees in effective AI use as part of a responsible marketing strategy.

Yet given the complexity of the technology, and the high stakes, marketers will need to bring in more experts so that models can be developed safely and third parties can be evaluated with the right degree of scrutiny.

Clearly AI can be transformative for the ad industry. But with great power comes great responsibility.

Zefr

Leader in Responsible AI, Brand Safety & Suitability