In Review: Zefr’s Co-CEO & Co-Founder Rich Raddon on Marketecture with Ari Paparo

Politics and AI with Richard Raddon of Zefr

Zefr
6 min read · Apr 10, 2024

Last week, Zefr’s Co-CEO & Co-Founder Richard Raddon joined Marketecture Media hosts and co-founders Ari Paparo and Michael Shields on their latest podcast episode to discuss the intersection of politics, AI, and advertising. The 40-minute episode covered a variety of timely and relevant topics, including the virality, velocity, and potency of misinformation content on social media; firsthand case studies in which Zefr has monitored, measured, and worked with platforms to reduce inaccurate and dangerous content; the increasing complexity of identifying AI-generated content today; and why it is so crucial for brands to have a proactive strategy to address it.

Below we’ve compiled some of the most insightful quotes and takeaways from the episode and provided a deeper dive into the topics through additional context, research, and stats.

On the deep-rooted problems and consequences of the spread of misinformation:

Rich Raddon: “These platforms have been forward-leaning… as far as getting out in front of the creation of content. That’s not where a lot of the damage we see can be done. It’s not the official ad; it’s actually, you know, people that are uploading these wild and crazy and unsubstantiated rumors and cloaking it in the mask of fact, when actually it’s fiction.”

The current industry-standard definition of ‘misinformation,’ how that translates to the context of social platforms, and how Zefr approaches content policy and the categorization of misinformation content:

Rich Raddon: “We monitor misinformation, and we should talk about that, because the policy around misinformation that GARM has come out with, I believe, needs some fine-tuning. For instance, GARM, the Global Alliance for Responsible Media, defines misinformation as things that are untrue, so medium risk in misinformation, in an entertainment context, if you take that at face value, that’s like half of Saturday Night Live, where it’s parody… so obviously there needs to be some fine-tuning of how we define misinformation, and it’s coupled with social issues.”

Global Alliance for Responsible Media (GARM): current Brand Safety Floor + Suitability Framework definitions:

  • “Debated Sensitive Social Issues” (Medium Risk): Dramatic depiction of debated social issues presented in the context of entertainment; Breaking News or Op-Ed coverage of partisan advocacy of a position on debated sensitive social issues
  • “Misinformation” (Medium Risk): Dramatic depiction of misinformation presented in the context of entertainment; Breaking News or Op-Ed coverage of misinformation

How Zefr’s technology detects, measures and prevents misinformation adjacency, and uncovers and reduces unsuitable content in the context of breaking news:

Rich Raddon: “[During] the Israel-Hamas war we saw an enormous amount of misinformation, and it took the form of people uploading generative images that were telling a different narrative. People were repurposing video content from wars five years ago and putting it out, saying ‘look what’s happening in Israel,’ so it was pretty rampant. So it’s a never-ending stream of this happening.”

Supporting data:

  • Zefr’s misinformation data is sourced from global fact-checkers, who saw a 25x increase in data volume in the two weeks after the war broke out on October 7, 2023.
  • The Main Misinformation Narratives Zefr identified related to Israel/Hamas included Unsubstantiated Hostage Posts and Video Game Footage
  • The most prominent category of misinformation about the Israel-Hamas war Zefr identified involved old photos and videos from other military conflicts, including Ukraine and Syria. For example, content purporting to show a young boy mourning his siblings in the Palestinian enclave, when the footage was actually captured in Aleppo, Syria in 2014.
  • Other examples of misinformation Zefr identified included Political Conspiracy Theories, and AI-Generated Imagery in which brand IP was misrepresented in bloody scenes related to the war.

Zefr’s AI technology works to quickly identify this type of content during dynamic news cycles and current events, and in collaboration with the social platforms, flags and de-monetizes or removes the content, effectively providing an optimization solution that protects brands and consumers from emerging misinformation narratives.

The importance of fact-checking, the nuances and dynamic nature of misinformation, and how this impacts brands caught in the crosshairs:

Rich Raddon: “The biggest problem with [misinformation] is the velocity of content that’s being uploaded, and the nuances from which it drifts… So you can start out with one mistruth, and it can evolve and shape-shift into different forms of misinformation, and tracking that is a very, very challenging job… At Zefr we’re trying to do it on behalf of brands… we’re monitoring the stuff that brands are adjacent to, or could be adjacent to, which is a small subset [of a platform].”

Supporting Research:

  • A Pew Research Center study conducted just after the 2016 election found 64% of adults believe fake news stories cause a great deal of confusion and 23% said they had shared fabricated political stories themselves — sometimes by mistake and sometimes intentionally.
  • Zefr & Magna Media Trials “Voices on Misinformation” Study (2022):
      ◦ 63% of consumers surveyed agreed that misinformation has a negative impact on how they view a brand
      ◦ 86% of consumers surveyed agreed that they expect brands to make every effort to avoid being next to misinformation
      ◦ 50% of consumers are less likely to purchase from brands perceived as supporting misinformation

On the challenge of detecting fake and AI-generated content on complex UGC platforms in image, audio, and video:

Rich Raddon: “Synthetic images are hard to detect. We’re training our algorithms to be able to do so, I know the platforms are doing it as well. The audio is super freaking challenging as well to detect… When I call it a perfect storm, the reality of it is, it only takes one really interesting post that’s up for a short amount of time and that gets viral really quick, that sways somebody that is not monitoring the information that is coming into their feed.”

Supporting data:

  • Zefr’s technology has identified over 1.3B views of political AI-generated content, with key themes including Discussions of Presidents/Presidential Candidates made by AI, Deepfake Misinformation, and Fake Content of Presidents Gaming, to name a few examples.
  • While “regular” misinformation pertains to real-world controversies like alleged voter fraud, “fringe” misinformation content Zefr has identified often goes into obscure and ominous subjects, exploring theories about topics like reptilian beings or other outlandish conspiracy theories.

How Zefr provides solutions that help brands manage the rise and complexity of new and emerging forms of brand-related misinformation today:

Rich Raddon: “We’re trying to help the brands that we work with navigate this, because it’s a very challenging thing to navigate, because… a lot of these brands don’t want to be thrust into the limelight of doing trademark violation, they don’t want to actually become part of the conversation, but they are being unwillingly thrown into the mix because of the ability to use logos and packaging with generative models.”

Supporting Data:

  • Zefr has identified over 2.3B views specifically surrounding misinformation in different industries or brand verticals, including Aviation, Music, Movies, and Pharmaceuticals, among others.

To listen to the full episode, visit: spoti.fi/3xEi5wV

To learn more about Zefr’s AI technology, including misinformation detection and avoidance, please visit and contact us here: https://zefr.com/misinformation


Zefr

Leader in Responsible AI, Brand Safety & Suitability