An ad from the Servant Foundation, the religious non-profit behind the He Gets Us media campaigns, aired for 60 seconds last night during the Super Bowl. The ad features a series of still photos depicting people taking part in the religious rite of maundy (the washing of feet) while a cover of INXS' "Never Tear Us Apart" plays softly in the background.
Within seconds of the ad airing, users took to X (formerly Twitter) and other social media services to quip about the heavy use of AI in the creation of the commercial's stills, which, to the naked eye, looked wholly artificial. Yet shortly after, AdAge confirmed that the photos were very real, shot by photographer Julia Fullerton-Batten.
Over 100 million people watch the Super Bowl. Many (myself included) watch just for the commercials. According to a study from Jumio last year, roughly 67% of the public is aware of generative AI and the media created with generative tools. Yet based on the aggregate data from over 100,000 social media posts in just an hour, no one could tell real from fake (or fake from real).
Reality Defender was built with this scenario in mind. As generative AI rapidly advanced over the last few years and its output became increasingly indistinguishable from real media, our team built deepfake detection that uses AI to detect AI in ways that even the best-trained humans cannot. Last night's ad, and the widespread failure of humans to classify it correctly, shows that relying on human judgment and manual moderation for this specific content problem is unreliable and unscalable.
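To make "AI detecting AI" concrete, here is a minimal sketch of one common approach: a binary image classifier trained to separate camera-captured photos from generated ones. This is an illustrative assumption on my part, not Reality Defender's actual architecture; the backbone choice, preprocessing, and `classify` helper are hypothetical stand-ins.

```python
# A minimal sketch of AI-based deepfake image detection (illustrative only):
# a two-class (real vs. AI-generated) classifier on a standard CNN backbone.
# The weights here are untrained, so outputs are near chance; a production
# detector would be trained on large labeled corpora of real and generated images.
import torch
import torch.nn as nn
from torchvision import models, transforms

# Repurpose a ResNet-18 backbone for a binary real/generated decision.
detector = models.resnet18(weights=None)
detector.fc = nn.Linear(detector.fc.in_features, 2)
detector.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def classify(image) -> float:
    """Return the model's probability that `image` (a PIL.Image) is AI-generated."""
    batch = preprocess(image).unsqueeze(0)  # shape: (1, 3, 224, 224)
    with torch.no_grad():
        logits = detector(batch)
    return torch.softmax(logits, dim=1)[0, 1].item()

if __name__ == "__main__":
    from PIL import Image
    import numpy as np
    # Stand-in input: random noise in place of an actual still from the ad.
    photo = Image.fromarray(np.random.randint(0, 256, (720, 1280, 3), dtype=np.uint8))
    print(f"P(AI-generated) = {classify(photo):.3f}")  # untrained head: ~0.5
```

A single classifier like this tends to generalize poorly to generators it never saw in training, which is why real detection systems layer many specialized models and signals and are continuously retrained as generative tools evolve. The sketch exists only to show the shape of the pipeline, not its difficulty.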
Though the ad was harmless, the incident exemplifies how poorly people distinguish real from fake, and it highlights the pressing need for deepfake detection now, before a misjudgment at this scale involves content with actual consequences.