Even as deepfakes raise concerns about fiction being presented as fact, the proliferation of AI-generated content is fueling an equally troubling phenomenon: the labeling of truth as falsehood.
In the past few weeks, droves of internet users have embraced unfounded theories that videos showing Kate Middleton strolling through a market were, in fact, deepfakes, and speculated that a photoshopped family photo was proof of a nefarious plot reaching the highest echelons of government. These theories, amplified by Kremlin-backed disinformation groups, stripped a person of her right to keep her private grief from billions of strangers.
Following the collapse of the Francis Scott Key Bridge, conspiracy theorists took to social platforms to boost their engagement by blaming the tragedy on anything but its actual cause: a container ship striking the bridge. As the world's already loose grip on facts and proof slackens further during such incidents, we grow increasingly worried about the impact sophisticated deepfakes will have on media literacy and on our ability to discern what has and has not occurred.
The Liar's Dividend
The liar's dividend is a well-documented phenomenon that exploits the uncertainty sown by realistic deepfakes: a cynical worldview in which anything that does not align with a person's interests or opinions can be dismissed as fake. The liar's dividend predates deepfakes, but with AI-generated content already making up a double-digit percentage of what floods the Internet, the conditions for its spread are perfect. Abusers of the liar's dividend want us to believe that anything and everything could be a deepfake. That way, a political party, a bad actor, or a fraudster can point to a piece of authentic but unflattering media or evidence and claim it is a fabrication.
Of course, the burden of discernment doesn't fall solely on individuals. All platforms responsible for the dissemination and analysis of content (news media, social media platforms, tech and AI companies, and others) must pursue the verification of legitimate content as vigorously as they pursue the exposure of fake content; one concrete building block of such verification is sketched below. We cannot afford to traffic in conspiracy theories and gossip, no matter how much engagement such content brings to digital platforms. Especially within the world-altering industry of artificial intelligence, we cannot serve the business of conspiracies, only the pursuit of truth. In our line of work, being bogged down by conspiracy theories distracts from productive conversation about how AI can elevate humanity. It would be a mistake to let powerful technology become fuel for our most banal impulses.
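To make "verification of legitimate content" concrete, here is a minimal sketch of one building block behind provenance standards such as C2PA: a publisher cryptographically signs the exact bytes of a piece of media, and anyone can later check that the file is unaltered. The Python below uses the widely available `cryptography` package; the key handling and the stand-in media bytes are illustrative assumptions, not any platform's actual pipeline.

```python
# A minimal sketch of signed content provenance, one building block behind
# standards such as C2PA. Requires the `cryptography` package; the key
# management below is illustrative only.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Publisher side: sign the exact bytes of a piece of media at publication.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

media_bytes = b"...raw image bytes..."  # stand-in for a real file's contents
signature = private_key.sign(media_bytes)

# Verifier side (a platform, a newsroom, a reader's tool): check the bytes
# against the publisher's public key. Any alteration breaks the signature.
def is_authentic(data: bytes, sig: bytes) -> bool:
    try:
        public_key.verify(sig, data)
        return True
    except InvalidSignature:
        return False

print(is_authentic(media_bytes, signature))               # True: untouched
print(is_authentic(media_bytes + b"tamper", signature))   # False: altered
```

In a real deployment, the public key would be distributed through a trusted channel, such as a certificate chain, so that a valid signature proves both who published the file and that it has not been altered since.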
Narrowing the Definition of "Deepfake"
The key to resisting this phenomenon is to prevent the word “deepfake” from becoming a one-size-fits-all boogeyman for any piece of news or media we don't like. Deepfakes are not an all-encompassing ether that swallows up every sense of objective truth and every point of reference for what is authentic. They are merely pieces of flawed synthetic content that spread, persuade, and devalue what's true because most governments and platforms have yet to adopt effective measures to expose them for what they are and dismantle the far-fetched myths behind them.
We must continue to develop cutting-edge, explainable, and practical methods that elevate real content in contrast to deepfakes. We must debunk the toxic myth that everything is possible and nothing is true. AI may be persuasive, and its nefarious misuse hard to predict, but with a down-to-earth, human approach to communicating with the public about generative AI and deepfakes, we can ensure that the photo of a molehill won't become an unscalable mountain, and that tragedies won't become profit for the worst opportunists among us.