Following the deadly attack on Moscow's Crocus City Hall in March, Russia's state NTV channel broadcast a deepfake video appearing to show Ukraine's top security official, Oleksiy Danilov, suggesting that his country was responsible for the assault. Fortunately, the video was widely flagged as a deepfake before it could be used as a pretext for a disastrous military response.
Deepfakes have been a regular presence in the Ukraine war, dating back to the beginning of the invasion, when a video showed a synthesized Volodymyr Zelensky surrendering to Russia, an effort experts have described as the first real attempt to sway the direction of a war through deepfake disinformation. The same strategy has since been adopted in other wars and conflicts, further indicating that deepfake manipulation is here to stay as a new front in future conflicts.
Deepfake media created to spread disinformation, sway public opinion, and steer political responses in allied countries marks a new development in the public relations of war. These methods are used by independent actors and governments alike, and as we enter a new era of large-scale, high-stakes conflicts around the world, it is all but inevitable that most, if not all, of the world's governments will deploy deepfakes in covert action, propaganda campaigns, spycraft, and open warfare. So far, deepfakes have not triggered major clashes or tragic misunderstandings within armed forces, thanks largely to rigidly protected lines of communication and the chain of command.
Yet what happens when deepfakes are deployed in moments of unprecedented chaos and vulnerability, when communication breaks down? A doctored video of a leader declaring war or a general announcing a coup can spread panic and violence through the public and the military ranks when its origin cannot be verified. Fake images of atrocities can spark unrest that benefits malicious actors and opportunists. An invasion can be made to look like a local rebel uprising, while a state's violence against its own people can be reframed as the work of terrorists. For now, within well-functioning political systems, we rely on the media and independent agencies to counter the effects of disinformation, and on armed forces and governments to follow a set of carefully designed emergency response scenarios.
Taking Action Against Deepfakes
To ensure that such systems continue to work, we must treat deepfakes as a priority. With generative AI weaponized, the possibility of armed services and their leaders being targeted and prompted to act by deepfakes is now real and tangible. The consequences of such scenarios are far-reaching, and we cannot afford to find out how they would play out in reality. As security officials and policymakers race to catch up with a technology that has already outpaced our current laws and safety measures, governments and legislators must act quickly to ensure that the allure of deploying deepfakes against adversaries does not lead to a completely destabilized world.
This is why we echo the sentiments of other experts in calling on world governments to develop a code of conduct for the use of deepfakes by governments, guaranteeing that the potential harm caused by deepfakes is weighed before their deployment against adversaries. Such a code could function much like the international treaties governing chemical and nuclear weapons, ensuring that deepfakes are not exploited in ways that diminish the value of human life or inflict cruel and unusual punishment on civilians. A deliberate process involving a range of expert voices from across disciplines and institutions should guide policies on deepfakes at all times.
As we wait for policymakers to take the initiative, Reality Defender will continue to work with governments and institutions to provide crucial deepfake detection tools that help stop disinformation campaigns before they lead to tragic consequences on the ground. Whatever shape global policies on deepfakes take, reliable detection of AI-generated content will remain a necessary safeguard against disaster as we navigate an evolving landscape of armed conflict.