As someone working in artificial intelligence (albeit on the security and prevention side of things), it is my job to closely follow every advancement in the creation of deepfake and generative content. As these technologies proliferate, social media platforms have notably struggled to combat the misinformation and harmful content shown to their millions of users. Many have chosen to do the bare minimum, if not less, as no legislation is forcing their hands.
TikTok recently announced a plan to have users indicate whether their videos contain generative or deepfaked content. The platform also started allowing users to flag posts they believe contain such content. While this may seem like a step in the right direction, our experience as a company shows that deepfakes will continue to proliferate when user-led moderation and reporting are the only safeguards against them. Proactive deepfake detection of the kind offered by Reality Defender, which does not depend on user activity or user bias, is the strongest solution in the fight against this manipulative and dangerous content.
Historical Shortcomings
Many tier-one platforms have applied an approach similar to the one TikTok is currently employing. Time has proven that expecting users to willingly and accurately label their content as generative or deepfaked is unworkable. Most users creating deepfakes or misleading content do so with malicious intent, and they will not voluntarily reveal their deception. Asking users to self-report their content's authenticity is akin to asking criminals to disclose their illegal activities. In an era where misinformation can spread like wildfire, we cannot rely on the good faith of users to police themselves.
The policy's reliance on user-generated flags presents another point of vulnerability. By offloading the responsibility of detecting deepfakes onto the user base, TikTok sets itself up for a storm of false positives and negatives. Untrained users are likely to misidentify genuine content as deepfakes, leading to unnecessary scrutiny while actual deepfakes slip through the cracks. This decentralized approach also opens the door for users with their own biases and agendas to manipulate the system, creating new harms where none previously existed.
A Proactive Approach
Reality Defender represents the true future of deepfake detection. By utilizing models trained on known and emerging deepfake and generative techniques, our platform proactively identifies and flags this content with a level of accuracy unmatched by human users. This approach takes the responsibility for detection out of the hands of individual users and places it squarely on the shoulders of platforms using cutting-edge AI.
A proactive deepfake detection platform gives platforms like TikTok the advantage of constant evolution and adaptation to the latest deepfake and generative content models without ever asking users to lift a finger. As creators of deepfakes and generative content become more sophisticated, so too must the tools we use to detect them. By continuously refining and improving its detection models, Reality Defender can stay one step ahead of malicious actors in the digital space.
Social media platforms like TikTok must adopt more rigorous and effective methods of deepfake detection. By relying on user self-reporting and user-generated flags, TikTok is not only opening itself up to misuse and manipulation, but also failing to protect its users from the potentially harmful consequences of deepfakes and generative content. Only by embracing cutting-edge deepfake detection platforms like Reality Defender can we truly hope to combat the growing threat of misinformation and deception in our digital world.