Last week, the European Parliament passed the Artificial Intelligence Act, one of the largest pieces of legislation in existence specifically governing the use, creation, and implementation of AI systems.
Passing such legislation was no small feat. Within 15 days, all 27 EU member states will be covered by the following new requirements and regulations targeting AI:
- Bans on AI applications that threaten citizens' rights, including biometric categorization based on sensitive data, social scoring, and AI that could manipulate human behavior.
- Strict limitations on the use of biometric identification systems by law enforcement.
- Obligations for high-risk AI systems to assess and reduce risks, maintain transparency, and ensure human oversight.
- Transparency requirements for general-purpose AI systems and the models they are built on, including compliance with EU copyright law and publishing summaries of training content.
- Labeling requirements for artificial or manipulated content (read: deepfakes).
- Establishment of regulatory sandboxes and real-world testing to support innovation and small and medium-sized enterprises.
- Citizens' right to submit complaints about AI systems and receive explanations about decisions based on high-risk AI.
What does this mean for deepfakes and AI-generated content?
Labeling deepfakes is now a requirement in the EU, meaning any platform where deepfakes are disseminated (which, in essence, could mean every content platform) must now label them as such. Those building deepfake creation and generative AI tools must also assist in their labeling.
If this seems ambiguous as to how that labeling happens, that's because the mechanism is not, in fact, specified within the act. Nor are the ramifications for failing to check for deepfakes, though these may be spelled out after the new laws take effect.
Whether this means requiring all platforms operating within the EU to implement robust deepfake detection, or simply to check for watermarks and call it a day, is currently up in the air.
Is this a good precedent for legislation elsewhere?
Sort of.
We applaud those in the EU who rather briskly ushered in the largest piece of AI-related legislation to date, well ahead of the European Parliament elections (as well as several member countries' own national votes). The many requirements for companies working in or adjacent to the AI space will undoubtedly pave the way for creating and iterating on AI tools that not only reflect the real world, but (hopefully) do no harm to it as well.
That said, legislation is only as good as its enforcement and mechanisms of action. If deepfake detection or labeling becomes a user-driven effort (e.g., X's Community Notes), then deepfakes will continue to have an overall negative effect on the world's content platforms. If it means adding watermark checking (as has been Google's approach), then it will catch only a small percentage of the millions of pieces of AI-generated content uploaded to the internet each day, since most generative tools embed no watermark at all. Inference-based deepfake detection, the kind found on the Reality Defender platform and API, covers far more ground than watermarking alone, while still being able to work in tandem with it.
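To make the "work in tandem" point concrete, below is a minimal sketch of how a platform-side labeling pipeline could combine the two signals. Everything here is hypothetical: `check_watermark`, `score_with_detector`, and `AI_SCORE_THRESHOLD` are placeholders standing in for a real watermark decoder and a real detection service, not any actual vendor's API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LabelDecision:
    labeled: bool  # whether to attach an "AI-generated" label
    reason: str    # why the decision was made

def check_watermark(media_bytes: bytes) -> Optional[bool]:
    """Hypothetical stub: True if a provenance watermark decodes, None if absent."""
    return None  # most content on the open internet carries no watermark

def score_with_detector(media_bytes: bytes) -> float:
    """Hypothetical stub: 0..1 likelihood the media is AI-generated or manipulated."""
    return 0.0

AI_SCORE_THRESHOLD = 0.9  # hypothetical operating point; a platform would tune this

def decide_label(media_bytes: bytes) -> LabelDecision:
    # Cheap check first: a decoded watermark is a strong positive signal,
    # but its absence proves nothing, since most generators embed none.
    if check_watermark(media_bytes) is True:
        return LabelDecision(True, "provenance watermark found")

    # Fall back to inference-based detection, which also covers content
    # from generators that never watermarked in the first place.
    score = score_with_detector(media_bytes)
    if score >= AI_SCORE_THRESHOLD:
        return LabelDecision(True, f"detector score {score:.2f}")

    return LabelDecision(False, "no watermark; detector score below threshold")

# Example: a platform might run this on each upload before publishing.
print(decide_label(b"uploaded media bytes"))
```

The ordering reflects the argument above: a watermark, when present, is decisive, but the inference-based score carries the load for the vast majority of content that was never watermarked at all.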
In short, this is a good first step, albeit one that needs the second half — enforcement — to actually work and protect citizens of the EU from the many dangers of AI, deepfakes, and everything in between.