We need protection against deep fakes

The issue

You wake up this morning; the sun shines bright and you're feeling fresh, ready to make this day rock. Checking your phone, you're surprised to see hundreds of unread notifications. The constant buzzing seems unstoppable, and the list of messages keeps getting longer. Your brain cannot make sense of it. You open Twitter and find hundreds of infuriated DMs. What the fuck just happened? Scrolling back through your activity feed, a video catches your attention: a popular YouTuber is commenting on some guy's deeply racist behaviour that drove the internet crazy. Your brain freezes: that guy is you.

This is all made up, but is it really far from reality? With the recent advances in AI and the scandals associated with its use in manipulating crowds, you don't need to be a science fiction author to forecast how bad this could turn in the near future. Damn, the barrier to entry is so low that literally anyone can start playing with it: you don't need to be a programmer or a data scientist, and it costs less than a stamp to generate a realistic fake image of you as Iron Man. You don't even need to dive into the shady abysses of the dark web: just typing "deep fake" into Google yields sites offering everything you need to create a deepfake video, alongside the dozens of others already leveraging this technology for good or questionable reasons.

Solutions are still weak

A major part of the solution comes from our ability to detect such deepfake attempts. The Europol report on deepfake challenges for law enforcement splits detection capabilities in two: manual detection by humans and automated detection. The first approach relies on the weaknesses of AI models, which often fail to produce output realistic enough to fool a human, leaving telltale signs such as a lack of blinking, inconsistencies in the hair, unnatural vein patterns, and so on.
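
The "lack of blinking" cue can even be turned into code. Below is a minimal sketch, assuming you already have per-frame eye landmarks from a library such as dlib or MediaPipe; the eye-aspect-ratio trick comes from Soukupová and Čech's blink-detection paper, and the thresholds and synthetic data here are illustrative, not from the Europol report.

```python
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """Eye aspect ratio (EAR) from 6 eye landmarks, as in Soukupova &
    Cech: the ratio drops towards 0 when the eye closes."""
    v1 = np.linalg.norm(eye[1] - eye[5])  # vertical distances
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])   # horizontal distance
    return (v1 + v2) / (2.0 * h)

def blink_rate(ear_series, fps, closed_thresh=0.21, min_frames=2):
    """Count blinks in a per-frame EAR series and return blinks per
    minute. closed_thresh and min_frames are illustrative values."""
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < closed_thresh:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    minutes = len(ear_series) / fps / 60.0
    return blinks / minutes if minutes > 0 else 0.0

# Humans blink roughly 15-20 times a minute; a face that almost never
# blinks over a long clip is suspicious.
if __name__ == "__main__":
    fps = 30
    rng = np.random.default_rng(0)
    ears = 0.3 + 0.02 * rng.standard_normal(fps * 60)  # one minute, eyes open
    for start in range(0, len(ears), fps * 4):          # a blink every ~4 s
        ears[start:start + 3] = 0.1
    print(f"{blink_rate(ears, fps):.1f} blinks/min")
```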

The other approach leverages automation, using clever algorithms to tell truth from forgery. One example from the report is the analysis of "biological signals", which "tries to detect deepfakes based on imperfections in the natural changes in skin colour that arise from the flow of blood through the face".
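
To make that idea concrete, here is a minimal sketch, assuming you can already extract the mean green-channel intensity of the face region in each frame; the scoring heuristic and the synthetic demo data are mine, not from the report. A real face should show a spectral peak in the plausible heart-rate band, while a synthesised face often won't.

```python
import numpy as np

def pulse_strength(green_means, fps):
    """Given the mean green-channel intensity of a face region for each
    frame, look for a dominant frequency in the human heart-rate band
    (~0.7-4 Hz, i.e. 42-240 bpm). Returns the fraction of in-band
    energy concentrated at the strongest peak: a crude "is there a
    heartbeat?" score, with purely illustrative thresholds."""
    x = np.asarray(green_means, dtype=float)
    x = x - x.mean()                        # remove the DC component
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    return spectrum[band].max() / spectrum[band].sum()

if __name__ == "__main__":
    fps, seconds = 30, 10
    t = np.arange(fps * seconds) / fps
    rng = np.random.default_rng(1)
    # Pulse amplitude exaggerated for the demo; real rPPG signals are tiny.
    real = np.sin(2 * np.pi * 1.2 * t) + rng.standard_normal(t.size)  # ~72 bpm
    fake = rng.standard_normal(t.size)                                 # no pulse
    print(f"real face score: {pulse_strength(real, fps):.2f}")
    print(f"fake face score: {pulse_strength(fake, fps):.2f}")
```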

Will deep fakes become the new spam?

Both countermeasures are built on the assumption that the models contain flaws, which sounds reasonable, even though it seems equally reasonable to me to assume that those flaws will keep getting harder to spot. Eventually, this will turn into a cat-and-mouse game, similar to what we already see with spam or viruses, except that the potential for harm from deep fakes looks far greater to me. Only time will tell, but I feel this is a problem worth working on.