Video is one of the most popular types of content consumed today. YouTube, the biggest video-sharing portal in existence, sees roughly 300 hours of fresh video uploaded to its servers every single minute, and a great deal more is posted to social networks like Instagram, Facebook, and Twitter. And this is without counting the video content served online by TV channels and other news outlets (many of which use cloud-based servers to deliver content to their audiences), streaming services, and the like. Much of this content is consumed on smartphones, small screens that make it harder to distinguish the details. Why is this important, you might ask? Because today there is a technique that can create fake videos capable of fooling the untrained eye: fun for some, a nightmare for others.
Smartphones are good for a great many things, from checking out odds at Betway to making payments. Due to their limitations, though, they are also the perfect medium for passing a fake recording off as real.
What is a “deepfake”?
The official definition of a deepfake, according to WhatIs.com, is "an AI-based technology used to produce or alter video content so that it presents something that didn't, in fact, occur". Basically, it is a technology based on deep learning that is used to, for example, edit the faces of celebrities onto recordings of others. Originally, it was used by a Reddit user to put the faces of various (female) celebrities onto the bodies of adult video stars. At first, creating a convincing deepfake video was quite hard; today it has become much easier (all it takes, basically, is enough source material and processing power), making the technique accessible to pretty much everyone.
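The classic face-swap setup behind many of these videos trains one shared encoder with a separate decoder per person, so a face can be encoded from person A and reconstructed as person B. The sketch below illustrates only that data flow, with untrained random NumPy layers standing in for the deep convolutional networks a real pipeline would train on thousands of frames; all dimensions and names are illustrative assumptions, not any specific tool's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

FACE_DIM = 64 * 64   # a flattened 64x64 grayscale face crop (illustrative size)
LATENT_DIM = 128     # shared latent representation

# Shared encoder weights: in a real system these are learned jointly
# on face crops of BOTH people, so the latent code captures pose and
# expression rather than identity.
W_enc = rng.normal(scale=0.01, size=(LATENT_DIM, FACE_DIM))

# Person-specific decoder weights: each decoder learns to render one
# person's face from the shared latent code.
W_dec_a = rng.normal(scale=0.01, size=(FACE_DIM, LATENT_DIM))
W_dec_b = rng.normal(scale=0.01, size=(FACE_DIM, LATENT_DIM))

def encode(face):
    """Map a face image to the shared latent space."""
    return np.tanh(W_enc @ face)

def decode(latent, W_dec):
    """Reconstruct a face from a latent code with one person's decoder."""
    return W_dec @ latent

def face_swap(face_of_a):
    """The deepfake trick: encode person A's face, then decode it with
    person B's decoder. After training, this renders B's face wearing
    A's pose and expression, frame by frame."""
    return decode(encode(face_of_a), W_dec_b)

frame = rng.random(FACE_DIM)  # stand-in for one video frame's face crop
fake = face_swap(frame)
print(fake.shape)             # same shape as the input face crop
```

Run per frame over a whole clip, this swap is what turns "enough source material and processing power" into a convincing fake; the hard part in practice is the training, not the architecture.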
Many famous faces have received the "deepfake treatment" recently. The examples include Facebook creator Mark Zuckerberg "speaking frankly" about his creation:
… or the former US President Barack Obama saying a few choice words about the current US President Donald Trump…
… or even actor Will Smith not turning down the role of Neo in the 1999 blockbuster The Matrix:
The problem with deepfakes, however, is not the fun people have with them. It is the fact that they erode trust in legitimate news sources: their very existence makes the public question the validity of any video posted online.
Academics are trying to create means of detecting deepfakes as we speak. The authorities have also joined the fight: last year, the US Senate introduced an act prohibiting the malicious use of deepfakes, and many countries will likely follow suit, with the UK leading the charge by promising to prosecute "deepfakers" for harassment. At the same time, social networks are considering introducing policies against deepfakes, with Twitter leading the way.
Deepfakes can be a lot of fun... but they can be very dangerous, too. Their unchecked spread can erode users' trust in news sources, especially video content. And without trust, there can be no reliable information.