
Deep Fake

Deepfakes are pretty easy to make, which is alarming when you consider the potential for misuse. Early deepfakes were mainly used for explicit content, but even more worrying are possible uses such as fabricating alibis in court, blackmail, or terrorism. In a previous article, I discussed how we can better handle the threat of deepfakes through transparency, regulation, and education, helping to identify and fight the wrongful use of this technology. In this article, I'll explain in more detail how deepfakes work technically.


Today, almost anyone can tweak videos, sounds, and pictures to make them appear as something else. No coding skills are needed to generate a deepfake. You can make one for free in less than half a minute using websites like My Heritage, D-ID, or any of the numerous free deepfake apps. Remember, always use these tools responsibly.


Does this sound too easy? Hang on, are AI and deep learning really that simple? Well, not exactly. There's a big difference between using a model and training one. All deepfake tools rely on Artificial Intelligence (AI) models, and before anyone can offer a user-friendly tool, such a model has to be built and trained. That requires a lot of training data, and producing these models is far from straightforward.


These models are built on neural networks, an architecture loosely inspired by how our brains process information. Unlike the brain, though, which is dynamic and analog, artificial neural networks are static and digital. If you were to look inside a neural network model, you'd see a series of layers of mathematical functions.
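
To give you a rough idea of what "layers of mathematical functions" means, here is a tiny sketch in Python (using only NumPy). The layer sizes and the ReLU/sigmoid functions are just illustrative choices on my part, not taken from any particular deepfake model.

import numpy as np

def relu(x):
    # Non-linear activation: keeps positive values, zeroes out negatives.
    return np.maximum(0, x)

def sigmoid(x):
    # Squashes any number into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

# A "layer" is just a matrix multiplication plus a bias, followed by an activation.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)   # layer 1: 3 inputs -> 4 hidden values
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)   # layer 2: 4 hidden values -> 1 output

def tiny_network(x):
    # The whole "model" is just these functions applied one after another.
    hidden = relu(W1 @ x + b1)
    return sigmoid(W2 @ hidden + b2)

print(tiny_network(np.array([0.5, -1.2, 3.0])))  # prints a single score between 0 and 1

Training a network means nudging the numbers inside W1, b1, W2, and b2 until the outputs match what we want; the models behind deepfakes contain millions of such numbers.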


The architecture behind deepfakes

An academic paper by Goodfellow and others in 2014 rekindled interest in deepfakes by introducing a new deep learning architecture named Generative Adversarial Networks (GANs). In a GAN, two neural networks are set up to compete against each other (that's why it's called "adversarial").


The first network, the generative network, creates a realistic image from a random seed (starting point) through a process known as decoding. It's a bit like reversing the process of pixelating an image.

To help visualize this, imagine you have a blurry image. The generative network's job is to fill in the gaps and make that image clear. This network will continually produce images based on the data it has been trained on.
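
As a rough sketch of what "decoding a random seed into an image" can look like, here is a minimal generator written in PyTorch. The 64x64 resolution, the layer sizes, and the use of transposed convolutions are assumptions for illustration; real deepfake generators are much larger and trained on enormous face datasets.

import torch
import torch.nn as nn

class Generator(nn.Module):
    """Decodes a random seed (latent vector) into a small RGB image."""
    def __init__(self, latent_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            # Start from a 1x1 "pixel" with latent_dim channels and repeatedly upsample.
            nn.ConvTranspose2d(latent_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),  # 4x4
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),         # 8x8
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),           # 16x16
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(True),            # 32x32
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),                                     # 64x64 RGB
        )

    def forward(self, z):
        return self.net(z)

# Sample a random seed and decode it into an image tensor.
z = torch.randn(1, 100, 1, 1)      # the random starting point
fake_image = Generator()(z)        # shape: (1, 3, 64, 64), values in [-1, 1]
print(fake_image.shape)

With untrained (random) weights this only produces colourful noise; all of the realism comes from training.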


The second network in a GAN is the discriminative network. This network's job is to distinguish between real and generated (fake) images. Essentially, it's the quality checker. If it can't tell the difference between a real image and a generated one, then the generative network is doing a good job.

These two networks constantly compete with each other. The generative network tries to 'trick' the discriminative network with its generated images, and the discriminative network continually learns to get better at identifying fake images. Through this cycle of generation and discrimination, the system gradually improves over time, leading to more and more convincing deepfakes.
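
This cycle of generation and discrimination is exactly what a GAN training loop implements. Below is a heavily simplified sketch in PyTorch: it reuses the Generator from the earlier sketch, adds a small discriminator, and assumes a hypothetical real_images_loader that supplies batches of real 64x64 images. Real GAN training needs far more data, compute, and practical tricks than this.

import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Maps a 64x64 RGB image to a single real-vs-fake score (a logit)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2),     # 64x64 -> 32x32
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),   # 32x32 -> 16x16
            nn.Conv2d(128, 256, 4, 2, 1), nn.LeakyReLU(0.2),  # 16x16 -> 8x8
            nn.Flatten(), nn.Linear(256 * 8 * 8, 1),
        )

    def forward(self, x):
        return self.net(x)

generator, discriminator = Generator(), Discriminator()   # Generator from the sketch above
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

for real_images in real_images_loader:   # real_images_loader: hypothetical stream of real face images
    batch = real_images.size(0)
    real_labels, fake_labels = torch.ones(batch, 1), torch.zeros(batch, 1)

    # 1) Discriminator step: learn to separate real images from generated ones.
    fake_images = generator(torch.randn(batch, 100, 1, 1)).detach()  # detach: don't update G here
    d_loss = (loss_fn(discriminator(real_images), real_labels)
              + loss_fn(discriminator(fake_images), fake_labels))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Generator step: try to get its fakes labelled "real" by the discriminator.
    g_loss = loss_fn(discriminator(generator(torch.randn(batch, 100, 1, 1))), real_labels)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()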


However, while this process might sound relatively straightforward, in reality, training these networks requires significant computational power and vast amounts of data. Furthermore, the mathematical and programming knowledge required to build and train these models from scratch is quite advanced.

Now, when you use a deepfake app or website, you're not training these models yourself. Instead, you're utilizing models that have already been trained by others, which is why it can seem so simple. You provide the inputs (such as a photo or video), and the pre-trained model does the rest, generating a convincing deepfake.
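
In code terms, using a deepfake app boils down to something like the sketch below: load weights that someone else has already trained, and run only the generation step. The file name pretrained_generator.pt is purely hypothetical, and a real face-swap app would also take your photo or video as an input rather than just a random seed.

import torch

# "pretrained_generator.pt" is a hypothetical file standing in for weights that
# someone else has already spent the data and compute to train.
generator = Generator()                                  # same architecture as the sketch above
generator.load_state_dict(torch.load("pretrained_generator.pt"))
generator.eval()

with torch.no_grad():                                    # no training happens here, only generation
    z = torch.randn(1, 100, 1, 1)                        # the user-side "input"
    image = generator(z)                                 # the pre-trained model does the rest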


While deepfakes can be fun and entertaining when used responsibly, they also pose serious ethical and legal challenges. As these technologies continue to develop, it's crucial that we as a society develop strategies to manage their potential misuse.
