Can You Spot a Deepfake?

From Stephanie Lepp to Francesca Panetta and Halsey Burgund, artists are using AI to reveal the fragility of our trust in basic information

Barack Obama never said: ‘President Trump is a total and complete dipshit.’ But, in 2018, a deepfake of him did. Deepfakes are synthetic videos (or speech, or text) created by a machine-learning algorithm that can make faces, voices and even bodies appear to say, do or write things they never did. They rely on a type of deep learning called generative adversarial networks, in which two neural networks compete to produce a convincing replica: one generates fake images (here are some faces that look like Obama talking), while the other judges whether they are fake (this doesn’t look like Obama – do better), so the output is refined over and over until it looks very like the former US president.

Deepfakes are made using real training data, so the more media that is available of the person you’re trying to imitate, the more realistic the deepfake will be. The technology is improving so quickly that, as I write this, a fairly convincing deepfake of you could be generated from just your Facebook profile or Instagram stories, or even from a single photograph.
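For readers curious what that adversarial tug-of-war looks like in practice, here is a minimal sketch of a generative adversarial network in PyTorch. Everything in it – the network sizes, the random stand-in data, the hyperparameters – is an illustrative assumption, not the architecture of any real deepfake tool; the point is only the loop in which a generator tries to fool a discriminator and both improve as a result.

```python
# Minimal GAN sketch: a generator learns to produce fakes, a discriminator
# learns to tell them from "real" samples. Toy dimensions and random data
# stand in for real training media (e.g. frames of a target's face).
import torch
import torch.nn as nn

LATENT_DIM, DATA_DIM, BATCH = 16, 64, 32  # assumed toy sizes

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 128), nn.ReLU(),
    nn.Linear(128, DATA_DIM), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1),  # raw score: higher means "looks real"
)

loss_fn = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.randn(BATCH, DATA_DIM)              # placeholder "real" data
    fake = generator(torch.randn(BATCH, LATENT_DIM)) # generator's attempt

    # Discriminator step: push scores for real samples toward 1, fakes toward 0.
    d_loss = (loss_fn(discriminator(real), torch.ones(BATCH, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(BATCH, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make the discriminator score the fakes as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(BATCH, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

In a real deepfake pipeline the random vectors and toy layers would be replaced by images or audio of the target and by much larger convolutional networks, but the adversarial structure – one network forging, the other judging – is the same.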