Deepfakes: Two sides of a Yin and Yang
With 900% year-over-year growth in deepfake videos online, the trend is set to gain steam and reshape society in innumerable ways. Are you ready for some chaos?
The line between truth and what we see - What are deepfakes?
Have you seen The Queen of England's Christmas video joking about Harry and Meghan, or Prince Andrew delivering a speech laced with the finest British humor? What about a YouTube video of Barack Obama calling Donald Trump a "complete dipshit"? Or maybe you've seen Mark Zuckerberg giving a disturbing speech about Facebook's power? What about Tom Cruise's TikTok video playing golf? If you answered yes to any of the above, then you've experienced deepfakes: videos manipulated with machine learning to impersonate real people.
To understand this technology, let's look at the origins of deepfakes in 2014 with what we call Generative Adversarial Networks (GANs). In essence, a GAN pits two neural networks, called the generator and the discriminator, in direct competition with one another. Simply put, these two algorithms train each other: the first tries to make counterfeits as convincing as possible, while the second tries to detect the fakes. Think of this as a game of cat and mouse, with each side learning the other's methods in a constant escalation.
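To make the cat-and-mouse game concrete, here is a minimal, illustrative sketch of the adversarial setup: a one-dimensional "generator" learns an affine map of noise while a logistic "discriminator" tries to tell its output apart from real samples. All the numbers and the toy models here are invented for illustration; real GANs use deep networks trained with frameworks like PyTorch or TensorFlow.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy setup: "real" data is drawn from N(3, 1).
# Generator: G(z) = w * z + b, fed noise z ~ N(0, 1).
# Discriminator: D(x) = sigmoid(a * x + c), a logistic classifier.
w, b = 1.0, 0.0   # generator parameters
a, c = 0.1, 0.0   # discriminator parameters
lr = 0.01

for step in range(2000):
    z = rng.normal(0, 1, 64)
    fake = w * z + b
    real = rng.normal(3, 1, 64)

    # Discriminator update: push D(real) -> 1 and D(fake) -> 0.
    d_real, d_fake = sigmoid(a * real + c), sigmoid(a * fake + c)
    grad_a = np.mean((d_real - 1) * real) + np.mean(d_fake * fake)
    grad_c = np.mean(d_real - 1) + np.mean(d_fake)
    a -= lr * grad_a
    c -= lr * grad_c

    # Generator update: push D(fake) -> 1, i.e. fool the discriminator.
    d_fake = sigmoid(a * fake + c)
    grad_w = np.mean((d_fake - 1) * a * z)  # chain rule through D(G(z))
    grad_b = np.mean((d_fake - 1) * a)
    w -= lr * grad_w
    b -= lr * grad_b

# After training, the generator's samples drift toward the real distribution.
samples = w * rng.normal(0, 1, 1000) + b
print(f"generated mean after training: {samples.mean():.2f} (real mean is 3)")
```

Even in this toy version, the escalation is visible: every improvement in the discriminator's gradients feeds directly into the generator's next update, which is exactly why deepfakes keep getting harder to spot.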
GANs have many applications, and they are the technology underpinning deepfakes, used to create convincing images, audio, and video. The term officially made its appearance back in 2017 on the community platform Reddit, inherited from a user who digitally swapped the faces of celebrities such as Gal Gadot and Scarlett Johansson onto the bodies of X-rated actresses. Since then, the examples have multiplied and gone far beyond the framework of pornography.
Deepfakes blending into our lives – Easy
If you go on Google and type "deepfakes," you'll find a staggering number of articles explaining how deepfakes can create havoc in society, ranging from political manipulation to revenge porn, with the potential to ruin people's or institutions' reputations and ultimately destroy trust in society. While this is a vivid concern (and I'll cover this part to some extent in my third section), the technology itself isn't bad. As UX and Dataviz researcher Sven Charleer explains in his article on Towards Data Science, deepfakes deserve appreciation, not crucifixion: "Sure, putting celebrities' faces on your favorite porn stars is an interesting use case. But we can leverage these celebrities for other things, such as inserting your friends and family into blockbuster movies and shows!"
And indeed, numerous apps have emerged in this area over the past two years. These are often referred to as "cheapfakes" rather than deepfakes because, even though they can be fairly realistic in some cases, one can easily spot that they are not real videos, and they are intended only for recreational purposes. Here are some examples:
Reface - backed by VC fund a16z - is the most emblematic example of this movement and made it to the top of the fun-app charts in both the App Store and Google Play in 2020. Reface has been downloaded over 100 million times and allows you to morph your face, switch it with celebrities', and star in popular TV shows and movies.
Wombo - 3 million downloads on the App Store in March 2021 - is a similarly entertaining app that synchronizes lip movements, letting you make anyone sing from a simple photo.
Deep Nostalgia from MyHeritage animates still photos from any camera, putting lifelike motion into human faces. You can use it to bring historical figures – or an old deceased family member you’ve probably never met – to life. Sounds cool, but also a bit creepy.
Avatarify intends to help you "become whoever you want," letting you create living avatars that mimic your actions by swapping your face with a celebrity's. You can even use it in live video calls on Zoom or Skype. (Better than renting a goat to liven up that draining call, right?)
And there are many more. Note that deepfakes for fun or entertainment aren't just apps. Early on, the movie industry welcomed deepfakes for specific scenes, saving the cost of hiring a stunt double. This is how Grand Moff Tarkin appeared in Rogue One: A Star Wars Story, released in 2016. Now, usage is growing in other sectors of the entertainment industry. Check out Zizi, an extremely well-executed deepfake drag cabaret (really, check it out, it's truly epic), or Take This Lollipop, an interactive horror movie via webcam. According to Jason Zada, creator of this interactive sequence mixing deepfakes and other special effects, 100 million people have lived the experience, with 300 million more views on TikTok and YouTube.
Big businesses are diving in – Media and its opportunities
From a broader perspective, deepfakes are a branch of synthetic media (content generated or modified by AI), and it's a vast market. Samsung Next Ventures mapped the landscape a couple of months ago in a must-read whitepaper for anyone interested in the subject.
According to them, Synthetic Media heralds the start of the third evolutionary stage of Media:
1- The past, via old media: enabled mass distribution for a select few through TV, radio, and print. This was made possible through broadcasting.
2- The present, via the internet: new media-enabled democratized distribution for everyone through social media.
3- The future, via AI & deep learning: synthetic media will democratize media creation and creativity for everyone in ways you can’t imagine.
When you think about it, this goes in line with the development of the creator economy, enabling use cases like digital twins for celebrities, artists, influencers, and experts. These would allow them to scale by solving crucial pain points: less fatigue, unprecedented levels of creativity, and ultimately more gigs and money.
But it goes even further than just the booming creator economy. In a future D2A (Direct-to-avatar world), synthetic media will allow the creation of all kinds of avatars. As the report explains: "some will look like comic characters, some will look and act like real humans, some won't even be embodied." This application will also allow fashion brands to offer a top-notch personalized online shopping experience by giving customers the chance to see themselves in the clothing advertised on social media and on shopping sites.
But there are more potential applications. Here are some concrete examples:
Marketing: I recently spoke with Alex Robinson, the founder of Vidon, a service that helps you create a deepfake that looks and talks like you, enabling scalable video prospecting with AI-personalized videos. As Robinson explained to me: “Companies have been personalizing text-based emails for decades, but now we can bring deep personalization to video thanks to deepfake technology. This has helped our clients bring the intimate video experience to their customers."
Arts & Culture: In 2018, the Illinois Holocaust Museum and Education Centre created hologrammatic interviews, allowing visitors to talk and interact with Holocaust survivors and bringing history back to life.
Edtech: In a similar vein, deepfakes can bring a new dimension to learning, making it more entertaining and memorable. Recently, UK-based Edtech Guide Education raised £6 million for what they call their “Netflix for teacher training”. In the words of its CEO, Leon Hady: “Our application of Deepfake technology means pupils can have Stephen Hawking teaching physics or Shakespeare explaining his writing - this will be as revolutionary for education as the first blackboard was.”
Healthcare: The ALS association's Revoice project, for example, allows ALS patients who have lost the ability to speak to continue using their voices. How does it work? By using deepfakes to create customized synthetic voice tracks that can be played on-demand with a sound console.
Fascinating, right? Do keep in mind, however, that companies operating in this field have a huge challenge to face: they need to commit to high ethical standards, making sure their technology won't be misused for harmful applications.
Trust is in trouble – Is it time to panic?
Forgery existed long before deepfakes came onto the scene, notably through software like Photoshop, but the difference here is the "closeness to authenticity." Some experts describe the technology as "Photoshop on steroids," highlighting that manipulated audio or video content can be much more striking.
Sliding from trust and credibility into doubt and skepticism: if we can be so easily fooled by what we see and hear, what do we do? What happens then?
Multiple initiatives have been launched to tackle the deepfake issue, and while they can certainly help, none is 100% effective. The technology gets better every day, making the artisanal techniques used to spot a deepfake today (like unnatural blinking, or the 15 other signs to watch for listed in this article) obsolete tomorrow. Human detection techniques just won't hold up in the long run, so we need to think about other options.
Digital signatures or watermarks - These are meant to serve as a trail of originality for content such as videos and photos. Think of them as a seal of authenticity. The idea is to bring hardware-backed security to photos and videos; Truepic, for instance, is working with Qualcomm to bring this to your future phone. The problem with this technique? While it can certify that a video has not been manipulated or fabricated, it doesn't solve every problem. Sometimes the issue is not the authenticity of the clip per se, but the authenticity of the narrative - in other words, the context around the sequence and how it was captured. Furthermore, there are fears that hackers could inject false content into the camera's circuitry, causing the host device to endorse falsified content as authentic.
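As a rough illustration of the "seal of authenticity" idea, the sketch below signs an image's bytes at capture time and verifies them later, so that any pixel-level edit breaks the seal. It is a deliberate simplification: a real device-attestation scheme (like the one Truepic and Qualcomm are building) would use asymmetric keys held in secure hardware, not the shared-secret HMAC and hypothetical device key shown here.

```python
import hashlib
import hmac

# Hypothetical shared secret standing in for a key held in secure hardware.
DEVICE_KEY = b"secret-key-burned-into-the-camera"

def sign_capture(image_bytes: bytes) -> str:
    """Produce a tamper-evident signature over the raw bytes at capture time."""
    return hmac.new(DEVICE_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify_capture(image_bytes: bytes, signature: str) -> bool:
    """Check that the bytes still match the signature made at capture time."""
    expected = hmac.new(DEVICE_KEY, image_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

original = b"\x89PNG...raw pixel data..."
sig = sign_capture(original)

print(verify_capture(original, sig))            # prints True: untouched frame verifies
print(verify_capture(original + b"edit", sig))  # prints False: any manipulation breaks the seal
```

Note how this captures exactly the limitation described above: the seal proves the bytes are unchanged since capture, but says nothing about whether the scene itself was staged or the surrounding narrative is honest.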
Deepfake Detection - The goal of these tools is to add a “trust layer,” allowing you to scan a suspicious video to determine whether it has been synthetically manipulated, returning what can be called a "percentage chance" or "confidence score." Among the startups operating in this field, let's mention Sensity, Sentinel, Kroop, DuckDuckGoose, Deepware, and Defudger. Kroop AI, for instance, has demonstrated its VizMantiZ detector on real and fake videos of Tom Cruise - the fake went viral online, showing just how hyper-realistic deepfakes can be - and is working on fighting disinformation through explainable deepfake detection.
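To illustrate what a "confidence score" might look like in practice, here is a hypothetical sketch of how a detection tool could aggregate per-frame model scores into a video-level verdict. The frame scores, the function name, and the simple averaging are all invented for illustration; real detectors get per-frame probabilities from trained classifiers and use far more sophisticated aggregation.

```python
from statistics import mean

def video_confidence(frame_scores, threshold=0.5):
    """Aggregate per-frame fake probabilities into one video-level verdict.

    frame_scores: probabilities (0..1) that each sampled frame is synthetic,
    as a real detector's classifier might emit them.
    """
    score = mean(frame_scores)  # naive average; real tools weight frames differently
    return {
        "fake_probability": score,
        "verdict": "likely manipulated" if score > threshold else "likely authentic",
    }

print(video_confidence([0.91, 0.88, 0.95, 0.79]))  # a suspicious clip
print(video_confidence([0.04, 0.10, 0.07, 0.02]))  # a clean clip
```

The threshold is where the hard trade-offs live: set it too low and authentic videos get flagged, too high and well-made fakes slip through, which is one reason none of these tools can honestly claim 100% accuracy.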
Big players are also tackling this issue. The US Defense Department is working on something similar, and last year, ahead of the U.S. election, Microsoft launched its own deepfake detection tool called Video Authenticator. The problem, however, is that while these tools give good results, they are not 100% accurate. According to Dr Jyoti Joshi, co-founder at Kroop AI: “A challenge in deepfakes detection is creating an inclusive detector. As earlier seen with face analysis systems, ethnicity and gender imbalance can create systems to misclassify for some underrepresented data. The stakes are even higher when it comes to deepfakes detection.” Another important point to stress is that to fight deepfakes, these tools have to operate like deepfakes themselves: the process is very time-demanding, as they need to gather and build intelligence on an ongoing basis to stay relevant. It's a game of cat and mouse.
The future is sure to bring even more options. For example, besides having launched its own solution in the field called SimSearchNet++, Facebook has also partnered with other industry leaders and academic experts to establish the Deepfake Detection Challenge (DFDC), with the goal of accelerating the development of new deepfake detection methods.
Regulation is another hot topic, but experts are torn on how to regulate such a space. The bottom line is that this is a complex issue. In a world where anything can be faked, everything can also be denied. This phenomenon is not new (like fake news in general) and has been coined the “Liars’ Dividend.” As Rachel Botsman, an expert and author on trust in the modern world, puts it in an article for Wired, the problem with deepfakes may be that people don’t care what’s real. As she explains: “The greatest trust threat for the next generation isn’t being deceived by deepfakes. The danger is that we will regard almost all information as untrustworthy,” a state of mind that Aviv Ovadya, a media researcher and founder of the Thoughtful Technology Project, calls “reality apathy.” Perhaps instead of worrying about the erosion of trust, our immediate focus should be on rebuilding care in how we trust, thereby saving people from falling for anything they encounter.
But what if this goes down the drain? What if it has already slipped out of our hands? Could it already be too late?