
The Deep Implications of Deepfakes

Following the rise of advanced technology and the spread of popular media, an innovation has emerged: DeepFake. Combining the words “deep learning” and “fake,” the software uses a deep-learning algorithm that identifies the face of an individual in a photo or video and superimposes another face onto it, creating a convincingly realistic yet fake image. The technology is capable of putting words into politicians’ mouths and recreating movie scenes with actors who never appeared in the original film. The software has been developed further with programs like Deep Video Portraits, which enables users to control a still-image portrait of another person simply by using their own face, creating a realistic video with the illusion that the portrait’s subject is the one performing the actions (to see a video demo of the software, click here). With such powerful abilities come numerous problems that must be considered before the technology spreads even further into the public domain.
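To make the “superimpose” step concrete, the toy sketch below pastes a source face patch onto a region of a target image using simple alpha blending. This is only an illustration of the compositing idea; actual deepfake software learns the face mapping with deep neural networks rather than blending raw pixels, and the function and parameter names here are invented for this example.

```python
import numpy as np

def superimpose_face(target, source_face, top, left, alpha=0.8):
    """Blend source_face onto target at (top, left).

    A toy stand-in for the superimposition step: alpha controls how
    strongly the source face replaces the underlying pixels. Real
    deepfake pipelines instead use a learned encoder/decoder to map
    one face onto another frame by frame.
    """
    h, w = source_face.shape[:2]
    region = target[top:top + h, left:left + w].astype(float)
    blended = alpha * source_face.astype(float) + (1 - alpha) * region
    out = target.copy()  # leave the original frame untouched
    out[top:top + h, left:left + w] = blended.astype(target.dtype)
    return out

# Tiny grayscale "images" just to demonstrate the call.
target = np.zeros((4, 4), dtype=np.uint8)        # dark background frame
face = np.full((2, 2), 100, dtype=np.uint8)      # bright "face" patch
result = superimpose_face(target, face, top=1, left=1)
```

With `alpha=0.8`, each pasted pixel becomes 80% source face and 20% original frame, which is why seams in naive swaps are visible; deep-learning approaches avoid this by generating pixels that match the target's lighting and pose.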

A major issue with DeepFake concerns media credibility. The software completely revolutionizes the term “fake news.” Any user with access to DeepFake can easily create a realistic video of a politician or celebrity seemingly saying or doing things they never said or did. For instance, in 2017, researchers at the University of Washington developed a synthetic Barack Obama that can lip-sync to any speech from a given audio file. By analyzing the mouth shape at each moment in time, the program creates a high-quality texture of the mouth and utilizes 3D pose matching to synchronize the verbal movements of the target video with those of the source video (see a video demonstration here).

As for the auditory aspects of the technology, software programs like Lyrebird, Adobe VoCo, and WaveNet are able to replicate any voice from given audio samples. This raises significant ethical questions and legal implications. Lyrebird’s founders acknowledge the various problems that their technology can bring, such as “misleading diplomats, fraud, and more generally any other problem caused by stealing the identity of someone else.” (Currently, a beta version of their software is available for free use, where users can record their own voice and create voice avatars; the program is planned to open to the public after its development.) Alexandre de Brébisson of Lyrebird also comments that “the situation is comparable to Photoshop” and that “people are now aware that photos can be faked,” predicting that “in the future, audio recordings are going to become less and less reliable [as evidence].” He concedes, however, that despite knowing about Photoshop, the public continues to believe in realistic fake images when they are presented in the right context, and the same will hold for synthesized speech.
By utilizing neural networks and deep learning to create AI clones eerily identical in appearance, speech, and action to their templates, DeepFake and similar technologies can dramatically change the way we consume information, raising a plethora of credibility problems for acts as simple as watching the news.

Other exploitations of the technology have also come into view. One significant issue is the use of the software to create fake adult videos. With the technology capable of easily putting one person’s face on another’s body, the problem of non-consensual fake pornographic material emerges. Users can easily replace the faces of adult film stars with those of celebrities or just about anyone. This raises significant legal issues regarding privacy and security, making any individual whose face has been recorded a potential target. The possibility of revenge pornography — the distribution of intimate and often sexually explicit images or videos of individuals against their will for humiliation purposes — is exacerbated by the easier means of producing such material through the technology. Adam Dodge, legal director of the California-based domestic violence agency Laura’s House, has raised concerns about the dangers of the software, even publishing an advisory on the threat of fake video technology with Erica Johnstone, co-founder of the nonprofit organization Without My Consent and partner at a San Francisco law firm (read more here). “A lot of people didn’t even realize this technology existed, much less that it could be misused or weaponized against the population we serve every day,” asserts Dodge. “This is nonconsensual porn on steroids.” Attempts have been made to curb the technology, with online communities including Reddit, Discord, Gfycat, and Twitter banning fake face-swapped pornographic material. However, the technology is still bound to expand, and such restrictions are only temporary alleviations for a nascent yet already robust industry.

To end on a positive note, the technology is groundbreaking for the entertainment industry. The software has already been used to recreate the appearances of movie characters whose actors have aged or passed away. For instance, Guardians of the Galaxy Vol. 2 (2017) featured a young 1980s version of Kurt Russell, and in Rogue One: A Star Wars Story (2016), Wilhuff Tarkin, played by Peter Cushing, who passed away in 1994, was able to make a posthumous appearance. Given the numerous implications of this ingenious yet nerve-racking technology, limitations must be placed on its use. As the platitude goes, “With great power comes great responsibility” — DeepFake and similar programs are no exception.
