How “Fake” Can Become Our Reality

In a digital age where A.I. has become prevalent, issues that were never under scrutiny before are beginning to arise. One particular concern is deepfakes: fake images or videos synthesized from existing images through machine learning. Although such digital deception may be appreciated in the theater, misuse of this technology could have significant consequences.

NVIDIA, a leader in A.I. and machine learning technology, received two awards for GauGAN this past July at SIGGRAPH, an annual conference on computer graphics. GauGAN is an A.I. art program capable of turning two-dimensional maps into realistic landscape images, and that's not all: it can also turn the simplest sketches into complex, realistic backgrounds, making it an extremely potent tool for artists. But what happens when this technology is abused? What happens when, instead of creating landscapes, GauGAN advances to the point at which it can spontaneously generate human profiles? This newest example of technological advancement just might add to the already complicated issue of deepfakes. Instead of comical videos of presidents delivering dumb punchlines, we may soon see them halfway across the world from where they really are, inciting local riots. With the deepfake technology that has rapidly emerged, cyber-terrorists may soon have all they need to spread undetectably convincing falsified information. What was once merely a nightmare has become a very real future, and a close one at that.

This isn’t the first project NVIDIA has done in this field. In April of last year, NVIDIA introduced a deep learning method for “image inpainting”: reconstructing corrupted images, removing unwanted parts of images, and realistically filling any holes left behind. Although inpainting is not a new concept, the method was the first of its kind to handle irregularly shaped holes effectively. The reconstructed sections showed no color discrepancies or blurriness, and it is precisely this effectiveness that makes the technology so dangerous. Coupled with GauGAN, it could be used to turn any idea into a very real image.

The truth of the matter is that falsified data is already extremely common, and A.I. has been used to spread disinformation in innumerable ways. In their article, “How A.I. Could Be Weaponized to Spread Disinformation,” Cade Metz and Scott Blumenthal describe how A.I. is already capable of imitating human writing. Although far from perfect, the fact that A.I. can imitate humans at something as creative and delicate as writing should alarm us all. Karen Hao provides another example of such technology in her article, “You Can Train an AI to Fake UN Speeches in Just 13 Hours.” As for what the article discusses… well, the title really says it all.
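The systems described in those articles are large neural language models, but the core idea of statistically imitating a writing style can be shown with something far simpler. The toy word-level Markov chain below is entirely my own illustration, not the articles' technology: it learns which words follow which in a sample text, then samples new text in the same style.

```python
# A toy illustration of statistical text imitation: a word-level
# Markov chain. Real systems use large neural language models;
# this only demonstrates the idea of learning and sampling
# word-to-word patterns from a source text.
import random
from collections import defaultdict

def train(text):
    """Map each word to the list of words observed right after it."""
    words = text.split()
    model = defaultdict(list)
    for current, following in zip(words, words[1:]):
        model[current].append(following)
    return model

def generate(model, start, length=10, seed=0):
    """Sample a chain of up to `length` words starting from `start`."""
    rng = random.Random(seed)
    word, output = start, [start]
    for _ in range(length - 1):
        choices = model.get(word)
        if not choices:
            break  # dead end: the word never appeared mid-text
        word = rng.choice(choices)
        output.append(word)
    return " ".join(output)

corpus = "the people believe the video and the video spreads"
model = train(corpus)
print(generate(model, "the"))
```

Even this trivial model produces text that sounds vaguely like its source; scale the same statistical idea up by many orders of magnitude and the output becomes hard to distinguish from human writing.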

So the question is, how do we differentiate between what is real and what is not? And what can A.I. institutions around the world do to help? Spreading the word is an obvious first step, and with all of the channels of communication available to us today, I have little doubt that a good portion of people have already heard about this topic.

A more difficult step, one that some A.I. labs have already taken, is publication. OpenAI, for instance, publicized the source code for its A.I.-powered text generator. By making the algorithm open-source (freely available to inspect and build upon), OpenAI has given countless new minds a starting point for finding solutions, an effective way of combating any issue. In other words, the lab made a sacrifice for “the greater good” despite the disadvantages it would face. And so the question becomes: will other organizations hold themselves to the same standards?
