Internet giants mobilize against fake videos

American comedian James Meskimen looks at the camera and begins his act. He mimics the voices and gestures of several famous figures: John Malkovich, Colin Firth, Robert De Niro... But as he changes characters, his face changes too, and in place of his own appears the face of whoever he is imitating: Arnold Schwarzenegger, George Bush or even Morgan Freeman. The spectacular result is not due to any superhuman ability of this American impersonator, but to his collaboration with a popular creator of deepfakes. Video manipulation algorithms are increasingly accessible and accurate, and they achieve results as spectacular as Meskimen's. Not all manipulations are so playful, however: several internet giants believe these deceptions could be used to manipulate public opinion in processes of great social importance, such as elections, so they have started investing in methods to detect such fake videos.

Last September, Google released an open-source database containing some 3,000 manipulated videos, while a few weeks earlier Facebook had announced that it would release a similar one later this year. These databases will help various artificial intelligence projects learn to identify fake videos automatically, so that the spread of lies across the network can be stopped.

The situation has changed greatly since 2017, when a Reddit user named Deepfakes posted several manipulated pornographic videos. Until then, the computational cost of creating a fake video of reasonable quality was very high, so the technique was within reach only of professional teams. However, "the problem has evolved a lot over the last year," explains Technical University of Munich researcher Andreas Roessler, and "today, there are multiple online methods that allow anyone to create manipulated facial videos."

Making a deepfake usually requires two video clips that are merged using an artificial intelligence technique called deep learning, hence the "deep" in the name of these videos. The algorithms learn the appearance of each face so that one can be superimposed on the other, preserving the movements of certain features, such as the eyebrows, mouth or eyes, while swapping the faces themselves.
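The classic deepfake recipe can be summarized in a few lines of code. The sketch below, written in PyTorch, is a deliberately minimal illustration of the idea described above, not the implementation of any specific tool: a shared encoder learns a common representation of both faces, one decoder per identity learns to reconstruct its own face, and the swap consists of decoding one person's expression with the other person's decoder. All layer sizes, image resolutions and training details are illustrative assumptions.

```python
# Minimal sketch of the shared-encoder / two-decoder deepfake idea.
# Everything here (sizes, losses, data) is a simplified assumption.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),  # shared latent "face code"
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )
    def forward(self, z):
        h = self.fc(z).view(-1, 64, 16, 16)
        return self.net(h)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity
opt = torch.optim.Adam(
    list(encoder.parameters())
    + list(decoder_a.parameters())
    + list(decoder_b.parameters()),
    lr=1e-4,
)
loss_fn = nn.L1Loss()

# Stand-in batches of 64x64 face crops for identities A and B.
faces_a = torch.rand(8, 3, 64, 64)
faces_b = torch.rand(8, 3, 64, 64)

for step in range(3):  # a real run would take many thousands of steps
    opt.zero_grad()
    # Each decoder learns to reconstruct only its own identity.
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) + \
           loss_fn(decoder_b(encoder(faces_b)), faces_b)
    loss.backward()
    opt.step()

# The swap: encode A's expression, render it with B's appearance.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a))
```

Real tools add face detection, alignment and blending around this core, but the two-decoder trick is what lets one face "wear" another's movements.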

One of the reasons behind the spread of this type of video is the technological progress of recent years, especially in the field of facial recognition. "These methods are the fruit of virtual and augmented reality research and of the fact that we have increasingly powerful methods to analyze faces," says Roessler.

"Body":A battle of artificial intelligences

There are currently several methods for detecting these manipulated videos, but most require human supervision, which slows the process considerably. Several projects are therefore under way to build automatic detection tools; one of them is FaceForensics, of which Roessler is one of the lead researchers.

A joint initiative of researchers from the Technical University of Munich and the Federico II University of Naples, the project has generated almost 1,000 fake videos using four common facial manipulation methods. The idea is that these videos serve to train an artificial intelligence, also based on deep learning, so that it learns to detect deepfakes without human intervention. In essence, it is a battle between two artificial intelligences: the one that generates the manipulated videos and the one that tries to detect them.
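The detection side can be sketched just as compactly. The following hypothetical example, again in PyTorch, trains a small binary classifier on individual frames labelled real or fake, in the spirit of what a FaceForensics-style dataset enables; the architecture and the stand-in data are assumptions for illustration, not the project's actual model.

```python
# Illustrative deepfake detector: a tiny CNN classifying frames as
# real (0) or manipulated (1). Sizes and data are placeholders.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, 1),  # one logit: how likely the frame is fake
)
opt = torch.optim.Adam(detector.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

# Stand-in batch: 8 frames from real videos, 8 from manipulated ones.
frames = torch.rand(16, 3, 64, 64)
labels = torch.cat([torch.zeros(8), torch.ones(8)]).unsqueeze(1)

for step in range(3):  # illustrative; real training sweeps the full dataset
    opt.zero_grad()
    loss = loss_fn(detector(frames), labels)
    loss.backward()
    opt.step()

# At inference time, a sigmoid over the logit gives a per-frame "fake" score.
score = torch.sigmoid(detector(frames[:1]))
```

This is why the size of the training database matters so much: the classifier only learns the artifacts it has seen examples of.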

According to Roessler, "these machine learning models have produced superior results compared to other approaches." However, he clarifies, "they have the disadvantage that we need to provide a lot of data to be able to train them properly." That is where the internet giants come in: they can generate databases with thousands of manipulated videos to help train these new tools.

This particular project receives direct support from Google, which created its database of deepfakes by working with 28 actors, with whom it recorded hundreds of videos of different actions. It then used several open-source models for generating manipulated video to create the approximately 3,000 videos that have been added to the FaceForensics database.

"Body":A half-hearted struggle against lies

Facebook has also focused its attention on detecting these manipulated videos and last month announced, together with Microsoft, Amazon and researchers from various international institutions, the launch of the Deepfake Detection Challenge, a project that will offer cash rewards for the best automatic detection methods.

"People have manipulated images since photography, but now almost everyone can create and distribute fake images to a massive audience," says Antonio Torralba, a project member and director of the IBM Watson artificial intelligence lab at the Massachusetts Institute of Technology (MIT). "The aim of this contest is to build AI systems that can detect the small imperfections of a manipulated image and thus expose the falsity of these videos.

However, these methods are far from solving the underlying problem: once a detection method manages to identify the small errors in a manipulated video, the generation algorithm can be updated to correct those imperfections. For this reason, some researchers argue that the fight against deepfakes cannot rest on technical means alone, but will also require political and social measures to limit the incentives that encourage their creation, a terrain on which the role of the internet giants is not so clear.
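That cat-and-mouse dynamic has a familiar technical form: it is the alternating optimization behind generative adversarial networks. The sketch below is an illustrative assumption, not a description of any deployed system; it shows how each detector update produces exactly the gradient signal the generator then uses to erase the flaws that betrayed it.

```python
# GAN-style arms race between a fake generator and a detector.
# All shapes and data are stand-ins for illustration only.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(64, 3 * 16 * 16), nn.Sigmoid())
detector = nn.Sequential(nn.Linear(3 * 16 * 16, 1))
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(detector.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.rand(8, 3 * 16 * 16)  # stand-in real frames, flattened
for step in range(3):
    noise = torch.randn(8, 64)
    fake = generator(noise)

    # Detector step: learn to separate real (label 1) from fake (label 0).
    opt_d.zero_grad()
    d_loss = bce(detector(real), torch.ones(8, 1)) + \
             bce(detector(fake.detach()), torch.zeros(8, 1))
    d_loss.backward()
    opt_d.step()

    # Generator step: update the fakes so the detector calls them real,
    # correcting precisely the imperfections the detector exploits.
    opt_g.zero_grad()
    g_loss = bce(detector(fake), torch.ones(8, 1))
    g_loss.backward()
    opt_g.step()
```

Every published detector, in other words, doubles as a training signal for the next generation of fakes, which is why purely technical fixes keep expiring.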

Earlier this year, Facebook refused to remove several videos in which Nancy Pelosi, the Democratic Speaker of the US House of Representatives, appeared to have health problems and, earlier this month, it did the same with an advertisement purchased by Trump's campaign team containing false information about Democratic primary candidate Joe Biden. These decisions show that, although the social network has decided to invest in the automatic detection of manipulated videos, it has no intention of removing the false information circulating on its platform.