Manipulating a video is becoming easier and easier, and deepfaking, the AI-based synthesis of human images, is no longer an unreachable frontier. With today's cutting-edge technology and tools, we can make a person in a video say or do anything we want. Apart from a few examples of comedy or harmless jokes, the scenarios that are opening up are alarming: scams, cyberbullying, revenge porn, computer crimes. But we should not despair, because if there are those who can create a fake, there are also those who can detect it. We are talking about the Media Lab of DISI, the Department of Information Engineering and Computer Science of the University of Trento, which has worked for years on the forensic analysis of multimedia data and is now leading two projects aimed at unmasking and tracing deepfakes.
The University of Trento and the "Unchained" and "Premier" projects
For more than ten years, the University of Trento has been studying algorithms capable of identifying manipulations of multimedia files. Starting with the analysis of photographs, the team of researchers has now moved on to videos, with a precise aim: to develop a technology that can identify deepfakes, trace the history of their sharing on the web (e.g. the social networks on which they were published) and reconstruct the chain of processing undergone by the multimedia data. It is doing this through two projects: Unchained (started in October 2020 and funded by DARPA, the US Defense Advanced Research Projects Agency) and Premier (started in early 2020 and funded by MIUR, the Italian Ministry of Education, University and Research).
Italy and the University of Trento are thus confirmed as world leaders in the development of artificial intelligence. This is demonstrated by the fact that Unchained, carried out in collaboration with the University of Florence, is the only non-US project funded in 2020 by the US Department of Defense's government agency in charge of developing new technologies.
"We like to say we're doing AI versus AI: developing artificial intelligence that gives you the ability to verify whether a certain piece of data was itself generated by another artificial intelligence. It's a bit like a battle between intelligences."
Giulia Boato, Associate Professor DISI - Unitn
How to detect a deep fake: invisible fingerprints
Modifying a video alters certain statistical properties, concerning both the signal and the format of the file being worked on. These modifications leave invisible traces. The technology developed by the researchers at the University of Trento analyses these traces in reverse, to understand whether something has been changed and where in the video the change occurred. In the case of multimedia files where, for example, one person's face has been superimposed on another's, the first step is to automatically locate the face to be analysed. Then a detector trained on known data (e.g. the expressions a person assumes while speaking or laughing, acquired and studied by the programme on the basis of authentic videos) can tell whether there is a forgery and where it lies.
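The two-step pipeline described above (locate the face, then check its local statistics against the rest of the frame) can be illustrated with a deliberately simplified sketch. Everything here is an assumption for illustration, not the researchers' actual method: real deepfake detectors rely on trained neural networks, while this toy version uses only a high-pass residual (pixel minus local mean) and compares its variance inside a candidate face region with the variance over the whole frame. A spliced or synthesised region often carries noise statistics inconsistent with its surroundings, which is the intuition the sketch captures.

```python
import numpy as np

def high_pass_residual(frame):
    """Crude stand-in for the 'invisible traces' idea: subtract a 3x3
    local mean so that only fine-grained noise remains. Real systems
    learn far richer forensic features with trained CNNs."""
    padded = np.pad(frame, 1, mode="edge")
    h, w = frame.shape
    # 3x3 box blur built from shifted sums (no extra dependencies)
    local_mean = sum(
        padded[i:i + h, j:j + w] for i in range(3) for j in range(3)
    ) / 9.0
    return frame - local_mean

def residual_score(patch):
    """Variance of the high-pass residual: a rough noise-level estimate."""
    return float(np.var(high_pass_residual(patch)))

def flag_inconsistent_region(frame, region, ratio_threshold=4.0):
    """Compare noise statistics of a candidate face region against the
    whole frame and flag a strong mismatch. `region` is a hypothetical
    (top, left, height, width) box that, in a real pipeline, would come
    from an automatic face detector."""
    t, l, h, w = region
    patch = frame[t:t + h, l:l + w]
    frame_score = residual_score(frame)
    patch_score = residual_score(patch)
    hi, lo = max(patch_score, frame_score), min(patch_score, frame_score)
    ratio = hi / max(lo, 1e-12)
    return ratio > ratio_threshold, ratio

# Usage: a frame with uniform sensor-like noise passes, while the same
# frame with an over-smoothed (pasted-in) region is flagged.
rng = np.random.default_rng(0)
frame = rng.normal(size=(120, 160))
clean_flag, _ = flag_inconsistent_region(frame, (30, 40, 40, 40))

tampered = frame.copy()
tampered[30:70, 40:80] = tampered[30:70, 40:80].mean()  # wipe local noise
fake_flag, _ = flag_inconsistent_region(tampered, (30, 40, 40, 40))
print(clean_flag, fake_flag)
```

The key design point is that the check is relative: the region is judged against the statistics of its own frame, not against an absolute threshold, which is why it survives differences in compression or lighting between videos. Production detectors replace the hand-made residual with learned features, but the verify-consistency logic is the same.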
Public figures, justice and providers: forensic video analysis has multiple applications
The first is at government level, to verify the authenticity of data involving famous people, heads of state or politicians. It may seem obvious, but the wrong words in the mouths of the wrong people, if believed to be true, can have very serious international consequences. Another area in which forensic analysis plays a decisive role is the legal field. Court cases concerning computer fraud, cyberbullying or revenge porn are increasingly common, and in these it is necessary to check whether a video has been edited and crucial to understand its origin (on which social network, for example, it was first published). A final area is that of providers who, alongside oversight bodies such as the postal police, represent one of the possible end users of this new technology. The hope is that those who manage the dissemination of users' content will also commit to finding solutions for sharing only certified material. The researchers' idea is to arrive, one day, at a situation in which social networks such as Facebook or YouTube are able to publish only truthful videos (and have the will to do so).