Chema Alonso, Telefónica's CDCO and a well-known security expert, made a stellar appearance at OpenEXPO Virtual Experience 2021, an event he sponsored in this eighth edition, which was held online. He took the opportunity to discuss a timely topic: AI-generated deepfakes and the new challenges they pose for cybersecurity.
Surely you have seen videos in which someone appears with another person's face, saying or doing things that the person the face belongs to never said or did. These videos are relatively simple to produce, and they are flooding the Internet, especially social networks, where they are used as tools for hoaxes and disinformation campaigns.
OpenEXPO Virtual Experience 2021 set out to cover new topics in line with the current technology and open source landscape, among them artificial intelligence, Machine Learning and Deep Learning. Chema Alonso focused on the deepfakes that can be created with the help of these technologies, and on the new challenges they pose for cybersecurity.
The rise of these fake videos has become a matter of concern: they grew from 15,000 in 2019 to almost 50,000 in 2020, and the number keeps growing. Moreover, 96% of these deepfakes are pornographic videos, explicit sex scenes that use the face of a celebrity, politician or influencer.
Faced with this threat, as Chema Alonso explained, action must be taken on two fronts: forensic analysis of images and extraction of biological data. His talk at OpenEXPO Virtual Experience 2021 focused precisely on that: he demonstrated a Chrome plug-in, developed together with his team, that can detect deepfakes.
Its operation rests on four essential pillars:
- FaceForensics++: testing images against a model trained on its own database to improve accuracy.
- Exposing DeepFake Videos by Detecting Face Warping Artifacts: detecting telltale artifacts with a CNN model, since current AI algorithms often produce face images at somewhat limited resolutions that must be warped to fit the target video.
- Exposing Deep Fakes Using Inconsistent Head Poses: using a HopeNet model, inconsistencies or errors introduced when the synthesized face is inserted can be detected in the head poses of the fake subject.
- CNN-generated Images Are Surprisingly Easy to Spot… for Now: it has been confirmed that images currently generated by CNNs share systematic flaws.
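To illustrate the head-pose pillar, here is a minimal, purely illustrative sketch of the idea behind detecting inconsistent head poses: when a synthesized face is blended into a frame, the pose estimated from the central facial landmarks can disagree with the pose estimated from the full set of landmarks. The pose vectors, function names and threshold below are hypothetical stand-ins, not the actual output of HopeNet or the plug-in described in the talk.

```python
import math

def cosine_distance(a, b):
    """1 minus the cosine similarity between two pose vectors (yaw, pitch, roll)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

def looks_fake(pose_central, pose_full, threshold=0.05):
    """Flag a frame as suspicious when the two pose estimates diverge.

    pose_central: pose estimated from the central (possibly swapped) face region.
    pose_full: pose estimated from all facial landmarks.
    threshold: illustrative cut-off, not a tuned value.
    """
    return cosine_distance(pose_central, pose_full) > threshold

# Consistent estimates (genuine face): tiny divergence, not flagged.
print(looks_fake((10.0, -5.0, 2.0), (10.5, -4.8, 2.1)))    # False
# Divergent estimates (inserted face region): large divergence, flagged.
print(looks_fake((10.0, -5.0, 2.0), (-20.0, 15.0, -8.0)))  # True
```

In a real pipeline, the two pose vectors would come from a head-pose estimation network run on different landmark subsets; the comparison step itself stays this simple.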
More information - Official Website of the Event