Adversarial examples

An adversarial example is a slightly perturbed input designed to make a deep neural network produce the wrong output. Students at the University of California used this idea against systems for detecting deepfakes: by adding a small perturbation to every frame of a deepfake video, they produced so-called adversarial videos that deceived the detection tools in more than 90% of cases.
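To make the idea concrete, below is a minimal sketch of the fast gradient sign method (FGSM), one standard way to craft such perturbations. The detector model, the frame tensor format, the label, and the epsilon value are illustrative assumptions, not the researchers' exact attack.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model, frame, true_label, epsilon=0.01):
    """Craft an FGSM-style adversarial perturbation for a single video frame.

    `model` is any differentiable classifier that maps a frame tensor to
    logits (e.g. real-vs-fake scores); `epsilon` bounds the per-pixel change
    so the perturbation stays visually imperceptible.
    """
    frame = frame.clone().detach().requires_grad_(True)
    logits = model(frame.unsqueeze(0))              # add a batch dimension
    loss = nn.functional.cross_entropy(logits, true_label.unsqueeze(0))
    loss.backward()
    # Step each pixel in the direction that increases the detector's loss.
    perturbed = frame + epsilon * frame.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()

# Hypothetical usage: perturb every frame of a fake video so that a
# frame-level detector misclassifies the video as real.
# adversarial_frames = [fgsm_perturb(detector, f, torch.tensor(FAKE_LABEL))
#                       for f in frames]
```

Applying the perturbation frame by frame is what turns an ordinary deepfake into an adversarial video: each individual frame still looks unchanged to a human viewer, but the detector's prediction flips.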
