Perception of Video Manipulation
Recent advances in deep learning-based techniques enable highly realistic facial video manipulations. We investigate the responses of human observers to these manipulated videos in order to assess the perceived realness of modified faces and the emotions they convey.
Facial reenactment and face swapping offer great possibilities in creative fields such as the post-processing of movie material. However, they can also easily be abused to create defamatory video content that damages the reputation of the person depicted. As humans are highly specialized in processing and analyzing faces, we investigate human perception of current facial manipulation techniques. Our insights can guide both the creation of virtual actors with high perceived realness and the detection of manipulations based on explicit and implicit observer feedback.
PEFS: A Validated Dataset for Perceptual Experiments on Face Swap Portrait Videos
in Proc. International Conference on Computer Animation and Social Agents (CASA), Springer, to appear.
Altering the Conveyed Facial Emotion Through Automatic Reenactment of Video Portraits
in ACM Proc. International Conference on Computer Animation and Social Agents (CASA), to appear.
This project uses electroencephalography (EEG) to analyze the human visual process. Human visual perception is becoming increasingly important in the analysis of rendering methods, animation results, interface design, and visualization techniques. Our work uses EEG data to provide direct physiological feedback on the perception of rendered videos and images, as opposed to user studies that only capture explicit responses. Our results so far are very promising: not only have we detected reactions to artifacts in the EEG data, but we have also been able to differentiate between artifact types based on the EEG response.
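To illustrate the general idea of detecting a reaction to an artifact in EEG data, the following is a minimal sketch (not the project's actual pipeline): it cuts epochs around artifact onsets, averages them into an event-related potential (ERP), and checks whether the post-onset response exceeds the pre-onset baseline. The sampling rate, epoch window, and synthetic signal are illustrative assumptions.

```python
import numpy as np

def epoch_eeg(signal, onsets, sfreq, tmin=-0.1, tmax=0.5):
    """Cut fixed-length epochs around stimulus onsets (onsets in seconds)."""
    pre = int(-tmin * sfreq)
    post = int(tmax * sfreq)
    return np.stack([signal[int(o * sfreq) - pre : int(o * sfreq) + post]
                     for o in onsets])

def erp(epochs):
    """Event-related potential: average over trials."""
    return epochs.mean(axis=0)

# Synthetic single-channel EEG: noise plus an evoked deflection after each
# artifact onset (a stand-in for the real recorded data).
rng = np.random.default_rng(0)
sfreq = 250  # Hz (assumed)
signal = rng.normal(0, 1, size=sfreq * 60)
onsets = np.arange(1, 55, 2.0)  # artifact onsets in seconds
for o in onsets:
    i = int(o * sfreq)
    signal[i + 25 : i + 75] += 3.0  # simulated response ~100-300 ms post-onset

epochs = epoch_eeg(signal, onsets, sfreq)
response = erp(epochs)
baseline = response[: int(0.1 * sfreq)].mean()
peak = response[int(0.1 * sfreq):].max()
print(peak - baseline)  # clearly positive: an evoked response was detected
```

Averaging over many trials suppresses the background EEG noise, which is what makes the small evoked deflection detectable at all; distinguishing *between* artifact types would additionally require comparing the shapes of the per-condition ERPs or training a classifier on the epochs.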
The aim of this work is to simulate glaring headlights on a conventional monitor by first measuring the time-dependent effect of glare on human contrast perception and then integrating the quantitative findings into a driving simulator that adjusts the displayed contrast according to human perception.
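One simple way such an adjustment could be realized, purely as an illustrative sketch: model the measured glare effect as a threshold-elevation factor that decays over time, and scale the displayed contrast by it. The exponential form and the parameters `k` and `tau` are placeholders, not the project's measured values.

```python
import math

def contrast_threshold_elevation(t, k=4.0, tau=0.8):
    """Hypothetical model: glare elevates the contrast detection threshold
    by a factor that decays exponentially with time t (seconds) since the
    glare event. k and tau stand in for experimentally measured parameters."""
    return 1.0 + k * math.exp(-t / tau)

def adjust_contrast(c_scene, t):
    """Boost the displayed contrast (clamped to the displayable range)
    so that the perceived contrast under glare matches the scene."""
    return min(1.0, c_scene * contrast_threshold_elevation(t))

print(adjust_contrast(0.2, 0.0))  # right after glare: strong boost
print(adjust_contrast(0.2, 5.0))  # several seconds later: near original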
The goal of this project is to assess the quality of rendered videos and, in particular, to detect frames that contain visible artifacts, e.g., ghosting, blurring, or popping.
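As a rough illustration of per-frame artifact detection (not the project's method), popping manifests as a sudden global change between consecutive frames, so an outlier test on frame-to-frame differences already flags it. The synthetic video and the z-score threshold below are assumptions for the sketch.

```python
import numpy as np

def flag_popping_frames(frames, z_thresh=3.0):
    """Flag frames whose change from the previous frame is a statistical
    outlier, a crude indicator of popping artifacts."""
    diffs = np.array([np.abs(frames[i] - frames[i - 1]).mean()
                      for i in range(1, len(frames))])
    z = (diffs - diffs.mean()) / (diffs.std() + 1e-9)
    return [i + 1 for i in range(len(diffs)) if z[i] > z_thresh]

# Synthetic video: slow brightness drift plus one abrupt "pop" at frame 30.
rng = np.random.default_rng(1)
frames = [np.full((8, 8), 0.5) + 0.001 * i + rng.normal(0, 0.001, (8, 8))
          for i in range(60)]
frames[30] = frames[30] + 0.5  # popping artifact

print(flag_popping_frames(frames))  # frames around the pop are flagged
```

Ghosting and blurring would need different per-frame measures (e.g., edge or frequency statistics); the point of the sketch is only that visible artifacts leave measurable traces a detector can threshold on.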