Here is a cool technology that lets a user animate the facial expressions of a subject in a video and get realistic results. Face2Face is a facial reenactment method that works with a simple webcam: it uses RGB data from both the source and the target actor, letting users manipulate YouTube videos in real time. Here is how the researchers describe it:
At run time, we track facial expressions of both source and target video using a dense photometric consistency measure. Reenactment is then achieved by fast and efficient deformation transfer between source and target. The mouth interior that best matches the re-targeted expression is retrieved from the target sequence and warped to produce an accurate fit. Finally, we convincingly re-render the synthesized target face on top of the corresponding video stream such that it seamlessly blends with the real-world illumination.
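The core idea behind that "deformation transfer between source and target" step can be sketched in a few lines of Python. To be clear, everything below is a toy illustration rather than the authors' code: the model dimensions, the randomly generated basis matrices, and the helper names (reconstruct_mesh, transfer_expression) are all assumptions. The sketch just shows how, in a parametric face model, reenactment can boil down to adding the source actor's expression offset (relative to their neutral pose) onto the target's expression coefficients and rebuilding the target's mesh.

```python
import numpy as np

# Toy dimensions for a hypothetical parametric face model in which a mesh
# is reconstructed from identity and expression coefficients. The real
# system fits such a model to a scanned 3D face database; here we use
# random numbers purely for illustration.
NUM_VERTICES = 5000      # assumed mesh resolution
DIM_IDENTITY = 80        # assumed size of the identity (shape) basis
DIM_EXPRESSION = 76      # assumed size of the expression basis

rng = np.random.default_rng(0)
mean_shape = rng.standard_normal(3 * NUM_VERTICES)
identity_basis = rng.standard_normal((3 * NUM_VERTICES, DIM_IDENTITY))
expression_basis = rng.standard_normal((3 * NUM_VERTICES, DIM_EXPRESSION))


def reconstruct_mesh(alpha_id: np.ndarray, delta_expr: np.ndarray) -> np.ndarray:
    """Rebuild vertex positions from identity and expression coefficients."""
    return mean_shape + identity_basis @ alpha_id + expression_basis @ delta_expr


def transfer_expression(source_expr: np.ndarray,
                        source_neutral: np.ndarray,
                        target_neutral: np.ndarray) -> np.ndarray:
    """Apply the source actor's expression change to the target actor.

    The offset of the tracked source expression from the source's neutral
    pose is added to the target's neutral coefficients, so the target keeps
    their own identity but mimics the source's expression.
    """
    return target_neutral + (source_expr - source_neutral)


# Per-frame loop: take tracked source coefficients, transfer them, and
# rebuild the target mesh that would later be rendered and blended back
# into the target video.
source_neutral = np.zeros(DIM_EXPRESSION)
target_neutral = np.zeros(DIM_EXPRESSION)
target_identity = rng.standard_normal(DIM_IDENTITY) * 0.1

for frame in range(3):  # stand-in for the live video stream
    tracked_source_expr = rng.standard_normal(DIM_EXPRESSION) * 0.05  # from the tracker
    target_expr = transfer_expression(tracked_source_expr, source_neutral, target_neutral)
    target_mesh = reconstruct_mesh(target_identity, target_expr)
    print(f"frame {frame}: reenacted mesh with {target_mesh.size // 3} vertices")
```

On top of this transfer step, the actual pipeline also retrieves a matching mouth interior from the target footage and composites the rendered face back into the frame so it matches the real-world lighting, which is what makes the output look like the original video rather than a CG overlay.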
The above video demonstrates how this technology works. Pretty impressive, don’t you think?
[H/T]