Image Animation Model

What's new

Generating a video sequence with the desired motion and object using a driving video sequence. This has many applications, such as movie production and photography.

Key insight: Synthesizing a video with a deep generative model takes a few steps. If you remember style transfer, we decoupled the style and content of an image to obtain the target image. It's similar here, except that we decouple motion and appearance.

Motion: Supplied by a driving video containing an object similar to the one we intend to animate.

Appearance: Supplied by a source image containing the object that will appear in the generated video.
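As a minimal sketch of how these two inputs combine, the loop below pairs one fixed source image with every frame of the driving video. The helper names (`extract_motion`, `render`) are hypothetical placeholders, not the paper's API:

```python
import torch

def animate(source_image: torch.Tensor,
            driving_video: list[torch.Tensor],
            extract_motion, render) -> list[torch.Tensor]:
    """Appearance is fixed by `source_image`; motion is re-estimated
    from each frame of `driving_video` (hypothetical helpers)."""
    frames = []
    for driving_frame in driving_video:
        motion = extract_motion(driving_frame)       # pose only, no appearance
        frames.append(render(source_image, motion))  # appearance + motion
    return frames
```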

Image animation consists of generating a video sequence so that an object in a source image is animated according to the motion of a driving video. 

This framework addresses the problem without using any annotations or prior information about the specific object to animate. Once trained on a set of videos depicting objects of the same category (e.g., faces or human bodies), the method can be applied to any object of that class.

How it works

To achieve this, the researchers decouple appearance and motion information using a self-supervised formulation. To support complex motions, they use a representation consisting of a set of learned keypoints along with their local affine transformations. A generator network models the occlusions that arise during target motions and combines the appearance extracted from the source image with the motion derived from the driving video. The framework scores best on diverse benchmarks and across a variety of object categories.
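Here is a minimal PyTorch sketch of those pieces: a keypoint detector that also predicts a 2x2 Jacobian per keypoint (the local affine transformation around it), and a generator that warps source features with a dense flow field and masks them with an occlusion map. The tiny backbones, shapes, and module names are illustrative assumptions, not the authors' architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KeypointDetector(nn.Module):
    """Predicts K keypoints plus a 2x2 Jacobian per keypoint: the
    local affine transformation describing motion around that point."""
    def __init__(self, num_kp: int = 10):
        super().__init__()
        self.num_kp = num_kp
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.kp_head = nn.Linear(128, num_kp * 2)   # (x, y) in [-1, 1]
        self.jac_head = nn.Linear(128, num_kp * 4)  # 2x2 affine per keypoint

    def forward(self, image: torch.Tensor):
        feat = self.backbone(image)
        kp = torch.tanh(self.kp_head(feat)).view(-1, self.num_kp, 2)
        jac = self.jac_head(feat).view(-1, self.num_kp, 2, 2)
        return kp, jac


class OcclusionAwareGenerator(nn.Module):
    """Warps encoded source features with a dense flow field, then
    multiplies by an occlusion map so the decoder is pushed to
    inpaint regions hidden in the source but visible in the target."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Conv2d(3, 64, 3, padding=1)
        self.decoder = nn.Conv2d(64, 3, 3, padding=1)

    def forward(self, source, flow, occlusion):
        # flow: (B, H, W, 2) sampling grid in [-1, 1]; occlusion: (B, 1, H, W)
        feat = F.relu(self.encoder(source))
        warped = F.grid_sample(feat, flow, align_corners=True)
        return torch.sigmoid(self.decoder(warped * occlusion))


# Toy usage: extract keypoints/Jacobians from both images, then render.
# A dense-motion network (omitted) would turn the keypoint pairs into
# `flow` and `occlusion`; an identity flow stands in for it here.
src, drv = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
detector = KeypointDetector()
kp_src, jac_src = detector(src)
kp_drv, jac_drv = detector(drv)
ys, xs = torch.meshgrid(torch.linspace(-1, 1, 64),
                        torch.linspace(-1, 1, 64), indexing="ij")
flow = torch.stack([xs, ys], dim=-1).unsqueeze(0)  # identity sampling grid
occlusion = torch.ones(1, 1, 64, 64)               # nothing occluded
frame = OcclusionAwareGenerator()(src, flow, occlusion)
print(frame.shape)  # torch.Size([1, 3, 64, 64])
```

The occlusion map is the notable design choice here: rather than asking the warp to explain every pixel, regions that are invisible in the source are down-weighted so the decoder learns to hallucinate them from context.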

Want your image to look like this? Click here!
