The author proposes an automatic method for facial motion cloning from a source three-dimensional (3D) face model to a destination 3D face model. The proposed technique could reduce the development time of 3D facial animation. Compared to the Moving Picture Experts Group's MPEG-4 facial animation technique, the proposed method is almost fully automatic and considerably faster.
The approach taken is relatively straightforward: it requires a neutral 3D model of the source face and one of the target face. The facial motion is obtained as the difference between the animated and neutral 3D models of the source face. Applying this motion to the target requires normalization and alignment of the source and target 3D face models, based on facial feature points.
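The core idea of extracting motion as a difference of poses can be sketched as follows. This is a minimal illustration, not the paper's implementation; the array names and function signature are assumptions.

```python
import numpy as np

def motion_vectors(source_neutral: np.ndarray,
                   source_animated: np.ndarray) -> np.ndarray:
    """Facial motion as per-vertex displacement from the neutral pose.

    Both inputs are (N, 3) arrays of vertex positions for the *same*
    source mesh topology; the result is the (N, 3) array of motion
    vectors that, added back to a neutral face, reproduces the animation.
    """
    return source_animated - source_neutral
```

Because the motion is stored as displacements rather than absolute positions, it can in principle be re-applied to any (normalized and aligned) target face with a compatible vertex correspondence.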
Normalization is accomplished by first determining the proportions of the face from a fixed set of feature points. Normalization is followed by a cylindrical projection of the 3D models into a 2D space for the alignment phase, in which the source and target faces are aligned with respect to each other based on the feature points. The motion vectors are applied before bringing the model back into the original 3D space.
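The cylindrical projection step can be sketched as unwrapping each vertex onto an (angle, height) plane around a vertical axis. This is an illustrative sketch under assumed conventions (y as the vertical axis through the origin), not the paper's exact mapping.

```python
import numpy as np

def cylindrical_projection(vertices: np.ndarray) -> np.ndarray:
    """Project (N, 3) vertices (x, y, z) onto a 2D (theta, y) plane.

    theta is the angle around the vertical y-axis, so the face surface
    is "unwrapped" into a flat chart where 2D alignment of source and
    target feature points can be performed.
    """
    x, y, z = vertices[:, 0], vertices[:, 1], vertices[:, 2]
    theta = np.arctan2(z, x)  # angle around the vertical axis, in radians
    return np.column_stack((theta, y))
```

After the feature points are aligned in this 2D chart and the motion vectors applied, the inverse mapping returns the deformed model to 3D space.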
Experimental results on several animation characters are presented, and a parallel is drawn between the proposed method and a similar technique, expression cloning.
The paper is well written, and the comparison between face motion cloning approaches gives the reader useful insight into the applicability of the methods. The method could become relevant to MPEG-4 standard development regarding feature-point-based animation of nonrigid 3D models.