MORPHING



We morphed the images by:


  1. Dividing the source and target images into triangles. This triangulation is done using Delaunay triangulation. Once the control points have been marked in the two images, we run the Delaunay algorithm on them to segment each image into triangles.

  2. Once we have the segmentation, we can generate the intermediate image easily.

    1. First we interpolate the control points to get the control points in the intermediate image, i.e.


m_i = (1 - t)*s_i + t*f_i


where s_i and f_i are the i-th control points of the source and final (target) images, and t lies in [0, 1].

    2. Then, for each triangle in the intermediate image, we find the barycentric coordinates of the points lying inside it. From these we obtain the corresponding points in the source and target images. A linear interpolation of the intensity values at these points in the source and final images then gives the intensity value of the point in the intermediate image.

    3. Varying t from 0.0 to 1.0 in some discrete number of steps gives us the intermediate images. When displayed sequentially, these images show a morph from the source to the target image.

    4. Varying the number of discrete steps varies the smoothness of the morphing effect.
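The steps above can be sketched as follows. This is a minimal sketch, assuming the control points are already marked; `scipy.spatial.Delaunay` stands in for the triangulation routine, and the function name `morph_frame` is illustrative, not from our actual code:

```python
import numpy as np
from scipy.spatial import Delaunay

def morph_frame(src, dst, src_pts, dst_pts, t):
    """Return the intermediate image for a blend parameter t in [0, 1].

    src_pts / dst_pts are matching control points as (x, y) pairs.
    """
    src_pts = np.asarray(src_pts, dtype=float)
    dst_pts = np.asarray(dst_pts, dtype=float)

    # Step 2.1: interpolate the control points: m_i = (1 - t)*s_i + t*f_i
    mid_pts = (1.0 - t) * src_pts + t * dst_pts
    tri = Delaunay(mid_pts)  # triangulate the intermediate control points

    h, w = src.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    pixels = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)

    # Step 2.2: barycentric coordinates of each pixel within its triangle
    simplex = tri.find_simplex(pixels)
    valid = simplex >= 0                       # pixels inside the triangulation
    T = tri.transform[simplex[valid]]          # affine maps to barycentric space
    b2 = np.einsum('ijk,ik->ij', T[:, :2], pixels[valid] - T[:, 2])
    bary = np.column_stack([b2, 1.0 - b2.sum(axis=1)])

    out = np.zeros_like(src, dtype=float)
    verts = tri.simplices[simplex[valid]]
    rows, cols = ys.ravel()[valid], xs.ravel()[valid]
    for cpts, img, wgt in ((src_pts, src, 1.0 - t), (dst_pts, dst, t)):
        # same barycentric coords locate the corresponding point in each image
        xy = np.einsum('ij,ijk->ik', bary, cpts[verts])
        xi = np.clip(np.round(xy[:, 0]).astype(int), 0, w - 1)
        yi = np.clip(np.round(xy[:, 1]).astype(int), 0, h - 1)
        # Step 2.3: linear interpolation of the two intensity values
        out[rows, cols] += wgt * img[yi, xi].astype(float)
    return out.astype(src.dtype)
```

Stepping t through a discrete range and collecting the returned frames gives the morph sequence described above.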



Automatic feature detection – Faces and features

We first use the Haar feature detector to detect faces in the images. Then we apply a threshold in the Y Cb Cr space to find the skin within the face region.


The threshold is taken from “Face Feature Detection and Model Design for 2-D Scalable Model Based Video Coding” by Hu, Worrall, Sadka, and Kondoz, who give the following conditions for skin:

  1. 117 < Cr < 137

  2. Cb < 127

  3. 190 < Cb + 0.6 Cr < 215

After getting the skin regions, we check the boundaries between skin and non-skin regions to detect the eyes and lips.
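As a minimal sketch, the skin conditions above can be applied directly to the Cb/Cr planes (reading the first inequality's bounds as 117 < Cr < 137 so that it is satisfiable). The face rectangle from the Haar detector would restrict where this mask is evaluated; `skin_mask` is an illustrative name, not from our actual code:

```python
import numpy as np

def skin_mask(cb, cr):
    """Boolean skin mask from the Cb and Cr planes (uint8 values in 0..255)."""
    cb = cb.astype(float)
    cr = cr.astype(float)
    # the three threshold conditions quoted from Hu et al.
    return ((117 < cr) & (cr < 137) &
            (cb < 127) &
            (190 < cb + 0.6 * cr) & (cb + 0.6 * cr < 215))
```

In practice the Cb/Cr planes would come from a colour-space conversion such as OpenCV's `cvtColor` with `COLOR_BGR2YCrCb`, and the Haar detection from `CascadeClassifier.detectMultiScale` on the grayscale image.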


Image deformation using moving least squares:

This is based on the paper of the same name published at SIGGRAPH 2006. From this paper we have implemented only the point-based method.


For this method we mark control points in the image, which can then be moved to deform the image into a new target image. The mapping, computed on a per-pixel basis, can be an affine transformation, a similarity transformation, or a rigid transformation. Each mapping is obtained by a least-squares minimization of the error of applying the transformation to the source control points to obtain the target control points.
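A per-point sketch of the affine variant, using the closed-form weighted least-squares solution from the paper. Here `p` is assumed to hold the marked source control points and `q` their deformed positions; the function name and defaults are illustrative:

```python
import numpy as np

def mls_affine(v, p, q, alpha=1.0, eps=1e-8):
    """Map a single 2-D point v under the point-based affine MLS deformation."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    v = np.asarray(v, dtype=float)

    # weights fall off with distance, so nearby control points dominate
    w = 1.0 / (np.sum((p - v) ** 2, axis=1) ** alpha + eps)

    # weighted centroids of the source and target control points
    p_star = (w[:, None] * p).sum(axis=0) / w.sum()
    q_star = (w[:, None] * q).sum(axis=0) / w.sum()
    p_hat = p - p_star
    q_hat = q - q_star

    # closed-form solution of the weighted least-squares problem for the
    # 2x2 affine matrix M: minimize sum_i w_i |p_hat_i M - q_hat_i|^2
    A = (w[:, None, None] * p_hat[:, :, None] * p_hat[:, None, :]).sum(axis=0)
    B = (w[:, None, None] * p_hat[:, :, None] * q_hat[:, None, :]).sum(axis=0)
    M = np.linalg.solve(A, B)
    return (v - p_star) @ M + q_star
```

Applying this to every pixel of the source image (or, for speed, to a coarse grid that is then interpolated) yields the deformed image.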







Results:


Source Image: Affine Transformation:




Similarity: Rigid:




As we can see, the rigid transformation effects a larger global change than either of the other transformations.




Comparison between the MLS and triangulation-based methods:


By MLS: By Triangulation: