Someone once asked: who sees the human face correctly, the photographer, the mirror, or the painter? Here, the answer is none of them, but rather the machine that reads your face and its position.

We successfully met our expectations: we were able to track the motion of a human face and display an appropriate perspective projection of a cube. As further extensions of this project, objects more complicated than a cube could be displayed, or the design could go as far as rendering entire virtual reality environments. Our current design also tracks faces based only on color content. One potential improvement would be to use a face tracking algorithm that identifies faces specifically, rather than hands or anything else that is skin toned. If an eigenfaces algorithm were used, for example, the user's hands appearing on screen would not affect the determination of the face's location, as sketched below. However, we would need to work around the limited memory available to do this.
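To illustrate the eigenfaces idea mentioned above, here is a minimal software sketch (in Python with numpy; it is not part of our design, and the training set and threshold are assumptions). A candidate window is scored by how well the eigenface basis can reconstruct it, so a hand or background patch reconstructs poorly and can be rejected.

    import numpy as np

    def fit_eigenfaces(train_faces, k=8):
        """train_faces: (n, h*w) array of flattened grayscale face images.
        Returns the mean face and the top-k eigenfaces (principal components)."""
        mean = train_faces.mean(axis=0)
        centered = train_faces - mean
        # Rows of vt are the principal directions of the training faces.
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        return mean, vt[:k]

    def face_likeness(window, mean, eigenfaces):
        """Score a candidate window (flattened to length h*w) by reconstruction error.
        Faces reconstruct well from the eigenface basis; hands and background
        reconstruct poorly, so thresholding this error rejects them."""
        centered = window.ravel() - mean
        coeffs = eigenfaces @ centered
        reconstruction = eigenfaces.T @ coeffs
        return np.linalg.norm(centered - reconstruction)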


The goal of this project was to track the motion of a human face and to display a projection of a cube on a VGA monitor that changes according to the motion of the user's face. We wanted it to seem as if the user were actually looking at a 3D cube: if the user moved their head to the right, for example, the projection should be drawn as if the cube were being viewed from the right, and if the user moved closer to the camera, the cube should get larger. The cube was chosen for simplicity and as a proof of concept, but the project could potentially be extended to more complicated objects or virtual reality environments.
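As a rough illustration of this behavior, the sketch below (in Python, with illustrative parameters rather than the values used in our design) projects the corners of a cube through a pinhole model driven by a head position: moving the eye to the right shifts the projected corners as if the cube were viewed from the right, and moving the eye closer makes the projection larger.

    import itertools

    def project_cube(eye_x, eye_y, eye_z, focal=1.0, cube_z=4.0, half=1.0):
        """Pinhole projection of a cube's 8 corners onto an image plane sitting
        'focal' units in front of the eye. The cube is centered on the z-axis at
        depth cube_z; the viewer looks down the +z axis from (eye_x, eye_y, eye_z)."""
        corners = itertools.product((-half, half), (-half, half),
                                    (cube_z - half, cube_z + half))
        projected = []
        for cx, cy, cz in corners:
            scale = focal / (cz - eye_z)   # a closer eye gives a larger scale, hence a bigger cube
            projected.append(((cx - eye_x) * scale, (cy - eye_y) * scale))
        return projected

    print(project_cube(0.0, 0.0, 0.0))   # head centered
    print(project_cube(1.0, 0.0, 0.0))   # head moved right: corners shift as if viewed from the right
    print(project_cube(0.0, 0.0, 2.0))   # head moved closer: projected cube is larger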

In this project, we connected a video camera to the FPGA and determined the location of the user's face in each frame by examining the color content of each pixel and identifying which pixels could represent human skin. The face's offset from the center of the camera's field of view, along with its apparent size, was then used to draw the appropriate projection.
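A minimal software sketch of this tracking step follows (in Python rather than hardware, and with RGB skin thresholds that are assumptions for illustration, not the values used in the project). It classifies each pixel as skin or not, then returns the skin-pixel centroid's offset from the image center and a pixel count that serves as a rough face size; together these drive the projection as described above.

    def track_face(frame, width, height):
        """frame: row-major list of (r, g, b) tuples with 8-bit values.
        Returns (dx, dy, size): the skin-pixel centroid's offset from the image
        center and the number of skin pixels (a rough proxy for face size)."""
        def is_skin(r, g, b):
            # Crude skin-tone test: reddish pixels where red dominates green and blue.
            return r > 95 and g > 40 and b > 20 and r > g and r > b and (r - min(g, b)) > 15

        sum_x = sum_y = count = 0
        for i, (r, g, b) in enumerate(frame):
            if is_skin(r, g, b):
                sum_x += i % width
                sum_y += i // width
                count += 1
        if count == 0:
            return 0, 0, 0          # no face found in this frame
        dx = sum_x // count - width // 2
        dy = sum_y // count - height // 2
        return dx, dy, count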