Snazzy Filter Implementation

The snazzy filter function we created provides a simple interface: the user supplies a single headshot-style photo and the type of filter they want to add, and the function writes the filtered image to a new file called “filter.jpg”, which makes it possible to apply more than one filter to the same image.

 

For example, if you had a photo called “headshot.jpg” and wanted to add a hat to it, you would write:

 

snazzy('headshot.jpg','hat');

 

And then if you wanted to add glasses to that filtered image, you would write:

 

snazzy('filter.jpg','glasses');

 

The filter options we created are:

 

'blur' : Blurs the detected face

'hat' : Places a cowboy hat above the detected face

'nose' : Places a mustache below the detected nose

'tiara' : Places a tiara above the detected face

'flower' : Places a flower crown above the detected face

'glasses' : Places sunglasses on the detected eyes

'mask' : Places a mask over the detected face

 

Results of these filters can be found in the gallery.

 

The basic structure of our filter functions is described below:

The scale factors and locations vary from filter to filter. For example, the mask is the same width as the face and sits directly on it, while the tiara is slightly wider than the face and sits above it. These locations and scale factors were found through trial and error until we settled on parameters that looked right.
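
A rough sketch of that structure, using the Computer Vision Toolbox's vision.CascadeObjectDetector (the file names, scale factor, and offsets below are illustrative rather than the exact values we used), looks something like this:

% Detect the face, scale the filter image relative to the face width,
% and compute where to place it on the photo.
photoFile = 'headshot.jpg';                        % input headshot
img  = imread(photoFile);
filt = imread('hat.jpg');                          % filter artwork on a white background

detector = vision.CascadeObjectDetector();         % frontal-face detector
bbox = step(detector, img);                        % each row is [x y width height]

faceWidth = bbox(1, 3);
scale = 1.2;                                       % e.g. a hat slightly wider than the face
filt  = imresize(filt, (scale * faceWidth) / size(filt, 2));

% Example placement: centered horizontally on the face, just above the face box.
col = round(bbox(1, 1) - (size(filt, 2) - faceWidth) / 2);
row = round(bbox(1, 2) - size(filt, 1));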

 

We also decided to limit the function to one face per photo; otherwise the algorithm could detect additional faces, noses, or eyes and filter those as well. By requiring exactly one detection, we can be certain that anything other than a single instance of the feature means the sample should be thrown out, which is why the function exits when detection goes wrong. A sketch of that check is shown below.
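
A minimal sketch of that check, written here as a hypothetical helper called requireSingleFace:

function bbox = requireSingleFace(img)
% Return the face bounding box, or exit with an error if the detector
% does not find exactly one face.
detector = vision.CascadeObjectDetector();
bbox = step(detector, img);
if size(bbox, 1) ~= 1
    error('snazzy:detection', 'Expected exactly one face, found %d.', size(bbox, 1));
end
end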

 

Another issue was the white backgrounds in the filter images. We used JPEGs with a white background, and the background had to be removed when putting the filter on. We tried transparent PNGs, but Matlab does not handle PNG transparency automatically when overlaying images, so we used JPEGs instead.

 

We removed the background by looping through the scaled filter and checking each pixel value. If the sum of the RGB values was above a certain threshold, we treated the pixel as part of the white background and did not copy it over. The threshold depended on the filter: a filter with some lighter colors needed a higher threshold, while a mostly dark filter could use a lower one. Regardless, this background removal leaves some fuzziness around the edges of the filter.
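
A rough sketch of that copy loop, continuing from the placement sketch above (the threshold value is illustrative and would be tuned per filter, and the scaled filter is assumed to fit inside the photo at row and col):

% Copy the scaled filter onto the photo, skipping pixels whose RGB sum
% is above the whiteness threshold (the white background of the JPEG).
threshold = 700;                                   % illustrative cutoff for uint8 RGB sums (max 765)
for r = 1:size(filt, 1)
    for c = 1:size(filt, 2)
        rgbSum = double(filt(r, c, 1)) + double(filt(r, c, 2)) + double(filt(r, c, 3));
        if rgbSum < threshold                      % not white background, so copy the pixel
            img(row + r - 1, col + c - 1, :) = filt(r, c, :);
        end
    end
end
imwrite(img, 'filter.jpg');                        % write the result as filter.jpg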

 

This system is space invariant, which is important because the face could be anywhere in the provided photo; we need to find the features regardless of their location within the image. We cannot say whether the function is truly linear, since adding two facial images together makes face detection impossible. However, the system is scalable, as the size of the image and the detected face scale together.
