Implementation – Final Code

For the final implementation of our project we created a menu within the program, offering the user a choice of all the approaches we took towards realising our image tracking objective.

Options 1 & 2 (Template Matching and Camshift) are my contribution to the project, and options 3 & 4 (Gabor Filter and Colour Isolation) are the work of my project partner Modestas Jakuska.

This is the final code for our project and indeed the final post in this blog. Thank you for reading, and I hope it has been interesting.

 

Implementation – Template Matching

The Eyes Have It!

Template matching is a method of finding something within an image by using another image of the item being sought. In our case we are looking for the left eye in a sequence of images, so our template is itself an image of a left eye.

Template matching can be quite limiting in that a simple comparison between the template and the image under inspection might not provide accurate results if the item being sought is at a different angle or of a different size.

The image or template can be scaled and rotated during the matching operation; however, at the end of the day this method relies on the item in the image under inspection bearing a close resemblance to the template.

I created a program to perform a template matching operation on our images, which worked very well when we used the left eye from the first image in our sequence as the template.

When I used a template consisting of an unrelated eye image, the results were far less accurate.

In images 1 and 2 the template matched an eye within the image, but not the eye we are looking for, and in image 3 the matching operation did not find either eye.

To summarise, template matching worked well with a template taken from one of the images in our sequence, but with a template image of a similar object unrelated to our images it did not work well at all. With this in mind, it is quite possible that radical changes in lighting or other detail within our sequence of images would cause template matching to fall over.

Code for my Template Matching program:
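Below is a minimal sketch of the technique using OpenCV's cv2.matchTemplate with normalised cross-correlation. The file names are placeholders and the details differ from my actual program; it illustrates the approach rather than reproducing the program itself.

    # Template matching sketch: look for a left-eye template in each image.
    # "eye_template.png" and the image names are placeholders.
    import cv2

    template = cv2.imread("eye_template.png", cv2.IMREAD_GRAYSCALE)
    h, w = template.shape

    for name in ["image1.png", "image2.png", "image3.png"]:
        image = cv2.imread(name)
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

        # Normalised cross-correlation; the best match is the maximum response.
        result = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(result)

        # Draw a box around the best match.
        top_left = max_loc
        bottom_right = (top_left[0] + w, top_left[1] + h)
        cv2.rectangle(image, top_left, bottom_right, (0, 255, 0), 2)
        cv2.imshow(name, image)

    cv2.waitKey(0)
    cv2.destroyAllWindows()

Since TM_CCOEFF_NORMED scores lie between -1 and 1, a threshold on max_val could be used to reject the kind of poor matches described above rather than always accepting the best one.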

 

References:

Multi-scale Template Matching using Python and OpenCV (accessed 17.11.2018)
https://www.pyimagesearch.com/2015/01/26/multi-scale-template-matching-using-python-opencv/

Template Matching (accessed 17.11.2018)
http://www.swarthmore.edu/NatSci/mzucker1/opencv-2.4.10-docs/doc/tutorials/imgproc/histograms/template_matching/template_matching.html#template-matching

Object detection with templates (accessed 17.11.2018)
https://pythonspot.com/object-detection-with-templates/

Template matching (accessed 17.11.2018)
https://en.wikipedia.org/wiki/Template_matching

Cox GS. Template Matching and Measures of Match in Image Processing. Department of Electrical Engineering, University of Cape Town.
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.51.9646&rep=rep1&type=pdf

 

Assignment 2 – Where’s Wally

 

Week 9 – Lab Work

 

Brother Parsons

I discovered today that the history of digital image processing started in the 1960s with developments made at JPL (the Jet Propulsion Laboratory) and a few other research facilities. More on this topic can be found at the Wikipedia link below.

Of course, there is one founding member of JPL that I am very familiar with: Jack (Whiteside) Parsons, who also happens to be the subject of the TV drama series ‘Strange Angel’.

Only in the irrational and unknown direction can we come to wisdom again.

-Jack Whiteside Parsons-

 

Digital image processing (accessed 09.11.2018)
https://en.wikipedia.org/wiki/Digital_image_processing

 

 

Week 8 – Lab Work

This week: Features and Corners.

 

Research & Implementation – Tracking Texture & Tracking Motion

Part 2 – Tracking Motion

While considering how to research this approach, I started thinking about what motion is, or more specifically, what a computer’s idea of motion might be.

Definition of Motion taken from OED

The answer is that it would not understand motion at all; it would only detect that the information contained within one image has changed in comparison to the next in a sequence. All we are really talking about are specific 2D or 3D values associated with individual pixels within a particular colour space.

So, to achieve a result that we would recognise as motion, the computer has to take these pixel values and analyse how they change in a way that we can use for tracking movement.
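As a simple illustration of this idea, consider frame differencing: a sketch only, not part of our project code, with placeholder file names and an arbitrary threshold.

    # "Motion" as per-pixel change between two consecutive frames.
    import cv2

    prev = cv2.cvtColor(cv2.imread("image1.png"), cv2.COLOR_BGR2GRAY)  # placeholder names
    curr = cv2.cvtColor(cv2.imread("image2.png"), cv2.COLOR_BGR2GRAY)

    # Per-pixel absolute change between the two frames.
    diff = cv2.absdiff(prev, curr)

    # Keep only changes large enough to call "motion"; 25 is an arbitrary cut-off.
    _, motion_mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)

    cv2.imshow("motion", motion_mask)
    cv2.waitKey(0)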

Centroid (find the centre of a blob)

So what is a blob exactly? In image processing terminology a blob is a group of connected pixels sharing common values, and the centroid in this instance is the weighted average of the pixel positions within that blob.

OpenCV finds this weighted average using what are known as ‘moments’: the image is converted to grayscale and binarised before the centroid calculations are carried out.

Multiple blobs can be detected using this method within OpenCV with the aid of ‘contours’.
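A minimal sketch of these centroid calculations with OpenCV moments and contours; the file name and threshold value are placeholders, and the findContours return signature assumes OpenCV 4.

    # Centroid of a blob: grayscale, binarise, then image moments.
    import cv2

    image = cv2.imread("image1.png")  # placeholder file name
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)

    # Centroid of the whole binary image from its zeroth- and first-order moments.
    M = cv2.moments(binary)
    cx = int(M["m10"] / M["m00"])
    cy = int(M["m01"] / M["m00"])
    print("centroid:", (cx, cy))

    # Multiple blobs: one centroid per contour.
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > 0:  # skip degenerate contours with zero area
            print("blob centroid:", (int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])))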

Meanshift & Camshift

Meanshift and Camshift can be used in conjunction with the centroid to locate the part of the image we need to track.

Meanshift is a non-parametric density gradient estimator, useful for detecting the modes of a density. Camshift (Continuously Adaptive Meanshift) combines Meanshift with an adaptive region-sizing step.

Both Meanshift and Camshift use a method called histogram back projection, which works on the hue channel of an HSV colour model.
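A rough sketch of that back projection step; the file name and region coordinates are made-up placeholders.

    # Histogram back projection: build a hue histogram of a chosen region,
    # then score every pixel by how well it matches that histogram.
    import cv2

    frame = cv2.imread("image1.png")  # placeholder file name
    roi = frame[100:140, 200:240]     # hypothetical region containing the target

    hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

    # Histogram over the hue channel only (channel 0; 180 bins in OpenCV).
    roi_hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
    cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

    # Each back-projection pixel scores how well that pixel's hue matches the ROI.
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)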

Results of my Camshift experiment on our allocated images.

Code for my Camshift program:
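Below is a minimal sketch of a Camshift tracking loop of this kind using cv2.CamShift; the file names, initial search window and termination criteria are placeholders rather than my actual values, so it illustrates the structure of the program rather than reproducing it.

    # Camshift tracking sketch over a sequence of still images.
    import cv2
    import numpy as np

    frames = ["image1.png", "image2.png", "image3.png"]  # placeholder names
    first = cv2.imread(frames[0])

    # Hypothetical initial search window (x, y, w, h) around the eye in frame 1.
    x, y, w, h = 200, 100, 40, 40
    track_window = (x, y, w, h)

    # Hue histogram of the initial window, used for back projection.
    hsv_roi = cv2.cvtColor(first[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    roi_hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
    cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

    # Stop after 10 iterations or when the window moves by less than 1 pixel.
    term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

    for name in frames:
        frame = cv2.imread(name)
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)

        # Camshift adapts the window size and orientation at each step.
        ret, track_window = cv2.CamShift(back_proj, track_window, term_crit)
        pts = np.int32(cv2.boxPoints(ret))
        cv2.polylines(frame, [pts], True, (0, 255, 0), 2)
        cv2.imshow(name, frame)

    cv2.waitKey(0)
    cv2.destroyAllWindows()

Because Camshift resizes and rotates the window at each step, it copes better than plain Meanshift when the tracked eye changes apparent size between images.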

 

References:

Find the Center of a Blob (Centroid) using OpenCV (C++/Python) (accessed 02.11.2018)
https://www.learnopencv.com

Meanshift and Camshift (accessed 02.11.2018)
https://docs.opencv.org/3.4.3/db/df8/tutorial_py_meanshift.html

Mean Shift Tracking (accessed 02.11.2018)
https://www.bogotobogo.com/python/OpenCV_Python/python_opencv3_mean_shift_tracking_segmentation.php

Back Projection (accessed 02.11.2018)
https://docs.opencv.org/2.4/doc/tutorials/imgproc/histograms/back_projection/back_projection.html

Bradski GR. Computer Vision Face Tracking For Use in a Perceptual User Interface. Microcomputer Research Lab, Santa Clara, CA, Intel Corporation.
http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=B9A5277FF173D0455494A756940F7E6B?doi=10.1.1.14.7673&rep=rep1&type=pdf