Greetings from the far side of the FYP (Final Year Project)!!!

I just thought I would make a quick blog entry to let you all know where I have been for the last six months: a self-imposed hermitage of study until I had finished everything required in the last semester of the final year of my Computer Science degree.

That is all done now and I am just waiting for my grades to arrive, with fingers crossed that there will be no resits or CA to repeat. So, as I have a bit of free time on my hands, I thought I would tell you about my final year project, which is definitely a subject of interest for this blog.

My project is a desktop application for repairing old cine film, written in Python, so it can be used on any system that supports the language. The idea came from my time working in video edit suites, where part of what we did involved cleaning up artefacts on cinematic film that had been transferred to video tape, as cinematic film was still widely used in TV and film (obviously) at that time.

This was not a popular task: it was repetitive and dull, and clients disliked it because the end result was a level of technical excellence rather than a flourish of visual excitement. If an automated way of cleaning up these problematic artefacts had been available at that time, I think it would have been gladly adopted.

Of course, time has moved on: videotape has become a thing of the past and cine film has been retired to the back lot in favour of HD video formats of various kinds. But I feel there is still a place for an application that can help clean up old cinematic film retained in libraries around the world and archived for historical purposes.

So, without further fanfare, let me introduce my project: “DBlob Film Repair”, a film cleaning application!!!

The name of the application comes from “dblobbing”, a slang term used in the British TV industry for the process of getting rid of film dirt and sparkle, and dblobbing is exactly what this application does. It also helps to fix film scratches and performs primary additive RGB colour correction, with a limited degree of filtering to help with the overall film cleanup process.

I plan to develop this application further by refining existing features and adding new ones that will incorporate elements of machine learning. Once I begin this work, I will add a further, more detailed description of the project.

 

Implementation – Final Code

In the final implementation of our project we decided to create a menu within our program, offering the user a choice of all the approaches we took towards realising our image tracking objective.

Options 1 & 2 (Template Matching and Camshift) are my contribution to the project; options 3 & 4 (Gabor Filter and Colour Isolation) are the work of my project partner, Modestas Jakuska.
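
The menu itself is a simple dispatch. A minimal sketch of the idea is below, with placeholder functions standing in for the four approaches:

import sys

# Sketch of the program menu; the four functions are placeholders
# standing in for the real implementations described in earlier posts.
def run_template_matching():
    print("running template matching...")

def run_camshift():
    print("running camshift...")

def run_gabor_filter():
    print("running gabor filter...")

def run_colour_isolation():
    print("running colour isolation...")

def main():
    options = {
        "1": ("Template Matching", run_template_matching),
        "2": ("Camshift", run_camshift),
        "3": ("Gabor Filter", run_gabor_filter),
        "4": ("Colour Isolation", run_colour_isolation),
    }
    for key, (name, _) in options.items():
        print(key + ".", name)
    choice = input("Choose a tracking approach (1-4): ")
    if choice in options:
        options[choice][1]()
    else:
        print("Invalid choice")
        sys.exit(1)

if __name__ == "__main__":
    main()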

This is the final code for our project and indeed the final post in this blog. Thank you for reading, and I hope it has been interesting.

 

Implementation – Template Matching

The Eyes Have It!

Template matching is a method of finding something within an image by using another image of the item being sought. In our case we are looking for the left eye in a sequence of images, so we are going to use, as a template, an image of another left eye.

Template matching can be quite limiting in the respect that simply comparing the template with the image under inspection might not provide accurate results if the item being sought is at a different angle or of a different size.

The image can be scaled and rotated during the matching operation; however, at the end of the day this method relies on the image under inspection bearing a resemblance to the image in the template.

I created a program to perform a template matching operation on our images, which worked out very well when we used the left eye from the first image in our sequence as the template.

When I used a template consisting of another, unrelated eye image, the results had a very different level of accuracy.

In images 1 and 2 the template matching has recognised an eye within the image, but not the eye we are looking for, and in image 3 the matching operation has not recognised either eye.

So to summarise: template matching has worked well using a template taken from one of the images within our sequence, but with a template image of a similar object unrelated to our images it has not worked well at all. With this in mind, it is quite possible that radical changes in lighting or other detail within our sequence of images would cause template matching to fall over.

Code for my Template Matching program:
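
A minimal sketch of the core operation, using OpenCV's matchTemplate with the TM_CCOEFF_NORMED score (the file names here are placeholders):

import cv2

# Sketch: find the best match for an eye template within a frame.
frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("eye_template.png", cv2.IMREAD_GRAYSCALE)
h, w = template.shape

# Slide the template over the frame and score each position.
result = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)

# Draw a rectangle around the best-scoring position.
top_left = max_loc
bottom_right = (top_left[0] + w, top_left[1] + h)
output = cv2.cvtColor(frame, cv2.COLOR_GRAY2BGR)
cv2.rectangle(output, top_left, bottom_right, (0, 255, 0), 2)
cv2.imwrite("match.png", output)
print("match score:", max_val)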

 

References:

Multi-scale Template Matching using Python and OpenCV (accessed 17.11.2018)
https://www.pyimagesearch.com/2015/01/26/multi-scale-template-matching-using-python-opencv/

Template Matching (accessed 17.11.2018)
http://www.swarthmore.edu/NatSci/mzucker1/opencv-2.4.10-docs/doc/tutorials/imgproc/histograms/template_matching/template_matching.html#template-matching

Object detection with templates (accessed 17.11.2018)
https://pythonspot.com/object-detection-with-templates/

Template matching (accessed 17.11.2018)
https://en.wikipedia.org/wiki/Template_matching

Cox GS. Template Matching and Measures of Match in Image Processing. Department of Electrical Engineering, University of Cape Town.
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.51.9646&rep=rep1&type=pdf

 

Assignment 2 – Where’s Wally

 

Week 9 – Lab Work

 

Brother Parsons

I discovered today that the history of digital image processing started with developments made at JPL (Jet Propulsion Laboratory) and a few other research facilities in the 1960s. More can be found on this topic at the Wikipedia link below.

Of course, there is one founder member of JPL that I am very familiar with, and that is Jack (Whiteside) Parsons, who also happens to be the subject of the TV drama series ‘Strange Angel’.

Only in the irrational and unknown direction can we come to wisdom again.

– Jack Whiteside Parsons

 

Digital image processing (accessed 09.11.2018)
https://en.wikipedia.org/wiki/Digital_image_processing

 

 

Week 8 – Lab Work

This week: Features and Corners.

 

Research & Implementation – Tracking Texture & Tracking Motion

Part 2 – Tracking Motion

While I was deciding how to research this approach, I started thinking about what motion actually is, or more specifically, what a computer’s idea of motion is.

Definition of Motion taken from OED

The answer would be that it does not understand motion at all. It can detect that the information contained within an image has changed in comparison to the next one in a sequence, but all we are really talking about are specific 2D or 3D values associated with individual pixels within a specific colour space.

So, to achieve a result that we would recognise as motion, the computer has to take the pixel values and analyse the change in those values in a way that we can use for tracking movement.
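
As a toy illustration, the most basic machine-level notion of motion is a per-pixel difference between consecutive frames (the file names and the threshold value here are placeholders):

import cv2

# Sketch: "motion" as per-pixel change between two consecutive frames.
prev = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

diff = cv2.absdiff(curr, prev)  # per-pixel absolute change
_, motion_mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
print("changed pixels:", cv2.countNonZero(motion_mask))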

Centroid (find the centre of a blob)

So what is a blob, exactly? In image processing terminology a blob is a group of connected pixels sharing common values, and the centroid in this instance is the mean of that blob, or the weighted average if you will.

OpenCV finds this weighted average with what are known as ‘moments’: the image is converted to grayscale and then binarised before the centroid calculation is carried out.

Multiple blobs can be detected using this method within OpenCV with the aid of ‘contours’, as in the sketch below.
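
A minimal sketch of the centroid calculation with moments and contours (the file name and threshold are placeholders):

import cv2

# Sketch: centroid of the largest blob via image moments.
img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)

# findContours gives one contour per blob; [-2] keeps this working
# across the differing return signatures of OpenCV 3.x and 4.x.
contours = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
blob = max(contours, key=cv2.contourArea)

M = cv2.moments(blob)
cx = int(M["m10"] / M["m00"])  # weighted mean of the x coordinates
cy = int(M["m01"] / M["m00"])  # weighted mean of the y coordinates
print("centroid:", (cx, cy))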

Meanshift & Camshift

Meanshift and Camshift can be used in conjunction with Centroid for the purposes of locating the part of the image we need to track.

Meanshift is a non-parametric technique for estimating density gradients, and is useful for finding the modes of that density. Camshift combines Meanshift with an adaptive region sizing step.

Both Meanshift and Camshift use a method called histogram back projection, which uses the hue channel from an HSV colour model.

Results of my Camshift experiment on our allocated images.

Code for my Camshift program:
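
A minimal sketch of the approach: build a hue histogram of the target region, back-project it into each new frame, and let CamShift relocate the window (the file names and the initial window are placeholders):

import cv2

# Sketch: Camshift tracking over an image sequence via histogram
# back projection on the hue channel.
frames = ["img1.png", "img2.png", "img3.png"]
first = cv2.imread(frames[0])

track_window = (100, 100, 40, 40)  # hand-picked (x, y, w, h) around the target
x, y, w, h = track_window
roi = first[y:y + h, x:x + w]

# Hue histogram of the region we want to track.
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
roi_hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

# Stop after 10 iterations or when the window moves by less than 1 pixel.
term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

for name in frames[1:]:
    frame = cv2.imread(name)
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    ret, track_window = cv2.CamShift(back_proj, track_window, term_crit)
    box = cv2.boxPoints(ret).astype("int32")  # rotated box around the target
    cv2.polylines(frame, [box], True, (0, 255, 0), 2)
    cv2.imwrite("tracked_" + name, frame)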

 

REFERENCES:

Find the Center of a Blob (Centroid) using OpenCV (C++/Python) (accessed 02.11.2018)
https://www.learnopencv.com

Meanshift and Camshift (accessed 02.11.2018)
https://docs.opencv.org/3.4.3/db/df8/tutorial_py_meanshift.html

Mean Shift Tracking (accessed 02.11.2018)
https://www.bogotobogo.com/python/OpenCV_Python/python_opencv3_mean_shift_tracking_segmentation.php

Back Projection (accessed 02.11.2018)
https://docs.opencv.org/2.4/doc/tutorials/imgproc/histograms/back_projection/back_projection.html

Bradski GR. Computer Vision Face Tracking For Use in a Perceptual User Interface. Microcomputer Research Lab, Santa Clara, CA, Intel Corporation.
http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=B9A5277FF173D0455494A756940F7E6B?doi=10.1.1.14.7673&rep=rep1&type=pdf

 

 

Research – Detection & Tracking

When I started doing the research for this project I was, to a degree, thinking of the terms ‘detection’ and ‘tracking’ in an interchangeable way that was not helpful in developing the program we are going to need to fulfil the brief.

This was drawn into sharp focus when I was exploring HOG (Histogram of Oriented Gradients) and SVM (Support Vector Machine) as a possible combined approach to the project.

Image courtesy of software.intel.com

A Histogram of Oriented Gradients (HOG) is a feature descriptor that works by counting the occurrences of gradient orientations within an image (detection). As gradients tend to represent a sharp change in intensity value, there is a high likelihood that a gradient could be an edge. HOG tries to capture this local edge structure by dividing the image into small cells and building a histogram of gradient directions within each cell.

There is also an SVM training function available, which can be used to train a classifier to recognise objects from their HOG features.
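
For reference, computing a HOG feature vector with OpenCV's HOGDescriptor looks roughly like this (default descriptor parameters; the file name is a placeholder):

import cv2

# Sketch: compute HOG features for a single detection window.
hog = cv2.HOGDescriptor()  # defaults: 64x128 window, 8x8 cells, 9 bins
img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
img = cv2.resize(img, (64, 128))  # match the descriptor's window size
features = hog.compute(img)
print("feature vector length:", len(features))  # 3780 with the defaults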

 

REFERENCES:

HOG Descriptor Struct Reference (accessed 31.10.2018)
https://docs.opencv.org/3.4.1/d5/d33/structcv_1_1HOGDescriptor.html#details

Histogram of Oriented Gradients (HOG) Descriptor (accessed 31.10.2018)
https://software.intel.com/en-us/ipp-dev-reference-histogram-of-oriented-gradients-hog-descriptor

Dalal N., Triggs B. Histograms of oriented gradients for human detection. 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05).
https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1467360

Cruz J.E.C., Shiguemori E.H., Guimaraes L.N.F. A comparison of Haar-like, LBP and HOG approaches to concrete and asphalt runway detection in high resolution imagery.
http://epacis.net/jcis/PDF_JCIS/JCIS11-art.0101.pdf

Histograms of Oriented Gradients (accessed 31.10.2018)
https://www2.cs.duke.edu/courses/fall17/compsci527/notes/hog.pdf

Research – Tracking Texture & Tracking Motion

Part 1 – Tracking Texture

For our project we have decided to explore four different approaches, as described in an earlier post in this blog. My project partner is focusing on shape-based and colour-based tracking, and I am going to focus my efforts on motion-based and texture-based tracking.

Kalman Filtering

The first thing I noticed while researching this topic is that Kalman filtering seems to be a popular algorithm when it comes to this approach.

Kalman filtering has a variety of applications and, like all filters, it allows certain things to pass through and keeps other things out. The Kalman filter’s aim is to take imperfect information, sort out the useful parts of interest and reduce the uncertainty and noise.

Apparently, an early application of Kalman filtering was in guided missiles, and it also formed part of the onboard navigation system aboard the Apollo 11 Lunar Module.

Particle Filtering

Unlike the Kalman filter, particle filtering takes a non-linear approach. In a nutshell, particle filtering involves representing a posterior distribution by a set of random samples (particles).
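
A toy 1D sketch of that idea, where the particles stand in for guesses of a position and are re-weighted by a measurement (all values here are illustrative):

import numpy as np

# Sketch: represent a posterior over a 1D position with weighted samples.
rng = np.random.default_rng(0)
n = 500
particles = rng.uniform(0, 100, n)  # initial guesses of the position
weights = np.ones(n) / n

measurement = 42.0  # hypothetical noisy observation
sigma = 5.0         # assumed measurement noise

# Weight each particle by how well it explains the measurement.
weights *= np.exp(-0.5 * ((particles - measurement) / sigma) ** 2)
weights /= weights.sum()

# Resample: particles with high weight survive, the rest die off.
indices = rng.choice(n, size=n, p=weights)
particles = particles[indices]
print("estimated position:", particles.mean())  # posterior mean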

Initial research into the area of texture tracking seems to suggest that its effectiveness depends on the image in question possessing strong textures (unsurprisingly), and texture tracking techniques tend to be used in conjunction with other means of detection and tracking.

The series of images we are working with in this project is neither rich in texture nor in detail, as the images are quite limited in terms of resolution.

So, with that in mind, I am going to park this avenue of research here for now and may revisit it later should I decide to use this technique in combination with another approach.

OpenCV offers its own Kalman filter class, the constructor of which I have included below.
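
The constructor takes the dimensions of the state, the measurement and an optional control vector. A minimal constant-velocity sketch for tracking an (x, y) position follows (the matrices and values are illustrative):

import cv2
import numpy as np

# Sketch: cv2.KalmanFilter(dynamParams, measureParams, controlParams=0).
# Here the state is (x, y, dx, dy) and we measure only (x, y).
kalman = cv2.KalmanFilter(4, 2)
kalman.measurementMatrix = np.array([[1, 0, 0, 0],
                                     [0, 1, 0, 0]], np.float32)
kalman.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], np.float32)
kalman.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-4

measurement = np.array([[10.0], [20.0]], np.float32)  # hypothetical observed point
kalman.correct(measurement)           # fold the measurement into the estimate
prediction = kalman.predict()         # predicted (x, y, dx, dy)
print(prediction.ravel())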

 

REFERENCES:

Pressigout M, Marchand E. Real-Time Hybrid Tracking using Edge and Texture Information. The International Journal of Robotics Research Vol 26, No 7, July 2007
http://www.irisa.fr/lagadic/pdf/2007_ijrr_pressigout.pdf

Particle filter (accessed 25.10.2018)
https://en.wikipedia.org/wiki/Particle_filter

Motion Analysis and Object Tracking (accessed 25.10.2018)
https://docs.opencv.org/3.0-beta/modules/video/doc/motion_analysis_and_object_tracking.html

Tracking – Tracking by Background Subtraction (accessed 25.10.2018)
https://www.hdm-stuttgart.de/~maucher/Python/ComputerVision/html/Tracking.html

Vacchetti L., Lepetit V., Fua P. Combining Edge and Texture Information for Real-Time Accurate 3D Camera Tracking. CVlab, Swiss Federal Institute of Technology, 1015 Lausanne, Switzerland.
https://www.labri.fr/perso/vlepetit/pubs/vacchetti_ismar04.pdf