Wednesday, December 18, 2019

Real-time Blur Detection Methods | OpenCV | Image Processing

Digital images and videos are produced in massive quantities as digital cameras become ubiquitous; however, not every video is of good quality, and not every frame in a real-time video stream is usable. Blur is one of the most common image quality degradations, caused by factors such as limited contrast, inappropriate exposure time, and improper device handling.

Blur Detection

With the prevalence of digital cameras, the number of digital images grows quickly, which raises the demand for image quality assessment in terms of blur. Applications such as satellite communication and medical imaging involve real-time capture of high-definition images. If an image in such an application is corrupted or the information it conveys is unclear, it must be recaptured. A model is therefore needed that can detect blurred frames and discard them.

In this blog, I'll show you different methods that you can use for real-time blur detection with OpenCV.


Singular Value Decomposition Method

Singular Value Decomposition is a matrix decomposition method for reducing a matrix to its constituent parts in order to make certain subsequent matrix calculations simpler.

Given an m x n image I, its SVD can be written as I = U.Sigma.V^T, where
U is an m x m orthogonal matrix,
Sigma is an m x n diagonal matrix whose diagonal entries are the singular values lambda(1) >= lambda(2) >= ... >= 0, arranged in decreasing order,
V^T is the transpose of an n x n orthogonal matrix V.
The image can then be decomposed into a sum of rank-1 matrices as follows:

I = lambda(1).u(1).v(1)^T + lambda(2).u(2).v(2)^T + ... + lambda(n).u(n).v(n)^T

where u(i), v(i) are the column vectors of U and V, and lambda(i) are the diagonal terms of Sigma.
SVD actually decomposes an image into a weighted sum of a number of eigen-images, where the weights are exactly the singular values themselves. A compressed image that omits the small singular values in the tail therefore replaces the original image with a coarse approximation: the eigen-images with small singular values, which often capture detailed information, are discarded.
Such a situation is similar to image blurring, which keeps shape structures at large scales but discards image details at small scales. The first few most significant eigen-images operate at large scales and provide the rough shape of the image, while the latter, less significant eigen-images encode the image details.
For more detail, you can refer to this paper.
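The rank-1 decomposition above can be checked with a short NumPy sketch (the 8x8 random matrix stands in for an image and is purely illustrative):

```python
import numpy as np

# Hypothetical 8x8 grayscale "image" used only for illustration.
rng = np.random.default_rng(0)
image = rng.random((8, 8))

# Full SVD: image = U @ diag(s) @ Vt, singular values s in decreasing order.
U, s, Vt = np.linalg.svd(image, full_matrices=False)

# Rank-k approximation: keep only the k most significant eigen-images.
k = 3
approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# The approximation error shrinks as k grows toward full rank; keeping
# all singular values reconstructs the image up to floating-point noise.
err_k = np.linalg.norm(image - approx)
err_full = np.linalg.norm(image - U @ np.diag(s) @ Vt)
print(err_k, err_full)
```

Keeping only the first few terms reproduces the coarse structure of the image, which is exactly the "compressed image" described above.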

For a blurred image, the first few most significant eigen-images therefore usually have much higher weights (i.e. singular values) than those of a clear image. This suggests a singular value feature that measures the blurry degree of an image as follows:

beta1 = (lambda(1) + ... + lambda(k)) / (lambda(1) + ... + lambda(n))

i.e. the share of the total singular-value energy carried by the top k singular values, where each lambda(i) is evaluated within a local image patch around each image pixel.

Generally, blurred image regions have a higher blur degree than clear image regions with no blur. So an image pixel is classified into the blurred region if its beta1 is larger than a threshold; otherwise, it is categorized as non-blurred.
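A minimal sketch of this feature on whole images (the function name, the choice k=5, and the synthetic test textures are all assumptions for illustration, not taken from the paper):

```python
import numpy as np

def svd_blur_score(gray, k=5):
    # Share of total singular-value energy in the top-k singular values.
    # Higher values mean the first few eigen-images dominate, which the
    # analysis above associates with blur. k=5 is an assumed choice.
    s = np.linalg.svd(np.asarray(gray, dtype=np.float64), compute_uv=False)
    return s[:k].sum() / s.sum()

# Compare a sharp random texture with a heavily smoothed copy of it.
rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
blurred = sharp.copy()
for _ in range(10):  # crude box-like blur via neighbor averaging
    blurred = (blurred
               + np.roll(blurred, 1, axis=0) + np.roll(blurred, -1, axis=0)
               + np.roll(blurred, 1, axis=1) + np.roll(blurred, -1, axis=1)) / 5.0

score_sharp = svd_blur_score(sharp)
score_blurred = svd_blur_score(blurred)
print(score_sharp, score_blurred)  # the blurred texture scores higher
```

In the per-pixel formulation described above, the same score would be computed over a small patch around each pixel and compared against the threshold.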



Variance of Laplacian Method

If one has any background in signal processing, the first method to consider would be computing the Fast Fourier Transform of the image and then examining the distribution of low and high frequencies: if there is a low amount of high frequencies, then the image can be considered blurry.
The Laplacian method is simple and straightforward and can be implemented in a single line of code:

cv2.Laplacian(image, cv2.CV_64F).var()

For this, you just need to take a single channel of the image and convolve it with the following 3x3 Laplacian kernel:

 0  1  0
 1 -4  1
 0  1  0

Then take the variance of the response.

If the variance falls below a pre-defined threshold, the image is considered blurry; otherwise, it is not.
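The same computation can be spelled out in plain NumPy as a sketch (the 100.0 threshold is an assumed starting point, and the interior-only convolution means border handling differs slightly from OpenCV's):

```python
import numpy as np

def variance_of_laplacian(gray):
    # Convolve the single-channel image with the 3x3 Laplacian kernel
    # [[0, 1, 0], [1, -4, 1], [0, 1, 0]] and return the variance of the
    # response, mirroring cv2.Laplacian(gray, cv2.CV_64F).var().
    g = np.asarray(gray, dtype=np.float64)
    resp = (-4.0 * g[1:-1, 1:-1]
            + g[:-2, 1:-1] + g[2:, 1:-1]
            + g[1:-1, :-2] + g[1:-1, 2:])
    return resp.var()

def is_blurry(gray, threshold=100.0):
    # The threshold is an assumption; tune it per camera and scene.
    return variance_of_laplacian(gray) < threshold

# Sharp random texture vs. the same texture smoothed: blurring removes
# high frequencies, so the Laplacian response variance drops.
rng = np.random.default_rng(0)
sharp = rng.random((64, 64)) * 255.0
blurred = sharp.copy()
for _ in range(5):
    blurred = (blurred
               + np.roll(blurred, 1, axis=0) + np.roll(blurred, -1, axis=0)
               + np.roll(blurred, 1, axis=1) + np.roll(blurred, -1, axis=1)) / 5.0

print(variance_of_laplacian(sharp), variance_of_laplacian(blurred))
```

With OpenCV installed, `cv2.Laplacian(image, cv2.CV_64F).var()` gives the same measure directly.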


To know more about this method, follow the pyimagesearch post.

Another method is based on a Convolutional Neural Network, but it is computationally expensive, which also reduces the frame rate. If the frame rate drops, the chance of capturing more blurred frames increases.


Blurred frames matter because many computer vision applications must classify a frame or object in real time. With moving objects or a moving camera, a deep learning model may run on a blurred frame and produce a false output, which hurts the efficiency of your model. One mitigation is to increase the effective frame rate by running the camera capture on a separate thread.
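A minimal sketch of that threading idea (the `ThreadedCapture` class is hypothetical, not an OpenCV API; `source` is anything with a `read()` method returning `(ok, frame)`, such as a `cv2.VideoCapture`):

```python
import threading

class ThreadedCapture:
    """Grab frames on a background thread so the main loop never blocks.

    The processing loop calls read() and always gets the most recent
    frame instead of waiting on the camera's I/O.
    """

    def __init__(self, source):
        self.source = source
        self.ok, self.frame = source.read()  # prime with a first frame
        self.stopped = False
        self.lock = threading.Lock()
        self.thread = threading.Thread(target=self._update, daemon=True)
        self.thread.start()

    def _update(self):
        # Continuously overwrite the latest frame in the background.
        while not self.stopped:
            ok, frame = self.source.read()
            with self.lock:
                self.ok, self.frame = ok, frame

    def read(self):
        # Return the most recently grabbed frame without blocking.
        with self.lock:
            return self.ok, self.frame

    def stop(self):
        self.stopped = True
        self.thread.join()
```

In a real pipeline you would wrap `cv2.VideoCapture(0)` with this class; stale frames are simply dropped, so the blur detector and the model always see the newest frame.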

For any doubts or issues, leave your comments in the comment box.