Wednesday, February 5, 2020

Gaussian Mixture Model | EM Algorithm | Python Implementation

Machine Learning can be divided into two main areas: supervised and unsupervised learning. The main difference between the two lies in the nature of the data as well as the approaches used to deal with it. Clustering is an unsupervised learning problem in which we intend to find clusters of points in our dataset that share some common characteristics. In clustering, our job is to find sets of points that appear close together.


Gaussian Distribution

The most commonly used distribution over real numbers is the normal distribution, also known as the Gaussian distribution.

Two parameters, μ ∈ R and σ ∈ (0, ∞), control the normal distribution. The parameter μ gives the coordinate of the central peak and is also the mean of the distribution: E[x] = μ. The standard deviation of the distribution is given by σ and the variance by σ².
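Written out, the density is:

$$ \mathcal{N}(x;\mu,\sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right) $$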


When we evaluate the PDF we need to square and invert σ. When we need to evaluate the PDF frequently with different parameter values, a more efficient way of parametrizing the distribution is to use a parameter β ∈ (0, ∞) that controls the precision, or inverse variance, of the distribution:
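In that parametrization the density reads:

$$ \mathcal{N}(x;\mu,\beta^{-1}) = \sqrt{\frac{\beta}{2\pi}}\exp\left(-\frac{\beta}{2}(x-\mu)^2\right) $$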


Normal distributions are a sensible choice for many applications. In the absence of prior knowledge about what form a distribution over the real numbers should take, the normal distribution is a good default choice.


Gaussian Mixture Model

A Gaussian mixture model is a probability distribution that consists of multiple Gaussian distributions.

For d dimensions, the gaussian distribution of a vector x = (x1,x2,....xd)T is defined by:
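This is the standard multivariate normal density:

$$ \mathcal{N}(\mathbf{x};\boldsymbol{\mu},\boldsymbol{\Sigma}) = \frac{1}{\sqrt{(2\pi)^d |\boldsymbol{\Sigma}|}}\exp\left(-\frac{1}{2}(\mathbf{x}-\boldsymbol{\mu})^T \boldsymbol{\Sigma}^{-1} (\mathbf{x}-\boldsymbol{\mu})\right) $$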




where μ is the mean and Σ is the covariance matrix of the Gaussian.

Covariance is a measure of how changes in one variable are associated with changes in a second variable. Specifically, covariance measures the degree to which two variables are linearly associated. However, it is also often used informally as a general measure of how monotonically related two variables are.


Variance-Covariance Matrix

Variances and covariances are often displayed together in a variance-covariance matrix. The variances appear along the diagonal and the covariances appear in the off-diagonal elements, as shown below:
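For c data sets of deviation scores (using the notation defined just after), the matrix is:

$$ V = \frac{1}{N}\begin{pmatrix} \sum x_1^2 & \sum x_1 x_2 & \cdots & \sum x_1 x_c \\ \sum x_2 x_1 & \sum x_2^2 & \cdots & \sum x_2 x_c \\ \vdots & \vdots & \ddots & \vdots \\ \sum x_c x_1 & \sum x_c x_2 & \cdots & \sum x_c^2 \end{pmatrix} $$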




where 

V is a c × c variance-covariance matrix
N is the number of scores in each of the c data sets
xi is a deviation score from the ith data set
Σ xi² / N is the variance of elements from the ith data set
Σ xi xj / N is the covariance of elements from the ith and jth data sets.

The probability density of a mixture of K Gaussians is:
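In symbols, with mixture weights that sum to one:

$$ p(\mathbf{x}) = \sum_{j=1}^{K} w_j \, \mathcal{N}(\mathbf{x};\boldsymbol{\mu}_j,\boldsymbol{\Sigma}_j), \qquad \sum_{j=1}^{K} w_j = 1 $$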



where wj is the prior probability (weight) of the jth Gaussian.



Difference between Kmeans and Gaussian Mixture.

Kmeans: find k to minimize (x − μk)².

Gaussian Mixture (EM clustering): find k to minimize (x − μk)² / σ².

The difference (mathematically) is the denominator σ², which means GM takes the variance into consideration when it calculates the measurement.

Kmeans only calculates conventional Euclidean distance.

In other words, Kmeans calculates distance, while GM calculates a "weighted" distance.


How is it Optimized?

One of the most popular approaches to maximize the likelihood is to use the Expectation-Maximization (EM) algorithm.

Basic ideas of the EM algorithm:

Introduce a hidden variable such that its knowledge would simplify the maximization of likelihood.

At each iteration:

  • E-Step: Estimate the distribution of the hidden variable given the data and the current value of the parameters.
  • M-Step: Maximize the expected joint distribution of the data and the hidden variable with respect to the parameters.

Compare with Gradient Descent

You can obtain maximum likelihood estimates using different methods, and using an optimization algorithm is one of them. On the other hand, gradient descent can also be used to maximize functions other than the likelihood function.

When should you use it?

Anytime you have unlabeled data that you want to cluster, and the data can reasonably be modeled as normally distributed.

EM (Expectation Maximization) Algorithm

Initially, randomly assign k cluster centers.

Then iteratively refine the clusters based on the two steps (E and M) described above.


Python Implementation



Let's import the libraries and the iris dataset that we are going to use:
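A minimal sketch of this step (I am assuming the copy of the iris dataset bundled with scikit-learn; the original post may load it differently):

import numpy as np
from sklearn import datasets

# Load the iris dataset; X has 4 features per sample
iris = datasets.load_iris()
X = iris.data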




Now let's implement the Gaussian density function. You could use a library function for this, but it's actually interesting to see how things work internally. We need to create a function that implements the multivariate density defined earlier:
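One way to write it, following the formula above (a sketch, not necessarily the exact code from the original post):

def gaussian(X, mu, cov):
    # Multivariate normal density evaluated at every row of X
    d = mu.shape[0]
    diff = X - mu                                   # shape (n, d)
    inv = np.linalg.inv(cov)
    norm_const = 1.0 / np.sqrt(((2 * np.pi) ** d) * np.linalg.det(cov))
    exponent = -0.5 * np.sum(diff @ inv * diff, axis=1)
    return norm_const * np.exp(exponent)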






Now we initialize the clusters, which is the initialization step of GMM.
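A sketch of one simple initialization (random data points as means, identity covariances, uniform weights); other schemes, such as seeding the means with k-means, work as well:

def initialize_clusters(X, n_clusters, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    means = X[rng.choice(n, n_clusters, replace=False)]   # random points as initial means
    clusters = []
    for k in range(n_clusters):
        clusters.append({
            'pi_k': 1.0 / n_clusters,     # mixture weight
            'mu_k': means[k],             # mean vector
            'cov_k': np.identity(d),      # covariance matrix
        })
    return clusters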



Expectation step

We should now calculate $\gamma(z_{nk})$. We can achieve this by means of the following expression:
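That is, the responsibility of component k for point x_n:

$$ \gamma(z_{nk}) = \frac{\pi_k \,\mathcal{N}(\mathbf{x}_n \mid \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k)}{\sum_{j=1}^{K} \pi_j \,\mathcal{N}(\mathbf{x}_n \mid \boldsymbol{\mu}_j, \boldsymbol{\Sigma}_j)} $$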




Here, we calculate the denominator as a sum over all terms in the numerator and then assign it to a variable named totals.
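A sketch of the E-step, reusing the gaussian() helper and the cluster dictionaries from the sketches above:

def expectation_step(X, clusters):
    # gamma[:, k] holds the responsibility of cluster k for every data point
    n = X.shape[0]
    gamma = np.zeros((n, len(clusters)))
    for k, cluster in enumerate(clusters):
        gamma[:, k] = cluster['pi_k'] * gaussian(X, cluster['mu_k'], cluster['cov_k'])
    totals = gamma.sum(axis=1, keepdims=True)   # denominator: sum over all components
    gamma /= totals
    return gamma, totals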


Maximization Step


Now let's implement the maximization step.
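A sketch of the M-step, updating weights, means, and covariances from the responsibilities computed above:

def maximization_step(X, clusters, gamma):
    n = X.shape[0]
    for k, cluster in enumerate(clusters):
        N_k = gamma[:, k].sum()                              # effective number of points in cluster k
        cluster['pi_k'] = N_k / n                            # new mixture weight
        mu_k = (gamma[:, k][:, None] * X).sum(axis=0) / N_k  # weighted mean
        diff = X - mu_k
        cluster['mu_k'] = mu_k
        cluster['cov_k'] = (gamma[:, k][:, None] * diff).T @ diff / N_k  # weighted covariance
    return clusters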








Finally, we have a log-likelihood calculation which is given by:
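That is, the log of the mixture density summed over all data points:

$$ \ln p(\mathbf{X} \mid \boldsymbol{\pi}, \boldsymbol{\mu}, \boldsymbol{\Sigma}) = \sum_{n=1}^{N} \ln \sum_{k=1}^{K} \pi_k \,\mathcal{N}(\mathbf{x}_n \mid \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k) $$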



The Python code is:
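A one-line sketch, reusing the totals array returned by the E-step above (each entry is already the inner sum over components for one data point):

def log_likelihood(totals):
    return np.log(totals).sum()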



To learn how to train the model, check my GitHub, where you will find the complete code. The link is here.

  • GMM allows us to model more complex data.
  • EM Algorithm is a series of steps to find good parameter estimates when there are latent variables
  • EM Steps:

  1. Initialize the parameter estimates.
  2. Given the current parameter estimates, compute the expected log-likelihood of the data and the latent variables (E-step).
  3. Given that expectation, find better parameter estimates (M-step).
  4. Repeat steps 2 and 3 until convergence.

  • GMM works well but you have to guess the number of Gaussians. Kernel Density Estimation does not require that kind of guessing.
I have not written this code; when I need to use GMM in practice I mostly use sklearn, because it is much faster. I used this article to learn the mathematics behind GMM, and I have also read research papers to get more clarity; the link to one such paper is given in the references. The code shown here was written by oconteras309 (Oscar Contreras Carrasco).

Hope you find this article useful.

[1] Bishop, Christopher M. Pattern Recognition and Machine Learning (2006). Springer-Verlag, Berlin, Heidelberg.
[2] Murphy, Kevin P. Machine Learning: A Probabilistic Perspective (2012). MIT Press, Cambridge, MA.



Wednesday, December 25, 2019

Complete Guide for Simulated Annealing | Python Implementation

In this article, I will explain the simulated annealing algorithm. The word annealing comes from metallurgy: to make really strong metallic objects, people follow a slow and controlled cooling procedure known as annealing. The same idea can be applied to a computer science algorithm. The simulated annealing algorithm is widely applied to problems like the traveling salesman problem, designing printed circuit boards, planning a path for a robot, and, in bioinformatics, designing three-dimensional structures of protein molecules.


Simulated Annealing
Source: Wikipedia



Global Optimization


Optimization tasks are usually focused on finding a global maximum (or minimum) of a function. Sometimes deterministic search methods get stuck in a local optimum and never find the global optimal solution. Stochastic methods such as Monte Carlo methods can improve the search by helping algorithms escape these local optima and move closer toward the global optimum.

In most real-world tasks you may never find the global maximum, but usually the closer you get, the better.

So, let's understand this with an example:




Hill Climbing

Now, here our target is to find the global maxima in this fig.


Basic hill climbing algorithm


The objective of the algorithm is to choose the most optimal state (minimum energy) as the final state. Hill climbing is a naive approach: it compares the energies of the states adjacent to the current state with the current state itself, and chooses as the next state whichever of the three has the minimum energy.

Given a function f() we are trying to maximize(or minimize)

1. x0 = starting solution (can be random)
2. for i in 1:N, do

  • look around the neighborhood of  xi-1 by some delta, and evaluate f(x+-delta)
  • choose the neighbor with the highest(or lowest) value of f()
3. Return the xi value with the highest (or lowest) value of f(). This will be the last value, as the hill climbing algorithm never chooses a step that does not improve f(); therefore, it will never climb down (or up) from a local maximum (or minimum). A minimal Python sketch of this loop follows below.
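Here is that sketch for a 1-D function to be maximized (step size, neighborhood, and iteration count are just example choices):

def hill_climb(f, x0, delta=0.1, n_iter=1000):
    # Greedy hill climbing: move to the best neighbor, stop when no neighbor improves f
    x = x0
    for _ in range(n_iter):
        best = max((x - delta, x + delta), key=f)
        if f(best) <= f(x):          # stuck at a (local) maximum
            break
        x = best
    return x

# Example: f has its maximum at x = 3
print(hill_climb(lambda x: -(x - 3) ** 2, x0=0.0))   # roughly 3.0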


An improvement on the above algorithm is one with random restarts, but each individual run still faces the problem of never actually escaping a local optimum.




Simulated Annealing

Moving to a state that is more optimal (less energy) than the current state is considered a good move, and moving to a state that is less optimal (more energy) than the current state is considered a bad move. Hill climbing strictly takes only good moves.
The problem with hill climbing is that it may end up in a locally optimal state and mark it as the final state.


Simulated Annealing algorithm


Simulated annealing is a probabilistic approach to finding global maxima. The main difference between greedy search and simulated annealing is that greedy search will always choose the best proposal, whereas simulated annealing has some probability of choosing a worse proposal instead of strictly only accepting improvements. This helps the algorithm find a global optimum by jumping out of a local optimum.

The simulated annealing algorithm goes as follows. For a function h() that we are trying to maximize, do the following:

1. Generate an initial candidate solution x.
2. Set an initial temperature T > 0.
3. For i in 1:N (N is the number of iterations):
Sample t ~ g(t), where g is a symmetric distribution.
Propose the new candidate solution x' = x +- t.
Compute delta h = h(x') - h(x). If delta h > 0, accept x'; otherwise accept it with probability p = exp(delta h / Ti): draw u ~ U(0,1) and set x = x' if u <= p.
Update (cool) the temperature, e.g. T = alpha * T, where 0 < alpha < 1.
A compact Python sketch of this loop is given just below.
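In this sketch the Gaussian proposal, starting temperature, and cooling rate are example choices, not the only valid ones:

import math
import random

def simulated_annealing(h, x0, T=1.0, alpha=0.95, n_iter=10000, scale=0.1):
    # Maximize h(): always accept improvements, accept worse moves with probability exp(dh / T)
    x = x0
    for _ in range(n_iter):
        x_new = x + random.gauss(0, scale)       # symmetric proposal g(t)
        dh = h(x_new) - h(x)
        if dh > 0 or random.random() <= math.exp(dh / T):
            x = x_new                            # accept the candidate
        T = max(T * alpha, 1e-12)                # cooling schedule, floored to avoid division by zero
    return x

# Example: the global maximum is at x = 3; the run should land near it
print(simulated_annealing(lambda x: -(x - 3) ** 2, x0=-10.0))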

The greater the value of T, the greater the likelihood of moving around the search space. As T gets closer to zero, the algorithm will function like greedy hill climbing.

A good starting value of T varies from problem to problem. I usually start with 1, 10, or 100 and adjust after a few experiments. For alpha I normally choose 0.95. However, you can tune the cooling schedule as you like.






I hope this article helps you understand simulated annealing. For any doubt or query, put your comments in the comment box.

Thank You!


Wednesday, December 18, 2019

Real-time Blur Detection Methods | OpenCV | Image Processing

Digital images and videos are produced in massive quantities as digital cameras become popular; however, not every video is of good quality, and not every frame in a real-time video stream is good. Blur is one of the common image quality degradations, caused by factors such as limited contrast, inappropriate exposure time, and improper device handling.


Blur Detection


With the prevalence of digital cameras, the number of digital images is increasing quickly, which raises the demand for image quality assessment in terms of blur. Applications like satellite communication and medical imaging involve real-time capturing of high-definition images. If the image in such an application is corrupted or the information conveyed is not clear, it is necessary to recapture an appropriate image. A model is required that can detect blurred frames and discard them.

In this blog, I'll show you different methods that you can use for real-time blur detection with OpenCV.

Method-1


Singular Value Decomposition Method


Singular Value Decomposition is a matrix decomposition method for reducing a matrix to its constituent parts in order to make certain subsequent matrix calculations simpler.

Given an image I, its SVD can be represented by I = U.Sigma.V^T, where
U is an m × m orthogonal matrix,
Sigma is an m × n diagonal matrix whose diagonal entries are the singular values, arranged in decreasing order, and
V^T is the transpose of an n × n orthogonal matrix V.
The image can be decomposed into a sum of rank-1 matrices as follows:

where u(i) and v(i) are the column vectors of U and V, and lambda(i) are the diagonal entries (singular values) of Sigma.
SVD actually decomposes an image into a weighted summation of a number of eigen-images, where the weights are exactly the singular values themselves. Therefore, a compressed image that omits the small singular values in the tail replaces the original image by a coarse approximation. The eigen-images with small singular values, which often capture detailed information, are discarded instead.
Such a situation is similar to the image blurring that keeps shape structures at large scale but discards image details at small scales. The first few most significant Eigen images work on large scales that provide rough shapes of the image while those latter less significant Eigen images encode the image details.
To know in more detail you can refer to this paper.

For a blurred image, the first few most significant eigen-images therefore usually have much higher weights (i.e. singular values) compared with those of a clear image. There is a singular value feature that measures the blurry degree of an image as follows:
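In essence, this feature is the fraction of the total singular-value energy captured by the first k values (treat this form as an approximation of the definition in the referenced paper; k is a tunable parameter):

$$ \beta_1 = \frac{\sum_{i=1}^{k} \lambda_i}{\sum_{i=1}^{n} \lambda_i} $$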



where lambda(i) denotes the singular value that is evaluated within a local image patch for each image pixel.

Generally, blurred image regions have a higher blur degree compared with clear image regions with no blur. So an image pixel is classified into the blurred region if its beta1 is larger than a threshold; otherwise, it is categorized as a non-blurred region.

Implementation
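A rough sketch of such a check on a whole grayscale image (the post describes a patch-wise, per-pixel computation; this simplified version gives one score per image, and both k and the threshold are assumptions for illustration):

import cv2
import numpy as np

def svd_blur_score(gray, k=5):
    # Fraction of singular-value energy in the top-k values; blurred images score higher
    s = np.linalg.svd(gray.astype(np.float64), compute_uv=False)
    return s[:k].sum() / s.sum()

img = cv2.imread('image.jpg', cv2.IMREAD_GRAYSCALE)
score = svd_blur_score(img)
print('blurry' if score > 0.75 else 'sharp', score)   # 0.75 is an arbitrary example threshold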



Method-2


Variance of Laplacian Method


If you have any background in signal processing, the first method to consider would be computing the Fast Fourier Transform of the image and then examining the distribution of low and high frequencies: if there is a low amount of high frequencies, then the image can be considered blurry.
The Laplacian method is simple and straightforward and can be implemented in only a single line of code.

cv2.Laplacian(image, cv2.CV_64F).var()

For this, you just need to take a single channel of an image and convolve it with the following 3x3 kernel:
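This is the standard 3×3 Laplacian kernel (the one cv2.Laplacian uses with its default kernel size):

$$ \begin{pmatrix} 0 & 1 & 0 \\ 1 & -4 & 1 \\ 0 & 1 & 0 \end{pmatrix} $$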


And then take the variance of the response.

If the variance falls below a pre-defined threshold, the image is considered blurry; otherwise, it is not.

Implementation
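A minimal sketch of this method (the threshold of 100 is only a commonly used starting point and should be tuned for your images):

import cv2

def is_blurry(image_path, threshold=100.0):
    # Low variance of the Laplacian response means few sharp edges, i.e. a blurry image
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    score = cv2.Laplacian(gray, cv2.CV_64F).var()
    return score < threshold, score

blurry, score = is_blurry('image.jpg')
print('blurry' if blurry else 'sharp', score)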




To know more about this method follow the post of pyimagesearch.


Another approach is based on convolutional neural networks, but it is computationally expensive and reduces the frame rate. If the frame rate drops, the chances of capturing more blurry frames increase.

Recommended: 

In many computer vision applications you have to classify a frame or an object in real time, and blurred frames get in the way. With moving objects or a moving camera, a deep learning model may still run on a blurred frame and give a false output, which hurts the efficiency of your model. One solution is to increase the effective frame rate by running the camera capture on a separate thread, as sketched below.
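A minimal sketch of that idea: a background thread keeps reading frames so the main loop always grabs the latest one instead of waiting on the camera (illustrative only; error handling omitted):

import threading
import cv2

class ThreadedCamera:
    def __init__(self, src=0):
        self.cap = cv2.VideoCapture(src)
        self.ret, self.frame = self.cap.read()
        self.stopped = False
        threading.Thread(target=self._update, daemon=True).start()

    def _update(self):
        # Keep overwriting self.frame with the newest frame from the camera
        while not self.stopped:
            self.ret, self.frame = self.cap.read()

    def read(self):
        return self.ret, self.frame

    def stop(self):
        self.stopped = True
        self.cap.release()

cam = ThreadedCamera(0)
# ... run your blur check / model on cam.read() inside the main loop ...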




For any doubt or issue, put your comments in the comment box.


Monday, December 9, 2019

Tutorial of Principal Component Analysis | Derivation | Python Implementation


With the advancement of technologies like Machine Learning and Artificial Intelligence, it has become crucial to understand the fundamentals behind them. With the help of this blog, I will help you understand the concept behind dimensionality reduction, the derivation of PCA, and a Python implementation.


Principal Component Analysis


Principal Component Analysis (PCA) is one of the simple machine learning algorithms which can be derived using only knowledge of basic linear algebra.

What is Principal Component Analysis?


Principal Component Analysis (PCA) is a dimensionality reduction technique that enables you to identify correlations and patterns in a data set so that it can be transformed into a data set of significantly lower dimension without losing much important information.

Sometimes data can be so complicated that it is challenging to understand what it all means and which parts are actually important. In that case, PCA figures out patterns and correlations among the various features in the data set. On finding strong correlations between different variables or features, the dimensionality of the data is reduced in such a way that the significant information is still retained.

From a high level, PCA has four main steps:

1. Standardization of data.
2. Computing the covariance matrix of the data.
3. Compute the eigenvalues and vectors of this covariance matrix.
4. Use the eigenvalues and eigenvectors to select only the most important feature vectors and then transform the data onto those vectors for reduced dimensionality.

Now Let's discuss each of the steps in detail:

Standardization of the data:


In data analysis and processing we often see that skipping standardization will probably result in a biased outcome. Standardization is all about scaling the data so that all the variables and their values lie within a similar range.

Let's understand this with an example. Say we have two variables: age, with values ranging between 10 and 50, and salary, with values ranging between 10,000 and 100,000. In such a scenario, the output calculated using these variables is going to be biased, since the variable with the larger range will have more impact on the outcome.
Standardization of data can be calculated by:
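This is the usual z-score:

$$ z = \frac{x - \mu}{\sigma} $$

where μ is the mean and σ the standard deviation of the feature being scaled.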






Computing the covariance matrix of data


Principal Component Analysis helps to identify the correlation and dependencies among the features in a dataset. A covariance matrix states the correlation between the different variables in the dataset. It is essential to identify the highly dependent variables because they contain biased and redundant information which reduces the overall performance of the model. 
The covariance matrix is just an array where each value specifies the covariance between two feature variables based on their row-column position in the matrix. The formula is:
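One standard form (the sample covariance, normalized by n − 1; some implementations divide by n instead):

$$ \Sigma = \frac{1}{n-1} (\mathbf{X} - \bar{\mathbf{x}})^{T} (\mathbf{X} - \bar{\mathbf{x}}) $$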


where the x with the line on top is a vector of mean values for each feature of X. When we multiply a transposed matrix by the original one we end up multiplying each of the features for each data point together. The code for this is:
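A sketch consistent with the formula above (np.cov with rowvar=False gives the same result by default):

import numpy as np

def covariance_matrix(X):
    # Center the data, then average the outer products of the rows
    X_centered = X - X.mean(axis=0)
    n = X_centered.shape[0]
    return (X_centered.T @ X_centered) / (n - 1)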




If a covariance value is negative, the respective variables are inversely related: one tends to decrease as the other increases.
A positive covariance denotes that the respective variables are directly related.


Computing Eigen Values and Vectors


Eigen Vectors and Eigen Values must be computed from the covariance matrix in order to determine the principal components of the data set.
If an eigenvector has a corresponding eigenvalue of high magnitude it means that our data has high variance along that vector in feature space. This vector holds a lot of information about our data since any movement along that vector causes large "Variance". 
On the other hand, vectors with small eigenvalues have low variance and our data does not vary greatly when moving along that vector. Since nothing changes when moving along that particular feature vector i.e changing the value of that feature vector does not greatly affect our data, then we can say that this feature is not very important and we can afford to remove it.
So that's the role of eigenvalues and eigenvectors within Principal Component Analysis: we find the vectors that are most important in representing our data and discard the rest. Computing the eigenvectors and eigenvalues of our covariance matrix is a one-liner in NumPy; next, we sort the eigenvectors in descending order of their eigenvalues.
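A sketch of that step, continuing with X as the data matrix and the covariance_matrix helper from the previous sketch:

# Eigen decomposition of the covariance matrix, then sort by descending eigenvalue
cov = covariance_matrix(X)
eig_vals, eig_vecs = np.linalg.eig(cov)          # columns of eig_vecs are the eigenvectors
order = np.argsort(eig_vals)[::-1]
eig_vals, eig_vecs = eig_vals[order], eig_vecs[:, order]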



Project onto new vectors


Now we have eigenvectors ordered by "importance" to our dataset based on their eigenvalues. We need to select the most important feature vectors and discard the rest.
Let's take an example: we have a dataset with 10 feature vectors. After computing the covariance matrix, we discover that the eigenvalues are:
[12, 10, 8, 7, 5, 1, 0.1, 0.03, 0.005, 0.0009]
The total sum of the array is 43.1359. The first 6 values sum to 43, and 43 / 43.1359 = 99.68% of the total, which means our first 6 eigenvectors effectively hold 99.68% of the variance, or information, about our dataset. We can therefore discard the last 4 feature vectors, as they only contain about 0.32% of the information.
We can simply define a threshold upon which we decide whether to keep or discard each feature vector. In the code below, the threshold is set to 97%.
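A sketch of that selection, continuing from the decomposition above:

# Keep the smallest number of components whose eigenvalues explain >= 97% of the variance
explained = np.cumsum(eig_vals) / eig_vals.sum()
n_components = int(np.searchsorted(explained, 0.97) + 1)
W = eig_vecs[:, :n_components]     # projection matrix: one kept eigenvector per column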


The ultimate step is to project our data onto the vectors we chose to keep. We do this by building a projection matrix: a matrix we multiply our data by to project it onto the new vectors. To create it, we concatenate all the eigenvectors we decided to keep as columns. Our final step is then simply to take the dot product between our original (standardized) data and this projection matrix.
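Continuing with X and the projection matrix W from the sketches above:

# Project the centered data onto the kept eigenvectors
X_centered = X - X.mean(axis=0)
X_reduced = X_centered @ W          # shape: (n_samples, n_components)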



Now Dimension Reduced!

For any doubt or issue, put your comments in the comment box.

Thursday, December 5, 2019

Complete Guide for Face Detection Methods | Working | dlib | HaarCascade

In the last few years, facial recognition has become an important idea and is being appreciated all over the world. In many places it has been used to solve real-world problems and make life easier. With the enormous increase in video and image databases, there is a great need for automatic understanding and examination of this information by intelligent systems, as doing it manually is becoming plainly out of reach. Humans cannot identify large numbers of different faces the way machines can. So an automatic face detection system plays an important role in face recognition, facial expression recognition, head-pose estimation, human-computer interaction, and so on.


face detection


The method of face detection in pictures is complicated because of variability present across human faces such as pose, expression, position and orientation, skin color, the presence of glasses or facial hair, differences in camera gain, lighting conditions, and image resolution.

Object detection is a computer technology connected to image processing and computer vision that deals with detecting instances of objects such as human faces, buildings, trees, cars, etc. The primary aim of a face detection algorithm is to determine whether or not there is any face in an image.

I have often used dlib for face detection and facial landmark detection. The frontal face detector in dlib works really well. It's simple and works great.

The detector is based on the histogram of oriented gradients (HOG) and a linear SVM. While the HOG+SVM based face detector has been around for a while and has collected a good number of users, I am not sure how many of us have noticed the CNN based face detector available in dlib.

The CNN (Convolutional Neural Network) based detector is capable of detecting faces at almost all angles. Unfortunately, it is not suitable for real-time video on a CPU; it is meant to be executed on a GPU. To get the same speed as the HOG based detector, you might need to run it on a powerful Nvidia GPU.

Now I am going to show you how you can use the CNN based face detector from dlib on images and compare the results with the HOG based detector, with ready-to-use Python code.

Let's Start

CNN Face Detection with Dlib




Now let's import the necessary packages. You can install the third-party packages by typing the command below in the terminal (argparse and time ship with Python's standard library, so they don't need to be installed).

pip install opencv-python dlib

You can get the model weights file by typing the command below in the terminal.

wget http://arunponnusamy.com/files/mmod_human_face_detector.dat

You can run the code by typing 

python cnn-facedetection-dlib.py -i <path-to-image-input> -w <path-to-weights-file>
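A sketch of what cnn-facedetection-dlib.py could contain (the argument names match the command above; this is a plain use of dlib's CNN detector, not necessarily the exact original script):

import argparse
import cv2
import dlib

ap = argparse.ArgumentParser()
ap.add_argument('-i', '--image', required=True, help='path to input image')
ap.add_argument('-w', '--weights', required=True, help='path to mmod_human_face_detector.dat')
args = ap.parse_args()

image = cv2.imread(args.image)
rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

detector = dlib.cnn_face_detection_model_v1(args.weights)
detections = detector(rgb, 1)              # 1 = upsample the image once

for d in detections:
    r = d.rect                             # CNN detections wrap the box in .rect
    cv2.rectangle(image, (r.left(), r.top()), (r.right(), r.bottom()), (0, 255, 0), 2)

cv2.imshow('CNN face detection', image)
cv2.waitKey(0)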

If you find an issue while importing dlib in python 3.6 copy these 2 files from my drive and paste in the site-packages of your environment. The link to my drive is here.


HOG Face Detection with Dlib




You have already installed all the necessary packages required for this.

You can run this code by typing 

python HOG-facedetection-dlib.py -i <path to image input>
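A sketch of what HOG-facedetection-dlib.py could contain (again illustrative, not necessarily the exact original script):

import argparse
import cv2
import dlib

ap = argparse.ArgumentParser()
ap.add_argument('-i', '--image', required=True, help='path to input image')
args = ap.parse_args()

image = cv2.imread(args.image)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

detector = dlib.get_frontal_face_detector()   # HOG + linear SVM, no weights file needed
faces = detector(gray, 1)

for r in faces:                               # HOG detections are plain rectangles
    cv2.rectangle(image, (r.left(), r.top()), (r.right(), r.bottom()), (0, 255, 0), 2)

cv2.imshow('HOG face detection', image)
cv2.waitKey(0)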

For the HOG based one, we don't need to provide any file to initialize. It is pre-built inside dlib. Just calling the method should be enough.

For the CNN based one, we need to provide the weights file to initialize with.

Real-time HOG Face Detector with Dlib




You can run this code by typing 
python realtime-hogfacedetector-dlib.py
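A sketch of a real-time version using the default webcam (illustrative, not necessarily the exact original script):

import cv2
import dlib

detector = dlib.get_frontal_face_detector()
cap = cv2.VideoCapture(0)                     # default webcam

while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for r in detector(gray, 0):               # 0 = no upsampling, keeps it fast
        cv2.rectangle(frame, (r.left(), r.top()), (r.right(), r.bottom()), (0, 255, 0), 2)
    cv2.imshow('Real-time HOG face detector', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()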

Then you will find output like this:


face detection with dlib


Real-time face detection with haar-cascade

Haar cascade is based on the Haar wavelet technique, analyzing pixels in the image in square regions. It uses machine learning to reach a high degree of accuracy from what is called "training data", and uses the "integral image" concept to compute the detected "features". Haar cascade uses the AdaBoost learning algorithm, which selects a small number of important features from a large set to build an efficient cascade of classifiers.

The code snippet to run this is:
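A minimal sketch of real-time Haar cascade face detection (the detectMultiScale parameters are common defaults, not the only valid ones):

import cv2

face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
cap = cv2.VideoCapture(0)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
    cv2.imshow('Haar cascade face detection', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()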




Use this link to get the haarcascade_frontalface_default.xml.

If you run the above code you will see that your face is detected.

If you like this post, don't forget to follow us for new and interesting posts.



Tuesday, November 12, 2019

Benefits and Challenges of NLP in supply chain?

Natural language processing is a technology that enables computer systems and software to analyze, interpret, and act on requests and data input through normal human language.


NLP in supply chain

Artificial intelligence and machine learning are both required to get the most out of NLP. The complexity of human language requires self-teaching systems and smart algorithms to parse and understand language input and provide relevant responses and actions.

Artificial Intelligence applications utilizing natural language processing for the supply chain can benefit globally as well as local supply chains to create more reliable human-machine interfaces for not just consumers but also suppliers, manufacturers, and distributors.
With continued advancement and globalization, supply chains across the globe are going to get larger and more complex. Companies are increasingly taking the specialization approach, where external operations and business units are divested to increase the organization's focus on its main offerings by outsourcing these operations to overseas partners. As a simplified example, a shoe sole manufacturer who also makes their own shoes can outsource the shoe-making part of their operations to another business that specializes in it. This way, every organization does what it is best at with maximum efficiency.

Natural Language Processing benefits to the Supply Chain

Natural language processing can help people involved in the supply chain understand normal human communications and process information faster to drive relevant action. The following are benefits of natural language processing in the supply chain.

1. Understand and decrease potential risks with suppliers, manufacturers, and other supply chain stakeholders through analyzing reports, social media, industry news, and other areas.

2. Ensure compliance with sourcing and ethical practices through monitoring publicly-available information for potential violations by supply chain stakeholders.

3. Monitor the reputation of supply chain organizations to identify potential issues.

4. Diminish or remove the language barriers for uniform communications and more reliable supplier relationships.

5. Update information obtained from supply chain stakeholders through a chatbot and adaptive interview technology to assure data is captured in a reliable, consistent way.

6. Optimize the supply chain through querying complex datasets using natural language and find possibilities to enhance processes and decrease waste.

7. Improve customer satisfaction through smart, automated consumer service that gives easily-understandable supply chain information to all stakeholders.


Natural Language Processing challenges in the supply chain.

The main challenges with implementing and using natural language processing in the supply chain are:

1. Training - NLP is a new way of interacting with a computer system, so supply chain stakeholders will need training to get the most out of the NLP technology.


2. Interfaces - NLP doesn't use a standard software interface to capture and manage information. Instead, users will need to ask questions and provide information using regular phrasing, grammar, and vocabulary.


3. Integration - Integrating NLP into existing technology and business processes is complex and requires significant expertise and wide-ranging operational knowledge.


4. Investment - NLP is a significant investment and requires the right project management and resources to implement it effectively.


As technology continues to mature and become more functional, the applications of natural language processing for the supply chain will gain greater importance. With the help of technologies like computer vision and augmented reality, supply chain operations can become even simpler for the employees and efficient for the owners.

Thursday, November 7, 2019

Future of Aerospace and How AI is driving aerospace engineering?

When we think about the future of aerospace, our mind might go someplace big. Taking a commercial flight to mars at the speed of light, commuting to work on a passenger drone, or living in an airship the size of a small city.

All good ideas, albeit a bit far-fetched. So let's talk about the real future of aerospace, which is happening right now.
For the past couple of years, the world has been abuzz with news that we may be getting closer to commercial space travel.
Private space companies seem to be breaking barriers every week, getting us closer to the dream of spending our holiday orbiting the earth.
This is the one vision of the future of flight but there are other huge steps in the world of aerospace that are being taken to help improve holidays down here on earth.
In the next five or six years we are going to see a lot more small features on airplanes that the average person in the street would not notice, but they make a huge difference to both aircraft noise emissions and fuel burn. This will drive aviation toward a more sustainable future, because reducing the emissions we put into the atmosphere is absolutely critical.

Commercial aviation is all about getting people efficiently from point to point. So companies are improving technologies on the flight deck to enhance the ability to get an airplane in and out of airfields, including when the weather is poor.
aircraft

We see the impact of artificial intelligence across different industries, how advances in AI could help aerospace companies better optimize their manufacturing processes. AI will allow the business to develop sustainable and lightweight aircraft components.

The challenges faced by the aerospace sector are labor cost, human error, health and safety concerns. Manufacturing and development procedures can be increasingly time-consuming due to industrial inspections to evaluate whether a component matches the required specifications. The aerospace industry is constantly looking for an effective way to speed up development processes in order to meet the growing demand as well as deliver high-quality components.

According to the Accenture report, 80% of leading executives within the aerospace and defense industries expect that every part of the workforce will be directly affected by AI-based decisions by 2021.

AI applications in aerospace

The use of AI in the aerospace industry will help businesses in the following manner:

Product design

In the aerospace industry, lightweight and sturdy components are always favored for an aircraft. To create such components, manufacturers can use a generative structure along with AI algorithms. Generative design is an iterative process, where engineers or architects use design goals as input alongside constraints and parameters like materials, available resources, and assigned spending budget to develop an optimal product design. 
Combining with AI, generative design software can enable product designers to evaluate numerous design options in a limited span of time. Using this technology, designers can create new products that are lightweight and sustainable. Artificial Intelligence enabled generative design coupled with 3D printing can be used to deliver various aircraft parts, for example, turbines, and wings.

Fuel efficiency

Globally, commercial airlines consume billions of gallons of fuel every year; it is estimated that worldwide fuel consumption will reach an all-time high of 97 billion gallons in 2019. Hence, conserving fuel is a major concern for the whole aerospace industry. For this purpose, various organizations are already fabricating lightweight parts with the aid of 3D printing. Artificial Intelligence can also help aerospace companies improve their fuel efficiency.
A plane utilizes fuel at the highest rate in the climb phase. AI models can analyze how much fuel is consumed in the climb stage of different aircraft and by numerous pilots to create climb phase profiles for each pilot. These profiles can optimize fuel consumption during the climb phase. By utilizing AI-generated climb phase profiles, pilots can effectively preserve fuel during flights.

Operational efficiency and maintenance

Airplanes have various sensors that help pilots measure speed, air pressure, and altitude. These sensors can be utilized to collect critical data like temperature, moisture, and pressure in different parts of an aircraft. Artificial Intelligence models can be trained on the collected data to recognize abnormal behavior in aircraft parts. For example, sensors placed in turbines can collect data such as rotation speed, air pressure, and temperature. The obtained data can be used to teach AI models what normal turbine behavior looks like. By examining this data, AI models can detect when turbines deviate from their normal behavior and notify the concerned staff about possible faults. Hence, airlines can recognize defective aircraft parts beforehand and fix them. In this manner, the utilization of Artificial Intelligence in the aerospace industry can help business leaders improve their operational efficiency by avoiding component failures that lead to downtime.

Pilot training

AI simulators coupled with virtual reality framework can be used to give pilots progressively realistic simulation of flying experience. AI-enabled simulators can be used to accumulate and analyze training data to know every pilot’s strengths and weaknesses to generate a detailed report that can be presented to their trainer. The received data can also be utilized to develop personalized training programs for each pilot. Personalized training programs can allow pilots to discuss their individual challenges more efficiently compared to traditional training programs.

Air traffic management

Air traffic control is one of the focus tasks of airports and airlines. However, as billions of travelers opt for air travel, air traffic control can be immensely complex. Hence, leveraging Artificial Intelligence for air traffic control can be an efficient solution. AI-powered intelligent assistants can help pilots in making informed decisions using weather data from sensors and flight data. Using such data, AI-based assistants can recommend alternative routes to pilots in order to make air travel more reliable and quicker.
AI can also be used along with smart cameras to recognize aircraft when they exit the runway and notify air traffic controllers. Using this data, air traffic controllers can clear the arrival runway for the next airplane. This technology can prove to be very helpful in low visibility situations such as fog. In this manner, the utilization of Artificial Intelligence in aerospace help in managing air traffic and reducing bottlenecks on airports.

Passenger identification

Security is one of the most important priorities of commercial airlines, and Artificial Intelligence can offer powerful solutions to ensure the security of passengers. AI-enabled smart cameras use facial recognition to identify suspicious people at an airport; for this, the AI systems are trained with images of people with criminal records. Similarly, AI-powered smart cameras can also be used to detect malicious activity in an airport.

Customer service

Consumer satisfaction and loyalty are exceptionally important in commercial flying. The implementation of AI in the aerospace industry enables commercial airlines to provide enhanced customer service. For this purpose, commercial airlines use Artificial Intelligence-powered chatbots that are capable of resolving customer queries. Using chatbots, commercial airlines provide 24/7 automated customer support. These chatbots guide customers while booking and canceling tickets. Also, AI-powered chatbots continually learn from interactions with different customers to develop their ability to understand a customer's context in conversations and to replicate human responses.
To sum up, these AI applications cannot function autonomously and require human intervention. However, with further research and development, AI may be capable of carrying out several tasks autonomously and may become a crucial part of autopilot systems.

Saturday, November 2, 2019

Real-Time Object Tracking with YOLOV3 and Deep Sort

YOLO (You Only Look Once) version 3 is a model for object detection. Now, what is object detection? Object detection is a technique to identify the locations of objects in an image. If we have a single object in an image and we want to detect it, this is known as image localization. If there are multiple different objects in an image, then we need to determine the locations in the image where certain objects are present, as well as classify those objects.

Previously, methods like R-CNN, SSD, Faster R-CNN, Mask R-CNN, and their different variations were used to perform this task in multiple steps. They are hard to optimize and slow to run because each individual component must be trained separately. YOLOv3 does it all with a single neural network.

YOLOv3 gives predictions at three scales, which are obtained by down-sampling the dimensions of the input image by 32, 16, and 8 respectively.

Realtime object tracking

How can we use this for Object tracking?


Counting the number of objects (persons, cars, etc.) manually is very tedious, and there is a high chance of mistakes. It is often impractical to analyze massive datasets of surveillance videos manually. Automated tracking of objects is one of the primary abilities needed for computerized analysis of such videos. Object tracking and video analysis play a crucial role in many applications, including traffic safety and intelligent monitoring. The real world is messy, and challenges like posture, partial occlusion, background clutter, and illumination changes complicate the problem.

For object tracking we use the Deep SORT algorithm, where we start with all possible detections in a frame and give each one an ID. In the following frame, we try to carry each object's ID forward. If an object moves out of the frame, its ID is dropped; if a new object appears, it starts off with a fresh ID.

Applications


Infrastructure Planning - Government, industry, and business use object counting and tracking to learn things like how crowded public places are at a given time with people and vehicles. With this data, governments can redesign roads and industry can change its infrastructure.

Safety - If you are searching for someone who is lost in a natural disaster or a crowded area, or stuck in some remote location, computer vision can be very helpful in such cases.

Retail - Inventory management, optimizing store layout, understanding peak times and potentially even protect against theft in the retail stores.

Security - Monitoring people in crowded places like shopping malls, airports, railway stations, tourist sites, etc. using CCTV cameras, which can help prevent criminal activities.

What is a Deep SORT(Simple Online Realtime Tracking) Algorithm?

In the Deep SORT algorithm, tracking is not based only on distance and velocity but also on what the tracked person looks like. Deep SORT adds this capability by computing deep appearance features for every bounding box and using the similarity between these features as an extra factor in the tracking logic.
The reason it tracks so well is its use of a Kalman filter and the Hungarian algorithm.

1. A Hungarian algorithm can tell if an object in the current frame is the same as the one in the previous frame. It will be used for association and id attribution.

2. A Kalman filter is an algorithm that can predict future positions based on the current position. It can also estimate the current position better than what the sensor alone tells us. It is used to obtain a better association. A tiny illustration of the association step is given below.
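This is only a toy illustration of Hungarian-style matching (not Deep SORT's actual code): current detections are matched to existing tracks using an IoU-based cost matrix and scipy's linear_sum_assignment.

import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    # Boxes given as (x1, y1, x2, y2)
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def associate(tracks, detections):
    # Cost = 1 - IoU; the Hungarian algorithm finds the minimum-cost one-to-one matching
    cost = np.array([[1.0 - iou(t, d) for d in detections] for t in tracks])
    track_idx, det_idx = linear_sum_assignment(cost)
    return list(zip(track_idx, det_idx))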

Implementation


Now let's get started with an implementation part

For the implementation of Object tracking with YOLOv3. Feel free to check out my Github Repo here.

Let's start with the set up of the Deep Sort algorithm from the deep sort Github repository. You can check out that repository from here.

steps follow for object tracking


If you run these commands, they will automatically clone the repo and set it up in your directory.

Next, you need to download the Yolo weights from my google drive. With this link, you can download the weights and use them locally and put them in the main directory.

Next, you need to set the Yolo weights with the help of a deep_sort package.

steps follow for object tracking


Next, you can take any video from the internet to check the output of the model. For this, I took a video from the Active Vision Laboratory of Oxford University; you can find that video at this link. I converted the video to mp4 using ffmpeg. If your ffmpeg command is not working, follow this site to install it. The sample code is here:

steps follow for object tracking


With the help of deep_sort, I used four variables to keep track of the four corner coordinates of each bounding box. To keep only the correct bounding boxes, I used a threshold to filter out duplicates, roughly as illustrated below.
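A rough illustration of that kind of filtering (greedy non-maximum suppression, reusing the iou() helper from the sketch above; not the repository's exact code):

def filter_duplicates(boxes, scores, iou_threshold=0.5):
    # Keep the highest-scoring box, then drop remaining boxes that overlap it too much
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep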

steps follow for object tracking


Finally, the output.avi video file is saved in your directory.

real-time object tracking



Problems with Deep Sort


1. If the bounding boxes are too big, then too much background is captured in the features, reducing the effectiveness of the algorithm.

2. If people are dressed similarly, as happens in a sports team, that can result in similar features and ID switching.
