Fundamentals of Artificial Neural Networks
In this session, we first define a neural network and discuss what our brain can do. We then introduce a mathematical model for artificial neural networks and work through multilayer perceptron concepts using logic gates. You may follow the links below for the complete course.
1. Biological Model of Neural Network
What can our BRAIN do?
Recognize people after so many years
Read different handwriting
The brain has around 10^11 neurons
Each neuron has up to 10,000 connections.
https://www.youtube.com/watch?v=IEX2A8sLhUc&list=PLggdFS5B1D4nqX0A4sdeTZT38oqtASTQ0&index=1
2. How Our Brain Makes Decisions
Our sense organs interact with the outer world and send the visual and sound information to the neurons.
By looking at a picture, each neuron in your brain gets fired or activated only when its respective criteria are met. There are millions of neurons interconnected to make a hierarchical decision based on what they receive.
See the below video for an example.
https://www.youtube.com/watch?v=E45pJ6izYDM&list=PLggdFS5B1D4nqX0A4sdeTZT38oqtASTQ0&index=2
3. Classification theory
What is classification?
According to the dictionary, classification is an action or a process of classifying something based on shared qualities or characteristics.
But why is it important for us and how is this related to Machine Learning?
One of the first things we all learned in school was classification. In math class, we learned how to sort different shapes into different groups: if a shape has four right angles and four straight sides, we can call it a square. Basically, we were learning how to classify different things.
In science, we also learned how to classify different plants and animals, and we gave each group a name. This is an important aspect of machine learning as well. A manager who needs to decide about a particular situation will compare it with one of the categories in his mind in order to make a suitable decision!
If we plot our dataset on a graph, we can easily classify it by drawing a line and writing a mathematical function for it. See the below video to learn more.
https://www.youtube.com/watch?v=6_XYWOEkDrY&list=PLggdFS5B1D4nqX0A4sdeTZT38oqtASTQ0&index=3
4. What Is an Artificial Neuron, and What Is Binary Linear Classification?
We have expressed the classification of some datasets with a mathematical expression. Now let us model a simple neuron using this expression.
In this expression, 'x' and 'y' are the inputs to the neuron. Each input has a particular effect on our system, which is represented by 'a' and 'b'. For example, people's income has more effect on where they choose to live than their eye color does! So we can give more weight to the income variable and less (or no) weight to the eye-color variable.
And finally, 'c' in our plot acts as the bias for the neuron. See the below video to learn more about this topic!
https://www.youtube.com/watch?v=09SB3xkWJa4&list=PLggdFS5B1D4nqX0A4sdeTZT38oqtASTQ0&index=4
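The linear decision rule described above can be sketched in a few lines of Python. The weights 'a' and 'b' and the bias 'c' below are illustrative values, not taken from the course.

```python
# Minimal sketch of a binary linear classifier, assuming the decision
# rule a*x + b*y + c >= 0 described above. The specific values of
# a, b, and c here are illustrative only.

def classify(x, y, a=2.0, b=0.5, c=-3.0):
    """Return 1 if the point (x, y) lies on or above the line, else 0."""
    return 1 if a * x + b * y + c >= 0 else 0

print(classify(2, 1))  # 2*2 + 0.5*1 - 3 = 1.5, so class 1
print(classify(0, 1))  # 0 + 0.5*1 - 3 = -2.5, so class 0
```

Changing 'a', 'b', and 'c' moves and tilts the separating line, which is exactly what training a neuron adjusts.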
5. Combinational Logic Gate: AND
Let's take a look at Combinational Logic Gate 'AND' and use a binary linear classification to plot it on a graph.
We can also view the AND gate as a truth table. Let's say we have two statements and want to know whether the person is telling us the truth or not!
In this table, '1' represents a true answer and '0' represents a false answer.
We ask the target person two questions and record the answers in columns 'X' and 'Y'. If and only if this person answers both questions correctly (meaning two '1' inputs) can we trust them, and we mark '1' in the last column, 'Z'.
Click on the video below to see the process.
https://www.youtube.com/watch?v=bQl3sxiOKsI&list=PLggdFS5B1D4nqX0A4sdeTZT38oqtASTQ0&index=5
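The AND truth table above can be computed by a single neuron. The weights of 1 and bias of -1.5 below are one common illustrative choice, not necessarily the values used in the video.

```python
# Sketch: a single neuron implementing the AND truth table.
# Weights w1 = w2 = 1 and bias = -1.5 are illustrative values.

def and_neuron(x, y, w1=1.0, w2=1.0, bias=-1.5):
    """Fire (return 1) only when the weighted sum reaches zero."""
    return 1 if w1 * x + w2 * y + bias >= 0 else 0

for x in (0, 1):
    for y in (0, 1):
        print(x, y, "->", and_neuron(x, y))  # 1 only for x = y = 1
```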
6. Logical Disjunction Gate: OR
In the logical disjunction OR, we want to see whether the person is telling us at least one true statement. After filling in the truth table, we can present it on a graph with a binary linear classifier.
See the below video to learn more!
https://www.youtube.com/watch?v=ZmVNZyRswv4&list=PLggdFS5B1D4nqX0A4sdeTZT38oqtASTQ0&index=6
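The same single-neuron sketch works for OR; only the bias changes, so the separating line slides toward the origin. The bias of -0.5 below is an illustrative value.

```python
# Sketch: a single neuron implementing the OR truth table.
# Compared with AND, only the bias differs (illustrative value -0.5).

def or_neuron(x, y, w1=1.0, w2=1.0, bias=-0.5):
    """Fire (return 1) when at least one input is 1."""
    return 1 if w1 * x + w2 * y + bias >= 0 else 0

for x in (0, 1):
    for y in (0, 1):
        print(x, y, "->", or_neuron(x, y))  # 0 only for x = y = 0
```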
7. Artificial Neuron Model
Here you can see a generalized model of an artificial neuron with several inputs and a bias. The output of the net function is then fed into an activation function to generate the neuron's output.
Click below and watch the video for a detailed explanation!
https://www.youtube.com/watch?v=s7yNuRkcSmY&list=PLggdFS5B1D4nqX0A4sdeTZT38oqtASTQ0&index=7
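The generalized neuron can be sketched as a weighted sum plus bias, passed through an activation function. The sigmoid used as the default here is one common choice; the video may use a different activation.

```python
import math

# Hedged sketch of a generalized artificial neuron: net function
# (weighted sum + bias) followed by an activation function.

def neuron(inputs, weights, bias, activation=None):
    net = sum(w * x for w, x in zip(weights, inputs)) + bias
    if activation is None:
        activation = lambda z: 1.0 / (1.0 + math.exp(-z))  # sigmoid
    return activation(net)

# Illustrative call: net = 2*1 + 2*0 - 1 = 1, output = sigmoid(1)
print(neuron([1.0, 0.0], [2.0, 2.0], -1.0))
```

Passing a step function as `activation` recovers the binary classifiers from the AND/OR sections.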
8. Exclusive Logical Gate: XOR
Now that we have defined a simple artificial neuron, it's time to look at the exclusive logical gate, XOR.
Here we have two inputs; we can call them A and B. In the XOR gate, the output is 1 only when the inputs are different.
As you can see on the graph, it is not possible to classify the XOR dataset using only one straight line; we must use two lines. And remember, each line represents a neuron, so we will need two neurons to design our artificial neural network!
Watch the below video for more explanations.
https://www.youtube.com/watch?v=9zQ0ibKyZ9E&list=PLggdFS5B1D4nqX0A4sdeTZT38oqtASTQ0&index=8
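The two-line idea above can be sketched directly: two hidden neurons each implement one separating line, and a third neuron combines them. The thresholds below are illustrative values, not taken from the video.

```python
# Sketch: XOR built from two "line" neurons plus an output neuron,
# matching the claim above that XOR needs two separating lines.
# The weights and thresholds are illustrative.

def step(z):
    return 1 if z >= 0 else 0

def xor(a, b):
    h1 = step(a + b - 0.5)      # first line: fires when a OR b
    h2 = step(1.5 - a - b)      # second line: fires unless both are 1
    return step(h1 + h2 - 1.5)  # output neuron: AND of the two lines

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor(a, b))  # 1 only when a != b
```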
9. Multilayer Perceptron Concepts
Watch the video to understand the concepts of the multilayer perceptron.
10. Mathematical Model of the Net Function
We can describe a simple neuron by a set of inputs, represented by a vector X. Each input has its own weight, and we collect the weights in a vector W. There is also a constant b, which we call the bias of the system. The output of this neuron is Z.
Watch the video to see how to write a net function for this simple neuron.
https://www.youtube.com/watch?v=_rXAikQLRDY&list=PLggdFS5B1D4nqX0A4sdeTZT38oqtASTQ0&index=10
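In code, the net function is just the dot product of W and X plus the bias b. The vectors below are illustrative values.

```python
# Sketch of the net function for a single neuron: Z = W . X + b.
# The values of X, W, and b are illustrative only.

def net(X, W, b):
    return sum(w * x for w, x in zip(W, X)) + b

X = [0.5, -1.0, 2.0]
W = [0.8, 0.2, -0.5]
b = 0.1
# 0.8*0.5 + 0.2*(-1.0) + (-0.5)*2.0 + 0.1 = -0.7
Z = net(X, W, b)
print(Z)
```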
11. Mathematical Model of the MLP
The figure below is a block diagram of a multilayer perceptron, where we have a vector X for our inputs and two neurons; the output of the first neuron is called y1 and the output of the second neuron is called y2.
The y1 and y2 signals are fed into a third neuron, N3, whose activation function generates the output Z.
Watch the video to understand the mathematical representation of this MLP.
https://www.youtube.com/watch?v=6Kzhdf9bxlU&list=PLggdFS5B1D4nqX0A4sdeTZT38oqtASTQ0&index=11
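The forward pass of this two-neuron MLP can be sketched as follows. The sigmoid activation and all weight values are illustrative assumptions, not necessarily those used in the video.

```python
import math

# Hedged sketch of the MLP forward pass described above: inputs X feed
# two neurons producing y1 and y2, which feed a third neuron N3 that
# produces the output Z. All weights and biases here are illustrative.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def mlp(X, W1, b1, W2, b2, W3, b3):
    y1 = sigmoid(sum(w * x for w, x in zip(W1, X)) + b1)  # first neuron
    y2 = sigmoid(sum(w * x for w, x in zip(W2, X)) + b2)  # second neuron
    return sigmoid(W3[0] * y1 + W3[1] * y2 + b3)          # output neuron N3

Z = mlp([1.0, 0.5], [0.4, -0.6], 0.1, [0.7, 0.3], -0.2, [1.0, -1.0], 0.05)
print(Z)  # a value between 0 and 1, since sigmoid bounds the output
```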
12. Hardware Implementation of a Neuron
You can design your neural network in software and then implement it in hardware; below is a simple model representing a neuron. Watch the below video to learn more.
There are several advantages to using an analog system compared to a digital one. One is that analog systems are much faster, so they can respond promptly. But of course, there are limitations as well: analog systems are not extensible, and we can use them only for a specific purpose.
https://www.youtube.com/watch?v=gDzRO03iG-w&list=PLggdFS5B1D4nqX0A4sdeTZT38oqtASTQ0&index=12
13. MLP Structure
In many textbooks, the first layer is called the input layer. Any layers between the input layer and the output layer are called hidden layers; they are called hidden because they are hidden from the output layer, and therefore from us!
While designing an artificial neural network, we can adjust the weight of each neuron as well as our bias to meet the desired output.
Watch the video to see some examples.
https://www.youtube.com/watch?v=VVaOMAJNmPY&list=PLggdFS5B1D4nqX0A4sdeTZT38oqtASTQ0&index=13
14. Neural Network's Logic
Here we have two systems: one is an artificial neural network, the model we want to design to predict a particular output; the other is a process, for example, a correlation between two datasets in the stock market.
First, we feed the same input into both systems, the process and the ANN, and get their outputs.
Ideally, both outputs y and y^ would be identical, with zero error, so that we could say our ANN has 100% efficiency (which is not possible when we work with natural systems, due to the several factors affecting the output).
So what we need to do is to take the error between y and y^, then try to minimize this error. Watch the video to understand it better!
15. Calculating Mean Squared Error
To optimize our system, we have to minimize the error between the output of the ANN and that of the process. There are several ways to do this; one is to calculate the mean squared error (MSE), which is the average of the squares of the errors.
Watch the below video and learn some math behind it!
https://www.youtube.com/watch?v=0FUWIncucvI&list=PLggdFS5B1D4nqX0A4sdeTZT38oqtASTQ0&index=14
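MSE, the average of the squared differences between the process outputs y and the ANN outputs y^, can be sketched in a couple of lines. The sample values below are illustrative.

```python
# Sketch of mean squared error: average of squared differences between
# the target outputs y and the ANN outputs y_hat. Sample data is
# illustrative only.

def mse(y, y_hat):
    return sum((t - p) ** 2 for t, p in zip(y, y_hat)) / len(y)

# Errors: 0.1, -0.2, 0.2, 0.0 -> squares: 0.01, 0.04, 0.04, 0.0
# MSE = 0.09 / 4 = 0.0225
print(mse([1, 0, 1, 1], [0.9, 0.2, 0.8, 1.0]))
```

Squaring keeps positive and negative errors from canceling out, which is why MSE is preferred over a plain average of errors.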
See the Complete Course on Udemy
Artificial Neural Network and Machine Learning using MATLAB
Learn to Create Neural Network with Matlab Toolbox and Easy to Follow Codes; with Comprehensive Theoretical Concepts
What you'll learn
* Develop a multilayer perceptron neural network (MLP) in MATLAB using the Toolbox
* Build an artificial neural network model
* Understand optimization methods
* Understand function approximation methodology
* Knowledge of performance functions
* Apply artificial neural networks in practice
* Knowledge of the fundamentals of machine learning and artificial neural networks
* Understand the mathematical model of a neural network
* Knowledge of training methods for machine learning
This course is designed to suit both experienced developers seeking to make the jump to machine learning and complete beginners who want to understand machine learning and artificial neural networks from the ground up.
In this course, we introduce a comprehensive training of multilayer perceptron neural networks or MLP in MATLAB, in which, in addition to reviewing the theories related to MLP neural networks, the practical implementation of this type of network in MATLAB environment is also fully covered.
MATLAB offers specialized toolboxes and functions for working with Machine Learning and Artificial Neural Networks, making it a lot easier and faster for you to develop a NN.
At the end of this course, you'll be able to create a Neural Network for applications such as classification, clustering, pattern recognition, function approximation, control, prediction, and optimization.