Learning a Neural Network

Learning a Neural Network – The topic for today’s article is: how does a neural network learn? In the last article we saw what weights and biases are and how they take part in the computation of a neural network. In today’s article, let’s take a data set and see how a particular neural network is built from it. Essentially, we have a data set of monthly expenses, say the monthly expenses of students. Different factors contribute to the monthly expense; say, for instance, I am a foreign student.

I need to pay insurance, then rent, then food, then miscellaneous expenses, and then you have a final class, that is, a target attribute: whether the age is greater than 30 or not. This is the class that we are going to predict. Note that these figures are random; they are not the real amounts that I or any of my friends pay, just random numbers, represented in Euros. Whenever you feed the input into the system, that is, the neural network, you feed in real-valued numbers, so you won’t have strings.
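Before the data goes in, every record has to be turned into real-valued numbers. As a minimal sketch (the column names and the Euro figures below are made up for illustration, not taken from the article’s table), one such record could be encoded like this:

```python
import numpy as np

# Hypothetical monthly-expense record for one student (figures in Euros).
# The four independent attributes become the inputs x1..x4, and the target
# "age greater than 30?" is encoded as 1 (YES) or 0 (NO) instead of a string.
record = {"insurance": 110.0, "rent": 450.0, "food": 220.0, "misc": 80.0,
          "age_gt_30": "no"}

x = np.array([record["insurance"], record["rent"],
              record["food"], record["misc"]])          # inputs x1..x4
y = 1 if record["age_gt_30"] == "yes" else 0            # class label
print(x, y)   # [110. 450. 220.  80.] 0
```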

Now say this row is for one particular student, this one for a second student, this one for a third, and you have many such records; I have taken 1,50,000 (150K) records for this. Now, if you ask whether we really require a neural network for this kind of activity, I would say no if there are only three or four persons, because then it would be simple to compute with an ordinary or scientific calculator.

But if you have a large amount of data, say a pool of students from one particular department, and you are asked to survey their monthly expenditure, how it impacts the economy, or how much tax they would be paying in certain situations, then you need to build some kind of neural network model in order to carry out that particular activity.

So if your data set is very small, then you don’t require a neural network; but you do if you have a much larger data set, like in a particular survey, or in the current situation, say, records of persons suffering from the COVID disease.

In that case, if you want to do a survey, it becomes difficult to compute everything by hand, so a neural network comes into the picture. What we do is label these as the inputs, so I have x1, x2, x3, x4, and then the class. The class is essentially a binary attribute, yes or no, so I will be using a sigmoid function. Essentially, I have X1, X2, X3 and X4 as my four inputs: insurance, rent, food and miscellaneous.
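The sigmoid mentioned here is the usual logistic function, which squashes any real number into the range (0, 1). A minimal sketch in Python (using NumPy) looks like this:

```python
import numpy as np

def sigmoid(z):
    """Logistic function: maps any real value into the interval (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

print(sigmoid(0.0))    # 0.5, exactly on the decision boundary
print(sigmoid(3.0))    # ~0.95, falls on the YES side of a 0.5 threshold
print(sigmoid(-3.0))   # ~0.05, falls on the NO side
```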

You can also consider appliances or any other items that you would be buying besides these. All of these attributes are independent; that is the most important thing we need to consider. Such an algorithm assumes that all attributes are independent, that none of them has any dependency on the others. Now, say, for instance, I have my first hidden layer. So this is my input layer.

This is my hidden layer: in the first hidden layer I have three units, in the second hidden layer I have two units, and in the final output layer I have one unit. So let me label this: this is my input layer, this is my first hidden layer, this is my second hidden layer, and this is my output layer. I have the connections going across, so each layer gets its inputs this way, and then you have the final output, that is, the prediction. We know that each unit has two parts, a linear part and a non-linear part, and I am using the sigmoid function as the non-linear part.
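As a rough sketch of that architecture (4 inputs, a first hidden layer of 3 units, a second hidden layer of 2 units, and 1 output unit), the randomly initialized weight matrices and bias vectors could be set up as below; the 0.01 scale and the fixed seed are just illustrative choices, not anything the article prescribes:

```python
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [4, 3, 2, 1]        # input, hidden 1, hidden 2, output

# One weight matrix and one bias vector for each pair of consecutive layers.
weights = [0.01 * rng.standard_normal((n_in, n_out))
           for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n_out) for n_out in layer_sizes[1:]]

for W, b in zip(weights, biases):
    print(W.shape, b.shape)       # (4, 3) (3,)  (3, 2) (2,)  (2, 1) (1,)
```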

You already know how the weight matrix is computed and how many coefficients, including the biases, it contains. Now say I am feeding in these inputs one by one, one tuple at a time. Think of the data as an Excel (.xls) file that you upload to the neural network; the network takes its inputs from the real world, for instance from a particular sensor that feeds it data every day.

If you take a particular record (whenever I make a horizontal cut across the table, that is one particular instance, all real-valued numbers), it goes into the network, which does some computation; it passes through the first hidden layer, then through the second hidden layer, and finally produces an output. At the output you put a threshold on the sigmoid, say at 0.5.

Whatever output is less than 0.5 belongs to the NO class, and whatever is greater than 0.5 belongs to the YES class. Initially, the weights that I assign are random. Why the weights are random is something we will discuss in another article; for the time being, just consider that the weights we assign before learning are random. Essentially, when we build a neural network, our major goal is to learn these weights.
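Continuing the earlier sketches (the `sigmoid`, `weights`, `biases` and the input `x` defined above), a minimal forward pass with that 0.5 threshold might look like this:

```python
def forward(x, weights, biases):
    """Propagate one record through every layer: linear part, then sigmoid."""
    a = x
    for W, b in zip(weights, biases):
        a = sigmoid(a @ W + b)    # linear part (equivalent to W^T a + b), then the non-linear part
    return a                      # probability from the single output unit

prob = forward(x, weights, biases)[0]
prediction = "YES" if prob >= 0.5 else "NO"   # threshold the sigmoid output at 0.5
print(prob, prediction)
```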

Learning these weights and biases is the major goal we achieve by building a neural network. Say for the first record the output we get is less than 0.5, i.e., below the threshold after the computation, and when we check against our data set it is correct; then we need not adjust the weights for that iteration. Now for the second record, suppose we again get an output less than 0.5, but in our data set the class says YES. The data set comes from real-valued records, so it is not the data that is wrong; it is our neural network that can be wrong.

This updating is mainly done on the weights. Initially you set the weights (and the biases) randomly, so we need to learn them and adjust them; that is our major goal, and we do this subsequently for all the records in the data set. You basically go back through each layer in turn. This backtracking from one layer to the previous layer is done by an algorithm called backpropagation, and the weight update at each particular unit is done with the help of gradient descent.
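The sketch below shows the general shape of those two steps for a single record, continuing the earlier code; the squared-error loss, the learning rate of 0.1 and the unscaled inputs are simplifying assumptions for illustration, not the article’s exact choices:

```python
def backprop_step(x, y, weights, biases, lr=0.1):
    """One backpropagation pass plus a gradient-descent update for one record."""
    # Forward pass, remembering every layer's activation.
    activations = [x]
    for W, b in zip(weights, biases):
        activations.append(sigmoid(activations[-1] @ W + b))

    # Error at the output unit, for the loss L = 0.5 * (y_hat - y)^2.
    y_hat = activations[-1]
    delta = (y_hat - y) * y_hat * (1 - y_hat)

    # Walk back from the output layer towards the input layer.
    for layer in reversed(range(len(weights))):
        a_prev = activations[layer]
        grad_W = np.outer(a_prev, delta)
        grad_b = delta
        if layer > 0:
            # Propagate the error to the previous layer before changing the weights.
            delta = (weights[layer] @ delta) * a_prev * (1 - a_prev)
        # Gradient-descent step on this layer's weights and biases.
        weights[layer] -= lr * grad_W
        biases[layer] -= lr * grad_b
    return float(y_hat[0])

# In practice the raw Euro figures would be scaled first; this is only a demo step.
print(backprop_step(x, y, weights, biases))
```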

Gradient descent we have already learned in linear regression. Here, our cost function depends on the weights and the biases; these two are the coefficients the cost function is a function of. We go level by level, and at each unit we try to adjust that particular unit. You can picture it like this: say I have this particular wireframe structure.

It has three input units, X1, X2, X3; then at the first hidden layer you have two units, at the second you have two units, and then you have one output. You get the outputs of one layer, feed them to the next layer, and the values propagate forward, so at the end you get a probability value. If it does not match the target under the weight matrix you assigned randomly, you will tweak this setup; and something may also be off with the bias.

Basically, you adjust it: if there is a change in the bias at a particular unit, it just shifts that unit’s output up or down. Assume that this is a jointed framework, so you can move each piece. That is how this particular neural network works. Essentially, I write the input as a vector, and it will be a column vector.

You have x1, x2, x3 and x4; for the multiplication you need an n × 1 matrix, which you then multiply with the weights. That is how the matrix multiplication works. One more thing: when we compute the output, we have the generalized equation W^T·a + b, so when we do the matrix multiplication the weight matrix is taken as the transpose, not the original weight matrix. That is the change we see here.

Also, the loss function is written as J(W, b) = (1/n) Σ_{i=1..n} L(y_i, ŷ_i), that is, the average loss between what you want (y) and what you get (ŷ). If there is a difference between those two, you update the weights accordingly. That is how a particular neural network learns the weights from each instance, and that was all regarding learning a neural network in deep learning.
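As a sketch of that cost over a whole data set, again continuing the earlier code and again assuming a simple squared-error form for the per-record loss L (the article leaves L unspecified), with a tiny made-up table of three records:

```python
def cost(weights, biases, X, Y):
    """J(W, b) = (1/n) * sum over the n records of L(y_i, y_hat_i)."""
    losses = []
    for x_i, y_i in zip(X, Y):
        y_hat = forward(x_i, weights, biases)[0]
        losses.append(0.5 * (y_hat - y_i) ** 2)   # per-record loss L(y, y_hat)
    return sum(losses) / len(losses)

# Hypothetical mini data set: three records of (insurance, rent, food, misc).
X = np.array([[110.0, 450.0, 220.0,  80.0],
              [ 95.0, 500.0, 180.0,  60.0],
              [130.0, 420.0, 250.0,  90.0]])
Y = np.array([0, 1, 0])
print(cost(weights, biases, X, Y))
```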

