*Naive Bayes Classifier* – We'll be looking at what the naive Bayes classifier is in machine learning. So let's get started. The name combines two terms, "naive" and "Bayes", and people pronounce "naive" in different ways; either way, it refers to the same classifier. To begin with, naive Bayes (NB) is a classifier in the supervised machine learning group which is based on probabilistic logic: it uses Bayes' theorem, and the algorithms which are built on that theorem.

It mainly uses Bayes' theorem, which is a mathematical probabilistic technique for calculating the probability of an event from a set of related probabilities. We already know that we have a dataset which we divide into a training set and a test set, and the training set carries the different attribute values. So in naive Bayes what we have is a set of probabilities, say p0, p1, up to pk. We estimate these probabilities from the training set, and we get predictions as a result.
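As a quick sketch, Bayes' theorem can be written as a one-line function. The spam-filter numbers below are made up purely for illustration; they are not from the article:

```python
def bayes(p_b_given_a, p_a, p_b):
    """Posterior P(A|B) from likelihood P(B|A), prior P(A), and evidence P(B)."""
    return p_b_given_a * p_a / p_b

# Hypothetical example: P(spam) = 0.2, P("free" | spam) = 0.5, P("free") = 0.15
posterior = bayes(0.5, 0.2, 0.15)
print(round(posterior, 4))  # 0.6667
```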

Each item in the training set has an attribute vector x1 to xn, and each item belongs to one of the classes c1 to ck. The classifier uses the probabilities of those attribute values within each class to make its predictions. The data model which is yielded from this effort is called a predictive model, and it uses these probabilities as its foundation. We already know that the dataset, with some number of rows and columns, is divided into a training set and a test set, for example in a 2/3 to 1/3 split.

Then we take the training set and, from it, compute the different kinds of probabilities. From these we create a prediction model, or predictive model, and the output we get from it is a set of predictions. So this is how the naive Bayes classifier works. Now, the Bayes theorem which we mainly discuss here is a mathematical estimate: it gives the probability of an event based on prior knowledge of conditions that might be related to that event. Put differently, you calculate the probability of a particular event given that some conditions are already known. For instance, say I want to go out cycling.

So the probability of me going out cycling is conditioned on the weather: if the weather is good, I will go cycling; if it is not good, say cloudy or raining outside, then I will not go. The classifier reasons in a similar way. (In human beings, by the way, the power to classify things is much stronger than in a machine; feature extraction in humans is so tightly coupled that there is little chance of us making mistakes.) Now let's discuss the term "naive" and why the algorithm is called that.
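The cycling example can be made concrete with a small, entirely hypothetical weather log; counting outcomes per weather type gives the conditional probabilities the article describes:

```python
# Hypothetical weather log, assumed for illustration:
# each entry is (weather, went_cycling)
log = [
    ("sunny", True), ("sunny", True), ("sunny", False),
    ("rainy", False), ("rainy", False), ("cloudy", True),
    ("cloudy", False), ("sunny", True),
]

def p_cycling_given(weather, log):
    """Estimate P(cycling | weather) by counting matching log entries."""
    matching = [went for w, went in log if w == weather]
    return sum(matching) / len(matching)

print(p_cycling_given("sunny", log))  # 0.75
print(p_cycling_given("rainy", log))  # 0.0
```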

So there are basically two things which make it naive: first its nature, and second its assumption. Let's see what the nature is. By nature it works well on text data, for example data stored in a CSV or XLS file, and it majorly emphasizes categorical data as opposed to numeric data. Categorical data is something like marital status, that is, single or married: it functions well on that kind of attribute and performs much worse on numerical data. The assumption it carries is that all attributes are independent of each other.

This means that for a particular class we have a number of attributes, and naive Bayes assumes that each of those attributes is independent, that is, there is no correlation among them. These two reasons are mainly why it is called the naive Bayes algorithm, or naive Bayes classifier. Now, to use it you need to calculate a probability known as the posterior probability.
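The independence assumption is what lets the likelihood of a whole attribute vector be computed as a simple product of per-attribute probabilities. A minimal sketch, with made-up per-attribute values:

```python
from functools import reduce
from operator import mul

def joint_likelihood(per_feature_probs):
    """Naive assumption: P(x1, ..., xn | c) is the product of each P(xi | c)."""
    return reduce(mul, per_feature_probs, 1.0)

# Hypothetical values: P(single | c) = 0.6, P(owns_car | c) = 0.5
print(joint_likelihood([0.6, 0.5]))  # 0.3
```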

Now, you can understand the posterior as the probability of the class given that you already know the features: P(class | features) = P(features | class) × P(class) / P(features). Here P(features | class) is the likelihood, P(class) is the prior probability of the class which you want to calculate, and P(features) is the probability of the features themselves. With this equation you calculate naive Bayes and implement it in your algorithm.
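Since P(features) is the same for every class, a classifier only has to compare prior × likelihood across the classes. A sketch of that comparison, with hypothetical two-class values:

```python
def predict(priors, likelihoods):
    """Pick the class with the largest prior * likelihood.
    priors: {class: P(class)}; likelihoods: {class: P(features | class)}.
    The evidence P(features) is identical for every class, so it can be skipped."""
    scores = {c: priors[c] * likelihoods[c] for c in priors}
    return max(scores, key=scores.get)

# Hypothetical numbers for a two-class problem:
priors = {"spam": 0.2, "ham": 0.8}
likelihoods = {"spam": 0.5, "ham": 0.05}
print(predict(priors, likelihoods))  # spam
```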

Now, I've considered certain steps for the implementation of NB. So here it goes. There are five basic steps to implement naive Bayes. First, you load your data; that data can be in .xls or .csv (comma-separated values) format, and you divide it into a training set and a test set. Second, you prepare the data: you take the training set and calculate the probabilities of the attribute values it has.
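Steps one and two can be sketched in plain Python. The inline CSV below is a stand-in for a real .csv file, and the 2/3 to 1/3 split follows the ratio mentioned earlier:

```python
import csv
import io
import random

# Hypothetical CSV contents, standing in for a file on disk
raw = """weather,went_cycling
sunny,yes
rainy,no
cloudy,yes
sunny,yes
rainy,no
"""

rows = list(csv.DictReader(io.StringIO(raw)))
random.seed(0)          # fixed seed so the split is reproducible
random.shuffle(rows)
split = (2 * len(rows)) // 3   # 2/3 train, 1/3 test
train, test = rows[:split], rows[split:]
print(len(train), len(test))  # 3 2
```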

Third, with the attribute probabilities in hand, you generate a forecast: you clean and organize the data, then use the probabilities calculated from the training set to predict the data, or rather the labels, of your test set. Fourth, you evaluate the performance: you iterate over steps two through four, calculating the accuracy each time and trying to make improvements. The accuracy should improve over these iterations, and the error margin should end up as small as possible; that is the main goal of evaluating the performance.

Fifth and finally, you do the cleanup and finalization: you carry that out by cleaning and organizing, followed by polishing your predictive model. At that point the code is ready, and your implemented algorithm can classify any new test data that comes in this format.
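Putting the five steps together, here is a minimal categorical naive Bayes sketch. The toy data, attribute names, and the Laplace smoothing choice are all assumptions for illustration, not part of the article:

```python
from collections import defaultdict

def train_nb(rows, label_key):
    """Step 2: count class priors and per-(attribute, value) frequencies."""
    priors = defaultdict(int)
    counts = defaultdict(lambda: defaultdict(int))
    for row in rows:
        c = row[label_key]
        priors[c] += 1
        for attr, val in row.items():
            if attr != label_key:
                counts[c][(attr, val)] += 1
    return priors, counts

def predict_nb(priors, counts, row, label_key):
    """Step 3: score each class by prior * product of smoothed likelihoods."""
    total = sum(priors.values())
    best, best_score = None, -1.0
    for c, n in priors.items():
        score = n / total  # prior P(c)
        for attr, val in row.items():
            if attr != label_key:
                # Laplace smoothing so unseen values do not zero out the score
                score *= (counts[c][(attr, val)] + 1) / (n + 2)
        if score > best_score:
            best, best_score = c, score
    return best

def accuracy(priors, counts, test_rows, label_key):
    """Step 4: fraction of test rows whose prediction matches the true label."""
    hits = sum(predict_nb(priors, counts, r, label_key) == r[label_key]
               for r in test_rows)
    return hits / len(test_rows)

data = [  # toy data, assumed for illustration
    {"weather": "sunny", "go": "yes"},
    {"weather": "sunny", "go": "yes"},
    {"weather": "rainy", "go": "no"},
    {"weather": "rainy", "go": "no"},
    {"weather": "cloudy", "go": "yes"},
]
priors, counts = train_nb(data, "go")
print(predict_nb(priors, counts, {"weather": "sunny"}, "go"))  # yes
print(accuracy(priors, counts, data, "go"))  # 1.0
```

In a real run you would train on the 2/3 training split and measure accuracy on the held-out 1/3, repeating steps two through four until the error margin stops shrinking.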


