The Universe

We have already seen the main reasons why we need to do machine learning, some of its objectives, what machine learning basically comprises, and what prerequisites you need for ML. So in today's video, let's look at the universe of machine learning, or you can call it a mind map, where you can see the different methods and different parts of ML. Essentially, I look at machine learning as a big ocean.

And as you go deeper into this ocean, it becomes more and more complex and difficult. The topmost part is something everyone does, so that is not a big challenge, but as you go deeper into these methods, they become more and more complicated. So let's begin. Machine learning is mainly divided into classical learning, reinforcement learning, neural networks, and then ensemble methods.

We begin with classical learning, which is the most traditional kind. It has two branches: supervised and unsupervised learning. Supervised classical learning mainly deals with building a model that uses class labels. Within supervised learning we have two tasks: classification and regression. For classification we mainly use the KNN algorithm, Naive Bayes, support vector machines, and decision trees; these are the most popular methods used for classification in machine learning. A minimal classification sketch follows below.
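
To make this concrete, here is a minimal sketch of supervised classification. It is only an illustration, assuming scikit-learn is installed; the iris dataset and the choice of KNN are my own examples, not part of the mind map:

```python
# A minimal supervised classification sketch with scikit-learn (illustrative only).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)                      # features and class labels
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# KNN predicts the label of the k closest training points by majority vote.
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)                              # supervised: trained on labels
print("test accuracy:", knn.score(X_test, y_test))
```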

Then we move on to regression. Regression means fitting a mathematical model, a curve, to the data so that we can predict values. Regression mainly consists of logistic regression, linear regression, polynomial regression, and then ridge regression and lasso regression. We'll see all of these in detail in our future videos; a small regression sketch follows below.
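
Along the same lines, here is a minimal regression sketch, again assuming scikit-learn; the synthetic noisy line and the choice of ridge regression are illustrative assumptions:

```python
# A minimal regression sketch using scikit-learn (illustrative only).
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(100, 1))                       # one input feature
y = 2.0 * X[:, 0] + 1.0 + rng.normal(scale=0.5, size=100)   # noisy straight line

# Ridge regression: least squares plus an L2 penalty on the coefficients.
model = Ridge(alpha=1.0)
model.fit(X, y)
print("slope:", model.coef_[0], "intercept:", model.intercept_)
```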

Next, we move on to unsupervised learning. Unsupervised learning differs from supervised learning in that it does not use class labels: in supervised learning we used the class labels to train the model, but here you don't use any class labels. Unsupervised learning splits into clustering, pattern search, and dimensionality reduction. Unsupervised clustering mainly has the k-means algorithm, which is widely used in data mining. Here k is a hyperparameter (we'll see later why we need hyperparameters); it is the number of clusters that you need to make. Then you have DBSCAN (density-based spatial clustering of applications with noise), agglomerative clustering, mean shift clustering, and fuzzy c-means, where c is likewise just a hyperparameter that defines how many clusters the data points are grouped into. A k-means sketch follows below.
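
Here is a minimal k-means sketch, assuming scikit-learn; the synthetic blob data is an illustrative assumption:

```python
# A minimal k-means clustering sketch (illustrative only).
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)  # unlabelled points

# k is the hyperparameter: the number of clusters to form.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)                               # cluster index per point
print("cluster centres:\n", kmeans.cluster_centers_)
```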

Next, we move on to pattern search. Pattern search mainly consists of ECLAT, Apriori, and FP-growth, which are the major methods used for mining frequent patterns. They all have different functionalities, and there are specific reasons why each of them is included. A small frequent-itemset sketch follows below.
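
Here is a small frequent-itemset sketch using Apriori. It assumes the mlxtend library (a third-party package, not mentioned in the video), and the tiny transaction table is made up for illustration:

```python
# A minimal frequent-pattern (Apriori) sketch using the mlxtend library
# (assumption: install with `pip install mlxtend`).
import pandas as pd
from mlxtend.frequent_patterns import apriori

# Each row is a transaction; True means the item appears in it.
transactions = pd.DataFrame(
    [[True, True, False], [True, True, True], [False, True, True]],
    columns=["bread", "milk", "eggs"],
)
# Find all itemsets appearing in at least 60% of transactions.
print(apriori(transactions, min_support=0.6, use_colnames=True))
```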

Next, we come to dimensionality reduction. Dimensionality reduction means taking a large, high-dimensional data set and reducing it to a smaller representation that still captures the data. Dimensionality reduction mainly employs t-SNE (t-distributed stochastic neighbour embedding), one method used for visualisation of high-dimensional data, and then PCA, which is called principal component analysis. In PCA, high-dimensional data sets are reduced to lower-dimensional ones that contain nearly the same information as the original data. Next we have LSA, that is latent semantic analysis, and something called SVD, singular value decomposition: in this, you decompose a matrix into two unitary matrices and one rectangular diagonal matrix of singular values. And then you have linear discriminant analysis. All these techniques are mainly used in dimensionality reduction; a PCA sketch follows below.
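
Here is a minimal PCA sketch, assuming scikit-learn; using the 4-feature iris data is an illustrative choice:

```python
# A minimal PCA sketch: project 4-dimensional data down to 2 dimensions.
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X, _ = load_iris(return_X_y=True)          # 150 samples, 4 features
pca = PCA(n_components=2)                  # keep the 2 strongest components
X_2d = pca.fit_transform(X)
print("reduced shape:", X_2d.shape)
print("variance explained:", pca.explained_variance_ratio_)
```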

So we have talked about that part; next we move on to the second level, which we can call the medium difficulty: reinforcement learning. Here we mainly have the evolutionary algorithms. One of those is the genetic algorithm, then the non-dominated sorting genetic algorithms NSGA-II and NSGA-III, and then SPEA. All these algorithms work on the same idea: you have a particular population of individuals, a selection criterion is applied to it, and the individuals that satisfy that criterion survive into the next generation, so essentially you keep searching for the fittest individual. A minimal genetic-algorithm sketch follows below.
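
Here is a minimal genetic-algorithm sketch in plain Python. The fitness function (maximising the number of 1-bits in a bit string) is a toy assumption, just to show selection, crossover, and mutation:

```python
# A minimal genetic algorithm: evolve bit strings toward all ones (toy fitness).
import random

def fitness(bits):
    return sum(bits)                        # more 1-bits = fitter individual

random.seed(0)
population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]

for generation in range(50):
    # Selection: keep the fittest half of the population.
    population.sort(key=fitness, reverse=True)
    survivors = population[:15]
    # Crossover + mutation: breed children from random pairs of survivors.
    children = []
    while len(children) < 15:
        a, b = random.sample(survivors, 2)
        cut = random.randrange(1, 19)
        child = a[:cut] + b[cut:]           # one-point crossover
        if random.random() < 0.1:
            i = random.randrange(20)
            child[i] ^= 1                   # flip one bit (mutation)
        children.append(child)
    population = survivors + children

print("best fitness:", fitness(population[0]))
```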

Next, we have A3C, that is the asynchronous advantage actor-critic, which is another method used in reinforcement learning. Then we have SARSA (state-action-reward-state-action), the Q-learning technique, and the Deep Q-Network (DQN). All of these are model-based or model-free approaches used in reinforcement learning; a tiny Q-learning sketch follows below.
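
Here is a tiny tabular Q-learning sketch. The two-state, two-action environment is entirely made up for illustration; only the update rule is the point:

```python
# A tiny tabular Q-learning sketch on a made-up 2-state, 2-action world.
import random

random.seed(0)
n_states, n_actions = 2, 2
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.1, 0.9, 0.2      # learning rate, discount, exploration

def step(state, action):
    """Toy environment: action 1 moves to state 1, which pays reward 1."""
    next_state = action
    reward = 1.0 if next_state == 1 else 0.0
    return next_state, reward

state = 0
for _ in range(1000):
    # Epsilon-greedy action choice: usually exploit, sometimes explore.
    if random.random() < epsilon:
        action = random.randrange(n_actions)
    else:
        action = max(range(n_actions), key=lambda a: Q[state][a])
    next_state, reward = step(state, action)
    # Q-learning update: move Q toward reward + discounted best future value.
    Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
    state = next_state

print("learned Q-table:", Q)
```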

Next, we have the ensemble methods, which mainly deal with multiple base classifiers and try to predict the class label through some kind of voting scheme or voting mechanism. In this we have stacking: stacking takes multiple base classifiers (or multiple base regression algorithms) and tries to produce a stronger prediction out of the multiple classifiers. Then you have bagging. Bagging is mainly used to decrease variance, and random forest is used in this; a bagging sketch follows below.
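
Here is a minimal bagging sketch, assuming scikit-learn; the synthetic data and the decision-tree base learner are illustrative choices:

```python
# A minimal bagging sketch: many decision trees trained on bootstrap samples.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)

# Each tree sees a bootstrap resample; their votes are combined,
# which is what reduces the variance of a single tree.
bag = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50, random_state=0)
bag.fit(X, y)
print("train accuracy:", bag.score(X, y))
```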

Next we have boosting. Boosting is mainly done in order to reduce the bias of the model. The popular methods here are AdaBoost, CatBoost, XGBoost, and LightGBM, all used for boosting; a minimal boosting sketch follows below.
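
Here is a minimal boosting sketch, assuming scikit-learn's AdaBoostClassifier; the synthetic data is an illustrative assumption:

```python
# A minimal boosting sketch: AdaBoost fits weak learners one after another,
# re-weighting the examples the previous learners got wrong.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier

X, y = make_classification(n_samples=500, random_state=0)
boost = AdaBoostClassifier(n_estimators=50, random_state=0)
boost.fit(X, y)
print("train accuracy:", boost.score(X, y))
```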

Next, we talk about the most complicated part of this: neural networks, or deep learning. In neural networks we have convolutional neural networks (CNN), or in their deeper form, deep convolutional neural networks (DCNN). Then you have the recurrent neural networks (RNN), and related methods such as the liquid state machine (LSM), long short-term memory (LSTM), and the gated recurrent unit (GRU). Then there is something called generative adversarial networks (GANs). Then we have the autoencoders, where you have a series of encoders and decoders arranged sequentially; one popular technique here is Seq2Seq, that is the sequence-to-sequence model. And finally we have the perceptrons, which come from soft computing: single-layer perceptrons and multi-layer perceptrons. A minimal perceptron sketch follows below.
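
Here is a minimal single-layer perceptron sketch in plain Python with NumPy; learning the AND function is a toy assumption:

```python
# A minimal single-layer perceptron trained with the perceptron learning rule.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # inputs
y = np.array([0, 0, 0, 1])                      # AND labels
w = np.zeros(2)
b = 0.0
lr = 0.1

for epoch in range(20):
    for xi, target in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0       # step activation
        error = target - pred
        w += lr * error * xi                    # nudge weights toward the target
        b += lr * error

print("weights:", w, "bias:", b)
print("predictions:", [(1 if xi @ w + b > 0 else 0) for xi in X])
```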

Well, that was all regarding this big universe of machine learning.

 
