Discriminative models draw boundaries in the data space, while generative models try to model how data is distributed throughout the space.
Generative models learn the underlying distribution of the data points, focusing on how the individual classes in a dataset are distributed. In theory, a generative model captures the joint probability P(x, y): the probability that a given feature/input x and the desired output/label y occur together. Generative models use probability estimates and likelihoods to model data points and distinguish between the class labels in a dataset. These models are capable of generating new data instances. They do, however, have a significant flaw: they are strongly affected by the presence of outliers.
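As a minimal sketch of the joint-probability idea, the toy classifier below (with made-up data and names) estimates P(x, y) from counts and labels a new input by choosing the class that maximizes the joint probability:

```python
from collections import Counter

# Toy dataset of (feature, label) pairs; a generative model
# estimates how often each (x, y) combination occurs together.
data = [("red", "apple"), ("red", "apple"), ("green", "apple"),
        ("yellow", "banana"), ("yellow", "banana"), ("green", "banana")]

joint = Counter(data)          # counts of each (x, y) pair
total = sum(joint.values())

def p_joint(x, y):
    """Estimate the joint probability P(x, y) from the counts."""
    return joint[(x, y)] / total

def classify(x):
    """Pick the label y that maximizes the joint probability P(x, y)."""
    labels = {y for _, y in data}
    return max(labels, key=lambda y: p_joint(x, y))

print(classify("red"))               # → apple
print(p_joint("yellow", "banana"))   # 2 of the 6 pairs
```

A real generative model would fit a parametric distribution rather than raw counts, but the decision rule, pick the y with the largest P(x, y), is the same.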
Latent Dirichlet Allocation (LDA)
LDA (Latent Dirichlet Allocation) is a generative probabilistic model for collections of discrete data, in which each item is modeled as a finite mixture over an underlying set of topics. Some of the main applications of LDA are collaborative filtering and content-based image retrieval.
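A quick sketch with scikit-learn (assuming scikit-learn is installed; the document-term counts are made up): each document is decomposed into a mixture over two latent topics.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

# Tiny document-term count matrix: 4 documents, 6 vocabulary words.
# The first two documents use mostly the first three words, the last
# two use mostly the remaining words, suggesting two latent topics.
X = np.array([
    [4, 3, 2, 0, 0, 1],
    [3, 4, 3, 1, 0, 0],
    [0, 1, 0, 4, 3, 4],
    [1, 0, 0, 3, 4, 3],
])

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)   # per-document topic mixtures

# Each row is a distribution over the 2 topics and sums to 1.
print(doc_topics.round(2))
```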
Bayesian Network
Also known as the Bayes network, a Bayesian network is a generative probabilistic graphical model that provides an efficient representation of the joint probability distribution over a set of random variables.
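A Bayesian network factorizes the joint distribution according to its graph. The sketch below (with made-up conditional probability tables) encodes the chain A → B → C, so P(A, B, C) = P(A) · P(B | A) · P(C | B):

```python
# Conditional probability tables for a chain-structured network A -> B -> C.
# All probability values are invented for illustration.
p_a = {True: 0.3, False: 0.7}                      # P(A)
p_b_given_a = {True: {True: 0.8, False: 0.2},      # P(B | A)
               False: {True: 0.1, False: 0.9}}
p_c_given_b = {True: {True: 0.5, False: 0.5},      # P(C | B)
               False: {True: 0.2, False: 0.8}}

def joint(a, b, c):
    """Joint probability from the network's factorization."""
    return p_a[a] * p_b_given_a[a][b] * p_c_given_b[b][c]

# The joint distribution sums to 1 over all assignments.
total = sum(joint(a, b, c)
            for a in (True, False)
            for b in (True, False)
            for c in (True, False))
print(total)
```

The efficiency comes from this factorization: storing three small tables instead of one table over every combination of variables.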
Hidden Markov model
The Hidden Markov model is a statistical model known for its effectiveness in modeling the correlation between adjacent symbols or events, and it finds major applications in speech recognition and digital communication.
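As an illustration, the forward algorithm below (with hypothetical transition and emission matrices) computes the probability of an observation sequence under a two-state HMM:

```python
import numpy as np

# Two hidden states, two observable symbols; all probabilities made up.
pi = np.array([0.6, 0.4])            # initial state distribution
A = np.array([[0.7, 0.3],            # A[i, j] = P(next state j | state i)
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],            # B[i, k] = P(symbol k | state i)
              [0.2, 0.8]])

def forward(obs):
    """Probability of the observation sequence via the forward algorithm."""
    alpha = pi * B[:, obs[0]]                # initialize with first symbol
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]        # propagate states, then emit
    return alpha.sum()

print(forward([0, 1, 0]))  # likelihood of observing symbols 0, 1, 0
```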
Autoregressive model
The autoregressive (AR) model predicts future values based on past values. This kind of model is good at handling a wide range of time-series patterns.
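A minimal sketch: fit an AR(1) model by least squares on a synthetic series (the true coefficient 0.8 is chosen for illustration) and use it to predict the next value:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic AR(1) series: x[t] = 0.8 * x[t-1] + noise
n = 500
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.8 * x[t - 1] + rng.normal(scale=0.1)

# Fit the coefficient by regressing x[t] on x[t-1].
X = x[:-1].reshape(-1, 1)
y = x[1:]
phi = np.linalg.lstsq(X, y, rcond=None)[0][0]

next_value = phi * x[-1]  # one-step-ahead prediction
print(round(phi, 2))      # estimated coefficient, close to the true 0.8
```

Higher-order AR(p) models work the same way, regressing each value on its p most recent predecessors.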
Generative Adversarial Network
GANs (generative adversarial networks) have gained much popularity recently. A GAN model has two parts: a generator and a discriminator. The generative model captures the data distribution, while the discriminative model estimates the probability that a sample came from the training data rather than from the generator.
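The adversarial game can be sketched in one dimension without any deep-learning framework (all numbers below are invented for illustration): a linear generator learns to map noise onto samples from N(3, 0.5), while a logistic discriminator learns to tell real from fake.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

# Real data ~ N(3, 0.5). Generator G(z) = w*z + b maps standard-normal
# noise toward the real distribution; discriminator D(x) = sigmoid(a*x + c)
# estimates the probability that x is a real sample.
w, b = 1.0, 0.0        # generator parameters
a, c = 0.0, 0.0        # discriminator parameters
lr, batch = 0.02, 64

for _ in range(3000):
    z = rng.standard_normal(batch)
    real = rng.normal(3.0, 0.5, batch)
    fake = w * z + b

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = sigmoid(a * real + c), sigmoid(a * fake + c)
    a -= lr * (-(1 - d_real) * real + d_fake * fake).mean()
    c -= lr * (-(1 - d_real) + d_fake).mean()

    # Generator step: push D(fake) toward 1 (fool the discriminator).
    d_fake = sigmoid(a * fake + c)
    grad_out = -(1 - d_fake) * a        # gradient of gen loss w.r.t. fake
    w -= lr * (grad_out * z).mean()
    b -= lr * grad_out.mean()

fake_mean = (w * rng.standard_normal(1000) + b).mean()
print(round(fake_mean, 1))  # should drift toward the real mean of 3
```

Real GANs replace both linear maps with neural networks, but the alternating two-player training loop is exactly this.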
Discriminative models
The discriminative model, also called the conditional model, learns the boundary between classes in a dataset. Unlike generative models, the goal here is to find the decision boundary separating one class from another. While a generative model models the joint probability of the data and can create new instances using probability estimates and maximum likelihood, a discriminative model instead models the conditional probability and makes no assumptions about how the data points were generated. Discriminative models are also not capable of generating new data instances. They have the advantage of being more robust to outliers, but one major drawback is the misclassification problem, i.e., wrongly classifying a data point. In short, a generative model focuses on explaining how the data was generated, while a discriminative model focuses on predicting the labels of the data.
Examples of discriminative models in machine learning are:
- Logistic regression
- Support vector machine
- Decision tree
- Random forest
Support vector machines (SVMs)
A support vector machine (SVM) builds a decision boundary between classes of data points. In 2-dimensional and 3-dimensional spaces, the SVM algorithm creates lines or planes, respectively, that divide the points. By maximizing the margin, i.e., the distance from the boundary to the closest points, SVM seeks the line or hyperplane that best separates the classes. By employing the "kernel trick" to find non-linear decision boundaries, SVM models can also be applied to datasets that are not linearly separable.
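A short scikit-learn sketch of the kernel trick (toy XOR-style data; assumes scikit-learn is installed): the classes cannot be split by a straight line, but an RBF kernel separates them.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# XOR-style data: class depends on the quadrant, so no single straight
# line can separate the two classes.
X = rng.uniform(-1, 1, size=(200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)

linear = SVC(kernel="linear").fit(X, y)
rbf = SVC(kernel="rbf").fit(X, y)

print(round(linear.score(X, y), 2))  # linear boundary struggles
print(round(rbf.score(X, y), 2))     # RBF kernel captures the quadrants
```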
Logistic regression
The logistic regression algorithm calculates the probability that an input falls into one of two categories using the logit (log-odds) function. The probability is "squished" towards either 1 or 0 (true or false) using a sigmoid function: probabilities of 0.5 or less are assigned to class 0, while probabilities above 0.5 are assigned to class 1, so logistic regression is frequently applied to binary (0, 1) classification problems. Using a one-vs-all strategy, building a binary classification model for each class and calculating the probability that a given example belongs to the target class rather than any other class, logistic regression can also be used to solve multi-class problems.
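The squishing-and-thresholding step can be sketched directly (the weights below are hypothetical, standing in for learned parameters):

```python
import numpy as np

def sigmoid(t):
    """Squish a log-odds value into a probability between 0 and 1."""
    return 1.0 / (1.0 + np.exp(-t))

# Hypothetical learned weights for a single-feature model:
# log-odds = w * x + b, which the sigmoid turns into a probability.
w, b = 2.0, -5.0

def predict(x):
    prob = sigmoid(w * x + b)
    return 1 if prob > 0.5 else 0   # threshold at 0.5

print(sigmoid(0.0))   # log-odds of 0 maps to probability 0.5
print(predict(1.0))   # low log-odds -> class 0
print(predict(4.0))   # high log-odds -> class 1
```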
Decision tree
A decision tree model works by repeatedly dividing a dataset into smaller and smaller subsets; when the subsets can no longer be divided further, the result is a tree with nodes and edges. At the nodes of a decision tree, decisions about the data are made using various filtering criteria, and the classified data points are represented as leaves. Decision tree methods can handle both numerical and categorical data, and the splits in the tree are based on particular variables or features.
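A minimal scikit-learn sketch (the dataset, pet sizes mapped to cat/dog labels, is invented; assumes scikit-learn is installed):

```python
from sklearn.tree import DecisionTreeClassifier

# Tiny made-up dataset: [height_cm, weight_kg] -> 0 = cat, 1 = dog.
X = [[25, 4], [30, 5], [28, 4], [55, 20], [60, 25], [50, 18]]
y = [0, 0, 0, 1, 1, 1]

# The tree repeatedly splits on a feature threshold until the
# subsets are pure; the leaves hold the final class labels.
tree = DecisionTreeClassifier(random_state=0).fit(X, y)

print(tree.predict([[27, 5]]))   # small animal -> cat (0)
print(tree.predict([[58, 22]]))  # large animal -> dog (1)
```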
Random forest
A random forest model is simply a collection of decision trees whose individual predictions are averaged (or majority-voted) to produce the final result. The random forest algorithm builds the individual trees from randomly selected observations and features.
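A quick scikit-learn sketch on synthetic data (assumes scikit-learn is installed):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic classification data for illustration.
X, y = make_classification(n_samples=300, n_features=5, random_state=0)

# Each of the 100 trees is trained on a random bootstrap sample of the
# observations with random feature subsets considered at each split;
# the trees' votes are combined for the final prediction.
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(round(forest.score(X, y), 2))  # training accuracy
```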