The simplest kind of neural network is a single-layer perceptron network, which consists of a single layer of output nodes; the inputs are fed directly to the outputs via a series of weights. In the literature the term perceptron often refers to networks consisting of just one of these units.
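A single perceptron unit of this kind can be sketched in a few lines: the inputs are combined through the weights, a bias shifts the threshold, and a step function produces the output. The helper name and the hand-picked weights below are illustrative, not from any particular library.

```python
# Minimal sketch of one perceptron unit (hypothetical helper name).
# Fires (outputs 1) when the weighted sum of inputs plus bias exceeds 0.

def perceptron(inputs, weights, bias):
    """Step-activation perceptron: output 1 if w.x + b > 0, else 0."""
    weighted_sum = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if weighted_sum > 0 else 0

# Weights chosen by hand so the unit computes logical AND of two inputs.
and_weights = [1.0, 1.0]
and_bias = -1.5
print(perceptron([1, 1], and_weights, and_bias))  # 1
print(perceptron([0, 1], and_weights, and_bias))  # 0
```

With these weights the unit only fires when both inputs are on, which is exactly the AND function; different weights and biases realize other linearly separable functions.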
With this in mind, what is a multilayer perceptron in machine learning?
A multilayer perceptron (MLP) is a class of feedforward artificial neural network. An MLP is trained with a supervised learning technique called backpropagation. Its multiple layers and non-linear activation functions distinguish an MLP from a linear perceptron: unlike the single-layer perceptron, it can classify data that is not linearly separable.
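The classic example of data that is not linearly separable is XOR. The two-layer network below computes it by combining hidden units through a non-linear step function. The weights here are hand-picked purely to show the representational power of an extra layer; in a real MLP they would be learned by backpropagation, with a differentiable non-linearity such as the sigmoid in place of the step.

```python
# A hand-wired two-layer network that computes XOR, a function no
# single-layer perceptron can represent. Weights are chosen by hand for
# illustration, not learned.

def step(v):
    return 1 if v > 0 else 0

def xor_mlp(x1, x2):
    h_or = step(x1 + x2 - 0.5)       # hidden unit 1: fires on OR
    h_and = step(x1 + x2 - 1.5)      # hidden unit 2: fires on AND
    return step(h_or - h_and - 0.5)  # output: OR and not AND == XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_mlp(a, b))
```

The hidden layer maps the four input points into a space where a single line can separate the two classes, which is exactly what the linear output unit then does.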
What is a single layer Perceptron?
A single-layer perceptron network consists of one or more artificial neurons in parallel. The neurons may be of the same type we've seen in the Artificial Neuron Applet. Each neuron in the layer provides one network output, and is usually connected to all of the external (or environmental) inputs.
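The layer structure described above, one row of weights per neuron, every neuron reading all external inputs, one output per neuron, can be sketched as follows. Names and weight values are hypothetical.

```python
# Sketch of a single-layer perceptron network: each neuron is connected
# to all external inputs and contributes one network output.

def step(v):
    return 1 if v > 0 else 0

def layer_forward(inputs, weight_rows, biases):
    """One row of weights per neuron; returns one output per neuron."""
    return [step(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weight_rows, biases)]

# Two neurons over the same two inputs: the first computes AND, the
# second OR (weights picked by hand for the example).
weights = [[1.0, 1.0], [1.0, 1.0]]
biases = [-1.5, -0.5]
print(layer_forward([1, 0], weights, biases))  # [0, 1]
```

Because the neurons act in parallel on the same inputs, the whole layer is just a matrix of weights applied to the input vector, followed by an elementwise activation.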
What is linear separability?
Linear separability refers to the fact that classes of patterns represented as n-dimensional vectors can be separated by a single linear decision surface (a hyperplane). In two dimensions the decision surface is a line, as in the referenced figure. Figure 2.9: Linearly Separable Pattern.
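Checking separability with a concrete line is straightforward: evaluate the decision function w·x + b for each point and see which side it falls on. The points and coefficients below are made up for the example.

```python
# Illustration of linear separability in two dimensions: the line
# x + y = 1.5 is the decision surface separating two made-up classes.

def side(point, w, b):
    """True if the point lies on the positive side of the line w.x + b = 0."""
    return w[0] * point[0] + w[1] * point[1] + b > 0

class_a = [(0, 0), (1, 0), (0, 1)]   # all below the line x + y = 1.5
class_b = [(2, 2), (1, 2), (2, 1)]   # all above it
w, b = (1.0, 1.0), -1.5
print(all(not side(p, w, b) for p in class_a))  # True
print(all(side(p, w, b) for p in class_b))      # True
```

If no such line (or hyperplane, in higher dimensions) exists, as with XOR, the classes are not linearly separable and a single-layer perceptron cannot distinguish them.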