Backward propagation of errors is an algorithm designed to trace errors back from the output nodes to the input nodes and correct the network's weights accordingly. It is one of the main mathematical tools for improving the accuracy of predictions in data mining and machine learning (ML). Let us look at how the backward propagation process works. A network may have several layers, for example an input layer, hidden layer I, hidden layer II, and a final output layer, but the three main types of layers are the following:
- Input layer
- Hidden layer
- Output layer
Each layer works in its own way and performs its own operations, so that we can obtain the desired results and adapt the procedure to our requirements.
Backward Propagation Example
If you are building your own neural network, you will definitely need to understand how to train it. Backpropagation (BP) is a commonly used technique for training a neural network. There are many ways to explain the process, but this post will explain backpropagation with a concrete example worked through in detailed, step-by-step fashion.
You can see a visualization of the forward pass and backward propagation here. You can build your neural network using netflow.js.
Overview
In this post, we will build a neural network with three layers:
- Input layer with two input neurons
- One hidden layer with two neurons
- Output layer with a single neuron
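As a rough sketch of the shapes involved in such a 2-2-1 network (the array names and example input values below are illustrative only, not taken from the example that follows):

import numpy as np

x = np.array([0.5, 0.3])      # two input neurons (made-up values)
W_hidden = np.zeros((2, 2))   # weights connecting the input layer to the hidden layer
W_output = np.zeros((2, 1))   # weights connecting the hidden layer to the output neuron

hidden = x @ W_hidden         # shape (2,): hidden layer values (before any activation function)
output = hidden @ W_output    # shape (1,): the single output neuron
print(hidden.shape, output.shape)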
One of the most popular neural network (NN) training algorithms is the backward propagation algorithm. It can be broken down into four basic steps. After the weights of the network have been selected randomly, the backward propagation algorithm is used to compute the necessary corrections. The algorithm can be decomposed into the following steps:
- Feed-forward computation
- Backward propagation to the output layer
- Backward propagation to the hidden layer
- Weight updates

The algorithm is stopped when the value of the error function has become sufficiently small.
Weights
Neural network training is about finding weights that minimize the prediction error. We usually start training with a set of randomly generated weights. Then, backward propagation is used to update the weights in an attempt to correctly map arbitrary inputs to outputs.
Our initial weights are the following; a small worked forward pass using them is sketched after the list:
w1 = 0.11
w2 = 0.21
w3 = 0.12
w4 = 0.08
w5 = 0.14
w6 = 0.15
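To make the forward pass concrete, here is a small sketch that pushes two hypothetical inputs through the 2-2-1 network with these weights. The input values and the assignment of w1-w6 to particular connections are assumptions made for illustration, not specified above; no activation function is applied, to keep the arithmetic easy to follow.

# Hypothetical inputs; w1..w4 are assumed to connect the inputs to the two
# hidden neurons, and w5, w6 to connect the hidden neurons to the output.
i1, i2 = 2.0, 3.0
w1, w2, w3, w4, w5, w6 = 0.11, 0.21, 0.12, 0.08, 0.14, 0.15

h1 = i1 * w1 + i2 * w2    # 2*0.11 + 3*0.21 = 0.85
h2 = i1 * w3 + i2 * w4    # 2*0.12 + 3*0.08 = 0.48
out = h1 * w5 + h2 * w6   # 0.85*0.14 + 0.48*0.15 = 0.191
print(out)

During training, backward propagation would compare this output with the target value and adjust w1-w6 to reduce the difference.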
How Does Backward Propagation Work?
Backward propagation works in an iterative manner. In each iteration, the network's output for the training examples is compared with the actual target label. A target label can be a continuous value or a class label. The backward propagation algorithm works in the following steps:
- Initialize Network: backward propagation starts by randomly initializing the weights.
- Forward Propagate: after initialization, we propagate in the forward direction. In this phase, we compute the outputs and calculate the error with respect to the target output.
- Back Propagate Error: the weights are modified in order to decrease the error, using a technique called gradient descent (also known as the delta rule). The error is propagated in the backward direction and the weights of all the hidden layers are modified accordingly.
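The gradient descent (delta rule) update mentioned above simply moves each weight a small step against the gradient of the error. Here is a minimal sketch of one such update for a single sigmoid unit; the numbers are made up for illustration and are unrelated to the iris example below.

import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# One weight, one input, one target value (all hypothetical)
w, x, target, learning_rate = 0.5, 1.5, 0.2, 0.1

y = sigmoid(w * x)                  # forward propagate
delta = (y - target) * y * (1 - y)  # error term at the output unit
w = w - learning_rate * delta * x   # delta rule: w <- w - lr * dE/dw

Repeating this update over many iterations, for every weight in the network, is exactly what the training loop later in this post does.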
Backward Propagation Python Algorithm
Import Libraries
# Import Libraries
import numpy as np
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
Load dataset
data = load_iris()
Get features and target
X = data.data
y = data.target
Prepare Dataset
Get dummy variables
y = pd.get_dummies(y).values
y[:3]
Output:
array([[1, 0, 0],
[1, 0, 0],
[1, 0, 0]], dtype=uint8)
Split train and test set
# Split data into train and test data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=20, random_state=4)
Initialize Hyperparameters and Weights
Initialize variables
learning_rate = 0.1
iterations = 5000
N = y_train.size

# number of input features
input_size = 4

# number of hidden layer neurons
hidden_size = 2

# number of neurons at the output layer
output_size = 3

results = pd.DataFrame(columns=["mse", "accuracy"])
# Initialize weights
np.random.seed(10)

# Initializing the weights of the hidden layer
W1 = np.random.normal(scale=0.5, size=(input_size, hidden_size))

# Initializing the weights of the output layer
W2 = np.random.normal(scale=0.5, size=(hidden_size, output_size))
Helper Functions
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def mean_squared_error(y_pred, y_true):
    return ((y_pred - y_true)**2).sum() / (2 * y_pred.size)

def accuracy(y_pred, y_true):
    acc = y_pred.argmax(axis=1) == y_true.argmax(axis=1)
    return acc.mean()
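A quick sanity check of these helpers on tiny made-up arrays (the values are arbitrary and only meant to show the expected input shapes; this check is not part of the original listing):

# Tiny made-up predictions and one-hot targets to exercise the helpers
y_pred_demo = np.array([[0.8, 0.1, 0.1],
                        [0.2, 0.7, 0.1]])
y_true_demo = np.array([[1, 0, 0],
                        [0, 1, 0]])
print(mean_squared_error(y_pred_demo, y_true_demo))  # a small positive number
print(accuracy(y_pred_demo, y_true_demo))            # 1.0, both argmax positions match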
Train the Model
for itr in range(iterations):
    # feedforward propagation on the hidden layer
    Z1 = np.dot(X_train, W1)
    A1 = sigmoid(Z1)

    # feedforward propagation on the output layer
    Z2 = np.dot(A1, W2)
    A2 = sigmoid(Z2)

    # Calculating error
    mse = mean_squared_error(A2, y_train)
    acc = accuracy(A2, y_train)
    results = pd.concat([results, pd.DataFrame([{"mse": mse, "accuracy": acc}])], ignore_index=True)

    # backpropagation
    E1 = A2 - y_train
    dW1 = E1 * A2 * (1 - A2)
    E2 = np.dot(dW1, W2.T)
    dW2 = E2 * A1 * (1 - A1)

    # weight updates
    W2_update = np.dot(A1.T, dW1) / N
    W1_update = np.dot(X_train.T, dW2) / N
    W2 = W2 - learning_rate * W2_update
    W1 = W1 - learning_rate * W1_update
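The loop above records the mean squared error and the accuracy for every iteration in the results DataFrame, so you can visualize the training progress using the matplotlib import from the beginning of the post. This plotting step is an optional addition, not part of the original listing:

# Plot how the error and the accuracy evolve over the iterations
results.mse.plot(title="Mean Squared Error")
plt.show()

results.accuracy.plot(title="Accuracy")
plt.show()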
Predict Test Data and Evaluate the Performance
# feedforward
Z1 = np.dot(X_test, W1)
A1 = sigmoid(Z1)
Z2 = np.dot(A1, W2)
A2 = sigmoid(Z2)
acc = accuracy(A2, y_test)
print("Accuracy: {}".format(acc))
The accuracy on the test data is 0.8.
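If you want to look beyond the single accuracy number, a quick way (not part of the original walkthrough) is to compare the predicted and true class indices for a few test samples, reusing the A2 and y_test arrays from the block above:

# Compare predicted vs. true class indices for the first few test samples
predicted_classes = A2.argmax(axis=1)
true_classes = y_test.argmax(axis=1)
print(predicted_classes[:10])
print(true_classes[:10])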
Conclusion
A backpropagation neural network (NN) is a method to optimize neural networks by propagating the error or loss in the backward direction. It computes the error for each node and updates the node's weights accordingly in order to minimize the error using gradient descent.