Deep Learning Backward Propagation in Neural Networks


Backward propagation of error is an algorithm designed to compute the error contribution of each node, working back from the output nodes to the input nodes. It is a key mathematical tool for improving the accuracy of predictions in data mining and machine learning (ML). Let us look at how the backward propagation process works. The example network has four layers: an input layer, hidden layer I, hidden layer II, and a final output layer. The three main layer types are the following:

  1. Input layer
  2. Hidden layer
  3. Output layer

Each layer has its own way of working and its own activation behavior, so that we can obtain the desired results and adapt the architecture to our requirements.
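To make the layer structure concrete, here is a minimal, self-contained sketch (the layer sizes and random weights are assumptions for illustration only) of a forward pass through an input layer, two hidden layers, and an output layer:

  import numpy as np

  rng = np.random.default_rng(0)

  def sigmoid(x):
      return 1 / (1 + np.exp(-x))

  # hypothetical sizes: 2 inputs -> 3 hidden I -> 3 hidden II -> 1 output
  x = rng.random((1, 2))          # one input sample
  W1 = rng.normal(size=(2, 3))    # input layer -> hidden layer I
  W2 = rng.normal(size=(3, 3))    # hidden layer I -> hidden layer II
  W3 = rng.normal(size=(3, 1))    # hidden layer II -> output layer

  h1 = sigmoid(x @ W1)            # activations of hidden layer I
  h2 = sigmoid(h1 @ W2)           # activations of hidden layer II
  out = sigmoid(h2 @ W3)          # prediction of the output layer
  print(out)

Backward propagation then runs the same chain in reverse, distributing the output error back through W3, W2, and W1.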


Backward Propagation Example

If you are building your own neural network, you will definitely need to understand how to train it. Backpropagation (BP) is a commonly used method for training a neural network. There are many ways to explain the process, but this post will walk through backpropagation with a concrete example in very detailed steps.

You can see an illustration of the forward pass and backward propagation here. You can also build your neural network using net flow.

Overview

In this post, we will build a neural network with three layers:

  • Input layer with two input neurons
  • One hidden layer with two neurons
  • Output layer with a single neuron

One of the most popular neural network (NN) training algorithms is the backward propagation algorithm. It can be broken down into four basic steps. After the weights of the network have been selected randomly, the backward propagation algorithm is used to calculate the necessary corrections. The algorithm can be decomposed into the following steps:

  1. Feed-forward computation
  2. Backward propagation to the output layer
  3. Backward propagation to the hidden layer
  4. Weight updates

The algorithm is stopped when the value of the error function has become sufficiently small.

Weights

Neural network training is about finding weights that minimize prediction error. We usually start our training with a set of randomly generated weights. Then, backward propagation is used to update the weights in an attempt to correctly map arbitrary inputs to outputs.

Our initial weights are the following:

w1 = 0.11
w2 = 0.21
w3 = 0.12
w4 = 0.08
w5 = 0.14
w6 = 0.15
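As a hedged, self-contained sketch (the input values, target, learning rate, and weight layout below are assumptions chosen for illustration, not values given in this post), here is one full pass of the four steps listed above on the 2-2-1 network, starting from these initial weights:

  import numpy as np

  def sigmoid(x):
      return 1 / (1 + np.exp(-x))

  # Initial weights from above; assumed layout: w1-w4 connect the two inputs
  # to the two hidden neurons, w5-w6 connect the hidden neurons to the output.
  W1 = np.array([[0.11, 0.12],   # w1, w3 (from input 1)
                 [0.21, 0.08]])  # w2, w4 (from input 2)
  W2 = np.array([[0.14],         # w5 (from hidden neuron 1)
                 [0.15]])        # w6 (from hidden neuron 2)

  x = np.array([[2.0, 3.0]])     # hypothetical input sample
  target = np.array([[1.0]])     # hypothetical target output
  learning_rate = 0.05           # assumed learning rate

  # 1. Feed-forward computation
  h = sigmoid(x @ W1)            # hidden layer activations
  out = sigmoid(h @ W2)          # network prediction

  # 2. Backward propagation to the output layer
  delta_out = (out - target) * out * (1 - out)

  # 3. Backward propagation to the hidden layer
  delta_hidden = (delta_out @ W2.T) * h * (1 - h)

  # 4. Weight updates (gradient descent)
  W2 -= learning_rate * h.T @ delta_out
  W1 -= learning_rate * x.T @ delta_hidden

  print(out, W1, W2, sep="\n")

Running this once performs a single training iteration; in practice the four steps are repeated until the error is small enough.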

How Does Backward Propagation Work?

Backward propagation works in an iterative manner. In each iteration, the network's predictions for the training examples are compared with the actual target labels. A target label can be a continuous value or a class label. The backward propagation algorithm works in the following steps:

  • Initialize Network
    Backward propagation starts by randomly initializing the weights.
  • Forward Propagate
    After initialization, we propagate in the forward direction. In this phase, we compute the outputs and calculate the error against the target output.
  • Back Propagate Error
    Weights are modified in order to decrease the error using a technique called gradient descent (also known as the delta rule). Weights are updated in the backward direction through all the hidden layers. A minimal numeric sketch of such an update is shown below.
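To make the gradient descent update concrete, here is a tiny self-contained sketch (the error function, starting weight, and learning rate are assumptions for illustration only) showing how repeatedly stepping against the gradient drives the error down:

  # Minimal gradient descent sketch: minimize E(w) = (w - 3) ** 2
  learning_rate = 0.1
  w = 0.0                               # arbitrary starting weight
  for _ in range(100):
      gradient = 2 * (w - 3)            # dE/dw for this simple error function
      w = w - learning_rate * gradient  # delta-rule style update
  print(w)                              # converges toward 3.0, where the error is minimal

In a neural network the same rule is applied to every weight, with each gradient obtained by backpropagating the error through the layers.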

Backward Propagation Python Algorithm

Import Libraries
  # Import Libraries
  import numpy as np
  import pandas as pd
  from sklearn.datasets import load_iris
  from sklearn.model_selection import train_test_split
  import matplotlib.pyplot as plt

Load dataset

  data = load_iris()

Get features and target

  X = data.data
  y = data.target

Prepare Dataset

Get dummy variables

  y = pd.get_dummies(y).values
  y[:3]

Output:

array([[1, 0, 0],
[1, 0, 0],
[1, 0, 0]], dtype=uint8)

Split train and test set

  # Split data into train and test data
  X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=20, random_state=4)

Initialize Hyperparameters and Weights

Initialize variables

  learning_rate = 0.1
  iterations = 5000
  N = y_train.size
  # number of input features
  input_size = 4
  # number of hidden layer neurons
  hidden_size = 2
  # number of neurons at the output layer
  output_size = 3
  results = pd.DataFrame(columns=["mse", "accuracy"])

  # Initialize weights
  np.random.seed(10)

Initializing the Weights of the Hidden Layer

  W1 = np.random.normal(scale=0.5, size=(input_size, hidden_size))

Initializing the Weights of the Output Layer

  W2 = np.random.normal(scale=0.5, size=(hidden_size, output_size))
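For orientation, W1 maps the 4 input features to the 2 hidden neurons and W2 maps the 2 hidden neurons to the 3 output classes. A quick sanity check of the shapes (assuming the arrays defined above):

  # Check the weight matrix shapes
  print(W1.shape)   # (4, 2): input_size x hidden_size
  print(W2.shape)   # (2, 3): hidden_size x output_size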

Helper Functions

  def sigmoid(x):
      return 1 / (1 + np.exp(-x))

  def mean_squared_error(y_pred, y_true):
      return ((y_pred - y_true) ** 2).sum() / (2 * y_pred.size)

  def accuracy(y_pred, y_true):
      acc = y_pred.argmax(axis=1) == y_true.argmax(axis=1)
      return acc.mean()
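The training loop below relies on the fact that the derivative of the sigmoid can be written in terms of its output, sigmoid'(x) = sigmoid(x) * (1 - sigmoid(x)); that is where the A * (1 - A) factors in the backpropagation step come from. A quick numerical check of this identity (assuming numpy and the sigmoid helper defined above):

  # Numerically verify sigmoid'(x) = sigmoid(x) * (1 - sigmoid(x))
  xs = np.linspace(-5, 5, 11)
  eps = 1e-6
  numeric = (sigmoid(xs + eps) - sigmoid(xs - eps)) / (2 * eps)  # central difference
  analytic = sigmoid(xs) * (1 - sigmoid(xs))
  print(np.allclose(numeric, analytic, atol=1e-6))               # True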

Train the Model

  # training loop
  for itr in range(iterations):
      # feedforward propagation on the hidden layer
      Z1 = np.dot(X_train, W1)
      A1 = sigmoid(Z1)
      # feedforward propagation on the output layer
      Z2 = np.dot(A1, W2)
      A2 = sigmoid(Z2)
      # calculating the error
      mse = mean_squared_error(A2, y_train)
      acc = accuracy(A2, y_train)
      # record metrics (DataFrame.append was removed in newer pandas)
      results.loc[len(results)] = [mse, acc]
      # backpropagation
      E1 = A2 - y_train
      dW1 = E1 * A2 * (1 - A2)
      E2 = np.dot(dW1, W2.T)
      dW2 = E2 * A1 * (1 - A1)
      # weight updates
      W2_update = np.dot(A1.T, dW1) / N
      W1_update = np.dot(X_train.T, dW2) / N
      W2 = W2 - learning_rate * W2_update
      W1 = W1 - learning_rate * W1_update
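Since matplotlib was imported earlier and the per-iteration metrics were collected in the results DataFrame, the training progress can be visualized with a short sketch like the following (this plotting code is an addition, not part of the original walkthrough):

  # Plot the metrics collected during training
  fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
  ax1.plot(results["mse"].astype(float))        # cast in case the columns are object dtype
  ax1.set_title("Mean squared error")
  ax1.set_xlabel("Iteration")
  ax2.plot(results["accuracy"].astype(float))
  ax2.set_title("Training accuracy")
  ax2.set_xlabel("Iteration")
  plt.show()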

Predict Test Data and Evaluate the Performance

  # feedforward
  Z1 = np.dot(X_test, W1)
  A1 = sigmoid(Z1)
  Z2 = np.dot(A1, W2)
  A2 = sigmoid(Z2)
  acc = accuracy(A2, y_test)
  print("Accuracy: {}".format(acc))

The output accuracy is 0.8.

Conclusion

Backpropagation is a method to optimize a neural network (NN) by propagating the error or loss in the backward direction. It computes the error contribution of each node and updates the weights accordingly in order to minimize the error using gradient descent.
