The Math Behind Backpropagation

Introduction

Previously, we wrote a short introduction to neural networks, which discussed backpropagation as the training method of choice, describing it as “simply a way of minimizing the loss function, or error, of the network by propagating errors backward through the network and adjusting weights accordingly.”

This brief writeup is meant to shed light on the mathematics behind backpropagation, deriving (with substantial justification) the weight-changing algorithm for a feedforward neural network by means of standard gradient descent.

The feedforward algorithm

The activation of a neural network is iteratively defined by

$$y_n = f(x_n), \qquad x_{n+1} = W_n y_n,$$

where $y_n$ is the output vector at layer $n$, $f$ is the activation function (applied element-wise), $x_n$ is the input vector at layer $n$, and $W_n$ is the weight matrix between layers $n$ and $n+1$. The first layer is $n = 1$ and the last layer is $n = N$.
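
To make the recurrence concrete, here is a minimal sketch of the forward pass in NumPy. The choice of the logistic sigmoid for $f$, the function names, and the layer shapes are assumptions for illustration only; any differentiable activation fits the derivation. The sketch stores every layer input $x_n$ and output $y_n$, since both are needed by the backward pass derived below.

```python
import numpy as np

def sigmoid(x):
    # Assumed activation f; any differentiable function works in the derivation.
    return 1.0 / (1.0 + np.exp(-x))

def forward(weights, x1):
    """Feedforward recurrence: y_n = f(x_n), x_{n+1} = W_n y_n.

    weights: list [W_1, ..., W_{N-1}] of weight matrices (illustrative).
    x1: input vector to the first layer.
    Returns layer inputs xs = [x_1, ..., x_N] and outputs ys = [y_1, ..., y_N].
    """
    xs, ys = [x1], [sigmoid(x1)]
    for W in weights:
        x_next = W @ ys[-1]          # x_{n+1} = W_n y_n
        xs.append(x_next)
        ys.append(sigmoid(x_next))   # y_{n+1} = f(x_{n+1})
    return xs, ys
```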

The backpropagation algorithm

The error of the network is defined by

$$E = \frac{1}{2} \left\lVert y_N - t \right\rVert^2,$$

where $t$ is the target output vector. The error gradient of the input vector at a layer is defined as

$$\delta_n = \frac{\partial E}{\partial x_n}.$$
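
Under this (assumed squared-error) definition, the scalar error is straightforward to evaluate from the output of the forward-pass sketch above:

```python
def error(y_N, target):
    # E = 1/2 * ||y_N - t||^2
    return 0.5 * np.sum((y_N - target) ** 2)
```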

The error gradient of the input vector at the last layer is

$$\delta_N = \frac{\partial E}{\partial y_N} \odot f'(x_N) = (y_N - t) \odot f'(x_N),$$

where $\odot$ denotes the element-wise (Hadamard) product.

The error gradient of the input vector at an inner layer is, by the chain rule applied to $x_{n+1} = W_n f(x_n)$,

$$\delta_n = \left( W_n^\top \delta_{n+1} \right) \odot f'(x_n).$$

Therefore, the error gradient of the input vector at a layer is

$$\delta_n = \begin{cases} (y_N - t) \odot f'(x_N) & \text{if } n = N, \\ \left( W_n^\top \delta_{n+1} \right) \odot f'(x_n) & \text{if } n < N. \end{cases}$$

Hence, since $x_{n+1} = W_n y_n$, the error gradient of the weight matrix is

$$\frac{\partial E}{\partial W_n} = \delta_{n+1} \, y_n^\top.$$
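
Continuing the earlier sketch, the backward pass below computes the $\delta_n$ recursion and the weight gradients $\delta_{n+1} y_n^\top$ directly. The sigmoid derivative $f'(x) = f(x)(1 - f(x))$ is tied to the activation assumed above and would change with a different choice of $f$.

```python
def backward(weights, xs, ys, target):
    """Compute dE/dW_n = delta_{n+1} y_n^T for each weight matrix.

    xs, ys: layer inputs/outputs from forward(); target: desired y_N.
    Returns one gradient matrix per W_n.
    """
    def f_prime(x):
        s = sigmoid(x)
        return s * (1.0 - s)  # derivative of the logistic sigmoid

    delta = (ys[-1] - target) * f_prime(xs[-1])   # delta_N at the last layer
    grads = [None] * len(weights)
    for n in reversed(range(len(weights))):
        grads[n] = np.outer(delta, ys[n])         # dE/dW_n = delta_{n+1} y_n^T
        if n > 0:
            # delta_n = (W_n^T delta_{n+1}) * f'(x_n), elementwise
            delta = (weights[n].T @ delta) * f_prime(xs[n])
    return grads
```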

Therefore, the change in weight should be

$$\Delta W_n = -\eta \, \frac{\partial E}{\partial W_n} = -\eta \, \delta_{n+1} \, y_n^\top,$$

where $\eta$ is the learning rate (or rate of gradient descent). Thus, we have shown the necessary weight change, from which the implementation of a training algorithm follows trivially.
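
As a sketch of that training algorithm, one gradient-descent step applies $\Delta W_n = -\eta \, \partial E / \partial W_n$ to every weight matrix; the layer sizes, learning rate, and single training example below are placeholder values chosen only to show the update in use.

```python
def train_step(weights, x1, target, eta=0.1):
    # One step of gradient descent: W_n <- W_n + Delta W_n = W_n - eta * dE/dW_n.
    xs, ys = forward(weights, x1)
    grads = backward(weights, xs, ys, target)
    for W, g in zip(weights, grads):
        W -= eta * g
    return ys[-1]  # current network output, useful for monitoring the error

# Example usage with arbitrary layer sizes (3 -> 4 -> 2):
rng = np.random.default_rng(0)
weights = [rng.standard_normal((4, 3)), rng.standard_normal((2, 4))]
x1, target = rng.standard_normal(3), np.array([0.0, 1.0])
for _ in range(1000):
    train_step(weights, x1, target)
```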

— Lucas Schuermann, Carlos Martin