The Math Behind Backpropagation

Introduction

Previously, we wrote a short introduction to neural networks, which discussed backpropagation as the training method of choice and described it as: “simply a way of minimizing the loss function, or error, of the network by propagating errors backward through the network and adjusting weights accordingly.”

This brief writeup is meant to shed light on the mathematics behind backpropagation, deriving (with substantial justification) the weight-changing algorithm for a feedforward neural network by means of standard gradient descent.

The feedforward algorithm

The activation of a neural network is iteratively defined by

$$y_n = f(x_n), \qquad x_{n+1} = w_n y_n$$

where $y_n$ is the output vector at layer $n$, $f$ is the activation function (applied elementwise), $x_n$ is the input vector at layer $n$, and $w_n$ is the weight matrix between layers $n$ and $n+1$. The first layer is $n=1$ and the last layer is $n=N$.
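
To make the notation concrete, here is a minimal sketch of the feedforward pass in Python with NumPy; the function name, the use of `tanh` as $f$, and the list-of-matrices representation are illustrative assumptions rather than part of the derivation. The per-layer vectors $x_n$ and $y_n$ are kept around because the backward pass derived below reuses them.

```python
import numpy as np

def feedforward(weights, x1, f=np.tanh):
    """Iterate y_n = f(x_n), x_{n+1} = w_n y_n for n = 1, ..., N.

    weights: list [w_1, ..., w_{N-1}] of weight matrices
    x1:      input vector to the first layer
    Returns the per-layer inputs xs = [x_1, ..., x_N]
    and outputs ys = [y_1, ..., y_N].
    """
    xs, ys = [x1], [f(x1)]
    for w in weights:
        xs.append(w @ ys[-1])  # x_{n+1} = w_n y_n
        ys.append(f(xs[-1]))   # y_{n+1} = f(x_{n+1})
    return xs, ys
```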

The backpropagation algorithm

The error of the network is defined by

$$E = \frac{1}{2} \left\lVert y_N - d \right\rVert^2$$

where $d$ is the desired output vector. The error gradient of the input vector at a layer $n$ is defined as

$$\delta_n = \frac{\partial E}{\partial x_n}$$

The error gradient of the input vector at the last layer $N$ is

$$\delta_N = \frac{\partial E}{\partial y_N} \odot f'(x_N) = (y_N - d) \odot f'(x_N)$$

where $\odot$ denotes the elementwise (Hadamard) product.

The error gradient of the input vector at an inner layer $n < N$ follows from the chain rule, since $x_n$ affects the error only through $y_n = f(x_n)$ and $x_{n+1} = w_n y_n$:

$$\delta_n = \left( w_n^\top \delta_{n+1} \right) \odot f'(x_n)$$
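
Writing this out in components makes the chain rule explicit. Since $x_{n+1,j} = \sum_i w_{n,ji}\, y_{n,i}$ and $y_{n,i} = f(x_{n,i})$,

$$\delta_{n,i} = \sum_j \frac{\partial E}{\partial x_{n+1,j}} \frac{\partial x_{n+1,j}}{\partial y_{n,i}} \frac{\partial y_{n,i}}{\partial x_{n,i}} = \sum_j \delta_{n+1,j}\, w_{n,ji}\, f'(x_{n,i}),$$

which is exactly the $i$-th component of $\left( w_n^\top \delta_{n+1} \right) \odot f'(x_n)$.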

Therefore, the error gradient of the input vector at a layer $n$ is

$$\delta_n = \begin{cases} (y_N - d) \odot f'(x_N) & n = N \\ \left( w_n^\top \delta_{n+1} \right) \odot f'(x_n) & n < N \end{cases}$$

Hence, since $x_{n+1} = w_n y_n$, the error gradient of the weight matrix $w_n$ is

$$\frac{\partial E}{\partial w_n} = \delta_{n+1}\, y_n^\top$$

Therefore, the change in weight prescribed by gradient descent should be

$$\Delta w_n = -\alpha \frac{\partial E}{\partial w_n} = -\alpha\, \delta_{n+1}\, y_n^\top$$

where $\alpha$ is the learning rate (or rate of gradient descent). Thus, we have derived the necessary weight change, from which the implementation of a training algorithm follows trivially.
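
To back up that claim with a sketch, the following Python/NumPy training step applies the equations above to a single input/target pair, reusing the hypothetical `feedforward` helper from earlier; `tanh` (whose derivative is $1 - \tanh^2$) and all names here are illustrative assumptions rather than a definitive implementation.

```python
def train_step(weights, x1, d, alpha=0.01, f=np.tanh,
               df=lambda x: 1.0 - np.tanh(x) ** 2):
    """One gradient-descent update on a single (input, target) pair."""
    xs, ys = feedforward(weights, x1, f)

    # Last layer: delta_N = (y_N - d) (*) f'(x_N)
    delta = (ys[-1] - d) * df(xs[-1])

    # Walk backward over the layers, updating each w_n by -alpha dE/dw_n
    for n in reversed(range(len(weights))):
        grad = np.outer(delta, ys[n])               # dE/dw_n = delta_{n+1} y_n^T
        delta = (weights[n].T @ delta) * df(xs[n])  # delta_n = (w_n^T delta_{n+1}) (*) f'(x_n)
        weights[n] -= alpha * grad                  # Delta w_n = -alpha dE/dw_n
    return weights
```

Iterating `train_step` over a dataset gives plain stochastic gradient descent; batching, momentum, and similar refinements layer on top of this core update.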

— Lucas Schuermann, Carlos Martin