# Notes on LSTMs for Time Series Prediction in Finance

Published 04/06/2017

Disclaimer. The following notes were adapted from a talk given at a machine learning discussion group in Spring 2017 at Columbia. They have not been edited, nor have they been checked for completeness/accuracy. I do hope, however, that someone finds their whirlwind style an entertaining introduction to the applications of RNNs in quant finance. Please email me with any corrections or comments.

## Introduction

A number of current curricula focus either on the mathematics-heavy side of neural networks, beginning with an introduction to backpropagation for feedforward networks, or on constructing huge black-box multi-layered "deep" neural network models. In contrast, we seek to introduce and motivate the usefulness of a specific subclass of networks, Long Short-Term Memory networks (LSTMs), as applied to time series prediction problems in quantitative finance.

## Why is TS prediction/forecasting necessary?

You should already know the answer to this if you have taken any basic quantitative finance course or done any personal experimentation. Practically all financial data comes in the form of time series, owing to the time dependence of prices and economic conditions. Compared with time series from other domains of science, engineering, and mathematics, financial time series, such as price data for equities, foreign exchange, and futures markets in areas like commodities, are particularly difficult to forecast. What makes forecasting financial TS so hard? Strong trends have high reversal rates; there is a large amount of noisy movement and stochasticity, usually modeled with Brownian motion; and there is little seasonality or other structure about which similar assumptions can be made.
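
As a toy illustration of the noise problem, the sketch below simulates a single price path under geometric Brownian motion; the drift, volatility, and horizon values are arbitrary and chosen purely for illustration.

```python
import numpy as np

# Simulate one price path under geometric Brownian motion (GBM).
# Drift (mu), volatility (sigma), and horizon are arbitrary illustrative values.
np.random.seed(0)
mu, sigma = 0.05, 0.2            # assumed annualized drift and volatility
s0, n_steps, dt = 100.0, 252, 1.0 / 252

# GBM update: S_{t+dt} = S_t * exp((mu - sigma^2 / 2) * dt + sigma * sqrt(dt) * Z)
z = np.random.normal(size=n_steps)
log_returns = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
prices = s0 * np.exp(np.cumsum(log_returns))
```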

## A brief survey of TS models in statistics and econometrics

Past work in statistics for time series prediction includes models as simple as regression up through similarly motivated but more complex models such as autoregressive integrated moving averages (ARIMA), and extensions thereof. In industry, very basic time series models are relied upon heavily for signal generation, usually constructed from moving and exponential moving averages fed into relatively arbitrary normalization terms and constraint sets. The shortcomings of these approaches are obvious: they rely only on data from the price series itself and on basic statistical tools. Though reasonable results can be obtained for other types of TS data in academia, very little success is seen with respect to financial time series forecasting.
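
As a rough sketch of the kind of simple signal construction described above, the snippet below computes a toy exponential-moving-average crossover signal with pandas; the window lengths are hypothetical choices, not recommendations.

```python
import pandas as pd

def ema_crossover_signal(prices: pd.Series, fast: int = 12, slow: int = 26) -> pd.Series:
    """Toy signal: +1 when the fast EMA is above the slow EMA, -1 otherwise."""
    fast_ema = prices.ewm(span=fast, adjust=False).mean()
    slow_ema = prices.ewm(span=slow, adjust=False).mean()
    return (fast_ema > slow_ema).astype(int) * 2 - 1
```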

## What is an LSTM?

An LSTM is a variety of Recurrent Neural Network (RNN), which is itself a flavor of the general class of artificial neural networks (ANNs).

### What is a neural network?

Very briefly, for those of you unfamiliar, a(n) (artificial) neural network is a set of interconnected neurons, each firing (propagating) in accordance with a weighted sum of inputs passed through an activation function. These models are biologically inspired, and have been used to great success in everything from computer vision to natural language processing.
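
To make "a weighted sum of inputs passed through an activation function" concrete, here is a minimal sketch of a single artificial neuron; the choice of a sigmoid activation is just one common option.

```python
import numpy as np

def neuron(x: np.ndarray, w: np.ndarray, b: float) -> float:
    """Fire one artificial neuron: weighted sum of inputs, then a sigmoid activation."""
    z = np.dot(w, x) + b
    return 1.0 / (1.0 + np.exp(-z))
```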

Feedforward neural networks are a specific topology in which inputs are passed through interconnected layers of artificial neurons; the layers between the input and output neurons form the hidden layers. Each hidden layer takes its inputs directly from the layer above it and passes its outputs directly to the layer below. With a bit of math, or a more formal notation, we can see that the forward pass of a feedforward net can be expressed as a sequence of matrix multiplications (plus biases and activation functions) applied to the input vector; through training, it converges toward approximating a function that models the relation between inputs and outputs. Notably, this function can be non-linear, leading to many use cases in classification and elsewhere. Also notable: there is no state dependency unless prior states (of a TS, for example) are explicitly fed in as inputs. This makes feedforward neural networks not particularly performant for TS prediction problems, especially in the case of chaotic price data.
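
To make the "matrix multiplication" view concrete, here is a minimal forward pass for a two-layer feedforward network; the layer sizes, random weights, and tanh activation are arbitrary illustrative choices. Note that the output depends only on the current input vector, with no notion of state.

```python
import numpy as np

def feedforward(x, W1, b1, W2, b2):
    """Forward pass: each layer is a matrix multiply plus bias, followed by a nonlinearity."""
    h = np.tanh(W1 @ x + b1)   # hidden layer (each row of W1 is one neuron's weights)
    return W2 @ h + b2         # linear output layer

rng = np.random.default_rng(0)
x = rng.normal(size=4)                         # input vector
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)  # 4 inputs -> 8 hidden units
W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)  # 8 hidden units -> 1 output
print(feedforward(x, W1, b1, W2, b2))
```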

### What is a RNN?

Recurrent neural networks are networks with loops. These loops (recurrences) give rise to persistent information, or, in other words, create a stateful network. The behavior of the network depends not only on the current set of inputs, but also on all prior sets of inputs (to some reasonable limit, discussed later) that have given rise to the current state of the network.

[Figure: RNN unrolled diagram. Credit: Colah's Understanding LSTMs]
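
Unrolled in time, a vanilla RNN is the same cell applied at every step, carrying a hidden state forward. A minimal sketch, with random placeholder weights and inputs:

```python
import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
    """One vanilla RNN step: the new hidden state depends on the current input
    and on the previous hidden state (this is the loop/recurrence)."""
    return np.tanh(W_xh @ x_t + W_hh @ h_prev + b_h)

rng = np.random.default_rng(0)
input_dim, hidden_dim, seq_len = 3, 5, 10
W_xh = rng.normal(size=(hidden_dim, input_dim))
W_hh = rng.normal(size=(hidden_dim, hidden_dim))
b_h = np.zeros(hidden_dim)

h = np.zeros(hidden_dim)                    # initial state
for t in range(seq_len):                    # unroll over the sequence
    x_t = rng.normal(size=input_dim)        # placeholder input at time t
    h = rnn_step(x_t, h, W_xh, W_hh, b_h)
```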

RNNs are universal, i.e., Turing complete: given enough units and a suitably trained weight matrix, they can compute anything a computer could compute. In this sense, one can view the training of RNNs as the training of "universal program approximators," as opposed to the "universal function approximators" that feedforward neural networks are usually viewed as.

### So, what exactly is an LSTM?

A nicely illustrated tutorial which we will go through: Colah's Understanding LSTMs (https://colah.github.io/posts/2015-08-Understanding-LSTMs/).
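
For reference, the standard LSTM cell equations (in the notation of Colah's post, with sigmoid gates $\sigma$ and elementwise products $\odot$) are:

$$
\begin{aligned}
f_t &= \sigma(W_f \cdot [h_{t-1}, x_t] + b_f) && \text{(forget gate)} \\
i_t &= \sigma(W_i \cdot [h_{t-1}, x_t] + b_i) && \text{(input gate)} \\
\tilde{C}_t &= \tanh(W_C \cdot [h_{t-1}, x_t] + b_C) && \text{(candidate cell state)} \\
C_t &= f_t \odot C_{t-1} + i_t \odot \tilde{C}_t && \text{(new cell state)} \\
o_t &= \sigma(W_o \cdot [h_{t-1}, x_t] + b_o) && \text{(output gate)} \\
h_t &= o_t \odot \tanh(C_t) && \text{(new hidden state)}
\end{aligned}
$$

The cell state $C_t$ acts as the "long-term memory" path, modified only through the forget and input gates, while the hidden state $h_t$ plays the role of short-term, per-step output.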

## LSTMs for TS forecasting

LSTMs are particularly well suited to time series prediction because their long-term memory can "learn" and "remember" things like market regimes, while their short-term memory and good interaction with lookback windows (and even with time-irregular data or large gaps between significant events) lead to solid performance in short-term trend prediction.
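
In practice, a "lookback window" just means framing the series as supervised learning: the last `lookback` observations are used to predict the next value. A minimal sketch of this windowing, where the window length is an arbitrary illustrative choice:

```python
import numpy as np

def make_windows(series: np.ndarray, lookback: int = 20):
    """Turn a 1-D series into (samples, lookback, 1) inputs and next-step targets."""
    X, y = [], []
    for i in range(len(series) - lookback):
        X.append(series[i : i + lookback])
        y.append(series[i + lookback])
    X = np.array(X)[..., np.newaxis]   # shape (samples, lookback, 1), as an LSTM expects
    return X, np.array(y)
```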

## Math we care about

Read about:

  1. backpropagation through time (BPTT) and gradient descent
  2. data normalization for neural networks
  3. activation functions for FFNNs
  4. activation functions and gates for LSTMs
  5. completeness proofs and other analysis of RNNs

## Bootstrapping usage

It sounds like LSTMs are great! How can I use them in my next job or financial modeling/data science project? In short: Python. Even shorter (with respect to development time): TensorFlow/Theano. Even shorter? Keras. Writing the code to perform efficient time-based gradient descent is difficult, not to mention implementing different topologies, preprocessing, activation functions, and so on. Please, for your own sake, use a Python library. A few brief comments with respect to these libraries:

  1. Theano is fast, and optimized for use in general deep learning research. It supports nearly all model types, including models that are relatively new in the literature. Its Python bindings are relatively easy to use, though they are a bit advanced for early-stage users.
  2. TensorFlow is slower (in some cases), but has the best model support and (in my opinion) the best community. It has good documentation and a good Python API.
  3. Keras, which is just a wrapper on top of both of the above frameworks, is the best choice in almost all cases. Fantastic documentation supplements an easy-to-use Python API, which allows for the easy specification of models that can run on many backends and computing systems. It plays well with NumPy, Pandas, etc. Really, this is the best way to make your life easier (a minimal model definition is sketched after this list).
  4. Other C++-bound DL libraries are sometimes faster, as there is overhead in Python function calls and the like. Usually, besides model definition, kernel compilation, and memory copies, the host CPU isn't doing much, but keep this in mind for production or if you don't have access to a GPU.
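
As a rough illustration of how little code the Keras route requires, here is a minimal LSTM regression model for windowed series data of shape (samples, lookback, features); the layer size, optimizer, and loss are arbitrary illustrative choices, and the exact import path may differ between Keras versions.

```python
from keras.models import Sequential   # or tensorflow.keras.models, depending on version
from keras.layers import LSTM, Dense

lookback, n_features = 20, 1          # must match the shape of the windowed inputs

model = Sequential()
model.add(LSTM(32, input_shape=(lookback, n_features)))  # 32 units: arbitrary choice
model.add(Dense(1))                                      # next-step prediction
model.compile(optimizer="adam", loss="mse")

# model.fit(X_train, y_train, epochs=10, batch_size=64, validation_split=0.1)
```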

## Shortcomings?

LSTMs, though not on the scale of so-called "deep" (some convolutional) neural networks, are difficult to train, mainly because of the computationally expensive nature of time-based gradient descent, the size of the networks, and the amount of data over which they must be trained. Further complicating matters, since LSTMs are stateful, many problems require "online" training, meaning that they cannot be trained all at once by highly optimized, vectorized calculations in a batch with error computation and gradient descent over groups. To this end, I highly recommend running training on a personal server with a GPU, which greatly accelerates tensor math through parallelism. cuDNN and other work by Nvidia, as well as the libraries mentioned above, have led to a great gain in speed for training these networks.
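
For example, in Keras a stateful LSTM must be given a fixed batch size, fed the data in chronological order, and have its state reset manually between passes over the series; a rough sketch of that pattern, with illustrative shapes and epoch count:

```python
from keras.models import Sequential   # or tensorflow.keras.models, depending on version
from keras.layers import LSTM, Dense

batch_size, lookback, n_features = 1, 20, 1

model = Sequential()
model.add(LSTM(32, stateful=True,
               batch_input_shape=(batch_size, lookback, n_features)))
model.add(Dense(1))
model.compile(optimizer="adam", loss="mse")

# Stateful/online-style training: preserve ordering and reset state each epoch.
# for epoch in range(10):
#     model.fit(X_train, y_train, epochs=1, batch_size=batch_size, shuffle=False)
#     model.reset_states()
```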

Other (more academic) problems include convergence time, throwing away noise (or reducing its effect on convergence time), finding correlated or predictive data to feed through these models, data normalization/regularization/processing, decision making for trading based upon model output, and much more.

## Future work/reading

Luke will forward some papers. Most have to do with:

  1. topology or parameter selection
  2. optimizing lookback windows and learning "elasticity" while maintaining statefulness for good short-term predictions in a sliding window
  3. regarding LSTMs at large: optimizing training
  4. optimizing and pruning large networks
  5. optimizing and pruning training data (concept of "attention")
  6. generative models and other fancy stuff or fun applications

Some code:

## Citation

Cited as:

Schuermann, Lucas V. (Apr 2017). Notes on LSTMs for Time Series Prediction in Finance. Writing. https://lucasschuermann.com/writing/notes-on-lstms-in-finance

Or

@misc{schuermann2017lstms,
  title   = {Notes on LSTMs for Time Series Prediction in Finance},
  author  = {Schuermann, Lucas V.},
  year    = {2017},
  month   = {Apr},
  url     = {https://lucasschuermann.com/writing/notes-on-lstms-in-finance}
}