
# Forecasting algorithms

There are many different forecasting algorithms, and SkuBrain makes use of over 30 of them, drawn from three broad algorithmic families:

- Exponential smoothing
- ARIMA
- Intermittent demand

None of these is a good place to start a discussion about forecasting algorithms though, so I’ll begin instead by describing some simple algorithms before describing those that SkuBrain actually uses.

## Simple forecasting algorithms

In all of the examples below, I’ll assume we’ve got the following **quarterly** sales history:

Interval | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 |
---|---|---|---|---|---|---|---|---|---|---|---|
Quarter | Q3 | Q4 | Q1 | Q2 | Q3 | Q4 | Q1 | Q2 | Q3 | Q4 | Q1 |
Units sold | 4410 | 6435 | 4939 | 6487 | 25521 | 18764 | 12223 | 18590 | 36898 | 28826 | 20329 |

### Average

One of the simplest algorithms is the trusty old `average` (or the `mean`), which is of course obtained by dividing the sum of a list of values by the number of values in the list. And our forecast for each quarter is simply the mean of all of the quarters in our sales history (`16674.73 units`).
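The average forecast can be sketched in a couple of lines of Python, using the sales history from the table above:

```python
# A minimal sketch of the plain average forecast, using the sample
# quarterly sales history from the table above.
sales = [4410, 6435, 4939, 6487, 25521, 18764,
         12223, 18590, 36898, 28826, 20329]

# The forecast for every future quarter is simply the mean of history.
forecast = sum(sales) / len(sales)
print(round(forecast, 2))  # 16674.73
```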

### Moving average

One objection to the average is that it places equal importance on both recent data and very old data. In most businesses, recent sales activity is a more accurate predictor of future sales activity than what happened many years ago (in very different circumstances).

The moving average addresses this problem by only taking into account the most recent data. Using the quarterly data that we have above, a 12 month moving average could be obtained by taking the average of the last 4 quarters only.

```
MA = (18590 + 36898 + 28826 + 20329) / 4
= 104643 / 4
= 26160.75
```

And our moving average forecast is thus `26160.75 units` for each future quarter.

This forecast is probably going to be more accurate than the plain vanilla “average” forecast that we used before, since it’s based on recent and relevant data.
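The same calculation in code amounts to averaging a slice of the most recent observations:

```python
# Sketch of the 4-quarter (12 month) moving average from the worked
# example above: only the most recent 4 observations are averaged.
sales = [4410, 6435, 4939, 6487, 25521, 18764,
         12223, 18590, 36898, 28826, 20329]

window = 4
ma = sum(sales[-window:]) / window
print(ma)  # 26160.75
```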

### Weighted moving average

A more sophisticated way of placing emphasis on recent data is to use `weights`. Weights are multipliers that get applied to each of the data points and, when calculating the average, rather than dividing by the number of data points we divide by the sum of the weights. So, for example, using the weights 1, 2, 3 and 4 with our 12 month moving average we’d end up with:

```
WMA = (1 * 18590 + 2 * 36898 + 3 * 28826 + 4 * 20329) / (1 + 2 + 3 + 4)
= 260180 / 10
= 26018
```

So our weighted moving average forecast comes out at `26018 units` per quarter.

One way to think about this is that a data point with a weight of 2 will be given twice as much weight/importance as a data point that has a weight of 1. In the above example then, the data from interval 11 is accorded 4 times as much importance as the data from interval 8.
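The weighted version of the calculation looks like this in code, with the newest quarter receiving the largest weight:

```python
# Sketch of the weighted moving average from the example above, with
# weights 1..4 applied to the last four quarters (newest gets 4).
sales = [4410, 6435, 4939, 6487, 25521, 18764,
         12223, 18590, 36898, 28826, 20329]

weights = [1, 2, 3, 4]              # oldest -> newest
recent = sales[-len(weights):]      # [18590, 36898, 28826, 20329]
wma = sum(w * y for w, y in zip(weights, recent)) / sum(weights)
print(wma)  # 26018.0
```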

## Simple exponential smoothing

Having covered some simpler algorithms, it’s now time to take a look at the main family of algorithms that SkuBrain uses, which is exponential smoothing.

The simplest exponential smoothing method (sometimes called “single exponential smoothing”) is suitable for forecasting data with no trend or seasonal pattern.

It’s essentially a form of weighted moving average where the weightings decrease exponentially the older the data is. The speed at which the weightings decrease is controlled by a parameter that is known as the *smoothing constant* (usually represented by a lower case Greek letter α in mathematical texts on the topic).

All of this is just talking in vagaries though – so here’s how it works:

Given some sales data `y_1, y_2, …, y_T` for the time periods `1..T` and a smoothing parameter `α` such that `0 ≤ α ≤ 1`, forecasted sales for the period `T + 1` are calculated as:

```
ŷ_{T+1} = α·y_T + α(1−α)·y_{T−1} + α(1−α)²·y_{T−2} + … + α(1−α)^{T−1}·y_1
```

So with a smoothing parameter of `α = 0.8`, the weightings given to the various data points in our sample data (which consisted of 11 historic intervals/observations) would be:

Interval | Weight |
---|---|
11 | 0.8 |
10 | 0.16 |
9 | 0.032 |
8 | 0.0064 |
etc… | … |

After that, the only thing that is left to do is to choose an appropriate value for α – something known as parameter optimization.

Before moving on though, let’s take a look at the forecast we get for our sample data when using simple exponential smoothing with `α = 0.546`:
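The weighted-sum form given above can be sketched directly in code. The `ses_forecast` helper below is illustrative (it is not how SkuBrain computes its forecasts internally), but it reproduces the weights from the table:

```python
# Sketch of simple exponential smoothing in the weighted-sum form given
# above: the weight on an observation k periods old is alpha*(1-alpha)^k.
sales = [4410, 6435, 4939, 6487, 25521, 18764,
         12223, 18590, 36898, 28826, 20329]

def ses_forecast(y, alpha):
    # Newest observation first, so index k means "k periods old".
    return sum(alpha * (1 - alpha) ** k * obs
               for k, obs in enumerate(reversed(y)))

# Weights for alpha = 0.8 match the table above: 0.8, 0.16, 0.032, 0.0064
print([round(0.8 * 0.2 ** k, 4) for k in range(4)])

# The forecast for our sample data with alpha = 0.546:
print(round(ses_forecast(sales, 0.546), 2))
```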

## Double and triple exponential smoothing

If you’ve followed the description of exponential smoothing so far, you’ll no doubt still be wondering where trend and seasonality fit into the picture. These are handled by more complex variants of exponential smoothing, commonly referred to as double exponential smoothing and triple exponential smoothing, and this is where exponential smoothing really shines!

It’s worth noting, however, that among the various different implementations of double and triple exponential smoothing, some assume additive trend, others multiplicative trend and yet others again damped (logarithmic) trend. Similarly, some implementations of these algorithms assume additive seasonality and others assume multiplicative seasonality. SkuBrain implements **all of these variants** and then runs a forecast tournament in order to select the most appropriate one for each forecast that it prepares.
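To give a feel for what one of these variants does, here is a minimal hand-rolled sketch of the additive flavour of triple exponential smoothing (often called Holt-Winters). This is not SkuBrain’s implementation: the `holt_winters_additive` helper, the initialization scheme and the `alpha`/`beta`/`gamma` values are all illustrative and unoptimized.

```python
# A minimal sketch of additive triple exponential smoothing
# (Holt-Winters): a level, a trend and one seasonal index per quarter
# are each updated with their own smoothing parameter.
sales = [4410, 6435, 4939, 6487, 25521, 18764,
         12223, 18590, 36898, 28826, 20329]

def holt_winters_additive(y, period, alpha, beta, gamma, horizon):
    # Initialize level, trend and seasonal indices from the first cycles.
    level = sum(y[:period]) / period
    trend = (sum(y[period:2 * period]) - sum(y[:period])) / period ** 2
    season = [y[i] - level for i in range(period)]

    for t, obs in enumerate(y):
        last_level = level
        s = season[t % period]
        level = alpha * (obs - s) + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
        season[t % period] = gamma * (obs - level) + (1 - gamma) * s

    # Project level + trend forward, re-using the seasonal indices.
    return [level + (h + 1) * trend + season[(len(y) + h) % period]
            for h in range(horizon)]

forecast = holt_winters_additive(sales, period=4, alpha=0.5,
                                 beta=0.1, gamma=0.1, horizon=4)
print([round(f) for f in forecast])
```

Unlike the averaging methods, the four forecast values differ from quarter to quarter (seasonality) and drift upward or downward over time (trend).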

More important, however, are the results… so let’s take a look at the forecast that SkuBrain produces for our sample data using a triple exponential smoothing algorithm:

Clearly this is more sophisticated than any of the averaging algorithms that we discussed earlier. Instead of the fairly improbable flat line that any of our averaging algorithms would have produced, SkuBrain is detecting both trend and seasonality in the data (hooray!).

## ARIMA

ARIMA (Autoregressive Integrated Moving Average) is another popular time-series method and provides a complementary approach to forecasting. While exponential smoothing models are based on a description of trend and seasonality in the data, ARIMA models aim to describe the autocorrelations in the data. ARIMA models apply to a wide variety of use cases, taking into account seasonality, cyclicality, moving averages and the weighting of recent history, and can provide the best-fit models when there is enough history. If you would like to find out more, please see here.

Here is an example of an ARIMA model:

As you can see, based on the history in 2015 and 2014, it projects a similar, seasonal demand in 2016, but discounts the unusually high peak in 2013 because it is somewhat distant in history. While ARIMA is very powerful, it is often difficult to apply to data sets and usually requires advanced statistical tools and techniques. However, with SkuBrain, YOU don’t need to crunch your data because SkuBrain will run a tournament with all the algorithms and find the best model for you.
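Fitting a full ARIMA model really does call for a statistics package, but the “AR” (autoregressive) idea at its core can be illustrated by hand: regress each observation on the one before it and use the fitted relationship to forecast. The `fit_ar1` helper below is an illustration of autoregression only, not a full ARIMA model:

```python
# Least-squares fit of an AR(1) relationship y[t] = c + phi * y[t-1],
# as an illustration of the autoregressive idea behind ARIMA.
sales = [4410, 6435, 4939, 6487, 25521, 18764,
         12223, 18590, 36898, 28826, 20329]

def fit_ar1(y):
    x, z = y[:-1], y[1:]          # pairs of (previous, current) values
    n = len(x)
    mx, mz = sum(x) / n, sum(z) / n
    phi = (sum((a - mx) * (b - mz) for a, b in zip(x, z))
           / sum((a - mx) ** 2 for a in x))
    c = mz - phi * mx
    return c, phi

c, phi = fit_ar1(sales)
next_value = c + phi * sales[-1]  # one-step-ahead forecast
print(round(phi, 3), round(next_value))
```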

## Croston’s intermittent demand

Finally, in addition to the exponential smoothing family and the ARIMA family of algorithms, SkuBrain uses a bit of a specialist algorithm known as Croston’s intermittent demand. This algorithm is designed to be used in situations where you don’t have much data and what you do have is patchy at best (maybe you make one or two sales of a particular item every 3 or 4 months).

If you’re *really* interested to know how this works, you can read a bit more about it here.
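In brief, Croston’s classic method smooths the nonzero demand sizes and the gaps between them separately (each with simple exponential smoothing) and forecasts the average demand per period as their ratio. The `croston` helper and the `alpha` value below are an illustrative sketch of that idea, not SkuBrain’s implementation:

```python
# A minimal sketch of Croston's classic method for intermittent demand:
# smooth the nonzero demand sizes and the intervals between them
# separately, then forecast expected demand per period as size/interval.
def croston(y, alpha=0.1):
    demands = [v for v in y if v > 0]
    if not demands:
        return 0.0
    positions = [i for i, v in enumerate(y) if v > 0]
    # The first interval counts the periods up to and including the
    # first sale; the rest are gaps between successive sales.
    intervals = ([positions[0] + 1]
                 + [b - a for a, b in zip(positions, positions[1:])])
    size, interval = demands[0], intervals[0]
    for d, i in zip(demands[1:], intervals[1:]):
        size = alpha * d + (1 - alpha) * size
        interval = alpha * i + (1 - alpha) * interval
    return size / interval  # expected demand per period

# A patchy history: one or two sales every few months.
history = [0, 0, 2, 0, 0, 0, 1, 0, 0, 2, 0, 0]
print(round(croston(history), 3))  # 0.618
```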