Mean Absolute Scaled Error

From GM-RKB

See: MASE, Time Series Forecasting, Time Series Forecasting Performance Measure.



References

2013

  • http://en.wikipedia.org/wiki/Mean_absolute_scaled_error
    • In statistics, the mean absolute scaled error (MASE) is a measure of the accuracy of forecasts. It was proposed in 2006 by the Australian statistician Rob J. Hyndman, who described it as a "generally applicable measurement of forecast accuracy without the problems seen in the other measurements."

      The mean absolute scaled error is given by: [math]\displaystyle{ \mathrm{MASE} = \frac{1}{n}\sum_{t=1}^n\left( \frac{\left| e_t \right|}{\frac{1}{n-1}\sum_{i=2}^n \left| Y_i-Y_{i-1}\right|} \right) = \frac{\sum_{t=1}^{n} \left| e_t \right|}{\frac{n}{n-1}\sum_{i=2}^n \left| Y_i-Y_{i-1}\right|} }[/math] where the numerator [math]e_t[/math] is the forecast error for a given period, defined as the actual value ([math]Y_t[/math]) minus the forecast value ([math]F_t[/math]) for that period: [math]e_t = Y_t - F_t[/math], and the denominator is the average forecast error of the one-step "naive forecast method", which uses the actual value from the prior period as the forecast: [math]F_t = Y_{t-1}[/math].

      This scale-free error metric "can be used to compare forecast methods on a single series and also to compare forecast accuracy between series." It is well suited to intermittent-demand series because it never gives infinite or undefined values except in the irrelevant case where all historical data are equal.
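The definition above can be sketched in a few lines of Python (a minimal illustration of the formula; the function name and sample series are ours, not from any library):

```python
def mase(actual, forecast):
    """Mean absolute scaled error, scaled by the in-sample one-step naive forecast."""
    n = len(actual)
    # Numerator: mean absolute forecast error, e_t = Y_t - F_t
    mae = sum(abs(a - f) for a, f in zip(actual, forecast)) / n
    # Denominator: mean absolute error of the naive forecast F_t = Y_{t-1}
    naive_mae = sum(abs(actual[i] - actual[i - 1]) for i in range(1, n)) / (n - 1)
    return mae / naive_mae

print(mase([10, 12, 14, 13, 15], [11, 12, 13, 14, 14]))  # ~0.457

# Degenerate case: a constant history zeroes the denominator, so MASE is
# undefined -- the "irrelevant case" noted in the text.
# mase([5, 5, 5, 5], [5, 5, 5, 5]) would raise ZeroDivisionError.
```

A value below 1 means the forecast has a smaller mean absolute error than the in-sample naive forecast; a value above 1 means it does worse.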


2006