# Numeric Prediction Task Performance Measure

(Redirected from Approximation Error Measure)

## References

### 2016

• (Wikipedia, 2016) ⇒ http://wikipedia.org/wiki/Approximation_error Retrieved:2016-4-11.
• The approximation error in some data is the discrepancy between an exact value and some approximation to it. An approximation error can occur because
1. the measurement of the data is not precise due to the instruments (e.g., the true length of a piece of paper is 4.5 cm, but since the ruler does not read decimals, you round it to 5 cm), or
2. approximations are used instead of the real data (e.g., 3.14 instead of π).
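The 3.14-for-π substitution in the second case can be quantified with a short Python sketch (an illustration added here, not part of the quoted excerpt):

```python
import math

# Using 3.14 in place of the real value pi introduces an approximation error.
v = math.pi        # exact value
v_approx = 3.14    # approximation used instead of the real data

error = abs(v - v_approx)
print(f"approximation error: {error:.6f}")  # roughly 0.0016
```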
• In the mathematical field of numerical analysis, the numerical stability of an algorithm indicates how such errors are propagated by the algorithm.

• (Wikipedia, 2016) ⇒ http://wikipedia.org/wiki/Approximation_error#Formal_Definition Retrieved:2016-4-11.
• One commonly distinguishes between the relative error and the absolute error.

Given some value v and its approximation vapprox, the absolute error is : $\epsilon = |v-v_\text{approx}|\ ,$ where the vertical bars denote the absolute value.

If $v \ne 0,$ the relative error is : $\eta = \frac{\epsilon}{|v|} = \left| \frac{v-v_\text{approx}}{v} \right| = \left| 1 - \frac{v_\text{approx}}{v} \right|,$

and the percent error is : $\delta = 100\%\times\eta = 100\%\times\frac{\epsilon}{|v|} = 100\%\times\left| \frac{v-v_\text{approx}}{v} \right|.$

In words, the absolute error is the magnitude of the difference between the exact value and the approximation. The relative error is the absolute error divided by the magnitude of the exact value. The percent error is the relative error expressed per 100.
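The three definitions translate directly into code. The following is a minimal Python sketch (the function names are illustrative, not from the source), applied to the ruler example above:

```python
def absolute_error(v, v_approx):
    # epsilon = |v - v_approx|
    return abs(v - v_approx)

def relative_error(v, v_approx):
    # eta = epsilon / |v|; undefined when v = 0
    if v == 0:
        raise ValueError("relative error is undefined for v = 0")
    return absolute_error(v, v_approx) / abs(v)

def percent_error(v, v_approx):
    # delta = 100% x eta
    return 100 * relative_error(v, v_approx)

# The ruler example: true length 4.5 cm, rounded reading 5 cm.
print(absolute_error(4.5, 5.0))   # 0.5 (cm)
print(percent_error(4.5, 5.0))    # about 11.1 (percent)
```

Note that the relative and percent errors are dimensionless, while the absolute error carries the units of the measured quantity.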