Predictive Model Interpretability Measure
A Predictive Model Interpretability Measure is an interpretability measure that quantifies how easily a human can understand the predictions produced by a predictive model.
References
2020
- (Wikipedia, 2020) ⇒ https://en.wikipedia.org/wiki/Additive_model Retrieved: 2020-10-02.
- QUOTE: … Furthermore, the AM is more flexible than a standard linear model, while being more interpretable than a general regression surface at the cost of approximation errors. Problems with AM include model selection, overfitting, and multicollinearity.
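The additive-model structure referenced in the quote above can be made concrete with a small backfitting loop. The sketch below is illustrative only: it uses a low-degree polynomial smoother for each component f_j (standard additive-model software uses splines or local regression instead), and the function name `fit_additive_model` and the toy data are hypothetical, not taken from the cited source.

```python
import numpy as np

def fit_additive_model(X, y, degree=3, n_iter=20):
    """Backfitting for y ≈ alpha + f_1(x_1) + ... + f_p(x_p)."""
    n, p = X.shape
    alpha = y.mean()
    coefs = [np.zeros(degree + 1) for _ in range(p)]  # polynomial coefficients of each f_j
    fitted = np.zeros((n, p))                          # current component values f_j(x_j)
    for _ in range(n_iter):
        for j in range(p):
            # partial residual: remove the intercept and every component except f_j
            r = y - alpha - fitted.sum(axis=1) + fitted[:, j]
            coefs[j] = np.polyfit(X[:, j], r, degree)
            fj = np.polyval(coefs[j], X[:, j])
            fitted[:, j] = fj - fj.mean()              # center each f_j for identifiability
    return alpha, coefs, fitted

# Toy data whose true regression function is additive in the two features.
rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, size=(500, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(0.0, 0.1, 500)

alpha, coefs, fitted = fit_additive_model(X, y)
pred = alpha + fitted.sum(axis=1)
print("training RMSE:", np.sqrt(np.mean((y - pred) ** 2)))
# Each learned component f_j can be inspected or plotted against x_j on its own,
# which is what makes the additive model interpretable feature by feature.
```

Because every feature contributes through its own one-dimensional function, the fitted model can be read component by component, which is the interpretability advantage the quote contrasts with a general regression surface.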
2017
- (Lundberg & Lee, 2017) ⇒ Scott M. Lundberg, and Su-In Lee. (2017). “A Unified Approach to Interpreting Model Predictions.” In: Proceedings of the 31st International Conference on Neural Information Processing Systems.
- QUOTE: ... The ability to correctly interpret a prediction model’s output is extremely important. It engenders appropriate user trust, provides insight into how a model may be improved, and supports understanding of the process being modeled. In some applications, simple models (e.g., linear models) are often preferred for their ease of interpretation, even if they may be less accurate than complex ones. However, the growing availability of big data has increased the benefits of using complex models, so bringing to the forefront the trade-off between accuracy and interpretability of a model’s output. ...
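The accuracy-versus-interpretability trade-off discussed by Lundberg & Lee motivates additive feature attributions such as Shapley values, which their paper unifies. The sketch below is not the authors' SHAP implementation (which treats missing features via conditional expectations and provides fast model-specific algorithms); it is a brute-force Shapley computation that fills in "absent" features with background means, with hypothetical names (`shapley_attributions`) and toy data, assuming NumPy and scikit-learn are available.

```python
import numpy as np
from itertools import combinations
from math import factorial
from sklearn.ensemble import RandomForestRegressor

def shapley_attributions(predict, x, background):
    """Exact Shapley values for one prediction by enumerating feature coalitions.

    'Absent' features are filled in with background means (a simplifying choice);
    the cost is exponential in the number of features, so only small p is feasible.
    """
    p = len(x)
    baseline = background.mean(axis=0)

    def value(subset):
        z = baseline.copy()
        idx = list(subset)
        z[idx] = x[idx]                        # present features keep their observed values
        return predict(z.reshape(1, -1))[0]

    phi = np.zeros(p)
    for j in range(p):
        others = [k for k in range(p) if k != j]
        for size in range(p):
            for S in combinations(others, size):
                w = factorial(size) * factorial(p - size - 1) / factorial(p)
                phi[j] += w * (value(S + (j,)) - value(S))
    return phi

# Toy model with an interaction, so attributions are not obvious from coefficients.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
y = X[:, 0] + 2.0 * X[:, 1] * X[:, 2]
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

x = X[0]
phi = shapley_attributions(model.predict, x, X)
base_pred = model.predict(X.mean(axis=0).reshape(1, -1))[0]
print("per-feature attributions:", np.round(phi, 3))
# Local accuracy: the attributions plus the baseline prediction recover the model's output.
print(phi.sum() + base_pred, "vs", model.predict(x.reshape(1, -1))[0])
```

Under this setup, the attributions sum to the difference between the model's prediction at x and its prediction at the background mean, which is the local-accuracy property that such additive attribution measures aim to satisfy.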