Predictive Model Interpretability Measure

From GM-RKB

A Predictive Model Interpretability Measure is an interpretability measure that quantifies how readily a human can understand a predictive model and its predictions.



References

2017

  • (Lundberg & Lee, 2017) ⇒ Scott M. Lundberg, and Su-In Lee. (2017). “A Unified Approach to Interpreting Model Predictions.” In: Proceedings of the 31st International Conference on Neural Information Processing Systems.
    • QUOTE: ... The ability to correctly interpret a prediction model’s output is extremely important. It engenders appropriate user trust, provides insight into how a model may be improved, and supports understanding of the process being modeled. In some applications, simple models (e.g., linear models) are often preferred for their ease of interpretation, even if they may be less accurate than complex ones. However, the growing availability of big data has increased the benefits of using complex models, so bringing to the forefront the trade-off between accuracy and interpretability of a model’s output. ...
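The unified approach proposed by Lundberg & Lee attributes a prediction to input features using Shapley values. A minimal brute-force sketch of this idea (the toy linear model, instance, and baseline point are illustrative assumptions, not from the paper):

```python
from itertools import combinations
from math import factorial

# Toy predictive model (hypothetical): f(x) = 2*x0 + 3*x1
def model(x):
    return 2 * x[0] + 3 * x[1]

baseline = [0.0, 0.0]   # assumed background/reference input
x = [1.0, 2.0]          # instance whose prediction we explain
n = len(x)

def value(subset):
    # Evaluate the model with features outside `subset` replaced by the baseline.
    z = [x[i] if i in subset else baseline[i] for i in range(n)]
    return model(z)

def shapley(i):
    # Exact Shapley value of feature i: weighted average of its marginal
    # contribution over all coalitions of the other features.
    phi = 0.0
    others = [j for j in range(n) if j != i]
    for k in range(len(others) + 1):
        for S in combinations(others, k):
            w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
            phi += w * (value(set(S) | {i}) - value(set(S)))
    return phi

phis = [shapley(i) for i in range(n)]
print(phis)                              # per-feature attributions
print(sum(phis), model(x) - model(baseline))  # local accuracy: sums match
```

For models with many features this exact enumeration is exponential; SHAP's practical contribution is efficient approximations (e.g., for tree ensembles and via weighted linear regression) of these same values.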