Interpretable Predictive Model
(Redirected from Interpretable Model)
An Interpretable Predictive Model is a predictive model with a relatively high model interpretability measure value.
- Example(s):
- a Decision Tree Model.
- an Additive Model.
- …
- Counter-Example(s):
- See: Predictive Model Interpretation System.
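A Decision Tree Model, listed above as an example, is considered interpretable because its fitted decision rules can be printed and read directly. The following is a minimal, illustrative sketch using scikit-learn (the dataset and feature names are chosen for illustration, not taken from the sources cited below):

```python
# Illustration: a decision tree's learned rules are human-readable.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# export_text renders the fitted tree as indented if/else rules
# over the named features, which a human can inspect directly.
rules = export_text(
    clf, feature_names=["sepal_len", "sepal_wid", "petal_len", "petal_wid"]
)
print(rules)
```

Contrast this with, say, a deep neural network, whose fitted parameters admit no comparably direct reading.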
References
2020
- (Wikipedia, 2020) ⇒ https://en.wikipedia.org/wiki/Additive_model Retrieved:2020-10-2.
- … Furthermore, the AM is more flexible than a standard linear model, while being more interpretable than a general regression surface at the cost of approximation errors. Problems with AM include model selection, overfitting, and multicollinearity.
2017
- (Wikipedia, 2017) ⇒ https://en.wikipedia.org/wiki/Additive_model Retrieved:2017-10-17.
- In statistics, an additive model (AM) is a nonparametric regression method … the AM is more flexible than a standard linear model, while being more interpretable than a general regression surface at the cost of approximation errors. …
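The additive model (AM) quoted above owes its interpretability to its form y ≈ α + f₁(x₁) + … + fₚ(xₚ): each feature contributes a one-dimensional curve that can be plotted and inspected. A minimal NumPy sketch of the classic backfitting procedure follows; the function names and the crude binned-mean smoother are illustrative assumptions, standing in for the spline or loess smoothers normally used:

```python
import numpy as np

def smooth(x, r, bins=10):
    """Crude smoother: average the partial residuals r within quantile bins of x.
    (A stand-in for a proper spline/loess smoother.)"""
    edges = np.quantile(x, np.linspace(0.0, 1.0, bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, bins - 1)
    means = np.array(
        [r[idx == b].mean() if np.any(idx == b) else 0.0 for b in range(bins)]
    )
    return means[idx]

def backfit(X, y, n_iter=20):
    """Fit an additive model by backfitting: cycle over features, smoothing each
    partial residual; components are centered for identifiability."""
    n, p = X.shape
    alpha = y.mean()
    f = np.zeros((p, n))
    for _ in range(n_iter):
        for j in range(p):
            # partial residual: remove the intercept and all *other* components
            resid = y - alpha - f.sum(axis=0) + f[j]
            f[j] = smooth(X[:, j], resid)
            f[j] -= f[j].mean()
    return alpha, f
```

Each fitted component f[j] can be plotted against X[:, j], which is precisely the interpretability the quote refers to; the model selection, overfitting, and multicollinearity problems it mentions arise in choosing the smoother and its flexibility.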
2015
- (Debray et al., 2015) ⇒ Thomas P.A. Debray, Yvonne Vergouwe, Hendrik Koffijberg, Daan Nieboer, Ewout W. Steyerberg, and Karel GM Moons. (2015). “A New Framework to Enhance the Interpretation of External Validation Studies of Clinical Prediction Models.” Journal of clinical epidemiology 68, no. 3
2015
- (Shah et al., 2015) ⇒ Neil Shah, Danai Koutra, Tianmin Zou, Brian Gallagher, and Christos Faloutsos. (2015). “TimeCrunch: Interpretable Dynamic Graph Summarization.” In: Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD-2015). ISBN:978-1-4503-3664-2 doi:10.1145/2783258.2783321
2014
- (Purushotham et al., 2014) ⇒ Sanjay Purushotham, Martin Renqiang Min, C.-C. Jay Kuo, and Rachel Ostroff. (2014). “Factorized Sparse Learning Models with Interpretable High Order Feature Interactions.” In: Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD-2014). ISBN:978-1-4503-2956-9 doi:10.1145/2623330.2623747
2012
- (Vellido et al., 2012) ⇒ Alfredo Vellido, José David Martín-Guerrero, and Paulo JG Lisboa. (2012). “Making Machine Learning Models Interpretable.” In: ESANN.
- QUOTE: Data of different levels of complexity and of ever growing diversity of characteristics are the raw materials that machine learning practitioners try to model using their wide palette of methods and tools. The obtained models are meant to be a synthetic representation of the available, observed data that captures some of their intrinsic regularities or patterns. Therefore, the use of machine learning techniques for data analysis can be understood as a problem of pattern recognition or, more informally, of knowledge discovery and data mining. There exists a gap, though, between data modeling and knowledge extraction. Models, depending on the machine learning techniques employed, can be described in diverse ways but, in order to consider that some knowledge has been achieved from their description, we must take into account the human cognitive factor that any knowledge extraction process entails. These models as such can be rendered powerless unless they can be interpreted, and the process of human interpretation follows rules that go well beyond technical prowess. For this reason, interpretability is a paramount quality that machine learning methods should aim to achieve if they are to be applied in practice. This paper is a brief introduction to the special session on interpretable models in machine learning. It includes a discussion of the several works accepted for the session, with an overview of the context of wider research on interpretability of machine learning models.