2012 MakingMachineLearningModelsInterpretable

From GM-RKB

Subject Headings: Interpretable Predictive Model.

Notes

Cited By

Quotes

Abstract

Data of different levels of complexity and of ever-growing diversity of characteristics are the raw materials that machine learning practitioners try to model using their wide palette of methods and tools. The obtained models are meant to be a synthetic representation of the available, observed data that captures some of their intrinsic regularities or patterns. Therefore, the use of machine learning techniques for data analysis can be understood as a problem of pattern recognition or, more informally, of knowledge discovery and data mining. There exists a gap, though, between data modeling and knowledge extraction. Models, depending on the machine learning techniques employed, can be described in diverse ways but, in order to consider that some knowledge has been achieved from their description, we must take into account the human cognitive factor that any knowledge extraction process entails. These models as such can be rendered powerless unless they can be interpreted, and the process of human interpretation follows rules that go well beyond technical prowess. For this reason, interpretability is a paramount quality that machine learning methods should aim to achieve if they are to be applied in practice. This paper is a brief introduction to the special session on interpretable models in machine learning. It includes a discussion of the works accepted for the session, with an overview of the context of wider research on the interpretability of machine learning models.
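As a concrete illustration of the kind of interpretability the abstract argues for (this sketch and its toy data are not from the paper), consider a one-rule "decision stump": its entire learned model can be read directly as a human-understandable rule, in contrast to a black-box model whose description requires post-hoc interpretation.

```python
# Illustrative sketch (hypothetical data, not from the paper): a
# decision stump on a single feature, where the fitted model IS a
# single human-readable IF/THEN rule.

def fit_stump(xs, ys):
    """Pick the threshold on one feature that minimizes
    misclassifications; return (threshold, label_if_above)."""
    best = None  # (errors, threshold, label_if_above)
    for t in sorted(set(xs)):
        for above in (0, 1):
            errors = sum(
                1 for x, y in zip(xs, ys)
                if (above if x > t else 1 - above) != y
            )
            if best is None or errors < best[0]:
                best = (errors, t, above)
    _, t, above = best
    return t, above

# Toy data: feature = lesion size in mm, label = 1 (positive class).
xs = [2.0, 3.5, 4.0, 7.5, 8.0, 9.2]
ys = [0,   0,   0,   1,   1,   1]

threshold, above = fit_stump(xs, ys)
rule = f"IF size > {threshold} THEN class={above} ELSE class={1 - above}"
print(rule)  # the whole model is this one readable rule
```

The point of the sketch is the contrast the abstract draws: here, no extra interpretation step separates the model from extracted knowledge, whereas a more accurate but opaque model would leave that gap open.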

References


Alfredo Vellido, Paulo J. G. Lisboa, and José David Martín-Guerrero. (2012). "Making Machine Learning Models Interpretable."