(Redirected from Explanation-based Learning)
- AKA: Analytical Learning, Explanation-Based Learning, EBL.
- It can be solved by a Deductive Learning System.
- See: Deductive Reasoning, Expert System, Classification Task, Utility Problem.
- (DeJong & Lim, 2017) ⇒ DeJong, G., & Lim, S. (2017). "Explanation-Based Learning". In: Sammut, C., Webb, G.I. (eds) "Encyclopedia of Machine Learning and Data Mining". Springer, Boston, MA.
- QUOTE: Explanation-based learning (EBL) is a principled method for exploiting available domain knowledge to improve supervised learning. Improvement can be in speed of learning, confidence of learning, accuracy of the learned concept, or a combination of these. In modern EBL the domain theory represents an expert’s approximate knowledge of complex systematic world behavior. It may be imperfect and incomplete. Inference over the domain knowledge provides analytic evidence that complements the empirical evidence of the training data. By contrast, in original EBL, the domain theory is required to be much stronger; inferred properties are guaranteed. Another important aspect of modern EBL is the interaction between domain knowledge and labeled training examples afforded by explanations. Interaction allows the nonlinear combination of evidence so that the resulting information about the target concept can be much greater than the sum of the information from each evidence source taken independently.
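The EBL cycle described above (explain one labeled example with the domain theory, then generalize the explanation into an operational definition of the target concept) can be sketched minimally in Python. The rules and predicates below are hypothetical, loosely modeled on the classic "cup" illustration; they are not from the quoted source.

```python
# Minimal EBL sketch (hypothetical domain theory and predicates).
# The domain theory maps a predicate to the list of predicates in its
# rule body; predicates absent from the theory are "operational", i.e.,
# directly observable on a training example.
DOMAIN_THEORY = {
    "cup": ["liftable", "holds_liquid"],
    "liftable": ["light", "has_handle"],
    "holds_liquid": ["has_concavity"],
}

def explain(goal, facts):
    """Backward-chain the goal down to operational predicates satisfied
    by the example's observed facts; return those leaves, or None if no
    explanation exists."""
    if goal not in DOMAIN_THEORY:              # operational predicate
        return [goal] if goal in facts else None
    leaves = []
    for subgoal in DOMAIN_THEORY[goal]:
        sub_leaves = explain(subgoal, facts)
        if sub_leaves is None:
            return None                        # explanation fails
        leaves += sub_leaves
    return leaves

# One positive training example: an object labeled "cup" with these
# observed operational features.
example_facts = {"light", "has_handle", "has_concavity"}

# Generalizing the explanation's leaves yields an operational rule:
# cup(X) :- light(X), has_handle(X), has_concavity(X).
learned_rule = ("cup", explain("cup", example_facts))
print(learned_rule)
```

Note that a single example suffices here precisely because the domain theory carries the analytic evidence; the example only selects which proof (and hence which operational rule) to retain.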
- (Sammut & Webb, 2017) ⇒ Sammut, C., & Webb, G.I. (2017). "Deductive Learning". In: Sammut, C., Webb, G.I. (eds) "Encyclopedia of Machine Learning and Data Mining". Springer, Boston, MA.
- QUOTE: Deductive learning is a subclass of machine learning that studies algorithms for learning provably correct knowledge. Typically such methods are used to speed up problem solvers by adding knowledge to them that is deductively entailed by existing knowledge, but that may result in faster solutions.
- (Valiant, 1984) ⇒ Valiant, L. G. (1984). "Deductive Learning". Phil. Trans. R. Soc. Lond. A, 312(1522), 441-446.
- ABSTRACT: A non-technical discussion of a new approach to the problem of concept learning in the context of artificial devices is given. Learning is viewed as a process of acquiring a program for recognizing a concept from an environment that does not reveal an explicit description of the program but only suggests it by such means as identifying positive examples of it. The proposed model makes possible a study of learning that reconciles three requirements: the classes of concepts that can be learnt are relevant for general purpose knowledge; they can be characterized; the process of learning them is computationally feasible.