See: Predictive Function
- (Mitchell, 1997) ⇒ Tom M. Mitchell. (1997). "Machine Learning." McGraw-Hill.
- QUOTE: 1.2.2 Choosing the Target Function. The next design choice is to determine exactly what type of knowledge will be learned and how this will be used by the performance program. ... Let us call this target function \(V\) and again use the notation \(V : B \rightarrow \mathbb{R}\) to denote that \(V\) maps any legal board state from the set \(B\) to some real value. We intend for this target function \(V\) to assign higher scores to better board states ... Thus, we have reduced the learning task in this case to the problem of discovering an operational description of the ideal target function \(V\). It may be very difficult in general to learn such an operational form of \(V\) perfectly. In fact we often expect learning algorithms to acquire only some approximation to the target function, and for this reason the process of learning the target function is often called function approximation. In the current discussion we will use the symbol \(\hat{V}\) to refer to the function that is actually learned by our program, to distinguish it from the ideal target function \(V\).
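The function-approximation setup quoted above can be sketched in code. This is a minimal illustration, not Mitchell's checkers program: the toy state space, the feature map, and the linear form of \(\hat{V}\) are all assumptions made here, with an LMS-style weight update driving \(\hat{V}\) toward training values of the ideal target function \(V\).

```python
def V(b):
    """Ideal target function: maps a (toy, 1-D) state b to a real score.
    The linear form here is an illustrative assumption."""
    return 3.0 * b + 1.0

def features(b):
    """Hypothetical feature representation: a bias term plus the raw state."""
    return [1.0, b]

def v_hat(weights, b):
    """The learned approximation (Mitchell's V-hat): a weighted feature sum."""
    return sum(w * x for w, x in zip(weights, features(b)))

def lms_train(states, lr=0.01, epochs=500):
    """Least-mean-squares updates: nudge weights to shrink V(b) - v_hat(b)."""
    weights = [0.0, 0.0]
    for _ in range(epochs):
        for b in states:
            error = V(b) - v_hat(weights, b)
            weights = [w + lr * error * x
                       for w, x in zip(weights, features(b))]
    return weights

weights = lms_train([0.0, 1.0, 2.0, 3.0])
```

After training, `v_hat(weights, b)` tracks `V(b)` on this toy domain; in realistic tasks \(\hat{V}\) remains only an approximation because the hypothesis space rarely contains \(V\) exactly.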
- QUOTE: The key point in the above paragraph is that a lazy learner has the option of (implicitly) representing the target function by a combination of many local approximations, whereas an eager learner must commit at training time to a single global approximation. The distinction between eager and lazy learning is thus related to the distinction between global and local approximations to the target function.
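The lazy-versus-eager contrast can be made concrete with a small sketch. The setup below is an assumption for illustration: a nonlinear toy target, an eager learner that commits at training time to one global linear fit, and a lazy learner that defers to query time and builds a local k-nearest-neighbour average per query.

```python
def target(x):
    """Toy nonlinear target function (an illustrative assumption)."""
    return x * x

# Training sample over [-1, 1].
train = [(i / 10.0, target(i / 10.0)) for i in range(-10, 11)]

def eager_fit(data):
    """Eager learner: commits NOW to a single global linear approximation."""
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    sxx = sum((x - mx) ** 2 for x, _ in data)
    sxy = sum((x - mx) * (y - my) for x, y in data)
    slope = sxy / sxx
    intercept = my - slope * mx
    return lambda x: slope * x + intercept

def lazy_predict(data, x, k=3):
    """Lazy learner: at query time, average the k nearest training values --
    implicitly a different local approximation for every query point."""
    nearest = sorted(data, key=lambda p: abs(p[0] - x))[:k]
    return sum(y for _, y in nearest) / k

eager = eager_fit(train)
```

On the symmetric quadratic target, the eager global line degenerates to a constant and misses badly near the edges, while the lazy local averages stay close to the target everywhere, which is exactly the global-versus-local distinction the quote draws.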