2006 IntroductionToSemiSupLearn


Subject Headings: Semi-Supervised Learning Task, Semi-Supervised Learning Algorithm.

Notes

Cited by

  • ~684 … (includes entire edited book)

Quotes

1.1 Supervised, Unsupervised, and Semi-Supervised Learning

In order to understand the nature of semi-supervised learning, it will be useful first to take a look at supervised and unsupervised learning.

1.1.1 Supervised and Unsupervised Learning

Traditionally, there have been two fundamentally different types of tasks in machine learning.

1.1.2 Semi-Supervised Learning

Semi-supervised learning (SSL) is halfway between supervised and unsupervised learning. In addition to unlabeled data, the algorithm is provided with some supervision information – but not necessarily for all examples. Often, this information will be the targets associated with some of the examples. In this case, the data set $X = (x_i)_{i \in [n]}$ can be divided into two parts: the points $X_l := (x_1, \ldots, x_l)$, for which labels $Y_l := (y_1, \ldots, y_l)$ are provided, and the points $X_u := (x_{l+1}, \ldots, x_{l+u})$, the labels of which are not known. This is “standard” semi-supervised learning as investigated in this book; most chapters will refer to this setting.
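As a rough illustration of this split (a minimal sketch, not taken from the book; the array shapes, label values, and variable names are assumptions made for the example), the first l points carry labels while the remaining u points do not:

```python
# Minimal illustration of the standard SSL data layout: the first l points are
# labeled, the remaining u = n - l points are unlabeled.
import numpy as np

rng = np.random.default_rng(0)
n, d, l = 10, 2, 4                       # n = l + u points of dimension d (example values)
X = rng.normal(size=(n, d))              # all points x_1, ..., x_n

X_l = X[:l]                              # labeled points X_l = (x_1, ..., x_l)
Y_l = np.array([0, 1, 0, 1])             # their labels Y_l = (y_1, ..., y_l)
X_u = X[l:]                              # unlabeled points X_u = (x_{l+1}, ..., x_{l+u}); labels unknown
```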

Other forms of partial supervision are possible. For example, there may be constraints such as “these points have (or do not have) the same target” (cf. Abu-Mostafa, 1995). This more general setting is considered in chapter 5. It corresponds to a different view of semi-supervised learning: in chapter 5, SSL is seen as unsupervised learning guided by constraints. In contrast, most other approaches see SSL as supervised learning with additional information on the distribution of the examples $x$. The latter interpretation seems to be more in line with most applications, where the goal is the same as in supervised learning: to predict a target value for a given $x_i$. However, this view does not readily apply if the number and nature of the classes are not known in advance but have to be inferred from the data. In contrast, SSL as unsupervised learning with constraints may still remain applicable in such situations.

A problem related to SSL was introduced by Vapnik several decades ago: so-called transductive learning. In this setting, one is given a (labeled) training set and an (unlabeled) test set. The idea of transduction is to perform predictions only for the test points. This is in contrast to inductive learning, where the goal is to output a prediction function which is defined on the entire space $X$. Many methods described in this book will be transductive; in particular, this is rather natural for inference based on graph representations of the data. This issue will be addressed again in section 1.2.4.

(footnote: 1. For simplicity, we are assuming that all distributions have densities, and thus we restrict ourselves to dealing with densities.)

1.1.3 A Brief History of Semi-Supervised Learning

Probably the earliest idea about using unlabeled data in classification is self-learning, which is also known as self-training, self-labeling, or decision-directed learning. This is a wrapper algorithm that repeatedly uses a supervised learning method. It starts by training on the labeled data only. In each step a part of the unlabeled points is labeled according to the current decision function; then the supervised method is retrained using its own predictions as additional labeled points. This idea has appeared in the literature for some time (e.g., Scudder (1965); Fralick (1967); Agrawala (1970)).
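A minimal sketch of such a self-training wrapper follows, assuming a scikit-learn-style base classifier; the choice of LogisticRegression, the confidence threshold, and the round limit are illustrative assumptions rather than part of the original descriptions:

```python
# Self-training: train on labeled data, label the most confidently predicted
# unlabeled points with the model's own predictions, and retrain.
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_l, y_l, X_u, threshold=0.95, max_rounds=10):
    X_l, y_l, X_u = np.asarray(X_l), np.asarray(y_l), np.asarray(X_u)
    for _ in range(max_rounds):
        model = LogisticRegression().fit(X_l, y_l)       # supervised step on current labeled set
        if len(X_u) == 0:
            break
        proba = model.predict_proba(X_u)
        confident = proba.max(axis=1) >= threshold       # unlabeled points the model is sure about
        if not confident.any():
            break
        # Add the model's own predictions as additional "labeled" points and repeat.
        X_l = np.vstack([X_l, X_u[confident]])
        y_l = np.concatenate([y_l, model.classes_[proba[confident].argmax(axis=1)]])
        X_u = X_u[~confident]
    return model
```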

An unsatisfactory aspect of self-learning is that its effect depends on the supervised method used inside the wrapper. If self-learning is used with empirical risk minimization and the 0-1 loss, the unlabeled data will have no effect on the solution at all. If instead a margin-maximizing method is used, the decision boundary is pushed away from the unlabeled points as a result (cf. chapter 6). In other cases it is less clear what self-learning is really doing and which assumption it corresponds to.

Closely related to semi-supervised learning is the concept of transductive inference, or transduction, pioneered by Vapnik (Vapnik and Chervonenkis, 1974; Vapnik and Sterin, 1977). In contrast to inductive inference, no general decision rule is inferred, but only the labels of the unlabeled (or test) points are predicted. An early instance of transduction (albeit without explicitly considering it as a concept) was already proposed by Hartley and Rao (1968). They suggested a combinatorial optimization on the labels of the test points in order to maximize the likelihood of their model.

It seems that semi-supervised learning really took off in the 1970s, when the problem of estimating the Fisher linear discriminant rule with unlabeled data was considered (Hosmer, 1973; McLachlan, 1977; O’Neill, 1978; McLachlan and Ganesalingam, 1982). More precisely, the setting considered was one in which each class-conditional density is Gaussian with a common covariance matrix. The likelihood of the model is then maximized using the labeled and unlabeled data with the help of an iterative algorithm such as the expectation-maximization (EM) algorithm (Dempster et al., 1977). Instead of a mixture of Gaussians, the use of a mixture of multinomial distributions estimated with labeled and unlabeled data was investigated by Cooper and Freeman (1970).
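The mixture-model approach described above can be sketched concretely. The following is a minimal illustrative sketch, assuming two classes with Gaussian class-conditional densities that share one covariance matrix; the initialization, iteration count, and variable names are assumptions for the example, not code from the cited papers:

```python
# EM for a two-class Gaussian mixture with a shared covariance matrix, using
# both labeled and unlabeled points: labeled points keep hard class
# responsibilities, while the responsibilities of the unlabeled points are
# re-estimated in each E-step.
import numpy as np
from scipy.stats import multivariate_normal

def em_gaussian_ssl(X_l, y_l, X_u, n_iter=50):
    """X_l: (l, d) labeled points, y_l: labels in {0, 1}, X_u: (u, d) unlabeled points."""
    X = np.vstack([X_l, X_u])
    l, u, d = len(X_l), len(X_u), X.shape[1]

    # Responsibilities: hard for labeled points, uniform start for unlabeled ones.
    R = np.zeros((l + u, 2))
    R[np.arange(l), y_l] = 1.0
    R[l:] = 0.5

    for _ in range(n_iter):
        # M-step: class priors, class means, and one pooled covariance matrix.
        Nk = R.sum(axis=0)
        pi = Nk / (l + u)
        mu = (R.T @ X) / Nk[:, None]
        Sigma = np.zeros((d, d))
        for k in range(2):
            diff = X - mu[k]
            Sigma += (R[:, k, None] * diff).T @ diff
        Sigma /= (l + u)

        # E-step: update the responsibilities of the unlabeled points only.
        dens = np.column_stack([
            pi[k] * multivariate_normal.pdf(X_u, mean=mu[k], cov=Sigma)
            for k in range(2)
        ])
        R[l:] = dens / dens.sum(axis=1, keepdims=True)

    return pi, mu, Sigma, R[l:]
```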

Later, this one-component-per-class setting was extended to several components per class (Shahshahani and Landgrebe, 1994) and further generalized by Miller and Uyar (1997).

Learning rates in a probably approximately correct (PAC) framework (Valiant, 1984) have been derived for the semi-supervised learning of a mixture of two Gaussians by Ratsaby and Venkatesh (1995). In the case of an identifiable mixture, Castelli and Cover (1995) showed that with an infinite number of unlabeled points, the probability of error converges exponentially (with respect to the number of labeled examples) to the Bayes risk. Identifiable means that, given $P(x)$, the decomposition $P(x) = \sum_y P(y)P(x|y)$ is unique. This seems a relatively strong assumption, but it is satisfied, for instance, by mixtures of Gaussians. Related is the analysis in (Castelli and Cover, 1996), in which the class-conditional densities are known but the class priors are not.

Finally, the interest in semi-supervised learning increased in the 1990s, mostly due to applications in natural language problems and text classification (Yarowsky, 1995; Nigam et al., 1998; Blum and Mitchell, 1998; Collins and Singer, 1999; Joachims, 1999).

Note that, to our knowledge, Merz et al. (1992) were the first to use the term “semi-supervised” for classification with both labeled and unlabeled data. The term had in fact been used before, but in a different context from the one developed in this book; see, for instance, (Board and Pitt, 1989).

1.2 When Can Semi-Supervised Learning Work?

1.3 Classes of Algorithms and Organization of This Book


References

Schölkopf, Bernhard; Chapelle, Olivier; Zien, Alexander (2006). “Introduction to Semi-Supervised Learning.” In: Semi-Supervised Learning. MIT Press. http://mitpress.mit.edu/books/chapters/0262033585chapm1.pdf