2007 GenerativeOrDiscriminativeGetti

Subject Headings: Generative Statistical Metamodel, Discriminative Statistical Metamodel.

Notes

Cited By

Quotes

Author Keywords

Generative, discriminative, Bayesian inference, semi-supervised, unlabelled data, machine learning.

Abstract

For many applications of machine learning the goal is to predict the value of a vector [math]\displaystyle{ \bf{c} }[/math] given the value of a vector [math]\displaystyle{ \bf{x} }[/math] of input features. In a classification problem [math]\displaystyle{ \bf{c} }[/math] represents a discrete class label, whereas in a regression problem it corresponds to one or more continuous variables. From a probabilistic perspective, the goal is to find the conditional distribution [math]\displaystyle{ p(\bf{c}|\bf{x}) }[/math]. The most common approach to this problem is to represent the conditional distribution using a parametric model, and then to determine the parameters using a training set consisting of pairs [math]\displaystyle{ \{\bf{x}_n, \bf{c}_n\} }[/math] of input vectors along with their corresponding target output vectors. The resulting conditional distribution can be used to make predictions of [math]\displaystyle{ \bf{c} }[/math] for new values of [math]\displaystyle{ \bf{x} }[/math]. This is known as a discriminative approach, since the conditional distribution discriminates directly between the different values of [math]\displaystyle{ \bf{c} }[/math].
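
As a concrete illustration of the discriminative approach, the following sketch represents [math]\displaystyle{ p(\bf{c}|\bf{x}) }[/math] with a logistic regression model and fits its parameters on labelled pairs; the use of scikit-learn and the toy two-class data are illustrative assumptions, not taken from the paper.

# A minimal sketch of the discriminative approach: represent p(c|x) with a
# parametric model (here logistic regression) and determine its parameters
# from labelled (x_n, c_n) pairs. scikit-learn and the toy two-class data
# are illustrative assumptions, not part of the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.0, 1.0, (50, 2)),   # class 0 inputs
               rng.normal(+1.0, 1.0, (50, 2))])  # class 1 inputs
c = np.array([0] * 50 + [1] * 50)                # class labels

model = LogisticRegression().fit(X, c)           # determine the parameters
print(model.predict_proba([[0.5, 0.5]]))         # p(c|x) for a new x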

An alternative approach is to find the joint distribution [math]\displaystyle{ p(\bf{x}, \bf{c}) }[/math], expressed for instance as a parametric model, and then subsequently to use this joint distribution to evaluate the conditional [math]\displaystyle{ p(\bf{c}|\bf{x}) }[/math] in order to make predictions of [math]\displaystyle{ \bf{c} }[/math] for new values of [math]\displaystyle{ \bf{x} }[/math]. This is known as a generative approach, since by sampling from the joint distribution it is possible to generate synthetic examples of the feature vector [math]\displaystyle{ \bf{x} }[/math]. In practice, the generalization performance of generative models is often found to be poorer than that of discriminative models, due to differences between the model and the true distribution of the data. When labelled training data is plentiful, discriminative techniques are widely used since they give excellent generalization performance. However, although collection of data is often easy, the process of labelling it can be expensive. Consequently there is increasing interest in generative methods, since these can exploit unlabelled data in addition to labelled data.
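
For contrast, a hedged sketch of the generative route: model the joint distribution as [math]\displaystyle{ p(\bf{x}, \bf{c}) = p(\bf{x}|\bf{c})\,p(\bf{c}) }[/math], here with one Gaussian class-conditional density per class (the Gaussian form is our illustrative assumption), and then evaluate the conditional by Bayes' rule.

# Sketch of the generative approach: fit the joint p(x, c) = p(x|c) p(c)
# with one Gaussian class-conditional density per class (an illustrative
# assumption), then evaluate p(c|x) by Bayes' rule for a new input.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.0, 1.0, (50, 2)),
               rng.normal(+1.0, 1.0, (50, 2))])
c = np.array([0] * 50 + [1] * 50)

priors, densities = [], []
for k in (0, 1):
    Xk = X[c == k]
    priors.append(len(Xk) / len(X))                        # p(c = k)
    densities.append(multivariate_normal(Xk.mean(0), np.cov(Xk.T)))

x_new = np.array([0.5, 0.5])
joint = np.array([d.pdf(x_new) * p                         # p(x, c = k)
                  for d, p in zip(densities, priors)])
print(joint / joint.sum())                                 # p(c|x)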

Although the generalization performance of generative models can often be improved by `training them discriminatively', they can then no longer make use of unlabelled data. In an attempt to gain the benefit of both generative and discriminative approaches, heuristic procedures have been proposed which interpolate between these two extremes by taking a convex combination of the generative and discriminative objective functions.
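
In our own notation (a sketch of one common form of such an interpolation, not a formula quoted from the paper), with model parameters [math]\displaystyle{ \theta }[/math] and a mixing coefficient [math]\displaystyle{ 0 \le \alpha \le 1 }[/math], the blended objective can be written as

[math]\displaystyle{ L(\theta) = \alpha \sum_n \ln p(\bf{c}_n|\bf{x}_n, \theta) + (1-\alpha) \sum_n \ln p(\bf{x}_n, \bf{c}_n|\theta), }[/math]

so that [math]\displaystyle{ \alpha = 1 }[/math] recovers the purely discriminative (conditional likelihood) objective and [math]\displaystyle{ \alpha = 0 }[/math] the purely generative (joint likelihood) objective.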

Here we discuss a new perspective which says that there is only one correct way to train a given model, and that a `discriminatively trained' generative model is fundamentally a new model (Minka, 2006). From this viewpoint, generative and discriminative models correspond to specific choices for the prior over parameters. As well as giving a principled interpretation of `discriminative training', this approach opens the door to very general ways of interpolating between generative and discriminative extremes through alternative choices of prior. We illustrate this framework using both synthetic data and a practical example in the domain of multi-class object recognition. Our results show that, when the supply of labelled training data is limited, the optimum performance corresponds to a balance between the purely generative and the purely discriminative. We conclude by discussing how to use a Bayesian approach to find automatically the appropriate trade-off between the generative and discriminative extremes.
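
A compressed sketch of this construction, in our own notation rather than the paper's: duplicate the parameters into a conditional part [math]\displaystyle{ \theta }[/math] and an input-distribution part [math]\displaystyle{ \tilde{\theta} }[/math], define

[math]\displaystyle{ p(\bf{x}, \bf{c}|\theta, \tilde{\theta}) = p(\bf{c}|\bf{x}, \theta)\, p(\bf{x}|\tilde{\theta}), }[/math]

and place a prior [math]\displaystyle{ p(\theta, \tilde{\theta}) }[/math] over the pair. A prior that forces [math]\displaystyle{ \tilde{\theta} = \theta }[/math] recovers the standard generative model, a fully factorized prior [math]\displaystyle{ p(\theta)\,p(\tilde{\theta}) }[/math] yields purely discriminative training, and priors of intermediate coupling strength interpolate between the two extremes.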

1. Introduction

In many applications of machine learning the goal is to take a vector [math]\displaystyle{ \bf{x} }[/math] of input features and to assign it to one of a number of alternative classes labelled by a vector [math]\displaystyle{ \bf{c} }[/math] (for instance, if we have C classes, then [math]\displaystyle{ \bf{c} }[/math] might be a C-dimensional binary vector in which all elements are zero except the one corresponding to the class).
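
The 1-of-C encoding mentioned above can be written directly; the snippet below is an illustrative sketch.

# Illustrative sketch of the 1-of-C class encoding described above: for C
# classes, class k maps to a C-dimensional binary vector that is zero
# everywhere except in the position corresponding to the class.
import numpy as np

def one_hot(k, C):
    c = np.zeros(C, dtype=int)
    c[k] = 1
    return c

print(one_hot(2, 5))   # -> [0 0 1 0 0]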

References

Christopher M. Bishop, and Julia Lasserre. (2007). "Generative Or Discriminative? Getting the Best of Both Worlds."