2002 ActiveSemiSupervisedLearningRob


Subject Headings: Multi-View Learning Algorithm

Notes

Cited By

Quotes

Abstract

In a multi-view problem, the features of the domain can be partitioned into disjoint subsets (views) that are sufficient to learn the target concept. Semi-supervised, multi-view algorithms, which reduce the amount of labeled data required for learning, rely on the assumptions that the views are compatible and uncorrelated (i.e., every example is identically labeled by the target concepts in each view; and, given the label of any example, its descriptions in each view are independent). As these assumptions are unlikely to hold in practice, it is crucial to understand the behavior of multi-view algorithms on problems with incompatible, correlated views. We address this issue by studying several algorithms on a parameterized family of text classification problems in which we control both view correlation and incompatibility. We first show that existing semi-supervised algorithms are not robust over the whole spectrum of parameterized problems. Then we introduce a new multi-view algorithm, Co-EMT, which combines semi-supervised and active learning. Co-EMT outperforms the other algorithms both on the parameterized problems and on two additional real-world domains. Our experiments suggest that Co-EMT's robustness comes from active learning compensating for the correlation of the views.
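The sketch below illustrates the active-learning idea the abstract attributes to Co-EMT: train one learner per view and query labels for "contention points", i.e., unlabeled examples on which the view-specific learners disagree. It is a minimal sketch only; the synthetic two-view data, the GaussianNB learners, the query budget, and the product-rule combination of views are illustrative assumptions, not the paper's exact Co-EMT procedure, which combines semi-supervised and active multi-view learning.

import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)

# Synthetic two-view data (hypothetical stand-in for the paper's text domains):
# each view is a noisy projection of the same underlying binary concept.
n = 600
y = rng.integers(0, 2, size=n)
view_a = y[:, None] + rng.normal(scale=1.0, size=(n, 2))   # view 1 features
view_b = y[:, None] + rng.normal(scale=1.5, size=(n, 2))   # view 2 features (noisier)

# Seed the labeled set with a few examples from each class; the rest form the query pool.
labeled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
unlabeled = [i for i in range(n) if i not in labeled]

clf_a, clf_b = GaussianNB(), GaussianNB()

for _ in range(20):                        # active-learning rounds (illustrative budget)
    # 1. Train one classifier per view on the current labeled set.
    clf_a.fit(view_a[labeled], y[labeled])
    clf_b.fit(view_b[labeled], y[labeled])

    # 2. Find contention points: unlabeled examples where the two views disagree.
    pa = clf_a.predict(view_a[unlabeled])
    pb = clf_b.predict(view_b[unlabeled])
    contention = [u for u, a, b in zip(unlabeled, pa, pb) if a != b]
    if not contention:
        break

    # 3. Query the true label of one contention point (here we simply reveal y),
    #    add it to the labeled set, and repeat.
    q = contention[0]
    labeled.append(q)
    unlabeled.remove(q)

# Combine the two views' class probabilities (simple product rule) and evaluate
# on the examples that were never queried.
proba = clf_a.predict_proba(view_a[unlabeled]) * clf_b.predict_proba(view_b[unlabeled])
acc = (proba.argmax(axis=1) == y[unlabeled]).mean()
print(f"accuracy on remaining pool after {len(labeled)} labels: {acc:.2f}")

In this toy setting, spending the label budget on contention points tends to correct exactly the examples on which correlated or incompatible views mislead each other, which is the intuition the abstract gives for Co-EMT's robustness.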

References


Ion Muslea, Steven Minton, and Craig A. Knoblock. (2002). "Active + Semi-supervised Learning = Robust Multi-view Learning."