2004 SemiSupLearnUsingRandMincuts

From GM-RKB

Subject Headings: Semi-Supervised Algorithm

Notes

  • Considers a randomized version of the mincut approach to learning from labeled and unlabeled data (Blum & Chawla, 2001).
  • Investigates the algorithm's sample complexity and its ability to approximate the per-node probabilities of the underlying Markov Random Field (the formulation is sketched below).
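
The Markov Random Field reading of the mincut objective can be made explicit. A minimal sketch in the standard binary form (the inverse temperature β and the edge weights w_ij are assumptions of this sketch, fixed by whatever graph construction is used):

```latex
% MRF over binary labelings y of the graph G = (V, E); disagreeing endpoints
% of a heavy edge are penalized, so probable labelings cut little edge weight.
P(y) \;\propto\; \exp\Big(-\beta \sum_{(i,j) \in E} w_{ij}\,\mathbf{1}[y_i \neq y_j]\Big),
\qquad y \in \{-1, +1\}^{V}.
```

Under this field, the most probable labeling that agrees with the labeled examples is exactly a minimum cut separating the positively from the negatively labeled nodes, and the per-node probabilities mentioned above are the marginals P(y_i = +1).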

Cited By

Quotes

Abstract

In many application domains there is a large amount of unlabeled data but only a very limited amount of labeled training data. One general approach that has been explored for utilizing this unlabeled data is to construct a graph on all the data points based on distance relationships among examples, and then to use the known labels to perform some type of graph partitioning. One natural partitioning to use is the minimum cut that agrees with the labeled data (Blum & Chawla, 2001), which can be thought of as giving the most probable label assignment if one views labels as generated according to a Markov Random Field on the graph. Zhu et al. (2003) propose a cut based on a relaxation of this field, and Joachims (2003) gives an algorithm based on finding an approximate min-ratio cut.

In this paper, we extend the mincut approach by adding randomness to the graph structure. The resulting algorithm addresses several shortcomings of the basic mincut approach, and can be given theoretical justification from both a Markov random field perspective and from sample complexity considerations. In cases where the graph does not have small cuts for a given classification problem, randomization may not help. However, our experiments on several datasets show that when the structure of the graph supports small cuts, this can result in highly accurate classifiers with good accuracy/coverage tradeoffs. In addition, we are able to achieve good performance with a very simple graph-construction procedure.
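
A minimal sketch of this procedure in Python, assuming a NetworkX-style mincut API. The kNN graph construction, the Gaussian capacity perturbation, and all parameter names here are illustrative choices, not the authors' exact scheme (the paper randomizes the graph structure itself):

```python
import networkx as nx
import numpy as np

def randomized_mincut(X, labels, k=5, rounds=20, noise=0.5, seed=None):
    """Estimate P(y_i = 1) for every point by voting over randomized mincuts.

    X: (n, d) array of points; labels: dict {index: 0 or 1} of labeled points.
    """
    rng = np.random.default_rng(seed)
    n = len(X)

    # Simple graph construction: connect each point to its k nearest neighbors.
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    base = nx.Graph()
    base.add_nodes_from(range(n))
    for i in range(n):
        for j in np.argsort(dist[i])[1:k + 1]:  # skip i itself (distance 0)
            base.add_edge(i, int(j), capacity=1.0)

    votes = np.zeros(n)
    for _ in range(rounds):
        G = base.copy()
        # Randomize the graph by perturbing edge capacities (an illustrative
        # randomization; other schemes, e.g. random edge deletion, also fit).
        for u, v in G.edges():
            G[u][v]["capacity"] = max(1e-6, 1.0 + noise * rng.standard_normal())
        # Tie labeled nodes to the terminals with infinite capacity so that
        # every minimum cut agrees with the labeled data.
        G.add_nodes_from(["s", "t"])
        for i, y in labels.items():
            G.add_edge("s" if y == 1 else "t", i, capacity=float("inf"))
        _, (s_side, _) = nx.minimum_cut(G, "s", "t")
        for i in s_side - {"s"}:
            votes[i] += 1

    return votes / rounds  # fraction of cuts labeling each node positive
```

The vote fractions directly support the accuracy/coverage tradeoff the abstract mentions: nodes whose fraction is near 0 or 1 can be classified with high confidence, while nodes near 1/2 can be left uncovered.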

References

  • 1 Gyora M. Benedek, Alon Itai, Learnability with respect to fixed distributions, Theoretical Computer Science, v.86 n.2, p.377-389, Sept. 2, 1991 doi:10.1016/0304-3975(91)90026-X
  • 2 Avrim Blum, Shuchi Chawla, Learning from Labeled and Unlabeled Data using Graph Mincuts, Proceedings of the Eighteenth International Conference on Machine Learning, p.19-26, June 28-July 01, 2001
  • 3 Jason I. Brown, Carl Hickman, Alan D. Sokal, David G. Wagner, On the chromatic roots of generalized theta graphs, Journal of Combinatorial Theory Series B, v.83 n.2, p.272-297, November 2001 doi:10.1006/jctb.2001.2057
  • 4 Martin E. Dyer, Leslie Ann Goldberg, Catherine S. Greenhill, Mark Jerrum, On the relative complexity of approximate counting problems, Proceedings of the Third International Workshop on Approximation Algorithms for Combinatorial Optimization, p.108-119, September 05-08, 2000
  • 5 Yoav Freund, Yishay Mansour, & Robert E. Schapire (2003). Generalization bounds for averaged classifiers (how to be a Bayesian without believing). To appear in Annals of Statistics. Preliminary version appeared in Proceedings of the 8th International Workshop on Artificial Intelligence and Statistics (2001).
  • 6 D. Greig, B. Porteous, & A. Seheult (1989). Exact maximum a posteriori estimation for binary images. Journal of the Royal Statistical Society, Series B, 51, 271-279.
  • 7 J. J. Hull, A Database for Handwritten Text Recognition Research, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.16 n.5, p.550-554, May 1994 doi:10.1109/34.291440
  • 8 Mark Jerrum, Alistair Sinclair, Polynomial-time approximation algorithms for the Ising model, SIAM Journal on Computing, v.22 n.5, p.1087-1116, Oct. 1993 doi:10.1137/0222066
  • 9 Thorsten Joachims (2003). Transductive learning via spectral graph partitioning. Proceedings of the International Conference on Machine Learning (ICML) (pp. 290--297).
  • 10 J. Kleinberg, Detecting a network failure, Proceedings of the 41st Annual Symposium on Foundations of Computer Science, p.231, November 12-14, 2000
  • 11 Jon Kleinberg, Mark Sandler, Aleksandrs Slivkins, Network failure detection and graph connectivity, Proceedings of the fifteenth annual ACM-SIAM symposium on Discrete algorithms, January 11-14, 2004, New Orleans, Louisiana
  • 12 John Langford & John Shawe-Taylor (2002). PAC-Bayes and margins. Neural Information Processing Systems.
  • 13 David A. McAllester, PAC-Bayesian Stochastic Model Selection, Machine Learning, v.51 n.1, p.5-21, April 2003 doi:10.1023/A:1021840411064
  • 14 Xiaojin Zhu, Zoubin Ghahramani, & John D. Lafferty (2003). Semi-supervised learning using Gaussian fields and harmonic functions. Proceedings of the 20th International Conference on Machine Learning (pp. 912--919).


Citation Metadata

  • Author: Avrim Blum, John D. Lafferty, Mugizi Robert Rwebangira, Rajashekar Reddy
  • Title: Semi-Supervised Learning Using Randomized Mincuts
  • Venue: Proceedings of the 21st International Conference on Machine Learning (ICML 2004)
  • URL: http://www.cs.cmu.edu/~avrim/Papers/rmincut.ps
  • DOI: 10.1145/1015330.1015429
  • Year: 2004