2006 An Introduction to ROC Analysis


Subject Headings: ROC Graph; ROC Analysis; Classifier Evaluation; Evaluation Metric.

Notes

Cited By

Quotes

Abstract

Receiver operating characteristics (ROC) graphs are useful for organizing classifiers and visualizing their performance. ROC graphs are commonly used in medical decision making, and in recent years have been used increasingly in machine learning and data mining research. Although ROC graphs are apparently simple, there are some common misconceptions and pitfalls when using them in practice. The purpose of this article is to serve as an introduction to ROC graphs and as a guide for using them in research.
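The construction the abstract alludes to can be made concrete with a short sketch: sweep a decision threshold over a classifier's scores and record one (false positive rate, true positive rate) point per threshold, then integrate with the trapezoidal rule to get the area under the curve. This is a minimal illustrative sketch, not the article's own pseudocode; the `scores` and `labels` values are invented for the example, and tied scores are not merged into a single point.

```python
# Minimal sketch of ROC-curve construction from classifier scores.
# Illustrative only: invented data, and tied scores are not merged.

def roc_points(scores, labels):
    """Return (FPR, TPR) points by sweeping a threshold over the scores."""
    # Sort instances by decreasing score; each prefix of this ordering
    # corresponds to one threshold setting.
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    pos = sum(labels)
    neg = len(labels) - pos
    tp = fp = 0
    points = [(0.0, 0.0)]
    for i in order:
        if labels[i]:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg, tp / pos))
    return points

def auc(points):
    """Area under the ROC curve via the trapezoidal rule."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        area += (x1 - x0) * (y0 + y1) / 2.0
    return area

scores = [0.9, 0.8, 0.7, 0.55, 0.5, 0.4, 0.3, 0.2]
labels = [1, 1, 0, 1, 0, 1, 0, 0]
pts = roc_points(scores, labels)
print(auc(pts))  # 0.8125 for this toy data
```

A curve hugging the upper-left corner (AUC near 1.0) dominates one near the diagonal (AUC near 0.5, i.e. random guessing), which is what makes the graph useful for organizing and comparing classifiers.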


References

  • Fawcett, T., 2001. Using rule sets to maximize ROC performance. In: Proceedings IEEE Internat. Conf. on Data Mining (ICDM-2001), pp. 131–138.
  • Fawcett, T., Provost, F., 1996. Combining data mining and machine learning for effective user profiling. In: Simoudis, E., Han, J., Fayyad, U. (Eds.), Proceedings of Second Internat. Conf. on Knowledge Discovery and Data Mining. AAAI Press, Menlo Park, CA, pp. 8–13.
  • Fawcett, T., Provost, F., 1997. Adaptive fraud detection. Data Mining and Knowledge Discovery 1 (3), 291–316.
  • Flach, P., Wu, S., 2003. Repairing concavities in ROC curves. In: Proceedings 2003 UK Workshop on Computational Intelligence. University of Bristol, pp. 38–44.
  • Forman, G., 2002. A method for discovering the insignificance of one's best classifier and the unlearnability of a classification task. In: Lavrac, N., Motoda, H., Fawcett, T. (Eds.), Proceedings of First Internat. Workshop on Data Mining Lessons Learned (DMLL-2002). Available from: http://www.purl.org/NET/tfawcett/DMLL-2002/Forman.pdf.
  • Hand, D.J., Till, R.J., 2001. A simple generalization of the area under the ROC curve to multiple class classification problems. Mach. Learning 45 (2), 171–186.
  • Hanley, J.A., McNeil, B.J., 1982. The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology 143, 29–36.
  • Holte, R., 2002. Personal communication.
  • Kubat, M., Holte, R.C., Matwin, S., 1998. Machine learning for the detection of oil spills in satellite radar images. Machine Learning 30 (2–3), 195–215.
  • Lane, T., 2000. Extensions of ROC analysis to multi-class domains. In: Dietterich, T., Margineantu, D., Provost, F., Turney, P. (Eds.), ICML-2000 Workshop on Cost-Sensitive Learning.
  • Lewis, D., 1990. Representation quality in text classification: An introduction and experiment. In: Proceedings Workshop on Speech and Natural Language. Morgan Kaufmann, Hidden Valley, PA, pp. 288–295.
  • Lewis, D., 1991. Evaluating text categorization. In: Proceedings Speech and Natural Language Workshop. Morgan Kaufmann, pp. 312–318.
  • Macskassy, S., Provost, F., 2004. Confidence bands for ROC curves: Methods and an empirical study. In: Proceedings First Workshop on ROC Analysis in AI (ROCAI-04).
  • Provost, F., Domingos, P., 2001. Well-trained PETs: Improving probability estimation trees, CeDER Working Paper #IS-00-04, Stern School of Business, New York University, NY, NY 10012.
  • Provost, F., Fawcett, T., 1997. Analysis and visualization of classifier performance: Comparison under imprecise class and cost distributions. In: Proceedings Third Internat. Conf. on Knowledge Discovery and Data Mining (KDD-97). AAAI Press, Menlo Park, CA, pp. 43–48.
  • Provost, F., Fawcett, T., 1998. Robust classification systems for imprecise environments. In: Proceedings AAAI-98. AAAI Press, Menlo Park, CA, pp. 706–713. Available from: <http://www.purl.org/NET/tfawcett/papers/aaai98-dist.ps.gz>.
  • Provost, F., Fawcett, T., 2001. Robust classification for imprecise environments. Mach. Learning 42 (3), 203–231.
  • Provost, F., Fawcett, T., Kohavi, R., 1998. The case against accuracy estimation for comparing induction algorithms. In: Shavlik, J. (Ed.), Proceedings of ICML-98. Morgan Kaufmann, San Francisco, CA, pp. 445–453. Available from: <http://www.purl.org/NET/tfawcett/papers/ICML98-final.ps.gz>.
  • Saitta, L., Neri, F., 1998. Learning in the ‘‘real world’’. Mach. Learning 30, 133–163.
  • Spackman, K.A., 1989. Signal detection theory: Valuable tools for evaluating inductive learning. In: Proceedings Sixth Internat. Workshop on Machine Learning. Morgan Kaufmann, San Mateo, CA, pp. 160–163.
  • Srinivasan, A., 1999. Note on the location of optimal classifiers in n-dimensional ROC space. Technical Report PRG-TR-2-99, Oxford University Computing Laboratory, Oxford, England. Available from: <http://citeseer.nj.nec.com/srinivasan99note.html>.
  • Swets, J., 1988. Measuring the accuracy of diagnostic systems. Science 240, 1285–1293.
  [Figure residue from the source PDF: Fig. 10. Interpolating classifiers. ROC plot of classifiers A, B, C with constraint line TPr * 240 + FPr * 3760 = 800.]
  • Swets, J.A., Dawes, R.M., Monahan, J., 2000. Better decisions through science. Scientific American 283, 82–87.
  • van der Putten, P., van Someren, M., 2000. CoIL challenge 2000: The insurance company case. Technical Report 2000–09, Leiden Institute of Advanced Computer Science, Universiteit van Leiden. Available from: <http://www.liacs.nl/putten/library/cc2000>.
  • Zadrozny, B., Elkan, C., 2001. Obtaining calibrated probability estimates from decision trees and naive Bayesian classifiers. In: Proceedings Eighteenth Internat. Conf. on Machine Learning, pp. 609–616.
  • Zou, K.H., 2002. Receiver operating characteristic (ROC) literature research. On-line bibliography available from: <http://splweb.bwh.harvard.edu:8000/pages/ppl/zou/roc.html>.


Author: Tom Fawcett
Title: An Introduction to ROC Analysis
Journal: Pattern Recognition Letters, 27 (2006), 861–874
URL: http://ccrma.stanford.edu/workshops/mir2009/references/ROCintro.pdf
Year: 2006