1998 AComparisonOfEventModelsForNBTC


Subject Headings: Text Classification Algorithm, Naive-Bayes Classification Algorithm, Text Classification Task.

Notes

Cited By

Quotes

Abstract

  • Recent approaches to text classification have used two different first-order probabilistic models for classification, both of which make the naive Bayes assumption. Some use a multi-variate Bernoulli model, that is, a Bayesian Network with no dependencies between words and binary word features (e.g. Larkey and Croft 1996; Koller and Sahami 1997). Others use a multinomial model, that is, a uni-gram language model with integer word counts (e.g. Lewis and Gale 1994; Mitchell 1997). This paper aims to clarify the confusion by describing the differences and details of these two models, and by empirically comparing their classification performance on five text corpora. We find that the multi-variate Bernoulli performs well with small vocabulary sizes, but that the multinomial usually performs even better at larger vocabulary sizes, providing on average a 27% reduction in error over the multi-variate Bernoulli model at any vocabulary size.
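
As a hedged reconstruction of the two document likelihoods the abstract contrasts (notation recalled from the paper rather than quoted: $B_{it} \in \{0,1\}$ indicates whether word $w_t$ occurs in document $d_i$, $N_{it}$ is its occurrence count, and $V$ is the vocabulary):

$$P(d_i \mid c_j) = \prod_{t=1}^{|V|} \big( B_{it}\, P(w_t \mid c_j) + (1 - B_{it})(1 - P(w_t \mid c_j)) \big) \qquad \text{(multi-variate Bernoulli)}$$

$$P(d_i \mid c_j) = P(|d_i|)\, |d_i|! \prod_{t=1}^{|V|} \frac{P(w_t \mid c_j)^{N_{it}}}{N_{it}!} \qquad \text{(multinomial)}$$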

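The practical difference between the two event models can also be shown with a short sketch. The snippet below is not from the paper (which predates these libraries); it assumes scikit-learn and a hypothetical toy corpus, and simply shows binary word-occurrence features feeding a Bernoulli model versus integer word counts feeding a multinomial model:

```python
# Sketch: the two naive Bayes event models on a hypothetical toy corpus.
# Assumes scikit-learn; not the paper's own implementation.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import BernoulliNB, MultinomialNB

docs = [
    "free money win prize",           # hypothetical documents
    "meeting schedule agenda notes",
    "win free prize now",
    "project meeting notes today",
]
labels = ["spam", "ham", "spam", "ham"]

# Multi-variate Bernoulli event model: binary word-occurrence vectors.
bin_vec = CountVectorizer(binary=True)
bernoulli = BernoulliNB().fit(bin_vec.fit_transform(docs), labels)

# Multinomial event model: integer word-count (uni-gram) vectors.
cnt_vec = CountVectorizer()
multinomial = MultinomialNB().fit(cnt_vec.fit_transform(docs), labels)

test = ["free prize meeting today"]
print(bernoulli.predict(bin_vec.transform(test)))    # predicted class label
print(multinomial.predict(cnt_vec.transform(test)))  # predicted class label
```

On realistic corpora, the paper's finding suggests the multinomial variant tends to win as the vocabulary grows, which is one reason count-based features became the common default for naive Bayes text classifiers.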

Kamal Nigam (1998). "A Comparison of Event Models for Naive Bayes Text Classification." http://www.cs.cmu.edu/~knigam/papers/multinomial-aaaiws98.pdf