- AKA: 20NG, 20 Newsgroups Collection.
- See: Text Corpus, Text Dataset, Text Classification, Natural Language Processing.
- (Hu et al., 2009) ⇒ Xiaohua Hu, Xiaodan Zhang, Caimei Lu, E. K. Park, and Xiaohua Zhou. (2009). “Exploiting Wikipedia as External Knowledge for Document Clustering.” In: Proceedings of ACM SIGKDD Conference (KDD-2009). doi:10.1145/1557019.1557066
- (Chen et al., 2009) ⇒ Bo Chen, Wai Lam, Ivor Tsang, and Tak-Lam Wong. (2009). “Extracting Discriminative Concepts for Domain Adaptation in Text Mining.” In: Proceedings of ACM SIGKDD Conference (KDD-2009). doi:10.1145/1557019.1557045
- QUOTE: We use the 20-Newsgroup corpus to conduct experiments on document classification. This corpus consists of 18,846 newsgroup articles harvested from 20 different Usenet newsgroups. It can be observed that the marginal distributions of the articles among different newsgroups are not identical. There exists distribution shift from one newsgroup to any other newsgroups. However, we observe that some newsgroups are related. For example, the newsgroups rec.autos and rec.motorcycles are related to car. The newsgroups comp.sys.mac.hardware and comp.sys.ibm.pc.hardware are related to hardware, etc. …
- (20Newsgroups, 1997) ⇒ http://people.csail.mit.edu/jrennie/20Newsgroups/
- The 20 Newsgroups data set is a collection of approximately 20,000 newsgroup documents, partitioned (nearly) evenly across 20 different newsgroups. To the best of my knowledge, it was originally collected by Ken Lang, probably for his Newsweeder: Learning to filter netnews paper, though he does not explicitly mention this collection. The 20 newsgroups collection has become a popular data set for experiments in text applications of machine learning techniques, such as text classification and text clustering.
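The kind of text-classification experiment the corpus is typically used for can be sketched with a tiny stand-in corpus and a bag-of-words multinomial Naive Bayes classifier written in plain Python. The sample documents and category pair below are hypothetical illustrations (borrowing two real newsgroup labels); in practice one would load the actual corpus, e.g. via scikit-learn's `fetch_20newsgroups` loader, rather than these toy strings.

```python
from collections import Counter, defaultdict
import math

# Hypothetical stand-in documents for two of the 20 newsgroups; the real
# experiment would use the ~20,000 articles from the 20 Newsgroups corpus.
train_docs = [
    ("rec.autos", "new car engine and tires for the sedan"),
    ("rec.autos", "the car dealer offered a warranty on the engine"),
    ("comp.sys.mac.hardware", "mac hardware upgrade with more ram and disk"),
    ("comp.sys.mac.hardware", "the mac motherboard and disk controller failed"),
]

def train_nb(docs):
    """Collect per-class document counts, word counts, and the vocabulary."""
    class_docs = defaultdict(int)
    word_counts = defaultdict(Counter)
    vocab = set()
    for label, text in docs:
        class_docs[label] += 1
        for w in text.split():
            word_counts[label][w] += 1
            vocab.add(w)
    return class_docs, word_counts, vocab, len(docs)

def predict(model, text):
    """Pick the class maximizing log prior + smoothed log word likelihoods."""
    class_docs, word_counts, vocab, n_docs = model
    best, best_lp = None, float("-inf")
    for label in class_docs:
        lp = math.log(class_docs[label] / n_docs)  # class prior
        total = sum(word_counts[label].values())
        for w in text.split():
            if w not in vocab:
                continue  # skip words never seen in training
            # add-one (Laplace) smoothing over the shared vocabulary
            lp += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

model = train_nb(train_docs)
print(predict(model, "engine trouble in my car"))    # rec.autos
print(predict(model, "replacing the disk in a mac")) # comp.sys.mac.hardware
```

This mirrors the classic setup in which the related-category structure noted above (e.g. rec.autos vs. comp.sys.mac.hardware) makes some class pairs easy and others, such as the two hardware groups, harder to separate.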