2002 FeatureSelectForClustAFilterSolution

From GM-RKB

Subject Headings:

Notes

Cited By

Quotes

Abstract

Processing applications with a large number of dimensions has been a challenge to the KDD community. Feature selection, an effective dimensionality reduction technique, is an essential pre-processing method for removing noisy features. In the literature only a few methods have been proposed for feature selection for clustering, and almost all of them are 'wrapper' techniques that require a clustering algorithm to evaluate candidate feature subsets. The wrapper approach is largely unsuitable in real-world applications because it relies heavily on clustering algorithms that require parameters such as the number of clusters, and because suitable clustering criteria for evaluating clusterings in different subspaces are lacking. In this paper we propose a 'filter' method that is independent of any clustering algorithm. The proposed method is based on the observation that data with clusters has a very different point-to-point distance histogram than data without clusters. Using this observation, we propose an entropy measure that is low if the data has distinct clusters and high otherwise. The entropy measure is suitable for selecting the most important subset of features because it is invariant to the number of dimensions and is affected only by the quality of clustering. Extensive performance evaluation over synthetic, benchmark, and real datasets shows its effectiveness.
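The abstract only sketches the filter at a high level. The following Python fragment is a minimal illustration of a distance-based entropy criterion of this kind, assuming a similarity of the form S_ij = exp(-alpha * d_ij) between points and a greedy forward search over features; the function names, the alpha parameter, and the search strategy are illustrative assumptions, not the paper's exact formulation.

    import numpy as np

    def distance_entropy(X, alpha=0.5, eps=1e-12):
        # Entropy of the pairwise-similarity distribution for X (n_samples x n_features).
        # Assumed form: E = -sum_{i<j} [S_ij log2 S_ij + (1 - S_ij) log2 (1 - S_ij)],
        # with S_ij = exp(-alpha * d_ij). Lower entropy suggests more distinct
        # cluster structure in the chosen feature subspace.
        diffs = X[:, None, :] - X[None, :, :]
        d = np.sqrt((diffs ** 2).sum(axis=-1))          # pairwise Euclidean distances
        iu = np.triu_indices(X.shape[0], k=1)           # each pair counted once
        s = np.clip(np.exp(-alpha * d[iu]), eps, 1 - eps)
        return -np.sum(s * np.log2(s) + (1 - s) * np.log2(1 - s))

    def forward_select(X, k):
        # Greedy forward search (an assumed search strategy): at each step add the
        # feature whose inclusion keeps the entropy lowest, until k features are chosen.
        remaining, chosen = list(range(X.shape[1])), []
        while len(chosen) < k and remaining:
            best = min(remaining, key=lambda f: distance_entropy(X[:, chosen + [f]]))
            chosen.append(best)
            remaining.remove(best)
        return chosen

Because the criterion is a filter, no clustering algorithm or cluster count is needed to score a candidate subspace; only the pairwise distances in that subspace are used.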

References



Huan Liu, Manoranjan Dash, Kiseok Choi, and Peter Scheuermann. (2002). "Feature Selection for Clustering - A Filter Solution." In: Proceedings of the Second IEEE International Conference on Data Mining (ICDM 2002). DOI: 10.1109/ICDM.2002.1183893. URL: http://www.eecs.northwestern.edu/~peters/references/feature selection for clustering.ps