Dataset Feature Sampling Task


A Dataset Feature Sampling Task is a dimensionality reduction task that is a sampling task: it requires selecting the features of a training set that are most informative for the learning task.
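As a rough illustration (not part of the original article), the sketch below performs filter-style feature selection with scikit-learn, scoring each feature by mutual information with the class label and keeping the most informative ones; the Iris data, the scoring function, and k=2 are illustrative assumptions.

  # Minimal sketch of filter-style feature selection, assuming scikit-learn
  # is available; the Iris data, the mutual-information score, and k=2 are
  # illustrative choices rather than part of the task definition.
  from sklearn.datasets import load_iris
  from sklearn.feature_selection import SelectKBest, mutual_info_classif

  X, y = load_iris(return_X_y=True)            # 150 samples, 4 features

  # Score each feature by its mutual information with the class label
  # and keep the k most informative ones.
  selector = SelectKBest(score_func=mutual_info_classif, k=2)
  X_reduced = selector.fit_transform(X, y)

  print("original shape:", X.shape)            # (150, 4)
  print("reduced shape: ", X_reduced.shape)    # (150, 2)
  print("kept features: ", selector.get_support(indices=True))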



References

2009

  • (Wikipedia, 2009) ⇒ http://en.wikipedia.org/wiki/Feature_selection
    • Feature selection, also known as variable selection, feature reduction, attribute selection or variable subset selection, is the technique, commonly used in machine learning, of selecting a subset of relevant features for building robust learning models. When applied in the biology domain, the technique is also called discriminative gene selection, which detects influential genes based on DNA microarray experiments. By removing most irrelevant and redundant features from the data, feature selection helps improve the performance of learning models by:
      • Alleviating the effect of the curse of dimensionality.
      • Enhancing generalization capability.
      • Speeding up learning process.
      • Improving model interpretability.
    • Feature selection also helps people acquire a better understanding of their data by showing which features are important and how they are related to each other.

2004

  • (Dy & Brodley, 2004) ⇒ J. G. Dy and C. E. Brodley. (2004). “Feature Selection for Unsupervised Learning.” In: Journal of Machine Learning Research, 5.
