# Matrix Dimensionality Compression Algorithm

A Matrix Dimensionality Compression Algorithm is a dataset dimensionality compression algorithm/matrix compression algorithm that can be applied by a matrix dimensionality compression system (to solve a matrix dimensionality compression task).

**Context:**
- It can range from being a Linear Dimensionality Compression Algorithm to being a Non-Linear Dimensionality Compression Algorithm.
- It can range from (typically) being a Lossy Dimensionality Reduction Algorithm to being a Non-Lossy Dimensionality Reduction Algorithm.
- It can be applied by a Feature Space Compression System (to solve a Feature Space Compression Task).

**Example(s):**
- a Linear Dimensionality Reduction Algorithm, such as a Principal Components Analysis (PCA) Algorithm.
- a Nonlinear Dimensionality Reduction Algorithm, such as an Autoencoder Network Training Algorithm.
- any Feature Selection Algorithm, where the transformation is a simple selection of existing columns.
- …

**Counter-Example(s):**

**See:** Transformation Algorithm, Feature Extraction Algorithm, Derived Feature.

## References

### 2014

- (Sánchez et al., 2014) ⇒ Carlos Oscar Sánchez Sorzano, Javier Vargas, and A. Pascual Montano. (2014). “A Survey of Dimensionality Reduction Techniques.” arXiv:1403.2877

### 2009

- http://en.wikipedia.org/wiki/Feature_extraction
- QUOTE: In pattern recognition and in image processing, feature extraction is a special form of dimensionality reduction.
- When the input data to an algorithm is too large to be processed and is suspected to be notoriously redundant (much data, but not much information), then the input data will be transformed into a reduced representation set of features (also named a feature vector). Transforming the input data into the set of features is called feature extraction. If the features extracted are carefully chosen, it is expected that the feature set will extract the relevant information from the input data in order to perform the desired task using this reduced representation instead of the full-size input.
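The quoted idea, replacing a large, redundant input with a much smaller feature vector through a transformation, can be sketched with a random linear projection. This is only an illustrative (and lossy) choice of transformation, not the method the quote describes; the dimensions and names below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 1000))  # 100 samples with 1000 raw input features

def random_projection(X, k, rng):
    """Map d-dimensional inputs to k-dimensional feature vectors
    via a fixed random linear transformation."""
    d = X.shape[1]
    R = rng.normal(scale=1.0 / np.sqrt(k), size=(d, k))  # random linear map
    return X @ R

features = random_projection(X, k=50, rng=rng)
print(features.shape)  # (100, 50) -- the reduced representation
```

Downstream tasks then operate on the 50-dimensional feature vectors instead of the 1000-dimensional raw inputs, trading some information loss for a much smaller representation.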

### 2008

- (Blitzer, 2008) ⇒ John Blitzer. (2008). “A Survey of Dimensionality Reduction Techniques for Natural Language."

### 2006

- (Hinton & Salakhutdinov, 2006) ⇒ Geoffrey E. Hinton, and Ruslan R. Salakhutdinov. (2006). “Reducing the Dimensionality of Data with Neural Networks.” In: Science, 313(5786). doi:10.1126/science.1127647
- QUOTE: Dimensionality reduction facilitates the classification, visualization, communication, and storage of high-dimensional data. A simple and widely used method is principal components analysis (PCA), which finds the directions of greatest variance in the data set and represents each data point by its coordinates along each of these directions. We describe a nonlinear generalization of PCA that uses an adaptive, multilayer "encoder" network ...
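The PCA step described in the quote, finding the directions of greatest variance and representing each point by its coordinates along them, can be sketched in NumPy via the SVD of the centered data matrix. The synthetic data and function name are assumptions for illustration; this is standard PCA, not the paper's encoder network.

```python
import numpy as np

rng = np.random.default_rng(0)
# 200 points spread mostly along a single direction in 3-D, plus small noise.
X = rng.normal(size=(200, 1)) @ np.array([[3.0, 2.0, 1.0]]) \
    + 0.1 * rng.normal(size=(200, 3))

def pca_reduce(X, k):
    """Project X onto its k directions of greatest variance."""
    Xc = X - X.mean(axis=0)                         # center the data
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]                             # top-k principal directions
    return Xc @ components.T, components            # coordinates, directions

Z, components = pca_reduce(X, k=1)

# Fraction of total variance captured by the 1-D representation.
Xc = X - X.mean(axis=0)
explained = Z.var(axis=0).sum() / Xc.var(axis=0).sum()
print(round(float(explained), 3))  # close to 1 for this nearly 1-D data
```

The nonlinear generalization in the paper replaces this single linear projection with a multilayer encoder network, which can capture curved low-dimensional structure that PCA's flat subspace cannot.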