Neural Auto-Encoding Network


A Neural Auto-Encoding Network is an encoding/decoding neural network whose input and output are from the same space.
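As a minimal, hypothetical sketch of this definition (assuming PyTorch; the 784-dimensional input and 32-dimensional code sizes are illustrative, e.g. for flattened 28×28 images), the encoder and decoder are simply composed so that the output lives in the same space as the input:

    import torch
    import torch.nn as nn

    class Autoencoder(nn.Module):
        """Maps an input back onto its own space through a low-dimensional code."""
        def __init__(self, input_dim=784, code_dim=32):
            super().__init__()
            # Encoder: input space -> short code
            self.encoder = nn.Sequential(
                nn.Linear(input_dim, 128), nn.ReLU(),
                nn.Linear(128, code_dim),
            )
            # Decoder: short code -> back to the input space
            self.decoder = nn.Sequential(
                nn.Linear(code_dim, 128), nn.ReLU(),
                nn.Linear(128, input_dim), nn.Sigmoid(),
            )

        def forward(self, x):
            # Output has the same dimensionality as the input
            return self.decoder(self.encoder(x))

Because the decoder ends in input_dim units, the reconstruction can be compared directly against the original input.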



References

2018a

  • (Wikipedia, 2018) ⇒ https://en.wikipedia.org/wiki/Autoencoder#Purpose Retrieved:2018-11-5.
    • QUOTE: An autoencoder learns to compress data from the input layer into a short code, and then uncompress that code into something that closely matches the original data. This forces the autoencoder to engage in dimensionality reduction, for example by learning how to ignore noise. Some architectures use stacked sparse autoencoder layers for image recognition. The first autoencoder might learn to encode easy features like corners, the second to analyze the first layer's output and then encode less local features like the tip of a nose, the third might encode a whole nose, etc., until the final autoencoder encodes the whole image into a code that matches (for example) the concept of "cat". An alternative use is as a generative model: for example, if a system is manually fed the codes it has learned for "cat" and "flying", it may attempt to generate an image of a flying cat, even if it has never seen a flying cat before.
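The compress-then-uncompress behavior described in the quote is usually obtained by minimizing a reconstruction loss; the bottleneck code then has to keep only the information needed to rebuild the input, which is what forces dimensionality reduction. A hedged sketch of such a training loop, reusing the illustrative Autoencoder class above (the random batch is a stand-in for real data):

    import torch
    import torch.nn as nn

    model = Autoencoder()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    x = torch.rand(64, 784)  # stand-in batch; real data would go here
    for step in range(100):
        reconstruction = model(x)          # compress to the code, then uncompress
        loss = loss_fn(reconstruction, x)  # the target is the input itself
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()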

2018b

  • (Quora, 2018) ⇒ https://qr.ae/TUhH8k
    • QUOTE: An autoencoder (or auto-associator, as it was classically known) is a special case of an encoder-decoder architecture: first, the target space is the same as the input space (i.e., English inputs to English targets), and second, the target is equal to the input. So we would be mapping something like vectors to vectors (note that this could still be a sequence, as there are recurrent autoencoders, but in this case you are not predicting the future, simply reconstructing the present given a state/memory and the present). Now, an autoencoder is really meant to do auto-association, so we are essentially trying to build a model to “recall” the input, which allows the autoencoder to do things like pattern completion; if we give our autoencoder a partially corrupted input, it will be able to “retrieve” the correct pattern from memory.
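The pattern-completion behavior described in the quote corresponds to denoising-autoencoder training: the network sees a corrupted input but is scored against the clean original, so it learns to “retrieve” the intact pattern. A sketch under the same illustrative setup as above (the denoising_step name and the 0.3 corruption rate are hypothetical choices):

    import torch

    def denoising_step(model, optimizer, loss_fn, x, corruption=0.3):
        mask = (torch.rand_like(x) > corruption).float()  # zero out ~30% of entries
        x_corrupted = x * mask
        reconstruction = model(x_corrupted)  # the model only sees the damaged input
        loss = loss_fn(reconstruction, x)    # ...but is scored on the clean one
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()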

2015

  • (Boesen, Larsen & Sonderby, 2015) ⇒ Boesen A., Larsen L., and Sonderby S.K. (2015). Generating Faces with Torch.
