Variational Auto-Encoder (VAE)
A Variational Auto-Encoder (VAE) is a neural auto-encoder that learns a probabilistic latent representation of its inputs: a stochastic encoder maps each input to a distribution over latent variables, a decoder reconstructs the input from latent samples, and training maximizes a variational lower bound (the ELBO) on the data log-likelihood, which combines a reconstruction term with a KL-divergence regularizer toward a prior.
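The two VAE-specific ingredients, the reparameterization trick and the closed-form KL regularizer toward a standard-normal prior, can be sketched as follows (a minimal numpy illustration with made-up encoder outputs, not a full training loop):

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, logvar, rng):
    # Draw z = mu + sigma * eps with eps ~ N(0, I); the noise is external,
    # so z stays differentiable with respect to mu and logvar.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def kl_to_standard_normal(mu, logvar):
    # Closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) ),
    # summed over latent dimensions for each item in the batch.
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=-1)

# Hypothetical encoder outputs for a batch of 4 inputs, 2-D latent space.
mu = np.array([[0.0, 0.0], [0.5, -0.5], [1.0, 1.0], [-2.0, 0.1]])
logvar = np.zeros_like(mu)

z = reparameterize(mu, logvar, rng)       # latent samples fed to the decoder
kl = kl_to_standard_normal(mu, logvar)    # regularizer term of the ELBO
```

The KL term is zero exactly when the encoded distribution equals the prior (mu = 0, logvar = 0) and grows as the posterior drifts away, which is the "regularizer" the WAE quote below contrasts with.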
References
2017
- (Tolstikhin et al., 2017) ⇒ Ilya Tolstikhin, Olivier Bousquet, Sylvain Gelly, and Bernhard Schoelkopf. (2017). “Wasserstein Auto-Encoders.” In: Proceedings of the 6th International Conference on Learning Representations (ICLR-2018).
- QUOTE: We propose the Wasserstein Auto-Encoder (WAE) --- a new algorithm for building a generative model of the data distribution. WAE minimizes a penalized form of the Wasserstein distance between the model distribution and the target distribution, which leads to a different regularizer than the one used by the Variational Auto-Encoder (VAE). This regularizer encourages the encoded training distribution to match the prior. ...
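The WAE regularizer matches the aggregate encoded distribution to the prior; one common instantiation (WAE-MMD) penalizes the Maximum Mean Discrepancy between encoded samples and prior samples. A minimal sketch of a biased RBF-kernel MMD estimate, with illustrative sample sets standing in for encoder outputs:

```python
import numpy as np

def rbf_kernel(x, y, bandwidth=1.0):
    # Gaussian RBF kernel matrix between two sample sets.
    d2 = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * bandwidth**2))

def mmd2(x, y, bandwidth=1.0):
    # Biased estimate of squared MMD between the distributions behind x and y;
    # near zero when the two sample sets come from the same distribution.
    return (rbf_kernel(x, x, bandwidth).mean()
            + rbf_kernel(y, y, bandwidth).mean()
            - 2.0 * rbf_kernel(x, y, bandwidth).mean())

rng = np.random.default_rng(1)
prior = rng.standard_normal((256, 2))           # samples from the prior N(0, I)
matched = rng.standard_normal((256, 2))         # encoded samples already matching the prior
shifted = rng.standard_normal((256, 2)) + 3.0   # encoded samples far from the prior
```

Here `mmd2(matched, prior)` is small while `mmd2(shifted, prior)` is large, so adding this penalty to the reconstruction loss pushes the encoded training distribution toward the prior, as the quote describes.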
2016
- (Kipf & Welling, 2016) ⇒ Thomas N. Kipf, and Max Welling. (2016). “Variational Graph Auto-Encoders.” In: Bayesian Deep Learning Workshop (NIPS 2016).
- ABSTRACT: We introduce the variational graph auto-encoder (VGAE), a framework for unsupervised learning on graph-structured data based on the variational auto-encoder (VAE). This model makes use of latent variables and is capable of learning interpretable latent representations for undirected graphs. We demonstrate this model using a graph convolutional network (GCN) encoder and a simple inner product decoder. Our model achieves competitive results on a link prediction task in citation networks. In contrast to most existing models for unsupervised learning on graph-structured data and link prediction, our model can naturally incorporate node features, which significantly improves predictive performance on a number of benchmark datasets.
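The "simple inner product decoder" of the VGAE abstract reconstructs edge probabilities as sigmoid(z_i · z_j) from node embeddings. A minimal sketch with hypothetical latent embeddings (the GCN encoder that would produce them is omitted):

```python
import numpy as np

def inner_product_decoder(z):
    # Reconstruct the adjacency matrix as sigmoid(Z Z^T):
    # each entry is the predicted probability of an edge between nodes i and j.
    logits = z @ z.T
    return 1.0 / (1.0 + np.exp(-logits))

# Toy 2-D embeddings for 4 nodes: nodes 0 and 1 point one way, nodes 2 and 3 the other.
z = np.array([[2.0, 0.0], [1.8, 0.2], [-2.0, 0.0], [-1.9, -0.1]])
a_hat = inner_product_decoder(z)
# Within-group pairs (0,1) and (2,3) get high edge probability;
# cross-group pairs like (0,2) get low probability.
```

Nodes whose embeddings align get a high predicted link probability, which is how the decoder scores candidate edges in the link prediction task the abstract mentions.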