2017 AHybridConvolutionalVariational

From GM-RKB

Subject Headings: Natural Language Generation Task.

Notes

Cited By

Quotes

Abstract

In this paper we explore the effect of architectural choices on learning a variational autoencoder (VAE) for text generation. In contrast to the previously introduced VAE model for text where both the encoder and decoder are RNNs, we propose a novel hybrid architecture that blends fully feed-forward convolutional and deconvolutional components with a recurrent language model. Our architecture exhibits several attractive properties such as faster run time and convergence, ability to better handle long sequences and, more importantly, it helps to avoid the issue of the VAE collapsing to a deterministic model.
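The "collapse to a deterministic model" mentioned above refers to the KL term of the VAE objective being driven to zero, so the decoder ignores the latent code. A minimal illustrative sketch of the two ingredients involved, the reparameterized latent sample and the KL term, is below (plain NumPy, not the paper's implementation; the 8-dimensional latent is a hypothetical choice):

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    # Sample z = mu + sigma * eps (the reparameterization trick),
    # which keeps sampling differentiable with respect to mu and log_var.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    # KL( N(mu, sigma^2) || N(0, I) ), summed over latent dimensions.
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

rng = np.random.default_rng(0)
mu = np.zeros(8)       # encoder mean for a hypothetical 8-dim latent
log_var = np.zeros(8)  # encoder log-variance
z = reparameterize(mu, log_var, rng)

# When mu -> 0 and log_var -> 0, the KL term vanishes: this is the
# collapsed regime in which the decoder can ignore z and the VAE
# degenerates to an ordinary (deterministic) language model.
print(kl_to_standard_normal(mu, log_var))  # 0.0
```

With a strong autoregressive RNN decoder this collapse is especially likely, which motivates the paper's use of feed-forward convolutional/deconvolutional components alongside the recurrent language model.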

References

BibTeX

@inproceedings{2017_AHybridConvolutionalVariational,
  author    = {Stanislau Semeniuta and
               Aliaksei Severyn and
               Erhardt Barth},
  editor    = {Martha Palmer and
               Rebecca Hwa and
               Sebastian Riedel},
  title     = {A Hybrid Convolutional Variational Autoencoder for Text Generation},
  booktitle = {Proceedings of the 2017 Conference on Empirical Methods in Natural
               Language Processing, {EMNLP} 2017, Copenhagen, Denmark, September
               9-11, 2017},
  pages     = {627--637},
  publisher = {Association for Computational Linguistics},
  year      = {2017},
  url       = {https://doi.org/10.18653/v1/d17-1066},
  doi       = {10.18653/v1/d17-1066},
}


2017 AHybridConvolutionalVariational: Stanislau Semeniuta, Aliaksei Severyn, Erhardt Barth (2017). "A Hybrid Convolutional Variational Autoencoder for Text Generation."