LSTM-based Language Model (LM) Training Algorithm

An [[LSTM-based Language Model (LM) Training Algorithm]] is an [[RNN-based LM Algorithm]] that is based on [[LSTM network]]s.
* <B>Context:</B>
** It can be implemented by an [[LSTM-based LM System]].
** …
* <B>Counter-Example(s):</B>
** an [[MLE-based LM Algorithm]].
* <B>See:</B> [[RNN-based LM Algorithm]].
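
Below is a minimal sketch of such a training procedure, assuming a [[PyTorch]] implementation with a hypothetical toy corpus and hyperparameters (none of which come from this entry or the cited works): the [[LSTM network]] is fit to predict each next token from its prefix by minimizing the [[cross-entropy]] of the true next tokens.
<syntaxhighlight lang="python">
# A minimal sketch of LSTM-based language-model training, assuming PyTorch.
# The toy corpus, vocabulary, and hyperparameters are hypothetical
# illustration choices, not part of the GM-RKB entry or the cited works.
import torch
import torch.nn as nn

# Hypothetical character-level toy corpus and its vocabulary.
text = "an lstm language model predicts the next token from its history. "
vocab = sorted(set(text))
stoi = {ch: i for i, ch in enumerate(vocab)}
data = torch.tensor([stoi[ch] for ch in text], dtype=torch.long)

class LSTMLanguageModel(nn.Module):
    """Embedding -> LSTM -> linear projection to next-token logits."""
    def __init__(self, vocab_size, embed_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.proj = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, state=None):
        emb = self.embed(tokens)            # (batch, seq, embed_dim)
        out, state = self.lstm(emb, state)  # (batch, seq, hidden_dim)
        return self.proj(out), state        # logits for each position

model = LSTMLanguageModel(len(vocab))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
criterion = nn.CrossEntropyLoss()

# Next-token objective: inputs are tokens t_0..t_{n-2},
# targets are the shifted tokens t_1..t_{n-1}.
inputs = data[:-1].unsqueeze(0)   # add a batch dimension
targets = data[1:].unsqueeze(0)

for step in range(200):
    optimizer.zero_grad()
    logits, _ = model(inputs)
    loss = criterion(logits.reshape(-1, len(vocab)), targets.reshape(-1))
    loss.backward()                # backpropagation through time
    optimizer.step()

print(f"final cross-entropy: {loss.item():.3f}")
</syntaxhighlight>
In practice the corpus would be batched into fixed-length subsequences, with the LSTM hidden state carried (and detached) across adjacent chunks; the single-sequence loop above is kept deliberately simple for illustration.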
----
----
== References ==


=== 2017b ===
* ([[2017_ImprovedVariationalAutoencoders|Yang, Hu et al., 2017]]) ⇒ [[Zichao Yang]], [[Zhiting Hu]], [[Ruslan Salakhutdinov]], and [[Taylor Berg-Kirkpatrick]]. ([[2017]]). “[https://arxiv.org/pdf/1702.08139 Improved Variational Autoencoders for Text Modeling Using Dilated Convolutions].” In: Proceedings of the 34th International Conference on Machine Learning ([[ICML-2017]]).
** QUOTE: Recent [[NLP research|work]] on [[generative modeling of text]] has found that [[variational auto-encoders (VAE)]] incorporating [[LSTM decoder]]s perform [[worse]] than [[simpler]] [[LSTM-based Language Modeling (LM) Algorithm|LSTM language model]]s ([[Bowman et al., 2015]]). </s> This [[negative result]] is so [[far poorly understood]], but has been attributed to the propensity of [[LSTM decoder]]s to ignore [[conditioning information]] from the [[encoder]]. </s> ...


=== 2015 ===


----
__NOTOC__
[[Category:Concept]]
