Character-Level Neural Sequence-to-Sequence (seq2seq) Model Training Algorithm

From GM-RKB

A Character-Level Neural Sequence-to-Sequence (seq2seq) Model Training Algorithm is a seq2seq training algorithm that is a character-level NNet algorithm (it operates on sequences of individual characters rather than word tokens).



References

2016a

2016b

2015

  • (Karpathy, 2015) ⇒ Andrej Karpathy. (2015). “The Unreasonable Effectiveness of Recurrent Neural Networks.” In: Blog post 2015-05-21.
    • QUOTE:

      An example RNN with 4-dimensional input and output layers, and a hidden layer of 3 units (neurons). This diagram shows the activations in the forward pass when the RNN is fed the characters "hell" as input. The output layer contains confidences the RNN assigns for the next character (vocabulary is "h,e,l,o"); We want the green numbers to be high and red numbers to be low.
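The quoted example can be sketched in code. Below is a minimal, untrained forward pass matching the description: a vocabulary of "h,e,l,o" (4-dimensional one-hot input and output) and a hidden layer of 3 units, fed the characters "hell". The weight values are random placeholders, not trained parameters; the function and variable names are illustrative assumptions, not from the quoted post.

```python
import numpy as np

# Minimal character-level RNN forward pass, following the quoted example:
# vocabulary "h,e,l,o" (4-dim one-hot input/output), hidden layer of 3 units.
# Weights are random placeholders, not trained values.
np.random.seed(0)
vocab = ["h", "e", "l", "o"]
char_to_ix = {ch: i for i, ch in enumerate(vocab)}
vocab_size, hidden_size = len(vocab), 3

Wxh = np.random.randn(hidden_size, vocab_size) * 0.01   # input -> hidden
Whh = np.random.randn(hidden_size, hidden_size) * 0.01  # hidden -> hidden
Why = np.random.randn(vocab_size, hidden_size) * 0.01   # hidden -> output
bh = np.zeros((hidden_size, 1))                         # hidden bias
by = np.zeros((vocab_size, 1))                          # output bias

def forward(chars):
    """Return a softmax distribution over the next character at each step."""
    h = np.zeros((hidden_size, 1))  # initial hidden state
    probs = []
    for ch in chars:
        x = np.zeros((vocab_size, 1))
        x[char_to_ix[ch]] = 1.0                  # one-hot encode input char
        h = np.tanh(Wxh @ x + Whh @ h + bh)      # recurrent hidden update
        y = Why @ h + by                         # unnormalized next-char scores
        p = np.exp(y - y.max()) / np.exp(y - y.max()).sum()  # softmax
        probs.append(p)
    return probs

probs = forward("hell")  # four steps: one next-char distribution per input char
```

Training would then push up the probability of the true next character at each step ("e" after "h", "l" after "he", and so on), which corresponds to making the quote's green numbers high and red numbers low.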