Bidirectional LSTM-CNN (BLSTM-CNN) Training System


A Bidirectional LSTM-CNN (BLSTM-CNN) Training System is a biLSTM Training System that can train a BLSTM-CNN model (a bidirectional LSTM whose word-level inputs are augmented with CNN-derived character-level representations).



References


2016

(Ma & Hovy, 2016) ⇒ Xuezhe Ma, and Eduard Hovy. (2016). "End-to-end Sequence Labeling via Bi-directional LSTM-CNNs-CRF." In: Proceedings of ACL 2016. arXiv:1603.01354.

Figure 1: The convolution neural network for extracting character-level representations of words. Dashed arrows indicate a dropout layer applied before character embeddings are input to CNN.

Figure 3: The main architecture of our neural network. The character representation for each word is computed by the CNN in Figure 1. Then the character representation vector is concatenated with the word embedding before feeding into the BLSTM network. Dashed arrows indicate dropout layers applied on both the input and output vectors of BLSTM.
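The captions above describe the encoder of such a system: a character-level CNN produces one vector per word, that vector is concatenated with the word embedding, and the sequence of concatenated vectors is fed to a bidirectional LSTM, with dropout applied before the character CNN and on the BLSTM input and output. Below is a minimal sketch of that encoder, assuming PyTorch; the module name BLSTMCNNEncoder and all hyperparameter defaults are illustrative, not taken from the referenced paper, and any task-specific output layer (such as the CRF in the paper's full model) is omitted.

```python
# Minimal sketch of a BLSTM-CNN encoder: char CNN + word embedding -> BLSTM.
# Hyperparameter defaults are illustrative assumptions, not the paper's values.
import torch
import torch.nn as nn


class BLSTMCNNEncoder(nn.Module):
    def __init__(self, word_vocab_size, char_vocab_size, word_dim=100,
                 char_dim=30, char_filters=30, kernel_size=3,
                 hidden_dim=200, dropout=0.5):
        super().__init__()
        self.word_emb = nn.Embedding(word_vocab_size, word_dim, padding_idx=0)
        self.char_emb = nn.Embedding(char_vocab_size, char_dim, padding_idx=0)
        # Character-level CNN (Figure 1): convolution over the characters of a
        # word followed by max-pooling yields one fixed-size vector per word.
        self.char_cnn = nn.Conv1d(char_dim, char_filters, kernel_size,
                                  padding=kernel_size // 2)
        # BLSTM over the concatenated word + character representations (Figure 3).
        self.blstm = nn.LSTM(word_dim + char_filters, hidden_dim,
                             batch_first=True, bidirectional=True)
        self.dropout = nn.Dropout(dropout)

    def forward(self, word_ids, char_ids):
        # word_ids: (batch, seq_len); char_ids: (batch, seq_len, max_word_len)
        batch, seq_len, max_word_len = char_ids.shape
        chars = self.dropout(self.char_emb(char_ids))       # dropout before the CNN
        chars = chars.view(batch * seq_len, max_word_len, -1).transpose(1, 2)
        chars = torch.relu(self.char_cnn(chars)).max(dim=2).values
        char_repr = chars.view(batch, seq_len, -1)           # one vector per word
        x = torch.cat([self.word_emb(word_ids), char_repr], dim=-1)
        out, _ = self.blstm(self.dropout(x))                 # dropout on BLSTM input
        return self.dropout(out)                             # dropout on BLSTM output


# Example usage with toy sizes: a batch of 2 sentences, 5 words, up to 12 chars.
if __name__ == "__main__":
    enc = BLSTMCNNEncoder(word_vocab_size=10000, char_vocab_size=100)
    words = torch.randint(1, 10000, (2, 5))
    chars = torch.randint(1, 100, (2, 5, 12))
    print(enc(words, chars).shape)  # torch.Size([2, 5, 400]) = 2 * hidden_dim
```

In a full BLSTM-CNN training system, this encoder would be followed by a task-specific output layer and trained end to end with a standard gradient-based optimizer.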