Bidirectional LSTM (BiLSTM) Training System


A Bidirectional LSTM (BiLSTM) Training System is a Bidirectional Neural Network Training System that implements a bidirectional LSTM modeling algorithm (to solve a bidirectional LSTM modeling task and produce a bidirectional LSTM model).
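
For illustration, the following is a minimal BiLSTM training sketch in Python using PyTorch. The framework, layer sizes, task (per-step tagging), and optimizer are assumptions chosen for the example; the systems cited in the references below use their own implementations.

    import torch
    import torch.nn as nn

    class BiLSTMTagger(nn.Module):
        """Minimal bidirectional LSTM model: each step's output concatenates
        the forward and backward hidden states (hence 2 * hidden_size)."""
        def __init__(self, vocab_size, embed_size=64, hidden_size=100, n_classes=5):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_size)
            self.bilstm = nn.LSTM(embed_size, hidden_size,
                                  batch_first=True, bidirectional=True)
            self.out = nn.Linear(2 * hidden_size, n_classes)

        def forward(self, tokens):                  # tokens: (batch, seq_len)
            h, _ = self.bilstm(self.embed(tokens))  # h: (batch, seq_len, 2*hidden)
            return self.out(h)                      # per-step class scores

    # One illustrative training step with cross-entropy error.
    model = BiLSTMTagger(vocab_size=1000)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.randint(0, 1000, (8, 20))             # batch of token-id sequences
    y = torch.randint(0, 5, (8, 20))                # per-step target labels
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(x).transpose(1, 2), y)
    loss.backward()
    optimizer.step()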



References

2018a

  • [Figure] Fig. 3: Unfolded architecture of a bidirectional LSTM over three consecutive steps.
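
The figure itself is not reproduced here. The sketch below illustrates the unfolding it describes: over T=3 steps, a forward LSTM reads the inputs left to right while a backward LSTM reads them right to left, and each step's representation concatenates the two hidden states. The use of PyTorch LSTM cells and the dimensions are assumptions for illustration.

    import torch
    import torch.nn as nn

    T, batch, d_in, d_h = 3, 1, 4, 5
    fwd, bwd = nn.LSTMCell(d_in, d_h), nn.LSTMCell(d_in, d_h)
    xs = [torch.randn(batch, d_in) for _ in range(T)]

    h_f = c_f = torch.zeros(batch, d_h)
    fwd_states = []
    for t in range(T):                      # left-to-right unfolding: x1, x2, x3
        h_f, c_f = fwd(xs[t], (h_f, c_f))
        fwd_states.append(h_f)

    h_b = c_b = torch.zeros(batch, d_h)
    bwd_states = [None] * T
    for t in reversed(range(T)):            # right-to-left unfolding: x3, x2, x1
        h_b, c_b = bwd(xs[t], (h_b, c_b))
        bwd_states[t] = h_b

    # Step t's output concatenates both directions' states at step t,
    # matching what nn.LSTM(..., bidirectional=True) returns per step.
    outputs = [torch.cat([fwd_states[t], bwd_states[t]], dim=-1) for t in range(T)]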

2018b

  • (GitHub, 2018) ⇒ Theano-Recurrence Training System: https://github.com/uyaseen/theano-recurrence#training Retrieved: 2018-07-01.
    • train.py provides a convenient method train(..) to train each model. You can select the recurrent model with the rec_model parameter, which is set to gru by default (possible options are rnn, gru, lstm, birnn, bigru & bilstm). The number of hidden neurons in each layer (at the moment only single-layer models are supported, to keep things simple, although adding more layers is trivial) can be adjusted via the n_h parameter of train(..), which defaults to 100. As the model is trained, it stores its current best state, i.e., the set of weights with the least training error; the stored model is written to data\models\MODEL-NAME-best_model.pkl, and it can later be used to resume training from the last point or just for prediction/sampling. If you don't want to start training from scratch and instead want to use an already trained model, pass use_existing_model=True as an argument to train(..). Optimization strategies can also be specified to train(..) via the optimizer parameter; currently supported optimizers are rmsprop, adam, and vanilla stochastic gradient descent, and can be found in utilities\optimizers.py. The b_path, learning_rate, and n_epochs parameters of train(..) specify the base path to store the model (default = data\models\), the initial learning rate of the optimizer, and the number of epochs, respectively. During training, some logs (current epoch, sample, cross-entropy error, etc.) are shown on the console to give an idea of how well learning is proceeding; the logging frequency can be specified via logging_freq in train(..). At the end of training, a plot of cross-entropy error vs. number of iterations gives an overview of the overall training process and is also stored in the b_path.
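
Based on the parameters quoted above, a training call for the bidirectional LSTM variant might look as follows. This is a sketch, not verified against the repository: the import path is assumed from the README, and the learning_rate, n_epochs, and logging_freq values are illustrative assumptions.

    from train import train   # train.py in the theano-recurrence repository

    # Train a single-layer BiLSTM with 100 hidden units; parameter names
    # follow the README quoted above.
    train(rec_model='bilstm',
          n_h=100,
          optimizer='rmsprop',         # per the README: rmsprop, adam, or vanilla SGD
          b_path='data/models/',       # base path where the best model .pkl is stored
          learning_rate=0.001,         # assumed value; README states no default
          n_epochs=50,                 # assumed value; README states no default
          use_existing_model=False,    # True resumes from the stored best model
          logging_freq=10)             # assumed value; console logging interval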

2015