2016 DeepReinforcementLearningforMentionRankingCoreferenceModels


Subject Headings: Clark-Manning Neural Coreference Resolution System, Stanford CoreNLP System, Coreference Resolution, Reinforcement Learning, Deep Learning.

Notes

Resource(s):

Pre-Print(s):

Other Link(s):

Related Paper(s):

Cited By

Quotes

Abstract

Coreference resolution systems are typically trained with heuristic loss functions that require careful tuning. In this paper we instead apply reinforcement learning to directly optimize a neural mention-ranking model for coreference evaluation metrics. We experiment with two approaches: the REINFORCE policy gradient algorithm and a reward-rescaled max-margin objective. We find the latter to be more effective, resulting in significant improvements over the current state-of-the-art on the English and Chinese portions of the CoNLL 2012 Shared Task.
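The reward-rescaled max-margin objective that the abstract reports as the more effective approach can be illustrated with a minimal sketch. This is not the authors' released code: the use of PyTorch, the tensor names (antecedent_scores, true_antecedents, costs), and the assumption that the mistake-specific costs are precomputed from the drop in a coreference evaluation reward (e.g. B-cubed) are all illustrative choices, not details taken from the paper.

import torch

def reward_rescaled_max_margin_loss(antecedent_scores, true_antecedents, costs):
    # Slack-rescaled max-margin loss for a single mention m.
    #   antecedent_scores: (num_candidates,) model scores s(c, m) for each candidate
    #                      antecedent c, including the "new entity" action.
    #   true_antecedents:  (num_candidates,) boolean mask of the correct antecedents.
    #   costs:             (num_candidates,) non-negative, mistake-specific costs;
    #                      assumed here to encode the reward lost by choosing each
    #                      wrong candidate, with zero cost for correct choices.
    best_true_score = antecedent_scores[true_antecedents].max()    # score of the best true antecedent
    margins = costs * (1.0 + antecedent_scores - best_true_score)  # slack-rescaled margin per candidate
    return torch.clamp(margins, min=0.0).max()                     # worst margin violation

# Hypothetical usage with two candidate antecedents plus the "new entity" action:
scores = torch.tensor([0.2, 1.5, -0.3])
gold   = torch.tensor([False, True, False])
cost   = torch.tensor([0.9, 0.0, 0.4])
loss = reward_rescaled_max_margin_loss(scores, gold, cost)  # scalar loss to backpropagate through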

References

BibTeX

@inproceedings{2016_DeepReinforcementLearningforMen,
  author    = {Kevin Clark and
               Christopher D. Manning},
  editor    = {Jian Su and
               Xavier Carreras and
               Kevin Duh},
  title     = {Deep Reinforcement Learning for Mention-Ranking Coreference Models},
  booktitle = {Proceedings of the 2016 Conference on Empirical Methods in Natural
               Language Processing (EMNLP 2016), Austin, Texas, USA, November 1-4,
               2016},
  pages     = {2256--2262},
  publisher = {The Association for Computational Linguistics},
  year      = {2016},
  url       = {https://doi.org/10.18653/v1/d16-1245},
  doi       = {10.18653/v1/d16-1245}
}


Author(s): Kevin Clark, Christopher D. Manning
Title: Deep Reinforcement Learning for Mention-Ranking Coreference Models
Year: 2016