- (Buck et al., 2018) ⇒ Christian Buck, Jannis Bulian, Massimiliano Ciaramita, Andrea Gesmundo, Neil Houlsby, Wojciech Gajewski, and Wei Wang. (2018). “Ask the Right Questions: Active Question Reformulation with Reinforcement Learning.” In: Proceedings of 6th International Conference on Learning Representations (ICLR-2018).
Subject Headings: SearchQA Benchmark Task.
- It formulates Jeopardy! QA as a query-reformulation task that leverages a search engine.
- The model is first pre-trained using a bidirectional LSTM on multilingual sentence pairs.
- A CNN binary classifier performs answer selection.
- Evaluation on the SearchQA dataset shows significant improvement over the state-of-the-art model that uses the original questions.
- REVIEW: This paper formulates the Jeopardy QA as a query reformulation task that leverages a search engine. In particular, a user will try a sequence of alternative queries based on the original question in order to find the answer. The RL formulation essentially tries to mimic this process. Although this is an interesting formulation, as promoted by some recent work, this paper does not provide compelling reasons why it's a good formulation. The lack of serious comparisons to baseline methods makes it hard to judge the value of this work.
- REVIEW: This article clearly describes how they designed and actively trained two models for question reformulation and answer selection during question answering episodes. The reformulation component is trained using a policy gradient over a sequence-to-sequence model (original vs. reformulated questions). The model is first pre-trained using a bidirectional LSTM on multilingual sentence pairs. A small monolingual bitext corpus is then used to improve the quality of the results. A CNN binary classifier performs answer selection.
- REVIEW: This paper proposes active question answering via a reinforcement learning approach that can learn to rephrase the original questions in a way that can provide the best possible answers. Evaluation on the SearchQA dataset shows significant improvement over the state-of-the-art model that uses the original questions.
We frame Question Answering (QA) as a Reinforcement Learning task, an approach that we call Active Question Answering. We propose an agent that sits between the user and a black box QA system and learns to reformulate questions to elicit the best possible answers. The agent probes the system with, potentially many, natural language reformulations of an initial question and aggregates the returned evidence to yield the best answer. The reformulation system is trained end-to-end to maximize answer quality using policy gradient. We evaluate on SearchQA, a dataset of complex questions extracted from Jeopardy!. The agent outperforms a state-of-the-art base model, playing the role of the environment, and other benchmarks. We also analyze the language that the agent has learned while interacting with the question answering system. We find that successful question reformulations look quite different from natural language paraphrases. The agent is able to discover non-trivial reformulation strategies that resemble classic information retrieval techniques such as term re-weighting (tf-idf) and stemming.
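The policy-gradient idea in the abstract can be illustrated with a minimal, self-contained sketch. This is not the paper's seq2seq system: it is a toy REINFORCE loop over a hypothetical fixed pool of candidate reformulations, where the reward stands in for answer quality (e.g. token F1 of the answer returned by the black-box QA environment). All candidate strings and reward values below are invented for illustration.

```python
import math
import random

random.seed(0)

# Hypothetical candidate reformulations of one question, and a stand-in
# "answer quality" reward for each (in the paper this would come from
# querying the black-box QA system and scoring the returned answer).
candidates = ["capital France", "France capital city", "what France"]
reward_of = {0: 0.2, 1: 0.9, 2: 0.1}

logits = [0.0, 0.0, 0.0]  # parameters of a categorical policy
lr = 0.5                  # learning rate

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

for step in range(200):
    probs = softmax(logits)
    # Sample a reformulation from the current policy and observe its reward.
    a = random.choices(range(len(candidates)), weights=probs)[0]
    r = reward_of[a]
    # Expected reward under the policy, used as a baseline (variance reduction).
    baseline = sum(p * reward_of[i] for i, p in enumerate(probs))
    # REINFORCE gradient for a softmax policy: (indicator - prob) * advantage.
    for i in range(len(logits)):
        grad = ((1.0 if i == a else 0.0) - probs[i]) * (r - baseline)
        logits[i] += lr * grad

final_probs = softmax(logits)
best = max(range(len(candidates)), key=lambda i: final_probs[i])
print(candidates[best])  # the policy concentrates on the highest-reward query
```

After training, the policy assigns most probability mass to the reformulation with the highest answer-quality reward, mirroring how the paper's agent learns to prefer query rewrites that elicit better answers from the environment.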
- (Dunn et al., 2017) ⇒ Matthew Dunn, Levent Sagun, Mike Higgins, Ugur Guney, Volkan Cirik, and Kyunghyun Cho. (2017). “SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine.” arXiv preprint arXiv:1704.05179.