Question Answering (QA) Task

From GM-RKB

A question answering (QA) task is a query-replying task that requires the production of answers to questions (typically produced by a question act).



References

2023

  • (Wikipedia, 2023) ⇒ https://en.wikipedia.org/wiki/Question_answering#Types_of_question_answering Retrieved:2023-9-10.
    • Question-answering research attempts to develop ways of answering a wide range of question types, including fact, list, definition, how, why, hypothetical, semantically constrained, and cross-lingual questions.
      • Answering questions related to an article in order to evaluate reading comprehension is one of the simpler forms of question answering, since a given article is relatively short compared to the domains of other types of question-answering problems. An example of such a question is "What did Albert Einstein win the Nobel Prize for?" after an article about this subject is given to the system.
      • Closed-book question answering is when a system has memorized some facts during training and can answer questions without explicitly being given a context. This is similar to humans taking closed-book exams.
      • Closed-domain question answering deals with questions under a specific domain (for example, medicine or automotive maintenance) and can exploit domain-specific knowledge frequently formalized in ontologies. Alternatively, "closed-domain" might refer to a situation where only a limited type of questions are accepted, such as questions asking for descriptive rather than procedural information. Question answering systems in the context of machine reading applications have also been constructed in the medical domain, for instance for Alzheimer's disease (Roser Morante, Martin Krallinger, Alfonso Valencia and Walter Daelemans. Machine Reading of Biomedical Texts about Alzheimer's Disease. CLEF 2012 Evaluation Labs and Workshop. September 17, 2012).
      • Open-domain question answering deals with questions about nearly anything and can only rely on general ontologies and world knowledge. Systems designed for open-domain question answering usually have much more data available from which to extract the answer. An example of an open-domain question is "What did Albert Einstein win the Nobel Prize for?" while no article about this subject is given to the system. ...

2015a

  • (Wikipedia, 2015) ⇒ http://en.wikipedia.org/wiki/question_answering Retrieved:2015-9-10.
    • Question Answering (QA) is a computer science discipline within the fields of information retrieval and natural language processing (NLP), which is concerned with building systems that automatically answer questions posed by humans in a natural language.

      A QA implementation, usually a computer program, may construct its answers by querying a structured database of knowledge or information, usually a knowledge base. More commonly, QA systems can pull answers from an unstructured collection of natural language documents.

      Some examples of natural language document collections used for QA systems include:

      • a local collection of reference texts
      • internal organization documents and web pages
      • compiled newswire reports
      • a set of Wikipedia pages
      • a subset of World Wide Web pages
    • QA research attempts to deal with a wide range of question types including: fact, list, definition, How, Why, hypothetical, semantically constrained, and cross-lingual questions.
      • Closed-domain question answering deals with questions under a specific domain (for example, medicine or automotive maintenance), and can be seen as an easier task because NLP systems can exploit domain-specific knowledge frequently formalized in ontologies. Alternatively, closed-domain might refer to a situation where only limited types of questions are accepted, such as questions asking for descriptive rather than procedural information. QA systems in the context of machine reading applications have also been constructed in the medical domain, for instance related to Alzheimer's disease [1]
      • Open-domain question answering deals with questions about nearly anything, and can only rely on general ontologies and world knowledge. On the other hand, these systems usually have much more data available from which to extract the answer.
  1. Roser Morante, Martin Krallinger, Alfonso Valencia and Walter Daelemans. Machine Reading of Biomedical Texts about Alzheimer's Disease. CLEF 2012 Evaluation Labs and Workshop. September 17, 2012
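The retrieval-based approach described above — pulling an answer from an unstructured collection of natural language documents — can be sketched minimally as follows. The term-overlap scoring and the two-document corpus are illustrative assumptions, not part of any cited system; real QA systems use far richer retrieval and answer extraction.

```python
# Minimal sketch of retrieval-based QA over an unstructured document
# collection: score each document by question-term overlap and return
# the best match. Purely illustrative; not a cited system's method.

def score(question, document):
    """Count how many (lowercased, whitespace-split) question terms
    also appear in the document."""
    q_terms = set(question.lower().split())
    d_terms = set(document.lower().split())
    return len(q_terms & d_terms)

def answer(question, documents):
    """Return the document that shares the most terms with the question."""
    return max(documents, key=lambda d: score(question, d))

docs = [
    "Albert Einstein won the Nobel Prize for the photoelectric effect.",
    "Marie Curie won Nobel Prizes in physics and chemistry.",
]
print(answer("What did Albert Einstein win the Nobel Prize for?", docs))
```

A production system would replace the overlap score with an inverted index or dense retriever, and would extract an answer span rather than returning a whole document.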

2001

  • (Voorhees, 2001) ⇒ Ellen M. Voorhees. (2001). “Overview of the TREC 2001 Question Answering Track.” In: Proceedings of the TREC-10 Conference. NIST.
    • QUOTE: As mentioned above, one of the goals for the TREC 2001 QA track was to require systems to assemble an answer from information located in multiple documents. Such questions are harder to answer than the questions used in the main task since information duplicated in the documents must be detected and reported only once. (...)

      List results were evaluated using accuracy, the number of distinct responses divided by the target number of instances. Note that since unsupported responses could be marked distinct, the reported accuracy is a lenient evaluation. Table 4 gives the average accuracy scores for all of the list task submissions. Given the way the questions were constructed for the list task, the list task questions were intrinsically easier than the questions in the main task. Most systems found at least one instance for most questions. Each system returned some duplicate responses, but duplication was not a major source of error for any of the runs. (Each run contained many more wrong responses than duplicate responses.) With just 18 runs, there is not enough data to know if the lack of duplication is because the systems are good at recognizing and eliminating duplicate responses, or if there simply wasn't all that much duplication in the document set. (...)

      The list task will be repeated in essentially the same form as TREC 2001. NIST will attempt to find naturally occurring list questions in logs, but appropriate questions are rare, so some constructed questions may also be used. We hope also to have a new context task, though the exact nature of that task is still undefined.

      The main focus of the ARDA AQUAINT program is to move beyond the simple factoid questions that have been the focus of the TREC tracks. Of particular concern for evaluation is how to score responses that cannot be marked simply correct/incorrect, but instead need to incorporate a fine-grained measure of the quality of the response.

1977

  • (Lehnert, 1977) ⇒ Wendy G. Lehnert. (1977). “The Process of Question Answering - A Computer Simulation of Cognition.” Yale University. ISBN:0-470-26485-3
    • ABSTRACT: Problems in computational question answering assume a new perspective when question answering is viewed as a problem in natural language processing. A theory of question answering has been proposed which relies on ideas in conceptual information processing and theories of human memory organization. This theory of question answering has been implemented in a computer program, QUALM, currently being used by two story understanding systems to complete a natural language processing system which reads stories and answers questions about what was read. The processes in QUALM are divided into 4 phases: (1) Conceptual categorization which guides subsequent processing by dictating which specific inference mechanisms and memory retrieval strategies should be invoked in the course of answering a question; (2) Inferential analysis which is responsible for understanding what the questioner really meant when a question should not be taken literally; (3) Content specification which determines how much of an answer should be returned in terms of detail and elaborations, and (4) Retrieval heuristics which do the actual digging to extract an answer from memory.