Joint Inference Algorithm

From GM-RKB

A joint inference algorithm is a supervised model-based learning algorithm that optimizes all of the underlying inference decisions of a composite task simultaneously, rather than solving each subtask independently and passing only its single best output forward.
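For intuition, a minimal sketch (with invented scores for two coupled subtasks) contrasting independent per-task decoding with joint decoding over all assignments:

```python
from itertools import product

# Hypothetical toy scores for two coupled subtasks: part-of-speech
# tagging and entity typing for the single token "Washington".
pos_scores = {"NNP": 0.6, "VB": 0.4}        # local POS evidence
ent_scores = {"PER": 0.55, "LOC": 0.45}     # local entity evidence
# Joint compatibility: some (POS, entity) pairs reinforce each other.
compat = {("NNP", "LOC"): 0.9, ("NNP", "PER"): 0.2,
          ("VB", "PER"): 0.1, ("VB", "LOC"): 0.1}

# Independent decoding: each subtask takes its own local argmax.
indep = (max(pos_scores, key=pos_scores.get),
         max(ent_scores, key=ent_scores.get))

# Joint decoding: score every (POS, entity) assignment at once.
def joint_score(pos, ent):
    return pos_scores[pos] * ent_scores[ent] * compat[(pos, ent)]

joint = max(product(pos_scores, ent_scores), key=lambda pe: joint_score(*pe))

print(indep)  # ('NNP', 'PER') -- local argmaxes, ignoring compatibility
print(joint)  # ('NNP', 'LOC') -- compatibility flips the entity decision
```

The independent decisions disagree with the joint optimum: the compatibility term makes the globally best assignment differ from the per-task argmaxes, which is precisely what joint inference is meant to capture.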



References

2006

  • (Finkel et al., 2006) ⇒ Jenny Rose Finkel, Christopher D. Manning, and Andrew Y. Ng. (2006). “Solving the Problem of Cascading Errors: Approximate Bayesian Inference for Linguistic Annotation Pipelines.” In: Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing (EMNLP 2006).
  • (Culotta et al., 2006) ⇒ Aron Culotta, Andrew McCallum, and Jonathan Betz. (2006). “Integrating Probabilistic Extraction Models and Data Mining to Discover Relations and Patterns in Text.” In: Proceedings of HLT-NAACL 2006.
    • This work can also be viewed as part of a trend to perform joint inference across multiple language processing tasks (Miller et al., 2000; Roth and Yih, 2002; Sutton and McCallum, 2004).
  • (JINLP, 2006) ⇒ Proposed workshop. http://www.cs.umass.edu/~casutton/jinlp2006/
    • In NLP there has been increasing interest in moving away from systems that make chains of local decisions independently, and instead toward systems that make multiple decisions jointly using global information. For example, NLP tasks are often solved by a pipeline of processing steps (from speech, to translation, to entity extraction, relation extraction, coreference and summarization)---each of which locally chooses its output to be passed to the next step. However, we can avoid accumulating cascading errors by joint decoding across the pipeline---capturing uncertainty and multiple hypotheses throughout. The use of lattices in speech recognition is well-established, but recently there has been more interest in larger, more complex joint inference, such as joint ASR and MT, and joint extraction and coreference.
    • The main challenge in applying joint methods more widely throughout NLP is that they are more complex and more expensive than local approaches. Various models and approximate inference algorithms have been used to maintain efficiency, such as beam search, reranking, simulated annealing, and belief propagation, but much work remains in understanding which methods are best for particular applications, or which new techniques could be brought to bear.
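The cascading-error point above can be illustrated with a toy two-stage pipeline (all stage names and scores here are invented): a 1-best pipeline commits to stage 1's top hypothesis, while keeping a beam of stage-1 hypotheses lets joint decoding recover a better overall assignment:

```python
import heapq

# Hypothetical two-stage pipeline: stage 1 proposes segmentations with
# scores; stage 2 scores labelings conditioned on the segmentation.
stage1 = {"seg_A": 0.6, "seg_B": 0.4}
stage2 = {("seg_A", "lab_X"): 0.3, ("seg_A", "lab_Y"): 0.2,
          ("seg_B", "lab_X"): 0.9, ("seg_B", "lab_Y"): 0.1}

def decode(beam_size):
    """Keep `beam_size` stage-1 hypotheses, then maximize the joint score."""
    beam = heapq.nlargest(beam_size, stage1, key=stage1.get)
    return max(((s, l) for (s, l) in stage2 if s in beam),
               key=lambda sl: stage1[sl[0]] * stage2[sl])

# 1-best pipeline: stage 1 commits to seg_A, and the error cascades.
print(decode(1))   # ('seg_A', 'lab_X')
# Beam of 2: seg_B survives, and 0.4 * 0.9 beats 0.6 * 0.3.
print(decode(2))   # ('seg_B', 'lab_X')
```

This is the simplest form of the beam-search strategy mentioned above; lattices and k-best lists in the referenced work play the same role of carrying multiple hypotheses (and their uncertainty) across pipeline stages.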
