Multi-Hop Evidence Retrieval Task
A Multi-Hop Evidence Retrieval Task is an evidence retrieval task (a complex NLU task) that requires sequential evidence extraction across multiple documents to support fact verification or question answering.
- AKA: Multi-Step Evidence Task, Chain Evidence Retrieval Task.
- Context:
- It can typically require Reasoning Chain Construction across document boundaries.
- It can typically involve Evidence Dependency Resolution between retrieval steps.
- It can typically necessitate Cross-Document Coreference for entity linking.
- It can typically demand Inference Path Tracking through evidence sequences.
- It can typically support Complex Query Resolution via iterative retrieval.
- ...
- It can often incorporate Bridge Entity Recognition connecting evidence pieces.
- It can often utilize Hierarchical Retrieval Strategies for efficiency.
- It can often require Evidence Ordering for logical flow.
- It can often employ Confidence Propagation across retrieval hops.
- ...
- It can range from being a Two-Hop Retrieval Task to being a Many-Hop Retrieval Task, depending on its reasoning depth.
- It can range from being a Homogeneous Corpus Task to being a Heterogeneous Corpus Task, depending on its source diversity.
- ...
- It can process Multi-Document Collections with linked information.
- It can output Evidence Chains with supporting documents.
- It can be solved by a Multi-Hop Evidence Retrieval System.
- It can be evaluated by a Multi-Hop Retrieval Performance Measure.
- ...
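The iterative retrieval described above (retrieve, follow a bridge entity, retrieve again) can be sketched as a minimal two-hop loop over a toy corpus. The corpus, the word-overlap scorer, and the query-expansion bridge heuristic below are illustrative assumptions, not a reference implementation; real systems use dense retrievers and learned bridge-entity linking.

```python
import re

# Toy corpus: answering the question requires hopping from d1 to d2
# via the bridge entity "Thomas Urquhart". (Hypothetical example data.)
TOY_CORPUS = {
    "d1": "Kinnairdy Castle is in Aberdeenshire. It was home to Thomas Urquhart.",
    "d2": "Thomas Urquhart was a Scottish writer born in 1611.",
    "d3": "Aberdeenshire is a council area in Scotland.",
}

def tokens(text):
    """Lowercased word tokens, punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query, corpus, exclude=frozenset()):
    """Return the doc id sharing the most tokens with the query (naive scorer)."""
    q = tokens(query)
    best_id, best_score = None, 0
    for doc_id, text in corpus.items():
        if doc_id in exclude:
            continue
        score = len(q & tokens(text))
        if score > best_score:
            best_id, best_score = doc_id, score
    return best_id

def two_hop_retrieve(question, corpus):
    """Retrieve a first document, then use its text as a bridge to a second."""
    hop1 = retrieve(question, corpus)
    # Bridge step: expand the query with the first hop's text so the second
    # retrieval can follow entities never mentioned in the question itself.
    expanded = question + " " + corpus[hop1]
    hop2 = retrieve(expanded, corpus, exclude={hop1})
    return [hop1, hop2]  # the evidence chain, in hop order

chain = two_hop_retrieve("When was the owner of Kinnairdy Castle born?", TOY_CORPUS)
print(chain)  # ['d1', 'd2']
```

Here the question alone never mentions "Thomas Urquhart", so the second hop is only reachable after the first document introduces the bridge entity; this is the evidence dependency that distinguishes multi-hop from direct retrieval.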
- Example(s):
- HotpotQA Task requiring two-hop reasoning over Wikipedia articles.
- Multi-hop FEVER Task needing multiple evidence sentences for claim verification.
- WikiHop Task extracting answers through entity chains.
- Complex Question Answering over knowledge graphs with text evidence.
- Legal Precedent Retrieval tracing case citations across documents.
- ...
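Tasks like those above are typically scored with a Multi-Hop Retrieval Performance Measure over the retrieved evidence chain. The two chain-level metrics below are a hedged sketch of common choices; dataset-specific measures (e.g. HotpotQA's supporting-fact F1) differ in detail.

```python
def evidence_recall(predicted, gold):
    """Fraction of gold evidence documents recovered anywhere in the chain."""
    gold_set = set(gold)
    return len(gold_set & set(predicted)) / len(gold_set) if gold_set else 0.0

def exact_chain_match(predicted, gold):
    """1.0 only if every gold evidence document was retrieved (order-insensitive)."""
    return 1.0 if set(gold) <= set(predicted) else 0.0

# A chain that finds both gold documents scores full recall,
# even if it also retrieved a distractor (d4).
print(evidence_recall(["d1", "d2", "d4"], gold=["d1", "d2"]))  # 1.0
print(exact_chain_match(["d1", "d4"], gold=["d1", "d2"]))      # 0.0
```

Chain-level measures matter because per-document scores can reward a system that finds only the first hop; multi-hop evaluation penalizes any broken link in the chain.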
- Counter-Example(s):
- Single-Document QA Tasks, which find answers in one document.
- Direct Retrieval Tasks, which return relevant documents without chaining.
- Extractive Summarization Tasks, which process documents independently.
- See: Evidence Retrieval Task, Complex Question Answering, Multi-Document Processing, Reasoning Task.