Legal Natural Language Processing (NLP) Benchmark Task
A Legal Natural Language Processing (NLP) Benchmark Task is a domain-specific NLP benchmark task for a legal text analysis task.
- Context:
- It can (often) be essential for Legal NLP Model Training that requires an understanding of complex legal jargon and terminology.
- It can include tasks like legal document summarization, legal information extraction, and legal question answering.
- It can include datasets derived from legal documents, court rulings, or legal literature.
- It can play a critical role in advancing the field of legal tech and AI in law.
- ...
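The evaluation pattern shared by tasks such as these can be sketched as a simple scoring loop over labeled examples. The snippet below is a minimal illustration, not the API of any actual benchmark: the two examples, their labels, and the `dummy_model` predictor are hypothetical stand-ins for a ContractNLI-style entailment task, where a real benchmark would supply the dataset splits and a real system would supply the predictions.

```python
# Hedged sketch of a legal NLP benchmark evaluation loop.
# The example data and the predictor are hypothetical; a real benchmark
# (e.g., LegalBench or ContractNLI) defines the actual splits and labels.

from collections import Counter

# Hypothetical gold annotations for a ContractNLI-style entailment task:
# each example pairs a contract clause with a hypothesis and a gold label.
gold = [
    {"premise": "The Receiving Party shall not disclose Confidential Information.",
     "hypothesis": "The recipient may share the information freely.",
     "label": "contradiction"},
    {"premise": "This Agreement is governed by the laws of Delaware.",
     "hypothesis": "Delaware law applies to the contract.",
     "label": "entailment"},
]

def dummy_model(premise: str, hypothesis: str) -> str:
    """Placeholder predictor; a real system would be a fine-tuned legal LM."""
    return "entailment"

def evaluate(examples, predict):
    """Compute accuracy and a (gold, predicted) confusion count
    over a labeled benchmark split."""
    correct = 0
    confusion = Counter()
    for ex in examples:
        pred = predict(ex["premise"], ex["hypothesis"])
        confusion[(ex["label"], pred)] += 1
        correct += pred == ex["label"]
    return {"accuracy": correct / len(examples), "confusion": confusion}

metrics = evaluate(gold, dummy_model)
print(metrics["accuracy"])  # the always-"entailment" baseline gets 1 of 2 right
```

Benchmarks differ mainly in which metric replaces accuracy here (e.g., ROUGE for legal document summarization, span F1 for legal information extraction), while the loop structure stays the same.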
- Example(s):
- LEXTREME: A comprehensive multi-lingual and multi-task benchmark for the legal domain.
- LegalBench.
- ContractNLI.
- ...
- Counter-Example(s):
- Clinical Trial Dataset,
- ImageNet Dataset,
- Question-Answer Dataset,
- Reading Comprehension Dataset.
- General Language Understanding Evaluation (GLUE): A benchmark designed for general language understanding.
- Stanford Question Answering Dataset (SQuAD): A benchmark for general question answering tasks.
- TREC Legal Track: A track within TREC focusing on e-discovery rather than a broad range of legal NLP tasks.
- See: Biomedical NLP Benchmark, General NLP Benchmark, Natural Language Understanding.
References
2023
- (Greco & Tagarelli, 2023) ⇒ Candida M Greco, and Andrea Tagarelli. (2023). “Bringing Order Into the Realm of Transformer-based Language Models for Artificial Intelligence and Law.” In: Artif. Intell. Law Journal. doi:10.48550/arXiv.2308.05502