Cross-Encoder Model
A Cross-Encoder Model is a neural reranking model that jointly processes each query-document pair through full cross-attention to produce a high-precision relevance score.
- AKA: Interaction-Based Model, Joint Encoding Model, Pairwise Ranking Model.
- Context:
- It can typically compute Relevance Scores with token-level interactions.
- It can typically process Concatenated Inputs with transformer architectures.
- It can typically perform Fine-Grained Matching through cross-attention layers.
- It can often utilize BERT-Based Architectures for bidirectional understanding.
- It can often employ Pointwise Ranking for absolute relevance assessment.
- It can often apply Pairwise Training for relative preference learning.
- It can often integrate MonoT5 Architecture for sequence-to-sequence scoring.
- It can range from being a Base Cross-Encoder Model to being a Large Cross-Encoder Model, depending on its parameter count.
- It can range from being a Binary Cross-Encoder Model to being a Graded Cross-Encoder Model, depending on its output type.
- It can range from being a General Cross-Encoder Model to being a Domain-Specific Cross-Encoder Model, depending on its training data.
- It can range from being a Single-Task Cross-Encoder Model to being a Multi-Task Cross-Encoder Model, depending on its objective function.
- ...
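The joint-encoding and pointwise-ranking behaviors above can be sketched in a minimal, self-contained Python toy. Here a word-overlap heuristic stands in for the transformer's learned relevance head, and the function names and `[CLS]`/`[SEP]` formatting are illustrative assumptions rather than any particular model's API:

```python
def cross_encode(query: str, document: str) -> str:
    # A cross-encoder consumes ONE concatenated sequence, so every query
    # token can attend to every document token (full cross-attention).
    return f"[CLS] {query} [SEP] {document} [SEP]"

def score(pair: str) -> float:
    # Toy stand-in for the model's relevance head: word overlap between
    # the two segments. A real cross-encoder learns this scoring jointly
    # from relevance judgments.
    query_part, doc_part = pair.split(" [SEP] ")[:2]
    q = set(query_part.replace("[CLS] ", "").lower().split())
    d = set(doc_part.lower().split())
    return len(q & d) / max(len(q), 1)

def rerank(query: str, documents: list[str]) -> list[tuple[str, float]]:
    # Pointwise reranking: score each query-document pair independently,
    # then sort candidates by descending relevance score.
    scored = [(doc, score(cross_encode(query, doc))) for doc in documents]
    return sorted(scored, key=lambda x: x[1], reverse=True)

docs = ["contract law governs agreements", "weather forecast for tomorrow"]
ranked = rerank("contract law", docs)
print(ranked[0][0])  # the document sharing query terms ranks first
```

Because every pair must pass through the full model, this scoring is accurate but expensive, which is why cross-encoders are typically applied only to a small candidate set produced by a cheaper first-stage retriever.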
- Examples:
- Legal Cross-Encoder Models, such as:
- Legal MS-MARCO Cross-Encoder, fine-tuned on legal relevance judgments.
- Japanese Legal Reranker, specialized for Japanese legal documents.
- General-Purpose Cross-Encoder Models, such as:
- ...

- Counter-Examples:
- Bi-Encoder Model, which independently encodes queries and documents.
- Late Interaction Model, which delays query-document interactions.
- Lexical Retrieval Model, which uses term-based matching.
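The first counter-example, the Bi-Encoder Model, can be contrasted in a toy Python sketch. Tiny bag-of-words vectors stand in for learned embeddings; the vocabulary, functions, and texts are illustrative assumptions:

```python
from math import sqrt

VOCAB = ["contract", "law", "governs", "weather", "forecast"]

def embed(text: str) -> list[float]:
    # Bi-encoder style: each text is encoded INDEPENDENTLY into a fixed
    # vector (here a toy bag-of-words count over a tiny vocabulary), so
    # query tokens never attend to document tokens.
    words = text.lower().split()
    return [float(words.count(w)) for w in VOCAB]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# Document vectors can be precomputed offline; only a cheap cosine
# similarity runs at query time -- the opposite trade-off from a
# cross-encoder, which recomputes full attention for every pair.
doc_vecs = {d: embed(d) for d in ["contract law governs", "weather forecast"]}
query_vec = embed("contract law")
best = max(doc_vecs, key=lambda d: cosine(query_vec, doc_vecs[d]))
print(best)
```

This independence is what makes bi-encoders suitable for first-stage retrieval over large corpora, while the cross-encoder's joint encoding is reserved for reranking the retrieved candidates.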
- See: Bi-Encoder Model, Two-Stage Retrieval Architecture, Reranking Task, Transformer Model, Learning-to-Rank, MonoT5 Model, Neural Information Retrieval, Neural Information Retrieval Model, Legal Language Embedding Model.