LLM Example Selection Strategy
An LLM Example Selection Strategy is a selection strategy that chooses which examples to include in a large language model's in-context learning prompt, based on relevance criteria and diversity constraints.
- AKA: Few-Shot Example Selection Strategy, In-Context Example Selection Method, Demonstration Selection Strategy, Prompt Example Selection Algorithm.
- Context:
- It can typically employ LLM Similarity-Based Selection using llm embedding distances, llm semantic similarity, and llm feature matching (see the similarity-based sketch after this Context list).
- It can typically implement LLM Diversity-Based Selection through llm coverage metrics, llm representative sampling, and llm cluster-based selection.
- It can typically utilize LLM Task-Specific Selection via llm performance prediction, llm difficulty matching, and llm error analysis.
- It can typically support LLM Dynamic Selection with llm contextual adaptation, llm user preferences, and llm real-time retrieval.
- It can typically enable LLM Hybrid Selection combining llm multiple criteria, llm weighted scoring, and llm ensemble methods (as in the hybrid scoring sketch after this list).
- It can often optimize LLM Example Ordering through curriculum learning, progressive difficulty, and logical sequences.
- It can often manage LLM Example Budget via token limitations, cost constraints, and efficiency trade-offs.
- It can often evaluate LLM Selection Effectiveness using downstream performance, generalization metrics, and robustness tests.
- It can range from being a Random LLM Example Selection Strategy to being a Sophisticated LLM Example Selection Strategy, depending on its selection complexity.
- It can range from being a Static LLM Example Selection Strategy to being an Adaptive LLM Example Selection Strategy, depending on its adjustment capability.
- It can range from being a Single-Criterion LLM Example Selection Strategy to being a Multi-Criterion LLM Example Selection Strategy, depending on its selection factors.
- It can range from being an Offline LLM Example Selection Strategy to being an Online LLM Example Selection Strategy, depending on its computation timing.
- ...
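For illustration, the following is a minimal sketch of similarity-based selection under an example budget. The bag-of-words embed stand-in, the cosine and select_examples names, the whitespace token estimate, and the token_budget parameter are illustrative assumptions rather than any particular library's API.

```python
import math
from collections import Counter

def embed(text):
    # Placeholder embedding: a bag-of-words count vector.
    # A real strategy would use a sentence-embedding model here.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def select_examples(query, pool, k=4, token_budget=512):
    # Rank candidates by similarity to the query, then greedily keep the
    # top-ranked examples that still fit inside the token budget.
    q_vec = embed(query)
    ranked = sorted(pool, key=lambda ex: cosine(q_vec, embed(ex["input"])), reverse=True)
    selected, used = [], 0
    for ex in ranked:
        cost = len(ex["input"].split()) + len(ex["output"].split())  # crude token estimate
        if len(selected) < k and used + cost <= token_budget:
            selected.append(ex)
            used += cost
    return selected

demo_pool = [
    {"input": "Translate 'bonjour' to English.", "output": "hello"},
    {"input": "Summarize the paragraph about embeddings.", "output": "A short summary."},
    {"input": "Translate 'gracias' to English.", "output": "thank you"},
]
print(select_examples("Translate 'danke' to English.", demo_pool, k=2))
```

In practice the placeholder embed would be replaced by a sentence-embedding model and the word-count estimate by the target model's tokenizer.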
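A hybrid strategy can combine a relevance score with a diversity penalty. The sketch below uses a maximal-marginal-relevance style greedy loop; the hybrid_select name, the relevance_weight parameter, and the reused bag-of-words helpers are again illustrative assumptions, not a specific system's implementation.

```python
import math
from collections import Counter

def embed(text):
    # Same bag-of-words placeholder as in the similarity sketch above.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def hybrid_select(query, pool, k=4, relevance_weight=0.7):
    # Greedy weighted selection: each step picks the candidate whose score
    # best trades off relevance to the query against redundancy with the
    # examples already chosen (a maximal-marginal-relevance style rule).
    q_vec = embed(query)
    vectors = [embed(ex["input"]) for ex in pool]
    selected, selected_vecs = [], []
    remaining = list(range(len(pool)))
    while remaining and len(selected) < k:
        def score(i):
            relevance = cosine(q_vec, vectors[i])
            redundancy = max((cosine(vectors[i], v) for v in selected_vecs), default=0.0)
            return relevance_weight * relevance - (1 - relevance_weight) * redundancy
        best = max(remaining, key=score)
        selected.append(pool[best])
        selected_vecs.append(vectors[best])
        remaining.remove(best)
    return selected
```

Setting relevance_weight to 1.0 recovers pure similarity-based selection; lower values increasingly favor examples that cover distinct regions of the example pool.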
- Example(s):
- Retrieval-Based LLM Example Selection Strategies, such as:
- BM25 Selection, which uses term frequency for text matching (see the BM25 sketch after these examples).
- Dense Retrieval Selection, which employs neural embeddings for semantic search.
- Contrastive Selection, which identifies discriminative examples via contrast learning.
- Learning-Based LLM Example Selection Strategies, such as:
- Reinforcement Learning Selection, which learns selection policies through reward signals.
- Meta-Learning Selection, which adapts selection patterns across task distributions.
- ...
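As an illustration of retrieval-based selection, the sketch below ranks candidate example texts against a query with a minimal BM25 scorer; the bm25_scores function and its k1/b defaults are illustrative assumptions, not taken from a specific retrieval library.

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    # Score each candidate example text against the query with a minimal
    # BM25: term frequency, inverse document frequency, length normalization.
    tokenized = [d.lower().split() for d in docs]
    avg_len = sum(len(d) for d in tokenized) / len(tokenized)
    df = Counter(t for d in tokenized for t in set(d))
    scores = []
    for d in tokenized:
        tf = Counter(d)
        score = 0.0
        for term in query.lower().split():
            if term not in tf:
                continue
            idf = math.log(1 + (len(tokenized) - df[term] + 0.5) / (df[term] + 0.5))
            score += idf * tf[term] * (k1 + 1) / (tf[term] + k1 * (1 - b + b * len(d) / avg_len))
        scores.append(score)
    return scores

candidates = ["Translate 'bonjour' to English.",
              "Summarize the article about retrieval.",
              "Translate 'gracias' to English."]
query = "Translate 'danke' to English."
ranked = sorted(zip(candidates, bm25_scores(query, candidates)), key=lambda pair: pair[1], reverse=True)
print([doc for doc, _ in ranked[:2]])  # the two best-matching candidate examples
```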
- Counter-Example(s):
- Fixed Template, which uses predetermined examples without a selection process.
- Exhaustive Inclusion, which includes all examples without strategic selection.
- Manual Curation, which relies on human judgment rather than algorithmic selection.
- See: LLM In-Context Learning System, Few-Shot Learning, Example Selection Algorithm, Automated Few-Shot Example Discovery Process, Retrieval System, LLM Prompt Engineering System, Similarity Metric, Diversity Measure.