LLM Factual Accuracy Measure
An LLM Factual Accuracy Measure is an LLM evaluation measure that quantifies the correctness of factual claims made by large language models against verified truth sources.
- AKA: LLM Truth Accuracy Metric, LLM Fact Verification Score, Language Model Factuality Measure, LLM Correctness Metric.
- Context:
- It can typically evaluate Factual Claims against ground truth.
- It can typically detect LLM Hallucination Errors through fact checking.
- It can typically measure Knowledge Reliability in model outputs.
- It can typically use Knowledge Bases for truth verification.
- It can often employ Automated Fact Checking with reference corpora.
- It can often weight Fact Importance by claim centrality.
- It can often struggle with Subjective Claims and contested facts.
- It can range from being a Binary LLM Factual Accuracy Measure to being a Graded LLM Factual Accuracy Measure, depending on its scoring method (see the sketch after this list).
- It can range from being a Domain-Specific LLM Factual Accuracy Measure to being a General LLM Factual Accuracy Measure, depending on its knowledge scope.
- It can range from being a Real-Time LLM Factual Accuracy Measure to being a Static LLM Factual Accuracy Measure, depending on whether its truth source is continuously updated or fixed at a point in time.
- It can range from being a Strict LLM Factual Accuracy Measure to being a Lenient LLM Factual Accuracy Measure, depending on how it treats unverifiable or partially correct claims.
- ...
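The binary/graded distinction above reduces to a small computation once individual claims have been verified. The following is a minimal sketch, not a reference implementation: verify_claim and the toy KNOWLEDGE_BASE are hypothetical stand-ins for a real verified truth source.

```python
from typing import Iterable

# Hypothetical toy knowledge base of verified statements; a real measure
# would query a verified truth source such as a curated reference corpus.
KNOWLEDGE_BASE = {
    "paris is the capital of france",
    "water boils at 100 degrees celsius at sea level",
}

def verify_claim(claim: str) -> bool:
    """Look up a normalized claim; unknown claims count as unverified
    (a strict-tolerance choice, per the strict/lenient range above)."""
    return claim.strip().lower() in KNOWLEDGE_BASE

def binary_accuracy(claims: Iterable[str]) -> bool:
    """Binary measure: the output passes only if every claim verifies."""
    return all(verify_claim(c) for c in claims)

def graded_accuracy(claims: Iterable[str]) -> float:
    """Graded measure: the fraction of claims that verify, in [0, 1]."""
    claims = list(claims)
    if not claims:
        return 0.0
    return sum(verify_claim(c) for c in claims) / len(claims)

if __name__ == "__main__":
    claims = [
        "Paris is the capital of France",
        "The Great Wall is visible from the Moon",
    ]
    print(binary_accuracy(claims))  # False: one claim fails to verify
    print(graded_accuracy(claims))  # 0.5: one of two claims verifies
```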
- Example:
- Domain-Specific Factual Accuracy Measures, such as:
    - a Medical LLM Factual Accuracy Measure, which verifies clinical claims against curated medical references.
    - a Legal LLM Factual Accuracy Measure, which checks claims against statutes and case law.
- Verification Methods, such as:
    - Knowledge Base Lookup, which matches extracted claims against a structured knowledge base.
    - Automated Fact Checking, which verifies claims against reference corpora (combined in the pipeline sketch after this list).
- ...
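The verification methods above compose into a simple automated fact-checking pipeline: extract claims from a generation, verify each against a reference corpus, and aggregate, optionally weighting by claim centrality. The sketch below is illustrative only: extract_claims is a naive sentence splitter (production measures typically decompose text into atomic claims with a model), and REFERENCE_CORPUS and the centrality weights are hypothetical.

```python
import re

# Hypothetical reference corpus of verified statements; real pipelines
# query a large truth source such as an encyclopedia snapshot.
REFERENCE_CORPUS = {
    "the eiffel tower is in paris",
    "the eiffel tower was completed in 1889",
}

def extract_claims(text: str) -> list[str]:
    """Naive claim extraction via sentence splitting; real systems use
    a model to decompose text into atomic factual claims."""
    return [s.strip() for s in re.split(r"[.!?]", text) if s.strip()]

def weighted_factual_accuracy(text: str, weights=None) -> float:
    """Importance-weighted graded accuracy: central claims count more.
    Uniform weights recover the plain graded measure."""
    claims = extract_claims(text)
    if not claims:
        return 0.0
    if weights is None:
        weights = [1.0] * len(claims)
    verified = [c.lower() in REFERENCE_CORPUS for c in claims]
    return sum(w for w, ok in zip(weights, verified) if ok) / sum(weights)

if __name__ == "__main__":
    generation = ("The Eiffel Tower is in Paris. "
                  "The Eiffel Tower is 1000 meters tall.")
    # Assume the first claim is more central; weight it twice as heavily.
    print(weighted_factual_accuracy(generation, weights=[2.0, 1.0]))  # ~0.667
```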
- Counter-Example:
- LLM Reasoning Coherence Measure, which evaluates logical structure rather than factual truth.
- LLM Fluency Measure, which assesses language quality rather than content accuracy.
- See: LLM Evaluation Measure, LLM Hallucination Error, Fact Checking Task, Knowledge Verification, Ground Truth, Information Accuracy, LLM Reasoning Coherence Measure, Performance Metric.