TruthfulQA Benchmarking Task

From GM-RKB
(Redirected from TruthfulQA)
Jump to navigation Jump to search

A TruthfulQA Benchmarking Task is a LLM inference evaluation task that can be used to evaluate the truthfulness and factual accuracy of language model outputs when presented with misleading or adversarial prompts.



References

2021a

2021b