LLM-as-Judge Calibration Method

From GM-RKB

An LLM-as-Judge Calibration Method is an LLM evaluation calibration method that adjusts the confidence scores and probability estimates produced by a large language model acting as a judge, so that those scores better reflect the true likelihood that its judgment decisions are correct.
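One common family of such methods is post-hoc calibration against labeled outcomes. The sketch below, a hypothetical illustration rather than any specific published method, applies temperature scaling: given a set of raw judge confidence scores and ground-truth labels indicating whether each judgment was correct, it grid-searches a temperature `T` that rescales each confidence `p` to `sigmoid(logit(p) / T)`, minimizing negative log-likelihood. All function names and the grid range are assumptions for illustration.

```python
import math


def _clamp(p, eps=1e-6):
    # Keep probabilities away from exactly 0 or 1 so logs/logits are finite.
    return min(max(p, eps), 1 - eps)


def apply_temperature(p, temp):
    """Rescale a confidence score p -> sigmoid(logit(p) / temp)."""
    p = _clamp(p)
    return 1 / (1 + math.exp(-math.log(p / (1 - p)) / temp))


def calibrate_temperature(confidences, labels, temps=None):
    """Grid-search a temperature that minimizes negative log-likelihood.

    confidences: raw judge confidence scores in (0, 1).
    labels: 1 if the judge's decision was correct, else 0.
    Returns the temperature (temp > 1 softens overconfident scores).
    """
    if temps is None:
        temps = [t / 10 for t in range(1, 51)]  # candidate temps 0.1 .. 5.0

    def nll(temp):
        total = 0.0
        for p, y in zip(confidences, labels):
            q = _clamp(apply_temperature(p, temp))  # calibrated confidence
            total -= y * math.log(q) + (1 - y) * math.log(1 - q)
        return total

    return min(temps, key=nll)
```

For an overconfident judge (e.g., 0.99 confidence but only 60% accuracy), the fitted temperature exceeds 1, pulling calibrated confidences down toward the observed accuracy.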