LLM as Judge Performance Metric

From GM-RKB

An LLM as Judge Performance Metric is a performance metric that quantifies the effectiveness, accuracy, and reliability of a large language model when it performs evaluation and judgment tasks, such as scoring or ranking the outputs of other models.
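As a minimal sketch of one such metric, the snippet below computes the agreement rate between an LLM judge's verdicts and human gold labels. The function name, labels, and data are illustrative assumptions, not part of the source definition.

```python
# Hypothetical illustration: scoring an LLM judge against human gold labels.
# The verdict data below is invented for demonstration purposes.

def judge_accuracy(judge_verdicts, human_labels):
    """Fraction of items on which the LLM judge agrees with the human label."""
    if len(judge_verdicts) != len(human_labels):
        raise ValueError("verdict and label lists must be the same length")
    matches = sum(j == h for j, h in zip(judge_verdicts, human_labels))
    return matches / len(human_labels)

# Example: an LLM judge rated five model answers as pass/fail; humans did the same.
judge = ["pass", "fail", "pass", "pass", "fail"]
human = ["pass", "fail", "fail", "pass", "fail"]
print(judge_accuracy(judge, human))  # → 0.8
```

In practice, agreement-based metrics are often supplemented with chance-corrected statistics (e.g. Cohen's kappa) to account for label imbalance.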