LLM Evaluation Benchmark


An LLM Evaluation Benchmark is an AI evaluation benchmark that assesses the performance of a large language model (LLM) on a defined set of tasks, such as question answering, reasoning, or code generation (e.g., MMLU, HellaSwag, HumanEval). A minimal evaluation loop is sketched below.
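
The following is a minimal sketch of how such a benchmark might be scored, assuming a hypothetical <code>model_generate</code> callable, a list of (prompt, reference answer) pairs as the benchmark items, and exact-match accuracy as the metric; these names and the choice of metric are illustrative assumptions, not part of any specific benchmark's implementation.

<pre>
from typing import Callable, List, Tuple

def evaluate_benchmark(
    model_generate: Callable[[str], str],  # hypothetical model interface
    items: List[Tuple[str, str]],          # (prompt, reference answer) pairs
) -> float:
    """Score a model on a benchmark, returning exact-match accuracy:
    the fraction of items where the model's normalized output equals
    the normalized reference answer."""
    correct = 0
    for prompt, reference in items:
        prediction = model_generate(prompt)
        # Normalize whitespace and case before comparing.
        if prediction.strip().lower() == reference.strip().lower():
            correct += 1
    return correct / len(items) if items else 0.0

# Usage with a stand-in "model" (a fixed lookup table) on two toy items.
toy_items = [
    ("What is 2 + 2?", "4"),
    ("Capital of France?", "Paris"),
]
toy_model = lambda prompt: {"What is 2 + 2?": "4"}.get(prompt, "unknown")
print(evaluate_benchmark(toy_model, toy_items))  # prints 0.5
</pre>

Real benchmarks differ mainly in the item format and the scoring rule (e.g., multiple-choice log-likelihood comparison or unit-test execution for code tasks), but this prompt-predict-score loop is the common core.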