LLM Evaluation Python Library

From GM-RKB

An LLM Evaluation Python Library is a Python library that provides testing, benchmarking, and quality-assessment tools for measuring the performance, safety, and reliability of large language model applications and their outputs.
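Such libraries typically run a set of test cases through a model and aggregate per-case scores into a benchmark metric. The following is a minimal sketch of that pattern, not the API of any particular library; `EvalCase`, `exact_match`, and the stand-in `fake_model` are illustrative names introduced here.

```python
from dataclasses import dataclass

@dataclass
class EvalCase:
    """One benchmark item: a prompt and its expected answer (illustrative)."""
    prompt: str
    expected: str

def exact_match(output: str, expected: str) -> float:
    """Score 1.0 when the normalized output equals the expected answer, else 0.0."""
    return 1.0 if output.strip().lower() == expected.strip().lower() else 0.0

def evaluate(model, cases):
    """Run each case through `model` and return the mean exact-match score."""
    scores = [exact_match(model(c.prompt), c.expected) for c in cases]
    return sum(scores) / len(scores)

# Stand-in "model": a lookup table playing the role of an LLM call,
# so the sketch runs without any model dependency.
fake_model = {"2+2=": "4", "Capital of France?": "Paris"}.get

cases = [
    EvalCase("2+2=", "4"),
    EvalCase("Capital of France?", "paris"),
]
print(evaluate(fake_model, cases))  # → 1.0
```

Real libraries extend this skeleton with richer metrics (semantic similarity, LLM-as-judge scoring), safety checks, and reporting, but the run-score-aggregate loop above is the common core.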