Crowdsourced Evaluation in AI

From GM-RKB

A Crowdsourced Evaluation in AI is an AI evaluation methodology that uses a human-in-the-loop benchmarking process, leveraging distributed human judgments to assess AI system outputs.
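As a minimal sketch of how distributed human judgments can be combined, the snippet below aggregates per-item crowd labels by majority vote and reports the support fraction for each winning label. The function name, the item/label data, and the "good"/"bad" rating scheme are illustrative assumptions, not part of any specific benchmark.

```python
from collections import Counter

def aggregate_judgments(judgments):
    """Majority-vote aggregation of crowdsourced labels.

    judgments: dict mapping item id -> list of worker labels.
    Returns: dict mapping item id -> (majority label, support fraction).
    """
    results = {}
    for item, labels in judgments.items():
        counts = Counter(labels)
        label, votes = counts.most_common(1)[0]  # most frequent label
        results[item] = (label, votes / len(labels))
    return results

# Hypothetical crowd ratings of two AI system outputs.
crowd = {
    "output_1": ["good", "good", "bad"],
    "output_2": ["bad", "bad", "bad"],
}
print(aggregate_judgments(crowd))
```

In practice, crowdsourced evaluation pipelines often go beyond simple majority voting (e.g., weighting workers by reliability or fitting pairwise-comparison models), but the same aggregation step is at their core.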