2022 TheCrowdisMadeofPeopleObservati


Subject Headings: Data Annotation Task, Information Retrieval Evaluation.

Notes

Cited By

Quotes

Abstract

Like many other researchers, at Microsoft Bing we use external “crowd” judges to label results from a search engine—especially, although not exclusively, to obtain relevance labels for offline evaluation in the Cranfield tradition. Crowdsourced labels are relatively cheap, and hence very popular, but are prone to disagreements, spam, and various biases which appear to be unexplained “noise” or “error”. In this paper, we provide examples of problems we have encountered running crowd labelling at large scale and around the globe, for search evaluation in particular. We demonstrate effects due to the time of day and day of week that a label is given; fatigue; anchoring; exposure; left-side bias; task switching; and simple disagreement between judges. Rather than simple “error”, these effects are consistent with well-known physiological and cognitive factors. “The crowd” is not some abstract machinery, but is made of people. Human factors that affect people’s judgement behaviour must be considered when designing research evaluations and in interpreting evaluation metrics.
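(Note: the following is illustrative and not from the paper itself.) Disagreement between judges of the kind described in the abstract is commonly quantified with a chance-corrected agreement statistic such as Cohen's kappa. A minimal sketch, assuming two crowd judges label the same items on a graded relevance scale (the judge labels below are invented for illustration):

 from collections import Counter

 def cohens_kappa(labels_a, labels_b):
     """Cohen's kappa: chance-corrected agreement between two judges."""
     assert len(labels_a) == len(labels_b)
     n = len(labels_a)
     # Observed agreement: fraction of items both judges label identically.
     p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
     # Expected agreement under independence, from each judge's label marginals.
     freq_a = Counter(labels_a)
     freq_b = Counter(labels_b)
     p_e = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
     return (p_o - p_e) / (1 - p_e) if p_e < 1 else 1.0

 # Hypothetical example: two judges grading ten results on a 0-3 relevance scale.
 judge_1 = [3, 2, 2, 0, 1, 3, 2, 0, 1, 2]
 judge_2 = [3, 2, 1, 0, 1, 2, 2, 1, 1, 2]
 print(cohens_kappa(judge_1, judge_2))  # ~0.58: moderate agreement

Values near 1 indicate near-perfect agreement and values near 0 indicate agreement no better than chance; systematic effects such as fatigue or anchoring would show up as structured, rather than random, departures from agreement.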

References

Paul Thomas, Gabriella Kazai, Ryen White, and Nick Craswell. (2022). "The Crowd is Made of People: Observations from Large-scale Crowd Labelling." doi:10.1145/3498366.3505815