2023 MAUVE Scores for Generative Models: Theory and Practice


Subject Headings: MAUVE Score.

Notes

Cited By

Quotes

Abstract

Generative artificial intelligence has made significant strides, producing text indistinguishable from human prose and remarkably photorealistic images. Automatically measuring how close the generated data distribution is to the target distribution is central to diagnosing existing models and developing better ones. We present MAUVE, a family of comparison measures between pairs of distributions such as those encountered in the generative modeling of text or images. These scores are statistical summaries of divergence frontiers capturing two types of errors in generative modeling. We explore three approaches to statistically estimate these scores: vector quantization, non-parametric estimation, and classifier-based estimation. We provide statistical bounds for the vector quantization approach. Empirically, we find that the proposed scores paired with a range of f-divergences and statistical estimation methods can quantify the gaps between the distributions of human-written text and those of modern neural language models by correlating with human judgments and identifying known properties of the generated texts. We demonstrate in the vision domain that MAUVE can identify known properties of generated images on par with or better than existing metrics. In conclusion, we present practical recommendations for using MAUVE effectively with language and image modalities.
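The following is a minimal sketch of the vector-quantization style of estimation mentioned in the abstract, not the authors' reference implementation. It assumes pre-computed feature embeddings for the human (target) and model samples; the function name mauve_vq_sketch and the defaults num_bins, scale, and num_lambdas are illustrative choices, not values prescribed by the paper.

```python
import numpy as np
from sklearn.cluster import KMeans


def mauve_vq_sketch(p_features, q_features, num_bins=50, scale=5.0, num_lambdas=100):
    """Rough vector-quantization estimate of a MAUVE-style score.

    p_features, q_features: (n, d) arrays of embeddings for the target
    (e.g., human-written) and model-generated samples.
    Returns a score in (0, 1]; higher means the two distributions are closer.
    """
    # 1. Jointly quantize both sample sets into a shared set of discrete bins.
    joint = np.vstack([p_features, q_features])
    labels = KMeans(n_clusters=num_bins, n_init=10, random_state=0).fit_predict(joint)
    p_labels = labels[: len(p_features)]
    q_labels = labels[len(p_features):]

    # 2. Form histograms over the shared bins for each distribution.
    p_hist = np.bincount(p_labels, minlength=num_bins) / len(p_labels)
    q_hist = np.bincount(q_labels, minlength=num_bins) / len(q_labels)

    def kl(a, b):
        # KL(a || b), summed over the support of a.
        mask = a > 0
        return float(np.sum(a[mask] * np.log(a[mask] / b[mask])))

    # 3. Trace the divergence frontier with mixtures R_lambda = lambda*P + (1-lambda)*Q.
    #    Interior lambdas keep the mixture strictly positive on both supports.
    lambdas = np.linspace(0.0, 1.0, num_lambdas + 2)[1:-1]
    xs, ys = [], []
    for lam in lambdas:
        r = lam * p_hist + (1 - lam) * q_hist
        xs.append(np.exp(-scale * kl(q_hist, r)))  # one type of error
        ys.append(np.exp(-scale * kl(p_hist, r)))  # the other type of error

    # 4. The score is the area under this divergence curve (trapezoidal rule).
    xs, ys = np.array(xs), np.array(ys)
    order = np.argsort(xs)
    dx = np.diff(xs[order])
    ym = 0.5 * (ys[order][1:] + ys[order][:-1])
    return float(np.sum(dx * ym))
```

In practice one would typically use the authors' open-source implementation (released as the mauve-text Python package), which also handles feature extraction from raw text; the sketch above only illustrates the quantize-then-compare idea behind the divergence-frontier summary.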

References

; Yejin Choi, Sean Welleck, Swabha Swayamdipta, Krishna Pillutla, Lang Liu, John Thickstun, Rowan Zellers, Sewoong Oh, and Zaid Harchaoui. (2023). "MAUVE Scores for Generative Models: Theory and Practice."