2020 RevisitingFewSampleBERTFineTuni


Subject Headings: Fine-Tuned BERT Text Classification Algorithm.

Notes

Cited By

Quotes

Abstract

This paper is a study of the fine-tuning of BERT contextual representations, with a focus on commonly observed instabilities in few-sample scenarios. We identify several factors that cause this instability: the common use of a non-standard optimization method with biased gradient estimation; the limited applicability of significant parts of the BERT network to downstream tasks; and the prevalent practice of using a pre-determined, small number of training iterations. We empirically test the impact of these factors and identify alternative practices that resolve the commonly observed instability of the process. In light of these observations, we revisit recently proposed methods to improve few-sample fine-tuning with BERT and re-evaluate their effectiveness. Generally, we observe that the impact of these methods diminishes significantly with our modified process.
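
The non-standard optimizer the abstract refers to is the variant of Adam in the original BERT codebase (commonly called BERTAdam), which omits Adam's bias-correction terms. Below is a minimal NumPy sketch of a single Adam update with a flag to toggle that correction; the function name and hyperparameter defaults are illustrative assumptions, not taken from the paper's code.

import numpy as np

def adam_step(param, grad, m, v, t, lr=2e-5,
              beta1=0.9, beta2=0.999, eps=1e-6, correct_bias=True):
    # Exponential moving averages of the gradient and its square.
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    if correct_bias:
        # The correction terms matter most when the step count t is
        # small, i.e. exactly the short few-sample fine-tuning runs
        # studied here; correct_bias=False mimics the BERTAdam variant.
        m_hat = m / (1 - beta1 ** t)
        v_hat = v / (1 - beta2 ** t)
    else:
        m_hat, v_hat = m, v
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v

The abstract's point about the limited applicability of parts of the network to downstream tasks motivates re-initializing the topmost transformer layers before fine-tuning. The sketch below uses the Hugging Face transformers library; note that _init_weights is a private helper of that library, and num_reinit is an illustrative choice rather than a value prescribed by the paper.

from transformers import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained("bert-base-uncased")

num_reinit = 2  # illustrative depth; shallower or deeper re-initialization is possible
for layer in model.bert.encoder.layer[-num_reinit:]:
    # Reset the top transformer blocks to fresh initialization while
    # keeping the pre-trained weights of all lower layers.
    layer.apply(model._init_weights)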

References


Tianyi Zhang, Felix Wu, Arzoo Katiyar, Kilian Q. Weinberger, and Yoav Artzi (2020). "Revisiting Few-sample BERT Fine-tuning." doi:10.48550/arXiv.2006.05987