2014 HowTransferableAreFeaturesinDeepNeuralNetworks

From GM-RKB

Subject Headings: Transfer Learning Algorithm.

Notes

Cited By

Quotes

Abstract

Many deep neural networks trained on natural images exhibit a curious phenomenon in common: on the first layer they learn features similar to Gabor filters and color blobs. Such first-layer features appear not to be specific to a particular dataset or task, but general in that they are applicable to many datasets and tasks. Features must eventually transition from general to specific by the last layer of the network, but this transition has not been studied extensively. In this paper we experimentally quantify the generality versus specificity of neurons in each layer of a deep convolutional neural network and report a few surprising results. Transferability is negatively affected by two distinct issues: (1) the specialization of higher layer neurons to their original task at the expense of performance on the target task, which was expected, and (2) optimization difficulties related to splitting networks between co-adapted neurons, which was not expected. In an example network trained on ImageNet, we demonstrate that either of these two issues may dominate, depending on whether features are transferred from the bottom, middle, or top of the network. We also document that the transferability of features decreases as the distance between the base task and target task increases, but that transferring features even from distant tasks can be better than using random features. A final surprising result is that initializing a network with transferred features from almost any number of layers can produce a boost to generalization that lingers even after fine-tuning to the target dataset.
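The manipulation the abstract describes, copying the first n layers of a network trained on a base task into a network for a target task and then either freezing them or fine-tuning them along with the remaining layers, can be sketched in a few lines. The following is a minimal, hypothetical PyTorch illustration with a toy CNN; the names make_cnn and transfer_first_n_layers and the layer sizes are illustrative assumptions, not the authors' original ImageNet-scale implementation.

```python
# Minimal sketch of layer transfer between a base-task and a target-task network.
# The small CNN below is a stand-in for the ImageNet-scale network in the paper;
# layer names, sizes, and class counts are illustrative only.
import copy
import torch
import torch.nn as nn


def make_cnn(num_classes: int) -> nn.Sequential:
    """A toy convolutional network; each Sequential entry counts as one 'layer'."""
    return nn.Sequential(
        nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
        nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
        nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1)),
        nn.Sequential(nn.Flatten(), nn.Linear(64, num_classes)),
    )


def transfer_first_n_layers(base: nn.Sequential,
                            target: nn.Sequential,
                            n: int,
                            fine_tune: bool) -> nn.Sequential:
    """Copy the first n layers from the base-task network into the target-task
    network. If fine_tune is False the copied layers are frozen; if True they
    continue training together with the target task's upper layers."""
    for i in range(n):
        target[i] = copy.deepcopy(base[i])
        for p in target[i].parameters():
            p.requires_grad = fine_tune
    return target


# Example: transfer the first 2 layers from a network trained on the base task
# to a fresh network for the target task, keeping the transferred layers frozen.
base_net = make_cnn(num_classes=100)    # pretend this was trained on the base task
target_net = make_cnn(num_classes=100)  # randomly initialized for the target task
target_net = transfer_first_n_layers(base_net, target_net, n=2, fine_tune=False)

# Only parameters with requires_grad=True (here, the untransferred upper layers)
# are handed to the optimizer for target-task training.
optimizer = torch.optim.SGD(
    (p for p in target_net.parameters() if p.requires_grad), lr=0.01
)
```

Comparing target-task performance as n varies, with and without fine-tuning of the transferred layers, is the kind of measurement the paper uses to separate the two issues named above (co-adaptation versus specialization).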

References


Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. (2014). "How Transferable Are Features in Deep Neural Networks?" In: Advances in Neural Information Processing Systems 27 (NIPS 2014).