Trained Deep Neural Network
- It can (often) be an Overparameterized Model.
- See: Over-Parameterized Model, Neural Network File.
- (ONNX Model Zoo) ⇒ https://github.com/onnx/models
- QUOTE: ... The ONNX Model Zoo is a collection of pre-trained, state-of-the-art models in the ONNX format contributed by community members like you. Accompanying each model are Jupyter notebooks for model training and running inference with the trained model. The notebooks are written in Python and include links to the training dataset as well as references to the original paper that describes the model architecture. ...
- (Frankle & Carbin, 2019) ⇒ Jonathan Frankle, and Michael Carbin. (2019). “The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks.” In: International Conference on Learning Representations.
- QUOTE: ... In practice, neural networks tend to be dramatically overparameterized. Distillation (Ba & Caruana, 2014; Hinton et al., 2015) and pruning (LeCun et al., 1990; Han et al., 2015) rely on the fact that parameters can be reduced while preserving accuracy. Even with sufficient capacity to memorize training data, networks naturally learn simpler functions (Zhang et al., 2016; Neyshabur et al., 2014; Arpit et al., 2017). Contemporary experience (Bengio et al., 2006; Hinton et al., 2015; Zhang et al., 2016) and Figure 1 suggest that overparameterized networks are easier to train. We show that dense networks contain sparse subnetworks capable of learning on their own starting from their original initializations. Several other research directions aim to train small or sparse networks. ...
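- The magnitude-based pruning referenced above (Han et al., 2015) keeps only the largest-magnitude parameters of a trained network. A minimal NumPy sketch of this idea follows; the function name `magnitude_prune` and the threshold rule are illustrative, not the authors' implementation:

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of a trained weight tensor.

    Returns the pruned weights and a boolean mask of surviving parameters
    (a sketch of magnitude pruning in the spirit of Han et al., 2015).
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of weights to remove
    if k == 0:
        return weights.copy(), np.ones(weights.shape, dtype=bool)
    # k-th smallest magnitude serves as the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask, mask

# Example: prune 90% of a randomly "trained" 100x100 weight matrix
rng = np.random.default_rng(0)
w = rng.normal(size=(100, 100))
pruned, mask = magnitude_prune(w, 0.9)
print(mask.mean())  # fraction of weights kept, here 0.1
```

In the lottery-ticket setting, such a mask would then be applied to the network's original initialization rather than its trained weights, and the resulting sparse subnetwork retrained from scratch.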