2016 LearningtoLearnbyGradientDescentbyGradientDescent

From GM-RKB

Subject Headings: Automated Supervised ML.

Notes

Cited By

Quotes

Abstract

The move from hand-designed features to learned features in machine learning has been wildly successful. In spite of this, optimization algorithms are still designed by hand. In this paper we show how the design of an optimization algorithm can be cast as a learning problem, allowing the algorithm to learn to exploit structure in the problems of interest in an automatic way. Our learned algorithms, implemented by LSTMs, outperform generic, hand-designed competitors on the tasks for which they are trained, and also generalize well to new tasks with similar structure. We demonstrate this on a number of tasks, including simple convex problems, training neural networks, and styling images with neural art.
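The core idea — treating the parameter-update rule itself as a learned function, whose own parameters are trained by gradient descent on the optimizee's loss after unrolling the optimizer — can be sketched in miniature. The toy below is a hedged simplification, not the paper's method: it replaces the LSTM optimizer with a single learned scalar step size `phi`, and meta-trains `phi` with a finite-difference gradient on the unrolled loss of a simple convex problem; all names and constants are illustrative.

```python
import numpy as np

# Toy "learning to learn" sketch (not the paper's LSTM optimizer):
# the update rule is theta <- theta - phi * grad, where phi is the
# optimizer's only parameter. phi is itself trained by gradient
# descent on the loss reached after unrolling the optimizer --
# "gradient descent by gradient descent" in miniature.

def optimizee_loss(theta):
    # Simple convex problem: f(theta) = ||theta||^2.
    return np.sum(theta ** 2)

def optimizee_grad(theta):
    return 2.0 * theta

def run_optimizer(phi, theta0, steps=5):
    """Unroll the learned update rule and return the final loss."""
    theta = theta0.copy()
    for _ in range(steps):
        theta = theta - phi * optimizee_grad(theta)
    return optimizee_loss(theta)

def meta_train(phi=0.0, meta_lr=0.001, meta_steps=200, eps=1e-4):
    """Train phi by (finite-difference) gradient descent on the unrolled loss."""
    rng = np.random.default_rng(0)
    for _ in range(meta_steps):
        theta0 = rng.standard_normal(5)  # fresh optimizee instance each meta-step
        # Finite-difference estimate of d(final loss)/d(phi).
        g = (run_optimizer(phi + eps, theta0)
             - run_optimizer(phi - eps, theta0)) / (2 * eps)
        phi -= meta_lr * g
    return phi

phi = meta_train()
```

In the paper the scalar `phi` is replaced by the weights of a coordinatewise LSTM that maps gradient histories to updates, and the meta-gradient is computed by backpropagating through the unrolled optimization rather than by finite differences.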

References

* (Andrychowicz et al., 2016) ⇒ Marcin Andrychowicz, Misha Denil, Sergio Gómez Colmenarejo, Matthew W. Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, and Nando de Freitas. (2016). "Learning to Learn by Gradient Descent by Gradient Descent."