2012 NASAAchievingLowerRegretsandFasterRatesviaAdaptiveStepsizes
- (Ouyang & Gray, 2012) ⇒ Hua Ouyang, and Alexander Gray. (2012). “NASA: Achieving Lower Regrets and Faster Rates via Adaptive Stepsizes.” In: Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD-2012). ISBN:978-1-4503-1462-6 doi:10.1145/2339530.2339557
Subject Headings:
Notes
Cited By
- http://scholar.google.com/scholar?q=%222012%22+NASA%3A+Achieving+Lower+Regrets+and+Faster+Rates+via+Adaptive+Stepsizes
- http://dl.acm.org/citation.cfm?id=2339530.2339557&preflayout=flat#citedby
Quotes
Author Keywords
- Adaptive learning; gradient methods; online computation; online convex optimization; online learning; parameter learning; stochastic optimization
Abstract
The classic Stochastic Approximation (SA) method achieves optimal rates under the black-box model. This optimality does not rule out better algorithms when more information about functions and data is available.
We present a family of Noise Adaptive Stochastic Approximation (NASA) algorithms for online convex optimization and stochastic convex optimization. NASA is an adaptive variant of Mirror Descent Stochastic Approximation. Its novelty lies in its practical variation-dependent stepsizes and stronger theoretical guarantees. We show that, compared with state-of-the-art adaptive and non-adaptive SA methods, lower regrets and faster rates can be achieved under low-variation assumptions.
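The idea of variation-dependent stepsizes can be illustrated with a minimal sketch: projected online gradient descent whose stepsize shrinks with the accumulated variation of the observed gradient sequence, so that low-variation inputs retain large steps and faster progress. This is a hypothetical illustration of the general principle, not the paper's exact NASA update; the function name, parameters, and variation measure below are assumptions.

```python
import numpy as np

def variation_adaptive_sgd(grads, radius=1.0, eta0=1.0):
    """Projected online gradient descent with a variation-dependent stepsize.

    A sketch of the adaptive-stepsize idea: the stepsize decays with the
    accumulated squared differences between consecutive gradients, rather
    than with the iteration count alone.

    grads:  list of gradient vectors observed online.
    radius: L2-ball radius of the feasible set (projection target).
    eta0:   base stepsize constant.
    """
    x = np.zeros_like(grads[0], dtype=float)
    iterates = [x.copy()]
    variation = 0.0
    prev_g = np.zeros_like(x, dtype=float)
    for g in grads:
        # Accumulate the gradient variation: sum of squared differences
        # between consecutive gradients in the observed sequence.
        variation += float(np.dot(g - prev_g, g - prev_g))
        prev_g = g
        # Low observed variation keeps the stepsize large; high variation
        # forces it down, mimicking a noise/variation-adaptive schedule.
        eta = eta0 / np.sqrt(1.0 + variation)
        x = x - eta * g
        # Project back onto the L2 ball of the given radius.
        norm = np.linalg.norm(x)
        if norm > radius:
            x = x * (radius / norm)
        iterates.append(x.copy())
    return iterates

# Example: a constant gradient sequence has zero variation after the first
# step, so the stepsize stays large and the iterate reaches the boundary.
grads = [np.array([1.0, 0.0])] * 5
xs = variation_adaptive_sgd(grads)
```

With the constant-gradient example above, the iterate is driven to the boundary of the unit ball in a few steps, whereas a rapidly varying gradient sequence would shrink the stepsize and slow the updates.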
References
| | Author | volume | Date Value | title | type | journal | titleUrl | doi | note | year |
|---|---|---|---|---|---|---|---|---|---|---|
| 2012 NASAAchievingLowerRegretsandFasterRatesviaAdaptiveStepsizes | Hua Ouyang; Alexander Gray | | 2012 | NASA: Achieving Lower Regrets and Faster Rates via Adaptive Stepsizes | | | | 10.1145/2339530.2339557 | | 2012 |