2011 RandomizedAlgorithmsforMatrices
* ([[2011_RandomizedAlgorithmsforMatrices|Mahoney, 2011]]) ⇒ [[author::Michael W. Mahoney]]. ([[year::2011]]). “Randomized Algorithms for Matrices and Data.” Now Publishers Inc. ISBN:1601985061, 9781601985064
<B>Subject Headings:</B> [[Approximate Matrix Analysis]], [[Very Large Matrix Task]].
== Notes ==

== Cited By ==
* http://scholar.google.com/scholar?q=%222011%22+Randomized+Algorithms+for+Matrices+and+Data
* http://dl.acm.org/citation.cfm?id=2161570&preflayout=flat#citedby
== Quotes ==

=== Abstract ===
[[Randomized algorithm]]s for [[very large matrix problem]]s have received a great deal of attention in recent years. </s>
Much of [[2011_RandomizedAlgorithmsforMatrices|this work]] was motivated by problems in [[large-scale data analysis]], largely since [[matrice]]s are [[popular data structure|popular structure]]s with which to [[model data]] drawn from a wide range of [[application domain]]s, and the success of [[this line of work]] opens the possibility of performing [[matrix-based computation]]s with [[very large matrix|truly massive data set]]s. </s>
Originating within [[theoretical computer science]], [[this work]] was [[subsequently extended]] and applied in important ways by [[numerical analysis researcher|researcher]]s from [[numerical linear algebra]], [[statistical computing|statistics]], [[applied mathematics]], [[data analysis]], and [[machine learning]], as well as [[domain scientist]]s. </s>
[[2011_RandomizedAlgorithmsforMatrices|Randomized Algorithms for Matrices and Data]] provides a [[detailed overview]], appropriate for both [[student]]s and [[researcher]]s from all of these areas, of [[recent work]] on the [[algorithm theory|theory]] of [[randomized matrix algorithm]]s as well as the application of those ideas to the [[solution]] of [[practical problem]]s in [[large-scale data analysis]]. </s>
By focusing on ubiquitous and fundamental problems such as [[least squares approximation]] and [[low-rank matrix approximation]] that have been at the center of recent developments, an emphasis is placed on a few [[simple core idea]]s that underlie not only recent [[theoretical advance]]s but also the usefulness of these [[algorithmic tool]]s in [[large-scale data application]]s. </s>
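The [[least squares approximation]] problem mentioned above can be illustrated with the classic "sketch-and-solve" approach: compress a tall problem with a random projection, then solve the small compressed problem exactly. The minimal NumPy sketch below uses a dense Gaussian sketching matrix for clarity; the monograph itself also develops structured transforms and leverage-score sampling, and all sizes and variable names here are illustrative, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tall least-squares problem: minimize ||A x - b||_2 with n >> d.
n, d = 5000, 20
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true + 0.01 * rng.standard_normal(n)

# Sketch-and-solve: compress the n rows down to s rows (d < s << n)
# with a random Gaussian sketch S, then solve min ||S A x - S b||_2.
s = 200
S = rng.standard_normal((s, n)) / np.sqrt(s)
x_sketch, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)

# Compare against the exact least-squares solution on the full data.
x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)
rel_err = np.linalg.norm(x_sketch - x_exact) / np.linalg.norm(x_exact)
print(rel_err)
```

Because a Gaussian sketch with s rows is (with high probability) a subspace embedding for the d-dimensional column space of [A, b], the sketched solution is close to the exact one while the solve itself touches only an s-by-d matrix instead of n-by-d.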
== References ==
{{#ifanon:|
* 1. Paramveer S. Dhillon, Yichao Lu, Dean Foster, Lyle Ungar, New Subsampling Algorithms for Fast Least Squares Regression, Proceedings of the 26th International Conference on Neural Information Processing Systems, p.360-368, December 05-10, 2013, Lake Tahoe, Nevada
* 2. Kenneth L. Clarkson, Petros Drineas, Malik Magdon-Ismail, Michael W. Mahoney, Xiangrui Meng, David P. Woodruff, The Fast Cauchy Transform and Faster Robust Linear Regression, Proceedings of the Twenty-Fourth Annual ACM-SIAM Symposium on Discrete Algorithms, p.466-477, January 06-08, 2013, New Orleans, Louisiana
* 3. Petros Drineas, Malik Magdon-Ismail, Michael W. Mahoney, David P. Woodruff, Fast Approximation of Matrix Coherence and Statistical Leverage, Proceedings of the 29th International Conference on Machine Learning, p.1051-1058, June 26-July 01, 2012, Edinburgh, Scotland
* 4. Jiyan Yang, Yin-Lam Chow, [[Christopher Ré]], Michael W. Mahoney, Weighted SGD for ℓ<sub><i>p</i></sub> Regression with Randomized Preconditioning, Proceedings of the Twenty-Seventh Annual ACM-SIAM Symposium on Discrete Algorithms, p.558-569, January 10-12, 2016, Arlington, Virginia
* 5. Erich L. Kaltofen, Zhengfeng Yang, Sparse Multivariate Function Recovery with a High Error Rate in the Evaluations, Proceedings of the 39th International Symposium on Symbolic and Algebraic Computation, July 23-25, 2014, Kobe, Japan
* 6. Xiangrui Meng, Michael W. Mahoney, Low-distortion Subspace Embeddings in Input-sparsity Time and Applications to Robust Linear Regression, Proceedings of the 45th Annual ACM Symposium on Theory of Computing, June 01-04, 2013, Palo Alto, California, USA
* 9. Michael W. Mahoney, Approximate Computation and Implicit Regularization for Very Large-scale Data Analysis, Proceedings of the 31st Symposium on Principles of Database Systems, May 21-23, 2012, Scottsdale, Arizona, USA
* 10. SoftAir, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.85 N.C, p.1-18, July 2015
}}
{| class="wikitable"
! !! Author !! Title !! Year
|-
| 2011 RandomizedAlgorithmsforMatrices || Michael W. Mahoney || Randomized Algorithms for Matrices and Data || 2011
|}