Pseudo-Inverse Algorithm

From GM-RKB
=== 2013 ===

* ([[2013_2013SpecialIssueLearningthePseu|Tapson & Van Schaik, 2013]]) ⇒ [[author::J. Tapson]], and [[author::A. Van Schaik]]. ([[year::2013]]). “[https://www.westernsydney.edu.au/__data/assets/pdf_file/0003/783156/Tapson,_van_Schaik_-_2013_-_Learning_the_pseudoinverse_solution_to_network_weights.pdf 2013 Special Issue: Learning the Pseudoinverse Solution to Network Weights].” In: Neural Networks Journal, 45. [http://dx.doi.org/10.1016/j.neunet.2013.02.008 doi:10.1016/j.neunet.2013.02.008]
** QUOTE: The [[solution]] to the [[network weight]]s is exactly the same as that which would be [[calculate]]d by [[singular value decomposition]]. It [[converge]]s with a single [[forward iteration]] per [[input data]] [[sample]], and as such is ideal for [[realtime]] [[online computation]] of the [[pseudoinverse solution]]. It requires significantly less [[memory]] than the [[SVD]] method, as its [[memory requirement]] scales as the [[square]] of the [[size]] of the [[hidden layer]], whereas the [[SVD]] [[memory requirement]] scales with the [[product]] of the [[hidden layer]] [[size]] and the [[size]] of the [[training data set]]. Given that the [[data set]] [[size]] should significantly exceed the [[hidden layer]] [[size]], the former is advantageous. We call the [[algorithm]] [[OPIUM]] (Online PseudoInverse Update Method).<P> [[OPIUM]] is adapted from an [[iterative method]] for [[computing]] the [[pseudoinverse]], known as [[Greville’s method]] ([[Greville, 1960]]). The existence of [[OPIUM]], and its [[biological plausibility]], suggests that there is no reason why a [[biological neural network]] would not arrive at the same [[weight]]s that are computed using a [[singular value decomposition]], and therefore that this method of [[synthesizing]] [[network structure]]s may be used without the fear that it is [[biologically implausible]].
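The quote above says OPIUM is adapted from Greville’s method, which builds the pseudoinverse incrementally, one column at a time, instead of decomposing the whole matrix at once. A minimal NumPy sketch of Greville’s column-recursive update follows; the function name `greville_pinv` and the zero-tolerance `1e-12` are choices made here for illustration, not details from the paper:

```python
import numpy as np

def greville_pinv(A):
    """Compute the Moore-Penrose pseudoinverse of A by Greville's
    column-recursive method: start from the first column, then update
    the pseudoinverse as each further column is appended."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    a1 = A[:, :1]                           # first column (m x 1)
    denom = float(a1.T @ a1)
    Ap = a1.T / denom if denom > 0 else a1.T  # pinv of a single column
    for k in range(1, n):
        ak = A[:, k:k + 1]                  # next column to append
        d = Ap @ ak                         # coefficients of ak in the current range
        c = ak - A[:, :k] @ d               # component of ak outside the range
        if np.linalg.norm(c) > 1e-12:       # new column is linearly independent
            b = c.T / float(c.T @ c)
        else:                               # new column is dependent on earlier ones
            b = (d.T @ Ap) / (1.0 + float(d.T @ d))
        Ap = np.vstack([Ap - d @ b, b])     # grow the pseudoinverse by one row
    return Ap
```

The result agrees with `np.linalg.pinv` (which uses SVD), matching the quote’s claim that the iterative update reaches the same solution as singular value decomposition; the update processes one column per step, which is what makes an online variant possible.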
  
 
=== 2008 ===

Revision as of 13:37, 13 July 2019

A Pseudo-Inverse Algorithm is a Matrix Decomposition Algorithm that can solve a least-squares system such that each column vector of the solution has a minimum norm.
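The minimum-norm property can be seen in a small underdetermined system, where infinitely many exact solutions exist but the pseudoinverse picks the one of smallest norm. A brief NumPy illustration (the matrices here are example data chosen for this sketch):

```python
import numpy as np

# Underdetermined system: 2 equations, 3 unknowns, so exact
# solutions form a whole line; the pseudoinverse selects the
# one with minimum Euclidean norm.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
b = np.array([1.0, 1.0])

x = np.linalg.pinv(A) @ b   # minimum-norm least-squares solution

# Any vector in the null space of A (e.g. (1, -2, 1)) can be added
# to x to get another exact solution, but only with a larger norm.
x_alt = x + np.array([1.0, -2.0, 1.0])
```

Among all `x_alt` of this form, `x` is the unique one of smallest norm, which is exactly the property stated in the definition above.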



== References ==

=== 2019 ===

# Moore, E. H. (1920). “On the reciprocal of the general algebraic matrix.” In: Bulletin of the American Mathematical Society, 26 (9): 394–95. doi:10.1090/S0002-9904-1920-03322-7.
# Bjerhammar, Arne (1951). “Application of calculus of matrices to method of least squares; with special references to geodetic calculations.” In: Trans. Roy. Inst. Tech. Stockholm, 49.
# Penrose, Roger (1955). “A generalized inverse for matrices.” In: Proceedings of the Cambridge Philosophical Society, 51 (3): 406–13. doi:10.1017/S0305004100030401.
