Gradient Descent-based Learning Algorithm


A '''Gradient Descent-based Learning Algorithm''' is a [[supervised learning algorithm]] that is a [[gradient-descent optimization algorithm]].
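As a minimal illustrative sketch (not taken from the references below), such an algorithm can be written in Python as full-batch gradient descent on a mean-squared-error loss for linear regression:

<syntaxhighlight lang="python">
# Minimal sketch: fit a linear model y ~ X @ w by gradient descent on MSE loss.
import numpy as np

def gradient_descent_fit(X, y, learning_rate=0.1, n_iters=1000):
    """Return weights w that approximately minimize ||X @ w - y||^2 / n."""
    n_samples, n_features = X.shape
    w = np.zeros(n_features)
    for _ in range(n_iters):
        residual = X @ w - y                          # prediction error
        grad = (2.0 / n_samples) * (X.T @ residual)   # gradient of the MSE loss
        w -= learning_rate * grad                     # step against the gradient
    return w

# Usage: recover w close to [2, -3] from noisy synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = X @ np.array([2.0, -3.0]) + 0.01 * rng.normal(size=200)
print(gradient_descent_fit(X, y))
</syntaxhighlight>

Each iteration moves the weights a learning-rate-sized step against the loss gradient; the variants cataloged under References below modify how that step is computed.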



== References ==

=== 2018 ===
* (Wijaya et al., 2018) ⇒ Galih Praja Wijaya, Dendi Handian, Imam Fachmi Nasrulloh, Lala Septem Riza, Rani Megasari, Enjun Junaeti (2018), [https://cran.r-project.org/web/packages/gradDescent/index.html "gradDescent: Gradient Descent for Regression Tasks"], [https://cran.r-project.org/web/packages/gradDescent/gradDescent.pdf "Reference manual (PDF)"].
** QUOTE: An implementation of various [[learning algorithm]]s based on [[Gradient Descent]] for dealing with [[regression task]]s. The variants of the [[gradient descent algorithm]] are:
*** [[Mini-Batch Gradient Descent (MBGD)]], an [[optimization]] that uses the [[training data]] partially to reduce the [[computation load]].
*** [[Stochastic Gradient Descent (SGD)]], an [[optimization]] that uses [[random data]] in [[learning]] to reduce the computation load drastically.
*** [[Stochastic Average Gradient (SAG)]], an [[SGD-based algorithm]] that averages its [[stochastic step]]s.
*** [[Momentum Gradient Descent (MGD)]], an optimization that speeds up [[Gradient Descent-based Learning Algorithm|gradient descent learning]].
*** [[Accelerated Gradient Descent (AGD)]], an [[optimization]] that accelerates [[Gradient Descent-based Learning Algorithm|gradient descent learning]].
*** [[Adagrad]], a [[gradient-descent-based algorithm]] that accumulates previous [[cost]]s to do [[adaptive learning]].
*** [[Adadelta]], a [[gradient-descent-based algorithm]] that uses a [[hessian approximation]] to do [[adaptive learning]].
*** [[RMSprop]], a [[gradient-descent-based algorithm]] that combines the [[adaptive learning]] abilities of [[Adagrad]] and [[Adadelta]].
*** [[Adam]], a [[gradient-descent-based algorithm]] that uses [[mean]] and [[variance moment]]s to do [[adaptive learning]].
*** [[Stochastic Variance Reduce Gradient (SVRG)]], an [[optimization|optimized]] [[SGD-based algorithm]] that accelerates convergence by reducing the variance of the [[gradient]].
*** [[Semi Stochastic Gradient Descent (SSGD)]], an [[SGD-based algorithm]] that combines [[GD]] and [[SGD]] to accelerate convergence by choosing one of the gradients at a time.
*** [[Stochastic Recursive Gradient Algorithm (SARAH)]], an optimization algorithm similar to [[SVRG]] that accelerates convergence by accumulating stochastic information.
*** [[Stochastic Recursive Gradient Algorithm+ (SARAHPlus)]], a practical [[SARAH]] variant that accelerates convergence and provides a possibility of earlier termination.
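The per-iteration update rules behind several of these variants are compact. The following is a minimal sketch, written in Python with NumPy rather than the package's own R, of three of the rules named above (momentum, Adagrad, and Adam); it is an illustration, not the gradDescent implementation:

<syntaxhighlight lang="python">
import numpy as np

def momentum_step(w, grad, velocity, lr=0.01, beta=0.9):
    """Momentum (MGD): accumulate a velocity to speed up descent."""
    velocity = beta * velocity - lr * grad
    return w + velocity, velocity

def adagrad_step(w, grad, cache, lr=0.01, eps=1e-8):
    """Adagrad: accumulate squared gradients so each parameter adapts its own rate."""
    cache = cache + grad ** 2
    return w - lr * grad / (np.sqrt(cache) + eps), cache

def adam_step(w, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """Adam: bias-corrected mean (m) and variance (v) moments of the gradient."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)   # bias correction for the mean moment
    v_hat = v / (1 - b2 ** t)   # bias correction for the variance moment
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

# Usage: minimize f(w) = ||w - target||^2 with the Adam rule.
target = np.array([1.0, -2.0])
w, m, v = np.zeros(2), np.zeros(2), np.zeros(2)
for t in range(1, 2001):
    grad = 2 * (w - target)
    w, m, v = adam_step(w, grad, m, v, t, lr=0.05)
print(w)  # approaches [1, -2]
</syntaxhighlight>

All three keep per-parameter state between steps (velocity, squared-gradient cache, or moment estimates), which is what distinguishes them from plain gradient descent.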

=== 1998 ===

=== 1990 ===