Dynamic Programming Algorithm
A [[Dynamic Programming Algorithm|dynamic programming algorithm]] is an [[iterative algorithm]] that follows an [[algorithm strategy]] applicable to a [[decomposable task]] whose [[optimal solution]] contains [[optimal solution]]s to its [[subtask]]s, so that [[task solution|subtask solution]]s can be [[stored]] and [[reused]] in a [[bottom-up fashion]].
* <B><U>Context</U>:</B>
** It can be a commonly used [[Algorithm Strategy]].
** It can expend [[Memory Resource]]s.
** It can be an [[Algorithm Strategy]] used for [[Optimization Algorithm]]s.
* <B><U>Example(s)</U>:</B>
** a [[Forward-Backward Algorithm]],
** a [[Smith-Waterman Local Alignment Algorithm]] ([[Smith & Waterman, 1981]]),
** a [[Value Iteration Algorithm]] ([[Bellman, 1957]]),
** a [[Dijkstra Shortest Path Algorithm]],
** a [[Viterbi Algorithm]],
** …
* <B><U>Counter-Example(s)</U>:</B>
** a [[Divide and Conquer Algorithm]].
* <B><U>See:</U></B> [[Recursive Algorithm]], [[Linear Programming Algorithm]], [[Markov Decision Processes]].
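The bottom-up store-and-reuse pattern in the definition above can be made concrete with a minimal sketch (an illustrative example, not drawn from the cited sources; the coin-change task and the function name <code>min_coins</code> are assumptions):
<pre>
# Minimal bottom-up dynamic programming sketch (illustrative assumption,
# not from the cited sources): fewest coins that sum to a target amount.
# Subtask: best[a] = minimum number of coins that sum to amount a.
# Each optimal solution reuses the stored optimal solutions of smaller subtasks.
def min_coins(coins, amount):
    INF = float("inf")
    best = [0] + [INF] * amount            # best[0] = 0: zero coins make amount 0
    for a in range(1, amount + 1):         # solve subtasks in bottom-up order
        for c in coins:
            if c <= a and best[a - c] + 1 < best[a]:
                best[a] = best[a - c] + 1  # reuse a stored subtask solution
    return best[amount] if best[amount] < INF else None

print(min_coins([1, 3, 4], 6))             # -> 2  (3 + 3)
</pre>
Each table entry is computed once and looked up thereafter, which is what distinguishes this strategy from a plain [[Divide and Conquer Algorithm]] run on the same recurrence.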
 
----
----
== References ==

=== 2012 ===
* (Wikipedia, 2012) &rArr; http://en.wikipedia.org/wiki/Dynamic_programming
** QUOTE: In [[mathematics]] and [[computer science]], '''dynamic programming''' is a method for solving complex problems by breaking them down into simpler subproblems. It is applicable to problems exhibiting the properties of [[overlapping subproblem]]s which are only slightly smaller<ref>S. Dasgupta, C.H. Papadimitriou, and U.V. Vazirani, ''Algorithms'', p 173, available at http://www.cs.berkeley.edu/~vazirani/algorithms.html</ref> and [[optimal substructure]] (described below). When applicable, the method takes far less time than naive methods. <P> The key idea behind dynamic programming is quite simple. In general, to solve a given problem, we need to solve different parts of the problem (subproblems), then combine the solutions of the subproblems to reach an overall solution. Often, many of these subproblems are really the same. The dynamic programming approach seeks to solve each subproblem only once, thus reducing the number of computations: once the solution to a given subproblem has been computed, it is stored or "[[memoization|memo-ized]]": the next time the same solution is needed, it is simply looked up. This approach is especially useful when the number of repeating subproblems [[exponential growth|grows exponentially]] as a function of the size of the input. <P> Top-down dynamic programming simply means storing the results of certain calculations, which are later used again since the completed calculation is a sub-problem of a larger calculation. Bottom-up dynamic programming involves formulating a complex calculation as a [[Recursion|recursive]] series of simpler calculations.
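The top-down ("memo-ized") and bottom-up formulations mentioned in the quoted passage can be sketched as follows (an illustrative Fibonacci example added here, not part of the quoted article):
<pre>
from functools import lru_cache

# Top-down: recurse on subproblems, storing ("memo-izing") each result
# so a repeated subproblem is looked up instead of recomputed.
@lru_cache(maxsize=None)
def fib_top_down(n):
    if n < 2:
        return n
    return fib_top_down(n - 1) + fib_top_down(n - 2)

# Bottom-up: tabulate the smallest subproblems first, then combine
# stored entries to build up to the full problem.
def fib_bottom_up(n):
    table = [0, 1] + [0] * max(0, n - 1)
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

assert fib_top_down(30) == fib_bottom_up(30) == 832040
</pre>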


=== 2011 ===
* ([[Puterman & Patrick, 2011]]) &rArr; Martin L. Puterman, Jonathan Patrick. ([[2011]]). "Dynamic Programming." In: ([[Sammut & Webb, 2011]]) p.298.


=== 2009 ===
* http://www.nature.com/nrg/journal/v5/n4/glossary/nrg1318_glossary.html
** DYNAMIC PROGRAMMING: A large class of programming algorithms that are based on breaking a large problem down (if possible) into incremental steps so that, at any given stage, optimal solutions are known for sub-problems.


=== 2008 ===
* ([[2008_AdvancedDynamicProgramming|Huang, 2008]]) &rArr; Liang Huang. ([[2008]]). "[http://www.cis.upenn.edu/~lhuang3/coling.pdf Advanced Dynamic Programming in Semiring and Hypergraph Frameworks]." In: [[COLING 2008]] Tutorial Notes.
** QUOTE: … The basic idea of [[Dynamic Programming Algorithm|DP]] is to solve a bigger problem by divide-and-conquer, but also reuses the solutions of ''overlapping subproblems'' to avoid recalculation. The simplest such example is a Fibonacci series, where each ''F''(''n'') is used twice (if cached). The correctness of a DP algorithm is ensured by the ''optimal substructure property'', which informally says that an optimal solution must contain optimal subsolutions for subproblems.
* (Edmonds, 2008) &rArr; Jeff Edmonds. ([[2008]]). "[http://www.cse.yorku.ca/~jeff/notes/3101/05-DynamicProgramming.ppt Thinking about Algorithms Abstractly: Recursive Back Tracking & Dynamic Programming]." Presentation, Lecture 7, COSC 3101, York University.
** http://www.cse.yorku.ca/~jeff/notes/3101/slides.html
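The overlapping-subproblem point in the [[2008_AdvancedDynamicProgramming|Huang (2008)]] quote above, that caching lets each subproblem be solved once and merely looked up afterwards, can be illustrated with the following sketch (a toy example added here, not taken from the tutorial):
<pre>
from functools import lru_cache

calls = {"naive": 0, "cached": 0}

def fib_naive(n):                      # recomputes the same subproblems many times
    calls["naive"] += 1
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)               # caching: each subproblem is solved only once
def fib_cached(n):
    calls["cached"] += 1
    return n if n < 2 else fib_cached(n - 1) + fib_cached(n - 2)

fib_naive(20)
fib_cached(20)
print(calls)                           # {'naive': 21891, 'cached': 21}
</pre>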


=== 2005 ===
* ([[2005_AlgorithmDesign|Kleinberg & Tardos, 2005]]) &rArr; [[Jon Kleinberg]], and [[Éva Tardos]]. ([[2005]]). "[http://www.aw-bc.com/info/kleinberg/ Algorithm Design]." Addison-Wesley. ISBN:0-321-29535-8
** Chapter 6: Dynamic Programming http://www.aw-bc.com/info/kleinberg/assets/downloads/ch6.pdf


=== 1966 ===
* (Bellman, 1966) &rArr; Richard Bellman. (1966). "[http://dx.doi.org/10.1126/science.153.3731.34 Dynamic Programming]." In: Science, 153(3731).


=== 1957 ===
* (Bellman, 1957) &rArr; Richard Bellman. (1957). "Dynamic Programming." Princeton University Press.


----
----
__NOTOC__
[[Category:Concept]]
[[Category:Machine Learning]]
[[Category:Statistical Inference]]
[[Category:Data Science]]
