# Algorithm Strategy Pattern

An algorithm strategy pattern is a general algorithm model (template) that can be applied to many different algorithm design tasks.

**AKA:** Algorithm Strategy, Algorithm Design Pattern.

**Context:**
- It can (typically) define an Algorithm Family.
- It can operate on an Abstract Data Type.
- It can range from being a Simple Algorithm Pattern to being a Composite Algorithm Pattern.
- It can be tailored to solve a General Task Type:
  - a Satisfaction Algorithm (for a Satisfaction Task) finds any satisfactory Solution.
  - a Search Algorithm (for a Search Task) finds all Solutions.
  - an Optimization Algorithm (for an Optimization Task) finds the best Solution with respect to a Cost Function.

- …

**Example(s):**
- Sequential Algorithm Pattern / Iterative Algorithm Pattern.
- Recursive Algorithm Pattern, for recursive algorithms.
- Backtracking Algorithm Strategy.
- Branch and Bound Algorithm Strategy (e.g. for the Traveling Salesman Problem, an Optimization Task).
- Brute Force Algorithm Strategy.
- Divide and Conquer Algorithm Strategy.
- Dynamic Programming Algorithm Strategy.
- Greedy Algorithm Strategy (an Optimization strategy that may return a Local Optimum rather than a Global Optimum).
- Recursive Algorithm Strategy.
- Heuristic Algorithm Strategy.
- Randomized Algorithm Strategy, such as a Randomized Quicksort Algorithm.
- …

**Counter-Example(s):**

**See:** Design Pattern, Problem Solving, Pseudo Code, General Algorithm, Domain Specific Algorithm.

## References

### 2011

- (Wikipedia, 2011) ⇒ http://en.wikipedia.org/wiki/Algorithm#By_design_paradigm
- QUOTE: Another way of classifying algorithms is by their design methodology or paradigm. There is a certain number of paradigms, each different from the other. Furthermore, each of these categories will include many different types of algorithms. Some commonly found paradigms include:
- **Brute-force** or **exhaustive search**. This is the naïve method of trying every possible solution to see which is best.[1]
- **Divide and conquer**. A divide and conquer algorithm repeatedly reduces an instance of a problem to one or more smaller instances of the same problem (usually recursively) until the instances are small enough to solve easily. One such example of divide and conquer is merge sorting. Sorting can be done on each segment of data after dividing data into segments, and the sorting of the entire data can be obtained in the conquer phase by merging the segments. A simpler variant of divide and conquer is called a **decrease and conquer algorithm**, which solves an identical subproblem and uses the solution of this subproblem to solve the bigger problem. Divide and conquer divides the problem into multiple subproblems, so the conquer stage is more complex than in decrease and conquer algorithms. An example of a decrease and conquer algorithm is the binary search algorithm.
- **Dynamic programming**. When a problem shows optimal substructure (meaning the optimal solution to a problem can be constructed from optimal solutions to subproblems) and overlapping subproblems (meaning the same subproblems are used to solve many different problem instances), a quicker approach called *dynamic programming* avoids recomputing solutions that have already been computed. For example, in the Floyd–Warshall algorithm, the shortest path to a goal from a vertex in a weighted graph can be found by using the shortest path to the goal from all adjacent vertices. Dynamic programming and memoization go together. The main difference between dynamic programming and divide and conquer is that subproblems are more or less independent in divide and conquer, whereas subproblems overlap in dynamic programming. The difference between dynamic programming and straightforward recursion is in the caching or memoization of recursive calls. When subproblems are independent and there is no repetition, memoization does not help; hence dynamic programming is not a solution for all complex problems. By using memoization or maintaining a table of subproblems already solved, dynamic programming reduces the exponential nature of many problems to polynomial complexity.
- **The greedy method**. A greedy algorithm is similar to a dynamic programming algorithm, but the difference is that solutions to the subproblems do not have to be known at each stage; instead a "greedy" choice can be made of what looks best for the moment. The greedy method extends the solution with the best possible decision (not all feasible decisions) at an algorithmic stage, based on the current local optimum and the best decision (not all possible decisions) made in a previous stage. It is not exhaustive, and does not give an accurate answer to many problems. But when it works, it will be the fastest method. The most popular greedy algorithm is finding the minimal spanning tree, as given by Huffman Tree, Kruskal, Prim, Sollin.
- **Linear programming**. When solving a problem using linear programming, specific inequalities involving the inputs are found and then an attempt is made to maximize (or minimize) some linear function of the inputs. Many problems (such as the maximum flow for directed graphs) can be stated in a linear programming way, and then be solved by a 'generic' algorithm such as the simplex algorithm. A more complex variant of linear programming is called integer programming, where the solution space is restricted to the integers.
- **Reduction**. This technique involves solving a difficult problem by transforming it into a better-known problem for which we have (hopefully) asymptotically optimal algorithms. The goal is to find a reducing algorithm whose complexity is not dominated by the resulting reduced algorithm's. For example, one selection algorithm for finding the median in an unsorted list involves first sorting the list (the expensive portion) and then pulling out the middle element in the sorted list (the cheap portion). This technique is also known as **transform and conquer**.
- **Search and enumeration**. Many problems (such as playing chess) can be modeled as problems on graphs. A graph exploration algorithm specifies rules for moving around a graph and is useful for such problems. This category also includes search algorithms, branch and bound enumeration, and backtracking.
- **Randomized algorithms** are those that make some choices randomly (or pseudo-randomly); for some problems, it can in fact be proven that the fastest solutions must involve some randomness. There are two large classes of such algorithms:
  - Monte Carlo algorithms return a correct answer with high probability (e.g. RP is the subclass of these that run in polynomial time).
  - Las Vegas algorithms always return the correct answer, but their running time is only probabilistically bound (e.g. ZPP).
- In optimization problems, **heuristic algorithms** do not try to find an optimal solution, but an approximate solution where the time or resources are limited. They are not practical for finding perfect solutions. Examples would be local search, tabu search, or simulated annealing algorithms, a class of heuristic probabilistic algorithms that vary the solution of a problem by a random amount. The name "simulated annealing" alludes to the metallurgic term meaning the heating and cooling of metal to achieve freedom from defects. The purpose of the random variance is to find close to globally optimal solutions rather than simply locally optimal ones, the idea being that the random element will be decreased as the algorithm settles down to a solution. Approximation algorithms are those heuristic algorithms that additionally provide some bounds on the error. Genetic algorithms attempt to find solutions to problems by mimicking biological evolutionary processes, with a cycle of random mutations yielding successive generations of "solutions". Thus, they emulate reproduction and "survival of the fittest". In genetic programming, this approach is extended to algorithms, by regarding the algorithm itself as a "solution" to a problem.
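The divide-and-conquer and decrease-and-conquer paradigms described in the quote can be sketched with the two examples it names, merge sort and binary search (a minimal illustrative implementation, not from the quoted source):

```python
def merge_sort(xs):
    """Divide and conquer: split into two subproblems, sort each
    recursively, then merge the sorted halves in the conquer phase."""
    if len(xs) <= 1:
        return xs  # base case: small enough to solve directly
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

def binary_search(sorted_xs, target):
    """Decrease and conquer: each step discards half the range,
    leaving a single smaller instance of the same problem."""
    lo, hi = 0, len(sorted_xs)
    while lo < hi:
        mid = (lo + hi) // 2
        if sorted_xs[mid] == target:
            return mid
        if sorted_xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    return -1  # not found
```

Note how merge sort produces two subproblems and needs a non-trivial conquer (merge) step, while binary search reduces to one subproblem and needs none.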
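The dynamic programming paradigm, with the Floyd–Warshall algorithm the quote mentions, can be sketched as follows (a minimal version over an edge list; the input format is an assumption for illustration):

```python
def floyd_warshall(n, edges):
    """Dynamic programming on overlapping subproblems: dist[i][j] using
    intermediate vertices 0..k is built from the table for 0..k-1,
    so each subproblem's answer is computed once and reused."""
    INF = float("inf")
    dist = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, w in edges:  # directed edges (u, v) with weight w
        dist[u][v] = min(dist[u][v], w)
    for k in range(n):          # allow vertex k as an intermediate stop
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist
```

The table `dist` is the memoization the quote refers to: without it, the same shortest-subpath subproblems would be recomputed exponentially often.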
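The greedy method, with Kruskal's minimum spanning tree algorithm named in the quote, can be sketched like this (an illustrative version returning only the MST weight; the graph is assumed connected):

```python
def kruskal_mst_weight(n, edges):
    """Greedy method: repeatedly take the cheapest remaining edge that
    does not form a cycle (checked with a union-find structure)."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    total = 0
    for u, v, w in sorted(edges, key=lambda e: e[2]):  # greedy: cheapest first
        ru, rv = find(u), find(v)
        if ru != rv:  # edge joins two components, so no cycle
            parent[ru] = rv
            total += w
    return total
```

The greedy choice (always the locally cheapest safe edge) happens to be globally optimal here, which is exactly the property a greedy strategy needs but does not have for every problem.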
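For the randomized paradigm, a Las Vegas algorithm of the kind described above is randomized quicksort: the output is always correct, and only the running time depends on the random pivot choices (a minimal illustrative sketch):

```python
import random

def randomized_quicksort(xs):
    """Las Vegas randomized algorithm: always returns the correct
    sorted list; only the running time is probabilistic."""
    if len(xs) <= 1:
        return xs
    pivot = random.choice(xs)  # the random choice
    less = [x for x in xs if x < pivot]
    equal = [x for x in xs if x == pivot]
    greater = [x for x in xs if x > pivot]
    return randomized_quicksort(less) + equal + randomized_quicksort(greater)
```

A Monte Carlo algorithm would instead fix the running time and allow a small probability of a wrong answer.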



### 2009

- (Wikipedia, 2009) ⇒ http://en.wikipedia.org/wiki/Algorithm#Classification_by_design_paradigm
- QUOTE: … Some commonly found paradigms include:
…

The probabilistic and heuristic paradigm. Algorithms belonging to this class fit the definition of an algorithm more loosely.

1. Probabilistic algorithms are those that make some choices randomly (or pseudo-randomly); for some problems, it can in fact be proven that the fastest solutions must involve some randomness.
2. Genetic algorithms attempt to find solutions to problems by mimicking biological evolutionary processes, with a cycle of random mutations yielding successive generations of "solutions". Thus, they emulate reproduction and "survival of the fittest". In genetic programming, this approach is extended to algorithms, by regarding the algorithm itself as a "solution" to a problem.
3. Heuristic algorithms, whose general purpose is not to find an optimal solution, but an approximate solution where the time or resources are limited. They are not practical for finding perfect solutions. An example of this would be local search, tabu search, or simulated annealing algorithms, a class of heuristic probabilistic algorithms that vary the solution of a problem by a random amount. The name "simulated annealing" alludes to the metallurgic term meaning the heating and cooling of metal to achieve freedom from defects. The purpose of the random variance is to find close to globally optimal solutions rather than simply locally optimal ones, the idea being that the random element will be decreased as the algorithm settles down to a solution.
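The simulated annealing idea described in item 3 can be sketched as follows. This is a minimal illustration: the cooling schedule, step size, and parameter names are assumptions for the example, not part of the quoted text.

```python
import math
import random

def simulated_annealing(cost, start, steps=5000, temp0=1.0, seed=0):
    """Heuristic/probabilistic paradigm: vary the solution by a random
    amount, accepting worse moves with a probability that shrinks as
    the 'temperature' cools, so the search can escape local optima."""
    rng = random.Random(seed)
    x, best = start, start
    for step in range(1, steps + 1):
        temp = temp0 / step                  # cooling schedule (assumed)
        candidate = x + rng.uniform(-1, 1)   # random variation of the solution
        delta = cost(candidate) - cost(x)
        # always accept improvements; accept worsenings with decaying probability
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            x = candidate
        if cost(x) < cost(best):
            best = x
    return best
```

On a convex cost like `(x - 3)**2` this reliably gets near the global minimum; its value is on non-convex costs, where the early random uphill moves let it leave locally optimal basins.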


### 2008

- (Edmonds, 2008) ⇒ Jeff Edmonds. (2008). “Thinking about Algorithms Abstractly: Recursive Back Tracking & Dynamic Programming.” Presentation, Lecture 7, COSC 3101, York University.

### 2007

- (Hobson & Ebden, 2007) ⇒ Douglas Hobson, and A.J. Ebden. (2007). “A Taxonomy and Evaluation of Algorithms used in the ACM Programming Competition.” Literature Review.

- http://www.cs.umd.edu/class/spring2007/cmsc132/Exams/Final/PracticeFinal.pdf "V. Algorithm strategies"
- c. What is the difference between divide-and-conquer and dynamic programming?
- d. What is the difference between recursive and backtracking algorithms?
- e. What is the difference between a greedy algorithm and heuristics?
- f. What is the difference between brute force and branch-and-bound algorithms?
- g. List a reason to use dynamic programming.
- h. List a reason to use backtracking.
- i. List a reason to use a brute force algorithm.
- j. What type of algorithm is Kruskal’s algorithm for finding minimum spanning trees?

- Skiena Steven, Stony Brook Algorithm Repository, Published: 2001-03-07, Accessed: 2007-06-24, <http://www.cs.sunysb.edu/~algorith/>

### 2005

- (Emad & Tseng, 2005) ⇒ Fawzi Emad, and Chau-Wen Tseng. (2005). “Algorithm Strategies.” Course Lecture, Department of Computer Science, University of Maryland, College Park.

- Black Paul E, ed., U.S. National Institute of Standards and Technology, Dictionary of Algorithms and Data Structures <http://www.nist.gov/dads>
- Cheng Howard, Problem Classification on Spanish Archive, Published: 2006-12-17, Accessed: 2007-06-24, <http://www.cs.uleth.ca/~cheng/contest/hints.html>

- http://penguin.ewu.edu/cscd320/Topic/Strategies/index.html
- http://www.cis.upenn.edu/~matuszek/cit594-2008/Lectures/36-algorithm-types.ppt

- ↑ Sue Carroll, Taz Daughtrey (2007-07-04). *Fundamental Concepts for the Software Quality Engineer*. pp. 282 et seq. ISBN 9780873897204. http://books.google.com/?id=bz_cl3B05IcC&pg=PA282.