# Intractable Computational Task


An Intractable Computational Task is a computational task that lacks a polynomial-time solution and is therefore impractical to solve for all but the smallest inputs.

## References

### 2018a

• (Wikipedia, 2018) ⇒ https://en.wikipedia.org/wiki/Computational_complexity_theory#Intractability Retrieved:2018-7-21.
• QUOTE: A problem that can be solved in theory (e.g. given large but finite resources, especially time), but for which in practice any solution takes too many resources to be useful, is known as an intractable problem. [1] Conversely, a problem that can be solved in practice is called a tractable problem, literally "a problem that can be handled". The term infeasible (literally "cannot be done") is sometimes used interchangeably with intractable, though this risks confusion with a feasible solution in mathematical optimization.

Tractable problems are frequently identified with problems that have polynomial-time solutions (P, PTIME); this is known as the Cobham–Edmonds thesis. Problems that are known to be intractable in this sense include those that are EXPTIME-hard. If NP is not the same as P, then NP-hard problems are also intractable in this sense.

However, this identification is inexact: a polynomial-time solution with large exponent or large constant term grows quickly, and may be impractical for practical size problems; conversely, an exponential-time solution that grows slowly may be practical on realistic input, or a solution that takes a long time in the worst case may take a short time in most cases or the average case, and thus still be practical. Saying that a problem is not in P does not imply that all large cases of the problem are hard or even that most of them are. For example, the decision problem in Presburger arithmetic has been shown not to be in P, yet algorithms have been written that solve the problem in reasonable times in most cases. Similarly, algorithms can solve the NP-complete knapsack problem over a wide range of sizes in less than quadratic time and SAT solvers routinely handle large instances of the NP-complete Boolean satisfiability problem.

To see why exponential-time algorithms are generally unusable in practice, consider a program that makes $2^n$ operations before halting. For small n, say 100, and assuming for the sake of example that the computer does $10^{12}$ operations each second, the program would run for about $4 \times 10^{10}$ years, which is the same order of magnitude as the age of the universe. Even with a much faster computer, the program would only be useful for very small instances and in that sense the intractability of a problem is somewhat independent of technological progress. However, an exponential-time algorithm that takes $1.0001^n$ operations is practical until n gets relatively large.
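The arithmetic in this paragraph can be checked with a short script. This is only an illustrative sketch: the $10^{12}$ operations/second machine is the hypothetical one from the text, and the function name is ours.

```python
# Sanity check of the figures above: a program making 2^n operations
# on a hypothetical machine performing 10^12 operations per second.
SECONDS_PER_YEAR = 60 * 60 * 24 * 365  # about 3.15 * 10^7

def years_to_run(total_ops, ops_per_second=10**12):
    """Convert a total operation count into wall-clock years."""
    return total_ops / ops_per_second / SECONDS_PER_YEAR

# 2^100 operations: about 4 * 10^10 years, on the order of the
# age of the universe.
print(f"{years_to_run(2**100):.1e} years")

# 1.0001^n, by contrast, stays tiny even for n = 100000.
print(f"{years_to_run(1.0001**100000):.1e} years")
```

Running this reproduces the roughly $4 \times 10^{10}$-year figure and shows that the slowly growing exponential $1.0001^n$ finishes in a negligible fraction of a second at n = 100000.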

Similarly, a polynomial-time algorithm is not always practical. If its running time is, say, $n^{15}$, it is unreasonable to consider it efficient and it is still useless except on small instances. Indeed, in practice even $n^3$ or $n^2$ algorithms are often impractical on realistic sizes of problems.
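The point about large exponents can be made concrete with the same hypothetical $10^{12}$ operations/second machine as above (the helper name and the chosen input sizes are illustrative):

```python
MACHINE_OPS_PER_SECOND = 10**12  # hypothetical machine, as above

def seconds_for(ops):
    """Wall-clock seconds for a given operation count."""
    return ops / MACHINE_OPS_PER_SECOND

# n^15 is "polynomial" yet hopeless already at n = 1000:
# 10^45 operations, about 10^33 seconds.
print(f"n^15 at n=10^3: {seconds_for(1000**15):.0e} s")

# Even n^2 hurts at large inputs, e.g. n = 10^9:
# 10^18 operations, about 10^6 seconds (roughly 12 days).
print(f"n^2 at n=10^9:  {seconds_for((10**9)**2):.0e} s")
```

The contrast shows why membership in P, by itself, is only a rough proxy for practicality.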

### 2018c

• (Wikibooks, 2018) ⇒ https://en.wikibooks.org/wiki/Foundations_of_Computer_Science/Limits_of_Computing#Intractable_Problems Retrieved:2018-7-21.
• QUOTE: The halting problem is hard because it is not solvable algorithmically even in principle. There are other hard problems that are solvable in principle but in practice are close to being impossible to solve. As you can see, we can categorize problems by the performance of the best known algorithms. If a problem can be solved using a fast algorithm, the problem is easy because we can use a computer to solve it fast. On the contrary, if the best known algorithm takes a long time to solve a problem, the problem is hard because computers cannot solve it fast.

Using algorithm complexity theory we can put each algorithm into a particular category according to the algorithm's complexity. If the big-O notation of a category contains only polynomial terms, the problems solvable using algorithms in this category are called P problems (Polynomial time solutions exist), such as $O(1)$, $O(\log_2 N)$, $O(N)$, and $O(N^2)$. The P problems are the easy problems to computers. Among the problems without a polynomial time solution there are problems that if we can guess the solution it can be verified in polynomial time. For example, the integer factorization (or prime decomposition) problem has no known polynomial time solution but given an answer we can verify it very quickly by simply multiplying the factors and comparing the result to the integer. These types of problems are called NP (Non-deterministic Polynomial) problems.
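The multiply-and-compare verification described in the quote can be sketched in a few lines. The function name and the example numbers are ours, not from the source:

```python
from math import prod

def verify_factorization(n, claimed_factors):
    """Check a claimed factorization of n by multiplying the factors back
    together — time polynomial in the size of the input, even though no
    polynomial-time algorithm is known for *finding* the factors.
    (A full certificate check would also confirm each factor is prime,
    which can likewise be done in polynomial time.)"""
    return all(f > 1 for f in claimed_factors) and prod(claimed_factors) == n

print(verify_factorization(221, [13, 17]))  # True:  13 * 17 == 221
print(verify_factorization(221, [13, 16]))  # False: 13 * 16 == 208
```

This asymmetry — verification is fast, search is (as far as we know) not — is exactly what places integer factorization in NP.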

Collectively we call problems that take too long to solve intractable problems, which include problems whose best known algorithm runs in exponential time ($O(2^N)$) and those with polynomial time solutions whose exponent is too large, e.g. $O(N^{15})$.

If a problem's best algorithmic solution is in $O(2^N)$, then when $N=100$ and a computer does $10^{12}$ operations per second, it would take about $4 \times 10^{10}$ years (on the order of the age of the universe) to solve the problem on the computer.

### 2018d

1. Hopcroft, J.E., Motwani, R. and Ullman, J.D. (2007) Introduction to Automata Theory, Languages, and Computation, Addison Wesley, Boston/San Francisco/New York (page 368)
2. Krippendorff, Klaus. "Combinatorial Explosion". Web Dictionary of Cybernetics and Systems. PRINCIPIA CYBERNETICA WEB. Retrieved 29 November 2010.
3. "Combinatorial Explosion". http://intelligence.worldofcomputing/combinatorial-explosion