Computational Complexity Theory


A Computational Complexity Theory is a system theory of algorithms that supports computational complexity analysis.



References

2011

  • (Aaronson, 2011) ⇒ Scott Aaronson. (2011). “Why Philosophers Should Care About Computational Complexity.” Technical Report TR11-108.
    • AUTHOR KEYWORDS: closed timelike curves, Cobham axioms, Occam, PAC-learning, philosophy, quantum computing, survey
    • ABSTRACT: One might think that, once we know something is computable, how efficiently it can be computed is a practical question with little further philosophical importance. In this essay, I offer a detailed case that one would be wrong. In particular, I argue that computational complexity theory --- the field that studies the resources (such as time, space, and randomness) needed to solve computational problems --- leads to new perspectives on the nature of mathematical knowledge, the strong AI debate, computationalism, the problem of logical omniscience, Hume's problem of induction and Goodman's grue riddle, the foundations of quantum mechanics, economic rationality, closed timelike curves, and several other topics of philosophical interest. I end by discussing aspects of complexity theory itself that could benefit from philosophical analysis.

2009

  • (Wikipedia, 2009) ⇒ http://en.wikipedia.org/wiki/Computational_complexity_theory
    • Computational complexity theory is a branch of the theory of computation in computer science that focuses on classifying computational problems according to their inherent difficulty. In this context, a computational problem is understood to be a task that is in principle amenable to being solved by a computer. Informally, a computational problem consists of problem instances and solutions to these problem instances. For example, primality testing is the problem of determining whether a given number is prime or not. The instances of this problem are natural numbers, and the solution to an instance is yes or no based on whether the number is prime or not. (This decision-problem view is made concrete in the first sketch after these excerpts.)
    • A problem is regarded as inherently difficult if solving the problem requires a large amount of resources, independent of the algorithm used for solving it. The theory formalizes this intuition, by introducing mathematical models of computation to study these problems and quantifying the amount of resources needed to solve them, such as time and storage. Other complexity measures are also used, such as the amount of communication (used in communication complexity), the number of gates in a circuit (used in circuit complexity) and the number of processors (used in parallel computing). In particular, computational complexity theory determines the practical limits on what computers can and cannot do.
    • Closely related fields in theoretical computer science are analysis of algorithms and computability theory. A key distinction between computational complexity theory and analysis of algorithms is that the latter is devoted to analyzing the amount of resources needed by a particular algorithm to solve a problem, whereas the former asks a more general question about all possible algorithms that could be used to solve the same problem. More precisely, it tries to classify problems that can or cannot be solved with appropriately restricted resources. In turn, imposing restrictions on the available resources is what distinguishes computational complexity from computability theory: the latter theory asks what kind of problems can be solved in principle algorithmically. (The second sketch after these excerpts illustrates this distinction with two algorithms for the same problem.)
    • Computational complexity theory, as a branch of the theory of computation in computer science, investigates problems related to the amount of resources required to execute algorithms (e.g., execution time) and the inherent difficulty of providing efficient algorithms for specific computational problems.
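
The primality-testing example in the first excerpt can be made concrete. Below is a minimal Python sketch (an illustration, not drawn from the cited sources) that treats PRIMES as a decision problem: the instances are natural numbers, and the solution to each instance is yes or no.

```python
def is_prime(n: int) -> bool:
    """Decide the PRIMES decision problem for one instance n.

    Instances are natural numbers; the solution is yes (True)
    or no (False), depending on whether n is prime.
    Trial division up to sqrt(n) suffices for correctness.
    """
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

# Each (instance, solution) pair of the decision problem:
for instance in [1, 2, 9, 13, 91, 97]:
    print(instance, "->", "yes" if is_prime(instance) else "no")
```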
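
The second and third excerpts, on quantifying resources and on the difference between analyzing one algorithm and classifying a problem, can be illustrated the same way. The Python sketch below (again illustrative only; counting loop iterations as a crude proxy for time is an assumption, not a method from the cited sources) measures two different primality algorithms on the same instance. Analysis of algorithms compares such per-algorithm counts; complexity theory asks how few steps any possible algorithm could need.

```python
def trial_division_steps(n: int) -> int:
    """Count loop iterations (a crude time measure) for naive trial
    division that tests every candidate divisor from 2 to n - 1."""
    steps = 0
    for d in range(2, n):
        steps += 1
        if n % d == 0:
            break
    return steps

def sqrt_bound_steps(n: int) -> int:
    """Count loop iterations for trial division that stops at sqrt(n):
    the same problem solved with far fewer steps."""
    steps, d = 0, 2
    while d * d <= n:
        steps += 1
        if n % d == 0:
            break
        d += 1
    return steps

# Same problem instance, two algorithms, very different resource usage:
n = 10007  # a prime, so neither loop exits early
print("naive trial division:", trial_division_steps(n), "steps")  # ~10005
print("sqrt-bounded division:", sqrt_bound_steps(n), "steps")     # ~99
```

A prime instance is chosen deliberately: neither loop exits early, so the gap between roughly n and roughly sqrt(n) iterations is at its widest, making the per-algorithm difference in resource usage easy to see.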