Computational Resource

From GM-RKB

A Computational Resource is a resource needed to run an algorithm by a computing system.



References

2018a

  • (Wikipedia, 2018) ⇒ https://en.wikipedia.org/wiki/Computational_resource Retrieved:2018-4-1.
    • In computational complexity theory, a computational resource is a resource used by some computational models in the solution of computational problems.

      The simplest computational resources are computation time, the number of steps necessary to solve a problem, and memory space, the amount of storage needed while solving the problem, but many more complicated resources have been defined.

      A computational problem is generally defined in terms of its action on any valid input. Examples of problems might be "given an integer n, determine whether n is prime", or "given two numbers x and y, calculate the product x*y". As the inputs get bigger, the amount of computational resources needed to solve a problem will increase. Thus, the resources needed to solve a problem are described in terms of asymptotic analysis, by identifying the resources as a function of the length or size of the input. Resource usage is often partially quantified using Big O notation.
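
      As an illustration (not part of the quoted passage), the following Python sketch takes the "is n prime?" problem above and counts the elementary steps performed by a simple trial-division test; the function name and the choice of what counts as a step are my own assumptions. For prime inputs the step count grows roughly like the square root of n, which is the kind of statement that asymptotic analysis and Big O notation make precise.

        def is_prime(n):
            """Return (is n prime?, number of elementary steps taken)."""
            steps = 0
            if n < 2:
                return False, steps
            d = 2
            while d * d <= n:
                steps += 1          # one elementary step: a divisibility check
                if n % d == 0:
                    return False, steps
                d += 1
            return True, steps

        # The step count grows with the size of the input, roughly like sqrt(n) here.
        for n in (101, 10007, 1000003):
            prime, steps = is_prime(n)
            print(f"n={n:>8}  prime={prime}  steps={steps}")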

      Computational resources are useful because we can study which problems can be computed in a certain amount of each computational resource. In this way, we can determine whether algorithms for solving the problem are optimal and we can make statements about an algorithm's efficiency. The set of all of the computational problems that can be solved using a certain amount of a certain computational resource is a complexity class, and relationships between different complexity classes are one of the most important topics in complexity theory.

2018b

  • (Wikipedia, 2018) ⇒ https://www.wikiwand.com/en/Computational_complexity#/Resources Retrieved:2018-4-1.
    • Time: The resource that is most commonly considered is time, and one speaks of time complexity. When "complexity" is used without qualification, this generally means time complexity.

      The usual units of time are not used in complexity theory, because they depend too much on the choice of a specific computer and on the evolution of technology. Therefore, instead of real time, one generally considers the elementary operations that are performed during the computation. These operations are assumed to take constant time on a given machine, and are often called steps.
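
      As a hedged illustration (not from the quoted text), the sketch below counts such machine-independent steps, here defined as comparisons, for two ways of locating an element in a sorted list; the function names and the step definition are assumptions of mine, not a standard API.

        def linear_search_steps(items, target):
            steps = 0
            for x in items:
                steps += 1                    # one comparison = one step
                if x == target:
                    break
            return steps

        def binary_search_steps(items, target):
            steps, lo, hi = 0, 0, len(items) - 1
            while lo <= hi:
                steps += 1                    # one comparison per halving of the range
                mid = (lo + hi) // 2
                if items[mid] == target:
                    break
                elif items[mid] < target:
                    lo = mid + 1
                else:
                    hi = mid - 1
            return steps

        data = list(range(1, 1000001))             # sorted input of size 10^6
        print(linear_search_steps(data, 999999))   # about n steps
        print(binary_search_steps(data, 999999))   # about log2(n) steps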

      Space: Another important resource is the size of computer memory that is needed for running algorithms.
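
      As another illustration (again my own sketch, not part of the source), the space resource can be observed with Python's tracemalloc module, contrasting a summation that stores all intermediate values with a streaming version that keeps only a running total.

        import tracemalloc

        def sum_with_list(n):
            values = [i * i for i in range(n)]    # materializes n values: O(n) space
            return sum(values)

        def sum_streaming(n):
            total = 0
            for i in range(n):                    # keeps only a running total: O(1) space
                total += i * i
            return total

        for fn in (sum_with_list, sum_streaming):
            tracemalloc.start()
            fn(200000)
            _, peak = tracemalloc.get_traced_memory()
            tracemalloc.stop()
            print(fn.__name__, "peak bytes:", peak)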

      Others: The number of arithmetic operations is another resource that is commonly used. In this case, one speaks of arithmetic complexity. If an upper bound is known on the size of the binary representation of the numbers that occur during a computation, the time complexity is generally the arithmetic complexity multiplied by a constant factor.

      For many algorithms the size of the integers that are used during a computation is not bounded, and it is not realistic to consider that arithmetic operations take constant time. Therefore, the time complexity, generally called bit complexity in this context, may be much larger than the arithmetic complexity. For example, the arithmetic complexity of the computation of the determinant of an n×n integer matrix is [math]\displaystyle{ O(n^3) }[/math] for the usual algorithms (Gaussian elimination). The bit complexity of the same algorithms is exponential in n, because the size of the coefficients may grow exponentially during the computation. On the other hand, if these algorithms are coupled with multi-modular arithmetic, the bit complexity may be reduced to [math]\displaystyle{ \tilde{O}(n^4) }[/math].
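
      As a final illustration (a sketch under my own assumptions, not the article's code), the determinant example can be made concrete by running Gaussian elimination over Python's exact Fraction type while tracking both the number of arithmetic operations (the arithmetic complexity) and the bit length of the largest intermediate value (which is what drives the bit complexity).

        from fractions import Fraction
        import random

        def determinant_with_counts(a):
            """Gaussian elimination over exact rationals; returns (det, ops, max_bits)."""
            n = len(a)
            m = [[Fraction(x) for x in row] for row in a]
            ops, max_bits = 0, 0
            det = Fraction(1)
            for k in range(n):
                # Partial pivoting: find a row with a nonzero entry in column k.
                pivot = next((r for r in range(k, n) if m[r][k] != 0), None)
                if pivot is None:
                    return Fraction(0), ops, max_bits
                if pivot != k:
                    m[k], m[pivot] = m[pivot], m[k]
                    det = -det
                det *= m[k][k]
                for i in range(k + 1, n):
                    factor = m[i][k] / m[k][k]
                    ops += 1                      # one division
                    for j in range(k, n):
                        m[i][j] -= factor * m[k][j]
                        ops += 2                  # one multiplication, one subtraction
                        bits = m[i][j].numerator.bit_length() + m[i][j].denominator.bit_length()
                        max_bits = max(max_bits, bits)
            return det, ops, max_bits

        random.seed(0)
        for n in (4, 8, 16):
            a = [[random.randint(-9, 9) for _ in range(n)] for _ in range(n)]
            det, ops, max_bits = determinant_with_counts(a)
            print(f"n={n:2d}  arithmetic ops={ops:5d}  largest intermediate ~{max_bits} bits")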