Time Complexity Performance Measure


A Time Complexity Performance Measure is a Computational Complexity Measure that quantifies the amount of time required to run an algorithm.



References

2020

  • (Wikipedia, 2020) ⇒ https://en.wikipedia.org/wiki/Computational_complexity#Time Retrieved:2020-4-5.
    • The resource that is most commonly considered is time. When "complexity" is used without qualification, this generally means time complexity.

      The usual units of time (seconds, minutes etc.) are not used in complexity theory because they are too dependent on the choice of a specific computer and on the evolution of technology. For instance, a computer today can execute an algorithm significantly faster than a computer from the 1960s; however, this is not an intrinsic feature of the algorithm but rather a consequence of technological advances in computer hardware. Complexity theory seeks to quantify the intrinsic time requirements of algorithms, that is, the basic time constraints an algorithm would place on any computer. This is achieved by counting the number of elementary operations that are executed during the computation. These operations are assumed to take constant time (that is, not affected by the size of the input) on a given machine, and are often called steps.
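The step-counting view described above can be illustrated with a small sketch. The following Python snippet is a minimal illustration (not part of the quoted source; the function name and the choice of which operations to charge as steps are assumptions): it sums a list while counting elementary operations, so the resulting count depends only on the input size, not on the machine or its clock speed.

```python
def summation_with_step_count(values):
    """Sum a list while counting elementary operations ("steps").

    Each addition is charged one constant-time step, so the total
    step count depends only on len(values), not on the hardware
    or the year the machine was built.
    """
    steps = 0
    total = 0
    steps += 1              # initializing the accumulator
    for v in values:
        total += v
        steps += 1          # one addition per element
    return total, steps

# The step count grows with the input size, independently of hardware.
for n in (10, 100, 1000):
    _, steps = summation_with_step_count(list(range(n)))
    print(f"n = {n:>5}: {steps} steps")
```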

2018

  • (Wikipedia, 2018) ⇒ https://en.wikipedia.org/wiki/Time_complexity Retrieved:2018-4-1.
    • In computer science, the time complexity is the computational complexity that measures or estimates the time taken to run an algorithm. Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, supposing that each elementary operation takes a fixed amount of time to perform. Thus, the amount of time taken and the number of elementary operations performed by the algorithm differ by at most a constant factor.

      Since an algorithm's running time may vary with different inputs of the same size, one commonly considers the worst-case time complexity, which is the maximum amount of time taken on inputs of a given size. Less common, and usually specified explicitly, is the average-case complexity, which is the average of the time taken on inputs of a given size (this makes sense, as there is only a finite number of possible inputs of a given size).

      In both cases, the time complexity is generally expressed as a function of the size of the input.[1] Since this function is generally difficult to compute exactly, and the running time is usually not critical for small input, one focuses commonly on the behavior of the complexity when the input size increases; that is, on the asymptotic behavior of the complexity. Therefore, the time complexity is commonly expressed using big O notation, typically [math]\displaystyle{ O(n), }[/math] [math]\displaystyle{ O(n\log n), }[/math] [math]\displaystyle{ O(n^\alpha), }[/math] [math]\displaystyle{ O(2^n), }[/math] etc., where [math]\displaystyle{ n }[/math] is the input size measured by the number of bits needed to represent it.

      Algorithm complexities are classified by the function appearing in the big O notation. For example, an algorithm with time complexity [math]\displaystyle{ O(n) }[/math] is a linear time algorithm, and an algorithm with time complexity [math]\displaystyle{ O(n^\alpha) }[/math] for some constant [math]\displaystyle{ \alpha \ge 1 }[/math] is a polynomial time algorithm.
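To make the worst-case versus average-case distinction quoted above concrete, here is a minimal Python sketch (an illustration added here, not taken from the quoted source) using linear search on inputs of size n = 1000: the number of comparisons ranges from 1 (target at the front) to n (target absent), so the worst-case time complexity is O(n), and the average over all possible targets is about (n + 1) / 2 comparisons, which is also O(n).

```python
def linear_search_steps(items, target):
    """Return (index or None, number of comparisons) for a linear search."""
    comparisons = 0
    for i, item in enumerate(items):
        comparisons += 1
        if item == target:
            return i, comparisons
    return None, comparisons

data = list(range(1000))                       # inputs of size n = 1000

_, best = linear_search_steps(data, 0)         # target at the front
_, worst = linear_search_steps(data, -1)       # target absent: worst case
avg = sum(linear_search_steps(data, t)[1] for t in data) / len(data)

print(f"best case:    {best} comparison")      # 1
print(f"worst case:   {worst} comparisons")    # 1000 = n
print(f"average case: {avg:.1f} comparisons")  # 500.5, about (n + 1) / 2
```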

  1. Sipser, Michael (2006). Introduction to the Theory of Computation. Course Technology Inc. ISBN 0-619-21764-2.
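As a rough numeric illustration of how the complexity classes named in the quoted passage separate as the input size grows, the following sketch (simple arithmetic, not drawn from the cited reference) tabulates n, n log n, n^2, and 2^n for a few values of n; the exponential column quickly dwarfs the others, which is why the asymptotic behavior is what gets compared.

```python
import math

# Growth of the functions that appear inside the big O notation above:
# n (linear), n log n, n^2 (polynomial), 2^n (exponential).
print(f"{'n':>4} {'n log n':>10} {'n^2':>8} {'2^n':>26}")
for n in (10, 20, 40, 80):
    print(f"{n:>4} {n * math.log2(n):>10.0f} {n ** 2:>8} {2 ** n:>26}")
```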