Programming: Big O

Big O notation

Big O notation is a mathematical notation that describes the limiting behavior (upper bound) of a function when the argument tends towards a particular value or infinity.

In computer science, big O notation is used to classify algorithms according to how their run time or space (memory) requirements grow as the input size grows.

It is not meant to be an exact measurement of an algorithm's performance, just a description of how its time or memory requirements (its complexity) grow as the input size grows. Space growth is usually less consequential than time growth.
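
For example (a minimal sketch with illustrative function names), reading the first element of an array takes the same amount of work no matter how large the array is, so it is O(1), while summing every element grows linearly with the array's size, so it is O(N):

  /* O(1): a single array access, independent of n. */
  int first_element(const int a[], int n) {
    (void)n;                 /* the size never affects the work done */
    return a[0];
  }

  /* O(N): the loop body runs once per element. */
  long sum_elements(const int a[], int n) {
    long total = 0;
    for (int i = 0; i < n; ++i)
      total += a[i];
    return total;
  }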

:information_source: Note: For simplicity, we will use lg N to denote the base-2 logarithm of N, log_2(N).

Constants are dropped

Big O is defined only up to a constant factor, so constants are irrelevant and can be dropped. For example, if a function is O(2N), we can drop the 2 and just say it is O(N).

Even with a constant factor of 10, a linear algorithm still grows far more slowly than a quadratic one, which is why the constant can safely be ignored:

  N=10^3: O(10N) = 10^4 | O(N^2) = 10^6 # 100x slower
  N=10^4: O(10N) = 10^5 | O(N^2) = 10^8 # 1000x slower

Similarly, two consecutive passes over the input take O(2N) time, which simplifies to O(N):

  int o = 0;
  // First pass over the input: O(N)
  for (int i = 0; i < n; ++i) {
    o++;
  }
  // Second pass: another O(N), so O(2N) = O(N) overall
  for (int j = 0; j < n; ++j) {
    o++;
  }

In practice, for small data sets an algorithm with a worse Big O can actually be faster, since the dropped constants still matter: an O(N^2) insertion sort often beats an O(N lg N) quicksort on small inputs.
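
To sketch that idea (the cutoff of 16 and the function names here are illustrative assumptions, not any particular library's implementation), a quicksort can hand small sub-arrays to insertion sort:

  /* Insertion sort: O(N^2) comparisons, but very small constant factors. */
  void insertion_sort(int a[], int lo, int hi) {
    for (int i = lo + 1; i <= hi; ++i) {
      int key = a[i], j = i - 1;
      while (j >= lo && a[j] > key) {
        a[j + 1] = a[j];
        --j;
      }
      a[j + 1] = key;
    }
  }

  /* Hybrid quicksort: O(N lg N) on average, but falls back to
     insertion sort once a sub-array is small enough for the
     constants to win. */
  void hybrid_sort(int a[], int lo, int hi) {
    if (hi - lo < 16) {           /* illustrative cutoff */
      insertion_sort(a, lo, hi);  /* also handles empty/one-element ranges */
      return;
    }
    int pivot = a[hi], i = lo;    /* partition around the last element */
    for (int j = lo; j < hi; ++j) {
      if (a[j] < pivot) {
        int t = a[i]; a[i] = a[j]; a[j] = t;
        ++i;
      }
    }
    int t = a[i]; a[i] = a[hi]; a[hi] = t;
    hybrid_sort(a, lo, i - 1);
    hybrid_sort(a, i + 1, hi);
  }

Many production sort routines use a small-input cutoff like this for the same reason.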

Consider the worst case

The default is to consider the worst-case scenario, although "best" and "average" cases can be analyzed as well. The worst case is the most commonly quoted bound, since it holds no matter what input the algorithm receives.
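
For instance (a minimal sketch; the function name is just for illustration), a linear search is O(1) in the best case, when the target happens to be the first element, but O(N) in the average and worst cases:

  /* Linear search:
     best case    O(1) - target is the first element,
     average case O(N) - about N/2 comparisons, constant dropped,
     worst case   O(N) - target is last or absent. */
  int linear_search(const int a[], int n, int target) {
    for (int i = 0; i < n; ++i) {
      if (a[i] == target)
        return i;   /* found: return its index */
    }
    return -1;      /* not found */
  }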

Common complexity classes, in increasing order of growth: O(1), O(lg N), O(sqrt N), O(N), O(N lg N), O(N^2), O(2^N), O(N!).
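
As an example from this list (a minimal sketch with an illustrative name), binary search over a sorted array halves the remaining range on each step, so it runs in O(lg N):

  /* Binary search on a sorted array: each iteration halves the
     remaining range, so at most about lg N iterations -> O(lg N). */
  int binary_search(const int a[], int n, int target) {
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
      int mid = lo + (hi - lo) / 2;   /* avoids overflow of lo + hi */
      if (a[mid] == target)
        return mid;
      else if (a[mid] < target)
        lo = mid + 1;                 /* target is in the upper half */
      else
        hi = mid - 1;                 /* target is in the lower half */
    }
    return -1;                        /* not found */
  }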
