Boundaries for algorithm analysis

Here are some boundaries you should know in order to approximate the time and space complexity of your algorithm.

  • $2^{10} = 1,024 \approx 10^{3}, 2^{20} = 1,048,576 \approx 10^{6}$
  • 32-bit signed integers (int) and 64-bit signed integers (long long) have upper limits of $2^{31} − 1 \approx 2 \times 10^{9}$ (safe for up to $\approx 9$ decimal digits) and $2^{63} − 1 \approx 9 \times 10^{18}$ (safe for up to $\approx 18$ decimal digits) respectively (see the overflow sketch after this list).
  • Unsigned integers can be used if only non-negative numbers are required. 32-bit unsigned integers (unsigned int) and 64-bit unsigned integers (unsigned long long) have upper limits of $2^{32} − 1 \approx 4 \times 10^{9}$ and $2^{64} − 1 \approx 1.8 \times 10^{19}$ respectively.
  • There are $n!$ permutations and $2^{n}$ subsets (or combinations) of n elements.
  • The best possible time complexity of a comparison-based sorting algorithm is $\Omega(n\log_{2}{n})$.
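
As a quick check of the integer limits above, here is a small C++ sketch (assuming the usual contest setup where `int` is 32 bits and `long long` is 64 bits) showing that an intermediate product such as $10^5 \times 10^5$ overflows a 32-bit `int` but fits comfortably in a 64-bit `long long`:

```cpp
#include <climits>
#include <cstdio>

int main() {
    int n = 100000;               // 10^5, well within the 32-bit range
    long long prod = 1LL * n * n; // force 64-bit arithmetic: 10^10 fits in long long
    // int bad = n * n;           // 32-bit multiplication would overflow (10^10 > 2^31 - 1)

    std::printf("n * n     = %lld\n", prod);
    std::printf("INT_MAX   = %d\n", INT_MAX);      // 2^31 - 1 = 2147483647
    std::printf("LLONG_MAX = %lld\n", LLONG_MAX);  // 2^63 - 1, about 9.2 * 10^18
    return 0;
}
```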

Notes:

  • The largest input size for typical programming contest problems is usually below 1M ($10^6$) items. Beyond that, the time needed to read the input (the Input/Output routine) will be the bottleneck.
  • Usually, $O(n\log_{2}{n})$ algorithms are sufficient to solve most contest problems.
  • There are a number of studies that try to find faster sorting algorithms.
  • If you need to store integers $\geq 2^{64}$, use the Big Integer technique.
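
The following is only a minimal sketch of the Big Integer technique, assuming non-negative numbers stored as decimal digit strings; `addBig` is a hypothetical helper name, and in practice you would normally reach for a tested big-integer implementation or a language with built-in arbitrary precision:

```cpp
#include <algorithm>
#include <iostream>
#include <string>

// Sketch of the Big Integer technique for non-negative numbers: keep the value as a
// string of decimal digits and add with an explicit carry, so the result is not
// limited by the 64-bit range. (addBig is a hypothetical helper, not a library call.)
std::string addBig(const std::string& a, const std::string& b) {
    std::string result;
    int i = (int)a.size() - 1, j = (int)b.size() - 1, carry = 0;
    while (i >= 0 || j >= 0 || carry > 0) {
        int sum = carry;
        if (i >= 0) sum += a[i--] - '0';
        if (j >= 0) sum += b[j--] - '0';
        result.push_back(char('0' + sum % 10));
        carry = sum / 10;
    }
    std::reverse(result.begin(), result.end());
    return result;
}

int main() {
    // (2^64 - 1) + 1 does not fit in unsigned long long, but the digit-string
    // representation handles it without overflow.
    std::cout << addBig("18446744073709551615", "1") << "\n"; // 18446744073709551616
    return 0;
}
```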

By calculating the time complexity of an algorithm, it is possible to check, before implementing the algorithm, that it is efficient enough for the problem. The starting point for estimations is the fact that a modern computer can perform some hundreds of millions of operations in a second.

For example, assume that the time limit for a problem is one second and the input size is n = $10^5$. If the time complexity is O($n^2$), the algorithm will perform about $(10^5)^2 = 10^{10}$ operations. This should take at least some tens of seconds, so the algorithm seems to be too slow for solving the problem.
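
To make this concrete, here is a rough feasibility estimate in C++; the budget of about $10^8$ simple operations per second is an assumption, and real throughput depends on the machine, the compiler, and the kind of operations performed:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const double OPS_PER_SECOND = 1e8; // assumed budget; varies widely in practice
    double n = 1e5;

    double quadratic = n * n;        // O(n^2): about 10^10 operations
    double nlogn = n * std::log2(n); // O(n log n): about 1.7 * 10^6 operations

    std::printf("O(n^2):     ~%.0f ops, ~%.1f s\n", quadratic, quadratic / OPS_PER_SECOND);
    std::printf("O(n log n): ~%.0f ops, ~%.3f s\n", nlogn, nlogn / OPS_PER_SECOND);
    return 0;
}
```

With these assumptions the quadratic algorithm needs on the order of a hundred seconds, while the $O(n \log n)$ one finishes in a small fraction of a second.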

On the other hand, given the input size, we can try to guess the required time complexity of the algorithm that solves the problem. The following table contains some useful estimates assuming a time limit of one second.

| Input limit | Required time complexity |
| --- | --- |
| $n \leq 10$ | $O(n!)$ |
| $n \leq 20$ | $O(2^n)$ |
| $n \leq 500$ | $O(n^3)$ |
| $n \leq 5000$ | $O(n^2)$ |
| $n \leq 10^6$ | $O(n \log n)$ or $O(n)$ |
| $n$ is large | $O(1)$ or $O(\log n)$ |

For example, if the input size is $n = 10^5$, it is probably expected that the time complexity of the algorithm is O(n) or O($n \log n$). This information makes it easier to design the algorithm, because it rules out approaches that would yield an algorithm with a worse time complexity.

Still, it is important to remember that a time complexity is only an estimate of efficiency, because it hides the constant factors. For example, an algorithm that runs in O(n) time may perform $n/2$ or $5n$ operations. This has an important effect on the actual running time of the algorithm.
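
As an illustration (the factor of five is arbitrary, and both function names are made up for this example), the two functions below run in $O(n)$ time, but the second one scans the data five times and will be noticeably slower in practice even though the asymptotic complexity is the same:

```cpp
#include <cstdio>
#include <numeric>
#include <vector>

// One pass over the data: roughly n additions.
long long sumOnce(const std::vector<int>& v) {
    return std::accumulate(v.begin(), v.end(), 0LL);
}

// Five passes over the data: roughly 5n additions -- still O(n),
// but with a visibly larger constant factor in the running time.
long long sumFiveTimes(const std::vector<int>& v) {
    long long total = 0;
    for (int pass = 0; pass < 5; ++pass)
        total += std::accumulate(v.begin(), v.end(), 0LL);
    return total / 5;
}

int main() {
    std::vector<int> v(1000000, 1);
    std::printf("%lld %lld\n", sumOnce(v), sumFiveTimes(v)); // same result, different work
    return 0;
}
```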

To be continued…