Runtime Overview

If we are given hints about the runtime requirements of a problem during an interview, we can roughly deduce the intended algorithm. The important thing about runtime complexity is that we usually do not consider constants when analyzing it: we want to find the "family" of functions that most closely matches the growth in computational time. For instance, O(2n) and O(5n) both belong to the family O(n), so if a solution runs in precisely 2n steps, the convention is to simply call it O(n) and discard the constant. That said, a sufficiently bad constant can still substantially increase the actual runtime, so in some specific cases it is worth optimizing an algorithm's constant factor. In most cases, however, this is unnecessary, and basic runtime analysis is enough to indicate how efficient a solution is.

As a general rule of thumb, a conservative estimate for servers is 10⁸ operations per second.

O(1)

Constant time complexity. Common examples:

  • Hashmap lookup
  • Array access and update
  • Pushing and popping elements from a stack
  • Finding and applying a math formula
  • Typically for n > 10⁹

The following code is O(1) because it executes a constant number of operations:

a = 5
b = 7
c = 4
d = a + b + c + 153

O(log(N))

log(N) grows VERY slowly. log(1,000,000) is only about 20. In fact, lookup by primary key in a relational database is log(N) (many mainstream relational databases such as postgres use B-trees for indexing by default, and search in B-tree is log(N)).

In coding interviews, log(N) typically means

  • Binary search or variant
  • Balanced binary search tree lookup
  • Processing the digits of a number
  • Typically for n > 10⁸

The following code is O(log(N)) because N is halved each iteration, so the amount of work is logarithmic:

N = 100000000
while N > 0:
  # some constant time operation
  N //= 2
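
Binary search, from the bullet list above, follows the same halving pattern. A minimal iterative sketch (the function name is illustrative):

```python
def binary_search(arr, target):
    # Each iteration halves the search range, so the loop runs O(log N) times.
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1  # target not present
```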

O(N)

Linear time typically means looping through a linear data structure a constant number of times. Most commonly, this means

  • Going through array/linked list
  • Two pointers
  • Some types of greedy
  • Tree/graph traversal
  • Stack/Queue
  • Typically for n ≤ 10⁸

The following examples are both O(N):

for i in range(1, N + 1):
  # constant time code

i = 0
while i < N:
  # constant time code
  i += 1

Because we also ignore constant factors and lower order terms, the following are also O(N):

for i in range(1, 5 * N + 17):
  # constant time code

for i in range(1, N + 538238):
  # constant time code
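
The two-pointers technique from the bullet list above is also O(N), since each pointer moves at most N steps in total. A minimal sketch (function name illustrative) that finds two values in a sorted array summing to a target:

```python
def two_sum_sorted(arr, target):
    # left only moves right and right only moves left,
    # so together they take at most N steps: O(N).
    left, right = 0, len(arr) - 1
    while left < right:
        s = arr[left] + arr[right]
        if s == target:
            return (left, right)
        elif s < target:
            left += 1
        else:
            right -= 1
    return None
```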

O(K log(N))

  • Heap push/pop K times. When you encounter problems that look for the "top K elements", you can usually solve them with pushing and popping to a heap, K times, resulting in O(K log(N)) runtime. e.g., K closest points, merge K sorted lists.
  • Binary search K times.
  • Typically for n ≤ 10⁶

We will discuss more about this in the heap section.
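
As a sketch of the top-K pattern above (the function name is illustrative), popping the K smallest elements from a heap:

```python
import heapq

def k_smallest(arr, k):
    # heapify is O(N); each of the K pops is O(log N),
    # giving O(N + K log(N)) overall.
    heap = list(arr)
    heapq.heapify(heap)
    return [heapq.heappop(heap) for _ in range(k)]
```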

O(N log(N))

Sorting N elements is the most common O(N log(N)) operation:

N = int(input())
ar = []
for i in range(N):
  m = int(input())
  ar.append(m)
ar.sort() # O(N log(N))

O(N^2)

Also called quadratic time.

  • Nested loops, e.g., visiting each matrix entry
  • Many brute force solutions
  • Typically for n ≤ 10⁴

For small N (say, N < 1000), this is not that bad on modern computers, and you can probably pass most Leetcode tests with quadratic time for tiny Ns. In an interview, however, if a better solution exists, your solution usually has to beat quadratic time to impress the interviewer.

This example is O(N^2) because the outer loop runs O(N) iterations and the inner loop O(N) as well:

for i in range(1, N + 1):
  for j in range(1, N + 1):
    # constant time code

For this example, the outer loop runs O(N) iterations, and the inner loop runs anywhere between 1 and N iterations. The total work is N + (N - 1) + ... + 1 = N(N + 1)/2, which is O(N^2).

for a in range(1, N + 1):
  for j in range(a, N + 1):
    # constant time code

O(2^N)

Grows very rapidly. Often requires memoization to avoid repeated computations and reduce complexity.

  • Combinatorial problems, backtracking, e.g. subsets
  • Often involves recursion and is harder to analyze time complexity at first sight
  • Further detailed code examples can be found in the backtracking section
  • Typically for n ≤ 25

A naive recursive Fibonacci algorithm is O(2^N) because each call to Fib(i) with i > 1 spawns two further calls, Fib(i - 1) and Fib(i - 2).

def Fib(n):
  if n == 0 or n == 1:
    return 1
  return Fib(n - 1) + Fib(n - 2)
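
As mentioned above, memoization avoids the repeated computations: caching each Fib(i) after its first call brings the runtime down from O(2^N) to O(N). A minimal sketch using Python's functools.lru_cache:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_memo(n):
    # Each value of n is computed only once, so this is O(N), not O(2^N).
    if n == 0 or n == 1:
        return 1
    return fib_memo(n - 1) + fib_memo(n - 2)
```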

O(N!)

Grows insanely rapidly. Only solvable by computers for small N. Often requires memoization to avoid repeated computations and reduce complexity.

  • Combinatorial problems, backtracking, e.g. permutations
  • Often involves recursion and is harder to analyze time complexity at first sight
  • Detailed code examples can be found in the backtracking section
  • Typically for n ≤ 12
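
To see where the N! comes from, here is a minimal backtracking-style sketch (function name illustrative) that generates every permutation of a list:

```python
def permutations(arr):
    # There are N! permutations, each of length N, so generating
    # them all costs O(N * N!).
    if len(arr) <= 1:
        return [arr[:]]
    result = []
    for i in range(len(arr)):
        rest = arr[:i] + arr[i + 1:]      # all elements except arr[i]
        for perm in permutations(rest):
            result.append([arr[i]] + perm)
    return result
```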

Amortized Time Complexity

The idea of amortized time is that a very costly task done once in a while is diluted away by the many cheap tasks around it. For example, if we perform N operations where N - 1 of them are O(1) and only a single one is O(N), the total is still O(N), not the naive O(N^2) bound we would get by treating every operation as O(N).

The key takeaway is that amortized time is the average time taken per operation.

The following is an implementation of an append function for a dynamically sized array; each call is amortized O(1):

class DynamicArray:
  def __init__(self):
    self.size = 0          # number of elements in array
    self.capacity = 1      # maximum number of elements array can store
    self.arr = [None] * self.capacity

  def append(self, x):
    if self.size == self.capacity:
      # Costly O(N) resize, but it happens rarely enough
      # that the amortized cost per append is O(1).
      new_arr = [None] * (2 * self.capacity)
      for i in range(self.size):
        new_arr[i] = self.arr[i]
      self.arr = new_arr
      self.capacity *= 2
    self.arr[self.size] = x
    self.size += 1
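
Python's built-in list grows the same way, which is why its append is amortized O(1). As a rough sketch (the function below is illustrative, not part of any library), we can count how many element copies N appends trigger under capacity doubling:

```python
def total_copy_cost(n_appends):
    # Simulates n_appends appends with capacity doubling and counts
    # element copies. The total stays below 2 * n_appends, so the
    # amortized cost per append is O(1).
    capacity, size, copies = 1, 0, 0
    for _ in range(n_appends):
        if size == capacity:
            copies += size   # cost of moving elements to the new array
            capacity *= 2
        size += 1
    return copies
```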

Big O Notation Practice

Answers for these questions will be at the end of this article.

  1. What is the asymptotic time bound of functions with these time complexities?
  • 3N + 2N + N
  • 2N^3 + 5N^2
  • N + log(N)
  • N^2 log(N)
  • 2^N + N^2
  • 10
  2. What is the time complexity of this code?
N = int(input())
ar = [0] * N
for i in range(N):
  ar[i] = int(input())
max_val = ar[0]
for i in range(1, N):
  max_val = max(max_val, ar[i])
print(max_val)
  3. What is the time complexity of this code?
N = int(input())
ar = [[0] * N for _ in range(N)]
for i in range(N):
  for j in range(N):
    ar[i][j] = int(input())

for i in range(N):
  ar[i].sort()
  4. What is the time complexity of this code?
# assume an O(1) swap(a, b) function that swaps the values a and b
N = int(input())
ar = [[0] * N for _ in range(N)]
for i in range(N):
  for j in range(N // 2):
    swap(ar[i][j], ar[i][N - 1 - j])
  5. What is the time complexity of this code?
# assume an O(1) log2(N) function
def func(s, idx, length):
  if idx == length:
    print(s)
  else:
    func(s + "a", idx + 1, length)
    func(s + "b", idx + 1, length)
...
N = int(input())
func("", 0, int(log2(N)))
  6. What is the time complexity of this code?
N = int(input())
bits = []
while N != 0:
  bits.append(N % 2)
  N //= 2

Answers:

  1.
  • O(N)
  • O(N^3)
  • O(N)
  • O(N^2 log(N))
  • O(2^N)
  • O(1)
  2. O(N) — Both loops iterate O(N) times with O(1) operations each iteration.
  3. O(N^2 log(N)) — The second for loop has O(N) iterations, and each iteration takes O(N log(N)) to sort a row.
  4. O(N^2) — The outer loop iterates O(N) times, the inner loop O(N / 2) times, which is O(N), and swapping is O(1).
  5. O(N) — O(2^(log2(N))) = O(N) strings are formed, each taking O(1) to form and print (treating each string operation as constant time).
  6. O(log(N)) — N is divided by 2 each iteration, so the amount of work is logarithmic with respect to N.