Algorithm Big O Cheat Sheet



Common Data Structure Operations

This cheat sheet is one you will want to bookmark. The O in Big-O stands for "order of", and Big-O notation is approximate: it describes how running time grows with input size, ignoring constant factors. Algorithm running times grow at different rates; in increasing order of growth, the common classes are O(1), O(log N), O(N), O(N log N), O(N²), O(2ᴺ), and O(N!). If you would like to dig deeper into the math behind Big-O, take a look at the free Coursera algorithms course from Stanford University.

Data Structure     | Avg: Access | Search   | Insert   | Delete   | Worst: Access | Search   | Insert   | Delete   | Space (worst)
Array              | Θ(1)        | Θ(n)     | Θ(n)     | Θ(n)     | O(1)          | O(n)     | O(n)     | O(n)     | O(n)
Stack              | Θ(n)        | Θ(n)     | Θ(1)     | Θ(1)     | O(n)          | O(n)     | O(1)     | O(1)     | O(n)
Queue              | Θ(n)        | Θ(n)     | Θ(1)     | Θ(1)     | O(n)          | O(n)     | O(1)     | O(1)     | O(n)
Singly-Linked List | Θ(n)        | Θ(n)     | Θ(1)     | Θ(1)     | O(n)          | O(n)     | O(1)     | O(1)     | O(n)
Doubly-Linked List | Θ(n)        | Θ(n)     | Θ(1)     | Θ(1)     | O(n)          | O(n)     | O(1)     | O(1)     | O(n)
Skip List          | Θ(log n)    | Θ(log n) | Θ(log n) | Θ(log n) | O(n)          | O(n)     | O(n)     | O(n)     | O(n log n)
Hash Table         | N/A         | Θ(1)     | Θ(1)     | Θ(1)     | N/A           | O(n)     | O(n)     | O(n)     | O(n)
Binary Search Tree | Θ(log n)    | Θ(log n) | Θ(log n) | Θ(log n) | O(n)          | O(n)     | O(n)     | O(n)     | O(n)
Cartesian Tree     | N/A         | Θ(log n) | Θ(log n) | Θ(log n) | N/A           | O(n)     | O(n)     | O(n)     | O(n)
B-Tree             | Θ(log n)    | Θ(log n) | Θ(log n) | Θ(log n) | O(log n)      | O(log n) | O(log n) | O(log n) | O(n)
Red-Black Tree     | Θ(log n)    | Θ(log n) | Θ(log n) | Θ(log n) | O(log n)      | O(log n) | O(log n) | O(log n) | O(n)
Splay Tree         | N/A         | Θ(log n) | Θ(log n) | Θ(log n) | N/A           | O(log n) | O(log n) | O(log n) | O(n)
AVL Tree           | Θ(log n)    | Θ(log n) | Θ(log n) | Θ(log n) | O(log n)      | O(log n) | O(log n) | O(log n) | O(n)
KD Tree            | Θ(log n)    | Θ(log n) | Θ(log n) | Θ(log n) | O(n)          | O(n)     | O(n)     | O(n)     | O(n)
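
To make the array vs. linked list rows concrete, here is a minimal Java sketch (illustrative only, not from any cited source) contrasting O(1) indexed access in an array with O(n) access in a linked list, and O(1) insertion at the head of a linked list with O(n) insertion at the front of an array:

    import java.util.ArrayList;
    import java.util.LinkedList;
    import java.util.List;

    public class AccessDemo {
        public static void main(String[] args) {
            List<Integer> array = new ArrayList<>();
            List<Integer> linked = new LinkedList<>();
            for (int i = 0; i < 100_000; i++) { array.add(i); linked.add(i); }

            // O(1): an array computes the element's address directly from the index.
            int a = array.get(50_000);

            // O(n): a linked list must walk node by node to reach the index.
            int b = linked.get(50_000);

            // O(1): inserting at the head of a linked list just rewires pointers.
            linked.add(0, -1);

            // O(n): inserting at the front of an array shifts every element right.
            array.add(0, -1);

            System.out.println(a + " " + b);
        }
    }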

Array Sorting Algorithms

When measuring the efficiency of an algorithm, we usually take into account both time and space complexity. For example, an algorithm whose running time grows proportionally to n log n is described as having time complexity O(n log n). The table below summarizes the best, average, and worst-case time complexity of common array sorting algorithms, together with their worst-case space complexity.

Algorithm      | Best       | Average       | Worst         | Space (worst)
Quicksort      | Ω(n log n) | Θ(n log n)    | O(n²)         | O(log n)
Mergesort      | Ω(n log n) | Θ(n log n)    | O(n log n)    | O(n)
Timsort        | Ω(n)       | Θ(n log n)    | O(n log n)    | O(n)
Heapsort       | Ω(n log n) | Θ(n log n)    | O(n log n)    | O(1)
Bubble Sort    | Ω(n)       | Θ(n²)         | O(n²)         | O(1)
Insertion Sort | Ω(n)       | Θ(n²)         | O(n²)         | O(1)
Selection Sort | Ω(n²)      | Θ(n²)         | O(n²)         | O(1)
Tree Sort      | Ω(n log n) | Θ(n log n)    | O(n²)         | O(n)
Shell Sort     | Ω(n log n) | Θ(n (log n)²) | O(n (log n)²) | O(1)
Bucket Sort    | Ω(n+k)     | Θ(n+k)        | O(n²)         | O(n)
Radix Sort     | Ω(nk)      | Θ(nk)         | O(nk)         | O(n+k)
Counting Sort  | Ω(n+k)     | Θ(n+k)        | O(n+k)        | O(k)
Cubesort       | Ω(n)       | Θ(n log n)    | O(n log n)    | O(n)
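
As one concrete instance of the table, here is a minimal insertion sort in Java; the comments tie its behavior to the Ω(n) best case and O(n²) average/worst cases:

    public class InsertionSort {
        // Sorts a[] in place (O(1) extra space). Best case Ω(n): on already
        // sorted input the inner loop never runs, one compare per element.
        // Average/worst case Θ(n²)/O(n²): each element may shift past about
        // half (or all) of the elements before it.
        static void sort(int[] a) {
            for (int i = 1; i < a.length; i++) {
                int key = a[i];
                int j = i - 1;
                while (j >= 0 && a[j] > key) {
                    a[j + 1] = a[j];   // shift larger elements right
                    j--;
                }
                a[j + 1] = key;
            }
        }

        public static void main(String[] args) {
            int[] a = {5, 2, 4, 6, 1, 3};
            sort(a);
            System.out.println(java.util.Arrays.toString(a));  // [1, 2, 3, 4, 5, 6]
        }
    }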


We summarize the performance characteristics of classic algorithms and data structures for sorting, priority queues, symbol tables, and graph processing.


We also summarize some of the mathematics useful in the analysis of algorithms, including commonly encountered functions; useful formulas and approximations; properties of logarithms; asymptotic notations; and solutions to divide-and-conquer recurrences.


Sorting.

The table below summarizes the number of compares for a variety of sorting algorithms, as implemented in this textbook. It includes leading constants but ignores lower-order terms.

ALGORITHM      | CODE           | IN PLACE | STABLE | BEST     | AVERAGE  | WORST     | REMARKS
selection sort | Selection.java | yes      | no     | ½ n²     | ½ n²     | ½ n²      | n exchanges; quadratic in best case
insertion sort | Insertion.java | yes      | yes    | n        | ¼ n²     | ½ n²      | use for small or partially-sorted arrays
bubble sort    | Bubble.java    | yes      | yes    | n        | ½ n²     | ½ n²      | rarely useful; use insertion sort instead
shellsort      | Shell.java     | yes      | no     | n log₃ n | unknown  | c n^(3/2) | tight code; subquadratic
mergesort      | Merge.java     | no       | yes    | ½ n lg n | n lg n   | n lg n    | n log n guarantee; stable
quicksort      | Quick.java     | yes      | no     | n lg n   | 2 n ln n | ½ n²      | n log n probabilistic guarantee; fastest in practice
heapsort       | Heap.java      | yes      | no     | n †      | 2 n lg n | 2 n lg n  | n log n guarantee; in place
† n lg n if all keys are distinct
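
The stability column has a direct echo in the JDK: as of recent JDKs, Arrays.sort on primitive arrays uses a (non-stable) dual-pivot quicksort, while Arrays.sort on object arrays uses TimSort, a stable mergesort variant. A minimal sketch:

    import java.util.Arrays;

    public class StabilityDemo {
        public static void main(String[] args) {
            // Primitives: dual-pivot quicksort — fast and in place; stability
            // is meaningless for primitives anyway.
            int[] nums = {3, 1, 2};
            Arrays.sort(nums);

            // Objects: TimSort, so equal keys keep their original relative order.
            String[] words = {"pear", "fig", "apple"};
            Arrays.sort(words);

            System.out.println(Arrays.toString(nums) + " " + Arrays.toString(words));
        }
    }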


Priority queues.

The table below summarizes the order of growth of the running time of operations for a variety of priority queues, as implemented in this textbook. It ignores leading constants and lower-order terms. Except as noted, all running times are worst-case running times.

DATA STRUCTURE   | CODE                     | INSERT  | DEL-MIN   | MIN | DEC-KEY | DELETE    | MERGE
array            | BruteIndexMinPQ.java     | 1       | n         | n   | 1       | 1         | n
binary heap      | IndexMinPQ.java          | log n   | log n     | 1   | log n   | log n     | n
d-way heap       | IndexMultiwayMinPQ.java  | log_d n | d log_d n | 1   | log_d n | d log_d n | n
binomial heap    | IndexBinomialMinPQ.java  | 1       | log n     | 1   | log n   | log n     | log n
Fibonacci heap † | IndexFibonacciMinPQ.java | 1       | log n     | 1   | 1       | log n     | 1
† amortized guarantee
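
For reference, java.util.PriorityQueue is a binary heap, so it matches the binary-heap row above: offer (insert) and poll (del-min) are O(log n), and peek (min) is O(1). A minimal sketch (note it has no efficient decrease-key or merge):

    import java.util.PriorityQueue;

    public class HeapDemo {
        public static void main(String[] args) {
            PriorityQueue<Integer> pq = new PriorityQueue<>();
            pq.offer(5);                    // insert: O(log n) sift-up
            pq.offer(1);
            pq.offer(3);
            System.out.println(pq.peek());  // min: O(1) — prints 1
            System.out.println(pq.poll());  // del-min: O(log n) sift-down — 1
            System.out.println(pq.poll());  // 3
        }
    }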


Symbol tables.

The table below summarizes the order of growth of the running time of operations for a variety of symbol tables, as implemented in this textbook. It ignores leading constants and lower-order terms.

DATA STRUCTURE                     | CODE                        | SEARCH (worst) | INSERT (worst) | DELETE (worst) | SEARCH (avg) | INSERT (avg) | DELETE (avg)
sequential search (unordered list) | SequentialSearchST.java     | n              | n              | n              | n            | n            | n
binary search (sorted array)       | BinarySearchST.java         | log n          | n              | n              | log n        | n            | n
binary search tree (unbalanced)    | BST.java                    | n              | n              | n              | log n        | log n        | √n
red-black BST (left-leaning)       | RedBlackBST.java            | log n          | log n          | log n          | log n        | log n        | log n
AVL tree                           | AVLTreeST.java              | log n          | log n          | log n          | log n        | log n        | log n
hash table (separate chaining)     | SeparateChainingHashST.java | n              | n              | n              | 1 †          | 1 †          | 1 †
hash table (linear probing)        | LinearProbingHashST.java    | n              | n              | n              | 1 †          | 1 †          | 1 †
† uniform hashing assumption
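
The JDK ships both ends of this table: java.util.TreeMap is a red-black BST (guaranteed O(log n) search/insert/delete, keys kept in order), and java.util.HashMap is a separate-chaining hash table (O(1) on average under good hashing). A minimal comparison:

    import java.util.HashMap;
    import java.util.Map;
    import java.util.TreeMap;

    public class SymbolTableDemo {
        public static void main(String[] args) {
            // Red-black BST: O(log n) worst case, plus ordered operations.
            TreeMap<String, Integer> ordered = new TreeMap<>();
            ordered.put("banana", 2);
            ordered.put("apple", 1);
            System.out.println(ordered.firstKey());   // "apple" — order for free

            // Hash table: O(1) average case, no ordering guarantees.
            Map<String, Integer> hashed = new HashMap<>();
            hashed.put("banana", 2);
            hashed.put("apple", 1);
            System.out.println(hashed.get("banana")); // 2
        }
    }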


Graph processing.

The table below summarizes the order of growth of the worst-case running time and memory usage (beyond the memory for the graph itself) for a variety of graph-processing problems, as implemented in this textbook. It ignores leading constants and lower-order terms. All running times are worst-case running times.


PROBLEM                               | ALGORITHM                 | CODE                           | TIME        | SPACE
path                                  | DFS                       | DepthFirstPaths.java           | E + V       | V
shortest path (fewest edges)          | BFS                       | BreadthFirstPaths.java         | E + V       | V
cycle                                 | DFS                       | Cycle.java                     | E + V       | V
directed path                         | DFS                       | DepthFirstDirectedPaths.java   | E + V       | V
shortest directed path (fewest edges) | BFS                       | BreadthFirstDirectedPaths.java | E + V       | V
directed cycle                        | DFS                       | DirectedCycle.java             | E + V       | V
topological sort                      | DFS                       | Topological.java               | E + V       | V
bipartiteness / odd cycle             | DFS                       | Bipartite.java                 | E + V       | V
connected components                  | DFS                       | CC.java                        | E + V       | V
strong components                     | Kosaraju–Sharir           | KosarajuSharirSCC.java         | E + V       | V
strong components                     | Tarjan                    | TarjanSCC.java                 | E + V       | V
strong components                     | Gabow                     | GabowSCC.java                  | E + V       | V
Eulerian cycle                        | DFS                       | EulerianCycle.java             | E + V       | E + V
directed Eulerian cycle               | DFS                       | DirectedEulerianCycle.java     | E + V       | V
transitive closure                    | DFS                       | TransitiveClosure.java         | V (E + V)   | V²
minimum spanning tree                 | Kruskal                   | KruskalMST.java                | E log E     | E + V
minimum spanning tree                 | Prim                      | PrimMST.java                   | E log V     | V
minimum spanning tree                 | Boruvka                   | BoruvkaMST.java                | E log V     | V
shortest paths (nonnegative weights)  | Dijkstra                  | DijkstraSP.java                | E log V     | V
shortest paths (no negative cycles)   | Bellman–Ford              | BellmanFordSP.java             | V (V + E)   | V
shortest paths (no cycles)            | topological sort          | AcyclicSP.java                 | V + E       | V
all-pairs shortest paths              | Floyd–Warshall            | FloydWarshall.java             | V³          | V²
maxflow–mincut                        | Ford–Fulkerson            | FordFulkerson.java             | E V (E + V) | V
bipartite matching                    | Hopcroft–Karp             | HopcroftKarp.java              | √V (E + V)  | V
assignment problem                    | successive shortest paths | AssignmentProblem.java         | n³ log n    | n²
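
As an illustration of the E + V bound shared by most rows above, here is a minimal BFS shortest-path sketch in Java (adjacency lists, unweighted; the same idea as BreadthFirstPaths.java, but an independent sketch rather than the textbook code):

    import java.util.ArrayDeque;
    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;
    import java.util.Queue;

    public class BFSDemo {
        // Returns dist[v] = fewest edges from source s to v (-1 if unreachable).
        // Each vertex is enqueued at most once and each edge scanned once:
        // O(E + V) time. The queue and dist[] use O(V) extra space.
        static int[] shortestPaths(List<List<Integer>> adj, int s) {
            int[] dist = new int[adj.size()];
            Arrays.fill(dist, -1);
            Queue<Integer> queue = new ArrayDeque<>();
            dist[s] = 0;
            queue.add(s);
            while (!queue.isEmpty()) {
                int v = queue.remove();
                for (int w : adj.get(v)) {
                    if (dist[w] == -1) {        // first visit = shortest distance
                        dist[w] = dist[v] + 1;
                        queue.add(w);
                    }
                }
            }
            return dist;
        }

        public static void main(String[] args) {
            // Undirected edges 0-1, 0-2, 1-3.
            List<List<Integer>> adj = new ArrayList<>();
            for (int v = 0; v < 4; v++) adj.add(new ArrayList<>());
            int[][] edges = {{0, 1}, {0, 2}, {1, 3}};
            for (int[] e : edges) { adj.get(e[0]).add(e[1]); adj.get(e[1]).add(e[0]); }
            System.out.println(Arrays.toString(shortestPaths(adj, 0)));  // [0, 1, 1, 2]
        }
    }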


Commonly encountered functions.

Here are some functions that are commonly encountered when analyzing algorithms.

FUNCTION                  | NOTATION                              | DEFINITION
floor                     | \( \lfloor x \rfloor \)               | greatest integer \( \le x \)
ceiling                   | \( \lceil x \rceil \)                 | smallest integer \( \ge x \)
binary logarithm          | \( \lg x \) or \( \log_2 x \)         | \( y \) such that \( 2^y = x \)
natural logarithm         | \( \ln x \) or \( \log_e x \)         | \( y \) such that \( e^y = x \)
common logarithm          | \( \log_{10} x \)                     | \( y \) such that \( 10^y = x \)
iterated binary logarithm | \( \lg^* x \)                         | \( 0 \) if \( x \le 1 \); \( 1 + \lg^*(\lg x) \) otherwise
harmonic number           | \( H_n \)                             | \( 1 + 1/2 + 1/3 + \ldots + 1/n \)
factorial                 | \( n! \)                              | \( 1 \times 2 \times 3 \times \ldots \times n \)
binomial coefficient      | \( \binom{n}{k} \)                    | \( \frac{n!}{k!\,(n-k)!} \)
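
A minimal Java sketch of the two less familiar entries, the iterated binary logarithm and the harmonic numbers (the names lgStar and harmonic are mine, not from the table):

    public class Functions {
        // lg*(n) for integer n: how many times floor(lg ·) must be applied
        // before the value drops to 1. Uses bit length to avoid float error.
        static int lgStar(long n) {
            int count = 0;
            while (n > 1) {
                n = 63 - Long.numberOfLeadingZeros(n);  // floor(lg n)
                count++;
            }
            return count;
        }

        // H_n = 1 + 1/2 + ... + 1/n, which grows like ln n.
        static double harmonic(int n) {
            double h = 0;
            for (int i = 1; i <= n; i++) h += 1.0 / i;
            return h;
        }

        public static void main(String[] args) {
            System.out.println(lgStar(65536));   // 4: 65536 -> 16 -> 4 -> 2 -> 1
            System.out.println(harmonic(1000));  // ≈ 7.485, vs ln 1000 ≈ 6.908
        }
    }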


Useful formulas and approximations.

Here are some useful formulas and approximations that are widely used in the analysis of algorithms.
  • Harmonic sum: \( 1 + 1/2 + 1/3 + \ldots + 1/n \sim \ln n \)
  • Triangular sum: \( 1 + 2 + 3 + \ldots + n = n(n+1)/2 \sim n^2/2 \)
  • Sum of squares: \( 1^2 + 2^2 + 3^2 + \ldots + n^2 \sim n^3/3 \)
  • Geometric sum: if \( r \neq 1 \), then \( 1 + r + r^2 + r^3 + \ldots + r^n = (r^{n+1} - 1)\,/\,(r - 1) \)
    • \( r = 1/2 \): \( 1 + 1/2 + 1/4 + 1/8 + \ldots + 1/2^n \sim 2 \)
    • \( r = 2 \): \( 1 + 2 + 4 + 8 + \ldots + n/2 + n = 2n - 1 \sim 2n \), when \( n \) is a power of 2
  • Stirling's approximation: \( \lg(n!) = \lg 1 + \lg 2 + \lg 3 + \ldots + \lg n \sim n \lg n \)
  • Exponential: \( (1 + 1/n)^n \sim e \); \( (1 - 1/n)^n \sim 1/e \)
  • Binomial coefficients: \( \binom{n}{k} \sim n^k / k! \) when \( k \) is a small constant
  • Approximate sum by integral: if \( f(x) \) is a monotonically increasing function, then \( \displaystyle \int_0^n f(x)\,dx \;\le\; \sum_{i=1}^n f(i) \;\le\; \int_1^{n+1} f(x)\,dx \)
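
These approximations are easy to sanity-check numerically; a small hedged Java sketch (the choice of n is arbitrary):

    public class Approximations {
        public static void main(String[] args) {
            int n = 1_000_000;

            // Harmonic sum ~ ln n; the gap converges to Euler's constant.
            double h = 0;
            for (int i = 1; i <= n; i++) h += 1.0 / i;
            System.out.println(h - Math.log(n));           // ≈ 0.5772

            // Stirling: lg(n!) ~ n lg n. The ratio is ≈ 0.93 at this n
            // and tends to 1 as n grows (the gap is a lower-order term).
            double lg2 = Math.log(2), lgFact = 0;
            for (int i = 2; i <= n; i++) lgFact += Math.log(i) / lg2;
            System.out.println(lgFact / (n * (Math.log(n) / lg2)));

            // Exponential: (1 + 1/n)^n ~ e.
            System.out.println(Math.pow(1 + 1.0 / n, n));  // ≈ 2.71828
        }
    }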


Properties of logarithms.

  • Definition: \( \log_b a = c \) means \( b^c = a \). We refer to \( b \) as the base of the logarithm.
  • Special cases: \( \log_b b = 1, \; \log_b 1 = 0 \)
  • Inverse of exponential: \( b^{\log_b x} = x \)
  • Product: \( \log_b (x \times y) = \log_b x + \log_b y \)
  • Division: \( \log_b (x \div y) = \log_b x - \log_b y \)
  • Finite product: \( \log_b (x_1 \times x_2 \times \ldots \times x_n) = \log_b x_1 + \log_b x_2 + \ldots + \log_b x_n \)
  • Changing bases: \( \log_b x = \log_c x \,/\, \log_c b \)
  • Rearranging exponents: \( x^{\log_b y} = y^{\log_b x} \)
  • Exponentiation: \( \log_b (x^y) = y \log_b x \)
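
The change-of-base rule is the one you use most in practice, since most languages (Java included) only provide natural and base-10 logarithms. A minimal sketch:

    public class LogDemo {
        // lg x via change of base: log_2 x = ln x / ln 2.
        static double lg(double x) {
            return Math.log(x) / Math.log(2);
        }

        public static void main(String[] args) {
            System.out.println(lg(1024));    // 10.0
            // Product rule check: lg(8 * 16) = lg 8 + lg 16 = 3 + 4.
            System.out.println(lg(8 * 16));  // 7.0
        }
    }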


Asymptotic notations: definitions.

NAME         | NOTATION                        | DESCRIPTION                                                                              | DEFINITION
Tilde        | \( f(n) \sim g(n) \)            | \( f(n) \) is equal to \( g(n) \) asymptotically (including constant factors)            | \( \lim_{n \to \infty} f(n)/g(n) = 1 \)
Big Oh       | \( f(n) \) is \( O(g(n)) \)     | \( f(n) \) is bounded above by \( g(n) \) asymptotically (ignoring constant factors)     | there exist constants \( c > 0 \) and \( n_0 \ge 0 \) such that \( 0 \le f(n) \le c \cdot g(n) \) for all \( n \ge n_0 \)
Big Omega    | \( f(n) \) is \( \Omega(g(n)) \)| \( f(n) \) is bounded below by \( g(n) \) asymptotically (ignoring constant factors)     | \( g(n) \) is \( O(f(n)) \)
Big Theta    | \( f(n) \) is \( \Theta(g(n)) \)| \( f(n) \) is bounded above and below by \( g(n) \) asymptotically (ignoring constant factors) | \( f(n) \) is both \( O(g(n)) \) and \( \Omega(g(n)) \)
Little oh    | \( f(n) \) is \( o(g(n)) \)     | \( f(n) \) is dominated by \( g(n) \) asymptotically (ignoring constant factors)         | \( \lim_{n \to \infty} f(n)/g(n) = 0 \)
Little omega | \( f(n) \) is \( \omega(g(n)) \)| \( f(n) \) dominates \( g(n) \) asymptotically (ignoring constant factors)               | \( g(n) \) is \( o(f(n)) \)
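
As a worked instance of the Big Oh definition, take the quadratic that reappears in the examples later in this section (one valid choice of witnesses among many):

$$ 2n^2 + 45n + 12 \;\le\; 2n^2 + 45n + n \;=\; 2n^2 + 46n \;\le\; 2n^2 + n^2 \;=\; 3n^2 \quad \text{for all } n \ge 46, $$

so \( f(n) = 2n^2 + 45n + 12 \) is \( O(n^2) \), with witnesses \( c = 3 \) and \( n_0 = 46 \).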


Common orders of growth.

NAME         | NOTATION            | EXAMPLES
Constant     | \( O(1) \)          | array access; arithmetic operation; function call
Logarithmic  | \( O(\log n) \)     | binary search in a sorted array; insert in a binary heap; search in a red–black tree
Linear       | \( O(n) \)          | sequential search; grade-school addition; BFPRT median finding
Linearithmic | \( O(n \log n) \)   | mergesort; heapsort; fast Fourier transform
Quadratic    | \( O(n^2) \)        | enumerate all pairs; insertion sort; grade-school multiplication
Cubic        | \( O(n^3) \)        | enumerate all triples; Floyd–Warshall; grade-school matrix multiplication
Polynomial   | \( O(n^c) \)        | ellipsoid algorithm for LP; AKS primality algorithm; Edmonds' matching algorithm
Exponential  | \( 2^{O(n^c)} \)    | enumerating all subsets; enumerating all permutations; backtracking search
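
The following Java sketch gives a representative code shape for several of these growth classes (illustrative fragments of my own choosing, not the textbook's):

    public class OrdersOfGrowth {
        static int[] a = new int[1024];

        // Constant O(1): a fixed number of primitive operations.
        static int constant(int i) { return a[i] + 1; }

        // Logarithmic O(log n): halve the problem each step (binary search).
        static int logarithmic(int key) {
            int lo = 0, hi = a.length - 1;
            while (lo <= hi) {
                int mid = lo + (hi - lo) / 2;
                if      (key < a[mid]) hi = mid - 1;
                else if (key > a[mid]) lo = mid + 1;
                else return mid;
            }
            return -1;
        }

        // Linear O(n): touch every element once.
        static int linear() {
            int max = a[0];
            for (int x : a) if (x > max) max = x;
            return max;
        }

        // Quadratic O(n^2): enumerate all pairs.
        static int quadratic() {
            int count = 0;
            for (int i = 0; i < a.length; i++)
                for (int j = i + 1; j < a.length; j++)
                    if (a[i] == a[j]) count++;
            return count;
        }

        // Exponential 2^n: enumerate all (non-empty) subsets of n items.
        static int exponential(int n) {
            int subsets = 0;
            for (long mask = 1; mask < (1L << n); mask++) subsets++;
            return subsets;
        }

        public static void main(String[] args) {
            for (int i = 0; i < a.length; i++) a[i] = i;  // sorted input
            System.out.println(constant(3) + " " + logarithmic(500) + " "
                    + linear() + " " + exponential(10));  // 4 500 1023 1023
        }
    }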


Asymptotic notations: properties.

  • Reflexivity: \( f(n) \) is \( O(f(n)) \).
  • Constants: if \( f(n) \) is \( O(g(n)) \) and \( c > 0 \), then \( c \cdot f(n) \) is \( O(g(n)) \).
  • Products: if \( f_1(n) \) is \( O(g_1(n)) \) and \( f_2(n) \) is \( O(g_2(n)) \), then \( f_1(n) \cdot f_2(n) \) is \( O(g_1(n) \cdot g_2(n)) \).
  • Sums: if \( f_1(n) \) is \( O(g_1(n)) \) and \( f_2(n) \) is \( O(g_2(n)) \), then \( f_1(n) + f_2(n) \) is \( O(\max\{ g_1(n), g_2(n) \}) \).
  • Transitivity: if \( f(n) \) is \( O(g(n)) \) and \( g(n) \) is \( O(h(n)) \), then \( f(n) \) is \( O(h(n)) \).
  • Polynomials: let \( f(n) = a_0 + a_1 n + \ldots + a_d n^d \) with \( a_d > 0 \). Then \( f(n) \) is \( \Theta(n^d) \).
  • Logarithms and polynomials: \( \log_b n \) is \( O(n^d) \) for every \( b > 1 \) and every \( d > 0 \).
  • Exponentials and polynomials: \( n^d \) is \( O(r^n) \) for every \( r > 1 \) and every \( d > 0 \).
  • Factorials: \( n! \) is \( 2^{\Theta(n \log n)} \).
  • Limits: if \( \lim_{n \to \infty} f(n)/g(n) = c \) for some constant \( 0 < c < \infty \), then \( f(n) \) is \( \Theta(g(n)) \).
  • Limits: if \( \lim_{n \to \infty} f(n)/g(n) = 0 \), then \( f(n) \) is \( O(g(n)) \) but not \( \Theta(g(n)) \).
  • Limits: if \( \lim_{n \to \infty} f(n)/g(n) = \infty \), then \( f(n) \) is \( \Omega(g(n)) \) but not \( O(g(n)) \).
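
The limit rules are often the quickest way to place a function in the table that follows; two worked sketches:

$$ \lim_{n \to \infty} \frac{2n^2 + 45n + 12}{n^2} = 2 \;\Rightarrow\; 2n^2 + 45n + 12 \text{ is } \Theta(n^2) \text{ (and indeed } \sim 2n^2\text{)}, $$

$$ \lim_{n \to \infty} \frac{10n + 45}{n^2} = 0 \;\Rightarrow\; 10n + 45 \text{ is } O(n^2) \text{ but not } \Theta(n^2). $$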


Here are some examples.

FUNCTION                 | \( o(n^2) \) | \( O(n^2) \) | \( \Theta(n^2) \) | \( \Omega(n^2) \) | \( \omega(n^2) \) | \( \sim 2n^2 \) | \( \sim 4n^2 \)
\( \log_2 n \)           | ✔            | ✔            | –                 | –                 | –                 | –               | –
\( 10n + 45 \)           | ✔            | ✔            | –                 | –                 | –                 | –               | –
\( 2n^2 + 45n + 12 \)    | –            | ✔            | ✔                 | ✔                 | –                 | ✔               | –
\( 4n^2 - 2\sqrt{n} \)   | –            | ✔            | ✔                 | ✔                 | –                 | –               | ✔
\( 3n^3 \)               | –            | –            | –                 | ✔                 | ✔                 | –               | –
\( 2^n \)                | –            | –            | –                 | ✔                 | ✔                 | –               | –


Divide-and-conquer recurrences.

For each of the following recurrences we assume \( T(1) = 0 \) and that \( n/2 \) means either \( \lfloor n/2 \rfloor \) or \( \lceil n/2 \rceil \).

RECURRENCE                               | \( T(n) \)                                        | EXAMPLE
\( T(n) = T(n/2) + 1 \)                  | \( \sim \lg n \)                                  | binary search
\( T(n) = 2T(n/2) + n \)                 | \( \sim n \lg n \)                                | mergesort
\( T(n) = T(n-1) + n \)                  | \( \sim \frac{1}{2} n^2 \)                        | insertion sort
\( T(n) = 2T(n/2) + 1 \)                 | \( \sim n \)                                      | tree traversal
\( T(n) = 2T(n-1) + 1 \)                 | \( \sim 2^n \)                                    | towers of Hanoi
\( T(n) = 3T(n/2) + \Theta(n) \)         | \( \Theta(n^{\log_2 3}) = \Theta(n^{1.58...}) \)  | Karatsuba multiplication
\( T(n) = 7T(n/2) + \Theta(n^2) \)       | \( \Theta(n^{\log_2 7}) = \Theta(n^{2.81...}) \)  | Strassen multiplication
\( T(n) = 2T(n/2) + \Theta(n \log n) \)  | \( \Theta(n \log^2 n) \)                          | closest pair
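
A quick way to convince yourself of these solutions is to evaluate the recurrence directly and compare it with the claimed form. A minimal Java sketch for the mergesort recurrence \( T(n) = 2T(n/2) + n \) with \( T(1) = 0 \), evaluated at powers of two so that \( n/2 \) is exact:

    public class RecurrenceCheck {
        // T(n) = 2 T(n/2) + n, T(1) = 0, at powers of two.
        static long T(long n) {
            return n == 1 ? 0 : 2 * T(n / 2) + n;
        }

        public static void main(String[] args) {
            for (int k = 1; k <= 20; k++) {
                long n = 1L << k;
                // For n = 2^k the solution is exactly n lg n = n * k.
                System.out.println(n + ": T(n) = " + T(n) + ", n lg n = " + n * k);
            }
        }
    }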


Master theorem.

Let \( a \ge 1 \), \( b \ge 2 \), and \( c > 0 \), and suppose that \( T(n) \) is a function on the non-negative integers that satisfies the divide-and-conquer recurrence
$$ T(n) = a \, T(n/b) + \Theta(n^c) $$
with \( T(0) = 0 \) and \( T(1) = \Theta(1) \), where \( n/b \) means either \( \lfloor n/b \rfloor \) or \( \lceil n/b \rceil \).
  • If \( c < \log_b a \), then \( T(n) = \Theta(n^{\log_b a}) \).
  • If \( c = \log_b a \), then \( T(n) = \Theta(n^c \log n) \).
  • If \( c > \log_b a \), then \( T(n) = \Theta(n^c) \).
Remark: there are many different versions of the master theorem. The Akra–Bazzi theorem is among the most powerful.
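
To connect the master theorem with the recurrence table above, here is each case instantiated (standard textbook examples):

  • \( c = \log_b a \): mergesort has \( a = 2, b = 2, c = 1 \) and \( \log_2 2 = 1 \), so \( T(n) = \Theta(n \log n) \).
  • \( c < \log_b a \): Karatsuba has \( a = 3, b = 2, c = 1 < \log_2 3 \approx 1.58 \), so \( T(n) = \Theta(n^{\log_2 3}) \).
  • \( c > \log_b a \): a recurrence such as \( T(n) = 2T(n/2) + \Theta(n^2) \) has \( c = 2 > \log_2 2 = 1 \), so \( T(n) = \Theta(n^2) \).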


Last modified on September 12, 2020.
Copyright © 2000–2019 Robert Sedgewick and Kevin Wayne. All rights reserved.