Big O Notation

[Figure: graph with multiple curves]

What is Big O Notation #

The term “Big O” comes from the word “order”: the “O” stands for the order of growth of the function being described.

Big O notation is a mathematical notation that describes the limiting behavior of a function as its argument approaches infinity. In computer science, it is used to analyze the efficiency of algorithms and to compare the performance of different algorithms.

Big O is an asymptotic notation, meaning it describes how a function behaves as its argument grows large. It provides an upper bound on the function’s growth rate: the function grows no faster than the stated bound.
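
One standard way to state this bound formally: f(n) = O(g(n)) means that, beyond some input size, f is bounded above by a constant multiple of g.

```latex
f(n) = O(g(n)) \iff \exists\, c > 0,\ \exists\, n_0 > 0 \ \text{such that}\ 0 \le f(n) \le c \cdot g(n) \quad \text{for all } n \ge n_0
```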

A Big O value does not mean an algorithm will always take that long to run; it describes how the running time behaves as the input size grows. For example, a search function that is O(n) will take longer as the input grows, but its running time grows at most proportionally to the input size. And if the item happens to be the first element in the list, the function returns almost immediately.
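
As a concrete sketch (the function and list below are illustrative, not from the text), here is a linear search in Python: it may return after a single comparison if the target is the first element, yet in the worst case it examines every element, which is why it is classified as O(n).

```python
def linear_search(items, target):
    """Return the index of target in items, or -1 if not found.

    Worst case: O(n) -- the target is last or absent, so every element is checked.
    Best case: O(1) -- the target is the first element and the loop exits immediately.
    """
    for index, value in enumerate(items):
        if value == target:
            return index  # early exit: best case is a single comparison
    return -1

numbers = [7, 3, 9, 4, 1]
print(linear_search(numbers, 7))   # best case: found at index 0
print(linear_search(numbers, 1))   # worst case: found at the last index
print(linear_search(numbers, 42))  # worst case: not found, scans all n items
```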

Use of Big O Notation in Algorithms #

Big O notation gives a common vocabulary for analyzing and comparing algorithms. It provides a way to describe the time complexity of an algorithm as well as the amount of memory it uses (its space complexity).
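
The same notation applies to memory. As a quick sketch (the helper names below are illustrative), the first function uses O(1) extra space by working within the existing list, while the second allocates an O(n) auxiliary list; both take O(n) time.

```python
def reverse_in_place(items):
    """O(1) extra space: swaps elements within the existing list."""
    left, right = 0, len(items) - 1
    while left < right:
        items[left], items[right] = items[right], items[left]
        left += 1
        right -= 1
    return items

def reversed_copy(items):
    """O(n) extra space: allocates a new list as large as the input."""
    return [items[i] for i in range(len(items) - 1, -1, -1)]

data = [1, 2, 3, 4, 5]
print(reverse_in_place(data[:]))  # operate on a copy so the original is untouched
print(reversed_copy(data))
```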

If a function is O(n), its running time grows at most linearly with the input size. If a function is O(1), it takes roughly the same amount of time to run regardless of the input size.
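
A small illustration of that contrast (function names chosen for this sketch): indexing a Python list is a constant-time operation, while summing a list touches every element.

```python
def get_first(items):
    """O(1): indexing a list takes the same time regardless of its length."""
    return items[0]

def total(items):
    """O(n): the loop visits every element, so time grows with the input size."""
    result = 0
    for value in items:
        result += value
    return result

small = list(range(10))
large = list(range(10_000_000))
print(get_first(small), get_first(large))  # equally fast on both lists
print(total(small))                        # total(large) would take noticeably longer
```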

Common Big O Notations #

The following table lists some of the most common Big O notations and their descriptions:

| Big O Notation | Name | Description |
|---|---|---|
| O(1) | Constant Time | Operation takes the same amount of time regardless of input size |
| O(log n) | Logarithmic Time | Operation time increases logarithmically with input size |
| O(n) | Linear Time | Operation time increases linearly with input size |
| O(n log n) | Linearithmic Time | Operation time grows in proportion to n log n |
| O(n^2) | Quadratic Time | Operation time increases quadratically with input size |
| O(n^3) | Cubic Time | Operation time increases cubically with input size |
| O(2^n) | Exponential Time | Operation time doubles with each additional element of input |
| O(n!) | Factorial Time | Operation time increases factorially with input size |

These notations are crucial for understanding the efficiency of algorithms, especially in terms of how they scale with increasing data sizes.
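
To get a feel for how differently these classes scale, the short script below (illustrative only; the chosen input sizes are arbitrary) tabulates several of the growth functions for a few values of n.

```python
import math

def growth_table(sizes):
    """Print the value of common growth functions for the given input sizes."""
    print(f"{'n':>8} {'log n':>10} {'n log n':>12} {'n^2':>14} {'2^n':>22}")
    for n in sizes:
        print(f"{n:>8} {math.log2(n):>10.1f} {n * math.log2(n):>12.0f} "
              f"{n ** 2:>14} {2 ** n:>22}")

# 2^n is already astronomically large by n = 64, which is why exponential
# algorithms are impractical for all but tiny inputs.
growth_table([8, 16, 32, 64])
```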

Common Algorithms and Their Big O Notations #

The following table lists some common algorithms and their Big O notations:

| Algorithm Type | Algorithm Example | Best Case | Average Case | Worst Case |
|---|---|---|---|---|
| Sorting | Bubble Sort | O(n) | O(n^2) | O(n^2) |
| | Insertion Sort | O(n) | O(n^2) | O(n^2) |
| | Selection Sort | O(n^2) | O(n^2) | O(n^2) |
| | Merge Sort | O(n log n) | O(n log n) | O(n log n) |
| | Quick Sort | O(n log n) | O(n log n) | O(n^2) |
| Searching | Binary Search (on a sorted array) | O(1) | O(log n) | O(log n) |
| | Linear Search | O(1) | O(n) | O(n) |
| Graph Algorithms | Dijkstra’s Algorithm (without heap) | O(V^2) | O(V^2) | O(V^2) |
| | Dijkstra’s Algorithm (with heap) | O((V+E) log V) | O((V+E) log V) | O((V+E) log V) |
| | Breadth-First Search (BFS) | O(V+E) | O(V+E) | O(V+E) |
| | Depth-First Search (DFS) | O(V+E) | O(V+E) | O(V+E) |
| Dynamic Programming | Fibonacci (recursive) | O(2^n) | O(2^n) | O(2^n) |
| | Fibonacci (with memoization) | O(n) | O(n) | O(n) |
| Data Structures | Array Access | O(1) | O(1) | O(1) |
| | Hash Table Access | O(1) | O(1) | O(n) |
| | Balanced Binary Search Tree Access | O(log n) | O(log n) | O(log n) |
  • V stands for the number of vertices in a graph.
  • E stands for the number of edges in a graph.
  • n is the size of the input or the number of elements in the data structure.

Note: The time complexity can vary based on the implementation details and the structure of the input data.
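
To illustrate the dynamic-programming rows above, here is a minimal Python sketch (function names are illustrative): the naive recursive Fibonacci recomputes the same subproblems exponentially many times, while adding a memoization cache reduces the work to O(n).

```python
from functools import lru_cache

def fib_naive(n):
    """O(2^n): each call spawns two more, recomputing the same values repeatedly."""
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    """O(n): each value from 0 to n is computed once, then served from the cache."""
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

print(fib_naive(20))  # fine for small n, but slows dramatically as n grows
print(fib_memo(200))  # memoization makes much larger inputs practical
```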