What is Big O??

Robert Shields IV
3 min read · Nov 2, 2020

How long does something take to run? Is it scalable? Readable? Big O measures how the runtime of a piece of code grows as the amount of data grows. In other words: how does the efficiency of the program change as the data increases? Big O sounds like a bunch of crap, doesn’t it? However, it does help when learning how memory, space, and data work within a given algorithm. This is what I’ve learned so far.

Big O is written in terms of n, which is just a standard convention for the number of inputs given. It describes how the efficiency of a given algorithm changes as that number grows.

O(n) or Linear Time is the most common, which is fair on the complexity chart. As things grow larger and larger, does it scale? Well, the number of operations scales in direct proportion to the number (n) of items given to the function.
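To make that concrete, here’s a rough sketch in Python (my own illustration, not from any particular source): finding the biggest number in a list touches every item once, so the work grows in step with n.

```python
def find_max(items):
    """O(n): looks at each of the n items exactly once."""
    largest = items[0]          # assumes the list is not empty
    for item in items:          # one pass through all n items
        if item > largest:
            largest = item
    return largest
```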

O(1) or Constant Time. No matter how many items are added to a given collection, it will take the same number of operations to complete a given function. For example… grabbing the first element in an array. The array could hold 15, 1,070,960, or 123,906,210 elements, and it will still take one operation to grab the first element in that array. The input can be any type of data, not just arrays. Predictability is nice.
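A quick sketch of that idea (again, just my own example):

```python
def first_element(items):
    """O(1): one operation, no matter how big the list is."""
    return items[0]   # same single step for 15 items or 123,906,210 items
```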

O(n^2) or Quadratic Time. Every time the number of elements grows, the number of operations grows quadratically (with the square of n), usually because every element gets compared to every other element. It increases quite fast, which is horrible (pretty slow) when it comes to scale.
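For example, checking a list for duplicates with two nested loops compares every element to every other element, which is the classic O(n^2) pattern (my own sketch):

```python
def has_duplicate(items):
    """O(n^2): for each of the n items, loop over all n items again."""
    for i in range(len(items)):
        for j in range(len(items)):
            if i != j and items[i] == items[j]:
                return True
    return False
```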

O(n!) or Factorial Time is the most expensive and steepest of them all. You probably aren’t going to encounter it, since it’s so bad. It’s kind of like adding a nested loop for each input.
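The textbook example is generating every possible ordering (permutation) of a list. This sketch is just to show the shape of the growth, not something you’d want to run on a big input:

```python
def permutations(items):
    """O(n!): builds every possible ordering of the n items."""
    if len(items) <= 1:
        return [items]
    result = []
    for i, item in enumerate(items):
        rest = items[:i] + items[i + 1:]     # everything except items[i]
        for perm in permutations(rest):      # one more level of looping per element
            result.append([item] + perm)
    return result
```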

There are plenty more, which I will discuss within a future blog, when I’m able to solidify my knowledge with Big O.

When calculating Big O, we always think about the worst case when it comes to scaling your inputs. It’s usually based on speed and the memory (RAM) available. There’s a cheat sheet that was given to me that helps when practicing for interviews. I’ll pass it along to you all as well!

Big O Cheat Sheet: Big O’s

O(1) Constant- no loops

O(log N) Logarithmic- searching algorithms usually have log n if the data is sorted (Binary Search; see the sketch after this list)

O(n) Linear- for loops, while loops through n items

O(n log(n)) Log Linear- usually sorting operations

O(n²) Quadratic- every element in a collection needs to be compared to every other element. Two nested loops

O(2^n) Exponential- recursive algorithms that solve a problem of size N

O(n!) Factorial- you are adding a loop for every element

Iterating through half a collection is still O(n). Two separate collections nested inside each other: O(a * b)
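Here’s the Binary Search mentioned in the logarithmic entry above, sketched in Python (my own version, not from the cheat sheet itself): every comparison throws away half of the remaining sorted data, which is where the log n comes from.

```python
def binary_search(sorted_items, target):
    """O(log n): each step cuts the search range in half."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid                 # found it
        if sorted_items[mid] < target:
            low = mid + 1              # keep searching the upper half
        else:
            high = mid - 1             # keep searching the lower half
    return -1                          # not in the list
```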

What can cause time in a function?
  • Operations (+, -, *, /)
  • Comparisons (<, >, ==)
  • Looping (for, while)
  • Outside Function call (function())

Rule Book

  • Rule 1: Always worst Case
  • Rule 2: Remove Constants
  • Rule 3: Different inputs should have different variables: O(a + b) when A and B run in order, O(a * b) when loops over A and B are nested. + for steps in order, * for nested steps (see the sketch below)
  • Rule 4: Drop Non-dominant terms

What causes Space complexity?

  • Variables
  • Data Structures
  • Function Calls
  • Allocations
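Coming back to Rule 3, here’s a rough sketch of the difference between + for steps in order and * for nested steps (hypothetical functions, just for illustration):

```python
def in_order(list_a, list_b):
    """O(a + b): the two loops run one after the other."""
    for a in list_a:
        print(a)
    for b in list_b:
        print(b)

def nested(list_a, list_b):
    """O(a * b): for every item in A, we loop through all of B."""
    for a in list_a:
        for b in list_b:
            print(a, b)
```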
