Big O Notation
Constant Time. No matter how large the constant hidden in O(1) is, or how slowly Linear O(n) grows, Linear will at some point surpass (take longer than) Constant.
As size increases, is Linear Time or Constant Time faster?
Insertion Sort. Although Merge Sort's O(n log n) is usually faster than Insertion Sort's O(n^2), Insertion Sort is faster on small amounts of data, especially if the array is already partially sorted!
For smaller amounts of data, which is faster: Insertion Sort or Merge Sort?
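A minimal Insertion Sort sketch in Swift for reference (the function name and the Int element type are illustrative assumptions, not part of the original card):

    // Insertion sort: O(n^2) in the worst case, but close to O(n)
    // when the input is already nearly sorted.
    func insertionSort(_ a: inout [Int]) {
        guard a.count > 1 else { return }
        for i in 1..<a.count {
            let key = a[i]
            var j = i - 1
            // Shift larger elements right until key's position is found.
            while j >= 0 && a[j] > key {
                a[j + 1] = a[j]
                j -= 1
            }
            a[j + 1] = key
        }
    }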
Exponential Runtime
Generally speaking, when you see an algorithm with multiple recursive calls, you're looking at what kind of runtime?
O(log n). EX: Binary Search.
If we see a problem where the number of elements in the problem space gets halved each time, that will likely be what runtime?
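A sketch of how the halving plays out, using Binary Search on a sorted Int array (names are illustrative):

    // Binary search: the search space is halved on every iteration,
    // so at most about log2(n) iterations run -> O(log n).
    func binarySearch(_ a: [Int], _ target: Int) -> Int? {
        var low = 0
        var high = a.count - 1
        while low <= high {
            let mid = low + (high - low) / 2
            if a[mid] == target { return mid }
            if a[mid] < target { low = mid + 1 } else { high = mid - 1 }
        }
        return nil  // target not present
    }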
Upper Bound on the time. EX: An algorithm that prints all the values in an array could be described as O(n), BUT it could also be described as O(n^2) or even O(n^3) time. The algorithm is at least as fast as each of these; therefore they are Upper Bounds on the runtime.
In academia, Big O notation describes what?
Lower Bound on the time. EX: Printing the values of an array is Omega(n), but it is also Omega(log n) and Omega(1).
In academia, Big Omega notation describes what?
Tight Bound, i.e., both Big O and Big Omega. An algorithm is Theta(n) if it is BOTH O(n) and Omega(n). In industry (and therefore in interviews), people have merged Theta and O together, so the industry's meaning of Big O is closer to what academics mean by Theta: it would be seen as incorrect to describe printing an array as O(n^2). Industry just says O(n).
In academia, Big Theta notation describes what?
Add: We do A chunks of work, and then B chunks of work. Therefore the total amount of work is O(A + B). "Do this. When done, do that." EX: 2 for loops that are not nested or connected. Multiply: We do B chunks of work FOR EACH element in A. Therefore the total amount of work is O(A * B). "Do this for each time you do that." EX: Nested For Loops.
Suppose you have an algorithm that has 2 steps. When do you Multiply the runtimes and when do you Add them?
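A small Swift sketch of both cases (arrayA, arrayB, and the print calls are stand-ins for arbitrary constant-time work):

    func addThenMultiply(_ arrayA: [Int], _ arrayB: [Int]) {
        // Add: "Do this. When done, do that." -> O(A + B)
        for x in arrayA { print(x) }   // O(A)
        for y in arrayB { print(y) }   // O(B)

        // Multiply: "Do this for each time you do that." -> O(A * B)
        for x in arrayA {
            for y in arrayB {
                print(x, y)            // body runs A * B times
            }
        }
    }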
1. Best Case 2. Worst Case 3. Expected Case
What are the 3 ways we describe our runtime for an algorithm?
O(n!), O(2^n), O(n^3), O(n^2), O(n log n), O(n), O(log n), O(1) (sometimes)
What are the runtimes in order from slowest to fastest?
Big O notation is used in Computer Science to describe the performance or complexity of an algorithm. Big O is most often used to describe the worst-case scenario, and it can describe either the execution time required or the space used (e.g., in memory or on disk) by an algorithm.
What is Big O Notation?
O(1). As the input size increases, the algorithm still takes the same amount of time.
What is Constant Time for an algorithm?
O(n), where n is the input size. The time increases linearly with the size.
What is Linear Time for an algorithm?
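Two tiny Swift sketches contrasting the two (function names are illustrative):

    // O(1): the same amount of work no matter how large the array is.
    func firstElement(_ a: [Int]) -> Int? {
        return a.first
    }

    // O(n): the work grows linearly with the number of elements.
    func sum(_ a: [Int]) -> Int {
        var total = 0
        for x in a { total += x }
        return total
    }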
The amount of memory required by an algorithm. Space Complexity is a parallel concept to Time Complexity. If we need to create an array of size n, this will require O(n) space. If we need a 2-dimensional array of size n x n, this will require O(n^2) space.
What is Space Complexity?
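A quick Swift illustration (n = 1_000 is an arbitrary example size):

    let n = 1_000
    // O(n) space: a single array of n elements.
    let line = [Int](repeating: 0, count: n)
    // O(n^2) space: an n x n grid.
    let grid = [[Int]](repeating: [Int](repeating: 0, count: n), count: n)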
How fast an algorithm runs, expressed as a function of the number of input values.
What is Time-Complexity? (of an algorithm)
There isn't one. Best/Worst/Expected case DESCRIBE the Big O or Big Theta time for particular inputs or scenarios. Big O/Big Omega/Big Theta describe the Upper Bound, Lower Bound, and Tight Bounds for the runtime, respectively.
What is the particular relationship between Best/Worst/Expected case and O/Theta/Omega?
O(n^2)
What is the runtime of Bubble Sort?
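A minimal Bubble Sort sketch showing where the O(n^2) comes from (names are illustrative):

    // Two nested passes over the array -> O(n^2) comparisons in the worst case.
    func bubbleSort(_ a: inout [Int]) {
        guard a.count > 1 else { return }
        for pass in 0..<a.count - 1 {
            for i in 0..<a.count - 1 - pass {
                if a[i] > a[i + 1] {
                    a.swapAt(i, i + 1)  // bubble the larger value right
                }
            }
        }
    }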
O(n log n)
What is the runtime of Heapsort?
O(n)
What is the runtime of Linear Search?
O(n log n)
What is the runtime of Mergesort?
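A compact top-down Merge Sort sketch (this version returns a new array; an in-place variant is also common):

    // log n levels of halving, O(n) merge work per level -> O(n log n).
    func mergeSort(_ a: [Int]) -> [Int] {
        guard a.count > 1 else { return a }
        let mid = a.count / 2
        let left = mergeSort(Array(a[..<mid]))
        let right = mergeSort(Array(a[mid...]))
        // Merge the two sorted halves in linear time.
        var merged: [Int] = []
        merged.reserveCapacity(a.count)
        var i = 0, j = 0
        while i < left.count && j < right.count {
            if left[i] <= right[j] { merged.append(left[i]); i += 1 }
            else { merged.append(right[j]); j += 1 }
        }
        merged.append(contentsOf: left[i...])
        merged.append(contentsOf: right[j...])
        return merged
    }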
O(log n)
What is the runtime of a binary search algorithm?
O(1)
What is the runtime of accessing an array at an index?
O(1)
What is the runtime of pushing or popping from a stack?
O(n), where n is the number of elements in the array.
What is the runtime of searching/traversing through an array?
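A minimal linear search sketch (names are illustrative):

    // Worst case, every element is inspected once -> O(n).
    func linearSearch(_ a: [Int], _ target: Int) -> Int? {
        for (index, value) in a.enumerated() {
            if value == target { return index }
        }
        return nil
    }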
O(log n). Instead of simply incrementing, j is doubled on each iteration, so the loop body runs about log2(n) times.
What is the runtime of the following?

    var j = 1
    while j < n {
        // do constant time stuff
        j *= 2
    }
O(n^2)
What is the runtime of traversing a 2D array?
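A sketch of the nested traversal (for an n x n grid, the inner body runs n * n times):

    // n rows * n columns -> O(n^2) for an n x n grid.
    func traverse(_ grid: [[Int]]) {
        for row in grid {
            for cell in row {
                print(cell)  // constant-time work per cell
            }
        }
    }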
Algorithms with running time O(2^n) are often recursive algorithms that solve a problem of size n by recursively solving two smaller problems of size n-1.
What kind of algorithms usually have a runtime of O(2^n) ?
O(branches^depth), where branches is the number of times each recursive call branches. EX: O(2^n).
When you have a recursive function that makes multiple calls, the runtime will often (but not always) look like what?
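Naive Fibonacci is the classic sketch for both of these cards: each call branches into two more calls, and the call tree is roughly n deep, so O(branches^depth) = O(2^n):

    // Each call spawns two recursive calls -> roughly 2^n calls in total.
    func fib(_ n: Int) -> Int {
        if n <= 1 { return n }
        return fib(n - 1) + fib(n - 2)
    }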
It is very possible for O(n) code to run faster than O(1) code for specific inputs. Big O just describes the rate of increase. So an algorithm that we might have described as O(2n) is actually just O(n). While saying O(2n) might seem like we're being more precise, we are NOT. You would have to go down to the assembly code level and start comparing those just to be accurate. Instead, Big O allows us to express how the runtime SCALES. We just need to accept that it doesn't mean that O(n) is always better than O(n^2).
Why do we drop constants in runtime?
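A small Swift illustration of why the constant is dropped (minAndMax is an illustrative name):

    // Two sequential passes is "O(2n)" work, but the growth rate
    // is still linear, so we simply write O(n).
    func minAndMax(_ a: [Int]) -> (low: Int, high: Int)? {
        guard let first = a.first else { return nil }
        var low = first
        var high = first
        for x in a { low = min(low, x) }    // first pass: O(n)
        for x in a { high = max(high, x) }  // second pass: O(n)
        return (low, high)
    }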
It's not a very useful concept. After all, we could take almost any algorithm, special-case some input, and then get O(1) time in the best case.
Why do we rarely ever discuss Best Case Time Complexity?