CS303-Exam 1: Review
Assume you have an algorithm that performs 1 calculation + 2 calculations + 3 calculations + ... + n calculations, or better said n(n+1)/2 calculations in total. What is the Big-O time complexity of this equation?
O(n^2)
What is the typical runtime of a brute-force algorithm (one that checks every pair of elements)?
O(n^2)
What is the runtime of insertion sort?
O(n^2)
What is the time complexity of grade school multiplication?
O(n^2)
What is the worst case runtime for Randomized Quick Sort?
O(n^2)
What is the runtime of counting the total number of inversions using the brute-force method?
O(n^2)
What is the Big-O time complexity of the equation 3n^3 + 2n + 7?
O(n^3)
What is the Big-O time complexity of the equation 2n^4 + 4n^3 - 6n*logn + 100?
O(n^4)
Master's Theorem when a < b^d
O(n^d)
Master's Theorem when a = b^d
O(n^d*logn)
What is the Big-O time complexity of the equation (5n)(3 + logn)?
O(nlogn)
Average case of Quick Sort
O(nlogn)
What is the runtime for Merge Sort?
O(nlogn)
How can a set data structure be used to find whether an array has duplicates?
One way is to turn the array into a set and then compare its size to the original array's size; if the set is smaller, the array contains duplicates.
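A minimal Java sketch of this idea (the class and method names are my own):

import java.util.HashSet;
import java.util.Set;

public class DuplicateCheck {
    // Returns true if the array contains any repeated value:
    // a set ignores duplicates, so it ends up smaller than the array.
    public static boolean hasDuplicates(int[] arr) {
        Set<Integer> seen = new HashSet<>();
        for (int value : arr) {
            seen.add(value);
        }
        return seen.size() != arr.length;
    }
}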
How does quick sort work?
Quicksort works by partitioning an array into two parts, one part with the smaller values, and the other part with larger values. After the array has been partitioned, quicksort calls itself, and sorts the two parts recursively.
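A Java sketch of quicksort along these lines, assuming a Lomuto-style partition with the last element as pivot (class and method names are my own):

public class QuickSort {
    // Sorts arr[low..high] in place: partition around a pivot,
    // then recursively sort the smaller-values part and the larger-values part.
    public static void quickSort(int[] arr, int low, int high) {
        if (low < high) {
            int p = partition(arr, low, high);
            quickSort(arr, low, p - 1);   // part with the smaller values
            quickSort(arr, p + 1, high);  // part with the larger values
        }
    }

    // Lomuto partition: uses arr[high] as the pivot and returns its final index.
    private static int partition(int[] arr, int low, int high) {
        int pivot = arr[high];
        int i = low - 1; // boundary of the region of values smaller than the pivot
        for (int j = low; j < high; j++) {
            if (arr[j] < pivot) {
                i++;
                int tmp = arr[i]; arr[i] = arr[j]; arr[j] = tmp;
            }
        }
        int tmp = arr[i + 1]; arr[i + 1] = arr[high]; arr[high] = tmp;
        return i + 1;
    }
}

Call quickSort(arr, 0, arr.length - 1) to sort the whole array.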
What is Boyer-Moore's *voting* Algorithm used for?
Finding the majority element in an array, i.e., the value that occurs more than n/2 times.
Binary Search Substitution
T(n) = 1T(n/2) + O(1). With A = 1, B = 2, D = 0 we have A = B^D, so T(n) = O(n^0 * logn) = O(logn).
Recurrence relation for Merge Sort
T(n) = 2T(n/2) + n, matching T(n) = A*T(n/B) + n^D with A = 2, B = 2, D = 1.
Recurrence relation for counting inversions in the divide and conquer technique
T(n) = 2T(n/2) + n, matching T(n) = A*T(n/B) + n^D with A = 2, B = 2, D = 1.
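A Java sketch of the counter this recurrence describes: it sorts with merge sort and counts the cross-inversions during each merge (class and method names are my own):

public class InversionCount {
    // Counts pairs (i, j) with i < j and arr[i] > arr[j].
    // Same recurrence as merge sort: T(n) = 2T(n/2) + n, so O(nlogn).
    public static long countInversions(int[] arr, int lo, int hi) { // range [lo, hi)
        if (hi - lo <= 1) return 0;
        int mid = (lo + hi) / 2;
        long count = countInversions(arr, lo, mid) + countInversions(arr, mid, hi);
        int[] merged = new int[hi - lo];
        int i = lo, j = mid, k = 0;
        while (i < mid && j < hi) {
            if (arr[i] <= arr[j]) {
                merged[k++] = arr[i++];
            } else {
                merged[k++] = arr[j++];
                count += mid - i; // every element left in the first half is an inversion with arr[j]
            }
        }
        while (i < mid) merged[k++] = arr[i++];
        while (j < hi) merged[k++] = arr[j++];
        System.arraycopy(merged, 0, arr, lo, merged.length);
        return count;
    }
}

countInversions(arr, 0, arr.length) returns the total; note it sorts arr as a side effect.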
The formal definition of Big-Oh
T(n) = O(f(n)) if and only if there exists positive constants c and n0 such that T(n) <= c*f(n) for all n >= n0
Theta formal definition
T(n) = Θ(f(n)) if and only if T(n) = O(f(n)) and T(n) = Ω(f(n)), i.e., there exist positive constants c1, c2, and n0 such that c1*f(n) <= T(n) <= c2*f(n) for all n >= n0
Formal definition of Omega
T(n) = Omega(f(n)) if and only if there exist positive constants c and n0 such that T(n) >= c*f(n) for all n >= n0
A hint in calculating Big-Oh when given an equation such as 3n^2 + n
Keep whichever term grows fastest. In this example, n^2 grows much faster than n, so the function runs in O(n^2).
What is the Big-O time complexity of the equation 2^n + n^2?
O(2^n)
The runtimes of binary search and ternary search can both be written as
O(logn)
What is the space complexity of Karatsuba's algorithm?
O(logn)
What is the Big-O time complexity of the equation n^3 + n! + 3?
O(n!)
What is the worst-case running time of a HashMap operation?
O(n)
What is the runtime for Linear Search?
O(n)
What is the runtime of O(n)?
O(n)
What is the worst-case runtime for the Boyer-Moore *voting* Algorithm?
O(n)
What is the Big-O time complexity of the equation n + logn?
O(n)
What is the runtime of Karatsuba's algorithm?
O(n^(log3)), that is, n to the power of log (base 2) of 3, roughly n^1.585
Master's Theorem when a > b^d
O(n^(log_b a)), that is, n to the power of log (base b) of a
Omega is the analysis of the asymptotic ________ __________
lower bound
Big-Oh is the analysis of the asymptotic ________ __________
upper bound
Masters Theorem
This theorem is set up to be used with recurrence relations of the form T(n) = A*T(n/B) + n^D; A, B, and D are the things to look out for and where they appear in the equation.
Given 3T(n/2) + n: this has an invisible 1 as the power of the last n, so rewrite it as 3T(n/2) + n^1, giving A = 3, B = 2, D = 1.
Now look at the cases:
- T(n) = O(n^d * logn) when a = b^d
- T(n) = O(n^(log_b a)), i.e., n to the power of log (base b) of a, when a > b^d
- T(n) = O(n^d) when a < b^d
Plug in A = 3, B = 2, D = 1:
- 3 = 2^1 is false, so not O(n^1 * logn)
- 3 > 2^1 is true, so T(n) = O(n^(log_2 3))
- 3 < 2^1 is false, so not O(n^1)
Follow this pattern for any other recurrence relation.
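A small Java sketch that applies these three cases mechanically (the class, method, and output strings are my own):

public class MasterTheorem {
    // Classifies T(n) = a*T(n/b) + n^d by comparing a with b^d.
    // Assumes a, b, and d are small integers so the comparison is exact.
    public static String solve(int a, int b, int d) {
        double bd = Math.pow(b, d);
        if (a == bd) return "O(n^" + d + " * logn)";
        if (a > bd) return "O(n^(log_" + b + " " + a + "))"; // n to the power of log base b of a
        return "O(n^" + d + ")";
    }

    public static void main(String[] args) {
        System.out.println(solve(3, 2, 1)); // Karatsuba: 3 > 2^1 -> O(n^(log_2 3))
        System.out.println(solve(2, 2, 1)); // merge sort: 2 = 2^1 -> O(n^1 * logn)
        System.out.println(solve(1, 2, 0)); // binary search: 1 = 2^0 -> O(n^0 * logn) = O(logn)
    }
}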
How does merge sort work?
1. Arrays can be broken in half to form two smaller arrays; continue doing this as needed.
2. When you have two sorted arrays, keep track of the smallest value in each one that hasn't been placed in order in the larger "merged" array yet.
3. Compare the two smallest values from each array and place the smaller one in the next location in the merged array.
4. Then adjust the minimum-value marker in the array the smallest element was removed from.
5. Continue until the large "merged" array is full. (See the sketch below.)
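A Java sketch of these steps (class and method names are my own):

import java.util.Arrays;

public class MergeSort {
    // Recursively halves the array, then merges the sorted halves.
    public static int[] mergeSort(int[] arr) {
        if (arr.length <= 1) return arr; // a 1-element array is already sorted
        int mid = arr.length / 2;
        int[] left = mergeSort(Arrays.copyOfRange(arr, 0, mid));
        int[] right = mergeSort(Arrays.copyOfRange(arr, mid, arr.length));
        return merge(left, right);
    }

    // Repeatedly takes the smaller of the two front elements.
    private static int[] merge(int[] left, int[] right) {
        int[] merged = new int[left.length + right.length];
        int i = 0, j = 0, k = 0;
        while (i < left.length && j < right.length) {
            merged[k++] = (left[i] <= right[j]) ? left[i++] : right[j++];
        }
        while (i < left.length) merged[k++] = left[i++];
        while (j < right.length) merged[k++] = right[j++];
        return merged;
    }
}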
Which methods can you use to solve recurrence relations of divide and conquer techniques?
1. Master's Theorem
2. Recursion Trees
3. Substitution Method
How does Insertion sort work?
1. Starting with the second element, go one by one and insert each element (by swapping) into the already-sorted list to its left, in the correct position.
2. By the time you reach the end of the list, the last element is put where it should be and the list is sorted. (See the sketch below.)
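A Java sketch of insertion sort along these lines (class and method names are my own):

public class InsertionSort {
    // Starting at the second element, swap each element leftward
    // until it sits in order within the already-sorted prefix.
    public static void insertionSort(int[] arr) {
        for (int i = 1; i < arr.length; i++) {
            for (int j = i; j > 0 && arr[j] < arr[j - 1]; j--) {
                int tmp = arr[j]; arr[j] = arr[j - 1]; arr[j - 1] = tmp;
            }
        }
    }
}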
What is the T(n) equation for Karatsuba's Algorithm?
3T(n/2) + n
Solve 2T(n/2) + n using Masters Theorem
A = 2, B = 2, D = 1. Check A = B^D: 2 = 2^1 is true, so T(n) = O(n^1 * logn) = O(nlogn).
Why is Insertion sort considered a brute force method?
Because it forces you to check every element one by one, in the worst case comparing each element against every element before it.
Why the runtime for merge sort, O(nlogn)?
Because you are dividing and conquering: the list/array is split in half repeatedly, giving about logn levels, and merging at each level takes O(n) work. The number of levels grows only logarithmically as n grows, so the total is O(nlogn).
Why is the runtime of Insertion sort O(n^2)?
Because you are traversing the list with a nested loop: in the worst case, each of the n elements is compared against up to n elements before it.
How do the runtimes of binary search and ternary search differ?
Binary is O(log2n) and Ternary is O(log3n)
O(n^2) way to find if an array has duplicates
Compare each number to every other number and check if it appears again.
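A Java sketch of this brute-force check (class and method names are my own):

public class BruteForceDuplicates {
    // Compares each element to every element after it: O(n^2) comparisons.
    public static boolean hasDuplicates(int[] arr) {
        for (int i = 0; i < arr.length; i++) {
            for (int j = i + 1; j < arr.length; j++) {
                if (arr[i] == arr[j]) return true;
            }
        }
        return false;
    }
}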
What is the best case runtime for binary search?
O(1)
Binary search and ternary search are both examples of algorithms that use which principle?
Divide and conquer
Theta complexity calculation. Assume: T(n) = 1/2n^2 + 3n Theta(___)?
First find n0, c1, and c2. Take n0 = 1, c1 = 1/2, c2 = 7/2. Then c1*f(n) <= T(n) <= c2*f(n) for all n >= n0, since (1/2)n^2 <= (1/2)n^2 + 3n <= (7/2)n^2 whenever n >= 1. Therefore T(n) = Theta(n^2).
How does Boyer-Moore's *voting* Algo work?
Given an array, find the majority element:
1. Set a counter to 0.
2. Set a candidate holder to nothing.
3. Traverse the array. As you traverse:
- If the counter is 0, set the candidate holder to the current element.
- Otherwise, increment the counter every time you see the candidate and decrement it every time you see anything else.
4. By the time you finish traversing the array, the candidate holder holds the majority element (the highest-occurring element). (See the sketch below.)
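A Java sketch of the voting pass (class and method names are my own):

public class MajorityVote {
    // Boyer-Moore voting: returns the majority candidate.
    // Assumes some value really does occur more than arr.length/2 times;
    // otherwise the returned candidate should be verified with a second pass.
    public static int majority(int[] arr) {
        int count = 0;
        int candidate = 0; // "nothing" until the first element is seen
        for (int value : arr) {
            if (count == 0) candidate = value;      // adopt a new candidate
            count += (value == candidate) ? 1 : -1; // vote for or against it
        }
        return candidate;
    }
}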
The first step of Karatsuba's algorithm
Grab x and y and split each into x1, x2 and y1, y2:
x = 47, y = 78, so x*y = 47 * 78
x1 = 4, x2 = 7
y1 = 7, y2 = 8
Calculating: T(n) = 3n^5+4n^2-1000n+99
In this example, n0 = n-nought.
Choose your n0: n0 = 1.
Calculate your C: C = |3| + |4| + |-1000| + |99| = 1106.
Now we need to show that for all n >= n0, T(n) <= C*n^5. For every n >= 1:
T(n) = 3n^5 + 4n^2 - 1000n + 99
T(n) <= |3|n^5 + |4|n^5 + |-1000|n^5 + |99|n^5 = C*n^5
Therefore T(n) = O(n^5).
An example of an algorithm that can be solved using Master's Theorem and results in a < b^d
Insertion sort
An example of an algorithm that can be solved using Master's Theorem and results in a = b^d
Merge sort
What is the best case runtime for Linear Search?
O(1)
Why is the worst-case runtime for Randomized Quick Sort O(n^2)?
The worst case occurs when, at every step, the partition procedure splits an n-length array into arrays of size 1 and n−1. This "unlucky" selection of pivot elements requires O(n) recursive calls, leading to an O(n^2) worst case.
T(n) = 2^n+5 Theta(_____)
Theta(2^n)
What is the runtime of ternary search?
Time Complexity: O(log3 n) Space Complexity: O(1)
What is the runtime for binary search?
Time Complexity: O(logn) Space Complexity: O(1)
What is a good use for Karatsuba's algorithm?
To multiply two large numbers in a simple, relatively fast way
Karatsuba's Algorithm Steps
To multiply two n-digit numbers x and y:
1. Create a base B and let m = n/2 (half the number of digits); for decimal numbers B = 10.
2. Split x and y into x1, x2 and y1, y2.
3. Then x = x1 * B^m + x2 and y = y1 * B^m + y2, so x*y = (x1 * B^m + x2)(y1 * B^m + y2).
4. Expanding, x*y = x1*y1*B^(2m) + x1*y2*B^m + x2*y1*B^m + x2*y2.
5. Observe the four subproblems: x1*y1, x1*y2, x2*y1, x2*y2.
6. To simplify the equation, assign letters: a = x1*y1, c = x2*y2, and let b be the middle term x1*y2 + x2*y1.
7. Then x*y = a*B^(2m) + b*B^m + c.
8. The key trick: b = (x1+x2)(y1+y2) - a - c, so only three multiplications (a, c, and (x1+x2)(y1+y2)) are needed instead of four.
Partitioning
To partition an array, the algorithm will choose one element of the array, called the pivot value. Then, a single pass through the array is enough to rearrange the array, so that values that are smaller than the pivot end up on the left side of the array, and values larger end up on the right side. The pivot ends up between them.
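A Java sketch of one such partition pass (Lomuto partitioning, the same scheme used in the quicksort sketch earlier; class and method names are my own):

public class PartitionDemo {
    // One pass with the last element as pivot; returns the pivot's final index.
    public static int partition(int[] arr, int low, int high) {
        int pivot = arr[high];
        int i = low - 1; // boundary of the region of values smaller than the pivot
        for (int j = low; j < high; j++) {
            if (arr[j] < pivot) {
                i++;
                int tmp = arr[i]; arr[i] = arr[j]; arr[j] = tmp;
            }
        }
        int tmp = arr[i + 1]; arr[i + 1] = arr[high]; arr[high] = tmp;
        return i + 1; // smaller values end up left of this index, larger ones right
    }

    public static void main(String[] args) {
        int[] arr = {7, 2, 9, 4, 5}; // pivot = 5
        int p = partition(arr, 0, arr.length - 1);
        System.out.println(java.util.Arrays.toString(arr) + ", pivot index " + p);
        // prints [2, 4, 5, 7, 9], pivot index 2
    }
}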
When does the best case runtime for binary search occur?
When the element needed to be found is at the middle
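A Java sketch of binary search showing why that is the best case: the very first probe is the middle element (class and method names are my own):

public class BinarySearch {
    // Returns the index of target in a sorted array, or -1 if absent.
    public static int binarySearch(int[] arr, int target) {
        int lo = 0, hi = arr.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;
            if (arr[mid] == target) return mid;       // best case: target is at the middle, O(1)
            else if (arr[mid] < target) lo = mid + 1; // discard the left half
            else hi = mid - 1;                        // discard the right half
        }
        return -1;
    }
}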
Is Omega used?
Yes, but rarely, since we are seldom concerned with the best-case (lower-bound) runtime of a program.
The second step of Karatsuba's algorithm
a = x1 * y1 = 4 * 7 = 28
c = x2 * y2 = 7 * 8 = 56
b = (x1 + x2)(y1 + y2) - a - c = 11 * 15 - 28 - 56 = 81
Here 11 * 15 can itself be multiplied with Karatsuba's algorithm; recurse like this until only single digits remain, then combine: x*y = a*B^(2m) + b*B^m + c = 2800 + 810 + 56 = 3666. (See the sketch below.)
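A Java sketch of the full recursion on machine integers, following the a/b/c lettering above (class and method names are my own; real uses would need arbitrary-precision integers):

public class Karatsuba {
    // Multiplies x and y by splitting each into high/low halves in base 10
    // and using three recursive multiplications instead of four.
    public static long karatsuba(long x, long y) {
        if (x < 10 || y < 10) return x * y; // single digit: multiply directly
        int n = Math.max(Long.toString(x).length(), Long.toString(y).length());
        int m = n / 2;                   // split point: half the digits
        long B = (long) Math.pow(10, m); // B^m as a single power of 10
        long x1 = x / B, x2 = x % B;     // x = x1*B + x2
        long y1 = y / B, y2 = y % B;     // y = y1*B + y2
        long a = karatsuba(x1, y1);                   // high product
        long c = karatsuba(x2, y2);                   // low product
        long b = karatsuba(x1 + x2, y1 + y2) - a - c; // middle term, one multiplication
        return a * B * B + b * B + c;
    }

    public static void main(String[] args) {
        System.out.println(karatsuba(47, 78)); // prints 3666
    }
}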
Theta complexity is a calculation of the ________ case runtime of a program
average
Write a function that takes an input of an Array of size n and doubles every element of the array. What is the asymptotic worst case analysis of this function?
for (int i = 0; i < n; i++) { // i < n, not i <= n, so we stay inside the size-n array
    Array[i] = Array[i] * 2; // each element is doubled exactly once
}
O(n)