CS4306 - Algorithm Analysis MidTerm Study Guide

InsertionSort Pseudocode

InsertionSort(A)
  for j = 2 to A.length
    key = A[j]
    i = j - 1
    while i > 0 and A[i] > key
      A[i+1] = A[i]
      i = i - 1
    A[i+1] = key

SelectionSort Pseudocode

SelectionSort(A[0..n-1])
  for i = 0 to n-2 do
    min = i
    for j = i+1 to n-1 do
      if A[j] < A[min]
        min = j
    swap A[i] and A[min]

Explain differences and similarities between divide and conquer and dynamic programming

- Divide and conquer works by dividing the problem into sub-problems, conquering each sub-problem recursively, and combining their solutions.
- Dynamic programming is a technique for solving problems with overlapping subproblems. Each sub-problem is solved only once, and its result is stored in a table (generally implemented as an array or a hash table) for future reference. These stored sub-solutions are combined to obtain the original solution; the technique of storing sub-problem solutions is known as memoization.
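
For example, a naive recursive Fibonacci computation of fib(5) recomputes fib(3) twice and fib(2) three times; memoizing those results means each value is computed only once, which is exactly the kind of overlap dynamic programming exploits.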

SelectionSort description

- Find the smallest entry in the array and swap it with the first entry; that entry is now considered sorted
- Find the smallest entry in the unsorted side and swap it with the front of the unsorted side, extending the sorted side by one index
- Repeat until the entire array is sorted

InsertionSort description

- Take the next unsorted element and compare it against the elements of the sorted side to its left
- Shift each sorted element that is larger than it one position to the right (possibly multiple shifts)
- Insert the element into the opening, then repeat with the next unsorted element until the array is sorted

Dynamic programming should be used when

- The problem has overlapping subproblems
- The problem has optimal substructure (an optimal overall solution can be built from optimal solutions to subproblems)

Dynamic Programming has two ways to be implemented; which is better, and why?

- Top-down with memoization
- Bottom-up (better; it works through every subproblem variation systematically and avoids recursion overhead)
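
A minimal sketch of the two styles, assuming Python and using Fibonacci as an illustrative subproblem (function names are just for illustration):

    from functools import lru_cache

    # Top-down: recurse on the natural definition and memoize (cache) each result.
    @lru_cache(maxsize=None)
    def fib_top_down(n):
        if n < 2:
            return n
        return fib_top_down(n - 1) + fib_top_down(n - 2)

    # Bottom-up: fill a table from the smallest subproblems up to the answer.
    def fib_bottom_up(n):
        if n < 2:
            return n
        table = [0] * (n + 1)
        table[1] = 1
        for i in range(2, n + 1):
            table[i] = table[i - 1] + table[i - 2]
        return table[n]

    print(fib_top_down(10), fib_bottom_up(10))   # both print 55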

What is the big O of the Fibonacci sequence implemented by recursion? (3 lines of pseudocode)

O(2^n) - exponential
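
For reference, a minimal sketch of that three-line recursive version (assuming Python); each call spawns two more calls, so the call tree grows roughly like 2^n:

    def fib(n):
        if n < 2: return n
        return fib(n - 1) + fib(n - 2)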

Which algorithm paradigm that we have studied solves everything?

Brute force

What algorithm paradigms have been studied so far?

- Brute force
- Dynamic programming
- Divide and conquer

What is the hard part of dynamic programming?

Identifying the optimal substructure, i.e., defining the right subproblems to solve

Master Theorem Case 2

If f(n) = Θ(n^(log_b a) log^k n) for some k ≥ 0, then T(n) = Θ(n^(log_b a) log^(k+1) n). (f(n) is Big Θ of n^(log_b a), up to log factors.)

Master Theorem Case 3

If f(n) = Ω(n^(log_b a + ε)) for some ε > 0, and a·f(n/b) ≤ c·f(n) for some constant c < 1 (regularity condition), then T(n) = Θ(f(n)). (f(n) is Big Ω of n^(log_b a): it dominates.)

Master Theorem Case 1

If f(n) = O(n^(log_b a − ε)) for some ε > 0, then T(n) = Θ(n^(log_b a)). (f(n) is Big O of n^(log_b a): the recursion dominates.)
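
Worked example (assuming the standard recurrence form T(n) = a·T(n/b) + f(n)): MergeSort gives T(n) = 2T(n/2) + Θ(n), so n^(log_b a) = n^(log_2 2) = n and f(n) = Θ(n) = Θ(n log^0 n). Case 2 with k = 0 then gives T(n) = Θ(n log n).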

Why isn't MergeSort considered the best sorting algorithm?

It is not in-place; its merge step requires Θ(n) extra space.

MergeSort Pseudocode

MERGE-SORT(A, p, r)
  if p < r
    q = ⌊(p + r) / 2⌋
    MERGE-SORT(A, p, q)
    MERGE-SORT(A, q + 1, r)
    MERGE(A, p, q, r)
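
The MERGE step is not spelled out above; as a minimal sketch (assuming Python, with illustrative names), merge sort plus an explicit merge of two sorted halves looks like:

    def merge_sort(A):
        if len(A) <= 1:                  # base case: 0 or 1 elements are already sorted
            return A
        mid = len(A) // 2
        left = merge_sort(A[:mid])       # sort left half
        right = merge_sort(A[mid:])      # sort right half
        merged, i, j = [], 0, 0          # MERGE: combine two sorted halves
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        merged.extend(left[i:])          # append whatever remains
        merged.extend(right[j:])
        return merged

    print(merge_sort([3, 7, 3, 2, 1, -4]))   # [-4, 1, 2, 3, 3, 7]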

What is the maximum subsequence sum (contiguous subarray with the largest sum) of the given array? [−2, 1, −3, 4, −1, 2, 1, −5, 4]

The optimal subarray always starts and ends with a positive number (it never ends on a negative). Check the candidate left, middle, and right subsequence sums; the largest of those is the maximum.
Left: [1, −3, 4, −1, 2, 1, −5, 4] = 3
Middle: [4, −1, 2, 1] = 6 (largest)
Right: [2, 1, −5, 4] = 2
So the maximum contiguous subarray is [4, −1, 2, 1], with sum 6.
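
A minimal divide-and-conquer sketch of this left/right/crossing idea (assuming Python; function names are illustrative):

    def max_crossing_sum(A, lo, mid, hi):
        # Best sum of a subarray forced to cross the midpoint.
        left_best, total = float("-inf"), 0
        for i in range(mid, lo - 1, -1):
            total += A[i]
            left_best = max(left_best, total)
        right_best, total = float("-inf"), 0
        for j in range(mid + 1, hi + 1):
            total += A[j]
            right_best = max(right_best, total)
        return left_best + right_best

    def max_subarray_sum(A, lo, hi):
        if lo == hi:                                    # base case: single element
            return A[lo]
        mid = (lo + hi) // 2
        return max(max_subarray_sum(A, lo, mid),        # entirely in the left half
                   max_subarray_sum(A, mid + 1, hi),    # entirely in the right half
                   max_crossing_sum(A, lo, mid, hi))    # crosses the midpoint

    print(max_subarray_sum([-2, 1, -3, 4, -1, 2, 1, -5, 4], 0, 8))   # prints 6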

Which sort routine/algorithm has the lowest worst case runtime (Big O)?

MergeSort: O(n log n)

List of common orders of magnitude, in order

O(1) - constant time
O(log n) - logarithmic time
O(n) - linear time
O(n^2) - quadratic time
O(n^3) - cubic time
O((3/2)^n) - exponential time
O(2^n) - exponential time
O(n!) - factorial time

What is the MergeSort Big O?

O(n log n)

What is the Big O of Fibonacci sequence implemented in dynamic programming?

O(n) (linear)

InsertionSort Big O

O(n^2)

SelectionSort Big O

O(n^2)

What is the Big O of brute force matrix multiplication?

O(n^3)
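
A minimal sketch of the definition-based (brute force) method, assuming Python lists of lists; the three nested loops are where the n^3 comes from:

    def matmul(A, B):
        n, m, p = len(A), len(B), len(B[0])
        C = [[0] * p for _ in range(n)]
        for i in range(n):
            for j in range(p):
                for k in range(m):               # inner products: n * p * m multiplications
                    C[i][j] += A[i][k] * B[k][j]
        return C

    print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))   # [[19, 22], [43, 50]]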

What is the Big O of Strassen's matrix multiplication?

O(n^(log_2 7)) ≈ O(n^2.81)

What is the best parenthesization of the following 4 matrices? M1: 2x3, M2: 3x2, M3: 2x4, M4: 4x3

Shortcut: multiplying a p×q matrix by a q×r matrix costs p·q·r scalar multiplications.
((M1 M2) (M3 M4)) = 12 + 24 + 12 = 48 (BEST)
(M1 (M2 (M3 M4))) = 24 + 18 + 18 = 60
(((M1 M2) M3) M4) = 12 + 16 + 24 = 52
((M1 (M2 M3)) M4) = 24 + 24 + 24 = 72
(M1 ((M2 M3) M4)) = 24 + 36 + 18 = 78
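
A minimal bottom-up dynamic-programming sketch for the matrix-chain problem (assuming Python; dims[i-1] x dims[i] gives the dimensions of matrix i, so the chain above is dims = [2, 3, 2, 4, 3]):

    def matrix_chain_cost(dims):
        n = len(dims) - 1                          # number of matrices in the chain
        # cost[i][j] = fewest scalar multiplications to compute M_i ... M_j (1-indexed)
        cost = [[0] * (n + 1) for _ in range(n + 1)]
        for length in range(2, n + 1):             # chain length
            for i in range(1, n - length + 2):
                j = i + length - 1
                cost[i][j] = min(
                    cost[i][k] + cost[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                    for k in range(i, j)           # try every split point k
                )
        return cost[1][n]

    print(matrix_chain_cost([2, 3, 2, 4, 3]))      # prints 48, i.e. ((M1 M2)(M3 M4))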

How does Strassen's matrix multiplication improve runtime?

Strassen's method replaces the 8 recursive sub-matrix multiplications of the straightforward divide-and-conquer approach with 7, at the cost of extra matrix additions, which are cheaper (Θ(n^2)). The recurrence drops from T(n) = 8T(n/2) + Θ(n^2) to T(n) = 7T(n/2) + Θ(n^2), lowering the runtime from Θ(n^3) to Θ(n^(log_2 7)).

What must be true if you are going to use a binary search on an array?

The array must be sorted
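
A minimal iterative binary search sketch (assuming Python); halving the range only works because the array is sorted:

    def binary_search(A, target):
        lo, hi = 0, len(A) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if A[mid] == target:
                return mid
            elif A[mid] < target:
                lo = mid + 1           # discard the left half
            else:
                hi = mid - 1           # discard the right half
        return -1                      # not found

    print(binary_search([-4, 1, 2, 3, 3, 7], 7))   # prints 5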

Is Divide-and-Conquer top down or bottom up?

Top down

Write out one iteration of what a computer does with a QuickSort of the following array: [3 7 3 2 1 -4]

[3 7 3 2 1 -4]
3:: 7 3 2 1 -4   - select the pivots (first index / last index)
3:: 3 7 2 -4 1   - sort from the outside in
3:: 3 2 7 -4 1
3:: 2 3 -4 7 1
3:: 2 -4 3 1 7
3:: -4 2 3 1 7
3:: -4 2 1 3 7   - pivots cross (-4 and 3)
-4 2 1 3 3 7     - end of first iteration
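
The trace above follows the course's own pivot scheme; for comparison, a minimal sketch of a standard QuickSort with Hoare partitioning (first element as pivot, scanning from both ends inward), assuming Python:

    def hoare_partition(A, lo, hi):
        pivot = A[lo]
        i, j = lo - 1, hi + 1
        while True:
            i += 1
            while A[i] < pivot:        # scan right until an element >= pivot
                i += 1
            j -= 1
            while A[j] > pivot:        # scan left until an element <= pivot
                j -= 1
            if i >= j:                 # pointers crossed: partition is done
                return j
            A[i], A[j] = A[j], A[i]    # swap the out-of-place pair onto the correct sides

    def quicksort(A, lo, hi):
        if lo < hi:
            p = hoare_partition(A, lo, hi)
            quicksort(A, lo, p)
            quicksort(A, p + 1, hi)

    A = [3, 7, 3, 2, 1, -4]
    quicksort(A, 0, len(A) - 1)
    print(A)   # [-4, 1, 2, 3, 3, 7]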

In the following example, what is the largest substructure that is not necessarily contiguous? abacad

acd

What does contiguous mean?

adjacent/next to each other

What is the sequence of numbers where n ≥ 2, according to the following definition: sequence[0] = 10; sequence[n] = 10 + sequence[n-1]

{10, 20, 30, ....}

