COSC 3320 - Midterm Study Set

Q6.5: maxsum: How many subproblems are there in total?

M_in, M_out, and M each have n subproblems for a total of 3n.

Q7.1: Why is a straightforward implementation of the recursive solution to a problem more expensive than the Dynamic Programming solution?

Without memoization, if the problem has overlapping subproblems, the recursive implementation recalculates their solutions more than once. A Dynamic Programming solution, on the other hand, determines the solutions from the bottom up, and therefore does not have this problem.

Q5.3: Why is mergesort an optimal sorting algorithm?

MergeSort uses O(n log n) comparisons in every case, which matches the Ω(n log n) lower bound for comparison-based sorting.

Q1.3: Do I need to use for/while loops in a recursive algorithm?

No, any loop can be represented by a recursive function.

Q3.4: Is (1/n) = ω(1) (little-omega)?

No: ω(1) would require 1/n to grow without bound, but 1/n → 0 as n → ∞ (in fact 1/n is strictly less than 1 for all n > 1).

Q2.3: f(n) = n^4 + 2n^3 + 1000n^2 + 10^6 n + 10^8, g(n) = n^3. Is f(n) = O(g(n))? Why?

No. The limit of f(n)/g(n) as n goes to infinity diverges (the n^4 term dominates the n^3 denominator), meaning f(n) != O(g(n)). We can also solve this easily by comparing the highest power of n in both functions. However, you should show the entire limit process for full credit.

Kruskal's Algorithm Time Complexity

O(ElogV)

what is the approximation ratio of the greedy set cover algorithm?

O(log delta), delta = size of largest set

Prim's algorithm time complexity

O(m log n + n log n) = O(m log n), where m is number of edges and n is the number of vertices

Dijkstra's algorithm time complexity

O(m log n)

Time complexity of negative weight cycle detection in Bellman Ford

O(m)

Runtime of Bellman-Ford

O(mn)

Floyd-Warshall Algorithm time complexity

O(n^3)

Floyd-Warshall number of subproblems

O(n^3): for every pair of vertices i and j, there are n subproblems (one for each choice of allowed intermediate vertex k)

The variable-length codes assigned to input characters in Huffman coding are called ___

Prefix Codes

Q5.1: Show that Quicksort is correct.

Proceed by strong induction: clearly, Quicksort is correct on a set of size n = 0. Suppose that Quicksort is correct on sets of size up to n. Then, for n + 1 elements, Quicksort selects a pivot p and partitions our input into subarrays, S1 and S2, then recursively sorts S1 and S2. However, since p is not in S1 or S2, we must have that both sets contain at most n elements. Thus, Quicksort correctly sorts both subarrays by the induction hypothesis, and the returned array S1 + {p} + S2 is sorted.
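The scheme in the proof can be paired with a minimal runnable sketch (a sketch only: the first-element pivot and list-comprehension partition are illustrative choices, not the course's exact implementation):

```python
def quicksort(S):
    """Sort a list by recursively partitioning around a pivot."""
    if len(S) <= 1:                        # base case: already sorted
        return S
    p = S[0]                               # pivot (choice is arbitrary)
    S1 = [x for x in S[1:] if x < p]       # elements smaller than the pivot
    S2 = [x for x in S[1:] if x >= p]      # elements at least the pivot
    # both subarrays have at most n elements, matching the induction step
    return quicksort(S1) + [p] + quicksort(S2)
```

Note the returned array is exactly S1 + {p} + S2 from the proof.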

Q5.7: Show the correctness of the Selection algorithm.

Proceed by strong induction: clearly, Select is correct on a set of size n = 0. Suppose that Select is correct on sets of size up to n. Then, for n + 1 elements, Select nominates a pivot p and partitions our input into subarrays, S1 and S2, as in Quicksort. Then:
• if |S1| = k − 1, Select returns p, in which case the algorithm is clearly correct;
• if |S1| < k − 1, the k-th smallest element lies in S2. Since there are |S1| + 1 elements smaller than it (all of S1, plus p), it must therefore be the (k − (|S1| + 1))-th smallest element of S2. By our induction hypothesis, since |S2| ≤ n, Select correctly returns the (k − (|S1| + 1))-th smallest element from S2;
• if |S1| ≥ k, then our desired element lies in S1. Since |S1| ≤ n, Select correctly returns the k-th smallest element of S1 by the induction hypothesis.
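The three cases of the proof translate directly into a quickselect sketch (assuming 1-indexed k; the pivot choice is illustrative):

```python
def select(S, k):
    """Return the k-th smallest element (1-indexed) of a nonempty list."""
    p = S[0]
    S1 = [x for x in S[1:] if x < p]    # strictly smaller than the pivot
    S2 = [x for x in S[1:] if x >= p]   # the rest
    if len(S1) == k - 1:
        return p                         # pivot is exactly the k-th smallest
    if len(S1) >= k:
        return select(S1, k)             # answer lies among the smaller elements
    return select(S2, k - len(S1) - 1)   # skip S1 and the pivot
```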

Q4.4: We proved the DC Recurrence Theorem for input sizes that are powers of b (n = b^l). Why do the same asymptotic bounds hold for all n?

Set k such that n1 = b^k < n < b^(k+1) = n2, i.e., sandwich n between two powers of b. The asymptotic bounds for T(n) hold for n1 and n2 by the DC Recurrence Theorem. Since T(n) is an increasing function of n, T(n1) ≤ T(n) ≤ T(n2). Then T(n1) and T(n2) are asymptotically within a factor of b from each other. Hence T(n) has the same asymptotic bound as T(n1) and T(n2).

Q5.5: Give a bad input, other than 1, 2, . . . , n, for quicksort that takes (n Choose 2) comparisons.

Simply input the array in reverse-sorted order

SDSP

Single Destination Shortest Paths

SPSP

Single Pair Shortest Path

SSSP

Single Source Shortest Path; Compute shortest paths from a given source to all vertices in the graph.

HW1 Exercise 4.3: Solve the following recurrences. Give the answer in terms of Big-Theta notation. Solve up to constant factors, i.e., your answer must give the correct function for T(n), up to constant factors. You can assume constant base cases, i.e., T(1) = T(0) = c, where c is a positive constant. You can ignore floors and ceilings. You can use the DC Recurrence Theorem if it applies. (f) T(n) = 4T(n/2) + n^3

Solution: (f) Write f(n) = n^3 , a = 4, b = 2. Then af(n/b) = 4(n/2)^3 = (4/8)n^3 = (1/2) f(n), hence c = (1/2) < 1. By the DC Recurrence Theorem, T(n) = Θ(f(n)) = Θ(n^3)

HW1 Exercise 4.3: Solve the following recurrences. Give the answer in terms of Big-Theta notation. Solve up to constant factors, i.e., your answer must give the correct function for T(n), up to constant factors. You can assume constant base cases, i.e., T(1) = T(0) = c, where c is a positive constant. You can ignore floors and ceilings. You can use the DC Recurrence Theorem if it applies. (k) T(n) = T(7n/8) + n

Solution: (k) Write f(n) = n, a = 1, b = (8/7) . Then af(n/b) = 1(n/(8/7)) = (7/8)n = (7/8) f(n), hence c = (7/8) < 1. By the DC Recurrence Theorem, T(n) = Θ(f(n)) = Θ(n)

HW1 Exercise 4.3: Solve the following recurrences. Answer in terms of Big-Theta notation. Solve up to constant factors, i.e., your answer must give the correct function for T(n), up to constant factors. You can assume constant base cases, i.e., T(1) = T(0) = c, where c is a positive constant. You can ignore floors and ceilings. You can use the DC Recurrence Theorem if it applies. (a) T(n) = 3T(n/2) + n

Solution: (a) Write f(n) = n, a = 3, b = 2. Then af(n/b) = 3(n/2) = (3/2)n = (3/2) f(n), hence c = (3/2) > 1. By the DC Recurrence Theorem, T(n) = Θ(n^logb(a)) = Θ(n^log2(3)) ≈ Θ(n^1.58)

HW1 Exercise 2.11: You have the task of heating up n buns in a pan. A bun has two sides and each side has to be heated up separately in the pan. The pan is small and can hold only (at most) two buns at a time. Heating one side of a bun takes 1 minute, regardless of whether you heat up one or two buns at the same time. The goal is to heat up both sides of all n buns in the minimum amount of time. Suppose you use the following recursive algorithm for heating up (both sides) of all n buns. If n = 1, then heat up the bun on both sides; if n = 2, then heat the two buns together on each side; if n > 2, then heat up any two buns together on each side and recursively apply the algorithm to the remaining n − 2 buns. • Set up a recurrence for the amount of time needed by the above algorithm. Solve the recurrence. • Show that the above algorithm does not solve the problem in the minimum amount of time for all n > 0. • Give a correct recursive algorithm that solves the problem in the minimum amount of time. • Prove the correctness of your algorithm (use induction) and also find the time taken by the algorithm.

Solution:
• Clearly, T(1) = T(2) = 2. For n > 2, T(n) = 2 + T(n − 2), which resolves to: T(n) = n if n is even, and T(n) = n + 1 if n is odd.
• Simply note that for n = 3, we can heat the buns in 3 steps: (step 1) heat the top of bun 1 and the top of bun 3; (step 2) heat the top of bun 2 and the bottom of bun 3; (step 3) heat the bottom of bun 1 and the bottom of bun 2. However, our algorithm actually takes 4 steps: (step 1) heat the top of bun 1 and the top of bun 2; (step 2) heat the bottom of bun 1 and the bottom of bun 2; (step 3) heat the top of bun 3; (step 4) heat the bottom of bun 3.
• If n is even, the recursive algorithm is optimal. Otherwise, select three buns, label them 1, 2, and 3, and heat them in 3 steps as above; then repeat the recursive algorithm on the remaining n − 3 buns.
• The base cases are obvious. Suppose the algorithm is correct for all values up to n. Then, for n + 1 buns: if n + 1 is even, we heat two buns and recurse on n − 1; if n + 1 is odd, we heat three buns and recurse on n − 2. Correctness follows from the induction hypothesis in both cases. For every n ≥ 2, this algorithm takes n steps (a single bun still needs 2 steps).

HW1 Exercise 4.8: Give a recursive algorithm to compute 2^n (in decimal) for a given integer n > 0. Your algorithm should perform only O(log n) integer multiplications.

Solution: Notice that, if n = (b0 b1 ... bk) is the binary representation of n, then 2^n = 2^(2^k b0) · 2^(2^(k−1) b1) · · · 2^(b_k). Whenever b_i = 0, the factor 2^(2^(k−i) b_i) = 1.
______________________________________________________________
Algorithm pow(b, n): Returns b^n
1: procedure pow(b, n)
2:   if n = 0 then
3:     return 1
4:   else if n is even then
5:     return pow(b^2, n/2)
6:   else
7:     return b × pow(b^2, floor(n/2))
Note that this performs O(log n) multiplications: each call divides n by 2, so the algorithm terminates after at most ceiling(log n) + 1 calls.
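The pseudocode above translates directly to Python (the function name is illustrative):

```python
def pow_fast(b, n):
    """Compute b**n using O(log n) integer multiplications (repeated squaring)."""
    if n == 0:
        return 1
    half = pow_fast(b * b, n // 2)       # square the base, halve the exponent
    return half if n % 2 == 0 else b * half  # odd n needs one extra factor of b
```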

HW1 Exercise 4.1: Prove the asymptotic bound for the following recurrences by using induction. Assume that base cases of all recurrences are constants, i.e., T(n) = Θ(1) for n < c for some constant c. (b) T(n) ≤ T(5n/6) + O(n). Then T(n) = O(n)

Solution: (b) It is helpful to restate what we want to prove: given T(n) ≤ T(5n/6) + an, where a > 0 is a fixed constant, we wish to show that T(n) = O(n), i.e., that there exists a fixed constant c > 0 such that T(n) ≤ cn for all n. The base cases can be satisfied by choosing a sufficiently large c. For the induction step, assume T(k) ≤ ck for all k ≤ n and show that T(n + 1) ≤ c(n + 1):
T(n + 1) ≤ T(5(n + 1)/6) + a(n + 1)
≤ c(5(n + 1)/6) + a(n + 1) (by the induction hypothesis)
= c(n + 1) − (c/6)(n + 1) + a(n + 1)
= c(n + 1) − ((c/6) − a)(n + 1)
≤ c(n + 1), if c ≥ 6a.

HW1 Exercise A.3: Prove by mathematical induction the following statement: [n (Summation) i=0] a^i = (a^(n+1) − 1 / (a − 1)) where a != 1 is some fixed real number. By the way, this is the sum of a geometric series, a useful formula that comes across in algorithmic analysis.

Solution: The base case, n = 1, is trivial: Σ_{i=0}^{1} a^i = a + 1, and (a^(1+1) − 1)/(a − 1) = (a + 1)(a − 1)/(a − 1) = a + 1. Suppose the formula is correct for the sum of the first n terms. Then, for n + 1, we have:
Σ_{i=0}^{n+1} a^i = Σ_{i=0}^{n} a^i + a^(n+1)
= (a^(n+1) − 1)/(a − 1) + a^(n+1) (by the induction hypothesis)
= (a^(n+1) − 1)/(a − 1) + (a^(n+2) − a^(n+1))/(a − 1)
= (a^(n+1) − 1 + a^(n+2) − a^(n+1))/(a − 1)
= (a^(n+2) − 1)/(a − 1).

HW1 Exercise 2.10: Rank the following functions by order of growth; that is, find an arrangement g1, g2, g3, . . ., of the functions satisfying g1 = O(g2), g2 = O(g3), . . . n^3 , n/log^2(n), n log n, (1.1)^n , 1/n^3 , log^6(n), 1/n, 2^(lg n) , n!, n^(lglg(n)) , 2^√(log n) , n^(1/log n) Note: lg or log means a logarithm base 2 and that log^6(n) is the usual way of writing (log n)^6 .

Solution: The ordering is 1/n^3 , 1/n , n^(1/log n) , log^6(n), 2^√(log n) , n/log^2(n), 2^(log n) , n log n, n^3 , n^(lglg(n)) , (1.1)^n , n!

kruskal pseudocode

1. Sort all the edges from low weight to high.
2. Take the edge with the lowest weight and add it to the spanning tree. If adding the edge creates a cycle, reject it.
3. Keep adding edges until all vertices are reached.
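A runnable sketch of these steps, with a union-find structure standing in for the cycle check (the edge-list format and vertex labeling are assumptions for illustration):

```python
def kruskal(n, edges):
    """Return MST edges of an undirected graph.

    n: number of vertices, labeled 0..n-1
    edges: list of (weight, u, v) tuples
    """
    parent = list(range(n))              # union-find forest

    def find(x):                         # root of x's component
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):        # edges by increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                     # different components: no cycle
            parent[ru] = rv              # union the two components
            mst.append((u, v, w))
    return mst
```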

Q1.4: What if there is no base case in a recursive algorithm?

The algorithm will never terminate.

Q6.4: maxsum: Guess the runtime of the divide and conquer solution.

The divide and conquer algorithm runs in linear time. Each step of M_in, M_out, and M runs in constant time. There are n such steps for a total of O(n).

Shortest Path

The lowest weight path between two nodes

Q4.6: MergeSort: How small can the number of comparisons be?

The number of comparisons is minimized when all elements of the left array are less than the first element of the right array. In that case, we perform (n/2) comparisons.

Q6.3: What is the runtime of the brute-force solution for the maxsum problem?

There are (n Choose 2) possible choices for i and j for a runtime of O(n^2).

Q7.5: How many subproblems does our Dynamic Programming solution to the string-matching problem have?

There is a subproblem for each i and j, which ranges from 1 to m and 1 to n, respectively, for a total of m*n subproblems.

Q3.6: How many comparisons does the max algorithm: max(S) = ( if |S| = 1, return S else, return max(head(S), max(tail(S)) ) take?

This takes n−1 comparisons. Notice that the number of comparisons is described by the recurrence T(n) = 1 + T(n − 1), T(1) = 0. Simply unrolling this recurrence yields T(n) = n-1

Q1.6: Find the closed form expression for the recurrence: T(n) = 1 + 4T(n − 1) where T(1) = 1.

Unroll this recursion to observe a pattern: T(n) = 1 + 4T(n − 1) = 1 + 4(1 + 4T(n − 2)) = 1 + 4 + 16T(n − 2) = 1 + 4 + 16(1 + 4T(n − 3)) = 1 + 4 + 16 + 64T(n − 3) . . . = 1 + 4 + 16 + . . . + 4^(n−1) = (4^n − 1)/(4 − 1) = (4^n − 1)/3. (Check: T(1) = 1 and T(2) = 1 + 4T(1) = 5 = (4^2 − 1)/3.)

Q4.2: Solve T(n) ≤ 3T(n/4) + 1.

Write f(n) = 1, a = 3, b = 4. Then af(n/b) = 3f(n/4) = 3 = 3f(n) so c = 3 > 1, and T(n) = Θ(n^logb(a)) = Θ(n^log4(3)). Note that log4(3) ≈ 0.792

Q4.1: Using the DC Recurrence Theorem, solve T(n) ≤ 2T(n/2) + n

Write f(n) = n, a = b = 2. Then af(n/b) = 2f(n/2) = 2(n/2) = n = f(n) so c = 1, and T(n) = Θ(f(n) logb(n)) = Θ(n log n).

Q4.3: Solve T(n) ≤ 2T(n/4) + n^2 .

Write f(n) = n^2, a = 2, b = 4. Then af(n/b) = 2f(n/4) = 2(n/4)^2 = (2/16)n^2 = (1/8) f(n) so c = (1/8) < 1, and T(n) = Θ(f(n)) = Θ(n^2).

Q3.5: Is (1/n^2) = O(1/n)?

Yes, for all n > 1, (1/n^2) < (1/n)

Q1.2: Can any algorithmic problem be solved by using recursion alone?

Yes.

Q2.5: Is n^2 = O(1.1^n)?

Yes. The limit of f(n)/g(n) = n^2/(1.1)^n as n goes to infinity is 0 (apply L'Hôpital's rule twice, or note that exponentials dominate polynomials), so n^2 = O(1.1^n).

Q2.4: f(n) = n^4 + 2n^3 + 1000n^2 + 10^6 n + 10^8, g(n) = n^4. Is f(n) = O(g(n))? Why?

Yes. The limit of f(n)/g(n) as n goes to infinity equals 1 (divide every term by n^4; the lower-order terms vanish). We can also solve this easily by comparing the highest power of n in both functions. However, you should show the entire limit process for full credit.

Q3.3: Is n^(1/lg n) = O(1)

Yes. In fact, lg(n^(1/lg n)) = (1/lg n) lg n = 1. Hence n^(1/lg n) = 2.

Q3.2: Is lg^2(n) = O(n^0.1)?

Yes. Notice that lg(lg^2(n)) = 2 lg lg n, whereas lg(n^0.1) = 0.1 lg n; since 2 lg lg n grows strictly slower than 0.1 lg n, we get lg^2(n) = O(n^0.1).

Q7.4: What is the additional cost of finding the solution, i.e., the indices, in the maxsum problem?

You simply need to include a bookkeeping step in which, if the solution is equal to M_in(i), you record i. We then backtrack from the solution at n. In total, this is O(n) additional steps.

Aligning two Sequences: pseudocode

a = (m+1) × (n+1) table
for i in 0 ... m: a[i,0] = -i
for j in 0 ... n: a[0,j] = -j
for i in 1 ... m:
    for j in 1 ... n:
        diag = a[i-1, j-1] + isMatchingCharacter(s[i], t[j])
        left = a[i, j-1] + isMatchingCharacter(gap, t[j])
        up = a[i-1, j] + isMatchingCharacter(s[i], gap)
        a[i, j] = max(diag, left, up)
return a[m, n] # score
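A runnable version of the same table-filling idea, assuming a simple +1/−1/−1 scoring scheme (match/mismatch/gap); the scores and function name are illustrative, not the course's exact scoring matrix:

```python
def sim(s, t, match=1, mismatch=-1, gap=-1):
    """Global alignment (similarity) score of strings s and t."""
    m, n = len(s), len(t)
    a = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        a[i][0] = gap * i              # prefix of s against all gaps
    for j in range(n + 1):
        a[0][j] = gap * j              # prefix of t against all gaps
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            score = match if s[i - 1] == t[j - 1] else mismatch
            a[i][j] = max(a[i - 1][j - 1] + score,  # align s[i] with t[j]
                          a[i][j - 1] + gap,        # gap in s
                          a[i - 1][j] + gap)        # gap in t
    return a[m][n]
```

The double loop makes the O(mn) runtime of the algorithm visible directly.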

negative weight cycles i.e.

a cycle whose edge weights sum to a negative value; traversing it repeatedly keeps reducing the total path distance, so distances can be driven to minus infinity

The Floyd-Warshall algorithm gives a dynamic programming algorithm to compute

all-pairs shortest paths

The Floyd-Warshall algorithm: negative weights are ___ (allowed/not allowed)

allowed

Dijkstra's algorithm uses ___

binary heap

Prim's Algorithm uses ___

binary heap

Bellman-Ford Algorithm can work on ___ (positive/negative/both) weights

both

Dijkstra's algorithm can work on ___ graphs, but Prim's algorithm only works on ___ graphs

both directed and undirected; undirected

Huffman's coding is ___ (bottom-up/top-down)

bottom-up

set cover pseudocode:

c := () # elements covered so far
u := [1, 2, 3, 4, ...] # all required numbers
while c != u: # while all numbers are not covered
    a := pick the set with the most unpicked elements
    c := union(c, a)
return c
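A runnable version of the greedy rule (Python sets; assumes the given sets jointly cover the universe, otherwise the loop would not terminate):

```python
def greedy_set_cover(universe, sets):
    """Repeatedly pick the set covering the most still-uncovered elements."""
    covered = set()
    chosen = []
    while covered != universe:
        # greedy choice: maximize the number of newly covered elements
        best = max(sets, key=lambda s: len(s - covered))
        covered |= best
        chosen.append(best)
    return chosen
```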

Negative weight edges can ___

create negative weight cycles

Prim's algorithm runs faster in ___ graphs. Kruskal's algorithm runs faster in ___ graphs.

dense; sparse

Dijkstra's algorithm computes the distances from source s to other vertices in increasing order of their ___ from s

distances

To implement Huffman's algorithm efficiently, the key operation is to ___

efficiently choose two characters with the lowest frequencies in each step

what does it mean for a problem to be np-hard? In fact, it is NP-hard to ___.

even approximate the solution to within a logarithmic factor of the optimal

Given a chain of n matrices, the number of different ways to parenthesize is ___ in n

exponential

Q7.2: Fill in the missing line (5) for the following algorithm: _____________________________________ Algorithm fib(n): Returns the n-th Fibonacci number. 1: procedure fib(n) 2: Initialize array fib to 0 3: fib[1] ← 1 4: for i = 2 to n do 5: ____________________________ 6: return fib[n]

fib[i] ← fib[i − 1] + fib[i − 2].
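The completed algorithm can be checked with a short bottom-up version (0-indexed Python, with fib[0] = 0):

```python
def fib(n):
    """Bottom-up Fibonacci: fills the table once, O(n) time."""
    table = [0] * (n + 1)     # table[0] stays 0
    if n >= 1:
        table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]  # the filled-in loop body
    return table[n]
```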

Aligning two Sequences: base cases

a[i,0] = -i for every row i (a prefix of s against an empty t); a[0,j] = -j for every column j (an empty s against a prefix of t); a[0,0] = 0

prim's algorithm is used for

finding the Minimum Spanning Tree

Bellman-Ford subproblems

d(i, v): the length of the shortest path from the source to v that uses at most i edges

difference between dijkstra and prim's

Dijkstra finds shortest paths; Prim finds a minimum spanning tree

why are the number of subproblems equal to n^3 for Floyd-Warshall

for every pair of vertices i and j, there are n subproblems

In Huffman's coding, the lengths of the assigned codes are based on the ___

frequencies of corresponding characters

chain matrix multiplication pseudocode

function MatrixCostHelper(p, i, j)
    if m[i, j] = ∞ then
        if i = j then
            m[i, j] ← 0
        else
            for k ← i to j − 1 do
                left ← MatrixCostHelper(p, i, k)
                right ← MatrixCostHelper(p, k + 1, j)
                total ← left + right + p[i−1]·p[k]·p[j]
                if total < m[i, j] then
                    m[i, j] ← total
    return m[i, j]
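A runnable translation of this recursion-with-table-look-up (dimensions list p as in the standard formulation, where Ai is p[i−1] × p[i]; names are illustrative):

```python
import math

def matrix_chain_cost(p):
    """Minimum scalar multiplications to compute A1 x A2 x ... x An."""
    n = len(p) - 1
    m = [[math.inf] * (n + 1) for _ in range(n + 1)]  # memo table

    def cost(i, j):
        if m[i][j] == math.inf:            # not yet computed
            if i == j:
                m[i][j] = 0                # a single matrix costs nothing
            else:
                for k in range(i, j):      # try every split point
                    total = (cost(i, k) + cost(k + 1, j)
                             + p[i - 1] * p[k] * p[j])
                    m[i][j] = min(m[i][j], total)
        return m[i][j]

    return cost(1, n)
```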

Dijkstra's algorithm is a ___ algorithm

greedy

Huffman's coding algorithm is ___

greedy

kruskal is a ___ algorithm

greedy

While Dijkstra looks only to the ___, Bellman goes through ___

immediate neighbors of a vertex; each edge in every iteration.

Dijkstra's algorithm computes the distances from source s to other vertices in ___ order of their distances from s

increasing

Dijkstra's algorithm pseudocode

initialize: set the distance from the source to each vertex to infinity; set the parent of each vertex to null; set the source's distance to 0
Q := build heap of all vertices keyed by distance
S := empty set to store finished nodes
while Q is not empty:
    u := extractMin(Q)
    S.add(u)
    for each neighbor v of u:
        newDistance := distance[u] + weight(u, v)
        if newDistance < distance[v]: update v's distance in Q and set u as v's parent
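A runnable sketch using Python's heapq as the binary heap (the adjacency-list format is an assumption; since heapq has no decrease-key, stale heap entries are simply skipped):

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source; weights must be nonnegative.

    graph: {u: [(v, weight), ...]} adjacency list
    """
    dist = {u: float('inf') for u in graph}
    dist[source] = 0
    heap = [(0, source)]               # (distance, vertex) priority queue
    done = set()
    while heap:
        d, u = heapq.heappop(heap)     # extract-min
        if u in done:
            continue                   # stale entry; skip instead of decrease-key
        done.add(u)
        for v, w in graph[u]:
            if d + w < dist[v]:        # relax edge (u, v)
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist
```

Vertices leave the heap in increasing order of their distance from the source, matching the card above about Dijkstra's processing order.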

negative weight cycles can

drive path distances to minus infinity (each traversal of the cycle decreases the total)

Huffman coding is a ___

lossless data compression algorithm

dense graph has

many more edges than vertices

Aligning two Sequences: The best alignment is one which receives the ___

maximum score called the similarity — sim(s, t).

Kruskal's algorithm is a

minimum spanning tree algorithm

Prim's Algorithm

mst

Fourier Transform can be used for

multiplying two polynomials

Huffman: A greedy approach places our ___characters in __ sub-trees

n characters in n sub-trees and starts by combining the two least weight nodes into a tree

how many steps does the huffman algorithm take?

n-1

The Bellman-Ford Algorithm detects if there are ___ reachable from s

negative cycles

huffman coding time complexity?

O(n log n): the algorithm performs n − 1 merges, and each heap operation takes O(log n)

Aligning two Sequences: runtime

O(mn)

what is the time complexity of the robot cleaning algorithm?

O(n); n = number of dirty rooms

number of subproblems in matrix chain multiplication

O(n^2): a distinct subproblem for every pair (i, j) with 1 <= i <= j <= n

runtime of chain matrix multiplication with recursion and table look-up

O(n^3)

dijkstra can work on ___ (positive/negative/both) weights

positive

How does Huffman's code make sure there is no ambiguity?

prefix codes

Bellman Ford Pseudocode

same relaxation step as Dijkstra's, but relax every edge |V| − 1 times; then make one extra pass to detect negative cycles:
for each edge (U, V) in G:
    if distance[U] + edge_weight(U, V) < distance[V]:
        Error: Negative Cycle Exists
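A runnable sketch of Bellman-Ford including the detection pass (vertex labels 0..n−1 and the edge-list format are assumptions):

```python
def bellman_ford(n, edges, source):
    """Distances from source; raises if a reachable negative cycle exists.

    n: number of vertices (0..n-1); edges: list of (u, v, weight)
    """
    dist = [float('inf')] * n
    dist[source] = 0
    for _ in range(n - 1):             # relax every edge n-1 times: O(mn)
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:              # one extra O(m) pass for detection
        if dist[u] + w < dist[v]:
            raise ValueError("negative cycle exists")
    return dist
```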

If there are no such negative cycles, The Bellman-Ford Algorithm returns the ___.

shortest paths

Aligning two Sequences: recursive formula

sim(s[1...i], t[1...j]) = max of:
- sim(s[1...i-1], t[1...j-1]) + isMatchingCharacter(s[i], t[j])
- sim(s[1...i], t[1...j-1]) + isMatchingCharacter(gap, t[j])
- sim(s[1...i-1], t[1...j]) + isMatchingCharacter(s[i], gap)

Bellman-Ford algorithm is a dynamic programming based algorithm for ___.

single-source shortest paths

In Huffman's coding, the most frequent character gets the ___ code and the least frequent character gets the ___ code.

smallest; largest

Dijkstra's and Bellman-Ford's

solve sssp

One can solve the APSP problem, by ___

solving n instances of the SSSP problem, one from each node as a source

when reversing directions of the edges for sdsp, make sure to switch ___

source and destination

Aligning two Sequences: An alignment is an insertion of ___.

spaces in arbitrary locations along the sequences so that they end up with the same size

Which data structure is efficient for huffman's coding?

storing the characters in a min-heap data structure with their frequencies as key values
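The min-heap approach can be sketched as follows (carrying partial code tables in the heap is one convenient trick; the tiebreak counter only keeps tuples comparable and is not part of the algorithm):

```python
import heapq

def huffman_codes(freq):
    """Build prefix codes from {char: frequency} using a min-heap.

    O(n log n): n - 1 merges, each a pair of O(log n) heap operations.
    """
    # heap entries: (frequency, tiebreak, {char: code-so-far})
    heap = [(f, i, {c: ''}) for i, (c, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)    # two lowest-frequency trees
        f2, _, right = heapq.heappop(heap)
        merged = {c: '0' + code for c, code in left.items()}
        merged.update({c: '1' + code for c, code in right.items()})
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]
```

Note the most frequent character ends up with the shortest code, as the cards above state.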

Aligning two Sequences: a given scoring matrix score where the entry score[x, y] gives ___

the alignment score for characters x and y

Prefix code means ___

the code assigned to one character is not the prefix of the code assigned to any other character

Aligning two Sequences: The score for an alignment is ___

the sum of the scores of its aligned characters

Dijkstra's algorithm constructs ___

a shortest-path tree rooted at the source

t/f: shortest path problems allow negative weight cycles

false: shortest paths are not well-defined in the presence of a reachable negative weight cycle, since the distance can be decreased without bound

what does it mean for a problem to be np-hard? In fact, it is NP-hard to even approximate the solution to within a logarithmic factor of the optimal.

unlikely to have a polynomial time algorithm that finds the optimal solution

SDSP problem can be solved by

using sssp and reversing the directions of the edges, and using destination as source

Q6.1: Looking at Quicksort, are the subproblems: • Overlapping? • Disjoint? • Independent?

• The subproblems do not overlap. • The subproblems are disjoint. • The subproblems are independent.

Chain Matrix Multiplication recurrence naively

P(n) = Σ_{k=1}^{n−1} P(k)·P(n−k), summing over split points k; this is the Catalan recurrence, which grows exponentially in n

Q5.4: How many distinct pairs are there among n elements?

(n Choose 2) = (n(n−1))/2

Q3.1: Which is asymptotically bigger, 2^(lg n) or √n?

2^(lg n) = n > √n, so 2^(lg n) is asymptotically bigger.

Q1.1: What are the two key ingredients of a recursive algorithm?

A recursive algorithm must have a base step and a recursive step

Bellman-Ford Algorithm

Algorithm which computes shortest paths from a single vertex to all other vertices in a weighted digraph.

APSP

All Pairs Shortest Paths

Chain Matrix Multiplication

C(i, j) = minimum cost of multiplying Ai x A(i+1) x ... x Aj C(i,j) = min_{i<=k<j}{C(i,k)+C(k+1,j)+m_{i-1}*m_k*m_j}

Chain Matrix Multiplication recurrence (dynamic programming)

C(i,j) = min_{i<=k<j}{C(i,k)+C(k+1,j)+m_{i-1}*m_k*m_j}

Single Pair Shortest Path

Compute shortest path for a given pair of vertices.

SSSP Definition

Compute shortest paths from a given source to all vertices in the graph.

Single Destination Shortest Paths

Compute shortest paths to a given destination from all vertices in the graph.

All Pairs Shortest Paths

Compute the shortest paths for all pairs of vertices in the graph.

Q6.2: Why does simply adding all elements of the array not provide a solution to the maximum contiguous subarray sum problem?

Entries can be negative, e.g., A = [−1, 1] which has solution 1 but sum(A) = 0.

Q5.8: Why is the worst-case complexity of the Naive Selection algorithm O(n^2) ?

For the same reason as Quicksort: if we choose a pivot poorly, then | S1 | = n − 1 and | S2 | = 0. Then our recurrence is once again T(n) = n − 1 + T(n − 1), T(0) = 0 which resolves to T(n) = (n Choose 2) = O(n^2)

Q6.6: quicksort: Why are there two cases for M_in?

If A[i − 1] is part of the solution for M_in[i], then M_in[i] = M_in[i − 1] + A[i]. Otherwise, it does not include A[i − 1] and must therefore be only A[i]. Thus, M_in[i] = max(A[i], M_in[i − 1] + A[i])

Q5.2: Why is quicksort not an optimal sorting algorithm?

If the pivot is chosen so that the remaining n − 1 elements are all in one of the sets S1 or S2, then our recurrence becomes: T(n) = n − 1 + T(n − 1), T(0) = 0, since we require n − 1 comparisons. This has a closed form expression T(n) = (n Choose 2) = O(n^2)

Q7.3: Fibonacci number: How many operations does the recursive version of the above algorithm take?

Implemented naively, the recurrence relation is given by T(n) = 1 + T(n − 1) + T(n − 2), which can be shown to be O(ϕ^n) by induction, where ϕ = ((1+√5)/2).

Q5.6: Why is Quicksort used in practice as opposed to MergeSort?

In practice, most inputs give O(n log n) performance for Quicksort, but with better constant factors than Mergesort. Moreover, Mergesort requires an auxiliary array, whereas Quicksort can be done in place.

Q4.5: Argue why the number of comparisons in the merge step of MergeSort is at most n.

Let i and j represent the positions of our left and right subarrays during the merge step. If the i-th element of the left array is less than the j-th of the right array, then we increment i. Otherwise, we increment j. This continues until i and j are equal to the size of the left and right subarrays, respectively. In particular, since exactly one of i and j is incremented each step, there can be at most n steps (for otherwise, i or j would be out of the array bounds). Since there is exactly one comparison per step, this bounds the number of comparisons by n.
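The argument can be made concrete with a merge that counts its comparisons (the counter is added purely for illustration):

```python
def merge(left, right):
    """Merge two sorted lists, counting comparisons.

    At most len(left) + len(right) comparisons: each one advances i or j.
    """
    out, comparisons = [], 0
    i = j = 0
    while i < len(left) and j < len(right):
        comparisons += 1                  # exactly one comparison per step
        if left[i] < right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])                  # leftovers need no comparisons
    out.extend(right[j:])
    return out, comparisons
```

The second test below is the minimum-comparison case from Q4.6: the left half is exhausted after n/2 comparisons.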

