CSE 100


Given a directed graph G = (V,E) where all edges have a unit weight/distance, and two vertices s, t in V , we want to find a shortest path from s to t in O(E + V) time. Which algorithm would you use?

BFS

You are given n integers, a1, a2, ..., an. Give an algorithm to determine if all the integers are distinct or not. Assume that the simple uniform hashing assumption holds. Give an O(n) time algorithm via hashing. Briefly explain why your algorithm runs in O(n) time (in expectation).

Begin by creating a hash table of size O(n) and hash the given integers into the table. When inserting an element, check if the corresponding slot is already occupied. If it is, test if the slot contains other elements with the same key value. As we will be performing O(n) insert operations and each insertion takes O(1) time assuming uniform hashing, the running time of the algorithm is O(n).
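The scheme above can be sketched in Python; the built-in `set` stands in for a chained hash table (function and variable names are illustrative):

```python
def all_distinct(nums):
    """O(n) expected time: n hash-table operations at O(1) expected each."""
    seen = set()              # hash table of size O(n)
    for x in nums:
        if x in seen:         # the slot's chain already holds this key
            return False      # duplicate found
        seen.add(x)
    return True
```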

In the max subarray problem, we are given as input an array A[1...n] of n elements. Here, our goal is to compute the largest sum of all elements in any subarray of A[1...n]. In other words, the goal is to compute max_{1 ≤ i ≤ j ≤ n} S[i, j], where S[i, j] = A[i] + A[i+1] + ... + A[j]. We learned an O(n log n) time algorithm based on divide and conquer for the max subarray problem. The algorithm required us to compute max_{1 ≤ i ≤ n/2 < j ≤ n} S[i, j] (the best subarray crossing the middle) in O(n) time; for simplicity, here let's assume n is even. Describe such an algorithm. It must be clear from the description why the running time is O(n), or you should explain why.

Compute the left-half sums: start from the center index n/2 and work toward the beginning of the array, calculating S[n/2, n/2], S[n/2 − 1, n/2], ..., S[1, n/2] in this order. Each sum extends the previous one by a single element, so all of them take O(n) time in total; keep track of the maximum. Likewise, compute the right-half sums S[n/2 + 1, n/2 + 1], S[n/2 + 1, n/2 + 2], ..., S[n/2 + 1, n] in this order in O(n) time, keeping track of their maximum. Finally, add the two maxima: any subarray crossing the middle splits into a left part ending at n/2 and a right part starting at n/2 + 1, so this sum is exactly the maximum over all crossing subarrays.
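The two O(n) scans can be sketched in Python (0-indexed; `mid` plays the role of n/2, an assumption of this illustration):

```python
def max_crossing_sum(A):
    """Max sum of a subarray of A that crosses the middle, for even len(A)."""
    n = len(A)
    mid = n // 2
    best_left = float('-inf')
    s = 0
    for i in range(mid - 1, -1, -1):  # sums ending at the middle, growing left
        s += A[i]
        best_left = max(best_left, s)
    best_right = float('-inf')
    s = 0
    for j in range(mid, n):           # sums starting just past the middle
        s += A[j]
        best_right = max(best_right, s)
    return best_left + best_right     # add, since the subarray must cross
```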

Suppose we just ran DFS on a directed graph G. The algorithm DFS we learned in class computes (v.d, v.f) for every vertex v in V . Using this, we would like to do the following. Briefly explain how. If your solution is not based on each vertex's discovery/finish time, you cannot get full points. How do we compute the number of depth first trees?

Count the roots of the depth-first trees using the discovery/finish times: a vertex v is a root of a depth-first tree if and only if its interval (v.d, v.f) is not nested inside any other vertex's interval, i.e., there is no vertex u with u.d < v.d < v.f < u.f. The number of depth-first trees equals the number of such vertices.

You are given two sequences, <a1, a2, ..., an> and <b1, b2, ..., bn>, where each sequence consists of distinct integers. Describe a linear time algorithm (in the average case) that tests if a sequence is a permutation of the other. Assume that the simple uniform hashing assumption holds.

Create an empty hash table H of size O(n). For each element ai in the sequence <a1, a2, ..., an>, insert ai into H. Then, for each element bi in the sequence <b1, b2, ..., bn>, search for bi in H; if bi is not found, return False. If the loop completes, return True. We create a hash table of size Θ(n) and use chaining to resolve collisions. Recall that under the simple uniform hashing assumption, a search, whether successful or unsuccessful, takes O(1) time on average. Inserting a1, a2, ..., an takes O(n) time in the average case, and each search for bi takes O(1) time on average, for O(n) total. Since each of the two sequences consists of distinct integers, every bi is found in the hash table if and only if one sequence is a permutation of the other.
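A Python sketch of the insert-then-search routine (the built-in `set` plays the role of the chained hash table; names are illustrative):

```python
def is_permutation(a, b):
    """O(n) expected time under uniform hashing; both sequences are distinct."""
    if len(a) != len(b):
        return False
    table = set()
    for x in a:               # n inserts, O(1) expected each
        table.add(x)
    for y in b:               # n searches, O(1) expected each
        if y not in table:
            return False
    return True
```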

Given a directed graph G = (V,E) where every edge has positive weight/distance, and two vertices s, t in V , we want to find a shortest path from s to t. Which algorithm would you use? You should use the fastest algorithm.

Dijkstra's algorithm

We learned the theorem that any comparison based sorting algorithm has a running time of Ω(n log n). Your job is to complete the following proof of the theorem. Consider any comparison-based algorithm and its decision tree T on n elements. Briefly explain why the decision tree has at least n! leaves.

Each leaf represents a distinct possible ordering of the elements, and there are n! possible orderings for n elements.

2^n = O(n^100 * log n)

False

If T(n) = T(n/2) + Θ(n), then we have T(n) = Θ(n log n). True or False?

False

If an algorithm's worst case running time is Θ(n^2 ), it means that the running time is Θ(n^2 ) for all inputs of size n. True or False?

False

If an undirected graph G = (V,E) has |V | - 1 edges, it must be a tree

False

If f = O(g), then g = O(f) True or False?

False

In the Random Access Model (RAM), there are multiple machines available for computation. Therefore, one can run each thread on a separate machine. True or False?

False

Merge-sort is an in-place sorting algorithm. True or False?

False

Suppose we resolve hash table collisions using chaining. Under the simple uniform hashing assumption, a successful search takes Θ(1/(1 − α)) time in expectation where α is the load factor of the current hash table

False

The adjacency matrix representation of a graph G = (V,E) needs memory O(|E|+|V |).

False

The running time of Topological sort is Θ(E + V log V). True or False?

False

The running time of the Strassen's algorithm for multiplying two n-by-n matrices is Θ(n^3) True or False?

False

We are given a array A[1 · · · n] (not necessarily sorted) along with a key value v. We can find 1 ≤ i ≤ n such that A[ i ] = v in O(log n) time (if such i exists).

False


log^15 n = Ω(√ n). True or False?

False

Σ_{i=1}^{n} i = O(n). True or False?

False

n log^5 n = Ω(n^2 ) True or False?

False

(logn)^100 = Θ(log n)

False. Taking the 100th power changes the growth rate: (log n)^100 / log n = (log n)^99 → ∞ as n grows, so (log n)^100 grows strictly faster than log n and the two are not within constant factors of each other. (It is true that (log n)^100 = Ω(log n), but not Θ(log n).)

Show how to sort n integers in the range from 0 to n^3 − 1 in O(n) time. Here you can assume that each bit operation takes O(1) time, and so does comparing two integers. Explain your algorithm's running time.

First convert each integer to a base-n number. Each number then has at most 3 digits, and each digit takes a value from 0 to n − 1. Now apply radix sort: since there are 3 digits per number, it takes 3 passes. Use counting sort to sort by each digit, starting from the least significant digit up to the most significant. Each counting-sort pass takes Θ(n + n) time (n elements, n possible digit values), so the total running time is Θ(3 · (n + n)) = Θ(n).
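A Python sketch of the three counting-sort passes in base n (illustrative; assumes every value lies in [0, n³ − 1]):

```python
def sort_small_ints(A):
    """Sort n integers in [0, n^3 - 1] in O(n) time: 3-pass radix sort, base n."""
    n = len(A)
    if n <= 1:
        return A[:]
    out = A[:]
    for pass_no in range(3):                      # one pass per base-n digit
        digit = lambda x: (x // n**pass_no) % n
        count = [0] * n                           # counting sort on this digit
        for x in out:
            count[digit(x)] += 1
        for d in range(1, n):
            count[d] += count[d - 1]              # prefix sums = final positions
        placed = [0] * n
        for x in reversed(out):                   # reverse scan keeps it stable
            count[digit(x)] -= 1
            placed[count[digit(x)]] = x
        out = placed
    return out
```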

You are given n integers, a1, a2, ..., an. Give an algorithm to determine if all the integers are distinct or not. Assume that the simple uniform hashing assumption holds. Give an O(n log n) time algorithm without using hashing.

First, sort the integers in O(n log n) time (e.g., with merge sort). Then scan the sorted sequence and compare each adjacent pair. If any two adjacent integers are equal, return False. If the scan completes without finding an equal pair, return True. (In sorted order, duplicates must be adjacent, so the O(n) scan finds one if it exists.)
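The sort-and-scan routine in Python (names are illustrative):

```python
def all_distinct_by_sorting(nums):
    """O(n log n): one comparison sort plus one linear scan."""
    s = sorted(nums)                  # O(n log n), e.g. merge sort internally
    for i in range(len(s) - 1):       # O(n) scan of adjacent pairs
        if s[i] == s[i + 1]:
            return False              # a duplicate sits next to its equal copy
    return True
```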

Suppose we just ran DFS on a directed graph G. The algorithm DFS we learned in class computes (v.d, v.f) for every vertex v ∈ V. Using this, we would like to do the following. Briefly explain how. If your solution is not based on each vertex's discovery/finish time, you cannot get full points. How do we check if an edge (u, v) is a back edge?

The edge (u, v) is a back edge if and only if v.d ≤ u.d < u.f ≤ v.f, i.e., vertex v was discovered before u and finished after u. In that case u is a descendant of v (v's interval contains u's), so (u, v) points back to an ancestor.

Recall that the Dijkstra algorithm maintains the set of vertices S whose shortest distances have been determined, and grows it by adding one vertex to S in each iteration. For more details of the algorithm, you can see the algorithm in Ch 24.3 or the lecture slides. Answer if the following claim is true or not, and explain why. Claim: Let's assume that the directed input graph G = (V, E) has no negative weight edges, and further all vertices are reachable from the given source vertex s. Suppose vertices are added to S in this order: s = v1, v2, v3, ...vn. Then, it must be the case that δ(s, v1) ≤ δ(s, v2) ≤ ... ≤ δ(s, v_n). Here, δ(s, v) denotes the (shortest) distance from s to v.

The claim is true. Dijkstra's algorithm always extracts the vertex with the minimum tentative distance among those not yet in S, and since all edge weights are non-negative, relaxing edges out of newly added vertices can never produce a distance smaller than one already finalized. Hence when v_{i+1} is extracted, δ(s, v_{i+1}) ≥ δ(s, v_i), so distances are finalized in non-decreasing order: δ(s, v1) ≤ δ(s, v2) ≤ ... ≤ δ(s, vn).

A Variant of Rod cutting. Recall in the rod cutting problem, we're given a rod of length n along with an array {pi}1≤ i ≤ n, in which pi denotes the price you can charge for a rod/piece of length i. The goal is to cut the given rod of length n into smaller pieces (or do nothing) so that the total price of the pieces is maximized. But things have changed. Cutting has become so expensive and difficult. You have to hire the cut master to cut the rod. The master is lazy and doesn't want to cut many times, so he charges k^2 dollars if he makes k cuts. Now your goal is to figure out the best way to cut the rod so as to maximize your profit, namely your revenue minus the total expenses for cutting. Example: Suppose n = 10 and you cut the rod into pieces of lengths 2, 3, 5. Then your profit is p2 + p3 + p5 − 2^2. If you cut it into 10 pieces, all of length 1, then your profit is 10p1 − 9^2. Give a DP based algorithm. Your algorithm only needs to compute the maximum profit. Analyze the running time of your algorithm.

There are several ways. One possible solution is the following. Let r_{i,k} denote the max profit we can get out of a rod of length i by making exactly k cuts; here 0 ≤ i ≤ n and 0 ≤ k ≤ n − 1. Clearly, r_{i,0} = p_i for all i ≥ 0 (let p_0 = 0 for convenience), and r_{0,k} = 0 for all k. For other pairs of i and k, r_{i,k} = max_{1 ≤ j ≤ i−1} (p_j + r_{i−j, k−1} − (k^2 − (k − 1)^2)), where j is the length of the first piece and k^2 − (k − 1)^2 is the marginal cost of the k-th cut. We set up a DP table with entries corresponding to r_{i,k} and fill it using a double loop with k as the outer loop, applying the recursion above. The number of entries is Θ(n^2), and each entry takes O(n) time to compute, so the running time is O(n^3). Finally, return max_{0 ≤ k ≤ n−1} r_{n,k} as the maximum profit.
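The table-filling can be sketched in Python (a sketch; names are illustrative, p[0] = 0 by convention, and −∞ marks infeasible states):

```python
def max_profit(p):
    """p[i] = price of a piece of length i, p[0] = 0. Making k cuts costs k**2.
    O(n^3) DP: r[i][k] = best profit from a rod of length i with exactly k cuts."""
    n = len(p) - 1                    # rod length
    NEG = float('-inf')
    r = [[NEG] * n for _ in range(n + 1)]
    for i in range(n + 1):
        r[i][0] = p[i]                # zero cuts: sell the whole piece
    for k in range(1, n):
        for i in range(1, n + 1):
            for j in range(1, i):     # first piece has length j
                # marginal price of the k-th cut is k^2 - (k-1)^2
                cand = p[j] + r[i - j][k - 1] - (k * k - (k - 1) * (k - 1))
                r[i][k] = max(r[i][k], cand)
    return max(r[n][k] for k in range(n))
```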

We are given as input an undirected graph G = (V, E), a source vertex s ∈ V , and an integer k > 0. We want to output all vertices within k hops from s. Here, we say that v is within k hops from s if there is a path from s to v consisting of at most k edges. Describe your algorithm. The faster your algorithm is, the more points you will earn.

Run a BFS from s that stops expanding after k levels:
- let Q be a queue of pairs {vertex, number of edges from s}
- Q.enqueue({s, 0}); mark s as visited
- while Q is not empty:
  - {u, dist} = Q.dequeue()
  - output u (every dequeued vertex is within k hops of s)
  - if dist < k, then for each neighbor v of u: if v is not visited, mark v as visited and Q.enqueue({v, dist + 1})
This is a truncated BFS, so it runs in O(V + E) time.
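The traversal can be sketched as a truncated BFS in Python (adjacency dict and names are illustrative):

```python
from collections import deque

def within_k_hops(adj, s, k):
    """adj: dict vertex -> list of neighbors. Returns all vertices reachable
    from s using at most k edges. O(V + E) like ordinary BFS."""
    dist = {s: 0}
    q = deque([s])
    result = [s]
    while q:
        u = q.popleft()
        if dist[u] == k:          # do not expand past k hops
            continue
        for v in adj.get(u, []):
            if v not in dist:     # first visit = shortest hop count
                dist[v] = dist[u] + 1
                result.append(v)
                q.append(v)
    return result
```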

Recall that in the matrix chain multiplication, the input is a sequence of n matrices, A1, A2, ..., An where Ai is pi−1 × pi, and the goal is to fully parenthesize the product A1A2 · · · An such that the number of multiplications is minimized. To solve the problem, using a DP, we computed m[i, j] and s[i, j] - recall that m[i, j] was the minimum number of multiplications needed to compute the product AiAi+1 · · · Aj, and s[i, j] = k implies that there is an optimal solution for AiAi+1 · · · Aj that is constructed by Ai...k × Ak+1...j. Unfortunately, we have lost all m[i, j] values, but luckily we still have s[i, j] values. Our goal is to compute m[1, n] as fast as possible. Give a pseudocode of your algorithm. What is your algorithm's asymptotic running time? Your algorithm must run faster than computing m[1, n] from scratch using DP.

Initialize the m array with the base case: for i = 1 to n, m[i][i] = 0. Then, for each chain length L = 2 to n, and for each starting index i = 1 to n − L + 1, set j = i + L − 1 (the ending index of the chain). Since the optimal split is already stored in s, use it directly: k = s[i][j], and compute m[i][j] = m[i][k] + m[k + 1][j] + p[i − 1] * p[k] * p[j]. Finally, return m[1][n]. The time complexity is O(n^2): each of the Θ(n^2) entries is computed in O(1) time, because the optimal k is looked up rather than searched for over all splits. This is better than building the m table from scratch, which costs O(n^3).
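A runnable Python version of this reconstruction (1-indexed tables padded with a dummy row/column; the s table in the usage below is a hypothetical example):

```python
def recompute_m(s, p):
    """Rebuild m[1][n] from the saved split table s and dimensions p,
    where A_i is p[i-1] x p[i]. O(n^2): O(1) work per (i, j) pair."""
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]   # m[i][i] = 0 base case
    for L in range(2, n + 1):                   # chain length
        for i in range(1, n - L + 2):           # chain start index
            j = i + L - 1                       # chain end index
            k = s[i][j]                         # optimal split, already known
            m[i][j] = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
    return m[1][n]
```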

Interval Selection Problem (ISP). In the ISP, we are given n intervals (s1, f1),(s2, f2), ...,(sn, fn), and are asked to find a largest subset of intervals that are mutually disjoint. For simplicity, let's assume that all s, f values are distinct. We refer to si and fi as interval i's start and finish times, respectively. Also assume that intervals are ordered in increasing order of their finish times. Prove the key lemma that there is an optimal solution that includes the first interval which ends the earliest.

Suppose there is an optimal solution that does not include the first interval (s1, f1). Let (si, fi) be the interval with the earliest start time in this solution. There are two cases. Case 1: f1 < si. All intervals in the solution start at or after si, so adding (s1, f1) keeps the solution valid: no interval in it intersects (s1, f1). But this increases the size of a supposedly optimal solution by one, a contradiction, so this case cannot occur. Case 2: si ≤ f1. Note that f1 ≤ fi because the intervals are sorted by finish times. Remove (si, fi) from the solution and add (s1, f1); the new solution is still valid. To see why, suppose some other interval (sj, fj) in the solution intersected (s1, f1), which must happen if the new solution were invalid. Then sj ≤ f1 ≤ fj, and since si ≤ f1 ≤ fi, the point f1 lies in (si, fi) ∩ (sj, fj), so (si, fi) intersects (sj, fj), contradicting the validity of the original solution. The new solution has the same size and includes the first interval, proving the lemma.

Name two sorting algorithms whose (worst-case) running time is O(n log n)

Merge sort and Heapsort

You're given a directed graph G = (V, E) with a source vertex s ∈ V; assume that edges have non-negative weights. Dr. Sponge tells you that the shortest distance from s to each vertex v is p(v). However, you're unsure. How can you test if Dr. Sponge is telling the truth in O(E + V) time? No proof is needed. Just explain how.

Running a shortest-path algorithm from scratch is too slow (Bellman-Ford takes O(VE) and Dijkstra O(E + V log V)), so instead verify the claimed values p(v) directly. First, check that p(s) = 0. Next, scan every edge (u, v) and check that p(v) ≤ p(u) + w(u, v); if any edge violates this "triangle inequality," reject. These checks guarantee p(v) ≤ δ(s, v) for every v. Finally, check that p is not an underestimate: form the subgraph of tight edges, those with p(v) = p(u) + w(u, v), and run BFS (or DFS) from s in it. Every vertex with a finite claimed distance must be reachable through tight edges, since a tight path from s to v is a real path of total weight p(v), giving p(v) ≥ δ(s, v). Each step scans every edge and vertex a constant number of times, so the total time is O(E + V). Dr. Sponge is telling the truth if and only if all checks pass.
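One such linear-time check can be sketched in Python (all names are illustrative; `math.inf` marks vertices claimed unreachable):

```python
import math
from collections import deque

def verify_claim(vertices, edges, s, p):
    """edges: list of (u, v, w); p: dict of claimed distances from s.
    Returns True iff p matches the true shortest distances. O(V + E)."""
    if p.get(s) != 0:
        return False
    for (u, v, w) in edges:            # no edge may be "tense"
        if p[u] + w < p[v]:
            return False               # p[v] is too large to be a distance
    tight = {u: [] for u in vertices}  # edges with p[v] = p[u] + w
    for (u, v, w) in edges:
        if p[u] + w == p[v]:
            tight[u].append(v)
    seen = {s}                         # BFS over tight edges only
    q = deque([s])
    while q:
        u = q.popleft()
        for v in tight[u]:
            if v not in seen:
                seen.add(v)
                q.append(v)
    # every vertex claimed reachable must be reached via tight edges
    return all(v in seen for v in vertices if p[v] < math.inf)
```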

(Learn This) We learned the theorem that any comparison based sorting algorithm has a running time of Ω(n log n). Your job is to complete the following proof of the theorem. Consider any comparison-based algorithm and its decision tree T on n elements. The tree's height h is defined as the maximum number of edges on any path from the root to a leaf. It is known that the number of leaves in the tree of height h is at most 2^h . (d) Therefore, we have n! ≤ 2^h. Show that h = Ω(n log n)

We show that (n/2)^(n/2) ≤ n! ≤ 2^h. The lower bound holds because n! contains the n/2 factors n, n − 1, ..., n/2 + 1, each at least n/2. Taking the log of both sides gives h ≥ log2 n! ≥ (n/2) log2(n/2) = Ω(n log n).

What is the main advantage of hash tables over direct-address tables? Please make your answer concise.

Space-efficient; this answer is enough. More precisely, if the universe of possible keys is super large compared to the set of actual keys, then lots of space is wasted.

Suppose we ran the Bellman-Ford algorithm on a directed graph G = (V, E) with no negative weight cycle. As a result, we obtained v.d = δ(s, v) where s is the given source vertex. Now, you're asked to find a shortest path from s to a given vertex v in time O(E+V ). Describe your algorithm. No need to explain why it works. As usual, you can describe your algorithm in plain English or using pseudocode.

Start from vertex v. Set the current vertex c = v and initialize path = (v). While c ≠ s:
- iterate over vertices u such that (u, c) ∈ E
- if c.d = u.d + w(u, c), where w(u, c) is the weight of the edge from u to c, then set c = u, prepend u to path, and continue the while loop
Output path. This works because at every step it finds a vertex u such that the shortest distance from s to u plus the weight of the edge (u, c) equals the shortest distance from s to c, so u precedes c on some shortest path. If incoming edges are indexed in advance, each edge is examined a constant number of times, so the running time is O(E + V).
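A Python sketch of this backward walk (names are illustrative; it assumes non-negative weights and no zero-weight cycles on shortest paths, so the walk never revisits a vertex):

```python
def shortest_path(edges, s, v, d):
    """edges: list of (u, x, w); d: dict of shortest distances (the v.d values).
    Walk back from v along 'tight' edges: d[x] == d[u] + w."""
    incoming = {}                     # index incoming edges once: O(E + V)
    for (u, x, w) in edges:
        incoming.setdefault(x, []).append((u, w))
    path = [v]
    c = v
    while c != s:
        for (u, w) in incoming.get(c, []):
            if d[u] + w == d[c]:      # u precedes c on some shortest path
                path.append(u)
                c = u
                break
    path.reverse()
    return path
```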

Recurrence relation for Strassen's algorithm

T(n) = 7T(n/2) + Θ(n^2)

Recurrence relation for Square Matrix Multiply Recursive

T(n) = 8T(n/2) + Θ(n^2)

Briefly explain the Random Access Model (RAM)

The Random Access Model (RAM) is a simplified model used to analyze algorithm efficiency. It assumes that accessing any piece of data in memory takes the same amount of time, regardless of its location. This model allows for basic operations like arithmetic and memory access to be done in constant time. It helps us understand how algorithms perform by ignoring specific hardware details and focusing on the algorithm's logic and operations. Essentially, the RAM model provides a way to measure and compare the efficiency of different algorithms based on their fundamental operations.

Why is the hash function h(k) = k mod 2^L for some integer L not desirable? Please make your answer concise.

The hash value computation only uses the L least significant bits.

Solve the following recurrence using the recursion tree method: T(n) = 2T(n/2) + n^2 . To get full points, your solution must clearly state the following quantities: the tree depth (be careful with the log base), each subproblem size at depth d, the number of nodes at depth d, workload per node at depth d, (total) workload at depth d. And of course, do not forget to state what is T(n) after all

The tree visualization is omitted. For simplicity, let T(1) = 1.
- Tree depth: D = log2 n.
- Subproblem size at depth d: n/2^d.
- Number of nodes at depth d: 2^d.
- Workload per node at depth d: (n/2^d)^2.
- Total workload at depth d: 2^d · (n/2^d)^2 = n^2/2^d.
So T(n) = Σ_{d=0}^{D} n^2/2^d = Θ(n^2), since the geometric sum is dominated by the top level (d = 0).

Suppose we want to compress a text consisting of 6 characters, a, b, c, d, e, f using the Huffman Algorithm. Give an example for which the algorithm produces at least one codeword of length 5. In other words, you are being asked to give a set of the character frequencies that results in the deepest tree

There are many examples. One simple one is 1, 1, 2, 4, 8, 16. In fact, letting f1, f2, ..., f6 be the frequencies sorted in increasing order, any set is a correct solution as long as f3 ≥ f1 + f2, f4 ≥ f1 + f2 + f3, f5 ≥ f1 + ... + f4, and f6 ≥ f1 + ... + f5: then each merge combines the previously merged subtree with a single new leaf, producing a path-like tree in which the two rarest characters sit at depth 5.
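The example can be checked by building the Huffman tree with Python's `heapq` and tracking leaf depths (a sketch; the depth-list representation is an illustration, not the course's construction):

```python
import heapq

def huffman_depths(freqs):
    """Return the sorted codeword lengths of a Huffman code for freqs.
    Heap items are (frequency, tiebreak, list of leaf indices in subtree)."""
    depth = [0] * len(freqs)
    heap = [(f, i, [i]) for i, f in enumerate(freqs)]
    heapq.heapify(heap)
    tie = len(freqs)                  # unique tiebreak so lists never compare
    while len(heap) > 1:
        f1, _, l1 = heapq.heappop(heap)
        f2, _, l2 = heapq.heappop(heap)
        for leaf in l1 + l2:          # merging pushes these leaves one level down
            depth[leaf] += 1
        heapq.heappush(heap, (f1 + f2, tie, l1 + l2))
        tie += 1
    return sorted(depth)
```

With frequencies 1, 1, 2, 4, 8, 16, the resulting code has lengths 1, 2, 3, 4, 5, 5, so the deepest codewords have length 5.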

4^n = Ω(n^4 ) True or False?

True

A = <1, 2, 3, ..., n> is a min-heap

True

If A(n) = 2A(n/2) + 1000n with A(1) = 1 and B(n) = 2B(n/2) + n with B(1) = 1, then A(n) = Θ(B(n)). True or False

True

If G is undirected, then G and G^T have the same adjacency matrix representation

True

If T(n) = T(9n/10) + n, then we have T(n) = O(n).

True

If T(n)=2T(n/2) + n, then we have T(n) = O(n log n)

True

If a graph G = (V, E) is connected and has | V | edges, then it has a unique cycle. True or False?

True

If an undirected graph G = (V,E) is connected and its edges have distinct weights, G has a unique minimum spanning tree.

True

If f = O(g) and g = O(h), then it must be the case that f = O(h) True or False?

True

In the (s-t) max flow problem, the maximum flow value is equal to the minimum cut value. True or False?

True

Insertion sort is a stable sorting algorithm (here, we are considering the pseudocode of Insertion sort in the textbook).

True

It takes at least Ω(n log n) time for Mergesort (in the textbook) to sort any input of n elements. True or False?

True

One can build a max-heap in O(n) time, True or False?

True

Quick sort is an in-place sorting algorithm (here, we are considering the pseudocode of Quick sort in the textbook)

True

Suppose we are given a sequence of n elements of the same key value. If we run a stable sorting algorithm on this input, then the input sequence order remains the same. True or False?

True

The Bellman-Ford algorithm relaxes all edges in each iteration, and performs |V| − 1 iterations. Suppose that if v.d is not updated for any vertex in one iteration, then we stop the algorithm. Then, it must be the case that v.d = δ(s, v) for all v ∈ V; recall δ(s, v) denotes the shortest path distance from s to v. Assume that there is no negative-weight cycle reachable from s. True or False?

True

The average running time of the Randomized Quick-Sort is O(n log n) if the pivot is chosen uniformly at random, True or False?

True

The average running time of the randomized quicksort algorithm (that chooses the pivot randomly) is O(n log n) for all inputs.

True

The decision tree of any comparison based sorting algorithm has a height Ω(n log n).

True

The running time of Insertion Sort on the input <n, n − 1, n − 2, ..., 1> is Ω(n^2)

True

There is a deterministic O(n) time algorithm for Selection problem; recall that in the Selection problem, we are asked to find the kth smallest element out of n elements. True or False?

True

There is a linear time algorithm that finds a median in O(n) time.

True

If lim_{n→∞} f(n)/g(n) = 0, then f(n) = O(g(n)). True or False?

True

log2 n! = Ω(n log n) True or False?

True

Σ_{i=1}^{n} Θ(i) = Θ(n^2). True or False?

True

n^2 + 100n = O(2^n ) True or False?

True

n^2 = Ω(n log n)

True

We learned that assuming that all edges have distinct weights and the graph is connected, there is a unique MST and an edge is in the MST if and only if the edge is safe. Answer if the following claim is true or not, and explain why. Claim: Assume that all edges have distinct weights and the graph is connected. Suppose an edge e has the maximum weight among all edges on some cycle C of the graph. Then, the edge is not safe.

True, the claim holds. Suppose e is the maximum-weight edge on some cycle C. For e to be safe, there must be a cut that e crosses for which e is the light (minimum-weight) crossing edge. But any cut that e crosses is also crossed by at least one other edge of C, since a cycle crosses every cut an even number of times, and that other edge has smaller weight than e (e is the maximum on C and weights are distinct). So e is never the light edge of any cut it crosses, and therefore e is not safe. Equivalently, e cannot be in the MST: if a spanning tree contained e, removing e would split it into two components defining a cut, some other edge of C would cross that cut, and swapping it in for e would yield a cheaper spanning tree.

In the weighted version of Interval Selection problem, we are given n intervals, I1 = (s1, f1), I2 = (s2, f2), ..., In = (sn, fn) where they are ordered in increasing order of their finish times, i.e., f1 ≤ f2 ≤ ... ≤ fn. Further each interval Ii is associated with a certain weight/profit wi ≥ 0. Our goal is to choose a subset of intervals with the maximum total weight. If we let M(i) denote the maximum weight of any subset of mutually disjoint intervals from I0, I1, I2, ..., Ii (here, I0 is a 'dummy' interval of zero weight that is disjoint from all other intervals), we have the following recurrence: M(i) = ( 0 if i = 0, max{M(j) + wi , M(i − 1)} otherwise, where Ij is the interval with the largest finish time that ends before Ii starts. Give a pseudocode of a top-down DP algorithm based on memoization using this recursion.

WeightedIntervalSelection(i):
// M is an array that stores the memoized results
// P is an array such that P[i] contains the largest index j such that f[j] < s[i] (0 if none)
if i == 0 then return 0
if M[i] has been computed then return M[i]
value1 = WeightedIntervalSelection(i − 1)        // skip interval i
value2 = w[i] + WeightedIntervalSelection(P[i])  // take interval i
M[i] = max(value1, value2)
return M[i]
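The pseudocode translates to Python directly (a sketch; indices are 1-based with a dummy slot 0, and the P array is assumed precomputed):

```python
def weighted_interval_selection(w, P):
    """w[i]: weight of interval i (w[0] unused); P[i]: largest j with
    f[j] < s[i], or 0 if none. Top-down DP with memoization, O(n) calls."""
    n = len(w) - 1
    M = [None] * (n + 1)
    M[0] = 0                              # dummy interval: zero weight

    def solve(i):
        if M[i] is None:                  # compute each subproblem once
            M[i] = max(solve(i - 1),      # skip interval i
                       w[i] + solve(P[i]))  # take interval i
        return M[i]

    return solve(n)
```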

We learned the theorem that any comparison based sorting algorithm has a running time of Ω(n log n). Your job is to complete the following proof of the theorem. Consider any comparison-based algorithm and its decision tree T on n elements. The tree's height h is defined as the maximum number of edges on any path from the root to a leaf. It is known that the number of leaves in the tree of height h is at most 2^h . (d) Therefore, we have n! ≤ 2^h. Explain why h = Ω(n log n) implies the sorting algorithm's running time is Ω(n log n).

The height h of the decision tree is the worst-case number of comparisons the algorithm makes: some input forces execution down a root-to-leaf path of length h. Since h = Ω(n log n) and each comparison takes at least constant time, the sorting algorithm's worst-case running time is lower-bounded by Ω(n log n).

In the rod cutting problem, we are given as input p1, p2, ..., pn, where pi denotes the price of a rod/piece of length i. We are interested in cutting a given rod of length n into pieces of integer lengths in order to maximize the revenue; here we are only interested in finding the maximum revenue. Cutting is free. Let ri denote the maximum revenue one can get out of a rod of length i. To solve the problem using DP (dynamic programming), we can use the following recursion: r_j = max_{1 ≤ i ≤ j} (p_i + r_{j−i}) if j ≥ 1, and r_0 = 0. But Dr. Sponge thinks that perhaps he has a more efficient recursion and proposes the following: r_j = max{p_j, max_{1 ≤ i ≤ ⌊j/2⌋} (r_i + r_{j−i})} if j ≥ 2, and r_1 = p_1. Does this proposed recursion correctly compute r_j? Answer Yes or No, and explain why.

Yes, the proposed recursion correctly computes r_j. At each step, we either make no cut at all (revenue p_j) or make one cut splitting the rod into parts of lengths i and j − i; each part is then itself cut optimally, giving r_i + r_{j−i}. Restricting to 1 ≤ i ≤ ⌊j/2⌋ loses nothing, since r_i + r_{j−i} is symmetric in the two parts. The base cases check out: r_1 = p_1, and for j = 2 the recursion gives r_2 = max{p_2, r_1 + r_1} = max{p_2, p_1 + p_1}, covering both options (cut in half or leave whole). Since every optimal solution either makes no cut or has a first cut into two parts that are themselves solved optimally, the recursion considers all possibilities and is correct.

Dr. Sponge claims that in the tree representation of an optimal prefix-free code, every leaf node must have a sibling. Is the claim true or not? Answer Yes or No, and explain why.

Yes. The claim says every leaf must have a sibling, which means every internal node has at least two children. In an optimal prefix-free code, the tree must be full: every internal node has exactly two children. If some internal node had only one child, that node could simply be replaced by its unique child, shortening every codeword in its subtree by one bit and reducing the cost of the encoding, contradicting optimality.

