Final CS 325
If the solution obtained by an approximation algorithm is 10 and the optimal solution is 5, what is the value of the approximation ratio?
2
Select correct inequality: logn vs sqrt(n)
<
If problem A is in NP then it is NP-complete.
False
P=NP True false unknown?
unknown
Select correct inequality: nlogn vs 2^n
<
Select correct inequality: n! vs 2^n
>
Which of the following correctly defines what a 'recurrence relation' is?
An equation (or inequality) that relates the nth element of a sequence to its predecessors (recursive case). This includes an initial condition (base case).
What is the correct loop invariant for the code below?

for i in range(len(A)):   # in pseudo-code: for i = 0, ..., len(A)-1
    answer += A[i]
return answer
At the start of iteration i of the loop, the variable answer contains the sum of the elements A[0], ..., A[i-1] (the Python slice A[0:i]).
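A runnable Python version of this loop, with the invariant checked by an assertion at the top of every iteration (the function name array_sum is ours, not from the quiz):

```python
def array_sum(A):
    answer = 0
    for i in range(len(A)):
        # Invariant: at the start of iteration i, answer holds the sum of A[0:i].
        assert answer == sum(A[:i])
        answer += A[i]
    return answer
```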
Given two vertices s and t in a connected graph G, which of the two traversals, BFS and DFS can be used to find if there is a path from s to t?
Both BFS and DFS
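A minimal sketch of both traversals used as s-to-t reachability tests, assuming the graph is given as an adjacency-list dict (the function names are ours):

```python
from collections import deque

def reachable_bfs(G, s, t):
    """G is an adjacency list, e.g. {'a': ['b'], 'b': []}."""
    seen, queue = {s}, deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return True
        for v in G.get(u, []):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return False

def reachable_dfs(G, s, t, seen=None):
    # Same question answered by depth-first traversal.
    seen = set() if seen is None else seen
    if s == t:
        return True
    seen.add(s)
    return any(v not in seen and reachable_dfs(G, v, t, seen)
               for v in G.get(s, []))
```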
Pick the statements which are True.
Dynamic programming technique would always return an optimal solution
Big omega notation
For large values of n, the running time f(n) is at least b·g(n) (for some positive constant b).
Big o notation
For large values of n, the running time f(n) is at most b·g(n) (for some positive constant b).
Which of the following is true for Prim's algorithm?
It never accepts cycles in the MST; it is a greedy algorithm; it can be implemented using a heap.
If you are given different versions of the same algorithm with the following complexity classes, which one would you select?
Logarithmic (the slowest-growing of the listed classes: logarithmic, linear, polynomial, quadratic).
What is the solution of T(n) = 2T(n/2) + n/logn using master method?
Master method does not apply
Which of the following data structures can be used to implement the Dijkstra algorithm most efficiently?
Min priority queue
Given an array A of size n, we want to access the ith element in the array, 0 <= i < n. What will be the time complexity of this operation?
O(1)
Given a sorted array A of size n, we want to find if an element k belongs to this array. What will be the best time complexity to perform this search operation? Note: best time complexity and not the best time
O(logn)
Given an array A of size n, we want to find if an element k belongs to this array. What will be the time complexity of this search operation? Assume that we don't know anything about the order of the elements in the array.
O(n)
In the Longest Common Subsequence problem, assume we are comparing two strings of lengths m and n. In the bottom-up approach we build a 2-dimensional array called Cache[m][n]. The final solution is obtained by accessing which element of the cache?
The element in the bottom right corner of the cache[m][n]
What is the solution of T(n) = 2T(n/2) + n^2 using the Master theorem?
Theta(n^2)
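The Master theorem reasoning behind this answer can be written out (Case 3 of the standard statement):

```latex
T(n) = 2\,T(n/2) + n^2 \;\Rightarrow\; a = 2,\ b = 2,\ f(n) = n^2.
\[
n^{\log_b a} = n^{\log_2 2} = n, \qquad
f(n) = n^2 = \Omega\!\left(n^{1+\varepsilon}\right) \text{ for } \varepsilon = 1.
\]
Regularity condition: $a\,f(n/b) = 2\,(n/2)^2 = \tfrac12 n^2 \le c\,n^2$ with $c = \tfrac12 < 1$,
so Case 3 applies and $T(n) = \Theta(f(n)) = \Theta(n^2)$.
```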
Backtracking is used to solve which of the problems:
To find all possible solutions
A graph can be represented as a tuple containing two sets. For example: A= ({...},{...})
True
Select correct inequality: 5^n vs n^5
>
Select correct inequality: n^2 vs nlogn
>
When we say algorithm A is asymptotically more efficient than B, what does that imply?
A will be the better choice for all sufficiently large inputs.
Let W(n) and A(n) denote, respectively, the worst-case and average-case running time of an algorithm executed on an input of size n. Which of the following is ALWAYS TRUE?
A(n) = O(W(n))
To find the optimal solution for 0-1 knapsack, what would be the dimensions of the extra array that we would need? The knapsack has a capacity of W, and there are a total of n items. Assume we are using the approach that was discussed in the exploration.
Array[W+1][n+1]
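A bottom-up sketch that builds exactly this (n+1)-by-(W+1) table; row 0 and column 0 are the base cases (no items considered / zero capacity). The function name is ours:

```python
def knapsack_01(weights, values, W):
    n = len(weights)
    # (n+1) x (W+1) table: K[i][x] = best value using the first i items, capacity x.
    K = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for x in range(W + 1):
            K[i][x] = K[i - 1][x]            # option 1: skip item i
            if weights[i - 1] <= x:          # option 2: take item i (if it fits)
                K[i][x] = max(K[i][x],
                              values[i - 1] + K[i - 1][x - weights[i - 1]])
    return K[n][W]
```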
Compare function growth: f(n) = 10 ; g(n) = log 10
Both are constants
Which of the following is/are property/properties of a dynamic programming problem?
Both optimal substructure and overlapping subproblems
In which of the following approaches do we start with the base case and proceed to solve the bigger subproblems?
Bottom-up Approach
Can the 0/1 knapsack problem be solved using the greedy algorithm technique to obtain an optimum solution to fill the knapsack? 0/1 knapsack problem (the problem we saw in the previous modules): We have n items and their values given. We are provided with a knapsack of capacity X. We have only one copy of each item. We need to maximize the value of our knapsack with the items that we pick.
False. Greedy might not give us an optimal solution.
Big Theta Notation
For large values of n, the running time f(n) is at least a·g(n) and at most b·g(n).
Prim's and Kruskal's algorithms to find the MST follow which of the algorithm paradigms?
Greedy approach
What would be the time complexity of the following algorithm?

reverse(a):
    for i = 1 to len(a)-1
        x = a[i]
        for j = i downto 1
            a[j] = a[j-1]
        a[0] = x
O(n^2)
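A direct Python translation of the pseudocode above; each element is inserted at the front, and the shifting inner loop is what makes it quadratic:

```python
def reverse(a):
    # Insert a[i] at the front by shifting a[0..i-1] one slot to the right.
    for i in range(1, len(a)):
        x = a[i]
        for j in range(i, 0, -1):   # the O(n) shift, executed O(n) times
            a[j] = a[j - 1]
        a[0] = x
    return a
```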
What are the major required aspects in a problem in order to apply Dynamic Programming Technique?
Optimal Substructure and Overlapping subproblems
We use a reduction to prove the NP-completeness of a problem X from A. As part of the reduction, which of the following statements must we prove? Assume A is an NP-hard problem. Statement P: A can be transformed to X in polynomial time. Statement Q: We can obtain a solution to A from X in polynomial time.
P and Q
Given the following algorithm, determine the asymptotic running time. Assume that addition can be done in constant time.

foo(n):
    if n <= 1
        return 1
    else
        x = foo(n-1)
        for i = 1 to n
            x = x + i
        return x
Theta(n^2)
What would be the time complexity of the following algorithm?

int sum = 0;
for (i = 0; i < n; i++) {
    for (j = 0; j < n; j++) {
        for (k = 0; k < n; k++) {
            if (i == j && j == k) {
                for (l = 0; l < n*n*n; l++) {
                    sum = i + j + k + l;
                }
            }
        }
    }
}
Theta(n^4)
Solve the following recurrence by giving the tightest bound possible. T(n) = 4T(n/4) + 4n
Theta(nlogn)
A graph can have many spanning trees.
True
All possible greedy algorithms, at each step, choose what they know is going to lead to an optimal/near optimal solution. (True or False?)
True
Dijkstra's algorithm is used to find a shortest path in a weighted graph with non-negative weights.
True
If problem A can be solved in polynomial time then A is in NP.
True
Is the following a property that holds for all non-decreasing positive functions f and g? (True=Yes/ False=No) If f(n) = O(n^2) for c=1 and n0=0 and g(n) = Theta(n^2) for n0=0 and c1 =1 and c2=1 then f(n) = O(g(n)).
True
Removing the maximum weighted edge from a Hamiltonian cycle will result in a Spanning Tree
True
We can use Divide and Conquer technique to solve a problem in which of the following scenarios?
We can break the problem into several subproblems that are similar to the original problem but smaller in size.
What makes the solution for the 'Activity Selection Problem' that we implemented in the exploration, a greedy approach?
We make the best available choice in each iteration, and we never look back.
The difference between Divide and Conquer Approach and Dynamic Programming is
Whether the sub-problems overlap or not
What is the basic operation (the one executed the maximum number of times) in the following code?

reverse(a):
    for i = 1 to len(a)-1
        x = a[i]
        for j = i downto 1
            a[j] = a[j-1]
        a[0] = x
a[j] = a[j-1]
Which of the following techniques can be called as intelligent exhaustive search?
backtracking
Compare function growth: f(n)= 0.01n^3 ; g(n) = 50n+10
Comparing derivatives: f'(n) = 0.03n^2 and g'(n) = 50, so f grows faster. Hence f(n) = Omega(g(n)), equivalently g(n) = O(f(n)).
Compare function growth: f(n) = logn^2 ; g(n) = logn + 10
f(n) = log(n^2) = 2 log n and g(n) = log n + 10, so f(n)/g(n) -> 2 as n grows. Hence f(n) = Theta(g(n)).
Compare function growth: f(n) = log n^3 ; g(n) =log^3 n
f(n) = log(n^3) = 3 log n, while g(n) = log^3 n = (log n)^3, so f(n) = O(g(n)).
Which of the following equations correctly represents the factorial function? The factorial of a number n is given by: n! = n*(n-1)*(n-2)*...*3*2*1
f(n) = n*f(n-1)
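The recurrence translates directly into code, with the base case made explicit:

```python
def factorial(n):
    # Base case: 0! = 1! = 1. Recursive case: f(n) = n * f(n-1).
    if n <= 1:
        return 1
    return n * factorial(n - 1)
```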
In the 0-1 knapsack recurrence formula f(x, i) = max{ vi + f[x-wi, i-1], f[x, i-1] }, what does the first part, vi + f[x-wi, i-1], represent? What does the second part, f[x, i-1], represent?
First part vi + f[x-wi, i-1]: adding the ith item to the knapsack. Second part f[x, i-1]: not adding the ith item to the knapsack.
Which of the following can be used to compare two algorithms?
growth rates of the two algorithms
Compare function growth: f(n) = 2^n ; g(n) = 10n^2
Comparing derivatives: f'(n) = (ln 2)·2^n while g'(n) = 20n; the exponential grows faster, so f(n) = Omega(g(n)).
Pick the statements which are True.
- Dynamic programming technique would always return an optimal solution
- A greedy algorithm is hard to design sometimes as it is difficult to find the best greedy approach
- Greedy algorithms are efficient compared to dynamic programming algorithms
For an undirected graph G, what will be the sum of the degrees of all vertices? (The degree of a vertex is the number of edges connected to it.) V: number of vertices, E: number of edges.
2|E|
In the exploration to show that the independent set problem is NP-Complete we have used which of the following NP-Hard problems?
3SAT
Consider the following algorithm:

1 Bubble-sort(a)
2   for i = a.length() to 1
3     for j = 1 to i-1
4       if a[j] > a[j+1]
5         swap(a[j], a[j+1]);
6       end if

What is its basic operation (write the line number of the code which would define the execution time of the code)?
4
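A runnable Python version of the pseudocode above; the comparison (line 4 in the pseudocode) is the innermost statement that runs on every pass, which is why it is the basic operation:

```python
def bubble_sort(a):
    for i in range(len(a) - 1, 0, -1):
        for j in range(i):
            if a[j] > a[j + 1]:          # the basic operation: the comparison
                a[j], a[j + 1] = a[j + 1], a[j]
    return a
```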
A binary search algorithm searches for a target value within a sorted array. Binary search compares the target value to the middle element of the array; if they are unequal, the half in which the target cannot lie is eliminated and the search continues on the remaining half until the target value is found or until a search can no longer be performed. This problem can be solved using which of the techniques?
Divide and Conquer. Not dynamic programming because there are no overlapping subproblems.
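A minimal iterative sketch of the halving step described in the question (the function name is ours):

```python
def binary_search(A, k):
    """Return True iff k occurs in the sorted array A."""
    lo, hi = 0, len(A) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if A[mid] == k:
            return True
        if A[mid] < k:
            lo = mid + 1     # the left half cannot contain k
        else:
            hi = mid - 1     # the right half cannot contain k
    return False
```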
Given two integer arrays representing the weights and profits of N items, find a subset of these items that will give us maximum profit such that their cumulative weight is not more than a given number C. The best technique to solve this problem is?
Dynamic Programming
Consider the subset sum problem. Problem: Given an array of numbers find if there is a subset that adds to a given number. Return True if there exists such subset, else return False. The subset of numbers need not be continuous in the array. We don't know anything about the order of the elements in the array. Identify which of the following strategies can be used to solve this problem.
Dynamic programming: can be used
Backtracking: can be used
Brute force approach: can be used
Divide and conquer: cannot be used
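One of the usable strategies, dynamic programming, can be sketched with a 1-D reachability table (the function name subset_sum is ours; assumes non-negative integers and target):

```python
def subset_sum(nums, target):
    # reachable[s] is True iff some subset of the numbers seen so far sums to s.
    reachable = [False] * (target + 1)
    reachable[0] = True          # the empty subset sums to 0
    for x in nums:
        # Iterate backwards so each number is used at most once.
        for s in range(target, x - 1, -1):
            reachable[s] = reachable[s] or reachable[s - x]
    return reachable[target]
```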
The asymptotic runtime of the solution for the combination sum problem that was discussed in the exploration is
Exponential
Which of the following recurrence relations is the correct representation of the Towers of Hanoi problem that was discussed in the exploration?
F(n) = 2F(n-1) + 1
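A sketch that counts the moves recursively; the structure mirrors the recurrence, so the returned count satisfies F(n) = 2F(n-1) + 1, which solves to 2^n - 1:

```python
def hanoi(n, src="A", aux="B", dst="C"):
    """Number of single-disk moves to shift n disks from src to dst."""
    if n == 0:
        return 0
    moves = hanoi(n - 1, src, dst, aux)   # move n-1 disks out of the way
    moves += 1                            # move the largest disk to dst
    moves += hanoi(n - 1, aux, src, dst)  # move the n-1 disks back on top
    return moves
```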
What is the correct recurrence formula for the unbounded knapsack problem that was discussed in the exploration? Consider the weights of the items w[1..n] and the values of the items v[1..n].
F(x) = max over all i with wi <= x of { F[x-wi] + vi }
A spanning tree of a graph should contain all the edges of the graph.
False
For every decision problem there is a polynomial time algorithm that solves it.
False
If there is a polynomial time reduction from a problem A to Circuit SAT then A is NP-hard.
False
We are given an array of numbers and asked to find an optimal solution maximizing the sum of numbers (i.e., a contiguous subsequence that has maximum sum). If the order of the input numbers were altered, or if we used a different algorithm, we would always end up with the same combination of numbers as the answer.
False
When performing a topological sort, we always find a unique solution.
False
Rank the following functions by increasing order of growth: log( n! ), 10000n^2, log(n^3), 2^n, n^2 log(n)
log(n^3), log( n! ), 10000n^2, n^2 log(n), 2^n
In dynamic programming, the technique of storing the previously calculated values is called
memoization
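A minimal memoization example using Python's functools.lru_cache (Fibonacci is our illustrative choice, not from the quiz). Each subproblem is computed once and then read back from the cache, turning an exponential recursion into a linear one:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # fib(k) is evaluated at most once; repeated calls hit the cache.
    return n if n <= 1 else fib(n - 1) + fib(n - 2)
```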
Write the loop invariant for the following code:

item = -INF   (minus infinity)
for (i = 0 to n-1)
    if (A[i] > item)
        item = A[i]
The loop invariant: at the start of iteration i, 'item' is the maximum among the first i elements of array A (and -INF when i = 0).
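A runnable version of this max-finding loop, with the invariant asserted at the top of each iteration (the function name array_max is ours):

```python
def array_max(A):
    item = float("-inf")
    for i in range(len(A)):
        # Invariant: item is the maximum of A[0:i] (-inf when i == 0).
        assert item == max(A[:i], default=float("-inf"))
        if A[i] > item:
            item = A[i]
    return item
```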