CSCE 411 Final Exam


Order the running-times of algorithms from least (= 1) to greatest (= 5)

1. O(log* n)  2. O(log n)  3. O(n)  4. O(log(n!))  5. O(n^2 log n)

The approximation algorithm for VERTEX-COVER has approximation ratio:

2

What algorithm design principle was used in the Fast Fourier Transform (FFT) algorithm?

Divide-and-Conquer

Memoization is a form of dynamic programming

True

Strassen's matrix multiplication algorithm has a worst-case running-time of approximately Θ(n^b) for which exponent b?

b = log2(7)

What is the run-time of the fast Fourier transform of length n?

Θ(n ln n)

Any comparison-based sorting algorithm needs to make at least how many comparisons? Check a tight lower bound.

Ω(n log n)

Let f(n) = (2 + (−1)^n)·n^2 and g(n) = n^2. Then (1) does the limit lim_{n→∞} |f(n)|/|g(n)| exist? (2) Is f ∈ Θ(g)?

(1) No (2) Yes

Let f and g be functions from the set of natural numbers to the set of real numbers. True or false: (1) O(O(f(n))) = O(f(n)). (2) If c is a nonzero constant greater than 1, then O(c^f(n)) > O(f(n)).

(1) True (2) False

Let f and g be functions from the set of natural numbers to the set of real numbers. True or false: (1) O(O(f(n))) = O(f(n)). (2) If c is a nonzero constant, then O(c^f(n)) > O(f(n)).

(1) True (2) False

True or false: (1) The function n^2 is in ω(n). (2) The function n is in ω(n).

(1) True (2) False

Let f and g be functions from the set of natural numbers to the set of real numbers. We write f ∈ O(g) if and only if there exists a positive real constant C and a natural number n_0 such that (1) __________ holds for all n ≥ n_0, and f ∈ Ω(g) if and only if there exists a positive real constant c and a natural number n_0′ such that (2) __________ holds for all n ≥ n_0′.

(1) |f(n)| ≤ C|g(n)|  (2) c|g(n)| ≤ |f(n)|

Suppose that the input graph is undirected. Suppose that Kruskal's algorithm has already chosen the edges (1,2), (2,3), (2,4). In the next step, which edge cannot be picked by Kruskal's algorithm?

(1,3)
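
Kruskal's cycle test is usually implemented with a union-find structure. Below is a minimal sketch (the union-find code and the tiny 4-vertex example are my own illustration, not the lecture's) showing why (1,3) must be rejected once (1,2), (2,3), (2,4) have been chosen:

```python
# Hypothetical sketch: a minimal union-find showing why Kruskal's
# algorithm must reject edge (1,3) after choosing (1,2), (2,3), (2,4).
parent = {v: v for v in [1, 2, 3, 4]}

def find(v):
    while parent[v] != v:
        parent[v] = parent[parent[v]]  # path halving
        v = parent[v]
    return v

def union(u, v):
    ru, rv = find(u), find(v)
    if ru == rv:
        return False  # endpoints already connected: edge would form a cycle
    parent[ru] = rv
    return True

for edge in [(1, 2), (2, 3), (2, 4)]:
    union(*edge)

print(union(1, 3))  # False: 1 and 3 are already in the same tree
```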

Suppose that we are given a polynomial A(x) = ∑_{k=0}^{n−1} a_k x^k. The input to the FFT of length n is an array of length n containing the coefficients a_0, a_1, ..., a_{n−1}. Describe the output vector of the FFT with the help of the polynomial A(x). In the answers below, we denote by ω = exp(2πi/n) a primitive n-th root of unity, where i = √−1. Furthermore, IFFT denotes the inverse Fast Fourier transform.

(A(1), A(ω), ..., A(ω^(n−1)))
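
As a sanity check, here is a minimal recursive Cooley-Tukey sketch (my own illustration, using the ω = exp(2πi/n) sign convention from the question) confirming that the output equals (A(1), A(ω), ..., A(ω^(n−1))):

```python
import cmath

def fft(a):
    # Recursive Cooley-Tukey FFT; len(a) must be a power of two.
    # Returns (A(1), A(w), ..., A(w^(n-1))) for w = exp(2*pi*i/n).
    n = len(a)
    if n == 1:
        return a[:]
    even, odd = fft(a[0::2]), fft(a[1::2])
    w = cmath.exp(2j * cmath.pi / n)
    out = [0j] * n
    for k in range(n // 2):
        t = w**k * odd[k]
        out[k] = even[k] + t            # A(w^k)
        out[k + n // 2] = even[k] - t   # A(w^(k+n/2)), since w^(n/2) = -1
    return out

coeffs = [1, 2, 3, 4]  # A(x) = 1 + 2x + 3x^2 + 4x^3
direct = [sum(c * cmath.exp(2j * cmath.pi / 4) ** (j * k)
              for j, c in enumerate(coeffs)) for k in range(4)]
assert all(abs(x - y) < 1e-9 for x, y in zip(fft(coeffs), direct))
```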

The greedy algorithm does not always give the least number of coins. Select the denomination of coins where the greedy algorithm may give a suboptimal number of coins.

1, 6, 10
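
A quick illustration, assuming the usual formulations of both algorithms: for the denominations {1, 6, 10} and amount 12, greedy returns 10+1+1 (3 coins) while the optimum is 6+6 (2 coins).

```python
# Sketch (my own code): greedy vs. bottom-up DP for coin change.
def greedy_coins(amount, coins):
    count = 0
    for c in sorted(coins, reverse=True):  # always take the largest coin
        count += amount // c
        amount %= c
    return count

def dp_coins(amount, coins):
    # best[a] = minimum number of coins needed to make amount a
    best = [0] + [float("inf")] * amount
    for a in range(1, amount + 1):
        best[a] = min(best[a - c] + 1 for c in coins if c <= a)
    return best[amount]

print(greedy_coins(12, [1, 6, 10]))  # 3 (10 + 1 + 1)
print(dp_coins(12, [1, 6, 10]))      # 2 (6 + 6)
```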

Consider the matrices P, Q, and R which are 25 x 10, 10 x 20, and 20 x 30 matrices, respectively. What is the minimum number of scalar multiplications required to multiply the three matrices (assuming that the standard matrix multiplication algorithm is used to multiply pairs of matrices)?

13500
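
The value 13500 comes from the parenthesization P(QR): 10·20·30 + 25·10·30 = 6000 + 7500 = 13500, versus (PQ)R: 25·10·20 + 25·20·30 = 20000. A small sketch of the standard matrix-chain DP (my own implementation) reproduces this:

```python
# Matrix-chain DP sketch: dims = [25, 10, 20, 30] encodes the
# dimensions of P (25x10), Q (10x20), R (20x30).
def matrix_chain_cost(dims):
    n = len(dims) - 1  # number of matrices
    cost = [[0] * n for _ in range(n)]
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length - 1
            # try every split point k between i and j
            cost[i][j] = min(cost[i][k] + cost[k + 1][j]
                             + dims[i] * dims[k + 1] * dims[j + 1]
                             for k in range(i, j))
    return cost[0][n - 1]

print(matrix_chain_cost([25, 10, 20, 30]))  # 13500
```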

In a binary counter A with k bits, the increment operation flips the least significant bit A[0] every time, the second bit A[1] every other time, the third bit A[2] every fourth time, and so on. Suppose that the counter is initially 0. If a sequence of n increment operations is performed, what is the amortized cost per increment? [As k can be very large, you should use bit-complexity.]

2
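
The bound can also be checked empirically; the sketch below (a simulation, not the accounting-method proof from the lecture) counts the actual bit flips over n increments and confirms the total stays below 2n:

```python
# Simulate a k-bit binary counter and count every bit flip.
def total_flips(n, k=32):
    A = [0] * k
    flips = 0
    for _ in range(n):
        i = 0
        while i < k and A[i] == 1:  # carry: flip trailing 1s back to 0
            A[i] = 0
            flips += 1
            i += 1
        if i < k:
            A[i] = 1  # flip the first 0 to 1
            flips += 1
    return flips

for n in [10, 100, 1000]:
    print(n, total_flips(n), total_flips(n) / n)  # ratio stays below 2
```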

The metric TSP problem has approximation ratio

2

The matrix-chain multiplication for a chain of matrices A_1 ... A_n is an example of a dynamic programming problem. Consider the subchain A_i A_{i+1} ... A_j. Recall that the problem is to minimize the cost of the multiplication (in terms of the number of required scalar multiplications) by choosing where to parenthesize the chain. In this case, choosing where to split the chain and parenthesizing terms results in (a) how many subproblems and (b) how many choices for the split?

2 subproblems, j-i choices

Suppose that a sequence of operations has the following costs: op(1) = 1, op(2) = 2, op(3) = 3, op(4)=1, op(5)=5, op(6)=1, op(7)=1, op(8)=1, op(9)=9, ... so the cost of the k-th operation is op(k)=1 unless k-1 is a power of 2. If k-1 is a power of 2, then the cost of op(k)=k. Let T(n) be the total cost of the first n operations. What is the amortized cost of an operation when a sequence of 9 operations is performed? Choose the closest numerical value

2.666
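
The arithmetic: T(9) = 1+2+3+1+5+1+1+1+9 = 24, so the amortized cost is 24/9 ≈ 2.666. A quick check (my own code):

```python
# op(k) = k when k-1 is a power of 2, else 1.
def op(k):
    m = k - 1
    return k if m > 0 and (m & (m - 1)) == 0 else 1

costs = [op(k) for k in range(1, 10)]
print(costs)           # [1, 2, 3, 1, 5, 1, 1, 1, 9]
print(sum(costs))      # T(9) = 24
print(sum(costs) / 9)  # amortized cost per operation, ~2.666
```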

Suppose that a sequence of operations has the following costs: op(1) = 1, op(2) = 2, op(3) = 3, op(4)=1, op(5)=5, op(6)=1, op(7)=1, op(8)=1, op(9)=9, ... so the cost of the k-th operation is op(k)=1 unless k-1 is a power of 2. If k-1 is a power of 2, then the cost of op(k)=k. Let T(n) be the total cost of the first n operations. What is T(9)?

24

In a binary counter A with k bits, the increment operation flips the least significant bit A[0] every time, the second bit A[1] every other time, the third bit A[2] every fourth time, and so on. How many bits will be flipped in total during a sequence of n increments, starting with a zero counter? Select the bound given in the lecture video.

2n

In Karatsuba's method to multiply two n-bit integers, how many multiplications of integers are done at each level of recursion?

3
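
A minimal sketch of the recursion (my own, on Python integers): the three recursive products are ac, bd, and (a+b)(c+d), from which ad+bc is recovered by subtraction.

```python
# Karatsuba sketch: x = 2^m * a + b, y = 2^m * c + d, so
# x*y = ac*2^(2m) + (ad+bc)*2^m + bd with only three multiplications.
def karatsuba(x, y):
    if x < 10 or y < 10:
        return x * y
    m = max(x.bit_length(), y.bit_length()) // 2
    a, b = x >> m, x & ((1 << m) - 1)
    c, d = y >> m, y & ((1 << m) - 1)
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    mid = karatsuba(a + b, c + d) - ac - bd  # = ad + bc, one multiply saved
    return (ac << (2 * m)) + (mid << m) + bd

print(karatsuba(12345678, 87654321) == 12345678 * 87654321)  # True
```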

Suppose that a sequence of operations has the following costs: op(1) = 1, op(2) = 2, op(3) = 3, op(4)=1, op(5)=5, op(6)=1, op(7)=1, op(8)=1, op(9)=9, ... so the cost of the k-th operation is op(k)=1 unless k-1 is a power of 2. If k-1 is a power of 2, then the cost of op(k)=k. What is the smallest upper bound for the amortized cost of an operation when performing a sequence of n operations?

3

Let f and g be functions from the set of natural numbers to the set of real numbers such that g is eventually nonzero. Then f ∈ o(g) if and only if lim_{n→∞} f(n)/g(n) __________ holds.

=0

Let f and g be functions from the set of natural numbers to the set of real numbers. We write f ∼ g and say that f is asymptotically equal to g if and only if lim_{n→∞} f(n)/g(n) __________ holds.

=1

Let f and g be functions from the set of natural numbers to the set of real numbers, and assume that g is eventually nonzero. Then f ∈ ω(g) if and only if lim_{n→∞} |f(n)|/|g(n)| __________.

=∞

Hilbert's Hotel has an infinite number of rooms with room numbers 0, 1, 2, ... Unfortunately, all rooms are occupied. A new bus arrives and brings new guests. Which bus load cannot be accommodated by Hilbert's hotel?

A bus full of aliens wearing shirts with real numbers on them, with no real number missing.

We developed the minimizing greedy algorithm for graphic matroids. Given a connected undirected graph with weighted edges as an input, what does it compute?

A minimum spanning tree

Given a connected graph with n vertices, Kruskal's greedy algorithm returns

A minimum spanning tree with n-1 edges

Let A and B be decision problems in NP. We know that A is NP-complete. We want to show that B is NP-complete. What kind of reduction do we need to prove?

A ≤p B

The features of the Bellman-Ford algorithm include (check all that apply, assume that all possible features mentioned in the lecture have been implemented)

Allows the edges of the input graph to have negative weights; can detect cycles of negative total weight.
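
A minimal sketch of both features (the graph and variable names are my own): negative edge weights are handled by the relaxation rounds, and one extra pass over the edges detects a negative-weight cycle.

```python
# Bellman-Ford sketch: n-1 relaxation rounds, then one detection pass.
def bellman_ford(n, edges, source):
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0
    for _ in range(n - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # Any further improvement implies a cycle of negative total weight.
    has_negative_cycle = any(dist[u] + w < dist[v] for u, v, w in edges)
    return dist, has_negative_cycle

edges = [(0, 1, 4), (0, 2, 5), (1, 2, -3)]  # one negative edge, no cycle
print(bellman_ford(3, edges, 0))  # ([0, 4, 1], False)
```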

Which are properties of a dynamic programming problem? Choose the best answer.

Both optimal substructure and overlapping subproblems

How do you determine the cost of a spanning tree in a connected graph?

By the sum of costs of the edges of the tree

The fast Fourier transform discussed in the lecture videos follows:

Cooley and Tukey

Cantor showed that for any set A, we have |A| < |P(A)|. What does this statement mean? Select the correct interpretation.

Every set has a smaller cardinality than its power set.

Consider a brute force implementation in which we find all possible ways of multiplying a given sequence of n matrices A1 A2 ... An. What is the time complexity of this implementation?

Exponential

What problem does the Cooley-Tukey fast Fourier transform algorithm solve?

It allows one to quickly switch between the point-value and coefficient representations of polynomials

Karger's minimum cut algorithm uses contractions of what type?

It contracts edges

The dynamic programming algorithm for the coin change problem cannot find the optimal number of coins for the given denomination of coins:

It finds the optimal number of coins for any denomination of coins

What is the correct definition of an NP-complete problem?

It is a decision problem B in NP such that every decision problem A in NP satisfies A ≤p B

A randomized algorithm that is sometimes fast and always correct is a

Las Vegas type algorithm.

Which of the following problems should be solved using dynamic programming?

Longest common subsequence
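
A sketch of the standard LCS dynamic program (lengths only, my own implementation):

```python
# L[i][j] = length of an LCS of the prefixes x[:i] and y[:j].
def lcs_length(x, y):
    m, n = len(x), len(y)
    L = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                L[i][j] = L[i - 1][j - 1] + 1  # extend a common subsequence
            else:
                L[i][j] = max(L[i - 1][j], L[i][j - 1])
    return L[m][n]

print(lcs_length("ABCBDAB", "BDCABA"))  # 4, e.g. "BCBA"
```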

In dynamic programming, the technique of storing the previously calculated values is called

Memoization

Which property is not required by the accounting method?

Must charge a positive cost for each operation

Does Karger's minimum cut algorithm always find the size of a minimum cut?

No, it does not always return the correct size of a minimum cut.

Is the polynomial reduction relation ≤p a partial order? Choose the correct answer and justification.

No, since the relation is not antisymmetric

We discussed the complexity class NP. What does NP stand for?

Nondeterministic Polynomial-time

As discussed in the lecture, the best algorithm presented for union-find can have an amortized cost of (select the best bound)

O(log* n)

In the lecture, we discussed an efficient deterministic algorithm to find the median in an array of n numbers. What was the complexity of this algorithm and the method?

O(n) using a recursive algorithm
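
The deterministic method is presumably median-of-medians; here is a hedged sketch (0-based rank k, my own implementation, not the lecture's code):

```python
# select(a, k) returns the k-th smallest element (k is 0-based) in O(n)
# worst-case time via the median-of-medians pivot rule.
def select(a, k):
    if len(a) <= 5:
        return sorted(a)[k]
    # Median of each group of 5, then the median of those medians as pivot.
    medians = [sorted(a[i:i + 5])[len(a[i:i + 5]) // 2]
               for i in range(0, len(a), 5)]
    pivot = select(medians, len(medians) // 2)
    lo = [x for x in a if x < pivot]
    hi = [x for x in a if x > pivot]
    if k < len(lo):
        return select(lo, k)
    if k >= len(a) - len(hi):
        return select(hi, k - (len(a) - len(hi)))
    return pivot  # k falls among elements equal to the pivot

def median(a):
    return select(a, (len(a) - 1) // 2)

print(median([7, 1, 9, 3, 5]))  # 5
```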

Given a graph G with n vertices and m edges, the worst-case running time of the Bellman-Ford algorithm is (select the tightest bound):

O(nm)

What two properties are required to successfully use the dynamic programming technique?

Optimal substructure and overlapping subproblems

What property does a greedy algorithm in general NOT have?

Overlapping subproblems

The decision problem PRIMES asks whether a given positive integer is prime. The decision problem HAMILTONIAN CIRCUIT asks whether a given graph has a hamiltonian circuit. Are these problems in P or NP-complete? Give the most accurate answer.

PRIMES is in P and HAMILTONIAN CIRCUIT is NP-complete

Suppose that problem A is NP-complete. We are given a decision problem B. How do you prove that B is NP-complete?

Show that B is in NP and A ≤p B.

Let T(n) denote the worst-case running time of Karatsuba's recursive algorithm to multiply n-bit integers. Which answer describes T(n) best?

T(n) = 3T(n/2) + O(n)

A divide-and-conquer algorithm takes an array of length n as an input. It splits the array into three parts of length n/2, and recursively calls the algorithm on these three smaller arrays. At this level of recursion, it has to perform O(n) operations to create the subparts and combine the results. What is the recurrence for the worst-case running-time T(n)?

T(n) = 3T(n/2)+O(n)

An algorithm has a worst-case run-time T(n) satisfying the recurrence T(n) = 3T(n/2)+O(n). What is the best expression for T(n)?

T(n) = Θ(n^(log 3/log 2))

The greedy choice property refers to which property of an algorithm?

Take whatever choice looks the best at the moment

The Halting problem H is a decision problem. Which of the following statements is correct.

The Halting problem cannot be solved by a computer.

Let zi denote the i-th smallest element of an array of n elements. For integers i<j, we denote by Zij the set {zi, zi+1, ..., zj}. Which condition guarantees that randomized quicksort will never compare zi and zj?

The first pivot selected from Zij is zk, where i<k<j.

Which property is not satisfied by a greedy algorithm?

The greedy output property

The greedy algorithm for giving change using n coins with values v[1] > v[2] > ... > v[n] cannot always give the correct amount of change, unless (select the most appropriate condition)

The smallest value v[n] must be equal to 1

A greedy algorithm makes at each step a choice that leads to the optimal solution

True

Dynamic programming and greedy algorithms both use the optimal substructure property.

True

At this point in time, is P = NP, true, false, or unknown?

Unknown

Suppose that an array A[1..n] is sorted with deterministic quicksort. Deterministic quicksort has in general a worst-case running time of O(n2) and this bound can be attained. In the lecture, a variation of the pivot selection was shown that can guarantee a worst-case running time of O(n log n) for deterministic quicksort. How was the pivot element selected in this variation of deterministic quicksort?

Use the median of the array A[1..n]

The polynomial hierarchy PH is equal to

⋃_k Σ_k (the union of Σ_k over all k)

We know that SAT is NP-complete. The SUDOKU decision problem (can a given SUDOKU be solved?) is obviously in NP. How can you use SAT to show that SUDOKU is NP-complete as well?

We need to show that SAT ≤p SUDOKU.

We proved that there exist functions that cannot be computed. What was the main idea of this proof?

We showed that there are countably many programs, but uncountably many functions from N0 to {0,1}.

In the lecture, we proved that VERTEX COVER is NP-complete. How did we show it?

We used a polynomial-time reduction from 3SAT

Karatsuba invented a method to multiply integers X and Y. In the divide-step, he creates four integers A, B, C, and D. What conditions do these four integers satisfy?

X = 2^(n/2)·A + B and Y = 2^(n/2)·C + D

Suppose that A and B are decision problems in NP such that A ≤p B. Suppose that you find a new algorithm for B that runs in polynomial time. Can you infer that A belongs to P as well?

Yes, since A ≤p B.

Kruskal's algorithm (as discussed in the lecture videos) finds for a connected input graph G

a minimum spanning tree

Which algorithm is not discussed in the lecture videos on greedy algorithms?

an algorithm to schedule jobs so as to minimize the sum of completion times

Assuming that P is not equal to NP, the approximation ratio of a polynomial-time approximation algorithm for the non-metric TSP problem

cannot be a constant

The matroid in the greedy algorithm for matroids models:

constraints

What is the structure of the divide-and-conquer paradigm?

divide the problem into subproblems, recursively solve the subproblems, and combine the solutions to the subproblems into a solution to the original problem

The greedy algorithm to find a maximal matching

finds bigger and bigger matchings using augmenting paths

The dynamic programming solution to find the least-squares regression using line segments is aiming to solve the problem by

finding as few line segments as possible that minimize the least-squares error

Let f and g be functions from the set of natural numbers to the set of real numbers. If the limit lim_{n→∞} |f(n)|/|g(n)| exists and is finite, then

f∈O(g)

Let f and g denote functions from the natural numbers to the real numbers. We write __________ if and only if there exist positive real constants c and C and a natural number n_0 such that c|g(n)| ≤ |f(n)| ≤ C|g(n)| holds for all n ≥ n_0. What can go in the underlined space?

f∈Θ(g)

The n-th Harmonic number H_n = 1 + 1/2 + ⋯ + 1/n is asymptotically equal to

ln n

We have lim_{n→∞} ln(n!)/(n ln n) = 1. Check all statements below that are implied by the previous statement.

ln(n!) ∼ n ln n; n ∈ O(n ln n)

Any comparison-based algorithm to find the 2nd largest element in an array of n elements needs at least how many comparisons? (Check the tight lower bound.)

n − 2 + ⌈log_2 n⌉

Strassen's divide-and-conquer algorithm for multiplying two n×n matrices achieves its speedup over the standard matrix multiplication algorithm by multiplying

seven n/2 × n/2 matrices at each level of recursion

Dijkstra's algorithm is used to solve the ________ ? (Fill in the blank)

single-source shortest path problem
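
A minimal heap-based sketch of Dijkstra's algorithm (the toy graph and names are my own; assumes nonnegative edge weights):

```python
import heapq

# Single-source shortest paths on an adjacency-list graph.
def dijkstra(graph, source):
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry; a shorter path was already found
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

graph = {"s": [("a", 2), ("b", 5)], "a": [("b", 1)], "b": []}
print(dijkstra(graph, "s"))  # {'s': 0, 'a': 2, 'b': 3}
```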

A matroid is given by a finite set S and a hereditary family F of subsets of S satisfying the exchange axiom. What other property needs to be satisfied by a matroid (S,F)?

the family F must contain at least one set

Roughly speaking, Hilbert's tenth problem is concerned with:

the question whether there exists an algorithm to decide whether a given Diophantine equation can be solved.

Kruskal's greedy algorithm chooses in each step

the remaining edge with the least weight, unless it would form a cycle with the already chosen edges

Which of the following is NOT a method to find the amortized cost of a sequence of operations on a data structure?

the tabulation method

Suppose that we are given a sequence of n operations on a data structure. The amortized cost of each operation is given by

the worst-case running time of all n operations divided by n.

What is the definition of f in O(g) for functions f,g: N → R ?

there exists a positive integer n0 and a positive real number c such that |f(n)| <= c|g(n)| holds for all n >= n0.

Karatsuba's algorithm to multiply large n-bit integers

uses a divide-and-conquer algorithm that splits the integers into two n/2-bit parts and recursively performs 3 multiplications of n/2-bit integers to get a runtime of at most O(n^1.59) operations.

In introductory computer science classes, a common example for recursion is the calculation of Fibonacci numbers:

    var m := map(0 -> 0, 1 -> 1)
    function fib(n)
        if key n is not in map m
            m[n] := fib(n-1) + fib(n-2)
        return m[n]

What is the time complexity of the algorithm?

None of the above. (The choices Θ(2^n), Θ(e^n), Θ(ϕ^n) with ϕ the golden ratio, and Θ(n^n) are all incorrect; with memoization the running time is Θ(n).)

The expected run-time of Randomized-Quicksort on input of an array of length n is given by

Θ(n log n)

What is the worst case running time of the FFT (as discussed in class) for an input of length n?

Θ(n log n)

In introductory computer science classes, a common example for recursion is the calculation of Fibonacci numbers:

    function fib(n)
        if n <= 1
            return n
        return fib(n-1) + fib(n-2)

What is the time complexity of the algorithm?

Θ(ϕ^n), where ϕ is the golden ratio
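
A sketch contrasting the two versions (the call counting is my own instrumentation): the naive recursion makes 2·F(n+1) − 1 calls, which grows like ϕ^n, while memoization needs only O(n) distinct calls.

```python
from functools import lru_cache

calls = 0

def fib_naive(n):
    global calls
    calls += 1  # count every recursive call
    return n if n <= 1 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    return n if n <= 1 else fib_memo(n - 1) + fib_memo(n - 2)

fib_naive(25)
print(calls)         # 242785 calls for n = 25: exponential growth
print(fib_memo(25))  # 75025, computed with only O(n) distinct calls
```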

According to the lecture, the decision problem MAX-INDSET likely belongs to which level in the polynomial hierarchy?

Σ2, but not Σ1 or Π1

The potential method defines a function Φ on the data structure. Which property is not required of the potential function? We denote by Dk the data structure after the k-th operation.

Φ is monotonically increasing

