Algorithms Midterm

What are the strongly connected components of the above graph? To answer this question, just list the nodes that belong to each strongly connected component.

1) A  2) B, C, F, E, D, G  3) H, J

Perform a depth-first traversal of this graph, starting at node A. Label every edge in the graph with T if it's a tree edge, B if it's a back edge, F if it's a forward edge, and C if it's a cross edge. To ensure that there is a single solution, assume that whenever you are faced with a decision of which node to pick from a set of nodes, you pick the node that comes first in alphabetical order.

1. A->B = T
2. B->C = T
3. C->F = T
4. F->B = B
5. F->J = T
6. J->H = T
7. H->J = B
8. B->E = T
9. E->D = T
10. D->G = T
11. G->E = B
12. E->F = C
13. E->H = C
14. A->D = F

A key step of the Quicksort algorithm is the PARTITION procedure, which rearranges the subarray A[p..r] in place, as follows:

PARTITION(A, p, r)
    x <- A[r]
    i <- p - 1
    for j <- p to r - 1
        do if A[j] <= x
            then i <- i + 1
                 exchange A[i] <-> A[j]
    exchange A[i + 1] <-> A[r]
    return i + 1

Suppose that you have an unordered array, A = <19, 11, 13, 22, 17, 10, 15, 14>, and the original call to the Quicksort procedure is QUICKSORT(A, 1, 8). Show the elements of the array A in the order in which they appear after the first call to PARTITION.

11, 13, 10, 14, 17, 19, 15, 22
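
As a sanity check, here is a minimal Python sketch of PARTITION, translated to 0-based indexing (the function name partition and the demo call are illustrative, not part of the original pseudocode):

def partition(A, p, r):
    """Lomuto partition of A[p..r] (0-based, inclusive) around pivot A[r].

    Returns the pivot's final index; elements <= pivot end up to its left.
    """
    x = A[r]        # pivot is the last element
    i = p - 1       # boundary of the "<= pivot" region
    for j in range(p, r):
        if A[j] <= x:
            i += 1
            A[i], A[j] = A[j], A[i]
    A[i + 1], A[r] = A[r], A[i + 1]
    return i + 1

A = [19, 11, 13, 22, 17, 10, 15, 14]
q = partition(A, 0, len(A) - 1)
print(A)  # [11, 13, 10, 14, 17, 19, 15, 22], pivot index q = 3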

What is the array <51, 13, 10, 64, 34, 5, 32, 21> after a call to merge sort?

5, 10, 13, 21, 32, 34, 51, 64
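
A minimal Python sketch of merge sort, for checking answers like the one above (names are illustrative):

def merge_sort(A):
    """Top-down merge sort; returns a new sorted list (stable)."""
    if len(A) <= 1:
        return A
    mid = len(A) // 2
    left = merge_sort(A[:mid])
    right = merge_sort(A[mid:])
    # Merge the two sorted halves in linear time.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([51, 13, 10, 64, 34, 5, 32, 21]))
# [5, 10, 13, 21, 32, 34, 51, 64]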

Below is the pseudocode for BUILD-MAX-HEAP, which uses the procedure MAX-HEAPIFY in a bottom-up manner to convert an array A[1..n], where n = length[A], into a max-heap.

MAX-HEAPIFY(A, i)
    l ← LEFT(i)
    r ← RIGHT(i)
    if l ≤ heap-size[A] and A[l] > A[i]
        then largest ← l
        else largest ← i
    if r ≤ heap-size[A] and A[r] > A[largest]
        then largest ← r
    if largest ≠ i
        then exchange A[i] ↔ A[largest]
             MAX-HEAPIFY(A, largest)

BUILD-MAX-HEAP(A)
    heap-size[A] ← length[A]
    for i ← ⌊length[A]/2⌋ downto 1
        do MAX-HEAPIFY(A, i)

Suppose BUILD-MAX-HEAP(A) has been called on the array below:

A = <5, 2, 4, 3, 17, 10, 11, 15, 9, 8>

Show the result of 5 calls from BUILD-MAX-HEAP to MAX-HEAPIFY.

A[1] = 17, A[2] = 15, A[3] = 11, A[4] = 9, A[5] = 8, A[6] = 10, A[7] = 4, A[8] = 3, A[9] = 5, A[10] = 2
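
A minimal Python sketch of BUILD-MAX-HEAP and MAX-HEAPIFY, translated to 0-based indexing so it runs directly on a Python list (names are illustrative):

def max_heapify(A, i, heap_size):
    """Sift A[i] down until the subtree rooted at i is a max-heap (0-based)."""
    l, r = 2 * i + 1, 2 * i + 2
    largest = i
    if l < heap_size and A[l] > A[largest]:
        largest = l
    if r < heap_size and A[r] > A[largest]:
        largest = r
    if largest != i:
        A[i], A[largest] = A[largest], A[i]
        max_heapify(A, largest, heap_size)

def build_max_heap(A):
    """Heapify bottom-up, from the last internal node down to the root."""
    for i in range(len(A) // 2 - 1, -1, -1):
        max_heapify(A, i, len(A))

A = [5, 2, 4, 3, 17, 10, 11, 15, 9, 8]
build_max_heap(A)
print(A)  # [17, 15, 11, 9, 8, 10, 4, 3, 5, 2]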

Below is the pseudocode for BUILD-MAX-HEAP, which uses the procedure MAX-HEAPIFY in a bottom-up manner to convert an array A[1..n], where n = length[A], into a max-heap.

MAX-HEAPIFY(A, i)
    l ← LEFT(i)
    r ← RIGHT(i)
    if l ≤ heap-size[A] and A[l] > A[i]
        then largest ← l
        else largest ← i
    if r ≤ heap-size[A] and A[r] > A[largest]
        then largest ← r
    if largest ≠ i
        then exchange A[i] ↔ A[largest]
             MAX-HEAPIFY(A, largest)

BUILD-MAX-HEAP(A)
    heap-size[A] ← length[A]
    for i ← ⌊length[A]/2⌋ downto 1
        do MAX-HEAPIFY(A, i)

Suppose BUILD-MAX-HEAP(A) has been called on the array below:

A = <5, 2, 4, 3, 17, 10, 11, 15, 9, 8>

Show the initial heap.

A[1] = 5, A[2] = 2, A[3] = 4, A[4] = 3, A[5] = 17, A[6] = 10, A[7] = 11, A[8] = 15, A[9] = 9, A[10] = 8

Describe an algorithm for testing whether a directed graph is cyclic or acyclic

Do a depth-first search and look for back edges (edges to a vertex that is still on the DFS recursion stack). If there is a back edge, the graph is cyclic; if there are no back edges, it is acyclic.
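
A minimal Python sketch of this idea, using the standard white/gray/black DFS coloring (the adjacency-dict representation is an assumption, not given in the question):

def has_cycle(adj):
    """Detect a cycle in a directed graph via a DFS back-edge check.

    adj: dict mapping each vertex to a list of its successors.
    A back edge goes to a vertex that is still on the recursion stack.
    """
    WHITE, GRAY, BLACK = 0, 1, 2   # unvisited / on stack / finished
    color = {v: WHITE for v in adj}

    def dfs(u):
        color[u] = GRAY
        for v in adj[u]:
            if color[v] == GRAY:               # back edge -> cycle
                return True
            if color[v] == WHITE and dfs(v):
                return True
        color[u] = BLACK
        return False

    return any(color[v] == WHITE and dfs(v) for v in adj)

print(has_cycle({'a': ['b'], 'b': ['c'], 'c': ['a']}))  # True
print(has_cycle({'a': ['b'], 'b': ['c'], 'c': []}))     # False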

Below is the pseudocode for Euclid's algorithm, which finds the greatest common divisor gcd(a, b) of two integers a and b, where a >= b.

Algorithm EuclidGCD(a, b)
    Input: integers a and b
    Output: gcd(a, b)
    if b = 0
        return a
    else
        return EuclidGCD(b, a mod b)

Show the sequence of steps it takes to find gcd(117, 52).

EuclidGCD(117, 52) -> EuclidGCD(52, 13) -> EuclidGCD(13, 0) -> return 13
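
A runnable Python version that prints the trace (the print statement is added purely for illustration):

def euclid_gcd(a, b):
    """Euclid's algorithm; prints each call so the trace is visible."""
    print(f"EuclidGCD({a}, {b})")
    if b == 0:
        return a
    return euclid_gcd(b, a % b)

print(euclid_gcd(117, 52))
# EuclidGCD(117, 52)
# EuclidGCD(52, 13)
# EuclidGCD(13, 0)
# 13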

If a priority queue with n elements is implemented as a binary heap, insertion of a new element into the queue has worst-case complexity Θ(n).

F

It is possible to perform a topological sort of an undirected graph.

F

Karatsuba's divide-and-conquer multiplication algorithm is faster in practice than the grade-school multiplication algorithm no matter what the size of the integers it multiplies.

F

The recurrence relation for insertion sort is T(n) = T(n-1) + O(n).

T. Recursive insertion sort sorts A[1..n-1] and then inserts A[n] into the sorted subarray in O(n) time, giving T(n) = T(n-1) + O(n), which solves to O(n^2).

Using the divide-and-conquer strategy, the problem of factoring an integer into its prime factors can be solved in polynomial time.

F

(T/F) 1+2+3+...+(n-1) + n = O(n)

False

(T/F) 12n^2 + 6n = o(n^2)

False

(T/F) O(n log n) is the same as O(n) because, in all practical cases, log n is smaller than some constant

False

(T/F) n! = O(2^n)

False

(T/F) n^3 - n^2 = O(n^2)

False

Exponential functions grow asymptotically [ ] than polynomial functions

Faster

Suppose that A and B are two sorted arrays of length n. A has distinct elements, and B has distinct elements, but there may be elements that are in both A and B. Describe an algorithm with worst-case runtime O(n) that finds the intersection of A and B — that is, your algorithm should return all of the elements that are in both A and B. You don't need to give detailed pseudocode. It is enough to describe the idea of the algorithm in English.

Maintain indices i into A and j into B, both starting at the first element. If A[i] = B[j], put the element in the output array and increment both i and j. If A[i] > B[j], increment j; if A[i] < B[j], increment i. Each step advances at least one index, so the whole scan takes O(n) time.
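
A minimal Python sketch of this two-pointer scan, using 0-based indexing (names are illustrative):

def intersect_sorted(A, B):
    """Two-pointer intersection of two sorted arrays in O(n) time."""
    i, j, out = 0, 0, []
    while i < len(A) and j < len(B):
        if A[i] == B[j]:
            out.append(A[i])
            i += 1
            j += 1
        elif A[i] < B[j]:
            i += 1      # A[i] is too small to match anything in B[j:]
        else:
            j += 1      # B[j] is too small to match anything in A[i:]
    return out

print(intersect_sorted([1, 3, 5, 7], [2, 3, 5, 8]))  # [3, 5]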

What is the maximum number of edges in an acyclic undirected graph with 100 vertices? What is the minimum number?

Max: V - 1 = 99 (an acyclic undirected graph is a forest, and a single tree on all 100 vertices has 99 edges). Min: 0 (a graph with no edges is acyclic).

Exactly how many strongly-connected components are there in a directed acyclic graph with one million vertices? Briefly explain your answer.

One million. In a directed acyclic graph, every vertex is its own strongly connected component: any strongly connected component containing two or more vertices would contain a cycle.

Differences between quickselect and quicksort?

Quicksort partitions the array around a pivot and then recurses on both subarrays. Quickselect recursively performs PARTITION on only one side, because the pivot's final position tells you which side contains the element being selected. As a result, quickselect is faster than quicksort: Θ(n) average time versus Θ(n log n).
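
A minimal Python sketch of quickselect; for readability it partitions into new lists rather than in place, and the random pivot choice is an assumption:

import random

def quickselect(A, k):
    """Return the k-th smallest element of A (1-indexed); O(n) on average.

    Unlike quicksort, only the side of the partition that contains the
    k-th element is recursed on.
    """
    if len(A) == 1:
        return A[0]
    pivot = random.choice(A)
    less = [x for x in A if x < pivot]
    equal = [x for x in A if x == pivot]
    greater = [x for x in A if x > pivot]
    if k <= len(less):
        return quickselect(less, k)
    if k <= len(less) + len(equal):
        return pivot
    return quickselect(greater, k - len(less) - len(equal))

print(quickselect([19, 11, 13, 22, 17, 10, 15, 14], 4))  # 14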

A key step of the Quicksort algorithm is the PARTITION procedure, which rearranges the subarray A[p..r] in place, as follows:

PARTITION(A, p, r)
    x <- A[r]
    i <- p - 1
    for j <- p to r - 1
        do if A[j] <= x
            then i <- i + 1
                 exchange A[i] <-> A[j]
    exchange A[i + 1] <-> A[r]
    return i + 1

Suppose that you have an unordered array, A = <19, 11, 13, 22, 17, 10, 15, 14>, and the original call to the Quicksort procedure is QUICKSORT(A, 1, 8). The above version of partition always chooses the last element of an array as the pivot element. Briefly, describe a better way to pick the pivot.

Choose the pivot uniformly at random, or use the median-of-three rule (the median of the first, middle, and last elements).

The master theorem says that for T(n) = aT(n/b) + O(n^d):

T(n) = Θ(n^(log_b a))   if a > b^d
T(n) = Θ(n^d log n)     if a = b^d
T(n) = Θ(n^d)           if a < b^d

Consider the recurrence:

T(n) = 16T(n/4) + 5n^2   if n > k
T(n) = Θ(1)              if n <= k

Does the first, second, or third case apply? Solve the recurrence.

The second case: a = 16, b = 4, d = 2, and b^d = 4^2 = 16 = a, so T(n) = Θ(n^2 log n).

Linear functions grow asymptotically [ ] than quadratic functions

Slower

Poly Log functions grow asymptotically [ ] than polynomial functions

Slower

Describe an algorithm that computes the most frequently occurring element in an array A[1 ...n] in O(n log n) time. (Just describe the idea of the algorithm in English - do not write pseudocode.)

Sort the array in O(n log n) time; after sorting, all copies of the same element are adjacent. Then read through the array once, counting the length of each run of equal adjacent elements and keeping track of the longest run seen so far.
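
A minimal Python sketch of this sort-then-count-runs idea (assumes a non-empty array; names are illustrative):

def most_frequent(A):
    """Mode of A in O(n log n): sort, then count runs of equal elements."""
    A = sorted(A)                     # O(n log n); equal elements now adjacent
    best, best_count = A[0], 0
    run_start = 0
    for i in range(1, len(A) + 1):
        # A run ends at the end of the array or where the value changes.
        if i == len(A) or A[i] != A[run_start]:
            if i - run_start > best_count:
                best, best_count = A[run_start], i - run_start
            run_start = i
    return best

print(most_frequent([4, 1, 4, 2, 4, 2]))  # 4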

Unfortunately, there is no single best comparison-based sorting algorithm. If there were, name at least three properties it should have. Use just one or two words to describe each property.

Stable, in place, adaptive, asymptotically optimal

An array of 100 integers can be sorted in O(1) time.

T

Given an unsorted array of n integers, the median element in the array can be found in Θ(n) time in the average-case.

T

Heap sort is an in-place sorting algorithm and merge sort is a stable sorting algorithm.

T

In any heap (considered as a tree) any two leaf nodes have either the same depth, or depths differing by one.

T

Tim sort, a hybrid sorting algorithm that combines the best features of insertion sort and merge sort, is a stable sorting algorithm, but not an in-place sorting algorithm.

T

The master theorem says that for T(n) = aT(n/b) + O(n^d):

T(n) = Θ(n^(log_b a))   if a > b^d
T(n) = Θ(n^d log n)     if a = b^d
T(n) = Θ(n^d)           if a < b^d

func sumArray(A, low, high)
    if low > high
        return 0
    if low = high
        return A[low]
    mid <- (high + low)/2
    leftSum <- sumArray(A, low, mid)
    rightSum <- sumArray(A, mid+1, high)
    return leftSum + rightSum

Give a recurrence relation for the worst-case performance of this algorithm, and then use the master theorem to find the asymptotic complexity of your recurrence relation.

T(n) = 2T(n/2) + O(1), so a = 2, b = 2, d = 0. Since a = 2 > b^d = 1, the first case applies: T(n) = Θ(n^(log_2 2)) = Θ(n).

Towers of Hanoi recurrence

T(n) = 2T(n-1) + 1, which solves to Θ(2^n)
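
A minimal Python sketch that makes the recurrence visible: the two recursive calls plus the single move of the largest disk give T(n) = 2T(n-1) + 1 (the move list is just for counting):

def hanoi(n, src, aux, dst, moves):
    """Move n disks from src to dst via aux, appending each move to moves."""
    if n == 0:
        return
    hanoi(n - 1, src, dst, aux, moves)   # T(n-1): clear the top n-1 disks
    moves.append((src, dst))             # +1: move the largest disk
    hanoi(n - 1, aux, src, dst, moves)   # T(n-1): stack them back on top

moves = []
hanoi(5, "A", "B", "C", moves)
print(len(moves))  # 31 == 2**5 - 1, i.e., Theta(2^n) moves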

Suppose you are choosing between the following two algorithms:
1. Algorithm A solves problems of size n by dividing them into five subproblems of half the size, recursively solving each subproblem, and then combining the solutions in linear time.
2. Algorithm B solves problems of size n by dividing them into nine subproblems of size n/3, recursively solving each subproblem, and then combining the solutions in O(n^2) time.
What are the running times of each of these algorithms (in big-Θ notation), and which would you choose? To answer this question, you will need to formulate the recurrence relation for each algorithm, so show this in your answer.

Algorithm A: T(n) = 5T(n/2) + O(n), which solves to Θ(n^(log_2 5)) ≈ Θ(n^2.32).
Algorithm B: T(n) = 9T(n/3) + O(n^2), which solves to Θ(n^2 log n).
Choose Algorithm B: Θ(n^2 log n) grows more slowly than Θ(n^(log_2 5)).

Binary search recurrence

T(n) = T(n/2) + O(1), which solves to Θ(log n)

Karatsuba recurrence

T(n) = 3T(n/2) + O(n), which solves to Θ(n^(log_2 3)) ≈ Θ(n^1.585)
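
A minimal Python sketch of Karatsuba's algorithm, splitting by decimal digits for readability (real implementations split by machine words; names are illustrative). Three half-size recursive products instead of four is exactly what produces the recurrence above:

def karatsuba(x, y):
    """Karatsuba multiplication: T(n) = 3T(n/2) + O(n) = Theta(n^log2(3))."""
    if x < 10 or y < 10:                       # base case: single digits
        return x * y
    m = max(len(str(x)), len(str(y))) // 2     # split point, in digits
    high_x, low_x = divmod(x, 10 ** m)
    high_y, low_y = divmod(y, 10 ** m)
    a = karatsuba(high_x, high_y)              # product of high parts
    b = karatsuba(low_x, low_y)                # product of low parts
    c = karatsuba(high_x + low_x, high_y + low_y) - a - b  # cross terms
    return a * 10 ** (2 * m) + c * 10 ** m + b

print(karatsuba(1234, 5678))  # 7006652
print(1234 * 5678)            # 7006652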

Insertion sort recurrence

T(n) = T(n-1) + O(n), which solves to Θ(n^2) in the worst case

Selection sort recurrence

T(n) = T(n-1) + O(n), which solves to Θ(n^2)

Merge sort recurrence

T(n) = 2T(n/2) + O(n), which solves to Θ(n log n)

Below is the pseudocode for Euclid's algorithm, which finds the greatest common divisor gcd(a, b) of two integers a and b, where a >= b.

Algorithm EuclidGCD(a, b)
    Input: integers a and b
    Output: gcd(a, b)
    if b = 0
        return a
    else
        return EuclidGCD(b, a mod b)

Explain what it means to say the recursive function gcd(a, b) shown above is a tail-recursive function.

The recursive call is the very last operation the function performs; nothing remains to be done after it returns, so the call can reuse the current stack frame or be replaced by a loop.
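
Because the call is in tail position, it can be rewritten mechanically as a loop; a minimal Python sketch of that transformation:

def gcd_iterative(a, b):
    """Tail recursion eliminated: the recursive call becomes a loop,
    reusing the same 'frame' (the variables a and b) on every step."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd_iterative(117, 52))  # 13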

Binary search best case

Theta(1)

Hashing average case

Theta(1)

Hashing best case

Theta(1)

Interpolation search best case

Theta(1)

Sequential search best case

Theta(1)

Brute Force Algorithm:

int isqrt(int n) {
    int i, r = 0;
    for (i = 1; i * i <= n; i++)
        r = i;    /* largest i such that i*i <= n */
    printf("\n Integer square root: %d", r);
    return r;
}

What is the asymptotic complexity of this algorithm?

The loop runs until i*i > n, i.e., about √n iterations, so the complexity is Θ(√n) in the value of n. Measured in the size of the input (b bits, so the value of n can be about 2^b), that is Θ(2^(b/2)) — exponential in the input size.

Binary search average case

Theta(log n)

Hybrid binary/interpolation worst case

Theta(log n)

Hybrid binary/interpolation average case

Theta(log(log n))

Interpolation search average case

Theta(log(log n))

Binary search worst case

Theta(log(n))

How many iterations does Euclid's algorithm require in the worst case?

Theta(log2(a)) iterations

Hashing worst case

Theta(n)

Insertion sort best case

Theta(n)

Interpolation search worst case

Theta(n)

Sequential search average case

Theta(n)

Sequential search worst case

Theta(n)

Use the master theorem to solve T(n) = 9T(n/3) + n

Theta(n^(log3(9))) = Theta(n^2), since a = 9 > b^d = 3 puts us in the first case

Use the master theorem to solve T(n) = 5T(n/4) + n

Theta(n^(log4(5)))

Grade school multiplication complexity

Theta(n^2)

Insertion sort average case

Theta(n^2)

Insertion sort worst case

Theta(n^2)

Selection sort average case

Theta(n^2)

Selection sort best case

Theta(n^2)

Selection sort worst case

Theta(n^2)

Use the master theorem to solve T(n) = 4T(n/2) + n^3

Theta(n^3)

What is the similarity between quickselect and quicksort?

Both are built on the same PARTITION procedure: each partitions the array around a pivot so that smaller elements precede it and larger elements follow it.

(T/F) If T(n) = o(g(n)) then T(n) = O(g(n))

True

(T/F) Strassen's algorithm is an efficient divide-and-conquer algorithm for matrix multiplication, but asymptotically faster algorithms are possible.

True

(T/F) The best-case complexity of insertion sort is O(n)

True

(T/F) n^(2) log n = O(n^2.1)

True

(T/F) n^(2)logn = Ω(n^2)

True

(T/F) 3n^2 = O(n^2)

True

Can this be solved using a single invocation of a sorting subroutine followed by a single pass over the sorted array? Compute the minimum gap between any pair of array elements

Yes. First sort the array, then scan adjacent pairs and keep track of the minimum difference seen.
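
A minimal Python sketch (assumes the array has at least two elements; names are illustrative):

def min_gap(A):
    """Minimum difference between any pair of elements: sort + one pass.

    After sorting, the closest pair must be adjacent, so one linear
    scan over consecutive differences suffices.
    """
    A = sorted(A)
    return min(A[i + 1] - A[i] for i in range(len(A) - 1))

print(min_gap([8, 1, 12, 4, 10]))  # 2  (from the pair 10, 12)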

Can this be solved using a single invocation of a sorting subroutine followed by a single pass over the sorted array? Compute the number of distinct integers contained in the array

Yes. Sort, then scan with a counter: count the first element, and increment the counter whenever an element differs from the previous one; don't increment when it is the same.

Can this be solved using a single invocation of a sorting subroutine followed by a single pass over the sorted array? Compute a "de-duplicated" version of the input array, meaning an output array that contains exactly one copy of each of the distinct integers in the input array

Yes. Sort, then scan, outputting each element that differs from the previous one and discarding any element equal to the previous one (a duplicate).

Can this be solved using a single invocation of a sorting subroutine followed by a single pass over the sorted array? Compute the mode (the most frequently occurring integer) of the array. If there is a tie and there are two or more modes, the algorithm should return all of them

Yes. Sort, then scan the runs of equal elements, tracking the maximum run length; output every value whose run length matches the maximum (this handles ties).

Can this be solved using a single invocation of a sorting subroutine followed by a single pass over the sorted array? For this part, assume that the array's integers are distinct and that the array has odd length. Compute the median array-the "middle element", with the number of other elements less than it equal to the number of other elements greater than it

Yes. Sort the array and go to the middle element

Which of the following algorithms we have studied so far this semester are divide-and-conquer algorithms, but not decrease-and-conquer algorithms? Put an X next to your answers. a. Karatsuba's integer multiplication algorithm b. Binary search for an item in a sorted array c. Euclid's algorithm for finding the greatest common divisor of two integers d. Bisection algorithm for finding the square root of a number e. Mergesort f. Quickselect algorithm for finding the median element in an unsorted array

a. Karatsuba's integer multiplication algorithm and e. Mergesort

For each of the following algorithms, give its average-case asymptotic complexity using big-Theta notation. As a hint, note that the algorithms are listed in order from fastest to slowest. a. Interpolation search b. Binary search c. Quickselect d. Quicksort e. Karatsuba's divide-and-conquer algorithm for multiplication f. Grade-school multiplication

a. Θ(log(log n))
b. Θ(log n)
c. Θ(n)
d. Θ(n log n)
e. Θ(n^(log_2 3))
f. Θ(n^2)

Order types of functions from asymptotically slowest to fastest (logarithmic, constant, poly logarithmic, exponential, polynomial, factorial)

constant, logarithmic, poly logarithmic, polynomial, exponential, factorial

Big Omega

lower bound on worst case running time

A key step of the Quicksort algorithm is the PARTITION procedure, which rearranges the subarray A[p..r] in place, as follows:

PARTITION(A, p, r)
    x <- A[r]
    i <- p - 1
    for j <- p to r - 1
        do if A[j] <= x
            then i <- i + 1
                 exchange A[i] <-> A[j]
    exchange A[i + 1] <-> A[r]
    return i + 1

Suppose that you have an unordered array, A = <19, 11, 13, 22, 17, 10, 15, 14>, and the original call to the Quicksort procedure is QUICKSORT(A, 1, 8). What is the best-case complexity of quicksort? When does it happen?

Θ(n log n). This happens when the pivots are good: each partition splits the array into two roughly equal halves.

What is the asymptotic complexity of the fastest algorithm for finding the strongly-connected components of a directed graph?

Θ(V + E), e.g., via Tarjan's or Kosaraju's algorithm

Big Theta

tight bound on worst case running time

Big O

upper bound on worst case running time

Running time of T(n) = logn + 2^n

Θ(2^n)

Running time of T(n) = n! + n^2

Θ(n!)

Running time of T(n) = 32n + 1024

Θ(n)

Running time of T(n) = sqrt(n) + n

Θ(n)

Running time of T(n) = n^3 + 3(logn)^2

Θ(n^3)

