CS 261 - Ch. 4
In what ways is quick sort similar to merge sort? In what ways are they different?
Both are divide-and-conquer algorithms: each splits the collection into two pieces, recursively sorts the pieces, and is O(n log n) in the typical case. They differ in where the work happens. Merge sort splits the array trivially down the middle and does its real work afterwards, merging the sorted halves (which requires extra memory). Quick sort does its real work up front, partitioning the array around a pivot so that no merge step is needed, but a badly chosen pivot can degrade it to O(n^2).
Suppose an algorithm is O(n), where n is the input size. If the size of the input is doubled, how will the execution time change?
The running time is roughly c*n, so doubling the input gives c*(2n) = 2*(c*n); the execution time approximately doubles.
Suppose you selected the first element in the section being sorted as the pivot. What advantage would this have? What input would make this a very bad idea? What would be the big-Oh complexity of the quick sort algorithm in this case?
Choosing the first element as the pivot is simple and costs nothing to locate. However, if the input is already sorted (or nearly sorted, or reverse sorted), the first element is always the smallest or largest value in the section, every partition is maximally unbalanced, and quick sort degrades to its worst case, O(n^2). Choosing a random pivot minimizes the chance of hitting this worst case; choosing the middle element is also acceptable in the majority of cases.
Compare the partition median finding algorithm to binary search. In what ways are they similar? In what ways are they different?
Both work by repeatedly narrowing the section of interest and ignoring the rest, and both are trying to locate a particular value. Binary search requires a sorted array, halves the range at each step, and is O(log n). Partition-based median finding works on an unsorted array: after each partition it recurses only into the side that must contain the median, so it rearranges the data as it goes and is O(n) on average, though a bad sequence of pivots can make it O(n^2).
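A minimal sketch of this idea (quickselect with a random, Lomuto-style pivot; the names and details are illustrative, not the course's implementation):

```python
import random

def quickselect(data, k):
    """Return the k-th smallest element (0-based) of an unsorted list."""
    low, high = 0, len(data) - 1
    while True:
        if low == high:
            return data[low]
        # Pick a random pivot and move it to the end for a Lomuto-style partition.
        pivot_index = random.randint(low, high)
        data[pivot_index], data[high] = data[high], data[pivot_index]
        pivot = data[high]
        store = low
        for i in range(low, high):
            if data[i] < pivot:
                data[i], data[store] = data[store], data[i]
                store += 1
        data[store], data[high] = data[high], data[store]
        # Keep only the side that contains index k; discard the rest.
        if k == store:
            return data[store]
        elif k < store:
            high = store - 1
        else:
            low = store + 1

values = [7, 2, 9, 4, 1, 8, 3]
print(quickselect(values, len(values) // 2))  # 4, the median
```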
What is the biggest advantage of merge sort over selection sort or insertion sort? What is the biggest disadvantage of merge sort?
Merge sort's biggest advantage is speed. The key insight is that two already-sorted arrays can be merged very rapidly into a new collection; the recursive calls are approximately O(log n) levels deep, and each level does approximately O(n) work, giving O(n log n) overall instead of the O(n^2) of selection sort or insertion sort. Its biggest disadvantage is that the merge step uses extra memory (a temporary array roughly the size of the input).
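A short sketch of the merge step that insight refers to (illustrative names):

```python
def merge(left, right):
    """Merge two already-sorted lists into one sorted list in O(n) time."""
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        # Take whichever front element is smaller.
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    # One list is exhausted; append the remainder of the other.
    result.extend(left[i:])
    result.extend(right[j:])
    return result

print(merge([1, 4, 9], [2, 3, 8]))  # [1, 2, 3, 4, 8, 9]
```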
What does it mean to say that one function dominates another when discussing algorithmic execution times?
A function f dominates a function g if, once the input gets large enough, f(n) is always larger than g(n), regardless of any constant multipliers involved. For example, n^2 dominates 1000n, even though 1000n is larger for small inputs.
Explain in your own words why any sorting algorithm that only exchanges values with a neighbor must be in the worst case O(n^2).
Imagine the input is sorted exactly backwards. The largest value starts at the front and must travel all the way to the very end, and since it can move only one position per exchange, that value alone requires about n steps. In fact roughly n values must each travel an average of about n/2 positions, so about n^2/2 exchanges are needed in total, which is O(n^2).
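A small sketch that counts neighbor exchanges while bubble-sorting a reversed list, showing the roughly n^2/2 swap count (illustrative):

```python
def bubble_sort_swaps(data):
    """Bubble sort that returns how many neighbor exchanges it performed."""
    swaps = 0
    for end in range(len(data) - 1, 0, -1):
        for i in range(end):
            if data[i] > data[i + 1]:
                data[i], data[i + 1] = data[i + 1], data[i]
                swaps += 1
    return swaps

n = 100
print(bubble_sort_swaps(list(range(n, 0, -1))))  # 4950 == n*(n-1)/2
```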
Using the process of forming a partition described in the previous question, give an informal description of the quick sort algorithm.
Like merge sort, quick sort is a divide-and-conquer algorithm. It picks an element as the pivot, partitions the given array around that pivot, and then recursively sorts the two sides. Different versions of quick sort pick the pivot in different ways: always the first element, always the last element (sketched below), a random element, or the median. The key process is partition(): given an array and an element x of the array as the pivot, put x at its correct position in the sorted array, with all smaller elements before x and all greater elements after x, and do all of this in linear time.
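A minimal runnable sketch of the last-element (Lomuto) scheme mentioned above; this is an illustration, not the course's official implementation:

```python
def partition(data, low, high):
    """Place data[high] (the pivot) in its sorted position and return that index."""
    pivot = data[high]
    i = low                       # Boundary of the "smaller than pivot" region.
    for j in range(low, high):
        if data[j] < pivot:
            data[i], data[j] = data[j], data[i]
            i += 1
    data[i], data[high] = data[high], data[i]   # Move the pivot into place.
    return i

def quick_sort(data, low=0, high=None):
    """Sort data in place by partitioning and recursing on each side."""
    if high is None:
        high = len(data) - 1
    if low < high:
        p = partition(data, low, high)
        quick_sort(data, low, p - 1)
        quick_sort(data, p + 1, high)

values = [10, 7, 8, 9, 1, 5]
quick_sort(values)
print(values)  # [1, 5, 7, 8, 9, 10]
```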
What is a linear search and how is it different from a binary search?
A linear search (or sequential search) finds a target value within a list by checking each element in turn until a match is found or all elements have been searched; it works on any list and is O(n). A binary search requires a sorted list: it starts at the middle element, sees whether that element is greater than or less than the value being sought, discards the half that cannot contain the target, and repeats on the remaining half, giving O(log n).
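Minimal sketches of both searches (illustrative names):

```python
def linear_search(data, target):
    """Check each element in turn; works on any list. O(n)."""
    for i, value in enumerate(data):
        if value == target:
            return i
    return -1

def binary_search(data, target):
    """Repeatedly halve the search range; requires a sorted list. O(log n)."""
    low, high = 0, len(data) - 1
    while low <= high:
        mid = (low + high) // 2
        if data[mid] == target:
            return mid
        elif data[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1

print(linear_search([4, 2, 7, 1], 7))     # 2
print(binary_search([1, 2, 4, 7, 9], 7))  # 3
```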
Give an informal description of how the merge sort algorithm works
Merge sort is based on the divide-and-conquer paradigm. It involves three steps: divide the array into two (or more) subarrays; sort each subarray (conquer); and merge the sorted subarrays into one (in a smart way). See the sketch below.
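A sketch of the whole algorithm, reusing the merge helper sketched earlier in these notes (illustrative, not the course's implementation):

```python
def merge_sort(data):
    """Divide the list in half, sort each half recursively, then merge them."""
    if len(data) <= 1:               # Base case: 0 or 1 elements are already sorted.
        return data
    mid = len(data) // 2
    left = merge_sort(data[:mid])    # Conquer the left half.
    right = merge_sort(data[mid:])   # Conquer the right half.
    return merge(left, right)        # Combine using the merge step shown earlier.

print(merge_sort([5, 2, 8, 1, 9, 3]))  # [1, 2, 3, 5, 8, 9]
```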
Suppose an algorithm is O(n^2), where n is the input size. If the size of the input is doubled, how will the execution time change?
The running time is roughly c*n^2, so doubling the input gives c*(2n)^2 = 4*(c*n^2); the execution time roughly quadruples.
What does the quick sort algorithm do if all elements in an array are equal? What is the big-Oh execution time in this case?
O(n^2). When all elements of the array have the same value, quick sort behaves like its worst case: no matter which pivot is picked, the partition step still examines every value in the section, and because every value equals the pivot, each recursive call leads to completely unbalanced partitioning. Each call therefore removes only one element, giving about n levels of O(n) work.
If you start out with n items and repeatedly divide the collection in half, how many steps will you need before you have a single element?
About log2(n) steps, since each division cuts the number of remaining elements in half. For example, 1,024 items can be halved only 10 times before a single element remains.
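A tiny sketch that just counts the halvings (illustrative):

```python
def halving_steps(n):
    """Count how many times n can be halved before reaching a single element."""
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps

print(halving_steps(1024))  # 10 == log2(1024)
```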
In your own words give an informal explanation of the process of forming a partition.
The process of dividing a portion of an array into two. The limits of the partition are described by a pair of values, low and high: the first is the lowest index in the section of interest and the second the highest. In addition a third element is selected, termed the pivot. The first step is to swap the element at the pivot location with the element in the first position; this moves the pivot value out of the way of the partition step. The variable i is set to the next position and the variable j to high. Then i is advanced past values no larger than the pivot while j is moved backwards past values larger than the pivot; whenever the two stop on an out-of-place pair, those two values are exchanged. When i and j cross, the pivot is swapped from the first position into its final location, leaving the smaller values on its left and the larger values on its right.
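A sketch that follows this description (the exact details of the course's version may differ; this is an illustration):

```python
def partition(data, low, high):
    """Partition data[low..high]; the value at low is the pivot (already swapped there)."""
    pivot = data[low]
    i = low + 1
    j = high
    while i <= j:
        while i <= j and data[i] <= pivot:   # Advance i past values <= pivot.
            i += 1
        while i <= j and data[j] > pivot:    # Move j back past values > pivot.
            j -= 1
        if i < j:                            # Out-of-place pair: exchange them.
            data[i], data[j] = data[j], data[i]
    # j now rests on the last value <= pivot; swap the pivot into its final spot.
    data[low], data[j] = data[j], data[low]
    return j

values = [3, 8, 1, 5, 2]
p = partition(values, 0, len(values) - 1)
print(values, p)  # [1, 2, 3, 5, 8] 2 -- smaller values left of index 2, larger right
```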
Why is the pivot swapped to the start of the array? Why not just leave it where it is? Give an example where this would lead to trouble.
Swapping the pivot to the first position moves it out of the way of the partition step. If the pivot were left where it was, the exchanges performed while partitioning could move it to some other, unknown location, and at the end we would not know where it is or be able to swap it into the spot between the two sections. For example, if the pivot started in the middle of the section and an exchange of an out-of-place pair happened to move it toward one end, the final step of swapping the pivot into its correct position would swap the wrong value.
Explain in your own words how the shell sort algorithm gets around this limitation.
To avoid this inevitable limit, elements must be able to "jump" more than one location in the search for their final destination. Shell sort does this by insertion-sorting elements that are a fixed gap apart (for example, every eighth element), then repeating with smaller and smaller gaps until the gap is 1. Each pass moves values a long way quickly, so by the final gap-1 pass the data is nearly sorted and that pass is fast.
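A minimal sketch of shell sort with a simple halving gap sequence (illustrative; other gap sequences are common):

```python
def shell_sort(data):
    """Gap insertion sort: sort elements a gap apart, shrinking the gap to 1."""
    gap = len(data) // 2
    while gap > 0:
        # Insertion sort, but comparing/moving elements 'gap' positions apart.
        for i in range(gap, len(data)):
            value = data[i]
            j = i
            while j >= gap and data[j - gap] > value:
                data[j] = data[j - gap]   # Shift the larger element 'gap' slots right.
                j -= gap
            data[j] = value
        gap //= 2                         # Shrink the gap for the next pass.

values = [23, 4, 17, 8, 42, 15, 1, 9]
shell_sort(values)
print(values)  # [1, 4, 8, 9, 15, 17, 23, 42]
```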
Can a linear search be performed on an unordered list? Can a binary search?
Yes, a linear search can be performed on an unordered list, since it examines every element anyway. No, a binary search cannot: it depends on the list being sorted so that half of the remaining range can be discarded at each step.
Suppose an algorithm is O(log n), where n is the input size. If the size of the input is doubled, how will the execution time change?
You go from c*log(n) to c*log(2n) = c*(log 2 + log n), which with base-2 logarithms is simply c + c*log n. The execution time grows only by a constant, one extra step.