Sorting Algorithms (n = # of records to be sorted)

Selection Sort

An in-place comparison sort with O(n^2) complexity, making it inefficient on large lists; it generally performs worse than the similar insertion sort. It is noted for its simplicity, and it has performance advantages over more complicated algorithms in certain situations. The algorithm finds the minimum value, swaps it with the value in the first position, and repeats these steps for the remainder of the list. It does no more than n swaps, and thus is useful where swapping is very expensive.
Best: n^2 Average: n^2 Worst: n^2 Memory: 1 Stable: No Method: Selection Note: Stable with O(n) extra space, for example using lists
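A minimal Python sketch of the idea (the name selection_sort is an illustrative choice, not from the source):

def selection_sort(a):
    """Sort list a in place by repeatedly selecting the minimum value."""
    n = len(a)
    for i in range(n - 1):
        m = i                        # index of the smallest element seen so far
        for j in range(i + 1, n):
            if a[j] < a[m]:
                m = j
        if m != i:                   # at most one swap per pass, so <= n-1 swaps total
            a[i], a[m] = a[m], a[i]
    return a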

In-place Merge Sort

An in-place algorithm (in situ, from the Latin) transforms its input using only a small, constant amount of extra storage space; the input is usually overwritten by the output as the algorithm executes. Merge sort can be implemented this way as a stable sort, based on stable in-place merging.
Best: - Average: - Worst: n (log n)^2 Memory: 1 Stable: Yes Method: Merging

Bubble Sort and Variants (category)

BUBBLE SORT, SHELL SORT, COMB SORT. Bubble sort, and variants such as the cocktail sort, are simple but highly inefficient sorts. They are frequently seen in introductory texts and are of some theoretical interest due to ease of analysis, but they are rarely used in practice and are primarily of recreational interest. Some variants, such as the Shell sort, have open questions about their behavior.

Library Sort

Best: - Average: n log n Worst: n^2 Memory: n Stable: Yes Method: Insertion

Cycle Sort

Best: - Average: n^2 Worst: n^2 Memory: 1 Stable: No Method: Selection

Cubesort

Best: n Average: n log n Worst: n log n Memory: n Stable: Yes Method: Insertion Note: Makes n comparisons when the data is already sorted or reverse sorted

Binary Tree Sort

Best: n log n Average: n log n Worst: n log n (balanced) Memory: n Stable: Yes Method: Insertion Note: When using a self-balancing binary search tree

Timsort

Best: n Average: n log n Worst: n log n Memory: n Stable: Yes Method: Insertion & Merging Note: Makes n comparisons when the data is already sorted or reverse sorted

Introsort

Best: n log n Average: n log n Worst: n log n Memory: log n Stable: No Method: Partitioning & Selection

Bubble Sort

Bubble sort is a simple sorting algorithm. The algorithm starts at the beginning of the data set. It compares the first two elements, and if the first is greater than the second, it swaps them. It continues doing this for each pair of adjacent elements to the end of the data set. It then starts again with the first two elements, repeating until no swaps have occurred on the last pass. This algorithm's average and worst-case performance is O(n^2), so it is rarely used to sort large, unordered data sets. Bubble sort can be used to sort a small number of items (where its asymptotic inefficiency is not a high penalty). It can also be used efficiently on a list of any length that is nearly sorted (that is, the elements are not significantly out of place). For example, if every element is out of place by at most one position (e.g. 0123546789 and 1032547698), bubble sort's exchanges will get them in order on the first pass, the second pass will find all elements in order, and the sort will take only 2n time.
Best: n Average: n^2 Worst: n^2 Memory: 1 Stable: Yes Method: Exchanging Note: Tiny code size
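A minimal Python sketch (bubble_sort is an illustrative name); the early exit when a pass makes no swaps is what gives the O(n) best case on nearly sorted data:

def bubble_sort(a):
    """Repeatedly sweep the list, swapping adjacent out-of-order pairs."""
    n = len(a)
    swapped = True
    while swapped:
        swapped = False
        for i in range(n - 1):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                swapped = True
        n -= 1   # the largest remaining element has bubbled into place
    return a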

Bucket Sort

Bucket sort is a divide-and-conquer sorting algorithm that generalizes counting sort by partitioning an array into a finite number of buckets. Each bucket is then sorted individually, either using a different sorting algorithm or by recursively applying the bucket sort. Because bucket sort must use a limited number of buckets, it is best suited to data sets of limited scope; it would be unsuitable for data with a lot of variation, such as social security numbers.
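A minimal sketch, assuming the inputs are floats spread over [0, 1) (the function name and bucket count are illustrative):

def bucket_sort(a, num_buckets=10):
    """Distribute values in [0, 1) into buckets, sort each, and concatenate."""
    buckets = [[] for _ in range(num_buckets)]
    for x in a:
        buckets[int(x * num_buckets)].append(x)   # map each value to its bucket
    result = []
    for b in buckets:
        result.extend(sorted(b))   # any sort works per bucket; one could also recurse
    return result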

Distribution Sort (category)

COUNTING SORT BUCKET SORT RADIX SORT Distribution sort refers to any sorting algorithm where data are distributed from their input to multiple intermediate structures which are then gathered and placed on the output. For example, both bucket sort and flashsort are distribution based sorting algorithms. Distribution sorting algorithms can be used on a single processor, or they can be a distributed algorithm, where individual subsets are separately sorted on different processors, then combined. This allows external sorting of data too large to fit into a single computer's memory.

Comb Sort

Comb sort improves on bubble sort. The basic idea is to eliminate turtles, i.e. small values near the end of the list, since in a bubble sort these slow the sorting down tremendously. (Rabbits, large values near the beginning of the list, do not pose a problem in bubble sort.)
Best: n Average: n log n Worst: n^2 Memory: 1 Stable: No Method: Exchanging Note: Small code size
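A minimal Python sketch; the shrink factor 1.3 is the commonly cited choice, and once the gap reaches 1 the algorithm behaves like bubble sort:

def comb_sort(a):
    """Bubble sort with a shrinking gap, so turtles move more than one step per pass."""
    gap = len(a)
    swapped = True
    while gap > 1 or swapped:
        gap = max(1, int(gap / 1.3))   # shrink the gap each pass
        swapped = False
        for i in range(len(a) - gap):
            if a[i] > a[i + gap]:
                a[i], a[i + gap] = a[i + gap], a[i]
                swapped = True
    return a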

Counting Sort

Counting sort is applicable when each input is known to belong to a particular set, S, of possibilities. The algorithm runs in O(|S| + n) time and O(|S|) memory, where n is the length of the input. It works by creating an integer array of size |S| and using the ith bin to count the occurrences of the ith member of S in the input. Each input is then counted by incrementing the value of its corresponding bin. Afterward, the counting array is looped through to arrange all of the inputs in order. This sorting algorithm often cannot be used, because S needs to be reasonably small for it to be efficient; but the algorithm is extremely fast and demonstrates great asymptotic behavior as n increases. It can also be modified to provide stable behavior.
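A minimal sketch, assuming S is the set of integers 0..k-1 (names are illustrative):

def counting_sort(a, k):
    """Sort non-negative integers below k in O(n + k) time."""
    counts = [0] * k                 # one bin per member of S
    for x in a:
        counts[x] += 1               # count occurrences
    out = []
    for value, c in enumerate(counts):
        out.extend([value] * c)      # emit each value as many times as it was seen
    return out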

Insertion Sort

Efficient for small lists and mostly sorted lists, and often used as part of more sophisticated algorithms. It takes elements from the list one by one and inserts them in their correct position in a new sorted list. In arrays, the new list and the remaining elements can share the array's space, but insertion is expensive, requiring all following elements to be shifted over by one. SHELL SORT is a variant of insertion sort that is more efficient for larger lists.
Best: n Average: n^2 Worst: n^2 Memory: 1 Stable: Yes Method: Insertion
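A minimal Python sketch of the in-array variant, where the sorted prefix shares space with the unsorted remainder:

def insertion_sort(a):
    """Grow a sorted prefix; shift larger elements right to make room for a[i]."""
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]          # shift one position to the right
            j -= 1
        a[j + 1] = key
    return a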

Heapsort

Heapsort is a much more efficient version of selection sort. It also works by determining the largest (or smallest) element of the list, placing that at the end (or beginning) of the list, then continuing with the rest of the list, but it accomplishes this task efficiently by using a data structure called a heap, a special type of binary tree. Once the data list has been made into a heap, the root node is guaranteed to be the largest (or smallest) element. When it is removed and placed at the end of the list, the heap is rearranged so the largest element remaining moves to the root. Using the heap, finding the next largest element takes O(log n) time, instead of O(n) for a linear scan as in simple selection sort. This allows heapsort to run in O(n log n) time, and this is also the worst-case complexity.
Best: n log n Average: n log n Worst: n log n Memory: 1 Stable: No Method: Selection
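A minimal Python sketch using an explicit sift-down (array indices encode the binary tree: the children of index i are 2i+1 and 2i+2):

def heapsort(a):
    """Build a max-heap in place, then repeatedly move the root to the end."""
    def sift_down(root, end):
        # Restore the heap property below 'root', using only indices <= end.
        while 2 * root + 1 <= end:
            child = 2 * root + 1
            if child + 1 <= end and a[child] < a[child + 1]:
                child += 1           # pick the larger child
            if a[root] < a[child]:
                a[root], a[child] = a[child], a[root]
                root = child
            else:
                return
    n = len(a)
    for start in range(n // 2 - 1, -1, -1):
        sift_down(start, n - 1)      # heapify
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]  # largest remaining element goes to the end
        sift_down(0, end - 1)
    return a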

Simple Sorts (category)

INSERTION SORT SELECTION SORT Two of the simplest sorts are insertion sort and selection sort, both of which are efficient on small data, due to low overhead, but not efficient on large data. Insertion sort is generally faster than selection sort in practice, due to fewer comparisons and good performance on almost-sorted data, and thus is preferred in practice, but selection sort uses fewer writes, and thus is used when write performance is a limiting factor.

Efficient Sorts (category)

MERGE SORT, HEAPSORT, QUICKSORT. Practical general sorting algorithms are almost always based on an algorithm with average complexity (and generally worst-case complexity) of O(n log n), of which the most common are heapsort, merge sort, and quicksort. Each has advantages and drawbacks: a simple implementation of merge sort uses O(n) additional space, and a simple implementation of quicksort has O(n^2) worst-case complexity. These problems can be solved or ameliorated at the cost of a more complex algorithm. While these algorithms are asymptotically efficient on random data, various modifications are used for practical efficiency on real-world data. First, the overhead of these algorithms becomes significant on smaller data, so a hybrid algorithm is often used, commonly switching to insertion sort once the data is small enough. Second, the algorithms often perform poorly on already sorted or almost sorted data, which are common in real-world data and can be sorted in O(n) time by appropriate algorithms. Finally, they may also be unstable, and stability is often a desirable property in a sort. Thus more sophisticated algorithms are often employed, such as Timsort (based on merge sort) or introsort (based on quicksort, falling back to heapsort).

Merge Sort

Merge sort takes advantage of the ease of merging already sorted lists into a new sorted list. It starts by comparing every two elements (i.e., 1 with 2, then 3 with 4...) and swapping them if the first should come after the second. It then merges each of the resulting lists of two into lists of four, then merges those lists of four, and so on, until at last two lists are merged into the final sorted list. Merge sort scales well to very large lists, because its worst-case running time is O(n log n). It is also easily applied to lists, not only arrays, as it requires only sequential access, not random access. However, it has additional O(n) space complexity and involves a large number of copies in simple implementations. Merge sort has seen a relatively recent surge in popularity for practical implementations, due to its use in the sophisticated algorithm Timsort, which is used for the standard sort routine in the programming languages Python and Java (as of JDK7). Merge sort itself is the standard routine in Perl, among others, and has been used in Java at least since 2000, in JDK1.3.
Best: n log n Average: n log n Worst: n log n Memory: n (worst case) Stable: Yes Method: Merging
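A minimal top-down Python sketch (library implementations avoid much of the copying this version does):

def merge_sort(a):
    """Split in half, sort each half recursively, then merge (stable)."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])
    right = merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:      # <= keeps equal elements in input order
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])          # at most one of these is non-empty
    merged.extend(right[j:])
    return merged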

Quicksort

Quicksort is a divide-and-conquer algorithm which relies on a partition operation: to partition an array, an element called a pivot is selected. All elements smaller than the pivot are moved before it and all greater elements are moved after it. This can be done efficiently in linear time and in place. The lesser and greater sublists are then recursively sorted. This yields an average time complexity of O(n log n) with low overhead, and thus this is a popular algorithm. Efficient implementations of quicksort (with in-place partitioning) are typically unstable sorts and somewhat complex, but are among the fastest sorting algorithms in practice. Together with its modest O(log n) space usage, this makes quicksort one of the most popular sorting algorithms, available in many standard programming libraries. The important caveat about quicksort is that its worst-case performance is O(n^2); while this is rare, in naive implementations (choosing the first or last element as pivot) it occurs for sorted data, which is a common case. The most complex issue in quicksort is thus choosing a good pivot element, as consistently poor choices of pivots can result in drastically slower O(n^2) performance, but a good choice of pivots yields O(n log n) performance, which is asymptotically optimal. For example, if at each step the median is chosen as the pivot, then the algorithm works in O(n log n). Finding the median, such as by the median-of-medians selection algorithm, is however an O(n) operation on unsorted lists and therefore exacts significant overhead when sorting. In practice, choosing a random pivot almost certainly yields O(n log n) performance.
Best: n log n Average: n log n Worst: n^2 Memory: log n (on average); n (worst case) Stable: Typically no, but stable versions exist Method: Partitioning
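A minimal Python sketch with a random pivot; for clarity it builds new lists rather than partitioning in place, which is what library implementations do:

import random

def quicksort(a):
    """Partition around a random pivot, then recurse on each side."""
    if len(a) <= 1:
        return a
    pivot = random.choice(a)
    smaller = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    larger = [x for x in a if x > pivot]
    return quicksort(smaller) + equal + quicksort(larger)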

Radix Sort

Radix sort is an algorithm that sorts numbers by processing individual digits: n numbers consisting of k digits each are sorted in O(n · k) time. Radix sort can process the digits of each number either starting from the least significant digit (LSD) or starting from the most significant digit (MSD). The LSD algorithm first sorts the list by the least significant digit while preserving relative order using a stable sort. Then it sorts by the next digit, and so on from the least significant to the most significant, ending up with a sorted list. While LSD radix sort requires the use of a stable sort, the MSD radix sort algorithm does not (unless stable sorting is desired); in-place MSD radix sort is not stable. It is common for the counting sort algorithm to be used internally by the radix sort. A hybrid sorting approach, such as using insertion sort for small bins, improves the performance of radix sort significantly.
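A minimal LSD sketch for non-negative integers, using a stable bucket pass per digit (the base and names are illustrative):

def radix_sort(a, base=10):
    """Sort non-negative integers digit by digit, least significant first."""
    if not a:
        return a
    exp = 1
    while max(a) // exp > 0:
        buckets = [[] for _ in range(base)]
        for x in a:
            buckets[(x // exp) % base].append(x)   # appending keeps each pass stable
        a = [x for b in buckets for x in b]        # gather buckets in digit order
        exp *= base
    return a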

Shell Sort

Shell sort was invented by Donald Shell in 1959. It improves upon bubble sort and insertion sort by moving out-of-order elements more than one position at a time. One implementation can be described as arranging the data sequence in a two-dimensional array and then sorting the columns of the array using insertion sort.
Best: n log n Average: n (log n)^2 or n^3/2 Worst: Depends on gap sequence; best known is n (log n)^2 Memory: 1 Stable: No Method: Insertion Note: Small code size, no use of call stack, reasonably fast; useful where memory is at a premium, such as embedded and older mainframe applications
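A minimal Python sketch; the halving gap sequence below is just the original, simplest choice (better gap sequences give the bounds quoted above):

def shell_sort(a):
    """Gapped insertion sort: far-apart elements move many positions per exchange."""
    gap = len(a) // 2
    while gap > 0:
        for i in range(gap, len(a)):
            key = a[i]
            j = i
            while j >= gap and a[j - gap] > key:
                a[j] = a[j - gap]    # move gap positions at a time
                j -= gap
            a[j] = key
        gap //= 2
    return a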

