divide and conquer algorithm
Binary search 3
If X = L[mid], then X is found in the list and its location index mid is returned as the search result. Otherwise, if X < L[mid] the first half is checked further, else the second half is checked. This process continues recursively until either X is found or the whole list has been checked.
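The process described above can be sketched as a recursive function (a minimal illustration; the function name and parameters are ours, not from the slides):

```python
def binary_search(L, X, lo=0, hi=None):
    """Return the index of X in sorted list L, or None if absent."""
    if hi is None:
        hi = len(L) - 1
    if lo > hi:                 # whole region checked: X is not in the list
        return None
    mid = (lo + hi) // 2
    if X == L[mid]:
        return mid              # found: return its location index
    if X < L[mid]:
        return binary_search(L, X, lo, mid - 1)   # check the first half
    return binary_search(L, X, mid + 1, hi)       # check the second half
```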
divide and conquer algorithm
If the solution to P1 or P2 is unavailable, P1 or P2 should be divided further into even smaller instances, i.e. subproblems of the subproblems. The dividing process continues until the solutions to the subproblems are found, and the combination process then starts returning the partial solutions to each subproblem at the higher levels.
quicksort explanation
Divides the list into sublists: the region is partitioned into two subgroups; elements less than the pivot element are put in the left subregion, elements greater than the pivot in the right subregion, and the pivot element is repositioned to sit between the two subregions.
number of iterations made by binary search algorithm in a list of n items is
roughly log base 2 of n
Merge
# of divisions and # of merges = n-1
What is the pseudo code for merge sort?
#base case: return arr if arr.size is less than 2 #set mid to arr.length / 2 #set arr1 = arr[0...mid] #set arr2 = arr[mid..-1] #recursively sort arr1 and arr2, then merge them
Heapsort requires what to execute?
A heap, Duh
pros and cons of delete by copying
Advantages - Does not increase the height • Problems - May become unbalanced if the successor is always used -- Alternate the use of successor and predecessor.
Sort
An algorithm used to arrange elements of an array into natural order, such as numeric or alphabetical.
Building an Expression Tree
Build the expression from the postfix form Expression Tree constructor has 3 parameters: reference to data item reference to left child reference to right child Uses a stack of ExpressionTree objects *review picture slide 68
O(g(n))
Class of functions f(n) that grow NO FASTER than g(n)
Subtrees of a node
Consists of a child node and all its descendants A subtree is itself a tree Node can have many subtrees
Binary search 1
Given a sorted list and key X, we want to know if X is in the list. If yes, the position of X in the list is returned. Otherwise, a null is returned.
Linked Binary Tree Implementation
Linked structure of nodes represents the binary tree root: reference to the root node count: keeps track of number of nodes in the tree Binary Tree Node represents each node of tree
Base Case
If the list has one element or fewer, it is already sorted
Degree/arity of a tree
Maximum of the degrees of the tree's nodes
(index * 2) + 2
Right child
Path
Sequence of edges leading from one node to another
Mergesort is what kind of sort, stable or nonstable
Stable
Divide, Conquer, Combine
What are the steps for each level of divide and conquer strategy?
Define function
def mergeSort(alist):
Define partition
def partition(alist,first,last):
Define quick sort
def quickSort(alist):
Define quick sort helper
def quickSortHelper(alist,first,last):
Insertion
relatively stable, best to verify sorted O (N), worst when random or reversed O (N^2)
Operations
removeLeftSubtree removeRightSubtree removeAllElements size contains isEmpty find toString iteratorInOrder iteratorPreOrder iteratorPostOrder iteratorLevelOrder
top-down approach(quicksort)
repeatedly splitting larger lists into smaller ones
search in BST recursive
if(p==NULL) return NULL; if(el>p->element) return search(p->right,el); if(el<p->element) return search(p->left,el); if(el==p->element) return &p->element;
delete a tree
if(ptr is not NULL) { delete(left Child) delete(right child) delete ptr; }
Caveat
requires extra space to hold the halves. Problematic on large data sets
Disadvantages
requires heap data structure, is not stable, slower than quicksort
Return splitpoint
return rightmark
Assign right list
righthalf = alist[mid:]
Decrement rightmark
rightmark -= 1
Assign rightmark
rightmark = last
binary search algorithm method
uses divide and conquer to search through a (sorted) list first look at item in location n/2, then look at region before or after
get height
if (ptr is null)
    return 0
endif
leftHeight = height(ptr->leftChild)
rightHeight = height(ptr->rightChild)
if (leftHeight > rightHeight)
    return leftHeight + 1
else
    return rightHeight + 1
endif
Merge
uses twice as much memory due to temporary array storage
Heap sort will always take
O(n log n) time
Merge
no sorting or decision making about ordering is done until the division process is done and merge process takes over
Quick dis
not stable; even if the list is already sorted, it still takes O(n log n)
Con of Heap Sort
not stable and larger constant factors than quicksort
Max height
number of nodes
LinkedBinaryTree Class
protected BinaryTreeNode<T> root;
protected int count;
Attributes are protected so they can be accessed directly in any subclass of the LinkedBinaryTree class
//Empty binary tree constructor
public LinkedBinaryTree() {
    count = 0;
    root = null;
}
//Binary tree with root element
public LinkedBinaryTree(T element) {
    count = 1;
    root = new BinaryTreeNode<T>(element);
}
Bubble
volatile, simple and slow, best to verify sorted O (N), worst when random or reversed O (N^2)
Quick Sort
when L and R markers cross and partitions are of single value, then the current halves are sorted
Recursive call for leftmark
quickSortHelper(alist,first,splitpoint-1)
Recursive call for rightmark
quickSortHelper(alist,splitpoint+1,last)
Loop to check both halves is at base case
while i < len(lefthalf) and j < len(righthalf):
Loop to check uninserted value in left half
while i < len(lefthalf):
Loop to check uninserted value in right half
while j < len(righthalf):
Loop on leftmark
while leftmark <= rightmark and alist[leftmark] <= pivotvalue:
Loop while false
while not done:
two well-known sorting algorithms
quicksort: divide list into big values and small values, then sort each part mergesort: sort subgroups of size 2, merge into sorted groups of size 4, merge into sorted groups of size 8,...
Call quick sort helper
quickSortHelper(alist,0,len(alist)-1)
bottom up approach (Merge sort)
recombine smaller lists into larger ones
recursive definitions of mathematical functions or sequences. For example: g(n) = g(n-1) + 2n -1 g(0) = 0
recurrence
Name for the case when subproblems are large enough to be solved recursively.
recursive case
winner tree
Winner tree: the winner of each comparison occupies the parent node. First determine the minimum of each of the k keys (k comparisons), then compare upward through the tree structure to find the overall minimum (a small victory! k-1 comparisons), repeating until the data in the keys is used up (at most n-2 more times); total cost O(k) + O(n·ln(k)).
sequential access
Sequential access: to visit the Nth location, you must start from the beginning and step through the elements one at a time until reaching N.
Merge Sort
A recursive algorithm that continually splits a list in half until each half is either empty or one item, at which point the halves are merged together in natural order, working back up the division process until the entire list has been sorted.
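The deck's scattered mergeSort fragments (mid = len(alist)//2, lefthalf/righthalf, the three while loops) fit together into roughly this runnable sketch:

```python
def mergeSort(alist):
    if len(alist) > 1:                      # base case: size 0 or 1 is sorted
        mid = len(alist) // 2
        lefthalf = alist[:mid]
        righthalf = alist[mid:]
        mergeSort(lefthalf)                 # split left list to base case
        mergeSort(righthalf)                # split right list to base case
        i = j = k = 0
        while i < len(lefthalf) and j < len(righthalf):
            if lefthalf[i] <= righthalf[j]: # <= keeps the sort stable
                alist[k] = lefthalf[i]; i += 1
            else:
                alist[k] = righthalf[j]; j += 1
            k += 1
        while i < len(lefthalf):            # copy uninserted left values
            alist[k] = lefthalf[i]; i += 1; k += 1
        while j < len(righthalf):           # copy uninserted right values
            alist[k] = righthalf[j]; j += 1; k += 1
```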
Time complexity for Binary Heap Queue Array List or Node List: Add Remove Peek
Add O(log N) Remove O(log N) Peek O(1)
Time complexity for Balanced BST Queue Array List or Node List: Add Remove Peek
Add O(log N) Remove O(log N) Peek O(log N)
What is insert in heap? How to insert? time complexity?
Adds the new element to the end of the heap and then uses shiftUp() to fix the heap. How? Append the new value to the end of the array, then shift it up as necessary - O(log n)
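A minimal Python sketch of this insert/shift-up pair for a min-heap (the function names are illustrative, not a particular library's API):

```python
def shift_up(heap, i):
    """Move heap[i] up until the min-heap property holds again."""
    while i > 0:
        parent = (i - 1) // 2
        if heap[i] < heap[parent]:      # smaller than parent: swap upward
            heap[i], heap[parent] = heap[parent], heap[i]
            i = parent
        else:
            break                       # heap property restored

def insert(heap, value):
    heap.append(value)                  # add at the end (next free leaf)
    shift_up(heap, len(heap) - 1)       # O(log n) path back toward the root
```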
What data structure should we use to implement an AVL tree?
An AVL tree node has a balance factor, data, and two pointers to its left and right children.
What is heap?
A heap is a binary tree that lives inside an array; it doesn't use parent/child pointers.
These sorts have what kind of time complexity and how is it different from insertion, bubble and the other one?
Heap, Merge and Quick have a more efficient time complexity of O(n log n) (quasilinear) versus the quadratic time complexity of bubble, insertion and selection.
complexity of insert in BST
O(height(T))
What is the run time complexity of inserting into a heap?
O(log n)
Average Case
O(n log n) log linear
Best Case
O(n log n) log linear
What is the build heap complexity?
O(n)
What is the complexity of traversal?
O(n)
What is the complexity for search in heap?
O(n), search the array.
Worst Case
O(n^2) quadratic
Worst Case
O(n^2) quadratic
divide and conquer
break problem into smaller pieces and solve smaller sub-problems
Given a node with position i in a complete tree, what are the positions of its child nodes?
left child = 2i+1, right child = 2i+2
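These index formulas are easy to check on a small array-stored heap (an illustrative sketch; the helper names are ours):

```python
# Array-based complete tree: the children of index i sit at 2*i+1 and 2*i+2,
# and the parent of index i (for i > 0) is at (i - 1) // 2.
heap = [1, 3, 9, 7, 5]          # an example min-heap stored in an array

def left(i):   return 2 * i + 1
def right(i):  return 2 * i + 2
def parent(i): return (i - 1) // 2
```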
Assign left list
lefthalf = alist[:mid]
Increment leftmark
leftmark += 1
Assign leftmark
leftmark = first+1
slicing notation
list[1:3]
What is the time complexity for shift up and down?
log(n)
Insertion
lower values sorted first (inserted to the left)
Call merge sort passing in list
mergeSort(alist)
Split left list to base case
mergeSort(lefthalf)
Split right list to base case
mergeSort(righthalf)
Find mid point
mid = len(alist)//2
Quick Sort
moves lesser values to left of pivot, greater values to right of pivot, at which point, pivot value is in correct place
Required Operations
n for size n
Insertion
nested loop to insert each unsorted item in a list into its proper position
Priority queues
only the highest priority element can be accessed
Quick Sort
partition, swap
Quick Sort
partitions lists and swaps values on either side of a pivot value
Assign pivot value
pivotvalue = alist[first]
What is the time complexity for heap sort?
O(n log n)
Array Based Implementation: At what index is parent stored
(i-1)/2
Balanced trees
-binary trees -for all nodes: |left_subTree height - right_subTree height | <= 1
2-3-4 Operations:
-construction, empty, search -insert/delete must maintain as a 2-3-4 tree
How to: Binary Heap Insertion
-insert the new element in the one and only location that maintains the complete shape (fill left to right) -swap values as necessary along the leaf-to-root path to maintain partial order
BST vs 2-3-4
-insertion in a BST might change the height of the tree if it's a leaf node, whereas in a 2-3-4 tree, all leaf nodes are on the same level
M Node Tree
-stores m-1 data values -has links to m subtrees
Trees
A data structure! --a tree is a collection of nodes. Unless empty, it begins at the root node --terms, root, siblings, grandparent/grandchild, parent/child, ancestor
Bubble Sort
A simple (and relatively slow) sorting algorithm that repeatedly loops through a list of values, comparing adjacent values and swapping them if they are in the wrong order.
Quick Sort
An efficient sorting algorithm, serving as a systematic method for placing the elements of an array in order. Sometimes called partition-exchange sort.
Internal node
Any node that is not a leaf node
Mergesort Time Efficiency
Big Theta(n log n)
Quicksort is what kind of sort- stable or nonstable
Nonstable
Length of path
Number of edges on the path
mechanism of divide and conquer
This approach divides an instance of a problem P into at least two smaller instances, for example P1 and P2. P1 and P2 are the same problem in nature as the original P, but much smaller in size (e.g. half the original size or less); these smaller problems are called subproblems. If the solutions to the smaller instances are available individually, the solution to the original P can be derived by simply combining the solutions to the subproblems P1 and P2.
Merge
divide and conquer a list, merging portions of the list in sorted order
Quick Sort
divide elements into sublists sorted around a pivot value
Insertion
saves a copy (temp) instead of swap function
size recursive
if (p == NULL) {
    return 0;
} else {
    return size(p->left) + 1 + size(p->right);
}
ALL BigO
Not understood (4)
LSD radix sort
Least Significant Digit radix sort: first create r buckets numbered 0 to r-1 (e.g. r = 10 in base 10); let a variable d store the number of digits of the largest data value (1023 has 4 digits); then, starting from the ones digit, place items into the bucket matching that digit and collect them back in bucket order (0 to r-1); repeat for the tens digit, the hundreds digit, ... until d passes are done.
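A runnable sketch of the LSD procedure on this card, assuming non-negative integers (the function and variable names are ours):

```python
def lsd_radix_sort(data, r=10):
    """LSD radix sort: d passes of bucket distribution and collection."""
    if not data:
        return data
    d = len(str(max(data)))               # digits of the largest value
    for pos in range(d):                  # ones, tens, hundreds, ...
        buckets = [[] for _ in range(r)]  # r buckets numbered 0..r-1
        for x in data:
            buckets[(x // r**pos) % r].append(x)
        data = [x for b in buckets for x in b]  # collect in order 0..r-1
    return data
```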
decision tree
Decision tree: used to model a comparison process, for example to count the number of operations an algorithm performs.
Array Based Implementation: At what index is left child stored
2i+1
Array Based Implementation: At what index is right child stored
2i+2
Expression Trees
--Binary tree to represent arithmetic expressions --The leaves of an expression tree are operands, such as constants and variable names, and the other nodes contain operators. Infix: A*B+C A*(B+C) Postfix (RPN): AB*C+ ABC+* Prefix: +*ABC *A+BC *To go from infix to postfix, put the infix expression into an expression tree, and then traverse it in postorder to come out with the postfix expression.*
What is the common use for heap?
- For building priority queues. - The heap is the data structure supporting heap sort. - Heaps are fast for when you often need to compute the minimum (or maximum) element of a collection. - Impressing your non-programmer friends.
pros and cons of delete by merging
- May increase or decrease the height after deletion
How many kinds of heap?
- max-heap: parent nodes must always have a greater value than children. - min-heap: parent nodes must always have a value less than children.
What is the difference between binary search tree and heap?
- order of nodes: in a BST, the left child is always smaller than the node and the right child always larger; in a max-heap, children must be smaller; in a min-heap, children must be larger. - memory: traditional trees take up additional memory for node objects and pointers to left/right children, while a heap uses only a plain array and no pointers. - balancing: a binary search tree must be "balanced" so that most operations have O(log n) performance; for a heap we don't need the entire tree sorted, we just need the heap property fulfilled, and balancing isn't an issue, because the way the heap is structured guarantees O(log n) performance. - searching: searching a binary search tree is really fast; searching a heap is slow, since the purpose of a heap is to put the largest/smallest element on top for easy insert/delete.
What is shift up and down in heap?
- shift up: If the element is greater (max-heap) or smaller (min-heap) than its parent, it needs to be swapped with the parent. This makes it move up the tree. - shift down: If the element is smaller (max-heap) or greater (min-heap) than its children, it needs to move down the tree. This operation is also called "heapify".
AVL basic operations:
1) constructor, search traversal, empty 2) insert: keep balanced! 3) delete: keep balanced! 4) similar to BST
Binary Trees
--a tree in which no node can have more than 2 children --Useful in modeling processes where there are comparisons or an experiment has exactly two possible outcomes; is also useful when a test is performed repeatedly (coin toss, decision trees that are often used in AI, encoding/decoding messages in dots/dashes like morse code) *Tree height*: the height of a node is the length of the longest downward path to a leaf from that node (height of leaf nodes is zero, and height of the root node is the height of the tree) *Complete trees*: trees are complete when each level is completely filled except the bottom level. The leftmost positions are filled at the bottom level. Nice for array storage! However, array storage only for complete trees. *Balanced Trees* are binary trees, and this is true for all nodes in the tree: *|left_subTree height - right_subTree height| <= 1*
Complete trees
--each level is completely filled except the bottom level --the leftmost positions are filled at the bottom level --array storage is perfect for them, however if tree is not complete, you need to account for missing nodes, which may be very inefficient
How are trees useful?
--used to implement the file system of several popular operating systems --useful to evaluate arithmetic expressions --support searching operations in O(log N) average time and how to refine these ideas to obtain O(log N) worst case bounds --useful in modelling processes that have (2) outcomes (experiments or comparisons...example, coin toss experiment) --decision trees (expert systems in AI) --encoding/decoding messages in dots/dashes (morse code)
Linked Representation of binary trees
--uses space more efficiently --provides additional flexibility --each node has two links: one to the left child of the node, and one to the right child of the node
Characteristic of Binary Heap
-Complete Binary Tree - Partial Order Property
M Node Tree: 2-3-4 Tree
-Each node stores at most 3 data values (keys) -each non-leaf node is a 2 node, a 3 node, or a 4 node -All the leaves are on the same level
Insertion 2-3-4
-For a 2-3-4 tree, a new node is added to the top of the tree when the tree is full If tree is empty -create a 2-node containing the new item - initial root of the tree else --find the leaf node where the item should be inserted by repeatedly comparing the item with values in the node and following the appropriate link to a child --if there is room in the leaf node, add the item, otherwise... 1) split the 4-node into 2 nodes, one storing items less than the median value, the other storing items greater than the median. The median value is moved to a parent having these 2 nodes as children. 2) if the parent has no room for the median, split that 4-node in the same manner, continuing until either a parent is found with room or a full root is reached 3) if the root is a full 4-node, split it into two nodes and create a new parent 2-node, the new root
How to: Binary Heap Deletion
-Maintain complete shape by replacing the root value with the value in the lowest, right-most leaf. Then delete. -Swap values if necessary to maintain partial order Binary Heap ppt. Slide 13 for example.
Three possibilities for inductive step:
1) inorder Traverse(root -> left) print (root -> data) traverse(root -> right) ^^ this will result in a print of the data in ascending order 2) Preorder print (root -> data) traverse(root -> left) traverse (root -> right) 3) Postorder traverse(root -> left) traverse (root -> right) print (root -> data)
quick sort algo
CODE
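The deck's separate quick sort cards (pivotvalue = alist[first], the leftmark/rightmark loops, quickSortHelper) assemble into roughly this runnable sketch:

```python
def quickSort(alist):
    quickSortHelper(alist, 0, len(alist) - 1)

def quickSortHelper(alist, first, last):
    if first < last:
        splitpoint = partition(alist, first, last)
        quickSortHelper(alist, first, splitpoint - 1)   # left partition
        quickSortHelper(alist, splitpoint + 1, last)    # right partition

def partition(alist, first, last):
    pivotvalue = alist[first]          # first element as the pivot
    leftmark = first + 1
    rightmark = last
    done = False
    while not done:
        while leftmark <= rightmark and alist[leftmark] <= pivotvalue:
            leftmark += 1
        while rightmark >= leftmark and alist[rightmark] >= pivotvalue:
            rightmark -= 1
        if rightmark < leftmark:       # markers crossed: split point found
            done = True
        else:                          # values on wrong sides: swap them
            alist[leftmark], alist[rightmark] = alist[rightmark], alist[leftmark]
    # put the pivot at the split point, its final sorted position
    alist[first], alist[rightmark] = alist[rightmark], alist[first]
    return rightmark
```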
Non-recursive Inorder
1) Create an empty stack S 2) Initialize current node as root 3) push the current node to S and set current = current -> left until current is NULL 4) if current is NULL and stack is not empty, then a) pop the top item from stack b) print the popped item, set current = popped_item -> right c) go to step 3 5) if current is null and stack is empty, then we are done
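In Python, with a minimal illustrative Node class (the data/left/right attribute names are assumptions), the steps above look like:

```python
class Node:
    def __init__(self, data, left=None, right=None):
        self.data, self.left, self.right = data, left, right

def inorder_iterative(root):
    """Collect node values in inorder using an explicit stack."""
    stack, current, out = [], root, []
    while current is not None or stack:
        while current is not None:       # step 3: push and go left
            stack.append(current)
            current = current.left
        current = stack.pop()            # step 4a: pop the top item
        out.append(current.data)         # step 4b: visit it
        current = current.right         # then move to the right child
    return out                           # step 5: stack empty, done
```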
Divide-and-Conquer
1. Divide instance of problem into 2 or more smaller instances 2. Solve smaller instances 3. Obtain solution to original (larger) instance by combining these solutions Typically some additional work required to combine solutions
What's the mergesort synopsis?
1. Divide recursively until subarrays are 1 element in length (base case) 2. Merge the two subarrays up every level 3. Return the sorted array
Two Phases of Heap Sort
1. Transform initial arbitrary order of array into partial order 2. Transform partial order to total order
Heap Sort pseudocode overview
1. Utilize a heap data structure 2. Add items into heap 3. Extract items from heap a. store into new array b. possible in place implementation
What is a heap data structure
1. A binary tree 2. The value of each node is greater than or equal to the values of its children 3. Perfectly balanced: the leaves in the last level are in the leftmost positions
How to: Heap Sort
1. rearrange array to Max Heap 2. repeatedly move max element to final sorted place towards end of the array, heapify
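A sketch of these two steps, using an illustrative sift_down helper (the names are ours, not from the slides):

```python
def sift_down(a, i, n):
    """Restore the max-heap property for a[i] within a[0:n]."""
    while True:
        largest, l, r = i, 2 * i + 1, 2 * i + 2
        if l < n and a[l] > a[largest]:
            largest = l
        if r < n and a[r] > a[largest]:
            largest = r
        if largest == i:
            return
        a[i], a[largest] = a[largest], a[i]
        i = largest

def heap_sort(a):
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):  # step 1: build the max heap
        sift_down(a, i, n)
    for end in range(n - 1, 0, -1):      # step 2: move max to final place
        a[0], a[end] = a[end], a[0]
        sift_down(a, 0, end)             # re-heapify the shrunken prefix
```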
Max # of nodes
2^(height(tree)) - 1
Max # of leaves
2^(height(tree)-1)
AVL trees must be balanced after every insertion and deletion
4 cases of imbalance: 1) insertion was in left subtree of left child of N 2) insertion was in right subtree of right child of N 3) Insertion was in right subtree of left child of N 4) insertion was in left subtree of right child of N Case 1 and 2 require a single rotation for balancing Case 3 and 4 require double rotation for balancing
Selection Sort
A sorting routine that uses a nested loop process to systematically select the best value among the unsorted elements of the array for the next position in the array, starting with position zero all the way to the end.
Insertion Sort
A type of sort that uses a nested loop process to systematically find the best place in the current array for an unsorted item.
Time complexity for Unsorted Queue Array List or Node List: Add Remove Peek
Add O(1) Remove O(N) Peek O(N)
Time complexity for Sorted Queue Array List or Node List: Add Remove Peek
Add O(N) Remove O(1) Peek O(1)
Steps in the divide and conquer approach
An algorithm taking the divide and conquer approach usually includes the following main steps: 1. Divide an instance of a problem into smaller instances 2. Solve the smaller instances recursively 3. Combine, if necessary, the solutions of the subproblems to form the solution to the original problem.
Step Five
When rightmark becomes less than leftmark, the split point is found and that item will be swapped with the pivot value
What is T(n) of insert algorithm in a BST?
Average complexity O(logN)
What is T(n) of search algorithm in a BST? Why?
Average complexity O(logN), because each comparison descends one level and a balanced tree's height is O(logN)
Run time of Binary Search Trees
Average run time of most operations is O(log N).
Quicksort advantages
Average runtime is 2 to 3 times faster than mergesort, and it can sort in place
BST vs AVL
BST: -no guarantee of balance -potential for lopsided +less complex code AVL: +tree is balanced +guarantee O(logn) -more complicated code -frequent rotations
what is the balance factor of a node?
Balance factor b = height(left subtree) - height(right subtree); every node must satisfy -1 ≤ b ≤ 1
Balanced Trees: AVL Trees
Balance factor: a node's BF is the difference between the heights of its left and right subtrees AVL Tree: A binary search tree in which, for every node the balance factor is either 0 or +1 or -1 (height of an empty tree is defined as -1)
what is the best and worst complexity for search in a bst?
Based on the number of comparisons. Worst case: O(n) -- when the tree is off balance and shaped like a linked list. Best case: O(logn) -- when the tree is complete. Average case: O(logN) -- close to the best case.
Quicksort Time Analysis
Best case: Big Theta(n log n) Split in the middle Worst case: Big Theta(n^2) Sorted array! Average case: Big Theta(n log n) Random arrays
Quick Sort
Best: O (N log N) stays balanced Avg: O (N log N) stays fairly balanced Worst: O (N^2) all values on one side
Bubble
Best: O (N) list already in order (1 pass) Avg: O (N^2) random Worst: O (N^2) reverse
Insertion
Best: O (N) list already in order (3 steps each pass) Avg: O (N^2) random Worst: O (N^2) reverse
Selection
Best: O (N^2) list in order Avg: O (N^2) random Worst: O (N^2) reverse
Efficiency of Pre, In, and Postorder algorithms
Big Theta(n)
Height Algorithm Efficiency
Big Theta(n)
Mergesort Space Requirement
Big Theta(n) (NOT in-place)
Ex. Max Heap 12,7,9,3,18,1,10
Binary Heap ppt slide 11
Ex. Min Heap 12,7,9,3,18,1,10
Binary Heap ppt slide 12
Why AVL tree instead of BST
Binary Search Trees are fast if they're shallow. • Problems occur when one branch is much longer than the other.
Binary Search vs Linear Search
Binary search requires that data items be in ascending order, and is usually implemented with an array because it needs direct access. advantages: --usually outperforms linear search....O(log n) vs. O(n) disadvantages of binary search: --data needs to be sorted --needs direct access of storage structures, NOT good for linked lists HOWEVER it is possible to use a linked structure which can be searched in a binary-like manner...aka binary search trees. If we do a binary search of a linked list, connecting an "index" to each node, traversing is still O(n), though it may still be better than linear search because comparisons go down to log(n). In order to do log(n) traversal and comparisons, we need a binary search tree!
Traversal analysis
Binary tree with n nodes For each node, 2 recursive calls max so 2n recursive calls at most O(n)
Recursion and Binary Trees
Binary trees are recursive in nature Tree Traversal is recursive: if the binary tree is empty, then do nothing else inductive steps: N: visit the root, process data L: traverse the left subtree R: traverse the right subtree
Insertion
Build your BST by repeatedly calling a function to insert elements into a BST. Similar to search, can be done recursively and non recursively.
definition of AVL tree
By definition it is a binary tree + a balance condition: -ensures depth O(log N) -the height difference between the subtrees of any node is at most 1
four cases of re balance in AVL tree
Case 1: An insertion into the left subtree of the left child • case 2: Insertion into the right subtree of the left child • case 3:Insertion into the left subtree of the right child • case 4: An insertion into the right subtree of the right child
Edges
Connections between nodes
Mergesort is helpful for sorting what kind of data?
Data types that can't be directly compared like objects or linked lists.
Root
Distinguished element that is the origin of the tree -only ONE root node
Strategy
Divide and conquer
Strategy
Divide and conquer to improve performance
Combine the subproblem solutions into the solution for the original problem.
Divide and conquer: Combine
Solve each subproblem recursively; if the subproblem is small enough, just solve it in a straightforward manner.
Divide and conquer: Conquer
Min Heap Property
Each child node is greater than or equal to the parent node with the min at the root
Max Heap Property
Each child node is less than or equal to the parent node with the max at the root
Nodes
Elements in the tree
merge sort mechanism
Generally speaking, to merge two objects means to combine them so that they become part of a larger whole. However, the word merge used in our context carries one type restriction: the two original list objects and the combined list object after merging must be of the same type.
Degree/arity of a node
How many children a node has
Complete Binary Tree and Height Time Complexity
If all levels except possibly the last are completely full, and the last level has all its nodes to the left side height O(log N)
Full Binary Tree and Height Time Complexity
If each node is either a leaf or possesses exactly two child nodes. height O(log N)
Step One
If the list has one item or fewer, it is considered sorted
Binary Search Tree
In a BST, nodes are ordered in the following way: --each node has at most two children --each node contains one key (data) --all keys in the left subtree of a node are less than the key on the node --all the keys in the right subtree of a node are greater than the key in the node
Inorder Algorithm
Inorder(T) if T is not an empty set Inorder(Tleft) print(root of T) Inorder(Tright)
Iterative Binary Tree Traversals
Iterative traversal uses a container to store references to nodes not yet visited Order of visiting depends on the type of container (stack, queue, etc.) Algorithm: Create empty container to hold references to nodes yet to be visited Put reference to root node in container While container is not empty: -remove reference x from the container -visit the node x points to -put references to non empty children of x in the container Container is a stack: if we push the right successor of a node before the left successor, this is preorder traversal Container is a queue: if we enqueue the left successor before the right, we get a level order traversal
Descendants of a node
Its children, the children of its children, etc.
Ancestors of a node
Its parent, parent of its parent, etc.
Bubble
K goes thru list comparing adjacent and swapping as needed, telling Joker to lie down (false) each time a swap is performed
Insertion
K looks down the list and shifts the values up the list until every value has found its best position
Insertion
K starts on 2nd, checks to see if J position better for K value
Step Two
Keep splitting the list until two items (base case)
Quick Sort
L and R look at their position values until swap, then move inwards 1 step until they cross
Quick Sort
L and R markers check if their position's value is lesser or greater than pivot
Quick Sort
L and/or R markers move until find position/value that is incorrectly lesser/greater than pivot, and they swap
Heap, Merge and Quick are useful for what kind of data sets?
Larger, where n > 10
Huffman's algorithm
Left branch 0, right branch 1; a symbol's cost is its frequency × its bit count (path length from the root)
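A hedged sketch of Huffman's algorithm using a priority queue, with left = '0' and right = '1' as on this card (assumes at least two symbols; all names here are ours):

```python
import heapq

def huffman_codes(freqs):
    """Build Huffman codes from a dict of symbol -> frequency.
    Symbols are assumed to be non-tuple values (e.g. strings)."""
    # (weight, tiebreak, subtree); tiebreak keeps tuple comparison well-defined
    heap = [(w, i, sym) for i, (sym, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    n = len(heap)
    while len(heap) > 1:
        w1, _, left = heapq.heappop(heap)    # two lightest subtrees
        w2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, n, (left, right)))
        n += 1
    codes = {}
    def walk(node, code):
        if isinstance(node, tuple):          # internal node
            walk(node[0], code + '0')        # left branch: 0
            walk(node[1], code + '1')        # right branch: 1
        else:
            codes[node] = code               # leaf: record the code
    walk(heap[0][2], '')
    return codes
```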
(index * 2) + 1
Left child
Height of a (non empty) tree
Length of the longest path from root to leaf Height of empty tree = -1
Step Six
List is divided and quick sort invoked recursively on the two halves
MergeSort Synopsis
Merge Sort is frequently classified as a "divide and conquer" sort because unlike many other sorts that sort data sets in a linear manner, Merge Sort breaks the data into small data sets, sorts those small sets, and then merges the resulting sorted lists together. Given the data (4 3 1 2) to sort, Merge Sort would first divide the data into two smaller arrays, (4 3) and (1 2). It would then process the sublist (4 3) in precisely the same manner, by recursively calling itself on each half of the data, namely (4) and (3). When Merge Sort processes a list with only one element, it deems the list sorted and sends it to the merging process; therefore, the lists (4) and (3) are each in sorted order. Merge Sort then merges them into the sorted list (3 4). The same process is repeated with sublist (1 2): it is broken down and rebuilt into the list (1 2). Merge Sort now has two sorted lists, (3 4) and (1 2), which it merges by comparing the smallest element in each list and putting the smaller one into its place in the final, sorted data set. Tracing how Merge Sort sorts and merges the subarrays it creates makes the recursive nature of the algorithm even more apparent. Notice how each half array gets entirely broken down before the other half does.
Merge sort
Merge sort is another example of applying the divide and conquer technique. The idea is to first divide the original list into two halves, lList and rList, then merge-sort the two sublists, recursively. The two sorted halves lList and rList are merged to form a sorted list.
Merge Pseudocode
Merge(B[0..p-1], C[0..q-1], A[0..p+q-1])
//Merges 2 sorted arrays into 1 sorted array (A[ ])
//Input: Arrays B[0..p-1] and C[0..q-1], both sorted
//Output: Sorted array A[0..p+q-1] of the elements of B & C
i <- 0; j <- 0; k <- 0
while i < p and j < q do
    if B[i] <= C[j]
        A[k] <- B[i]; i <- i+1
    else
        A[k] <- C[j]; j <- j+1
    k <- k+1
if i = p
    copy C[j..q-1] to A[k..p+q-1]
else
    copy B[i..p-1] to A[k..p+q-1]
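The Merge pseudocode translates into Python roughly as follows (this version returns a new list rather than filling A in place):

```python
def merge(B, C):
    """Merge two sorted lists into one sorted list."""
    A, i, j = [], 0, 0
    while i < len(B) and j < len(C):
        if B[i] <= C[j]:
            A.append(B[i]); i += 1
        else:
            A.append(C[j]); j += 1
    A.extend(B[i:])   # at most one of these two tails is non-empty
    A.extend(C[j:])
    return A
```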
Mergesort vs. Quicksort
Mergesort - divides input elements according to their position in the array Quicksort - divides input elements according to their value
Mergesort is faster or slower than bubble sort
Mergesort is quasilinear (O(n log n)), whereas bubble is O(n^2) (quadratic), so merge is much faster
Mergesort Pseudocode
Mergesort(A[0..n-1])
//Sorts array A[0..n-1] by recursive mergesort
//Input: An array A[0..n-1] of orderable elements
//Output: Array A[0..n-1] sorted in nondecreasing order
if n > 1
    copy A[0..floor(n/2)-1] to B[0..floor(n/2)-1]   (left side)
    copy A[floor(n/2)..n-1] to C[0..ceil(n/2)-1]    (right side)
    Mergesort(B[0..floor(n/2)-1])   (left side)
    Mergesort(C[0..ceil(n/2)-1])    (right side)
    Merge(B, C, A)
Empty Tree
No nodes or edges
Parent/predecessor
Node directly above another node in the hierarchy Each node has only one parent
Child/successor
Node directly below another node in the hierarchy Parent can have many children
Leaf node
Node without children
Siblings
Nodes that have the same parent
Tree ADT
Nonlinear abstract data type that stores elements in a hierarchy Family tree Table of contents Class inheritance Computer file system Set of elements that either: -is empty -has a distinguished element called the root and zero or more trees (subtrees of the root)
Level of a node
Number of edges between root and the node Defined recursively: Level of root node is 0 Level of a node that is not the root node is the level of its parent + 1
3
Number of steps in a divide and conquer strategy for each level of recursion
Merge
O (N log N) - O (N log N) - O (N log N)
Quick Sort
O (N log N) - O (N log N) - O (N log N)
Bubble
O (N) - O (N^2) - O (N^2)
Insertion
O (N) - O (N^2) - O (N^2)
Selection
O (N^2) - O (N^2) - O (N^2)
What is peek complexity?
O(1)
Heap Sort Time Complexity
O(N Log N)
(index - 1) / 2
Parent formula (if not a root)
Step Three
The partition process finds the split point and moves items to the appropriate side of the list
Hoare's Partitioning Algorithm
Partition(A[l..r])
//Partitions a subarray by using its first element as a pivot
//Input: A subarray A[l..r] of A[0..n-1], defined by its left and right indices l and r (l < r)
//Output: A partition of A[l..r], with the split position returned as this function's value
p <- A[l]                              (first element is our pivot)
i <- l; j <- r+1
repeat
    repeat i <- i+1 until A[i] >= p    (walk in from the left)
    repeat j <- j-1 until A[j] <= p    (walk in from the right)
    swap(A[i], A[j])                   (elements on the wrong sides, so swap)
until i >= j
swap(A[i], A[j])    //undo last swap when i >= j
swap(A[l], A[j])    (put the pivot element in its final place)
return j            (returns index of the pivot element)
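As a sketch, the pseudocode translates to Python as below. One assumption not in the pseudocode: the left scan is guarded with `i < r` so it cannot run past the subarray when every element is smaller than the pivot (textbook versions often use a sentinel instead). A matching quicksort driver is included so the partition can be exercised:

```python
def partition(a, l, r):
    """Hoare-style partition of a[l..r] around pivot a[l]; returns split index."""
    p = a[l]
    i, j = l, r + 1
    while True:
        i += 1
        while i < r and a[i] < p:   # walk in from the left (bounded)
            i += 1
        j -= 1
        while a[j] > p:             # walk in from the right (pivot is a sentinel)
            j -= 1
        if i >= j:                  # markers crossed: partition done
            break
        a[i], a[j] = a[j], a[i]     # both on the wrong side, swap
    a[l], a[j] = a[j], a[l]         # put the pivot in its final place
    return j

def quicksort(a, l=0, r=None):
    """Sort list a in place with quicksort."""
    if r is None:
        r = len(a) - 1
    if l < r:
        s = partition(a, l, r)      # s is the split position
        quicksort(a, l, s - 1)
        quicksort(a, s + 1, r)
```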
Step Two
The position where the pivot value belongs in the final sorted list is commonly called the split point
Step Three
Place on sorted list
Quicksort Highlevel Implementation
Please Let Sam Respond Back Rosily
1. Pivot point: choose one
2. Loop through the rest of the unsorted elements
3. Swap to create a section of elements smaller than the pivot and a section of elements larger than the pivot
4. Recursively select pivots on the subsections
5. Base case hits when there's only one element left in the subarray
6. Return the sorted array
Step Four
The position markers swap values whenever they find items on the wrong side of the split point
Postorder Algorithm
Postorder(T) if T is not an empty set Postorder(Tleft) Postorder(Tright) print(root of T)
Inorder, preorder, and postorder
PreOrder: print node; traverse (left); traverse(right); InOrder: traverse(left); print node; traverse(right); PostOrder: traverse(left); traverse(right); print node;
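A minimal Python illustration of the three orders; the `Node` class here is a stand-in for whatever tree node type the slides use:

```python
class Node:
    def __init__(self, element, left=None, right=None):
        self.element = element
        self.left = left
        self.right = right

def preorder(p, out):
    if p is not None:
        out.append(p.element)     # node first
        preorder(p.left, out)
        preorder(p.right, out)

def inorder(p, out):
    if p is not None:
        inorder(p.left, out)
        out.append(p.element)     # node between its subtrees
        inorder(p.right, out)

def postorder(p, out):
    if p is not None:
        postorder(p.left, out)
        postorder(p.right, out)
        out.append(p.element)     # node last
```

On the tree with root 2, left child 1, right child 3: preorder gives `[2, 1, 3]`, inorder `[1, 2, 3]`, postorder `[1, 3, 2]`.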
Preorder Algorithm
Preorder(T) if T is not an empty set print(root of T) Preorder(Tleft) Preorder(Tright)
Quicksort is helpful for sorting what kind of data?
Primitives like numbers and floats
Expression Trees
Program that manipulates or evaluates arithmetic expressions using binary trees. Root and interior nodes contain operators; leaf nodes contain operands. Evaluated using an appropriate traversal method (review picture)
Pros and cons of AVL tree
Pros: • All operations guaranteed O(log N) • The height balancing adds no more than a constant factor to the speed of insertion Cons: • Space consumed by height (or B.F.) field in each node • Slower than ordinary BST on random data
How to deal with unbalanced trees?
Rebalance binary search tree when a new insertion makes the tree "too unbalanced" 1) AVL trees 2) Red Black Trees OR Allow more than one key per node of a search tree 1) 2-3 trees 2) 2-3-4 trees 3) B trees
Quicksort and Mergesort require what to execute?
Recursion
Algorithm Type
Recursive, continually splits a list in half
What is remove in heap? How? time complexity?
Removes and returns the maximum value (max-heap) or the minimum value (min-heap). To fill the hole left by removing the root, the very last element is moved to the root position and then shiftDown() fixes up the heap. - how? take the last item, put it in the first position, then shift down - O(log n)
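A hedged sketch in Python of list-backed max-heap removal; `shiftDown` is written out here as `shift_down`, since the card only names it:

```python
def shift_down(heap, i, n):
    """Sift heap[i] down until the max-heap property holds in heap[0:n]."""
    while True:
        largest = i
        l, r = 2 * i + 1, 2 * i + 2
        if l < n and heap[l] > heap[largest]:
            largest = l
        if r < n and heap[r] > heap[largest]:
            largest = r
        if largest == i:                       # heap property restored
            return
        heap[i], heap[largest] = heap[largest], heap[i]
        i = largest

def remove_max(heap):
    """Remove and return the root of a max-heap stored as a list."""
    top = heap[0]
    heap[0] = heap[-1]      # last item fills the hole at the root
    heap.pop()
    shift_down(heap, 0, len(heap))
    return top
```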
BinaryTreeNode class
Represents a node in the binary tree Protected attributes: element: reference to data element left: reference to left child of the node right: reference to right child of the node
Pre order Traversal
Root , Left, Right
Analysis of AVL Trees
Search and insertion are O(log n). Deletion is more complicated but is also O(log n) (for deletion, replace the node with the largest node in its left subtree). Disadvantages of AVL? Frequent rotations, added complexity. Average height of an AVL tree is about 1.01 log2 n + 0.1 for large n
Simple Insertion Explanation:
So in each node you're allowed at most 3 values, and each node may have up to 4 children; you always have childrenAmt = dataAmt + 1. If, when inserting, a node already holds 3 values, you split that node apart by pushing its median value up.
Non Binary Trees
Some applications require more than two children per node aka game trees, genealogical trees
Divide-and-Conquer Examples
Sorting: MergeSort & QuickSort Binary tree traversals Multiplication of large integers Matrix multiplication: Strassen's Algorithm Closest-pair and convex-hull algorithms Binary search: decrease-by-half
Postorder Traversal
Start at root Visit children of each node then the node Recursive algorithm: If tree is not empty... -perform postorder traversal of left subtree of root -perform postorder traversal of right subtree of root -visit root node of tree
Level Order Traversal
Start at root Visit the nodes at each level, from left to right
Preorder Traversal
Start at the root Visit each node, followed by its children; we will choose to visit left child before right Recursive algorithm: If tree is not empty... -visit root node of tree -perform preorder traversal of its left subtree -perform preorder traversal of its right subtree
Inorder Traversal
Start at the root Visit the left child of each node, then the node, then any remaining nodes Recursive algorithm: If tree is not empty... -perform inorder traversal of left subtree of root -visit root node of tree -perform inorder traversal of right subtree *review picture
Merge k sorted arrays
T(k) = 2T(k/2) +O(kn) O(knlogk)
find kth smallest in two arrays
T(k) = T(k/2) + O(1) O(logk)
towers of hanoi
T(n) = 2T(n-1) + O(1) O(2^n)
Count number exchanged pairs
T(n) = 2T(n/2) + O(n) O(nlogn)
Hadamard matrices Multiplication
T(n) = 2T(n/2) + O(n) O(nlogn)
closest pair
T(n) = 2T(n/2) + O(n) O(nlogn)
Lopsidedness
The order in which items are inserted into a BST determines the shape of the BST. This results in balanced or unbalanced trees. *The complexity of a lopsided tree? If balanced, search is T(n) = O(log N); if unbalanced, T(n) = O(n)* Key issue? The attractiveness of the binary search tree is marred by its bad (linear) worst-case efficiency; we need to keep a BST balanced to maximize its benefits
Swap
The process in sort routines of exchanging two values in an array, using a temporary variable, taking three steps to execute.
Remove a node
There are 3 cases. You're removing:
1) a leaf node (just delete and set the pointer to nullptr)
2) a node with one child (bypass the deleted node by reconnecting)
3) a node with two children (find the inorder successor, the smallest node in the right subtree; copy it into the node to be deleted, then recursively delete the successor)
history of divide and conquer
Divide and conquer is a useful approach for solving algorithmic computational problems. The name recalls the fighting strategy used brilliantly by the French emperor Napoleon at the Battle of Austerlitz on 2 December 1805
Tree Traversals
Traversal of a tree requires that each node of the tree be visited once Orderings: preorder inorder postorder level order
General Tree
Tree each of whose nodes may have any number of children
Binary Tree
Tree each of whose nodes may have no more than 2 children Tree with degree (arity) of 2 The children are called left child and right child (if they exist) Recursive definition: -The empty tree -Tree which has a root whose left and right subtrees are binary trees Positional tree: it matters whether a subtree is left or right
n-ary Tree
Tree each of whose nodes may have no more than n children
Leftmark
at the beginning of the list. Finds a value greater than the pivot
Non recursive traversal
Use a stack!!
How to turn heap array back to a sorted heap?
Use heap sort.
Partitioning
Uses two position markers, leftmark and rightmark. Will locate and converge at split point.
Rightmark
at the end of the list. Finds a value less than the pivot
input(s) for which the function produces a result trivially (without recurring)
base case
Must a complete tree also be a balanced tree?
Yes
Merge
a divide and conquer approach is used to break apart the list and sort smaller portions of it
Insertion
additional values are inserted each pass into a portion of the list that maintains a sorted order
Bubble
after each pass, the next highest unsorted value should be in the correct position
Create a list of numbers
alist = []
Swap pivot value with split point
alist[first], alist[rightmark] = alist[rightmark], alist[first]
Insert left value beginning of list
alist[k]=lefthalf[i]
Insert value to list at next position
alist[k]=lefthalf[i]
Insert right value to beginning of list
alist[k]=righthalf[j]
Insert value to list at next position
alist[k]=righthalf[j]
Swap position markers values
alist[leftmark], alist[rightmark] = alist[rightmark], alist[leftmark]
Priority queue
chooses next element to delete based on priority.
Bubble
compare adjacent values, and if they are out of order, swap them
Insertion
compare places all the way back to the beginning, shifting cards to the right until find the best place for the value we are seeking
Selection
compares each to whole rest of list, until gets a clear pass
Basic Binary Search Tree Operations
construct an empty BST, isEmpty(), search(int item), insert(int data) while maintaining BST properties, delete(int data) that maintains BST properties, Traverse() with inorder, preorder, and postorder both recursively and non recursively
What is the actual code for quicksort
def quicksort(input)
  divide = lambda do |start, finish|
    return if start >= finish
    mid = start
    pivot = finish
    for i in start...finish
      if input[i] < input[pivot]
        input[i], input[mid] = input[mid], input[i]
        mid += 1
      end
    end
    input[mid], input[pivot] = input[pivot], input[mid]
    divide.call(start, mid - 1)
    divide.call(mid + 1, finish)
  end
  divide.call(0, input.length - 1)
  input
end
Quicksort Synopsis
Divide the list into smaller lists, which can then also be sorted using the quicksort algorithm; this is usually done through recursion. Lists of length 0 are ignored and those of length 1 are considered sorted. Quicksort, like merge sort, is a divide-and-conquer sorting algorithm. The premise of quicksort is to repeatedly separate the "big" elements from the "small" elements. The first step of the algorithm requires choosing a "pivot" value that will be used to divide big and small numbers. Each implementation of quicksort has its own method of choosing the pivot value; some methods are much better than others.
Merge
divides elements into sublists and then reassembles them into sorted order
Merge
divides lists into sublists of only one element before merging them back together into one sorted list
Set loop boolean
done = False
Break out of main loop
done = True
Else
else:
False comparison
else:
Selection
find the best (lowest) value, swap, J/Q/K
Selection
find the next smallest value in the remaining unsorted elements
Selection
finds the best value for the current position with each pass
Min height
floor(lg(n)) +1
How many levels does a heap with n nodes have?
floor(log_2(n)) + 1 (the deepest level has index floor(log_2(n)) when the root is level 0)
Selection
goes thru whole list on each pass finding best (lowest value) for each index (next smallest value is always swapped with current index)
merge sort comparisons explanation
Group size starts at 1 and doubles each time; compare the values at the front of each group
Array Based Implementation: At what index is root stored
0
Counter for left half
i=0
Increment left half counter
i=i+1
Base Case Check
if first < last:
What is the formula for heap index?
If i is the index of a node:
parent(i) = floor((i-1)/2)
left(i) = 2i + 1
right(i) = 2i + 2
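These index formulas are one-liners; a quick Python check (the function names here are mine, not from the slides):

```python
def parent(i):
    # parent of node i in an array-backed heap (valid for i > 0)
    return (i - 1) // 2

def left(i):
    return 2 * i + 1

def right(i):
    return 2 * i + 2
```

For instance, in the heap `[10, 7, 9, 3, 5]`, the node at index 1 (value 7) has its parent at index 0 and its children at indices 3 and 4.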
Compare left and right values
if lefthalf[i] < righthalf[j]:
Check for base case
if len(alist)>1:
Split point found
if rightmark < leftmark:
Heap
implementation of priority queue based on a complete binary tree, each of whose elements contains a value that is >= the values of all its descendants
Pro of Heap Sort
An in-place sort with guaranteed O(N log N)
Insertion
inserts value to best position
pseudocode for quicksort
# instantiate lambda function with start, finish
#   return if start is greater than or equal to finish
#   set mid to start
#   set pivot to finish
#   for i in start to finish
#     if the value at input[i] is less than the value at input[pivot]
#       swap input[i], input[mid]
#       increment mid by one
#     end
#   end
#   swap input[mid], input[pivot]
#   make recursive call with (start, mid - 1)
#   make another recursive call with (mid + 1, finish)
# end
# make the instantiating call with (0, input.length - 1)
# return input
Counter for right half
j=0
Increment right half counter
j=j+1
Counter for list index
k=0
Increment list index counter
k=k+1
Merge
recursive, divide in half, then merge
Search
recursive, non recursive
Step One
select a pivot value to split the list. use first item
Insertion
select the next best (lower) value and insert them into a sorted sublist
Selection
select the next best value of a list and swap it into a portion of the list that has been sorted
Selection
selects the next "best" value and swaps it inot its correct position with each pass
Merge
situation of data doesn't matter (random, reverse), it works the same in any order O (N log N)
Array Based Implementation are more commonly used because
Space efficient; easy traversal
Call partition to find split point
splitpoint=partition(alist,first,last)
Selection
stable but slow, Always O (N^2), quadratic
Insertion
start on 2nd, save temp, shift to right
Quick Sort
swap elements on either side of the pivot that are on the wrong side
General approach to heap sort
swap first and last then reheap (repeat)
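That "swap first and last, then reheap" loop, preceded by a bottom-up heap build, sketched in Python (an in-place max-heap version; the details are assumptions where the card is silent):

```python
def heapsort(a):
    """Sort list a in place: build a max-heap, then repeatedly swap and reheap."""
    n = len(a)

    def shift_down(i, size):
        # sift a[i] down until the max-heap property holds in a[0:size]
        while True:
            largest = i
            l, r = 2 * i + 1, 2 * i + 2
            if l < size and a[l] > a[largest]:
                largest = l
            if r < size and a[r] > a[largest]:
                largest = r
            if largest == i:
                return
            a[i], a[largest] = a[largest], a[i]
            i = largest

    # build a max-heap bottom-up, starting from the last internal node
    for i in range(n // 2 - 1, -1, -1):
        shift_down(i, n)
    # repeatedly move the max to the end, then reheap the shrunken prefix
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]
        shift_down(0, end)
    return a
```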
Bubble
swap, adjacent, highest, Joker
In order code
template <class T>
void BT<T>::inorder(BTNode<T> *p) {
    if (p != NULL) {
        inorder(p->left);
        cout << p->element;
        inorder(p->right);
    }
}
post order code
template <class T>
void BT<T>::postorder(BTNode<T> *p) {
    if (p != NULL) {
        postorder(p->left);
        postorder(p->right);
        cout << p->element;
    }
}
pre order code
template <class T>
void BT<T>::preorder(BTNode<T> *p) {
    if (p != NULL) {
        cout << p->element;
        preorder(p->left);
        preorder(p->right);
    }
}
delete by merging
template <class T>
void BST<T>::deleteByMerging(BSTNode<T>*& node) {
    BSTNode<T> *tmp = node;
    if (node != 0) {
        if (node->right == 0)        // node has no right child: its left child
            node = node->left;       // (if any) is attached to its parent
        else if (node->left == 0)    // node has no left child: its right child
            node = node->right;      // is attached to its parent
        else {                       // merge the subtrees
            tmp = node->left;                // 1. go to the left subtree
            while (tmp->right != 0)          // 2. find the predecessor
                tmp = tmp->right;
            tmp->right = node->right;        // 3. link predecessor to the right subtree
            tmp = node;                      // 4. tmp now points to the node to be removed
            node = node->left;               // 5. node becomes its left child
        }
        delete tmp;                  // 6. remove the original node
    }
}
what is a bst
A binary tree in which the key in any node is larger than the keys in all nodes in that node's left subtree and smaller than the keys in all nodes in its right subtree.
Tree height
the height of a node is the length of the longest downward path to a leaf from that node you can find the height of any node
get leaf count
unsigned int getLeafCount(struct node* node) {
    if (node == NULL)
        return 0;
    if (node->left == NULL && node->right == NULL)
        return 1;
    else
        return getLeafCount(node->left) + getLeafCount(node->right);
}
Merge
uses a divide and conquer recursive approach, repeatedly dividing the list in half until it is looking at only two individual items, then merges them back into one sorted list, working back up through each division, merging each pair of sorted sections together into the larger section
Selection
uses a nested loop to "select" the best data for a particular position, then swap the best data with the value in that place
Bubble
uses a nested loop to compare neighboring items and make swaps as neighbors are out of natural order
Quick Sort
when L and R markers meet, they swap out the pivot
search in BST not recursive
template <class T>
T* BST<T>::search(const T& el) const {
    BSTNode<T> *p = root;
    while (p != 0)
        if (el == p->key)
            return &p->key;
        else if (el < p->key)
            p = p->left;
        else
            p = p->right;
    return 0;
}
get size
while (there are elements in the tree)
    count++
    if (there is a left child)
        push it onto a queue
    endif
    if (there is a right child)
        push it onto a queue
    endif
    pop the queue
endwhile
Loop on rightmark
while alist[rightmark] >= pivotvalue and rightmark >= leftmark:
get leaf count recursive
{
    if (node == NULL)
        return 0;
    if (node->left == NULL && node->right == NULL)
        return 1;
    else
        return getLeafCount(node->left) + getLeafCount(node->right);
}
find Pred in bst
{
    if (p->left != 0) {          // predecessor is the rightmost node
        p = p->left;             // of the left subtree
        while (p->right != 0)
            p = p->right;
    }
    return p;
}
get height of the tree
{
    if (ptr == NULL)
        return 0;
    leftHeight = height(ptr->left);
    rightHeight = height(ptr->right);
    if (leftHeight > rightHeight)
        return leftHeight + 1;
    else
        return rightHeight + 1;
}
delete by copying
{
    BSTNode<T> *previous, *tmp = node;
    if (node->right == 0)            // node has no right child
        node = node->left;
    else if (node->left == 0)        // node has no left child
        node = node->right;
    else {                           // node has both children
        tmp = node->left;
        previous = node;                     // 1. set previous
        while (tmp->right != 0) {            // 2. search for the predecessor
            previous = tmp;
            tmp = tmp->right;
        }
        node->key = tmp->key;                // 3. copy the predecessor's key
        if (previous == node)                // 4. break the link to the predecessor
            previous->left = tmp->left;
        else
            previous->right = tmp->left;
    }
    delete tmp;                      // 5. delete
}
insert in BST
{
    BSTNode<T> *p = root, *prev = 0;
    while (p != 0) {                 // find a place for the new node
        prev = p;
        if (p->key < el)
            p = p->right;
        else
            p = p->left;
    }
    if (root == 0)                   // tree is empty
        root = new BSTNode<T>(el);
    else if (prev->key < el)
        prev->right = new BSTNode<T>(el);
    else
        prev->left = new BSTNode<T>(el);
}
MT Case 2
∃k ≥ 0: f(n) ∈ Θ(n^(log_b a) · (log n)^k) → T(n) = Θ(n^(log_b a) · (log n)^(k+1))
MT Case 1
∃ε>0: f(n)∈O(n^(log_b a - ε)) → T(n) = Θ( n^(log_b a) )
comparison base sort
Comparison-based sorting: the sort cannot use the keys' actual values directly and may only compare them pairwise, so the speed limit is n·log(n) (insertion, shell, bubble, selection, quick, ... all belong here)
binary insertion sort
Binary insertion sort: applies binary search to insertion sort's comparisons against the sorted prefix, cutting the comparison work to O(log n); but since an array is still used for storage, every insertion must shift the whole tail, which is O(n)
binary search
Binary search: requires the data sorted in ascending order and random access. Start from the middle element; if it is greater than the search key, halve into the left part and repeat, and if smaller, halve into the right part
internal sort
Internal sort: the data set is small enough that all of it fits in memory for sorting
divide
Divide (the partition step inside quicksort): take the first record as the pivot key (PK) and make one comparison pass, placing larger records after the PK and smaller ones before it, so everything before the PK is < PK and everything after it is > PK
merge sort
Merge sort (a common external sort): hard to describe without a picture (see diagram). The comparison works with one counter per half; the first elements of the two halves are compared, the smaller is placed into the new array and its counter advances, then it continues against the larger side
Radix sort
Radix sort (also called bin sort or bucket sort): comes in two variants, LSD radix sort and MSD radix sort
heap sort
Heap sort: first build a max-heap bottom-up, then take out the root value, move the last node up to replace it, store the old root into the freed last position and fix it there (later heap steps ignore that position), and re-heapify; repeat until every root has been taken
external sort
External sort: the data set is too large to load into memory at once, so sorting happens on disk, e.g. merge sort, m-way search trees
stable or unstable
Stable vs. unstable: given several records with equal keys k, k1, k2, ..., a sort is unstable if it can change their relative order after sorting (wasted moves), and stable if it never reorders them (no waste)
conquer
Conquer: repeating the divide step on the front and back sub-regions produced by a partition
quick sort
Quicksort (fastest on average): take the first element as the pivot key (PK); elements larger than PK end up after it, smaller ones before it. Two markers are used: one starts at the far end on the first element smaller than PK, the other starts on the opposite side and advances element by element toward the PK; whenever the second marker finds a value greater than PK, it swaps with the smaller-than-PK value at the first marker. When the swapping finishes, the PK lands exactly where it belongs
run(merge sort)
A sorted segment of the data
insert sort
Insertion sort: take values one by one and compare each against the already-inserted prefix, placing it at the proper position; data movement costs O(n) and data comparison costs O(n)
MSD radix sort
MSD radix sort (Most Significant Digit): similar in concept to LSD, but uses the highest digit position as r (the bucket key); first distribute the data into buckets by that digit, then sort each bucket on its own, and finally read the buckets out in order
bubble sort
Bubble sort: compare pairs left to right, moving the larger to the right; after each pass (a full left-to-right sweep) one more finished value settles at the right end; somewhat similar to selection sort
linear time sorting method
Linear-time sorting methods
linear insertion sort
Linked-list insertion sort: stores the data in a linked list and exploits its properties to cut data movement to O(1) (a single pointer change accomplishes the insertion); comparisons are still O(n)
linear search
Linear search (also called sequential search): look at the elements one by one and stop when the key is found
counting sort
Counting sort (to be completed)
Shell Sort
Shell sort (also called diminishing increment sort): compares elements a span apart; the span is usually n/2, then n/(2^2), and so on (see diagram)
loser tree
Loser tree: compared the same way as a winner tree, but here the loser occupies the parent node while the winner keeps moving up to the next comparison; the time analysis is the same, but it touches fewer nodes, making it more practical than the winner tree
select sort
Selection sort: scan from the first element to the last, select the smallest, and swap it with the first; then scan from the second to the last, select the smallest there, swap it with the second, and so on. It differs from insertion sort, which takes an element first and then compares it against the fully sorted prefix before inserting
select tree
Selection tree: helps a k-way merge combine K runs into one at a time; comes in two kinds, the winner tree and the loser tree
Random access
Random access: a location can be visited by jumping straight to it, without stepping through the space sequentially
examples of divide and conquer
Classical algorithms such as binary search, merge sort, and quicksort are good examples of the divide-and-conquer approach
majority element
T(n) = 2T(n/2) + O(n) O(nlogn)
merge sort
T(n) = 2T(n/2) + O(n) O(nlogn)
Binary tree is ...
a divide-and-conquer ready structure
pivot element (quicksort)
first element in a region to be sorted
merge sort total number of comparisons
n x log base 2 of n
base case
simplest case of a recursive definition
Binary Search
T(n) = T(n/2) + O(1) O(logn)
MT Case 3
∃ε > 0: f(n) ∈ Ω(n^(log_b a + ε)) ∧ ∃δ < 1: a·f(n/b) ≤ δ·f(n) → T(n) = Θ(f(n))
Tree Traversals
1.) Pre-order 2.) In-order 3.) Post-order 4.) Level-order
Mergesort
1. Split array A[0..n-1] into two about equal halves and make copies of each half in arrays B and C
2. Sort arrays B and C recursively
3. Merge sorted arrays B and C into array A as follows:
   A) Repeat the following until no elements remain in one of the arrays:
      a) Compare the first elements in the remaining unprocessed portions of the arrays
      b) Copy the smaller of the two into A, incrementing the index indicating the unprocessed portion of that array
   B) Once all elements in one of the arrays are processed, copy the remaining unprocessed elements from the other array into A
Quicksort Improvements
1.) Better pivot selection:
    - Median-of-three partitioning: uses the median of the leftmost, rightmost, and middle elements of the array
    - Randomized quicksort: uses a random element as the pivot point
2.) Switch to insertion sort on small subarrays
3.) Elimination of recursion
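A sketch of the median-of-three idea in Python (the helper name and the tuple trick are mine, not from the slides): it returns the index of the median of the leftmost, middle, and rightmost values, so a caller can swap that element into the pivot position before partitioning.

```python
def median_of_three(a, l, r):
    """Index of the median of a[l], a[mid], a[r]; a common pivot choice."""
    mid = (l + r) // 2
    # pair each candidate value with its index, sort the three pairs,
    # and take the index of the middle pair
    trio = sorted([(a[l], l), (a[mid], mid), (a[r], r)])
    return trio[1][1]
```

For `[9, 1, 5, 3, 7]` over the whole range, the candidates are 9, 5, and 7, so the median is 7 at index 4.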
Asymptotic Order of Growth
A way of comparing functions that ignores constant factors and small input sizes.
Big Omega(g(n))
Class of functions f(n) that grow AT LEAST AS FAST as g(n)
Big Theta(g(n))
Class of functions f(n) that grow AT SAME RATE as g(n)
Break the problem down into subproblems that are smaller instances of the same problem
Divide and conquer: Divide
Height Algorithm
Height(T)
//Computes recursively the height of a binary tree
//Input: A binary tree T
//Output: The height of T
if T is an empty set
    return -1
else
    return max{Height(Tleft), Height(Tright)} + 1   (the +1 accounts for the root itself)
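The same algorithm in Python, with a throwaway Node class for illustration (the empty tree has height -1, so a single node has height 0):

```python
class Node:
    def __init__(self, element, left=None, right=None):
        self.element = element
        self.left = left
        self.right = right

def height(node):
    """Height of a binary tree; the empty tree has height -1 by convention."""
    if node is None:
        return -1
    # the +1 accounts for the root itself
    return max(height(node.left), height(node.right)) + 1
```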
binary search 3
Let l, r be the index of the first (left most) and the last element (right most) of a list respectively. The middle index can then be defined as mid = ⌊(l + r)/2⌋ (Here ⌊x⌋ reads 'floor of x', which rounds x to the nearest integer≤ x. For example, ⌊2.96⌋ = 2). If X < L[mid], the left half L[l..mid − 1] is selected for further check. Otherwise, the right half L[mid + 1..r]. Initially, l = 0 and r = n − 1.
Binary search 2
Let the sorted list be L[0..n − 1] and the key be X. The idea is to check if X is the middle element of the list L[mid], where mid is the index of the middle element. If not, L[mid] divides the list into two halves, and only one half needs to be checked.
Quicksort Pseudocode
Quicksort(A[l..r])
//Sorts a subarray by quicksort
//Input: Subarray of array A[0..n-1], defined by its left and right indices l and r
//Output: Subarray A[l..r] sorted in nondecreasing order
if l < r
    s <- Partition(A[l..r])    //s is a split position
    Quicksort(A[l..s-1])
    Quicksort(A[s+1..r])
polynomial multiplication
T(n) = 3T(n/2) + O(n) O(n^(log_2 3))
Integer multiplication/ Karatsuba
T(n) = 3T(n/2) + O(n) O(n^log_2 3)
Matrix Multiplication/ Strassen
T(n) = 7T(n/2) + O(n^2) O(n^log_2 7)
selection (kth smallest)
T(n) = T(n/5) + T(7n/10 + 6) + O(n) O(n)
Binary Tree height
The length of the longest path from root to leaf
Inorder Tree Traversal
The root is visited after visiting its left subtree but before visting the right subtree.
Postorder Tree Traversal
The root is visited after visiting the left and right subtrees (in that order)
Preeorder Tree Traversal
The root is visited before the left and right subtree are visited (in that order).
Combine step
When you have to solve problems similar to the recursive subproblems, but not quite the same as them, what step do they fall into?
binary search in python
def bsearch(a, x):
    lower = -1
    upper = len(a)
    while upper > lower + 1:
        mid = (lower + upper) // 2   # parentheses matter: // binds tighter than +
        if a[mid] == x:
            return mid
        if x < a[mid]:
            upper = mid
        else:
            lower = mid
    return None