Reiknirit (Algorithms)
Heaps: Multiway heaps
d-way tree: each node has d children and the parent's key is smaller than its children's keys. Height is ~log_d N. Insert does ~log_d N compares. Del max/min: ~d log_d N. Max/min: 1.
Kd-Trees: Grid
Grid Implementation: Fast, simple solution for evenly-distributed points. Problem: Clustering, a well known phenomenon in geometric data.
UnionFind: Quick Union
Basic union operation. Easiest to draw the trees. union(p, q) -> put the root of p under the root of q. To fill in the id array: start at the bottom of the tree; each index stores the id of its parent, and a root stores its own id.
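A minimal quick-union sketch in Java (class and method names here are my own, for illustration):

    public class QuickUnionUF {
        private final int[] id;                      // id[i] = parent of i; a root points to itself

        public QuickUnionUF(int n) {
            id = new int[n];
            for (int i = 0; i < n; i++) id[i] = i;   // every element starts as its own root
        }

        private int root(int i) {
            while (i != id[i]) i = id[i];            // follow parent links up to the root
            return i;
        }

        public boolean connected(int p, int q) { return root(p) == root(q); }

        public void union(int p, int q) {
            id[root(p)] = root(q);                   // put the root of p under the root of q
        }
    }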
Heaps: Binary Heap
How it works: a data structure that implements a complete binary tree within an array, such that every parent node's key satisfies heap order with respect to its children (smaller than both children in a min-heap, larger in a max-heap). Structure: fill each level from left to right, so the left child of every node is always added first. Maintain: after adding a node we check whether it violates heap order with its parent, and if so it swims up. Only the root node can be deleted; when that happens, move a bottom node to the root and let it sink down to restore the heap. MaxHeap: heap where the root is the max value. MinHeap: heap where the root is the min value. Insert: O(log N). Del max/min: O(log N). Max/min: O(1).
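A minimal array-based max-heap sketch in Java (1-indexed array as in the textbook convention; the class and field names are my own):

    public class MaxHeap {
        private final int[] pq;   // pq[1..n] holds the heap; parent of pq[k] is pq[k/2], children are pq[2k] and pq[2k+1]
        private int n = 0;

        public MaxHeap(int capacity) { pq = new int[capacity + 1]; }

        public void insert(int x) { pq[++n] = x; swim(n); }   // add at the end, then swim up

        public int delMax() {
            int max = pq[1];
            swap(1, n--);          // move the bottom node to the root
            sink(1);               // sink it down to restore heap order
            return max;
        }

        private void swim(int k) {
            while (k > 1 && pq[k / 2] < pq[k]) { swap(k / 2, k); k /= 2; }   // parent smaller: violates max-heap order
        }

        private void sink(int k) {
            while (2 * k <= n) {
                int j = 2 * k;
                if (j < n && pq[j] < pq[j + 1]) j++;   // pick the larger child
                if (pq[k] >= pq[j]) break;             // heap order restored
                swap(k, j);
                k = j;
            }
        }

        private void swap(int i, int j) { int t = pq[i]; pq[i] = pq[j]; pq[j] = t; }
    }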
MST: Kruskal's Algorithm
Keyword: Add smallest edges until we get an MST. Keep adding the smallest edges, making sure not to create a cycle, until we have V-1 edges; we then have an MST. Time complexity: O(E log E)
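A minimal Kruskal sketch in Java, assuming a connected graph whose edges are given as {from, to, weight} triples and using a simple union-find to reject cycles (all names are my own):

    import java.util.Arrays;

    public class Kruskal {
        // edges[i] = {u, v, weight}; returns the V-1 edges picked for the MST
        public static int[][] mst(int vertexCount, int[][] edges) {
            Arrays.sort(edges, (a, b) -> Integer.compare(a[2], b[2]));   // smallest edges first
            int[] parent = new int[vertexCount];
            for (int i = 0; i < vertexCount; i++) parent[i] = i;

            int[][] tree = new int[vertexCount - 1][];
            int count = 0;
            for (int[] e : edges) {
                int ru = root(parent, e[0]), rv = root(parent, e[1]);
                if (ru == rv) continue;                // would create a cycle, skip
                parent[ru] = rv;                       // union the two components
                tree[count++] = e;
                if (count == vertexCount - 1) break;   // V-1 edges => MST complete
            }
            return tree;
        }

        private static int root(int[] parent, int i) {
            while (i != parent[i]) i = parent[i];
            return i;
        }
    }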
SAT: Naive method
Maintain an array a[] where a[i] represents bit i. A simple recursive method does the job; it is equivalent to counting in binary from 0 to 2^n - 1, i.e. enumerating all possible T/F combinations.
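A minimal sketch of that enumeration in Java; the isSatisfied callback that evaluates the formula is hypothetical here:

    public class NaiveSAT {
        // Try every true/false assignment of n variables; equivalent to counting 0 .. 2^n - 1 in binary.
        public static boolean anySatisfying(int n, java.util.function.Predicate<boolean[]> isSatisfied) {
            return enumerate(new boolean[n], 0, isSatisfied);
        }

        private static boolean enumerate(boolean[] a, int i, java.util.function.Predicate<boolean[]> isSatisfied) {
            if (i == a.length) return isSatisfied.test(a);    // one complete assignment
            a[i] = false;
            if (enumerate(a, i + 1, isSatisfied)) return true;
            a[i] = true;                                      // flip bit i and try the other half
            return enumerate(a, i + 1, isSatisfied);
        }
    }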
Analysis: Measuring time complexity as a function of n
Power-law hypothesis: T(n) = a*n^b. n1 = smaller input, n2 = larger input, t1 = smaller time, t2 = larger time, a = constant, b = growth exponent. Then t2/t1 = (a*n2^b)/(a*n1^b) = (n2/n1)^b, so b = log_(n2/n1)(t2/t1) => plug the measurements into the formula and solve for b. Tip: a = t1/n1^b.
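Worked example with made-up measurements: suppose n1 = 1000 takes t1 = 0.1 s and doubling to n2 = 2000 takes t2 = 0.8 s. Then t2/t1 = 8 and n2/n1 = 2, so b = log_2 8 = 3, and a = t1/n1^b = 0.1/1000^3 = 1*10^-10, giving T(n) = 10^-10 * n^3.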
MST: Spanning Tree
A spanning tree of graph G is a subgraph T that is: Acyclic (no cycles) Connected Contains all the vertices
Analysis: Tilde notation
Estimate running time (or memory) as a function of input size n. Ignore lower-order terms: when n is large, they are negligible; when n is small, we don't care. Example: 3n^3 + n^2 + 5n + 10 -> ~3n^3
Binary Search Tree: order
All traversals visit the left subtree before the right. Preorder: write each node in the order you visit it: Write(node), recur(node.left), recur(node.right). Postorder: write a node once both its subtrees have been visited: recur(node.left), recur(node.right), Write(node). Inorder: write a node once its left subtree has been visited: recur(node.left), Write(node), recur(node.right).
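A minimal sketch of the three traversals in Java (Node here is a hypothetical BST node with int key and left/right links):

    public class Traversals {
        static class Node { int key; Node left, right; }

        static void preorder(Node x)  { if (x == null) return; visit(x); preorder(x.left); preorder(x.right); }
        static void inorder(Node x)   { if (x == null) return; inorder(x.left); visit(x); inorder(x.right); }    // ascending order in a BST
        static void postorder(Node x) { if (x == null) return; postorder(x.left); postorder(x.right); visit(x); }

        static void visit(Node x) { System.out.print(x.key + " "); }
    }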
Sorting: Characteristics of each sort
Assume a list of 32 elements.
Top-down mergesort = the list has been split in two; the left half (1-16) is fully sorted and the right half has been split in two (17-24 & 25-32), each part sorted internally.
Bottom-up mergesort = the list is split into runs of e.g. 2, 4, 8 or 16 elements, each run sorted internally; e.g. the list split into 8 parts of 4 elements each, every part internally sorted.
Quicksort = all elements below some pivot element (the pivot is in its correct position relative to the sorted list) are smaller than the pivot, and all elements above the pivot are larger.
LSD radix sort = the list is sorted by the last characters, e.g. the last 2.
MSD radix sort = the list is sorted by the first characters, e.g. sorted by the first character.
Selection sort = the first X elements are in their final, correct positions; the elements after X are not yet sorted.
Insertion sort = the first X elements are sorted relative to each other and the rest have not been *touched*.
Heapsort = e.g. the front element is the one that comes last in the sorted output, and elements 2 and 3 are the ones that come next-to-last -> it looks as if the input has been heap-ordered and the del_min calls have yet to be performed.
Shortest paths: Dijkstra
Initialise distances: 0 for the source, infinity for every other vertex. Pick the first (source) node and calculate distances to adjacent nodes. Pick the next unvisited node with minimal distance; repeat the adjacent-node distance calculations. The final result is the shortest-path tree.
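A minimal sketch of the simple O(V^2) version that matches the description above ("pick the next node with minimal distance"); the adjacency-matrix representation g[u][v] with Double.POSITIVE_INFINITY for "no edge" is an assumption of mine:

    public class Dijkstra {
        public static double[] shortestDistances(double[][] g, int source) {
            int n = g.length;
            double[] dist = new double[n];
            boolean[] done = new boolean[n];
            java.util.Arrays.fill(dist, Double.POSITIVE_INFINITY);
            dist[source] = 0.0;                                // distance to the source is 0

            for (int round = 0; round < n; round++) {
                int u = -1;                                    // pick the unvisited vertex with minimal distance
                for (int v = 0; v < n; v++)
                    if (!done[v] && (u == -1 || dist[v] < dist[u])) u = v;
                if (dist[u] == Double.POSITIVE_INFINITY) break;   // remaining vertices are unreachable
                done[u] = true;
                for (int v = 0; v < n; v++)                    // relax every edge out of u
                    if (!done[v] && dist[u] + g[u][v] < dist[v]) dist[v] = dist[u] + g[u][v];
            }
            return dist;
        }
    }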
Hashing: linear probing
Insert: Insert to hashed index. When a new key collides, find the next empty slot, and put it there. Note: Array size M must be greater than the number of key-value pairs N. Search: Search table index i. If occupied but no match, try i+1, i+2, etc.
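A minimal linear-probing sketch in Java with String keys and int values; resizing is left out and the class/field names are my own:

    public class LinearProbingHashST {
        private final int m = 16;            // array size M (must stay larger than the number of pairs N)
        private final String[] keys = new String[m];
        private final int[] vals = new int[m];

        private int hash(String key) { return (key.hashCode() & 0x7fffffff) % m; }

        public void put(String key, int val) {
            int i = hash(key);
            while (keys[i] != null && !keys[i].equals(key))   // collision: try the next slot
                i = (i + 1) % m;
            keys[i] = key;
            vals[i] = val;
        }

        public Integer get(String key) {
            for (int i = hash(key); keys[i] != null; i = (i + 1) % m)
                if (keys[i].equals(key)) return vals[i];      // probe i, i+1, i+2, ... until a hit or an empty slot
            return null;
        }
    }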
Kd-Trees: 2D-Tree
Keyword: BST that uses a spatial set of keys (x, y). Recursively partition the plane into two halfplanes and build a tree that mirrors the partition. Start by partitioning on the vertical axis, then horizontal, then vertical again, and so on; which partition is used depends on the level the point is added to. Same as a BST but we alternate using the x- and y-coordinates as the key. Range search (all points in a query axis-aligned rectangle): check if the point in the node lies in the rectangle; recursively search left/bottom (if any points could fall in the rectangle); recursively search right/top (if any could fall in the rectangle). Time complexity: average R + log N, worst case R + sqrt(N), where R is the number of points returned. Nearest Neighbour Search: find the closest point to a query point. Check the distance from the point in the node to the query point; recursively search left/bottom (if it could contain a closer point); recursively search right/top (if it could contain a closer point). Organize the method so that it begins by searching towards the query point.
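A minimal sketch of 2d-tree insertion, alternating x- and y-comparisons by level (the Node class and method names are my own):

    public class KdTree {
        static class Node { double x, y; Node left, right; Node(double x, double y) { this.x = x; this.y = y; } }
        private Node root;

        public void insert(double x, double y) { root = insert(root, x, y, true); }

        // vertical == true: compare x-coordinates; vertical == false: compare y-coordinates
        private Node insert(Node h, double x, double y, boolean vertical) {
            if (h == null) return new Node(x, y);
            double cmp = vertical ? x - h.x : y - h.y;
            if (cmp < 0) h.left  = insert(h.left,  x, y, !vertical);   // left/bottom subtree
            else         h.right = insert(h.right, x, y, !vertical);   // right/top subtree
            return h;
        }
    }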
Balanced BST: 2-3 Trees
Keyword: Balanced BST. Nodes can have 2 or 3 children. 2-node: one key, two children, left smaller, right larger. 3-node: two keys, three children, left smaller, middle in between, right larger. Best-case height: log_3 N (all 3-nodes). Worst-case height: lg N (all 2-nodes). Symmetric order: inorder traversal yields keys in ascending order. Perfect balance. Search: almost like a normal BST. Insert: adding to a 2-node creates a 3-node. Insert: adding to a 3-node creates a temporary 4-node; move the middle key of the 4-node into the parent, leaving behind a 3-node. Repeat up the tree as necessary; if we reach the root and it is a 4-node, split it into two 2-nodes.
Graph: Directed Graph: Topological Sort
Keyword: Can all edges point upwards? Not possible if cycle. DAG: Directed Acyclic Graph. Method: Run DFS and return vertices in reverse post-order.
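A minimal sketch: run DFS over a DAG and record vertices in reverse post-order; the adjacency-list representation adj[v] (an int[] of the vertices v points to) is an assumption of mine:

    import java.util.ArrayDeque;
    import java.util.Deque;

    public class TopologicalSort {
        public static Deque<Integer> order(int[][] adj) {
            boolean[] marked = new boolean[adj.length];
            Deque<Integer> reversePost = new ArrayDeque<>();
            for (int v = 0; v < adj.length; v++)
                if (!marked[v]) dfs(adj, v, marked, reversePost);
            return reversePost;                               // iterating the deque gives a topological order
        }

        private static void dfs(int[][] adj, int v, boolean[] marked, Deque<Integer> reversePost) {
            marked[v] = true;
            for (int w : adj[v])
                if (!marked[w]) dfs(adj, w, marked, reversePost);
            reversePost.push(v);                              // record v on the way out => reverse post-order
        }
    }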
Heaps: HeapSort
Keyword: Create a MaxHeap, remove the root and add it to the back of the array, fix the heap and continue until the heap is empty. Method: create a MaxHeap; swap the root with the bottom value in the heap; remove the bottom value (the old root) from the heap so it becomes the back of the sorted part of the array; fix the heap using sink; keep swapping out and removing the root until the heap is empty. Heap methods: swim = a node swaps with its parent while the parent is smaller than itself; sink = a node swaps with its larger child while that child is larger than itself (sink is the one used here). Time complexity O(n log n). build-max-heap: O(n). heapify (sink): O(log n), called n-1 times.
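A minimal in-place heapsort sketch on an int[] (0-indexed, so the children of i are 2i+1 and 2i+2; names are my own):

    public class HeapSort {
        public static void sort(int[] a) {
            int n = a.length;
            for (int k = n / 2 - 1; k >= 0; k--) sink(a, k, n);   // build-max-heap: O(n)
            while (n > 1) {
                swap(a, 0, --n);    // move the current max to the back of the array
                sink(a, 0, n);      // fix the shrunken heap
            }
        }

        private static void sink(int[] a, int k, int n) {
            while (2 * k + 1 < n) {
                int j = 2 * k + 1;
                if (j + 1 < n && a[j] < a[j + 1]) j++;   // larger child
                if (a[k] >= a[j]) break;
                swap(a, k, j);
                k = j;
            }
        }

        private static void swap(int[] a, int i, int j) { int t = a[i]; a[i] = a[j]; a[j] = t; }
    }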
Binary Search Tree
Keyword: Data structure where all keys in the left subtree of a node are smaller than the node and all keys in the right subtree are larger. Worst case: search, insert, delete = N. Avg case: search, insert = 1.39 lg N; if deletion is allowed: search, insert, delete = sqrt(N). Smallest value: leftmost node in the tree. Largest value: rightmost node.
Sorting: Merge Sort
Keyword: Divide and conquer; divide, then merge back together. Break the array down into smaller parts and sort while merging back together. Time complexity = O(n lg n). Cons: too much overhead for small subarrays; cut off to insertion sort for <= 10 items. Is it stable? Yes
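A minimal top-down mergesort sketch on an int[] (aux array allocated once; the insertion-sort cutoff mentioned above is omitted):

    public class MergeSort {
        public static void sort(int[] a) { sort(a, new int[a.length], 0, a.length - 1); }

        private static void sort(int[] a, int[] aux, int lo, int hi) {
            if (lo >= hi) return;
            int mid = lo + (hi - lo) / 2;
            sort(a, aux, lo, mid);        // sort left half
            sort(a, aux, mid + 1, hi);    // sort right half
            merge(a, aux, lo, mid, hi);   // merge the two sorted halves
        }

        private static void merge(int[] a, int[] aux, int lo, int mid, int hi) {
            System.arraycopy(a, lo, aux, lo, hi - lo + 1);
            int i = lo, j = mid + 1;
            for (int k = lo; k <= hi; k++) {
                if      (i > mid)         a[k] = aux[j++];
                else if (j > hi)          a[k] = aux[i++];
                else if (aux[j] < aux[i]) a[k] = aux[j++];   // take from the right half
                else                      a[k] = aux[i++];   // ties go left => stable
            }
        }
    }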
MST: Greedy Algorithm
Keyword: Find a cut with no black crossing edge; used to find the MST. Cut: a cut in a graph is a partition of its vertices into two nonempty sets. Crossing edge: connects a vertex in one set with a vertex in the other. Cut property: given any cut, the crossing edge of minimum weight is in the MST. Start with all edges coloured grey. Find a cut with no black crossing edges; colour its min-weight edge black. Repeat until V-1 edges are coloured black.
Graph: Breadth-First Search
Keyword: Finish checking all adjacent vertices before moving to the next level. Method: add the starting vertex to a queue; dequeue it and check all adjacent vertices, adding them to the queue. When all adjacent vertices have been marked, keep dequeuing vertices and adding their unmarked adjacent vertices to the queue, and so forth, until all vertices are visited. Can also keep the length of the path to each vertex from the starting node. The path found from the source to a given vertex is the shortest path (fewest edges) from the source to that vertex.
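A minimal BFS sketch that records distances from the source; the adjacency-list representation adj[v] (an int[] of neighbours) is an assumption of mine:

    import java.util.ArrayDeque;
    import java.util.Queue;

    public class BFS {
        public static int[] distances(int[][] adj, int source) {
            int[] dist = new int[adj.length];
            java.util.Arrays.fill(dist, -1);           // -1 = not yet visited
            Queue<Integer> queue = new ArrayDeque<>();
            dist[source] = 0;
            queue.add(source);
            while (!queue.isEmpty()) {
                int v = queue.remove();                 // dequeue the next vertex
                for (int w : adj[v]) {
                    if (dist[w] == -1) {                // unmarked adjacent vertex
                        dist[w] = dist[v] + 1;          // shortest path length in edges
                        queue.add(w);
                    }
                }
            }
            return dist;
        }
    }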
Symbol Tables
Keyword: Key & value data structure. Use: insert a value with a specific key; given a key, search for the corresponding value. Implementations: Sequential search (unordered list): worst case search N, insert N; average case search N/2, insert N.
Graph: Depth-First Search
Keyword: Mark v as visited. Recursively visit all unmarked vertices w adjacent to v. Method: Simply recursively visit all nodes connected from starting point. Preorder: vertices in order of calls to dfs(G, v) Postorder: vertices in order of returns from dfs(G, v) Level-order: vertices in increasing order of distance from s Usage: Find all vertices connected to a given source vertex. Find a path between two vertices.
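A minimal recursive DFS sketch for connectivity from a source, using the same assumed adjacency-list representation as the BFS sketch above:

    public class DFS {
        public static boolean[] reachable(int[][] adj, int source) {
            boolean[] marked = new boolean[adj.length];
            dfs(adj, source, marked);
            return marked;                       // marked[v] == true => v is connected to the source
        }

        private static void dfs(int[][] adj, int v, boolean[] marked) {
            marked[v] = true;                    // mark v as visited
            for (int w : adj[v])
                if (!marked[w]) dfs(adj, w, marked);   // recursively visit unmarked neighbours
        }
    }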
MST: Prim's Algorithm
Keyword: Pick the shortest edge out of the set of visited vertices. Method: maintain an array of visited vertices; choose the shortest edge among all edges leaving the visited vertices, making sure not to create a cycle; repeat until we have V-1 edges, i.e. V vertices in the visited array. We then have an MST. Time complexity depends on the data structure used to track the candidate edges: O(V^2) with an adjacency matrix, O(V log V + E log V) with a heap and adjacency lists.
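A minimal sketch of the O(V^2) adjacency-matrix version, assuming a connected graph where g[u][v] is the edge weight and Double.POSITIVE_INFINITY means no edge (representation and names are mine):

    public class Prim {
        // Returns the total weight of the MST.
        public static double mstWeight(double[][] g) {
            int n = g.length;
            boolean[] visited = new boolean[n];
            double[] distTo = new double[n];             // cheapest known edge from the visited set to each vertex
            java.util.Arrays.fill(distTo, Double.POSITIVE_INFINITY);
            distTo[0] = 0.0;
            double total = 0.0;
            for (int step = 0; step < n; step++) {
                int u = -1;                              // pick the shortest available edge to an unvisited vertex
                for (int v = 0; v < n; v++)
                    if (!visited[v] && (u == -1 || distTo[v] < distTo[u])) u = v;
                visited[u] = true;
                total += distTo[u];
                for (int v = 0; v < n; v++)              // update the cheapest edges leaving the visited set
                    if (!visited[v] && g[u][v] < distTo[v]) distTo[v] = g[u][v];
            }
            return total;
        }
    }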
Balanced BST: Left-Leaning Red-Black BSTs
Keyword: Represent a 2-3 tree with a binary tree structure. No node has two red links connected to it. Every path from the root to a null link has the same number of black links (perfect black balance). Red links lean left. Search is the same as for an elementary BST (ignore colour), but much faster because of the better balance. A node is marked red if its link to its parent is red. rotateLeft: used to fix right-leaning red links; the red-linked right child moves up into the parent's position, the old parent becomes its left child, and the child's old left subtree (the in-between link) becomes the old parent's right child; the old parent's link is marked red. rotateRight: orient a left-leaning red link to (temporarily) lean right; the mirror image of rotateLeft. Color flip: recolor a node with two red links (a temporary 4-node); both child links change to black and the link to the parent is marked red. Every time we add a node we create a red link to its parent: do a standard BST insert and colour the link red, then fix it if the link is not correct. If there are two left red links in a row, rotateRight on the top node, making the middle node the local root with two red links, then perform a colour flip. Worst case: insert, search, delete = 2 lg N. Avg case: insert, search, delete = 1.00 lg N (the constant is extremely close to 1).
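A minimal sketch of the three local fix-ups, following the usual left-leaning red-black conventions (the Node class and its fields are assumptions of mine):

    public class LLRB {
        private static final boolean RED = true, BLACK = false;

        static class Node { int key; Node left, right; boolean color; }   // color = colour of the link to the parent

        private static boolean isRed(Node x) { return x != null && x.color == RED; }

        // Orient a right-leaning red link to lean left.
        private static Node rotateLeft(Node h) {
            Node x = h.right;
            h.right = x.left;          // the in-between subtree moves to the old parent
            x.left = h;                // old parent becomes the left child
            x.color = h.color;
            h.color = RED;             // old parent now hangs on a red link
            return x;
        }

        // Orient a left-leaning red link to (temporarily) lean right.
        private static Node rotateRight(Node h) {
            Node x = h.left;
            h.left = x.right;
            x.right = h;
            x.color = h.color;
            h.color = RED;
            return x;
        }

        // Split a temporary 4-node: both child links become black, the link to the parent becomes red.
        private static void flipColors(Node h) {
            h.color = RED;
            h.left.color = BLACK;
            h.right.color = BLACK;
        }
    }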
Sorting: Insertion Sort
Keyword: Swap with the left neighbour until the correct position is reached. Start at the first index and move from left to right; the first index stays put. At each next index, if the value is smaller than its left neighbour, swap the values; keep swapping with left neighbours until the value sits in its correct position. T(n) = average n^2/4, worst n^2/2. Pros: faster in practice than selection sort. Cons: not the most efficient. Use cases: small N or partially ordered input. Is it stable? Yes
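A minimal insertion-sort sketch on an int[]:

    public class InsertionSort {
        public static void sort(int[] a) {
            for (int i = 1; i < a.length; i++) {
                // swap a[j] with its left neighbour until it reaches its correct position
                for (int j = i; j > 0 && a[j] < a[j - 1]; j--) {
                    int t = a[j]; a[j] = a[j - 1]; a[j - 1] = t;
                }
            }
        }
    }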
Sorting: Selection Sort
Keyword: Swaps the smallest remaining value into place. Start at the first index, iterate through the rest of the list searching for the smallest value, and swap it into the current index; then continue with the next index, until all values are in the right position. T(n) = ~n^2/2. Why? The outer for-loop goes to n and the inner for-loop does n-1, n-2, n-3, ... compares. Pros: memory efficient and uses only n exchanges (minimal data movement). Cons: not very fast, and the running time does not improve even if the input is nearly sorted. Is it stable? No
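A minimal selection-sort sketch on an int[]:

    public class SelectionSort {
        public static void sort(int[] a) {
            for (int i = 0; i < a.length; i++) {
                int min = i;
                for (int j = i + 1; j < a.length; j++)   // search the rest of the list for the smallest value
                    if (a[j] < a[min]) min = j;
                int t = a[i]; a[i] = a[min]; a[min] = t; // swap it into position i
            }
        }
    }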
Hashing: Cuckoo hashing
Keyword: Two hash tables and two hash functions. Try to insert into the first table; if the slot is taken, move the value from the first table into the second table using the second table's hash function (and so on back and forth). This can cause a loop, but that can be detected and different hash functions can then be used.
Sorting: QuickSort
Keywords: Pivot; find an item smaller than the pivot from the right (itemFromRight); find an item bigger than the pivot from the left (itemFromLeft). Method: choose a pivot, ideally close to the median value, and move it to the last index. Find an item bigger than the pivot from the left (itemFromLeft) and an item smaller than the pivot from the right (itemFromRight) and swap them; repeat until itemFromRight ends up with a smaller index than itemFromLeft (the scans cross), then swap itemFromLeft with the pivot. The pivot is now at its correct index: all values to its left are smaller and all values to its right are bigger. Recur into each partition and do the same until all values are sorted. Time complexity: worst O(n^2), average O(n log n). Pros: can be very efficient and does not use as much memory as merge sort. Cons: can be inefficient with a bad pivot choice. Is it stable? No.
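A minimal quicksort sketch on an int[] using the last element as the pivot (a real implementation would shuffle or use median-of-three to get a good pivot; names are my own):

    public class QuickSort {
        public static void sort(int[] a) { sort(a, 0, a.length - 1); }

        private static void sort(int[] a, int lo, int hi) {
            if (lo >= hi) return;
            int p = partition(a, lo, hi);   // pivot lands at its correct index p
            sort(a, lo, p - 1);
            sort(a, p + 1, hi);
        }

        private static int partition(int[] a, int lo, int hi) {
            int pivot = a[hi];                          // pivot sits at the last index
            int i = lo, j = hi - 1;
            while (true) {
                while (i <= j && a[i] <= pivot) i++;    // itemFromLeft: first item bigger than the pivot
                while (j >= i && a[j] >= pivot) j--;    // itemFromRight: first item smaller than the pivot
                if (i >= j) break;                      // the scans have crossed
                swap(a, i, j);
            }
            swap(a, i, hi);                             // put the pivot into its correct place
            return i;
        }

        private static void swap(int[] a, int i, int j) { int t = a[i]; a[i] = a[j]; a[j] = t; }
    }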
StringSorts: LSD radix sort
LSD: Least Significant Digit. Keyword: perform a key-indexed counting sort on each character position, starting at the last character and moving right to left. Time complexity: 2WN guarantee, 2WN random, N+R extra space. Stable = true
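A minimal LSD sketch for fixed-length strings, assuming the extended-ASCII alphabet (R = 256):

    public class LSDRadixSort {
        // Sort an array of strings that all have the same length w.
        public static void sort(String[] a, int w) {
            int R = 256;                              // alphabet size
            String[] aux = new String[a.length];
            for (int d = w - 1; d >= 0; d--) {        // one key-indexed counting pass per character, right to left
                int[] count = new int[R + 1];
                for (String s : a) count[s.charAt(d) + 1]++;              // count frequencies
                for (int r = 0; r < R; r++) count[r + 1] += count[r];     // cumulative counts = starting positions
                for (String s : a) aux[count[s.charAt(d)]++] = s;         // distribute (stable)
                System.arraycopy(aux, 0, a, 0, a.length);
            }
        }
    }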
StringSorts: MSD radix Sort
MSD: Most Significant Digit. Keyword: key-indexed counting sort on the first character, then recursively sort the resulting subarrays. Cons: slow for small subarrays; the recursion creates a huge number of them.
Shortest paths: Bellman-Ford
Order of growth is E x V in both the best and worst case. Repeat V times: relax all E edges.
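A minimal sketch of that simple version, assuming integer edge weights given as {from, to, weight} triples (names are my own):

    public class BellmanFord {
        // edges[i] = {u, v, weight}; relax all E edges, V times over
        public static long[] shortestDistances(int vertexCount, int[][] edges, int source) {
            long[] dist = new long[vertexCount];
            java.util.Arrays.fill(dist, Long.MAX_VALUE / 2);   // "infinity" that cannot overflow when an edge is added
            dist[source] = 0;
            for (int pass = 0; pass < vertexCount; pass++) {   // repeat V times
                for (int[] e : edges)                          // relax all E edges
                    if (dist[e[0]] + e[2] < dist[e[1]]) dist[e[1]] = dist[e[0]] + e[2];
            }
            return dist;
        }
    }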
Analysis: Memory usage in Java
Overhead = 16 bytes Reference(Pointer) = 8 bytes Padding: Each object uses a multiple of 8 bytes.
Graph: Review
Path: Is there a way from s to t? Shortest path: What is the shortest path between s and t? Euler tour: Is there a cycle that uses each edge exactly once? Hamilton tour: Is there a cycle that uses each vertex exactly once? Connectivity: Is there a way to connect all of the vertices?
Tries: R-way Tries
Store characters in nodes (not keys) Each node has R children, one for each possible character.
Analysis: Memory Primitive Types
Type - Bytes
boolean - 1
byte - 1
char - 2
int - 4
float - 4
long - 8
double - 8
Analysis: Memory Arrays & Matrices
Type - Bytes
char[] - 2N+24
int[] - 4N+24
double[] - 8N+24
char[][] - ~2MN
int[][] - ~4MN
double[][] - ~8MN
UnionFind: Quick Find
Union - O(n) linear. Find - O(1) constant. Good if there are few union operations and many finds. union(1, 2) = find what the id of 2 is; index 1, and every other index with the same id as 1, gets that id. The id array is written below the index array. After union(p, q): id[p] = id[q].
Sorting: Merge Sort - bottom up
Kind of like the second half of top-down merge sort, without the recursion. Start by merging adjacent single elements into sorted subarrays of size 2 across the whole array, giving n/2 sorted subarrays of size 2. Then do another pass merging adjacent subarrays of size 2 into size 4, giving n/4 sorted subarrays, and continue doubling until the array is sorted. A variant exploits pre-existing order by merging naturally-occurring runs instead of fixed-size blocks. Is it stable? Yes
Sorting time complexity
https://i.imgur.com/71FLZIF.png
Analysis: Memory usage in Java example
https://i.imgur.com/VZROuY0.png
UnionFind: Weighted Quick Union
if size(p) < size(q): Root(p) = Root(q), else: Root(q) = Root(p). Note: the bad case happens much less often! Idea: the smaller tree goes under the larger one.
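A minimal weighted quick-union sketch, keeping a size array so the smaller tree always goes under the larger (names are my own):

    public class WeightedQuickUnionUF {
        private final int[] id;     // parent links
        private final int[] size;   // size[r] = number of nodes in the tree rooted at r

        public WeightedQuickUnionUF(int n) {
            id = new int[n];
            size = new int[n];
            for (int i = 0; i < n; i++) { id[i] = i; size[i] = 1; }
        }

        private int root(int i) {
            while (i != id[i]) i = id[i];
            return i;
        }

        public void union(int p, int q) {
            int rp = root(p), rq = root(q);
            if (rp == rq) return;
            if (size[rp] < size[rq]) { id[rp] = rq; size[rq] += size[rp]; }   // smaller tree under larger
            else                     { id[rq] = rp; size[rp] += size[rq]; }
        }
    }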
StringSorts: Key-indexed counting
~11N+4R array accesses to sort N items whose keys are integers between 0 and R-1. Space = N+R. Stable = true
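A minimal key-indexed counting sketch for int keys in the range 0..R-1 (names are my own):

    public class KeyIndexedCounting {
        public static int[] sort(int[] keys, int R) {
            int n = keys.length;
            int[] count = new int[R + 1];
            int[] aux = new int[n];
            for (int key : keys) count[key + 1]++;                   // count frequencies, offset by 1
            for (int r = 0; r < R; r++) count[r + 1] += count[r];    // cumulative counts = starting positions
            for (int key : keys) aux[count[key]++] = key;            // distribute: stable because input order is kept
            return aux;
        }
    }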