CS Data Structures Exam 1

2-3 trees

*2-3 trees*
- 2-nodes have one element and two children; 3-nodes have two elements and three children
- faster insertions, slower lookups compared to AVL trees

array

*Arrays* (DATA STRUCTURE)
- knows what kind of elements it has
- homogeneous (every element has the same type)
- length isn't fixed in advance: it's chosen, based on the data, when the array is created
Use when: the number of elements you'll have depends on the size of the data

Big O rules

*Big O rules* For constants 1 < j < k:

1 ≪ log n ≪ n^j ≪ n^k ≪ n^k log n ≪ j^n ≪ k^n

(constants are less than logs, are less than polynomials, are less than higher-degree polynomials, are less than poly-log, are less than exponentials, are less than higher-base exponentials)

characterizing data structures

*Characterizing Data Structures*
- Each data structure almost always comes with an *algorithm* (an effective procedure for a class of problems)
- Usually implements an *abstract data type*: a set of operations with rules about their behavior (stacks, queues, dictionaries)

Computing length of list (iteratively and recursively) methods and time/space complexities

*Computing length of a list (iteratively and recursively): methods and time/space complexities*

    def len_rec(lst):
        if data?(lst): 1 + len_rec(lst.next)
        else: 0

*Time*: O(n)
*Space*: O(n) -> no structs or vectors are allocated, but each recursive call adds a stack frame and the recursion goes n deep (it isn't tail recursion, so the stack counts)

    def len_iter(lst):
        let count = 0
        while data?(lst):
            count = count + 1
            lst = lst.next
        count

*Time*: O(n)
*Space*: O(1) -> a fixed number of variables, no allocation, constant stack

the dictionary ADT

*Dictionary ADT*
- Abstract data type that creates associations between keys and values
- keys have to be unique, values don't
- Data structures that can represent it:
1. A list of key-value pairs (if it's small, this works)
2. A hash table (most common) -> non-direct addressing
3. An array of key-value pairs -> direct addressing
4. A sorted array of key-value pairs
5. Binary search tree

    interface DICT[K, V]:
        def mem?(self, key: K) -> bool?
        def get(self, key: K) -> V
        def put(self, key: K, value: V) -> VoidC
        def del(self, key: K) -> VoidC
        def empty?(self) -> bool?

Disjoint sets

*Disjoint Sets* - ADT, aka union-find
Looks like: {0} {1 2 5} {3 7} {4} {6}

    interface UNION_FIND:
        def len(self) -> nat?
        def union(self, p: nat?, q: nat?) -> VoidC
        def same_set?(self, p: nat?, q: nat?) -> bool?

Behavior:
• d.len() returns the total number of objects
• d.union(p, q) causes p and q's sets to be joined together
• d.same_set?(p, q) reports whether p and q are now in the same set

dynamic array

*Dynamic Array ADT*

    interface DYN_ARRAY[T]:
        def len(self) -> nat?
        def get(self, index: nat?) -> T
        def set(self, index: nat?, element: T) -> NoneC
        def push(self, element: T) -> NoneC
        def pop(self) -> T

- leave extra space in the array, otherwise push/pop are linear instead of constant time!!!
- example of a structure built the same way: banker's queue
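
Not from the course materials: a minimal Python sketch (class and field names made up) of push with doubling growth, to show why leaving extra space matters:

    # Hypothetical dynamic-array push with geometric (doubling) growth.
    class DynArray:
        def __init__(self):
            self._data = [None] * 4   # backing storage with spare room
            self._len = 0             # number of slots actually in use

        def push(self, element):
            if self._len == len(self._data):           # full: grow
                bigger = [None] * (2 * len(self._data))
                for i in range(self._len):             # O(n) copy, but rare
                    bigger[i] = self._data[i]
                self._data = bigger
            self._data[self._len] = element            # O(1) common case
            self._len = self._len + 1

Because the array doubles, the O(n) copy happens only after about n cheap pushes, so push is O(1) amortized; growing by a fixed amount instead would make it O(n) amortized.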

hash tables

*HASH TABLES* -> non-direct addressing (the keys aren't perfect/consecutive integers)

IMPORTANT: *have to be deterministic* -> if you call the hash function twice with the same argument, you get the same result (the same key is put in the same bucket every time)

    let correct_bucket = self._hasher.hash(key) % self._data.len()

^ that has to be the same every time you give it the same key

Hash function: a function that turns keys (strings, etc.) into integers that pick out buckets. Ex: a phonebook.

Hash collision: when the function gives the same bucket for two different keys. Keys are unique, but collisions still happen whenever there are more keys than buckets -> you have more than one friend whose name starts with C, and a bad hash function doesn't have enough buckets to separate them.

*COLLISION SOLUTION #1: Separate Chaining* - store a linked list in each bucket
Pro: deletion is very easy
Con: only helps if you have enough buckets (otherwise the linked lists get very long)
LOOKUP *Time*: O(1) with a good hash function, O(n) with a bad one *Space*: O(1) extra

*COLLISION SOLUTION #2: Open Addressing* - if the bucket is taken, use the next free bucket; on lookup, probe until you find the key or hit an empty bucket
Pro: no extra lists or pointers, everything stays in one array
Con: deletion is really hard here
LOOKUP *Time*: O(1) with a good hash function, O(n) with a bad one *Space*: O(1) extra

- both separate chaining and open addressing rely on a good hash function to keep operations from becoming linear

WHY hash tables rather than BSTs as the data structure behind the dictionary ADT?
1. speed: binary search trees take O(log n) per operation even in the good case, whereas hash tables are O(1) on average
2. better constant factors: hash tables can live in one contiguous block of memory, which is better for the cache
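
A sketch of separate chaining in Python (names invented; Python's built-in hash stands in for the hasher, and it is deterministic within one run):

    # Illustrative separate-chaining table: one list ("chain") per bucket.
    class ChainingDict:
        def __init__(self, n_buckets=8):
            self._data = [[] for _ in range(n_buckets)]

        def _bucket(self, key):
            # Deterministic: the same key lands in the same bucket every time.
            return self._data[hash(key) % len(self._data)]

        def put(self, key, value):
            bucket = self._bucket(key)
            for pair in bucket:
                if pair[0] == key:       # key already present: replace value
                    pair[1] = value
                    return
            bucket.append([key, value])  # a collision just lengthens the chain

        def get(self, key):
            for k, v in self._bucket(key):
                if k == key:
                    return v
            raise KeyError(key)

With a good hash function the chains stay short, so get/put are O(1) on average; a bad one piles everything into a few chains and lookups degrade toward O(n).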

linked lists

*LINKED LISTS*
Pros:
- very easy to grow and insert (good for things that change a lot), EXCEPT that adding to the end is linear -> space complexity is better than vectors when you have to insert
- constant-time adding to the front
Cons:
- accessing/getting the nth element -> have to walk the whole list, vs. RANDOM ACCESS, which is constant in vectors
- removing the last node, and adding to the middle/end of the list, are linear time

Why not vectors? A vector can't grow or have things added in the middle -> you have to create a bigger vector and copy everything over. You could leave extra space... but that makes it hard to find stuff. SOLUTION: linked lists.

    def get_nth(self, n):
        let curr = self.head
        while n > 0:
            if curr is None: error('index out of range')
            curr = curr.next
            n = n - 1
        curr.data   # curr can still be None here if the list has exactly n nodes

*Time*: O(n)/linear -> may walk the entire linked list
*Space*: O(1)/constant

# finding the nth node is the same as above, just return curr instead of curr.data

Prim's algorithm

*Prim's algorithm* Build a tree edge-by-edge, as follows: 1. Start the tree at any vertex 2. Find the smallest edge connecting a tree vertex to a non-tree vertex, and add it to the tree 3. Repeat until all vertices are in the tree
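
Not given in the cards: a heap-based Python sketch of Prim's, assuming graph maps each vertex to a list of (weight, neighbor) pairs and the graph is connected and undirected:

    import heapq

    def prim(graph, start=0):
        in_tree = {start}
        tree_edges = []
        frontier = [(w, start, v) for (w, v) in graph[start]]
        heapq.heapify(frontier)                 # edges leaving the tree so far
        while frontier:
            w, u, v = heapq.heappop(frontier)   # smallest edge out of the tree
            if v in in_tree:
                continue                        # both ends already in the tree
            in_tree.add(v)
            tree_edges.append((u, v, w))
            for (w2, x) in graph[v]:
                heapq.heappush(frontier, (w2, v, x))
        return tree_edges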

Queue

*Queue* - Abstract data type in the form of FIFO (first in, first out -> like a line); compare the stack:

    interface STACK[T]:
        def push(self, element: T) -> NoneC
        def pop(self) -> T
        def empty?(self) -> bool?

    interface QUEUE[T]:
        def enqueue(self, element: T) -> NoneC
        def dequeue(self) -> T
        def empty?(self) -> bool?

IMPLEMENTATION: kinda hard
- Linked list: the order makes it hard, because dequeue needs to loop through the whole thing (O(n))
- Array: have to shift everything on dequeue, also O(n)
Solutions:
*linked list w/ tail*: enqueue at the tail, dequeue at the head, both in constant time (pro: a linked list doesn't fill up)
*ring buffer*: better constant factors (pro: uses less space, no pointer per element)
- start tells you where the oldest element is (the front); new elements go in at (start + len) % capacity
- len tells you how many elements are stored
- the indices wrap around, pretending the array is connected end to end
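
A minimal ring-buffer sketch in Python (fixed capacity, names hypothetical), showing how start and len pretend the array is connected end to end:

    class RingBuffer:
        def __init__(self, capacity):
            self._data = [None] * capacity
            self._start = 0      # index of the oldest element (the front)
            self._len = 0        # how many elements are currently stored

        def enqueue(self, element):
            assert self._len < len(self._data), "queue full"
            end = (self._start + self._len) % len(self._data)  # wrap around
            self._data[end] = element
            self._len = self._len + 1

        def dequeue(self):
            assert self._len > 0, "queue empty"
            element = self._data[self._start]
            self._start = (self._start + 1) % len(self._data)  # wrap around
            self._len = self._len - 1
            return element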

Depth First Search (DFS)

*Recursive DFS algorithm*

    Procedure DFS(graph, start) is
        seen ← new array (same size as graph, filled with false)
        Procedure Visit(v) is
            if not seen[v] then
                seen[v] ← true
                for u in Successors(graph, v) do
                    Visit(u)
                end
            end
        end
        Visit(start)
        return seen
    end

*Time* O(V + E) -> adjacency list; O(V^2) -> adjacency matrix
*Space* O(V) -> the seen vector is part of the result, not scratch space, but the recursion stack can still grow to O(V) along a long path
- can also use DFS for cycle detection

variations of linked lists

*Singly linked list w/ tail* -> ADDING TO THE END IS EASY - now adding to the end is not linear
Pros:
- removing the first node: O(1)
- adding a node to the beginning or end of the list: O(1)
Cons:
- removing the last node -> you need the second-to-last node, so you have to pointer-chase from the front (O(n))

*Doubly linked list w/ tail* -> REMOVING THE END IS EASY
Pros:
- removing the first node: O(1)
- adding a node to the beginning/end of the list: O(1)
- removing the last node: O(1), via the prev pointer
Cons:
- more space, because every node carries two pointers

*CIRCULAR linked list w/ sentinel*
- the sentinel node is always there, so every node has a predecessor and successor; that removes the special cases for the empty list and for the first/last node

Stack

*Stack* - Abstract data type in the form of LIFO (last in, first out) - adds elements to the END (the top)

Signature:
• push(Stack, Element)
• pop(Stack): Element
• empty?(Stack): Bool

    interface INT_STACK:
        def push(self, element: int?) -> NoneC
        def pop(self) -> int?
        def empty?(self) -> bool?

    interface STACK[T]:
        def push(self, element: T) -> NoneC
        def pop(self) -> T
        def empty?(self) -> bool?

STACK implementation: LINKED LIST

    let s = ListStack()
    s.push(2)
    s.push(5)
    s.push(7)
    s.pop()
    s.pop()

STACK implementation: ARRAY

    let s = VecStack()
    s.push(2)
    s.push(3)
    s.push(4)
    s.push(5)
    s.push(6)   # when the vector is full, you need a loop to grow it:
                # make a new vector and copy everything over (linear time)

*Linked list vs. array stack*
- a LL stack fills up only when memory runs out, vs. an array stack, which has a fixed size until it's grown
- an array stack is faster because of cache locality
- so the LL stack is smoother (every push costs about the same) and the array stack is tighter (more compact, better constants, but an occasional linear push)
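
A linked-list stack sketch in Python (Node and ListStack are illustrative names): push and pop both work at the head, so both are O(1) with no growing step:

    class Node:
        def __init__(self, data, next):
            self.data = data
            self.next = next

    class ListStack:
        def __init__(self):
            self._head = None

        def push(self, element):
            self._head = Node(element, self._head)  # new node becomes the head

        def pop(self):
            assert self._head is not None, "stack empty"
            element = self._head.data
            self._head = self._head.next
            return element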

struct

*Structs* (DATA STRUCTURE)
- a kind of box
- let you define new groupings of data
- fixed size
- heterogeneous
When to use: when you have a fixed number of fields you can name
Syntax:

    struct employee:
        let id
        let name
        let position

    let employees = [employee(928, "Alice", 4),
                     employee(14, "Carol", 6)]

*To find Carol's position*: employees[1].position (vectors are 0-indexed, so Carol is element 1)

insertion sort

*Time* O(n^2) worst case -> two nested loops!!! (O(n) best case, when the input is already sorted) *Space* O(1)
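
For reference (not in the card), the standard insertion sort in Python; the outer loop grows a sorted prefix and the inner loop shifts larger elements right:

    def insertion_sort(v):
        for i in range(1, len(v)):
            key = v[i]
            j = i - 1
            while j >= 0 and v[j] > key:   # shift larger elements right
                v[j + 1] = v[j]
                j = j - 1
            v[j + 1] = key

On already-sorted input the inner loop exits immediately, which is where the O(n) best case comes from.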

time and space complexity

*Time*:
- How long do operations take? (All basic operations are constant time, except making a new vector, which is linear)
- Count them
*Space*: how much stuff is created in SCRATCH space
- DON'T COUNT space that's part of the result
- Counted like time, but with an extra twist: you have to know which operations allocate, and how much
1. Two ways to allocate: structs (objects) and vectors -> count how many times you call cons or create a vector/struct, the same way as counting time
2. RECURSION -> stack size. Iterative code has constant stack size; for recursion, ask: how many times can I recur for an input of this size?

two graph representations

*Two graph representations* There are two common ways that graphs are represented on a computer:
1. adjacency list: in an array, store a list of neighbors (or successors) for each vertex
   *Space*: O(V + E) (one list per vertex, and the total length of all the lists is the number of edges)
2. adjacency matrix: store a |V|-by-|V| matrix of Booleans indicating where edges are present
   - if there are weights, every entry starts as inf, and where edges are present there are numbers (weights can be negative)
   *Space*: O(V^2) -> the matrix is |V| by |V| regardless of the number of edges

*Time Complexities* (d = degree of the vertex)
1. add_edge/set_edge: adj. list O(setInsert(d)), adj. matrix O(1)
2. get_edge/has_edge?: adj. list O(setLookup(d)), adj. matrix O(1)
3. get_succs: adj. list O(|Result|) (the list is waiting for you, just go look it up), adj. matrix O(V) (have to scan a whole row)
4. get_preds: adj. list O(V + E), adj. matrix O(V)
- you can adjust the adjacency list a little (storing predecessor lists too) to make get_preds O(|Result|), if you're doing get_succs and get_preds a lot
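
A tiny concrete illustration (a made-up 3-vertex graph with undirected edges 0-1 and 1-2) of the two representations in Python:

    adj_list = [
        [1],       # neighbors of vertex 0
        [0, 2],    # neighbors of vertex 1
        [1],       # neighbors of vertex 2
    ]

    adj_matrix = [
        [False, True,  False],
        [True,  False, True ],
        [False, True,  False],
    ]

    # has_edge?(u, v): scan adj_list[u], O(degree); index adj_matrix[u][v], O(1).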

graphs

*Uses of graphs*
1. spatial graphs (Google Maps -> what we're doing now!)
2. dependency graphs -> DAGs
3. interference graphs -> scheduling
4. flow graphs -> like a spatial graph, but the weights tell you how much can flow between two points, like a rate (this road can handle 40 cars per minute, etc.)
   - e.g., if you want to cut the flow in half, what's the minimum set of edges to remove?
*Types of graphs*
- undirected graph
- directed graph
- DAG (directed acyclic graph)

vector comprehension

*VECTOR COMPREHENSION* - a form of expression that lets you construct a vector with whatever elements you want

    [E for x in v]

where E = expression, x = variable name, v = vector

ex: [2 * y for y in u if y != 3]

What is a data structure?

*What is a data structure?* - A scheme for organizing data so we can use it efficiently (without taking up too much space)

Data structures we talk about:
• struct
• array
• linked list (singly, doubly, circular)
• ring buffer
• hash table
• binary search tree
• adjacency list and adjacency matrix
• binary heap
• union-find
• Bloom filter
• dynamic array
• AVL and red-black trees

Other concepts:
- Abstract data types (stack and queue ADTs, sets, dictionary ADTs)
- Asymptotic analysis (big O notation -> best case, worst case, amortized worst case)
- BSTs/hashing as implementations of dictionary ADTs

*Data Structure Goals*
1. Correctness of code
2. Efficient use of resources:
   - Time (for operations)
   - Space (memory)
   - Power

a sorted array dictionary

*a sorted array dictionary* - a data structure that represents the dictionary ADT - easy to look up
LOOKUP *Time* O(log n) -> it's sorted, so binary search works *Space* O(1)
INSERT *Time* O(n) -> have to move elements out of the way *Space* O(1) extra if there's room to shift in place, O(n) if you must allocate a bigger array
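
The O(log n) lookup is just binary search over the sorted pairs; a Python sketch (function name made up, pairs as (key, value) tuples):

    def sorted_dict_get(pairs, key):
        lo, hi = 0, len(pairs) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if pairs[mid][0] == key:
                return pairs[mid][1]
            elif pairs[mid][0] < key:
                lo = mid + 1      # discard the left half
            else:
                hi = mid - 1      # discard the right half
        raise KeyError(key)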

breadth first search

*breadth first search (BFS)* Instead of following one edge as far as it goes and exploring all its consequences, we find everything that's one edge away, then everything that's two edges away, spreading out; the same way you do a level-order walk on a tree: with a queue.
*time complexity*: O(V + E) for adj. list, O(V^2) for adj. matrix
*space complexity*: O(V), because of the queue
if the todo list is a stack (LIFO): DFS
if the todo list is a queue (FIFO): BFS
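
A queue-based BFS sketch in Python over an adjacency list (not from the cards); swapping the deque for a stack turns it into DFS, as the card says:

    from collections import deque

    def bfs(graph, start):
        seen = [False] * len(graph)
        seen[start] = True
        todo = deque([start])          # FIFO todo list -> BFS
        while todo:
            v = todo.popleft()
            for u in graph[v]:
                if not seen[u]:
                    seen[u] = True
                    todo.append(u)
        return seen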

other directed graph vocab

*digraph/directed graph vocab*
- direct predecessor: the vertex at the tail of an incoming arrow
- *reachable*: there is a path to it
- strongly connected: two vertices that are mutually reachable from each other
- strongly connected component: a subgraph of vertices all strongly connected to each other

directed acyclic graph (DAG)

*directed acyclic graph (DAG)* - dependency graphs -> think chem (can't do c until you've done a or b)

directed graph

*directed graph* - the order of the two vertices in an edge matters (edges have direction) - this is usually the kind of graph computer scientists are talking about

graph ADT

*example of a graph ADT*

    interface GRAPH:
        def new_vertex(self) -> nat?
        def add_edge(self, u: nat?, v: nat?) -> NoneC
        def has_edge?(self, u: nat?, v: nat?) -> bool?
        def get_vertices(self) -> VertexSet
        def get_neighbors(self, v: nat?) -> VertexSet

graph search

*graph search* To answer whether there's a path (among other things), we can use:
1. Depth-first search (DFS): go as far as you can along a path, then go back and try anything you haven't tried yet - like keeping one hand on a corn-maze wall
2. Breadth-first search (BFS): explore all the successors of a vertex before exploring their successors in turn - spreading out step by step

in-order tree walk

*in-order tree walk* - visit each node BETWEEN its children: the print statement goes between the two recursive calls - go all the way down the left, then up, then right; on a BST this visits the keys IN ORDER

level-order tree walk

*level-order tree walk* - visit all of each level before the next level
harder to code: you have to go left-right, left-right across each level, jumping between nodes that have no pointers between them
SOLUTION: keep a todo list (FIFO! -> a queue)

minimal spanning tree

*minimal spanning tree* In a weighted graph, a spanning tree (or forest) is minimal if the sum of its weights is *minimal* over all possible spanning trees Two greedy algorithms to compute MST: - Prim's - Kruskal's

post-order tree walk

*post-order tree walk* - visit each node AFTER its children - go all the way down the left until you hit a node with no children, then do the same with the right, visiting each node on the way back up

pre-order tree walk

*pre-order tree walk* - visit each node BEFORE its children - go all the way down the left, then go back up and do the right
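
The three recursive walks differ only in where the visit happens; a compact Python sketch (node fields data/left/right assumed, None for a missing child):

    def pre_order(node, visit):
        if node is not None:
            visit(node.data)               # visit BEFORE the children
            pre_order(node.left, visit)
            pre_order(node.right, visit)

    def in_order(node, visit):
        if node is not None:
            in_order(node.left, visit)
            visit(node.data)               # visit BETWEEN the children
            in_order(node.right, visit)

    def post_order(node, visit):
        if node is not None:
            post_order(node.left, visit)
            post_order(node.right, visit)
            visit(node.data)               # visit AFTER the children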

merge sort

*time* merging two sorted lists is O(n); the full merge sort is O(n log n) - splitting into odds and evens is also O(n) *space* O(n)
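
The classic version for reference (a Python sketch, splitting in half rather than into odds and evens):

    def merge(a, b):                  # combining two sorted lists: O(n)
        result, i, j = [], 0, 0
        while i < len(a) and j < len(b):
            if a[i] <= b[j]:
                result.append(a[i]); i = i + 1
            else:
                result.append(b[j]); j = j + 1
        return result + a[i:] + b[j:]

    def merge_sort(v):
        if len(v) <= 1:
            return v
        mid = len(v) // 2
        return merge(merge_sort(v[:mid]), merge_sort(v[mid:]))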

tree walks

*tree walks* - traverses tree and linearizes vertices in some order -> you have some goal 1. pre-order tree walk 2. post-order tree walk 3. in-order walk 4. level-order walk

trees

*trees* tree: a graph with no cycles
*you can't sum up a tree with a simple while loop, you have to use RECURSION -> which costs stack space*
- we usually root trees to orient them
- ordered tree -> assigns an order to the children of each node
- k-ary tree: each node has at most k children (binary trees are technically also ternary trees, they're just not filled)
- rose tree: infinite-ary tree

*FULL TREE*
- full tree: every non-leaf node has exactly k children ("if I have kids, I'm gonna have two, can't just have 1")

*COMPLETE TREE*
- we care more about complete than full
- complete does not equal full
- a tree is complete if every level is full of nodes except the last, which must be filled from the left
- there is only one shape for each size of complete tree

undirected graph

*undirected graph* - the order of the two vertices in an edge doesn't matter - mathematicians usually mean undirected

bloom filters

- Suppose Google Chrome's website blacklist (for malware sites) contains ten million URLs, averaging 100 characters in length. How much memory does it take to represent the blacklist? How long does it take to look up an entry?
- If we just store a sequence of entries, lookup is a looong linear scan. What if we want fast lookups? We could use a hash table, which lets us look up a URL in constant time, but the hash table will be about 1 GB.
- Google could store the hash table (still 1 GB, a lot of memory) on a server, but then checking the blacklist requires a remote query every time.
SOLUTION: store a small summary of the blacklist on each client. We don't need to store any information with each URL, so imagine the following: make a hash table where each bucket is just one bit, which is set if occupied and clear if unoccupied.
What's the problem with this approach? (False positives.)
- false positives aren't fatal here -> in that case you only have to do a remote query, to confirm that the URL really is in the blacklist
probability of a false positive? - depends on how many bits you use
Let *n* be the number of set elements. (In our example, *n* = 10,000,000.) Let *m* be the number of bits. Then we expect approximately *n/m* of the bits to be set (ignoring collisions). -> So when we look up a URL that isn't in the set, the probability of a false positive is *n/m*.
Here's where the BLOOM FILTER comes in:
- A Bloom filter does somewhat better than this by using multiple hash functions. Let the hash functions be *h1*, *h2*, etc. To add an element *s* to the set, we set the bits indicated by *h1(s)*, *h2(s)*, and so on. To look up an element, we check whether all the bits indicated by the hash functions are set. If all the bits are set, then the element is possibly (probably) in the set; if any of the bits is clear, then the element definitely isn't in the set.
- Note that we can't remove elements, because multiple elements may share some of the same bits.
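
A toy Bloom filter in Python (class name made up; the k hash functions are simulated by salting one hash, where real implementations use independent hash functions):

    class BloomFilter:
        def __init__(self, m_bits, k_hashes):
            self._bits = [False] * m_bits
            self._k = k_hashes

        def _indices(self, s):
            # k pseudo-independent bit positions for element s
            return [hash((i, s)) % len(self._bits) for i in range(self._k)]

        def add(self, s):
            for i in self._indices(s):
                self._bits[i] = True     # set all k bits for this element

        def maybe_contains(self, s):
            # All k bits set: possibly in the set.
            # Any bit clear: definitely not in the set.
            return all(self._bits[i] for i in self._indices(s))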

structs and arrays

- can build networks out of structs connected to arrays
- they have identities!!!
*EFFICIENCY*
- the time it takes to index an array/struct doesn't depend on the size of the array/struct
- getting the first element of a 1-element array takes the same amount of time as getting the last element of a 100,000-element array -> constant time (RANDOM ACCESS)

boxes

- we don't need to worry about the address inside a pointer, just the pointer itself
- 2 kinds of boxes we build data structures out of:
1. Structs
2. Arrays
They have identity and can be aliased
*EFFICIENCY*
The time that it takes to index an array (or struct) does not depend on the size of the array (or struct), nor on the index. That is, these three things should all take the same amount of time (due to *random access*):
• getting the first/last element of a 1-element array,
• getting the first element of a 100,000,000-element array, and
• getting the last element of a 100,000,000-element array.

questions to ask about graphs

- is there a path from v to u? - What's the shortest path from v to u? - Are there any cycles?

AVL trees

- maintain a balance factor giving the difference between each node's subtrees' heights
- balance factor stays between -1 and 1, maintained via rotations
- the tree is approximately height-balanced
RULES that tell you what rotation to do:
1. first do a normal leaf insertion
2. track balance factors on the way back up to the root
3. adjust with rotations as necessary
*Right-left case* - the right side is too heavy (balance factor past +1) and the right child leans left: rotate to turn it into the right-right case, then rotate left
^ mirror image for the left-right and left-left cases

Splay trees

- operations are amortized O(log n), so search paths are very likely to be short
- self-adjusting: the order of the lookups you do affects the shape of this tree (each accessed node gets splayed up to the root)

red-black trees

1. Every node is red or black
2. The root is always black
3. Dummy leaves are black
4. If a node is red, its children must be black (no red node has a red parent)
5. Every path from a node to a NULL pointer must contain the same number of black nodes

Union find

1. quick-find: eager approach - p and q are connected if they have the same id
   - union: change all entries with id[p] to id[q]
   - problem: many entries can change on a single union
2. quick-union: lazy approach - the root of i is id[id[id[...id[i]...]]]
   - find: could be linear - trees can get too tall
   - union: have to do a find to do a union
3. weighted, with path compression
   - union: trees don't get too tall; the heavier one always becomes the root
   - worst case for an individual union/find operation -> O(log n)
   - but m operations take O(m log* n), so you can say that O(log* n) is the worst-case amortized time complexity of a single union or find operation - but don't forget that it's amortized
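
A Python sketch of variant 3 (weighted quick-union with path compression); the class shape mirrors the UNION_FIND interface from the disjoint-sets card:

    class UnionFind:
        def __init__(self, n):
            self._parent = list(range(n))  # each object starts as its own root
            self._size = [1] * n

        def _find(self, p):
            while self._parent[p] != p:
                self._parent[p] = self._parent[self._parent[p]]  # compress
                p = self._parent[p]
            return p

        def union(self, p, q):
            rp, rq = self._find(p), self._find(q)
            if rp == rq:
                return
            if self._size[rp] < self._size[rq]:
                rp, rq = rq, rp            # heavier tree becomes the root
            self._parent[rq] = rp
            self._size[rp] = self._size[rp] + self._size[rq]

        def same_set(self, p, q):
            return self._find(p) == self._find(q)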

bit vector

A bit vector is a vector in which each element is a bit (so its value is either 0 or 1). In most vectors, each element has a different address in memory and can be manipulated separately from the other elements, but we also hope to be able to perform "vector operations" that treat all elements uniformly.

priority queue

ADT: holds some things that have some sort of order, keys that let you know which is more important - can contain the same key twice (unlike dictionaries)
A (min-)priority queue provides these operations:
• insert: adds an element
• remove_min: removes the smallest element
Simple implementations:
- sorted list: remove_min is O(1), but insert is O(n)
- unsorted list: insert is O(1), but remove_min is O(n)
(a binary heap gets both to O(log n))

abstract data types (ADT)

An ADT defines:
- a set of abstract values
- a set of abstract operations on those values
An ADT omits:
- how values are concretely represented
- how operations work
Examples of ADTs:
- Stack
- Queue
- Dictionaries (binary search trees, hash tables)

2-3-4 trees

B-tree of order 4

data structures are made of....

BITS !!! - can be organized into bytes - structure and order of 1's and 0's gives them meaning

Kruskal's algorithm

Build several trees and join them, as follows:
1. Start with a trivial tree at every vertex
2. Consider the edges in order from smallest to largest
3. When an edge would join two separate trees, use it to combine them into one tree
- how to keep track of the disjoint trees? The disjoint set ADT -> UNION-FIND!
see CODE on separate sheet
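
The separate sheet isn't included here; as a stand-in, a Python sketch of the idea, reusing the UnionFind class sketched in the union-find card (edges as (weight, u, v) triples):

    def kruskal(n_vertices, edges):
        uf = UnionFind(n_vertices)
        tree = []
        for w, u, v in sorted(edges):    # smallest edge first
            if not uf.same_set(u, v):    # would join two separate trees?
                uf.union(u, v)
                tree.append((u, v, w))
        return tree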

permutation distribution

Can we characterize how sequences of insertions produce (un)balanced trees? - In fact, the only sequence to produce the right-branching degenerate tree is 0, ..., 14 - There are 21,964,800 sequences that produce the same perfectly balanced tree -> A RANDOM BST tends to be balanced!!!

How likely are false positives when using multiple hash functions?

Consider: - The probability of one hash function setting one particular bit is 1/m. Or, the probability of a bit not being set by the hash function is 1 - 1/m. - If there are k hash functions, then the probability of a bit being not set is (1 - 1/m)^k. - If we insert n elements, then the probability of a bit not being set is (1 - 1/m)^(kn), or the probability of a bit being set is 1 - (1 - 1/m)^(kn). - Now suppose we lookup an element that's not in the set. That means we check k bits, and we return true only if all k bits are set. So the probability that all k bits are set is [1 - (1 - 1/m)^(kn)]^k.
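
To get a feel for the formula, with made-up but plausible parameters: take n = 10,000,000 URLs, m = 80,000,000 bits (10 MB, a hundredth of the 1 GB hash table), and k = 5 hash functions. Then the probability that any given bit is set is 1 - (1 - 1/m)^(kn) ≈ 1 - e^(-kn/m) = 1 - e^(-0.625) ≈ 0.465, and the false-positive probability is about 0.465^5 ≈ 2%.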

binary search tree

Data structure that implements the interface of the dictionary ADT
*Invariant*: keys in the left subtree < node's key < keys in the right subtree
LOOKUP *Time* IF BALANCED: O(log n) -> makes use of the invariant to discard half the tree at each step; IF unbalanced: O(n) -> might visit all of them *Space* O(1)
INSERT (recursive) *Time* if bushy/balanced: O(log n); if unbalanced: O(n) *Space* only one new node gets created, but the recursion stack is O(height)
DELETE - go right/left until you find the node, then splice it out *Time* if bushy/balanced: O(log n); if unbalanced: O(n) *Space* O(1)
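
A Python sketch of lookup (field names key/value/left/right assumed): each comparison uses the invariant to discard a whole subtree, which is where O(log n) comes from on a balanced tree:

    def bst_get(node, key):
        while node is not None:
            if key == node.key:
                return node.value
            elif key < node.key:
                node = node.left    # everything to the right is too big
            else:
                node = node.right   # everything to the left is too small
        raise KeyError(key)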

spanning tree

For a connected component of a graph, a spanning tree is a cycle-free subset of edges that connects every vertex If a graph has multiple components then each will have its own spanning tree, forming a spanning forest

choosing bw struct and array

Have a fixed number of fields that you can name? -> *struct*!
How many elements you'll have depends on the size of the data? -> *array*!

Relaxation to find shortest path

Keep a table with two values for each node:
• the best known distance to it from the start, and
• the predecessor node along that best path
To "relax" an edge, we consider whether our knowledge thus far, combined with that edge, can improve our knowledge by finding a shorter path.
For example, suppose that
• the best known distance to node C is 15,
• the best known distance to node D is 4, and
• there's an edge of weight 5 from D to C.
Then we update the best known distance to C to be 9, via D.
*Time Complexity* O(E + V log V), where E is the number of edges and V is the number of vertices (this is the cost of Dijkstra's algorithm, which relaxes the edges in a clever order).
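
Not from the source: a tiny Python rendering of the relax step, assuming per-vertex dist and pred tables with dist initialized to infinity except dist[start] = 0:

    def relax(dist, pred, u, v, weight):
        if dist[u] + weight < dist[v]:   # found a shorter path to v, via u
            dist[v] = dist[u] + weight
            pred[v] = u

On the example above: dist[D] + 5 = 9 < 15 = dist[C], so C's entry becomes distance 9 with predecessor D.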

Bellman-Ford algorithm

Solves: SSSP for graphs with no negative cycles (a cycle whose weights sum to a negative number)
Main idea: Relax every edge |V| − 1 times
*Time*: O(VE) for adjacency list; O(V^3) for adjacency matrix
SEE CODE on other sheet!!!
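
The other sheet isn't included here; as a stand-in, a minimal Python sketch (edges as (u, v, weight) triples, an assumed representation):

    import math

    def bellman_ford(n_vertices, edges, start):
        dist = [math.inf] * n_vertices
        dist[start] = 0
        for _ in range(n_vertices - 1):     # relax every edge |V| - 1 times
            for u, v, w in edges:
                if dist[u] + w < dist[v]:   # the relax step
                    dist[v] = dist[u] + w
        return dist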

Dijkstra's algorithm

Solves: SSSP for graphs with no negative edges
Main idea: Relax the edges in a clever order
What's the clever order? -> Relax the edges coming out of the nearest unvisited vertex, then repeat
- not O(V + E) with an adjacency list, because you have to find the nearest vertex first (use a priority queue for that)
An algorithm for finding the shortest paths between nodes in a weighted graph. For a given source node, the algorithm finds the shortest path between that node and every other node. It can also be used to find the shortest path from a single node to a single destination node, by stopping the algorithm once the shortest path to the destination has been determined.
*Time* O(E + V log V) with a Fibonacci heap, where E is the number of edges and V is the number of vertices; a simpler binary-heap version is O((V + E) log V).
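
A binary-heap sketch of Dijkstra's in Python (assumes graph[v] is a list of (weight, neighbor) pairs with non-negative weights); this version is O((V + E) log V):

    import heapq, math

    def dijkstra(graph, start):
        dist = [math.inf] * len(graph)
        dist[start] = 0
        todo = [(0, start)]                 # (distance, vertex) pairs
        while todo:
            d, v = heapq.heappop(todo)      # nearest vertex not yet finished
            if d > dist[v]:
                continue                    # stale entry, already improved
            for w, u in graph[v]:
                if d + w < dist[u]:         # relax edge v -> u
                    dist[u] = d + w
                    heapq.heappush(todo, (dist[u], u))
        return dist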

deletion for random bst

To delete a node, we join its subtrees recursively, randomly selecting which one contributes the root (weighted by size).

direct addressing

Uses the key directly as the index, without alteration - the simplest method of addressing.
ex: mapping digits to their names in English

    let digits = ['zero', 'one', 'two', 'three']

    def get_digit_name(digit: int?) -> str?:
        digits[digit]

LOOKUP *Time* O(1) *Space* O(1)

root insertion

Using rotations, we can insert at the root: - To insert into an empty tree, create a new node - To insert into a non-empty tree, if the new key is greater than the root, then root-insert (recursively) into the right subtree, then rotate left - By symmetry, if the key belongs to the left of the old root, root insert into the left subtree and then rotate right

randomized insertion

We can now build a randomized insertion function that maintains the random shape of the tree:
• Suppose we insert into a subtree of size k, so the result will have size k + 1
• If the tree were random, the new element would be the root with probability 1/(k + 1)
• So we root-insert with that probability, and otherwise recursively insert into a subsubtree
The bigger the subtree is, the less likely we are to do a root insertion (we want that probability to be 1/(k + 1))

abstract data type

a description of operations on a data type that could have multiple possible implementations - basically some concept in math that you want to program with
operations: abstract
implementation: how it actually works
examples:
- set
- stack
- queue
- dictionary

a random BST tends to be ________________

a random BST tends to be *balanced*
- If you generate a tree by leaf-inserting a random permutation of its elements, it will probably be balanced
- In particular, the expected length of a search path is 2 ln n + O(1)

connected component

a subgraph of nodes all connected to each other

stack

abstract data type
operations: push, pop, peek
implementations:
- linked list: cons, rest, first
- array (e.g., the VecStack above)

bytes

bytes: one way to organize bits!
- a group of 8 bits
- we think of it as a number written in base 2
- a byte could mean a number OR a letter (you can send it to a place that expects a number, or to one that expects a char)

linked list of associations

data structure that represents the dictionary ADT
LOOKUP *Time* O(n) -> which is why it only works if it's short *Space* O(1)

degree

the degree of a vertex is the number of adjacent vertices
the degree of a graph is the max degree of any vertex

leaf insertion

the easy way to add elements to a tree: search down from the root to where the key would have to be, and attach a new node there as a leaf

math laws

general structure: {p} f(x) => y {q}
if precondition p is true when we apply f to x, then we get y as a result and postcondition q will be true afterward
p: logical statement about variables before
q: logical statement about variables after
f(x): code
y: result

B-trees

generalization of 2-3 trees, up to k children
- like 2-3 trees, but allows up to k/2 of the children to be missing (every node stays at least half full)
- node size is chosen to be optimal -> stuff is moved between main memory and cache in blocks, so one node fills one block

binary heap

new data structure that we can use for shortest paths; implements a new ADT -> the priority queue
A binary heap is a complete binary tree that is heap-ordered
- a tree is heap-ordered if every element is less than or equal to its children
*Binary Heap Insertion* - bubbling up!
*Time*: O(log n) - you do swaps up to the height of the tree, and a complete tree is definitely balanced
*Binary Heap Removal/Deletion* - percolating down!
1. Replace the root with the last element of the heap
2. Sink it down to restore the invariant
*Time*: O(log n)
*Cool thing about binary heaps*
Instead of storing it as an actual tree with pointers, a binary heap is stored in level-order in an array! Finding a parent or child is just arithmetic:

    let parent      = (i - 1) / 2      # integer division
    let left_child  = (2 * i) + 1
    let right_child = (2 * i) + 2
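
A sketch of bubbling up in Python (function name made up), using exactly that index arithmetic on a min-heap stored in a plain list:

    def heap_insert(heap, element):
        heap.append(element)        # add at the end: keeps the tree complete
        i = len(heap) - 1
        while i > 0:
            parent = (i - 1) // 2
            if heap[parent] <= heap[i]:
                break                                       # heap order restored
            heap[i], heap[parent] = heap[parent], heap[i]   # bubble up
            i = parent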

set

operations: empty?, member?, insert, union, intersect, size
implementations:
- linked list
- array
- binary search tree
- hash table

classes in DSSL2

they have:
- fields
- initializer/constructor
- accessors
- methods
EXAMPLE:

    class Posn:
        let x
        let y
        def __init__(self, x, y):
            self.x = x
            self.y = y
        def get_x(self): self.x
        def get_y(self): self.y
        def distance(self, other):
            let dx = self.x - other.get_x()
            let dy = self.y - other.get_y()
            return (dx * dx + dy * dy).sqrt()

the big grid

think of the computer's memory as a big grid of numbered boxes of bytes; the number of a box is its address, and a box can hold data or the address of another box (a pointer)
- can treat several adjacent bytes as one larger number if you want
- arrays/vectors are a run of memory with adjacent values

tree rotations

a tool to help make trees more balanced: a rotation swaps a node with one of its children, re-hanging the subtrees so the BST ordering is preserved while one side gets shorter and the other taller

adjacent vertices

two vertices connected by an edge

