CPSC 320


Prove that Kruskal's Algorithm produces a minimum spanning tree

Consider any edge e = (v, w) added by Kruskal's Algorithm, and let S be the set of all nodes to which v has a path at the moment just before e is added. Clearly v ∈ S, but w ∉ S, since adding e does not create a cycle. Moreover, no edge from S to V − S has been encountered yet, since any such edge could have been added without creating a cycle, and hence would have been added by Kruskal's Algorithm. Thus e is the cheapest edge with one end in S and the other in V − S, and so by the Cut Property it belongs to every minimum spanning tree.

It remains to show that the output (V, T) is a spanning tree. Clearly (V, T) contains no cycles, since the algorithm is explicitly designed to avoid creating cycles. Further, if (V, T) were not connected, then there would exist a nonempty subset of nodes S (not equal to all of V) such that there is no edge from S to V − S. But this contradicts the behavior of the algorithm: since G is connected, there is at least one edge between S and V − S, and the algorithm will add the first of these that it encounters.

logarithmic functions

For every b > 1 and every x > 0, we have log_b n = O(n^x). The base of the logarithm is not important when writing bounds in asymptotic notation: logarithms grow more slowly than polynomials, and polynomials grow more slowly than exponentials.

What is the general form of a recurrence relation used for merge sort?

For some constant c: T(n) ≤ 2T(n/2) + cn for n > 2, and T(2) ≤ c. More generally, divide-and-conquer recurrences take the form T(n) ≤ q·T(n/2) + cn for some q ≥ 1; merge sort is the case q = 2.
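
As a concrete instance of this recurrence, here is a minimal merge sort sketch in Python (illustrative, not from the course materials): the two recursive calls are the q = 2 term, and the merge loop is the cn term.

def merge_sort(a):
    # Base case: lists of length <= 1 are already sorted.
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    # Two recursive calls on halves of the input: the q = 2 term.
    left = merge_sort(a[:mid])
    right = merge_sort(a[mid:])
    # Merging takes linear time: the cn term of the recurrence.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

# Example: merge_sort([5, 2, 4, 1, 3]) returns [1, 2, 3, 4, 5].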

Union Find operations - Union(A,B)

For two sets A and B, the Union(A,B) operation will change the data structure by merging A and B into a single set. The goal is to implement Union(A,B) in O(log n) time.

cycle

A cycle is a path v1, v2, v3, ..., vk−1, vk in which k > 2, the first k − 1 nodes are all distinct, and v1 = vk, so that the path ends where it began.

Binary Heap

A data structure that implements a complete binary tree within an array, such that every parent node has a key less than or equal to the keys of its children (the min-heap order property).

simple path

A path is simple if all its vertices are distinct from one another.

directed graph

A directed graph consists of a set of nodes V and a set of directed edges E'. Each e' ∈ E' is an ordered pair (u, v), where u is the tail of edge e' and v is the head; e' leaves node u and enters node v.

strongly connected directed graph

A directed graph is strongly connected if for every pair of nodes u and v, there is a path from u to v and a path from v to u.

Graph

A graph consists of a collection V of nodes and a collection E of edges. Each edge joins two nodes.

What is the big O for a recurrence relation with q > 2?

O(n^(log_2 q)), which is polynomial time.

Algorithm efficiency

An algorithm is efficient if it achieves qualitatively better worst-case performance, at an analytical level, than brute-force search. More concretely, an algorithm is efficient if it has a polynomial running time.

What is a greedy algorithm?

An algorithm is greedy if it builds up a solution in small steps, making a decision at each step myopically to optimize some underlying criterion.

Edge representation

An edge belonging to E is represented as a 2-element subset of V: e = {u, v} for some u, v belonging to V, where u and v are the ends of e.

Tree

An undirected graph is a tree if it is connected and has no cycles.

connected undirected graph

An undirected graph is connected if for every pair of nodes u and v, there is a path from u to v.

Interval Scheduling problem - greedy algorithm

Approach: Accept the request that finishes first, i.e., the request for which f(i) is minimized. R is the set of all requests; A is the set of accepted requests, initialized as empty.

Initially let R be the set of all requests, and let A be empty
While R is not empty
    Choose a request i ∈ R that has the smallest finishing time f(i)
    Add request i to A
    Delete all requests from R that are not compatible with request i
EndWhile
Return the set A as the set of accepted requests
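
A minimal Python sketch of this greedy rule, assuming each request is a (start, finish) pair; sorting by finish time once replaces the explicit "delete all incompatible requests" step.

def interval_schedule(requests):
    # requests: list of (start, finish) pairs; returns a maximum
    # compatible subset. Sorting by finish time f(i) lets a single
    # pass play the role of repeatedly deleting incompatible requests.
    accepted = []
    last_finish = float('-inf')
    for s, f in sorted(requests, key=lambda r: r[1]):
        if s >= last_finish:   # compatible with everything accepted so far
            accepted.append((s, f))
            last_finish = f
    return accepted

# Example: interval_schedule([(1, 4), (3, 5), (0, 6), (5, 7), (3, 8), (8, 11)])
# returns [(1, 4), (5, 7), (8, 11)].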

cut property

Assume that all edge costs are distinct. Let S be any subset of nodes that is neither empty nor equal to all of V. Then the minimum-cost edge e with one end in S and the other in V − S belongs to every minimum spanning tree.

cycle property

Assume that all edge costs of G are distinct. Let C be any cycle in G, and let e = (u, v) be the most expensive edge belonging to C. Then e does not belong to any minimum spanning tree of G.

Heap Operations: Removing an element

Assume the heap currently has n elements. After deleting the element H[i], the heap will have only n − 1 elements; and not only may the heap-order property be violated, there is actually a "hole" at position i, since H[i] is now empty. So as a first step, to patch the hole in H, we move the element w in position n to position i. The elements now occupy the first n − 1 positions, as required, but the heap-order property may be violated at position i. If w's key is too small (that is, the violation of the heap property is between node i and its parent), then we can use Heapify-up(i) to reestablish the heap order. On the other hand, if w's key is too big, the heap property may be violated between i and one or both of its children; in this case, we use a procedure called Heapify-down.

Pseudocode for BFS where lists L0, L1, L2, ... are built for each layer

BFS(s):
    Set Discovered[s] = true and Discovered[v] = false for all other v
    Initialize L[0] to consist of the single element s
    Set the layer counter i = 0
    Set the current BFS tree T = ∅
    While L[i] is not empty
        Initialize an empty list L[i + 1]
        For each node u ∈ L[i]
            Consider each edge (u, v) incident to u
            If Discovered[v] = false then
                Set Discovered[v] = true
                Add edge (u, v) to the tree T
                Add v to the list L[i + 1]
            Endif
        Endfor
        Increment the layer counter i by one
    EndWhile
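
A minimal Python translation of this layered BFS, assuming the graph is an adjacency-list dict mapping each node to a list of neighbors (bfs_layers is an illustrative name, not from the course).

def bfs_layers(graph, s):
    # graph: dict mapping each node to a list of neighbors.
    discovered = {s}
    tree = []          # edges (u, v) of the BFS tree T
    layers = [[s]]     # layers[i] is the list L[i]; L[0] = [s]
    i = 0
    while layers[i]:
        layers.append([])              # initialize L[i + 1]
        for u in layers[i]:
            for v in graph[u]:
                if v not in discovered:
                    discovered.add(v)
                    tree.append((u, v))
                    layers[i + 1].append(v)
        i += 1
    return layers[:-1], tree           # drop the final empty layer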

DFS pseudocode (recursive)

DFS(u):
    Mark u as "Explored" and add u to R
    For each edge (u, v) incident to u
        If v is not marked "Explored" then
            Recursively invoke DFS(v)
        Endif
    Endfor
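
A minimal recursive Python sketch of the same procedure, assuming an adjacency-list dict; note that Python's default recursion limit makes this unsuitable for very deep graphs.

def dfs(graph, u, explored=None):
    # graph: dict mapping each node to a list of neighbors.
    # Returns R, the set of all nodes reachable from u.
    if explored is None:
        explored = set()
    explored.add(u)                # mark u as "Explored"
    for v in graph[u]:
        if v not in explored:
            dfs(graph, v, explored)
    return explored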

Can a DFS tree ever have (non-tree) edges between nodes at the same level?

No. For a given recursive call DFS(u), all nodes that are marked "Explored" between the invocation and the end of this recursive call are descendants of u in T. Hence, if T is a depth-first search tree, x and y are nodes in T, and (x, y) is an edge of G that is not an edge of T, then one of x or y is an ancestor of the other, so a non-tree edge can never join two nodes at the same level.

Union Find operations - MakeUnionFind(s)

For a set S, MakeUnionFind(S) returns a Union-Find data structure in which all the elements of S are in separate sets. This corresponds, for example, to the connected components of a graph with no edges. The goal is to implement MakeUnionFind in time O(n), where n = |S|.

Union Find operations - Find(u)

For an element u ∈ S, Find(u) will return the name of the set to which u belongs. The goal is to implement Find(u) in O(log n) time.

Prove that connected vertices in a BFS tree differ by at most 1 layer

Proof by contradiction. Let T be a BFS tree, let x and y be nodes belonging to layers Li and Lj respectively, and let (x, y) be an edge of G. Suppose, without loss of generality, that i < j, and suppose for contradiction that j − i > 1. Consider the point in the BFS when the edges incident to x are being examined. At this point, the only nodes discovered belong to layers Li+1 and earlier; hence, if y is a neighbor of x, it must have been discovered by this point and therefore belongs to layer Li+1 or earlier, contradicting j > i + 1.

Asymptotic tight bound Θ

If a function T(n) is both O(f(n)) and Ω(f(n)), we say that T(n) is Θ(f(n)). For example, T(n) = pn^2 + qn + r is both O(n^2) and Ω(n^2). Here f(n) is an asymptotically tight bound for T(n).

Properties of Asymptotic Growth Rate : Transitivity

If a function f is asymptotically upper-bounded by a function g, and g is asymptotically upper-bounded by a function h, then f is asymptotically upper-bounded by h as well: if f = O(g) and g = O(h), then f = O(h). Similarly, if f is asymptotically lower-bounded by g, and g is asymptotically lower-bounded by h, then f is asymptotically lower-bounded by h: if f = Ω(g) and g = Ω(h), then f = Ω(h). Likewise, if f = Θ(g) and g = Θ(h), then f = Θ(h).

DFS description

In DFS, s is marked as explored, and then one of its neighbors is explored; this occurs recursively, so that each time an unexplored neighbor is chosen and explored, going deeper until the algorithm reaches a dead end, at which point it backtracks.

Complete Binary Tree

In a complete binary tree every level, except possibly the last, is completely filled, and all nodes in the last level are as far left as possible.

Mutual reachability

In a directed graph, if u and v are mutually reachable and v and w are mutually reachable, then u and w are mutually reachable.

Proof: cut property

Let T be a spanning tree that does not contain e; we must show that T does not have minimum cost. The strategy is an exchange argument: we will identify an edge e' in T that is more expensive than e, such that exchanging e for e' results in another spanning tree. The resulting spanning tree T' (which contains e) will then be cheaper than T.

Recall that the ends of e are v and w. T is a spanning tree, so there must be a path P in T from v to w. Starting at v, suppose we follow the nodes of P in sequence; there is a first node w' on P that is in V − S. Let v' ∈ S be the node just before w' on P, and let e' = (v', w') be the edge joining them. Thus e' is an edge of T with one end in S and the other in V − S.

If we exchange e for e', we get the set of edges T' = (T − {e'}) ∪ {e}. We claim that T' is a spanning tree. Clearly (V, T') is connected, since (V, T) is connected, and any path in (V, T) that used the edge e' = (v', w') can now be "rerouted" in (V, T') to follow the portion of P from v' to v, then the edge e, and then the portion of P from w to w'. To see that (V, T') is also acyclic, note that the only cycle in (V, T' ∪ {e'}) is the one composed of e and the path P, and this cycle is not present in (V, T') due to the deletion of e'.

We noted above that the edge e' has one end in S and the other in V − S. But e is the cheapest edge with this property, and so c_e < c_e'. (The inequality is strict since no two edges have the same cost.) Thus the total cost of T' is less than that of T, as desired.

Proof: cycle property

Let T be a spanning tree that contains e, the most expensive edge belonging to the cycle C in the source graph G; we must show that T does not have minimum cost. The proof is an exchange argument in which we replace e with a cheaper edge e'.

Suppose we delete e from T, partitioning the nodes into two components: S, containing node v, and V − S, containing node w. Any replacement edge must bridge S and V − S, and such an edge e' can be found by following the cycle C: the edges of C other than e form, by definition, a path P with one end at v and the other at w. If we follow P from v to w, we begin in S and end up in V − S, so there is some edge e' on P that crosses from S to V − S.

Now consider the set of edges T' = (T − {e}) ∪ {e'}. Arguing just as in the proof of the Cut Property, the graph (V, T') is connected and has no cycles, so T' is a spanning tree of G. Moreover, since e is the most expensive edge on the cycle C, and e' belongs to C, it must be that e' is cheaper than e, and hence T' is cheaper than T, as desired.

Asymptotic limit theory

Let f and g be two functions such that lim_{n→∞} f(n)/g(n) exists and is equal to some number c > 0. Then f(n) = Θ(g(n)).

Drop the lower order terms for big O

Let f be a polynomial of degree d, in which the coefficient a_d is positive. Then f = O(n^d). Also, f = Ω(n^d), and hence it follows that in fact f = Θ(n^d).

Test to see if a graph is strongly connected?

Make a graph Grev in which every edge of G is reversed, and run BFS starting at s in both G and Grev. Case 1: if one of these two searches fails to reach every node, then clearly G is not strongly connected. Case 2: otherwise, s has a path to every node (from the BFS of G), and every node has a path to s (from the BFS of Grev). Then s and v are mutually reachable for every v, and so it follows that every two nodes u and v are mutually reachable: s and u are mutually reachable, and s and v are mutually reachable.
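
A minimal Python sketch of this test, assuming every node of the directed graph appears as a key of the adjacency dict; the helper names are illustrative.

def reachable(graph, start):
    # Iterative DFS over directed edges; returns all nodes reachable from start.
    seen = {start}
    stack = [start]
    while stack:
        u = stack.pop()
        for v in graph.get(u, []):
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def is_strongly_connected(graph):
    # graph: dict mapping each node to a list of out-neighbors.
    nodes = list(graph)
    if not nodes:
        return True
    s = nodes[0]
    # Build Grev by reversing every edge.
    rev = {u: [] for u in graph}
    for u in graph:
        for v in graph[u]:
            rev[v].append(u)
    # Strongly connected iff s reaches every node in both G and Grev.
    return (len(reachable(graph, s)) == len(nodes)
            and len(reachable(rev, s)) == len(nodes))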

Is it possible for a dashed (non-tree) edge to connect vertices on a BFS tree that are 3 layers/levels apart?

No. Non-tree edges always connect nodes either in the same layer or in adjacent layers. Let T be a BFS tree and let x and y be nodes belonging to layers Li and Lj respectively. If (x, y) is an edge of G, then i and j differ by at most 1.

What is the big O for a recurrence relation with q = 2?

O(n log_2 n). With T(n) ≤ 2T(n/2) + cn, the work at each level of the recursion tree is the same, cn; there are log_2 n levels, so O(n log n) total work is done.

Implementation Cost of Prim's Algorithm

With the right data structures, Prim's Algorithm can be implemented in O(m log n) time; the implementations of Prim's and Dijkstra's algorithms are almost identical. By analogy with Dijkstra's Algorithm, we need to be able to decide which node v to add next to the growing set S, by maintaining the attachment costs a(v) = min_{e=(u,v): u∈S} c_e for each node v ∈ V − S. As before, we keep the nodes in a priority queue with these attachment costs a(v) as the keys; we select a node with an ExtractMin operation, and update the attachment costs using ChangeKey operations. There are n − 1 iterations in which we perform ExtractMin, and we perform ChangeKey at most once per edge. Thus we have O(m) time, plus the time for n ExtractMin and m ChangeKey operations. As with Dijkstra's Algorithm, if we use a heap-based priority queue we can implement both ExtractMin and ChangeKey in O(log n) time, and so get an overall running time of O(m log n).
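
A minimal heap-based Prim sketch in Python. Since Python's heapq has no ChangeKey operation, this version pushes duplicate entries and skips stale ones on extraction, a standard substitute; it assumes a connected undirected graph given as graph[u] = {v: cost} with comparable node labels.

import heapq

def prim_mst(graph, s):
    # graph: dict mapping u -> {v: cost}; returns a list of MST edges.
    in_tree = set()
    parent = {s: None}             # cheapest known attachment edge per node
    attach = {s: 0}                # attachment costs a(v)
    pq = [(0, s)]                  # priority queue keyed by a(v)
    mst = []
    while pq:
        cost, v = heapq.heappop(pq)
        if v in in_tree:
            continue               # stale entry; lazy substitute for ChangeKey
        in_tree.add(v)
        if parent[v] is not None:
            mst.append((parent[v], v, cost))
        for w, c in graph[v].items():
            if w not in in_tree and c < attach.get(w, float('inf')):
                attach[w] = c
                parent[w] = v
                heapq.heappush(pq, (c, w))
    return mst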

Balanced Binary Tree

A binary tree in which the left and right subtrees of every node differ in height by no more than 1.

breadth-first search (BFS)

Algorithm for determining s-t connectivity that starts from s and explores outward in all possible directions, adding nodes one layer at a time. L0 consists of the set containing only s; L1 is the first layer, the set of nodes that are neighbors of s; and Lj+1 consists of all nodes that do not belong to an earlier layer and that have an edge to a node in layer Lj.

Do only polynomial functions run in polynomial time?

An algorithm can be polynomial time even if its running time is not written as n raised to some integer power. For example, if an algorithm has running time O(n log n), then it also has running time O(n^2), and so it is a polynomial-time algorithm. A number of algorithms have running times of the form O(n^x) for some number x that is not an integer; for example, in Chapter 5 we will see an algorithm whose running time is O(n^1.59). We will also see exponents less than 1, as in bounds like O(√n) = O(n^(1/2)).

Dijkstra's algorithm - purpose

An optimal greedy algorithm to find the minimum distance and shortest path from a given start node to every other node in a weighted graph.

A bipartite graph

For a bipartite graph, the node set V can be partitioned into sets X and Y such that every edge in the graph connects a node in X to a node in Y. If a graph is bipartite, it cannot contain an odd-length cycle.

BFS theorem

For each j ≥ 1, layer Lj produced by BFS consists of all the nodes at distance exactly j from s. There is a path from s to t if and only if t appears in some layer.

Minimizing the number of edges while keeping all the nodes connected

Claim: Let T be a minimum-cost solution to the network design problem defined above. Then (V, T) is a tree.

Proof (by contradiction): Suppose the optimal solution (V, T) contains a cycle C, and let e be any edge on that cycle. We claim that (V, T − {e}) is still connected, since any path that used e can go the long way around the remainder of the cycle C instead. It follows that (V, T − {e}) is also a valid solution, and it is cheaper, which is a contradiction.

What is the least number of edges that a connected graph must have?

A connected graph must have at least n − 1 edges, i.e., m ≥ n − 1, where n is the number of nodes and m is the number of edges.

Heap Array

In the array implementation H of a heap, every node of the binary heap corresponds to an array element: the root node is at H[1]; for a node at position i, the children are at positions 2i and 2i + 1; and the parent of the node at position i is at position ⌊i/2⌋.
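
A minimal sketch of this index arithmetic in Python, keeping the 1-based convention by leaving H[0] unused.

# 1-based index arithmetic matching the description above
# (position 1 is the root; H[0] is unused padding).
def parent(i):
    return i // 2          # floor of i/2

def left_child(i):
    return 2 * i

def right_child(i):
    return 2 * i + 1

# Example: for the heap H = [None, 2, 5, 3, 9, 6], the children of
# H[1] = 2 are H[2] = 5 and H[3] = 3, and parent(4) == 2.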

How to tell if two graphs are isomorphic?

Two graphs are isomorphic if there is a bijection π between their node sets such that {u, v} is an edge of the first graph if and only if {π(u), π(v)} is an edge of the second. For example, take π(1) = 2, π(2) = 1, π(3) = 3, π(4) = 4: if the first graph has edges {1, 2}, {2, 3}, {3, 4}, and the second graph has edges {2, 1}, {1, 3}, {3, 4}, then π maps every edge of the first graph to an edge of the second, so the graphs are isomorphic.

Let G be an undirected graph on n nodes. Any two of the following statements implies the third.

1) G is connected
2) G has no cycles
3) G has n − 1 edges

Greedy Algorithms for minimum spanning tree

1) Kruskal's Algorithm. Approach: add edges in increasing order of cost, so long as the edge added does not introduce a cycle; if edge e would introduce a cycle upon insertion, discard e.
2) Prim's Algorithm. Approach: start at a root node s and greedily grow a tree outward from s; at each step, add the node that can be attached as cheaply as possible to the existing partial tree.
3) Reverse-Delete Algorithm. Approach: start with the full graph (V, E) and delete edges in order of decreasing cost, beginning with the most expensive edge; delete each edge as long as doing so does not disconnect the graph.

Why does the greedy approach produce the optimal result? Prove it.

1) Let O = {j1, j2, ..., jm} be an optimal set of requests, and let A = {i1, i2, ..., ik} be the set of accepted requests produced by the greedy algorithm.

Claim: |A| = |O|, i.e., k = m.

Sub-claim ("greedy stays ahead"): for all indices r ≤ k, f(ir) ≤ f(jr).

Proof by induction: when r = 1, the statement is true, since the algorithm starts by selecting the request with the smallest finish time. For r > 1, we know (since O consists of compatible intervals) that f(jr−1) ≤ s(jr). Combining this with the induction hypothesis f(ir−1) ≤ f(jr−1), we get f(ir−1) ≤ s(jr). Thus the interval jr is in the set R of available intervals at the time when the greedy algorithm selects ir. The greedy algorithm selects the available interval with the smallest finish time; since interval jr is one of these available intervals, we have f(ir) ≤ f(jr). This completes the induction step.

Main claim, proof by contradiction: if A is not optimal, then an optimal set O must have more requests, that is, we must have m > k. Applying the sub-claim with r = k, we get f(ik) ≤ f(jk). Since m > k, there is a request jk+1 in O. This request starts after request jk ends, and hence after ik ends. So after deleting all requests that are not compatible with requests i1, ..., ik, the set of possible requests R still contains jk+1. But the greedy algorithm stops with request ik, and it is only supposed to stop when R is empty, a contradiction.

Implementation and run time for Dijkstra's algorithm

1) The While loop runs n − 1 times (once for each node other than s), adding one node to S in each iteration.
2) Naively computing the minimum min_{e=(u,v): u∈S} d(u) + l_e would require iterating through all m edges in each of the n iterations, giving an O(mn) implementation. Instead, maintain these values by keeping the nodes of V − S in a priority queue with d'(v) as the keys.
3) Using the heap-based priority queue implementation discussed in Chapter 2, each priority queue operation can be made to run in O(log n) time. Thus the overall time for the implementation is O(m log n).

Time complexity for scheduling

1) Sort the n requests by finish time in O(n log n) time.
2) Construct the start-time array s[1...n] in O(n) time.
3) We always select the first interval; we then iterate through the intervals in order until reaching the first interval j for which s(j) ≥ f(1), and select this one as well. More generally, if the most recent interval we've selected ends at time f, we continue iterating through subsequent intervals until we reach the first j for which s(j) ≥ f. In this way, we implement the greedy algorithm analyzed above in one pass through the intervals, spending constant time per interval; thus this part of the algorithm takes O(n) time.

path - undirected graph

A path in an undirected graph G = (V, E) is a sequence of nodes v1, v2, v3, ..., vk−1, vk with the property that each consecutive pair vi, vi+1 is joined by an edge in G.

Two Graph representations (undirected)

Adjacency matrix and adjacency list. An adjacency matrix is an n × n matrix (n is the number of nodes) in which entry [u, v] is 1 if there is an edge between nodes u and v, and 0 otherwise; its space complexity is O(n^2). An adjacency list is an array of pointers, where each pointer points to a linked list containing the vertices adjacent to the corresponding vertex. The space complexity of an adjacency list is O(n + m), since each edge e = (v, w) appears in exactly two of the lists, the one for v and the one for w; thus the total length of all lists is 2m = O(m).
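
A minimal Python sketch building both representations from an edge list, assuming nodes are numbered 0 to n − 1.

def build_representations(n, edges):
    # Adjacency matrix: n x n, entry [u][v] = 1 iff {u, v} is an edge.
    matrix = [[0] * n for _ in range(n)]
    # Adjacency list: each undirected edge appears in exactly two lists.
    adj = [[] for _ in range(n)]
    for u, v in edges:
        matrix[u][v] = matrix[v][u] = 1
        adj[u].append(v)
        adj[v].append(u)
    return matrix, adj

# Example: build_representations(3, [(0, 1), (1, 2)])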

Prove that Reverse-Delete produces a minimum spanning tree of G

Consider any edge e = (v, w) removed by Reverse-Delete. At the time that e is removed, it lies on a cycle C; and since it is the first edge of C encountered by the algorithm in decreasing order of edge costs, it must be the most expensive edge on C. Thus by the Cycle Property, e does not belong to any minimum spanning tree. So if we show that the output (V, T) of Reverse-Delete is a spanning tree of G, we will be done. Clearly (V, T) is connected, since the algorithm never removes an edge when this would disconnect the graph. Now suppose by contradiction that (V, T) contains a cycle C'. Consider the most expensive edge e on C', which would be the first one encountered by the algorithm. This edge should have been removed, since its removal would not have disconnected the graph, and this contradicts the behavior of Reverse-Delete.

stack implementation of DFS

DFS(s):
    Initialize S to be a stack with one element s
    While S is not empty
        Take a node u from S
        If Explored[u] = false then
            Set Explored[u] = true
            For each edge (u, v) incident to u
                Add v to the stack S
            Endfor
        Endif
    EndWhile
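
A minimal Python version of the stack-based DFS, assuming an adjacency-list dict; as in the pseudocode, a node may be pushed more than once and is checked when popped.

def dfs_stack(graph, s):
    # graph: dict mapping each node to a list of neighbors.
    explored = set()
    stack = [s]
    while stack:
        u = stack.pop()
        if u not in explored:
            explored.add(u)
            for v in graph[u]:
                stack.append(v)    # duplicates allowed; filtered on pop
    return explored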

Dijkstra's algorithm

Dijkstra's Algorithm (G, l):
    Let S be the set of explored nodes
        For each u ∈ S, we store a distance d(u)
    Initially S = {s} and d(s) = 0
    While S ≠ V
        Select a node v ∉ S with at least one edge from S for which
        d'(v) = min_{e=(u,v): u∈S} (d(u) + l_e) is as small as possible
        Add v to S and define d(v) = d'(v)
    EndWhile
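
A minimal heapq-based sketch of Dijkstra's Algorithm in Python, again using duplicate heap entries in place of ChangeKey; it assumes graph[u] = {v: length} with nonnegative lengths and comparable node labels.

import heapq

def dijkstra(graph, s):
    # Returns a dict d of shortest-path distances from s.
    d = {}
    pq = [(0, s)]                  # (tentative distance d'(v), node v)
    while pq:
        dist, v = heapq.heappop(pq)
        if v in d:
            continue               # v already explored (already in S)
        d[v] = dist                # define d(v) = d'(v)
        for w, length in graph[v].items():
            if w not in d:
                heapq.heappush(pq, (dist + length, w))
    return d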

Can Divide and Conquer algorithms be used when there is no optimal Greedy Algorithm?

Divide and conquer algorithms can sometimes be used as an alternative approach, but these algorithms are often not strong enough to reduce an exponential brute-force search down to polynomial time. Instead, most divide and conquer algorithms reduce a running time that is unnecessarily large, but already polynomial, down to a faster running time.

Divide and Conquer

Divide and conquer refers to the group of algorithms that breaks the input problem into subproblems and solves each subproblem recursively. The solutions to the subproblems are then combined to form the final solution.

exponential functions

Exponential functions are functions of the form f(n) = r^n for some constant base r. Here we will be concerned with the case in which r > 1, which results in a very fast-growing function. In particular, where polynomials raise n to a fixed exponent, exponentials raise a fixed number to the power n; this leads to much faster rates of growth.

Prove that Prim's algorithm produces a minimum spanning tree

In each iteration of the algorithm, there is a set S ⊆ V on which a partial spanning tree has been constructed, and a node v and edge e are added that minimize the quantity min_{e=(u,v): u∈S} c_e. By definition, e is the cheapest edge with one end in S and the other end in V − S, and so by the Cut Property it is in every minimum spanning tree. It is also straightforward to show that Prim's Algorithm produces a spanning tree of G, and hence it produces a minimum spanning tree.

Dijkstra's algorithm simplified steps

Initialize d[s] = 0 and all other distances to infinity
While there are unvisited vertices
    Visit the unvisited vertex with the smallest known distance from the start vertex
    For the current vertex, examine its unvisited neighbors
    For each neighbor, calculate its distance from the start vertex through the current vertex
    If the calculated distance is less than the known distance, update the shortest distance
    Update the previous vertex for each of the updated distances
    Add the current vertex to the list of visited vertices
EndWhile

Union Find - Pointers - MakeUnionFind(S)

Initially, each node v ∈ S has a record with a pointer that points to itself (or a null pointer), to indicate that v is in its own set.

What is the big O for a recurrence relation with q = 1?

O(n), since T(n) ≤ 2cn, which is linear time.

Merge Sort - time complexity

O(n log n)

Data-Structures for implementing Union-Find - Array implementation

Set up an array called Component, where Component[s] is the name of the set containing s. Find(s) is O(1) through a simple lookup. Union(A, B) can take up to O(n) time before optimization.

Optimizations:
1) Explicitly maintain the list of elements in each set, so we don't have to look through the whole array to find the elements that need updating.
2) Choose the name for the union to be the name of one of the sets, say set A; this way we only have to update the values Component[s] for s ∈ B, but not for any s ∈ A.
3) More generally, maintain an additional array size of length n, where size[A] is the size of set A; when a Union(A, B) operation is performed, use the name of the larger set for the union. This way, fewer elements need to have their Component values updated.

Even with these optimizations, the worst case for a Union operation is still O(n) time.
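
A minimal Python sketch of this array implementation with all three optimizations; the class and attribute names are illustrative.

class ArrayUnionFind:
    def __init__(self, elements):
        # Each element starts as the name of its own singleton set.
        self.component = {s: s for s in elements}
        self.members = {s: [s] for s in elements}   # optimization 1
        self.size = {s: 1 for s in elements}        # optimization 3

    def find(self, s):
        return self.component[s]                    # O(1) lookup

    def union(self, a, b):
        # a, b: set names. Relabel the smaller set with the name of
        # the larger one (optimizations 2 and 3).
        if a == b:
            return
        if self.size[a] < self.size[b]:
            a, b = b, a
        for s in self.members[b]:
            self.component[s] = a
        self.members[a].extend(self.members[b])
        self.size[a] += self.size[b]
        del self.members[b], self.size[b]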

Let T be a depth-first search tree, let x and y be nodes in T, and let (x, y) be an edge of G that is not an edge of T. Prove that one of x or y is an ancestor of the other.

Suppose that (x, y) is an edge of G that is not an edge of T, and suppose without loss of generality that x is reached first by the DFS algorithm. When the edge (x, y) is examined during the execution of DFS(x), it is not added to T because y is marked "Explored." Since y was not marked "Explored" when DFS(x) was first invoked, it is a node that was discovered between the invocation and end of the recursive call DFS(x). It follows from (3.6) that y is a descendant of x.

Properties of Asymptotic Growth Rate: sum of functions

Suppose that f and g are two functions such that for some other function h, we have f = O(h) and g = O(h). Then f + g = O(h). More generally, let k be a fixed constant, and let f1, f2, ..., fk and h be functions such that fi = O(h) for all i; then f1 + f2 + ... + fk = O(h). Also, if f and g are two functions taking nonnegative values and g = O(f), then f + g = Θ(f); that is, f is an asymptotically tight bound for the combined function f + g.

Union-Find - Pointers- Union(A,B)

Suppose the name used for set A is a node u in A, and the name used for set B is a node v in B. Then either u or v becomes the name of the combined set; suppose u is chosen. To indicate that the combined set has the name u, v's pointer is updated to point to u, while the other pointers in set B do not need to be updated. So Union takes O(1) time, since only a single pointer is updated.

Asymptotic upper bound (O)

T(n) is O(f(n)) if, for sufficiently large n, the function T(n) is bounded above by a constant multiple of the function f(n). More precisely, T(n) is O(f(n)) if there exist constants c > 0 and n0 ≥ 0 such that T(n) ≤ c·f(n) for all n ≥ n0. T is asymptotically upper-bounded by f.

Asymptotic lower bound Ω

T(n) is Ω(f(n)) if there exist constants ϵ > 0 and n0 ≥ 0 such that for all n ≥ n0, T(n) ≥ ϵ·f(n). In this case, T is asymptotically lower-bounded by f.

Union-Find -Pointers-Find(v)

The Find operation is no longer constant since it needs to follow the sequence of pointers through a history of old names the set had, in order to get to the current name. This can be as large as O(n) if we are not careful with choosing set names. To reduce the time required for a Find operation, we will use the same optimization we used before: keep the name of the larger set as the name of the union.

What is the Union Find Structure?

The Union-Find structure allows us to maintain disjoint sets (such as graph components) in the following sense. Given a node u, the Find(u) operation will return the name of the set containing u. This operation can be used to test if two nodes u and v are in the same set, by checking whether Find(u) = Find(v). The data structure can also implement an operation Union(A, B) to merge two disjoint sets A and B into a single set.

What data structure is needed to implement Kruskal's algorithm efficiently?

The Union-Find structure. This is exactly the data structure needed to implement Kruskal's Algorithm efficiently. As each edge e = (v, w) is considered, we need to efficiently find the identities of the connected components containing v and w. If these components are different, then there is no path from v to w, and hence edge e should be included; but if the components are the same, then there is a v-w path on the edges already included, and so e should be omitted. In the event that e is included, the data structure should also support the efficient merging of the components of v and w into a single new component.
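
Putting the pieces together, a minimal Kruskal sketch in Python, assuming the ArrayUnionFind sketch from the array-implementation card is available and that edges are given as (cost, v, w) triples with comparable entries.

def kruskal_mst(nodes, edges):
    # nodes: iterable of node names; edges: list of (cost, v, w) triples.
    uf = ArrayUnionFind(nodes)
    mst = []
    for cost, v, w in sorted(edges):          # increasing order of cost
        if uf.find(v) != uf.find(w):          # different components: no v-w path yet
            mst.append((v, w, cost))          # adding e cannot create a cycle
            uf.union(uf.find(v), uf.find(w))
    return mst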

How long does the implementation of BFS take to run?

The above implementation of the BFS algorithm runs in time O(m + n) (i.e., linear in the input size), if the graph is given by the adjacency list representation.

How long does the stack implementation take?

The above implementation of the DFS algorithm runs in time O(m + n) (i.e., linear in the input size), if the graph is given by the adjacency list representation.

Improved data structure for Union Find - Pointers

The data structure for the alternate implementation uses pointers. Each node v ∈ V will be contained in a record with an associated pointer that points to the name of the set containing v. The elements of S will be used as the possible set names, naming each set after one of its elements.

Heap Operations: Identifying the minimal element

The heap element with the smallest key is the root (H[1]) so it takes O(1) time to find the smallest element.

Heap Operations: Heapify down

The Heapify-down procedure moves an element that is larger than one of its children down to its appropriate position in the binary heap, where it is smaller than or equal to both of its children, or where it becomes a leaf.
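
A minimal Python sketch of Heapify-down for a min-heap, using the 1-based array layout from the Heap Array card (H[0] unused).

def heapify_down(H, i, n):
    # H: 1-based heap array (H[0] unused); n: number of heap elements.
    # Sinks H[i] until it is <= both children or becomes a leaf.
    while 2 * i <= n:
        child = 2 * i
        # Pick the smaller of the two children, if the right one exists.
        if child + 1 <= n and H[child + 1] < H[child]:
            child += 1
        if H[i] <= H[child]:
            break                  # heap order restored
        H[i], H[child] = H[child], H[i]
        i = child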

Dynamic Programming

The idea for dynamic programming is drawn from the intuition for divide and conquer and is essentially the opposite of the greedy strategy: the algorithm explores the space of all possible solutions, by carefully decomposing things into a series of subproblems, and then building up correct solutions to larger and larger subproblems.

Interval Scheduling problem

The instance/input is a set of requests R = {1, ..., n}, where the ith request corresponds to an interval of time starting at s(i) and finishing at f(i). We'll say that a subset of the requests is compatible if no two of them overlap in time, and our goal is to accept as large a compatible subset as possible. Compatible sets of maximum size will be called optimal.

What is the greatest number of edges a graph can have?

The number of edges m can be at most C(n, 2) = n!/(2!(n − 2)!) = n(n − 1)/2 ≤ n^2, where n is the number of nodes and m is the number of edges.

Heap Operations : Heapify-up

The procedure Heapify-up(H, i) fixes the heap property in O(log i) time, assuming that the array H is almost a heap with the key of H[i] too small.
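
A minimal Python sketch of Heapify-up under the same 1-based layout (H[0] unused); inserting a new element is then an append followed by Heapify-up on the last position.

def heapify_up(H, i):
    # H: 1-based heap array (H[0] unused). Moves H[i] up toward the
    # root while it is smaller than its parent: O(log i) swaps.
    while i > 1 and H[i] < H[i // 2]:
        H[i], H[i // 2] = H[i // 2], H[i]
        i //= 2

# Inserting a new element v: H.append(v); heapify_up(H, len(H) - 1)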

Heap Operations: Adding an element

To start with, we can add the new element v to the final position i = n + 1 by setting H[i] = v. This does not necessarily maintain the heap property, as the key of element v may be smaller than the key of its parent. So we now have something that is almost a heap, except for a small "damaged" part where v was pasted on at the end. We use the procedure Heapify-up to fix our heap; using Heapify-up, we can insert a new element into a heap of n elements in O(log n) time.

