CS180 MT


Bellman-Ford

--The Bellman-Ford algorithm is a single-source shortest path algorithm. This means that, given a weighted graph, this algorithm will output the shortest distance from a selected node to all other nodes.
● It is very similar to Dijkstra's algorithm. However, unlike Dijkstra's algorithm, the Bellman-Ford algorithm can work on graphs with negative-weight edges. This capability makes the Bellman-Ford algorithm a popular choice.
○ Why are negative edges problematic?
■ Seems easy enough to modify, but are we done? No! Negative-weight edges can create negative-weight cycles, i.e., a cycle that will reduce the total path distance by coming back to the same point.
--The first step is to initialize the vertices. The algorithm initially sets the distance from the starting vertex to all other vertices to infinity. The distance from the starting vertex to itself is 0.
● After the initialization step, the algorithm starts calculating the shortest distance from the starting vertex to all other vertices.
● Within this step, the algorithm tries to explore different paths to reach other vertices and calculates the distances.
○ Do this |V|-1 times, where |V| is the number of vertices in the given graph. For each edge u-v:
○ If dist[v] (the current min-distance) > dist[u] + weight of edge uv, then update dist[v] to
○ dist[v] = dist[u] + weight of edge uv
○ Note: this is an exhaustive traversal to ensure no possible shorter path exists
● Finally, when the algorithm has iterated through all edges and relaxed all the required edges, it gives a last check to find out if there is any negative cycle in the graph.
○ It does this by traversing the graph again and seeing if any edges can be relaxed further (thus indicating a cycle).
○ Runs in O(VE): |V|-1 rounds, each relaxing up to E edges.
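The steps above can be sketched as follows; this is a minimal version (the function name and edge-list representation are my own choices, not from the notes):

```python
def bellman_ford(n, edges, src):
    """Single-source shortest paths. edges = [(u, v, w), ...], vertices 0..n-1.
    Returns (dist, has_negative_cycle)."""
    INF = float('inf')
    dist = [INF] * n
    dist[src] = 0                      # distance from source to itself is 0
    for _ in range(n - 1):             # relax every edge |V|-1 times
        for u, v, w in edges:
            if dist[u] != INF and dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:              # one extra pass: any further relaxation
        if dist[u] != INF and dist[u] + w < dist[v]:
            return dist, True          # ...means a negative cycle exists
    return dist, False
```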

mergesort

Basically the idea is that we break our unsorted array into multiple arrays recursively until we only have one or two elements per sub-array. ■ Sorting two elements is a constant-time operation, as we just compare and swap if needed. ■ Now we backtrack: we compare the elements of some sorted array A to the elements of sorted array B as we merge them.
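A recursive sketch of the idea (function name is mine; this version recurses down to single elements, which subsumes the two-element base case):

```python
def merge_sort(a):
    if len(a) <= 1:                    # base case: already sorted
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    # Merge step: repeatedly take the smaller front element of the two halves.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]   # append whichever half remains
```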

O(n^3)

Elaborate nested loops

NP-hard

NP-Hard is the defining property of a class of problems that are informally "at least as hard as the hardest problems in NP"; equivalently, every problem in NP is reducible to them.

master theorem

Pickup Material: Master Theorem ● Many branching/recursive algorithms can be represented in the form T(n) = aT(n/b) + f(n) ○ T(n) can be thought of as the amount of effort in each step of the algorithm ○ where a represents the number of children each node has (or your branching factor) ○ and the proportion of data passed through each branch is 1/b ○ f(n) then represents the additional work outside of the recursive call

prims vs kruskal

Prim's algorithm is significantly faster when you've got a really dense graph, with many more edges than nodes. Kruskal's performs better in typical situations (sparse graphs) because it uses simpler data structures.

Quick sort (D&C)

Quicksort works by randomly selecting a 'pivot' element from the array. ● We then partition the other elements into two sub-arrays, according to whether they are less than or greater than the pivot. ● The sub-arrays are then sorted recursively in a similar fashion, with pivot elements selected from within them. ● This can be done in-place, requiring only a small additional amount of memory to perform the sorting. ● Complexity: the worst-case time complexity of this algorithm is O(N^2), but as this is a randomized algorithm, its time complexity fluctuates between that and O(N log N). ○ What would those scenarios be?
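A short sketch of the partition-and-recurse idea (not the in-place variant the notes mention; this version builds new lists for clarity, and the function name is mine):

```python
import random

def quicksort(a):
    if len(a) <= 1:
        return a
    pivot = random.choice(a)                     # random pivot: expected O(N log N)
    less = [x for x in a if x < pivot]           # partition around the pivot
    equal = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return quicksort(less) + equal + quicksort(greater)
```

The worst case (O(N^2)) happens when the pivot is repeatedly the min or max element; random selection makes that unlikely.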

testing bipartiteness using BFS

Color s red, and all its neighbors blue; keep alternating colors until the entire BFS is done. Now either every edge has one red end and one blue end (the graph is bipartite), or some edge has both ends the same color (it is not).
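The 2-coloring BFS can be sketched like this (assuming an adjacency-dict representation where every node appears as a key; names are mine):

```python
from collections import deque

def is_bipartite(adj):
    """adj: dict node -> list of neighbors. 2-colors each component via BFS."""
    color = {}
    for s in adj:                         # handle every connected component
        if s in color:
            continue
        color[s] = 0                      # color the start node "red"
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in color:
                    color[v] = 1 - color[u]   # neighbors get the other color
                    q.append(v)
                elif color[v] == color[u]:    # edge with same-colored ends
                    return False
    return True
```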

Dijkstra's algorithm proof

Contradiction: assume there is a different path through the graph that is shorter than the one found, i.e., that at some point this algorithm falls behind an optimal approach. However, since at every step we select the lowest-cost path forward, at some junction the true lowest-cost path forward would have to reveal itself as lower. The only way we fail to select it is if it never appears to us as lower cost; but this is impossible, as (with non-negative edge weights) that lower cost would show up as we traverse the graph.

O(n)

linear time, Running time is at most a constant factor times the size of the input

O(n^2)

nested loops, pairs of inputs

BFS data structure and runtime

Queue data structure; O(N + E)

topological sort using a directed acyclic graph

Returns a sequence of nodes where every node appears before each node it points to; this ordered sequence of nodes is the topological ordering. This sequence also helps us identify possible cycles. Start with a node with no incoming edges.
Describe the Topological Sort Algorithm in a DAG.
○ (Assume) the graph has already been scanned and an adjacency list for all nodes, notating incoming and outgoing edges, has been constructed
○ 1. The source of the DAG will be a node with no incoming edges, so scan for a node without incoming edges
○ 2. Remove this node and all its outgoing edges from the adjacency list, and append the selected node to the topological sort
○ 3. Repeat steps 1 and 2 on the updated adjacency list
○ 4. Continue until the graph is completely parsed
Prove its correctness, and analyze the complexity.
○ To prove correctness you could do it inductively: show that a DAG by definition must have a source node with no incoming edges, so the base case works. For the inductive step, assume it works at an arbitrary step; since this algorithm removes outgoing edges from processed nodes, it effectively creates a new sub-graph of one or more DAGs based on the topological ordering. The algorithm will process this next step properly, therefore it will correctly traverse the entire graph.
○ Runtime of a Topological Sort is O(N + E), reflecting the actions of loading and removing nodes and edges as the adjacency list is traversed.
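The steps above can be sketched with in-degree counting (a queue stands in for "scan for a node without incoming edges"; the representation and names are my own):

```python
from collections import deque

def topo_sort(adj):
    """adj: dict node -> list of successors (every node is a key).
    Returns a topological order, or None if the graph has a cycle. O(N + E)."""
    indeg = {u: 0 for u in adj}
    for u in adj:
        for v in adj[u]:
            indeg[v] += 1
    q = deque(u for u in adj if indeg[u] == 0)   # sources: no incoming edges
    order = []
    while q:
        u = q.popleft()
        order.append(u)
        for v in adj[u]:                          # "remove" u's outgoing edges
            indeg[v] -= 1
            if indeg[v] == 0:
                q.append(v)
    return order if len(order) == len(adj) else None   # leftovers => cycle
```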

Dijkstra's algorithm

Shortest weighted path through a graph; weights must be non-negative. O((E+N) log N)
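A binary-heap sketch matching the O((E+N) log N) bound (adjacency-dict representation and names are mine):

```python
import heapq

def dijkstra(adj, src):
    """adj: dict u -> list of (v, weight), non-negative weights,
    every node a key. Returns shortest distances from src."""
    dist = {u: float('inf') for u in adj}
    dist[src] = 0
    pq = [(0, src)]                      # min-heap of (distance, node)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue                     # stale heap entry, skip
        for v, w in adj[u]:
            if d + w < dist[v]:          # found a cheaper path to v
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist
```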

prove an independent set is NP complete: setup

(We'll use the fact that we already know Vertex Cover is NP-Complete to prove this.) First some definitions: ● Independent Set ○ An independent set is a vertex set S in which no two vertices are adjacent, i.e., for all distinct u, v ∈ S, uv ∉ E. ● Vertex Cover ○ A vertex cover is a vertex set C such that each edge contains at least one vertex in C, i.e., for all distinct uv ∈ E, u ∈ C ∨ v ∈ C.

T(n)=2T(n/2)+Θ(n)

Don't think about this as a calculation. Think about it as writing out the time of obtaining the solution in terms of the time of the smaller problems it solves. ○ E.g., sorting an array with 100 elements (T(n) time) could be done by independently sorting the left 50 elements (T(n/2) time) + sorting the right 50 elements (T(n/2) time again) + merging, which will take time proportional to 100, because you have to do a small amount of work to set elements from the two sorted arrays in the correct order. ○ How to convert this into a runtime though? The T(n/2) matters, as we split multiple times, so we account for that ■ First n/2, then n/4 or n/(2^2), then n/8 or n/(2^3), until n/2^k = 1 (i.e., we've completely traversed the tree to its leaf nodes) ■ Solving 2^k = n gives k = log(n) ● This is the depth of our splitting tree! ■ And since at each level we're doing a total of N steps (e.g., at the first level it's 2 · N/2), the total runtime is N log(N)

proving D&C

Generally speaking, for these problems there is a rather obvious brute-force approach, and that brute-force approach is to try every possibility and return the solution. ○ Easy enough to prove this; we can call it Proof by Exhaustion! I've literally exhausted all possibilities to discover the solution. ● What D&C asserts is that by restructuring the problem in this manner, I've eliminated all the unnecessary calculations, to only perform the subset relevant to the ultimate solution. ○ So we have to show that the optimal solution can only be amongst the limited set of options we're considering ○ Merge Sort: Show that when merging the ordered sublists only the point-to-point comparison is needed to determine an element's position in the merged array, and that once an element is added to the merged array, it is impossible for it to be supplanted by something further up ○ Closest points: Show that upon the merge, the only possible candidates for a newer, closer pair lie in the geometric grid spaces we have

P

P is the set of all decision problems which can be solved in polynomial time by a deterministic Turing machine. Since they can be solved in polynomial time, they can also be verified in polynomial time. Therefore P is a subset of NP.

show an independent set lies in NP

Remember: A problem is in NP if a solution to it can be verified in polynomial time. For example, to verify the Independent Set problem, where we are given a graph G, a candidate solution set S, and a target minimum set size t, we would do the following: 1. Check each pair of vertices in S and verify there is no edge between them. For n vertices in S this would take O(n^2) time. 2. Verify the size of S is ≥ t. This would take O(n) time. As you can see, we can verify the Independent Set problem in polynomial time, so it lies in the class NP.

how to prove a DP solution

The nature of the subproblems also clues us in to how to solve them. ● Since at every stage of a DP solution we ask the same problem, the solution it returns is essentially optimal for that given subproblem ● We can therefore approach this inductively: ○ Assume that up to k subproblems our solution has returned optimally ○ Show that as we extend the subproblems to k+1 it will continue to return the optimal solution (by referencing the previous optimal solutions)

kruskal's runtime

Time complexity? ■ In Kruskal's algorithm, the most time-consuming operation is sorting the edges, which takes O(E log E) = O(E log N); the Disjoint-Set operations take nearly linear time in total, so the overall time complexity of the algorithm is O(E log N).

prim's algorithm runtime and data structure

adjacency matrix: O(n^2); adjacency list with a binary heap: O(E log n)

to prove NP complete?

--So if you're given a new problem and you want to show that it isn't solvable in polynomial time, you want to demonstrate that it's NP-Complete. ● How do we do this (from the previous slide)? ○ To show that a new problem of unknown difficulty is NP-Complete we have to do two main things: i. Show that the problem lies in NP. ii. Show that the problem is NP-Hard.

Dynamic Programming

-Works by storing the results of subproblems so that when their solutions are required by other subproblems, they are at hand and we do not need to recalculate them, trading space for time -This technique of storing the values of subproblems is called memoization. By saving values in an array, we save time on computations of sub-problems we have already come across -An optimized form of recursion, reducing complexity from exponential to polynomial
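The classic overlapping-subproblems example (Fibonacci, mentioned later in these notes) shows the trade in two lines; `lru_cache` is the standard-library memoization decorator:

```python
from functools import lru_cache

@lru_cache(maxsize=None)       # memoization: store each subproblem's result
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)   # without the cache this is exponential
```

With the cache each `fib(k)` is computed once, so the runtime drops from exponential to linear.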

NP-complete

A problem x that is in NP is also in NP-Complete if and only if every other problem in NP can be quickly (i.e., in polynomial time) transformed into x. ○ In other words: ■ x is in NP, and ■ every problem in NP is reducible to x (i.e., it is NP-Hard)

O(nlogn)

An algorithm that splits its input into two equal sizes, solves each piece recursively, then combines the solutions in linear time (Mergesort), or one where the expensive step is sorting the input

Reducibility

As a first step, we settle for relative judgements. In this case we want to assert the following: ○ Problem X is at least as hard as problem Y ● To prove such a statement, we reduce problem Y to problem X (note the order!): ● If problem Y can be reduced to problem X, we denote this by Y ≤p X. ○ This means "Y is polynomial-time reducible to X." ○ It also means that X is at least as hard as Y, because if you can solve X, you can solve Y. ■ Reducibility means: if you had a black box that can solve instances of problem X, how can you solve any instance of Y using a polynomial number of steps, plus a polynomial number of calls to the black box that solves X? ● Key Note: We reduce to the problem we want to show is the harder problem. ● Theorem: If Y ≤p X and Y cannot be solved in polynomial time, then X cannot be solved in polynomial time. ● Theorem: Conversely, if X can be solved in polynomial time, then Y can be solved in polynomial time as well

proving closest pair specifics

At a high level we need to prove that this algorithm will return the closest pair ○ We can do this inductively by showing that at every step we return the closest pair until we've exhaustively explored the whole domain ● 1. The initial divide step will generate a series of 1x1 comparisons that will calculate the initial target min (and can serve as our base case for an inductive proof) ● 2. Every subsequent min calculation is a merge; by definition the shortest distance within each segment is already calculated (from our base case), so inductively we can assume we have the min of each segment (our assume-k case) and now extrapolate to the min across a merge (our show-k+1-holds case) ● 3. We need to show that for any point within the current min distance of the merge line, only a fixed number (constant time) of lookups is needed ○ 3.a Any point further away from the merge boundary than the current min, by contradiction, cannot be part of a new min with a point across the merge ○ 3.b Within the merge boundary, given that we already know the boundary is based on the min distance within both halves, we know that the relative density of points has to be limited (because if any denser, you would simply have compressed the min distance already to compensate), specifically to 1 point per d/2 square (with d being the min distance) ○ Making a simple grid of 8 squares, each of side d/2, we show that for every point along the merge you need to traverse AT MOST these 8 squares to exhaustively check for another point, and by contradiction no closer pair outside them is possible ■ Therefore if there is a new min it must be discoverable in constant time for each point along the merge, and the new min (at k+1 merges) will be identified. ■ By induction this will return the global min of the set

ford-fulkerson: max-flow

Consider an S-T network N. Prove that if f is a max-flow of the network, then there is a cut c with capacity equal to f. ○ No real surprises on this question, as you can directly leverage a portion of the three-stage proof. ○ Since you're already given that f is a max-flow (normally step 1), you simply build from there ○ First show by contradiction that there cannot be an augmenting path, or else f would not be a max-flow. ○ Then show by contradiction that there must be a cut C(s,t) such that its capacity equals f and its edges are fully saturated

ford-fulkerson: min-path

Consider the following network with source A and sink F. If the Ford-Fulkerson max-flow algorithm initially finds a path A,B,E,F in the network below and sends 2 units of flow on it, show the residual network and all subsequent steps of the Ford-Fulkerson algorithm on the network (NOTE: all capacities on the network = 2). At this point we've found the path A,B,E,F and sent two units through it, so you can sketch the graph and the residual graph. ○ With that residual graph we can send an additional two units along the new path until that is exhausted. Write out the final residual graph (note how the diagonal edge is leveraged in the reverse-flow direction as well). Total flow is 4.

DP and recursion

DP is almost always applied to recursive algorithms, but not all recursion can use DP. ○ Unless there is a presence of overlapping subproblems, like in the Fibonacci sequence problem, a recursion can only reach the solution using a divide and conquer approach.

Divide and Conquer

Divide and Conquer ● In the past, when trying to solve a larger problem, we've tried to break it down into smaller, more manageable segments. Can you think of an example we learned of this? -GREEDY! ● Divide and Conquer is another method of breaking a larger, intractable problem into smaller, solvable chunks that can then be combined to solve the entire problem ● The technique can be divided into roughly the following three parts: ○ Divide: the problem into smaller subproblems. ○ Conquer: the subproblem by calling recursively until the subproblem is solved. ○ Combine: the subproblem solutions to assemble the ultimate problem solution.

Mining Gold Example

Given a gold mine of n*m dimensions. Each field in this mine contains a positive integer which is the amount of gold in tons. ● Initially a miner is at the first (leftmost) column but can be in any row (you choose where to initialize your miner). ● She can move only to the right (right horizontal, right up, right down [diagonal to the right]), mining the gold as she goes. ● Your goal is to plot a path for the maximum amount of gold she can collect. ○ Greedily always choose the max value and traverse the graph ■ The greedy approach will fail! (From the previous slide: if you aggressively selected the next-max you would have opted for the 10) ■ O(N) as you simply make a greedy max selection at each column (N) ○ Recursively: For each set of options evaluate the total max gold that can be obtained (square gold value + max traversal gold), then greedily select; repeat until terminating ■ Will return the optimal, however the runtime is exponential, as each set of choices requires complete nested recursive calls ● Dynamic Programming: Also recursive, but work backward from the destination. ■ Working backwards, update each square with the max gold that it can obtain ■ Once the grid is updated, traverse it greedily. ● Along the way, leverage memoization to update max-values based on those previously calculated ■ Time complexity is O(N*M), as the max for the entire grid will be calculated, but each calculation is constant time; the greedy traversal is then linear time
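The backward DP can be sketched like this (it fills the table right-to-left rather than traversing it greedily afterward, which yields the same maximum; names are mine):

```python
def max_gold(grid):
    """DP over an n x m gold grid: work backward from the last column,
    storing in each cell the best total collectable from that cell onward."""
    n, m = len(grid), len(grid[0])
    best = [row[:] for row in grid]         # copy; last column is its own best
    for j in range(m - 2, -1, -1):          # columns right-to-left
        for i in range(n):
            right = best[i][j + 1]          # the three allowed moves
            up = best[i - 1][j + 1] if i > 0 else 0
            down = best[i + 1][j + 1] if i < n - 1 else 0
            best[i][j] += max(right, up, down)
    return max(best[i][0] for i in range(n))  # miner may start in any row
```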

DP: Coin-change problem

Given a value N, if we want to make change for N cents, and we have an infinite supply of each of S = {S1, S2, ..., Sm} valued coins, how many ways can we make the change? ○ Example: For an N of 8, with coins valued 1, 2, and 4 cents, the total number of unique combinations is 9: ■ 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 ■ 2 + 1 + 1 + 1 + 1 + 1 + 1 ■ 2 + 2 + 1 + 1 + 1 + 1 ■ 2 + 2 + 2 + 1 + 1 ■ 2 + 2 + 2 + 2 ■ 4 + 1 + 1 + 1 + 1 ■ 4 + 2 + 1 + 1 ■ 4 + 2 + 2 ■ 4 + 4 ● Can we see the makings of subproblems we can memoize?

greedy vs. DP

Greedy algorithms are similar to dynamic programming in the sense that they are both tools for optimization. However, greedy algorithms look for locally optimal solutions, while DP finds the optimal solution to subproblems and then makes an informed choice to combine the results of those subproblems to find the globally optimal solution.

Why does Dynamic Programming work?

Here the gold mining problem has the property of optimal substructure, i.e., the optimal solution of a problem incorporates the optimal solutions to its subproblems. ○ E.g., the optimal solution for a starting point will incorporate the optimal solutions of its successors ● Note: If there is optimal substructure, then the chances are quite high that using dynamic programming will optimize the problem. ● Also note: You are STILL doing basic recursion. The only difference is you structure your recursive calls bottom-up, rather than top-down, and save the results for reference

solve tug of war

Here's a neat approach: We can modify the Knapsack Problem for this particular problem. How? ○ In Knapsack we are trying to optimize max payoff given the limited weight; now we are trying to get a team as close to (the sum of all players' strength)/2 as possible ○ We also only need to assemble 1 team (because by exclusion, anyone not assembled is assigned to the other team) ○ We construct the same setup as knapsack: a table with i rows for the i number of players and j columns for the total cumulative strength of the team ■ Question: What if the closest alignment is > cumulative strength/2? Doesn't matter, since you're picking 2 teams, so by default the unassigned go to the other. ○ As with the knapsack problem, for each cell in the i x j matrix determine the optimal choice of adding a player based on this logic ● Both time and storage complexity are O(i * j)
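A subset-sum sketch of the knapsack table (a 1-D rolling version of the i x j table; names are mine, and I assume "tug of war" means splitting players into two teams of near-equal total strength):

```python
def tug_of_war(strengths):
    """Find the achievable team strength closest to total/2.
    reachable[j] is True if some subset of players sums to exactly j."""
    total = sum(strengths)
    reachable = [False] * (total + 1)
    reachable[0] = True                     # the empty team
    for s in strengths:
        for j in range(total, s - 1, -1):   # backward so each player is used once
            if reachable[j - s]:
                reachable[j] = True
    best = min((j for j in range(total + 1) if reachable[j]),
               key=lambda j: abs(total - 2 * j))
    return best, total - best               # the two teams' strengths
```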

Visualizing dynamic programming approach

How do we do this? Rather than going top down, work bottom up, by solving the optimal subset of solutions as we go along -Start with the inclusion of one task, move up to two, and so on. --Do this in a linear manner (i.e., we still sort) -Whenever we add a new task (j) we make the following assessment: --Does the optimal solution for the growing subset include the value of this new task (vj) + the max of the tasks that do not conflict with it (OPT(p(j)))? Or is it more optimal to exclude this task in favor of a conflicting set of tasks that have a higher payoff (OPT(j-1))? ----Critically: That decision is a constant-time set of fixed lookups for each N --We continue to increment through our list of tasks, at each task determining and storing its optimal subset, until, by the time we complete our list of tasks, we've constructed the optimal global solution --So our overall runtime for this is N log N (for sorting the tasks), as the DP step itself runs in linear time
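The bottom-up recurrence OPT(j) = max(OPT(j-1), v_j + OPT(p(j))) can be sketched as follows (p(j) is found by binary search on finish times; I assume a task may start exactly when another finishes, and the names are mine):

```python
import bisect

def weighted_interval_max(tasks):
    """tasks: list of (start, finish, value). Returns the max total value
    of a set of non-overlapping tasks."""
    tasks = sorted(tasks, key=lambda t: t[1])   # sort by finish time
    finishes = [t[1] for t in tasks]
    opt = [0] * (len(tasks) + 1)                # opt[j] = best using first j tasks
    for j, (s, f, v) in enumerate(tasks, 1):
        # p(j): number of earlier tasks that finish by this task's start
        p = bisect.bisect_right(finishes, s, 0, j - 1)
        opt[j] = max(opt[j - 1],                # exclude task j
                     v + opt[p])                # include it + compatible prefix
    return opt[-1]
```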

prove an independent set is NP-hard: the reduction

In a graph G = {V, E}, if S is an Independent Set, then (V-S) is a Vertex Cover ○ If S is an Independent Set, by the definition of an Independent Set there can be no edge in G such that both vertices attached to it are in S. Hence for every edge in G, at least one vertex must lie in (V-S) ■ If every edge has a vertex in (V-S), then by the definition of Vertex Cover, (V-S) is a vertex cover ○ If we have (V-S) as a Vertex Cover: for any vertices within S, if they had a connecting edge, then neither of its ends would fall in (V-S), violating the definition of a Vertex Cover. Therefore no vertices within S can be connected by an edge, and therefore S is an independent set ● We conclude that G contains an independent set of size k if and only if it contains a vertex cover of size (V - k) ● We can therefore reduce Vertex Cover to Independent Set, because if we could solve Independent Set, we could solve Vertex Cover ● Independent Set is therefore NP-Complete!

Gayle Shapley steps

Initially all m ∈ M and w ∈ W are free
While there is a man m who is free and hasn't proposed to every woman
    Choose such a man m
    Let w be the highest-ranked woman in m's preference list to whom m has not yet proposed
    If w is free then
        (m, w) become engaged
    Else w is currently engaged to m′
        If w prefers m′ to m then
            m remains free
        Else w prefers m to m′
            (m, w) become engaged
            m′ becomes free
        Endif
    Endif
Endwhile
Return the set S of engaged pairs
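The pseudocode above translates fairly directly (the dict representation and a precomputed rank table for O(1) preference comparisons are my own choices):

```python
def gale_shapley(men_prefs, women_prefs):
    """men_prefs / women_prefs: dict person -> preference list (best first).
    Returns the stable matching as {woman: man}."""
    # rank[w][m] = position of m in w's list, for constant-time comparisons
    rank = {w: {m: i for i, m in enumerate(plist)}
            for w, plist in women_prefs.items()}
    next_proposal = {m: 0 for m in men_prefs}   # index of next woman to try
    engaged = {}                                 # woman -> man
    free = list(men_prefs)
    while free:
        m = free.pop()
        w = men_prefs[m][next_proposal[m]]       # highest-ranked not yet proposed
        next_proposal[m] += 1
        if w not in engaged:                     # w is free: engage
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:   # w prefers m to current partner
            free.append(engaged[w])              # m' becomes free
            engaged[w] = m
        else:
            free.append(m)                       # m remains free
    return engaged
```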

network flow

Key idea: Can model system constraints and capacities in terms of networks with flow ■ Ford-Fulkerson leverages traversals and residual graphs to establish max-flow ○ Key to Proof: Relating the concept of a max-flow (a singular count of capacity), to augmenting paths or the lack thereof (obtained via traversals), to a set of fully saturated edges (a min-cut) whose cumulative sum reflects that flow

kruskal's algorithm

Kruskal's Algorithm also greedily builds the spanning tree, but does so by adding edges one by one into a growing spanning tree. Kruskal's is a greedy approach in that at each iteration it finds the edge which has the least weight and adds it to the growing spanning tree until the tree is complete. Specific Algorithm Steps: ○ Sort the graph edges with respect to their weights, lowest to highest. ○ Start adding edges to the MST from the edge with the smallest weight toward the edge with the largest weight. How do we know which edges to add? ■ Only add edges which don't form a cycle, i.e., only include edges which connect disconnected components (an edge is safe when its two endpoints are not already in the same component) ○ When do we stop? ■ Continue until the number of edges = N-1
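The steps above can be sketched with a simple Disjoint-Set (union-find) for the cycle check (the representation and names are mine):

```python
def kruskal(n, edges):
    """edges = [(weight, u, v)], vertices 0..n-1. Returns the MST edge list."""
    parent = list(range(n))

    def find(x):                       # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):      # lowest weight first
        ru, rv = find(u), find(v)
        if ru != rv:                   # skip edges that would form a cycle
            parent[ru] = rv            # merge the two components
            mst.append((w, u, v))
        if len(mst) == n - 1:          # stop once we have N-1 edges
            break
    return mst
```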

prove ford-fulkerson

Max-flow min-cut theorem. The value of the max flow is equal to the capacity of the min cut. 1. f is a max-flow 2. There is no augmenting path relative to f 3. There exists a cut whose capacity equals the value of f ● If 1 then 2: ○ Equivalent to: If not (2) then not (1), i.e., if there is an augmenting path, then f is not yet a max flow ● If 2 then 3: ○ Let f be a flow with no augmenting paths. ○ Let S be the set of vertices reachable from s in the residual graph. ■ S contains s; since there are no augmenting paths, S does not contain t ■ All edges e entering S in the original network have f(e) = 0 ■ All edges e leaving S in the original network have f(e) = u(e), where u(e) is the capacity of e (the edges are saturated) ■ The set of outgoing edges then forms a cut whose capacity equals the value of f ● If 3 then 1: ○ Let f be a flow, and let (S, T) be an s-t cut whose capacity equals the value of f. Then f is a max flow and (S, T) is a min cut.
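The algorithm the proof describes can be sketched as follows; this uses BFS to find augmenting paths (the Edmonds-Karp variant of Ford-Fulkerson), with the representation and names being my own:

```python
from collections import deque

def max_flow(cap, s, t):
    """cap: dict-of-dicts of capacities, every node a key
    (mutated in place into residual capacities). Returns the max flow value."""
    flow = 0
    while True:
        # BFS in the residual graph for an s-t augmenting path
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow                     # no augmenting path: flow is maximal
        # find the bottleneck along the path, then push flow along it
        bottleneck, v = float('inf'), t
        while parent[v] is not None:
            bottleneck = min(bottleneck, cap[parent[v]][v])
            v = parent[v]
        v = t
        while parent[v] is not None:
            u = parent[v]
            cap[u][v] -= bottleneck                      # use forward capacity
            cap[v][u] = cap[v].get(u, 0) + bottleneck    # add residual edge
            v = u
        flow += bottleneck
```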

NP

NP is the set of all decision problems (questions with a yes-or-no answer) for which the 'yes' answers can be verified in polynomial time (O(n^k), where n is the problem size and k is a constant) by a deterministic Turing machine. Polynomial time is sometimes used as the definition of fast or quickly.

Gayle Shapley runtime and data structure

O(n^2); arrays and linked lists

Weighted Task Interval Scheduling

Problem: We want to select the subset of tasks that maximizes our return. Some of our task intervals overlap, so there's a tradeoff we have to make. -(Previously we could simply try to maximize the overall number of tasks, as the tasks were unweighted, but now we need to account for the relative value of each task) -If I were doing this in a brute-force manner, I would do a top-down recursion: for each task I would have to calculate the maximum possible weight including it would get, by recursively exploring its inclusion along with all other possible permutations of partial non-conflicting interval schedules, and then returning the global max -We can visualize this as a decision tree, where at each layer we branch on whether or not to include a task. We have to traverse each possible permutation of the tree recursively and then 'boil up' the max solution --E.g., to determine the max of including task 5, or not, we have to explore the branch of including it or not. Then if 5 is included, is 3 included or not, and if it is, is 1 included, and so on... --Note: we begin to see the overlap! Those partial permutations of schedules will overlap, and in this brute-force implementation we'll have to re-traverse the same sub-branches over and over again -Key idea: Since the top-down approach asks us to solve these same smaller problems multiple times, why not solve them up-front ONCE, and STORE these solutions. Then when subsequently asked to solve them again, we can simply reference our previously solved solution in constant time (this is what we refer to as overlapping subproblems) -Also key idea: These are actually fairly simple decision trees: at each node, either include that node, and the max obtainable with it, or don't include it, and instead pass up the max of excluding it (optimal substructure)

NP hard on exam

Remember: If asked on the exam to prove that one problem is reducible to another (or that the problem is NP-Hard), you only need to do step 2 and simply provide logic to show how the problems are linked in a systematic manner that would allow for a polynomial mapping of the problems. ○ You will not be expected to determine that mapping or anticipate the runtime ● Also remember: If you are asked to prove a problem is NP-Complete, solve steps 1 and 2 --Proving an NP-Complete problem: ○ Here what we're effectively establishing is that a 'simple' solution doesn't exist ○ We are not proposing an actual solution! ■ Therefore there is no need to prove correctness of the solution ■ By demonstrating the NP-Complete relationship you've shown that a poly-time solution doesn't exist (unless P = NP) and your work is done ● Reductions are asymmetrical ○ When reducing we assume our harder problem has a solution, and we simply need to map a particular instance of that harder problem (which we know we can solve) to the easier problem

closest points problem (D&C)

Same thought as merge-sort: ○ We don't need to brute-force N^2 comparisons (clearly most points don't need to be compared, as they're obviously not candidates for closest-pair) ■ Instead, simply divide your space recursively along the X axis until you can do n/2 point-wise comparisons between the individual points ● Calculate and track the current best min distance ○ The merging step is tricky because it's possible that the shortest distance is across the divide. ■ We want to avoid doing a many-to-many comparison of points across the divide, as we'd end up with the same initial problem, just broken down differently. ● Instead, don't think of the merge in terms of points on either side; think instead about merging along the (shrinking) min-distance strip along the divide. ● By segmenting the border into discrete grids, we simply need to see if any points fall within the grid on one side of the divide and then make a fixed number of lookups to see if a point falls within the grid on the other side (and then calculate the distance between them) ■ Comparison among these sections simplifies to a constant-time lookup of grids ● Size of the grid is determined by the current best min ● Since we've already determined the min distance within a section, we only need to look at a fixed number of grids across other sections ○ Impact: At most, every point only needs to be involved in 1 merge step, and that merge is constant time per point, making the overall merge linear in terms of N

solving the DP Coin change problem

Simple way to solve: ○ Sort coins by size and iterate from smallest to highest ○ Create an i x n matrix, where i is each unique coin and n is the total change value from 1 up to n ○ At each value determine the number of unique ways of making change up to and including coin i ○ To do so, reference previously calculated values and augment with additional possibilities --Let's focus in on the array of ways for 4 (n=4) when adding 2 (i=2). First we take the previous number of unique ways (1), and then we look at how many ways 2 goes into it. Obviously 2 goes in by itself (2+2) (1), and 2 goes in once, in which case we reference the number of unique ways the remainder, in this case 2, could previously be obtained (1), giving us a grand total of 3 ● Now let's focus in on that last array value: adding 4 to 8. In this case we take the previously calculated number of ways (5), one unique value for 4+4 (1), then add 4 in once and pull in the number of unique ways the remainder (4) goes in (3). Summed together it gives us 9! ○ Key point: a fixed number of lookups based on how many times a new coin goes into the total being calculated
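The table above can be collapsed to a single rolling row; this sketch (names are mine) adds one coin denomination at a time, exactly as described:

```python
def coin_change_ways(coins, n):
    """Number of ways to make n cents from an unlimited supply of coins.
    ways[v] accumulates counts coin by coin, so 1+2 and 2+1 aren't
    counted as distinct combinations."""
    ways = [0] * (n + 1)
    ways[0] = 1                       # one way to make 0: use no coins
    for c in coins:                   # add one coin denomination at a time
        for v in range(c, n + 1):
            ways[v] += ways[v - c]    # new ways: use coin c at least once
    return ways[n]
```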

NP-hard: cliques

Simply stated: a set of nodes within a graph is a clique when they are all completely, directly connected to one another (i.e., all nodes have edges to all other nodes in the clique) ● More formally we would say: suppose that G = (V, E) is a simple graph. A set S ⊆ V is a clique if every pair of vertices in S is adjacent. That is, S is a clique if for all pairs of different vertices u and v in S, {u, v} ∈ E. ● Can we show that Clique determination (i.e., for a given graph G, does it have a clique of at least size k) is NP-hard, by reducing another NP-Complete problem to it? Declare independence! ● Intuition: If a clique is completely connected, wouldn't the inverse of that be a set of nodes that are completely disconnected from one another... ○ Wait, have we seen that somewhere before... YES! THAT'S AN INDEPENDENT SET! ● Let's formalize this relationship: ○ Suppose G = (V, E) is a simple graph. Then G' = (V, E') is the complement of G, formed by complementing the set of edges. That is, G' has an edge between different vertices u and v if and only if G does not have an edge between u and v. ○ The following theorem follows directly from the definitions of independent sets and cliques: ■ Suppose G = (V, E) is a simple graph. S is an independent set of G if and only if S is a clique of G'.

master theorem: Merge sort

T(n) = 2T(n/2) + n
○ n - the size of the problem. For Merge Sort, n is the length of the list being sorted.
○ a - the number of subproblems in each recursive step. In Merge Sort, since we divide the array into two halves and recurse down each half, a = 2.
○ b - the factor by which we reduce the subproblems. For Merge Sort b = 2 because we pass half of the array (length n) to each subproblem; n/b is the size of each subproblem.
○ f(n) - the work done on n outside of the recursive steps. For Merge Sort this represents the merging step for the results of the recursion and is O(n).

master theorem: binary search

T(n) = T(n/2) + C
○ n - the size of the problem: the length of the list being searched.
○ a - the number of subproblems in each recursive step. Since we recurse into only one half, a = 1.
○ b - the factor by which we reduce the subproblem. b = 2 because we pass half of the array (length n) to the subproblem; n/b is the size of each subproblem.
○ C - the work done on n outside of the recursive step. For binary search it's a constant-time check of the matching condition.
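The recurrence corresponds to halving the search range once per step with constant work in between. A minimal iterative sketch (returns the index of the target, or -1 if absent):

```python
def binary_search(arr, target):
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2          # constant work C per step
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1              # discard the left half
        else:
            hi = mid - 1              # discard the right half
    return -1

print(binary_search([1, 3, 5, 7, 9], 7))  # 3
```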

why divide and conquer cannot use DP

That is the reason a recursive algorithm like Merge Sort cannot use dynamic programming: the subproblems do not overlap in any way, so the strategy of memoization is entirely unhelpful.
○ However, the two are similar in that both break exponential runtimes by collapsing the problem into a set of subproblems that are solved once and recombined.

Gale-Shapley proof of correctness

The set S returned at termination is a perfect matching.
Proof: The set of engaged pairs always forms a matching. Suppose the algorithm terminates with a free man m. At termination, it must be the case that m had already proposed to every woman, for otherwise the while loop would not have exited. But this contradicts (1.4), which says that there cannot be a free man who has proposed to every woman.
The set S returned at termination is a stable matching.
Proof (by contradiction): Assume there is an instability. Set up two pairs and say that w1 and m2 prefer each other over their partners, and that the last proposal was m2 to w2. Ask: did m2 ever propose to w1? If not, then w2 must rank higher than w1 on m2's preference list, which contradicts the instability we set up. On the other hand, if m2 did propose to w1, then w1 must have rejected him in favor of m1. But that also contradicts the instability, because w1 would have to prefer m1 over m2 to reject m2's proposal. Therefore, S is a stable matching.
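The algorithm whose output these proofs certify can be sketched compactly. A minimal version, assuming preferences are given as dicts of full preference lists (names and data layout are illustrative):

```python
from collections import deque

def gale_shapley(men_prefs, women_prefs):
    free_men = deque(men_prefs)
    next_proposal = {m: 0 for m in men_prefs}   # index of next woman to try
    # rank[w][m] = position of m on w's list (lower = preferred)
    rank = {w: {m: i for i, m in enumerate(p)} for w, p in women_prefs.items()}
    engaged = {}                                # woman -> man
    while free_men:
        m = free_men.popleft()
        w = men_prefs[m][next_proposal[m]]      # highest-ranked not yet tried
        next_proposal[m] += 1
        if w not in engaged:
            engaged[w] = m                      # w was free: accept
        elif rank[w][m] < rank[w][engaged[w]]:
            free_men.append(engaged[w])         # w trades up; old partner freed
            engaged[w] = m
        else:
            free_men.append(m)                  # rejected; m stays free
    return {m: w for w, m in engaged.items()}

men = {'m1': ['w1', 'w2'], 'm2': ['w1', 'w2']}
women = {'w1': ['m1', 'm2'], 'w2': ['m1', 'm2']}
print(gale_shapley(men, women))  # {'m1': 'w1', 'm2': 'w2'}
```

Every man proposes down his list at most once per woman, which is exactly the fact the perfect-matching proof relies on.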

Knapsack Problem

This is similar to weighted interval scheduling in that we build up our decision by evaluating whether to include something as part of a new max, or fall back to a previously calculated max.
● Our subproblem can be structured as a decision tree:
○ For a given weight limit w and set of items up to i, does including i (or more than one copy, if non-exclusive) and its associated value vi, along with the max value of a non-competing mix of previous items V[i-1, w-wi] (i.e., using only the remaining available weight if we include i), give me a local MAX? Or does excluding it in favor of an alternative mix at that same weight without item i, V[i-1, w], provide a higher MAX?
-- We structure this solution as a 2D table with one axis being the array of items (sorted from lowest to highest) and the other being the total weight capacity of the knapsack.
○ Note: just like the metaphor of filling up a jar (first with the largest items and then smaller ones in the crevices), we 'work backward' or bottom-up, by starting with the smallest items. That way, as we're working through the larger ones, we can determine each MAX by looking up the smaller items over smaller spaces we've previously calculated.
● Also note: the runtime for Knapsack is determined by whether we allow multiple copies of each item.
○ In the simple version with only one copy of each item, the max calculation for each square is a fixed number of lookups, so the overall runtime is simply the dimension of the 2D grid (as completing each square within it is constant time).
○ If we allow multiple copies of an item in our max solution, we have up to n lookups per square (e.g., for an item weighing 2, for the square with knapsack weight 11, we have to do 6 lookups: one each for the inclusion of 5, 4, 3, 2, or 1 copies, plus the exclusion of the item), so the overall runtime is the grid (w*n) times n, or w*n^2.
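The single-copy (0/1) version of the table fill can be sketched directly from the include/exclude recurrence above (the example item weights, values, and capacity are made up for illustration):

```python
def knapsack(weights, values, capacity):
    n = len(weights)
    # V[i][w] = max value using items 1..i under weight limit w
    V = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        wi, vi = weights[i - 1], values[i - 1]
        for w in range(capacity + 1):
            V[i][w] = V[i - 1][w]                 # exclude item i
            if wi <= w:                           # include it if it fits
                V[i][w] = max(V[i][w], vi + V[i - 1][w - wi])
    return V[n][capacity]

# items: (weight 1, value 6), (2, 10), (3, 12); capacity 5
print(knapsack([1, 2, 3], [6, 10, 12], 5))  # 22 (take items 2 and 3)
```

Each square is a fixed two lookups, so the runtime is the grid size O(n*w), exactly as noted above.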

mergesort runtime

This step runs in O(A + B) where A and B are the sizes of the arrays (or O(N) for the entire list). Overall runtime is O(N log N).
○ Question: intuitively I understand why breaking the problem down to a 1-to-1 comparison works, but why is it beneficial to sort arrays > 1? Aren't we simply recreating our original problem in multiple smaller iterations?
■ What's changed is that now the sub-arrays are sorted! It's much easier to merge sorted arrays than unsorted ones.
■ We can simply do a point-by-point array comparison and stack the merged array accordingly.
■ That's why the runtime for the final merge is simply O(A + B): we just iterate once over the two arrays and we're done!
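The point-by-point merge and the recursion around it can be sketched in a few lines:

```python
def merge(a, b):
    # a and b are already sorted; one pass over both gives O(A + B)
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    return out + a[i:] + b[j:]       # append whichever side has leftovers

def merge_sort(arr):
    if len(arr) <= 1:
        return arr                   # base case: trivially sorted
    mid = len(arr) // 2
    return merge(merge_sort(arr[:mid]), merge_sort(arr[mid:]))

print(merge_sort([5, 2, 4, 7, 1, 3, 2, 6]))  # [1, 2, 2, 3, 4, 5, 6, 7]
```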

max-flow for bipartite graphs

We are college administrators charged with housing assignments.
○ We have n students and d dorms.
● Each student has selected a subset of dorms they want to live in.
● Each dorm can accommodate a certain number of students.
● Can we devise an algorithm to determine if all students can be assigned to a dorm of their choice?
● Yes! By modeling this as a Max-Flow problem!
Graph it up!
● First, graph the relationships between students and dorms.
● Then add a source (linking to all students) and a terminus (following the dorms).
● Add edge capacity weights: in this case, 1 for each student-to-dorm edge, and the total dorm capacity from each dorm to the terminus.
● Run Ford-Fulkerson to determine Max-Flow.
● If Max-Flow = number of students, then a stable assignment is possible!
● Can you think of any other use cases?
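The construction can be sketched end to end. This is a BFS-based augmenting-path variant (Edmonds-Karp style) rather than generic Ford-Fulkerson, and the toy instance (3 students, 2 dorms) is made up for illustration:

```python
from collections import defaultdict, deque

def max_flow(cap, source, sink):
    # cap[u][v] = residual capacity of edge u -> v
    flow = 0
    while True:
        parent = {source: None}
        q = deque([source])
        while q and sink not in parent:          # BFS for an augmenting path
            u = q.popleft()
            for v in list(cap[u]):
                if cap[u][v] > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if sink not in parent:
            return flow                          # no augmenting path left
        path, v = [], sink                       # trace the path back
        while parent[v] is not None:
            path.append((parent[v], v)); v = parent[v]
        bottleneck = min(cap[u][v] for u, v in path)
        for u, v in path:                        # push flow, update residuals
            cap[u][v] -= bottleneck
            cap[v][u] += bottleneck
        flow += bottleneck

# Build the network: source -> students (cap 1) -> chosen dorms (cap 1)
# -> sink (cap = dorm capacity)
cap = defaultdict(lambda: defaultdict(int))
for s in ('s1', 's2', 's3'):
    cap['src'][s] = 1
for s, dorms in {'s1': ['d1'], 's2': ['d1', 'd2'], 's3': ['d2']}.items():
    for d in dorms:
        cap[s][d] = 1
cap['d1']['sink'], cap['d2']['sink'] = 1, 2

print(max_flow(cap, 'src', 'sink'))  # 3 = number of students, so feasible
```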

DP: Fastfood Franchising

You're considering opening a series of fast-food restaurants along road stops on the 405. There are n possible road-stop locations along a straight line, and the distances of these locations from the start of the 405, in miles, are in increasing order m1, m2, ..., mn. The constraints are as follows:
○ At each location, you may open one restaurant; the expected profit from opening a restaurant at location i is given as pi.
○ Any two restaurants must be at least k miles apart, where k is a positive integer.
● We want to create a program to maximize total profit (Pi). Use a dynamic programming formulation to solve this problem.
-- Intuitively we can see how the optimal solution can be constructed here:
○ As we iterate along the highway, we can evaluate the optimal distribution of restaurants along each stop for maximum profit to that point (notated as capital Pi).
○ Specifically, at each stop (location i) we face a decision: do we place a restaurant there or not, based on what maximizes the cumulative profit up to that point?
■ The maximum expected profit at location i comes from the maximum of the expected profit at the preceding location (h) and the profit from opening at location i.
○ What governs opening a location?
■ If the distance between h and i, |h - i|, is > k, then it always makes sense to add a restaurant at i.
■ If |h - i| < k, then we have to evaluate which is higher and return the max of:
● Ph (maximum profit up to h)
● Pi, where Pi = profit at location i (pi) + max profit at i - k (P(i-k))
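The recurrence can be sketched directly: P[i] = max(P[i-1], p_i + P[j]), where j is the last location at least k miles before location i (found here by a backward scan, a simplification of the P(i-k) lookup above). The example mileages and profits are made up:

```python
def max_profit(miles, profits, k):
    # miles are sorted increasing; P[i] = best profit using locations 1..i
    n = len(miles)
    P = [0] * (n + 1)
    for i in range(1, n + 1):
        # j = last location at least k miles before location i (0 if none)
        j = i - 1
        while j > 0 and miles[i - 1] - miles[j - 1] < k:
            j -= 1
        # either skip location i, or open there plus the best compatible prefix
        P[i] = max(P[i - 1], profits[i - 1] + P[j])
    return P[n]

# stops at miles 1, 3, 5, 8 with profits 5, 6, 5, 7 and k = 3
print(max_profit([1, 3, 5, 8], [5, 6, 5, 7], 3))  # 17 (open at 1, 5, and 8)
```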

Prim's algorithm

A greedy algorithm that finds a minimum spanning tree for a weighted undirected graph by focusing on the nodes.
Maintain two disjoint sets of nodes: one containing nodes that are in the growing spanning tree, and another containing nodes that haven't yet been added.
○ Select the node that is: 1. connected to the growing spanning tree, 2. not yet in the growing spanning tree, and 3. reached by the lowest-weighted edge; add it into the growing spanning tree.
At every step, we consider all the edges that connect the two sets and pick the minimum-weight edge among them. After picking the edge, we move the attached vertex to the set containing the growing MST and start again.
● The group of edges that connects two sets of vertices in a graph is called a 'cut' in graph theory.
● So, at every step of Prim's algorithm, we find a cut (of two sets: one contains the vertices already included in the MST, the other contains the rest), pick the minimum-weight edge from the cut, and move that vertex over to the MST set.
What happens to the other edges that node has?
○ Only migrate the specific edge over to the MST. If this node connects to more than one node in the MST, drop those other edges.
○ If the node connects to any non-MST nodes, keep those edges, as they now form the new cut!
● Repeat until all nodes have migrated to the MST set.
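A compact sketch using a heap to hold the current cut's edges; stale edges (whose endpoint was already absorbed) are simply dropped when popped, which implements the "drop those other edges" rule lazily. The example graph is made up:

```python
import heapq

def prim_mst_weight(adj, start):
    # adj: node -> list of (weight, neighbor); returns total MST weight
    in_mst = {start}
    heap = list(adj[start])          # edges crossing the initial cut
    heapq.heapify(heap)
    total = 0
    while heap and len(in_mst) < len(adj):
        w, v = heapq.heappop(heap)   # minimum-weight edge in the cut
        if v in in_mst:
            continue                 # stale edge: no longer crosses the cut
        in_mst.add(v)
        total += w
        for edge in adj[v]:          # v's edges to non-MST nodes join the cut
            if edge[1] not in in_mst:
                heapq.heappush(heap, edge)
    return total

adj = {'a': [(1, 'b'), (4, 'c')],
       'b': [(1, 'a'), (2, 'c'), (5, 'd')],
       'c': [(4, 'a'), (2, 'b'), (3, 'd')],
       'd': [(3, 'c'), (5, 'b')]}
print(prim_mst_weight(adj, 'a'))  # 6 (edges a-b, b-c, c-d)
```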

minimum spanning tree (MST)

a subset of the edges of a connected, edge-weighted undirected graph that connects all the vertices together, without any cycles and with the minimum possible total edge weight.

graph storage data structures

Adjacency matrix: constant-time edge lookup. Adjacency list (linked list): linear-time lookup over a node's neighbors.
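Both representations of the same tiny graph, showing the lookup difference (the graph is an illustrative example):

```python
# 3-node undirected graph with edges 0-1 and 1-2
n = 3
matrix = [[0] * n for _ in range(n)]   # adjacency matrix
adj = {i: [] for i in range(n)}        # adjacency list
for u, v in [(0, 1), (1, 2)]:
    matrix[u][v] = matrix[v][u] = 1    # O(1) edge lookup, O(V^2) space
    adj[u].append(v); adj[v].append(u) # O(degree) lookup, O(V + E) space

print(matrix[0][1] == 1)   # constant-time: index straight into the matrix
print(2 in adj[1])         # linear-time: scan node 1's neighbor list
```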

interval scheduling -- greedy

Select intervals based on when they finish first. O(n log n) for labeling and ordering by finishing time; the selection pass itself is O(n).
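The earliest-finish-time greedy can be sketched in a few lines (intervals are assumed to be (start, finish) tuples; the example set is made up):

```python
def max_nonoverlapping(intervals):
    selected, last_finish = [], float('-inf')
    # O(n log n): order by earliest finishing time
    for start, finish in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_finish:        # compatible with everything chosen
            selected.append((start, finish))
            last_finish = finish        # O(n) selection pass
    return selected

ivs = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)]
print(max_nonoverlapping(ivs))  # [(1, 4), (5, 7), (8, 11)]
```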

DFS data and runtime

Uses a stack; runs in O(n + E), where n is the number of nodes and E the number of edges.

depth-first search (DFS) algorithm

Start at node s. Head down one path, looking for the target, until you hit an endpoint. If the target is not found, backtrack to the most recent branch and go down that path until a new endpoint is reached. Continue until you traverse the entire graph or find the target.
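An iterative sketch with an explicit stack: popping the stack is the "backtrack to the most recent branch" step. The graph shape and node names are illustrative:

```python
def dfs(graph, start, target):
    # graph: node -> list of neighbors
    stack, visited = [start], set()
    while stack:
        node = stack.pop()            # most recently discovered branch first
        if node == target:
            return True
        if node in visited:
            continue
        visited.add(node)
        stack.extend(n for n in graph[node] if n not in visited)
    return False                      # whole reachable component traversed

g = {'a': ['b', 'c'], 'b': ['d'], 'c': [], 'd': []}
print(dfs(g, 'a', 'd'))  # True
```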

Breadth-first search (BFS) algorithm

Traverses an undirected graph. BFS(G, s): let L0 = {s}; set L1 to all neighboring nodes of s; keep building each layer Lj+1 from the unvisited neighbors of Lj until no new nodes are encountered.
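The layer-by-layer construction can be sketched directly (the example graph is made up):

```python
def bfs_layers(graph, s):
    # returns [L0, L1, ...] where L0 = [s] and Lj+1 = unseen neighbors of Lj
    layers, seen = [[s]], {s}
    while layers[-1]:
        nxt = []
        for u in layers[-1]:
            for v in graph[u]:
                if v not in seen:
                    seen.add(v)
                    nxt.append(v)
        layers.append(nxt)
    return layers[:-1]          # drop the trailing empty layer

g = {'s': ['a', 'b'], 'a': ['s', 'c'], 'b': ['s', 'c'], 'c': ['a', 'b']}
print(bfs_layers(g, 's'))  # [['s'], ['a', 'b'], ['c']]
```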

DP example: Tug-of-War

We want to divide this discussion section into a game of tug-of-war (two teams pulling on opposite sides of a rope). Design an algorithm to build two teams A and B that are as equal as possible. Note that the two teams do not need to have the same number of people. Each person i has a strength si.
● The sum of the strengths of people on team A should be as close as possible to the sum of the strengths of the people on team B. That is, you want to minimize the difference |sum of si on A - sum of si on B|.
● Greedily we could try to solve this by assigning from strongest to weakest, but this is not always optimal.
● This time we will design a DP algorithm to build the teams A and B. Your algorithm is now expected to successfully minimize the difference in total strength between the two teams.
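One standard DP formulation (an assumption here, since the notes stop before the solution) treats this as a subset-sum problem: find the achievable team-A strength closest to half the total, and the difference follows. The example strengths are made up:

```python
def tug_of_war(strengths):
    # reachable[t] == True iff some subset of people has total strength t
    total = sum(strengths)
    reachable = [False] * (total + 1)
    reachable[0] = True
    for s in strengths:
        for t in range(total, s - 1, -1):   # downward so each person is used once
            reachable[t] = reachable[t] or reachable[t - s]
    # best team-A strength: largest achievable sum not exceeding total/2
    best = max(t for t in range(total // 2 + 1) if reachable[t])
    return total - 2 * best                 # minimized strength difference

print(tug_of_war([3, 1, 4, 2, 2, 1]))  # 1 (e.g., {4, 2} = 6 vs {3, 2, 1, 1} = 7)
```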

