Dynamic Programming, Greedy Algorithms, Recursion, & Backtracking


A few playlists to save:

1. https://tinyurl.com/y5x2kqc8
2. https://tinyurl.com/pdblrxw

Optimal substructure

If an optimal solution of the given problem can be obtained by using optimal solutions of its subproblems, then the given problem is said to have the optimal substructure property.

How to make recursion faster?

Memoize, or convert to bottom-up

Top-Down w/Memoization vs Bottom-Up

*Same asymptotic runtime.*

1. Bottom-up is more efficient by a constant factor because there is no overhead for recursive calls.
2. Bottom-up may benefit from better memory-access patterns.
3. Top-down can avoid computing solutions of subproblems that are not required.
4. Top-down is more natural.

Top-down: start with the whole problem and work backwards (return q, which equals r[n]). Bottom-up: start with the base case and work upwards until you reach the solution (return r[n]).
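The r[n] above refers to the classic rod-cutting problem; here is a minimal Python sketch of both styles, assuming p[i-1] is the price of a rod of length i (function names are mine):

```python
def cut_rod_top_down(p, n, memo=None):
    """Top-down: start with the whole length n and recurse, caching results."""
    if memo is None:
        memo = {}
    if n == 0:
        return 0
    if n in memo:
        return memo[n]
    # q is the best revenue over all first-cut lengths i
    q = max(p[i - 1] + cut_rod_top_down(p, n - i, memo) for i in range(1, n + 1))
    memo[n] = q
    return q

def cut_rod_bottom_up(p, n):
    """Bottom-up: fill r[0..n] from the base case upward, then return r[n]."""
    r = [0] * (n + 1)
    for j in range(1, n + 1):
        r[j] = max(p[i - 1] + r[j - i] for i in range(1, j + 1))
    return r[n]
```

Both return the same value; only the order in which subproblems are solved differs.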

What is the difference between Greedy Algorithm and dynamic programming algorithms?

- In a greedy algorithm, we make whatever choice seems best at the moment, in the hope that it will lead to a globally optimal solution.
- In dynamic programming, we make a decision at each step considering the current problem and the solutions to previously solved subproblems in order to compute the optimal solution.

The difference is that dynamic programming requires you to remember the answers for the smaller states, while a greedy algorithm is local in the sense that all the information needed is in the current state. Of course, there is some intersection.

Links:
1. https://tinyurl.com/y565bnat
2. https://tinyurl.com/y6pvl9q3
3. https://tinyurl.com/yy3v49qf
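A small sketch of where the two diverge, using coin change with the (assumed) coin set {1, 3, 4}: greedy's local choice (largest coin first) is not globally optimal, while DP, which remembers smaller states, is:

```python
def greedy_coins(coins, amount):
    """Always take the largest coin that fits (locally best choice)."""
    count = 0
    for c in sorted(coins, reverse=True):
        count += amount // c
        amount %= c
    return count

def dp_coins(coins, amount):
    """best[a] = fewest coins for amount a, built from smaller states."""
    INF = float('inf')
    best = [0] + [INF] * amount
    for a in range(1, amount + 1):
        best[a] = min((best[a - c] + 1 for c in coins if c <= a), default=INF)
    return best[amount]
```

For amount 6, greedy picks 4 + 1 + 1 (three coins) while DP finds 3 + 3 (two coins).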

What is the difference between divide and conquer and dynamic programming algorithms?

- Divide and conquer solves disjoint subproblems recursively and combines their solutions, but often solves common subproblems repeatedly and unnecessarily.
- Dynamic programming applies when subproblems overlap (when subproblems share sub-subproblems) and saves each subproblem solution in a table in case it is needed again.
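The cost of "solving common subproblems repeatedly" can be made concrete by counting calls on Fibonacci, a standard illustration (the counter plumbing is mine):

```python
def fib_naive(n, counter):
    """Plain recursion: the same subproblems are recomputed many times."""
    counter[0] += 1
    return n if n < 2 else fib_naive(n - 1, counter) + fib_naive(n - 2, counter)

def fib_memo(n, counter, table=None):
    """Memoized: each subproblem is computed exactly once."""
    if table is None:
        table = {}
    if n in table:
        return table[n]
    counter[0] += 1
    table[n] = n if n < 2 else (fib_memo(n - 1, counter, table)
                                + fib_memo(n - 2, counter, table))
    return table[n]
```

For n = 10 the naive version makes 177 calls; the memoized one computes only the 11 distinct subproblems 0..10.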

What are the ways to implement a dynamic programming approach?

- Top-down with memoization: recursive; previous subproblem solutions are saved and looked up before solving a new subproblem.
- Bottom-up method: requires some notion of subproblem "size", so that each problem can be solved by combining the solutions of smaller subproblems already computed.

Recognizing a dynamic programming problem

- You are redoing work for subproblems.
- A recursive solution exists.
- The obvious solution is O(n^2) and the array must stay in order (and is not sorted).

Developing DP Algorithm

1. Characterise the structure of an optimal solution.
2. Recursively define the value of an optimal solution (test your definition very carefully!).
3. Compute the value of an optimal solution, typically in bottom-up fashion.
4. Construct an optimal solution from the computed information.
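The four steps can be sketched on longest common subsequence, a textbook example (the implementation details are mine): step 2 is the recurrence, step 3 fills the table c, and step 4 walks the table back to build an actual answer:

```python
def lcs(x, y):
    m, n = len(x), len(y)
    # Step 3: compute the value of an optimal solution bottom-up.
    # c[i][j] = length of an LCS of x[:i] and y[:j]  (step 2's recurrence)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    # Step 4: construct an optimal solution from the computed information.
    out, i, j = [], m, n
    while i and j:
        if x[i - 1] == y[j - 1]:
            out.append(x[i - 1]); i -= 1; j -= 1
        elif c[i - 1][j] >= c[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return ''.join(reversed(out))
```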

DP Requirements

1. Optimal substructure
2. Overlapping subproblems

How to better apply recursion to solve problems ?

1. When in doubt, write down the recurrence relation.
2. Whenever possible, apply memoization.
3. When the stack overflows, tail recursion might come to help.

Some bonus tips:

4. The base case: this is the hardest to define. We need to figure out the boundary condition and the return type.
5. The recursion rule: the return type should be the same as the base case's, and sometimes we need to keep an eye on whether it is a pre-order, post-order, or in-order traversal. Example: use the method from "reverse linked list" — imagine we are at a middle point in the recursion where everything up to this point has already been done; how would we proceed from here? In "reverse linked list", pick a point n where all nodes after n (n+1 to the end) have already been reversed: what do we do at this point? In "Fibonacci", pick a point n where all values before n (0 to n-1) have already been calculated: how do we find n?
6. Write some easy examples to run through. It is fine if the first steps are wrong; work through some cases to figure out which part is wrong.
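The "reverse linked list" example mentioned above can be sketched like this (a minimal singly linked list of my own; the key move is assuming the recursion has already handled everything after the current node):

```python
class Node:
    def __init__(self, val, nxt=None):
        self.val, self.next = val, nxt

def reverse(head):
    # Base case: an empty list or single node is already reversed.
    if head is None or head.next is None:
        return head
    # Recursion rule: assume nodes head.next .. end are already reversed.
    new_head = reverse(head.next)
    head.next.next = head   # the old successor now points back at head
    head.next = None        # head becomes the new tail
    return new_head
```

Usage: reversing 1 -> 2 -> 3 yields 3 -> 2 -> 1.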

Dynamic paradigms: Greedy Algorithm

A greedy algorithm is an algorithmic strategy that makes the locally optimal choice at each small stage, with the goal of this eventually leading to a globally optimal solution. This means the algorithm picks the best solution at the moment without regard for consequences. It picks the best immediate output but does not consider the big picture, hence it is considered greedy.

Links:
1. https://tinyurl.com/y5xuvhjj
2. https://tinyurl.com/yytms3x6
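A minimal sketch of a greedy algorithm that does work globally: activity selection, where always taking the activity that finishes earliest is provably optimal (intervals are (start, finish) pairs; the function name is mine):

```python
def select_activities(intervals):
    """Greedily pick the compatible activity with the earliest finish time."""
    chosen, last_finish = [], float('-inf')
    for start, finish in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_finish:       # compatible with what we already chose
            chosen.append((start, finish))
            last_finish = finish
    return chosen
```

The locally best choice (earliest finish) leaves the largest amount of time for the remaining activities, which is why it composes into a global optimum here.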

Dynamic paradigms: Divide-and-conquer

Break up a problem into independent subproblems, solve each subproblem, and combine the solutions to the subproblems to form a solution to the original problem. E.g. merge sort, quicksort.
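Merge sort, sketched minimally, shows the three phases directly — divide (split), conquer (recurse on disjoint halves), combine (merge):

```python
def merge_sort(a):
    """Divide the list in half, sort each half recursively, merge the results."""
    if len(a) <= 1:
        return a                      # base case: already sorted
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```

Unlike DP, the two halves share no subproblems, so no table is needed.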

How to identify DP questions?

Look for these phrases: length of the smallest/largest subsequence; fewest/largest number of; the smallest/largest product; total number of ways to; maximum/minimum number of; largest/smallest of.

Dynamic paradigms: Dynamic programming

Problems that can be solved recursively, but whose subproblems overlap. A DP algorithm solves each subproblem once and saves the result in a table (avoiding re-computation). Typically used for optimization problems.

How to identify Greedy Algorithm questions?

To prove that an optimization problem can be solved using a greedy algorithm, we need to prove that the problem has the following:

1. *Optimal substructure property:* an optimal global solution contains the optimal solutions of all its subproblems.
2. *Greedy choice property:* a globally optimal solution can be obtained by greedily selecting a locally optimal choice.

Links:
1. https://tinyurl.com/yxbtqfhn

second step in determining optimal substructure?

assume you are given a choice that leads to an optimal solution for the given problem

How to approach Recursion

Compute f(n) by adding something to, removing something from, or otherwise changing the solution for f(n-1); in some cases, solve for the first half of the data set, then the second half. Common approaches: bottom-up, top-down, and half-half.
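The "half-half" approach can be sketched with recursive binary search, which discards half of the data on every call (parameter defaults are my own convention):

```python
def binary_search(a, target, lo=0, hi=None):
    """Half-half recursion: each call keeps only the half that can contain target.
    Returns the index of target in sorted list a, or -1 if absent."""
    if hi is None:
        hi = len(a)
    if lo >= hi:                      # empty range: not found
        return -1
    mid = (lo + hi) // 2
    if a[mid] == target:
        return mid
    if a[mid] < target:
        return binary_search(a, target, mid + 1, hi)
    return binary_search(a, target, lo, mid)
```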

Memoize

once you have a subproblem's solution, write it down for future use
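In Python, the standard library's functools.lru_cache does the "writing down" for you; a minimal sketch on Fibonacci:

```python
from functools import lru_cache

@lru_cache(maxsize=None)           # unbounded cache: every result is kept
def fib(n):
    # each distinct n is computed once; repeat calls are table lookups
    return n if n < 2 else fib(n - 1) + fib(n - 2)
```

This turns the exponential naive recursion into a linear-time computation without changing the recursive definition.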

first step in determining optimal substructure?

show that a solution involves making a choice that leaves one or more subproblems to be solved

Overlapping sub-problems

the solution combines solutions to overlapping subproblems; the space of subproblems must be "small": the same subproblems are solved over and over, rather than ever-new subproblems being generated

informally, how is the running time of a dynamic programming algorithm determined?

Two factors:
- the number of subproblems
- the number of choices within each subproblem

Time complexity = (number of subproblems) × (time spent per subproblem)

fourth step in determining optimal substructure?

use the 'cut-and-paste' technique to prove by contradiction that the subproblem solutions inside your optimal solution must themselves be optimal: assume one is not, cut it out, paste in a better one, and you get a better overall solution — contradicting optimality

third step in determining optimal substructure?

given that choice, determine which subproblems arise

