Algorithms Exam 3 (dynamic programming)


How bottom up programming works

- finds solutions to small subproblems first
- stores them
- combines them to find a solution to a slightly larger subproblem
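
A minimal sketch of the idea in Python, using Fibonacci numbers (function and variable names are illustrative):

```python
def fib_bottom_up(n):
    """Compute F(n) by solving the smallest subproblems first."""
    if n < 2:
        return n
    table = [0] * (n + 1)         # stores solutions to subproblems
    table[1] = 1                  # smallest subproblems: F(0) = 0, F(1) = 1
    for i in range(2, n + 1):     # combine two stored solutions
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_bottom_up(10))  # 55
```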

4 steps of dynamic programming

1. Characterize the structure of an optimal solution in terms of solutions to its subproblems.
2. Recursively define an optimal solution in terms of optimal solutions to smaller problems. (Note: how is this different from D&C?)
3. Compute the value of an optimal solution in bottom-up fashion, storing the results in a table for lookup later.
4. Construct an optimal solution from the computed values that produced the optimal value.
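
As a concrete instance of step 2, the rod-cutting problem (revisited below) defines the optimal revenue r_n for a rod of length n in terms of smaller rods; a sketch of the standard recurrence, assuming p_i is the price of a piece of length i:

```latex
r_0 = 0, \qquad r_n = \max_{1 \le i \le n} \left( p_i + r_{n-i} \right)
```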

for the LCS alg, if |X| = m and |Y| = n, then how many subsequences of X are there

2^m, each of which is compared with Y, making the total run time O(n · 2^m). (For example, "ab" has 2^2 = 4 subsequences: "", "a", "b", "ab".)

run time brute force LCS

O(n 2^m)
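
A sketch of the brute-force method that bound describes: enumerate all 2^m subsequences of X and check each against Y in O(n) (function names are illustrative):

```python
from itertools import combinations

def is_subsequence(s, y):
    """Check whether s occurs in y as a subsequence, in O(|y|) time."""
    it = iter(y)
    return all(ch in it for ch in s)

def lcs_brute_force(x, y):
    """Try all 2^m subsequences of x, longest first: O(n * 2^m) overall."""
    m = len(x)
    for length in range(m, -1, -1):
        for idx in combinations(range(m), length):
            candidate = "".join(x[i] for i in idx)
            if is_subsequence(candidate, y):
                return candidate
    return ""

print(lcs_brute_force("ABCBDAB", "BDCABA"))  # "BCBA", an LCS of length 4
```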

fib numbers calc with memoization run time

Θ(n) (saves time by using extra space)
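
A sketch of the memoized version, assuming a dict as the lookup table:

```python
def fib_memo(n, memo=None):
    """Top-down Fibonacci: each F(i) is computed once, so Theta(n) time."""
    if memo is None:
        memo = {}
    if n in memo:                 # look the value up before recursing
        return memo[n]
    result = n if n < 2 else fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
    memo[n] = result              # the extra space that buys the speedup
    return result

print(fib_memo(50))  # 12586269025
```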

Top down aka

memoization

time complexity of top down rod cutting

Θ(n²): each subproblem is solved just once; subproblems have sizes 0, 1, ..., n, and for each subproblem the for loop iterates at most n times
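
A sketch of the top-down cut-rod, assuming p[i] is the price of a piece of length i (with p[0] = 0):

```python
def cut_rod_memo(p, n, memo=None):
    """Top-down rod cutting: Theta(n^2), since each subproblem 0..n is
    solved just once and its for loop iterates at most n times."""
    if memo is None:
        memo = {}
    if n in memo:
        return memo[n]
    if n == 0:
        best = 0
    else:
        best = max(p[i] + cut_rod_memo(p, n - i, memo) for i in range(1, n + 1))
    memo[n] = best
    return best

prices = [0, 1, 5, 8, 9, 10, 17, 17, 20]  # sample prices (CLRS)
print(cut_rod_memo(prices, 8))            # 22
```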

compare bottom up dynamic programming to greedy approach

- also requires optimal substructure
- but greedy makes a choice first and then solves the remaining subproblem

3 properties of a problem that can be solved with dynamic programming?

- simple subproblems
- optimal substructure of the problem
- subproblem overlap

how memoization works

- use the normal recursive approach
- store values of smaller subproblems, and look them up before each recursive call
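
Python's functools.lru_cache automates exactly this store-then-look-up pattern; a minimal sketch:

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # unbounded cache keyed by the argument
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)
```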

rod cutting dyn. programming approach

First approach: top-down with memoization. Check whether the subproblem has already been solved; if so, return the saved value, otherwise compute it.
Second approach: bottom-up. Sort subproblems by size and solve them in size order; for each subproblem, all smaller subproblems its solution depends on have already been solved.
Both have the same asymptotic running time.
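
A sketch of the bottom-up version, with the same assumptions on p as the top-down sketch above:

```python
def cut_rod_bottom_up(p, n):
    """Bottom-up rod cutting: solve subproblems in size order 0..n."""
    r = [0] * (n + 1)             # r[j] = best revenue for a rod of length j
    for j in range(1, n + 1):     # every smaller subproblem is already solved
        r[j] = max(p[i] + r[j - i] for i in range(1, j + 1))
    return r[n]

prices = [0, 1, 5, 8, 9, 10, 17, 17, 20]
print(cut_rod_bottom_up(prices, 8))  # 22, same answer as the top-down version
```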

Dynamic Programming running times for LCS and 0/1 knapsack versus naive alg

LCS: O(m·n) vs. O(n·2^m)
0/1 knapsack: O(W·n) vs. O(2^n)
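
A sketch of the O(W·n) knapsack table, assuming weights w, values v, and capacity W (names are illustrative):

```python
def knapsack_01(w, v, W):
    """0/1 knapsack in O(W*n): dp[j] = best value using capacity j."""
    dp = [0] * (W + 1)
    for weight, value in zip(w, v):
        for j in range(W, weight - 1, -1):  # descending so each item is used once
            dp[j] = max(dp[j], dp[j - weight] + value)
    return dp[W]

print(knapsack_01([2, 3, 4], [3, 4, 5], 5))  # 7: items of weight 2 and 3
```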

LCS stands for

Longest Common Subsequence

compare dynamic programming to divide and conquer

Remember D&C? Divide into subproblems, solve each, combine. It works well when subproblems do not overlap, i.e. when they are independent, so there is no need to repeat them.
Divide and conquer: top-down. Dynamic programming: bottom-up.

simple subproblems

We should be able to break the original problem into smaller subproblems that have the same structure

overlapping subproblems

a recursive algorithm would revisit the same subproblems repeatedly

optimal substructure

an optimal solution contains within it optimal solutions to subproblems

memory function

a function that remembers (caches) values it has already computed

Two flavors of DP

- top-down with memoization
- bottom-up

conditions necessary for dynamic programming

must be able to break problem into smaller subproblems; optimal substructure; overlapping subproblems

time complexity of bottom-up cut rod

Θ(n²), because of the doubly nested for loops

is dynamic programming in place?

no; it uses extra table space to store subproblem solutions

what do you use to implement memoization

Use a table abstract data type:
- lookup key: whatever identifies a subproblem
- value stored: the solution
Could use an array/vector or a map/hashtable.
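
A sketch using a map/hashtable as the table, where the lookup key (i, j) identifies the subproblem "LCS length of prefixes X[:i] and Y[:j]":

```python
def lcs_len_memo(x, y):
    """Memoized LCS length with a dict as the table ADT."""
    memo = {}                     # key: (i, j) subproblem id; value: its solution

    def solve(i, j):
        if (i, j) in memo:
            return memo[(i, j)]
        if i == 0 or j == 0:
            ans = 0
        elif x[i - 1] == y[j - 1]:
            ans = solve(i - 1, j - 1) + 1
        else:
            ans = max(solve(i - 1, j), solve(i, j - 1))
        memo[(i, j)] = ans
        return ans

    return solve(len(x), len(y))

print(lcs_len_memo("ABCBDAB", "BDCABA"))  # 4
```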

does the LCS alg have optimal substructure?

yes; solutions of subproblems are parts of the final solution. Subproblems: "find LCS of pairs of prefixes of X and Y"
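
A bottom-up sketch of that prefix-pair table, where c[i][j] holds the LCS length of X[:i] and Y[:j]:

```python
def lcs_bottom_up(x, y):
    """O(m*n) LCS length via the table of prefix-pair subproblems."""
    m, n = len(x), len(y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    return c[m][n]

print(lcs_bottom_up("ABCBDAB", "BDCABA"))  # 4
```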

