Chapter 3: Algorithm Analysis

linear growth rate/running time

-a growth rate of cn (c is a positive constant)
-as n grows, the running time grows proportionally (doubling n doubles the running time)
-graphed as a straight line

asymptotic (algorithm) analysis

-attempts to estimate the resource consumption of an algorithm
-measures the efficiency of an algorithm (or its implementation as a program) as the input size becomes large
-it is an estimating technique: it will not tell you that one algorithm is only slightly better than another, but it is good for answering "should I use this algorithm at all?"

sequential search algorithm

-begins at the first position in the array and examines each value in turn until K is found
-once K is found, the algorithm stops
-different from the largest-value sequential search, because that one must examine every array value
(see the sketch below)
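
A minimal sketch of the search described above, written in the same style as the largest-value code later in this set; the method name and signature are assumptions, not from the text:

/** Return the position of K in A, or -1 if K is not found (hypothetical helper). */
static int sequentialSearch(int[] A, int K) {
  for (int i = 0; i < A.length; i++) {   // look at each value in turn
    if (A[i] == K)
      return i;                          // stop as soon as K is found
  }
  return -1;                             // K does not appear in the array
}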

best, worst, and average cases for assigning the value from the first position of an array to a variable

T(n) = c for all three cases, so T(n) is in O(c), i.e., constant time.

sum = 0;
for (i=1; i<=n; i++)
  for (j=1; j<=n; j++)
    sum++;
What is the running time? Does it take longer if n is bigger? What is the basic operation? What is the total number of increment operations?

T(n) = c_2 n^2
Yes, it takes longer if n is bigger.
The basic operation is the increment operation for the variable sum.
The total number of increment operations is n^2.

how are while loops analyzed?

similar to for loops

a = b;
Calculate the running time.

The assignment statement takes constant time; it is theta(1).

how are if statements analyzed?

the greater of the costs for the then and else clauses

how are switch statements analyzed?

the most expensive branch

if testing two algorithms, implementations should be ____________________

"equally efficient" (meaning all the factors are equal)

critical resource is...

-usually running time, but that is not the only concern
-space is also required (main memory and disk space)

2^10

1024

If someone asked you, "What is the growth rate of this algorithm?" You should reply...

When? Best case? Average case? Or worst case?

time to copy a value is always _______ thus T(n) = __________

c_1; T(n) = c_1
The size of the input n has no effect on the running time; this is called a constant running time.

T(n) = c_1 n^2 + c_2 n in the average case, where c_1 and c_2 are positive numbers...

c_1 n^2 + c_2 n <= c_1 n^2 + c_2 n^2 <= (c_1 + c_2) n^2 for all n > 1
So T(n) <= c n^2 for c = c_1 + c_2 and n_0 = 1.
Therefore, T(n) is in O(n^2).

a basic operation's time to complete...

does not depend on the value of its operands

sum = 0;
for (j=1; j<=n; j++)
  for (i=1; i<=j; i++)
    sum++;
for (k=0; k<n; k++)
  A[k] = k;
Calculate the running time.

theta(n^2); the pair of nested loops does about n(n+1)/2 increments, which dominates the theta(n) cost of the second loop

sum1 = 0;
for (k=1; k<=n; k*=2)
  for (j=1; j<=n; j++)
    sum1++;
Calculate the running time.

theta(nlogn); the outer loop executes about log n times (k doubles on each iteration), and the inner loop does n increments on each pass

what is the problem of sorting in the worst case?

theta (nlogn)

size

usually the number of inputs processed

is the running time of an algorithm that assigns a value to an array element constant, regardless of the value?

yes

should we analyze an algorithm with best, worst, or average case?

-best case is usually too optimistic and only occasionally worth using
-worst case is good because you know the algorithm can't do any worse than that (ex: real-time applications like monitoring airplanes)
-average case is useful if we want to know the typical behavior when the program runs many times (but we need to know the distribution of inputs; in sequential search, if K is usually in the last few array positions, the average-case estimate is wrong)

factors affecting running time:

-environment (speed of the CPU, bus, and peripheral hardware; competition for the computer's or network's resources)
-programming language and quality of the code
-"coding efficiency" of the programmer who converted the algorithm into a program

average case

-for sequential search, the algorithm examines about n/2 values
-if you ran the program many times, on average it would go about halfway through the array before finding the value

worst case

-for sequential search, the last position in the array contains K
-the algorithm must examine all n values, and the running time is the longest possible

best case

-for sequential search, the first integer in the array is K, so only one integer is examined
-the running time is the shortest possible

upper bound

-indicates the upper or highest growth rate that the algorithm can have
-usually stated for a particular class of input, most often the worst case (but note the bound is on the growth rate, not on the actual worst-case cost)

exponential growth rate

-a growth rate such as 2^n
-called exponential because n appears in the exponent
-n! also grows exponentially (even faster than 2^n)

big-Omega/Omega

-written omega, which looks like a horseshoe
-a lower bound for an algorithm
-the least amount of a resource (usually time) that an algorithm needs for some class of input
-like big-Oh, it is a measure of the algorithm's growth rate, measuring the resource required for some particular class of inputs (worst-, average-, or best-case input of size n)
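
For reference, the standard formal definition (standard textbook wording, not quoted from this card set): T(n) is in omega(g(n)) if there exist positive constants c and n_0 such that T(n) >= c*g(n) for all n > n_0.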

space/time tradeoff principle

-one can often achieve a reduction in time if one is willing to sacrifice space, or vice versa
-ex: lookup table

lookup table

-pre-stores the value of a function that would otherwise be computed each time it is needed
-ex: 12! is the largest factorial that fits in an int, so a program that repeatedly needs factorials can precompute the values up through 12 and store them in a table rather than recomputing them; the small amount of additional space needed to store the lookup table is usually well worth it (see the sketch below)
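
A minimal sketch of this factorial lookup table, assuming 32-bit Java ints; the array name FACT and the helper fact() are hypothetical:

// Precomputed factorials 0! through 12! (13! overflows a 32-bit int).
static final int[] FACT = {
  1, 1, 2, 6, 24, 120, 720, 5040, 40320,
  362880, 3628800, 39916800, 479001600
};

/** Return n! for 0 <= n <= 12 by table lookup instead of recomputing it (hypothetical helper). */
static int fact(int n) {
  return FACT[n];   // constant-time lookup; trades a little space for repeated multiplication
}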

overhead

-space used to store additional information about where the data are within the data structure
-ex: each node of a linked list must store a pointer to the next value on the list
-overhead should be kept to a minimum while allowing maximum access

"code tuning"

-the art of hand-optimizing a program to run faster or require less storage
-first tune the algorithm, then tune the code

disk-based space/time tradeoff

-the smaller you can make your disk storage requirements, the faster your program will run
-the time to read information from disk is enormous compared to computation time, so a program that needs less disk storage will usually be faster

T(n)

-the time T to run the algorithm as a function of the input size n
-the true running time of the algorithm

big-Theta notation

-used when the upper and lower bounds are the same within a constant factor
-an algorithm is said to be theta(h(n)) if it is in O(h(n)) and it is in omega(h(n))
-if f(n) is theta(g(n)), then g(n) is theta(f(n))

code for largest-value sequential search & its basic operation & its running time

/** Return the position of the largest value in array A */
static int largest(int[] A) {
  int currlarge = 0;               // holds position of largest element seen so far
  for (int i = 1; i < A.length; i++) {
    if (A[currlarge] < A[i])       // found a larger value
      currlarge = i;               // remember its position
  }
  return currlarge;
}
Basic operation: comparing an integer's value to the largest value seen so far, which takes a fixed amount of time.
Running time: T(n) = cn, where c is the time to do one comparison and the comparison is done n times.

simplifying rules

1) If f(n) is in O(g(n)) and g(n) is in O(h(n)), then f(n) is in O(h(n)).
2) If f(n) is in O(k*g(n)) for any constant k > 0, then f(n) is in O(g(n)).
3) If f1(n) is in O(g1(n)) and f2(n) is in O(g2(n)), then f1(n) + f2(n) is in O(max(g1(n), g2(n))).
4) If f1(n) is in O(g1(n)) and f2(n) is in O(g2(n)), then f1(n)f2(n) is in O(g1(n)g2(n)).
Rule 1 says that if some function g(n) is an upper bound for your cost function, then any upper bound for g(n) is also an upper bound for your cost function.
Rule 2 says that you can ignore any multiplicative constant in your equations when using big-Oh notation.
Rule 3 says that for two parts of a program (statements or blocks of code) that run in sequence, you only need to consider the more expensive part.
Rule 4 is used for simple loops: if an action is repeated some number of times and each repetition has the same cost, the total cost is the cost of the action times the number of times it takes place.
All four rules apply to omega and theta as well.

why not just run two algorithms as computer programs and see how many resources each consumes?

1) It takes considerable effort to implement and test two algorithms when you only want to keep one.
2) One program may be better written and so appear more efficient even though its algorithm is actually worse, because the programmer is biased toward (or more practiced with) one of them.
3) The particular empirical test cases chosen may favor one algorithm.
4) The better of the two may still not fit within your resource budget.

order this from slowest growth to fastest growth: 2^n, 5nlogn, 2n^2, 10n, n!, 20n

10n, 20n, 5nlogn, 2n^2, 2^n, n!

how are subroutine calls analyzed?

add the cost of executing the subroutine

example of a basic operation

adding/comparing two ints

you analyze time required for ____________________ and space required for the _________________

the algorithm (or its implementation as a program); the data structure

quadratic growth rate

an algorithm whose running-time equation has a highest-order term containing a factor of n^2

when estimating an algorithm's performance, we primarily consider the # of ___________________________ required by an algorithm to process an input of a certain _______________

basic operations; size

we have two functions as algebraic equations. which grows faster than the other?

The best way is to take the limit of the ratio of the two functions as n grows toward infinity:
lim (n -> infinity) f(n)/g(n)
If the limit goes to infinity, then f(n) is in omega(g(n)), because f(n) grows faster.
If the limit goes to zero, then f(n) is in O(g(n)), because g(n) grows faster.
If the limit goes to some constant other than zero, then f(n) is in theta(g(n)), because both grow at the same rate.
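
A worked instance of this limit test, using the same functions as the 2nlogn vs. n^2 card below: with f(n) = 2nlogn and g(n) = n^2,
lim (n -> infinity) 2nlogn / n^2 = lim (n -> infinity) 2logn / n = 0,
so f(n) is in O(g(n)); that is, 2nlogn is in O(n^2).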

binary search or sequential search?

Binary search is more efficient than sequential search, but it requires the array values to be in sorted order. The running time of sequential search is the same whether or not the values are in order. If the array is not already in ascending order, it is detrimental to sort it first and then run binary search: for a single search, sorting costs more than just searching sequentially.

why is it much easier to show an algorithm/program is in omega(f(n)) than it is to show a problem is in omega(f(n))?

for a problem to be in omega(f(n)) means that every algorithm that solves the problem is in omega(f(n)), even algorithms that we have not thought of!

if the worst case is f(n), in big-Oh notation...

in O(f(n)) in the worst case

an algorithm whose running time has a constant upper bound is... (big-Oh)

is in O(1)

an old computer running an algorithm with linear growth, or a new computer running an algorithm with n^2 or 5nlogn growth?

The old computer with the linear algorithm: a linear growth rate eventually beats n^2 or 5nlogn no matter how much faster the new hardware is.

If f(n) = 2nlogn and g(n) = n^2, is f(n) in O(g(n)), omega(g(n)), or theta(g(n))?

f(n) = 2nlogn is in O(g(n)) = O(n^2); equivalently, n^2 is in omega(2nlogn). The reason is shown in example 3.8, pg 68.

is the upper bound for an algorithm the same as the worst case for that algorithm for a given input of size n?

No! It is not the actual cost that is being bounded, but the growth rate of the cost.

binary search

Works on an array whose values are stored in order from lowest to highest. It begins by examining the value in the middle position of the array. If the search key K is higher than the middle value, the search repeats the same process on the upper half of the array; if K is lower, it repeats on the lower half; and so on, halving the remaining range each time. (See the sketch below.)
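
A minimal sketch of the binary search described above; the method name and signature are assumptions, not from the text:

/** Return the position of K in sorted array A, or -1 if K is absent (hypothetical helper). */
static int binarySearch(int[] A, int K) {
  int low = 0, high = A.length - 1;
  while (low <= high) {
    int mid = (low + high) / 2;    // examine the middle position
    if (A[mid] == K)
      return mid;                  // found K
    else if (A[mid] < K)
      low = mid + 1;               // K must be in the upper half
    else
      high = mid - 1;              // K must be in the lower half
  }
  return -1;                       // K is not in the array
}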

situation for worst vs average

-real-time applications: worst-case analysis
-average-case analysis: only if we know the distribution of the inputs; otherwise, use worst-case

example of an operation that is not a basic operation

summing the contents of an array of n integers is not a basic operation, because the cost depends on the value of n

an old machine vs. a new machine that is 10x faster, both running an algorithm with a linear growth rate...?

Sure, the number of records the new machine can process in a given amount of time is 10x larger, but that factor stays the same regardless of n: the old and new machines improve by the same proportion. Constant factors never affect the relative improvement gained by a faster computer.

if n^2 grows as fast as T(n) in the worst case, big-Oh...

the algorithm is in O(n^2) in the worst case

how fast does log grow in comparison to a quadratic/n^something?

The quadratic grows faster: for any a, b > 1, n^a grows faster than both (log n)^b and log(n^b).

growth rate

the rate at which the cost of the algorithm grows as the size of the input grows

constant running time

the size of input n has no effect on running time

big-Oh notation

an upper bound on the growth rate of a function f(n)
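
For reference, the standard formal definition (standard textbook wording, not quoted from this card set): T(n) is in O(f(n)) if there exist positive constants c and n_0 such that T(n) <= c*f(n) for all n > n_0.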

T(n) = 3n^4 + 5n^2, then T(n) is in O(...)

T(n) is in O(n^4).
Ignore all constants and all lower-order terms to determine the asymptotic growth rate of any cost function.

sum = 0;
for (i=1; i<=n; i++)
  sum += n;
Calculate the running time.

theta (n)

sum2 = 0;
for (k=1; k<=n; k*=2)
  for (j=1; j<=k; j++)
    sum2++;
Calculate the running time.

theta(n); the inner loop costs k on each pass, and k takes the values 1, 2, 4, ..., n, so the total number of increments is about 1 + 2 + 4 + ... + n, which is less than 2n

limitations of asymptotic analysis with respect to constants

Usually we ignore constants because in the end they make no difference to the comparison. In rare cases they do. If you have a slowly growing algorithm and a faster-growing one, but you are only sorting five records, the asymptotically better one may not be the right choice. Likewise, if you are sorting 1000 records with a "linear" algorithm whose cost is something like 800n, linear time may not be the best choice either. Use your discretion, but ignoring the constants is almost always the way to go.

