Algorithmic Complexity

How can you know the complexity of common Python functions with lists?

Python can jump directly to a position in a list without iterating through the whole list, so indexing, finding the length, appending, etc. are constant time. Removing an element, copying a list, and testing whether an element is in a list all involve traversing the list, so those operations have linear complexity.
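A minimal sketch of these costs (list contents are arbitrary; complexities noted in the comments):

L = [1, 2, 3, 4, 5]
L[2]           # indexing: O(1)
len(L)         # length: O(1)
L.append(6)    # appending: O(1) (amortized)
L.remove(3)    # removing: O(n), later elements shift left
7 in L         # membership test: O(n), scans the list
M = L[:]       # copying: O(n)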

How can you know the complexity of common Python functions with dictionaries?

Dictionaries are accessed by key rather than by position and are implemented as hash tables, so lookup, insertion, and membership testing are linear in the worst case (when many keys collide). In the average case, however, they are constant time.
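A minimal sketch (key/value contents are arbitrary):

d = {'a': 1, 'b': 2}
d['a']         # lookup by key: O(1) average, O(n) worst case
d['c'] = 3     # store: O(1) average, O(n) worst case
'b' in d       # key membership test: O(1) average, O(n) worst case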

In a program that walks over each item in a list searching for an element, what would be the best, average, and worst cases?

The best case is that the element is the first element in the list (constant time). The average case is the average running time over all possible inputs of a given size (length of the list); it is a more practical measure because, on average, a successful search looks about halfway through the list. The worst case is the maximum running time (the entire list is searched and the element is not found). The worst case is the most useful measure.
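A sketch of such a search with the cases marked in comments (the function name is illustrative):

def linear_search(L, e):
    for i in range(len(L)):
        if L[i] == e:
            return i    # best case: e == L[0], one comparison
    return -1           # worst case: e not in L, all len(L) elements examined
# average case: roughly len(L)/2 comparisons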

indirection

ability to reference something using a name, reference, or container instead of the value itself
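For example, two names can refer to the same underlying object rather than holding separate values:

original = [1, 2, 3]
alias = original     # a second reference to the same list, not a copy
alias.append(4)
print(original)      # [1, 2, 3, 4] -- the change is visible through both names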

worst case asymptotic complexity

as input size gets larger, additive constants and multiplicative factors become irrelevant; focus on the dominant term (e.g., in n^2 + 2n + 2, only the n^2 term matters, so the complexity is O(n^2); for n = 1000, the n^2 term contributes 1,000,000 steps while 2n + 2 contributes only 2,002); n grows more rapidly than log(n), and 3^n grows more rapidly than 2n^30

bubble sort

compares consecutive pairs of elements and swaps the elements so that the smaller is first; starts over when reaching the end of the list and stops when no more swaps have been made; concentrates on putting the largest elements in their correct spot first

def bubble_sort(L):
    swap = False
    while not swap:
        swap = True
        for j in range(1, len(L)):
            if L[j - 1] > L[j]:
                swap = False
                temp = L[j]
                L[j] = L[j - 1]
                L[j - 1] = temp
# O(n^2) where n is len(L); does comparison passes until no more swaps are made

Big-Oh notation / O()

compares the efficiency of algorithms; measures an upper bound on asymptotic growth (order of growth: behavior as input size gets bigger); describes the worst case (the bottleneck when a program runs); expresses the rate of growth of a program relative to input size; evaluates the algorithm itself, independent of the machine or specific implementation; a lower order of growth is better

constant complexity algorithms

complexity (number of steps) is independent of the size of the inputs; takes the same order of time no matter the input; can still contain loops and recursive calls, provided their number of iterations does not depend on the input size
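A small sketch, assuming the loop bound is fixed rather than tied to the input size:

def is_even(n):
    return n % 2 == 0     # one operation no matter how large n is: O(1)

def sum_first_three(L):   # assumes len(L) >= 3
    total = 0
    for i in range(3):    # loop runs a fixed 3 times, independent of len(L)
        total += L[i]
    return total          # still O(1)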

What are the types of orders of growth ordered by complexity class from low to high? (c = constant)

constant run time (O(1)), logarithmic (O(log n)), linear (O(n)), loglinear (O(n log n)), polynomial (O(n^c)), and exponential (O(c^n))

What is a way to do a bisection search by copying a list?

def bisect_search1(L, e):
    if L == []:
        return False
    elif len(L) == 1:
        return L[0] == e
    else:
        half = len(L)//2
        if L[half] > e:
            return bisect_search1(L[:half], e)
        else:
            return bisect_search1(L[half:], e)
# L[:half] copies the list
# O(log n) bisection search calls, each paying O(n) to copy the list, so O(n log n) overall

What is a way to do a bisection search by pointing to different places on the main list?

def bisect_search2(L, e):
    def bisect_search_helper(L, e, low, high):
        if high == low:
            return L[low] == e
        mid = (low + high)//2
        if L[mid] == e:
            return True
        elif L[mid] > e:
            if low == mid:  # nothing left to search
                return False
            else:
                return bisect_search_helper(L, e, low, mid - 1)
        else:
            return bisect_search_helper(L, e, mid + 1, high)
    if len(L) == 0:
        return False
    else:
        return bisect_search_helper(L, e, 0, len(L) - 1)
# re-walks the same list via indices rather than copying it; passes the list and indices as parameters
# O(log n)

What is an example of a program that is linear in the length of a string but logarithmic in the magnitude of an input n?

def h(n):
    answer = 0
    s = str(n)
    for c in s:
        answer += int(c)
    return answer
# converts the integer to a string and iterates over its length (number of digits), not the magnitude of n
# O(log n) -- the base of the log doesn't matter

What are methods of searching?

linear search / brute force search (walk through all elements; the list does not have to be sorted; searching for an element is O(n)) and bisection/binary search (requires a sorted list; searching for an element is O(log n))
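A minimal sketch of both, using the standard-library bisect module for the sorted case (function names are illustrative):

from bisect import bisect_left

def search_unsorted(L, e):
    return e in L                    # linear/brute force: O(n)

def search_sorted(L, e):             # L must already be sorted
    i = bisect_left(L, e)            # binary search for the insertion point
    return i < len(L) and L[i] == e  # O(log n)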

How can you find the number of steps it will take to run this program?

while x > 0:
    x = x//2

log2(n) + 1; log2(n) halvings get you to x = 1, so one more division is needed to reach x = 0 and exit the loop
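A small sketch that counts the divisions, to check the log2(n) + 1 claim:

def count_halvings(x):
    steps = 0
    while x > 0:
        x = x//2
        steps += 1
    return steps

print(count_halvings(8))    # 4 steps: 8 -> 4 -> 2 -> 1 -> 0, i.e. log2(8) + 1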

search algorithm

method for finding an item or group of items with specific properties within a collection of items; the collection could be implicit (e.g., bisection search for a square root) or explicit (e.g., checking whether a student record is in a stored collection of data)

polynomial complexity

most common is quadratic, from nested loops or recursive function calls

def isSubset(L1, L2):
    for e1 in L1:
        matched = False
        for e2 in L2:
            if e1 == e2:
                matched = True
                break
        if not matched:
            return False
    return True
# outer loop executes len(L1) times; each iteration executes the inner loop up to len(L2) times
# worst case: O(len(L1) * len(L2)); if the lists are the same length n, O(n^2)

loop invariant

property that holds true at each stage of an algorithm; e.g., for selection sort: given the prefix L[0:i] and the suffix L[i:len(L)], the prefix is sorted and no element of the prefix is larger than the smallest element of the suffix (base case: the empty prefix satisfies this; induction step: moving the smallest element of the suffix to the end of the prefix preserves it; on exit the prefix is the entire list, so the list is sorted)

monkey sort (aka bogosort, stupid sort, slowsort, permutation sort, shotgun sort)

randomly shuffle the elements of a list, check whether they are in order, and if not, shuffle again; best case O(len(L)) (one pass confirms an already-sorted list) and worst case unbounded
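A minimal sketch, shuffling in place with the standard random module:

import random

def bogo_sort(L):
    # keep shuffling until no adjacent pair is out of order
    while any(L[i] > L[i + 1] for i in range(len(L) - 1)):
        random.shuffle(L)
    return L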

exponential complexity

recursive functions with more than one recursive call for each size of the problem; the most expensive complexity class, so in practice you often look for approximate solutions instead

def genSubsets(L):
    if len(L) == 0:
        return [[]]                # list containing the empty list
    smaller = genSubsets(L[:-1])   # all subsets without the last element
    extra = L[-1:]                 # a list of just the last element
    new = []
    for small in smaller:
        new.append(small + extra)  # for each smaller subset, add one that includes the last element
    return smaller + new           # combine subsets without and with the last element
# for a set of k integers there are 2^k subsets (each element is in or out)
# takes 2^(n-1) + 2^(n-2) + ... + 2^0 steps, i.e. O(2^n)

What factors should be considered when building an efficient program?

time and space efficiency (tradeoff)

How can you measure the efficiency of a program?

time it, count the number of operations, or determine its order of growth

Law of Multiplication for O()

used with nested statements/loops; ex.

for i in range(n):
    for j in range(n):
        print('a')
# O(n) * O(n) = O(n*n) = O(n^2)

Law of Addition for O()

used with sequential statements; ex.

for i in range(n):
    print('a')
for j in range(n*n):
    print('b')
# O(n) + O(n*n) = O(n + n^2) = O(n^2) because of the dominant term

What do you want in order to measure the efficiency of a program?

a way to evaluate the algorithm itself, scalability (how the running time changes as the problem size grows), and efficiency expressed in terms of input size (the input could be an integer, the length of a list, etc.; when a function has multiple parameters, you decide which one measures the size)

When does it make sense to sort a list and then do a binary search?

A linear search is more efficient than sorting and then searching, except when you sort the list once and then do many searches. Sorting once costs O(n log n), after which each search is O(log n), so k searches cost O(n log n + k log n); this beats k linear searches at O(kn) once k is large. The many searches amortize (write off) the initial sorting cost.

Why is timing programs not the best method?

Although running time varies between algorithms, it also varies between implementations and computers, and timings on small inputs do not predict behavior on large ones.

Why is counting operations not the best method?

Although the count depends on the algorithm and is independent of which computer runs it, it still depends on the implementation, and there is no real definition of which operations to count. Unlike timing, counting does yield a relationship between the input size and the count, but both methods vary for different inputs.

How can you combine complexity classes?

Analyze statements inside the functions and apply rules, focusing on the dominant term.

How can you count operations?

Assuming that operations, comparisons, assignments, and accesses to objects in memory take constant time, count the number of operations executed as a function of the size of the input.

def ctof(c):
    return c*9.0/5 + 32
# 3 operations

def mysum(x):
    total = 0
    for i in range(x + 1):
        total += i
    return total
# 1 + 3x operations

Can iterative and recursive factorial implementations be the same order of growth?

Even though they may take different amounts of time to run, they can be the same order of growth (e.g., both linear), as in the sketch below.
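A sketch of both versions; each performs n multiplications, so both are O(n):

def fact_iter(n):
    result = 1
    for i in range(1, n + 1):      # loop body runs n times
        result *= i
    return result

def fact_recur(n):
    if n <= 1:
        return 1
    return n * fact_recur(n - 1)   # n recursive calls, one multiplication each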

When is a bisection search finished?

For n elements, the list is cut to n/2, then n/4, then n/8, ... elements, so you are finished when 1 = n/2^i, i.e. after i = log2(n) splits. The complexity is O(log n) where n is len(L).

How can you access a location in memory in constant time? (linear search)

If the list is all integers and each element has a fixed length (e.g., 4 bytes), the i-th element is at address base + 4i (e.g., with base 1000, element 3 is at 1000 + 4*3 = 1012). If the list is heterogeneous, you still go directly to the i-th location in memory, but you then follow a pointer or reference stored there to the actual object.

How can you time programs?

Import the time module (brings its definitions into your own file). Start the clock, call the function, and stop the clock; then compare the two programs.

import time

def c_to_f(c):
    return c*9/5 + 32

t0 = time.perf_counter()        # start the clock; perf_counter() replaces the removed time.clock()
c_to_f(100000)                  # call the function
t1 = time.perf_counter() - t0   # stop the clock
print("t =", t1, "s")

What is a way to code merge sort?

def merge(left, right):
    result = []
    i, j = 0, 0
    while i < len(left) and j < len(right):
        if left[i] < right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    while i < len(left):       # copy any remaining elements of left
        result.append(left[i])
        i += 1
    while j < len(right):      # copy any remaining elements of right
        result.append(right[j])
        j += 1
    return result
# complexity of merge is O(len(left) + len(right)), i.e. O(len(longer list)): linear

def merge_sort(L):
    if len(L) < 2:
        return L[:]
    else:
        middle = len(L)//2
        left = merge_sort(L[:middle])
        right = merge_sort(L[middle:])
        return merge(left, right)
# first recursion level: n/2 elements in each list, O(n) + O(n) where n is len(L); second level: n/4 elements and two merges, still O(n)
# halving the list gives O(log n) levels, so the overall complexity is O(n log n) where n is len(L)

merge sort

divide and conquer approach; if a list is of length 0 or 1, it is sorted; otherwise split it into two lists until you have sorted lists of length 1, merge sublists by looking at the first element of each pair and moving the smaller to the end of the result, and copy the remaining elements over when one list is empty; efficient because at each step you only need to compare the first elements of the two lists and add the smaller to the new list

logarithmic complexity

efficient algorithms; complexity grows as the log of the size of one of its inputs; when there are no function calls, look only at the loops (ignoring loops that run a constant number of times); ex. bisection/binary search of a sorted list; ex.

def intToStr(i):
    digits = '0123456789'
    if i == 0:
        return '0'
    result = ''
    while i > 0:
        result = digits[i%10] + result
        i = i//10
    return result
# the loop runs as many times as i can be divided by 10: log base 10 of i
# linear in the number of digits of i, logarithmic in the magnitude of i

log-linear complexity

O(n log n); ex. merge sort

linear complexity

ex. search a list in sequence to see if an element is present, add the characters of a string; if the number of operations inside a loop is constant and the number of times around the loop is n, it has linear complexity

def addDigits(s):
    val = 0
    for c in s:
        val += int(c)
    return val
# O(len(s))

selection sort

extracts the minimum element in the first step and swaps it with the element at index 0; continues to find the minimum element in the remaining sublist and put it at the next index; keeps the left side of the list sorted while all remaining elements are at least as big as the first i elements; concentrates on putting the smallest elements in their place first, unlike bubble sort

def selection_sort(L):
    suffixSt = 0
    while suffixSt != len(L):
        for i in range(suffixSt, len(L)):
            if L[i] < L[suffixSt]:
                L[suffixSt], L[i] = L[i], L[suffixSt]
        suffixSt += 1
# outer loop executes len(L) times; inner loop executes len(L) - i times; O(n^2)

How do you find the number of steps in a nested program?

for x in L:
    for y in L:
        multiples.append(x*y)
# multiply by n for the first loop; each iteration of the outer loop is 3n; add 1 for the final check to get n(3n + 1), or 3n^2 + n

for elt in L1:
    if elt in L2:
        intersection.append(elt)
# multiply by n for the first loop; each iteration is n, plus 1 for the append statement (when the element is the last element of L2); add 1 for the final check to get n^2 + 2n

orders of growth

goals: evaluate efficiency when the input is very big, express the growth of run time as input size grows, put an upper bound on that growth, and look at the largest factors in run time; the bound expresses the order of growth and does not need to be precise or exact

