Algorithm analysis
We focus on worst-case running time because
1. Easier to analyze 2. Crucial to applications such as scientific computation, games, finance, and robotics
Primitive operation 1
Assigning a value to a variable
Example of greedy
Dijkstra's, Kruskal's, and Prim's algorithms for graphs; Huffman encoding for file compression; coin changing.
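The coin-changing example above can be sketched in C. This is a minimal illustration, not code from the deck; the function name `greedy_change` is made up, and the greedy strategy is only guaranteed optimal for canonical coin systems such as 25/10/5/1:

```c
#include <assert.h>

/* Greedy coin changing: at each step take as many of the largest
   remaining coin as possible, and never reconsider that choice. */
int greedy_change(int amount, const int *coins, int ncoins) {
    int used = 0;
    for (int i = 0; i < ncoins; i++) {   /* coins sorted in descending order */
        used += amount / coins[i];       /* take as many of this coin as fit */
        amount %= coins[i];              /* remainder still to be paid */
    }
    return used;                         /* total number of coins used */
}
```

For the amount 63 with coins {25, 10, 5, 1}, the greedy choice sequence is 25, 25, 10, 1, 1, 1, i.e. 6 coins.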
General Rules for time complexity 1
Very large input size
Experimental Studies step 1.
Write a program implementing an algorithm to be investigated.
recursive algorithm
a function calls itself repeatedly until the base condition (stopping condition) is satisfied.
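A minimal sketch of recursion in C (the function and its name are illustrative, not part of the deck):

```c
/* Recursion: factorial calls itself on a smaller input
   until the base (stopping) condition n <= 1 is satisfied. */
unsigned long factorial(unsigned n) {
    if (n <= 1) return 1;            /* base condition stops the recursion */
    return n * factorial(n - 1);     /* recursive call on a smaller problem */
}
```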
Primitive operation 4
comparing two numbers
running time function
f(n)
Branch and bound step 1
find an optimal solution by keeping track of the best solutions found so far
The efficiency analysis concentrates
on basic operations
General rule for time complexity for calculating fragments
sum all the running time of all fragments
Two approaches in dynamic programming
top-down and bottom-up
An Algorithm is
a way of solving a problem
Primitive operation 2
Calling a function
Experimental Studies step 2.
Run the program with inputs of varying sizes and compositions
Divide and conquer
approach attempts to reduce a single large problem to multiple simpler, independent subproblems, solving the subproblems and then combining their solutions into the solution for the original problem.
Greedy
approach works by making a decision that seems the most promising at that moment and never reconsidering this decision. Greedy algorithms always choose a local optimum and only hope to end up with the global optimum.
Backtracking algorithms are like brute-force algorithms; however,
backtracking algorithms are distinguished by the way in which the space of possible solutions is explored. Sometimes a backtracking algorithm can detect that an exhaustive search is unnecessary and can therefore perform much better.
An analysis of an algorithm should
be done before it is implemented
Example of divide and conquer
binary search, merge sort algorithms.
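Binary search, named above, can be sketched in C as a divide-and-conquer routine; this is an illustrative implementation, not one taken from the deck:

```c
/* Binary search on a sorted array: each comparison discards half of
   the remaining range, giving O(log n) time.
   Returns the index of key, or -1 if key is absent. */
int binary_search(const int *a, int n, int key) {
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;   /* avoids overflow of (lo + hi) / 2 */
        if (a[mid] == key) return mid;
        if (a[mid] < key) lo = mid + 1; /* key can only be in the right half */
        else              hi = mid - 1; /* key can only be in the left half */
    }
    return -1;                          /* key not present */
}
```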
Dynamic programming algorithms have a better running time than
brute force
Changing the hardware or software environment affects the running time
by a positive constant factor C
Basic operation 2
comparisons (<, >, ≤, ≥, ==, !=)
g(n) is
considered a classification of the algorithm
An algorithm may be
implemented in any programming language
Primitive operation 5
indexing into an array
big-O asymptotic
is a term used to classify an algorithm by its growth rate
C
is a positive constant
How backtracking proceeds
It finds a solution to the first subproblem and then attempts to recursively solve the other subproblems based on the next possible solution to the first subproblem, and so on. Backtracking terminates when there are no more solutions to the first subproblem.
"intelligent backtracking"
keeps track of the dependencies between subproblems and only re-solves those that depend on earlier solutions that have changed
It characterizes the number of steps needed to do a basic operation
like adding two numbers, assigning a value to a variable, or comparing two numbers
Most algorithms transform input objects into
output objects
Dynamic programming step 1
partitioning a problem into overlapping subproblems
example of brute force
sequential search of a sorted array, finding a Hamiltonian circuit
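The sequential-search example above can be sketched in C; the function name is illustrative:

```c
/* Sequential (linear) search: brute force, examining every element
   in turn until the key is found; O(n) comparisons in the worst case. */
int sequential_search(const int *a, int n, int key) {
    for (int i = 0; i < n; i++)
        if (a[i] == key) return i;   /* found: report its index */
    return -1;                       /* examined everything, key absent */
}
```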
By inspecting the pseudocode we can determine
the maximum number of basic operations executed by an algorithm as a function of the input size
We assume that it takes a constant amount of time
to access any cell in memory in the RAM (random-access machine) model.
General Rules for time complexity 2
worst case scenario
Limitation of experiments 3
In order to compare two algorithms, the same hardware and software environments must be used.
Calculate time complexity of if(con) S1 else S2
O(1)
General rule for time complexity for calculating fragments example int a; a = 5; a++;
O(1) is a constant
General rule for time complexity for calculating fragments example for(i=0;i<n;i++) { //simple statement }
O(n)
Experimental Studies step 3
Use a function, like the built-in clock() function, to get an accurate measure of the actual running time.
Randomized
algorithms are algorithms that make some random (or pseudo-random) choices.
Example Dynamic programming
Fibonacci numbers, Floyd's all-pairs shortest paths algorithm, matrix chain multiplication, longest common subsequence, activity scheduling problem.
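The Fibonacci example above, in its top-down (memoized) form, can be sketched in C. This is an illustrative sketch (the function name `fib_memo` and the convention of -1 marking an unsolved subproblem are assumptions, not from the deck):

```c
/* Top-down dynamic programming: memoize Fibonacci so that each
   overlapping subproblem is solved only once, giving O(n) time
   instead of the exponential time of the naive recursion. */
long long fib_memo(int n, long long *memo) {
    if (n <= 1) return n;                    /* base cases F(0)=0, F(1)=1 */
    if (memo[n] != -1) return memo[n];       /* reuse the stored solution */
    memo[n] = fib_memo(n - 1, memo) + fib_memo(n - 2, memo);
    return memo[n];
}
```

The caller initializes every memo slot to -1 (unsolved) before the first call.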
Randomized example
randomized quick sort, pseudo-random number generator, probabilistic algorithms.
Dynamic Programming step 2
recursively solving the subproblems and memoizing their solutions to avoid solving the same subproblems repeatedly
Primitive operation 7
returning from a function
Average case time is often
difficult to determine
A correct Algorithm should
return the correct output for any acceptable input data, and halt after returning it
General rule for time complexity for calculating fragments example for(n) for(n) nested loop
O(n^2)
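The nested-loop card above can be made concrete by counting the basic operations directly; the counter function below is illustrative:

```c
/* A doubly nested loop over n executes its body n * n times,
   which is why the fragment is O(n^2). */
int count_nested(int n) {
    int count = 0;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            count++;         /* one basic operation per inner iteration */
    return count;            /* equals n * n */
}
```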
7 Classification of Algorithms
1. divide and conquer 2. dynamic programming 3. greedy 4. brute force 5. backtracking 6. branch and bound 7. randomized
Asymptotic approach
1. Get a relation between the number of its basic operations and the size or magnitude of the input data 2. Provide an upper bound on the running-time function
General rule for time complexity for polynomial
Drop lower-order terms and constant multipliers
Primitive operation 3
Performing an arithmetic operation
Experimental Studies step 4
Plot the run-time results and estimate the cost of the algorithm.
Branch and bound step 2
The algorithm traverses a spanning tree of the solution space and prunes the solution tree, thereby reducing the number of solutions to be considered.
How do you calculate a single loop
The number of iterations × the number of basic operations inside the loop
n
expresses the size of the input
g(n)
expresses growth rate
Calculate time complexity of a double loop int sum = 0; for (int i = 1; i <= n; i++) for (int j = 1; j <= n; j++) sum = sum + 1;
f(n) = 2n*n + 1 = 2n^2 + 1 = O(n^2)
Calculate time complexity of int sum = 0; for (int i = 1; i <= n; i++) sum = sum + 1;
f(n) = 2n + 1 = O(n)
Constant function
f(n) =C
big-O asymptotic is denoted as
f(n)=O(g(n))
Any other constant function f(n) = c can be written as
f(n)=cg(n)
Does changing the hardware or software environment by a positive constant factor C alter an algorithm's growth rate? True or false
false
Branch and bound Algorithms
find an optimal solution by keeping track of the best solutions found so far
Two types of algorithms
Iterative and recursive
An algorithm can be considered as an
abstraction of a computer program
The number of operations of an algorithm is bounded by
Cg(n)
Theoretical Analysis help by reason 1
Characterizes running time of an algorithm as a function of the input size, n.
Limitation of experiments 1
It is necessary to implement the algorithm, which may be difficult.
Calculate time complexity example if(cond) for(n) else for(n) for(n)
O(n^2)
Calculate time complexity example int a; a = 5; for(n) followed by nested for(n) for(n)
O(n^2)
General rule for time complexity for polynomial example: t(n) = 17n^4 + 3n^3 + 4n + 8 =
O(n^4)
Two ways an algorithm can solve a problem
1. Performing an unambiguous sequence of instructions in a finite amount of time and then halting 2. Transforming input data into output data in finite time
Limitation of experiments 2
Results may not be indicative of the running times on other inputs not included in the experiment.
Theoretical Analysis help by reason 3
Takes into account all possible acceptable inputs (often implicitly).
Pseudocode
a high-level description of an algorithm instead of an implementation in a particular programming language
Backtracking algorithm
algorithm views the problem to be solved as a sequence of decisions and systematically considers all possible outcomes for each decision to solve the overall problem.
Brute Force
algorithms use unsophisticated approaches to solve a given problem. Typically, they are useful only for small domains because of the total cost of examining all possible solutions.
Basic operation 3
arithmetic operations (+, ∗, −, /,taking to power, square root, etc.)
The number of input data items (= n) is directly
connected with the number of operations performed by an algorithm, and is expressed as the running time function
Primitive operation 6
following an object reference
The efficiency of an algorithm can be measured by
how much computer time and memory its implementation uses.
Theoretical Analysis help by reason 2
Allows us to evaluate the speed of an algorithm independent of the hardware/software environment
Asymptotic Notation
Notations that allow us to analyze an algorithm's running time by describing its behavior as the input size increases; this behavior is also known as the algorithm's growth rate.
Branch and bound examples
Linear programming and optimization problems
Basic operation 1
data interchanges (swaps)
Classification of algorithms based on
design techniques
Example of backtracking
topological sort, Depth First Search, n-queens problem
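The n-queens example named above can be sketched as a backtracking search in C; this is an illustrative sketch (function names are assumptions), counting solutions rather than printing boards:

```c
/* col[r] holds the column of the queen placed in row r.
   A candidate column c for row `row` is safe if no earlier queen
   shares its column or either diagonal. */
static int safe(const int *col, int row, int c) {
    for (int r = 0; r < row; r++)
        if (col[r] == c ||                 /* same column */
            col[r] - r == c - row ||       /* same "/" diagonal */
            col[r] + r == c + row)         /* same "\" diagonal */
            return 0;
    return 1;
}

/* Backtracking: place queens row by row; abandon a branch as soon as
   a placement conflicts, so an exhaustive search is often avoided. */
int nqueens(int *col, int row, int n) {
    if (row == n) return 1;                /* all queens placed: 1 solution */
    int solutions = 0;
    for (int c = 0; c < n; c++)
        if (safe(col, row, c)) {
            col[row] = c;                  /* make the decision for this row */
            solutions += nqueens(col, row + 1, n);
        }                                  /* undo is implicit: col[row] is
                                              overwritten by the next choice */
    return solutions;
}
```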
Count how many primitive operations are executed and
use that number, t, as a measure of the running time of the algorithm
the running time function f(n) is complex, so to classify an algorithm we
use its upper bound by a simpler function, g(n), that expresses the growth rate
Dynamic programming step 3
using an optimal structure approach to be sure that the optimal solutions of local subproblems lead to the optimal solution of the global problem
Want to establish algorithm efficiency
using the asymptotic notation approach
Iterative
using loop statements such as for, while, and do-while to repeat the same steps
The running time of an algorithm typically grows
with the input size