Big O Notation: Time & Space Efficiency


Some General Tips on Writing Efficient Code with a More Efficient Big-O:

- Try to stay away from nested loops. (This is meant to be within reason; there are plenty of cases where a nested loop is a reasonable solution.)
- Overusing variables can also weigh down runtime: if you don't need a variable, don't create it. Sandwich coding is an example of this.
- Prefer hashes over arrays when you need fast lookups.
- You can often use smaller datatypes to save memory: bytes before integers, integers before floats.

4 Important Rules of Big O (Gayle McDowell)

1) If you have two different steps in your algorithm, you add up the runtimes: O(a) + O(b) = O(a+b)
2) Drop constants: O(n) + O(n) = O(2n) => O(n)
3) If you have different inputs, use different variables to represent them. Example: if a function takes two inputs, array a and array b, a single variable n would be ambiguous, so if the function does linear work on b for every element of a, you describe the runtime as O(a*b).
4) Drop non-dominant terms: O(n) + O(n^2) = O(n^2). The n^2 term drives how the runtime changes overall. (See the sketch below for rules 1 and 4 in action.)
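As an illustration of rules 1 and 4, here is a minimal C sketch (the function demoRules and its contents are hypothetical, not from the source): a single loop followed by a nested loop.

#include <stdio.h>

/* Hypothetical example. Rule 1: add the two steps -> O(n) + O(n^2).
   Rule 4: drop the non-dominant term -> O(n^2) overall. */
void demoRules(const int *arr, int n) {
    for (int i = 0; i < n; i++) {        /* step 1: a single pass, O(n) */
        printf("%d ", arr[i]);
    }
    for (int i = 0; i < n; i++) {        /* step 2: a nested pass, O(n^2) */
        for (int j = 0; j < n; j++) {
            printf("%d ", arr[i] + arr[j]);
        }
    }
}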

Time and Space Complexity Tradeoff

A good algorithm is one that takes less time and less space, but this is not always possible. There is a trade-off between time and space: if you want to reduce the time, the space may increase; similarly, if you want to reduce the space, the time may increase. So you often have to compromise on either space or time, as in the sketch below.
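As a minimal sketch of this trade-off (Fibonacci is an assumed example, not from the card): a plain recursive Fibonacci uses almost no extra memory but takes exponential time, while a memoized version spends O(n) extra memory to bring the time down to O(n).

long long fibSlow(int n) {                /* roughly O(2^n) time, O(1) extra space (plus recursion stack) */
    if (n < 2) return n;
    return fibSlow(n - 1) + fibSlow(n - 2);
}

long long memo[100];                      /* extra O(n) space; assumes n < 100 */

long long fibFast(int n) {                /* O(n) time, O(n) extra space */
    if (n < 2) return n;
    if (memo[n] != 0) return memo[n];     /* reuse a stored result instead of recomputing */
    return memo[n] = fibFast(n - 1) + fibFast(n - 2);
}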

What are the differences between the best case, average case, and worst case of an algorithm?

An algorithm can take different amounts of time for different inputs. It may take 1 second for some input and 10 seconds for another. Example) Suppose we are doing a linear search on the array [1, 2, 3, 4, 5] for a value k. The running time depends on what the input k is.

Best case: This is the lower bound on the running time of an algorithm. We must know the case that causes the minimum number of operations to be executed. In the example above, if we are checking whether "1" is present in the array, then after only one comparison we find that the element is present. This is the best case for the algorithm.

Average case: We calculate the running time for all possible inputs, sum all the calculated values, and divide the sum by the total number of inputs. We must know (or predict) the distribution of cases.

Worst case: This is the upper bound on the running time of an algorithm. We must know the case that causes the maximum number of operations to be executed. In our example, the worst case is checking whether the element "6" is present in [1, 2, 3, 4, 5]: the loop's if-condition executes 5 times, and the algorithm then returns "0" (not found).
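A minimal C sketch of this example (linearSearch is an assumed helper, not from the card): searching [1, 2, 3, 4, 5] for 1 hits the best case (one comparison), while searching for 6 hits the worst case (five comparisons, then "not found").

int linearSearch(const int *arr, int len, int k) {
    for (int i = 0; i < len; i++) {
        if (arr[i] == k) return 1;   /* found; best case if k is the first element */
    }
    return 0;                        /* not found; worst case, len comparisons made */
}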

Why is Analysis of Algorithms Crucial in Computer Science?

An important part of solving algorithmic problems is efficiency. As your dataset grows, so will the time it takes to run functions on it. Understanding the efficiency of an algorithm is important for building software that scales. As programmers we code with the future in mind, and to do that, efficiency is key.

What is the Big O of Arrays?

Answer: the Big O of searching an array is O(n). Why? If we are searching for an element in an array, what is the maximum number of elements we would have to look through in order to find our value? The answer is the length of the array, or n.

What is the Big O of Hashes?

Answer: the Big O of a hash lookup is O(1). Why? A hash uses a key to look up the value you are searching for. Let's say it's a function find_value(key). Every time you look for a value, the lookup only has to run once to find the answer, so the time complexity is O(1).
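A minimal sketch of why hash lookup is O(1), under simplifying assumptions (the table size, hash function, and function names are illustrative, and collisions are ignored; real hash tables must handle them): the key is turned into an array index, so the lookup is a single direct access regardless of how many values are stored.

#define TABLE_SIZE 101

int table[TABLE_SIZE];                               /* values, indexed by hashed key */

int hash_key(int key) { return key % TABLE_SIZE; }   /* key -> array index */

void store_value(int key, int value) { table[hash_key(key)] = value; }

int find_value(int key) { return table[hash_key(key)]; }  /* one direct access: O(1) */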

What is the Big O notation of Linear Search vs. Binary Search? Which is more efficient? Let's say we have:
const array = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
const search_digit = 10;

Answer: The worst-case Big O of linear search is O(n), while the worst-case Big O of binary search is O(log(n)). Therefore, binary search is more efficient, since the complexity of linear search grows much faster than that of binary search.

Why? The linear search algorithm compares each element of the array to search_digit, and returns true when it finds it. In general, linear search will take n operations in its worst case (where n is the size of the array).

For binary search, we first compare search_digit with the middle element of the array, which is 5. Since 5 is less than 10, we continue looking for search_digit among the array elements greater than 5, halving the range the same way until we reach the desired element 10. That took approximately four operations, and this was the worst case for binary search. This shows a logarithmic relation between the number of operations performed and the total size of the array: number of operations = log2(10) ≈ 4. For an array of size n, the number of operations performed by binary search is log(n).
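A minimal C implementation of binary search (an assumed sketch, not from the card; the array must be sorted): each step halves the remaining range, so the worst case takes O(log n) comparisons.

int binarySearch(const int *arr, int len, int target) {
    int lo = 0, hi = len - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;          /* middle of the current range */
        if (arr[mid] == target) return mid;    /* found */
        if (arr[mid] < target) lo = mid + 1;   /* keep the upper half */
        else hi = mid - 1;                     /* keep the lower half */
    }
    return -1;                                 /* not found */
}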

Why is writing efficient code important at Palantir?

Efficiency and performance are critical for our software. Our back-end systems need to organize data efficiently and enable efficient queries. Different types of services access our back-end systems by running algorithms that extract insights from large amounts of data, and our front-end systems provide different types of visualizations in real time. It's imperative that both code and data are as efficient as they can be, to create a responsive experience for users.

Palantir engineers need to determine which components in a large system are critical for efficiency, and where we can make trade-offs. When should we replicate data in multiple storage formats that each optimize for a different type of query, versus optimizing for space? How can we take advantage of servers with very high memory by keeping part of the data in memory for fast access: what data do we keep, how will it be synchronized, and how can it be accessed most efficiently? How can we detect and optimize bottlenecks: when is processor-time optimization critical, and when is performance I/O bound?

Big O Example #1 on Algorithms: Find an O(1) Solution. Finding the sum of the first n numbers. For example, if n = 4, then our output should be 1 + 2 + 3 + 4 = 10. If n = 5, then the output should be 1 + 2 + 3 + 4 + 5 = 15.

O(1) Solution: using the closed-form formula n(n+1)/2, this takes some constant time, so the Big O is O(1).

int findSum(int n) { return n * (n + 1) / 2; }

Big O of Two Nested For Loops Iterating Across Two Different Arrays A and B

O(A*B), where A and B are the lengths of the two arrays. Note that there is no single n here: with two different inputs you must define a separate variable for each (see Rule 3 above). A sketch follows.
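A minimal sketch (countCommon is a hypothetical function): for every element of A we scan all of B, so the total work is A*B inner iterations.

int countCommon(const int *a, int lenA, const int *b, int lenB) {
    int count = 0;
    for (int i = 0; i < lenA; i++) {       /* runs A times */
        for (int j = 0; j < lenB; j++) {   /* runs B times per element of A */
            if (a[i] == b[j]) count++;
        }
    }
    return count;                          /* total work: A*B -> O(A*B) */
}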

Big O of a Basic For Loop?

O(n)
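For example (an illustrative sketch, not from the card): one pass over n elements, so the work grows linearly with n.

#include <stdio.h>

void printAll(const int *arr, int n) {
    for (int i = 0; i < n; i++) {   /* n iterations -> O(n) */
        printf("%d\n", arr[i]);
    }
}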

Big O of an If-Statement nested in a for loop?

O(n)
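For example (illustrative sketch): the if-check costs O(1) per iteration, so the loop is still O(n) overall.

int countEven(const int *arr, int n) {
    int count = 0;
    for (int i = 0; i < n; i++) {      /* n iterations */
        if (arr[i] % 2 == 0) count++;  /* constant-time check each time */
    }
    return count;                      /* O(n) overall */
}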

Big O of Two For Loops (Not Nested)

O(n) + O(n) = O(2n) => O(n). Remember to drop constants!
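For example (illustrative sketch): two sequential passes over the same array.

void twoPasses(int *arr, int n) {
    for (int i = 0; i < n; i++) arr[i] *= 2;   /* first pass: O(n) */
    for (int i = 0; i < n; i++) arr[i] += 1;   /* second pass: O(n) */
}                                              /* O(n) + O(n) = O(2n) => O(n) */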

Big O Example #1 on Algorithms: Find an O(n) Solution. Finding the sum of the first n numbers. For example, if n = 4, then our output should be 1 + 2 + 3 + 4 = 10. If n = 5, then the output should be 1 + 2 + 3 + 4 + 5 = 15.

O(n) Solution:
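A minimal O(n) sketch, assuming the same findSum signature as the O(1) solution above: loop from 1 to n and accumulate, which takes n additions.

int findSum(int n) {
    int sum = 0;
    for (int i = 1; i <= n; i++) {   /* n iterations -> O(n) */
        sum += i;
    }
    return sum;
}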

Big O of Nested For Loops?

O(n^2)
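For example (illustrative sketch): the inner loop runs n times for each of the n outer iterations.

#include <stdio.h>

void printAllPairs(int n) {
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            printf("(%d, %d)\n", i, j);   /* n * n iterations -> O(n^2) */
        }
    }
}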

Big O of Printing Ordered Pairs: a nested for loop where the outer loop runs from 0 to N and the inner loop runs from i to N

O(n^2). The inner loop runs n, n-1, ..., 1 times as i advances, so the total work is n(n+1)/2 iterations, which is O(n^2) after dropping constants and non-dominant terms.
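A minimal sketch of the loop shape this card describes (printOrderedPairs is an assumed name):

#include <stdio.h>

void printOrderedPairs(int n) {
    for (int i = 0; i < n; i++) {
        for (int j = i; j < n; j++) {      /* starts at i: runs n, n-1, ..., 1 times */
            printf("(%d, %d)\n", i, j);
        }
    }
}                                          /* n(n+1)/2 iterations -> O(n^2) */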

Big O Example #1 on Algorithms: Find an O(n^2) Solution. Finding the sum of the first n numbers. For example, if n = 4, then our output should be 1 + 2 + 3 + 4 = 10. If n = 5, then the output should be 1 + 2 + 3 + 4 + 5 = 15.

O(n^2) Solution:
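A minimal O(n^2) sketch, assuming the same findSum signature as above; it deliberately adds each i one unit at a time, so the total work is 1 + 2 + ... + n inner iterations.

int findSum(int n) {
    int sum = 0;
    for (int i = 1; i <= n; i++) {
        for (int j = 1; j <= i; j++) {   /* i inner iterations */
            sum += 1;                    /* add i one unit at a time */
        }
    }
    return sum;                          /* n(n+1)/2 increments -> O(n^2) */
}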

Methods to Optimize Your Algorithms (Gayle McDowell)

Remember! It's okay for your first brute-force algorithm to be slow. These are supposed to be challenging questions, and sometimes finding the brute force is hard enough. State the runtime, discuss it, and work to optimize it with these methods (BUD):

- Bottlenecks: Example) making searches faster. Solution: try putting things in a hash map, etc. Think about what else you can do!
- Unnecessary work: find it and remove it.
- Duplicated work: find it and remove it.

On a lot of problems, there are optimizations involving hash tables and precomputation; a sketch of removing duplicated work via precomputation follows.
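A minimal sketch of removing duplicated work through precomputation (the functions and the "share of the total" task are assumed examples, not from the source):

#include <stdio.h>

/* Slow: recomputes the array total for every element -> O(n) work done n times = O(n^2). */
void printSharesSlow(const int *arr, int n) {
    for (int i = 0; i < n; i++) {
        int total = 0;
        for (int j = 0; j < n; j++) total += arr[j];   /* duplicated work */
        printf("%f\n", (double)arr[i] / total);        /* assumes total != 0 */
    }
}

/* Fast: precompute the total once -> O(n) overall. */
void printSharesFast(const int *arr, int n) {
    int total = 0;
    for (int j = 0; j < n; j++) total += arr[j];       /* computed once */
    for (int i = 0; i < n; i++) {
        printf("%f\n", (double)arr[i] / total);
    }
}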

Definition: Space Complexity

Space complexity of an algorithm quantifies the amount of space or memory the algorithm needs to run, as a function of the length of the input, i.e. the total space used or needed by the algorithm for its working, across various input sizes.
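For example (an assumed illustration, not from the card): reversing an array can be done with O(n) extra space or O(1) extra space.

#include <stdlib.h>

int *reverseCopy(const int *arr, int n) {   /* O(n) extra space: allocates a second array */
    int *out = malloc(n * sizeof *out);
    for (int i = 0; i < n; i++) out[i] = arr[n - 1 - i];
    return out;                             /* caller is responsible for free() */
}

void reverseInPlace(int *arr, int n) {      /* O(1) extra space: swaps within the input */
    for (int i = 0; i < n / 2; i++) {
        int tmp = arr[i];
        arr[i] = arr[n - 1 - i];
        arr[n - 1 - i] = tmp;
    }
}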

Definition - Analyzing the Efficiency of Code

The ability to analyze and improve the efficiency of code so that it runs with optimal complexity, creating systems that store and transmit data efficiently.

Definition: Time Complexity

The time complexity is the number of operations an algorithm performs to complete its task with respect to input size (considering that each operation takes the same amount of time). The algorithm that performs the task in the smallest number of operations is considered the most efficient one.

What are the 3 Types of Asymptotic Notation used to represent the time complexity of an algorithm? What are the differences between each?

Θ Notation (theta): The Θ notation gives a tight bound on an algorithm, i.e. it defines both an upper bound and a lower bound, and your algorithm's running time lies between these levels. So, if a function is g(n), then the theta representation is shown as Θ(g(n)) and the relation is shown as:

Θ(g(n)) = { f(n): there exist positive constants c1, c2 and n0 such that 0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0 }

Big O Notation: The Big O notation defines the upper bound of an algorithm, i.e. your algorithm can't take more time than this. In other words, big O notation denotes the maximum time taken by an algorithm, or the worst-case time complexity of an algorithm; it is the most used notation for the time complexity of an algorithm. So, if a function is g(n), then the big O representation of g(n) is shown as O(g(n)) and the relation is shown as:

O(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n0 }

Ω Notation (omega): The Ω notation denotes the lower bound of an algorithm, i.e. the time taken by the algorithm can't be lower than this. In other words, this is the fastest time in which the algorithm will return a result; it's the time taken by the algorithm when provided with its best-case input. So, if a function is g(n), then the omega representation is shown as Ω(g(n)) and the relation is shown as:

Ω(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ c·g(n) ≤ f(n) for all n ≥ n0 }

What are the Big O Notations for popular searching and sorting algorithms - Binary Search, Linear Search, Quick Sort, Selection Sort, Traveling Salesperson?

Binary Search: O(log n)
Linear Search: O(n)
Quick Sort: O(n * log n) (average case; O(n^2) in the worst case)
Selection Sort: O(n^2)
Travelling Salesperson (brute force): O(n!)

How can we make algorithms more efficient?

By reducing the number of iterations needed to complete the task relative to the size of the dataset; in other words, by writing algorithms with a more efficient Big O notation.

Definition: Big O Notation

Big O notation is used to represent time complexity. In computer science, big O notation classifies algorithms according to how their run time or space requirements grow as the input size grows. Big O defines the upper bound of an algorithm, i.e. your algorithm can't take more time than this. In other words, big O denotes the maximum time taken by an algorithm, or the worst-case time complexity of an algorithm, and it is the most used notation for the time complexity of an algorithm. So, if a function is g(n), then the big O representation of g(n) is shown as O(g(n)) and the relation is shown as:

O(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n0 }

Example: if f(n) = 2n² + 3n + 1 and g(n) = n², then for c = 6 and n0 = 1 we can say that f(n) = O(n²), since 2n² + 3n + 1 ≤ 2n² + 3n² + n² = 6n² for all n ≥ 1.

