Data Structures and Algorithms


_________ _______________ refer to how data is organized.

Data structures

Inserting a value at the end of a set is still the best-case scenario, but we still had to perform six steps for a set originally containing five elements. That is, we had to search all five elements before performing the final insertion step. Said another way: insertion into a set in a best-case scenario will take N + 1 steps for N elements. This is because there are N steps of search to ensure that the value doesn't already exist within the set, and then one step for the actual insertion.

In a worst-case scenario, where we're inserting a value at the beginning of a set, the computer needs to search N cells to ensure that the set doesn't already contain that value, and then another N steps to shift all the data to the right, and another final step to insert the new value. That's a total of 2N + 1 steps.
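To make the step counting concrete, here is a minimal Python sketch (my own illustration, not from the text) that counts the steps for inserting into an array-based set, modeled here as a plain list:

def set_insert_steps(a_set, value, index):
    steps = 0
    # Search: N steps to confirm the value isn't already in the set
    for item in a_set:
        steps += 1
        if item == value:
            return steps              # duplicate found; sets reject it
    # Shift: one step for each element at or after the insertion point
    steps += len(a_set) - index
    # Insert: one final step
    a_set.insert(index, value)
    steps += 1
    return steps

print(set_insert_steps(['a', 'b', 'c', 'd', 'e'], 'f', 5))   # 6 steps (N + 1)
print(set_insert_steps(['a', 'b', 'c', 'd', 'e'], 'f', 0))   # 11 steps (2N + 1)

The first call reproduces the best case (N + 1 = 6 steps for five elements), and the second reproduces the worst case (2N + 1 = 11 steps).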

To understand why this algorithm is called "O(log N)," we need to first understand what logarithms are.

Let's examine why algorithms such as binary search are described as O(log N). What is a log, anyway? Log is shorthand for logarithm. The first thing to note is that logarithms have nothing to do with algorithms, even though the two words look and sound so similar.

In Big O, we describe binary search as having a time complexity of:

O(log N). This type of algorithm is also described as having a time complexity of log time.

We can't simply label one algorithm a "22-step algorithm" and another a "400-step algorithm." This is because the number of steps that an algorithm takes cannot be pinned down to a single number. Let's take linear search, for example. The number of steps that linear search takes varies, as it takes as many steps as there are cells in the array. If the array contains twenty-two elements, linear search takes twenty-two steps. If the array has 400 elements, however, linear search takes 400 steps.

The more accurate way to quantify the efficiency of linear search is to say that linear search takes N steps for N elements in the array. Of course, that's a pretty wordy way of expressing this concept.

With linear search, if the value we're searching for is in the final cell or is greater than the value in the final cell, we have to inspect each and every element. For an array the size of 100, this would take one hundred steps.

When we use binary search, however, each guess we make eliminates half of the possible cells we'd have to search. In our very first guess, we get to eliminate a whopping fifty cells.

An ________ is simply a list of data elements.

array

With an array containing one hundred values, here are the maximum numbers of steps it would take for each type of search:

- Linear search: one hundred steps
- Binary search: seven steps
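Where does the seven come from? The maximum number of binary search steps is roughly the number of times 100 can be halved, which is log₂ 100 rounded up. A quick sanity check in Python (an illustrative snippet, not from the text):

import math

for n in [100, 1_000_000]:
    print(n, math.ceil(math.log2(n)))   # 100 -> 7 steps, 1000000 -> 20 steps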

In most programming languages, we begin counting the index at ______.

0

An algorithm is simply a particular process for solving a problem. For example, the process for preparing a bowl of cereal can be called an algorithm. The cereal-preparation algorithm follows these four steps (for me, at least):

1. Grab a bowl.
2. Pour cereal in the bowl.
3. Pour milk in the bowl.
4. Dip a spoon in the bowl.

Bubble Sort is a very basic sorting algorithm, and follows these steps:

1. Point to two consecutive items in the array. (Initially, we start at the very beginning of the array and point to its first two items.) Compare the first item with the second one.
2. If the two items are out of order (in other words, the left value is greater than the right value), swap them. (If they already happen to be in the correct order, do nothing for this step.)
3. Move the "pointers" one cell to the right. Repeat steps 1 and 2 until we reach the end of the array or any items that have already been sorted.
4. Repeat steps 1 through 3 until we have a round in which we didn't have to make any swaps. This means that the array is in order.

Each time we repeat steps 1 through 3 is known as a passthrough. That is, we "passed through" the primary steps of the algorithm, and will repeat the same process until the array is fully sorted. In the worked example, the passthrough continues: we compare the 7 and the 1; they're out of order, so we swap them. We then compare the 7 and the 3; they're out of order, so we swap them.

Constant Time vs. Linear Time

Now that we've encountered O(N), we can begin to see that Big O Notation does more than simply describe the number of steps that an algorithm takes as a hard number such as 22 or 400. Rather, it describes how many steps an algorithm takes based on the number of data elements that the algorithm is acting upon.

Another way of saying this is that Big O answers the following question: how does the number of steps change as the data increases?

So it turns out that for an array of five cells, the maximum number of steps that linear search would take is five. For an array of 500 cells, the maximum number of steps that linear search would take is 500.

Another way of saying this is that for N cells in an array, linear search will take a maximum of N steps. In this context, N is just a variable that can be replaced by any number.

An algorithm can be described as O(1) even if it takes more than one step. Let's say that a particular algorithm always takes three steps, rather than one—but it always takes these three steps no matter how much data there is.

Because the number of steps remains constant no matter how much data there is, this would also be considered constant time and be described by Big O Notation as O(1). Even though the algorithm technically takes three steps rather than one step, Big O Notation considers that trivial. O(1) is the way to describe any algorithm that doesn't change its number of steps even when the data increases.
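For example, here is a hypothetical function (my own, purely for illustration) that always takes exactly three steps (three reads by index) no matter how large the array is, and is therefore O(1):

def first_middle_last(arr):
    # Always three steps: three index reads, regardless of the array's size
    return arr[0], arr[len(arr) // 2], arr[-1]

print(first_middle_last([10, 20, 30, 40, 50]))   # (10, 30, 50)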

For an array of fewer than one hundred elements, the O(N) algorithm takes fewer steps than the O(1) 100-step algorithm. At exactly one hundred elements, the two algorithms take the same number of steps (100). But here's the key point: for all arrays greater than one hundred, the O(N) algorithm takes more steps.

Because there will always be some amount of data at which the tides turn, and O(N) takes more steps from that point until infinity, O(N) is considered to be, on the whole, less efficient than O(1). The same is true for an O(1) algorithm that always takes one million steps. As the data increases, there will inevitably be a point at which O(N) becomes less efficient than the O(1) algorithm, and it will remain so up until an infinite amount of data.

In order to help ease communication regarding time complexity, computer scientists have borrowed a concept from the world of mathematics to describe a concise and consistent language around the efficiency of data structures and algorithms. Known as _____ __ ________________, this formalized expression around these concepts allows us to easily categorize the efficiency of a given algorithm and convey it to others.

Big O Notation

Once you understand _____ ___ _________, you'll have the tools to analyze every algorithm going forward in a consistent and concise way—and it's the way that the pros use.

Big O Notation

With ordered arrays of a small size, the algorithm of binary search doesn't have much of an advantage over the algorithm of linear search. But let's see what happens with larger arrays.

Binary Search vs. Linear Search

__________ ___________ is a much, much faster algorithm than linear search.

Binary search

The Bubble Sort algorithm contains two kinds of steps:

- Comparisons: two numbers are compared with one another to determine which is greater.
- Swaps: two numbers are swapped with one another in order to sort them.

________________ refers to removing a value from our data structure. With an array, this would mean removing one of the values from the array. For example, if we removed "bananas" from our grocery list, that would be deleting from the array.

Deletion

___________ _____ ___ ______ is the process of eliminating the value at a particular index.

Deletion from an array

We've seen how choosing the right data structure can significantly affect the performance of our code. Even two data structures that seem so similar, such as the array and the set, can make or break a program if they encounter a heavy load.

Even if we decide on a particular data structure, there is another major factor that can affect the efficiency of our code: the proper selection of which algorithm to use.

Like insertion, the worst-case scenario of deleting an element is deleting the very first element of the array. This is because index 0 would be empty, which is not allowed for arrays, and we'd have to shift all the remaining elements to the left to fill the gap.

For an array of five elements, we'd spend one step deleting the first element, and four steps shifting the four remaining elements. For an array of 500 elements, we'd spend one step deleting the first element, and 499 steps shifting the remaining data. We can conclude, then, that for an array containing N elements, the maximum number of steps that deletion would take is N steps.
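A minimal sketch of this shifting process (illustrative only; Python lists normally hide these steps behind del and pop):

def delete_at(arr, index):
    steps = 1                         # step 1: the deletion itself
    for i in range(index, len(arr) - 1):
        arr[i] = arr[i + 1]           # shift each later element one cell left
        steps += 1
    arr.pop()                         # drop the now-stale final slot
    return steps

groceries = ["apples", "bananas", "cucumbers", "dates", "elderberries"]
print(delete_at(groceries, 0))        # 5 steps: 1 deletion + 4 shifts (N steps)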

Sorting algorithms have been the subject of extensive research in computer science, and tens of such algorithms have been developed over the years. They all solve the following problem:

Given an array of unsorted numbers, how can we sort them so that they end up in ascending order?

Logarithms are the inverse of exponents. Here's a quick refresher on what exponents are: 2³ is the equivalent of 2 * 2 * 2, which just happens to be 8. Now, log₂ 8 is the converse of the above. It means: how many times do you have to multiply 2 by itself to get a result of 8? Since you have to multiply 2 by itself 3 times to get 8, log₂ 8 = 3.

Here's another example: 2⁶ translates to 2 * 2 * 2 * 2 * 2 * 2 = 64. Since we had to multiply 2 by itself 6 times to get 64, log₂ 64 = 6. While the preceding explanation is the official "textbook" definition of logarithms, I like to use an alternative way of describing the same concept, because many people find that they can wrap their heads around it more easily, especially when it comes to Big O Notation.
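That alternative description boils down to this: log₂ N is the number of times you can halve N until you reach 1 (exactly for powers of two, approximately otherwise). A tiny sketch (mine, not the book's) makes the point:

def log_base_2(n):
    # How many times can n be halved before it reaches 1?
    count = 0
    while n > 1:
        n //= 2
        count += 1
    return count

print(log_base_2(8))    # 3
print(log_base_2(64))   # 6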

Now, what is the maximum number of steps a computer would need to conduct a linear search on an array?

If the value we're seeking happens to be in the final cell in the array (like "elderberries"), then the computer would end up searching through every cell of the array until it finally finds the value it's looking for. Also, if the value we're looking for doesn't occur in the array at all, the computer would likewise have to search every cell so it can be sure that the value doesn't exist within the array.

With Big O, you have the opportunity to compare your algorithm to general algorithms out there in the world, and you can say to yourself, "Is this a fast or slow algorithm as far as algorithms generally go?"

If you find that Big O labels your algorithm as a "slow" one, you can now take a step back and try to figure out if there's a way to optimize it by trying to get it to fall under a faster category of Big O. This may not always be possible, of course, but it's certainly worth thinking about before concluding that it's not.

if you were to take a traditional college course on algorithms, you'd probably be introduced to Big O from a mathematical perspective. Big O is originally a concept from mathematics, and therefore it's often described in mathematical terms. For example, one way of describing Big O is that it describes the upper bound of the growth rate of a function, or that if a function g(x) grows no faster than a function f(x), then g is said to be a member of O(f). Depending on your mathematics background, that either makes sense, or doesn't help very much. I've written this book so that you don't need as much math to understand the concept.

If you want to dig further into the math behind Big O, check out Introduction to Algorithms by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein (MIT Press, 2009) for a full mathematical explanation. Justin Abrahms provides a pretty good definition in his article: https://justin.abrah.ms/computer-science/understanding-big-o-formal-definition.html. Also, the Wikipedia article on Big O (https://en.wikipedia.org/wiki/Big_O_notation) takes a fairly heavy mathematical approach.

Let's start by determining how many comparisons take place in Bubble Sort. Our example array has five elements. Looking back, you can see that in our first passthrough, we had to make four comparisons between sets of two numbers.

In our second passthrough, we had to make only three comparisons. This is because we didn't have to compare the final two numbers, since we knew that the final number was in the correct spot due to the first passthrough. In our third passthrough, we made two comparisons, and in our fourth passthrough, we made just one comparison. So, that's: 4 + 3 + 2 + 1 = 10 comparisons. To put it more generally, we'd say that for N elements, we make (N - 1) + (N - 2) + (N - 3) ... + 1 comparisons. Now that we've analyzed the number of comparisons that take place in Bubble Sort, let's analyze the swaps. In a worst-case scenario, where the array is not just randomly shuffled, but sorted in descending order (the exact opposite of what we want), we'd actually need a swap for each comparison. So we'd have ten comparisons and ten swaps in such a scenario for a grand total of twenty steps.
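As an aside, this series has a well-known closed form: (N - 1) + (N - 2) + ... + 1 = N(N - 1) / 2. A quick check (illustrative snippet):

n = 5
comparisons = sum(range(1, n))          # 4 + 3 + 2 + 1
print(comparisons, n * (n - 1) // 2)    # 10 10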

________________ refers to adding another value to our data structure. With an array, this would mean adding a new value to an additional slot within the array. If we were to add "figs" to our shopping list, we'd be inserting a new value into the array.

Inserting

Let's say we wanted to add "figs" to the very end of our shopping list. Such an insertion takes just one step. As we've seen earlier, the computer knows which memory address the array begins at. Now, the computer also knows how many elements the array currently contains, so it can calculate which memory address it needs to add the new element to, and can do so in one step.

Inserting a new piece of data at the beginning or the middle of an array, however, is a different story. In these cases, we need to shift many pieces of data to make room for what we're inserting, leading to additional steps.
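Here is a sketch of that shifting process (my own illustration; Python's list.insert does this work for you under the hood):

def insert_at(arr, index, value):
    steps = 0
    arr.append(None)                  # grow the array by one slot
    for i in range(len(arr) - 1, index, -1):
        arr[i] = arr[i - 1]           # shift each element one cell to the right
        steps += 1
    arr[index] = value                # the insertion itself
    steps += 1
    return steps

groceries = ["apples", "bananas", "cucumbers", "dates", "elderberries"]
print(insert_at(groceries, 0, "figs"))   # 6 steps: 5 shifts + 1 insertion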

Again, keep in mind that ordered arrays aren't faster in every respect. As you've seen, insertion in ordered arrays is slower than in standard arrays. But here's the trade-off: by using an ordered array, you have somewhat slower insertion, but much faster search. Again, you must always analyze your application to see what is a better fit.

It's also important to realize that there usually isn't a single data structure or algorithm that is perfect for every situation. For example, just because ordered arrays allow for binary search doesn't mean you should always use ordered arrays. In situations where you don't anticipate searching the data much, but only adding data, standard arrays may be a better choice because their insertion is faster.

Here's an implementation of Bubble Sort in Python:

def bubble_sort(list):
    unsorted_until_index = len(list) - 1
    sorted = False
    while not sorted:
        sorted = True
        for i in range(unsorted_until_index):
            if list[i] > list[i+1]:
                sorted = False
                list[i], list[i+1] = list[i+1], list[i]
        unsorted_until_index = unsorted_until_index - 1

list = [65, 55, 45, 35, 25, 15, 10]
bubble_sort(list)
print(list)

Let's break this down line by line. We'll first present the line of code, followed by its explanation.

unsorted_until_index = len(list) - 1

We keep track of up to which index is still unsorted with the unsorted_until_index variable. At the beginning, the array is totally unsorted, so we initialize this variable to be the final index in the array.

sorted = False

We also create a sorted variable that will allow us to keep track of whether the array is fully sorted. Of course, when our code first runs, it isn't.

while not sorted:
    sorted = True

Contrast this with linear search. If you had three items, you'd need up to three steps. For seven elements, you'd need a maximum of seven steps. For one hundred, you'd need up to one hundred steps. With linear search, there are as many steps as there are items. For linear search, every time we double the number of elements in the array, we double the number of steps we need to find something. For binary search, every time we double the number of elements in the array, we only need to add one more step.

Let's see how this plays out for even larger arrays. With an array of 10,000 elements, a linear search can take up to 10,000 steps, while binary search takes up to a maximum of just thirteen steps. For an array the size of one million, linear search would take up to one million steps, while binary search would take up to just twenty steps.

If you look precisely at the growth of steps as N increases, you'll see that it's growing by approximately N².

N    # of Bubble Sort steps    N²
5    20                        25

So let's look at the complete picture. With an array containing ten elements in reverse order, we'd have: 9 + 8 + 7 + 6 + 5 + 4 + 3 + 2 + 1 = 45 comparisons, and another forty-five swaps. That's a total of ninety steps. With an array containing twenty elements, we'd have: 19 + 18 + 17 + 16 + 15 + 14 + 13 + 12 + 11 + 10 + 9 + 8 + 7 + 6 + 5 + 4 + 3 + 2 + 1 = 190 comparisons, and approximately 190 swaps, for a total of 380 steps.

Notice the inefficiency here. As the number of elements increases, the number of steps grows roughly quadratically. We can see this clearly with the following table:

N data elements    Max # of steps
5                  20
10                 90
20                 380
40                 1560
80                 6320
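In fact, the worst case is N(N - 1)/2 comparisons plus N(N - 1)/2 swaps, for N(N - 1) total steps, which is approximately N². This snippet (illustrative) reproduces the table above:

for n in [5, 10, 20, 40, 80]:
    print(n, n * (n - 1))   # 5 20, 10 90, 20 380, 40 1560, 80 6320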

Simply put, O(log N) is the Big O way of describing an algorithm that increases one step each time the data is doubled. As we learned in the previous chapter, binary search does just that. We'll see momentarily why this is expressed as O(log N), but let's first summarize what we've learned so far.

The three types of algorithms we've learned about so far can be sorted from most efficient to least efficient as follows:

O(1)
O(log N)
O(N)

O(1) simply means that the algorithm takes the same number of steps no matter how much data there is. In this case, reading from an array always takes just one step no matter how much data the array contains. On an old computer, that step may have taken twenty minutes, and on today's hardware it may take just a nanosecond. But in both cases, the algorithm takes just a single step.

Other operations that fall under the category of O(1) are the insertion and deletion of a value at the end of an array. As we've seen, each of these operations takes just one step for arrays of any size, so we'd describe their efficiency as O(1).

The four operations are:

Read, Search, Insert, Delete

We've learned how to analyze the time complexity of a data structure through its four operations.

Read, Search, Insert, Delete. This is extremely important, since choosing the correct data structure for your program can have serious ramifications for how well your code performs.

_____________ from an array is, therefore, a very efficient operation, since it takes just one step. An operation with just one step is naturally the fastest type of operation. One of the reasons that the array is such a powerful data structure is that we can look up the value at any index with such speed.

Reading

______________ refers to looking something up from a particular spot within the data structure. With an array, this would mean looking up a value at a particular index. For example, looking up which grocery item is located at index 2 would be ___________ from the array.

Reading

the reading, searching, insertion, and deletion operations in context of an array-based set.

Reading from a set is exactly the same as reading from an array—it takes just one step for the computer to look up what is contained within a particular index. As we described earlier, this is because the computer can jump to any index within the set since it knows the memory address that the set begins at.

Searching a set also turns out to be no different than searching an array—it takes up to N steps to see if a value exists within a set. And deletion is also identical between a set and an array—it takes up to N steps to delete a value and move data to the left to close the gap.

Insertion, however, is where arrays and sets diverge. Let's first explore inserting a value at the end of a set, which was a best-case scenario for an array. With an array, the computer can insert a value at its end in a single step. With a set, however, the computer first needs to determine that this value doesn't already exist in this set—because that's what sets do: they prevent duplicate data. So every insert first requires a search.
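A minimal array-based set might look like this (a sketch under the assumptions above, not code from the text):

class ArraySet:
    def __init__(self):
        self.items = []

    def read(self, index):
        return self.items[index]               # one step

    def search(self, value):
        for i in range(len(self.items)):       # up to N steps
            if self.items[i] == value:
                return i
        return None

    def insert(self, value):
        # Every insert pays for a full search first, to reject duplicates
        if self.search(value) is None:
            self.items.append(value)           # then one step to insert at the end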

Let's bring this all back to Big O Notation. Whenever we say O(log N), it's actually shorthand for saying O(log₂ N). We're just omitting that small 2 for convenience.

Recall that O(N) means that for N data elements, the algorithm would take N steps. If there are eight elements, the algorithm would take eight steps. O(log N) means that for N data elements, the algorithm would take log₂ N steps. If there are eight elements, the algorithm would take three steps, since log₂ 8 = 3.

____________ refers to looking for a particular value within a data structure. With an array, this would mean looking to see if a particular value exists within the array, and if so, which index it's at. For example, looking to see if "dates" is in our grocery list, and which index it's located at would be searching the array.

Searching

There are actually different types of sets, but for this discussion, we'll talk about an array-based set. This set is just like an array—it is a simple list of values. The only difference between this set and a classic array is that the set never allows duplicate values to be inserted into it. For example, if you had the set ["a", "b", "c"] and tried to add another "b", the computer just wouldn't allow it, since a "b" already exists within the set.

Sets are useful when you need to ensure that you don't have duplicate data. A set is an array with one simple constraint of not allowing duplicates. Yet, this constraint actually causes the set to have a different efficiency for one of the four primary operations.

While the actual deletion of "cucumbers" technically took just one step, we now have a problem: we have an empty cell sitting smack in the middle of our array. An array is not allowed to have gaps in the middle of it, so to resolve this issue, we need to shift "dates" and "elderberries" to the left.

So it turns out that for this deletion, the entire operation took three steps. The first step was the actual deletion, and the other two steps were data shifts to close the gap. So we've just seen that when it comes to deletion, the actual deletion itself is really just one step, but we need to follow up with additional steps of shifting data to the left to close the gap caused by the deletion.

Let's examine how Big O Notation would describe the efficiency of linear search. Recall that linear search is the process of searching an array for a particular value by checking each cell, one at a time. In a worst-case scenario, linear search will take as many steps as there are elements in the array. As we've previously phrased it: for N elements in the array, linear search can take up to a maximum of N steps.

The appropriate way to express this in Big O Notation is O(N). I pronounce this as "Oh of N." O(N) is the "Big O" way of saying that for N elements inside an array, the algorithm would take N steps to complete. It's that simple.
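For reference, here is a straightforward implementation of linear search (an illustrative sketch consistent with the description above):

def linear_search(arr, value):
    for index in range(len(arr)):
        if arr[index] == value:
            return index      # best case: found in the first cell, 1 step
    return None               # worst case: all N cells checked

print(linear_search(["apples", "bananas", "cucumbers"], "bananas"))   # 1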

We've been assuming until now that the only way to search for a value within an ordered array is linear search. The truth, however, is that linear search is only one possible algorithm— that is, it is one particular process for going about searching for a value. It is the process of searching each and every cell until we find the desired value. But it is not the only algorithm we can use to search for a value.

The big advantage of an ordered array over a regular array is that an ordered array allows for an alternative searching algorithm. This algorithm is known as binary search, and it is a much, much faster algorithm than linear search.

Let's take another example. Here is one of the most basic code snippets known to mankind:

print('Hello world!')

The time complexity of the preceding algorithm (printing "Hello world!") is O(1), because it always takes one step. The next example is a simple Python-based algorithm for determining whether a number is prime:

def is_prime(number):
    for i in range(2, number):
        if number % i == 0:
            return False
    return True

The preceding code accepts a number as an argument and begins a for loop in which we divide the number by every integer from 2 up to that number and see if there's a remainder. If there's no remainder, we know that the number is not prime and we immediately return False. If we make it all the way up to the number and always find a remainder, then we know that the number is prime and we return True. The efficiency of this algorithm is O(N). In this example, the data does not take the form of an array, but the actual number passed in as an argument. If we pass the number 7 into is_prime, the for loop runs for about seven steps. (It really runs for five steps, since it starts at two and ends right before the actual number.) For the number 101, the loop runs for about 101 steps. Since the number of steps increases in lockstep with the number passed into the function, this is a classic example of O(N).

N    # of Bubble Sort steps    N²
10   90                        100
20   380                       400
40   1560                      1600
80   6320                      6400

Therefore, in Big O Notation, we would say that Bubble Sort has an efficiency of O(N²). Said more officially: in an O(N²) algorithm, for N data elements, there are roughly N² steps. O(N²) is considered to be a relatively inefficient algorithm, since as the data increases, the steps increase dramatically.

We now know for a fact that the 7 is in its correct position within the array, because we kept moving it along to the right until it reached its proper place. (In the book's diagram, little lines surround it to indicate this fact.)

This is actually the reason that this algorithm is called Bubble Sort: in each passthrough, the highest unsorted value "bubbles" up to its correct position. Since we made at least one swap during this passthrough, we need to conduct another one.

If we double it again (and add one) so that the ordered array contains fifteen elements, the maximum number of steps to find something using binary search is four. The pattern that emerges is that for every time we double the number of items in the ordered array, the number of steps needed for binary search increases by just one.

This pattern is unusually efficient: for every time we double the data, the binary search algorithm adds a maximum of just one more step.

Here's some typical Python code that prints out all the items from a list:

things = ['apples', 'baboons', 'cribs', 'dulcimers']
for thing in things:
    print("Here's a thing: %s" % thing)

How would we describe the efficiency of this algorithm in Big O Notation? The first thing to realize is that this is an example of an algorithm. While it may not be fancy, any code that does anything at all is technically an algorithm—it's a particular process for solving a problem. In this case, the problem is that we want to print out all the items from a list. The algorithm we use to solve this problem is a for loop containing a print statement.

To break this down, we need to analyze how many steps this algorithm takes. In this case, the main part of the algorithm—the for loop—takes four steps. In this example, there are four things in the list, and we print each one out one time. However, this process isn't constant. If the list contained ten elements, the for loop would take ten steps. Since this for loop takes as many steps as there are elements, we'd say that this algorithm has an efficiency of O(N).

In the previous chapter, we learned that binary search on an ordered array is much faster than linear search on the same array. Let's learn how to describe binary search in terms of Big O Notation.

We can't describe binary search as being O(1), because the number of steps increases as the data increases. It also doesn't fit into the category of O(N), since the number of steps is much fewer than the number of elements that it searches. As we've seen, binary search takes only seven steps for an array containing one hundred elements.

As we learned in the previous chapters, linear search isn't always O(N). It's true that if the item we're looking for is in the final cell of the array, it will take N steps to find it. But if the item we're searching for is in the first cell of the array, linear search will find it in just one step. Technically, this would be described as O(1). If we were to describe the efficiency of linear search in its totality, we'd say that linear search is O(1) in a best-case scenario, and O(N) in a worst-case scenario.

While Big O effectively describes both the best- and worst-case scenarios of a given algorithm, Big O Notation generally refers to the worst-case scenario unless specified otherwise. This is why most references will describe linear search as being O(N) even though it can be O(1) in a best-case scenario. The reason for this is that this "pessimistic" approach can be a useful tool: knowing exactly how inefficient an algorithm can get in a worst-case scenario prepares us for the worst and may have a strong impact on our choices.

Let's look at this another way, and we'll see a pattern emerge:

With an array of size 3, the maximum number of steps it would take to find something using binary search is two. If we double the number of cells in the array (and add one more to keep the number odd for simplicity's sake), there are seven cells. For such an array, the maximum number of steps to find something using binary search is three.

We begin a while loop that will last as long as the array is not sorted. Next, we preliminarily establish sorted to be True. We'll change this back to False as soon as we have to make any swaps. If we get through an entire passthrough without having to make any swaps, we'll know that the array is completely sorted.

for i in range(unsorted_until_index):
    if list[i] > list[i+1]:
        sorted = False
        list[i], list[i+1] = list[i+1], list[i]

Within the while loop, we begin a for loop that starts from the beginning of the array and goes until the index that has not yet been sorted. Within this loop, we compare every pair of adjacent values, and swap them if they're out of order. We also change sorted to False if we have to make a swap.

unsorted_until_index = unsorted_until_index - 1

By this line of code, we've completed another passthrough, and can safely assume that the value we've bubbled up to the right is now in its correct position. Because of this, we decrement unsorted_until_index by 1, since the index it was already pointing to is now sorted. Each round of the while loop represents another passthrough, and we run it until we know that our array is fully sorted.

You've probably played this guessing game when you were a child (or maybe you play it with your children now): I'm thinking of a number between 1 and 100. Keep on guessing which number I'm thinking of, and I'll let you know whether you need to guess higher or lower.

You know intuitively how to play this game. You wouldn't start the guessing by choosing the number 1. You'd start with 50, which is smack in the middle. Why? Because by selecting 50, no matter whether I tell you to guess higher or lower, you've automatically eliminated half the possible numbers! If you guess 50 and I tell you to guess higher, you'd then pick 75, to eliminate half of the remaining numbers. If after guessing 75, I told you to guess lower, you'd pick 62 or 63. You'd keep on choosing the halfway mark in order to keep eliminating half of the remaining numbers. This, in a nutshell, is binary search.
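Translated into code, the guessing strategy looks like this (an illustrative sketch of binary search over an ordered array):

def binary_search(ordered_arr, value):
    low, high = 0, len(ordered_arr) - 1
    while low <= high:
        mid = (low + high) // 2       # guess the halfway mark
        if ordered_arr[mid] == value:
            return mid
        elif ordered_arr[mid] < value:
            low = mid + 1             # "guess higher": discard the lower half
        else:
            high = mid - 1            # "guess lower": discard the upper half
    return None

print(binary_search(list(range(1, 101)), 63))   # 62 (the index of 63)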

When a program declares an array, it ____________ a contiguous set of empty cells for use in the program. So, if you were creating an array meant to hold five elements, your computer would find any group of five empty cells in a row and designate it to serve as your array:

allocates

With an __________:

1. A computer can jump to any memory address in one step. (Think of this as driving to 123 Main Street—you can drive there in one trip since you know exactly where it is.)
2. Recorded in each array is the memory address at which it begins, so the computer has this starting address readily available.
3. Every array begins at index 0.

array
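This is why reading by index takes one step: the computer turns an index into a memory address with one multiplication and one addition. The numbers below are assumptions chosen purely for illustration:

base_address = 1000    # hypothetical address where the array begins
element_size = 8       # hypothetical size of each element, in bytes
index = 3
print(base_address + index * element_size)   # 1024: the address of index 3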

When you have a solid grasp on the various data structures and each one's performance implications on the program that you're writing, you will have the keys to write fast and elegant _______ that will ensure that your software will run quickly and smoothly.

code

Contrast this with O(1), which is a perfect horizontal line, since the number of steps in the algorithm remains constant no matter how much data there is. Because of this, O(1) is also referred to as _________ _______.

constant time

___________ is like insertion, but in reverse.

deletion

When we measure how "_______" an operation is, we do not refer to the operation's speed in terms of pure time, but instead to how many steps it takes.

fast

The _________ of an array is the number that identifies where a piece of data lives inside the array.

index

The efficiency of _____________ a new piece of data inside an array depends on where inside the array you'd like to insert it.

inserting

The worst-case scenario for _________ ____ ___ _____—that is, the scenario in which insertion takes the most steps—is where we insert data at the beginning of the array. This is because when inserting into the beginning of the array, we have to move all the other values one cell to the right.

insertion into an array

A basic search operation—in which the computer checks each cell one at a time—is known as _______ ___________.

linear search

In the previous chapter, we described the process for searching for a particular value within a regular array: we check each cell one at a time—from left to right—until we find the value we're looking for. We noted that this process is referred to as ________ _________.

linear search

You'll see that O(N) makes a perfect diagonal line. This is because for every additional piece of data, the algorithm takes one additional step. Accordingly, the more data, the more steps the algorithm will take. For the record, O(N) is also known as ___________ ________.

linear time

Most data structures are used in four basic ways, which we refer to as ___________________.

operations

The _________________ ____ _____ doesn't just matter for organization's sake, but can significantly impact how fast your code runs.

organization of data

Note how O(N²) curves sharply upwards in terms of number of steps as the data grows. Compare this with O(N), which plots along a simple, diagonal line. O(N²) is also referred to as ____________ _______.

quadratic time

It's clear that searching is less efficient than ___________, since searching can take many steps, while reading always takes just one step no matter how large the array.

reading

For _________________: to search for a value within an array, the computer starts at index 0, checks the value, and if it doesn't find what it's looking for, moves on to the next index. It does this until it finds the value it's seeking.

searching

A ___ is a data structure that does not allow duplicate values to be contained within it.

set

Measuring the speed of an operation is also known as measuring its ________ ___________. Throughout this book, we'll use the terms speed, time complexity, efficiency, and performance interchangeably. They all refer to the number of steps that a given operation takes.

time complexity

Now that we understand Big O Notation, we have a consistent system that allows us to compare any _____ __________. With it, we will be able to examine real-life scenarios and choose between competing data structures and algorithms to make our code faster and able to handle heavier loads.

two algorithms

