CS, Algorithms, and Data Structures


Sorting is an important feature to have in any programming language

Algorithms for this include bubble sort, insertion sort, and selection sort. These are relatively beginner-friendly sorting algorithms, but they're also not the most efficient: all of them are typically O(n²). Merge sort and quicksort are more efficient. However, this efficiency comes at a cost: the algorithms are more complicated and tend to take more time to understand.

binary search tree traversal

For traversing through all of the nodes in a BST, there are two common algorithms that we use, which enable us to only have to visit each node once. These two algorithms are called Depth First Search (DFS) and Breadth First Search (BFS). They each utilize a secondary data structure (a stack for DFS and a queue for BFS) in order to traverse the tree.

Additional Graph Representations - Edge List

Edge List - One simple way to represent a graph is just a list, or array, representing the edges of the graph. To represent an edge, we just have an array of two vertex numbers, or an array of objects containing the vertex numbers of the vertices that the edges are incident on. If edges have weights, add either a third element to the array or more information to the object, giving the edge's weight. Here's how we could create an edge list for our undirected graph from before:

var edgeList = [[4, 6], [4, 3], [4, 5], [3, 2], [5, 2], [5, 1], [2, 1]];

Graph Performance Differences

Most of the time, adjacency lists will be used unless you are working with a dense graph (a graph where the number of edges is close to the number of vertices squared) or you need to quickly look up whether there is an edge connecting two nodes. Here is a performance comparison of adjacency lists and adjacency matrices, represented in Big O notation with |V| being the number of vertices and |E| being the number of edges.
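For the common operations, the standard complexities are:

Operation          Adjacency List    Adjacency Matrix
Storage space      O(|V| + |E|)      O(|V|²)
Add vertex         O(1)              O(|V|²)
Add edge           O(1)              O(1)
Check adjacency    O(|V|)            O(1)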

Weights- graphs

Notice here that the graphs we are traversing do not have weights. When we need to work with a weighted graph, BFS and DFS are no longer optimal for traversal. Instead we need to use more sophisticated algorithms to find the 'shortest path' between two nodes, so that we can optimize according to the weights of the edges. In the next section, we'll examine two common ways of finding the shortest path.

Finding a node in a binary tree

Now that we can insert nodes into a tree, we need to think about how to find them. The algorithm goes as follows (and can be done iteratively or recursively): start searching at the root. If the value you are looking for is less than the root, go to the left node (if it exists). If the value you are looking for is greater than the root, go to the right node (if it exists). Keep moving left or right until you find the node with that value; otherwise return undefined.
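A minimal iterative sketch, assuming nodes are plain objects with value, left, and right properties (names chosen for illustration):

function find(root, value){
  var current = root;
  while (current) {
    if (value === current.value) return current; // found it
    // smaller values live to the left, larger values to the right
    current = value < current.value ? current.left : current.right;
  }
  return undefined; // the value is not in the tree
}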

Linked List Vocabulary

There are some important vocabulary words to know when using linked lists:
Node: each element in the list is called a node
Head: the first element in the list
Tail: the last element in the list
Next: usually referring to the pointer to the next node in the list
Previous: in a doubly linked list, the pointer to the previous element in the list

research dynamic programming and related problems in further detail

Linked List Advantages, Disadvantages

Advantage - O(1) shift and unshift operations. Since a linked list is not stored contiguously, each node only takes up one memory slot at a time, so operations like shift always run in constant time. Moreover, if you keep a reference to the last element (also known as the tail) of the list, pushing is also a constant-time operation. And in a doubly linked list, which we'll discuss later, popping is constant time too. Disadvantage - O(n) element access. A linked list is not stored contiguously, and we only have a reference to the head (and possibly the tail) node. That means that finding an element at a specific index requires iterating through the list until that node is reached. This is an O(n) operation, whereas it is a constant-time operation in an array.

queue data structure

An example of a queue data structure is a line in a grocery store. The first person in line is the first person served, and each subsequent person is served in the order in which they arrived. If you want to be served, you must add yourself to the end of the line. In general, a queue is considered a First In, First Out, or FIFO, data structure. It maintains an order: the first element added (enqueued) to the queue is the first that will be returned (dequeued). Common queue operations: enqueue - add an item to the back of the queue (for example, enqueueing 2 adds it to the back); dequeue - remove and return the first element in the queue (if -30 was the first element enqueued, it is the element returned when dequeue is invoked); length - how many elements are in the queue.
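A minimal array-backed sketch (illustrative only; as noted later in these notes, an array-backed queue is not the most efficient implementation):

function Queue(){
  this.items = [];
}
Queue.prototype.enqueue = function(item){
  this.items.push(item); // add to the back
};
Queue.prototype.dequeue = function(){
  return this.items.shift(); // remove from the front
};
Queue.prototype.length = function(){
  return this.items.length;
};

var line = new Queue();
line.enqueue(-30);
line.enqueue(2);
line.dequeue(); // -30, the first element enqueued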

Breadth First Search- graph

As with binary search trees, the main difference between breadth first search and depth first search on a graph is that breadth first search utilizes a queue, while depth first search utilizes a stack. Other than that, the pseudocode is basically the same: Create a queue and enqueue the starting vertex. Come up with a way to mark vertices as having been visited, and mark the starting vertex as visited. While the queue is nonempty: dequeue a vertex and push it into the array of vertices to be returned; examine all vertices adjacent to the current vertex; if an adjacent vertex has not been visited yet, enqueue it and mark it as visited.
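A sketch of that pseudocode, assuming the unweighted adjacency-list Graph described elsewhere in these notes (vertex keys mapping to arrays of neighbors):

function breadthFirstSearch(graph, start){
  var queue = [start];
  var visited = {};
  visited[start] = true;
  var result = [];
  while (queue.length) {
    var vertex = queue.shift(); // dequeue the next vertex
    result.push(vertex);
    graph.adjacencyList[vertex].forEach(function(neighbor){
      if (!visited[neighbor]) {
        visited[neighbor] = true; // mark before enqueueing to avoid duplicates
        queue.push(neighbor);
      }
    });
  }
  return result;
}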

Introduction to Graphs

Before we dive deep into how a graph works, let's see what one looks like and then compare and contrast it to the previous data structures we have seen. Here's a graph: You might be thinking that this looks very similar to a binary search tree! In fact, trees are a certain type of graph. The concept of a graph, however, is more general: all trees are graphs, but not all graphs are trees. Graphs consist of nodes (which are also called vertices) and can be displayed in many different ways. The connections between vertices are called edges. Here's another example of a graph: So what is different about this graph compared to the one above? We can see in this graph that the edges have a direction! Put another way, there are arrows pointing to different vertices. We can even go from one vertex (B) to another (C) to (E) to (D) and back to (B) in a cycle! We will soon learn the term for this type of graph; for now, the important thing to understand is that there are many different kinds. Before we define all the terms, let's look at one more kind of graph. What differences do you see here? In this graph the edges have numbers associated with them! When we create graphs, we can place weights between nodes to represent some sort of cost to travel from one node to another.

Max Heaps

Notice that the larger values are closer to the root, with the largest value being the root. Remember the shape of a heap: it is filled from top to bottom and then left to right, so all that matters in terms of the max value is that a parent node is always greater than its child nodes. This is not like a binary search tree, where all values less than a parent node go on the left. With a binary heap we build the data structure starting from the top and going to the bottom, then left to right. If another node were to be inserted, it would be to the left of the node with a value of 3.

Min Heaps

Notice that the smaller values are closer to the root, with the smallest value being the root. If another node were to be inserted, it would be to the left of 19.

Breadth First Search

The other alternative to traversing is to search horizontally, through the "breadth" of the tree. The algorithm is as follows: start at the root and place the root node in a queue. While there is anything in the queue: remove the first element in the queue (dequeue); if what has been dequeued has a left node, enqueue the left node; if what has been dequeued has a right node, enqueue the right node. In the above example, starting at the root node, we would capture the values in this order: F, B, G, A, D, I, C, E, H.
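A sketch, again assuming nodes with value, left, and right properties:

function bfs(root){
  if (!root) return [];
  var queue = [root];
  var values = [];
  while (queue.length) {
    var node = queue.shift(); // dequeue the first element
    values.push(node.value);
    if (node.left) queue.push(node.left);   // enqueue the left node
    if (node.right) queue.push(node.right); // enqueue the right node
  }
  return values;
}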

Pure Recursion

We can also solve the problem above without using helper method recursion. This is commonly implemented by passing smaller and smaller parameters to each recursive call. Let's see how we could tackle this problem using a single function:

function allRecursive(array, condition) {
  var copy = array.slice(); // copy so the caller's array is not mutated
  if (copy.length === 0) return true;
  if (condition(copy[0])) {
    copy.shift();
    return allRecursive(copy, condition);
  } else {
    return false;
  }
}

var numbersArray = [1, 2, 3, 4, 5];
allRecursive(numbersArray, function(v) {
  return v > 0;
});

The most important thing to have in any recursive function is a

base case. A base case is a terminating case that ends the recursive calls. Without a base case, your recursive function will keep calling itself until you run out of memory. What this means is that you have too many functions on the call stack and your stack "overflows" (that's where the name StackOverflow comes from)!

using helper method recursion:

function allRecursive(array, condition) {
  var copy = array.slice(); // copy so the caller's array is not mutated
  function allRecursiveHelper(arr, cb){
    if (arr.length === 0) return true;
    if (cb(arr[0])){
      arr.shift();
      return allRecursiveHelper(arr, cb); // recurse on the helper, not the outer function
    } else {
      return false;
    }
  }
  return allRecursiveHelper(copy, condition);
}

var numbersArray = [1, 2, 3, 4, 5];
allRecursive(numbersArray, function(v) {
  return v > 0;
});

Use Cases Binary Heaps are commonly used for the following (as well as quite a few other things):

heapsort - Heapsort is a useful in-place (O(1) space complexity) sorting algorithm that has an average performance of O(n log(n)). It requires building a heap and then swapping values to sort.
priority queue - We have previously seen queues, which allow for O(1) insertion and deletion; a more advanced implementation of a queue is one that can re-order itself when a new element is enqueued. The elements in the queue are re-ordered in terms of their priority, which is why this is called a priority queue. Priority queues are easily implemented using min or max heaps to manage the priority and ordering of the queue.
min-max heap - We have seen min and max heaps, but there is also another data structure called a min-max heap, which is a combination of the two and is frequently used to implement double-ended priority queues.

problem with linear search

It's possible that you'll need to search the entire array to find the desired element. In general it's hard to do better than linear search. But if you assume an extra condition on your data - namely, that your data is sorted - you can search through it much more efficiently. We can search through an array of sorted data using an algorithm called binary search.

Whether we're digging through data in an array or searching through rows in a database, we are often faced with the challenge of how to search most efficiently. The simplest way of searching is an algorithm called

linear search. Linear search traverses an array or list until the desired element is found or the end of the list is reached. If the element is found, the index at which it exists is returned; if not, the algorithm returns -1. This is exactly what the indexOf method does for us!
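A minimal sketch of linear search:

function linearSearch(array, value){
  for (var i = 0; i < array.length; i++) {
    if (array[i] === value) return i; // found: return the index
  }
  return -1; // not found
}

linearSearch([5, 12, 2, 30, 8], 30); // 3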

To solve problems using dynamic programming, there are two approaches that we can use

memoization and tabulation

O(n)

or linear time, because the data set is iterated over approximately one time. Unlike our previous sayHello function, this one takes an argument, which controls how many times Hello gets logged to the console. In this case, the runtime of the function should be roughly proportional to the size of numberOfTimes. For example, it should take roughly ten times as long to log Hello 1,000 times as it does to log Hello 100 times. Setting numberOfTimes equal to n, the size of the input, this means that the runtime of sayHello is O(n). What matters is that the runtime scales in proportion to the input size, not the details of the proportional relationship: you could say the runtime is O(n + n) or O(2*n), but constants are ignored, so both are equivalent to O(n).
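The sayHello function being described isn't shown in these notes; a minimal reconstruction, for illustration only, might look like this:

function sayHello(numberOfTimes){
  // the loop body runs once per unit of input size, hence O(n)
  for (var i = 0; i < numberOfTimes; i++) {
    console.log("Hello");
  }
}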

Quick Sort

Quicksort is not the most intuitive of algorithms and has a wide range of implementations. The algorithm is as follows: Pick an element in the array and designate it as the "pivot". There are quite a few options for choosing the pivot; we'll make things simple to start and choose the first element. This is not an ideal choice, but it makes the algorithm easier to understand for now. Next, compare every other element in the array to the pivot. If it's less than the pivot value, move it to the left of the pivot; if it's greater, move it to the right. Once you have finished comparing, the pivot will be in the right place. Next, recursively call quicksort on the left and right halves around the pivot until the array is sorted. Like merge sort, quicksort typically runs at O(n log(n)), and even in the best case is O(n log(n)). But in the worst case - if the pivot is chosen to be the leftmost element and the array is already sorted, for instance - the runtime will be O(n²). Also like merge sort, it's easiest to implement quicksort with the aid of a helper function. This function is responsible for taking an array, setting the pivot value, and mutating the array so that all values less than the pivot wind up to the left of it, and all values greater than the pivot wind up to the right of it. It's also helpful if this helper returns the index of where the pivot value winds up.
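A sketch of that helper and of quicksort itself, partitioning on the first element as described above:

function pivot(arr, start, end){
  var pivotValue = arr[start];
  var swapIndex = start;
  for (var i = start + 1; i <= end; i++) {
    if (arr[i] < pivotValue) {
      swapIndex++;
      // move the smaller value into the "less than pivot" region
      var temp = arr[i]; arr[i] = arr[swapIndex]; arr[swapIndex] = temp;
    }
  }
  // place the pivot between the two regions and return its final index
  var temp2 = arr[start]; arr[start] = arr[swapIndex]; arr[swapIndex] = temp2;
  return swapIndex;
}

function quickSort(arr, left, right){
  if (left === undefined) { left = 0; right = arr.length - 1; }
  if (left < right) {
    var pivotIndex = pivot(arr, left, right);
    quickSort(arr, left, pivotIndex - 1);  // sort values left of the pivot
    quickSort(arr, pivotIndex + 1, right); // sort values right of the pivot
  }
  return arr;
}

quickSort([4, 8, 2, 1, 5, 7, 6, 3]); // [1, 2, 3, 4, 5, 6, 7, 8]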

Why use recursion?

Recursion is far more useful than iteration when solving certain types of problems. A common use is when there's an object nested within another object: instead of writing multiple loops, we can call our function again with a different parameter. The idea of a function invoking itself again is recursion.

merge sort

The algorithm involves splitting the array into smaller subarrays. To be more precise, the algorithm is as follows: Break up the array into halves until you can compare one value with another. Once you have smaller sorted arrays, merge those arrays with other sorted pairs until you are back at the full length of the array. Once the array has been merged back together, return the merged (and sorted!) array. So how does the algorithm perform in terms of its time complexity? Once the array has been broken down into one-element subarrays, it takes O(n) comparisons to get two-element merged subarrays. From there, it takes O(n) comparisons to get four-element merged subarrays, and so on. In total it takes O(log(n)) sets of O(n) comparisons, since the logarithm roughly measures how many times you can divide a number by 2 until you get a number that is 1 or less. Therefore, the time complexity for merge sort is O(n log(n)), which is significantly better than the complexity of bubble, insertion, and selection sort! Even in the best case, merge sort is O(n log(n)). In the worst case, it's O(n log(n)) too. Basically, whenever you think about merge sort, you should think O(n log(n)). When trying to implement merge sort, it's helpful to first write a function that takes two sorted arrays and merges them together. Merge sort then works by splitting the array in half, recursively calling itself on each half, and then using the merge helper to merge the two sorted halves back together.
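A sketch of the merge helper and of merge sort:

function merge(left, right){
  var result = [];
  var i = 0, j = 0;
  // repeatedly take the smaller front value from the two sorted arrays
  while (i < left.length && j < right.length) {
    if (left[i] < right[j]) result.push(left[i++]);
    else result.push(right[j++]);
  }
  // one of the arrays may still have values left over
  while (i < left.length) result.push(left[i++]);
  while (j < right.length) result.push(right[j++]);
  return result;
}

function mergeSort(arr){
  if (arr.length <= 1) return arr; // arrays of 0 or 1 elements are sorted
  var middle = Math.floor(arr.length / 2);
  var left = mergeSort(arr.slice(0, middle));
  var right = mergeSort(arr.slice(middle));
  return merge(left, right);
}

mergeSort([8, 6, 20, -2]); // [-2, 6, 8, 20]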

tabulation

you solve the problem by solving all related sub-problems first. This means that you must decide in which order to solve your sub-problems, which adds another step, but gives you more flexibility than memoization. This approach is traditionally known as a "bottom up" approach, since the sub-problems are calculated first.
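A sketch of tabulation, using the Fibonacci numbers mentioned in the memoization section:

function fib(n){
  if (n < 2) return n;
  var table = [0, 1]; // answers to the smallest sub-problems come first
  for (var i = 2; i <= n; i++) {
    table[i] = table[i - 1] + table[i - 2]; // build up from earlier answers
  }
  return table[n];
}

fib(10); // 55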

big O notation rules to remember

1. Constants are ignored
2. Smaller components are ignored

Keeping those rules in mind, the following are equivalent:
O(500 * n) --> O(n)
O(99999999999) --> O(1)
O(10*n² + 5*n + 20) --> O(n²)
O(n * n) --> O(n²)
O(n*log(n) + 30000 * n) --> O(n*log(n))

Notice that in all examples, constant values are replaced with 1, and all smaller components that are added are ignored.

Insertion Sort

Another simple algorithm for sorting an array of elements is insertion sort. The algorithm goes as follows: Start by picking the second element in the array (we will assume the first element is the start of the "sorted" portion). Compare the second element with the one before it and swap if necessary. Continue to the next element, and if it is in the incorrect order, iterate through the sorted portion to place it in the correct place. Repeat until the array is sorted. Like bubble sort, insertion sort is typically O(n²), since for each element you may need to go through the sorted portion to find its proper position. In the best case scenario, insertion sort will run at O(n), since only one complete iteration will be necessary.
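A sketch of insertion sort:

function insertionSort(arr){
  for (var i = 1; i < arr.length; i++) {
    var current = arr[i];
    var j = i - 1;
    // shift larger sorted values to the right to make room
    while (j >= 0 && arr[j] > current) {
      arr[j + 1] = arr[j];
      j--;
    }
    arr[j + 1] = current; // place the element in its correct position
  }
  return arr;
}

insertionSort([20, 8, -2, 6]); // [-2, 6, 8, 20]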

Selection Sort

Another simple algorithm for sorting an array of elements is selection sort. The algorithm goes as follows: Assign the first element to be the smallest value (this is called the minimum). It does not matter right now if this is actually the smallest value in the array. Compare this item to the next item in the array until you find a smaller number. If a smaller number is found, designate that smaller number to be the new "minimum" and continue until the end of the array. If the "minimum" is not the value (index) you initially began with, swap the two values. You will now see that the beginning of the array is in the correct order (similar to how after the first iteration of bubble sort, we know the rightmost element is in its correct place). Repeat this with the next element until the array is sorted. Just like bubble sort and insertion sort, selection sort is O(n²). In fact, it's O(n²) even in the best case; try to convince yourself why this is true.
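A sketch of selection sort:

function selectionSort(arr){
  for (var i = 0; i < arr.length; i++) {
    var minIndex = i; // assume the first unsorted element is the minimum
    for (var j = i + 1; j < arr.length; j++) {
      if (arr[j] < arr[minIndex]) minIndex = j; // found a smaller value
    }
    if (minIndex !== i) {
      // swap only if a smaller value was found elsewhere
      var temp = arr[i]; arr[i] = arr[minIndex]; arr[minIndex] = temp;
    }
  }
  return arr;
}

selectionSort([5, 12, 2, 30, 8]); // [2, 5, 8, 12, 30]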

Additional Graph Representations - Incidence Matrix

Incidence Matrix - A two-dimensional Boolean matrix, in which the rows represent the vertices and the columns represent the edges. The entries indicate whether the vertex at a row is incident to the edge at a column. Here is what that might look like: In this example, the sign of the value in the matrix corresponds to the direction of the edge. For instance, edges 1 and 4 are leaving vertex a, so the values for (a, 1) and (a, 4) in the matrix are both +1. Similarly, edge 3 is entering vertex a, so the value for (a, 3) is -1. Finally, edge 2 doesn't touch vertex a, so the value for (a, 2) is 0.

Priority Queues

Since we will need to figure out the most optimal nodes to search with this algorithm, we need a data structure that will dynamically adjust to help us figure out which node to visit next. For pathfinding, a priority queue is an optimal data structure. A priority queue works just like a regular queue, but each node placed in the queue has a priority assigned to it, and as elements are enqueued, the order of the queue changes based on the priority of the elements. To be more specific, we will be using a min priority queue, which places the values with the lowest cost at the front of the queue. Priority queues are commonly implemented using binary heaps, so if you have not implemented one yet, make sure you do before trying these next algorithms!

Visualize the call stack

When going through a recursive function, always do your best to visualize the call stack. The Chrome dev tools can help with this. Whenever you call a function again, think about how you are adding it to the stack. Finally, remember that the stack is a LIFO (Last In, First Out) data structure. The last function that is placed (pushed) on the stack will be the first one removed from (popped off) the stack.

big O notation

A more theoretical way of comparing one algorithm to another. Big O notation is a concept borrowed from mathematics that gives you an approximate upper bound on the runtime of your algorithm based on the size of the data set that the algorithm will use.

What is recursion?

A recursive function is a function that calls itself. Often, recursion is an alternative to iteration and in many cases it can actually be more elegant, resulting in less code that is more readable. However, it's essential to have what's called a base case in all recursive functions, as well as an understanding of the call stack.

Adjacency Matrix

An adjacency matrix is a two-dimensional matrix whose values contain information on the adjacency of pairs of vertices. More specifically, the value of the matrix in the ith row and jth column indicates whether or not vertex i and vertex j are adjacent. Here's an example of an adjacency matrix for our undirected graph from before:

    1  2  3  4  5  6
1   0  1  0  0  1  0
2   1  0  1  0  1  0
3   0  1  0  1  0  0
4   0  0  1  0  1  1
5   1  1  0  1  0  0
6   0  0  0  1  0  0

Pathfinding with Graphs

Shortest path algorithms are incredibly common when building any application that involves pathfinding. This could be a simple game like pacman or tic tac toe, or something far more complex like building an efficient networking or navigation system. We will examine two very common algorithms for finding the shortest path: Dijkstra's and A* (A star). If we are trying to find the shortest path between nodes, we need to think about how our adjacency list can account for the weights that edges contain. Right now our implementation of an adjacency list is simply an array of other vertices. However, we need to consider the weights on the edges so that we can find a path between nodes that has the lowest cost.

// add the node as the key and weight as the value in an object
Graph.prototype.addEdge = function(vertex1, vertex2, weight) {
  this.adjacencyList[vertex1].push({node: vertex2, weight});
  this.adjacencyList[vertex2].push({node: vertex1, weight});
};

Euclidean Distance

The Euclidean distance or Euclidean metric is the "ordinary" (i.e. straight-line) distance between two points in Euclidean space. You can think of the Euclidean distance as a straight line between points A and B.
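A sketch, for points with x and y coordinates:

function euclideanDistance(a, b){
  var dx = a.x - b.x;
  var dy = a.y - b.y;
  return Math.sqrt(dx * dx + dy * dy); // straight-line distance
}

euclideanDistance({x: 0, y: 0}, {x: 3, y: 4}); // 5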

memoization

The idea of 'storing' the result of an expensive function (fib) is known as memoization. Memoization is implemented by maintaining a lookup table of previously solved sub-problems. This approach is also known as "top down", since you solve the "top" problem first (which typically recurses down to solve the sub-problems).
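A sketch of a memoized fib:

function fib(n, memo){
  memo = memo || {}; // lookup table of previously solved sub-problems
  if (n < 2) return n;
  if (memo[n] !== undefined) return memo[n]; // already solved: reuse it
  memo[n] = fib(n - 1, memo) + fib(n - 2, memo);
  return memo[n];
}

fib(50); // 12586269025 - fast, because each sub-problem is solved only once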

Scope in Recursion

To help with scope in recursion, we can create a wrapper or helper function which will be called multiple times in an outer function (to provide additional scope). This is done through a process called helper method recursion

binary search

is an example of a divide and conquer algorithm, because at each step you're essentially dividing the amount of data you need to look at in half. In computer science it's often easier to solve a smaller problem than a larger one. The idea of binary search is to continually break the problem set in half until you know your answer. The approach takes advantage of the sorted data set. This is critical - if your set isn't sorted, you cannot use binary search to look for elements in it. Here's some pseudocode describing the algorithm:

var searchValue = 1199
index = midpoint of current array
if searchValue == value at index, return index
if searchValue > value at index, do binary search of upper half of array
if searchValue < value at index, do binary search of lower half of array
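An iterative sketch of that pseudocode:

function binarySearch(sortedArray, searchValue){
  var low = 0;
  var high = sortedArray.length - 1;
  while (low <= high) {
    var mid = Math.floor((low + high) / 2); // midpoint of the current range
    if (sortedArray[mid] === searchValue) return mid;
    if (sortedArray[mid] < searchValue) low = mid + 1; // search the upper half
    else high = mid - 1;                               // search the lower half
  }
  return -1; // not found
}

binarySearch([2, 5, 8, 12, 30], 12); // 3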

Hash Tables or Hash Maps

A hash table or hash map is a data structure that can map keys (of any type) to values. Hash tables are one of the most impressive data structures, as they have an average runtime of O(1) for searching, insertion, and deletion. Before we dive deep into the performance characteristics of hash tables, let's first examine how they work. You can think of a hash table like a JavaScript object, except instead of all the keys being strings, the keys can be any kind of value. When searching for values in the hash table, we pass the key through a special function called a "hashing function," which turns the key into an index. A computer then uses that index to access the key's corresponding value. This isn't exposed to you; it's just used by the computer internally to associate keys with values. One thing to be mindful of when working with hash functions is that they sometimes cause collisions. In other words, it's possible for two keys to get hashed to the same index. When working with hash tables, it is essential to have a good hashing function that does not frequently cause collisions (different inputs returning the same output). The function at a minimum should provide a uniform distribution (equal likelihood for all values). The hash table also needs a way to resolve collisions. We typically think of objects in JavaScript as being hash tables, but objects are restricted in certain ways. For example, the keys in JavaScript objects must be strings or symbols; no other data type is allowed. In order to accommodate more general hash tables, ES2015 introduced two new constructors called Map and WeakMap. Maps are similar to objects, but with a few key differences. From MDN: An Object has a prototype, so there are default keys in the map that could collide with your keys if you're not careful. This could be bypassed by using map = Object.create(null) since ES5, but was seldom done. The keys of an Object are Strings and Symbols, whereas they can be any value for a Map, including functions, objects, and any primitive. You can get the size of a Map easily with the size property, while the size of an Object must be determined manually.
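A short example of Map accepting non-string keys:

var map = new Map();
var keyFunc = function(){};
var keyObj = {};

map.set('name', 'Elie');                      // a string key, like a plain object
map.set(keyFunc, 'value tied to a function'); // keys can be functions...
map.set(keyObj, 'value tied to an object');   // ...or objects

map.get(keyFunc); // 'value tied to a function'
map.size;         // 3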

Linked Lists vs Arrays

A linked list is an ordered list of data, just like an array. The difference between the two is how a linked list is stored in memory. A linked list is not stored contiguously. Instead, a linked list stores the data for a particular index as well as a pointer to the next element in the list. For example, suppose we had a linked list with the following elements in it: 8, 6, 20, -2. We've already seen that an array would need to store these values contiguously in memory. But a linked list can store the values in any order, provided each memory address contains not only a value, but also a pointer to the memory address containing the next value! In the present example, the list might be stored in memory like this: (see image) If you start with the first element in the list, often referred to as the head, and follow the links to the next element, you will end up with an ordered list, just like an array!
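A minimal node sketch showing the value-plus-pointer structure:

function Node(value){
  this.value = value;
  this.next = null; // pointer to the next node in the list
}

var head = new Node(8);
head.next = new Node(6);
head.next.next = new Node(20);
head.next.next.next = new Node(-2);
// following head and each next pointer yields 8, 6, 20, -2 in order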

A*

A* (A star) is an extension of Dijkstra's algorithm that includes an additional concept for more efficient pathfinding. This concept is called a heuristic, which we can simply define as a "best guess". Since Dijkstra's will focus on the closest vertex possible, that might not always be the most efficient place to start searching. Imagine that we are trying to figure out the optimal distance from one city to another. With A* we calculate the cost between two nodes a bit differently, expressed by the following terms:
f-cost - the total cost (g-cost + h-cost)
g-cost - our distance cost (exactly the same as what we calculated with Dijkstra's)
h-cost - our heuristic cost (an additional cost we add depending on how we calculate our heuristic)
This expression is represented as f(x) = g(x) + h(x). This means that when we figure out the order of nodes to place in our priority queue, we factor in an additional cost based on a heuristic. So where does this heuristic come from? Simply put, we have to make one up! Fortunately, there are quite a few common heuristics that are used frequently with pathfinding and A*; two of the most common are the Manhattan Distance (or Taxi Cab Distance) and the Euclidean Distance.

Adjacency List

An adjacency list is a collection of lists which contain the neighbors of a vertex (i.e. the vertices that are connected to the given vertex by an edge). There are many different implementations of adjacency lists, but we will be using an object to model this, with the keys being the vertices and the values being arrays of adjacent vertices. Given the following graph, here is what our adjacency list would look like:

var adjacencyList = {
  1: [2, 5],
  2: [1, 3, 5],
  3: [2, 4],
  4: [3, 5, 6],
  5: [1, 2, 4],
  6: [4]
};

This object indicates that vertex 1 is connected to vertices 2 and 5, vertex 2 is connected to vertices 1, 3, and 5, and so on.

Performance Characteristics

Binary heaps have impressive performance characteristics for deletion and insertion, but they are not as efficient as binary search trees for searching since each node must be visited. The space complexity for a binary heap is O(n) similar to binary search trees.

Binary search tree Performance

Binary search trees have impressive performance characteristics since all operations can be done in O(log(n)) time on average. However, this is not always the case if a tree is unbalanced (more nodes on one side than another). The worst case runtime for a BST can be O(n) if a tree is completely unbalanced (there are much more complex data structures like AVL and Red-Black Trees which balance themselves to prevent these kinds of issues).

real world examples of stacks

Call Stack - in JavaScript (and in computer science in general), the call stack is used to keep track of functions that are being executed or have been executed.
Backtracking - for certain kinds of algorithms (especially ones we will examine in a later section on Binary Search Trees), remembering information in a certain order can easily be achieved using a stack.
The back button in the browser - think about how this might work! Every time that you view a page, the previous URL gets added to a stack. If you click the back button, that URL will be visited and popped off the stack. Try to visualize or diagram this example; it will help solidify your understanding quite a bit. Stacks are also very helpful for implementing an undo feature.
Calculations - calculators and specific kinds of notations can be implemented using stacks to keep track of numbers and operations. When the = button is pressed (or other buttons to calculate values), certain values are popped off the stack.
Although this implementation is very simple and it enforces the LIFO property of a stack, it is not the most efficient. As we learned earlier, the push operation on an array is O(n). A better implementation would be to use a doubly linked list to implement the stack. That way, pushing and popping operations would always be constant time, O(1). You could also use a singly linked list, since its unshift and shift operations would mimic the push and pop of a stack.

how to remove a node from a binary heap

Deletion - For this example we will be using a max heap. While we could remove nodes from anywhere in a heap, removal is commonly done by removing the root node (the highest value in a max heap or the lowest value in a min heap). Here is how it works with a max heap: Replace the root of the heap with the last element on the last level (all the way on the bottom, the last node remaining going from left to right). Once again, we are not worried about comparing values; we just need to move another node to where the root node was (since it is now removed). We will compare values later. Compare the new root node with its children: if it is greater than both of its children, keep it there. If it is less than its children, pick the greater of the children and swap the position of the new root with that child. Keep repeating this process until the node which has become the root is in the correct place (has a parent with a greater value).
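A sketch of this deletion (often called extractMax) on an array-backed max heap, using the index rules covered in the array-implementation section of these notes:

function extractMax(heap){
  var max = heap[0];
  var last = heap.pop(); // take the last element on the last level
  if (heap.length > 0) {
    heap[0] = last;      // move it to the root, then sink it down
    var index = 0;
    while (true) {
      var left = 2 * index + 1;
      var right = 2 * index + 2;
      var largest = index;
      if (left < heap.length && heap[left] > heap[largest]) largest = left;
      if (right < heap.length && heap[right] > heap[largest]) largest = right;
      if (largest === index) break; // both children are smaller: done
      var temp = heap[index]; heap[index] = heap[largest]; heap[largest] = temp;
      index = largest;
    }
  }
  return max;
}

extractMax([41, 39, 33, 18, 27, 12]); // 41; the array becomes [39, 27, 33, 18, 12]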

Dynamic Programming

Dynamic programming is a useful problem solving technique that often helps to make your programs more efficient. The basic idea is that you solve smaller problems and remember the results of those smaller problems in case you encounter them again. By remembering solutions to smaller problems, you can more quickly figure out the larger problem.

Graph Traversal vs. BST traversal

Even though we can perform depth and breadth first searches on both graphs and binary search trees, there are a couple of important differences in the implementations that should be emphasized. When we implemented searching algorithms on BSTs, we made use of some assumptions on the structure of those graphs that we can no longer safely make. First, a BST has a canonical starting place when we want to begin a search: the root node. However, a general graph may not have the same hierarchical structure as a BST, and there may not be a natural place to begin the search. So, for our searching algorithms, we'll need to specify where the search should begin. More importantly, you'll need to deal with a problem we didn't have to worry about with binary search trees: graph cycles. With a BST, it was enough to walk down the tree and collect nodes as we found them; in general, though, we need to be careful not to add a node to our list if we've already encountered it. Since binary search trees don't have cycles, we never had to worry about revisiting a node we'd already visited, but for general graphs this is no longer the case. In both types of search, then, we'll need to be sure that we somehow keep track of the nodes that we have already visited, so that we can be sure to only visit them once. Now that we've outlined the differences between searching general graphs and searching BSTs, let's jump into some pseudocode.

How Are Arrays Stored in Memory?

Every piece of data in your program needs to be stored somewhere so that your program can access that data. The location of each piece of data in your machine is referred to as the data's memory address. You can think of memory addresses as bins to put our values in. Each slot in memory has a memory address, and each slot can hold data. One of the most important defining characteristics of arrays is that they are stored in memory contiguously. This means that adjacent elements in the array have adjacent memory addresses. Let's look at a simple example. Suppose you have an array like the following: var arr = [5,12,2,30,8]; We can think of memory addresses as defined by some numerical id. The fact that the array is stored contiguously means that it will be stored in memory like this (the top row corresponds to memory addresses): see image You can consider memory access to be constant time in your program, so every time you look up a variable, the process is constant time. But what about an array? An array saves the memory address of where the array starts. In other words, the program knows where index 0 of the array is stored in memory. And since arrays are contiguous, accessing an element by an index is also constant time, because looking up a value at an index is just doing some simple math with memory addresses. No matter what element you're trying to access, your computer can always start at the beginning of the array, shift the memory address by some amount based on the index, and then access the element you requested. These are all constant-time operations. TAKE AWAY: All array access is constant time, or O(1).

Modeling our Graph in JS

For our implementation of a graph, we will be using an adjacency list.

function Graph() {
  this.adjacencyList = {};
}

When we add a vertex, we just add a new key in the object with an empty array as its value. When we add an edge, we push each vertex into the corresponding array for the vertex it is connected to.
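A sketch of those two methods for an unweighted, undirected graph (the pathfinding section shows a weighted variant of addEdge):

Graph.prototype.addVertex = function(vertex) {
  // a new key with an empty array of neighbors
  if (!this.adjacencyList[vertex]) this.adjacencyList[vertex] = [];
};

Graph.prototype.addEdge = function(vertex1, vertex2) {
  // push each vertex into the other's array of neighbors
  this.adjacencyList[vertex1].push(vertex2);
  this.adjacencyList[vertex2].push(vertex1);
};

var graph = new Graph();
graph.addVertex(1);
graph.addVertex(2);
graph.addEdge(1, 2);
graph.adjacencyList; // { 1: [2], 2: [1] }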

Hash Performance

Hash tables have quite remarkable performance characteristics. If you can manage collisions in a hash table, or if you are lucky enough to know all the keys ahead of time, you can even create a perfect hash, which will guarantee O(1) lookups for all cases.

Comparing Common Big O Runtimes

In the following chart, the X axis represents the size of the input (the size of an array of data, for example) and the Y axis represents a unit of time to run the program.

Legend:
Blue - O(1)
Red - O(log(n))
Green - O(n)
Yellow - O(n*log(n))
Purple - O(n²)

As you can see from the chart, as the input size goes up, the performance of these algorithms is dramatically different.

stack

A stack is a Last In, First Out (LIFO) data structure. Common stack operations: pop - take something off of the top of the stack (if 1 was the last element added, it is the element removed, because it is on top); push - add something to the top of the stack (pushing 55 places it on top, since a stack is always last in, first out); size or length - how many elements are in the stack.
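A minimal array-backed sketch (as noted elsewhere in these notes, a linked list makes a more efficient backing structure):

function Stack(){
  this.items = [];
}
Stack.prototype.push = function(item){
  this.items.push(item); // add to the top
};
Stack.prototype.pop = function(){
  return this.items.pop(); // remove from the top
};
Stack.prototype.size = function(){
  return this.items.length;
};

var stack = new Stack();
stack.push(1);
stack.push(55);
stack.pop(); // 55, the last element pushed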

recursive function example

Let's take a look at a recursive function that won't throw a RangeError. We will call this function sumRange. It will take a number and return the sum of all numbers from 1 up to the number passed in. For example, sumRange(3) should return 6, since 1 + 2 + 3 = 6. You know enough JavaScript to write this function iteratively, but let's try to take a recursive approach. In order to do that, we'll need to think about a couple of things: The base case: how do we stop this function from overflowing the call stack? How should we call the function from within itself? (We need to make sure we do something a bit different each time so that we don't overflow the stack.) Let's take a look at one possible solution:

function sumRange(num){
  if (num === 1) return 1;
  return num + sumRange(num - 1);
}

The base case here is the first line inside of the function; without it, the stack will overflow (try to answer for yourself why this is the case). Then, in the second line of the function, we're calling sumRange, but passing in num - 1. This is another hallmark of recursion: when we call the function again, we typically modify the parameters of the function in some way so that we can eventually reach the base case. In this example, if the second line read return num + sumRange(num), we'd once again overflow the stack (try to answer for yourself why this is the case, too). To really understand what this function is doing, think about what's happening on the return num + sumRange(num - 1); line. If you call sumRange(4), for instance, then this function will itself call sumRange(3) on this line, which will add another copy of sumRange to the call stack. Similarly, sumRange(3) will call sumRange(2), which will add yet another copy of sumRange to the stack. This process will continue until we reach the base case, after which these functions will start popping off of the stack.

Applications of graphs

Networking - We can model the entire Internet as a series of nodes connected in a graph. Social Networking - Imagine how you could model your social network of friends on Facebook or LinkedIn. We could use an undirected graph to display connections between friends and build sophisticated algorithms for expanding a social network. The graph is undirected because friendship is bidirectional: if I'm friends with you, then you're friends with me. Not all social networking is bidirectional, though: for example, you can follow someone on Twitter without them following you back. How does this change the structure of a graph model of users on Twitter, as compared to Facebook? Navigation - How could we model something like Google Maps? We can think of navigation as a massive directed weighted graph with distances between each location and endless paths for getting from one point to another. We will see in a later section how we can find the shortest path between two locations! Games/Mazes - Very commonly, games will implement forms of graph traversal specifically with mazes. We could model a maze as a series of nodes with an optimal path for someone to complete.

Dijkstra's algorithm

Now imagine we want to find the shortest path between vertex S and vertex T. How would we go about calculating that? We'll need to make a couple of objects to store visited vertices and distances between vertices. The first thing we need to do is assign every node a distance value: 0 for the starting node and Infinity for all others. Even though we might know the distances by looking at the graph, we're going to imagine that we know nothing about distances except for the node we are starting at. So we first loop over each vertex, and unless the vertex is the starting node, we place a key of that vertex and a value of Infinity in the distances object. In each iteration, we also enqueue each node with a priority of zero if it is the start node or Infinity otherwise. Finally, in each iteration, we initialize a key of the vertex and a value of null in our previous object. The idea here is that we have not visited any nodes yet, so there are no "previous" nodes visited. This object, along with our distances object, will be updated as we go. We then begin to iterate as long as the queue is not empty. Inside of this loop we dequeue a vertex and first check whether it is the node we are trying to reach; if it is, we are done iterating and have found our shortest path. One way to recover that path is to build an array by backtracking through our previous object until we reach the starting vertex. Once we have found this path we can return it and we're done! What happens if the vertex we have dequeued is not the final node? The fun part! We need to examine each edge for this vertex and calculate the distance from our starting node. If that distance is less than what we currently have in our distances object for the vertex we are checking, we have to do a few things. First, we need to update the distances object for that vertex with the new, shorter distance. We also need to update our previous object for that vertex with the node we just dequeued. Finally, we need to enqueue the vertex with the updated weight. This is quite a lot, so try to build a table of previously visited nodes and distances between nodes as you trace through - it will help quite a bit.
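A compact sketch under a few assumptions: vertices are string keys in the weighted adjacency list from the pathfinding section ({node, weight} entries), weights are non-negative, and the priority queue is a naive re-sorted array rather than a binary heap:

function dijkstra(graph, start, finish){
  var distances = {};
  var previous = {};
  var queue = []; // naive priority queue: re-sorted before every dequeue

  // every node starts at Infinity except the starting node, which is 0
  for (var vertex in graph.adjacencyList) {
    distances[vertex] = vertex === start ? 0 : Infinity;
    previous[vertex] = null;
    queue.push({node: vertex, priority: distances[vertex]});
  }

  while (queue.length) {
    queue.sort(function(a, b){ return a.priority - b.priority; });
    var current = queue.shift().node; // visit the closest unprocessed vertex

    if (current === finish) {
      var path = [];
      // backtrack through previous to build the path that got us here
      while (current !== null) {
        path.unshift(current);
        current = previous[current];
      }
      return path;
    }

    graph.adjacencyList[current].forEach(function(edge){
      var candidate = distances[current] + edge.weight;
      if (candidate < distances[edge.node]) {
        distances[edge.node] = candidate; // found a shorter route
        previous[edge.node] = current;    // remember how we got there
        queue.push({node: edge.node, priority: candidate});
      }
    });
  }
}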

Handling Collisions

Now that we have an idea of how hash tables work, and more specifically how hashing functions work, let's imagine we reach a point where our hashing function produces the same index for two distinct keys. This can cause serious problems for our hash table, because if there is already a value at that index, we do not want to overwrite it. So how can we manage this issue? While there are quite a few other ways to manage collisions, we will start by examining two potential solutions: separate chaining and linear probing.

Inserting a node in a binary tree

One of the first methods to implement with a binary tree is inserting a node. The algorithm goes as follows (and can be done iteratively or recursively): Start at the root. If there is nothing there, insert the new node. If the value of the node you are inserting is less than the root, go to the left node (if it exists); if it is greater than the root, go to the right node (if it exists). Keep moving left or right until you find a node with no left or right value. Once you find that empty spot, place the newly created node there.
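An iterative sketch, using the same node shape assumed in the find example (value, left, right):

function insert(root, value){
  var newNode = {value: value, left: null, right: null};
  if (!root) return newNode; // nothing there: the new node becomes the root
  var current = root;
  while (true) {
    if (value < current.value) {
      if (!current.left) { current.left = newNode; break; } // empty spot found
      current = current.left;
    } else {
      if (!current.right) { current.right = newNode; break; }
      current = current.right;
    }
  }
  return root;
}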

bubble sort

One of the simplest algorithms for sorting an array of elements is bubble sort. The algorithm goes as follows: For each element in the array, compare it with the next element (the element to the right). If the element is greater than the value on the right, swap the two values. Continue to swap until you have reached the end of the array; at this point the rightmost element will be in its correct place. Start at the beginning of the array and repeat this process. Since the rightmost element from the last iteration is now sorted, each repetition can terminate earlier and earlier. Unfortunately, the algorithm is typically O(n²) because of the nesting of loops: you run through the array, and at each iteration you have to run through a subarray of the original array. In the best case scenario, bubble sort will run at O(n), since only one complete iteration will be necessary.
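A sketch of bubble sort, with the common short-circuit that gives the O(n) best case:

function bubbleSort(arr){
  for (var i = arr.length - 1; i > 0; i--) {
    var swapped = false;
    // after each pass, the rightmost elements are already in place
    for (var j = 0; j < i; j++) {
      if (arr[j] > arr[j + 1]) {
        var temp = arr[j]; arr[j] = arr[j + 1]; arr[j + 1] = temp;
        swapped = true;
      }
    }
    if (!swapped) break; // no swaps means the array is already sorted
  }
  return arr;
}

bubbleSort([30, 5, 8, 2, 12]); // [2, 5, 8, 12, 30]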

Binary Search Trees

One type of tree is called a Binary Tree. This is a tree in which each node has at most two children. Among binary trees, the specific kind of tree that we will be looking at now is a Binary Search Tree. A binary search tree is a special kind of tree where each node is in a sorted order: all nodes to the left of a given node must have values less than the given node's value, and all nodes to the right of a given node must have values greater than the given node's value. Let's see what a BST looks like: view image Notice that in the above example, every node to the left of the root has a value less than 8; every node to the right of the root has a value greater than 8. Similarly, every node to the left of the 3 node has a value less than 3, while every node to the right of the 3 node has a value greater than 3.

real world examples of queue

Prioritization - One of the most common use cases for a queue is when priority is important. Let's imagine your web site has thousands of requests a second. Since you cannot service all requests at the same time, you might implement a first-come, first-served policy by time of arrival. To manage the priority and ordering of requests, a queue would be the ideal data structure to use. Similarly, in a multitasking operating system, the CPU cannot run all jobs at once, so jobs must be batched up and then scheduled according to some policy; again, a queue might be a suitable option in this case. Finally, queues are commonly used with scheduling jobs for printers, where the first job to be enqueued is the first to be printed. Searching Algorithms / Traversal - for certain kinds of algorithms (especially ones we will examine in a later section on Binary Search Trees), remembering information in a certain order can easily be achieved using a queue. Queues can also be used with maze algorithms to keep track of options that have not yet been explored. Job/Process Scheduling - Very commonly when background jobs are being run, there is a queue that manages the order in which these background jobs are coming in and how they are being processed. In-memory databases like Redis are commonly used to manage these processes. The queue implementation above enforces the FIFO property of a queue, but again it is not the most efficient implementation. Both enqueue and dequeue are O(n) operations. To make both operations constant time, you could implement the queue with a doubly linked list.

Depth First Search- graph

Similar to binary search trees, we will use a stack to manage nodes visited in a graph for depth first search. Our goal will be to return an array of node values in the order in which they're searched. Given what we've learned so far about graph search, here's some pseudocode to help us implement depth first search: Create a stack and push the starting vertex onto it. Come up with a way to mark vertices as having been visited, and mark the starting vertex as visited. While the stack is nonempty: pop a vertex off of the stack and push it into the array of vertices to be returned; examine all vertices adjacent to the current vertex; if an adjacent vertex has not been visited yet, push it onto the stack and mark it as visited.
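A sketch mirroring the BFS one, with the queue swapped for a stack:

function depthFirstSearch(graph, start){
  var stack = [start];
  var visited = {};
  visited[start] = true;
  var result = [];
  while (stack.length) {
    var vertex = stack.pop(); // pop the most recently discovered vertex
    result.push(vertex);
    graph.adjacencyList[vertex].forEach(function(neighbor){
      if (!visited[neighbor]) {
        visited[neighbor] = true;
        stack.push(neighbor);
      }
    });
  }
  return result;
}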

Across Time and Space

So far we've only been talking about the runtime of different algorithms using Big O notation. This is often referred to as analyzing the time complexity of the algorithm. But Big O isn't just used to talk about the time it takes our programs to run; it's also used to talk about how much space (i.e. memory) our program requires. This is often referred to as analyzing the space complexity of the algorithm. Very often we're concerned primarily with auxiliary space complexity: how much additional memory does the algorithm require beyond what needs to be allocated for the inputs themselves?

A simple hashing function

So how can we create a hashing function? One approach is for our function to return a number between 0 and the size of our hash table. We can then use this number as an index inside of a suitably large array. Let's start with a simple example: suppose we have a collection of numbers that we want to store as values in a hash table. We could create a hash function by using the number modulo the size of the table as the index. For example, if our hash table has a size of 10, and our number is 50, then our index would be 50 % 10 = 0. This is a nice start, but we quickly run into problems. One problem is that the effectiveness of our hash table depends a great deal on the numbers we're trying to hash. For example, if we're trying to store the numbers 10, 20, 30, 40, 50, 60, 70, 80, 90, and 100, we immediately run into a problem with a hash table of size 10, because each one of these numbers will yield an index of 0 with our current hashing function! One solution to this problem is to create a large hash table. In most applications, in order to minimize collisions it's also helpful to ensure that the size is a prime number. The larger issue, however, is that our hashing function depends on the input being a number. But we may want to hash all kinds of different types of data! Here's an example of a more sophisticated hashing function which converts strings to indices. This function sums the ASCII values of the letters in the key, then takes that sum modulo the size of the hash table:

// let's imagine hashTable is an array with 7919 slots (a large prime number)
var hashTable = new Array(7919);

function basicHash(key){
  var sumOfCharacters = key.split("").reduce(function(previousValue, nextValue){
    return previousValue + nextValue.charCodeAt(0);
  }, 0);
  return sumOfCharacters % hashTable.length;
}

basicHash('Elie') // 383
basicHash('Matt') // 406
basicHash('Tim')  // 298

We could now use the value returned from the basicHash function as an index and implement a set method to add the value into our hash table. While this will work, it still has some problems. While hashing the three strings "Elie", "Matt" and "Tim" seems fine so far, that is because we have a relatively large hash table size. If we were to add more and more values, the distribution would not be even and we would quickly get hashing collisions.

Manhattan Distance

The Manhattan Distance is the distance between two points measured along axes at right angles. The name alludes to the grid layout of the streets of Manhattan, which constrains the shortest path a car could take between two points in the city. You can think of the Manhattan distance as a series of right angles (forming a zig-zag).
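A sketch, for points with x and y coordinates:

function manhattanDistance(a, b){
  // sum of the horizontal and vertical distances, as if driving on a grid
  return Math.abs(a.x - b.x) + Math.abs(a.y - b.y);
}

manhattanDistance({x: 0, y: 0}, {x: 3, y: 4}); // 7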

Depth First Search

The algorithm behind depth first search is to search via "depth" and explore each branch of the tree as much as possible. But then the question arises, at which node do we start? Do we start at the root? Do we start at the leftmost node and go upward? Or do we start at the leftmost node and then find the next deepest left node? These distinctions might seem confusing right now, but the difference between them will be clear in a moment. These three approaches to performing depth first search have different names: pre-order, in-order, and post-order.

Separate Chaining

The first solution to handling collisions involves storing multiple pieces of data at each index. This can be done using linked lists, balanced binary search trees, or even another entire hash table! The algorithm that we would have to implement for separate chaining to work looks something like this (assuming we're using linked lists): Create a large array of prime length. To add a key-value pair, hash the key using the hash function to obtain an index. If there is no data in the array at that index, create a new linked list and put the key-value pair inside the linked list's head. If there is already a non-empty linked list in the array at that index (i.e. if there's a collision), insert the key-value pair at the next node in the linked list.
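A sketch of separate chaining, reusing the basicHash function from the earlier example and using arrays as the chains instead of linked lists, for brevity:

var table = new Array(7919);

function set(key, value){
  var index = basicHash(key);           // hash the key to get an index
  if (!table[index]) table[index] = []; // no data there yet: start a new chain
  table[index].push([key, value]);      // a collision simply extends the chain
}

function get(key){
  var chain = table[basicHash(key)];
  if (!chain) return undefined;
  for (var i = 0; i < chain.length; i++) {
    if (chain[i][0] === key) return chain[i][1]; // find the matching key
  }
}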

Linear Probing

The second solution to handling collisions involves a form of what is called "open addressing," which is the idea that when a new key has to be inserted and there is a collision, the algorithm finds another open slot to place that key. Linear probing searches the hash table for the closest free location following the collision and inserts the new key there. Lookups are performed in the same way, by searching the table sequentially starting at the position given by the hash function, until finding a cell with a matching key or an empty cell. The algorithm that we would have to implement for linear probing to work would be: Create a large array of prime length. To add a key-value pair, hash the key using the hash function to obtain an index. If there is no data in the array at that index, insert the data into the array. If there is a collision, move to the next index in the hash table, and if there is nothing there, place the key and value. Otherwise, keep iterating through the hash table to find an empty spot and then place the key and value there. Note that both of these approaches emphasize the need for a hashing function which tends to distribute indices uniformly across the array. For example, if our hashing function always returns the index 0, the separate chaining technique reduces to just creating a linked list, and our linear probing technique just reduces to creating an array. In either case, we're losing many of the advantages of using a hash table in the first place!
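A sketch of linear probing, again reusing basicHash, wrapping around the end of the array, and ignoring the full-table case for brevity:

var probedTable = new Array(7919);

function setWithProbing(key, value){
  var index = basicHash(key);
  // move to the next slot until we find a free one (or this same key)
  while (probedTable[index] && probedTable[index][0] !== key) {
    index = (index + 1) % probedTable.length;
  }
  probedTable[index] = [key, value];
}

function getWithProbing(key){
  var index = basicHash(key);
  while (probedTable[index]) {
    if (probedTable[index][0] === key) return probedTable[index][1];
    index = (index + 1) % probedTable.length; // probe the next slot
  }
}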

Tree Terminology

There are a number of common terms you'll hear or read when you're learning about trees. Here are a few:
Root - The top node in a tree.
Child - A node directly connected to another node when moving away from the Root.
Parent - The converse notion of a child.
Siblings - A group of nodes with the same parent.
Descendant - A node reachable by repeated proceeding from parent to child.
Ancestor - A node reachable by repeated proceeding from child to parent.
Leaf - A node with no children.
Edge - The connection between one node and another.
Path - A sequence of nodes and edges connecting a node with a descendant.
Level - The level of a node is defined by 1 + (the number of connections between the node and the root).
Height of node - The height of a node is the number of edges on the longest path between that node and a descendant leaf.
Height of tree - The height of a tree is the height of its root node.
Depth - The depth of a node is the number of edges from the tree's root node to the node.
There are many different kinds of trees used for many different purposes, including database indexing, structuring systems (like your file system), and many more.


This data structure, used quite frequently to organize data, is called a tree. Trees, like many of the other data structures we've seen so far, consist of collections of nodes. Unlike linked lists, however, nodes in a tree are organized in parent-child relationships. If you've written JavaScript code to run in the browser, you've likely seen an example of a tree before. In fact, the DOM has a tree structure! For example, consider this HTML:

<html lang="en">
  <head>
    <meta charset="UTF-8">
    <title>some html</title>
  </head>
  <body>
    <h1>Title</h1>
    <div>A div</div>
    <div>Another div</div>
  </body>
</html>

This HTML page can be represented as a tree. The html element represents the top node in the tree, and has two children, head and body. Similarly, the head element has two children, while the body element has three:

A more complex hashing function

To fix our issue, we can introduce another small prime number to multiply each part of our sumOfCharacters by. This is an implementation of an algorithm called Horner's method. We will not be getting into the math of it, but in short, this will lead to a more uniform distribution of indices with fewer collisions on average.

// let's imagine hashTable is an array with 7919 slots (a large prime number)
var hashTable = new Array(7919);

function hornerHash(key){
  var smallPrime = 37;
  var sumOfCharacters = key.split("").reduce(function(previousValue, nextValue){
    return smallPrime * previousValue + nextValue.charCodeAt(0);
  }, 0);
  return sumOfCharacters % hashTable.length;
}

hornerHash('Elie') // 4155
hornerHash('Matt') // 6711
hornerHash('Tim')  // 205

Note that in the example above, we are assuming that our keys are strings. What makes a hash table so special is that your keys can be any data type, including strings, numbers, booleans, null, undefined, objects, arrays, and even functions! Don't worry though: for the exercises below, you will not be implementing your own hashing function. What's important to understand is what makes a good hashing function and how you can handle hashing collisions, which we will see more of in the next chapter.

Graph Traversal

Two of the simplest algorithms to use when searching through graphs are breadth first and depth first search. We've seen both of these algorithms when examining binary search trees; now let's apply the same logic to graphs! Of course, in order for us to be able to exhaustively search through a graph, we need to assume that it is connected. This simply means that there is a path between any two vertices in the graph. All of the graphs we've looked at so far have been connected, but it's not true that all graphs are connected. Consider, for example, a graph in which the vertex 0 has no edges connecting it to any other vertex. In such a graph, any search algorithm that starts at 0 won't be able to find any of the other vertices; similarly, any algorithm that starts at one of the other vertices won't be able to find the 0 vertex. In our algorithms for depth and breadth first search, we'll make the assumption that our graphs are connected.
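
Here is a minimal sketch of breadth first search over a graph stored as an adjacency list (the bfs name and the visited/seen bookkeeping are illustrative assumptions; a queue drives the traversal, just as it did for BSTs):

function bfs(adjacencyList, start) {
  var visited = [];
  var seen = {};
  var queue = [start];
  seen[start] = true;
  while (queue.length > 0) {
    var vertex = queue.shift();    // dequeue the next vertex
    visited.push(vertex);
    adjacencyList[vertex].forEach(function(neighbor) {
      if (!seen[neighbor]) {       // only enqueue vertices we haven't seen
        seen[neighbor] = true;
        queue.push(neighbor);
      }
    });
  }
  return visited;
}

// usage: vertex 0 connects to 1 and 2, vertex 2 connects to 3, etc.
var adjacencyList = [[1, 2], [0, 2], [0, 1, 3], [2]];
bfs(adjacencyList, 0); // [0, 1, 2, 3]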

Graphs Essential Vocabulary

Vertex - Similar to nodes, each vertex represents a piece of data in the graph.
Edge - Vertices/nodes are connected by edges.
Weighted - Weighted graphs have a value placed on each edge, which represents the cost to travel from one node to another.
Unweighted - In an unweighted graph, there is no value placed on traversing from one node to another.
Directed - In a directed graph, edges have a direction which restricts motion along them. If an edge points from node A to node B, for instance, then it's possible to move from A to B, but not from B to A. Edges in a directed graph can be either unidirectional (one-way) or bidirectional (two-way).
Undirected - In an undirected graph, every edge is bidirectional; in other words, as long as an edge exists, there's no restriction on travel between the two nodes.
Acyclic - An acyclic graph is one in which you can never traverse from a node back to itself.
Cyclic - A cyclic graph is a graph which is not acyclic.

Representing Graphs

What makes graphs tricky to implement is that there are multiple ways of representing the data structure programmatically. Unlike linked lists and binary search trees, which we represented as collections of nested objects, graphs can be represented in a few different ways. Let's examine three approaches.

Array implementation - Arrays are how binary heaps are commonly implemented in JavaScript, for use cases like heapsort and priority queues.

When we build a heap using an array, it is important to understand the placement of parents relative to their children. Here are the rules:

For a parent at index n:
- its left child will be at index 2n + 1
- its right child will be at index 2n + 2

For a child node at index n:
- its parent node will be at index Math.floor((n - 1) / 2)

In a min-heap with an array representation, you can use the index values in the array to see how the 2n + 1 and 2n + 2 rules apply, as well as how to find a parent index from a child index; the helper sketch below makes this concrete. These rules are essential to understand for inserting and deleting when using an array to represent a binary heap.
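
Here is a minimal sketch of these index rules as helper functions (the function names are just illustrative):

function leftChildIndex(n)  { return 2 * n + 1; }
function rightChildIndex(n) { return 2 * n + 2; }
function parentIndex(n)     { return Math.floor((n - 1) / 2); }

leftChildIndex(0);  // 1 -- the root's left child
rightChildIndex(0); // 2 -- the root's right child
parentIndex(1);     // 0 -- both children point back to the root
parentIndex(2);     // 0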

In-order

With in-order, the algorithm is as follows: Start at the root node. Recursively call the in-order function on the subtree to the left of the current node. Check the value of the current node; if it has a value, record it. Recursively call the in-order function on the subtree to the right of the current node. In the above example, starting at the root node, we would capture the values in this order: A, B, C, D, E, F, G, H, I. Hence the name: the letters are in order!
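
In code, a hedged sketch might look like this (assuming each node has left, right, and value properties):

function inOrder(node, values) {
  values = values || [];
  if (node) {
    inOrder(node.left, values);   // 1. traverse the left subtree first
    values.push(node.value);      // 2. then record the current node
    inOrder(node.right, values);  // 3. then traverse the right subtree
  }
  return values;
}
// on a binary search tree, inOrder(root) returns the values in sorted order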

Post-order

With post-order, the algorithm is as follows: Start at the root node. Recursively call the post-order function on the subtree to the left of the current node. Recursively call the post-order function on the subtree to the right of the current node. Check the value of the current node; if it has a value, record it. In the above example, starting at the root node, we would capture the values in this order: A, C, E, D, B, H, I, G, F. We can see that the implementations of these three kinds of DFS are very similar: the pseudocode is the same, just in a different order! Indeed, once you implement one of these algorithms, you can implement the rest just by changing the order of the code.
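
The same sketch as before, with only the order of the three steps changed (same assumed node shape):

function postOrder(node, values) {
  values = values || [];
  if (node) {
    postOrder(node.left, values);  // 1. left subtree
    postOrder(node.right, values); // 2. right subtree
    values.push(node.value);       // 3. the current node comes last
  }
  return values;
}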

Pre-order

With pre-order, the algorithm is as follows: Start at the root node Check the value of the current node; if it has a value, record it Recursively call the pre-order function on the subtree to the left of the current node Recursively call the pre-order function on the subtree to the right of the current node. In the above example, starting at the root node, we would capture the values in this order: F, B, A, D, C, E, G, I, H.
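
And the corresponding sketch for pre-order, again just reordering the three steps (same assumed node shape):

function preOrder(node, values) {
  values = values || [];
  if (node) {
    values.push(node.value);      // 1. the current node comes first
    preOrder(node.left, values);  // 2. then the left subtree
    preOrder(node.right, values); // 3. then the right subtree
  }
  return values;
}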

Doubly Linked Lists

a type of linked list that has both a next reference and a previous reference for each node. This allows iteration in both directions: forward and backward. Typically a doubly linked list also has a head reference and a tail reference. If you were to implement the list in JavaScript, a Node constructor function would be:

function Node(val) {
  this.val = val;
  this.next = null;
  this.prev = null;
}

Creating a small list using the Node constructor:

// Creating a doubly linked list with 1 element, plus head and tail references
var head = new Node(30);
var tail = head;

// Adding -85
var newNode = new Node(-85);
tail.next = newNode;
newNode.prev = tail;
tail = newNode;

// Adding 10
newNode = new Node(10);
tail.next = newNode;
newNode.prev = tail;
tail = newNode;

Note that because we keep a reference to each node's previous node, pop is O(1) on doubly linked lists, while it is O(n) on singly linked lists. Finding a node by its index is also faster with a doubly linked list, since you can choose to traverse the list either from the front (for nodes in the first half) or from the back (for nodes in the second half). However, the complexity of finding a node by index is still O(n): remember, big O notation doesn't care about constants, and cutting the runtime of a linear algorithm in half still results in a linear algorithm!
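
The snippet above builds the list by hand; a hedged sketch of a DoublyLinkedList constructor wrapping the same logic (the constructor and its push method are assumptions for illustration) might look like:

function DoublyLinkedList() {
  this.head = null;
  this.tail = null;
}

DoublyLinkedList.prototype.push = function(val) {
  var newNode = new Node(val);
  if (this.tail === null) {
    this.head = newNode;      // empty list: new node is both head and tail
  } else {
    this.tail.next = newNode; // link the old tail to the new node
    newNode.prev = this.tail; // and point back at the old tail
  }
  this.tail = newNode;
};

var list = new DoublyLinkedList();
list.push(30);
list.push(-85);
list.push(10);
list.head.val;      // 30
list.tail.prev.val; // -85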

Singly Linked Lists

consists of nodes in which each node only has a next pointer. There is a reference to the first node, typically called the head. Optionally a tail reference might also be tracked. Each node appears in sequential order in the diagram, but in reality the nodes are scattered all over memory (as we learned earlier). The arrow from one node to the next is a pointer that holds a reference to the next element. A singly linked list node might be created like this in JavaScript:

function Node(val) {
  this.val = val;
  this.next = null;
}

The code below shows how to create the list that was diagrammed above:

var head = new Node(30);
head.next = new Node(-85);
head.next.next = new Node(10);
head.next.next.next = new Node(0);

head.val;           // evaluates to 30
head.next.next.val; // evaluates to 10

The singly linked list has some drawbacks. For example, if you wanted to access the second-to-last element in the list, you would have to start from the front of the list and iterate through each item. Also, an operation like pop is O(n) because you must iterate from the front to find the element before the last. To solve some of these problems, you could use a doubly linked list.
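
To see why pop is O(n) here, consider this minimal sketch (the pop name is illustrative, and it simplifies by ignoring the single-element case):

function pop(head) {
  if (head === null || head.next === null) {
    return null; // empty or single-element list: nothing to walk to
  }
  var current = head;
  while (current.next.next !== null) { // walk to the second-to-last node
    current = current.next;
  }
  var removed = current.next;
  current.next = null;                 // detach the last node
  return removed.val;
}

pop(head); // evaluates to 0, after iterating over the whole list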

Binary Heaps

data structure that shares some characteristics with binary search trees, but differs in its implementation, which makes it a better data structure for some operations and worse for others. With binary heaps, there are two common variants: a max heap (the higher values bubble up to the root) and a min heap (the lower values bubble up to the root). A binary heap can be implemented in two different ways, either as a tree or as an array, but let's first define the rules for a binary heap: each node can have at most two children (similar to a binary search tree), and when a node is added, you always start at the root and keep adding, top to bottom, left to right. This is unlike a binary search tree: in a binary search tree, the spot where you add a node depends on its value, but in a binary heap, the spot in which you add is always the next open position, top to bottom, left to right. Let's take a look at one kind of binary heap, a max heap.

O(n²)

function allPairs(arr) {
  var pairs = [];
  for (var i = 0; i < arr.length; i++) {
    for (var j = i + 1; j < arr.length; j++) {
      pairs.push([arr[i], arr[j]]);
    }
  }
  return pairs;
}

For each element of the array, we are iterating over all remaining elements again. Therefore, the runtime is O(n * n), or O(n²). It's a helpful rule of thumb that in general, if you see nested loops, the runtime will be O(n^k), where k is the number of levels of nesting. In other words, a function with a single for loop will be O(n), a function with a loop inside of a loop will be O(n²), a function with a loop inside of a loop inside of a loop will be O(n³), and so on. However, this rule of thumb doesn't always hold, as the following examples show:

function logMultiples(n) {
  for (var num1 = 1; num1 <= n; num1++) {
    for (var num2 = 1; num2 <= n; num2++) {
      console.log(num1 * num2);
    }
  }
}

function logSomeMultiples(n) {
  for (var num1 = 1; num1 <= n; num1++) {
    for (var num2 = 1; num2 <= Math.min(n, 10); num2++) {
      console.log(num1 * num2);
    }
  }
}

logMultiples is O(n²), as the rule of thumb predicts, but logSomeMultiples is O(n): its inner loop runs at most 10 times no matter how large n gets, so the total work grows linearly with n.

Here's an example of a recursive function without a base case:

function thisIsAProblem() {
  thisIsAProblem();
}

If you invoke thisIsAProblem, the function will call itself. But this inner function will again call thisIsAProblem, which will again call thisIsAProblem, and so on. There's never any way for this function to stop calling itself, which means the call stack will just fill up with copies of thisIsAProblem until the interpreter runs out of stack space and throws an error (in most JavaScript environments, a RangeError: Maximum call stack size exceeded).
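
By contrast, here's a hedged sketch of the same structure with a base case added (the countDown name is just for illustration):

function countDown(n) {
  if (n <= 0) return;  // base case: stop the recursion
  console.log(n);
  countDown(n - 1);    // recursive case: move toward the base case
}

countDown(3); // logs 3, 2, 1, then stops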

How to insert a node into a binary heap

Let's first examine the operations necessary for correctly inserting a node into a max-heap:

1. Add the element to the heap, working your way from top to bottom, then left to right. We do not worry about comparing values and trying to insert it at the correct position based on its value; we only need to place it in the next open spot to satisfy the rules of a heap (nodes are placed top to bottom, then left to right).
2. Once the element is placed at the last spot in the heap, compare the element with its parent. If the element is less than its parent, it is in the correct order and you are done.
3. If the element is greater than its parent, swap the parent node and the element.
4. Keep repeating this process until the element you are adding has a parent that is greater than itself, or the element you are adding is at the root.

This can be done iteratively or recursively; an iterative sketch follows below.
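
Here's a minimal sketch of this bubble-up insertion over an array-based max-heap (the insert name is illustrative), reusing the 2n + 1 / 2n + 2 index rules from earlier:

function insert(heap, value) {
  heap.push(value);                     // place at the next open spot
  var idx = heap.length - 1;
  var parentIdx = Math.floor((idx - 1) / 2);
  // bubble up while the new value is greater than its parent
  while (idx > 0 && heap[idx] > heap[parentIdx]) {
    var temp = heap[parentIdx];
    heap[parentIdx] = heap[idx];
    heap[idx] = temp;
    idx = parentIdx;
    parentIdx = Math.floor((idx - 1) / 2);
  }
  return heap;
}

var heap = [41, 39, 33, 18, 27, 12];
insert(heap, 55); // [55, 39, 41, 18, 27, 12, 33] -- 55 bubbled up to the root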

Removing a node in a binary search tree

one of the most complex operations: removing a node. Let's examine the three situations in which a node is removed, in increasing order of difficulty.

Removing a node with no children: The simplest possible scenario is that the node we want to remove has no children. In this case, all we need to do is ensure that the parent node of the node to be removed (if the node to be removed is not the root) is aware that the node has been removed.

Removing a node with one child: If we remove a node that has one child, we need to make sure to update the parent of the removed node and set its left or right property to be the single child of the removed node.

Removing a node with two children: Removing a node with two children is a bit more difficult, because after removal the tree still needs to satisfy the conditions of a binary search tree. Here are the steps involved: find the node to remove, find the successor of the node, and replace the node to be deleted with the successor. What is the successor? If the current node has a right child, it is the leftmost descendant of that right child; if the right child has no left child, the right child itself is the in-order successor. In other words, the successor is the unique node which can replace the node being removed while keeping the subtree below it a binary search tree. It can be difficult to understand this algorithm just by reading about it; it's much easier to understand with the help of some visual aids, and it's worth practicing by creating and removing nodes yourself.
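
Here's a hedged sketch of just the successor-finding step (the findSuccessor name is illustrative, and it assumes the node has a right child, i.e., the two-children case):

function findSuccessor(node) {
  var current = node.right;        // the successor lives in the right subtree
  while (current.left !== null) {  // walk left as far as possible
    current = current.left;
  }
  return current; // the leftmost descendant of the right child
}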

O(1)

or constant time, because the algorithm is not dependent on a variable-size data set. In other words, regardless of the input size, the runtime of the algorithm will not grow beyond some constant size. (In many cases, it will be roughly the same regardless of the input.)

function add(num1, num2, num3) {
  return num1 + num2 + num3;
}

In the above example, add requires two addition operations. The size of the numbers doesn't affect how many additions need to be performed, so in this case the runtime isn't dependent on the size of the inputs.

function sayHello() {
  for (var i = 0; i < 100; i++) {
    console.log("Hello");
  }
}

sayHello logs a message to the console 100 times whenever it is called. This function is also O(1); it doesn't even have any inputs!

Big O Runtime of Common Operations

we can fully analyze the runtime of common array operations: Find is O(n), because in the worst case all elements in the array must be iterated over to find the element you are looking for. Remove is also O(n), because an array must remain contiguous, so if an element is removed, all elements after it must be shifted down by 1, which is an O(n) operation.

How Do Arrays Grow?

when the array is created, your JavaScript interpreter must decide how large to make the array's contiguous space in memory. When memory is allocated to store an array, it is often stored on the heap. However, there are many other variables that are also being placed on the heap: numbers, objects, strings, and other data types may all be there as well, stored in memory alongside the array we care about.

Let's say our array is:

var arr = [5, 12, 2, 30, 8];

and that the interpreter chose to allocate 9 memory slots for it. This means that as long as our array has fewer than nine elements in it, it can live in memory starting at the same address (say, address 3507). Now, let's say the program does the following:

arr.push(3);  // length 6
arr.push(13); // length 7
arr.push(14); // length 8
arr.push(50); // length 9
arr.push(35); // length 10 -- longer than the allocated space in memory!

Everything is fine until the line arr.push(35). At this point the JavaScript interpreter has run out of allocated heap space. So in order to complete the push, the interpreter must allocate more contiguous space (more than 9 slots), copy over every element of the array to the new memory addresses, and then put the new element (35 in this case) into the array. So even though array access is constant time, the push operation can be O(n) if we run out of available space at the existing starting point in memory. Sometimes this operation is referred to as amortized O(1). This is because JavaScript sets aside a certain amount of space when the array is created, so that up to a point the push operation will be constant; but if you exhaust that available space, the entire array will need to be copied somewhere else.

