Unordered Data Structures, Ordered Data Structures, Object-Oriented Data Structures in C++


What is a disjoint set?

A collection of sets with unique items (no item appears in more than one set). Elements within a set are considered equivalent to each other. The identity element is the element that represents the set.

Which of these is considered the least run-time complexity? O(1) O(log* N) O(log N) O(log log N)

O(1)

The SUHA states

SUHA = Simple Uniform Hashing Assumption: for any two distinct keys, P(h(key1) == h(key2)) = 1/N, where N is the number of slots in the hash table's array.

int i = 0; int *j = &i; How many memory allocations are made on the stack and on the heap for the above code? For example, declaring an integer would count as one memory allocation.

Two allocations are made, with both to the stack: one for the integer i and one for the memory address that is the value of pointer j.

T or F: C++ allows a variable to be declared in a user-defined member function of a user-defined class that can be defined when the function is called.

True

T or F: The custom assignment operator is a public member function of the class.

True

T or F: The member functions of a class always have access to every member data variable of that class.

True

T or F: When declaring a constructor for a class, the name of the constructor must be the name of the class.

True

What is the std::vector? How do you include it, initialize, add to back, access an element, get the number of elements

#include <vector>
std::vector<T> x;  // T is the type of the items in the vector
x.push_back(y);    // add to back
x[idx];            // access an element
x.size();          // number of elements

Suppose we declare a variable as "int i;" Which of the following expressions returns the address of the memory location containing the contents of variable i? i->addr i.addr &i *i

&i returns the address of its operand, i

class Pair { public: double a,b; }; int main() { Pair *p = new Pair; p->a = 0.0; return 0; } The expression p->a is equivalent to which one of the following?

(*p).a The arrow operator -> accesses the member named by its right operand within the object at the memory address given by its left operand.

Using the convention followed by the video lessons, given three disjoint sets (1,3,5,7), (2,8) and (4,6), which set would be referenced by the value 3?

(1,3,5,7)

What is the union of the disjoint sets (1,3,5,7) and (2,8)?

(1,3,5,7,2,8)

Which of these edge lists has a vertex of the highest degree? (a, b), (a, c), (a, d), (b, d) (a,b), (b, c), (d, b), (g, b) (d,b), (g,a), (h,f), (c, e) (a, c), (e, g), (c, e), (g, a)

(a,b), (b, c), (d, b), (g, b) Vertex b has degree four

what is the height of a non-existent tree?

-1

Create a binary search tree by inserting the following five values one at a time: 4 6 5 7 8 What is the height of this tree?

3

Suppose this stack is implemented as a linked list. std::stack<int> s; s.push(1); s.push(2); s.push(3); What is the value at the head of the linked list used to implement the stack s?

3

Balanced Binary Tree

A binary tree in which the left and right subtrees of any node have heights that differ by at most 1

What is the sum of the degrees of all vertices in a graph?

2*number of edges

what is the height of a leaf node?

0

Which adjacency matrix corresponds to the edge list: (1,2), (2,3), (3,4), (1,4) (where the rows/columns of the adjacency matrix follow the same order as the vertex indices)?

Listing the upper triangle (including the diagonal) row by row: 0 1 0 1 / 0 1 0 / 0 1 / 0. Written as the full symmetric 4 x 4 matrix:
0 1 0 1
1 0 1 0
0 1 0 1
1 0 1 0

Suppose we had the following interface for a stack and queue, along with a correct implementation. (Note that in this simple version, the "pop" and "dequeue" methods will remove an item and also return a copy of that same item by value. This is a little different from how the C++ Standard Template Library implementations of a stack and queue work. In STL, you have separate functions for peeking at the next value that would be removed, and for actually removing the item.) class Queue{ public: Queue(); bool enqueue(int x); int dequeue(); bool isEmpty(); // other lines omitted }; What output does the following code produce? main() { Stack s = Stack(); Queue q = Queue(); for(int i = 0; i < 5; i++){ s.push(i); q.enqueue(i); } for(int i=0; i < 5; i++){ s.push(q.dequeue()); q.enqueue(s.pop()); } while (!q.isEmpty()) std::cout << q.dequeue() << " "; }

0 1 2 3 4

What is the log*n of 2^65536?

log*(2^65536) = 1 + log*(65536)
= 1 + 1 + log*(16)
= 1 + 1 + 1 + log*(4)
= 1 + 1 + 1 + 1 + log*(2)
= 1 + 1 + 1 + 1 + 1 + log*(1)
= 1 + 1 + 1 + 1 + 1 + 0 = 5

When encoding height into the root of an up-tree, what value should be placed in element 7 of the following array?
value: 3 | -1 | 7 | -1 | 7 | -1 | ?
index: 1 |  2 | 3 |  4 | 5 |  6 | 7

1 -> 3 -> 7 and 5 -> 7, so the up-tree rooted at element 7 has height 2; value = -3. The value should be equal to -1 minus the height. A singleton disjoint set would have height zero, but there is no -0, and 0 would point to the 0th element of the array, so we increment the height by one and negate it before storing it in the root of the up-tree.

When encoding size into the root of an up-tree, what value should be placed in element 7 of the following array?
value: 3 | -1 | 7 | -1 | 7 | -1 | ?
index: 1 |  2 | 3 |  4 | 5 |  6 | 7

1 -> 3 -> 7 and 5 -> 7, so the up-tree rooted at element 7 contains four elements (1, 3, 5, 7); the size is stored as its negation, so value = -4.

What are the steps of the heap sort algorithm? What is the time complexity? Is it stable (does it maintain the relative order of the items with equal sort keys)?

1) Build heap, O(n) time 2) n x removeMin, O(lg n) time each 3) swap elements if you need to re-order to ascending or descending. O(n lg n) time complexity overall. It is not stable.
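A minimal sketch of these steps in C++, using std::priority_queue with std::greater as the min-heap (the course's own heap class is not shown here, so this is illustrative only):
#include <functional>
#include <iostream>
#include <queue>
#include <vector>
int main() {
  std::vector<int> data = {5, 1, 4, 2, 3};
  // step 1: build a min-heap from the data (the range constructor heapifies in O(n))
  std::priority_queue<int, std::vector<int>, std::greater<int>> heap(data.begin(), data.end());
  // step 2: removeMin n times, O(lg n) each, which yields the items in ascending order
  while (!heap.empty()) {
    std::cout << heap.top() << " ";  // prints 1 2 3 4 5
    heap.pop();
  }
  std::cout << std::endl;
  return 0;
}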

How does heap sort work and what is the time complexity?

1) Build a min heap in O(n) 2) Call removeMin n times. Gives O((lgn) * n ) 3) Swap terms if needed to get descending/ascending

What three things does a hash table consist of?

1) a hash function 2) an array 3) collision handling

Requirements for a tree

1) must have a root node, which by definition has no incoming edges 2) no cycles 3) each node has exactly one parent node, except the root which has zero. 4) Must have directed edges A tree is a "rooted, directed, acyclic structure"

How to remove a node from a BST: 1) leaf node 2) node with one child 3) node with two children

1) simply remove the leaf 2) connect the parent of the node directly to the node's child, then remove the node 3) find the IOP (in-order predecessor), replace the node's value with the IOP's value, then remove the IOP

3 Types of variable storage

1. Direct storage - the variable holds the object's data itself.
2. Storage by pointer - denoted by *; the type of the variable is followed by *. The variable holds a memory address and points to the allocated space of an object of the pointed-to type.
3. Storage by reference - creates an alias; does not store memory itself. Denoted by &. Must be assigned when the reference variable is created. Changing the object through any of its aliases has an identical effect.
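A minimal sketch of the three storage types (the variable names are just for illustration):
int main() {
  int a = 7;     // direct storage: a holds the value itself
  int* p = &a;   // storage by pointer: p holds the memory address of a
  int& r = a;    // storage by reference: r is an alias for a and must be bound at creation
  r = 9;         // a is now 9, and *p is also 9
  return 0;
}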

You can pass arguments in three ways

1. Pass by value - the variable is copied and the original variable is not modified; the copy constructor is called to create a copy on the function's stack frame.
2. Pass by pointer - a pointer to the variable is passed, and the function can modify the original variable through it.
3. Pass by reference - an alias to the variable is passed; the function can modify the original variable directly and no copy is made.
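A minimal sketch of the three styles (the function names are made up for illustration):
void byValue(int x)      { x = 1; }   // operates on a copy; the caller's variable is unchanged
void byPointer(int* x)   { *x = 2; }  // modifies the caller's variable through its address
void byReference(int& x) { x = 3; }   // modifies the caller's variable through an alias
int main() {
  int n = 0;
  byValue(n);     // n is still 0
  byPointer(&n);  // n is now 2
  byReference(n); // n is now 3
  return 0;
}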

Name scenarios where the copy constructor is invoked

1. Passing an object as a parameter to a function (by value): the class object is copied into the stack frame of the called function.
2. Returning an object from a function (by value): the class object is copied into the stack frame of the calling function.
3. Initializing a new object, say from the value returned from a function; see the second call of the copy constructor below.
Cube foo() {
  Cube c;    // calls the default constructor
  return c;  // calls the copy constructor for the first time,
             // since the value is copied into the stack frame of main
}
int main() {
  Cube c2 = foo();  // calls the copy constructor a second time,
                    // since the value returned by foo is copied to c2
  Cube c3 = c2;     // only calls the copy constructor and not the default constructor
  Cube c4;          // default constructor is called
  c4 = c2;          // no constructor is called since both objects already exist;
                    // this uses the automatic assignment operator to copy the contents
  return 0;
}

What are the properties of a custom assignment operator?

1. Public member of the class 2. Has the function name operator= 3. Has a return value of a reference of the class's type 4. Has exactly one argument, which is a const reference of the class's type Cube& Cube::operator=(const Cube& obj){ length_ = obj.length_; return *this; }

Given any binary tree with 128 nodes where each node has a left pointer and a right pointer, how many of these pointers are set to nullptr?

129 The number of node pointers to nullptr is equal to the number of nodes plus one. For a tree with just one node (the root), the number of pointers to nullptr is two. Adding a child node to any tree replaces one pointer to nullptr with a pointer to the new node, which then has two pointers to nullptr. Hence, for every new node, there is a net increase of one pointer to nullptr.

Question 2 Create a binary search tree by inserting the following eight values one at a time: 3 1 2 4 6 5 7 8 What is the balance factor of the root node of this tree? (For this question, do not perform any rotations on this tree as you insert the items. It's just a binary search tree, not necessarily a balanced BST such as an AVL tree.)

2. The left subtree height is 1 and the right subtree height is 3, so the balance factor is 3 - 1 = 2.

int i = 0, j = 1; int *ptr = &i; i = 2; *ptr = 3; ptr = &j; j = i; *ptr = 4; Enter the number of different values stored in the same address that variable i has during the execution of the code above. (Your answer should be a single integer, which is the total number of different values assigned to that address.)

3

For the binary search tree created by inserting these items in this order: 4 3 5 1 2, which node among 1 through 5 is the deepest node with a balance factor of magnitude two or greater? (For this question, do not perform any balancing rotations as you insert these items.)

3 Looking at the subtree rooted at node 3, there is no right child, and so the height of its non-existent right subtree is -1. (Leaf nodes are subtrees of height 0, so non-existent subtrees have height -1 for consistency). The left subtree consists of node 1 and its right child 2. That is a subtree of height 1. Therefore, the balance factor of node 3 is (height of right subtree) - (height of left subtree) = (-1) - (1) = -2. Nodes 2 and 5 are leaf nodes, so their balance factors are (-1) - (-1) = 0. Node 1 has a balance factor of (0) - (-1) = 1. The root node 4 has a balance factor of (0) - (2) = -2, which is the same balance factor as node 3, but node 3 is deeper in the tree, so node 3 is the correct answer: It is the deepest node whose height balance factor has magnitude of two or greater.

According to the disjoint set array representation in the video lessons, Which of the following arrays would NOT be a valid representation of the disjoint set (1,3,5,7)? See Quiz 2.4

value: 3 | -1 | 5 | -1 | 7 | -1 | 1 | -1
index: 1 |  2 | 3 |  4 | 5 |  6 | 7 |  8
This is indeed not valid because there is no root of the up-tree. Element 1 points to element 3, which points to element 5, which points to element 7, which points back to element 1, so no element in this disjoint set is the root and would represent the disjoint set.

How many explicit (non-automatic) constructors are present in the class? class Animal { public: Animal(); Animal(std::string name); Animal(std::string name, int age); Animal(std::string name, int age, double weight); Animal(const Animal & other); void setName(std::string name); std::string getName(); private: // ... };

5 (the custom default constructor, 3 custom non-default constructors, and 1 copy constructor)

What is the height of the binary search tree created by inserting the following values one at a time in the following order of insertion: 1 2 3 4 5 6 7 ?

6

Assume you are storing a complete binary tree as a contiguous list of keys in an array such that the root's key is stored at location 1 of the array, the keys from all of the nodes at the next level of the tree are stored in left-to-right order in subsequent locations in the array, then similarly for all of the nodes of each subsequent level. At what array location would the key stored in the node that is the left child of the right child of the root?

6 Correct. The root is at position 1. Its right child is at 2*(1) + 1 = 3 and position 3's left child is at 2*(3) = 6.

Suppose that we have numbers between 1 and 1000 in a binary search tree and we want to search for the number 363. Which of the following sequences can not be the sequence of nodes visited in the search? 2, 252, 401, 398, 330, 344, 397, 363 924, 220, 911, 244, 898, 258, 362, 363 2, 399, 387, 219, 266, 382, 381, 278, 363 925, 202, 911, 240, 912, 245, 363

925, 202, 911, 240, 912, 245, 363

Which operator is used to send a sequence of strings, numbers and other values to the standard library's cout object in a specific order so that they will be printed to the console? Which operator is used with cin to accept user input?

<< (the stream insertion operator) is used with cout; >> (the stream extraction operator) is used with cin. Example: std::cout << "a" << 3; is first evaluated as (std::cout << "a") << 3; the expression in the parentheses returns a reference to cout after sending "a" to it, so the second << operator sends the value 3 to cout. You can think of it like this: after the 3 has been sent, the whole expression has evaluated to std::cout itself. After each << is evaluated, the sub-expression evaluates to a reference to std::cout, including at the very end, after all the streamed items have been sent. This will be useful to know later, when we'll show you how to make your own classes compatible with streaming to std::ostream objects (like std::cout).

What is a base class? What is a derived class?

A base class is a generic form of a derived class. A derived class inherits from the base class. For example:
class Shape {
  public:
    Shape();
    Shape(double width);
    double getWidth() const;
  private:
    double width_;
};
namespace uiuc {
  class Cube : public Shape {
    public:
      Cube(double width, Color colour);
      double getVolume() const;
    private:
      Color colour_;
  };
}
The variable width_ also exists in class Cube and is accessible via the method getWidth() already defined in the Shape class. Functions specific to and defined in the Cube class are not available in the Shape class. When initializing a derived class, the base class must be initialized. By default, it can use the default constructor of the base class; however, a custom constructor can be used with an initialization list. The syntax is as follows:
// definition in the .cpp file
namespace uiuc {
  // the following syntax directs C++ to first call the Shape(double width) constructor
  // with the parameter width, then execute the constructor code specific to Cube
  Cube::Cube(double width, Color colour) : Shape(width) {
    colour_ = colour;
  }
  double Cube::getVolume() const {
    // width_ is a private member of Shape, so Cube cannot access it directly;
    // instead we use the publicly available accessor in the Shape class
    return getWidth() * getWidth() * getWidth();
  }
}

For which situation described here can Dijkstra's algorithm sometimes fail to produce a shortest path? You would want to avoid using Dijkstra's algorithm in this situation. A connected graph where some of the edge weights are negative and some have weight zero. A connected graph where there are multiple paths that have the same overall path cost (distance), and all of the edge weights are non-negative. A connected graph where all of the edges have the same positive weight. A connected graph where some of the edge weights are zero and the rest are positive.

A connected graph where some of the edge weights are negative and some have weight zero. There is nothing wrong with the edge weights of zero, but the negative weights are a problem. Dijkstra's algorithm, without modifications, achieves its fast running time by making certain assumptions about which paths are best. If it encounters an edge with negative weight, the assumptions fail, and it may not correctly identify the shortest path. Some people modify Dijkstra's algorithm to iterate when negative edge weights are encountered, to make corrections. However, this causes the algorithm to run very slowly in the worst case, and it's not part of the classical algorithm. (As a separate note, if there is any graph where a cycle has weights that sum to a negative value overall, then other shortest path algorithms can also fail to find a shortest path even if they are able to handle negative edge weights in some cases. That's because the graph may have paths with infinitely negative weight.)

What is an Adjacency Matrix? What's the time complexity for insert vertex remove vertex areAdjacent(v1,v2) incidentEdges(v)

A graph implementation where you have a vertex list and an edge list stored in arrays or hash tables. You also have an n x n matrix, where n is the number of vertices. A 1 in a slot indicates there is an edge connecting the two vertices; a 0 indicates no edge. Only the top-right half of the matrix is filled in an undirected graph, since the bottom-left half is redundant. Instead of a 1, you can use a pointer to the edge in the edge list. insert vertex: O(n), where n is the number of vertices; remove vertex: O(n); areAdjacent(v1,v2): O(1); incidentEdges(v): O(n)

What is an Adjacency List? What's the time complexity for insert vertex remove vertex areAdjacent(v1,v2) incidentEdges(v)

A graph implementation where you have a vertex list and an edge list stored in arrays or hash tables. Vertex list: each vertex has a linked list of all its incident edges; each entry in the linked list has a pointer to the edge in the edge list. Edge list: each edge stores the two vertices connected by the edge and the name of the edge; each edge in the edge list also has pointers back to its locations in the linked lists of the vertex list. insert vertex: O(1), since you just add to the vertex list and point to nullptr; remove vertex: O(deg(v)), which is at worst 2*m; areAdjacent(v1,v2): O(min(deg(v1),deg(v2))); incidentEdges(v): O(deg(v))

What is an edge list? What's the time complexity for insert vertex remove vertex areAdjacent(v1,v2) incidentEdges(v)

A graph implementation where you have a vertex list and an edge list stored in arrays or hash tables. The vertex list simply has the vertex name in each slot. The edge list has the two vertices in the edge and the name of the edge. insert vertex: O(1) amortized; remove vertex: O(m), where m is the number of edges; areAdjacent: O(m); incidentEdges: O(m)

Simple graph

A graph with no self-loops or multi-edges

Which of the following data structures would be the better choice to implement a memory cache, where a block of global memory (indicated by higher order bits of the memory address) is mapped to the location of a block of faster local memory. Why? A hash table implemented with separate chaining, using an array of linked lists. A hash table implemented with double hashing. A hash table implemented with linear probing. An AVL tree.

A hash table implemented with double hashing. Double hashing would be a good strategy because the cache addresses are quite small and compactly stored in the array. Furthermore, double hashing is more efficient than linear probing, which suffers from clumping.

What is a pair in C++? What is the directive to add it? What is the syntax?

A pair is a way to combine two objects of possibly different types into a single object so you don't need to create a custom class.
#include <utility>
std::pair<std::string, int> myPair;
myPair.first = "hello";
myPair.second = 5;
You can also use the make_pair helper function:
#include <utility>
std::pair<std::string, int> myPair;
myPair = std::make_pair("hello", 5);

We have looked at examples where the assignment operator returned the value "*this". The variable "this" is available by default in most class member functions. What is the value of this built-in class variable "this"? A reference to the current object. An alias of the current object. A pointer to the current object instance. A pointer to a heap-memory copy of the current object.

A pointer to the current object instance. This is correct. In fact, members of the current object can be accessed as "this->membername". For example, if you define a member function whose argument has the same name as a member variable, any use of that name in the local scope of the function refers to the argument and not the member variable, but you can still access the member variable as "this->membername". Hence the following example works.
class Just_a_double {
  double val;
  public:
    void setValue(double val) { this->val = val; }
};

what is #pragma once?

A preprocessor directive that instructs the compiler to include this header file only once per compilation, even if it is #included in multiple places.

Suppose you are given an undirected simple graph with unweighted edges, and for a particular specification of three vertices uu, vv, and ww, you want to find the shortest path from uu to ww that goes through vv as a landmark. What is the most efficient method that can find this? A single run of Dijkstra's algorithm from uu. Two runs of Dijkstra's algorithm, first from uu and then from vv. A single run of breadth-first search from vv. Three runs of breadth-first search: once each from uu, vv, and ww.

A single run of breadth-first search from vv. A single breadth-first search from the landmark vertex finds the shortest paths from it to the start vertex and the end vertex, and since the edges are undirected, their combination is the shortest path from start to end that also visits the landmark. It's not necessary to use Dijkstra's algorithm in this case since the edges are unweighted.

What is a B-Tree?

A tree data structure that provides for efficient searching, adding, and deleting of items. All keys within a node are in sorted order. Each node contains no more than m-1 keys. Each internal node can have at most m children. A root node can be a leaf or have two to m children. Each non-root internal node has [ceil(m/2), m] children. This happens because when inserting elements, we split a node into two parts as soon as the array storing keys in that node has become full and cannot accommodate any new key. This ensures that each non-root internal node will be at least half full (the root is the exception and can be a leaf).

What is a data structure's ADT?

Abstract Data Type. It describes how data and operations interact with the structure; it is not an implementation, just a description of the behavior.

Suppose you have a rapid data feed that requires you to remove existing data point vertices (and any of their edges to other vertices) quickly from a graph representation. Which graph representation would you WANT to utilize? Edge List Adjacency Matrix Adjacency List All three representations have the same time complexity for removing a vertex from a simple graph of n vertices.

Adjacency List Since the adjacency list has a list of the edges the removed vertex shares with other vertices, it only needs time proportional to the degree of the removed vertex. In the worst case, that vertex could be connected to all of the other vertices and so require O(n) time, but in the typical case the degree will be less and the adjacency list is a better choice than the adjacency matrix.

Suppose you want to implement a function called neighbors(v) that returns the list of vertices that share an edge with vertex v. Which representation would be the better choice for implementing this neighbors() function? Edge List Adjacency Matrix Adjacency List All three representations result in the same time complexity for the neighbor() function.

Adjacency List The adjacency list requires a simple walk through the list of pointers to adjacent edges to find the neighboring vertices. This representation has an "output sensitive" running time meaning it runs as fast as possible based on the minimum amount of time needed to output the result.

Suppose you want to implement a function called neighborsQ(v1,v2) that returns true only if vertices v1 and v2 share an edge. Which representation would be the better choice for implementing this neighborsQ() function? Edge List Adjacency Matrix Adjacency List All three representations support the same time complexity for implementing the neighborQ() function.

Adjacency Matrix The neighborsQ(v1,v2) function can simply lookup the appropriate v1,v2 entry in the adjacency matrix, which takes constant O(1) time. This representation supports the fastest method for implementing this query.

Suppose you have a rapid data feed that requires you to add new data point vertices quickly to a graph representation. Which graph representation would you NOT want to utilize? Edge List Adjacency Matrix Adjacency List All three graph representations have the same time complexity for adding vertices to a simple graph.

Adjacency Matrix The adjacency matrix requires linear time, O(n), to add a vertex because the addition requires new entries to be placed in a new row and a new column of the matrix, and there are n elements in the new row and n elements in the new column. This means that as the number of vertices grows in the graph, it will take longer to add a new vertex, which is not a very good choice when processing a data feed.

If a B-Tree is completely filled, meaning every node holds its maximum number of keys and all non-leaf nodes has the maximum number of children, then what happens when an additional key is inserted into the B-Tree? After searching for the leaf node where the new key should go, the leaf is split in half as two separate leaf nodes, and then the middle value is thrown up to the layer above as an inserted key, and this insertion and rebalancing repeats until a new root key rises to the top, which adds a layer to the tree. A new node containing the new key is added above the previous root and becomes the new root. The new root will have one pointer leading to the old root node. A new leaf node is simply added to the B-Tree. Every leaf node in the entire B-Tree becomes parent to a new leaf node, but all but one of these leaf nodes are "blank" placeholder nodes that contain zero key values.

After searching for the leaf node where the new key should go, the leaf is split in half as two separate leaf nodes, and then the middle value is thrown up to the layer above as an inserted key, and this insertion and rebalancing repeats until a new root key rises to the top, which adds a layer to the tree.

Perfect Binary Tree

All leaf nodes are at the same depth and all internal nodes have two children

Which of these algorithms can be used to count the number of connected components in a graph? Count the number of times a breadth first traversal is started on every vertex of a graph that has not been visited by a previous breadth first traversal. Count the number of times a depth first traversal is started on every vertex of a graph that has not been visited by a previous breadth first traversal. All of the above None of the above

All of the above

Which graph representation has a better worst-case storage complexity than the others for storing a simple graph of n vertices? Edge List Adjacency Matrix Adjacency List All three graph representations have the same worst-space storage complexity for a simple graph of n nodes.

All three graph representations have the same worst-space storage complexity for a simple graph of n nodes. All three require O(n^2) storage in the worst case. The adjacency matrix requires O(n^2) space to store at least the upper-triangular portion of the n x n matrix. Both the edge list and adjacency list representations require O(n + m) storage, but in the worst case m is proportional to n^2 and O(n + n^2) = O(n^2).

What is a range-based for loop and what is the syntax?

Allows iteration over all objects in a container, which is useful because you cannot accidentally index past the end of the container. for (int x : container) { loop body } x is just a temporary copy and does not reference the actual object in the container. If you want to modify the items in the container, you can use a reference variable, such as int& x. If you don't want to create a copy but also don't want to modify the input, you can use const int& x; attempting to modify x is then a compile-time error.
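A minimal sketch of the three forms (assuming a std::vector<int> named v for illustration):
#include <iostream>
#include <vector>
int main() {
  std::vector<int> v = {1, 2, 3};
  for (int x : v)        { std::cout << x << " "; }  // x is a copy of each element
  for (int& x : v)       { x *= 2; }                 // x aliases each element, so the vector is modified
  for (const int& x : v) { std::cout << x << " "; }  // no copy, and modifying x would not compile
  return 0;
}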

The code that ensures the balance of an AVL tree after node insertion or removal only checks if the height balance factor is +2 or -2. What happens if the height balance factor of a node in an AVL tree after node insertion or removal is greater that +2 or less than -2? There is additional code not shown that handles the cases when the height balance factor is greater than +2 or less than -2. We ignore nodes in an AVL tree with height balance factor greater than +2 or less than -2 because they are statistically rare and are unstable, such that they are removed as soon as any tree balancing rotation occurs. When insertion and removal create a node whose height balance factor is greater than +2 or less than -2, that node always has a descendant with a height balance factor equal to +2 or -2 and when all of its descendant nodes are resolved, then its height balance factor will be no greater than +2 or no less than -2. An AVL tree never has a node with a height balance factor greater than +2 or less than -2, even after a node insertion or removal.

An AVL tree never has a node with a height balance factor greater than +2 or less than -2, even after a node insertion or removal. Every node in an AVL tree has a height balance factor of -1, 0 or 1. Inserting a node can increase the height of a subtree by only 1, and so can change the height balance factor of any node to no more than +2 or no less than -2. Similarly, a node in any binary search tree is removed either by deleting a leaf node, by shortening a chain of nodes by removing a node with a single child, or by replacing a node with its immediate ordered predecessor, and all three of these operations change the height of a subtree by no more than +1 or no less than -1.

Array vs vector in C++

Both are arrays, which store data in sequential memory slots. Arrays are static while vectors are dynamic (automatically resize). Size = number of elements currently in an array Capacity = max number of elements that can be stored without resizing

Which of the following data structures would be the better choice to implement a dictionary that not only returns the definition of a word but also returns the next word following that word (in lexical order) in the dictionary. An AVL tree. A hash table implemented with double hashing. A hash table implemented with linear probing. A hash table implemented with separate chaining, using an array of linked lists.

An AVL tree. While the AVL tree needs O(log n) time to find the definition of the word, which is worse than the performance of a hash table, the AVL tree can find the next word in lexical order in O(log n) time whereas any hash table would need O(N) steps to find the next word in lexical order.

Dijkstra's algorithm

An SSSP (single source shortest path) algorithm for finding the shortest paths between nodes in a weighted graph. Works on undirected or directed graphs, connected or unconnected. Does not work when there are negative edge weights. For a given source node in the graph, the algorithm finds the shortest path between that node and every other node. It can also be used for finding the shortest path from a single node to a single destination node by stopping the algorithm once the shortest path to the destination node has been determined. Its time complexity is O(m + n log n), where m is the number of edges and n is the number of vertices.
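A minimal sketch of Dijkstra's algorithm in C++ using a min-priority queue; the adjacency-list type and the function name here are assumptions for illustration, not the course's exact interface:
#include <functional>
#include <limits>
#include <queue>
#include <utility>
#include <vector>
// adj[u] holds (neighbor, weight) pairs; returns the shortest distance from source to every vertex
std::vector<long long> dijkstra(const std::vector<std::vector<std::pair<int,int>>>& adj, int source) {
  const long long INF = std::numeric_limits<long long>::max();
  std::vector<long long> dist(adj.size(), INF);
  using Entry = std::pair<long long, int>;  // (distance so far, vertex)
  std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> pq;
  dist[source] = 0;
  pq.push({0, source});
  while (!pq.empty()) {
    auto [d, u] = pq.top();
    pq.pop();
    if (d > dist[u]) continue;        // skip stale queue entries
    for (auto [v, w] : adj[u]) {
      if (dist[u] + w < dist[v]) {    // relax edge (u, v)
        dist[v] = dist[u] + w;
        pq.push({dist[v], v});
      }
    }
  }
  return dist;
}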

Kruskal's Algorithm

An algorithm to get the minimum spanning tree. Create a min-heap of edges keyed on their weights and use disjoint sets to build the minimum spanning tree: repeatedly remove the minimum-weight edge from the min-heap and union the sets of its two endpoints, but only if the endpoints are not already in the same set. Runs in O(m log m). You can also use a sorted array of edges, and this is still O(m log m).

Prim's Algorithm

An algorithm to get the minimum spanning tree sparse graph: (O(mlogm), since m is about n in a sparse graph. where m is number of edges and n is the number of vertices) dense graph: O(n^2*logn) Starting from a vertex, grow the rest of the tree one edge at a time until all vertices are included. Greedily select the best local option from all available choices without regard to the global structure. You can do this by using a minheap of edges with the current min weight to add them to the current tree.

What is an array and what is a vector in C++?

An array is a static array, while a vector is dynamic. Arrays store data in sequential blocks of memory. The size is the number of elements and the capacity is the max number of elements that can be stored in the array without resizing it.

What is an incident edge in a graph?

An edge connected to a vertex

What is the difference between an ordered and an unordered map?

An ordered map is the std::map type. It is stored in a tree structure and its keys are kept in sorted order. An unordered map is the std::unordered_map type. It is stored as a hash table and has no ordering of its keys. Ordered maps: O(log n) lookup; unordered maps: O(1) lookup on average. Example:
std::unordered_map<std::string, int> my_map;
my_map["hello"] = 5;
std::cout << my_map["hello"] << std::endl;

Which of the following is not a true statement about a complete binary tree? No node in a complete binary tree has only a right child. The worst-case run time for finding an object in a complete binary search tree of an ordered list of n items is O(lg n). The height of a complete binary tree of n nodes is floor(lg n). Any tree that contains a node with a single child is not a complete binary tree.

Any tree that contains a node with a single child is not a complete binary tree.

What is an adjacent vertex in a graph?

Any vertex connected to the vertex by an incident edge

T or F: a connected directed graph with no cycles is a tree.

False. For example, a directed graph such as A -> B, A -> C, B -> D and C -> D is connected and has no cycle, but is not a tree because there are multiple paths from vertex A to vertex D.

Which data structure below supports the fastest run time for finding an item in a sorted list of items? Array Linked List Binary Search Tree All of these data structures have the same run time complexity for finding an item in a sorted list of items.

Array: O(log n) worst case (binary search). Linked list: O(n). BST: O(h), which is O(log n) if balanced, but the worst case is O(n) if the tree degenerates into a linked list.

Compare queue implementation with a linked list vs an array

Array: you fill from the back of the array working toward the front; this gives amortized O(1) pushes and pops. Linked list: requires a pointer to the tail node; add onto the tail and remove from the head, which gives O(1) for all operations. Both implementations have similar (amortized) constant time complexities.

Info about destructors

As with constructors, an automatic default destructor is provided for the class; it automatically calls the default destructors of all the member objects of the class. A destructor should never be called directly; instead, the compiler inserts the calls depending on where the memory is allocated. If the object is allocated on the stack, the destructor is automatically called when the function returns. If the object is allocated on the heap, the destructor is automatically called when the heap memory is released with the delete operator.

What is the time complexity for find, insert and remove for BST average case, BST worst case, sorted array, sorted list.

BST average: O(logn) for all BST worst case: O(n) for all sorted array: O(logn), O(n), O(n) sorted list: O(n) for all

Why do we use namespaces in C++ programming?

Because two different libraries might use the same label for a class or variable Namespaces allow different libraries to use the same label for a class or variable because they can each define a unique namespace to differentiate them when they are used together in a program.

In a hash table, when should you use the different collision-handling techniques? When should you use an AVL tree instead?

Big records - use separate chaining, because it would take a lot of time to copy large records around in the array; a linked list is better. Structure speed (really efficient hashing with great runtime complexity) - use double hashing. Range finding / nearest neighbor - use an AVL tree; a hash table is terrible for this.

When implementing a queue which will need to support a large number of calls to its push() and pop() methods, which choice of data structure results in a faster run time according to "Big Oh" O() analysis? A linked list because an array cannot be used to implement a queue that supports both push() and pop() methods. The array implementation of a queue has a better run time complexity than does the linked list implementation. Both array and linked list implementations of a queue have the same run time complexity. The linked list implementation of a queue has a better run time complexity than does the array implementation.

Both array and linked list implementations of a queue have the same run time complexity. The push() and pop() methods can be implemented in constant time O(1) for both an array and a linked list. Push and pop run in amortized constant time O(1) for an array, and the large number of calls to push() and pop() ensures that the cost of resizing the array is properly amortized over those calls.

When implementing a stack which will need to support a large number of calls to its push() and pop() methods, which choice of data structure results in a faster run time according to "Big Oh" O() analysis? An array implementation because a linked list cannot be used to implement a stack that supports both push() and pop() methods. The linked list implementation of a stack has a better run time complexity than does the array implementation. Both array and linked list implementations of a stack have the same run time complexity. The array implementation of a stack has a better run time complexity than does the linked list implementation.

Both array and linked list implementations of a stack have the same run time complexity. The push() and pop() methods can be implemented in constant time O(1) for both an array and a linked list. Push and pop run in amortized constant time O(1) for an array, and the large number of calls to push() and pop() ensures that the cost of resizing the array is properly amortized over those calls.

class Just_a_double { public: double a; Just_a_double(double x) : a(x) { } Just_a_double() : Just_a_double(0) { } } Which constructors, if any, compile properly? Both constructors on lines 5 and 6 result in compiler errors. Both constructors on lines 5 and 6 compile properly The constructor on line 5 results in a compiler error but the constructor on line 6 compiles properly, The constructor on line 5 compiles properly, but the constructor on line 6 results in a compiler error.

Both constructors on lines 5 and 6 compile properly. Initializer lists can be used to initialize member variables as well as to delegate to other constructors of the class.

Which traversal method has a better run time complexity to visit every vertex in a graph? Breadth First Traversal Depth First Traversal Both have the same run time complexity. Neither traversal method will necessarily visit every vertex in a graph.

Both run in O(n+m) for n vertices and m edges

How do you build a heap from an unsorted array, and what is the time complexity?

Call heapifyDown on the first half of the elements, from the last non-leaf node up to the root (the last half of the array are leaves with no children). This builds a heap in O(n).

What is the union operation in a disjoint set?

Combines two sets into one. Afterwards, all elements share the same identity element, so calling find on any of them returns the same result.

What are the components of a good hash function?

1) Compresses the key's hash value to fit into the array (e.g., mod the array capacity).
2) Computation time must be O(1).
3) Deterministic: gives the same result every time for the same key.
4) Satisfies the Simple Uniform Hashing Assumption: P(h(key1) == h(key2)) = 1/N, where N is the capacity of the array.
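A minimal sketch of a string hash function in this spirit (the multiplier 31 is a common but arbitrary choice, and the hashing step is really O(length of the key) rather than O(1)):
#include <cstddef>
#include <string>
std::size_t hashKey(const std::string& key, std::size_t N) {
  std::size_t h = 0;
  for (char c : key)
    h = h * 31 + static_cast<unsigned char>(c);  // deterministic: same key, same result
  return h % N;                                  // compression step: fit into an array of capacity N
}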

What is the fastest way to build a heap of n items?

Create a complete tree of the items in any order, then call heapifyDown on every non-leaf node from the bottom of the tree up to the root. This runs in O(n) time
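A minimal sketch of buildHeap for a min-heap stored in a 0-indexed array (the course's examples use 1-indexed arrays, so the child index arithmetic differs slightly):
#include <algorithm>
#include <vector>
void heapifyDown(std::vector<int>& a, std::size_t i) {
  std::size_t n = a.size();
  while (2 * i + 1 < n) {
    std::size_t child = 2 * i + 1;                          // left child
    if (child + 1 < n && a[child + 1] < a[child]) ++child;  // pick the smaller child
    if (a[i] <= a[child]) break;                            // heap property already holds
    std::swap(a[i], a[child]);
    i = child;
  }
}
void buildHeap(std::vector<int>& a) {
  for (std::size_t i = a.size() / 2; i-- > 0; )             // every non-leaf node, bottom up
    heapifyDown(a, i);
}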

Full Binary Tree

Each node has 0 or 2 children

Which of the following examples does NOT call a copy constructor at least once? // Function prototype for "intersect": Cube intersect(Cube &left, Cube &right); // ... Cube a(10),b(5); Cube c; c = intersect(a,b); Cube a,b(10); a = b; Cube b(10); Cube a = b; // Function prototype for "contains": int contains(Cube outer, Cube inner); // ... Cube a(10),b(5); int a_bounds_b = contains(a,b);

Cube a,b(10); a = b; In this case, the assignment operator is called (either the default assignment operator or one that has been explicitly declared). Since the operand of the assignment operator is passed by reference and not by value, the copy constructor is not called because no new object needs to be constructed. Option 1 calls the copy constructor on its last line because intersect() returns an object by value; option 3 calls it on its second line with the initialization of the new cube a from b; option 4 calls it on its last line when the objects are passed to contains() by value.

What is the syntax for a custom copy constructor definition?

Cube::Cube(const Cube& obj){...} It has a single argument that is a constant reference to an object of the same type of the class

What are the properties and syntax of a custom destructor?

Cube::~Cube(){...} 1. It is a member function. 2. Its name is the name of the class preceded by a ~. 3. It has no arguments and no return type.

Consider the following class: class Blue { public: double getValue(); void setValue(double value); private: double value_; }; Select all functions that are present in this class (including any automatic/implicit functions added by the compiler): Default constructor at least one custom, non-default constructor Copy constructor Assignment operator Destructor

Default constructor Copy constructor Assignment operator Destructor

Which of the following is a true statement about Dijkstra's algorithm? Assume edge weights (if any) are non-negative. Dijkstra's algorithm finds the shortest unweighted path, if it exists, between a start vertex and any other vertex, but only for an undirected graph. Dijkstra's algorithm finds the shortest weighted path, if it exists, between a start vertex and any other vertices, but only for an undirected graph. Dijkstra's algorithm finds the shortest weighted path, if it exists, between a start vertex and any other vertices in a directed graph. Dijkstra's algorithm finds the shortest weighted path, if it exists, between all pairs of vertices in a directed connected graph.

Dijkstra's algorithm finds the shortest weighted path, if it exists, between a start vertex and any other vertices in a directed graph.

Difference between Dijkstra's and Prim's Algorithm

Dijkstra's - keeps track of the sum of edge weights along a path; used to find the shortest path from the start vertex to all other connected vertices. Prim's - used to find a minimum spanning tree.

When using double hashing to store a value in a hash table, if the hash function returns an array location that already stores a previous value, then a new array location is found as the hash function of the current array location. Why? Only one additional hash function is called to find an available slot in the array whereas linear probing requires an unknown number of array checks to find an available slot. Since the hash function runs in constant time, double hashing runs in O(1) time. Double hashing reduces the clumping that can occur with linear probing. Double hashing reduces the chance of a hash function collision on subsequent additions to the hash table.

Double hashing reduces the clumping that can occur with linear probing. The subsequent hash functions spread out the storage of values in the array whereas linear probing creates clumps by storing the values in the next available empty array location, which makes subsequent additions to the hash table perform even worse.

How do you optimally grow a heap? How does this affect the time complexity of insertions?

Double the array when it is filled. This effectively adds another level to the heap when looking at it as a tree. While the resize is an O(n) operation, it is only done every n operations, so insertions are amortized O(1).

What is the optimal way to resize an array when it is full?

Double the size. This gives amortized O(1) appending to the array.
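A minimal sketch of the doubling strategy with a raw array (std::vector does this for you internally; the names here are illustrative and the initial capacity is assumed to be at least 1):
#include <cstddef>
void append(int*& data, std::size_t& size, std::size_t& capacity, int item) {
  if (size == capacity) {                   // array is full: double the capacity
    int* bigger = new int[capacity * 2];
    for (std::size_t i = 0; i < size; ++i)
      bigger[i] = data[i];                  // copy the existing items over
    delete[] data;                          // release the old allocation
    data = bigger;
    capacity *= 2;
  }
  data[size++] = item;                      // amortized O(1) per append
}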

You have an array that is currently length one and already contains one item. You need to implement a function Append(i) that adds the item i to the position after the current last item of the array. If the array is full, then your Append() function will need to expand the size of the array so that it can store the additional item i. Recall that expanding the size of an array requires allocating new memory for the expanded size and copying all of the current array items to the new (expanded) array before de-allocating the previous (full) array. (It is okay to assume there is always enough memory to allocate for an array.) Your Append() function will be called an unknown number of times. Which method for resizing the array would result in the fastest total run-time for calling Append() n times to add n items to the array?

Doubling the length of the array every time an item is added when the array is already full.

Which graph representation would be the best choice for implementing a procedure that only needs to build a graph from a stream of events. Edge List Adjacency Matrix Adjacency List All three representations would share the same storage and time complexity for the procedure.

Edge List The Edge List performs worse in general than the Adjacency Matrix and the Adjacency List representations, but it is much simpler and easier to implement. It also takes less space than the alternatives, and can insert vertices and edges in constant time. The adjacency list can also insert vertices and edges in constant time, but if those are the only operations needed, then one need not waste space and additional code on building the adjacency list on top of the edge list.

Compare the complexity of edge list, adjacency matrix, and adjacency list space insertVertex removeVertex insertEdge removeEdge incidentEdges areAdjacent

               Edge list | Adjacency matrix | Adjacency list
space:         O(n+m)    | O(n^2)           | O(n+m)
insertVertex:  O(1)      | O(n)             | O(1)
removeVertex:  O(m)      | O(n)             | O(deg(v))
insertEdge:    O(1)      | O(1)             | O(1)
removeEdge:    O(1)      | O(1)             | O(1)
incidentEdges: O(m)      | O(n)             | O(deg(v))
areAdjacent:   O(m)      | O(1)             | O(min(deg(v1),deg(v2)))

When storing a new value in a hash table, linear probing handles collisions by finding the next unfilled array element. Which of the following is the main drawback of linear probing? If the hash function returns an index near the end of the array, there might not be an available slot before the end of the array is reached. There may not be an available slot in the array. The array only stores values, so when retrieving the value corresponding to a key, there is no way to know if the value at h(key) is the proper value, or if it is one of the values at a subsequent array location. Even using a good hash function, contiguous portions of the array will become filled, causing a lot of additional probing in search of the next available unused element in the array.

Even using a good hash function, contiguous portions of the array will become filled, causing a lot of additional probing in search of the next available unused element in the array. This happens because the hashing distributes values uniformly in the array, but the linear probing fills in gaps between the locations of previous values, which makes the situation worse for later values added to the array.

Full binary tree

Every node has 0 or 2 children

T or F, for a B-Tree of order m?: Each node can hold an ordered list of as many as m keys.

F This statement is indeed false. In an order-m B-tree, each tree node indeed holds multiple keys, but the number of keys is limited to m-1.

T or F, as a valid reason to choose the B-Tree representation over a standard AVL binary search tree?: B-Trees have better algorithmic "Big-O" run-time complexity for the find operation.

F While the B-Tree find operation runs in O(log_m n) time, the m ends up being a constant factor and O(log_m n) = O(lg n) as a consequence of the big-O characterization of how run time increases as the number of data items (n) increases.

What is a queue, its ADT, and their optimal runtime?

FIFO data structure create -> creates an empty queue, O(1) push -> pushes element onto back of queue, O(1) pop -> pops element off front of queue, O(1) empty -> returns true if empty, O(1)
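For reference, roughly the same operations with the standard library's std::queue (note that the STL pop() removes the front element without returning it, so front() is used to read it first):
#include <iostream>
#include <queue>
int main() {
  std::queue<int> q;                    // create, O(1)
  q.push(1);                            // push onto the back, O(1)
  q.push(2);
  std::cout << q.front() << std::endl;  // 1, the front of the queue
  q.pop();                              // pop from the front, O(1)
  std::cout << q.empty() << std::endl;  // 0 (false), O(1)
  return 0;
}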

T or F: A class can only have one constructor.

False

T or F: Any functions that operate on a class's member data variables must be implemented independent of the class in a separate .cpp file.

False

T or F: C++ allows a local variable to be declared in main() with an unknown type that can be defined when the program is executed.

False

T or F: The member data variables in a class can only be accessed by the member functions of that class.

False

T or F: The member functions of a class can only operate on member data variables of that class

False

T or F: You should avoid using the memory address 0x0 for pointers whose value is not yet set, because memory location 0x0 is a valid location for the system to allocate to hold the contents of a variable.

False. 0x0 is reserved and is not a valid location for variable contents; a pointer equal to nullptr holds this address. Dereferencing it causes a segmentation fault. When you delete a pointer with the delete keyword to release its heap memory, you should then set the pointer to nullptr to avoid undefined behavior from a dangling pointer.
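A minimal sketch of the delete-then-null pattern described above:
int main() {
  int* p = new int(5);  // heap allocation
  delete p;             // release the heap memory
  p = nullptr;          // no longer dangling; an accidental dereference now fails loudly
  return 0;
}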

T or F: The type of the custom assignment operator function should be void.

False, it has a return value of a reference of the class's type Cube & Cube::operator= (const Cube & obj) { length_ = obj.length_; return *this; }

T or F: The custom assignment operator function is declared with two arguments: the source and target objects of the assignment.

False, it has exactly one argument, which is a const reference of the class's type Cube & Cube::operator= (const Cube & obj) { length_ = obj.length_; return *this; }

T or F: The custom assignment operator is a function declared with the name "operator::assignment".

False, it is declared with operator= Cube & Cube::operator= (const Cube & obj) { length_ = obj.length_; return *this; }

T or F, For a linked structure of edges and nodes to be a tree: Every node has zero, one or two children.

False, that is only true for a binary tree

T or F: A class must have at least one constructor declared for it.

False; if no constructor is declared, an automatic default constructor is provided.

T or F: When declaring a constructor for a class, the return type of the constructor must be the type of the class.

False, there is no return type

T or F: A class can consist of multiple member data variables of different types, but each member variable must be one of the built-in types.

False, they can also be derived or user-defined types

T or F: The "new" operator allocates memory on the stack that gets removed from the stack by the "delete"operator.

False; the "new" operator allocates memory on the heap, and that memory is released by the "delete" operator.

Which one of the following four hashing operations would run faster than the others? Finding a value in a hash table of 100 values stored in an array of 1,000 elements. Finding a value in a hash table of 4 values stored in an array of 8 elements. Finding a value in a hash table of 2 values stored in an array of 2 elements. Finding a value in a hash table of 20 values stored in an array of 100 elements.

Finding a value in a hash table of 100 values stored in an array of 1,000 elements. The load factor is 100/1,000 = 0.1 which is less than the other options.

Which one statement below is FALSE? Assume we are using the most efficient algorithms discussed in lecture. Adding n items, one at a time, to the end of an array takes O(n) time overall. Adding n items, one at a time, to the front of a linked list takes O(n) time overall. Finding an item in a sorted array of n items cannot be done in better than O(n) time. Finding an item in a sorted linked list of n items takes O(n) time.

Finding an item in a sorted array of n items cannot be done in better than O(n) time. This is indeed the false statement, because if the array is sorted, one can perform a binary search by going to the middle of the array to see if that is the correct item. If not, the correct item must be on one side or the other, so only half of the array needs to be searched. Hence the search space is cut in half each step and O(lg n) steps are needed. (Recall "lg n" is the base-2 logarithm of n, such that if x = lg n, then n = 2^x. If x is not an integer, then O(lg n) refers to the ceiling of lg n.)

What does the automatic default constructor initialize values to?

The automatic default constructor calls the default constructor of each member object. For primitive-type members, the value is left uninitialized (unknown), so relying on the automatic default constructor for them is not recommended.

Let G = (V,E) be a simple graph consisting of a set of vertices V and a set of (undirected) edges E where each edge is a set of two vertices. Which one of the following is not a simple graph? G = ( V = (a,b,c), E = ((a,b)) ) G = ( V = (a,b,c), E = ((a,b),(b,c),(a,c)) ) G = ( V = (a,b,c), E = ((a,b), (a,c), (b,a), (b,c), (a,c), (b,c)) ) G = ( V = (a,b,c), E = () )

G = ( V = (a,b,c), E = ((a,b), (a,c), (b,a), (b,c), (a,c), (b,c)) ) This is not a simple graph because the same edge between a and b appears twice, once as (a,b) and a second time as (b,a). Since these are sets, (a,b) == (b,a).

Why use a B-Tree

If you have large seek times for data because you can't store it all in main memory (some is stored elsewhere, such as on another server or a hard drive). A B-tree can be used to minimize the number of network packets or disk seeks.

Which of the following is NOT a step of the heap sort algorithm? Load the data in any order into a complete tree. Run heapifyDown on every non-leaf node. Insert the next item into the current heap. Remove the root node.

Insert the next item into the current heap

What is the unordered_map::find function?

It searches the unordered map for the provided key and, if found, returns an iterator to the key-value pair (a std::pair); otherwise it returns an iterator equal to unordered_map::end.

What if you try to index a key that doesn't exist in an unordered map?

It will create that key with a value-initialized (default) value for that type. If you just want to check whether a key exists, use the count method instead to avoid inserting it.
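A small sketch showing the difference between count() and operator[] (the map contents are illustrative):
#include <iostream>
#include <string>
#include <unordered_map>
int main() {
  std::unordered_map<std::string, int> m;
  std::cout << m.count("missing") << std::endl;    // 0, and nothing is inserted
  int v = m["missing"];                            // inserts "missing" with a value-initialized int (0)
  std::cout << v << " " << m.size() << std::endl;  // prints: 0 1
  return 0;
}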

In a BFS, how do you get number of disjoint graphs?

Increase a component count each time BFS is started while iterating through the vertex list; you only start a BFS from a vertex that is still unexplored. BFS will only be called once if all vertices are connected.
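A minimal sketch of counting components this way, assuming an adjacency-list representation (a vector of neighbor lists) for illustration:
#include <queue>
#include <vector>
int countComponents(const std::vector<std::vector<int>>& adj) {
  std::vector<bool> visited(adj.size(), false);
  int components = 0;
  for (int start = 0; start < static_cast<int>(adj.size()); ++start) {
    if (visited[start]) continue;  // only start a BFS from an unexplored vertex
    ++components;
    std::queue<int> q;
    q.push(start);
    visited[start] = true;
    while (!q.empty()) {
      int u = q.front(); q.pop();
      for (int v : adj[u])
        if (!visited[v]) { visited[v] = true; q.push(v); }
    }
  }
  return components;
}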

int *i = new int; *i = 0; int &j = *i; j++; What does the last line of the above code segment do? Increments the value of j by one, where the value of j is a local copy stored on the stack of the value of i stored on the heap. Causes an error. Increments the value pointed to by variable i by one. Increments the address pointed to by variable i by one.

Increments the value pointed to by variable i by one. Yes, j is a direct reference to the same actual integer that i points to indirectly.

Consider the binary search tree created by inserting these items in this order: 4 3 5 1 2, If we interpret it now as an AVL tree, it has an imbalance that can be fixed with a rotation. After performing the correct balancing rotation about the node that we identified in the previous question, the resulting tree is identical to which one of the following binary search trees? (We'll describe these other trees by listing the order in which you would insert items to create the trees directly.) Inserting 2 1 4 3 5 one node at a time. Inserting 3 2 4 1 5 one node at a time. Inserting 3 5 2 4 1 one node at a time. Inserting 4 2 5 1 3 one node at a time.

Inserting 4 2 5 1 3 one node at a time This tree is indeed better balanced than before, and it's the correct result of performing a left-right rotation about the node 3 in the previous tree. In this new tree, the balance factor of greatest magnitude anywhere is now found at the root node, which has balance factor (0) - (1) = -1. (Remember that we focus on the deepest point of imbalance, where the magnitude of the balance factor is 2 or greater, to perform the rotation.)

You have a list of 100 items that are not sorted by the item value. Which one task below would run much faster on a list implemented as linked list rather than implemented as an array? Replacing the 25th item in the list with a different item. Searching the list for all items that match a given item. Finding the first item Inserting a new item between the 24th item and the 25th item.

Inserting a new item between the 24th item and the 25th item. Inserting a new item in the middle of the list implemented as an array requires copying each of the remaining items (25th to 100th) to new locations (26th to 101st), and may require resizing the array. Inserting a new item in the middle of the list implemented as a linked list would require setting the "next" pointer at the 24th item to the new item, and the "next" pointer at the current item to the (previously) 25th item.

What is the using keyword?

It lets you import a name from a namespace into the current scope so you can use the function or class without the namespace qualifier. For example, after using std::cout; you can write cout instead of std::cout for the rest of the program.

Which variables and methods can a derived class access?

It can access the public members of the base class (and protected members, if any), but not its private members.

What is a custom non-default constructor?

It is a constructor that requires arguments. If it (or any other custom constructor) is defined, the compiler no longer provides an automatic default constructor. A class can have multiple non-default constructors.

What are four properties of the custom default constructor?

It is a member function with the same name as the class. It takes zero parameters. It has no return type. If it (or any other custom constructor) is defined, then there is no automatic default constructor.

What is a template type?

It is a type that is parameterized by one or more other types, which are supplied when an object is declared. For example, std::vector: std::vector<char> x; // declares a vector of characters std::vector<uiuc::Cube> y; // declares a vector of cubes

What is an AVL tree?

It is an implementation of a balanced binary search tree. It has the standard BST implementation plus it tracks the height of each node and it maintains the balance factor between -1 and 1 on each insert and remove by completing necessary rotations.

What is Separate Chaining? What is the time complexity of insert and remove/find in worst and average case?

It is one way to manage collisions in a hash table. When there is a collision, you insert the new value at the head of the linked list stored at that array location. Insert: O(1) in both cases. Remove/find: O(n) worst case; O(n/N) average.
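A toy separate-chaining table, not the lessons' implementation; the class and member names are invented, and std::hash stands in for whatever hash function is used:

#include <functional>
#include <list>
#include <string>
#include <utility>
#include <vector>

// Toy separate-chaining table mapping string keys to int values.
// Each array slot holds a linked list of key/value pairs that hashed there.
class ChainedTable {
public:
    explicit ChainedTable(std::size_t n) : buckets_(n) {}

    void insert(const std::string& key, int value) {
        // O(1): push onto the front of the chain for this slot.
        buckets_[index(key)].push_front({key, value});
    }

    int* find(const std::string& key) {
        // Worst case O(n): walk the chain at this slot.
        for (auto& kv : buckets_[index(key)]) {
            if (kv.first == key) return &kv.second;
        }
        return nullptr;
    }

private:
    std::size_t index(const std::string& key) const {
        return std::hash<std::string>{}(key) % buckets_.size();
    }
    std::vector<std::list<std::pair<std::string, int>>> buckets_;
};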

What is the type of a pointer?

A pointer's type corresponds to the type of the object it points to; the full type is that type plus a * (for example, int* for a pointer to int). For example: int num = 42; int* p = &num; cout << p << endl; // prints the address of the variable num cout << *p << endl; // prints the value of the variable num, 42 *p = 4; // changes the value of num to 4; p still stores the address of num The type of p is int* (pointer to int), and it points to num.

What is a copy constructor?

It is used to copy the contents of an existing object to a newly instantiated object. The default copy constructor simply copies the contents of each member variable to the new object, which is typically what is desired.
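A minimal sketch of a custom copy constructor, loosely modeled on the course's Cube examples but with an invented member list:

class Cube {
public:
    Cube() : length_(1) {}

    // Custom copy constructor: same name as the class, takes exactly one
    // argument, a const reference to another object of the same type.
    Cube(const Cube& other) : length_(other.length_) {}

private:
    double length_;
};

int main() {
    Cube a;
    Cube b = a;   // copy constructor invoked
    Cube c(a);    // copy constructor invoked
    return 0;
}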

class Pair { private: double a,b; }; class equalPair : public Pair { private: bool isequal; public: int status(); } When the function status() is implemented, which variables will it have access to? Both the member variables a,b or Pair and isequal of equalPair. No member variables of either equalPair or Pair. Just the member variables a,b of Pair. Just the member variable isequal of equalPair.

Just the member variable isequal of equalPair. Even though Pair is indicated as a public base class, the derived class equalPair does not have access to the private members of Pair.

What is a stack, its ADT, and their optimal runtime?

LIFO data structure. create -> creates an empty stack, O(1); push -> pushes an element onto the top of the stack, O(1); pop -> pops the element off the top of the stack, O(1); empty -> returns true if the stack is empty, O(1).

Study the various rotations in an AVL tree

Look at notes

How should one insert a new value into a heap to most efficiently maintain a balanced tree? Maintain the heap as a complete tree and insert a new value at the one new node position that keeps the tree as a complete tree. Then continually exchange the new value with the value of its parent until the new value is in node where it is greater than the value of its parent. Maintain the heap as a balanced binary search tree. Walk down the tree from the root exploring the left children first, then the right children, until a node is found that is greater than the new value. Insert a new node with the new value at that position and make the previous node the left child of that new node. Rebalance the tree if the height balance factor magnitude of the new node or its parent exceeds one. Maintain the heap as an AVL tree. Walk down the tree from the root exploring the left children first, then the right children, until a node is found that is greater than the new value. Insert a new node with the new value at that position and make the previous node the left child of that new node. Then call the appropriate rotation routine to rebalance the tree if the height balance factor magnitude of the new node or its parent reaches two. Maintain the heap as an array. Walk down the tree from the root at position 1 in the array, exploring the left children first, then the right children, until a node position is found whose value is greater than the new value. Copy the value at that position and all subsequent positions in the array to one greater position in the array, and store the new value at that position.

Maintain the heap as a complete tree and insert a new value at the one new node position that keeps the tree as a complete tree. Then continually exchange the new value with the value of its parent until the new value is in node where it is greater than the value of its parent.

In a BFS, how do you determine if there is a cycle?

Mark edges as discovery and cross-edges. A cross-edge indicates a cycle.

Minimum edges on: not connected graph: connected graph: Maximum edges on: simple graph: not simple graph:

Minimum edges on: not connected graph: 0; connected graph: v-1. Maximum edges on: simple graph: v*(v-1)/2 = O(v^2); not simple graph: unbounded (infinite).

Is a complete tree always full?

No

Is a full tree always complete?

No

Must every function in C++ return a value?

No. A function declared with return type void returns no value.

Suppose you have a good hash function h(key) that returns an index into an array of size N. If you store values in a linked list in the array to manage collisions, and you have already stored n values, then what is the expected run time to store a new value into the hash table?

O(1) Storing a new value takes constant time because the hash function runs in constant time and inserting a new value at the head of a linked list takes constant time.

Worst case runtime for searching a BST for a value?

O(h), where h is the height of the tree. This is O(log n) for a balanced tree; the worst case is O(n), when the tree degenerates into a linked list.

What is the ideal and worst-case runtime of finding in a disjoint set implemented as uptrees? What if we implement with smart union and path compression?

O(h), where h can be n in the worst case when the up-tree degenerates into a linked list. An ideal up-tree is very flat: all nodes point directly to the identity element. With smart union and path compression, any sequence of m union and find operations runs in O(m log* n) in the worst case, where n is the number of items in the disjoint set. Since log* n is tiny for any practical n, this is very close to O(m), i.e., amortized nearly-constant time per find or union operation. log*(n) = 0 for n <= 1, and 1 + log*(log n) for n > 1.

What is the runtime complexity for finding a value in a B-Tree?

O(log_m(n)), where m is the order of the B-tree (and the base of the logarithm) and n is the number of keys stored. Each node has up to m children, so each level of the tree eliminates all but one of the m branches. So, if you have m = 1001 and n = 1,000,000,000,000, you would only need about 4 seeks!

Which of the following is the optimal run time complexity to find the shortest path, if it exists, from a vertex to all of the other vertices in a weighted, directed graph of n vertices and m edges. O(m + lg n) O(n) O(m + n) O(m + n lg n)

O(m + n lg n) This is the running time for Dijkstra's algorithm which is optimal.

What is the run-time algorithmic complexity of calling heapifyDown on every non-leaf node in a complete tree of n nodes?

O(n) The run-time of calling heapifyDown on a node is proportional to the height of the node. About half of the nodes are leaf nodes, about a quarter have height 1, about an eighth have height 2, about a sixteenth have height 3, and so on. This summation of heights converges to n, the number of nodes in the tree. Hence running heapifyDown on every non-leaf node has a run-time complexity of O(n).

Suppose you have a good hash function h(key) that returns an index into an array of size N. If you store values in a linked list in the array to manage collisions, and you have already stored n values, then what is the expected run time to find the value in the hash table corresponding to a given key?

O(n/N). This is the "load factor" of the hash table, and is the average length of the linked lists stored at each array element. Since the lists are unordered, it would take O(n/N) time to look at all of the elements of the list to see if the desired (key/value) pair is in the list.

int tri(int n) { int i,j; int count = 0; for (j=0; j < n; j++) for (i=0; i < j; i++) count++; return count; } Perform a run-time analysis of the code above. Express the number of times the variable count is incremented in terms of "Big Oh" notation.

O(n^2) Even though the number of times the variable count is incremented is n*(n-1)/2 = (1/2) n^2 - (1/2) n, the "Big Oh" notation is only concerned about the order of the growth, which is the growth of the highest degree term (e.g. n^2) ignoring any constant factors of that term (e.g. 1/2).

For a simple graph with n vertices, what is the worst case (largest possible) for the number of edges, in terms of big Oh?

O(n^2) Recall that the adjacency matrix has one entry per edge in its upper triangular portion. There are n^2 elements in the n x n adjacency matrix, and about 1/2 n^2 elements in its upper triangular portion, and O(1/2 n^2) == O(n^2).

int *i = new int; How many memory allocations are made on the stack and on the heap for the above code? For example, allocating space for one integer would count as one memory allocation.

One allocation on the stack and one allocation on the heap The use of the new operator allocates memory on the heap that persists until it is deallocated by the delete operator, instead of on the stack which is deallocated when the current function returns.

What is linear probing?

A collision-handling strategy for hash tables: when there is a collision, step linearly through the array locations until an empty slot is found. find: O(1) average, O(n) worst case.
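A toy open-addressing set with linear probing (assumes C++17 for std::optional, non-negative keys, and that the table never becomes completely full; real code would resize based on the load factor):

#include <optional>
#include <vector>

// Toy open-addressing set of non-negative int keys with linear probing.
// Empty slots are represented by std::nullopt.
class ProbingSet {
public:
    explicit ProbingSet(std::size_t n) : slots_(n) {}

    void insert(int key) {
        std::size_t i = key % slots_.size();
        while (slots_[i].has_value()) {          // collision: step forward
            i = (i + 1) % slots_.size();
        }
        slots_[i] = key;
    }

    bool contains(int key) const {
        std::size_t i = key % slots_.size();
        while (slots_[i].has_value()) {
            if (*slots_[i] == key) return true;
            i = (i + 1) % slots_.size();
        }
        return false;                            // hit an empty slot: not present
    }

private:
    std::vector<std::optional<int>> slots_;
};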

You have a list of 100 items that are not sorted by the item value. Which one task below would run much faster on a list implemented as an array rather than implemented as a linked list? Inserting a new item between the 24th item and the 25th item. Replacing the 25th item in the list with a different item. Searching the list for all items that match a given item. Finding the first item.

Replacing the 25th item in the list with a different item.

What is a C++ directive?

Preprocessor directives are lines included in the code of programs preceded by a hash sign (#). These lines are not program statements but directives for the preprocessor. The preprocessor examines the code before actual compilation of code begins and resolves all these directives before any code is actually generated by regular statements.

Which elements encountered by a breadth first search can be used to detect a cycle in the graph? Unexplored edges to unexplored vertices that remain so after completion of the breadth first search. Previously visited vertices that have been encountered again via a previously unexplored edge. Discovered edges that were previously unexplored by the traversal have been added to the breadth-first traversal. Unexplored vertices that have been encountered by the traversal of a previously unexplored edge.

Previously visited vertices that have been encountered again via a previously unexplored edge. A breadth first traversal returns a spanning tree of each connected component of the graph. Any edge that is not part of the breadth first search (e.g. not marked discovered) will connect one portion of the tree to another forming a cycle. Thus all unexplored edges, including ones ignored because they reach a previously visited vertex will create a cycle if added to the breadth first search.

What are the variable types in C++?

Primitive: integer, character, boolean, floating point, double floating point, void/valueless, wide character. Derived: derived from primitive or built-in data types; the four are function, array, pointer, and reference. Abstract/user-defined: defined by the user; the options are class, structure, union, enumeration, and typedef.

level order traversal

Process all nodes of a tree by depth: first the root, then the children of the root, etc. You can achieve this with a BFS

The removeMin operation removes the root of a min-heap tree. Which of the following implements removeMin efficiently while maintaining a balanced min-heap tree. Set the root value to +infinity. If the left child is smaller than the right child, perform a Right-Rotation, otherwise perform a Left-Rotation. Repeat this process at the new infinity-node location until the infinity node is a leaf, then remove and delete it. Replace the root value with the value of the last leaf (rightmost node at the bottom level) of a complete binary tree, and delete the last leaf. Then repeatedly exchange this last-leaf value with the smaller of the values of its node's children until this last-leaf value is smaller than the values of its node's children, if any. Delete the root and if the root has two children, then merge its left subtree with its right subtree by inserting each right subtree node value into the left subtree. Then delete the right subtree. Increment the address used to indicate the base location of the array storing the complete binary tree.

Replace the root value with the value of the last leaf (rightmost node at the bottom level) of a complete binary tree, and delete the last leaf. Then repeatedly exchange this last-leaf value with the smaller of the values of its node's children until this last-leaf value is smaller than the values of its node's children, if any. This last-leaf node corresponds to the end of the array so that there are never any missing values in the middle of the array.

Consider the binary search tree built by inserting the following sequence of integers, one at a time: 5, 4, 7, 9, 8, 6, 2, 3, 1 Which method below will properly remove node 4 from the binary search tree? Set the left pointer of node 5 to nullptr, and then delete node 4. Find the in order predecessor (IOP) of node 4, which is node 3. Remove node 3 from the tree by setting the right pointer of its parent (node 2) to nullptr. Then copy the key and any data from node 3 to node 4, turning node 4 into a new node 3, and delete the old node 3. Find the in order predecessor (IOP) of node 4, which is node 3. Remove node 3 from the tree by setting the right pointer of its parent (node 2) to point to the node pointed to by the left pointer of node 3. Then copy the key and any data from node 3 to node 4, turning node 4 into a new node 3, and delete the old node 3. Set the left pointer of node 5 to point to the node pointed to by the left pointer of node 4, and then delete node 4.

Set the left pointer of node 5 to point to the node pointed to by the left pointer of node 4, and then delete node 4. This is the correct way to remove node 4 from the binary search tree because node 4 has only one child, its left child.

Are the memory locations of the stack or heap memory larger?

Stack memory addresses are larger (higher) than heap addresses. The stack begins at a high memory address and grows downward, whereas the heap begins at a low memory address and grows upward.

T or F, as a valid reason to choose the B-Tree representation over a standard AVL binary search tree?: B-Trees require fewer block read accesses for tree operations.

T

T or F, as a valid reason to choose the B-Tree representation over a standard AVL binary search tree?: B-Trees run faster on large data sets than do AVL trees.

T

T or F, as a valid reason to choose the B-Tree representation over a standard AVL binary search tree?: B-Trees work faster in networked cloud environments than do AVL trees.

T

T or F, for a B-Tree of order m?: All leaf nodes are at the same level of the B-Tree.

T

T or F, for a B-Tree of order m?: Any node that is not the root or a leaf holds at least half of the total number of keys allowed in a node.

T

T or F, for a B-Tree of order m?: Each node can have at most one more child than key.

T

How does removeMin work on a heap and what is the time complexity?

Take the minimum (root) element out and replace it with the last element in the heap. Then heapify down with this element until the heap property is satisfied: heapify down repeatedly swaps the parent with the smaller of its two children. O(lg n).
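A sketch of removeMin on an array-backed min-heap, assuming 1-based indexing with index 0 unused and a non-empty heap; this is not the lessons' exact code:

#include <algorithm>
#include <vector>

// removeMin: save the root, move the last element to the root, shrink,
// then heapify down until the heap property is restored.
int removeMin(std::vector<int>& heap) {
    int minValue = heap[1];
    heap[1] = heap.back();
    heap.pop_back();

    std::size_t i = 1;
    while (2 * i < heap.size()) {                       // while a left child exists
        std::size_t child = 2 * i;
        if (child + 1 < heap.size() && heap[child + 1] < heap[child]) {
            child = child + 1;                          // pick the smaller child
        }
        if (heap[i] <= heap[child]) break;              // heap property restored
        std::swap(heap[i], heap[child]);
        i = child;
    }
    return minValue;
}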

Which keyword is used to indicate which namespace(s) to search to find classes and variables when they are referenced throughout the rest of the program?

The "using" keyword indicates to the compiler from which namespace references to classes and variables should be found.

What function defines the start for your program?

The program starts when the OS calls the main() function

Which of the following is NOT a full binary tree? The binary tree consisting of the subtree of ancestors of any node in any perfect binary tree. The binary search tree created by inserting the following values one at a time: 4 2 3 5 1. A perfect binary tree. A single node.

The binary tree consisting of the subtree of ancestors of any node in any perfect binary tree Every node in a full binary tree has zero or two children, whereas the non-leaf nodes of the subtree of ancestors would consist of nodes each having only a single child.

When you use the new operator to create a class object instance in heap memory, the new operator makes sure that memory is allocated in the heap for the object, and then it initializes the object instance by automatically calling the class constructor. After a class object instance has been created in heap memory with new, when is the destructor usually called? The destructor is called automatically when the delete operator is used with a pointer to the instance of the class. The programmer always needs to call the destructor manually in order to free up memory. The destructor is called automatically when the variable goes out of scope. The destructor is called automatically when the program returns from the function where the new operator was used to create the class object instance.

The destructor is called automatically when the delete operator is used with a pointer to the instance of the class.

Recall that the heapifyDown procedure takes a node index whose children (if any) are heaps, but the value of the node might not satisfy the heap property compared to its children's values. This procedure then swaps the node's value with its smallest child's value if that child's value is smaller, and then calls itself on that child node it just swapped values with, to further propagate the value down the heap until it finds a valid location for it. template <class T> void Heap<T>::_heapifyDown(int index) { if (!_isLeaf(index)) { int minChildIndex = _minChild(index); if ( item_[index] > item_[minChildIndex] ) { std::swap( item_[index], item_[minChildIndex] ); _heapifyDown(minChildIndex); } } } When you call heapifyDown on a given node, what is the maximum number of times heapifyDown is called (including that first call) to find a valid location for the initial value of that node? The maximum number of times heapifyDown is called is one plus the height of the node. The maximum number of times heapifyDown is called is the number of nodes in its subtree. heapifyDown is only called once since its children are already heaps. The maximum number of times heapifyDown is called is the number of non-leaf nodes in its subtree.

The maximum number of times heapifyDown is called is one plus the height of the node. heapifyDown is recursive, but it is only called on one of its children, so it walks down only one chain of descendants, not all of its descendants.

What is the degree of a node in a graph?

The number of incident edges

In-Order Predecessor

The node visited immediately before a given node in an in-order traversal of the BST. It is the largest value in that node's left subtree.

Given a hash function h(key) that produces an index into an array of size N, and given two different key values key1 and key2, the Simple Uniform Hashing Assumption states which of the following? The probability that h(key1) == h(key2) is 1/N. The probability that h(key1) == h(key2) is 0. If h(key1) == h(key2) then h needs a running time of O(lg N) to complete. If h(key1) == h(key2) then h needs a running time of O(N) to complete.

The probability that h(key1) == h(key2) is 1/N.

The breadth first traversal of a connected graph returns a spanning tree for that graph that contains every vertex. If the graph has weighted edges, which of the following modifications is the simplest that produces a minimum spanning tree for the graph of weighted edges. No modification is necessary because a breadth first traversal always returns a minimum spanning tree. The queue is replaced by a priority queue that keeps track of the total weight encountered by the current traversal plus each of the edges that connects a vertex to the current breadth first traversal. The queue is replaced by a priority queue that keeps track of the least-weight edge that connects a vertex to the current breadth first traversal. An ordinary breadth first traversal is run from each vertex (as its start vertex) and the resulting spanning tree with the least total weight is the minimum spanning tree.

The queue is replaced by a priority queue that keeps track of the least-weight edge that connects a vertex to the current breadth first traversal. A minimum spanning tree for a weighted graph can be found through a greedy breadth-first algorithm that simply chooses from the entire queue the least weight edge to add.

A breadth first traversal starting at vertex v1 of a graph can be used to find which ones of the following? The shortest path (in terms of # of edges) between vertex v1 and any other vertex in the graph. The shortest path (in terms of # of edges) between any two vertices in the graph. All of the above. None of the above.

The shortest path (in terms of # of edges) between vertex v1 and any other vertex in the graph.

Why do you typically use linear search instead of binary search when searching a B-Tree node for a value?

The time to fetch a new node (a disk seek or network read) is much longer than the time to search within a node, so binary search within a node doesn't save significant time; in Big O terms the cost is dominated by the number of node fetches.

Recall that every variable in C++ has these four things: a name, a type, a value and a memory location. int *p; p = new int; *p = 0; For the code above, which one of the following is NOT true for variable p? The name of the variable is "p" The type of the variable is a pointer to an integer, specifically the type "int *" The value of the variable is 0 The memory address of the variable is the value returned by the expression &p

The value of the variable is 0 is not true. Even though p points to a memory address containing the integer value 0, the value of the pointer is the memory address itself.

Which of the following is NOT true of a perfect binary search tree of a list of n ordered items? All of the leaf nodes are at the same level. The worst-case run time to find an item is O(n). If the height of the tree is h, then n = 2^(h+1) - 1. Every non-leaf node has two children.

The worst-case run time to find an item is O(n).

T or F: C++ allows a variable to be declared in a user-defined function with an unknown type that can be defined when the function is called.

True

When using an array to store a complete tree, why is the root node stored at index 1 instead of at the front of the array at index 0? Array index 0 is used to store the number of nodes in the complete tree stored in the array. We use index zero as a guard to prevent overstepping the root when propagating up the tree from its leaf nodes, which would cause a memory access fault. This makes the math for finding children and parents simpler to compute and to explain. We avoid using index 0 to avoid confusion with the value of 0 (nullptr) that we normally store in the child pointer of a node to indicate that child does not exist.

This makes the math for finding children and parents simpler to compute and to explain. Yep. It is worth wasting one memory location to make programming and documentation simpler and less bug-prone. In particular, this lets us find the parent of a given node simply by using integer division by 2.

What is re-hashing?

This occurs when the array backing the hash table fills up (the load factor gets too high). You allocate a larger array, change the hash function to match the new array size so that SUHA is maintained, and then re-hash all of the existing values into the new array.

Post-Order Traversal

Traverse the left subtree, traverse the right subtree, then visit the node.

Inorder Traversal

Traverse the left subtree, visit the node, then traverse the right subtree.

Complete Binary Tree

The tree is perfect down to the second-to-last level, and the last level is filled from left to right.

T or F, For a linked structure of edges and nodes to be a tree: Every node has a parent except for one single root node

True

T or F, For a linked structure of edges and nodes to be a tree: Every node is connected to every other node by some path of edges.

True

T or F, For a linked structure of edges and nodes to be a tree: If any two nodes are connected, they are connected by only one path of unique nodes and edges.

True

T or F: A class can consist of multiple member data variables of different types, but each type must be specified when the class is defined.

True

T or F: C++ allows a member variable to be declared in a user-defined class with an unknown type that can by defined when an object of that class is created.

True

What happens when you take the union of two disjoint sets that contain the same value?

Two different disjoint sets by definition can never share the same value. Disjoint sets represent a partitioning of unique values into subsets that do not have any items in common. That is, each value belongs to exactly one of the sets. This is why each element can be used as an array index to look up its "up-tree" parent, which represents the set the element belongs to.

Suppose we have this alternative function that returns a pointer to a memory location to an integer value of zero. int *allocate_an_integer() { int i = 0; return &i; } int main() { int *j; j = allocate_an_integer(); int k = *j; return 0; } Variable k is not assigned a value, because even if the compiler is set to ignore warnings and continue with compilation, the compiled program will still automatically detect that a local variable's address is being used after the function has returned, and exit to the operating system with a non-zero error code. Unknown. Depending on the compiler settings, the compiler may report that a local variable address is being returned, which could be treated as a warning or as a compilation error; Or, if the program is allowed to compile, then at runtime the variable k could be assigned zero, or some other value, or the program may terminate due to a memory fault. Assuming that the program compiles with just a warning and not an error due to the settings, the variable k will not be assigned a value, because the running program will crash the whole operating system. Variable k is certainly assigned the value zero, because the C++ runtime will automatically move the local variable to the heap and return the address of that heap variable instead.

Unknown. Depending on the compiler settings, the compiler may report that a local variable address is being returned, which could be treated as a warning or as a compilation error; or, if the program is allowed to compile, then at runtime the variable k could be assigned zero, or some other value, or the program may terminate due to a memory fault. Variable i is allocated with a memory location on the stack. When allocate_an_integer() returns, its memory on the stack, including the memory holding the value of i, is freed and may be used for other purposes, and may be overwritten with a new value.

How can you efficiently implement a disjoint set? How can you ensure optimal unions? How can you make finds more efficient?

Use an up-tree structure stored in an array. Each element's value is used as an index into the array. A stored value of -1 marks the representative (identity) element of a set; any other stored value is the index of another element in the same set, so every element has a path of parent pointers up to the identity element. Smart union: instead of -1, the root can store the negation of (height + 1), or the negation of the number of elements in the set. You need height + 1 rather than height because a one-element up-tree has height 0, and you can't store -0. Tracking the height or size allows efficient unioning: always point the smaller (or shorter) up-tree at the root of the larger (or taller) one. Smart find (path compression): you can be even more efficient by updating, during each find operation, every element on the traversed path to point directly at the identity element, so the elements of an up-tree come to point directly at the identity element.
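A compact sketch of the array-based up-tree with union-by-size and path compression; the names are invented (the union method is called setUnion because union is a C++ keyword):

#include <utility>
#include <vector>

// Disjoint sets stored in a single array of ints.
// Negative value => this index is an identity (root) element, and the
//                   magnitude is the number of elements in the set.
// Non-negative   => the index of this element's parent in the up-tree.
class DisjointSets {
public:
    explicit DisjointSets(int n) : parent_(n, -1) {}

    int find(int x) {
        if (parent_[x] < 0) return x;           // x is the identity element
        return parent_[x] = find(parent_[x]);   // path compression
    }

    void setUnion(int a, int b) {               // "smart" union by size
        int ra = find(a), rb = find(b);
        if (ra == rb) return;
        if (parent_[ra] > parent_[rb]) std::swap(ra, rb);  // ra now has the larger set
        parent_[ra] += parent_[rb];             // merge sizes
        parent_[rb] = ra;                       // point smaller root at larger root
    }

private:
    std::vector<int> parent_;
};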

How do you do a BFS of a graph? What is the runtime?

Use a queue. Maintain a set of visited vertices to avoid re-visiting nodes in cycles. Runtime is O(n+m), but m can be O(n^2) if the graph has the maximum number of edges for a simple graph.

How do you do a DFS of a graph? What is the runtime?

Use a stack (or recursion). Runtime is O(n+m), but m can be O(n^2) if the graph has the maximum number of edges for a simple graph. Edges are classified as discovery edges and back edges.

When computing the union of two disjoint sets represented as up-trees in an array, (using proper path compression) which of these strategies results in a better overall run time complexity than the other options? Always make the up-tree with fewer elements a subtree of the root of the up-tree with more elements. Always make the up-tree with a shorter height a subtree of the root of the up-tree with a larger height. The overall run time complexity is not affected by which up-tree is chosen to become a subtree of the other up-tree. Using either size or height strategies above results in the same overall run time complexity.

Using either size or height strategies above results in the same overall run time complexity.

Pre-Order Traversal

Visit the node, traverse the left subtree, then traverse the right subtree.

When is a binary tree a min-heap? When every node's value is less than its parent's value. When every node's value lies between the maximum value of its left child's subtree and the minimum value of its right child's subtree. When the leaf nodes represent the smallest values in the tree, and every leaf node is smaller than the root. When every node's value is greater than its parent's value.

When every node's value is greater than its parent's value. A non-empty binary tree is a min-heap if the root is less than either or both of its children (if any), and the subtrees of its children are min-heaps. This is equivalent to the definition from the video lesson and equivalent to the answer.

Which of the following best describes "path compression" as described in the video lessons to accelerate disjoint set operations? (Here we say "parent pointer" to mean whatever form of indirection is used to refer from a child to its parent; this could be a literal pointer or it could be an array index as in the lectures.) When the root of an element's node is found, all of the descendants of the root have their parent pointer set to the root. When the root of the up-tree containing an element is found, both the element and its parent will always have their parent pointers set to point to the root node. When traversing the up-tree from an element to its root, if any elements in the traversal (including the first element, but excluding the root itself) do not point directly to the root as their parent yet, they will have their parent pointer changed to point directly to the root. When the root of the up-tree containing an element is found, the element and all of its siblings that share the same parent have their parent pointers reset to point to the root node.

When traversing the up-tree from an element to its root, if any elements in the traversal (including the first element, but excluding the root itself) do not point directly to the root as their parent yet, they will have their parent pointer changed to point directly to the root. That's right: Path compression only flattens the lineage of nodes in an up-tree from an element to the root, and not all of the elements in the up-tree every time. This has amortized benefits as the data structure is optimized over the process of several union and find operations

Does every variable in C++ have a specific type?

Yes

Can you implement a dictionary using a BST?

Yes; this is useful when you need to find keys that are close to a given key (ordered operations). However, the time complexity of adding, removing, and finding will be worse than a hash table (O(log n) instead of amortized O(1)).

Suppose you want to implement a queue ADT using a linked list. Your queue needs to be able to "push" (or "enqueue") a single item in constant time, as well as "pop" (or "dequeue") a single item in constant time. The operations need to happen at opposite ends of the queue, as would be expected of the queue ADT. However, the people who use your queue implementation don't need to know about how exactly it is implemented, so you can be somewhat creative in how you implement it, as long as the "push" and "pop" operations behave as expected. Which of the following implementations can accomplish this? Select all that apply. (For the sake of this question, let's not consider any design strategies that would close the linked list into a circle.) You can do it with a modified singly-linked list where the list has both a "head" pointer and a "tail" pointer, but each node has only a "next" pointer. You can do it with a singly-linked list where the list has only a "head" pointer and each node has only a "next" pointer. You can do it with a doubly-linked list where the list has a "head" pointer and a "tail" pointer and each node has a "next" pointer and a "previous" pointer. You can't implement a queue as a linked list. You need a more advanced data structure.

You can do it with a modified singly-linked list where the list has both a "head" pointer and a "tail" pointer, but each node has only a "next" pointer. You can do it with a doubly-linked list where the list has a "head" pointer and a "tail" pointer and each node has a "next" pointer and a "previous" pointer.

what is the namespace keyword?

You can use it to define the namespace for classes. You encapsulate a class in a namespace by wrapping its code with namespace uiuc { }, where uiuc can be replaced with the name of your namespace. For example: namespace uiuc { class Cube { // same definition as above }; } Now you can refer to Cube as uiuc::Cube.

What happens when you insert a number into a full node in a B-tree?

You promote the middle value of the full node up to its parent, splitting the node in two. If the parent is also full, you repeat the split-and-promote until you reach a node that isn't full or the promoted value becomes a new root.

How can you manage your runtime complexity when using linear probing or double hashing for collision handling in a hash function?

You have to manage the load factor (n/N), because the expected runtime is proportional to the load factor rather than to n. If you expand the array whenever the load factor reaches a chosen threshold, you can keep the operations at amortized O(1). The expected number of probes grows very rapidly as the load factor approaches 1.

How many nodes of a complete binary tree are leaf nodes?

about half Let n be the number of nodes in the complete binary tree. If the complete binary tree is also a perfect tree, then all of the leaf nodes are at the bottom level, and there are exactly n/2 + 1/2 of them. (n is odd for a perfect binary tree.) Now consider deleting these nodes one at a time in an order that keeps the tree complete. The first node you delete is the right child of a parent so it decreases n by one and decreases the number of leaf cells by one. The second node you delete will be that parent's left child, turning the parent into a new leaf node, so decreasing n by one but not decreasing the number of leaf nodes. As you continue removing leaf nodes from right to left on that bottom level, you are reducing n by one but the number of leaf nodes by an average of 1/2 (by one for every right child and by zero for every left child). Hence about half of a perfect binary tree's nodes are leaf nodes, and this continues as you delete any or all of the nodes in the bottom level in a right-to-left order that keeps the binary tree complete.

convention for naming private members

add an _ at the end

Perfect Binary Tree

all internal nodes have 2 children and all leaf nodes are at the same level

Complete Binary Tree

all the levels are completely filled except possibly the last level and the last level has all keys as left as possible.

Min Heap

complete binary tree that is either empty or one where the left and right nodes are smaller than the root, and all subtrees have this property

How do you represent a min heap?

As an array (commonly 1-based, with index 0 unused), where you can find a node's children and parent by index arithmetic: left child = parent idx * 2; right child = parent idx * 2 + 1; parent = child idx / 2 (integer division).

Consider the following class: class Orange { public: Orange(double weight); ~Orange(); double getWeight(); private: double weight_; }; Select all functions that are present in this class (including any automatic/implicit functions added by the compiler): Default constructor at least one custom, non-default constructor Copy constructor Assignment operator Destructor

At least one custom, non-default constructor; copy constructor; assignment operator; destructor.

Compare stack implementation with a linked list vs an array

Both have the same time complexity, O(1), for create, push, pop, and empty. Linked list: a standard singly-linked list works; push items to the head and pop items from the head. Array: push and pop items from the end.

Which one of the following properly declares the class RubikCube derived from the base class Cube? class RubikCube : public Cube {...}; class RubikCube(Cube) {...}; class Cube : public RubikCube {...}; class Cube(RubikCube) {...};

class RubikCube : public Cube {...};

what is a header file?

It defines the interface to the class, which includes the declarations of all member variables and of all member functions. It essentially provides an API for using the class from client code, without information on how the internals work. The implementation goes in a corresponding .cpp file, which has #include "filename.hpp" at the top.
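A sketch of the split between a header and its .cpp file, loosely modeled on the course's uiuc::Cube; the exact members shown here are illustrative:

// Cube.hpp -- the interface (declarations only, no function bodies)
#pragma once

namespace uiuc {
  class Cube {
    public:
      Cube(double length);
      double getVolume() const;
    private:
      double length_;
  };
}

// Cube.cpp -- the implementation
#include "Cube.hpp"

namespace uiuc {
  Cube::Cube(double length) : length_(length) {}
  double Cube::getVolume() const { return length_ * length_ * length_; }
}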

If, after inserting a new node into an AVL tree, you now have a node with a height balance factor of -2 with a child with a height balance factor of +1, which rotation operation should be performed?

left-right rotation Since the height balance factors of the parent and child have different signs, this is an "elbow." Since the child's height balance factor is +1, we first perform a left rotation on it to turn it into a stick. Then since the parent's height balance factor is -2, we raise the middle of this "stick" to create a "mountain" by performing a right rotation on it.

What is the optimal way to resize an array when it is full?

double the size every time it fills. This gives an amortized run time to add items of O(1).

class Pair { public: double a,b; Pair(double x, double y) { a = x; b = y; } }; If a class equalPair is derived from the above base class (but specializes it by adding a single boolean "isequal" member variable) then which one of the options below is a proper declaration of a constructor for equalPair? (As a side note: Although the member variables are of type double, for the sake of this question, we are not concerned about making approximate comparisons of floating-point types, only exact comparisons. Usually, in practical usage, when you compare floating-point values, you should write a function for approximate comparison. That is, you should allow numbers to be considered equal if they have a very small absolute difference, even if they are not exactly the same.) equalPair(double a, double b) { isequal = (a == b); } equalPair(double a, double b) { Pair(a,b); isequal = (a == b); } equalPair(double a, double b) { this->Pair(a,b); isequal = (a == b); } equalPair(double a, double b) : Pair(a,b) { isequal = (a == b); }

equalPair(double a, double b) : Pair(a,b) { isequal = (a == b); }

What is the find operation in a disjoint set?

find operation: find(x) finds the set containing x and returns its identity element; for example, find(4) returns the identity element of the set containing 4. So find() on two elements of the same set returns the same identity element.

How do you insert an element into a heap?

Insert at the next available index (keeping the tree complete), then check if the parent is smaller; if not, swap the parent with the child. Repeat until the parent is smaller than the inserted value. This is called heapifying up.
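A sketch of insertion with heapify up on an array-backed min-heap (1-based indexing; assumes the vector already holds an unused placeholder element at index 0):

#include <utility>
#include <vector>

// Append the new value at the next free index, then swap it with its
// parent (at index i / 2, integer division) while it is smaller.
void heapInsert(std::vector<int>& heap, int value) {
    heap.push_back(value);
    std::size_t i = heap.size() - 1;
    while (i > 1 && heap[i] < heap[i / 2]) {
        std::swap(heap[i], heap[i / 2]);
        i /= 2;
    }
}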

#include directive

insert the contents of another file at the current location while processing the current file

what is the syntax for creating and deleting an array in the heap?

int *p = new int[3]; delete[] p;

According to video lesson 1.1.2, which of the following is a good hash function h(key) that translates any 32-bit unsigned integer key into an index into an 8 element array? int h(uint key) { int index = 5; while (key--) index = (index + 5) % 8 return index; } int h(uint key) { return key & 7; } int h(uint key) { return rand() % 8; } int h(uint key) { return max(key,7); }

int h(uint key) { return key & 7; } (Note that an expression like "2 & 3" uses the bitwise-AND operator, which gives the result of comparing every bit in the two operands using the concept of "AND" from Boolean logic; for example, in Boolean logic with binary numbers, 10 AND 11 gives 10: for the first digit, 1 AND 1 yields 1, while for the second digit, 0 AND 1 yields 0. An expression like "4 % 8" uses the remainder operator that gives the remainder from integer division; for example, 4 % 8 yields 4, which is the remainder of 4/8. In some cases, these two different operators give similar results. Think about why that is.) This always generates the same output given the same input, and it has a uniform chance of collision. It also runs in constant time relative to the length of the input integer (that is, relative to the number of bits, without respect to the magnitude of the integer). Note that in binary, the number 7 is 0000...0111. (The leading digits are all zero, followed by three 1 digits, because these place values represent 4+2+1.) When you do "key & 7", the result will have leading zeros, and the rightmost three digits will be the same as those of key. Because this results in values between 0 and 7, it's similar to taking the remainder of division by 8. That is, "key & 7" should give the same result as "key % 8". Bitwise operations like this can be somewhat faster than arithmetic operations, but you have to be careful about the specific data types and the type of computing platform you are compiling for. Note that this trick only works for some right-hand values as well, based on how they are represented in binary. These tricks are not always portable from one system architecture to another.

Suppose we are writing the following function that is intended to return a pointer to a location in memory holding an integer value initialized to zero. int *allocate_an_integer() { // declare variable i here *i = 0; return i; } How should variable i be declared?

int* i = new int; The variable i should be a pointer to a memory location to an integer allocated from the heap so the memory location continues to be allocated after the function has returned.

what is the iostream header?

it is a header that includes read/write operations from the standard library

Recall that the iterated log function is denoted log*(n) and is defined to be 0 for n <= 1, and 1 + log*(log(n)) for n > 1. Let lg*(n) be this iterated log function computed using base 2 logarithms. Blue Waters, housed at the University of Illinois, is the fastest university supercomputer in the world. It can run about 2^53 (about 13 quadrillion) instructions in a second. There are about 2^11 seconds in half an hour, so Blue Waters would run 2^64 instructions in about half an hour. Which one of the following is equal to lg*(2^64)?

lg*(2^64) = 1 + lg*(64) = 1 + 1 + lg*(6) = 1 + 1 + 1 + lg*(~2.6) = 1 + 1 + 1 + 1 + lg*(~1.4) = 1 + 1 + 1 + 1 + 1 + lg*(~0.5) = 5

What's the time complexity for Linked Lists and arrays for (1) element access (2) resizing

Linked list: O(n) element access, O(1) resizing (adding a node). Array: O(1) element access, O(n) resizing (the contents must be copied to a larger array).

what is the height of a tree node?

longest path length (in number of edges) from the root of that tree or subtree to any one of its leaves

In C++, what's the difference between the std::map and the std::unordered_map?

std::map provides the lower_bound(key) and upper_bound(key) methods, which return an iterator to the first element >= the key and to the first element > the key, respectively. std::map is implemented as a red-black tree; std::unordered_map is a hash table, so the lower and upper bound methods don't exist. Both support operator[], erase, and insert.
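A small example of the bound methods that only std::map provides (the keys and values are made up):

#include <iostream>
#include <map>
#include <string>

int main() {
    std::map<int, std::string> m = {{10, "ten"}, {20, "twenty"}, {30, "thirty"}};

    // lower_bound: first element whose key is >= 15  -> key 20
    // upper_bound: first element whose key is  > 20  -> key 30
    auto lo = m.lower_bound(15);
    auto hi = m.upper_bound(20);

    std::cout << lo->first << " " << hi->first << "\n";  // prints "20 30"
    return 0;
}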

How do you determine if a key exists in an unordered map?

map_name.count("key_name"); this returns 1 if the key exists and 0 otherwise, without inserting the key.

How can you return the number of keys in an unordered map?

map_name.size()

template <typename Type> Type max(Type a, Type b) { return (a > b) ? a : b; } Which one of the following exampled is a proper way to call the max function declared above in template form? max(5.0,10.0) max<double>(5.0,10.0) max<Type = double>(5.0,10.0) <Type = double>max(5.0,10.0)

max(5.0,10.0) Whereas a class needs to explicitly identify the type, a templated function does not need to explicitly identify the type(s) used if the type of its arguments can be sufficiently matched to the templated types used in the function declaration.

template <typename Type> Type max(Type a, Type b) { return (a > b) ? a : b; } class Just_a_double { public: double num; }; int main() { Just_a_double a,b; a.num = 5.0; b.num = 10.0; ... } Given the above code, which one of the expressions below, if used at line 15, will compile and not generate a compile error? max(a.num,b.num) max("five",10.0) max(a,10.0) max(a,b)

max(a.num,b.num) Both arguments to max() are the same type and both can be compared using the greater-than operator.

How many ways to create a BST with the same data?

n!

How much farther can a cross-edge take you from the root?

no more than 1 farther

How many times is the uiuc::Cube 's copy constructor invoked? double magic(uiuc::Cube cube) { cube.setLength(1); return cube.getVolume(); } int main() { uiuc::Cube c(10); magic(c); return 0; }

Once, when the function is called, because the Cube object is passed by value.

If, after inserting a new node into an AVL tree, you now have a node with a height balance factor of -2 with a child with a height balance factor of -1, which rotation operation should be performed?

right rotation Since the parent and child height balance share the same sign, that means that there is a "stick" that needs to be rotated to form a "mountain." Since the shared sign of the height balance factors is negative, this means the left subtree is the cause of the imbalance and should be resolved by a right rotation.

balance factor of a node in a tree

right subtree height - left subtree height

What is the syntax for a templated function and a templated class?

template <typename T> class List{ private: T data_; }; template <typename T> T max(T a, T b) { if (a > b) {return a;} return b; }

How can you allow a function or class to accept many different types as arguments?

Templated functions and classes. Template type parameters are checked at compile time.

Tree Height

the number of edges of the longest path from the root to a leaf

Tree Depth

the number of edges of the path from a node to the root

What structure is formed by the discovery edges in a BFS?

spanning tree

What is the namespace of the C++ Standard Library?

std

Which of the following will generate an error at compile time? std::vector<char[256]> v; std::vector<double> v; std::vector<std::vector<int>> v; std::vector v;

std::vector v; no type is specified for the template

Suppose you want to create a vector of integers. Which of the following creates an instance of the std::vector class that can contain integers? std::vector<int> v; int *v; int<std::vector> v; int v[256];

std::vector<int> v;

What is a linked list? What is the sample code to build one?

A linked list is a sequence of nodes, each holding data and a pointer to the next node. Sample node code: template <typename T> class ListNode { public: T& data; ListNode* next; ListNode(T& data) : data(data), next(nullptr) {} };
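A usage sketch that builds a small list by pushing to the head; the node here stores its data by value rather than by reference so the example is self-contained:

#include <initializer_list>
#include <iostream>

// Simplified node storing the data by value (the card's version stores a
// reference instead; by-value keeps this sketch self-contained).
template <typename T>
struct ListNode {
    T data;
    ListNode* next;
    ListNode(const T& d) : data(d), next(nullptr) {}
};

int main() {
    ListNode<int>* head = nullptr;

    // Push 3, 2, 1 onto the front of the list: final order is 1 -> 2 -> 3.
    for (int value : {3, 2, 1}) {
        ListNode<int>* node = new ListNode<int>(value);
        node->next = head;
        head = node;
    }

    for (ListNode<int>* cur = head; cur != nullptr; cur = cur->next) {
        std::cout << cur->data << " ";      // prints "1 2 3 "
    }

    // Free the nodes.
    while (head != nullptr) {
        ListNode<int>* next = head->next;
        delete head;
        head = next;
    }
    return 0;
}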

What are the valid values for a boolean variable?

true, false

Given: namespace uiuc { class Pair { double a,b; }; } which syntax can be written outside of the namespace declaration to properly create a variable named "p" of type Pair with no initial value given?

uiuc::Pair p;

Which one of the following functions outputs the keys of a binary search tree in item order when the root node is passed to it as its parameter? void print(TreeNode *node){ if (!node) return; std::cout << node->key << " "; print(node->left); print(node->right); } void print(TreeNode *node){ if (!node) return; print(node->left); std::cout << node->key << " "; print(node->right); } void print(TreeNode *node){ if (!node) return; print(node->left); print(node->right); std::cout << node->key << " "; } none

void print(TreeNode *node){ if (!node) return; print(node->left); std::cout << node->key << " "; print(node->right); } This is an in-order traversal that prints the values of the descendants of the node's left child, then the node's value, then the values of the descendants of its right child.

Consider the code below that includes a class that has a custom constructor and destructor and both utilize a global variable (which has global scope and can be accessed anywhere and initialized before the function main is executed). int reference_count = 0; class Track { public: Track() { reference_count++; } ~Track() { reference_count--; } }; Which one of the following procedures (void functions) properly ensures the deallocation of all the memory allocated for objects of type Track so the memory can be re-used for something else after the procedure returns? For the correct answer, the variable reference_count should be zero after all calls to track_stuff() and all of the memory should be deallocated properly. This will dependably occur after only one of the following procedures. void track_stuff() { Track t; // ... delete t; return; } void track_stuff() { Track t; Track *p = &t; // ... delete p; return; } void track_stuff() { Track *t = new Track; // ... t->~Track(); return; } void track_stuff() { Track t; Track *p = new Track; // ... delete p; return; }

void track_stuff() { Track t; Track *p = new Track; // ... delete p; return; }

How do you import a class in another program?

You write #include "headerfilename.hpp". The preprocessor will then look in the same directory as the current file for the header file of the class.

