Algorithms Final

What is the time complexity of the Insertion/Bubble Sort?

BigO: n^2 BigOmega: n

What is the time complexity of the Selection Sort?

BigO: n^2 BigOmega: n^2

What is the time complexity of Mergesort?

BigO: nlogn BigOmega: nlogn

What is the time complexity of Quicksort?

BigO: n^2 BigOmega: nlogn

Why do we sometimes use linear search instead of binary search?

Binary search requires the dataset to be sorted; linear search works on unsorted data.

The average search time in separate chaining depends heavily on the load factor. What is the ideal and not ideal case?

Ideal case (λ ≤ 1): each linked list has a length of 1 or less, so search operations run in constant time, O(1).
Non-ideal case (λ > 1): lists grow longer, and traversing them raises the average search time to O(λ) in the worst case.

BigO of heap insert/delete

O(logn)

Time complexity of MOST AVL tree operations (insert, delete, search, findMin, findMax, etc)

O(logn)

Best case & worst case for BST & AVL Tree

Best case: O(log n) for both. Worst case: O(n) for a plain BST, while an AVL tree stays O(log n) because it is kept balanced.

Time complexity of BST operations (insert, delete, search, etc)

O(n)

Time complexity of tree operations: deleteTree() & copyTree()

O(n)

building heap

O(n)

Preorder traversal

ROOT, left, right

In order traversal

left, ROOT, right

Postorder traversal

left, right, ROOT

In separate chaining, hash table elements point to _________ of records associated with the index

linked lists

How long in the worst case does it take to find an element within a dataset of size n using binary search?

O(log n): the algorithm halves the remaining range on each comparison, so in the worst case it keeps dividing until only one element is left to check.
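
A minimal iterative sketch in C++ (the vector-based interface is an assumption for illustration, not from the course):

#include <vector>

// Returns the index of target in a sorted vector, or -1 if it is absent.
// Each pass halves the remaining range, so at most about log2(n) passes run.
int binarySearch(const std::vector<int>& sorted, int target) {
    int lo = 0, hi = static_cast<int>(sorted.size()) - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;           // midpoint without overflow
        if (sorted[mid] == target) return mid;
        if (sorted[mid] < target) lo = mid + 1; // discard the left half
        else                      hi = mid - 1; // discard the right half
    }
    return -1;                                  // range exhausted: not found
}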

use case for preorder

making a copy of a tree

what does increaseKey (p, △) do?

moves node down the tree

what does decreaseKey(p, △) do?

moves node up the tree

Rank the time complexities

n!, 2^n, n^2, nlogn, n, logn, 1 (worst to best)

how do you insert a value into a heap?

Place it in the next available position, then percolate up until it is in the right position (min-heap: while x < parent(x) and x != root, swap x with its parent).
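
A hedged sketch of that insert for a min-heap stored in a 1-based array (index 0 is an unused placeholder, matching the 2i / i/2 index cards); the vector layout and function name are assumptions:

#include <utility>
#include <vector>

// heap[0] is unused so that leftChild(i) = 2i and parent(i) = i/2 hold.
void minHeapInsert(std::vector<int>& heap, int x) {
    heap.push_back(x);                        // place in next available position
    std::size_t i = heap.size() - 1;
    while (i > 1 && heap[i] < heap[i / 2]) {  // percolate up while smaller than parent
        std::swap(heap[i], heap[i / 2]);
        i /= 2;
    }
}

Call it as minHeapInsert(heap, value) on a vector that was created with one dummy element at index 0.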

A min / max heap is essentially a

priority queue

what does remove(p) do?

removes node at p

Why is pass by constant reference sometimes used instead of pass by value?

It avoids copying the data while ensuring the original value cannot be modified by the function.

code for recursive function to perform an in order traversal on a binary tree

void BST::printInOrder(Node *node) const {
    if (node == nullptr) { return; }
    printInOrder(node->leftChild);
    cout << node->value << " ";
    printInOrder(node->rightChild);
}

rules for heap

The two children of a node are not ordered relative to each other. In a min-heap every child is larger than (or equal to) its parent; in a max-heap every parent is larger. A heap is not a BST.

full tree

Every internal node has 2 children, and each node's subtrees have the same height.

f(i) for separate chaining

0

Lambda for Linear & Quadratic Probing

0.5

How do you rehash?

1. Make a new table with TableSize = FindNextPrime(TableSize * 2)
2. Take every element from the old table and hash it into the new table
3. Destroy the old table
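
A minimal sketch for a separate-chaining table, assuming a std::vector<std::list<int>> layout and a findNextPrime helper (both illustrative, not from the card):

#include <list>
#include <utility>
#include <vector>

int findNextPrime(int n);   // assumed helper: smallest prime >= n

void rehash(std::vector<std::list<int>>& table) {
    int newSize = findNextPrime(static_cast<int>(table.size()) * 2);  // 1. make the new table
    std::vector<std::list<int>> newTable(newSize);
    for (const auto& bucket : table)                                  // 2. re-hash every element
        for (int key : bucket)
            newTable[key % newSize].push_back(key);
    table = std::move(newTable);                                      // 3. old table is destroyed
}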

Hashtable's goal is constant time lookup. What key elements work to ensure this?

1. Prime table size (reduces the chance of collisions)
2. Lambda thresholds (load factor)
3. Hash functions

Lambda for Separate Chaining

1.0

Using an array, with the first element located at index 1, we can access the left child of a node located at index i with:

2i

Using an array, with the first element located at index 1, we can access the right child of a node located at index i with:

2i + 1

When we put the const keyword on a function definition, we force it to become an...

Accessor (doesn't change data members of the class)

T/F A complete binary tree cannot have an additional node added without increasing the height of the tree.

False

T/F All Binary trees are also heaps

False

T/F All heaps are full binary trees

False

T/F We can use the same insert functions on both heaps and binary search trees because they are so similar the algorithm is the same

False

Given all the ways we could analyze algorithm performance, why do we focus on Big-O?

It describes the worst-case scenario by analyzing how running time grows as the input gets large. Instead of testing many different inputs or conditions manually, Big-O lets us compare and predict performance more efficiently.

Best way to pick a pivot for quicksort?

Median of three: choosing a value near the middle of the list produces more balanced partitions, keeping the algorithm close to its n log n behavior.

Three ways to pick a pivot for quicksort?

Median of three, random, largest/smallest
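
A hedged median-of-three sketch; sampling the first, middle, and last elements and the function name are illustrative choices:

#include <utility>
#include <vector>

// Orders a[lo], a[mid], a[hi] in place and returns the median value to use as the pivot.
int medianOfThree(std::vector<int>& a, int lo, int hi) {
    int mid = lo + (hi - lo) / 2;
    if (a[mid] < a[lo]) std::swap(a[lo], a[mid]);
    if (a[hi]  < a[lo]) std::swap(a[lo], a[hi]);
    if (a[hi]  < a[mid]) std::swap(a[mid], a[hi]);
    return a[mid];   // a[lo] <= a[mid] <= a[hi] now holds
}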

Load factor (lambda) equation

λ = N / TS; this represents the average number of elements per linked list (separate chaining) in the hash table.

Is selection sort stable?

No

Hashing stores records in (optimally) _____ time for: Access, Insert, Search

O(1)

What do we do if two elements hash to the same index?

Separate Chaining/Probing

Why don't we use radix sort everywhere?

Space inefficiency: it is not an in-place sorting algorithm and requires extra space for the output.

What is a good reason to use empirical testing

To verify that your algorithm functions as intended, and to get a clear, concrete measurement of how it actually performs.

T/F Any binary tree can be represented as an array

True

T/F The left child and right child of a node in a heap do not have any ordering to one another

True

What does it mean when a sort is stable?

When the objects are sorted, objects with the same value keep their original relative order.

Is the Insertion Sort/Bubble Sort stable?

Yes

hash table chronology

data key --> hash function --> index/hash key --> bucket

use case for post order

deleting trees

balanced tree

The heights of each node's left and right subtrees differ by at most 1.
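
A sketch that checks this condition; the minimal Node struct mirrors the layout assumed in the traversal card, and the helper names are illustrative:

#include <algorithm>
#include <cstdlib>

struct Node { int value; Node *leftChild; Node *rightChild; };

// Height of a subtree; an empty subtree has height -1.
int height(const Node* node) {
    if (node == nullptr) return -1;
    return 1 + std::max(height(node->leftChild), height(node->rightChild));
}

// Balanced: every node's subtree heights differ by at most 1.
bool isBalanced(const Node* node) {
    if (node == nullptr) return true;
    return std::abs(height(node->leftChild) - height(node->rightChild)) <= 1
        && isBalanced(node->leftChild)
        && isBalanced(node->rightChild);
}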

complete tree

Full through level n-1; level n is filled from left to right.

use case for in order

Visiting a BST's values in non-decreasing (sorted) order.

To insert terms using hash tables:

hi(X) = (hash(X) + f(i)) % TS

f(i) for linear probing

i

Using an array, with the first element located at index 1, we can access the parent of a child node located at index i with:

i/2 (round down)
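
The three index formulas from these cards as tiny helpers (1-based indexing, index 0 unused):

int leftChild(int i)  { return 2 * i; }       // 2i
int rightChild(int i) { return 2 * i + 1; }   // 2i + 1
int parent(int i)     { return i / 2; }       // i/2, integer division rounds down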

f(i) for quadratic probing

i^2

pseudocode for insert (hash table)

insert(x)
1. Get the hash of x
2. Check if that spot is free
3. If the spot is free, put x there
   - itemCount++
   - update lambda (itemCount / tableSize)
   - rehash if needed
4. If no spot is found, rehash (but there should be a spot)
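
A hedged linear-probing sketch of that pseudocode; the std::optional slot layout, the key-mod-size hash, and the 0.5 threshold comment are illustrative assumptions:

#include <optional>
#include <vector>

struct ProbingTable {
    std::vector<std::optional<int>> slots = std::vector<std::optional<int>>(11);  // prime table size
    int itemCount = 0;

    void insert(int x) {                        // non-negative keys assumed
        int ts = static_cast<int>(slots.size());
        for (int i = 0; i < ts; ++i) {
            int idx = ((x % ts) + i) % ts;      // h_i(x) = (hash(x) + f(i)) % TS with f(i) = i
            if (!slots[idx].has_value()) {      // spot is free: put x there
                slots[idx] = x;
                ++itemCount;                    // update lambda = itemCount / ts; rehash if > 0.5
                return;
            }
        }
        // every slot probed and full: a real table would rehash and retry here
    }
};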

Simple Hash Integer Function

int hash( const int key, int tableSize ) { return( key % tableSize ); }

