Data Structures and Algorithms

15. What are some algorithms which we use daily that have O(1), O(n log n) and O(log n) complexities?

O(1)
- Accessing an array index (int item = array[5])
- Inserting a node into a linked list (given a reference to the insertion point)
- Pushing and popping on a stack
- Insertion and removal from a queue
- Finding the parent or left/right child of a node in a tree stored in an array
- Jumping to the next/previous element in a doubly linked list

O(n)
- In a nutshell, all brute-force algorithms, and any naive approach that requires linearity, run in O(n) time
- Traversing an array
- Traversing a linked list
- Linear search
- Deleting a specific element from an (unsorted) linked list
- Comparing two strings
- Checking for a palindrome
- Counting/bucket sort (O(n + k), linear when the key range k is modest)

O(log n)
- Binary search (see the sketch after this list)
- Finding the largest/smallest number in a balanced binary search tree
- Certain divide-and-conquer algorithms that discard a constant fraction of the input at each step
- Calculating Fibonacci numbers with the matrix-exponentiation (fast-doubling) method
- The basic premise here is never touching the complete data, but reducing the problem size with every iteration

O(n log n)
- The log n factor is introduced by divide and conquer; some of these algorithms are among the best optimized and most frequently used
- Merge sort
- Heap sort
- Quick sort (average case)
- Certain divide-and-conquer algorithms that optimize O(n^2) approaches

O(n^2)
- These are the less efficient choices whenever O(n log n) counterparts exist; the typical cause is a brute-force approach
- Bubble sort
- Insertion sort
- Selection sort
- Traversing a simple 2D array
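As a concrete illustration of the O(log n) entries above, here is a minimal binary search sketch in C++ (array contents and names are just for illustration):

    #include <vector>

    // Returns the index of target in the sorted vector, or -1 if absent.
    // Each iteration halves the remaining range, giving O(log n) time.
    int binarySearch(const std::vector<int>& sorted, int target) {
        int lo = 0, hi = static_cast<int>(sorted.size()) - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;   // avoids overflow of (lo + hi)
            if (sorted[mid] == target) return mid;
            if (sorted[mid] < target) lo = mid + 1;
            else                      hi = mid - 1;
        }
        return -1;
    }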

1. What is Big-O notation?

Big-O notation (also called "asymptotic growth" notation) is a relative representation of the complexity of an algorithm: it describes how an algorithm's cost scales as the input size grows. We use it to compare how algorithms behave on large inputs rather than to measure absolute runtimes. Big-O complexity is often visualized as a graph of growth rate against input size.

13. What is the difference between lower bound and tight bound?

Big-O is the upper bound, while Omega is the lower bound. Theta requires both Big-O and Omega, which is why it is referred to as a tight bound (it must be both the upper and the lower bound). For example, an algorithm taking Omega(n log n) takes at least n log n time, but its upper bound is not specified. An algorithm taking Theta(n log n) is far preferable, since it takes at least n log n (Omega(n log n)) and no more than n log n (Big-O(n log n)).
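For reference, the standard formal definitions behind these terms (f is the algorithm's running time, g the comparison function, c a positive constant, n0 a threshold):

- f(n) = O(g(n)) if there exist c > 0 and n0 such that f(n) <= c * g(n) for all n >= n0 (upper bound).
- f(n) = Omega(g(n)) if there exist c > 0 and n0 such that f(n) >= c * g(n) for all n >= n0 (lower bound).
- f(n) = Theta(g(n)) if both of the above hold (tight bound).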

29. Why is the complexity of fetching from an array O(1)?

"I thought the algorithm has to go through all the indices to find the correct one, which would be O(n/2) on average. Why is it actually O(1)?" The key to understanding why array access is O(1) is that an array has a known starting address and a fixed element size. In a language like C, we can guarantee this by allocating a sequential block of memory. Since the elements are sequential in memory, we can use pointer arithmetic for direct addressing and thus direct access. The compiler knows the array starts at memory cell x. When it needs to get to a[120], it adds 120 * elementSize to x and uses that number to address the memory holding the element. This arithmetic is also why, in C and C++ at least, all items in an array must be the same type (to keep the same elementSize): they are all required to occupy the same number of bytes for the pointer arithmetic to work.
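A minimal C++ sketch of the address computation described above (the array and its contents are illustrative):

    #include <cstdio>

    int main() {
        int a[6] = {10, 20, 30, 40, 50, 60};
        // The compiler computes &a[4] as: base address + 4 * sizeof(int).
        // No traversal happens; the element is fetched directly.
        int* base = a;
        printf("%d\n", *(base + 4));  // prints 50, same as a[4]
        return 0;
    }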

9. What is meant by "Constant amortized time" when talking about time complexity of an algorithm?

If you do an operation, say, a million times, you don't really care about the worst case or the best case of that one operation; what you care about is how much time is taken in total when you repeat the operation a million times. Essentially, amortized time means "average time taken per operation, if you do many operations". Amortized time doesn't have to be constant; you can have linear and logarithmic amortized time or whatever else. A common example is a dynamic array. If we have already allocated memory for a new entry, adding it will be O(1). If we haven't, we allocate, say, twice the current amount; that particular insertion is O(n) rather than O(1). What is important is that the algorithm guarantees that over a sequence of operations the expensive steps are amortized away, so that each operation is O(1) on average.
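A minimal sketch of the doubling strategy described above, assuming a raw-array implementation (real containers such as std::vector handle this internally):

    #include <cstring>

    struct IntArray {
        int* data = new int[1];
        int size = 0, capacity = 1;

        void push(int value) {
            if (size == capacity) {
                // Rare, expensive step: O(n) copy into a twice-as-large block.
                capacity *= 2;
                int* bigger = new int[capacity];
                std::memcpy(bigger, data, size * sizeof(int));
                delete[] data;
                data = bigger;
            }
            data[size++] = value;  // Common, cheap step: O(1).
        }
        ~IntArray() { delete[] data; }
    };

Because the capacity doubles each time, n pushes trigger copies totaling at most about 2n element moves, which is why each push is O(1) amortized.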

23. What is an Associative Array?

In computer science, an associative array, map, symbol table, or dictionary is an abstract data type composed of a collection of key/value pairs, such that each possible key appears at most once in the collection. An associative array can be implemented as:

- Hash table
- Self-balancing binary search tree
- Unbalanced binary search tree
- Association list
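For example, the C++ standard library offers both a hash-table implementation (std::unordered_map) and a self-balancing BST implementation (std::map, typically a red-black tree):

    #include <map>
    #include <unordered_map>
    #include <string>

    int main() {
        std::unordered_map<std::string, int> ages;  // hash table: O(1) average lookup
        std::map<std::string, int> sortedAges;      // balanced BST: O(log n) lookup
        ages["alice"] = 30;   // each key appears at most once;
        ages["alice"] = 31;   // assigning again overwrites the value
        sortedAges["bob"] = 25;
        return 0;
    }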

11. Name some types of Big-O complexity and corresponding algorithms?

In many cases, an algorithm will fall into one of the following classes:

- O(1) - Constant complexity. The time to complete is the same regardless of the size of the input set. An example is accessing an array element by index.
    1 item ==> 1 second
    10 items ==> 1 second
    100 items ==> 1 second
- O(log n) - Logarithmic complexity. Time to complete increases roughly in line with log(n). For example, 1024 items take roughly twice as long as 32 items, because log2(1024) = 10 and log2(32) = 5. An example is finding an item in a binary search tree (BST).
    1 item ==> 1 second
    10 items ==> 2 seconds
    100 items ==> 3 seconds
    1000 items ==> 4 seconds
    10000 items ==> 5 seconds
- O(n) - Linear complexity. Time to complete scales linearly with the size of the input set: if you double the number of items, the algorithm takes roughly twice as long. An example is counting the number of items in a linked list.
    1 item ==> 1 second
    10 items ==> 10 seconds
    100 items ==> 100 seconds
- O(n log n) - Time to complete increases by the number of items times log2(n). Examples are heap sort and quick sort.
- O(n^2) - Quadratic complexity. The time to complete is roughly proportional to the square of the number of items. An example is bubble sort.
    1 item ==> 1 second
    10 items ==> 100 seconds
    100 items ==> 10000 seconds
- O(n!) - Time to complete grows with the factorial of the input size. An example is the brute-force solution to the traveling salesman problem.

6. Explain the difference between O(1) vs O(n) space complexities?

Let's consider an algorithm that traverses a list.
- O(1) denotes constant space use: the algorithm allocates the same number of pointers irrespective of the list size. That happens if we move (reuse) a single pointer along the list.
- In contrast, O(n) denotes linear space use: the algorithm's space use grows with the input size n. That happens if, for some reason, the algorithm needs to allocate n pointers (or other variables) while traversing the list.
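A small C++ sketch of the contrast (the functions just count elements; their names are illustrative):

    #include <forward_list>
    #include <vector>

    // O(1) extra space: one reused iterator, regardless of list size.
    int countO1(const std::forward_list<int>& list) {
        int count = 0;
        for (auto it = list.begin(); it != list.end(); ++it) ++count;
        return count;
    }

    // O(n) extra space: stores a copy of every element while traversing.
    int countOn(const std::forward_list<int>& list) {
        std::vector<int> seen(list.begin(), list.end());
        return static_cast<int>(seen.size());
    }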

30. Check for balanced parentheses in linear time using constant space?

((()())(())) should be accepted but ())() should be rejected. Initialize a counter at zero. Whenever you see ( increase it by one, and whenever you see ) decrease it by one. Accept if the counter stays nonnegative throughout and ends at zero. Since the counter is always between -1 and n, O(log n) bits are needed to store it; so, strictly speaking, we cannot do this in O(1) space (though a single machine word is constant space in practice).
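A linear-time version of the counter approach in C++ (per the note above, "constant space" here means one fixed-width counter):

    #include <string>

    // Returns true iff every '(' has a matching ')' and no prefix dips negative.
    bool balanced(const std::string& s) {
        long counter = 0;
        for (char c : s) {
            if (c == '(') ++counter;
            else if (c == ')') --counter;
            if (counter < 0) return false;  // a ')' appeared with no open '('
        }
        return counter == 0;
    }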

38. What are the advantages of using bitwise operations?

- Basically, you use them for size and speed. Bitwise operations are extremely simple and thus usually faster than arithmetic operations.
- Bitwise operations come into play a lot when you need to encode or decode data in a compact and fast way. For example, to save space you may store multiple variables in a single 8-bit value by using each bit to represent a boolean (see the sketch below).
- Bitwise operations are absolutely essential when programming hardware registers in embedded systems where memory or CPU power is restricted. If you're programming a microcontroller in C with 2 KB of memory, every single bit counts, so the ability to pack 8 booleans into a single byte may be critical.
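A hedged sketch of the "eight booleans in one byte" idea mentioned above (the flag names are made up for illustration):

    #include <cstdint>

    // Hypothetical flags, one bit each, packed into a single byte.
    const std::uint8_t FLAG_READY   = 1 << 0;
    const std::uint8_t FLAG_ERROR   = 1 << 1;
    const std::uint8_t FLAG_VERBOSE = 1 << 2;

    int main() {
        std::uint8_t flags = 0;
        flags |= FLAG_READY;                     // set a flag
        flags &= ~FLAG_ERROR;                    // clear a flag
        bool ready = (flags & FLAG_READY) != 0;  // test a flag
        return ready ? 0 : 1;
    }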

37. What are some real world use cases of the bitwise operators?

- Bit fields (flags): the most efficient way of representing something whose state is defined by several yes/no properties. ACLs are a good example: with four discrete permissions (read, write, execute, change policy) it's better to store them in one byte than to waste four. Flags can be mapped to enumeration types in many languages for added convenience.
- Communication over ports/sockets: always involves checksums, parity, stop bits, flow-control algorithms, and so on, which usually depend on the logical values of individual bits rather than numeric values of bytes, since the medium may only be capable of transmitting one bit at a time.
- Compression and encryption: both are heavily dependent on bitwise algorithms. Look at the DEFLATE algorithm for an example: everything happens in bits, not bytes.
- Finite state machines: primarily the kind embedded in hardware, although they can be found in software too. These are combinatorial in nature and may literally be "compiled" down to a bunch of logic gates, so they have to be expressed as AND, OR, NOT, etc.
- Graphics: there's hardly enough space here to cover every area where these operators are used in graphics programming. XOR (^) is particularly interesting because applying the same input a second time undoes the first (see the sketch below). Older GUIs used to rely on this for selection highlighting and other overlays, to eliminate the need for costly redraws; it's still useful in slow graphics protocols (e.g., remote desktop).
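The "apply XOR twice to undo it" property behind the highlighting trick fits in a few lines (the values are arbitrary):

    #include <cassert>

    int main() {
        unsigned pixel = 0xC3A5;         // arbitrary original value
        unsigned mask  = 0xFFFF;         // arbitrary highlight mask
        unsigned drawn  = pixel ^ mask;  // first XOR: overlay appears
        unsigned undone = drawn ^ mask;  // second XOR: original restored
        assert(undone == pixel);         // (a ^ m) ^ m == a
        return 0;
    }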

17. Name some characteristics of Array data structure?

- Finite (fixed size): an array is finite because it contains only a limited, predetermined number of elements.
- Ordered: all the elements are stored one after another, in contiguous locations of computer memory, in linear order.
- Homogeneous: all the elements of an array are of the same data type, which is why an array is termed a homogeneous collection.

32. Name some bitwise operations you know?

- NOT (~): bitwise NOT is a unary operator that flips the bits of the number. If the nth bit is 0, it changes it to 1, and vice versa.
- AND (&): bitwise AND is a binary operator that operates on two equal-length bit patterns. If both bits in the compared position are 1, the resulting bit is 1; otherwise 0.
- OR (|): bitwise OR is also a binary operator on two equal-length bit patterns, similar to bitwise AND. If both bits in the compared position are 0, the resulting bit is 0; otherwise 1.
- XOR (^): bitwise XOR also takes two equal-length bit patterns. If the two bits in the compared position are equal, the resulting bit is 0; otherwise 1.
- Left shift (<<): a binary operator that shifts the bit pattern to the left by the given number of positions, appending 0s at the end.
- Signed right shift (>>): a binary operator that shifts the bit pattern to the right by the given number of positions, preserving the sign (the first bit).
- Zero-fill right shift (>>>): shifts right by pushing zeros in from the left, filling the leftmost bits with 0s (available in languages such as Java and JavaScript).
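All of the above in one short C++ snippet (C++ has no >>> operator; shifting an unsigned value gives the same zero-fill behavior):

    #include <cstdio>

    int main() {
        unsigned a = 0b1100, b = 0b1010;
        printf("%u\n", a & b);     // 8  (0b1000)  AND
        printf("%u\n", a | b);     // 14 (0b1110)  OR
        printf("%u\n", a ^ b);     // 6  (0b0110)  XOR
        printf("%u\n", ~a & 0xF);  // 3  (0b0011)  NOT, masked to 4 bits
        printf("%u\n", a << 1);    // 24 (0b11000) left shift
        printf("%u\n", a >> 1);    // 6  (0b0110)  right shift
        return 0;
    }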

27. What are advantages of Sorted Arrays?

- The major advantage is that search times are much faster than in an unordered array: binary search, O(log n), can be applied only if the array is sorted.
- The disadvantage is that insertion takes longer, because all the data items with a higher key value must be moved up to make room.
- Deletions are slow in both ordered and unordered arrays, because items must be moved down to fill the hole left by the deleted item.
- Ordered arrays are therefore useful in situations where searches are frequent but insertions and deletions are not.

34. What is a Byte?

A byte is made up of 8 bits, and the highest value of an (unsigned) byte is 255, which means every bit is set. To see why the maximum is 255, read the place values of the bits from right to left and add them all up: 1 + 2 + 4 + 8 + 16 + 32 + 64 + 128 = 255.
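The same sum, computed rather than listed (a quick sanity check in C++):

    #include <cstdio>

    int main() {
        int total = 0;
        for (int i = 0; i < 8; ++i)
            total += 1 << i;       // bit i is worth 2^i: 1, 2, 4, ..., 128
        printf("%d\n", total);     // prints 255
        return 0;
    }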

21. What are Dynamic Arrays?

A dynamic array is an array with one significant improvement: automatic resizing. A limitation of plain arrays is that they are fixed-size, meaning you need to specify the number of elements your array will hold ahead of time. A dynamic array expands as you add more elements, so you don't need to determine the size in advance.

35. What is Bit Masking?

A mask (bitmask) defines which bits you want to keep and which bits you want to clear. Using a mask, multiple bits in a byte can be set on, set off, or inverted in a single bitwise operation. This is accomplished by:

- Bitwise ANDing, to extract a subset of the bits in the value:
      1 1 1 0 1 1 0 1    input
    & 0 0 1 1 1 1 0 0    mask
    -----------------
      0 0 1 0 1 1 0 0    output
- Bitwise ORing, to set a subset of the bits in the value:
      1 1 1 0 1 1 0 1    input
    | 0 0 1 1 1 1 0 0    mask
    -----------------
      1 1 1 1 1 1 0 1    output
- Bitwise XORing, to toggle a subset of the bits in the value:
      1 1 1 0 1 1 0 1    input
    ^ 0 0 1 1 1 1 0 0    mask
    -----------------
      1 1 0 1 0 0 0 1    output

Using a bitmask we can easily check the state of individual bits regardless of the other bits (by ANDing), and we can toggle bit values (by XORing).
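These operations map directly onto small helper functions; a minimal sketch (the names are illustrative):

    #include <cstdint>

    std::uint8_t extractBits(std::uint8_t value, std::uint8_t mask) {
        return value & mask;   // keep only the bits selected by the mask
    }
    std::uint8_t setBits(std::uint8_t value, std::uint8_t mask) {
        return value | mask;   // force the masked bits to 1
    }
    std::uint8_t toggleBits(std::uint8_t value, std::uint8_t mask) {
        return value ^ mask;   // flip the masked bits, leave the rest alone
    }
    bool testBit(std::uint8_t value, int bit) {
        return (value & (std::uint8_t(1) << bit)) != 0;
    }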

22. How do Dynamic Arrays work?

A simple dynamic array can be constructed by allocating a fixed-size array, typically larger than the number of elements immediately required. The dynamic array's elements are stored contiguously at the start of the underlying array, and the remaining positions towards the end are reserved (unused). Elements can be added at the end of a dynamic array in constant time by using the reserved space, until that space is completely consumed. When all the space is consumed and an additional element must be added, the underlying fixed-size array needs to be increased in size. Resizing is typically expensive, because you have to allocate a bigger array and copy over all the elements from the array you have outgrown before you can finally append the new item.

How dynamic arrays allocate memory is language-specific. For example, in C++ locally declared arrays are created on the stack and have automatic storage duration: you don't need to manage their memory manually, but they are destroyed when the function they're in ends, and they necessarily have a fixed size:

    int numbers[10];

Arrays created with operator new[] have dynamic storage duration and are stored on the heap. They can have any size, but you need to allocate and free them yourself, since they are not part of the stack frame:

    int* numbers = new int[10];
    delete[] numbers;
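You can watch the reserved-space strategy at work with std::vector (the exact growth factor is implementation-defined; many implementations roughly double):

    #include <cstdio>
    #include <vector>

    int main() {
        std::vector<int> v;
        for (int i = 0; i < 17; ++i) {
            v.push_back(i);
            // size() is the element count; capacity() is the allocated slots.
            printf("size=%zu capacity=%zu\n", v.size(), v.capacity());
        }
        return 0;
    }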

24. What are time complexities of sorted array operations?

A sorted array is an array data structure in which the elements are kept in numerical, alphabetical, or some other order, placed at equally spaced addresses in computer memory. Time complexities of the operations on a sorted array:

    Operation   Average Case   Worst Case
    Space       O(n)           O(n)
    Search      O(log n)       O(log n)
    Insert      O(n)           O(n)
    Delete      O(n)           O(n)

25. What does Sparse Array mean?

A sparse array is an array of data in which many elements have a value of zero. This is in contrast to a dense array, where most of the elements have non-zero values. For example, consider representing a very large 2D matrix of values where most of the values are 0. The most straightforward representation would be a very large two-dimensional array of integers, but this approach has a limitation: if the matrix is very large, it requires a large quantity of memory to store, even if the number of non-zero elements is small. A better way of representing a sparse matrix is to store only the non-zero values, in such a way that iterating through the rows or columns of non-empty elements remains relatively fast.
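One common way to store only the non-zero entries, sketched in C++ with a map keyed by (row, column); absent keys are implicitly zero:

    #include <map>
    #include <utility>

    struct SparseMatrix {
        std::map<std::pair<int, int>, double> cells;  // only non-zeros stored

        void set(int row, int col, double value) {
            if (value == 0.0) cells.erase({row, col});  // keep the map sparse
            else cells[{row, col}] = value;
        }
        double get(int row, int col) const {
            auto it = cells.find({row, col});
            return it == cells.end() ? 0.0 : it->second;
        }
    };

Memory now scales with the number of non-zero entries instead of rows * columns, at the cost of O(log k) lookups (k being the non-zero count).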

7. What is an algorithm?

An algorithm is a method for computing the value of a mathematical function; a mathematical function is a mapping from an input to an output. More technically, an algorithm is an effective method, expressible within a finite amount of space and time and in a well-defined formal language, for calculating a function. Starting from an initial state and initial input (perhaps empty), the instructions describe a computation that, when executed, proceeds through a finite number of well-defined successive states, eventually producing output and terminating in a final state.

16. Explain what is an array?

An array is a collection of homogeneous (same-type) data items stored in contiguous memory locations. For example, if an array is of type int, it can only store integer elements and cannot store elements of other types such as double, float, or char. The elements of an array are accessed by index.

20. What is a main difference between an Array and a Dictionary?

Arrays and dictionaries both store collections of data but differ in how that data is accessed. Arrays provide random access to a sequential set of data; dictionaries (or associative arrays) provide a map from a set of keys to a set of values.

- Arrays store a set of objects that can be accessed randomly.
- Dictionaries store pairs of objects.
- Items in an array are accessed by position (index), usually a number, and hence have an order.
- Items in a dictionary are accessed by key and are unordered.

This makes arrays/lists more suitable when you have a group of objects in a set (prime numbers, colors, students, etc.), while dictionaries are better suited for expressing relationships between pairs of objects.

26. How exactly indexing works in Arrays?

Arrays are laid out in memory using consecutive storage locations; most languages model them as contiguous data in which each element is the same size. Say we have an array of ints, written below as [bit offset: value]:

    [0: 10][32: 20][64: 30][96: 40][128: 50][160: 60]

Each of these elements is a 32-bit integer, so we know exactly how much space each one takes up in memory (32 bits), and we know the memory address of the first element. Getting to the value of any other element in the array is then trivially easy:

1. Take the address of the first element (the compiler knows the array starts at memory cell x).
2. Take the size of each element in memory.
3. Multiply that size by the desired index.
4. Add the result to the address of the first element.

It always takes one multiplication, one addition, and one fetch from a known location, regardless of the index.
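The byte arithmetic can be checked directly in C++; the assertion below holds because the elements are contiguous and equally sized:

    #include <cassert>
    #include <cstddef>

    int main() {
        int a[6] = {10, 20, 30, 40, 50, 60};
        // a[3] lives exactly 3 * sizeof(int) bytes past a[0].
        std::ptrdiff_t bytes =
            reinterpret_cast<char*>(&a[3]) - reinterpret_cast<char*>(&a[0]);
        assert(bytes == 3 * static_cast<std::ptrdiff_t>(sizeof(int)));
        return 0;
    }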

10. Why do we use Big-O instead of Big Theta (Θ)?

Because you are usually just interested in the worst case when analyzing performance, knowing the upper bound is sufficient. When an algorithm runs faster than expected for a given input, that is fine; it's not the critical point and is mostly negligible information. Some algorithms don't have a tight bound at all; see quicksort, for example, which is O(n^2) and Omega(n). Moreover, tight bounds are often more difficult to compute.

4. What is Worst Case?

Big-O is often used to make statements about functions that measure the worst-case behavior of an algorithm. The worst-case analysis gives the maximum number of basic operations that have to be performed during the algorithm's execution. It assumes that the input is in the worst possible state and maximum work has to be done to put things right.

31. What is Bit?

Bit stands for "binary digit" and is the smallest unit of data in a computer. A binary digit can only be 0 or 1, because binary is a base-2 number system. This means that to represent the number 5 in binary we would write 0101. An integer variable usually has a 32-bit limit, meaning it consists of 32 bits with a range of 2^32 possible values (the 2 comes from each bit having two possible states, 0 or 1).

36. Explain how XOR (^) bit operator works?

Bitwise XOR (exclusive OR / exclusive disjunction, ^), like the other binary operators (i.e., all except ~), takes two equal-length bit patterns. The XOR operator outputs a 1 whenever the input bits do not match. XOR truth table:

    0 ^ 0 = 0
    0 ^ 1 = 1
    1 ^ 0 = 1
    1 ^ 1 = 0
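A classic consequence: because x ^ x = 0 and XOR is commutative and associative, XOR-ing a whole list cancels paired values. A sketch for finding the one element that appears an odd number of times:

    #include <vector>

    // Every value that appears twice cancels itself; the lone value survives.
    int findUnpaired(const std::vector<int>& values) {
        int acc = 0;
        for (int v : values) acc ^= v;
        return acc;
    }
    // findUnpaired({4, 7, 4, 9, 7}) == 9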

33. Explain what is bitwise operation?

Bitwise operators are used for manipulating data at the bit level, also called bit-level programming. Bitwise operations are fast, simple actions directly supported by the processor, used to manipulate values for comparisons and calculations. On simple low-cost processors, bitwise operations are typically substantially faster than division, several times faster than multiplication, and sometimes significantly faster than addition.

2. What the heck does it mean if an operation is O(log n)?

O(log n) means that you are doing something that only needs to look at (roughly) log n of the elements. This is usually possible because you know something about the elements that lets you make an efficient choice, for example to cut down a search space, as in binary search. The common attributes of logarithmic running time are that the next element on which to act is one of several possibilities and only one branch needs to be followed, or that the elements acted on are the digits of n. Divide-and-conquer lookups such as binary search are the canonical O(log n) algorithms. Note that efficient sorts such as merge sort and quicksort are O(n log n), not O(log n): they divide the input about log n times, but do O(n) work at each level (merging the halves, or partitioning around a pivot). Plotting log(n) produces a curve whose rise decelerates as n increases.
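A tiny experiment makes the decelerating curve concrete: counting how many times n can be halved before reaching 1 (this count is floor(log2 n)):

    #include <cstdio>

    int halvings(long n) {
        int steps = 0;
        while (n > 1) { n /= 2; ++steps; }  // one step per halving
        return steps;
    }

    int main() {
        // Doubling the input adds only one step: the signature of O(log n).
        printf("%d %d %d\n", halvings(32), halvings(1024), halvings(1 << 20));
        // prints: 5 10 20
        return 0;
    }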

14. What does it mean if an operation is O(n!)?

O(n!) describes an algorithm that "tries everything", since there are (proportional to) n! possible combinations of n elements that might solve a given problem. It means doing something for all possible permutations (the possible ways in which a set of things can be ordered or arranged) of the n elements. The traveling salesman problem is an example: there are n! orders in which to visit the nodes, and the brute-force solution looks at the total cost of every possible permutation to find the optimal one.
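A hedged brute-force TSP sketch using std::next_permutation; the distance matrix is assumed square and the tour is an open path for simplicity. The loop body runs n! times:

    #include <algorithm>
    #include <limits>
    #include <numeric>
    #include <vector>

    // Tries every ordering of cities 0..n-1 and keeps the cheapest path cost.
    int bruteForceTsp(const std::vector<std::vector<int>>& dist) {
        int n = static_cast<int>(dist.size());
        std::vector<int> order(n);
        std::iota(order.begin(), order.end(), 0);  // 0, 1, ..., n-1 (sorted start)
        int best = std::numeric_limits<int>::max();
        do {
            int cost = 0;
            for (int i = 0; i + 1 < n; ++i)
                cost += dist[order[i]][order[i + 1]];
            best = std::min(best, cost);
        } while (std::next_permutation(order.begin(), order.end()));
        return best;
    }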

3. What exactly would an O(n^2) operation do?

O(n^2) means that for every element you are doing something with every other element, such as comparing them. Bubble sort is an example of this (see the sketch below).
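Bubble sort makes the nested "every element against its neighbors, over and over" pattern visible; a minimal sketch:

    #include <cstddef>
    #include <utility>
    #include <vector>

    void bubbleSort(std::vector<int>& a) {
        // Outer loop runs up to n passes; inner loop up to n comparisons
        // per pass: O(n^2) comparisons in total.
        for (std::size_t pass = 0; pass + 1 < a.size(); ++pass)
            for (std::size_t i = 0; i + 1 < a.size() - pass; ++i)
                if (a[i] > a[i + 1])
                    std::swap(a[i], a[i + 1]);  // bubble the larger value right
    }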

18. Name some advantages and disadvantages of Arrays?

Pros:
- Fast lookups: retrieving the element at a given index takes O(1) time, regardless of the length of the array.
- Fast appends: adding a new element at the end of the array takes O(1) time.

Cons:
- Fixed size: you need to specify how many elements you're going to store in your array ahead of time.
- Costly inserts and deletes: you have to shift the other elements to fill in or close gaps, which takes worst-case O(n) time.

8. What is the time complexity of the following "Hello World!" function?

    int main() { printf("Hello World!"); }

Someone may give the simple but wrong answer O(1), since "Hello World!" prints only once on the screen (hence just one operation is needed). However, Big-O notation in this context describes a relationship between the size of a function's input and the number of operations needed to compute the result (output) for that input. This operation has no input to which the output can be related, so using Big-O notation is nonsensical: the time the operation takes is independent of its (non-existent) inputs. Since there is no relationship between the input and the number of operations performed, you can't use Big-O to describe that non-existent relationship. Worse, this isn't technically an algorithm at all: since it takes no input and returns nothing, it is not a function in the mathematical sense.

12. Explain your understanding of "Space Complexity" with examples?

Space complexity is the amount of memory an algorithm needs to run.

Example 1: sorting algorithms. All sorting algorithms need at least O(n) space to hold the list (of length n) they have to sort.
- Bubble sort (like most other sorting algorithms) can work in place, meaning it needs no more space than what holds the list itself: it just finds two adjacent elements in the wrong order and swaps them.
- "Stupid sort" (permutation sort) lists all permutations of the input list (and saves them), then searches for the sorted one. Since there are n! permutations, this algorithm needs O(n * n!) space.

Example 2: number representation. A number n can be stored in different ways.
- Binary, or any other base >= 2, needs O(log n) space, since n = 2^(log2 n).
- Unary writes n as a sum of ones (n = 1 + 1 + ... + 1) and has to store every one of them, so it needs O(n) space.
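The base-2 claim can be checked numerically: the number of binary digits of n is floor(log2 n) + 1. A quick sketch:

    #include <cstdio>

    // Counts how many bits are needed to write n in binary.
    int bitLength(unsigned long n) {
        int bits = 0;
        while (n > 0) { n >>= 1; ++bits; }
        return bits;
    }

    int main() {
        printf("%d %d %d\n", bitLength(1), bitLength(255), bitLength(1024));
        // prints: 1 8 11 -- O(log n) growth, versus O(n) symbols in unary
        return 0;
    }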

19. What is time complexity of basic Array operations?

Arrays use contiguous memory locations (space complexity O(n)) to store elements, so retrieving any element takes O(1) time (constant time, using the index of the retrieved element). O(1) also describes inserting at the end of the array (appending). If you insert into the middle of an array, however, you have to shift all the elements after the insertion point, so the complexity of that insertion is O(n); see the sketch after the table. Appending is O(1) only while spare capacity remains: if the array is full, it must first be resized.

    Operation   Average Case   Worst Case
    Read        O(1)           O(1)
    Append      O(1)           O(1)
    Insert      O(n)           O(n)
    Delete      O(n)           O(n)
    Search      O(n)           O(n)
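The O(n) middle insertion corresponds to the shifting loop below (a sketch over a raw buffer with spare capacity; bounds checks omitted for brevity):

    // Inserts value at position pos in an array holding 'size' elements
    // (capacity must be at least size + 1). The shifting is what costs O(n).
    void insertAt(int* a, int size, int pos, int value) {
        for (int i = size; i > pos; --i)
            a[i] = a[i - 1];   // move each later element one slot right
        a[pos] = value;
    }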

39. What is difference between >> and >>> operators?

In languages with both operators (such as JavaScript and Java), the bitwise shift operators treat their operands as 32-bit signed integers.

    00000000000000000000000000001000    8 in base 2

8 >> 2: the sign-propagating right-shift operator (>>) shifts the binary number two places, preserving the sign (the first bit):

    00000000000000000000000000000010    2 in base 2

8 >>> 2: the zero-fill right-shift operator (>>>) shifts the binary number two places, filling in the left bits with 0s:

    00000000000000000000000000000010    2 in base 2

The results are identical, simply because the first bit of a positive binary number is zero. For negative numbers, however, things look different:

    11111111111111111111111111111000    -8 in base 2

-8 >> 2 shifts two places, preserving the sign:

    11111111111111111111111111111110    -2 in base 2

-8 >>> 2 shifts two places, filling in the left bits with 0s:

    00111111111111111111111111111110    1073741822 in base 2
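C and C++ have no >>> operator, but the same zero-fill effect can be sketched by shifting through an unsigned type (note: >> on negative signed values was implementation-defined before C++20, though in practice it is the sign-propagating shift shown above):

    #include <cstdio>
    #include <cstdint>

    int main() {
        std::int32_t x = -8;
        std::int32_t arithmetic = x >> 2;  // sign-propagating: -2
        std::int32_t zeroFill =
            static_cast<std::int32_t>(static_cast<std::uint32_t>(x) >> 2);
        printf("%d %d\n", arithmetic, zeroFill);  // prints: -2 1073741822
        return 0;
    }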

5. Why do we use Big-O notation to compare algorithms?

The fact is, it's difficult to determine the exact runtime of an algorithm: it depends on the speed of the processor, what else the computer is running, and other factors. So instead of talking about the runtime directly, we use Big-O notation to talk about how quickly the runtime grows as the input gets bigger. With Big-O notation, we use the size of the input, which we call n. So we can say things like "the runtime grows on the order of the size of the input" (O(n)) or "on the order of the square of the size of the input" (O(n^2)). Our algorithm may have steps that seem expensive when n is small but are eclipsed eventually by other steps as n gets larger. For Big-O analysis, we care most about the part that grows fastest as the input grows, because everything else is quickly eclipsed as n gets very large.

28. What defines the dimensionality of an Array?

When you have a collection of similar items, each with an item number, the collection is called an array, and the item number is called a subscript. The dimensionality of an array is the number of subscripts you need to give in order to address a single element.

    [n]             one dimension
    [n][o]          two dimensions
    [n][o][p]       three dimensions
    [n][o][p][q]    four dimensions
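In C++ the subscript count is literal in the declaration; a three-dimensional example (the sizes are arbitrary):

    int main() {
        int cube[2][3][4];          // three subscripts => three dimensions
        cube[1][2][3] = 42;         // one element needs all three indices
        return cube[1][2][3] - 42;  // returns 0
    }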

