MAT 243 Test 3 Quiz Question Prep

True or false? If a is a positive real number, and x and y are real numbers, then aˣ⁺ʸ = aˣ + aʸ.

False If a is a positive real number, and x and y are real numbers, then aˣ⁺ʸ = aˣaʸ (multiplication on the right side, not addition).

In the inductive step, we show that

if P(n) is true, then P(n+1) is true. Induction is about creating a logical "chain reaction". The base case starts the chain. The inductive step shows that there is always a "reaction": the truth of every statement "triggers" the truth of the next statement.

Mark all true statements: If a,b,q and r are integers and a= bq + r, then gcd(a,b) = gcd(b,r). The last nonzero remainder in the Euclidean algorithm is the gcd. The first remainder in the Euclidean algorithm is an upper limit for the number of steps until the algorithm terminates. Remainders in the Euclidean algorithm decrease strictly. It is possible for the Euclidean algorithm to terminate in one step.

All statements are correct
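
For illustration (not part of the original answer), here is a small Python sketch of the Euclidean algorithm that records the remainder sequence; the sample inputs are arbitrary:

```python
def euclid(a, b):
    """Return gcd(a, b) and the list of remainders produced along the way."""
    remainders = []
    while b != 0:
        a, b = b, a % b      # uses gcd(a, b) = gcd(b, r) with r = a mod b
        remainders.append(b)
    return a, remainders

print(euclid(252, 198))  # (18, [54, 36, 18, 0]): remainders strictly decrease,
                         # and the last nonzero remainder, 18, is the gcd
print(euclid(12, 4))     # (4, [0]): the algorithm can terminate in one step
```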

What is the effect of applying | 65535 to a 32-bit number? | is the bitwise OR operation.

It sets bits 0-15 and leaves bits 16-31 unchanged. 65535 = (0000 0000 0000 0000 1111 1111 1111 1111)₂. Performing an OR of a 32-bit number with that will set the lower 16 bits and have no effect on the upper 16 bits.
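
For example, in Python (0x12345678 is just an arbitrary 32-bit sample value):

```python
x = 0x12345678
print(hex(x | 65535))  # 0x1234ffff: bits 0-15 are now all set, bits 16-31 are unchanged
```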

We always start the inductive step with the assumption that

P(n) is true for some n. In the inductive step, we prove that P(n) → P(n+1) is true for all n, so we start with the assumption that P(n) is true for some arbitrary n.

An inductive proof that P(n) is true for all n always starts with the base case. What is the base case?

The base case is always P(n) for the lowest n value for which P(n) is defined. If you are proving a statement P(n) for all n ≥ N, then P(N) is the base case.

What do you get when you divide a 64-bit number by 2? Correct answer has to be general, that is, it has to be true for any 64-bit number. Find the smallest integer n that makes the following true: when you divide a 64-bit by 2, the quotient is always an n-bit number. Which statement is true?

An n=63-bit quotient and a remainder bit. A 64-bit number x satisfies 0 ≤ x < 2⁶⁴. Applying the division algorithm to x with divisor 2 to produce a quotient q and a remainder r means writing x as x = 2q + r. By definition, the remainder r can only be 0 or 1, and is thus a 1-bit value. Knowing the inequality that x satisfies, we know that 0 ≤ 2q + r < 2⁶⁴. This implies 2q < 2⁶⁴ - r ≤ 2⁶⁴. Therefore, 2q < 2⁶⁴, and q < 2⁶³. Thus q is a 63-bit number. Now consider the example x = 2⁶⁴ - 2 = 2(2⁶³ - 1). For this x, q = 2⁶³ - 1, which is not a 62-bit number because it is larger than the largest 62-bit number, 2⁶² - 1. This shows that the best we can say in general about q is that it is a 63-bit number.
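
A quick numerical check of the boundary case from the explanation (Python integers are unbounded, so a 64-bit value can be emulated directly):

```python
x = 2**64 - 2             # the example x = 2(2**63 - 1) used above
q, r = divmod(x, 2)       # equivalently: q = x >> 1, r = x & 1
print(q == 2**63 - 1, r, q.bit_length())  # True 0 63
```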

True or false? If P(n) is a summation formula involving a sigma sum, we always prove the statement P(n+1) by taking the statement P(n) and adding n+1 to both sides.

False It is true that if P(n) represents a summation formula, we can obtain P(n+1), i.e. the summation formula for the case n+1, by adding the next term of the summation to P(n). However, the next term of the summation is generally not n+1.

What is the effect of applying ^255 to a 16-bit number? ^ is the bitwise XOR operation.

It inverts the low byte and leaves the high byte unchanged. If x = (b₁₅b₁₄b₁₃b₁₂b₁₁b₁₀b₉b₈b₇b₆b₅b₄b₃b₂b₁b₀)₂, then x ^ 255 = (b₁₅b₁₄b₁₃b₁₂b₁₁b₁₀b₉b₈b₇b₆b₅b₄b₃b₂b₁b₀)₂ ^ (0000 0000 1111 1111)₂. XOR with 0 has no effect on a bit; XOR with 1 inverts it. Therefore the upper 8 bits of x are unchanged, while the lower 8 bits are inverted.

Check all true statements: If p is a polynomial of degree n, and q is a polynomial of degree m, and n=m, then p is of order q. The triangle inequality says that for all real numbers a and b, |a + b| ≤ |a| + |b|. f(x)=x is O(x²). There is a "largest order", i.e. there is some function g so that all other functions f are O(g). If p is a polynomial of degree n, and q is a polynomial of degree m, and n<m, then p is O(q). f(x)=x is Ω(x). f(x) = sin(x) is big-O of 1. aˣ is O(bˣ) exactly when a<b. aˣ is Ω(bˣ) exactly when a>b. f(x)=5x is of order 3x. All power functions f(x)=xⁿ, where n is a real constant, are O(eˣ). If two functions are of order g, then so is their sum. If two functions are O(g), then so is their sum. aˣ is of order bˣ exactly when a and b are equal. If f and g are functions defined for all positive real numbers and if lim x→∞ |f(x)/g(x)| = C, where C is a positive constant, then f is of order g.

The true statements are: If p is a polynomial of degree n, and q is a polynomial of degree m, and n=m, then p is of order q. The triangle inequality says that for all real numbers a and b, |a + b| ≤ |a| + |b|. f(x)=x is O(x²). If p is a polynomial of degree n, and q is a polynomial of degree m, and n<m, then p is O(q). f(x)=x is Ω(x). f(x) = sin(x) is big-O of 1. f(x)=5x is of order 3x. All power functions f(x)=xⁿ, where n is a real constant, are O(eˣ). If two functions are O(g), then so is their sum. aˣ is of order bˣ exactly when a and b are equal. If f and g are functions defined for all positive real numbers and if lim x→∞ |f(x)/g(x)| = C, where C is a positive constant, then f is of order g. Explanation of incorrect answers: "If two functions are of order g, then so is their sum" is incorrect, as the example of the functions f(x) = x and h(x) = -x shows. Both are of order x, but their sum is zero, which is not of order x. The following statements are incorrect: "aˣ is O(bˣ) exactly when a<b" and "aˣ is Ω(bˣ) exactly when a>b". The correct statements are: "aˣ is of order bˣ exactly when a and b are equal", "aˣ is O(bˣ) exactly when a≤b", and "aˣ is Ω(bˣ) exactly when a≥b".

Suppose x is an 8-bit number and y is a 3-bit number. What is the effect of the following assignment? x = (x & 248) + y & is the bitwise AND operator.

The upper 5 bits of x are unchanged, the number represented by the lower 3 bits is changed to y. Performing a bitwise AND of x with 248, which is 11111000 in binary, clears the lower 3 bits of x. If x = (b₇b₆b₅b₄b₃b₂b₁b₀)₂, then x & 248 = (b₇b₆b₅b₄b₃ 0 0 0)₂. If y is a 3-bit number, it has the form (0 0 0 0 0 c₂c₁c₀)₂. Then (x & 248) + y = (b₇b₆b₅b₄b₃c₂c₁c₀)₂.
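
For example, in Python (the starting values of x and y are arbitrary):

```python
x = 0b10110101             # arbitrary 8-bit value
y = 0b110                  # arbitrary 3-bit value (6)
x = (x & 248) + y          # 248 == 0b11111000
print(format(x, "08b"))    # 10110110: upper 5 bits kept, lower 3 bits replaced by y
```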

Proof: Base case: The summation formula holds for n=1 because both sides evaluate to 1/2. Inductive step: Suppose we already know for some arbitrary positive integer n that ∑ₖ₌₁ⁿ 1/(k(k+1)) = 1 − 1/(n+1). By adding the quantity 1/((n+1)(n+2)) to both sides, we get ∑ₖ₌₁ⁿ⁺¹ 1/(k(k+1)) = 1 − 1/(n+1) + 1/((n+1)(n+2)) = 1 − (n+2−1)/((n+1)(n+2)) = 1 − (n+1)/((n+1)(n+2)) = 1 − 1/(n+2). Thus, the summation formula holds for the case n+1.

This is the best of all the proofs in this quiz because it is not only correct, but also succinct and readable. It does not burden the reader with the unnecessary notational abstraction of a "P(n)".
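
As a quick sanity check (separate from the proof itself), the formula can be verified numerically for small n; the cutoff of 10 below is arbitrary:

```python
from fractions import Fraction

# Check sum_{k=1}^{n} 1/(k(k+1)) == 1 - 1/(n+1) for n = 1..10.
for n in range(1, 11):
    lhs = sum(Fraction(1, k * (k + 1)) for k in range(1, n + 1))
    rhs = 1 - Fraction(1, n + 1)
    assert lhs == rhs, (n, lhs, rhs)
print("formula holds for n = 1..10")
```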

Assume that x holds an 8-bit number. You want to clear bit 7. (This means making the bit with positional value 2⁷ = 128 zero while leaving the other bits of x unchanged.) Which of the following replacements achieves this goal? Notations: & means bitwise AND, | means bitwise OR, ^ means bitwise XOR.

x = x & 127 Notice that in binary, 128 is 10000000, and 127 is 01111111. We will number the bits starting at 0, so the highest bit of x is bit 7. x = x & 128: this clears all bits except bit 7, which is left unchanged. x = x & 127: this clears bit 7, and leaves the other bits unchanged. x = x | 128: this sets bit 7, and leaves the other bits unchanged. x = x | 127: this sets bits 0-6, and leaves bit 7 unchanged. x = x ^ 128: this inverts bit 7, i.e. turns 0 into 1 and vice versa, and leaves the other bits unchanged. x = x ^ 127: this inverts bits 0-6, and leaves bit 7 unchanged.
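
The six options can be compared directly in Python; the value of x below is an arbitrary example:

```python
x = 0b10110101                     # arbitrary 8-bit value

print(format(x & 128, "08b"))      # 10000000  keeps only bit 7
print(format(x & 127, "08b"))      # 00110101  clears bit 7 (the correct choice)
print(format(x | 128, "08b"))      # 10110101  sets bit 7
print(format(x | 127, "08b"))      # 11111111  sets bits 0-6
print(format(x ^ 128, "08b"))      # 00110101  inverts bit 7
print(format(x ^ 127, "08b"))      # 11001010  inverts bits 0-6
```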

Match each bitwise operator to its use for manipulating the bits of nonnegative integers: & (AND) | (OR) ^ (XOR)

& (AND) is for clearing selected bits | (OR) is for setting selected bits ^ (XOR) is for inverting selected bits Let the following example illustrate the general rule: if b = (1100 1001)₂, then & b clears bits 1, 2, 4, 5, | b sets bits 0, 3, 6, 7, ^b inverts bits 0, 3, 6, 7, when applied to any 8-bit number. All other bits are unchanged. b is called a bitmask.

In hexadecimal, you multiply a positive integer by 0x10 by appending the digit

0 Multiplication by 10 works that way in decimal, and there is nothing special about decimal. A more formal explanation could go like this: A positive integer m has the hexadecimal form m = 16ⁿ dₙ + 16ⁿ⁻¹ dₙ₋₁ + ... + 16d₁ + d₀. When we multiply by 0x10, i.e. by 16, we get 16m = 16ⁿ⁺¹ dₙ + 16ⁿ dₙ₋₁ + ... + 16²d₁ + 16d₀ + 0. This is the hex number with the digits of m shifted one to the left, and a 0 digit appended.

A non-negative integer m can be represented in hexadecimal with up to 4 digits if and only if..

0 ≤ m < 16⁴. Suppose m is a non-negative integer. Since we are proving an if and only if statement, we have to prove two conditionals: 1. if m has at most 4 hex digits, then m < 16⁴. 2. if m < 16⁴, then m has at most 4 hex digits. Observe that the other part of the inequality, 0 ≤ m, is always true because we are working in the domain of non-negative integers. Thus there is no need to show it in 1 (it is given), or to redundantly assume it in 2 by writing 0 ≤ m < 16⁴. We prove the two conditionals separately. 1. Suppose m has at most 4 hex digits. By definition, m then has the form 16³ d₃ + 16² d₂ + 16d₁ + d₀ (with d₃ possibly zero). Each digit is at most 15, thus m ≤ 16³·15 + 16²·15 + 16·15 + 15. Simplifying the right side of the inequality, we get m ≤ 15(16³ + 16² + 16 + 1). The second factor can be simplified using geometric summation as (16⁴ - 1)/15. The factor 15 cancels, so we get m ≤ 16⁴ - 1 < 16⁴. 2. Now we must prove the converse: if m < 16⁴, then m has a hex representation with at most 4 digits. We prove this contrapositively: if m has more than 4 digits, then m < 16⁴ is false, i.e. m ≥ 16⁴. (Note that the contrapositive of the converse of statement 1 is its inverse. There is a big-picture idea here: we can prove a biconditional p ↔ q by proving p → q and its inverse ¬p → ¬q.) So suppose m has at least 5 digits. Then there is a nonzero digit d in the hex representation in the 5th or higher position (from the right). That digit d has a weight of 16ᵏ with k ≥ 4. Since all terms in the hex representation are non-negative, m ≥ d · 16ᵏ. Using d ≥ 1 and k ≥ 4, we get m ≥ 1 · 16⁴ = 16⁴.

Suppose you apply the Euclidean algorithm to two positive integers a, b. You only know that a is a number with 1000 decimal digits.The value of b on the other hand is given: b = 3. Then we can be certain that the Euclidean algorithm will end with a zero remainder in how many steps? Enter the lowest number we can be certain of.

3 The Euclidean algorithm consists of repeated application of the division algorithm. In the division algorithm, the remainder r is always less than the divisor. So if b = 3, the remainder in the first step can at most be 2. Remainders have to decrease in each step. So the second remainder can be at most 1, and the third then has to be 0. Thus, the Euclidean algorithm finishes in at most 3 steps if b = 3. On the other hand, you cannot be certain that the algorithm will finish in less than 3 steps, because based on the given information, it is possible that it will require exactly 3 steps. Such is the case when a = 3·10⁹⁹⁹ + 2. (That is a number with 1000 decimal digits.) Then the Euclidean algorithm runs as follows: 3·10⁹⁹⁹ + 2 = 10⁹⁹⁹ · 3 + 2; 3 = 1 · 2 + 1; 2 = 2 · 1 + 0. Observe the general pattern here. When you apply the Euclidean algorithm to positive integers a and b=3, then: if a=3k+2 for some non-negative integer k, then the algorithm stops in 3 steps and gcd(a, b)=1. If a=3k+1 for some non-negative integer k, then the algorithm stops in 2 steps and gcd(a, b)=1. If a=3k for some non-negative integer k, then the algorithm stops in 1 step and gcd(a, b)=3.

Generally, the sum of two 3-digit hexadecimal numbers is a hexadecimal number with

3 or 4 digits. If x and y are 3-digit hex numbers, then x and y satisfy 16² ≤ x < 16³ and 16² ≤ y < 16³. Adding these inequalities produces 2·16² ≤ x + y < 2·16³. We can extend this inequality on both sides as follows: 16² < 2·16² ≤ x + y < 2·16³ < 16⁴. This implies 16² < x + y < 16⁴. Thus x + y has at least 3 hex digits, because 16² has 3 hex digits. On the other hand, x + y has at most 4 digits, because 16⁴ is the lowest hex number with 5 digits. The example x = y = 16² shows that x + y can indeed have 3 hex digits, as x + y = 2·16², a 3-hex-digit number. The example x = y = 8·16² shows that x + y can indeed have 4 hex digits, as x + y = 16³, a 4-hex-digit number.

Referring again to the blur algorithm described previously, by what factor does the number of operations increase when instead of applying it at full HD resolution (1920 x 1080), we apply it at UHD resolution (3840 x 2160) ?

4 The number of arithmetic operations required is approximately proportional to the number of pixels n. UHD has 3840 · 2160 = 4 · (1920 · 1080) pixels, i.e. 4 times as many pixels as full HD, and 4 times as many pixels means 4 times as many operations.

Generally, the product of a 2-digit and a 3-digit hexadecimal number is a hexadecimal number with

4 or 5 digits. If x is a 2-digit hex number and y is a 3-digit hex number, then 16¹ ≤ x < 16² and 16² ≤ y < 16³. Multiplying these inequalities produces 16³ ≤ xy < 16⁵. This means that xy has at least 4 hex digits, and at most 5. The student is invited to produce two examples that show that both cases occur.

The binary form of a 3-digit hex number has

9, 10, 11 or 12 binary digits You may think that the answer is exactly 12 binary digits because you have learned that each hex digit corresponds to 4 binary digits. That's basically correct, but you need to keep in mind that some of those binary digits may be 0, including leading ones, and we don't count leading zero digits. The leading hex digit can't be 0, or it would not be the leading digit. Therefore, it is at least 1, and that means that its binary representation has at least one digit. Therefore, the leading hex digit has a binary form with 1, 2, 3 or 4 binary digits. That means the entire number has 9, 10, 11 or 12 binary digits. Here are some examples to illustrate: 0x100 has three hex digits and is equal to 1 0000 0000 in binary. That's 9 binary digits. 0x200 has three hex digits and is equal to 10 0000 0000 in binary. That's 10 binary digits. 0x400 has three hex digits and is equal to 100 0000 0000 in binary. That's 11 binary digits. 0x800 has three hex digits and is equal to 1000 0000 0000 in binary. That's 12 binary digits. These examples teach us how to prove our answer. We assume that an arbitrary three-digit hex number n is given, and then conclude that n must be at least 0x100, and at most 0xFFF (the student is invited to show this in detail using the definition of base-b representation). Then we make a case distinction: case 1: 0x100 ≤ n < 0x200; case 2: 0x200 ≤ n < 0x400; case 3: 0x400 ≤ n < 0x800; case 4: 0x800 ≤ n ≤ 0xFFF. In case 1, we show that the binary representation of n has 9 digits. In case 2, we show that the binary representation of n has 10 digits. In case 3, we show that the binary representation of n has 11 digits. In case 4, we show that the binary representation of n has 12 digits. Again, the student is invited to fill in the algebraic details for each case based on the definition of base-b representation.
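
The claim is easy to check exhaustively in Python:

```python
# Every 3-hex-digit number lies between 0x100 and 0xFFF; collect all binary lengths.
lengths = {n.bit_length() for n in range(0x100, 0xFFF + 1)}
print(sorted(lengths))  # [9, 10, 11, 12]
```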

You may have noticed that in practice problems related to order, logarithms are usually just "log". As you know from algebra, there is more than one logarithm. For each positive number b, there is a base-b logarithm. They are all different- the base-2 logarithm of 8 is 3, while the base 10 logarithm of 8 is 0.90308998699... . There is also a base-e logarithm, called the natural logarithm and usually written as "ln". This begs the question of how it could be justified to just say that a function f(n) is order of "log(n)". Isn't this meaningless if the base of the log is not specified?

Actually, "f is order of log(n)" means the same thing regardless of the base. More precisely, if a and b are two positive real numbers, then f is of order base-a log of n if and only if f is of order base-b log of n. The reason for that lies in the change of base formula, which says that any two log functions are in a constant positive factor relationship with each other. Order relations between two functions do not change when you change a constant positive factor in one of the functions. The change of base formula says that log_b(x) = log_a(x) / log_a(b) for any x, a, b > 0. This means that log_b(x) and log_a(x) are strictly proportional to each other. The concept of order ignores proportionality constants. For different a, b values, "f is order of log_b(n)" means the same thing as "f is order of log_a(n)". This is why we don't need to bother to specify a base and can just say that f is order of log(n).

True or false? In the inductive step, we justify the assumption P(n) by referring to the base case. (Example: "Since P(0) is true, we can now assume that P(n) is true for some n...")

False Justifying why you can assume that P(n) is true for some n with the base case means trying to re-explain the principle of induction inside an inductive proof. An inductive proof uses the principle of induction; it does not justify it.

True or false? In the context of our theory of inductive proofs, P(n) represents the quantity about which we are proving something.

False P(n) is the statement we are proving for all n.

True or false? We always prove the statement P(n+1) by taking the statement P(n) and adding n+1 to both sides.

False That technique only works in the special case where we prove the summation formula for the sum of the numbers k from k =1 to n.

True or false? If x is a real number, then 2·3ˣ = 6ˣ.

False The operator precedence of exponentiation is higher than that of multiplication. This means that exponentiation gets carried out first, then multiplication: 2·3ˣ = 2·(3ˣ).

True or false? The conclusion of the inductive step, P(n+1), is shown by substituting n+1 for the n in the statement P(n).

False You determine what P(n+1) is by substituting n+1 for n in P(n), but merely writing P(n+1) does not prove that it is true.

Judge the following reasoning as true or false. You are working for a tech company. You came up with an algorithm that performs a needed operation on n inputs in order of n operations. Someone else in the company simultaneously came up with an algorithm that performs the same task in order of log(n) operations. Then it follows necessarily that your algorithm will require more operations to perform this task and is therefore inferior and should not be used.

False. The reasoning is false because it ignores the arbitrary multiplicative constant that is part of the concept of order. When an algorithm performs a task in "order of n" operations, that does not mean that the task is performed with n operations. It only means that the number of operations is approximately proportional to n. The actual number of operations could be 0.1n or 1,000,000 n. We don't know what the proportionality constant is. Likewise, when an algorithm performs a task in "order of log(n)" operations, that does not mean that the task is performed with log(n) operations. It only means that the number of operations is approximately proportional to log(n). Again, we don't know what the proportionality constant is. It could be close to zero, or extremely large. Therefore, the order operation does not tell us how many operations are necessary for any specific n. It only tells us about the scaling behavior of the two algorithms. This means that as n goes to infinity, it is certain that eventually, a point will be reached where the log(n) algorithm is superior. Whether this point is reached for any practically relevant n cannot be determined based on the given information.
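
To make the scaling argument concrete, here is a small sketch with invented proportionality constants (both constants are assumptions chosen purely for illustration): even with a much larger constant, the order-log(n) algorithm eventually wins as n grows.

```python
import math

# Hypothetical cost models; the constants 0.5 and 10_000 are made up for illustration.
def cost_linear(n):
    return 0.5 * n                 # "order of n" algorithm

def cost_log(n):
    return 10_000 * math.log2(n)   # "order of log(n)" algorithm

for n in (10, 10_000, 10_000_000):
    print(n, cost_linear(n), cost_log(n))
# For small n the linear algorithm needs fewer operations under these constants,
# but for large enough n the log(n) algorithm always requires fewer.
```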

We want to prove by induction that n² + n is even for all positive integers n. The base case is: 1² +1 = 2 is even. Select the proper inductive hypothesis. It is left up to you to determine whether there are multiple correct answers: Suppose we have proven that n² + n is even. Suppose we have proven that n² + n is even for some arbitrary positive integer n. Suppose we have proven that n² + n is even for all positive integers n. Suppose n = k. Since we have proved the case n = 1, we can now assume that the statement has been proved for some n. Suppose we have already proved the case n = 1.

False: Suppose we have proven that n² + n is even. This is missing "for some arbitrary positive integer n". True: Suppose we have proven that n² + n is even for some arbitrary positive integer n. This is the correct premise. False: Suppose we have proven that n² + n is even for all positive integers n. This is assuming the conclusion. If we already know this for all positive integers n, then there is nothing left to prove. False: Suppose n = k. This is assuming nothing. The premise that n is equal to some undefined symbol k is meaningless, and certainly is not equivalent to assuming that we have already proved that n² + n is even for some arbitrary positive integer n. This mistake reveals another danger inherent in the "renaming ritual": it can make it feel like you assumed something when you assumed nothing. False: Since we have proved the case n = 1, we can now assume that the statement has been proved for some n. If you justify the inductive hypothesis with the base case, then the inductive step logically relies on the base case, and is only valid when applied to the base case. That means that the logical chain reaction we want stops after one step. False: Suppose we have already proved the case n = 1. This is a variation of the error of justifying the inductive hypothesis with the base case.

Mark all true statements: If the positive integers a and b are relatively prime, then so are a+1 and b+1. The number of prime numbers between 1 and 100 is greater than the number of prime numbers between 1000 and 1100. The sequence of prime numbers follows no exact pattern. If p and q are primes, then pq+1 is also prime. If p and q are distinct primes, then they are also relatively prime to each other. If p is prime, then p+2 may or may not be prime, i.e. there exist primes p such that p+2 is prime, and there exist primes p such that p+2 is not prime. A prime number is a positive integer that is only divisible by 1 and itself. To test whether 101 is prime, you only need to divide it by 2,3,5 and 7. If it is not divisible by any of those numbers, then it is prime. If p is prime and n is a positive integer, then the number pⁿ has exactly n positive divisors. It is possible for a positive integer to have two prime factorizations such that the prime factor 3 appears in one of them but not the other. If there are only finitely many primes, then 1=2. The number n can have at most the floor of base-2 log of n many prime factors, if prime factors are counted with repetition (i.e. 2*3*3*5 has 4 prime factors.) The sum of two primes is prime. There is a largest prime, and it is about the size of Graham's number.

False: There is a largest prime, and it is about the size of Graham's number. There are infinitely many primes. This rules out existence of a largest prime. True: To test whether 101 is prime, you only need to divide it by 2,3,5 and 7. If it is not divisible by any of those numbers, then it is prime. Using trial division to determine whether n is prime, you only need to test divide by primes up to the square root of n. The square root of 101 is between 10 and 11 because 10² = 100 and 11² = 121. The primes that are at most the square root of 101 are 2, 3, 5, 7. True: The number n can have at most the floor of base-2 log of n many prime factors, if prime factors are counted with repetition (i.e. 2*3*3*5 has 4 prime factors.) This is true because each prime factor is at least 2, so if n has k prime factors, counted with repetition, n is at least 2ᵏ. This means k is at most log_2(n). Since k is also an integer, k is at most the floor of log_2(n). False: If p and q are primes, then pq+1 is also prime. This is not always true. A counter-example is p = 3, q = 5, for which pq + 1 = 16. True: The sequence of prime numbers follows no exact pattern. Primes appear "randomly" in a certain sense. True: If p is prime, then p+2 may or may not be prime, i.e. there exist primes p such that p+2 is prime, and there exist primes p such that p+2 is not prime. Here are examples for each case: if p = 2, then p + 2 is not prime. If p = 3, then p+2 is prime. False: If the positive integers a and b are relatively prime, then so are a+1 and b+1. This is not always true. a = 2 and b = 5 are relatively prime to each other, but a+1 = 3 and b+1 = 6 are not. True: If there are only finitely many primes, then 1=2. This is a true conditional statement because its premise is false. False: A prime number is a positive integer that is only divisible by 1 and itself. This popular definition of prime is technically incorrect because it would make 1 prime. False: The sum of two primes is prime. Counter-example: 3 + 5 = 8 is not prime. False: If p is prime and n is a positive integer, then the number pⁿ has exactly n positive divisors. The number pⁿ has n+1 positive divisors: 1, p, p², ..., pⁿ. True: If p and q are distinct primes, then they are also relatively prime to each other. If they were not relatively prime to each other, then they would contain at least one common prime factor. Since the numbers are prime, they can each only contain one prime factor. Therefore, they would have to be the same, but they are not. True: The number of prime numbers between 1 and 100 is greater than the number of prime numbers between 1000 and 1100. This must be true because of the prime number theorem. The number of primes up to 100 is approximately 100/ln(100), which is about 22. The number of primes between 1000 and 1100 is approximately 1100/ln(1100) - 1000/ln(1000), which is about 12. False: It is possible for a positive integer to have two prime factorizations such that the prime factor 3 appears in one of them but not the other. This is not possible because prime factorizations are unique.
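
The trial-division idea used for 101 can be sketched in a few lines of Python (dividing by every integer up to the square root rather than only primes, which is equivalent but slightly wasteful):

```python
def is_prime_trial(n):
    """Trial division: check divisibility by 2, 3, 4, ... up to sqrt(n)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

print(is_prime_trial(101))  # True: 101 is not divisible by 2, 3, 5 or 7
```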

Proof: Suppose P(n) is defined by P(n) = (∑ₖ₌₁ⁿ 1/(k(k+1)) = 1 − 1/(n+1)) for all n. Base case: P(1) is true because both sides evaluate to 1/2. Inductive step: since we have proved P(1), we know that P(n) is true for some n: ∑ₖ₌₁ⁿ 1/(k(k+1)) = 1 − 1/(n+1). By adding the quantity 1/((n+1)(n+2)) to both sides, we get ∑ₖ₌₁ⁿ⁺¹ 1/(k(k+1)) = 1 − 1/(n+1) + 1/((n+1)(n+2)) = 1 − (n+2−1)/((n+1)(n+2)) = 1 − (n+1)/((n+1)(n+2)) = 1 − 1/(n+2). Thus, P(n+1) is true.

The inductive step is logically flawed because it is tied to the base case. Rather than simply assuming that the statement P(n) has already been shown for some (arbitrary) n, the inductive step attempts to justify this assumption by referring to P(1). In doing so, this proof writer is trying to re-explain/re-prove the global logic of induction inside the inductive step, which must only be concerned with why the truth of any P(n) implies the truth of the next one. The proof writer correctly and in full generality shows that P(n) implies P(n+1), but since the inductive hypothesis effectively only assumed P(1), the inductive step really only shows P(2). The inductive chain reaction dies after one step. The inductive chain reaction can only propagate through all n if it is permitted to go from any n to the next.

Your job is to write a basic blurring algorithm for a video driver. The algorithm will do the following: it will go through all pixels on the screen and for each pixel, compute the average intensity value (in red, green and blue separately) of the pixel and its 8 neighbors. (At the edges of the screen, there are fewer neighbors for each pixel.) Let's say the number of pixels on the screen is n. Then what is the order of the number of arithmetic operations (additions and divisions) required?

The number is order of n . To find the average intensity value per color channel for a pixel with 8 neighbors requires an averaging of 9 numbers, which takes 8 additions and 1 division. Since this has to be done for each of the 3 color channels, each such pixel requires 24 additions and 3 divisions, or 27 arithmetic operations. Almost all pixels on the screen have 8 neighbors. We can approximate the total number of additions and divisions required by assuming that all pixels on the screen have 8 neighbors. Under that simplifying assumption, the total number of arithmetic operations is 27n. This means that the number is order of n. This answer does not change when we perform a more detailed calculation that takes the border and corner pixels into account.
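
A minimal sketch of such a blur loop, assuming the image is given as a list of rows of (r, g, b) tuples (the data layout and function name are made up for illustration); the work per pixel is bounded by a constant, so the total is order of n:

```python
def blur(image):
    """Average each pixel with its neighbors, per color channel (box blur sketch)."""
    height, width = len(image), len(image[0])
    result = [[None] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            # Collect the pixel and its neighbors (fewer at the edges and corners).
            block = [image[j][i]
                     for j in range(max(0, y - 1), min(height, y + 2))
                     for i in range(max(0, x - 1), min(width, x + 2))]
            result[y][x] = tuple(sum(px[c] for px in block) // len(block)
                                 for c in range(3))
    return result
```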

Proof: Suppose P(n) is defined by P(n) = (∑ₖ₌₁ⁿ 1/(k(k+1)) = 1 − 1/(n+1)) for all n. Base case: P(0) is true because both sides evaluate to 0. Inductive step: suppose P(n) has already been proven for some n=k. We wish to prove P(k+1), which is ∑ₖ₌₁ᵏ⁺¹ 1/(k(k+1)) = 1 − 1/(k+2). By taking the statement P(k) and adding k+1 to both sides, we get ∑ₖ₌₁ᵏ⁺¹ 1/(k(k+1)) = 1 − 1/(k+1) + k + 1 = 1 − 1/(k+2). This concludes the proof by induction.

The proof writer abuses the variable k to simultaneously play the role of the running variable in the sigma sum, and the upper limit of the running variable. This creates a circular reference (k goes from 1 to k) which renders all the sigma sums in the proof meaningless. The proof writer blindly imitates example 1 of the lecture on induction, in which the statement P(n+1) is obtained from P(n) by adding n+1 to both sides. In lecture example 1, n+1 was the next term in the summation, but different summations come with different next terms: In the sum ∑ₖ₌₁ⁿ k, the next term (i.e. the new term that gets added when we change the upper limit from n to n+1) is indeed n+1. In the sum ∑ₖ₌₁ⁿ 1/(k(k+1)), the "next" term is 1/((n+1)(n+2)). The proof writer exercises wishful thinking or consciously bluffs by claiming, without showing any algebraic detail, that 1 − 1/(k+1) + k + 1 = 1 − 1/(k+2). The proof writer wants this to be true, but can't show it because it isn't true. The base case is n=1, not n=0.

Proof: let P(n) = ∑ₖ₌₁ⁿ 1/(k(k+1)) = 1 − 1/(n+1). Base case: P(1) = 1/2. Inductive step: suppose P(n) has already been proven for some arbitrary n. The statement P(n+1) is P(n+1) = ∑ₖ₌₁ⁿ⁺¹ 1/(k(k+1)) = 1 − 1/(n+2). This concludes the proof by induction.

The proof writer confused stating P(n+1) with showing that it must be true, given P(n) is true. The proof abuses the notation P(n) to refer to both the common value of the two sides of the equation to be proved and the statement that the two sides are indeed equal, as the notation was introduced in the lecture. Furthermore, it does not make sense to define P(n) as the common value of the two sides, because it assumes the conclusion, that the two sides are equal. At the very least, the definition of P(n) in the first line should have used parentheses: P(n) = (∑ₖ₌₁ⁿ 1/(k(k+1)) = 1 − 1/(n+1)). Related to that, P(1) is not the quantity 1/2. It's the statement (1/2 = 1/2). The best option is to not use the abstraction of "P(n)" in actual inductive proofs at all but refer verbally to the statement to be proved. The notation P(n) is best reserved for discussing the logic of inductive proofs in the abstract.

In computer technology, endianness refers to the way individual bytes are ordered in memory within a multi-byte number. For example, the hex number 0xFAB3 would be stored in memory as 0xFA 0xB3 using the Big Endian system, and as 0xB3 0xFA using the Little Endian convention. In the Big Endian system, the bytes are stored in order of decreasing significance, with the most significant first. This is the mathematically "natural" format because it agrees with the order we write the digits in integer representation. In the Little Endian system, the bytes are stored in order of increasing significance, with the least significant first. Both systems have been and are in practical use. The historic Motorola 68000 series of CPUs was big endian. Intel and AMD x86-compatible CPUs use the little endian system. Imagine you are reading a book on assembly programming for a certain CPU architecture. You can't remember whether the architecture is little or big endian, but it is certain to be one of the two. A code example shows you that addition of the two 16-bit numbers 0x05 0x0A and 0xC0 0xF6 produces the number 0xC6 0x00 without a carry bit. What is your conclusion?

The system is big endian. In the big endian convention, the two numbers are 0x050A and 0xC0F6. Their sum is 0xC600, which is 0xC6 0x00 in big endian, with no carry bit. In the little endian convention, the two numbers are 0x0A05 and 0xF6C0. Their sum is 0x100C5; its low 16 bits, 0x00C5, would be stored as 0xC5 0x00 in little endian, with a carry bit. That does not match the result shown, so the system must be big endian.
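
The arithmetic behind this conclusion can be checked quickly in Python using int.from_bytes:

```python
a = bytes([0x05, 0x0A])
b = bytes([0xC0, 0xF6])

# Big endian interpretation: the sum fits in 16 bits and matches 0xC6 0x00.
s_big = int.from_bytes(a, "big") + int.from_bytes(b, "big")
print(hex(s_big), s_big.to_bytes(2, "big"))  # 0xc600 b'\xc6\x00'

# Little endian interpretation: the sum overflows 16 bits, i.e. there is a carry.
s_little = int.from_bytes(a, "little") + int.from_bytes(b, "little")
print(hex(s_little))                         # 0x100c5
```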

The situation is the same as in the previous problem, except that now, the example shows you that multiplication of the two 16-bit numbers represented by 0x00 0x10 and 0x0F 0x97 produces the 32-bit number represented by 0x00 0xF0 0x70 0x09. What is your conclusion?

The system is little endian. Interpreted with the big endian convention, the two numbers are 0x10 and 0xF97. Their product is 0xF970, which in big endian is 0xF9 0x70, or 0x00 0x00 0xF9 0x70. Interpreted with the little endian convention, the two numbers are 0x1000 and 0x970F. Their product is 0x970F000, which can also be written as 0x0970F000. The little endian representation of that is 0x00 0xF0 0x70 0x09.

True or false? If a is a positive number and x and y are real numbers, then a ^(x y) = ( a ^x )^ y.

True This is one of the laws of exponents: for a positive base a and real exponents x and y, (aˣ)ʸ = aˣʸ.

True or False? In hexadecimal, you divide a positive integer by 0x10 by separating the last digit r from the rest of the number q. r is the remainder, q the quotient of the division.

True Division by 10 works that way in decimal, and there is nothing special about decimal. A more formal explanation could go like this: A positive integer m has the hexadecimal form m = 16ⁿ dₙ + 16ⁿ⁻¹ dₙ₋₁ + ... + 16d₁ + d₀. We can rewrite that as m = 16(16ⁿ⁻¹ dₙ + 16ⁿ⁻² dₙ₋₁ + ... + d₁) + d₀. We have thus written m as m = 16q + r, with q = 16ⁿ⁻¹ dₙ + 16ⁿ⁻² dₙ₋₁ + ... + d₁ and r = d₀. Since d₀ is by definition a hex digit, we know even more: m = 16q + r with 0 ≤ r ≤ 15. In other words, m = 16q + r is the division algorithm representation of m as a multiple of 16 plus a remainder. (Technically, it is the uniqueness property of the division algorithm representation of an integer in terms of a quotient and a remainder that allows us to draw this conclusion. There is only one way to write m as 16q + r with 0 ≤ r ≤ 15, and it is produced by the division algorithm. Since we got one such way, it must be the same one produced by the division algorithm.) So we got that the remainder is d₀, the last digit of m, and the quotient is 16ⁿ⁻¹ dₙ + 16ⁿ⁻² dₙ₋₁ + ... + d₁, which is the hex number formed by the remaining digits.
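
A quick illustration in Python (0x3A7 is an arbitrary sample value):

```python
m = 0x3A7
q, r = divmod(m, 0x10)
print(hex(q), hex(r))  # 0x3a 0x7: the remainder is the last hex digit, the quotient is the rest
```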

True or False? The last hex digit of an even positive integer is always 0, 2, 4, 6, 8, A, C or E.

True Suppose n is a positive integer. Applying the division algorithm with divisor 16 yields the representation n = 16q + r, with 0 ≤ r ≤ 15. This can be rewritten as r = n - 16q, which shows that n has the same parity as its remainder. The remainder is the last digit. Thus an even integer has a last digit that represents an even number. The one-digit hex numbers that are even are 0, 2, 4, 6, 8, A, C and E.

True or false? If P(n) is a summation formula for the sigma sum ∑ₖ₌₀ⁿ f(k) = S(n), where S(n) represents the sum in closed form, we prove the statement P(n+1) from P(n) by taking the statement P(n) and adding f(n+1) to both sides. Then we simplify S(n)+f(n+1) algebraically to show that it is S(n+1).

True The summation formula for the case n+1 is ∑ₖ₌₀ⁿ⁺¹ f(k) = S(n+1). The left side of that can always be related to P(n) by splitting the sigma sum into the sum from k = 0 to n, plus another term: ∑ₖ₌₀ⁿ⁺¹ f(k) = ∑ₖ₌₀ⁿ f(k) + f(n+1). Thus, if we already know ∑ₖ₌₀ⁿ f(k) = S(n), proving the formula for the case n+1 means proving that S(n) + f(n+1) = S(n+1).

True or false? If aₙ = ∑ₖ₌₁ⁿ k², then aₙ₊₁ = ∑ₖ₌₁ⁿ⁺¹ k².

True aₙ = ∑ₖ₌₁ⁿ k² is a function of n alone. aₙ₊₁ = ∑ₖ₌₁ⁿ⁺¹ k² is obtained by replacing n in the original formula by n+1.

Mark all true statements: The lcm of two distinct primes is their product. The gcd of two distinct primes is 1. If p,q and r are distinct primes, then gcd(pq, qr) = q. Given the prime factorizations of positive integers a and b, we can easily find the gcd and the lcm of a and b. If p,q and r are distinct primes, then lcm(pq, qr) = prq.

True: Given the prime factorizations of positive integers a and b, we can easily find the gcd and the lcm of a and b. For the gcd, we take the minimum power of each prime factor; for the lcm, the maximum power. True: The gcd of two distinct primes is 1. This is a statement from the previous problem rephrased - two distinct primes are also relatively prime. True: The lcm of two distinct primes is their product. This is true because two distinct primes are relatively prime to each other, so their lcm is their product. True: If p,q and r are distinct primes, then gcd(pq, qr) = q. This follows from the rule for finding the gcd of two numbers from their prime factorizations. True: If p,q and r are distinct primes, then lcm(pq, qr) = prq. This follows from the rule for finding the lcm of two numbers from their prime factorizations.
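
For instance, using Python's math module (p, q, r are arbitrary sample primes):

```python
import math

p, q, r = 3, 5, 7                  # distinct primes chosen for illustration
a, b = p * q, q * r                # 15 and 35
print(math.gcd(a, b))              # 5 == q
print(a * b // math.gcd(a, b))     # 105 == p*q*r, the lcm
```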

Check all statements that are true: Adding two integers and then taking the remainder produces the same result as taking their remainders first and then adding them. If an integer a divides a product of two integers b and c, then a must divide b or a must divide c. If an integer divides two numbers, it also divides their sum. If an integer divides two numbers, it also divides their difference. When you perform division by 5 with remainder, the remainder is an integer from -5 to 5. If a and b are positive integers, and a = bq + r is the decomposition of a given by the division algorithm, then q can be found as the floor of a/b, and then r can be found as r = a - bq. Saying that a divides b is the same as saying that b is a multiple of a. When you perform division by 3 with remainder, the remainder is one of the integers 0,1,2. Adding two integers and then taking the remainder produces the same result as taking their remainders first, then adding them, and then applying the remainder operation once more. If an integer a divides an integer b, then a also divides any multiple of b.

True: If an integer divides two numbers, it also divides their sum. That is a rewording of the rule given in the lecture: If b and c are both multiples of a, so is b + c. True: If an integer divides two numbers, it also divides their difference. This is the previous rule in disguise because b-c = b+(-c). -c is just another number. True: If an integer a divides an integer b, then a also divides any multiple of b. That is a rewording of the rule given in the lecture: If b is a multiple of a, so is any multiple of b. False: If an integer a divides a product of two integers b and c, then a must divide b or a must divide c. This is not always true. For example, 6 divides 10·21, but it divides neither 10 nor 21. True: Saying that a divides b is the same as saying that b is a multiple of a. Both manners of speaking mean that b = ak for some integer k. False: When you perform division by 5 with remainder, the remainder is an integer from -5 to 5. The remainder is an integer from 0 to 4. True: When you perform division by 3 with remainder, the remainder is one of the integers 0,1,2. When you divide by d>0, the remainder is always a number from 0 to d-1. True: If a and b are positive integers, and a = bq + r is the decomposition of a given by the division algorithm, then q can be found as the floor of a/b, and then r can be found as r = a - bq. The quotient in the division algorithm for the integer division a/b is by definition the largest number q so that bq does not exceed a. The condition bq ≤ a is equivalent to q ≤ a/b. The largest integer q that does not exceed a/b is by definition the floor of a/b. False: Adding two integers and then taking the remainder produces the same result as taking their remainders first and then adding them. This is not always true. (1 + 1) mod 2 is zero, but 1 mod 2 + 1 mod 2 is two. True: Adding two integers and then taking the remainder produces the same result as taking their remainders first, then adding them, and then applying the remainder operation once more. This consequence of the congruence theorem is derived in the lecture.
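
The two remainder statements can be illustrated quickly in Python (the sample values are arbitrary):

```python
a, b, m = 17, 9, 5

left  = (a + b) % m            # 26 % 5 == 1: add first, then take the remainder
wrong = a % m + b % m          # 2 + 4 == 6: remainders added, but not reduced again
right = (a % m + b % m) % m    # 6 % 5 == 1: remainders added, then reduced once more
print(left, wrong, right)      # 1 6 1
```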

Check all true statements: In duodecimal (base 12), every digit is 0,1,2,3,4,5,6,7,8,9,A or B. Given a positive integer n and a base b, we can find the last digit of the base b expansion of n by performing the division algorithm to find n = bq + r. The remainder r is the last digit. By repeating the process with q instead of n, we find the next digit, and so on. The fast modular exponentiation algorithm takes advantage of the binary representation of the exponent. You can convert a number from hexadecimal to binary by replacing each hexadecimal digit separately by its corresponding 4-bit binary representation. The security of Diffie-Hellmann Key Exchange is based on the difficulty of computing discrete logarithms. Expressed in base-n, the integer n² is "100". In octal (base 8), every digit is 0,1,2,3,4,5,6 or 7. You can convert a number from binary to octal by grouping the digits ("bits") of the binary number into groups of 3, going from right to left. If the number of bits is not a multiple of 3, you may have to add one or two leading 0 bits on the left side. Then you convert each group of 3 bits into one octal digit. Expressed in base-n, the integer n is "10". Among all base b representations of a positive integer n, the binary one is always at least as long as any other (in terms of number of digits.) In ternary (base 3), every digit is a 0,1 or 2. In base b, it is easy to see whether an integer is a multiple of b. Its last digit is zero in that case. If k is an integer greater than 1 and n is a positive integer that is not a power of k, then n has ⌈logₖ(n)⌉ digits in base k. You can convert a number from decimal to binary by replacing each decimal digit separately by its corresponding binary representation. The fast modular exponentiation algorithm computes bⁿ mod m in only about log₂(n) steps. This makes it practical even when n is large.

True: In ternary (base 3), every digit is a 0, 1 or 2. True: In octal (base 8), every digit is 0,1,2,3,4,5,6 or 7. True: In duodecimal (base 12), every digit is 0,1,2,3,4,5,6,7,8,9,A or B. The digits in base b always go from 0 to b-1. True: Given a positive integer n and a base b, we can find the last digit of the base-b expansion of n by performing the division algorithm to find n = bq + r. The remainder r is the last digit. We can see that this is true directly based on the definition of base b expansion. The base-b expansion of an arbitrary positive integer n has the form n = bᵐ dₘ + bᵐ⁻¹ dₘ₋₁ + ... + bd₁ + d₀. All terms except the last one have a factor of b in them which we can factor out: n = b(bᵐ⁻¹ dₘ + bᵐ⁻² dₘ₋₁ + ... + d₁) + d₀. This way, we have decomposed n into b times another integer, plus a number in the range from 0 to b-1. That is precisely what the division algorithm does. Since quotient and remainder produced by the division algorithm are unique (i.e. there is only one way to write a number as a multiple of b plus a number in the range 0..b-1), q = bᵐ⁻¹ dₘ + bᵐ⁻² dₘ₋₁ + ... + d₁, and r = d₀. The second equation is what we wanted: the last digit is the remainder. True: By repeating the process with q instead of n, we find the next digit, and so on. If we apply the division algorithm again, now to q = bᵐ⁻¹ dₘ + bᵐ⁻² dₘ₋₁ + ... + d₁, the remainder will be d₁. After that, d₂, etc. True: You can convert a number from binary to octal by grouping the digits ("bits") of the binary number into groups of 3, going from right to left. If the number of bits is not a multiple of 3, you may have to add one or two leading 0 bits on the left side. Then you convert each group of 3 bits into one octal digit. Generally, we can block-convert from base b to base bⁿ by grouping the base b digits of a number into groups of n, from right to left, with possibly some zero-padding on the left, and converting each block separately into a base bⁿ digit. The expanded PowerPoint contains a proof of this for the special case b = 2 and n = 4 (binary to hex). True: You can convert a number from hexadecimal to binary by replacing each hexadecimal digit separately by its corresponding 4-bit binary representation. False: You can convert a number from decimal to binary by replacing each decimal digit separately by its corresponding binary representation. Generally, we can convert (only) from base bⁿ to base b by converting each base bⁿ digit separately into the corresponding base b number. We can convert base 16 to base 2 this way because 16 is a power of 2. We cannot convert base-10 to base-2 this way because 10 is not a power of 2. True: Among all base b representations of a positive integer n, the binary one is always at least as long as any other (in terms of number of digits). The formula for the number k of digits of a number n in base b is k = ⌊log_b(n)⌋+1. When we keep n constant and increase b, log_b(n) decreases, which we can see by using the change of base formula: log_b(n) = ln(n)/ln(b). Therefore, k(b) is a non-increasing sequence. True: Expressed in base n, the integer n is "10". By definition of base n, (10)ₙ = 1·n¹ + 0·n⁰ = n. True: Expressed in base n, the integer n² is "100". By definition of base n, (100)ₙ = 1·n² + 0·n¹ + 0·n⁰ = n². True: In base b, it is easy to see whether an integer is a multiple of b. Its last digit is zero in that case. If n is a multiple of b, its remainder mod b is zero. The last digit in base b is the remainder mod b.
Therefore, if n is a multiple of b, the last digit is zero. True: The fast modular exponentiation algorithm takes advantage of the binary representation of the exponent. Fast modular exponentiation finds the values b^(2^k) mod m through successive squaring. In order to compute bⁿ mod m using the values b^(2^k) mod m, we need to represent n as a sum of numbers 2ᵏ. That is the binary representation of n. True: The fast modular exponentiation algorithm computes bⁿ mod m in only about log₂(n) steps. This makes it practical even when n is large. The number of steps required by fast modular exponentiation to compute bⁿ mod m is roughly equal to the number of binary digits of n, which is log₂n. True: The security of Diffie-Hellmann Key Exchange is based on the difficulty of computing discrete logarithms. This is described in the expanded powerpoint on integer representations. True: If k is an integer greater than 1 and n is a positive integer that is not a power of k, then n has ⌈logₖ(n)⌉ digits in base k. The number of digits of n in base k is ⌊log_k(n)⌋+1. When n is not a power of k, log_k(n) is not an integer. In that case, rounding down and adding one is the same as rounding up. Therefore, the number of digits of n in base k is ⌈log_k(n)⌉.
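
A minimal sketch of fast modular exponentiation (square-and-multiply driven by the binary digits of the exponent); Python's built-in pow(b, n, m) performs the same computation:

```python
def mod_pow(b, n, m):
    """Compute b**n % m using the binary representation of the exponent n."""
    result = 1
    b %= m
    while n > 0:
        if n & 1:                # current binary digit of n is 1
            result = (result * b) % m
        b = (b * b) % m          # successive squaring: b^(2^k) mod m
        n >>= 1                  # move on to the next binary digit
    return result

print(mod_pow(7, 644, 645), pow(7, 644, 645))  # the two results agree
```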

You have a large text file of people. Each person is represented by one line in the text file. The line starts with their ID number and after that, has the person's name. The lines are sorted by ID number in ascending order. There are n lines in this file. You write a search function that returns the name of a person whose ID number is given. The simplest way to do that would be to program a loop that goes through each line and compares the ID number in the line against the given ID number. If there is a match, then it returns the name in that line. This is very inefficient, because the worst-case scenario is that this program needs to go through almost everyone- the person we are looking for could be last. Using the fact that the file is sorted will greatly speed up the process, by allowing us to use a binary search algorithm: We go to the middle line of the file first and compare the ID found there (P) to the given ID (Q). (If the number of lines is even, we go above or below the arithmetic middle.) If P=Q then our algorithm terminates - we have found the person we are looking for. If P is less than Q, that means that the person we are looking for is in the second half of the file. We now repeat our algorithm on the second half of the file. If P is greater than Q, that means that the person we are looking for is in the first half of the file. We now repeat our algorithm on the first half of the file. Of what order is the worst-case number of comparison operations that are needed for this algorithm to terminate?

log(n) It suffices to conduct a rough analysis where we ignore the fact that our algorithm behaves slightly differently depending on whether the number of lines is odd or even. With every bisection step, the number of lines of the remaining file we are searching is cut in half. The worst-case scenario occurs where we only find a match after bisection has reduced the remaining file to a single line. In that case, if it took k bisection steps, the file was approximately n = 2ᵏ lines in size. Solving for k, we get k = log₂(n).
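
A sketch of the search loop, assuming the file has already been read into a list of (id_number, name) pairs sorted by ID (the function and variable names are made up for illustration):

```python
def find_name(records, target_id):
    """Binary search over a list of (id_number, name) pairs sorted by id_number."""
    lo, hi = 0, len(records) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        p = records[mid][0]
        if p == target_id:
            return records[mid][1]
        elif p < target_id:      # the target is in the second half
            lo = mid + 1
        else:                    # the target is in the first half
            hi = mid - 1
    return None                  # no such ID in the file

# Each iteration halves the range that remains to be searched,
# so the worst case takes about log2(n) comparisons.
print(find_name([(3, "Ana"), (17, "Bo"), (42, "Cy")], 17))  # Bo
```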

A nibble is a 4-bit number. An 8-bit number has a high nibble and a low nibble. For example, given n = 147 = (1001 0011)₂, the high nibble of n is 1001 in binary, or 9 in decimal, the low nibble of n is 0011 in binary, or 3 in decimal. If n holds an 8-bit number, which one of the following expressions will produce the high nibble of n and which one will produce the low nibble of n? & is the bitwise AND.

low nibble of n: n & 15 high nibble of n: none of the given options n & 15 is the low nibble of n, because 15 = (0000 1111)₂. A bitwise AND with that clears the upper 4 bits and leaves the lower 4 bits in place. You may think that n & 240 is the high nibble of n, because 240 = (1111 0000)₂. A bitwise AND with that clears the lower 4 bits and leaves the upper 4 bits in place. However, the result of that is not the high nibble, but 16 times the high nibble: (b₇b₆b₅b₄b₃b₂b₁b₀)₂ & 240 = (b₇b₆b₅b₄ 0 0 0 0)₂ = 16 · (b₇b₆b₅b₄)₂. To illustrate with the example n = 147 = (1001 0011)₂, n & 240 = (1001 0000)₂ = 144. To fix the formula, we need to shift (n & 240) 4 bits to the right: (n & 240) >> 4. If the right shift operator just discards the lower bits (rather than rotating them back in), then we don't need the & 240, and simply n >> 4 will do the trick.
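
In Python, where >> simply discards the shifted-out bits, the two nibbles can be extracted like this:

```python
n = 147                # 1001 0011 in binary
low = n & 15           # 0000 0011 -> 3
high = n >> 4          # 1001      -> 9 (equivalently (n & 240) >> 4)
print(high, low)       # 9 3
```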

What is the product of two 8-bit numbers? Correct answers must be general, i.e. be valid no matter what the two 8-bit numbers are. Select the smallest integer n that makes the following true: the product of two 8-bit numbers is always an n-bit number.

n=16 Given two numbers x and y that satisfy 0 ≤ x ≤ 255 and 0 ≤ y ≤ 255, by multiplying the inequalities, you get 0 ≤ xy ≤ 65025. Since the 17th bit in a binary number is worth 65536, the highest 16-bit number is 65535= (1111111111111111)₂. Thus xy is a 16-bit number. In exponential notation, the numbers x and y satisfy 0 ≤ x < 2⁸ and 0 ≤ y < 2⁸. Multiplying the inequalities, we get 0 ≤ xy < 2¹⁶ . Again, we get that xy is a 16-bit number. It is true that for some 8-bit numbers x and y, xy can be represented with a lower number of bits than 16. Indeed, if x = 0 and y = 0, then xy is a 1-bit number. However, the product of two 8-bit numbers is not always a 15-bit number, as the example x=y=255 shows. xy = 65025 is greater than the greatest 15 bit number 2¹⁵ -1 = 32767, and therefore 65025 is not a 15-bit number.

What is the sum of a 16-bit and a 32-bit number? Correct answers must be general, i.e. be valid no matter what the two numbers are. Select the smallest integer n that makes the following true: the sum of a 16-bit and a 32-bit number is always an n-bit number.

n=33 Given two numbers x and y with 0 ≤ x < 2¹⁶ and 0 ≤ y < 2³², 0 ≤ x + y < 2¹⁶+2³² < 2³² + 2³² = 2³³. This is the defining property of x + y being a 33-bit number. It is true that for some x and y, x + y can be represented with a lower number of bits than 33. For example, if x = 0 and y = 0, then x + y is a 1-bit number. However, the example x = 2¹⁶ - 1 and y = 2³² - 1 shows that x + y can be as large as 2¹⁶ + 2³² - 2, which is larger than the largest 32 bit number 2³² - 1.

What is the product of a 16-bit and a 32-bit number? Correct answers must be general, i.e. be valid no matter what the two numbers are. Select the smallest integer n that makes the following true: the product of a 16-bit and a 32-bit number is always an n-bit number.

n=48 Given two numbers x and y with 0 ≤ x < 2¹⁶ and 0 ≤ y < 2³², 0 ≤ xy < 2¹⁶2³² =2⁴⁸. This is the defining property of xy being a 48-bit number. It is true that for some x and y, xy can be represented with a lower number of bits than 48. For example, if x = 0 and y = 0, then xy is a 1-bit number. However, the example x = 2¹⁶ - 1 and y = 2³² - 1 shows that xy can be as large as (2¹⁶ -1)(2³² - 1) = 2⁴⁸ - (2¹⁶ + 2³²) + 1, which is larger than 2⁴⁷ - 1, the largest 47-bit number. You can see that 2⁴⁸ - (2¹⁶ + 2³²) + 1 > 2⁴⁷ - 1 by considering: 2⁴⁸ = 2⁴⁷ + 2⁴⁷ and 2¹⁶ + 2³² < 2³² + 2³² = 2³³ < 2⁴⁷, thus 2⁴⁸ - (2¹⁶ + 2³²) + 1 = 2⁴⁷ + 2⁴⁷ - (2¹⁶ + 2³²) + 1 > 2⁴⁷ + 1 > 2⁴⁷ - 1.

What is the sum of two 8-bit numbers? Correct answers must be general, i.e. be valid no matter what the two 8-bit numbers are. Select the smallest integer n that makes the following true: the sum of two 8-bit numbers is always an n-bit number.

n=9 Given two numbers x and y that satisfy 0 ≤ x ≤ 255 and 0 ≤ y ≤ 255, by adding the inequalities, you get 0 ≤ x + y ≤ 510. Since the 10th bit in a binary number is worth 512, the highest 9-bit number is 511= (111111111)₂. Thus x + y is a 9-bit number. Another way of carrying out this algebra, which is better because it makes the binary exponents visible, is by using exponential notation. The numbers x and y satisfy 0 ≤ x < 2⁸ and 0 ≤ y < 2⁸. Adding the inequalities, we get 0 ≤ x + y < 2·2⁸ , or simplified, 0 ≤ x + y < 2⁹. This is the defining property of x + y being a 9-bit number. It is true that for some 8-bit numbers x and y, x + y can be represented with a lower number of bits than 9. Indeed, if x = 0 and y = 0, then x + y is a 1-bit number. 1 bit, or any bit number lower than 9, is not sufficient for all cases though, as the example x = y = 255 shows. If you feel that the correct answer is a 16-bit number, then you are only partially correct. It is true that in the sense in which we are speaking here, the sum of two 8-bit numbers is also a 16-bit number. However, you were asked to select the lowest number of bits that can represent the sum.
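
The extreme cases in these bit-counting questions are easy to check with Python's int.bit_length() (a sanity check, not a proof):

```python
print((255 + 255).bit_length())                   # 9  : sum of two 8-bit numbers
print((255 * 255).bit_length())                   # 16 : product of two 8-bit numbers
print(((2**16 - 1) + (2**32 - 1)).bit_length())   # 33 : 16-bit plus 32-bit number
print(((2**16 - 1) * (2**32 - 1)).bit_length())   # 48 : 16-bit times 32-bit number
```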

