Chapter 1
What is the largest integer we can represent in N bits? (e.g., 3 bits)
(2^n) - 1, e.g., (2^3) - 1 = 7, or (2^2) + (2^1) + (2^0) = 4 + 2 + 1 = 7
new standard (integer)
4 bytes = integer
Gigabyte
G; billion; 2^30
character
char 1 byte of storage
floating points
real numbers expressed with a fractional part x in the range 0.1 ≤ x < 1 (times a power of the base)
converting from decimal to binary
use powers of 2:
1. Take the decimal and subtract the largest power of 2 that fits without going over; repeat with the remainder.
2. Account for the largest power of 2 and allow for that many positions (put a "1" in each placeholder whose power was used and a "0" in all the others).
source code
//R.Parker Demo 1
#include<iostream>
#include<fstream>
#include<string>
#include<iomanip>
using namespace std;
int main ( )
{
    // declarations: datatype ident1, ident2...;
    int test1, test2;
    double average;
    return 0;
}
First program
1. input two tests: test1, test2 (in C++, identifiers must start with a letter and can continue with any number of letters and digits)
2. calculate the average
3. output the average
old standard (integer)
2 bytes = integer, i.e., 16 bits - 1 sign bit = 15 bits, so (2^15) - 1 = 32,767
original IBM PC
7 bit byte
now we use...
8 bits
syntax
English-like language with rules of grammar, spelling, punctuation (high level programming language)
What is the largest signed integer that we can represent in 4 byte integer?
First, convert bytes to bits: 4 bytes = 32 bits (because 4 x 8 = 32). Then subtract 1 sign bit from 32, which leaves 31 bits. Now use (2^n) - 1 = (2^31) - 1 = 2,147,483,647
Kilobyte
K; thousand; 2^10=1,024
Power of 2's
Kilo, mega, giga, tera, peta
Megabyte
M; million; 2^20=1,048,576
Petabyte
P; quadrillion; 2^50
Terabyte
T; trillion; 2^40
scientific notation
all numbers expressed as a number x in the range 1 ≤ x < 10 times a power of 10, e.g., 384 would be 3.84 x 10^2; -0.003 would be -3.0 x 10^-3 (a negative exponent indicates moving the decimal point backwards)
bits
binary digits, base 2 (powers of 2); the only language the computer understands (machine language)
logic errors
errors in formulas, methods, sequencing of steps, etc.
syntax errors
errors in grammar, spelling and punctuation
common run time errors
math errors (divide by zero, log of a negative number), file errors
semantics
meaning (high level programming language)
integer
old standard: 2-byte integer (short); NOW 4 bytes in C++ (int)
real number
old: 4 bytes (float); C++ now: 8 bytes (double)
converting from binary to decimal
1. Record each bit's position.
2. Write it in standard notation (e.g., 1 x 2^6).
3. Ignore the zeros.
4. Take the sum.
algorithm
the steps needed to solve a problem: 1. input data 2. process data 3. output results
object code
the file that stores machine language
source code
the file that stores programming language
compiler
translates from programming language into machine language
run time errors
errors that occur while the program is running (being executed), e.g., with two integers (int) x and y: cin >> x; y = 10 / x; — if the user types in 0, the division fails even though nothing is wrong syntactically