CS232: Computer Organization Final Exam
Memory location
"word"
Word
- One or more bytes (sets of 8 bits) - Each word holds 1 piece of data or 1-2 instructions Ex.) a 32-bit word has 4 bytes
How is capacity important in a computer?
- Internal memory is broken down into bytes/ words. - Number of addressable units: NumAddr = 2^n Ex.) 512 MB of RAM needs 29 bits to form 512 M addresses, since 2^29 = 512 M
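A minimal sketch (my own example, not from the card) of the NumAddr = 2^n relationship in Python:

```python
import math

def address_bits(num_units):
    """Smallest n such that 2**n >= num_units addressable units."""
    return math.ceil(math.log2(num_units))

# 512 M units = 512 * 2**20 = 2**29, so 29 address bits are needed.
print(address_bits(512 * 2**20))  # 29
```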
Combinations with k bits
A memory cell with k bits holds 2^k combinations Ex.) a memory cell with 8 bits can hold 2^8 = 256 combinations
7 basic digital logic gates + truth tables
AND - T only if both inputs are true OR - T if at least one input is true XOR - T if exactly one input is true, not both Inverted gates: NOT - inverts its input NAND - treat as AND and invert NOR - treat as OR and invert XNOR - treat as XOR and invert
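A quick sketch (an illustration I added, not course code) of the six two-input gates plus NOT as Python truth tables:

```python
# Minimal truth-table sketch for the basic gates (inputs a, b are 0 or 1).
gates = {
    "AND":  lambda a, b: a & b,
    "OR":   lambda a, b: a | b,
    "XOR":  lambda a, b: a ^ b,
    "NAND": lambda a, b: 1 - (a & b),   # AND, then invert
    "NOR":  lambda a, b: 1 - (a | b),   # OR, then invert
    "XNOR": lambda a, b: 1 - (a ^ b),   # XOR, then invert
}

for name, fn in gates.items():
    rows = [(a, b, fn(a, b)) for a in (0, 1) for b in (0, 1)]
    print(name, rows)

# NOT takes a single input:
print("NOT", [(a, 1 - a) for a in (0, 1)])
```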
Cache mapping
Direct mapping - each address maps to exactly one line in the cache. Associative mapping - any line in the cache can hold any address that needs to be cached. Set-associative mapping - eases the thrashing problem of direct mapping; groups a few lines together into a set that an address can map to, instead of a single line
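A sketch of how a block number picks a cache line under direct vs. set-associative mapping; the cache sizes (64 lines, 4-way sets) are made-up example values:

```python
# Hypothetical example sizes (not from the card): 64 cache lines, 4-way sets.
NUM_LINES = 64
WAYS = 4
NUM_SETS = NUM_LINES // WAYS

def direct_mapped_line(block_number):
    # Each block can live in exactly one line.
    return block_number % NUM_LINES

def set_associative_set(block_number):
    # Each block maps to one set, but may occupy any of the WAYS lines in it.
    return block_number % NUM_SETS

print(direct_mapped_line(200))   # 200 % 64 = 8
print(set_associative_set(200))  # 200 % 16 = 8
```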
Metric Prefixes
Kilo - 10^3 Mega - 10^6 Giga - 10^9 Tera - 10^12
Machine language
L0 - Built-in machine language L1 - A language that's more convenient for humans to use
Mealy state machine
Output depends on both present state AND present input
Moore state machine
Output depends only on present state, not present input
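A toy rising-edge detector, sketched both ways to contrast the two machine types; the state names and the Python framing are my own illustration, not from the course:

```python
# Toy example (assumed): detect a 0 -> 1 transition in a bit stream,
# written once as a Mealy machine and once as a Moore machine.

def mealy_edge_detector(bits):
    """Mealy: each output is computed from (current state, current input)."""
    state, outputs = 0, []          # state remembers the previous input bit
    for x in bits:
        outputs.append(1 if (state == 0 and x == 1) else 0)  # uses the input directly
        state = x
    return outputs

def moore_edge_detector(bits):
    """Moore: each output is a function of the state alone."""
    state, outputs = "idle", []
    for x in bits:
        # take the transition for this clock cycle...
        if state == "idle":
            state = "edge" if x == 1 else "idle"
        else:  # "edge" or "high"
            state = "high" if x == 1 else "idle"
        # ...then the output is read purely from the state
        outputs.append(1 if state == "edge" else 0)
    return outputs

print(mealy_edge_detector([0, 1, 1, 0, 1]))  # [0, 1, 0, 0, 1]
print(moore_edge_detector([0, 1, 1, 0, 1]))  # [0, 1, 0, 0, 1]
```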
Instruction Set Architecture (ISA) considerations
Programmability Implementability Compatibility (backward and forward compatibility)
What are levels in memory?
Refers to the number of parts from the CPU to main memory, including the CPU + MM Ex.) two levels: register + main memory; three levels: register + cache + main memory
Base 10 vs. base 2
base 10 - uses digits 0-9: 1, 2, 3, 4... base 2 - uses only digits 0 and 1: 1, 10, 11, 100...
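A one-line illustration (my own) showing the same counts written in both bases:

```python
# Print 1..5 in base 10 and base 2 side by side.
for n in range(1, 6):
    print(n, bin(n)[2:])   # 1 -> 1, 2 -> 10, 3 -> 11, 4 -> 100, 5 -> 101
```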
Bus
connects hardware parts
IAS machine
first electronic computer built at the Institute for Advanced Study; novel concept: storing the program in memory!
State machines
general method of representing a time/ history-dependent process. Circle = state, edge = transition Moore state machine Mealy state machine
What is computer organization?
highly important; developing software requires knowing the computer's architecture (the set of data types, operations, and features of each level), which is visible to the programmer
Overflow
occurs when the result cannot fit within the word-size limits of the result data type
Carry
a carry out of the leftmost bit position that occurs as a result of addition or subtraction; for unsigned numbers, a carry out indicates overflow
CPU components
processor, clock, fan, ALU
Hit ratio
the fraction of memory accesses for which the requested word/ data is found in the faster memory
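An illustrative calculation that often goes with the hit ratio: average access time for an assumed two-level memory (cache + main memory). The 1 ns / 50 ns / 0.95 figures are made up:

```python
# Sketch (assumed two-level memory) of average access time.
def avg_access_time(hit_ratio, t_cache, t_main):
    # On a miss the word is fetched from main memory (miss penalty = t_main here).
    return hit_ratio * t_cache + (1 - hit_ratio) * t_main

print(avg_access_time(0.95, 1, 50))   # 0.95*1 + 0.05*50 = 3.45 ns
```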
Trade offs when considering memory and speed? (pyramid)
register (fastest, least capacity, most expensive) cache RAM hard disk (slowest, most capacity, least expensive) - want frequently accessed data loaded into the fastest memory (registers/ cache) and least frequently accessed data into slower, larger memory
Main memory components
secondary storage (hard disk), GPU (processor for graphics), battery, bus (connects hardware parts), I/O, compiler, GUI, servers (UNIX/ Windows/ OS)
Hz/ Hertz
unit for frequency 1 Hz = 1 cycle/ second 1 kHz = 10^3 Hz 1 MHz = 10^6 Hz 1 GHz = 10^9 Hz
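A small sketch (my own, with arbitrary example frequencies) showing that the clock period is the reciprocal of the frequency:

```python
# Clock period is the reciprocal of the clock frequency.
def period_ns(freq_hz):
    return 1e9 / freq_hz            # period in nanoseconds

print(period_ns(1e9))    # 1 GHz   -> 1.0 ns per cycle
print(period_ns(2.5e9))  # 2.5 GHz -> 0.4 ns per cycle
```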
Subtraction overflow rule
when the leftmost (sign) bits of the operands are different, check whether the sign of the result differs from the sign of the minuend (the number being subtracted from); if it does, you have overflow
Addition overflow rule
when the leftmost (sign) bits of the operands are the same, check whether the leftmost bit of the result is also the same. If it's different, you have overflow
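A sketch of both overflow rules for assumed 8-bit two's-complement bit patterns (the width and helper names are my own):

```python
# Sketch of the two rules for 8-bit two's-complement operands (width assumed).
BITS = 8

def sign(x):
    return (x >> (BITS - 1)) & 1            # leftmost (sign) bit

def add_overflows(a, b):
    result = (a + b) & (2**BITS - 1)
    # Addition: operands have the same sign but the result's sign differs.
    return sign(a) == sign(b) and sign(result) != sign(a)

def sub_overflows(a, b):
    result = (a - b) & (2**BITS - 1)
    # Subtraction: operands have different signs and the result's sign
    # differs from the minuend's sign.
    return sign(a) != sign(b) and sign(result) != sign(a)

print(add_overflows(0b01111111, 0b00000001))  # 127 + 1  -> True (overflow)
print(sub_overflows(0b10000000, 0b00000001))  # -128 - 1 -> True (overflow)
```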
Sequential circuits
- Like combinational logic circuits, a sequential logic circuit has inputs (labelled x with subscripts) and outputs (labelled z with subscripts). - Unlike a combinational circuit, a sequential circuit uses a clock + states (aka memory) Examples: flip-flop, S-R latch, clocked S-R latch, clocked D latch, edge-triggered D latch
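A rough behavioral sketch (my own modeling assumptions) contrasting a clocked D latch, which is transparent while the clock is high, with an edge-triggered D flip-flop, which samples only on the rising clock edge:

```python
class ClockedDLatch:
    """Transparent while the clock is high: Q follows D."""
    def __init__(self):
        self.q = 0
    def step(self, d, clk):
        if clk:
            self.q = d
        return self.q

class EdgeTriggeredDFlipFlop:
    """Captures D only on the rising edge of the clock."""
    def __init__(self):
        self.q = 0
        self.prev_clk = 0
    def step(self, d, clk):
        if clk and not self.prev_clk:   # rising edge
            self.q = d
        self.prev_clk = clk
        return self.q

latch, ff = ClockedDLatch(), EdgeTriggeredDFlipFlop()
for d, clk in [(1, 0), (1, 1), (0, 1), (0, 0)]:
    print(latch.step(d, clk), ff.step(d, clk))
# latch: 0 1 0 0   (follows D while clk = 1)
# ff:    0 1 1 1   (holds the value sampled on the rising edge)
```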
Computer hardware components
- Motherboard includes CPU, ROM, RAM - CPU includes processor, clock, fan, ALU - Main memory includes secondary storage (hard disk), GPU (processor for graphics), battery, bus (connects hardware parts), I/O, compiler, GUI, servers (UNIX/ Windows/ OS)
Digital logic gate
- Physical or idealized device that implements a logic function: Boolean inputs, Boolean output - Total of 7 basic ones Textbook definition: - Built up from a handful of transistors - A small number of gates can be combined to form a 1-bit memory, which can store a 0 or 1 - 1-bit memories can be combined in groups of 16, 32, and 64 to form registers
What are the four things a computer can do?
- Store data - move data - manipulate data - adjust control flow based on data
Cell
- a location with an address that programmers can refer to - all cells in memory contain the same number of bits - a memory with n cells has addresses 0 to n-1 Ex.) a memory with 10 cells has addresses 0 to 9
3 important considerations of a computer
- speed - memory (capacity) - price
Addressable memory in case of absolute address given N bits?
0 to 2^(N-1) - 1 Ex.) 24 bits: 0 to 2^(24-1) - 1 = 2^23 - 1
Bit vs. byte
1 bit = 0 or 1 1 byte = 8 bits 1 KB = 2^10 bytes = 1024 bytes 1 MB = 2^20 bytes = 1024 KB 1 GB = 2^30 bytes = 1024 MB
Types of memory in computer?
2 types: internal and external internal: RAM, cache (b/t RAM and registers), registers external: hard drive, USB
Where is the cache located?
B/t RAM and the registers
Combinational circuits
Output always depends only on the current inputs (data inputs + control bits determine the outcome) adder, multiplexer (MUX), demultiplexer (DEMUX), encoder (ENC), decoder (DEC), priority encoder (PENC)
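Two of the listed blocks sketched as plain functions (illustrative, not course code): a 1-bit full adder and a 2-to-1 multiplexer whose control bit selects the input:

```python
def full_adder(a, b, carry_in):
    """Returns (sum, carry_out) for one bit position."""
    total = a + b + carry_in
    return total % 2, total // 2

def mux2(d0, d1, select):
    """2-to-1 MUX: the control bit chooses which input reaches the output."""
    return d1 if select else d0

print(full_adder(1, 1, 1))   # (1, 1)
print(mux2(0, 1, select=1))  # 1
```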
Complex instruction set architecture vs. Reduced instruction set architecture
CISC:
- emphasis on hardware
- includes multi-clock complex instructions
- memory-to-memory
- small code size
- 1000+ possible instructions
RISC:
- emphasis on software
- single-clock, reduced instructions only
- register-to-register
- large code size
- 34-64 instructions
Moore's Law
Expressed as the doubling of transistors every 18 months or 60% increase in transistor count per year
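A quick arithmetic check (mine, not the card's) that roughly 60% growth per year corresponds to a doubling about every 18 months:

```python
# 60% growth per year compounded over 1.5 years is about a factor of 2.
print(1.6 ** 1.5)   # ~2.02
```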
Machine layers/ levels
Topmost level is most complex/ sophisticated, bottommost is simplest Computer with n levels has n different virtual machines Ex.) Assembly language level, instruction set architecture, digital logic level are all different levels
Addressing methods in main memory
immediate - instr contains the actual operand. adv: no memory reference; cons: size of number restricted to size of address field
direct - address field contains the address that holds the operand; requires only 1 memory reference + no special calculations. adv: simple; cons: limited address space
indirect - address field refers to another address of a word in memory. adv: large address space; cons: multiple memory references, aka longer wait times
register direct - address field refers to a register. adv: faster b/c no memory reference + smaller address field, so more space in instr for other information; cons: limited address space b/c we're using registers
register indirect - address field refers to a register that contains the memory address of the operand. adv: large address space; cons: extra memory reference
displacement - adv: flexible; cons: complex
- base-register addressing: base register + immediate displacement
- indexed addressing (pre + post indexing): immediate address + index register
- register-based, index addressing: base register + index register
- register base-scaled, index addressing: base register + [index register * scale]
stack - operands are on top of stack + stack reg holds the address. adv: no mem reference; cons: limited applicability
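A sketch of how a few of these modes locate an operand; the memory contents, register names, and fetch() helper are made up for illustration:

```python
# Hypothetical memory/register contents.
memory    = {100: 42, 200: 100}   # address -> value
registers = {"R1": 100}

def fetch(mode, field):
    if mode == "immediate":          # operand is in the instruction itself
        return field
    if mode == "direct":             # field is the operand's address
        return memory[field]
    if mode == "indirect":           # field points to a word holding the address
        return memory[memory[field]]
    if mode == "register direct":    # field names a register holding the operand
        return registers[field]
    if mode == "register indirect":  # register holds the operand's address
        return memory[registers[field]]
    raise ValueError(mode)

print(fetch("immediate", 7))             # 7
print(fetch("direct", 100))              # 42
print(fetch("indirect", 200))            # memory[memory[200]] = memory[100] = 42
print(fetch("register direct", "R1"))    # 100
print(fetch("register indirect", "R1"))  # memory[100] = 42
```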
Transistors
A device that regulates current or voltage flow and acts as a switch or gate for electronic signals.
Access methods + performance times
sequential access - start @ beginning and read through in order. PT: time it takes to position the read + write mechanism @ desired location
direct access - blocks have unique addresses within them. PT: time it takes to position the read + write mechanism @ desired location (same as sequential)
random access - individual addresses correspond to wired-in physical locations (RAM). PT: consistent across all locations and independent of the previous address (predictable timing)
associative access - addressing information must be stored with the data in a general data location. PT: time it takes to search through the address information associated with the data to determine a "hit"
Cache
speeds up processing by holding regularly accessed data between RAM and the registers - memory consists of 2^n addressable words, each word has a unique n-bit address - numBlocks = 2^n / k, where k = number of words per block - cache contains c lines of k words, each line with a tag that identifies which block of k words it currently holds
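A sketch of the block/line arithmetic with assumed sizes (n = 16 address bits, k = 4 words per block, c = 128 cache lines), including the tag/line/word split a direct-mapped cache would use:

```python
# Illustrative sizes, not from the card.
n, k, c = 16, 4, 128

num_blocks = 2**n // k                 # 16384 blocks in main memory
print(num_blocks)

# For a direct-mapped cache, an n-bit word address splits into
# tag | line | word-within-block fields:
word_bits = k.bit_length() - 1         # log2(k) = 2
line_bits = c.bit_length() - 1         # log2(c) = 7
tag_bits  = n - line_bits - word_bits
print(tag_bits, line_bits, word_bits)  # 7 7 2
```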
Basic Instruction Set Architecture classes
stack architecture - operations take place on top of the stack, destroy the operands, then leave the result on top
- adv: short instructions, compiler is easy to write
- cons: inefficient code b/c you can fetch from the top of the stack only
accumulator architecture - all ALU operations assume one operand is in the ACC and the other is in memory. The result ends up in the ACC.
- adv: short instructions
- cons: many loads and stores
register-set architecture - 3 types below
- adv: fast access to temporary values (b/c each value has its own register), permits compiler optimization
- cons: may end up with longer instructions
1.) register-to-register (RISC - makes instructions as simple as possible) adv: simple fixed-length instructions, easily pipelined; cons: higher instruction count
2.) register-to-memory adv: small instruction count; cons: results destroy operands
3.) memory-to-memory (CISC) - all ALU operands come from memory addresses adv: lowest instruction count, no registers; cons: huge memory traffic, large variation in instr length, large variation in clocks per instr
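An illustrative comparison: the statement A = B + C written as pseudo-instruction sequences for each class (these are not a real ISA, just a sketch of the style):

```python
# Pseudo-instruction sequences for A = B + C under each architecture class.

stack_code = [
    "PUSH B",   # operands go on the stack...
    "PUSH C",
    "ADD",      # ...ADD pops both operands and pushes the result
    "POP A",
]

accumulator_code = [
    "LOAD B",   # ACC = B
    "ADD C",    # ACC = ACC + C (one operand implicit in ACC)
    "STORE A",
]

register_to_register_code = [   # RISC style
    "LOAD  R1, B",
    "LOAD  R2, C",
    "ADD   R3, R1, R2",
    "STORE R3, A",
]

memory_to_memory_code = [       # CISC style: one instruction, all operands in memory
    "ADD A, B, C",
]

# Instruction counts illustrate the trade-off between code size and memory traffic.
print(len(stack_code), len(accumulator_code),
      len(register_to_register_code), len(memory_to_memory_code))  # 4 3 4 1
```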