Test 2: Worksheets #11-22 (Skip Worksheet #14)

Given a 32-bit virtual address of 0x0040000C and the following partial TLB. The page size is 4096.

v | d | tag | page reference
1 | 0 | 0x10010 | 0x11B17
0 | 0 | 0x7FFFF | 0x30F21
1 | 0 | 0x00400 | 0x01AC2
1 | 0 | 0x1001F | 0x04CBA
1 | 0 | 0x10020 | disk

What would be the final 32-bit physical memory address? Respond in hex (with leading 0x).

0x01AC200C
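The translation above can be sketched in a few lines of Python (a rough sketch: the dict holds only the valid TLB entries from the question, and miss handling is omitted):

```python
# 4096-byte pages -> 12-bit page offset; the upper 20 bits are the tag (VPN).
PAGE_SIZE = 4096
OFFSET_BITS = (PAGE_SIZE - 1).bit_length()   # 12

tlb = {0x10010: 0x11B17, 0x00400: 0x01AC2, 0x1001F: 0x04CBA}  # tag -> physical page

va = 0x0040000C
vpn = va >> OFFSET_BITS                  # 0x00400 -> hits the third TLB entry
offset = va & (PAGE_SIZE - 1)            # 0x00C carries over unchanged
pa = (tlb[vpn] << OFFSET_BITS) | offset
print(hex(pa))                           # 0x1ac200c
```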

Given a 32-bit virtual address of 0x0040002A and the following partial TLB. The page size is 8192.

v | d | tag | page reference
1 | 0 | 0x10010 | 0x11B17
0 | 0 | 0x7FFFF | 0x30F21
1 | 0 | 0x00400 | 0x01AC2
1 | 0 | 0x1001F | 0x04CBA
1 | 0 | 0x00200 | 0x01007

What would be the final 32-bit physical memory address? Respond in hex (with leading 0x).

0x0200E02A

Given a 32-bit virtual address of 0x0040002A and the following partial TLB. The page size is 8192.

v | d | tag | page reference
1 | 0 | 0x10010 | 0x11B17
0 | 0 | 0x7FFFF | 0x30F21
1 | 0 | 0x00400 | 0x01AC2
1 | 0 | 0x1001F | 0x04CBA
1 | 0 | 0x10020 | disk

What would be the final 32-bit physical memory address? Respond in hex (with leading 0x).

0x0200E02A

Given the following table:

  | TLB | Page Table | Data Cache
1 | hit | hit | hit
2 | hit | hit | miss
3 | miss | miss | miss
4 | miss | miss | hit

Which row (by number) represents the best performance?

1

Given the following table:

  | TLB | Page Table | Data Cache
1 | hit | hit | hit
2 | miss | miss | miss
3 | hit | hit | miss
4 | miss | hit | miss

Which row (by number) represents the best case in terms of performance?

1

Using a RR preemptive scheduling algorithm and the following processes:

Process | Execution Time | Arrival Time
p1 | 20 | 0
p2 | 25 | 15
p3 | 15 | 30
p4 | 10 | 40

What is the average waiting time (awt)? The time slice is 5 and all processes use their time slice. New processes are placed in the queue before in-system processes. Note, awt = completion time - execution time - arrival time.

10

Consider a virtual memory system with the following properties:
○ 36-bit virtual address
○ 16 KB pages
○ 32-bit physical address
What is the total size of the page table for each process on this processor, assuming that the valid, protection, dirty, and use bits take a total of 6 bits and that all the virtual pages are in use? Assume that disk addresses are not stored in the page table. Respond in megabytes (do not enter MB, one decimal point, one digit after the decimal point).

12.6
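The arithmetic behind this answer can be checked mechanically (a sketch; it assumes, as the expected answer implies, that a megabyte here means 10^6 bytes):

```python
# 36-bit VA, 16 KB pages (2^14 bytes), 32-bit PA, 6 flag bits per entry.
offset_bits = 14
entries = 2 ** (36 - offset_bits)        # 2^22 virtual pages -> 2^22 entries
entry_bits = (32 - offset_bits) + 6      # 18-bit frame number + 6 flag bits = 24
total_bytes = entries * entry_bits // 8  # 12,582,912 bytes
print(round(total_bytes / 1e6, 1))       # 12.6
```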

Given a fixed partitioning scheme with equal-size partitions of 2048 bytes and a total main memory size of 2^24 bytes. A page table is maintained that includes a pointer to a physical page for each resident process. How many bits are required for the pointer?

13
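A quick check of the count (a sketch: 2^24 bytes of memory divided into 2048-byte partitions, then the bits needed to index any one of them):

```python
import math

partitions = 2 ** 24 // 2048             # 8192 equal-size partitions
print(math.ceil(math.log2(partitions)))  # 13 bits address any one of them
```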

If the page size is 8192, how many bits are required for the page offset? ○ 10 ○ 11 ○ 12 ○ 13

13
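In code (trivial, but it generalizes to any power-of-two page size):

```python
page_size = 8192
offset_bits = (page_size - 1).bit_length()
print(offset_bits)  # 13, since 8192 = 2^13
```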

Given the following table:

  | TLB | Page Table | Data Cache
1 | hit | hit | hit
2 | miss | miss | miss
3 | hit | hit | miss
4 | miss | hit | miss

Which row (by number) represents the worst case in terms of performance?

2

Given the following table:

  | TLB | Page Table | Data Cache
1 | hit | hit | hit
2 | miss | miss | miss
3 | hit | miss | miss
4 | miss | miss | hit

Which row (by number) represents a page fault?

2

Given the following table:

  | TLB | Page Table | Data Cache
1 | hit | hit | hit
2 | hit | hit | miss
3 | miss | miss | miss
4 | miss | miss | hit

Which row (by number) represents a *page fault*?

3

Consider a virtual memory system with the following properties:
• 33-bit virtual address
• 8 KB pages
• 31-bit physical address
What is the total size of the page table for each process on this processor, assuming that the valid, protection, dirty, and use bits take a total of 10 bits and that all the virtual pages are in use? Assume that disk addresses are not stored in the page table. Respond in megabytes (do not enter MB, one decimal point, exactly one digit after the decimal point).

3.7
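The same check as before, with these parameters (a sketch; again treating a megabyte as 10^6 bytes, as the one-decimal answer implies):

```python
# 33-bit VA, 8 KB pages (2^13 bytes), 31-bit PA, 10 flag bits per entry.
offset_bits = 13
entries = 2 ** (33 - offset_bits)        # 2^20 virtual pages
entry_bits = (31 - offset_bits) + 10     # 18-bit frame number + 10 flag bits = 28
total_bytes = entries * entry_bits // 8  # 3,670,016 bytes
print(round(total_bytes / 1e6, 1))       # 3.7
```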

What is the size of the TLB in bytes based on the following assumptions:
• 32-bit virtual address
• 32-bit physical address
• valid and dirty bits -> 1 bit each
• reference bits -> 8 bits total
• access bits -> 2 bits
• 512 TLB entries
• 8 KB page size
Respond in bytes (no commas, no decimal point).

3200

What is the size of the TLB in bytes based on the following assumptions:
○ 32-bit virtual address
○ 32-bit physical address
○ valid and dirty bits -> 1 bit each
○ reference bits -> 8 bits total
○ access bits -> 2 bits
○ 512 TLB entries
○ 4 KB page size
Respond in bytes (no commas, no decimal point).

3328
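Both TLB-size answers fall out of the same formula (a sketch; the helper name `tlb_bytes` is mine, and the 12 flag bits are the 1+1+8+2 listed in the questions):

```python
def tlb_bytes(va_bits, pa_bits, page_bytes, entries=512, flag_bits=1 + 1 + 8 + 2):
    offset = (page_bytes - 1).bit_length()
    # Each entry stores the virtual tag, the physical page number, and the flags.
    entry_bits = (va_bits - offset) + (pa_bits - offset) + flag_bits
    return entries * entry_bits // 8

print(tlb_bytes(32, 32, 8192))  # 3200
print(tlb_bytes(32, 32, 4096))  # 3328
```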

Using a SRT preemptive scheduling algorithm and the following processes:

Process | Execution Time | Arrival Time
p1 | 20 | 0
p2 | 25 | 15
p3 | 15 | 30
p4 | 10 | 40

What is the average waiting time (awt)? The time slice is 5 and all processes use their time slice. New processes are placed in the queue before in-system processes. Note, awt = completion time - execution time - arrival time.

6.25

Using a FCFS non-preemptive scheduling algorithm and the following processes:

Process | Execution Time | Arrival Time
p1 | 20 | 0
p2 | 25 | 15
p3 | 10 | 30
p4 | 15 | 45

What is the average waiting time (awt)? Note, awt = completion time - execution time - arrival time.

7.5
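A small simulation reproduces the FCFS result (a sketch; processes are served strictly in arrival order, and awt uses the definition from the question):

```python
# (name, execution time, arrival time)
procs = [("p1", 20, 0), ("p2", 25, 15), ("p3", 10, 30), ("p4", 15, 45)]

t, waits = 0, []
for name, burst, arrival in procs:       # FCFS: list is already in arrival order
    t = max(t, arrival) + burst          # completion time of this process
    waits.append(t - burst - arrival)    # completion - execution - arrival

print(sum(waits) / len(waits))           # 7.5
```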

Using a SJF non-preemptive scheduling algorithm and the following processes:

Process | Execution Time | Arrival Time
p1 | 20 | 0
p2 | 25 | 15
p3 | 15 | 30
p4 | 10 | 40

What is the average waiting time (awt)? Note, awt = completion time - execution time - arrival time.

8.75
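The SJF figure can be reproduced the same way (a sketch; at each completion the shortest ready job runs next, and the CPU jumps to the next arrival if it would otherwise sit idle):

```python
# (name, execution time, arrival time)
procs = [("p1", 20, 0), ("p2", 25, 15), ("p3", 15, 30), ("p4", 10, 40)]

t, waits, pending = 0, [], sorted(procs, key=lambda p: p[2])
while pending:
    ready = [p for p in pending if p[2] <= t] or [min(pending, key=lambda p: p[2])]
    job = min(ready, key=lambda p: p[1])   # shortest execution time among ready
    pending.remove(job)
    name, burst, arrival = job
    t = max(t, arrival) + burst
    waits.append(t - burst - arrival)

print(sum(waits) / len(waits))             # 8.75
```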

What is the purpose of a Translation Lookaside Buffer (TLB)? ○ Decreases paging associated with compulsory misses. ○ Increases associativity for paging operations. ○ Supports page table lookup operations. ○ A dedicated cache that contains those page table entries that have been most recently used.

A dedicated cache that contains those page table entries that have been most recently used.

Why might a scheduling algorithm on a server system differ from a scheduling algorithm on a desktop system? ○ A server must be able to handle more jobs and more cores which requires a fundamentally different approach for a scheduling algorithm. ○ A desktop could more easily suspend background jobs. ○ A server might focus on response time and a desktop might focus on throughput. ○ A desktop might focus on response time and a server might focus on throughput.

A desktop might focus on response time and a server might focus on throughput.

What is *paging*? ○ Only implemented on high-end server systems (i.e., supercomputer centers). ○ The software implementation of the hardware device that, at run time, maps virtual addresses to physical addresses. ○ A memory compaction process that eliminates the problems associated with contiguous memory allocation. ○ A memory management scheme that allows a process's physical memory to be discontinuous.

A memory management scheme that allows a process's physical memory to be discontinuous.

When a currently executing process uses its maximum allowed CPU time, what happens? ○ If other cores are currently free, the process time limit is re-set. ○ The process is terminated. ○ The OS scheduler is invoked and the scheduler may allow the process to retain the CPU core. ○ A process context switch is invoked.

A process context switch is invoked.

When a page fault occurs, the process requesting the page must block while waiting for the page to be brought from secondary storage into physical memory. Assume that there exists a process with five threads. If one thread incurs a page fault while accessing its data, will the other user threads belonging to the same process also be affected by the page fault — that is, will they also have to wait for the faulting page to be brought into memory? ○ Yes. ○ No. ○ A thread would only be blocked if that thread attempted to access that memory. ○ They will be blocked, but it is unrelated to the page fault. ○ None of these answers are fully correct.

A thread would only be blocked if that thread attempted to access that memory.

Which of the following are contiguous memory allocation strategies? ○ First fit. ○ Best fit. ○ Worst fit. ○ All of these answers are correct.

All of these answers are correct.

The term *late binding* is typically associated with which of the following? ○ Both soft and hard page faults. ○ Hard page faults. ○ Soft page faults. ○ DLL/SO functions.

DLL/SO functions.

Which of the following is a key problem with contiguous memory allocation strategies? ○ No specific issue, but provides overall less performance than other approaches. ○ Separation of data and code. ○ Fragmentation. ○ Limits implementation of segmentation.

Fragmentation.

Which of the following is contained in the page/swap file? ○ A subset of the swapped out pages for the executing process. ○ Dirty data pages from the executing process (since instruction pages do not change). ○ Full copy of the executing process. ○ Swapped out pages of the executing process.

Full copy of the executing process.

What is meant by a hard real time system? ○ Scheduling algorithms that execute in a timed, scheduled manner. ○ Guaranteed to meet any scheduling requirements (within certain limits). ○ Refers to the notion that the kernel tries to schedule applications within timing deadlines, but the kernel does not promise to always achieve these goals. ○ None of these responses are fully correct.

Guaranteed to meet any scheduling requirements (within certain limits).

Which of the following is true regarding the memory management unit (MMU)? ○ Improves overall performance since most programs exhibit both temporal and spatial locality. ○ Performs the hardware implementation for DLL's or SO's. ○ Addresses generated by the process. ○ Hardware that, at run time, helps map virtual addresses to physical addresses.

Hardware that, at run time, helps map virtual addresses to physical addresses.

What is *temporal locality*? ○ A reference to cache management protocols. ○ If an item is referenced, items whose addresses are close by will tend to be referenced soon. ○ If an item is referenced, it will tend to be referenced again soon. ○ Programs access a relatively small portion of their address space at any instant of time. ○ None of these answers are fully correct.

If an item is referenced, it will tend to be referenced again soon.

What is *spatial locality*? ○ A reference to cache management protocols. ○ If an item is referenced, items whose addresses are close by will tend to be referenced soon. ○ If an item is referenced, it will tend to be referenced again soon. ○ Programs access a relatively small portion of their address space at any instant of time. ○ None of these answers are fully correct.

If an item is referenced, items whose addresses are close by will tend to be referenced soon.

Under what circumstance might a call to a library function (from a DLL/SO library) cause a *soft* fault? ○ Such a call will not cause a page fault. ○ It will always cause a page fault (which might be soft or hard). ○ If the DLL/SO function being called is currently in memory. ○ If the DLL/SO function being called is not currently in memory.

If the DLL/SO function being called is currently in memory.

Under what circumstance might a call to a library function (from a DLL/SO library) cause a *hard* fault? ○ Such a call will not cause a page fault. ○ It will always cause a page fault (which might be soft or hard). ○ If the DLL/SO function being called is currently in memory. ○ If the DLL/SO function being called is not currently in memory.

If the DLL/SO function being called is not currently in memory.

Which of the following is true regarding *swapping*? ○ The need for swapping has been eliminated by the availability of cheap RAM storage. ○ All modern OSs use swapping due to their very high performance. ○ Is part of the hardware that, at run time, maps virtual address to physical address. ○ If there is not enough memory available to keep all running processes in memory at the same time, then some processes that are not currently using the CPU may have their memory swapped out to secondary storage called the backing store.

If there is not enough memory available to keep all running processes in memory at the same time, then some processes that are not currently using the CPU may have their memory swapped out to secondary storage called the backing store.

What is the purpose of a *Translation Lookaside Buffer* (TLB)? ○ Reduce compulsory misses in main memory. ○ Implement security protocols for main memory to ensure appropriate memory segmentation. ○ Improve performance by reducing how often RAM must be accessed to retrieve a page table entry. ○ To provide for efficient use of main memory for process execution.

Improve performance by reducing how often RAM must be accessed to retrieve a page table entry.

What would be an appropriate application for *static partitioning*? ○ General use computing. ○ Large-scale computing (i.e., supercomputers) with limited job mixes. ○ Many embedded systems. ○ Older small systems, with limited functionality that only allow a limited number of processes.

Many embedded systems.

The scheduling criteria waiting time is defined as what? ○ Very easy to implement. ○ Keep CPU as busy as possible. ○ Get as many jobs done per unit time as possible. ○ Minimize time spent waiting in the ready queue for ready processes.

Minimize time spent waiting in the ready queue for ready processes.

Which of the following scheduling criteria is most important? ○ Throughput. ○ Response time. ○ Minimum time spent waiting in ready queue. ○ None of these answers are correct.

None of these answers are correct.

Why might a process involuntarily give up the CPU? Assume a non-preemptive environment. Check all that apply. ○ A higher priority process enters the system. ○ The process requests an I/O operation. ○ The process's time slice expires. ○ The process's I/O request is completed. ○ The process terminates. ○ None of these responses are fully correct.

None of these responses are fully correct.

What is the *principle of locality*? ○ A reference to cache management protocols. ○ If an item is referenced, items whose addresses are close by will tend to be referenced soon. ○ If an item is referenced, it will tend to be referenced again soon. ○ Programs access a relatively small portion of their address space at any instant of time. ○ None of these answers are fully correct.

Programs access a relatively small portion of their address space at any instant of time.

What is meant by a soft real time system? ○ Scheduling algorithms that execute in a timed, scheduled manner. ○ Guaranteed to meet any scheduling requirements (within certain limits). ○ Refers to the notion that the kernel tries to schedule applications within timing deadlines, but the kernel does not promise to always achieve these goals. ○ None of these responses are fully correct.

Refers to the notion that the kernel tries to schedule applications within timing deadlines, but the kernel does not promise to always achieve these goals.

The xv6 OS uses a state called RUNNABLE. What does this mean? ○ The process is being admitted into the system. ○ The process is ready to be executed. ○ The process is being executed. ○ The process is waiting for an I/O operation to be completed. ○ The process is terminating. ○ None of these responses are fully correct.

The process is ready to be executed.

If a process is blocked, what does that mean? ○ The process is being admitted into the system. ○ The process is ready to be executed. ○ The process is being executed. ○ The process is waiting for an I/O operation to be completed. ○ The process is terminating. ○ None of these responses are fully correct.

The process is waiting for an I/O operation to be completed.

Why might a process involuntarily give up the CPU? Assume a preemptive environment. Check all that apply. ○ A higher priority process enters the system. ○ The process terminates. ○ The process requests an I/O operation. ○ The process's time slice expires. ○ The process's I/O request is completed. ○ None of these responses are fully correct.

The process's time slice expires.

What is the purpose or goal of *paging*? ○ Reduce compulsory misses in main memory. ○ Implement security protocols for main memory to ensure appropriate memory segmentation. ○ Increase memory performance for executing processes. ○ To provide for efficient use of main memory for process execution.

To provide for efficient use of main memory for process execution.

When might a call to a library function cause a page fault? ○ It is unlikely a basic function call will cause a page fault. ○ Function calls only cause soft faults (not hard faults). ○ When that function is on a page that is not currently in memory. ○ Only when the call is to a function in a DLL/SO.

When that function is on a page that is not currently in memory.

Broadly speaking, most processes ______________. ○ are preemptive. ○ are compute bound. ○ are I/O bound. ○ alternate between bursts of computing and I/O (disk or network).

alternate between bursts of computing and I/O (disk or network).

A certain computer provides its users with a virtual-memory space of 2^32 bytes. The virtual memory is implemented by paging, and the page size is 4096 bytes. A user process generates the virtual address 11123456 (decimal), i.e., 0x00a9bb00. Answer in hex. Note, a hex number must be preceded with 0x. ○ What is the page number (tag)? ○ What is the page offset?

• Page number (tag): 0x00a9b
• Page offset: 0xb00
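The split can be confirmed with two shifts (a sketch; 4096-byte pages mean a 12-bit offset):

```python
va = 0x00a9bb00              # 11123456 in decimal
print(hex(va >> 12))         # 0xa9b -> page number (tag)
print(hex(va & 0xFFF))       # 0xb00 -> page offset
```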

Given the following table:

  | TLB | Page Table | Data Cache
1 | hit | hit | hit
2 | miss | miss | miss
3 | hit | miss | miss
4 | miss | miss | hit

Note which rows are legal or illegal.

• 1: Legal. • 2: Legal. • 3: Illegal. • 4: Illegal.

Which of the following is true regarding a priority scheduling algorithm? *Check all that apply.* ○ All processes are not treated equally (for scheduling). ○ Each process gets a small amount of CPU time (time slice). ○ Shares some characteristics with a first-come, first-served (FCFS) algorithm. ○ Possible starvation for low priority processes. ○ All processes are treated equally (for scheduling).

• All processes are not treated equally (for scheduling). • Each process gets a small amount of CPU time (time slice). • Possible starvation for low priority processes.

Which of the following are criteria on which scheduling decisions could be based? *Check all that apply.* ○ CPU Utilization. ○ Throughput. ○ Turn-around time. ○ Waiting time. ○ Response time.

• CPU Utilization. • Throughput. • Turn-around time. • Waiting time. • Response time.

With regard to address binding, match the correct columns. Choices for each: Absolute code / Not loaded until/unless needed via a call / Random binding / Relocatable code / DLL's/SO's.
○ Compile time: [ Choose ]
○ Load time: [ Choose ]
○ Execution time: [ Choose ]

• Compile Time: Absolute code. • Load Time: Relocatable code. • Execution Time: DLL's/SO's

A process that performs a large number of calculations is referred to as [ Select ] ["Compute bound.", "I/O bound.", "A process or thread."]. A process that performs a large amount of I/O is referred to as [ Select ] ["Compute bound.", "I/O bound.", "A process or thread."].

• Compute bound. • I/O bound.

Which of the following is true regarding a round-robin scheduling algorithm? *Check all that apply.* ○ All processes are not treated equally (for scheduling). ○ Each process gets a small amount of CPU time (time slice). ○ Possible starvation for low priority processes. ○ Shares some characteristics with a first-come, first-served (FCFS) algorithm. ○ All processes are treated equally (for scheduling).

• Each process gets a small amount of CPU time (time slice). • Shares some characteristics with a first-come, first-served (FCFS) algorithm. • All processes are treated equally (for scheduling).

Which of the following is/are true regarding *paging*? Check all that apply. ○ Is the modern version of the first fit memory allocation strategy. ○ Requires a page size of 4096. ○ Eliminates most of the problems of contiguous memory allocation. ○ Works due to the principle of locality.

• Eliminates most of the problems of contiguous memory allocation. • Works due to the principle of locality.

Which of the following are considered goals of an OS scheduler? Check all that apply. ○ Ensure fairness. ○ Starvation freedom. ○ Good response time. ○ Implement preemption. ○ Policy enforcement.

• Ensure fairness. • Starvation freedom. • Good response time. • Policy enforcement.

Given the following table.

TLB | Page Table | Data Cache
hit | hit | hit
hit | hit | miss
miss | miss | miss
miss | miss | hit

Performance?

• Fully completed within CPU. • Only address translation completed in CPU. • Page fault. • Illegal.

In the context of paging, which is true regarding a *hard (or major) fault*? Check all that apply. ○ Indicates that the page was placed into the free list and subsequently swapped out. ○ Indicates that the page was placed into the free list but not yet swapped out. ○ The page is not in memory and needs to be read from secondary storage. ○ The page is already in memory and does not need to be read from secondary storage. ○ None of these answers are fully correct.

• Indicates that the page was placed into the free list and subsequently swapped out. • The page is not in memory and needs to be read from secondary storage.

In the context of paging, which is true regarding a *soft (or minor) fault*? Check all that apply. ○ Indicates that the page was placed into the free list and subsequently swapped out. ○ Indicates that the page was placed into the free list but not yet swapped out. ○ The page is not in memory and needs to be read from secondary storage. ○ The page is already in memory and does not need to be read from secondary storage. ○ None of these answers are fully correct.

• Indicates that the page was placed into the free list but not yet swapped out. • The page is already in memory and does not need to be read from secondary storage.

Which of the following is true regarding processor cache configurations? Check all that apply. ○ L1 is shared between all cores. ○ L3 must be included in processes to ensure cache functionality. ○ L1 is split into L1 Data and L1 Instruction caches. ○ If an L3 is present, L2 may be per core. ○ If an L3 is present, L2 may be shared.

• L1 is split into L1 Data and L1 Instruction caches. • If an L3 is present, L2 may be per core. • If an L3 is present, L2 may be shared.

For a virtual memory environment; ○ Is it necessary for all of the pages of a process to be in main memory while the process is executing? ○ Must all of the pages of a process in main memory be contiguous? ○ Must the virtual pages be in a specific order? ○ Must the physical pages be in a specific order?

• No • No • Yes • No

Which of the following describes the scheduling in a *batch system*? *Check all that apply.* ○ Preemptive. ○ Nonpreemptive. ○ Allows multiple simultaneous use of resource (i.e., CPU core). ○ First-come, first-served (FCFS) or first-in-first-out (FIFO).

• Nonpreemptive. • First-come, first-served (FCFS) or first-in-first-out (FIFO).

Which of the following is true regarding a physical address? Check all that apply. ○ Visible to the process. ○ Not visible to the process. ○ Is generated by the program. ○ Is seen by the memory hardware.

• Not visible to the process. • Is seen by the memory hardware.

Which of the following describes scheduling in an *interactive system*? *Check all that apply.* ○ Preemptive. ○ Nonpreemptive. ○ Allows sharing of resource (i.e., CPU core). ○ First-come, first-served (FCFS) or first-in-first-out (FIFO).

• Preemptive. • Allows sharing of resource (i.e., CPU core).

What does it mean for a scheduling system to be *preemptive*? *Check all that apply.* ○ Processes are allowed to run for some maximum allowed time (time-slice). ○ Select the next process to schedule when the current process blocks. ○ Processes are preempted in order to more effectively share the CPU core. ○ Processes are not preempted and they execute until completion.

• Processes are allowed to run for some maximum allowed time (time-slice). • Select the next process to schedule when the current process blocks. • Processes are preempted in order to more effectively share the CPU core.

Which of the following are scheduling algorithms for an *interactive system*? *Check all that apply.* ○ Round-robin scheduling. ○ Priority scheduling. ○ Shortest remaining time. ○ Multiple queue scheduling. ○ Random scheduling.

• Round-robin scheduling. • Priority scheduling. • Shortest remaining time. • Multiple queue scheduling.

What does it mean for a scheduling system to be *non preemptive*? *Check all that apply.* ○ Processes are allowed to run for some maximum allowed time (time-slice). ○ Select the next process to schedule when the current process blocks. ○ Processes are preempted in order to more effectively share the CPU core. ○ Processes are not preempted and they execute until completion.

• Select the next process to schedule when the current process blocks. • Processes are not preempted and they execute until completion.

A process that executes in the background is referred to as what? *Check all that apply.* ○ Magic. ○ Service. ○ Virus. ○ Daemon.

• Service. • Daemon.

Which of the following scheduling algorithms are typically associated with a batch environment? *Check all that apply.* ○ Multiple queue scheduling. ○ Round-robin scheduling. ○ Shortest job next (SJN). ○ First-come, first-served (FCFS) or first-in-first-out (FIFO). ○ Random scheduling.

• Shortest job next (SJN). • First-come, first-served (FCFS) or first-in-first-out (FIFO).

Which of the following are criteria on which scheduling decisions could be based? *Check all that apply.* ○ Security enforcement. ○ Throughput. ○ Response time. ○ Policy enforcement. ○ CPU Utilization.

• Throughput. • Response time. • CPU Utilization.

The maximum allowed CPU time a process is allowed to execute (per turn using the CPU core) is referred to as what? *Check all that apply.* ○ Iota. ○ Time slice. ○ Interval. ○ Quantum.

• Time slice. • Quantum.

Which of the following is true regarding a logical or virtual address? Check all that apply. ○ Visible to the process. ○ Not visible to the process. ○ Is generated by the program. ○ Is seen by the memory hardware.

• Visible to the process. • Is generated by the program.

A simplified view of thread states is READY, RUNNING, and BLOCKED, where a thread is either ready and waiting to be scheduled, is running on the processor, or is blocked (for example, waiting for I/O). (Transitions: READY -> RUNNING, RUNNING -> BLOCKED, RUNNING -> READY, BLOCKED -> READY.) Assuming a thread is in the RUNNING state, answer the following questions.
○ Will the thread change state if it incurs a page fault? [ Select ] ["Yes", "No"]
○ If the thread changes state when it incurs a page fault -> what state would it change to? [ Select ] ["READY", "BLOCKED", "RUNNING"]
○ Will the thread change state if it generates a TLB miss that is resolved in the page table? [ Select ] ["Yes", "No"]
○ If the thread changes state when it generates a TLB miss that is resolved in the page table -> what state would it change to? [ Select ] ["READY", "BLOCKED", "RUNNING"]
○ Will the thread change state when a page fault is fully resolved? [ Select ] ["Yes", "No"]
○ If the thread changes state when a page fault is resolved -> what state would it change to? [ Select ] ["READY", "BLOCKED", "RUNNING"]

• Yes • BLOCKED • Yes • BLOCKED • Yes • READY

Given the following table, note which set of circumstances is possible. Answer with *Yes* or *No*.

TLB | Page Table | Data Cache | Possible?
hit | hit | hit | ?
hit | hit | miss | ?
miss | miss | miss | ?
miss | miss | hit | ?

• Yes (Ideal) • Yes • Yes (Hard fault - worst) • No (Not in secondary storage, not possible to hit in data cache)

Given the below address stream, what is the applicable page number for the given page size?
• Tag - 4KB page
• Tag - 8KB page
• Tag - 16KB page
Reference (hex)
○ 0x00a9bb00
○ 0x011a7a28
○ 0x011a777a
○ 0x01113a98

○ 0x00a9bb00
• 4KB: 0x00a9b
• 8KB: 0x0054d
• 16KB: 0x002a6
○ 0x011a7a28
• 4KB: 0x011a7
• 8KB: 0x008d3
• 16KB: 0x00469
○ 0x011a777a
• 4KB: 0x011a7
• 8KB: 0x008d3
• 16KB: 0x00469
○ 0x01113a98
• 4KB: 0x01113
• 8KB: 0x00889
• 16KB: 0x00444
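These tags can all be regenerated with a loop (a sketch; each doubling of the page size adds one offset bit, so the tag is just the address shifted right by the offset width):

```python
refs = [0x00a9bb00, 0x011a7a28, 0x011a777a, 0x01113a98]
for va in refs:
    # 4 KB -> 12 offset bits, 8 KB -> 13, 16 KB -> 14.
    tags = {kb: va >> (kb * 1024 - 1).bit_length() for kb in (4, 8, 16)}
    print(hex(va), {f"{kb}KB": hex(tag) for kb, tag in tags.items()})
```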

Given the references (hex) below, show:
1. Tag - 4KB page
2. Tag - 8KB page
3. Tag - 16KB page
○ 0x08fff
○ 0x07a28
○ 0x0777a
○ 0x03a98
○ 0x01c19
○ 0x01000
○ 0x022d0

○ 0x08fff
• 4KB: 0x08
• 8KB: 0x04
• 16KB: 0x02
○ 0x07a28
• 4KB: 0x07
• 8KB: 0x03
• 16KB: 0x01
○ 0x0777a
• 4KB: 0x07
• 8KB: 0x03
• 16KB: 0x01
○ 0x03a98
• 4KB: 0x03
• 8KB: 0x01
• 16KB: 0x00
○ 0x01c19
• 4KB: 0x01
• 8KB: 0x00
• 16KB: 0x00
○ 0x01000
• 4KB: 0x01
• 8KB: 0x00
• 16KB: 0x00
○ 0x022d0
• 4KB: 0x02
• 8KB: 0x01
• 16KB: 0x00

