CS240 Exam Questions


Q1 Introduction: State five general objectives of an operating system giving a brief explanation of each objective.

(i) Allocation and Management of Resources among competing user processes. (ii) Maximising Resource Utilisation with the goal of improving overall system throughput. (iii) Providing a user interface and an application interface to the machine. (iv) Coordinating the many concurrent activities and devices, handling input and output from attached hardware and ensuring correct synchronisation and communication is achieved. (v) Acting as a Resource Guardian to protect various resources of the computer system from malicious or accidental misuse. (vi) Accounting for periods of resource usage by user processes, enforcing quotas or restrictions as appropriate. (vii) Power and Thermal Management, controlling the power consumed and the heat generated by the hardware, which is particularly important on mobile and battery-powered devices.

Q9 File System: Describe three space allocation techniques. Comment on the data structures needed to implement these allocation techniques and on the efficiency of file processing operations.

1. Contiguous Allocation: if the position of the first block of the file is known, the size of the file is known and the size of the physical disk blocks is known, then it is possible to determine where the remaining blocks of the file are located without having to explicitly store this location information. Consider the request Read(FileA, Record=6, Destination=Buffer, NumBytes=250): (1) the file system consults the directory to determine the starting location of FileA on the disk; (2) it calculates the physical block location of the block containing record 6 and reads that block from the storage device; (3) it calculates the offset of the desired record within that block and copies its content to the program buffer area (a sketch of this block calculation follows this answer). On "opening a file" the directory is searched once and the entry is kept in memory by the file system; it can be updated and written back to the storage device once when the file is closed, and a cache of disk blocks may also be kept. Automatic file space allocation: when new files are created, or existing files are extended, free blocks can be allocated automatically by the file system. Dynamic space allocation: if the file grows, the space allocated to FileA must be extended, which may mean moving all of FileA to a larger contiguous area. Contiguous allocation of space is therefore very efficient to access; however, if files are dynamically changing in size, it becomes quite inefficient to implement dynamic space allocation in this way due to the need for file relocation.

2. Linked Allocation: the directory entry contains a pointer to the location of the first block, each file block then contains a pointer to its successor, and the last block contains a special pointer termination code. A small space is reserved in each file block by the file system to contain the pointer, and free space can also be managed as a linked list of blocks in the same way. Linked allocation gives quite good performance for sequential access, but random access to data is not efficient; there is a resource overhead (the reserved pointer space) associated with the method, and it is not as robust as the contiguous approach.

3. Indexed Allocation: an index block is assigned to each file. The index block is a table which maps logical file blocks to their physical block locations, and the directory entry for the file contains a pointer to the index block. This allows the logical blocks of a file to be scattered across the storage device; as data blocks can be located arbitrarily by consulting the index block, this allocation method gives good performance for both sequential and random record access.

Which is best? File systems may use a combination of allocation techniques for different types of files. Files which are static in size may be allocated space contiguously (saving on pointers or index blocks), giving high access performance as the disk head doesn't have to travel far to read all file blocks; executable files are examples of files which hardly ever change and are read in their entirety when accessed. Large database files which require frequent random record access would use indexed allocation to achieve good access performance and efficient dynamic space allocation.
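The contiguous-allocation block calculation can be illustrated with a short sketch. This is a minimal illustration only: the starting block, block size and fixed record size used in main are invented values, and a hypothetical directory lookup is assumed to have supplied the starting block already.

    // Minimal sketch of locating a record under contiguous allocation.
    // All values (startBlock, blockSize, recordSize) are illustrative assumptions.
    public class ContiguousLocator {

        // Given the file's starting physical block, the block size and a fixed
        // record size, compute where a logical record lives on disk.
        public static long[] locate(long startBlock, int blockSize,
                                    int recordSize, int recordNumber) {
            long byteOffset = (long) recordNumber * recordSize;        // offset from start of file
            long physicalBlock = startBlock + byteOffset / blockSize;  // block holding the record
            long offsetInBlock = byteOffset % blockSize;               // where it starts in that block
            return new long[] { physicalBlock, offsetInBlock };
        }

        public static void main(String[] args) {
            // Read(FileA, Record=6, ...): FileA assumed to start at block 120,
            // with 1 KB blocks and 250-byte records.
            long[] loc = locate(120, 1024, 250, 6);
            System.out.println("physical block = " + loc[0] + ", offset = " + loc[1]);
        }
    }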

Q6,Q7 Mutual exclusion and concurrency: The Readers/Writers concurrency problem prioritises readers and requires that no reader be kept waiting unless a writer has already obtained permission to use the shared item. Define a pseudo-code solution to this coordination problem using semaphores. Use your experience from Practical 9 of your coursework.

1. The first readers/writers problem prioritises readers and requires that no reader be kept waiting unless a writer has already obtained permission to use the shared item. When a reader finishes with the data set and no other readers are using the item, the last reader gives up the semaphore (wrt) to allow any waiting writers to enter.

    public DataAccessPolicyManager() {
        readerCount = 0;
        mutex = new Semaphore(1);
        wrt = new Semaphore(1);
    }

    public void acquireReadLock() {
        mutex.acquire();
        ++readerCount;
        if (readerCount == 1)       // first reader blocks writers
            wrt.acquire();
        mutex.release();
    }

    public void releaseReadLock() {
        mutex.acquire();
        --readerCount;
        if (readerCount == 0)       // last reader lets writers in
            wrt.release();
        mutex.release();
    }

    public void acquireWriteLock() { wrt.acquire(); }

    public void releaseWriteLock() { wrt.release(); }

2. The second readers/writers problem prioritises writers and requires that once a writer is ready, it performs its write as soon as possible. Writers only have to wait for current readers to finish; new readers must wait until there are no writers.

    public DataAccessPolicyManager2() {
        readCount = 0;
        writeCount = 0;
        mutexReadCount = new Semaphore(1);
        mutexWriteCount = new Semaphore(1);
        wrt = new Semaphore(1);     // to block writers
        rdr = new Semaphore(1);     // to block readers
    }

    public void acquireReadLock() {
        rdr.acquire();              // reader can enter if no writers
        mutexReadCount.acquire();
        readCount = readCount + 1;
        if (readCount == 1)
            wrt.acquire();          // block writers
        mutexReadCount.release();
        rdr.release();              // allow another reader in
    }

    public void releaseReadLock() {
        mutexReadCount.acquire();
        readCount = readCount - 1;
        if (readCount == 0)
            wrt.release();          // last reader
        mutexReadCount.release();
    }

    public void acquireWriteLock() {
        mutexWriteCount.acquire();
        writeCount = writeCount + 1;
        if (writeCount == 1)
            rdr.acquire();          // block readers
        mutexWriteCount.release();
        wrt.acquire();
    }

    public void releaseWriteLock() {
        wrt.release();
        mutexWriteCount.acquire();
        writeCount = writeCount - 1;
        if (writeCount == 0)
            rdr.release();          // no more writers waiting
        mutexWriteCount.release();
    }

Q6,Q7 Mutual exclusion and concurrency: Describe the Dining Philosophers problem and outline a deadlock-free solution using semaphores.

1. Method 1: allow at most four philosophers to sit at the table at the same time. This means at least one philosopher can always eat and, when she has finished, another can eat, and so on; there is progress.

    room.acquire();                      // semaphore "room" initialised to 4
    chopSticks[myName].acquire();        // acquire left
    chopSticks[(myName+1)%5].acquire();  // acquire right
    eat();
    chopSticks[myName].release();        // release left
    chopSticks[(myName+1)%5].release();  // release right
    room.release();

Drawback: this scheme reduces the potential concurrency in order to prevent a deadlock situation occurring, so system throughput is reduced on the chance that a deadlock might otherwise occur.

2. Method 2: acquire both chopsticks together. We could implement the chopsticks as an integer array with a mutex semaphore to guard modifications to it; note that haveChopSticks must only be set once both chopsticks have actually been taken.

    haveChopSticks = false;
    while (!haveChopSticks) {
        mutex.acquire();                 // semaphore(1)
        if ((chopSticks[myName] == 1) && (chopSticks[(myName+1)%5] == 1)) {
            chopSticks[myName] = 0;
            chopSticks[(myName+1)%5] = 0;
            haveChopSticks = true;       // only claim success when both were free
        }
        mutex.release();
    }

Drawback: in reality, it would be difficult for a process, especially one with large resource requirements, to acquire all of its resources at the same time before it could do any work.

3. Method 3: an even-numbered philosopher picks up her left chopstick first and then her right, while an odd-numbered philosopher picks up her right first and then her left; this asymmetry prevents a deadlock situation.

    if ((myName % 2) == 0) {
        chopSticks[myName].acquire();        // acquire left
        chopSticks[(myName+1)%5].acquire();  // acquire right
    } else {
        chopSticks[(myName+1)%5].acquire();  // acquire right
        chopSticks[myName].acquire();        // acquire left
    }

Drawback: this works only for this concurrency scenario; a similar solution may not be obvious in a typical resource scheduling situation.

Q9: State the design requirements of a File System. List and explain some of the basic system calls typically provided for accessing and organising files.

A file system has a number of requirements: mapping files onto devices; organising files into directories; file protection; file sharing; data integrity; support for a variety of storage devices; and a user interface. The file system must provide an interface to the user which would typically offer the following functionality: Open - prepare a file to be referenced within the operating system; Close - prevent further reference until the file is reopened; Create - build a new file within the collection; Delete - remove a file; Copy - create another version of the file with a new name; Rename - change the name of a file.
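These operations map closely onto the standard Java file API; the short sketch below is illustrative only (the file names are invented) and uses java.nio.file to show create, copy, rename and delete.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardCopyOption;

    // Illustrative sketch: the create/copy/rename/delete operations expressed
    // with Java's file API. The file names here are invented examples.
    public class FileOpsSketch {
        public static void main(String[] args) throws IOException {
            Path original = Path.of("notes.txt");
            Files.createFile(original);                        // Create: build a new file
            Files.writeString(original, "CS240 revision\n");   // open, write and close the file
            Path copy = Files.copy(original, Path.of("notes-copy.txt"),
                                   StandardCopyOption.REPLACE_EXISTING);  // Copy: new name, same content
            Files.move(copy, Path.of("notes-backup.txt"),
                       StandardCopyOption.REPLACE_EXISTING);   // Rename: change the name of a file
            Files.delete(original);                            // Delete: remove a file
            Files.delete(Path.of("notes-backup.txt"));
        }
    }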

Q2,3 processor scheduling: What is a Distributed System? Outline the scheduling goals of a Distributed Operating System and the difficulties in achieving them.

A Distributed System is composed of a collection of physically distributed computer systems connected by a local area network. From a scheduling perspective, a distributed operating system would endeavour to monitor and share or balance the load on each processing node. Scheduling in distributed systems is even more complex because the system information is generally not accessible in one single place but is managed on separate nodes in the network. Algorithms must gather information needed for decision making and transmit control information using message exchanges across the network. Message communication can be subject to delays, corruption and loss all leading to greater complexity within distributed algorithms.

Q1 Introduction: What is a process control block (PCB)? Summarise the typical organisation and content of a PCB.

A Process Control Block is an execution context (all resource information about the process and its activity) that can be used for independent scheduling of that process onto any available processor. It consists of: Process Identification Data; Processor State; and Process Control Data. The PCB may be moved between different queues over the process lifetime depending on the priority or state of its execution; the PCB record thus represents the process on one of the OS queues.
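As a rough illustration of the three groups of fields, a PCB might be modelled as below; the field names are invented for the sketch, and a real kernel structure contains many more entries.

    // Illustrative sketch of a PCB's organisation; fields are invented examples
    // of the three groups listed above, not a real kernel layout.
    public class ProcessControlBlock {
        // Process identification data
        int pid;
        int parentPid;
        int userId;

        // Processor state (saved and restored on a context switch)
        long programCounter;
        long stackPointer;
        long[] generalRegisters = new long[16];

        // Process control data
        String state = "READY";      // e.g. NEW, READY, RUNNING, WAITING, TERMINATED
        int priority;
        long cpuTimeUsed;            // accounting information
        int[] openFileDescriptors;   // resources held
        long pageTableBase;          // memory-management information
    }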

Q6,Q7 Mutual exclusion and concurrency: Peru and Bolivia have national train routes that share a single-track pass. Describe the drivers' coordination scheme (the rock in the bowl) and explain why its direct software equivalent is not a correct solution to the mutual exclusion problem.

A single bowl is used, but initially it contains a rock. If either driver finds a rock in the bowl, he removes it and drives through the pass. He then walks back to put a rock in the bowl. If a driver finds no rock in the bowl when he wishes to enter the pass, he waits. The software equivalent is:

    int flag = 1;            /* shared variable, initially 1 (rock in the bowl) */
    while (flag == 0) { }    /* do nothing while there is no rock */
    flag = 0;                /* take the rock */
    Enter the Pass ...
    Leave the Pass
    flag = 1;                /* put the rock back */

This method worked for the train drivers; why doesn't it work in software? Unlike the software version, both drivers cannot remove the rock: removing a rock from the bowl is an indivisible operation, so only one driver could find and take it. In software, however, our algorithm separately tests flag and then sets it, so both processes can observe flag == 1 and enter the pass together.

Q1 Introduction: What is meant by Cloud Computing? State some of its benefits.

Data and/or applications are hosted remotely at data centres belonging to different companies on the Internet rather than residing within your own computer. You "rent" the software, storage or computational resources of these systems and access them using Web based browsers as the user interface. Cloud computing is seen as a highly reliable, highly available, cost effective high performance means of being able to access your data and applications from anywhere in the world. Microsoft Azure, Google Cloud Platform, Amazon Web Services are examples of Cloud Application Environments.

Q9:What is the motivation for implementing a hierarchical file system namespace instead of a flat namespace? Outline a scheme for implementing a hierarchical file system which uses index blocks for tracking file space allocation.

As the number of files increases, the number of unique names we use to identify each file becomes unwieldy. If all of our files are contained in the same directory, so that a single directory holds every file, we have a flat name space, and the number of names involved makes it difficult to remember the name of a particular file we are looking for. We need to organise files into groups which might relate to a particular application or contain related documents or data, and organise those files into separate directories of their own. As users of the system, when we are focused on a particular activity, all of the files relating to that activity can easily be identified within the subdirectory of interest. It also presents a suitable model for sharing portions of our file space with other users. In an index-block implementation of a hierarchical namespace, a directory is simply a file whose data blocks hold directory entries; each entry contains a name and a pointer to the index block of a file or of a subdirectory, so the tree can be walked by following index blocks from the root directory downwards.
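One way to picture this scheme is sketched below; the class and field names are invented for illustration, and a real file system stores these structures on disk rather than as Java objects.

    import java.util.List;

    // Illustrative in-memory model of a hierarchical namespace built on
    // index-block allocation. Names and fields are invented for the sketch.
    class IndexBlock {
        long[] physicalBlocks;            // logical block i -> physicalBlocks[i]
    }

    class DirectoryEntry {
        String name;                      // file or subdirectory name
        boolean isDirectory;              // directories are just files of entries
        IndexBlock indexBlock;            // where this file's/directory's blocks live
    }

    class Directory {
        List<DirectoryEntry> entries;     // stored in the data blocks named by the
                                          // directory's own index block

        // Resolve one component of a path, e.g. "projects" in /home/projects/report.
        DirectoryEntry lookup(String name) {
            for (DirectoryEntry e : entries)
                if (e.name.equals(name))
                    return e;
            return null;                  // not found in this directory
        }
    }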

Q8 Memory: Compare the bitmap approach to the linked list approach for keeping track of free memory space.

Bit Maps: divide the memory space into small fixed-sized allocation units, say 1,024 bytes (1K) each. The bitmap contains a single bit for each allocation unit which indicates the status of that unit, e.g. 1 for used, 0 for unused. The size of memory and the size of the allocation units determine the size of the bitmap, and the size of the allocation unit can be an important design consideration. Allocation involves searching for a string of bits which represents a region large enough to accommodate the request; deallocation simply resets the appropriate bits to available.

Linked Lists: maintain an information record describing the size and location of each of the variable-sized free blocks and keep this variable number of records in a dynamic linked list structure. Allocation: the memory manager must search the list to find a free block of suitable size for the request. Deallocation: a new free block record is added to the free list, describing the starting location and size of the released block. Placement policies include Best Fit (the smallest accommodating block), Worst Fit (the largest free block, dividing it and leaving a smaller but still useful free block), First Fit (the first block which can accommodate the request) and Next Fit (the first block which can accommodate the request, searching from the position of the last allocation).

Comparison:
Bit Map - fixed-sized data structure regardless of memory usage; allocation efficiency depends on the size of the bitmap (a byte-by-byte search); deallocation is very quick; roughly half a page of internal fragmentation per process.
Linked List - dynamic size, depending on the number of free segments; allocation may involve a long list search if there are many small segments; deallocation requires complete list analysis to determine neighbour segment mergers; no internal fragmentation.
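A first-fit search over a bitmap can be sketched as below; this is a minimal illustration (the bitmap is a boolean array and the unit counts are arbitrary), not how any particular memory manager is implemented.

    // Minimal first-fit allocator over a bitmap of allocation units.
    // true = unit in use, false = unit free. Sizes and names are illustrative.
    public class BitmapAllocator {
        private final boolean[] map;

        public BitmapAllocator(int units) { map = new boolean[units]; }

        // Find 'unitsNeeded' consecutive free units, mark them used and
        // return the index of the first one, or -1 if no run is large enough.
        public int allocate(int unitsNeeded) {
            int runStart = 0, runLength = 0;
            for (int i = 0; i < map.length; i++) {
                if (map[i]) { runLength = 0; runStart = i + 1; continue; }
                if (++runLength == unitsNeeded) {
                    for (int j = runStart; j <= i; j++) map[j] = true;
                    return runStart;
                }
            }
            return -1;
        }

        // Deallocation just resets the bits for the released region.
        public void free(int start, int units) {
            for (int i = start; i < start + units; i++) map[i] = false;
        }
    }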

Q6,Q7 Mutual exclusion and concurrency: State the necessary conditions required for a good solution to the Mutual Exclusion problem. What is a spin-lock and when would this mechanism be used for implementing mutual exclusion? Give an example of how indivisible hardware instructions could be used to implement a spin-lock.

Conditions necessary for a correct solution to the Mutual Exclusion Problem: (a) Mutual Exclusion - no two threads may execute simultaneously in the critical code section (i.e. the section which manipulates the non-shareable resource). (b) Progress - a thread operating outside the critical section cannot prevent another thread from entering the critical section. (c) Bounded Waiting - once a thread has indicated its desire to enter a critical section, it is guaranteed that it may do so in a finite time.

A safer approach to solving the mutual exclusion problem in multiprocessor systems is to use special processor instructions which can read and write memory indivisibly of the activities of other processors and which assert lock mechanisms on areas of memory to avoid cache coherence problems. A test-and-set instruction writes to a memory location and returns its old value in a single atomic operation; the memory location is locked from access by other processors during the operation.

Spin-lock: a Boolean variable lock is shared by all threads, initially false. Each thread tries to set lock to true before entering the critical section. The test-and-set operation returns the current value of lock and also sets it to true in one indivisible operation; if the value returned is true, the lock was already set and the thread must spin until test-and-set returns false.

    while (test-and-set(lock, true)) { }   // loop until test-and-set returns false
    // critical section
    test-and-set(lock, false);             // release the lock
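In Java the same spin-lock idea can be expressed with java.util.concurrent.atomic.AtomicBoolean, whose getAndSet method acts as an atomic test-and-set; the sketch below is illustrative rather than the exact form used in the course notes.

    import java.util.concurrent.atomic.AtomicBoolean;

    // Spin-lock built on an atomic test-and-set (AtomicBoolean.getAndSet).
    public class SpinLock {
        private final AtomicBoolean locked = new AtomicBoolean(false);

        public void lock() {
            // getAndSet(true) returns the old value and sets it to true atomically:
            // spin while the old value was already true (someone else holds the lock).
            while (locked.getAndSet(true)) { }
        }

        public void unlock() {
            locked.set(false);   // release: the next spinning thread's getAndSet returns false
        }
    }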

Q1 Introduction: Explain the distinction between the Kernel of an operating system and an Operating System Distribution for a target environment.

The Kernel of an operating system refers to the core set of services associated with managing the CPU, the memory and hardware devices. The Kernel is common to various Operating System Distributions. An Operating System Distribution will have additional components to the Kernel such as a File System, a Database Engine, Network Communication Suite, Graphics and Media functions, a Web Server, a User Interface GUI, Security and Authentication elements, Device Drivers for various external I/O devices, and Utilities for configuring the system. The packages chosen for a particular distribution might be tailored for different target environments like desktop machines, servers or mobile devices, home or business use.

Q2,3 processor scheduling: Modern applications may be organised as a collection of cooperating processes. Discuss how the traditional Unix scheduler could be modified to give a fair share of scheduling time to applications regardless of the number of processes operating within each.

The traditional Unix scheduler was designed to support a time-sharing, multitasking, interactive environment on a single-processor machine. It was not originally designed for real-time process requirements (scheduling tasks within time constraints) or for symmetric multiprocessing; modern Unix implementations since about 2003 have been revamped to cater for these requirements. It provides good response time for interactive user tasks while ensuring that low-priority background tasks don't starve and that high-priority system tasks can be done quickly. It uses a multilevel feedback approach with priority queues which are serviced using round robin. Priority is a value between 0 and 127: priorities 0 to 49 are for kernel processes, and priorities 50 to 127 for user processes. The priority of all processes system-wide is recalculated at one-second intervals from their execution history and a base priority level, and the result is used to place processes in bands of priority levels:

    P_j(i) = Base_j + CPU_j(i)/2 + nice_j

User processes are preempted after 1 second if still using the CPU, and the highest-priority task is chosen next. Recent CPU usage reduces a process's scheduling priority by increasing its priority value. To ensure processes are eventually rescheduled, the recorded CPU utilisation of a process is decayed during each priority recalculation interval using the formula:

    CPU_j(i) = CPU_j(i-1)/2

Q8 Memory: What is meant by the "working set" of a process?

Each process must be allocated a number of physical memory pages in which the memory manager attempts to store pages from its logical address space corresponding to its current locality of reference. The set of physical pages allocated to a process is known as its working set. The upper size limit of the working set is determined by the maximum number of physical memory pages available. The lower limit is determined by processor architectural features.

Q1 Introduction: Explain how processes and hardware devices communicate with the operating system to obtain services.

How Processes Communicate with the Operating System: communication with the operating system kernel is done via a special system call mechanism. Processes need to communicate with the operating system in order to obtain protected system services like accessing the hard disk or other hardware, creating new processes, doing interprocess communication or configuring kernel services. A special processor instruction known as a software interrupt is the mechanism for doing this. When a processor encounters a software interrupt machine code instruction as part of the process code, the processor stops executing the current process and saves its state in its PCB structures. The processor then indexes into the interrupt vector table and proceeds to fetch instructions from a designated area of memory where the operating system stores an interrupt handler for the function specified in the software interrupt. This is known as a context switch from one memory space (that of the process) to another memory space (that of the kernel); it is done in a controlled way automatically by the processor and not by the user. The interrupt handler examines what type of service has been requested and calls the appropriate operating system subroutine to deal with it. After the interrupt routine has finished and returned, the original process may be continued when it is next scheduled on the processor. In order to enforce hardware protection, certain processor instructions are restricted and cannot be executed by ordinary processes. The processor executes in one of two modes, User Mode or Supervisor Mode: when executing a normal process the processor is in User Mode, but when an interrupt is received, the processor automatically changes to Supervisor Mode to execute operating system code in a controlled manner.

Q2,3 processor scheduling:Priority Scheduling

In a priority scheduling system, processes are assigned a numeric value which indicates their scheduling priority. Processes with the highest priority are always chosen first by the scheduler. It is an easy scheme to implement but requires a means of allocating priorities fairly so that lower priority tasks do not starve.

Q2,3 processor scheduling: A traditional processor scheduler focuses on achieving fair allocation of processor time among the total set of processes. It is possible that one user or application may be running significantly more processes to the detriment of other users. Suggest a means of achieving a fair allocation of CPU time among different users.

In multiuser multitasking systems each user may be running a collection of their own tasks, and one user or application may be running significantly fewer (or more) processes than another. Our scheduling algorithms so far focus only on achieving fair allocation among the total set of processes, not on achieving fair allocation of CPU time among different users or different applications. Fair-Share Scheduling: the scheme requires some alterations to the process priority calculation to assign a percentage of processor time to each group of processes. For example, with 4 separate groups and different share weightings per group:

    P_j(i) = Base_j + CPU_j(i)/2 + GCPU_k(i) / (4 x W_k)

(where W_k is a weighting of CPU time assigned to group k such that 0 <= W_k <= 1), and we decay the group usage GCPU in the same way:

    GCPU_k(i) = GCPU_k(i-1)/2
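As a rough numerical sketch of the recalculation (the base priority, weights and usage figures below are invented values), the per-process priority under fair-share scheduling can be computed like this:

    // Illustrative fair-share priority recalculation; all input values are made up.
    public class FairShareSketch {

        // P_j(i) = Base_j + CPU_j(i)/2 + GCPU_k(i) / (4 * W_k)
        static double priority(double base, double cpu, double groupCpu, double groupWeight) {
            return base + cpu / 2.0 + groupCpu / (4.0 * groupWeight);
        }

        public static void main(String[] args) {
            double base = 60;        // base priority level
            double cpu = 30;         // decayed CPU usage of this process
            double groupCpu = 40;    // decayed CPU usage of the whole group
            double weight = 0.25;    // this group's share of the CPU

            System.out.println("priority value = " + priority(base, cpu, groupCpu, weight));

            // Each recalculation interval the usage figures are halved (decayed):
            cpu = cpu / 2.0;
            groupCpu = groupCpu / 2.0;
            System.out.println("after decay   = " + priority(base, cpu, groupCpu, weight));
        }
    }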

Q5 Communication mechanism: Compare and contrast the benefits of communication mechanisms based on Shared Memory versus those based on Message Passing.

Interprocess communication refers to the exchange or sharing of information between separate, independently schedulable tasks. Shared Memory: communication takes place through a shared region of memory accessible to all the communicating parties. This usually requires the processes to be on the same host, and they are usually cooperative parts of the same application. (1) The model is application oriented and suits cooperating processes willing to share memory. (2) Communication is implicit, through read/write operations. (3) It is highly efficient, with no communication protocols. (4) Synchronisation mechanisms are needed. Message Passing: communication uses explicit message-passing primitives provided by a communication subsystem of the operating system. Independent processes need a different means of communication because they cannot access each other's disjoint address spaces; the processes may not trust each other and may reside on different hosts. A message-passing facility is any intermediary operating system mechanism which can take data from the address space of one process and place it in an area accessible to the other. A message-passing mechanism must implement two basic primitives, send and receive, or their equivalent, which processes explicitly invoke to exchange messages. Higher-level abstractions, such as remote procedure call, are extensions of these basic primitives.

Q8 Memory: A computer memory system is composed of a hierarchy of mechanical and electrical components. Describe such a hierarchy and explain the function of the hierarchical layers.

Many modern computers implement a modified Harvard architecture where both data and code can be stored together in a main memory, but separate CPU caches are used for instructions and data to achieve better performance. The system memory component is composed of a hierarchy of levels where each level in the hierarchy uses storage technologies offering different characteristics; the memory system is a configuration of different storage technologies that meets a specific cost/capacity/performance objective. Typically the hierarchy runs from the processor registers, through one or more levels of fast SRAM cache, to DRAM main memory and then to secondary storage such as magnetic disk or solid-state drives, with each level larger, slower and cheaper per byte than the level above it. A processor which is capable of executing a billion instructions per second cannot achieve that performance if these instructions cannot be supplied to the processor at comparable speeds by the memory. The electrical memory resource DRAM is quite critical for good system performance and this section focuses on the management of this resource. The memory manager keeps track of which parts of memory are in use and which parts are free; it allocates memory to processes when they need it and deallocates it when they are finished. On virtual memory systems, the memory manager will manage swapping between the main memory and disk when the main memory is too small to hold all the processes. The mapping of memory to the processor caches is done by hardware and not under software control, in order to achieve adequate performance; the memory manager is concerned with the operation and allocation of main memory and secondary storage only.

Q2,3 processor scheduling: Distinguish between symmetric multiprocessing and asymmetric multiprocessing.

Multiprocessing is the use of more than one CPU in a system, either separate physical CPUs or a single CPU with multiple execution cores. In asymmetric multiprocessing one processor, the master, centrally executes operating system code and handles I/O operations and the assignment of workloads to the other processors, which execute user processes. With this scheme only one processor accesses the system data structures for resource control; while this makes it easier to code the operating system functions, in small systems with few processor cores the master may not have enough slaves to keep it busy. Symmetric multiprocessing is a system where all processors carry out similar functions and are self-scheduling. The identical processors use a shared bus to connect to a single shared main memory, have full access to all I/O devices and are controlled by a single operating system instance. Each processor will examine and manipulate the operating system queue structures concurrently with others when selecting a process to execute; this access must be programmed carefully to protect the integrity of the shared data structures.

Q8 Memory: When a program is loaded into memory with other programs, two problems need to be solved: (1) preventing access to its address space by other programs, and (2) compiling the binary image of the program in such a way that it can execute no matter what part of memory it is loaded into (process relocation: memory addresses generated at runtime need to map to the physical location). Explain how paged memory systems can solve both of these problems. (Hint: the MMU maps memory addresses to physical pages; a process only sees its own page table, which holds no pointers to other processes' pages, so it cannot access another process's memory.)

Paged Architecture (paged memory systems): in a paged architecture, memory is divided into a number of relatively small fixed-sized units known as pages. A process address space is made up of a number of pages, and the physical pages may be allocated to a process from any of the free pages within the memory space. These pages are mapped by the memory management unit (MMU) of the processor to the corresponding physical pages in memory. The mapping information is kept in a page table, and there is one page table per process. A memory address is divided into two parts, a page number and an offset within that page; the MMU exchanges the page number from the logical address generated by the processor with the corresponding physical page found in the page table. Advantages: Protection - as processes only have a logical view of their address spaces, it is not possible for them to access the address space of other processes, because a process's page table won't contain pointers to pages of any other process. Relocation - the memory space assigned to a process can be composed of pages scattered throughout physical memory which do not have to be adjacent to one another, and it is easy to expand the address space of a process by simply mapping additional free pages into its page table.
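The page-number/offset split can be shown with a tiny translation sketch; the 4 KB page size and the page-table contents are invented for illustration.

    // Illustrative logical-to-physical address translation with 4 KB pages.
    // The page table contents below are made-up example values.
    public class PagingSketch {
        static final int PAGE_SIZE = 4096;             // 2^12-byte pages

        public static void main(String[] args) {
            long[] pageTable = { 7, 3, 12, 5 };        // logical page -> physical frame

            long logicalAddress = 2 * PAGE_SIZE + 100; // somewhere in logical page 2
            int pageNumber = (int) (logicalAddress / PAGE_SIZE);
            int offset = (int) (logicalAddress % PAGE_SIZE);

            long frame = pageTable[pageNumber];        // MMU looks up the frame
            long physicalAddress = frame * PAGE_SIZE + offset;

            System.out.println("logical " + logicalAddress + " -> physical " + physicalAddress);
        }
    }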

Q5 Communication mechanism: Compare and contrast the Unix interprocess communication paradigms: pipes, named pipes, sockets and message queues.

Pipe: processes that are related by creation hierarchy and run on the same machine can use a fast kernel-based stream communication mechanism known as a pipe. Pipes are FIFO byte-stream communication channels implemented with finite-size byte-stream buffers maintained by the kernel. A pipe can only be used between related processes and is generally used as a unidirectional communication channel from parent to child or vice versa; the parent creates the pipe before creating the child so that the child can inherit access to it. Named Pipes: these can be used for communication between unrelated processes that have access to the same file name space. A named pipe is implemented as a special FIFO file by the kernel, rather than as a memory buffer, and so is accessible to independent processes through the file system's shared name space. Processes open named pipes in the same way as regular files, so unrelated processes can communicate by opening the pipe and either reading or writing it. Network Sockets: sockets are the more usual and general-purpose means of communication between independent processes and can be used when neither kernel data structures nor files can be shared (for example, across the Internet). A socket is a data structure created by an application to represent a communication channel; the socket must be associated with some entity (service access point/port) within the communication system. By using the communication primitives of the socket interface, the process exchanges data from its address space to the address space of the communication subsystem, which handles delivery. The communication service (e.g. TCP/IP) provides the glue for establishing connections between sockets belonging to different processes, which are often on different machines and networks, and for transporting and routing messages through the network to a destination. On the Internet, IP addresses and port numbers are used as a means of identifying the endpoints of a connection to which a program's socket might be connected. Message Queues: message queues allow processes to exchange data in the form of whole messages asynchronously. Messages have an associated priority and are queued in priority order. The operating system maintains the message queue independently until it is unlinked (destroyed) by a user process or the system is shut down. It is not necessary to create a rendezvous-style connection between the processes, but they must both have access to the message queue and its API. Comparison: Pipes - fast, memory-based, stream-oriented, between related processes; Named Pipes - persistent, use the shared filesystem; Message Queues - asynchronous whole-message communication with priority ordering; Sockets - low-level network communication; Remote Procedure Call - simplifies client/server programming.
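For sockets, a minimal Java sketch of one end of a TCP exchange is shown below; the host name and port number are invented for illustration, and a matching server is assumed to be listening.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.Socket;

    // Minimal client-side socket sketch: connect, send one line, read one line.
    // "localhost" and port 5000 are illustrative values only.
    public class SocketClientSketch {
        public static void main(String[] args) throws Exception {
            try (Socket socket = new Socket("localhost", 5000);
                 PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(socket.getInputStream()))) {
                out.println("hello server");          // write to the stream channel
                System.out.println(in.readLine());    // read the reply
            }
        }
    }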

Q4 Magnetic Hard Drives: Describe the physical operation of a magnetic hard disk and the organization of its recording surface.

Platter: on the surface of each platter: 1. Track - organised as a concentric group of magnetic tracks on which data can be stored. 2. Sector - each track is divided into a number of blocks of fixed size called sectors (in which the data is stored). A sector is the smallest amount of data that can be addressed, read from or written to the disk in a single I/O operation; we can view the disk as an array of sectors in the range 0 to n-1, essentially the address space of the drive. 3. A disk containing two platters has three sides available for data: track-positioning data is written to the disk during assembly at the factory, the disk controller reads this data to place the drive heads in the correct sector position, and one side of one platter contains space reserved for this hardware track-positioning information and is not available to the operating system. 4. Cluster - some operating systems group adjacent sectors into clusters; a cluster is the minimum amount of space on a disk that a file can occupy. Reserving a cluster allows the file some growth while maintaining the efficiency of unfragmented access. 5. Each platter surface has a read/write head which can move linearly across it; the actuator moves the head to the correct track on a given platter and then waits for the correct block to pass underneath it for access.

Q2,3 processor scheduling: non-preemptive scheduling algorithms:FCFS

Process Scheduling Algorithms: some evaluation criteria are necessary to allow us to compare different queuing policies and their effects on system performance:
Processor Utilisation = (Execution Time) / (Total Time)
Throughput = Jobs per Unit Time
Turnaround Time = (Time job finishes) - (Time job was submitted)
Waiting Time = Time spent doing nothing
Response Time = (Time job is first scheduled on resource) - (Time job was submitted)
Response Ratio = (Waiting Time) / (Service Time)
The order in which processes are serviced from a queue can be described by a Gantt chart: each process receives a number of units of time on a particular resource in the order described. Scheduling algorithms can be preemptive or non-preemptive. FCFS (First Come First Served) is non-preemptive and inherently fair, but it performs badly for interactive systems where the response time is important and in situations where job lengths differ greatly; the sketch below works through the turnaround and waiting figures for a small FCFS example.
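To make the criteria concrete, here is a small worked sketch computing waiting and turnaround times under FCFS; the three burst times are invented example values and all jobs are assumed to arrive at time 0.

    // Worked FCFS example: three jobs arriving at time 0 with invented burst times.
    public class FcfsSketch {
        public static void main(String[] args) {
            int[] burst = { 24, 3, 3 };   // CPU bursts in time units, served in arrival order
            int time = 0;
            double totalTurnaround = 0, totalWaiting = 0;

            for (int i = 0; i < burst.length; i++) {
                int waiting = time;        // time spent queued before first service
                time += burst[i];          // job runs to completion (non-preemptive)
                int turnaround = time;     // finish time minus submission time (0)
                totalWaiting += waiting;
                totalTurnaround += turnaround;
                System.out.println("job " + i + ": waiting=" + waiting
                                   + " turnaround=" + turnaround);
            }
            System.out.println("average waiting    = " + totalWaiting / burst.length);
            System.out.println("average turnaround = " + totalTurnaround / burst.length);
        }
    }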

Q5 Communication mechanism: Compare and contrast the merits of socket based communication in Java with the Remote Method Invocation (RMI) mechanism. Use your experience from Practical 6 of your coursework

Remote Procedure Call: recall the socket mechanism which uses an underlying transport protocol like TCP to create a connection between two end points on a network; the paired sockets form a bi-directional streamed communication channel between client and server. When a client communicates with a server, it usually wants to execute a function of some kind at the server. Socket-based communication is very flexible for creating your own protocols, but it places the burden on the programmer of writing all the code required to invoke functions and to exchange and parse parameters and results as a series of byte-streamed message exchanges. RPC is a communication mechanism built on top of an underlying connection-oriented message-passing facility, like sockets, which is intended to make socket-based client/server programming easier. The socket-based messaging functions are hidden within communication stub programs which are linked with the client and server code; the stubs can be generated automatically from the interface that the server exposes. How it works: a client process calls the desired procedure and supplies arguments to it. The client RPC stub mechanism packs the arguments into a message and sends it to the appropriate server process. The receiving stub decodes it, identifies the procedure, extracts the arguments and calls the procedure locally. The results are then packed into a message and sent back in a reply, which is passed back by the stub to the client as a procedure return. Java RMI applies the same idea to remote objects: the client invokes methods on a stub object which marshals the arguments and sends them to the remote JVM, so the programmer works with method calls rather than raw byte streams.

Q2,3 processor scheduling: Preemptive Scheduling: Round Robin

Round Robin is chosen because it offers good response time to all processes, which is important for achieving satisfactory interactive performance in multitasking systems. Round Robin is a preemptive version of FCFS: each process executes in turn until its quantum expires, forcing a task context switch. The quantum can be varied to give the best results for a particular workload. Note that the round robin algorithm incurs a greater number of task switches than non-preemptive algorithms, and each task switch takes a certain amount of time for the CPU to change the process environment. Too many task switches (i.e. a quantum that is too small) means a greater proportion of CPU time is spent doing task switches instead of useful work; if the quantum is too large, then RR degenerates to FCFS with poor response times. When choosing the quantum for round robin scheduling, it is difficult to suit all jobs; if there is a wide deviation in average CPU burst times then the multilevel queue approach, with feedback, may be adopted.

Q2,3 processor scheduling: non-preemptive algorithms: SJF

SJF is provably the optimal algorithm in terms of throughput, waiting time and response performance, but it is not fair: SJF favours short jobs over longer ones. The arrival of shorter jobs in the ready queue postpones the scheduling of the longer ones indefinitely, even though they may have been in the ready queue for quite some time; this is known as starvation. SJF with an approximated CPU burst is also more complex to implement than FCFS, since you must maintain cumulative history information and perform the calculations required for predicting burst length each time you come to choose the next task.
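Burst prediction is commonly done with an exponential average of previous bursts, tau(n+1) = alpha*t(n) + (1-alpha)*tau(n); the sketch below uses an invented alpha of 0.5, an invented initial guess and made-up burst measurements.

    // Exponential averaging for predicting the next CPU burst (used by SJF variants).
    // alpha, the initial guess and the measured bursts are invented example values.
    public class BurstPredictionSketch {
        public static void main(String[] args) {
            double alpha = 0.5;
            double prediction = 10.0;                    // initial guess tau_0
            double[] measured = { 6, 4, 6, 4, 13, 13 };  // actual bursts as they occur

            for (double t : measured) {
                System.out.println("predicted " + prediction + ", actual " + t);
                prediction = alpha * t + (1 - alpha) * prediction;  // tau_{n+1}
            }
            System.out.println("next prediction = " + prediction);
        }
    }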

Q2,3 processor scheduling:Distinguish between Soft Real Time Systems and Hard Real Time Systems. Briefly outline two real-time scheduling algorithms.

Some computer systems are designed for specific industrial process control applications. For example, consider a production line where raw materials are supplied at one end of the line and finished goods come out the other. Along the production line, sensors, valves and other actuators are monitored and controlled by the computer system: sensors bring data to the computer which must be analysed and which subsequently results in modifications to valves and actuators. A real-time system has well-defined, fixed time constraints; processing must be done within those constraints or the system may fail. Hard Real Time Systems are required to complete a critical task within a guaranteed amount of time. Soft Real Time Systems are ones which endeavour to meet scheduling deadlines but where missing an occasional deadline may be tolerable. Critical processes are typically given higher priority than others and their priority does not degrade over time. The dispatch latency must be small to ensure that a real-time process can start executing quickly; this requires that system calls be preemptible, and it must be possible to preempt the operating system itself if a higher-priority real-time process wants to run. Real-Time Scheduling: Earliest Deadline First - when an event is detected, the handling process is added to the ready queue; the list is kept sorted by deadline, which for a periodic event is the next time of occurrence of the event, and the scheduler services processes from the front of the sorted queue. Least Laxity - if a process requires 200 msec and must finish within 250 msec, then its laxity is 50 msec; the Least Laxity algorithm chooses the process with the smallest amount of time to spare. This algorithm might work better in situations where events occur aperiodically.

Q8 Memory: What is a "page replacement algorithm"? Briefly discuss two practical page replacement algorithms.

Sometimes there may be no free pages left in memory to accommodate an incoming page, or perhaps the operating system restricts the size of the working set for a process in order to be fair to other processes in the allocation of physical memory space; in that case a page currently in memory needs to be swapped out to disk to make room for the incoming page. The page that is selected for replacement is chosen using a page replacement algorithm. It is important not to swap out a page that is likely to be needed again shortly by the process, because each page fault results in a disk operation which is significantly slower than access to main memory; a lot of page-faulting activity increases the overall effective memory access time significantly as well as making the disk system busy. If the working set is too small, pages of the process will be continually exchanged between memory and disk, resulting in poor execution performance; this phenomenon is known as thrashing and can generally be eliminated by increasing the size of the working set. The Optimal page replacement algorithm, which guarantees a minimum number of page faults, is to replace the page which won't be used for the longest time. First In First Out replaces the page that has been in memory the longest; the algorithm is very easy and cheap to implement, but it is not particularly suited to the behaviour of most programs. Least Recently Used selects for replacement the page which hasn't been accessed for the longest time; it requires a time stamp per page. Least Frequently Used replaces the page with the fewest accesses over a past period of time; it requires a counter per page. Belady's Anomaly: with FIFO replacement, a bigger working set can end up producing more page faults.
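A least-recently-used working set can be sketched in Java with a LinkedHashMap in access order, evicting the eldest entry once the frame allocation is exceeded; the frame count and reference string below are invented example values.

    import java.util.LinkedHashMap;
    import java.util.Map;

    // LRU page replacement sketch: a fixed number of frames held in an
    // access-ordered map, evicting the least recently used page on overflow.
    public class LruSketch {
        public static void main(String[] args) {
            final int frames = 3;   // illustrative working-set size

            Map<Integer, Integer> memory =
                    new LinkedHashMap<Integer, Integer>(frames, 0.75f, true) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<Integer, Integer> eldest) {
                    boolean evict = size() > frames;
                    if (evict) System.out.println("evict page " + eldest.getKey());
                    return evict;
                }
            };

            int[] referenceString = { 7, 0, 1, 2, 0, 3, 0, 4 };   // made-up accesses
            for (int page : referenceString) {
                if (!memory.containsKey(page)) System.out.println("page fault on " + page);
                memory.put(page, page);   // touching a page makes it most recently used
            }
            System.out.println("resident pages: " + memory.keySet());
        }
    }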

Q1 Introduction: Draw a simplified state transition diagram describing the typical life cycle of a process within an operating system and explain what causes the transitions between states in your diagram.

A process alternates between periods where it needs the CPU to execute its instructions and periods where it is waiting on other system resources, such as input/output devices, to provide data so that it can continue its execution. Processes are not always ready to use the processor: sometimes they must wait for devices that are supplying data to them, or for resources that need to be assigned or are already in use, or for user interaction or time periods that must elapse before they can continue. The typical states in the diagram are New, Ready, Running, Waiting (blocked) and Terminated: a newly created process is admitted to the Ready queue; the scheduler dispatches a Ready process to Running; a Running process moves to Waiting when it requests I/O or an unavailable resource, returns to Ready when that I/O completes or when it is preempted (e.g. its time quantum expires), and moves to Terminated when it exits.

Q2,3 processor scheduling: non-preemptive algorithms: Highest Response Ratio Next (HRN)

With FCFS, long jobs hold up short jobs. SJF solves this, but continually arriving short jobs then block long jobs. It may be better that the longer a job waits in the system for the CPU, the greater its chance of being scheduled; the Highest Response Ratio Next (HRN) algorithm endeavours to meet this objective. HRN is a type of priority-based algorithm: the response ratio determines the ordering of the jobs, and as a job waits in the ready queue its priority increases.

Q6,Q7 Mutual exclusion and concurrency: Outline a software solution to the n-process mutual exclusion problem, indicating the entry code and exit code to be executed by each process. Explain the components of your code.

Bakery algorithm: it mimics a commonly used queue-servicing approach. A thread chooses a ticket, from an ordered set of numbers, which determines its position in the queue for the critical section. Because tickets are ordered and ties are broken deterministically, there is a service order and waiting is bounded. Choosing a ticket: look at the tickets of all threads so far and pick one bigger. Note that this operation could be done in parallel by other threads and they could generate the same ticket number; where ticket numbers are equal, we order the threads by their thread ID, with lower numbers going first. Waiting for the critical section: for each of the threads, if that thread is choosing a ticket then wait; if that thread holds a non-zero ticket which is smaller than ours (or equal with a smaller ID) then wait; then enter the critical section.

    /* Shared structures */
    boolean[] choosing;   /* An array of n elements, initially false */
    int[] number;         /* An array of n elements, initially 0 */

    /* Thread i - choose a ticket (entry code) */
    choosing[i] = true;
    number[i] = Max(number[0], number[1], ..., number[n-1]) + 1;
    choosing[i] = false;

    /* Wait until we hold the lowest (ticket, ID) pair */
    for (j = 0; j < n; j++) {
        while (choosing[j]) { }
        while ((number[j] != 0) && ((number[j], j) < (number[i], i))) { }
    }

    Enter Critical Section;

    /* Exit code */
    number[i] = 0;

Q6,Q7 Mutual exclusion and concurrency: Outline a solution to the Producer/Consumer problem using semaphores. Explain how producers are held up when the buffer is full, how consumers are held up when the buffer is empty, and how manipulation of the buffer structure itself is handled mutually exclusively.

    class Semaphore {
        private int value;

        public Semaphore(int value) { this.value = value; }

        public synchronized void acquire() {
            while (value == 0) {
                try { wait(); } catch (InterruptedException e) { }  // wait until released
            }
            value = value - 1;
        }

        public synchronized void release() {
            value = value + 1;
            notify();
        }
    }

A producer process can only put an item into the buffer if a free space exists; we use a semaphore called empty to hold up a producer if the buffer has no empty space. A consumer can only take an item from the buffer if the buffer is not empty; we use a semaphore called full to hold up a consumer if the buffer has no items. Modification of the buffer itself involves changing the values of the in and out variables and the contents of the buffer array; these modifications must be done within a critical section, so we use a semaphore mutex for mutual exclusion on the buffer.

    public Buffer() {
        in = 0;
        out = 0;
        buffer = new Object[BUFFER_SIZE];
        mutex = new Semaphore(1);
        empty = new Semaphore(BUFFER_SIZE);
        full = new Semaphore(0);
    }

    public void insert(Object item) {
        empty.acquire();                  // wait for a free slot
        mutex.acquire();                  // enter critical section
        buffer[in] = item;
        in = (in + 1) % BUFFER_SIZE;
        mutex.release();
        full.release();                   // signal that an item is available
    }

    public Object remove() {
        full.acquire();                   // wait for an item
        mutex.acquire();                  // enter critical section
        Object item = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        mutex.release();
        empty.release();                  // signal that a slot is free
        return item;
    }

Producer:

    public void run() {
        Date message;
        while (true) {
            message = new Date();
            buffer.insert(message);
        }
    }

Consumer:

    public void run() {
        Date message;
        while (true) {
            message = (Date) buffer.remove();
        }
    }

The driver code creates a Buffer object to be shared between the producers and consumers. It then creates two threads, passing a runnable object to the constructor of each: the first gets an instantiation of the Producer class and the second an instantiation of the Consumer class. Invoking the start() method causes each thread to execute the run() method of its runnable object.

