CS 6210


Is "ClientIdent" sent in plaintext or encrypted? Explain why.

"ClientIdent" is sent as plaintext. (+1) • Since the system uses private key encryption, the server needs to know the identity of the requestor to choose the right key for decryption.

(choose ONE false statement from the following) Satyanarayanan's LRVM is light weight because :

I. It does not have to keep undo logs on the disk
II. It provides lazy semantics for reducing the number of I/O activities needed to keep the virtual memory persistent
III. It does not implement the full transactional semantics in the database sense
**IV. It is implemented in the kernel

(choose ONE true statement from the following) There is no undo log in Satyanarayanan's LRVM because :

**I. LRVM copies the old values for the specified range of addresses in a set-range call into virtual memory
II. Entire virtual memory of a process is made persistent by using the LRVM library
III. The transactions as defined in LRVM never abort
IV. LRVM assumes that there is battery backup for the physical memory

(choose ONE true statement from the following) A server (such as a file system) with a recoverable state in Quicksilver will use:

**I. Two-phase commit protocol
II. One-phase standard protocol
III. One-phase immediate protocol
IV. One-phase delayed protocol

In xFS, at some point there are three log segments (Seg 1, Seg 2, and Seg 3). The log cleaner coalesces the log segments. Show the contents of the coalesced new log segment.

Seg 1: 1' 2' 5' 6'
Seg 2: 1" 2" 4' 3'
Seg 3: 3"
New segment: 5' 6' 1" 2" 4' 3"
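The cleaner's rule, keep only the newest version of each block across the segments, can be sketched as follows (the tuple representation of segments is illustrative, not xFS's on-disk layout):

```python
# Sketch of LFS-style log-segment coalescing as done by the xFS cleaner.
# Each segment is a list of (block_id, version) pairs; primes ("1'", "1\"")
# denote successive versions of the same block.

def coalesce(segments):
    """Merge log segments (oldest first), keeping the newest version of each block."""
    latest = {}  # block id -> newest version seen so far
    for segment in segments:
        for block_id, version in segment:
            latest[block_id] = version  # later segments overwrite older versions
    return latest

seg1 = [(1, "1'"), (2, "2'"), (5, "5'"), (6, "6'")]
seg2 = [(1, '1"'), (2, '2"'), (4, "4'"), (3, "3'")]
seg3 = [(3, '3"')]

new_segment = coalesce([seg1, seg2, seg3])
# Live blocks after coalescing: 5', 6', 1", 2", 4', 3"
```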

Identify four main problems with a traditional centralized distributed file system (e.g., NFS as used in most academic institutions).

1) The file and its associated meta-data are collocated statically at the server.
2) The server CPU is overloaded by meta-data processing for simultaneous client requests to such "hot" files.
3) The server has no knowledge that a file is shared by multiple clients; as a consequence, the server does not manage the consistency of a file that is being simultaneously read/written by multiple clients.
4) The server memory (which serves as a cache for the data of files accessed at this server) is overloaded if "hot" files reside at this server.

In implementing GMS, what is the technical difficulty of obtaining age information for the pages that are backing the virtual memory of processes at a node?

1) Hardware does address translation on every memory access. 2) Therefore, the memory access pattern for the pages backing the virtual memory subsystem is invisible to GMS software.

Explain how xFS attempts to solve the four main problems with centralized distributed file systems.

1) Meta-data for a file can be dynamically placed at any node.
2) Meta-data processing is distributed due to the dissociation of the location of the file from its meta-data.
3) xFS implements file consistency, ensuring single-writer multiple-reader semantics at the block level for files.
4) xFS implements cooperative client caching of files, thus reducing the memory pressure on the server for caching the data.

There are potentially four control transfers during the execution of an RPC. Enumerate them and identify the ones that are in the critical path of the RPC.

1) Switching from the currently running client process to another process while the client waits.
*2) Switching into the server process that will handle the client request.
3) After the server procedure executes and the results are sent out, switching to a new process to handle other requests.
*4) When the results return, another process may be running on the client's CPU; the kernel must switch back to the original sender.
(* marks the control transfers on the critical path of the RPC.)

"AFS does not address confinement of resource usage by the clients." Explain this statement.

A client can flood the LAN with requests to the servers, making the system vulnerable to denial-of-service (DoS) attacks.

What is a "small file write problem" with the software RAID approach? How is this solved?

A small file is striped across multiple disks, resulting in high overhead for reads/writes to the small file. A log-structured file system avoids this problem by recording changes to files in a single "log" data structure owned by the file system and persisted by striping the log to the RAID.
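The batching idea can be sketched as follows; the `LogSegment` structure and striping helper are invented for illustration, not xFS's actual layout:

```python
# Sketch of how an LFS batches many small writes into one large contiguous
# log segment, which is then striped across the software RAID as a single
# big write instead of many tiny per-file writes.

class LogSegment:
    def __init__(self, size):
        self.size = size
        self.buffer = []          # (file, offset, data) records, in time order
        self.used = 0

    def append(self, file, offset, data):
        """Record a small write; return True when the segment is full."""
        self.buffer.append((file, offset, data))
        self.used += len(data)
        return self.used >= self.size

def stripe(segment, n_disks):
    """Stripe the filled segment across disks in fixed-size units."""
    flat = b"".join(d for _, _, d in segment.buffer)
    unit = max(1, len(flat) // n_disks)
    return [flat[i * unit:(i + 1) * unit] for i in range(n_disks)]

seg = LogSegment(12)
seg.append("f1", 0, b"aaaa")      # small writes to different files...
seg.append("f2", 0, b"bbbb")
full = seg.append("f3", 0, b"cccc")   # ...fill one contiguous segment
stripes = stripe(seg, 4)              # one large sequential RAID write
```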

At the beginning of each Epoch, what information is sent to the initiator node for an epoch by the rest of the nodes? (GLOBAL MEMORY)

Age information for all the local and global pages: bigger age means older

Assume 4 nodes (N1 through N4) each with 16 physical page frames. Initially none of the nodes have any valid pages in their physical memory. N1 and N2 each make a sequence of page accesses to 32 distinct pages all from the disk. N3 and N4 are idle. What is the state of the cluster (i.e. local and global caches at each node) at the end of the above set of accesses?

All of N1's and N2's memories contain only LOCAL pages.
All of N3's and N4's memories contain only GLOBAL pages.

What is the purpose of the Interface Definition Language (IDL) in software construction?

It allows expressing interfaces in a language-independent manner, facilitating third-party software development.

What is an overlay network? Give two examples of overlay networks: one at the network level and one at the application level.

An overlay network is a computer network built on top of another network. Nodes in the overlay can be thought of as being connected by virtual or logical links, each of which corresponds to a path, perhaps through many physical links, in the underlying network.
Network level: an IP network is an overlay on an Ethernet-connected LAN.
Application level: a CDN is an overlay over the IP network.

Explain false sharing in general with a code example.

Assume a cacheline in both P1's and P2's caches contains the variables v1 and v2.

P1's code:
lock(L1)
update(v1) repeatedly
unlock(L1)

P2's code:
lock(L2)
update(v2) repeatedly
unlock(L2)

• Code on P1 and P2 can be executed concurrently (they use distinct locks)
• But the cacheline on both P1 and P2 contains v1 and v2
• This cacheline will ping-pong between P1 and P2
• This is false sharing

What are the Cons of Lazy Release Consistency over Eager Release Consistency?

At the point of acquisition, all the coherence actions are not complete. Therefore, acquiring the lock incurs more latency compared to Eager consistency model.

Spring kernel allows extension of the operating system (i.e., addition of new subsystems and services) via the following method:

By using open language independent interfaces to enable third party software development

Andrew File System What is "ClientIdent" during an RPC session establishment?

ClientIdent will be the user's login "username", and the key for encryption will be the "password" associated with this username (the password is known to the user and, for every authorized user, to the system).

What values are calculated by the initiator node and returned to each node? (GLOBAL MEMORY)

• Compute MinAge: the minimum cutoff age above which the initiator estimates the M oldest pages will get replaced in the upcoming epoch.
• Compute Wi for each node i: weight Wi is the fraction of the M pages expected to be replaced at node i in the upcoming epoch.
• Pick the initiator node for the next epoch: the node i with MAX(Wi).
• Each node receives {MinAge, Wi for all nodes i}.
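The initiator's per-epoch computation can be sketched as follows (a minimal model; `compute_minage` and the age representation are assumptions, not GMS's actual data structures):

```python
# Sketch of the GMS initiator's epoch-start computation: given each node's
# page ages (bigger age = older), find the cutoff age (MinAge) for the M
# globally oldest pages and each node's share (weight Wi) of those M pages.

def compute_minage(ages_per_node, m):
    """ages_per_node: {node: [page ages]}. Returns (min_age, weights)."""
    tagged = [(age, node) for node, ages in ages_per_node.items() for age in ages]
    tagged.sort(reverse=True)            # oldest pages first
    oldest = tagged[:m]                  # the M oldest pages cluster-wide
    min_age = oldest[-1][0]              # cutoff age
    weights = {node: 0.0 for node in ages_per_node}
    for _, node in oldest:
        weights[node] += 1.0 / m         # node's fraction of the M replacements
    return min_age, weights

min_age, weights = compute_minage({"N1": [10, 2], "N2": [8, 1], "N3": [3]}, m=2)
next_initiator = max(weights, key=weights.get)   # node with MAX(Wi)
```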

Explain priority inversion.

Consider a high-priority task C1 calling a low-priority server task. It is a blocking call, so the server takes over and does C1's bidding. After the service time, C1 is ready to run again. A medium-priority task C2 becomes runnable and preempts the server, resulting in priority inversion with respect to C1.

Consider the following execution happening in Treadmarks. Recall that Treadmarks implements Lazy Release Consistency. Assume initially that both processors P1 and P2 have copies of the pages X, Y, and Z.

P1:
Lock (L)
Step 1: Updates to page X;
Step 2: Updates to page Y;
Step 3: Updates to page Z;
Unlock (L)

Subsequently, processor P2 executes the following code:

P2:
Lock (L)
Step 4: Read page X;
Unlock (L)

Explain what happens on P1 at Step 1.
Explain what happens on P2 at Step 4.

P1 at Step 1:
• Create a twin for X; call it X'.
• Write updates to X (the original, not the twin).
P2 at Step 4:
• At the point of lock acquisition, P2 invalidates pages X, Y, and Z.
• At Step 4, P2 goes to P1 and obtains diff(X), created by P1 at the point of unlock(L).
• P2 applies diff(X) to its copy of X, makes X valid, and completes the read operation.
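The twin/diff mechanics can be illustrated with dictionaries standing in for pages; this is a toy model, not Treadmarks' actual page-granularity implementation:

```python
# Toy model of Treadmarks' twin/diff mechanism: twin created at first write,
# diff computed at release, diff fetched and applied by the reader after
# lock acquisition has invalidated the page.

def make_twin(page):
    return dict(page)                      # snapshot of the page before writes

def make_diff(twin, page):
    """Words that changed relative to the twin."""
    return {k: v for k, v in page.items() if twin.get(k) != v}

def apply_diff(page, diff):
    page.update(diff)

# P1: write inside the critical section
x_p1 = {"a": 0, "b": 0}
twin = make_twin(x_p1)                     # created on first write to X
x_p1["a"] = 42                             # update the original, not the twin
diff_x = make_diff(twin, x_p1)             # computed at unlock(L)

# P2: page X was invalidated at lock(L); fetch and apply the diff on access
x_p2 = {"a": 0, "b": 0}
apply_diff(x_p2, diff_x)                   # X is valid again; read proceeds
```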

Recall that in Spring kernel, every domain has a "door table", which is a small table in which each entry is a small integer (called door identifier) akin to a file descriptor associated with an open file in a Unix process. Explain the role of the door table in the Spring kernel.

Door table is private to each domain and the index into the table (door-id) is a small integer (similar to a file descriptor in a Unix process). Each entry in the door table has a pointer to a particular door data structure in the nucleus, which contains an entry point to a specific target domain procedure for fast cross-domain calls.

Thekkath and Levy suggest using a shared descriptor between the client stub and the kernel to avoid the data copying overhead during marshaling the arguments of an RPC call. Explain how this works.

Each element of the shared descriptor contains <start address, number of contiguous bytes>. The client stub fills up the descriptor at the point of call: <&arg1, length>, <&arg2, length>, ... The kernel assembles the arguments into the kernel buffer without an extra copy by the client stub.
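The gather idea can be sketched as follows; the names (`fill_descriptor`, `kernel_gather`) and the bytearray standing in for the client's address space are invented for illustration:

```python
# Sketch of the shared-descriptor optimization from Thekkath and Levy:
# the client stub records <start, length> pairs in place of copying, and
# the kernel gathers the arguments straight into its buffer in one pass.

memory = bytearray(b"hello world!")        # stand-in for the client's address space

def fill_descriptor(args):
    """Client stub: record (start address, contiguous length) per argument."""
    return [(start, length) for start, length in args]

def kernel_gather(descriptor):
    """Kernel: assemble arguments into the kernel buffer, no stub-side copy."""
    return b"".join(bytes(memory[s:s + n]) for s, n in descriptor)

desc = fill_descriptor([(0, 5), (6, 5)])   # <&arg1, 5>, <&arg2, 5>
kernel_buffer = kernel_gather(desc)
```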

Explain how the "put" algorithm of Coral works:

Every node has two attributes:
• full: holds the maximum of l values for key k
• loaded: has reached the maximum request rate (beta) for key k
Two-phase put:
• Forward phase
  o The putter sends RPCs to nodes en route to node n (~= k) using the key-based routing of Coral.
  o The put stops at a "full" or "loaded" node.
• Reverse phase
  o Some nodes that were NOT full or loaded during the forward phase may have since become loaded and/or full.
  o Pick the node closest to the key as the destination for the put; this avoids meta-data server overload and tree saturation.

True or False AFS is vulnerable to "man in the middle replay attack", wherein a malicious user captures all the packets from an individual client-server interaction, does pattern matching of the client packet against the captured packets, and responds with the captured and stored server's response packet to making the client believe that it has gotten a response from the genuine server.

False.
The premise is that the random numbers Xr and Yr used for the two-way authentication may wrap around at some point, resulting in a packet whose contents are EXACTLY the same as something that crossed the wire before. Let's say the size of the packet is 256 bits including all the fields (note that the layout of the fields in the packet is unknown to the "man in the middle"). In the limit, the "man in the middle" would have to store 2^256 bit combinations (of all the packets observed on the wire) and pattern-match a new packet on the wire against each of these bit combinations. This is exponential in computational complexity and hence computationally infeasible.

In a centralized file system, the server performs the functions of managing the data blocks, metadata for the files, the server-side file cache, and the consistency of data blocks of files cached by multiple clients. The following questions are with respect to how these functions are carried out in xFS. (TRUE OR FALSE)
1) Meta-data for files are located on the same node as the data.
2) A file is contained entirely on a single disk in the entire system.
3) The small file write problem is solved in xFS.
4) The in-memory cache for a file resides at the same node as the disk copy.

1) False. To load-balance metadata management, xFS decouples the location of metadata for a file from the location of its content.
2) False. It uses software RAID to stripe a file across a number of disks as determined by the stripe group.
3) True. It uses a log-structured file system to overcome the small write problem.
4) False. xFS uses cooperative caching, meaning that the file may be cached at a client node different from the server that hosts the file on its disk.

In implementing GMS, what is the technical difficulty of obtaining age information for the pages at a node?

• Hardware does address translation on every memory access.
• Therefore, the memory access pattern for the pages backing the virtual memory subsystem is invisible to GMS software.
• Thus obtaining the age information for such pages (referred to as anonymous pages) is difficult.
• GMS periodically runs a daemon as part of the VM subsystem to dump the TLB information to keep track of "approximate" age information for such pages.

(choose ONE false statement from the following) The sources of problems in computer systems that lead to unreliability include:

I. Power failure resulting in loss of volatile state in physical memory
II. Software crash due to bugs in the code
III. Lack of semantics that allows inferring the state of the system prior to the crash
**IV. Not implementing transactional semantics at the operating system level

Choose the ONE false statement that does not apply for remote objects in Java:

I. References to objects can be passed as args/results
II. Java built-in operators are available for use with them
**III. Parameter passing semantics are the same for remote objects as for local objects

Choose the ONE false statement that does not apply for object-oriented programming

I. They provide strong interfaces allowing 3rd party software development
II. They promote reuse of software components
**III. They always lend themselves to superior implementation (i.e., performance) compared to implementation of a subsystem in a procedural or imperative style

Interface Definition Language (IDL) serves the following purpose:

It allows expressing a subsystem interface in a language independent manner

Does the use of "transaction" in LRVM ensure that changes made to persistent data structures survive machine failures and/or software crashes? Explain why or why not.

It does not. The intent of LRVM is to provide a consistent point for server recovery upon crashes. Persistence for data structures modified between begin-end transaction is guaranteed only if the transaction has successfully flushed the in-memory redo logs to the disk and written a commit record. The consistent point for resuming the server is derived by looking at the last commit record that exists on the disk.

Subcontract in Spring Kernel is a mechanism that serves the following purpose:

It makes client/server interactions location transparent

How does log structured file system solve the "small file write" problem?

LFS increases the aggregate write size by aggregating writes to different files into a time-ordered contiguous log segment, converting random writes into sequential writes.

How are the values returned by the initiator used by each node during the epoch?

Let y be the page chosen for eviction at node i:
• If Age(y) > MinAge, discard page y.
• If Age(y) < MinAge, send y to a peer node j.
  o Choose node j using the weight information: the probability of choosing node j is proportional to Wj.
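The per-eviction decision can be sketched as follows (the function name and probability mechanics are illustrative, not GMS's actual implementation):

```python
# Sketch of a node's local eviction decision in GMS using the values the
# initiator returned: discard pages older than MinAge (they are among the
# M oldest cluster-wide); otherwise forward the page to a peer chosen with
# probability proportional to its weight Wj.

import random

def evict(page_age, min_age, weights, rng=random):
    if page_age > min_age:
        return "discard"                   # page is among the M oldest
    nodes = list(weights)
    probs = [weights[n] for n in nodes]
    return rng.choices(nodes, weights=probs, k=1)[0]   # target peer node

# An old page is simply dropped; a young page is forwarded to a peer.
choice_old = evict(10, 5, {"N1": 0.5, "N2": 0.5})
choice_young = evict(3, 5, {"N2": 1.0})    # only N2 has nonzero weight
```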

The security issues and the techniques discussed in AFS are relevant to this day because......

● Many applications that run on mobile devices (e.g., cellphones) rely on cloud storage, and the two-way authentication between the devices and the cloud is similar to the setup in AFS between clients and servers.
● The network connection in the Internet is insecure, as is the assumption in AFS, so we need encryption to transfer data securely.
● For efficient encryption, we still need a private-key cryptographic system.
● For prevention of replay attacks, ephemeral IDs and keys are established during session establishment and used for communication.

How does the PTS computational model (with PTS threads and channels) differ from a Unix distributed program written with sockets and processes?

• Many-to-many connections are allowed in PTS.
• Time is a first-class entity managed by PTS.
• Data in channels can be persisted on demand by the application in PTS.

Map-reduce is a simple intuitive programming model for giant-scale applications. Enumerate the steps taken under the cover by the runtime during the map phase of the computation. Enumerate the steps taken under the cover by the runtime during the reduce phase of the computation.

Map phase:
● Mappers are assigned specific splits of the input.
● They read the split from the local disk, parse it, and process it using the user-defined map function.
● Intermediate output is buffered and written to the local disk periodically.
● Mappers inform the master after finishing the write to local disk.
Reduce phase:
● Reducers do a remote read of the intermediate output stored on the mapper nodes.
● They sort the intermediate data from the mappers to obtain the input to the reduce function, and call the user-defined reduce function.
● Each reducer writes to the final output file corresponding to the partition/key that it is dealing with.
● The reducer informs the master after writing the output.
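The phases above can be sketched with a toy word count, where `user_map` and `user_reduce` stand in for the user-defined functions (disks, the master, and remote reads are elided):

```python
# Toy sketch of the map/shuffle/reduce phases: per-mapper intermediate
# output, a shuffle that groups values by key (the reducers' remote read
# plus sort), and the user-defined reduce applied per key.

from collections import defaultdict

def user_map(split):
    return [(word, 1) for word in split.split()]

def user_reduce(key, values):
    return (key, sum(values))

splits = ["a b a", "b b c"]                     # input splits assigned to mappers
intermediate = [user_map(s) for s in splits]    # map phase, per-mapper output

shuffle = defaultdict(list)                     # group values by key
for output in intermediate:
    for key, value in output:
        shuffle[key].append(value)

final = dict(user_reduce(k, v) for k, v in sorted(shuffle.items()))
```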

What are the advantages of decoupling the location of data and meta-data associated with files in a distributed file system?

Meta data management for "hot files" distributed Caching and serving the files to clients of "hot files" distributed

If k is the key and n is the node id, a traditional DHT would try to store k at a location n, where n is equal to k. Explain why Sloppy DHT of Coral Does not do that:

• The meta-data server can become overloaded if too many keys (k) map to the same node (n).
• Nodes en route to node n would also experience too much traffic due to put/get operations trying to reach the overloaded meta-data server.
• Coral avoids both these problems by using k = n as a hint, not an absolute, in get/put operations.

Why does a modular approach work for designing large-scale VLSI circuits but breaks down for designing large-scale software systems?

Modularity in software implies layering, which can lead to inefficiencies depending on the interfaces between the layers. Building modular VLSI circuits is akin to assembling Lego blocks: the building blocks are electrically connected in a VLSI circuit and current passes freely among the components, with no interfacing overhead as in software components.

In a page-based DSM system, why does it make sense to implement a multi-writer protocol a la Treadmarks, wherein multiple nodes may simultaneously modify the same page?

• Multiple independent shared data structures (governed by distinct locks) may fit within a single page.
• A single-writer cache coherence protocol (at page granularity) would serialize modifications to these independent data structures due to "false sharing".

Consider the following scenario: N3 acquires a lock L; at the point of acquire, N3 receives a notice to invalidate page P; upon access to P inside the critical section, N3 gets the pristine copy of P, and obtains diffs for the page Pd1, and Pd2 for the page P from previous lock holders N1 and N2, respectively. How does N3 create a current version of the page P?

N3 gets a pristine copy of P. It applies the diffs (Pd1 and Pd2) to P IN THE ORDER OF THE PRIOR LOCK acquisitions to create an up-to-date copy.

How does parameter passing differ between Java RMI and local object invocation?

Objects appearing as parameters are passed by value in Java RMI (a serialized copy is sent), whereas local object invocation passes object references.

Give the Pro and Con of each of the following types of timers: Periodic One-shot Soft Firm

Periodic — Pro: periodicity; Con: event recognition latency
One-shot — Pro: timely; Con: overhead
Soft — Pro: reduced overhead; Con: polling overhead, latency
Firm — Pro: combines all of the above; Con: none

List the pros and cons of the "Active Network" vision.

Pros:
• Flexibility
• Potential for reducing network traffic (e.g., using multicast/broadcast)
Cons:
• Protection threats
• Resource management threats

What are the goals of the "firm timer" design in TS Linux?

• Provide accurate timers for applications that need finer precision (e.g., multimedia applications) by using one-shot timers.
• Avoid the performance penalty of one-shot timer interrupts where possible by providing an overshoot window that allows executing the handler for the one-shot timer (without an interrupt) at a close enough preceding periodic timer interrupt or system call.

DQ Principle: If D is data per query and Q is the number of queries per second, then the product D*Q tends towards a constant for a fully utilized system. How can the DQ principle be used to come up with policies for graceful degradation of the system under excess load?

Q relates to yield: the number of successful queries processed. D relates to harvest: the fraction of the complete data served per query.
Graceful degradation policy:
• Admission control knob: reduce Q but keep D the same.
• Fidelity control knob: reduce D to allow increasing Q.
These two knobs can be individually adjusted under the constraint that D*Q is a constant.
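The trade-off between the two knobs can be shown numerically under an assumed fixed capacity (DQ = 1000 is an arbitrary illustrative value):

```python
# Numeric sketch of the DQ constraint: with D*Q pinned at the system's
# capacity, the admission-control and fidelity-control knobs trade off
# directly against each other.

DQ = 1000.0                                  # assumed fixed capacity

def max_yield(d):
    """Admission control view: queries/sec sustainable at harvest D."""
    return DQ / d

def max_harvest(q):
    """Fidelity control view: data per query sustainable at yield Q."""
    return DQ / q

# Full harvest (D = 10) supports Q = 100 queries/sec; halving harvest
# to D = 5 doubles the sustainable yield to Q = 200.
q_full = max_yield(10)
q_degraded = max_yield(5)
```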

Name one important virtue of object-oriented programming.

Re-use via inheritance, Modularity, containment of bugs, ease of evolution

(Argue for or against this statement) Recovery management in QuickSilver via transactions does not add significant overhead to the normal client/server operation.

Recovery management in QuickSilver via transactions does not add significant overhead, because:
• There is no overhead due to the communication between the transaction managers, since all of it is piggy-backed on the regular IPC communication that has to happen anyway in the distributed system.
• Transactions are used purely for recovery management, so the semantics are very simple and no concurrency control is needed.
• The graph structure created with the transaction ownerships reduces network communication: all the transaction managers do not have to communicate with the coordinator.
• Quicksilver provides mechanisms while leaving the policies to the services that use it; thus, simple services can choose low-overhead mechanisms.

Rio Vista implements the LRVM semantics. Yet it does not have "redo" logs as in the original LRVM system. Explain why.

Redo logs are necessary if the transaction has committed but the data segments have NOT yet been updated with the changes present in the redo logs. Rio Vista is built on top of Rio, which is a battery-backed file cache. The "data segments" of LRVM are persistent by construction since they are in the file cache. Therefore, there are no redo logs. Rio Vista writes an undo log, which is also persistent by construction. If the system crashes and recovers, and if there are undo logs present in the file cache, they are applied to the data segments to restore the state to "before" the transaction.

What is SK? What is its purpose?

SK is new "session key" generated by the server for the new RPC session that the client has requested. The new RPC session will use SK as ClientIdent. • Since ClientIdent has to be sent in plaintext, generating a new SK for each RPC session ensures that the username or the secrettoken is not over-exposed on the wire.

Given the following LRVM code: begin_xact (tid, mode = restore); (1) set_range (tid, base-addr, #bytes); (2) write-metadata m1; //normal writes to m1 contained in range (3) write-metadata m2; //normal writes to m2 contained in range (4) end_xact(tid, mode = no-flush); (5) State LRVM action at each of the 5 steps.

Step 1: Indicates the beginning of a transaction on the external data segment. Restore mode indicates that an undo log is needed.
Step 2: Creates an undo record for the specified range.
Step 3: Regular write to the memory location. No LRVM involvement.
Step 4: Regular write to the memory location. No LRVM involvement.
Step 5: Writes the redo log and discards the undo record. Does not wait for the file system flush operation before returning.

In a software DSM system implemented over a cluster of machines with no physical sharing of memory across the nodes of the cluster, what functionality does the "global virtual memory" abstraction provide to the application programmer?

The "global virtual memory" abstraction represents the entire cluster as a globally shared virtual memory to the programmer. Underneath, DSM software is partitioning this global address space into chunks that are managed individually on different nodes of the cluster. This abstraction gives address equivalence. Accessing a memory location X from a program means the same thing as accessing the memory location X from the processor.

Consider a distributed shared memory machine. On each processor, read/write to memory is atomic, but there is no guarantee on the interleaving of reads/writes across processors. A parallel program is running on processors P1 and P2.

Intent of the programmer:
P1 - Modify Struct(A)
P2 - Wait for modification; use Struct(A)

The pseudo-code that the programmer has written to achieve this intent:

flag = 0; // initialization
P1:
  mod(A);
  flag = 1;
P2:
  while (flag == 0); // spinwait
  use(A);
  flag = 0;

Explain why the above code may not achieve the programmer's intent. What guarantee is needed from the memory system to make this code achieve the programmer's intent?

The above code is not guaranteed to work.
Reason: reads/writes from P1 happen in textual order, but since there is no guarantee on the order in which writes from a given processor become visible to the other, P1's write to flag may become visible to P2 before the modifications to A by P1 have become visible to P2. This would violate the intent of the programmer.
The guarantee needed is sequential consistency:
• Reads/writes on each processor respect the program order.
• Reads/writes from the different processors are interleaved consistent with the program order on the individual processors.
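For contrast, here is a sketch of the same intent expressed with an explicit synchronization primitive (Python's `threading.Event` as a stand-in), which supplies the ordering guarantee the bare flag lacks:

```python
# Sketch: the producer/consumer intent rewritten with threading.Event.
# Event.set()/wait() establish a happens-before edge, so the consumer is
# guaranteed to see the modification to A once the "flag" is observed.

import threading

A = {"value": 0}
done = threading.Event()

def p1():
    A["value"] = 42            # mod(A)
    done.set()                 # flag = 1, with the needed ordering guarantee

def p2(result):
    done.wait()                # while (flag == 0); spinwait, without the spin
    result.append(A["value"])  # use(A): guaranteed to see the update

result = []
t1 = threading.Thread(target=p1)
t2 = threading.Thread(target=p2, args=(result,))
t2.start(); t1.start()
t1.join(); t2.join()
```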

"Coral CDN avoids tree saturation." Explain this statement with concise bullets.

The intent in Coral, as in a traditional DHT, is to store the key K at node N, where K ~= N. However, if that node is loaded or full, then an earlier node on the path from the source to the destination that is NOT loaded or full is chosen for keeping the meta-data associated with a particular content.

Both Quicksilver and LRVM are using the idea of transaction to provide recoverability of persistent data. Enumerate two meaningful differences between the approaches used in the two systems.

• The log maintenance in LRVM is per process, while the log maintenance in Quicksilver is for all the processes running on any particular node.
• The recovery management in Quicksilver is for a transaction that spans multiple nodes of the distributed system, while in LRVM it is only for a given node. Quicksilver works across the entire distributed system, NOT just a single node as in LRVM.

How does the MapReduce runtime deal with redundant Map and Reduce workers that could potentially be working on the same "split" of the data set for the map and reduce operations, respectively?

• The master node knows the mapper nodes; if some are assigned duplicate splits, the result of the mapper that finishes first is used and the rest are ignored.
• The master node knows the reducer nodes; the master is responsible for "renaming" the local file produced by a reducer to the final "named" output file. The master does the renaming when the first of the redundant reducers finishes, and then ignores the rest.

Let us assume file f1 is a popular file accessed by several clients. How is the node that hosts the file in its disk prevented from being overloaded by requests for this file in xFS?

• The meta-data manager has a record of all the clients that have a copy of the file f1.
• A request is routed to one of the client nodes that has a copy of f1, thus reducing the load on the host that has the disk copy of f1.

What happens when a client wishes to write to a file that is currently actively shared for reading by several peer clients in xFS?

• The meta-data manager for the file invalidates the read-only copies (if any) of the file from the client caches.
• Upon receiving the acks from all the clients that have read copies, the manager grants the requesting node permission to write to the file.
• The granularity for read/write is at the block level (not the entire file).

In AFS, "clientident" has to be always sent in plaintext. Why?

The security infrastructure of AFS uses private-key encryption. In this system, the server has to be able to find the right key to decrypt a client message. ClientIdent helps the server find the right key; hence the ClientIdent is sent in plaintext.

What are the pros of Lazy Release Consistency over Eager Release Consistency?

• There is an opportunity to overlap computation with communication in the window of time between the release of a lock and the acquisition of the same lock.
• There is only point-to-point communication in Lazy RC, unlike the broadcast in Eager RC. Thus, the lazy consistency model has fewer communication events than Eager RC.

Give two bullet points (each for "Think Global" and "Act Locally") that illustrate the maxim "Think Global but Act Locally" in the way GMS calculates and implements the minAge algorithm.

Think Global:
1) The intent is to ensure that the pages replaced in every epoch are the M oldest pages in the entire system.
2) To meet this intent, compute centrally (at the initiator, chosen as the node with the maximum page replacements in the previous epoch), ONCE per epoch: (a) minAge, by getting the age info of all pages at EACH node, and (b) the fraction (represented by weight Wi) of the M pages EXPECTED TO BE REPLACED at each node; then disseminate this info to the nodes.
Act Locally:
1) Each node LOCALLY determines whether an evicted page has to be kept or discarded (based on: age of page < minAge => keep).
2) Each node LOCALLY picks a node to send the evicted page to, based on the expected number of replacements at that target node in the current epoch.

Define the following terms (use figures to help get your point across) (i) "timer latency" (ii) "preemption latency" (iii) "scheduling latency"

• Timer latency: the distance between event occurrence and the timer interrupt, due to the granularity of the timing mechanism.
• Preemption latency: the distance between the timer interrupt and the opportunity to schedule the event, due to activity on the CPU being non-preemptible (e.g., the kernel is in the middle of an interrupt handler).
• Scheduling latency: the distance between when the event becomes schedulable and when it actually gets scheduled, due to higher-priority applications already in the CPU scheduling queue.

What is the purpose of the "subcontract" mechanism in Spring?

To simplify marshaling/unmarshaling by the client/server and allow dynamic location/relocation of servers in a distributed setting.

Consider the following execution happening in Treadmarks. Recall that Treadmarks implements Lazy Release Consistency. Assume initially that ALL processors have copies of the pages X, Y, and Z.

P1:
Lock (L)
Step 1: Updates page X;
Step 2: Updates page Y;
Step 3: Updates page Z;
Unlock (L)

Subsequently, processor P2 executes the following code:

P2:
Lock (L)
Step 4: Updates page X;
Unlock (L)

Subsequently, processor P3 executes the following code:

P3:
Lock (L)
Step 5: Reads page X;

Explain what happens on P1 at Step 1.
Explain what happens on P2 at Step 4.
Explain what happens on P3 at Step 5.

P1 at Step 1:
• The Treadmarks runtime creates a twin for X.
• Updates are written to X (the original).
P2 at Step 4:
• Since previous actions with this lock L (by P1) modified pages X, Y, and Z, the Treadmarks runtime invalidates pages X, Y, and Z at the point of lock acquisition by P2.
• At Step 4, the Treadmarks runtime obtains the diff for X from P1 and applies it to the copy of X already available at P2, making the page valid again (call it the modified original).
• The Treadmarks runtime then creates a twin for X, and updates are written to X (the modified original).
P3 at Step 5:
• Since previous actions with this lock L (by P1 and P2) modified pages X, Y, and Z, the Treadmarks runtime invalidates pages X, Y, and Z at the point of lock acquisition by P3.
• At Step 5, the Treadmarks runtime goes to P1 and P2, obtains the diffs for page X available at each of these nodes, applies them to the copy of X already available at P3, and makes the page valid again to allow reads by P3.

How does AFS avoid replay attacks?

Upon being contacted by a client, the server challenges the client by sending a random number encrypted with a key known only to the client. A replay attacker will not be able to successfully decrypt the message to retrieve the random number.
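The challenge-response idea can be sketched as follows. This is a toy model: XOR stands in for real private-key encryption, and the names are illustrative, not the AFS wire format:

```python
# Toy challenge-response sketch; XOR is a stand-in for real encryption.
import secrets

def xor_crypt(data, key):
    # XOR is its own inverse, so the same call encrypts and decrypts
    return bytes(b ^ k for b, k in zip(data, key * (len(data) // len(key) + 1)))

client_key = secrets.token_bytes(16)    # known only to client and server

# server challenges with a fresh random number encrypted under the key
challenge = secrets.token_bytes(8)
wire = xor_crypt(challenge, client_key)

# the genuine client can decrypt and echo the challenge back
response = xor_crypt(wire, client_key)
assert response == challenge

# a replayer without client_key cannot recover the challenge from `wire`,
# and because the challenge is fresh each time, replaying an old response fails
```

Freshness of the random number is the key property: even a captured old response is useless against a new challenge.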

What is the Spring kernel approach to "extensibility"?

Use of a microkernel based design, extensibility via servers above the microkernel, and subcontract mechanism for location transparency.

In AFS, how is over-exposure of client identity and passwords avoided?

Username and password are used only once per login session. The SecretToken is used as the ClientIdent only for the duration of one login session. The handshake key extracted from the ClearToken is used only once per establishment of an RPC session. Each new RPC session uses a new session key (SK) for encryption.

In a client-server system, typically a server may be contacted by clients with different scheduling priorities. This leads to the potential for priority inversion. How can such priority inversion be averted?

Using the Highest Locking Priority (HLP) protocol (paper terminology):
● When a task acquires a resource, it gets the highest priority of any task that can acquire this resource.

or, using priority-based scheduling (lecture terminology):
● When a higher-priority process calls a lower-priority process, the higher-priority process gives its priority to the lower one.
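The HLP (ceiling) rule can be sketched as a small model. Names and priority values are illustrative (higher number = higher priority); the point is that a low-priority server holding a shared resource cannot be preempted by a medium-priority task while serving a high-priority client:

```python
# Illustrative model of the Highest Locking Priority (ceiling) rule.

class Resource:
    def __init__(self, ceiling):
        # ceiling = highest priority of any task that can acquire this resource
        self.ceiling = ceiling

class Task:
    def __init__(self, base_prio):
        self.base = base_prio
        self.active = base_prio

    def acquire(self, res):
        # on acquisition, run at the resource's priority ceiling
        self.active = max(self.active, res.ceiling)

    def release(self, res):
        self.active = self.base

server = Task(base_prio=1)      # low-priority server task
lock = Resource(ceiling=9)      # a priority-9 client may contact the server

server.acquire(lock)
assert server.active == 9       # medium-priority tasks (prio < 9) cannot preempt
server.release(lock)
assert server.active == 1       # back to base priority after release
```

Raising the holder to the ceiling bounds the blocking time of the high-priority client, which is exactly what averts priority inversion.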

Distinguish between web proxies and content distribution networks.

Web proxies "pull" content and store it locally to serve their clients. With CDNs, the content is "pushed" from the origin servers to the CDN mirrors.

In the Andrew File System, Xr is a random number generated by the client. What purpose does this serve?

When the client receives the first message from the server, E[(Xr+1, Yr), HKS], it will check the Xr+1 field. Note that Xr was generated by the client; only if the server was able to decipher the message will the Xr+1 field have the right value. A replay attack will not produce the right value for this field.

In the Andrew File System, Yr is a random number generated by the server. What purpose does this serve?

When the server receives the second message from the client, E[Yr+1, HKC], it will check the Yr+1 field. Note that Yr was generated by the server; only if the client was able to decipher the message will the Yr+1 field have the right value. A replay attack will not produce the right value for this field.
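The two checks are symmetric, and together they give mutual authentication. A minimal sketch of the verification logic (toy integer nonces; the function names are hypothetical and the real messages are encrypted):

```python
# Sketch of the nonce-increment checks in the AFS-style handshake.

def client_verify(reply_xr_field, xr):
    # genuine server proved it deciphered message 1 by returning Xr+1
    return reply_xr_field == xr + 1

def server_verify(reply_yr_field, yr):
    # genuine client proved it deciphered message 2 by returning Yr+1
    return reply_yr_field == yr + 1

xr, yr = 41, 100
assert client_verify(xr + 1, xr)        # fresh, correctly deciphered reply
assert not client_verify(17, xr)        # replayed message carries a stale value
assert server_verify(yr + 1, yr)
assert not server_verify(55, yr)
```

Because Xr and Yr are fresh per handshake, a recorded old reply can never carry the right incremented value for the current session.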

List the tradeoffs between replication and partitioning in architecting the data repositories of giant scale services.

With replication we can have complete harvest despite failures, while with partitioning we cannot have complete harvest in the presence of failures. Since replication gives more control over harvest, it gives the system administrator more choice (full harvest for some types of queries, partial harvest for others) when DQ goes down due to failures. Beyond a point, replication is therefore preferred for building scalable servers.
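The tradeoff can be made concrete with the idealized DQ model: with n nodes and k failures, a fully partitioned store loses data (harvest drops) while queries still complete, whereas a fully replicated store keeps harvest complete but loses capacity. The fractions below are the textbook model, not measurements:

```python
# Idealized harvest/capacity under k failures out of n nodes.

def partitioned(n, k):
    # each node holds a distinct 1/n of the data; queries still complete
    return {"harvest": (n - k) / n, "capacity": 1.0}

def replicated(n, k):
    # every node holds all the data; surviving nodes answer fully
    return {"harvest": 1.0, "capacity": (n - k) / n}

assert partitioned(10, 3) == {"harvest": 0.7, "capacity": 1.0}
assert replicated(10, 3) == {"harvest": 1.0, "capacity": 0.7}
```

Either way the DQ product drops by the same factor; what replication buys is the choice of serving every query with full harvest at reduced throughput.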

What information needs to be kept in the kernel corresponding to entries in the door table? You can show the data structure of a door descriptor to succinctly show this information.

struct door_desc {
    list_type who_has_access;   // list of domains that have access to this door;
                                // the nucleus uses this to validate a cross-domain
                                // call emanating from a domain via the door table
    domain_ptr entry_point;     // pointer to the entry point procedure in the target domain
};
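How the nucleus would use these two fields can be modeled in a few lines. This is an illustrative sketch (the names `Door` and `door_call` are hypothetical), not the Spring implementation:

```python
# Illustrative model of a door-table entry and the nucleus-side access check.

class Door:
    def __init__(self, who_has_access, entry_point):
        self.who_has_access = set(who_has_access)  # domains allowed to call
        self.entry_point = entry_point             # target-domain procedure

def door_call(door, caller_domain, arg):
    # the nucleus validates the caller before transferring control
    if caller_domain not in door.who_has_access:
        raise PermissionError("domain lacks access to this door")
    return door.entry_point(arg)

d = Door({"client_A"}, entry_point=lambda x: x + 1)
assert door_call(d, "client_A", 41) == 42

try:
    door_call(d, "intruder", 0)
    raised = False
except PermissionError:
    raised = True
assert raised
```

The access list is what makes a door a protected entry point rather than an ordinary function pointer: only the nucleus can follow `entry_point`, and only after the check passes.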

Mention four important design principles that Saltzer advocates for building a secure information system in his classic paper which hold true to this day and age.

● Economy of mechanism (keep the design simple and easy to verify)
● Fail-safe defaults (deny by default; access must be explicitly allowed)
● Complete mediation (check every access; e.g., no password caching)
● Open design (protect the keys, but publish the design)
● Separation of privilege
● Least privilege
● Least common mechanism
● Psychological acceptability

Compare and contrast the channel abstraction of PTS with Unix Socket.

● Similarity
○ PTS channel names are globally unique
○ PTS channels can be located and accessed from anywhere
● Difference
○ Items in PTS channels are temporally indexed
○ A PTS channel supports many-to-many connections, as opposed to the one-to-one connection of Unix sockets
○ A PTS channel allows stream synchronization and persistence
○ PTS channels are automatically garbage collected or persisted on permanent storage
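The temporally indexed, many-to-many nature of a channel can be sketched as below. This is an illustrative API only (the real PTS get/put interface, persistence, and garbage collection are not modeled):

```python
# Illustrative model of a temporally indexed, many-to-many channel.
import bisect

class Channel:
    def __init__(self):
        self.times, self.items = [], []

    def put(self, ts, item):
        # any producer may put; items are kept ordered by timestamp
        i = bisect.bisect(self.times, ts)
        self.times.insert(i, ts)
        self.items.insert(i, item)

    def get(self, ts):
        # any consumer may get; returns the newest item with time <= ts
        i = bisect.bisect_right(self.times, ts) - 1
        return self.items[i] if i >= 0 else None

ch = Channel()
ch.put(10, "frame-A")
ch.put(20, "frame-B")
assert ch.get(15) == "frame-A"   # time-based retrieval, not FIFO order
assert ch.get(25) == "frame-B"
```

Contrast with a Unix socket: there, bytes arrive in stream order to exactly one peer, while here any number of producers and consumers address items by time.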

Explain the role played by the subcontract mechanism in the Spring kernel.

● Simplifies client-side stub generation (marshal, unmarshal, invoke)
● Simplifies server-side stub generation (marshal, unmarshal, create, revoke, process incoming calls, shutdown)
● Hides the details of the runtime behavior
● Allows dynamically changing the runtime behavior of the server (singleton, replicated, etc.)
