Quiz 2


In an MPI program with 9 processes, what is the smallest rank that any process will have?

0. Ranks go from 0 to n-1.

When running an MPI program with 9 processes that call MPI_Gather using the default communicator, how many processes will receive the data?

1. Only the destination process.

When running an MPI program with 9 processes that call MPI_Scatter using the default communicator where the source process scatters an array of 9 elements in total, how many elements does each destination process receive?

1. Scatter distributes the elements as evenly as possible: 9 elements over 9 processes is 1 element each.
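
A minimal sketch of this case (a complete toy program with made-up buffer names), run with 9 processes:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
      MPI_Init(NULL, NULL);
      int rank, size;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);               /* 9 when run with 9 processes */

      int data[9] = {10, 11, 12, 13, 14, 15, 16, 17, 18}; /* only used on the root */
      int chunk;                                          /* 9 elements / 9 processes = 1 each */
      MPI_Scatter(data, 1, MPI_INT, &chunk, 1, MPI_INT, 0, MPI_COMM_WORLD);

      printf("rank %d received element %d\n", rank, chunk);
      MPI_Finalize();
      return 0;
    }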

When running an MPI program with 9 processes that call MPI_Gather using the default communicator where each source process sends an array of 8 elements, how many elements does the destination process receive?

72. The 9 processes each send 8 elements, and MPI_Gather collects all of them at the destination: 9 x 8 = 72.
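
A corresponding sketch for this gather (a fragment assumed to sit between MPI_Init and MPI_Finalize in an SPMD program run with 9 processes; buffer names are made up):

    double local[8];                      /* each of the 9 processes fills 8 elements    */
    double all[72];                       /* 9 * 8 = 72; only used on the root (rank 0)  */
    MPI_Gather(local, 8, MPI_DOUBLE,      /* every process sends 8 elements              */
               all, 8, MPI_DOUBLE,        /* recvcount is per sender, not the total      */
               0, MPI_COMM_WORLD);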

In an MPI program with 9 processes, what is the largest rank that any process will have?

8. Ranks go from 0 to n-1, so with 9 processes the largest rank is 8.

When running an MPI program with 9 processes that call MPI_Bcast using the default communicator where the source process sends an array of 9 elements, how many elements does each destination process receive?

9. All processes receive the entire array (including the source process itself, which already holds the data).

When running an MPI program with 9 processes that call MPI_Reduce using the default communicator where each source process sends an array of 9 elements, how many elements does the destination process receive?

9. The reduction is element-wise, so the destination process receives a "reduced" array of 9 elements after each process in the reduction tree has combined its contribution and passed it on toward the root.

When running an MPI program with 9 processes that call MPI_Scatter using the default communicator, how many processes will receive a chunk of the data?

9. It distributes chunks of data to all processes.

MPI_Send performs many-to-one communication (T/F)

False. It performs point-to-point (one-to-one) communication.

MPI_Send performs one-to-many communication. (T/F)

False. It performs point-to-point (one-to-one) communication.
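
A point-to-point sketch (fragment; assumes rank has been set with MPI_Comm_rank and at least 2 processes; tag 0 chosen arbitrarily): exactly one named sender and one named receiver.

    int value = 42;
    if (rank == 0)        /* one sender ...        */
      MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    else if (rank == 1)   /* ... and one receiver  */
      MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);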

In the call MPI_Reduce(a, z, n, MPI_DOUBLE, MPI_MIN, 4, MPI_COMM_WORLD), process n is the destination (T/F)

False. Process 4 (the sixth argument, the root) is the destination process; n is the element count.
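
The same call with each argument labeled (a, z, and n are whatever the surrounding program defines them to be):

    MPI_Reduce(a,              /* send buffer: this process's contribution      */
               z,              /* receive buffer: result, used only on the root */
               n,              /* number of elements each process contributes   */
               MPI_DOUBLE,     /* element type                                  */
               MPI_MIN,        /* reduction operation (minimum, not sum)        */
               4,              /* root: rank 4 is the destination process       */
               MPI_COMM_WORLD);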

MPI_Allreduce has a similar communication pattern (sends and receives of messages) as MPI_Allgather (T/F)

False.

MPI_Reduce has a similar communication pattern (sends and receives of messages) as MPI_Scatter (T/F)

False. MPI_Reduce is many-to-one, whereas MPI_Scatter is one-to-many; the pattern is reversed.

MPI_Gather implies a barrier (T/F).

False. "A barrier means if any process runs after the barrier, every other process must have reached the barrier. Because gather most processes send, and do not receive, and because sending can be non-blocking, you can finish and send before other processes."

The communication pattern (one-to-one, one-to-many, many-to-one, or many-to-many) of MPI_Gather and MPI_Scatter are identical (T/F).

False. Gather is many-to-one; Scatter is one-to-many.

The MPI_Scatter function concatenates the data from all involved processes (T/F)

False. It "scatters" the data evenly amongst the processes.

MPI_Allreduce can be emulated with MPI_Reduce followed by MPI_Scatter (T/F)

False. It can be emulated with MPI_Reduce followed by MPI_Bcast, not MPI_Scatter.
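
A sketch of the emulation (fragment; the buffer names, count n, and MPI_SUM operation are illustrative):

    /* every rank contributes local[]; afterwards every rank holds result[] */
    MPI_Reduce(local, result, n, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    MPI_Bcast(result, n, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    /* the single collective that does the same thing (usually faster) */
    MPI_Allreduce(local, result, n, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);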

MPI_Allreduce performs many-to-one communication (T/F)

False. It performs many-to-many communication.

MPI_Gather performs one-to-many communication (T/F).

False. It performs many-to-one communication

MPI_Recv performs one-to-many communication (T/F).

False. It performs point-to-point communication.

MPI_Send implies some synchronization (T/F)

False. MPI_Send may buffer the message and return immediately, so it implies no synchronization. (This one was marked true on the quiz but was supposed to be false per the key.)

MPI_Recv may return before the message has actually been received (T/F).

False. It blocks until the full message has been received.

MPI_Allgather performs many-to-one communication (T/F)

False. It performs many-to-many communication.

MPI_Reduce performs one-to-many communication (T/F)

False. It performs many-to-one communication.

MPI_Bcast performs many-to-one communication. (T/F)

False. It performs one-to-many communication.

Returning from an MPI_Reduce call by any process implies that the process receiving the reduced result has already reached its MPI_Reduce call (T/F).

False. In MPI_Reduce, everyone is a sender and only one process is a receiver, and sends can be non-blocking and return immediately. Therefore, returning from the call tells a sending process nothing about whether the receiving process has reached its MPI_Reduce call.

MPI programs have to be run with more than one process (T/F).

False. You can run on only one process, effectively making it serial.

In the call MPI_Reduce(a, z, n, MPI_DOUBLE, MPI_MIN, 4, MPI_COMM_WORLD), each process contributes 4 elements to the reduction (T/F)

False. The 4 is the destination (root) rank; each process contributes n elements.

MPI_Scatter performs many-to-one communication (T/F).

False. It performs one-to-many communication.

The receive buffer size parameter in MPI_Recv calls specifies the exact length of the message to be received (in number of elements) (T/F).

False. It specifies the capacity of the receive buffer (the maximum number of elements that fit); the received message may be shorter.
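
Sketch (fragment; the source, tag, and buffer size are arbitrary): the count passed to MPI_Recv is the buffer's capacity, and MPI_Get_count reports how many elements actually arrived.

    double buf[100];
    MPI_Status status;
    int received;
    MPI_Recv(buf, 100, MPI_DOUBLE, MPI_ANY_SOURCE, 0, MPI_COMM_WORLD, &status);
    MPI_Get_count(&status, MPI_DOUBLE, &received);   /* actual number of elements received */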

The call MPI_Reduce(a, z, n, MPI_DOUBLE, MPI_MIN, 4, MPI_COMM_WORLD) performs a sum reduction (T/F)

False. The MPI_Op argument is MPI_MIN, so it performs a minimum reduction, not a sum.

Multidimensional C/C++ arrays are stored column by column in main memory (T/F).

False. They are stored row by row (row-major order).
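
Illustration: in row-major storage the rightmost index varies fastest, so iterating the column index in the inner loop walks memory sequentially.

    double a[3][4];
    /* memory order: a[0][0], a[0][1], a[0][2], a[0][3], a[1][0], ...  */
    /* i.e., a[i][j] sits at offset i*4 + j from &a[0][0]              */
    for (int i = 0; i < 3; i++)
      for (int j = 0; j < 4; j++)
        a[i][j] = 0.0;   /* cache-friendly traversal */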

If each process calls MPI_Reduce k > 1 times, the reductions are matched based on the order in which they are called (T/F)

True. Collective calls on the same communicator are matched in the order in which they are made.

MPI_Aint denotes an integer type that is large enough to hold an address (T/F)

True

Sending one long message in MPI is typically more efficient than sending multiple short messages with the same total amount of data (T/F)

True. Each message incurs a fixed per-message overhead (latency and call setup), so one long message amortizes that cost better than many short ones.

MPI_Gather may be non-blocking on more than one of the involved processes (T/F)

True. Remember: everybody except one process is a sender, and sends can complete without blocking.

The MPI_Barrier call requires a parameter (to be passed to the function) (T/F)

True. It requires the communicator, e.g., MPI_Barrier(MPI_COMM_WORLD).

In MPI_Gather, rank 0 always contributes the first chunk of the result (T/F)

True. The gathered chunks are placed in rank order, so rank 0's data always comes first.

MPI_Allreduce implies a barrier (T/F)

True. "Everyone is a sender and a receiver"

MPI programs can suffer from indeterminacy (T/F).

True. Anything parallel can suffer from indeterminacy.

A cyclic distribution of the elements in an array is useful for load balancing when the amount of work per element increases with increasing array indices (T/F)

True. If the amount of work per element is not constant (unlike vector addition, where it is), a cyclic distribution spreads the cheap and expensive elements across all processes, so each process ends up near the average.

A cyclic distribution of the elements in an array is useful for load balancing when the amount of work per element decreases with increasing array indices (T/F).

True. Same reasoning as above: if the amount of work per element is not constant, a cyclic distribution spreads the cheap and expensive elements across all processes, so each process ends up near the average.
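
A sketch of a cyclic distribution in SPMD style (process_element is a hypothetical stand-in for the real per-element work): each rank takes every size-th element, so expensive and cheap elements are spread across all ranks.

    /* block:  rank r gets one contiguous chunk (bad if cost varies with index) */
    /* cyclic: rank r gets elements r, r + size, r + 2*size, ...                */
    for (long i = rank; i < n; i += size)
      process_element(i);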

The fractal code from the projects is likely to suffer from load imbalance (T/F).

True. The fractal code computes the color of each pixel iteratively, and the iteration count (which determines the color) varies greatly from pixel to pixel, so the work is unevenly distributed.

MPI_Allgather is typically faster than calling MPI_Gather followed by MPI_Bcast (T/F).

True. A single collective has less overhead than two back-to-back collectives and lets the MPI library use a more efficient communication pattern.

MPI_Send may return before the message has actually been sent (T/F).

True. For efficiency's sake, the implementation may buffer the message and let the call return immediately.

The collatz code from the projects is likely to suffer from load imbalance (T/F).

True. The number of Collatz steps varies unpredictably from value to value, so the amount of work per element is uneven.
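
For intuition, the per-element work is the number of Collatz steps, which jumps around between neighboring values (a minimal sketch, not the project code):

    /* steps until val reaches 1; e.g. 8 takes 3 steps, 9 takes 19, 27 takes 111 */
    int collatz_steps(long val)
    {
      int steps = 0;
      while (val != 1) {
        val = (val % 2 == 0) ? val / 2 : 3 * val + 1;
        steps++;
      }
      return steps;
    }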

In MPI_Gather, every process has to pass a parameter for the destination buffer, even processes that will not receive the result of the gather (T/F).

True. The code is SPMD: every process executes the same MPI_Gather call and must supply every parameter, even though only the root actually uses the destination buffer.

In this course, we should always call MPI_Init with two NULL parameters (T/F).

True. The two parameters are for the command-line arguments (argc and argv); we won't be using them.
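
The corresponding skeleton (a minimal sketch):

    #include <mpi.h>

    int main(int argc, char *argv[])
    {
      MPI_Init(NULL, NULL);   /* passing NULL, NULL instead of &argc, &argv */
      /* ... SPMD code ... */
      MPI_Finalize();
      return 0;
    }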

A single call to MPI_Reduce by each process suffices to reduce local histograms with many buckets into a global histogram (T/F).

True. MPI_Reduce reduces an array element by element, so a single call (made by every process in SPMD style) combines all the buckets at once.
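
Sketch (fragment; BUCKETS and the buffer names are made up): one MPI_Reduce call reduces the whole local array element by element.

    enum { BUCKETS = 64 };
    long local_hist[BUCKETS];    /* each rank's local counts                          */
    long global_hist[BUCKETS];   /* element-wise sums, valid on rank 0 after the call */
    MPI_Reduce(local_hist, global_hist, BUCKETS, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);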

In the call MPI_Reduce(a, z, n, MPI_DOUBLE, MPI_MIN, 4, MPI_COMM_WORLD), the result is written into the "z" array (T/F)

True. The reduced result is written into the receive buffer z on the destination process. (This one was marked false on the quiz but is actually true.)

