Homework 2


In an MPI program with 8 processes, what is the smallest rank that any process will have?

0
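
A minimal sketch (not from the homework itself, names are mine) showing where ranks come from: with 8 processes, MPI_Comm_rank yields values 0 through 7.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank: 0 .. size-1 */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */

    printf("process %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;
}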

When running an MPI program with 8 processes that call MPI_Gather using the default communicator, how many processes will receive the data?

1. MPI_Gather collects the data from all processes onto a single destination (root) process.

When running an MPI program with 8 processes that call MPI_Bcast using the default communicator where the source process sends an array of 10 elements, how many elements does each destination process receive?

10
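
For illustration, a hedged sketch of such a broadcast, assuming rank 0 is the source and a 10-element double array (names are mine). Every destination process receives all 10 elements.

#include <mpi.h>

int main(int argc, char *argv[])
{
    double a[10];                       /* root's send data; receive space elsewhere */
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0)
        for (int i = 0; i < 10; i++) a[i] = i;   /* source fills the array */

    /* all 10 elements arrive at every other process */
    MPI_Bcast(a, 10, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}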

When running an MPI program with 8 processes that call MPI_Reduce using the default communicator where each source process sends an array of 10 elements, how many elements does the destination process receive?

10

When running an MPI program with 8 processes that call MPI_Scatter using the default communicator where the source process scatters an array of 16 elements, how many elements does each destination process receive?

2
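
A sketch of that scatter, assuming rank 0 is the source (array names are mine). Note that the count argument is per receiver, not the total: 16 elements across 8 processes means 2 each.

#include <mpi.h>

int main(int argc, char *argv[])
{
    int a[16], chunk[2];                /* 16 elements / 8 processes = 2 each */
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0)
        for (int i = 0; i < 16; i++) a[i] = i;

    /* the count (2) is per destination process, not the total */
    MPI_Scatter(a, 2, MPI_INT, chunk, 2, MPI_INT, 0, MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}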

In an MPI program with 8 processes, what is the largest rank that any process will have?

7

When running an MPI program with 8 processes that call MPI_Scatter using the default communicator, how many processes will receive a chunk of the data?

8. MPI_Scatter sends one chunk to every process in the communicator, including the root.

When running an MPI program with 8 processes that call MPI_Gather using the default communicator where each source process sends an array of 10 elements, how many elements does the destination process receive?

80
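
A sketch of that gather, assuming rank 0 is the destination (array names are mine). The root's receive buffer must hold all 8 chunks, i.e., 8 x 10 = 80 elements; note that every process passes the receive-buffer parameter even though only the root uses it.

#include <mpi.h>

int main(int argc, char *argv[])
{
    int mine[10], all[80];              /* 8 processes x 10 elements = 80 at the root */
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    for (int i = 0; i < 10; i++) mine[i] = rank * 10 + i;

    /* rank 0's chunk lands first in "all", rank 1's second, and so on */
    MPI_Gather(mine, 10, MPI_INT, all, 10, MPI_INT, 0, MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}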

MPI programs have to be run with more than one process.

False

MPI_Allgather performs many-to-one communication.

False

MPI_Allreduce can be emulated with MPI_Reduce followed by MPI_Scatter.

False. That sequence would split the result across the processes; MPI_Allreduce is equivalent to MPI_Reduce followed by MPI_Bcast.

MPI_Allreduce performs many-to-one communication.

False

MPI_Bcast performs many-to-one communication.

False

MPI_Gather performs one-to-many communication.

False

MPI_Recv may return before the message has actually been received.

False

MPI_Recv performs one-to-many communication.

False

MPI_Reduce implies a barrier.

False

MPI_Reduce performs one-to-many communication.

False

MPI_Scatter performs many-to-one communication.

False

MPI_Send performs one-to-many communication.

False

MPI_Ssend performs many-to-one communication.

False

Returning from an MPI_Gather call by any process implies that the process receiving the gathered result has already reached its MPI_Gather call.

False

The MPI_Scatter function concatenates the data from all involved processes.

False

The receive buffer size parameter in MPI_Recv calls specifies the exact length of the message to be received (in number of elements).

False. The parameter specifies the capacity of the receive buffer (in number of elements), i.e., an upper bound on the message length: MPI_Recv(buf, count, datatype, source, tag, comm, status), where count is the maximum number of elements buf can hold.
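
A sketch illustrating the difference, assuming two processes and names of my choosing: the receiver passes a capacity of 100, the message is only 10 elements long, and MPI_Get_count reports the actual length.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    double buf[100];                    /* capacity: an upper bound, not the exact length */
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        double msg[10] = {0};
        MPI_Send(msg, 10, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);  /* only 10 elements */
    } else if (rank == 1) {
        MPI_Status status;
        int count;
        MPI_Recv(buf, 100, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &status);
        MPI_Get_count(&status, MPI_DOUBLE, &count);  /* actual length received: 10 */
        printf("received %d elements\n", count);
    }

    MPI_Finalize();
    return 0;
}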

In the call MPI_Reduce(a, z, n, MPI_DOUBLE, MPI_SUM, 8, MPI_COMM_WORLD), process 0 is the destination.

False. The destination (root) rank is 8, not 0.

In the call MPI_Reduce(a, z, n, MPI_DOUBLE, MPI_SUM, 8, MPI_COMM_WORLD), each process contributes 8 elements to the reduction.

False. Each process contributes n elements.

In the call MPI_Reduce(a, z, n, MPI_DOUBLE, MPI_SUM, 8, MPI_COMM_WORLD), the result is written into the "n" array.

False. The result is written into the "z" receive buffer (array).
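
Tying the three statements above together, a sketch of that call with each parameter's role commented (a, z, and n come from the question; everything else is assumed):

#include <mpi.h>

int main(int argc, char *argv[])
{
    double a[10], z[10];
    int n = 10;
    MPI_Init(&argc, &argv);       /* run with at least 9 processes so rank 8 exists */

    for (int i = 0; i < n; i++) a[i] = 1.0;

    MPI_Reduce(a,                 /* send buffer: each process contributes n elements */
               z,                 /* receive buffer: result appears here, on the root only */
               n,                 /* element count per process, not the root rank */
               MPI_DOUBLE, MPI_SUM,
               8,                 /* root (destination) rank */
               MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}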

A cyclic distribution of the elements in an array is useful for load balancing when the amount of work per element increases with increasing array indices.

True
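
As an illustration, a minimal sketch of a cyclic distribution, with a placeholder work() function standing in for the per-element computation: each process gets a round-robin mix of cheap (low-index) and expensive (high-index) elements.

#include <mpi.h>

/* hypothetical per-element computation whose cost grows with the index */
static void work(long i) { (void)i; }

int main(int argc, char *argv[])
{
    const long n = 1000000;
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* cyclic: process `rank` handles elements rank, rank+size, rank+2*size, ... */
    for (long i = rank; i < n; i += size)
        work(i);

    MPI_Finalize();
    return 0;
}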

A single call to MPI_Reduce by each process suffices to reduce local histograms with many buckets into a global histogram.

True
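
For example, a sketch with an assumed bucket count: because the count argument is an element count, a single MPI_Reduce sums all buckets element-wise across processes.

#include <mpi.h>

#define BUCKETS 1000   /* assumed number of histogram buckets */

int main(int argc, char *argv[])
{
    long local[BUCKETS] = {0}, global[BUCKETS];
    MPI_Init(&argc, &argv);

    /* ... each process fills its local histogram here ... */

    /* one call reduces all buckets at once, bucket by bucket */
    MPI_Reduce(local, global, BUCKETS, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}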

In MPI_Gather, every process has to pass a parameter for the destination buffer, even processes that will not receive the result of the gather.

True

In MPI_Gather, rank 0 always contributes the first chunk of the result.

True

MPI programs can suffer from indeterminacy.

True

MPI_Allgather implies a barrier.

True

MPI_Allgather is typically faster than calling MPI_Gather followed by MPI_Bcast.

True

MPI_Reduce has a similar communication pattern (sends and receives of messages) as MPI_Gather.

True

MPI_Reduce may be non-blocking on more than one of the involved processes.

True

MPI_Send may return before the message has actually been sent.

True

MPI_Ssend implies some synchronization.

True

Multidimensional C/C++ arrays are stored row by row in main memory.

True
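
A quick sketch demonstrating row-major layout with pointer arithmetic (the array shape is an arbitrary example): in double a[3][4], element a[i][j] sits at linear offset i*4 + j.

#include <assert.h>

int main(void)
{
    double a[3][4];
    /* row-major: a[i][j] is at linear offset i*4 + j from &a[0][0] */
    assert(&a[1][2] == &a[0][0] + 1 * 4 + 2);
    return 0;
}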

Sending one long message in MPI is typically more efficient than sending multiple short messages with the same total amount of data.

True

The MPI_Barrier call requires a parameter (to be passed to the function).

True. It takes the communicator: MPI_Barrier(MPI_COMM_WORLD).

The communication patterns (one-to-one, one-to-many, many-to-one, or many-to-many) of MPI_Gather and MPI_Reduce are identical.

True

The Collatz code from the project is likely to suffer from load imbalance.

True. The fractal code from the project does as well.

