Parallel Programming Midterm

In an MPI program with 8 processes, what is the smallest rank that any process will have?

0
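
A minimal sketch (not from the exam; the printed message is illustrative) showing why: ranks in MPI_COMM_WORLD run from 0 to size-1, so with 8 processes the smallest rank is 0 and the largest is 7.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char* argv[])
    {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank: 0 .. size-1 */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* number of processes, e.g., 8 */
        printf("rank %d of %d\n", rank, size);
        MPI_Finalize();
        return 0;
    }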

When running an MPI program with 8 processes that call MPI_Gather using the default communicator, how many processes will receive the data?

1

According to Amdahl's law, what is the upper bound on the achievable speedup when 10% of the code is not parallelized?

10
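
Reasoning: Amdahl's law gives S(p) = 1 / (s + (1-s)/p) for serial fraction s, which approaches 1/s as p grows; with s = 0.10 the upper bound is 1/0.10 = 10.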

When running an MPI program with 8 processes that call MPI_Bcast using the default communicator where the source process sends an array of 10 elements, how many elements does each destination process receive?

10
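
A sketch of the broadcast case (buffer name and values are illustrative): the count passed to MPI_Bcast applies on every rank, so each of the other 7 processes receives all 10 elements.

    #include <mpi.h>

    int main(int argc, char* argv[])
    {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        double buf[10];                                   /* every rank provides room for 10 elements */
        if (rank == 0)
            for (int i = 0; i < 10; i++) buf[i] = i;      /* the root fills the array */
        MPI_Bcast(buf, 10, MPI_DOUBLE, 0, MPI_COMM_WORLD); /* afterwards all 8 ranks hold the same 10 values */
        MPI_Finalize();
        return 0;
    }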

When running an MPI program with 8 processes that call MPI_Reduce using the default communicator where each source process sends an array of 10 elements, how many elements does the destination process receive?

10
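
A sketch of the reduce case (buffer names are illustrative): MPI_Reduce combines the contributed arrays element by element, so the result at the destination has the same count as each contribution, here 10.

    #include <mpi.h>

    int main(int argc, char* argv[])
    {
        MPI_Init(&argc, &argv);
        double local[10] = {0}, global[10];
        /* element-wise sum across all 8 ranks; only rank 0 receives the 10 result elements */
        MPI_Reduce(local, global, 10, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
        MPI_Finalize();
        return 0;
    }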

Assuming a parallel runtime of 20s on 8 cores, a serial runtime of 120s, and a fixed overhead, what is the expected speedup with 24 cores?

12

Assuming a parallel runtime of 20s on 8 cores, a serial runtime of 120s, and a fixed overhead (Slide Ch03.47), what is the expected runtime in seconds with 24 cores (do not include any units in the answer)?

10
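
Reasoning for the fixed-overhead cards: T(8) = 120/8 + To = 15 + To = 20 s, so To = 5 s. Then T(24) = 120/24 + 5 = 10 s, which gives speedup 120/10 = 12 and efficiency 12/24 = 50%.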

Given a parallel runtime of 20s on 8 cores and a serial runtime of 120s, what is the runtime in seconds on 10 cores assuming the same efficiency (do not include any units in the answer)?

16
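
Reasoning: efficiency on 8 cores is (120/20)/8 = 0.75; keeping E = 0.75 on 10 cores gives speedup 0.75 * 10 = 7.5 and runtime 120/7.5 = 16 s.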

What is the speedup when 25% of the code is not parallelized and the rest of the code is perfectly parallelized (achieves linear speedup) and executed on 3 cores?

2
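
Reasoning: with serial fraction s = 0.25 and p = 3 cores, S = 1/(s + (1-s)/p) = 1/(0.25 + 0.75/3) = 1/0.5 = 2.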

When running an MPI program with 8 processes that call MPI_Scatter using the default communicator where the source process scatters an array of 16 elements, how many elements does each destination process receive?

2
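
A sketch of the scatter case (array names are illustrative): the send count in MPI_Scatter is the per-process chunk size, so 16 elements split evenly across 8 ranks means 2 elements each.

    #include <mpi.h>

    int main(int argc, char* argv[])
    {
        MPI_Init(&argc, &argv);
        int sendbuf[16];     /* only examined on the root */
        int recvbuf[2];      /* every rank, including the root, receives 2 elements */
        for (int i = 0; i < 16; i++) sendbuf[i] = i;
        MPI_Scatter(sendbuf, 2, MPI_INT, recvbuf, 2, MPI_INT, 0, MPI_COMM_WORLD);
        MPI_Finalize();
        return 0;
    }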

According to Amdahl's law, what is the upper bound on the achievable speedup when 25% of the code is not parallelized?

4
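
Reasoning: the Amdahl limit is 1/s, so with s = 0.25 the speedup can never exceed 1/0.25 = 4.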

Assuming a parallel runtime of 20s on 8 cores, a serial runtime of 120s, and a fixed overhead, what is the expected efficiency in percent with 24 cores (use a whole number and do not include the "%" symbol in the answer)?

50

Given a parallel runtime of 20s on 8 cores and a serial runtime of 120s, what is the speedup?

6

In an MPI program with 8 processes, what is the largest rank that any process will have?

7

Given a parallel runtime of 20s on 8 cores and a serial runtime of 120s, what is the efficiency in percent (use a whole number and do not include the "%" symbol in the answer)?

75
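
Reasoning for the two 120 s / 20 s cards above: speedup S = 120/20 = 6, and efficiency E = S/p = 6/8 = 0.75, i.e., 75%.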

When running an MPI program with 8 processes that call MPI_Scatter using the default communicator, how many processes will receive a chunk of the data?

8

When running an MPI program with 8 processes that call MPI_Gather using the default communicator where each source process sends an array of 10 elements, how many elements does the destination process receive?

80
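
A sketch of the gather case (array names are illustrative): each of the 8 ranks contributes 10 elements and only the root's receive buffer is examined, so the root ends up with 8 x 10 = 80 elements.

    #include <mpi.h>

    int main(int argc, char* argv[])
    {
        MPI_Init(&argc, &argv);
        double part[10];        /* each rank's 10-element contribution */
        double whole[80];       /* only examined on the root, but every rank must pass something */
        for (int i = 0; i < 10; i++) part[i] = i;
        MPI_Gather(part, 10, MPI_DOUBLE, whole, 10, MPI_DOUBLE, 0, MPI_COMM_WORLD);
        MPI_Finalize();
        return 0;
    }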

Acquiring a lock by one thread before accessing a shared memory location prevents other threads from being able to access the same shared memory location, even if the other threads do not acquire a lock.

False

All data races involve at least two write operations.

False

All reductions compute a single sum.

False

Every parallel program requires explicit synchronization.

False

In the call MPI_Reduce(a, z, n, MPI_DOUBLE, MPI_SUM, 8, MPI_COMM_WORLD), each process contributes 8 elements to the reduction.

False

In the call MPI_Reduce(a, z, n, MPI_DOUBLE, MPI_SUM, 8, MPI_COMM_WORLD), process 0 is the destination.

False

In the call MPI_Reduce(a, z, n, MPI_DOUBLE, MPI_SUM, 8, MPI_COMM_WORLD), the result is written into the "n" array.

False
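
For reference, the parameters of the call used in the three cards above map as follows (a, z, and n are the names from the question; the surrounding program is assumed):

    /* MPI_Reduce(a, z, n, MPI_DOUBLE, MPI_SUM, 8, MPI_COMM_WORLD);
       a  - send buffer: each process's local contribution
       z  - receive buffer: the result is written here (not into n), on the root only
       n  - count: the number of elements each process contributes (8 is not the count)
       8  - root rank: the destination is rank 8, not rank 0                            */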

MPI programs have to be run with more than one process.

False

MPI_Allgather performs many-to-one communication.

False

MPI_Allreduce can be emulated with MPI_Reduce followed by MPI_Scatter.

False
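
A sketch of why this is false (buffer names are illustrative): MPI_Allreduce leaves the full reduced array on every rank, which matches MPI_Reduce followed by MPI_Bcast; a scatter would leave each rank with only a slice.

    #include <mpi.h>

    int main(int argc, char* argv[])
    {
        MPI_Init(&argc, &argv);
        double local[10] = {0}, global[10];
        /* emulation of MPI_Allreduce: reduce to rank 0, then broadcast the result */
        MPI_Reduce(local, global, 10, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
        MPI_Bcast(global, 10, MPI_DOUBLE, 0, MPI_COMM_WORLD);
        /* the single call MPI_Allreduce(local, global, 10, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD)
           has the same effect and is typically faster */
        MPI_Finalize();
        return 0;
    }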

MPI_Allreduce performs many-to-one communication.

False

MPI_Bcast performs many-to-one communication.

False

MPI_Gather performs one-to-many communication.

False

MPI_Recv may return before the message has actually been received.

False

MPI_Recv performs one-to-many communication.

False

MPI_Reduce implies a barrier.

False

MPI_Reduce performs one-to-many communication.

False

MPI_Scatter performs many-to-one communication.

False

MPI_Send performs one-to-many communication.

False

MPI_Ssend performs many-to-one communication.

False

Programs running on shared-memory systems cannot suffer from data races.

False

Returning from an MPI_Gather call by any process implies that the process receiving the gathered result has already reached its MPI_Gather call.

False

The MPI_Scatter function concatenates the data from all involved processes.

False

The receive buffer size parameter in MPI_Recv calls specifies the exact length of the message to be received (in number of elements).

False

When protecting a critical section with a lock, the threads are guaranteed to enter the critical section in the order in which they first tried to acquire the lock.

False

A barrier is a synchronization primitive.

True

A cyclic distribution of the elements in an array is useful for load balancing when the amount of work per element increases with increasing array indices.

True
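
A sketch of a cyclic (round-robin) distribution; the problem size n and the printf stand in for real work. Rank r handles elements r, r + nprocs, r + 2*nprocs, ..., so elements whose cost grows with the index are spread evenly across processes.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char* argv[])
    {
        MPI_Init(&argc, &argv);
        int rank, nprocs;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
        const int n = 100;                             /* placeholder problem size */
        for (int i = rank; i < n; i += nprocs) {       /* cyclic assignment of elements */
            printf("rank %d handles element %d\n", rank, i);
        }
        MPI_Finalize();
        return 0;
    }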

A single call to MPI_Reduce by each process suffices to reduce local histograms with many buckets into a global histogram.

True

Data races always involve at least two threads.

True

Deadlock is a parallelism bug.

True

Embarrassingly parallel programs can suffer from load imbalance.

True

In MPI_Gather, every process has to pass a parameter for the destination buffer, even processes that will not receive the result of the gather.

True

In MPI_Gather, rank 0 always contributes the first chunk of the result.

True

MPI programs can suffer from indeterminacy.

True

MPI_Allgather implies a barrier.

True

MPI_Allgather is typically faster than calling MPI_Gather followed by MPI_Bcast.

True

MPI_Reduce has a similar communication pattern (sends and receives of messages) as MPI_Gather.

True

MPI_Reduce may be non-blocking on more than one of the involved processes.

True

MPI_Send may return before the message has actually been sent.

True

MPI_Ssend implies some synchronization.

True

Multidimensional C/C++ arrays are stored row by row in main memory.

True

Mutexes and locks are the same kind of synchronization primitive.

True

Reduction operations can be implemented using a reduction tree.

True

Sending one long message in MPI is typically more efficient than sending multiple short messages with the same total amount of data.

True

The MPI_Barrier call requires a parameter (to be passed to the function).

True

The busy-waiting code from the slides contains a data race.

True

The collatz code from the project is likely to suffer from load imbalance.

True

The communication patterns (one-to-one, one-to-many, many-to-one, or many-to-many) of MPI_Gather and MPI_Reduce are identical.

True

When a thread attempts to acquire a lock that is already taken, it is blocked until it obtains the lock.

True
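
A minimal pthreads sketch (not from the slides) of the lock behavior covered in these cards: a thread that calls pthread_mutex_lock on a taken mutex blocks until it obtains the lock, the mutex only excludes threads that also acquire it, and no FIFO ordering is guaranteed.

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static long counter = 0;           /* shared data protected by the lock */

    static void* worker(void* arg)
    {
        pthread_mutex_lock(&lock);     /* blocks if another thread holds the lock */
        counter++;                     /* critical section */
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    int main(void)
    {
        pthread_t t[4];
        for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, worker, NULL);
        for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
        printf("counter = %ld\n", counter);
        return 0;
    }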

