program executing on all processors and delegate interprocessor
communication and coordination to a communication library such as
MPI. Within this approach, many parallel computations can be
conveniently expressed in terms of a small number of collective
communication operations, where ``collective'' means that a subset of
the processors cooperates in a nontrivial way.
The talk gives a short introduction and then presents some example
algorithms for broadcasting and all-to-all communication.
We will see some nice effects, such as ``trees'' with fractional degree,
and algorithms that take detours in order to reach the goal faster.
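To give a flavor of such broadcasting algorithms, here is a minimal sketch of the classical binomial-tree broadcast, written as a Python simulation rather than real MPI code (the function name and round-counting interface are illustrative, not from the talk): in round k, every processor that already holds the message forwards it to the processor 2^k ranks away, so all p processors are reached after ceil(log2 p) rounds.

```python
def binomial_broadcast_rounds(p, root=0):
    """Simulate a binomial-tree broadcast among p processors.

    In round k, every processor that already holds the message
    sends it to the processor 2**k ranks away (mod p).
    Returns the number of communication rounds until all p
    processors have received the message.
    """
    reached = {root}          # processors that hold the message
    rounds = 0
    while len(reached) < p:
        step = 1 << rounds    # distance doubled each round
        reached |= {(r + step) % p for r in reached}
        rounds += 1
    return rounds

print(binomial_broadcast_rounds(8))   # 3 rounds: ceil(log2 8)
print(binomial_broadcast_rounds(13))  # 4 rounds: ceil(log2 13)
```

The doubling structure is what makes the logarithmic running time visible: the set of informed processors doubles in every round, which is also the starting point for the more refined tree shapes discussed in the talk.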