On Thu, 2010-01-28 at 17:05 -0800, Tim wrote:
> Also I only need the loop that computes every element of the array to
> be parallelized. Someone said that the parallel part begins with
> MPI_Init and ends with MPI_Finalize, and one can do any serial
> computations before and/or after these calls. But I have written some
> MPI programs and found that the parallel part is not restricted to the
> region between MPI_Init and MPI_Finalize, but is instead the whole
> program. If the rest of the code has to be wrapped so that only the
> process with ID 0 runs it, I have little idea how to apply that to my
> case, since the rest would be the parts before and after the loop in
> the function, plus everything in main().

I think you're being polluted by your OpenMP experience!  ;-)

Unlike in OpenMP, there is no concept of a "parallel region" when using
MPI.  MPI allows you to pass data between processes.  That's all.  It's
up to you to write your code in such a way that the data is distributed
to allow parallel computation.
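
For instance, here is a toy sketch (not your code, just made-up values)
where rank 0 sends a single integer to rank 1.  Explicit message passing
between otherwise independent processes really is the whole of what MPI
gives you:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* rank 0 produces a value and ships it off */
        value = 42;
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* rank 1 receives it; every other rank does nothing here */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}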

Often MPI_Init and MPI_Finalize are amongst the first and last things
done in a parallel code, respectively.  They effectively say "set up
stuff so I can pass messages" and "clean that up".  Each process runs
the whole program from start to finish "independently".
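
For your case the usual pattern looks something like the sketch below
(compute_element() and N are placeholders I have invented, and I assume
N divides evenly by the number of processes): every process executes
main() from start to finish, each computes only its own slice of the
loop, and the slices are gathered back to rank 0, which alone does the
serial work before and after.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define N 1000

/* placeholder for whatever computes one array element */
static double compute_element(int i)
{
    return (double)i * i;
}

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int chunk = N / size;                 /* assumes N % size == 0 */
    double *local = malloc(chunk * sizeof(double));
    double *full  = (rank == 0) ? malloc(N * sizeof(double)) : NULL;

    /* each process computes only its slice of the loop */
    for (int i = 0; i < chunk; i++)
        local[i] = compute_element(rank * chunk + i);

    /* collect the slices on rank 0 */
    MPI_Gather(local, chunk, MPI_DOUBLE,
               full,  chunk, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        /* serial pre/post-processing lives here, on rank 0 only */
        printf("full[N-1] = %g\n", full[N - 1]);
        free(full);
    }

    free(local);
    MPI_Finalize();
    return 0;
}

Note there is no "parallel region" anywhere in that sketch: the
parallelism comes purely from how the loop bounds depend on the rank.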

As an aside, using MPI is much more invasive than OpenMP.  Parallelising
an existing serial code can be hard with MPI.  But if you start from
scratch you usually end up with a better code with MPI than with OpenMP
(e.g. MPI forces you to think about data locality, whereas with OpenMP
you can ignore the cost of poor locality and still have a working code).

