I guess that, on a multicore machine, OpenMP/pthread code will always run faster than MPI code on the same box, even if the MPI implementation is efficient and uses a shared-memory mechanism whereby the data is actually shared across the different processes, though in a different way than it is shared across the threads of a single process.



I'd be curious to see some timing comparisons.
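
As a starting point, here is a minimal sketch of the kind of intra-node timing I have in mind (illustrative only; the message size and iteration count are arbitrary): a ping-pong between two ranks on the same box, timed with MPI_Wtime, to be compared against an equivalent threaded version.

/* pingpong.c -- rough intra-node MPI latency/bandwidth probe (illustrative only).
   Build:  mpicc pingpong.c -o pingpong
   Run:    mpirun -np 2 ./pingpong        (both ranks on the same multicore box) */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, i, iters = 1000;
    int nbytes = 1 << 20;                   /* 1 MiB messages, arbitrary choice */
    char *buf = malloc(nbytes);
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (i = 0; i < iters; i++) {
        if (rank == 0) {                    /* rank 0 sends, then waits for the echo */
            MPI_Send(buf, nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {             /* rank 1 echoes every message back */
            MPI_Recv(buf, nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    if (rank == 0)
        printf("avg round trip: %.2f us, approx bandwidth: %.1f MB/s\n",
               (t1 - t0) / iters * 1e6,
               2.0 * nbytes * iters / (t1 - t0) / 1e6);

    free(buf);
    MPI_Finalize();
    return 0;
}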

MM



From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
Behalf Of amjad ali
Sent: 10 December 2011 20:22
To: Open MPI Users
Subject: [OMPI users] How to justify the use of MPI codes on multicore systems/PCs?



Hello All,



I developed my MPI-based parallel code for clusters, but now I use it on multicore/manycore computers (PCs) as well. How can I justify (in a thesis/publication) the use of a distributed-memory code (in MPI) on a shared-memory (multicore) machine? I plan to give two reasons:



(1) The plan is to use several hundred processes in the future, so an MPI-style approach is necessary. To maintain code uniformity and to save the cost/time of developing a separate shared-memory version (using OpenMP, pthreads, etc.), I use the same MPI code on shared-memory systems (like multicore PCs). MPI-based codes give reasonable performance on multicore PCs, even if not the best.
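
To make the uniformity point concrete, here is a toy illustration (not the actual solver, just a sketch): the very same source builds once and runs unchanged whether its ranks are spread across cluster nodes or packed onto the cores of a single PC; only the mpirun command line changes.

/* same_code.c -- the same binary runs on a cluster or on one multicore PC.
   Cluster:       mpirun -np 64 -hostfile nodes ./same_code
   Multicore PC:  mpirun -np 8 ./same_code */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, len;
    char host[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(host, &len);

    /* On a cluster the host names differ; on a PC they are all the same,
       but the program logic does not change either way. */
    printf("rank %d of %d on %s\n", rank, size, host);

    MPI_Finalize();
    return 0;
}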



(2) The latest MPI implementations are intelligent enough to use an efficient shared-memory mechanism when executing MPI-based codes on shared-memory (multicore) machines. (Please point me to a reference I can quote for this fact.)
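
For example, in Open MPI the point-to-point traffic between ranks on the same node normally goes through the shared-memory ("sm") BTL, i.e. messages pass through a shared-memory segment rather than the network stack. As a rough sketch (component names can vary between Open MPI versions, and ./your_mpi_code is just a placeholder), one can list the available transports and pin an on-node run to shared memory with:

  ompi_info | grep btl                               (list the available transport/BTL components)
  mpirun -np 4 --mca btl sm,self ./your_mpi_code     (restrict messaging to shared memory + self loopback)

The Open MPI FAQ section on shared-memory (sm) communication is one place to look for something citable here.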





Please help me formally justify this, and comment on or modify the above two justifications. It would be even better if you could suggest a suitable publication that I can cite in this regard.



best regards,

Amjad Ali


