On Thu, Sep 19, 2013 at 2:48 PM, Xu, Jianqing <x...@medimmune.com> wrote:
>
> Dear all,
>
> I am learning about parallelization from the instructions on the Gromacs 
> website. I think I have a rough understanding of MPI, thread-MPI, and OpenMP, 
> but I hope to get some advice about the correct way to run jobs.
>
> Say I have a local desktop with 16 cores. If I just want to run jobs on one 
> computer or a single node (but multiple cores), I understand that I don't 
> have to install and use OpenMPI, as Gromacs includes its own thread-MPI, 
> which should be good enough for running jobs on one machine. However, for 
> some reason, OpenMPI has already been installed on my machine, and I 
> compiled Gromacs with it by using the flag "-DGMX_MPI=ON". My questions are:
>
>
> 1. Can I still use this executable (mdrun_mpi, built with the OpenMPI 
> library) to run multi-core jobs on my local desktop?

Yes

> Or is the default thread-MPI actually a better option for a single computer 
> or single node (but multiple cores), for whatever reason?

Yes - thread-MPI has lower overhead.
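
For example, with the default thread-MPI build (a minimal sketch, assuming
GROMACS 4.6 option names and a run input called md.tpr):

  mdrun -ntmpi 8 -v -deffnm md

Here -ntmpi sets the number of thread-MPI ranks, and no external launcher
like mpirun is needed.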

> 2. Assuming I can still use this executable, let's say I want to use 
> half of the cores (8 cores) on my machine to run a job,
>
> mpirun -np 8 mdrun_mpi -v -deffnm md
>
> a). Since I am not using all the cores, do I still need to "lock" the 
> physical cores to use for better performance? Something like "-nt" for 
> thread-MPI? Or is it not necessary?

You will see improved performance if you set the thread affinity.
There is no advantage in allowing the threads to move.
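
For example (a sketch, assuming GROMACS 4.6 flag names):

  mdrun -ntmpi 8 -pin on -v -deffnm md

or, with the OpenMPI build,

  mpirun -np 8 mdrun_mpi -pin on -v -deffnm md

mdrun -pin on fixes each thread to a core rather than letting the kernel
migrate threads between cores.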

> b). For running jobs on a local desktop, or a single node having ... say 16 
> cores, or even 64 cores, should I turn off the "separate PME nodes" (-npme 
> 0)? Or is it better to leave it as is?

Depends, but usually best to use separate PME nodes. Try g_tune_pme,
as Carsten suggests.
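
A minimal sketch of a g_tune_pme run (assuming GROMACS 4.6 tool names and a
run input called md.tpr; check g_tune_pme -h for the details):

  g_tune_pme -np 8 -s md.tpr

This runs short benchmarks over a range of PME-node counts and writes a
summary of the fastest settings (to perf.out by default, if I remember
correctly).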

> 3. If I want to run two different projects on my local desktop, say one 
> project takes 8 cores and the other takes 4 cores (assuming I have enough 
> memory), I just submit the two jobs on my desktop:
>
> nohup mpirun -np 8 mdrun_mpi -v -deffnm md1 >& log1 &
>
> nohup mpirun -np 4 mdrun_mpi -v -deffnm md2 >& log2 &
>
> Will this be acceptable? Will the two jobs compete for resources and 
> eventually hurt performance?

It depends on how many cores you have. If you want to share a node between
mdruns, you should specify how many (real or thread-) MPI ranks to use for
each run and how many OpenMP threads per rank, arrange for one thread
per core, and use mdrun -pin and -pinoffset suitably. You should
expect near-linear scaling of each job when you are doing it right -
but learn the behaviour of running one job per node first!
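
For example, on a 16-core box, something along these lines (a sketch,
assuming GROMACS 4.6 flags - adjust the rank counts to your own case):

  mdrun -ntmpi 8 -ntomp 1 -pin on -pinoffset 0 -deffnm md1 > log1 2>&1 &
  mdrun -ntmpi 4 -ntomp 1 -pin on -pinoffset 8 -deffnm md2 > log2 2>&1 &

The second run's -pinoffset 8 shifts its threads onto cores 8-11, so the
two jobs never compete for the same cores.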

Mark

> Sorry for so many detailed questions, but your help on this will be highly 
> appreciated!
>
> Thanks a lot,
>
> Jianqing
>