Hi Jianqing,

On Sep 19, 2013, at 2:48 PM, "Xu, Jianqing" <x...@medimmune.com> wrote:
> Say I have a local desktop having 16 cores. If I just want to run jobs on one 
> computer or a single node (but multiple cores), I understand that I don't 
> have to install and use OpenMPI, as Gromacs has its own thread-MPI included 
> already and it should be good enough to run jobs on one machine. However, for 
> some reasons, OpenMPI has already been installed on my machine, and I 
> compiled Gromacs with it by using the flag: "-DGMX_MPI=ON". My questions are:
> 
> 
> 1. Can I still use this executable (mdrun_mpi, built with the OpenMPI 
> library) to run multi-core jobs on my local desktop? Or is the default 
> Thread-MPI actually a better option for a single computer or single node 
> (but multiple cores) for whatever reason?
You can use either OpenMPI or Gromacs' built-in thread-MPI library. If you only 
want to run on a single machine, I would recommend recompiling with thread-MPI, 
because in many cases this is a bit faster.
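For example, a minimal sketch of such a rebuild and run (assuming a CMake-based 
GROMACS build; thread-MPI is used by default when the external MPI is disabled, 
but check the options of your particular version):

  # configure without the external MPI library; the built-in thread-MPI is then used
  cmake .. -DGMX_MPI=OFF
  make && make install

  # run on 8 thread-MPI ranks on the local machine
  mdrun -nt 8 -v -deffnm md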

> 2. Assuming I can still use this executable, let's say I want to use 
> half of the cores (8 cores) on my machine to run a job,
> 
> mpirun -np 8 mdrun_mpi -v -deffnm md
> 
> a). Since I am not using all the cores, do I still need to "lock" the 
> physical cores to use for better performance? Something like "-nt" for 
> Thread-MPI? Or is it not necessary?
That depends on whether you get good scaling or not. Compared to a run on 1 core, 
the 4- or 8-core parallel runs of a large system should be (nearly) 4 or 8 times 
as fast. If that is the case, you do not need to worry about pinning.
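A quick way to check is to compare short benchmark runs, e.g. (a sketch; -maxh 
limits the wall-clock time in hours, compare the ns/day reported at the end of 
the .log files):

  # short (~6 minute) benchmark runs on 1 and on 8 MPI ranks
  mpirun -np 1 mdrun_mpi -deffnm md -maxh 0.1
  mpirun -np 8 mdrun_mpi -deffnm md -maxh 0.1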

> 
> b). For running jobs on a local desktop, or a single node having, say, 16 
> cores or even 64 cores, should I turn off the "separate PME nodes" (-npme 
> 0)? Or is it better to leave it as is?
You may want to check with g_tune_pme. Note that the optimum depends on your
system, so you should determine it for each MD system separately.
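For example (a sketch; g_tune_pme benchmarks runs with different numbers of 
separate PME ranks for you, see g_tune_pme -h for the exact options of your 
version):

  # try 8-rank runs with various -npme settings and report the fastest
  g_tune_pme -np 8 -s md.tpr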

> 
> 3. If I want to run two different projects on my local desktop, say one 
> project takes 8 cores, the other takes 4 cores (assuming I have enough 
> memory), I just submit the jobs twice on my desktop:
> 
> nohup mpirun -np 8 mdrun_mpi -v -deffnm md1 >& log1&
> 
> nohup mpirun -np 4 mdrun_mpi -v -deffnm md2 >& log2 &
> 
> Will this be acceptable? Will the two jobs compete for resources and 
> eventually affect the performance?
Make some quick test runs (over a couple of minutes). Then you can compare 
the performance of your 8-core run with and without the other simulation running.
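If the two runs do interfere, you can pin them to disjoint sets of cores, e.g. 
with thread-MPI (a sketch; the -pin and -pinoffset options exist in recent mdrun 
versions, check mdrun -h for yours):

  # first job on logical cores 0-7, second job starting at core 8
  nohup mdrun -nt 8 -pin on -pinoffset 0 -v -deffnm md1 >& log1 &
  nohup mdrun -nt 4 -pin on -pinoffset 8 -v -deffnm md2 >& log2 &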

Best,
  Carsten

> 
> Sorry for so many detailed questions, but your help on this will be highly 
> appreciated!
> 
> Thanks a lot,
> 
> Jianqing
> 
> 
> 


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/grubmueller/kutzner
http://www.mpibpc.mpg.de/grubmueller/sppexa

