Hi Max,

On 18 Dec 2014, at 15:30, Ebert Maximilian <m.eb...@umontreal.ca> wrote:

> Dear list,
> 
> I am benchmarking my system on a GPU cluster with 6 GPUs and two quad-core 
> CPUs per node. First, I am wondering whether there is any output that 
> confirms how many CPUs and GPUs were used during the run. I find the output 
> for GPUs in the log file, but only for a single node. When I use multiple 
> nodes, why don't the other nodes show up in the log file as hosts? For 
> instance, in this example I used two nodes and requested 4 GPUs on each, but 
> got this in my log file:
> 
> 6 GPUs detected on host ngpu-a4-01:
>  #0: NVIDIA GeForce GTX 580, compute cap.: 2.0, ECC:  no, stat: compatible
>  #1: NVIDIA GeForce GTX 580, compute cap.: 2.0, ECC:  no, stat: compatible
>  #2: NVIDIA GeForce GTX 580, compute cap.: 2.0, ECC:  no, stat: compatible
>  #3: NVIDIA GeForce GTX 580, compute cap.: 2.0, ECC:  no, stat: compatible
>  #4: NVIDIA GeForce GTX 580, compute cap.: 2.0, ECC:  no, stat: compatible
>  #5: NVIDIA GeForce GTX 580, compute cap.: 2.0, ECC:  no, stat: compatible
> 
> 4 GPUs auto-selected for this run.
> Mapping of GPUs to the 4 PP ranks in this node: #0, #1, #2, #3
This output is only printed for the first node, but the selection and
mapping will be the same across all nodes. GROMACS will refuse to run if
any of your other nodes does not have enough GPUs.
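If you want to double-check this yourself, you could query every node of
the job while it is running. A minimal sketch, assuming a PBS/TORQUE setup
like yours (pbsdsh -u runs a command once per allocated node; the path to
nvidia-smi may differ on your cluster, since pbsdsh does not source your
login environment):

  # show GPU utilization on each node of the job
  pbsdsh -u /usr/bin/nvidia-smi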

> 
> ngpu-a4-02 is not shown here. Any idea? The job was submitted in the 
> following way:
> 
> qsub -q @test -lnodes=2:ppn=4 -lwalltime=1:00:00 gromacs_run_gpu
> 
> and the gromacs_run_gpu file:
> 
> #!/bin/csh
> #
> 
> #PBS -o result_run10ns96-8.dat
> #PBS -j oe
> #PBS -W umask=022
> #PBS -r n
> 
> cd 8_gpu
> 
> module add CUDA
> module load gromacs/5.0.1-gpu
> 
> mpirun gmx_mpi mdrun -v -x -deffnm 10ns_rep1-8GPU
> 
> 
> Another question I had: how can I define the number of CPUs and check 
> whether they were really used?
Use -ntomp to control how many OpenMP threads each of your MPI processes
will have. That way you can make use of all the cores on each node.
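For your hardware (two quad-core CPUs, i.e. 8 cores per node, with 4 GPUs
in use per node), a sketch could look like this; note that the -np value
assumes your mpirun starts the total number of ranks across both nodes:

  # 4 PP ranks per node (one per GPU), 2 OpenMP threads per rank,
  # so that all 8 cores of each node are busy (2 nodes -> 8 ranks)
  mpirun -np 8 gmx_mpi mdrun -ntomp 2 -v -deffnm 10ns_rep1-8GPU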

> I can’t find any information about the number of CPUs in the log file.
Look for 
“Using … MPI processes”
“Using … OpenMP threads per MPI process”
in the log file.
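A quick way to pull these lines out, assuming the log name produced by
your -deffnm run:

  grep "Using" 10ns_rep1-8GPU.log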

> I would also like to try combinations like 4 CPUs + 1 GPU
You can use the -gpu_id switch to supply a list of eligible GPUs (see mdrun -h).
If you just want to use the first GPU on your node with, e.g., 4 MPI processes,
use -gpu_id 0000.
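For combinations like the ones you are after, hedged sketches might look
like this (rank counts are per node; adapt -np to how your mpirun counts
ranks across nodes):

  # 4 cores + 1 GPU: 4 PP ranks all sharing GPU 0
  mpirun -np 4 gmx_mpi mdrun -ntomp 1 -gpu_id 0000 -deffnm 10ns_rep1-8GPU

  # 2 cores + 2 GPUs: 2 PP ranks, one on GPU 0 and one on GPU 1
  mpirun -np 2 gmx_mpi mdrun -ntomp 1 -gpu_id 01 -deffnm 10ns_rep1-8GPU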

Best,
  Carsten



> or 2 CPUs + 2 GPUs. How do I set this up?
> 
> Thank you very much for your help,
> 
> Max
> 


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/grubmueller/kutzner
http://www.mpibpc.mpg.de/grubmueller/sppexa
