Re: [gmx-users] GPU job failed

2017-08-02 Thread Mark Abraham
Hi,

My first guess is that the implementation of PLUMED doesn't support this.
Does a normal non-PLUMED simulation run correctly when called in this
manner?
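
For example, a minimal test would be your same command with only the
-plumed flag dropped (everything else unchanged):

  mpirun -np 4 gmx_mpi mdrun -v -g 7.log -s 7.tpr -x 7.xtc -c 7.gro -e \
    7.edr -ntomp 2 -gpu_id 0123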

Mark

On Wed, Aug 2, 2017 at 9:55 AM Albert wrote:

> Hello,
>
> I am trying to run Gromacs with the following command line:
>
>
>   mpirun -np 4 gmx_mpi mdrun -v -g 7.log -s 7.tpr -x 7.xtc -c 7.gro -e
> 7.edr -plumed plumed.dat -ntomp 2 -gpu_id 0123
>
> but it always failed with the following messages:
>
> Running on 1 node with total 24 cores, 48 logical cores, 4 compatible GPUs
> Hardware detected on host cudaC.europe.actelion.com (the node of MPI
> rank 0):
>CPU info:
>  Vendor: GenuineIntel
>  Brand:  Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz
>  SIMD instructions most likely to fit this hardware: AVX2_256
>  SIMD instructions selected at GROMACS compile time: AVX2_256
>GPU info:
>  Number of GPUs detected: 4
>  #0: NVIDIA GeForce GTX TITAN X, compute cap.: 5.2, ECC:  no, stat:
> compatible
>  #1: NVIDIA GeForce GTX TITAN X, compute cap.: 5.2, ECC:  no, stat:
> compatible
>  #2: NVIDIA GeForce GTX TITAN X, compute cap.: 5.2, ECC:  no, stat:
> compatible
>  #3: NVIDIA GeForce GTX TITAN X, compute cap.: 5.2, ECC:  no, stat:
> compatible
>
> Reading file 7.tpr, VERSION 5.1.3 (single precision)
> Changing nstlist from 20 to 40, rlist from 1.02 to 1.08
>
> Using 4 MPI processes
> Using 2 OpenMP threads per MPI process
>
> On host cudaC.europe.actelion.com 4 compatible GPUs are present, with
> IDs 0,1,2,3
> On host cudaC.europe.actelion.com 4 GPUs auto-selected for this run.
> Mapping of GPU IDs to the 4 PP ranks in this node: 0,1,2,3
>
>
> ---
> Program gmx mdrun, VERSION 5.1.3
> Source code file:
> /home/albert/Downloads/gromacs/gromacs-5.1.3/src/gromacs/mdlib/nbnxn_cuda/
> nbnxn_cuda_data_mgmt.cu,
> line: 403
>
> Fatal error:
> cudaCreateTextureObject on nbfp_texobj failed: invalid argument
>
> For more information and tips for troubleshooting, please check the GROMACS
> website at http://www.gromacs.org/Documentation/Errors
> ---
>
> Does anybody have any idea what's happening?
>
> Thanks a lot.
>


[gmx-users] GPU job failed

2017-08-02 Thread Albert

Hello,

I am trying to run Gromacs with the following command line:


 mpirun -np 4 gmx_mpi mdrun -v -g 7.log -s 7.tpr -x 7.xtc -c 7.gro -e 
7.edr -plumed plumed.dat -ntomp 2 -gpu_id 0123


but it always failed with the following messages:

Running on 1 node with total 24 cores, 48 logical cores, 4 compatible GPUs
Hardware detected on host cudaC.europe.actelion.com (the node of MPI 
rank 0):

  CPU info:
Vendor: GenuineIntel
Brand:  Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz
SIMD instructions most likely to fit this hardware: AVX2_256
SIMD instructions selected at GROMACS compile time: AVX2_256
  GPU info:
Number of GPUs detected: 4
#0: NVIDIA GeForce GTX TITAN X, compute cap.: 5.2, ECC:  no, stat: 
compatible
#1: NVIDIA GeForce GTX TITAN X, compute cap.: 5.2, ECC:  no, stat: 
compatible
#2: NVIDIA GeForce GTX TITAN X, compute cap.: 5.2, ECC:  no, stat: 
compatible
#3: NVIDIA GeForce GTX TITAN X, compute cap.: 5.2, ECC:  no, stat: 
compatible


Reading file 7.tpr, VERSION 5.1.3 (single precision)
Changing nstlist from 20 to 40, rlist from 1.02 to 1.08

Using 4 MPI processes
Using 2 OpenMP threads per MPI process

On host cudaC.europe.actelion.com 4 compatible GPUs are present, with 
IDs 0,1,2,3

On host cudaC.europe.actelion.com 4 GPUs auto-selected for this run.
Mapping of GPU IDs to the 4 PP ranks in this node: 0,1,2,3


---
Program gmx mdrun, VERSION 5.1.3
Source code file: 
/home/albert/Downloads/gromacs/gromacs-5.1.3/src/gromacs/mdlib/nbnxn_cuda/nbnxn_cuda_data_mgmt.cu, 
line: 403


Fatal error:
cudaCreateTextureObject on nbfp_texobj failed: invalid argument

For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
---

Does anybody have any idea what's happening?

Thanks a lot.



Re: [gmx-users] GPU job failed

2014-09-09 Thread Carsten Kutzner
Hi,

from the double output it looks like two identical mdruns, 
each with 1 PP process and 10 OpenMP threads, are started. 
Maybe there is something wrong with your MPI setup (did
you by mistake compile with thread-MPI instead of MPI?)
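
One quick way to check is the version header (a sketch; I am assuming
here that the 5.x -version output contains an "MPI library" line):

  mdrun_mpi -version | grep -i "MPI library"

A real-MPI build should report "MPI library: MPI", whereas a thread-MPI
build reports "MPI library: thread_mpi".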

Carsten


On 09 Sep 2014, at 09:06, Albert <mailmd2...@gmail.com> wrote:

 Here are more informations from log file:
 
 mpirun -np 2 mdrun_mpi -v -s npt2.tpr -c npt2.gro -x npt2.xtc -g
 npt2.log -gpu_id 01 -ntomp 0
 
 
 Number of hardware threads detected (20) does not match the number
 reported by OpenMP (10).
 Consider setting the launch configuration manually!
 
 Number of hardware threads detected (20) does not match the number
 reported by OpenMP (10).
 Consider setting the launch configuration manually!
 Reading file npt2.tpr, VERSION 5.0.1 (single precision)
 Reading file npt2.tpr, VERSION 5.0.1 (single precision)
 Using 1 MPI process
 Using 10 OpenMP threads
 
 2 GPUs detected on host cudaB:
 #0: NVIDIA GeForce GTX 780 Ti, compute cap.: 3.5, ECC: no, stat: compatible
 #1: NVIDIA GeForce GTX 780 Ti, compute cap.: 3.5, ECC: no, stat: compatible
 
 2 GPUs user-selected for this run.
 Mapping of GPUs to the 1 PP rank in this node: #0, #1
 
 
 ---
 Program mdrun_mpi, VERSION 5.0.1
 Source code file:
 /soft2/plumed-2.2/gromacs-5.0.1/src/gromacs/gmxlib/gmx_detect_hardware.c, 
 line:
 359
 
 Fatal error:
 Incorrect launch configuration: mismatching number of PP MPI processes
 and GPUs per node.
 mdrun_mpi was started with 1 PP MPI process per node, but you provided 2
 GPUs.
 For more information and tips for troubleshooting, please check the GROMACS
 website at http://www.gromacs.org/Documentation/Errors
 ---
 
 Halting program mdrun_mpi
 
 gcq#314: Do You Have Sex Maniacs or Schizophrenics or Astrophysicists
 in Your Family? (Gogol Bordello)
 
 --
 MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
 with errorcode -1.
 
 NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
 You may or may not see output from other processes, depending on
 exactly when Open MPI kills them.
 --
 Using 1 MPI process
 Using 10 OpenMP threads
 
 2 GPUs detected on host cudaB:
 #0: NVIDIA GeForce GTX 780 Ti, compute cap.: 3.5, ECC: no, stat: compatible
 #1: NVIDIA GeForce GTX 780 Ti, compute cap.: 3.5, ECC: no, stat: compatible
 
 2 GPUs user-selected for this run.
 Mapping of GPUs to the 1 PP rank in this node: #0, #1
 
 
 ---
 Program mdrun_mpi, VERSION 5.0.1
 Source code file:
 /soft2/plumed-2.2/gromacs-5.0.1/src/gromacs/gmxlib/gmx_detect_hardware.c, 
 line:
 359
 
 Fatal error:
 Incorrect launch configuration: mismatching number of PP MPI processes
 and GPUs per node.
 mdrun_mpi was started with 1 PP MPI process per node, but you provided 2
 GPUs.
 For more information and tips for troubleshooting, please check the GROMACS
 website at http://www.gromacs.org/Documentation/Errors
 ---
 
 Halting program mdrun_mpi
 
 gcq#56: Lunatics On Pogo Sticks (Red Hot Chili Peppers)
 
 --
 MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
 with errorcode -1.
 
 NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
 You may or may not see output from other processes, depending on
 exactly when Open MPI kills them.
 --

 On 09/08/2014 11:59 PM, Yunlong Liu wrote:
 Same idea with Szilard.
 
 How many nodes are you using?
 On each node, how many MPI ranks do you have? The error is complaining that
 you assigned two GPUs to a single MPI process on one node. If you spread your
 two MPI ranks across two nodes, you only have one rank on each node, and you
 can't assign two GPUs to a single MPI rank.

 How many GPUs do you have on one node? If there are two, you can launch two
 PP MPI processes on that node and assign the two GPUs to them. If you only
 want to launch one MPI rank on each node, assign just one GPU per node
 (e.g. with -gpu_id 0).
 
 Yunlong
 


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302

Re: [gmx-users] GPU job failed

2014-09-09 Thread Albert

Thank you for the reply.

I compiled it with the command:


env CC=mpicc CXX=mpicxx F77=mpif90 FC=mpif90 LDF90=mpif90 
CMAKE_PREFIX_PATH=/home/albert/install/intel-2013/mkl/include/fftw:/home/albert/install/intel-mpi/bin64 
cmake .. -DBUILD_SHARED_LIB=OFF -DBUILD_TESTING=OFF 
-DCMAKE_INSTALL_PREFIX=/home/albert/install/gromacs-5.0.1_plumed_2.2-intel 
-DGMX_MPI=ON -DGMX_GPU=ON -DGMX_PREFER_STATIC_LIBS=ON 
-DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-6.0
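
To check which MPI library the resulting binary is actually linked
against, something like this might help (a sketch; the install prefix is
taken from the cmake line above, and the binary name is assumed to be
mdrun_mpi):

  ldd /home/albert/install/gromacs-5.0.1_plumed_2.2-intel/bin/mdrun_mpi | grep -i mpi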




On 09/09/2014 09:16 AM, Carsten Kutzner wrote:

Hi,

from the double output it looks like two identical mdruns,
each with 1 PP process and 10 OpenMP threads, are started.
Maybe there is something wrong with your MPI setup (did
you by mistake compile with thread-MPI instead of MPI?)

Carsten




Re: [gmx-users] GPU job failed

2014-09-09 Thread Albert

I recompiled Gromacs-5.0.1 and it finally works now.
I probably made some mistakes in the previous build.

Thanks a lot, guys.

Regards,
Albert


On 09/09/2014 09:16 AM, Carsten Kutzner wrote:

Hi,

from the double output it looks like two identical mdruns,
each with 1 PP process and 10 OpenMP threads, are started.
Maybe there is something wrong with your MPI setup (did
you by mistake compile with thread-MPI instead of MPI?)

Carsten




[gmx-users] GPU job failed

2014-09-08 Thread Albert

Hello:

I am trying to use the following command in Gromacs-5.0.1:

mpirun -np 2 mdrun_mpi -v -s npt2.tpr -c npt2.gro -x npt2.xtc -g 
npt2.log -gpu_id 01 -ntomp 10



but it always failed with messages:


2 GPUs detected on host cudaB:
  #0: NVIDIA GeForce GTX 780 Ti, compute cap.: 3.5, ECC:  no, stat: 
compatible
  #1: NVIDIA GeForce GTX 780 Ti, compute cap.: 3.5, ECC:  no, stat: 
compatible


2 GPUs user-selected for this run.
Mapping of GPUs to the 1 PP rank in this node: #0, #1


---
Program mdrun_mpi, VERSION 5.0.1
Source code file: 
/soft2/plumed-2.2/gromacs-5.0.1/src/gromacs/gmxlib/gmx_detect_hardware.c, line: 
359


Fatal error:
Incorrect launch configuration: mismatching number of PP MPI processes 
and GPUs per node.
mdrun_mpi was started with 1 PP MPI process per node, but you provided 2 
GPUs.

For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors



However, this command works fine in Gromacs-4.6.5, and I don't know why 
it failed in 5.0.1. Does anybody have any idea?


thx a lot

Albert


Re: [gmx-users] GPU job failed

2014-09-08 Thread Szilárd Páll
Hi,

It looks like you're starting two ranks and passing two GPU IDs, so it
should work. The only thing I can think of is that you are either
getting the two MPI ranks placed on different nodes, or that for some
reason mpirun -np 2 is only starting one rank (MPI installation
broken?).

Does the same setup work with thread-MPI?
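
For example (a sketch, assuming a thread-MPI build of GROMACS 5.0 is
also installed as gmx; file names taken from your command):

  gmx mdrun -ntmpi 2 -ntomp 10 -gpu_id 01 -v -s npt2.tpr -c npt2.gro \
    -x npt2.xtc -g npt2.log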

Cheers,
--
Szilárd


On Mon, Sep 8, 2014 at 2:50 PM, Albert <mailmd2...@gmail.com> wrote:
 Hello:

 I am trying to use the following command in Gromacs-5.0.1:

 mpirun -np 2 mdrun_mpi -v -s npt2.tpr -c npt2.gro -x npt2.xtc -g npt2.log
 -gpu_id 01 -ntomp 10


 but it always failed with messages:


 2 GPUs detected on host cudaB:
   #0: NVIDIA GeForce GTX 780 Ti, compute cap.: 3.5, ECC:  no, stat:
 compatible
   #1: NVIDIA GeForce GTX 780 Ti, compute cap.: 3.5, ECC:  no, stat:
 compatible

 2 GPUs user-selected for this run.
 Mapping of GPUs to the 1 PP rank in this node: #0, #1


 ---
 Program mdrun_mpi, VERSION 5.0.1
 Source code file:
 /soft2/plumed-2.2/gromacs-5.0.1/src/gromacs/gmxlib/gmx_detect_hardware.c,
 line: 359

 Fatal error:
 Incorrect launch configuration: mismatching number of PP MPI processes and
 GPUs per node.
 mdrun_mpi was started with 1 PP MPI process per node, but you provided 2
 GPUs.
 For more information and tips for troubleshooting, please check the GROMACS
 website at http://www.gromacs.org/Documentation/Errors



 However, this command works fine in Gromacs-4.6.5, and I don't know why it
 failed in 5.0.1. Does anybody have any idea?

 thx a lot

 Albert


Re: [gmx-users] GPU job failed

2014-09-08 Thread Yunlong Liu
Same idea with Szilard.

How many nodes are you using?
On each node, how many MPI ranks do you have? The error is complaining that
you assigned two GPUs to a single MPI process on one node. If you spread your
two MPI ranks across two nodes, you only have one rank on each node, and you
can't assign two GPUs to a single MPI rank.

How many GPUs do you have on one node? If there are two, you can launch two
PP MPI processes on that node and assign the two GPUs to them. If you only
want to launch one MPI rank on each node, assign just one GPU per node
(e.g. with -gpu_id 0).
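
For example (a sketch reusing the file names from your earlier command;
untested):

  # two PP MPI ranks on the node, one GPU each:
  mpirun -np 2 mdrun_mpi -v -s npt2.tpr -c npt2.gro -x npt2.xtc \
    -g npt2.log -gpu_id 01 -ntomp 10

  # or a single rank with a single GPU:
  mpirun -np 1 mdrun_mpi -v -s npt2.tpr -c npt2.gro -x npt2.xtc \
    -g npt2.log -gpu_id 0 -ntomp 10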

Yunlong


 On Sep 8, 2014, at 5:35 PM, Szilárd Páll <pall.szil...@gmail.com> wrote:
 
 Hi,
 
 It looks like you're starting two ranks and passing two GPU IDs, so it
 should work. The only thing I can think of is that you are either
 getting the two MPI ranks placed on different nodes, or that for some
 reason mpirun -np 2 is only starting one rank (MPI installation
 broken?).
 
 Does the same setup work with thread-MPI?
 
 Cheers,
 --
 Szilárd
 
 
 On Mon, Sep 8, 2014 at 2:50 PM, Albert <mailmd2...@gmail.com> wrote:
 Hello:
 
 I am trying to use the following command in Gromacs-5.0.1:
 
 mpirun -np 2 mdrun_mpi -v -s npt2.tpr -c npt2.gro -x npt2.xtc -g npt2.log
 -gpu_id 01 -ntomp 10
 
 
 but it always failed with messages:
 
 
 2 GPUs detected on host cudaB:
  #0: NVIDIA GeForce GTX 780 Ti, compute cap.: 3.5, ECC:  no, stat:
 compatible
  #1: NVIDIA GeForce GTX 780 Ti, compute cap.: 3.5, ECC:  no, stat:
 compatible
 
 2 GPUs user-selected for this run.
 Mapping of GPUs to the 1 PP rank in this node: #0, #1
 
 
 ---
 Program mdrun_mpi, VERSION 5.0.1
 Source code file:
 /soft2/plumed-2.2/gromacs-5.0.1/src/gromacs/gmxlib/gmx_detect_hardware.c,
 line: 359
 
 Fatal error:
 Incorrect launch configuration: mismatching number of PP MPI processes and
 GPUs per node.
 mdrun_mpi was started with 1 PP MPI process per node, but you provided 2
 GPUs.
 For more information and tips for troubleshooting, please check the GROMACS
 website at http://www.gromacs.org/Documentation/Errors
 
 
 
 However, this command works fine in Gromacs-4.6.5, and I don't know why it
 failed in 5.0.1. Does anybody have any idea?
 
 thx a lot
 
 Albert


Re: [gmx-users] GPU job failed

2014-09-08 Thread Albert


Thanks a lot for the replies, Yunlong and Szilard.

I haven't set up a PBS system or separate nodes on this workstation. The GPU
workstation contains one CPU with 20 cores and two GPUs, so it is effectively
a single node with 2 GPUs.

But I don't know why 4.6.5 works and 5.0.1 doesn't ...

Thanks again for the replies.

Albert

On 09/08/2014 11:59 PM, Yunlong Liu wrote:

Same idea with Szilard.

How many nodes are you using?
On each node, how many MPI ranks do you have? The error is complaining that
you assigned two GPUs to a single MPI process on one node. If you spread your
two MPI ranks across two nodes, you only have one rank on each node, and you
can't assign two GPUs to a single MPI rank.

How many GPUs do you have on one node? If there are two, you can launch two
PP MPI processes on that node and assign the two GPUs to them. If you only
want to launch one MPI rank on each node, assign just one GPU per node
(e.g. with -gpu_id 0).

Yunlong

