Re: [gmx-users] Commands to run simulations using multiple GPU's in version 5.0.1

2014-09-24 Thread Johnny Lu
What happened when you ran without a GPU? I installed 5.0.1 on a single
machine without a GPU. It used thread-MPI, no real MPI, and ran fine.
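
(For reference, a minimal CPU-only test could look like the following,
assuming a default thread-MPI build; "-nb cpu" forces the nonbonded work
onto the CPU even when a GPU is present, and the .tpr name is just a
placeholder:)

# CPU-only run with 8 threads in total (thread-MPI ranks x OpenMP threads)
mdrun -s topol.tpr -nb cpu -nt 8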

Re: [gmx-users] Commands to run simulations using multiple GPU's in version 5.0.1

2014-09-24 Thread Johnny Lu
well... I think I read somewhere that thread-MPI is a drop-in
replacement for real MPI. OpenMPI is a real MPI, so the two shouldn't be
compatible.
I think we choose that when we compile GROMACS (whether to use real MPI or
not). Thread-MPI is enabled by default if we didn't compile GROMACS
with real MPI.

Oh, and if you use MPI when you compile, I think the binary should be
mdrun_mpi instead of just mdrun:

"-DGMX_MPI=on to build using an MPI

wrapper compiler" in (
http://www.gromacs.org/Documentation/Installation_Instructions#typical-gromacs-installation
)
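
(For example, a real-MPI build and launch could look like the sketch below;
the wrapper compiler names and the install step are assumptions, not taken
from this thread:)

# configure with an MPI wrapper compiler; this disables thread-MPI
cmake .. -DGMX_MPI=ON -DCMAKE_C_COMPILER=mpicc -DCMAKE_CXX_COMPILER=mpicxx
make && make install

# the MPI-enabled binary is typically installed as mdrun_mpi
mpirun -np 2 mdrun_mpi -s topol.tpr -gpu_id 01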

Re: [gmx-users] Commands to run simulations using multiple GPU's in version 5.0.1

2014-09-24 Thread Szilárd Páll
On Wed, Sep 24, 2014 at 5:57 PM, Siva Dasetty  wrote:
> Thank you again for the reply.
>
> -ntmpi is for thread-MPI, but I am using OpenMPI for MPI as I am planning
> to use multiple nodes.
> As I pointed out in case 7 of my post, if I use -ntmpi I get a fatal
> error that says: thread-MPI is requested, but GROMACS is not compiled
> with thread-MPI.
>
> My questions are:
> 1. Isn't thread-MPI enabled by default?

Yes, it is, but you could have checked that by running cmake and then
inspecting the resulting cache or the mdrun binary.

> 2. Are thread-MPI and OpenMPI mutually incompatible?

Yes. Again, that's not hard to figure out either; e.g.
* from the wiki page on parallelization: "Acting as a drop-in
replacement for MPI, thread-MPI enables..."
* from the cmake output:
$ cmake -DGMX_MPI=ON .
-- MPI is not compatible with thread-MPI. Disabling thread-MPI.
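
(For instance, something along these lines should show the configuration;
the cache variable names and version output are from memory, so treat it
as a sketch:)

# check the CMake cache of the build tree
grep -E 'GMX_MPI|GMX_THREAD_MPI' CMakeCache.txt

# or ask the binary itself; the version header should name the MPI library
mdrun -version | grep -i 'MPI library'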

> In any case, if I use mpirun -np 2 instead of -ntmpi, I still cannot use
> -ntomp, because GROMACS now automatically detects the environment setting
> for the OpenMP threads, which equals the number of available hardware
> threads, and this resulted in case 4 (please check above) of my post.

Your case 4 doesn't work simply because your environment contains
OMP_NUM_THREADS (set e.g. by your job script or scheduler) with a value
different from what you pass to -ntomp. You should do exactly what the
error message says.
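
(In other words, either of the following should be consistent; a sketch
assuming two ranks with four OpenMP threads each:)

# drop the environment variable and control the threads on the command line...
unset OMP_NUM_THREADS
mpirun -np 2 mdrun -s topol.tpr -ntomp 4 -gpu_id 01

# ...or set both to the same value
export OMP_NUM_THREADS=4
mpirun -np 2 mdrun -s topol.tpr -ntomp 4 -gpu_id 01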

> Is there any other command, similar to the one you posted above, that I
> can use with OpenMPI? Because it looks to me like thread-MPI and OpenMPI
> are not compatible.

The command-line interface is always the same, with the exception of the
"-ntmpi" argument, which is thread-MPI specific.
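
(So the real-MPI equivalent of the thread-MPI example from the
parallelization page would be along these lines; the rank count comes from
mpirun instead of -ntmpi, and the binary name depends on how the build was
configured:)

mpirun -np 2 mdrun -s topol.tpr -ntomp 4 -gpu_id 01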

--
Szilárd


Re: [gmx-users] Commands to run simulations using multiple GPU's in version 5.0.1

2014-09-24 Thread Siva Dasetty
Thank you again for the reply.

-ntmpi is for thread-MPI, but I am using OpenMPI for MPI as I am planning to
use multiple nodes.
As I pointed out in case 7 of my post, if I use -ntmpi I get a fatal
error that says: thread-MPI is requested, but GROMACS is not compiled
with thread-MPI.

My questions are:
1. Isn't thread-MPI enabled by default?
2. Are thread-MPI and OpenMPI mutually incompatible?

In any case, if I use mpirun -np 2 instead of -ntmpi, I still cannot use
-ntomp, because GROMACS now automatically detects the environment setting
for the OpenMP threads, which equals the number of available hardware
threads, and this resulted in case 4 (please check above) of my post.

Is there any other command, similar to the one you posted above, that I can
use with OpenMPI? Because it looks to me like thread-MPI and OpenMPI are not
compatible.

Thanks,


Re: [gmx-users] Commands to run simulations using multiple GPU's in version 5.0.1

2014-09-24 Thread Johnny Lu
found it.
http://www.gromacs.org/Documentation/Acceleration_and_parallelization

GPUs are assigned to PP ranks within the same physical node in sequential
order, that is, GPU 0 to (thread-)MPI rank 0, GPU 1 to rank 1. In order
to manually specify which GPU(s) are to be used by mdrun, the respective
device ID(s) can be passed with the -gpu_id XYZ command line option or with
the GMX_GPU_ID=XYZ environment variable. Here, XYZ is a sequence of digits
representing the numeric IDs of available GPUs (the numbering starts from
0). The environment variable is particularly useful when running on
multiple compute nodes with different GPU configurations.

Taking the above example of an 8-core machine with two compatible GPUs, we
can manually specify the GPUs and get the same launch configuration as in
the above examples with:

mdrun -ntmpi 2 -ntomp 4 -gpu_id 01
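
(Presumably the same per-node mapping carries over to real MPI, e.g. on two
nodes with two GPUs each; "-npernode" is an OpenMPI option and the binary
name is an assumption:)

# 4 ranks over 2 nodes; -gpu_id is interpreted per node, so on each node
# rank 0 gets GPU 0 and rank 1 gets GPU 1
mpirun -np 4 -npernode 2 mdrun_mpi -s topol.tpr -ntomp 4 -gpu_id 01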

Re: [gmx-users] Commands to run simulations using multiple GPU's in version 5.0.1

2014-09-24 Thread Johnny Lu
Actually, I am trying to find the answer to the same question now.

The manual (4.6.7, appendix D, mdrun entry) says:

-gpu_id (string)
List of GPU device id-s to use; specifies the per-node PP rank to GPU
mapping
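
(If I read that right, the ID string can also repeat digits to share one
GPU among several PP ranks; my guess, untested:)

# 4 thread-MPI ranks on 2 GPUs: ranks 0,1 -> GPU 0, ranks 2,3 -> GPU 1
mdrun -s topol.tpr -ntmpi 4 -gpu_id 0011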


Re: [gmx-users] Commands to run simulations using multiple GPU's in version 5.0.1

2014-09-23 Thread Siva Dasetty
Thank you Lu for the reply.

As I mentioned in the post, I have already tried those options, but they
didn't work. Please let me know if you have any more suggestions.

Thank you,

-- 
Siva


Re: [gmx-users] Commands to run simulations using multiple GPU's in version 5.0.1

2014-09-23 Thread Johnny Lu
Try -nt, -ntmpi, -ntomp, -np (one at a time)?
I forget what I tried now... but I just stopped the mdrun and then
read the log file.
You can also look at the mdrun page in the official manual (PDF) and try
this page:
http://www.gromacs.org/Documentation/Gromacs_Utilities/mdrun?highlight=mdrun


[gmx-users] Commands to run simulations using multiple GPU's in version 5.0.1

2014-09-22 Thread Siva Dasetty
Dear All,

I am trying to run NPT simulations using GROMACS version 5.0.1 on a system
of 140k atoms (protein + water) with 2 or more GPUs (model K20), 8 or more
cores, and 1 or more nodes. I am trying to understand how to run
simulations using multiple GPUs on more than one node. I get the following
errors/output when I run the simulation using the following commands:

Note: time-step used = 2 fs and total number of steps = 2

The first 4 cases use a single GPU; cases 5-8 use 2 GPUs.

1. 1 node, 8 cpus, 1 gpu
export OMP_NUM_THREADS=8
command used: mdrun -s topol.tpr -gpu_id 0
Speed: 5.8 ns/day

2. 1 node, 8 cpus, 1 gpu
export OMP_NUM_THREADS=16
command used: mdrun -s topol.tpr -gpu_id 0
Speed: 4.7 ns/day

3. 1 node, 8 cpus, 1 gpu
command used: mdrun -s topol.tpr -ntomp 8 -gpu_id 0
Speed: 5.876 ns/day

4. 1 node, 8 cpus, 1 gpu
command used: mdrun -s topol.tpr -ntomp 16 -gpu_id 0
Fatal error: "Environment variable OMP_NUM_THREADS (8) and the number of
threads requested on the command line (16) have different values. Either
omit one, or set them both to the same value."

Question for 3 and 4: Do I always need to set OMP_NUM_THREADS, or is there
a way -ntomp overrides the environment settings?


5. 1 node, 8 cpus, 2 gpus
export OMP_NUM_THREADS=8
mpirun -np 2 mdrun -s topol.tpr -pin on -gpu_id 01
Speed: 4.044 ns/day

6. 2 nodes, 8 cpus, 2 gpus
export OMP_NUM_THREADS=8
mpirun -np 2 mdrun -s topol.tpr -pin on -gpu_id 01
Speed: 3.0 ns/day

Are the commands that I used for 5 and 6 correct?

7. I also used (1 node, 8 cpus, 2 gpus)
mdrun -s topol.tpr -ntmpi 2 -ntomp 8 -gpu_id 01
but this time I get a fatal error: thread-MPI is requested, but GROMACS
is not compiled with thread-MPI.

Question: Isn't thread-MPI enabled by default?

8. Finally, I recompiled GROMACS without OpenMP and re-ran case 1, but this
time there is a fatal error: "More than 1 OpenMP thread requested, but
GROMACS was compiled without OpenMP support."
command: mdrun -s topol.tpr -gpu_id 0 (no environment settings)
Question: Here again, I assumed thread-MPI is enabled by default, and I
think GROMACS still assumes OpenMP thread settings. Am I doing something
wrong here?

Thanks in advance for your help

-- 
Siva
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.