Re: [gmx-users] gromacs performance

2019-03-08 Thread Mark Abraham
Hi,

In particular, you only want to consider comparing performance on the
production MD setup. Minimization is basically a different piece of code,
and typically a negligible fraction of the workload. Benson's point about
Ethernet is fairly sound, however. You can do ok with care, but you can't
just pick two nodes out of a farm and expect good performance while
synchronizing every millisecond or so.
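For example, a quick single-node benchmark of just the production step could look
something like this (a rough sketch, assuming a thread-MPI build and an existing
production .tpr called md.tpr here; adjust the thread counts to your hardware):

# short timed run; reset the timers halfway so startup and load balancing are
# excluded, and skip writing the final configuration
gmx mdrun -s md.tpr -deffnm bench \
    -nsteps 10000 -resethway -noconfout \
    -ntmpi 8 -ntomp 8

# the ns/day figure to compare is at the end of the log
tail -n 20 bench.log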

Mark

On Fri., 8 Mar. 2019, 23:55 Carlos Rivas,  wrote:

> Thank you.
> I will try that.
>
> CJ
>
> -Original Message-
> From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se <
> gromacs.org_gmx-users-boun...@maillist.sys.kth.se> On Behalf Of Benson
> Muite
> Sent: Friday, March 8, 2019 4:51 PM
> To: gmx-us...@gromacs.org
> Subject: Re: [gmx-users] gromacs performance
>
> Communication on most instances will use some form of Ethernet, so unless
> carefully set up it will have rather high latency - thread-MPI is quite well
> optimized for a single node. Perhaps check performance on a single V100
> GPU coupled with a decent CPU and compare that to p3.dn24xlarge to
> determine the best place to run. The usual GROMACS pipeline involves several
> steps, so just measuring one part may not be reflective of the typical workflow.
>
>
> On 3/9/19 12:40 AM, Carlos Rivas wrote:
> > Benson,
> > When I was testing on a single machine, performance was moving by leaps
> and bounds, like this:
> >
> > -- 2 hours on a c5.2xlarge
> > -- 68 minutes on a p2.xlarge
> > -- 18 minutes on a p3.2xlarge
> > -- 7 minutes on a p3.dn24xlarge
> >
> > It's only when I switched to using clusters that things went downhill
> and I haven't been able to beat the above numbers by throwing more CPUs and
> GPUs at it.
> >
> > CJ
> >
> >
> > -Original Message-
> > From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se
> >  On Behalf Of
> > Benson Muite
> > Sent: Friday, March 8, 2019 4:19 PM
> > To: gromacs.org_gmx-users@maillist.sys.kth.se
> > Subject: Re: [gmx-users] gromacs performance
> >
> > You seem to be using a relatively large number of GPUs. May want to
> check your input data (many cases will not scale well, but ensemble runs
> can be quite common). Perhaps check speedup in going from 1 to 2 to 4 GPUs
> on one node.
> >
> > On 3/9/19 12:11 AM, Carlos Rivas wrote:
> >> Hey guys,
> >> Anybody running GROMACS on AWS ?
> >>
> >> I have a strong IT background , but zero understanding of GROMACS or
> >> OpenMPI. ( even less using sge on AWS ), Just trying to help some PHD
> Folks with their work.
> >>
> >> When I run gromacs using Thread-mpi on a single, very large node on AWS
> things work fairly fast.
> >> However, when I switch from thread-mpi to OpenMPI even though
> everything's detected properly, the performance is horrible.
> >>
> >> This is what I am submitting to sge:
> >>
> >> ubuntu@ip-10-10-5-81:/shared/charmm-gui/gromacs$ cat sge.sh
> >> #!/bin/bash # #$ -cwd #$ -j y #$ -S /bin/bash #$ -e out.err #$ -o
> >> out.out #$ -pe mpi 256
> >>
> >> cd /shared/charmm-gui/gromacs
> >> touch start.txt
> >> /bin/bash /shared/charmm-gui/gromacs/run_eq.bash
> >> touch end.txt
> >>
> >> and this is my test script , provided by one of the Doctors:
> >>
> >> ubuntu@ip-10-10-5-81:/shared/charmm-gui/gromacs$ cat run_eq.bash
> >> #!/bin/bash export GMXMPI="/usr/bin/mpirun --mca btl ^openib
> >> /shared/gromacs/5.1.5/bin/gmx_mpi"
> >>
> >> export MDRUN="mdrun -ntomp 2 -npme 32"
> >>
> >> export GMX="/shared/gromacs/5.1.5/bin/gmx_mpi"
> >>
> >> for comm in min eq; do
> >> if [ $comm == min ]; then
> >>  echo ${comm}
> >>  $GMX grompp -f step6.0_minimization.mdp -o
> step6.0_minimization.tpr -c step5_charmm2gmx.pdb -p topol.top
> >>  $GMXMPI $MDRUN -deffnm step6.0_minimization
> >>
> >> fi
> >>
> >> if [ $comm == eq ]; then
> >> for step in `seq 1 6`;do
> >>  echo $step
> >>  if [ $step -eq 1 ]; then
> >> echo ${step}
> >> $GMX grompp -f step6.${step}_equilibration.mdp -o
> step6.${step}_equilibration.tpr -c step6.0_minimization.gro -r
> step5_charmm2gmx.pdb -n index.ndx -p topol.top
> >> $GMXMPI $MDRUN -deffnm step6.${step}_equilibration
> >>  fi
> >>  if [ $step -gt 1 ]; then
> >> old=`expr $step - 1`
> >> echo $old
> >> $GMX grompp -f step6.${step}_equilibration.mdp -o
> step6.${step}_equilibration.tpr -c step6.${old}_equilibration.gro -r
> step5_charmm2gmx.pdb -n index.ndx -p topol.top
> >> $GMXMPI $MDRUN -deffnm step6.${step}_equilibration
> >>  fi
> >> done
> >> fi
> >> done
> >>
> >>
> >>
> >>
> >> during the output, I see this , and I get really excited, expecting
> blazing speeds and yet, it's much worse than a single node:
> >>
> >> Command line:
> >> gmx_mpi mdrun -ntomp 2 -npme 32 -deffnm step6.0_minimization
> >>
> >>
> >> Back Off! I just backed up step6.0_minimization.log to
> >> ./#step6.0_minimization.log.6#
> >>
> >> Running on 4 nodes with total 128 cores, 256 logical cores, 32
> compatible GPUs
> >> Cores per node:

Re: [gmx-users] gromacs performance

2019-03-08 Thread Carlos Rivas
Thank you.
I will try that.

CJ

-Original Message-
From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
 On Behalf Of Benson Muite
Sent: Friday, March 8, 2019 4:51 PM
To: gmx-us...@gromacs.org
Subject: Re: [gmx-users] gromacs performance

Communication on most instances will use some form of Ethernet, so unless
carefully set up it will have rather high latency - thread-MPI is quite well
optimized for a single node. Perhaps check performance on a single V100 GPU
coupled with a decent CPU and compare that to p3.dn24xlarge to determine the
best place to run. The usual GROMACS pipeline involves several steps, so just
measuring one part may not be reflective of the typical workflow.
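For example, a single-V100 reference run could be launched roughly like this (a
sketch, assuming a thread-MPI build and a production .tpr named md.tpr; a
p3.2xlarge exposes exactly one V100):

# one PP rank pinned to the single GPU, remaining cores as OpenMP threads
gmx mdrun -s md.tpr -deffnm single_v100 \
    -ntmpi 1 -ntomp 8 -gpu_id 0 \
    -nsteps 10000 -resethway -noconfout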


On 3/9/19 12:40 AM, Carlos Rivas wrote:
> Benson,
> When I was testing on a single machine, performance was moving by leaps and 
> bounds, like this:
>
> -- 2 hours on a c5.2xlarge
> -- 68 minutes on a p2.xlarge
> -- 18 minutes on a p3.2xlarge
> -- 7 minutes on a p3.dn24xlarge
>
> It's only when I switched to using clusters that things went downhill and I 
> haven't been able to beat the above numbers by throwing more CPUs and GPUs at 
> it.
>
> CJ
>
>
> -Original Message-
> From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
>  On Behalf Of 
> Benson Muite
> Sent: Friday, March 8, 2019 4:19 PM
> To: gromacs.org_gmx-users@maillist.sys.kth.se
> Subject: Re: [gmx-users] gromacs performance
>
> You seem to be using a relatively large number of GPUs. May want to check 
> your input data (many cases will not scale well, but ensemble runs can be 
> quite common). Perhaps check speedup in going from 1 to 2 to 4 GPUs on one 
> node.
>
> On 3/9/19 12:11 AM, Carlos Rivas wrote:
>> Hey guys,
>> Anybody running GROMACS on AWS ?
>>
>> I have a strong IT background , but zero understanding of GROMACS or 
>> OpenMPI. ( even less using sge on AWS ), Just trying to help some PHD Folks 
>> with their work.
>>
>> When I run gromacs using Thread-mpi on a single, very large node on AWS 
>> things work fairly fast.
>> However, when I switch from thread-mpi to OpenMPI even though everything's 
>> detected properly, the performance is horrible.
>>
>> This is what I am submitting to sge:
>>
>> ubuntu@ip-10-10-5-81:/shared/charmm-gui/gromacs$ cat sge.sh 
>> #!/bin/bash # #$ -cwd #$ -j y #$ -S /bin/bash #$ -e out.err #$ -o 
>> out.out #$ -pe mpi 256
>>
>> cd /shared/charmm-gui/gromacs
>> touch start.txt
>> /bin/bash /shared/charmm-gui/gromacs/run_eq.bash
>> touch end.txt
>>
>> and this is my test script , provided by one of the Doctors:
>>
>> ubuntu@ip-10-10-5-81:/shared/charmm-gui/gromacs$ cat run_eq.bash 
>> #!/bin/bash export GMXMPI="/usr/bin/mpirun --mca btl ^openib 
>> /shared/gromacs/5.1.5/bin/gmx_mpi"
>>
>> export MDRUN="mdrun -ntomp 2 -npme 32"
>>
>> export GMX="/shared/gromacs/5.1.5/bin/gmx_mpi"
>>
>> for comm in min eq; do
>> if [ $comm == min ]; then
>>  echo ${comm}
>>  $GMX grompp -f step6.0_minimization.mdp -o step6.0_minimization.tpr -c 
>> step5_charmm2gmx.pdb -p topol.top
>>  $GMXMPI $MDRUN -deffnm step6.0_minimization
>>
>> fi
>>
>> if [ $comm == eq ]; then
>> for step in `seq 1 6`;do
>>  echo $step
>>  if [ $step -eq 1 ]; then
>> echo ${step}
>> $GMX grompp -f step6.${step}_equilibration.mdp -o 
>> step6.${step}_equilibration.tpr -c step6.0_minimization.gro -r 
>> step5_charmm2gmx.pdb -n index.ndx -p topol.top
>> $GMXMPI $MDRUN -deffnm step6.${step}_equilibration
>>  fi
>>  if [ $step -gt 1 ]; then
>> old=`expr $step - 1`
>> echo $old
>> $GMX grompp -f step6.${step}_equilibration.mdp -o 
>> step6.${step}_equilibration.tpr -c step6.${old}_equilibration.gro -r 
>> step5_charmm2gmx.pdb -n index.ndx -p topol.top
>> $GMXMPI $MDRUN -deffnm step6.${step}_equilibration
>>  fi
>> done
>> fi
>> done
>>
>>
>>
>>
>> during the output, I see this , and I get really excited, expecting blazing 
>> speeds and yet, it's much worse than a single node:
>>
>> Command line:
>> gmx_mpi mdrun -ntomp 2 -npme 32 -deffnm step6.0_minimization
>>
>>
>> Back Off! I just backed up step6.0_minimization.log to 
>> ./#step6.0_minimization.log.6#
>>
>> Running on 4 nodes with total 128 cores, 256 logical cores, 32 compatible 
>> GPUs
>> Cores per node:   32
>> Logical cores per node:   64
>> Compatible GPUs per node:  8
>> All nodes have identical type(s) of GPUs Hardware detected on 
>> host
>> ip-10-10-5-89 (the node of MPI rank 0):
>> CPU info:
>>   Vendor: GenuineIntel
>>   Brand:  Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz
>>   SIMD instructions most likely to fit this hardware: AVX2_256
>>   SIMD instructions selected at GROMACS compile time: AVX2_256
>> GPU info:
>>   Number of GPUs detected: 8
>>   #0: NVIDIA Tesla V100-SXM2-16GB, compute cap.: 7.0, ECC: yes, stat: 
>> compatible
>>   #1: NVIDIA Tesla V100-SXM2-16GB, compute cap.: 7.0, ECC: yes, stat: 
>> compatible
>>   #2: NVIDIA 

Re: [gmx-users] gromacs performance

2019-03-08 Thread Carlos Rivas
Benson,
When I was testing on a single machine, performance was moving by leaps and 
bounds, like this:

-- 2 hours on a c5.2xlarge
-- 68 minutes on a p2.xlarge
-- 18 minutes on a p3.2xlarge
-- 7 minutes on a p3.dn24xlarge

It's only when I switched to using clusters that things went downhill and I 
haven't been able to beat the above numbers by throwing more CPUs and GPUs at 
it.

CJ


-Original Message-
From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
 On Behalf Of Benson Muite
Sent: Friday, March 8, 2019 4:19 PM
To: gromacs.org_gmx-users@maillist.sys.kth.se
Subject: Re: [gmx-users] gromacs performance

You seem to be using a relatively large number of GPUs. May want to check your 
input data (many cases will not scale well, but ensemble runs can be quite 
common). Perhaps check speedup in going from 1 to 2 to 4 GPUs on one node.

On 3/9/19 12:11 AM, Carlos Rivas wrote:
> Hey guys,
> Anybody running GROMACS on AWS ?
>
> I have a strong IT background , but zero understanding of GROMACS or 
> OpenMPI. ( even less using sge on AWS ), Just trying to help some PHD Folks 
> with their work.
>
> When I run gromacs using Thread-mpi on a single, very large node on AWS 
> things work fairly fast.
> However, when I switch from thread-mpi to OpenMPI even though everything's 
> detected properly, the performance is horrible.
>
> This is what I am submitting to sge:
>
> ubuntu@ip-10-10-5-81:/shared/charmm-gui/gromacs$ cat sge.sh 
> #!/bin/bash # #$ -cwd #$ -j y #$ -S /bin/bash #$ -e out.err #$ -o 
> out.out #$ -pe mpi 256
>
> cd /shared/charmm-gui/gromacs
> touch start.txt
> /bin/bash /shared/charmm-gui/gromacs/run_eq.bash
> touch end.txt
>
> and this is my test script , provided by one of the Doctors:
>
> ubuntu@ip-10-10-5-81:/shared/charmm-gui/gromacs$ cat run_eq.bash 
> #!/bin/bash export GMXMPI="/usr/bin/mpirun --mca btl ^openib 
> /shared/gromacs/5.1.5/bin/gmx_mpi"
>
> export MDRUN="mdrun -ntomp 2 -npme 32"
>
> export GMX="/shared/gromacs/5.1.5/bin/gmx_mpi"
>
> for comm in min eq; do
> if [ $comm == min ]; then
> echo ${comm}
> $GMX grompp -f step6.0_minimization.mdp -o step6.0_minimization.tpr -c 
> step5_charmm2gmx.pdb -p topol.top
> $GMXMPI $MDRUN -deffnm step6.0_minimization
>
> fi
>
> if [ $comm == eq ]; then
>for step in `seq 1 6`;do
> echo $step
> if [ $step -eq 1 ]; then
>echo ${step}
>$GMX grompp -f step6.${step}_equilibration.mdp -o 
> step6.${step}_equilibration.tpr -c step6.0_minimization.gro -r 
> step5_charmm2gmx.pdb -n index.ndx -p topol.top
>$GMXMPI $MDRUN -deffnm step6.${step}_equilibration
> fi
> if [ $step -gt 1 ]; then
>old=`expr $step - 1`
>echo $old
>$GMX grompp -f step6.${step}_equilibration.mdp -o 
> step6.${step}_equilibration.tpr -c step6.${old}_equilibration.gro -r 
> step5_charmm2gmx.pdb -n index.ndx -p topol.top
>$GMXMPI $MDRUN -deffnm step6.${step}_equilibration
> fi
>done
> fi
> done
>
>
>
>
> during the output, I see this , and I get really excited, expecting blazing 
> speeds and yet, it's much worse than a single node:
>
> Command line:
>gmx_mpi mdrun -ntomp 2 -npme 32 -deffnm step6.0_minimization
>
>
> Back Off! I just backed up step6.0_minimization.log to 
> ./#step6.0_minimization.log.6#
>
> Running on 4 nodes with total 128 cores, 256 logical cores, 32 compatible GPUs
>Cores per node:   32
>Logical cores per node:   64
>Compatible GPUs per node:  8
>All nodes have identical type(s) of GPUs Hardware detected on host 
> ip-10-10-5-89 (the node of MPI rank 0):
>CPU info:
>  Vendor: GenuineIntel
>  Brand:  Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz
>  SIMD instructions most likely to fit this hardware: AVX2_256
>  SIMD instructions selected at GROMACS compile time: AVX2_256
>GPU info:
>  Number of GPUs detected: 8
>  #0: NVIDIA Tesla V100-SXM2-16GB, compute cap.: 7.0, ECC: yes, stat: 
> compatible
>  #1: NVIDIA Tesla V100-SXM2-16GB, compute cap.: 7.0, ECC: yes, stat: 
> compatible
>  #2: NVIDIA Tesla V100-SXM2-16GB, compute cap.: 7.0, ECC: yes, stat: 
> compatible
>  #3: NVIDIA Tesla V100-SXM2-16GB, compute cap.: 7.0, ECC: yes, stat: 
> compatible
>  #4: NVIDIA Tesla V100-SXM2-16GB, compute cap.: 7.0, ECC: yes, stat: 
> compatible
>  #5: NVIDIA Tesla V100-SXM2-16GB, compute cap.: 7.0, ECC: yes, stat: 
> compatible
>  #6: NVIDIA Tesla V100-SXM2-16GB, compute cap.: 7.0, ECC: yes, stat: 
> compatible
>  #7: NVIDIA Tesla V100-SXM2-16GB, compute cap.: 7.0, ECC: yes, 
> stat: compatible
>
> Reading file step6.0_minimization.tpr, VERSION 5.1.5 (single 
> precision) Using 256 MPI processes Using 2 OpenMP threads per MPI 
> process
>
> On host ip-10-10-5-89 8 compatible GPUs are present, with IDs 
> 0,1,2,3,4,5,6,7 On host ip-10-10-5-89 8 GPUs auto-selected for this run.
> Mapping of GPU IDs to the 56 PP ranks in this node: 
> 

Re: [gmx-users] gromacs performance

2019-03-08 Thread Benson Muite
You seem to be using a relatively large number of GPUs. May want to 
check your input data (many cases will not scale well, but ensemble runs 
can be quite common). Perhaps check speedup in going from 1 to 2 to 4 
GPUs on one node.
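For example, a minimal scan on one node could look like the following (a sketch,
assuming a thread-MPI build and a production .tpr named md.tpr; the rank/thread
split is only a starting point):

# compare ns/day when using 1, 2 and 4 of the node's GPUs
for ids in 0 01 0123; do
    ngpu=${#ids}
    gmx mdrun -s md.tpr -deffnm scale_${ngpu}gpu \
        -ntmpi ${ngpu} -ntomp 8 -gpu_id ${ids} \
        -nsteps 10000 -resethway -noconfout
done
grep -H Performance scale_*gpu.log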


On 3/9/19 12:11 AM, Carlos Rivas wrote:

Hey guys,
Anybody running GROMACS on AWS ?

I have a strong IT background , but zero understanding of GROMACS or OpenMPI. ( 
even less using sge on AWS ),
Just trying to help some PHD Folks with their work.

When I run gromacs using Thread-mpi on a single, very large node on AWS things 
work fairly fast.
However, when I switch from thread-mpi to OpenMPI even though everything's 
detected properly, the performance is horrible.

This is what I am submitting to sge:

ubuntu@ip-10-10-5-81:/shared/charmm-gui/gromacs$ cat sge.sh
#!/bin/bash
#
#$ -cwd
#$ -j y
#$ -S /bin/bash
#$ -e out.err
#$ -o out.out
#$ -pe mpi 256

cd /shared/charmm-gui/gromacs
touch start.txt
/bin/bash /shared/charmm-gui/gromacs/run_eq.bash
touch end.txt

and this is my test script , provided by one of the Doctors:

ubuntu@ip-10-10-5-81:/shared/charmm-gui/gromacs$ cat run_eq.bash
#!/bin/bash
export GMXMPI="/usr/bin/mpirun --mca btl ^openib 
/shared/gromacs/5.1.5/bin/gmx_mpi"

export MDRUN="mdrun -ntomp 2 -npme 32"

export GMX="/shared/gromacs/5.1.5/bin/gmx_mpi"

for comm in min eq; do
if [ $comm == min ]; then
echo ${comm}
$GMX grompp -f step6.0_minimization.mdp -o step6.0_minimization.tpr -c 
step5_charmm2gmx.pdb -p topol.top
$GMXMPI $MDRUN -deffnm step6.0_minimization

fi

if [ $comm == eq ]; then
   for step in `seq 1 6`;do
echo $step
if [ $step -eq 1 ]; then
   echo ${step}
   $GMX grompp -f step6.${step}_equilibration.mdp -o 
step6.${step}_equilibration.tpr -c step6.0_minimization.gro -r 
step5_charmm2gmx.pdb -n index.ndx -p topol.top
   $GMXMPI $MDRUN -deffnm step6.${step}_equilibration
fi
if [ $step -gt 1 ]; then
   old=`expr $step - 1`
   echo $old
   $GMX grompp -f step6.${step}_equilibration.mdp -o 
step6.${step}_equilibration.tpr -c step6.${old}_equilibration.gro -r 
step5_charmm2gmx.pdb -n index.ndx -p topol.top
   $GMXMPI $MDRUN -deffnm step6.${step}_equilibration
fi
   done
fi
done




during the output, I see this , and I get really excited, expecting blazing 
speeds and yet, it's much worse than a single node:

Command line:
   gmx_mpi mdrun -ntomp 2 -npme 32 -deffnm step6.0_minimization


Back Off! I just backed up step6.0_minimization.log to 
./#step6.0_minimization.log.6#

Running on 4 nodes with total 128 cores, 256 logical cores, 32 compatible GPUs
   Cores per node:   32
   Logical cores per node:   64
   Compatible GPUs per node:  8
   All nodes have identical type(s) of GPUs
Hardware detected on host ip-10-10-5-89 (the node of MPI rank 0):
   CPU info:
 Vendor: GenuineIntel
 Brand:  Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz
 SIMD instructions most likely to fit this hardware: AVX2_256
 SIMD instructions selected at GROMACS compile time: AVX2_256
   GPU info:
 Number of GPUs detected: 8
 #0: NVIDIA Tesla V100-SXM2-16GB, compute cap.: 7.0, ECC: yes, stat: 
compatible
 #1: NVIDIA Tesla V100-SXM2-16GB, compute cap.: 7.0, ECC: yes, stat: 
compatible
 #2: NVIDIA Tesla V100-SXM2-16GB, compute cap.: 7.0, ECC: yes, stat: 
compatible
 #3: NVIDIA Tesla V100-SXM2-16GB, compute cap.: 7.0, ECC: yes, stat: 
compatible
 #4: NVIDIA Tesla V100-SXM2-16GB, compute cap.: 7.0, ECC: yes, stat: 
compatible
 #5: NVIDIA Tesla V100-SXM2-16GB, compute cap.: 7.0, ECC: yes, stat: 
compatible
 #6: NVIDIA Tesla V100-SXM2-16GB, compute cap.: 7.0, ECC: yes, stat: 
compatible
 #7: NVIDIA Tesla V100-SXM2-16GB, compute cap.: 7.0, ECC: yes, stat: 
compatible

Reading file step6.0_minimization.tpr, VERSION 5.1.5 (single precision)
Using 256 MPI processes
Using 2 OpenMP threads per MPI process

On host ip-10-10-5-89 8 compatible GPUs are present, with IDs 0,1,2,3,4,5,6,7
On host ip-10-10-5-89 8 GPUs auto-selected for this run.
Mapping of GPU IDs to the 56 PP ranks in this node: 
0,0,0,0,0,0,0,1,1,1,1,1,1,1,2,2,2,2,2,2,2,3,3,3,3,3,3,3,4,4,4,4,4,4,4,5,5,5,5,5,5,5,6,6,6,6,6,6,6,7,7,7,7,7,7,7



Any suggestions? Greatly appreciate the help.


Carlos J. Rivas
Senior AWS Solutions Architect - Migration Specialist




[gmx-users] gromacs performance

2019-03-08 Thread Carlos Rivas
Hey guys,
Anybody running GROMACS on AWS?

I have a strong IT background, but zero understanding of GROMACS or OpenMPI
(and even less of using SGE on AWS). I'm just trying to help some PhD folks
with their work.

When I run GROMACS using thread-MPI on a single, very large node on AWS, things
work fairly fast.
However, when I switch from thread-MPI to OpenMPI, even though everything is
detected properly, the performance is horrible.

This is what I am submitting to SGE:

ubuntu@ip-10-10-5-81:/shared/charmm-gui/gromacs$ cat sge.sh
#!/bin/bash
#
#$ -cwd
#$ -j y
#$ -S /bin/bash
#$ -e out.err
#$ -o out.out
#$ -pe mpi 256

cd /shared/charmm-gui/gromacs
touch start.txt
/bin/bash /shared/charmm-gui/gromacs/run_eq.bash
touch end.txt

and this is my test script, provided by one of the doctors:

ubuntu@ip-10-10-5-81:/shared/charmm-gui/gromacs$ cat run_eq.bash
#!/bin/bash
export GMXMPI="/usr/bin/mpirun --mca btl ^openib /shared/gromacs/5.1.5/bin/gmx_mpi"

export MDRUN="mdrun -ntomp 2 -npme 32"

export GMX="/shared/gromacs/5.1.5/bin/gmx_mpi"

for comm in min eq; do
    if [ $comm == min ]; then
        echo ${comm}
        $GMX grompp -f step6.0_minimization.mdp -o step6.0_minimization.tpr \
            -c step5_charmm2gmx.pdb -p topol.top
        $GMXMPI $MDRUN -deffnm step6.0_minimization
    fi

    if [ $comm == eq ]; then
        for step in `seq 1 6`; do
            echo $step
            if [ $step -eq 1 ]; then
                echo ${step}
                $GMX grompp -f step6.${step}_equilibration.mdp \
                    -o step6.${step}_equilibration.tpr -c step6.0_minimization.gro \
                    -r step5_charmm2gmx.pdb -n index.ndx -p topol.top
                $GMXMPI $MDRUN -deffnm step6.${step}_equilibration
            fi
            if [ $step -gt 1 ]; then
                old=`expr $step - 1`
                echo $old
                $GMX grompp -f step6.${step}_equilibration.mdp \
                    -o step6.${step}_equilibration.tpr -c step6.${old}_equilibration.gro \
                    -r step5_charmm2gmx.pdb -n index.ndx -p topol.top
                $GMXMPI $MDRUN -deffnm step6.${step}_equilibration
            fi
        done
    fi
done
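For reference, the fast single-node runs call the thread-MPI build directly,
roughly along these lines (a sketch; the exact thread counts, and a non-MPI gmx
binary sitting next to gmx_mpi, are assumptions):

/shared/gromacs/5.1.5/bin/gmx mdrun -deffnm step6.0_minimization \
    -ntmpi 8 -ntomp 8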




During the output I see this, and I get really excited, expecting blazing
speeds, and yet it's much worse than a single node:

Command line:
  gmx_mpi mdrun -ntomp 2 -npme 32 -deffnm step6.0_minimization


Back Off! I just backed up step6.0_minimization.log to 
./#step6.0_minimization.log.6#

Running on 4 nodes with total 128 cores, 256 logical cores, 32 compatible GPUs
  Cores per node:   32
  Logical cores per node:   64
  Compatible GPUs per node:  8
  All nodes have identical type(s) of GPUs
Hardware detected on host ip-10-10-5-89 (the node of MPI rank 0):
  CPU info:
Vendor: GenuineIntel
Brand:  Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz
SIMD instructions most likely to fit this hardware: AVX2_256
SIMD instructions selected at GROMACS compile time: AVX2_256
  GPU info:
Number of GPUs detected: 8
#0: NVIDIA Tesla V100-SXM2-16GB, compute cap.: 7.0, ECC: yes, stat: 
compatible
#1: NVIDIA Tesla V100-SXM2-16GB, compute cap.: 7.0, ECC: yes, stat: 
compatible
#2: NVIDIA Tesla V100-SXM2-16GB, compute cap.: 7.0, ECC: yes, stat: 
compatible
#3: NVIDIA Tesla V100-SXM2-16GB, compute cap.: 7.0, ECC: yes, stat: 
compatible
#4: NVIDIA Tesla V100-SXM2-16GB, compute cap.: 7.0, ECC: yes, stat: 
compatible
#5: NVIDIA Tesla V100-SXM2-16GB, compute cap.: 7.0, ECC: yes, stat: 
compatible
#6: NVIDIA Tesla V100-SXM2-16GB, compute cap.: 7.0, ECC: yes, stat: 
compatible
#7: NVIDIA Tesla V100-SXM2-16GB, compute cap.: 7.0, ECC: yes, stat: 
compatible

Reading file step6.0_minimization.tpr, VERSION 5.1.5 (single precision)
Using 256 MPI processes
Using 2 OpenMP threads per MPI process

On host ip-10-10-5-89 8 compatible GPUs are present, with IDs 0,1,2,3,4,5,6,7
On host ip-10-10-5-89 8 GPUs auto-selected for this run.
Mapping of GPU IDs to the 56 PP ranks in this node: 
0,0,0,0,0,0,0,1,1,1,1,1,1,1,2,2,2,2,2,2,2,3,3,3,3,3,3,3,4,4,4,4,4,4,4,5,5,5,5,5,5,5,6,6,6,6,6,6,6,7,7,7,7,7,7,7



Any suggestions? Greatly appreciate the help.


Carlos J. Rivas
Senior AWS Solutions Architect - Migration Specialist



Re: [gmx-users] grompp is using a very large amount of memory on a modestly-sized system

2019-03-08 Thread Mark Abraham
Hi,

I don't have a solution for the question at hand, but it'd be great to have
your inputs attached to a new issue at https://redmine.gromacs.org, so that
we have such an input case to test with and can improve the simplistic
implementation! Please upload it if you can.

Mark

On Fri., 8 Mar. 2019, 19:24 Sean Marks,  wrote:

> Scratch that comment about sparseness. I am short on sleep, and for a
> moment thought I was talking about constraints, not electrostatics.
>
> On Fri, Mar 8, 2019 at 1:12 PM Sean Marks  wrote:
>
> > I understand now, thank you for the prompt response. While the matrix
> > would actually be quite sparse (since the constraints are localized to
> each
> > ice molecule), I take it that memory is being allocated for a dense
> matrix.
> >
> > That aside, is it feasible to accomplish my stated goal of scaling
> > ice-water electrostatics while leaving other interactions unaffected? One
> > alternative I considered was manually scaling down the charges
> themselves,
> > but doing this causes the lattice to lose its form.
> >
> > On Fri, Mar 8, 2019 at 12:28 PM Justin Lemkul  wrote:
> >
> >>
> >>
> >> On 3/8/19 11:04 AM, Sean Marks wrote:
> >> > Hi, everyone,
> >> >
> >> > I am running into an issue where grompp is using a tremendous amount
> of
> >> > memory and crashing, even though my system is not especially large
> >> (63976
> >> > atoms).
> >> >
> >> > I am using GROMACS 2016.3.
> >> >
> >> > My system consists of liquid water (7,930 molecules) next to a block
> of
> >> ice
> >> > (8,094 molecules). The ice oxygens are restrained to their lattice
> >> position
> >> > with a harmonic potential with strength k = 4,000 kJ/mol/nm^2. I am
> >> using
> >> > the TIP4P/Ice model, which is a rigid 4-site model with a negative
> >> partial
> >> > charge located on a virtual site rather than the oxygen.
> >> >
> >> > My goal is to systematically reduce the electrostatic interactions
> >> between
> >> > the water molecules and the position-restrained ice, while leaving
> >> > water-water and ice-ice interactions unaffected.
> >> >
> >> > To accomplish this, I am modeling all of the ice molecules using a
> >> single
> >> > moleculetype so that I can take advantages of GROMACS' FEP features to
> >> > selectively scale interactions. I explicitly specify all constraints
> and
> >> > exclusions in the topology file. This moleculetype contains one
> virtual
> >> > site, 3 constraints, and 4 exclusions per "residue" (ice molecule).
> >> >
> >> > When I run grompp, I receive the following error, which I think means
> >> that
> >> > a huge block of memory (~9 GB) was requested but could not be
> allocated:
> >> >
> >> > =
> >> > Command line:
> >> >gmx grompp -f npt.mdp -c md.gro -p topol.top -n index.ndx -r
> >> > initconf_packmol.gro -o input.tpr -maxwarn 2 -pp processed.top
> >> >
> >> > ...
> >> >
> >> > Generated 21 of the 21 non-bonded parameter combinations
> >> > Generating 1-4 interactions: fudge = 0.5
> >> > Generated 21 of the 21 1-4 parameter combinations
> >> > Excluding 3 bonded neighbours molecule type 'ICE'
> >> > turning H bonds into constraints...
> >> > Excluding 3 bonded neighbours molecule type 'SOL'
> >> > turning H bonds into constraints...
> >> > Coupling 1 copies of molecule type 'ICE'
> >> > Setting gen_seed to 1021640799
> >> > Velocities were taken from a Maxwell distribution at 273 K
> >> > Cleaning up constraints and constant bonded interactions with virtual
> >> sites
> >> > Removing all charge groups because cutoff-scheme=Verlet
> >> >
> >> > ---
> >> > Program: gmx grompp, version 2016.3
> >> > Source file: src/gromacs/utility/smalloc.cpp (line 226)
> >> >
> >> > Fatal error:
> >> > Not enough memory. Failed to realloc -8589934588 bytes for il->iatoms,
> >> > il->iatoms=25e55010
> >> > (called from file
> >> >
> >>
> /home/semarks/source/gromacs/2016.3/icc/serial/gromacs-2016.3/src/gromacs/
> >> > gmxpreprocess/convparm.cpp,
> >> > line 565)
> >> >
> >> > For more information and tips for troubleshooting, please check the
> >> GROMACS
> >> > website at http://www.gromacs.org/Documentation/Errors
> >> > ---
> >> > ===
> >> >
> >> > In the hope that it helps with diagnosing the problem, here is my mdp
> >> file:
> >>
> >> The problem is this:
> >> > couple-intramol = no; don't adjust ice-ice interactions
> >> >
> >> This setting causes the creation of a large exclusion matrix, which in
> >> your case is approximately 32,376 x 32,376 elements. For small
> >> molecules, this generally isn't an issue, but since you're trying to
> >> modulate a large number of molecules within a much larger
> >> [moleculetype], the memory requirement goes up exponentially.
> >>
> >> -Justin
> >>
> >> --
> >> ==
> >>
> >> Justin A. Lemkul, Ph.D.
> >> Assistant Professor
> >> Office: 301 Fralin Hall
> >> Lab: 303 

[gmx-users] Calculating enthalpies of solvation.

2019-03-08 Thread William Welch
Friends,
I want to obtain values for enthalpies of solvation of nucleotide
phosphates.  My initial idea was to calculate values for the solvated
system, the waterbox without the molecule, and the gas phase molecule.
Enthalpies are only calculated using the NPT ensemble, so I was thinking
that for the gas phase molecule, I could just use a huge box with a
pressure of .001 atm and the cutoff scheme for coulombic interactions with
a radius that exceeds the size of the molecule.  I'm running such
calculations on AMP and they work, but it seems that the enthalpy value I
get depends on my cutoff radius even though I'm setting cutoff radii well
above the size of the molecule. The gas phase simulations are 1 ns.
Does anyone know why this is or of a better way to do this?
Also, for systems as described, shouldn't the differences in enthalpy
correspond approximately to the changes in potential energy?
I'm somewhat concerned about having an overall charged system for the
solvated PME calculations, but if need be I can neutralize the systems.
Thank you in advance for any information.
Will Welch

Re: [gmx-users] grompp is using a very large amount of memory on a modestly-sized system

2019-03-08 Thread Sean Marks
Scratch that comment about sparseness. I am short on sleep, and for a
moment thought I was talking about constraints, not electrostatics.

On Fri, Mar 8, 2019 at 1:12 PM Sean Marks  wrote:

> I understand now, thank you for the prompt response. While the matrix
> would actually be quite sparse (since the constraints are localized to each
> ice molecule), I take it that memory is being allocated for a dense matrix.
>
> That aside, is it feasible to accomplish my stated goal of scaling
> ice-water electrostatics while leaving other interactions unaffected? One
> alternative I considered was manually scaling down the charges themselves,
> but doing this causes the lattice to lose its form.
>
> On Fri, Mar 8, 2019 at 12:28 PM Justin Lemkul  wrote:
>
>>
>>
>> On 3/8/19 11:04 AM, Sean Marks wrote:
>> > Hi, everyone,
>> >
>> > I am running into an issue where grompp is using a tremendous amount of
>> > memory and crashing, even though my system is not especially large
>> (63976
>> > atoms).
>> >
>> > I am using GROMACS 2016.3.
>> >
>> > My system consists of liquid water (7,930 molecules) next to a block of
>> ice
>> > (8,094 molecules). The ice oxygens are restrained to their lattice
>> position
>> > with a harmonic potential with strength k = 4,000 kJ/mol/nm^2. I am
>> using
>> > the TIP4P/Ice model, which is a rigid 4-site model with a negative
>> partial
>> > charge located on a virtual site rather than the oxygen.
>> >
>> > My goal is to systematically reduce the electrostatic interactions
>> between
>> > the water molecules and the position-restrained ice, while leaving
>> > water-water and ice-ice interactions unaffected.
>> >
>> > To accomplish this, I am modeling all of the ice molecules using a
>> single
>> > moleculetype so that I can take advantages of GROMACS' FEP features to
>> > selectively scale interactions. I explicitly specify all constraints and
>> > exclusions in the topology file. This moleculetype contains one virtual
>> > site, 3 constraints, and 4 exclusions per "residue" (ice molecule).
>> >
>> > When I run grompp, I receive the following error, which I think means
>> that
>> > a huge block of memory (~9 GB) was requested but could not be allocated:
>> >
>> > =
>> > Command line:
>> >gmx grompp -f npt.mdp -c md.gro -p topol.top -n index.ndx -r
>> > initconf_packmol.gro -o input.tpr -maxwarn 2 -pp processed.top
>> >
>> > ...
>> >
>> > Generated 21 of the 21 non-bonded parameter combinations
>> > Generating 1-4 interactions: fudge = 0.5
>> > Generated 21 of the 21 1-4 parameter combinations
>> > Excluding 3 bonded neighbours molecule type 'ICE'
>> > turning H bonds into constraints...
>> > Excluding 3 bonded neighbours molecule type 'SOL'
>> > turning H bonds into constraints...
>> > Coupling 1 copies of molecule type 'ICE'
>> > Setting gen_seed to 1021640799
>> > Velocities were taken from a Maxwell distribution at 273 K
>> > Cleaning up constraints and constant bonded interactions with virtual
>> sites
>> > Removing all charge groups because cutoff-scheme=Verlet
>> >
>> > ---
>> > Program: gmx grompp, version 2016.3
>> > Source file: src/gromacs/utility/smalloc.cpp (line 226)
>> >
>> > Fatal error:
>> > Not enough memory. Failed to realloc -8589934588 bytes for il->iatoms,
>> > il->iatoms=25e55010
>> > (called from file
>> >
>> /home/semarks/source/gromacs/2016.3/icc/serial/gromacs-2016.3/src/gromacs/
>> > gmxpreprocess/convparm.cpp,
>> > line 565)
>> >
>> > For more information and tips for troubleshooting, please check the
>> GROMACS
>> > website at http://www.gromacs.org/Documentation/Errors
>> > ---
>> > ===
>> >
>> > In the hope that it helps with diagnosing the problem, here is my mdp
>> file:
>>
>> The problem is this:
>> > couple-intramol = no; don't adjust ice-ice interactions
>> >
>> This setting causes the creation of a large exclusion matrix, which in
>> your case is approximately 32,376 x 32,376 elements. For small
>> molecules, this generally isn't an issue, but since you're trying to
>> modulate a large number of molecules within a much larger
>> [moleculetype], the memory requirement goes up exponentially.
>>
>> -Justin
>>
>> --
>> ==
>>
>> Justin A. Lemkul, Ph.D.
>> Assistant Professor
>> Office: 301 Fralin Hall
>> Lab: 303 Engel Hall
>>
>> Virginia Tech Department of Biochemistry
>> 340 West Campus Dr.
>> Blacksburg, VA 24061
>>
>> jalem...@vt.edu | (540) 231-3129
>> http://www.thelemkullab.com
>>
>> ==
>>
>> --
>> Gromacs Users mailing list
>>
>> * Please search the archive at
>> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
>> posting!
>>
>> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>>
>> * For (un)subscribe requests visit
>> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
>> send a mail 

Re: [gmx-users] grompp is using a very large amount of memory on a modestly-sized system

2019-03-08 Thread Sean Marks
I understand now, thank you for the prompt response. While the matrix would
actually be quite sparse (since the constraints are localized to each ice
molecule), I take it that memory is being allocated for a dense matrix.

That aside, is it feasible to accomplish my stated goal of scaling
ice-water electrostatics while leaving other interactions unaffected? One
alternative I considered was manually scaling down the charges themselves,
but doing this causes the lattice to lose its form.

On Fri, Mar 8, 2019 at 12:28 PM Justin Lemkul  wrote:

>
>
> On 3/8/19 11:04 AM, Sean Marks wrote:
> > Hi, everyone,
> >
> > I am running into an issue where grompp is using a tremendous amount of
> > memory and crashing, even though my system is not especially large (63976
> > atoms).
> >
> > I am using GROMACS 2016.3.
> >
> > My system consists of liquid water (7,930 molecules) next to a block of
> ice
> > (8,094 molecules). The ice oxygens are restrained to their lattice
> position
> > with a harmonic potential with strength k = 4,000 kJ/mol/nm^2. I am using
> > the TIP4P/Ice model, which is a rigid 4-site model with a negative
> partial
> > charge located on a virtual site rather than the oxygen.
> >
> > My goal is to systematically reduce the electrostatic interactions
> between
> > the water molecules and the position-restrained ice, while leaving
> > water-water and ice-ice interactions unaffected.
> >
> > To accomplish this, I am modeling all of the ice molecules using a single
> > moleculetype so that I can take advantages of GROMACS' FEP features to
> > selectively scale interactions. I explicitly specify all constraints and
> > exclusions in the topology file. This moleculetype contains one virtual
> > site, 3 constraints, and 4 exclusions per "residue" (ice molecule).
> >
> > When I run grompp, I receive the following error, which I think means
> that
> > a huge block of memory (~9 GB) was requested but could not be allocated:
> >
> > =
> > Command line:
> >gmx grompp -f npt.mdp -c md.gro -p topol.top -n index.ndx -r
> > initconf_packmol.gro -o input.tpr -maxwarn 2 -pp processed.top
> >
> > ...
> >
> > Generated 21 of the 21 non-bonded parameter combinations
> > Generating 1-4 interactions: fudge = 0.5
> > Generated 21 of the 21 1-4 parameter combinations
> > Excluding 3 bonded neighbours molecule type 'ICE'
> > turning H bonds into constraints...
> > Excluding 3 bonded neighbours molecule type 'SOL'
> > turning H bonds into constraints...
> > Coupling 1 copies of molecule type 'ICE'
> > Setting gen_seed to 1021640799
> > Velocities were taken from a Maxwell distribution at 273 K
> > Cleaning up constraints and constant bonded interactions with virtual
> sites
> > Removing all charge groups because cutoff-scheme=Verlet
> >
> > ---
> > Program: gmx grompp, version 2016.3
> > Source file: src/gromacs/utility/smalloc.cpp (line 226)
> >
> > Fatal error:
> > Not enough memory. Failed to realloc -8589934588 bytes for il->iatoms,
> > il->iatoms=25e55010
> > (called from file
> >
> /home/semarks/source/gromacs/2016.3/icc/serial/gromacs-2016.3/src/gromacs/
> > gmxpreprocess/convparm.cpp,
> > line 565)
> >
> > For more information and tips for troubleshooting, please check the
> GROMACS
> > website at http://www.gromacs.org/Documentation/Errors
> > ---
> > ===
> >
> > In the hope that it helps with diagnosing the problem, here is my mdp
> file:
>
> The problem is this:
> > couple-intramol = no; don't adjust ice-ice interactions
> >
> This setting causes the creation of a large exclusion matrix, which in
> your case is approximately 32,376 x 32,376 elements. For small
> molecules, this generally isn't an issue, but since you're trying to
> modulate a large number of molecules within a much larger
> [moleculetype], the memory requirement goes up exponentially.
>
> -Justin
>
> --
> ==
>
> Justin A. Lemkul, Ph.D.
> Assistant Professor
> Office: 301 Fralin Hall
> Lab: 303 Engel Hall
>
> Virginia Tech Department of Biochemistry
> 340 West Campus Dr.
> Blacksburg, VA 24061
>
> jalem...@vt.edu | (540) 231-3129
> http://www.thelemkullab.com
>
> ==
>


-- 
Sean M. Marks
PhD Candidate
Dept. of Chemical and Biomolecular Engineering
University of Pennsylvania
seanmarks1...@gmail.com

Re: [gmx-users] grompp is using a very large amount of memory on a modestly-sized system

2019-03-08 Thread Justin Lemkul




On 3/8/19 11:04 AM, Sean Marks wrote:

Hi, everyone,

I am running into an issue where grompp is using a tremendous amount of
memory and crashing, even though my system is not especially large (63976
atoms).

I am using GROMACS 2016.3.

My system consists of liquid water (7,930 molecules) next to a block of ice
(8,094 molecules). The ice oxygens are restrained to their lattice position
with a harmonic potential with strength k = 4,000 kJ/mol/nm^2. I am using
the TIP4P/Ice model, which is a rigid 4-site model with a negative partial
charge located on a virtual site rather than the oxygen.

My goal is to systematically reduce the electrostatic interactions between
the water molecules and the position-restrained ice, while leaving
water-water and ice-ice interactions unaffected.

To accomplish this, I am modeling all of the ice molecules using a single
moleculetype so that I can take advantages of GROMACS' FEP features to
selectively scale interactions. I explicitly specify all constraints and
exclusions in the topology file. This moleculetype contains one virtual
site, 3 constraints, and 4 exclusions per "residue" (ice molecule).

When I run grompp, I receive the following error, which I think means that
a huge block of memory (~9 GB) was requested but could not be allocated:

=
Command line:
   gmx grompp -f npt.mdp -c md.gro -p topol.top -n index.ndx -r
initconf_packmol.gro -o input.tpr -maxwarn 2 -pp processed.top

...

Generated 21 of the 21 non-bonded parameter combinations
Generating 1-4 interactions: fudge = 0.5
Generated 21 of the 21 1-4 parameter combinations
Excluding 3 bonded neighbours molecule type 'ICE'
turning H bonds into constraints...
Excluding 3 bonded neighbours molecule type 'SOL'
turning H bonds into constraints...
Coupling 1 copies of molecule type 'ICE'
Setting gen_seed to 1021640799
Velocities were taken from a Maxwell distribution at 273 K
Cleaning up constraints and constant bonded interactions with virtual sites
Removing all charge groups because cutoff-scheme=Verlet

---
Program: gmx grompp, version 2016.3
Source file: src/gromacs/utility/smalloc.cpp (line 226)

Fatal error:
Not enough memory. Failed to realloc -8589934588 bytes for il->iatoms,
il->iatoms=25e55010
(called from file
/home/semarks/source/gromacs/2016.3/icc/serial/gromacs-2016.3/src/gromacs/
gmxpreprocess/convparm.cpp,
line 565)

For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
---
===

In the hope that it helps with diagnosing the problem, here is my mdp file:


The problem is this:

couple-intramol = no; don't adjust ice-ice interactions

This setting causes the creation of a large exclusion matrix, which in
your case is approximately 32,376 x 32,376 elements. For small
molecules, this generally isn't an issue, but since you're trying to
modulate a large number of molecules within a single very large
[moleculetype], the memory requirement grows as the square of the
number of atoms and quickly becomes enormous.
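For a rough sense of scale: 32,376 x 32,376 is about 1.05 x 10^9 atom pairs, and
with even a few integers of interaction/exclusion bookkeeping per pair the
request is already in the multi-gigabyte range, consistent with the ~9 GB figure
above. The negative byte count in the error also suggests the requested size
overflowed an integer type.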


-Justin

--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==



[gmx-users] grompp is using a very large amount of memory on a modestly-sized system

2019-03-08 Thread Sean Marks
Hi, everyone,

I am running into an issue where grompp is using a tremendous amount of
memory and crashing, even though my system is not especially large (63976
atoms).

I am using GROMACS 2016.3.

My system consists of liquid water (7,930 molecules) next to a block of ice
(8,094 molecules). The ice oxygens are restrained to their lattice position
with a harmonic potential with strength k = 4,000 kJ/mol/nm^2. I am using
the TIP4P/Ice model, which is a rigid 4-site model with a negative partial
charge located on a virtual site rather than the oxygen.

My goal is to systematically reduce the electrostatic interactions between
the water molecules and the position-restrained ice, while leaving
water-water and ice-ice interactions unaffected.

To accomplish this, I am modeling all of the ice molecules using a single
moleculetype so that I can take advantage of GROMACS' FEP features to
selectively scale interactions. I explicitly specify all constraints and
exclusions in the topology file. This moleculetype contains one virtual
site, 3 constraints, and 4 exclusions per "residue" (ice molecule).
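The relevant free-energy settings are added to npt.mdp along these lines (a
sketch; the exact lambda values are placeholders):

# free-energy block used to scale ICE<->water interactions only
cat >> npt.mdp <<'EOF'
free-energy      = yes
init-lambda      = 0.5      ; placeholder coupling value
couple-moltype   = ICE
couple-lambda0   = vdw-q    ; fully interacting state
couple-lambda1   = vdw      ; ICE charges toward the rest of the system off
couple-intramol  = no       ; leave ice-ice interactions unperturbed
EOF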

When I run grompp, I receive the following error, which I think means that
a huge block of memory (~9 GB) was requested but could not be allocated:

=
Command line:
  gmx grompp -f npt.mdp -c md.gro -p topol.top -n index.ndx -r
initconf_packmol.gro -o input.tpr -maxwarn 2 -pp processed.top

...

Generated 21 of the 21 non-bonded parameter combinations
Generating 1-4 interactions: fudge = 0.5
Generated 21 of the 21 1-4 parameter combinations
Excluding 3 bonded neighbours molecule type 'ICE'
turning H bonds into constraints...
Excluding 3 bonded neighbours molecule type 'SOL'
turning H bonds into constraints...
Coupling 1 copies of molecule type 'ICE'
Setting gen_seed to 1021640799
Velocities were taken from a Maxwell distribution at 273 K
Cleaning up constraints and constant bonded interactions with virtual sites
Removing all charge groups because cutoff-scheme=Verlet

---
Program: gmx grompp, version 2016.3
Source file: src/gromacs/utility/smalloc.cpp (line 226)

Fatal error:
Not enough memory. Failed to realloc -8589934588 bytes for il->iatoms,
il->iatoms=25e55010
(called from file
/home/semarks/source/gromacs/2016.3/icc/serial/gromacs-2016.3/src/gromacs/
gmxpreprocess/convparm.cpp,
line 565)

For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
---
===

In the hope that it helps with diagnosing the problem, here is my mdp file:

===
; Water and ice

; RUN CONTROL PARAMETERS

; Molecular dynamics
integrator   = md   ; Leapfrog
dt   = 0.002; 2 fs - time step [ps]
nsteps   = 500  ; 10 ns - run time [steps]

; Center of mass motion (COMM) removal
comm-mode= none ; Linear
nstcomm  = 10; Removal frequency (>=
nstcalcenergy) [steps]
comm-grps= System   ; Groups for COMM removal (blank =>
whole system)

; Initial velocities
gen_vel  = yes; Generate velocities using Boltzmann
distribution
gen_temp = 273; Temperature for Boltzmann distribution
[K]
gen_seed = -1 ; Seed for RNG is the job ID

; OUTPUT CONTROL OPTIONS
nstcalcenergy= 1  ; Frequency of energy calculation [steps]
; Output frequency for coords (x), velocities (v) and forces (f)
; trr
nstxout  = 0   ; never - print coordinates [steps]
nstvout  = 0   ; never - print velocities [steps]
nstfout  = 0   ; never - print forces [steps]
; Log end edr files
nstlog   = 2500; 5 ps - print energies to log file
[steps]
nstenergy= 2500; 5 ps - print energies to energy file
[steps]
; xtc
nstxout-compressed   = 2500; 5 ps - print coordinates to xtc file
[steps]
compressed-x-precision   = 1000; Number of zeros = number of places
after decimal point
compressed-x-grps= System  ; Groups to write to xtc file

; BOUNDARY CONDITIONS
pbc  = xyz
periodic_molecules   = no  ; Rigid graphene has no intramolecule
interactions

; NEIGHBOR LIST
cutoff-scheme= Verlet
nstlist  = 10  ; Neighbor list update frequency [steps]
ns-type  = grid; More efficient than simple search
verlet-buffer-tolerance  = 0.005   ; [kJ/mol/ps] (-1.0 --> use rlist)
rlist= 1.0 ; Cutoff distance for short-range
neighbor list [nm]

; VAN DER WAALS
vdwtype  = Cut-off
vdw-modifier = Potential-shift-Verlet
rvdw = 1.0; Radius for vdW cutoff [nm]
DispCorr = no

; ELECTROSTATICS
coulombtype  = PME; Fast, smooth, particle mesh 

[gmx-users] (no subject)

2019-03-08 Thread 吴修聪
Dear gmx-users,
I am trying to use GROMACS to simulate an infinite system, a triple-helix
polymer (three chains), but I have run into a tricky problem: the different
chains jump across the box in the trajectory. I tried different pbc options,
like whole/mol/nojump/atom/res, but it still isn't solved. Since the polymer
is bonded by command to become an infinite system, the box is a little smaller
than the monomer. With pbc atom, res or nojump the atoms are wrongly bonded in
the trajectory, and with pbc mol the chains jump across the box. I guess this
is because of inconsistent shifts. Can anyone help me solve this problem?

Best regards,
Xiucong

吴修聪
Email: wuxc1...@126.com
(Signature customized with NetEase Mail Master)