Re: [gmx-users] FEP calculations on multiple nodes

2017-08-24 Thread Mark Abraham
Hi,

It is intended that they work on GPUs, but the implementation of FEP
support for FB restraints pre-dates GPU support for nonbonded interactions,
and it is probably not tested, so it could be broken.

Mark
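
One quick way to test that hypothesis is to keep the same tpr but force the
nonbonded work onto the CPU and see whether the crash disappears. A minimal
sketch (md_0 matches the run name in the job script quoted later in this
thread):

  gmx mdrun -deffnm md_0 -nb cpu   # compute nonbonded interactions on the CPU only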

On Thu, 24 Aug 2017 18:03 Vikas Dubey  wrote:

> Hi,
> I am also running the calculation on GPUs. If FB restraints are limited to
> CPUs, I am sorry, I was not aware of that.

[gmx-users] Gromacs simulation of two or more peptide chain

2017-08-24 Thread Kalyanashis Jana
Dear all,
I would like to run an MD simulation of two or more peptide chains. I got
the following error in the steepest-descent energy minimization step. Could
you please suggest how I can do the MD simulation?

Steepest Descents:
   Tolerance (Fmax)   =  1.0e+02
   Number of steps    =  20

WARNING: Listed nonbonded interaction between particles 215 and 219
at distance 3f which is larger than the table limit 3f nm.

This is likely either a 1,4 interaction, or a listed interaction inside
a smaller molecule you are decoupling during a free energy calculation.
Since interactions at distances beyond the table cannot be computed,
they are skipped until they are inside the table limit again. You will
only see this message once, even if it occurs for several interactions.

IMPORTANT: This should not happen in a stable simulation, so there is
probably something wrong with your system. Only change the table-extension
distance in the mdp file if you are really sure that is the reason.


... Epot= -nan Fmax= 8.56592e+05, atom= 218
Segmentation fault (core dumped)


Thanks with regards
Kalyanashis Jana
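
An Fmax of ~1e+05 and Epot = -nan this early in minimization usually point to
overlapping atoms in the starting structure rather than to the table-extension
setting. A typical rebuild-and-minimize sketch, with hypothetical file names:

  gmx editconf -f peptides.gro -o boxed.gro -c -d 1.0 -bt cubic   # center solute, 1.0 nm box margin
  gmx solvate -cp boxed.gro -cs spc216.gro -o solvated.gro -p topol.top
  gmx grompp -f em.mdp -c solvated.gro -p topol.top -o em.tpr
  gmx mdrun -v -deffnm em

If atoms 215 and 219 still clash, inspect that region of the starting
structure in a molecular viewer before re-running.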


Re: [gmx-users] FEP calculations on multiple nodes

2017-08-24 Thread Vikas Dubey
Hi,
I am also running the calculation on GPUs. If FB restraints are limited to
CPUs, I am sorry, I was not aware of that.

On 24 August 2017 at 17:55, Mark Abraham  wrote:

> Hi,
>
> Thanks. That should not be the problem, because all such computations are
> only on the CPU... But hopefully we will see.
>
> Mark

Re: [gmx-users] FEP calculations on multiple nodes

2017-08-24 Thread Mark Abraham
Hi,

Thanks. That should not be the problem, because all such computations are
only on the CPU... But hopefully we will see.

Mark

On Thu, 24 Aug 2017 17:35 Leandro Bortot  wrote:

> Hello all,
>
>  This may add something: I had a segmentation fault using flat-bottom
> restraints with GPUs before. I just assumed that this type of restraint was
> not supported on GPUs and moved to a CPU-only system.
>  Sadly, that was some time ago and I don't have the files anymore.
>
> Best,
> Leandro

Re: [gmx-users] FEP calculations on multiple nodes

2017-08-24 Thread Leandro Bortot
Hello all,

 This may add something: I had a segmentation fault using flat-bottom
restraints with GPUs before. I just assumed that this type of restraint was
not supported on GPUs and moved to a CPU-only system.
 Sadly, that was some time ago and I don't have the files anymore.

Best,
Leandro


On Thu, Aug 24, 2017 at 5:13 PM, Mark Abraham 
wrote:

> Hi,
>
> Thanks. Good lesson here: try simplifying until things work. That does
> suggest there is a bug in flat-bottomed position restraints. Can you please
> upload a tpr with those restraints, along with a report at
> https://redmine.gromacs.org so we can reproduce and hopefully fix it?
>
> Mark

Re: [gmx-users] FEP calculations on multiple nodes

2017-08-24 Thread Mark Abraham
Hi,

Thanks. Good lesson here: try simplifying until things work. That does
suggest there is a bug in flat-bottomed position restraints. Can you please
upload a tpr with those restraints, along with a report at
https://redmine.gromacs.org so we can reproduce and hopefully fix it?

Mark
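
For anyone preparing such a report, the most useful attachment is the smallest
tpr that still crashes. A sketch of producing and checking one, with
hypothetical file names:

  gmx grompp -f fep.mdp -c system.gro -r system.gro -p topol.top -o fbposres_bug.tpr   # -r supplies the restraint reference coordinates
  gmx mdrun -s fbposres_bug.tpr -nsteps 100   # confirm the crash still reproduces quickly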

On Thu, 24 Aug 2017 16:55 Vikas Dubey  wrote:

> Hi,
>
> I have just checked with normal restraints; they work fine. The simulation
> crashes with flat-bottom restraints.

Re: [gmx-users] FEP calculations on multiple nodes

2017-08-24 Thread Vikas Dubey
Hi,

I have just checked with normal restraints; they work fine. The simulation
crashes with flat-bottom restraints.

On 24 August 2017 at 16:43, Mark Abraham  wrote:

> Hi,
>
> Does it work if you just have the normal position restraints, or just
> have the flat-bottom restraints? In particular, I could imagine the latter
> are not widely used and might have a bug.
>
> Mark


Re: [gmx-users] Regarding Atom in residue XXX was not found in rtp entry with xxx atoms while sorting atoms

2017-08-24 Thread Mark Abraham
Hi,

pdb2gmx is giving you the normal termini found in a peptide in aqueous
solution near physiological pH, i.e. zwitterionic. You can choose different
termini if your force field supports them, but you should only do so if you
are modelling something that requires a non-zwitterionic form.

Mark
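
A sketch of choosing non-default termini interactively (input names are
hypothetical); pdb2gmx then prompts for a terminus type per chain:

  gmx pdb2gmx -f abc.pdb -o abc.gro -ter   # select N- and C-terminus types when prompted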

On Thu, Aug 24, 2017 at 3:48 PM Dilip H N  wrote:

> Hello,
> I have an amino acid and I am using the CHARMM36 FF to create the topology with:
>   gmx pdb2gmx -f abc.pdb -o abc.gro
> but I am getting an error:
>
> Atom HO in residue XXX was not found in rtp entry with xxx atoms while
> sorting atoms.
>
> If I use the -ignh flag in the gmx pdb2gmx command, I get the topology and
> the .gro file, but in the .gro file the hydrogen on the OH, i.e. at the
> C-terminus, gets attached to the N-terminus (which was NH2 initially but in
> the .gro file is converted into NH3). How do I avoid the hydrogen being
> shifted from the OH / C-terminal side to the N-terminal side?
>
> How can I solve this error?
>
> Thank you...


Re: [gmx-users] FEP calculations on multiple nodes

2017-08-24 Thread Mark Abraham
Hi,

Does it work if you just have the normal position restraints, or just have
the flat-bottom restraints? In particular, I could imagine the latter are not
widely used and might have a bug.

Mark

On Thu, Aug 24, 2017 at 4:36 PM Vikas Dubey  wrote:

> Hi everyone,
>
> I have found out that position restraints are the issue in my FEP
> simulation. As soon as I switch off position restraints, it works fine. I
> have the following restraint file for the ions in my system (I don't
> see any problems with it):
>
> [ position_restraints ]
> ; atom  type  fx  fy  fz
>      1  1     0     0  1000
>      2  1     0     0  1000
>      3  1     0     0  1000
>      4  1     0     0  1000
>      5  1     0     0  1000
>      6  1     0     0  1000
>      8  1     0     0  1000
>      9  1     0     0  1000
>     10  1     0     0  1000
>     11  1     0     0  1000
>     12  1     0     0  1000
>     13  1     0     0  1000
>     14  1     0     0  1000
>     15  1     0     0  1000
>     16  1     0     0  1000
>     17  1     0     0  1000
>     18  1     0     0  1000
>     19  1     0     0  1000
>     20  1     0     0  1000
>     21  1  1000  1000  1000
> ;[ position_restraints ] ; flat bottom position restraints, here for potassium in site I
> ;  type, g (8 for a cylinder), r (nm), k
>      7  2     8     1  1000


Re: [gmx-users] FEP calculations on multiple nodes

2017-08-24 Thread Vikas Dubey
Hi everyone,

I have found out that position restraints are the issue in my FEP
simulation. As soon as I switch off position restraints, it works fine. I
have the following restraint file for the ions in my system (I don't
see any problems with it):

[ position_restraints ]
; atom  type  fx  fy  fz
     1  1     0     0  1000
     2  1     0     0  1000
     3  1     0     0  1000
     4  1     0     0  1000
     5  1     0     0  1000
     6  1     0     0  1000
     8  1     0     0  1000
     9  1     0     0  1000
    10  1     0     0  1000
    11  1     0     0  1000
    12  1     0     0  1000
    13  1     0     0  1000
    14  1     0     0  1000
    15  1     0     0  1000
    16  1     0     0  1000
    17  1     0     0  1000
    18  1     0     0  1000
    19  1     0     0  1000
    20  1     0     0  1000
    21  1  1000  1000  1000
;[ position_restraints ] ; flat bottom position restraints, here for potassium in site I
;  type, g (8 for a cylinder), r (nm), k
     7  2     8     1  1000


On 22 August 2017 at 14:18, Vikas Dubey  wrote:

> Hi, I use the following script for my cluster. Also, I think the problem is
> calculation-specific. I have run quite a few normal simulations; they work
> fine:
>
>
> #!/bin/bash -l
> # shebang assumed; sbatch needs an interpreter line
> #SBATCH --job-name=2_1_0
> #SBATCH --mail-type=ALL
> #SBATCH --time=24:00:00
> #SBATCH --nodes=1
> #SBATCH --ntasks-per-node=1
> #SBATCH --ntasks-per-core=2
> #SBATCH --cpus-per-task=4
> #SBATCH --constraint=gpu
> #SBATCH --output out.txt
> #SBATCH --error  err.txt
> #
> # load modules and run simulation
> module load daint-gpu
> module load GROMACS
> export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
> export CRAY_CUDA_MPS=1
>
> srun -n $SLURM_NTASKS --ntasks-per-node=$SLURM_NTASKS_PER_NODE -c
> $SLURM_CPUS_PER_TASK gmx_mpi mdrun -deffnm md_0
>
> On 22 August 2017 at 06:11, Nikhil Maroli  wrote:
>
>> Okay, you might need to consider
>>
>> gmx mdrun -v -ntmpi XX -ntomp XX -deffnm   -gpu_id XXX
>>
>>
>>
>> http://manual.gromacs.org/documentation/5.1/user-guide/mdrun-performance.html
>>
>> http://www.gromacs.org/Documentation/Errors#There_is_no_domain_decomposition_for_n_nodes_that_is_compatible_with_the_given_box_and_a_minimum_cell_size_of_x_nm
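
As a concrete instance of the command above (the values are hypothetical;
pick them to match your node):

  gmx mdrun -v -ntmpi 1 -ntomp 4 -deffnm md_0 -gpu_id 0   # 1 thread-MPI rank, 4 OpenMP threads, first GPU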


[gmx-users] Regarding Atom in residue XXX was not found in rtp entry with xxx atoms while sorting atoms

2017-08-24 Thread Dilip H N
Hello,
I have an amino acid and I am using the CHARMM36 FF to create the topology with:
  gmx pdb2gmx -f abc.pdb -o abc.gro
but I am getting an error:

Atom HO in residue XXX was not found in rtp entry with xxx atoms while
sorting atoms.

If I use the -ignh flag in the gmx pdb2gmx command, I get the topology and
the .gro file, but in the .gro file the hydrogen on the OH, i.e. at the
C-terminus, gets attached to the N-terminus (which was NH2 initially but in
the .gro file is converted into NH3). How do I avoid the hydrogen being
shifted from the OH / C-terminal side to the N-terminal side?

How can I solve this error?

Thank you...

-- 
With Best Regards,

DILIP.H.N
Ph.D Student





Re: [gmx-users] Torsion analysis

2017-08-24 Thread João Henriques
gmx chi?

http://manual.gromacs.org/documentation/2016.3/onlinehelp/gmx-chi.html

I never used it though. I personally like to use PLUMED for this:

https://plumed.github.io/doc-v2.3/user-doc/html/_t_o_r_s_i_o_n.html

João
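
For a quick start with the gmx route, a sketch (file and index names
hypothetical):

  # Backbone/side-chain dihedral analysis per residue:
  gmx chi -s md.tpr -f md.xtc -phi -psi
  # An arbitrary torsion defined by groups of four atoms in an index file:
  gmx angle -f md.xtc -n dihedral.ndx -type dihedral -ov dihedral.xvg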

On Thu, Aug 24, 2017 at 1:43 PM, RAHUL SURESH 
wrote:

> Dear All,
>
> To deduce the stability of ligand-protein binding, I would like to
> carry out a torsion analysis. How is this possible using GROMACS?
>
> Thanks in advance
> --
> *Regards,*
> *Rahul Suresh*
> *Research Scholar*
> *Bharathiar University*
> *Coimbatore*

[gmx-users] Torsion analysis

2017-08-24 Thread RAHUL SURESH
Dear All,

To deduce the stability of ligand-protein binding, I would like to
carry out a torsion analysis. How is this possible using GROMACS?

Thanks in advance
-- 
*Regards,*
*Rahul Suresh*
*Research Scholar*
*Bharathiar University*
*Coimbatore*


[gmx-users] FEP: mdp setting for mutation

2017-08-24 Thread Alex Mathew
Hi,

The following changes were made in the mdp file for a mutation analysis. Can
anyone tell me whether anything is wrong here?



; init_lambda_state        0    1    2    3    4    5    6    7    8    9    10   11   12   13   14   15   16   17   18   19   20
vdw_lambdas              = 0.00 0.05 0.10 0.15 0.20 0.25 0.30 0.35 0.40 0.45 0.50 0.55 0.60 0.65 0.70 0.75 0.80 0.85 0.90 0.95 1.00
coul_lambdas             = 0.00 0.05 0.10 0.15 0.20 0.25 0.30 0.35 0.40 0.45 0.50 0.55 0.60 0.65 0.70 0.75 0.80 0.85 0.90 0.95 1.00
bonded_lambdas           = 0.00 0.05 0.10 0.15 0.20 0.25 0.30 0.35 0.40 0.45 0.50 0.55 0.60 0.65 0.70 0.75 0.80 0.85 0.90 0.95 1.00
restraint_lambdas        = 0.00 0.05 0.10 0.15 0.20 0.25 0.30 0.35 0.40 0.45 0.50 0.55 0.60 0.65 0.70 0.75 0.80 0.85 0.90 0.95 1.00
mass_lambdas             = 0.00 0.05 0.10 0.15 0.20 0.25 0.30 0.35 0.40 0.45 0.50 0.55 0.60 0.65 0.70 0.75 0.80 0.85 0.90 0.95 1.00
temperature_lambdas      = 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
; Options for the decoupling
sc-alpha                 = 0.5
sc-coul                  = yes       ; linear interpolation of Coulomb (none in this case)
sc-power                 = 1
sc-sigma                 = 0.3
;couple-moltype          = Methane   ; name of moleculetype to decouple
;couple-lambda0          = vdw       ; only van der Waals interactions
;couple-lambda1          = none      ; turn off everything, in this case only vdW
;couple-intramol         = no
nstdhdl                  = 10
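
With a 21-state vector like this, one run per lambda window is launched by
setting init_lambda_state to each column in turn. A sketch, assuming the full
mdp also contains an init_lambda_state line (file names hypothetical):

  for i in $(seq 0 20); do
      sed "s/^init_lambda_state.*/init_lambda_state = $i/" fep_template.mdp > fep_$i.mdp
      gmx grompp -f fep_$i.mdp -c equil.gro -p topol.top -o fep_$i.tpr
      gmx mdrun -deffnm fep_$i
  done

Each window then writes its own dhdl output (every nstdhdl steps) for free
energy analysis with gmx bar.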


Re: [gmx-users] g_cluster

2017-08-24 Thread Erik Marklund
Dear Nikolai,

Group 1.

Kind regards,
Erik
__
Erik Marklund, PhD, Marie Skłodowska Curie INCA Fellow
Department of Chemistry – BMC, Uppsala University
+46 (0)18 471 4539
erik.markl...@kemi.uu.se

On 24 Aug 2017, at 00:07, Smolin, Nikolai wrote:

Dear All,

I am using g_cluster. I am wondering what the two groups are used for:

group 1 for fit and RMSD calculation
group 2 for output

Does g_cluster use RMSD values for clustering based on group 1 or group 2?


Any suggestions?

Thanks
Nikolai
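
So the RMSD matrix, and hence the clustering itself, follows group 1; group 2
only controls what is written to the output structures. A typical invocation
sketch (file names and cutoff hypothetical):

  gmx cluster -s md.tpr -f md.xtc -method gromos -cutoff 0.2 -g cluster.log -cl clusters.pdb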

Re: [gmx-users] Average and bfactors.pdb

2017-08-24 Thread farial tavakoli
Dear Justin,
Thanks a lot for your advice.




On Thursday, August 24, 2017, 5:51 AM, Justin Lemkul  wrote:



On 8/21/17 5:25 PM, farial tavakoli wrote:
> Hi Justin,
> Thank you so much for replying. According to the GROMACS tutorial, I am
> trying to analyse my complex in terms of RMSD. In order to do that, I first
> need to obtain the average structure, to get the RMSD vs. the average
> structure, and the average structure is a side product of obtaining the
> RMSF. Is there any way I can calculate the RMSD without getting the RMSD
> vs. the average structure? If you want to calculate the RMSD, wouldn't you
> do it this way?

I normally compute RMSD vs. the equilibrated structure or vs. the 
crystal structure, not an average structure.

-Justin

-- 
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Virginia Tech Department of Biochemistry

303 Engel Hall
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.biochem.vt.edu/people/faculty/JustinLemkul.html

==
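
A sketch of that suggestion, computing the RMSD against the equilibrated
structure used to start production (file names hypothetical; a typical group
choice is Backbone for both the fit and the calculation):

  gmx rms -s equil.tpr -f md.xtc -o rmsd_vs_equil.xvg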
