Re: [gmx-users] Output velocities and forces for each step?

2018-01-25 Thread Mohd Farid Ismail
That does it. Thank you for your quick reply.

-- 
Mohd Farid Ismail

26.01.2018, 13:19, "Shrinath Kumar":
1. No, they are written at whatever frequency you specify in your mdp file.
Look at the nstxout, nstvout and nstfout options. You can choose to write
them to the .trr file at each step if you wish.
3. You can use gmx traj -ov -of

On 26 January 2018 at 04:55, Mohd Farid Ismail

Re: [gmx-users] Output velocities and forces for each step?

2018-01-25 Thread Shrinath Kumar
1. No, they are written at whatever frequency you specify in your mdp file.
Look at the nstxout, nstvout and nstfout options. You can choose to write
them to the .trr file at each step if you wish.
3. You can use gmx traj -ov -of
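
For example, writing every step and then extracting velocities and forces to
separate files could look roughly like this (a minimal sketch; file names are
placeholders):

; in the .mdp file
nstxout = 1    ; write coordinates every step
nstvout = 1    ; write velocities every step
nstfout = 1    ; write forces every step

# after the run
gmx traj -f traj.trr -s topol.tpr -ov veloc.xvg -of forces.xvg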

On 26 January 2018 at 04:55, Mohd Farid Ismail  wrote:

>
> Hi,
>
> I don't understand the documentation about the .trr file. The manual
> mentions that the .trr file contains the coordinates and velocities, and
> optionally the forces. A few questions:
> 1) Does this mean that the velocities are written at each step just like
> the coordinates, and that the information is written to the .trr file?
> 2) What switch to mdrun do I use to have the forces written to the .trr
> file at each step?
> 3) How can one extract these velocities and forces into separate files?
>
> I've googled a little bit but couldn't find any definitive information.
>
> --
> Mohd Farid Ismail
>
>
>


[gmx-users] Output velocities and forces for each step?

2018-01-25 Thread Mohd Farid Ismail
Hi,

I don't understand the documentation about the .trr file. The manual mentions
that the .trr file contains the coordinates and velocities, and optionally the
forces. A few questions:
1) Does this mean that the velocities are written at each step just like the
coordinates, and that the information is written to the .trr file?
2) What switch to mdrun do I use to have the forces written to the .trr file
at each step?
3) How can one extract these velocities and forces into separate files?

I've googled a little bit but couldn't find any definitive information.

-- 
Mohd Farid Ismail

Re: [gmx-users] How to convert charmm str to itp file?

2018-01-25 Thread 金 瑞涛
Greetings Miji

That means you need to install networkx.
Try pip install networkx. If you still get an error, it might be caused by the
latest networkx; try an older version if the problem persists, unless you are
still missing some other module.
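
For example, installing it into your home directory without admin rights could
look like this (a sketch; the version pin is only needed if the script does not
work with the networkx you get by default):

pip install --user networkx
pip install --user "networkx==1.11"    # fall back to an older 1.x release if needed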

regards

Riotto



Sent from my Samsung Galaxy smartphone.


 Original message 
From: Мижээ Батсайхан 
Date: 26/1/18 2:44 pm (GMT+10:00)
To: gromacs.org_gmx-users@maillist.sys.kth.se
Subject: [gmx-users] How to convert charmm str to itp file?

Dear Experts,

I generated a LIG.str file using CGenFF for a specific molecule. I would like
to use the GROMACS package, so I tried to convert the LIG.str file to a LIG.itp
file using cgenff_charmm2gmx.py, with the following command:

./cgenff_charmm2gmx.py LIG LIG.mol2 LIG.str charmm36

I got the following error:

Traceback (most recent call last):
File "./cgenff_charmm2gmx.py", line 47, in 
import networkx as nx
ImportError: No module named networkx

How can I fix this error?
Please give me any advice.


Miji

[gmx-users] How to convert charmm str to itp file?

2018-01-25 Thread Мижээ Батсайхан
Dear Experts,

I generated a LIG.str file using CGenFF for a specific molecule. I would like
to use the GROMACS package, so I tried to convert the LIG.str file to a LIG.itp
file using cgenff_charmm2gmx.py, with the following command:

./cgenff_charmm2gmx.py LIG LIG.mol2 LIG.str charmm36

I got the following error:

Traceback (most recent call last):
File "./cgenff_charmm2gmx.py", line 47, in 
import networkx as nx
ImportError: No module named networkx

How can I fix this error?
Please give me any advice.


Miji


[gmx-users] Replica exchange checkpoint error

2018-01-25 Thread 金 瑞涛
Dear Gromacs users and developers


I am currently facing a problem with one of my simulations. I am running a replica 
exchange simulation with 48 replicas. When I started the simulation everything was 
fine, but after I restarted the replicas from their checkpoint files they appeared 
to be running and kept generating log, trr, edr, etc. files, but no new checkpoint 
files were written at all.


The version I am using is gromacs/5.1.3. Any idea what is going wrong with 
these GROMACS simulations?


Best regards


Riotto
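
For reference, a restart of such a run usually looks roughly like the sketch
below (file names are placeholders; -cpi reads the existing checkpoints and
-cpt sets how often new checkpoints are written, in minutes):

mpirun -np 48 gmx_mpi mdrun -multi 48 -replex 1000 -s remd.tpr -cpi state.cpt -cpt 5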




[gmx-users] Formula for 9-3 wall potential

2018-01-25 Thread Jacob Alan Spooner
Dear GROMACS users,

I am attempting some simulations using the wall option with the 9-3 potential.  
The manual is very brief on this topic and does not explicitly show the form of 
this potential.  If anybody is familiar enough with using the 9-3 walls, am I 
correct to assume the interaction between a particle and the wall is calculated 
as follows:

U_lj = (2Pi/3) (rho*epsilon*sigma^3) ((2/15)(sigma/z)^9-(sigma/z)^6)

where rho is the wall density and z is the distance from the particle to the wall.  Any 
help is greatly appreciated.

Thanks,
Jake Spooner
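
For context, a 9-3 wall setup in the .mdp looks roughly like the sketch below
(option names from the mdp documentation; the atom types and values are
placeholders, not recommendations):

pbc            = xy
nwall          = 2
wall-type      = 9-3
wall-atomtype  = WALL1 WALL2
wall-density   = 100 100       ; number density of the wall atoms (nm^-3 for 9-3)
ewald-geometry = 3dc           ; when combining walls with PME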


[gmx-users] GROMACS performance information

2018-01-25 Thread Johanna K. S. Tiemann
Dear all

We are planning a new HPC cluster (budget ~$300,000) for GROMACS (typical
simulation size: 100,000 atoms), and I have a few questions regarding its
performance characteristics that are crucial for this decision.

1. Can 4 GPUs be used efficiently simultaneously in one compute node? I
have seen some benchmarks and they usually show bad scaling to more than 2
GPUs. However, my plan is to use more CPU power for each node than in all
the benchmarks I found. Is that going to help, or is the bottleneck
somewhere other than CPU power?

2. Is NVLINK going to have a positive performance impact?

3. We are considering AMD EPYC CPUs instead of Intel Xeon for
price/performance reasons. However, I saw some recent benchmarks which
showed substantial performance gains when using AVX-512 (which only Intel
supports at the moment), although those benchmarks were done without GPU
acceleration. I would guess that most of the calculations that benefit from
AVX-512 are actually offloaded to the GPU anyway. So my question boils down
to: is there any disadvantage when using AMD EPYC CPUs that I should be
aware of?

Any help would be highly appreciated. Best regards and thanks in advance,

Alexander Vogel & Johanna Tiemann

-- 
Johanna Tiemann, MSc
AG ProteInformatics / Hildebrand lab
Institute of Medical Physics and Biophysics
University Leipzig
Härtelstr. 16-18, 04107 Leipzig

Phone: +49 341 - 97 157 60
mobile: +49 176 - 832 798 51
skype: minga-madl

Re: [gmx-users] Simulation freezes but the job keeps on running

2018-01-25 Thread Åke Sandgren


On 01/25/2018 08:41 PM, Szilárd Páll wrote:
> Åke, do you have any other data from your investigation (e.g. version/range
> that reproduced the hangs, frequency of hangs, size of the runs, etc.)?

No hard data, but multiple versions of OpenMPI 2.x and various user cases
of different sizes; it usually takes a couple of hours before it hangs.

I basically gave up after determining that IntelMPI solved the problem.
I just have too many other, more pressing, issues to deal with at the
moment.

With a bit of luck I'll be able to revisit this some time later. I do
have a specific case that always hangs though.

-- 
Ake Sandgren, HPC2N, Umea University, S-90187 Umea, Sweden
Internet: a...@hpc2n.umu.se   Phone: +46 90 7866134 Fax: +46 90-580 14
Mobile: +46 70 7716134 WWW: http://www.hpc2n.umu.se

Re: [gmx-users] KALP15 in DPPC

2018-01-25 Thread Justin Lemkul



On 1/25/18 12:17 PM, negar habibzadeh wrote:

How much time is needed to run? I changed the restrained equilibration run
(NVT) from 100 ps to 1 ns (1000 ps), but when I then ran NPT (without water
and lipid restraints) I again saw water inside the membrane.


I don't know. Such protocols are usually not necessary for a properly 
prepared membrane. If you've got a huge amount of void space, I suggest 
trying a different method to build the system, because perhaps the 
starting coordinates are simply poor.


-Justin
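
For the z-only water restraint suggested in the quoted exchange below, a rough
sketch (hypothetical file name and force constant) is a per-molecule entry like
this, included from the water topology inside an #ifdef block and activated in
the equilibration .mdp:

; posre_water.itp - restrain the water oxygen in z only
[ position_restraints ]
;  ai  funct   fcx   fcy   fcz
    1    1       0     0   1000

; in the equilibration .mdp
define = -DPOSRES_WATER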



On Wed, Jan 24, 2018 at 10:51 PM, Justin Lemkul  wrote:



On 1/24/18 11:16 AM, negar habibzadeh wrote:


i did it  but when i removed the restraints from water to equilibrate
again
,(after new equilibration ) i saw some water molecules  inside the
membrane
again. what can i do ?


Let the restrained equilibration run longer. Make sure you're not
restraining the lipids in any way.

-Justin




On Wed, Jan 24, 2018 at 4:24 PM, Justin Lemkul  wrote:



On 1/24/18 5:02 AM, negar habibzadeh wrote:

hi . i am doing simulation of peptide in DOPC bilayer. i have dopc.itp ,

dopc.pdb, dopc.gro , peptide.itp , sample.top for dopc ,
peptide.pdb,topol.top. i used below commands.

gmx editconf -f peptide.gro -o pep.gro -box 6.35172   6.80701   7.49241
-c
(it corresponds to the x/y/z box vectors of the DOPC unit cell)
i merg peptide and dopc:
cat pep.gro DOPC_323K.gro > tot1.gro
(I remove unnecessary lines)
i add ions :
gmx grompp -f ions.mdp -c tot1.gro -p mem.top -o ions.tpr
gmx genion -s ions.tpr -o tot.gro -p mem.top -pname NA -nname CL -nn 8
i get tpr file  (in mem.mdp i add some line to freeze protein )
gmx grompp -f mem.mdp -c tot.gro -p mem.top -o mem.tpr -n index.ndx
and i use g-membed command:
g_membed -f mem.tpr -dat mem.dat -c final.gro -n index.ndx -xyinit 0.1
(in
mem.dat i include the place of protein in the center of box)
in final.gro there were a few stray water molecules, i deleted them
manually and
i did energy minimization :
gmx grompp -f minim.mdp -c final.gro -p mem.top -o em.tpr
gmx mdrun -v -deffnm em
i checked em.gro , every thing is ok . but when i run nvt
in nvt.gro , A large number of water molecules are inside the membrane.
how can i solve this problem ?

If there's lots of void space around the protein in the membrane, then

you'll either need to prepare the system more carefully to prevent such
voids, or do an equilibration with water molecules restrained in the
z-dimension only, to prevent them from diffusing into the membrane. Then,
remove the restraints and equilibrate again.

-Justin

--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Virginia Tech Department of Biochemistry

303 Engel Hall
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.biochem.vt.edu/people/faculty/JustinLemkul.html

==




--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Virginia Tech Department of Biochemistry

303 Engel Hall
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.biochem.vt.edu/people/faculty/JustinLemkul.html

==




--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Virginia Tech Department of Biochemistry

303 Engel Hall
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.biochem.vt.edu/people/faculty/JustinLemkul.html

==



Re: [gmx-users] Simulation freezes but the job keeps on running

2018-01-25 Thread Szilárd Páll
On Thu, Jan 25, 2018 at 7:23 PM, Searle Duay  wrote:

> Hi Ake,
>
> I am not sure, and I don't know how to check the build. But, I see the
> following in the output log file whenever I run GROMACS in PSC bridges:
>
> GROMACS version:2016
> Precision:  single
> Memory model:   64 bit
> MPI library:MPI
> OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 32)
> GPU support:CUDA
> SIMD instructions:  AVX2_256
> FFT library:fftw-3.3.4-sse2-avx
> RDTSCP usage:   enabled
> TNG support:enabled
> Hwloc support:  hwloc-1.7.0
> Tracing support:disabled
> Built on:   Fri Oct  7 15:06:50 EDT 2016
> Built by:   mmad...@gpu012.pvt.bridges.psc.edu [CMAKE]
> Build OS/arch:  Linux 3.10.0-327.4.5.el7.x86_64 x86_64
> Build CPU vendor:   Intel
> Build CPU brand:Intel(R) Xeon(R) CPU E5-2695 v3 @ 2.30GHz
> Build CPU family:   6   Model: 63   Stepping: 2
> Build CPU features: aes apic avx avx2 clfsh cmov cx8 cx16 f16c fma htt lahf
> mmx msr nonstop_tsc pcid pclmuldq pdcm pdpe1gb popcnt pse rdrnd rdtscp sse2
> sse3 sse4.1 sse4.2 ssse3 tdt x2apic
> C compiler: /usr/lib64/ccache/cc GNU 4.8.5
> C compiler flags:-march=core-avx2 -O3 -DNDEBUG -funroll-all-loops
> -fexcess-precision=fast
> C++ compiler:   /usr/lib64/ccache/c++ GNU 4.8.5
> C++ compiler flags:  -march=core-avx2-std=c++0x   -O3 -DNDEBUG
> -funroll-all-loops -fexcess-precision=fast
> CUDA compiler:  /opt/packages/cuda/8.0RC/bin/nvcc nvcc: NVIDIA (R)
> Cuda
> compiler driver;Copyright (c) 2005-2016 NVIDIA Corporation;Built on
> Wed_May__4_21:01:56_CDT_2016;Cuda compilation tools, release 8.0, V8.0.26
> CUDA compiler
> flags:-gencode;arch=compute_20,code=sm_20;-gencode;arch=
> compute_30,code=sm_30;-gencode;arch=compute_35,code=
> sm_35;-gencode;arch=compute_37,code=sm_37;-gencode;arch=
> compute_50,code=sm_50;-gencode;arch=compute_52,code=sm_52;\
>
> -gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_
> 61,code=sm_61;-gencode;arch=compute_60,code=compute_60;-
> gencode;arch=compute_61,code=compute_61;-use_fast_math;;;-
> Xcompiler;,-march=core-avx2,,;-Xcompiler;-O3,-DNDEBUG,-funro\
>
> ll-all-loops,-fexcess-precision=fast,,;
> CUDA driver:9.0
> CUDA runtime:   8.0
>
> Would that be built using openmpi?
>

Based on that it's hard to say. We don't detect MPI flavors; the only hint
in the version header would be the path to the compiler wrapper, which
might indicate which MPI version was used. However, in this case whoever
compiled GROMACS used ccache, so we can't see the full path to an mpicc
binary.

I suggest that you consult your admins, perhaps try to use a different
MPI/version.
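
One quick way to see which MPI flavor a given gmx_mpi binary is linked against
is to look at its shared libraries and at the MPI launcher version, e.g.:

ldd $(which gmx_mpi) | grep -i mpi
mpirun --version

Open MPI and Intel MPI both identify themselves in that output.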


Åke, do you have any other data from your investigation (e.g. version/range
that reproduced the hangs, frequency of hangs, size of the runs, etc.)?

--
Szilárd



>
> Thanks!
>
> Searle
>
> On Thu, Jan 25, 2018 at 1:08 PM, Åke Sandgren 
> wrote:
>
> > Is that build using openmpi?
> >
> > We've seen cases when gromacs built with openmpi hangs repeatedly, while
> > the same build using intelmpi works.
> >
> > We still haven't figured out why.
> >
> > On 01/25/2018 06:39 PM, Searle Duay wrote:
> > > Good day!
> > >
> > > I am running a 10 ns peptide-membrane simulation using GPUs from PSC
> > > Bridges. The simulation starts properly, but it does not end on the
> time
> > > that the simulation will end, as estimated by the software. The job is
> > > still running and the simulation seems frozen because no simulation
> time
> > is
> > > added even after an hour of the job running.
> > >
> > > I have submitted the following SLURM code:
> > >
> > > #!/bin/bash
> > > #SBATCH -J k80_1n_4g
> > > #SBATCH -o %j.out
> > > #SBATCH -N 1
> > > #SBATCH -n 28
> > > #SBATCH --ntasks-per-node=28
> > > #SBATCH -p GPU
> > > #SBATCH --gres=gpu:k80:4
> > > #SBATCH -t 48:00:00
> > > #SBATCH --mail-type=BEGIN,END,FAIL
> > > #SBATCH --mail-user=searle.d...@uconn.edu
> > >
> > > set echo
> > > set -x
> > >
> > > module load gromacs/2016_gpu
> > >
> > > echo SLURM_NPROCS= $SLURM_NPROCS
> > >
> > > cd $SCRATCH/prot_umbrella/gromacs/conv
> > >
> > > mpirun -np $SLURM_NPROCS gmx_mpi mdrun -deffnm umbrella8 -pf
> > > pullf-umbrella8.xvg -px pullx-umbrella8.xvg -v -ntomp 2
> > >
> > > exit
> > >
> > > I am not sure if the error is from the hardware or from my simulation
> > > setup. I have already ran similar simulations (I just varied the number
> > of
> > > nodes that I am using, but same system), and some of them are
> successful.
> > > There are just some which seems to freeze in the middle of the run.
> > >
> > > Thank you!
> > >
> >
> > --
> > Ake Sandgren, HPC2N, Umea University, S-90187 Umea, Sweden
> > Internet: a...@hpc2n.umu.se   Phone: +46 90 7866134 Fax: +46 90-580 14
> > Mobile: +46 70 7716134 WWW: http://www.hpc2n.umu.se

Re: [gmx-users] Simulation freezes but the job keeps on running

2018-01-25 Thread Searle Duay
Hi Ake,

I am not sure, and I don't know how to check the build. But I see the
following in the output log file whenever I run GROMACS on PSC Bridges:

GROMACS version:2016
Precision:  single
Memory model:   64 bit
MPI library:MPI
OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 32)
GPU support:CUDA
SIMD instructions:  AVX2_256
FFT library:fftw-3.3.4-sse2-avx
RDTSCP usage:   enabled
TNG support:enabled
Hwloc support:  hwloc-1.7.0
Tracing support:disabled
Built on:   Fri Oct  7 15:06:50 EDT 2016
Built by:   mmad...@gpu012.pvt.bridges.psc.edu [CMAKE]
Build OS/arch:  Linux 3.10.0-327.4.5.el7.x86_64 x86_64
Build CPU vendor:   Intel
Build CPU brand:Intel(R) Xeon(R) CPU E5-2695 v3 @ 2.30GHz
Build CPU family:   6   Model: 63   Stepping: 2
Build CPU features: aes apic avx avx2 clfsh cmov cx8 cx16 f16c fma htt lahf
mmx msr nonstop_tsc pcid pclmuldq pdcm pdpe1gb popcnt pse rdrnd rdtscp sse2
sse3 sse4.1 sse4.2 ssse3 tdt x2apic
C compiler: /usr/lib64/ccache/cc GNU 4.8.5
C compiler flags:-march=core-avx2 -O3 -DNDEBUG -funroll-all-loops
-fexcess-precision=fast
C++ compiler:   /usr/lib64/ccache/c++ GNU 4.8.5
C++ compiler flags:  -march=core-avx2-std=c++0x   -O3 -DNDEBUG
-funroll-all-loops -fexcess-precision=fast
CUDA compiler:  /opt/packages/cuda/8.0RC/bin/nvcc nvcc: NVIDIA (R) Cuda
compiler driver;Copyright (c) 2005-2016 NVIDIA Corporation;Built on
Wed_May__4_21:01:56_CDT_2016;Cuda compilation tools, release 8.0, V8.0.26
CUDA compiler
flags:-gencode;arch=compute_20,code=sm_20;-gencode;arch=compute_30,code=sm_30;-gencode;arch=compute_35,code=sm_35;-gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_52,code=sm_52;\

-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_60,code=compute_60;-gencode;arch=compute_61,code=compute_61;-use_fast_math;;;-Xcompiler;,-march=core-avx2,,;-Xcompiler;-O3,-DNDEBUG,-funro\

ll-all-loops,-fexcess-precision=fast,,;
CUDA driver:9.0
CUDA runtime:   8.0

Would that be built using openmpi?

Thanks!

Searle

On Thu, Jan 25, 2018 at 1:08 PM, Åke Sandgren 
wrote:

> Is that build using openmpi?
>
> We've seen cases when gromacs built with openmpi hangs repeatedly, while
> the same build using intelmpi works.
>
> We still haven't figured out why.
>
> On 01/25/2018 06:39 PM, Searle Duay wrote:
> > Good day!
> >
> > I am running a 10 ns peptide-membrane simulation using GPUs from PSC
> > Bridges. The simulation starts properly, but it does not end on the time
> > that the simulation will end, as estimated by the software. The job is
> > still running and the simulation seems frozen because no simulation time
> is
> > added even after an hour of the job running.
> >
> > I have submitted the following SLURM code:
> >
> > #!/bin/bash
> > #SBATCH -J k80_1n_4g
> > #SBATCH -o %j.out
> > #SBATCH -N 1
> > #SBATCH -n 28
> > #SBATCH --ntasks-per-node=28
> > #SBATCH -p GPU
> > #SBATCH --gres=gpu:k80:4
> > #SBATCH -t 48:00:00
> > #SBATCH --mail-type=BEGIN,END,FAIL
> > #SBATCH --mail-user=searle.d...@uconn.edu
> >
> > set echo
> > set -x
> >
> > module load gromacs/2016_gpu
> >
> > echo SLURM_NPROCS= $SLURM_NPROCS
> >
> > cd $SCRATCH/prot_umbrella/gromacs/conv
> >
> > mpirun -np $SLURM_NPROCS gmx_mpi mdrun -deffnm umbrella8 -pf
> > pullf-umbrella8.xvg -px pullx-umbrella8.xvg -v -ntomp 2
> >
> > exit
> >
> > I am not sure if the error is from the hardware or from my simulation
> > setup. I have already ran similar simulations (I just varied the number
> of
> > nodes that I am using, but same system), and some of them are successful.
> > There are just some which seems to freeze in the middle of the run.
> >
> > Thank you!
> >
>
> --
> Ake Sandgren, HPC2N, Umea University, S-90187 Umea, Sweden
> Internet: a...@hpc2n.umu.se   Phone: +46 90 7866134 Fax: +46 90-580 14
> Mobile: +46 70 7716134 WWW: http://www.hpc2n.umu.se



-- 
Searle Aichelle S. Duay
Ph.D. Student
Chemistry Department, University of Connecticut
searle.d...@uconn.edu

Re: [gmx-users] Simulation freezes but the job keeps on running

2018-01-25 Thread Åke Sandgren
Is that build using openmpi?

We've seen cases when gromacs built with openmpi hangs repeatedly, while
the same build using intelmpi works.

We still haven't figured out why.

On 01/25/2018 06:39 PM, Searle Duay wrote:
> Good day!
> 
> I am running a 10 ns peptide-membrane simulation using GPUs from PSC
> Bridges. The simulation starts properly, but it does not end on the time
> that the simulation will end, as estimated by the software. The job is
> still running and the simulation seems frozen because no simulation time is
> added even after an hour of the job running.
> 
> I have submitted the following SLURM code:
> 
> #!/bin/bash
> #SBATCH -J k80_1n_4g
> #SBATCH -o %j.out
> #SBATCH -N 1
> #SBATCH -n 28
> #SBATCH --ntasks-per-node=28
> #SBATCH -p GPU
> #SBATCH --gres=gpu:k80:4
> #SBATCH -t 48:00:00
> #SBATCH --mail-type=BEGIN,END,FAIL
> #SBATCH --mail-user=searle.d...@uconn.edu
> 
> set echo
> set -x
> 
> module load gromacs/2016_gpu
> 
> echo SLURM_NPROCS= $SLURM_NPROCS
> 
> cd $SCRATCH/prot_umbrella/gromacs/conv
> 
> mpirun -np $SLURM_NPROCS gmx_mpi mdrun -deffnm umbrella8 -pf
> pullf-umbrella8.xvg -px pullx-umbrella8.xvg -v -ntomp 2
> 
> exit
> 
> I am not sure if the error is from the hardware or from my simulation
> setup. I have already ran similar simulations (I just varied the number of
> nodes that I am using, but same system), and some of them are successful.
> There are just some which seems to freeze in the middle of the run.
> 
> Thank you!
> 

-- 
Ake Sandgren, HPC2N, Umea University, S-90187 Umea, Sweden
Internet: a...@hpc2n.umu.se   Phone: +46 90 7866134 Fax: +46 90-580 14
Mobile: +46 70 7716134 WWW: http://www.hpc2n.umu.se


[gmx-users] Simulation freezes but the job keeps on running

2018-01-25 Thread Searle Duay
Good day!

I am running a 10 ns peptide-membrane simulation using GPUs from PSC
Bridges. The simulation starts properly, but it does not finish by the time
estimated by the software. The job is still running, yet the simulation seems
frozen: no simulation time is added even after an hour of the job running.

I have submitted the following SLURM code:

#!/bin/bash
#SBATCH -J k80_1n_4g
#SBATCH -o %j.out
#SBATCH -N 1
#SBATCH -n 28
#SBATCH --ntasks-per-node=28
#SBATCH -p GPU
#SBATCH --gres=gpu:k80:4
#SBATCH -t 48:00:00
#SBATCH --mail-type=BEGIN,END,FAIL
#SBATCH --mail-user=searle.d...@uconn.edu

set echo
set -x

module load gromacs/2016_gpu

echo SLURM_NPROCS= $SLURM_NPROCS

cd $SCRATCH/prot_umbrella/gromacs/conv

mpirun -np $SLURM_NPROCS gmx_mpi mdrun -deffnm umbrella8 -pf
pullf-umbrella8.xvg -px pullx-umbrella8.xvg -v -ntomp 2

exit

I am not sure if the error is from the hardware or from my simulation
setup. I have already run similar simulations (I just varied the number of
nodes that I am using, but kept the same system), and some of them are
successful. There are just some which seem to freeze in the middle of the run.

Thank you!

-- 
Searle Aichelle S. Duay
Ph.D. Student
Chemistry Department, University of Connecticut
searle.d...@uconn.edu


Re: [gmx-users] KALP15 in DPPC

2018-01-25 Thread negar habibzadeh
How much time is needed to run? I changed the restrained equilibration run
(NVT) from 100 ps to 1 ns (1000 ps), but when I then ran NPT (without water
and lipid restraints) I again saw water inside the membrane.


On Wed, Jan 24, 2018 at 10:51 PM, Justin Lemkul  wrote:

>
>
> On 1/24/18 11:16 AM, negar habibzadeh wrote:
>
>> i did it  but when i removed the restraints from water to equilibrate
>> again
>> ,(after new equilibration ) i saw some water molecules  inside the
>> membrane
>> again. what can i do ?
>>
>
> Let the restrained equilibration run longer. Make sure you're not
> restraining the lipids in any way.
>
> -Justin
>
>
>
>> On Wed, Jan 24, 2018 at 4:24 PM, Justin Lemkul  wrote:
>>
>>
>>> On 1/24/18 5:02 AM, negar habibzadeh wrote:
>>>
>>> hi . i am doing simulation of peptide in DOPC bilayer. i have dopc.itp ,
 dopc.pdb, dopc.gro , peptide.itp , sample.top for dopc ,
 peptide.pdb,topol.top. i used below commands.

 gmx editconf -f peptide.gro -o pep.gro -box 6.35172   6.80701   7.49241
 -c
 (it corresponds to the x/y/z box vectors of the DOPC unit cell)
 i merg peptide and dopc:
 cat pep.gro DOPC_323K.gro > tot1.gro
 (I remove unnecessary lines)
 i add ions :
 gmx grompp -f ions.mdp -c tot1.gro -p mem.top -o ions.tpr
 gmx genion -s ions.tpr -o tot.gro -p mem.top -pname NA -nname CL -nn 8
 i get tpr file  (in mem.mdp i add some line to freeze protein )
 gmx grompp -f mem.mdp -c tot.gro -p mem.top -o mem.tpr -n index.ndx
 and i use g-membed command:
 g_membed -f mem.tpr -dat mem.dat -c final.gro -n index.ndx -xyinit 0.1
 (in
 mem.dat i include the place of protein in the center of box)
 in final.gro there were a few stray water molecules, i deleted them
 manually and
 i did energy minimization :
 gmx grompp -f minim.mdp -c final.gro -p mem.top -o em.tpr
 gmx mdrun -v -deffnm em
 i checked em.gro , every thing is ok . but when i run nvt
 in nvt.gro , A large number of water molecules are inside the membrane.
 how can i solve this problem ?

 If there's lots of void space around the protein in the membrane, then
>>> you'll either need to prepare the system more carefully to prevent such
>>> voids, or do an equilibration with water molecules restrained in the
>>> z-dimension only, to prevent them from diffusing into the membrane. Then,
>>> remove the restraints and equilibrate again.
>>>
>>> -Justin
>>>
>>> --
>>> ==
>>>
>>> Justin A. Lemkul, Ph.D.
>>> Assistant Professor
>>> Virginia Tech Department of Biochemistry
>>>
>>> 303 Engel Hall
>>> 340 West Campus Dr.
>>> Blacksburg, VA 24061
>>>
>>> jalem...@vt.edu | (540) 231-3129
>>> http://www.biochem.vt.edu/people/faculty/JustinLemkul.html
>>>
>>> ==
>>>
>>>
>>>
> --
> ==
>
> Justin A. Lemkul, Ph.D.
> Assistant Professor
> Virginia Tech Department of Biochemistry
>
> 303 Engel Hall
> 340 West Campus Dr.
> Blacksburg, VA 24061
>
> jalem...@vt.edu | (540) 231-3129
> http://www.biochem.vt.edu/people/faculty/JustinLemkul.html
>
> ==
>


Re: [gmx-users] Alternative for do_dssp for secondary structure analysis?

2018-01-25 Thread Justin Lemkul



On 1/25/18 10:54 AM, ZHANG Cheng wrote:

Dear Gromacs,
Can I ask if there is an alternative to do_dssp for secondary structure 
analysis?


I am waiting for our IT staff to install DSSP on our cluster, but there were 
some errors.
https://github.com/UCL-RITS/rcps-buildscripts/issues/137



Normally, users can install anything they like in their home 
directories. Then just set the DSSP environment variable to point to the 
binary.



While still waiting for that, can I ask if Gromacs has other tools (e.g. 
STRIDE) for secondary structure analysis?


It does not.

-Justin
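
A minimal sketch of the environment-variable approach mentioned above (the path
is just an example for a binary installed in your home directory):

export DSSP=$HOME/bin/dssp
gmx do_dssp -f traj.xtc -s topol.tpr -o ss.xpm -sc scount.xvg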

--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Virginia Tech Department of Biochemistry

303 Engel Hall
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.biochem.vt.edu/people/faculty/JustinLemkul.html

==



[gmx-users] Alternative for do_dssp for secondary structure analysis?

2018-01-25 Thread ZHANG Cheng
Dear Gromacs,
Can I ask if there is an alternative to do_dssp for secondary structure 
analysis?


I am waiting for our IT staff to install DSSP on our cluster, but there were 
some errors.
https://github.com/UCL-RITS/rcps-buildscripts/issues/137


While still waiting for that, can I ask if Gromacs has other tools (e.g. 
STRIDE) for secondary structure analysis?


Now I am using the VMD Timeline tool, which uses STRIDE:
http://webclu.bio.wzw.tum.de/stride/


Thank you.


Yours sincerely
Cheng


[gmx-users] (no subject)

2018-01-25 Thread alex rayevsky
Dear users and developers!

I have a question about the replica exchange sampling and simulated annealing
methods. I have an X-ray structure of a protein (TubulinG); however, it lacks
the last 10-11 residues, which are probably exposed to the solvent (and, it
seems, flexible enough to be invisible to X-ray). The protein exists in two
isoforms, which differ by a single amino acid (at approximately position -15
from the end); however, some in-house biochemical experiments indicate that
this change is crucial for the mobility of these terminal 10-15 residues.

The problem is that I can't predict the exact initial location/conformation
of the full-length C-terminus, and standard MD simulations showed different,
dissimilar positions of the region - it can be either flexible or pinned to
the protein's body during the MD. It seems to depend on the seed, which acts
as a trigger for rotation of the exposed, extended 10-15 residue C-terminus...
Before starting a production MD for subsequent analysis I need some assurance
that the initial conformation is reliable. I-TASSER is not a good way, because
it guarantees nothing except a fully minimized state, and homology modelling
is likewise of little use.

That is why I want to start a preliminary MD to sample this C-terminal end
(rebuilt with a program like PyMOL or MolSoft, or the SWISS-MODEL server), and
I am counting on the two approaches mentioned above.

Does anybody know whether this is possible and reasonable?
Where can I find a working tutorial that I can adapt or use as is?
An additional question is about 'partial application' of the method to a
fragment of the protein, to avoid time-consuming calculations for the whole
system, which is not the priority, as the final goal is simply a minimized
protein obtained with a better-informed preparation algorithm.

Thank You in advance!!!



*Nemo me impune lacessit*
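
For the simulated annealing part, the relevant .mdp options look roughly like
the sketch below (one entry per tc-grps group; the times and temperatures are
placeholders only):

annealing         = single
annealing-npoints = 4
annealing-time    = 0 500 1000 2000    ; ps
annealing-temp    = 310 450 450 310    ; K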


Re: [gmx-users] Rupture force definition

2018-01-25 Thread Justin Lemkul



On 1/25/18 2:41 AM, Rakesh Mishra wrote:

Thanks for reply.

Dear Justin

Is it possible to convert an .xtc file to .pdb format while excluding the
solvent and ion molecules at the same time?
I am familiar with doing this using the "trjconv" command, but that way the
water molecules and ions are also present in the resulting .pdb file. How is
it possible to save only the coordinates of the system of interest, rather
than the solvent (water molecules) and ions?


trjconv (like most GROMACS tools) lets you pick whatever subset of atoms 
you want. Choose an appropriate group for output that suits your needs.


-Justin
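
As an example of that (a sketch; the group and file names are placeholders):

gmx make_ndx -f topol.tpr -o index.ndx    # build a non-solvent group if one does not exist yet
gmx trjconv -f traj.xtc -s topol.tpr -n index.ndx -o nowater.pdb

trjconv then prompts for the group to write out; choosing e.g. Protein (or a
custom group) leaves the water and ions out of the .pdb.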






On Wed, Jan 24, 2018 at 6:25 PM, Justin Lemkul  wrote:



On 1/24/18 5:02 AM, Rakesh Mishra wrote:


Dear Justin

Thank you very much for clearing up my doubts.
Let me extend my query in this respect again, regarding the pull code format
discussed in your umbrella sampling work.
Please have a look.

Pull code
pull= yes
pull_ngroups= 2
pull_ncoords= 1
pull_group1_name= chain_B
pull_group2_name= chain_A
pull_coord1_type= umbrella  ; harmonic biasing force
pull_coord1_geometry= distance  ; simple distance increase
pull_coord1_groups  =  1 2
pull_coord1_dim = Y N N
pull_coord1_rate= 0.0005  ; 0.0005 nm per ps = 5 nm per 10
ns
pull_coord1_k   = 400  ; kJ mol^-1 nm^-2
pull_coord1_start   = yes   ; define initial COM distance > 0

According to some explanations I found on the net, in the pull code written
above GROMACS by default reads "pull_group1_name = chain_B" as the reference
group and "pull_group2_name = chain_A" as the pulled group.

But suppose we want to pull two or more groups, e.g. B2 and B3, in the same
system, with two respective reference groups A2 and A3 for the pulled groups
B2 and B3. How would these be defined in the code above?

I mean, if I have three pull groups A1, A2, and A3 (which we want to pull),
and corresponding to these three reference groups B1, B2, and B3, and we are
pulling all of them in the same direction (+x), how do I define these
simultaneously in the pull code above?

Will I need to define the rate for each of these pull groups separately
(three times), or, if the rate is the same, is there no need to define it
three times for the three pull groups?


You can define any number of groups to be used in any number of biasing
potentials. Each needs its own complete definition (geometry, dimensions,
rate, force constant, etc). Get rid of the notion of "reference" and "pull
group" in this application; groups define the two ends of a reaction
coordinate, nothing more.


-Justin
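
As a sketch of what three independent coordinates can look like in the .mdp
(group names, rates and force constants are placeholders, not recommendations):

pull                 = yes
pull_ngroups         = 6
pull_ncoords         = 3
pull_group1_name     = A1
pull_group2_name     = B1
pull_group3_name     = A2
pull_group4_name     = B2
pull_group5_name     = A3
pull_group6_name     = B3
pull_coord1_type     = umbrella
pull_coord1_geometry = distance
pull_coord1_groups   = 1 2
pull_coord1_dim      = Y N N
pull_coord1_rate     = 0.0005
pull_coord1_k        = 400
pull_coord1_start    = yes
; pull_coord2_* and pull_coord3_* repeat the same block for groups 3 4 and 5 6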

--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Virginia Tech Department of Biochemistry

303 Engel Hall
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.biochem.vt.edu/people/faculty/JustinLemkul.html

==







--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Virginia Tech Department of Biochemistry

303 Engel Hall
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.biochem.vt.edu/people/faculty/JustinLemkul.html

==
