Re: [gmx-users] OPLS/AA + TIP5P, anybody?

2013-10-07 Thread gigo

Dear Chris,
Thank you for your message. I uploaded everything to Redmine. I will 
let you know how the simulation with generated velocities goes.
I asked the authors for any example input that worked with tip5p and 
oplsaa, but I did not get anything...

Best,
Grzegorz


On 2013-10-04 17:20, Christopher Neale wrote:

Dear Grzegorz:

From a quick look at your .mdp, I also suggest that you go back to 
the system including the peptide (the one where EM finished with the 
modified flexible tip5p but MD then crashed with the standard rigid 
tip5p) and try the MD again using gen-vel = yes.
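
For reference, the velocity-generation keywords involved boil down to the 
following .mdp fragment; a minimal sketch with illustrative values 
(gen_temp should match the ref_t you equilibrate at, and gen_seed = -1 
picks a pseudo-random seed):

gen_vel     =  yes
gen_temp    =  300.0
gen_seed    =  -1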


If you're still seeing problems, why not upload your water-only system
and your small-peptide test system to Redmine? It's meant as a
place to start a discussion, share files, and help us not to forget
about a problem that may exist, so I am not sure why you hesitate.

Also, you said that the authors of that other OPLS/TIP5P paper had no
problems. You might ask them for their .gro, .mdp and .top files so that
you can see exactly what they did and how it differs from what you are
doing.

Chris.





Re: [gmx-users] OPLS/AA + TIP5P, anybody?

2013-10-01 Thread gigo

Dear Chris,
By now 7 ns of the MD have passed without a single warning.
Best Regards,
Grzegorz


P.S. The .mdp:

constraints =  none
integrator  =  md
dt  =  0.001; ps
nsteps      =  10000000 ; total 10 ns
nstcomm =  1000
nstxout =  0
nstvout =  0
nstfout =  0
nstxtcout   =  25
xtc_grps=  System
nstlog  =  1000
nstenergy   =  1000
nstlist =  20
ns_type =  grid
rlist   =  1.3
coulombtype =  PME
fourierspacing  =  0.1
pme_order   =  4
optimize_fft=  yes
rcoulomb=  1.3
rvdw=  1.3
vdwtype =  cut-off
pbc =  xyz
DispCorr=  EnerPres
Tcoupl  =  v-rescale
ld_seed =  -1
tc-grps =  System
tau_t   =  0.1
ref_t   =  300.0
pcoupl  =  Parrinello-Rahman
tau_p   =  0.5
compressibility =  4.5e-5
ref_p   =  1.0
gen_vel =  no
cutoff-scheme   =  Verlet
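
For completeness, an .mdp like the one above gets turned into a run in the 
usual GROMACS 4.6 way; a minimal sketch with placeholder file names (the 
MPI build is called mdrun_mpi as elsewhere in this thread):

grompp -f md.mdp -c equilibrated.gro -p topol.top -o md.tpr
mdrun -deffnm md -cpt 20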


On 2013-09-30 21:47, Christopher Neale wrote:

Once a system passes EM and a couple of ps of MD, is it always stable
indefinitely? If not, then something is wrong somewhere.




Re: [gmx-users] OPLS/AA + TIP5P, anybody?

2013-09-29 Thread gigo

Dear Chris,
The authors answered me very quickly. They did not have such a problem, but 
I still don't know the details of their input. They used gromacs-3.3, so 
I decided to give the old version a try and did some tests with 3.3.4. 
Although the same problem occurred during steep minimization, some 
interesting things popped out. When I tried to grompp the system with 
plain tip5p for cg minimization, it failed with "ERROR: can not do 
Conjugate Gradients with constraints (8484)", even though I did not set 
any constraints. The error is the same for tip4p unless you use the 
flexible model, which tip5p does not have, so the water must be getting 
constrained internally. I guess the treatment of virtual sites in 
gromacs-3.3 has something to do with this. I noticed that constraints 
make simulations with tip5p more stable. It should not happen that the 
LP virtual atoms get pulled farther than the defined 0.7 A from the 
oxygen, right? I will keep you updated.

Best Regards,
Grzegorz
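
For reference, the steepest-descent workaround used elsewhere in this 
thread (which sidesteps the CG-with-constraints limitation) amounts to an 
.mdp fragment along these lines; emtol and nsteps follow the numbers 
quoted in this thread, emstep is illustrative:

integrator  =  steep
emtol       =  1.0
emstep      =  0.01
nsteps      =  6000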


On 2013-09-29 04:50, Christopher Neale wrote:

Dear Gigo:

that's a good comprehensive testing and report. Please let us know
what you find out from those authors.
Their paper was short on methods (unless I missed it... I didn't check
for any SI), so perhaps they did something
non-standard and didn't report it.

I think at this point it is a good idea for you to file a redmine
issue. It's not a gromacs error per se, but
if this is true then pdb2gmx or grompp should give a warning or error
for the combination of oplsaa and tip5p.

Chris.



Re: [gmx-users] OPLS/AA + TIP5P, anybody?

2013-09-29 Thread gigo

Dear Chris,
I have not posted the Redmine issue yet; I want to check every possibility 
beforehand. I will now analyze the trajectories more closely.

Best,
Grzegorz

On 2013-09-29 18:47, Christopher Neale wrote:

Dear Grzegorz:

Under no conditions should any of the tip5p geometry change (for the
standard tip5p model).
If you find that this is happening, then that is certainly an error.
You can check if you like by analyzing
your trajectory. However, flexible bonds will allow the distance from
the arginine N to the arginine
H to vary, which may allow a closer approach of the arginine H to the
tip5p dummy site.

Did you verify that a water box (no protein) simulates without error?

Did you post a redmine issue with .mdp , .gro , and .top files?

Chris.

-- original message --

Dear Chris,
The authors answered me very quickly. They did not have such a problem, but
I still don't know the details of their input. They used gromacs-3.3, so
I decided to give the old version a try and did some tests with 3.3.4.
Although the same problem occurred during steep minimization, some
interesting things popped out. When I tried to grompp the system with
plain tip5p for cg minimization, it failed with "ERROR: can not do
Conjugate Gradients with constraints (8484)", even though I did not set
any constraints. The error is the same for tip4p unless you use the
flexible model, which tip5p does not have, so the water must be getting
constrained internally. I guess the treatment of virtual sites in
gromacs-3.3 has something to do with this. I noticed that constraints
make simulations with tip5p more stable. It should not happen that the
LP virtual atoms get pulled farther than the defined 0.7 A from the
oxygen, right? I will keep you updated.
Best Regards,
Grzegorz



Re: [gmx-users] OPLS/AA + TIP5P, anybody?

2013-09-29 Thread gigo

Dear Chris,
I put one tip5p molecule in the center of a dodecahedral box (2 nm from 
that molecule to the walls), filled it with tip5p, and ran 6000 steps of 
steepest-descent minimization. After another 2704 steps of cg it converged 
to emtol 1.0. I then ran 100k steps of NVT on this box 
(http://shroom.ibb.waw.pl/tip5p/pure1). But the water is very 
capricious. If I ran only 2000 steps of steep, the following cg crashed 
after less than 1000 steps because a water molecule could not be 
settled. I could not minimize another box at all, one filled only by 
genbox from an empty .gro file (http://shroom.ibb.waw.pl/tip5p/pure2). 
I understand that you need some luck when you run a simulation in PBC 
with rigid water that was not well placed and interacts through the 
walls of the box with the other side. Also, I had several segfaults 
during minimization that I was able to avoid only by limiting the number 
of cores.
I checked the distances between OW and LPx in a crashing minimization with 
a peptide (2812 water molecules). The maximum force reached 8.8e+24 within 
225 steep steps, but all 5624 distances were rock solid at 0.7 A, as 
expected.
I still have not posted the Redmine issue; I want to be sure that I am 
doing everything correctly.


On 2013-09-29 18:47, Christopher Neale wrote:

Dear Grzegorz:

Under no conditions should any of the tip5p geometry change (for the
standard tip5p model).
If you find that this is happening, then that is certainly an error.
You can check if you like by analyzing
your trajectory. However, flexible bonds will allow the distance from
the arginine N to the arginine
H to vary, which may allow a closer approach of the arginine H to the
tip5p dummy site.

Did you verify that a water box (no protein) simulates without error?

Did you post a redmine issue with .mdp , .gro , and .top files?

Chris.




Re: [gmx-users] OPLS/AA + TIP5P, anybody?

2013-09-28 Thread gigo

Dear Chris,
I am really grateful for your help. This is what I did, with additional 
LJ terms on LP1 and LP2 of tip5p:
- 5000 steps of steepest descent with position restraints on the protein 
and flexible water (flexibility like in tip4p),
- 5000 steps of steep, no restraints, flexible water,
- 5000 steps of cg, flexible water,
- 10 steps of MD with posres and constrained bond lengths, very weak 
temperature coupling (v-rescale), starting without generating initial 
velocities, so it heats very slowly to 300 K, no pressure coupling,
- 10 steps of MD with posres, no constraints, v-rescale, NVT,
- 10 steps of MD, no posres, NVT,
- 10 steps of MD, v-rescale, Berendsen pressure coupling,
- 10 steps of MD, v-rescale, Parrinello-Rahman pressure coupling.

The output of this chain, after removing the LJ from the LP sites, became 
the input for 4 simulations just like the last one in the chain above, 
with or without posres and constraints turned on.

Results:
1) Posres off, constraints off:
step 0
WARNING: Listed nonbonded interaction between particles 1157 and 1163
== Why does it say particles and not atoms? Never mind, it's a lysine 
on the surface of the protein; one of the atoms is a dimensionless hydrogen.
(...)
step 63: Water molecule starting at atom 7923 can not be settled.
(...)
Segmentation fault

2) Posres off, constraints on:
Warnings like the above + LINCS warnings, all for charged amino acids on 
the surface of the protein.
Segfault at step 63

3) Posres on, constraints off, had to add refcoord_scaling = COM:
WARNING: Listed nonbonded interaction between particles 3075 and 3079
== arginine on the surface, dimensionless hydrogen

4) Posres on, constraints on, refcoord_scaling = COM:
Same warnings, several other positively charged dimensionless hydrogens 
listed, waters could not be settled, segfault.

I tried to run a few other peptides and proteins with tip5p: 100% crashes.


Also, tip5p has been used successfully with the charmm force field:
http://pubs.acs.org/doi/abs/10.1021/ct700053u


Yes, but there are no dimensionless charged hydrogens there, except for 
those on ... tip3p, tip4p and tip5p water.


(...)
http://pubs.acs.org/doi/abs/10.1021/ct300180w
(...)
Are you using halogens with oplsaa-x?


No, I use the OPLSAA distributed with Gromacs.


Standard oplsaa (non-x) and tip5p seem to be a fine combination:
http://www.sciencedirect.com/science/article/pii/S0006349507713863

you might want to contact the authors of that paper to see if they
ever had such problems.


Thank you, I will ask them. I am becoming convinced, though, that it is 
not possible to run any simulation with tip5p and proteins (containing 
arginine, for example) without some tricks, at least strengthening or 
constraining bonds. During minimization the distance between HHxy and 
NHx in arginine grew to 1.48 A while HHxy was landing on an LP of the 
water. Repeatability is 100%.

Regards,
Grzegorz
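
For reference, the bond-constraint trick mentioned above is, in .mdp 
terms, roughly the following fragment (standard keywords, with the LINCS 
values at their defaults); whether it actually rescues OPLS-AA + TIP5P 
here is exactly what is in question:

constraints           =  h-bonds
constraint_algorithm  =  lincs
lincs_order           =  4
lincs_iter            =  1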





Re: [gmx-users] OPLS/AA + TIP5P, anybody?

2013-09-27 Thread gigo

Dear Chris,
Thank you for your reply. I defined a new virtual atomtype (type D) with 
LJ sigma 1.72 A (2*0.7 + 1.72 = 3.12, which is the sigma of the oxygen) and 
played a bit with epsilon until the new LJ repulsion was able to prevent 
the build-up of a huge force on the oxygen while it interacts with 
dimensionless charged hydrogens. I also copied the flexibility 
parameters from tip4p to see if it helps in minimization before I turn 
it back into rigid water - it seems that it does. I was able to minimize 
the system with such water. I also minimized the system with tip4p and 
replaced it with tip5p using a script; trying to minimize the resulting 
system with true tip5p afterwards did not work. My question is: besides 
the correctness of the water model, why do you think it is safe to remove 
the LJ on the lone electron pairs for the MD? Will it not collapse like in 
energy minimization?

Best Regards,
Grzegorz
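
For reference, the kind of change discussed above (LJ on the TIP5P 
lone-pair type opls_120, or equivalently a new dummy type) would sit in 
the [ atomtypes ] table of the OPLS-AA ffnonbonded.itp. A rough, untested 
sketch follows, with sigma converted to nm and a purely illustrative 
epsilon (the value actually used above was tuned by hand); check the 
exact column layout against your local force-field files:

[ atomtypes ]
; name      bond_type   mass      charge   ptype   sigma         epsilon
; stock TIP5P lone-pair site, no LJ:
; opls_120  LP          0.00000   0.000    D       0.00000e+00   0.00000e+00
; temporary LJ for minimization only (2*0.07 nm + 0.172 nm = 0.312 nm, roughly sigma of OW):
  opls_120  LP          0.00000   0.000    D       1.72000e-01   1.00000e-02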

On 2013-09-27 05:58, Christopher Neale wrote:

Dear Gigo:

I've never used tip5p, but perhaps you could add some LJ terms to the
opls_120 definition,
do your minimization, then remove the fake LJ term on opls_120 and run 
your MD?


If that doesn't work, then you might be able to minimize your system
using FLEXIBLE tip3p
water and then use a script to convert the tip3p into tip5p. I expect
that you can set 0,0,0
coordinates for each of the tip5p dummy atoms and that they will get
correctly positioned
in your first mdrun step with tip5p.

Chris.

-- original message --

Dear Mark,
Thank you for your reply. Unfortunately, TIP5P is completely rigid and
the FLEXIBLE define will not change it. Any other ideas?
Best,
g

On 2013-09-24 23:51, Mark Abraham wrote:

You should be able to minimize with CG and TIP5P by eliminating
constraints, by making the water use a flexible molecule, e.g. define
= -DFLEXIBLE (or something). Check your water .itp file for how to do
it.

Mark



[gmx-users] OPLS/AA + TIP5P, anybody?

2013-09-24 Thread gigo

Dear GMXers,
Since I am interested in interactions of the lone electron pairs of the 
water oxygen within the active site of an enzyme that I work on, I decided 
to give TIP5P a shot. I use OPLSAA. I ran into trouble very fast when 
trying to minimize the freshly solvated system. I found on gmx-users 
(http://lists.gromacs.org/pipermail/gmx-users/2008-March/032732.html) 
that cg and constraints don't go together when TIP5P is to be used - 
that's OK. It turned out, however, that I was not able to minimize my 
protein even with steepest descent. The system minimizes with TIP4P 
pretty well (emtol=1.0). In the meantime I tried to minimize a short 
peptide (10 aa), which did not work either. What happens? The LP of a 
water keeps getting too close to the positively charged hydrogens (which 
have no VDW radius) on arginine. It looks like this:


Step=  579, Dmax= 8.0e-03 nm, Epot= -1.40714e+05 Fmax= 1.20925e+04, atom= 171
Step=  580, Dmax= 9.6e-03 nm, Epot= -1.41193e+05 Fmax= 8.13923e+04, atom= 171
Step=  581, Dmax= 1.1e-02 nm, Epot= -1.43034e+05 Fmax= 1.03648e+06, atom= 11181
Step=  585, Dmax= 1.7e-03 nm, Epot= -1.46878e+05 Fmax= 4.23958e+06, atom= 11181
Step=  587, Dmax= 1.0e-03 nm, Epot= -1.49565e+05 Fmax= 9.43285e+06, atom= 11181
Step=  589, Dmax= 6.2e-04 nm, Epot= -1.59042e+05 Fmax= 3.55920e+07, atom= 11181
Step=  591, Dmax= 3.7e-04 nm, Epot= -1.69054e+05 Fmax= 7.79944e+07, atom= 11181
Step=  593, Dmax= 2.2e-04 nm, Epot= -1.85575e+05 Fmax= 2.27640e+08, atom= 11181
Step=  595, Dmax= 1.3e-04 nm, Epot= -2.35034e+05 Fmax= 5.88938e+08, atom= 17181
Step=  597, Dmax= 8.0e-05 nm, Epot= -2.39154e+05 Fmax= 1.22615e+09, atom= 11181
Step=  598, Dmax= 9.6e-05 nm, Epot= -2.67157e+05 Fmax= 1.96782e+09, atom= 11181
Step=  600, Dmax= 5.8e-05 nm, Epot= -4.37260e+05 Fmax= 1.08988e+10, atom= 11181
Step=  602, Dmax= 3.5e-05 nm, Epot= -4.65654e+05 Fmax= 1.29609e+10, atom= 11181
Step=  604, Dmax= 2.1e-05 nm, Epot= -1.17945e+06 Fmax= 1.31028e+11, atom= 11181
Step=  607, Dmax= 6.3e-06 nm, Epot= -3.07551e+06 Fmax= 6.04297e+11, atom= 11181
Step=  610, Dmax= 1.9e-06 nm, Epot= -4.26709e+06 Fmax= 1.61390e+12, atom= 11181
Step=  611, Dmax= 2.3e-06 nm, Epot= -4.39724e+06 Fmax= 2.14416e+12, atom= 11181
Step=  613, Dmax= 1.4e-06 nm, Epot= -1.27489e+07 Fmax= 1.03223e+13, atom= 17181
Step=  614, Dmax= 1.6e-06 nm, Epot= -5.23118e+06 Fmax= 3.18465e+12, atom= 11181

Energy minimization has stopped, but the forces have not converged to the
(...)

In this example atom 171 is HH21 of ARG, and 11181 is the oxygen of the 
water that got close to this ARG. Sometimes the Epot turns to nan at the 
end. If you would like to reproduce this, I put the peptide.pdb, the mdp 
file and the running script at http://shroom.ibb.waw.pl/tip5p . If anybody 
has any suggestions on how to minimize (deeply) with OPLSAA + TIP5P in 
gromacs (4.6.3 preferably...) without constraining bond lengths (which is 
also problematic), I will be very grateful.

Best,

Grzegorz Wieczorek


Re: [gmx-users] OPLS/AA + TIP5P, anybody?

2013-09-24 Thread gigo

Dear Mark,
Thank you for your reply. Unfortunately, TIP5P is completely rigid and 
the FLEXIBLE define will not change it. Any other ideas?

Best,
g

On 2013-09-24 23:51, Mark Abraham wrote:

You should be able to minimize with CG and TIP5P by eliminating
constraints, by making the water use a flexible molecule, e.g. define
= -DFLEXIBLE (or something). Check your water .itp file for how to do
it.

Mark



Re: [gmx-users] Problems with REMD in Gromacs 4.6.3

2013-07-19 Thread gigo

Hi!

On 2013-07-17 21:08, Mark Abraham wrote:

You tried ppn3 (with and without --loadbalance)?


I was testing with an 8-replica simulation.

1) Without --loadbalance and -np 8.
Excerpts from the script:
#PBS -l nodes=8:ppn=3
setenv OMP_NUM_THREADS 4
mpiexec mdrun_mpi -v -cpt 20 -multi 8 -ntomp 4 -replex 2500 -cpi -pin on


Excerpts from logs:
Using 3 MPI processes
Using 4 OpenMP threads per MPI process
(...)
Overriding thread affinity set outside mdrun_mpi

Pinning threads with an auto-selected logical core stride of 1

WARNING: In MPI process #0: Affinity setting for 1/4 threads failed.
         This can cause performance degradation! If you think your setting are
         correct, contact the GROMACS developers.


WARNING: In MPI process #2: Affinity setting for 4/4 threads failed.

Load: the job was allocated 24 cores (3 cores on each of 8 different 
nodes). Each OpenMP thread uses ~1/3 of a CPU core on average.
Conclusions: MPI runs as many processes as cores requested (nnodes*ppn=24) 
and ignores the OMP_NUM_THREADS env variable == this is wrong, and it is 
not a Gromacs issue. Each MPI process forks 4 threads as requested. The 
24-core limit granted by Torque is not violated.


2) The same script, but with -np 8, to limit the number of MPI 
processes to the number of replicas

Logs:
Using 1 MPI process
Using 4 OpenMP threads
(...)

Replicas 0,3 and 6: WARNING: Affinity setting for 1/4 threads failed.
Replicas 1,2,4,5,7: WARNING: Affinity setting for 4/4 threads failed.


Load: the job was allocated 24 cores on 8 nodes, but mpiexec was only run 
on the first 3 nodes. Each OpenMP thread uses ~20% of a CPU core.


3) -np 8 --loadbalance
Excerpts from logs:
Using 1 MPI process
Using 4 OpenMP threads
(...)
Each replica says: WARNING: Affinity setting for 3/4 threads failed.

Load: MPI processes spread evenly on all 8 nodes. Each OpenMP thread 
uses ~50% of a CPU core.


4) -np 8 --loadbalance, #PBS -l nodes=8:ppn=4 == this worked ~OK with 
gromacs 4.6.2

Logs:
WARNING: Affinity setting for 2/4 threads failed.

Load: 32 cores allocated on 8 nodes. MPI processes spread evenly, each 
OpenMP thread uses ~70% of a CPU core.
With 144 replicas the simulation did not produce any results, just got 
stuck.



Some thoughts: the main problem is most probably in the way MPI 
interprets the information from Torque; it is not Gromacs related. MPI 
ignores OMP_NUM_THREADS. The environment is just broken. Since 
gromacs-4.6.2 behaved better than 4.6.3 there, I am going back to it.

Best,
G
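
For anyone trying to reproduce the intended layout (3 ranks per 12-core 
node, 4 OpenMP threads each) under Torque plus OpenMPI 1.6, an untested 
sketch is below; whether -npernode and the forwarded OMP_NUM_THREADS are 
honoured on a given cluster is exactly the open question in this thread:

#!/bin/tcsh -f
#PBS -S /bin/tcsh
#PBS -N remd-hybrid
#PBS -l nodes=48:ppn=12
#PBS -l walltime=300:00:00
cd $PBS_O_WORKDIR
setenv OMP_NUM_THREADS 4
# 144 replicas at 3 ranks per node; -x forwards the thread count to each rank
mpiexec -np 144 -npernode 3 -x OMP_NUM_THREADS \
    mdrun_mpi -v -cpt 20 -multi 144 -ntomp 4 -replex 2000 -cpi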




Re: [gmx-users] Problems with REMD in Gromacs 4.6.3

2013-07-17 Thread gigo

On 2013-07-13 11:10, Mark Abraham wrote:

On Sat, Jul 13, 2013 at 1:24 AM, gigo g...@ibb.waw.pl wrote:

On 2013-07-12 20:00, Mark Abraham wrote:


On Fri, Jul 12, 2013 at 4:27 PM, gigo g...@ibb.waw.pl wrote:


Hi!

On 2013-07-12 11:15, Mark Abraham wrote:



What does --loadbalance do?




It balances the total number of processes across all allocated 
nodes.



OK, but using it means you are hostage to its assumptions about 
balance.



That's true, but as long as I do not try to use more resources than 
Torque gives me, everything is OK. The question is: what is the proper 
way of running multiple simulations in parallel with MPI, each further 
parallelized with OpenMP, when pinning fails? I could not find any other.


I think pinning fails because you are double-crossing yourself. You do
not want 12 MPI processes per node, and that is likely what ppn is
setting. AFAIK your setup should work, but I haven't tested it.




The thing is that mpiexec does not know that I want each replica to fork 
to 4 OpenMP threads. Thus, without this option and without affinities 
(in a sec about it) mpiexec starts too many replicas on some nodes - 
gromacs complains about the overload then - while some cores on other 
nodes are not used. It is possible to run my simulation like that:

mpiexec mdrun_mpi -v -cpt 20 -multi 144 -replex 2000 -cpi (without
--loadbalance for mpiexec and without -ntomp for mdrun)

Then each replica runs on 4 MPI processes (I allocate 4 times more cores 
than replicas and mdrun sees it). The problem is that it is much slower 
than using OpenMP for each replica. I did not find any other way than
--loadbalance in mpiexec and then -multi 144 -ntomp 4 in mdrun to use MPI
and OpenMP at the same time on the torque-controlled cluster.



That seems highly surprising. I have not yet encountered a job
scheduler that was completely lacking a do what I tell you layout
scheme. More importantly, why are you using #PBS -l nodes=48:ppn=12?



I think that torque is very similar to all PBS-like resource managers in
this regard. It actually does what I tell it to do. There are 12-core
nodes, I ask for 48 of them - I get them (a simple #PBS -l ncpus=576 does
not work), end of story. Now, the program that I run is responsible for
populating the resources that I got.


No, that's not the end of the story. The scheduler and the MPI system
typically cooperate to populate the MPI processes on the hardware, set
OMP_NUM_THREADS, set affinities, etc. mdrun honours those if they are
set.


I was able to run what I wanted flawlessly on another cluster with 
PBS-Pro. The torque cluster seems to work as I said (the "end of story" 
behaviour). REMD runs well on torque when I give a whole physical node 
to one replica. Otherwise the simulation does not go, or the pinning 
fails (sometimes partially). I have run out of options; I did not find any 
working example or documentation on running hybrid MPI/OpenMP jobs in 
torque. It seems that I stumbled upon limitations of this resource 
manager, and it is not really a Gromacs issue.

Best Regards,
Grzegorz



You seem to be using 12 because you know there are 12 cores per node.
The scheduler should know that already. ppn should be a command about
what to do with the hardware, not a description of what it is. More to
the point, you should read the docs and be sure what it does.


Surely you want 3 MPI processes per 12-core node?



Yes - I want each node to run 3 MPI processes. Preferably, I would like 
to run each MPI process on a separate node (spread over 12 cores with 
OpenMP), but I will not get that many resources. But again, without the 
--loadbalance hack I would not be able to properly populate the nodes...


So try ppn 3!




What do the .log files say about
OMP_NUM_THREADS, thread affinities, pinning, etc?




Each replica logs:
Using 1 MPI process
Using 4 OpenMP threads
That is correct. As I said, the threads are forked, but 3 out of 4 don't 
do anything, and the simulation does not go at all.

About affinities Gromacs says:
Can not set thread affinities on the current platform. On NUMA systems 
this can cause performance degradation. If you think your platform should 
support setting affinities, contact the GROMACS developers.

Well, the current platform is a normal x86_64 cluster, but the whole 
information about resources is passed by Torque to OpenMPI-linked 
Gromacs. Can it be that mdrun sees the resources allocated by torque as 
a big pool of cpus and misses the information about node topology?



mdrun gets its processor topology from the MPI layer, so that is 
where
you need to focus. The error message confirms that GROMACS sees 
things

that seem wrong.



Thank you, I will take a look. But the first thing I want to do is to 
find the reason why Gromacs 4.6.3 is not able to run on my (slightly 
weird, I admit) setup, while 4.6.2 does it very well.


4.6.2 had a bug that inhibited any MPI-based mdrun from attempting to
set affinities. It's still not clear

Re: [gmx-users] Problems with REMD in Gromacs 4.6.3

2013-07-12 Thread gigo

Hi!

On 2013-07-12 11:15, Mark Abraham wrote:

What does --loadbalance do?


It balances the total number of processes across all allocated nodes. 
The thing is that mpiexec does not know that I want each replica to fork 
to 4 OpenMP threads. Thus, without this option and without affinities 
(in a sec about it) mpiexec starts too many replicas on some nodes - 
gromacs complains about the overload then - while some cores on other 
nodes are not used. It is possible to run my simulation like that:


mpiexec mdrun_mpi -v -cpt 20 -multi 144 -replex 2000 -cpi (without 
--loadbalance for mpiexec and without -ntomp for mdrun)


Then each replica runs on 4 MPI processes (I allocate 4 times more 
cores than replicas and mdrun sees it). The problem is that it is much 
slower than using OpenMP for each replica. I did not find any other way 
than --loadbalance in mpiexec and then -multi 144 -ntomp 4 in mdrun to 
use MPI and OpenMP at the same time on the torque-controlled cluster.



What do the .log files say about
OMP_NUM_THREADS, thread affinities, pinning, etc?


Each replica logs:
Using 1 MPI process
Using 4 OpenMP threads,
That is correct. As I said, the threads are forked, but 3 out of 4 
don't do anything, and the simulation does not go at all.


About affinities Gromacs says:
Can not set thread affinities on the current platform. On NUMA systems 
this
can cause performance degradation. If you think your platform should 
support

setting affinities, contact the GROMACS developers.

Well, the current platform is a normal x86_64 cluster, but the whole 
information about resources is passed by Torque to OpenMPI-linked 
Gromacs. Can it be that mdrun sees the resources allocated by torque as 
a big pool of cpus and misses the information about node topology?


If you have any suggestions how to debug or trace this issue, I would 
be glad to participate.

Best,
G








Mark



Re: [gmx-users] remd

2013-07-12 Thread gigo

Hi!

On 2013-07-12 07:58, Shine A wrote:

Hi Sir,

     Is it possible to run an REMD simulation having 16 replicas in a
cluster (a group of CPUs) having 8 nodes? Here each node has 8 processors.


It is possible. If you have Gromacs (version >= 4.6) compiled with MPI, 
you specify the number of replicas (-multi 16) in the mdrun command, and 
64 processors are allocated by mpirun, then mdrun should start 4 MPI 
processes per replica. It worked for me, at least. With OpenMP 
parallelization it would run faster, but I have some problems with that; 
read the latest posts in "Problems with REMD in Gromacs 4.6.3".
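
A minimal sketch of that invocation (assuming the usual -multi file 
naming, i.e. per-replica inputs topol0.tpr ... topol15.tpr):

mpirun -np 64 mdrun_mpi -v -s topol.tpr -multi 16 -replex 2000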

Best,
G


Re: [gmx-users] Problems with REMD in Gromacs 4.6.3

2013-07-12 Thread gigo

On 2013-07-12 20:00, Mark Abraham wrote:

On Fri, Jul 12, 2013 at 4:27 PM, gigo g...@ibb.waw.pl wrote:

Hi!

On 2013-07-12 11:15, Mark Abraham wrote:


What does --loadbalance do?



It balances the total number of processes across all allocated nodes.


OK, but using it means you are hostage to its assumptions about 
balance.


That's true, but as long as I do not try to use more resources than 
Torque gives me, everything is OK. The question is: what is the proper 
way of running multiple simulations in parallel with MPI, each further 
parallelized with OpenMP, when pinning fails? I could not find any other.





The thing is that mpiexec does not know that I want each replica to fork 
to 4 OpenMP threads. Thus, without this option and without affinities 
(in a sec about it) mpiexec starts too many replicas on some nodes - 
gromacs complains about the overload then - while some cores on other 
nodes are not used. It is possible to run my simulation like that:

mpiexec mdrun_mpi -v -cpt 20 -multi 144 -replex 2000 -cpi (without
--loadbalance for mpiexec and without -ntomp for mdrun)

Then each replica runs on 4 MPI processes (I allocate 4 times more cores 
than replicas and mdrun sees it). The problem is that it is much slower 
than using OpenMP for each replica. I did not find any other way than
--loadbalance in mpiexec and then -multi 144 -ntomp 4 in mdrun to use MPI
and OpenMP at the same time on the torque-controlled cluster.


That seems highly surprising. I have not yet encountered a job
scheduler that was completely lacking a do what I tell you layout
scheme. More importantly, why are you using #PBS -l nodes=48:ppn=12?


I think that torque is very similar to all PBS-like resource managers 
in this regard. It actually does what I tell it to do. There are 12-core 
nodes, I ask for 48 of them - I get them (a simple #PBS -l ncpus=576 does 
not work), end of story. Now, the program that I run is responsible for 
populating the resources that I got.



Surely you want 3 MPI processes per 12-core node?


Yes - I want each node to run 3 MPI processes. Preferably, I would like 
to run each MPI process on a separate node (spread over 12 cores with 
OpenMP), but I will not get that many resources. But again, without the 
--loadbalance hack I would not be able to properly populate the nodes...





What do the .log files say about
OMP_NUM_THREADS, thread affinities, pinning, etc?



Each replica logs:
Using 1 MPI process
Using 4 OpenMP threads
That is correct. As I said, the threads are forked, but 3 out of 4 don't 
do anything, and the simulation does not go at all.

About affinities Gromacs says:
Can not set thread affinities on the current platform. On NUMA systems 
this can cause performance degradation. If you think your platform should 
support setting affinities, contact the GROMACS developers.

Well, the current platform is a normal x86_64 cluster, but the whole 
information about resources is passed by Torque to OpenMPI-linked 
Gromacs. Can it be that mdrun sees the resources allocated by torque as 
a big pool of cpus and misses the information about node topology?


mdrun gets its processor topology from the MPI layer, so that is where
you need to focus. The error message confirms that GROMACS sees things
that seem wrong.


Thank you, I will take a look. But the first thing I want to do is to 
find the reason why Gromacs 4.6.3 is not able to run on my (slightly 
weird, I admit) setup, while 4.6.2 does it very well.

Best,

Grzegorz


[gmx-users] Problems with REMD in Gromacs 4.6.3

2013-07-11 Thread gigo

Dear GMXers,
With Gromacs 4.6.2 I was running REMD with 144 replicas. The replicas were 
separate MPI jobs, of course (OpenMPI 1.6.4). Each replica I ran on 4 
cores with OpenMP. Torque is installed on the cluster, which is built of 
12-core nodes, so I used the following script:


#!/bin/tcsh -f
#PBS -S /bin/tcsh
#PBS -N test
#PBS -l nodes=48:ppn=12
#PBS -l walltime=300:00:00
#PBS -l mem=288Gb
#PBS -r n
cd $PBS_O_WORKDIR
mpiexec -np 144 --loadbalance mdrun_mpi -v -cpt 20 -multi 144 -ntomp 4 
-replex 2000


It was working just great with 4.6.2. It does not work with 4.6.3. The 
new version was compiled with the same options in the same environment. 
Mpiexec spreads the replicas evenly over the cluster. Each replica forks 
4 threads, but only one of them uses any cpu. The logs end right after the 
citations. Some empty energy and trajectory files are created, but nothing 
is written to them.
Please let me know if you have any immediate suggestion on how to make 
it work (maybe based on some differences between the versions), or 
whether I should file a bug report with all the technical details.

Best Regards,

Grzegorz Wieczorek

P.S. I'm sending this message for the 3rd time - it did not appear on 
the list the last 2 times. Just in case - sorry for the spam.



[gmx-users] Problem with running REMD in Gromacs 4.6.3

2013-07-09 Thread gigo

Dear GMXers,
With Gromacs 4.6.2 I was running REMD with 144 replicas. The replicas were 
separate MPI jobs, of course (OpenMPI 1.6.4). Each replica I ran on 4 
cores with OpenMP. Torque is installed on the cluster, which is built of 
12-core nodes, so I used the following script:


#!/bin/tcsh -f
#PBS -S /bin/tcsh
#PBS -N test
#PBS -l nodes=48:ppn=12
#PBS -l walltime=300:00:00
#PBS -l mem=288Gb
#PBS -r n
cd $PBS_O_WORKDIR
mpiexec -np 144 --loadbalance mdrun_mpi -v -cpt 20 -multi 144 -ntomp 4 
-replex 2000


It was working just great with 4.6.2. It does not work with 4.6.3. The 
new version was compiled with the same options in the same environment. 
Mpiexec spreads the replicas evenly over the cluster. Each replica forks 
4 threads, but only one of them uses any cpu. The logs end right after the 
citations. Some empty energy and trajectory files are created, but nothing 
is written to them.
Please let me know if you have any immediate suggestion on how to make 
it work (maybe based on some differences between the versions), or 
whether I should file a bug report with all the technical details.

Best Regards,

Grzegorz Wieczorek



Re: [gmx-users] Nucleic Acid Simulations with Gromacs

2007-10-02 Thread gigo

Hi,
On the gromacs webpage, under user contributions -> topologies, you have 
(at least) 2 force fields to download that allow you to simulate NA. The 
first is the OPLS NA records from the rnp-group 
(http://rnp-group.genebee.msu.su/3d/oplsa_ff.html). It is for gromacs 
3.2.1, so minor manual adjustments for 3.3.1 are required. The second is 
the AMBER ff variants from Stanford (http://folding.stanford.edu/ffamber/).

Good Luck


Grzegorz Wieczorek
Department of Bioinformatics
Institute of Biochemistry and Biophysics
Polish Academy of Sciences
ul. Pawinskiego 5a
02-106 Warszawa, Poland

On Tue, 2 Oct 2007, Monika Sharma wrote:


Dear All,
I want to start nucleic acid simulations. I am using gromacs 3.3.1, but I 
could not find any mention of nucleic acids in any of the force fields 
provided with the gromacs distro. So does it mean that one _can not_ 
simulate nucleic acids with gromacs? Has anyone tried? And could someone 
guide me through it?

Thanks in advance
Regards,
Monika




Re: [gmx-users] openmpi

2006-09-17 Thread gigo

 Hi,
I'm using openmpi on our 24-node cluster (2 cores each) without any 
problems so far. I run my jobs under torque and I did not change any of 
the default settings. With my system it scales rather well up to 4 nodes, 
but I have no problems running on more.


Grzegorz Wieczorek
Department of Bioinformatics
Institute of Biochemistry and Biophysics
Polish Academy of Sciences
ul. Pawinskiego 5a
02-106 Warszawa, Poland

On Thu, 14 Sep 2006, [EMAIL PROTECTED] wrote:


Anyone using openmpi for parallel gromacs? If so, how to set the maximum short
tcp length? I have tried some things unsuccessfully which are posted at the open
mpi site:
http://www.open-mpi.org/community/lists/users/2006/09/1864.php