[gmx-users] Gibbs Energy Calculation and charges

2013-11-01 Thread Christopher Neale
Dear Dallas:

Seems like you could test Michael's idea by removing all 1-4 NB interactions 
from your topology. It won't produce any biologically useful results, but might 
be a worthwhile check to see if indeed this is the issue.

To do this, I figure you would set gen-pairs to "no" in the [ defaults ] 
directive of forcefield.itp, remove the [ pairtypes ] section from 
ffnonbonded.itp, and remove the [ pairs ] section from your molecular .itp 
file. (You can quickly check that the 1-4 energy is zero in all states to 
ensure that this works).
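A sketch of what those three edits might look like (the [ defaults ] numbers here are generic placeholders, not taken from your actual force field; only the gen-pairs column matters for this test):

```
; forcefield.itp -- [ defaults ]: set gen-pairs to "no"
[ defaults ]
; nbfunc  comb-rule  gen-pairs  fudgeLJ  fudgeQQ
  1       2          no         1.0      1.0

; ffnonbonded.itp -- delete the entire [ pairtypes ] section

; your molecular .itp -- delete the entire [ pairs ] section
```

After grompp and a short mdrun, g_energy on the resulting .edr should show LJ-14 and Coulomb-14 equal to zero in every state.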

If that gives you the result that you expect, then you could go on to 
explicitly state the 1-4 interactions for the A and B states (I presume that 
this is possible). Of course, you should be able to jump directly to this 
second test, but the first test might be useful because it rules out the 
possibility that you made a typo somewhere.

Chris.

-- original message --

I think the grammar got a little garbled there, so I'm not sure quite
what you are claiming.

One important thing to remember: 1-4 interactions are currently treated as
bonded interactions FOR couple-intramol purposes (not for the lambda
dependence of the potential energy function), so whether
couple-intramol is set to yes or no does not affect these interactions
at all. It only affects the nonbonded interactions between atoms separated
by more than three bonds (1-5 and beyond). At least to me, this is
nonintuitive (and we're coming up with a
better scheme for 5.0), but might that explain what you are getting?

On Tue, Oct 29, 2013 at 9:44 PM, Dallas Warren  
wrote:
> Just want this to make another pass, just in case those in the know missed it.
>
> Using couple-intramol = yes, the resulting dH/dl plot at lambda = 1 actually
> looks like it is equal to couple-intramol = no at lambda = 0.
>
> Should that be the case?
>
> Catch ya,
>
> Dr. Dallas Warren
--
gmx-users mailing list    gmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


[gmx-users] 2D umbrella sampling simulation

2013-10-25 Thread Christopher Neale
Unfortunately not. It will be done eventually, though; to follow the progress of 
that feature, see: http://redmine.gromacs.org/issues/1346

If you need this before then, there is a patch that I posted on that same redmine 
page (posts 8 and 9) that allows this in gromacs 4.0.5.

Chris.

-- original message --

Thanks for your check. Yes, it should be pull_group2 for C. In this case, 
the two distances share a common reference group, B. What about the case in which no 
common reference group is shared between the two distances, e.g. distance1 is 
defined as A-B and distance2 as C-D? Can the current gromacs code perform 2D 
umbrella sampling for this problem?

Many thanks.

Mingjun 


[gmx-users] 2D umbrella sampling simulation

2013-10-24 Thread Christopher Neale
Your restraint involving group C should use pull_group2, etc, not another copy 
of pull_group1. Other than that,
it looks like a valid approach.
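For concreteness, here is a sketch of the corrected fragment (parameter values copied from the posted .mdp; the only substantive change is that the second restraint moves to the *2 keywords):

```
; Pull code
pull            = umbrella
pull_geometry   = distance
pull_dim        = Y Y Y
pull_start      = no
pull_ngroups    = 2
pull_group0     = B     ; common reference group
pull_group1     = A     ; restraint of distance between A and B
pull_init1      = 2.7   ; equilibrium distance of A-B
pull_rate1      = 0.0
pull_k1         = 2000  ; kJ mol^-1 nm^-2
pull_group2     = C     ; restraint of distance between C and B
pull_init2      = 3.7   ; equilibrium distance of C-B
pull_rate2      = 0.0
pull_k2         = 2500  ; kJ mol^-1 nm^-2
pull_nstxout    = 1000  ; every 2 ps
pull_nstfout    = 1000  ; every 2 ps
```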

Chris.

-- original message --

I am going to perform two-dimensional umbrella sampling using a pair of 
distances (the distance between atoms A and B, and between C and B) as the 
restraints. Could someone please tell me whether I can use the current pull 
code as follows?
-
; Pull code
pull= umbrella
pull_geometry   = distance
pull_dim= Y Y Y
pull_start  = no
pull_ngroups= 2
pull_group0 = B
pull_group1 = A ;restraint of distance btw A and B
pull_init1  = 2.7 ;equilibrium distance of A-B
pull_rate1  = 0.0
pull_k1 = 2000  ; kJ mol^-1 nm^-2

pull_group1 = C ;restraint of distance btw C and B
pull_init1  = 3.7 ;equilibrium distance of C-B
pull_rate1  = 0.0
pull_k1 = 2500  ; kJ mol^-1 nm^-2

pull_nstxout= 1000  ; every 2 ps
pull_nstfout= 1000  ; every 2 ps
- 

Many thanks.


[gmx-users] default -rdd with distance restraints seems too large

2013-10-17 Thread Christopher Neale
Thanks for the hint XAvier.

Unfortunately, I get crashes with particle decomposition (see below). With 
either DD or PD, I can run on up to 2 threads without adjusting -rdd or -dds. 
I can only use more than 2 threads with DD if I set -rdd 2.8. If I try to use 
more than 2 threads with PD, I get LINCS problems and immediate crashes. If I 
export GMX_MAXCONSTRWARN=-1 with the same setup, then I get an immediate 
segfault. Note, however, that if I use constraints=none and set my timestep to 
0.5 fs, I can indeed use PD with 8 threads (without exporting 
GMX_MAXCONSTRWARN). Also note that I am using the SD integrator, but I just 
tested and PD also gives me an error with the md integrator.
(I don't think that the crashes have anything to do with improper setup. These 
runs are fine in serial or in parallel as described above, and only ever 
"explode"/crash with PD and more than 2 threads, in which case they do so 
immediately.)

Here is the error that I get when I run with PD and more than 2 threads:

Step 0, time 0 (ps)  LINCS WARNING
relative constraint deviation after LINCS: 
rms 218.843810, max 8135.581543 (between atoms 15623 and 15624)
bonds that rotated more than 30 degrees: 
 atom 1 atom 2  angle  previous, current, constraint length
  13908  13916   90.8    0.2130   0.8066  0.1538
  13916  13917   90.3    0.2402   0.7979  0.
  13916  13918   90.2    0.2403   0.8452  0.
  13916  13919   89.3    1.3408   0.9956  0.1430
...
...
  22587  22589   31.7    0.4648   0.1144  0.
  22587  22590   90.2    0.4168   0.1273  0.

Wrote pdb files with previous and current coordinates
starting mdrun 'Gallium Rubidium Oxygen Manganese Argon Carbon Silicon in water'
500 steps,  1.0 ps.

WARNING: Listed nonbonded interaction between particles 13908 and 13920
at distance 3f which is larger than the table limit 3f nm.

This is likely either a 1,4 interaction, or a listed interaction inside
a smaller molecule you are decoupling during a free energy calculation.
Since interactions at distances beyond the table cannot be computed,
they are skipped until they are inside the table limit again. You will
only see this message once, even if it occurs for several interactions.

IMPORTANT: This should not happen in a stable simulation, so there is
probably something wrong with your system. Only change the table-extension
distance in the mdp file if you are really sure that is the reason.



step 0: Water molecule starting at atom 39302 can not be settled.
Check for bad contacts and/or reduce the timestep if appropriate.

step 0: Water molecule starting at atom 53072 can not be settled.
Check for bad contacts and/or reduce the timestep if appropriate.

Step 0, time 0 (ps)  LINCS WARNING
relative constraint deviation after LINCS:
rms 2569455308.447471, max 215672291328.00 (between atoms 14054 and 14055)
bonds that rotated more than 30 degrees:
 atom 1 atom 2  angle  previous, current, constraint length
  13442  13444   90.0    0.       0.1147  0.
  13503  13506   40.8    0.1538   0.2002  0.1538
  13506  13507   45.2    0.       0.1541  0.
...
...
  19020  19023   89.8    0.1534 66420.9531  0.1530


###

mdp options:

;;dispcorr = EnerPres  ;; not using for CHARMM simulations
constraints = all-bonds
lincs-iter =  1
lincs-order =  6
constraint_algorithm =  lincs
integrator = sd
dt = 0.002
tinit = 0
nsteps = 500
nstcomm = 1
nstxout = 500
nstvout = 500
nstfout = 500
nstxtcout = 500
nstenergy = 500
nstlist = 10
nstlog=0 ; reduce log file size
ns_type = grid
vdwtype = switch
rlist = 1.2
rlistlong = 1.3
rvdw = 1.2
rvdw-switch = 0.8
rcoulomb = 1.2
coulombtype = PME
ewald-rtol = 1e-5
optimize_fft = yes
fourierspacing = 0.12
fourier_nx = 0
fourier_ny = 0
fourier_nz = 0
pme_order = 4
tc_grps =  System   
tau_t   =  1.0   
ld_seed =  -1   
ref_t = 310
gen_temp = 310
gen_vel = yes
unconstrained_start = no
gen_seed = -1
Pcoupl = berendsen
pcoupltype = semiisotropic
tau_p = 4 4
compressibility = 4.5e-5 4.5e-5
ref_p = 1.0 1.0
disre = simple
disre-fc = 5


-- original message --

Yes it is a pity!

But particle decomposition helps :)) well helped!



[gmx-users] default -rdd with distance restraints seems too large

2013-10-17 Thread Christopher Neale
Indeed, sorry that I didn't notice that, Mark. It looks as if the two-body 
bonded interaction gets multiplied by 1.1/0.8, so I suppose that this is 
working as it should.

It's a shame that long distance restraints limit the parallelization so much, 
but it is understandable. Thanks for helping me with this.

Chris.

-- original message --

Initializing Domain Decomposition on 8 nodes
Dynamic load balancing: auto
Will sort the charge groups at every domain (re)decomposition
Initial maximum inter charge-group distances:
two-body bonded interactions: 2.636 nm, Dis. Rest., atoms 1701 4425
  multi-body bonded interactions: 0.479 nm, CMAP Dih., atoms 1062 1081
Minimum cell size due to bonded interactions: 2.899 nm
Maximum distance for 7 constraints, at 120 deg. angles, all-trans: 1.172 nm
Estimated maximum distance required for P-LINCS: 1.172 nm
Using 0 separate PME nodes, as there are too few total
 nodes for efficient splitting
Scaling the initial minimum size with 1/0.8 (option -dds) = 1.25
Optimizing the DD grid for 8 cells with a minimum initial size of 3.624 nm
The maximum allowed number of cells is: X 1 Y 1 Z 2
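The numbers in that log are mutually consistent under a simple reading (an assumption inferred from the output, not read from the GROMACS source): the largest two-body bonded distance picks up a 1.1 safety margin, and the result is divided by the -dds factor of 0.8:

```python
# Reconstructing the minimum-cell-size arithmetic from the log above.
# The 1.1 margin and the 0.8 -dds default are inferred from the output.
max_two_body = 2.636   # nm, "two-body bonded interactions" (Dis. Rest.)
bonded_margin = 1.1    # safety factor applied to bonded distances
dds = 0.8              # default -dds cell-size scaling

min_cell_bonded = max_two_body * bonded_margin  # ~2.899 nm in the log
min_initial_size = min_cell_bonded / dds        # ~3.624 nm in the log
```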



[gmx-users] default -rdd with distance restraints seems too large

2013-10-16 Thread Christopher Neale
I have a system that also uses a set of distance restraints.

The box size is:
   7.12792   7.12792  10.25212

When running mdrun -nt 8, I get:

Fatal error:
There is no domain decomposition for 8 nodes that is compatible with the given 
box and a minimum cell size of 3.62419 nm

However, the largest restrained distance is 2.0 nm and the largest displacement 
between restrained atoms is 2.63577 nm.

So why does mdrun set -rdd to 3.62419 nm?

If I run mdrun -rdd 2.8 everything works fine.

Thank you,
Chris.



[gmx-users] Gibbs Energy Calculation and charges

2013-10-16 Thread Christopher Neale
Ah, I see. I guess that you are using couple-intramol = no (the default in 
v4.6.3 at least). That means
that the intramolecular charge-charge interactions are always at full-strength 
(and therefore different).

I would expect that normal at lambda=0 should be the same as double at 
lambda=0.5 only for
couple-intramol = yes

If you were using couple-intramol = yes already, then I am as confused as you 
are.
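As a bookkeeping check on the table in the quoted message below (assuming a simple linear decoupling q(lambda) = (1 - lambda) * q_full, which is what those numbers imply), the instantaneous charges do coincide; whether the dH/dl values coincide is exactly the point under discussion:

```python
# OE charges from Dallas's table; the linear-scaling form is an assumption.
q_normal = -0.5310
q_double = 2.0 * q_normal  # -1.0620

def q_at(lam, q_full):
    """Charge at coupling parameter lam under linear decoupling."""
    return (1.0 - lam) * q_full

# The normal topology at lambda = 0 carries the same instantaneous charge
# as the double topology at lambda = 0.5.
assert q_at(0.0, q_normal) == q_at(0.5, q_double)
```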

Chris.

-- original message --

You are correct in the first part of your statement; the part that isn't 
correct is the expectation: for the same charge on the atom, I expect it to 
give the same dH/dl value.

For example, for the OE atom that I provided the graphs for ( 
http://ozreef.org/stuff/octanol.gif and http://ozreef.org/stuff/water.gif )

Lambda   Normal    Double
0.0000   -0.5310   -1.0620
0.5000             -0.5310

Therefore, the normal-charge molecule at lambda = 0 should be identical to 
the double-charge one at lambda = 0.5. They should be the same in all 
respects: LJ, bonds, charges, etc.

So would I not expect to get the same dH/dl value out?

Catch ya,


[gmx-users] Gibbs Energy Calculation and charges

2013-10-16 Thread Christopher Neale
Dear Dallas:

Am I correct that you are saying that, for both the regular-charge and 
double-charge solute molecules, you decoupled the solvent-solute charge-charge 
interactions and expected that the dH/dl and overall free-energy values of the 
double-charge solute would be exactly two times the respective values of the 
regular-charge system?

For that to be true, I believe it would also have to be true that doubling the 
charges
has no bearing on the conformations that are sampled (including the solvent), 
which is 
certainly incorrect.

To clarify my point, I believe that if you used the conformations from the 
regular-charge 
simulations in an mdrun -rerun using the double-charge topology (assuming that 
such a 
thing is even possible), then I do agree that you would expect exactly double 
the 
magnitudes for the free energy. But if doubling the charge affects solute or 
solvent conformation, then I would expect that the exact relationship is 
generally
impossible to predict a priori.
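That rerun test might look something like the following sketch (hypothetical file names; standard GROMACS 4.x tools, and it assumes the double-charge topology matches the regular-charge trajectory's atom ordering):

```
# build a run input with the double-charge topology
grompp -f md.mdp -p double_charge.top -c conf.gro -o double_rerun.tpr

# re-evaluate energies (and dH/dl) over the regular-charge trajectory
mdrun -s double_rerun.tpr -rerun regular_charge.xtc -deffnm double_rerun
```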

Chris.


-- original message --

Here is the molecule in octanol
http://ozreef.org/stuff/octanol.gif

And here in water
http://ozreef.org/stuff/water.gif

Just realised, it is actually quite different in water, consistently.

So the only difference between the two simulations is the charges on the 
molecule have been multiplied by 2.  Same settings, same bonded and LJ topology 
etc.

Catch ya,

Dr. Dallas Warren
Drug Delivery, Disposition and Dynamics
Monash Institute of Pharmaceutical Sciences, Monash University
381 Royal Parade, Parkville VIC 3052
dallas.warren at monash.edu
+61 3 9903 9304
-
When the only tool you own is a hammer, every problem begins to resemble a 
nail. 

> -Original Message-
> From: gmx-users-bounces at gromacs.org [mailto:gmx-users-
> bounces at gromacs.org] On Behalf Of Dallas Warren
> Sent: Thursday, 17 October 2013 9:06 AM
> To: Discussion list for GROMACS users
> Subject: [gmx-users] Gibbs Energy Calculation and charges
> 
> We have a molecule and have run two sets of Gibbs energy calculations,
> making the charge disappear. One molecule has the normal charges; in the
> other, all the charges are doubled. Taking the dH/dl results for both and
> plotting against the charge of a selected atom (the charge at each
> lambda value), should those results overlap?
> 
> I thought that they would, since only the charge is being changed, and it
> is a linear function of lambda. However, we are getting some deviation
> closer to the lambda=0 values: the normal charge is lower than the
> double-charged molecule around lambda = 0.5.
> 
> Can provide graphs etc when I am back at my workstation.
> 
> Any suggestions on whether they should be overlapping, or what we may be
> doing incorrectly?
> 
> Catch ya,
> 
> Dallas Warren


[gmx-users] Gromacs on Stampede

2013-10-13 Thread Christopher Neale
Why not put it in a SLURM script and submit that script as a (probably 
single-node) job? It is not generally acceptable to use a large fraction of 
the head node of a shared resource for a substantial amount of time.

If your problem is different and of a gromacs nature, you may need to describe 
it better. (I.e., if you're really just saying that you can't use MPI with 
g_hbond, then show us what you did and what happened, and likely somebody will 
be able to answer you. Personally, I don't think any of the analysis tools are 
MPI-enabled, but I could be wrong.)

If your problem is really more about using Stampede, you can get help directly 
by submitting an XSEDE help ticket (portal.xsede.org).

Chris.

-- original message --

Hello,

I have a question about running gromacs utilities on Stampede and hopefully 
someone can point me in the right direction. I compiled gromacs using 
instructions in this thread and mdrun works fine. Also, some utilities like 
g_energy, g_analyze (single - core utilities, I believe) seem to be working 
fine. 

I am interested in computing the lifetime of hydrogen bonds, and this 
calculation is quite expensive. Is there a way to submit this as a job using 
32 or more cores? When I run g_hbond on my workstation (16 cores) it runs on 
16 threads by default. However, I am not sure it is a good idea to run it on 
Stampede without submitting it as a job.

I noticed that g_hbond uses OpenMP, while gromacs was compiled for MPI 
according to these instructions. Just curious if that could be the reason, and 
if there is a suitable workaround for this problem.

As always, help is greatly appreciated. 
Thanks,


[gmx-users] Gromacs on Stampede

2013-10-10 Thread Christopher Neale
Dear Arun:

here is how I compile fftw and gromacs on stampede. 
I have also included a job script and a script to submit a chain of jobs.
As Szilárd notes, this does not use the MICs, but it is still a rather fast 
machine.

# Compilation for single precision gromacs plus mdrun_mpi
#

# Compile fftw on stampede:
cd fftw-3.3.3
mkdir exec
export FFTW_LOCATION=$(pwd)/exec
module purge
module load intel/13.0.2.146
export CC=icc
export CXX=icpc
./configure --enable-float --enable-threads --prefix=${FFTW_LOCATION} 
--enable-sse2
make -j4
make -j4 install
cd ../


# Compile gromacs 4.6.1 on stampede:

cd gromacs-4.6.1
mkdir source
mv * source
mkdir exec
cd exec

module purge
module load intel/13.0.2.146
module load cmake/2.8.9
export FFTW_LOCATION=$(pwd)/../../fftw-3.3.3/exec   # fftw-3.3.3 is a sibling of gromacs-4.6.1
export CXX=icpc
export CC=icc
cmake ../source/ \
  -DCMAKE_PREFIX_PATH=$FFTW_LOCATION \
  -DCMAKE_INSTALL_PREFIX=$(pwd) \
  -DGMX_X11=OFF \
  -DCMAKE_CXX_COMPILER=${CXX} \
  -DCMAKE_C_COMPILER=${CC} \
  -DGMX_PREFER_STATIC_LIBS=ON \
  -DGMX_MPI=OFF
make -j4
make -j4 install

cd ../
mkdir exec2
cd exec2

module purge
module load intel/13.0.2.146
module load cmake/2.8.9
module load mvapich2/1.9a2
export FFTW_LOCATION=$(pwd)/../../fftw-3.3.3/exec   # fftw-3.3.3 is a sibling of gromacs-4.6.1
export CXX=mpicxx
export CC=mpicc
cmake ../source/ \
  -DCMAKE_PREFIX_PATH=$FFTW_LOCATION \
  -DCMAKE_INSTALL_PREFIX=$(pwd) \
  -DGMX_X11=OFF \
  -DCMAKE_CXX_COMPILER=${CXX} \
  -DCMAKE_C_COMPILER=${CC} \
  -DGMX_PREFER_STATIC_LIBS=ON \
  -DGMX_MPI=ON
make -j4 mdrun
make -j4 install-mdrun

cp bin/mdrun_mpi ../exec/bin
cd ../






# Here is a script that you can submit to run gromacs on stampede:
# Set SBATCH -A according to your allocation
# Set SBATCH -N to number of nodes
# Set SBATCH -n to number of nodes x 16 (= number of CPU cores)
# Set PATH and GMXLIB according to your compilation of gromacs
# Remove -notunepme option if you don't mind some of the new optimizations

#!/bin/bash
#SBATCH -J test # Job name
#SBATCH -o myjob.%j.out  # Name of stdout output file (%j expands to jobId)
#SBATCH -p normal   # Queue name
#SBATCH -N 7        # Total number of nodes requested (16 cores/node)
#SBATCH -n 112# Total number of mpi tasks requested
#SBATCH -t 48:00:00 # Run time (hh:mm:ss) 

#SBATCH -A TG-XX  # <-- Allocation name to charge job against

PATH=/home1/02417/cneale/exe/gromacs-4.6.1/exec/bin:$PATH
GMXLIB=/home1/02417/cneale/exe/gromacs-4.6.1/exec/share/gromacs/top


# grompp -f md.mdp -p new.top -c crashframe.gro -o md3.tpr -r restr.gro

ibrun mdrun_mpi -notunepme -deffnm md3 -dlb yes -npme 16 -cpt 60 -cpi md3.cpt \
  -nsteps 50 -maxh 47.9 -noappend

cp md3.cpt backup_md3_$(date|sed "s/ /_/g").cpt


# submit the above script like this:

sbatch script.sh


# or create a chain of jobs like this:

N=8
script=stamp.sh
if [ ! -e last_job_in_chain ]; then
  id=$(sbatch ${script}|tail -n 1 |awk '{print $NF}')
  echo $id > last_job_in_chain
  let "N--"
fi
id=$(cat last_job_in_chain)
for((i=1;i<=N;i++)); do
  id=$(sbatch -d afterany:${id} ${script}|tail -n 1 |awk '{print $NF}')
  echo $id > last_job_in_chain
done



[gmx-users] how to calculate kinetic constant?

2013-10-05 Thread Christopher Neale
Dear Rajat:

I just checked the first two papers that you mentioned and they both get 
kinetics from standard equilibrium simulations. As for the Arrhenius law, with 
k, A, and the energy of activation (Ea) all unknown for each T, how do you 
obtain a unique solution for k given T ? Even if you assume that Ea is some 
function of the maximum of your PMF (which is not always true), I presume that 
you can only then get the relationship between k and A, not the absolute value 
of k, even with information from many temperatures. However, I've never worked 
on this directly. Can you provide a reference so that I can take a look?
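To make the fitting step concrete (entirely made-up numbers; this illustrates only the bookkeeping, not where the rates would come from in practice): if rate constants were somehow known at several temperatures, a linear fit of ln k against 1/T recovers Ea and A, but the PMF alone does not supply k:

```python
import numpy as np

kB = 0.0083144621  # kJ/(mol K)

# Hypothetical rate constants at several temperatures, generated from an
# assumed Arrhenius law so the fit can be checked against known values.
T = np.array([300.0, 320.0, 340.0, 360.0])   # K
A_true, Ea_true = 1.0e9, 50.0                # prefactor and kJ/mol (made up)
k = A_true * np.exp(-Ea_true / (kB * T))

# ln k = ln A - (Ea/kB) * (1/T): the slope and intercept give back Ea and A.
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea_fit = -slope * kB
A_fit = np.exp(intercept)
```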

Thank you,
Chris.

-- original message --

Hi Chris,
I have never done this and I may be missing something. But here is what I
think.
I have seen a few papers use the Arrhenius law, k = A*exp(-deltaG/(kB*T));
the exponent deltaG/(kB*T) can be obtained from the PMF. Now, if you do
this for different temperatures, you can back out the activation energy and
hence the rate constant.
I would love to learn more about this. Any inputs will be welcome.

Regards,


On Sat, Oct 5, 2013 at 11:44 PM, Christopher Neale <
chris.neale at mail.utoronto.ca> wrote:

> If you want K_on and K_off, then I think you need to look at long-time
> equilibrium simulations or massively repeated simulations connected with a
> MSM. Beyond that, I believe that you will need to understand all of the
> important free energy barriers in all degrees of freedom (hard, to say the
> least).
>
> Rajat: how are you going to compute kinetics from a PMF? Barriers in
> orthogonal degrees of freedom don't show up on your PMF but can greatly
> affect the kinetics. Even relatively minor roughness of the
> multidimensional free energy surface and off-pathway kinetic traps are
> going to affect the kinetics but not the PMF. Some people have tried to
> circumvent this limitation by using the PMF in addition to computing the
> local diffusion at each small section of the order parameter (e.g.,
> http://www.nature.com/nnano/journal/v3/n6/full/nnano.2008.130.html ) but
> unless there is excellent sampling overlap and lots of transitions between
> all relevant states, I see this as a way to calculate an upper bound of
> rates that I think could easily be much slower. See, for example,
> http://pubs.acs.org/doi/abs/10.1021/jp045544s . Finally, I am not sure
> how rates can be usefully extracted from a non-equilibrium method like REMD.
>
> Unless I missed it, the paper that David cites:
> http://pubs.acs.org/doi/abs/10.1021/ct400404q doesn't compute kinetics.
>
> Perhaps the OP can provide more information on what they are trying to
> obtain, exactly.
>
> Chris.
>
> -- original message --
>
> If you are looking at binding/unbinding as a function of temperature
> (hopefully with REMD), you can use g_kinetics. If you are looking at
> unbinding/binding events in a single simulation with temperature, etc
> constant (no annealing), you will need to calculate binding probabilities,
> from which you can back out a rate constant. A simple google search gave me
> these papers (http://www.pnas.org/content/90/20/9547.full.pdf,
> http://pubs.acs.org/doi/abs/10.1021/jp037422q)
>
> Of course, the best approach is to calculate the PMF and back out the rate
> constant from the free energy. Hope that helps.
>


[gmx-users] how to calculate kinetic constant?

2013-10-05 Thread Christopher Neale
If you want K_on and K_off, then I think you need to look at long-time 
equilibrium simulations or massively repeated simulations connected with a MSM. 
Beyond that, I believe that you will need to understand all of the important 
free energy barriers in all degrees of freedom (hard, to say the least).

Rajat: how are you going to compute kinetics from a PMF? Barriers in orthogonal 
degrees of freedom don't show up on your PMF but can greatly affect the 
kinetics. Even relatively minor roughness of the multidimensional free energy 
surface and off-pathway kinetic traps are going to affect the kinetics but not 
the PMF. Some people have tried to circumvent this limitation by using the PMF 
in addition to computing the local diffusion at each small section of the order 
parameter (e.g., 
http://www.nature.com/nnano/journal/v3/n6/full/nnano.2008.130.html ) but unless 
there is excellent sampling overlap and lots of transitions between all 
relevant states, I see this as a way to calculate an upper bound of rates that 
I think could easily be much slower. See, for example, 
http://pubs.acs.org/doi/abs/10.1021/jp045544s . Finally, I am not sure how 
rates can be usefully extracted from a non-equilibrium method like REMD.

Unless I missed it, the paper that David cites: 
http://pubs.acs.org/doi/abs/10.1021/ct400404q doesn't compute kinetics. 

Perhaps the OP can provide more information on what they are trying to obtain, 
exactly.

Chris.

-- original message --

If you are looking at binding/unbinding as a function of temperature
(hopefully with REMD), you can use g_kinetics. If you are looking at
unbinding/binding events in a single simulation with temperature, etc
constant (no annealing), you will need to calculate binding probabilities,
from which you can back out a rate constant. A simple google search gave me
these papers (http://www.pnas.org/content/90/20/9547.full.pdf,
http://pubs.acs.org/doi/abs/10.1021/jp037422q)

Of course, the best approach is to calculate the PMF and back out the rate
constant from the free energy. Hope that helps.



[gmx-users] pull-code constant-force direction gives unexpected error about distance larger than box size

2013-10-05 Thread Christopher Neale
Commenting out the gmx_fatal() call at src/mdlib/pull.c line 331 and 
recompiling grompp and mdrun allows the run to proceed. Everything is stable 
for 250 ps. I will report if it fails.

I have posted a redmine at: http://redmine.gromacs.org/issues/1352

Thank you,
Chris.

-- original message --

I am trying to use the pull code to add a constant force in a particular 
direction.
I am getting an error that the initial distance is greater than 1/2 the box 
size.
(error in 4.5.5, 4.6.1, 4.6.3)

I must not understand how to use this. I checked the online .mdp options:
http://manual.gromacs.org/online/mdp_opt.html#pull

which read as if there is no concept of "distance" in this case, just an 
applied force along the specified vector.

One reason I know that I don't fully understand is that, if my interpretation 
of "constant-force, direction"
were correct, I can't see the need for pull_geometry=direction-periodic.

My .mdp options are:
pull= constant-force
pull_geometry   = direction
pull_dim= Y Y Y
pull_nstxout= 500
pull_nstfout= 500
pull_ngroups= 1
pull_group1 = r_1_&_CA
pull_pbcatom1   = 0
pull_k1 = 30
pull_vec1   = -0.848 -0.179 0.497


And when I run:
grompp -p ../this.top -c ../start.gro -n ../index.ndx -f this.mdp

The error message is:
Pull group 1 'r_1_&_CA' has 1 atoms
Number of degrees of freedom in T-Coupling group System is 113574.00

WARNING 1 [file this.mdp]:
  You are using an absolute reference for pulling, but the rest of the
  system does not have an absolute reference. This will lead to artifacts.

Largest charge group radii for Van der Waals: 0.040, 0.040 nm
Largest charge group radii for Coulomb:   0.079, 0.079 nm
Calculating fourier grid dimensions for X Y Z
Using a fourier grid of 64x64x96, spacing 0.117 0.117 0.110
Pull group  natoms  pbc atom  distance at start reference at t=0
   0 0 0
   1 1 0
---
Program grompp, VERSION 4.6.3
Source code file: 
/project/p/pomes/cneale/GPC/exe/intel/gromacs-4.6.3/source/src/mdlib/pull.c, 
line: 331

Fatal error:
Distance of pull group 1 (4.706043 nm) is larger than 0.49 times the box size 
(3.738000)
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
---


###

Looking at the positions, vectors, and derived "distances":

initial position: 5.466   3.477   2.453
pull_vec1: -0.848 -0.179 0.497
box:   7.47600   7.47600  10.55300
shortest dist: 2.858 3.656 1.956 = 5.036 nm
length from initial position to (0,0,0): 2.01 3.477 2.453 = 4.706
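The arithmetic in that last line can be verified directly (position and box copied from above; the minimum-image convention is applied per component relative to the origin):

```python
import math

pos = (5.466, 3.477, 2.453)    # nm, initial position from above
box = (7.476, 7.476, 10.553)   # nm, box from above

# per-component minimum-image displacement from (0, 0, 0)
d = [min(p, b - p) for p, b in zip(pos, box)]
dist = math.sqrt(sum(x * x for x in d))
# d ~ (2.010, 3.477, 2.453); dist ~ 4.706 nm, the value in the fatal error
```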

But why does it matter what the distance from (0,0,0) to the initial position 
is?

Moreover, adding pull_start = yes does not change the error, nor does setting 
pull_init1 = 5.466 3.477 2.453
(I didn't expect pull_init1 to have any effect, but I tried it just in case).

Finally, If I do use direction-periodic, I don't get an error with grompp, but 
I get the following output,
which is hard for me to align with my current understanding of what the pull 
code should be doing:

Pull group  natoms  pbc atom  distance at start reference at t=0
   0 0 0
   1 1 0   2.301 0.000

(why is there a "distance at start" and why is the reference at t=0 affected if 
I set:
pull_init1 = 5.466   3.477   2.453

Pull group  natoms  pbc atom  distance at start reference at t=0
   0 0 0
   1 1 0   2.301 5.466

when pull_init1 is supposed to be ignored for constant-force pulling?

This may be related to: 
http://gromacs.5086.x6.nabble.com/constant-force-pulling-td5010180.html

Perhaps the problem is simply that this error should not be thrown in this 
case? Hard to believe nobody has caught that over the years so it must be 
something else.

Thank you,
Chris.
--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


[gmx-users] pull-code constant-force direction gives unexpected error about distance larger than box size

2013-10-04 Thread Christopher Neale
Dear Users:

I am trying to use the pull code to add a constant force in a particular 
direction.
I am getting an error that the initial distance is greater than 1/2 the box 
size.
(error in 4.5.5, 4.6.1, 4.6.3)

I must not understand how to use this. I checked the online .mdp options:
http://manual.gromacs.org/online/mdp_opt.html#pull

which read as if there is no concept of "distance" in this case, just an 
applied force along the specified vector.

One reason I know that I clearly don't understand it: if my interpretation of 
"constant-force, direction" were correct, I can't see why 
pull_geometry=direction-periodic would be needed.

My .mdp options are:
pull= constant-force
pull_geometry   = direction
pull_dim= Y Y Y
pull_nstxout= 500
pull_nstfout= 500
pull_ngroups= 1
pull_group1 = r_1_&_CA
pull_pbcatom1   = 0
pull_k1 = 30
pull_vec1   = -0.848 -0.179 0.497


And when I run:
grompp -p ../this.top -c ../start.gro -n ../index.ndx -f this.mdp

The error message is:
Pull group 1 'r_1_&_CA' has 1 atoms
Number of degrees of freedom in T-Coupling group System is 113574.00

WARNING 1 [file this.mdp]:
  You are using an absolute reference for pulling, but the rest of the
  system does not have an absolute reference. This will lead to artifacts.

Largest charge group radii for Van der Waals: 0.040, 0.040 nm
Largest charge group radii for Coulomb:   0.079, 0.079 nm
Calculating fourier grid dimensions for X Y Z
Using a fourier grid of 64x64x96, spacing 0.117 0.117 0.110
Pull group  natoms  pbc atom  distance at start reference at t=0
   0 0 0
   1 1 0
---
Program grompp, VERSION 4.6.3
Source code file: 
/project/p/pomes/cneale/GPC/exe/intel/gromacs-4.6.3/source/src/mdlib/pull.c, 
line: 331

Fatal error:
Distance of pull group 1 (4.706043 nm) is larger than 0.49 times the box size 
(3.738000)
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
---


###

Looking at the positions, vectors, and derived "distances":

initial position: 5.466   3.477   2.453
pull_vec1: -0.848 -0.179 0.497
box:   7.47600   7.47600  10.55300
shortest dist: 2.858 3.656 1.956 = 5.036 nm
length from initial position to (0,0,0): 2.01 3.477 2.453 = 4.706

But why does it matter what the distance from (0,0,0) to the initial position 
is?
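The arithmetic above can be checked directly: 4.706 nm is the component-wise minimum-image distance from the pull atom to the origin. A quick sketch (assuming a rectangular box; Python used only for the arithmetic):

```python
import math

# Minimum-image distance from a point to the origin in a rectangular box:
# wrap each component to the nearest periodic image, then take the norm.
def min_image_dist(pos, box):
    wrapped = [x - b * round(x / b) for x, b in zip(pos, box)]
    return math.sqrt(sum(c * c for c in wrapped))

pos = (5.466, 3.477, 2.453)    # initial position quoted above
box = (7.476, 7.476, 10.553)   # box vectors (nm)
print(round(min_image_dist(pos, box), 3))  # -> 4.706, the value in the fatal error
```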

Moreover, adding pull_start = yes does not change the error, nor does setting 
pull_init1 = 5.466   3.477   2.453 
(I didn't expect pull_init1 to have any effect, but I tried it just in case).

Finally, if I do use direction-periodic, I don't get an error with grompp, but 
I get the following output, which is hard for me to reconcile with my current 
understanding of what the pull code should be doing:

Pull group  natoms  pbc atom  distance at start reference at t=0
   0 0 0
   1 1 0   2.301 0.000

Why is there a "distance at start", and why is the reference at t=0 affected if 
I set:
pull_init1 = 5.466   3.477   2.453

Pull group  natoms  pbc atom  distance at start reference at t=0
   0 0 0
   1 1 0   2.301 5.466

when pull_init1 is supposed to be ignored for constant-force pulling?

This may be related to: 
http://gromacs.5086.x6.nabble.com/constant-force-pulling-td5010180.html

Perhaps the problem is simply that this error should not be thrown in this 
case? Hard to believe nobody has caught that over the years so it must be 
something else.

Thank you,
Chris.




--
gmx-users mailing list    gmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


[gmx-users] Membrane is shifted a lot during umbrella sampling

2013-10-04 Thread Christopher Neale
Dear Sudipta:

On average, bilayers migrate along the positive z axis during gromacs 
simulations with a variety of atomistic force fields. This has been reported 
before, but never fully resolved ( http://redmine.gromacs.org/issues/165 
suggests it is due to rounding ) and I see it all the time. It is not that 
rapid, but in microsecond-scale simulations it is quite obvious and 
statistically significant (i.e., we're not talking about something I've seen in 
one simulation ... the chance that this "upward" drift is random noise that 
could equally have produced drift along minus z is vanishingly small). I 
presume that an electric field could magnify this incorrect behaviour. That 
said, if your membrane has a net charge, or contains an embedded protein with a 
net charge, then I expect that migration along the applied field is actually 
the proper behaviour.

There are a few ways that you could stop drift along the z axis. One is to use 
the pull code to restrain your bilayer COM to (0,0,0) or some other absolute 
position (i.e., no reference group selected in the pull code) and only turn on 
the Z component of the pull force. You could also set comm-grps to remove COM 
motion of the bilayer only (hopefully somebody else will let me know if that is 
a bad idea for some reason).
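In 4.6 pull syntax, the first suggestion might look roughly like the fragment below. This is only a sketch: the index-group name and force constant are placeholders, and leaving the reference group empty is what selects the absolute reference (grompp will warn about it).

```
pull             = umbrella
pull_geometry    = position
pull_dim         = N N Y         ; restrain only the z component
pull_ngroups     = 1
pull_group0      =               ; empty: absolute reference
pull_group1      = BILAYER       ; placeholder index group
pull_init1       = 0 0 0         ; target COM position
pull_k1          = 1000          ; kJ mol^-1 nm^-2, illustrative

; alternative: remove bilayer COM motion instead
; comm-mode = Linear
; comm-grps = BILAYER
```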

Chris.

-- original message --

I am facing a problem doing an umbrella sampling simulation for the
transfer of a small peptide across a membrane in the presence of an electric
field. Moreover, the simulation was carried out at constant area. The Martini
force field for protein, lipid and water was used for this simulation. The
electric field was applied along the -z direction across the membrane. The
problem I am facing is that the position of the membrane shifts a
lot during this simulation. Moreover, I notice the shifting of the
membrane is significantly enhanced when I apply a high electric field. I
don't have any clue about this shifting. If anyone has an idea to
arrest this shifting problem, then please share it. A copy of my input file
is attached herewith.

Thanks in advance
Sudipta



[gmx-users] OPLS/AA + TIP5P, anybody?

2013-10-04 Thread Christopher Neale
Dear Grzegorz:

From a quick look at your .mdp, I also suggest that you go back to your system 
including the peptide (the one that finished EM with the modified flexible 
tip5p but then crashed with the standard rigid tip5p during MD) and try the MD 
again using gen-vel = yes.

If you're still seeing problems, why not upload your water-only system and your 
with-small-peptide test system to redmine. It's meant as a place to start a 
discussion, share files, and help us not to forget about a problem that may 
exist, so I am not sure why you hesitate.

Also, you said that the authors of that other OPLS-Tip5p paper had no problems. 
You might ask them for .gro .mdp and .top files so that you can see exactly 
what they did and how it differs from what you are doing.

Chris.

-- original message --

P.s. The mdp:

constraints =  none
integrator  =  md
dt  =  0.001; ps
nsteps  =  1000 ; total 10 ns
nstcomm =  1000
nstxout =  0
nstvout =  0
nstfout =  0
nstxtcout   =  25
xtc_grps=  System
nstlog  =  1000
nstenergy   =  1000
nstlist =  20
ns_type =  grid
rlist   =  1.3
coulombtype =  PME
fourierspacing  =  0.1
pme_order   =  4
optimize_fft=  yes
rcoulomb=  1.3
rvdw=  1.3
vdwtype =  cut-off
pbc =  xyz
DispCorr=  EnerPres
Tcoupl  =  v-rescale
ld_seed =  -1
tc-grps =  System
tau_t   =  0.1
ref_t   =  300.0
pcoupl  =  Parrinello-Rahman
tau_p   =  0.5
compressibility =  4.5e-5
ref_p   =  1.0
gen_vel =  no
cutoff-scheme   =  Verlet


[gmx-users] OPLS/AA + TIP5P, anybody?

2013-09-30 Thread Christopher Neale
Once a system passes EM and a couple of ps of MD, is it always stable 
indefinitely? If not, then
something is wrong somewhere.

-- original message --

Dear Chris,
I put one tip5p molecule in the center of a dodecahedral box - 2 nm from that 
molecule to the walls - filled it with tip5p, and ran 6000 steps of steep 
minimization. After another 2704 steps of cg it converged to emtol 1.0. 
I ran 100k steps of nvt on this box afterwards 
(http://shroom.ibb.waw.pl/tip5p/pure1). But the water is very 
capricious. If I ran only 2000 steps of steep, the following cg crashed 
after less than 1000 steps because a water molecule could not be 
settled. I could not minimize another box, filled only by genbox from an 
empty gro file (http://shroom.ibb.waw.pl/tip5p/pure2). I understand that 
you have to have some luck if you run a simulation in pbc with rigid 
water, which interacts through walls of the box with the other side and 
was not well placed. Also, I had several segfaults during minimization 
that I was able to avoid only by limiting the number of cores.
I checked distances between OW and LPx on a crashing minimization with a 
peptide - 2812 water molecules. Maximum force reached 8.8e+24 in 225 
steep steps, but all 5624 distances were a rock-solid 0.7 Å, as expected.
I still did not post the redmine issue, I want to be sure that I am 
doing everything correctly.

On 2013-09-29 18:47, Christopher Neale wrote:
> Dear Grzegorz:
> 
> Under no conditions should any of the tip5p geometry change (for the
> standard tip5p model).
> If you find that this is happening, then that is certainly an error.
> You can check if you like by analyzing
> your trajectory. However, flexible bonds will allow the distance from
> the arginine N to the arginine
> H to vary, which may allow a closer approach of the arginine H to the
> tip5p dummy site.
> 
> Did you verify that a water box (no protein) simulates without error?
> 
> Did you post a redmine issue with .mdp , .gro , and .top files?
> 
> Chris.


[gmx-users] OPLS/AA + TIP5P, anybody?

2013-09-29 Thread Christopher Neale
Dear Grzegorz:

Under no conditions should any of the tip5p geometry change (for the standard 
tip5p model). 
If you find that this is happening, then that is certainly an error. You can 
check if you like by analyzing 
your trajectory. However, flexible bonds will allow the distance from the 
arginine N to the arginine
H to vary, which may allow a closer approach of the arginine H to the tip5p 
dummy site.

Did you verify that a water box (no protein) simulates without error?

Did you post a redmine issue with .mdp , .gro , and .top files?

Chris.

-- original message --

Dear Chris,
Authors answered me very quickly. They did not have such a problem, but 
I still don't know the details of their input. They used gromacs-3.3, so 
I decided to give the old one a try. I did some tests with 3.3.4. 
Although the same problem occurred during steep minimization, some 
interesting things popped out. When I tried to grompp the system with 
plain tip5p for cg minimization, it failed:
"ERROR: can not do Conjugate Gradients with constraints (8484)", even 
though I did not set any constraints. The error is the same for tip4p 
unless you use the flexible model, which tip5p does not have - the water had 
to be constrained then. I guess that the treatment of virtual sites in 
gromacs-3.3 has something to do with this. I noticed that constraints 
make simulations with tip5p more stable. It should not happen that the 
LP virtual sites are pulled farther than the defined 0.7 Å from the 
oxygen, right? I will keep you updated.
Best Regards,
Grzegorz



[gmx-users] OPLS/AA + TIP5P, anybody?

2013-09-28 Thread Christopher Neale
Dear Gigo:

that's a good comprehensive testing and report. Please let us know what you 
find out from those authors.
Their paper was short on methods (unless I missed it... I didn't check for any 
SI), so perhaps they did something
non-standard and didn't report it.

I think at this point it is a good idea for you to file a redmine issue. It's 
not a gromacs error per se, but
if this is true then pdb2gmx or grompp should give a warning or error for the 
combination of oplsaa and tip5p.

Chris.

gigo gigo at ibb.waw.pl 
Sun Sep 29 00:59:42 CEST 2013
Dear Chris,
I am really grateful for your help. This is what I did, with additional 
LJ terms on LP1 and LP2 of tip5p:
- 5000 steps of steepest descent with positions restraints on protein 
and flexible water (flexibility like in tip4p),
- 5000 steps of steep, no restraints, flexible water,
- 5000 steps of cg, flexible water,
- 10 steps of MD with posres and constrained bond lengths, very weak 
temp coupling (v-rescale), starting without generating of initial 
velocities, so its heating very slowly to 300K, no pressure coupling
- 10 steps of MD with posres, no constraints, v-rescale, nvt
- 10 steps of MD, no posres, nvt,
- 10 steps of MD, v-rescale, Berendsen pressure coupling,
- 10 steps of MD, v-rescale, Parrinello-Rahman pressure coupling.

Output of this chain, after removing LJ from LP, became input for 4 
simulations just like the last one from the above chain, with or without 
posres and constraints turned on.
Results:
1) Posres off, constraints off :
"step 0"
"WARNING: Listed nonbonded interaction between particles 1157 and 1163" 
<== Why does it say "particles" and not "atoms"? Never mind, it's a lysine 
on the surface of the protein; one of the atoms is a dimensionless hydrogen.
(...)
"step 63: Water molecule starting at atom 7923 can not be settled."
(...)
"Segmentation fault"

2) Posres off, constraints on :
Warnings like the above + LINCS warnings, all for charged amino acids on the 
surface of the protein
Segfault at step 63

3) Posres on, constraints off, had to add refcoord_scaling = COM :
"WARNING: Listed nonbonded interaction between particles 3075 and 3079" 
<== arginine on the surface, dimensionless hydrogen

4) Posres on, constraints on, refcoord_scaling = COM :
Same warnings, several other positively charged dimensionless hydrogens 
listed, waters could not be settled, segfault.

I tried to run a few other peptides and proteins with tip5p: 100% crashes.

> Also, tip5p has been used successfully with the charmm foce field:
> http://pubs.acs.org/doi/abs/10.1021/ct700053u

Yes, there are no dimensionless charged hydrogens there except for those 
on ... tip3p, tip4p and tip5p water.

> (...)
> http://pubs.acs.org/doi/abs/10.1021/ct300180w
> (...)
> Are you using halogens with oplsaa-x?

No, I use OPLSAA distributed with Gromacs.


> Standard oplsaa (non-x) and tip5p seem to be a fine combination:
> http://www.sciencedirect.com/science/article/pii/S0006349507713863
> 
> you might want to contact the authors of that paper to see if they
> ever had such problems.

Thank you, I will ask them. I am starting to be sure, though, that it is 
not possible to run any simulations with tip5p and proteins (containing 
arginine, for example) without some tricks, at least 
strengthening/constraining bonds. It happened during minimization that the 
distance between HHxy and NHx in arginine grew to 1.48 Å while HHxy was 
landing on an LP of the water. Repeatability is 100%.
Regards,
Grzegorz


[gmx-users] OPLS/AA + TIP5P, anybody?

2013-09-28 Thread Christopher Neale
Dear Gigo:

Everything that I suggested was just a way to get a system without bad 
contacts so you can start your simulation with the proper (standard) tip5p 
model and oplsaa. I expected that combination to work together since they came 
out of the same lab, but looking at the 
to work together since they came out of the same lab, but looking at the 
initial tip5p paper, they
didn't test with protein in that first work. I thought that perhaps you had 
some bad clashes
that simply could not be removed with EM using a rigid water model (I have seen 
that sometimes 
even with tip3p).

I just checked and the tip5p sigmas and epsilons in gromacs are correct. Also, 
tip5p has been used
successfully with the charmm force field: 
http://pubs.acs.org/doi/abs/10.1021/ct700053u

However, it does appear that oplsaa-x (including new halogen parameters) can 
not be used with tip5p:
http://pubs.acs.org/doi/abs/10.1021/ct300180w

due to the exact problem that you are noting: 
"Thus, the present OPLS-AAx model should not be used with TIP5P water,(7d) as 
the X-sites and water lone-pair sites can fuse; there is no problem with TIP3P 
and TIP4P, as the negative charge is shielded by the Lennard-Jones sphere 
centered on the oxygen atom.(14) If needed, the issue can be avoided by the 
introduction of small Lennard-Jones parameters for the X-sites."

Are you using halogens with oplsaa-x?

Standard oplsaa (non-x) and tip5p seem to be a fine combination:
http://www.sciencedirect.com/science/article/pii/S0006349507713863

you might want to contact the authors of that paper to see if they ever had 
such problems.

Chris.


Dear Chris,
Thank you for your reply. I defined a new virtual atomtype (type D) with 
LJ sigma 1.72 Å (2*0.7 + 1.72 = 3.12 Å, which is the sigma of the oxygen) and 
played a bit with epsilon until the new LJ repulsion was able to prevent 
the build-up of a huge force on the oxygen while interacting with 
dimensionless charged hydrogens. Also, I copied the flexibility 
parameters from tip4p to see if it helps in minimization before I turn 
it into rigid water - it seems that it does. I was able to minimize the 
system with such water. Also, I minimized the system with tip4p and 
replaced it with tip5p with a script. I tried to minimize the system 
afterwards with true tip5p, which did not work. My question is, besides 
the correctness of the water model, why do you think it is safe to remove 
the LJ on lone electron pairs in MD? Will it not collapse like in energy 
minimization?
Best Regards,
Grzegorz
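For reference, the temporary lone-pair type described here might be declared roughly as below. This is a sketch only: the type name and the epsilon value are placeholders; the 0.172 nm sigma is the value from the message (2*0.07 + 0.172 = 0.312 nm, the oxygen sigma).

```
[ atomtypes ]
; name  at.num   mass     charge   ptype   sigma (nm)   epsilon (kJ/mol)
  LPX      0     0.0000   0.0000   D       1.72e-01     0.05
```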


[gmx-users] OPLS/AA + TIP5P, anybody?

2013-09-26 Thread Christopher Neale
Dear Gigo:

I've never used tip5p, but perhaps you could add some LJ terms to the opls_120 
definition,
do your minimization, then remove the fake LJ term on opls_120 and run your MD?

If that doesn't work, then you might be able to minimize your system using 
FLEXIBLE tip3p
water and then use a script to convert the tip3p into tip5p. I expect that you 
can set 0,0,0
coordinates for each of the tip5p dummy atoms and that they will get correctly 
positioned 
in your first mdrun step with tip5p.
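That conversion might be sketched as follows (a hypothetical helper: the SOL/OW/HW2/LP1/LP2 atom names and the fixed-width .gro columns are assumptions, and only the water lines are handled):

```python
# Append TIP5P lone-pair dummies (LP1, LP2) at (0,0,0) after each TIP3P
# water; mdrun can rebuild the virtual-site positions on the first step.
def tip3p_to_tip5p(atom_lines):
    out, n = [], 0
    for line in atom_lines:
        resname = line[5:10].strip()   # .gro columns: resid, resname, name, number
        name = line[10:15].strip()
        n += 1
        out.append(line[:15] + "%5d" % n + line[20:])   # renumber atoms
        if resname == "SOL" and name == "HW2":
            for lp in ("LP1", "LP2"):
                n += 1
                out.append("%s%-5s%5s%5d%8.3f%8.3f%8.3f"
                           % (line[:5], "SOL", lp, n, 0.0, 0.0, 0.0))
    return out
```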

Chris.

-- original message --

Dear Mark,
Thank you for your reply. Unfortunately, TIP5P is completely rigid and 
the FLEXIBLE define will not change it. Any other ideas?
Best,
g

On 2013-09-24 23:51, Mark Abraham wrote:
> You should be able to minimize with CG and TIP5P by eliminating
> constraints, by making the water use a flexible molecule, e.g. define
> = -DFLEXIBLE (or something). Check your water .itp file for how to do
> it.
> 
> Mark


[gmx-users] Membrane simulation with OPLS ff.

2013-09-26 Thread Christopher Neale
Dear Karthi:

As far as I am aware, there is no OPLSAA lipid force field. I have used Berger 
lipids with OPLSAA protein
( http://www.pomeslab.com/files/lipidCombinationRules.pdf ) but that is mixing 
a UA lipid with an AA protein
so be aware of possible problems arising out of that.

Charmm has proteins and lipids, but charmm lipids require charmm tip3p water 
(or at least tip4p or spc, certainly not 
regular tip3p) and are thus slower to simulate in gromacs. I'm more recently 
using the Slipids (Stockholm lipids) and 
Amber99SB-ILDN protein forcefield.

Chris.

-- original message --
Is OPLSAA forcefield data already available for POPC membranes? I am 
interested in simulating proteins in a POPC membrane.
Thank you.
Best Regards
Karthi.


[gmx-users] "Illegal instruction" error from alchemical-gromacs.py

2013-09-26 Thread Christopher Neale
Dear Users:

I'm having difficulty running MBAR after some free energy calculations (MBAR 
via alchemical-gromacs.py obtained from alchemistry.org).
The input options to alchemical-gromacs.py have obviously changed since the 
site at 
http://www.alchemistry.org/wiki/GROMACS_4.6_example:_Direct_ethanol_solvation_free_energy
was updated, but I've taken a look at the python source and it seems as if my 
invocation below should work. 
Nevertheless, I get an "Illegal instruction" error (See below).

gpc-logindm02-$ ls ANALYSIS/
ethanol.0.dhdl.xvg  ethanol.2.dhdl.xvg  ethanol.4.dhdl.xvg  ethanol.6.dhdl.xvg  
ethanol.8.dhdl.xvg
ethanol.1.dhdl.xvg  ethanol.3.dhdl.xvg  ethanol.5.dhdl.xvg  ethanol.7.dhdl.xvg

gpc-logindm02-$ python 
/project/p/pomes/cneale/GPC/exe/pymbar/trunk/examples/alchemical-free-energy/alchemical-gromacs.py
 -d ANALYSIS -p ethanol. -t 300 -s 1000 -v
Warning on use of the timeseries module: If the inherent timescales of the 
system are long compared to those being analyzed, this statistical inefficiency 
may be an underestimate.  The estimate presumes the use of many statistically 
independent samples.  Tests should be performed to assess whether this 
condition is satisfied.   Be cautious in the interpretation of the data.
Started on Thu Sep 26 22:32:05 2013
Illegal instruction

(omitting the -t -s and -v flags yields the same error).

I realize that I'm hijacking the gromacs list a little bit here but there is no 
mailing list for http://www.alchemistry.org/
and it seems like a good idea to have these Q&A's archived.

Thank you,
Chris.



[gmx-users] segfault when running the alchemistry.org ethanol solvation free energy tutorial

2013-09-26 Thread Christopher Neale
Agreed, the following parameters do not segfault in single or double precision:
sc-alpha = 0.5
sc-power = 1
sc-r-power   = 6
Same goes for http://bugzilla.gromacs.org/issues/1306 

The following parameters give a segfault in single precision but are ok in 
double precision
sc-alpha = 0.001
sc-power = 1
sc-r-power   = 48
Same goes for http://bugzilla.gromacs.org/issues/1306 

Sorry for the confusion earlier. I was using a compilation that I thought was 
double precision 
but it was actually single. Recompiling in double precision gave me the 
stability outlined above.

Thank you for your assistance, Mark and Michael.

Chris.

-- original message --

Michael Shirts mrshirts at gmail.com 
Fri Sep 27 00:41:17 CEST 2013
And with this change, single is running fine as well.

This was a known issue, but was only documented in the expanded.mdp
files, which was an oversight.  After this, I switched so the default
is less likely to cause problems.  Because of some theory improvements
developed in the group in free energy calculation pathways, the
sc-r-power=48 pathway will now be phased out anyway by 5.1.

On Thu, Sep 26, 2013 at 6:37 PM, Michael Shirts  wrote:
> I thought I had just managed to solve the issue :)
>
> If you look at the soft core parameters, there are two types listed --
> one with sc-r-power = 48, and one with sc-r-power = 6.  The sc-r-power = 6
> parameters are more stable with single precision calculations.
>
> I have changed the files on the website to make the single precision
> ones the default.  Expanded.mdp warned about the issues with
> precision, but left the sc-r-power in place; the Ethanol.mdp did not
> warn about the potential issue.  Now both include the more efficient
> path commented out.
>
> NOTE that this means the exact numbers of the tutorials are not quite
> right anymore; the process is unchanged, as is the final answer, but
> the intermediate dG's will be slightly different.
>
> However, I need to redo them anyway to make them easier in the next
> 1-2 weeks, so I will update them then.
>
> If it still fails with double, that's a different issue -- because I'm
> running them fine with double.
>
>


[gmx-users] segfault when running the alchemistry.org ethanol solvation free energy tutorial

2013-09-26 Thread Christopher Neale
No, this is not the expanded ensemble version. It's the initial "Running the 
calculation with Gromacs" section straight out of 
http://www.alchemistry.org/wiki/GROMACS_4.6_example:_Direct_ethanol_solvation_free_energy

I get the segfault with a single run (at any of the 9 individual lambda values).

-- original message --

Just to be clear, is this the expanded ensemble version of the calculation?



[gmx-users] segfault when running the alchemistry.org ethanol solvation free energy tutorial

2013-09-26 Thread Christopher Neale
My mistake: I still get a segfault even when using double precision. (EM 
doesn't help, nor does switching to Berendsen pressure coupling.)

Note that I can stop the segfault when running at init-lambda-state = 0 if I 
set:

couple-lambda0   = none
couple-lambda1   = none

instead of 

couple-lambda0   = vdw-q
couple-lambda1   = none

This looks like http://bugzilla.gromacs.org/issues/1306

Thank you,
Chris.

-- original message --

Thank you Mark.

I actually found that it crashed without the -multi part (no Hamiltonian 
exchange). 
The command that I used was: mdrun -nt 1 -deffnm ethanol.1 -dhdl 
ethanol.1.dhdl.xvg

If I use the double precision version, there is no segfault. That's a working 
solution, but it is worrisome.

Thank you,
Chris.



[gmx-users] segfault when running the alchemistry.org ethanol solvation free energy tutorial

2013-09-26 Thread Christopher Neale
Thank you Mark.

I actually found that it crashed without the -multi part (no Hamiltonian 
exchange). 
The command that I used was: mdrun -nt 1 -deffnm ethanol.1 -dhdl 
ethanol.1.dhdl.xvg

If I use the double precision version, there is no segfault. That's a working 
solution, but it is worrisome.

Thank you,
Chris.


-- original message --

I found the -multi version of that tutorial a bit temperamental...
Michael Shirts suggested that double precision is more reliable for
expanded ensemble. Hopefully he can chime in in a day or two.

Mark

On Thu, Sep 26, 2013 at 9:00 PM, Christopher Neale
 wrote:

[gmx-users] segfault when running the alchemistry.org ethanol solvation free energy tutorial

2013-09-26 Thread Christopher Neale
Dear Users:

Has anyone successfully run the free energy tutorial at 
http://www.alchemistry.org/wiki/GROMACS_4.6_example:_Direct_ethanol_solvation_free_energy
 ?

I just tried it and I get a segmentation fault immediately (see output at the 
end of this post).

I get a segfault with both 4.6.3 and 4.6.1.

Note that if I modify the .mdp file to set free-energy = no , then the 
simulation runs just fine. (I have, of course, set init-lambda-state in the 
.mdp file that I downloaded from the aforementioned site and I get a segfault 
with any value of init-lambda-state from 0 to 8).

gpc-f103n084-$ mdrun -nt 1 -deffnm ethanol.1 -dhdl ethanol.1.dhdl.xvg
 :-)  G  R  O  M  A  C  S  (-:

  GROup of MAchos and Cynical Suckers

:-)  VERSION 4.6.3  (-:

Contributions from Mark Abraham, Emile Apol, Rossen Apostolov,
   Herman J.C. Berendsen, Aldert van Buuren, Pär Bjelkmar,
 Rudi van Drunen, Anton Feenstra, Gerrit Groenhof, Christoph Junghans,
Peter Kasson, Carsten Kutzner, Per Larsson, Pieter Meulenhoff,
   Teemu Murtola, Szilard Pall, Sander Pronk, Roland Schulz,
Michael Shirts, Alfons Sijbers, Peter Tieleman,

   Berk Hess, David van der Spoel, and Erik Lindahl.

   Copyright (c) 1991-2000, University of Groningen, The Netherlands.
 Copyright (c) 2001-2012,2013, The GROMACS development team at
Uppsala University & The Royal Institute of Technology, Sweden.
check out http://www.gromacs.org for more information.

 This program is free software; you can redistribute it and/or
   modify it under the terms of the GNU Lesser General Public License
as published by the Free Software Foundation; either version 2.1
 of the License, or (at your option) any later version.

:-)  mdrun  (-:

Option Filename  Type Description

  -s  ethanol.1.tpr  InputRun input file: tpr tpb tpa
  -o  ethanol.1.trr  Output   Full precision trajectory: trr trj cpt
  -x  ethanol.1.xtc  Output, Opt. Compressed trajectory (portable xdr format)
-cpi  ethanol.1.cpt  Input, Opt.  Checkpoint file
-cpo  ethanol.1.cpt  Output, Opt. Checkpoint file
  -c  ethanol.1.gro  Output   Structure file: gro g96 pdb etc.
  -e  ethanol.1.edr  Output   Energy file
  -g  ethanol.1.log  Output   Log file
-dhdl ethanol.1.dhdl.xvg  Output, Opt! xvgr/xmgr file
-field  ethanol.1.xvg  Output, Opt. xvgr/xmgr file
-table  ethanol.1.xvg  Input, Opt.  xvgr/xmgr file
-tabletf  ethanol.1.xvg  Input, Opt.  xvgr/xmgr file
-tablep  ethanol.1.xvg  Input, Opt.  xvgr/xmgr file
-tableb  ethanol.1.xvg  Input, Opt.  xvgr/xmgr file
-rerun  ethanol.1.xtc  Input, Opt.  Trajectory: xtc trr trj gro g96 pdb cpt
-tpi  ethanol.1.xvg  Output, Opt. xvgr/xmgr file
-tpid ethanol.1.xvg  Output, Opt. xvgr/xmgr file
 -ei  ethanol.1.edi  Input, Opt.  ED sampling input
 -eo  ethanol.1.xvg  Output, Opt. xvgr/xmgr file
  -j  ethanol.1.gct  Input, Opt.  General coupling stuff
 -jo  ethanol.1.gct  Output, Opt. General coupling stuff
-ffout  ethanol.1.xvg  Output, Opt. xvgr/xmgr file
-devout  ethanol.1.xvg  Output, Opt. xvgr/xmgr file
-runav  ethanol.1.xvg  Output, Opt. xvgr/xmgr file
 -px  ethanol.1.xvg  Output, Opt. xvgr/xmgr file
 -pf  ethanol.1.xvg  Output, Opt. xvgr/xmgr file
 -ro  ethanol.1.xvg  Output, Opt. xvgr/xmgr file
 -ra  ethanol.1.log  Output, Opt. Log file
 -rs  ethanol.1.log  Output, Opt. Log file
 -rt  ethanol.1.log  Output, Opt. Log file
-mtx  ethanol.1.mtx  Output, Opt. Hessian matrix
 -dn  ethanol.1.ndx  Output, Opt. Index file
-multidir ethanol.1  Input, Opt., Mult. Run directory
-membed  ethanol.1.dat  Input, Opt.  Generic data file
 -mp  ethanol.1.top  Input, Opt.  Topology file
 -mn  ethanol.1.ndx  Input, Opt.  Index file

Option   Type   Value   Description
--
-[no]h   bool   no  Print help info and quit
-[no]version bool   no  Print version info and quit
-niceint0   Set the nicelevel
-deffnm  string ethanol.1  Set the default filename for all file options
-xvg enum   xmgrace  xvg plot formatting: xmgrace, xmgr or none
-[no]pd  bool   no  Use particle decompostion
-dd  vector 0 0 0   Domain decomposition grid, 0 is optimize
-ddorder enum   interleave  DD node order: interleave, pp_pme or cartesian
-npmeint-1  Number of separate nodes to be used for PME, -1
is guess
-nt  int1   Total number of threads to start (0 is guess)
-ntmpi   int0   Number of thread-MPI threads to start (0 is guess)
-ntomp   int0   Number of OpenMP threads per MPI process/thread
to start (0 is guess)
-ntomp_pme   int0   Number of OpenMP threads per MPI process/t

[gmx-users] BGQ compilation with verlet kernels: #include file "kernel_impl.h" not found.

2013-09-17 Thread Christopher Neale
Indeed, it works just fine when I compile with mpi. I never thought to check 
that. My usual procedure is
to compile the whole package without mpi and then to compile mdrun with mpi. 
Thanks for the help Mark.

Here is the compilation script that worked for me.

module purge
module load vacpp/12.1 xlf/14.1 mpich2/xl
module load cmake/2.8.8
module load fftw/3.3.2

export FFTW_LOCATION=/scinet/bgq/Libraries/fftw-3.3.2

cmake ../source/ \
  -DCMAKE_TOOLCHAIN_FILE=BlueGeneQ-static-XL-C \
  -DCMAKE_PREFIX_PATH=$FFTW_LOCATION \
  -DCMAKE_INSTALL_PREFIX=$(pwd) \
  -DGMX_X11=OFF \
  -DGMX_MPI=ON \
  -DGMX_PREFER_STATIC_LIBS=ON

make -j 16
make install



[gmx-users] BGQ compilation with verlet kernels: #include file "kernel_impl.h" not found.

2013-09-17 Thread Christopher Neale
Dear Users:

I am attempting to use the new BGQ kernels with the version of gromacs at 
https://gerrit.gromacs.org/#/c/2572/ , which I obtained by:

git init
git fetch https://gerrit.gromacs.org/gromacs refs/changes/72/2572/1 && git checkout FETCH_HEAD

I then attempted to compile like this:

module purge
module load vacpp/12.1 xlf/14.1 mpich2/xl
module load cmake/2.8.8
module load fftw/3.3.2
export FFTW_LOCATION=/scinet/bgq/Libraries/fftw-3.3.2
cmake ../source/ \
  -DCMAKE_PREFIX_PATH=$FFTW_LOCATION \
  -DCMAKE_INSTALL_PREFIX=$(pwd) \
  -DGMX_X11=OFF \
  -DGMX_MPI=OFF \
  -DGMX_PREFER_STATIC_LIBS=ON 
make -j 4
make install

###

That gave me the following error during make:

Scanning dependencies of target gromacs_include_links
[  0%] Generating gromacs
[  0%] Built target gromacs_include_links
Scanning dependencies of target gmx
[  0%] [  0%] [  0%] [  0%] [  0%] [  0%] [  0%] [  0%] [  1%] Building C 
object src/gmxlib/CMakeFiles/gmx.dir/gmxcpp.c.o
[  1%] Building C object src/gmxlib/CMakeFiles/gmx.dir/md_logging.c.o
Building C object src/gmxlib/CMakeFiles/gmx.dir/vmddlopen.c.o
[  1%] [  1%] Building C object src/gmxlib/CMakeFiles/gmx.dir/chargegroup.c.o
Building C object src/gmxlib/CMakeFiles/gmx.dir/gmx_cpuid.c.o
[  1%] Building C object src/gmxlib/CMakeFiles/gmx.dir/gmxfio_rw.c.o
Building C object src/gmxlib/CMakeFiles/gmx.dir/gmxfio.c.o
[  1%] [  1%] Building C object src/gmxlib/CMakeFiles/gmx.dir/trxio.c.o
[  1%] Building C object src/gmxlib/CMakeFiles/gmx.dir/gmx_system_xdr.c.o
Building C object src/gmxlib/CMakeFiles/gmx.dir/wgms.c.o
Building C object src/gmxlib/CMakeFiles/gmx.dir/wman.c.o
Building C object src/gmxlib/CMakeFiles/gmx.dir/smalloc.c.o
Building C object src/gmxlib/CMakeFiles/gmx.dir/rmpbc.c.o
Building C object src/gmxlib/CMakeFiles/gmx.dir/nrjac.c.o
Building C object src/gmxlib/CMakeFiles/gmx.dir/index.c.o
Building C object src/gmxlib/CMakeFiles/gmx.dir/gmx_thread_affinity.c.o
[  1%] Building C object src/gmxlib/CMakeFiles/gmx.dir/mshift.c.o
[  1%] Building C object src/gmxlib/CMakeFiles/gmx.dir/macros.c.o
[  1%] Building C object src/gmxlib/CMakeFiles/gmx.dir/oenv.c.o
[  1%] Building C object src/gmxlib/CMakeFiles/gmx.dir/maths.c.o
[  2%] Building C object src/gmxlib/CMakeFiles/gmx.dir/gmx_omp.c.o
[  2%] Building C object src/gmxlib/CMakeFiles/gmx.dir/random.c.o
[  2%] Building C object src/gmxlib/CMakeFiles/gmx.dir/gmx_sort.c.o
[  2%] Building C object src/gmxlib/CMakeFiles/gmx.dir/gmx_random.c.o
[  2%] Building C object src/gmxlib/CMakeFiles/gmx.dir/md5.c.o
[  2%] Building C object src/gmxlib/CMakeFiles/gmx.dir/network.c.o
[  2%] Building C object src/gmxlib/CMakeFiles/gmx.dir/copyrite.c.o
[  2%] Building C object src/gmxlib/CMakeFiles/gmx.dir/cinvsqrtdata.c.o
[  2%] Building C object src/gmxlib/CMakeFiles/gmx.dir/writeps.c.o
[  2%] Building C object src/gmxlib/CMakeFiles/gmx.dir/mtop_util.c.o
"/home/p/pomes/cneale/exec/gromacs-4.6.3_bgq/source/src/gmxlib/network.c", line 
264.10: 1506-296 (S) #include file  not found.
make[2]: *** [src/gmxlib/CMakeFiles/gmx.dir/network.c.o] Error 1
make[2]: *** Waiting for unfinished jobs
make[1]: *** [src/gmxlib/CMakeFiles/gmx.dir/all] Error 2
make: *** [all] Error 2

###

After some looking around, I added the following line before cmake:

export CFLAGS=-I/bgsys/drivers/ppcfloor

That solved the initial problem, but now I have the following error:

...
[  2%] Building C object src/gmxlib/CMakeFiles/gmx.dir/writeps.c.o
[  2%] Building C object src/gmxlib/CMakeFiles/gmx.dir/mtop_util.c.o
[  3%] Building C object src/gmxlib/CMakeFiles/gmx.dir/gmx_fatal.c.o
"/bgsys/drivers/ppcfloor/spi/include/kernel/location.h", line 34.10: 1506-296 
(S) #include file "kernel_impl.h" not found.
"/bgsys/drivers/ppcfloor/spi/include/kernel/process.h", line 36.10: 1506-296 
(S) #include file "kernel_impl.h" not found.
"/bgsys/drivers/ppcfloor/spi/include/kernel/process.h", line 194.10: 1506-296 
(S) #include file "process_impl.h" not found.
"/bgsys/drivers/ppcfloor/spi/include/kernel/location.h", line 173.10: 1506-296 
(S) #include file "location_impl.h" not found.
make[2]: *** [src/gmxlib/CMakeFiles/gmx.dir/network.c.o] Error 1
make[2]: *** Waiting for unfinished jobs
make[1]: *** [src/gmxlib/CMakeFiles/gmx.dir/all] Error 2
make: *** [all] Error 2

###

And it's harder to know what to do now since there are multiple instances of 
kernel_impl.h , etc.:

bgqdev-fen1-$ find /bgsys/drivers/ppcfloor/spi/include/kernel/ | grep kernel_impl.h
/bgsys/drivers/ppcfloor/spi/include/kernel/cnk/kernel_impl.h
/bgsys/drivers/ppcfloor/spi/include/kernel/firmware/kernel_impl.h
/bgsys/drivers/ppcfloor/spi/include/kernel/klinux/kernel_impl.h


Does this indicate that the computer that I am using really has such a 
non-standard BG/Q setup? Or is it possible that compiling on the BG/Q with the 
new verlet kernel requires an entirely different approach?

I understand that the Jenkins Buildbot has succes

[gmx-users] charmm36 proteins

2013-08-14 Thread Christopher Neale
What is the charmm36 protein force field? (can you provide a reference to what 
you are referring to).

As far as I know, there is no such thing as "charmm36" proteins. There are 
charmm27+cmap proteins
(implemented in gromacs already) and charmm36 lipids (implemented in gromacs 
already).

-- original message --

Hello all,
Does anyone know if there is a working version of charmm36 proteins
forcefield converted for gromacs?
Thank you.
-Dina



[gmx-users] VDW switched off in CHARMM tip3p model

2013-08-14 Thread Christopher Neale
From my inspection of the force field files included in gromacs-4.6.1, when you
use the charmm ff and the charmm tip3p (tips3p.itp) you get the hydrogen type 
HT,
which does have LJ on the hydrogen atoms, whereas you only get the hydrogen 
type HWT3 when you use the non-charmm tip3p (tip3p.itp).

If I understand Mark's comment to be that there is no good reason to use the 
charmm
tip3p water model with the charmm ff, then I disagree (See below).

I have posted an image ( http://tinypic.com/r/110ak5s/5 ) of the area per lipid 
(APL) 
for a POPC bilayer that I built with charmm-gui (83 lipids/leaflet; 43 
waters/lipid)
and then simulated under the charmm36 lipid parameters for ~ 300 ns many times 
successively with different water models (2 repeats each, starting from 
different 
conformations out of charmm-gui). Note the large difference in the APL for the 
regular tip3p and the charmm tip3p.  In fact, some papers note that you get 
gel-phase bilayers with the charmm36 lipid ff and the non-charmm tip3p model 
when using 
certain cutoff treatments 
(Piggot and Khalid -- http://pubs.acs.org/doi/abs/10.1021/ct3003157 -- is one 
such paper).

Chris.

-- original message --

You could do that, but it's extra cost for no known benefit. See
http://dx.doi.org/10.1021/ct900549r

On Tue, Aug 13, 2013 at 7:15 PM, Weilong Zhao  wrote:
> Hi,
>
> I noticed that for GROMACS versions of 4.5 of later, the force field for
> TIP3P water model in CHARMM27.ff package has a following line included:
>
>  The following atom types are NOT part of the CHARMM distribution
> ..
> .
> HWT3    1    1.008000    0.417    A    0.0    0.0    ; TIP3p H
>
> It seems that VDW interaction of the default hydrogen of water is switched
> off, which makes it not the correct TIP3P ff of CHARMM. However this
> hydrogen type HWT3, is used as default by gromacs as CHARMM water model.
>
> Even though the hydrogen without VDW is the original TIP3P water model, I
> wonder is there any problem to include it in CHARMM ff set? Really
> appreciate if anyone could share some opinion.


[gmx-users] RE: Umbrella sampling question

2013-07-29 Thread Christopher Neale
Why not do two umbrella sampling simulations: one with initial conformations 
from your faster pulling and one with initial conformations from your slower 
pulling. Then you can run them both as regular US simulations until (a) neither 
US PMF is drifting systematically with increasing simulation time and (b) they 
converge to the same answer. This way, you might also be able to make some 
comment on the relative importance of doing the initial pulling slowly.
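The two-part convergence test above can be sketched numerically. This assumes the PMFs have already been extracted onto a common reaction-coordinate grid (e.g. from g_wham output); the function names, synthetic data, and 1 kJ/mol tolerance are illustrative choices, not GROMACS conventions:

```python
import math

def pmf_discrepancy(pmf_a, pmf_b):
    # PMFs are defined only up to an additive constant, so shift each to
    # zero mean before taking the maximum pointwise difference.
    ma = sum(pmf_a) / len(pmf_a)
    mb = sum(pmf_b) / len(pmf_b)
    return max(abs((a - ma) - (b - mb)) for a, b in zip(pmf_a, pmf_b))

def converged(pmf_early, pmf_late, pmf_other, tol=1.0):
    # (a) no systematic drift with simulation time within one protocol, and
    # (b) agreement between the two protocols; tol (kJ/mol) is arbitrary.
    return (pmf_discrepancy(pmf_early, pmf_late) < tol
            and pmf_discrepancy(pmf_late, pmf_other) < tol)

# Illustrative PMFs on a common 0-5 nm grid, differing only by small noise.
xi = [0.1 * i for i in range(51)]
pmf_fast = [10.0 * math.exp(-x) + 0.05 * math.sin(x) for x in xi]
pmf_slow = [10.0 * math.exp(-x) - 0.05 * math.sin(x) for x in xi]
print(converged(pmf_fast, pmf_fast, pmf_slow))  # True for this synthetic data
```

In practice pmf_early and pmf_late would come from the first and second halves of each umbrella set.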

Next time, please wait until your post has appeared on the list (it is 
currently awaiting approval for inclusion of large images) before sending me a 
private email asking me to comment on-list (which is, by itself, fine with me).

Also: I can't make heads or tails of your plots. (i) I have no idea which one 
corresponds to which force constant; (ii) you mentioned 4 force constants but 
you posted 8 figures; (iii) I cannot read any of the axes or labels.

Chris.

From: surampu...@gmail.com [surampu...@gmail.com]
Sent: 29 July 2013 19:39
To: chris.ne...@utoronto.ca
Subject: Umbrella sampling question

Hi Chris,

I have seen many of your contributions to the Umbrella sampling topic in the 
forum. It would be of immense help to me if you can throw light on a few points.

I am running my first umbrella sampling simulation, and having hard time to 
decide if the PMF plot I am getting is right or wrong. I am pulling one atom 
from a short polymer chain from the center of a micelle and trying to obtain a 
PMF plot along the distance from the center of the micelle to the surface and 
eventually few nanometers away from the micelle. Here is the mdp file for the 
pulling:
title   = Umbrella pulling simulation
define  = -DPOSRES_surfpoly
; Run parameters
integrator  = md
dt  = 0.002 ; timestep (ps)
tinit   = 0
nsteps  = 250000    ; 500 ps = 0.5 ns
nstcomm = 10; remove COM every 10 steps
; Output parameters
nstxout = 5000
nstvout = 5000
nstfout = 250
nstxtcout   = 250
nstenergy   = 500
; Bond parameters
constraint_algorithm= lincs
constraints = all-bonds
continuation= yes   ; continuing from NPT
; Single-range cutoff scheme
nstlist = 5
ns_type = grid
rlist   = 1.4
rcoulomb= 1.4
rvdw= 1.4
; PME electrostatics parameters
coulombtype = PME
fourierspacing  = 0.12
fourier_nx  = 0
fourier_ny  = 0
fourier_nz  = 0
pme_order   = 4
ewald_rtol  = 1e-5
optimize_fft= yes
; Berendsen temperature coupling is on in two groups
Tcoupl  = Nose-Hoover
tc_grps = System
tau_t   = 0.5
ref_t   = 300
; Pressure coupling is on
Pcoupl  = Parrinello-Rahman
pcoupltype  = isotropic
tau_p   = 1.0
compressibility = 4.5e-5
ref_p   = 1.0
refcoord_scaling = com
; Generate velocities is off
gen_vel = no
; Periodic boundary conditions are on in all directions
pbc = xyz
; Long-range dispersion correction
DispCorr= EnerPres
; Pull code
pull= umbrella  ; COM pulling using umbrella potential between 
reference group and other groups
;pull_geometry   = distance  ; simple distance increase
pull_geometry   = direction  ; simple distance increase
;pull_dim= Y N N ; pulling in x-direction
pull_vec1   = 1 0 0 ; pulling in x-direction
pull_start  = yes   ; define initial COM distance > 0
pull_ngroups= 1 ; we are only applying a pulling force to one group
pull_group0 = surf   ; reference group for pulling
pull_group1 = poly; group to which pulling force is applied
pull_rate1  = 0.01  ; 0.01 nm per ps = 10 nm per ns
pull_k1 = 300   ; kJ mol^-1 nm^-2

I have repeated pull run using various force constants 
(pull_k1=300,500,1000,2000 kJ/mol/nm^2) to pull over 5 nm distance with 0.1nm 
windows and none of them is a clear pick as in the tutorial as to which one to 
use for umbrella sampling. From my understanding, force constant should neither 
be too low or too high. Is there a simpler way to decide which one is the right 
force constant to use from the plots? Please see the link below for the F vs 
time and  position vs time plots (top to bottom order --- pull_k1=2000, 1000, 
500, 300 kJ/mol/nm^2).

http://gromacs.5086.x6.nabble.com/Umbrella-sampling-force-vs-time-plots-tp5009709p5010097.html
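As a rough rule of thumb (a general heuristic, not a GROMACS prescription): in a harmonic umbrella the equilibrium positional fluctuation is sigma = sqrt(kB*T/k), and neighbouring windows sample overlapping histograms when sigma is comparable to the window spacing. A quick sketch for the force constants and 0.1 nm spacing in question:

```python
import math

KB = 0.0083145  # Boltzmann constant in kJ/mol/K (GROMACS units)

def window_sigma(k, temp=300.0):
    # Expected positional fluctuation in a harmonic umbrella of force
    # constant k (kJ/mol/nm^2): sigma = sqrt(kB*T/k), in nm.
    return math.sqrt(KB * temp / k)

spacing = 0.1  # nm, the window spacing used above
for k in (300, 500, 1000, 2000):
    sigma = window_sigma(k)
    # Heuristic: neighbouring Gaussian histograms overlap usefully when
    # sigma is at least about half the window spacing.
    print(k, round(sigma, 4), sigma >= 0.5 * spacing)
```

By this heuristic the stiffest springs undersample the gaps between 0.1 nm windows while the softer ones overlap well; the histograms written by g_wham from the actual umbrella runs are the real test.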

Thanks a lot,
Andy S.






[gmx-users] segfault with an otherwise stable system when I turn on FEP (complete decoupling)

2013-07-18 Thread Christopher Neale
Dear Michael:

I have uploaded them as http://redmine.gromacs.org/issues/1306

It does not crash immediately. The crash is stochastic, giving a segfault 
between 200 and 5000 integration steps. That made me think it was a simple 
exploding system problem, but there are other things (listed in my original 
post) that make me think otherwise. Most notably, the drug is fine in both 
water and vacuum. I have also built numerous systems and get the crash in each 
one. My actual production system also has a protein, but during debugging I 
found that the error persists in a simple and small water solution.

Thank you for your assistance.
Chris.

-- original message --

Chris, can you post a redmine on this so I can look at the files?

Also, does it crash immediately, or after a while?

On Thu, Jul 18, 2013 at 2:45 PM, Christopher Neale
 wrote:


[gmx-users] segfault with an otherwise stable system when I turn on FEP (complete decoupling)

2013-07-18 Thread Christopher Neale
Dear Users:

I have a system with water and a drug (54 total atoms; 27 heavy atoms). The 
system is stable when I simulate it for 1 ns. However, once I add the following 
options to the .mdp file, the run dies after a few ps with a segfault.

free-energy = yes
init-lambda = 1
couple-lambda0 = vdw-q
couple-lambda1 = none
couple-intramol = no
couple-moltype = drug

I do not get any step files or any lincs warnings. If I look at the .xtc and 
.edr files, there is no indication of something blowing up before the segfault. 
I have also verified that the drug runs without any problem in vacuum. I get 
the same behaviour if I remove constraints and use a timestep of 0.5 fs. The 
segfault is reproducible with v4.6.1 and v4.6.3. I am using the charmm FF, but 
I converted all UB angles in my drug to type-1 angles and still got the 
segfault. I also get the segfault with particle decomposition and/or while 
running a single thread. I am currently using the SD integrator, but I get the 
same segfault with md and md-vv. Couple-intramol=yes doesn't resolve it, 
neither does using separate T-coupling groups for the water and drug. Neither 
does turning off pressure coupling.

Here is the .mdp file that works fine, but gives me a segfault when I add the 
free energy stuff (above):

constraints = all-bonds
lincs-iter =  1
lincs-order =  6
constraint_algorithm =  lincs
integrator = sd
dt = 0.002
tinit = 0
nsteps = 10
nstcomm = 1
nstxout = 0
nstvout = 0
nstfout = 0
nstxtcout = 500
nstenergy = 500
nstlist = 10
nstlog=0 ; reduce log file size
ns_type = grid
vdwtype = cut-off
rlist = 0.8
rvdw = 0.8
rcoulomb = 0.8
coulombtype = cut-off
tc_grps =  System
tau_t   =  1.0
ld_seed =  -1
ref_t = 310
gen_temp = 310
gen_vel = yes
unconstrained_start = no
gen_seed = -1
Pcoupl = berendsen
pcoupltype = isotropic
tau_p = 4 
compressibility = 4.5e-5 
ref_p = 1.0 

I do realize that some of these settings are not ideal for a production run. I 
started with the real Charmm cutoffs + PME, etc, (which also gives the 
segfault) but this is what I am using right now for quick testing.

The only thing keeping me from filing a redmine issue is that if I remove my 
drug and do the FEP on one of the water molecules (using the FEP code listed 
above), I have no segfault. Therefore it is clearly related to the drug, whose 
parameters I built so I may have caused the problem somehow. Nevertheless, the 
drug runs fine in water and in vacuum without the FEP code, so I can't imagine 
what could be causing this segfault (also, the fact that it's a segfault means 
that I don't get any useful info from mdrun as to what might be going wrong).

Thank you,
Chris.


[gmx-users] clarification of equation 4.65

2013-06-27 Thread Christopher Neale
Dear Users:

Can anybody confirm that there is a mistake in equation 4.65 on page 82 of the 
manual for version 4.6.1?

Specifically, I think that the final term should be C4*(1-cos(4*theta)) and not 
C4*(1+cos(4*theta)), where the difference is the sign of the cosine term.
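For what it's worth, the proposed sign can be checked numerically: with the series V = 1/2 [C1(1+cos phi) + C2(1-cos 2phi) + C3(1+cos 3phi) + C4(1-cos 4phi)], the standard Fourier-to-Ryckaert-Bellemans coefficient conversion reproduces the potential exactly, which it does not with a 1+cos(4phi) term. A small sketch I wrote to verify the identity (not code from the manual):

```python
import math
import random

def v_fourier(phi, F):
    # Fourier dihedral with the proposed sign on the final term:
    # V = 1/2 [F1(1+cos phi) + F2(1-cos 2phi) + F3(1+cos 3phi) + F4(1-cos 4phi)]
    F1, F2, F3, F4 = F
    return 0.5 * (F1 * (1 + math.cos(phi))
                  + F2 * (1 - math.cos(2 * phi))
                  + F3 * (1 + math.cos(3 * phi))
                  + F4 * (1 - math.cos(4 * phi)))

def v_rb(phi, F):
    # Ryckaert-Bellemans form with psi = phi - 180 deg, using the standard
    # Fourier-to-RB coefficient conversion.
    F1, F2, F3, F4 = F
    C = [F2 + 0.5 * (F1 + F3),   # C0
         0.5 * (-F1 + 3 * F3),   # C1
         -F2 + 4 * F4,           # C2
         -2 * F3,                # C3
         -4 * F4,                # C4
         0.0]                    # C5
    cos_psi = math.cos(phi - math.pi)
    return sum(c * cos_psi ** n for n, c in enumerate(C))

random.seed(1)
F = [random.uniform(-5.0, 5.0) for _ in range(4)]
max_err = max(abs(v_fourier(0.01 * i, F) - v_rb(0.01 * i, F))
              for i in range(629))  # scan phi over 0..2*pi
print(max_err < 1e-10)  # True only with the 1 - cos(4*phi) sign
```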

Thank you,
Chris.


[gmx-users] cation-pi interaction

2013-06-02 Thread Christopher Neale
I've always wondered why people do this. There are no pi electrons in your 
force field so it seems to me that if you find a stable "cation-pi" interaction 
in your simulations then that just implies that a real cation-pi interaction is 
not necessary to stabilize this particular conformation. That is, unless one is 
just after pretty pictures.  Am I missing something?

Chris.

-- original message --

I'm trying to find the angle between a cation and a benzene ring from an MD
trajectory. I'm really confused: is there a tool for such calculations? I must
find the angle using the plane passing through the center of the ring and the
vector from the cation to the center of mass of the benzene ring.
How can I do it, please?
Thanks in advance.
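As far as I know there is no dedicated tool for this in GROMACS 4.x, but the geometry itself is only a few lines once the cation and ring-carbon coordinates have been extracted (e.g. with g_traj). This sketch is illustrative; the function names and idealized coordinates are made up:

```python
import math

def ring_center_and_normal(ring):
    # Geometric center of the ring atoms, and an approximate plane normal
    # from the cross product of two in-plane vectors (adequate for a
    # nearly planar benzene ring).
    n = len(ring)
    center = [sum(p[i] for p in ring) / n for i in range(3)]
    a = [ring[0][i] - center[i] for i in range(3)]
    b = [ring[2][i] - center[i] for i in range(3)]
    normal = [a[1] * b[2] - a[2] * b[1],
              a[2] * b[0] - a[0] * b[2],
              a[0] * b[1] - a[1] * b[0]]
    return center, normal

def cation_ring_angle(cation, ring):
    # Angle (degrees) between the ring plane and the center->cation vector,
    # i.e. 90 deg minus the angle to the plane normal.
    center, normal = ring_center_and_normal(ring)
    v = [cation[i] - center[i] for i in range(3)]
    dot = abs(sum(vi * ni for vi, ni in zip(v, normal)))
    nv = math.sqrt(sum(vi * vi for vi in v))
    nn = math.sqrt(sum(ni * ni for ni in normal))
    cos_to_normal = min(1.0, dot / (nv * nn))  # clamp rounding noise
    return 90.0 - math.degrees(math.acos(cos_to_normal))

# Idealized benzene in the z = 0 plane; cation directly above the center.
ring = [(math.cos(k * math.pi / 3), math.sin(k * math.pi / 3), 0.0)
        for k in range(6)]
print(round(cation_ring_angle((0.0, 0.0, 3.0), ring), 1))  # 90.0 on-axis
```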



[gmx-users] Usage of g_spatial

2013-05-14 Thread Christopher Neale
Sounds like a trjconv problem, not a g_spatial problem to me. You should center 
only the group that 
makes sense for your SDF -- you probably want to pick a single cation for this 
(then, separately, do another
run selecting a single anion; each will be used to generate a separate SDF). 
Then output the whole system.
This should work. If not, then I suggest that you start a new post with a 
relevant title and figure out what you
are doing wrong with trjconv ... you'll need to post actual commands there with 
associated error messages.

If you want to output only a portion of the system in your step 1, then you 
need to provide a -s input with the same 
number of atoms in step 2. Nevertheless, just use the whole system for trjconv 
for now since it should work.

Regarding your "seeing the whole distribution" for ANI from g_spatial output in 
VMD, I don't understand what you
see and what you expect/want to see. Try posting a figure somewhere online and 
post with its location and
a description of what you want to obtain.

If you want the distribution of all C cations around the average of all A 
anions, then you need to create a trajectory
that is A times as long as your real trajectory and reorder the coordinates so 
that each anion has a turn
being residue number N. Then center your whole trajectory about residue N. This 
is a hack, but it works for me
and I think is what you want. Also, the tool g_sdf does this on its own so you 
might look online for an older
version of it. I found it less useful than g_spatial for my purposes, but some 
users have found it to be the perfect 
tool.
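The reordering hack above can be sketched in plain Python (a hypothetical data layout: each frame is a list of per-residue coordinates, and `target_index` is the residue slot you later center on with trjconv):

```python
def expand_for_sdf(frames, anion_indices, target_index):
    """Emit one copy of each frame per anion, with that anion swapped
    into residue slot `target_index`. The result is len(anion_indices)
    times as long as the input, so centering on `target_index`
    averages over every anion."""
    out = []
    for frame in frames:
        for a in anion_indices:
            f = list(frame)  # copy so each anion gets its own turn
            f[target_index], f[a] = f[a], f[target_index]
            out.append(f)
    return out

# Two toy frames, anions in slots 1 and 2 -> four output frames
traj = [["w0", "an1", "an2"], ["w0b", "an1b", "an2b"]]
print(len(expand_for_sdf(traj, [1, 2], 1)))  # 4
```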

Chris.

-- original message --

Yes, trjconv works without tpr. Thanks for that.

I want to see the distribution of anion around cation. Is it not possible
to see an averaged spatial distribution of all anions around all cations?
Do I need to make a selection of a single cation molecule?

I did the selection of a single cation molecule which is approximately at
the center of the box and did the following

step 1: trjconv -s 1md.pdb -f 11md_gro.pdb -o 11md_gro_mod.pdb -center -ur
compact -pbc none
when it asked for group for centering and output, I selected the whole
system for both.

step 2: trjconv -s 1md.pdb -f 11md_gro_mod.pdb -o 11md_gro_mod1.pdb -fit
rot+trans
when it asked for group for least square once again I selected the whole
system

I selected the whole system, as a previous discussion
http://lists.gromacs.org/pipermail/gmx-users/2010-May/050879.html

suggested the same because of an error that was happening in step 2.

Fatal error: Index[2385] 2410 is larger than the number of atoms in
the trajectory file (2409).
Since I was getting the same error if I made any form of selections in
the first two steps, I output the whole system.

step 3: g_spatial. and used ANI as group to generate SDF and a single
cation residue as the group to output coords

When I loaded the resultant cube in vmd I could see the whole
distribution.

Where am I going wrong?



[gmx-users] line wrapping on the gmx-users list

2013-05-11 Thread Christopher Neale
Perhaps it is just the way I am doing things then. From my viewing, there were 
4 posts yesterday that required me to scroll horizontally when reading in my 
browser (This line too will come out requiring scrolling on the gmx list page 
that I find most useful):

http://lists.gromacs.org/pipermail/gmx-users/2013-May/081144.html
http://lists.gromacs.org/pipermail/gmx-users/2013-May/081148.html
http://lists.gromacs.org/pipermail/gmx-users/2013-May/081156.html
http://lists.gromacs.org/pipermail/gmx-users/2013-May/081165.html

Since http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List is also 
available, I guess that I have that option too, but I much prefer the long 
monthly list with links that colour according to my browser history.

I asked the question years ago about how to properly reply to a post and was 
told that I would have to reply to
the specific email of that post. However, I choose to receive digests, so I 
don't get those emails
and thus can not reply to them.

Thanks for your help.
Chris.



On Sat, May 11, 2013 at 5:15 AM, Christopher Neale <
chris.neale at mail.utoronto.ca> wrote:

> The lack of line-wrapping makes it a pain to read. It happens to my emails
> that are posted on this list
> (and I have seen others), unless I put explicit line-breaks in my posts,
> which I often forget to do.
>
> Is there anything that can be done?
>

I looked at the mailman docs, and saw no such feature. It's not clear that
we would want to use one if it existed. The days of 80-column everything
are gone.

IMHO, this should be automatically handled, and I often don't look too far
> into posts that are not line-wrapped online because it's tough to read and
> retain.
>

Yeah, but I would look to my mail client to configure mail the way I want
to send it and see it, not a mail server to configure how everybody has to
see it. With all due respect, I have long wondered why your email workflow
is unable to reply to a thread correctly. :-) Perhaps it is time for an
update!

Cheers,

Mark

I guess this must have been discussed before, but I searched and didn't
> come up with anything.
>
> Thank you,
> Chris.



[gmx-users] line wrapping on the gmx-users list

2013-05-10 Thread Christopher Neale
The lack of line-wrapping makes it a pain to read. It happens to my emails that 
are posted on this list 
(and I have seen others), unless I put explicit line-breaks in my posts, which 
I often forget to do.

Is there anything that can be done?

IMHO, this should be automatically handled, and I often don't look too far into 
posts that are not line-wrapped online because it's tough to read and retain.

I guess this must have been discussed before, but I searched and didn't come up 
with anything.

Thank you,
Chris.


[gmx-users] keeping water (entirely) out of the bilayer core

2013-05-08 Thread Christopher Neale
Dear Jianguo, XAvier, and Dallas:

Thank you for your great suggestions. I am making progress, but still have not 
found an efficient way
to do this. I don't have any particular questions in this post, but wanted to 
provide an update and
also to see if anybody has any additional ideas based on what I have tried.

Jianguo: I have successfully interfaced plumed with gromacs and was able to do 
what I wanted. 
My initial (naive) approach is inefficient (speed reduced from 26 ns/day to 1.7 
ns/day), although I 
am getting some help from the plumed developers to find a more efficient 
approach.

XAvier: I was unable to find a repulsive potential that I could apply to water 
molecules, excepting 
simply increasing the LJ repulsion. I suspect that this approach is not going 
to work for me since the 
lipid-water interface is rugged and lipid tails sometimes approach the surface 
and, therefore, 
a LJ repulsion approach might end up creating large vacuums at the interfacial 
region.

Dallas and Jianguo: This might work. I would need to find a way to anchor 
virtual atoms at the bilayer center.
Unfortunately, other parts of my approach make it undesirable to have an r^12 
(LJ) or exponential (Buckingham)
repulsion. Specifically, I'd like to be able to not only keep water molecules 
out of the bilayer core
but also force them out if they are already inside, and these rapidly rising 
potentials will likely lead
to exploding systems if water molecules are already deeply buried in the 
bilayer core. However, there might 
be a way forward with the free energy code using some type of soft core 
potential on the LJ via the 
free energy code. I'm going to try this and will report back.

Also, I did get gromacs to do this with the following 2 code changes in 
src/mdlib/pull.c:

(i) in static void do_pull_pot(), immediately after calling 
get_pullgrp_distance(), I
added the following code
if (g > 1 && dev[0] > 0.0)
{
    dev[0] = 0.0;
}

When used with ePull == epullUMBRELLA and pull->eGeom == epullgDIST, this 
ensures that the pull force only
contributes when the displacement is smaller than the reference distance, but 
is not applied when
the displacement is larger than the reference distance.
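In other words, modification (i) turns the harmonic umbrella into a one-sided (flat-bottom) restraint. A sketch of the resulting force along the pull coordinate (a simplification of what the patched do_pull_pot() effectively computes; z, z_ref, and k are illustrative names, not GROMACS symbols):

```python
def flat_bottom_force(z, z_ref, k):
    """One-sided umbrella along the pull coordinate: a harmonic
    restoring force when the displacement is below z_ref (water too
    close to the bilayer center), and zero force otherwise."""
    dev = z - z_ref
    if dev > 0.0:
        return 0.0       # beyond the reference distance: unbiased
    return -k * dev      # inside: push outward, away from the core

print(round(flat_bottom_force(1.0, 1.9, 500.0), 1))  # 450.0 (pushed out)
print(flat_bottom_force(2.5, 1.9, 500.0))            # 0.0 (left alone)
```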

(ii) I removed the following code from get_pullgrps_dr():

if (max_dist2 >= 0 && dr2 > 0.98*0.98*max_dist2)
{
    gmx_fatal(FARGS, "Distance of pull group %d (%f nm) is larger than 0.49 times the box size (%f)",
              g, sqrt(dr2), sqrt(max_dist2));
}

Normally, I suppose that this code is there so that atoms are not being pulled 
in oscillating directions when
they are 1/2 box dimension away from the reference position (although I am not 
exactly sure why that would
be an error). Nevertheless, this error is not necessary in my case because I 
have also removed any force when atoms are farther away than the reference 
distance (in modification (i), above).

After making these modifications, I then added about 10,000 separate pull 
groups, one for each water oxygen 
atom, each of which keeps that water molecule out of the bilayer core along Z.

The only problem with this approach is that it is really inefficient: with 16 
threads over 8 cores, I get 
26 ns/day without the 10,000 pull groups and only 4 ns/day with the pull 
groups, probably because so much 
information is being passed around between threads for the water molecules that 
are really far away from
the bilayer center.

Thank you,
Chris.



[gmx-users] mdp-settings for charmm36 and lipid apl values

2013-05-08 Thread Christopher Neale
It is a really bad idea to use standard tip3p with charmm36 lipids (see the 
Piggot paper that you referenced and also Sapay, N. et al. 2010 J. Comp. Chem. 
32, 1400-1410 + probably others).

dt 0.001 with nstlist 5 seems like overkill on the nstlist update frequency 
(not a problem though).

Here's how I do charmm36 lipid simulations:

constraints = all-bonds
lincs-iter =  1
lincs-order =  6
constraint_algorithm =  lincs
integrator = sd
dt = 0.002
tinit = 0
nsteps = 500
nstcomm = 1
nstxout = 500
nstvout = 500
nstfout = 500
nstxtcout = 5
nstenergy = 5
nstlist = 10
nstlog=0 ; reduce log file size
ns_type = grid
vdwtype = switch
rlist = 1.2
rlistlong = 1.3
rvdw = 1.2
rvdw-switch = 0.8
rcoulomb = 1.2
coulombtype = PME
ewald-rtol = 1e-5
optimize_fft = yes
fourierspacing = 0.12
fourier_nx = 0
fourier_ny = 0
fourier_nz = 0
pme_order = 4
tc_grps =  System
tau_t   =  1.0
ld_seed =  -1
ref_t = 310
gen_temp = 310
gen_vel = yes
unconstrained_start = no
gen_seed = -1
Pcoupl = berendsen
pcoupltype = semiisotropic
tau_p = 4 4
compressibility = 4.5e-5 4.5e-5
ref_p = 1.0 1.0


For a pure POPC bilayer with Charmm36 lipids and tips3p water, I get ~ 0.64 
nm^2/lipid (not using grid-mat, just looking at box dimensions).
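The box-dimension estimate is just the lateral box area divided by the number of lipids per leaflet. A trivial sketch (the 6.4 nm box and 128 lipids below are made-up illustrative numbers):

```python
def area_per_lipid(lx_nm, ly_nm, n_lipids):
    """Average APL from the lateral box dimensions, assuming a
    symmetric bilayer with n_lipids/2 lipids per leaflet."""
    return lx_nm * ly_nm / (n_lipids / 2)

print(round(area_per_lipid(6.4, 6.4, 128), 2))  # 0.64 nm^2/lipid
```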

If I use regular tip3p instead, the APL decreases a lot and eventually forms a 
gel. In simple terms of APL and phase, you can get the same results as tips3p 
(0.64 nm^2/lipid) if you use tip4p (and spce is not too bad, but is still not 
as close as tip4p). Note tip4p will run faster than tips3p by a large margin. 
Nevertheless, I use tips3p in all of my work with charmm36. 

Also, note that ions can shrink your APL, particularly Na+ (and divalent 
cations are even worse). My simulations use 50 mM KCl, which basically doesn't 
affect the average APL (although 50 mM NaCl does noticeably reduce the APL).

Finally, I am not convinced that a per-molecule area per lipid is a useful 
quantity to compare to experimental areas per lipid. I haven't looked at 
Grid-Mat myself, but there must be a lot of assumptions underlying any analysis 
that tried to assign an "area" to a single 3D lipid. If I were you, I'd be 
looking for expt. results of APL for POPC vs. POPE vs. 70%POPC/30%POPE. Also, 
are you entirely sure that you didn't mix up the POPC vs. POPE values? It looks 
to me enough like a simple labelling error that it warrants a second look.

Chris.



-- original message --

I've been experimenting with simulations of mixed bilayers (512 lipids in
total, 70% POPC, 30% POPE) using the charmm36 parameter set in gromacs, and
have a couple of questions. I know this has been discussed before, but I'd
appreciate some input nonetheless :-)

The relevant sections of my mdp-file are pasted below:

; Start time and timestep in ps
tinit= 0
dt   = 0.001
nsteps   = 1

; NEIGHBORSEARCHING PARAMETERS
; nblist update frequency
nstlist  = 5
; ns algorithm (simple or grid)
ns_type  = grid
; Periodic boundary conditions: xyz, no, xy
pbc  = xyz
periodic_molecules   = no
; nblist cut-off
rlist= 1.2
; long-range cut-off for switched potentials
rlistlong= 1.4

; OPTIONS FOR ELECTROSTATICS AND VDW
; Method for doing electrostatics
coulombtype  = PME
rcoulomb-switch  = 0
rcoulomb = 1.2
; Relative dielectric constant for the medium and the reaction field
epsilon_r= 1
epsilon_rf   = 1
; Method for doing Van der Waals
vdw-type = switch
; cut-off lengths
rvdw-switch  = 0.8
rvdw = 1.2
; Apply long range dispersion corrections for Energy and Pressure
DispCorr = No

; OPTIONS FOR WEAK COUPLING ALGORITHMS
; Temperature coupling
tcoupl   = V-rescale
nsttcouple   = -1
nh-chain-length  = 10
; Groups to couple separately
tc-grps  = System
; Time constant (ps) and reference temperature (K)
tau_t= 0.1
ref_t= 300
; Pressure coupling
pcoupl   = Parrinello-Rahman
pcoupltype   = semiisotropic
nstpcouple   = -1

As far as I can tell from earlier discussions on the list, and also from
reading the Piggott et al. paper in JCTC, these are the correct settings for
charmm36.

After a simulation of ~50 ns, I use GridMatMD to calculate the area per
headgroup of POPC and POPE, respectively, and get what I think are not 100%
acceptable results (but maybe they are)

For POPC, I get 59.7 A^2, and for POPE, I get 63.1 A^2.

The value for POPE would have been fine I suppose if it hadn't been for the
fact that the APL for POPC is smaller. Should it not be larger than POPE?

I notice in the Piggott-paper that they in the supplement for some
simulations of POPC also get APL's of around 59-60 (without POPE of
c

[gmx-users] Usage of g_spatial

2013-05-07 Thread Christopher Neale
See my previous response. You don't need a .tpr for trjconv (unless you are 
doing pbc operations). trjconv -h will also tell you that. Just use trjconv -s 
my.pdb

Chris.

-- original message --

Sorry for my ignorance, since the help manual uses tpr file for the first
two steps before running g_spatial, could you please tell me how I should
do them without tpr. Also when I just run g_spatial (without the first two
steps) I get the distribution of all the anions around cations, so it is
not informative.

Thank you so much for all the help


[gmx-users] Usage of g_spatial

2013-05-07 Thread Christopher Neale
Try this:

trjconv -s my.pdb -f my.pdb -o mymod.pdb
g_spatial -s mymod.pdb -f mymod.pdb

For the pre-processing, there must also be AMBER tools that will do this for 
you if for some reason the above does not work for you (e.g. Option -pbc mol 
requires a .tpr file for the -s option)

Chris.

-- original message --

I am analyzing trajectories of an ionic liquid system generated using AMBER
MD package. The force field parameters are typical of this IL system and
not from the existing libraries. Hence, it is difficult for me to generate
a tpr file. I understand that running the g_spatial command needs
pre-processing of the coordinates (Steps 1 and 2 in g_spatial help) but I
need tpr file for that. So there is  no way of doing g_spatial without a
tpr file is it? Then how do I generate tpr file for my system?

Thanks for all the help


[gmx-users] keeping water (entirely) out of the bilayer core

2013-05-05 Thread Christopher Neale
Thank you for the advice Jianguo. This seems like a good way forward. I was 
hoping not to have to learn how to use new software, but perhaps it is time.

Thank you,
Chris.


-- original message --

Hi Chris,

Perhaps you can try PLUMED+GROMACS. In that case, you can define a collective 
variable as 
mindist between the water molecules and the lipid tails. And then apply wall 
potential to keep 
this CV above a certain value. 

Jianguo 



Dear users:

I am interested in running simulations of lipid bilayers in which I keep all 
water molecules out of
the bilayer core (not just statistically, but absolutely). However, I have been 
unable to figure out
how to do it. I'll list what I have tried in the hope that others have some 
ideas or even perhaps 
know how to do this with standard gromacs.
...


[gmx-users] keeping water (entirely) out of the bilayer core

2013-05-05 Thread Christopher Neale
Thank you XAvier,

The only thing I could find in the 4.6.1 manual about repulsive interactions is 
the repulsive part of the LJ and Buckingham potentials, in addition to 
tabulated potentials. Were you referring to one of these?

Chris.

-- original message --

You could define a repulsive potential that would apply to water molecules. I 
am not sure how it is called but it is available and described in the manual. 

On May 4, 2013, at 2:46, Christopher Neale  
wrote:

> Dear users:
> 
> I am interested in running simulations of lipid bilayers in which I keep all 
> water molecules out of the bilayer core
> (not just statistically, but absolutely). However, I have been unable to 
> figure out how to do it. 
> I'll list what I have tried in the hope that others have some ideas or even 
> perhaps know how to do this 
> with standard gromacs.


[gmx-users] keeping water (entirely) out of the bilayer core

2013-05-03 Thread Christopher Neale
Dear users:

I am interested in running simulations of lipid bilayers in which I keep all 
water molecules out of the bilayer core
(not just statistically, but absolutely). However, I have been unable to figure 
out how to do it. 
I'll list what I have tried in the hope that others have some ideas or even 
perhaps know how to do this 
with standard gromacs.

Everything that I have tried revolves around using the pull code, setting the 
entire lipid bilayer as the reference 
group, and having thousands of pulled groups -- one for each water molecule. 
Also, I modified the gromacs
source code in mdlib/pull.c (version 4.6.1) so that the force is only applied 
when the displacement is smaller than 
the desired distance and not when the displacement is larger than the specified 
distance (to keep water out but
then to otherwise let it go anywhere without bias):

static void do_pull_pot(int ePull,
t_pull *pull, t_pbc *pbc, double t, real lambda,
real *V, tensor vir, real *dVdl)
{
int    g, j, m;
dvec   dev;
double ndr, invdr;
real   k, dkdl;
t_pullgrp *pgrp;

/* loop over the groups that are being pulled */
*V    = 0;
*dVdl = 0;
for (g = 1; g < 1+pull->ngrp; g++)
{
pgrp = &pull->grp[g];
get_pullgrp_distance(pull, pbc, g, t, pgrp->dr, dev);

/* ## HERE IS THE CODE CHANGE */
if (dev[0] > 0.0)
{
    dev[0] = 0.0;
}
/* ## End code snippet */


The most relevant lines from the .mdp file were:

pull = umbrella
pull_geometry= distance
pull_dim = N N Y
pull_start   = no
pull_ngroups = 9000
pull_group0  = POPC

pull_group1  = r_1_&_OW
pull_init1   = 1.9
pull_k1  = 500

pull_group2  = r_2_&_OW
pull_init2   = 1.9
pull_k2  = 500

... etc...


The problem with this was that it crashed immediately with an error that my 
pulling distance was 
greater than 1/2 of the box vector:

Fatal error:
Distance of pull group 165 (5.353939 nm) is larger than 0.49 times the box size 
(5.395960)

Pretty obvious in retrospect. If I could get this error to go away, then 
everything should be fine because I have modified the code so that forces are 
zero at large displacements. However, I am worried that if I simply modify 
grompp to bypass this error then I might get some type of strange and possibly 
silent error in mdrun. Any thoughts on this are really appreciated.

For my second try, I used pull_geometry = direction-periodic (see more details 
below), but that also didn't work 
because if I set pull_vec1 = 0 0 1, then everything gets pulled to the upper 
part of the box (and LINCS errors 
ensue), rather than simply pulling away from the bilayer.

Pcoupl= berendsen
pcoupltype   = semiisotropic
compressibility= 4.5e-5 0
pull = umbrella
pull_geometry= direction-periodic
pull_start   = no
pull_ngroups = 9000
pull_group0  = POPC

pull_group1  = r_1_&_OW
pull_init1   = 1.9
pull_k1  = 500
pull_vec1= 0 0 1

pull_group2  = r_2_&_OW
pull_init2   = 1.9
pull_k2  = 500
pull_vec2= 0 0 1

... etc...


If anybody has an idea about how I could get this to work with standard gromacs 
or with a modified version, I would be very grateful to hear your thoughts. 

Thank you,
Chris.



[gmx-users] How to rescue trr trajectory when two or more corrupted frames exist

2013-04-27 Thread Christopher Neale
This is great advice from Mark. If you don't get around to doing analysis 
frequently,
then you can at least set up your run script to execute gmxcheck on all 
relevant files prior to each time 
you restart a run in which you load in a .cpt file and to error out if there is 
some corruption found.

Chris.

-- original message --

AFAIK there is no magic number header. However, .trr files have frames of
constant size, so you can make a .tpr that will write a .trr file that will
include the longest period of your nst[xfv]out. Some Unix tool like dd (or
maybe hexdump) can probably tell you the size of that repeating unit. Then
you can use dd to write increasingly larger multiples of that size to get a
subset that includes all the trajectory before the *second* corruption.
Back up your files first! I suggest you write two periods to check my
assertion that the frames ought to be of constant size. Practice on a small
trajectory you don't care about.

Then you'll be in a position to judge the start point of the first complete
frame after the first corruption, which will allow you to construct a
fancier dd solution to get the remainder of the trajectory from the
post-first-corruption fragment. Then trjcat -h to learn how to stitch the
parts together.
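The truncation step of Mark's dd recipe can be sketched in Python (this assumes you have already verified that frames are of constant size; real corruption may change a frame's size, in which case only the pre-corruption prefix is this easy):

```python
def keep_whole_frames(data: bytes, frame_size: int) -> bytes:
    """Largest prefix of `data` that is an exact multiple of frame_size;
    the equivalent of `dd if=traj.trr of=part.trr bs=<frame_size>
    count=<n>` with n chosen as large as possible."""
    n_frames = len(data) // frame_size
    return data[:n_frames * frame_size]

# Toy example: 25 bytes of 10-byte 'frames' -> keep the first 2 frames
print(len(keep_whole_frames(b"x" * 25, 10)))  # 20
```

The same arithmetic, run on the post-corruption fragment once you have located the start of the first complete frame after the corruption, gives the pieces that trjcat can stitch together.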

Everybody learn: do (parts of) your analysis while your simulation is
running, not after it finishes! :-)

Mark


[gmx-users] How to rescue trr trajectory when two or more corrupted frames exist

2013-04-26 Thread Christopher Neale
If the corrupted frame is at 1000 ps, and you can not get at the data after the 
corrupted frame by using:
trjconv -b 1100 , then I suspect that you don't have any good solution here, 
unless the developers put in
some type of magic number at the start of each frame that you could search for 
with a c program (perhaps
contact the developers of gmx_rescue for advice?)

A related question is how you ended up with a long .trr file that has 
corruption in the middle.
Did that corruption happen at runtime or later due to a filesystem problem? If 
it was a later filesystem 
problem, then you might look into backup solutions. If it happened at runtime, 
then it would be 
useful to figure out what happened because ideally gromacs would not forever 
append to corrupted files
since many of us don't look at information in the .trr files until very late in 
the analysis and an optional
0.1% loss in efficiency might be reasonable to ensure that this doesn't happen 
(at least our system admins 
would tell us so). For that, you'll probably need to provide details, including 
what version of gromacs you 
used. 

Chris.


-- original message --

I have some corrupted frames in different trajectories. gmxcheck with .trr 
trajectories gives extraordinary positions or velocities and with .xtc 
trajectories gives rise to the magic number error. I am aware of the program 
gmx_rescue kindly offered to us by its developers. However, this program can 
only work with .xtc files. It is possible to rescue .trr files when there is a 
maximum of one corrupted frame by checking the size of healthy frames, chopping 
the parts of the trajectory before and after, using e.g. programs head and tail 
with the corresponding integer multiples of one healthy frame in bytes and 
stitching them together. However, when there is two or more corrupted frames in 
different locations, although it is not hard to spot the exact locations, it is 
no longer possible to remove the problematic frames size-wise (or at least it 
is less likely to succeed than winning the lottery), since the size of each 
corrupted frame is non-standard. Is there any corresponding software to 
gmx_rescue that can be used with .trr files? Is there any other recent program 
or any other way of coping with the problem? I did not post any details of my 
systems or the specific error messages I get because I believe my question is 
clear.

Thank you in advance!

Best,
Yiannis


[gmx-users] Re: How to use multiple nodes, each with 2 CPUs and 3 GPUs

2013-04-25 Thread Christopher Neale
Dear Szilárd:

Thank you for your assistance. I understand the importance of reading the 
documentation and I read it about 5 times before I posted to this list. In 
fact, it's kind of buried in my initial post, but I did run MPI gromacs with 
mpirun -np 3 the first time and it didn't work.

I have finally realized that my problems are based on a PBS variable. 
Previously, I was using 
#PBS -l walltime=00:10:00,nodes=2:ppn=12:gpus=3:shared
I chose that because it was what was recommended on the webpage of the new 
cluster that I am using.

However, I can only get things to work when I use ppn=3 instead:
#PBS -l walltime=00:10:00,nodes=2:ppn=3:gpus=3:shared

That surprises me, because I thought that the mpirun -np option should take 
care of things, but in any event this has solved my general problem, which was 
that I could not get it running across multiple nodes at all. Now that I have 
this, I will take a look at your great suggestions about ranks, which I hadn't 
had the chance to explore because my #PBS ppn setting was stopping these runs 
from ever getting going.

Thank you again for your help.

Chris.

-- original message --

Hi,

You should really check out the documentation on how to use mdrun 4.6:
http://www.gromacs.org/Documentation/Acceleration_and_parallelization#Running_simulations

Brief summary: when running on GPUs every domain is assigned to a set
of CPU cores and a GPU, hence you need to start as many PP MPI ranks
per node as the number of GPUs (or pass a PP-GPU mapping manually).


Now, there are some slight complications with the inconvenient
hardware setup of the machines you are using. When the number of cores
is not divisible by the number of GPUs, you'll end up wasting cores.
In your case only 3*5=15 cores per compute node will be used. What
will make things even worse, unless you use "-pin on" (which is the
default behavior *only* if you use all cores in a node), is that mdrun
will not lock threads to cores and will let them be moved around by
the OS which can cause severe performance degradation .

However, you can actually work around these issues and get good
performance by using separate PME ranks. You can just try using 3 PP +
1 PME per compute node with four OpenMP threads each, i.e:
mpirun -np 4*Nnodes mdrun_mpi -npme 1 -ntomp 4
If you are lucky with the PP/PME load, this should work well and even
if you get some PP-PME imbalance, this should hurt performance way
less than the inconvenient 3x5 threads setup.
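The core-counting arithmetic above is worth making explicit. A toy sketch (the 16 cores per node is an assumption, chosen to be consistent with the 3*5=15 figure):

```python
def rank_layout(cores_per_node, gpus_per_node):
    """Cores per PP rank when one PP rank is pinned to each GPU;
    any remainder cores are simply left idle."""
    per_rank = cores_per_node // gpus_per_node
    wasted = cores_per_node - per_rank * gpus_per_node
    return per_rank, wasted

print(rank_layout(16, 3))  # (5, 1): three 5-core PP ranks, 1 core idle
```

With 3 PP + 1 PME ranks per node at 4 OpenMP threads each, all 16 cores are used instead.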

Cheers,
-- 
Szilárd



[gmx-users] How to use multiple nodes, each with 2 CPUs and 3 GPUs

2013-04-25 Thread Christopher Neale
Thank you Berk,

I am still getting an error when I try with MPI compiled gromacs 4.6.1 and -np 
set as you suggested.

I ran like this:
mpirun -np 6 /nics/b/home/cneale/exe/gromacs-4.6.1_cuda/exec2/bin/mdrun_mpi 
-notunepme -deffnm md3 -dlb yes -npme -1 -cpt 60 -maxh 0.1 -cpi md3.cpt -nsteps 
50 -pin on

Here is the .log file output:
Log file opened on Thu Apr 25 10:24:55 2013
Host: kfs064  pid: 38106  nodeid: 0  nnodes:  6
Gromacs version:VERSION 4.6.1
Precision:  single
Memory model:   64 bit
MPI library:MPI
OpenMP support: enabled
GPU support:enabled
invsqrt routine:gmx_software_invsqrt(x)
CPU acceleration:   AVX_256
FFT library:fftw-3.3.3-sse2
Large file support: enabled
RDTSCP usage:   enabled
Built on:   Tue Apr 23 12:43:12 EDT 2013
Built by:   cne...@kfslogin2.nics.utk.edu [CMAKE]
Build OS/arch:  Linux 2.6.32-220.4.1.el6.x86_64 x86_64
Build CPU vendor:   GenuineIntel
Build CPU brand:Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
Build CPU family:   6   Model: 45   Stepping: 7
Build CPU features: aes apic avx clfsh cmov cx8 cx16 htt lahf_lm mmx msr 
nonstop_tsc pcid pclmuldq pdcm pdpe1gb popcnt pse rdtscp sse2 sse3 sse4.1 
sse4.2 ssse3 tdt x2apic
C compiler: /opt/intel/composer_xe_2011_sp1.11.339/bin/intel64/icc 
Intel icc (ICC) 12.1.5 20120612
C compiler flags:   -mavx   -std=gnu99 -Wall   -ip -funroll-all-loops  -O3 
-DNDEBUG
C++ compiler:   /opt/intel/composer_xe_2011_sp1.11.339/bin/intel64/icpc 
Intel icpc (ICC) 12.1.5 20120612
C++ compiler flags: -mavx   -Wall   -ip -funroll-all-loops  -O3 -DNDEBUG
CUDA compiler:  nvcc: NVIDIA (R) Cuda compiler driver;Copyright (c) 
2005-2012 NVIDIA Corporation;Built on Thu_Apr__5_00:24:31_PDT_2012;Cuda 
compilation tools, release 4.2, V0.2.1221
CUDA driver:5.0
CUDA runtime:   4.20


 :-)  G  R  O  M  A  C  S  (-:

   Groningen Machine for Chemical Simulation

:-)  VERSION 4.6.1  (-:

Contributions from Mark Abraham, Emile Apol, Rossen Apostolov, 
   Herman J.C. Berendsen, Aldert van Buuren, Pär Bjelkmar,  
 Rudi van Drunen, Anton Feenstra, Gerrit Groenhof, Christoph Junghans, 
Peter Kasson, Carsten Kutzner, Per Larsson, Pieter Meulenhoff, 
   Teemu Murtola, Szilard Pall, Sander Pronk, Roland Schulz, 
Michael Shirts, Alfons Sijbers, Peter Tieleman,

   Berk Hess, David van der Spoel, and Erik Lindahl.

   Copyright (c) 1991-2000, University of Groningen, The Netherlands.
 Copyright (c) 2001-2012,2013, The GROMACS development team at
Uppsala University & The Royal Institute of Technology, Sweden.
check out http://www.gromacs.org for more information.

 This program is free software; you can redistribute it and/or
   modify it under the terms of the GNU Lesser General Public License
as published by the Free Software Foundation; either version 2.1
 of the License, or (at your option) any later version.

:-)  /nics/b/home/cneale/exe/gromacs-4.6.1_cuda/exec2/bin/mdrun_mpi  (-:


 PLEASE READ AND CITE THE FOLLOWING REFERENCE 
B. Hess and C. Kutzner and D. van der Spoel and E. Lindahl
GROMACS 4: Algorithms for highly efficient, load-balanced, and scalable
molecular simulation
J. Chem. Theory Comput. 4 (2008) pp. 435-447
  --- Thank You ---  


 PLEASE READ AND CITE THE FOLLOWING REFERENCE 
D. van der Spoel, E. Lindahl, B. Hess, G. Groenhof, A. E. Mark and H. J. C.
Berendsen
GROMACS: Fast, Flexible and Free
J. Comp. Chem. 26 (2005) pp. 1701-1719
  --- Thank You ---  


 PLEASE READ AND CITE THE FOLLOWING REFERENCE 
E. Lindahl and B. Hess and D. van der Spoel
GROMACS 3.0: A package for molecular simulation and trajectory analysis
J. Mol. Mod. 7 (2001) pp. 306-317
  --- Thank You ---  


 PLEASE READ AND CITE THE FOLLOWING REFERENCE 
H. J. C. Berendsen, D. van der Spoel and R. van Drunen
GROMACS: A message-passing parallel molecular dynamics implementation
Comp. Phys. Comm. 91 (1995) pp. 43-56
  --- Thank You ---  


For optimal performance with a GPU nstlist (now 10) should be larger.
The optimum depends on your CPU and GPU resources.
You might want to try several nstlist values.
Can not increase nstlist for GPU run because verlet-buffer-drift is not set or 
used
Input Parameters:
   integrator   = sd
   nsteps   = 500
   init-step= 0
   cutoff-scheme= Verlet
   ns_type  = Grid
   nstlist  = 10
   ndelta   = 2
   nstcomm  = 100
   comm-mode= Linear
   nstlog   = 0
   nstxout  = 500
   nstvout  = 500
   nstfout  = 500
   nstcalcenergy= 100
   ns

[gmx-users] How to use multiple nodes, each with 2 CPUs and 3 GPUs

2013-04-24 Thread Christopher Neale
Dear Users:

I am having trouble getting any speedup by using more than one node, 
where each node has 2 8-core cpus and 3 GPUs. I am using gromacs 4.6.1.

I saw this post, indicating that the .log file output about number of gpus used 
might not be accurate:
http://lists.gromacs.org/pipermail/gmx-users/2013-March/079802.html

Still, I'm getting 21.2 ns/day on 1 node, 21.2 ns/day on 2 nodes, and 20.5 
ns/day on 3 nodes. 
Somehow I think I have not configured the mpirun -np and mdrun -ntomp correctly 
(although I have tried numerous combinations).

On 1 node, I can just run mdrun without mpirun like this:
http://lists.gromacs.org/pipermail/gmx-users/2013-March/079802.html

For that run, the top of the .log file is:
Log file opened on Wed Apr 24 11:36:53 2013
Host: kfs179  pid: 59561  nodeid: 0  nnodes:  1
Gromacs version:VERSION 4.6.1
Precision:  single
Memory model:   64 bit
MPI library:thread_mpi
OpenMP support: enabled
GPU support:enabled
invsqrt routine:gmx_software_invsqrt(x)
CPU acceleration:   AVX_256
FFT library:fftw-3.3.3-sse2
Large file support: enabled
RDTSCP usage:   enabled
Built on:   Tue Apr 23 12:59:48 EDT 2013
Built by:   cne...@kfslogin2.nics.utk.edu [CMAKE]
Build OS/arch:  Linux 2.6.32-220.4.1.el6.x86_64 x86_64
Build CPU vendor:   GenuineIntel
Build CPU brand:Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
Build CPU family:   6   Model: 45   Stepping: 7
Build CPU features: aes apic avx clfsh cmov cx8 cx16 htt lahf_lm mmx msr 
nonstop_tsc pcid pclmuldq pdcm pdpe1gb popcnt pse rdtscp sse2 sse3 sse4.1 
sse4.2 ssse3 tdt x2apic
C compiler: /opt/intel/composer_xe_2011_sp1.11.339/bin/intel64/icc 
Intel icc (ICC) 12.1.5 20120612
C compiler flags:   -mavx   -std=gnu99 -Wall   -ip -funroll-all-loops  -O3 
-DNDEBUG
C++ compiler:   /opt/intel/composer_xe_2011_sp1.11.339/bin/intel64/icpc 
Intel icpc (ICC) 12.1.5 20120612
C++ compiler flags: -mavx   -Wall   -ip -funroll-all-loops  -O3 -DNDEBUG
CUDA compiler:  nvcc: NVIDIA (R) Cuda compiler driver;Copyright (c) 
2005-2012 NVIDIA Corporation;Built on Thu_Apr__5_00:24:31_PDT_2012;Cuda 
compilation tools, release 4.2, V0.2.1221
CUDA driver:5.0
CUDA runtime:   4.20
...

...
Initializing Domain Decomposition on 3 nodes
Dynamic load balancing: yes
Will sort the charge groups at every domain (re)decomposition
Initial maximum inter charge-group distances:
two-body bonded interactions: 0.431 nm, LJ-14, atoms 101 108
  multi-body bonded interactions: 0.431 nm, Proper Dih., atoms 101 108
Minimum cell size due to bonded interactions: 0.475 nm
Maximum distance for 7 constraints, at 120 deg. angles, all-trans: 1.175 nm
Estimated maximum distance required for P-LINCS: 1.175 nm
This distance will limit the DD cell size, you can override this with -rcon
Using 0 separate PME nodes, per user request
Scaling the initial minimum size with 1/0.8 (option -dds) = 1.25
Optimizing the DD grid for 3 cells with a minimum initial size of 1.469 nm
The maximum allowed number of cells is: X 5 Y 5 Z 6
Domain decomposition grid 3 x 1 x 1, separate PME nodes 0
PME domain decomposition: 3 x 1 x 1
Domain decomposition nodeid 0, coordinates 0 0 0

Using 3 MPI threads
Using 5 OpenMP threads per tMPI thread

Detecting CPU-specific acceleration.
Present hardware specification:
Vendor: GenuineIntel
Brand:  Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
Family:  6  Model: 45  Stepping:  7
Features: aes apic avx clfsh cmov cx8 cx16 htt lahf_lm mmx msr nonstop_tsc pcid 
pclmuldq pdcm pdpe1gb popcnt pse rdtscp sse2 sse3 sse4.1 sse4.2 ssse3 tdt x2apic
Acceleration most likely to fit this hardware: AVX_256
Acceleration selected at GROMACS compile time: AVX_256


3 GPUs detected:
  #0: NVIDIA Tesla M2090, compute cap.: 2.0, ECC: yes, stat: compatible
  #1: NVIDIA Tesla M2090, compute cap.: 2.0, ECC: yes, stat: compatible
  #2: NVIDIA Tesla M2090, compute cap.: 2.0, ECC: yes, stat: compatible

3 GPUs auto-selected for this run: #0, #1, #2

Will do PME sum in reciprocal space.
...

...

 R E A L   C Y C L E   A N D   T I M E   A C C O U N T I N G

 Computing: Nodes   Th. Count  Wall t (s) G-Cycles   %
-
 Domain decomp. 35   4380  23.714  922.574 6.7
 DD comm. load  35   4379   0.0542.114 0.0
 DD comm. bounds35   4381   0.0562.193 0.0
 Neighbor search35   4380  11.325  440.581 3.2
 Launch GPU ops.35  87582   3.970  154.455 1.1
 Comm. coord.   35  39411   2.522   98.132 0.7
 Force  35  43791  55.351 2153.40915.5
 Wait + Comm. F 35  43791   2.800  108.930 0.8
 PME mesh   35  43791  97.377 3788.42727.3
 Wait GPU nonlocal  35  43791  

[gmx-users] position restraint and umbrella potential contribution to the virial

2013-04-22 Thread Christopher Neale
Dear users and developers:

We have conducted some NPT umbrella sampling (US) simulations to investigate 
the free energy associated with the approach of 2 alpha-helical peptides. The 
order parameter is the Cartesian X dimension. During these simulations, in 
order to maintain (i) the structure of isolated helices and (ii) the relative 
orientation of the isolated helices, we also applied position restraints in the 
Cartesian Y and Z dimensions. In addition to free energies of interaction, we 
are also interested in looking at the system volume as a function of separation 
distance. In doing so, we realized that the restraint potentials are affecting 
the virial and thus the volume of the simulation system. I consider this volume 
dependence to be an artefact of the procedure that we used and we would 
therefore like to debias this volume data. 

Debiasing is relatively straightforward for the umbrella potential. When dx is 
less than dx0, the umbrella potential provides an expansion force that 
increases the volume. Conversely, when dx is greater than dx0, the volume is 
decreased. However, there is no simple way to debias the volume for the effects 
of position restraints, which are unpredictable a priori as the helices come 
into contact. It should be possible to go back to the .xtc files to compute the 
restraint forces and use these, together with the compressibility, to compute 
the volume change that was induced by the forces via the virial.

My first question is why these restraint forces are added to the virial at all. 
Perhaps the US potential needs to be added (although I am not entirely sure 
why), but if the system is translationally invariant then why would position 
restraints be included in the virial?

My second question is what kind of errors might I introduce by modifying the 
source code such that these restraint forces are not included in the virial?

My third question is what gromacs source code file contains the virial 
contribution of the position restraints? (the forces from the US potential are 
added to the virial in mdlib/pull.c , where I can set pull->bVirial=FALSE for 
do_pull_pot() ).

Thank you,
Chris.


[gmx-users] amber99 with berger's lipids

2013-04-09 Thread Christopher Neale
Did you correctly account for the different 1-4 scaling factors in the Berger 
and Amber lipids (either by obtaining .itp files from the authors of the Berger 
lipids-Amber protein article or making the changes yourself)? If not, then you 
are doing your simulation incorrectly (see their paper for why).

Chris.

-- original message --

By the way, during simulations of membrane-protein systems in the
Amber99sb ff (with Berger lipids) I've noticed a decrease of my system size in
the Z-direction (I've observed the same during simulations of such
systems in the all-atom CHARMM ff). In both cases the observed effect was
seen in both GPU and non-GPU runs with different cutoffs.

It's interesting that in the GROMOS united-atom ff (with a 1.2 nm vdw cutoff
and Berger lipids) I didn't observe such a decrease in the Z-dimension of
my bilayer. The only difference was in the water models (SPC in GROMOS
and TIP3P in Amber or CHARMM). Might that be relevant? Should I switch to
SPC water with Amber?

James


[gmx-users] chiller failure leads to truncated .cpt and _prev.cpt files using gromacs 4.6.1

2013-03-29 Thread Christopher Neale
Thank you Berk, my problem was indeed that I didn't have any valid .cpt files. 
The only way that I could proceed was to extract a frame from the .xtc file and 
run it through grompp again to get a new .tpr. That's fine and things are 
running again. I just wanted to pass all of this information along.

It sounds, from Mark's most recent email on this subject, that there may be 
nothing that can be done to avoid this on Gromacs' end. I would have thought 
that gmxcheck would handle it, but if Mark is right that the OS might serve up 
the version of the .cpt that is in memory, rather than what is on disk, then I 
suppose that gmxcheck on the .cpt might pass just prior to the shutdown even 
though the file was not on disk and the gmxcheck will fail later, after a 
reboot.

I've scripted the creation of automatic backups of good .cpt files so I should 
be ok for the future. I thought that I'd get replies from lots of other people 
who had experienced this (since I've seen it on multiple clusters, all managed 
by entirely different staff), but since nobody responded to say that they had, 
I guess it is probably not worth any more of your time.

To all those who helped me on this subject, I appreciate all of your assistance.
Chris.

-- original message --

I don't know enough about the details of Lustre to understand what's going on 
exactly.
But I think mdrun can't do more than check the return value of fsync and 
believe that the file is completely flushed to disk. Possibly Lustre does some 
syncing, but doesn't actually flush the file physically to disk, which could 
lead to corruption when power goes down unexpectedly.
But I hope this would happen so infrequently that you can take your losses (of 
up to the queue time, which is, hopefully, around 24 hours).

I assume your problem is that you don't even have the checkpoint file of the 
previous simulation part left. Another option would then be using mdrun 
-noappend.

Cheers,

Berk



[gmx-users] chiller failure leads to truncated .cpt and _prev.cpt files using gromacs 4.6.1

2013-03-28 Thread Christopher Neale
Thank you, Berk, Justin, and Matthew, for your assistance.

I checked with my sysadmin, who said:

The /global/scratch FS is Lustre. It is fully POSIX and the fsync etc 
are fully and well implemented. However when the 'power off' command is 
issued there is no way OS can finish I/O in a controlled way.

Note that the power off command was given when they realized that they had
lost all cooling in the data room, and they had just a few minutes to react,
forcing them to shut down all compute nodes.

Justin's suggestion to use  -cpnum is good, although I think it will be easier 
to simply have a script that runs 
gmxcheck once every 12 hours and backs up the .cpt file if it is ok.
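Chris's check-and-backup idea might look like the helper below (the function and file names are mine; in real use the validator would be `gmxcheck -f` run from cron every 12 hours, so the helper takes the validator as a parameter):

```shell
# Copy FILE to FILE.good only if a validator command accepts it.
# Usage: backup_if_valid FILE VALIDATOR [VALIDATOR_ARGS...]
backup_if_valid() {
  f=$1; shift
  if "$@" "$f" >/dev/null 2>&1; then   # validator is run as: VALIDATOR ARGS FILE
    cp -p "$f" "$f.good"               # keep a known-good copy alongside
  fi
}
# hypothetical cron entry: 0 */12 * * * backup_if_valid md3.cpt gmxcheck -f
```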

I don't know enough about computer OS's to say if there is any possible way for 
gromacs to avoid this
in the future, but if it was possible, then it would be useful.

Thank you again,
Chris.

-- original message --

Gromacs calls fsync for every checkpoint file written:

   fsync() transfers ("flushes") all modified in-core data of (i.e., modi-
   fied  buffer cache pages for) the file referred to by the file descrip-
   tor fd to the disk device (or other permanent storage device)  so  that
   all  changed information can be retrieved even after the system crashed
   or was rebooted.  This includes writing  through  or  flushing  a  disk
   cache  if  present.   The call blocks until the device reports that the
   transfer has completed.  It also flushes metadata  information  associ-
   ated with the file (see stat(2)).

If fsync fails, mdrun exits with a fatal error.
We have experience with unreliable AFS file systems, where mdrun's fsync call
could wait for hours and then fail; for that case we added an environment
variable.
So either fsync is not supported on your system (highly unlikely)
or your file system returns 0, indicating the file was synched, but it actually 
didn't fully sync.

Note that we first write a new, numbered checkpoint file and fsync that; then we
move the current checkpoint to _prev (thereby losing the old prev) and then move
the numbered one to the current name.
So you should never end up with only corrupted files, unless fsync doesn't do 
what it's supposed to do.
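The write-flush-rotate sequence Berk describes can be sketched in shell (filenames are placeholders; real mdrun does this internally with fsync(2), and the per-file `sync` fallback is my addition):

```shell
# Rotate checkpoints: the freshly written numbered file is flushed first,
# then current -> _prev (the old _prev is lost), then numbered -> current.
write_checkpoint() {
  new=$1 cur=$2 prev=$3
  sync "$new" 2>/dev/null || sync      # per-file sync needs coreutils >= 8.24; else sync all
  [ -f "$cur" ] && mv "$cur" "$prev"   # old current becomes _prev
  mv "$new" "$cur"                     # numbered file becomes the current checkpoint
}
```

Because each `mv` is a rename on the same filesystem, a crash between steps leaves at least one complete file, unless the flush itself lied about reaching disk.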

Cheers,

Berk



[gmx-users] Simulation membrane proteins in amber99 force field.

2013-03-27 Thread Christopher Neale
Dear James:

As always, check the primary literature. The Amber99SB ff was introduced with 
an 8A cutoff and PME: http://www.ncbi.nlm.nih.gov/pubmed/16981200

Other cutoffs are at your discretion. I am, for instance, using this ff with a 
1.0 nm cutoff and PME because I am using it with the Stockholm Lipids.

Chris.

-- original message --

I'd like to perform MD simulation of the membrane protein parametrized
in Amber99sb force field. Could you tell me what cut-off patterns
should I use for such simulation ?


James



[gmx-users] chiller failure leads to truncated .cpt and _prev.cpt files using gromacs 4.6.1

2013-03-26 Thread Christopher Neale
Dear Matthew:

Thank you for noticing the file size. This is a very good lead. 
I had not noticed that this was special. Indeed, here is the complete listing 
for truncated/corrupt .cpt files:

-rw-r- 1 cneale cneale 1048576 Mar 26 18:53 md3.cpt
-rw-r- 1 cneale cneale 1048576 Mar 26 18:54 md3.cpt
-rw-r- 1 cneale cneale 1048576 Mar 26 18:54 md3.cpt
-rw-r- 1 cneale cneale 1048576 Mar 26 18:54 md3.cpt
-rw-r- 1 cneale cneale 1048576 Mar 26 18:50 md3.cpt
-rw-r- 1 cneale cneale 1048576 Mar 26 18:50 md3.cpt
-rw-r- 1 cneale cneale 1048576 Mar 26 18:50 md3.cpt
-rw-r- 1 cneale cneale 1048576 Mar 26 18:51 md3.cpt
-rw-r- 1 cneale cneale 1048576 Mar 26 18:51 md3.cpt
-rw-r- 1 cneale cneale 2097152 Mar 26 18:52 md3.cpt
-rw-r- 1 cneale cneale 2097152 Mar 26 18:52 md3.cpt
-rw-r- 1 cneale cneale 2097152 Mar 26 18:52 md3.cpt
-rw-r- 1 cneale cneale 2097152 Mar 26 18:52 md3.cpt

I will contact my sysadmins and let them know about your suggestions.

Nevertheless, I respectfully reject the idea that there is really nothing that 
can be done about this inside
gromacs. About 6 years ago, I worked on a cluster with massive sporadic NFS 
delays. The only way to automate runs on that machine was, for example, to use 
sed to create a .mdp from a template .mdp file that had ;;;EOF as its last 
line, and then to poll the created .mdp file for ;;;EOF until it existed before 
running grompp (at the time I was using mdrun -sort and desorting with an 
in-house script prior to domain decomposition, so I had to stop/start gromacs 
every couple of hours). This is not to say that such things are ideal, but I 
think gromacs would be all the better if it could avoid problems like this 
regardless of the cluster setup.

Please note that, over the years, I have seen this on 4 different clusters 
(albeit with different versions of 
gromacs), but that is to say that it's not just one setup that is to blame.

Matthew, please don't take my comments the wrong way. I deeply appreciate your 
help. I just want to put it
out there that I believe that gromacs would be better if it didn't overwrite 
good .cpt files with truncated/corrupt
.cpt files ever, even if the cluster catches on fire or the earth's magnetic 
field reverses, etc. 
Also, I suspect that sysadmins don't have a lot of time to test their clusters 
for graceful exit upon chiller failure 
conditions, so a super-careful regime of .cpt update will always be useful.

Thank you again for your help, I'll take it to my sysadmins, who are very good 
and may be able to remedy 
this on their cluster, but who knows what cluster I will be using in 5 years.

Again, thank you for your assistance, it is very useful,
Chris.

-- original message --


Dear Chris,

While it's always possible that GROMACS can be improved (or debugged), this
smells more like a system-level problem. The corrupt checkpoint files are
precisely 1MiB or 2MiB, which suggests strongly either 1) GROMACS was in
the middle of a buffer flush when it was killed (but the filesystem did
everything right; it was just sent incomplete data), or 2) the filesystem
itself wrote a truncated file (but GROMACS wrote it successfully, the data
was buffered, and GROMACS went on its merry way).

#1 could happen, for example, if GROMACS was killed with SIGKILL while
copying .cpt to _prev.cpt -- if GROMACS even copies, rather than renames --
its checkpoint files. #2 could happen in any number of ways, depending on
precisely how your disks, filesystems, and network filesystems are all
configured (for example, if a RAID array goes down hard with per-drive
writeback caches enabled, or your NFS system is soft-mounted and either
client or server goes down). With the sizes of the truncated checkpoint
files being very convenient numbers, my money is on #2.

Have you contacted your sysadmins to report this? They may be able to take
some steps to try to prevent this, and (if this is indeed a system problem)
doing so would provide all their users an increased measure of safety for
their data.

Cheers,
MZ



[gmx-users] Hydrophobic contact cut-off

2013-03-26 Thread Christopher Neale
The standard approach in cases like this, assuming that you have some contacts 
at some point in your trajectory, is to use your trajectory to construct a 
radial distribution function (RDF) and then to define a "contact" as any 
interaction up to the minimum following the first maximum of the RDF. This way, 
your cutoff will depend on your definition for contact evaluation (any atom 
pair, heavy atom pair, side chain center of mass, etc).
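The procedure above can be sketched as a small post-processing step (the g_rdf invocation and file names are hypothetical; the awk logic only implements the "first minimum after the first maximum" rule on two-column r/g(r) text):

```shell
# Print r at the first local minimum that follows the first local maximum
# of an RDF given as two-column "r g(r)" text; @/# comment lines (xvg style)
# are stripped first.
first_min_after_max() {
  grep -v '^[@#]' "$1" | awk '
    { r[NR] = $1; g[NR] = $2 }
    END {
      past_max = 0
      for (i = 2; i <= NR; i++) {
        if (!past_max && g[i] < g[i-1] && g[i-1] > g[1])
          past_max = 1                 # curve turned down: first maximum passed
        else if (past_max && g[i] > g[i-1]) {
          print r[i-1]                 # curve turns back up: first minimum
          exit
        }
      }
    }'
}
# e.g.: g_rdf -f traj.xtc -s topol.tpr -n index.ndx -o rdf.xvg
#       first_min_after_max rdf.xvg
```

On noisy RDFs one would smooth g(r) first; this sketch assumes a clean curve.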

Chris.

-- original message --

Sorry for an off-topic question..

What is the distance cut-off considered for
hydrophobic contact in protein?
Some paper states 4-8Ang, while some other
considers only till 5Ang. It is reported that this
is a long range interaction.

Any information clarifying this doubt will be very
useful.

Thank you
Kavya


[gmx-users] chiller failure leads to truncated .cpt and _prev.cpt files using gromacs 4.6.1

2013-03-26 Thread Christopher Neale
Dear Users:

A cluster that I use went down today with a chiller failure. I lost all 16 jobs 
(running gromacs 4.6.1). For 13 of these jobs, not only is the .cpt file 
truncated, but also the _prev.cpt file is truncated, meaning that I am going to 
have to go back through the files, extract a frame, make a new .tpr file (using 
a new, custom .mdp file to get the timestamp right), restart the runs, and then 
later join the trajectory data fragments. 

I have experienced this a number of times over the years with different 
versions of gromacs (see, for example, http://redmine.gromacs.org/issues/790 ) 
and wonder if anybody else has experienced this?

Also, does anybody have some advice on how to handle this? For now, my idea is 
to run a script in the background to periodically check the .cpt file and make 
a copy if it is not corrupted/truncated so that I can always restart.

If it is useful information, both the .cpt and the _prev.cpt files have the 
same size and timestamp, but are smaller than non-corrupted .cpt files. E.g.:

$ ls -ltr --full-time *cpt
-rw-r- 1 cneale cneale 1963536 2013-03-22 17:18:04.0 -0700 
md2d_prev.cpt
-rw-r- 1 cneale cneale 1963536 2013-03-22 17:18:04.0 -0700 md2d.cpt
-rw-r- 1 cneale cneale 1048576 2013-03-26 12:46:02.0 -0700 md3.cpt
-rw-r- 1 cneale cneale 1048576 2013-03-26 12:46:03.0 -0700 
md3_prev.cpt

Where, above, md2d.cpt was from the last stage of my equilibration and md3.cpt 
was from my production.

Here is another example from a different run with corruption:

$ ls -ltr --full-time *cpt
-rw-r- 1 cneale cneale 2209508 2013-03-21 08:24:33.0 -0700 
md2d_prev.cpt
-rw-r- 1 cneale cneale 2209508 2013-03-21 08:24:33.0 -0700 md2d.cpt
-rw-r- 1 cneale cneale 2097152 2013-03-26 12:46:01.0 -0700 
md3_prev.cpt
-rw-r- 1 cneale cneale 2097152 2013-03-26 12:46:01.0 -0700 md3.cpt

I detect corruption/truncation in the .cpt file like this:
$ gmxcheck  -f md3.cpt
Fatal error:
Checkpoint file corrupted/truncated, or maybe you are out of disk space?
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors

Also, I confirmed the problem by trying to run mdrun:

$ mdrun -nt 1 -deffnm md3  -cpi md3.cpt -nsteps 50
Fatal error:
Checkpoint file corrupted/truncated, or maybe you are out of disk space?
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors

(and I get the same thing using md3_prev.cpt)

I am not out of disk space, but probably some type of condition like that 
existed when the chiller failed and the system went down:
$ df -h .
FilesystemSize  Used Avail Use% Mounted on
  342T   57T  281T  17% /global/scratch

Nor am I out of quota (although I have no command to show that here).

There is no corruption of .edr, .trr, or .xtc files

The .log files end like this:

Writing checkpoint, step 194008250 at Tue Mar 26 10:49:31 2013
Writing checkpoint, step 194932330 at Tue Mar 26 11:49:31 2013
   Step   Time Lambda
  195757661   391515.322000.0
Writing checkpoint, step 195757661 at Tue Mar 26 12:46:02 2013

I am motivated to help solve this problem, but have no idea how to stop gromacs 
from copying corrupted/truncated checkpoint files to _prev.cpt. I presume that 
one could write a magic number to the end of the .cpt file and test that it 
exists prior to moving .cpt to _prev.cpt, but perhaps I misunderstand the 
problem. If need be, perhaps mdrun could call gmxcheck, since that tool seems 
to detect the corruption/truncation. If it's done every 30 minutes, it 
shouldn't affect the performance.
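The magic-number idea could be prototyped like this (the marker string is my invention, and appending text to a real binary .cpt would of course have to happen inside mdrun's own checkpoint writer, not post hoc):

```shell
# Trailer check: a file counts as complete only if it ends with the marker.
MAGIC=';;;GMX_CPT_OK'
append_magic() { printf '%s\n' "$MAGIC" >> "$1"; }
has_magic()    { [ "$(tail -c $(( ${#MAGIC} + 1 )) "$1")" = "$MAGIC" ]; }
# rotation would then be guarded, e.g.:
#   has_magic md3_new.cpt && mv md3.cpt md3_prev.cpt && mv md3_new.cpt md3.cpt
```

A truncated file cannot end with the marker, so a guarded rotation would never replace a good _prev.cpt with a partial file.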

Thank you for any advice,
Chris.


[gmx-users] Performance of 4.6.1 vs. 4.5.5

2013-03-08 Thread Christopher Neale
Dear users:

With gromacs 4.6.1 I am seeing 140% of the gromacs 4.5.5 performance (a 40% 
boost) when I run a simulation on a single node. However, on multiple nodes I 
am "only" seeing 110% of the 4.5.5 performance. Does anyone else see this? 
Note that I am not using the verlet cutoff scheme.

I'm not sure that this is a problem, but I was surprised to see how big the 
difference was between 1 and 2 nodes, while for 2-10 nodes I saw a reliable 10% 
performance boost.

Please note that, while I compiled the fftw (with sse2) and gromacs 4.6.1, I 
did not compile the 4.5.5 version that I am comparing to (or its fftw) so the 
difference might be in compilation options. Still, I wonder why the benefits of 
4.6.1 are so fantastic on 1 node but fall off to good-but-not-amazing on > 1 
node.

The system is about 43K atoms. I have not tested this with other systems or 
cutoffs.

My mdp file follows. Thank you for any advice.

Chris.

constraints = all-bonds
lincs-iter =  1
lincs-order =  6
constraint_algorithm =  lincs
integrator = sd
dt = 0.002
tinit = 0
nsteps = 25
nstcomm = 1
nstxout = 25
nstvout = 25
nstfout = 25
nstxtcout = 5
nstenergy = 5
nstlist = 10
nstlog=0
ns_type = grid
vdwtype = switch
rlist = 1.0
rlistlong = 1.6
rvdw = 1.5
rvdw-switch = 1.4
rcoulomb = 1.0
coulombtype = PME
ewald-rtol = 1e-5
optimize_fft = yes
fourierspacing = 0.12
fourier_nx = 0
fourier_ny = 0
fourier_nz = 0
pme_order = 4
tc_grps =  System
tau_t   =  1.0
ld_seed =  -1
ref_t = 310
gen_temp = 310
gen_vel = yes
unconstrained_start = no
gen_seed = -1
Pcoupl = berendsen
pcoupltype = semiisotropic
tau_p = 4 4
compressibility = 4.5e-5 4.5e-5
ref_p = 1.0 1.0
dispcorr = EnerPres



[gmx-users] Gromacs with Intel Xeon Phi coprocessors ?

2013-03-08 Thread Christopher Neale
Dear users:

does anybody have any experience with gromacs on a cluster in which each node 
is composed of 1 or 2 x86 processors plus an Intel Xeon Phi coprocessor? Can 
gromacs make use of the xeon phi coprocessor? If not, does anybody know if that 
is in the pipeline?

Thank you,
Chris.


[gmx-users] g_density fails after calculating for the last snapshot

2013-03-08 Thread Christopher Neale
What output do you get from gmxcheck -f martin_md_290K_100ns_tau1.xtc? Can you 
extract the problematic frame with trjconv and run g_density on that single 
frame to see what happens? Can you also make a new .xtc that omits the last 
frame and see what g_density does with it?

These will help you to figure out what is going on.
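As a concrete sketch of those three checks (GROMACS 4.x tool names; the -dump 
time of 100000 ps is a placeholder I made up for the final frame time, use 
whatever gmxcheck reports):

```
gmxcheck -f martin_md_290K_100ns_tau1.xtc
trjconv -f martin_md_290K_100ns_tau1.xtc -s martin_md_290K_100ns_tau1.tpr \
        -dump 100000 -o lastframe.gro
g_density -f lastframe.gro -s martin_md_290K_100ns_tau1.tpr -n dppc.ndx \
          -o one_frame_density.xvg
```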

Chris.

-- original message --

Hi All
I have a 100 ns lipid simulation for which I want the density profile. I issued 
the following command

g_density -f martin_md_290K_100ns_tau1.xtc -n dppc.ndx -s 
martin_md_290K_100ns_tau1.tpr -o water_density.xvg -b 5

The program fails with the following message

Selected 3: 'Water'
Last frame  25000 time 10.000  
*** glibc detected *** g_density: munmap_chunk(): invalid pointer: 
0x00d655d0 ***
=== Backtrace: ===
/lib/x86_64-linux-gnu/libc.so.6(+0x7eb96)[0x7fb120a73b96]
/usr/lib/libgmxana.so.6(calc_density+0x30c)[0x7fb120e14bec]
/usr/lib/libgmxana.so.6(gmx_density+0x42b)[0x7fb120e1541b]
g_density(main+0x9)[0x400629]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xed)[0x7fb120a1676d]
g_density[0x400659]
=== Memory map: ===
[... long process memory-map listing truncated ...]

[gmx-users] Trouble restarting a 4.5.5 run with 4.6.1 (forced to use -noappend)

2013-03-07 Thread Christopher Neale
Dear Users,

I have a system that has been running under gromacs 4.5.5. I can restart it 
fine from
a checkpoint file with gromacs 4.5.5 but not with gromacs 4.6.1. 

The long description follows, but the short answer seems to be that 4.5.5 
recorded Mu-X/Y/Z values (the internet tells me these relate to dipole 
information) in the .edr file, but 4.6.1 does not for my system, so there is an 
incompatibility that I can only work around by using mdrun -noappend.

I'm not sure if this is known or not, or if it is a minor bug or if I am doing 
something wrong.

Thank you,
Chris.

Here is what I get if I try to run under gromacs 4.5.5:

[cneale@seawolf3 12]$ ~/gromacs_links/mdrun -deffnm popc -cpi popc.cpt
NNODES=1, MYRANK=0, HOSTNAME=seawolf3
 :-)  G  R  O  M  A  C  S  (-:
:-)  VERSION 4.5.5  (-:

Written by Emile Apol, Rossen Apostolov, Herman J.C. Berendsen,
  Aldert van Buuren, Pär Bjelkmar, Rudi van Drunen, Anton Feenstra,
Gerrit Groenhof, Peter Kasson, Per Larsson, Pieter Meulenhoff,
   Teemu Murtola, Szilard Pall, Sander Pronk, Roland Schulz,
Michael Shirts, Alfons Sijbers, Peter Tieleman,

   Berk Hess, David van der Spoel, and Erik Lindahl.

   Copyright (c) 1991-2000, University of Groningen, The Netherlands.
Copyright (c) 2001-2010, The GROMACS development team at
Uppsala University & The Royal Institute of Technology, Sweden.
check out http://www.gromacs.org for more information.

 This program is free software; you can redistribute it and/or
  modify it under the terms of the GNU General Public License
 as published by the Free Software Foundation; either version 2
 of the License, or (at your option) any later version.

   :-)  /home/cneale/gromacs_links/mdrun  (-:

Option Filename  Type Description

  -s   popc.tpr  InputRun input file: tpr tpb tpa
  -o   popc.trr  Output   Full precision trajectory: trr trj cpt
  -x   popc.xtc  Output, Opt. Compressed trajectory (portable xdr format)
-cpi   popc.cpt  Input, Opt!  Checkpoint file
-cpo   popc.cpt  Output, Opt. Checkpoint file
  -c   popc.gro  Output   Structure file: gro g96 pdb etc.
  -e   popc.edr  Output   Energy file
  -g   popc.log  Output   Log file
-dhdl  popc.xvg  Output, Opt. xvgr/xmgr file
-field popc.xvg  Output, Opt. xvgr/xmgr file
-table popc.xvg  Input, Opt.  xvgr/xmgr file
-tablep popc.xvg  Input, Opt.  xvgr/xmgr file
-tableb popc.xvg  Input, Opt.  xvgr/xmgr file
-rerun popc.xtc  Input, Opt.  Trajectory: xtc trr trj gro g96 pdb cpt
-tpi   popc.xvg  Output, Opt. xvgr/xmgr file
-tpid  popc.xvg  Output, Opt. xvgr/xmgr file
 -ei   popc.edi  Input, Opt.  ED sampling input
 -eo   popc.edo  Output, Opt. ED sampling output
  -j   popc.gct  Input, Opt.  General coupling stuff
 -jo   popc.gct  Output, Opt. General coupling stuff
-ffout popc.xvg  Output, Opt. xvgr/xmgr file
-devout popc.xvg  Output, Opt. xvgr/xmgr file
-runav popc.xvg  Output, Opt. xvgr/xmgr file
 -px   popc.xvg  Output, Opt. xvgr/xmgr file
 -pf   popc.xvg  Output, Opt. xvgr/xmgr file
-mtx   popc.mtx  Output, Opt. Hessian matrix
 -dn   popc.ndx  Output, Opt. Index file
-multidir  popc  Input, Opt., Mult. Run directory

Option   Type   Value   Description
--
-[no]h   bool   no  Print help info and quit
-[no]version bool   no  Print version info and quit
-nice    int    0       Set the nicelevel
-deffnm  string popcSet the default filename for all file options
-xvg enum   xmgrace  xvg plot formatting: xmgrace, xmgr or none
-[no]pd  bool   no  Use particle decompostion
-dd  vector 0 0 0   Domain decomposition grid, 0 is optimize
-npme    int    -1      Number of separate nodes to be used for PME, -1
is guess
-ddorder enum   interleave  DD node order: interleave, pp_pme or cartesian
-[no]ddcheck bool   yes Check for all bonded interactions with DD
-rdd real   0   The maximum distance for bonded interactions with
DD (nm), 0 is determine from initial coordinates
-rcon    real   0       Maximum distance for P-LINCS (nm), 0 is estimate
-dlb enum   autoDynamic load balancing (with DD): auto, no or yes
-dds real   0.8 Minimum allowed dlb scaling of the DD cell size
-gcom    int    -1      Global communication frequency
-[no]v   bool   no  Be loud and noisy
-[no]compact bool   yes Write a compact log file
-[no]seppot  bool   no  Write separate V and dVdl terms for each
interaction typ

[gmx-users] how to get pdb2gmx to use an arbitrarily located forcefield.ff directory?

2013-03-05 Thread Christopher Neale
Thank you, Francesco, for the assistance. However, I still cannot get it to 
work (see below). Can you please specify exactly what you have done and perhaps 
show a listing of the directory structure?

My command was: pdb2gmx -f protein.pdb -ff charmm36

I tried this in a number of ways, all eventually failed:

neither GMXLIB nor GMXDATA set:
Error: "Could not find force field 'charmm36' in current directory, install 
tree or GMXDATA path."

# define GMXDATA
export GMXLIB=""
export GMXDATA=~/gromacs/CHARMM36/share 
Error: "Could not find force field 'charmm36' in current directory, install 
tree or GMXDATA path."

# define GMXLIB:
export GMXLIB=~/gromacs/CHARMM36/share/gromacs/top
export GMXDATA=""
the charmm36 ff is found, but after selecting a water model, I get:
Error: "Library file residuetypes.dat not found in current dir nor in your 
GMXLIB path."

# define both:
export GMXDATA=~/gromacs/CHARMM36/share  
export GMXLIB=~/gromacs/CHARMM36/share/gromacs/top 
the charmm36 ff is found, but after selecting a water model, I get:
Error: "Library file residuetypes.dat not found in current dir nor in your 
GMXLIB path."

# set GMXLIB to the regular installation:
export GMXDATA=~/gromacs/CHARMM36/share  
export GMXLIB=/usr/local/gromacs-4.5.5/share/gromacs/top/
Error: "Could not find force field 'charmm36' in current directory, install 
tree or GMXDATA path."

# the reverse of the idea above:
export GMXDATA=/usr/local/gromacs-4.5.5/share
export GMXLIB=~/gromacs/CHARMM36/share/gromacs/top
Error: "Library file residuetypes.dat not found in current dir nor in your 
GMXLIB path."
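One hedged workaround to try (this is my assumption, not something confirmed in 
this thread): GMXLIB can hold a colon-separated list of directories, so listing 
both the custom top directory and the stock installation's top directory should 
let pdb2gmx find charmm36.ff as well as shared library files like 
residuetypes.dat. Paths are the ones used in the attempts above:

```shell
# Colon-separated GMXLIB: custom force-field dir first, stock top dir second
# (assumption: each entry is searched in turn for library files).
export GMXLIB="$HOME/gromacs/CHARMM36/share/gromacs/top:/usr/local/gromacs-4.5.5/share/gromacs/top"
# Only invoke pdb2gmx when a GROMACS installation is actually on the PATH
if command -v pdb2gmx >/dev/null 2>&1; then
    pdb2gmx -f protein.pdb -ff charmm36
fi
echo "GMXLIB=$GMXLIB"
```

Worth verifying against your build before relying on it.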

Here is a listing of the artificial directory structure that I created to try 
to get this to work:

[nealec@pegasus check]$ ls ~/gromacs/CHARMM36/
share
[nealec@pegasus check]$ ls ~/gromacs/CHARMM36/share/
gromacs
[nealec@pegasus check]$ ls ~/gromacs/CHARMM36/share/gromacs/
top
[nealec@pegasus check]$ ls ~/gromacs/CHARMM36/share/gromacs/top/
charmm36.ff
[nealec@pegasus check]$ ls ~/gromacs/CHARMM36/share/gromacs/top/charmm36.ff/
aminoacids.arn    aminoacids.n.tdb  aminoacids.vsd  dna.arn    dna.n.tdb
ffnabonded.itp    forcefield.doc    ions.itp        popc.itp   rna.hdb    rna.rtp
tip3p.itp         watermodels.dat
aminoacids.c.tdb  aminoacids.r2b    atomtypes.atp   dna.c.tdb  dna.rtp
ffnanonbonded.itp forcefield.itp    lipids.hdb      rna.arn    rna.n.tdb  spce.itp
tip4p.itp
aminoacids.hdb    aminoacids.rtp    cmap.itp        dna.hdb    ffbonded.itp
ffnonbonded.itp   gb.itp            lipids.rtp      rna.c.tdb  rna.r2b    spc.itp
tips3p.itp

Thank you,
Chris.

-- original message --

Hi Chris,
I don't know if things changed in GROMACS 4.6.x, but I successfully did what
you are trying to do by setting the two variables GMXLIB and GMXDATA at the
same time.

Francesco


2013/3/5 Christopher Neale <[hidden email]>

> Hello,
>
> I have downloaded the charmm36.ff directory and would like to use it with
> pdb2gmx. Everything works fine if I put it in the current directory or the
> share/gromacs/top directory of the binary that I am using. However, I'd
> like to be able to put the charmm36.ff directory in an arbitrary place.
>
> I tried setting GMXLIB, no luck. I also tried setting GMXDATA and also,
> separately, creating a directory structure share/gromacs/top/charm36ff and
> then setting GMXDATA to share/ (which seems absurdly convoluted, but is
> implied that it should work by the description of GMXDATA here
> http://www.gromacs.org/Documentation/Terminology/Environment_Variables) and 
> the text at the output of pdb2gmx:
>
> Fatal error:
> Could not find force field 'charmm36' in current directory, install tree
> or GMXDATA path.
> For more information and tips for troubleshooting, please check the GROMACS
> website at http://www.gromacs.org/Documentation/Errors
>
> I can get around this by making a link in the current directory and then
> using sed to change the directory structure in the resulting .top file,
> after which I can remove the link in the .top file.
>
> Thank you for any advice,
> Chris. 



[gmx-users] how to get pdb2gmx to use an arbitrarily located forcefield.ff directory?

2013-03-05 Thread Christopher Neale
Hello,

I have downloaded the charmm36.ff directory and would like to use it with 
pdb2gmx. Everything works fine if I put it in the current directory or the 
share/gromacs/top directory of the binary that I am using. However, I'd like to 
be able to put the charmm36.ff directory in an arbitrary place.

I tried setting GMXLIB, no luck. I also tried setting GMXDATA and also, 
separately, creating a directory structure share/gromacs/top/charm36ff and then 
setting GMXDATA to share/ (which seems absurdly convoluted, but is implied that 
it should work by the description of GMXDATA here 
http://www.gromacs.org/Documentation/Terminology/Environment_Variables) and the 
text at the output of pdb2gmx:

Fatal error:
Could not find force field 'charmm36' in current directory, install tree or 
GMXDATA path.
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors

I can get around this by making a link in the current directory and then using 
sed to change the directory structure in the resulting .top file, after which I 
can remove the link in the .top file.

Thank you for any advice,
Chris.



[gmx-users] The sum of the two largest charge group radii is larger than rlist - rvdw (because rlist < rvdw)

2013-02-27 Thread Christopher Neale
Dear Joakim:

I've started bilayer simulations with both of the new conditions that you 
mentioned. Thank you for the advice and references. I am surprised to hear that 
the bilayer features are similar with different vdw cutoffs (although I'll 
check also). Perhaps this is an unexpected win for dispersion correction. 
Michael Shirts, et al., continue to show us the importance/usefulness of 
dispersion correction in aqueous solution, but I was surprised to hear that it 
also seems to work better than one might hope when simulating 
orientationally anisotropic biphasic systems like a bilayer in water.

Thanks for your input,
Chris.

-- original message --

Dear Chris,

We have used the following settings in our recent work 
(http://pubs.rsc.org/en/content/articlelanding/2013/CP/C3CP44472D):

coulombtype=pme
rcoulomb=1.0
rlist=1.0
vdw-type=cut-off
rvdw=1.4
dispcorr=enerpres

This works fine without any notes etc. from grompp, and you do not have to 
use the switching function. We should probably change the md.mdp file on the 
website... thanks for bringing this to our attention!

A colleague of mine has used Slipids with a shorter rvdw and the bilayer 
properties were still very stable. When I use rcoulomb=rvdw=1.0 I see no major 
difference either, except for a relatively big boost in performance. Please 
double check that your simulations are able to reproduce experimental 
observables if you change rvdw to around 1.0 nm, though it should be fine.

Best regards,
Joakim




[gmx-users] On the usage of SD integrator as the thermostat

2012-11-28 Thread Christopher Neale
By 'proper dynamics' I mean the correct dynamics. For example, if a simulation 
with the SD integrator indicates that loop1 folds before loop2 folds, then this 
might be incorrect. The state information will be correct in velocity Langevin 
dynamics but the dynamic information might be incorrect.

The advantage of the SD integrator over the Berendsen thermostat is that you 
get the correct ensemble. Check out papers by Garcia in relation to the flying 
ice cube problem and lipid bilayers. Is the same thing true for Nose-Hoover? I 
am not sure, but I am sure that you can find out. I recall hearing that the 
Nose-Hoover thermostat is very sensitive to propagating and magnifying 
temperature fluctuations. 

Chris.

-- original message --

I am wondering what you mean by 'proper dynamics', Chris? And in
general, what's the advantage of using sd integrator over md
integrator together with Nose-Hoover thermostat.

Thanks,
km.

On Fri, Nov 23, 2012 at 5:43 PM, Christopher Neale
 wrote:
> I use the SD integrator with tau_t = 1.0 ps for all of my work, including 
> proteins in aqueous solution
> or embedded in a lipid membrane.
>
> Any value of tau-t is "correct", and none will give you the proper dynamics, 
> but I find that the diffusion of
> both water and lipids is quite reasonable when using tau_t=1.0 ps.
>
> I arrived at 1.0 ps after some help from Berk Hess on this list. I suggest 
> that you search out those old posts.
>
> Chris.
>
> -- original message --
>
> In the manual I've found the possibility of using the sd (Langevin
> dynamics) integrator as the thermostat.
>
> It's known that the friction coefficient in the Langevin equations is
> defined as m/tau_t, so high values of tau_t can be appropriate
> for modeling the thermostat without t_coupl. I also know that the
> friction coefficient for such a simulation must correspond to the
> viscosity of the system. In the GROMACS manual I've found that tau_t = 2.0
> ps can be an appropriate value for such simulations. Is this value
> suitable for water-soluble systems only? What tau_t should I use for
> modeling membrane proteins in the lipid-water environment, which
> has higher viscosity?
>
>
> Thanks for help,
>
> James


[gmx-users] On the usage of SD integrator as the thermostat

2012-11-23 Thread Christopher Neale
I use the SD integrator with tau_t = 1.0 ps for all of my work, including 
proteins in aqueous solution
or embedded in a lipid membrane.

Any value of tau-t is "correct", and none will give you the proper dynamics, 
but I find that the diffusion of
both water and lipids is quite reasonable when using tau_t=1.0 ps.
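As a minimal .mdp sketch of this setup (only the integrator and tau_t come 
from this thread; dt and ref_t are generic placeholders):

```
; SD (Langevin) integrator doubling as the thermostat
integrator = sd
dt         = 0.002
tc-grps    = System
tau_t      = 1.0      ; ps, the value discussed above
ref_t      = 310      ; placeholder target temperature
```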

I arrived at 1.0 ps after some help from Berk Hess on this list. I suggest that 
you search out those old posts.

Chris.

-- original message --

In the manual I've found the possibility of using the sd (Langevin
dynamics) integrator as the thermostat.

It's known that the friction coefficient in the Langevin equations is
defined as m/tau_t, so high values of tau_t can be appropriate
for modeling the thermostat without t_coupl. I also know that the
friction coefficient for such a simulation must correspond to the
viscosity of the system. In the GROMACS manual I've found that tau_t = 2.0
ps can be an appropriate value for such simulations. Is this value
suitable for water-soluble systems only? What tau_t should I use for
modeling membrane proteins in the lipid-water environment, which
has higher viscosity?


Thanks for help,

James


[gmx-users] REMD simulation

2012-11-19 Thread Christopher Neale
Xavier is right, except that you can also reduce the size of your system. You 
can take larger steps in temperature 
if you have fewer atoms. If you are using a  cubic system, you can move to a 
rhombic dodecahedron. 
Even constraining all bonds will help a bit here (vs. harmonic bonds). 
There are lots of papers on this topic.

To see why you don't get any exchanges, construct histograms of your potential 
energies and you will see
that they don't overlap. Also, it is inefficient to take evenly spaced 
temperatures. This is not your major problem,
but read a bit about exponentially spaced temperatures for REMD.
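A geometric (exponentially spaced) ladder is easy to generate; the sketch 
below uses the 24 replicas and 290-400 K range from the question (plain awk, 
no GROMACS needed):

```shell
# T_i = Tmin * (Tmax/Tmin)^(i/(n-1)): equal temperature *ratios* between
# neighbors, so exchange probabilities are roughly uniform across the ladder.
awk 'BEGIN {
    tmin = 290; tmax = 400; n = 24;
    for (i = 0; i < n; i++)
        printf "%.2f\n", tmin * (tmax / tmin) ^ (i / (n - 1));
}'
```

The first and last values come out as 290.00 and 400.00; a real ladder should 
still be tuned against the measured exchange rates.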

Chris.

-- original message --

Well, either you use more replicas or you reduce the temperature  
range ...
There is no way around it!

On Nov 19, 2012, at 5:54 PM, Kenny Bravo Rodriguez wrote:

> Dear All,
>
> I am trying to perform REMD simulations using Gromacs.
> My question concerns the temperature distribution and the  
> number of replicas.
> I need to run 24 replicas of my system with a temperature range of  
> 290-400 K. How can I select the temperature values for each replica?
> I tried the server http://folding.bmc.uu.se/remd/index.php but for  
> my system it gives 50 replicas. If I try to take 24 evenly spaced  
> values from the obtained list of temperatures then
> the replicas do not exchange at all.
> I am using Gromacs 4.5.5 and my system has 6862 water molecules and  
> 535 atoms for the solute.
>
> Thanks in advance
> Kenny
>


[gmx-users] Re: Umbrella sampling question

2012-11-14 Thread Christopher Neale
What you reported is not what you did. It appears that grompp, g_traj, and 
g_dist report the same value. 
Please also report the value that you get in the pullx.xvg file from mdrun, 
which I suspect
will also be the same.

The difference that you report is actually between the first FRAME of your 
trajectory from g_dist
and the first LINE of the file from the g_wham output. I see no reason to 
assume that the values in the
output of g_wham must be time-ordered. Also, I have never used g_wham myself (I 
use an external program
to do wham) and so I can not say if you are using it correctly.

My overall conclusion is that you need to investigate the g_wham output, not 
worry about a new run at this stage.

Regarding pull_pbcatom0, there is lots of information on the mailing list about 
this. It is a global atom number
that defines the unit cell for selection of which periodic image of each 
molecule will be used for the pulling.
If all of your box dimensions are >> 2*1.08 nm, then pull_pbcatom0 will not 
affect your results.

Chris.

-- original message --

Gmx QA gmxquestions at gmail.com 
Wed Nov 14 15:06:46 CET 2012
Hi Chris,

and thank you for your reply. I should have included my g_dist command in
my first mail. Here goes:

I first run trjconv to extract the individual frames from my pulling
trajectory
$ trjconv -f pull.xtc -s pull.tpr -o conf.gro -sep -pbc mol -ur compact

Then, g_dist like so:
$ g_dist -s pull.tpr -f conf0.gro -n index.ndx -o dist0.xvg

where index.ndx is an index file that contains both pull groups (i.e., the
Protein in one group, and another group with the single atom from the
molecule I'm pulling). I've checked that it is the same as in the mdp file
for the pulling simulation.

From this dist0.xvg file I extract the z-component of the distance:
$ tail -n 1 dist0.xvg | awk '{print $5}'
-1.8083301

And this is also the same value I get using g_traj to manually calculate it.

Now, I run g_wham with this command line:
$ g_wham -it tpr-files.dat -if pullf-files.dat -o -hist -unit kCal

which gives me a profile.xvg file (per the default -o flag) containing the
PMF.
The top-part of the profile.xvg file looks like this:

# This file was created Tue Nov  6 09:56:23 2012
# by the following command:
# g_wham -it tpr-files.dat -if pullf-files.dat -o -hist -unit kCal
#
# g_wham is part of G R O M A C S:
#
# Gromacs Runs On Most of All Computer Systems
#
@title "Umbrella potential"
@xaxis  label "z"
@yaxis  label "E (kcal mol\S-1\N)"
@TYPE xy
1.761971e+00  0.00e+00
1.782558e+00  -1.436057e-01
1.803145e+00  -3.857955e-01

and I assumed that the distances reported here should be the same as the
distances calculated by g_dist for each frame, but as you can see they are
not (1.761971 vs 1.8083301)

I have been trying to understand how the 1.761971 distance is calculated,
but I do not get it since I use the pullf.xvg-files as input to g_wham. And
even using the z-component of the distance from the pullx.xvg-file and
calculating ie the average does not give me a value close to 1.761971. For
the next couple of frames, there are also differences, but they are not
constant.
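Column averaging of this sort is easy to script; the sketch below runs on a 
made-up three-line file so the arithmetic can be checked (replace sample.xvg 
and the column number with your real pullx.xvg and its dZ column):

```shell
# Build a tiny fake pullx-style file so the command can be demonstrated
cat > sample.xvg <<'EOF'
@ title "fake pullx"
0.0  1.70
1.0  1.80
2.0  1.90
EOF
# Average column 2, skipping xmgrace headers (@) and comments (#)
awk '!/^[@#]/ { s += $2; n++ } END { printf "%.4f\n", s / n }' sample.xvg
# prints 1.8000
```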


I should add that after having sent my first mail, I found this thread:
http://gromacs.5086.n6.nabble.com/umbrella-sampling-PMF-position-discrepancy-td5000886.html
which describes my problem fairly well. Following the suggestion to change
(or in my case add, since it was not there originally) the pull_pbcatom0 entry
in the mdp file, set to an atom that is close to the COM of the protein, I now
get a distance of 1.809 reported from grompp (when making the tpr for the
pulling simulation). I've started another pulling run with this new tpr,
but I don't like making changes I don't understand. I guess I'm not clear
about what pull_pbcatom0 does, but I thought the distance in this case was
rather unambiguous.

Thanks
/PK



[gmx-users] Umbrella sampling question

2012-11-13 Thread Christopher Neale
Can you please let us know exactly how you got the two values that you find to 
be different (but expected
to be the same)? i.e. post your full g_dist command and explain how you 
observed the value in the output from
mdrun. One frame should be enough for now (as long as you are sure -- and can 
prove to us -- that it is
indeed the same frame).

You don't define profile.xvg, and I wonder if the discrepancy might be related 
to your use of pull_start=yes.
Do you see a consistent bias between mdrun output and g_dist output if you 
subtract the values for
different time frames? If so, is this difference the same as the distance in 
the initial frame (pull_start=yes)?

Chris.

-- original message --

I'm a new poster on the mailing list, and new to umbrella sampling but not to
MD in general.

I have recently done some pulling simulation with Gromacs, but have a few
questions about the outcome, and in particular the distances that are
calculated for the different windows. I realize this is a question that has
come up before, and there are some useful posts in the archives, but
nothing that exactly answers my questions.

To begin, I ran a pulling-simulation pulling a molecule starting in the
middle of a membrane (and inside a protein transport channel) straight up
in z. To that end, I used the following mdp-settings:

pull = umbrella
pull_geometry = direction
pull_vec1 = 0 0 1
pull_ngroups = 1
pull_start = yes
pull_group0 = Protein
pull_group1 = pg1
pull_rate1 = 0.001
pull_k1 = 500

So my pull groups are the COM of the protein, and one atom named pg1 on the
pull molecule.
To my understanding, this should pull the pg1-molecule straight up in z,
and that is indeed also what is happening in the simulation.

I ran this simulation for 6 ns, and it resulted in about 40 separate
conformations to use for umbrella sampling. All of those simulations also
seem to work, I can use the resulting pullf.xvg-files as input to g_wham,
and get a histogram-plot with reasonable overlaps.

However, I'm trying to understand how the various distances relate to each
other. For example, in the profile.xvg-file I get a z-distance for the
first frame as 1.761971 nm, while checking with g_dist gives me 1.80833
(for z). Continuing a few frames further, there are still differences, and
they appear to be random.

How are the distances in the profile.xvg-file computed? The average of the
dZ column of the pullx-file for the first frame is 1.75787, which is sort
of close to 1.761971, but not quite (and also for the next frames there is
no closeness).

Sorry if this became a long mail, but I need to understand this in order to
be able to progress with my research.

Thanks


[gmx-users] Re: Setting up a complex membrane simulation

2012-11-12 Thread Christopher Neale
The simple answer is yes, you could make Lipid_A-box.gro larger in the bilayer 
plane. That probably won't
address the underlying problem though.

As far as I know, genbox doesn't take periodicity into account. That means that 
with larger species, such as lipid A
you are going to need to start with a much larger box and let pressure 
equilibration bring it down to the correct 
size in the context of PBC during mdrun. The alternative is to build a crystal 
with your own script and then set the
box boundaries yourself so that periodicity is taken into account.
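One pragmatic form of the "start much larger" route, using the same tools as 
in the question below (the 15 nm edge is an arbitrary placeholder, to be 
shrunk later by pressure coupling):

```
editconf -f Lipid_A.pdb -o Lipid_A-box.gro -box 15 15 15
genbox -cp Lipid_A-box.gro -ci Lipid_A.pdb -nmol 127 -p Lipid_A.top \
       -o 128_Lipid_A.pdb
```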

Note that this is a major limitation of genbox and it would be great if 
somebody had the time to address it...

Also note that there will be a similar packing problem between lipids even if 
you discount problems with PBC
packing. There are only 2 ways to deal with this: (a) start with a crystal or 
(b) start with a sparse bilayer and have a
good equilibration technique. If you go for option A then you need to be aware 
of biases from the starting 
crystal. If you go for option B, then you'll likely have problems with lipid A 
acyl chains moving out toward bulk water 
and then not equilibrating on your achievable simulation timescale. (There's at 
least one paper out there describing 
how to build a lipid A system and circumvent some setup problems; you should 
read it. I think it's at least 10 years 
old.) The main problem with setup is that lipid A equilibration times are 
orders of magnitude larger than those for
phospholipids. I have personally had good success with high-temperature 
equilibration in which I add restraints to
a number of atoms that keep their z-coordinates in certain regions (where z is 
along the bilayer normal).

In fact, a good technique is probably to build a sparse lipid A bilayer,
restrain all Z coordinates of all lipid A atoms to their original values, and 
run some high-temperature MD to get 
some preliminary packing before slowly releasing the z-value restraints. I 
haven't done this myself, but it is what I would do the next time I need to 
build a lipid A bilayer from scratch.

Finally, there have been recent papers in the literature on lipid A simulations. 
Perhaps you can get a lipid A bilayer from
one of those groups. I didn't have any luck obtaining these myself but perhaps 
they will be kinder to you if you ask 
nicely. Then you can bypass the building step entirely.

Chris.

-- original message --


Ok, what i've currently done:

I start with a pdb file of a single molecule Lipid_A and its topology and
put it in a box:

editconf -f Lipid_A.pdb -o Lipid_A-box.gro -bt triclinic -d 1.0

I want a total of 128 molecules of this lipid in my system so I use genbox
to add 127 more:

genbox -cp Lipid_A-box.gro -ci Lipid_A.pdb -p Lipid_A.top -o 128_Lipid_A.pdb
-nmol 127

This only adds 8 molecules, I'm going to guess because more don't fit in my
box. How do I deal with this? How can I know in advance how big I have to
make the box?

kind regards,



[gmx-users] How to avoid dumping trr file?

2012-11-09 Thread Christopher Neale
By dumping, I presume you mean writing it to disk during mdrun.

set this in your .mdp file:

nstxout = 0
nstvout = 0
nstfout = 0

And, while you are at it:

nstlog = 0
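If some trajectory output is still wanted without the large .trr, a common companion setting (a sketch; option name as in the GROMACS 4.x .mdp format, and the interval value is an assumption) is to write only compressed coordinates:

```
nstxtcout = 5000   ; write compressed .xtc frames every 5000 steps instead of .trr output
```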

Chris.

-- original message --

Sometimes I do not need to dump the .trr file, which is really big.

I am wondering if I can avoid it?

Thank you very much.

Regards,
Kai


[gmx-users] Problem building Gromacs-4.5.5 on BlueGene/Q

2012-11-09 Thread Christopher Neale
Dear Dejun:

I don't know why --enable-bluegene gives you problems or what it is supposed to 
do. Was the BGQ even 
available when gromacs 4.5.5 came out? I doubt it.

In any event, here is how my colleague successfully ran configure:

./configure \
--build=ppc64 \
--prefix=${HOME}/exec/gromacs-4.5.5 \
--disable-ppc-altivec \
--with-fft=fftw3 \
--without-x \
--enable-all-static \
CC="bgxlc_r"  CFLAGS="-O3 -qarch=auto -qtune=auto -q64 -qinfo=all" \
CXX="bgxlC_r" CXXFLAGS="-O3 -qarch=auto -qtune=auto -q64 -qinfo=all" \
F77="bgxlf_r" FFLAGS="-O3 -qnoprefetch -qarch=auto -qtune=auto -q64 
-qinfo=all" \
CPPFLAGS="-I/opt/ibmcmp/xlmass/bg/7.3/include 
-I${HOME}/zx_local/fftw-3.3.2/include" \
LDFLAGS="-L/opt/ibmcmp/xlmass/bg/7.3/lib64 
-L${HOME}/zx_local/fftw-3.3.2/lib" \
LIBS="-lmass_64 -lm"

-- original message --

I'm trying to build the mdrun binary for the BGQ system with the following 
commands (current dir is /scratch/dlin13/gromacs/gromacs-4.5.5/backend) :

../configure --prefix=/scratch/dlin13/gromacs/gromacs-4.5.5/backend \
  --build=powerpc64-bgq-linux \
  --enable-bluegene \
  --with-fft=fftw3 \
  --enable-float \
  --enable-mpi \
  --enable-shared=no \
  --without-dlopen \
  --disable-threads \
  --program-suffix=_mpi_bg \
CC="mpixlc" CFLAGS="-O3 -qarch=450d -qtune=450" \
CXX="mpixlcxx" CXXFLAGS="-O3 -qarch=450d -qtune=450" \
CPPFLAGS="-I/scratch/dlin13/fftw/fftw-3.3.2-mpi/include" \
F77="mpixlf77" FFLAGS="-O3 -qarch=auto -qtune=auto" \
LDFLAGS="-L/scratch/dlin13/fftw/fftw-3.3.2-mpi/lib" \
LIBS="-lmass"

make mdrun

The build generates some errors in the middle:

../../../../../src/gmxlib/nonbonded/nb_kernel_bluegene/nb_kernel_gen_bluegene.h",
 line 259.13: 1506-1231 (S) The built-in function "__fpadd" is not valid for 
the target architecture.
make[3]: *** [nb_kernel010_bluegene.lo] Error 1
make[3]: Leaving directory 
`/scratch/dlin13/gromacs/gromacs-4.5.5/backend/src/gmxlib/nonbonded/nb_kernel_bluegene'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory 
`/scratch/dlin13/gromacs/gromacs-4.5.5/backend/src/gmxlib/nonbonded'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory 
`/scratch/dlin13/gromacs/gromacs-4.5.5/backend/src/gmxlib'
(cd ./src/mdlib && make ; exit 0)

and it finally stops at:
:0  -L/scratch/dlin13/fftw/fftw-3.3.2-mpi/lib  -o libgmxpreprocess_mpi.la 
-rpath /scratch/dlin13/gromacs/gromacs-4.5.5/backend/lib add_par.lo 
compute_io.lo convparm.lo fflibutil.lo gen_ad.lo gen_vsite.lo genhydro.lo 
gpp_atomtype.lo gpp_bond_atomtype.lo h_db.lo hackblock.lo hizzie.lo pdb2top.lo 
pgutil.lo readir.lo readpull.lo resall.lo sorting.lo specbond.lo ter_db.lo 
tomorse.lo topdirs.lo topexcl.lo topio.lo toppush.lo topshake.lo toputil.lo 
tpbcmp.lo vsite_parm.lo xlate.lo ../mdlib/libmd_mpi.la -lnsl -lm -lmass
mkdir .libs
libtool: link: cannot find the library `../mdlib/libmd_mpi.la' or unhandled 
argument `../mdlib/libmd_mpi.la'
make[1]: *** [libgmxpreprocess_mpi.la] Error 1
make[1]: Leaving directory 
`/scratch/dlin13/gromacs/gromacs-4.5.5/backend/src/kernel'

But if I leave out the --enable-bluegene option and configure otherwise the 
same way as I showed, it works. Does anyone have any suggestion?

Thanks,
Dejun


[gmx-users] simulated annealing

2012-11-05 Thread Christopher Neale
Your tau_t is 1 ps and you want to increase the temperature by 300 K in 10 ps. 
I doubt that this is possible.
Try heating over a longer period of time. The longer the period, the closer you 
will get to your target temperature.
Why do you need to do the heating over 10 ps? Also, when using any 
deterministic integrator, I think that you are 
asking for problems anyway based on a rapid scaling of forces into velocities. 
Try doing this over 1 ns instead. The 
alternative, reducing tau_t, is not a good idea.
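Heating over 1 ns instead of 10 ps would, in the poster's own annealing syntax, look something like this (a sketch; times are in ps):

```
annealing          = single
annealing_npoints  = 2
annealing_time     = 0 1000     ; heat over 1 ns rather than 10 ps
annealing_temp     = 0 300
```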

Chris.

-- original message --

In my test simulated annealing run I'm trying to heat my system from 0 K
to 300 K over 10 ps. But at the end I see the temperature
didn't reach 300 K; it's only around 22 K.
So can anyone please correct me in my .mdp file ?



constraints= none
integrator = md-vv
dt = 0.0005  ; 1fs
nsteps = 2  ; 30ps
nstcomm= 1
nstxout= 1000   ; frequency to write
coordinates to output trajectory
nstvout= 0  ; frequency to write
velocities to output trajectory; the last velocities are always
written
nstfout= 0  ; frequency to write forces to
output trajectory
nstlog = 1000 ; frequency to write
energies to log file
nstenergy  = 1000 ; frequency to write energies to edr file
nstcalcenergy  = 1000
vdwtype= cut-off
coulombtype= PME
pbc= xyz
table-extension= 20.0
nstlist= 100
ns_type= grid
rlist  = 1.2

rcoulomb   = 1.2
rvdw   = 1.2

comm-grps  = system
optimize_fft   = yes

;heating
annealing  = single
annealing_npoints  = 2
annealing_time = 0 10
annealing_temp = 0 300

ld_seed= 8072012
;nsttcouple = 0.001
;temperature coupling is on
Tcoupl = Nose-Hoover
tau_t = 1.0
tc_grps = system
ref_t = 0

;Pressure coupling is off
Pcoupl = no

; Generate velocites is on
gen_vel = yes
gen_temp = 0
gen_seed = 8042012


THE .log FILE

Constraining the starting coordinates (step 0)
RMS relative constraint deviation after constraining: 0.00e+00
Initial temperature: 0 K

Started mdrun on node 0 Tue Nov  6 21:21:05 2012

   Step   Time   Lambda
      0    0.0      0.0

Grid: 14 x 14 x 14 cells
Current ref_t for group System:  0.0
   Energies (kJ/mol)
        Bond          Angle    Proper Dih.  Improper Dih.          LJ-14
 6.98365e+02    2.31495e+03    4.34144e+03    9.32563e+01    3.06280e+03
  Coulomb-14        LJ (SR)   Coulomb (SR)   Coul. recip.      Potential
 4.90527e+04    3.16274e+05   -1.60409e+06   -1.10076e+05   -1.33833e+06
 Kinetic En.   Total Energy  Conserved En.    Temperature Pressure (bar)
     0.0e+00   -1.33833e+06   -1.33833e+06        0.0e+00    1.63793e+02

   Step   Time   Lambda
   1000    0.5      0.0

Current ref_t for group System: 15.0
   Energies (kJ/mol)
        Bond          Angle    Proper Dih.  Improper Dih.          LJ-14
 5.91509e+02    2.27120e+03    4.37612e+03    1.01524e+02    3.13151e+03
  Coulomb-14        LJ (SR)   Coulomb (SR)   Coul. recip.      Potential
 4.91902e+04    2.58594e+05   -1.55303e+06   -1.10344e+05   -1.34512e+06
 Kinetic En.   Total Energy  Conserved En.    Temperature Pressure (bar)
 6.82988e+03   -1.33829e+06   -1.33829e+06    1.08252e+01   -4.10650e+03

   Step   Time   Lambda
   2000    1.0      0.0

Current ref_t for group System: 30.0
   Energies (kJ/mol)
        Bond          Angle    Proper Dih.  Improper Dih.          LJ-14
 5.90602e+02    2.30240e+03    4.39329e+03    1.10298e+02    3.15642e+03
  Coulomb-14        LJ (SR)   Coulomb (SR)   Coul. recip.      Potential
 4.91731e+04    2.60786e+05   -1.55644e+06   -1.10395e+05   -1.34632e+06
 Kinetic En.   Total Energy  Conserved En.    Temperature Pressure (bar)
 8.05310e+03   -1.33827e+06   -1.33827e+06    1.27640e+01   -3.72695e+03

..
..

Current ref_t for group System: 255.0
   Energies (kJ/mol)
        Bond          Angle    Proper Dih.  Improper Dih.          LJ-14
 6.07478e+02    2.34899e+03    4.43725e+03    1.08744e+02    3.17997e+03
  Coulomb-14        LJ (SR)   Coulomb (SR)   Coul. recip.      Potential
 4.91213e+04    2.69271e+05   -1.57036e+06   -1.10532e+05   -1.35182e+06
 Kinetic En.   Total Energy  Conserved En.    Temperature Pressure (bar)
 1.39095e+04   -1.33791e+06   -1.33791e+06    2.20464e+01   -3.08283e+03

   Step   Time   Lambda
  18000    9.0      0.0

Current ref_t for group System: 270.0
   Energies (kJ/mol)
        Bond          Angle    Proper Dih.  Im

[gmx-users] Setting up a complex membrane simulation

2012-11-02 Thread Christopher Neale
Whatever method you use to set up your system, and be it atomistic or 
coarse-grained, I suggest that you
build at least 2 such systems independently and run them all for the same 
amount of time. Looking at convergence 
from the same starting structure with different initial velocities is not 
nearly as useful as looking at convergence 
from two independent starting conformations.
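On the mechanical question of combining the structures, one common pattern (a sketch only; file names and counts here are hypothetical) is to insert each lipid type in turn with genbox and then add water:

```
# Insert each lipid type in turn (sketch; file names hypothetical):
genbox -cp box.gro   -ci lipidA.pdb -nmol 40 -o step1.gro
genbox -cp step1.gro -ci lipidB.pdb -nmol 40 -o step2.gro
# ...repeat for the remaining lipid types, then add water:
genbox -cp stepN.gro -cs spc216.gro -o solvated.gro
```

The system .top then #includes each lipid's .itp file and lists every species, in the order it appears in the coordinate file, with its count under [ molecules ]. Note that genbox may insert fewer molecules than requested if the box is too small.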

Chris.

-- original message --

I wish to run a simulation of a membrane consisting of about 10 different
lipids.
Through the automated topology builder (example) I can obtain, for each lipid,
a structure file in pdb format (a single molecule)
and an .itp topology file.
I want to make a random mix of these lipids with water and let it
equilibrate over 300 us or so to let it assemble into a bilayer.
My question is, how can I combine all these structures and topologies into
one system that has X molecules of lipid A, Y molecules of lipid B, etc.? I
think genbox might have the answer for combining the structures, but how?
Also, if someone could point me towards a tutorial where they let a membrane
form from a random distribution of lipids with water, that would be useful.
The only GROMACS membrane tutorials that I can find always start with
pre-equilibrated membranes of 128 DOPC molecules.

Best regards,



[gmx-users] regarding converge of system

2012-11-01 Thread Christopher Neale
RMSD of what? Probably you mean RMSD from the starting (or crystal) structure. 
First, consider that your profile
of RMSD vs. crystal structure levels off at 0.4 nm with increasing simulation 
time. Consider how many possible 
conformations are 0.4 nm RMSD away from the crystal structure. A stable RMSD 
does not necessarily indicate
that you have obtained Boltzmann sampling of a particular conformational basin. 
Very generally, it is the 
degeneracy of RMSD with respect to conformational space that makes RMSD an 
insufficient indicator of
convergence. There are other ways to look at this, though. For instance, there 
might be a degree of freedom that is important for your system but does not 
greatly affect the global conformation (and thus the 
RMSD). This might be a dihedral angle, formation of a hydrogen bond, 
tilt/rotation of a protein/peptide in a 
lipid bilayer, solvation patterns, ligand binding, system volume, etc.

Also, and I presume that this is not what Tsjerk meant (but you should check 
whatever reference you are referring
to) it is always possible that you remain stuck within a conformational basin 
that is locally favourable but globally
unfavourable.

For other measures of convergence, first think about what you are trying to 
study for your system and use that
to guide your selection of properties for which you can evaluate convergence. 
This should include both global
and local structural measures. There are loads of papers available that discuss 
this. See papers by Grossfield,
Zuckerman, and others. A colleague of mine has also shown that some properties 
converge faster than others
(meaning that some properties, like the RMSD, converge before the partition 
function has converged: 
http://pubs.acs.org/doi/abs/10.1021/ct900302n ).

Chris.

-- original message --

I have two questions:

1- We can say "The RMSD is commonly used as an indicator of convergence
of the structure towards an equilibrium state (Tsjerk W.)". Why is RMSD
not sufficient to determine whether or not the
structure/system has converged?

2- Radius of gyration, RMSD, and RMSD matrices are used as indicators of
convergence of the structure. Are there other indicators of
convergence of the structure?

Thanks in advance

--
Ahmet Yıldırım


[gmx-users] g_rdf

2012-10-31 Thread Christopher Neale
Dear Ali:

I don't think that you are re-posting the same question often enough. 
Perhaps you should repost more often? Four identical posts in 5 hours might not 
be enough to get a reply.

Seriously though, we all saw your message.

Chris.

-- original message(s) --

I have a system that contains water , methane and propane in 240 k and 300
bar,

My simulation box is rectangular .

Water film is in middle of my box. Methane and propane is around it.

My simulation box is symmetric,

1- I used the g_rdf program. My result is exotic: the g(r) profile does
not reach 1. Why?


2- I tested my number density profiles (from g_density), but they do not
give the correct result: when I calculate the number of molecules by
multiplying the volume by the average number density, I do not get back
the same number of particles.

Where is my mistake?

-- 
Sincerely

Ali Alizadeh



[gmx-users] Freeze group atoms changing position

2012-10-31 Thread Christopher Neale
makes sense.

-- original message --

On 10/31/12 4:21 PM, Christopher Neale wrote:
> Well, I thought that it would work for freeze groups, and that is what I 
> intended to say. I've used freeze groups
> before, but don't have tonnes of experience with them. The gromacs mdp 
> options page reads as if it should work ( 
> http://manual.gromacs.org/online/mdp_opt.html#neq ), but if Justin says that 
> you can not use an index group for freeze groups (only position restraints) 
> then I suggest that you take his word for it. He answers a lot more posts and 
> is probably more familiar with the ins and outs of gromacs.
>

That's not what I meant at all.  One can use an index group for freeze groups, 
as well.  The original problem was that apparently the frozen group was not 
working entirely, so I suggested (as a troubleshooting measure) that the OP use 
restraints instead to see if the outcome was different.  The bizarre behavior 
before (two molecules moving, but the remainder staying frozen) suggested to me 
that perhaps there was something going wrong with the initial configuration and 
I was hoping that, at the very least, using restraints would determine if the 
configuration was messed up or not amenable to freezing.

-Justin



[gmx-users] Freeze group atoms changing position

2012-10-31 Thread Christopher Neale
Well, I thought that it would work for freeze groups, and that is what I 
intended to say. I've used freeze groups 
before, but don't have tonnes of experience with them. The gromacs mdp options 
page reads as if it should work ( 
http://manual.gromacs.org/online/mdp_opt.html#neq ), but if Justin says that 
you can not use an index group for freeze groups (only position restraints) 
then I suggest that you take his word for it. He answers a lot more posts and 
is probably more familiar with the ins and outs of gromacs.
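For reference, selecting a freeze group via an index group in the .mdp looks like this (a sketch; the group names are taken from the poster's earlier message and must exist in the supplied .ndx file):

```
energygrps     = CNT_GRA_WAL_IN OUT
freezegrps     = CNT_GRA_WAL_IN                 ; index group to freeze
freezedim      = Y Y Y                          ; freeze x, y and z
energygrp_excl = CNT_GRA_WAL_IN CNT_GRA_WAL_IN  ; skip frozen-frozen interactions
```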

Chris.

-- original message --

Alex Marshall amarsh59 at uwo.ca 
Wed Oct 31 20:55:08 CET 2012
Chris, is that for freeze groups or position restraints?

On Wed, Oct 31, 2012 at 3:22 PM, Christopher Neale <
chris.neale at mail.utoronto.ca> wrote:

> No need to rename... just make an .ndx group.
>
> -- original message --
>
> As I understand it, position restraints for an atom are set in the topology
> file and applied to that atom in each of that species. In order to restrain
> some but not all of the water I'd have to copy the topology of my water
> model and add the restraints, then rename (and group together) the atoms I
> want to freeze so that they're identified with the appropriate topology
> file. Does this sound like it would work? Is there some other way that you
> might do it?
>
> Thanks
>


[gmx-users] Freeze group atoms changing position

2012-10-31 Thread Christopher Neale
No need to rename... just make an .ndx group.

-- original message --

As I understand it, position restraints for an atom are set in the topology
file and applied to that atom in each of that species. In order to restrain
some but not all of the water I'd have to copy the topology of my water
model and add the restraints, then rename (and group together) the atoms I
want to freeze so that they're identified with the appropriate topology
file. Does this sound like it would work? Is there some other way that you
might do it?

Thanks

On Fri, Oct 26, 2012 at 5:18 PM, Alex Marshall  wrote:

> Justin: I'll try using position restraints instead of freezing the water
> in the tube. Thanks for the tip.
>
> Bogdan: I don't think I'm using constraints other than freeze groups. I
> wasn't using energy group exclusions though. I tried running the simulation
> from the same initial configuration with newly-defined energy groups
> CNT_GRA_WAL_IN and OUT, the first for the frozen atoms and the second for
> the free atoms. The list of exclusions reads:
> energygrp_excl   = CNT_GRA_WAL_IN CNT_GRA_WAL_IN
>
> Long story short, I'm roughly 15 ns into the simulation and the same two
> waters have jumped. I'll check the manual again though. Thanks.
>
> On Thu, Oct 25, 2012 at 10:43 AM, Bogdan Costescu  gmail.com>wrote:
>
>> On Thu, Oct 25, 2012 at 3:59 PM, Alex Marshall  wrote:
>> > Thanks Justin. I identified the offending waters using vmd (adding 1 to
>> > resID and atom number since vmd starts counting at 0) and checked
>> > confout.gro to make sure the coordinates matched up. I only have one
>> group
>> > for all frozen atoms in the system, and these guys are definitely in it.
>>
>> Are you using some kind of constraints ? Are you using energy group
>> exclusions to avoid interactions between frozen atoms ? If you search
>> the manual for "frozen" you'll find some warnings and recommendations.
>>
>> Cheers,
>> Bogdan
>> --
>> gmx-users mailing listgmx-users at gromacs.org
>> http://lists.gromacs.org/mailman/listinfo/gmx-users
>> * Please search the archive at
>> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
>> * Please don't post (un)subscribe requests to the list. Use the
>> www interface or send it to gmx-users-request at gromacs.org.
>> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>>
>
>
>
> --
> Alex Marshall
> M.Sc. Candidate
> Department of Applied Mathematics
> The University of Western Ontario
>
>


-- 
Alex Marshall
M.Sc.
Department of Applied Mathematics
The University of Western Ontario


[gmx-users] What exactly is the artifact caused by setting refcoord_scaling=no with position restraint

2012-10-31 Thread Christopher Neale
I've asked the same question myself and would also like to know.

That said, there seems to be no issue if you set refcoord_scaling=com. 
According to the manual, this does
not give artefacts (whatever they are) and also does not change your restraint 
positions (as refcoord_scaling=all
does).

Sorry that I can't give a complete answer, but, to my understanding, 
refcoord_scaling=no is probably just there
for backward compatibility and you should use refcoord_scaling=com.
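In .mdp terms, that recommendation is a single line (a sketch):

```
refcoord_scaling = com   ; scale restraint reference coordinates with the box COM under pressure coupling
```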

Hopefully somebody who knows the full answer can comment.

Chris.

-- original message -- 

After reading the manual and some earlier threads, I understand that this
would cause incorrect pressure and some "artifact". Also, why does
gromacs 4.5.4 not give a warning while gromacs 4.5.5 does?

But I wonder: how much would the average pressure deviate with
refcoord_scaling=no compared to refcoord_scaling=com/all, with
absolute position restraints? Is the pressure reported in the .log file
still correct when setting refcoord_scaling=no?

Thanks,
Yun



[gmx-users] time-block analysis

2012-10-31 Thread Christopher Neale
Leila, I provided 2 good references to answer your question earlier. If you 
can't be bothered to read about it,
then the answer is yes, it is a good method.

Looking at your other questions, I think that it is time for you to look at 
those references and learn more about
block averaging and convergence in general.

Chris.

-- original message --

Thanks for your quick reply.

You did not answer me: is block averaging a good way to investigate
convergence of a simulation?

Based on your example (a histogram of the distribution of a dihedral angle),
if I want to investigate convergence of the RMSD, Figure 1 is RMSD vs. time.
But I don't know what Figure S1 would be: what parameter vs. what parameter
(RMSD vs. what)? Please describe more.

If the value of [rms(quantity)/quantity] is less than 10^-3, can I use
that as a convergence criterion? (For example, quantity = potential energy.)

Best regards


[gmx-users] Re, density

2012-10-31 Thread Christopher Neale
Dear Ali:

I am not sure what ecenter is; it looks like it might be the position to which 
the user wants to set the center?
In any event, it doesn't matter for this modification.

I did not write the center_x() function; it is a standard part of gmx_trjconv.c. 
What I did was to copy center_x() to 
center_x_com() and then modify center_x_com() so that it computes the center 
of mass rather than the position of
(min+max)/2.

I then modified the call to center_x() to actually be a call to center_x_com(), 
where it requires the additional 
argument of "atoms->atom" at the end of the list.

What I posted here: 
http://lists.gromacs.org/pipermail/gmx-users/2011-June/062239.html is actually 
the non-
modified function call from the original gmx_trjconv.c in gromacs 4.5.3 (posted 
to show that there is a problem).

Here is a patch that you can apply to gromacs version 4.5.3 (but possibly not 
to 4.5.4 or 4.5.5, because slight other modifications were made to 
gmx_trjconv.c, so I am not sure if this patch will apply cleanly). Still, it is 
just 2 blocks of changes, so once you understand them you can make the changes 
by hand.


--- ../../../../gromacs-4.5.3b/source/src/tools/gmx_trjconv.c   2010-09-29 
07:35:06.0 -0400
+++ ./gmx_trjconv.c 2011-08-02 13:13:00.0 -0400
@@ -385,6 +385,33 @@
 }
 }
 
+static void center_x_com(int ecenter,rvec x[],matrix box,int n,int nc,atom_id 
ci[],t_atom atom[])
+{
+int i,m,ai;
+rvec com,box_center,dx;
+real mtot;
+
+mtot=0.0;
+com[0]=com[1]=com[2]=0.0;
+if (nc > 0) {
+for(i=0; iatom);
+}
 }
 
 if (bPBCcomAtom) {





 PATCH IS OVER

In case the patch is mangled by email formatting, you can try to just add the 
lines yourself. Just add the 
center_x_com() function and then use the call to center_x_com() and not 
center_x() in the "if(bCenter)" sataement.
Note that there is the additional argument "atoms->atom" in the call to 
center_x_com().


Chris.

-- original message --

I found where you changed trjconv. I have a question about "ecenter" in this
code: where do you use it, and what is it?


static void center_x(int ecenter,rvec x[],matrix box,int n,int nc,atom_id
ci[])
{
    int i,m,ai;
    rvec cmin,cmax,box_center,dx;

    if (nc > 0) {
        copy_rvec(x[ci[0]],cmin);
        copy_rvec(x[ci[0]],cmax);
        for(i=0; i<nc; i++) {
            ai=ci[i];
            for(m=0; m<DIM; m++) {
                if (x[ai][m] < cmin[m])
                    cmin[m]=x[ai][m];
                else if (x[ai][m] > cmax[m])
                    cmax[m]=x[ai][m];
            }
        }
        calc_box_center(ecenter,box,box_center);
        for(m=0; m<DIM; m++)
            dx[m] = box_center[m]-(cmin[m]+cmax[m])*0.5;
        for(i=0; i<n; i++)
            rvec_inc(x[i],dx);
    }
}


[gmx-users] time-block analysis

2012-10-31 Thread Christopher Neale
Technically, in order to show that your simulation has reached complete 
convergence, you would need to
show that every type of data has converged. Obviously nobody does this. You 
should start with looking at the
convergence of any data type that you analyze in any other way. i.e. if you 
show a histogram of the distribution
of a dihedral angle as Figure 1, then Figure S1 might be some analysis of the 
convergence of that histogram.

Look at papers where people are doing similar things to what you are doing. 
Unfortunately, lots of otherwise
good papers don't even consider convergence so you may need to do a little 
digging.

PS: g_analyze sounds like a good idea. I've always scripted my own because it 
is more flexible, but if there is a 
gromacs tool to do this then use it.
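A minimal block-averaging script of the kind mentioned above might look like this (a sketch of the general technique, not what g_analyze -ee does internally; the toy data are made up): split the time series into contiguous blocks and watch the standard error of the block means as the blocks get larger. When the estimate stops growing with block size, the blocks are roughly independent and the error estimate is meaningful.

```python
# Block-averaging sketch: standard error of the mean from block means.
import statistics

def block_error(series, n_blocks):
    """Standard error of the mean estimated from n_blocks block means."""
    size = len(series) // n_blocks
    means = [statistics.fmean(series[i * size:(i + 1) * size])
             for i in range(n_blocks)]
    return statistics.stdev(means) / n_blocks ** 0.5

# toy, strongly correlated data: a step from 1.0 to 3.0
data = [1.0] * 100 + [3.0] * 100
for n_blocks in (20, 10, 4, 2):
    print(n_blocks, round(block_error(data, n_blocks), 4))
```

On correlated data like this, the naive per-point standard error badly underestimates the uncertainty; the block estimate keeps growing as blocks get larger, which is itself a sign of non-convergence.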

Chris.

-- original message --

Many thanks for your reply.


In addition to your suggested script, in gromacs the -ee option of
g_analyze produces error estimates using block averaging.

A person suggested that I do a time-block analysis to find out whether my
simulation has converged or not.

You mentioned to "2-column data file that has time in the first column and
some other data in the second column".

To investigate convergence of simulation, what should be as other data in
mentioned file (2-column data file)?

Any help will highly appreciated.


[gmx-users] Error during CNP simulation

2012-10-30 Thread Christopher Neale
You show different atoms before and after minimization, so this is not proof of 
movement.

I often resize my box by changing the value at the end of the file and have 
never had any problem with it. Of course, you do need to be careful with this, 
as with everything you do.
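For reference, the box vectors are the final line of a .gro file, so resizing means editing those numbers. A sketch with illustrative values (a rectangular 7.5 x 7.5 x 15 nm box):

```
   7.50000   7.50000  15.00000
```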

Chris.

-- original message --

My EM works just fine if I use a cubical box of water, so I'm baffled as
to what's going wrong in this case.

I tried the entire process again, this time removing position restraints.
Let me describe my entire procedure on this occasion, once again:

1) I first assembled a bilayer of 128 DOPC lipids, in a box of dimension
7.5 x 7.5 x 7.5 nm3.

2) I then removed the water molecules, resized the box length by changing
the bottom line of the coordinate file, and inserted a CNP to an
appropriate location, about 7 nm above the bilayer.

3) Then, I performed an energy minimization, and the CNP did not move
about. So far, so good.

4) I solvated the box with water using genbox, and then attempted an
energy minimization (using steepest descents). This is where the problem
arises.

These are the coordinates from the input coordinate file, before the
energy minimization:
  129F16C01 1793  10.600   3.474   3.330
  129F16C02 1794  10.438   3.758   3.294
  129F16C03 1795  10.381   3.588   3.947
  129F16C04 1796  10.795   3.703   3.387
  129F16C05 1797  10.564   3.308   3.584
  129F16C06 1798  10.289   3.506   3.415
  129F16C07 1799  10.322   3.939   3.528
  129F16C08 1800  10.655   3.430   3.872
  129F16C09 1801  10.840   3.479   3.620
  129F16C10 1802  10.641   3.958   3.453
  129F16C11 1803  10.342   3.876   3.850
  129F16C12 1804  10.610   3.991   3.756
  129F16C13 1805  10.675   3.736   3.950
  129F16C14 1806  10.307   3.397   3.721
  129F16C15 1807  10.862   3.787   3.699
  129F16C16 1808  10.172   3.686   3.649

And these are the coordinates from the output coordinate file, within just
TEN steps of energy minimization:
 4849F16C01 6513   1.015   0.049   5.350
 4849F16C02 6514   1.294  -3.135   4.063
 4849F16C03 6515   0.663  -1.872   3.948
 4849F16C04 6516   1.521  -1.365   3.815
 4849F16C05 6517   2.111  -0.464   4.030
 4849F16C06 6518   0.546  -2.205   6.177
 4849F16C07 6519  -0.418  -2.289   6.608
 4849F16C08 6520  -0.301  -1.054   4.991
 4849F16C09 6521  -0.154  -1.757   5.881
 4849F16C10 6522   0.595  -0.223   5.592
 4849F16C11 6523   1.450  -0.980   7.148
 4849F16C12 6524   0.055  -1.027   5.487
 4849F16C13 6525   0.020  -0.473   5.753
 4849F16C14 6526   1.492  -0.255   4.474
 4849F16C15 6527   0.926  -0.762   4.489
 4849F16C16 6528   0.147  -2.022   3.748

I have no idea why this is happening; that the molecule could move so much
seems ridiculous. Besides the em.mdp file, is something wrong with, for
example, my method of resizing the box?

Thanks!
Bharath



[gmx-users] Holes in the solvent!

2012-10-29 Thread Christopher Neale
In addition to the comments of Dallas Warren, which seem quite plausible and 
should be checked: are you sure that you are seeing real holes and not just 
salt crystals that look like holes when you render only the waters with VMD? 
I have found that, at NaCl concentrations above 1 M, lipid bilayers catalyze 
the formation of salt crystals when using TIP3P/4P water and the OPLS/AA-L 
force field for ions. These crystals looked a lot like vacuum holes to me 
until I visualized the salt.

Chris.

-- original message --

I am simulating a protein on a polymer surface for further adsorption free
energy calculation. 
However, after performing the NVT run, either for short or long durations, I
found that a couple of holes appear in the physiological saline solution
(the solvent).
Additionally, even after proceeding with a long NPT run the holes are still
present in the system.
I would be thankful if you could point me to the possible source of such a
strange outcome.

Best Wishes

Arman



[gmx-users] time-block analysis

2012-10-29 Thread Christopher Neale
Dear Leila:

I presume that "time-block analysis" is the same as block averaging, but it is 
impossible to be sure without more 
context. I suggest that you find the person who suggested that you do 
"time-block analysis" and ask them.

Very generally, you will need to do your own scripting to do block averaging. 
Look at Allen and Tildesley for
a good description of block averaging or look at the original (as far as I 
know) reference:
Flyvbjerg, H.; Petersen, H. G., Error estimates on averages of correlated data. 
J. Chem. Phys. 1989, 91 (1), 461-466

Basically, you will need to take your raw data and make block averages. Here is 
a very simple script to do that for a 2-column data file that has time in the 
first column and some other data in the second column:

MIN=0
MAX=1000
BLOCK=100
for ((min=MIN; min<MAX; min+=BLOCK)); do
  max=$((min+BLOCK))
  awk '{if($1>='${min}' && $1<'${max}'){s+=$2;n++}}END{print ('${min}'+'${max}')/2,s/n}' < my.data
done > my.block.averaged.data

I am sure that there are more elegant awk scripts that can do this, but the 
intention here was to show how it is 
done most clearly.
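The same block averaging, together with the simple error estimate of the overall mean that the Flyvbjerg-Petersen reference discusses, can be sketched in Python. The synthetic data and function names below are illustrative only, not part of the script above (numpy assumed):

```python
import numpy as np

def block_average(time, values, block):
    """Average `values` over consecutive time blocks of width `block`;
    returns (block_midpoint_times, block_means)."""
    t0 = time.min()
    idx = ((time - t0) // block).astype(int)   # which block each sample is in
    nblocks = idx.max() + 1
    mids = t0 + (np.arange(nblocks) + 0.5) * block
    means = np.array([values[idx == b].mean() for b in range(nblocks)])
    return mids, means

# illustrative 2-column data: time 0..999, noisy constant signal around 5
rng = np.random.default_rng(1)
t = np.arange(1000.0)
v = 5.0 + rng.standard_normal(1000)
mids, means = block_average(t, v, 100.0)

# a simple error estimate of the overall mean from the block means
sem = means.std(ddof=1) / np.sqrt(len(means))
print(len(means), sem > 0.0)
```

With blocks long enough to decorrelate the data, the standard deviation of the block means divided by sqrt(nblocks) is a usable error bar on the overall average.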

Chris.

-- original message --


Can I do time-block analysis by gromacs? If so, which command?

Is time-block analysis the same as block averaging?


Any help will highly appreciated.


[gmx-users] g_density

2012-10-29 Thread Christopher Neale
Dear Ali:

I assure you that all relevant information has been provided. As I said, it's 
not simple. If it is beyond your 
abilities then perhaps you should find a local collaborator. These days, many 
computational consortia have 
computational specialists who can help you with your scripting and coding. I'm 
not going to do it for you and
I think that we have reached the end of our useful discussion on this topic.

Chris.

-- original message --


Dear Chris

I saw two of your emails:

static void center_x(int ecenter,rvec x[],matrix box,int n,int nc,atom_id ci[])

{
  int i,m,ai;
  rvec cmin,cmax,box_center,dx;

  if (nc > 0) {
    copy_rvec(x[ci[0]],cmin);
    copy_rvec(x[ci[0]],cmax);
    for(i=0; i<nc; i++) {
      ai=ci[i];
      for(m=0; m<DIM; m++) {
        if (x[ai][m] < cmin[m])
          cmin[m]=x[ai][m];
        else if (x[ai][m] > cmax[m])
          cmax[m]=x[ai][m];
      }
    }
    calc_box_center(ecenter,box,box_center);
    for(m=0; m<DIM; m++)
      dx[m] = box_center[m]-(cmin[m]+cmax[m])*0.5;
    for(i=0; i<n; i++)
      rvec_inc(x[i],dx);
  }
}


[gmx-users] Error during CNP simulation

2012-10-29 Thread Christopher Neale
Sorry Bharath, I simply cannot believe that a macromolecule drifted 3 nm 
during energy minimization.
What happened to the intervening solvent? I think that you have some problem 
with your technique here.

Why did you use 120 constraints? Is this for bonds within the CNP?

You need to do 2 things:

a) figure out what is going wrong with your EM... what you describe seems 
impossible to me.
b) simplify your system to debug the problem. Do you get the same warnings 
about too many constraints if you omit the position restraints?

Chris.

-- original message --

You're right, my resizing from 7.5 nm to 15 nm refers to the dimension
normal to the lipid bilayer (in this case, the x dimension)

My bilayer center of mass is at around +3 nm, and I placed the CNP at
around 10 nm to start with. I had periodic boundary conditions employed in
all three dimensions. I didn't add the CNP using genbox or any other
command- I manually manipulated the .gro coordinate file of the bilayer.
When I ran an energy minimization of just the bilayer and the CNP, the
nanoparticle moved, but very slightly- so far, so good. But, once I added
water and did another energy minimization and checked the structure after
convergence, the nanoparticle had drifted all the way up to around 0 nm,
which meant it was only around 1 nm away from my bilayer before the MD
run. So, yes, it drifted during the energy minimization.

I should probably try what you said. I repeated the entire above process
using a posre.itp file. I had created this posre.itp file using the
lipids in the bilayer and the CNP atoms, and I used this file during the
energy minimization after adding water, which is when I got the following
error: "A charge group moved too far between two domain decomposition
steps."

I also got a warning before the fatal EM, which said that the number of
constraints I had imposed on the CNP (120) is greater than the number of
possible degrees of freedom (48), which could cause the system to be
unstable. I wasn't sure how to fix that, though.

I'm sorry if I'm a bit vague, but I'd appreciate it if you could advise me
further on this issue.

Thanks!
Bharath


[gmx-users] strange RMSD output

2012-10-28 Thread Christopher Neale
Try looking at the RMSD after fitting only the backbone that is not in this 
variable loop. Since you are fitting to the whole protein, and your entire 
protein group is larger than your loop, your result does not surprise me. I 
would suggest:

a) Do the fitting based on the backbone but excluding the variable loop
b) Compute the RMSD for i) the whole backbone, ii) the backbone excluding the 
loop, and iii) the backbone of the loop only.
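Why the choice of fit group matters can be seen from the least-squares fit itself: the fit spreads any local deviation over every fitted atom. A minimal numpy sketch of such a superposition (a generic Kabsch algorithm, not GROMACS code; every name and data value here is illustrative):

```python
import numpy as np

def kabsch_fit(mobile, ref):
    # Least-squares (Kabsch) superposition of `mobile` onto `ref`;
    # rows are atoms, columns are x/y/z coordinates.
    mc, rc = mobile.mean(axis=0), ref.mean(axis=0)
    P, Q = mobile - mc, ref - rc
    U, S, Vt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(U @ Vt))           # guard against reflections
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    return P @ R + rc                            # fitted mobile coordinates

def rmsd(a, b):
    return float(np.sqrt(((a - b) ** 2).sum(axis=1).mean()))

# a rigidly rotated + translated copy fits back with essentially zero RMSD
rng = np.random.default_rng(0)
ref = rng.standard_normal((20, 3))
c, s = np.cos(0.7), np.sin(0.7)
R0 = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
mobile = ref @ R0 + np.array([1.0, -2.0, 0.5])
fitted = kabsch_fit(mobile, ref)
print(rmsd(fitted, ref) < 1e-9)
```

Fitting on a rigid subset (e.g. the backbone minus the loop) and then measuring the RMSD of the loop atoms isolates the loop's true motion, which is the point of suggestions a) and b) above.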

Finally, next time please post all relevant commands that you used. You don't 
mention using the -fit option in g_rms and you don't mention running trjconv 
to do a preliminary fitting. You make it harder for us to help you when you 
provide incomplete information and we need to guess what you did.

Chris.

-- original message --

Hello:

  I've got several individual .pdb files and I use the following commands to 
merge them into an .xtc and calculate the rmsd with g_rms:


trjcat -cat -keeplast -f *.pdb -o out.xtc

g_rms -f out.xtc -s 001.pdb -o rmsd.xvg

However, I find a problem with the results. Most parts of each PDB
file overlap very well; only a small loop is different. I first
calculated the rmsd of the backbone and got:


@ subtitle "Backbone after lsq fit to Backbone"
 0.000   0.003
 0.000   0.3306260
 0.000   0.2545602
 0.000   0.3293277
 0.000   0.2299789
 0.000   0.3216407

Then I calculate the loop region and got:

@ subtitle "Backbone_&_r_120-131 after lsq fit to Backbone"
 0.000   0.003
 0.000   0.1526730
 0.000   0.1202507
 0.000   0.1449232
 0.000   0.1318949
 0.000   0.1675032

As we can see, the rmsd of the loop region is smaller than that of the whole
backbone, which I think should be the other way around...

thank you very much.



[gmx-users] Error during CNP simulation

2012-10-28 Thread Christopher Neale
I presume that your 7.5 nm -> 15 nm resizing refers to the Cartesian dimension 
that lies along the bilayer normal?

Also, are you sure that the nanoparticle drifts significantly closer to the 
bilayer "during minimization", as you stated? That sounds unlikely. Perhaps 
this drift occurred during MD simulation after energy minimization?

How did you implement your restraints? My first try would be using the pull 
code harmonic restraints to restrain the distance between the center of mass of 
the bilayer and the center of mass of the nanoparticle, only applying the force 
along the dimension of your global bilayer normal. Note that, when doing this, 
the distance must always remain less than 1/2 of the box length along that 
dimension or your simulation will be unstable.
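For reference, such a COM restraint might look like the following .mdp fragment. This is only a sketch in GROMACS 4.x pull-code syntax; the group names DOPC and CNP and the force constant are placeholders, so check the option names against the manual for your GROMACS version:

```
; static COM-distance restraint along x (the bilayer normal in this system)
pull            = umbrella
pull_geometry   = distance
pull_dim        = Y N N        ; apply the force along x only
pull_start      = yes          ; add the initial COM distance to pull_init1
pull_ngroups    = 1
pull_group0     = DOPC         ; reference group: the bilayer (placeholder name)
pull_group1     = CNP          ; pulled group: the nanoparticle (placeholder name)
pull_init1      = 0
pull_rate1      = 0            ; rate 0 makes this a fixed harmonic restraint
pull_k1         = 1000         ; kJ mol^-1 nm^-2 (placeholder value)
```

With pull_rate1 = 0 the pull code acts as a static umbrella restraint on the COM separation, which is the setup Chris describes; the half-box-length caveat above still applies.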

How did the system "blow up"? Was it a LINCS-based crash or massive expansion 
of the box or just that the bilayer fell apart? Did it blow up during energy 
minimization or MD simulation?

Chris.

-- original message --

I'm trying to simulate the diffusion of a coarse-grained carbon
nanoparticle (from Monticelli) into a coarse-grained DOPC lipid bilayer,
to reproduce the results obtained by Wong-Ekkabut et al. I first assembled
my bilayer in a box of size 7.5 x 7.5 x 7.5 nm3, and the results appeared
legitimate. However, I'm now trying to repeat the experiment by placing
the carbon nanoparticle at an initial position further away from the
bilayer, and so I've resized my box to 15 nm x 7.5 nm x 7.5 nm.

In this case, what happens is that after solvation using genbox, if I
carry out an energy minimization, the CNP drifts very close to the
bilayer, rendering my increase of the box size redundant. To overcome
this, I've tried restraining the positions of the CNP and the bilayer, but
when I do this, my simulation 'blows up'. I'm not aware of how to overcome
this problem. Would you be able to help me with this?

Thanks!

Regards
Bharath K Srikanth


[gmx-users] g_rms problem

2012-10-28 Thread Christopher Neale
I thought you said that you have 1000 structures. It seems like you only have 
6? Nevertheless, I am glad that trjcat worked for you.

I don't know what is going on with your RMSD values, but I suggest that you 
start a separate post with a new subject line for that. I suspect that you need 
to do some fitting.

Chris.

-- original message --

Thanks a lot for your kind reply.
The output for

for i in $(ls *pdb); do grep ^ATOM $i|wc -l; done |sort -n |head

is:
1838
1838
1838
1838
1838
1838

The output for

for i in $(ls *pdb); do grep ^ATOM $i|wc -l; done |sort -n |tail

is also:
1838
1838
1838
1838
1838
1838


It seems that trjcat works fine, although the time is 0 for all frames. 
Here are the commands I use:

trjcat -cat -keeplast -f *.pdb -o out.xtc

g_rms -f out.xtc -s 001.pdb -o rmsd.xvg

However, I find a problem with the results. Most parts of each PDB 
file overlap very well; only a small loop is different. I first 
calculated the rmsd of the backbone and got:

@ subtitle "Backbone after lsq fit to Backbone"
0.000   0.003
0.000   0.3306260
0.000   0.2545602
0.000   0.3293277
0.000   0.2299789
0.000   0.3216407

Then I calculate the loop region and got:

@ subtitle "Backbone_&_r_120-131 after lsq fit to Backbone"
0.000   0.003
0.000   0.1526730
0.000   0.1202507
0.000   0.1449232
0.000   0.1318949
0.000   0.1675032

As we can see, the rmsd of the loop region is smaller than that of the whole 
backbone, which I think should be the other way around...

thank you very much.

On 10/28/2012 05:03 PM, Christopher Neale wrote:
> First, please let me complain that you did not run those 2 commands and post 
> the full output with the line on which you entered the command (for each 
> one). Each command is expected to give you 10 lines of output, but you posted 
> a single group of 12 lines. That seems like unlikely output and just confuses 
> things.
>
> Second, I am not sure that you can simply cat the results of rosetta together 
> (or any pdb files). I suggest that you try using trjcat -cat -keeplast to put 
> them together or convert them all to .gro files with a loop over editconf and 
> then join them with cat (or trjcat).
>
> Chris.
>
>
>> How many atoms are in each .pdb file?
>>
>> for i in $(ls *pdb); do grep ^ATOM $i|wc -l; done |sort -n |head
>> for i in $(ls *pdb); do grep ^ATOM $i|wc -l; done |sort -n |tail
>>
>> Chris.
> Hi Chris:
>
>there are 1838 atoms in each PDB file and all of them are full atoms. I
> ran the command you provided and got:
>
> 1838
> 1838
> 1838
> 1838
> 1838
> 1838
> 1838
> 1838
> 1838
> 1838
> 1838
> 1838
>
> When I try to reuse g_rms, the problem is still the same.
>
> thank you very much
>



[gmx-users] g_rms problem

2012-10-28 Thread Christopher Neale
First, please let me complain that you did not run those 2 commands and post 
the full output with the line on which you entered the command (for each one). 
Each command is expected to give you 10 lines of output, but you posted a 
single group of 12 lines. That seems like unlikely output and just confuses 
things.

Second, I am not sure that you can simply cat the results of rosetta together 
(or any pdb files). I suggest that you try using trjcat -cat -keeplast to put 
them together or convert them all to .gro files with a loop over editconf and 
then join them with cat (or trjcat).

Chris.


> How many atoms are in each .pdb file?
>
> for i in $(ls *pdb); do grep ^ATOM $i|wc -l; done |sort -n |head
> for i in $(ls *pdb); do grep ^ATOM $i|wc -l; done |sort -n |tail
>
> Chris.

Hi Chris:

  there are 1838 atoms in each PDB file and all of them are full atoms. I 
ran the command you provided and got:

1838
1838
1838
1838
1838
1838
1838
1838
1838
1838
1838
1838

When I try to reuse g_rms, the problem is still the same.

thank you very much



[gmx-users] g_rms problem

2012-10-28 Thread Christopher Neale
How many atoms are in each .pdb file?

for i in $(ls *pdb); do grep ^ATOM $i|wc -l; done |sort -n |head
for i in $(ls *pdb); do grep ^ATOM $i|wc -l; done |sort -n |tail

Chris.

-- original message --

   I've got 1000 separate pdb files generated by Rosetta and I would 
like to calculate the RMSD between them. I use the commands:

cat *.pdb > all
mv all all.pdb

to merge it.

then I use g_rms to calculate the rmsd between them:

g_rms -f all.pdb -s 0001.pdb -o rmsd.xvg

However, g_rms only gives one rmsd value, with the following messages:

Warning: if there are broken molecules in the trajectory file, they can
not be made whole without a run input file

reading frame  0 time -1.
Warning: topology has 1840 atoms, whereas trajectory has 7360 atoms


I am just wondering how I can make g_rms work correctly?

thank you very much
Albert



[gmx-users] problem with g_density

2012-10-24 Thread Christopher Neale
All versions of g_density suffer from the same problem. It is not precisely a 
bug, since it works perfectly if you
do constant volume simulations (presuming that trjconv -center works properly, 
which it does not in any version of 
gromacs) or if you properly process your trajectory prior to running g_density.

I have already posted the entire solution online (as you have referenced) and I 
have already stated that this is as simple as I can make the solution. Why 
don't you try that solution, and when you run into a specific problem then 
please post it to this mailing list. We tried that a bit already, but I didn't 
hear back from you about remaking the .tpr file, which is required at some 
point or you get the error that you previously reported with grompp.

Chris.

-- original message --

Which version of gromacs has the problem related to g_density? (All of them?)

I have read nearly all of your emails about the problem with density, but I
did not understand the correct method.

Please, if it is possible, send me just the specific method, spelled out
clearly. In fact I do not yet know the exact method for this problem, so
I will try to work it out!

-- 
Sincerely

Ali Alizadeh


[gmx-users] possible problem with gpc-f106n004

2012-10-21 Thread Christopher Neale
Dear SciNet:

I have resubmitted, just letting you know.

Thank you,
Chris.

starting mdrun 'title'
10 steps, 200.0 ps (continuing from step 101920720, 203841.4 ps).
[[3105,1],27][../../../../../ompi/mca/btl/openib/btl_openib_component.c:3238:handle_wc]
 from gpc-f106n004 to: gpc-f106n003-ib0 error polling LP CQ with status RETRY 
EXCEEDED ERROR status number 12 for wr_id 10982784 opcode 32767  vendor error 
129 qp_idx 0
[[3105,1],25][../../../../../ompi/mca/btl/openib/btl_openib_component.c:3238:handle_wc]
 from gpc-f106n004 to: gpc-f106n003-ib0 error polling LP CQ with status RETRY 
EXCEEDED ERROR status number 12 for wr_id 10982016 opcode 32767  vendor error 
129 qp_idx 0
[[3105,1],28][../../../../../ompi/mca/btl/openib/btl_openib_component.c:3238:handle_wc]
 from gpc-f106n004 to: gpc-f106n003-ib0 error polling LP CQ with status RETRY 
EXCEEDED ERROR status number 12 for wr_id 11032832 opcode 32767  vendor error 
129 qp_idx 2
--
The InfiniBand retry count between two MPI processes has been
exceeded.  "Retry count" is defined in the InfiniBand spec 1.2
(section 12.7.38):

The total number of times that the sender wishes the receiver to
retry timeout, packet sequence, etc. errors before posting a
completion error.

This error typically means that there is something awry within the
InfiniBand fabric itself.  You should note the hosts on which this
error has occurred; it has been observed that rebooting or removing a
particular host from the job can sometimes resolve this issue.  

Two MCA parameters can be used to control Open MPI's behavior with
respect to the retry count:

* btl_openib_ib_retry_count - The number of times the sender will
  attempt to retry (defaulted to 7, the maximum value).
* btl_openib_ib_timeout - The local ACK timeout parameter (defaulted
  to 20).  The actual timeout value used is calculated as:

 4.096 microseconds * (2^btl_openib_ib_timeout)

  See the InfiniBand spec 1.2 (section 12.7.34) for more details.

Below is some information about the host that raised the error and the
peer to which it was connected:

  Local host:   gpc-f106n004
  Local device: mlx4_0
  Peer host:gpc-f106n003-ib0

You may need to consult with your system administrator to get this
problem fixed.
--
--
mpirun has exited due to process rank 28 with PID 6738 on
node gpc-f106n004-ib0 exiting without calling "finalize". This may
have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--
[[3105,1],5][../../../../../ompi/mca/btl/openib/btl_openib_component.c:3238:handle_wc]
 from gpc-f106n001 to: gpc-f106n003-ib0 error polling LP CQ with status RETRY 
EXCEEDED ERROR status number 12 for wr_id 12949632 opcode 32767  vendor error 
129 qp_idx 0
[[3105,1],7][../../../../../ompi/mca/btl/openib/btl_openib_component.c:3238:handle_wc]
 from gpc-f106n001 to: gpc-f106n003-ib0 error polling LP CQ with status RETRY 
EXCEEDED ERROR status number 12 for wr_id 11033216 opcode 32767  vendor error 
129 qp_idx 2
[gpc-f106n001:06928] 4 more processes have sent help message 
help-mpi-btl-openib.txt / pp retry exceeded
[gpc-f106n001:06928] Set MCA parameter "orte_base_help_aggregate" to 0 to see 
all help / error messages
[gpc-f106n001:06928] [[3105,0],0]-[[3105,0],2] mca_oob_tcp_msg_recv: readv 
failed: Connection reset by peer (104)
=>> PBS: job killed: node 2 (gpc-f106n003-ib0) requested job terminate, 'EOF' 
(code 1099) - received SISTER_EOF attempting to communicate with sister MOM's
Terminated
mpirun: abort is already in progress...hit ctrl-c again to forcibly terminate



[gmx-users] problem with g_density -center

2012-10-19 Thread Christopher Neale
The solution that I have posted with modified trjconv -center gives the correct 
values. However, you should always
check such things because at the end of the day it is your name on the paper so 
you are the one responsible.

If you want to reproduce a previous result, just contact them to find out 
exactly what methods they used
and use those same methods.

Chris.

 -- original message --

Ali Alizadeh ali.alizadehmojarad at gmail.com 
Fri Oct 19 20:22:54 CEST 2012
Dear Chris

1- Can you get a correct density profile with your own method?

I will use your method, and I hope to get a correctly symmetric density
profile for my symmetric system.

2- One question: if I find out which old version of gromacs the paper
(published in 2009) used, can I hope to reproduce its results?

-- 
Sincerely


[gmx-users] problem with g_density -center

2012-10-19 Thread Christopher Neale
Sorry Ali, I can't do your work for you. If this is way over your head, then 
either forget it, find a colleague to help you, or spend some time learning 
bash and gromacs. I'm not commenting on your approach, but if you modified 
your .gro then you need a new .top file. You can create one.
Chris.

-- original message --

I know that these commands are not similar to your code at all.

1-  trjconv -f md.xtc -o tmp.xtc -n cn.ndx

2-  trjconv -center -pbc mol -f tmp.xtc -o tmp2.xtc -s md.tpr -n cn.ndx tmp.xtc

3-  trjconv -center -pbc atom -f tmp2.xtc -o tmp3.xtc -s md.tpr -n cn.ndx tmp2.xtc

4-  ## now make a new .tpr file in which the solute is at the center of the box
    # first output a single frame
    trjconv -f tmp3.xtc -dump 125000 -o tmpgro.gro -s md.tpr -center -pbc mol -n cn.ndx

5-  # make a new .tpr file
    touch empty.mdp
    grompp -p mytop.top -c tmpgro.gro -f empty.mdp -o centered.tpr -maxwarn 1

At this step it gives me an error:

 Fatal error:
Invalid line in tmpgro.gro for atom 1:
1LI  C11   0.8   1.9   0.7

I know this error is related to my .top file, because my top file did not
change while my .gro file changed.

2- I really could not figure out these lines in your commands. I'm not a
professional linux programmer.

GMXLIB=/project/pomes/cneale/GPC/exe/intel/gromacs-4.5.3_modtrjconv/exec/share/gromacs/top
echo "NE_CZ_NH1_NH2_CB_CG_CD System" |

and other lines.

Please explain more if it is possible.

