[gmx-users] Re: Importance of -rcon and -dd options when using mdrun with mpi.

2012-03-15 Thread PAVAN PAYGHAN
On Wed, Mar 14, 2012 at 7:40 PM, gmx-users-requ...@gromacs.org wrote:


 Message: 1
 Date: Wed, 14 Mar 2012 23:21:34 +1100
 From: Mark Abraham mark.abra...@anu.edu.au
 Subject: Re: [gmx-users] Importance of -rcon and -dd options when
using mdrun with mpi.
 To: Discussion list for GROMACS users gmx-users@gromacs.org

 On 14/03/2012 9:54 PM, PAVAN PAYGHAN wrote:
 
  Dear Gromacs Users,
 
  I am running mdrun on a single node with 8 CPUs and getting the following error:

  Fatal error:

  DD cell 1 0 0 could only obtain 1520 of the 1521 atoms that are
  connected via constraints from the neighbouring cells.

  This probably means your constraint lengths are too long compared to
  the domain decomposition cell size.

  Decrease the number of domain decomposition grid cells or lincs order.
 

 The .log file has a detailed analysis of how DD is setting things up.
 You need to make sure that this output is sensible for your system.
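 (For example, one way to pull that part out of the log, assuming the log is
 called md.log, is something like

   grep -B 2 -A 20 "Domain decomposition" md.log

 which should show the DD grid, the cell sizes and the estimated maximum
 distance required for P-LINCS; the exact wording of those log lines may
 differ between GROMACS versions.)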

 You should also desist from using P-R pressure coupling during
 equilibration (i.e. with velocity generation), as warned in the manual
 section on pressure coupling. Perhaps your system is blowing up.

 Mark

 Dear Mark,

Thanks for the reply.
 I have successfully completed the equilibration run; while doing the production
run I am getting the above error (after almost 10 ns of the run).
 It would also be helpful if you could explain the importance of the -rcon value
and the other things that I have asked about.

Regards,

Pavan



  For solving this problem, here is what I have tried.

  *1] Decreasing grid cell size:*

   As suggested in the error, I tried to decrease the number of grid cells
  with the -dd option, from 8 1 1 to 6 1 1, but it threw the following error:

   Fatal error: The size of the DD grid (6) does not match the number of
  nodes (8).

  Can you please suggest a better way to decrease the grid cell size?

  *2] -rcon option:*

  What is the relation between the -rcon value and the DD cell size
  (directly or inversely proportional)? For the problem described above,
  what should the strategy be (to decrease or to increase the rcon value)?

  If one changes the -rcon value, will it affect the LINCS accuracy? In
  other words, will the run remain an exact continuation, or will it change?

  When changing the -rcon value, should one take the previous log file as a
  reference, i.e. the estimated maximum distance required for P-LINCS (say
  0.877), and increase -rcon beyond what has been estimated?

  *3] lincs_order and lincs_iter:*

  If we don't want to deteriorate the LINCS accuracy, (1 +
  lincs_iter)*lincs_order has to remain constant. In my case,

  with lincs_order = 4 and lincs_iter = 1 I got the above error, so I
  decreased lincs_order (to 2) and increased lincs_iter (to 3) proportionally.
  Is what I am doing right, or have I misunderstood it? If so, please
  correct me. Can these values be fractions?

  Are the values I have tried reasonable, or very bad?

  Please explain it.

  If the same problem can be solved by any other methodology, please
  explain it.
 
  *Please see the mdp file  details.*
 
 
  integrator            = md
  nsteps                = 1000
  dt                    = 0.002     ; 2 fs

  ; Output control
  nstxout               = 1000      ; save coordinates every 2 ps
  nstvout               = 1000      ; save velocities every 2 ps
  nstxtcout             = 1000      ; xtc compressed trajectory output every 2 ps
  nstenergy             = 1000      ; save energies every 2 ps
  nstlog                = 1000      ; update log file every 2 ps

  ; Bond parameters
  continuation          = yes       ; restarting after NPT
  constraint_algorithm  = lincs     ; holonomic constraints
  constraints           = all-bonds ; all bonds (even heavy atom-H bonds)
  lincs_iter            = 1         ; accuracy of LINCS
  lincs_order           = 4

  ; Neighborsearching
  ns_type               = grid
  nstlist               = 5
  rlist                 = 1.2
  rcoulomb              =

Re: [gmx-users] Re: Importance of -rcon and -dd options when using mdrun with mpi.

2012-03-15 Thread Mark Abraham

On 15/03/2012 5:36 PM, PAVAN PAYGHAN wrote:



On 14/03/2012 9:54 PM, PAVAN PAYGHAN wrote:

 Dear Gromacs Users,

 I am running mdrun on a single node with 8 CPUs and getting the
following error:

 Fatal error:

 DD cell 1 0 0 could only obtain 1520 of the 1521 atoms that are
 connected via constraints from the neighbouring cells.

 This probably means your constraint lengths are too long compared to
 the domain decomposition cell size.

 Decrease the number of domain decomposition grid cells or lincs order.


The .log file has a detailed analysis of how DD is setting things up.
You need to make sure that this output is sensible for your system.

You should also desist from using P-R pressure coupling during
equilibration (i.e. with velocity generation), as warned in the manual
section on pressure coupling. Perhaps your system is blowing up.

Mark

Dear Mark,

Thanks for the reply.
 I have successfully completed the equilibration run; while doing the 
production run I am getting the above error (after almost 10 ns of the run).
 It would also be helpful if you could explain the importance of the -rcon 
value and the other things that I have asked about.


A description of your system, GROMACS version and objective are 
important to solving the problem. I don't think we've seen those. Nobody 
wants to spend time talking about DD options if you are trying to run a 
system with 1000 atoms on 8 processors. By not describing fully, you 
make it easy for people to not be bothered.




Regards,

Pavan

 For solving this problem, here is what I have tried.

 *1] Decreasing grid cell size:*

  As suggested in the error, I tried to decrease the number of grid cells
 with the -dd option, from 8 1 1 to 6 1 1, but it threw the following error:

  Fatal error: The size of the DD grid (6) does not match the number of
 nodes (8).

 Can you please suggest a better way to decrease the grid cell size?



You can't decrease the number of DD cells since you need one per 
processor. Maybe you are trying to parallelize a system that is too 
small for this number of processors, which brings us back to needing a 
description of your system.
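
(To illustrate the arithmetic: the product of the three -dd dimensions has to
equal the number of MPI processes doing particle-particle work. So, as a
sketch with placeholder file names,

mpiexec -np 8 mdrun_mpi -s topol.tpr -dd 4 2 1

is consistent, whereas -np 8 with -dd 6 1 1 is not; to actually use a 6 1 1
grid you would have to run on 6 processes.)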




 *2] -rcon option:*

 What is the relation between the -rcon value and the DD cell size
 (directly or inversely proportional)? For the problem described above,
 what should the strategy be (to decrease or to increase the rcon value)?

 If one changes the -rcon value, will it affect the LINCS accuracy? In
 other words, will the run remain an exact continuation, or will it change?

 When changing the -rcon value, should one take the previous log file as a
 reference, i.e. the estimated maximum distance required for P-LINCS (say
 0.877), and increase -rcon beyond what has been estimated?



You need to read mdrun -h about -rcon. You need to be trying to increase 
the ratio of cell size to constraint length, per the error message.
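
(For what it's worth, mdrun -h describes -rcon as the maximum distance in nm
that P-LINCS may need, with 0 meaning "estimate it", so a sketch with
placeholder file names would be

mdrun -s topol.tpr -rcon 0.9

i.e. something at or above the 0.877 nm estimate from your log. As I
understand it, this acts as a lower limit on the DD cell size rather than a
LINCS accuracy setting, so it cannot be used to shrink the cells.)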




 *3] lincs_order and lincs_iter:*

 If we don't want to deteriorate the LINCS accuracy, (1 +
 lincs_iter)*lincs_order has to remain constant. In my case,

 with lincs_order = 4 and lincs_iter = 1 I got the above error, so I
 decreased lincs_order (to 2) and increased lincs_iter (to 3) proportionally.
 Is what I am doing right, or have I misunderstood it? If so, please
 correct me. Can these values be fractions?



The values must be integers.


 Are the values I have tried reasonable, or very bad?



That is a correct approach for maintaining LINCS accuracy and trying to 
decrease the required constraint length, however it may not help solve 
the underlying problem.
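
(In .mdp terms the bookkeeping looks like this; both combinations keep
(1 + lincs_iter)*lincs_order = 8:

; original settings
lincs_order = 4
lincs_iter  = 1

; same nominal accuracy, shorter constraint coupling range
lincs_order = 2
lincs_iter  = 3

As far as I understand, only lincs_order determines how far constraint
coupling has to be communicated between DD cells, which is why lowering it
is what the error message suggests.)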


Mark



 Please explain it.

 If the same problem can be solved by any other methodology please
 explain it.

 *Please see the mdp file  details.*


 integrator            = md
 nsteps                = 1000
 dt                    = 0.002     ; 2 fs

 ; Output control
 nstxout               = 1000      ; save coordinates every 2 ps
 nstvout               = 1000      ; save velocities every 2 ps
 nstxtcout             = 1000      ; xtc compressed trajectory output every 2 ps
 nstenergy             = 1000      ; save energies every 2 ps
 nstlog                = 1000      ; update log file every 2 ps

 ; Bond parameters
 continuation          = yes       ; restarting after NPT
 constraint_algorithm  = lincs     ; holonomic constraints
 constraints           = all-bonds ; all bonds (even heavy atom-H bonds)
 lincs_iter            = 1         ; accuracy of LINCS


[gmx-users] Problems with simulation on multi-nodes cluster

2012-03-15 Thread James Starlight
Dear Gromacs Users!


I have some problems running my simulations on a multi-node cluster which
uses Open MPI.

I launch my jobs by means of the script below. The example runs on 1 node
(12 CPUs).

#!/bin/sh
#PBS -N gromacs
#PBS -l nodes=1:red:ppn=12
#PBS -V
#PBS -o gromacs.out
#PBS -e gromacs.err

cd /globaltmp/xz/job_name
grompp -f md.mdp -c nvtWprotonated.gro -p topol.top -n index.ndx -o job.tpr
mpiexec -np 12 mdrun_mpi_d.openmpi -v -deffnm job

All nodes of my cluster have 12 CPUs. When I use just 1 node on that cluster
my jobs run without problems, but when I try to use more than one node I get
an error (an example is attached in the gromacs.err file, together with the
md.mdp of that system). Another outcome of such multi-node runs is that the
job starts but no calculations are done (the name_of_my_job.log file stays
empty and the .trr file is never updated). Commonly this error occurs when I
use many nodes (8-10). Finally, I have sometimes gotten errors about the PME
order (that time I used 3 nodes). The exact error differs when I vary the
number of nodes.


Could you tell me what could be wrong with my cluster?

Thanks for the help

James


(Attachments: gromacs.err, md.mdp)

Re: [gmx-users] Problems with simulation on multi-nodes cluster

2012-03-15 Thread Mark Abraham

On 15/03/2012 6:04 PM, James Starlight wrote:

Dear Gromacs Users!


I have some problems running my simulations on a multi-node cluster 
which uses Open MPI.


I launch my jobs by means of the script below. The example runs on 
1 node (12 CPUs).


#!/bin/sh
#PBS -N gromacs
#PBS -l nodes=1:red:ppn=12
#PBS -V
#PBS -o gromacs.out
#PBS -e gromacs.err

cd /globaltmp/xz/job_name
grompp -f md.mdp -c nvtWprotonated.gro -p topol.top -n index.ndx -o 
job.tpr

mpiexec -np 12 mdrun_mpi_d.openmpi -v -deffnm job

All nodes of my cluster have 12 CPUs. When I use just 1 node on that 
cluster my jobs run without problems, but when I try to use more than 
one node I get an error (an example is attached in the gromacs.err file, 
together with the md.mdp of that system). Another outcome of such 
multi-node runs is that the job starts but no calculations are done (the 
name_of_my_job.log file stays empty and the .trr file is never updated). 
Commonly this error occurs when I use many nodes (8-10). Finally, I have 
sometimes gotten errors about the PME order (that time I used 3 nodes). 
The exact error differs when I vary the number of nodes.



Could you tell me what could be wrong with my cluster?


The error message is quite explicit and clearly has nothing to do with 
GROMACS. Your invocation of mpiexec is malformed. You need to consult 
your local documentation.


Mark


Re: [gmx-users] Problems with simulation on multi-nodes cluster

2012-03-15 Thread Peter C. Lai
Try separating your grompp run from your mpirun:
You should not really be having the scheduler execute the grompp. Run
your grompp step to generate a .tpr either on the head node or on your local
machine (then copy it over to the cluster).

(The -p that the scheduler is complaining about only appears in the grompp
step, so don't have the scheduler run it).
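
Something like the following split is what I mean; it is just a sketch built
from the script you posted (2 nodes x 12 ppn = 24 processes assumed):

# once, on the head node or your workstation:
grompp -f md.mdp -c nvtWprotonated.gro -p topol.top -n index.ndx -o job.tpr

# job script submitted to the scheduler, mdrun only:
#!/bin/sh
#PBS -N gromacs
#PBS -l nodes=2:red:ppn=12
#PBS -V
#PBS -o gromacs.out
#PBS -e gromacs.err
cd /globaltmp/xz/job_name
mpiexec -np 24 mdrun_mpi_d.openmpi -v -deffnm job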




-- 
==
Peter C. Lai| University of Alabama-Birmingham
Programmer/Analyst  | KAUL 752A
Genetics, Div. of Research  | 705 South 20th Street
p...@uab.edu| Birmingham AL 35294-4461
(205) 690-0808  |
==



Re: [gmx-users] Problems with simulation on multi-nodes cluster

2012-03-15 Thread Mark Abraham

On 15/03/2012 6:13 PM, Peter C. Lai wrote:

Try separating your grompp run from your mpirun:
You should not really be having the scheduler execute the grompp. Run
your grompp step to generate a .tpr either on the head node or on your local
machine (then copy it over to the cluster).


Good advice.


(The -p that the scheduler is complaining about only appears in the grompp
step, so don't have the scheduler run it).


grompp is running successfully, as you can see from the output.

I think mpiexec -np 12 is being interpreted as mpiexec -n 12 -p, and 
the process of separating the grompp stage from the mdrun stage would 
help make that clear - read documentation first, however.


Mark






Re: [gmx-users] Generation of the Distance Restraints

2012-03-15 Thread James Starlight
Mark,

thanks again for explanation



 The force is the negative of the derivative of the potential with respect
 to the distance. So the force is also zero between r_0 and r_1. So if you
 want a distance to be restrained between 1 and 2 nm, set r_0=1 and r_1=2.
 That way the force is zero if the distance is satisfactory, and non-zero
 when it is not.


I don't quite understand the restraints definition in that case :( So in the
above example the distance between 1 and 2 nm would be restrained, and in
accordance with the graph the forces will be zero. But in the range below 1
and 2 nm the forces would increase quadratically. So if I understood correctly,
forces occur only when the atoms are not in the desired distance range, and
they must bring the atoms back to the desired distance. This is the opposite
of position restraints, where the forces are constant to prevent movement of
the atoms. Is that correct?




I leave the choice of r_2 to you as an exercise


So as I understand it, the forces beyond the r_2 threshold must be extremely
high in comparison to the gradual parabolic rise in the two other regions. In
exactly what cases is this rapid increase useful in comparison to the gradual
parabolic form?

Thanks again

James



 Mark


  I must define the R1=1 and R2=2 values from my example 1<Rij<2 to obtain
 quadratic restraint forces in my distance range (from 1 to 2 angstrom).
 In other words, this would restrain the i and j atoms to the desired
 distance by a force which increases quadratically as the distance increases
 up to 2. Is that correct?

 So the value R0 (no forces = no restraints) must correspond to the values
 above and below my range. How could the same range value for R0 be defined?


 James

 On 14 March 2012 at 3:42, Mark Abraham mark.abra...@anu.edu.au wrote:



 I can't think of a clearer way to explain the functional form of the
 distance restraint than the given equation with an example graph of it
 nearby. You have some distance range that you want to see happen based on
 some external information. You need to choose the distance constants for
 that functional form to reproduce that in a way that you judge will work,
 given your initial distance. The linear regime above r_2 is useful for not
 having forces that are massively large (from a quadratic potential) far
 from the region of zero potential. Whether this is important depends on
 your starting configuration.




  I already answered this.
 http://lists.gromacs.org/pipermail/gmx-users/2012-March/069301.html

 I've found only a theoretical explanation of such a possibility
 (gradually increasing the force constant during the simulation), but I am
 interested in the practical implementation. Could I do it within the scope
 of a single mdrun with some options in the mdp file, or should I do a
 step-by-step series of simulations with gradually changing forces applied
 to the disres in each mdrun?


  Only step by step. Something like simulated annealing is only available
 for temperature variation.

 Mark



 James



[gmx-users] re: Unreadable trajectory [solved]

2012-03-15 Thread Jernej Zidar

 Writing 100,000 or 100.000 for 10^5 is prone to misinterpretation. Get
 out of the habit of using a thousands separator in scientific contexts :-)

Thanks for the advice.



 2. Remove contraints and repeat steps a-h.

    The problem I'm having is that I cannot visualize the trajectories
 from step 2.e) onwards, with VMD simply segfaulting. While it may well
 be a VMD issue, I can't visualize the trajectory using ngmx (blank
 screen) either, so it's an issue either with my inputs or with GROMACS.

 Your .mdp file is not writing any trajectory frames.

Thanks for pointing that out! It was writing only information on velocities.

 I'm still getting used to the GROMACS way of doing things. One input
(i.e. MDP file) for each run and all that.


 Mark


    I tried to use trjconv tool to adjust the centering or remove the
 PBC or save just the polymer residues but the utility always fails
 with: WARNING no output, last frame read at t=200.
    Checking the run log file did not reveal any errors. Checking the
 EDR file revealed no jumps in any of the energy terms.

    What's most surprising, is that I can visualize the GRO file that is
 generated at the end of the simulation.

    The relevant files are on the following links:
 - MDP: http://dl.dropbox.com/u/5761806/md-nvt-2-nofix.mdp
 - LOG: http://dl.dropbox.com/u/5761806/wpoly-2x-box-ions-nvt-nofix-2.log
 - TPR: http://dl.dropbox.com/u/5761806/wpoly-2x-box-ions-nvt-nofix-2.tpr

     I use GROMACS 4.5.5, and the system is electrically neutral. To prepare
 a new TPR file I use the GRO file from the previous run.

 Don't, you're losing precision and introducing perturbations. grompp -t
 old.cpt is your friend.

 Mark

Using that. Many many thanks!
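
(For reference, that kind of restart invocation looks roughly like this, with
placeholder file names; the .gro still supplies the structure, while -t takes
the full-precision coordinates and velocities from the checkpoint:

grompp -f md-nvt-3.mdp -c previous.gro -t previous.cpt -p topol.top -o next.tpr
)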

Thanks for the help,
Jernej Zidar


Re: [gmx-users] Generation of the Distance Restraints

2012-03-15 Thread Mark Abraham

On 15/03/2012 7:06 PM, James Starlight wrote:

Mark,

thanks again for explanation



The force is the negative of the derivative of the potential with
respect to the distance. So the force is also zero between r_0 and
r_1. So if you want a distance to be restrained between 1 and 2
nm, set r_0=1 and r_1=2. That way the force is zero if the
distance is satisfactory, and non-zero when it is not.


I don't quite understand the restraints definition in that case :( So 
in the above example the distance between 1 and 2 nm would be 
restrained, and in accordance with the graph the forces will be zero.


The word restrained is ambiguous. Being in the region of zero force 
can be said to have been restrained, but being outside the region where 
force is acting can be said to be being restrained.


But in the range below 1 and 2 nm the forces would increase 
quadratically.


Below 1nm and above 2nm.

So if I understood correctly, forces occur only when the atoms are not in 
the desired distance range, and they must bring the atoms back to the 
desired distance. This is the opposite of position restraints, where the 
forces are constant to prevent movement of the atoms. Is that correct?


The forces in PR are not constant. See manual 4.3.1. The forces act in 
each case to return the distance/displacement to the region/point of 
zero force. A GROMACS position restraint is exactly like a GROMACS 
distance restraint to the original position with r_0==r_1 and r_2 infinite.
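
(For concreteness, a distance restraint with its flat region between 1 and 2
nm would be entered in the topology roughly as below; the atom numbers, the
choice of r_2 = 2.5 and the fac column are placeholders:

[ distance_restraints ]
; ai  aj  type  index  type'  low   up1   up2   fac
  10  16   1     0      1     1.0   2.0   2.5   1.0

Here low and up1 play the role of r_0 and r_1, and up2 is r_2, beyond which
the potential rises only linearly.)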




I leave the choice of r_2 to you as an exercise


So as I understand it, the forces beyond the r_2 threshold must be 
extremely high in comparison to the gradual parabolic rise in the two 
other regions. In exactly what cases is this rapid increase useful in 
comparison to the gradual parabolic form?


A linear rise of the potential above r_2 is *more* gradual than a 
parabolic rise in the limit of large r, which is the important part, as 
Fig 4.13 makes clear... You still might be confusing potential and force 
in your mind. Get that clear :-)


Mark




[gmx-users] 4.6 development version

2012-03-15 Thread SebastianWaltz
Dear Gromacs user,

for a few days we have been trying to get the heterogeneous parallelization
working on a Dell blade server with Tesla M2090 GPUs, using the worksheet on
the page:

http://www.gromacs.org/Documentation/Acceleration_and_parallelization

we only get the OpenMM pure-GPU version with mdrun-gpu running. Is the
heterogeneous parallelization actually already working in the development
version that you can download using the link on the page:

 http://www.gromacs.org/Developer_Zone/Roadmap/GROMACS_4.6

and how can we get it running? Just adding the CMake variable GMX_GPU=ON
when compiling mdrun did not enable the heterogeneous parallelization.

We want to use the heterogeneous parallelization used in the 4.6 version
to find out which is the optimal GPU/CPU ratio for our studied systems,
since we soon have to buy machines for the upgrade of our cluster.
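
(For reference, the build recipe we are assuming, which may well be incomplete
for the development version, is roughly

cmake .. -DGMX_GPU=ON -DGMX_MPI=ON -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda
make mdrun

together with cutoff-scheme = Verlet in the .mdp file, since as far as we
understand the CPU+GPU path only works with the Verlet scheme.)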

Thanks a lot

yours

Sebastian




Re: [gmx-users] Umbrella_pull_simulation

2012-03-15 Thread shahid nayeem
I have added some new windows in the mutant umbrella sampling, which removes
the sampling gap around 4 nm. I also sampled more windows at the initial COM
distances in the hope of getting a well-defined energy minimum in the profile.
I also did bootstrapping for error estimates. The files can be accessed at
http://www.freefilehosting.net/umbrellamut


On Wed, Mar 7, 2012 at 4:38 PM, Justin A. Lemkul jalem...@vt.edu wrote:



 shahid nayeem wrote:

 The attached profile.xvg and histo.xvg are here.
 sorry for sending earlier mail without attachments
 Shahid Nayeem

 On Wed, Mar 7, 2012 at 3:25 PM, shahid nayeem msnay...@gmail.commailto:
 msnay...@gmail.com wrote:

As suggested by you, I added some new windows and extended some
simulations, and I got the attached profile and histo files. Please see
these files. Experimentally it is known that the wt protein-protein
interaction is stronger than the mutants', but what I get here is the
reverse. What could be the possible reason for it? Are my profile.xvg and
histo.xvg right, or do they need more improvement?


 I wouldn't base any conclusions off of them.  You have a sampling gap at
 just over 4 nm in the mutant simulations.  More importantly, you do not
 have a defined energy minimum in the mutant windows so it is impossible to
 calculate a reliable value for DeltaG.  Moreover, in the absence of any
 error estimates, you can't make any conclusions about these data.  g_wham
 can generate error bars for you; I'd suggest you do it.

 -Justin

 Shahid Nayeem


On Tue, Feb 28, 2012 at 7:44 PM, Justin A. Lemkul jalem...@vt.edu
mailto:jalem...@vt.edu wrote:



shahid nayeem wrote:

Thanks. But does that mean that I should look in the pullf.xvg
of each window and see whether the value has converged, and if
not, extend the simulation?


I've already made numerous suggestions.  The value in pullf.xvg
is a consequence of the nature of the system.  Looking at the
interactions between your proteins, the stability of those
proteins, etc. is far more informative, like you would for any
simulation (even those that do not make use of the pull code).


-Justin

-- 
========================================

Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin

========================================

[gmx-users] Re:g_energy g_enemat

2012-03-15 Thread lloyd riggs

Does anyone know off hand the code (or line numbers) for the actual energy 
calculations in either g_energy or g_enemat?

Basically, it says it does not implement this yet (the error), or it just does 
not print something out. I wanted to look at the code, but I didn't want to sit 
there for a week just trying to figure out what is what... although I know 
that's a tall order.

Basically, I can just use g_energy to extract the components for everything in 
my index files and then do a spreadsheet-based calculation such as 
e^(file33-file42)-(file56-file77), times a few hundred, if I want to look at, 
say, dozens of amino acids over the trajectory plus total energy contributions. 
This, however, would yield the same results as if I had g_enemat working for 
the energy analysis, thus...

Has anyone gotten g_enemat to work for energy calculations (not just extraction 
of particular groups' components)? I searched the mailing list and found a 
dozen or so related questions and answers, but no one mentions the above.

Running this over spreadsheets does generate a nice curve; so if anyone does 
have g_enemat running nicely, is the output simply a single figure or an .xvg 
file with the entire results based on a reference structure/energy?
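
In case it helps whoever answers: the workflow I am assuming (corrections 
welcome) is to name the groups in the .mdp before the run, e.g.

energygrps = Protein SOL ION

so the group-group terms end up in ener.edr, then list those group names in a 
small text file (I believe the first line gives the number of groups) and run

g_enemat -f ener.edr -groups groups.dat -emat emat.xpm

which should write interaction-energy matrices rather than a single number.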

Sincerely,

Stephan Lloyd Watkins



Re: [gmx-users] 4.6 development version

2012-03-15 Thread lloyd riggs


You guys should hit up UniZurich; they have a series of 1024 CPUs in combination 
with several hundred GPUs in a cluster system. You could then test out some 
things, including scalability etc.

Sincerely,

Stephan Lloyd Watkins




Re: [gmx-users] Umbrella_pull_simulation

2012-03-15 Thread shahid nayeem
The similar files for the wt can be accessed at
 http://www.freefilehosting.net/umbrellawt
 I expected the wild-type binding energy to be less than the mutant's.
The command used for g_wham is
 g_wham_mpi_4.5.4 -it tpr-files.dat -if pullf-files.dat -o profile_mut.xvg
-hist histo_mut.xvg -unit kCal -b 500 -nBootstrap 200 -bsres
bsResult_mut.xvg -bsprof bsprofile_mut.xvg -ac
I get values exactly opposite to my expectation and am unable to find out
where I am wrong; please suggest.
Shahid Nayeem

On Thu, Mar 15, 2012 at 3:31 PM, shahid nayeem msnay...@gmail.com wrote:

 I have added some new windows in the mutant umbrella sampling, which removes
 the sampling gap around 4 nm. I also sampled more windows at the initial COM
 distances in the hope of getting a well-defined energy minimum in the profile.
 I also did bootstrapping for error estimates. The files can be accessed at
 http://www.freefilehosting.net/umbrellamut



[gmx-users] center the system on specific atoms type

2012-03-15 Thread R.Perez Garcia
 Dear all,
I am running a simulation of a fullerene in different solvents. I would like 
to keep the fullerene somehow centred in the box, so I changed comm_grps = 
System to comm_grps = FUL.
If I do this, the following note and warning pop up:

There are:   395  Other residues
Analysing residues not classified as Protein/DNA/RNA/Water and splitting into 
groups...

NOTE 1 [file short1.mdp]:
  4728 atoms are not part of any of the VCM groups

WARNING 1 [file short1.mdp]:
  Some atoms are not part of any center of mass motion removal group.
  This may lead to artifacts.
  In most cases one should use one group for the whole system.

I am not sure if it is possible to tackle this.
Any suggestion would be welcomed.
Best regards: R

Re: [gmx-users] center the system on specific atoms type

2012-03-15 Thread Tsjerk Wassenaar
Hey :)

Why would you want to keep it constant? It's asking for trouble. The
fluctuations in a small part of the system like a fullerene molecule
can be pretty large. If you try to correct the VCM of that bit only,
you make it constantly bump into your solvent molecules. Especially if
you have nstcomm = 10 or so, the shift may be considerable, and you
may cause overlaps and crash your system. And that while it's so easy
to center on the fullerene afterwards!

Cheers,

Tsjerk




-- 
Tsjerk A. Wassenaar, Ph.D.

post-doctoral researcher
Molecular Dynamics Group
* Groningen Institute for Biomolecular Research and Biotechnology
* Zernike Institute for Advanced Materials
University of Groningen
The Netherlands


Re: [gmx-users] center the system on specific atoms type

2012-03-15 Thread R.Perez Garcia
Dear Tsjerk,
Thank you for your answer!
I don't like trouble, so I will leave comm_grps = System...
But could you spare some comments on how to center it afterwards? I don't know 
why, but the fullerene likes to spend a lot of time at the boundary (i.e. divided 
in two, where it is difficult to see).
Best regards: R


Re: [gmx-users] center the system on specific atoms type

2012-03-15 Thread Tsjerk Wassenaar
Hey :)

There's a suggested trjconv workflow at
http://www.gromacs.org/Documentation/Terminology/Periodic_Boundary_Conditions
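
For a single molecule like a fullerene the short version is usually something
like (group names are guesses; pick the right ones when prompted):

trjconv -s topol.tpr -f traj.xtc -o centered.xtc -pbc mol -center

choosing FUL (or a suitable index group) for centering and System for output;
-pbc mol keeps molecules whole, so the fullerene is no longer split across
the box boundary.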

Cheers,

T.




-- 
Tsjerk A. Wassenaar, Ph.D.

post-doctoral researcher
Molecular Dynamics Group
* Groningen Institute for Biomolecular Research and Biotechnology
* Zernike Institute for Advanced Materials
University of Groningen
The Netherlands


[gmx-users] question about replica exchange

2012-03-15 Thread Asaf Farhi

Dear gmx user

If I may, I have a specific question about replica exchange.
I want to define different Hamiltonians for the different temperatures.
It seems that replica exchange can handle this. I wanted to ask whether it is 
implemented in a general way, so that different Hamiltonians can be used for 
different temperatures.

Thanks a lot,
Best regards,
Asaf

Re: [gmx-users] Problems with simulation on multi-nodes cluster

2012-03-15 Thread James Starlight
Mark, Peter,


I've tried to create the .tpr file on my local machine and launch only

mpiexec -np 24 mdrun_mpi_d.openmpi -v -deffnm MD_100

on the cluster with 2 nodes.

I see my job as running, but when I check the MD_100.log file (attached) there
is no information about the simulation steps in that file. (When I use just one
node, I see in that file the step-by-step progression of my simulation, like
the excerpt below, which was taken from the same log file for a ONE-NODE
simulation):

Started mdrun on node 0 Thu Mar 15 11:22:35 2012

           Step           Time         Lambda
              0        0.00000        0.00000

Grid: 12 x 9 x 12 cells

   Energies (kJ/mol)
       G96Angle    Proper Dih.  Improper Dih.          LJ-14     Coulomb-14
    1.32179e+04    3.27485e+03    2.53267e+03    4.06443e+02    6.15315e+04
        LJ (SR)        LJ (LR)  Disper. corr.   Coulomb (SR)   Coul. recip.
    4.12152e+04   -5.51788e+03   -1.70930e+03   -4.54886e+05   -1.46292e+05
     Dis. Rest. D.R.Viol. (nm)     Dih. Rest.      Potential    Kinetic En.
    2.14240e-02    3.46794e+00    1.33793e+03   -4.84889e+05    9.88771e+04
   Total Energy  Conserved En.    Temperature Pres. DC (bar) Pressure (bar)
   -3.86012e+05   -3.86012e+05    3.11520e+02   -1.14114e+02    3.67861e+02
   Constr. rmsd
    3.75854e-05

           Step           Time         Lambda
           2000        4.00000        0.00000

   Energies (kJ/mol)
       G96Angle    Proper Dih.  Improper Dih.          LJ-14     Coulomb-14
    1.31741e+04    3.25280e+03    2.58442e+03    3.51371e+02    6.15913e+04
        LJ (SR)        LJ (LR)  Disper. corr.   Coulomb (SR)   Coul. recip.
    4.16349e+04   -5.53474e+03   -1.70930e+03   -4.56561e+05   -1.46485e+05
     Dis. Rest. D.R.Viol. (nm)     Dih. Rest.      Potential    Kinetic En.
    4.78276e+01    3.38844e+00    9.82735e+00   -4.87644e+05    9.83280e+04
   Total Energy  Conserved En.    Temperature Pres. DC (bar) Pressure (bar)
   -3.89316e+05   -3.87063e+05    3.09790e+02   -1.14114e+02    7.25905e+02
   Constr. rmsd
    1.88008e-05

... and so on.



What could be going wrong with the multi-node computations?


James


On 15 March 2012 11:25, Mark Abraham mark.abra...@anu.edu.au wrote:

 On 15/03/2012 6:13 PM, Peter C. Lai wrote:

 Try separating your grompp run from your mpirun:
 You should not really be having the scheduler execute the grompp. Run
 your grompp step to generate a .tpr either on the head node or on your
 local
 machine (then copy it over to the cluster).


 Good advice.


 (The -p that the scheduler is complaining about only appears in the grompp
 step, so don't have the scheduler run it).


 grompp is running successfully, as you can see from the output

 I think mpiexec -np 12 is being interpreted as mpiexec -n 12 -p, and
 the process of separating the grompp stage from the mdrun stage would help
 make that clear - read documentation first, however.

 Mark




 On 2012-03-15 10:04:49AM +0300, James Starlight wrote:

 Dear Gromacs Users!


 I have some problems running my simulation on a multi-node cluster which
 uses OpenMPI.

 I launch my jobs by means of the script below. This example runs the
 job on 1 node (12 CPUs).

 #!/bin/sh
 #PBS -N gromacs
 #PBS -l nodes=1:red:ppn=12
 #PBS -V
 #PBS -o gromacs.out
 #PBS -e gromacs.err

 cd /globaltmp/xz/job_name
 grompp -f md.mdp -c nvtWprotonated.gro -p topol.top -n index.ndx -o
 job.tpr
 mpiexec -np 12 mdrun_mpi_d.openmpi -v -deffnm job

 All the nodes of my cluster have 12 CPUs. When I use just 1 node on
 that cluster I have no problems running my jobs, but when I try to
 use more than one node I get an error (an example is attached in the
 gromacs.err file, as well as the md.mdp of that system). Another outcome of
 such multi-node simulations is that the job starts but no
 calculations are done (the name_of_my_job.log file stays empty and the
 .trr file is never updated). This usually happens when I use many nodes
 (8-10). Finally, I sometimes get errors about the PME order (in that
 case I used 3 nodes). The exact error differs as I vary the
 number of nodes.


 Could you tell me what could be wrong with my cluster?

 Thanks for help

 James
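
 For reference, combining Peter's and Mark's advice above, the submission
 script quoted above reduces to something like the sketch below: the .tpr is
 generated once with grompp on the head node or a local machine, and the
 scheduler only runs mdrun. The queue name, directory and file names are
 copied from the quoted script; the two-node layout and everything else are
 assumptions.

 # run once, by hand, on the head node or a local machine:
 #   grompp -f md.mdp -c nvtWprotonated.gro -p topol.top -n index.ndx -o job.tpr
 # then submit only the mdrun step:

 #!/bin/sh
 #PBS -N gromacs
 #PBS -l nodes=2:red:ppn=12
 #PBS -V
 #PBS -o gromacs.out
 #PBS -e gromacs.err

 cd /globaltmp/xz/job_name
 mpiexec -np 24 mdrun_mpi_d.openmpi -v -deffnm job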




Re: [gmx-users] Umbrella_pull_simulation

2012-03-15 Thread Justin A. Lemkul



shahid nayeem wrote:

The corresponding files for the wild type (wt) can be accessed at
 http://www.freefilehosting.net/umbrellawt
 I expected the wild-type binding energy to be less than that of the mutant.

The command used for g_wham is
 g_wham_mpi_4.5.4  -it tpr-files.dat -if pullf-files.dat -o 
profile_mut.xvg -hist histo_mut.xvg -unit kCal -b 500 -nBootstrap 200 
-bsres bsResult_mut.xvg -bsprof bsprofile_mut.xvg  -ac
I get values exactly the opposite of my expectation and am unable to find 
out where I am wrong. Please suggest.


Determining flaws in one's model or improved ways of doing things is the hardest 
part about being a scientist.  From the data presented, Gromacs has done its job 
and the results are reasonable.  The fact that they don't align with 
expectations is something you will have to reconcile and only you are equipped 
to deal with it.  No one on this list really knows enough about what you're 
doing to do your thinking for you, nor should they.  The principal function of 
this list is to troubleshoot problems with Gromacs, of which there are none here.


Good luck.

-Justin

--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




Re: [gmx-users] using MPI

2012-03-15 Thread Erik Marklund
If you are running on a single node, there is no need for MPI nowadays; 
mdrun uses all the cores it can find anyway. If you need to split your calculation 
over many machines, however, you will need MPI.
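
For illustration only (binary names and the -deffnm base name are assumptions 
about the installation), the two cases would look roughly like this:

# single node: the threaded mdrun uses all cores by itself
mdrun -deffnm md          # or set the thread count explicitly: mdrun -nt 64 -deffnm md

# several nodes: an MPI-enabled build started through the MPI launcher
mpirun -np 128 mdrun_mpi -deffnm md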

Best,

Erik

15 mar 2012 kl. 04.50 skrev cuong nguyen:

 Dear Gromacs users,
  
 I am preparing to run my simulations on the supercomputer, on a single node with 64 
 CPUs. Although I have seen the GROMACS manual suggest using MPI for 
 parallel runs, I still have not understood how to use it or which 
 commands I have to use. Please help me with this.
  
 Many thanks and regards,
  
 Cuong
  
 

---
Erik Marklund, PhD
Dept. of Cell and Molecular Biology, Uppsala University.
Husargatan 3, Box 596, 75124 Uppsala, Sweden
phone: +46 18 471 6688    fax: +46 18 511 755
er...@xray.bmc.uu.se
http://www2.icm.uu.se/molbio/elflab/index.html


[gmx-users] Announcement: Large biomolecule benchmark report

2012-03-15 Thread Hannes Loeffler
Dear all,

we proudly announce our third benchmarking report on (large)
biomolecular systems carried out on various HPC platforms.  We have
expanded our repertoire to five MD codes (AMBER, CHARMM, GROMACS,
LAMMPS and NAMD) and to five protein and protein-membrane systems
ranging from 20 thousand to 3 million atoms.

Please find the report on
http://www.stfc.ac.uk/CSE/randd/cbg/Benchmark/25241.aspx
where we also offer the raw runtime data.  We also plan to release
the complete joint benchmark suite at a later date (as soon as we
have access to a web server with sufficient storage space).

We are open to any questions or comments related to our reports.


Kind regards,
Hannes Loeffler
STFC Daresbury


Re: [gmx-users] Information on .mdp files

2012-03-15 Thread Justin A. Lemkul



Lara Bunte wrote:

Hello
I want to learn how to create an .mdp file, what I can write in such a file, and what the entries in this file mean. 


I did not find this in the manual. Could you please give me a link or some other source 
where I can find this information?



The entirety of Chapter 7 in the manual is devoted to this information.  There 
is an online manual (linked from the PDF manual), but it contains the same 
information and is present mostly for convenience.


-Justin

--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




AW: [gmx-users] Information on .mdp files

2012-03-15 Thread Rausch, Felix
Hey.

Well, it's always good to check the online documentation for such things.

http://www.gromacs.org/Documentation/File_Formats/.mdp_File

There you should find all the information you need, including examples and all 
usable keywords sorted by topic/task.
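
Just for orientation, a small .mdp for a short MD run can be as simple as the 
sketch below; the values are placeholders for illustration, not recommendations, 
and need to be adapted to the system at hand:

integrator   = md         ; leap-frog integrator
dt           = 0.002      ; 2 fs time step
nsteps       = 50000      ; 100 ps in total
nstxout      = 1000       ; write coordinates every 2 ps
nstenergy    = 1000       ; write energies every 2 ps
constraints  = h-bonds
coulombtype  = PME
rcoulomb     = 1.0
rvdw         = 1.0
tcoupl       = v-rescale
tc-grps      = System
tau_t        = 0.1
ref_t        = 300
pcoupl       = no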

Good luck,
Felix

-----Original Message-----
From: gmx-users-boun...@gromacs.org [mailto:gmx-users-boun...@gromacs.org] On 
behalf of Lara Bunte
Sent: Thursday, 15 March 2012 16:24
To: gmx-users@gromacs.org
Subject: [gmx-users] Information on .mdp files

Hello
I want to learn how to create an .mdp file, what I can write in such a 
file, and what the entries in this file mean. 

I did not find this in the manual. Could you please give me a link or some other source 
where I can find this information?

Thanks and best greetings
Lara


[gmx-users] g_sas

2012-03-15 Thread afsaneh maleki
Hello dear user,

I have a system that contains protein, water and ions. I used the
following command:
g_sas -f  free.xtc  -s  free.tpr   -o area  -or  res_area -oa
atom_area -q -nopbc

I select the whole protein first for the calculation, and then the protein again
for output. In this way I obtain the area per residue from the res_area file and
the area per atom from the atom_area file.

How can I get the area per residue from the per-atom data in the atom_area
file? When I average the per-atom areas for a selected residue, the result
does not correspond to the per-residue area for that residue from
res_area.
How can I relate the area per residue of a selected residue to the areas of
its constituent atoms?

Thanks in advance,
Afsaneh


Re: [gmx-users] g_sas

2012-03-15 Thread Justin A. Lemkul



afsaneh maleki wrote:

Hello dear user,

I have a system that contains protein, water and ions. I used the
following command:
g_sas -f  free.xtc  -s  free.tpr   -o area  -or  res_area -oa
atom_area -q -nopbc

I select the whole protein first for the calculation, and then the protein again
for output. In this way I obtain the area per residue from the res_area file and
the area per atom from the atom_area file.

How can I get the area per residue from the per-atom data in the atom_area
file? When I average the per-atom areas for a selected residue, the result
does not correspond to the per-residue area for that residue from
res_area.


It shouldn't.  Averaging the areas per atom should not produce anything related 
to the constituent residue(s).  The sum of the atom areas should yield the 
residue area.  A quick look through the code seems to indicate that this is 
true, that is, the two quantities are not produced independently; residue area 
arises from atom area.
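
A quick check is to sum the per-atom areas for the atoms that make up one residue 
and compare the sum with the corresponding entry in res_area. For example, assuming 
the -oa output lists atom index and average area in its first two columns, and that 
atoms 150-163 happen to belong to the residue of interest (both are assumptions 
about your particular files):

awk '$1 >= 150 && $1 <= 163 { sum += $2 } END { print sum }' atom_area.xvg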


-Justin


How can I relate the area per residue of a selected residue to the areas of
its constituent atoms?

Thanks in advance,
Afsaneh


--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




[gmx-users] bug in g_msd

2012-03-15 Thread Gavin Melaugh
Hi all

Is there a bug in g_msd when using the -mol flag? I have my own
program that calculates the MSD. If I compare it with the GROMACS
utility for one system the curves are exactly the same; however, when I
compare them for another system the curves are very different. Someone else
mentioned something similar a few days ago, so I was just wondering whether
this was the case.

Cheers

Gavin



Re: [gmx-users] bug in g_msd

2012-03-15 Thread Justin A. Lemkul



Gavin Melaugh wrote:

Hi all

Is there a bug in g_msd when using the -mol flag? I have my own
program that calculates the MSD. If I compare it with the GROMACS
utility for one system the curves are exactly the same; however, when I
compare them for another system the curves are very different. Someone else
mentioned something similar a few days ago, so I was just wondering whether
this was the case.



Sounds like this is a known issue:

http://redmine.gromacs.org/issues/774

-Justin

--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




[gmx-users] Structural features for LINCS application

2012-03-15 Thread Francesco Oteri

Dear gromacs users,
I am trying to simulate a protein (containing an FeS cluster and a complex 
metal active site) using virtual sites.
I am facing a problem with LINCS. In particular, if I constrain only 
h-bonds without using virtual sites,
the simulation runs fine, but when constraining all-bonds the simulation crashes 
after a lot of LINCS warnings like:


Step 356, time 0.712 (ps)  LINCS WARNING
relative constraint deviation after LINCS:
rms 0.408533, max 8.159325 (between atoms 2750 and 2754)
bonds that rotated more than 30 degrees:
 atom 1 atom 2  angle  previous, current, constraint length
   2750   2754   90.0    0.1365   1.2502      0.1365

In both cases the simulation conditions are the same. The bonds causing the 
problem belong to the active site.
I am wondering whether there are structural features impairing the use of 
all-bonds constraints in LINCS.
A second question is: how can I run MD with virtual sites without the 
all-bonds option?


Thank you,
Francesco


Re: [gmx-users] Announcement: Large biomolecule benchmark report

2012-03-15 Thread David van der Spoel

On 2012-03-15 14:37, Hannes Loeffler wrote:

Dear all,

we proudly announce our third benchmarking report on (large)
biomolecular systems carried out on various HPC platforms.  We have
expanded our repertoire to five MD codes (AMBER, CHARMM, GROMACS,
LAMMPS and NAMD) and to five protein and protein-membrane systems
ranging from 20 thousand to 3 million atoms.

Please find the report on
http://www.stfc.ac.uk/CSE/randd/cbg/Benchmark/25241.aspx
where we also offer the raw runtime data.  We also plan to release
the complete joint benchmark suite at a later date (as soon as we
have access to a web server with sufficient storage space).

We are open to any questions or comments related to our reports.

It looks very interesting, and having benchmarks done by independent 
researchers is the best way to avoid bias. The differences are quite 
revealing, but it's also good that you point to problems compiling 
gromacs. Is this going to be submitted for publication somewhere too?


Thanks for doing this, it must have been quite a job!



Kind regards,
Hannes Loeffler
STFC Daresbury



--
David van der Spoel, Ph.D., Professor of Biology
Dept. of Cell & Molec. Biol., Uppsala University.
Box 596, 75124 Uppsala, Sweden. Phone:  +46184714205.
sp...@xray.bmc.uu.se    http://folding.bmc.uu.se


[gmx-users] ***Extended deadline: March 26*** COMETS 2012 - 3rd International Track on Collaborative Modeling and Simulation - Call for Papers

2012-03-15 Thread Daniele Gianni
*** Deadline Extended to March 26, 2012 ***

(Please accept our apologies if you receive multiple copies of this message)

#
                      IEEE WETICE 2012
    3rd IEEE Track on Collaborative Modeling and Simulation
                        (Comets 2012)

                      in cooperation with
                  AFIS (INCOSE France Chapter)
              MIMOS (Italian Association for MS)

                      CALL FOR PAPERS

#

June 25-27, 2012, Toulouse (France)
http://www.sel.uniroma2.it/comets12

#
# Papers Due: March 26, 2012  Extended Deadline 
# Accepted papers will be published in the conference proceedings
# by the IEEE Computer Society Press and indexed by EI.
#

Modeling and Simulation (MS) is increasingly becoming a central
activity in the design of new systems and in the analysis of
existing systems because it enables designers and researchers to
investigate systems behavior through virtual representations. For
this reason, MS is gaining a primary role in many industrial and
research fields, such as space, critical infrastructures,
manufacturing, emergency management, biomedical systems and
sustainable future. However, as the complexity of the
investigated systems increases and the types of investigations
widens, the cost of MS activities increases for the more
complex models and for the communications among a wider number and
variety of MS stakeholders (e.g., sub-domain experts, simulator
users, simulator engineers, and final system users). To address
the increasing costs of MS activities, collaborative
technologies must be introduced to support these activities by
fostering the sharing and reuse of models, by facilitating the
communications among MS stakeholders, and more generally by
integrating processes, tools and platforms.

Aside from seeking applications of collaborative technologies to
MS activities, the track seeks innovative contributions that
deal with the application of MS practices to the design of
collaborative environments. These environments are continuously
becoming more complex, and therefore their design requires
systematic approaches to meet the required quality of
collaboration. This is important for two reasons: to reduce
rework activities on the actual collaborative environment, and to
maximize the productivity and the quality of the process the
collaborative environment supports. MS offers the methodologies
and tools for such investigations and therefore it can be used to
improve the quality of collaborative environments.

A non-exhaustive list of topics of interest includes:

* collaborative environments for MS
* collaborative Systems of Systems MS
* workflow modelling for collaborative environments and processes
* agent-based MS
* collaborative distributed simulation
* collaborative component-based MS
* net-centric MS
* web-based MS
* model sharing and reuse
* model building and evaluation
* modeling and simulation of business processes
* modeling for collaboration
* simulation-based performance evaluation of collaborative networks
* model-driven simulation engineering
* domain specific languages for the simulation of collaborative environments
* domain specific languages for collaborative MS
* databases and repositories for MS
* distributed virtual environments
* virtual research environment for MS
* collaborative DEVS MS

To stimulate creativity, however, the track maintains a wider
scope and invites interested researchers to present contributions
that offer original perspectives on collaboration and MS.

+++
On-Line Submissions and Publication
+++

CoMetS'12 intends to bring together researchers and practitioners
to discuss key issues, approaches, open problems, innovative
applications and trends in the track research area.

This year, we will accept submissions in two forms:

(1) papers
(2) poster and industrial presentations

(1) Papers should contain original contributions not published or
submitted elsewhere. Papers up to six pages (including figures,
tables and references) can be submitted. Papers should follow the
IEEE format, which is single spaced, two columns, 10 pt
Times/Roman font. All submissions should be electronic (in PDF)
and will be peer-reviewed by at least three program committee
members.

Accepted full papers will be included in the proceedings and
published by the IEEE Computer Society Press (IEEE approval pending).
Please note that at least one author for each accepted paper should
register to attend WETICE 2012 (http://www.wetice.org) to have the
paper published in the proceedings.

(2) Posters should describe a practical, on-the-field, experience in
any domain area using collaborative 

[gmx-users] Re: on the RAM capacity needed for GROMACS

2012-03-15 Thread Dr. Vitaly V. Chaban
 Dear All,

 Suppose I need to install GROMACS on my laptop. Could you please tell me 
 how much RAM is required?


I have yet to come across a system that would not run on my 8 GB RAM laptop.



Dr. Vitaly V. Chaban, 430 Hutchison Hall, Chem. Dept.
Univ. Rochester, Rochester, New York 14627-0216
THE UNITED STATES OF AMERICA


[gmx-users] Re: Energy exclusions for freeze groups, and the pressure

2012-03-15 Thread Andrew DeYoung
Hi,

I am so sorry to bother you.  If you have time, do you have any ideas or
advice about my problem below with implementing energy group exclusions to
avoid spurious contributions to the pressure and virial due to frozen atoms?


Thank you so very much!

Andrew DeYoung
Carnegie Mellon University

Hi,

I have a system containing two graphene sheets (with residue names GR1 and
GR2, respectively) plus some liquid.  I would like to hold the two graphene
sheets fixed in space and observe the dynamics of the liquid around them.

To hold the graphene sheets fixed in space, I used freeze groups:

freezegrps = GR1 GR2
freezedim = Y Y Y Y Y Y ; freeze x, y and z directions

This indeed holds the sheets fixed in space, just as I want.  However, the
pressure increases dramatically, from about 10^3 bar with no frozen atoms to
about 10^29 bar when the graphene sheets are frozen.  I noticed that the
manual says (http://manual.gromacs.org/current/online/mdp_opt.html#neq):

"To avoid spurious contributions to the virial and pressure due to large
forces between completely frozen atoms you need to use energy group
exclusions, this also saves computing time. Note that frozen coordinates are
not subject to pressure scaling."

So, it seems that to avoid spurious contribution to the pressure, I need to
exclude interactions between completely frozen atoms.  I used the following
directives in my .mdp file:

energygrps = GR1 GR2
freezegrps = GR1 GR2
freezedim = Y Y Y Y Y Y ; freeze x, y and z directions
energygrp_excl = GR1 GR1  GR2 GR2  GR1 GR2

This series of directives, I think, should tell Gromacs to exclude the
nonbonded interactions between atoms within GR1, between atoms within GR2,
and between atoms in GR1 and in GR2.

However, when I run g_energy to extract the (average) pressure (selecting
Pressure from the menu in g_energy), it turns out that the pressure is the
same with or without the energy group exclusion defined by my directive
energygrp_excl above; the average pressure in each case is a whopping
6.91498*10^29 bar (and the RMSDs are the same, too).  So it seems that the
spurious contribution to the pressure described in the manual is not
actually being removed by my energy exclusions.

Can you please help me think about what I may be doing wrong, or how I can
otherwise remove the spurious contribution to the pressure in the case of
freeze groups?

Or, is the key sentence in the manual actually "Note that frozen coordinates
are not subject to pressure scaling"?  What does it mean that frozen
coordinates are not subject to pressure scaling?  Does this mean that the
pressure is not computed for freeze groups?

Thank you so very much for your time!  I truly appreciate it.

Andrew DeYoung
Carnegie Mellon University



Re: [gmx-users] Announcement: Large biomolecule benchmark report

2012-03-15 Thread Szilárd Páll
I fully agree with David, it's great to have independent benchmarks!

In fact, the previous version of the report has already been of
great use to us; we have referred to its results on a few occasions.
--
Szilárd



On Thu, Mar 15, 2012 at 2:37 PM, Hannes Loeffler
hannes.loeff...@stfc.ac.uk wrote:
 Dear all,

 we proudly announce our third benchmarking report on (large)
 biomolecular systems carried out on various HPC platforms.  We have
 expanded our repertoire to five MD codes (AMBER, CHARMM, GROMACS,
 LAMMPS and NAMD) and to five protein and protein-membrane systems
 ranging from 20 thousand to 3 million atoms.

 Please find the report on
 http://www.stfc.ac.uk/CSE/randd/cbg/Benchmark/25241.aspx
 where we also offer the raw runtime data.  We also plan to release
 the complete joint benchmark suite at a later date (as soon as we
 have access to a web server with sufficient storage space).

 We are open to any questions or comments related to our reports.


 Kind regards,
 Hannes Loeffler
 STFC Daresbury


Re: [gmx-users] Structural features for LINCS application

2012-03-15 Thread Mark Abraham

On 16/03/2012 6:02 AM, Francesco Oteri wrote:

Dear gromacs users,
I am trying to simulate a protein (containing an FeS cluster and a 
complex metal active site) using virtual sites.
I am facing a problem with LINCS. In particular, if I constrain only 
h-bonds without using virtual sites,
the simulation runs fine, but when constraining all-bonds the simulation crashes 
after a lot of LINCS warnings like:


Step 356, time 0.712 (ps)  LINCS WARNING
relative constraint deviation after LINCS:
rms 0.408533, max 8.159325 (between atoms 2750 and 2754)
bonds that rotated more than 30 degrees:
 atom 1 atom 2  angle  previous, current, constraint length
   2750   2754   90.0    0.1365   1.2502      0.1365

In both cases the simulation conditions are the same. The bonds causing the 
problem belong to the active site.
I am wondering whether there are structural features impairing the use of 
all-bonds constraints in LINCS.
A second question is: how can I run MD with virtual sites without the 
all-bonds option?


Coupled constraints, such as you might have in a cluster like this, can be 
delicate. Equilibrating with quite a small time step can be necessary.
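
For example, an equilibration stage along those lines might use settings like 
the sketch below; the numbers are purely illustrative and are not a 
recommendation for this particular metal site:

integrator           = md
dt                   = 0.0005    ; 0.5 fs during the delicate equilibration stage
constraints          = all-bonds
constraint_algorithm = lincs
lincs_order          = 6         ; tighter than the default of 4
lincs_iter           = 2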


Mark