[gmx-users] Defined number of molecule of water into box

2011-04-04 Thread battis...@libero.it
Dear all,
is it possible to put a defined number of particles into a box? In other words,
I'd like to put into my system e.g. 100 molecules of water. I tried:
genbox -cp protein.gro -cs -nmol 100 -try 1 -o out.gro
but it does not work. Can you help me?
Thanks!
Anna


-- 
gmx-users mailing list    gmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

Re: [gmx-users] Heat of vap

2011-04-04 Thread David van der Spoel

On 2011-04-04 01.32, Justin A. Lemkul wrote:



Elisabeth wrote:





Elisabeth wrote:

Dear David,

I followed your instructions and calculated the heat of vaporization
of my alkane once with one molecule in the gas phase (no cutoff) and
once with an equivalent number of molecules as in the liquid phase, as
Justin suggested. Results are as follows:


To get the heat of vaporization, you shouldn't be simulating just a
single molecule in the gas phase; it should be an equivalent number
of molecules to what you have in the liquid phase.

Hello David and Justin,

My explanation was not clear. Below are the results for the liquid phase,
and for the gas phase I tried two cases: one single molecule, and the other
time an equivalent number of molecules as in the liquid phase, which is
why the results are very similar. (However, Justin says one single
molecule is not correct. I think when cutoffs are set to zero only
bonded terms are


What is not correct is comparing the potential energy of a liquid system
of many molecules with a gas phase of a single molecule. Whether or
not that is something you did is still not entirely clear, but to be
very clear, that is what I was saying is incorrect to do. DHvap is based
on conversion of equivalent systems between liquid and gas.


treated, and even where there are many particles in the gas phase, to get


This is incorrect. Cutoffs of zero mean that all nonbonded interactions
are calculated; they are not truncated.


energies per mole of molecules, i.e. g_energy -nmol XXX must be used, so
values should be close to the single-molecule case.. please correct me!
Anyway, the results for the gas phase are close and this is not the issue now).



You shouldn't need -nmol for any of this. Simply take the potential
energy of the two systems (with equivalent numbers of molecules) and
apply the formula I gave you several emails ago.

NOO


1 molecule in the gas phase  - Epot(g), in your case 59.2 kJ/mol
N molecules in the liquid phase - Epot(l) (since this is per mole you 
DO need the -nmol option), in your case 34.7 kJ/mol
DHvap = Epot(g) + kBT - Epot(l) = 59.2 + 2.5 - 34.7 = 27 kJ/mol, which is 
quite close to hexane (28.9 kJ/mol).
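
(As a concrete sketch of that workflow - file names and the molecule count
here are only illustrative, not taken from this thread - the two averages
would come from g_energy on each run's .edr file:

   g_energy -f liquid.edr -nmol 512 -o epot_liquid.xvg   ; select Potential -> Epot(l) per mole
   g_energy -f gas.edr    -nmol 512 -o epot_gas.xvg      ; same number of molecules in the gas box

and then DHvap = Epot(g) + kBT - Epot(l), with kBT ~ 2.5 kJ/mol near 300 K.)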




-Justin


Liquid phase:

Energy          Average     Err.Est.   RMSD        Tot-Drift
---------------------------------------------------------------------------
LJ (SR)         -27.3083    0.01       0.296591    -0.0389173  (kJ/mol)
Coulomb (SR)      6.00527   0.0074     0.122878     0.00576827 (kJ/mol)
Coul. recip.      5.59559   0.0032     0.0557413    0.00316957 (kJ/mol)
Potential       *34.6779*   0.025      1.03468     -0.11177    (kJ/mol)
Total Energy     86.4044    0.026      1.44353     -0.112587   (kJ/mol)




*one single molecule in gas phase*


Energy          Average     Err.Est.   RMSD        Tot-Drift
---------------------------------------------------------------------------
LJ (SR)          -2.24473   0.073       1.292       0.342696   (kJ/mol)
Coulomb (SR)     11.5723    0.55        2.17577    -2.33224    (kJ/mol)
Potential       *59.244*    0.94       10.9756      6.35631    (kJ/mol)
Total Energy    106.647     1          15.4828      6.78792    (kJ/mol)

*equivalent number of molecules as in liquid* ( large box 20 nm)

Statistics over 101 steps [ 0. through 2000. ps ], 4
data sets
All statistics are over 11 points

Energy          Average     Err.Est.   RMSD        Tot-Drift
---------------------------------------------------------------------------
LJ (SR)          -2.16367   0.053      0.171542     0.374027   (kJ/mol)
Coulomb (SR)     11.2894    0.23       0.49105     -1.44437    (kJ/mol)
Potential       *63.2369*   1.1        2.47211      7.69756    (kJ/mol)
Total Energy    114.337     1.1        2.65547      7.72258    (kJ/mol)


Since pbc is set to no, molecules leave the box and I don't know
if this is all right. I hope the difference is acceptable...!


For pbc = no there is no box.


0- I am going to do the same calculation but for some polymers
solvated in the alkane. For a binary system do I need to look at
nonbonded terms, and then run a simulation for a single polymer
in vacuum?

Can you please provide me with a recipe for Delta Hvap of the
solute in a solvent?


The method for calculating heat of vaporization is not dependent
upon the contents of the system; it is a fundamental thermodynamic
definition. Heat of vaporization is not something that can be
calculated from a solute in a solvent. You can calculate DHvap for
a particular system, but not some subset of that system.

Thanks Justin. I am interested in the energy required to vaporize
the solute in a particular solvent, not the whole DHvap of the
mixture. Do you think this can be achieved by calculating nonbonded
energies between solute and solvent? (defining energy groups ...)
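
(For what it is worth, the group-wise short-range terms that idea relies on
would come from an .mdp line such as the sketch below; the group names are
hypothetical, and, as discussed just after this, the PME reciprocal-space
part cannot be split up this way:

   energygrps = Polymer SOL

After regenerating the .tpr and rerunning, g_energy then offers terms such
as LJ-SR:Polymer-SOL and Coul-SR:Polymer-SOL.)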




1- If I want to look at nonbonded interactions only, do I have to
add Coul. recip. to [ LJ (SR) + Coulomb (SR) ]?


The PME-related terms contain solute-solvent, solvent-solvent,
and potentially solute-solute contributions (depending on the size and
nature of the solute), so trying to interpret this term in some
pairwise fashion is an exercise in futility.

Do you mean that when one uses PME, interaction energies between
components cannot be decomposed? So the energy groups I defined to
extract nonbonded energies are not giving 

[gmx-users] Defined number of molecule of water into box

2011-04-04 Thread battis...@libero.it

I solved it with

genbox -cp conf.gro -cs -maxsol 100  -p topol.top -o out.gro

Thanks!

Anna

[gmx-users] implicit water and a layer of explicit water molecule

2011-04-04 Thread battis...@libero.it
Dear all,
I have some questions about implicit solvent.
1) In GROMACS, is it possible to simulate a protein in a layer of explicit
water, put this system (protein + SOL) into a big box, and run the MD
simulation with implicit solvent? How do I have to set the md.mdp parameters
(; IMPLICIT SOLVENT ALGORITHM) in this case? Do you know of a tutorial for
this method?

2) I'd like to put a defined number of explicit water molecules, e.g. 100,
into my box, so I used
genbox -cp conf.gro -cs -maxsol 100 -p topol.top -o out.gro

but the water molecules are not placed at random positions in the box; they
end up clustered. Is it possible to tell genbox to place the defined number
of molecules randomly throughout the whole space?
3) For that system (100 explicit solvent molecules + implicit solvent) I
generated the topol.tpr using the following setup in the mdp file:

; IMPLICIT SOLVENT ALGORITHM
implicit_solvent = GBSA
;GENERALIZED BORN ELECTROSTATICS;
;Algorithm for calculating Born radii
gb_algorithm = Still
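
(A fuller GBSA section in a 4.5-era .mdp, shown here only as an illustrative
sketch and not as the settings actually used in this thread, would look
roughly like:

implicit_solvent     = GBSA
gb_algorithm         = Still      ; or HCT / OBC
nstgbradii           = 1
rgbradii             = 1.0        ; should match rlist
gb_epsilon_solvent   = 80
sa_algorithm         = Ace-approximation
sa_surface_tension   = 2.05016
)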
grompp does not give any problem at all, but mdrun does: a segmentation fault
or "Norm of force = nan". I think that the problem is the use of
explicit water molecules and the implicit water together.
Can you help me?
All the best
Anna

Re: [gmx-users] implicit water and a layer of explicit water molecule

2011-04-04 Thread Mark Abraham

On 4/04/2011 6:11 PM, battis...@libero.it wrote:


Dear all,


I got some question about the implicit solvent.

1)  In gromacs, is it possible simulate a protein in a layer  of 
explicit water, and put this system (protein + SOL) into a big box and 
make the MD simulation with implicit solvent?


How I have to set the md.mdp parameter  (; IMPLICIT SOLVENT ALGORITHM) 
in this case?


Do you know some tutorial about this method?



It can't work as simply as that, because the waters on the edge will fly 
off into the implicit solvent region. People have tried various things - 
check out the literature.


2) I'd like to put into my box a definied number of explicit number of 
molecule of water eg. 100, so


I used

genbox -cp conf.gro -cs -maxsol 100  -p topol.top -o out.gro



but the water molecule is not in the random position in the box, but 
is in clustered conformation.


Is it possible tell to genbox to put in a random way, in all space the 
defined number of molecule?




What do you actually want - a uniform gas of a given density?

3) For that system (100 explicit solvent molecule + implicit solvent) 
I generated the topol.tpr using the following set up into the mdp file:



; IMPLICIT SOLVENT ALGORITHM
implicit_solvent = GBSA
;GENERALIZED BORN ELECTROSTATICS;
;Algorithm for calculating Born radii
gb_algorithm = Still


grompp do not give problem at all,  but mdrun give problem:

segmantation fault or

Norm of force =nan

I think that the problem is the use of explicit water molecule and the 
implcit water together.




Maybe. We haven't got enough information to know.

Mark


[gmx-users] Setting the C6 LJ term for OPLSA FF

2011-04-04 Thread Luca Bellucci
Dear all
I need to change sigma and epsilon for non-bonded parameters of the OPLS-AA FF.
In particular I want to set the attractive part of the LJ potential to zero
(C6=0).
In doing this I have read the manual, but unfortunately the reported
explanation did not help me. To understand how it works in a reliable way,
I followed Berk's suggestions available at
http://lists.gromacs.org/pipermail/gmx-users/2010-December/056303.html
and I decided to report a simple example.

The main rules are in the forcefield.itp file, and for the OPLS-AA FF they are:
 ; nbfunc   comb-rule   gen-pairs   fudgeLJ   fudgeQQ
   1        3           yes         0.5       0.5

The non-bonded force field parameters for two atoms are in the ffnonbonded.itp
file and they look like:

[ atomtypes ]
; name    bond_type  at.num   mass        charge   ptype   sigma    epsilon
 opls_1   C          6        12.01100     0.500    A      sig_1    eps_1
 opls_2   O          8        15.99940    -0.500    A      sig_2    eps_2

From these values I am going to define the non-bonded parameter between a
pair of atoms as:

[ nonbond_params ]
;   i       j       func   SIG_ij              EPS_ij
 opls_1  opls_2     1      (sig_1*sig_2)^1/2   (eps_1*eps_2)^1/2  ; normal behaviour

However, if I want the attractive term C6 of the LJ potential to equal zero,
I should set sig_ij = -sig_ij:

[ nonbond_params ]
;   i       j       func   SIG_ij               EPS_ij
 opls_1  opls_2     1      -(sig_1*sig_2)^1/2   (eps_1*eps_2)^1/2  ; -sig_ij -> C6=0

Is it right?

Thanks
 Luca



Re: [gmx-users] Setting the C6 LJ term for OPLSA FF

2011-04-04 Thread Mark Abraham

On 4/04/2011 6:55 PM, Luca Bellucci wrote:



Yes, I think so. Set up a two-atom example to compare the normal and 
modified functions. Get a trajectory frame, make energy group(s) 
containing only pair(s) of atoms of interest and use mdrun -rerun to 
compute the energy to your heart's content.
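
(A minimal sketch of that test, with entirely hypothetical file and group
names, would be something like:

   grompp -f rerun.mdp -c two_atoms.gro -p topol.top -o rerun.tpr
   mdrun -s rerun.tpr -rerun frame.gro
   g_energy -f ener.edr

where rerun.mdp contains an energygrps line naming the two atoms' groups, and
the same is repeated with the modified [ nonbond_params ] entry so the two
LJ energies can be compared directly.)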


Mark


R: [gmx-users] implicit water and a layer of explicit water molecule

2011-04-04 Thread battis...@libero.it
Dear Mark,

about point 2: yes, I need to have
a uniform distribution of a defined number of water molecules (e.g. 100)
in my box.
Is it possible with genbox?

Afterwards, I'll have to run the MD simulation for my
system in implicit solvent
(I'll have protein + 100 SOL molecules + implicit solvent).

So my next problem is to set the parameters in the mdp file for this mixed
kind of water.

Thanks for your reply

Anna





Re: R: [gmx-users] implicit water and a layer of explicit water molecule

2011-04-04 Thread Mark Abraham

On 4/04/2011 7:12 PM, battis...@libero.it wrote:

Dear Mark,

about point 2, yes I need to have
a uniform distribution of a defined number
of water molecule (eg. 100 water molecule ) into my box.
Is it possible with genbox?


Yes, but not by starting with a uniform distribution of a 
condensed-phase density. You need one of the right density to start with.


A better approach is to decide how large a box of what density you want. 
Work out how much volume that gives to each molecule. Take a single 
molecule and put it in a box of that size with editconf. Move the 
molecule a bit off-center. Then use genconf -rot to replicate that box 
into a large one. Then equilibrate that thoroughly to get rid of the 
residual ordering. If, later on, you want a different size, genbox with 
the box you've equilibrated here will be a good approach.
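
(A rough sketch of those steps, with purely illustrative names, box sizes and
replication counts:

   editconf -f single_water.gro -o cell.gro -box 1.0 1.0 1.0
   editconf -f cell.gro -o cell_shifted.gro -translate 0.1 0.15 0.2
   genconf  -f cell_shifted.gro -o gas_box.gro -nbox 5 5 5 -rot

giving 125 molecules in a 5x5x5 nm box, to be equilibrated afterwards as
described above.)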



After, I'll have to make the md simulation for my
system in implicit solvent
(I'll have  protein + 100 molecule SOL + implicit solvent)

So my next problem is to set the parameter into mdp file, for this mixed type 
of kind of water.


You have more problems than that. The force fields probably don't have 
implicit solvation parameters for water atom types. You'll need to 
source them somehow. And like I told you last time, your solvent 
molecules are not going to stay happily around your solute like you 
hope. I'm going to stop repeating myself :-)


Mark


[gmx-users] FEP and loss of performance

2011-04-04 Thread Luca Bellucci
Dear all,
when I run a single free energy simulation,
I noticed that there is a loss of performance with respect to
normal MD:

free_energy= yes
init_lambda= 0.9
delta_lambda   = 0.0
couple-moltype = Protein_Chain_P
couple-lambda0 = vdw-q
couple-lambda1 = none
couple-intramol= yes

   Average load imbalance: 16.3 %
   Part of the total run time spent waiting due to load imbalance: 12.2 %
   Steps where the load balancing was limited by -rdd, -rcon and/or -dds: X0 %
   Time:   1852.712   1852.712   100.0

free_energy= no
   Average load imbalance: 2.7 %
   Part of the total run time spent waiting due to load imbalance: 1.7 % 
   Time:    127.394    127.394   100.0

It seems that the loss of performance is due in part to the load imbalance
in the domain decomposition; however, I tried to change
these keywords without benefit.
Any comment is welcome.

Thanks


R: [gmx-users] implicit water and a layer of explicit water molecule

2011-04-04 Thread battis...@libero.it
Thank you very much for your suggestions!
Anna

[gmx-users] Check out my photos on Facebook

2011-04-04 Thread Gokul Algates
Hi Gmx-users,

I set up a Facebook profile where I can post my pictures, videos and events and 
I want to add you as a friend so you can see it. First, you need to join 
Facebook! Once you join, you can also create your own profile.

Thanks,
Gokul

To sign up for Facebook, follow the link below:
http://www.facebook.com/p.php?i=12233939090k=Z6E3Y3UXVW4N6KDJPB63QUXV2W1GVX5NUV1PJBTGYQr

Already have an account? Add this email address to your account:
http://www.facebook.com/n/?merge_accounts.phpe=gmx-users%40gromacs.orgc=f3a89fc32e182e6abbd64f366ef3f589

===
gmx-users@gromacs.org was invited to join Facebook by Gokul Algates. If you 
don't want to receive these emails from Facebook in the future, please follow 
the link below to unsubscribe.
http://www.facebook.com/o.php?k=7c1955u=11958234951mid=403935cG5af385328b47G0G8
Learn more about this email: http://www.facebook.com/help/?faq=17151\nFacebook, 
Inc. P.O. Box 10005, Palo Alto, CA 94303



Re: [gmx-users] Umbrella Sampling

2011-04-04 Thread Gavin Melaugh
Hi

I assume that the order of the file names in the tpr-files.dat and
pullx-files.dat is irrelevant for g_wham .

Cheers

Gavin


Re: [gmx-users] Umbrella Sampling

2011-04-04 Thread Justin A. Lemkul


From g_wham -h:

The tpr and pullx files must be in corresponding order, i.e. the first tpr 
created the first pullx, etc.


-Justin

Gavin Melaugh wrote:

Hi

I assume that the order of the file names in the tpr-files.dat and
pullx-files.dat is irrelevant for g_wham .

Cheers

Gavin


--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




[gmx-users] minimization and simulation problems

2011-04-04 Thread politr

Dear all,
I'm trying to run a simulation of 30 proteins in water using the Martini
force field. I used the water.gro file in order to solvate the proteins.
For minimization I used the em.mdp file published at the Martini site
(http://md.chem.rug.nl/cgmartini/index.php/home). When I set the emtol
parameter to 10 the system can't converge, so I used emtol 100 and
then the system converged. I used that as the input for the simulation.
The file can't be attached as it is too big, but I can send it if needed.
However, the simulation crashes when I'm trying to run MD using md.mdp,
also from the Martini site. I'm getting the following warnings and
errors:
Warning: Only triclinic boxes with the first vector parallel to the
x-axis and the second vector in the xy-plane are supported.

 Box (3x3):
Box[0]={ nan,  nan,  nan}
Box[1]={ nan,  nan,  nan}
Box[2]={ nan,  nan,  nan}
 Can not fix pbc.
[the same warning and box output is printed six times]

---
Program mdrun_mpi, VERSION 4.0.3
Source code file: nsgrid.c, line: 348

Fatal error:
Number of grid cells is zero. Probably the system and box collapsed.

---
Error on node 0, will try to stop all the nodes
Halting parallel program mdrun_mpi on CPU 0 out of 8

gcq#166: It Wouldn't Hurt to Wipe Once In a While (Beavis and Butthead)

application called MPI_Abort(MPI_COMM_WORLD, -1) - process 0[cli_0]:
aborting job:

application called MPI_Abort(MPI_COMM_WORLD, -1) - process 0
[the same fatal error and MPI_Abort output is repeated on the other nodes]

[gmx-users] FEP and loss of performance

2011-04-04 Thread chris . neale
If we accept your text at face value, then the simulation slowed down
by a factor of about 15 (1500%), certainly not the 16% of the load balancing.


Please let us know what version of gromacs, and cut and paste the
commands that you used to run gromacs (so we can verify that you ran
on the same number of processors) and cut and paste a diff of the .mdp
files (so that we can verify that you ran for the same number of steps).


You might be correct about the slowdown, but let's rule out some other  
more obvious problems first.


Chris.





[gmx-users] minimization and simulation problems

2011-04-04 Thread chris . neale
What are your initial box dimensions prior to em? Also, please copy  
and paste your .mdp options. Also, what happens when you run the same  
post-em simulation with nsteps=1 ?



Re: [gmx-users] Umbrella Sampling

2011-04-04 Thread Gavin Melaugh
Hi Justin

Yeah, I know the tpr files must be in corresponding order with the pullx.xvg
files. What I meant was: should they be in order of distance? I.e., say
my windows go from 0 to 1.0 nm with a window every 0.1 nm, could I list
the files in any order, or does it have to be like 0.1 0.2 0.3 0.4 ...

Gavin

Justin A. Lemkul wrote:

 From g_wham -h:

 The tpr and pullx files must be in corresponding order, i.e. the
 first tpr created the first pullx, etc.

 -Justin

 Gavin Melaugh wrote:
 Hi

 I assume that the order of the file names in the tpr-files.dat and
 pullx-files.dat is irrelevant for g_wham .

 Cheers

 Gavin




Re: [gmx-users] Umbrella Sampling

2011-04-04 Thread Justin A. Lemkul



Gavin Melaugh wrote:

Hi Justin

Yeah I know the tpr files must be in corresponding order with pullx.xvg
files. What I meant was should they be in order of distance. i.e say If
my windows go from 0 to 1.0 nm with windows every 0.1nm, could I list
the files in any order or does it have to like 0.1 0.2 0.3 0.4 



As long as the order (in terms of matching) is correct, you shouldn't have a 
problem.  You can easily test this by switching the position of just one pair of 
.tpr/.xvg files and seeing if you get the same result.
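
(For reference, a minimal sketch with hypothetical window names; the only
requirement is that line N of one file corresponds to line N of the other:

   tpr-files.dat      pullx-files.dat
   umbrella0.tpr      pullx0.xvg
   umbrella1.tpr      pullx1.xvg
   umbrella2.tpr      pullx2.xvg

   g_wham -it tpr-files.dat -ix pullx-files.dat -o profile.xvg -hist histo.xvg
)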


-Justin


Gavin

Justin A. Lemkul wrote:

From g_wham -h:

The tpr and pullx files must be in corresponding order, i.e. the
first tpr created the first pullx, etc.

-Justin

Gavin Melaugh wrote:

Hi

I assume that the order of the file names in the tpr-files.dat and
pullx-files.dat is irrelevant for g_wham .

Cheers

Gavin





--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




Re: [gmx-users] minimization and simulation problems

2011-04-04 Thread politr

my box dimensions are 368A


Quoting chris.ne...@utoronto.ca:

What are your initial box dimensions prior to em? Also, please copy  
and paste your .mdp options. Also, what happens when you run the  
same post-em simulation with nsteps=1 ?



Re: [gmx-users] FEP and loss of performance

2011-04-04 Thread Luca Bellucci
Hi Chris,
thanks for the suggestions.
In the previous mail there is a mistake:
couple-moltype = SOL (for the solvent) and not Protein_Chain_P.
Now the problem of the load imbalance seems reasonable, because
the water box is large, ~9.0 nm.
However, the problem exists and the performance loss is very high, so I have
redone the calculations with this command:

grompp -f md.mdp -c ../Run-02/confout.gro -t ../Run-02/state.cpt \
       -p ../topo.top -n ../index.ndx -o md.tpr -maxwarn 1

mdrun -s md.tpr -o md

this is part of the md.mdp file: 

; Run parameters
; define  = -DPOSRES
integrator  = md; 
nsteps  = 1000  ; 
dt  = 0.002 ; 
[..]
free_energy= yes ; /no
init_lambda= 0.9
delta_lambda   = 0.0
couple-moltype = SOL; solvent water
couple-lambda0 = vdw-q
couple-lambda1 = none
couple-intramol= yes

Result for free energy calculation  
 Computing:          Nodes   Number     G-Cycles    Seconds       %
-----------------------------------------------------------------------
 Domain decomp.          8      126       22.050        8.3     0.1
 DD comm. load           8       15        0.009        0.0     0.0
 DD comm. bounds         8       12        0.031        0.0     0.0
 Comm. coord.            8     1001       17.319        6.5     0.0
 Neighbor search         8      127      436.569      163.7     1.1
 Force                   8     1001    34241.576    12840.9    87.8
 Wait + Comm. F          8     1001       19.486        7.3     0.0
 PME mesh                8     1001     4190.758     1571.6    10.7
 Write traj.             8        7        1.827        0.7     0.0
 Update                  8     1001       12.557        4.7     0.0
 Constraints             8     1001       26.496        9.9     0.1
 Comm. energies          8     1002       10.710        4.0     0.0
 Rest                    8                25.142        9.4     0.1
-----------------------------------------------------------------------
 Total                   8             39004.531    14627.1   100.0
-----------------------------------------------------------------------
-----------------------------------------------------------------------
 PME redist. X/F         8     3003     3479.771     1304.9     8.9
 PME spread/gather       8     4004      277.574      104.1     0.7
 PME 3D-FFT              8     4004      378.090      141.8     1.0
 PME solve               8     2002       55.033       20.6     0.1
-----------------------------------------------------------------------
Parallel run - timing based on wallclock.

               NODE (s)   Real (s)        (%)
       Time:   1828.385   1828.385      100.0
                       30:28
               (Mnbf/s)   (GFlops)   (ns/day)   (hour/ns)
Performance:      3.115      3.223      0.095     253.689

I switched off only the free_energy keyword and redid the calculation;
I have:
 Computing:          Nodes   Number     G-Cycles    Seconds       %
-----------------------------------------------------------------------
 Domain decomp.          8       77       10.975        4.1     0.6
 DD comm. load           8        1        0.001        0.0     0.0
 Comm. coord.            8     1001       14.480        5.4     0.8
 Neighbor search         8       78      136.479       51.2     7.3
 Force                   8     1001     1141.115      427.9    61.3
 Wait + Comm. F          8     1001       17.845        6.7     1.0
 PME mesh                8     1001      484.581      181.7    26.0
 Write traj.             8        5        1.221        0.5     0.1
 Update                  8     1001        9.976        3.7     0.5
 Constraints             8     1001       20.275        7.6     1.1
 Comm. energies          8      992        5.933        2.2     0.3
 Rest                    8                19.670        7.4     1.1
-----------------------------------------------------------------------
 Total                   8              1862.552      698.5   100.0
-----------------------------------------------------------------------
-----------------------------------------------------------------------
 PME redist. X/F         8     2002       92.204       34.6     5.0
 PME spread/gather       8     2002      192.337       72.1    10.3
 PME 3D-FFT              8     2002      177.373       66.5     9.5
 PME solve               8     1001       22.512        8.4     1.2
-----------------------------------------------------------------------
Parallel run - timing based on wallclock.

               NODE (s)   Real (s)        (%)
       Time:     87.309     87.309      100.0
                        1:27
               (Mnbf/s)   (GFlops)   (ns/day)   (hour/ns)
Performance:    439.731     23.995      1.981      12.114
Finished mdrun on node 0 Mon Apr  4 16:52:04 2011

Luca




 If we accept your text at face value, then the simulation 

Re: [gmx-users] FEP and loss of performance

2011-04-04 Thread Justin A. Lemkul



Luca Bellucci wrote:

Hi Chris,
thank for the suggestions,
in the previous mail there is a mistake because   
couple-moltype = SOL (for solvent) and not Protein_chaim_P.

Now the problem of the load balance seems reasonable, because
the water box is large ~9.0 nm.


Now your outcome makes a lot more sense.  You're decoupling all of the solvent? 
 I don't see how that is going to be physically stable or terribly meaningful, 
but it explains your performance loss.  You're annihilating a significant number 
of interactions (probably the vast majority of all the nonbonded interactions in 
the system), which I would expect would cause continuous load balancing issues.


-Justin


[gmx-users] FEP and loss of performance

2011-04-04 Thread Chris Neale
Load balancing problems I can understand, but why would it take longer
in absolute time? I would have thought that some nodes would simply be
sitting idle, but this should not cause an increase in the overall
simulation time (15x at that!).


There must be some extra communication?

I agree with Justin that this seems like a strange thing to do, but 
still I think that there must be some underlying coding issue (probably 
one that only exists because of a reasonable assumption that nobody 
would annihilate the largest part of their system).


Chris.



Re: [gmx-users] FEP and loss of performance

2011-04-04 Thread Luca Bellucci
Dear Chris and Justin,
thank you for your precious suggestions.
This is a test that I performed on a single machine with 8 cores
and gromacs 4.5.4.

I am trying to enhance the sampling of a protein using the decoupling scheme
of the free energy module of gromacs. However, when I decouple only the
protein, the protein collapses. Because I simulated in NVT, I thought that
this was an effect of the solvent. I was trying to decouple also the solvent
to understand the system behavior.

I expected a loss of performance, but not so drastic a one.
Luca


[gmx-users] g_mdmat distance matrices

2011-04-04 Thread Yulian Gavrilov
Dear gmx users,

I used g_mdmat and got distance matrices in .xpm format.

Now I want to compare two matrices, one for protein var1 and one for protein
var2, and create one output file with this comparison. How can I do it?

If there is no such function in gromacs, how can I convert the .xpm format to
get matrices with numbers?

-- 

Sincerely,

Yulian Gavrilov

Re: [gmx-users] minimization and simulation problems

2011-04-04 Thread politr

Quoting pol...@fh.huji.ac.il:
Dear gromacs users,
my box dimensions are 368A and when I run the simulation with  
nsteps=1 it works fine. The mdp files used for minimization and  
post-em simulation are attached.

Thanks again for your help.
Regina



Quoting chris.ne...@utoronto.ca:

What are your initial box dimensions prior to em? Also, please copy  
 and paste your .mdp options. Also, what happens when you run the   
same post-em simulation with nsteps=1 ?



[gmx-users] minimization and simulation problems

2011-04-04 Thread Chris Neale
Can you please redo the md part with gen_vel=yes and see if that makes 
any difference?


Generally, you need to narrow down the problem for us. Does it crash in 
serial as well as parallel? How many steps does it go before the crash? 
What happens to the system volume as a function of time for the duration 
of the simulation prior to the crash?


Chris.
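
(One quick way to look at the volume question, assuming an .edr file was written
before the crash: extract the Volume term with g_energy into, say, volume.xvg, and
scan it with a few lines of Python. The file name and the 10% threshold in the
sketch below are arbitrary choices, not anything prescribed by GROMACS.)

#!/usr/bin/env python
# volume_check.py -- sketch: read a time/volume .xvg (as written by g_energy)
# and report when the box volume first deviates strongly from its start value.
import sys

times, vols = [], []
with open(sys.argv[1] if len(sys.argv) > 1 else "volume.xvg") as f:
    for line in f:
        if line.startswith(("#", "@")) or not line.strip():
            continue                              # skip xvg comments/headers
        t, v = line.split()[:2]
        times.append(float(t))
        vols.append(float(v))

v0 = vols[0]
print("start: t = %g ps   V = %g nm^3" % (times[0], v0))
print("end:   t = %g ps   V = %g nm^3" % (times[-1], vols[-1]))
for t, v in zip(times, vols):
    if abs(v - v0) > 0.1 * v0:                    # arbitrary 10% threshold
        print("volume first deviates by more than 10%% at t = %g ps (V = %g nm^3)" % (t, v))
        break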

Quoting politr at fh.huji.ac.il 
http://lists.gromacs.org/mailman/listinfo/gmx-users:


Dear gromacs users,

/  my box dimensions are 368A and when I run the simulation with

//  nsteps=1 it works fine. The mdp files used for minimization and
//  post-em simulation are attached.
/Thanks again for your help.
Regina

/

//
//  Quotingchris.neale at utoronto.ca  
http://lists.gromacs.org/mailman/listinfo/gmx-users:
//
//  What are your initial box dimensions prior to em? Also, please copy
//   and paste your .mdp options. Also, what happens when you run the
//  same post-em simulation with nsteps=1 ?
//
//  -- original message --
//
//
//  Dear all,
//  I'm trying to run simulation of 30 proteins in water using the Martini
//  force field. I used water.gro file in order to solvate the proteins.
//  For minimization I used the em.mdp file published at Martini site
//  (http://md.chem.rug.nl/cgmartini/index.php/home). When I set the emtol
//  parameter to 10 the system can't converge. So I used emtol 100 and
//  then the system converged. I use it as an input for the simulation.
//  The file can't be attached as it is too big but I can send it if needed.
//  However, the simulation crashes when I'm trying to run MD using md.mdp
//  also from the Martini site. I'm getting the following warnings and
//  errors:
//  Warning: Only triclinic boxes with the first vector parallel to the
//  x-axis and the second vector in the xy-plane are supported.
//   Box (3x3):
//  Box[0]={ nan,  nan,  nan}
//  Box[1]={ nan,  nan,  nan}
//  Box[2]={ nan,  nan,  nan}
//   Can not fix pbc.
//  Warning: Only triclinic boxes with the first vector parallel to the
//  x-axis and the second vector in the xy-plane are supported.
//   Box (3x3):
//  Box[0]={ nan,  nan,  nan}
//  Box[1]={ nan,  nan,  nan}
//  Box[2]={ nan,  nan,  nan}
//   Can not fix pbc.
//  Warning: Only triclinic boxes with the first vector parallel to the
//  x-axis and the second vector in the xy-plane are supported.
//   Box (3x3):
//  Box[0]={ nan,  nan,  nan}
//  Box[1]={ nan,  nan,  nan}
//  Box[2]={ nan,  nan,  nan}
//   Can not fix pbc.
//  Warning: Only triclinic boxes with the first vector parallel to the
//  x-axis and the second vector in the xy-plane are supported.
//   Box (3x3):
//  Box[0]={ nan,  nan,  nan}
//  Box[1]={ nan,  nan,  nan}
//  Box[2]={ nan,  nan,  nan}
//   Can not fix pbc.
//  Warning: Only triclinic boxes with the first vector parallel to the
//  x-axis and the second vector in the xy-plane are supported.
//   Box (3x3):
//  Box[0]={ nan,  nan,  nan}
//  Box[1]={ nan,  nan,  nan}
//  Box[2]={ nan,  nan,  nan}
//   Can not fix pbc.
//  Warning: Only triclinic boxes with the first vector parallel to the
//  x-axis and the second vector in the xy-plane are supported.
//   Box (3x3):
//  Box[0]={ nan,  nan,  nan}
//  Box[1]={ nan,  nan,  nan}
//  Box[2]={ nan,  nan,  nan}
//   Can not fix pbc.
//
//  ---
//  Program mdrun_mpi, VERSION 4.0.3
//  Source code file: nsgrid.c, line: 348
//
//  Fatal error:
//  Number of grid cells is zero. Probably the system and box collapsed.
//
//  ---
//
//  It Wouldn't Hurt to Wipe Once In a While (Beavis and Butthead)
//
//  Error on node 0, will try to stop all the nodes
//  Halting parallel program mdrun_mpi on CPU 0 out of 8
//
//  gcq#166: It Wouldn't Hurt to Wipe Once In a While (Beavis and Butthead)
//
//  application called MPI_Abort(MPI_COMM_WORLD, -1) - process 0[cli_0]:
//  aborting job:
//  application called MPI_Abort(MPI_COMM_WORLD, -1) - process 0
//
//  ---
//  Program mdrun_mpi, VERSION 4.0.3
//  Source code file: nsgrid.c, line: 348
//
//  Fatal error:
//  Number of grid cells is zero. 

Re: [gmx-users] FEP and loss of performance

2011-04-04 Thread Justin A. Lemkul



Luca Bellucci wrote:

Dear Chris and Justin
Thank you for your precious suggestions 
This is a test that I performed on a single machine with 8 cores 
and GROMACS 4.5.4.


I am trying to enhance the sampling of a protein using the decoupling scheme 
of the free energy module of GROMACS. However, when I decouple only the 
protein, the protein collapses. Because I simulated in NVT, I thought that 
this was an effect of the solvent, so I was trying to decouple the solvent as well 
to understand the system behavior.




Rather than suspect that the solvent is the problem, it's more likely that 
decoupling an entire protein simply isn't stable.  I have never tried anything 
that enormous, but the volume change in the system could be unstable, along with 
any number of factors, depending on how you approach it.


If you're looking for better sampling, REMD is a much more robust approach than 
trying to manipulate the interactions of huge parts of your system using the 
free energy code.


-Justin

 I expected a loss of performance, but not so drastic. 
Luca 


Load balancing problems I can understand, but why would it take longer
in absolute time? I would have thought that some nodes would simply be
sitting idle, but this should not cause an increase in the overall
simulation time (15x at that!).

There must be some extra communication?

I agree with Justin that this seems like a strange thing to do, but
still I think that there must be some underlying coding issue (probably
one that only exists because of a reasonable assumption that nobody
would annihilate the largest part of their system).

Chris.

Luca Bellucci wrote:

/  Hi Chris,

//  thanks for the suggestions,
//  in the previous mail there is a mistake because
//  couple-moltype = SOL (for solvent) and not Protein_chaim_P.
//  Now the problem of the load balance seems reasonable, because
//  the water box is large ~9.0 nm.
/
Now your outcome makes a lot more sense.  You're decoupling all of the
solvent? I don't see how that is going to be physically stable or terribly
meaningful, but it explains your performance loss.  You're annihilating a
significant number of interactions (probably the vast majority of all the
nonbonded interactions in the system), which I would expect would cause
continuous load balancing issues.

-Justin


/  However the problem exist and the performance loss is very high, so I
have

//  redone calculations with this command:
//
//  grompp -f
//  md.mdp -c ../Run-02/confout.gro -t ../Run-02/state.cpt -p ../topo.top
-n ../index.ndx -o //  md.tpr -maxwarn 1
//
//  mdrun -s md.tpr -o md
//
//  this is part of the md.mdp file:
//
//  ; Run parameters
//  ; define  = -DPOSRES
//  integrator  = md;
//  nsteps  = 1000  ;
//  dt  = 0.002 ;
//  [..]
//  free_energy= yes ; /no
//  init_lambda= 0.9
//  delta_lambda   = 0.0
//  couple-moltype = SOL; solvent water
//  couple-lambda0 = vdw-q
//  couple-lambda1 = none
//  couple-intramol= yes
//
//  Result for free energy calculation
//   Computing:          Nodes   Number     G-Cycles    Seconds     %
//  -----------------------------------------------------------------------
//   Domain decomp.          8      126       22.050        8.3     0.1
//   DD comm. load           8       15        0.009        0.0     0.0
//   DD comm. bounds         8       12        0.031        0.0     0.0
//   Comm. coord.            8     1001       17.319        6.5     0.0
//   Neighbor search         8      127      436.569      163.7     1.1
//   Force                   8     1001    34241.576    12840.9    87.8
//   Wait + Comm. F          8     1001       19.486        7.3     0.0
//   PME mesh                8     1001     4190.758     1571.6    10.7
//   Write traj.             8        7        1.827        0.7     0.0
//   Update                  8     1001       12.557        4.7     0.0
//   Constraints             8     1001       26.496        9.9     0.1
//   Comm. energies          8     1002       10.710        4.0     0.0
//   Rest                    8                 25.142        9.4     0.1
//  -----------------------------------------------------------------------
//   Total                   8              39004.531    14627.1   100.0
//  -----------------------------------------------------------------------
//   PME redist. X/F         8     3003     3479.771     1304.9     8.9
//   PME spread/gather       8     4004      277.574      104.1     0.7
//   PME 3D-FFT              8     4004      378.090      141.8     1.0
//   PME solve               8     2002       55.033       20.6     0.1
//  -----------------------------------------------------------------------
//
//   Parallel run - timing based on wallclock.
//
//                  NODE (s)   Real (s)      (%)
//   Time:          1828.385   1828.385

[gmx-users] FEP and loss of performance

2011-04-04 Thread Chris Neale

 Dear Chris and Justin


/  Thank you for your precious suggestions

//  This is a test that i perform in a single machine with 8 cores
//  and gromacs 4.5.4.
//
//  I am trying  to enhance the  sampling of a protein using the decoupling 
scheme
//  of the free energy module of gromacs.  However when i decouple only the
//  protein, the protein collapsed. Because i simulated in NVT i thought that
//  this was an effect of the solvent. I was trying to decouple also the 
solvent
//  to understand the system behavior.
//
/

Rather than suspect that the solvent is the problem, it's more likely that
decoupling an entire protein simply isn't stable.  I have never tried anything
that enormous, but the volume change in the system could be unstable, along with
any number of factors, depending on how you approach it.

If you're looking for better sampling, REMD is a much more robust approach than
trying to manipulate the interactions of huge parts of your system using the
free energy code.


Presumably Luca is interested in some type of hamiltonian exchange where lambda 
represents the interactions between the protein and the solvent?
This can actually be a useful method for enhancing sampling. I think it's dangerous if we 
rely too heavily on "try something else". I still see no methodological
reason a priori why there should be any actual slowdown, so that makes me think 
that it's an implementation thing, and there is at least the possibility that this is
something that could be fixed as an enhancement.

Chris.


-Justin


/   I expected a loss of performance, but not so drastic.

//  Luca
//
//  Load balancing problems I can understand, but why would it take longer
//  in absolute time? I would have thought that some nodes would simple be
//  sitting idle, but this should not cause an increase in the overall
//  simulation time (15x at that!).
//
//  There must be some extra communication?
//
//  I agree with Justin that this seems like a strange thing to do, but
//  still I think that there must be some underlying coding issue (probably
//  one that only exists because of a reasonable assumption that nobody
//  would annihilate the largest part of their system).
//
//  Chris.
//
//  Luca Bellucci wrote:
//  /  Hi Chris,
//  //  thank for the suggestions,
//  //  in the previous mail there is a mistake because
//  //  couple-moltype = SOL (for solvent) and not Protein_chaim_P.
//  //  Now the problem of the load balance seems reasonable, because
//  //  the water box is large ~9.0 nm.
//  /
//  Now your outcome makes a lot more sense.  You're decoupling all of the
//  solvent? I don't see how that is going to be physically stable or terribly
/

-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

Re: [gmx-users] FEP and loss of performance

2011-04-04 Thread Justin A. Lemkul



Chris Neale wrote:

   Dear Chris and Justin

/ Thank you for your precious suggestions 
// This is a test that i perform in a single machine with 8 cores 
// and gromacs 4.5.4.
// 
// I am trying  to enhance the  sampling of a protein using the decoupling scheme 
// of the free energy module of gromacs.  However when i decouple only the 
// protein, the protein collapsed. Because i simulated in NVT i thought that 
// this was an effect of the solvent. I was trying to decouple also the solvent 
// to understand the system behavior.
// 
/
Rather than suspect that the solvent is the problem, it's more likely that 
decoupling an entire protein simply isn't stable.  I have never tried anything 
that enormous, but the volume change in the system could be unstable, along with 
any number of factors, depending on how you approach it.


If you're looking for better sampling, REMD is a much more robust approach than 
trying to manipulate the interactions of huge parts of your system using the 
free energy code.


Presumably Luca is interested in some type of hamiltonian exchange where lambda 
represents the interactions between the protein and the solvent?
This can actually be a useful method for enhancing sampling. I think it's dangerous if we rely too heavily on "try something else". I still see no methodological 
reason a priori why there should be any actual slowdown, so that makes me think that it's an implementation thing, and there is at least the possibility that this is

something that could be fixed as an enhancement.



Then perhaps we can get some clarification.  Based on the earlier .mdp snippet:

free_energy= yes
init_lambda= 0.9
delta_lambda   = 0.0
couple-moltype = Protein_Chain_P
couple-lambda0 = vdw-q
couple-lambda1 = none
couple-intramol= yes

It looked to me as if the intent was to decouple some protein complex from the 
system by simultaneously annihilating all solute-solvent and solute-solute 
nonbonded interactions (which comes with its own set of methodological issues - 
stability, convergence, etc).


-Justin


Chris.


-Justin

/  I expected a loss of performance, but not so drastic. 
// Luca 
// 
// Load balancing problems I can understand, but why would it take longer

// in absolute time? I would have thought that some nodes would simple be
// sitting idle, but this should not cause an increase in the overall
// simulation time (15x at that!).
//
// There must be some extra communication?
//
// I agree with Justin that this seems like a strange thing to do, but
// still I think that there must be some underlying coding issue (probably
// one that only exists because of a reasonable assumption that nobody
// would annihilate the largest part of their system).
//
// Chris.
//
// Luca Bellucci wrote:
// /  Hi Chris,
// //  thank for the suggestions,
// //  in the previous mail there is a mistake because
// //  couple-moltype = SOL (for solvent) and not Protein_chaim_P.
// //  Now the problem of the load balance seems reasonable, because
// //  the water box is large ~9.0 nm.
// /
// Now your outcome makes a lot more sense.  You're decoupling all of the
// solvent? I don't see how that is going to be physically stable or terribly
/



--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin


--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


[gmx-users] autocorrelation functions

2011-04-04 Thread shivangi nangia
 Hello all,


I need to calculate the end-to-end vector autocorrelation function of my
polymer chains. I could get the velocity autocorrelation function using
g_velacc tool.

Is there a tool available for calculating end-to-end vector autocorrelation
function? If not, then is there an easy way to modify/morph the g_velacc.c
program to  do other autocorrelation function calculations?


Thanks,
SN
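
(One possible workaround: write the positions of the two chain-end atoms to a plain
.xvg file, for example with g_traj -ox and an index group containing just those two
atoms, and compute the autocorrelation of the normalised end-to-end vector directly.
A minimal numpy sketch follows; the coords.xvg name and the assumed column order
t, x1, y1, z1, x2, y2, z2 are assumptions, so check the header of your own file first.)

#!/usr/bin/env python
# ee_acf.py -- sketch: end-to-end unit-vector autocorrelation C(t) = <u(0).u(t)>
# for one chain, averaged over all time origins.
import numpy as np

rows = []
with open("coords.xvg") as f:
    for line in f:
        if line.startswith(("#", "@")) or not line.strip():
            continue                          # skip xvg comments and headers
        rows.append([float(x) for x in line.split()])
data = np.array(rows)

t = data[:, 0]
u = data[:, 4:7] - data[:, 1:4]               # end-to-end vector per frame
u /= np.sqrt((u ** 2).sum(axis=1))[:, None]   # normalise to unit vectors

n = len(u)
acf = np.empty(n)
for lag in range(n):                          # C(lag), averaged over origins
    acf[lag] = np.sum(u[:n - lag] * u[lag:], axis=1).mean()

for lag in range(0, n, max(1, n // 20)):      # print a coarse C(t) table
    print("%10.3f  %8.4f" % (t[lag] - t[0], acf[lag]))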
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

[gmx-users] how to Installing GROMACS in rocks cluster

2011-04-04 Thread Miguel Quiliano Meza
Dear Colleagues.

I have been searching for information about how to install GROMACS on a Rocks
cluster. Unfortunately, the information that I found is not clear.

Can someone help me with this question? Maybe there are basic but important
steps that I have to keep in mind. Could you please share your experiences?


Thank you in advance.

Miguel Quiliano.

P.D I have installed rocks cluster 5.4
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

[gmx-users] autocorrelation functions

2011-04-04 Thread shivangi nangia
Hello all,


I need to calculate the end-to-end vector autocorrelation function of my
polymer chains. I could get the velocity autocorrelation function using
g_velacc tool.

Is there a tool available for calculating end-to-end vector autocorrelation
function? If not, then is there an easy way to modify/morph the g_velacc.c
program to  do other autocorrelation function calculations?


Thanks,
Shivangi
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

Re: [gmx-users] FEP and loss of performance

2011-04-04 Thread Luca Bellucci
Yes, I am testing the possibility of performing a Hamiltonian-REMD.
Energy barriers can be overcome by increasing the system temperature or by scaling 
the potential energy with a lambda value; the two methods are equivalent.
Both have advantages and disadvantages, and this is not the right place 
to debate them. The main problem seems to be how to overcome the loss 
of GROMACS performance in such a calculation. At the moment it seems to be an 
intrinsic code problem.
Is that possible?

   Dear Chris and Justin
 
 /  Thank you for your precious suggestions

 //  This is a test that i perform in a single machine with 8 cores
 //  and gromacs 4.5.4.
 //
 //  I am trying  to enhance the  sampling of a protein using the
 decoupling scheme //  of the free energy module of gromacs.  However when
 i decouple only the //  protein, the protein collapsed. Because i
 simulated in NVT i thought that //  this was an effect of the solvent. I
 was trying to decouple also the solvent //  to understand the system
 behavior.
 //
 /

 Rather than suspect that the solvent is the problem, it's more likely that
 decoupling an entire protein simply isn't stable.  I have never tried
  anything that enormous, but the volume change in the system could be
  unstable, along with any number of factors, depending on how you approach
  it.
 
 If you're looking for better sampling, REMD is a much more robust approach
  than trying to manipulate the interactions of huge parts of your system
  using the free energy code.

 Presumably Luca is interested in some type of hamiltonian exchange where
 lambda represents the interactions between the protein and the solvent?
 This can actually be a useful method for enhancing sampling. I think it's
 dangerous if we rely to heavily on try something else. I still see no
 methodological reason a priori why there should be any actual slowdown, so
 that makes me think that it's an implementation thing, and there is at
 least the possibility that this is something that could be fixed as an
 enhancement.

 Chris.


 -Justin

 /   I expected a loss of performance, but not so drastic.

 //  Luca
 //
 //  Load balancing problems I can understand, but why would it take
 longer //  in absolute time? I would have thought that some nodes would
 simple be //  sitting idle, but this should not cause an increase in the
 overall //  simulation time (15x at that!).
 //
 //  There must be some extra communication?
 //
 //  I agree with Justin that this seems like a strange thing to do, but
 //  still I think that there must be some underlying coding issue
 (probably //  one that only exists because of a reasonable assumption
 that nobody //  would annihilate the largest part of their system).
 //
 //  Chris.
 //
 //  Luca Bellucci wrote:
 //  /  Hi Chris,
 //  //  thank for the suggestions,
 //  //  in the previous mail there is a mistake because
 //  //  couple-moltype = SOL (for solvent) and not Protein_chaim_P.
 //  //  Now the problem of the load balance seems reasonable, because
 //  //  the water box is large ~9.0 nm.
 //  /
 //  Now your outcome makes a lot more sense.  You're decoupling all of
 the //  solvent? I don't see how that is going to be physically stable or
 terribly /


-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


[gmx-users] g_chi

2011-04-04 Thread simon sham
Hi,
I have 2 questions on using g_chi to calculate only one omega angle for X-Pro.

1. I used the following command:

g_chi -s md.gro/tpr -f md.xtc -omega -o test.xvg and got the following 
message:
Fatal error:
Library file in current dir nor  not found aminoacids.datin default directories.
(You can set the directories to search with the GMXLIB path variable)
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors;

2. The command does not allow using index file. How can I calculate just one 
dihedral angle?

Thanks for your help in advance.

Simon Sham





-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

Re: [gmx-users] g_chi

2011-04-04 Thread Francesco Oteri

Hi Simon,
Regarding the first question you should set GMXLIB as $GMXDATA/gromacs/top.

I don't know how to solve the second problem because I never used g_chi.

On 04/04/2011 22:19, simon sham wrote:

Hi,
I have 2 questions on using g_chi to calculate only one omega angle 
for X-Pro.


1. I used the following command:

g_chi -s md.gro/tpr -f md.xtc -omega -o test.xvg and got the 
following message:

Fatal error:
Library file in current dir nor  not found aminoacids.datin default 
directories.

(You can set the directories to search with the GMXLIB path variable)
For more information and tips for troubleshooting, please check the 
GROMACS

website at http://www.gromacs.org/Documentation/Errors;

2. The command does not allow using index file. How can I calculate 
just one dihedral angle?


Thanks for your help in advance.

Simon Sham







-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

Re: [gmx-users] g_chi

2011-04-04 Thread Justin A. Lemkul



simon sham wrote:

Hi,
I have 2 questions on using g_chi to calculate only one omega angle for 
X-Pro.


1. I used the following command:

g_chi -s md.gro/tpr -f md.xtc -omega -o test.xvg and got the following 
message:

Fatal error:
Library file in current dir nor  not found aminoacids.datin default 
directories.

(You can set the directories to search with the GMXLIB path variable)
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors;



Upgrade to a newer version of Gromacs.  This bug has been fixed.

2. The command does not allow using index file. How can I calculate just 
one dihedral angle?




Not sure on this one, but once you have a properly-functioning executable, you 
should be able to tell from the output files (of which there is a .log file that 
should contain everything).


-Justin


Thanks for your help in advance.

Simon Sham







--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin


--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


[gmx-users] Postdoc position available

2011-04-04 Thread Marcelo A. Carignano

Hi all,

A post-doctoral research position is available at Northwestern  
University at Evanston, IL.
The position is in the field of computational physical chemistry, with  
a focus on molecular dynamics simulation and force field development.
The systems to be studied include water/air, water/ice and ice/air  
interfaces, and the chemical processes occurring at these interfaces.


The candidate must have a Ph.D. in chemistry or physics, excellent  
practical and theoretical understanding of atomistic simulation
methods, programming experience and working experience in a Linux  
environment.
The candidate must also be able to prepare manuscripts and present  
ongoing research.


For further information please contact:

Marcelo Carignano
c...@northwestern.edu


--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


[gmx-users] coordination number and g_analysis

2011-04-04 Thread Nilesh Dhumal
Hello,

I want to calculate the coordination number of the solute in the first solvation
shell:

integration of (4*pi*r^2*g(r)) from 0 to 2.6 A (first solvation shell)

If I calculate the g_rdf for the first solvation shell (up to 2.6 A) and then
integrate it using g_analysis, can I go this way?

Nilesh
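
(For what it is worth, the textbook expression also carries the number density rho
of the coordinating species, n(r_cut) = 4*pi*rho * integral from 0 to r_cut of
g(r) r^2 dr, which is essentially what g_rdf -cn evaluates. If you want to do the
integral yourself from the g_rdf output, a small numpy sketch is below; the rdf.xvg
name, the density value and the cutoff are placeholders, and note that GROMACS works
in nm, so 2.6 A = 0.26 nm.)

#!/usr/bin/env python
# coord_number.py -- sketch: n(r_cut) = 4*pi*rho * int_0^rcut g(r) r^2 dr,
# integrated with the trapezoid rule from an rdf.xvg written by g_rdf.
import numpy as np

rho   = 33.4      # number density of the coordinating species in nm^-3 (placeholder)
r_cut = 0.26      # first-shell radius in nm (2.6 A)

r, g = [], []
with open("rdf.xvg") as f:
    for line in f:
        if line.startswith(("#", "@")) or not line.strip():
            continue                          # skip xvg comments and headers
        cols = line.split()
        r.append(float(cols[0]))
        g.append(float(cols[1]))
r = np.array(r)
g = np.array(g)

mask = r <= r_cut
n = 4.0 * np.pi * rho * np.trapz(g[mask] * r[mask] ** 2, r[mask])
print("coordination number up to r = %.2f nm: %.2f" % (r_cut, n))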



-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


[gmx-users] Re: gmx-users Digest, Vol 84, Issue 28

2011-04-04 Thread Miguel Quiliano Meza
Dear colleagues.

I would like to share this with the community. While searching, I found this:

http://software.intel.com/en-us/articles/compile-and-run-gromacs-453-in-icr/
Compile and run GROMACS 4.5.3 in the Intel(R) Cluster Ready Reference Recipe
S5520UR-ICR1.1-ROCKS5.3-CENTOS5.4-C2 v1.0

I think it is very useful for people who have the same objective. However, in one
section the tutorial mentions that GROMACS version 4.5.3 had some bugs. I would
like to install GROMACS 4.5.4; does somebody know if this version has the same
problems?

Thanks in advance.

Miguel Quiliano.



2011/4/4 gmx-users-requ...@gromacs.org

 Send gmx-users mailing list submissions to
gmx-users@gromacs.org

 To subscribe or unsubscribe via the World Wide Web, visit
http://lists.gromacs.org/mailman/listinfo/gmx-users
 or, via email, send a message with subject or body 'help' to
gmx-users-requ...@gromacs.org

 You can reach the person managing the list at
gmx-users-ow...@gromacs.org

 When replying, please edit your Subject line so it is more specific
 than Re: Contents of gmx-users digest...


 Today's Topics:

   1. how to Installing GROMACS in rocks cluster (Miguel Quiliano Meza)
   2. autocorrelation functions (shivangi nangia)
   3. Re: FEP and loss of performance (Luca Bellucci)
   4. g_chi (simon sham)
   5. Re: g_chi (Francesco Oteri)
   6. Re: g_chi (Justin A. Lemkul)


 --

 Message: 1
 Date: Mon, 4 Apr 2011 13:58:40 -0400
 From: Miguel Quiliano Meza rifaxim...@gmail.com
 Subject: [gmx-users] how to Installing GROMACS in rocks cluster
 To: gmx-users@gromacs.org
 Message-ID: BANLkTimh_Lx_E9B=fJ-WRypK-vh=cf2...@mail.gmail.com
 Content-Type: text/plain; charset=iso-8859-1

 Dear Colleagues.

 I have been searching information about HOW TO INSTALL GROMACS in rocks
 cluster?. Unfortunately, the information that I found is not clear.

 Someone can help me with this question. Maybe there are basic but important
 steps that I have to keep in mind. Could you please share yours
 experiences?


 Thank you in advance.

 Miguel Quiliano.

 P.D I have installed rocks cluster 5.4
 -- next part --
 An HTML attachment was scrubbed...
 URL:
 http://lists.gromacs.org/pipermail/gmx-users/attachments/20110404/ef4b39fe/attachment-0001.html

 --

 Message: 2
 Date: Mon, 4 Apr 2011 14:11:48 -0400
 From: shivangi nangia shivangi.nan...@gmail.com
 Subject: [gmx-users] autocorrelation functions
 To: Discussion list for GROMACS users gmx-users@gromacs.org
 Message-ID: BANLkTi=HS9T7zpWkhEVFsYmZjrV=he1...@mail.gmail.com
 Content-Type: text/plain; charset=iso-8859-1

 Hello all,


 I need to calculate the end-to-end vector autocorrelation function of my
 polymer chains. I could get the velocity autocorrelation function using
 g_velacc tool.

 Is there a tool available for calculating end-to-end vector autocorrelation
 function? If not, then is there an easy way to modify/morph the g_velacc.c
 program to  do other autocorrelation function calculations?


 Thanks,
 Shivangi
 -- next part --
 An HTML attachment was scrubbed...
 URL:
 http://lists.gromacs.org/pipermail/gmx-users/attachments/20110404/cb2a4e0a/attachment-0001.html

 --

 Message: 3
 Date: Mon, 4 Apr 2011 20:36:37 +0200
 From: Luca Bellucci lcbl...@gmail.com
 Subject: Re: [gmx-users] FEP and loss of performance
 To: Discussion list for GROMACS users gmx-users@gromacs.org
 Message-ID: 201104042036.37450.lcbl...@gmail.com
 Content-Type: text/plain;  charset=utf-8

 Yes i am testing the possibility to perform an Hamiltonian-REMD
 Energy barriers can be overcome  increasing the temperature system or
 scaling
 potential energy  with a lambda value, these methods are equivalent.
 Both have advantages and disavantages, at this stage it is not the right
 place
 to debate on it. The main problem seems to be how to overcome to the the
 loss
 of gromacs performance in such calculation.  At this moment it seems an
 intrinsic code problem.
 Is it possible?

Dear Chris and Justin
  
  /  Thank you for your precious suggestions
 
  //  This is a test that i perform in a single machine with 8 cores
  //  and gromacs 4.5.4.
  //
  //  I am trying  to enhance the  sampling of a protein using the
  decoupling scheme //  of the free energy module of gromacs.  However
 when
  i decouple only the //  protein, the protein collapsed. Because i
  simulated in NVT i thought that //  this was an effect of the solvent.
 I
  was trying to decouple also the solvent //  to understand the system
  behavior.
  //
  /
 
  Rather than suspect that the solvent is the problem, it's more likely
 that
  decoupling an entire protein simply isn't stable.  I have never tried
   anything that enormous, but the volume change in the system could be
   unstable, along with any number of factors, depending on how you
 approach

Re: [gmx-users] Re: gmx-users Digest, Vol 84, Issue 28

2011-04-04 Thread Justin A. Lemkul



Miguel Quiliano Meza wrote:

Dear colleagues.

I would like to share with the community this. Searching I can find this:

http://software.intel.com/en-us/articles/compile-and-run-gromacs-453-in-icr/


  Compile and run GROMACS 4.5.3 in the Intel(R) Cluster Ready Reference
  Recipe S5520UR-ICR1.1-ROCKS5.3-CENTOS5.4-C2 v1.0

I think it is very useful for people who have the same objective. However, 
in one section the tutorial mentions that GROMACS version 4.5.3 had 
some bugs. I would like to install GROMACS 4.5.4; does somebody 
know if this version has the same problems?




It was fixed for 4.5.4.

-Justin


Thanks in advance.

Miguel Quiliano.

 

2011/4/4 gmx-users-requ...@gromacs.org 
mailto:gmx-users-requ...@gromacs.org


Send gmx-users mailing list submissions to
   gmx-users@gromacs.org mailto:gmx-users@gromacs.org

To subscribe or unsubscribe via the World Wide Web, visit
   http://lists.gromacs.org/mailman/listinfo/gmx-users
or, via email, send a message with subject or body 'help' to
   gmx-users-requ...@gromacs.org
mailto:gmx-users-requ...@gromacs.org

You can reach the person managing the list at
   gmx-users-ow...@gromacs.org mailto:gmx-users-ow...@gromacs.org

When replying, please edit your Subject line so it is more specific
than Re: Contents of gmx-users digest...


Today's Topics:

  1. how to Installing GROMACS in rocks cluster (Miguel Quiliano Meza)
  2. autocorrelation functions (shivangi nangia)
  3. Re: FEP and loss of performance (Luca Bellucci)
  4. g_chi (simon sham)
  5. Re: g_chi (Francesco Oteri)
  6. Re: g_chi (Justin A. Lemkul)


--

Message: 1
Date: Mon, 4 Apr 2011 13:58:40 -0400
From: Miguel Quiliano Meza rifaxim...@gmail.com
mailto:rifaxim...@gmail.com
Subject: [gmx-users] how to Installing GROMACS in rocks cluster
To: gmx-users@gromacs.org mailto:gmx-users@gromacs.org
Message-ID: BANLkTimh_Lx_E9B=fJ-WRypK-vh=cf2...@mail.gmail.com
mailto:cf2...@mail.gmail.com
Content-Type: text/plain; charset=iso-8859-1

Dear Colleagues.

I have been searching information about HOW TO INSTALL GROMACS in rocks
cluster?. Unfortunately, the information that I found is not clear.

Someone can help me with this question. Maybe there are basic but
important
steps that I have to keep in mind. Could you please share yours
experiences?


Thank you in advance.

Miguel Quiliano.

P.D I have installed rocks cluster 5.4
-- next part --
An HTML attachment was scrubbed...
URL:

http://lists.gromacs.org/pipermail/gmx-users/attachments/20110404/ef4b39fe/attachment-0001.html

--

Message: 2
Date: Mon, 4 Apr 2011 14:11:48 -0400
From: shivangi nangia shivangi.nan...@gmail.com
mailto:shivangi.nan...@gmail.com
Subject: [gmx-users] autocorrelation functions
To: Discussion list for GROMACS users gmx-users@gromacs.org
mailto:gmx-users@gromacs.org
Message-ID: BANLkTi=HS9T7zpWkhEVFsYmZjrV=he1...@mail.gmail.com
mailto:he1...@mail.gmail.com
Content-Type: text/plain; charset=iso-8859-1

Hello all,


I need to calculate the end-to-end vector autocorrelation function of my
polymer chains. I could get the velocity autocorrelation function using
g_velacc tool.

Is there a tool available for calculating end-to-end vector
autocorrelation
function? If not, then is there an easy way to modify/morph the
g_velacc.c
program to  do other autocorrelation function calculations?


Thanks,
Shivangi
-- next part --
An HTML attachment was scrubbed...
URL:

http://lists.gromacs.org/pipermail/gmx-users/attachments/20110404/cb2a4e0a/attachment-0001.html

--

Message: 3
Date: Mon, 4 Apr 2011 20:36:37 +0200
From: Luca Bellucci lcbl...@gmail.com mailto:lcbl...@gmail.com
Subject: Re: [gmx-users] FEP and loss of performance
To: Discussion list for GROMACS users gmx-users@gromacs.org
mailto:gmx-users@gromacs.org
Message-ID: 201104042036.37450.lcbl...@gmail.com
mailto:201104042036.37450.lcbl...@gmail.com
Content-Type: text/plain;  charset=utf-8

Yes i am testing the possibility to perform an Hamiltonian-REMD
Energy barriers can be overcome  increasing the temperature system
or scaling
potential energy  with a lambda value, these methods are equivalent.
Both have advantages and disavantages, at this stage it is not the
right place
to debate on it. The main problem seems to be how to overcome to the
the loss
of gromacs performance in such calculation.  At this moment it seems an
intrinsic code problem.
Is it possible?

Dear Chris and Justin
  
  /  Thank you for your precious suggestions

Re: [gmx-users] coordination number and g_analysis

2011-04-04 Thread Marcelo A. Carignano

I'd rather use: g_rdf -cn

Marcelo.

On Apr 4, 2011, at 4:17 PM, Nilesh Dhumal wrote:


Hello,

I want to calculat the coordination number of solute in first  
solvation

shell.

integration of (4*pi*r^2*g(r)) from 0 to 2.6 A(first solvation shell)

If I calcualte the g_rdf for first solvation shell (till 2.6 A) and  
then I

integrate this using g_analysis.

Can I go this way.

Nilesh



--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/Support/Mailing_Lists/Search 
 before posting!

Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] Splitted DMPC bilayer

2011-04-04 Thread Dr. Ramón Garduño-Juárez

Justin,

Thank you for your comments after finishing the MD production run for up 
to 20 ns...


Since this step was over very quickly, now I have a simple question  
¿How long, in human time, should a production run last?


The production run was carried out in six processors Intel Xeon (R) 
E5405 2.00 GHz. The last few lines of the md_0_1.log are:


-
Parallel run - timing based on wallclock.

   NODE (s)   Real (s)  (%)
   Time: 180685.417 180685.417100.0
   2d02h11:25
   (Mnbf/s)   (GFlops)   (ns/day)  (hour/ns)
Performance:232.900 12.351  9.564  2.510
-

Is this correct?   In my opinion it should have lasted much longer...

Before reaching this point, this is an update of what we did...

First we eliminated the SOL_SOL group and the only special index group 
was Protein_DMPC.


Since the NVT equilibration failed, we took option # 2 of the Advanced 
Troubleshooting, for the 1st phase of Equilibration.


After this step we proceeded with the equilibration phase 2 with a 1-ns 
NPT equilibration which ended fine.


Next, we proceeded with a 20 ns production run. Thus, the modified lines 
of the .mpd file found in the tutorial page were:


nsteps   =  10000000  ;  0.002 * 10000000  =  20000  ps   (20 ns)
tc-grps  =  Protein DMPC  SOL
comm-grps  =  Protein_DMPC  SOL

With this instructions the 20 ns simulation took  2d02h11:25

I believe the error comes from the line

constrains  =  all-bonds   which surely must be changed to

constrains  =  none or  hbonds

Looking forward to your comments...

Much obliged,
Ramon

On 30/03/2011 12:25 p.m., Justin A. Lemkul wrote:



Dr. Ramón Garduño-Juárez wrote:

Dear all,
Dear Justin,

This time I want to ask the gurus about this problem I encountered in 
the Equilibration step of my system made of 3 individual (small) 
protein chains in a solvated DMPC bilayer, no ions present since the 
protein system is neutral...


Following the tutorial I started with

 make_ndx_d -f em_after_solv.gro -o index_after_solv.ndx

for which I got the following list:
-
Reading structure file
Going to read 0 old index file(s)
Analysing residue names:
There are:   129Protein residues
There are:   123  Other residues
There are:  3215  Water residues
Analysing Protein...
Analysing residues not classified as Protein/DNA/RNA/Water and 
splitting into groups...


  0 System  : 16649 atoms
  1 Protein :  1346 atoms
  2 Protein-H   :  1025 atoms
  3 C-alpha :   129 atoms
  4 Backbone:   387 atoms
  5 MainChain   :   519 atoms
  6 MainChain+Cb:   636 atoms
  7 MainChain+H :   649 atoms
  8 SideChain   :   697 atoms
  9 SideChain-H :   506 atoms
 10 Prot-Masses :  1346 atoms
 11 non-Protein : 15303 atoms
 12 Other   :  5658 atoms
 13 DMPC:  5658 atoms
 14 Water   :  9645 atoms
 15 SOL :  9645 atoms
 16 non-Water   :  7004 atoms
-

Since I did not add ions I have formed a (merged) group named SOL_SOL 


Why would you merge solvent with itself?

after chosing  15 | 15 , and another merged group named Protein_DMPC 
by choosing  1 | 13...


Next, I started the NVT equilibration with:

grompp_d  -f nvt.mdp  -c em_after_solv.gro  -p 
topol_mod_lip_solv.top  -n index_after_solv.ndx  -o nvt.tpr


The nvt.mpd file is the same as the one given in the tutorial, the 
only changes I made were:


tc-grps= Protein DMPC SOL_SOL
and
comm-grps= Protein_DMPC SOL_SOL



I would think that using this weird SOL_SOL group would create 
problems related to degrees of freedom, etc.  If you have no ions, 
there is no need to merge any sort of solvent-related groups.



After this I ran

mdrun_mpi_d  -v  -deffnm nvt

When this process is finished I looked at the resulting nvt.gro file 
and found the following:


1) The 3 protein chains complex is fine, at the center of the box as 
it should be, but
2) The 2 DMPC layer are separated (splitted) leaving a large gap 
between them forming a )( shape where the top and bottom of this 
figure contain one layer of DMPC plus water molecules, while in the 
narrow section the protein complex is found... In the void between 
the two DMPC layers no water molecules are present...  Very odd!...


Please advice...



This is covered in the Advanced Troubleshooting section of my tutorial:

http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin/gmx-tutorials/membrane_protein/advanced_troubleshooting.html 



-Justin


Cheers,
Ramon Garduno



-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the 

Re: [gmx-users] Splitted DMPC bilayer

2011-04-04 Thread Justin A. Lemkul



Dr. Ramón Garduño-Juárez wrote:

Justin,

Thank you for your comments after finishing the MD production run for up 
to 20 ns...


Since this step was over very quickly, now I have a simple question  
¿How long, in human time, should a production run last?




There is no way to answer that.  It depends on the hardware, number of atoms, 
system load, application of any number of the Gromacs algorithms, .mdp settings...


The production run was carried out in six processors Intel Xeon (R) 
E5405 2.00 GHz. The last few lines of the md_0_1.log are:


-
Parallel run - timing based on wallclock.

   NODE (s)   Real (s)  (%)
   Time: 180685.417 180685.417100.0
   2d02h11:25
   (Mnbf/s)   (GFlops)   (ns/day)  (hour/ns)
Performance:232.900 12.351  9.564  2.510
-

Is this correct?   In my opinion it should lasted much more longer...



Nope, Gromacs is just fast :)


Before reaching this point, this is an update of what we did...

First we eliminated the SOL_SOL group and the only special index group 
was Protein_DMPC.


Since the NVT equilibration failed, we took option # 2 of the Advanced 
Troubleshooting, for the 1st phase of Equilibration.


After this step we proceeded with the equilibration phase 2 with a 1-ns 
NPT equilibration which ended fine.


Next, we proceeded with a 20 ns production run. Thus, the modified lines 
of the .mpd file found in the tutorial page were:


nsteps   =  1000  ;  2 * 1000  =  2000  ps   (20 ns)
tc-grps  =  Protein DMPC  SOL
comm-grps  =  Protein_DMPC  SOL

With this instructions the 20 ns simulation took  2d02h11:25

I believe the error comes from the line

constrains  =  all-bonds   which surely must be changed to

constrains  =  none or  hbonds



Why do you say that?  What error is occurring?  You said your simulations were 
running fine.  You most certainly should not remove constraints if you're 
sticking with a 2-fs timestep.  The system will be unstable without constraints. 
 You might be able to get away with hbonds, but certainly not none.


-Justin


Looking forward to your comments...

Much obliged,
Ramon

On 30/03/2011 12:25 p.m., Justin A. Lemkul wrote:



Dr. Ramón Garduño-Juárez wrote:

Dear all,
Dear Justin,

This time I want to ask the gurus about this problem I encountered in 
the Equilibration step of my system made of 3 individual (small) 
protein chains in a solvated DMPC bilayer, no ions present since the 
protein system is neutral...


Following the tutorial I started with

 make_ndx_d -f em_after_solv.gro -o index_after_solv.ndx

for which I got the following list:
-
Reading structure file
Going to read 0 old index file(s)
Analysing residue names:
There are:   129Protein residues
There are:   123  Other residues
There are:  3215  Water residues
Analysing Protein...
Analysing residues not classified as Protein/DNA/RNA/Water and 
splitting into groups...


  0 System  : 16649 atoms
  1 Protein :  1346 atoms
  2 Protein-H   :  1025 atoms
  3 C-alpha :   129 atoms
  4 Backbone:   387 atoms
  5 MainChain   :   519 atoms
  6 MainChain+Cb:   636 atoms
  7 MainChain+H :   649 atoms
  8 SideChain   :   697 atoms
  9 SideChain-H :   506 atoms
 10 Prot-Masses :  1346 atoms
 11 non-Protein : 15303 atoms
 12 Other   :  5658 atoms
 13 DMPC:  5658 atoms
 14 Water   :  9645 atoms
 15 SOL :  9645 atoms
 16 non-Water   :  7004 atoms
-

Since I did not add ions I have formed a (merged) group named SOL_SOL 


Why would you merge solvent with itself?

after chosing  15 | 15 , and another merged group named Protein_DMPC 
by choosing  1 | 13...


Next, I started the NVT equilibration with:

grompp_d  -f nvt.mdp  -c em_after_solv.gro  -p 
topol_mod_lip_solv.top  -n index_after_solv.ndx  -o nvt.tpr


The nvt.mpd file is the same as the one given in the tutorial, the 
only changes I made were:


tc-grps= Protein DMPC SOL_SOL
and
comm-grps= Protein_DMPC SOL_SOL



I would think that using this weird SOL_SOL group would create 
problems related to degrees of freedom, etc.  If you have no ions, 
there is no need to merge any sort of solvent-related groups.



After this I ran

mdrun_mpi_d  -v  -deffnm nvt

When this process is finished I looked at the resulting nvt.gro file 
and found the following:


1) The 3 protein chains complex is fine, at the center of the box as 
it should be, but
2) The 2 DMPC layer are separated (splitted) leaving a large gap 
between them forming a )( shape where the top and bottom of this 
figure contain one layer of DMPC plus water 

[gmx-users] g_chi

2011-04-04 Thread simon sham
Hi,
Thanks for those who replied my previous questions on g_chi.
I just installed the latest version of gromacs 4.5.4 and could run the command. 
I still have 
a question about the command:
Again, I used the following command:
g_chi -s md.tpr -f md.xtc -omega
It generated a series of xmgrace files for each residue, but it does not give a 
residue #.

In the chi.log file, it only listed the four omega atom numbers for each 
residue...that's it.

Again thanks for your help in advance.

Simon Sham


-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

Re: [gmx-users] Splitted DMPC bilayer

2011-04-04 Thread Dr. Ramón Garduño-Juárez

Justin,

Again much obliged for your comments. They are most illustrative...

I would like to make a final note on the issue of these many e-mails...

I am sure that GROMACS is fast, but that fast?...

For the sake of knowing that we are doing the right things, this is our 
topol.top file in which we eliminated all POSRES for the Protein and 
DMPC, not so for the WATER...

-
; Include forcefield parameters
#include ./gromos53a6_lipid.ff/forcefield.itp

; Include chain topologies
#include topol_Protein_chain_A.itp
#include topol_Protein_chain_B.itp
#include topol_Protein_chain_C.itp

; Include water topology
#include ./gromos53a6_lipid.ff/spc.itp

#ifdef POSRES_WATER
; Position restraint for each water oxygen
[ position_restraints ]
;  i funct   fcxfcyfcz
   11   1000   1000   1000
#endif

; Include topology for ions
#include ./gromos53a6_lipid.ff/ions.itp

[ system ]
; Name
mod.pdb

[ molecules ]
; Compound#mols
Protein_chain_A 1
Protein_chain_B 1
Protein_chain_C 1
--

On Protein_chain_A  there are 342 atoms
On Protein_chain_B  there are 289 atoms
On Protein_chain_C  there are 715 atoms
On DMPC there are 123 molecules of 46 atoms each
On SOL there are 3205 molecules of 3 atoms each
For a total of 16619 atoms

I know that this is a medium size system for which I was expecting 
longer CPU time for a 20 ns MD run.


I know that there was no error; what I meant is that I was surprised 
by the outcome...


Maybe GROMACS is as fast as it is claimed...

Cheers,
Ramon

On 04/04/2011 05:27 p.m., Justin A. Lemkul wrote:



Dr. Ramón Garduño-Juárez wrote:

Justin,

Thank you for your comments after finishing the MD production run for 
up to 20 ns...


Since this step was over very quickly, now I have a simple question  
¿How long, in human time, should a production run last?




There is no way to answer that.  It depends on the hardware, number of 
atoms, system load, application of any number of the Gromacs 
algorithms, .mdp settings...


The production run was carried out in six processors Intel Xeon (R) 
E5405 2.00 GHz. The last few lines of the md_0_1.log are:


-
Parallel run - timing based on wallclock.

   NODE (s)   Real (s)  (%)
   Time: 180685.417 180685.417100.0
   2d02h11:25
   (Mnbf/s)   (GFlops)   (ns/day)  (hour/ns)
Performance:232.900 12.351  9.564  2.510
-

Is this correct?   In my opinion it should lasted much more longer...



Nope, Gromacs is just fast :)


Before reaching this point, this is an update of what we did...

First we eliminated the SOL_SOL group and the only special index 
group was Protein_DMPC.


Since the NVT equilibration failed, we took option # 2 of the 
Advanced Troubleshooting, for the 1st phase of Equilibration.


After this step we proceeded with the equilibration phase 2 with a 
1-ns NPT equilibration which ended fine.


Next, we proceeded with a 20 ns production run. Thus, the modified 
lines of the .mpd file found in the tutorial page were:


nsteps   =  1000  ;  2 * 1000  =  2000  ps   (20 ns)
tc-grps  =  Protein DMPC  SOL
comm-grps  =  Protein_DMPC  SOL

With this instructions the 20 ns simulation took  2d02h11:25

I believe the error comes from the line

constrains  =  all-bonds   which surely must be changed to

constrains  =  none or  hbonds



Why do you say that?  What error is occurring?  You said your 
simulations were running fine.  You most certainly should not remove 
constraints if you're sticking with a 2-fs timestep.  The system will 
be unstable without constraints.  You might be able to get away with 
hbonds, but certainly not none.


-Justin


Looking forward to your comments...

Much obliged,
Ramon



-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

Re: [gmx-users] Splitted DMPC bilayer

2011-04-04 Thread Justin A. Lemkul



Dr. Ramón Garduño-Juárez wrote:

Justin,

Again much obliged for your comments. They are most illustrative...

I would like to make a final note on the issue of these many e-mails...

I am sure that GROMACS is fast, but that fast?...



Yes.  Your results prove it.  With quality hardware, you get great performance.

For the sake of knowing that we are doing the right things, this is our 
topol.top file in which we eliminated all POSRES for the Protein and 
DMPC, not so for the WATER...


To what end, I do not know.  One generally does not find much use in restraining 
water while everything else moves, but syntactically, it is correct.



-
; Include forcefield parameters
#include ./gromos53a6_lipid.ff/forcefield.itp

; Include chain topologies
#include topol_Protein_chain_A.itp
#include topol_Protein_chain_B.itp
#include topol_Protein_chain_C.itp

; Include water topology
#include ./gromos53a6_lipid.ff/spc.itp

#ifdef POSRES_WATER
; Position restraint for each water oxygen
[ position_restraints ]
;  i funct   fcxfcyfcz
   11   1000   1000   1000
#endif

; Include topology for ions
#include ./gromos53a6_lipid.ff/ions.itp

[ system ]
; Name
mod.pdb

[ molecules ]
; Compound#mols
Protein_chain_A 1
Protein_chain_B 1
Protein_chain_C 1
--

On Protein_chain_A  there are 342 atoms
On Protein_chain_B  there are 289 atoms
On Protein_chain_C  there are 715 atoms
On DMPC there are 123 molecules of 46 atoms each
On SOL there are 3205 molecules of 3 atoms each
For a total of 16619 atoms

I know that this is a medium size system for which I was expecting 
longer CPU time for a 20 ns MD run.


I know that there was no error, which I meant is that I was surprised 
by the outcome...


May be GROMACS is as fast as it is claimed...



Indeed.

-Justin


Cheers,
Ramon

On 04/04/2011 05:27 p.m., Justin A. Lemkul wrote:



Dr. Ramón Garduño-Juárez wrote:

Justin,

Thank you for your comments after finishing the MD production run for 
up to 20 ns...


Since this step was over very quickly, I now have a simple question: 
how long, in human (wall-clock) time, should a production run last?




There is no way to answer that.  It depends on the hardware, number of 
atoms, system load, application of any number of the Gromacs 
algorithms, .mdp settings...


The production run was carried out on six Intel Xeon(R) E5405 2.00 GHz 
processors. The last few lines of md_0_1.log are:


-
Parallel run - timing based on wallclock.

               NODE (s)   Real (s)      (%)
       Time: 180685.417 180685.417    100.0
                     2d02h11:25
               (Mnbf/s)   (GFlops)   (ns/day)  (hour/ns)
Performance:    232.900     12.351      9.564      2.510
-

Is this correct?  In my opinion it should have lasted much longer...



Nope, Gromacs is just fast :)


Before reaching this point, this is an update of what we did...

First we eliminated the SOL_SOL group, so the only special index 
group was Protein_DMPC.


Since the NVT equilibration failed, we took option #2 of the 
Advanced Troubleshooting section for the first phase of equilibration.


After this step we proceeded with equilibration phase 2, a 1-ns NPT 
equilibration, which finished fine.


Next, we proceeded with a 20 ns production run. Thus, the modified 
lines of the .mdp file found in the tutorial page were:


nsteps   =  10000000  ;  2 * 10000000  =  20000 ps  (20 ns)
tc-grps  =  Protein DMPC  SOL
comm-grps  =  Protein_DMPC  SOL

With these instructions the 20 ns simulation took 2d02h11:25.

I believe the error comes from the line

constraints  =  all-bonds   which surely must be changed to

constraints  =  none  or  h-bonds



Why do you say that?  What error is occurring?  You said your 
simulations were running fine.  You most certainly should not remove 
constraints if you're sticking with a 2-fs timestep.  The system will 
be unstable without constraints.  You might be able to get away with 
h-bonds, but certainly not none.


-Justin


Looking forward to your comments...

Much obliged,
Ramon





--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




Re: [gmx-users] g_chi

2011-04-04 Thread Justin A. Lemkul



simon sham wrote:

Hi,
Thanks for those who replied my previous questions on g_chi.
I just installed the latest version of Gromacs, 4.5.4, and could run the 
command. I still have a question about it.
Again, I used the following command:
g_chi -s md.tpr -f md.xtc -omega
It generated a series of xmgrace files for each residue, but it does not 
give a residue number.


In the chi.log file, it only listed the four omega atom numbers for each 
residue...that's it.




g_chi is intended for dihedral transitions and order parameters.  If you just 
want an actual dihedral angle measurement, use g_angle.
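
(For illustration, a sketch of that route with the 4.5-era tools; the index
file and group setup here are assumptions, not something taken from Simon's run:)

mk_angndx -s md.tpr -n omega.ndx -type dihedral
(or build a group containing the four atoms of each omega dihedral with make_ndx)
g_angle -f md.xtc -n omega.ndx -type dihedral -ov omega.xvg -all

With -all, g_angle writes each dihedral in the index group as a separate column,
in the order the atoms appear in the group, so the columns can be mapped back to
residues by hand.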


-Justin


Again thanks for your help in advance.

Simon Sham




--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




Re: [gmx-users] autocorrelation functions

2011-04-04 Thread Mark Abraham

On 5/04/2011 2:55 AM, shivangi nangia wrote:

 Hello all,


I need to calculate the end-to-end vector autocorrelation function of 
my polymer chains. I could get the velocity autocorrelation function 
using the g_velacc tool.


Is there a tool available for calculating the end-to-end vector 
autocorrelation function? If not, is there an easy way to 
modify the g_velacc.c program to do other autocorrelation 
function calculations?


Use g_dist to get the distances and then g_analyze to find the 
autocorrelation.
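
(A minimal sketch of that route; the index file and group names are
illustrative assumptions, and g_dist needs an index file with two groups, e.g.
the first and the last backbone atom of each chain:)

make_ndx -f md.tpr -o ends.ndx
g_dist -s md.tpr -f md.xtc -n ends.ndx -o end2end.xvg
g_analyze -f end2end.xvg -ac end2end_acf.xvg

Note that g_dist writes the distance and its x/y/z components, so g_analyze -ac
gives the autocorrelation of those columns; strictly speaking, the end-to-end
vector autocorrelation is built from the x/y/z components rather than from |r|
alone.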


Mark


[gmx-users] Protein_thermal_Unfolding

2011-04-04 Thread satya s
Dear users,
I am new to Gromacs.
I am trying to study the thermal unfolding of a protein that has intramolecular
disulfide bonds.
Since these bonds will not break during the simulations, will I be able to study
the unfolding pathway?
Are there other ways to study this type of system?

S Satya
Gulbraga University

Re: [gmx-users] the total charge of system is not an integer

2011-04-04 Thread ahmet yıldırım
Dear Tsjerk,

Hi Ahmet,

 As suggested, it's better to break up your molecule into smaller
 charge groups. Note that charge groups don't need to have zero charge,
 nor integer charge. In your case, I'd suggest two COH groups for EDO,
 which will have zero net charge each, and for TRS I'd take the COH
 groups as separate charge groups. I also note that the COH groups,
 although chemically identical (H3NC(COH)3, right?), have different
 charges. That doesn't seem proper.

 Hope it helps,

 Tsjerk


nonrevised .itp file:
EDO  3

[ atoms ]
;   nr  type  resnr  resid  atom  cgnr   charge     mass
     1    OA      1    EDO   OAB     1   -0.111  15.9994
     2     H      1    EDO   HAE     1    0.031   1.0080
     3   CH2      1    EDO   CAA     1    0.080  14.0270
     4   CH2      1    EDO   CAC     1    0.080  14.0270
     5    OA      1    EDO   OAD     1   -0.111  15.9994
     6     H      1    EDO   HAF     1    0.031   1.0080

revised .itp file:
EDO  3

[ atoms ]
;   nr  type  resnr  resid  atom  cgnr   charge     mass
     1    OA      1    EDO   OAB     1   -0.111  15.9994
     2     H      1    EDO   HAE     1    0.031   1.0080
     3   CH2      1    EDO   CAA     1   *0.000*  14.0270
     4   CH2      1    EDO   CAC     1   *0.000*  14.0270
     5    OA      1    EDO   OAD     1   -0.111  15.9994
     6     H      1    EDO   HAF     1    0.031   1.0080

can you show me on the itp file? how do I seperate two COH groups? Please
help me
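
(For illustration only, one way Tsjerk's suggestion could look for EDO, keeping
the charges from the listing above and simply splitting the molecule into two
charge groups, one per COH half; the grouping shown here is an assumption, not a
file from this thread:)

[ atoms ]
;   nr  type  resnr  resid  atom  cgnr   charge     mass
     1    OA      1    EDO   OAB     1   -0.111  15.9994
     2     H      1    EDO   HAE     1    0.031   1.0080
     3   CH2      1    EDO   CAA     1    0.080  14.0270
     4   CH2      1    EDO   CAC     2    0.080  14.0270
     5    OA      1    EDO   OAD     2   -0.111  15.9994
     6     H      1    EDO   HAF     2    0.031   1.0080

Each group then sums to -0.111 + 0.031 + 0.080 = 0.000, as Tsjerk describes.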

On 31 March 2011 at 12:10, Tsjerk Wassenaar tsje...@gmail.com wrote:

 Hi Ahmet,

 Why would I get angry? :) Sending a reply to the list will not usually
 be taken as asking for private tutoring...

 As Mark pointed out, you need to get familiar with the format of the
 files. That's the first thing you should do if you get to the point of
 needing to use non-standard topologies. Read the manual and look at
 existing files. As for the immediate question, under the [ atoms ]
 section is a line indicating which column denotes what. You'd need to
 modify the columns 'cgnr' (charge group number) and probably 'charge'.
 For finding proper charge groups, in general it is best to draw your
 molecule, with the charges added, and then see which atoms would
 almost naturally group together.

 TRS.itp:
 ..
 [ moleculetype ]
 ; Name nrexcl
 TRS  3

 [ atoms ]
 ;   nr  type  resnr  resid  atom  cgnr   charge     mass
      1    OA      1    TRS   O1      1   -0.119  15.9994
      2     H      1    TRS   H13     1    0.032   1.0080
      3   CH2      1    TRS   C1      1    0.087  14.0270
      4  CCl4      1    TRS   C       2    0.055  12.0110
      5   CH2      1    TRS   C3      2    0.049  14.0270
      6    OA      1    TRS   O3      2   -0.205  15.9994

 Hope it helps,

 Tsjerk



 2011/3/31 ahmet yıldırım ahmedo...@gmail.com:
  Dear Tsjerk,
 
  I will ask you one thing, but please do not get angry (I know you are not
  a private tutor, but I need your help).
 
  How do I apply what you said to the files (EDO.itp and TRS.itp)? (Or can
  you suggest a tutorial?)
 
  Thanks
 
  2011/3/31 Mark Abraham mark.abra...@anu.edu.au
 
  On 31/03/2011 5:18 PM, ahmet yıldırım wrote:
 
  Dear users,
 
  Before energy minimization step , I performed the preprosessing step
 using
  grompp .
  However, there are two note that :
 
  NOTE 1 [file topol.top, line 52]:
System has non-zero total charge: -1.50e+01
 
  This is an integer (-1.50e+01 is just scientific notation for -15). See
  http://en.wikipedia.org/wiki/Scientific_notation#E_notation and
  http://www.gromacs.org/Documentation/Floating_Point_Arithmetic
 
  NOTE 2 [file topol.top]:
    The largest charge group contains 11 atoms.
    Since atoms only see each other when the centers of geometry of the charge
    groups they belong to are within the cut-off distance, too large charge
    groups can lead to serious cut-off artifacts.
    For efficiency and accuracy, charge group should consist of a few atoms.
    For all-atom force fields use: CH3, CH2, CH, NH2, NH, OH, CO2, CO, etc.
 
  See Tsjerk's email.
 
  Mark
 
 
  PS: TRS and EDO are not amino acids
 
  TRS.itp:
  ..
  [ moleculetype ]
  ; Name nrexcl
  TRS  3
 
  [ atoms ]
  ;   nr  type  resnr  resid  atom  cgnr   charge     mass
       1    OA      1    TRS   O1      1   -0.119  15.9994
       2     H      1    TRS   H13     1    0.032   1.0080
       3   CH2      1    TRS   C1      1    0.087  14.0270
       4  CCl4      1    TRS   C       2    0.055  12.0110
       5   CH2      1    TRS   C3      2    0.049  14.0270
       6    OA      1    TRS   O3      2   -0.205  15.9994
       7     H      1    TRS   H33     2    0.019   1.0080
       8    NL      1    TRS   N       2    0.206  14.0067
       9     H      1    TRS   H2      2    0.004   1.0080
      10     H      1    TRS   H3      2    0.004   1.0080
      11     H      1    TRS   H1      2    0.004   1.0080
  12   CH2 1  TRS  C2 2