[gmx-users] diffusion coefficient

2010-02-10 Thread Amit Choubey
Hi Everyone,

I have been trying to calculate the diffusion coefficient of water, aiming to reproduce the numbers published in journal papers.
I am using the SPC/E water model and the g_msd analysis tool.

g_msd -f traj.trr -n index.ndx -s npt.tpr -b 2 -e 8

I use a box of volume 6x6x6 nm^3 which contains 7161 water molecules. I
equilibrate the system for 1 ns and then run for an additional 10 ps for
analysis. Here are some of the numbers that I get:

a. With Berendsen T coupling and P coupling on, I get 4.4941 (+/- 0.2992) 1e-5 cm^2/s.
b. With Berendsen T coupling on and P coupling off, I get 3.2469 (+/- 0.1076) 1e-5 cm^2/s.
c. With Berendsen T coupling on and P coupling off for 1 ns, and then T and P coupling
both off (for the analysis part), I get 2.8085 (+/- 0.0310) 1e-5 cm^2/s.

Case c is closest to the widely accepted experimental value of 2.3 1e-5 cm^2/s,
but it is still not quite right.

Could someone explain why the values obtained in the above three cases are
so different, and maybe give some tips about the right procedure for
calculating diffusion (method and invocation of the g_msd tool)?

Thank you
Amit
-- 
gmx-users mailing list    gmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php

[gmx-users] Re: saving the protein conformation after 10ps simulation

2010-02-10 Thread bharat gupta
Hi all,

I am trying to save the conformation of my protein after a 10 ps
simulation, but I am getting the following error:


Software inconsistency error:
Not supported in write_sto_conf

Can anybody tell me how to fix this error?


-- 
Bharat
M.Sc. Bioinformatics (Final year)
Centre for Bioinformatics
Pondicherry University
Puducherry
India
Mob. +919962670525


Re: [gmx-users] diffusion coefficient

2010-02-10 Thread Florian Dommert
Hi,

 you already have your solution at hand.

On 10.02.2010, at 10:05, Amit Choubey wrote:

 Hi Everyone,
 
 I have been trying to calculate diffusion coefficient for water. I am trying 
 to reproduce the numbers published in journal papers.
 I am using SPCE water model. I use the g_msd analysis tool.
 
 g_msd -f traj.trr -n index.ndx -s npt.tpr -b 2 -e 8 
 
 I use a box of volume 6x6x6 nm^3 which has 7161 water molecules. I 
 equilibriate the system for a ns and then run for additional 10 ps for 
 analysis. Here are some of the numbers that i get 
 
 a. With Berendsen's T coupling and P coupling on i get 4.4941 (+/- 0.2992) 
 1e-5 cm^2/s
 b. With Berendsen's T coupling on and P coupling off I get 3.2469 (+/- 
 0.1076) 1e-5 cm^2/s
 c. With Berendsen's T coupling on and P coupling off for 1ns and then T,P 
 coupling both off (for analysis part) i get 2.8085 (+/- 0.0310) 1e-5 cm^2/s .
 
 c is closest to the widely accepted experimental value of 2.3 1e-5 cm^2/s but 
 its not quite right. 
 

You obtain a fairly good estimate of the diffusion coefficient when you do not
apply the Berendsen coupling scheme. The reason is that Berendsen coupling does
not generate a correct thermodynamic ensemble. Applying v-rescale or Nose-Hoover
temperature coupling in combination with Parrinello-Rahman pressure coupling
should yield more accurate results, because these schemes sample phase space in
a thermodynamically appropriate manner.
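
In a .mdp file those choices could look something like the following (the values are only illustrative, not tuned for your system):

tcoupl           = v-rescale
tc-grps          = System
tau_t            = 0.1
ref_t            = 298
pcoupl           = Parrinello-Rahman
tau_p            = 2.0
compressibility  = 4.5e-5
ref_p            = 1.0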

Cheers,

Flo


 Could someone explain to me why the values obtained in above 3 cases are 
 widely different and may be give some tips about the right procedure to 
 calculate diffusion (method and invoking the g_msd tool)? 
 
 Thank you
 Amit
 

--
Florian Dommert
Dipl.-Phys.

Institute for Computational Physics

University Stuttgart

Pfaffenwaldring 27
70569 Stuttgart

Phone: +49(0)711/685-6-3613
Fax:   +49-(0)711/685-6-3658 

EMail: domm...@icp.uni-stuttgart.de
Home: http://www.icp.uni-stuttgart.de/~icp/Florian_Dommert




Re: [gmx-users] Re: saving the protein conformation after 10ps simulation

2010-02-10 Thread Justin A. Lemkul



bharat gupta wrote:

Hi all,

I am trying to save the conformation of my protein after 10ps
simulation .. I am getting the following error :-


Software inconsistency error:
Not supported in write_sto_conf

Can anybody tell me how to fix this error ..



Not without more information, like the Gromacs version you're using and what the 
exact command was that gave this error.  Looks like you may have a version 
incompatibility (i.e., mixing versions between prep/simulation/analysis).
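
For reference, pulling a single frame out of a trajectory is normally done with trjconv, e.g. (file names illustrative, with the .tpr and trajectory written by the same Gromacs version):

trjconv -s topol.tpr -f traj.xtc -dump 10 -o conf_10ps.gro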


-Justin



--
Bharat
M.Sc. Bioinformatics (Final year)
Centre for Bioinformatics
Pondicherry University
Puducherry
India
Mob. +919962670525



--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




[gmx-users] REMD demux problem.

2010-02-10 Thread zuole

Hi all,

I ran into a problem when trying to use trjcat to concatenate the trajectory files from
a REMD run. I used 16 replicas for my simulation and used the demux.pl Perl script to
generate replica_index.xvg and replica_temp.xvg. Then I ran trjcat -f
xtc_*.xtc -demux replica_index.avg, but got the following error:

Reading frame   0 time0.000   Segmentation fault


Then, when I tried trjcat -f traj*.trr -o traj.trr -demux replica_index.xvg, I got
another error:

Fatal error:
You have specified 17 files and 16 entries in the demux table


Can anyone give me some advice on what has gone wrong? Thanks a lot.
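
A quick sanity check, using the file names above, is to compare the number of input trajectories with the width of the demux table:

ls xtc_*.xtc | wc -l                 # trajectories passed to -f
head -n 1 replica_index.xvg | wc -w  # one time column plus one entry per replica

The first number should equal the second minus one; if not, the glob is picking up an extra file.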
  

Re: [gmx-users] Topology

2010-02-10 Thread Justin A. Lemkul



tekle...@ualberta.ca wrote:

Dear Justin,

First of all thank you for your help..

I have developed the topology file for my molecule using the PRODRG
server, but the topology did not properly include the carboxylic acid
functional group (with its proton); instead, the software treats both
oxygens as identical due to resonance. Therefore I want to modify my
topology. What do I need to do?




From the PRODRG FAQ:

Q: PRODRG doesn't properly protonate my molecule.
A: This can be fixed by adding the command

ADDHYD atomname

or

DELHYD atomname

to your input drawing/PDB file.

No need to hack the topology.  Run PRODRG once to identify which atom name it 
will assign to the oxygens, then follow the instructions above.


-Justin


Inside my topology (initial)
===
 44  CH1  1  UNK  CA    13   0.143  13.0190
 45  C    1  UNK  C     13   0.372  12.0110
 46  OM   1  UNK  OXT   13  -0.757  15.9994
 47  OM   1  UNK  O     13  -0.758  15.9994


MODIFIED to the following topology

 44  CH1  1  UNK  CA    13  XXX  13.0190
 45  C    1  UNK  C     13  XXX  12.0110
 46  O    1  UNK  OXT   13  XXX  15.9994
 47  OA   1  UNK  O     13  XXX  15.9994
 48  HO   1  UNK  HAA   13  XXX  1.00800

===

XXX refers to the charge of each atom.
How can I assign a charge distribution to each united atom in the
modified topology?


Remark

I checked the .rtp file and found the following information for the
[ ASPH ] residue:

   CG    C    0.53000  2
   OD1   O   -0.38000  2
   OD2   OA  -0.54800  2
   HD2   HO   0.39800  2

But even though this entry describes the carboxylic acid functional group,
it does not include the CH1 in the same charge group. What do I need to do?


Example
   CA    CH1  XXX      2
   CG    C    0.53000  2
   OD1   O   -0.38000  2
   OD2   OA  -0.54800  2
   HD2   HO   0.39800  2

XXX is the new charge for CH1, which I do not know. Can you
help, or should I simply put 0.0?
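
(Purely as an unvalidated illustration - not a parameterization - mirroring the ASPH charges quoted above and leaving the CH1 neutral would give something like:

 44  CH1  1  UNK  CA    13   0.000   13.0190
 45  C    1  UNK  C     13   0.530   12.0110
 46  O    1  UNK  OXT   13  -0.380   15.9994
 47  OA   1  UNK  O     13  -0.548   15.9994
 48  HO   1  UNK  HAA   13   0.398   1.00800

but the actual charges should come from whoever parameterizes the molecule.)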


have a great day

Rob

***
The entire Topology is the following

;   nr  type  resnr resid  atom  cgnr   charge mass
  1   CH3   1  UNK  CAZ    1   0.000  15.0350
  2   CH2   1  UNK  CAK    1   0.000  14.0270
  3   CH2   1  UNK  CBS    2   0.000  14.0270
  4   CH2   1  UNK  CBL    2   0.000  14.0270
  5   CH2   1  UNK  CBG    3   0.000  14.0270
  6   CH2   1  UNK  CBM    3   0.000  14.0270
  7   CH1   1  UNK  CBA    3   0.000  13.0190
  8   CH2   1  UNK  CBD    3   0.000  14.0270
  9   CH2   1  UNK  CBE    3   0.000  14.0270
 10   CH2   1  UNK  CBF    4   0.000  14.0270
 11   CH2   1  UNK  CBJ    4   0.000  14.0270
 12   CH2   1  UNK  CBN    5   0.101  14.0270
 13   CH3   1  UNK  CBT    5   0.057  15.0350
 14  NR6*   1  UNK  NBU    5   0.069  14.0067
 15    CB   1  UNK  CBP    5   0.346  12.0110
 16     O   1  UNK  OCB    5  -0.573  15.9994
 17    CB   1  UNK  CAG    6   0.351  12.0110
 18     O   1  UNK  OCA    6  -0.565  15.9994
 19   CH1   1  UNK  CBI    6   0.178  13.0190
 20  CR61   1  UNK  CBO    6   0.018  13.0190
 21  CR61   1  UNK  CAV    6   0.018  13.0190
 22    CB   1  UNK  CAN    7   0.000  12.0110
 23    CB   1  UNK  CAR    7   0.000  12.0110
 24    CB   1  UNK  CAX    8   0.000  12.0110
 25    CB   1  UNK  CBC    8   0.000  12.0110
 26  CR61   1  UNK  CBH    8   0.000  13.0190
 27  CR61   1  UNK  CAW    8   0.000  13.0190
 28   CH1   1  UNK  CAQ    9   0.074  13.0190
 29    CB   1  UNK  CAM    9   0.001  12.0110
 30  CR61   1  UNK  CAU    9  -0.037  13.0190
 31  CR61   1  UNK  CAI    9  -0.038  13.0190
 32    CB   1  UNK  CAC   10   0.003  12.0110
 33    CB   1  UNK  CAE   10   0.442  12.0110
 34     O   1  UNK  OBX   10  -0.448  15.9994
 35    CB   1  UNK  CAB   10   0.003  12.0110
 36    CB   1  UNK  CAF   11   0.004  12.0110
 37    CB   1  UNK  CAJ   11   0.004  12.0110
 38  CR61   1  UNK  CAP   11  -0.008  13.0190
 39  CR61   1  UNK  CAH   12  -0.013  13.0190
 40    CB   1  UNK  CAA   12   0.003  12.0110
 41    CB   1  UNK  CAD   12   0.411  12.0110
 42     O   1  UNK  OBW   12  -0.483  15.9994
 43  NR6*   1  UNK    N   12   0.082  14.0067

44   CH1 

Re: [gmx-users] diffusion coefficient

2010-02-10 Thread Omer Markovitch
10 ps is too short a trajectory, even for such a large system (for pure
water it is considered large). I would guess that this is a typo and
you actually ran for 10 ns?
omer.

On Wed, Feb 10, 2010 at 11:05, Amit Choubey kgp.a...@gmail.com wrote:

 I use a box of volume 6x6x6 nm^3 which has 7161 water molecules. I
 equilibriate the system for a ns and then run for additional 10 ps for
 analysis. Here are some of the numbers that i get
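
For the record, the MSD is usually accumulated over several nanoseconds with multiple restart origins; an invocation along these lines (times in ps, purely illustrative) would be more typical:

g_msd -f traj.trr -s npt.tpr -n index.ndx -b 1000 -e 9000 -trestart 10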



Re: [gmx-users] REMD demux problem.

2010-02-10 Thread Justin A. Lemkul



zuole wrote:

Hi all,

I met a problem when I try to use trjcat to connect the trajectory files 
of REMD. I used 16 replicas for my simulation, and used demux.pl perl 
script to generate the replica_index.xvg and replica_temp.xvg. Then I 
used trjcat -f xtc_*.xtc -demux replica_index.avg, however got an error 
information as below:


Reading frame   0 time0.000   Segmentation fault


Then when I tried trjcat -f traj*.trr -o traj.trr -demux 
replica_index.xvg, got another error as this:


Fatal error:
You have specified 17 files and 16 entries in the demux table


Can anyone give me some advice on what have gone wrong? Thanks a lot.



If you have 17 input files, you need to have 17 demuxed output trajectories. 
Either specify 17 file names, or leave off the -o option and Gromacs will do it 
for you.
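
For example, with the input names from the original post, simply

trjcat -f xtc_*.xtc -demux replica_index.xvg

lets trjcat name one demuxed output trajectory per replica itself.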


-Justin







--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




Re: [gmx-users] compilation problems orte error

2010-02-10 Thread Jennifer Williams


Sorry for the delay in replying back. I start the job using the  
following script file:


#$ -S /bin/bash
#$ -l h_rt=47:59:00
#$ -j y
#$ -pe mpich2 8
#$ -cwd
cd /home/jwillia4/GRO/gromacs-4.0.7/JJW_003/PH_TORUN
/home/jwillia4/GRO/bin/mpirun -np 8 /home/jwillia4/GRO/bin/mdrun_mpi  
-v -s md.tpr


The strange thing is that sometimes it works and the job runs to
completion, and sometimes it crashes immediately with the orte error, so
I know that it is not the input files causing the problems. It seems
entirely random.


Could it have to do with the -pe mpich2 8 line? I was previously using the
Open MPI installed on the cluster for common use, but I have now downloaded
everything into my home directory. The script has been adapted from
the time when I didn't have my own Open MPI in my home directory.
Perhaps it needs further alteration, but I don't know what.


How would I go about checking whether MPI is running?

If you spot anything suspicious in the above commands please let me know.

Thanks

Jenny


Quoting Chandan Choudhury iitd...@gmail.com:


As Justin said, give the command line options for mdrun and also check that
your MPI environment is running. It is better to run a parallel job and check its
output.

Chandan

--
Chandan kumar Choudhury
NCL, Pune
INDIA


On Mon, Feb 8, 2010 at 8:02 PM, Justin A. Lemkul jalem...@vt.edu wrote:




Jennifer Williams wrote:



Dear All,

I am having problems compiling gromacs 4.0.7 in parallel. I am following
the Quick and Dirty Installation instructions on the gromacs webpage.
I downloaded the versions of fftw, OpenMPI and gromacs-4.0.7 listed in
these instructions.

Everything seems to compile OK and I get all the serial executables
including mdrun written to my bin directory and they seem to run fine.
However when I try to run mdrun_mpi on 6 nodes I get the following:

[vlxbig16:08666] [NO-NAME] ORTE_ERROR_LOG: Not found in file
runtime/orte_init_stage1.c at line 182
[vlxbig16:08667] [NO-NAME] ORTE_ERROR_LOG: Not found in file
runtime/orte_init_stage1.c at line 182
[vlxbig16:08700] [NO-NAME] ORTE_ERROR_LOG: Not found in file
runtime/orte_init_stage1.c at line 182
[vlxbig16:08670] [NO-NAME] ORTE_ERROR_LOG: Not found in file
runtime/orte_init_stage1.c at line 182
[vlxbig16:08681] [NO-NAME] ORTE_ERROR_LOG: Not found in file
runtime/orte_init_stage1.c at line 182
[vlxbig16:08659] [NO-NAME] ORTE_ERROR_LOG: Not found in file
runtime/orte_init_stage1.c at line 182
--
It looks like orte_init failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during orte_init; some of which are due to configuration or
environment problems.  This failure appears to be an internal failure;
here's some additional information (which may only be relevant to an
Open MPI developer):

 orte_rml_base_select failed
 -- Returned value -13 instead of ORTE_SUCCESS


Does anyone have any idea what is causing this? Computer support at my
University is not sure.



How are you launching mdrun_mpi (command line)?

-Justin



Thanks





--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin









Dr. Jennifer Williams
Institute for Materials and Processes
School of Engineering
University of Edinburgh
Sanderson Building
The King's Buildings
Mayfield Road
Edinburgh, EH9 3JL, United Kingdom
Phone: ++44 (0)131 650 4 861


--
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.




Re: [gmx-users] compilation problems orte error

2010-02-10 Thread Mark Abraham
On 02/10/10, Jennifer Williams jennifer.willi...@ed.ac.uk wrote:

 Sorry for the delay in replying back. I start the job using the following script file:

 #$ -S /bin/bash
 #$ -l h_rt=47:59:00
 #$ -j y
 #$ -pe mpich2 8
 #$ -cwd
 cd /home/jwillia4/GRO/gromacs-4.0.7/JJW_003/PH_TORUN
 /home/jwillia4/GRO/bin/mpirun -np 8 /home/jwillia4/GRO/bin/mdrun_mpi -v -s md.tpr

 The strange thing is that sometimes it works and the job runs to completion and sometimes it crashes immediately with the orte error, so I know that it is not the input files causing the problems. It seems entirely random.

That sounds like some kind of dynamic linking problem. You may be able to constrain the GROMACS configure program to link statically to your choice of MPI library with --enable-static or something - but only if static versions of the MPI libraries exist.

 Has it to do with the -pe mpich2 8 line? I was previously using Open MPI installed on the cluster for common use but now have downloaded everything into my home directory. The script has been adapted from the time when I didn't have my own OpenMPI in my home directory. Perhaps it needs further alteration but I don't know what.

Try things and see. We've no idea what your queueing flags are or should be doing, but involving two different MPI libraries is asking for trouble.

 How would I go about checking whether MPI is running?

By running a test program. Either get a Hello world program from an MPI tutorial, or perhaps something available with the library itself.

Mark
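
As a concrete version of "run a test program", and to make sure the private Open MPI is the one actually being picked up (paths as in the job script, everything else hypothetical):

export PATH=/home/jwillia4/GRO/bin:$PATH
which mpirun mpicc                            # both should resolve to /home/jwillia4/GRO/bin
mpicc -o hello hello.c                        # any MPI hello-world source
/home/jwillia4/GRO/bin/mpirun -np 8 ./hello   # expect one line per rank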

[gmx-users] Re: Minimum simulation time needed to have a completely minimized structure

2010-02-10 Thread bharat gupta
Hi all

I want to know how long I should run the energy minimization
step to minimize my modelled protein structure, and which
parameters I should look at for the minimized structure - rms, rmsf,
potential, g_energy. In g_energy, do both the total energy and the
potential have to be checked?

-- 
Bharat
M.Sc. Bioinformatics (Final year)
Centre for Bioinformatics
Pondicherry University
Puducherry
India
Mob. +919962670525


Re: [gmx-users] Re: Minimum simulation time needed to have a completely minimized structure

2010-02-10 Thread Justin A. Lemkul



bharat gupta wrote:

Hi all

I wanna know that for how long shall I run the energy minimization
step to minimize my modelled protein structure .. and what all
parameters shall I look for the minimized structure - rms , rmsf ,
potential , g_energy . In g_energy total energy and potential both
have to be checked ...



You probably won't get much in the way of RMSD or RMSF during EM; the structure 
shouldn't be changing drastically.  Convergence of EM is usually judged based on 
the magnitude of Fmax and the potential (printed by mdrun at the end of EM). 
There is no total energy term during EM.  Since there is no kinetic energy, the 
total energy and potential energy are the same.  If you run g_energy on the .edr 
file, you'll see for yourself :)
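
For example, after an EM run set up along these lines (values illustrative),

integrator  = steep
emtol       = 1000.0    ; stop when Fmax < 1000 kJ mol^-1 nm^-1
emstep      = 0.01
nsteps      = 5000

mdrun prints the final Potential Energy and Maximum force, and

echo Potential | g_energy -f em.edr -o em_potential.xvg

lets you plot how the potential decayed during the minimization.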


-Justin

--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




[gmx-users] gromacs 4.0.7 compilation problem

2010-02-10 Thread sarbani chattopadhyay
Hi,
I want to install gromacs 4.0.7 in double precision on a 64-bit Mac
computer with 8 nodes.
I got the lam 7.1.4 source code and installed it using the following
commands:
   ./configure --without-fc  ( it was giving an error for the fortran compiler)
   make
   make install

Then I got the gromacs 4.0.7 source code and installed it as:
./configure --disable-float
make
make install

After that I tried to build the MPI version of mdrun:
   make clean
 ./configure --enable-mpi --disable-nice --program-suffix=_mpi
  make mdrun
I get an error at this step, with the message:
undefined symbols:
  _lam_mpi_double, referenced from:
  _gmx_sumd_sim in libgmx_mpi.a(network.o)
  _gmx_sumd in libgmx_mpi.a(network.o)
  _gmx_sumd in libgmx_mpi.a(network.o)
  _wallcycle_sum in libmd_mpi.a(gmx_wallcycle.o)
  _lam_mpi_byte, referenced from:
  _exchange_rvecs in repl_ex.o
  _replica_exchange in repl_ex.o
  _replica_exchange in repl_ex.o
  _replica_exchange in repl_ex.o
  _finish_run in libmd_mpi.a(sim_util.o)
  _dd_collect_vec in libmd_mpi.a(domdec.o)
  _dd_collect_vec in libmd_mpi.a(domdec.o)
  _set_dd_cell_sizes in libmd_mpi.a(domdec.o)
  _dd_distribute_vec in libmd_mpi.a(domdec.o)
  _dd_distribute_vec in libmd_mpi.a(domdec.o)
  _dd_partition_system in libmd_mpi.a(domdec.o)
  _partdec_init_local_state in libmd_mpi.a(partdec.o)
  _partdec_init_local_state in libmd_mpi.a(partdec.o)
  _gmx_rx in libmd_mpi.a(partdec.o)
  _gmx_tx in libmd_mpi.a(partdec.o)
  _gmx_bcast_sim in libgmx_mpi.a(network.o)
  _gmx_bcast in libgmx_mpi.a(network.o)
  _gmx_pme_do in libmd_mpi.a(pme.o)
  _gmx_pme_do in libmd_mpi.a(pme.o)
  _gmx_pme_do in libmd_mpi.a(pme.o)
  _gmx_pme_do in libmd_mpi.a(pme.o)
  _gmx_pme_do in libmd_mpi.a(pme.o)
  _gmx_pme_do in libmd_mpi.a(pme.o)
  _gmx_pme_do in libmd_mpi.a(pme.o)
  _gmx_pme_do in libmd_mpi.a(pme.o)
  _gmx_pme_do in libmd_mpi.a(pme.o)
  _gmx_pme_do in libmd_mpi.a(pme.o)
  _gmx_pme_do in libmd_mpi.a(pme.o)
  _gmx_pme_do in libmd_mpi.a(pme.o)
  _write_traj in libmd_mpi.a(stat.o)
  _write_traj in libmd_mpi.a(stat.o)
  _gmx_pme_receive_f in libmd_mpi.a(pme_pp.o)
  _gmx_pme_send_q_x in libmd_mpi.a(pme_pp.o)
  _gmx_pme_send_q_x in libmd_mpi.a(pme_pp.o)
  _gmx_pme_send_q_x in libmd_mpi.a(pme_pp.o)
  _gmx_pme_send_q_x in libmd_mpi.a(pme_pp.o)
  _gmx_pme_send_force_vir_ener in libmd_mpi.a(pme_pp.o)
  _gmx_pme_send_force_vir_ener in libmd_mpi.a(pme_pp.o)
  _gmx_pme_recv_q_x in libmd_mpi.a(pme_pp.o)
  _gmx_pme_recv_q_x in libmd_mpi.a(pme_pp.o)
  _gmx_pme_recv_q_x in libmd_mpi.a(pme_pp.o)
  _gmx_pme_recv_q_x in libmd_mpi.a(pme_pp.o)
  _dd_gatherv in libmd_mpi.a(domdec_network.o)
  _dd_scatterv in libmd_mpi.a(domdec_network.o)
  _dd_gather in libmd_mpi.a(domdec_network.o)
  _dd_scatter in libmd_mpi.a(domdec_network.o)
  _dd_bcastc in libmd_mpi.a(domdec_network.o)
  _dd_bcast in libmd_mpi.a(domdec_network.o)
  _dd_sendrecv2_rvec in libmd_mpi.a(domdec_network.o)
  _dd_sendrecv2_rvec in libmd_mpi.a(domdec_network.o)
  _dd_sendrecv2_rvec in libmd_mpi.a(domdec_network.o)
  _dd_sendrecv2_rvec in libmd_mpi.a(domdec_network.o)
  _dd_sendrecv2_rvec in libmd_mpi.a(domdec_network.o)
  _dd_sendrecv_rvec in libmd_mpi.a(domdec_network.o)
  _dd_sendrecv_rvec in libmd_mpi.a(domdec_network.o)
  _dd_sendrecv_rvec in libmd_mpi.a(domdec_network.o)
  _dd_sendrecv_int in libmd_mpi.a(domdec_network.o)
  _dd_sendrecv_int in libmd_mpi.a(domdec_network.o)
  _dd_sendrecv_int in libmd_mpi.a(domdec_network.o)
  _lam_mpi_prod, referenced from:
  _gprod in do_gct.o
  _do_coupling in do_gct.o
  _do_coupling in do_gct.o
  _do_coupling in do_gct.o
  _lam_mpi_float, referenced from:
  _gprod in do_gct.o
  _do_coupling in do_gct.o
  _do_coupling in do_gct.o
  _do_coupling in do_gct.o
  _gmx_tx_rx_real in libmd_mpi.a(partdec.o)
  _gmx_sumf_sim in libgmx_mpi.a(network.o)
  _gmx_sumf in libgmx_mpi.a(network.o)
  _gmx_sumf in libgmx_mpi.a(network.o)
  _gmx_sum_qgrid_dd in libmd_mpi.a(pme.o)
  _gmx_sum_qgrid_dd in libmd_mpi.a(pme.o)
  _gmx_sum_qgrid_dd in libmd_mpi.a(pme.o)
  _gmx_sum_qgrid_dd in libmd_mpi.a(pme.o)
  _gmx_sum_qgrid_dd in libmd_mpi.a(pme.o)
  _gmx_sum_qgrid_dd in libmd_mpi.a(pme.o)
  _pmeredist in libmd_mpi.a(pme.o)
  _gmx_pme_init in libmd_mpi.a(pme.o)
  _gmx_sum_qgrid in libmd_mpi.a(pme.o)
  _gmx_sum_qgrid in libmd_mpi.a(pme.o)
  _gmx_parallel_transpose_xy in libmd_mpi.a(gmx_parallel_3dfft.o)
  _gmx_parallel_transpose_xy in libmd_mpi.a(gmx_parallel_3dfft.o)
  _lam_mpi_int, referenced from:
  _make_dd_communicators in libmd_mpi.a(domdec.o)
  _make_dd_communicators in libmd_mpi.a(domdec.o)
  _make_dd_communicators in libmd_mpi.a(domdec.o)
  _gmx_sumi_sim 

Re: [gmx-users] Re: Minimum simulation time needed to have a completely minimized structure

2010-02-10 Thread bharat gupta
So how shall I proceed now? Can you guide me?


Re: [gmx-users] segmentation fault with grompp

2010-02-10 Thread Justin A. Lemkul


I just used this topology in conjunction with popc128a.pdb (with some naming 
adjustments to match the topologies), and everything worked fine.  I am using 
version 4.0.5, as well.


Are you running grompp on a local workstation, or on a remote filesystem?  I 
have noticed sporadic, unpredictable seg faults in grompp which I presume are 
due to NFS blips on our cluster.


Have you recompiled without --enable-mpi, as I suggested before?  I don't know 
if that's the problem or not; do other systems (proteins in water, or something 
else reasonably simple) result in the same problem?
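
A minimal cross-check along those lines, with generic file names: prepare any small protein-in-water system the usual way and run

grompp -f em.mdp -c conf.gro -p topol.top -o test.tpr

on a local disk rather than over NFS; if that never segfaults while the membrane system does, the inputs or the filesystem are more likely suspects than the grompp build itself.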


-Justin

Gard Nelson wrote:

Ok, here's my topology file:

; Include forcefield parameters
#include ffgmx.itp
#include lipid.itp

; Include Lipid Topologies
#include popc.itp

; Include water topology
#include spc.itp

#ifdef POSRES_WATER
; Position restraint for each water oxygen
[ position_restraints ]
;  i funct   fcxfcyfcz
   11   1000   1000   1000
#endif

; Include generic topology for ions
#include ions.itp

[ system ]
; Name
Berger Membrane in water

[ molecules ]
; Compound#mols
POP   128
SOL  2460

and here's my ff_dum.itp:

; These constraints are used for dummy constructions as generated by 
pdb2gmx.
; Values depend on the details of the forcefield, vis. bondlengths and 
angles

; These parameters are designed to be used with the GROMACS forcefields
; ffgmx and ffgmx2 and with the GROMOS96 forcefields G43a1, G43a2 and G43b1.

; Constraints for the rigid NH3/CH3 groups depend on the hydrogen mass,
; since an increased hydrogen mass translates into increased momentum of
; inertia which translates into a larger distance between the dummy masses.
#ifdef HEAVY_H
; now the constraints for the rigid NH3 groups
#define DC_MNC1 0.175695
#define DC_MNC2 0.188288
#define DC_MNMN 0.158884
; now the constraints for the rigid CH3 groups
#define DC_MCN  0.198911
#define DC_MCS  0.226838
#define DC_MCC  0.204247
#define DC_MCNR 0.199798
#define DC_MCMC 0.184320
#else
; now the constraints for the rigid NH3 groups
#define DC_MNC1 0.144494
#define DC_MNC2 0.158002
#define DC_MNMN 0.079442
; now the constraints for the rigid CH3 groups
#define DC_MCN  0.161051
#define DC_MCS  0.190961
#define DC_MCC  0.166809
#define DC_MCNR 0.162009
#define DC_MCMC 0.092160
#endif
; and the angle-constraints for OH and SH groups in proteins:
#define DC_CS  0.23721
#define DC_CO  0.19849
#define DC_PO  0.21603

Thanks for your help!
Gard



--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




[gmx-users] how to calculate binding free energy and electrostatics potential for ligand-protein complex

2010-02-10 Thread pawan gupta
hello

How can we calculate the binding free energy and the electrostatic potential for a
ligand-protein complex?

Which parameters are required in the mdrun input file for that?

I tried the online manual but did not get the output.

It would be appreciated if anybody has an idea.


Thanks in advance


Regards
Pawan Gupta

[gmx-users] pairs interactions

2010-02-10 Thread XAvier Periole


Dears,

One question about pair interactions, I mean the ones defined in the
topology files under [ pairs ].

They interact with plain electrostatics, meaning without a shift or switch
function applied. If it were possible to change this, it would be
pretty nice, unless there is a specific use for the current behaviour ...

The question: do the pairs' electrostatic interactions include an eventual
epsilon?

Thanks,
XAvier.


[gmx-users] Fatal error: Number of grid cells is zero. Probably the system and box collapsed.

2010-02-10 Thread Lum Nforbi
Hello all,

After minimizing the energy of my system of 200 particles in a box of
dimension 3.5 nm to an acceptable value, I proceeded to mdrun but
got the error message:

Source code file: nsgrid.c, line: 348. Fatal error: Number of grid cells is
zero. Probably the system and box collapsed.

I was wondering what the cause could be. Below are my energy minimization
parameter file, the results of the minimization, and the mdrun parameter
file.

oxymin.mdp file

title= NPT simulation of a Lennard-Jones Fluid
cpp  = /lib/cpp
include  = -I../top
define   =
constraints  = none
integrator   = steep
nsteps   = 5000
emtol= 1000
emstep   = 0.10
nstlist  = 10
rlist= 0.9
ns_type  = grid
coulombtype  = PME
rcoulomb = 0.9
vdwtype  = cut-off
rvdw = 0.9
fourierspacing   = 0.12
fourier_nx   = 0
fourier_ny   = 0
fourier_nz   = 0
pme_order= 4
ewald_rtol   = 1e-05
optimize_fft = yes

Step=  673, Dmax= 6.9e-03 nm, Epot=  6.11020e+05 Fmax= 7.64685e+03, atom= 19
Step=  675, Dmax= 4.1e-03 nm, Epot=  6.10956e+05 Fmax= 1.71624e+03, atom= 53
Step=  677, Dmax= 2.5e-03 nm, Epot=  6.10933e+05 Fmax= 3.60118e+03, atom= 30
Step=  678, Dmax= 3.0e-03 nm, Epot=  6.10927e+05 Fmax= 3.03499e+03, atom= 19
Step=  679, Dmax= 3.6e-03 nm, Epot=  6.10915e+05 Fmax= 4.80541e+03, atom= 30
Step=  680, Dmax= 4.3e-03 nm, Epot=  6.10913e+05 Fmax= 4.79865e+03, atom= 19
Step=  681, Dmax= 5.2e-03 nm, Epot=  6.10910e+05 Fmax= 6.50875e+03, atom= 30
Step=  683, Dmax= 3.1e-03 nm, Epot=  6.10857e+05 Fmax= 9.58182e+02, atom=
160

writing lowest energy coordinates.

Back Off! I just backed up oxymin.gro to ./#oxymin.gro.6#

Steepest Descents converged to Fmax  1000 in 684 steps
Potential Energy  =  6.1085662e+05
Maximum force =  9.5818250e+02 on atom 160
Norm of force =  2.9207101e+02

oxymdrun.mdp file

title= NPT simulation of a LJ FLUID
cpp  = /lib/cpp
include  = -I../top
define   =
integrator       = md         ; a leap-frog algorithm for integrating Newton's equations of motion
dt               = 0.002      ; time-step in ps
nsteps           = 50         ; total number of steps; total time (1 ns)

nstcomm          = 1          ; frequency for com removal
nstxout          = 1000       ; freq. x_out
nstvout          = 1000       ; freq. v_out
nstfout          = 0          ; freq. f_out
nstlog           = 500        ; energies to log file
nstenergy        = 500        ; energies to energy file

nstlist          = 10         ; frequency to update neighbour list
ns_type          = grid       ; neighbour searching type
rlist            = 0.9        ; cut-off distance for the short range neighbour list

coulombtype      = PME        ; particle-mesh-ewald electrostatics
rcoulomb         = 0.9        ; distance for the coulomb cut-off
vdw-type         = Cut-off    ; van der Waals interactions
rvdw             = 0.9        ; distance for the LJ or Buckingham cut-off

fourierspacing   = 0.12       ; max. grid spacing for the FFT grid for PME
fourier_nx       = 0          ; highest magnitude in reciprocal space when using Ewald
fourier_ny       = 0          ; highest magnitude in reciprocal space when using Ewald
fourier_nz       = 0          ; highest magnitude in reciprocal space when using Ewald
pme_order        = 4          ; cubic interpolation order for PME
ewald_rtol       = 1e-5       ; relative strength of the Ewald-shifted direct potential
optimize_fft     = yes        ; calculate optimal FFT plan for the grid at start up
DispCorr         = no         ;

Tcoupl           = nose-hoover        ; temp. coupling with vel. rescaling with a stochastic term
tau_t            = 0.5                ; time constant for coupling
tc-grps          = OXY                ; groups to couple separately to temp. bath
ref_t            = 80                 ; ref. temp. for coupling

Pcoupl           = parrinello-rahman  ; exponential relaxation pressure coupling (box is scaled every timestep)
Pcoupltype       = isotropic          ; box expands or contracts evenly in all directions (xyz) to maintain proper pressure
tau_p            = 0.9                ; time constant for coupling (ps)
compressibility  = 4.5e-5             ; compressibility of solvent used in simulation
ref_p            = 1.0                ; ref. pressure for coupling (bar)

gen_vel          = yes                ; generate velocities according to a Maxwell distr. at gen_temp
gen_temp

Re: [gmx-users] pairs interactions

2010-02-10 Thread XAvier Periole


Well ... the answer is yes: pair interactions are scaled by epsilon
if an epsilon different from 1.0 is defined in the mdp file ...
but that is easier to fix than the shift/switch thingy ...

Another point (more scary) is that setting fudge LJ = 0.0 does not turn
off the pairs' LJ interactions; fudge QQ = 0.0 does for the electrostatics!

this is using the standard gmx-4.0.7 ...
---
some numbers for the purists:

epsilon = 2.5; fudge LJ = 1.0; fudge QQ = 0.5

           Step           Time         Lambda
              0            0.0            0.0

   Energies (kJ/mol)
           Bond    Tab. Angles  Improper Dih.      Tab. Dih.          LJ-14
    1.56855e+02    3.84067e+01    1.68183e+01   -1.36500e+03    2.26253e+01
     Coulomb-14        LJ (SR)   Coulomb (SR)      Potential    Kinetic En.
    7.92958e+01   -1.41790e+05   -2.97492e+02   -1.43139e+05    2.07062e+04
   Total Energy    Temperature  Pressure (bar)
   -1.22432e+05    3.07112e+02   -2.18939e+01


epsilon = 25; fudge LJ = 1.0; fudge QQ = 0.5

   Energies (kJ/mol)
           Bond    Tab. Angles  Improper Dih.      Tab. Dih.          LJ-14
    1.56855e+02    3.84067e+01    1.68183e+01   -1.36500e+03    2.26253e+01
     Coulomb-14        LJ (SR)   Coulomb (SR)      Potential    Kinetic En.
    7.92958e+00   -1.41790e+05   -2.97493e+01   -1.42942e+05    2.07033e+04
   Total Energy    Temperature  Pressure (bar)
   -1.22239e+05    3.07069e+02   -2.01929e+01


epsilon = 25; fudge LJ = 0.0; fudge QQ = 0.5

   Energies (kJ/mol)
           Bond    Tab. Angles  Improper Dih.      Tab. Dih.          LJ-14
    1.56855e+02    3.84067e+01    1.68183e+01   -1.36500e+03    2.26253e+01
     Coulomb-14        LJ (SR)   Coulomb (SR)      Potential    Kinetic En.
    7.92958e+00   -1.41790e+05   -2.97493e+01   -1.42942e+05    2.07033e+04
   Total Energy    Temperature  Pressure (bar)
   -1.22239e+05    3.07069e+02   -2.01929e+01


epsilon = 25; fudge LJ = 0.0; fudge QQ = 0.0

   Energies (kJ/mol)
           Bond    Tab. Angles  Improper Dih.      Tab. Dih.          LJ-14
    1.56855e+02    3.84067e+01    1.68183e+01   -1.36500e+03    2.26253e+01
     Coulomb-14        LJ (SR)   Coulomb (SR)      Potential    Kinetic En.
        0.0e+00   -1.41790e+05   -2.97493e+01   -1.42950e+05    2.07033e+04
   Total Energy    Temperature  Pressure (bar)
   -1.22247e+05    3.07069e+02   -2.02610e+01


On Feb 10, 2010, at 5:59 PM, XAvier Periole wrote:



Dears,

One questions about pair interaction. I mean the one defined in the
topology files under the [ pairs ].

They interaction with plain electrostatic, meaning without shift,  
switch

function applied to it. If it was possible to change this, it would be
pretty nice, unless there are specific use for this ...

The question: Does the pairs electrostatic interactions include an  
eventual

epsilon ?

Thanks,
XAvier.




Re: [gmx-users] Load balancing between PME and PP on more than 12 processors

2010-02-10 Thread XAvier Periole


Hi,

Have a look at g_tunepme on the gromacs web site or on google. It does
this fine-tuning for you, and it is pretty good.
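
For the cut-off/grid question quoted below: the parameters involved are normally rcoulomb and fourierspacing (with rlist kept consistent), and scaling them together might look like this (numbers purely illustrative):

rlist            = 1.0     ; was 0.9
rcoulomb         = 1.0     ; was 0.9
fourierspacing   = 0.133   ; 0.12 scaled by the same ~1.11 factor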

On Feb 10, 2010, at 21:09, Warren Gallin wgal...@ualberta.ca wrote:

I have a question about the procedure for running a parallel  
simulation on more than 12 processors using GROMACS 4.0.7.


I understand that partitioning the PME calculations and PP  
calculations improves performance, and that there is an initial  
automated guess at deciding how many processors to devote to PME  
vs. PP calculations.


When I ran a simulation of a system on 16 processors, GROMACS
automatically devoted 6 processors to PME calculations and 10 to PP
calculations. A 1 ns simulation using 2 fs steps took only
marginally less time (5 h 12 m) than running the same simulation on 8
processors (6 h 45 m) without the separation of PME and PP
calculations, and the following note was in the log file:



NOTE: 35.7 % performance was lost because the PME nodes
 had more work to do than the PP nodes.
 You might want to increase the number of PME nodes
 or increase the cut-off and the grid spacing.


I reran the same simulation, this time including -npme 8 in the  
mdrun call, and the simulation completed in 4 h 19 min, with the  
following note in the log file:


NOTE: 16.8 % performance was lost because the PME nodes
 had more work to do than the PP nodes.
 You might want to increase the number of PME nodes
 or increase the cut-off and the grid spacing.


So I conclude that I need to increase the cut-off and grid spacing,
since this is recommended in the manual and in the paper describing
the GROMACS 4 algorithm changes.


Unfortunately
a) I am unclear on which parameters in the .mdp file represent the
cut-off and grid spacing,


and

b) When the manual says "For changing the electrostatics settings it
is useful to know the accuracy of the electrostatics remains nearly
constant when the Coulomb cut-off and the PME grid spacing are
scaled by the same factor",
does this mean that the cut-off and grid-spacing parameters need to be
changed by the same proportion?


I hope this is a sufficiently specific question - if not, let me
know what I need to be clearer on.


Thanks,

Warren Gallin



[gmx-users] g_cluster group for output

2010-02-10 Thread Itamar Kass
Hi all,

I would really appreciate it if someone could help me with the group for
output option of g_cluster.

When I use g_cluster, it asks me to select two groups: a group for fit
and RMSD calculation, and a group for output:

Select group for least squares fit and RMSD calculation:
Group 0 (  System) has 205722 elements
Group 1 ( Protein) has 10337 elements
Group 2 (   Protein-H) has  8036 elements
Group 3 ( C-alpha) has  1005 elements

Select group for output:
Group 0 (  System) has 205722 elements
Group 1 ( Protein) has 10337 elements
Group 2 (   Protein-H) has  8036 elements
Group 3 ( C-alpha) has  1005 elements


Now, I have always assumed that g_cluster does the fit and calculates the
RMSD matrix based on the trajectory of the first group, and that the second
group just controls which atoms are written to the output file.
Recently, however, I did some tests, and it seems that the group chosen
for output affects not only the atoms written to the output file,
but also the clustering results.

 Can anyone confirm this?
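
One way to pin this down would be to run the clustering twice, changing only the output group, and compare the cluster logs - for example (file names and cutoff purely illustrative):

g_cluster -f traj.xtc -s topol.tpr -method gromos -cutoff 0.2 -g clust_prot.log   (fit on group 3, output group 1)
g_cluster -f traj.xtc -s topol.tpr -method gromos -cutoff 0.2 -g clust_ca.log    (fit on group 3, output group 3)
diff clust_prot.log clust_ca.log

Identical cluster memberships would mean the output group really is cosmetic.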

Cheers,
Itamar

-- 


In theory, there is no difference between theory and practice. But,
in practice, there is. - Jan L.A. van de Snepscheut

===
| Itamar Kass, Ph.D.
| Postdoctoral Research Fellow
|
| Department of Biochemistry and Molecular Biology
| Building 77 Clayton Campus
| Wellington Road
| Monash University,
| Victoria 3800
| Australia
|
| Tel: +61 3 9902 9376
| Fax: +61 3 9902 9500
| E-mail: itamar.k...@med.monash.edu.au



[gmx-users] concaternating remd trajectories using trjcat demux

2010-02-10 Thread Segun Jung
Dear gromacs users,

I am trying to collect the trajectories corresponding to each temperature by
using trjcat with -demux.

There were similar issues posted earlier, but I do not see a solution to
the problem I am facing:

I have 64 replicas simulated using NAMD in dcd trajectory format and saved
them in gromacs format (.trr) using VMD.

I inspected the gromacs-format trajectories by eye in VMD and they looked fine.
However, when I tried trjcat -f g*.trr -demux replica_index.xvg,

the output looked weird. So I tested a small set of the trajectories (using
only replicas 0 to 8 and the first two frames) and

noticed that the output does not match the replica_index.xvg file.


 trjcat -f *g*.trr -demux replica_index.xvg

replica_index.xvg

0 0 1 2 3 4 5 7 6

 2 1 0 2 3 5 4 6 7


I am using Ubuntu (32-bit) and the gromacs version is 4.0.5.

0_trajout.xtc should have the 1st frame from replica 0 and the 2nd frame from
replica 1, but both frames in 0_trajout.xtc are from replica 0.

It seems the index file does not cooperate properly with trjcat
-demux. Does anyone have a clue about this?


 Many thanks,

 Segun

Re: [gmx-users] concaternating remd trajectories using trjcat demux

2010-02-10 Thread Justin A. Lemkul



Segun Jung wrote:

Dear gromacs users,

I am trying to collect trajectories corresponding to each temperature by 
using trjcat with demux.


There were similar issues posted earlier, but I do not see the solution 
on the problem I am facing following:




Like what?  It will save time if you can post links to these similar issues to 
avoid posting non-solutions that have already been ruled out.


I have 64 replicas simulated using namd  in dcd trajectory format and 
saved them in gromacs format (.trr) using vmd.


I inspected the gromacs format trajectories by eyes in vmd and looked 
fine. However, when I tried trjcat -f g*.trr -demux replica_index.xvg,


the output looked weird. So I tested a small set of the trajectories 
(using only replicas 0 to 8 and first two frames) and


noticed that the output does not match to the replica_index.xvg file.



If you ran your simulations with NAMD, how did you generate this file?  In 
Gromacs, one would run demux.pl on the md.log file.  I presume NAMD prints 
different output.




trjcat -f *g*.trr -demux replica_index.xvg

replica_index.xvg

0 0 1 2 3 4 5 7 6

2 1 0 2 3 5 4 6 7



If you analyzed replicas 0 to 8 (inclusive) then you should have an 8 somewhere, 
right?




I am using Ubuntu (32bit) and the gromas version is 4.0.5.

0_trajout.xtc should have the 1st frame from replica 0 and 2nd frame 
from replica 1, but both frames for 0_trajout.xtc are from the replica 0.




This might go along with my comment above.  If there are nine replicas (0 to 8), 
then there may be some mis-translation of the .xvg file.


It seems the index file does not cooperate properly with the trjcat and 
-demux. Does anyone have clue about this?




I know I can attest to demultiplexing working as advertised, so I assume intact 
trajectories with a correct index file should work properly.  You have a few 
variables to deal with: .dcd-.trr translation, however you generated the .xvg 
file, and if you've even told us the right number of replicas, among perhaps 
others.


Also, what does gmxcheck tell you about each of the .trr files?  Do they contain 
what you would expect them to?
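
For example,

gmxcheck -f g0.trr

(name taken from the glob above) reports the number of frames, atoms and the time range, which should match what the NAMD runs actually produced.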


-Justin



Many thanks,

Segun



--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




[gmx-users] simulations in parallel using the pull code

2010-02-10 Thread Sumanth Jamadagni

Dear Gromacs community, 

I modified the pull code in GROMACS to apply forces on atoms that are in
a certain region of space. It compiles error-free and runs fine, giving me
the expected results. I then recompiled with MPI support. The cluster I am
using has 8 cores/processors per node. When I use 1, 2 or 4 nodes,
I get identical results, but the results are very different when
I run in parallel on all 8 cores.

1. I am not sure how the forces I apply in the pull code are parallelized.
2. Is this easy for me to edit in the source code, or is it best if I use
only up to 4 processors for my job?
3. Will the number of processors that I am able to use change depending
on the system size?

I currently have one protein and about 7500 water molecules, giving a
total of about 25000 atoms. I am using SHAKE with constraints on h-bonds.

Thanks for any help!
Sumanth 


Sumanth N Jamadagni
Graduate Student 
Isermann Dept of Chemical and Biological Engg
Rensselaer Polytechnic Institute

jam...@rpi.edu
(Cell)518-598-2786

http://boyle.che.rpi.edu/~sumanth






[gmx-users] follow up: simulations in parallel using the pull code

2010-02-10 Thread Sumanth Jamadagni

As a follow-up to my post (pasted below), I am providing some more details.

1. I am applying forces on heavy atoms (water oxygens and protein
non-hydrogens).
2. I tried switching off SHAKE. Results improve marginally, but the
8-processor data are still wrong.

3. There is no pressure coupling. I have left a vapor layer of a few
nanometers to allow for any density changes.

I hope this further information helps in debugging. 

Thanks
Sumanth Jamadagni

Dear Gromacs community, 

 I modified the pull code in GROMACS to apply forces on atoms that are in 
a certain region of 
space.  It compiles error free and runs fine to give me the expected
results. I then recompiled with 
mpi support. The cluster I am using has 8 cores/processors per node. When 
I use 1, 2 or 4 nodes 
to run, I get identical results, but the results are very different when
I run parallel on all 8 cores. 

1. I am not sure how the forces I apply in the pull code are parallelized. 
2. Is this easy for me to edit in the source code or is it best if I use  
only upto 4 processors for my 
job? 
3. Will the number of processors that I am able to use change depending
on the system size? 

I currently have on protein and about 7500 water molecules to give a
total of about 25000 atoms. I am using SHAKE with constraints on h-bonds. 

Thanks for any help!
Sumanth 


Sumanth N Jamadagni
Graduate Student 
Isermann Dept of Chemical and Biological Engg
Rensselaer Polytechnic Institute

jam...@rpi.edu
(Cell)518-598-2786

http://boyle.che.rpi.edu/~sumanth




Re: [gmx-users] A question about dihedral angles

2010-02-10 Thread Mark Abraham

On 11/02/10 06:52, Amir Marcovitz wrote:

Hi all,
I think my question is kind of trivial, but I'll ask it anyway:
suppose you have atoms arranged in the XY plane in a square
lattice with a spacing of 1. You bond them, add angles etc.,
and now you want to add proper dihedral angles using function 1.
What I know is that the dihedral angle 'phi' for a quartet of atoms
i,j,k,l is the angle between the two planes i,j,k and j,k,l (I think
this is also written in the manual).
Yet I experience some problems with a system similar to the one I
described above - i.e., the planar geometry of the atoms gets twisted
until it blows up, which makes me confused and think that I may have made
some mistakes in defining these angles in the [ dihedrals ] section of
the topology file.


I think this is all covered in chapter 4 of the manual.

Mark


It could really be useful if someone could tell me the value of the
dihedral phi for the following quartets (given the x,y coordinates):
1)  i (0,0),  j (0,1),  k (1,1),  l (1,2)
2)  i (0,0),  j (1,0),  k (2,0),  l (3,0)
3)  i (0,0),  j (1,0),  k (2,0),  l (2,1)
4)  i (0,0),  j (1,0),  k (1,1),  l (0,1)
I think that 1 is 180 degrees and 2, 3, 4 are 0 degrees - am I right? Is
there a difference between 0 and 180?
(I used a multiplicity of 2)
i.e., it looks like

[ dihedrals ]

; ai aj ak al funct phi cp mult

7 1 2 31 0.00e+00 3.347200e+01 2.00e+00

14 8 9 151 1.80e+02 3.347200e+01 2.00e+00

  and so on..
sorry for the bizzare question..


--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

Can't post? Read http://www.gromacs.org/mailing_lists/users.php


Re: [gmx-users] gromacs 4.0.7 compilation problem

2010-02-10 Thread Mark Abraham

On 11/02/10 01:55, sarbani chattopadhyay wrote:

Hi ,
I want to install gromacs 4.0.7 in double precision in a 64 bit Mac
computer with 8
nodes.
I got the lam7.1.4 source code files and installed them using the
following commands
./configure --without-fc ( it was giving an error for the fortran compiler)
make
make install



then I get the gromacs 4.0.7 source code files and installed it as
./configure --disable-float
make
make install

After that I try get the mpi  version for mdrun
make clean
./configure --enable-mpi --disable-nice --program-suffix=_mpi
make mdrun
I GET ERROR IN THIS STEP , With error message
undefined symbols:
_lam_mpi_double, referenced from:


Apparently the linker can find some MPI libraries during configure, but 
can't find the right ones during compilations.


I suggest checking for and removing other MPI libraries, or using 
OpenMPI rather than the deprecated LAM, and reading their documentation 
for how to install correctly on your OS. Any way, this is not a problem 
specific to GROMACS.


Mark


_gmx_sumd_sim in libgmx_mpi.a(network.o)
_gmx_sumd in libgmx_mpi.a(network.o)
_gmx_sumd in libgmx_mpi.a(network.o)
_wallcycle_sum in libmd_mpi.a(gmx_wallcycle.o)
_lam_mpi_byte, referenced from:
_exchange_rvecs in repl_ex.o
_replica_exchange in repl_ex.o
_replica_exchange in repl_ex.o
_replica_exchange in repl_ex.o
_finish_run in libmd_mpi.a(sim_util.o)
_dd_collect_vec in libmd_mpi.a(domdec.o)
_dd_collect_vec in libmd_mpi.a(domdec.o)
_set_dd_cell_sizes in libmd_mpi.a(domdec.o)
_dd_distribute_vec in libmd_mpi.a(domdec.o)
_dd_distribute_vec in libmd_mpi.a(domdec.o)
_dd_partition_system in libmd_mpi.a(domdec.o)
_partdec_init_local_state in libmd_mpi.a(partdec.o)
_partdec_init_local_state in libmd_mpi.a(partdec.o)
_gmx_rx in libmd_mpi.a(partdec.o)
_gmx_tx in libmd_mpi.a(partdec.o)
_gmx_bcast_sim in libgmx_mpi.a(network.o)
_gmx_bcast in libgmx_mpi.a(network.o)
_gmx_pme_do in libmd_mpi.a(pme.o)
_gmx_pme_do in libmd_mpi.a(pme.o)
_gmx_pme_do in libmd_mpi.a(pme.o)
_gmx_pme_do in libmd_mpi.a(pme.o)
_gmx_pme_do in libmd_mpi.a(pme.o)
_gmx_pme_do in libmd_mpi.a(pme.o)
_gmx_pme_do in libmd_mpi.a(pme.o)
_gmx_pme_do in libmd_mpi.a(pme.o)
_gmx_pme_do in libmd_mpi.a(pme.o)
_gmx_pme_do in libmd_mpi.a(pme.o)
_gmx_pme_do in libmd_mpi.a(pme.o)
_gmx_pme_do in libmd_mpi.a(pme.o)
_write_traj in libmd_mpi.a(stat.o)
_write_traj in libmd_mpi.a(stat.o)
_gmx_pme_receive_f in libmd_mpi.a(pme_pp.o)
_gmx_pme_send_q_x in libmd_mpi.a(pme_pp.o)
_gmx_pme_send_q_x in libmd_mpi.a(pme_pp.o)
_gmx_pme_send_q_x in libmd_mpi.a(pme_pp.o)
_gmx_pme_send_q_x in libmd_mpi.a(pme_pp.o)
_gmx_pme_send_force_vir_ener in libmd_mpi.a(pme_pp.o)
_gmx_pme_send_force_vir_ener in libmd_mpi.a(pme_pp.o)
_gmx_pme_recv_q_x in libmd_mpi.a(pme_pp.o)
_gmx_pme_recv_q_x in libmd_mpi.a(pme_pp.o)
_gmx_pme_recv_q_x in libmd_mpi.a(pme_pp.o)
_gmx_pme_recv_q_x in libmd_mpi.a(pme_pp.o)
_dd_gatherv in libmd_mpi.a(domdec_network.o)
_dd_scatterv in libmd_mpi.a(domdec_network.o)
_dd_gather in libmd_mpi.a(domdec_network.o)
_dd_scatter in libmd_mpi.a(domdec_network.o)
_dd_bcastc in libmd_mpi.a(domdec_network.o)
_dd_bcast in libmd_mpi.a(domdec_network.o)
_dd_sendrecv2_rvec in libmd_mpi.a(domdec_network.o)
_dd_sendrecv2_rvec in libmd_mpi.a(domdec_network.o)
_dd_sendrecv2_rvec in libmd_mpi.a(domdec_network.o)
_dd_sendrecv2_rvec in libmd_mpi.a(domdec_network.o)
_dd_sendrecv2_rvec in libmd_mpi.a(domdec_network.o)
_dd_sendrecv_rvec in libmd_mpi.a(domdec_network.o)
_dd_sendrecv_rvec in libmd_mpi.a(domdec_network.o)
_dd_sendrecv_rvec in libmd_mpi.a(domdec_network.o)
_dd_sendrecv_int in libmd_mpi.a(domdec_network.o)
_dd_sendrecv_int in libmd_mpi.a(domdec_network.o)
_dd_sendrecv_int in libmd_mpi.a(domdec_network.o)
_lam_mpi_prod, referenced from:
_gprod in do_gct.o
_do_coupling in do_gct.o
_do_coupling in do_gct.o
_do_coupling in do_gct.o
_lam_mpi_float, referenced from:
_gprod in do_gct.o
_do_coupling in do_gct.o
_do_coupling in do_gct.o
_do_coupling in do_gct.o
_gmx_tx_rx_real in libmd_mpi.a(partdec.o)
_gmx_sumf_sim in libgmx_mpi.a(network.o)
_gmx_sumf in libgmx_mpi.a(network.o)
_gmx_sumf in libgmx_mpi.a(network.o)
_gmx_sum_qgrid_dd in libmd_mpi.a(pme.o)
_gmx_sum_qgrid_dd in libmd_mpi.a(pme.o)
_gmx_sum_qgrid_dd in libmd_mpi.a(pme.o)
_gmx_sum_qgrid_dd in libmd_mpi.a(pme.o)
_gmx_sum_qgrid_dd in libmd_mpi.a(pme.o)
_gmx_sum_qgrid_dd in libmd_mpi.a(pme.o)
_pmeredist in libmd_mpi.a(pme.o)
_gmx_pme_init in libmd_mpi.a(pme.o)
_gmx_sum_qgrid in libmd_mpi.a(pme.o)
_gmx_sum_qgrid in libmd_mpi.a(pme.o)
_gmx_parallel_transpose_xy in libmd_mpi.a(gmx_parallel_3dfft.o)
_gmx_parallel_transpose_xy in libmd_mpi.a(gmx_parallel_3dfft.o)
_lam_mpi_int, referenced from:
_make_dd_communicators in libmd_mpi.a(domdec.o)
_make_dd_communicators in libmd_mpi.a(domdec.o)
_make_dd_communicators in libmd_mpi.a(domdec.o)
_gmx_sumi_sim in libgmx_mpi.a(network.o)
_gmx_sumi in libgmx_mpi.a(network.o)
_gmx_sumi in libgmx_mpi.a(network.o)
_pmeredist in libmd_mpi.a(pme.o)
_lam_mpi_sum, referenced from:

[gmx-users] Software inconsistency error: Not enough water

2010-02-10 Thread Chandan Choudhury
Hello gmxusers !!
I am simulating a protein and it  is bound to ATP.
Simulation of protein alone (without) works fine. Solely ATP simulation too
works. But the problem arises on adding ions to the protein + ATP (1QHH.pdb)
file.
Error:


$ genion -s em.tpr -o ion.pdb -p topol.top -np 48
WARNING: turning of free energy, will use lambda=0
Reading file em.tpr, VERSION 4.0.7 (single precision)
Using a coulomb cut-off of 0.9 nm
Will try to add 48 Na ions and 0 Cl ions.
Select a continuous group of solvent molecules
Opening library file /usr/local/gromacs/share/gromacs/top/aminoacids.dat
Group 0 (  System) has 70056 elements
Group 1 ( Protein) has 10214 elements
Group 2 (   Protein-H) has  5107 elements
Group 3 ( C-alpha) has   623 elements
Group 4 (Backbone) has  1869 elements
Group 5 (   MainChain) has  2488 elements
Group 6 (MainChain+Cb) has  3083 elements
Group 7 ( MainChain+H) has  3099 elements
Group 8 (   SideChain) has  7115 elements
Group 9 ( SideChain-H) has  2619 elements
Group10 ( Prot-Masses) has 10214 elements
Group11 ( Non-Protein) has 59842 elements
Group12 ( ATP) has43 elements
Group13 ( SOL) has 59799 elements
Group14 (   Other) has 59842 elements
Select a group: 13
Selected 13: 'SOL'
Number of (3-atomic) solvent molecules: 19933

Processing topology

Back Off! I just backed up temp.top to ./#temp.top.1#

---
Program genion, VERSION 4.0.7
Source code file: gmx_genion.c, line: 269

Software inconsistency error:
Not enough water
---

Though my system has sufficient amount of water (19933) molecules. Can not
understand the error. Any information would be useful.


Chadan
--
Chandan kumar Choudhury
NCL, Pune
INDIA
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php