[gmx-users] total charge of the system

2011-01-25 Thread ahmet yıldırım
Hi,

In my simulation, the total charge of the system is non-integer (System
has non-zero total charge: 8.04e+00). I neutralized it with 8 chloride
ions.
Then, grompp -f em.mdp -p topol.top -c solvated.gro -o em.tpr

Fatal error:
moleculetype CU1 is redefined
Is something wrong?

Below are the first and final versions of the .top file:

First topol.top file:

[ molecules ]
; Compound        #mols
Protein_chain_P       1
Protein_chain_L       1
Protein_chain_H       1
SOL                  10
SOL                 127
SOL                 157
SOL               41779

Final topol.top file:

#include "ions.itp"

[ molecules ]
; Compound        #mols
Protein_chain_P       1
Protein_chain_L       1
Protein_chain_H       1
SOL                  10
SOL                 127
SOL                 157
SOL               41771
CL-                   8

-- 
Ahmet YILDIRIM

Re: [gmx-users] total charge of the system

2011-01-25 Thread Mark Abraham


On 01/25/11, ahmet yıldırım ahmedo...@gmail.com wrote:
 In my simulation, the total charge of the system is non-integer (System
 has non-zero total charge: 8.04e+00). I neutralized it with 8 chloride
 ions. Then: grompp -f em.mdp -p topol.top -c solvated.gro -o em.tpr

 Fatal error:
 moleculetype CU1 is redefined
 Is something wrong?

ions.itp defines molecule types for ions, and molecule types cannot be 
redefined. When you #included ions.itp, GROMACS saw you redefining molecule 
types that were already defined elsewhere in the topology, which is illegal. 
Look back through the .top to find the original definitions, and then take 
suitable action.
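
For illustration, the failing pattern in this thread's topology looks like 
this (assuming the local ions.itp repeats the [ moleculetype ] entries 
already present in the force-field copy):

; Include topology for ions
#include "gromos43a1.ff/ions.itp"   ; force-field copy, written by pdb2gmx
#include "ions.itp"                 ; duplicate definitions -> "moleculetype CU1 is redefined"

Removing the duplicate #include line avoids this particular error.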

Mark



Re: [gmx-users] total charge of the system

2011-01-25 Thread ahmet yıldırım
Dear Mark,

I searched the GROMACS mailing list but could not find a proper solution.
What should I add to the .top file? Please look at the following
reconstructed1 and reconstructed2 .top files.

With the reconstructed1 .top file I get this error:
Fatal error:
moleculetype CU1 is redefined

With the reconstructed2 .top file I get this error:
Fatal error:
No such moleculetype CL-

Original .top file:
; Include topology for ions
#include "gromos43a1.ff/ions.itp"

[ system ]
; Name
GP41 MPER-DERIVED PEPTIDE; ANTI-HIV-1 ANTIBODY 2F5 LIGHT CHAIN; ANTI-HIV-1
ANTIBODY 2F5 HEAVY CHAIN in water

[ molecules ]
; Compound        #mols
Protein_chain_P       1
Protein_chain_L       1
Protein_chain_H       1
SOL                  10
SOL                 127
SOL                 157
SOL               41779

reconstructed1 .top file:
; Include topology for ions
#include "gromos43a1.ff/ions.itp"
#include "ions.itp"

[ system ]
; Name
GP41 MPER-DERIVED PEPTIDE; ANTI-HIV-1 ANTIBODY 2F5 LIGHT CHAIN; ANTI-HIV-1
ANTIBODY 2F5 HEAVY CHAIN in water

[ molecules ]
; Compound        #mols
Protein_chain_P       1
Protein_chain_L       1
Protein_chain_H       1
SOL                  10
SOL                 127
SOL                 157
SOL               41771
CL-                   8

reconstructed2 .top file:
; Include topology for ions
#include "gromos43a1.ff/ions.itp"

[ system ]
; Name
GP41 MPER-DERIVED PEPTIDE; ANTI-HIV-1 ANTIBODY 2F5 LIGHT CHAIN; ANTI-HIV-1
ANTIBODY 2F5 HEAVY CHAIN in water

[ molecules ]
; Compound        #mols
Protein_chain_P       1
Protein_chain_L       1
Protein_chain_H       1
SOL                  10
SOL                 127
SOL                 157
SOL               41771
CL-                   8


On 25 January 2011 10:49, Mark Abraham mark.abra...@anu.edu.au wrote:



 ions.itp defines molecule types for ions, and molecule types cannot be
 redefined. When you #included ions.itp, GROMACS saw you redefining
 molecule types that were already defined elsewhere in the topology.
 Look back through the .top to find the original definitions, and then
 take suitable action.

 Mark

-- 
Ahmet YILDIRIM

Re: [gmx-users] total charge of the system

2011-01-25 Thread Mark Abraham


On 01/25/11, ahmet yıldırım ahmedo...@gmail.com wrote:
 Dear Mark,

 I searched the GROMACS mailing list but could not find a proper solution.
 What should I add to the .top file?

 With the reconstructed1 .top file: Fatal error: moleculetype CU1 is redefined
 With the reconstructed2 .top file: Fatal error: No such moleculetype CL-


I don't have any knowledge of the context, so can't answer. It looks to me like 
you are mixing copies of ions.itp from multiple sources. Don't. Use the one for 
the force field you are targeting. pdb2gmx generated the right invocation - 
all you should have to do is use that, by generating correctly-named ions. See 
http://www.gromacs.org/Documentation/Gromacs_Utilities/genion
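
A minimal genion invocation for this system might look like the following 
sketch (the output file name is a placeholder, and -nname CL / -nn 8 are 
assumptions based on this thread; the ion name must match a [ moleculetype ] 
in the force field's ions.itp):

genion -s em.tpr -o solvated_ions.gro -p topol.top -nname CL -nn 8

Run with -p, genion also updates the [ molecules ] section of the topology, 
so the ion name and count stay consistent with the coordinates.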

Mark


 


[gmx-users] Re: Secondary structure loss in implicit solvent simulations

2011-01-25 Thread K. Singhal
Hi

It's not necessarily GPU-specific; it's implicit-solvent-specific. I don't get 
these problems in explicit solvent simulations on CPU, only in implicit solvent 
simulations, both on GPU and on CPU. One possible cause I can think of is 
unbalanced charges, which I would previously have balanced out using NaCl 
ions, but cannot any more.

Regards
Kush



--
Kushagra Singhal
Promovendus, Computational Chemistry
van 't Hoff Institute of Molecular Sciences
Science Park 904, room C2.119
1098 XH Amsterdam, The Netherlands
+31 205256965
Universiteit van Amsterdam
k.sing...@uva.nl




Antw: Re: [gmx-users] total charge of the system

2011-01-25 Thread Emanuel Peter
Hello,

You have to look into the ions.itp that is included in your .top file by
#include "ions.itp". All the ion molecule types have to be defined there.
The atom types used in ions.itp are defined in the force field's atom-type
.itp file (ff<your force field>.itp), which is included at the top of your
.top file. All these files are normally placed in /usr/share/gromacs/top/,
but you can also place them in your current directory.

Best,

Emanuel
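
For reference, an ion entry in a GROMOS-style ions.itp looks roughly like 
this (a sketch only; the exact names, columns and charge-group numbering 
depend on the force field files you actually have):

[ moleculetype ]
; molname   nrexcl
CL          1

[ atoms ]
; nr  type  resnr  residue  atom  cgnr  charge
  1   CL    1      CL       CL    1     -1.000

The name on the [ moleculetype ] line (here CL) is what the [ molecules ]
section of the .top must refer to.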

>>> Mark Abraham, 25.01.11 10:48 >>>

 It looks to me like you are mixing copies of ions.itp from multiple
 sources. Don't. Use the one for the force field you are targeting.
 pdb2gmx generated the right invocation - all you should have to do is
 use that by generating correctly-named ions. See
 http://www.gromacs.org/Documentation/Gromacs_Utilities/genion

 Mark



Re: Re: [gmx-users] total charge of the system

2011-01-25 Thread ahmet yıldırım
Dear Mark and Emanuel,

I am sending the ions.itp and the topol.top files. Everything seems OK to me.
Is there any problem with these files?

By the way, I am using GROMACS 4.5.3 with the GROMOS 43a1 force field.

On 25 January 2011 12:03, Emanuel Peter
emanuel.pe...@chemie.uni-regensburg.de wrote:

 You have to look into the ions.itp that is included in your .top file
 by #include "ions.itp". All the ion molecule types have to be defined
 there; the atom types they use are defined in the force field's
 atom-type .itp file, which is included at the top of your .top file.

 Emanuel



Re: Re: [gmx-users] total charge of the system

2011-01-25 Thread Mark Abraham


On 01/25/11, ahmet yıldırım ahmedo...@gmail.com wrote:
 Dear Mark and Emanuel,

 I am sending the ions.itp and the topol.top files. Everything seems OK
 to me. Is there any problem with these files?

No. So one of the other #include files is erroneously defining ion molecule 
types. See www.gromacs.org/Documentation/Include_File_Mechanism. Go and read 
them and fix it.

As the link I sent last time hinted, you need to name *molecules* in the 
[molecules] directive. Look carefully at the *molecule* name of chloride where 
it is defined.
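
Concretely: in the gromos43a1.ff/ions.itp used here, the chloride *molecule*
turns out to be named CL, not CL- (hence the "No such moleculetype CL-"
error earlier in this thread), so the matching [ molecules ] line is:

CL                  8    ; matches the [ moleculetype ] name
; CL-               8    ; wrong: no such moleculetype in this force field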

Mark

Re: Re: [gmx-users] total charge of the system

2011-01-25 Thread ahmet yıldırım
Dear Mark,

A million thanks. The problem is solved.
Finally:
Select a continuous group of solvent molecules:
...
Group    12 (  Water) has 126219 elements
Group    13 (    SOL) has 126219 elements

Which one should I choose, 12 or 13?

Final topol.top file:

; File 'topol.top' was generated
; By user: onbekend (0)
; On host: onbekend
; At date: Tue Jan 25 10:12:14 2011
;
; This is a standalone topology file
;
; It was generated using program:
; pdb2gmx - VERSION 4.5.3
;
; Command line was:
; pdb2gmx -f 3MOA.pdb -water spc -ter
;
; Force field was read from the standard Gromacs share directory.
;

; Include forcefield parameters
#include "gromos43a1.ff/forcefield.itp"

; Include chain topologies
#include "topol_Protein_chain_P.itp"
#include "topol_Protein_chain_L.itp"
#include "topol_Protein_chain_H.itp"

; Include water topology
#include "gromos43a1.ff/spc.itp"

#ifdef POSRES_WATER
; Position restraint for each water oxygen
[ position_restraints ]
;  i funct       fcx        fcy        fcz
   1    1       1000       1000       1000
#endif

; Include topology for ions
#include "gromos43a1.ff/ions.itp"

[ system ]
; Name
GP41 MPER-DERIVED PEPTIDE; ANTI-HIV-1 ANTIBODY 2F5 LIGHT CHAIN; ANTI-HIV-1
ANTIBODY 2F5 HEAVY CHAIN in water

[ molecules ]
; Compound        #mols
Protein_chain_P       1
Protein_chain_L       1
Protein_chain_H       1
SOL                  10
SOL                 127
SOL                 157
SOL               41771
CL                    8


On 25 January 2011 13:13, Mark Abraham mark.abra...@anu.edu.au wrote:



 No. So one of the other #include files is erroneously defining ion
 molecule types. See www.gromacs.org/Documentation/Include_File_Mechanism.
 Go and read them and fix it.

 As the link I sent last time hinted, you need to name *molecules* in the
 [molecules] directive. Look carefully at the *molecule* name of chloride
 where it is defined.

 Mark

Re: Re: [gmx-users] total charge of the system

2011-01-25 Thread Mark Abraham


On 01/25/11, ahmet yıldırım ahmedo...@gmail.com wrote:
 Dear Mark,

 A million thanks. The problem is solved.
 Finally:
 Select a continuous group of solvent molecules:
 ...
 Group    12 (  Water) has 126219 elements
 Group    13 (    SOL) has 126219 elements

 Which one should I choose, 12 or 13?
 

They're probably the same.

Mark


[gmx-users] Can not open file: run.xtc

2011-01-25 Thread ahmet yıldırım
Dear users,

g_rms -s em.tpr -f run.xtc
Select group for least squares fit:
Selected 2: 'Protein-H'
Select group for RMSD calculation
Selected 2: 'Protein-H'

I didn't have such a problem with other samples. But now I get the
following error:

Program g_rms, VERSION 4.5.3
Source code file: gmxfio.c, line: 519

Can not open file:
run.xtc

-- 
Ahmet YILDIRIM

[gmx-users] V-rescale thermostat, PME, Estimate for the relative computational load of the PME mesh part: 0.97

2011-01-25 Thread gromacs
Hi friends,

I get the following note:

The Berendsen thermostat does not generate the correct kinetic energy
  distribution. You might want to consider using the V-rescale thermostat.

I want to keep the temperature at 300 K, so does the choice of thermostat
method matter?


Another note when i use PME:

Estimate for the relative computational load of the PME mesh part: 0.97

NOTE 1 [file aminoacids.dat, line 1]:
  The optimal PME mesh load for parallel simulations is below 0.5
  and for highly parallel simulations between 0.25 and 0.33,
  for higher performance, increase the cut-off and the PME grid spacing

What is the reason for this? I use coulombtype = PME.

Are my settings proper?

Thanks

Re: [gmx-users] Can not open file: run.xtc

2011-01-25 Thread Tsjerk Wassenaar
Hi Ahmet,

That sort of indicates that the file is not there, doesn't it?
Maybe you're not doing what you expect to be doing, or doing it somewhere else.

Cheers,

Tsjerk
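
A quick check from the directory where you ran g_rms, for example:

pwd            # confirm you are where you think you are
ls -l run.xtc  # confirm the trajectory actually exists here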


2011/1/25 ahmet yıldırım ahmedo...@gmail.com:
 g_rms -s em.tpr -f run.xtc
 ...
 Program g_rms, VERSION 4.5.3
 Source code file: gmxfio.c, line: 519

 Can not open file:
 run.xtc




-- 
Tsjerk A. Wassenaar, Ph.D.

post-doctoral researcher
Molecular Dynamics Group
* Groningen Institute for Biomolecular Research and Biotechnology
* Zernike Institute for Advanced Materials
University of Groningen
The Netherlands


Re: [gmx-users] V-rescale thermostat, PME, Estimate for the relative computational load of the PME mesh part: 0.97

2011-01-25 Thread Justin A. Lemkul



gromacs wrote:

HI Friends,

I get the following note:

The Berendsen thermostat does not generate the correct kinetic energy
  distribution. You might want to consider using the V-rescale thermostat.

I want to keep the temperature at 300 K, so does the choice of thermostat
method matter?




The choice of thermostat certainly does matter, otherwise you wouldn't get this 
note.  Refer to the numerous discussions in the list archive as to why one would 
or would not (usually) use the Berendsen thermostat, as well as:


http://www.gromacs.org/Documentation/Terminology/Thermostats
http://www.gromacs.org/Documentation/Terminology/Berendsen
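
As a sketch, a V-rescale setup in the .mdp might look like this (the group
names and time constants here are illustrative assumptions, not values from
this thread):

tcoupl  = V-rescale
tc-grps = Protein  Non-Protein
tau_t   = 0.1      0.1
ref_t   = 300      300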



Another note when i use PME:

Estimate for the relative computational load of the PME mesh part: 0.97

NOTE 1 [file aminoacids.dat, line 1]:
  The optimal PME mesh load for parallel simulations is below 0.5
  and for highly parallel simulations between 0.25 and 0.33,
  for higher performance, increase the cut-off and the PME grid spacing

What is the reason for this? I use coulombtype = PME.



Your combination of settings (rcoulomb, fourierspacing, and perhaps a few 
others) indicates that your simulation is going to spend an inordinate amount 
of time doing PME calculations, so your performance will suffer.  Seeing your 
entire .mdp file would be necessary if you want further guidance.
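
Following the grompp note, the general remedy is to shift work from the
reciprocal-space (mesh) part to the direct-space part by increasing the
cut-off and coarsening the PME grid together, e.g. (illustrative values
only, and only where the force field permits changing cut-offs):

rlist          = 1.2
rcoulomb       = 1.2
fourierspacing = 0.16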


-Justin



--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




Re: [gmx-users] Can not open file: run.xtc

2011-01-25 Thread ahmet yıldırım
Dear Tsjerk,

Thank you. Problem solved.
g_rms -s em.tpr -f run.trr -o rmsd.xvg
xmgrace rmsd.xvg

On 25 January 2011 14:51, Tsjerk Wassenaar tsje...@gmail.com wrote:

 Hi Ahmet,

 That sort of indicates that the file is not there, doesn't it?
 Maybe you're not doing what you expect to be doing, or doing it
 somewhere else.

 Tsjerk




-- 
Ahmet YILDIRIM

[gmx-users] compiling error in tools

2011-01-25 Thread Jennifer Williams


Hello

I know this is possibly an issue for the IT support at my university, but I
was wondering if someone could shed some light on what may have gone wrong,
so that at least I can point them in the right direction.


As I am getting very small diffusion coefficients from g_msd, I want to
increase the number of decimal places to which the diffusion coefficient
is displayed.

In the file gmx_msd.c within src/tools I changed

 fprintf(out,"# D[%10s] = %.4f (+/- %.4f) (1e-5 cm^2/s)\n",

to

 fprintf(out,"# D[%10s] = %.8f (+/- %.8f) (1e-5 cm^2/s)\n",

then, as I have always done, I just typed "make g_msd" to recompile this
tool.

I get the following error message:

src/tools make g_msd
mpicc -DHAVE_CONFIG_H -I. -I../../src  -I../../include  
-DGMXLIBDIR=\/home/jwillia4/GRO/share/top\  
-I/home/jwillia4/GRO/include  -O3 -fomit-frame-pointer  
-finline-functions -Wall -Wno-unused -funroll-all-loops -MT g_msd.o  
-MD -MP -MF .deps/g_msd.Tpo -c -o g_msd.o g_msd.c

mv -f .deps/g_msd.Tpo .deps/g_msd.Po
/bin/sh ../../libtool --tag=CC   --mode=link mpicc  -O3  
-fomit-frame-pointer -finline-functions -Wall -Wno-unused  
-funroll-all-loops  -L/home/jwillia4/GRO/lib  -lgslcblas  -o g_msd  
g_msd.o libgmxana_mpi.la ../mdlib/libmd_mpi.la ../gmxlib/libgmx_mpi.la  
-lgsl -lnsl -lfftw3f -lm
mpicc -O3 -fomit-frame-pointer -finline-functions -Wall -Wno-unused  
-funroll-all-loops -o g_msd g_msd.o  -L/home/jwillia4/GRO/lib  
./.libs/libgmxana_mpi.a  
/home/jwillia4/GRO/gromacs-4.0.7/src/mdlib/.libs/libmd_mpi.a  
../mdlib/.libs/libmd_mpi.a  
/home/jwillia4/GRO/gromacs-4.0.7/src/gmxlib/.libs/libgmx_mpi.a  
../gmxlib/.libs/libgmx_mpi.a /home/jwillia4/GRO/lib/libgslcblas.so  
/home/jwillia4/GRO/lib/libgsl.so -lnsl  
/home/jwillia4/GRO/lib/libfftw3f.a -lm   -Wl,--rpath  
-Wl,/home/jwillia4/GRO/lib -Wl,--rpath -Wl,/home/jwillia4/GRO/lib
/home/jwillia4/GRO/gromacs-4.0.7/src/gmxlib/.libs/libgmx_mpi.a(main.o): In  
function `init_par':

main.c:(.text+0xc0): undefined reference to `ompi_mpi_comm_world'
main.c:(.text+0xc8): undefined reference to `ompi_mpi_comm_world'
/home/jwillia4/GRO/gromacs-4.0.7/src/gmxlib/.libs/libgmx_mpi.a(main.o): In  
function `init_multisystem':

main.c:(.text+0xb99): undefined reference to `ompi_mpi_comm_world'
main.c:(.text+0xbd9): undefined reference to `ompi_mpi_comm_world'
main.c:(.text+0xbee): undefined reference to `ompi_mpi_comm_world'
/home/jwillia4/GRO/gromacs-4.0.7/src/gmxlib/.libs/libgmx_mpi.a(network.o):network.c:(.text+0x15): more undefined references to `ompi_mpi_comm_world'  
follow
/home/jwillia4/GRO/gromacs-4.0.7/src/gmxlib/.libs/libgmx_mpi.a(network.o): In  
function `gmx_sumi_sim':

network.c:(.text+0x96): undefined reference to `ompi_mpi_op_sum'
network.c:(.text+0x9b): undefined reference to `ompi_mpi_int'
/home/jwillia4/GRO/gromacs-4.0.7/src/gmxlib/.libs/libgmx_mpi.a(network.o): In  
function `gmx_sumf_sim':

network.c:(.text+0x4e6): undefined reference to `ompi_mpi_op_sum'
network.c:(.text+0x4eb): undefined reference to `ompi_mpi_float'
/home/jwillia4/GRO/gromacs-4.0.7/src/gmxlib/.libs/libgmx_mpi.a(network.o): In  
function `gmx_sumd_sim':

network.c:(.text+0x9a6): undefined reference to `ompi_mpi_op_sum'
network.c:(.text+0x9ab): undefined reference to `ompi_mpi_double'
/home/jwillia4/GRO/gromacs-4.0.7/src/gmxlib/.libs/libgmx_mpi.a(network.o): In  
function `gmx_sumi':

network.c:(.text+0xe8f): undefined reference to `ompi_mpi_op_sum'
network.c:(.text+0xe94): undefined reference to `ompi_mpi_int'
network.c:(.text+0xec0): undefined reference to `ompi_mpi_int'
network.c:(.text+0xed7): undefined reference to `ompi_mpi_op_sum'
network.c:(.text+0xedc): undefined reference to `ompi_mpi_int'
network.c:(.text+0x12fe): undefined reference to `ompi_mpi_op_sum'
network.c:(.text+0x1303): undefined reference to `ompi_mpi_int'
/home/jwillia4/GRO/gromacs-4.0.7/src/gmxlib/.libs/libgmx_mpi.a(network.o): In  
function `gmx_sumf':

network.c:(.text+0x134e): undefined reference to `ompi_mpi_float'
network.c:(.text+0x1354): undefined reference to `ompi_mpi_op_sum'
network.c:(.text+0x1380): undefined reference to `ompi_mpi_float'
network.c:(.text+0x1397): undefined reference to `ompi_mpi_op_sum'
network.c:(.text+0x139c): undefined reference to `ompi_mpi_float'
network.c:(.text+0x183e): undefined reference to `ompi_mpi_op_sum'
network.c:(.text+0x1843): undefined reference to `ompi_mpi_float'
/home/jwillia4/GRO/gromacs-4.0.7/src/gmxlib/.libs/libgmx_mpi.a(network.o): In  
function `gmx_sumd':

network.c:(.text+0x1890): undefined reference to `ompi_mpi_op_sum'
network.c:(.text+0x1895): undefined reference to `ompi_mpi_double'
network.c:(.text+0x18c2): undefined reference to `ompi_mpi_double'
network.c:(.text+0x18d7): undefined reference to `ompi_mpi_op_sum'
network.c:(.text+0x18dc): undefined reference to `ompi_mpi_double'
network.c:(.text+0x1d86): undefined reference to `ompi_mpi_op_sum'
network.c:(.text+0x1d8b): undefined reference to `ompi_mpi_double'

Re: [gmx-users] compiling error in tools

2011-01-25 Thread Justin A. Lemkul



Jennifer Williams wrote:

 I know this is possibly an issue for the IT support at my university,
 but I was wondering if someone could shed some light on what may have
 gone wrong.

 In the file gmx_msd.c within src/tools I changed the fprintf format
 from %.4f to %.8f and then, as I have always done, typed "make g_msd"
 to recompile. At link time I get many undefined references to
 `ompi_mpi_comm_world' and other OpenMPI symbols.

Re: [gmx-users] dipole moment

2011-01-25 Thread David van der Spoel

On 2011-01-25 14.51, Olga Ivchenko wrote:

Dear gromacs users,

I would like to ask whether there is a possibility in GROMACS to calculate
the dipole moment between two atoms, for example one from water and another
one from a ligand.


best,
Olga

If you mean the combined dipole moment of a particular water and a 
ligand then you can do it with g_dipoles and an index file.
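
For example (a sketch; the file names are placeholders, and the combined
water + ligand group is built interactively):

make_ndx -f topol.tpr -o dip.ndx               # create a group with the chosen water and ligand atoms
g_dipoles -f traj.xtc -s topol.tpr -n dip.ndx  # compute the dipole moment of that group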



--
David van der Spoel, Ph.D., Professor of Biology
Dept. of Cell & Molec. Biol., Uppsala University.
Box 596, 75124 Uppsala, Sweden. Phone:  +46184714205.
sp...@xray.bmc.uu.se    http://folding.bmc.uu.se


[gmx-users] segmentation fault while running eneconv

2011-01-25 Thread Anna Marabotti
Dear all,
 
I launched on my system a first simulation of 5 ns, then I prolonged it to
50 ns using 
tpbconv -s tpr1_5ns.tpr -until 5 -o tpr2_50ns.tpr
and then 
mdrun -s tpr2_50ns.tpr -deffnm md2_50ns -cpi md1_5ns.cpt
Since my simulation was interrupted several times, each time I relaunched it
simply by doing:
mdrun -s tpr2_50ns.tpr -cpi md2_50ns.cpt -deffnm md2_50ns_2/3/4
 
At the end of these simulations I obtained the following files:
- md1_5ns.xtc and .edr: files obtained from the first MD of 5 ns long
- md2_50ns.xtc and .edr: files obtained by prolonging the first MD until
50ns
- md2_50ns_2.xtc and .edr: files obtained by restarting the previous
dynamics that was interrupted before 50 ns
- md2_50ns_3.xtc and .edr: same as before
- md2_50ns_4.xtc and .edr: same as before
 
After all these runs, I want to concatenate all the dynamics in order to
have a single .xtc file (md_50ns_tot.xtc) and a single .edr file
(md_50ns_tot.edr). For the first, I used:
trjcat -f md1_5ns.xtc md2_50ns.xtc md2_50ns_2.xtc md2_50ns_3.xtc
md2_50ns_4.xtc -o md_50ns_tot.xtc
and all worked fine: I obtained the output file with no errors (there are
also no errors in the .log files).
 
On the contrary, when I tried to do the same with eneconv:
eneconv -f md1_5ns.edr md2_50ns.edr md2_50ns_2.edr md2_50ns_3.edr
md2_50ns_4.edr -o md_50ns_tot.edr
I obtained the following output:
 
Opened 2GH9openmod4_pH10_5ns.edr as double precision energy file
Reading energy frame  1 time  100.000
Opened 2GH9openmod4_pH10_50ns.edr as double precision energy file
Reading energy frame  0 time0.000
Opened 2GH9openmod4_pH10_50ns_2.part0002.edr as double precision energy file
Reading energy frame  0 time 14900.000
Opened 2GH9openmod4_pH10_50ns_3.part0003.edr as double precision energy file
Reading energy frame  0 time 27800.000
Opened 2GH9openmod4_pH10_50ns_4.part0004.edr as double precision energy file
Reading energy frame  0 time 38800.000
 
Summary of files and start times used:

  File                                     Start time
------------------------------------------------------
2GH9openmod4_pH10_5ns.edr                       0.000
2GH9openmod4_pH10_50ns.edr                      0.000
2GH9openmod4_pH10_50ns_2.part0002.edr       14900.000
2GH9openmod4_pH10_50ns_3.part0003.edr       27800.000
2GH9openmod4_pH10_50ns_4.part0004.edr       38800.000
 
Opened 2GH9openmod4_pH10_5ns.edr as double precision energy file
Segmentation fault

Looking for some hints in the gmx-users list, the only thing I found that
could be similar to my problem is this old message:
http://lists.gromacs.org/pipermail/gmx-users/2007-January/025657.html

I see in the output that the start time for the first two simulations is
the same: could this be the problem for my system? However, I did use
tpbconv each time to make restarts of my simulations; I really don't know
why the start time is 0.000 in the first two cases.
Is there a problem with the results of the simulations if these two
simulations have the same start time? Practically, what can I do to
concatenate my .edr files?
 
Many thanks in advance and best regards
Anna Marabotti
 

Anna Marabotti, Ph.D.
Laboratory of Bioinformatics and Computational Biology
Institute of Food Science, CNR
Via Roma, 64
83100 Avellino (Italy)
Phone: +39 0825 299651
Fax: +39 0825 781585
Email: anna.marabo...@isa.cnr.it
Skype account: annam1972
Web page: http://bioinformatica.isa.cnr.it/anna/anna.htm
 
When a man with a gun meets a man with a pen, the man with a gun is a dead
man
 

Re: [gmx-users] segmentation fault while running eneconv

2011-01-25 Thread Justin A. Lemkul



Anna Marabotti wrote:

Dear all,
 
I launched on my system a first simulation of 5 ns, then I prolonged it 
to 50 ns using

tpbconv -s tpr1_5ns.tpr -until 5 -o tpr2_50ns.tpr
and then
mdrun -s tpr2_50ns.tpr -deffnm md2_50ns -cpi md1_5ns.cpt
Since my simulation was interrupted several times, every time I 
relaunched it simply doing:

mdrun -s tpr2_50ns.tpr -cpi md2_50ns.cpt -deffnm md2_50ns_2/3/4
 
At the end of these simulations I obtained the following files:

- md1_5ns.xtc and .edr: files obtained from the first MD of 5 ns long
- md2_50ns.xtc and .edr: files obtained by prolonging the first MD until 
50ns
- md2_50ns_2.xtc and .edr: files obtained by restarting the previous 
dynamics that was interrupted before 50 ns

- md2_50ns_3.xtc and .edr: same as before
- md2_50ns_4.xtc and .edr: same as before
 
After all these runs, I want to concatenate all the dynamics in order to 
have a single .xtc file md_50ns_tot and a single .edr file 
md_50ns_tot.edr. For the first, I used:
trjcat -f md1_5ns.xtc md2_50ns.xtc md2_50ns_2.xtc md2_50ns_3.xtc 
md2_50ns_4.xtc -o md_50ns_tot.xtc
and all worked fine: I obtained the output file with no errors (there 
are no errors also in the .log files)
 
On the contrary, when I tried to do the same with eneconv:
eneconv -f md1_5ns.edr md2_50ns.edr md2_50ns_2.edr md2_50ns_3.edr 
md2_50ns_4.edr -o md_50ns_tot.edr

I obtained the following output:
 
Opened 2GH9openmod4_pH10_5ns.edr as double precision energy file

Reading energy frame  1 time  100.000
Opened 2GH9openmod4_pH10_50ns.edr as double precision energy file
Reading energy frame  0 time0.000
Opened 2GH9openmod4_pH10_50ns_2.part0002.edr as double precision energy file
Reading energy frame  0 time 14900.000
Opened 2GH9openmod4_pH10_50ns_3.part0003.edr as double precision energy file
Reading energy frame  0 time 27800.000
Opened 2GH9openmod4_pH10_50ns_4.part0004.edr as double precision energy file
Reading energy frame  0 time 38800.000
 
Summary of files and start times used:
 
  File                                    Start time

-----------------------------------------------------
2GH9openmod4_pH10_5ns.edr                      0.000
2GH9openmod4_pH10_50ns.edr                     0.000
2GH9openmod4_pH10_50ns_2.part0002.edr      14900.000
2GH9openmod4_pH10_50ns_3.part0003.edr      27800.000
2GH9openmod4_pH10_50ns_4.part0004.edr      38800.000
 
Opened 2GH9openmod4_pH10_5ns.edr as double precision energy file

Segmentation fault
Looking for some hints in the gmx-users list the only thing I found that 
could be similar to my problem is this old message:

http://lists.gromacs.org/pipermail/gmx-users/2007-January/025657.html
 


What Gromacs version are you using?  If it is not 4.5.3, then you're probably 
running into a bug regarding double precision .edr files that was fixed some 
time ago.


I see in the output error message that the start time for the first two 
simulations is the same: could be this one the problem for my system? 
However, I did use tpbconv each time to make restarts of my simulations, 
I really don't know why the start time is 0.000 in the first two cases.


Well, your commands don't agree with the output of eneconv.  The names are 
different.  Perhaps you've confused what files you think you're using, or 
otherwise attempted to append to a file and then gave it a new name.  In any 
case, gmxcheck is your friend here.
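
For example (file names as in the commands above; a sketch):

gmxcheck -e md2_50ns.edr
gmxcheck -f md2_50ns.xtc

gmxcheck prints the frames and start times it finds, which should expose any 
mismatch before you concatenate.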


Is there a problem in the results of simulations if these two 
simulations have the same start time? Practically, what can I do to 
concatenate my .edr files?
 


Presumably, yes.  As long as the .edr files have no internal corruptions (which, 
unfortunately, is quite possible if the job frequently went down), then you 
should be able to concatenate them.  That also depends on the version of Gromacs 
you're using, if you're running into the old bug.  It's always helpful to state 
right up front which version you're using when reporting a problem.


-Justin





--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin



Re: [gmx-users] dipole moment

2011-01-25 Thread Olga Ivchenko
Dear David,

I mean a dipole moment between a particular atom in a ligand and a particular
atom in a water molecule. g_dipoles works for molecules, as I understood, not
for atoms.

best,
Olga


2011/1/25 David van der Spoel sp...@xray.bmc.uu.se

 On 2011-01-25 14.51, Olga Ivchenko wrote:

 Dear gromacs users,

 I would like to ask if there is a possibility in gromacs to calculate
 dipole moment between two atoms. For example one from water and another
 one from ligand.


 best,
 Olga

  If you mean the combined dipole moment of a particular water and a ligand
 then you can do it with g_dipoles and an index file.


 --
 David van der Spoel, Ph.D., Professor of Biology
 Dept. of Cell & Molec. Biol., Uppsala University.
 Box 596, 75124 Uppsala, Sweden. Phone:  +46184714205.
 sp...@xray.bmc.uu.sehttp://folding.bmc.uu.se

Re: [gmx-users] dipole moment

2011-01-25 Thread David van der Spoel

On 2011-01-25 16.01, Olga Ivchenko wrote:

Dear David,

I mean a dipole moment between a particular atom in a ligand and a
particular atom in a water molecule. g_dipoles works for molecules, as I
understood, not for atoms.


You can measure the distance between them, the dipole moment will be 
ill-defined unless the two atoms together have a zero charge.
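
To see why: for two atoms with charges q1 and q2 at positions r1 and r2, the 
"dipole" mu = q1 r1 + q2 r2 shifts by (q1 + q2) d when the origin is moved by 
d, so it is origin-independent only when q1 + q2 = 0. If the group you pick is 
neutral, a sketch of the index-file route mentioned above would be (make_ndx 
group selection is interactive; file names illustrative):

make_ndx -f topol.tpr -o index.ndx
g_dipoles -f traj.xtc -s topol.tpr -n index.ndx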









--
David van der Spoel, Ph.D., Professor of Biology
Dept. of Cell & Molec. Biol., Uppsala University.
Box 596, 75124 Uppsala, Sweden. Phone:  +46184714205.
sp...@xray.bmc.uu.sehttp://folding.bmc.uu.se


Re: [gmx-users] dipole moment

2011-01-25 Thread Olga Ivchenko
Thank you David,

Olga



[gmx-users] Umbrella Sampling for a single molecule

2011-01-25 Thread Sai Pooja
Hi,

I am interested in carrying out umbrella sampling for a protein in explicit
solvent with the charmm forcefield. I want to impose a harmonic potential in
the dihedral space of only some specific atoms in the protein molecule. I am
having trouble figuring out a way to apply this using gromacs. Can I get
some help on this?

Thanks
Pooja

p.s. I have seen the tutorial on US by Justin but I am not sure if that is
applicable to a single molecule when the purpose is to obtain the
free-energy function associated with the transition in dihedral angles
of some specific atoms in a protein molecule

-- 
Quaerendo Invenietis-Seek and you shall discover.

Re: [gmx-users] Umbrella Sampling for a single molecule

2011-01-25 Thread Justin A. Lemkul



Sai Pooja wrote:

Hi,
 
I am interested in carrying out umbrella sampling for a protein in 
explicit solvent with the charmm forcefield. I want to impose a harmonic 
potential in the dihedral space of only some specific atoms in the 
protein molecule. I am having trouble figuring out a way to apply this 
using gromacs. Can I get some help on this?
 


http://www.gromacs.org/Documentation/How-tos/Dihedral_Restraints

You'll have to build various configurations that correspond to different 
dihedral angles (which form the sampling windows), then restrain them.


The energy attributed to the restraints is then stored in the .edr file.  From 
these energies, you should be able to construct the energy curve over the 
sampling windows.  There are examples of this in the literature, so I suspect 
you should be able to find some demonstrations of how it's applied.
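
As a bare sketch of one such restraint (GROMACS 4.5-era syntax; atom numbers, 
angles and constants are illustrative, and the column layout differs between 
versions, so check the how-to above):

[ dihedral_restraints ]
; ai  aj  ak  al  type  label  phi  dphi  kfac  power
   5   7   9  15     1      1  180     0     1      2

together with dihre = yes and a dihre-fc force constant in the .mdp.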


-Justin





--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




[gmx-users] Re: segmentation fault while running eneconv

2011-01-25 Thread anna . marabotti
Dear Justin,
thank you for your answer. I'm currently using Gromacs version 4.0.7
which was compiled in double precision, therefore I strongly suspect
that this is a problem linked to the old bug. It seems to me that I used
eneconv previously on this machine to concatenate .edr files, but I
cannot be sure. However, before writing you I copied the .edr files on
another machine, which still has Gromacs 4.0.7 installed, but compiled
in single precision, and I still found the same "Segmentation fault" error.

Concerning the name of the files, when I described the commands in my
previous message I used a simplified version of their names in order
to avoid writing the long 2GH9openmod4_pH...blabla, but I checked
several times the names I used and I can assure you that I tried to
concatenate the correct file names. Therefore, the problem is not a
typo. Sorry for not explaining it before.

I made a check with gmxcheck on 2 files: the .edr and .xtc files from the
MD of 5 ns and the .edr and .xtc files from the MD of 50 ns (started
immediately after the 5ns using tpbconv): these are the results:
Checking file 2GH9openmod4_pH10_5ns.xtc
Reading frame   0 time0.000
# Atoms  40139
Precision 0.001 (nm)
Last frame   1000 time 5000.000
Item#frames Timestep (ps)
Step  10015
Time  10015
Lambda   0
Coords10015
Velocities   0
Forces   0
Box   10015

Checking energy file 2GH9openmod4_pH10_5ns.edr
Opened 2GH9openmod4_pH10_5ns.edr as double precision energy file
frame:  0 (index  0), t:  0.000
Last energy frame read 50 time 5000.000
Found 51 frames with a timestep of 100 ps.

Checking file 2GH9openmod4_pH10_50ns.xtc
Reading frame   0 time0.000
# Atoms  40139
Precision 0.001 (nm)
Reading frame2000 time 1.000
Item#frames Timestep (ps)
Step  29815
Time  29815
Lambda   0
Coords29815
Velocities   0
Forces   0
Box   29815

Checking energy file 2GH9openmod4_pH10_50ns.edr
Opened 2GH9openmod4_pH10_50ns.edr as double precision energy file
frame:  0 (index  0), t:  0.000
Last energy frame read 149 time 14900.000
Found 150 frames with a timestep of 100 ps.

It seems to me that all is regular; the only strange thing is that both
start at time 0, despite the fact that I used tpbconv+mdrun with -cpi to
continue the MD after the first 5 ns.

I return to my previous question: how can I manage this problem,
especially since I have version 4.0.7? Do I have to ask the administrators
to fix the bug? Do I have to restart all my simulations?

Thank you very much and best regards
Anna


Re: [gmx-users] Re: Secondary structure loss in implicit solvent simulations

2011-01-25 Thread Michael Shirts
OK, that is what I was trying to figure out -- is the problem
reproducible on both GPU and CPU. Now, you haven't answered the direct
question of whether the energies are the same for at least the first 5 steps
or so -- without that knowledge, there might be different errors
occurring on GPU vs CPU.
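
For example, a quick check is to run a handful of steps on each platform and 
compare the energy terms (a sketch; file names illustrative):

g_energy -f cpu_run.edr -o cpu.xvg
g_energy -f gpu_run.edr -o gpu.xvg

g_energy asks interactively which terms to extract.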

At this point, the question is then whether this works with another
code with the same input parameters. Sander (part of the AmberTools
system) is downloadable (I believe by anyone, not just academics), so
you can try running the same system on Sander, using the best fit to
the implicit solvent model in gromacs.

If THAT works, and Gromacs GPU fails, then it would appear to be a
problem with Gromacs implicit solvent implementation, and should be
posted to redmine as a bug, as well as described on the list with full
details (more than you have provided so far!) so that it can be
reproduced by the developers.

Best,

On Tue, Jan 25, 2011 at 5:05 AM, K. Singhal k.sing...@uva.nl wrote:
 Hi

 It's not necessarily GPU-specific, it's implicit solvent-specific. I don't 
 get these problems in explicit solvent simulations on CPU, only in implicit 
 solvent simulations both on GPU as well as CPU. One of the problems that I 
 can think of is unbalanced charges that I would have balanced out using NaCl 
 ions, but not any more.

 Regards
 Kush



 --
 Kushagra Singhal
 Promovendus, Computational Chemistry
 van 't Hoff Institute of Molecular Sciences
 Science Park 904, room C2.119
 1098 XH Amsterdam, The Netherlands
 +31 205256965
 Universiteit van Amsterdam
 k.sing...@uva.nl




Re: [gmx-users] Re: segmentation fault while running eneconv

2011-01-25 Thread Justin A. Lemkul



anna.marabo...@isa.cnr.it wrote:

Dear Justin,
thank you for your answer. I'm currently using Gromacs version 4.0.7
which was compiled in double precision, therefore I strongly suspect
that this is a problem linked to the old bug. It seems to me that I used
eneconv previously on this machine to concatenate .edr files, but I
cannot be sure. However, before writing you I copied the .edr files on
another machine, which still has Gromacs 4.0.7 installed, but compiled
in single precision, and still I found the same error Segmentation fault.



I don't believe you have a problem related to the bug; it was introduced in the 
4.5 series, so 4.0.7 should be unaffected.  Since gmxcheck seems to have worked, 
that's more evidence that there is no bug.  The problem was in the .edr 
formatting, causing all Gromacs utilities to stop reading double-precision .edr
files.



Concerning the name of the files, when I described the commands in my
previous message I used a simplified version of their names in order
to avoid writing the long 2GH9openmod4_pH...blabla, but I checked
several times the names I used and I can assure you that I tried to
concatenate the correct file names. Therefore, the problem is not a
typo. Sorry for not explaining it before.



OK.  Literal is always better, and often requires a simple cut and paste from 
the terminal, rather than any manual typing at all :)



I made a check with gmxcheck on 2 files: the .edr and .xtc files from the
MD of 5 ns and the .edr and .xtc files from the MD of 50 ns (started
immediately after the 5ns using tpbconv): these are the results:
Checking file 2GH9openmod4_pH10_5ns.xtc
Reading frame   0 time0.000
# Atoms  40139
Precision 0.001 (nm)
Last frame   1000 time 5000.000
Item#frames Timestep (ps)
Step  10015
Time  10015
Lambda   0
Coords10015
Velocities   0
Forces   0
Box   10015

Checking energy file 2GH9openmod4_pH10_5ns.edr
Opened 2GH9openmod4_pH10_5ns.edr as double precision energy file
frame:  0 (index  0), t:  0.000
Last energy frame read 50 time 5000.000
Found 51 frames with a timestep of 100 ps.

Checking file 2GH9openmod4_pH10_50ns.xtc
Reading frame   0 time0.000
# Atoms  40139
Precision 0.001 (nm)
Reading frame2000 time 1.000
Item#frames Timestep (ps)
Step  29815
Time  29815
Lambda   0
Coords29815
Velocities   0
Forces   0
Box   29815

Checking energy file 2GH9openmod4_pH10_50ns.edr
Opened 2GH9openmod4_pH10_50ns.edr as double precision energy file
frame:  0 (index  0), t:  0.000
Last energy frame read 149 time 14900.000
Found 150 frames with a timestep of 100 ps.

It seems to me that all is regular; the only strange thing is that both
start at time 0, despite I used tpbconv+mdrun with cpi to continue the
MD after the first 5 ns.



What does the log file tell you regarding the outset of the second job?  It 
should say that it's starting from a checkpoint, and the first interval written 
should have a time of 5000 ps.  If not, then the checkpoint file was not 
properly used.  If both .edr files start at the same point, that could be a 
reason for the seg fault, but that's a bit of a guess.
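
For example, something like

grep -i checkpoint md2_50ns.log

(file name illustrative) should show whether mdrun reported reading the .cpt 
file at startup.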


From the gmxcheck output, it appears that your _50ns.edr file does indeed start 
from 0 instead of 5000, as you would hope.



I return with my previous question: how can I manage this problem,
especially if I have the version 4.0.7? Do I have to ask administrators
to fix the bug? Do I have to restart all my simulations?



If the continuation instead started from t=0, then you don't have to re-do 
anything, just don't use the _5ns.* information (.xtc, .edr, etc) since you 
basically did that portion of the simulation over.  The log files will tell you 
what's going on.
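
Should you still need to stitch files with clashing start times, eneconv also 
has a -settime option that lets you override each file's start time 
interactively, e.g. (a sketch):

eneconv -f md2_50ns.edr md2_50ns_2.edr md2_50ns_3.edr md2_50ns_4.edr -o md_50ns_tot.edr -settime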


-Justin



[gmx-users] Two machines, same job, one fails

2011-01-25 Thread TJ Mustard

Hi all,



I am running MD/FEP on a protein-ligand system with gromacs 4.5.3 and FFTW 3.2.2.



My iMac will run the job (over 4000 steps, till I killed it) at 4fs steps. (I am using heavy H)



Once I put this on our group's AMD cluster the jobs fail even with 2fs steps (with thousands of LINCS errors).



We have recompiled the cluster's gromacs 4.5.3 build, with no change. I know the system is the same since I copied the job from the server to my machine, to rerun it.



What is going on? Why can one machine run a job perfectly and the other cannot? I also know there is adequate memory on both machines.



Below is my command sequence:



echo ==
date >> RNAP-C.joblog
echo g453s-grompp -f em.mdp -c RNAP-C_b4em.gro -p RNAP-C.top -o RNAP-C_em.tpr
/share/apps/gromacs-4.5.3-single/bin/g453s-grompp -f em.mdp -c RNAP-C_b4em.gro -p RNAP-C.top -o RNAP-C_em.tpr
date >> RNAP-C.joblog
echo g453s-mdrun -v -s RNAP-C_em.tpr -c RNAP-C_after_em.gro -g emlog.log -cpo state_em.cpt -nt 2
/share/apps/gromacs-4.5.3-single/bin/g453s-mdrun -v -s RNAP-C_em.tpr -c RNAP-C_after_em.gro -g emlog.log -cpo stat_em.cpt -nt 2
date >> RNAP-C.joblog
echo g453s-grompp -f pr.mdp -c RNAP-C_after_em.gro -p RNAP-C.top -o RNAP-C_pr.tpr
/share/apps/gromacs-4.5.3-single/bin/g453s-grompp -f pr.mdp -c RNAP-C_after_em.gro -p RNAP-C.top -o RNAP-C_pr.tpr
echo g453s-mdrun -v -s RNAP-C_pr.tpr -e pr.edr -c RNAP-C_after_pr.gro -g prlog.log -cpo state_pr.cpt -nt 2 -dhdl dhdl-pr.xvg
/share/apps/gromacs-4.5.3-single/bin/g453s-mdrun -v -s RNAP-C_pr.tpr -e pr.edr -c RNAP-C_after_pr.gro -g prlog.log -cpo state_pr.cpt -nt 2 -dhdl dhdl-pr.xvg
date >> RNAP-C.joblog
echo g453s-grompp -f md.mdp -c RNAP-C_after_pr.gro -p RNAP-C.top -o RNAP-C_md.tpr
/share/apps/gromacs-4.5.3-single/bin/g453s-grompp -f md.mdp -c RNAP-C_after_pr.gro -p RNAP-C.top -o RNAP-C_md.tpr
date >> RNAP-C.joblog
echo g453s-mdrun -v -s RNAP-C_md.tpr -o RNAP-C_md.trr -c RNAP-C_after_md.gro -g md.log -e md.edr -cpo state_md.cpt -nt 2 -dhdl dhdl-md.xvg
/share/apps/gromacs-4.5.3-single/bin/g453s-mdrun -v -s RNAP-C_md.tpr -o RNAP-C_md.trr -c RNAP-C_after_md.gro -g md.log -e md.edr -cpo state_md.cpt -nt 2 -dhdl dhdl-md.xvg
date >> RNAP-C.joblog
echo g453s-grompp -f FEP.mdp -c RNAP-C_after_md.gro -p RNAP-C.top -o RNAP-C_fep.tpr
/share/apps/gromacs-4.5.3-single/bin/g453s-grompp -f FEP.mdp -c RNAP-C_after_md.gro -p RNAP-C.top -o RNAP-C_fep.tpr
date >> RNAP-C.joblog
echo g453s-mdrun -v -s RNAP-C_fep.tpr -o RNAP-C_fep.trr -c RNAP-C_after_fep.gro -g fep.log -e fep.edr -cpo state_fep.cpt -nt 2 -dhdl dhdl-fep.xvg
/share/apps/gromacs-4.5.3-single/bin/g453s-mdrun -v -s RNAP-C_fep.tpr -o RNAP-C_fep.trr -c RNAP-C_after_fep.gro -g fep.log -e fep.edr -cpo state_fep.cpt -nt 2 -dhdl dhdl-fep.xvg





I can add my .mdps but I do not think they are the problem since I know it works on my personal iMac.



Thank you,

TJ Mustard
Email: musta...@onid.orst.edu
  


[gmx-users] nstcomm < nstcalcenergy defeats the purpose of nstcalcenergy

2011-01-25 Thread ahmet yıldırım
Dear users,

How can I fix the following NOTE? Will this note/warning cause a
problem in the calculations? What is the meaning of this note?

NOTE 1 [file pr.mdp]:
  nstcomm < nstcalcenergy defeats the purpose of nstcalcenergy, setting
  nstcomm to nstcalcenergy


-- 
Ahmet YILDIRIM

Re: [gmx-users] mdrun_mpi executable not found

2011-01-25 Thread Justin Kat
Alright. So meaning I should have instead issued:

./configure --enable-mpi --program-suffix=_mpi

make mdrun

make install-mdrun

make links


to have installed an MPI-enabled executable called mdrun_mpi apart from the
existing mdrun executable? (Would I also need to append the _mpi suffix when
issuing the first two make and make install commands above?)

Thanks,
Justin

On Mon, Jan 24, 2011 at 8:08 PM, Justin A. Lemkul jalem...@vt.edu wrote:



 Justin Kat wrote:
  Thank you for the reply!
 
  hmm mdrun_mpi does not appear in the list of executables in
  /usr/local/gromacs/bin (and well therefore not in /usr/local/bin).
 
  Which set of installation commands that I used should have compiled the
  mdrun_mpi executable? And how should I go about getting the mdrun_mpi
  executable at this point?
 

 I see it now.  When you configured with --enable-mpi, you didn't specify
 --program-suffix=_mpi, so the installation procedure over-wrote your
 existing
 (serial) mdrun with an MPI-enabled one simply called mdrun.  The
 configure
 output should have warned you about this.  You could, in theory, simply
 re-name
 your existing executable mdrun_mpi and then re-install a serial mdrun, if
 you
 need it.

 -Justin

 --
 

 Justin A. Lemkul
 Ph.D. Candidate
 ICTAS Doctoral Scholar
 MILES-IGERT Trainee
 Department of Biochemistry
 Virginia Tech
 Blacksburg, VA
 jalemkul[at]vt.edu | (540) 231-9080
 http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin

 

Re: [gmx-users] nstcomm < nstcalcenergy defeats the purpose of nstcalcenergy

2011-01-25 Thread Mark Abraham

On 26/01/2011 8:10 AM, ahmet yıldırım wrote:

Dear users,

How can I fix the following NOTE? Will this note/warning cause a
problem in the calculations? What is the meaning of this note?


NOTE 1 [file pr.mdp]:
  nstcomm < nstcalcenergy defeats the purpose of nstcalcenergy, setting
  nstcomm to nstcalcenergy


Did you look these definitions up in the manual before firing off an 
email? :-)
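
(In short: the note is harmless -- grompp just raises nstcomm to match 
nstcalcenergy, as it says -- and it goes away if you set them consistently 
yourself in the .mdp, e.g. with illustrative values:

nstcalcenergy = 100
nstcomm       = 100

so that nstcomm is not smaller than nstcalcenergy.)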


Mark


Re: [gmx-users] mdrun_mpi executable not found

2011-01-25 Thread Mark Abraham

On 26/01/2011 8:50 AM, Justin Kat wrote:

Alright. So meaning I should have instead issued:

./configure --enable-mpi --program-suffix=_mpi
make mdrun
make install-mdrun
make links

to have installed an MPI-enabled executable called mdrun_mpi apart 
from the existing mdrun executable? (Would I also need to append the 
_mpi suffix when issuing the first two make and make install commands 
above?


No. See http://www.gromacs.org/Downloads/Installation_Instructions

Mark








Re: [gmx-users] V-rescale thermostat, PME, Estimate for the relative computational load of the PME mesh part: 0.97

2011-01-25 Thread Mark Abraham

On 26/01/2011 12:05 AM, Justin A. Lemkul wrote:



gromacs wrote:

HI Friends,

I get the following note,

The Berendsen thermostat does not generate the correct kinetic energy
  distribution. You might want to consider using the V-rescale 
thermostat.


I want to keep the T at 300K, so does it matter to select any 
thermostat method?




The choice of thermostat certainly does matter, otherwise you wouldn't 
get this note.  Refer to the numerous discussions in the list archive 
as to why one would or would not (usually) use the Berendsen 
thermostat, as well as:


http://www.gromacs.org/Documentation/Terminology/Thermostats
http://www.gromacs.org/Documentation/Terminology/Berendsen


... and the relevant manual sections.

Mark





Another note when i use PME:

Estimate for the relative computational load of the PME mesh part: 0.97

NOTE 1 [file aminoacids.dat, line 1]:
  The optimal PME mesh load for parallel simulations is below 0.5
  and for highly parallel simulations between 0.25 and 0.33,
  for higher performance, increase the cut-off and the PME grid spacing

So what is the reason? I use type=PME



Your combination of settings (rcoulomb, fourierspacing, and perhaps a 
few others) indicates that your simulation is going to spend an 
inordinate amount of time doing PME calculations, so your performance 
will suffer.  Seeing your entire .mdp file would be necessary if you 
want further guidance.
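
As the note itself says, the cure is to increase the cut-off and the PME grid 
spacing together, shifting work from the mesh to direct space; purely as an 
illustration (whether such values are appropriate depends on your force 
field):

rcoulomb       = 1.2
rvdw           = 1.2
fourierspacing = 0.16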


-Justin


Is my setting proper?

Thanks








Re: [gmx-users] Two machines, same job, one fails

2011-01-25 Thread Mark Abraham

On 26/01/2011 5:50 AM, TJ Mustard wrote:


Hi all,

I am running MD/FEP on a protein-ligand system with gromacs 4.5.3 and 
FFTW 3.2.2.


My iMac will run the job (over 4000 steps, till I killed it) at 4fs 
steps. (I am using heavy H)


Once I put this on our groups AMD Cluster the jobs fail even with 2fs 
steps. (with thousands of lincs errors)


We have recompiled the clusters gromacs 4.5.3 build, with no change. I 
know the system is the same since I copied the job from the server to 
my machine, to rerun it.


What is going on? Why can one machine run a job perfectly and the 
other cannot? I also know there is adequate memory on both machines.




You've posted this before, and I made a number of diagnostic 
suggestions. What did you learn?


Mark






Re: [gmx-users] Two machines, same job, one fails

2011-01-25 Thread TJ Mustard

   On January 25, 2011 at 2:08 PM Mark Abraham mark.abra...@anu.edu.au wrote:
  

  
On 26/01/2011 5:50 AM, TJ Mustard wrote: 


  Hi all,

  

  I am running MD/FEP on a protein-ligand system with gromacs 4.5.3 and FFTW 3.2.2.

  

  My iMac will run the job (over 4000 steps, till I killed it) at 4fs steps. (I am using heavy H)

  

  Once I put this on our groups AMD Cluster the jobs fail even with 2fs steps. (with thousands of lincs errors)

  

  We have recompiled the clusters gromacs 4.5.3 build, with no change. I know the system is the same since I copied the job from the server to my machine, to rerun it.

  

  What is going on? Why can one machine run a job perfectly and the other cannot? I also know there is adequate memory on both machines.

 You've posted this before, and I made a number of diagnostic suggestions. What did you learn?

 Mark
  


Mark and all,



First thank you for all our help. What you suggested last time helped considerably with our jobs/calculations. I have learned that using the standard mdp settings allows my heavy-H 4fs jobs to run on my iMac (Intel) and have made these my new standard for future jobs. We chose to use the smaller 0.8nm PME/cutoff due to others' papers/tutorials, but now we understand why we need these standard settings. Now what I see to be our problem is that our machines have some sort of variable we cannot account for. If I am blind to my error, please show me. I just don't understand why one computer works while the other does not. We have recompiled gromacs 4.5.3 single precision on our cluster, and still have this problem.



Now I understand that my iMac works, but it only has 2 cpus and the cluster has 320. Since we are running our jobs via a Bennett's acceptance ratio (BAR) FEP with 21 lambda windows, using just one 2-cpu machine would take too long. Especially since we wish to start pseudo-high-throughput drug testing.





In my .mdp files now, the only changes are:

(the default setting is on the right of the ;)





define =  ; =

; RUN CONTROL PARAMETERS
integrator = sd ; = md
; Start time and timestep in ps
tinit = 0 ; = 0
dt = 0.004 ; = 0.001
nsteps = 75  ; = 0 (this one depends on the window and particular part of our job)

; OUTPUT CONTROL OPTIONS
; Output frequency for coords (x), velocities (v) and forces (f)
nstxout = 1 ; = 100 (to save on disk space)
nstvout = 1 ; = 100



; OPTIONS FOR ELECTROSTATICS AND VDW
; Method for doing electrostatics
coulombtype = PME ; = Cutoff
rcoulomb-switch = 0 ; = 0
rcoulomb = 1 ; = 1
; Relative dielectric constant for the medium and the reaction field
epsilon_r = 1 ; = 1
epsilon_rf = 1 ; = 1
; Method for doing Van der Waals
vdw-type = Cut-off ; = Cut-off
; cut-off lengths 
rvdw-switch = 0 ; = 0
rvdw = 1 ; = 1
; Spacing for the PME/PPPM FFT grid
fourierspacing = 0.12 ; = 0.12
; EWALD/PME/PPPM parameters
pme_order = 4 ; = 4
ewald_rtol = 1e-05 ; = 1e-05
ewald_geometry = 3d ; = 3d
epsilon_surface = 0 ; = 0
optimize_fft = yes ; = no



; OPTIONS FOR WEAK COUPLING ALGORITHMS
; Temperature coupling 
tcoupl = v-rescale ; = No
nsttcouple = -1 ; = -1
nh-chain-length = 10 ; = 10
; Groups to couple separately
tc-grps = System ; =
; Time constant (ps) and reference temperature (K)
tau-t = 0.1 ; =
ref-t = 300 ; =
; Pressure coupling 
Pcoupl = Parrinello-Rahman ; = No
Pcoupltype = Isotropic
nstpcouple = -1 ; = -1
; Time constant (ps), compressibility (1/bar) and reference P (bar)
tau-p = 1 ; = 1
compressibility = 4.5e-5 ; =
ref-p = 1.0 ; =



; OPTIONS FOR BONDS 
constraints = all-bonds ; = none
; Type of constraint algorithm
constraint-algorithm = Lincs ; = Lincs



; Free energy control stuff
free-energy = yes ; = no
init-lambda = 0.00  ; = 0
delta-lambda = 0 ; = 0
foreign_lambda =  0.05 ; =
sc-alpha = 0.5 ; = 0
sc-power = 1.0 ; = 0
sc-sigma = 0.3 ; = 0.3
nstdhdl = 1 ; = 10
separate-dhdl-file = yes ; = yes
dhdl-derivatives = yes ; = yes
dh_hist_size = 0 ; = 0
dh_hist_spacing = 0.1 ; = 0.1
couple-moltype = LGD  ; =
couple-lambda0 = vdw-q ; = vdw-q
couple-lambda1 = none ; = vdw-q
couple-intramol = no  ; = no





Some of these change due to positional restraint md and energy minimization.



All of these settings have come from either tutorials, papers or peoples advice.



If it would be advantageous I can post my entire energy minimization, positional restraint, md, and FEP mdp files.



Thank you,

TJ Mustard






  


Re: [gmx-users] Two machines, same job, one fails

2011-01-25 Thread Justin A. Lemkul



TJ Mustard wrote:



 



On January 25, 2011 at 2:08 PM Mark Abraham mark.abra...@anu.edu.au wrote:


On 26/01/2011 5:50 AM, TJ Mustard wrote:


Hi all,

 

I am running MD/FEP on a protein-ligand system with gromacs 4.5.3 and 
FFTW 3.2.2.


 

My iMac will run the job (over 4000 steps, till I killed it) at 4fs 
steps. (I am using heavy H)


 

Once I put this on our groups AMD Cluster the jobs fail even with 2fs 
steps. (with thousands of lincs errors)


 

We have recompiled the clusters gromacs 4.5.3 build, with no change. 
I know the system is the same since I copied the job from the server 
to my machine, to rerun it.


 

What is going on? Why can one machine run a job perfectly and the 
other cannot? I also know there is adequate memory on both machines.




You've posted this before, and I made a number of diagnostic 
suggestions. What did you learn?


Mark


Mark and all,

 

First thank you for all our help. What you suggested last time helped 
considerably with our jobs/calculations. I have learned that using the 
standard mdp settings allow my heavyh 4fs jobs to run on my iMac (intel) 
and have made these my new standard for future jobs. We chose to use the 
smaller 0.8nm PME/Cutoff due to others papers/tutorials, but now we 
understand why we need these standard settings. Now what I see to be our 
problem is that our machines have some sort of variable we cannot 
account for. If I am blind to my error, please show me. I just don't 
understand why one computer works while the other does not. We have 
recompiled gromacs 4.5.3 single precission on our cluster, and still 
have this problem.




I know the feeling all too well.  PowerPC jobs crash instantly, on our cluster, 
despite working beautifully on our lab machines.  There's a bug report about 
that one, but I haven't heard anything about AMD failures.  It remains a 
possibility that something beyond your control is going on.  To explore a bit 
further:


1. Do the systems in question crash immediately (i.e., step zero) or do they run 
for some time?


2. If they give you even a little bit of output, you can analyze which energy 
terms, etc go haywire with the tips listed here:


http://www.gromacs.org/Documentation/Terminology/Blowing_Up#Diagnosing_an_Unstable_System

That would help in tracking down any potential bug or error.

3. Is it just the production runs that are crashing, or everything?  If EM isn't 
even working, that smells even buggier.


4. Are the compilers the same on the iMac vs. AMD cluster?

-Justin

 


[gmx-users] (no subject)

2011-01-25 Thread trevor brown
Dear Justin/Mark,
I would like to have a private course about Gromacs in Holland.
Is there such a facility for this? Who should I talk with?

Or is there a workshop in the near future?
best wishes

Re: [gmx-users] Two machines, same job, one fails

2011-01-25 Thread TJ Mustard


On January 25, 2011 at 3:24 PM Justin A. Lemkul jalem...@vt.edu wrote:

 I know the feeling all too well. PowerPC jobs crash instantly, on our cluster,
 despite working beautifully on our lab machines. There's a bug report about
 that one, but I haven't heard anything about AMD failures. It remains a
 possibility that something beyond your control is going on. To explore a bit
 further:

 1. Do the systems in question crash immediately (i.e., step zero) or do they
 run for some time?

Step 0, every time.

 2. If they give you even a little bit of output, you can analyze which energy
 terms, etc go haywire with the tips listed here:

All I have seen on these is LINCS errors and water molecules unable to be
settled.

But I will check this out right now, and email if I smell trouble.

 http://www.gromacs.org/Documentation/Terminology/Blowing_Up#Diagnosing_an_Unstable_System

 That would help in tracking down any potential bug or error.

 3. Is it just the production runs that are crashing, or everything? If EM isn't
 even working, that smells even buggier.

Awesome question here, we have seen some weird stuff. Sometimes the cluster
will give us segmentation faults, then it will fail on our machines or
sometimes not on our iMacs. I know, weird! If EM starts on the cluster it will
finish. Where we have issues is in positional restraint (PR) and MD and
MD/FEP. It doesn't matter if FEP is on or off in an MD (although we are using
SD for these MD/FEP runs).

 4. Are the compilers the same on the iMac vs. AMD cluster?

No, I am using x86_64-apple-darwin10 GCC 4.4.4 and the cluster is using
x86_64-redhat-linux GCC 4.1.2.

I just did a quick yum search and there doesn't seem to be a newer GCC. We
know you are moving to CMake but we have yet to get it implemented on our
cluster successfully.

Thank you,

TJ Mustard

Re: [gmx-users] Need help troubleshooting an mdrun-gpu error message!

2011-01-25 Thread Szilárd Páll
Hi,

There are two things you should test:

a) Does your NVIDIA driver + CUDA setup work? Try to run a different
CUDA-based program, e.g. you can get the CUDA SDK and compile one of
the simple programs like deviceQuery or bandwidthTest.

b) If the above works, try to compile OpenMM from source with the same
CUDA version as the one used when compiling Gromacs.

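For (a), with the 2011-era SDK something like this should do it (install 
paths are illustrative and depend on where the SDK was unpacked):

cd ~/NVIDIA_GPU_Computing_SDK/C
make
./bin/linux/release/deviceQuery
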
Let me know if you succeed!

Cheers,
--
Szilárd



On Tue, Jan 25, 2011 at 3:49 AM, Solomon Berman smber...@bu.edu wrote:
 Hello friends,
 I have installed mdrun-gpu v. 4.5.3 without incident on my group's computer
 with a suitable GPU.  The computer uses a Linux OS.  CUDA and OpenMM are
 installed in the usual places.  I created the topol.tpr file with grompp v.
 4.5.3, also without incident.  When I run ./mdrun-gpu -v, the following
 error message is produced:
 Program mdrun-gpu, VERSION 4.5.3
 Source code file:
 /home/smberman/sourcecode/gromacs-4.5.3/src/kernel/openmm_wrapper.cpp, line:
 1259
 Fatal error:
 The requested platform CUDA could not be found.
 For more information and tips for troubleshooting, please check the GROMACS
 website at http://www.gromacs.org/Documentation/Errors
 ---
 I have the following in my .bash_profile:
 export PATH=~/bin:~/usr/bin:~/usr/local/bin:/usr/local/cuda/bin:$PATH:.
 LD_LIBRARY_PATH=/usr/local/openmm/lib:/usr/local/cuda/lib
 export LD_LIBRARY_PATH
 I have run the available tests that came with the OpenMM library, and the
 CUDA tests pass.
 Could someone please explain what this error means and the appropriate way
 to remedy it?  Thank you!!
 Best,
 Solomon Berman
 Chemistry Department
 Boston University

--
gmx-users mailing list    gmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] Two machines, same job, one fails

2011-01-25 Thread Mark Abraham


On 01/26/11, TJ Mustard musta...@onid.orst.edu wrote:

  On January 25, 2011 at 3:24 PM Justin A. Lemkul jalem...@vt.edu wrote:
   [quoted exchange trimmed - identical to the messages above]
  Awesome question here, we have seen some weird stuff. Sometimes the cluster 
  will give us segmentation faults, then it will fail on our machines, or 
  sometimes not on our iMacs. I know, weird! If EM starts on the cluster it 
  will finish. Where we have issues is in positional restraint (PR) and MD and 
  MD/FEP. It doesn't matter if FEP is on or off in an MD (although we are 
  using SD for these MD/FEP runs).

Good. That rules out FEP as the source of the problem, like I asked in your 
previous thread.

  4. Are the compilers the same on the iMac vs. AMD cluster?

  No, I am using GCC 4.4.4 (x86_64-apple-darwin10) and the cluster is using 
  GCC 4.1.2 (x86_64-redhat-linux). I just did a quick yum search and there 
  doesn't seem to be a newer GCC. We know you are moving to cmake, but we have 
  yet to get it implemented on our cluster successfully.

There have been doubts about the 4.1.x series of GCC compilers for GROMACS - 
and IIRC 4.1.2 in particular (do search the archives yourself). Some time 
back, Berk solicited actual accounts of problems and nobody presented one. So 
we no longer have an official warning against using 

Re: [gmx-users] Two machines, same job, one fails

2011-01-25 Thread Justin A. Lemkul



TJ Mustard wrote:

snip

  1. Do the systems in question crash immediately (i.e., step zero) or 
do they run

  for some time?
 

Step 0, every time.

 

  2. If they give you even a little bit of output, you can analyze 
which energy

  terms, etc go haywire with the tips listed here:
 

All I have seen in these is LINCS errors and water molecules unable to 
be settled.


 


But I will check this out right now, and email if I smell trouble.

 

  
http://www.gromacs.org/Documentation/Terminology/Blowing_Up#Diagnosing_an_Unstable_System

 
  That would help in tracking down any potential bug or error.
 
  3. Is it just the production runs that are crashing, or everything?  
If EM isn't

  even working, that smells even buggier.

Awesome question here, we have seen some weird stuff. Sometimes the 
cluster will give us segmentation faults, then it will fail on our 
machines or sometimes not on our iMacs. I know weird! If EM starts on 
the cluster it will finish. Where we have issues is in positional 
restraint (PR) and MD and MD/FEP. It doesn't matter if FEP is on or off 
in a MD (although we are using SD for these MD/FEP runs).


 


Does "sometimes" refer to different simulations, or multiple invocations of the 
same simulation system?  If you're referencing the fact that system A works 
while system B doesn't, we're talking apples and oranges and it's irrelevant to 
the diagnosis (and perhaps some systems simply require greater finesse or a 
different protocol).  If one system continually fails on one system and works on 
another, that's what we need to be discussing.  Sorry if I've missed something, 
I'm just getting confused.




 
  4. Are the compilers the same on the iMac vs. AMD cluster?

No, I am using GCC 4.4.4 (x86_64-apple-darwin10) and the cluster is using 
GCC 4.1.2 (x86_64-redhat-linux).




Well, I know that for years weird behavior has been attributed to the gcc-4.1.x 
series, including the famous warning on the downloads page:


WARNING: do not use the gcc 4.1.x set of compilers. They are broken. These 
compilers come with recent Linux distributions like Fedora 5/6 etc.


I don't know if those issues were ever resolved (some error in Gromacs that 
wasn't playing nice with gcc, or vice versa).


I just did a quick yum search and there doesn't seem to be a newer GCC. 
We know you are moving to cmake, but we have yet to get it implemented on 
our cluster successfully.




The build system is irrelevant.  You still need a reliable C compiler, whether 
using autoconf or cmake.
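
(If a newer gcc is available under a different name, you can point the Gromacs 
build at it without touching the system default - a sketch, assuming a gcc44 
package exists; adjust names and paths to your site:

   export CC=gcc44
   ./configure --prefix=$HOME/gromacs-4.5.3
   make && make install

The same idea works with cmake via -DCMAKE_C_COMPILER.)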


-Justin

 


Thank you,

TJ Mustard

 


 
  -Justin
 
   
  

   Now I understand that my iMac works, but it only has 2 CPUs and the
   cluster has 320. Since we are running our jobs via a Bennett's Acceptance
   Ratio FEP with 21 lambda windows, using just one 2-CPU machine would
   take too long. Especially since we wish to start pseudo-high-throughput
   drug testing.
  
   
  
   
  

   In my .mdp files now, the only changes are:
  
   (the default setting is on the right of the ;)
  
   
  
   
  

   define   = ; =
  
   ; RUN CONTROL PARAMETERS
   integrator   = sd; = md
   ; Start time and timestep in ps
   tinit= 0; = 0
   dt   = 0.004; = 0.001
   nsteps   = 75   ; = 0 (this one depends on the
   window and particular part of our job)
  
   ; OUTPUT CONTROL OPTIONS
   ; Output frequency for coords (x), velocities (v) and forces (f)
   nstxout  = 1; = 100 (to save on disk space)
   nstvout  = 1; = 100
  
   
  

   ; OPTIONS FOR ELECTROSTATICS AND VDW
   ; Method for doing electrostatics
   coulombtype  = PME; = Cutoff
   rcoulomb-switch  = 0; = 0
   rcoulomb = 1  ; = 1
   ; Relative dielectric constant for the medium and the reaction field
   epsilon_r= 1; = 1
   epsilon_rf   = 1; = 1
   ; Method for doing Van der Waals
   vdw-type = Cut-off; = Cut-off
   ; cut-off lengths   
   rvdw-switch  = 0; = 0

   rvdw = 1  ; = 1
   ; Spacing for the PME/PPPM FFT grid
   fourierspacing   = 0.12; = 0.12
   ; EWALD/PME/PPPM parameters
   pme_order= 4; = 4
   ewald_rtol   = 1e-05; = 1e-05
   ewald_geometry   = 3d; = 3d
   epsilon_surface  = 0; = 0
   optimize_fft = yes; = no
  
   
  

   ; OPTIONS FOR WEAK COUPLING ALGORITHMS
   ; Temperature coupling 
   tcoupl   = v-rescale; = No

   nsttcouple   = -1; = -1
   nh-chain-length  = 10; = 10
   ; Groups to couple separately
   tc-grps  = System; =
   ; Time constant (ps) and reference temperature (K)
   tau-t= 0.1; =
   ref-t= 300; =
   ; Pressure coupling 
   Pcoupl

Re: [gmx-users] (no subject)

2011-01-25 Thread Justin A. Lemkul



trevor brown wrote:

Dear Justin/Mark,
I would like to take a private course on Gromacs in Holland.
Is there such a facility? Whom should I talk to?  
 


Can't help you there.  I'm in the U.S.


Or is there a workshop in the near future?


Any workshops will surely be announced on the list, but there haven't been any 
Gromacs-specific ones for several years.


-Justin


best wishes
 



--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin


--
gmx-users mailing list    gmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] Two machines, same job, one fails

2011-01-25 Thread TJ Mustard

  On January 25, 2011 at 3:54 PM Mark Abraham mark.abra...@anu.edu.au wrote:

   On 01/26/11, TJ Mustard musta...@onid.orst.edu wrote:

    [quoted exchange trimmed - identical to the messages above]

Re: [gmx-users] Two machines, same job, one fails

2011-01-25 Thread TJ Mustard

  On January 25, 2011 at 3:53 PM Justin A. Lemkul jalem...@vt.edu wrote:

   TJ Mustard wrote:

   snip
   [quoted points 1-3 and the replies trimmed - identical to the exchange above]
   Does "sometimes" refer to different simulations, or multiple invocations of the
   same simulation system? If you're referencing the fact that system A works
   while system B doesn't, we're talking apples and oranges and it's irrelevant to
   the diagnosis (and perhaps some systems simply require greater finesse or a
   different protocol). If one system continually fails on one system and works on
   another, that's what we need to be discussing. Sorry if I've missed something,
   I'm just getting confused.




The trends I see are that some jobs that segfault on the cluster will also fault on our machines, but others that fail on the cluster via segmentation faults will work on our machines. Never does a single job sometimes fail and sometimes not; it either does or does not, every time. We have just gotten used to rebuilding our system and starting these again. We have also gotten used to doing EM on our machines and transferring this folder to be completed on the cluster. Once there it can finish the PR, MD and FEP.




  
   

   [quoted point 4 and the gcc-4.1.x warning trimmed - identical to Justin's message above]
Yes, I am looking into this now. I hope this is our problem and not some hugely time-consuming underlying problem with our system. Thank you.



Thanks again,

TJ Mustard




[gmx-users] Fw:Re:gmx-users Digest, Vol 81, Issue 192; PME, Estimate for the relative

2011-01-25 Thread gromacs
 
 
 

Hi Justin,


The .mdp and fourierspacing settings are below; can you tell me what is wrong?



; NEIGHBORSEARCHING PARAMETERS
; nblist update frequency
nstlist  = 5
; ns algorithm (simple or grid)
ns_type  = grid
; Periodic boundary conditions: xyz (default), no (vacuum)
; or full (infinite systems only)
pbc  = xyz
; nblist cut-off   
rlist= 0.9
domain-decomposition = no

; OPTIONS FOR ELECTROSTATICS AND VDW
; Method for doing electrostatics
coulombtype  = PME
rcoulomb-switch  = 0
rcoulomb = 0.9
; Dielectric constant (DC) for cut-off or DC of reaction field
epsilon-r= 1
; Method for doing Van der Waals
vdw-type = Cut-off
; cut-off lengths  
rvdw-switch  = 0
rvdw = 1.2
; Apply long range dispersion corrections for Energy and Pressure
DispCorr = EnerPres
; Extension of the potential lookup tables beyond the cut-off
table-extension  = 1
; Spacing for the PME/PPPM FFT grid
fourierspacing   = 0.12
; FFT grid size, when a value is 0 fourierspacing will be used
fourier_nx   = 0
fourier_ny   = 0
fourier_nz   = 0
; EWALD/PME/PPPM parameters
pme_order= 4
ewald_rtol   = 1e-05
ewald_geometry   = 3d
epsilon_surface  = 0
optimize_fft = no

; GENERALIZED BORN ELECTROSTATICS
; Algorithm for calculating Born radii
gb_algorithm = Still
; Frequency of calculating the Born radii inside rlist
nstgbradii   = 1
; Cutoff for Born radii calculation; the contribution from atoms
; between rlist and rgbradii is updated every nstlist steps
rgbradii = 2
; Salt concentration in M for Generalized Born models
gb_saltconc  = 0

; IMPLICIT SOLVENT (for use with Generalized Born electrostatics)
implicit_solvent = No

; OPTIONS FOR WEAK COUPLING ALGORITHMS
; Temperature coupling 
Tcoupl   = v-rescale
; Groups to couple separately
tc-grps  = System
; Time constant (ps) and reference temperature (K)
tau_t= 0.1
ref_t= 300
; Pressure coupling
Pcoupl   = no
Pcoupltype   = isotropic
; Time constant (ps), compressibility (1/bar) and reference P (bar)
tau_p= 1
compressibility  = 4.5e-5
ref_p= 1.0
; Random seed for Andersen thermostat
andersen_seed= 815131

; SIMULATED ANNEALING 
; Type of annealing for each temperature group (no/single/periodic)
annealing= no
; Number of time points to use for specifying annealing in each group
annealing_npoints=
; List of times at the annealing points for each group
annealing_time   =
; Temp. at each annealing point, for each group.
annealing_temp   =

; GENERATE VELOCITIES FOR STARTUP RUN
gen_vel  = yes
gen_temp = 300
gen_seed = 1993










Message: 4
Date: Tue, 25 Jan 2011 08:05:44 -0500
From: Justin A. Lemkul jalem...@vt.edu
Subject: Re: [gmx-users] V-rescale thermostat, PME, Estimate for the
   relativecomputational load of the PME mesh part: 0.97
To: Discussion list for GROMACS users gmx-users@gromacs.org
Message-ID: 4d3ecaa8.60...@vt.edu
Content-Type: text/plain; charset=UTF-8; format=flowed



gromacs wrote:
 HI Friends,
 
 I get the following note,
 
 The Berendsen thermostat does not generate the correct kinetic energy
   distribution. You might want to consider using the V-rescale thermostat.
 
 I want to keep the T at 300K, so does it matter to select any thermostat 
 method?
 

The choice of thermostat certainly does matter, otherwise you wouldn't get this 
note.  Refer to the numerous discussions in the list archive as to why one 
would or would not (usually) use the Berendsen thermostat, as well as:

http://www.gromacs.org/Documentation/Terminology/Thermostats
http://www.gromacs.org/Documentation/Terminology/Berendsen

 
 Another note when i use PME:
 
 Estimate for the relative computational load of the PME mesh part: 0.97
 
 NOTE 1 [file aminoacids.dat, line 1]:
   The optimal PME mesh load for parallel simulations is below 0.5
   and for highly parallel simulations between 0.25 and 0.33,
   for higher performance, increase the cut-off and the PME grid spacing
 
 So what is the reason? I use coulombtype = PME
 

Your combination of settings (rcoulomb, fourierspacing, and perhaps a few 
others) indicates that your simulation is going to spend an inordinate amount 
of time doing PME calculations, so your performance will suffer.  Seeing your 
entire .mdp file would be necessary if you want further guidance.

-Justin

 Is my setting proper?
 
 Thanks
 
 

-- 


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee

Re: [gmx-users] V-rescale thermostat, PME, Estimate for the relative computational load of the PME mesh part: 0.97

2011-01-25 Thread Justin A. Lemkul



gromacs wrote:
 
 
 


Hi Justin,

The .mdp and fourierspacing settings are below; can you tell me what is wrong?




I see nothing wrong with the .mdp file, per se, although it does not appear 
complete (no integrator, nsteps, etc).  Perhaps that was your intention, but if 
nsteps is left at default (0), then you shouldn't worry about poor performance :)


The amount of time spent doing PME calculations depends on what is in your 
system.  While your settings seems reasonable in a generic sense, if you have a 
large, highly-charged system, then much of the work will be allocated to PME. 
In that case, you may have to re-think how you balance the real space and 
reciprocal space terms.  This involves possibly increasing (slightly) the values 
of rcoulomb and fourierspacing, if and only if the PME load is prohibitive and 
you are sure you won't incur any unnecessary error.  g_pme_error may help there.
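
As an illustration only - these numbers are a starting point, not a 
recommendation, and should be validated before production use - one might try:

rlist            = 1.0
rcoulomb         = 1.0
fourierspacing   = 0.14

and then something like g_pme_error -s topol.tpr to estimate the effect on 
the Ewald accuracy before and after the change.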


-Justin





[gmx-users] More than one settle type

2011-01-25 Thread Shalabh

Hi,

I am trying to find the free energy of a water molecule as it adsorbs 
on a surface. My outline is:


1) Run the simulation
2) Analyze the results, find out which water molecule(s) are adsorbed
3) Create different groups for the adsorbed water molecule(s), create 
new .gro file (to identify the adsorbed water with a different residue 
name), .mdp file and .tpr file. The remaining water molecules have the 
same previous group name.
4) Use mdrun -rerun to evaluate the energy between the new water 
group(s), the surface and the remaining water.


When I try to implement this, I get an error in step 4 while running 
mdrun -rerun, saying More than one settle type. Suggestion: change the 
least-used settle constraints into 3 normal constraints. The .itp file 
which I created for the new water group is identical to the spc.itp 
file (with the only change being the residue name).


A similar problem was raised earlier as well, and I am not sure 
whether it was resolved:

http://lists.gromacs.org/pipermail/gmx-users/2010-January/048058.html

Basically, what I am trying to do is: if I have 1000 water molecules in 
my system, can I have 999 of them in one group SOL, and the remaining 1 
water molecule in a different group, say SOL1? The .itp files and 
parameters are identical, apart from the group (residue) name. And I am 
getting the above error. Anyone any ideas?


Thanks in advance!

--
gmx-users mailing list    gmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] More than one settle type

2011-01-25 Thread Mark Abraham


On 01/26/11, Shalabh  shal...@ufl.edu wrote:
 Hi,
 
 I am trying to find the free energy of a water molecule as it adsorbs on a 
 surface. My outline is:
 
 1) Run the simulation
 2) Analyze the results, find out which water molecule(s) are adsorbed
 3) Create different groups for the adsorbed water molecule(s), create new 
 .gro file (to identify the adsorbed water with a different residue name), .mdp 
 file and .tpr file. The remaining water molecules have the same previous 
 group name.
 

You're making life hard for yourself by renaming the residue, which you don't 
need to do. You merely need to divide the solvent into two groups for energy 
evaluation, and you can do that with a suitably constructed index file and 
matching .mdp as grompp input. If you use g_select with a suitable geometric 
criterion for adsorption, then it can write the index file for you (and there 
are various ways you might visualise the result to check it). All you need to 
do is decide on the criterion, name the groups, modify the .mdp and feed it to 
grompp.
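
A sketch of what that might look like (the group name, cutoff, and file names 
are all illustrative):

g_select -s topol.tpr -f traj.xtc -on adsorbed.ndx \
  -select '"adsorbed" resname SOL and within 0.35 of group "Surface"'

then list the new group(s) under energygrps in the .mdp, and

grompp -f rerun.mdp -c conf.gro -p topol.top -n adsorbed.ndx -o rerun.tpr
mdrun -s rerun.tpr -rerun traj.xtc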


 
 4) Use mdrun -rerun to evaluate the energy between the new water group(s), 
 the surface and the remaining water.
 
 When I try to implement this, I get an error in step 4 while running mdrun 
 -rerun, saying More than one settle type. Suggestion: change the least-used 
 settle constraints into 3 normal constraints. The .itp file which I created 
 for the new water group is identical to the spc.itp file (with the only 
 change being the residue name).
 
 This similar problem was raised earlier as well, which I am not sure if it 
 was resolved:
 http://lists.gromacs.org/pipermail/gmx-users/2010-January/048058.html
 
 Basically, what I am trying to do is: if I have 1000 water molecules in my 
 system, can I have 999 of them in one group SOL, and the remaining 1 water 
 molecule in a different group, say SOL1? The .itp files and parameters are 
 identical, apart from the group (residue) name. And I am getting the above 
 error. Anyone any ideas?
 

Groups and molecule blocks in the topology need not relate to each other at 
all. It is simplest to leave your .top file alone and follow the approach I 
suggested above.

Mark
-- 
gmx-users mailing list    gmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

[gmx-users] QMMM with ORCA

2011-01-25 Thread Xiaohu Li
Hi, All,
 I'm trying to see if anybody has experience using the interface of
gromacs and ORCA (since it's free). I know that the following link gave
information on how:
http://wwwuser.gwdg.de/~ggroenh/qmmm.html#code
 But the gromacs in the above link is quite old (3.2). I
downloaded the latest 4.5.3, followed the instructions in the above link,
and tried to optimize a simple cluster (no pbc) where part of it is
treated using QM. Here is the example mdp file:
=
title   =  cpeptide
integrator  =  steep   ; integrator includes energy minimization algorithms
dt  =  0.002; ps !
nsteps  =   1
nstlist =  1
ns_type =  simple
rlist   =  3.0
rcoulomb=  3.0
coulombtype = cut-off
vdwtype = cut-off
rvdw= 3.0
pbc =  no
periodic_molecules  =  no
constraints = none
energygrps  = qm_part mm_part
; QM/MM calculation stuff
QMMM = yes
QMMM-grps = qm_part
QMmethod = rhf
QMbasis = 3-21G
QMMMscheme = oniom
QMcharge = 0
QMmult = 1
;
;   Energy minimizing stuff
;
emtol   =  60   ; minimization threshold (kJ/mol.nm-1); 1
hartree/bohr = 49614.75241 kJ/mol.nm-1; 1 kJ/mol.nm-1 = 2.01553e-5 hartree/bohr
emstep  =  0.01  ; minimization step in nm
=
I set up the BASENAME and ORCA_PATH as told in the instructions.
First of all, normal electronic embedding simply gave a segmentation
fault right after it printed information on the number of optimization
steps.

So I switched to ONIOM. This time, at least, orca is called and energy and
gradient are both generated. However, when it comes to reading the energy and
gradient, it always crashes while reading the gradient. This is at *line
346* of the source code src/mdlib/qm_orca.c:

sscanf(buf, "%lf\n", &QMgrad[k][XX]);

a segmentation fault error is printed. If I replace QMgrad[k][XX] by a
temporary variable temp,
 sscanf(buf, "%lf\n", &temp);
temp gets the correct value, and if I then use
QMgrad[k][XX] = temp
and try to print QMgrad[k][XX], a bus error is printed.
I did some research online; it seems that usually this implies a memory bug in
the code, which is the most difficult kind of bug one can encounter.
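(One way to confirm a memory bug like this, as a sketch - the options depend
on your build:

valgrind --track-origins=yes mdrun -nt 1 -s topol.tpr

valgrind will usually point at the first invalid read or write, which is much
easier to act on than a bus error.)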
So has anyone successfully used gromacs and orca to do QMMM?
Generally, would anyone recommend using gromacs to do QMMM?

Cheers,
Xiaohu
-- 
gmx-users mailing list    gmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

[gmx-users] RE: stopping a simulation in between without crashing any trajectories

2011-01-25 Thread bharat gupta
Hi,

I am running a 10 ns simulation and it has completed 5 ns so far. I want to
stop it now and retrieve the trajectories without any errors. Is it
possible to do that?
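
(A sketch of one approach, assuming a 4.x mdrun, which traps signals: send the
running process a TERM signal,

kill -TERM <pid-of-mdrun>

and it should stop at a convenient step and write a checkpoint, leaving the
.trr/.xtc written so far intact. gmxcheck -f traj.xtc is a quick way to verify
the files afterwards; for future runs, mdrun -maxh puts a wall-clock limit on
the run from the start.)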

-- 
Bharat
-- 
gmx-users mailing list    gmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists