Re: [gmx-users] Re: Bilayer COM removal issue: Large VCM

2013-11-13 Thread Tsjerk Wassenaar
Hi Rajat,

If you remove COM motion on the bilayer as a whole, there may still be relative
COM motion between the leaflets. If that relative motion is significant and you
switch to removing COM motion per leaflet, the program suddenly finds itself
resetting the COM over a large distance. As for equilibration: you equilibrated
with comm_grps = SOL DMPC, so the system is not equilibrated for another scheme.
You can solve this either by regenerating velocities or by running short cycles
with the time step increasing from very small to normal.
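For the velocity route, a minimal .mdp sketch (gen-temp here assumes the 310 K
ref-t used later in this thread; gen-seed = -1 picks a pseudo-random seed):

gen-vel  = yes   ; assign new velocities from a Maxwell distribution
gen-temp = 310   ; generation temperature (K), matching ref-t
gen-seed = -1    ; pseudo-random seed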

Hope it helps,

Tsjerk


On Wed, Nov 13, 2013 at 8:06 AM, rajat desikan rajatdesi...@gmail.com wrote:

 Hi All,
 Any suggestions?

 Thanks,


 On Mon, Nov 11, 2013 at 12:38 AM, rajat desikan rajatdesi...@gmail.com
 wrote:

  Hi All,
  I am experiencing a few problems in membrane simulations with respect to COM removal.
  I downloaded a 400 ns pre-equilibrated Slipid-DMPC membrane with all the
  accompanying files. I then carried out the following steps:
  1) energy minimization
  2) NVT Eq - 100 ps
  3) NPT Eq - 250 ps (Berendsen temp, Pres coupling)
 
  Then I used g_select to select the upper and lower DMPC leaflets (a sketch of
  such a selection is below), and then carried out a 250 ps NPT equilibration
  again. The only change was:
  comm-grps = SOL DMPC  ==>  comm-grps = SOL upper lower
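  For reference, a minimal g_select sketch for building such leaflet index groups
  (the phosphorus atom name P and the 3.5 nm z-cutoff are placeholders that depend
  on the force field and on where the bilayer midplane sits in the box):

  g_select -s npt.tpr -f npt.gro -on leaflets.ndx \
      -select '"upper" resname DMPC and same residue as (name P and z > 3.5);
               "lower" resname DMPC and same residue as (name P and z <= 3.5)'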
 
  At every step in the log file, I get messages like the following:

   Step   Time   Lambda
   124000 248.0  0.0

   Large VCM(group lower): -0.00051, -0.00515, -0.00652, Temp-cm: 8.11828e+29

   Energies (kJ/mol)
   U-B             Proper Dih.    Improper Dih.   LJ-14           Coulomb-14
   7.23818e+04     4.19778e+04    6.46641e+02     4.54801e+03     -1.45245e+05
   LJ (SR)         LJ (LR)        Disper. corr.   Coulomb (SR)    Coul. recip.
   2.79689e+04     -3.78407e+03   -2.10679e+03    -5.84134e+05    -8.87497e+04
   Potential       Kinetic En.    Total Energy    Temperature     Pres. DC (bar)
   -6.76497e+05    1.76468e+05    -5.00029e+05    3.10424e+02     -1.05704e+02
   Pressure (bar)  Constr. rmsd
   -1.85927e+02    6.42934e-06

   Large VCM(group lower): -0.00187, -0.00369,  0.00032, Temp-cm: 2.02076e+29
   Large VCM(group lower): -0.00725, -0.00278, -0.00549, Temp-cm: 1.05988e+30
   Large VCM(group lower):  0.00020,  0.00308, -0.00176, Temp-cm: 1.48126e+29
   Large VCM(group lower): -0.00541,  0.00546, -0.00166, Temp-cm: 7.24656e+29
   Large VCM(group lower): -0.00220,  0.00362, -0.00741, Temp-cm: 8.53812e+29
   Large VCM(group lower):  0.00140, -0.00160,  0.00029, Temp-cm: 5.39679e+28
   Large VCM(group lower): -0.00056, -0.00293, -0.00364, Temp-cm: 2.59422e+29
   Large VCM(group lower): -0.00172, -0.00260,  0.00494, Temp-cm: 3.99945e+29
   Large VCM(group lower):  0.00252,  0.00594,  0.00068, Temp-cm: 4.93342e+29

   DD  step 124999  vol min/aver 0.702  load imb.: force 1.3%  pme mesh/force 0.636
 
  I do not know what to make of it. There are no issues when I remove COM motion
  for the entire system. I have seen this issue come up a few times in the
  archives too, but I didn't find a satisfactory solution there, since this
  bilayer was already very well equilibrated.
 
  I would appreciate any suggestions. Thank you.
 
 
  --
  Rajat Desikan (Ph.D Scholar)
  Prof. K. Ganapathy Ayappa's Lab (no 13),
  Dept. of Chemical Engineering,
  Indian Institute of Science, Bangalore
 



 --
 Rajat Desikan (Ph.D Scholar)
 Prof. K. Ganapathy Ayappa's Lab (no 13),
 Dept. of Chemical Engineering,
 Indian Institute of Science, Bangalore




-- 
Tsjerk A. Wassenaar, Ph.D.


Re: [gmx-users] Re: Bilayer COM removal issue: Large VCM

2013-11-13 Thread rajat desikan
Hi Tsjerk,
That was very sage advice! Thank you. I will try regenerating velocities
and see if the motion goes away...


On Wed, Nov 13, 2013 at 2:00 PM, Tsjerk Wassenaar tsje...@gmail.com wrote:

 Hi Rajat,

 If you remove COM motion on the bilayer as a whole, there may still be relative
 COM motion between the leaflets. If that relative motion is significant and you
 switch to removing COM motion per leaflet, the program suddenly finds itself
 resetting the COM over a large distance. As for equilibration: you equilibrated
 with comm_grps = SOL DMPC, so the system is not equilibrated for another scheme.
 You can solve this either by regenerating velocities or by running short cycles
 with the time step increasing from very small to normal.

 Hope it helps,

 Tsjerk






-- 
Rajat Desikan (Ph.D Scholar)
Prof. K. Ganapathy Ayappa's Lab (no 13),
Dept. of Chemical Engineering,
Indian Institute of Science, Bangalore

Re: [gmx-users] Re: Bilayer COM removal issue: Large VCM

2013-11-13 Thread rajat desikan
An update for anyone interested: regenerating velocities by itself did not
solve the problem. I had to regenerate velocities and couple the upper and
lower leaflets separately to the thermostat to equilibrate the system. To
smooth the equilibration further, I used a 0.5 fs timestep instead of 2 fs
(though this is probably unnecessary). Thank you once more, Tsjerk.

Old .mdp:
comm-grps = SOL DMPC
tcoupl    = v-rescale  ; Thermostat
tc-grps   = DMPC SOL   ; Couple lipids and SOL separately
tau-t     = 0.1 0.1    ; Time constant for temperature coupling
ref-t     = 310 310    ; Desired temperature (K)

New .mdp:
comm-grps = SOL upper lower
tcoupl    = v-rescale        ; Thermostat, v-rescale is also fine
tc-grps   = upper lower SOL  ; Couple lipid leaflets and SOL separately
tau-t     = 0.1 0.1 0.1      ; Time constant for temperature coupling
ref-t     = 310 310 310      ; Desired temperature (K)


On Wed, Nov 13, 2013 at 4:07 PM, rajat desikan rajatdesi...@gmail.com wrote:

 Hi Tsjerk,
 That was very sage advice! Thank you. I will try regenerating velocities
 and see if the motion goes away...



Re: [gmx-users] Re: g_analyze

2013-11-12 Thread bharat gupta
Sorry, I attached the wrong file. Here's the average file generated from
one of the files I sent in my last mail. I used the command g_analyze -f
hbond_115-water.xvg -av hbond_115-water-avg.xvg. Here's the file obtained
from this command:

https://www.dropbox.com/s/sovzk40cudznfjw/hbond_115-water-avg.xvg

Now, if you look at the graph (in my previous mail) and the average file,
the two correlate well. My doubt is about interpreting the result from
g_analyze. The value 7.150740e+00 implies that on average 7 hydrogen
bonds are formed during the simulation time of 5 ns to 10 ns. What, then,
does the average file or its graph tell us?



On Mon, Nov 11, 2013 at 9:58 PM, Justin Lemkul jalem...@vt.edu wrote:



 On 11/11/13 4:06 AM, bharat gupta wrote:

 In addition to my previous question, I have another question about
 g_analyze. When I used the hbond.xvg file to get the average and plotted
 the average.xvg file, I found that the average value is around 4 to 5
 according to the graph. But g_analyze in its final calculation gives 7.150
 as the average value... Here's the link for the graph and the result of
 the average value calculated by g_analyze:

                                        std. dev.      relative deviation of
                        standard       ---------      cumulants from those of
 set      average       deviation      sqrt(n-1)      a Gaussian distribution
                                                      cum. 3     cum. 4
 SS1   7.150740e+00   8.803173e-01   1.760635e-02      0.062      0.163
 SS2   1.490604e+00   1.164761e+00   2.329523e-02      0.495      0.153

 https://www.dropbox.com/s/1vqixenyerha7qq/115-water.png

 Here's the link to the hbond.xvg file and its averaged file:
 https://www.dropbox.com/s/4n0m47o3mrjn3o8/hbond_115-water.xvg


 Neither of these files produces output that corresponds to the PNG image
 above. Both files have values in the 6-9 H-bond range and thus agree with the
 g_analyze output, which I can reproduce.  I suspect you're somehow getting
 your files mixed up.


 -Justin


 On Mon, Nov 11, 2013 at 3:30 PM, bharat gupta bharat.85.m...@gmail.com
 wrote:

  Thank you for informing me about g_rdf...

 Is it possible to dump the structure with those average water molecules
 interacting with the residues? I generated the hbond.log file, which gives
 the details, but I need to generate a figure for this.



 On Mon, Nov 11, 2013 at 10:40 AM, Justin Lemkul jalem...@vt.edu wrote:



 On 11/10/13 8:38 PM, bharat gupta wrote:

 But trjorder can be used to calculate the hydration layer or shell around
 residues... right?


  Yes, but I also tend to think that integrating an RDF is a more
 straightforward way of doing that.  With trjorder, you set some arbitrary
 cutoff that may or may not be an informed decision; with an RDF it is
 clear where the hydration layers are.

 -Justin



  On Mon, Nov 11, 2013 at 10:35 AM, Justin Lemkul jalem...@vt.edu
 wrote:



 On 11/10/13 8:30 PM, bharat gupta wrote:

   Thanks for your reply. I was missing the scientific notation part. Now
 everything is fine.

 Regarding trjorder, it doesn't measure H-bonds but gives the water nearest
 to the protein.


   I wouldn't try to draw any sort of comparison between the output of
 trjorder and g_hbond.  If you want to measure H-bonds, there's only one
 tool for that.


 -Justin

 --
 ==

 Justin A. Lemkul, Ph.D.
 Postdoctoral Fellow

 Department of Pharmaceutical Sciences
 School of Pharmacy
 Health Sciences Facility II, Room 601
 University of Maryland, Baltimore
 20 Penn St.
 Baltimore, MD 21201

 jalem...@outerbanks.umaryland.edu | (410) 706-7441

 ==

Re: [gmx-users] Re: g_analyze

2013-11-12 Thread bharat gupta
Hi,

I tried g_select to dump the structure with the interacting water
molecules, but I don't know how to do that. I searched for some
threads in the discussion but wasn't able to find anything related to my
need. Can you explain how I can do that?



[gmx-users] Re: installation error under openSuse 12.2

2013-11-12 Thread kolnkempff
Thank you so much Justin.  On the one hand, I feel dumb because I could have
sworn that I was using a clean build directory.  On the other hand, I
obviously lost track of what I was doing because your suggestion worked like
a charm!

Koln



Re: [gmx-users] Re: Reaction field zero and ions

2013-11-12 Thread Justin Lemkul



On 11/11/13 12:08 PM, Williams Ernesto Miranda Delgado wrote:

Hello
If I did the MD simulation using PME and neutralized with ions, and I want
to rerun this time with reaction field zero, is there any problem if I
keep the ions? This is for LIE calculation. I am using AMBER99SB.


Why do you think it necessary to delete them?

-Justin

--
==

Justin A. Lemkul, Ph.D.
Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441

==


Re: [gmx-users] Re: Reaction field zero and ions

2013-11-12 Thread Dr. Vitaly Chaban
There is no problem having ions present while using the Reaction-Field treatment.
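
A minimal sketch of such a rerun, assuming GROMACS 4.x option names (the 1.4 nm
cut-off is a placeholder; check the manual for the cut-off requirements that
Reaction-Field-zero imposes):

; rerun .mdp excerpt
coulombtype = Reaction-Field-zero
rcoulomb    = 1.4    ; placeholder cut-off (nm)

grompp -f rerun.mdp -c conf.gro -p topol.top -o rerun.tpr
mdrun -s rerun.tpr -rerun md.xtc -e rerun.edr

The ions simply stay in the topology; nothing needs to be deleted.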


Dr. Vitaly V. Chaban


On Mon, Nov 11, 2013 at 7:06 PM, Justin Lemkul jalem...@vt.edu wrote:


 On 11/11/13 12:08 PM, Williams Ernesto Miranda Delgado wrote:

 Hello
 If I did the MD simulation using PME and neutralized with ions, and I want
 to rerun this time with reaction field zero, is there any problem if I
 keep the ions? This is for LIE calculation. I am using AMBER99SB.


 Why do you think it necessary to delete them?

 -Justin



Re: [gmx-users] Re: g_analyze

2013-11-12 Thread Justin Lemkul



On 11/11/13 5:39 PM, bharat gupta wrote:

Sorry, I attached the wrong file. Here's the average file generated from
one of the files I sent in my last mail. I used the command g_analyze -f
hbond_115-water.xvg -av hbond_115-water-avg.xvg. Here's the file obtained
from this command:

https://www.dropbox.com/s/sovzk40cudznfjw/hbond_115-water-avg.xvg

Now, if you look at the graph (in my previous mail) and the average file,
the two correlate well. My doubt is about interpreting the result from
g_analyze. The value 7.150740e+00 implies that on average 7 hydrogen
bonds are formed during the simulation time of 5 ns to 10 ns. What, then,
does the average file or its graph tell us?



It's an average over sets.  It is not equivalent to the output printed to the 
screen, nor is it supposed to be.  The value printed to the screen is the actual 
average of the data set of interest, as is intuitive from your values: an 
average of 4 is impossible if all the data points are in the range of 6-9.
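
For instance, the screen average for set SS1 can be reproduced directly from the
data file; a minimal sketch (assumes SS1 is in column 2 and that comment lines
start with @ or #):

awk '!/^[@#]/ { s += $2; n++ } END { print s/n }' hbond_115-water.xvg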


-Justin





Re: [gmx-users] Re: g_analyze

2013-11-12 Thread Justin Lemkul



On 11/11/13 6:56 PM, bharat gupta wrote:

Hi,

I tried g_select to dump the structure with the interacting water
molecules, but I don't know how to do that. I searched for some
threads in the discussion but wasn't able to find anything related to my
need. Can you explain how I can do that?



Start with g_select -select 'help all' and see what you can determine.  Such 
selections are rather straightforward and have been explained several times on 
the list.  If you need help, show us what you're doing and describe why it isn't 
what you want.  It will ultimately save a lot of time.
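
For instance, a minimal sketch for collecting the waters around one residue into
an index group (the residue number 115 and the 0.35 nm cut-off are placeholders,
and the exact keyword set should be checked against 'help all'):

g_select -f md.xtc -s md.tpr -on water115.ndx \
    -select '"shell115" resname SOL and same residue as (within 0.35 of resnr 115)'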


-Justin




Re: [gmx-users] Re: g_analyze

2013-11-12 Thread bharat gupta
Thanks, Justin, for your replies. I understood the g_analyze-related data. I
tried g_analyze to dump the structures as you said, but I didn't find any
switch that can be used to dump the structure in PDB format.


On Tue, Nov 12, 2013 at 10:15 PM, Justin Lemkul jalem...@vt.edu wrote:



 On 11/11/13 6:56 PM, bharat gupta wrote:

 Hi,

 I tried g_select to dump the structure with the interacting water
 molecules, but I don't know how to do that. I searched for some
 threads in the discussion but wasn't able to find anything related to my
 need. Can you explain how I can do that?


 Start with g_select -select 'help all' and see what you can determine.
  Such selections are rather straightforward and have been explained several
 times on the list.  If you need help, show us what you're doing and
 describe why it isn't what you want.  It will ultimately save a lot of time.

 -Justin




Re: [gmx-users] Re: g_analyze

2013-11-12 Thread Justin Lemkul



On 11/12/13 8:33 AM, bharat gupta wrote:

Thanks, Justin, for your replies. I understood the g_analyze-related data. I
tried g_analyze to dump the structures as you said, but I didn't find any
switch that can be used to dump the structure in PDB format.



Because that's not the function of g_analyze.  Use trjconv -dump with a suitable 
index file (from g_select).
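
For instance, a minimal sketch (the file names and the 5000 ps dump time are
placeholders; the -n index file is the one written by g_select):

trjconv -f md.xtc -s md.tpr -n water115.ndx -dump 5000 -o frame5000.pdb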


-Justin




[gmx-users] Re: ok, thank you

2013-11-12 Thread Williams Ernesto Miranda Delgado


 Today's Topics:

1. Restarting a simulation after replacing an empty md.trr file
   (arun kumar)
2. Re: Re: Reaction field zero and ions (Justin Lemkul)
3. Re: Calculating diffusion coefficient in three dimension
   (Dr. Vitaly Chaban)
4. Re: Re: Reaction field zero and ions (Dr. Vitaly Chaban)
5. Re: Re: g_analyze (Justin Lemkul)
6. Re: Re: g_analyze (Justin Lemkul)


 --

 Message: 1
 Date: Tue, 12 Nov 2013 11:45:05 +0530
 From: arun kumar arunjones.kuma...@gmail.com
 Subject: [gmx-users] Restarting a simulation after replacing an empty
   md.trr  file
 To: gmx-users@gromacs.org
 Message-ID:
   cagm9vj8dn+fqb-cigjdvv+i1mq7mc2cxvkde3gsp5+lwhds...@mail.gmail.com
 Content-Type: text/plain; charset=ISO-8859-1

 Dear Gromacs users,

 I am running a 50 ns simulation of a protein having nearly 700 residues on
 60 threads (GROMACS 4.6.3).
 At one point I ran into a disk-space problem, so I deleted the md.trr file
 and created an empty md.trr file. When I tried to restart the simulation
 from the checkpoint file on 100 threads [ mdrun -v -deffnm md -cpi md.cpt -nt
 100 ], I got a note and an error as follows:

 Reading checkpoint file md.cpt generated:
   #PME-nodes mismatch,
 current program: 100
 checkpoint file: 60
 Gromacs binary or parallel settings not identical to previous run.
 Continuation is exact, but is not guaranteed to be binary identical.
 ...

 Source code file: checkpoint.c, line: 1767
 Fatal error:
 Can't read 1048576 bytes of 'md.trr' to compute checksum. The file
 has been replaced or its contents has been modified.

 Please help me in overcoming this problem.

 Thank you.

 --
 Arun Kumar Somavarapu
 Project-JRF
 Dr. Pawan Gupta's lab
 Protein Science and Engineering Dept,
 Institute of Microbial Technology,
 Sec 39-A, Chandigarh - 160036.


 --

 Message: 3
 Date: Tue, 12 Nov 2013 13:02:54 +0100
 From: Dr. Vitaly Chaban vvcha...@gmail.com
 Subject: Re: [gmx-users] Calculating diffusion coefficient in three
   dimension
 To: Discussion list for GROMACS users gmx-users@gromacs.org
 Message-ID:
   capxdd+abay6mj_dkn6_k+mkbuty4eyzspdvdxj+m-m-2zae...@mail.gmail.com
 Content-Type: text/plain; charset=ISO-8859-1

 MSD is 3D by default.
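
For example, a minimal sketch (file and group names are placeholders; g_msd's
-type/-lateral options restrict the MSD to fewer dimensions and are simply
left out here):

g_msd -f traj.xtc -s topol.tpr -n lipids.ndx -o msd_3d.xvg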


 Dr. Vitaly V. Chaban


 On Tue, Nov 12, 2013 at 6:01 AM, Venkat Reddy venkat...@gmail.com wrote:
 Dear all,
 I am simulating a spherical lipid vesicle. I want to calculate the
 diffusion coefficient for each lipid component in 3D. How can I calculate
 it using g_msd (or any other tool, like g_velacc)?

 Thank you for your concern

 --
 With Best Wishes
 Venkat Reddy Chirasani

Re: mdrun on 8-core AMD + GTX TITAN (was: Re: [gmx-users] Re: Gromacs-4.6 on two Titans GPUs)

2013-11-12 Thread Szilárd Páll
                                     501      358.120        0.072     1.8
 --------------------------------------------------------------------------
  Total                                     20029.846        4.006   100.0
 --------------------------------------------------------------------------

 Force evaluation time GPU/CPU: 4.006 ms/2.578 ms = 1.554
 For optimal performance this ratio should be close to 1!


 NOTE: The GPU has 20% more load than the CPU. This imbalance causes
       performance loss, consider using a shorter cut-off and a finer PME
       grid.

                Core t (s)   Wall t (s)      (%)
        Time:   216205.510    27036.812    799.7
                              7h30:36
                (ns/day)    (hour/ns)
 Performance:       31.956        0.751


 ### Two GPUs #

  R E A L   C Y C L E   A N D   T I M E   A C C O U N T I N G

  Computing:          Nodes  Th.  Count   Wall t (s)    G-Cycles     %
 ----------------------------------------------------------------------
  Domain decomp.        2     4      10      339.490   10900.191   1.5
  DD comm. load         2     4   49989        0.262       8.410   0.0
  Neighbor search       2     4      11      481.583   15462.464   2.2
  Launch GPU ops.       2     4    1002      579.283   18599.358   2.6
  Comm. coord.          2     4     490      523.096   16795.351   2.3
  Force                 2     4     501     1545.584   49624.951   6.9
  Wait + Comm. F        2     4     501      821.740   26384.083   3.7
  PME mesh              2     4     501    11097.880  356326.030  49.5
  Wait GPU nonlocal     2     4     501     1001.868   32167.550   4.5
  Wait GPU local        2     4     501        8.613     276.533   0.0
  NB X/F buffer ops.    2     4    1982     1061.238   34073.781   4.7
  Write traj.           2     4    1025        5.681     182.419   0.0
  Update                2     4     501     1692.233   54333.503   7.6
  Constraints           2     4     501     2316.145   74365.788  10.3
  Comm. energies        2     4     101       15.802     507.373   0.1
  Rest                  2                    908.383   29165.963   4.1
 ----------------------------------------------------------------------
  Total                 2                  22398.880  719173.747 100.0
 ----------------------------------------------------------------------
  PME redist. X/F       2     4    1002     1519.288   48780.654   6.8
  PME spread/gather     2     4    1002     5398.693  173338.936  24.1
  PME 3D-FFT            2     4    1002     2798.482   89852.482  12.5
  PME 3D-FFT Comm.      2     4    1002      947.033   30406.937   4.2
  PME solve             2     4     501      420.667   13506.611   1.9
 ----------------------------------------------------------------------

                Core t (s)   Wall t (s)      (%)
        Time:   178961.450    22398.880    799.0
                              6h13:18
                (ns/day)    (hour/ns)
 Performance:       38.573        0.622








[gmx-users] Re: segmentation fault on gromacs 4.5.5 after mdrun

2013-11-12 Thread cjalmeciga
I ran

grompp -f nvt.mdp -c em.gro -p topol.top -n index.ndx -o nvt.tpr

and everything looks fine. I checked the nvt.tpr, and the temperature is OK.

The real problem is with mdrun itself.

Could it be a problem with the software?

Thanks

Javier



Justin Lemkul wrote:
 On 11/11/13 11:24 AM, Carlos Javier Almeciga Diaz wrote:
 Hello everyone,

 I'm doing a simulation of a ligand-protein interaction with GROMACS 4.5.5.
 Everything looks fine after I equilibrate the protein-ligand complex. I'm
 running these commands:


 grompp -f nvt.mdp -c em.gro -p topol.top -n index.ndx -o nvt.tpr

 mdrun -deffnm nvt

 Nevertheless, I got this error:

 Reading file nvt.tpr, VERSION 4.5.5 (double precision)
 Segmentation fault

 What should I do?

 
 Instantaneous failure typically indicates that the forces are nonsensically
 high and the constraint algorithm immediately fails.  Likely the previous
 energy minimization did not adequately complete.
 
 -Justin
 





[gmx-users] Re: Restarting a simulation after replacing an empty md.trr file

2013-11-12 Thread arunjones
Hello Sir,
Thanks for the reply.
Now, is it fine if I use 100 threads in my restart?
Is there any impact on the overall simulation?



Re: [gmx-users] Re: segmentation fault on gromacs 4.5.5 after mdrun

2013-11-12 Thread Justin Lemkul



On 11/12/13 10:58 AM, cjalmeciga wrote:

I ran

grompp -f nvt.mdp -c em.gro -p topol.top -n index.ndx -o nvt.tpr

and everything looks fine. I checked the nvt.tpr, and the temperature is OK.



The fact that grompp completes indicates there is nothing syntactically wrong 
with the input files.  Whether or not the content of the .mdp is physically 
sensible or the input configuration is plausible is an entirely different 
matter.  Please tell us what the exact outcome of the previous energy 
minimization was (potential energy, maximum force, copied and pasted from screen 
output or .log file).
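
For instance (a sketch assuming the default minimization file names em.log and
em.edr):

tail -n 15 em.log
g_energy -f em.edr -o em_potential.xvg

The first shows the final converged values; the second lets you extract and
plot the Potential term over the minimization.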



The real problem is with the mdrun step.

Could it be a problem with the software?



You have presented no evidence that would lead anyone to believe the problem is 
with mdrun.  In the vast majority of cases, user input is the problem.


-Justin


Thanks

Javier








Re: [gmx-users] Re: Restarting a simulation after replacing an empty md.trr file

2013-11-12 Thread Justin Lemkul



On 11/12/13 11:10 AM, arunjones wrote:

Hello Sir,
Thanks for the reply.
Is it fine if I now use 100 threads in my restart?
Is there any impact on the overall simulation?



Only if that is the number of threads originally used in the run.  If not, there 
will be a mismatch between the DD grid setup, the .cpt file will complain, and 
the run will fail.  Rule of thumb: don't change settings or alter files mid-run ;)
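
A matching restart would look something like this (a sketch assuming a
thread-MPI build and default file names; -nt must equal the original thread
count, 60 in this thread):

mdrun -deffnm md -cpi md.cpt -append -nt 60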


-Justin



[gmx-users] Re: Restarting a simulation after replacing an empty md.trr file

2013-11-12 Thread arunjones
Thank you Sir,
Initially I was running on 60 threads; now I changed it to 100. The simulation
is running without any error, but I found a note in the log file as follows:

  #nodes mismatch,
current program: 100
checkpoint file: 60

  #PME-nodes mismatch,
current program: -1
checkpoint file: 12

Gromacs binary or parallel settings not identical to previous run.
Continuation is exact, but is not guaranteed to be binary identical.

Initializing Domain Decomposition on 100 nodes


Is it a good idea to continue, or shall I stick with 60 threads?



Re: [gmx-users] Re: Restarting a simulation after replacing an empty md.trr file

2013-11-12 Thread Justin Lemkul



On 11/12/13 12:07 PM, arunjones wrote:

Thank you Sir,
Initially I was running on 60 threads; now I changed it to 100. The simulation
is running without any error, but I found a note in the log file as follows:

   #nodes mismatch,
 current program: 100
 checkpoint file: 60

   #PME-nodes mismatch,
 current program: -1
 checkpoint file: 12

Gromacs binary or parallel settings not identical to previous run.
Continuation is exact, but is not guaranteed to be binary identical.

Initializing Domain Decomposition on 100 nodes


Is it a good idea to continue, or shall I stick with 60 threads?



Like I said, I think it is a bad idea to switch settings haphazardly during the 
run.  As the note indicates, the continuation is exact, but not binary 
identical.  Check the website for what all that means if you're not sure.


-Justin



[gmx-users] Re: segmentation fault on gromacs 4.5.5 after mdrun

2013-11-12 Thread cjalmeciga

The output of the energy minimization was:

Potential Energy  = -1.42173622068236e+06
Maximum force =  9.00312066109319e+02 on atom 148
Norm of force =  2.06087515037187e+01

Thanks

Javier




Re: [gmx-users] Re: segmentation fault on gromacs 4.5.5 after mdrun

2013-11-12 Thread Justin Lemkul



On 11/12/13 12:14 PM, cjalmeciga wrote:


The output of the energy minimization was:

Potential Energy  = -1.42173622068236e+06
Maximum force =  9.00312066109319e+02 on atom 148
Norm of force =  2.06087515037187e+01



OK, reasonable enough.  How about a description of what the system is, which 
force field you chose, how you derived the ligand topology, and the full 
contents of your .mdp file?


-Justin




[gmx-users] Re: Change in the position of structural Zinc and calcium ions during MD

2013-11-12 Thread Rama
Hi Justin,

Below I have pasted the .mdp file and topology. In the .log file I can see an
energy term for position restraints.

.mdp file---
title                   = NPT Equilibration
define                  = -DPOSRES          ; position restraints for protein
; Run parameters
integrator              = md                ; leap-frog integrator
nsteps                  = 500000            ; 2 fs * 500000 = 1000 ps (1 ns)
dt                      = 0.002             ; 2 fs
; Output control
nstxout                 = 500               ; save coordinates every 1 ps
nstvout                 = 500               ; save velocities every 1 ps
nstenergy               = 500               ; save energies every 1 ps
nstlog                  = 500               ; update log file every 1 ps
; Bond parameters
continuation            = yes               ; restarting after NVT
constraint_algorithm    = lincs             ; holonomic constraints
constraints             = all-bonds         ; all bonds (even heavy atom-H bonds) constrained
lincs_iter              = 1                 ; accuracy of LINCS
lincs_order             = 4                 ; also related to accuracy
; Neighborsearching
ns_type                 = grid              ; search neighboring grid cells
nstlist                 = 5                 ; 10 fs
rlist                   = 1.2               ; short-range neighborlist cutoff (in nm)
rcoulomb                = 1.2               ; short-range electrostatic cutoff (in nm)
rvdw                    = 1.2               ; short-range van der Waals cutoff (in nm)
; Electrostatics
coulombtype             = PME               ; Particle Mesh Ewald for long-range electrostatics
pme_order               = 4                 ; cubic interpolation
fourierspacing          = 0.16              ; grid spacing for FFT
; Temperature coupling is on
tcoupl                  = Nose-Hoover       ; more accurate thermostat
tc-grps                 = Protein_CA_ZN DMPC SOL_CL ; three coupling groups - more accurate
tau_t                   = 0.5    0.5    0.5 ; time constant, in ps
ref_t                   = 298    298    310 ; reference temperature, one for each group, in K
; Pressure coupling is on
pcoupl                  = Parrinello-Rahman ; pressure coupling on in NPT
pcoupltype              = semiisotropic     ; uniform scaling of x-y box vectors, independent z
tau_p                   = 5.0               ; time constant, in ps
ref_p                   = 1.0    1.0        ; reference pressure, x-y, z (in bar)
compressibility         = 4.5e-5 4.5e-5     ; isothermal compressibility, bar^-1
; Periodic boundary conditions
pbc                     = xyz               ; 3-D PBC
; Dispersion correction
DispCorr                = EnerPres          ; account for cut-off vdW scheme
; Velocity generation
gen_vel                 = no                ; velocity generation is off
; COM motion removal
; These options remove motion of the protein/bilayer relative to the solvent/ions
nstcomm                 = 1
comm-mode               = Linear
comm-grps               = Protein_DMPC SOL_CL
; Scale COM of reference coordinates
refcoord_scaling        = com


topol.top
; Include Position restraint file
#ifdef POSRES
#include "posre.itp"
#endif

; Strong position restraints for InflateGRO
#ifdef STRONG_POSRES
#include "strong_posre.itp"
#endif

; Include DMPC topology
#include "rama4LJ.ff/dmpcLJ.itp"

; Include water topology
#include "rama4LJ.ff/spc.itp"

#ifdef POSRES_WATER
; Position restraint for each water oxygen
[ position_restraints ]
;  i funct       fcx        fcy        fcz
   1     1      1000       1000       1000
#endif

; Include topology for ions
#include "rama4LJ.ff/ions.itp"

---.log file
   Energies (kJ/mol)
          Angle    Proper Dih. Ryckaert-Bell.  Improper Dih.          LJ-14
    1.77761e+04    3.10548e+03    7.97673e+03    4.40586e+02    8.14131e+03
     Coulomb-14        LJ (SR)  Disper. corr.   Coulomb (SR)   Coul. recip.
    2.59758e+04    2.74092e+04   -2.56846e+03   -4.68637e+05   -1.67418e+05
 Position Rest.      Potential    Kinetic En.   Total Energy    Temperature
    7.09403e+02   -5.47088e+05    8.83115e+04   -4.58777e+05    3.07118e+02
 Pres. DC (bar) Pressure (bar)   Constr. rmsd
   -2.00493e+02    1.00080e+00    0.00000e+00


Thanks



Re: mdrun on 8-core AMD + GTX TITAN (was: Re: [gmx-users] Re: Gromacs-4.6 on two Titans GPUs)

2013-11-12 Thread Dwey Kauffman
 more hardware at it. To hope to see some
 scaling, you'd need to be able to drop the PME mesh time by about a factor
 of two (coarser grid, and compensating increase to rcoulomb), and hope
 there was enough PP work that using two GPUs for a single simulation is
 even worth considering. Achieving throughput-style scaling by running two
 independent simulations on the same node may be all that is practical (but
 I don't even know how many atoms you are simulating!)

 Mark
In the two-GPU configuration there is NO such GPU timing table in the log, while
in the one-GPU configuration there is one.  See the logs.

Again, it is interesting to know whether there was enough PP work for two GPUs.
Increasing the cut-offs does achieve this once the cutoff reaches 1.6 nm, but
the total performance (ns/day) decreases severely. That's NOT what I want,
because for general purposes I would like to assign cutoffs of 0.8, 1.0, or
1.2 nm. In this test case, I am running the pull code from Justin's umbrella
sampling tutorial.

I should also mention that, using two GPUs with 12-core Intel CPUs, I work on a
protein with 35,000 atoms including solvent and ions for general purposes. Its
performance increases only 5-8% with the second GPU.

Besides your hint about the PP workload, any better practical suggestions are
highly appreciated.  I can test your suggestion.
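
For reference, the throughput-style alternative could look something like this
(a sketch for a 4.6-era thread-MPI build; file names, thread counts, and
pinning offsets are placeholders for a 12-core node with two GPUs):

mdrun -deffnm run1 -gpu_id 0 -ntomp 6 -pin on -pinoffset 0 &
mdrun -deffnm run2 -gpu_id 1 -ntomp 6 -pin on -pinoffset 6 &

Each run gets one GPU and half the cores, which may give better aggregate
ns/day than forcing both GPUs onto a single simulation.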

Thanks,
Dewey




Re: [gmx-users] Re: Change in the position of structural Zinc and calcium ions during MD

2013-11-12 Thread Justin Lemkul



On 11/12/13 1:47 PM, Rama wrote:

Hi Justin,

Below I have pasted the .mdp file and topology. In the .log file I can see an
energy term for position restraints.

.mdp file---
title                   = NPT Equilibration
define                  = -DPOSRES          ; position restraints for protein
; Run parameters
integrator              = md                ; leap-frog integrator
nsteps                  = 500000            ; 2 fs * 500000 = 1000 ps (1 ns)
dt                      = 0.002             ; 2 fs
; Output control
nstxout                 = 500               ; save coordinates every 1 ps
nstvout                 = 500               ; save velocities every 1 ps
nstenergy               = 500               ; save energies every 1 ps
nstlog                  = 500               ; update log file every 1 ps
; Bond parameters
continuation            = yes               ; restarting after NVT
constraint_algorithm    = lincs             ; holonomic constraints
constraints             = all-bonds         ; all bonds (even heavy atom-H bonds) constrained
lincs_iter              = 1                 ; accuracy of LINCS
lincs_order             = 4                 ; also related to accuracy
; Neighborsearching
ns_type                 = grid              ; search neighboring grid cells
nstlist                 = 5                 ; 10 fs
rlist                   = 1.2               ; short-range neighborlist cutoff (in nm)
rcoulomb                = 1.2               ; short-range electrostatic cutoff (in nm)
rvdw                    = 1.2               ; short-range van der Waals cutoff (in nm)
; Electrostatics
coulombtype             = PME               ; Particle Mesh Ewald for long-range electrostatics
pme_order               = 4                 ; cubic interpolation
fourierspacing          = 0.16              ; grid spacing for FFT
; Temperature coupling is on
tcoupl                  = Nose-Hoover       ; more accurate thermostat
tc-grps                 = Protein_CA_ZN DMPC SOL_CL ; three coupling groups - more accurate
tau_t                   = 0.5    0.5    0.5 ; time constant, in ps
ref_t                   = 298    298    310 ; reference temperature, one for each group, in K
; Pressure coupling is on
pcoupl                  = Parrinello-Rahman ; pressure coupling on in NPT
pcoupltype              = semiisotropic     ; uniform scaling of x-y box vectors, independent z
tau_p                   = 5.0               ; time constant, in ps
ref_p                   = 1.0    1.0        ; reference pressure, x-y, z (in bar)
compressibility         = 4.5e-5 4.5e-5     ; isothermal compressibility, bar^-1
; Periodic boundary conditions
pbc                     = xyz               ; 3-D PBC
; Dispersion correction
DispCorr                = EnerPres          ; account for cut-off vdW scheme
; Velocity generation
gen_vel                 = no                ; velocity generation is off
; COM motion removal
; These options remove motion of the protein/bilayer relative to the solvent/ions
nstcomm                 = 1
comm-mode               = Linear
comm-grps               = Protein_DMPC SOL_CL
; Scale COM of reference coordinates
refcoord_scaling        = com


topol.top
; Include Position restraint file
#ifdef POSRES
#include "posre.itp"
#endif

; Strong position restraints for InflateGRO
#ifdef STRONG_POSRES
#include "strong_posre.itp"
#endif

; Include DMPC topology
#include "rama4LJ.ff/dmpcLJ.itp"

; Include water topology
#include "rama4LJ.ff/spc.itp"

#ifdef POSRES_WATER
; Position restraint for each water oxygen
[ position_restraints ]
;  i funct       fcx        fcy        fcz
   1     1      1000       1000       1000
#endif

; Include topology for ions
#include "rama4LJ.ff/ions.itp"



Do you have appropriate [position_restraints] assigned in this topology?  None 
of the above, as shown, pertains to the ions, and the only relevant #ifdef block 
that would be triggered by -DPOSRES is for the protein.
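
If the intention is to restrain the ions too, each ion [ moleculetype ] needs
its own block, e.g. (a sketch only; the force constants are placeholders, and
the atom index is local to that moleculetype):

#ifdef POSRES
[ position_restraints ]
;  i funct       fcx        fcy        fcz
   1     1      1000       1000       1000
#endif

placed within the [ moleculetype ] for ZN/CA so that the same -DPOSRES define
triggers it.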


-Justin


---.log file
   Energies (kJ/mol)
          Angle    Proper Dih. Ryckaert-Bell.  Improper Dih.          LJ-14
    1.77761e+04    3.10548e+03    7.97673e+03    4.40586e+02    8.14131e+03
     Coulomb-14        LJ (SR)  Disper. corr.   Coulomb (SR)   Coul. recip.
    2.59758e+04    2.74092e+04   -2.56846e+03   -4.68637e+05   -1.67418e+05
 Position Rest.      Potential    Kinetic En.   Total Energy    Temperature
    7.09403e+02   -5.47088e+05    8.83115e+04   -4.58777e+05    3.07118e+02
 Pres. DC (bar) Pressure (bar)   Constr. rmsd
   -2.00493e+02    1.00080e+00    0.00000e+00


Thanks


[gmx-users] Re: Bilayer COM removal issue: Large VCM

2013-11-12 Thread rajat desikan
Hi All,
Any suggestions?

Thanks,


On Mon, Nov 11, 2013 at 12:38 AM, rajat desikan rajatdesi...@gmail.com wrote:

 Hi All,
 I am experiencing a few problems in membrane simulations wrt COM removal.
 I downloaded a 400 ns pre-equilibrated Slipid-DMPC membrane with all the
 accompanying files. I then carried out the following steps:
 1) energy minimization
 2) NVT Eq - 100 ps
 3) NPT Eq - 250 ps (Berendsen temp, Pres coupling)

 Then I used g_select to select the upper and lower DMPC leaflets, and then
 carried out a 250 ps NPT eq again. The only change was:
 comm-grps = SOL DMPC  ==>  comm-grps = SOL upper lower

 At every step in the log file, I get the following message:

 Step           Time         Lambda
 124000        248.00000        0.00000

 Large VCM(group lower):      -0.00051,     -0.00515,     -0.00652, Temp-cm:  8.11828e+29

    Energies (kJ/mol)
            U-B    Proper Dih.  Improper Dih.          LJ-14     Coulomb-14
    7.23818e+04    4.19778e+04    6.46641e+02    4.54801e+03   -1.45245e+05
        LJ (SR)        LJ (LR)  Disper. corr.   Coulomb (SR)   Coul. recip.
    2.79689e+04   -3.78407e+03   -2.10679e+03   -5.84134e+05   -8.87497e+04
      Potential    Kinetic En.   Total Energy    Temperature Pres. DC (bar)
   -6.76497e+05    1.76468e+05   -5.00029e+05    3.10424e+02   -1.05704e+02
 Pressure (bar)   Constr. rmsd
   -1.85927e+02    6.42934e-06

 Large VCM(group lower):      -0.00187,     -0.00369,      0.00032, Temp-cm:  2.02076e+29
 Large VCM(group lower):      -0.00725,     -0.00278,     -0.00549, Temp-cm:  1.05988e+30
 Large VCM(group lower):       0.00020,      0.00308,     -0.00176, Temp-cm:  1.48126e+29
 Large VCM(group lower):      -0.00541,      0.00546,     -0.00166, Temp-cm:  7.24656e+29
 Large VCM(group lower):      -0.00220,      0.00362,     -0.00741, Temp-cm:  8.53812e+29
 Large VCM(group lower):       0.00140,     -0.00160,      0.00029, Temp-cm:  5.39679e+28
 Large VCM(group lower):      -0.00056,     -0.00293,     -0.00364, Temp-cm:  2.59422e+29
 Large VCM(group lower):      -0.00172,     -0.00260,      0.00494, Temp-cm:  3.99945e+29
 Large VCM(group lower):       0.00252,      0.00594,      0.00068, Temp-cm:  4.93342e+29

 DD  step 124999  vol min/aver 0.702  load imb.: force  1.3%  pme mesh/force 0.636

 I do not know what to make of it. There are no issues when I remove COM
 for the entire system. I have seen this issue come up a few times in the
 archives too, but I didn't find a satisfactory solution since the bilayer
 was very well equilibrated.

 I would appreciate any suggestions. Thank you.


 --
 Rajat Desikan (Ph.D Scholar)
 Prof. K. Ganapathy Ayappa's Lab (no 13),
 Dept. of Chemical Engineering,
 Indian Institute of Science, Bangalore




-- 
Rajat Desikan (Ph.D Scholar)
Prof. K. Ganapathy Ayappa's Lab (no 13),
Dept. of Chemical Engineering,
Indian Institute of Science, Bangalore


Re: [gmx-users] Re: g_analyze

2013-11-11 Thread bharat gupta
In addition to my previous question, I have another question about
g_analyze. When I used the hbond.xvg file to get the average and plotted
the average.xvg file, I found that the average value is around 4 to 5
according to the graph. But g_analyze in its final calculation gives 7.150
as the average value. Here are the graph and the average value
calculated by g_analyze:

                                     std. dev.     relative deviation of
                      standard      ----------     cumulants from those of
set     average       deviation      sqrt(n-1)     a Gaussian distribution
                                                      cum. 3     cum. 4
SS1   7.150740e+00   8.803173e-01   1.760635e-02     0.062      0.163
SS2   1.490604e+00   1.164761e+00   2.329523e-02     0.495      0.153

https://www.dropbox.com/s/1vqixenyerha7qq/115-water.png

Here's the link to the hbond.xvg file and its averaged file:
https://www.dropbox.com/s/4n0m47o3mrjn3o8/hbond_115-water.xvg


On Mon, Nov 11, 2013 at 3:30 PM, bharat gupta bharat.85.m...@gmail.com wrote:

 thank you for informing me about g_rdf...

 Is it possible to dump the structure with those average water molecules
 interacting with the residues? I generated the hbond.log file, which gives
 the details, but I need to generate a figure for this.



 On Mon, Nov 11, 2013 at 10:40 AM, Justin Lemkul jalem...@vt.edu wrote:



 On 11/10/13 8:38 PM, bharat gupta wrote:

 But trjorder can be used to calculate the hydration layer or shell around
 residues, right?


 Yes, but I also tend to think that integrating an RDF is also a more
 straightforward way of doing that.  With trjorder, you set some arbitrary
 cutoff that may or may not be an informed decision - with an RDF it is
 clear where the hydration layers are.

 -Justin
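
 For instance (a sketch; file and group names are placeholders):

 g_rdf -f md.xtc -s md.tpr -n index.ndx -o rdf.xvg -cn rdf_cn.xvg

 The -cn cumulative-number output gives the running coordination number, so
 the water count in the first hydration shell can be read off at the first
 minimum of the RDF.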



 On Mon, Nov 11, 2013 at 10:35 AM, Justin Lemkul jalem...@vt.edu wrote:



 On 11/10/13 8:30 PM, bharat gupta wrote:

  Thanks for your reply. I was missing the scientific notation part. Now
 everything is fine.

 Regarding trjorder, it doesn't measure h-bonds but gives the water
 nearest to the protein.


  I wouldn't try to draw any sort of comparison between the output of
 trjorder and g_hbond.  If you want to measure H-bonds, there's only one
 tool for that.


 -Justin










Re: [gmx-users] Re: g_analyze

2013-11-11 Thread Justin Lemkul



On 11/11/13 1:30 AM, bharat gupta wrote:

thank you for informing me about g_rdf...

Is it possible to dump the structure with those average water molecules
interacting with the residues? I generated the hbond.log file, which gives
the details, but I need to generate a figure for this.



g_select
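
For example (a sketch only; the selection string, cutoff, and file names are
placeholders):

g_select -f md.xtc -s md.tpr -select 'resname SOL and within 0.35 of group "Protein"' -on shell.ndx
trjconv -f md.xtc -s md.tpr -n shell.ndx -dump 5000 -o frame.pdb

i.e. write an index group of the waters near the region of interest, then dump
a frame with trjconv for rendering.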

-Justin



On Mon, Nov 11, 2013 at 10:40 AM, Justin Lemkul jalem...@vt.edu wrote:




On 11/10/13 8:38 PM, bharat gupta wrote:


But trjorder can be used to calculate the hydration layer or shell around
residues, right?



Yes, but I also tend to think that integrating an RDF is also a more
straightforward way of doing that.  With trjorder, you set some arbitrary
cutoff that may or may not be an informed decision - with an RDF it is
clear where the hydration layers are.

-Justin




On Mon, Nov 11, 2013 at 10:35 AM, Justin Lemkul jalem...@vt.edu wrote:




On 11/10/13 8:30 PM, bharat gupta wrote:

  Thanks for your reply. I was missing the scientific notation part. Now

everything is fine.

Regarding trjorder, it doesn't measure h-bonds but gives the water
nearest to the protein.


  I wouldn't try to draw any sort of comparison between the output of

trjorder and g_hbond.  If you want to measure H-bonds, there's only one
tool for that.


-Justin









Re: [gmx-users] Re: g_analyze

2013-11-11 Thread Justin Lemkul



On 11/11/13 4:06 AM, bharat gupta wrote:

In addition to my previous question, I have another question about
g_analyze. When I used the hbond.xvg file to get the average and plotted
the average.xvg file, I found that the average value is around 4 to 5
according to the graph. But g_analyze in its final calculation gives 7.150
as the average value. Here are the graph and the average value
calculated by g_analyze:

                                     std. dev.     relative deviation of
                      standard      ----------     cumulants from those of
set     average       deviation      sqrt(n-1)     a Gaussian distribution
                                                      cum. 3     cum. 4
SS1   7.150740e+00   8.803173e-01   1.760635e-02     0.062      0.163
SS2   1.490604e+00   1.164761e+00   2.329523e-02     0.495      0.153

https://www.dropbox.com/s/1vqixenyerha7qq/115-water.png

Here's the link to the hbond.xvg file and its averaged file:
https://www.dropbox.com/s/4n0m47o3mrjn3o8/hbond_115-water.xvg



Neither of these files produces output that corresponds to the PNG image above.
Both files have values in the 6-9 H-bond range and thus agree with the
g_analyze output, which I can reproduce.  I suspect you're somehow getting your
files mixed up.


-Justin



On Mon, Nov 11, 2013 at 3:30 PM, bharat gupta bharat.85.m...@gmail.com wrote:


thank you for informing me about g_rdf...

Is it possible to dump the structure with those average water molecules
interacting with the residues? I generated the hbond.log file, which gives
the details, but I need to generate a figure for this.



On Mon, Nov 11, 2013 at 10:40 AM, Justin Lemkul jalem...@vt.edu wrote:




On 11/10/13 8:38 PM, bharat gupta wrote:


But trjorder can be used to calculate the hydration layer or shell around
residues, right?



Yes, but I also tend to think that integrating an RDF is also a more
straightforward way of doing that.  With trjorder, you set some arbitrary
cutoff that may or may not be an informed decision - with an RDF it is
clear where the hydration layers are.

-Justin




On Mon, Nov 11, 2013 at 10:35 AM, Justin Lemkul jalem...@vt.edu wrote:




On 11/10/13 8:30 PM, bharat gupta wrote:

  Thanks for your reply. I was missing the scientific notation part. Now

everything is fine.

Regarding trjorder, it doesn't measure h-bonds but gives the water
nearest to the protein.


  I wouldn't try to draw any sort of comparison between the output of

trjorder and g_hbond.  If you want to measure H-bonds, there's only one
tool for that.


-Justin














[gmx-users] Re: Reaction field zero and ions

2013-11-11 Thread Williams Ernesto Miranda Delgado
Hello,
If I ran the MD simulation using PME and neutralized with ions, and I now want
to rerun with reaction-field-zero, is there any problem if I keep the ions?
This is for a LIE calculation. I am using AMBER99SB.
Thanks
Williams
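
For reference, the rerun workflow would look something like this (a sketch;
file names are placeholders, and rf0.mdp is the production .mdp with the
electrostatics switched to reaction-field-zero):

grompp -f rf0.mdp -c md.gro -p topol.top -o rf0.tpr
mdrun -s rf0.tpr -rerun md.xtc -deffnm rf0

mdrun -rerun only recomputes energies and forces over the stored trajectory;
it does not generate new dynamics.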



Re: mdrun on 8-core AMD + GTX TITAN (was: Re: [gmx-users] Re: Gromacs-4.6 on two Titans GPUs)

2013-11-10 Thread Mark Abraham
 that is practical (but
I don't even know how many atoms you are simulating!)

Mark


               Core t (s)   Wall t (s)      (%)
       Time:   178961.450    22398.880    799.0
                        6h13:18
                 (ns/day)    (hour/ns)
Performance:       38.573        0.622









[gmx-users] Re: choosing force field

2013-11-10 Thread pratibha
Thank you Justin for your kind help. The simple reason for considering only
Gromos parameter sets is that parameters for the metal ions in my protein
are not defined in other force fields.


On Sat, Nov 9, 2013 at 7:18 PM, Justin Lemkul [via GROMACS] wrote:



 On 11/9/13 12:48 AM, pratibha wrote:
  Sorry for the previous mistake. Instead of 53a7, the force field which I
  used was 53a6.
 
 

 53A6 is known to under-stabilize helices, so if a helix did not appear in a
 simulation using this force field, it is not definitive proof that the
 structure does not populate helical structures.  I generally see mixed
 opinions in the literature in terms of which Gromos parameter set is the
 most reliable.  As was asked by someone else, is there a reason you are only
 considering Gromos parameter sets?  Others may be better suited to your study.

 -Justin

  On Fri, Nov 8, 2013 at 12:10 AM, Justin Lemkul [via GROMACS] wrote:
 
 
 
  On 11/7/13 12:14 PM, pratibha wrote:
  My protein contains metal ions which are parameterized only in the Gromos
  force field. Since I am a newbie to MD simulations, it would be difficult
  for me to parameterize those myself.
  Can you please guide me, as per my previous mail, on which of the two
  simulations I should consider more reliable - 43A1 or 53A7?
 
  AFAIK, there is no such thing as 53A7, and your original message was full of
  similar typos, making it nearly impossible to figure out what you were
  actually doing.  Can you indicate the actual force field(s) that you have
  been using in case someone has any ideas?  The difference between 53A6 and
  54A7 should be quite pronounced, in my experience, thus any guesses as to
  what 53A7 should be doing are not productive because I don't know what that
  is.
 
  -Justin
 
 
 
 
 




Re: [gmx-users] Re: g_analyze

2013-11-10 Thread Justin Lemkul



On 11/10/13 12:20 AM, bharat gupta wrote:

Hi,
I used the command g_hbond to find h-bonds between residues 115-118 and
water. Then I used g_analyze to find the average, and it gives the value
for the h-bonds like this:

                                     std. dev.     relative deviation of
                      standard      ----------     cumulants from those of
set     average       deviation      sqrt(n-1)     a Gaussian distribution
                                                      cum. 3     cum. 4
SS1   6.877249e-02   2.546419e-01   5.092839e-03     2.181      3.495
SS2   6.997201e-02   2.673450e-01   5.346901e-03     2.421      5.001

When I calculated the average manually, by taking the average of the numbers
in the second column of the hbnum.xvg file, I got a value of around 13.5.
What is the reason for such a large difference?



Hard to say, but I've never known g_analyze to be wrong, so I'd suspect 
something is amiss in your manual calculation.  The difference between 13.5 and 
0.0069 is huge; you should be able to scan through the data file to see what the 
expected value should be.
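
For instance, a quick cross-check of the second column (skipping the xvg
header lines; file name as in the thread):

grep -v '^[#@]' hbnum.xvg | awk '{s+=$2; n++} END {print s/n}'

This should reproduce the g_analyze average for that data set.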



In another case, g_analyze gives an average value of around 6.9 for h-bonds
between two residues, and when I calculated it manually I got the average
value as 6.8.


What's the meaning of SS1 and SS2? Does it mean that SS1 refers to time
and SS2 refers to the h-bond numbers in the hbnum.xvg obtained from the
g_hbond analysis?


Data sets 1 and 2.  You will note that there are two columns of data in the 
-hbnum output produced by g_hbond, with titles explaining both.


-Justin



Re: [gmx-users] Re: g_analyze

2013-11-10 Thread bharat gupta
I checked the hbnum.xvg file and it contains three columns - time,
hbonds, and hbonds that do not follow the angle criterion. In that case SS1 is
the average of the actual hbonds (2nd column) and SS2 is the average of the 3rd
column. Am I right here or not?

I tried to calculate the h-bonds for residues 115-118 individually, and then
checked the average for each residue. For the single-residue calculations, the
g_analyze average value is correct.

But when I calculate the h-bonds for the range 115-118, I get the g_analyze
value as 1.62. I calculated the average manually in Excel and got the average
value as 16.2 (i.e. the g_analyze value is one tenth of my manual value).

I then added up the average h-bond values of the individual residues, and
the total comes to around 16.2, the same as for the 115-118 range h-bonds.
This suggests that my manual calculation is correct.

I also used trjorder to calculate h-bonds at a distance of 0.34 for residues
115-118. I got an average value of around 2.51 from g_analyze, whereas the
manual calculation gives 25.1. I don't know why, for the range, g_analyze
gives the average as (actual avg value)/10.

Why do trjorder and g_hbond give different numbers of hydrogen bonds for
the same residue set?

Thanks
---
BHARAT






[gmx-users] Re: Thankful

2013-11-10 Thread Williams Ernesto Miranda Delgado
Justin, thank you very much for your kind help about LIE and PME
Williams



Re: [gmx-users] Re: g_analyze

2013-11-10 Thread Justin Lemkul



On 11/10/13 7:18 PM, bharat gupta wrote:

I checked the hbnum.xvg file and it contains three columns - time,
hbonds, and hbonds that do not follow the angle criterion. In that case SS1 is


The third column is not actually H-bonds, then ;)


the average of the actual hbonds (2nd column) and SS2 is the average of the 3rd
column. Am I right here or not?



Yes.


I tried to calculate the h-bonds for residues 115-118 individually, and then
checked the average for each residue. For the single-residue calculations, the
g_analyze average value is correct.

But when I calculate the h-bonds for the range 115-118, I get the g_analyze
value as 1.62. I calculated the average manually in Excel and got the average
value as 16.2.



That is impossible.  You cannot get a different average by examining the same 
numbers.  Read the g_analyze output again - I am willing to bet that you're not 
seeing the exponent of the scientific notation.
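
(For example, an average printed as 1.620000e+01 is 16.2, not 1.62 - the e+01
moves the decimal point one place to the right.)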



I then added up the average h-bond values of the individual residues, and
the total comes to around 16.2, the same as for the 115-118 range h-bonds.
This suggests that my manual calculation is correct.

I also used trjorder to calculate h-bonds at a distance of 0.34 for residues
115-118. I got an average value of around 2.51 from g_analyze, whereas the
manual calculation gives 25.1. I don't know why, for the range, g_analyze
gives the average as (actual avg value)/10.

Why do trjorder and g_hbond give different numbers of hydrogen bonds for
the same residue set?



All of this comes down to correctly reading the screen output.  I have no idea 
what you're doing with trjorder, though.  It doesn't measure H-bonds.


-Justin



Re: [gmx-users] Re: g_analyze

2013-11-10 Thread bharat gupta
Thanks for your reply. I was missing the scientific notation part. Now
everything is fine.

Regarding trjorder, it doesn't measure h-bonds but gives the water nearest
to the protein.






Re: [gmx-users] Re: g_analyze

2013-11-10 Thread Justin Lemkul



On 11/10/13 8:30 PM, bharat gupta wrote:

Thanks for your reply. I was missing the scientific notation part. Now
everything is fine.

Regarding trjorder, it doesn't measure h-bonds but gives the waters nearest
to the protein.



I wouldn't try to draw any sort of comparison between the output of trjorder and 
g_hbond.  If you want to measure H-bonds, there's only one tool for that.


-Justin

--
==

Justin A. Lemkul, Ph.D.
Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441

==


Re: [gmx-users] Re: g_analyze

2013-11-10 Thread bharat gupta
But trjorder can be used to calculate the hydration layer or shell around
residues, right?




Re: [gmx-users] Re: g_analyze

2013-11-10 Thread Justin Lemkul



On 11/10/13 8:38 PM, bharat gupta wrote:

But trjorder can be used to calculate the hydration layer or shell around
residues, right?



Yes, but I tend to think that integrating an RDF is a more straightforward
way of doing that.  With trjorder, you set some arbitrary cutoff that may or
may not be an informed decision - with an RDF it is clear where the
hydration layers are.
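
For instance (a sketch, assuming an index group for the residues of interest
already exists in index.ndx):

g_rdf -f traj.xtc -s topol.tpr -n index.ndx -o rdf.xvg

with the residue group as the reference and SOL (or just the water oxygens)
as the second group; the first minimum after each peak then marks the
boundary of a hydration shell.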


-Justin






--
==

Justin A. Lemkul, Ph.D.
Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441

==


Re: [gmx-users] Re: g_analyze

2013-11-10 Thread bharat gupta
Thank you for informing me about g_rdf...

Is it possible to dump the structure with those average water molecules
interacting with the residues? I generated the hbond.log file, which gives
the details, but I need to generate a figure for this.




Re: [gmx-users] Re: choosing force field

2013-11-09 Thread Justin Lemkul



On 11/9/13 12:48 AM, pratibha wrote:

Sorry for the previous mistake. Instead of 53a7, the force field which I
used was 53a6.




53A6 is known to under-stabilize helices, so if a helix did not appear in a 
simulation using this force field, it is not definitive proof that the structure 
does not populate helical structures.  I generally see mixed opinions in the 
literature in terms of which Gromos parameter set is the most reliable.  As was 
asked by someone else, is there a reason you are only considering Gromos 
parameter sets?  Others may be better suited to your study.


-Justin





--
==

Justin A. Lemkul, Ph.D.
Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441

==


Re: [gmx-users] Re: Ligand simulation for LIE with PME

2013-11-09 Thread Justin Lemkul



On 11/8/13 3:32 PM, Williams Ernesto Miranda Delgado wrote:

Greetings again,
If I use a salt concentration to neutralize the protein-ligand complex and
run MD using PME, and the ligand is neutral, do I perform the ligand MD
simulation without adding any salt? Could it matter for the LIE free energy
calculation if I don't include salt in the (neutral) ligand simulation, even
though I simulate the protein-ligand system with salt?


My assumption would be that you should introduce as few differences as possible. 
 Consider what LIE is doing - it is attempting to estimate the free energy of 
binding from simple interaction energies.  If you determine the strength of the 
ligand-protein interaction in the presence of some higher ionic strength medium, 
and then determine only the strength of ligand-water interactions rather than 
the interaction of the ligand with the same medium, then I'd say the calculation 
is flawed.  Think of what is really happening in real life - the ligand has to 
partition out of the solvent and into the protein's binding site.  The solvent 
is uniform throughout that process.
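
For reference, the usual LIE estimate (Aqvist and co-workers) is

dG_bind ~ alpha * (<V_vdw>_bound - <V_vdw>_free)
        + beta  * (<V_el>_bound  - <V_el>_free)

with empirical coefficients alpha and beta, which is why the bound and free
averages should be taken in comparable media for the differences to be
meaningful.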


-Justin

--
==

Justin A. Lemkul, Ph.D.
Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441

==


[gmx-users] Re: CHARMM .mdp settings for GPU

2013-11-09 Thread rajat desikan
Hi Justin,
I take it that both sets of parameters should produce identical
macroscopic quantities.
For the GPU, is this a decent .mdp?

cutoff-scheme = Verlet
vdwtype       = switch
rlist         = 1.2
;rlistlong    = 1.4   NOT USED IN GPU... IS THIS OK?
rvdw          = 1.2
;rvdw-switch  = 1.0   NOT USED IN GPU... IS THIS OK?
coulombtype   = pme
DispCorr      = EnerPres
rcoulomb      = 1.2






-- 
Rajat Desikan (Ph.D Scholar)
Prof. K. Ganapathy Ayappa's Lab (no 13),
Dept. of Chemical Engineering,
Indian Institute of Science, Bangalore


Re: [gmx-users] Re: CHARMM .mdp settings for GPU

2013-11-09 Thread Justin Lemkul



On 11/9/13 4:16 PM, rajat desikan wrote:

Hi Justin,
I take it that both sets of parameters should produce identical
macroscopic quantities.
For the GPU, is this a decent .mdp?

cutoff-scheme = Verlet
vdwtype       = switch
rlist         = 1.2
;rlistlong    = 1.4   NOT USED IN GPU... IS THIS OK?
rvdw          = 1.2
;rvdw-switch  = 1.0   NOT USED IN GPU... IS THIS OK?
coulombtype   = pme
DispCorr      = EnerPres
rcoulomb      = 1.2



I have no basis for saying whether or not it will produce correct results.  I 
have never tested this force field on GPU with the Verlet scheme.  My biggest 
concern is with the treatment of van der Waals interactions, and I have not used 
the Verlet scheme enough to understand what it is doing and how it will treat 
the interactions that should be switched.  If someone else can comment, that 
would be useful to me, as well!


Test carefully and please report back.  A comparison between CPU and GPU would 
be very valuable.
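
For example (a sketch, assuming a GROMACS 4.6 build with GPU support), one
could run the same .tpr both ways and compare the resulting energy files:

mdrun -nb gpu -deffnm test_gpu
mdrun -nb cpu -deffnm test_cpu
gmxcheck -e test_gpu.edr -e2 test_cpu.edr

where test_gpu/test_cpu are just example file names.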


-Justin










--
==

Justin A. Lemkul, Ph.D.
Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441

==

[gmx-users] Re: CHARMM .mdp settings for GPU

2013-11-09 Thread Gianluca Interlandi

On Sat, 9 Nov 2013, Gianluca Interlandi wrote:

Just to chime in. Here is a paper that might be helpful in understanding
the role of cutoffs in the CHARMM force field:


Steinbach PJ, Brooks BR. New spherical-cutoff methods for long-range forces
in macromolecular simulation. J Comput Chem 1994;15(7):667-683.
DOI: 10.1002/jcc.540150702

Gianluca



Re: [gmx-users] Re: CHARMM .mdp settings for GPU

2013-11-09 Thread Justin Lemkul



On 11/9/13 9:51 PM, Gianluca Interlandi wrote:

On Sat, 9 Nov 2013, Gianluca Interlandi wrote:

Just to chime in. Here is a paper that might be helpful in understanding the
role of cutoffs in the CHARMM force field:

Steinbach PJ, Brooks BR. New spherical-cutoff methods for long-range forces
in macromolecular simulation. J Comput Chem 1994;15(7):667-683.
DOI: 10.1002/jcc.540150702


Yes, that's the one I posted several weeks back.  It describes the original 
implementation of cutoff schemes in CHARMM.


-Justin

--
==

Justin A. Lemkul, Ph.D.
Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441

==


Re: mdrun on 8-core AMD + GTX TITAN (was: Re: [gmx-users] Re: Gromacs-4.6 on two Titans GPUs)

2013-11-09 Thread Dwey Kauffman


NOTE: The GPU has 20% more load than the CPU. This imbalance causes
      performance loss, consider using a shorter cut-off and a finer PME
      grid.

               Core t (s)   Wall t (s)      (%)
       Time:   216205.510    27036.812    799.7
                             7h30:36
             (ns/day)    (hour/ns)
Performance:   31.956       0.751


### Two GPUs ###

 R E A L   C Y C L E   A N D   T I M E   A C C O U N T I N G

 Computing:          Nodes  Th.  Count   Wall t (s)     G-Cycles     %
------------------------------------------------------------------------
 Domain decomp.          2    4     10       339.490    10900.191    1.5
 DD comm. load           2    4  49989         0.262        8.410    0.0
 Neighbor search         2    4     11       481.583    15462.464    2.2
 Launch GPU ops.         2    4   1002       579.283    18599.358    2.6
 Comm. coord.            2    4    490       523.096    16795.351    2.3
 Force                   2    4    501      1545.584    49624.951    6.9
 Wait + Comm. F          2    4    501       821.740    26384.083    3.7
 PME mesh                2    4    501     11097.880   356326.030   49.5
 Wait GPU nonlocal       2    4    501      1001.868    32167.550    4.5
 Wait GPU local          2    4    501         8.613      276.533    0.0
 NB X/F buffer ops.      2    4   1982      1061.238    34073.781    4.7
 Write traj.             2    4   1025         5.681      182.419    0.0
 Update                  2    4    501      1692.233    54333.503    7.6
 Constraints             2    4    501      2316.145    74365.788   10.3
 Comm. energies          2    4    101        15.802      507.373    0.1
 Rest                    2                   908.383    29165.963    4.1
------------------------------------------------------------------------
 Total                   2                 22398.880   719173.747  100.0
------------------------------------------------------------------------
 PME redist. X/F         2    4   1002      1519.288    48780.654    6.8
 PME spread/gather       2    4   1002      5398.693   173338.936   24.1
 PME 3D-FFT              2    4   1002      2798.482    89852.482   12.5
 PME 3D-FFT Comm.        2    4   1002       947.033    30406.937    4.2
 PME solve               2    4    501       420.667    13506.611    1.9
------------------------------------------------------------------------

               Core t (s)   Wall t (s)      (%)
       Time:   178961.450    22398.880    799.0
                             6h13:18
             (ns/day)    (hour/ns)
Performance:   38.573       0.622








[gmx-users] Re: g_analyze

2013-11-09 Thread bharat gupta
Hi,
I used the command g_hbond to find the h-bonds between residues 115-118 and
water. Then I used g_analyze to find the average, and it gives the values
for the hbonds like this:

                                     std. dev.    relative deviation of
                      standard       ---------    cumulants from those of
set      average      deviation      sqrt(n-1)    a Gaussian distribution
                                                    cum. 3    cum. 4
SS1   6.877249e-02   2.546419e-01   5.092839e-03    2.181     3.495
SS2   6.997201e-02   2.673450e-01   5.346901e-03    2.421     5.001

When I calculated the average manually, by taking the average of the numbers
in the second column of the hbnum.xvg file, I got a value of around 13.5.
What is the reason for such a large difference?

In another case, g_analyze gives an avg value of around 6.9 for the h-bonds
between two residues, and when I calculated it manually I got an avg value
of 6.8.

What is the meaning of SS1 and SS2? Does it mean that SS1 refers to time
and SS2 refers to the hbond numbers in the hbnum.xvg obtained from the
g_hbond analysis?

Please clarify these doubts.

Regards

Bharat


Re: [gmx-users] Re: CHARMM .mdp settings for GPU

2013-11-08 Thread Justin Lemkul



On 11/7/13 11:32 PM, Rajat Desikan wrote:

Dear All,
The settings that I mentioned above are from Klauda et al., for a POPE
membrane system. They can be found in charmm_npt.mdp on Lipidbook (link
below):
http://lipidbook.bioch.ox.ac.uk/package/show/id/48.html

Is there any reason not to use their .mdp parameters for a membrane-protein
system? Justin's recommendation is highly valued since I am using his
force field. Justin, your comments please



Careful now, it's not my force field.  I derived only a very small part of
it :)


To summarize:
Klauda et al. suggest:
rlist           = 1.0
rlistlong       = 1.4
rvdw_switch     = 0.8
vdwtype         = Switch
coulombtype     = pme
DispCorr        = EnerPres   ; only useful with reaction-field and pme or pppm
rcoulomb        = 1.0
rcoulomb_switch = 0.0
rvdw            = 1.2

Justin's recommendation (per mail above):
vdwtype     = switch
rlist       = 1.2
rlistlong   = 1.4
rvdw        = 1.2
rvdw-switch = 1.0
rcoulomb    = 1.2



The differences between these two sets of run parameters are very small, dealing 
mostly with Coulomb and neighbor searching cutoffs.  I would suspect that any 
difference between simulations run with these two settings would be similarly 
small or nonexistent, given that rcoulomb is a bit flexible when using PME.  The 
value of rlist is rarely mentioned in papers, so it is good that the authors 
have provided the actual input file.  Previous interpretation of CHARMM usage 
generally advised setting rcoulomb = 1.2 to remain consistent with the original 
switching/shifting functions.  That setting becomes a bit less stringent when 
using PME.


-Justin

--
==

Justin A. Lemkul, Ph.D.
Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441

==


[gmx-users] Re: free energy

2013-11-08 Thread kghm
Dear Kieu Thu,

Thanks for your comment about free energy. Unfortunately, I could not send
an email to Paissoni Cristina in the Gromacs Forum.
Could you give me the email address of Paissoni Cristina? Finding a tool for
calculating MM/PBSA with Gromacs is very vital for me.

Best Regards
Kiana



[gmx-users] Re: Ligand simulation for LIE with PME

2013-11-08 Thread Williams Ernesto Miranda Delgado
Greetings again,
If I use a salt concentration to neutralize the protein-ligand complex and
run MD using PME, and the ligand is neutral, do I perform the ligand MD
simulation without adding any salt? Could it matter for the LIE free energy
calculation if I don't include salt in the (neutral) ligand simulation, even
though I simulate the protein-ligand system with salt?
Thanks



[gmx-users] Re: choosing force field

2013-11-08 Thread pratibha
Sorry for the previous mistake. Instead of 53a7, the force field which I
used was 53a6.




Re: [gmx-users] Re: Gromacs-4.6 on two Titans GPUs

2013-11-07 Thread Mark Abraham
First, there is no value in ascribing problems to the hardware if the
simulation setup is not yet balanced, or not large enough to provide enough
atoms and a long enough rlist to saturate the GPUs, etc. Look at the log
files and see what complaints mdrun makes about things like PME load
balance, and the times reported for different components of the simulation,
because these must differ between the two runs you report. diff -y -W 160
*log | less is your friend. Some (non-GPU-specific) background information
is in part 5 here (though I recommend the PDF version):
http://www.gromacs.org/Documentation/Tutorials/GROMACS_USA_Workshop_and_Conference_2013/Topology_preparation%2c_%22What's_in_a_log_file%22%2c_basic_performance_improvements%3a_Mark_Abraham%2c_Session_1A

Mark


On Thu, Nov 7, 2013 at 6:34 AM, James Starlight jmsstarli...@gmail.com wrote:

 I've come to the conclusion that simulations with 1 or 2 GPUs give me the
 same performance:

 mdrun -ntmpi 2 -ntomp 6 -gpu_id 01 -v  -deffnm md_CaM_test
 mdrun -ntmpi 2 -ntomp 6 -gpu_id 0 -v  -deffnm md_CaM_test

 Could it be due to too few CPU cores, or is additional RAM (this system has
 32 GB) needed? Or maybe some extra options are needed in the config?

 James




 2013/11/6 Richard Broadbent richard.broadben...@imperial.ac.uk

  Hi Dwey,
 
 
  On 05/11/13 22:00, Dwey Kauffman wrote:
 
  Hi Szilard,
 
  Thanks for your suggestions. I am  indeed aware of this page. In a
  8-core
  AMD with 1GPU, I am very happy about its performance. See below. My
  intention is to obtain a even better one because we have multiple nodes.
 
  ### 8 core AMD with  1 GPU,
  Force evaluation time GPU/CPU: 4.006 ms/2.578 ms = 1.554
  For optimal performance this ratio should be close to 1!
 
 
  NOTE: The GPU has 20% more load than the CPU. This imbalance causes
 performance loss, consider using a shorter cut-off and a finer
 PME
  grid.
 
                 Core t (s)   Wall t (s)      (%)
         Time:   216205.510    27036.812    799.7
                               7h30:36
               (ns/day)    (hour/ns)
  Performance:   31.956       0.751
 
  ### 8 core AMD with 2 GPUs
 
                 Core t (s)   Wall t (s)      (%)
         Time:   178961.450    22398.880    799.0
                               6h13:18
               (ns/day)    (hour/ns)
  Performance:   38.573       0.622
  Finished mdrun on node 0 Sat Jul 13 09:24:39 2013
 
 
  I'm almost certain that Szilard meant the lines above this that give the
  breakdown of where the time is spent in the simulation.
 
  Richard
 
 
   However, in your case I suspect that the
  bottleneck is multi-threaded scaling on the AMD CPUs and you should
  probably decrease the number of threads per MPI rank and share GPUs
  between 2-4 ranks.
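
  (For example, a sketch for an 8-core node with two GPUs: mdrun -ntmpi 4
  -ntomp 2 -gpu_id 0011, i.e., four thread-MPI ranks with two OpenMP threads
  each, the -gpu_id string mapping ranks 0-1 to GPU 0 and ranks 2-3 to GPU 1.)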
 
 
 
  OK, but can you give an example of an mdrun command, given an 8-core AMD
  with 2 GPUs? I will try to run it again.
 
 
   Regarding scaling across nodes, you can't expect much from gigabit
  ethernet - especially not from the cheaper cards/switches, in my
  experience even reaction field runs don't scale across nodes with 10G
  ethernet if you have more than 4-6 ranks per node trying to
  communicate (let alone with PME). However, on infiniband clusters we
  have seen scaling to 100 atoms/core (at peak).
 
 
  From your comments, it sounds like a cluster of AMD CPUs is difficult to
  scale across nodes in our current setup.
 
  Let's assume we install Infiniband (20 or 40 Gb/s) in the same system of
  16 nodes of 8-core AMD with 1 GPU each. Considering the same AMD system,
  what is a good way to obtain better performance when we run a task across
  nodes? In other words, what does mdrun_mpi look like?
 
  Thanks,
  Dwey
 
 
 
 

Re: [gmx-users] Re: single point calculation with gromacs

2013-11-07 Thread Mark Abraham
On Wed, Nov 6, 2013 at 4:07 PM, fantasticqhl fantastic...@gmail.com wrote:

 Dear Justin,

 I am sorry for the late reply. I still can't figure it out.


It isn't rocket science - your two .mdp files describe totally different
model physics. To compare things, change as few things as necessary to
generate the comparison. So use the same input .mdp file for the MD vs EM
single-point comparison, just changing the integrator line, and maybe
unconstrained-start (I forget the details). And be aware of
http://www.gromacs.org/Documentation/How-tos/Single-Point_Energy
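
As a rough sketch of what that how-to boils down to (assuming your inputs
are conf.gro and topol.top): take the MD .mdp, set

integrator = md
nsteps     = 0

then grompp as usual and evaluate the single configuration with

mdrun -s topol.tpr -rerun conf.gro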

Mark

Could you please send me the mdp file which was used for your single-point
 calculations?
 I want to do some comparison and then solve the problem.
 Thanks very much!


 All the best,
 Qinghua



[gmx-users] Re: CHARMM .mdp settings for GPU

2013-11-07 Thread Rajat Desikan
Dear All,

Any suggestions? 

Thank you.



Re: [gmx-users] Re: CHARMM .mdp settings for GPU

2013-11-07 Thread Mark Abraham
Hi,

It's not easy to be explicit. CHARMM wasn't parameterized with PME, so the
original paper's coulomb settings can be taken with a grain of salt for use
with PME - others' success in practice should be a guideline here. The good
news is that the default GROMACS PME settings are pretty good for at least
some problems (http://pubs.acs.org/doi/abs/10.1021/ct4005068), and the GPU
auto-tuning of parameters in 4.6 is designed to preserve the right sorts of
things.

LJ is harder because it would make good sense to preserve the way CHARMM
did it, but IIRC you can't use something equivalent to the CHARMM LJ shift
with the Verlet kernels, either natively or with a table. We hope to fix
that in 5.0, but code is not written yet. I would probably use vdwtype =
cut-off, vdw-modifier = potential-shift-verlet and rcoulomb=rlist=rvdw=1.2,
but I don't run CHARMM simulations for a living ;-)
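
Spelled out as a minimal Verlet-scheme snippet (a sketch of the above, not a
validated CHARMM setup):

cutoff-scheme = Verlet
coulombtype   = pme
vdwtype       = cut-off
vdw-modifier  = potential-shift-verlet
rlist         = 1.2
rcoulomb      = 1.2
rvdw          = 1.2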

Mark


On Thu, Nov 7, 2013 at 1:42 PM, Rajat Desikan rajatdesi...@gmail.com wrote:

 Dear All,

 Any suggestions?

 Thank you.




[gmx-users] Re: choosing force field

2013-11-07 Thread pratibha
My protein contains metal ions which are parameterized only in the Gromos
force field. Since I am a newbie to MD simulations, it would be difficult
for me to parameterize those myself.
Can you please guide me, as per my previous mail, on which of the two
simulations I should consider more reliable - 43a1 or 53a7?
Thanks in advance.



Re: [gmx-users] Re: choosing force field

2013-11-07 Thread Justin Lemkul



On 11/7/13 12:14 PM, pratibha wrote:

My protein contains metal ions which are parameterized only in the Gromos
force field. Since I am a newbie to MD simulations, it would be difficult
for me to parameterize those myself.
Can you please guide me, as per my previous mail, on which of the two
simulations I should consider more reliable - 43a1 or 53a7?


AFAIK, there is no such thing as 53A7, and your original message was full of 
similar typos, making it nearly impossible to figure out what you were actually 
doing.  Can you indicate the actual force field(s) that you have been using in 
case someone has any ideas?  The difference between 53A6 and 54A7 should be 
quite pronounced, in my experience, thus any guesses as to what 53A7 should be 
doing are not productive because I don't know what that is.


-Justin

--
==

Justin A. Lemkul, Ph.D.
Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441

==


[gmx-users] Re: LIE method with PME

2013-11-07 Thread Williams Ernesto Miranda Delgado
Hello,
I performed MD simulations of several protein-ligand complexes and solvated
ligands, using PME for long-range electrostatics. I want to calculate the
binding free energy using the LIE method, but when using g_energy I only get
Coul-SR. How can I deal with the ligand-environment long-range electrostatic
interaction using Gromacs? I have seen other discussion lists but I couldn't
arrive at a solution. Could you please help me?
Thank you
Williams




Re: [gmx-users] Re: CHARMM .mdp settings for GPU

2013-11-07 Thread rajat desikan
Thank you, Mark. I think that running it on CPUs is a safer choice at
present.






-- 
Rajat Desikan (Ph.D Scholar)
Prof. K. Ganapathy Ayappa's Lab (no 13),
Dept. of Chemical Engineering,
Indian Institute of Science, Bangalore


Re: [gmx-users] Re: CHARMM .mdp settings for GPU

2013-11-07 Thread Mark Abraham
Reasonable, but CPU-only is not 100% conforming either; IIRC the CHARMM
switch differs from the GROMACS switch (Justin linked a paper here with the
CHARMM switch description a month or so back, but I don't have that link to
hand).

Mark


On Thu, Nov 7, 2013 at 8:45 PM, rajat desikan rajatdesi...@gmail.com wrote:

 Thank you, Mark. I think that running it on CPUs is a safer choice at
 present.





Re: [gmx-users] Re: LIE method with PME

2013-11-07 Thread Mark Abraham
If the long-range component of your electrostatics model is not
decomposable by group (which it isn't), then you can't use that with LIE.
See the hundreds of past threads on this topic :-)

Mark


On Thu, Nov 7, 2013 at 8:34 PM, Williams Ernesto Miranda Delgado 
wmira...@fbio.uh.cu wrote:

 Hello
 I performed MD simulations of several protein-ligand complexes and
 solvated ligands using PME for long-range electrostatics. I want to
 calculate the binding free energy using the LIE method, but when using
 g_energy I only get Coul-SR. How can I deal with the ligand-environment
 long-range electrostatic interaction using GROMACS? I have seen other
 discussion lists but couldn't arrive at a solution. Could you please
 help me?
 Thank you
 Williams





Re: [gmx-users] Re: CHARMM .mdp settings for GPU

2013-11-07 Thread Gianluca Interlandi

Hi Mark!

I think that this is the paper that you are referring to:

dx.doi.org/10.1021/ct900549r

Also for your reference, these are the settings that Justin recommended 
using with CHARMM in gromacs:


vdwtype = switch
rlist = 1.2
rlistlong = 1.4
rvdw = 1.2
rvdw-switch = 1.0
rcoulomb = 1.2

As you mention, the switch function in GROMACS is different from the one in
CHARMM, but it appears that the difference is very small.


Gianluca

On Thu, 7 Nov 2013, Mark Abraham wrote:


Reasonable, but CPU-only is not 100% conforming either; IIRC the CHARMM
switch differs from the GROMACS switch (Justin linked a paper here with the
CHARMM switch description a month or so back, but I don't have that link to
hand).

Mark





-
Gianluca Interlandi, PhD gianl...@u.washington.edu
+1 (206) 685 4435
http://artemide.bioeng.washington.edu/

Research Scientist at the Department of Bioengineering
at the University of Washington, Seattle WA U.S.A.
-


mdrun on 8-core AMD + GTX TITAN (was: Re: [gmx-users] Re: Gromacs-4.6 on two Titans GPUs)

2013-11-07 Thread Szilárd Páll
Let's not hijack James' thread as your hardware is different from his.

On Tue, Nov 5, 2013 at 11:00 PM, Dwey Kauffman mpi...@gmail.com wrote:
 Hi Szilard,

 Thanks for your suggestions. I am indeed aware of this page. In an 8-core
 AMD with 1 GPU, I am very happy about its performance. See below. My

Actually, I was jumping to conclusions too early: since you mentioned an AMD
cluster, I assumed you must have 12-16-core Opteron CPUs. If you
have an 8-core (desktop?) AMD CPU, then you may not need to run more
than one rank per GPU.

 intention is to obtain an even better one because we have multiple nodes.

Btw, I'm not sure it's an economically viable solution to install an
Infiniband network - especially if you have desktop-class machines.
Such a network will end up costing $500 per machine just for a single
network card, let alone cabling and switches.


 ### 8 core AMD with  1 GPU,
 Force evaluation time GPU/CPU: 4.006 ms/2.578 ms = 1.554
 For optimal performance this ratio should be close to 1!


 NOTE: The GPU has 20% more load than the CPU. This imbalance causes
   performance loss, consider using a shorter cut-off and a finer PME
 grid.

                Core t (s)   Wall t (s)      (%)
        Time:   216205.510    27036.812    799.7
                          7h30:36
                  (ns/day)    (hour/ns)
 Performance:       31.956        0.751

 ### 8 core AMD with 2 GPUs

                Core t (s)   Wall t (s)      (%)
        Time:   178961.450    22398.880    799.0
                          6h13:18
                  (ns/day)    (hour/ns)
 Performance:       38.573        0.622
 Finished mdrun on node 0 Sat Jul 13 09:24:39 2013


Indeed, as Richard pointed out, I was asking for *full* logs; these
summaries can't tell much. The table above the summary, entitled
"R E A L   C Y C L E   A N D   T I M E   A C C O U N T I N G", as well as
other information reported across the log file, is what I need to make
an assessment of your simulations' performance.

However, in your case I suspect that the
bottleneck is multi-threaded scaling on the AMD CPUs and you should
probably decrease the number of threads per MPI rank and share GPUs
between 2-4 ranks.


 OK, but can you give an example of an mdrun command, given an 8-core AMD
 with 2 GPUs? I will try to run it again.

You could try running
mpirun -np 4 mdrun -ntomp 2 -gpu_id 0011
but I suspect this won't help because of your scaling issue.
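
For reference, each digit of the -gpu_id string maps one PP rank to a GPU, so 0011 puts ranks 0 and 1 on GPU 0 and ranks 2 and 3 on GPU 1. A single-node thread-MPI equivalent of the same layout (a sketch under the same 8-core/2-GPU assumption) would be:

mdrun -ntmpi 4 -ntomp 2 -gpu_id 0011 -deffnm md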



Regarding scaling across nodes, you can't expect much from gigabit
ethernet - especially not from the cheaper cards/switches, in my
experience even reaction field runs don't scale across nodes with 10G
ethernet if you have more than 4-6 ranks per node trying to
communicate (let alone with PME). However, on infiniband clusters we
have seen scaling to ~100 atoms/core (at peak).

 From your comments, it sounds like a cluster of AMD CPUs is difficult to
 scale across nodes in our current setup.

 Let's assume we install Infiniband (20 or 40 Gb/s) in the same system of 16
 nodes of 8-core AMD with 1 GPU only. Considering the same AMD system, what
 is a good way to obtain better performance when we run a task across nodes?
 In other words, what does the mdrun_mpi command look like?

 Thanks,
 Dwey






Re: [gmx-users] Re: Gromacs-4.6 on two Titans GPUs

2013-11-07 Thread Szilárd Páll
On Thu, Nov 7, 2013 at 6:34 AM, James Starlight jmsstarli...@gmail.com wrote:
 I've come to the conclusion that simulations with 1 or 2 GPUs simultaneously
 give me the same performance:
 mdrun -ntmpi 2 -ntomp 6 -gpu_id 01 -v  -deffnm md_CaM_test,

 mdrun -ntmpi 2 -ntomp 6 -gpu_id 0 -v  -deffnm md_CaM_test,

 Could this be due to too few CPU cores, or is additional RAM (this system
 has 32 GB) needed? Or maybe some extra options are needed in the config?

GROMACS does not really need much (or fast) RAM, and it is most probably
not configuration settings that are causing the lack of scaling.

Given your setup, my guess is that your hardware is simply too imbalanced
for your system to be used efficiently in GROMACS runs.

Please post *full* log files (FYI use e.g. http://pastebin.com), that
will help explain what is going on.


 James






[gmx-users] Re: LIE method with PME

2013-11-07 Thread Williams Ernesto Miranda Delgado
Thank you Mark
What do you think about making a rerun on the trajectories generated
previously with PME, but this time using coulombtype = cut-off? Could you
suggest a cut-off value?
Thanks again
Williams



Re: [gmx-users] Re: LIE method with PME

2013-11-07 Thread Mark Abraham
I'd at least use RF! Use a cut-off consistent with the force field
parameterization. And hope the LIE correlates with reality!

Mark
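
A minimal sketch of such a rerun, assuming the ligand group is named LIG and a 1.0 nm cut-off that matches the force-field parameterization (both are assumptions to adapt):

; rerun_rf.mdp (fragment)
coulombtype = reaction-field
epsilon-rf  = 78        ; generic water-like dielectric, an assumption
rcoulomb    = 1.0
rvdw        = 1.0
energygrps  = LIG SOL

grompp -f rerun_rf.mdp -c conf.gro -p topol.top -n index.ndx -o rerun_rf.tpr
mdrun -s rerun_rf.tpr -rerun traj.xtc -e rerun_rf.edr

With reaction-field there is no mesh term, so the per-group Coul-SR energies from g_energy cover the full electrostatics model within the cut-off.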
On Nov 7, 2013 10:39 PM, Williams Ernesto Miranda Delgado 
wmira...@fbio.uh.cu wrote:

 Thank you Mark
 What do you think about making a rerun on the trajectories generated
 previously with PME, but this time using coulombtype = cut-off? Could you
 suggest a cut-off value?
 Thanks again
 Williams



[gmx-users] Re: CHARMM .mdp settings for GPU

2013-11-07 Thread Rajat Desikan
Dear All,
The settings that I mentioned above are from Klauda et al., for a POPE
membrane system. They can be found in charmm_npt.mdp on Lipidbook (link
below):
http://lipidbook.bioch.ox.ac.uk/package/show/id/48.html

Is there any reason not to use their .mdp parameters for a membrane-protein
system? Justin's recommendation is highly valued since I am using his
force field. Justin, your comments, please.

To summarize:
Klauda et al. suggest:
rlist           = 1.0
rlistlong       = 1.4
rvdw_switch     = 0.8
vdwtype         = Switch
coulombtype     = pme
DispCorr        = EnerPres ; only useful with reaction-field and pme or pppm
rcoulomb        = 1.0
rcoulomb_switch = 0.0
rvdw            = 1.2

Justin's recommendation (per mail above)
vdwtype = switch
rlist = 1.2
rlistlong = 1.4
rvdw = 1.2
rvdw-switch = 1.0
rcoulomb = 1.2



Re: [gmx-users] Re: Gromacs-4.6 on two Titans GPUs

2013-11-06 Thread Richard Broadbent

Hi Dwey,

On 05/11/13 22:00, Dwey Kauffman wrote:

Hi Szilard,

Thanks for your suggestions. I am indeed aware of this page. In an 8-core
AMD with 1 GPU, I am very happy about its performance. See below. My
intention is to obtain an even better one because we have multiple nodes.

### 8 core AMD with  1 GPU,
Force evaluation time GPU/CPU: 4.006 ms/2.578 ms = 1.554
For optimal performance this ratio should be close to 1!


NOTE: The GPU has 20% more load than the CPU. This imbalance causes
   performance loss, consider using a shorter cut-off and a finer PME
grid.

                Core t (s)   Wall t (s)      (%)
        Time:   216205.510    27036.812    799.7
                          7h30:36
                  (ns/day)    (hour/ns)
Performance:        31.956        0.751

### 8 core AMD with 2 GPUs

                Core t (s)   Wall t (s)      (%)
        Time:   178961.450    22398.880    799.0
                          6h13:18
                  (ns/day)    (hour/ns)
Performance:        38.573        0.622
Finished mdrun on node 0 Sat Jul 13 09:24:39 2013



I'm almost certain that Szilard meant the lines above this that give the 
breakdown of where the time is spent in the simulation.


Richard





Re: [gmx-users] Re: Analysis tools and triclinic boxes

2013-11-06 Thread Justin Lemkul



On 11/5/13 7:14 PM, Stephanie Teich-McGoldrick wrote:


Hi Justin,

Thanks for the response. My question was prompted by line 243 in
gmx_cluster.c which states /* Should use pbc_dx when analysing multiple
molecueles,but the box is not stored for every frame.*/ I just wanted to
verify that analysis tools are written for any box shape.



I have never had any problems with any of the analysis tools using any of the 
box shapes, though that of course does not negate the possibility of problems. 
The comments in pbc.h describe all of the functions quite well and what the 
potential issues might be.  If there is a demonstrable problem with something, 
that is certainly worth pursuing.


-Justin

--
==

Justin A. Lemkul, Ph.D.
Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441

==
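
For anyone writing their own analysis code, the triclinic-safe pattern the tools use is roughly the following (a sketch against the 4.x C API; headers and exact signatures may differ between versions):

/* sketch: minimum-image distance that works for any box shape */
#include "pbc.h"

t_pbc pbc;
rvec  dx;
set_pbc(&pbc, ePBC, box);       /* ePBC and box come from the topology/frame */
pbc_dx(&pbc, x[i], x[j], dx);   /* dx = minimum-image vector from x[j] to x[i] */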


[gmx-users] Re: single point calculation with gromacs

2013-11-06 Thread fantasticqhl
Dear Justin,

I am sorry for the late reply. I still can't figure it out.

Could you please send me the .mdp file that was used for your single-point
calculations? I want to do some comparison and then solve the problem.
Thanks very much!


All the best,
Qinghua
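
Justin's actual file isn't reproduced in this thread, but a generic zero-step evaluation is commonly set up like this (a sketch; keep your production cut-off and PME settings so the energies are comparable):

; sp.mdp (fragment)
integrator = md
nsteps     = 0

grompp -f sp.mdp -c conf.gro -p topol.top -o sp.tpr
mdrun -s sp.tpr -rerun conf.gro -e sp.edr
g_energy -f sp.edr

The -rerun evaluates energies and forces for the supplied configuration without integrating.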



Re: [gmx-users] Re: Gromacs-4.6 on two Titans GPUs

2013-11-06 Thread James Starlight
I've come to the conclusion that simulations with 1 or 2 GPUs simultaneously
give me the same performance:
mdrun -ntmpi 2 -ntomp 6 -gpu_id 01 -v  -deffnm md_CaM_test,

mdrun -ntmpi 2 -ntomp 6 -gpu_id 0 -v  -deffnm md_CaM_test,

Could this be due to too few CPU cores, or is additional RAM (this system
has 32 GB) needed? Or maybe some extra options are needed in the config?

James






[gmx-users] Re: Using mpirun on CentOS 6.0

2013-11-05 Thread bharat gupta
Hi,

I am getting the following error while using the command -

[root@localhost INGT]# mpirun -np 24 mdrun_mpi -v -deffnm npt

Error -

/usr/bin/mpdroot: open failed for root's mpd conf file
mpiexec_localhost.localdomain (__init__ 1208): forked process failed;
status=255

I compiled gromacs using ./configure --enable-shared --enable-mpi. I have
installed the mpich package; this is what I get when I check for mpirun
and mpiexec:

[root@localhost /]# which mpirun
/usr/bin/mpirun
[root@localhost /]# which mpiexec
/usr/bin/mpiexec

What could be the problem here ??

Thanks

Bharat
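
That error is MPICH's old mpd process manager complaining that root has no mpd configuration file. A sketch of the usual fix (the secret word is arbitrary; for a non-root user the file is ~/.mpd.conf instead):

echo "MPD_SECRETWORD=changeme" > /etc/mpd.conf
chmod 600 /etc/mpd.conf
mpd --daemon                    # start the mpd ring
mpirun -np 24 mdrun_mpi -v -deffnm npt

Newer MPICH releases use the hydra launcher and avoid mpd entirely, which may be the simpler route.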


[gmx-users] Re: Energy minimization has stopped....

2013-11-05 Thread Kalyanashis
I have given my .mdp file below:
; title =  trp_drg
warning =  10
cpp =  /usr/bin/cpp
define  =  -DPOSRES
constraints =  all-bonds
integrator  =  md
dt  =  0.002 ; ps !
nsteps  =  100 ; total 2000.0 ps.
nstcomm =  100
nstxout =  250 ; output coordinates every 0.5 ps
nstvout =  1000 ; output velocities every 2.0 ps
nstfout =  0
nstlog  =  100
nstenergy   =  100
nstlist =  100
ns_type =  grid
rlist   =  1.0
coulombtype =  PME
rcoulomb=  1.0
vdwtype =  cut-off
rvdw=  1.0
fourierspacing  =  0.12
fourier_nx  =  0
fourier_ny  =  0
fourier_nz  =  0
pme_order   =  6
ewald_rtol  =  1e-5
optimize_fft=  yes
; Berendsen temperature coupling is on
Tcoupl  =  berendsen
tau_t   =  1.0    1.0    -0.1     1.0    1.0
tc_grps =  SOL    NA     protein  OMP    CL
ref_t   =  300    300    300      300    300
; Pressure coupling is on
pcoupl  =  berendsen ; Use Parrinello-Rahman for research work
pcoupltype  =  isotropic ; Use semiisotropic when working with
membranes
tau_p   =  2.0
compressibility =  4.5e-5
ref_p   =  1.0
refcoord-scaling=  all
; Generate velocities is on at 300 K.
gen_vel = yes
gen_temp= 300.0
gen_seed= 173529


It is a large protein system containing a drug molecule, and the whole
system has about 16000 atoms.
I did not get any .gro file, so the MD run did not finish properly.
Please suggest the probable source of this kind of error.
Thank you so much.



On Tue, Nov 5, 2013 at 5:29 PM, jkrie...@mrc-lmb.cam.ac.uk [via GROMACS] 
ml-node+s5086n5012256...@n6.nabble.com wrote:

 What does your curve look like? What parameters are you using in the mdp?
 How big is your system and what kind of molecules are in there? Providing
 this kind of information would help people work out what the problem is.

 Then again it may be ok that the minimisation has converged without
 reaching the Fmax cutoff. 2 is a large number of steps.

  Hi,
    Whenever I am trying to do a position-restrained MD run, it has been
  stopped at the middle of the MD run. I have got the following error. Can
  you please suggest me something to resolve this error?
  Energy minimization has stopped, but the forces have not converged to the
  requested precision Fmax < 100 (which may not be possible for your system).
  It stopped because the algorithm tried to make a new step whose size was
  too small, or there was no change in the energy since last step. Either
  way, we regard the minimization as converged to within the available
  machine precision, given your starting configuration and EM parameters.

  Double precision normally gives you higher accuracy, but this is often not
  needed for preparing to run molecular dynamics.

  writing lowest energy coordinates.

  Steepest Descents converged to machine precision in 20514 steps,
  but did not reach the requested Fmax < 100.
  Potential Energy  = -9.9811250e+06
  Maximum force     =  6.1228135e+03 on atom 15461
  Norm of force     =  1.4393512e+01

  gcq#322: The Feeling of Power was Intoxicating, Magic (Frida Hyvonen)
 

Re: [gmx-users] Re: Energy minimization has stopped....

2013-11-05 Thread Justin Lemkul



On 11/5/13 7:19 AM, Kalyanashis wrote:

I have given my .mdp file,
; title =  trp_drg
warning =  10
cpp =  /usr/bin/cpp
define  =  -DPOSRES
constraints =  all-bonds
integrator  =  md
dt  =  0.002 ; ps !
nsteps  =  100 ; total 2000.0 ps.
nstcomm =  100
nstxout =  250 ; output coordinates every 0.5 ps
nstvout =  1000 ; output velocities every 2.0 ps
nstfout =  0
nstlog  =  100
nstenergy   =  100
nstlist =  100
ns_type =  grid
rlist   =  1.0
coulombtype =  PME
rcoulomb=  1.0
vdwtype =  cut-off
rvdw=  1.0
fourierspacing  =  0.12
fourier_nx  =  0
fourier_ny  =  0
fourier_nz  =  0
pme_order   =  6
ewald_rtol  =  1e-5
optimize_fft=  yes
; Berendsen temperature coupling is on
Tcoupl  =  berendsen
tau_t   =  1.0    1.0    -0.1     1.0    1.0
tc_grps =  SOL    NA     protein  OMP    CL
ref_t   =  300    300    300      300    300


These settings make no sense.  Please read 
http://www.gromacs.org/Documentation/Terminology/Thermostats.
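
To illustrate the problem: a negative tau_t is not meaningful, and coupling single-ion groups separately gives tiny groups with poor statistics. One conventional sketch (an illustration, not a prescription from this thread):

tcoupl  = V-rescale
tc_grps = Protein  Non-Protein
tau_t   = 0.1      0.1
ref_t   = 300      300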



; Pressure coupling is on
pcoupl  =  berendsen ; Use Parrinello-Rahman for research work
pcoupltype  =  isotropic ; Use semiisotropic when working with
membranes
tau_p   =  2.0
compressibility =  4.5e-5
ref_p   =  1.0
refcoord-scaling=  all
; Generate velocities is on at 300 K.
gen_vel = yes
gen_temp= 300.0
gen_seed= 173529


It is a large protein system containing a drug molecule, and the whole
system has about 16000 atoms.
I did not get any .gro file, so the MD run did not finish properly.
Please suggest the probable source of this kind of error.


The run crashes because your energy minimization effectively failed.

-Justin

--
==

Justin A. Lemkul, Ph.D.
Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441

==
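
A generic minimization setup to retry before the position-restrained run (a sketch; the tolerance and step count are common defaults, not values taken from this thread):

; minim.mdp (fragment)
integrator  = steep
emtol       = 1000.0    ; stop when Fmax < 1000 kJ mol-1 nm-1
emstep      = 0.01
nsteps      = 50000
coulombtype = PME
rlist       = 1.0
rcoulomb    = 1.0
rvdw        = 1.0

If steepest descent stalls at machine precision with a large Fmax, as here, the starting structure usually has bad contacts that are worth fixing first.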


[gmx-users] Re: Replacing atom

2013-11-05 Thread J Alizadeh
Hi,
I need to replace an atom with another in the considered system.
I'd like to know if it is possible and, if so, what changes I need to make.

thanks
j.rahrow





Re: [gmx-users] Re: Hardware for best gromacs performance?

2013-11-05 Thread Timo Graen

29420 atoms with some tuning of the write-out and communication intervals:
nodes again: 2 x Xeon E5-2680v2 + 2 x NVIDIA K20X GPGPUs @ 4fs vsites
1 node   212 ns/day
2 nodes  295 ns/day


Re: [gmx-users] Re: Replacing atom

2013-11-05 Thread Justin Lemkul



On 11/5/13 10:34 AM, J Alizadeh wrote:

Hi,
I need to replace an atom with another in the considered system.
I'd like to know if it is possible and if so what changes I need to do.



The coordinate file replacement is trivial.  Just open the file in a text editor
and rename the atom.  The topology is trickier, because you need a whole new
set of parameters.


-Justin

--
==

Justin A. Lemkul, Ph.D.
Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441

==
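
For the coordinate file, something like this works when the new name has the same length as the old one (names here are hypothetical; .gro files are fixed-width, so keep the columns aligned):

sed 's/ CA1 / CB1 /' conf.gro > conf_renamed.gro

On the topology side, the [ atoms ] entry for that atom needs the new atom type and charge, and every bonded term it appears in needs parameters for the new type.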


Re: [gmx-users] Re: Using gromacs on Rocks cluster

2013-11-05 Thread Mark Abraham
You need to configure your MPI environment to do so (so read its docs).
GROMACS can only do whatever that makes available.

Mark
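
A sketch of what that usually looks like with an MPI-enabled build (launcher and binary names depend on the installation):

mpirun -np 32 mdrun_mpi -v -deffnm nvt

An MPI binary started without the launcher runs as a single process, which would match the one-busy-CPU behaviour described.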


On Tue, Nov 5, 2013 at 2:16 AM, bharat gupta bharat.85.m...@gmail.com wrote:

 Hi,

 I have installed Gromacs 4.5.6 on a Rocks 6.0 cluster, and my system has
 32 processors (CPUs). But while running the NVT equilibration step, it uses
 only 1 CPU and the others remain idle. I have compiled Gromacs using the
 --enable-mpi option. How can I make mdrun use all 32 processors?

 --
 Bharat


Re: [gmx-users] Re: Hardware for best gromacs performance?

2013-11-05 Thread Szilárd Páll
Timo,

Have you used the default settings, that is, one rank/GPU? If that is
the case, you may want to try using multiple ranks per GPU; this can
often help when you have more than 4-6 cores/GPU. Separate PME ranks are
not switched on by default with GPUs; have you tried using any?

Cheers,
--
Szilárd Páll
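
A sketch combining both suggestions, under assumed hardware of 2 GPUs and enough cores per node (the rank counts are illustrative; the -gpu_id string maps only the PP ranks):

mpirun -np 6 mdrun_mpi -npme 2 -gpu_id 0011 -v -deffnm md

Here 4 PP ranks share the two GPUs, two ranks per GPU, while 2 ranks do only PME on the CPU.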


On Tue, Nov 5, 2013 at 3:29 PM, Timo Graen tgr...@gwdg.de wrote:
 29420 Atoms with a some tuning of the write out and communication intervals:
 nodes again: 2 x Xeon E5-2680v2 + 2 x NVIDIA K20X GPGPUs @ 4fs vsites
 1 node   212 ns/day
 2 nodes  295 ns/day



[gmx-users] Re: Gromacs-4.6 on two Titans GPUs

2013-11-05 Thread Dwey
Hi Mike,


I have a similar configuration, except a cluster of AMD-based Linux
platforms with 2 GPU cards.

Your suggestion works. However, the performance with 2 GPUs discourages
me because, for example, with 1 GPU our compute node can easily
obtain a simulation of 31 ns/day for a protein of 300 amino acids, but
with 2 GPUs it goes only as far as 38 ns/day. I am very curious as to why
the performance of 2 GPUs is below expectation. Is there any overhead
we should pay attention to? Note that these 2 GPU cards are
linked by an SLI bridge within the same node.

Since the computer nodes of our cluster have at least one GPU but
are connected by slow network cards (~1 Gb/sec), I
reasonably suspect that the performance will not be proportional to the
total number of GPU cards. I am wondering if you have any suggestions
about a cluster of GPU nodes. For example, will Infiniband
networking help increase the final performance when we execute an MPI
task? Or what else? Or should we forget about MPI and use a single GPU
instead?

Any suggestion is highly appreciated.
Thanks.

Dwey

 Date: Tue, 5 Nov 2013 16:20:39 +0100
 From: Mark Abraham mark.j.abra...@gmail.com
 Subject: Re: [gmx-users] Gromacs-4.6 on two Titans GPUs
 To: Discussion list for GROMACS users gmx-users@gromacs.org

 On Tue, Nov 5, 2013 at 12:55 PM, James Starlight 
 jmsstarli...@gmail.comwrote:

 Dear Richard,


 1)  mdrun -ntmpi 1 -ntomp 12 -gpu_id 0 -v  -deffnm md_CaM_test
 gave me performance of about 25 ns/day for an explicitly solvated system
 consisting of 68k atoms (CHARMM ff, 1.0 nm cut-offs); 2), the two-GPU run,
 gave slightly worse performance in comparison to 1)


 Richard suggested

 mdrun -ntmpi 2 -ntomp 6 -gpu_id 01 -v  -deffnm md_CaM_test,

 which looks correct to me. -ntomp 6 is probably superfluous

 Mark


 finally

 3) mdrun -deffnm md_CaM_test
 running in the same regime as in 2), so it also gave me 22 ns/day for
 the same system.

 How could the efficiency of using dual GPUs be increased?

 James


 2013/11/5 Richard Broadbent richard.broadben...@imperial.ac.uk

  Dear James,
 
 
  On 05/11/13 11:16, James Starlight wrote:
 
  My suggestions:
 
  1) During compilation using -march=corei7-avx-i I obtained an error that
  something was not found (sorry, I didn't save the log), so I compiled
  gromacs without this flag

  2) I get twice the performance using just 1 GPU by means of
 
  mdrun -ntmpi 1 -ntomp 12 -gpu_id 0 -v  -deffnm md_CaM_test
 
  than using of both gpus
 
  mdrun -ntmpi 2 -ntomp 12 -gpu_id 01 -v  -deffnm md_CaM_test
 
  in the last case I have obtained warning
 
  WARNING: Oversubscribing the available 12 logical CPU cores with 24
  threads.
This will cause considerable performance loss!
 
   here you are requesting 2 thread-MPI processes, each with 12 OpenMP
  threads, hence a total of 24 threads; however, even with hyperthreading
  enabled there are only 12 threads on your machine. Therefore, only
  allocate 12. Try
 
  mdrun -ntmpi 2 -ntomp 6 -gpu_id 01 -v  -deffnm md_CaM_test
 
  or even
 
  mdrun -v  -deffnm md_CaM_test
 
  I believe it should autodetect the GPUs and run accordingly for details
 of
  how to use gromacs with mpi/thread mpi openmp and GPUs see
 
  http://www.gromacs.org/Documentation/Acceleration_and_parallelization
 
  Which describes how to use these systems
 
  Richard
 
 
   How it could be fixed?
  All gpu are recognized correctly
 
 
  2 GPUs detected:
 #0: NVIDIA GeForce GTX TITAN, compute cap.: 3.5, ECC:  no, stat:
  compatible
 #1: NVIDIA GeForce GTX TITAN, compute cap.: 3.5, ECC:  no, stat:
  compatible
 
 
  James
 
 
  2013/11/4 Szilárd Páll pall.szil...@gmail.com
 
   You can use the -march=native flag with gcc to optimize for the CPU
  you are building on, or e.g. -march=corei7-avx-i for Intel Ivy Bridge
  CPUs.
  --
  Szilárd Páll
 
 
  On Mon, Nov 4, 2013 at 12:37 PM, James Starlight 
 jmsstarli...@gmail.com
  
  wrote:
 
  Szilárd, thanks for suggestion!
 
  What kind of CPU optimisation should I take into account, assuming that
  I'm using a dual-GPU Nvidia TITAN workstation with a 6-core i7 (recognized
  as 12 nodes in Debian)?
 
  James
 
 
  2013/11/4 Szilárd Páll pall.szil...@gmail.com
 
   That should be enough. You may want to use the -march (or equivalent)
  compiler flag for CPU optimization.
 
  Cheers,
  --
  Szilárd Páll
 
 
  On Sun, Nov 3, 2013 at 10:01 AM, James Starlight 
 
  jmsstarli...@gmail.com
 
  wrote:
 
  Dear Gromacs Users!
 
  I'd like to compile the latest 4.6 Gromacs with native GPU support on my
  i7 CPU with dual GeForce Titans mounted. With this config I'd like to
  perform simulations using the CPU as well as both GPUs simultaneously.
 
  What flags besides
 
  cmake .. -DGMX_GPU=ON -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-5.5
 
 
  should I define to CMAKE for compiling optimized 

[gmx-users] Re: Hardware for best gromacs performance?

2013-11-05 Thread Dwey Kauffman
Hi Timo,

  Can you provide a benchmark with 1 Xeon E5-2680 and 1 Nvidia
k20x GPGPU on the same test of 29420 atoms?

Are these two GPU cards (within the same node) connected by a SLI (Scalable
Link Interface) ? 

Thanks,
Dwey



Re: [gmx-users] Re: Gromacs-4.6 on two Titans GPUs

2013-11-05 Thread Szilárd Páll
Hi Dwey,

First and foremost, make sure to read the
http://www.gromacs.org/Documentation/Acceleration_and_parallelization
page, in particular the Multiple MPI ranks per GPU section which
applies in your case.

Secondly, please do post log files (pastebin is your friend), the
performance table at the end of the log tells much the performance
story and based on that I/we can make suggestions.

Using multiple GPUs requires domain decomposition, which does have a
considerable overhead, especially comparing no DD with DD (i.e. a 1-GPU
run with a 2-GPU run). However, in your case I suspect that the
bottleneck is multi-threaded scaling on the AMD CPUs and you should
probably decrease the number of threads per MPI rank and share GPUs
between 2-4 ranks.

Regarding scaling across nodes, you can't expect much from gigabit
ethernet - especially not from the cheaper cards/switches, in my
experience even reaction field runs don't scale across nodes with 10G
ethernet if you have more than 4-6 ranks per node trying to
communicate (let alone with PME). However, on infiniband clusters we
have seen scaling to ~100 atoms/core (at peak).

Cheers,
--
Szilárd

On Tue, Nov 5, 2013 at 9:29 PM, Dwey mpi...@gmail.com wrote:
 Hi Mike,


 I have similar configurations except a cluster of AMD-based linux
 platforms with 2 GPU cards.

 Your  suggestion works. However, the performance of 2 GPU  discourages
 me  because , for example,  with 1 GPU, our computer node can easily
 obtain a  simulation of 31ns/day for a protein of 300 amino acids but
 with 2 GPUs, it goes as far as 38 ns/day. I am very curious as to  why
  the performance of 2 GPUs is under expectation. Is there any overhead
 that we should pay attention to ?  Note that these 2GPU cards are
 linked by a SLI bridge within the same node.

 Since the computer nodes of our cluster have at least one GPU  but
 they are connected by slow network cards ( 1GB/sec), unfortunately, I
 reasonably doubt that the performance will not be proportional to the
 total number of  GPU cards.  I am wondering if you have any suggestion
 about a cluster of GPU nodes.   For example, will a infiniband
 networking help increase a final performance when we execute a mpi
 task ? or what else ?  or forget about mpi and use single GPU instead.

 Any suggestion is highly appreciated.
 Thanks.

 Dwey

 Date: Tue, 5 Nov 2013 16:20:39 +0100
 From: Mark Abraham mark.j.abra...@gmail.com
 Subject: Re: [gmx-users] Gromacs-4.6 on two Titans GPUs
 To: Discussion list for GROMACS users gmx-users@gromacs.org

 On Tue, Nov 5, 2013 at 12:55 PM, James Starlight 
 jmsstarli...@gmail.comwrote:

 Dear Richard,


 1)  mdrun -ntmpi 1 -ntomp 12 -gpu_id 0 -v  -deffnm md_CaM_test
 gave me performance of about 25 ns/day for the explicitly solvated system
 consisting of 68k atoms (CHARMM ff, 1.0 nm cutoffs)

 gave slightly worse performance in comparison to 1)


 Richard suggested

 mdrun -ntmpi 2 -ntomp 6 -gpu_id 01 -v  -deffnm md_CaM_test,

 which looks correct to me. -ntomp 6 is probably superfluous

 Mark


 finally

 3) mdrun -deffnm md_CaM_test
 ran in the same regime as 2), so it also gave me 22 ns/day for
 the same system.

 How could the efficiency of using dual GPUs be increased?

 James


 2013/11/5 Richard Broadbent richard.broadben...@imperial.ac.uk

  Dear James,
 
 
  On 05/11/13 11:16, James Starlight wrote:
 
  My suggestions:
 
  1) During compilation using -march=corei7-avx-i I obtained an error
  that something was not found (sorry, I didn't save the log), so I
  compiled gromacs without this flag
 
  2) I get twice as good performance using just 1 GPU by means of
 
  mdrun -ntmpi 1 -ntomp 12 -gpu_id 0 -v  -deffnm md_CaM_test
 
  than using of both gpus
 
  mdrun -ntmpi 2 -ntomp 12 -gpu_id 01 -v  -deffnm md_CaM_test
 
  in the last case I have obtained warning
 
  WARNING: Oversubscribing the available 12 logical CPU cores with 24
  threads.
This will cause considerable performance loss!
 
   here you are requesting 2 thread-MPI processes, each with 12 OpenMP
  threads, hence a total of 24 threads. However, even with hyper-threading
  enabled there are only 12 threads on your machine. Therefore, only
  allocate 12. Try
 
  mdrun -ntmpi 2 -ntomp 6 -gpu_id 01 -v  -deffnm md_CaM_test
 
  or even
 
  mdrun -v  -deffnm md_CaM_test
 
  I believe it should autodetect the GPUs and run accordingly. For details
  of how to use gromacs with MPI/thread-MPI, OpenMP and GPUs, see
 
  http://www.gromacs.org/Documentation/Acceleration_and_parallelization
 
  which describes how to use these systems.
 
  Richard
 
 
   How could it be fixed?
  All GPUs are recognized correctly:
 
 
  2 GPUs detected:
 #0: NVIDIA GeForce GTX TITAN, compute cap.: 3.5, ECC:  no, stat:
  compatible
 #1: NVIDIA GeForce GTX TITAN, compute cap.: 3.5, ECC:  no, stat:
  compatible
 
 
  James
 
 
  2013/11/4 

Re: [gmx-users] Re: Hardware for best gromacs performance?

2013-11-05 Thread Szilárd Páll
On Tue, Nov 5, 2013 at 9:55 PM, Dwey Kauffman mpi...@gmail.com wrote:
 Hi Timo,

    Can you provide a benchmark with 1 Xeon E5-2680 and 1 Nvidia
  K20X GPGPU on the same 29420-atom test?
 
  Are these two GPU cards (within the same node) connected by an SLI (Scalable
  Link Interface) bridge?

Note that SLI has no use for compute, only for graphics.

--
Szilárd

 Thanks,
 Dwey



[gmx-users] Re: Gromacs-4.6 on two Titans GPUs

2013-11-05 Thread Dwey Kauffman
Hi Szilard,

   Thanks for your suggestions. I am indeed aware of this page. On an 8-core
AMD node with 1 GPU, I am very happy with its performance. See below. My
intention is to obtain an even better one because we have multiple nodes.

### 8-core AMD with 1 GPU
Force evaluation time GPU/CPU: 4.006 ms/2.578 ms = 1.554
For optimal performance this ratio should be close to 1!

NOTE: The GPU has 20% more load than the CPU. This imbalance causes
  performance loss, consider using a shorter cut-off and a finer PME
  grid.

               Core t (s)   Wall t (s)      (%)
       Time:   216205.510    27036.812    799.7
                       7h30:36
                (ns/day)    (hour/ns)
Performance:       31.956        0.751

### 8-core AMD with 2 GPUs

               Core t (s)   Wall t (s)      (%)
       Time:   178961.450    22398.880    799.0
                       6h13:18
                (ns/day)    (hour/ns)
Performance:       38.573        0.622
Finished mdrun on node 0 Sat Jul 13 09:24:39 2013


However, in your case I suspect that the 
bottleneck is multi-threaded scaling on the AMD CPUs and you should 
probably decrease the number of threads per MPI rank and share GPUs 
between 2-4 ranks.


OK, but can you give an example of an mdrun command for an 8-core AMD node
with 2 GPUs? I will try to run it again.


Regarding scaling across nodes, you can't expect much from gigabit 
ethernet - especially not from the cheaper cards/switches, in my 
experience even reaction field runs don't scale across nodes with 10G 
ethernet if you have more than 4-6 ranks per node trying to 
communicate (let alone with PME). However, on infiniband clusters we 
have seen scaling to 100 atoms/core (at peak). 

From your comments, it sounds like a cluster of AMD CPUs is difficult to
scale across nodes in our current setup.

Let's assume we install InfiniBand (20 or 40 Gb/s) in the same system of 16
nodes of 8-core AMD with 1 GPU each. Considering the same AMD system, what
is a good way to obtain better performance when we run a task across nodes?
In other words, what does mdrun_mpi look like?
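
(For illustration, a hypothetical launch line for such a setup - 16
nodes, 8 cores and 1 GPU per node, with OpenMPI assumed as the launcher;
the rank and thread counts would need benchmarking:)

mpirun -np 32 -npernode 2 mdrun_mpi -ntomp 4 -gpu_id 00 -deffnm md

Each node runs two PP ranks sharing its single GPU (-gpu_id 00), with 4
OpenMP threads per rank to fill the 8 cores; with PME, dedicating
separate PME ranks (-npme) may also help.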

Thanks,
Dwey






[gmx-users] Re: Hardware for best gromacs performance?

2013-11-05 Thread Dwey Kauffman
Hi Szilard,

 Thanks.

From Timo's benchmark:
1 node         142 ns/day
2 nodes FDR14  218 ns/day
4 nodes FDR14  257 ns/day
8 nodes FDR14  326 ns/day

It looks like an InfiniBand network is required in order to scale up when
running a task across nodes. Is that correct?


Dwey




RE: [gmx-users] RE: Gibbs Energy Calculation and charges

2013-11-05 Thread Dallas Warren
Thank you for the pointer Michael.

couple-intramol = no
A diff of the gmxdump output of the two tpr files (normal and double-charged)
shows that in both of these cases, i.e. when:
lambda is set to 1.0 (atoms within both molecules will have zero charge)
lambda is set to 0.00 and 0.50, respectively (both will have the same
charge)
there are the following differences:
functype[] = LJC14_Q: qi and qj are set to the original charges, not
the ones scaled by lambda
functype[] = LJC_NB: qi and qj are set to the original charges, not the
ones scaled by lambda
atom[]: q is set to the original charges, not the ones scaled by
lambda
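
(For reference, the comparison can be reproduced along these lines; the
file names are hypothetical:)

gmxdump -s normal.tpr > normal.txt 2>&1
gmxdump -s double.tpr > double.txt 2>&1
diff normal.txt double.txt | less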

So this explains why I do not see the two topologies giving the same value at
the same atomic charges: the topologies being simulated are not the same.
The 1-4 charge interactions are still at the original charges, as are those
1-5 and beyond.

What is the reason that the charges are left untouched here as lambda
changes? I can understand it for LJ/van der Waals, since the 1-4
interactions are important for the proper dihedrals, but what is the reason
for the charges being left untouched? Having thought this through, I have
answered it myself: here we are interested in turning off the molecule's
interactions with its external environment; we are moving the entire
molecule from fully interacting with its external environment to
non-interacting. The molecule itself should be left alone.

Turning to the side issue of turning couple-intramol on -

couple-intramol = yes
A diff of the equivalent files shows the following differences:
functype[] = LJC14_Q: qi and qj are set to the original charges, not
the ones scaled by lambda
atom[]: q is set to the original charges, not the ones scaled by
lambda

This confirms what Michael mentioned earlier: couple-intramol only
affects the interactions 1-5 and beyond, i.e. LJC_NB.

Which then begs the question: why does the value of dH/dl change so
dramatically when this option is turned on, as I observed at
http://ozreef.org/stuff/gromacs/couple-intramol.png ? The only thing being
changed is that LJC_NB is now being scaled with lambda.

Catch ya,

Dr. Dallas Warren
Drug Delivery, Disposition and Dynamics
Monash Institute of Pharmaceutical Sciences, Monash University
381 Royal Parade, Parkville VIC 3052
dallas.war...@monash.edu
+61 3 9903 9304
-
When the only tool you own is a hammer, every problem begins to resemble a 
nail. 


 -Original Message-
 From: gmx-users-boun...@gromacs.org [mailto:gmx-users-
 boun...@gromacs.org] On Behalf Of Michael Shirts
 Sent: Thursday, 31 October 2013 1:52 PM
 To: Discussion list for GROMACS users
 Subject: Re: [gmx-users] RE: Gibbs Energy Calculation and charges
 
  I likely won't have much time to look at it tonight, but you can see
  exactly what the option is doing to the topology. Run gmxdump on the
  tpr. All of the stuff that couple-intramol does is in grompp, so the
  results will show up in the detailed listings of the interactions, and
  which ones have which values set for the A and B states.
 
 On Wed, Oct 30, 2013 at 5:36 PM, Dallas Warren
 dallas.war...@monash.edu wrote:
  Michael, thanks for taking the time to comment and have a look.
 
  The real issue I am having is a bit deeper into the topic than that,
 my last reply was just an observation on something else.  Will
 summarise what I have been doing etc.
 
   I have a molecule for which I am calculating the Gibbs energy of hydration
  and solvation (octanol). In a second topology the only difference is
  that the atomic charges have been doubled. Considering that charges
  are scaled linearly with lambda, the dH/dl values obtained with the normal
  charges from lambda 0 to 1 should reproduce those of the double-charged
  molecule from lambda 0.5 to 1.0. Is that a correct interpretation?
  Over that range the only difference should be the charge of the atoms,
  and the charges will be identical.
 
  I was using couple-intramol = no and the following are the results
 from those simulations.
 
  For the OE atom within the molecule, I have plotted the following
 graphs of dH/dl versus charge of that atom for both of the topologies.
  octanol - http://ozreef.org/stuff/octanol.gif
  water - http://ozreef.org/stuff/water.gif
  mdp file - http://ozreef.org/stuff/gromacs/mdout.mdp
 
  The mismatch between the two topologies is the real issue that I am
 having.  I was hoping to get the two to overlap.
 
   My conclusion based on this is that there is actually something else
  being changed in the topology by GROMACS when the simulations are
  run. The comments in the manual allude to that, but I am not entirely
  sure what is going on.
 
  From the manual:
 
 couple-intramol:
 
 no
  All intra-molecular non-bonded interactions for moleculetype
 couple-moltype are replaced

[gmx-users] RE: Gibbs Energy Calculation and charges

2013-11-05 Thread Dallas Warren
Thanks for the suggestion, Chris. I had a quick look and couldn't easily see
how to do this, but I think I am at a point now where it is not an issue and
I don't have to actually do it.

Catch ya,

Dr. Dallas Warren
Drug Delivery, Disposition and Dynamics
Monash Institute of Pharmaceutical Sciences, Monash University
381 Royal Parade, Parkville VIC 3052
dallas.war...@monash.edu
+61 3 9903 9304
-
When the only tool you own is a hammer, every problem begins to resemble a 
nail. 


 -Original Message-
 From: gmx-users-boun...@gromacs.org [mailto:gmx-users-
 boun...@gromacs.org] On Behalf Of Christopher Neale
 Sent: Saturday, 2 November 2013 3:50 AM
 To: gmx-users@gromacs.org
 Subject: [gmx-users] Gibbs Energy Calculation and charges
 
 Dear Dallas:
 
 Seems like you could test Michael's idea by removing all 1-4 NB
 interactions from your topology. It won't produce any biologically
 useful results, but might be a worthwhile check to see if indeed this
 is the issue.
 
 To do this, I figure you would set gen-pairs to no in the [ defaults
 ] directive of forcefield.itp, remove the [ pairtypes ] section from
 ffnonbonded.itp, and remove the [ pairs ] section from your molecular
 .itp file. (You can quickly check that the 1-4 energy is zero in all
 states to ensure that this works).
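 
  (As a sketch of the first edit - the numeric values here are
  placeholders from a hypothetical force field, not values from Dallas's
  topology; only the gen-pairs field is the point:)
 
  [ defaults ]
  ; nbfunc  comb-rule  gen-pairs  fudgeLJ  fudgeQQ
    1       2          no         1.0      0.8333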
 
 If that gives you the result that you expect, then you could go on to
 explicitely state the 1-4 interactions for the A and B states (I
 presume that this is possible). Of course, you should be able to jump
 directly to this second test, but the first test might be useful
 because it rules out the possibility that you make a typo somewhere.
 
 Chris.
 
 -- original message --
 
 I think the grammar got a little garbled there, so I'm not sure quite
 what you are claiming.
 
  One important thing to remember: 1-4 interactions are treated as
  bonded interactions right now FOR COUPLE-INTRAMOL (not for the lambda
  dependence of the potential energy function), so whether
  couple-intramol is set to yes or no does not affect these interactions
  at all.  It only affects the nonbonded interactions at separations of
  1-5 and beyond.  At least to me, this is nonintuitive (and we're
  coming up with a better scheme for 5.0), but might that explain what
  you are getting?
 
 On Tue, Oct 29, 2013 at 9:44 PM, Dallas Warren Dallas.Warren at
 monash.edu wrote:
  Just want this to make another pass, just in case those in the know
 missed it.
 
   Using couple-intramol = yes, the resulting dH/dl plot looks as though
  at lambda = 1 it is actually equal to couple-intramol = no
  with lambda = 0.
 
  Should that be the case?
 
  Catch ya,
 
  Dr. Dallas Warren


[gmx-users] Re: gmx-users Digest, Vol 115, Issue 16

2013-11-05 Thread Stephanie Teich-McGoldrick
Date: Mon, 04 Nov 2013 13:32:52 -0500
From: Justin Lemkul jalem...@vt.edu
Subject: Re: [gmx-users] Analysis tools and triclinic boxes
To: Discussion list for GROMACS users gmx-users@gromacs.org

Hi Justin,

Thanks for the response. My question was prompted by line 243 in
gmx_cluster.c which states /* Should use pbc_dx when analysing multiple
molecueles,but the box is not stored for every frame.*/ I just wanted to
verify that analysis tools are written for any box shape.

Cheers,
Stephanie



On 11/4/13 1:29 PM, Stephanie Teich-McGoldrick wrote:
 Dear all,

  I am using gromacs 4.6.3 with a triclinic box. Based on the manual and
 mailing list, it is my understanding that the default box shape in gromacs
 is a triclinic box. Can I assume that all the analysis tools also work for
 a triclinic box?


All analysis tools should work correctly for all box types.  Is there a
specific
issue you are having, or just speculation?

-Justin

--
==


Justin A. Lemkul, Ph.D.
Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441

==



[gmx-users] Re: Hardware for best gromacs performance?

2013-11-05 Thread david.chalm...@monash.edu
Hi Szilárd and all,

Thanks very much for the information.  I am more interested in getting
single simulations to run as fast as possible (within reason!) than in
overall throughput.  Would you expect that the more expensive dual
Xeon/Titan systems would perform better in this respect?

Cheers

David



Re: [gmx-users] Re: Hardware for best gromacs performance?

2013-11-05 Thread Mark Abraham
Yes, that has been true for GROMACS for a few years. Low-latency
communication is essential if you want a whole MD step to happen in around
1ms wall time.

Mark
On Nov 5, 2013 11:24 PM, Dwey Kauffman mpi...@gmail.com wrote:

 Hi Szilard,

  Thanks.

 From Timo's benchmark,
 1 node         142 ns/day
 2 nodes FDR14  218 ns/day
 4 nodes FDR14  257 ns/day
 8 nodes FDR14  326 ns/day


 It looks like an InfiniBand network is required in order to scale up when
 running a task across nodes. Is that correct?


 Dwey




[gmx-users] Re: Calculation of water density around certain protein residues

2013-11-04 Thread bharat gupta
Hi,

I want to know the exact way to calculate the density of water around
certain residues in my protein. I tried to calculate this using
g_select, with the following command:

g_select -f nvt.trr -s nvt.tpr -select '"Nearby water" resname SOL and
within 0.5 of resnr 115 to 118' -os water.xvg

In the output, I got some number of waters for each time step. For example:

@ s0 legend Nearby water
  0.000  159.000
  0.200  168.000
  0.400  173.000
  0.600  171.000

Can I get the average number of water molecules over the entire
simulation time? And how can I get the density instead of the number?
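
(A quick way to average the count column, as a sketch - it just skips
the xvg header lines and averages column 2:)

awk '$1 !~ /^[@#]/ {s += $2; n++} END {print s/n}' water.xvg

Getting a true density rather than a count would additionally require an
estimate of the volume of the selection shell, which g_select does not
print.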


Please respond to this query ...

Thanks
--
Bharat


[gmx-users] Re: Installation Gromacs 4.5.7 on rocluster cluster with centos 6.0

2013-11-04 Thread bharat gupta
Hi,

I am trying to install gromacs 4.5.7 on a rocks cluster (6.0) and it works
fine till the ./configure command, but I am getting an error at the make command:

Error:

[root@cluster gromacs-4.5.7]# make
/bin/sh ./config.status --recheck
running CONFIG_SHELL=/bin/sh /bin/sh ./configure  --enable-mpi
LDFLAGS=-L/opt/rocks/lib CPPFLAGS=-I/opt/rocks/include  --no-create
--no-recursion
checking build system type... x86_64-unknown-linux-gnu
checking host system type... x86_64-unknown-linux-gnu
./configure: line 2050: syntax error near unexpected token `tar-ustar'
./configure: line 2050: `AM_INIT_AUTOMAKE(tar-ustar)'
make: *** [config.status] Error 2


I have another query regarding the gromacs that comes with the Rocks
cluster distribution. The mdrun of that gromacs has been compiled without
the mpi option. How can I recompile it with the mpi option? I need the
configure file, which is not there in the installed gromacs folder of the
rocks cluster ...


Thanks in advance for help




Regards

Bharat


Re: [gmx-users] Re: Hardware for best gromacs performance?

2013-11-04 Thread Timo Graen

just a small benchmark...

each node - 2 x Xeon E5-2680v2 + 2 x NVIDIA K20X GPGPUs
42827 atoms - vsites - 4fs
1  node        142 ns/day
2  nodes FDR14 218 ns/day
4  nodes FDR14 257 ns/day
8  nodes FDR14 326 ns/day
16 nodes FDR14 391 ns/day (global warming)

best,
timo


Re: [gmx-users] Re: Hardware for best gromacs performance?

2013-11-04 Thread Szilárd Páll
Brad,

These numbers seem rather low for a standard simulation setup! Did
you use a particularly long cut-off or a short time-step?

Cheers,
--
Szilárd Páll


On Fri, Nov 1, 2013 at 6:30 PM, Brad Van Oosten bv0...@brocku.ca wrote:
 I'm not sure of the prices of these systems any more; they are getting dated,
 so they will be on the low end price-wise. I have a ~30,000-atom lipid
 system for all my simulations, so this might be helpful:

 System 1
 CPU - dual 6 core xeon @ 2.8 GHz
 GPU - 2x GTX 680
 50 ns/day

 System 2
 CPU - dual 4 core intel E5607 @ 2.26 GHz
 GPU - 2x M2070
 45 ns/day

 System 3
 CPU - dual 4 core intel E5620 @ 2.40 GHz
 GPU - 1x GTX 680
 40 ns/day





Re: [gmx-users] Re: Installation Gromacs 4.5.7 on rocluster cluster with centos 6.0

2013-11-04 Thread Mark Abraham
On Mon, Nov 4, 2013 at 12:01 PM, bharat gupta bharat.85.m...@gmail.comwrote:

 Hi,

  I am trying to install gromacs 4.5.7 on a rocks cluster (6.0) and it works
  fine till the ./configure command, but I am getting an error at the make command:

 Error:
 
 [root@cluster gromacs-4.5.7]# make


There is no need to run make as root - doing so guarantees you have almost
no knowledge of the final state of your entire machine.


 /bin/sh ./config.status --recheck
 running CONFIG_SHELL=/bin/sh /bin/sh ./configure  --enable-mpi
 LDFLAGS=-L/opt/rocks/lib CPPFLAGS=-I/opt/rocks/include  --no-create
 --no-recursion
 checking build system type... x86_64-unknown-linux-gnu
 checking host system type... x86_64-unknown-linux-gnu
 ./configure: line 2050: syntax error near unexpected token `tar-ustar'
 ./configure: line 2050: `AM_INIT_AUTOMAKE(tar-ustar)'
 make: *** [config.status] Error 2


Looks like the system has an archaic autotools setup. Probably you can
comment out the line with tar-ustar from the original configure script, or
remove tar-ustar. Or use the CMake build.
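
(A hypothetical way to apply that edit - back up configure first; this
assumes the macro appears literally as shown in the error message:)

cp configure configure.bak
sed -i 's/^AM_INIT_AUTOMAKE(tar-ustar)/# AM_INIT_AUTOMAKE(tar-ustar)/' configure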




  I have another query regarding the gromacs that comes with the Rocks
  cluster distribution. The mdrun of that gromacs has been compiled without
  the mpi option. How can I recompile it with the mpi option? I need the
  configure file, which is not there in the installed gromacs folder of the
  rocks cluster ...


The 4.5-era GROMACS installation instructions are up on the website.
Whatever's distributed with Rocks is more-or-less irrelevant.

Mark




 Thanks in advance for help




 Regards
 
 Bharat


[gmx-users] Re: Using gromacs on Rocks cluster

2013-11-04 Thread bharat gupta
Hi,

I have installed GROMACS 4.5.6 on Rocks cluster 6.0, and my system has
32 processors (CPUs). But while running the NVT equilibration step, it uses
only 1 CPU and the others remain idle. I have compiled GROMACS using the
enable-mpi option. How can I make mdrun use all 32 processors?
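
(A sketch of the usual MPI launch - the binary name mdrun_mpi and the
mpirun launcher are assumptions; they depend on how the build was
installed:)

mpirun -np 32 mdrun_mpi -deffnm nvt

An MPI-enabled build started as plain mdrun runs as a single rank; it
has to be started through the MPI launcher to use all 32 processors.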

--
Bharat


[gmx-users] Re: trjconv for pbc

2013-11-03 Thread rankinb
That last procedure works.  I really appreciate your help.  The only other
question I have is related to the selection process.  Is there a way to
select the oxygen atoms of water within a certain distance of a molecule, as
well as the corresponding hydrogen atoms on the water molecule?  Right now,
I am able to select hydrogen atoms and oxygen atoms that are within a
certain distance, but I would like to only select whole water molecules
whose oxygen atoms are within a cutoff.

Thanks,
Blake

PhD Candidate
Purdue University
Ben-Amotz Lab



Re: [gmx-users] Re: trjconv for pbc

2013-11-03 Thread Justin Lemkul



On 11/3/13 7:12 AM, rankinb wrote:

That last procedure works.  I really appreciate your help.  The only other
question I have is related to the selection process.  Is there a way to
select the oxygen atoms of water within a certain distance of a molecule, as
well as the corresponding hydrogen atoms on the water molecule?  Right now,
I am able to select hydrogen atoms and oxygen atoms that are within a
certain distance, but I would like to only select whole water molecules
whose oxygen atoms are within a cutoff.



Start your selection string with "same residue as" and it will select the whole
residue for any atom that satisfies the selection criteria.
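
For example, something along these lines (the group name and cutoff are
placeholders):

same residue as (resname SOL and name OW and within 0.5 of group "Protein")

This keeps every atom of any water molecule whose oxygen lies within
0.5 nm of the reference group.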


-Justin

--
==

Justin A. Lemkul, Ph.D.
Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441

==


  1   2   3   4   5   6   7   8   9   10   >