RE: [gmx-users] REMD vs MD

2013-09-04 Thread hanna pdb
Hi,
well, I guess it depends on what models you mean...
REMD is a technique to enhance conformational sampling, so it is useful if you
have, e.g., a protein that is disordered or has large disordered parts. In REMD,
several copies (replicas) of the same system are simulated, each at a different
temperature. Each replica can then explore a different portion of conformational
space: the higher-temperature replicas can move between different regions of the
potential energy surface without getting stuck in any of them, while the
lowest-temperature replicas can get trapped in local minima and explore those
regions of the potential energy surface accurately. This way you obtain more
information about the conformational space than with plain MD.

This paper might help: Y. Sugita, Y. Okamoto, Chem. Phys. Lett. 314, 141-151 (1999)
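
In GROMACS this is usually set up by preparing one .tpr per replica (the same
system, a different ref_t in each .mdp) and letting an MPI-enabled mdrun handle
the exchanges. A minimal sketch, with purely illustrative file names and
temperatures for four replicas:

# one .mdp per replica, identical except for ref_t (e.g. 300, 310, 321, 332 K)
for i in 0 1 2 3; do
    grompp -f remd${i}.mdp -c conf.gro -p topol.top -o topol${i}.tpr
done
# -multi runs the replicas side by side, -replex attempts exchanges every 1000 steps
mpirun -np 4 mdrun -multi 4 -replex 1000 -s topol.tpr

The temperature ladder has to be chosen so that neighbouring replicas overlap in
potential energy; the spacing above is only an example.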

best

> Date: Thu, 5 Sep 2013 11:34:47 +0800
> From: pqah...@gmail.com
> To: gmx-users@gromacs.org
> Subject: [gmx-users] REMD vs MD
> 
> Hi all,
> 
> I just want to ask about REMD. So far I only understand basic MD simulation.
> If I have several models and I need to see the interactions between them, is
> it okay to use MD, or do I need to use REMD instead?
> 
> Thanks in advance,
> 
> -- 
> Best Regards,
> 
> Nur Syafiqah Abdul Ghani,
> Theoretical and Computational Chemistry Laboratory,
> Department of Chemistry,
> Faculty of Science,
> Universiti Putra Malaysia,
> 43400 Serdang,
> Selangor.
> alternative email : syafiqahabdulgh...@gmail.com @ gs33...@mutiara.upm.edu.my


[gmx-users] Regenerating tpr files

2013-09-04 Thread Piduru Viswanath
Hi,

I have run my MD to 13 ns, extending it in 500 ps pieces. But while transferring
the data from one system, I accidentally lost my tpr files. My backup contains
the trr files up to 13 ns (i.e. all trr files) but tpr files only up to 6.5 ns.
How can I regenerate the tpr files now if I have to extend the simulation
further? When using tpbconv I received an error which prompted me to use a cpt
file, but I don't have cpt files. Is there a solution to this problem?

Sincerely
Ajay
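
One route that is sometimes suggested for this situation - assuming the original
.mdp, coordinate and topology files are still available - is to rebuild a run
input file with grompp, taking the starting state from the last full-precision
.trr frame, and then extend it as usual. A rough sketch (file names and times
are only illustrative):

# rebuild a .tpr for the state at 13 ns, taking x and v from the backup .trr
grompp -f md.mdp -c conf.gro -p topol.top -t traj.trr -time 13000 -o md13.tpr
# extend by another 500 ps and continue
tpbconv -s md13.tpr -extend 500 -o md13_ext.tpr
mdrun -s md13_ext.tpr -deffnm md13_ext

Without a checkpoint the continuation will not be binary-identical to an
uninterrupted run, but it lets the simulation go on from the last stored frame.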


[gmx-users] GROMOS53a6 c12 of CH2 and CH3 in pairtypes incosistent with FF paper

2013-09-04 Thread Dallas Warren
Going through the GROMOS53a6 parameters, I found what appears to be an
inconsistency between what is present in the ffnonbonded.itp file and what is
quoted in the paper (Oostenbrink et al. 2004,
http://dx.doi.org/10.1002/Jcc.20090) for the c12 LJ values of CH2 and CH3 in
the [ pairtypes ] section.

[ pairtypes ]
;   i    j  func            c6            c12
   OA   OA     1  0.0022619536   1.265625e-06
  CH2  CH2     1  0.0047238129   4.7419261e-06
  CH3  CH3     1  0.0068525284   6.0308652e-06

Then I took these c12 parameters from the ffnonbonded.itp file and converted
them to their square roots so that they can be compared with the values
presented in Table 9 of the FF paper. I have also done the reverse, squaring
the Table 9 values so that they can be compared with the values in
ffnonbonded.itp.
 
OA - OA
  ffnonbonded.itp  1.265625e-06   =>  0.001125000
  Table 9          1.265625E-06   <=  0.001125

CH2 - OA
  ffnonbonded.itp  4.7419261e-06  =>  0.002177596
  Table 9          4.743684E-06   <=  0.002178

CH3 - OA
  ffnonbonded.itp  6.0308652e-06  =>  0.002455782
  Table 9          6.031936E-06   <=  0.002456

As you can see, the parameters for OA are consistent, while those for CH2 and 
CH3 are not.
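
(The conversions are easy to reproduce with a quick shell one-liner; the numbers
below are simply the ones from the table above:)

awk 'BEGIN { printf "%.9f\n", sqrt(4.7419261e-06) }'   # c12 from the .itp -> compare with Table 9
awk 'BEGIN { printf "%.6e\n", 0.002178 * 0.002178 }'   # Table 9 value squared -> compare with the .itp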

So, I suppose the questions are:

1) Where did the values presented in ffnonbonded.itp actually come from?
2) Why are they not consistent with those defined in the FF paper, when the
other cases (e.g. OA) are?
3) How much of a concern is it that the values differ (in the 4th significant
figure)?

Catch ya,

Dr. Dallas Warren
Drug Discovery Biology
Monash Institute of Pharmaceutical Sciences, Monash University
381 Royal Parade, Parkville VIC 3052
dallas.war...@monash.edu
+61 3 9903 9304
-
When the only tool you own is a hammer, every problem begins to resemble a 
nail. 




Re: [gmx-users] simulation explode while switching from NVT to NPT

2013-09-04 Thread Rafael I. Silverman y de la Vega
Did you follow the link in the error message?


On Wed, Sep 4, 2013 at 7:17 PM, Golshan Hejazi wrote:

> Hi everyone,
>
> I am simulating a system of a paracetamol crystal in ethanol solvent. I used
> pdb2gmx to generate the topology and gro file and I minimized the system
> using steepest descent. As long as I perform NVT simulations at any
> temperature, the simulation runs fine! But as soon as I switch from NVT to
> NPT, the simulation crashes with the error below.
>
> I tried performing NVT at a very low temperature, say 50 K, and then
> switching to NPT ... but no way!
> Can you help me with that?
>
> Thanks
>
> Warning: 1-4 interaction between 1361 and 1368 at distance
> 10600663849073184.000 which is larger than the 1-4 table size 1.800 nm
> These are ignored for the rest of the simulation
> This usually means your system is exploding,
> if not, you should increase table-extension in your mdp file
> or with user tables increase the table size
>
> ---
> Program mdrun, VERSION 4.5.4
> Source code file: pme.c, line: 538
>
> Fatal error:
> 9 particles communicated to PME node 0 are more than 2/3 times the cut-off
> out of the domain decomposition cell of their charge group in dimension y.
> This usually means that your system is not well equilibrated.
> For more information and tips for troubleshooting, please check the GROMACS
> website at http://www.gromacs.org/Documentation/Errors


[gmx-users] REMD vs MD

2013-09-04 Thread Nur Syafiqah Abdul Ghani
Hi all,

I just want to ask about REMD. So far I only understand basic MD simulation.
If I have several models and I need to see the interactions between them, is
it okay to use MD, or do I need to use REMD instead?

Thanks in advance,

-- 
Best Regards,

Nur Syafiqah Abdul Ghani,
Theoretical and Computational Chemistry Laboratory,
Department of Chemistry,
Faculty of Science,
Universiti Putra Malaysia,
43400 Serdang,
Selangor.
alternative email : syafiqahabdulgh...@gmail.com @ gs33...@mutiara.upm.edu.my


[gmx-users] simulation explode while switching from NVT to NPT

2013-09-04 Thread Golshan Hejazi
Hi everyone,

I am simulating a system of a paracetamol crystal in ethanol solvent. I used
pdb2gmx to generate the topology and gro file and I minimized the system using
steepest descent. As long as I perform NVT simulations at any temperature, the
simulation runs fine! But as soon as I switch from NVT to NPT, the simulation
crashes with the error below.

I tried performing NVT at a very low temperature, say 50 K, and then switching
to NPT ... but no way!
Can you help me with that?

Thanks

Warning: 1-4 interaction between 1361 and 1368 at distance 
10600663849073184.000 which is larger than the 1-4 table size 1.800 nm
These are ignored for the rest of the simulation
This usually means your system is exploding,
if not, you should increase table-extension in your mdp file
or with user tables increase the table size

---
Program mdrun, VERSION 4.5.4
Source code file: pme.c, line: 538

Fatal error:
9 particles communicated to PME node 0 are more than 2/3 times the cut-off out 
of the domain decomposition cell of their charge group in dimension y.
This usually means that your system is not well equilibrated.
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
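
The error page linked above generally points at equilibration problems. One
common way to ease the NVT-to-NPT switch - not something discussed further in
this thread, and the settings below are only an illustrative sketch, not values
taken from this system - is to start pressure coupling gently while keeping the
solute restrained:

; illustrative NPT equilibration settings - adjust to the system and force field
tcoupl              = v-rescale
tc-grps             = System
tau_t               = 0.1
ref_t               = 300
pcoupl              = berendsen      ; weak coupling, for equilibration only
pcoupltype          = isotropic
tau_p               = 2.0
ref_p               = 1.0
compressibility     = 4.5e-5
refcoord_scaling    = com
define              = -DPOSRES      ; assumes position-restraint entries exist in the topology

If the run still blows up with restrained, weakly coupled NPT, the starting
structure or topology is usually worth a closer look.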


Re: [gmx-users] The optimal PME mesh load for parallel simulations is below 0.5

2013-09-04 Thread Steven Neumann
I am not using any solvent. I mimic the presence of water with tabulated vdw
potentials. I wish to see what the electrostatics will change. And coulomb
cutoff = 0 will completely remove the electrostatics, right?


On Wed, Sep 4, 2013 at 3:23 PM, Justin Lemkul  wrote:

>
>
> On 9/4/13 10:20 AM, Steven Neumann wrote:
>
>> Sorry, it is in vacuum, but I included implicit solvent in the vdw
>> parameters... so I need PBC as well.
>>
>>
>>
> Sorry, this doesn't make much sense to me.  If you're using implicit
> solvent (GB), then it's by definition not vacuum.  I also find the same to
> be true - finite cutoffs lead to artifacts in vacuo or when using GB.  The
> only stable simulations I have produced using GB use the all-vs-all settings
> I showed below.  Obviously, if your parameterization and tabulated
> interactions have different requirements, then what I said goes out the
> window, but using GB with PBC also suffers from artifacts.
>
> -Justin

Re: [gmx-users] The optimal PME mesh load for parallel simulations is below 0.5

2013-09-04 Thread Justin Lemkul



On 9/4/13 10:35 AM, Steven Neumann wrote:

I am not using any solvent. I mimic the presence of water with tabulated vdw
potentials. I wish to see what the electrostatics will change. And coulomb
cutoff = 0 will completely remove the electrostatics, right?



No, it does the opposite.  Setting all cutoffs to zero triggers the all-vs-all 
kernels, which calculate every possible interaction.


-Justin



Re: [gmx-users] The optimal PME mesh load for parallel simulations is below 0.5

2013-09-04 Thread Steven Neumann
Thank you. But with rvdw = 0 and vdw_type = User, will the vdw parameters be
taken into account with an infinite cutoff, or omitted?


On Wed, Sep 4, 2013 at 3:37 PM, Justin Lemkul  wrote:

>
>
> On 9/4/13 10:35 AM, Steven Neumann wrote:
>
>> I am not using any solvent. I mimic the presence of water with tabulated vdw
>> potentials. I wish to see what the electrostatics will change. And coulomb
>> cutoff = 0 will completely remove the electrostatics, right?
>>
>>
> No, it does the opposite.  Setting all cutoffs to zero triggers the
> all-vs-all kernels, which calculate every possible interaction.
>
> -Justin

Re: [gmx-users] The optimal PME mesh load for parallel simulations is below 0.5

2013-09-04 Thread Justin Lemkul



On 9/4/13 10:44 AM, Steven Neumann wrote:

Thank you. But with rvdw = 0 and vdw_type = User, will the vdw parameters be
taken into account with an infinite cutoff, or omitted?



As I said, setting the cutoffs to zero does not omit interactions.  The zero is 
used to trigger infinite cutoffs.


-Justin




Re: [gmx-users] The optimal PME mesh load for parallel simulations is below 0.5

2013-09-04 Thread Steven Neumann
Thanks a lot!



Re: [gmx-users] The optimal PME mesh load for parallel simulations is below 0.5

2013-09-04 Thread Steven Neumann
Sorry, it is in vacuum, but I included implicit solvent in the vdw
parameters... so I need PBC as well.


On Wed, Sep 4, 2013 at 3:18 PM, Steven Neumann wrote:

> Thank you. I am using my own vdw tables, so I need a cut-off.


Re: [gmx-users] The optimal PME mesh load for parallel simulations is below 0.5

2013-09-04 Thread Justin Lemkul



On 9/4/13 10:03 AM, Steven Neumann wrote:

Dear Users,

My system involves a protein in vacuum - 80 atoms in a box of 9x9x9 nm3. I want
to use PME in my mdp:

rcoulomb = 2.0
coulombtype  = PME
pme_order= 4
fourierspacing   = 0.12

The cutoff needs to stay like this, I have my own tables with VDW, bonds,
angles and dihedrals.

I got the NOTE:

The optimal PME mesh load for parallel simulations is below 0.5
   and for highly parallel simulations between 0.25 and 0.33,
   for higher performance, increase the cut-off and the PME grid spacing

what setting would you suggest to use on 8 CPUs?



I would suggest not using PME :)  The problem is PME is extremely inefficient in 
vacuo because it spends a lot of time doing nothing due to the empty space. 
Moreover, you're not likely really simulating in vacuo at that point because 
you've got PBC and therefore are really doing a simulation in more of a diffuse 
crystal environment, so there are probably artifacts.


-Justin

--
==

Justin A. Lemkul, Ph.D.
Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441

==


Re: [gmx-users] The optimal PME mesh load for parallel simulations is below 0.5

2013-09-04 Thread Justin Lemkul



On 9/4/13 10:18 AM, Steven Neumann wrote:

Thank you. I am using my own vdw tables, so I need a cut-off.




Then I guess you have your answer.  Finite cutoffs in vacuo can lead to serious 
artifacts if you're not careful.  Tread lightly.


-Justin






Re: [gmx-users] The optimal PME mesh load for parallel simulations is below 0.5

2013-09-04 Thread Justin Lemkul



On 9/4/13 10:11 AM, Steven Neumann wrote:

Thank you! Would you suggest just a cut-off for coulomb?



Not a finite one.  The best in vacuo settings are:

pbc = no
rlist = 0
rvdw = 0
rcoulomb = 0
nstlist = 0
vdwtype = cutoff
coulombtype = cutoff

-Justin




Re: [gmx-users] The optimal PME mesh load for parallel simulations is below 0.5

2013-09-04 Thread Justin Lemkul



On 9/4/13 10:20 AM, Steven Neumann wrote:

Sorry, it is in vacuum, but I included implicit solvent in the vdw
parameters... so I need PBC as well.




Sorry, this doesn't make much sense to me.  If you're using implicit solvent
(GB), then it's by definition not vacuum.  I also find the same to be true -
finite cutoffs lead to artifacts in vacuo or when using GB.  The only stable
simulations I have produced using GB use the all-vs-all settings I showed below.
Obviously, if your parameterization and tabulated interactions have different
requirements, then what I said goes out the window, but using GB with PBC also
suffers from artifacts.


-Justin




Re: [gmx-users] Documentation for the variables used in gromacs sourcecode

2013-09-04 Thread Justin Lemkul



On 9/4/13 10:15 AM, HANNIBAL LECTER wrote:

Hi,

I was wondering whether documentation of all the source code variables is
available.



No.

-Justin



Re: [gmx-users] The optimal PME mesh load for parallel simulations is below 0.5

2013-09-04 Thread Steven Neumann
Thank you. I am using my own vdw tables, so I need a cut-off.




On Wed, Sep 4, 2013 at 3:13 PM, Justin Lemkul  wrote:

>
>
> On 9/4/13 10:11 AM, Steven Neumann wrote:
>
>> Thank you! Would you suggest just a cut-off for coulomb?
>>
>>
> Not a finite one.  The best in vacuo settings are:
>
> pbc = no
> rlist = 0
> rvdw = 0
> rcoulomb = 0
> nstlist = 0
> vdwtype = cutoff
> coulombtype = cutoff
>
> -Justin


[gmx-users] Documentation for the variables used in gromacs sourcecode

2013-09-04 Thread HANNIBAL LECTER
Hi,

I was wondering whether documentation of all the source code variables is
available.


Re: [gmx-users] gromacs 4.6.3 and Intel compiiler 11.x

2013-09-04 Thread Guanglei Cui
I was following
http://www.gromacs.org/Documentation/Installation_Instructions. The link to
the 4.6.3 regression test set isn't obvious. Following the pattern, I
downloaded the 4.6.3 regression test tarball (which apparently unpacks to a
folder named for 4.6.2). Now, GMX_CPU_ACCELERATION=None passes all tests.
SSE4.1 fails only one of the kernel tests
(nb_kernel_ElecCSTab_VdwCSTab_GeomW4W4) and nothing else.

Again, thanks for everyone's help.

Regards,
Guanglei
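
For anyone hitting the same thing, the checks Szilárd suggests further down can
be run along these lines (paths and versions here are only illustrative):

# plain-C kernels only (also worth checking the compile flags, as noted below)
cmake .. -DGMX_CPU_ACCELERATION=None -DGMX_BUILD_OWN_FFTW=ON \
         -DREGRESSIONTEST_PATH=$HOME/regressiontests-4.6.3
make -j4 && make check

# or keep the accelerated build and force the generic kernels at runtime
GMX_NOOPTIMIZEDKERNELS=1 make check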


On Wed, Sep 4, 2013 at 2:54 AM, Mark Abraham wrote:

> Please test using the test set version that matches the code!
> On Sep 4, 2013 5:16 AM, "Guanglei Cui" 
> wrote:
>
> > Hi Szilard,
> >
> > Thanks for your reply. I may try your suggestions tomorrow when I get
> back
> > to work.
> >
> > Feeling curious, I downloaded and compiled gmx 4.6.3 on my home computer
> > (gcc-4.6.3 and ubuntu 12.04). Even with the default (below), kernel (38
> out
> > of 142) and freeenergy (2 out of 9) tests would still fail. I'm not sure
> > what is going on. Perhaps I should try an earlier version that matches
> the
> > regressiontests?
> >
> > cmake .. -DGMX_BUILD_OWN_FFTW=on
> > -DREGRESSIONTEST_PATH=/home/cuigl/Downloads/regressiontests-4.6.1
> >
> > Regards,
> > Guanglei
> >
> >
> > On Tue, Sep 3, 2013 at 7:40 PM, Szilárd Páll 
> > wrote:
> >
> > > On Tue, Sep 3, 2013 at 9:50 PM, Guanglei Cui
> > >  wrote:
> > > > Hi Mark,
> > > >
> > > > I agree with you and Justin, but let's just say there are things that
> > are
> > > > out of my control ;-) I just tried SSE2 and NONE. Both failed the
> > > > regression check.
> > >
> > > That's alarming, with GMX_CPU_ACCELERATION=None only the plain C
> > > kernels get compiled which should pass the regressiontests even with
> > > icc 11 - although I have not tried myself.
> > >
> > > Some things that don't take much time and may be useful to try:
> > > - make sure that when GMX_CPU_ACCELERATION=None the resulting binary
> > > does not get compiled with flags that instruct the compiler to
> > > auto-generate SSE4.1 code (e.g. -msse4.1 or -xHOST);
> > > - run with the GMX_NOOPTIMIZEDKERNELS environment variable set which
> > > disables the architecture-specific kernels at runtime (regardless of
> > > what mdrun was compiled with); the end result should be the same as
> > > above: the plain C kernels should be used (although with
> > > GMX_CPU_ACCELERATION != None the instructions of requested instruction
> > > set will be generated by the compiler as an optimization).
> > >
> > > --
> > > Szilárd
> > >
> > > > I think I've spent enough time on this, which justifies escalating
> > > > this to someone with the control, but is failing regression check with
> > > > no CPU/instruction optimization normal?
> > > >
> > > > Regards,
> > > > Guanglei
> > > >
> > > >
> > > > On Tue, Sep 3, 2013 at 3:35 PM, Mark Abraham <
> mark.j.abra...@gmail.com
> > > >wrote:
> > > >
> > > >> On Tue, Sep 3, 2013 at 7:47 PM, Guanglei Cui
> > > >>  wrote:
> > > >> > Dear GMX users,
> > > >> >
> > > >> > I'm attempting to compile gromacs 4.6.3 with an older Intel
> compiler
> > > (ver
> > > >> > 11.x). Here is how I compiled FFTW,
> > > >> >
> > > >> > ./configure CC=icc F77=ifort CFLAGS="-O3 -gcc"
> > > >> >   --prefix=/tmp/gromacs-4.6.3/fftw-3.3.3/build-intel-threads --enable-threads
> > > >> >   --enable-sse2 --with-combined-threads --with-our-malloc16 --enable-float
> > > >>
> > > >> I can't imagine you'll benefit from threaded FFTW, but feel free to
> > > try...
> > > >>
> > > >> > And, here is how I invoked cmake,
> > > >> >
> > > >> > CC=icc CXX=icpc ../cmake-2.8.11/bin/cmake .. -DGMX_FFT_LIBRARY=fftw3
> > > >> >   -DFFTWF_LIBRARY=/tmp/gromacs-4.6.3/fftw-3.3.3/build-intel-threads/lib/libfftw3f.a
> > > >> >   -DFFTWF_INCLUDE_DIR=/tmp/gromacs-4.6.3/fftw-3.3.3/build-intel-threads/include
> > > >> >   -DBUILD_SHARED_LIBS=no
> > > >> >   -DCMAKE_INSTALL_PREFIX=/home/gc603449/APPLICATIONS/gromac-4.6.3/thread-float-static
> > > >> >   -DREGRESSIONTEST_PATH=/tmp/gromacs-4.6.3/regressiontests-4.6.1
> > > >> >
> > > >> > When I ran 'make check', 39 out of 142 kernel tests failed, and 2 out
> > > >> > of 9 free energy tests failed.
> > > >> >
> > > >> > This is my first time compiling gromacs, and I am not very familiar
> > > >> > with it. I wonder if anyone can kindly point out what has gone wrong,
> > > >> > and where to look for hints. Any help is much appreciated.
> > > >>
> > > >> As Justin said, get an up-to-date compiler. gcc regularly outperforms
> > > >> icc, anyway. You can use cmake -DGMX_CPU_ACCELERATION=SSE2 to get a
> > > >> least-common-denominator build, but if you care about threaded FFTW
> > > >> then you care enough to get a new compiler!
> > > >>
> > > >> Mark
> > > >>
> > > >> > Best regards,
> > > >> > --
> > > >> > Guanglei Cui
> > > >> > PS: I am aware of a warning about using older Intel compilers with
> > > >> > 4.6.3, but that's the o

Re: [gmx-users] The optimal PME mesh load for parallel simulations is below 0.5

2013-09-04 Thread Steven Neumann
Thank you! Would you suggest just a cut-off for Coulomb?
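
Something like this is what I have in mind (just a sketch of the lines I
would change, untested):

coulombtype      = Cut-off
rcoulomb         = 2.0
; with the PME-specific settings (pme_order, fourierspacing) removed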

Steven


On Wed, Sep 4, 2013 at 3:09 PM, Justin Lemkul  wrote:

>
>
> On 9/4/13 10:03 AM, Steven Neumann wrote:
>
>> Dear Users,
>>
>> My system involves a protein in vacuum - 80 atoms in a box of 9x9x9 nm3. I want
>> to use PME in my mdp:
>>
>> rcoulomb = 2.0
>> coulombtype  = PME
>> pme_order= 4
>> fourierspacing   = 0.12
>>
>> The cutoff needs to stay like this, I have my own tables with VDW, bonds,
>> angles and dihedrals.
>>
>> I got the NOTE:
>>
>> The optimal PME mesh load for parallel simulations is below 0.5
>>and for highly parallel simulations between 0.25 and 0.33,
>>for higher performance, increase the cut-off and the PME grid spacing
>>
>> What settings would you suggest to use on 8 CPUs?
>>
>>
> I would suggest not using PME :)  The problem is that PME is extremely
> inefficient in vacuo because it spends a lot of time doing nothing in the
> empty space. Moreover, you're not really simulating in vacuo at that point,
> because you've got PBC and are therefore really simulating more of a diffuse
> crystal environment, so there are probably artifacts.
>
> -Justin
>
> --
> ==
>
> Justin A. Lemkul, Ph.D.
> Postdoctoral Fellow
>
> Department of Pharmaceutical Sciences
> School of Pharmacy
> Health Sciences Facility II, Room 601
> University of Maryland, Baltimore
> 20 Penn St.
> Baltimore, MD 21201
>
> jalemkul@outerbanks.umaryland.edu | (410) 706-7441
>
> ==
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> * Please don't post (un)subscribe requests to the list. Use the www
> interface or send it to gmx-users-requ...@gromacs.org.
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


[gmx-users] The optimal PME mesh load for parallel simulations is below 0.5

2013-09-04 Thread Steven Neumann
Dear Users,

My system involves a protein in vacuum - 80 atoms in a box of 9x9x9 nm3. I want
to use PME in my mdp:

rcoulomb = 2.0
coulombtype  = PME
pme_order= 4
fourierspacing   = 0.12

The cutoff needs to stay like this, I have my own tables with VDW, bonds,
angles and dihedrals.

I got the NOTE:

The optimal PME mesh load for parallel simulations is below 0.5
  and for highly parallel simulations between 0.25 and 0.33,
  for higher performance, increase the cut-off and the PME grid spacing

What settings would you suggest to use on 8 CPUs?

Steven
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] g_saxs

2013-09-04 Thread Justin Lemkul



On 9/4/13 9:23 AM, Kukol, Andreas wrote:

Thanks Justin, for your quick response, but what does it mean ('master
branch' and 'git repo')? Can it be used?



It means that it's in the development code (the git repository) in the master 
branch, which is the one in which new features are being added towards the 5.0 
release.
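
If you want to try it, something along these lines should get you the code
(a sketch from memory - check the GROMACS web site for the authoritative git
instructions):

git clone git://git.gromacs.org/gromacs.git
cd gromacs
git checkout master

Then build with cmake as for a release, but keep in mind that master is
development code and is not guaranteed to be stable.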


-Justin

--
==

Justin A. Lemkul, Ph.D.
Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441

==
--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


RE: [gmx-users] g_saxs

2013-09-04 Thread Kukol, Andreas
Thanks Justin, for your quick response, but what does it mean ('master
branch' and 'git repo')? Can it be used?

Many thanks
Andreas

> -Original Message-
> From: gmx-users-boun...@gromacs.org [mailto:gmx-users-
> boun...@gromacs.org] On Behalf Of Justin Lemkul
> Sent: 04 September 2013 14:03
> To: Discussion list for GROMACS users
> Subject: Re: [gmx-users] g_saxs
> 
> 
> 
> On 9/4/13 8:55 AM, Kukol, Andreas wrote:
> > Hello,
> >
> > Does anyone know if there is a tool called g_saxs available in the latest
> > version of Gromacs or planned for any future version. It is supposed to
> > compute small-angle x-ray scattering profiles from trajectories.
> >
> 
> It is in the master branch in the git repo.
> 
> -Justin
> 
> --
> ==
> 
> Justin A. Lemkul, Ph.D.
> Postdoctoral Fellow
> 
> Department of Pharmaceutical Sciences
> School of Pharmacy
> Health Sciences Facility II, Room 601
> University of Maryland, Baltimore
> 20 Penn St.
> Baltimore, MD 21201
> 
> jalem...@outerbanks.umaryland.edu | (410) 706-7441
> 
> ==
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> * Please don't post (un)subscribe requests to the list. Use the www interface
> or send it to gmx-users-requ...@gromacs.org.
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] g_saxs

2013-09-04 Thread Justin Lemkul



On 9/4/13 8:55 AM, Kukol, Andreas wrote:

Hello,

Does anyone know if there is a tool called g_saxs available in the latest 
version of Gromacs or planned for any future version. It is supposed to compute 
small-angle x-ray scattering profiles from trajectories.



It is in the master branch in the git repo.

-Justin

--
==

Justin A. Lemkul, Ph.D.
Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441

==
--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


[gmx-users] g_saxs

2013-09-04 Thread Kukol, Andreas
Hello,

Does anyone know if there is a tool called g_saxs available in the latest 
version of Gromacs or planned for any future version. It is supposed to compute 
small-angle x-ray scattering profiles from trajectories.

Many thanks
Andreas
--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] help

2013-09-04 Thread Justin Lemkul



On 9/4/13 6:04 AM, Prajisha Sujaya wrote:

I am facing a problem while simulating a tRNA molecule. While converting pdb
to gro, I get:

Fatal error:
Atom OP3 in residue A 1 was not found in rtp entry RA5 with 31 atoms
while sorting atoms.

Force field used: 3 (AMBER96 protein, nucleic AMBER94), water model TIP3P.
I checked the GROMACS error list, and it says to simply re-name the atoms in
your coordinate file. How do I rename the atoms in the coordinate file?



Use a text editor.
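
Or script it if there are many occurrences. Just a sketch (the file name is
made up; check the rtp entry first to see which atom names it actually
expects - for a 5'-terminal residue the fix may be to delete the extra
phosphate atoms rather than rename anything):

# rename atoms in place, preserving the PDB column widths
sed -i 's/ OP1 / O1P /g; s/ OP2 / O2P /g' trna.pdb
# or drop the atom the rtp entry does not know about
grep -v ' OP3 ' trna.pdb > trna_fixed.pdb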

-Justin

--
==

Justin A. Lemkul, Ph.D.
Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441

==
--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


[gmx-users] help

2013-09-04 Thread Prajisha Sujaya
I am facing a problem while simulating a tRNA molecule. While converting pdb
to gro, I get:

Fatal error:
Atom OP3 in residue A 1 was not found in rtp entry RA5 with 31 atoms
while sorting atoms.

Force field used: 3 (AMBER96 protein, nucleic AMBER94), water model TIP3P.
I checked the GROMACS error list, and it says to simply re-name the atoms in
your coordinate file. How do I rename the atoms in the coordinate file?

Kindly give a valuable suggestion on how to rectify this error.


Awaiting your reply.

Thanking You
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] fluctuation of energy in rerun

2013-09-04 Thread Mark Abraham
Seems like a bug. Please open an issue at redmine.gromacs.org, and be sure
to mention the GROMACS version.

Why are you using -pd?
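
If broken molecules across the PBC really are the trigger, it may also be
worth making them whole before the rerun and seeing whether the spikes go
away, e.g. (untested sketch, output names made up):

trjconv -s gs.tpr -f gs.trr -pbc whole -o gs_whole.trr
mdrun -s es.tpr -rerun gs_whole.trr -e es.edr -g es.log

That would be useful information for the redmine issue too.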
On Sep 4, 2013 2:20 AM, "Nilesh Dhumal"  wrote:

> Hello
>
> I am running a simulation with charge and without charge for 128 pairs of
> bmim-tf2n ionic liquids.
> This is the command for the original run without charge
> mdrun -s gs.tpr -o gs.trr -c 0.pdb -e gs.edr -g gs.log -pd -nt 8
> This is the command for the rerun with charge
> mdrun -s es.tpr -o es.trr -c 0.pdb -e es.edr -g es.log -rerun gs.trr -pd
> -nt 8
>
> In the original trajectory, the energy of the system fluctuates around 5000
> kJ/mol. In the rerun trajectory, the energy of the system also mostly
> fluctuates around 5000 kJ/mol, but sometimes it jumps to 10^13 or 10^-13.
> In the most recent run (20 ns), 16 data points out of 200 were
> extremely large.
>
> Checking the trajectory files, it seems to be caused by molecules stretching
> across the PBC.
>
> Nilesh
>
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> * Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] jwe1050i + jwe0019i errors = SIGSEGV (Fujitsu)

2013-09-04 Thread Mark Abraham
On Sep 4, 2013 7:59 AM, "James"  wrote:
>
> Dear all,
>
> I'm trying to run Gromacs on a Fujitsu supercomputer but the software is
> crashing.
>
> I run grompp:
>
> grompp_mpi_d -f parameters.mdp -c system.pdb -p overthe.top
>
> and it produces the error:
>
> jwe1050i-w The hardware barrier couldn't be used and continues processing
> using the software barrier.
> taken to (standard) corrective action, execution continuing.
> error summary (Fortran)
> error number error level error count
> jwe1050i w 1
> total error count = 1
>
> but still outputs topol.tpr so I can continue.

There's no value in compiling grompp with MPI or in double precision.
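
A plain serial, single-precision grompp is fine for preparing the run input,
e.g. (sketch, reusing your file names):

grompp -f parameters.mdp -c system.pdb -p overthe.top -o topol.tpr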

> I then run with
>
> export FLIB_FASTOMP=FALSE
> source /home/username/Gromacs463/bin/GMXRC.bash
> mpiexec mdrun_mpi_d -ntomp 16 -v
>
> but it crashes:
>
> starting mdrun 'testrun'
> 5 steps, 100.0 ps.
> jwe0019i-u The program was terminated abnormally with signal number SIGSEGV.
> signal identifier = SEGV_MAPERR, address not mapped to object
> error occurs at calc_cell_indices._OMP_1 loc 00233474 offset
> 03b4
> calc_cell_indices._OMP_1 at loc 002330c0 called from loc
> 02088fa0 in start_thread
> start_thread at loc 02088e4c called from loc 029d19b4 in
> __thread_start
> __thread_start at loc 029d1988 called from o.s.
> error summary (Fortran)
> error number error level error count
> jwe0019i u 1
> jwe1050i w 1
> total error count = 2
> [ERR.] PLE 0014 plexec The process terminated
> abnormally.(rank=1)(nid=0x03060006)(exitstatus=240)(CODE=2002,1966080,61440)
> [ERR.] PLE The program that the user specified may be illegal or
> inaccessible on the node.(nid=0x03060006)
>
> Any ideas what could be wrong? It works on my local intel machine.

Looks like it wasn't compiled correctly for the target machine. What was
the cmake command, what does mdrun -version output? Also, if this is the K
computer, probably we can't help, because the compiler docs are officially
unavailable to us. National secret, and all ;-)
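
(Something like

  mdrun_mpi_d -version

should print the build information - compiler, flags, CPU acceleration, FFT
library - which is the detail that would help here.)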

Mark

>
> Thanks in advance,
>
> James
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> * Please search the archive at
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> * Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists