Re: [gmx-users] MD workstation for Gromacs

2014-10-27 Thread Carsten Kutzner
Hi,

On 27 Oct 2014, at 07:11, Mohammad Hossein Borghei  wrote:

> Thank you Szilárd,
> 
> So I would be really thankful if you could tell me which configuration is
> the best:
> 
> 2x GTX 980
> 2x GTX 780 Ti
> 2x GTX Titan Black
I would choose between the 980 and the 780Ti, which will give
you about the same performance. Buy whatever card you can get
for a cheaper price. The Titan Black will be too expensive in
relation to performance.

Carsten


 
> 
> Sincerely,
> 
> 
> 
> 
> 
> 
> On Mon, Oct 20, 2014 at 12:24 AM, Szilárd Páll 
> wrote:
> 
>> Please send such questions/requests to the GROMACS users' list, I'm
>> replying there.
>> 
>> - For GROMACS 2x GTX 980 will be faster than one TITAN Z.
>> - Consider getting plain DIMMs instead of ECC registered;
>> - If you care about memory bandwidth, AFAIK you need 8 memory modules;
>> this will not matter for GROMACS simulations, but it could matter for
>> analysis or other memory-intensive operations;
>> 
>> --
>> Szilárd
>> 
>> 
>> -- Forwarded message --
>> From: Mohammad Hossein Borghei 
>> Date: Sun, Oct 19, 2014 at 12:34 PM
>> Subject: MD workstation for Gromacs
>> To: pall.szil...@gmail.com
>> 
>> 
>> Dear Mr. Szilárd
>> 
>> I saw your comments in Gromacs mailing list and I thought you can answer
>> my question. I would be really thankful if you could tell me whether the
>> attached configurations are appropriate for GPU computing in Gromacs. Which
>> one is better? Can they be improved without any increase in price?
>> 
>> Kind Regards,
>> 
>> --
>> Mohammad Hossein Borghei
>> 
>> 
>> 
>> 
>> 
>> 
>> 
> 
> 
> -- 
> Mohammad Hossein Borghei


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/grubmueller/kutzner
http://www.mpibpc.mpg.de/grubmueller/sppexa

-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] error in the middle of running mdrun_mpi

2014-10-27 Thread Nizar Masbukhin
And how do I use those 2 cores? I think that would double the performance, as
now I am running 1 core per replica.

On Mon, Oct 27, 2014 at 7:15 AM, Justin Lemkul  wrote:

>
>
> On 10/26/14 9:55 AM, Nizar Masbukhin wrote:
>
>> regarding gaining speed in implicit solvent simulation, i have tried to
>> parallelize using -ntmpi flag. However gromacs doesn't allow as i use
>> group
>> cutoff-scheme. Any recommendation how to parallelise implicit solvent
>> simulation? I do need parallelise my simulation. I have found the same
>> question in this mail list, one suggest use all-vs-all kernel which uses
>> zero cut-off.
>> This is my test run actually. I intend to run my simulation in cluster
>> computer.
>>
>>
> Unless the restriction was lifted at some point, implicit simulations
> won't run on more than 2 cores.  There were issues with constraints that
> led to the limitation.
>
> -Justin
>
>
>  On Sun, Oct 26, 2014 at 8:23 PM, Justin Lemkul  wrote:
>>
>>
>>>
>>> On 10/26/14 9:17 AM, Nizar Masbukhin wrote:
>>>
>>>  Thanks Justin.
 I have increased the cutoff, and yes, that works. There were no error
 messages anymore. For the first 6 nanoseconds, I felt the simulation ran
 slower.
 It felt curious that the simulation ran very fast for the rest of the time.


  Longer cutoffs mean there are more interactions to calculate, but the
>>> cutoffs aren't to be toyed with arbitrarily to gain speed.  They are a
>>> critical element of the force field itself, though in implicit solvent,
>>> it
>>> is common to increase (and never decrease) the cutoff values used in
>>> explicit solvent.  Physical validity should trump speed any day.
>>>
>>> -Justin
>>>
>>>
>>>   On Fri, Oct 24, 2014 at 7:37 PM, Justin Lemkul 
>>> wrote:
>>>



> On 10/24/14 8:31 AM, Nizar Masbukhin wrote:
>
>   Thanks for yor reply, Mark.
>
>>
>>
>> At first i was sure that the problem was table-exension because when I
>> enlarge table-extension value, warning message didn't  appear anymore.
>> Besides, i have successfully minimized and equilibrated the system
>> (indicated by Fmax < emtol reached; and no error messages during
>> NVT&NPT
>> equilibration, except a warning that the Pcouple is turned off in
>> vacuum
>> system).
>>
>> However, the error message appeared without table-extension warning
>> makes
>> me doubt also about my system stability. Here is my mdp setting.
>> Please
>> tell me if there are any 'weird' setting, and also kindly
>> suggest/recommend
>> a better setting.
>>
>>
>> *mdp file for Minimisation*
>>
>>
>> integrator = steep
>>
>> nsteps = 5000
>>
>> emtol = 200
>>
>> emstep = 0.01
>>
>> niter = 20
>>
>> nstlog = 1
>>
>> nstenergy = 1
>>
>> cutoff-scheme = group
>>
>> nstlist = 1
>>
>> ns_type = simple
>>
>> pbc = no
>>
>> rlist = 0.5
>>
>> coulombtype = cut-off
>>
>> rcoulomb = 0.5
>>
>> vdw-type = cut-off
>>
>> rvdw-switch = 0.8
>>
>> rvdw = 0.5
>>
>> DispCorr = no
>>
>> fourierspacing = 0.12
>>
>> pme_order = 6
>>
>> ewald_rtol = 1e-06
>>
>> epsilon_surface = 0
>>
>> optimize_fft = no
>>
>> tcoupl = no
>>
>> pcoupl = no
>>
>> free_energy = yes
>>
>> init_lambda = 0.0
>>
>> delta_lambda = 0
>>
>> foreign_lambda = 0.05
>>
>> sc-alpha = 0.5
>>
>> sc-power = 1.0
>>
>> sc-sigma  = 0.3
>>
>> couple-lambda0 = vdw
>>
>> couple-lambda1 = none
>>
>> couple-intramol = no
>>
>> nstdhdl = 10
>>
>> gen_vel = no
>>
>> constraints = none
>>
>> constraint-algorithm = lincs
>>
>> continuation = no
>>
>> lincs-order  = 12
>>
>> implicit-solvent = GBSA
>>
>> gb-algorithm = still
>>
>> nstgbradii = 1
>>
>> rgbradii = 0.5
>>
>> gb-epsilon-solvent = 80
>>
>> sa-algorithm = Ace-approximation
>>
>> sa-surface-tension = 2.05
>>
>>
>> *mdp file for NVT equilibration*
>>
>>
>> define = -DPOSRES
>>
>> integrator = md
>>
>> tinit = 0
>>
>> dt = 0.002
>>
>> nsteps = 25
>>
>> init-step = 0
>>
>> comm-mode = angular
>>
>> nstcomm = 100
>>
>> bd-fric = 0
>>
>> ld-seed = -1
>>
>> nstxout = 1000
>>
>> nstvout = 5
>>
>> nstfout = 5
>>
>> nstlog = 100
>>
>> nstcalcenergy = 100
>>
>> nstenergy = 1000
>>
>> nstxtcout = 100
>>
>> xtc-precision = 1000
>>
>> xtc-grps = system
>>
>> energygrps = system
>>
>> cutoff-scheme= group
>>
>> nstlist  = 1
>>
>> ns-type = simple
>>
>> pbc= no
>>
>> rlist= 0.5
>>
>> coulombtype = 

Re: [gmx-users] Time averaged ramachandran plot

2014-10-27 Thread andrea

Hi,

on the fly try this using ALA1 as example (it can be any of your residues):

grep -v '^#\|^@' rama.xvg | grep "ALA1" | awk '{if($2 == 0) print $2}' | awk -f std.awk



where std.awk contains:

{
  x1 += $1        # running sum of the values
  x2 += $1*$1     # running sum of the squared values
}
END {
  x1 = x1/NR      # mean
  x2 = x2/NR      # mean of the squares
  sigma = sqrt(x2 - x1*x1)
  if (NR > 1) std_err = sigma/sqrt(NR - 1)
  print "Number of points = " NR
  print "Mean = " x1
  print "Standard Deviation = " sigma
  print "Standard Error = " std_err
}


hope it helps

and


On 27/10/2014 01:21, Justin Lemkul wrote:



On 10/26/14 5:14 PM, Sanku M wrote:
Hi, I plan to plot the Ramachandran plot of all the dihedral angles, 
each of which is averaged over the time frames of the trajectories. But I 
find that g_rama or g_chi gives the time profile of the Ramachandran plot. 
If I want to plot the time-averaged Phi/Psi angles of all 
residues, is there any method to do it? Thanks, Sanku




It's a process that can easily be written in any scripting language 
you like. You have all the data points, and you want an average.  Just 
post-process the output file with whatever kind of script (Perl, 
Python, etc.) you like.


-Justin



--
---
Andrea Spitaleri PhD
Principal Investigator AIRC
D3 - Drug Discovery & Development
Istituto Italiano di Tecnologia
Via Morego, 30 16163 Genova
cell: +39 3485188790
http://www.iit.it/en/d3-people/andrea-spitaleri.html
ORCID: http://orcid.org/-0003-3012-3557



Re: [gmx-users] Time averaged ramachandran plot

2014-10-27 Thread andrea

replace with this:

grep -v '^#\|^@' rama.xvg | grep "ALA1" | awk '{print $2}' | awk -f std.awk




On 27/10/2014 11:04, andrea wrote:

Hi,

on the fly try this using ALA1 as example (it can be any of your 
residues):


`grep -v '^#\|^@' rama.xvg | grep "ALA1" | awk '{if($2 == 0) print 
$2}' | awk -f std.awk



where std.awk contains:

{
  x1 += $1
  x2 += $1*$1
}
END {
  x1 = x1/NR
  x2 = x2/NR
  sigma = sqrt(x2 - x1*x1)
  if (NR > 1) std_err = sigma/sqrt(NR -1)
  print "Number of points = " NR
  print "Mean = " x1
  print "Standard Deviation = " sigma
  print "Standard Error = " std_err
}


hope it helps

and


On 27/10/2014 01:21, Justin Lemkul wrote:



On 10/26/14 5:14 PM, Sanku M wrote:
Hi  I plan to plot the ramachandran plot of all the dihedral angles 
each of which is averaged over time-frames of trajectories. But, I 
find g_rama or g_chi gives the time profile of ramachandran plot. 
But, if I want to plot the time-averaged Phi.Psi angles of all 
residues, is there any method to do it.ThanksSanku




It's a process than can easily be written in any scripting language 
you like. You have all the data points, and you want an average.  
Just post-process the output file with whatever kind of script (Perl, 
Python, etc) you like.


-Justin





--
---
Andrea Spitaleri PhD
Principal Investigator AIRC
D3 - Drug Discovery & Development
Istituto Italiano di Tecnologia
Via Morego, 30 16163 Genova
cell: +39 3485188790
http://www.iit.it/en/d3-people/andrea-spitaleri.html
ORCID: http://orcid.org/-0003-3012-3557



Re: [gmx-users] trjconv gets stuck on frame

2014-10-27 Thread Mark Abraham
On Sun, Oct 26, 2014 at 11:44 PM, Eric Smoll  wrote:

> Hi Mark,
>
> Thank you for responding so rapidly. I should note that identical
> processing (I use a script) on the trajectories produced by slightly
> different chemical systems had no problem and trajconv produced a complete
> processed trajectory.
>
> However, when processing the problematic few with trajconv, the trajectory
> that is output is incomplete (the trjconv output has fewer frames than the
> input trajectory).
>
> This is definitely not problem with the change in output frequency of
> progress reports to the terminal.
>
> I am not sure if the -b flag is telling me anything. I move it around and
> it still seems to get stuck. I have ~30,000 atoms in my system. The first
> 120 ps are processed in ~ 5 seconds. The next 4 ps take ~ 30 sec. My
> trajectory is many nanoseconds long.
>

The point is to try to see whether the issue happens x steps into the
trajectory, or only at around t=120ps. Does it happen if you are using -pbc
somethingelse? Does it happen if you copy the file to some other filesystem
before using -pbc whole? One needs to find a pattern before one can guess
where the problem might lie.
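
For example, one could bracket the suspect region directly (a sketch only,
reusing the file names from the first post; the -b/-e times are just
illustrative):

trjconv -f ../script18/script18_o.trr -s ../script17/script17_o.tpr \
        -o probe.trr -pbc whole -b 115 -e 130 << EOF
0
EOF

If only that window is slow, the problem is tied to those frames; if the run is
slow wherever -b starts, it points more towards I/O or the filesystem.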


> Again, my other chemically similar systems do no hang like this and the
> simulation procedure is scripted so it is consistent across my different
> chemical systems.
>

OK. It's possible your simulation system is doing something pathological in
that trajectory, which somehow does not agree with the implementation of
-pbc whole (I'm guessing wildly here), but one would need to try the above
kinds of experiments to probe that, and/or visualize the trajectory in some
viewing program.

Mark

I am using gromacs/4.6.5.
>
> Best,
> Eric
>
> On Sun, Oct 26, 2014 at 5:21 PM, Mark Abraham 
> wrote:
>
> > Hi,
> >
> > The output does drop in frequency at some point, so that might be all you
> > are seeing. Experiment with -b and values around the putative problem
> area.
> >
> > Mark
> > On Oct 26, 2014 6:59 PM, "Eric Smoll"  wrote:
> >
> > > Hello Gromacs users,
> > >
> > > I have a trajectory file script18_o.trr that I am trying to process.
> > Using
> > > gmxcheck, this file appears to be complete. When I execute the command
> > > below
> > >
> > > trjconv -f ../script18/script18_o.trr -s ../script17/script17_o.tpr -o
> > > tmp1.trr -pbc whole << EOF
> > > 0
> > > EOF
> > >
> > > the code moves quickly through the first few hundred frames only to
> > > consistently get stuck on frame 300...
> > >
> > > trn version: GMX_trn_file (single precision)
> > >  ->  frame320 time  128.000->  frame300 time  120.000
> > >
> > > How do I troubleshoot the problem?
> > >
> > > -Eric


Re: [gmx-users] trjconv gets stuck on frame

2014-10-27 Thread Eric Smoll
Hi Mark,

I understand. It wasn't getting stuck in one place; if I skip over the
problem time when running from the beginning, the slowdown still occurs.

I am working on the Stampede Supercomputer using their install of gromacs
and for reasons I do not understand, this extremely slow processing only
occurs for some trajectories on Stampede. Thinking there was some problem
with the login nodes, I tried this on a compute node with the same results
- slow processing.

I transferred my trajectory to my laptop, installed the same version of
gromacs, and processed it with trjconv at a reasonable speed.

Best,
Eric


On Mon, Oct 27, 2014 at 5:33 AM, Mark Abraham 
wrote:

> On Sun, Oct 26, 2014 at 11:44 PM, Eric Smoll  wrote:
>
> > Hi Mark,
> >
> > Thank you for responding so rapidly. I should note that identical
> > processing (I use a script) on the trajectories produced by slightly
> > different chemical systems had no problem and trajconv produced a
> complete
> > processed trajectory.
> >
> > However, when processing the problematic few with trajconv, the
> trajectory
> > that is output is incomplete (the trjconv output has fewer frames than
> the
> > input trajectory).
> >
> > This is definitely not problem with the change in output frequency of
> > progress reports to the terminal.
> >
> > I am not sure if the -b flag is telling me anything. I move it around and
> > it still seems to get stuck. I have ~30,000 atoms in my system. The first
> > 120 ps are processed in ~ 5 seconds. The next 4 ps take ~ 30 sec. My
> > trajectory is many nanoseconds long.
> >
>
> The point is to try to see whether the issue happens x steps into the
> trajectory, or only at around t=120ps. Does it happen if you are using -pbc
> somethingelse? Does it happen if you copy the file to some other filesystem
> before using -pbc whole? One needs to find a pattern before one can guess
> where the problem might lie.
>
>
> > Again, my other chemically similar systems do no hang like this and the
> > simulation procedure is scripted so it is consistent across my different
> > chemical systems.
> >
>
> OK. It's possible your simulation system is doing something pathological in
> that trajectory, which somehow does not agree with the implementation of
> -pbc whole (I'm guessing wildly here), but one would need to try the above
> kinds of experiments to probe that, and or visualize the trajectory in some
> viewing program.
>
> Mark
>
> I am using gromacs/4.6.5.
> >
> > Best,
> > Eric
> >
> > On Sun, Oct 26, 2014 at 5:21 PM, Mark Abraham 
> > wrote:
> >
> > > Hi,
> > >
> > > The output does drop in frequency at some point, so that might be all
> you
> > > are seeing. Experiment with -b and values around the putative problem
> > area.
> > >
> > > Mark
> > > On Oct 26, 2014 6:59 PM, "Eric Smoll"  wrote:
> > >
> > > > Hello Gromacs users,
> > > >
> > > > I have a trajectory file script18_o.trr that I am trying to process.
> > > Using
> > > > gmxcheck, this file appears to be complete. When I execute the
> command
> > > > below
> > > >
> > > > trjconv -f ../script18/script18_o.trr -s ../script17/script17_o.tpr
> -o
> > > > tmp1.trr -pbc whole << EOF
> > > > 0
> > > > EOF
> > > >
> > > > the code moves quickly through the first few hundred frames only to
> > > > consistently get stuck on frame 300...
> > > >
> > > > trn version: GMX_trn_file (single precision)
> > > >  ->  frame320 time  128.000->  frame300 time  120.000
> > > >
> > > > How do I troubleshoot the problem?
> > > >
> > > > -Eric

Re: [gmx-users] How to save trajectories of our interest into vmd

2014-10-27 Thread Justin Lemkul



On 10/27/14 1:25 AM, Seera Suryanarayana wrote:

Dear Gromacs Users

I would like to analyze frames 150 to 160 out of 1000 frames. I have
been trying to load the frames of interest into VMD, but I was not able to
do it. Please tell me how to do this.



If you're having problems with VMD, they have a mailing list that might suit 
your question better.  If the question relates to how one views or analyzes only 
a subset of a trajectory, it's rather simple.  Gromacs tools make use of the -b 
and -e options, to allow you to -b(egin) and -e(nd) your analysis at any time 
frame in the trajectory.  If you want to visualize only a short segment of the 
trajectory, use trjconv -b and -e (using time in ps, not frame number) to write 
out a new trajectory for visualization.
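
For example, if frames were written every 10 ps, frames 150-160 could be
extracted with something like this (a sketch; the file names and the 10 ps
interval are assumptions, so adjust them to your own run):

trjconv -f traj.xtc -s topol.tpr -b 1500 -e 1600 -o segment.xtc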


-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==


Re: [gmx-users] error in the middle of running mdrun_mpi

2014-10-27 Thread Justin Lemkul



On 10/27/14 5:59 AM, Nizar Masbukhin wrote:

and how to use that 2 cores? i think that would increase performace twice
as now i am running 1 core per replica.



In the context of REMD, mdrun should figure this out if you issue the command 
over 2N processors, where N is the number of replicas.
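
A minimal launch line for that case might look like this (a sketch only,
assuming an MPI build of GROMACS 4.6, 16 replicas with input files named
remd_0.tpr ... remd_15.tpr, and 32 MPI ranks, i.e. 2 per replica):

mpirun -np 32 mdrun_mpi -multi 16 -replex 1000 -s remd_.tpr

With -multi, mdrun appends the replica index to the -s file name, so each
replica reads its own tpr and ends up with 32/16 = 2 ranks.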


-Justin


On Mon, Oct 27, 2014 at 7:15 AM, Justin Lemkul  wrote:




On 10/26/14 9:55 AM, Nizar Masbukhin wrote:


regarding gaining speed in implicit solvent simulation, i have tried to
parallelize using -ntmpi flag. However gromacs doesn't allow as i use
group
cutoff-scheme. Any recommendation how to parallelise implicit solvent
simulation? I do need parallelise my simulation. I have found the same
question in this mail list, one suggest use all-vs-all kernel which uses
zero cut-off.
This is my test run actually. I intend to run my simulation in cluster
computer.



Unless the restriction was lifted at some point, implicit simulations
won't run on more than 2 cores.  There were issues with constraints that
led to the limitation.

-Justin


  On Sun, Oct 26, 2014 at 8:23 PM, Justin Lemkul  wrote:





On 10/26/14 9:17 AM, Nizar Masbukhin wrote:

  Thanks Justin.

I have increased the cutoff, and yeah thats work. There were no error
message anymore. The first 6 nanoseconds, i felt the simulation run
slower.
Felt so curious that  simulation run very fast the rest of time.


  Longer cutoffs mean there are more interactions to calculate, but the

cutoffs aren't to be toyed with arbitrarily to gain speed.  They are a
critical element of the force field itself, though in implicit solvent,
it
is common to increase (and never decrease) the cutoff values used in
explicit solvent.  Physical validity should trump speed any day.

-Justin


   On Fri, Oct 24, 2014 at 7:37 PM, Justin Lemkul 
wrote:






On 10/24/14 8:31 AM, Nizar Masbukhin wrote:

   Thanks for yor reply, Mark.




At first i was sure that the problem was table-exension because when I
enlarge table-extension value, warning message didn't  appear anymore.
Besides, i have successfully minimized and equilibrated the system
(indicated by Fmax < emtol reached; and no error messages during
NVT&NPT
equilibration, except a warning that the Pcouple is turned off in
vacuum
system).

However, the error message appeared without table-extension warning
makes
me doubt also about my system stability. Here is my mdp setting.
Please
tell me if there are any 'weird' setting, and also kindly
suggest/recommend
a better setting.


*mdp file for Minimisation*


integrator = steep

nsteps = 5000

emtol = 200

emstep = 0.01

niter = 20

nstlog = 1

nstenergy = 1

cutoff-scheme = group

nstlist = 1

ns_type = simple

pbc = no

rlist = 0.5

coulombtype = cut-off

rcoulomb = 0.5

vdw-type = cut-off

rvdw-switch = 0.8

rvdw = 0.5

DispCorr = no

fourierspacing = 0.12

pme_order = 6

ewald_rtol = 1e-06

epsilon_surface = 0

optimize_fft = no

tcoupl = no

pcoupl = no

free_energy = yes

init_lambda = 0.0

delta_lambda = 0

foreign_lambda = 0.05

sc-alpha = 0.5

sc-power = 1.0

sc-sigma  = 0.3

couple-lambda0 = vdw

couple-lambda1 = none

couple-intramol = no

nstdhdl = 10

gen_vel = no

constraints = none

constraint-algorithm = lincs

continuation = no

lincs-order  = 12

implicit-solvent = GBSA

gb-algorithm = still

nstgbradii = 1

rgbradii = 0.5

gb-epsilon-solvent = 80

sa-algorithm = Ace-approximation

sa-surface-tension = 2.05


*mdp file for NVT equilibration*


define = -DPOSRES

integrator = md

tinit = 0

dt = 0.002

nsteps = 25

init-step = 0

comm-mode = angular

nstcomm = 100

bd-fric = 0

ld-seed = -1

nstxout = 1000

nstvout = 5

nstfout = 5

nstlog = 100

nstcalcenergy = 100

nstenergy = 1000

nstxtcout = 100

xtc-precision = 1000

xtc-grps = system

energygrps = system

cutoff-scheme= group

nstlist  = 1

ns-type = simple

pbc= no

rlist= 0.5

coulombtype = cut-off

rcoulomb= 0.5

vdw-type = Cut-off

vdw-modifier = Potential-shift-Verlet

rvdw-switch= 0.8

rvdw = 0.5

table-extension = 500

fourierspacing = 0.12

fourier-nx  = 0

fourier-ny = 0

fourier-nz = 0

implicit-solvent = GBSA

gb-algorithm = still

nstgbradii = 1

rgbradii = 0.5

gb-epsilon-solvent = 80

sa-algorithm = Ace-approximation

sa-surface-tension = 2.05

tcoupl = v-rescale

nsttcouple = -1

nh-chain-length = 10

print-nose-hoover-chain-variables = no

tc-grps = system

tau-t = 0.1

ref-t = 298.00

pcoupl = No

pcoupltype = Isotropic

nstpcouple = -1

tau-p = 1

refcoord-scaling = No

gen-vel = yes

gen-temp = 298.00

gen-seed  = -1

constraints= all-bonds

constraint-algorithm = Lincs

continuation = no

Shake-SOR = no

shake-tol = 0.0001

lincs-order = 4

lincs-iter = 1

lincs-warnangle = 30


*mdp file for NPT equilibration*


define = -DPOSRES

integrator = md

tinit = 0

dt = 0.002

nsteps = 50

init-step = 0

simulation-part = 1

comm-mode = angular

nstcomm = 100

bd-fric = 0

ld-seed = -1

nstxout = 1000

nstvout = 50

nstfout = 50

nstlog = 100

nstc

Re: [gmx-users] Time averaged ramachandran plot

2014-10-27 Thread Tsjerk Wassenaar
Hey :)

That should use circ.awk:

BEGIN { d2r = atan2(0,-1)/180 }   # rama.xvg angles are in degrees; awk trig works in radians
{
  x1 += sin($1*d2r)               # running sum of sines
  x2 += cos($1*d2r)               # running sum of cosines
}
END {
  m = atan2(x1,x2)/d2r            # circular mean, converted back to degrees
  print "Number of points = " NR
  print "Circular Mean = " m
}

Otherwise, the mean of 180 and -180 gives you a 0 angle.
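
It can be used in the same pipeline as before (a sketch; $1 is the phi column
of rama.xvg and $2 the psi column):

grep -v '^#\|^@' rama.xvg | grep "ALA1" | awk '{print $2}' | awk -f circ.awk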

Cheers,

Tsjerk

On Mon, Oct 27, 2014 at 11:11 AM, andrea  wrote:

> replace with this:
>
> `grep -v '^#\|^@' rama.xvg | grep "ALA1" | awk '{print $2}' | awk -f
> std.awk
>
>
>
>
> On 27/10/2014 11:04, andrea wrote:
>
>> Hi,
>>
>> on the fly try this using ALA1 as example (it can be any of your
>> residues):
>>
>> `grep -v '^#\|^@' rama.xvg | grep "ALA1" | awk '{if($2 == 0) print $2}' |
>> awk -f std.awk
>>
>>
>> where std.awk contains:
>>
>> {
>>   x1 += $1
>>   x2 += $1*$1
>> }
>> END {
>>   x1 = x1/NR
>>   x2 = x2/NR
>>   sigma = sqrt(x2 - x1*x1)
>>   if (NR > 1) std_err = sigma/sqrt(NR -1)
>>   print "Number of points = " NR
>>   print "Mean = " x1
>>   print "Standard Deviation = " sigma
>>   print "Standard Error = " std_err
>> }
>>
>>
>> hope it helps
>>
>> and
>>
>>
>> On 27/10/2014 01:21, Justin Lemkul wrote:
>>
>>>
>>>
>>> On 10/26/14 5:14 PM, Sanku M wrote:
>>>
 Hi  I plan to plot the ramachandran plot of all the dihedral angles
 each of which is averaged over time-frames of trajectories. But, I find
 g_rama or g_chi gives the time profile of ramachandran plot. But, if I want
 to plot the time-averaged Phi.Psi angles of all residues, is there any
 method to do it.ThanksSanku


>>> It's a process than can easily be written in any scripting language you
>>> like. You have all the data points, and you want an average.  Just
>>> post-process the output file with whatever kind of script (Perl, Python,
>>> etc) you like.
>>>
>>> -Justin
>>>
>>>
>>
> --
> ---
> Andrea Spitaleri PhD
> Principal Investigator AIRC
> D3 - Drug Discovery & Development
> Istituto Italiano di Tecnologia
> Via Morego, 30 16163 Genova
> cell: +39 3485188790
> http://www.iit.it/en/d3-people/andrea-spitaleri.html
> ORCID: http://orcid.org/-0003-3012-3557
>



-- 
Tsjerk A. Wassenaar, Ph.D.


Re: [gmx-users] trjconv gets stuck on frame

2014-10-27 Thread Mark Abraham
Hi,

Parallel file systems do give uneven service (e.g. one file has a chunk
that lives somewhere that was under high load right when you asked for
it...), so given that you can read the files normally on a normal
filesystem, then you should double-check Stampede's user guides for how to
make best use of their file systems, and/or take up the issue further with
the admins there.
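
For example, one could copy the trajectory to local or temporary storage before
post-processing (a sketch; the target path is an assumption, so use whatever
scratch space Stampede's user guide recommends):

cp script18_o.trr /tmp/
trjconv -f /tmp/script18_o.trr -s script17_o.tpr -o tmp1.trr -pbc whole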

Cheers,

Mark

On Mon, Oct 27, 2014 at 11:40 AM, Eric Smoll  wrote:

> Hi Mark,
>
> i understand. It wasn't getting stuck in one place, if I skip over the
> problem time when executed from the beginning the slowdown still occurs.
>
> I am working on the Stampede Supercomputer using their install of gromacs
> and for reasons I do not understand, this extremely slow processing only
> occurs for some trajectories on Stampede. Thinking there was some problem
> with the login nodes, I tried this on a compute node with the same results
> - slow processing.
>
> I transferred my trajectory to my laptop, installed the same version of
> gromacs, and processed it with trjconv at a reasonable speed.
>
> Best,
> Eric
>
>
> On Mon, Oct 27, 2014 at 5:33 AM, Mark Abraham 
> wrote:
>
> > On Sun, Oct 26, 2014 at 11:44 PM, Eric Smoll 
> wrote:
> >
> > > Hi Mark,
> > >
> > > Thank you for responding so rapidly. I should note that identical
> > > processing (I use a script) on the trajectories produced by slightly
> > > different chemical systems had no problem and trajconv produced a
> > complete
> > > processed trajectory.
> > >
> > > However, when processing the problematic few with trajconv, the
> > trajectory
> > > that is output is incomplete (the trjconv output has fewer frames than
> > the
> > > input trajectory).
> > >
> > > This is definitely not problem with the change in output frequency of
> > > progress reports to the terminal.
> > >
> > > I am not sure if the -b flag is telling me anything. I move it around
> and
> > > it still seems to get stuck. I have ~30,000 atoms in my system. The
> first
> > > 120 ps are processed in ~ 5 seconds. The next 4 ps take ~ 30 sec. My
> > > trajectory is many nanoseconds long.
> > >
> >
> > The point is to try to see whether the issue happens x steps into the
> > trajectory, or only at around t=120ps. Does it happen if you are using
> -pbc
> > somethingelse? Does it happen if you copy the file to some other
> filesystem
> > before using -pbc whole? One needs to find a pattern before one can guess
> > where the problem might lie.
> >
> >
> > > Again, my other chemically similar systems do no hang like this and the
> > > simulation procedure is scripted so it is consistent across my
> different
> > > chemical systems.
> > >
> >
> > OK. It's possible your simulation system is doing something pathological
> in
> > that trajectory, which somehow does not agree with the implementation of
> > -pbc whole (I'm guessing wildly here), but one would need to try the
> above
> > kinds of experiments to probe that, and or visualize the trajectory in
> some
> > viewing program.
> >
> > Mark
> >
> > I am using gromacs/4.6.5.
> > >
> > > Best,
> > > Eric
> > >
> > > On Sun, Oct 26, 2014 at 5:21 PM, Mark Abraham <
> mark.j.abra...@gmail.com>
> > > wrote:
> > >
> > > > Hi,
> > > >
> > > > The output does drop in frequency at some point, so that might be all
> > you
> > > > are seeing. Experiment with -b and values around the putative problem
> > > area.
> > > >
> > > > Mark
> > > > On Oct 26, 2014 6:59 PM, "Eric Smoll"  wrote:
> > > >
> > > > > Hello Gromacs users,
> > > > >
> > > > > I have a trajectory file script18_o.trr that I am trying to
> process.
> > > > Using
> > > > > gmxcheck, this file appears to be complete. When I execute the
> > command
> > > > > below
> > > > >
> > > > > trjconv -f ../script18/script18_o.trr -s ../script17/script17_o.tpr
> > -o
> > > > > tmp1.trr -pbc whole << EOF
> > > > > 0
> > > > > EOF
> > > > >
> > > > > the code moves quickly through the first few hundred frames only to
> > > > > consistently get stuck on frame 300...
> > > > >
> > > > > trn version: GMX_trn_file (single precision)
> > > > >  ->  frame320 time  128.000->  frame300 time
> 120.000
> > > > >
> > > > > How do I troubleshoot the problem?
> > > > >
> > > > > -Eric

Re: [gmx-users] MD workstation for Gromacs

2014-10-27 Thread Adelman, Joshua Lev

On Oct 27, 2014, at 3:14 AM, Carsten Kutzner wrote:

Hi,

On 27 Oct 2014, at 07:11, Mohammad Hossein Borghei 
mailto:mh.borg...@gmail.com>> wrote:

Thank you Szilárd,

So I would be really thankful if you could tell me which configuration is
the best:

2x GTX 980
2x GTX 780 Ti
2x GTX Titan Black
I would choose between the 980 and the 780Ti, which will give
you about the same performance. Buy whatever card you can get
for a cheaper price. The Titan Black will be too expensive in
relation to performance.

Carsten



Just a note about the GTX 780 Ti. While I don't know if people have had 
problems running with Gromacs, the Amber developers are recommending against 
this card due to high failure rates:

See the "Supported GPUs" section of:
http://ambermd.org/gpus/

and the following mailing list thread:
http://archive.ambermd.org/201406/0289.html

Josh





On Mon, Oct 20, 2014 at 12:24 AM, Szilárd Páll 
mailto:pall.szil...@gmail.com>>
wrote:

Please send such questions/requests to the GROMACS users' list, I'm
replying there.

- For GROMACS 2x GTX 980 will be faster than one TITAN Z.
- Consider getting plain DIMMs instead of ECC registered;
- If you care about memory bandwidth, AFAIK you need 8 memory modules;
this will not matter for GROMACS simulations, but it could matter for
analysis or other memory-intensive operations;

--
Szilárd


-- Forwarded message --
From: Mohammad Hossein Borghei 
mailto:mh.borg...@gmail.com>>
Date: Sun, Oct 19, 2014 at 12:34 PM
Subject: MD workstation for Gromacs
To: pall.szil...@gmail.com


Dear Mr. Szilárd

I saw your comments in Gromacs mailing list and I thought you can answer
my question. I would be really thankful if you could tell me whether the
attached configurations are appropriate for GPU computing in Gromacs. Which
one is better? Can they be improved by not any increase in price?

Kind Regards,

--
Mohammad Hossein Borghei









--
Mohammad Hossein Borghei
--


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/grubmueller/kutzner
http://www.mpibpc.mpg.de/grubmueller/sppexa






Re: [gmx-users] MD workstation for Gromacs

2014-10-27 Thread Abhi Acharya
Hi,
We use systems with a 780 Ti and 12 cores and have had no problems running
Gromacs. They give a performance of ~100 ns/day for a system with 35,000 atoms.

Regards,
Abhishek

On Mon, Oct 27, 2014 at 7:05 PM, Adelman, Joshua Lev  wrote:

>
> On Oct 27, 2014, at 3:14 AM, Carsten Kutzner wrote:
>
> Hi,
>
> On 27 Oct 2014, at 07:11, Mohammad Hossein Borghei  > wrote:
>
> Thank you Szilárd,
>
> So I would be really thankful if you could tell me which configuration is
> the best:
>
> 2x GTX 980
> 2x GTX 780 Ti
> 2x GTX Titan Black
> I would choose between the 980 and the 780Ti, which will give
> you about the same performance. Buy whatever card you can get
> for a cheaper price. The Titan Black will be too expensive in
> relation to performance.
>
> Carsten
>
>
>
> Just a note about the GTX 780 Ti. While I don't know if people have had
> problems running with Gromacs, the Amber developers are recommending
> against this card due high failure rates:
>
> See the "Supported GPUs" section of:
> http://ambermd.org/gpus/
>
> and the following mailing list thread:
> http://archive.ambermd.org/201406/0289.html
>
> Josh
>
>
>
>
>
> On Mon, Oct 20, 2014 at 12:24 AM, Szilárd Páll  >
> wrote:
>
> Please send such questions/requests to the GROMACS users' list, I'm
> replying there.
>
> - For GROMACS 2x GTX 980 will be faster than one TITAN Z.
> - Consider getting plain DIMMs instead of ECC registered;
> - If you care about memory bandwidth, AFAIK you need 8 memory modules;
> this will not matter for GROMACS simulations, but it could matter for
> analysis or other memory-intensive operations;
>
> --
> Szilárd
>
>
> -- Forwarded message --
> From: Mohammad Hossein Borghei  mh.borg...@gmail.com>>
> Date: Sun, Oct 19, 2014 at 12:34 PM
> Subject: MD workstation for Gromacs
> To: pall.szil...@gmail.com
>
>
> Dear Mr. Szilárd
>
> I saw your comments in Gromacs mailing list and I thought you can answer
> my question. I would be really thankful if you could tell me whether the
> attached configurations are appropriate for GPU computing in Gromacs. Which
> one is better? Can they be improved by not any increase in price?
>
> Kind Regards,
>
> --
> Mohammad Hossein Borghei
>
>
>
>
>
>
>
>
> --
> Mohammad Hossein Borghei
> --
>
>
> --
> Dr. Carsten Kutzner
> Max Planck Institute for Biophysical Chemistry
> Theoretical and Computational Biophysics
> Am Fassberg 11, 37077 Goettingen, Germany
> Tel. +49-551-2012313, Fax: +49-551-2012302
> http://www.mpibpc.mpg.de/grubmueller/kutzner
> http://www.mpibpc.mpg.de/grubmueller/sppexa
>
>
>
>


[gmx-users] webpage for searching gromacs mailing archive

2014-10-27 Thread Sanku M
Hi, I used to find the link for searching previously posted discussions and 
all the threads on the Gromacs home page, where there used to be a 'search' 
option for looking up discussions on a topic. But now, looking for the same 
archive under Mailing Lists - Gromacs redirects me to the Gromacs home page, where I 
cannot find anything on the archive. I would appreciate it if someone could redirect 
me to the right webpage. Thanks, Sanku


[gmx-users] Naughty Vacuum Bubble in our Vesicle!

2014-10-27 Thread Björn Sommer

Dear all,

we are trying to simulate a vesicle in water using united-atoms 
(Gromos96/ffG45a3). The system was modelled with the VesicleBuilder and 
the MembraneEditor. So first the vesicle was built (with 3 components: 2 
PC, 1 Chol), and then it was embedded in a water (spc216) box with 
genbox. The membrane-intersecting water was removed by a custom Python 
script in VMD. After the removal of the intersecting water, the water 
seems to be very well enclosed in the inner membrane, without 
intersecting water atoms and with only a little space between the inner 
head groups and the water.


The system minimization in water (spc216) worked pretty well, but after 
NPT equilibration I found a vacuum bubble in the interior of 
the vesicle.


I tried to do an NVT equilibration before the NPT, which ended with the 
same result.


Using different barostats (Parrinello-Rahman, Berendsen) and 
refcoord-scaling options couldn't change anything either.


To "repair" the vacuum I tried to manually insert some water using 
PyMOL; after equilibrating this system again, I got the same result 
with an even bigger bubble in the centre.


Analysing my system with g_energy showed a volume increase of about 400 nm³, 
which is roughly 4% more than the volume of the starting system. The 
system's energy increased by 20 kJ/mol.


Trying to simulate the vesicle with the vacuum bubble inside resulted in 
a deformed vesicle and an increased distance between the outer and the 
inner lipid layer.


The NPT.mdp file I used is the following:

;**
; NEIGHBORSEARCHING PARAMETERS =
; nblist update frequency =
nstlist  = 5
; ns algorithm (simple or grid) =
ns_type  = grid
; Periodic boundary conditions: xyz or none =
pbc  = xyz
; nblist cut-off =
rlist= 1.6

; OPTIONS FOR ELECTROSTATICS AND VDW =
; Method for doing electrostatics =
coulombtype  = PME
rcoulomb_switch  = 0.0
rcoulomb = 1.6
; Method for doing Van der Waals =
vdw_type = Shift
; cut-off lengths=
rvdw_switch  = 0.9
rvdw = 1.0
; Apply long range dispersion corrections for Energy and Pressure =
DispCorr = AllEnerPres

; OPTIONS FOR WEAK COUPLING ALGORITHMS =
; Temperature coupling   =
tcoupl   = Berendsen
; Groups to couple separately =
;TODO: adjust for several lipid types
tc-grps = CHO DPC DPE SOL
; Time constant (ps) and reference temperature (K) =
;TAUT
tau_t= 0.1 0.1 0.1 0.1
;REFT
ref_t= 300 300 300 300
; Pressure coupling  =
Pcoupl   = berendsen
Pcoupltype   = isotropic ;semiisotropic
; Time constant (ps), compressibility (1/bar) and reference P (bar) =
tau_p= 4.0  4.0
compressibility  = 3e-5 3e-5
ref_p= 1.0  1.0
refcoord-scaling = no

; GENERATE VELOCITIES FOR STARTUP RUN =
gen_vel  = no
gen_temp = 105
gen_seed = 473529

; OPTIONS FOR BONDS =
constraints  = all-bonds
fourierspacing   =
pme_order=  6
optimize_fft =  yes
; Type of constraint algorithm =
constraint_algorithm = Lincs
; Do not constrain the start configuration =
unconstrained_start  = no
; Highest order in the expansion of the constraint coupling matrix =
lincs_order  = 4
; Lincs will write a warning to the stderr if in one step a bond =
; rotates over more degrees than =
lincs_warnangle  = 30

;**

Also, I tried to relax my vesicle under NPT in vacuum in order to add 
water in a later step, but the NPT ended with a lot of LINCS warnings 
(rotation of more than 30 degrees). We have already read that there is 
some information on the gmx-list discussing this problem, but the 
question is whether it basically makes sense to follow the idea of a vacuum 
simulation or whether we should directly start with the solvated system in any case.


Can you suggest any method to solve this problem, or maybe help us to 
improve our .mdp? That would be great!


Best wishes,
Manuel (and Björn)



Re: [gmx-users] Naughty Vacuum Bubble in our Vesicle!

2014-10-27 Thread André Farias de Moura
Dear Manuel/Björn,

you cannot ignore that vesicle-like structures have a complex interfacial
energy, with terms arising from both the packing of lipids and the
curvature of the interface, among other factors. If it happens that you
placed the wrong number of water molecules inside the cavity, pressure
coupling with ordinary pressure values cannot fix a vacuum bubble just like
it would for an isotropic liquid, because the elimination of the bubble
would then require that both lipid packing and interface curvature should
change (your result clearly says that it is preferable to form a vacuum
bubble than to shrink the vesicle itself - and this is not a simulation
issue neither it is an artifact, this is just a balance between different
surface energy contributions arising from the vacuum cavity and the vesicle
interfaces). As I see it, you should try to remove fewer water molecules
from the original cavity (maybe relaxing the distance criteria to remove an
overlapping water molecule).

I hope it helps.

best,

Andre


On Mon, Oct 27, 2014 at 1:15 PM, Björn Sommer 
wrote:

> Dear all,
>
> we are trying to simulate a vesicle in water using united-atoms
> (Gromos96/ffG45a3). The system was modelled with the VesicleBuilder and the
> MembraneEditor. So first the vesicle was built (with 3 components: 2 PC, 1
> Chol), and then it was embedded in a water (spc216) box with genbox. The
> membrane-intersecting water was removed by a custom Python script in VMD.
> After the removel or the intersecting water, the water seems to be very
> well enclosed in the inner membrane, without intersecting water atoms and
> with only a little space between the inner head groups and the water.
>
> The system minimization in water (spc216) worked pretty well, but after
> NPT-equillibration I found a vacuum bubble in the intracellular room of the
> vesicle.
>
> I tried to do a NVT-equillibration before the NPT, which ended with the
> same result.
>
> Using different barostats (parrinello-rahman, Berendsen) and
> refcoord-scaling options could'nt change anything, too.
>
> To "repair" the vacuum I tried to manually insert some water using pymol,
> after equillibrating this system again, i got the same result with an even
> bigger bubble in the centre.
>
> Analysing my system with g_energy showed a volume increase about 400nm³,
> which is rawly 4% more than the volume of the starting system. The systems
> energy increased by 20 kJ/mol.
>
> Trying to simulate the vesicle with the vacuum bubble inside resulted in a
> deformed vesicle and an increased distance between the outer and the inner
> lipid layer.
>
> The NPT.mdp file I used is the following:
>
> ;**
> ; NEIGHBORSEARCHING PARAMETERS =
> ; nblist update frequency =
> nstlist  = 5
> ; ns algorithm (simple or grid) =
> ns_type  = grid
> ; Periodic boundary conditions: xyz or none =
> pbc  = xyz
> ; nblist cut-off =
> rlist= 1.6
>
> ; OPTIONS FOR ELECTROSTATICS AND VDW =
> ; Method for doing electrostatics =
> coulombtype  = PME
> rcoulomb_switch  = 0.0
> rcoulomb = 1.6
> ; Method for doing Van der Waals =
> vdw_type = Shift
> ; cut-off lengths=
> rvdw_switch  = 0.9
> rvdw = 1.0
> ; Apply long range dispersion corrections for Energy and Pressure =
> DispCorr = AllEnerPres
>
> ; OPTIONS FOR WEAK COUPLING ALGORITHMS =
> ; Temperature coupling   =
> tcoupl   = Berendsen
> ; Groups to couple separately =
> ;TODO: für mehrere Lipidtypen anpassen
> tc-grps = CHO DPC DPE SOL
> ; Time constant (ps) and reference temperature (K) =
> ;TAUT
> tau_t= 0.1 0.1 0.1 0.1
> ;REFT
> ref_t= 300 300 300 300
> ; Pressure coupling  =
> Pcoupl   = berendsen
> Pcoupltype   = isotropic ;semiisotropic
> ; Time constant (ps), compressibility (1/bar) and reference P (bar) =
> tau_p= 4.0  4.0
> compressibility  = 3e-5 3e-5
> ref_p= 1.0  1.0
> refcoord-scaling = no
>
> ; GENERATE VELOCITIES FOR STARTUP RUN =
> gen_vel  = no
> gen_temp = 105
> gen_seed = 473529
>
> ; OPTIONS FOR BONDS =
> constraints  = all-bonds
> fourierspacing   =
> pme_order=  6
> optimize_fft =  yes
> ; Type of constraint algorithm =
> constraint_algorithm = Lincs
> ; Do not constrain the start configuration =
> unconstrained_start  = no
> ; Highest order in the expansion of the constraint coupling matrix =
> lincs_order  = 4
> ; Lincs will write a warning to the stderr if in one step a bond =
> ; rotates over more degrees than =
> lincs_warnangle  = 30
>
> ;**
>
> Also, I trie

Re: [gmx-users] Naughty Vacuum Bubble in our Vesicle!

2014-10-27 Thread rajat desikan
Hi Bjorn,
I agree with Andre. Pack more water molecules inside the vesicle than what
you currently have. It is likely that the water penetrates quite a bit into
the headgroups, and hence you need more waters than you think (since water
can hydrogen bond with the lipid head groups). Also try warming the waters
slowly with an SA protocol while restraining the lipids.

Regards,

On Monday, October 27, 2014, André Farias de Moura  wrote:

> Dear Manuel/Björn,
>
> you cannot ignore that vesicle-like structures have a complex interfacial
> energy, with terms arising from both the packing of lipids and the
> curvature of the interface, among other factors. If it happens that you
> placed the wrong number of water molecules inside the cavity, pressure
> coupling with ordinary pressure values cannot fix a vacuum bubble just like
> it would for an isotropic liquid, because the elimination of the bubble
> would then require that both lipid packing and interface curvature should
> change (your result clearly says that it is preferable to form a vacuum
> bubble than to shrink the vesicle itself - and this is not a simulation
> issue neither it is an artifact, this is just a balance between different
> surface energy contributions arising from the vacuum cavity and the vesicle
> interfaces). As I see it, you should try to remove fewer water molecules
> from the original cavity (maybe relaxing the distance criteria to remove an
> overlapping water molecule).
>
> I hope it helps.
>
> best,
>
> Andre
>
>
> On Mon, Oct 27, 2014 at 1:15 PM, Björn Sommer  >
> wrote:
>
> > Dear all,
> >
> > we are trying to simulate a vesicle in water using united-atoms
> > (Gromos96/ffG45a3). The system was modelled with the VesicleBuilder and
> the
> > MembraneEditor. So first the vesicle was built (with 3 components: 2 PC,
> 1
> > Chol), and then it was embedded in a water (spc216) box with genbox. The
> > membrane-intersecting water was removed by a custom Python script in VMD.
> > After the removel or the intersecting water, the water seems to be very
> > well enclosed in the inner membrane, without intersecting water atoms and
> > with only a little space between the inner head groups and the water.
> >
> > The system minimization in water (spc216) worked pretty well, but after
> > NPT-equillibration I found a vacuum bubble in the intracellular room of
> the
> > vesicle.
> >
> > I tried to do a NVT-equillibration before the NPT, which ended with the
> > same result.
> >
> > Using different barostats (parrinello-rahman, Berendsen) and
> > refcoord-scaling options could'nt change anything, too.
> >
> > To "repair" the vacuum I tried to manually insert some water using pymol,
> > after equillibrating this system again, i got the same result with an
> even
> > bigger bubble in the centre.
> >
> > Analysing my system with g_energy showed a volume increase about 400nm³,
> > which is rawly 4% more than the volume of the starting system. The
> systems
> > energy increased by 20 kJ/mol.
> >
> > Trying to simulate the vesicle with the vacuum bubble inside resulted in
> a
> > deformed vesicle and an increased distance between the outer and the
> inner
> > lipid layer.
> >
> > The NPT.mdp file I used is the following:
> >
> > ;**
> > ; NEIGHBORSEARCHING PARAMETERS =
> > ; nblist update frequency =
> > nstlist  = 5
> > ; ns algorithm (simple or grid) =
> > ns_type  = grid
> > ; Periodic boundary conditions: xyz or none =
> > pbc  = xyz
> > ; nblist cut-off =
> > rlist= 1.6
> >
> > ; OPTIONS FOR ELECTROSTATICS AND VDW =
> > ; Method for doing electrostatics =
> > coulombtype  = PME
> > rcoulomb_switch  = 0.0
> > rcoulomb = 1.6
> > ; Method for doing Van der Waals =
> > vdw_type = Shift
> > ; cut-off lengths=
> > rvdw_switch  = 0.9
> > rvdw = 1.0
> > ; Apply long range dispersion corrections for Energy and Pressure =
> > DispCorr = AllEnerPres
> >
> > ; OPTIONS FOR WEAK COUPLING ALGORITHMS =
> > ; Temperature coupling   =
> > tcoupl   = Berendsen
> > ; Groups to couple separately =
> > ;TODO: für mehrere Lipidtypen anpassen
> > tc-grps = CHO DPC DPE SOL
> > ; Time constant (ps) and reference temperature (K) =
> > ;TAUT
> > tau_t= 0.1 0.1 0.1 0.1
> > ;REFT
> > ref_t= 300 300 300 300
> > ; Pressure coupling  =
> > Pcoupl   = berendsen
> > Pcoupltype   = isotropic ;semiisotropic
> > ; Time constant (ps), compressibility (1/bar) and reference P (bar) =
> > tau_p= 4.0  4.0
> > compressibility  = 3e-5 3e-5
> > ref_p= 1.0  1.0
> > refcoord-scaling = no
> >
> > ; GENERATE VELOCITIES FOR STARTUP RUN =
> > gen_vel  = no
> > gen_temp  

Re: [gmx-users] webpage for searching gromacs mailing archive

2014-10-27 Thread Justin Lemkul



On 10/27/14 11:07 AM, Sanku M wrote:

Hi   I used to find the link for searching the previous posted discussions
and all the threads in the gromacs home page where there used to be a
'search' option for looking for discussions on a topic. But, now, looking for
the same archive in Mailing Lists - Gromacs redirects me to gromacs home page
where I can not find anything on the archive. I will appreciate if someone
can redirect me the right webpage. ThanksSanku


Use Google.  For a variety of technical reasons, the search page and Nabble 
forums were disabled months ago.
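
For example, a plain Google query combining the list name with the topic
usually finds the relevant threads (just an illustration):

gmx-users trjconv "pbc whole" slow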


-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==


Re: [gmx-users] error in the middle of running mdrun_mpi

2014-10-27 Thread Nizar Masbukhin
I don't really understand the point. Could you please explain what you mean
in the last reply?
What command should I use?

Say I have 72 cores on 9 nodes and 16 replicas to simulate in implicit
solvent.


On 10/27/14 5:59 AM, Nizar Masbukhin wrote:

> And how do I use those 2 cores? I think that would double the performance,
> as I am currently running 1 core per replica.
>
>
In the context of REMD, mdrun should figure this out if you issue the
command over 2N processors, where N is the number of replicas.

-Justin

 On Mon, Oct 27, 2014 at 7:15 AM, Justin Lemkul  wrote:
>
>
>>
>> On 10/26/14 9:55 AM, Nizar Masbukhin wrote:
>>
>>  regarding gaining speed in implicit solvent simulation, i have tried to
>>> parallelize using -ntmpi flag. However gromacs doesn't allow as i use
>>> group
>>> cutoff-scheme. Any recommendation how to parallelise implicit solvent
>>> simulation? I do need parallelise my simulation. I have found the same
>>> question in this mail list, one suggest use all-vs-all kernel which uses
>>> zero cut-off.
>>> This is my test run actually. I intend to run my simulation in cluster
>>> computer.
>>>
>>>
>>>  Unless the restriction was lifted at some point, implicit simulations
>> won't run on more than 2 cores.  There were issues with constraints that
>> led to the limitation.
>>
>> -Justin
>>
>>
>>   On Sun, Oct 26, 2014 at 8:23 PM, Justin Lemkul  wrote:
>>
>>>
>>>
>>>
 On 10/26/14 9:17 AM, Nizar Masbukhin wrote:

   Thanks Justin.

> I have increased the cutoff, and yes, that works. There were no error
> messages anymore. For the first 6 nanoseconds the simulation felt slower.
> I was curious that the simulation ran very fast for the rest of the time.
>
>
>   Longer cutoffs mean there are more interactions to calculate, but the
>
 cutoffs aren't to be toyed with arbitrarily to gain speed.  They are a
 critical element of the force field itself, though in implicit solvent it
 is common to increase (and never decrease) the cutoff values used in
 explicit solvent.  Physical validity should trump speed any day.

 -Justin


On Fri, Oct 24, 2014 at 7:37 PM, Justin Lemkul 
 wrote:


>
>
>  On 10/24/14 8:31 AM, Nizar Masbukhin wrote:
>>
>>Thanks for your reply, Mark.
>>
>>
>>>
>>> At first I was sure that the problem was table-extension, because when I
>>> enlarged the table-extension value, the warning message didn't appear
>>> anymore.
>>> Besides, i have successfully minimized and equilibrated the system
>>> (indicated by Fmax < emtol reached; and no error messages during
>>> NVT&NPT
>>> equilibration, except a warning that the Pcouple is turned off in
>>> vacuum
>>> system).
>>>
>>> However, the error message appeared without table-extension warning
>>> makes
>>> me doubt also about my system stability. Here is my mdp setting.
>>> Please
>>> tell me if there are any 'weird' setting, and also kindly
>>> suggest/recommend
>>> a better setting.
>>>
>>>
>>> *mdp file for Minimisation*
>>>
>>>
>>> integrator = steep
>>>
>>> nsteps = 5000
>>>
>>> emtol = 200
>>>
>>> emstep = 0.01
>>>
>>> niter = 20
>>>
>>> nstlog = 1
>>>
>>> nstenergy = 1
>>>
>>> cutoff-scheme = group
>>>
>>> nstlist = 1
>>>
>>> ns_type = simple
>>>
>>> pbc = no
>>>
>>> rlist = 0.5
>>>
>>> coulombtype = cut-off
>>>
>>> rcoulomb = 0.5
>>>
>>> vdw-type = cut-off
>>>
>>> rvdw-switch = 0.8
>>>
>>> rvdw = 0.5
>>>
>>> DispCorr = no
>>>
>>> fourierspacing = 0.12
>>>
>>> pme_order = 6
>>>
>>> ewald_rtol = 1e-06
>>>
>>> epsilon_surface = 0
>>>
>>> optimize_fft = no
>>>
>>> tcoupl = no
>>>
>>> pcoupl = no
>>>
>>> free_energy = yes
>>>
>>> init_lambda = 0.0
>>>
>>> delta_lambda = 0
>>>
>>> foreign_lambda = 0.05
>>>
>>> sc-alpha = 0.5
>>>
>>> sc-power = 1.0
>>>
>>> sc-sigma  = 0.3
>>>
>>> couple-lambda0 = vdw
>>>
>>> couple-lambda1 = none
>>>
>>> couple-intramol = no
>>>
>>> nstdhdl = 10
>>>
>>> gen_vel = no
>>>
>>> constraints = none
>>>
>>> constraint-algorithm = lincs
>>>
>>> continuation = no
>>>
>>> lincs-order  = 12
>>>
>>> implicit-solvent = GBSA
>>>
>>> gb-algorithm = still
>>>
>>> nstgbradii = 1
>>>
>>> rgbradii = 0.5
>>>
>>> gb-epsilon-solvent = 80
>>>
>>> sa-algorithm = Ace-approximation
>>>
>>> sa-surface-tension = 2.05
>>>
>>>
>>> *mdp file for NVT equilibration*
>>>
>>>
>>> define = -DPOSRES
>>>
>>> integrator = md
>>>
>>> tinit = 0
>>>
>>> dt = 0.002
>>>
>>> nsteps = 25
>>>
>>> in

[gmx-users] Naughty Vacuum Bubble in our Vesicle

2014-10-27 Thread ABEL Stephane 175950
Hello Bjorn

I don't know if it is related to your problem, but I see a typo in your mdp
file for the pressure coupling:

Pcoupltype   = isotropic ;semiisotropic   
; Time constant (ps), compressibility (1/bar) and reference P (bar) =
tau_p= 4.0  4.0  
compressibility  = 3e-5 3e-5
ref_p= 1.0  1.0

You should have only one value each for tau_p, compressibility and ref_p if you
want to use an isotropic pressure coupling scheme (ref_p and compressibility
take two values only in the semiisotropic case). It is strange that grompp did
not flag this as an error.
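
For what it's worth, the isotropic version of that block would simply collapse
to one column (same values as in the quoted mdp, nothing new added):

Pcoupltype   = isotropic
tau_p= 4.0
compressibility  = 3e-5
ref_p= 1.0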

HTH

Stephane
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


[gmx-users] graphene top file

2014-10-27 Thread fatemeh ramezani
Dear gmx-users
I built a graphene sheet (graphene.pdb) with VMD. I converted it to a .gro file
with the editconf command. Then I made a .top file with the g_x2top command. But
in the .top file the connections between atoms are not correct (highlighted
lines below). How can I solve this problem?

Connection in pdb file for 12 atoms; 

CONECT    1    2
CONECT    2    1    3
CONECT    3    2    4
CONECT    4    3
CONECT    5    6
CONECT    6    5    7
CONECT    7    6    8
CONECT    8    7
CONECT    9   10
CONECT   10    9   11
CONECT   11   10   12


  Connection in top file for these atoms;
[ bonds ]
;   ai    aj  funct        c0        c1        c2        c3
     1     2     1   1.42e-01  4.00e+05  1.42e-01  4.00e+05
     1     6     1   1.42e-01  4.00e+05  1.42e-01  4.00e+05
     1  1936     1   1.42e-01  4.00e+05  1.42e-01  4.00e+05
     2     3     1   1.42e-01  4.00e+05  1.42e-01  4.00e+05
     2    81     1   1.42e-01  4.00e+05  1.42e-01  4.00e+05
     3     4     1   1.42e-01  4.00e+05  1.42e-01  4.00e+05
     3    84     1   1.42e-01  4.00e+05  1.42e-01  4.00e+05
     4     7     1   1.42e-01  4.00e+05  1.42e-01  4.00e+05
     4    85     1   1.41e-01  4.00e+05  1.41e-01  4.00e+05
     5     6     1   1.42e-01  4.00e+05  1.42e-01  4.00e+05
     5    10     1   1.41e-01  4.00e+05  1.41e-01  4.00e+05
     5  1940     1   1.42e-01  4.00e+05  1.42e-01  4.00e+05
     6     7     1   1.42e-01  4.00e+05  1.42e-01  4.00e+05
     7     8     1   1.42e-01  4.00e+05  1.42e-01  4.00e+05
     8    11     1   1.41e-01  4.00e+05  1.41e-01  4.00e+05
     8    89     1   1.41e-01  4.00e+05  1.41e-01  4.00e+05
     9    10     1   1.42e-01  4.00e+05  1.42e-01  4.00e+05
     9    14     1   1.42e-01  4.00e+05  1.42e-01  4.00e+05
     9  1944     1   1.42e-01  4.00e+05  1.42e-01  4.00e+05
    10    11     1   1.42e-01  4.00e+05  1.42e-01  4.00e+05
    11    12     1   1.42e-01  4.00e+05  1.42e-01  4.00e+05



Fatemeh Ramezani
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Naughty Vacuum Bubble in our Vesicle

2014-10-27 Thread Björn Sommer

Dear Andre, Rajat & Stephane,

thanks a lot for your light-speed suggestions!


@More Water Idea

I'll try to remove as little water as possible in my next try.

But what bothers me is the fact that I manually added some water after
the vacuum bubble had formed and equilibrated again, which resulted in
another vacuum bubble of the same or even larger size!


From my understanding, this should not happen. Maybe I overlooked 
something?



@Typo in MDP

Thanks Stephane. We used a number of MDP files; we first have to check them
all to see whether the typo was an exception or was repeated several times.
We will take this into account, but I fear it is not what is causing the
vacuum bubble - we will check it!



By the way, we are using GMX 4.6.X - would it make sense to switch to GMX 5?

Thanks a lot & best wishes!
Manuel & Björn

--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] error in the middle of running mdrun_mpi

2014-10-27 Thread Mark Abraham
On Mon, Oct 27, 2014 at 6:05 PM, Nizar Masbukhin 
wrote:

> i dont really understand the point. could you please what do you mean in
> the last reply?
> what command should i use?
>
> if, say i have 72 cores in 9 nodes, and 16 replicas to simulate in implicit
> solvent.


Hi,

You can only use two MPI ranks per replica if there's a limit of two ranks
per simulation. So that's 32 ranks in total, and something like

mpirun -np 32 mdrun_mpi -multidir  -replex whatever

after setting up the MPI environment to fill four nodes.
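
As a concrete sketch (the directory names and the exchange interval are only
placeholders; adapt them to however you have laid out the 16 replicas, one
.tpr per directory):

  mpirun -np 32 mdrun_mpi -multidir sim{0..15} -replex 500

Here sim{0..15} is bash brace expansion for the 16 replica directories, and
-np 32 gives each replica 32/16 = 2 MPI ranks.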

Mark
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Naughty Vacuum Bubble in our Vesicle

2014-10-27 Thread rajat desikan
Dear Bjorn,

A few thoughts:
1) Are you simulating a coarse-grained system (Martini) or an all-atom
system? Isotropic pressure coupling may be more appropriate for a vesicle
because of its spherical symmetry.
2) When you manually added water, did you do it in the vacuum bubble region
only?
3) What is the lateral tension in your vesicle? If your initial vesicle is
tightly packed and has a lot of tension, it may expand to relax, in which
case the internal density of the water may decrease in your production
simulations. (see the PNAS paper from Marrink's group for the procedure to
compute lateral tension).
4) Do you have sufficient water outside the vesicle to hydrate all the
lipids in the outer leaflet?

How about attaching a few snapshots so that we may take a look at them?

Regards,

On Tuesday, October 28, 2014, Björn Sommer 
wrote:

> Dear Andre, Rajat & Stephane,
>
> thanks a lot for your light-speed suggestions!
>
>
> @More Water Idea
>
> I'll try to remove as little water as possible in my next try.
>
> But what bothers me is the fact that I manually added some water after
> the vacuum bubble had formed and equilibrated again, which resulted in
> another vacuum bubble of the same or even larger size!
>
> From my understanding, this should not happen. Maybe I overlooked
> something?
>
>
> @Typo in MDP
>
> Thanks Stephane. We used a number of MDP files; we first have to check them
> all to see whether the typo was an exception or was repeated several times.
> We will take this into account, but I fear it is not what is causing the
> vacuum bubble - we will check it!
>
>
> By the way, we are using GMX 4.6.X - would it make sense to switch to GMX
> 5?
>
> Thanks a lot & best wishes!
> Manuel & Björn
>
> --
> Gromacs Users mailing list
>
> * Please search the archive at http://www.gromacs.org/
> Support/Mailing_Lists/GMX-Users_List before posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>


-- 
Rajat Desikan (Ph.D Scholar)
Prof. K. Ganapathy Ayappa's Lab (no 13),
Dept. of Chemical Engineering,
Indian Institute of Science, Bangalore
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Naughty Vacuum Bubble in our Vesicle

2014-10-27 Thread André Farias de Moura
Dear Manuel/Björn,

Based on your description, I guess that placing some more water after the
bubble has formed may not work as expected, because the bilayer forming the
vesicle might be somehow strained; that's why I suggested stepping back to
the system before equilibration. I think the number of water molecules that
fit inside a vesicle is a typical trial-and-error problem.

And upgrading the software shouldn't change anything, since this is
related to the physics of the model, not to the computation itself.

best,

Andre


On Mon, Oct 27, 2014 at 5:13 PM, Björn Sommer 
wrote:

> Dear Andre, Rajat & Stephane,
>
> thanks a lot for your light-speed suggestions!
>
>
> @More Water Idea
>
> I'll try to remove as little water as possible in my next try.
>
> But what bothers me is the fact that I manually added some water after
> the vacuum bubble had formed and equilibrated again, which resulted in
> another vacuum bubble of the same or even larger size!
>
> From my understanding, this should not happen. Maybe I overlooked
> something?
>
>
> @Typo in MDP
>
> Thanks Stephane. We used a number of MDP files; we first have to check them
> all to see whether the typo was an exception or was repeated several times.
> We will take this into account, but I fear it is not what is causing the
> vacuum bubble - we will check it!
>
>
> By the way, we are using GMX 4.6.X - would it make sense to switch to GMX
> 5?
>
> Thanks a lot & best wishes!
> Manuel & Björn
>
>
> --
> Gromacs Users mailing list
>
> * Please search the archive at http://www.gromacs.org/
> Support/Mailing_Lists/GMX-Users_List before posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>
>


-- 
_

Prof. Dr. André Farias de Moura
Department of Chemistry
Federal University of São Carlos
São Carlos - Brazil
phone: +55-16-3351-8090
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


[gmx-users] Using make_ndx to couple molecules before pulling simulation

2014-10-27 Thread Agnivo Gosai
Dear Users

In my topology (.top) file the [ molecules ] section contains the following:
[ molecules ]
; Compound#mols
DNA_chain_D 1
Protein_chain_L 1
Protein_chain_H 1

Now I want to do a pulling simulation where both chain L and chain H are
pulled simultaneously as a unit while keeping chain D fixed.
I am planning to position-restrain chain D by specifying an #ifdef POSRES_D
block in the topology for chain D.
In my pull code I want to specify pull_group0 = Chain_D and pull_group1 =
Chain_P, where Chain_P is the combination of Protein_chain_L and
Protein_chain_H.
Following Dr. Lemkul's tutorial on umbrella sampling, I understand that I can
use make_ndx to achieve this.

My DNA part consists of 488 atoms and the protein part consists of 4527 atoms.
While using make_ndx I tallied the number of atoms from the command prompt with
the residue numbers in my .gro (structure/coordinate) file and named the DNA
chain Chain_D and the two protein chains Chain_P.

I believe that this is the correct approach before I run the pulling
simulation.

I would appreciate it if experienced users could comment on my understanding.

Thanks & Regards
Agnivo Gosai
Grad Student, Iowa State University.
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Using make_ndx to couple molecules before pulling simulation

2014-10-27 Thread Justin Lemkul



On 10/27/14 5:27 PM, Agnivo Gosai wrote:

Dear Users

In my topology (.top) file the [ molecules ] section contains the following:
[ molecules ]
; Compound#mols
DNA_chain_D 1
Protein_chain_L 1
Protein_chain_H 1

Now I want to do a pulling simulation where both chain L and chain H are
pulled simultaneously as a unit while keeping chain D fixed.
I am planning to position-restrain chain D by specifying an #ifdef POSRES_D
block in the topology for chain D.
In my pull code I want to specify pull_group0 = Chain_D and pull_group1 =
Chain_P, where Chain_P is the combination of Protein_chain_L and
Protein_chain_H.
Following Dr. Lemkul's tutorial on umbrella sampling, I understand that I can
use make_ndx to achieve this.

My DNA part consists of 488 atoms and the protein part consists of 4527 atoms.
While using make_ndx I tallied the number of atoms from the command prompt with
the residue numbers in my .gro (structure/coordinate) file and named the DNA
chain Chain_D and the two protein chains Chain_P.

I believe that this is the correct approach before I run the pulling
simulation.

I request the experienced users to suggest and comment on my understanding.



Use gmxcheck to verify the contents of the index group, and you can decide for
yourself whether it's correct.  In reality, though, you don't even need make_ndx
here.  You have DNA as the reference and Protein as the pulled group.  Both
are default groups, so


pull_group0 = DNA
pull_group1 = Protein

does what you want with no extra effort.  You can of course combine that with
the restraints via a define statement if you need to.
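
A minimal sketch of how those pieces could fit together (the posre file name
and the omitted pull parameters are placeholders on my part, not something
generated for you automatically):

; in the DNA chain's topology
#ifdef POSRES_D
#include "posre_dna.itp"
#endif

; in the pulling .mdp, alongside your other pull settings (geometry, rate, k)
define       = -DPOSRES_D
pull         = umbrella
pull_ngroups = 1
pull_group0  = DNA
pull_group1  = Protein

If you do build a custom index file anyway, gmxcheck -n index.ndx will print
each group with its atom count so you can verify it.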


-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==
--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] graphene top file

2014-10-27 Thread Justin Lemkul



On 10/27/14 2:21 PM, fatemeh ramezani wrote:

Dear gmx-users
I built a graphene sheet (graphene.pdb) with VMD. I converted it to a .gro file
with the editconf command. Then I made a .top file with the g_x2top command. But
in the .top file the connections between atoms are not correct (highlighted
lines below). How can I solve this problem?



I see no highlighting.


Connection in pdb file for 12 atoms;

CONECT    1    2
CONECT    2    1    3
CONECT    3    2    4
CONECT    4    3
CONECT    5    6
CONECT    6    5    7
CONECT    7    6    8
CONECT    8    7
CONECT    9   10
CONECT   10    9   11
CONECT   11   10   12


   Connection in top file for these atoms;
[ bonds ]
;   ai    aj  funct        c0        c1        c2        c3
     1     2     1   1.42e-01  4.00e+05  1.42e-01  4.00e+05
     1     6     1   1.42e-01  4.00e+05  1.42e-01  4.00e+05
     1  1936     1   1.42e-01  4.00e+05  1.42e-01  4.00e+05
     2     3     1   1.42e-01  4.00e+05  1.42e-01  4.00e+05
     2    81     1   1.42e-01  4.00e+05  1.42e-01  4.00e+05
     3     4     1   1.42e-01  4.00e+05  1.42e-01  4.00e+05
     3    84     1   1.42e-01  4.00e+05  1.42e-01  4.00e+05
     4     7     1   1.42e-01  4.00e+05  1.42e-01  4.00e+05
     4    85     1   1.41e-01  4.00e+05  1.41e-01  4.00e+05
     5     6     1   1.42e-01  4.00e+05  1.42e-01  4.00e+05
     5    10     1   1.41e-01  4.00e+05  1.41e-01  4.00e+05
     5  1940     1   1.42e-01  4.00e+05  1.42e-01  4.00e+05
     6     7     1   1.42e-01  4.00e+05  1.42e-01  4.00e+05
     7     8     1   1.42e-01  4.00e+05  1.42e-01  4.00e+05
     8    11     1   1.41e-01  4.00e+05  1.41e-01  4.00e+05
     8    89     1   1.41e-01  4.00e+05  1.41e-01  4.00e+05
     9    10     1   1.42e-01  4.00e+05  1.42e-01  4.00e+05
     9    14     1   1.42e-01  4.00e+05  1.42e-01  4.00e+05
     9  1944     1   1.42e-01  4.00e+05  1.42e-01  4.00e+05
    10    11     1   1.42e-01  4.00e+05  1.42e-01  4.00e+05
    11    12     1   1.42e-01  4.00e+05  1.42e-01  4.00e+05




CONECT records are irrelevant for basically all Gromacs tools.  The bonds
created by g_x2top are determined by the criteria specified in the force
field's .n2t file.  Based on the snippet you've shown, I see nothing wrong.
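
For illustration only (this exact line is not copied from any shipped force
field, it just shows the general .n2t layout): a carbon bonded to three other
carbons at graphene-like distances would be described by something along the
lines of

  C    CA    0.000   12.011   3   C 0.142   C 0.142   C 0.142

i.e. atom name, type, charge, mass, number of bonds, then element/distance
pairs.  g_x2top assigns a bond to every neighbour that matches those distances
within its tolerance, so it is this file, not the PDB CONECT records, that
defines the connectivity you see in the [ bonds ] section.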


-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==
--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.