Thanks for your reply.
On Thu, Apr 25, 2013 at 7:18 PM, Richard Broadbent <
richard.broadben...@imperial.ac.uk> wrote:
> The 4.6.1 release is a more advanced version of gromacs with the latest
> kernels and features (GPU support, verlet cut-offs etc.).
>
> 4.5.7 is a maintenance release for those of us who for whatever reason wish to keep using the older 4.5.x series release.
Thanks for the clarification.
On Thu, Apr 25, 2013 at 7:48 PM, Justin Lemkul wrote:
>
>
> On 4/25/13 9:48 AM, Richard Broadbent wrote:
>
>> The 4.6.1 release is a more advanced version of gromacs with the latest
>> kernels
>> and features (GPU support, verlet cut-offs etc.).
>>
>> 4.5.7 is a maintenance release for those of us who for whatever reason wish to keep using the older 4.5.x series release.
Good. Note, however, that we do get the right temperature with dt = 20 fs with
Martini, so your energy leak might be in the cutoff scheme, or the system is
really badly equilibrated.
On Apr 25, 2013, at 18:23, ABEL Stephane 175950 wrote:
> Xavier
>
> I have followed your suggestion and did a lon
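The equipartition check implied above (recovering the right temperature at dt = 20 fs) can be sketched numerically. This is a minimal illustration in GROMACS energy units; all numbers are hypothetical, not output from any of the runs discussed here:

```python
# Sanity-check a reported temperature against the kinetic energy via
# equipartition: T = 2 * E_kin / (N_df * kB).
# kB in GROMACS units, kJ/(mol*K). All numbers below are hypothetical.
KB = 0.0083144621  # kJ/(mol*K)

def temperature(e_kin, n_df):
    """Instantaneous temperature from total kinetic energy (kJ/mol)
    and the number of degrees of freedom."""
    return 2.0 * e_kin / (n_df * KB)

# A system with 9000 degrees of freedom at 298 K carries
# E_kin = 0.5 * N_df * kB * T kJ/mol; recomputing T recovers 298 K.
e_kin = 0.5 * 9000 * KB * 298.0
print(round(temperature(e_kin, 9000), 6))  # 298.0
```

If the temperature recomputed this way drifts from the thermostat target over time, energy is leaking somewhere (cutoff scheme, too-large dt, poor equilibration).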
Dear Mark & Junghans,
Thanks for your valuable suggestions.
I have gone through the README file. It says it is compatible with the 4.5.x
versions. I am using 4.5.5, so I think it's not a problem.
@Junghans: I have installed pkg-config; in fact, $ pkg-config --libs libgmx
returns the expected output.
Thanks for your reply.
Actually, I am interested in seeing how much structural deviation occurs in a
protein during the simulation relative to the average positions of its atoms,
rather than the initial positions (the crystal or starting structure).
The motivation for this analysis is the fact that
> Date: Thu, 25 Apr 2013 22:57:55 +0200
> From: Mark Abraham
> Subject: Re: [gmx-users] Manual installation of new analysis tool
> To: Discussion list for GROMACS users
>
> On Wed, Apr 24, 2013 at 8:14 AM, Venkat Reddy wrote
Thanks for the answer. I'll check gmx4.5.7 and report back.
I am not sure what you mean by "GROMACS swaps the coordinates, not the ensemble
data". The coupling to P and T is not exchanged with the coordinates? That would
explain what I see, but let's see what 4.5.7 has to say first.
Tks.
On Apr 25, 2013, a
On Wed, Apr 24, 2013 at 8:14 AM, Venkat Reddy wrote:
> Dear all,
> I have got an analysis tool for analyzing membrane density from Dr.Luca
> monticelli. I have followed the installation instructions as given.
>
> 1) First thing is to load GROMACS
>$ *source /usr/local/gromacs/bin/GMXRC*
> 2)
X-C-CN-X is not present in amber03 in the GROMACS distribution. You seem to
be using some modified version. Please ask the person who modified it :-)
Mark
On Wed, Apr 24, 2013 at 4:26 PM, Elisa Frezza wrote:
> Dear All,
>
> I am starting to use amber03 force field, but I have found something ve
Yes, I got exchanges. By construction! :-)
Email me off-list if you would like a methods description (for what it is
worth).
Mark
On Thu, Apr 25, 2013 at 1:11 PM, massimo sandal wrote:
> This hints at an interesting protocol/attempt, at least for a sort of
> newbie like me. Can you elaborate?
Thanks for the good report. There have been some known issues about the
timing of coupling stages with respect to various intervals between GROMACS
events for some algorithms. There are a lot of fixed problems in 4.5.7 that
are not specific to REMD, but I have a few lingering doubts about whether
w
Hi Bipin Singh,
That indeed gives you the RMSD against the average. Do think about it a bit
more: do you want the average of the whole structure, or should you account
for a phase of relaxation?
Cheers,
Tsjerk
On Wed, Apr 24, 2013 at 2:17 PM, Justin Lemkul wrote:
>
>
> On 4/24/13 3:06 AM, bi
Dear Gromacs users,
I have some corrupted frames in different trajectories. gmxcheck with .trr
trajectories gives extraordinary positions or velocities and with .xtc
trajectories gives rise to the magic number error. I am aware of the program
gmx_rescue kindly offered to us by its developers. H
This works well until you use a system that permits job suspension. Then
-maxh gets double-crossed... :-)
Mark
On Apr 25, 2013 3:41 PM, "Richard Broadbent" <
richard.broadben...@imperial.ac.uk> wrote:
> I generally build a tpr for the whole simulation then submit one job using
> a command such as
On 4/25/13 12:02 PM, sarah k wrote:
Dear all,
I'm interested in calculating the Gibbs free energy of some systems. I
have several questions in this regard:
1- The g_energy command gives some values in kJ/mol for total energy
and enthalpy. Does the reported total energy include entropy effects
Xavier
I have followed your suggestion and did a longer NPT equilibration with smaller
dt and nstlist values, and it works. The energy and temperature reach stable
values, as I wanted.
Thank you again for your help.
Stephane
Dear all,
I'm interested in calculating the Gibbs free energy of some systems. I
have several questions in this regard:
1- The g_energy command gives some values in kJ/mol for total energy
and enthalpy. Does the reported total energy include entropy effects?
(The calculated total energy and entha
Dear Szilárd:
Thank you for your assistance. I understand the importance of reading the
documentation and I read it about 5 times before I posted to this list. In
fact, it's kind of buried in my initial post, but I did run MPI gromacs with
mpirun -np 3 the first time and it didn't work.
I have
Hi,
You should really check out the documentation on how to use mdrun 4.6:
http://www.gromacs.org/Documentation/Acceleration_and_parallelization#Running_simulations
Brief summary: when running on GPUs every domain is assigned to a set
of CPU cores and a GPU, hence you need to start as many PP MPI
Dear Gromacs users,
I'm simulating a water-oil interface. From the literature, the interfacial tension
is determined from the pressure tensor as follows: 1/2 (Pzz - (Pxx+Pyy)/2) * Lz,
where Pzz, Pxx and Pyy are the diagonal components of the pressure tensor along
the three axes. I did an NPT simulation step and I want to check my interfacial
Thank you for your reply :) I will look further
On 25 April 2013 23:18, Justin Lemkul wrote:
>
>
> On 4/25/13 9:10 AM, Souilem Safa wrote:
>
>> Dear Justin,
>> Please, I want to ask you about the calculation of interfacial tension from
>> the formula: 1/2 (Pzz - (Pxx+Pyy)/2) * Lz
>> Should I calculat
Thank you Berk,
I am still getting an error when I try with MPI compiled gromacs 4.6.1 and -np
set as you suggested.
I ran like this:
mpirun -np 6 /nics/b/home/cneale/exe/gromacs-4.6.1_cuda/exec2/bin/mdrun_mpi
-notunepme -deffnm md3 -dlb yes -npme -1 -cpt 60 -maxh 0.1 -cpi md3.cpt -nsteps
5000
On 4/25/13 10:12 AM, Vivek Modi wrote:
Hello,
I am using g_rmsf for analysis of a protein simulation. I want to calculate
the fluctuations with respect to a reference structure (using -od option).
But I am encountering a problem. Please correct me if I am wrong at some
place. The following two
On 4/25/13 9:10 AM, Souilem Safa wrote:
Dear Justin,
Please, I want to ask you about the calculation of interfacial tension from
the formula: 1/2 (Pzz - (Pxx+Pyy)/2) * Lz
Should I calculate the interfacial tension at every step and take the
average over all the steps, or should I ta
On 4/25/13 9:48 AM, Richard Broadbent wrote:
The 4.6.1 release is a more advanced version of gromacs with the latest kernels
and features (GPU support, verlet cut-offs etc.).
4.5.7 is a maintenance release for those of us who for whatever reason wish to
keep using the older 4.5.x series release.
On 4/25/13 10:05 AM, Dr. Vitaly Chaban wrote:
PME should NOT be used with charged systems, for obvious reasons.
FYI Gromacs provides a neutralizing background charge.
http://comments.gmane.org/gmane.science.biology.gromacs.user/639
There may, of course, be issues with the physical reality
@ Vitaly
of course. I know that. My system is neutral but with charged particles (AOT
and Na+).
@Xavier
I will try your suggestion and equilibrate my system for a longer period
Thanks again
Stephane
--
Message: 1
Date:
Hello,
I am using g_rmsf for analysis of a protein simulation. I want to calculate
the fluctuations with respect to a reference structure (using -od option).
But I am encountering a problem. Please correct me if I am wrong at some
place. The following two methods are giving me different results. I
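The two results need not agree: the fluctuation about the average structure and the deviation from a fixed reference are related by a parallel-axis-style identity, so they differ by the offset between the mean and the reference. A minimal one-dimensional sketch with toy numbers (not GROMACS output):

```python
import math

# Toy one-dimensional "trajectory" of a single coordinate over five frames.
x = [1.0, 1.2, 0.8, 1.1, 0.9]
ref = 0.5  # a fixed reference position, e.g. the starting structure

mean = sum(x) / len(x)
fluct_sq = sum((xi - mean) ** 2 for xi in x) / len(x)  # RMSF^2 about the mean
dev_sq = sum((xi - ref) ** 2 for xi in x) / len(x)     # RMSD^2 from the reference

# Parallel-axis identity: <(x-ref)^2> = <(x-<x>)^2> + (<x> - ref)^2,
# so the deviation from a fixed reference always exceeds (or equals)
# the fluctuation about the mean.
print(math.isclose(dev_sq, fluct_sq + (mean - ref) ** 2))  # True
```

So a g_rmsf run against a reference structure and one against the trajectory average are expected to give different numbers whenever the average drifts away from the reference.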
PME should NOT be used with charged systems, for obvious reasons.
On Thu, Apr 25, 2013 at 4:00 PM, ABEL Stephane 175950
wrote:
> And? Sorry, but I do not understand...
>
> Stephane
>
And? Sorry, but I do not understand...
Stephane
--
Message: 2
Date: Thu, 25 Apr 2013 15:39:12 +0200
From: "Dr. Vitaly Chaban"
Subject: Re: [gmx-users] Martini with PME, temp too low
To: Discussion list for GROMACS users
Hello,
My simulations crash when using sc-r-power = 48, even if I'm running with
sc-coul = no, couple-lambda0 = vdw-q, couple-lambda1 = vdw.
If I run couple-lambda0 = vdw and couple-lambda1 = none, it works.
The free-energy config that is causing the crash is:
init-lambda-state = 0
couple-intramol
Well, 400 ps is rather short, and you can expect deviations from such short
simulations if you start from a non-equilibrated system. I am not sure what
the void is, but this indicates that your system might not be equilibrated.
You can try to decrease the time step and nstlist to see if you the
The 4.6.1 release is a more advanced version of gromacs with the latest
kernels and features (GPU support, verlet cut-offs etc.).
4.5.7 is a maintenance release for those of us who for whatever reason
wish to keep using the older 4.5.x series release. It mainly adds fixes
made to the 4.6.x series.
The solution is to use:
mdrun -cpi state.cpt
Dr. Vitaly Chaban
On Thu, Apr 25, 2013 at 2:37 PM, Justin Lemkul wrote:
>
>
>> Can anybody tell me how to split the script such that I will get the full
>> 20 ns simulation
>>
>>
> You specified a given time limit for the job, and the run exceede
Sorry for the double post, but I forgot to remove the other messages. I have
also added the average values obtained for this run.
Statistics over 20001 steps using 4001 frames
Energies (kJ/mol)
        Bond      G96Angle       LJ (SR)   Coulomb (SR)   Coul. recip.
 1.65683e+04
I generally build a tpr for the whole simulation then submit one job
using a command such as:
mpirun -n ${NUM_PROCESSORS} mdrun -deffnm ${NAME} -maxh
${WALL_TIME_IN_HOURS}
copy all the files back at the end of the script if necessary then:
then resubmit it (sending out all the files again if
Hmmm
Aren't the keywords here "charged system" + "PME"?
Dr. Vitaly Chaban
On Thu, Apr 25, 2013 at 1:34 PM, XAvier Periole wrote:
>
> Did you visualise the system? T in function of time? Epot in function of
> time?
>
> As a side note (not relevant for PME) the mix of nstlist = 10 and the
Hi,
Please let me know which is the most recent version of GROMACS (with the latest
bugfixes) among the following:
4.5.7 and 4.6.1
And what is the reason behind updates to the 4.5.x series if the 4.6.x
versions are already available?
--
---
Thanks and Regards,
Bipin Singh
Hello Xavier,
Thank you for your response.
>> nstlist = 10 and the rlist = 1.0
My mistake, I did not change these values when I switched to PME.
I have rerun the simulation for 400 ps in NPT with these changes and plotted
Epot and Temp vs. time. The Epot and Temp values are not stable. The
Dear Justin,
Please, I want to ask you about the calculation of interfacial tension from
the formula: 1/2 (Pzz - (Pxx+Pyy)/2) * Lz
Should I calculate the interfacial tension at every step and take the
average over all the steps, or should I take the average
of the pressure tensors from
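The tension expression discussed in this thread is linear in the pressure-tensor components, so for a fixed box height Lz the two routes (averaging the per-step tension, or computing the tension from time-averaged tensor components) give the same answer. A minimal sketch with hypothetical per-frame values; the 0.1 factor converts bar*nm to mN/m:

```python
# Interfacial tension gamma = 0.5 * (Pzz - (Pxx + Pyy)/2) * Lz,
# evaluated per frame and from the time-averaged tensor components.
# Hypothetical per-frame diagonal pressure components (bar), fixed Lz (nm).
frames = [
    (55.0, 57.0, 210.0),
    (60.0, 52.0, 205.0),
    (50.0, 58.0, 215.0),
]
LZ = 6.0  # nm, assumed constant for this sketch

def gamma(pxx, pyy, pzz, lz):
    """Tension in bar*nm for one set of diagonal tensor components."""
    return 0.5 * (pzz - (pxx + pyy) / 2.0) * lz

per_frame = [gamma(pxx, pyy, pzz, LZ) for pxx, pyy, pzz in frames]
avg_of_gamma = sum(per_frame) / len(per_frame)

n = len(frames)
gamma_of_avg = gamma(sum(f[0] for f in frames) / n,
                     sum(f[1] for f in frames) / n,
                     sum(f[2] for f in frames) / n, LZ)

# Linearity in the pressure components makes the two routes agree (fixed Lz).
print(abs(avg_of_gamma - gamma_of_avg) < 1e-9)  # True
# Unit conversion: 1 bar*nm = 1e5 Pa * 1e-9 m = 1e-4 N/m = 0.1 mN/m.
print(avg_of_gamma * 0.1, "mN/m")
```

In a real NPT run Lz fluctuates, so the equality is only approximate; averaging the per-step tension is then the safer choice.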
You can split the simulation into different parts (for example, 5 ns each);
every time one part finishes, you extend it by adding more time.
http://www.gromacs.org/Documentation/How-tos/Extending_Simulations?highlight=extend
My cluster uses a different "script system" than yours, so I can't help
with
On 4/25/13 8:28 AM, Sainitin Donakonda wrote:
Hey all,
I recently ran a 20 ns simulation of a protein-ligand complex on a cluster. I
used the following script to run the simulation:
grompp -f MD.mdp -c npt.gro -t npt.cpt -p topol.top -n index.ndx -o
md_test.tpr
mpirun -n 8 mdrun -s md_test.tpr -deffnm md_tes
Hey all,
I recently ran a 20 ns simulation of a protein-ligand complex on a cluster. I
used the following script to run the simulation:
grompp -f MD.mdp -c npt.gro -t npt.cpt -p topol.top -n index.ndx -o
md_test.tpr
mpirun -n 8 mdrun -s md_test.tpr -deffnm md_test -np 8
I saved this as MD.sh and then submitted
Hi,
Twin-range will lead to extra errors, which could be negligible or not.
But the errors should be the same and have the same effects in different
versions.
I think nothing has changed in the twin-range treatment from 4.5 to 4.6, but I
am not 100% sure.
Which version with twin-range matches y
Did you visualise the system? T as a function of time? Epot as a function of time?
As a side note (not relevant for PME), the mix of nstlist = 10 and rlist = 1.0
is pretty bad! You want at least rlist = 1.2 when nstlist = 5, and rlist = 1.4
if nstlist = 10.
On Apr 25, 2013, at 1:10 PM, ABEL Stephane 1
Hi,
I have been recently using the REMD code in gmx-407 and gmx-453 and got a few
systems crashing for unclear reasons so far. The main tests I made are using
gmx407 but it is all reproducible with gmx453. The crashing was also reproduced
(not necessarily at the same time point) on several arc
Hello all,
I am trying to test the Martini force field with PME for a charged system that
contains Na+, water, surfactant and octane molecules at 298 K and P = 0.1 MPa.
My system works well if I use the standard shift parameters (correct temperature
and pressure). But for the simulation with PME, the
This hints at an interesting protocol/attempt, at least for a sort of
newbie like me. Can you elaborate? Did they exchange?
On 25 Apr 2013 13:06, "Mark Abraham" wrote:
> Likewise, I agreed with what Massimo said.
>
> As an example, I recently did a fairly large set of REMD simulations of a
> 320-
Likewise, I agreed with what Massimo said.
As an example, I recently did a fairly large set of REMD simulations of a
320-atom disordered peptide with rather more water and many fewer replicas
than you propose. I did so because I expected low barriers and large
maximum diameter (the latter from an
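For context on whether exchanges can be expected at all between neighbouring replicas, the standard (NVT) REMD Metropolis criterion can be sketched as below. This is the generic textbook form, not the exact GROMACS implementation, and all numbers are hypothetical:

```python
import math
import random

KB = 0.0083144621  # kJ/(mol*K), GROMACS energy units

def accept_exchange(t1, t2, u1, u2, rng=random.random):
    """Metropolis acceptance for swapping configurations between two
    temperature replicas: accept with probability min(1, exp(-delta)),
    where delta = (1/(kB*T1) - 1/(kB*T2)) * (U2 - U1)."""
    delta = (1.0 / (KB * t1) - 1.0 / (KB * t2)) * (u2 - u1)
    return delta <= 0 or rng() < math.exp(-delta)

# With close temperatures and overlapping potential-energy distributions,
# swaps are frequent; a large energy gap between neighbours suppresses them.
print(accept_exchange(300.0, 320.0, -1000.0, -1100.0))  # True (delta <= 0)
```

This is why fewer replicas with more water (larger energy gaps) trade exchange frequency against coverage, as in the setup described above.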
Dear Berk, dear GMXers,
On Apr 23, 2013, at 2:54 PM, Stefan Kesselheim
wrote:
> The temperature is 300.6, target temperature was 300. That should be fine. I
> did check weaker fields and weaker thermostat coupling. Everything stayed
> optimally consistent, within 4.5.5, however incompatible wi
Hi,
You're using thread-MPI, but you should compile with real MPI, and then start
as many MPI processes as the total number of GPUs.
Cheers,
Berk
> From: chris.ne...@mail.utoronto.ca
> To: gmx-users@gromacs.org
> Date: Wed, 24 Apr 2013 17:08:28 +
> Subject: [gmx-users] How to use multiple nodes, each with 2 CPUs