Hi Dimitar,
Thanks for the bug report. Would you mind trying the test program I attached on the same file system that you get the truncated files on? Compile it with:
gcc testje.c -o testio
Sander
[Attachment: testje.c (binary data)]
On Jun 7, 2011, at 23:21 , Dimitar Pachov wrote:
> Hello,
>
> Just a quick upd
The walls are simply interactions between the atomtype of the wall you
specified (in your case opls_966 and opls_968), and the rest of the system, at
the planes defined by z=0 or z=z_box.
> wall_type= 9-3
> wall_r_linpot= 1
> wall_atomtype = opls_966 opls_968
> wall_density= 9-3 9
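For comparison, a minimal sketch of a two-wall setup along these lines - the pbc setting and the density values here are assumptions for illustration, not a correction of your input:

```
; two walls, at z = 0 and z = z_box; 9-3 walls need a number density per wall
pbc           = xy
nwall         = 2
wall_type     = 9-3
wall_r_linpot = 1
wall_atomtype = opls_966 opls_968
wall_density  = 9 9
```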
orrectly (in any 4.5 version).
As Berk said, these issues are bad enough for a new version to be released soon.
Sander
On 2 Nov 2010, at 11:55 , Michael Brunsteiner wrote:
>
> Sander pronk wrote:
>
>
>> Hi Michael,
>>
>> I've been able to reproduce both probl
Hi Michael,
I've been able to reproduce both problems - I'll fix them shortly.
Sander
On 2 Nov 2010, at 10:28 , Michael Brunsteiner wrote:
>
> Hi everybody,
>
> I run NPT simulations (with the double precision version of mdrun) of a
> polymer melt
> with anisotropic pressure scaling. The sim
One trick I am working with right now, is to have
- periodic boundary conditions only xy
- remove the COM motion of the water and the graphite separately. The COM
motion is now removed only in the xy direction, so they are free to move in z.
- put the surface (your graphite) at coordinate 0
- enab
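The steps above, as a hedged .mdp sketch (the group names SOL and GRA are placeholders for your own index groups):

```
pbc       = xy       ; periodic boundary conditions in x and y only
nwall     = 2        ; walls close the box in z
comm-mode = Linear
comm-grps = SOL GRA  ; remove COM motion of water and graphite separately
```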
On 21 Oct 2010, at 16:50 , Carsten Kutzner wrote:
> On Oct 21, 2010, at 4:44 PM, Sander Pronk wrote:
>
>>
>> Thanks for the information; the OpenMPI recommendation is probably because
>> OpenMPI goes to great lengths trying to avoid process migration. The numactl
>
On 21 Oct 2010, at 14:04 , Carsten Kutzner wrote:
> Hi Sander,
>
> On Oct 21, 2010, at 12:27 PM, Sander Pronk wrote:
>
>> Hi Carsten,
>>
>> As Berk noted, we haven't had problems on 24-core machines, but quite
>> frankly I haven't looked at thread mig
Hi Carsten,
As Berk noted, we haven't had problems on 24-core machines, but quite frankly I
haven't looked at thread migration.
Currently, the wait states actively yield to the scheduler, which is an
opportunity for the scheduler to re-assign threads to different cores. I could
set harder thr
e hydrophobic effect is involved in all ligand binding it seems quite
> hopeless to get any reliable numbers when neglecting entropy. No referee will
> buy that - I wouldn't.
>
>>
>> What do you think?
>>
>> ---
Hi Mohsen,
The mean energy difference is only one component of the free energy difference.
Before you go any further I'd suggest reading a good book on molecular
simulations, like 'Understanding Molecular Simulation' by Frenkel and Smit.
There's a good reason free energy calculations cover o
On 14 Oct 2010, at 05:49 , Sikandar Mashayak wrote:
> Recently I installed gromacs4.5.1 without mpi support on my workstation with
> 8 cores using cmake and make -j 8, make install commands ( as suggested on
> installation instructions).
>
> Now when I do mdrun it automatically utilizes all t
Hi,
The procedure you're describing is correct (and I believe g_analyze can do the
integration for you). However, with 4.5 there's a new way to calculate free
energies, using Bennett's Acceptance Ratio.
That method relies on the energy differences between the state the simulation
is at (init
It sounds like there is something wrong with the 'B' topology. Have you tried
the 'decoupling' .mdp parameters? They're there specifically for your type of
calculations.
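A hedged sketch of those decoupling parameters (the molecule name LIG is a placeholder for the molecule type in your topology):

```
free-energy     = yes
init-lambda     = 0.0
couple-moltype  = LIG    ; placeholder: the molecule to decouple
couple-lambda0  = vdw-q  ; fully coupled at lambda = 0
couple-lambda1  = none   ; fully decoupled at lambda = 1
couple-intramol = no
```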
Sander
On Sep 2, 2010, at 04:58 , Emanuel Birru wrote:
> Hi,
>
> I am doing a partition coefficient of solute between 1-O
As David said, the reading/writing is done in src/gmxlib/enxio.c, but you could
also read edr files indirectly through gmxdump: that should also give you an
idea of the type of information in those files.
On Aug 22, 2010, at 13:14 , Alan Wilter Sousa da Silva wrote:
> Hi there,
>
> I am tryi
Hi Javier,
I've just committed a fix to the git 4.5 tree. Thanks for reporting this.
Sander
On 13 Aug 2010, at 18:35 , Javier Cerezo wrote:
> Hi all.
>
> I am trying to perform a tpi (test particle insertion) calculation on a
> trajectory generated with mpi_mdrun (gromacs 4.0.7, run in a Beo
Hi Alan,
I've fixed that bug yesterday; the fix should be in yesterday's new beta.
Thanks for the bug report,
Sander
On 31 Jul 2010, at 12:41 , Alan wrote:
> Hi there,
>
> I am trying gmx 4.5 beta on Mac SL 10.6.4 with Fink, doing:
>
> ./configure CPPFLAGS="-I/sw/include" LDFLAGS="-L/sw/lib
Hi Vedat,
If I understand it correctly, you're trying to calculate the free energy of
binding using the LIE method. Then keep in mind that LIE is an approximate
method, and that any accuracy you're trying to achieve is going to be limited
by the approximations of LIE.
Your 'primary literature
Hi Nisha,
Looking at your .mdp, there are some issues that might lead to the behavior
that you describe:
First: you should try to look up the published densities for tip3p water at
300K - they might actually be close to what you get.
Second: your neighbor list cut-off (rlist) might be too small
n any one direction if I had the same
> system setup with independent pressure coupling for each dimension
> (anisotropic)?
>
> Thanks,
> Sapna
>
> On Tue, Mar 30, 2010 at 11:53 AM, Sander Pronk wrote:
> It sounds like the normal thing that would happen if you have a system th
It sounds like the normal thing that would happen if you have a system that has
no shear elastic constant, like a fluid.
In that case, there are no restoring forces against growth of system size in
one coordinate with a concomitant decrease in the other coordinates, so
eventually this should ha
The reason grompp is failing is because there are some things it warns about
(that's quite clear from your error message), and by default it will refuse to
continue unless you explicitly tell it to.
Without the actual warning there is very little we can do to help you.
Sander
On Mar 30, 2010
It looks like you're using gcc; is that right? If so, you're probably
using an old (pre-2005) version. Try a newer one: it's free!
Sander
On Mar 30, 2010, at 13:00 , babu gokul wrote:
> I tried to install git version of Gromacs but when i make the file its shows
> the following error
Signal 11 on Linux is a segmentation fault: either you've hit a bug in mdrun,
or there was some faulty input causing it to crash.
You'll need to look at your md.log to see what happened.
Sander
On Mar 30, 2010, at 12:11 , 程迪 wrote:
> Hi, gmx-users
>
> I just encountered a signal 11 problem. I
Hi Anirban,
You *could* use the configurations in your trajectory to (re)calculate average
energies; by de-coupling your ligand this would get you the average free energy
change per coupling strength change at the point where the ligand is fully
bound.
If you're interested in free energy of bi
Pressure and volume are very slowly converging thermodynamic variables; it
might very well be that your system hasn't converged yet.
You should, however, see progression towards some average volume when you plot
the volume as a function of time with g_energy.
Sander
On Mar 21, 2010, at 16:54
Those derivatives should be zero: the kinetic energy is determined by your
thermostat and shouldn't change as a function of lambda - as is expected for any
normal free energy calculation in constant-temperature ensembles.
Also, the constraint energy doesn't change as lambda changes (only the van de
You got pretty close to what the density of SPC water is at 298K.
According to "Temperature dependence of TIP3P, SPC, and TIP4P water from NPT
Monte Carlo simulations: Seeking temperatures of maximum density", William L.
Jorgensen, Corky Jenson, J. Comp. Chem, Volume 19 Issue 10, Pages 1179 - 118
Hi,
In Linux, one can apparently set processor affinity (or, in this case
core affinity) with 'taskset'. The command is used like this:
taskset <mask> <command>
where <mask> is a hexadecimal bit mask with the cpus to use (in a
format like 0xff, where 0xff enables the first 8 cpus on a system).
For mdr
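As a sketch of that mask arithmetic (the mdrun command line below is just an example):

```shell
# (1 << 8) - 1 = 255 = 0xff: a mask enabling the first 8 cpus
MASK=$(printf '0x%x' $(( (1 << 8) - 1 )))
echo "$MASK"   # prints 0xff
# then pin a run to those cpus, e.g.:
#   taskset "$MASK" mdrun -s topol.tpr
```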
You need to run MPI jobs with 'mpirun'. The correct command would be
mpirun -np 8 mdrun -s topol.tpr
(I don't know what the -N 8 does).
Sander
On 16 Sep 2009, at 18:03 , Jarol E. Molina wrote:
> Hi all
>
> I have a single machine with multiple processors. I want to run mdrun
> with 8 processors and