Re: [gmx-users] Multiple LINCS Warnings in NVT Equilibration

2013-07-09 Thread Matthew Zwier
Try using -DFLEXIBLE in your minimization prior to running NVT. MZ On Tue, Jul 9, 2013 at 10:22 AM, ashish24294 wrote: > I am simulating one urea molecule in water and am struggling with it > nvt.log > > > > > *The .mdp for energy minimi
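For reference, a minimal sketch of what that might look like in the minimization .mdp (the tolerances and step counts here are illustrative, not taken from the thread):

    define     = -DFLEXIBLE   ; use flexible water during minimization only
    integrator = steep        ; steepest-descent minimization
    emtol      = 100.0        ; stop when the max force drops below this (kJ/mol/nm)
    nsteps     = 5000

The FLEXIBLE define only has an effect if the water .itp in use carries the corresponding #ifdef block, as the standard GROMACS SPC/TIP3P topologies do; drop the define again before the NVT run.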

Re: [gmx-users] chiller failure leads to truncated .cpt and _prev.cpt files using gromacs 4.6.1

2013-03-26 Thread Matthew Zwier
Dear Chris, While it's always possible that GROMACS can be improved (or debugged), this smells more like a system-level problem. The corrupt checkpoint files are precisely 1 MiB or 2 MiB, which strongly suggests either 1) GROMACS was in the middle of a buffer flush when it was killed (but the filesy

Re: [gmx-users] Format of .trr file

2013-03-23 Thread Matthew Zwier
The TRR format is based heavily on Unix-y C routines for architecture-independent encoding, so it's not so simple to explain how to read it in plain Fortran (you'd be calling out to system libraries, or potentially manually swapping the endianness of data). Your best bet is probably to wrap the xdr

Re: [gmx-users] Statistical uncertainty in gromacs

2013-02-21 Thread Matthew Zwier
Hi, I don't know of a GROMACS tool to do this. g_analyze may work (see manual page, option "-ee"), if you can generate a time series of A to look at. That said, what you've described is a classic propagation of error problem. If uncertainties are small and likely to be symmetric about the mean, t
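To make the propagation-of-error idea concrete: for small, roughly symmetric uncertainties, sigma_f^2 ≈ sum_i (df/dx_i)^2 * sigma_i^2 over the measured quantities x_i. For the block-averaging route, a hedged sketch of the g_analyze invocation (file names are placeholders):

    g_analyze -f observable_A.xvg -ee errest.xvg

where observable_A.xvg is the time series of A and errest.xvg receives the block-averaged error estimate.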

Re: [gmx-users] What algorithm does g_sas use?

2012-11-07 Thread Matthew Zwier
Hi Guang, Be careful with this tool. It's very fast and very good at what it's designed to do, but it does not appear to be designed to give accurate single-residue SASA (as you will read in the paper on the method employed). Definitely use g_sas for entire proteins or large cavities (it's blazing
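For whole-protein numbers, a typical invocation might look like the following (group selections and file names are placeholders, not from the thread):

    echo "Protein Protein" | g_sas -f traj.xtc -s topol.tpr -o area.xvg

If memory serves, it is the per-residue and per-atom outputs (-or/-oa) that deserve the extra caution, not the total area.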

Re: [gmx-users] something wrong with BlueGene/P

2012-09-28 Thread Matthew Zwier
Hi Kai, A system that is marginally stable frequently succeeds in propagating on one machine and fails on another. I've observed this even between Xeon and Opteron systems, which is a fairly minor architectural difference. Since your system works in NVT but not NPT, this would seem to imply that th

Re: [gmx-users] Re: gmx-users Digest, Vol 96, Issue 146

2012-04-20 Thread Matthew Zwier
Hi Neeru, No, unfortunately I don't know much about conformational flooding, or PLUMED. My own research has focused more on the path sampling family. PLUMED questions come up here and there on this list, so it seems like there are at least a few other people using it. MZ On Thu, Apr 19, 2012 a

Re: [gmx-users] Methods for accelerated MD simulation for Protein-Mg-GTP system in gromacs

2012-04-18 Thread Matthew Zwier
Hi Neeru, Any number of enhanced sampling techniques might do this, but weighted ensemble, forward flux sampling, milestoning, and transition path sampling (all described in Zwier, M. C.; Chong, L. T. Current Opinion in Pharmacology 2010, 10, 745–752) and the nudged elastic band method (Bergonzo

Re: [gmx-users] protein folding / pbc

2012-04-10 Thread Matthew Zwier
Correct. On Tue, Apr 10, 2012 at 1:22 PM, Shi, Huilin wrote: > So if I wanna run a simulation to unfold the protein, I need make a big box > that is large enough so that the unfolded protein is still smaller than the > box in any dimension. Is this correct? > > Thanks. > > Huilin > __
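A hedged sketch of that box setup (the 2.0 nm margin is illustrative; the real requirement is that the fully extended chain never sees its periodic images):

    editconf -f protein.gro -o boxed.gro -bt cubic -d 2.0

Note that -d measures the solute-box distance from the starting structure, so the margin has to be judged against the unfolded, not the folded, dimensions.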

Re: [gmx-users] Different results from identical tpr after MD

2012-04-05 Thread Matthew Zwier
That's an interesting philosophical question. In this case, you'll wind up with a 50 ns trajectory where each configuration is consistent with the thermodynamic ensemble you're approximating. That's as close a definition to "realistic" as I think is worth worrying about. You'd only need to worry

Re: [gmx-users] Editing the functions of amber

2012-03-20 Thread Matthew Zwier
Dear Asaf, I think we need significantly more information in order to help you. What function are you trying to port? What are you trying to do with it (that is, what is the scientific question you're trying to answer)? GROMACS is a clean codebase, and remarkably easy to read for how much comput

[gmx-users] Re: g_mindist on 51-frame trajectory gives 51 minimum distances but <51 atom pairs

2012-01-30 Thread Matthew Zwier
of minimum/maximum distance when the first atom involved happens to be the first atom of the topology. I've created an issue and submitted a patch: http://redmine.gromacs.org/issues/872 Cheers, MZ On Thu, Jan 26, 2012 at 12:32 PM, Matthew Zwier wrote: > Hi all, > > I'm runnin

Re: [gmx-users] g_mindist on 51-frame trajectory gives 51 minimum distances but <51 atom pairs

2012-01-26 Thread Matthew Zwier
, > since for all intents and purposes the array search should be null in > that case? > > On 2012-01-26 12:32:15PM -0500, Matthew Zwier wrote: >> Hi all, >> >> I'm running g_mindist (from 4.5.5) on a slew of very short >> trajectories (51 frames) in order to o

[gmx-users] g_mindist on 51-frame trajectory gives 51 minimum distances but <51 atom pairs

2012-01-26 Thread Matthew Zwier
Hi all, I'm running g_mindist (from 4.5.5) on a slew of very short trajectories (51 frames) in order to obtain both minimum distances and the corresponding atom pairs, using echo 10 11 | g_mindist -nice 10 -f seg.xtc -n $NDX -s $TPR -nopbc -o mindist_pairs.out -xvg none where NDX and TPR are (va

Re: [gmx-users] LINCS warnings and number of cpus

2012-01-16 Thread Matthew Zwier
Ciao, I've seen this behavior (something running fine on one core but failing on multiple cores, or certain multiples of cores) frequently. It's almost always due to an unstable system. Have your user try equilibrating longer, or minimize with flexible water before trying equilibration. You can

Re: [gmx-users] diffusion coefficient: apparently, g_msd messes up the MSD due to PBC

2011-12-09 Thread Matthew Zwier
Hi Ruhollah, A while ago on the list there was a discussion of extreme memory use and possibly-incorrect results from g_msd under some conditions. The problem could be worked around by imaging the trajectory with trjconv first to remove jumps across the box, then running g_msd on the results. Pe
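The workaround mentioned above, as a concrete (hedged) command sequence with placeholder file names:

    trjconv -f traj.xtc -s topol.tpr -pbc nojump -o traj_nojump.xtc
    g_msd   -f traj_nojump.xtc -s topol.tpr -o msd.xvg

i.e. remove the box-crossing jumps first, then compute the MSD on the imaged trajectory.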

Re: [gmx-users] box vectors

2011-12-07 Thread Matthew Zwier
Impressive :) On Wed, Dec 7, 2011 at 1:34 PM, Tsjerk Wassenaar wrote: > #!/usr/bin/env python > > # Python compliant email -- Just save the content :) > > > """ > > Hey :) > > The neatest way is using python to extract them from the XTC file :) > > """ > > from struct   import unpack > import sys

Re: [gmx-users] intel grompp with pathscale mdrun

2011-12-06 Thread Matthew Zwier
Should work just fine. As far as compilation hanging...maybe hand-compile that .o with less aggressive optimization flags, then try "make" again? MZ On Tue, Dec 6, 2011 at 2:20 PM, Chris Neale wrote: > Dear users: > > can I use a .tpr file created with an intel icc compilation of grompp and > t

Re: [gmx-users] NVT Equilibration

2011-10-06 Thread Matthew Zwier
Concur. I just used this approach to equilibrate a box of water/acetonitrile. If your box shrinks too much, you can probably use editconf and genbox to replicate the equilibrated box into a larger box of arbitrary size and shape. MZ On Thu, Oct 6, 2011 at 4:36 PM, Dallas Warren wrote: > Ravi,
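A rough sketch of the replication step (file names are placeholders; check genbox -h for the exact behavior in your version):

    editconf -f solute.gro -o boxed.gro -bt cubic -d 1.2
    genbox   -cp boxed.gro -cs equilibrated_mix.gro -o solvated.gro -p topol.top

where equilibrated_mix.gro is the pre-equilibrated water/cosolvent box being stacked to fill the larger target box.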

Re: [gmx-users] 1-week gromacs test at 112x12 cores

2011-10-06 Thread Matthew Zwier
Hi, If you don't get any takers, you could always just make a huge box of water (which usually dominates explicit-solvent MD costs) and run it. That way, you could scale up the size of the box arbitrarily to achieve good parallelization across that many cores. I'm not sure that'd be scientifical
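If anyone wants to try it, one (hypothetical) way to build such a test system is simply to replicate an equilibrated water box, e.g.

    genconf -f waterbox.gro -nbox 4 4 4 -o bigbox.gro

which tiles 64 copies of the box; the file names and replication counts here are illustrative only.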

Re: [gmx-users] Non zero total charge

2011-09-01 Thread Matthew Zwier
Not 5. 57. Look at the exponent. On Thu, Sep 1, 2011 at 11:46 AM, Munishika Kalia wrote: > Hi, > The genion command i used is > genion -s genion.tpr -o ago_water_ions.gro -nn 6 > I used this to add 6 CL ions and i got the following error: >   System has non-zero total charge: 5.70e+01 > > S
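In other words, the total charge is +57, so six counterions cannot neutralize it. A hedged version of the genion call that would (the ion name depends on the force field, e.g. CL vs. CL-):

    genion -s genion.tpr -o ago_water_ions.gro -p topol.top -nname CL -nn 57

or simply pass -neutral and let genion work out the count itself.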

Re: [gmx-users] GROMACS 4.5.4 keep crashing all the time.

2011-08-18 Thread Matthew Zwier
My apologies; the remainder of the thread in which these suggestions were already proposed and discussed showed up where I didn't expect them. Sorry to repeat what's already been said. MZ On Thu, Aug 18, 2011 at 12:58 PM, Matthew Zwier wrote: > Hi Itamar, > > In my experie

Re: [gmx-users] GROMACS 4.5.4 keep crashing all the time.

2011-08-18 Thread Matthew Zwier
Hi Itamar, In my experience, the 4.5 series appears to be slightly less tolerant of unstable systems than the 4.0 series. Try minimizing and/or equilibrating your system longer. See http://www.gromacs.org/Documentation/Errors and http://www.gromacs.org/Documentation/Terminology/Blowing_Up MZ

Re: [gmx-users] GROMACS 4.5.4 keep crashing all the time.

2011-08-17 Thread Matthew Zwier
Could be a system blowing up, or perhaps a mis-compiled binary. What error messages do you get when the crash occurs? On Tue, Aug 16, 2011 at 9:48 PM, Itamar Kass wrote: > Hi all GROMACS useres and developers, > > I am interesting in simulating a small protein (~140 aa) in water, with and > wit

Re: [gmx-users] angular velocities in gromacs?

2011-08-04 Thread Matthew Zwier
Hi Tom, TRR files contain velocities for individual atoms as a function of time, which is all the velocity information there is in an MD simulation. You can reduce that information to rotation about a principal molecular axis, or rotation with respect to the simulation box, or whatever you need.
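For example, the raw per-atom velocities can be dumped for post-processing with something like (a sketch; file names are placeholders):

    g_traj -f traj.trr -s topol.tpr -n index.ndx -ov veloc.xvg

and the angular velocity about whatever axis you care about is then computed from those Cartesian velocities in your own analysis script.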

Re: [gmx-users] XTC I/O error in replica exchange

2011-07-14 Thread Matthew Zwier
Hi Sanku, How large is your XTC file? This error also shows up when the XTC file grows too large for the filesystem it's on (frequently 16 GB for ext3 filesystems on Linux, for example). MZ On Thu, Jul 14, 2011 at 1:06 PM, Sanku M wrote: > Hi, >   I am running a replica exchange simulation usi
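Two quick, generic shell checks that would confirm this, assuming the run directory is the current one:

    ls -lh traj.xtc      # current size of the trajectory
    df -T .              # filesystem type underneath it

If the file is sitting right at a power-of-two boundary on an older filesystem, the size limit is the likely culprit.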

Re: [gmx-users] g_msd bug

2011-07-07 Thread Matthew Zwier
I just experienced this myself. The problem appeared to manifest itself when I was using -mol on a molecule that straddled the box wall. Memory usage was extremely high and the resulting MSD plot did not show any linear behavior. Imaging the trajectory with -pbc nojump made g_msd's memory usage

Re: [gmx-users] Gromacs compilation on AMD multicore

2011-07-05 Thread Matthew Zwier
Sorry about that. The default options are nearly optimal, and the difference between a modern (4.4 or 4.5 series) GCC and the Intel compilers is only a couple of percent. Just be sure to have FFTW available.
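For the record, a hedged sketch of a 4.5-era autoconf build that picks up FFTW (flag names are from memory and should be checked against ./configure --help for your version):

    ./configure --with-fft=fftw3
    make && make install

The point being that beyond pointing the build at FFTW, there is little to gain from compiler tuning.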

Re: [gmx-users] Gromacs compilation on AMD multicore

2011-07-05 Thread Matthew Zwier
Hi Anthony, The default options are nearly optimal, and the difference between a modern (4.4 or 4.5) GCC On Tue, Jul 5, 2011 at 10:01 AM, Anthony Cruz Balberdi wrote: > Dear Users: > > We recently received our new computer.  This computer have 4 multicore > Opteron AMD cpus and I am planning to

Re: [gmx-users] Fwd: Parallellization problem

2011-06-20 Thread Matthew Zwier
I've had bad luck with parallel minimizations, particularly for the 4.0 series of GROMACS. Either domain decomposition fails or numeric problems appear (SETTLE failures and the like), but disappear when run serially. Minimization tends to be low cost compared to equilibration anyway, so my soluti

Re: [gmx-users] The problem of .trr file size limit?

2011-05-06 Thread Matthew Zwier
The filesystem you're storing on may not allow single files larger than 16.0 GB. What filesystem are you using? ext3? Also, do you really need to write such a large TRR file? Can you store to the TRR file less frequently (for restarts, etc) and store to XTC instead? You'll get *much* more info
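A hedged sketch of the corresponding .mdp output settings (the intervals are illustrative only):

    nstxout    = 50000   ; full-precision TRR frames, kept sparse for restarts
    nstvout    = 50000
    nstfout    = 0
    nstxtcout  = 500     ; frequent, compressed XTC frames for analysis

i.e. write the heavyweight TRR rarely and put the analysis-resolution output into XTC.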

Re: [gmx-users] pdb2gmx segmentation fault

2011-04-22 Thread Matthew Zwier
Nice catch on the readdir_r(). I wonder if the developers would appreciate a bug report and patch for your fix. On Fri, Apr 22, 2011 at 1:05 PM, Ragothaman Yennamalli wrote: > Hi all, > With the help of my colleague Nathan Weeks I am able to run pdb2gmx and all > other commands successfully . He

Re: [gmx-users] pdb2gmx segmentation fault

2011-04-20 Thread Matthew Zwier
e AMD opteron CPUs. > > On Wed, Apr 20, 2011 at 11:34 AM, Matthew Zwier wrote: >> >> I've never seen the -D_POSIX_PTHREAD_SEMANTICS before.  What caused >> you to need to define that flag? >> >> Also...what hardware (cpu) and operating system (linux? what distro?

Re: [gmx-users] pdb2gmx segmentation fault

2011-04-20 Thread Matthew Zwier
I've never seen the -D_POSIX_PTHREAD_SEMANTICS before. What caused you to need to define that flag? Also...what hardware (cpu) and operating system (linux? what distro? what version?) are you using? Matt Z.

Re: [gmx-users] Possible free energy bug?

2011-03-10 Thread Matthew Zwier
ed in single precision using standard options through > autoconf.  The cmake build system still does not work on our cluster due to > several outstanding bugs. > > -Justin > > Matthew Zwier wrote: >> >> Dear Justin, >> >> We recently experienced a similar prob

Re: [gmx-users] Possible free energy bug?

2011-03-10 Thread Matthew Zwier
Dear Justin, We recently experienced a similar problem (LINCS errors, step*.pdb files), and then GROMACS usually segfaulted. The cause was a miscompiled copy of GROMACS. Another member of our group had compiled GROMACS on an Intel Core2 quad (gcc -march=core2) and tried to run the copy without m

[gmx-users] Target implementation date for gb_saltconc?

2011-02-18 Thread Matthew Zwier
Dear GROMACS developers and users, Our research group is interested in performing GBSA simulations with GROMACS, but we would need to perform them with a nonzero salt concentration. I was wondering if there are plans to implement the gb_saltconc parameter, and if so, when it might become availabl