[gmx-users] dihedral problem when converging Charmm FF into Gromacs format

2011-08-22 Thread Baofu Qiao

Dear all,

I have run into a problem understanding how the Charmm FF was converted into 
Gromacs format for an RNA system. In ../charmm27.ff/ffnabonded.itp 
(lines 1084-1088), it says:

HN5  ON5  CN7B  CN7B  9  180.0  0.0000  6
HN5  ON5  CN7B  CN7B  9    0.0  3.3472  3
HN5  ON5  CN7B  CN7B  9  180.0  0.0000  2
HN5  ON5  CN7B  CN7B  9  180.0  0.0000  1
HN5  ON5  CN7B  CN7   9    0.0  1.2552  3
HN5  ON5  CN7B  CN7   9    0.0  0.0000  1

However, the original Charmm27 FF parameters are given as 
(Supporting Information of [Denning, Priyakumar, Nilsson & MacKerell Jr., 
J. Comput. Chem. 2011, 32, 1929]):

Torsion   Vn/2 (c27)  Multiplicity   Phase
HN5   ON5   CN7B   CN7B   0.800   3   0.0
HN5   ON5   CN7B   CN7B   0.000   2   0.0
HN5   ON5   CN7B   CN7B   0.000   1   0.0
HN5   ON5   CN7B   CN7 0.300   3   0.0
HN5   ON5   CN7B   CN7 0.000   2   180.0
HN5   ON5   CN7B   CN7 0.000   1   0.0

Why is there a term with multiplicity 6 for HN5-ON5-CN7B-CN7B in 
Gromacs, but not in the original Charmm FF? And, similarly, why is there a 
term with multiplicity 2 for HN5-ON5-CN7B-CN7 in the original Charmm FF, 
but not in Gromacs? Thanks a lot!


BTW: the Phase terms are also not equal to each other!
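(Converting the units by hand, 0.800 kcal/mol x 4.184 = 3.3472 kJ/mol and 
0.300 kcal/mol x 4.184 = 1.2552 kJ/mol, so the non-zero force constants do 
agree; it is only the extra multiplicity terms and the phases that puzzle me.)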

best wishes,
Baofu Qiao


Re: [gmx-users] Connecting sidechains in gromacs file

2011-07-15 Thread Baofu Qiao

are you sure that your topology file is correctly written?

On 07/15/2011 11:34 AM, Zack Scholl wrote:

Hi all -

I'm running a simulation that has an implicit solvent.  I ran my
simulation and it seems that everything not connected to the backbone
has floated off into space.  Is there a way to define connections so
that side chains stay connected?  Has anyone had this problem before?

Thanks,

Sincerely,
Zack Scholl




Re: [gmx-users] Residence time and trjorder

2011-07-12 Thread Baofu Qiao

HI Carla,

I wrote a similar code; see attached. It was written for my own system, 
though, so you will need to modify it accordingly.


regards,
Baofu Qiao

On 07/12/2011 02:04 PM, Carla Jamous wrote:

Dear gmx-users,

I used trjorder in order to study the water molecules that are closer 
than 5 A from my protein.


trjorder -s structure.tpr -f traj.xtc -n prot_water.ndx -o ordered.pdb 
-nshell nshell_.xvg -r 0.5 -b 0 -e 5000


But now I need to analyse the residence time of a water molecule: I mean 
counting the number of frames in which a given water molecule stays within 
5 Å of the protein and dividing this number by the total number of 
conformations, in order to get a percentage value.


Is there any gromacs tool able to do this calculation, or does anyone have 
an idea of how to do it?


Thank you

Carla


/*
 * $Id: gmx_density.c,v 1.9.2.1 2009/01/18 23:06:27 lindahl Exp $
 * 
 *This source code is part of
 * 
 * G   R   O   M   A   C   S
 * 
 *  GROningen MAchine for Chemical Simulations
 * 
 *VERSION 3.2.0
 * Written by David van der Spoel, Erik Lindahl, Berk Hess, and others.
 * Copyright (c) 1991-2000, University of Groningen, The Netherlands.
 * Copyright (c) 2001-2004, The GROMACS development team,
 * check out http://www.gromacs.org for more information.

 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License
 * as published by the Free Software Foundation; either version 2
 * of the License, or (at your option) any later version.
 * 
 * If you want to redistribute modifications, please consider that
 * scientific software is very special. Version control is crucial -
 * bugs must be traceable. We will be happy to consider code for
 * inclusion in the official distribution, but derived work must not
 * be called official GROMACS. Details are found in the README & COPYING
 * files - if they are missing, get the official version at www.gromacs.org.
 * 
 * To help us fund GROMACS development, we humbly ask that you cite
 * the papers on the package - you can find them in the top README file.
 * 
 * For more info, check our website at http://www.gromacs.org
 * 
 * And Hey:
 * Green Red Orange Magenta Azure Cyan Skyblue
 */
#ifdef HAVE_CONFIG_H
#include <config.h>
#endif
#include <math.h>
#include <ctype.h>

#include "sysstuff.h"
#include "string.h"
#include "string2.h"
#include "typedefs.h"
#include "smalloc.h"
#include "macros.h"
#include "gstat.h"
#include "vec.h"
#include "xvgr.h"
#include "pbc.h"
#include "copyrite.h"
#include "futil.h"
#include "statutil.h"
#include "index.h"
#include "tpxio.h"
#include "physics.h"

int main(int argc,char *argv[])
{
  const char *desc[] = {
"This is to calculate the residence time of particles in certain slab region."
  };

  output_env_t  oenv;
  static int  axis = 2;  /* normal to memb. default z  */
  static const char *axtitle="Z"; 
  static real ZMin=-1.0, ZMax=-1.0;  
  t_pargs pa[] = {
{ "-d", FALSE, etSTR, {&axtitle}, 
  "Dimension for Residence Time calculation: X, Y or Z." },
{ "-z0",FALSE, etREAL, {&ZMin},
  "Minimum of Z coordinate (nm). <=0  means 0."},
{ "-z1",FALSE, etREAL, {&ZMax},
  "Maximum of Z coordinate (nm). <=0 measn BOX[Z][Z]."}
   };
#define NPA asize(pa)
  gmx_bool  bTop;

  t_filenm  fnm[] = {
{ efTRX, "-f", NULL,  ffREAD },  
{ efTPX, "-s", NULL,  ffREAD },	
{ efNDX, "-n", NULL,  ffOPTRD }, 
{ efXVG,"-o","residence",ffWRITE }, 	
  };
#define NFILE asize(fnm)
 
  FILE *fp_resid;
  t_topology   *top;   
  atom_id  *index;  
  rvec *x;
  matrix   box;
  t_trxstatus  *status;
  gmx_rmpbc_t  gpbc=NULL;
  char *grpname,title[256],*ylabel=NULL; 
  int  ePBC,natoms,gnx,nr_frames=0;
  int  ax1=0, ax2=0;
  real z;
  int i,*lifetime,*Cr,length=1;  
  real t,t0,t1,dt=0;
 

  time_t tstart,tend;
  int    all_time,hour,min,sec;

  //CopyRight(stderr,argv[0]);
  parse_common_args(&argc,argv,PCA_CAN_VIEW | PCA_CAN_TIME | PCA_BE_NICE,
		NFILE,fnm,asize(pa),pa,asize(desc),desc,0,NULL,&oenv);

  /* Calculate axis */
  axis = toupper(axtitle[0]) - 'X';
  switch(axis) {
  case 0:ax1=1; ax2=2;  break;
  case 1:ax1=0; ax2=2;  break;
  case 2:ax1=0; ax2=1;  break;
  default:   gmx_fatal(FARGS,"\nInvalid axes. Terminating...\n\n");
  }

  //bTop = read_tps_conf(ftp2fn(efTPX,NFILE,fnm),title,&top,&ePBC,&x,NULL,box,TRUE);
  top = read_top(ftp2fn(efTPX,NFILE,fnm),&ePBC); /* read topology file */

  fprintf(st

[gmx-users] residue number starting from not 1

2011-05-26 Thread Baofu Qiao

Hi all,

My system contains water (SOL) + polyelectrolyte (PSS with DP 30) + 
sodium ions (Na). Before the EM, I ran "editconf -f -noc -resnr 1 -o" to 
set the resnr. After the EM, the resnr has become wrong, although the 
atomnr is still correct.


1. the resnr of the first water starts from "DP+1", not from 1 (I have 
checked some other systems with DP=15 and DP=12).
2. the resnr of the first PSS residue of ALL the PSS chains starts from 1, 
instead of continuing from the resnr of the previous water/PSS residue +1.
3. the resnr of the first Na residue is the resnr of the last water 
residue +1, not the resnr of the previous PSS residue +1.


It seems that the problem is caused by the polymer. Any comment or 
suggestion is sincerely welcome!


Gromacs 4.5.3

0.1MNaCl + DP30
380329
   31SOL OW1   4.323  10.990  14.598 -0.2147  0.1640 -0.0659
   31SOLHW12   4.280  11.043  14.525 -0.8319  3.1315  2.3305
   31SOLHW23   4.253  10.953  14.659  0.3787 -0.3030  0.3439
   32SOL OW4   0.863   3.787  10.720  0.8624 -0.6556  0.5765
   32SOLHW15   0.785   3.827  10.671  1.9981  0.4779 -0.3465
   32SOLHW26   0.904   3.716  10.663  0.9971 -0.6900  0.7175
  
..
99316SOL OW97856   0.834   8.769  22.747 -0.2436 -0.7048  0.2608
99316SOLHW197857   0.931   8.745  22.754 -0.0076  0.0674 -0.2549
99316SOLHW297858   0.819   8.860  22.787 -0.9561 -1.2962  1.3663
1PSSCAA97859   9.971   3.747   8.913  0.0608 -0.0677 -0.0792
1PSS   HAA197860   9.949   3.849   8.936  0.1767 -0.4602  1.7875
.
 30PSS   HAB298430   8.204   6.232  10.780  0.9449 -0.2721  0.7855
   1PSSCAA98431   1.197   5.317  19.589 -0.1509 -0.1143 -0.2474
.
 30PSS   HAB299002   1.549   1.832  20.579  1.3918  3.3370  1.4906
   1PSSCAA99003   5.657   5.376   6.520 -0.1073 -0.0351 -0.8729
...
   30PSS   HAB2 9298   3.526   0.458  16.652 -1.6044  1.1441  1.3393
99317Na  Na 9299  12.146   5.676  14.924 -0.1802 -0.9861  0.2229
99318Na  Na 9300  11.363  11.645   8.887  0.4524 -0.4644 -0.0401
...
..
********

regards,
Baofu Qiao


Re: [gmx-users] memory problem of g_hbond?

2011-05-23 Thread Baofu Qiao


Thanks a lot, Erik!

But could you specify how to build the index of the water in a certain region, 
say 0
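(So far the only thing I can think of is the selection syntax, something 
along the lines of: g_select -s pre.tpr -f traj.xtc -select 'resname SOL and 
z < 3' -on slab.ndx -- but I have not tested it, and I am not sure how it 
behaves when the selected waters change from frame to frame.)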



On 05/23/2011 11:05 AM, Erik Marklund wrote:

Baofu Qiao skrev 2011-05-23 10.47:

Hi all,

Very recently, I met a problem when using g_hbond (gromacs 4.5.3 
with the bugfix applied)

*
Select a group: Selected 1: 'SOL'
Select a group: Selected 1: 'SOL'
Calculating hydrogen bonds in SOL (98280 atoms)
Found 32760 donors and 32760 acceptors
*Segmentation fault*
*
I checked the code of gmx_hbond.c. The problem comes from the function 
"mk_hbmap". However, g_hbond does not report the error 
'gmx_fatal(FARGS,"Could not allocate enough memory for hbmap")' before 
giving the "Segmentation fault". My first guesses are that 1) the function 
does not work correctly, or 2) there is not enough memory for 32760 donors 
and 32760 acceptors.


What I really want to calculate is the HB correlation function in a 
slab region about 3 nm thick, which contains only ~3000 waters. Can someone 
give me some suggestions? Thanks a lot!


best regards,
Baofu Qiao

That sounds like a memory problem indeed, and it could be outside the 
control of g_hbond. From the manpages of malloc:


"By default, Linux follows an  optimistic  memory  allocation  
strategy. This  means  that  when malloc() returns non-NULL there is 
no guarantee that the memory really is available. This is a really bad 
bug."


g_hbond checks for NULL pointers to decide whether snew() was 
successful or not. Hence the mentioned bug could be the culprit. That 
said, the hbmap of your system requires 32760^2 pointers (about 1.07e9, 
i.e. > 8 GB on 64-bit systems), which in turn point to existence arrays 
whose size scales with trajectory length. Hence you will easily run out of 
memory. I suggest that you build the acf for a subset of your system, 
e.g. 1000 waters. The acf will converge more slowly, but have the same 
features. You can do this many times and take an average for better 
statistics.


Cheers,




[gmx-users] memory problem of g_hbond?

2011-05-23 Thread Baofu Qiao


  
  
Hi all,

Very recently, I met a problem when using g_hbond (gromacs 4.5.3
with the bugfix applied)
*
Select a group: Selected 1: 'SOL'
Select a group: Selected 1: 'SOL'
Calculating hydrogen bonds in SOL (98280 atoms)
Found 32760 donors and 32760 acceptors
Segmentation fault
*
I checked the code of gmx_hbond.c. The problem comes from the function
"mk_hbmap". However, g_hbond does not report the error
'gmx_fatal(FARGS,"Could not allocate enough memory for hbmap")' before
giving the "Segmentation fault". My first guesses are that 1) the
function does not work correctly, or 2) there is not enough memory for
32760 donors and 32760 acceptors.

What I really want to calculate is the HB correlation function in a
slab region about 3 nm thick, which contains only ~3000 waters. Can
someone give me some suggestions? Thanks a lot!

best regards,
Baofu Qiao

  


Re: [gmx-users] Important bugfix for g_hbond

2011-05-17 Thread Baofu Qiao

Hi Erik and others,

After the bugfix, I still have one problem: the average number of HBs per 
SPC/E water in a pure water system is calculated by g_hbond to be ~1.8, 
whereas it has previously been reported to be 3.6 (Kumar, Schmidt & Skinner, 
JCP 2007, 126, 204107). I have no idea where the difference comes from. 
Could you provide some help? Thanks a lot!


Some info:
**
4000 SPC/E water simulated at T=298K using NTP;
"Average number of hbonds per timeframe 7217.386 out of 8e+06 possible" 
was reported by (echo -e "SOL\nSOL"| g_hbond2 -f T298 -s T298 -n 
-merge). Note that the value of 7217 is independent of -merge/-nomerge.
I am using Gromacs 4.5.3 with your bugfix; Gromacs 4.0.7 gives a 
similar value of about 7210, with either -merge or -nomerge.

**
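(One thing I notice: 2 x 7217 / 4000 is about 3.6, so perhaps the difference 
is just whether each hydrogen bond is counted once per bond or once per 
participating water molecule -- but I am not sure.)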

Any suggestion is appreciated!

Best wishes,
Baofu Qiao

On 05/10/2011 03:08 PM, Erik Marklund wrote:

Hi,

There have been reports about inconsistencies between older (<= 
4.0.7?) and newer versions of g_hbond, where the older ones seem to have 
been more reliable. I found and killed the bug that caused the newer 
versions to miscount the hbonds. Check out release-4-5-patches to get 
the bugfix, or patch it yourself by commenting out line 1497, which 
reads "return hbNo;":


 if (bBox){
     if (d>a && bMerge && (bContact || isInterchangable(hb, d, a, grpd, grpa))) {
         /* acceptor is also a donor and vice versa? */
         /* return hbNo; */
         daSwap = TRUE; /* If so, then their history should be filed with donor and acceptor swapped. */
     }

Simple as that.




--

 Dr. Baofu Qiao
 Institute for Computational Physics
 Universität Stuttgart
 Pfaffenwaldring 27
 70569 Stuttgart

 Tel: +49(0)711 68563607
 Fax: +49(0)711 68563658



Re: [gmx-users] Re: ionic liquids

2011-04-30 Thread Baofu Qiao


If you are sure about the force field you are using (the one from Lopes 
et al.?), build a dilute system and then run long enough simulations in 
the NPT ensemble, possibly using simulated annealing.
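(In the .mdp this would be something like annealing = single together with 
annealing_npoints, annealing_time and annealing_temp, e.g. cooling from 
~500 K down to the target temperature over a few ns -- just an illustration, 
the exact schedule depends on the system.)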


On 2011-4-30 16:44, Vitaly Chaban wrote:



I am trying to create an ionic-liquid system comprising
1-butyl-3-methyl-imidazolium (bmim) as cation and BF4- (bf4) as
anion. I have generated the system using the following command:

genbox_d -cp bmim.gro -ci bf4.gro -nmol 125 -try 200 -o .pdb

It created a system of 250 molecules in total, 125 of each, but after
equilibration the density does not match the reported value. I tried the
"simulated annealing" procedure, still without much difference. I wonder
whether there is any other way to generate an ionic liquid system, or how
people do it to reproduce the reported results. I would appreciate any
quick suggestion.



You should report concrete values of density, topology and parameters, 
if you want help.


--
Dr. Vitaly V. Chaban, Department of Chemistry
University of Rochester, Rochester, New York 14627-0216



Re: [gmx-users] changing the velocity in trajectory file

2010-11-29 Thread Baofu Qiao

Why not use a different random seed (gen_vel = yes; gen_seed = -1 in the
.mdp file) to generate the velocities?
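For example (just a sketch, adjust the file names): dump the frame you want
with "trjconv -s topol.tpr -f traj.trr -dump 5000 -o start.gro", and then run
"grompp -f md.mdp -c start.gro -p topol.top -o run01.tpr" once per independent
run with gen_vel = yes and gen_seed = -1 in md.mdp, so that every .tpr gets
its own starting velocities.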


On 11/29/2010 02:55 PM, sreelakshmi ramesh wrote:
> Dear all,
> Could anybody help me out with the following issue? I have a
> trajectory file, and I have to use a particular frame of that file as the
> input to continue the simulation, for 20 independent simulations. Since
> the starting coordinates and velocities are the same for all 20
> simulations, they produce the same output trajectory. Is there a way to
> change the velocities by a small value in the input trajectory file, so
> that I can get a different trajectory from the same input file each time?
>
> regards,
> sree.



Re: [gmx-users] Re: Failed to lock: pre.log (Gromacs 4.5.3): SOLVED

2010-11-26 Thread Baofu Qiao

Hi all,

What Roland said is right! The lustre file system causes the "lock" 
problem. Now I copy all the files to a folder under /tmp and then run the 
continuation there. It works!


Thanks!

regards,


On 2010-11-26 22:53, Florian Dommert wrote:


To make things short: the file system used is lustre.

/Flo

On 11/26/2010 05:49 PM, Baofu Qiao wrote:

Hi Roland,

The output of "mount" is :
/dev/mapper/grid01-root on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/md0 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
172.30.100.254:/home on /home type nfs
(rw,tcp,nfsvers=3,actimeo=10,hard,rsize=65536,wsize=65536,timeo=600,addr=172.30.100.254)
172.30.100.210:/opt on /opt type nfs
(rw,tcp,nfsvers=3,actimeo=10,hard,rsize=65536,wsize=65536,timeo=600,addr=172.30.100.210)
172.30.100.210:/var/spool/torque/server_logs on
/var/spool/pbs/server_logs type nfs
(ro,tcp,nfsvers=3,actimeo=10,hard,rsize=65536,wsize=65536,timeo=600,addr=172.30.100.210)
none on /ipathfs type ipathfs (rw)
172.31.100@o2ib,172.30.100@tcp:172.31.100@o2ib,172.30.100@tcp:/lprod
on /lustre/ws1 type lustre (rw,noatime,nodiratime)
172.31.100@o2ib,172.30.100@tcp:172.31.100@o2ib,172.30.100@tcp:/lbm
on /lustre/lbm type lustre (rw,noatime,nodiratime)
172.30.100.219:/export/necbm on /nfs/nec type nfs
(ro,bg,tcp,nfsvers=3,actimeo=10,hard,rsize=65536,wsize=65536,timeo=600,addr=172.30.100.219)
172.30.100.219:/export/necbm-home on /nfs/nec/home type nfs
(rw,bg,tcp,nfsvers=3,actimeo=10,hard,rsize=65536,wsize=65536,timeo=600,addr=172.30.100.219)


On 11/26/2010 05:41 PM, Roland Schulz wrote:

Hi Baofu,

could you provide more information about the file system?
The command "mount" provides the file system used. If it is a
network-file-system than the operating system and file system used on the
file server is also of interest.

Roland

On Fri, Nov 26, 2010 at 11:00 AM, Baofu Qiao  wrote:



Hi Roland,

Thanks a lot!

OS: Scientific Linux 5.5. But the system to store data is called as
WORKSPACE, different from the regular hardware system. Maybe this is the
reason.

I'll try what you suggest!

regards,
Baofu Qiao


On 11/26/2010 04:07 PM, Roland Schulz wrote:

Baofu,

what operating system are you using? On what file system do you try to store
the log file? The error (should) mean that the file system you use doesn't
support locking of files.
Try to store the log file on some other file system. If you want you can
still store the (large) trajectory files on the same file system.

Roland

On Fri, Nov 26, 2010 at 4:55 AM, Baofu Qiao  wrote:

Hi Carsten,

Thanks for your suggestion! But my simulation will be run for about 200 ns
at 10 ns per day (24 hours is the maximum duration of a single job on the
cluster I am using), which would generate about 20 trajectory files!

Can anyone find the reason for this error?

regards,
Baofu Qiao

On 11/26/2010 09:07 AM, Carsten Kutzner wrote:

Hi,

as a workaround you could run with -noappend and later
concatenate the output files. Then you should have no
problems with locking.

Carsten

On Nov 25, 2010, at 9:43 PM, Baofu Qiao wrote:

Hi all,

I just recompiled GMX 4.0.7. Such an error doesn't occur there. But 4.0.7 is
about 30% slower than 4.5.3, so I would really appreciate it if anyone can
help me with this!

best regards,
Baofu Qiao

On 2010-11-25 20:17, Baofu Qiao wrote:

Hi all,

I got the error message when I am extending the simulation using the
following command:

mpiexec -np 64 mdrun -deffnm pre -npme 32 -maxh 2 -table table -cpi pre.cpt -append

The previous simulation succeeded. I wonder why pre.log is locked, and what
the strange warning "Function not implemented" means?

Any suggestion is appreciated!

*
Getting Loaded...
Reading file pre.tpr, VERSION 4.5.3 (single precision)

Reading checkpoint file pre.cpt generated: Thu Nov 25 19:43:25 2010

---
Program mdrun, VERSION 4.5.3
Source code file: checkpoint.c, line: 1750

Fatal error:
Failed to lock: pre.log. Function not implemented.
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
---

"It Doesn't Have to Be Tip Top" (Pulp Fiction)

Error on node 0, will try to stop all the nodes
Halting parallel program mdrun on CPU 0 out of 64

gcq#147: "It Doesn't Have to Be Tip Top" (Pulp Fiction)

--
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with e

Re: [gmx-users] Re: Failed to lock: pre.log (Gromacs 4.5.3)

2010-11-26 Thread Baofu Qiao
Hi Roland,

The output of "mount" is :
/dev/mapper/grid01-root on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/md0 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
172.30.100.254:/home on /home type nfs
(rw,tcp,nfsvers=3,actimeo=10,hard,rsize=65536,wsize=65536,timeo=600,addr=172.30.100.254)
172.30.100.210:/opt on /opt type nfs
(rw,tcp,nfsvers=3,actimeo=10,hard,rsize=65536,wsize=65536,timeo=600,addr=172.30.100.210)
172.30.100.210:/var/spool/torque/server_logs on
/var/spool/pbs/server_logs type nfs
(ro,tcp,nfsvers=3,actimeo=10,hard,rsize=65536,wsize=65536,timeo=600,addr=172.30.100.210)
none on /ipathfs type ipathfs (rw)
172.31.100@o2ib,172.30.100@tcp:172.31.100@o2ib,172.30.100@tcp:/lprod
on /lustre/ws1 type lustre (rw,noatime,nodiratime)
172.31.100@o2ib,172.30.100@tcp:172.31.100@o2ib,172.30.100@tcp:/lbm
on /lustre/lbm type lustre (rw,noatime,nodiratime)
172.30.100.219:/export/necbm on /nfs/nec type nfs
(ro,bg,tcp,nfsvers=3,actimeo=10,hard,rsize=65536,wsize=65536,timeo=600,addr=172.30.100.219)
172.30.100.219:/export/necbm-home on /nfs/nec/home type nfs
(rw,bg,tcp,nfsvers=3,actimeo=10,hard,rsize=65536,wsize=65536,timeo=600,addr=172.30.100.219)


On 11/26/2010 05:41 PM, Roland Schulz wrote:
> Hi Baofu,
>
> could you provide more information about the file system?
> The command "mount" provides the file system used. If it is a
> network-file-system than the operating system and file system used on the
> file server is also of interest.
>
> Roland
>
> On Fri, Nov 26, 2010 at 11:00 AM, Baofu Qiao  wrote:
>
>> Hi Roland,
>>
>> Thanks a lot!
>>
>> OS: Scientific Linux 5.5. But the system to store data is called as
>> WORKSPACE, different from the regular hardware system. Maybe this is the
>> reason.
>>
>> I'll try what you suggest!
>>
>> regards,
>> Baofu Qiao
>>
>> On 11/26/2010 04:07 PM, Roland Schulz wrote:
>>> Baofu,
>>>
>>> what operating system are you using? On what file system do you try to store
>>> the log file? The error (should) mean that the file system you use doesn't
>>> support locking of files.
>>> Try to store the log file on some other file system. If you want you can
>>> still store the (large) trajectory files on the same file system.
>>>
>>> Roland
>>>
>>> On Fri, Nov 26, 2010 at 4:55 AM, Baofu Qiao  wrote:
>>>
>>>> Hi Carsten,
>>>>
>>>> Thanks for your suggestion! But because my simulation will be run for
>>>> about 200ns, 10ns per day (24 hours is the maximum duration for one
>>>> single job on the Cluster I am using), which will generate about 20
>>>> trajectories!
>>>>
>>>> Can anyone find the reason causing such error?
>>>>
>>>> regards,
>>>> Baofu Qiao
>>>>
>>>> On 11/26/2010 09:07 AM, Carsten Kutzner wrote:
>>>>> Hi,
>>>>>
>>>>> as a workaround you could run with -noappend and later
>>>>> concatenate the output files. Then you should have no
>>>>> problems with locking.
>>>>>
>>>>> Carsten
>>>>>
>>>>> On Nov 25, 2010, at 9:43 PM, Baofu Qiao wrote:
>>>>>
>>>>>> Hi all,
>>>>>>
>>>>>> I just recompiled GMX4.0.7. Such error doesn't occur. But 4.0.7 is about
>>>>>> 30% slower than 4.5.3. So I really appreciate if anyone can help me with it!
>>>>>>
>>>>>> best regards,
>>>>>> Baofu Qiao
>>>>>>
>>>>>> On 2010-11-25 20:17, Baofu Qiao wrote:
>>>>>>
>>>>>>> Hi all,
>>>>>>>
>>>>>>> I got the error message when I am extending the simulation using the

Re: [gmx-users] Re: Failed to lock: pre.log (Gromacs 4.5.3)

2010-11-26 Thread Baofu Qiao
Hi Roland,

Thanks a lot!

OS: Scientific Linux 5.5. But the system to store data is called as
WORKSPACE, different from the regular hardware system. Maybe this is the
reason.

I'll try what you suggest!

regards,
Baofu Qiao


On 11/26/2010 04:07 PM, Roland Schulz wrote:
> Baofu,
>
> what operating system are you using? On what file system do you try to store
> the log file? The error (should) mean that the file system you use doesn't
> support locking of files.
> Try to store the log file on some other file system. If you want you can
> still store the (large) trajectory files on the same file system.
>
> Roland
>
> On Fri, Nov 26, 2010 at 4:55 AM, Baofu Qiao  wrote:
>
>> Hi Carsten,
>>
>> Thanks for your suggestion! But because my simulation will be run for
>> about 200ns, 10ns per day (24 hours is the maximum duration for one
>> single job on the Cluster I am using), which will generate about 20
>> trajectories!
>>
>> Can anyone find the reason causing such error?
>>
>> regards,
>> Baofu Qiao
>>
>> On 11/26/2010 09:07 AM, Carsten Kutzner wrote:
>>> Hi,
>>>
>>> as a workaround you could run with -noappend and later
>>> concatenate the output files. Then you should have no
>>> problems with locking.
>>>
>>> Carsten
>>>
>>> On Nov 25, 2010, at 9:43 PM, Baofu Qiao wrote:
>>>
>>>> Hi all,
>>>>
>>>> I just recompiled GMX4.0.7. Such error doesn't occur. But 4.0.7 is about
>>>> 30% slower than 4.5.3. So I really appreciate if anyone can help me with it!
>>>>
>>>> best regards,
>>>> Baofu Qiao
>>>>
>>>> On 2010-11-25 20:17, Baofu Qiao wrote:
>>>>
>>>>> Hi all,
>>>>>
>>>>> I got the error message when I am extending the simulation using the
>>>>> following command:
>>>>> mpiexec -np 64 mdrun -deffnm pre -npme 32 -maxh 2 -table table -cpi
>>>>> pre.cpt -append
>>>>>
>>>>> The previous simuluation is succeeded. I wonder why pre.log is locked,
>>>>> and the strange warning of "Function not implemented"?
>>>>>
>>>>> Any suggestion is appreciated!
>>>>>
>>>>> *
>>>>> Getting Loaded...
>>>>> Reading file pre.tpr, VERSION 4.5.3 (single precision)
>>>>>
>>>>> Reading checkpoint file pre.cpt generated: Thu Nov 25 19:43:25 2010
>>>>>
>>>>> ---
>>>>> Program mdrun, VERSION 4.5.3
>>>>> Source code file: checkpoint.c, line: 1750
>>>>>
>>>>> Fatal error:
>>>>> Failed to lock: pre.log. Function not implemented.
>>>>> For more information and tips for troubleshooting, please check the GROMACS
>>>>> website at http://www.gromacs.org/Documentation/Errors
>>>>> ---
>>>>>
>>>>> "It Doesn't Have to Be Tip Top" (Pulp Fiction)
>>>>>
>>>>> Error on node 0, will try to stop all the nodes
>>>>> Halting parallel program mdrun on CPU 0 out of 64
>>>>>
>>>>> gcq#147: "It Doesn't Have to Be Tip Top" (Pulp Fiction)
>>>>>
>>>>> --
>>>>> MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
>>>>> with errorcode -1.
>>>>>
>>>>> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
>>>>> You may or may not see output from other processes, depending on
>>>>> exactly when Open MPI kills them.
>>>>> --
>>>>> --
>>>>> mpiexec has exited due to process rank 0 with PID 32758 on



Re: [gmx-users] Re: Failed to lock: pre.log (Gromacs 4.5.3) Neither of 4.5.1, 4.5.2 and 4.5.3 works

2010-11-26 Thread Baofu Qiao
Hi all,

I just made some tests using gmx 4.5.1, 4.5.2 and 4.5.3. None of them works 
for the continuation.
---
Program mdrun, VERSION 4.5.1
Source code file: checkpoint.c, line: 1727

Fatal error:
Failed to lock: pre.log. Already running simulation?
---
Program mdrun, VERSION 4.5.2
Source code file: checkpoint.c, line: 1748

Fatal error:
Failed to lock: pre.log. Already running simulation?
---
Program mdrun, VERSION 4.5.3
Source code file: checkpoint.c, line: 1750

Fatal error:
Failed to lock: pre.log. Function not implemented.
=

The test system is 895 SPC/E waters in a box of 3 nm (genbox -box 3 -cs). 
The pre.mdp is attached.

I have tested two clusters:
Cluster A: 1)compiler/gnu/4.3 2) mpi/openmpi/1.2.8-gnu-4.3 3)FFTW 3.3.2
4) GMX 4.5.1/4.5.2/4.5.3
Cluster B: 1)compiler/gnu/4.3 2) mpi/openmpi/1.4.2-gnu-4.3 3)FFTW 3.3.2
4) GMX 4.5.3

GMX command:
mpiexec -np 8 mdrun -deffnm pre -npme 2 -maxh 0.15 -cpt 5 -cpi pre.cpt
-append

Can anyone provide further help? Thanks a lot!

best regards,



On 11/26/2010 09:07 AM, Carsten Kutzner wrote:
> Hi,
>
> as a workaround you could run with -noappend and later
> concatenate the output files. Then you should have no
> problems with locking.
>
> Carsten
>
>
> On Nov 25, 2010, at 9:43 PM, Baofu Qiao wrote:
>
>   
>> Hi all,
>>
>> I just recompiled GMX4.0.7. Such error doesn't occur. But 4.0.7 is about 30% 
>> slower than 4.5.3. So I really appreciate if anyone can help me with it!
>>
>> best regards,
>> Baofu Qiao
>>
>>
>> On 2010-11-25 20:17, Baofu Qiao wrote:
>> 
>>> Hi all,
>>>
>>> I got the error message when I am extending the simulation using the 
>>> following command:
>>> mpiexec -np 64 mdrun -deffnm pre -npme 32 -maxh 2 -table table -cpi pre.cpt 
>>> -append 
>>>
>>> The previous simuluation is succeeded. I wonder why pre.log is locked, and 
>>> the strange warning of "Function not implemented"?
>>>
>>> Any suggestion is appreciated!
>>>
>>> *
>>> Getting Loaded...
>>> Reading file pre.tpr, VERSION 4.5.3 (single precision)
>>>
>>> Reading checkpoint file pre.cpt generated: Thu Nov 25 19:43:25 2010
>>>
>>> ---
>>> Program mdrun, VERSION 4.5.3
>>> Source code file: checkpoint.c, line: 1750
>>>
>>> Fatal error:
>>> Failed to lock: pre.log. Function not implemented.
>>> For more information and tips for troubleshooting, please check the GROMACS
>>> website at http://www.gromacs.org/Documentation/Errors
>>> ---
>>>
>>> "It Doesn't Have to Be Tip Top" (Pulp Fiction)
>>>
>>> Error on node 0, will try to stop all the nodes
>>> Halting parallel program mdrun on CPU 0 out of 64
>>>
>>> gcq#147: "It Doesn't Have to Be Tip Top" (Pulp Fiction)
>>>
>>> --
>>> MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
>>> with errorcode -1.
>>>
>>> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
>>> You may or may not see output from other processes, depending on
>>> exactly when Open MPI kills them.
>>> --
>>> --
>>> mpiexec has exited due to process rank 0 with PID 32758 on
>>>   



pre.mdp
Description: application/mdp

Re: [gmx-users] system is exploding

2010-11-26 Thread Baofu Qiao

If you are really sure about the topology, the problem is the initial
structure. Try to use PackMol to build it.

On 11/26/2010 11:42 AM, Olga Ivchenko wrote:
> I tried today to run minimization in vacuum for my small molecules. This has
> the same error.
>
> 2010/11/26 Baofu Qiao 
>
>   
>> Have you run the energy minimization (or further simulation to optimize
>> the structure and test the FF) on the small molecule before you added it
>> into water?
>>
>> On 11/26/2010 11:26 AM, Olga Ivchenko wrote:
>> 
>>> Dear gromacs users,
>>>
>>> I am trying to run simulations for small molecules in water. Topology
>>>   
>> files
>> 
>>> I created by my self for charm ff. When I am trying to start energy
>>> minimization I got an error:
>>>
>>>
>>>  Steepest Descents:
>>>
>>> Tolerance (Fmax) = 1.0e+00
>>>
>>> Number of steps = 1000
>>>
>>>
>>> That's means my system is exploding. Please can you advice me on this,
>>>   
>> what
>> 
>>> I need to check.
>>>
>>> best,
>>>
>>> Olga
>>>
>>>
>>>   
>>
>> --
>> 
>>  Dr. Baofu Qiao
>>  Institute for Computational Physics
>>  Universität Stuttgart
>>  Pfaffenwaldring 27
>>  70569 Stuttgart
>>
>>  Tel: +49(0)711 68563607
>>  Fax: +49(0)711 68563658
>>
>> --
>> gmx-users mailing listgmx-users@gromacs.org
>> http://lists.gromacs.org/mailman/listinfo/gmx-users
>> Please search the archive at
>> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
>> Please don't post (un)subscribe requests to the list. Use the
>> www interface or send it to gmx-users-requ...@gromacs.org.
>> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>>
>> 
>   


-- 

 Dr. Baofu Qiao
 Institute for Computational Physics
 Universität Stuttgart
 Pfaffenwaldring 27
 70569 Stuttgart

 Tel: +49(0)711 68563607
 Fax: +49(0)711 68563658



Re: [gmx-users] system is exploding

2010-11-26 Thread Baofu Qiao

Have you run the energy minimization (or further simulation to optimize
the structure and test the FF) on the small molecule before you added it
into water?

On 11/26/2010 11:26 AM, Olga Ivchenko wrote:
> Dear gromacs users,
>
> I am trying to run simulations for small molecules in water. Topology files
> I created by my self for charm ff. When I am trying to start energy
> minimization I got an error:
>
>
>  Steepest Descents:
>
> Tolerance (Fmax) = 1.0e+00
>
> Number of steps = 1000
>
>
> That means my system is exploding. Please can you advise me on this: what
> do I need to check?
>
> best,
>
> Olga
>
>   


-- 

 Dr. Baofu Qiao
 Institute for Computational Physics
 Universität Stuttgart
 Pfaffenwaldring 27
 70569 Stuttgart

 Tel: +49(0)711 68563607
 Fax: +49(0)711 68563658



Re: [gmx-users] Re: Failed to lock: pre.log (Gromacs 4.5.3)

2010-11-26 Thread Baofu Qiao
Hi Carsten,

Thanks for your suggestion! But my simulation will be run for about 200 ns 
at 10 ns per day (24 hours is the maximum duration of a single job on the 
cluster I am using), which would generate about 20 trajectory files!
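(I guess I could join them afterwards with something like "trjcat -f 
pre.part*.xtc -o pre.xtc" and "eneconv -f pre.part*.edr -o pre.edr", but 
handling 20 part files is still awkward.)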

Can anyone find the reason for this error?

regards,
Baofu Qiao


On 11/26/2010 09:07 AM, Carsten Kutzner wrote:
> Hi,
>
> as a workaround you could run with -noappend and later
> concatenate the output files. Then you should have no
> problems with locking.
>
> Carsten
>
>
> On Nov 25, 2010, at 9:43 PM, Baofu Qiao wrote:
>
>   
>> Hi all,
>>
>> I just recompiled GMX4.0.7. Such error doesn't occur. But 4.0.7 is about 30% 
>> slower than 4.5.3. So I really appreciate if anyone can help me with it!
>>
>> best regards,
>> Baofu Qiao
>>
>>
>> On 2010-11-25 20:17, Baofu Qiao wrote:
>> 
>>> Hi all,
>>>
>>> I got the error message when I am extending the simulation using the 
>>> following command:
>>> mpiexec -np 64 mdrun -deffnm pre -npme 32 -maxh 2 -table table -cpi pre.cpt 
>>> -append 
>>>
>>> The previous simuluation is succeeded. I wonder why pre.log is locked, and 
>>> the strange warning of "Function not implemented"?
>>>
>>> Any suggestion is appreciated!
>>>
>>> *
>>> Getting Loaded...
>>> Reading file pre.tpr, VERSION 4.5.3 (single precision)
>>>
>>> Reading checkpoint file pre.cpt generated: Thu Nov 25 19:43:25 2010
>>>
>>> ---
>>> Program mdrun, VERSION 4.5.3
>>> Source code file: checkpoint.c, line: 1750
>>>
>>> Fatal error:
>>> Failed to lock: pre.log. Function not implemented.
>>> For more information and tips for troubleshooting, please check the GROMACS
>>> website at http://www.gromacs.org/Documentation/Errors
>>> ---
>>>
>>> "It Doesn't Have to Be Tip Top" (Pulp Fiction)
>>>
>>> Error on node 0, will try to stop all the nodes
>>> Halting parallel program mdrun on CPU 0 out of 64
>>>
>>> gcq#147: "It Doesn't Have to Be Tip Top" (Pulp Fiction)
>>>
>>> --
>>> MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
>>> with errorcode -1.
>>>
>>> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
>>> You may or may not see output from other processes, depending on
>>> exactly when Open MPI kills them.
>>> --
>>> --
>>> mpiexec has exited due to process rank 0 with PID 32758 on
>>>


-- 

 Dr. Baofu Qiao
 Institute for Computational Physics
 Universität Stuttgart
 Pfaffenwaldring 27
 70569 Stuttgart

 Tel: +49(0)711 68563607
 Fax: +49(0)711 68563658



[gmx-users] Re: Failed to lock: pre.log (Gromacs 4.5.3)

2010-11-25 Thread Baofu Qiao
Hi all,

I just recompiled GMX 4.0.7. Such an error doesn't occur there. But 4.0.7 is
about 30% slower than 4.5.3, so I would really appreciate it if anyone can
help me with this!

best regards,
Baofu Qiao


On 2010-11-25 20:17, Baofu Qiao wrote:
> Hi all,
>
> I got the error message when I am extending the simulation using the
> following command:
> mpiexec -np 64 mdrun -deffnm pre -npme 32 -maxh 2 -table table -cpi
> pre.cpt -append
>
> The previous simulation succeeded. I wonder why pre.log is locked,
> and what the strange warning "*Function not implemented*" means?
>
> Any suggestion is appreciated!
>
> *
> Getting Loaded...
> Reading file pre.tpr, VERSION 4.5.3 (single precision)
>
> Reading checkpoint file pre.cpt generated: Thu Nov 25 19:43:25 2010
>
> ---
> Program mdrun, VERSION 4.5.3
> Source code file: checkpoint.c, line: 1750
>
> Fatal error:
> *Failed to lock: pre.log. Function not implemented.*
> For more information and tips for troubleshooting, please check the
> GROMACS
> website at http://www.gromacs.org/Documentation/Errors
> ---
>
> "It Doesn't Have to Be Tip Top" (Pulp Fiction)
>
> Error on node 0, will try to stop all the nodes
> Halting parallel program mdrun on CPU 0 out of 64
>
> gcq#147: "It Doesn't Have to Be Tip Top" (Pulp Fiction)
>
> --
> MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
> with errorcode -1.
>
> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
> You may or may not see output from other processes, depending on
> exactly when Open MPI kills them.
> --
> --
> mpiexec has exited due to process rank 0 with PID 32758 on
>


[gmx-users] Failed to lock: pre.log (Gromacs 4.5.3)

2010-11-25 Thread Baofu Qiao
Hi all,

I got the error message when I am extending the simulation using the
following command:
mpiexec -np 64 mdrun -deffnm pre -npme 32 -maxh 2 -table table -cpi
pre.cpt -append

The previous simulation succeeded. I wonder why pre.log is locked,
and what the strange warning "*Function not implemented*" means?

Any suggestion is appreciated!

*
Getting Loaded...
Reading file pre.tpr, VERSION 4.5.3 (single precision)

Reading checkpoint file pre.cpt generated: Thu Nov 25 19:43:25 2010

---
Program mdrun, VERSION 4.5.3
Source code file: checkpoint.c, line: 1750

Fatal error:
*Failed to lock: pre.log. Function not implemented.*
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
---

"It Doesn't Have to Be Tip Top" (Pulp Fiction)

Error on node 0, will try to stop all the nodes
Halting parallel program mdrun on CPU 0 out of 64

gcq#147: "It Doesn't Have to Be Tip Top" (Pulp Fiction)

--
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode -1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--
--
mpiexec has exited due to process rank 0 with PID 32758 on


Re: [gmx-users] g_density options

2010-10-12 Thread Baofu Qiao





For a homogeneous system, you can also use g_density: g_density -sl 1
-dens mass -f -s -o
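If it is the average density of the whole box you want, g_energy can also be
run non-interactively, e.g. "echo Density | g_energy -f ener.edr -o
density.xvg" (file names are just placeholders).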

Mark Abraham wrote:

  You get the density by running g_energy. If you've tried that and seen that the density wasn't there, that would be because your volume is constant in an NVT simulation. grompp probably reported the density when you made the .tpr, however.

Mark

- Original Message -
From: vferra...@units.it
Date: Tuesday, October 12, 2010 23:14
Subject: Re: [gmx-users] g_density options
To: jalem...@vt.edu, Discussion list for GROMACS users

> Thanks a lot, but... How can I extract density information by
> using g_energy?
>
> Valerio
>
> "Justin A. Lemkul" wrote:
>
>> Justin A. Lemkul wrote:
>>
>>> vferra...@units.it wrote:
>>>
>>>> I'm a GROMACS user and I want to automatize solvent parametrization in
>>>> the GROMOS force field. The parametrization of a solvent converges also
>>>> on the density. The problem is that g_density always asks on which
>>>> element of the system to compute the density and waits for a reply that
>>>> I can't automatize. Is it possible to set the g_density calculation to
>>>> the whole system by default and skip the choice step?
>>>
>>> If you're analyzing a homogeneous liquid system, then g_density is the
>>> wrong tool. It calculates density as a function of the box vector. The
>>> whole density of the system is written to the .edr file and can be
>>> extracted using g_energy.
>>
>> And if you're looking to automate any process in Gromacs, see the following:
>>
>> http://www.gromacs.org/Documentation/How-tos/Making_Commands_Non-Interactive
>>
>> -Justin
>>
>> --
>> Justin A. Lemkul
>> Ph.D. Candidate
>> ICTAS Doctoral Scholar
>> MILES-IGERT Trainee
>> Department of Biochemistry
>> Virginia Tech
>> Blacksburg, VA
>> jalemkul[at]vt.edu | (540) 231-9080
>> http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




Re: [gmx-users] ARG Charmm gmx 4.5.1

2010-09-17 Thread Baofu Qiao

You should change the charge group numbers (the cgnr column) so that every 
charge group contains only a few, say 3-4, atoms -- for example CD/HD1/HD2 
in one group, NE/HE/CZ in the next, NH1/HH11/HH12 in a third and 
NH2/HH21/HH22 in a fourth. Actually, the definition of the charge groups is 
not very important if you use PME for the Coulomb interaction.


nahren manuel wrote:
> Dear Gromacs Users,
>
> I am using plain cutoff for my 12-mer protein.
> The grompp reports ARG to have a big charge group. this was also highlighted 
> in the following mail
> http://www.mail-archive.com/gmx-users@gromacs.org/msg32098.html
>
> I was just think if changing the charges on these atoms would help,
> from
> 13   CT2   1   ARG   CD     4    0.20   12.011   ; qtot 1.2
> 14   HA    1   ARG   HD1    4    0.09    1.008   ; qtot 1.29
> 15   HA    1   ARG   HD2    4    0.09    1.008   ; qtot 1.38
> 16   NC2   1   ARG   NE     4   -0.70   14.007   ; qtot 0.68
> 17   HC    1   ARG   HE     4    0.44    1.008   ; qtot 1.12
> 18   C     1   ARG   CZ     4    0.64   12.011   ; qtot 1.76
> 19   NC2   1   ARG   NH1    4   -0.80   14.007   ; qtot 0.96
> 20   HC    1   ARG   HH11   4    0.46    1.008   ; qtot 1.42
> 21   HC    1   ARG   HH12   4    0.46    1.008   ; qtot 1.88
> 22   NC2   1   ARG   NH2    4   -0.80   14.007   ; qtot 1.08
> 23   HC    1   ARG   HH21   4    0.46    1.008   ; qtot 1.54
> 24   HC    1   ARG   HH22   4    0.46    1.008   ; qtot 2
>
>
>
> to
>
>
>
>
>   
>   
>   
>   
>   
>   
>    CD    0.18
>   HD1    0.06
>   HD2    0.06
>    NE   -0.70
>    HE    0.40
>    CZ    0.60
>   NH1   -0.80
>  HH11    0.50
>  HH12    0.50
>   NH2   -0.80
>  HH21    0.50
>  HH22    0.50
>
> The above transformation of charges seems reasonable.
>
> Would like to know if this is okay...
>
>
> Best,
> nahren
>
>
>
>
>
>   
>   



[gmx-users] question about the format of [atomtypes] in ffnonbond.itp

2010-09-10 Thread Baofu Qiao




Hi all,

I just realized that the format for defining the atom types in the
[atomtypes] section of ffnonbonded.itp differs between force fields, for
example:
1. oplsaa.ff/
[ atomtypes ]
; name       bond_type       mass       charge   ptype   sigma         epsilon
  opls_001   C           6   12.01100   0.500    A       3.75000e-01   4.39320e-01 ; SIG

2. charmm27.ff (similar to amber99.ff and gromos53a6.ff)
[ atomtypes ]
; name   at.num   mass       charge   ptype   sigma            epsilon
  C      6        12.01100   0.51     A       0.356359487256   0.46024

3. Martini FF
[ atomtypes ]
; name   mass   charge   ptype   c6    c12
  P5     72.0   0.000    A       0.0   0.0

I wonder how gromacs deals with these differences? And if I want to write 
my own force field files, what should I pay attention to regarding this 
difference?

best wishes,
Baofu Qiao





Re: [gmx-users] Precision in trajectory file

2010-08-03 Thread Baofu Qiao

The number of digits in .gro (or .pdb) files is fixed by the file format;
otherwise they could not be read by other software, like VMD.

For the .trr files, I guess that if you are indeed running double-precision
GMX, the leap-frog integrator is using double-precision coordinates, but the
value written to .trr is also truncated because of the format.
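(For .gro output, trjconv has a -ndec option to control the number of
decimal places written, if I remember correctly.)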



Inon Sharony wrote:
> Good afternoon!
>
> It seems to me that although I'm running a double-precision
> installation of GROMACS, the printout to file (of the positions and
> velocities) is of much less precision. i.e. the computation is done on
> numbers with something like 16 significant digits, but the last 10 of
> those are simply truncated and lost (e.g. positions are given in
> single-precision as 0.000 nm, and in double-precision as 0.0 nm).
> Since I've already spent computation time at getting double-precision,
> I'd like to make use of all of it -- for my own reasons. I already
> searched the manuals, mailing lists and source code for instruction
> but didn't find any.
> Could you please tell me how I can change the number of digits printed
> out (e.g. to the .trr file)? I'm looking for a more elegant solution
> than adding a printf line to the source code. Something along the
> lines of changing the format of numbers in the function that prints to
> .trr .
>
> Thanks in advance!
>



Re: [gmx-users] the job is not being distributed

2010-06-30 Thread Baofu Qiao

I think this is not a problem with Gromacs, but with the cluster you are
using. Try to contact your cluster administrator, and check the job
administration software.


Syed Tarique Moin wrote:
> Hi,
>
> I am using MPICH2 library for gromacs. 
>
> Thanks and  Regards
>
> Syed Tarique Moin
>
>
>
>
>   
>   



Re: [gmx-users] question about Gromacs and Spectroscopy

2010-06-30 Thread Baofu Qiao
Hi Justin,

Thanks for your reply! And sorry for the vague question, due to my limited
knowledge of spectroscopy.

What I want to reproduce is the wavenumber of the C-H vibrations in alkyl
chains. In IR experiments this wavenumber is measured to be 2000-3000
cm-1, i.e. in the middle of the infrared region. I wonder whether I can
reproduce it or not?
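From what I have read, one possible route is to Fourier transform the
velocity autocorrelation function (e.g. from g_velacc) to get a vibrational
spectrum, but I am not sure how meaningful that is for C-H stretches with a
classical force field, especially if the bonds are constrained.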

I hope this is a bit clearer! Thanks a lot!

regards,
Baofu Qiao


Justin A. Lemkul wrote:
>
> Baofu Qiao wrote:
>> Hi all,
>>
>> I want to reproduce some experimental data on Spectroscopy using
>> Gromacs. I want to know is it possible? If yes, how to do that? Is there
>> any tutorial related to that?
>>
>> Any help is appreciate!
>>
>
> Your question is too vague to get much useful advice.  The term
> "spectroscopy" can refer to a large number of techniques.  I am
> unaware of any tutorial related to any spectroscopic technique,
> however, but you might get some useful advice if you ask a more
> specific question (i.e., what you're actually trying to do).
>
> -Justin
>
>> regards,
>> Baofu Qiao
>



[gmx-users] question about Gromacs and Spectroscopy

2010-06-30 Thread Baofu Qiao
Hi all,

I want to reproduce some experimental data on Spectroscopy using
Gromacs. I want to know is it possible? If yes, how to do that? Is there
any tutorial related to that?

Any help is appreciated!

regards,
Baofu Qiao


Re: [gmx-users] how to extract the LJ interaction parameters from the tpr file

2010-02-08 Thread Baofu Qiao
Hi Mark,

Thanks for your reply!

I want to use the pre-saved trajectory and, based on a distance criterion,
calculate the Coulomb and LJ interactions. However, I don't know which data
structure holds the LJ parameters. Do you know?

mdrun -rerun is probably not an option for me, because I want to calculate
the energy terms based on a distance that is not the cutoff distance used in
the simulations, but one taken from an RDF calculation.
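So far, just from reading the headers (not verified), the closest I have
found looks roughly like the following -- the field names are my best guess,
so please correct me if this is the wrong place to look:

  /* sketch only: assumes top was filled from the .tpr (e.g. via read_top())
     and index[] holds the atoms of interest; field names are from memory */
  int  ntypes = top->idef.atnr;                 /* number of atom types */
  int  ti  = top->atoms.atom[index[i]].type;    /* type of atom i       */
  int  tj  = top->atoms.atom[index[j]].type;    /* type of atom j       */
  real c6  = top->idef.iparams[ntypes*ti + tj].lj.c6;
  real c12 = top->idef.iparams[ntypes*ti + tj].lj.c12;

Is that the right way to get the pair C6/C12?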

regards,
Baofu Qiao

Mark Abraham wrote:
> On 08/02/10 21:03, Baofu Qiao wrote:
>> Hi all,
>>
>> I want to calculate the non-bonded interactions (LJ+Coulomb) between two
>> sub-groups when they are within the cutoff distance. The sub-groups are
>> only parts of the whole energy groups used in my .mdp file. Given the
>> partial charges of the atoms involved are easy to get from the
>> "top.atoms.atom[index[i]].q",   I have no idea how to get the LJ
>> parameters of the atoms?
>>
>> Does anyone know how to get them from the top.atoms.atomXXX? Or is there
>> some similar code to calculate such energies?
>
> If you want them calculated mid-simulation, then you'll have to find
> the relevant data structure.
>
> Otherwise, define useful energy groups, and calculate the terms from
> frames in a saved trajectory using mdrun -rerun. Get the data from the
> resulting .edr with g_energy in the usual way.
>
> Mark

-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php


Re: [gmx-users] best interval for saving configurations ?

2010-02-08 Thread Baofu Qiao
Hi,

It depends: first, on how strongly consecutive configurations are correlated
over time, and then on the capacity of your hard disk. Generally it is on
the order of magnitude of picoseconds.
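As a concrete example: with dt = 2 fs, setting nstxtcout = 500 writes one
frame per picosecond, so a 10 ns run produces 10 000 frames in the .xtc
file; scaling nstxtcout up or down trades time resolution against disk
space accordingly.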

shahab shariati wrote:
> Hi all
>
>
>
> What is the best interval for saving configurations in full md step?  (every
> what ps?)
>
>   

-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php


[gmx-users] how to extract the LJ interaction parameters from the tpr file

2010-02-08 Thread Baofu Qiao
Hi all,

I want to calculate the non-bonded interactions (LJ+Coulomb) between two
sub-groups when they are within the cutoff distance. The sub-groups are
only parts of the whole energy groups used in my .mdp file. While the
partial charges of the atoms involved are easy to get from
"top.atoms.atom[index[i]].q", I have no idea how to get the LJ
parameters of the atoms.

Does anyone know how to get them from the top.atoms.atomXXX? Or is there
some similar code to calculate such energies?

Thanks in advance!
Best wishes,
Baofu Qiao
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php


Re: [gmx-users] RE: xmgrace

2010-01-29 Thread Baofu Qiao

mdrun -s topol.tpr

BTW: take a look at "mdrun -h "

bharat gupta wrote:
> Thanks sir
>
> But there is one problem that when I am running the mdrun step of 5th
> step of lysozyme tutorial,  I am getting an error :-
>
> Can not open file:
> topol.tpr
>
> Can u tell me how to rectify it ..
>   

-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php


Re: [gmx-users] strange empty space inside the system box

2010-01-24 Thread Baofu Qiao

Hi,

Thanks a lot for the suggestion! NPT is indeed difficult with the frozen 
graphite layers. On the other hand, I have also done another simulation 
using an implicit graphite wall with the same polymer solution 
(composition and system volume). That implicit-wall system seems fine; 
at least the empty space is not there. Any other ideas about it?


best regards,
Baofu Qiao

chris.ne...@utoronto.ca wrote:
Sounds like a bubble. Do some NpT equilibration by adding pressure 
coupling. That's going to be difficult with your freeze groups though. 
Basically, you need to find a way to get the correct solution density 
between the graphite layers.


-- original message --

Hi all,

Is there anyone who can give me some suggestions?

My system is composed of some polymer chains and waters. The density of
the solution is about 1 g/cm^3. Then I added two explicit graphite layers
(resname=WAL) at the bottom and top of the solution, which are both
frozen in all three dimensions in all the simulations. After some
equilibration runs (about 200 ps), a strange empty space appears at the
bottom of the solution in the Z direction. For example, the bottom
graphite layer is Z = 0-4 nm, Z = 4-6 nm is empty (no graphite, no polymer,
no water), Z = 6-30 nm is the solution, and Z = 30-34 nm is the top
graphite layer. The void space doesn't disappear after a 10 ns simulation.
Could someone give me some suggestions? Thanks a lot in advance!

The following is the .mdp  file. (in the equilibrium run,
coulombtype=cut-off, not PME)

; RUN CONTROL PARAMETERS
integrator   = md
dt   = 0.0025
nsteps   = 800
comm-mode= Linear
nstcomm  = 1
comm-grps=

nstlog   = 80
nstenergy= 80
nstxtcout= 80
xtc-precision= 1000
energygrps   =  WAL

nstlist  = 10
ns_type  = grid
rlist = 1.2

coulombtype  = pme
rcoulomb   = 1.2
vdw-type   = cut-off
rvdw = 1.2
DispCorr   = EnerPres
table-extension   = 1
energygrp_table =
fourierspacing= 0.15
pme_order= 4
ewald_rtol = 1e-05
ewald_geometry= 3d
epsilon_surface  = 0
optimize_fft   = yes

Tcoupl   = nose-hoover
tc-grps  = system
tau_t= 0.5
ref_t= 298

gen_vel  = yes
gen_temp = 298

constraints  = all-bonds

freezegrps   = WAL
freezedim= Y Y Y
energygrp_excl   = WAL WAL



--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

Can't post? Read http://www.gromacs.org/mailing_lists/users.php


[gmx-users] strange empty space inside the system box

2010-01-24 Thread Baofu Qiao

Hi all,

Is there anyone who can give me some suggestions?

My system is composed of some polymer chains and waters. The density of
the solution is about 1 g/cm^3. Then I added two explicit graphite layers
(resname=WAL) at the bottom and top of the solution, which are both
frozen in all three dimensions in all the simulations. After some
equilibration runs (about 200 ps), a strange empty space appears at the
bottom of the solution in the Z direction. For example, the bottom
graphite layer is Z = 0-4 nm, Z = 4-6 nm is empty (no graphite, no polymer,
no water), Z = 6-30 nm is the solution, and Z = 30-34 nm is the top
graphite layer. The void space doesn't disappear after a 10 ns simulation.
Could someone give me some suggestions? Thanks a lot in advance!
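(A quick way to check this is a density profile along Z, e.g. g_density -d Z
with a fine -sl slicing for the solution group: it shows directly whether
the liquid between the graphite layers sits near the expected 1 g/cm^3 or
has been stretched, which would point to a too low starting density or a
bubble.)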


The following is the .mdp  file. (in the equilibrium run, 
coulombtype=cut-off, not PME)


; RUN CONTROL PARAMETERS
integrator   = md
dt   = 0.0025
nsteps   = 800
comm-mode= Linear
nstcomm  = 1
comm-grps=

nstlog   = 80
nstenergy= 80
nstxtcout= 80
xtc-precision= 1000
energygrps   =  WAL

nstlist  = 10
ns_type  = grid
rlist = 1.2

coulombtype  = pme
rcoulomb   = 1.2
vdw-type   = cut-off
rvdw = 1.2
DispCorr   = EnerPres
table-extension   = 1
energygrp_table =
fourierspacing= 0.15
pme_order= 4
ewald_rtol = 1e-05
ewald_geometry= 3d
epsilon_surface  = 0
optimize_fft   = yes

Tcoupl   = nose-hoover
tc-grps  = system
tau_t= 0.5
ref_t= 298

gen_vel  = yes
gen_temp = 298

constraints  = all-bonds

freezegrps   = WAL
freezedim= Y Y Y
energygrp_excl   = WAL WAL





--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

Can't post? Read http://www.gromacs.org/mailing_lists/users.php


[gmx-users] implicit wall doesn't recognize the wall_atomtype correctly?

2010-01-15 Thread Baofu Qiao




Hi all,

I want to know whether this is a bug or not: when I use a nonsense
wall_atomtype for the implicit wall, grompp doesn't complain about
it. Or is it a mistake I am making myself?
I am using gmx 4.0.7.  See the following wall_atomtype flags.

nwall    = 2
wall_type    = 9-3   
wall_r_linpot   = 0.1
wall_atomtype    = abc   123    ;opls_abc  opls_abc
wall_density   = 144 144
wall_ewald_zfac    = 3


best wishes,
Baofu Qiao


-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php

Re: [gmx-users] 10-4-3 implicit wall potential

2010-01-15 Thread Baofu Qiao
Hi,

Thanks a lot!
I just tried again. "mdrun -table table" works.

best regards,
Baofu Qiao

Berk Hess wrote:
> Hi,
>
> Do you really need all those tables?
> I would think you onkly need to specify your wall table and no other tables.
>
> As the documentation says (I think), the cut-off for the wall table is the 
> length
> of the table you specify.
>
> Berk
>
> Date: Fri, 15 Jan 2010 11:02:40 +0100
> From: qia...@gmail.com
> To: gmx-users@gromacs.org
> Subject: Re: [gmx-users] 10-4-3 implicit wall potential
>
>
>
>
>
>
>   
>   
>
>
> Dear prof. Berk,
>
>
>
> Since it is the first time I use the tabulated potential, I have lots
> of questions about it. Could you help me out? Thanks a lot!
>
>
>
> I make the test in the system with only some waters between 2 implicit
> walls.
>
> 1. some related parameters of the .mdp file are listed  below. Is there
> anything wrong? 
>
> energygrps   = SOL
>
> pbc= xy
>
> rlist= 1.2
>
> coulombtype   = pme
>
> rcoulomb = 1.2
>
> vdw-type = cut-off
>
> rvdw = 1.2
>
> DispCorr  = EnerPres
>
> table-extension   = 1
>
> energygrp_table  =   
>
> ;Walls
>
> nwall = 2
>
> wall_type  = table 
>
> wall_r_linpot = 0.0
>
> wall_atomtype  = opls_135 opls_135   
>
> wall_density  = ;(the wall_density is included in
> the tabulated potential, right?)
>
> wall_ewald_zfac  = 3
>
>
>
> 2.  The flags of mdrun are:
>
>   mdrun -deffnm table  -tablep table_SOL_wall0.xvg
> table_SOL_wall1.xvg -table table.xvg
>
> where table.xvg is copied from  gmx_top_path/table6-12.xvg, and
> table_SOL_wall0.xvg and table_SOL_wall1.xvg are the self-generated 9-3
> potential (9-3 potential is used for the aim of testing and compare
> with the  default 9-3 potential of gromacs.)
>
> Since I use only the tabulated potential between SOL-wall0 and
> SOL-wall1, why do I need the table.xvg?  Does it affect the coulomb/vdw
> potential between SOL-SOL?
>
>
>
> 3. In the table_SOL_wall0.xvg and table_SOL_wall1.xvg, the distance
> range of 0-3nm is generated. max(rvdw, rcoulomb)=1.2 nm and
> table-extension=1. The question is what is the cutoff distance for
> the tabulated SOL-wall0/wall1 potential, 2.2nm or 3nm?
>
>
>
> 4. In the output file table.log,
>
> Table routines are used for coulomb: TRUE
>
> Table routines are used for vdw: FALSE
>
> Using a Gaussian width (1/beta) of 0.384195 nm for Ewald
>
> Cut-off's:   NS: 1.2   Coulomb: 1.2   LJ: 1.2
>
> System total charge: 0.000
>
> Generated table with 1100 data points for Ewald.
>
> Tabscale = 500 points/nm
>
> Generated table with 1100 data points for LJ6.
>
> Tabscale = 500 points/nm
>
> Generated table with 1100 data points for LJ12.
>
> Tabscale = 500 points/nm
>
> Reading user tables for 1 energy groups with 2 walls
>
> Read user tables from table_SOL_wall0.xvg with 1501 data points.
>
> Tabscale = 500 points/nm
>
> Read user tables from table_SOL_wall1.xvg with 1501 data points.
>
> Tabscale = 500 points/nm
>
>
>
> The question are: 
>
> a) Since Table is not used for vdw (FALSE),  why table is generated
> for LJ6/LJ12?
>
> b) Where does the value of  "1100" data points come from? (Since
> rvdw=1.2, tabscale=500, it should be 1400, not 1100!?)
>
> c) 1501 data points is generated for table_SOL_wall0/wall1, the answer
> of the 3rd question (cutoff value for the tabulated potential
> calculation) should be 3nm, right?
>
>
>
> Sorry for the lots of questions!
>
>
>
> best wishes,
>
> Baofu Qiao
>
>
>
> Berk Hess wrote:
>
>   Hi,
>
> Why not use the tabulated wall potential?
That does not require any changes to the code.
>
> Berk
>
>   
>   
> Date: Tue, 5 Jan 2010 14:13:30 +0100
> From: qia...@gmail.com
> To: gmx-users@gromacs.org
> Subject: [gmx-users] 10-4-3 implicit wall potential
>
> HI all,
>
> Does anyone have experience to implement the 10-4-3 implicit wall
> potential?  If I want to use it, which files do I need to change, except
> wall.c? Can anyone give some suggestions?
>
> Thanks a lot!
>
> best wishes,
> Baofu Qiao
>   

-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php


Re: [gmx-users] 10-4-3 implicit wall potential

2010-01-15 Thread Baofu Qiao




Dear prof. Berk,

Since this is the first time I have used tabulated potentials, I have lots
of questions about them. Could you help me out? Thanks a lot!

I am testing on a system with only some water between two implicit
walls.
1. Some relevant parameters of the .mdp file are listed below. Is there
anything wrong? 
energygrps   = SOL
pbc        = xy
rlist    = 1.2
coulombtype   = pme
rcoulomb = 1.2
vdw-type = cut-off
rvdw = 1.2
DispCorr  = EnerPres
table-extension   = 1
energygrp_table      =   
;Walls
nwall = 2
wall_type      = table 
wall_r_linpot = 0.0
wall_atomtype  = opls_135 opls_135   
wall_density  = ;(the wall_density is included in
the tabulated potential, right?)    
wall_ewald_zfac  = 3

2.  The flags of mdrun are:
      mdrun -deffnm table  -tablep table_SOL_wall0.xvg
table_SOL_wall1.xvg -table table.xvg
where table.xvg is copied from  gmx_top_path/table6-12.xvg, and
table_SOL_wall0.xvg and table_SOL_wall1.xvg are self-generated 9-3
potentials (the 9-3 form is used here for testing and comparison
with the default 9-3 potential of gromacs).
Since I use only the tabulated potential between SOL-wall0 and
SOL-wall1, why do I need the table.xvg?  Does it affect the coulomb/vdw
potential between SOL-SOL?

3. In the table_SOL_wall0.xvg and table_SOL_wall1.xvg, the distance
range of 0-3nm is generated. max(rvdw, rcoulomb)=1.2 nm and
table-extension=1. The question is what is the cutoff distance for
the tabulated SOL-wall0/wall1 potential, 2.2nm or 3nm?

4. In the output file table.log,
Table routines are used for coulomb: TRUE
Table routines are used for vdw: FALSE
Using a Gaussian width (1/beta) of 0.384195 nm for Ewald
Cut-off's:   NS: 1.2   Coulomb: 1.2   LJ: 1.2
System total charge: 0.000
Generated table with 1100 data points for Ewald.
Tabscale = 500 points/nm
Generated table with 1100 data points for LJ6.
Tabscale = 500 points/nm
Generated table with 1100 data points for LJ12.
Tabscale = 500 points/nm
Reading user tables for 1 energy groups with 2 walls
Read user tables from table_SOL_wall0.xvg with 1501 data points.
Tabscale = 500 points/nm
Read user tables from table_SOL_wall1.xvg with 1501 data points.
Tabscale = 500 points/nm

The questions are: 
a) Since Table is not used for vdw (FALSE),  why table is generated
for LJ6/LJ12?
b) Where does the value of  "1100" data points come from? (Since
rvdw=1.2, tabscale=500, it should be 1400, not 1100!?)
c) 1501 data points is generated for table_SOL_wall0/wall1, the answer
of the 3rd question (cutoff value for the tabulated potential
calculation) should be 3nm, right?

Sorry for all the questions!

best wishes,
Baofu Qiao
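
Two remarks for the archive. On question b): the internal tables appear to
be generated out to max(rvdw, rcoulomb) + table-extension, i.e.
(1.2 + 1.0) nm x 500 points/nm = 1100 points, which would explain the
number in the log. And for building such a wall table, a rough 9-3
generator is sketched below. It assumes the usual 7-column user-table
layout (z, f, -f', g, -g', h, -h'), that only the dispersion and repulsion
columns are read for walls and are multiplied by the combined C6/C12 of the
atom type and wall_atomtype, and that the pi*rho prefactors of the
integrated 9-3 potential therefore go into g and h; the small-z cap is
likewise just a choice:

/* Rough sketch of a 9-3 wall table generator (see assumptions above). */
#include <stdio.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define RHO   114.0   /* wall number density in nm^-3 (placeholder) */
#define ZMAX  3.0     /* table length in nm                         */
#define ZMIN  0.04    /* below this distance, write zeros           */
#define DZ    0.002   /* 500 points per nm                          */

int main(void)
{
    double z;

    for (z = 0.0; z <= ZMAX + 1e-9; z += DZ) {
        double g = 0.0, gp = 0.0, h = 0.0, hp = 0.0;
        if (z >= ZMIN) {
            g  = -M_PI * RHO / ( 6.0 * pow(z, 3));   /* dispersion */
            gp = -M_PI * RHO / ( 2.0 * pow(z, 4));   /* -dg/dz     */
            h  =  M_PI * RHO / (45.0 * pow(z, 9));   /* repulsion  */
            hp =  M_PI * RHO / ( 5.0 * pow(z, 10));  /* -dh/dz     */
        }
        /* Coulomb columns f and -f' stay zero: the wall carries no charge */
        printf("%10.4f %13.5e %13.5e %13.5e %13.5e %13.5e %13.5e\n",
               z, 0.0, 0.0, g, gp, h, hp);
    }
    return 0;
}

Redirected to table_SOL_wall0.xvg / table_SOL_wall1.xvg, this should
reproduce the built-in 9-3 wall for the same wall_atomtype and density,
provided the scaling assumption above is correct.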

Berk Hess wrote:

  Hi,

Why not use the tabulated wall potential?
That does not require any changes to the code.

Berk

  
  
Date: Tue, 5 Jan 2010 14:13:30 +0100
From: qia...@gmail.com
To: gmx-users@gromacs.org
Subject: [gmx-users] 10-4-3 implicit wall potential

HI all,

Does anyone have experience to implement the 10-4-3 implicit wall
potential?  If I want to use it, which files do I need to change, except
wall.c? Can anyone give some suggestions?

Thanks a lot!

best wishes,
Baofu Qiao
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php

  
   		 	   		  
_
Express yourself instantly with MSN Messenger! Download today it's FREE!
http://messenger.msn.click-url.com/go/onm00200471ave/direct/01/
  




-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php

[gmx-users] question about the Silicon force field in gromacs

2010-01-13 Thread Baofu Qiao
Dear prof. van der Spoel,

I am planning to study crystalline silicon, but I have one question
about the silicon force field. In the original paper (E. J. W.
Wensink, et al., Properties of Adsorbed Water Layers and the Effect of
Adsorbed Layers on Interparticle Forces by Liquid Bridging, Langmuir,
2000, 16, 7392-7400), the LJ parameters are given as C6 = 0.22617*10^(-2)
kJ nm^6/mol and C12 = 0.22191*10^(-4) kJ nm^12/mol.
In ffG53a6nb.itp, however, C6 = 0.01473796 and C12 = 2.2193521e-05. The C6
values differ while the C12 values are the same. Is this an
inconsistency, or have I missed something?

PS: in ffoplsaanb.itp, sigma=3.38550e-01, epsilon= 2.44704e+00, which is
consistent with the ffG53a6nb.itp.
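As a cross-check of the PS: with sigma = 0.33855 nm and
epsilon = 2.44704 kJ/mol, C6 = 4*epsilon*sigma^6 = 4 x 2.44704 x 0.33855^6
= 0.014738 kJ mol^-1 nm^6 and C12 = 4*epsilon*sigma^12 = 2.2194e-05
kJ mol^-1 nm^12, i.e. exactly the ffG53a6nb.itp values. So the OPLS and
G53a6 entries agree with each other, and only the C6 quoted from the paper
(0.22617e-2) is the odd one out.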

Since it was you who added the silicon force field, and you have a
related publication (D. van der Spoel, et al., Lifting a Wet Glass from
a Table: A Microscopic Picture, Langmuir, 2006, 22, 5666), I think you
might be able to help me. Thanks a lot!

best wishes,
Baofu Qiao
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php


Re: [gmx-users] SiO2 simulation

2010-01-12 Thread Baofu Qiao

Hi,

As far as I know, you must build the structure file and the topology file 
yourself. As for the force field, SiO2 atom types are included in the 
ffG53a6 and OPLS force fields. However, it seems that the LJ parameters 
of Si in both of them are inconsistent with the original publication.


regards,
Baofu Qiao

Justin A. Lemkul wrote:



Arden Perkins wrote:
I believe pdb files (protein data bank) are only for peptides, so 
unless you are simulating a peptide you will not need any pdb files 
for your 


There are numerous utilities that generate .pdb files; it is not true 
to say that .pdb files only contain proteins/peptides.  A .pdb file is 
simply a coordinate format; it can contain anything.


SiO2. Your structure file (.gro) for SiO2 may be in the fftw library 
files (or whatever forcefield you're using). Hope that helps.
 


What do you mean by this?  FFTW has nothing to do with force fields or 
structures of any sort.  And certainly none of the force fields 
distributed with Gromacs will have topologies for SiO2, since they are 
primarily biomolecular.  They may contain atom types related to SiO2, 
however, but probably not pre-built topologies, and certainly not 
coordinate files.


-Justin


Arden Perkins

On Tue, Jan 12, 2010 at 3:23 AM, Batistakis, C. <mailto:c.batista...@tue.nl>> wrote:


Dear all


I am a new user of Gromacs. I am interested to simulate amorphous

SiO_2 and I would like to know if someone can send me the .pdb and
.top files.


Thanks in advance



Chrysostomos



--
gmx-users mailing listgmx-users@gromacs.org
<mailto:gmx-users@gromacs.org>
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before
posting!
Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org
<mailto:gmx-users-requ...@gromacs.org>.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php






--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

Can't post? Read http://www.gromacs.org/mailing_lists/users.php


[gmx-users] 10-4-3 implicit wall potential

2010-01-05 Thread Baofu Qiao
Hi all,

Does anyone have experience implementing the 10-4-3 implicit wall
potential? If I want to use it, which files do I need to change besides
wall.c? Can anyone give me some suggestions?

Thanks a lot!

best wishes,
Baofu Qiao
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php


Re: [gmx-users] Coarse-Grain: GROMACS bond information to VMD

2009-10-21 Thread Baofu Qiao




Hi all,

I have tried the top2psf.tcl from Justin and the top2psf.pf from the VMD
website. But both of them can only deal with single-chain systems. For
example, if there are 10 proteins (+water) in my system, how do I convert
the topology of the total system to a .psf file? I tried to append
the .psf files, but VMD doesn't recognize the appended .psf file.

Another question: 
how do I use the code from Nicolas? I got the following error messages
from 
>>>> ./coarse_grain.tcl -tpr em.tpr

: command not found line 10:
./coarse_grain.tcl: line 14: proc: command not found
: command not found line 15:
./coarse_grain.tcl: line 17: global: command not found
./coarse_grain.tcl: line 18: global: command not found
./coarse_grain.tcl: line 19: global: command not found
./coarse_grain.tcl: line 20: global: command not found
./coarse_grain.tcl: line 21: global: command not found
./coarse_grain.tcl: line 22: global: command not found
./coarse_grain.tcl: line 23: global: command not found
./coarse_grain.tcl: line 24: global: command not found
./coarse_grain.tcl: line 25: global: command not found
: command not found line 26:
: command not found line 41:
: command not found line 45:
./coarse_grain.tcl: line 120: syntax error near unexpected token `}'
'/coarse_grain.tcl: line 120: ` } else {
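
(The "command not found" lines mean that the shell, not Tcl, is executing
the script: /bin/sh does not know proc or global. The script has to be run
by a Tcl interpreter instead, e.g. sourced from the VMD console or started
with something like "vmd -dispdev text -e coarse_grain.tcl", with its
arguments passed in whatever way the script itself expects.)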

best wishes,
Baofu Qiao

Nicolas SAPAY wrote:

  
Justin A. Lemkul wrote:


  I have also added a Perl script to the GROMACS site (the VMD page):

http://www.gromacs.org/Developer_Zone/Programming_Guide/VMD

The user provides an input topology file, and a .psf file is written,
which can be loaded as data for the structure in VMD.  The !NBOND
section seems to be the most important in this regard, so the other
sections are a bit rough, but it seems to work alright.

The caveat is the topology must be one generated by MARTINI, in order to
satisfy all the pattern matching and the order of the topology.  It
should be fairly easy to modify the program further to accommodate other
layouts, but I haven't had the need to do so.
  

I added text to the above page describing both scripts, which are
attached. I'd have done that yesterday but the website was
intermittently down.

  
  
Thanks for posting the tcl script on the website, although this script has
been mainly coded by someone else. I have simply modified it for my own
purpose.

I also want to mention that it is a good thing to create a psf file if one
works with VMD. You store bonds, atom types and charges at once. Actually, I
should have somewhere a .rtf file for the Martini amino acids and lipids
(the equivalent of the Gromacs rtp file for CHARMM). It can be used by
psfgen to generate a psf file with Martini bonds, charges and atom types.
If I can retrieve it in my archives, I will post it on the website.

Nicolas

  
  
Mark



  Nicolas Sapay wrote:
  
  
Hello Thomas,

I have a tcl script in my personal script library that might do what
you want to do. I didn't use it for quite a while, but it was working
well as far as I remember. I think it has been adapted from a script
available on the VMD website, but I don't remember exactly its
history. It doesn't seem too difficult to understand. You should be
able to modify it for your own purpose, if needed.

Cheers,
Nicolas


Thomas Schmidt a écrit :


  Dear Omer,

many thanks for your answer, but your solution doesn't work for me.
We have Protein-Lipid models in the CG scale.
Only if I replace all atom names in the PDB file through "CA" I can
use
the "trace" drawing method, but get also wrong atoms connected to each
other. For example CG Beads with low distances to each other, e.g. in
coarse-grained benzene rings, were not connected. I guess that this
method is distance dependent, too, but in another way. :-)

Does anybody else have a solution (...to put GROMACS bond information
into VMD)?

Thomas


  

___
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before
posting!
Please don't post (un)subscribe requests to the list. Use the www
interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php

  

___
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php



  
  

  





Re: [gmx-users] question about the implicit walls

2009-09-21 Thread Baofu Qiao




Hi Berk,

Thanks for your reply!

Some questions related to the implicit walls: 
a) When I use pbc=xyz in combination with nwall=2, I get the following
error
           ERROR: walls only work with pbc=xy
     But if I use pbc=xy, there are no periodic images in the Z
direction, so why do we need "wall_ewald_zfac" = 2-3? 
c) Even though I have used half of the processors for -npme, more than
20% of the run time is still lost because PME has more work to do. Is
there any way to improve the performance?

The following are the parameters for the walls with which I try to
simulate a graphene substrate.

; WALLS
; Number of walls, type, atom types, densities and box-z scale factor
for Ewald
nwall         = 2
wall_type     = 9-3
wall_r_linpot   = 0
wall_atomtype    = opls_135   opls_135
wall_density    = 114  114
wall_ewald_zfac     = 3
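
(For the 9-3 wall type the density enters because the 12-6 pair potential
is integrated over the half-space behind the wall, which gives, up to the
exact prefactor convention, V(z) = pi*rho*[ C12/(45 z^9) - C6/(6 z^3) ];
wall_density is that number density rho in nm^-3. As a sanity check for
graphite: 2.26 g/cm^3 / 12.011 g/mol x 6.022e23 = 1.13e23 atoms/cm^3, i.e.
about 113 atoms/nm^3, so 114 is a reasonable value for a graphite-like
wall.)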



Berk Hess wrote:

  Hi,

Nearly all the information is in the mdp options page, but it is a bit concise.
The only really missing information is that there is not cut-off for the wall.
All atoms feel the force of one or both walls.

The density is required to determine the interaction strength of the wall
for the 9-3 and 10-4 options. The LJ potential is integrated over the volume
behind the wall or the plane of the wall, which requires an LJ particle density.

Berk

Date: Mon, 21 Sep 2009 10:02:29 +0200
From: qia...@gmail.com
To: gmx-users@gromacs.org
Subject: [gmx-users] question about the implicit walls






  
  




Hi, 



Is anyone who can clear the "Walls" feature in the gmx 4.0.x? Same as
Yves (see the end), I cannot find any related reference or publication
citing it. 



Generally the potential on wall is in continuum format. But if it is
also true here,  what the "wall_density" for?  I guess it is based on a
lattice model. If I want to build silicon or graphene walls, what kind
of values is suitable for them? Any reference?



regards,

Baofu Qiao



[gmx-users] Implicit walls.
Yves Dubief

Thu, 11 Jun 2009 03:32:11 -0700



Hi,




I am using the implicit wall method available in 4.0.x and I cannot
seem to find any published work on the method, aside from the
manual. I think I have an overall understanding of the
method, but I am not confident about some key details:
- Is the force generated from the wall derived from a
continuum representation of the wall based on the surface
density, or from a lattice of virtual atoms? I am leaning
toward #1 just because it works great under NPT
- What is the depth of the virtual wall when wall_r_linpot=0?
Is it LJ cutoff? or "zero"?
- Are any of developer willing to validate the paragraph we
plan on writing in my student thesis and future publications
(when it's all ready)?

   Thank you in advance.
   Yves
 		 	   		  
_
Express yourself instantly with MSN Messenger! Download today it's FREE!
http://messenger.msn.click-url.com/go/onm00200471ave/direct/01/
  
  

___
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php




___
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php

[gmx-users] question about the implicit walls

2009-09-21 Thread Baofu Qiao






Hi, 

Can anyone clarify the "Walls" feature in gmx 4.0.x? Like
Yves (see the end of this mail), I cannot find any related reference or
publication describing it. 

Generally a wall potential is written in a continuum form. But if that is
also the case here, what is "wall_density" for? Otherwise I would guess it
is based on a lattice model. If I want to build silicon or graphene walls,
what kind of values are suitable for them? Any reference?

regards,
Baofu Qiao

[gmx-users] Implicit walls.
Yves Dubief
Thu, 11 Jun 2009 03:32:11 -0700

Hi,


I am using the implicit wall method available in 4.0.x and I cannot
seem to find any published work on the method, aside from the
manual. I think I have an overall understanding of the
method, but I am not confident about some key details:
- Is the force generated from the wall derived from a
continuum representation of the wall based on the surface
density, or from a lattice of virtual atoms? I am leaning
toward #1 just because it works great under NPT
- What is the depth of the virtual wall when wall_r_linpot=0?
Is it LJ cutoff? or "zero"?
- Are any of developer willing to validate the paragraph we
plan on writing in my student thesis and future publications
(when it's all ready)?

   Thank you in advance.
   Yves



___
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php

Re: [gmx-users] question about mdrun -append

2009-08-07 Thread Baofu Qiao




Hi Berk,

Now it works. I copied the mdrun binary manually to the $GMX/bin folder. I
checked the date of mdrun and found that "make install mdrun" doesn't
copy the latest mdrun file to the $GMX/bin folder.

Thanks a lot for your help!

regards,
Baofu Qiao


Berk Hess wrote:

  Strange...

I tested this combining procedure for other parts of the codes and there it works.

What kind of operating system are you running and what is the file system?

Berk

Date: Fri, 7 Aug 2009 14:49:45 +0200
From: qia...@gmail.com
To: gmx-users@gromacs.org
Subject: Re: [gmx-users] question about mdrun -append






  


Hi Berk,



The error still exist!  

The following process is used:

1.  revise checkpoint.c  under the source folder 
"~/programes/backups/gromacs-4.0.5/src/gmxlib"

2.  make -j 4

3. cd ../../  (i. e., ~/programes/backups/gromacs-4.0.5)"make
install mdrun"

One of the output lines " /usr/bin/install -c -m 644 './ngmx.1'
'/data1/HLRS/dgrid/stbw0172/programes/gromacs-4.0.5/bin//share/man/man1/ngmx.1'"
says that the latest mdrun is installed in the right folder.



Is there something wrong in the above process? or some further
suggestions?



thanks a lot!



regards,

Baofu Qiao



Berk Hess wrote:

  I think this might be a bug in the checkpointing code.
The negative number is correct, these are the lower 32 bits of the file offset.
But they are combined incorrectly with the higher part.

Could you try changing line 781 of src/gmxlib/checkpoint.c from:
outputfiles[i].offset = ( ((off_t) offset_high) << 32 ) | ( (off_t) offset_low );
to
outputfiles[i].offset = ( ((off_t) offset_high) << 32 ) | ( (off_t) offset_low & 4294967295U );
recompile gmxlib and mdrun
and try to continue from the point that gave the error?
You might want to copy all the files to make sure you do not loose anything.

Please report back if this worked or not.

Berk

Date: Fri, 7 Aug 2009 12:10:04 +0200
From: qia...@gmail.com
To: gmx-users@gromacs.org
Subject: Re: [gmx-users] question about mdrun -append






  


Hi Berk,



Thanks for your reply!  The result of  "gmxdump -cp T298.cpt" is given
as follows (the long list of x, v, E is not shown)

**

number of output files = 4

output filename = T298.log

file_offset_high = 0

file_offset_low = 7228659

output filename = T298.trr

file_offset_high = 0

file_offset_low = 0

output filename = T298.xtc

file_offset_high = 0

file_offset_low = -2020699160

output filename = T298.edr

file_offset_high = 0

file_offset_low = 13270732

**

But I don't know what this number means! On the other hand, another
system, which can be continued using "mdrun -cpi T298.cpt", gives the
following result of "gmxdump -cp T298.cpt"

***

number of output files = 4

output filename = T298.log

file_offset_high = 0

file_offset_low = 7066641

output filename = T298.trr

file_offset_high = 0

file_offset_low = 0

output filename = T298.xtc

file_offset_high = 0

file_offset_low = 2144960064

output filename = T298.edr

file_offset_high = 0

file_offset_low = 19540796

***

It seems that the difference is: there is a negative number
(-2020699160) in the former case. 



So what should I do?



regards,





Berk Hess wrote:

  My guess would be that for some reason (which I don't know),
your xtc file has not been written to disk completely.
You can check this by using gmxdump -cp on your checkpoint file
and looking at the size of the xtc file.

Berk

  
  
Date: Fri, 7 Aug 2009 10:19:35 +0200
From: qia...@gmail.com
To: gmx-users@gromacs.org
Subject: [gmx-users] question about mdrun -append

Hi all,

I meet one problem when I am using mdrun -append. In some cases, the
following error information happens, and mdrun is halted
*
Program mdrun, VERSION 4.0.5
Source code file: checkpoint.c, line: 1248
Fatal error:
Truncation of file T298.xtc failed.
*
Line 1248 in checkpoint.c is the inner loop as follows
*
if(bAppendOutputFiles) {
for(i=0;i<
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php

  
  _
What can you do with the new Windows Live? Find out
http://www.microsoft.com/windows/windowslive/default.aspx
  
  
___
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users

Re: [gmx-users] question about mdrun -append

2009-08-07 Thread Baofu Qiao




Hi Berk,

The error still exists!  
The following procedure was used:
1. Revise checkpoint.c under the source folder 
"~/programes/backups/gromacs-4.0.5/src/gmxlib"
2. make -j 4
3. cd ../../ (i.e., ~/programes/backups/gromacs-4.0.5), then "make
install mdrun"
One of the output lines, " /usr/bin/install -c -m 644 './ngmx.1'
'/data1/HLRS/dgrid/stbw0172/programes/gromacs-4.0.5/bin//share/man/man1/ngmx.1'",
suggests that the new build is installed in the right folder.

Is there something wrong in the above process, or do you have further
suggestions?

thanks a lot!

regards,
Baofu Qiao

Berk Hess wrote:

  I think this might be a bug in the checkpointing code.
The negative number is correct, these are the lower 32 bits of the file offset.
But they are combined incorrectly with the higher part.

Could you try changing line 781 of src/gmxlib/checkpoint.c from:
outputfiles[i].offset = ( ((off_t) offset_high) << 32 ) | ( (off_t) offset_low );
to
outputfiles[i].offset = ( ((off_t) offset_high) << 32 ) | ( (off_t) offset_low & 4294967295U );
recompile gmxlib and mdrun
and try to continue from the point that gave the error?
You might want to copy all the files to make sure you do not loose anything.

Please report back if this worked or not.

Berk

Date: Fri, 7 Aug 2009 12:10:04 +0200
From: qia...@gmail.com
To: gmx-users@gromacs.org
Subject: Re: [gmx-users] question about mdrun -append






  


Hi Berk,



Thanks for your reply!  The result of  "gmxdump -cp T298.cpt" is given
as follows (the long list of x, v, E is not shown)

**

number of output files = 4

output filename = T298.log

file_offset_high = 0

file_offset_low = 7228659

output filename = T298.trr

file_offset_high = 0

file_offset_low = 0

output filename = T298.xtc

file_offset_high = 0

file_offset_low = -2020699160

output filename = T298.edr

file_offset_high = 0

file_offset_low = 13270732

**

But I don't know what this number means! On the other hand, another
system, which can be continued using "mdrun -cpi T298.cpt", gives the
following result of "gmxdump -cp T298.cpt"

***

number of output files = 4

output filename = T298.log

file_offset_high = 0

file_offset_low = 7066641

output filename = T298.trr

file_offset_high = 0

file_offset_low = 0

output filename = T298.xtc

file_offset_high = 0

file_offset_low = 2144960064

output filename = T298.edr

file_offset_high = 0

file_offset_low = 19540796

***

It seems that the difference is: there is a negative number
(-2020699160) in the former case. 



So what should I do?



regards,





Berk Hess wrote:

  My guess would be that for some reason (which I don't know),
your xtc file has not been written to disk completely.
You can check this by using gmxdump -cp on your checkpoint file
and looking at the size of the xtc file.

Berk

  
  
Date: Fri, 7 Aug 2009 10:19:35 +0200
From: qia...@gmail.com
To: gmx-users@gromacs.org
Subject: [gmx-users] question about mdrun -append

Hi all,

I meet one problem when I am using mdrun -append. In some cases, the
following error information happens, and mdrun is halted
*
Program mdrun, VERSION 4.0.5
Source code file: checkpoint.c, line: 1248
Fatal error:
Truncation of file T298.xtc failed.
*
Line 1248 in checkpoint.c is the inner loop as follows
*
if(bAppendOutputFiles) {
for(i=0;i<
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php

  
  _
What can you do with the new Windows Live? Find out
http://www.microsoft.com/windows/windowslive/default.aspx
  
  
___
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php



_
Express yourself instantly with MSN Messenger! Download today it's FREE!
http://messenger.msn.click-url.com/go/onm00200471ave/direct/01/
  
  

___
gmx-users mailing listgmx-users@gromacs.org

Re: [gmx-users] question about mdrun -append

2009-08-07 Thread Baofu Qiao




Hi Berk,

Thanks for your reply!  The result of  "gmxdump -cp T298.cpt" is given
as follows (the long list of x, v, E is not shown)
**
number of output files = 4
output filename = T298.log
file_offset_high = 0
file_offset_low = 7228659
output filename = T298.trr
file_offset_high = 0
file_offset_low = 0
output filename = T298.xtc
file_offset_high = 0
file_offset_low = -2020699160
output filename = T298.edr
file_offset_high = 0
file_offset_low = 13270732
**
But I don't know what this number means! On the other hand, another
system, which can be continued using "mdrun -cpi T298.cpt", gives the
following result of "gmxdump -cp T298.cpt"
***
number of output files = 4
output filename = T298.log
file_offset_high = 0
file_offset_low = 7066641
output filename = T298.trr
file_offset_high = 0
file_offset_low = 0
output filename = T298.xtc
file_offset_high = 0
file_offset_low = 2144960064
output filename = T298.edr
file_offset_high = 0
file_offset_low = 19540796
***
It seems that the difference is: there is a negative number
(-2020699160) in the former case. 
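(Read as an unsigned 32-bit number, -2020699160 corresponds to
4294967296 - 2020699160 = 2274268136 bytes, i.e. the .xtc file has simply
grown beyond 2^31 bytes (2 GiB) and the signed low word of the 64-bit file
offset has wrapped negative; this fits the sign-extension issue addressed
by the checkpoint.c patch elsewhere in this thread.)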

So what should I do?

regards,


Berk Hess wrote:

  My guess would be that for some reason (which I don't know),
your xtc file has not been written to disk completely.
You can check this by using gmxdump -cp on your checkpoint file
and looking at the size of the xtc file.

Berk

  
  
Date: Fri, 7 Aug 2009 10:19:35 +0200
From: qia...@gmail.com
To: gmx-users@gromacs.org
Subject: [gmx-users] question about mdrun -append

Hi all,

I meet one problem when I am using mdrun -append. In some cases, the
following error information happens, and mdrun is halted
*
Program mdrun, VERSION 4.0.5
Source code file: checkpoint.c, line: 1248
Fatal error:
Truncation of file T298.xtc failed.
*
Line 1248 in checkpoint.c is the inner loop as follows
*
if(bAppendOutputFiles) {
for(i=0;i<
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php

  
  
_
What can you do with the new Windows Live? Find out
http://www.microsoft.com/windows/windowslive/default.aspx
  
  

___
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php




___
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php

[gmx-users] question about mdrun -append

2009-08-07 Thread Baofu Qiao
Hi all,

I have run into a problem when using mdrun -append. In some cases, the
following error occurs and mdrun halts:
*
Program mdrun, VERSION 4.0.5
Source code file: checkpoint.c, line: 1248
Fatal error:
Truncation of file T298.xtc failed.
*
Line 1248 in checkpoint.c is the inner loop as follows
*
if(bAppendOutputFiles) {
for(i=0;i<
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php


Re: [gmx-users] question about continuation using tpbconv and mdrun -cpi

2009-07-30 Thread Baofu Qiao




Hi Berk,

Thanks for pointing out my mistake!
What I worry about is how big the deviation becomes after several
continuations. If it is unpredictable, that is sad news.

regards,
Baofu Qiao

Berk Hess wrote:

  Hi,

Your command line is incorrect.
-reprod should be used without yes.
-reprod is yes
-noreprod is no

mdrun -cpi has no effect on your results at all.
But always remember that MD is chaotic, there is no point in trying
to reproduce trajectories exactly, unless for a very special purpose.

Berk

  
  
Date: Thu, 30 Jul 2009 16:53:13 +0200
From: qia...@gmail.com
To: gmx-users@gromacs.org
Subject: Re: [gmx-users] question about continuation using tpbconv and mdrun	-cpi

Hi Mark,

Sorry to trouble you again!

I made two tests, by using one-processor and one 32-processors on the
same cluster. I used 4000SPC/E waters (OPLS-AA ff.) The former
(one-processor) gives the exactly the same potential. However, the
latter still shown some deviation of the potential. When I use "-reprod
yes", the potential is -190098+-457.199 for "Comparison", and
-190088+-467.225 for "Reference". When I use "-dlb no", -190116+-476.512
for "Comparison", and -190114+-483.749 for "Reference".

The following is the job script lines when using "-reprod yes":

# Reference
grompp -f full.mdp -c initial.gro -p system.top -o full
mpirun -np 32 mdrun -deffnm full -reprod yes

# Comparison
grompp -f full.mdp -c initial.gro -p system.top -o full2
mpirun -np 32 mdrun -deffnm full2 -reprod yes -cpt 2 -maxh 0.2
mpirun -np 32 mdrun -deffnm full2 -reprod yes -cpi full2.cpt -append
****

Do you have some further ideas?

best wishes,
Baofu Qiao


Mark Abraham wrote:


  Baofu Qiao wrote:
  
  
Hi Mark,

Thanks!
Because the maximum time for one single job is set to be 24hours on the
cluster I'm using, I want to make sure which is the best way to continue
the gmx jobs. I wonder how strong effect "mdrun -cpi" has?  From the
introduction of mdrun, it seems that there are some EXTRA energy frames,
but for the trajectory file (.xtc), there is no extra frames? Am I
right?

"mdrun -h
--> The contents will be binary identical (unless you use dynamic load
balancing), but for technical reasons there might be some extra energy
frames when using checkpointing (necessary for restarts without
appending)."

  
  The intent with GROMACS 4.x is for a user to be able to construct a
.tpr with a very long simulation time, and perhaps constrain mdrun
with -maxh (or rely on the cluster killing the job), and to use the
information in the checkpoint file to restart correctly and perhaps to
then use mdrun -append so that when the simulation is running
smoothly, only one set of files needs to exist. Thus one doesn't need
to trouble with using tpbconv correctly, crashes can restart
transparently, etc. The old-style approach still works, however.

Obviously you should (be able to) verify with mdrun -reprod that
whatever approach you use when you construct your job scripts leads to
simulations that are in principle reproducible. For production, don't
use -reprod because you will want the better speed from dynamic load
balancing, etc.

Mark
___
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before
posting!
Please don't post (un)subscribe requests to the list. Use the www
interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php

  


___
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php

  
  
_
Express yourself instantly with MSN Messenger! Download today it's FREE!
http://messenger.msn.click-url.com/go/onm00200471ave/direct/01/
  
  

___
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php





Re: [gmx-users] question about continuation using tpbconv and mdrun -cpi

2009-07-30 Thread Baofu Qiao
Hi Mark,

Sorry to trouble you again!

I made two tests, using one processor and 32 processors on the
same cluster, with 4000 SPC/E waters (OPLS-AA ff). The former
(one processor) gives exactly the same potential. However, the
latter still shows some deviation in the potential. When I use "-reprod
yes", the potential is -190098+-457.199 for "Comparison" and
-190088+-467.225 for "Reference". When I use "-dlb no", it is
-190116+-476.512 for "Comparison" and -190114+-483.749 for "Reference".

The following is the job script lines when using "-reprod yes":

# Reference
grompp -f full.mdp -c initial.gro -p system.top -o full
mpirun -np 32 mdrun -deffnm full -reprod yes

# Comparison
grompp -f full.mdp -c initial.gro -p system.top -o full2
mpirun -np 32 mdrun -deffnm full2 -reprod yes -cpt 2 -maxh 0.2
mpirun -np 32 mdrun -deffnm full2 -reprod yes -cpi full2.cpt -append
****

Do you have some further ideas?

best wishes,
Baofu Qiao


Mark Abraham wrote:
> Baofu Qiao wrote:
>> Hi Mark,
>>
>> Thanks!
>> Because the maximum time for one single job is set to be 24hours on the
>> cluster I'm using, I want to make sure which is the best way to continue
>> the gmx jobs. I wonder how strong effect "mdrun -cpi" has?  From the
>> introduction of mdrun, it seems that there are some EXTRA energy frames,
>> but for the trajectory file (.xtc), there is no extra frames? Am I
>> right?
>>
>> "mdrun -h
>> --> The contents will be binary identical (unless you use dynamic load
>> balancing), but for technical reasons there might be some extra energy
>> frames when using checkpointing (necessary for restarts without
>> appending)."
>
> The intent with GROMACS 4.x is for a user to be able to construct a
> .tpr with a very long simulation time, and perhaps constrain mdrun
> with -maxh (or rely on the cluster killing the job), and to use the
> information in the checkpoint file to restart correctly and perhaps to
> then use mdrun -append so that when the simulation is running
> smoothly, only one set of files needs to exist. Thus one doesn't need
> to trouble with using tpbconv correctly, crashes can restart
> transparently, etc. The old-style approach still works, however.
>
> Obviously you should (be able to) verify with mdrun -reprod that
> whatever approach you use when you construct your job scripts leads to
> simulations that are in principle reproducible. For production, don't
> use -reprod because you will want the better speed from dynamic load
> balancing, etc.
>
> Mark
> ___
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at http://www.gromacs.org/search before
> posting!
> Please don't post (un)subscribe requests to the list. Use the www
> interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/mailing_lists/users.php
>


___
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php


Re: [gmx-users] question about continuation using tpbconv and mdrun -cpi

2009-07-29 Thread Baofu Qiao
Hi Mark,

Thanks!
Because the maximum time for a single job is set to 24 hours on the
cluster I'm using, I want to make sure which is the best way to continue
gmx jobs. I also wonder how strongly "mdrun -cpi" affects the results. From
the mdrun help text, it seems that there can be some EXTRA energy frames,
but for the trajectory file (.xtc) there are no extra frames. Am I right?

"mdrun -h
--> The contents will be binary identical (unless you use dynamic load
balancing), but for technical reasons there might be some extra energy
frames when using checkpointing (necessary for restarts without appending)."

best wishes,

Mark Abraham wrote:
> Baofu Qiao wrote:
>> Hi all,
>>
>> Recently I made some tests on the continuation of gmx 4.0.5. Firstly, I
>> run a complete simulation as the reference. In the full.mdp file,
>> gen_seed=123456 & continuation=yes are used to build the same initial
>> structure.  In all the simulations, "mdrun -dlb no" is used because I
>> want to reproduce exactly the same potential energy.
>>
>> 1. Using tpbconv.  (part1.mdp is the same as full.mdp except the
>> "nsteps")
>>   1a) grompp  -s part1
>>   mpirun -np 32 mdrun -deffnm part1 -dlb no
>>   tpbconv -s part1.tpr -f part1.trr -e part1.edr -extend 100 -o
>> part2
>>   mpirun -np 32 mdrun -deffnm part2 -dlb no
>>   eneconv -f part1.edr part2.edr -o part_all.edr
>>   1b) tpbconv -s part1.tpr -extend 100 -o part2.tpr
>>   mpirun -np 32 mdrun -deffnm part2 -cpi part1.cpt -dlb no
>>   eneconv -f part1.edr part2.edr -o part_all.edr
>> The potential energy is generated from g_energy, and compared. In this
>> method, I met two problems:  Q.a) The potential energies of
>> part2  in both 1a and 1b are not
>> exactly the same as the reference potential!  And also the potential of
>> part2 in 1a is different from that in 1b. (Both potentials of part1 from
>> 1a and 1b are identical to the reference.)
>>  Q.b) The RMSD of the potential energies of part_all.edr is very big
>> (1 order of magnitude bigger than the corresponding one from the
>> separate .edr files)
>>
>> 2) using mdrun -cpi -append
>> grompp -s full2
>> mpirun -np 32 mdrun -deffnm full2 -dlb no  (stop the job at some
>> time point, then run the following)
>> mpirun -np 32 mdrun -deffnm full2 -dlb no  -cpi full2_prev.cpt
>> -append
>> The second (-append) section of the potential energy is also different
>> from the result of the reference potential, even though I am using the
>> same full.mdp file.  (the potential in the former section is identical
>> to the reference)
>>
>> Then, how to reproduce exactly the potential?
>
> While "mdrun -dlb no" is a good start, "mdrun -reprod" takes care of
> some other things too...
>
>>  It is said that the
>> continuation from "mdrun -cpi" is binary identical. 
>
> Not quite... it's an exact restart from the old point in the ensemble,
> but the manner in which mdrun does future integration is only
> reproducible if you force it to be so.
>
>> However, it seems
>> not in my tests. What's the problem of the very big RMSD of the
>> potential from eneconv?
>
> Not sure.
>
> Mark
> ___
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at http://www.gromacs.org/search before
> posting!
> Please don't post (un)subscribe requests to the list. Use the www
> interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/mailing_lists/users.php
>

___
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php


[gmx-users] question about continuation using tpbconv and mdrun -cpi

2009-07-28 Thread Baofu Qiao
Hi all,

Recently I made some tests of the continuation features of gmx 4.0.5. First,
I ran a complete simulation as the reference. In the full.mdp file,
gen_seed=123456 & continuation=yes are used to build the same initial
structure. In all the simulations, "mdrun -dlb no" is used because I
want to reproduce exactly the same potential energy.

1. Using tpbconv.  (part1.mdp is the same as full.mdp except the "nsteps")
  1a) grompp  -s part1
  mpirun -np 32 mdrun -deffnm part1 -dlb no
  tpbconv -s part1.tpr -f part1.trr -e part1.edr -extend 100 -o part2
  mpirun -np 32 mdrun -deffnm part2 -dlb no
  eneconv -f part1.edr part2.edr -o part_all.edr
  1b) tpbconv -s part1.tpr -extend 100 -o part2.tpr
  mpirun -np 32 mdrun -deffnm part2 -cpi part1.cpt -dlb no
  eneconv -f part1.edr part2.edr -o part_all.edr
The potential energy is extracted with g_energy and compared. With this
method, I ran into two problems:
 Q.a) The potential energies of part2 in both 1a and 1b are not
exactly the same as the reference potential, and the potential of
part2 in 1a also differs from that in 1b. (Both potentials of part1 from
1a and 1b are identical to the reference.)
 Q.b) The RMSD of the potential energy from part_all.edr is very large
(an order of magnitude larger than the corresponding values from the
separate .edr files).

2) using mdrun -cpi -append
grompp -s full2
mpirun -np 32 mdrun -deffnm full2 -dlb no  (stop the job at some
time point, then run the following)
mpirun -np 32 mdrun -deffnm full2 -dlb no  -cpi full2_prev.cpt
-append
The potential energy in the second (appended) section is also different
from the reference potential, even though I am using the
same full.mdp file. (The potential in the first section is identical
to the reference.)

So how can I reproduce the potential exactly? It is said that the
continuation from "mdrun -cpi" is binary identical. However, that does not
seem to be the case in my tests. What causes the very large RMSD of the
potential from eneconv?
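
For reference, a minimal sketch of how the reference and continuation
potentials can be compared with the standard 4.0.x tools; full.edr below is a
placeholder name for the reference run's energy file, and, if I remember
correctly, gmxcheck can compare two energy files directly.

# potential energy time series from each run (term selected on stdin)
echo "Potential" | g_energy -f full.edr     -o pot_full.xvg
echo "Potential" | g_energy -f part_all.edr -o pot_part_all.xvg
# frame-by-frame comparison of the two energy files
gmxcheck -e full.edr -e2 part_all.edr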

Thanks in advance!

regards,
Baofu Qiao
 
___
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php


Re: [gmx-users] question about positive potential

2009-05-22 Thread Baofu Qiao
Hi Mark,

There are two polyelectrolytes in my system: poly(sodium
4-styrenesulfonate) and poly(diallyl dimethyl ammonium chloride).
Actually, the sodium and chloride ions are absent; only the polycation and
the polyanion are present. I couldn't find a suitable force field for the
sulfonate group, so I used the parameters reported for ionic liquids (which
are based on OPLS-AA) for the sulfonate, and OPLS-AA parameters for the
other atoms.

When I add some water into the polyelectrolyte complex system, the
potential becomes negative. See the comparison below (both at 400 K):

  system              potential (kJ/mol)     No. of atoms
  PE+ & PE-            7.89481e+03 (+-347)       4080
  PE+ & PE- + water   -3.04580e+04 (+-537)       7080

Any other suggestions?

I wonder: is the size of the system related to this problem? And when the
potential is positive, why doesn't the system expand (does a positive
potential have some physical meaning here)?

thanks in advance!

best wishes,
Baofu Qiao

Mark Abraham wrote:
> Baofu Qiao wrote:
>> Hi all,
>>
>> After a 3ns MD run, I took a look at the  .log file, and found the
>> potential is positive. How could it be?!  I wonder what is the
>> problem.  The standard deviation of density is small, while the SD of
>> potential is big. Is the system not equilibrated (if yes, why the sd
>> of density is so small?)?  Or there is something wrong with the force
>> field (I used OPLS-aa, while some parameter of sulfonate are taken
>> from Ionic liquid publication, because I didn't find related
>> parameters)? Or something else I have no idea?  How to solve it?
>
> Your bonded terms are much larger in magnitude than your nonbonded
> potentials. It's hard to diagnose when you haven't told us anything
> about the contents of your simulation system.
>
>> PS: A subsequent MD run at a lower temperature (340 K) gives a negative
>> potential; however, the SD of the potential is still quite large:
>> -819(+-)231 kJ/mol.
>
> Thus it would seem your force field may be unsuited to elevated
> temperatures.
>
> Mark
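
For illustration, a small sketch of how the individual terms can be pulled
out of the energy file to see which contributions dominate; md.edr is a
placeholder name, not a file from the original post. Terms whose names
contain spaces, such as "LJ (SR)", can instead be picked from g_energy's
interactive menu.

# write a few bonded and nonbonded terms plus the total potential to an
# .xvg file for plotting (terms selected on stdin by name)
echo "Bond Angle LJ-14 Coulomb-14 Potential" | g_energy -f md.edr -o terms.xvg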
___
gmx-users mailing listgmx-users@gromacs.org
http://www.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php


[gmx-users] question about positive potential

2009-05-22 Thread Baofu Qiao




Hi all,

After a 3 ns MD run, I took a look at the .log file and found that the
potential is positive. How can that be? I wonder what the
problem is. The standard deviation of the density is small, while the SD of
the potential is large. Is the system not equilibrated (and if so, why is the
SD of the density so small)? Or is there something wrong with the force
field (I used OPLS-AA, while some parameters for the sulfonate are taken from
an ionic liquid publication, because I didn't find related parameters)? Or is
it something else that I'm not aware of? How can I solve it?

PS: A subsequent MD run at a lower temperature (340 K) gives a negative
potential; however, the SD of the potential is still quite large:
-819(+-)231 kJ/mol.

Thanks in advance if you have some suggestions!

Some parameters used:
gmx 4.0.4, temperature coupling = Nose-Hoover (400 K), pressure coupling =
Parrinello-Rahman (1 bar), PME for Coulomb and cut-off for LJ interactions,
rlist = rvdw = rcoulomb = 1.0 nm, dt = 1 fs, and constraints = none.
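
For completeness, the settings above written out as a rough 4.0.x .mdp
fragment. This is a reconstruction from the list in the post, not the actual
input file; tc_grps, tau_t, tau_p, compressibility and DispCorr are
assumptions that were not stated.

# sketch of an .mdp fragment matching the listed settings
# (md.mdp is a placeholder file name)
cat > md.mdp << 'EOF'
integrator       = md
dt               = 0.001             ; 1 fs
constraints      = none
coulombtype      = PME
vdwtype          = cut-off
rlist            = 1.0
rcoulomb         = 1.0
rvdw             = 1.0
DispCorr         = EnerPres          ; assumed from the Disper. corr. term in the log
tcoupl           = nose-hoover
tc_grps          = System            ; assumed
tau_t            = 0.5               ; assumed, not given in the post
ref_t            = 400
pcoupl           = parrinello-rahman
tau_p            = 2.0               ; assumed, not given in the post
compressibility  = 4.5e-5            ; assumed, water-like
ref_p            = 1.0
EOF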

Results from the .log file (averages and RMS fluctuations over the run;
energies in kJ/mol):

    Term              Average          RMS fluct.
    Bond              7.51939e+03      1.63642e+02
    Angle             1.21907e+04      1.84279e+02
    Proper Dih.       1.12043e+02      6.75325e+00
    Ryckaert-Bell.    9.00085e+03      1.01384e+02
    LJ-14             5.06298e+03      5.80187e+01
    Coulomb-14       -4.27726e+03      2.44975e+01
    LJ (SR)          -9.61433e+03      9.62751e+01
    Disper. corr.    -7.25985e+02      3.70809e+00
    Coulomb (SR)     -3.76464e+03      6.08742e+01
    Coul. recip.     -1.30800e+04      3.71360e+01
    Potential         2.42375e+03      2.73923e+02
    Kinetic En.       2.03489e+04      2.61094e+02
    Total Energy      2.27727e+04      3.90285e+02
    Temperature (K)   4.0e+02          5.13234e+00
    Pressure (bar)    1.48081e+00      1.21958e+03
    Box-X (nm)        3.45196e+00      5.87539e-03
    Box-Y (nm)        3.45196e+00      5.87539e-03
    Box-Z (nm)        3.45196e+00      5.87539e-03
    Volume (nm^3)     4.11340e+01      2.10006e-01
    Density (SI)      1.12914e+03      5.76728e+00
    pV                1.27896e-01



___
gmx-users mailing listgmx-users@gromacs.org
http://www.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php