Re: [gmx-users] mdrun

2013-11-05 Thread Justin Lemkul



On 11/5/13 7:37 AM, MUSYOKA THOMMAS wrote:

Dear Users,
I am running MD simulations of a protein-ligand system. Sometimes when I do
an mdrun, be it for the energy minimization, the NVT and NPT
equilibration, or the actual MD run step, the output files are
named in a very odd way (strange extensions), e.g. em.gro.tpr or md.tpr.cpt,
md.tpr.xtc.

Can anyone explain the cause of this?



You are issuing the command in a way that you probably don't want.  I suspect
what you are doing is:

mdrun -deffnm md.tpr

The -deffnm option is for the base file name and should not include an
extension.  mdrun is only doing what you tell it; you're saying "all my files
are named md.tpr; put whatever extension is necessary on them."


What you want is:

mdrun -deffnm md
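
For illustration (assuming the default file options, so the input .tpr is
also taken from the base name):

mdrun -deffnm md.tpr   # base name "md.tpr": outputs come out as md.tpr.log, md.tpr.cpt, md.tpr.xtc, ...
mdrun -deffnm md       # base name "md": reads md.tpr, writes md.log, md.edr, md.xtc, md.cpt, md.gro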

-Justin

--
==

Justin A. Lemkul, Ph.D.
Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441

==


Re: [gmx-users] mdrun

2013-11-05 Thread MUSYOKA THOMMAS
Dear Dr Justin,
Much appreciation. You nailed it.
Kind regards.


On Tue, Nov 5, 2013 at 2:41 PM, Justin Lemkul jalem...@vt.edu wrote:

 The -deffnm option is for the base file name and should not include an
 extension.

 What you want is:

 mdrun -deffnm md

 -Justin

-- 

MUSYOKA THOMMAS MUTEMI
Mob no. +27844846540
B.Sc Biochemistry (Kenyatta University), MSc Pharmaceutical Science
(Nagasaki University)
PhD Student - Bioinformatics (Rhodes University)
Skype ID: MUSYOKA THOMMAS MUTEMI
Alternative email: thom...@sia.co.ke

"Do all the good you can, By all the means you can, In all the ways you
can, In all the places you can, At all the times you can, To all the people
you can, As long as ever you can." - John Wesley


Re: [gmx-users] mdrun cpt

2013-10-29 Thread Mark Abraham
On Oct 29, 2013 1:26 AM, Pavan Ghatty pavan.grom...@gmail.com wrote:

 Now /afterok/ might not work since technically the job is killed due to
 walltime limits - making it not ok.

Hence use -maxh!

Mark

 So I suppose /afterany/ is a better
 option. But I do appreciate your warning about spamming the queue and yes I
 will re-read PBS docs.



Re: [gmx-users] mdrun cpt

2013-10-28 Thread Pavan Ghatty
I need to collect 100 ns, but I can collect only ~1 ns (1000 steps) per
run. Since I don't have .trr files, I rely on .cpt files for restarts. For
example,

grompp -f md.mdp -c md_14.gro -t md_14.cpt -p system.top -o md_15

This runs into a problem when the run gets killed due to walltime limits. I
now have an .xtc file which has run (say) 700 steps and a .cpt file which
was last written at the 600th step.
- To set up the next run I use the .cpt file from the 600th step.
- Now during analysis, if I want to center the protein and such, /trjconv/
needs an .xtc and a .tpr file but not a .cpt file. So how does /trjconv/ know
to stop at the 600th step? If this has to be put in manually, it becomes
cumbersome.

Thoughts?





On Sun, Oct 27, 2013 at 11:38 AM, Justin Lemkul jalem...@vt.edu wrote:



 On 10/27/13 9:37 AM, Pavan Ghatty wrote:

 Hello All,

 Is there a way to make mdrun put out a .cpt file with the same frequency as
 a .xtc or .trr file. From here
 http://www.gromacs.org/Documentation/How-tos/Doing_Restarts I see that we
 can choose how often (time in mins) the .cpt file is written. But clearly
 if the frequency of output of .cpt (frequency in mins) and .xtc (frequency
 in simulation steps) do not match, it can create problems during analysis;
 especially in the event of frequent crashes. Also, I am not storing a .trr
 file since I don't need that precision.
 I am using Gromacs 4.6.1.


 What problems are you experiencing?  There is no need for .cpt frequency
 to be the same as .xtc frequency, because any duplicate frames should be
 handled elegantly when appending.

 -Justin



Re: [gmx-users] mdrun cpt

2013-10-28 Thread Mark Abraham
On Mon, Oct 28, 2013 at 4:27 PM, Pavan Ghatty pavan.grom...@gmail.com wrote:

 I need to collect 100 ns, but I can collect only ~1 ns (1000 steps) per
 run. Since I don't have .trr files, I rely on .cpt files for restarts. For
 example,

 grompp -f md.mdp -c md_14.gro -t md_14.cpt -p system.top -o md_15

 This runs into a problem when the run gets killed due to walltime limits. I
 now have an .xtc file which has run (say) 700 steps and a .cpt file which
 was last written at the 600th step.


You seem to have no need to use grompp, because you don't need to use a
workflow that generates multiple .tpr files. Do the equivalent of what the
restart page advises: mdrun -s topol.tpr -cpi state.cpt. Thus, make a .tpr
for the whole 100ns run, and then keep doing

mdrun -s whole-run -cpi whateverwaslast -deffnm whateversuitsyouthistime

with or without -append, perhaps with -maxh, keeping whatever manual
backups you feel necessary. Then perhaps concatenate your final trajectory
files, according to your earlier choices.
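
Concretely, that could look like this (a sketch; file names and the -maxh
value are illustrative, with -maxh set a little under the queue's walltime):

grompp -f md.mdp -c md_14.gro -t md_14.cpt -p system.top -o whole-run    # once, with nsteps set for the full 100 ns
mdrun -s whole-run.tpr -cpi whole-run.cpt -deffnm whole-run -maxh 11.5 -append    # every job thereafter

On the very first job, when whole-run.cpt does not yet exist, mdrun should
simply start from the .tpr.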

 - To set up the next run I use the .cpt file from the 600th step.
 - Now during analysis, if I want to center the protein and such, /trjconv/
 needs an .xtc and a .tpr file but not a .cpt file. So how does /trjconv/ know
 to stop at the 600th step?


trjconv just operates on the contents of the trajectory file, as modified
by things like -b -e and -dt. The .tpr just gives it context, such as atom
names. You could give it a .tpr from any point during the run.
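
For example, a sketch of such a centering command (the groups are chosen
interactively):

trjconv -s whole-run.tpr -f whole-run.xtc -o centered.xtc -center -pbc mol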

Mark

 If this has to be put in manually, it becomes cumbersome.

 Thoughts?







Re: [gmx-users] mdrun cpt

2013-10-28 Thread Pavan Ghatty
Mark,

The problem with one .tpr file set for 100 ns is that when job number (say)
4 hits the wall limit, it crashes and never gets a chance to submit the
next job. So it's not really automated.

Now I could initiate job 5 before /mdrun/ in job 4's script and hold job 5
till job 4 ends. But the PBS queuing system is sometimes weird and takes a
bit of time to recognize a job and give back its job ID. So I could submit
job 5 but be unable to change its status to /hold/ because PBS has not yet
returned its ID. Another problem is that if resources are available, job 5
could start before I ever get a chance to /hold/ it.




On Mon, Oct 28, 2013 at 11:47 AM, Mark Abraham mark.j.abra...@gmail.com wrote:

 You seem to have no need to use grompp, because you don't need to use a
 workflow that generates multiple .tpr files. Do the equivalent of what the
 restart page advises: mdrun -s topol.tpr -cpi state.cpt. Thus, make a .tpr
 for the whole 100ns run, and then keep doing

 mdrun -s whole-run -cpi whateverwaslast -deffnm whateversuitsyouthistime

 with or without -append, perhaps with -maxh, keeping whatever manual
 backups you feel necessary.

 Mark

Re: [gmx-users] mdrun cpt

2013-10-28 Thread jkrieger
No, this isn't a problem. You can use job names with the -hold_jid flag,
as long as you change the job name in the submit script between
submissions. You could have a submit script for job 4
with -N md_job4 and -hold_jid md_job3, then change these to -N md_job5 and
-hold_jid md_job4 for the next job. You can then submit job 5 as soon as
you have made this change, which can be within seconds of submitting job
4.
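
For illustration, job 4's submit script might begin like this (a sketch;
-N and -hold_jid as in Grid Engine's qsub, and the mdrun line is assumed):

#$ -N md_job4
#$ -hold_jid md_job3      # run only after md_job3 has finished
mdrun -s whole-run.tpr -cpi whole-run.cpt -maxh 11.5

For job 5, change the two names to md_job5 and md_job4 and submit the same
script again.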

 Mark,

 The problem with one .tpr file set for 100ns is that when job number (say)
 4 hits the wall limit, it crashes and never gets a chance to submit the
 next job. So it's not really automated.


Re: [gmx-users] mdrun cpt

2013-10-28 Thread Pavan Ghatty
Aah yes of course. Thanks James.



On Mon, Oct 28, 2013 at 3:16 PM, jkrie...@mrc-lmb.cam.ac.uk wrote:

 No this isn't a problem. You can use job names under the -hold_jid flag.
 As long as you change the job name in the submit script between
 submissions this isn't a problem. You could have a submit script for job 4
 with -N md_job4 and -hold_jid md_job3 then change these to -N md_job5 and
 -hold_jid md_job4 for the next job. Then you can submit job 5 as soon as
 you have made this change which will be within seconds of submitting job
 4.


Re: [gmx-users] mdrun cpt

2013-10-28 Thread jkrieger
You're welcome

On 28 Oct 2013, at 20:03, Pavan Ghatty pavan.grom...@gmail.com wrote:

 Aah yes of course. Thanks James.
 
 
 

Re: [gmx-users] mdrun cpt

2013-10-28 Thread Mark Abraham
On Mon, Oct 28, 2013 at 7:53 PM, Pavan Ghatty pavan.grom...@gmail.com wrote:

 Mark,

 The problem with one .tpr file set for 100ns is that when job number (say)
 4 hits the wall limit, it crashes and never gets a chance to submit the
 next job. So it's not really automated.


That's why I suggested -maxh, so you can have an orderly shutdown. (Though
if a job can get suspended, that won't always help, because mdrun can't
find out about the suspension...)

 Now I could initiate job 5 before /mdrun/ in job 4's script and hold job 5
 till job 4 ends.


Sure - read your PBS docs and find the environment variable to read so that
job 4 knows its ID so it can submit job 5 with an afterok hold on job 4 on
it. But don't tell your sysadmins where I live. ;-) Seriously, if you live
on this edge, you could spam infinite jobs, which tends to get your account
cut off. That's why you want the afterok hold - you only want the next job
to start if the exit code from the first script correctly indicates that
mdrun exited correctly. Test carefully!
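
In Torque/PBS terms, that could look like this (a sketch; the afterok
dependency syntax and the $PBS_JOBID variable are Torque conventions, and
next_job.pbs is a hypothetical script name):

# inside job 4's script, before mdrun starts:
qsub -W depend=afterok:$PBS_JOBID next_job.pbs
mdrun -s whole-run.tpr -cpi whole-run.cpt -maxh 11.5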

Mark

 But the PBS queuing system is sometimes weird and takes a
 bit of time to recognize a job and give back its jobID. So I could submit
 job 5 but be unable to change its status to /hold/ because PBS does not
 return its ID. Another problem is that if resources are available, job 5
 could start before I ever get a chance to /hold/ it.





Re: [gmx-users] mdrun cpt

2013-10-28 Thread Pavan Ghatty
Now /afterok/ might not work since technically the job is killed due to
walltime limits - making it not "ok". So I suppose /afterany/ is a better
option. But I do appreciate your warning about spamming the queue and, yes,
I will re-read the PBS docs.


On Mon, Oct 28, 2013 at 5:11 PM, Mark Abraham mark.j.abra...@gmail.com wrote:

 On Mon, Oct 28, 2013 at 7:53 PM, Pavan Ghatty pavan.grom...@gmail.com
 wrote:

  Mark,
 
  The problem with one .tpr file set for 100ns is that when job number
 (say)
  4 hits the wall limit, it crashes and never gets a chance to submit the
  next job. So it's not really automated.
 

 That's why I suggested -maxh, so you can have an orderly shutdown. (Though
 if a job can get suspended, that won't always help, because mdrun can't
 find out about the suspension...)

 Now I could initiate job 5 before /mdrun/ in job 4's script and hold job 5
  till job 4 ends.


 Sure - read your PBS docs and find the environment variable to read so that
 job 4 knows its ID so it can submit job 5 with an afterok hold on job 4 on
 it. But don't tell your sysadmins where I live. ;-) Seriously, if you live
 on this edge, you could spam infinite jobs, which tends to get your account
 cut off. That's why you want the afterok hold - you only want the next job
 to start if the exit code from the first script correctly indicates that
 mdrun exited correctly. Test carefully!

 Mark


Re: [gmx-users] mdrun cpt

2013-10-27 Thread Justin Lemkul



On 10/27/13 9:37 AM, Pavan Ghatty wrote:

Hello All,

Is there a way to make mdrun put out a .cpt file with the same frequency as a
.xtc or .trr file? From here
http://www.gromacs.org/Documentation/How-tos/Doing_Restarts I see that we
can choose how often (time in mins) the .cpt file is written. But clearly
if the frequency of output of .cpt (frequency in mins) and .xtc (frequency
in simulation steps) do not match, it can create problems during analysis,
especially in the event of frequent crashes. Also, I am not storing a .trr
file since I don't need that precision.
I am using Gromacs 4.6.1.



What problems are you experiencing?  There is no need for .cpt frequency to be 
the same as .xtc frequency, because any duplicate frames should be handled 
elegantly when appending.
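
For reference, the checkpoint interval is set with mdrun's -cpt option (in
minutes), e.g.:

mdrun -deffnm md -cpt 60   # write a checkpoint every 60 minutes

while the .xtc frequency remains controlled by nstxtcout in the .mdp.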


-Justin

--
==

Justin A. Lemkul, Ph.D.
Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441

==


Re: [gmx-users] mdrun error

2013-07-21 Thread Justin Lemkul



On 7/21/13 12:18 AM, Collins Nganou wrote:

Dear Users,

when trying to run the following command:
mdrun -v -deffnm protein-EM-solvated -c protein-EM-solvated.pdb
I have received the error below.



Reading file dna-EM-solvated.tpr, VERSION 4.5.5 (single precision)
Starting 2 threads

---
Program mdrun, VERSION 4.5.5
Source code file: /build/buildd/gromacs-4.5.5/src/mdlib/domdec.c, line: 6005

Fatal error:
Domain decomposition does not support simple neighbor searching, use grid
searching or use particle decomposition
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors


I am looking for any suggestion that can help me to overcome this error.



Not all combinations of options are compatible, and mdrun already told you
exactly what to do.  If you want to use DD, you can't use ns_type = simple, so
you have to invoke mdrun -pd in this case or otherwise switch to ns_type =
grid.  How you proceed depends on what you're trying to achieve with your .mdp
settings.
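
That is, in the .mdp (excerpt; everything else unchanged):

ns_type = grid    ; instead of ns_type = simple

or, to keep simple neighbor searching, invoke particle decomposition:

mdrun -pd -v -deffnm protein-EM-solvated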


-Justin

--
==

Justin A. Lemkul, Ph.D.
Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441

==


Re: [gmx-users] mdrun no error, but hangs no results

2013-07-17 Thread Mark Abraham
Perhaps you need a less prehistoric compiler. Or the affinity-setting bug
fix in 4.6.3. Or both.
On Jul 17, 2013 6:25 PM, Shi, Yu (shiy4) sh...@mail.uc.edu wrote:

 Dear gmx-users,

 My problem is weird.
 My mdrun worked well using the old serial version 4.5.5 (about two years
 ago), and I have those top, ndx, mdp, and gro files.
 Based on those old files, for the serial 4.6.2, grompp works through,
 producing the .tpr file successfully.
 After that, when I do the mdrun,
  mdrun -v -s em-nv.tpr -deffnm ss
 it only shows:
 Reading file em-nv.tpr, VERSION 4.6.2 (double precision)
 Using 8 MPI threads
 Killed
 and there is no further output before the job is killed.

 Also, my cmake installation process went well, so has anyone met this
 problem before?
 this problem before?


 And part of the logfile is:
 Log file opened on Wed Jul 17 12:17:30 2013
 Host: opt-login03.osc.edu  pid: 32177  nodeid: 0  nnodes:  1
 Gromacs version:VERSION 4.6.2
 Precision:  double
 Memory model:   64 bit
 MPI library:thread_mpi
 OpenMP support: disabled
 GPU support:disabled
 invsqrt routine:gmx_software_invsqrt(x)
 CPU acceleration:   SSE2
 FFT library:fftw-3.3-sse2
 Large file support: enabled
 RDTSCP usage:   enabled
 Built on:   Wed Jul 17 10:51:22 EDT 2013
 Built by:   ucn1...@opt-login03.osc.edu [CMAKE]
 Build OS/arch:  Linux 2.6.18-308.11.1.el5 x86_64
 Build CPU vendor:   AuthenticAMD
 Build CPU brand:Dual-Core AMD Opteron(tm) Processor 8218
 Build CPU family:   15   Model: 65   Stepping: 2
 Build CPU features: apic clfsh cmov cx8 cx16 htt lahf_lm mmx msr pse
 rdtscp sse2 sse3
 C compiler: /usr/bin/cc GNU cc (GCC) 4.1.2 20080704 (Red Hat
 4.1.2-48)
 C compiler flags:   -msse2 -Wextra -Wno-missing-field-initializers
 -Wno-sign-compare -Wall -Wno-unused -Wunused-value -Wno-unknown-pragmas
 -fomit-frame-pointer -funroll-all-loops  -O3 -DNDEBUG
 .
 .
 .
 .
 Initializing Domain Decomposition on 8 nodes
 Dynamic load balancing: no
 Will sort the charge groups at every domain (re)decomposition
 Using 0 separate PME nodes, as there are too few total
  nodes for efficient splitting
 Optimizing the DD grid for 8 cells with a minimum initial size of 0.000 nm
 Domain decomposition grid 8 x 1 x 1, separate PME nodes 0
 PME domain decomposition: 8 x 1 x 1
 Domain decomposition nodeid 0, coordinates 0 0 0

 Using 8 MPI threads

 Detecting CPU-specific acceleration.
 Present hardware specification:
 Vendor: AuthenticAMD
 Brand:  Dual-Core AMD Opteron(tm) Processor 8218
 Family: 15  Model: 65  Stepping:  2
 Features: apic clfsh cmov cx8 cx16 htt lahf_lm mmx msr pse rdtscp sse2 sse3
 Acceleration most likely to fit this hardware: SSE2
 Acceleration selected at GROMACS compile time: SSE2

 Table routines are used for coulomb: FALSE
 Table routines are used for vdw: FALSE
 Will do PME sum in reciprocal space.

  PLEASE READ AND CITE THE FOLLOWING REFERENCE 
 U. Essmann, L. Perera, M. L. Berkowitz, T. Darden, H. Lee and L. G.
 Pedersen
 A smooth particle mesh Ewald method
 J. Chem. Phys. 103 (1995) pp. 8577-8592
   --- Thank You ---  




Re: [gmx-users] mdrun segmentation fault for new build of gromacs 4.6.1

2013-06-06 Thread Roland Schulz
Hi,

I recommend running the regression tests. The simplest way is to build
GROMACS with cmake -DREGRESSIONTEST_DOWNLOAD=ON, and run make check.
See
http://www.gromacs.org/Documentation/Installation_Instructions#.c2.a7_4.12._Testing_GROMACS_for_correctness
for more details.
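
For example, from a clean build directory (a sketch; paths assumed):

cmake .. -DREGRESSIONTEST_DOWNLOAD=ON   # downloads the matching test set
make
make check                              # runs the regression tests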

Roland

On Thu, Jun 6, 2013 at 4:56 PM, Amil G. Anderson
aander...@wittenberg.edu wrote:
 Gromacs users:

 I have just built and installed gromacs-4.6.1 on my Xeon 5500 compute cluster 
 running Centos 5.  The installation was done with gcc 4.7.0

 I have run a simple test (the old tutor/gmxdemo) which fails at the first 
 mdrun step with a segmentation fault.  The command line for this step is:

 mdrun -nt 1 -s cpeptide_em -o cpeptide_em -c cpeptide_b4pr -v -debug 1

 where I have included the debug flag and have restricted the run to one core. 
  The files associated with this run are located at:

 https://www.dropbox.com/sh/h6867f7ivl5pcl9/j9gt9CsVdP


 I have done a test build of gromacs-4.5.4 (version I have been running the 
 last year) with the same build environment as the 4.6.1 build, including 
 using cmake.  The rebuild of gromacs-4.5.4 runs the demo completely.

 Given the limited information for the run (segmentation fault seems to occur 
 just after reading in the parameters), I'm not sure how to further pursue the 
 source of this error.  I have also tried building gromacs-4.6.2 but have the 
 same error for mdrun.

 Thanks for any insight that you may be able to provide.

 Dr. Amil Anderson
 Associate Professor of Chemistry
 Wittenberg University


-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309


Re: [gmx-users] mdrun outputs incorrect resnames

2013-05-26 Thread Tsjerk Wassenaar
No, the residue names are those from the .top file. But that's not the
same as the molecule types. You have to change the residue names in the
[ atoms ] section.
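
For example, in the copied itp the residue column of [ atoms ] should read
DOPR (a hypothetical excerpt; columns abbreviated):

[ atoms ]
;  nr  type  resnr  residue  atom  cgnr  charge
    1   CH3      1     DOPR    C1      1   0.000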

Cheers,

Tsjerk


On Sun, May 26, 2013 at 12:57 AM, Mark Abraham mark.j.abra...@gmail.com wrote:

 AFAIK, the residue names in the mdrun output .gro file are those of the
 structure file you gave to grompp.

 Mark






-- 
Tsjerk A. Wassenaar, Ph.D.


Re: [gmx-users] mdrun outputs incorrect resnames

2013-05-26 Thread Reid Van Lehn
Great, thank you, that did the trick. My fault for not realizing this
earlier.

Best,
Reid


On Sun, May 26, 2013 at 2:12 AM, Tsjerk Wassenaar tsje...@gmail.com wrote:

 No, the residue names are those from the .top file. But that's not the
 same as the moleculetype names. You have to change the residue names in the
 [ atoms ] section.

 Cheers,

 Tsjerk






-- 
Reid Van Lehn
NSF/MIT Presidential Fellow
Alfredo Alexander-Katz Research Group
Ph.D Candidate - Materials Science


Re: [gmx-users] mdrun outputs incorrect resnames

2013-05-25 Thread Mark Abraham
AFAIK, the residue names in the mdrun output .gro file are those of the
structure file you gave to grompp.

Mark


On Sun, May 26, 2013 at 12:31 AM, Reid Van Lehn rvanl...@gmail.com wrote:

 Hello,

 I am simulating a lipid bilayer and wish to apply position restraints to
 only a subset of the lipids in the bilayer. Since position restraints are
 applied to all molecules of the same molecule type, I defined a new
 molecule type (DOPR) which is identical to my lipid species (DOPC) by
 copying the lipid itp file, renaming it and renaming the corresponding
 molecule type. I then manually edited a starting .gro file to change a
 subset of the DOPC molecules to DOPR, edited my topology,
 renumbered/reordered, etc. I also recreated the index file to account for
 the new molecules so that temperature coupling could be used correctly.

 Everything seemed OK when I ran mdrun - grompp didn't complain, the
 program ran normally, the output trajectory clearly used the correct
 position restraints, etc. The weird part, though, is that the output .gro
 file at the end of the simulation only had DOPC molecules in it - the DOPR
 molecules that I had renamed by hand had somehow been output as DOPC
 instead. Positions, number of atoms, everything else was fine; only the
 residue names were different. I can't figure out why this is
 happening. It was reproducible across both GMX 4.5.5 / 4.6 and multiple
 different starting files.

 It's not a huge issue since the trajectories themselves are fine, I'm just
 worried this issue might indicate other, less obvious problems. A snippet
 of the topol file is below if that is helpful.

 Any suggestions / advice would be appreciated!

 
 #include "forcefield.itp"
 #include "dopc.itp"

 #ifdef POSRES
 #include "dopc-posre.itp"
 #endif

 ; Always include DOPR restraints for restrained lipids
 #include "dopc_restrained.itp"
 #include "dopr-posre.itp"

 #include "spc.itp"
 #include "ions.itp"

 [ system ]
 ; Name
 frame t= 1.000 in water

 [ molecules ]
 ; Compound   #mols
 DOPC           398
 DOPR             2
 SOL          63882
 NA             179
 CL             179
 *

 Thanks,
 Reid



Re: [gmx-users] mdrun generate a lot of files with #

2013-05-23 Thread Justin Lemkul



On 5/23/13 12:53 PM, mu xiaojia wrote:

Dear users,

I have used gromacs for a while; however, sometimes when I run it on
supercomputer clusters, I see that mdrun generates a lot of files with #,
which occupy a lot of space. Does anyone know why, and how to avoid it?

Thanks

example, my command:  mdrun_mpi -s -deffnm job -cpi -append

then besides ordinary md result files,  it will also generate things like:

#job.xtc.1#   #job.xvg.1#  job



These are backups of previous runs that have been issued with the same command.
All Gromacs commands have this behavior - rather than overwriting your files,
they back them up.  Either clean up your directories prior to issuing your
commands, or use unique file names or directories.


-Justin
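
One way to follow the second suggestion, sketched for a bash-like shell
(the naming scheme below is just an example, not from the original post):

# give every run its own directory so nothing collides with old output
run=run_$(date +%Y%m%d_%H%M%S)
mkdir "$run" && cd "$run"
mdrun_mpi -s ../job.tpr -deffnm job -cpi -append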

--


Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




Re: [gmx-users] mdrun generate a lot of files with #

2013-05-23 Thread Mark Abraham
Or take your backup life into your own hands and set the environment
variable GMX_MAXBACKUP=-1

Mark
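
Spelled out for a bash-like job script (a sketch of the suggestion above;
the mdrun line is the one from this thread):

export GMX_MAXBACKUP=-1   # disable the #...# backup copies; files are overwritten instead
mdrun_mpi -s -deffnm job -cpi -append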




Re: [gmx-users] mdrun and simulation time

2013-05-13 Thread Justin Lemkul



On 5/13/13 6:41 AM, Francesco wrote:

Good morning all,
This morning I checked the output of an 8 ns (4 x 2 ns) simulation and I
noticed some strange behaviour:
The first two simulations (2 ns each) ended correctly and both
took 2 h 06 min to finish.
The second two were still running when the cluster time was over (I
asked for 2:30) and were truncated.
My system contains around 160k atoms, and all the previous 2 ns simulations
took between 2 h and 2 h 10 min (77 cores, no GPU).

I had a look at the log, and it seems that in the last two simulations
mdrun did only 120,000 steps instead of 1,000,000.

Is this strange, or is it possible/normal that as the run is extended
(always split into 2 ns chunks) the running time grows?



Random performance loss often happens when one or more nodes being used for the 
job get stuck or have errors.  If you're submitting to a queuing system, there 
should be diagnostic information that your admins can access that would suggest 
why this is happening.


-Justin

--


Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




Re: [gmx-users] mdrun and simulation time

2013-05-13 Thread Francesco
thank you for the reply; I'm in contact with my admin and I hope that he
will tell me something soon.
One thing that I really don't understand is why only the last
nanoseconds are affected.
I ran the same simulation (with the same parameters) and have never had
problems in the first 4 ns, only in the last 4.

Fra



-- 
Francesco Carbone
PhD student
Institute of Structural and Molecular Biology
UCL, London
fra.carbone...@ucl.ac.uk


Re: [gmx-users] mdrun-gpu error message on Gromacs 4.5.5

2013-05-07 Thread Justin Lemkul



On 5/6/13 9:39 PM, Andrew DeYoung wrote:

Hi,

I am running mdrun-gpu on Gromacs 4.5.5 (with OpenMM).  This is my first
time using a GPU.  I get the following error message when attempting to run
mdrun-gpu with my .tpr file:

---
Program mdrun-gpu, VERSION 4.5.5
Source code file:
/usr/local/src/gromacs-4.5.5/src/kernel/openmm_wrapper.cpp, line: 1365

Fatal error:
OpenMM exception caught while initializating: getPropertyValue: Illegal
property name
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
---

I get this error when calling with:

mdrun-gpu -s topol.tpr

and with:

mdrun-gpu -device
OpenMM:platform=Cuda,memtest=15,deviceid=0,force-device=no -s topol.tpr

I can't seem to find this particular error message in the documentation or
in previous discussions on this mailing list.  Does this error message suggest
that I am calling mdrun-gpu incorrectly, or that OpenMM is improperly
installed?



It looks to be a rather generic OpenMM error message, which unfortunately isn't 
very helpful.  I got lots of those, along with random failures, that led me to 
abandon using OpenMM within Gromacs.  Support for OpenMM in Gromacs is limited 
at best, as OpenMM is being deprecated.  Is there any reason you're not using 
the native GPU support from 4.6.1?  The only reason to try to use OpenMM is for 
GPU-accelerated implicit solvent simulations, so if that's what you're doing I 
can understand.  Otherwise, use 4.6.1.


-Justin

--


Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




Re: [gmx-users] mdrun on GPU

2013-04-26 Thread Justin Lemkul



On 4/26/13 10:50 AM, Juliette N. wrote:

Hi all,

I am going to use the 4.6 version of gmx on a GPU. I am not sure of the mdrun
command though. I used to use mpirun -np 4 mdrun_mpi -deffnm .. in 4.5.4.
Can I use the same command line as before for mdrun or other tools?



Please read through the documentation here first:

http://www.gromacs.org/Documentation/Acceleration_and_parallelization

The exact details depend on the configuration of your hardware.
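
As a concrete sketch of a single-node GPU run on 4.6 (flags as documented
on the page above; the thread counts and GPU id are illustrative only):

# one thread-MPI rank driving GPU 0, with 6 OpenMP threads per rank
mdrun -deffnm md -ntmpi 1 -ntomp 6 -gpu_id 0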

-Justin

--


Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




Re: [gmx-users] mdrun -rdd and -dds flag

2013-04-11 Thread Justin Lemkul
On Thu, Apr 11, 2013 at 6:17 AM, manara r. (rm16g09) rm16...@soton.ac.uk wrote:

 Dear gmx-users,

 I am having a problem with a periodic molecule and the domain
 decomposition. I wish to use a high number of processors (circa 180, but
 this can obviously be reduced) and therefore need to use the -rdd or -dds
 flags (I believe); how do these values affect the simulation?

 The box size is 11.5 x 11.5 x 45.

 The polymer contains charged groups every repeating unit.

 The error I'm getting is (note this is a test run on my home machine, not
 the cluster):
 Fatal error:
 There is no domain decomposition for 4 nodes that is compatible with the
 given box and a minimum cell size of 61.7923 nm
 Change the number of nodes or mdrun option -rdd or -dds


There is no amount of hacking of -rdd or -dds that will solve this
problem. DD cell size should be on the order of the longest cutoff. Your
minimum size is huge, indicating you probably have either a bad topology or
incorrect .mdp settings.

http://www.gromacs.org/Documentation/Errors#There_is_no_domain_decomposition_for_n_nodes_that_is_compatible_with_the_given_box_and_a_minimum_cell_size_of_x_nm

-Justin

-- 



Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




Re: [gmx-users] mdrun -rdd and -dds flag

2013-04-11 Thread Broadbent, Richard


On 11/04/2013 11:17, manara r. (rm16g09) rm16...@soton.ac.uk wrote:

Dear gmx-users,

I am having a problem with a periodic molecule and the domain
decomposition. I wish to use a high number of processors (circa 180, but
this can obviously be reduced) and therefore need to use the -rdd or -dds
flags (I believe); how do these values affect the simulation?
This is well documented and can be easily viewed by running

mdrun -h

In general these will be set automatically and do not need to be altered.
 

The box size is 11.5 x 11.5 x 45.

The polymer contains charged groups every repeating unit.

Depending on how these are set up, this might be where your problem lies.
Charge groups are generally small neutral groups of atoms, say CH3 and
CH2 in a polyethylene molecule or CH in benzene. If your monomer is large
and treated as one charge group, that could cause some problems. The exact
setup depends on how they were parameterised in the force field. If you are
using gromacs 4.6 or 4.6.1 with the Verlet cutoff scheme, charge groups have
no effect anyway.



The error I'm getting is (note this is a test run on my home machine, not
the cluster):
Fatal error:
There is no domain decomposition for 4 nodes that is compatible with the
given box and a minimum cell size of 61.7923 nm
Change the number of nodes or mdrun option -rdd or -dds

Since that is larger than your box size it suggests something is
dramatically wrong with your setup. The two things I know of that could
potentially cause this are strange constraints in the topology combined
with LINCS, or inappropriate choice of charge groups in the topology. My
guess is the latter.


Regards

Rich Manara
PhD Student
Chemistry
Southampton University
UK


Re: [gmx-users] mdrun WARING and crash

2013-03-13 Thread Justin Lemkul



On 3/12/13 5:14 AM, l@utwente.nl wrote:

Hello Justin,

Thank you for your reply. I uploaded the images; please find the links
below.

start box:
http://s1279.beta.photobucket.com/user/Li_Liu/media/image0_zpsf95d10fe.jpeg.html?filters[user]=134822327&filters[recent]=1&filters[publicOnly]=1&sort=1&o=1

and a snapshot of the first step:
http://s1279.beta.photobucket.com/user/Li_Liu/media/image1_zps06f589e6.jpeg.html?filters[user]=134822327&filters[recent]=1&filters[publicOnly]=1&sort=1&o=0



Whatever physical model you are using is causing the particles to become 
regularly arranged.  There's no problem from the Gromacs side that I can see. 
The problem is in your setup (i.e. force field) or its use.  On that matter, I 
cannot comment.


-Justin

--


Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




RE: [gmx-users] mdrun WARING and crash

2013-03-12 Thread L.Liu
Hello Justin,

Thank you for your reply. I uploaded the images; please find the links
below.

start box:
http://s1279.beta.photobucket.com/user/Li_Liu/media/image0_zpsf95d10fe.jpeg.html?filters[user]=134822327&filters[recent]=1&filters[publicOnly]=1&sort=1&o=1

and a snapshot of the first step:
http://s1279.beta.photobucket.com/user/Li_Liu/media/image1_zps06f589e6.jpeg.html?filters[user]=134822327&filters[recent]=1&filters[publicOnly]=1&sort=1&o=0

Thanks a lot, and have a nice day.

Kind regards,
Li


RE: [gmx-users] mdrun WARING and crash

2013-03-12 Thread L.Liu
Hello Justin,

One update on the weird snapshot mentioned in my previous email.
I checked the output coordinates and plotted them with xmgrace in the xy
plane, finding that the system is not a crystal; instead it is a normal
homogeneous box. All this gives us a clue that the trajectory file may be
what has gone wrong, because we calculated the RDF and MSD, and viewed the
system in VMD, all through traj.xtc.
I am checking the way I output the trajectory. Could you please give me your
suggestions if there is something wrong in my .mdp file (included in an
earlier email)?

It seems that the system is now working, though still with some output errors
or perhaps further unexpected problems. Well, so far we have made progress,
and I want to say thank you very much for all your help.

Kind regards,
Li
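
A quick sanity check on the trajectory file itself, for what it's worth
(gmxcheck is the GROMACS 4.x checking tool; a minimal sketch, not from the
original thread):

gmxcheck -f traj.xtc   # reports frame count and time range, and complains about unreadable frames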


RE: [gmx-users] mdrun WARING and crash

2013-03-11 Thread L.Liu
Hello Justin,

Thank you for your comments.
Taking your suggestions, I set nrexcl=1 and commented out the [pairs] section,
because there is no special case of non-bonded interactions to declare, and
then tried to see what happens.
Now we minimize first by steep, then by cg; both finish very quickly, because
after around 7000 steps the energy cannot go down any further.
Then we run mdrun, and the energy output looks like:

           Step           Time         Lambda
              1           10.0            0.0

   Energies (kJ/mol)
        LJ (SR)   Coulomb (SR)      Potential    Kinetic En.   Total Energy
    1.49468e+05    0.00000e+00    1.49468e+05    8.73890e+03    1.58207e+05
    Temperature Pressure (bar)
    4.38208e+02    9.93028e+04

Although there is no runtime error this time, we find the outputs extremely
weird. For example, viewing the run with
vmd conf.gro traj.xtc
we see that frame 0 is a homogeneous box, but starting from the first step
the box becomes a lattice, which is far from what we expect a polymer melt
to look like.

The force parameters are taken from the literature (PRL 85(5), 1128 (2000)).
I am still very worried about the format of my input files. Could you please
give a very beginner some help?

Thanks a lot.

Kind regards,
Li

From: gmx-users-boun...@gromacs.org [gmx-users-boun...@gromacs.org] on behalf 
of Justin Lemkul [jalem...@vt.edu]
Sent: Thursday, February 28, 2013 3:02 PM
To: Discussion list for GROMACS users
Subject: Re: [gmx-users] mdrun WARING and crash

On 2/28/13 6:59 AM, l@utwente.nl wrote:
 Hello Justin,

 Thank you for your help. I have read the previous discussions on this topic,
 which were very helpful.
 The link is:
 http://gromacs.5086.n6.nabble.com/What-is-the-purpose-of-the-pairs-section-td5003598.html
 Well, there are still some things I want to make sure of, which might be the
 reason mdrun crashes on my system.

 ###Introduction of system##
 Linear polyethylene melt: each chain contains 16 beads, each bead
 coarse-graining 3 monomers. The number of chains in the box is 64.

 Force Field##
 ffbonded.itp
 [ bondtypes ]
 ; FENE, Ro = 1.5 sigma and kb = 30 epsilon/sigma^2
 ;   i     j   func   b0 (nm)   kb (kJ/mol nm^2)
   CH2   CH2      7     0.795   393.

 ffnonbonded.itp
 [ atomtypes ]
 ; epsilon / kB = 443K
 ; name  at.num  mass (au)  charge  ptype  sigma (nm)  epsilon (kJ/mol)
   CH2        1       42.3   0.000      A      0.5300          3.68133

 [ nonbond_params ]
 ;   i     j   func   sigma     epsilon
   CH2   CH2      1   0.530     3.68133

 [ pairtypes ]
 ;   i     j   func   sigma     epsilon
   CH2   CH2      1   0.53      3.68133

 topology##
 [ defaults ]
 ; nbfunc   comb-rule   gen-pairs   fudgeLJ   fudgeQQ
        1           2          no       1.0       1.0

 ; The force field files to be included
 #include ../forcefield/forcefield.itp

 [ moleculetype ]
 ; name  nrexcl
 PE      0

 [ atoms ]
 ;   nr   type   resnr   residu   atom   cgnr   charge
      1    CH2       1       PE      C      1      0.0
      2    CH2       1       PE      C      2      0.0
      3    CH2       1       PE      C      3      0.0
      4    CH2       1       PE      C      4      0.0
     ..
     15    CH2       1       PE      C     15      0.0
     16    CH2       1       PE      C     16      0.0

 [ bonds ]
 ;  ai   aj   funct   c0      c1
     1    2       7   0.795   393.
     2    3       7   0.795   393.
     3    4       7   0.795   393.
    ..
    14   15       7   0.795   393.
    15   16       7   0.795   393.

 [ pairs ]
 ;  ai   aj   funct   c0     c1
     1    2       1   0.53   3.68133
     1    3       1   0.53   3.68133
     1    4       1   0.53   3.68133
     2    3       1   0.53   3.68133
     2    4       1   0.53   3.68133
     2    5       1   0.53   3.68133
    ..
    14   15       1   0.53   3.68133
    14   16       1   0.53   3.68133
    15   16       1   0.53   3.68133

 ##mdp###

 ; directories to include in your topology. Format
 include      = -I/home/otterw/Install/gromacs-4.6/src/gmxlib
 integrator   = md
 ;
 ;  RUN CONTROL  *
 ;
 tinit        = 0     ; start time to run
 dt           = 1e-3

Re: [gmx-users] mdrun WARING and crash

2013-03-11 Thread Justin Lemkul


Please provide links to images. This is probably not a big deal as long as
the simulation is actually running, since a triclinic representation of the
unit cell is used.

-Justin




Re: [gmx-users] mdrun-gpu

2012-08-09 Thread Mark Abraham

On 9/08/2012 3:47 PM, cuong nguyen wrote:

Dear Gromacs Users,

I am trying Gromacs/4.5.5-OpenMM on a GPU with CUDA support.
When I run grompp-gpu to generate the .tpr file, it works well:
grompp-gpu -f input_min.mdp -o min.tpr -c box1.g96
However, when I then run mdrun-gpu (mdrun-gpu -s min -o min -c min.g96 -x min
-e min -g min), it stops with the error:
Fatal error: OpenMM supports only the following integrators:
md/md-vv/md-vv-avek, sd/sd1, and bd.

Could someone please explain what this error means and the appropriate
way to remedy it?


You chose an integrator in your .mdp file, perhaps to do energy
minimization (but since you're asking for help about a problem with your
integrator, you should have identified which integrator you were using
so you could tell us). It wasn't one of the legal set for GPU support,
so you will not be able to use that integrator with GPU-enabled GROMACS
and will have to use non-GPU GROMACS instead. When you are using one of
the supported integrators, you can use GPU-enabled GROMACS.


Mark


Re: [gmx-users] mdrun error

2012-08-09 Thread Justin Lemkul



On 8/9/12 1:40 PM, Shima Arasteh wrote:

Dear gmx users,

Would this error (as you see here) be a symptom of a system blowing up, or
should just the .mdp options be changed?

Fatal error:
1 of the 16625 bonded interactions could not be calculated because some atoms 
involved moved further apart than the multi-body cut-off distance (0.817695 nm) 
or the two-body cut-off distance (1 nm), see option -rdd, for pairs and 
tabulated bonds also see option -ddcheck
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors




Any time something moves too far, it's a case of blowing up.

-Justin

--


Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




Re: [gmx-users] mdrun graceful exit

2012-07-13 Thread Elton Carvalho
On Fri, Jul 6, 2012 at 1:13 AM, Mark Abraham mark.abra...@anu.edu.au wrote:

 Possibly not. This might be another instance of the GROMACS team having not
 put much effort into the EM code, on the theory that it doesn't run for long
 enough for developer effort there to pay off in significantly better user
 experience, given most people's workflows.


Thanks, Mark.

Probably I'll add a feature request in redmine then, since I have no
idea how to code such signal handling.

-- 
Elton Carvalho
Tel.: +55 11 3091-6985/6922
Dept Física dos Materiais e Mecânica
Instituto de Física
Universidade de São Paulo
P.O. Box 66318 - 05314-970 São Paulo-SP, Brazil


Re: [gmx-users] mdrun graceful exit

2012-07-05 Thread Mark Abraham

On 6/07/2012 2:46 AM, Elton Carvalho wrote:

Dear gmx-people.

I know that if you send a KILL signal to an mdrun instance running
integrator = md, it sets nsteps to the next NS step and exits
gracefully, but I don't see this happening for minimization runs.

Is it possible to send a signal to a minimization instance of mdrun to
make it exit as if nsteps was reached before convergence, and write the
relevant gro and trr files?


Possibly not. This might be another instance of the GROMACS team having 
not put much effort into the EM code, on the theory that it doesn't run 
for long enough for developer effort there to pay off in significantly 
better user experience, given most people's workflows.


Mark


Re: [gmx-users] mdrun no structural output

2012-07-03 Thread Justin A. Lemkul



On 7/3/12 5:40 AM, reising...@rostlab.informatik.tu-muenchen.de wrote:

Hi everybody,
I wanted to do a minimization with mdrun, but the only output I get is:
3m71_minim.edr
3m71_minim.log
3m71_minim.trr
But no structure file (e.g. a .pdb).

There was no error in the step before where I prepared the input file with
grompp. My .mdp file looks like this:

define  = -DPOSRES
integrator  = steep
emtol   = 10
nsteps  = 5000
nstenergy   = 1
energygrps  = System
coulombtype = PME
rcoulomb= 0.9
rvdw= 0.9
rlist   = 0.9
fourierspacing  = 0.12
pme_order   = 4
ewald_rtol  = 1e-5
pbc = xyz



And also in this step there was no error.
The end of the 3m71_minim.log looks like this:

Step   Time Lambda
2647 2647.00.0

Energies (kJ/mol)
Bond  AngleProper Dih.  Improper Dih.  LJ-14
 5.35530e+022.26340e+031.17875e+041.31106e+024.73257e+03
  Coulomb-14LJ (SR)   Coulomb (SR)   Coul. recip. Position Rest.
 6.58188e+049.41710e+04   -7.10195e+05   -1.62296e+059.89235e+02
   Potential Pressure (bar)
-6.92062e+05   -3.32594e+03


And the stdout looks like this:

Step= 2646, Dmax= 3.6e-03 nm, Epot= -6.92039e+05 Fmax= 5.12625e+03, atom=
1022
Step= 2647, Dmax= 4.3e-03 nm, Epot= -6.92098e+05 Fmax= 3.72457e+03, atom=
1022
Step= 2647, Dmax= 4.3e-03 nm, Epot= -6.92062e+05 Fmax= 1.51933e+03, atom=
1022
Step= 2648, Dmax= 5.2e-03 nm, Epot= -6.92099e+05 Fmax= 4.25221e+03, atom=
1022
Step= 2648, Dmax= 5.2e-03 nm, Epot= -6.92041e+05 Fmax= 6.45781e+03, atom=
1022

The command for the mdrun was:

mpirun -n 2 $gromacsPath/mdrun_mpi -c $path/3m71_minim.pdb -compact
-deffnm $path/3m71_minim -s $path/3m71_minim_ion.tpr -v
2>$path/minLogErr 1>$path/minLogOut

Can you please tell me what's wrong?



When EM is done, mdrun prints very clear messages indicating convergence (or 
lack thereof) with information about maximum force and potential energy.  If 
you're not seeing this information, mdrun isn't done or somehow got killed or 
hung up.


-Justin
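
A quick way to check what actually happened is to look at the end of the
log (file name from this thread; the quoted message is roughly what mdrun
prints on a successful EM run):

tail -n 20 $path/3m71_minim.log
# a finished run ends with something like:
# "Steepest Descents converged to Fmax < 10 in N steps"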

--


Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin






Re: [gmx-users] mdrun -v output

2012-06-18 Thread Javier Cerezo
Information messages, such as those shown on the screen during mdrun, are
output to stderr. So if you want to capture them you should redirect as
follows:

mdrun -v -s md.tpr 2> verbose.txt

In the case where you need to get all output (from both stdout and
stderr) you should use:

mdrun -v -s md.tpr &> verbose.txt

Javier
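
In the PBS context Chandan asked about, this might look like the following
sketch (job name and resource line are illustrative only):

#!/bin/bash
#PBS -N mdrun_test
#PBS -l nodes=1:ppn=8
cd $PBS_O_WORKDIR
mdrun -v -s md.tpr 2> verbose.txt   # the -v progress lines go to verbose.txt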

On 18/06/12 08:49, Chandan Choudhury wrote:


Dear gmx users,

Is it possible to redirect the output of mdrun -v to a file while
submitting the job using a pbs script?

 mdrun -v -s md.tpr > verbose.txt
does not produce output (to the file verbose.txt) while the job is running.

Chandan

--
Chandan kumar Choudhury
NCL, Pune
INDIA




--
Javier CEREZO BASTIDA
PhD Student
Physical Chemistry
Universidad de Murcia
Murcia (Spain)
Tel: (+34)868887434

Re: [gmx-users] mdrun -v output

2012-06-18 Thread Chandan Choudhury
On Mon, Jun 18, 2012 at 12:43 PM, Javier Cerezo j...@um.es wrote:

  Information messages, such as those shown on the screen during mdrun, are
 output to stderr. So if you want to capture them you should redirect as
 follows:

 mdrun -v -s md.tpr 2> verbose.txt

 In the case where you need to get all output (from both stdout and
 stderr) you should use:

 mdrun -v -s md.tpr &> verbose.txt


Thanks. It is actually  mdrun -v -s md.tpr &> verbose.txt



Re: [gmx-users] mdrun -v output

2012-06-18 Thread Peter C. Lai
It actually depends on your shell/environment :-P

On Sun Grid Engine and derivatives, you can have the scheduler capture
the stdout and stderr output through the -o and -e parameters, respectively.
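
For example (a sketch; the script name is hypothetical):

qsub -o md.out -e md.err run_md.sh   # SGE writes stdout to md.out, stderr to md.err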



-- 
==
Peter C. Lai| University of Alabama-Birmingham
Programmer/Analyst  | KAUL 752A
Genetics, Div. of Research  | 705 South 20th Street
p...@uab.edu| Birmingham AL 35294-4461
(205) 690-0808  |
==



Re: [gmx-users] mdrun -v output

2012-06-18 Thread Chandan Choudhury
Thanks Peter for the clarification.

Chandan

--
Chandan kumar Choudhury
NCL, Pune
INDIA


On Tue, Jun 19, 2012 at 2:27 AM, Peter C. Lai p...@uab.edu wrote:

 It actually depends on your shell/environment :-P

 On sun grid engine and derivatives, the you can have the scheduler capture
 the stdout and stderr output through the -o and -e parameters,
 respectively.

 On 2012-06-18 05:28:11PM +0530, Chandan Choudhury wrote:
  On Mon, Jun 18, 2012 at 12:43 PM, Javier Cerezo j...@um.es wrote:
 
Information messages, such as those shown on the screen during mdrun
 are
   output to stderr. So if you want to get them you should redirect as
 follows:
  
   mdrun -v -s md.tpr 2 verbose.txt
  
   In the case where you may need to get all output (from both stdout and
   stderr) you should use:
  
   mdrun -v -s md.tpr  verbose.txt
  
 
  Thanks. It is actually  mdrun -v -s md.tpr  verbose.txt
 
  
   Javier
  
   El 18/06/12 08:49, Chandan Choudhury escribió:
  
  
   Dear gmx users,
  
   Is it possible to redirect the output of mdrun -v to a file while
   submitting the job using a PBS script?
    mdrun -v -s md.tpr > verbose.txt
   does not produce output (to the file verbose.txt) while the job is running.
  
   Chandan
  
   --
   Chandan kumar Choudhury
   NCL, Pune
   INDIA
  
  
  
   --
   Javier CEREZO BASTIDA
   PhD Student
   Physical Chemistry
   Universidad de Murcia
   Murcia (Spain)
   Tel: (+34)868887434
  
  



 --
 ==
 Peter C. Lai| University of Alabama-Birmingham
 Programmer/Analyst  | KAUL 752A
 Genetics, Div. of Research  | 705 South 20th Street
 p...@uab.edu | Birmingham AL 35294-4461
 (205) 690-0808  |
 ==


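As a worked example of the advice in this thread, a minimal PBS submission
script might look like the following (the job name, output file names, and
script layout are illustrative assumptions, not taken from the thread):

   #!/bin/bash
   #PBS -N mdrun_job
   #PBS -o mdrun_job.out
   #PBS -e mdrun_job.err
   # The two directives above let the scheduler capture stdout and stderr.
   cd $PBS_O_WORKDIR
   # Alternatively, redirect explicitly; mdrun -v reports progress on stderr:
   mdrun -v -s md.tpr 2> verbose.txt

Output redirected to a file is often buffered, so verbose.txt can look empty
while the job is still running.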

Re: [gmx-users] mdrun -rerun

2012-04-12 Thread Juliette N.
Hello all,

I am trying to exclude nonbonded interactions on the polymer chains using

grompp -f old.mdp -c old_em.gro -p nrexcl_new.top -o new.tpr

and the mdrun -rerun command, but when I issue the command above, grompp
takes many hours to finish and at the end crashes (Killed).
This even leads to a weird hardware problem that causes the failure of a
node with some CPUs. I mean, when I get the grompp failure I am not able
to connect to the node anymore and have to restart the node!

grompp gives:

Generated 332520 of the 332520 1-4 parameter combinations
Opening library file /usr/local/gromacs/share/gromacs/top/spc.itp
Opening library file /usr/local/gromacs/share/gromacs/top/ions.itp
Excluding 101 bonded neighbours molecule type 'Polymer'

Can anyone help me please?

Thanks

On 8 April 2012 17:16, Justin A. Lemkul jalem...@vt.edu wrote:


 Juliette N. wrote:

 On 1 April 2012 20:17, Mark Abraham mark.abra...@anu.edu.au wrote:

 On 2/04/2012 10:10 AM, Juliette N. wrote:

 Hi all,

 I have an enquiry regarding calculation of heat of vaporization by
 estimating intermolecular nonbonded energies using mdrun rerun option.
 mdrun
 -rerun should break the total nonbonded energy coming from nonboded
 energy
 of (different molecules + a molecule with itself). By setting
 appropriate
 nrexcl in top file I am trying to exclude nonbonded part of molecule
 with
 itself within cut off radius so what remains would be intermolecular
 nonbonded energy between different molecules which determines heat of
 vaporization.

 1) Is this approach correct?


 For excluding a whole molecule, it could work. For excluding only a part,
 using energy group exclusions (see manual) is more flexible. Just setting
 energy groups suitably might work in your case, so that you get the
 group-wise break-down of nonbonded energy.


 2) If yes, can you please check the way I am applying mdrun rerun:

 grompp -p nrexcl_3.top -o total_nonbonded.tpr


 Hello all,

 I am a bit confused about whether or not an mdp file has to be provided
 for the grompp step of the rerun. In the original run (not the rerun) I
 provide the mdp as follows:

            grompp -f old.mdp -c  old_em.gro -p nrexcl_3.top -o
 total_nonbonded.tpr             (GROMPP old)
 then :
            mdrun -deffnm total_nonbonded -s -o -c -g -e

 Then I update the top file to new nrexcl= new value

 Now, do I have to provide the old mdp file and the old gro file old_em.gro,
 which were used in  (GROMPP old)? that is:

  (GROMPP new:)       grompp -f old.mdp -c old_em.gro -p nrexcl_new.top
 -o new.tpr

 mpirun -np 4 mdrun_mpi -rerun total_nonbonded.trr -deffnm new -s -o
 -c -g -e -x



 or I just have to use:
  grompp   -p nrexcl_new.top -o new.tpr     ( no -c and no -f flag)


 You need to provide some .mdp file and configuration.  If you omit -c and
 -f, grompp (like any other Gromacs program) will search for the default
 names of grompp.mdp and conf.gro.  If they don't exist, grompp will fail.



 My other question is that grompp with a large nrexcl (around 100) takes
 a lot of time, while the default nrexcl=3 was processed much faster.
 Why is excluding bonds in this way so time consuming?


 It's going slower because it's doing exponentially more work.

 -Justin

 --
 

 Justin A. Lemkul
 Ph.D. Candidate
 ICTAS Doctoral Scholar
 MILES-IGERT Trainee
 Department of Biochemistry
 Virginia Tech
 Blacksburg, VA
 jalemkul[at]vt.edu | (540) 231-9080
 http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin

 




-- 
Thanks,
J. N.


Re: [gmx-users] mdrun -rerun

2012-04-12 Thread Mark Abraham

On 13/04/2012 10:44 AM, Juliette N. wrote:

Hello all,

I am trying to exclude nonbonded interactions on the polymer chains using

grompp -f old.mdp -c old_em.gro -p nrexcl_new.top -o new.tpr

and the mdrun -rerun command, but when I issue the command above, grompp
takes many hours to finish and at the end crashes (Killed).
This even leads to a weird hardware problem that causes the failure of a
node with some CPUs. I mean, when I get the grompp failure I am not able
to connect to the node anymore and have to restart the node!

grompp gives:

Generated 332520 of the 332520 1-4 parameter combinations
Opening library file /usr/local/gromacs/share/gromacs/top/spc.itp
Opening library file /usr/local/gromacs/share/gromacs/top/ions.itp
Excluding 101 bonded neighbours molecule type 'Polymer'

Can anyone help me please?


Justin explained the fundamental reason for the problem. Apparently the 
implementation of large nrexcl values is not robust with respect to memory 
usage and/or CPU time for large numbers of exclusions. So you will have 
to use energy group exclusions like I suggested earlier in this thread.


Mark
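A minimal sketch of that energy-group route (the group names, index file, and
chain count are illustrative assumptions, not taken from this thread). To drop
only intra-chain nonbonded terms, each chain needs to be its own group:

   ; in the .mdp, with groups chain1/chain2 defined in an index file (make_ndx)
   energygrps     = chain1 chain2
   ; exclude nonbonded interactions within each chain, keep inter-chain terms
   energygrp_excl = chain1 chain1 chain2 chain2

With ten chains the pattern extends to chain1 chain1 ... chain10 chain10, and
grompp needs the index file via its -n flag. mdrun may refuse energy-group
exclusions in some parallel setups, so check its output for notes.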



Thanks

On 8 April 2012 17:16, Justin A. Lemkul jalem...@vt.edu  wrote:


Juliette N. wrote:

On 1 April 2012 20:17, Mark Abraham mark.abra...@anu.edu.au  wrote:

On 2/04/2012 10:10 AM, Juliette N. wrote:

Hi all,

I have an enquiry regarding calculation of heat of vaporization by
estimating intermolecular nonbonded energies using mdrun rerun option.
mdrun
-rerun should break the total nonbonded energy coming from nonbonded
energy
of (different molecules + a molecule with itself). By setting
appropriate
nrexcl in top file I am trying to exclude nonbonded part of molecule
with
itself within cut off radius so what remains would be intermolecular
nonbonded energy between different molecules which determines heat of
vaporization.

1) Is this approach correct?


For excluding a whole molecule, it could work. For excluding only a part,
using energy group exclusions (see manual) is more flexible. Just setting
energy groups suitably might work in your case, so that you get the
group-wise break-down of nonbonded energy.



2) If yes, can you please check the way I am applying mdrun rerun:

grompp -p nrexcl_3.top -o total_nonbonded.tpr


Hello all,

I am a bit confused about whether or not an mdp file has to be provided
for the grompp step of the rerun. In the original run (not the rerun) I
provide the mdp as follows:

grompp -f old.mdp -c  old_em.gro -p nrexcl_3.top -o
total_nonbonded.tpr (GROMPP old)
then :
mdrun -deffnm total_nonbonded -s -o -c -g -e

Then I update the top file to new nrexcl= new value

Now, do I have to provide the old mdp file and the old gro file old_em.gro,
which were used in  (GROMPP old)? that is:

  (GROMPP new:)   grompp -f old.mdp -c old_em.gro -p nrexcl_new.top
-o new.tpr

mpirun -np 4 mdrun_mpi -rerun total_nonbonded.trr -deffnm new -s -o
-c -g -e -x



or I just have to use:
  grompp   -p nrexcl_new.top -o new.tpr ( no -c and no -f flag)


You need to provide some .mdp file and configuration.  If you omit -c and
-f, grompp (like any other Gromacs program) will search for the default
names of grompp.mdp and conf.gro.  If they don't exist, grompp will fail.



My other question is that grompp with a large nrexcl (around 100) takes
a lot of time, while the default nrexcl=3 was processed much faster.
Why is excluding bonds in this way so time consuming?


It's going slower because it's doing exponentially more work.

-Justin

--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin










Re: [gmx-users] mdrun -rerun

2012-04-12 Thread Juliette N.
Thanks Mark. I have several polymer chains (single polymer type) each
having 362 atoms. So in order to exclude all nonbonded interactions of
a chain with itself I need to add about 362 lines in the top file.

[exclusions]
1 2 3 362
2 3 4 362
3 4 5 ...362
.
358 .. 362
.
.
360 361 362 (this is not needed even if I have nrexcl=3 in top file)

- I guess I need only the lines above, since in the top file,

[ molecules ]
; Compound#mols
Polymer 10

takes care of the exclusions for all other existing chains?

- Do I need to modify the mdp file as well (energygrps)?

Thanks
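For what it's worth, a toy version of that [ exclusions ] section for a
4-atom chain (purely illustrative; each line names an atom followed by the
atoms excluded from it, and grompp applies the exclusions symmetrically):

   [ exclusions ]
   ; atom   excluded partners
   1        2 3 4
   2        3 4
   3        4

Because the section sits inside the [ moleculetype ], every copy counted in
[ molecules ] inherits it.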

On 12 April 2012 20:58, Mark Abraham mark.abra...@anu.edu.au wrote:
 On 13/04/2012 10:44 AM, Juliette N. wrote:

 Hello all,

 I am trying to exclude nonbonded interactions on the polymer chains
 using

 grompp -f old.mdp -c old_em.gro -p nrexcl_new.top -o new.tpr

 and the mdrun -rerun command, but when I issue the command above, grompp
 takes many hours to finish and at the end crashes (Killed).
 This even leads to a weird hardware problem that causes the failure of a
 node with some CPUs. I mean, when I get the grompp failure I am not able
 to connect to the node anymore and have to restart the node!

 grompp gives:

 Generated 332520 of the 332520 1-4 parameter combinations
 Opening library file /usr/local/gromacs/share/gromacs/top/spc.itp
 Opening library file /usr/local/gromacs/share/gromacs/top/ions.itp
 Excluding 101 bonded neighbours molecule type 'Polymer'

 Can anyone help me please?


 Justin explained the fundamental reason for the problem. Apparently the
 implementation of large nrexcl values is not robust with respect to memory
 usage and/or CPU time for large numbers of exclusions. So you will have to
 use energy group exclusions like I suggested earlier in this thread.

 Mark



 Thanks

 On 8 April 2012 17:16, Justin A. Lemkul jalem...@vt.edu  wrote:


 Juliette N. wrote:

 On 1 April 2012 20:17, Mark Abraham mark.abra...@anu.edu.au  wrote:

 On 2/04/2012 10:10 AM, Juliette N. wrote:

 Hi all,

 I have an enquiry regarding calculation of heat of vaporization by
 estimating intermolecular nonbonded energies using mdrun rerun option.
 mdrun
 -rerun should break the total nonbonded energy coming from nonbonded
 energy
 of (different molecules + a molecule with itself). By setting
 appropriate
 nrexcl in top file I am trying to exclude nonbonded part of molecule
 with
 itself within cut off radius so what remains would be intermolecular
 nonbonded energy between different molecules which determines heat of
 vaporization.

 1) Is this approach correct?


 For excluding a whole molecule, it could work. For excluding only a
 part,
 using energy group exclusions (see manual) is more flexible. Just
 setting
 energy groups suitably might work in your case, so that you get the
 group-wise break-down of nonbonded energy.


 2) If yes, can you please check the way I am applying mdrun rerun:

 grompp -p nrexcl_3.top -o total_nonbonded.tpr


 Hello all,

 I am a bit confused about whether or not an mdp file has to be provided
 for the grompp step of the rerun. In the original run (not the rerun) I
 provide the mdp as follows:

            grompp -f old.mdp -c  old_em.gro -p nrexcl_3.top -o
 total_nonbonded.tpr             (GROMPP old)
 then :
            mdrun -deffnm total_nonbonded -s -o -c -g -e

 Then I update the top file to new nrexcl= new value

 Now, do I have to provide the old mdp file and the old gro file old_em.gro,
 which were used in  (GROMPP old)? that is:

  (GROMPP new:)       grompp -f old.mdp -c old_em.gro -p nrexcl_new.top
 -o new.tpr

 mpirun -np 4 mdrun_mpi -rerun total_nonbonded.trr -deffnm new -s -o
 -c -g -e -x



 or I just have to use:
  grompp   -p nrexcl_new.top -o new.tpr     ( no -c and no -f flag)

 You need to provide some .mdp file and configuration.  If you omit -c and
 -f, grompp (like any other Gromacs program) will search for the default
 names of grompp.mdp and conf.gro.  If they don't exist, grompp will fail.


 My other question is that grompp with a large nrexcl (around 100) takes
 a lot of time, while the default nrexcl=3 was processed much faster.
 Why is excluding bonds in this way so time consuming?

 It's going slower because it's doing exponentially more work.

 -Justin

 --
 

 Justin A. Lemkul
 Ph.D. Candidate
 ICTAS Doctoral Scholar
 MILES-IGERT Trainee
 Department of Biochemistry
 Virginia Tech
 Blacksburg, VA
 jalemkul[at]vt.edu | (540) 231-9080
 http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin

 






Re: [gmx-users] mdrun segmentation fault

2012-04-11 Thread Mark Abraham

On 12/04/2012 3:30 PM, priya thiyagarajan wrote:

hello sir,

Thanks for your kind reply..

I am performing the final MD run for 60 molecules.

After I submitted my job for 5 ns, when I analysed the result, the run had
initially completed only 314 ps.


At this point, you should have looked at your .log file and your stdout 
and stderr files to see why the simulation stopped, rather than blindly 
continuing.


Then I extended my simulation, but again it completed only 500 ps. Initially
I thought it was getting stopped because of the queue limit, but the third
time I extended my simulation it reached 538 ps.


When I analysed my log file, it just shows this:


DD  step 268999 load imb.: force  1.5%

           Step           Time         Lambda
         269000      538.00000        0.00000

   Energies (kJ/mol)
       G96Angle    Proper Dih.  Improper Dih.          LJ-14     Coulomb-14
    9.84673e+03    5.10424e+03    1.80376e+03    2.90591e+03   -2.97052e+03
        LJ (SR)  Disper. corr.   Coulomb (SR)   Coul. recip.      Potential
    9.69125e+04   -1.59583e+03   -7.80231e+05   -3.04364e+04   -6.98661e+05
    Kinetic En.   Total Energy    Temperature Pres. DC (bar) Pressure (bar)
    1.17775e+05   -5.80886e+05    2.97568e+02   -1.04033e+02    2.31825e+01
   Constr. rmsd
    2.24598e-05



When I checked my error file, I saw this is because of a segmentation fault.

My error file showed this:

Wrote pdb files with previous and current coordinates
Wrote pdb files with previous and current coordinates
/var/spool/PBS/mom_priv/jobs/244266.vega.SC: line 11: 26308 Segmentation
fault  mdrun -s md.tpr -o md3.trr -c md3.pdb -e md3.edr -g md3.log
-cpi state2.cpt -cpo state3.cpt -x traj3.xtc -noappend


There should be more descriptive error output somewhere. In any case, your
system is probably blowing up
(http://www.gromacs.org/Documentation/Terminology/Blowing_Up) and you need
to follow the diagnostic advice there.


Mark
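A small sketch of that kind of post-mortem check, with file names following
this thread (the grep patterns and the stderr file name are illustrative
assumptions):

   # The end of the mdrun log shows the last completed step and any notes:
   tail -n 60 md3.log
   # Scan the job's stderr capture (the name depends on your scheduler):
   grep -i -E 'warning|error|lincs|segmentation' job.err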




I searched the archives but I didn't get the point clearly.

Can anyone please help me by explaining the reason for my problem and how to
solve it?


I am waiting for your reply.

Please help me with your valuable answer.

I am using GROMACS version 4.5.5.

Thank you,







Re: [gmx-users] mdrun -rerun

2012-04-08 Thread Juliette N.
On 1 April 2012 20:17, Mark Abraham mark.abra...@anu.edu.au wrote:
 On 2/04/2012 10:10 AM, Juliette N. wrote:

 Hi all,

 I have an enquiry regarding calculation of heat of vaporization by
 estimating intermolecular nonbonded energies using mdrun rerun option. mdrun
 -rerun should break the total nonbonded energy coming from nonbonded energy
 of (different molecules + a molecule with itself). By setting appropriate
 nrexcl in top file I am trying to exclude nonbonded part of molecule with
 itself within cut off radius so what remains would be intermolecular
 nonbonded energy between different molecules which determines heat of
 vaporization.

 1) Is this approach correct?


 For excluding a whole molecule, it could work. For excluding only a part,
 using energy group exclusions (see manual) is more flexible. Just setting
 energy groups suitably might work in your case, so that you get the
 group-wise break-down of nonbonded energy.



 2) If yes, can you please check the way I am applying mdrun rerun:

 grompp -p nrexcl_3.top -o total_nonbonded.tpr

Hello all,

I am a bit confused about whether or not an mdp file has to be provided
for the grompp step of the rerun. In the original run (not the rerun) I
provide the mdp as follows:

grompp -f old.mdp -c  old_em.gro -p nrexcl_3.top -o
total_nonbonded.tpr (GROMPP old)
then :
mdrun -deffnm total_nonbonded -s -o -c -g -e

Then I update the top file to new nrexcl= new value

Now, do I have to provide the old mdp file and the old gro file old_em.gro,
which were used in  (GROMPP old)? that is:

 (GROMPP new:)   grompp -f old.mdp -c old_em.gro -p nrexcl_new.top
-o new.tpr

mpirun -np 4 mdrun_mpi -rerun total_nonbonded.trr -deffnm new -s -o
-c -g -e -x



or I just have to use:
  grompp   -p nrexcl_new.top -o new.tpr ( no -c and no -f flag)


My other question is that grompp with a large nrexcl (around 100) takes
a lot of time, while the default nrexcl=3 was processed much faster.
Why is excluding bonds in this way so time consuming?

Appreciate your comments,
Best


Re: [gmx-users] mdrun -rerun

2012-04-08 Thread Justin A. Lemkul



Juliette N. wrote:

On 1 April 2012 20:17, Mark Abraham mark.abra...@anu.edu.au wrote:

On 2/04/2012 10:10 AM, Juliette N. wrote:

Hi all,

I have an enquiry regarding calculation of heat of vaporization by
estimating intermolecular nonbonded energies using mdrun rerun option. mdrun
-rerun should break the total nonbonded energy coming from nonbonded energy
of (different molecules + a molecule with itself). By setting appropriate
nrexcl in top file I am trying to exclude nonbonded part of molecule with
itself within cut off radius so what remains would be intermolecular
nonbonded energy between different molecules which determines heat of
vaporization.

1) Is this approach correct?


For excluding a whole molecule, it could work. For excluding only a part,
using energy group exclusions (see manual) is more flexible. Just setting
energy groups suitably might work in your case, so that you get the
group-wise break-down of nonbonded energy.



2) If yes, can you please check the way I am applying mdrun rerun:

grompp -p nrexcl_3.top -o total_nonbonded.tpr


Hello all,

I am a bit confused about whether or not an mdp file has to be provided
for the grompp step of the rerun. In the original run (not the rerun) I
provide the mdp as follows:

grompp -f old.mdp -c  old_em.gro -p nrexcl_3.top -o
total_nonbonded.tpr (GROMPP old)
then :
mdrun -deffnm total_nonbonded -s -o -c -g -e

Then I update the top file to new nrexcl= new value

Now, do I have to provide the old mdp file and the old gro file old_em.gro,
which were used in  (GROMPP old)? that is:

 (GROMPP new:)   grompp -f old.mdp -c old_em.gro -p nrexcl_new.top
-o new.tpr

mpirun -np 4 mdrun_mpi -rerun total_nonbonded.trr -deffnm new -s -o
-c -g -e -x



or I just have to use:
  grompp   -p nrexcl_new.top -o new.tpr ( no -c and no -f flag)



You need to provide some .mdp file and configuration.  If you omit -c and -f, 
grompp (like any other Gromacs program) will search for the default names of 
grompp.mdp and conf.gro.  If they don't exist, grompp will fail.
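For illustration, the bare invocation and the defaults it implies (default
names as reported by grompp -h; worth verifying for your version):

   grompp   # behaves like: grompp -f grompp.mdp -c conf.gro -p topol.top -o topol.tpr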




My other question is that grompp with a large nrexcl (around 100) takes
a lot of time, while the default nrexcl=3 was processed much faster.
Why is excluding bonds in this way so time consuming?



It's going slower because it's doing exponentially more work.

-Justin

--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




Re: [gmx-users] mdrun -rerun

2012-04-01 Thread Mark Abraham

On 2/04/2012 10:10 AM, Juliette N. wrote:

Hi all,

I have an enquiry regarding calculation of heat of vaporization by 
estimating intermolecular nonbonded energies using mdrun rerun option. 
mdrun -rerun should break the total nonbonded energy coming from 
nonbonded energy of (different molecules + a molecule with itself). By 
setting appropriate nrexcl in top file I am trying to exclude 
nonbonded part of molecule with itself within cut off radius so what 
remains would be intermolecular nonbonded energy between different 
molecules which determines heat of vaporization.


1) Is this approach correct?


For excluding a whole molecule, it could work. For excluding only a 
part, using energy group exclusions (see manual) is more flexible. Just 
setting energy groups suitably might work in your case, so that you get 
the group-wise break-down of nonbonded energy.




2) If yes, can you please check the way I am applying mdrun rerun:

grompp -p nrexcl_3.top -o total_nonbonded.tpr

mdrun -deffnm total_nonbonded -s -o -c -g -e

I am done with these 5ns runs and now intend to exclude nonbonded 
interaction on a chain by increasing nrexcl in top file named 
nrexcl_new.top


grompp -p nrexcl_new.top -o new.tpr

mdrun -rerun total_nonbonded.trr -s new.tpr -o new -c new -g new -e new

2) Am I doing this correctly? I doubt because I provide -rerun 
total_nonbonded.trr but don't know how to introduce -rerun 
total_nonbonded.edr so that new energies get written to it?


You want to write new energies and keep the old ones in case you need 
them. There's no reason to (want to) re-introduce the old ones. mdrun 
-rerun accepts the trajectory to determine what configurations to 
compute about. It doesn't need to know what some other algorithm thought 
about the energies of that configuration.




3) If I want to re-calculate only the last 1 ns of runs (after system 
is equilibrated), can I use -b 4000 ? i.e.:


mdrun -rerun total_nonbonded.trr -s new.tpr -o new -c new -g new -e 
new -b 4000


Probably not. Check mdrun -h, but either way you can use trjconv first.

Mark
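Putting the pieces of this thread together, a sketch of the whole recipe,
including the trjconv trimming step (file names follow the thread; the exact
flags are worth checking against each tool's -h output):

   # Keep only the last 1 ns (t >= 4000 ps) of the original trajectory:
   trjconv -f total_nonbonded.trr -s total_nonbonded.tpr -b 4000 -o last1ns.trr
   # Recompute energies for those frames with the modified topology:
   grompp -f old.mdp -c old_em.gro -p nrexcl_new.top -o new.tpr
   mdrun -rerun last1ns.trr -s new.tpr -deffnm new
   # Pick the nonbonded terms out of the new energy file:
   g_energy -f new.edr -o nonbonded.xvg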


Re: [gmx-users] mdrun -rerun

2012-04-01 Thread Juliette N.
On 1 April 2012 20:17, Mark Abraham mark.abra...@anu.edu.au wrote:
 On 2/04/2012 10:10 AM, Juliette N. wrote:

 Hi all,

 I have an enquiry regarding calculation of heat of vaporization by
 estimating intermolecular nonbonded energies using mdrun rerun option. mdrun
 -rerun should break the total nonbonded energy coming from nonbonded energy
 of (different molecules + a molecule with itself). By setting appropriate
 nrexcl in top file I am trying to exclude nonbonded part of molecule with
 itself within cut off radius so what remains would be intermolecular
 nonbonded energy between different molecules which determines heat of
 vaporization.

 1) Is this approach correct?


 For excluding a whole molecule, it could work. For excluding only a part,
 using energy group exclusions (see manual) is more flexible. Just setting
 energy groups suitably might work in your case, so that you get the
 group-wise break-down of nonbonded energy.

Thank you Mark. I have a one-component system. I guess group exclusions are 
used for multicomponent systems where one needs to break down the total 
energies? In my case I need to exclude nonbonded interactions of a polymer 
chain with itself (avoid calculating interactions between the first and last 
atoms of a chain if they fall within the cutoff radius).

 2) If yes, can you please check the way I am applying mdrun rerun:

 grompp -p nrexcl_3.top -o total_nonbonded.tpr

 mdrun -deffnm total_nonbonded -s -o -c -g -e

 I am done with these 5ns runs and now intend to exclude nonbonded
 interaction on a chain by increasing nrexcl in top file named nrexcl_new.top

 grompp -p nrexcl_new.top -o new.tpr

 mdrun -rerun total_nonbonded.trr -s new.tpr -o new -c new -g new -e new

 2) Am I doing this correctly? I doubt because I provide -rerun
 total_nonbonded.trr but don't know how to introduce -rerun
 total_nonbonded.edr so that new energies get written to it?


 You want to write new energies and keep the old ones in case you need them.
 There's no reason to (want to) re-introduce the old ones. mdrun -rerun
 accepts the trajectory to determine what configurations to compute about. It
 doesn't need to know what some other algorithm thought about the energies of
 that configuration.

I am a bit puzzled now. mdrun -rerun takes the old trajectory and recomputes 
energies based on the old trajectory for each frame? If that's the case, then 
using nrexcl in order to obtain new trajectories fails, I guess, as mdrun 
-rerun does not produce a new trajectory but only a new energy file?!

In other words, say I am doing NPT; with the default nrexcl=3 I got a
trajectory file and a density. Now with a new nrexcl that is large
enough to exclude all nonbonded interactions of atoms on the same
chain, I should expect a new trajectory and a new density corresponding
to this modified nrexcl. My concern is that if what mdrun -rerun does is
just computing energies based on the old trajectory, I need to redo
the simulations with a new top file (nrexcl = new number) because this
new top file should affect the configuration. Am I right?

So for instance,

 3) If I want to re-calculate only the last 1 ns of runs (after system is
 equilibrated), can I use -b 4000 ? i.e.:

 mdrun -rerun total_nonbonded.trr -s new.tpr -o new -c new -g new -e new -b
 4000


 Probably not. Check mdrun -h, but either way you can use trjconv first.

 Mark



-- 
Thanks,
J. N.


Re: [gmx-users] mdrun -rerun

2012-04-01 Thread Mark Abraham

On 2/04/2012 11:16 AM, Juliette N. wrote:

On 1 April 2012 20:17, Mark Abraham mark.abra...@anu.edu.au  wrote:

On 2/04/2012 10:10 AM, Juliette N. wrote:

Hi all,

I have an enquiry regarding calculation of heat of vaporization by
estimating intermolecular nonbonded energies using mdrun rerun option. mdrun
-rerun should break the total nonbonded energy coming from nonbonded energy
of (different molecules + a molecule with itself). By setting appropriate
nrexcl in top file I am trying to exclude nonbonded part of molecule with
itself within cut off radius so what remains would be intermolecular
nonbonded energy between different molecules which determines heat of
vaporization.

1) Is this approach correct?


For excluding a whole molecule, it could work. For excluding only a part,
using energy group exclusions (see manual) is more flexible. Just setting
energy groups suitably might work in your case, so that you get the
group-wise break-down of nonbonded energy.

Thank you Mark. I have a one-component system. I guess group exclusions are used 
for multicomponent systems where one needs to break down the total energies? In 
my case I need to exclude nonbonded interactions of a polymer chain with itself 
(avoid calculating interactions between the first and last atoms of a chain if 
they fall within the cutoff radius).

2) If yes, can you please check the way I am applying mdrun rerun:

grompp -p nrexcl_3.top -o total_nonbonded.tpr

mdrun -deffnm total_nonbonded -s -o -c -g -e

I am done with these 5ns runs and now intend to exclude nonbonded
interaction on a chain by increasing nrexcl in top file named nrexcl_new.top

grompp -p nrexcl_new.top -o new.tpr

mdrun -rerun total_nonbonded.trr -s new.tpr -o new -c new -g new -e new

2) Am I doing this correctly? I doubt because I provide -rerun
total_nonbonded.trr but don't know how to introduce -rerun
total_nonbonded.edr so that new energies get written to it?


You want to write new energies and keep the old ones in case you need them.
There's no reason to (want to) re-introduce the old ones. mdrun -rerun
accepts the trajectory to determine what configurations to compute about. It
doesn't need to know what some other algorithm thought about the energies of
that configuration.

I am a bit puzzled now. mdrun -rerun takes the old trajectory and recomputes 
energies based on the old trajectory for each frame?


Yes, per mdrun -h.


  If that's the case, then using nrexcl in order to obtain new trajectories 
fails, I guess, as mdrun -rerun does not produce a new trajectory but only a 
new energy file?!

In other words, say I am doing NPT; with the default nrexcl=3 I got a
trajectory file and a density. Now with a new nrexcl that is large
enough to exclude all nonbonded interactions of atoms on the same
chain, I should expect a new trajectory and a new density corresponding
to this modified nrexcl. My concern is that if what mdrun -rerun does is
just computing energies based on the old trajectory, I need to redo
the simulations with a new top file (nrexcl = new number) because this
new top file should affect the configuration. Am I right?


If you want new configurations based on some Frankenstein of your model 
physics, then you do not need mdrun -rerun.


Mark



So for instance,

3) If I want to re-calculate only the last 1 ns of runs (after system is
equilibrated), can I use -b 4000 ? i.e.:

mdrun -rerun total_nonbonded.trr -s new.tpr -o new -c new -g new -e new -b
4000


Probably not. Check mdrun -h, but either way you can use trjconv first.

Mark







Re: [gmx-users] mdrun -rerun

2012-04-01 Thread Juliette N.
Thanks. One last question. So what's the new trr file provided by the -o
flag of mdrun -rerun below?

mdrun -rerun total_nonbonded.trr -s new.tpr -o new -c new -g new -e new


 On 1 April 2012 20:17, Mark Abraham mark.abra...@anu.edu.au  wrote:

 On 2/04/2012 10:10 AM, Juliette N. wrote:

 Hi all,

 I have an enquiry regarding calculation of heat of vaporization by
 estimating intermolecular nonbonded energies using mdrun rerun option.
 mdrun
 -rerun should break the total nonbonded energy coming from nonbonded
 energy
 of (different molecules + a molecule with itself). By setting
 appropriate
 nrexcl in top file I am trying to exclude nonbonded part of molecule
 with
 itself within cut off radius so what remains would be intermolecular
 nonbonded energy between different molecules which determines heat of
 vaporization.

 1) Is this approach correct?


 For excluding a whole molecule, it could work. For excluding only a part,
 using energy group exclusions (see manual) is more flexible. Just setting
 energy groups suitably might work in your case, so that you get the
 group-wise break-down of nonbonded energy.

 Thank you Mark. I have a one-component system. I guess group exclusions
 are used for multicomponent systems where one needs to break down the total
 energies? In my case I need to exclude nonbonded interactions of a polymer
 chain with itself (avoid calculating interactions between the first and last
 atoms of a chain if they fall within the cutoff radius).

 2) If yes, can you please check the way I am applying mdrun rerun:

 grompp -p nrexcl_3.top -o total_nonbonded.tpr

 mdrun -deffnm total_nonbonded -s -o -c -g -e

 I am done with these 5ns runs and now intend to exclude nonbonded
 interaction on a chain by increasing nrexcl in top file named
 nrexcl_new.top

 grompp -p nrexcl_new.top -o new.tpr

 mdrun -rerun total_nonbonded.trr -s new.tpr -o new -c new -g new -e new

 2) Am I doing this correctly? I doubt because I provide -rerun
 total_nonbonded.trr but don't know how to introduce -rerun
 total_nonbonded.edr so that new energies get written to it?


 You want to write new energies and keep the old ones in case you need
 them.
 There's no reason to (want to) re-introduce the old ones. mdrun -rerun
 accepts the trajectory to determine what configurations to compute about.
 It
 doesn't need to know what some other algorithm thought about the energies
 of
 that configuration.

 I am a bit puzzled now. mdrun -rerun takes the old trajectory and
 recomputes energies based on the old trajectory for each frame?


 Yes, per mdrun -h.


  If that's the case, then using nrexcl in order to obtain new
 trajectories fails, I guess, as mdrun -rerun does not produce a new trajectory
 but only a new energy file?!

 In other words, say I am doing NPT; with the default nrexcl=3 I got a
 trajectory file and a density. Now with a new nrexcl that is large
 enough to exclude all nonbonded interactions of atoms on the same
 chain, I should expect a new trajectory and a new density corresponding
 to this modified nrexcl. My concern is that if what mdrun -rerun does is
 just computing energies based on the old trajectory, I need to redo
 the simulations with a new top file (nrexcl = new number) because this
 new top file should affect the configuration. Am I right?


 If you want new configurations based on some Frankenstein of your model
 physics, then you do not need mdrun -rerun.

 Mark



 So for instance,

 3) If I want to re-calculate only the last 1 ns of runs (after system is
 equilibrated), can I use -b 4000 ? i.e.:

 mdrun -rerun total_nonbonded.trr -s new.tpr -o new -c new -g new -e new
 -b
 4000


 Probably not. Check mdrun -h, but either way you can use trjconv first.

 Mark







-- 
Thanks,
J. N.


Re: [gmx-users] mdrun -rerun

2012-04-01 Thread Mark Abraham

On 2/04/2012 12:05 PM, Juliette N. wrote:

Thanks. One last question. So what's the new trr file provided by the -o
flag of mdrun -rerun below?

mdrun -rerun total_nonbonded.trr -s new.tpr -o new -c new -g new -e new


If it even writes one, it will be identical to the -rerun file. There's 
no way for the rerun code to get new configurations except by reading 
the input trajectory.


Mark


On 1 April 2012 20:17, Mark Abraham mark.abra...@anu.edu.au wrote:

On 2/04/2012 10:10 AM, Juliette N. wrote:

Hi all,

I have an enquiry regarding calculation of heat of vaporization by
estimating intermolecular nonbonded energies using mdrun rerun option.
mdrun
-rerun should break the total nonbonded energy coming from nonbonded
energy
of (different molecules + a molecule with itself). By setting
appropriate
nrexcl in top file I am trying to exclude nonbonded part of molecule
with
itself within cut off radius so what remains would be intermolecular
nonbonded energy between different molecules which determines heat of
vaporization.

1) Is this approach correct?


For excluding a whole molecule, it could work. For excluding only a part,
using energy group exclusions (see manual) is more flexible. Just setting
energy groups suitably might work in your case, so that you get the
group-wise break-down of nonbonded energy.

Thank you Mark. I have a one-component system. I guess group exclusions
are used for multicomponent systems where one needs to break down the total
energies? In my case I need to exclude nonbonded interactions of a polymer
chain with itself (avoid calculating interactions between the first and last
atoms of a chain if they fall within the cutoff radius).

2) If yes, can you please check the way I am applying mdrun rerun:

grompp -p nrexcl_3.top -o total_nonbonded.tpr

mdrun -deffnm total_nonbonded -s -o -c -g -e

I am done with these 5ns runs and now intend to exclude nonbonded
interaction on a chain by increasing nrexcl in top file named
nrexcl_new.top

grompp -p nrexcl_new.top -o new.tpr

mdrun -rerun total_nonbonded.trr -s new.tpr -o new -c new -g new -e new

2) Am I doing this correctly? I doubt because I provide -rerun
total_nonbonded.trr but don't know how to introduce -rerun
total_nonbonded.edr so that new energies get written to it?


You want to write new energies and keep the old ones in case you need
them.
There's no reason to (want to) re-introduce the old ones. mdrun -rerun
accepts the trajectory to determine what configurations to compute about.
It
doesn't need to know what some other algorithm thought about the energies
of
that configuration.

I am a bit puzzled now. mdrun -rerun takes the old trajectory and
recomputes energies based on the old trajectory for each frame?


Yes, per mdrun -h.



  If that's the case, then using nrexcl in order to obtain new
trajectories fails, I guess, as mdrun -rerun does not produce a new trajectory
but only a new energy file?!

In other words, say I am doing NPT; with the default nrexcl=3 I got a
trajectory file and a density. Now with a new nrexcl that is large
enough to exclude all nonbonded interactions of atoms on the same
chain, I should expect a new trajectory and a new density corresponding
to this modified nrexcl. My concern is that if what mdrun -rerun does is
just computing energies based on the old trajectory, I need to redo
the simulations with a new top file (nrexcl = new number) because this
new top file should affect the configuration. Am I right?


If you want new configurations based on some Frankenstein of your model
physics, then you do not need mdrun -rerun.

Mark



So for instance,

3) If I want to re-calculate only the last 1 ns of runs (after system is
equilibrated), can I use -b 4000 ? i.e.:

mdrun -rerun total_nonbonded.trr -s new.tpr -o new -c new -g new -e new
-b
4000


Probably not. Check mdrun -h, but either way you can use trjconv first.

Mark










Re: [gmx-users] mdrun -rerun

2012-04-01 Thread Juliette N.
On 1 April 2012 22:07, Mark Abraham mark.abra...@anu.edu.au wrote:
 On 2/04/2012 12:05 PM, Juliette N. wrote:

 Thanks. One last question. So what's the new trr file provided by the -o
 flag of mdrun -rerun below?

 mdrun -rerun total_nonbonded.trr -s new.tpr -o new -c new -g new -e new


 If it even writes one, it will be identical to the -rerun file. There's no
 way for the rerun code to get new configurations except by reading the input
 trajectory.

Thanks a lot :)

 Mark


 On 1 April 2012 20:17, Mark Abraham mark.abra...@anu.edu.au    wrote:

 On 2/04/2012 10:10 AM, Juliette N. wrote:

 Hi all,

 I have an enquiry regarding calculation of heat of vaporization by
 estimating intermolecular nonbonded energies using mdrun rerun option.
 mdrun
 -rerun should break the total nonbonded energy coming from nonbonded
 energy
 of (different molecules + a molecule with itself). By setting
 appropriate
 nrexcl in top file I am trying to exclude nonbonded part of molecule
 with
 itself within cut off radius so what remains would be intermolecular
 nonbonded energy between different molecules which determines heat of
 vaporization.

 1) Is this approach correct?


 For excluding a whole molecule, it could work. For excluding only a
 part,
 using energy group exclusions (see manual) is more flexible. Just
 setting
 energy groups suitably might work in your case, so that you get the
 group-wise break-down of nonbonded energy.

 Thank you Mark. I have a one-component system. I guess group exclusions
 are used for multicomponent systems where one needs to break down the
 total energies? In my case I need to exclude nonbonded interactions of a
 polymer chain with itself (avoid calculating interactions between the
 first and last atoms of a chain if they fall within the cutoff radius).

 2) If yes, can you please check the way I am applying mdrun rerun:

 grompp -p nrexcl_3.top -o total_nonbonded.tpr

 mdrun -deffnm total_nonbonded -s -o -c -g -e

 I am done with these 5ns runs and now intend to exclude nonbonded
 interaction on a chain by increasing nrexcl in top file named
 nrexcl_new.top

 grompp -p nrexcl_new.top -o new.tpr

 mdrun -rerun total_nonbonded.trr -s new.tpr -o new -c new -g new -e
 new

 2) Am I doing this correctly? I doubt because I provide -rerun
 total_nonbonded.trr but don't know how to introduce -rerun
 total_nonbonded.edr so that new energies get written to it?


 You want to write new energies and keep the old ones in case you need
 them.
 There's no reason to (want to) re-introduce the old ones. mdrun -rerun
 accepts the trajectory to determine what configurations to compute
 about.
 It
 doesn't need to know what some other algorithm thought about the
 energies
 of
 that configuration.

 I am a bit puzzled now. mdrun -rerun takes the old trajectory and
 recomputes energies based on the old trajectory for each frame?


 Yes, per mdrun -h.


  If that's the case, then using nrexcl in order to obtain new
 trajectories fails, I guess, as mdrun -rerun does not produce a new
 trajectory but only a new energy file?!

 In other words, say I am doing NPT; with the default nrexcl=3 I got a
 trajectory file and a density. Now with a new nrexcl that is large
 enough to exclude all nonbonded interactions of atoms on the same
 chain, I should expect a new trajectory and a new density corresponding
 to this modified nrexcl. My concern is that if what mdrun -rerun does is
 just computing energies based on the old trajectory, I need to redo
 the simulations with a new top file (nrexcl = new number) because this
 new top file should affect the configuration. Am I right?


 If you want new configurations based on some Frankenstein of your model
 physics, then you do not need mdrun -rerun.

 Mark


 So for instance,

 3) If I want to re-calculate only the last 1 ns of runs (after system
 is
 equilibrated), can I use -b 4000 ? i.e.:

 mdrun -rerun total_nonbonded.trr -s new.tpr -o new -c new -g new -e
 new
 -b
 4000


 Probably not. Check mdrun -h, but either way you can use trjconv first.

 Mark








Re: [gmx-users] mdrun -rerun in parallel?

2012-04-01 Thread Mark Abraham

On 2/04/2012 1:57 PM, Juliette N. wrote:

Hi all,

Is there any way mdrun -rerun can be used on multiple nodes, e.g. with -nt?


Yes. You could have just tried it :-) It's just a version of mdrun that 
gets configurations from the -rerun file, doesn't actually do integration 
steps, and does neighbour searching for every configuration.


Mark
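For instance, both of these forms should work (the thread and process counts
here are arbitrary illustrations):

   # Thread-parallel on a single node:
   mdrun -nt 4 -rerun total_nonbonded.trr -s new.tpr -deffnm new
   # MPI-parallel, as used earlier in this thread:
   mpirun -np 8 mdrun_mpi -rerun total_nonbonded.trr -s new.tpr -deffnm new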



Best,


-- Forwarded message --
From: Juliette N. joojoojo...@gmail.com
Date: 1 April 2012 22:10
Subject: Re: [gmx-users] mdrun -rerun
To: Discussion list for GROMACS users gmx-users@gromacs.org


On 1 April 2012 22:07, Mark Abraham mark.abra...@anu.edu.au  wrote:

On 2/04/2012 12:05 PM, Juliette N. wrote:

Thanks. One last question. So what's the new trr file provided by the -o
flag of mdrun -rerun below?

mdrun -rerun total_nonbonded.trr -s new.tpr -o new -c new -g new -e new


If it even writes one, it will be identical to the -rerun file. There's no
way for the rerun code to get new configurations except by reading the input
trajectory.

Thanks a lot :)

Mark



On 1 April 2012 20:17, Mark Abraham mark.abra...@anu.edu.au  wrote:

On 2/04/2012 10:10 AM, Juliette N. wrote:

Hi all,

I have an enquiry regarding calculation of heat of vaporization by
estimating intermolecular nonbonded energies using mdrun rerun option.
mdrun
-rerun should break the total nonbonded energy coming from nonbonded
energy
of (different molecules + a molecule with itself). By setting
appropriate
nrexcl in top file I am trying to exclude nonbonded part of molecule
with
itself within cut off radius so what remains would be intermolecular
nonbonded energy between different molecules which determines heat of
vaporization.

1) Is this approach correct?


For excluding a whole molecule, it could work. For excluding only a
part,
using energy group exclusions (see manual) is more flexible. Just
setting
energy groups suitably might work in your case, so that you get the
group-wise break-down of nonbonded energy.

Thank you Mark. I have a one-component system. I guess group exclusions
are used for multicomponent systems where one needs to break down the
total energies? In my case I need to exclude nonbonded interactions of a
polymer chain with itself (avoid calculating interactions between the
first and last atoms of a chain if they fall within the cutoff radius).

2) If yes, can you please check the way I am applying mdrun rerun:

grompp -p nrexcl_3.top -o total_nonbonded.tpr

mdrun -deffnm total_nonbonded -s -o -c -g -e

I am done with these 5ns runs and now intend to exclude nonbonded
interaction on a chain by increasing nrexcl in top file named
nrexcl_new.top

grompp -p nrexcl_new.top -o new.tpr

mdrun -rerun total_nonbonded.trr -s new.tpr -o new -c new -g new -e
new

2) Am I doing this correctly? I doubt because I provide -rerun
total_nonbonded.trr but don't know how to introduce -rerun
total_nonbonded.edr so that new energies get written to it?


You want to write new energies and keep the old ones in case you need
them.
There's no reason to (want to) re-introduce the old ones. mdrun -rerun
accepts the trajectory to determine what configurations to compute
about.
It
doesn't need to know what some other algorithm thought about the
energies
of
that configuration.

I am a bit puzzled now. mdrun -rerun takes the old trajectory and
recomputes energies based on the old trajectory for each frame?


Yes, per mdrun -h.



  If that's the case, then using nrexcl in order to obtain new
trajectories fails, I guess, as mdrun -rerun does not produce a new
trajectory but only a new energy file?!

In other words, say I am doing NPT; with the default nrexcl=3 I got a
trajectory file and a density. Now with a new nrexcl that is large
enough to exclude all nonbonded interactions of atoms on the same
chain, I should expect a new trajectory and a new density corresponding
to this modified nrexcl. My concern is that if what mdrun -rerun does is
just computing energies based on the old trajectory, I need to redo
the simulations with a new top file (nrexcl = new number) because this
new top file should affect the configuration. Am I right?


If you want new configurations based on some Frankenstein of your model
physics, then you do not need mdrun -rerun.

Mark



So for instance,

3) If I want to re-calculate only the last 1 ns of runs (after system
is
equilibrated), can I use -b 4000 ? i.e.:

mdrun -rerun total_nonbonded.trr -s new.tpr -o new -c new -g new -e
new
-b
4000


Probably not. Check mdrun -h, but either way you can use trjconv first.

Mark





Re: [gmx-users] mdrun - topol.tpr

2012-03-07 Thread Dommert Florian
On Wed, 2012-03-07 at 12:33 +, Lara Bunte wrote: 
 Hi
 
 After I used
 
 
 grompp -f em.mdp -p topol.top -c solvated.gro -o em.tpr
 
 to collect my files into one em.tpr file (which is the purpose of grompp as 
 far as I understand it)
 
 Then I start mdrun for energy minimization with the command
 
 mdrun -nt 1 deffnm em
 
You forgot a - in front of deffnm

/Flo 
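That is, the intended command is:

   mdrun -nt 1 -deffnm em

Without the dash, mdrun falls back to its default input name, which is why it
complains about topol.tpr.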
 
 and got the error
 
 Can not open file:
 topol.tpr
 
 
 What is the problem?
 
 Thank you
 Greetings
 Lara

-- 
Florian Dommert
Dipl. - Phys.

Institute for Computational Physics
University Stuttgart

Pfaffenwaldring 27
70569 Stuttgart

EMail: domm...@icp.uni-stuttgart.de
Homepage: http://www.icp.uni-stuttgart.de/~icp/Florian_Dommert

Tel.: +49 - (0)711 - 68563613
Fax.: +49 - (0)711 - 68563658



Re: [gmx-users] mdrun - topol.tpr

2012-03-07 Thread Siew Wen Leong
mdrun looks for topol.tpr by default. Specify -s em.tpr in your command.

On Wed, Mar 7, 2012 at 8:33 PM, Lara Bunte lara.bu...@yahoo.de wrote:

 Hi

 After I used


 grompp -f em.mdp -p topol.top -c solvated.gro -o em.tpr

 to collect my files into one em.tpr file (which is the purpose of grompp as
 far as I understand it)

 Then I start mdrun for energy minimization with the command

 mdrun -nt 1 deffnm em


 and got the error

 Can not open file:
 topol.tpr


 What is the problem?

 Thank you
 Greetings
 Lara




-- 
Regards,
Leong Siew Wen

“Making the simple complicated is commonplace; making the complicated
simple, awesomely simple, that’s creativity.”
 *- Charles Mingus-*

Re: [gmx-users] mdrun - topol.tpr

2012-03-07 Thread Mark Abraham

On 8/03/2012 6:09 PM, Siew Wen Leong wrote:

mdrun looks for topol.tpr by default. specify -s em.tpr in your command


Please look up what -deffnm is for. :-)

Mark



On Wed, Mar 7, 2012 at 8:33 PM, Lara Bunte lara.bu...@yahoo.de wrote:


Hi

After I used


grompp -f em.mdp -p topol.top -c solvated.gro -o em.tpr

to collect my files into one em.tpr file (which is the purpose of
grompp as far as I understand it)

Then I start mdrun for energy minimization with the command

mdrun -nt 1 deffnm em


and got the error

Can not open file:
topol.tpr


What is the problem?

Thank you
Greetings
Lara




--
Regards,
Leong Siew Wen

Making the simple complicated is commonplace; making the complicated 
simple, awesomely simple, that's creativity.

*- Charles Mingus-*






Re: [gmx-users] MDrun append...

2012-03-02 Thread lloyd riggs
Dear Rama,

Since the latest version, I have to use -noappend and then just concatenate 
the files when they are finished. I gave up: no matter how many file paths I 
supplied to mdrun in response to the error messages, it still complained.  
I don't know if this is a personal bug or something more general, but I figured 
the former, and it was not such a big deal, so I never wrote here.

Stephan

 Original message 
 Date: Fri, 02 Mar 2012 17:31:49 +1100
 From: Mark Abraham mark.abra...@anu.edu.au
 To: Discussion list for GROMACS users gmx-users@gromacs.org
 Subject: Re: [gmx-users] MDrun append...

 On 2/03/2012 3:59 PM, rama david wrote:
  Dear GROMACS Friends,
 
  my MD run crashed; I used the following command:
 
  mdrun -s protein_md.tpr -c protein_md.trr -e protein_md.edr -g 
  protein_md.log -cpi -append -v
 
  the system responded with:
  Fatal error:
  The original run wrote a file called 'traj.trr' which is larger than 2 
  GB, but mdrun did not support large file offsets. Can not append. Run 
  mdrun with -noappend
  What to do ???
 
 Update whichever of your filesystem or mdrun is outdated, or use 
 -noappend and resign yourself to concatenation after the simulation 
 completes.
 
 Mark



Re: [gmx-users] mdrun -multi flag

2012-03-01 Thread Mark Abraham

On 2/03/2012 10:15 AM, bo.shuang wrote:

Hello, all,

I am trying to do REMD simulation. So I used command:

mdrun -s t2T.tpr -multi 2 -replex 1000

And GROMACS gives this error report:
Fatal error:
mdrun -multi is not supported with the thread library. Please compile
GROMACS with MPI support
For more information and tips for troubleshooting, please check the 
GROMACS

website at http://www.gromacs.org/Documentation/Errors

Then I tried re-installing mdrun:
./configure --enable-mpi --program-suffix=_mpiq
make mdrun
make install-mdrun

It looks fine, but I still cannot use the -multi flag, and it is still the 
same error. I don't know what the problem is or what I should do 
next. Thank you for your help!


With that program suffix, you need to use mdrun_mpiq
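A launch sketch (the launcher name depends on your MPI installation, and with
-multi 2 mdrun expects one input per replica, e.g. t2T0.tpr and t2T1.tpr):

mpirun -np 2 mdrun_mpiq -s t2T.tpr -multi 2 -replex 1000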

Mark


Re: [gmx-users] MDrun append...

2012-03-01 Thread Mark Abraham

On 2/03/2012 3:59 PM, rama david wrote:

Dear GROMACS Friends,

 my MD run crashed; I used the following command:

mdrun -s protein_md.tpr -c protein_md.trr -e protein_md.edr -g 
protein_md.log -cpi -append -v


 the system responded with:
Fatal error:
The original run wrote a file called 'traj.trr' which is larger than 2 
GB, but mdrun did not support large file offsets. Can not append. Run 
mdrun with -noappend

What to do ???


Update whichever of your filesystem or mdrun is outdated, or use 
-noappend and resign yourself to concatenation after the simulation 
completes.
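A concatenation sketch, assuming the part-numbered names that -noappend
typically produces:

trjcat -f traj.trr traj.part0002.trr -o traj_all.trr
eneconv -f ener.edr ener.part0002.edr -o ener_all.edr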


Mark


Re: [gmx-users] mdrun -pd

2012-02-29 Thread Mark Abraham

On 1/03/2012 12:31 AM, Steven Neumann wrote:

Dear Gmx Users,
I am trying to use the option -pd of mdrun for particle decomposition.
I used:

mpiexec mdrun -pd -deffnm nvt

I obtain:

apps/intel/ict/mpi/3.1.038/bin/mpdlib.py:37: DeprecationWarning: the 
md5 module is deprecated; use hashlib instead

  from  md5   import  new as md5new
NODEID=0 argc=4
 :-)  G  R  O  M  A  C  S  (-:
NODEID=2 argc=4
NODEID=6 argc=4
NODEID=1 argc=4
NODEID=5 argc=4
NODEID=11 argc=4
NODEID=3 argc=4
NODEID=7 argc=4
NODEID=8 argc=4
NODEID=9 argc=4
NODEID=10 argc=4
NODEID=4 argc=4

Reading file nvt500ps.tpr, VERSION 4.5.4 (single precision)
starting mdrun 'Protein'
100 steps,500.0 ps.
/apps/intel/ict/mpi/3.1.038/bin/mpdlib.py:27: DeprecationWarning: The 
popen2 module is deprecated.  Use the subprocess module.

  import sys, os, signal, popen2, socket, select, inspect
/apps/intel/ict/mpi/3.1.038/bin/mpdlib.py:37: DeprecationWarning: the 
md5 module is deprecated; use hashlib instead

  from  md5   import  new as md5new

Then trajectory files are empty



Your MPI configuration is probably broken. You should observe similar 
output from mdrun without using -pd. You should find out if a simple MPI 
test program can run.
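For instance, something as simple as

mpiexec -n 4 hostname

should print one line per rank; if even that hangs or fails, the problem is in
the MPI setup rather than in GROMACS.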


Mark


Re: [gmx-users] mdrun -pd

2012-02-29 Thread Steven Neumann
On Wed, Feb 29, 2012 at 1:47 PM, Mark Abraham mark.abra...@anu.edu.au wrote:

  On 1/03/2012 12:31 AM, Steven Neumann wrote:

 Dear Gmx Users,
 I am trying to use the option -pd of mdrun for particle decomposition.
 I used:

 mpiexec mdrun -pd -deffnm nvt

 I obtain:

 apps/intel/ict/mpi/3.1.038/bin/mpdlib.py:37: DeprecationWarning: the
 md5 module is deprecated; use hashlib instead
  from  md5   import  new as md5new
 NODEID=0 argc=4
 :-)  G  R  O  M  A  C  S  (-:
 NODEID=2 argc=4
 NODEID=6 argc=4
 NODEID=1 argc=4
 NODEID=5 argc=4
 NODEID=11 argc=4
 NODEID=3 argc=4
 NODEID=7 argc=4
 NODEID=8 argc=4
 NODEID=9 argc=4
 NODEID=10 argc=4
 NODEID=4 argc=4

 Reading file nvt500ps.tpr, VERSION 4.5.4 (single precision)
 starting mdrun 'Protein'
 100 steps,500.0 ps.
 /apps/intel/ict/mpi/3.1.038/bin/mpdlib.py:27: DeprecationWarning: The
 popen2 module is deprecated.  Use the subprocess module.
  import sys, os, signal, popen2, socket, select, inspect
 /apps/intel/ict/mpi/3.1.038/bin/mpdlib.py:37: DeprecationWarning: the
 md5 module is deprecated; use hashlib instead
  from  md5   import  new as md5new

 Then trajectory files are empty


 Your MPI configuration is probably broken. You should observe similar
 output from mdrun without using -pd. You should find out if a simple MPI
 test program can run.

 Well, without -pd everything works fine.


 Mark


Re: [gmx-users] mdrun extension and concatenation

2012-02-27 Thread Mark Abraham

On 28/02/2012 3:50 PM, priya thiyagarajan wrote:

hello sir,

while performing a simulation for 30 ns, because of the queue time limit my 
run terminated at 11.6 ns. I then extended my simulation using mdrun 
as you suggested.


While doing so I got this error:

Fatal error:
Failed to lock: md20.log. Function not implemented.
For more information and tips for troubleshooting, please check the 
GROMACS

website at http://www.gromacs.org/Documentation/Errors


When I searched the mailing list I found one suggested solution:

as a workaround you could run with -noappend and later
concatenate the output files. Then you should have no
problems with locking.


Now when I try using -noappend, my simulation works.


Is this correct?


Maybe. We can't know.



Also, because my simulation was terminated by the queue limit, I didn't get my 
.gro file.

Will I get my .gro file at the end of the simulation without any error?


I don't understand what you are asking. See 
http://www.gromacs.org/Documentation/How-tos/Extending_Simulation for 
general discussion of this topic.
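A sketch of the usual continuation, with file names taken from the post (the
.part0002 names are what -noappend typically produces):

mdrun -deffnm md20 -cpi -noappend
trjcat -f md20.xtc md20.part0002.xtc -o md20_all.xtc

Note that mdrun writes the final .gro only when the run reaches its last step,
so an interrupted run has none until it is continued to completion.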


Mark

Re: [gmx-users] mdrun using old version

2012-02-26 Thread Mark Abraham

On 26/02/2012 10:42 PM, nicolas prudhomme wrote:

Hi gmx-users,

I don't know why, but my mdrun suddenly started to use the 4.0.7 
version while I have installed only the 4.5.4 version.


I have reinstalled GROMACS 4.5.4, but when I run mdrun it still wants to 
use the 4.0.7 version and cannot read the .tpr file.


Clearly you do have the old version installed, and it is the first found 
in your PATH environment variable. See 
http://www.gromacs.org/Downloads/Installation_Instructions#Getting_access_to_GROMACS_after_installation
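A quick way to check which binary is found first, and to put the intended one
ahead of it (the install prefix below is only an example):

which mdrun
export PATH=/usr/local/gromacs/bin:$PATH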


Mark

Re: [gmx-users] mdrun on GROMACS 3.3.1

2012-02-22 Thread francesca vitalini
Hi,
sorry I'm back to this thread after quite a long time, as I was trying
to solve other problems. Now I'm back to the reverse transformation
tutorial on the martini webpage and whenever I try to use the mdp
script provided there for the annealing I just end up with the same
error message, which is that it cannot open the .xtc file that it is
supposed to create.

Program mdrun, VERSION 3.3.1
Source code file: gmxfio.c, line: 706

Can not open file:
annealing_LLL_frag.xtc
---

Love is Like Moby Dick, Get Chewed and Get Spat Out (Urban Dance Squad)

Halting program mdrun

gcq#83: Love is Like Moby Dick, Get Chewed and Get Spat Out (Urban
Dance Squad)

--
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode -1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.

The mdp file looks like


cpp =  /lib/cpp
constraints =  none
integrator  =  md
tinit   =  0.0
dt  =  0.002
nsteps  =  4
nstcomm =  0
nstxout =  1
nstvout =  1
nstfout =  1
nstlog  =  1
nstenergy   =  100
nstxtcout   =  100
xtc_precision   =  1000
nstlist =  10
energygrps  =  Protein Water
ns_type =  grid
rlist   =  0.9
coulombtype =  Generalized-Reaction-Field
epsilon_rf   = 62
rcoulomb=  1
rvdw=  1.0
Tcoupl  =  nose-hoover
tc-grps =  Protein Water
ref_t   =  300  300
tau_t   =  0.1  0.1
Pcoupl  =  no
table_extension =  1.2


;dihre   =  simple ; Some dihedrals are restrained  for
instance peptide bonds are set to trans conformation.
;dihre_fc=  500
;dihre_tau   =  0.0
;nstdihreout =  50

cap_force= 15000
cap_a= 100
fc_restr = 12000
r_CGW= 0.21
fc_restrW= 400

rel_steps= 0
rel_water= no


andersen_seed   =  815131
annealing   =  single single
annealing_npoints   =  2 2
annealing_time  =  0 60  0 60 ; CHANGE HERE -annealing control
points for each group
annealing_temp  =  1300 300 400 300  ; list of temperatures at
control points for each group


gen_vel = yes
gen_temp= 1300
gen_seed= 473529
lincs_warnangle = 100

Compared to the file from the tutorial I have only changed the number of
groups (2 instead of 3); the rest is as provided, so I really don't
understand why GROMACS 3.3.1 is complaining.
Can you help me with that?
Thanks a lot



2012/1/31 Justin A. Lemkul jalem...@vt.edu:


 francesca vitalini wrote:

  Well, I'm still struggling with this script. Apparently the problem is in
  using the integrator md with the GROMACS 3.3.1 version. In fact the same .mdp
  file with integrator steep works, while with md it always gives the error
  message that it cannot open the .xtc file.


 The md integrator can produce an .xtc file, steepest descent EM does not.


 How can I get around this problem?
 I can only see two ways now:
 -either there is a way to use md with GROMACS 3.3.1
 -or there is a way so that the mdrun of a newer version of GROMACS can
 deal with the file. If I try, even specifying the path for each .itp file,
 then the program cannot find certain atomtypes, such as
 Atomtype CH2R not found
 any suggestions here?


 Some renaming has occurred for some atomtypes, and the force fields have
 been re-organized in a more sensible fashion in newer versions.  For
 instance, the CH2R atom type in the Gromos96 force fields is now called
 CH2r.

 -Justin

 Thanks
 Francesca



 2012/1/31 Francesca Vitalini francesca.vitalin...@gmail.com


    Thank you Justin for your quick reply. Unfortunately I cannot use a
    more modern version of GROMACS, as my topology and .gro files were
    first created for a reverse transformation from cg to fg and thus
    require the 3.3.1 version and some specific .itp files that are
    only present in that version. If I try to run it all with GROMACS
    4.5, it crashes immediately.
    I've also tried without the -cpo option but it doesn't change anything.
    I've checked the permissions on the folder and, as I supposed, I have
    total access to it, so that should not affect the results.
    If I open with vi a file with the same name as the .xtc file that I
    need for the script, write some junk into it just to try, and then
    rerun the mdrun command, I don't get the error message anymore,
    but GROMACS complains, saying
    Reading file 

Re: [gmx-users] mdrun on GROMACS 3.3.1

2012-01-31 Thread Justin A. Lemkul



francesca vitalini wrote:

Hello GROMACS users!
I'm trying to run a simple md script after running an energy 
minimization script on my system and I'm getting a weird error message


Reading file dynamin_dimer_PR1.tpr, VERSION 3.3.1 (single precision)
Loaded with Money

---
Program mdrun, VERSION 3.3.1
Source code file: gmxfio.c, line: 706

Can not open file:
coarse.xtc


it is strange as the coarse.xtc file should be created by running the 
mdrun command.




Sounds to me like you don't have permission to write in the working directory.
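A quick way to test that, in a plain shell and independent of GROMACS:

touch coarse.xtc

If the touch fails, mdrun cannot create the file there either.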

-Justin


That is my submission command line:

$MYGROMACSPATH/bin/mdrun -v  -s dynamin_dimer_PR1.tpr  -o 
dynamin_dimer_PR1.trr  -cpo dynamin_dimer_PR1.cpt -c dynamin_dimer_PR1.gro


the .tpr file had been previously created by grompp

$MYGROMACSPATH/bin/grompp -v -f pr1.mdp -c dynamin_dimer_EM_solvated.gro 
-p dynamin_dimer_fg.top -o dynamin_dimer_PR1.tpr


and the .mdp file I'm using is

cpp =  /lib/cpp
constraints =  all-bonds
integrator  =  md
tinit   =  0.0
dt  =  0.002
nsteps  =  4
nstcomm =  0
nstxout =  1
nstvout =  1
nstfout =  1
nstlog  =  1
nstenergy   =  100
nstxtcout   =  100
xtc_precision   =  1000
nstlist =  10
energygrps  =  Protein
ns_type =  grid
rlist   =  0.9
coulombtype =  Generalized-Reaction-Field
epsilon_rf   = 62
rcoulomb=  1.5
rvdw=  1.0

;Tcoupl  =  nose-hoover
;tc-grps =  Protein
;ref_t   =  300

nstxtcout   =  100
xtc_precision   =  1000
nstlist =  10
energygrps  =  Protein
ns_type =  grid
rlist   =  0.9
coulombtype =  Generalized-Reaction-Field
epsilon_rf   = 62
rcoulomb=  1.5
rvdw=  1.0

;Tcoupl  =  nose-hoover
;tc-grps =  Protein
;ref_t   =  300
;tau_t   =  0.1

; Temperature coupling
tcoupl   = Berendsen; Couple temperature to 
external heat bath according to Berendsen method
tc-grps  = Protein  Non-Protein ; Use separate heat 
baths for Protein and Non-Protein groups
tau_t= 0.1  0.1 ; Coupling time 
constant, controlling strength of coupling

ref_t= 200  200 ; Temperature of heat bat

I have also tried to change the .mdp file but I get the same error message.
If I try to use mdrun from a different version of GROMACS, it complains 
that it is not the same version as grompp.


Do you have any tips for solving this problem?

Thanks

Francesca



--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




Re: [gmx-users] mdrun on GROMACS 3.3.1

2012-01-31 Thread Francesca Vitalini
Actually the directory is my own and I created it in my home directory, 
so that shouldn't be a problem, as I have also created other files in the same 
directory without any problems so far.
Other ideas? 
Thanks
Francesca



On 31 Jan 2012, at 13:02, Justin A. Lemkul wrote:

 
 
 francesca vitalini wrote:
  Hello GROMACS users!
  I'm trying to run a simple md script after running an energy minimization 
  script on my system and I'm getting a weird error message
 Reading file dynamin_dimer_PR1.tpr, VERSION 3.3.1 (single precision)
 Loaded with Money
 ---
 Program mdrun, VERSION 3.3.1
 Source code file: gmxfio.c, line: 706
 Can not open file:
 coarse.xtc
 it is strange as the coarse.xtc file should be created by running the mdrun 
 command.
 
 Sounds to me like you don't have permission to write in the working directory.
 
 -Justin
 
 That is my submission command line:
 $MYGROMACSPATH/bin/mdrun -v  -s dynamin_dimer_PR1.tpr  -o 
 dynamin_dimer_PR1.trr  -cpo dynamin_dimer_PR1.cpt -c dynamin_dimer_PR1.gro
 the .tpr file had been previously created by grompp
 $MYGROMACSPATH/bin/grompp -v -f pr1.mdp -c dynamin_dimer_EM_solvated.gro -p 
 dynamin_dimer_fg.top -o dynamin_dimer_PR1.tpr
 and the .mdp file I'm using is
 cpp =  /lib/cpp
 constraints =  all-bonds
 integrator  =  md
 tinit   =  0.0
 dt  =  0.002
 nsteps  =  4
 nstcomm =  0
 nstxout =  1
 nstvout =  1
 nstfout =  1
 nstlog  =  1
 nstenergy   =  100
 nstxtcout   =  100
 xtc_precision   =  1000
 nstlist =  10
 energygrps  =  Protein
 ns_type =  grid
 rlist   =  0.9
 coulombtype =  Generalized-Reaction-Field
 epsilon_rf   = 62
 rcoulomb=  1.5
 rvdw=  1.0
 ;Tcoupl  =  nose-hoover
 ;tc-grps =  Protein
 ;ref_t   =  300
 nstxtcout   =  100
 xtc_precision   =  1000
 nstlist =  10
 energygrps  =  Protein
 ns_type =  grid
 rlist   =  0.9
 coulombtype =  Generalized-Reaction-Field
 epsilon_rf   = 62
 rcoulomb=  1.5
 rvdw=  1.0
 ;Tcoupl  =  nose-hoover
 ;tc-grps =  Protein
 ;ref_t   =  300
 ;tau_t   =  0.1
 ; Temperature coupling
 tcoupl   = Berendsen; Couple temperature to 
 external heat bath according to Berendsen method
 tc-grps  = Protein  Non-Protein ; Use separate heat baths 
 for Protein and Non-Protein groups
 tau_t= 0.1  0.1 ; Coupling time constant, 
 controlling strength of coupling
 ref_t= 200  200 ; Temperature of heat bat
 I have also tried to change the .mdp file but I get the same error message.
 If I try to use mdrun from a different version of GROMACS, it complains 
 that it is not the same version as grompp.
 Do you have any tips for solving this problem?
 Thanks
 Francesca
 



Re: [gmx-users] mdrun on GROMACS 3.3.1

2012-01-31 Thread Justin A. Lemkul



Francesca Vitalini wrote:

Actually the directory is my own and I created it in my home directory, 
so that shouldn't be a problem, as I have also created other files in the same 
directory without any problems so far.
Other ideas? 
Thanks

Francesca



On 31 Jan 2012, at 13:02, Justin A. Lemkul wrote:



francesca vitalini wrote:

Hello GROMACS users!
I'm trying to run a simple md script after running an energy minimization 
script on my system and I'm getting a weird error message
Reading file dynamin_dimer_PR1.tpr, VERSION 3.3.1 (single precision)
Loaded with Money
---
Program mdrun, VERSION 3.3.1
Source code file: gmxfio.c, line: 706
Can not open file:
coarse.xtc
it is strange as the coarse.xtc file should be created by running the mdrun 
command.

Sounds to me like you don't have permission to write in the working directory.

-Justin


That is my submission command line:
$MYGROMACSPATH/bin/mdrun -v  -s dynamin_dimer_PR1.tpr  -o dynamin_dimer_PR1.trr 
 -cpo dynamin_dimer_PR1.cpt -c dynamin_dimer_PR1.gro
the .tpr file had been previously created by grompp


I don't know if this is a problem or not, but I just noticed it.  If you're 
using version 3.3.1, the -cpo option doesn't exist.  mdrun won't exit with a 
fatal error in this case, but you still shouldn't be using it.


Aside from that, I would suggest you use a more modern version of Gromacs 
(4.5.5) rather than one that is certifiably ancient.  There may well have been 
some bug that was fixed 6 years ago that no one even remembers ;)



$MYGROMACSPATH/bin/grompp -v -f pr1.mdp -c dynamin_dimer_EM_solvated.gro -p 
dynamin_dimer_fg.top -o dynamin_dimer_PR1.tpr
and the .mdp file I'm using is


The .mdp file contains a number of redundancies, which should have caused grompp 
to fail.  Also probably irrelevant, but worth noting.


-Justin


cpp =  /lib/cpp
constraints =  all-bonds
integrator  =  md
tinit   =  0.0
dt  =  0.002
nsteps  =  4
nstcomm =  0
nstxout =  1
nstvout =  1
nstfout =  1
nstlog  =  1
nstenergy   =  100
nstxtcout   =  100
xtc_precision   =  1000
nstlist =  10
energygrps  =  Protein
ns_type =  grid
rlist   =  0.9
coulombtype =  Generalized-Reaction-Field
epsilon_rf   = 62
rcoulomb=  1.5
rvdw=  1.0
;Tcoupl  =  nose-hoover
;tc-grps =  Protein
;ref_t   =  300
nstxtcout   =  100
xtc_precision   =  1000
nstlist =  10
energygrps  =  Protein
ns_type =  grid
rlist   =  0.9
coulombtype =  Generalized-Reaction-Field
epsilon_rf   = 62
rcoulomb=  1.5
rvdw=  1.0
;Tcoupl  =  nose-hoover
;tc-grps =  Protein
;ref_t   =  300
;tau_t   =  0.1
; Temperature coupling
tcoupl   = Berendsen; Couple temperature to 
external heat bath according to Berendsen method
tc-grps  = Protein  Non-Protein ; Use separate heat baths for 
Protein and Non-Protein groups
tau_t= 0.1  0.1 ; Coupling time constant, 
controlling strength of coupling
ref_t= 200  200 ; Temperature of heat bat
I have also tried to change the .mdp file but I get the same error message.
If I try to use mdrun from a different version of GROMACS, it complains 
that it is not the same version as grompp.
Do you have any tips for solving this problem?
Thanks
Francesca






--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin



Re: [gmx-users] mdrun on GROMACS 3.3.1

2012-01-31 Thread Francesca Vitalini
Thank you Justin for your quick reply. Unfortunately I cannot use a more modern 
version of GROMACS, as my topology and .gro files were first created for a 
reverse transformation from cg to fg and thus require the 3.3.1 version and 
some specific .itp files that are only present in that version. If I try to run 
it all with GROMACS 4.5, it crashes immediately. 
I've also tried without the -cpo option but it doesn't change anything.
I've checked the permissions on the folder and, as I supposed, I have total 
access to it, so that should not affect the results. 
If I open with vi a file with the same name as the .xtc file that I need for 
the script, write some junk into it just to try, and then rerun the 
mdrun command, I don't get the error message anymore, but GROMACS complains, 
saying 
Reading file dynamin_dimer_PR1.tpr, VERSION 3.3.1 (single precision)
Loaded with Money

WARNING: Incomplete frame: nr 0 time 0
Segmentation fault
I have checked with gmxcheck the .trr input file as it was suggested in another 
discussion, and apparently it is ok, so I really don't know what to do.
Can you help me with that?
Thanks
Francesca


On 31 Jan 2012, at 14:33, Justin A. Lemkul wrote:

 
 
 Francesca Vitalini wrote:
  Actually the directory is my own and I created it in my home 
  directory, so that shouldn't be a problem, as I have also created other files 
  in the same directory without any problems so far.
  Other ideas? Thanks
 Francesca
  On 31 Jan 2012, at 13:02, Justin A. Lemkul wrote:
 
 francesca vitalini wrote:
  Hello GROMACS users!
  I'm trying to run a simple md script after running an energy minimization 
  script on my system and I'm getting a weird error message
 Reading file dynamin_dimer_PR1.tpr, VERSION 3.3.1 (single precision)
 Loaded with Money
 ---
 Program mdrun, VERSION 3.3.1
 Source code file: gmxfio.c, line: 706
 Can not open file:
 coarse.xtc
 it is strange as the coarse.xtc file should be created by running the 
 mdrun command.
 Sounds to me like you don't have permission to write in the working 
 directory.
 
 -Justin
 
  That is my submission command line:
 $MYGROMACSPATH/bin/mdrun -v  -s dynamin_dimer_PR1.tpr  -o 
 dynamin_dimer_PR1.trr  -cpo dynamin_dimer_PR1.cpt -c dynamin_dimer_PR1.gro
 the .tpr file had been previously created by grompp
 
 I don't know if this is a problem or not, but I just noticed it.  If you're 
 using version 3.3.1, the -cpo option doesn't exist.  mdrun won't exit with a 
 fatal error in this case, but you still shouldn't be using it.
 
 Aside from that, I would suggest you use a more modern version of Gromacs 
 (4.5.5) rather than one that is certifiably ancient.  There may well have 
 been some bug that was fixed 6 years ago that no one even remembers ;)
 
 $MYGROMACSPATH/bin/grompp -v -f pr1.mdp -c dynamin_dimer_EM_solvated.gro 
 -p dynamin_dimer_fg.top -o dynamin_dimer_PR1.tpr
 and the .mdp file I'm using is
 
 The .mdp file contains a number of redundancies, which should have caused 
 grompp to fail.  Also probably irrelevant, but worth noting.
 
 -Justin
 
 cpp =  /lib/cpp
 constraints =  all-bonds
 integrator  =  md
 tinit   =  0.0
 dt  =  0.002
 nsteps  =  4
 nstcomm =  0
 nstxout =  1
 nstvout =  1
 nstfout =  1
 nstlog  =  1
 nstenergy   =  100
 nstxtcout   =  100
 xtc_precision   =  1000
 nstlist =  10
 energygrps  =  Protein
 ns_type =  grid
 rlist   =  0.9
 coulombtype =  Generalized-Reaction-Field
 epsilon_rf   = 62
 rcoulomb=  1.5
 rvdw=  1.0
 ;Tcoupl  =  nose-hoover
 ;tc-grps =  Protein
 ;ref_t   =  300
 nstxtcout   =  100
 xtc_precision   =  1000
 nstlist =  10
 energygrps  =  Protein
 ns_type =  grid
 rlist   =  0.9
 coulombtype =  Generalized-Reaction-Field
 epsilon_rf   = 62
 rcoulomb=  1.5
 rvdw=  1.0
 ;Tcoupl  =  nose-hoover
 ;tc-grps =  Protein
 ;ref_t   =  300
 ;tau_t   =  0.1
 ; Temperature coupling
 tcoupl   = Berendsen; Couple temperature to 
 external heat bath according to Berendsen method
 tc-grps  = Protein  Non-Protein ; Use separate heat baths 
 for Protein and Non-Protein groups
 tau_t= 0.1  0.1 ; Coupling time constant, 
 controlling strength of coupling
 ref_t= 200  200 ; Temperature of heat bat
 I have also tried to change the .mdp file but I get the same error message.
  If I try to use mdrun from a different version of GROMACS, it complains 
  that it is not the same version as grompp.
 Do you 

Re: [gmx-users] mdrun on GROMACS 3.3.1

2012-01-31 Thread francesca vitalini
Well, I'm still struggling with this script. Apparently the problem is in
using the integrator md with the GROMACS 3.3.1 version. In fact the same
.mdp file with integrator steep works, while with md it always gives the
error message that it cannot open the .xtc file.
How can I get around this problem?
I can only see two ways now:
-either there is a way to use md with GROMACS 3.3.1
-or there is a way so that the mdrun of a newer version of GROMACS can deal
with the file. If I try, even specifying the path for each .itp file, then
the program cannot find certain atomtypes, such as
Atomtype CH2R not found
any suggestions here?
Thanks
Francesca



2012/1/31 Francesca Vitalini francesca.vitalin...@gmail.com

  Thank you Justin for your quick reply. Unfortunately I cannot use a more
  modern version of GROMACS, as my topology and .gro files were first created
  for a reverse transformation from cg to fg and thus require the 3.3.1
  version and some specific .itp files that are only present in that version.
  If I try to run it all with GROMACS 4.5, it crashes immediately.
  I've also tried without the -cpo option but it doesn't change anything.
  I've checked the permissions on the folder and, as I supposed, I have total
  access to it, so that should not affect the results.
  If I open with vi a file with the same name as the .xtc file that I need
  for the script, write some junk into it just to try, and then rerun the
  mdrun command, I don't get the error message anymore, but GROMACS
  complains, saying
 Reading file dynamin_dimer_PR1.tpr, VERSION 3.3.1 (single precision)
 Loaded with Money

 WARNING: Incomplete frame: nr 0 time 0
 Segmentation fault
 I have checked with gmxcheck the .trr input file as it was suggested in
 another discussion, and apparently it is ok, so I really don't know what to
 do.
 Can you help me with that?
 Thanks
 Francesca


  On 31 Jan 2012, at 14:33, Justin A. Lemkul wrote:

 
 
  Francesca Vitalini wrote:
   Actually the directory is my own and I created it in my home
  directory, so that shouldn't be a problem, as I have also created other files
  in the same directory without any problems so far.
   Other ideas? Thanks
  Francesca
   On 31 Jan 2012, at 13:02, Justin A. Lemkul wrote:
 
  francesca vitalini wrote:
   Hello GROMACS users!
   I'm trying to run a simple md script after running an energy
  minimization script on my system and I'm getting a weird error message
  Reading file dynamin_dimer_PR1.tpr, VERSION 3.3.1 (single precision)
  Loaded with Money
  ---
  Program mdrun, VERSION 3.3.1
  Source code file: gmxfio.c, line: 706
  Can not open file:
  coarse.xtc
  it is strange as the coarse.xtc file should be created by running the
 mdrun command.
  Sounds to me like you don't have permission to write in the working
 directory.
 
  -Justin
 
   That is my submission command line:
  $MYGROMACSPATH/bin/mdrun -v  -s dynamin_dimer_PR1.tpr  -o
 dynamin_dimer_PR1.trr  -cpo dynamin_dimer_PR1.cpt -c dynamin_dimer_PR1.gro
  the .tpr file had been previously created by grompp
 
  I don't know if this is a problem or not, but I just noticed it.  If
 you're using version 3.3.1, the -cpo option doesn't exist.  mdrun won't
 exit with a fatal error in this case, but you still shouldn't be using it.
 
  Aside from that, I would suggest you use a more modern version of
 Gromacs (4.5.5) rather than one that is certifiably ancient.  There may
 well have been some bug that was fixed 6 years ago that no one even
 remembers ;)
 
  $MYGROMACSPATH/bin/grompp -v -f pr1.mdp -c
 dynamin_dimer_EM_solvated.gro -p dynamin_dimer_fg.top -o
 dynamin_dimer_PR1.tpr
  and the .mdp file I'm using is
 
  The .mdp file contains a number of redundancies, which should have
 caused grompp to fail.  Also probably irrelevant, but worth noting.
 
  -Justin
 
  cpp =  /lib/cpp
  constraints =  all-bonds
  integrator  =  md
  tinit   =  0.0
  dt  =  0.002
  nsteps  =  4
  nstcomm =  0
  nstxout =  1
  nstvout =  1
  nstfout =  1
  nstlog  =  1
  nstenergy   =  100
  nstxtcout   =  100
  xtc_precision   =  1000
  nstlist =  10
  energygrps  =  Protein
  ns_type =  grid
  rlist   =  0.9
  coulombtype =  Generalized-Reaction-Field
  epsilon_rf   = 62
  rcoulomb=  1.5
  rvdw=  1.0
  ;Tcoupl  =  nose-hoover
  ;tc-grps =  Protein
  ;ref_t   =  300
  nstxtcout   =  100
  xtc_precision   =  1000
  nstlist =  10
  energygrps  =  Protein
  ns_type =  grid
  rlist   =  0.9
  coulombtype =  Generalized-Reaction-Field
  epsilon_rf   = 62
  rcoulomb=  1.5
  rvdw=  

Re: [gmx-users] mdrun on GROMACS 3.3.1

2012-01-31 Thread Justin A. Lemkul



francesca vitalini wrote:
Well, I'm still struggling with this script. Apparently the problem is 
in using the integrator md with the GROMACS 3.3.1 version. In fact the 
same .mdp file with integrator steep works, while with md it always 
gives the error message that it cannot open the .xtc file.


The md integrator can produce an .xtc file, steepest descent EM does not.


How can I get around this problem?
I can only see two ways now:
-either there is a way to use md with GROMACS 3.3.1
-or there is a way so that the mdrun of a newer version of GROMACS can 
deal with the file. If I try, even specifying the path for each .itp 
file, then the program cannot find certain atomtypes, such as

Atomtype CH2R not found
any suggestions here?


Some renaming has occurred for some atomtypes, and the force fields have been 
re-organized in a more sensible fashion in newer versions.  For instance, the 
CH2R atom type in the Gromos96 force fields is now called CH2r.
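If you go the renaming route, a sketch (mytop.itp is a placeholder; check each
renamed type against the new force-field files rather than renaming blindly):

sed -i 's/CH2R/CH2r/g' mytop.itp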


-Justin


Thanks
Francesca



2012/1/31 Francesca Vitalini francesca.vitalin...@gmail.com


Thank you Justin for your quick reply. Unfortunately I cannot use a
more modern version of GROMACS, as my topology and .gro files were
first created for a reverse transformation from cg to fg and thus
require the 3.3.1 version and some specific .itp files that are
only present in that version. If I try to run it all with GROMACS
4.5, it crashes immediately.
I've also tried without the -cpo option but it doesn't change anything.
I've checked the permissions on the folder and, as I supposed, I have
total access to it, so that should not affect the results.
If I open with vi a file with the same name as the .xtc file that I
need for the script, write some junk into it just to try, and then
rerun the mdrun command, I don't get the error message anymore,
but GROMACS complains, saying
Reading file dynamin_dimer_PR1.tpr, VERSION 3.3.1 (single precision)
Loaded with Money

WARNING: Incomplete frame: nr 0 time 0
Segmentation fault
I have checked with gmxcheck the .trr input file as it was suggested
in another discussion, and apparently it is ok, so I really don't
know what to do.
Can you help me with that?
Thanks
Francesca


On 31 Jan 2012, at 14:33, Justin A. Lemkul wrote:

 
 
  Francesca Vitalini wrote:
  Actually the directory is my own and I created it in my
 home directory, so that shouldn't be a problem, as I have also created
 other files in the same directory without any problems so far.
  Other ideas? Thanks
  Francesca
  On 31 Jan 2012, at 13:02, Justin A. Lemkul wrote:
 
  francesca vitalini wrote:
  Hello GROMACS users!
  I'm trying to run a simple md script after running an energy
 minimization script on my system and I'm getting a weird error message
  Reading file dynamin_dimer_PR1.tpr, VERSION 3.3.1 (single
precision)
  Loaded with Money
  ---
  Program mdrun, VERSION 3.3.1
  Source code file: gmxfio.c, line: 706
  Can not open file:
  coarse.xtc
  it is strange as the coarse.xtc file should be created by
running the mdrun command.
  Sounds to me like you don't have permission to write in the
working directory.
 
  -Justin
 
  That is my submission command line:
  $MYGROMACSPATH/bin/mdrun -v  -s dynamin_dimer_PR1.tpr  -o
dynamin_dimer_PR1.trr  -cpo dynamin_dimer_PR1.cpt -c
dynamin_dimer_PR1.gro
  the .tpr file had been previously created by grompp
 
  I don't know if this is a problem or not, but I just noticed it.
 If you're using version 3.3.1, the -cpo option doesn't exist.
 mdrun won't exit with a fatal error in this case, but you still
shouldn't be using it.
 
  Aside from that, I would suggest you use a more modern version of
Gromacs (4.5.5) rather than one that is certifiably ancient.  There
may well have been some bug that was fixed 6 years ago that no one
even remembers ;)
 
  $MYGROMACSPATH/bin/grompp -v -f pr1.mdp -c
dynamin_dimer_EM_solvated.gro -p dynamin_dimer_fg.top -o
dynamin_dimer_PR1.tpr
  and the .mdp file I'm using is
 
  The .mdp file contains a number of redundancies, which should
have caused grompp to fail.  Also probably irrelevant, but worth noting.
 
  -Justin
 
  cpp =  /lib/cpp
  constraints =  all-bonds
  integrator  =  md
  tinit   =  0.0
  dt  =  0.002
  nsteps  =  4
  nstcomm =  0
  nstxout =  1
  nstvout =  1
  nstfout =  1
  nstlog  =  1
  nstenergy   =  100
  nstxtcout


Re: [gmx-users] mdrun-gpu error

2012-01-19 Thread Mark Abraham

On 19/01/2012 8:45 PM, aiswarya pawar wrote:

Mark,

The normal mdrun also hangs, thus not generating any output.


OK. It's your problem to solve... keep simplifying stuff until you can 
isolate a small number of possible causes. Top of the list is file 
system availability.


Mark



On Thu, Jan 19, 2012 at 3:09 AM, Mark Abraham mark.abra...@anu.edu.au wrote:


On 19/01/2012 2:59 AM, aiswarya.pa...@gmail.com wrote:

Hi,

It goes into running mode but hangs there for long hours without 
generating any data in the output file. And I am not able to figure out the 
error about file_loc. Does anyone know what's going wrong?


No, but you should start trying to simplify what you're doing to
see where the problem lies. Does normal mdrun work?

Mark



Thanks
Sent from my BlackBerry® on Reliance Mobile, India's No. 1 Network. Go for 
it!

-Original Message-
From: Szilárd Páll szilard.p...@cbr.su.se
Sender: gmx-users-boun...@gromacs.org
Date: Wed, 18 Jan 2012 14:47:59
To: Discussion list for GROMACS users gmx-users@gromacs.org
Reply-To: Discussion list for GROMACS users gmx-users@gromacs.org
Subject: Re: [gmx-users] mdrun-gpu error

Hi,

Most of those are just warnings, the only error I see there comes from
the shell, probably an error in your script.

Cheers,
--
Szilárd



On Wed, Jan 18, 2012 at 12:27 PM, aiswarya pawar aiswarya.pa...@gmail.com wrote:

Hi users,

I am running mdrun on GPU. I receive errors such as:

WARNING: This run will generate roughly 38570 Mb of data


WARNING: OpenMM does not support leap-frog, will use velocity-verlet
integrator.


WARNING: OpenMM supports only Andersen thermostat with the
md/md-vv/md-vv-avek integrators.


WARNING: OpenMM supports only Monte Carlo barostat for pressure coupling.


WARNING: OpenMM provides contraints as a combination of SHAKE, SETTLE and
CCMA. Accuracy is based on the SHAKE tolerance set by the shake_tol
option.

/bin/cat: file_loc: No such file or directory


and the job is running but nothing is written into the .xtc, .trr, .edr files.
What could have gone wrong?

--
Aiswarya  B Pawar

Bioinformatics Dept,
Indian Institute of Science
Bangalore













--
Aiswarya  B Pawar

Bioinformatics Dept,
Indian Institute of Science
Bangalore







Re: [gmx-users] mdrun-gpu error

2012-01-19 Thread Szilárd Páll
And sorting out where the /bin/cat error comes from because that is
surely not a Gromacs message!
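That message is the kind a failed shell line of roughly this shape would
produce (the variable and file name here are purely illustrative):

GPU_ID=`/bin/cat file_loc`

i.e. the submission script apparently tries to read a file called file_loc
that does not exist.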

Cheers,
--
Szilárd





Re: [gmx-users] mdrun-gpu error

2012-01-19 Thread aiswarya pawar
Does the Tesla card have anything to do with the error? I am using an Nvidia
Tesla S1070 1U server.


On Thu, Jan 19, 2012 at 8:37 PM, Szilárd Páll szilard.p...@cbr.su.se wrote:

 And sorting out where the /bin/cat error comes from because that is
 surely not a Gromacs message!

 Cheers,
 --
 Szilárd







-- 
Aiswarya  B Pawar

Bioinformatics Dept,
Indian Institute of Science
Bangalore

Re: [gmx-users] mdrun-gpu error

2012-01-19 Thread aiswarya pawar
When I tried running it again, I got this error:

Cuda error in file
'/home/staff/sec/secdpal/software/openmm_tesla/platforms/cuda/./src/kernels//bbsort.cu'
in line 176 : unspecified launch failure.
/bin/cat: file_loc: No such file or directory

On Thu, Jan 19, 2012 at 8:45 PM, aiswarya pawar aiswarya.pa...@gmail.com wrote:

 Does the Tesla card have anything to do with the error? I am using an Nvidia
 Tesla S1070 1U server.


 On Thu, Jan 19, 2012 at 8:37 PM, Szilárd Páll szilard.p...@cbr.su.se wrote:

 And sorting out where the /bin/cat error comes from because that is
 surely not a Gromacs message!

 Cheers,
 --
 Szilárd




Re: [gmx-users] mdrun-gpu error

2012-01-19 Thread Szilárd Páll
That's a generic GPU kernel launch failure, which can mean anything
from faulty hardware to a bad driver to a messed-up installation.

Does the memory test run? Try to compile/install again and see if it works.

--
Szilárd
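
A sketch of the kind of sanity checks this suggests, assuming the CUDA SDK
samples are built and that mdrun-gpu is the GROMACS 4.5 OpenMM build (check
mdrun-gpu -h for the exact -device keys on your version):

nvidia-smi                     # is the driver loaded and the card visible?
./deviceQuery                  # CUDA SDK sample: does the runtime see the GPU?
# ask mdrun-gpu for a full GPU memory test before the run starts
mdrun-gpu -device "OpenMM:platform=Cuda,memtest=full,deviceid=0" -s topol.tpr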




Re: [gmx-users] mdrun-gpu error

2012-01-19 Thread aiswarya pawar
Szilárd,

I did a memory test run yesterday and it went fine, but today I received this
error. So you mean to say the Tesla card version has nothing to do with this,
right?

Thanks

On Thu, Jan 19, 2012 at 11:09 PM, Szilárd Páll szilard.p...@cbr.su.se wrote:

 That's a generic GPU kernel launch failure, which can mean anything
 from faulty hardware to a bad driver to a messed-up installation.

 Does the memory test run? Try to compile/install again and see if it works.

 --
 Szilárd




Re: [gmx-users] mdrun-gpu error

2012-01-19 Thread Justin A. Lemkul



aiswarya pawar wrote:

Szilárd,

I did a memory test run yesterday and it went fine, but today I received this
error. So you mean to say the Tesla card version has nothing to do with this,
right?




You're trying to solve multiple problems at once.  You told Mark that the normal 
mdrun executable (which works independently of any GPU components) also hangs, 
so either your filesystem is faulty or your installation procedure produced 
nonfunctional executables.


You're posting bits and pieces of information, which makes it incredibly hard 
for anyone to help you.  Let's recap and try again.  Please provide:


1. The Gromacs version you're using
2. Description of the hardware (GPU and non-GPU components)
3. Installation procedure for Gromacs and any of the prerequisite software and 
libraries that were required, including versions
4. The exact command(s) you're issuing, including the full script that is 
causing a problem


-Justin




Re: [gmx-users] mdrun-gpu error

2012-01-18 Thread Szilárd Páll
Hi,

Most of those are just warnings; the only error I see there comes from
the shell, probably from an error in your script.

Cheers,
--
Szilárd
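
For illustration, a wrapper-script bug of the following shape would produce
exactly the "/bin/cat: file_loc: No such file or directory" message; the
variable name and path here are hypothetical:

#!/bin/sh
file_loc=/scratch/run1/topol.tpr
cat file_loc        # wrong: cat gets the literal word "file_loc", not the path
cat "$file_loc"     # intended: the variable expands to the actual file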



On Wed, Jan 18, 2012 at 12:27 PM, aiswarya pawar
aiswarya.pa...@gmail.com wrote:
 Hi users,

 I am running mdrun on a GPU. I receive an error such as:

 WARNING: This run will generate roughly 38570 Mb of data


 WARNING: OpenMM does not support leap-frog, will use velocity-verlet
 integrator.


 WARNING: OpenMM supports only Andersen thermostat with the
 md/md-vv/md-vv-avek integrators.


 WARNING: OpenMM supports only Monte Carlo barostat for pressure coupling.


 WARNING: OpenMM provides contraints as a combination of SHAKE, SETTLE and
 CCMA. Accuracy is based on the SHAKE tolerance set by the shake_tol
 option.

 /bin/cat: file_loc: No such file or directory


 and the job is running, but nothing is written into the .xtc, .trr, and .edr files.
 What could have gone wrong?

 --
 Aiswarya  B Pawar

 Bioinformatics Dept,
 Indian Institute of Science
 Bangalore





Re: [gmx-users] mdrun-gpu error

2012-01-18 Thread aiswarya . pawar
Hi,

It goes into running mode but hangs there for many hours without generating
any data in the output file. And I am not able to figure out the file_loc
error. Does anyone know what's going wrong?

Thanks
Sent from my BlackBerry® on Reliance Mobile, India's No. 1 Network. Go for it!


Re: [gmx-users] mdrun-gpu error

2012-01-18 Thread Mark Abraham

On 19/01/2012 2:59 AM, aiswarya.pa...@gmail.com wrote:

Hi,

It goes into running mode but hangs there for many hours without generating
any data in the output file. And I am not able to figure out the file_loc
error. Does anyone know what's going wrong?


No, but you should start trying to simplify what you're doing to see 
where the problem lies. Does normal mdrun work?


Mark
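
A sketch of that simplification, with hypothetical directory and file names
(the test .tpr is assumed to have a small nsteps so it finishes quickly):

cd /path/to/rundir && date > write_test.txt && cat write_test.txt  # file system writable?
mdrun -s topol.tpr -deffnm cputest   # does the plain CPU mdrun produce output?
# only if both work is it worth returning to mdrun-gpu on the same input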




Re: [gmx-users] mdrun stuck with PME on BlueGene/P

2011-12-05 Thread Mark Abraham

On 5/12/2011 9:03 AM, Rongliang Wu wrote:

Dear all,

I have compiled the gromacs 4.5.5 version on BlueGene/P, both for the
front end and for the BG/P back end, using the following script:

with the front end as an example
-- 


version=$1

PREFIX=/home/users/wurl/software/gromacs$version

export ac_cv_sizeof_int=4
export ac_cv_sizeof_long_int=4
export ac_cv_sizeof_long_long_int=8

export CC=xlc_r
export CFLAGS="-O3 -I/.../fftw3.3/include -qarch=auto -qtune=auto"
export CPPFLAGS="-I/.../fftw3.3/include"
export CXX=xlc_r
export CXXFLAGS="-O3 -qarch=auto -qtune=auto"

export F77=xlf_r
export FFLAGS="-O3 -qnoprefetch -qarch=auto -qtune=auto"

export LDFLAGS="-L/.../fftw3.3/lib"
export LIBS="-lmass"

echo make distclean
make distclean

../configure --prefix=$PREFIX \
 --with-fft=fftw3 \
 --disable-ppc-altivec \
 --without-x \
 --disable-software-sqrt \
 --enable-ppc-sqrt \
 --disable-float \
 --disable-shared

-- 



either way, the program gets stuck, doing nothing, when using PME:

---
Back Off! I just backed up md.log to ./#md.log.10#
Getting Loaded...
Reading file topol.tpr, VERSION 4.5.5 (double precision)
Loaded with Money
---

I compiled a debug version of mdrun, and it pointed to a problem in fftw:

---
Program received signal SIGINT, Interrupt.
[Switching to Thread -134504448 (LWP 31644)]
0x1010dd1c in fftwf_cpy2d ()
--

my compilation of fftw is:
--
version=$1

PREFIX=/home/users/wurl/software/fftw$version

export ac_cv_sizeof_int=4
export ac_cv_sizeof_long_int=4
export ac_cv_sizeof_long_long_int=8

export CC=xlc_r
export CFLAGS="-O5 -qarch=auto -qtune=auto"
export CXX=xlc_r
export CXXFLAGS="-O5 -qarch=auto -qtune=auto"

export F77=xlf_r
export FFLAGS="-O5 -qnoprefetch -qarch=auto -qtune=auto"

export LIBS="-lmass"

echo make distclean
make distclean

../configure --prefix=$PREFIX \
 --disable-fortran \
# --enable-float \

--

When not using PME, the program ran fine. I also found the same problem 
reported on the mailing list, but that thread did not seem to solve it 
exactly. Does anyone know how to solve this problem? I cannot do 
without PME for accurate MD simulations :)


If you want mdrun available on both front and back ends, you will need 
to link each with the corresponding FFTW library. It's not clear to me 
that you are doing that. You will also need to enable MPI to get any 
value out of the back end version.


Mark
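
A minimal sketch of a matching back-end build, assuming the BG/P MPI compiler
wrapper is mpixlc_r and a separate FFTW prefix (wrapper names and paths are
assumptions; adjust to the site's toolchain):

# back-end FFTW, cross-compiled for the compute nodes
CC=mpixlc_r ../configure --prefix=$HOME/fftw3.3-bgp --disable-fortran
make && make install

# back-end GROMACS: point at that FFTW and enable MPI
CC=mpixlc_r CPPFLAGS=-I$HOME/fftw3.3-bgp/include \
LDFLAGS=-L$HOME/fftw3.3-bgp/lib \
../configure --prefix=$HOME/gromacs-bgp --enable-mpi --with-fft=fftw3 \
  --disable-float --program-suffix=_mpi
make mdrun && make install-mdrun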


Re: [gmx-users] mdrun -rerun (not reproducible energy values?)

2011-11-24 Thread Mark Abraham

On 25/11/2011 2:28 AM, Vasileios Tatsis wrote:

Dear Gromacs Users,

I am using the -rerun option of mdrun to re-analyze a trajectory. 
Thus, I tried to rerun the same trajectory (md.xtc) with exactly the 
same md.tpr.
But the bonded interactions are not computed or written to the log 
file or to the .edr file, resulting in completely different energy 
values from the initial log and edr files.


That does not sound possible if the .tpr is the same. With bond or angle 
constraints, some bonded terms would not appear in either version, of 
course.


Mark


I am using the following command, in order to read the coordinates stored in 
the .xtc file and compute the potential energy:
mdrun -rerun md.xtc -s md.tpr

Thanks in advance for your help
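
As a sketch of how the two energy sets can be compared side by side (GROMACS
4.x tool names; the output file names are arbitrary):

mdrun -rerun md.xtc -s md.tpr -e rerun.edr -g rerun.log
echo "Potential" | g_energy -f md.edr -o orig_pe.xvg
echo "Potential" | g_energy -f rerun.edr -o rerun_pe.xvg
# with constraints = all-bonds in the .mdp, the constrained bond terms are
# absent from BOTH runs, as noted above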





Re: [gmx-users] MDRun -append error

2011-11-16 Thread Roland Schulz
On Wed, Nov 16, 2011 at 4:11 PM, xianqiang xianqiang...@126.com wrote:

   Hi, all

 I just restarted a simulation with 'mpirun -np 8 mdrun -pd yes -s md_0_1.tpr
 -cpi state.cpt -append'.

 However, the following error appears:


 Output file appending has been requested,
 but some output files listed in the checkpoint file state.cpt
 are not present or are named differently by the current program:
 output files present: traj.xtc
 output files not present or named differently: md_0_1.log md_0_1.edr

 ---
 Program mdrun, VERSION 4.5.3
 Source code file: ../../../gromacs-4.5.3/src/gmxlib/checkpoint.c, line:
 2139

 Fatal error:
 File appending requested, but only 1 of the 3 output files are present
 For more information and tips for troubleshooting, please check the GROMACS
 website at http://www.gromacs.org/Documentation/Errors


 The two files that cannot be found are located in the same directory
 as 'traj.xtc', so why can gromacs not find them?

Maybe they are not readable? Can you look at the log file (e.g. using
less)?

Roland
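
A quick sketch of checks along these lines, using the file names from the
error message:

ls -l traj.xtc md_0_1.log md_0_1.edr   # all three present and readable?
less md_0_1.log                        # does the log open at all?
gmxdump -cp state.cpt | head -40       # what the checkpoint itself recorded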







-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309

Re: [gmx-users] mdrun did not support large file offsets

2011-08-18 Thread Justin A. Lemkul



Bert wrote:

Dear gmx-users,

   When I continued a run on my x86_64 linux clusters using the command
mdrun -deffnm prod -cpi prod.cpt -append, I got the following errors:

Program mdrun, VERSION 4.5.4
Source code file: checkpoint.c, line: 1734
Fatal error:
The original run wrote a file called 'prod.xtc' which is larger than 2 GB,
but mdrun did not support large file offsets. Can not append. Run mdrun with
-noappend
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors

I also tried to recompile gromacs with the alternative option
--enable-largefile, but it still did not work. Then I compared the config.log
files generated with --enable-largefile and with --disable-largefile (the
default); the two files were almost the same.

How to solve this problem? Any suggestions are appreciated. Thanks in
advance.



The error message gives you the solution. Use the -noappend option and 
concatenate your output files later.


-Justin
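
A sketch of that route; the part numbering follows GROMACS 4.5's .partNNNN
naming, so the exact file names depend on how often the run has restarted:

mdrun -deffnm prod -cpi prod.cpt -noappend   # writes prod.part0002.* files
trjcat -f prod.xtc prod.part0002.xtc -o prod_all.xtc
eneconv -f prod.edr prod.part0002.edr -o prod_all.edr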

--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




Re: [gmx-users] mdrun-gpu, fatal error: reading tpx file (topol.tpr) version 73 with version 71 program

2011-08-14 Thread Justin A. Lemkul



H.Ghorbanfekr wrote:

Hi ,

I'm using mdrun-gpu  for testing gpubench demos.
But I've got this error message:

Fatal error:
reading tpx file (topol.tpr) version 73 with version 71 program

I installed different versions of gromacs, from 4.5 to 4.5.4, and the 
gpu-gromacs beta 1 and 2 versions.  
But it still doesn't work. Any idea on this?




Gromacs programs are backwards-compatible, but if the .tpr file version has 
changed between revisions, they are not forward-compatible.  A version 73 .tpr 
file is from Gromacs version 4.5.4, so the error comes when you try to use an 
older Gromacs version to run it.  Stick with one version of Gromacs (preferably 
the newest) and use it for everything.  Having multiple versions lying around 
unnecessarily can be problematic when you might wind up using executables from 
different installations.


-Justin
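
A sketch of how to confirm which build each tool comes from (GROMACS 4.5
binaries accept -version; the grompp input names are assumptions):

which grompp mdrun-gpu    # do both come from the same installation?
grompp -version | head -2
mdrun-gpu -version | head -2
# regenerate the .tpr with the grompp that matches the mdrun-gpu build
grompp -f md.mdp -c conf.gro -p topol.top -o topol.tpr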

--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




Re: [gmx-users] mdrun-gpu, fatal error: reading tpx file (topol.tpr) version 73 with version 71 program

2011-08-14 Thread H.Ghorbanfekr
I used the newest version of gromacs, 4.5.4 (and did not install the beta GPU
version), and now everything works.
Thanks for your reply.


Re: [gmx-users] mdrun crashes with 'One or more interactions were multiple assigned in the domain decompostion'

2011-08-12 Thread Mark Abraham

On 12/08/2011 9:55 PM, Sebastian Breuers wrote:

Dear all,

searching for the mentioned error message I found a bug report for 
mdrun. It seemed to be fixed, but in my setup it appears again and I 
am not sure if I could do something about it.


I did not attach the tpr file since it is bounced by the mailing list 
and I'd like to get at least a hint without waiting for approval of the 
rejected mail. :)
The simulation crashes with 64 CPUs after step 11237000 with the 
following entry in the log file:


---
Program mdrun, VERSION 4.5.4
Source code file: 
/home/breuerss/local/src/gromacs-4.5.4/src/mdlib/domdec_top.c, line: 352


Software inconsistency error:
One or more interactions were multiple assigned in the domain 
decompostion
For more information and tips for troubleshooting, please check the 
GROMACS

website at http://www.gromacs.org/Documentation/Errors
---

That means that the simulation already ran for some time. I could also 
finish some runs successfully with the very same topology but 
different simulation parameters.


For any help or hints on how I could fix it I would be grateful.


This one really can't be managed. If you have a checkpoint file from a 
time close before the crash, then it may be possible for you to 
reproduce the error, and so to produce a starting point just before the 
error. Now perhaps a developer could reproduce the error with a view to 
fixing it...


More useful from your point of view would be restarting from that 
checkpoint file on a different number of processors, to try to get past 
the problematic point.


A thorough description of your system, preparation and simulation 
protocol might help someone else suggest a cause, but I wouldn't hold my 
breath :)


Mark
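
A sketch of that restart, assuming the checkpoint written shortly before the
crash is state.cpt; the processor count is arbitrary, just different from the
original 64:

mpirun -np 32 mdrun_mpi -s topol.tpr -cpi state.cpt -append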


Re: [gmx-users] mdrun crashes with 'One or more interactions were multiple assigned in the domain decompostion'

2011-08-12 Thread lina
On Fri, Aug 12, 2011 at 7:55 PM, Sebastian Breuers
breue...@uni-koeln.de wrote:
 Dear all,

 searching for the mentioned error message I found a bug report for mdrun. It
 seemed to be fixed, but in my setup it appears again and I am not sure if I
 could do something about it.

 I did not attach the tpr file since it is bounced by the mailing list and
 I'd like to get at least a hint without waiting for approval of the rejected
 mail. :)
 The simulation crashes with 64 CPUs after step 11237000 with the following
 entry in the log file:

 ---
 Program mdrun, VERSION 4.5.4
 Source code file:
 /home/breuerss/local/src/gromacs-4.5.4/src/mdlib/domdec_top.c, line: 352

 Software inconsistency error:
 One or more interactions were multiple assigned in the domain decompostion
 For more information and tips for troubleshooting, please check the GROMACS
 website at http://www.gromacs.org/Documentation/Errors
 ---

 That means that the simulation already ran for some time. I could also
 finish some runs successfully with the very same topology but different
 simulation parameters.

 For any help or hints on how I could fix it I would be grateful.

Have you tried to re-submit it, using -cpi state.cpt -append,

to see whether it can continue or not?






-- 
Best Regards,

lina


Re: [gmx-users] mdrun crashes with 'One or more interactions were multiple assigned in the domain decompostion'

2011-08-12 Thread Sebastian Breuers

Hey,

thank you both for the response. I was at least able to restart the system,
and it is running beyond the crashing point. Keep your fingers crossed. :)


Kind regards


Sebastian








--
_

Sebastian Breuers   Tel: +49-221-470-4108
EMail: breue...@uni-koeln.de

Universität zu Köln University of Cologne
Department für Chemie   Department of Chemistry
Organische Chemie   Organic Chemistry

Greinstraße 4   Greinstraße 4
Raum 325Room 325
D-50939 KölnD-50939 Cologne, Federal Rep. of Germany
_


Re: [gmx-users] mdrun crashes with 'One or more interactions were multiple assigned in the domain decompostion'

2011-08-12 Thread Da-Wei Li
hello

Just to share information: my parallel MD runs also crash (very rarely), but I
can always bypass the crash point using the .cpt files.

dawei

On Fri, Aug 12, 2011 at 10:02 AM, Sebastian Breuers
breue...@uni-koeln.dewrote:

 Hey,

 thank you both for the response. I at least could restart the system. And
 it is running beyond the crashing point. Keep the fingers crossed. :)

 Kind regards


 Sebastian

 Am 12.08.2011 15:41, schrieb lina:

 On Fri, Aug 12, 2011 at 7:55 PM, Sebastian Breuers
 breue...@uni-koeln.de  wrote:

 Dear all,

 searching for the mentioned error message I found a bug report for mdrun.
 It
 seemed to be fixed, but in my setup it appears again and I am not sure if
 I
 could do something about it.

 I did not attach the tpr file since it is bounced by the mailing list and
 I'd like to get at least a hint without waitng for approval of the
 rejected
 mail. :)
 The simulation crashes with 64 CPUs after step 11237000 with the
 following
 entry in the log file:

 --**-
 Program mdrun, VERSION 4.5.4
 Source code file:
 /home/breuerss/local/src/**gromacs-4.5.4/src/mdlib/**domdec_top.c, line:
 352

 Software inconsistency error:
 One or more interactions were multiple assigned in the domain
 decompostion
 For more information and tips for troubleshooting, please check the
 GROMACS
 website at 
 http://www.gromacs.org/**Documentation/Errorshttp://www.gromacs.org/Documentation/Errors
 --**-

 That means that the simulation already ran for some time. I could also
 finish some runs successfully with the very same topology but different
 simulation parameters.

 For any help or hints how I could fix it I would be grateful.


 Have you tried to re-submit it, use the -cpi state.cpt -append

 and see whether it can continue or not?


 Best regards


 Sebastian

 --
 __**__**
 _

 Sebastian Breuers   Tel: +49-221-470-4108
 EMail: breue...@uni-koeln.de

 Universität zu Köln University of Cologne
 Department für Chemie   Department of Chemistry
 Organische Chemie   Organic Chemistry

 Greinstraße 4   Greinstraße 4
 Raum 325Room 325
 D-50939 KölnD-50939 Cologne, Federal Rep. of Germany
 __**__**
 _

 --
 gmx-users mailing listgmx-users@gromacs.org
 http://lists.gromacs.org/**mailman/listinfo/gmx-usershttp://lists.gromacs.org/mailman/listinfo/gmx-users
 Please search the archive at
 http://www.gromacs.org/**Support/Mailing_Lists/Searchhttp://www.gromacs.org/Support/Mailing_Lists/Searchbefore
  posting!
 Please don't post (un)subscribe requests to the list. Use the www
 interface
 or send it to gmx-users-requ...@gromacs.org.
 Can't post? Read http://www.gromacs.org/Support/Mailing_Lists





 --
 _________________________________________________

 Sebastian Breuers   Tel: +49-221-470-4108
 EMail: breue...@uni-koeln.de

 Universität zu Köln University of Cologne
 Department für Chemie   Department of Chemistry
 Organische Chemie   Organic Chemistry

 Greinstraße 4   Greinstraße 4
 Raum 325Room 325
 D-50939 KölnD-50939 Cologne, Federal Rep. of Germany
 _________________________________________________
 --
 gmx-users mailing listgmx-users@gromacs.org
 http://lists.gromacs.org/mailman/listinfo/gmx-users
 Please search the archive at
 http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
 Please don't post (un)subscribe requests to the list. Use the www interface
 or send it to gmx-users-requ...@gromacs.org.
 Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

Re: [gmx-users] mdrun -nc

2011-06-14 Thread Justin A. Lemkul



Hsin-Lin Chiang wrote:

Hi,

I tried GROMACS 4.5.4 these days; the last version I used was 4.0.5.
I found that when I add --enable-threads during installation,
I can use mdrun -nc 12 to run on 12 CPUs together within one machine.


I assume you mean -nt?

It also amazes me that when I type top to check the job, there is only one
process on the computer and the CPU utilization is 1200%!!


Sounds about right.  I see the same on my dual-core workstation.  One process, 2 
threads, and just less than 200% CPU usage.


But when I tried to execute it on two machines, the second machine
didn't work.




You can't execute threads over multiple machines.  For that you need MPI, not
threading (they are mutually exclusive).  You haven't provided much detail on
what you actually did in this case, and "didn't work" doesn't exactly provide
any relevant diagnostic information.



I don't need mdrun_mpi any more because mdrun -nc is faster than mdrun_mpi.
That confuses me.
Am I right to use mdrun -nc to run a parallel job in this way?


For a single, multi-core workstation, mdrun -nt is correct.
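
For example, on a 12-core workstation something like this would be the
intended usage (a sketch; "md" here is just a placeholder base file name):

mdrun -nt 12 -deffnm md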


Is the result the same as that produced by mdrun_mpi?


A variety of factors influence whether or not the results are the same.

http://www.gromacs.org/Documentation/Terminology/Reproducibility

(Actually, I never use mdrun_mpi on more than one machine, since the ethernet
between machines is very slow here.)


If mdrun -nc is available,
do we have another command that supports more than one CPU in the same machine?



That's what threading is doing, assuming you're invoking the command correctly.
As stated above, the option is -nt, not -nc.  mdrun doesn't check whether or not
command line arguments are actually valid, so if you're using -nc you're not
actually doing threading, but the 1200% usage suggests you probably are.


-Justin

--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin


--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] mdrun -nc

2011-06-14 Thread Joshua L. Phillips
On most of my multi-core machines, an attempt is made to detect the
number of threads to start at run-time (there may be a check for the
MAXIMUM number of threads at compile-time, but a developer would need to
chime in to determine if this is the case). For instance, I have a dual
quadcore machine (eight cores total). When I just run a command like so
on 4.5.4:

mdrun -v -deffnm trp-cage

I get:

Starting 8 threads
Loaded with Money

Making 2D domain decomposition 4 x 2 x 1
starting mdrun 'Good ROcking Metal Altar for Chronical Sinners in water'
100000 steps,    200.0 ps.
step 200, will finish Tue Jun 14 12:01:01 2011  imb F  8%
...

So, by default the threaded version will start the maximum number of
threads that it can (if it can find a good decomposition for all cores,
otherwise fewer threads are used). So, -nt is used to explicitly control
the number of threads used (not -nc).

As Justin said, for running across multiple machines, MPI is necessary
and precludes using threads.
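
A multi-node launch would instead go through your MPI launcher, roughly like
this (a sketch; the -np count and launcher name depend on your cluster, and
mdrun_mpi is assumed to be an MPI-enabled build):

mpirun -np 16 mdrun_mpi -v -deffnm trp-cage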

-- Josh

On Tue, 2011-06-14 at 11:15 -0400, Justin A. Lemkul wrote:
 
 Hsin-Lin Chiang wrote:
  Hi,
  
  I tried GROMACS 4.5.4 these days; the last version I used was 4.0.5.
  I found that when I add --enable-threads during installation,
  I can use mdrun -nc 12 to run on 12 CPUs together within one machine.
 
 I assume you mean -nt?
 
  It also amazes me that when I type top to check the job, there is only one
  process on the computer and the CPU utilization is 1200%!!
 
 Sounds about right.  I see the same on my dual-core workstation.  One
 process, 2 threads, and just less than 200% CPU usage.
 
  But when I tried to execute it on two machines, the second machine
  didn't work.
  
 
 You can't execute threads over multiple machines.  For that you need MPI, not
 threading (they are mutually exclusive).  You haven't provided much detail on
 what you actually did in this case, and "didn't work" doesn't exactly provide
 any relevant diagnostic information.
 
  I don't need mdrun_mpi any more because mdrun -nc is faster than mdrun_mpi.
  That confuses me.
  Am I right to use mdrun -nc to run a parallel job in this way?
 
 For a single, multi-core workstation, mdrun -nt is correct.
 
  Is the result the same as that produced by mdrun_mpi?
 
 A variety of factors influence whether or not the results are the same.
 
 http://www.gromacs.org/Documentation/Terminology/Reproducibility
 
  (Actually, I never use mdrun_mpi on more than one machine, since the ethernet
  between machines is very slow here.)
  
  If mdrun -nc is available,
  do we have another command that supports more than one CPU in the same machine?
  
 
 That's what threading is doing, assuming you're invoking the command correctly.
 As stated above, the option is -nt, not -nc.  mdrun doesn't check whether or not
 command line arguments are actually valid, so if you're using -nc you're not
 actually doing threading, but the 1200% usage suggests you probably are.
 
 -Justin
 
 -- 
 
 
 Justin A. Lemkul
 Ph.D. Candidate
 ICTAS Doctoral Scholar
 MILES-IGERT Trainee
 Department of Biochemistry
 Virginia Tech
 Blacksburg, VA
 jalemkul[at]vt.edu | (540) 231-9080
 http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin
 
 

-- 
Joshua L. Phillips
Ph.D. Candidate - School of Engineering
University of California, Merced
jphilli...@ucmerced.edu


-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] mdrun Fatal error: Domain decomposition does not support simple neighbor searching

2011-05-04 Thread Justin A. Lemkul



shivangi nangia wrote:

Dear gmx-users,

I have a cube (8 nm) of a system containing 1:1 water:methanol, a
polypeptide, Li ions, and 2,5-dihydroxybenzoic acid anions.

I am heating this system with no PBC ( evaporation).

The md.mdp file is:

; Run parameters
integrator  = md ; leap-frog integrator
nsteps  = 250000 ; 2 * 250000 = 500 ps, 0.5 ns
dt= 0.002 ; 2 fs
; Output control
nstxout = 1000  ; save coordinates every 2 ps
nstvout = 1000  ; save velocities every 2 ps
nstxtcout   = 1000  ; xtc compressed trajectory output every 2 ps
nstenergy   = 1000  ; save energies every 2 ps
nstlog  = 1000  ; update log file every 2 ps
; Bond parameters
continuation   = yes; Restarting after NVT
constraint_algorithm = lincs  ; holonomic constraints
constraints = all-bonds ; all bonds (even heavy atom-H bonds) constrained
lincs_iter  = 1  ; accuracy of LINCS
lincs_order = 4  ; also related to accuracy
; Neighborsearching
ns_type = simple
nstlist = 5  ; 10 fs
rlist= 1.0; short-range neighborlist cutoff (in nm)
rcoulomb = 1.0; short-range electrostatic cutoff (in nm)
rvdw = 1.0; short-range van der Waals cutoff (in nm)
; Electrostatics
;coulombtype = PME; Particle Mesh Ewald for long-range electrostatics
;pme_order   = 4  ; cubic interpolation
;fourierspacing = 0.16  ; grid spacing for FFT
; Temperature coupling is on
tcoupl  = V-rescale ; modified Berendsen thermostat
tc-grps = Protein Non-Protein   ; two coupling groups - more accurate
tau_t= 0.1 0.1   ; time constant, in ps
ref_t= 500 500   ; reference temperature, one for each group, in K
; Pressure coupling is on
pcoupl  = no
pcoupltype  = isotropic ; uniform scaling of box vectors
tau_p= 2.0; time constant, in ps
ref_p= 1.0; reference pressure, in bar
compressibility = 4.5e-5   ; isothermal compressibility of water, bar^-1
; Periodic boundary conditions
pbc  = no  ; no periodic boundary conditions
; Dispersion correction
DispCorr = no  ; no dispersion correction
; Velocity generation
gen_vel = no ; Velocity generation is off
;comm_mode
comm_mode = ANGULAR


I have run this system in the past without any errors; suddenly I am
constantly running into the following error when I submit my job to a
cluster queue (interactive runs fine):




Are the Gromacs versions the same?



Fatal error:
Domain decomposition does not support simple neighbor searching, use 
grid searching or use particle decomposition

For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors


With pbc = no, the only neighbor-search type that can be used is simple.
I tried changing the size of the cube but I still face the same problem.



The issue is not related to system size.


I am unable to understand what could be going wrong suddenly.



The error is pretty clear.  With simple searching, you have to run either in
serial or using particle decomposition (mdrun -pd).  As to why your input used
to work and now suddenly doesn't, consult your system administrator.  No one
here can likely answer this.  Sounds like someone's been playing with the
system-wide Gromacs installation such that you're not using the same version
you used to be, although I can't recall if/when DD ever supported simple
neighbor searching.
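
Concretely, either of these should get you past the error (a sketch; md.tpr
is a placeholder for your actual run input):

mdrun -nt 1 -s md.tpr     # serial run, no domain decomposition
mdrun -pd -s md.tpr       # particle decomposition instead of DD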


-Justin


Please guide

Thanks in advance,
SN





--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin


--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] mdrun segmentation fault

2011-04-11 Thread Justin A. Lemkul



shivangi nangia wrote:

Hello gmx users,

My system for NVT equilibration runs into a segmentation fault as soon as I
try to run it.

It does not give any warning or hint of what might be going wrong.
Since I am a new user I am having difficulty exploring the plausible 
reasons.


System: protein (polyhistidine), 20 2,5-dihydroxybenzoic acid anions,
1:1 water:methanol (~3000 molecules of each) in an 8 nm cube



I had run EM of the system using steepest descent. Outcome:

Steepest Descents converged to machine precision in 15 steps, but did
not reach the requested Fmax < 1000.

Potential Energy  =  1.5354981e+19
Maximum force     =  inf on atom 651
Norm of force     =  inf



You were already told that this is the source of your problem and any procedure 
is destined to fail.  What's more, you were given hints on how to solve your issue:


http://lists.gromacs.org/pipermail/gmx-users/2011-April/060268.html

-Justin

--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin


--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] mdrun segmentation fault

2011-04-10 Thread Mark Abraham

Hello gmx users,

My system for NVT equilibration runs into a segmentation fault as soon as
I try to run it.

It does not give any warning or hint of what might be going wrong.
Since I am a new user I am having difficulty exploring the plausible 
reasons.


System: protein (polyhistidine), 20 2,5-dihydroxybenzoic acid anions,
1:1 water:methanol (~3000 molecules of each) in an 8 nm cube



I had run EM of the system using steepest descent. Outcome:

Steepest Descents converged to machine precision in 15 steps, but did
not reach the requested Fmax < 1000.

Potential Energy  =  1.5354981e+19
Maximum force     =  inf on atom 651
Norm of force     =  inf


So that's broken already - you have enormous positive energy and
infinite forces. Stop there and fix it. Either your starting
configuration has severe atomic overlaps (go and visualize it with the
periodic box), or some of your topology is broken (try a box of methanol
on its own, try the protein in vacuum, try a single acid in vacuum).
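
Each of those component tests is just a short EM cycle, e.g. (a sketch; the
protein-only coordinate and topology file names here are placeholders for
your own):

grompp -f minim.mdp -c protein_only.gro -p protein_only.top -o em_test.tpr
mdrun -v -deffnm em_test

Whichever component fails to minimize the same way points at the broken part
of the topology.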


Mark


*The minim.mdp *is:
; minim.mdp - used as input into grompp to generate em.tpr
; Parameters describing what to do, when to stop and what to save
integrator  = steep  ; Algorithm (steep = steepest descent minimization)
emtol       = 1000.0 ; Stop minimization when the maximum force < 1000.0 kJ/mol/nm

emstep  = 0.02  ; Energy step size
nsteps      = 50000  ; Maximum number of (minimization) steps to perform


; Parameters describing how to find the neighbors of each atom and how to calculate the interactions
nstlist     = 1      ; Frequency to update the neighbor list and long range forces

ns_type = grid  ; Method to determine neighbor list (simple, grid)
rlist= 1.0; Cut-off for making neighbor list (short range forces)
coulombtype = PME; Treatment of long range electrostatic interactions
rcoulomb = 1.0; Short-range electrostatic cut-off
rvdw = 1.0; Short-range Van der Waals cut-off
pbc  = xyz   ; Periodic Boundary Conditions (yes/no)
constraints = none




*The nvt.mdp*:

title= hist NVT equilibration
define  = -DPOSRES  ; position restrain the protein
; Run parameters
integrator  = md ; leap-frog integrator
nsteps  = 50000 ; 2 * 50000 = 100 ps
dt= 0.002 ; 2 fs
; Output control
nstxout = 100; save coordinates every 0.2 ps
nstvout = 100; save velocities every 0.2 ps
nstenergy   = 100; save energies every 0.2 ps
nstlog  = 100; update log file every 0.2 ps
; Bond parameters
continuation   = no ; first dynamics run
constraint_algorithm = lincs  ; holonomic constraints
constraints = none ;
lincs_iter  = 1  ; accuracy of LINCS
lincs_order = 4  ; also related to accuracy
; Neighborsearching
ns_type = grid  ; search neighboring grid cells
nstlist = 5  ; 10 fs
rlist= 1.0; short-range neighborlist cutoff (in nm)
rcoulomb = 1.0; short-range electrostatic cutoff (in nm)
rvdw = 1.0; short-range van der Waals cutoff (in nm)
; Electrostatics
coulombtype = PME; Particle Mesh Ewald for long-range electrostatics
pme_order   = 4  ; cubic interpolation
fourierspacing = 0.16  ; grid spacing for FFT
; Temperature coupling is on
tcoupl  = V-rescale ; modified Berendsen thermostat
tc-grps = Protein Non-Protein   ; two coupling groups - more accurate
tau_t= 0.1 0.1   ; time constant, in ps
ref_t= 300 300   ; reference temperature, one for each group, in K
; Pressure coupling is off
pcoupl  = no ; no pressure coupling in NVT
; Periodic boundary conditions
pbc  = xyz; 3-D PBC
; Dispersion correction
DispCorr = EnerPres  ; account for cut-off vdW scheme
; Velocity generation
gen_vel = yes; assign velocities from Maxwell distribution
gen_temp = 300; temperature for Maxwell distribution
gen_seed = -1 ; generate a random seed


I tried to decrease the step size; that also runs into a seg fault error.


Kindly guide.

Thanks,
SN




-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

Re: [gmx-users] mdrun

2011-03-28 Thread Mark Abraham

On 28/03/2011 7:06 PM, michael zhenin wrote:

Dear all,
I am trying to run dynamics (mdrun) with GROMACS 4.5.3 in parallel on 8
processors, but it crashes after a while and never reaches the end.

The error message that pops up is:

1 particles communicated to PME node 4 are more than 2/3 times the cut-off
out of the domain decomposition cell of their charge group in dimension y.
This usually means that your system is not well equilibrated


So either your system is not well equilibrated, or your topologies don't 
match reality. See 
http://www.gromacs.org/Documentation/Terminology/Blowing_Up


Mark

My system contains protein, ATP and Mg. I have equilibrated it for
300 ps, divided into 3 parts.
The first 2 parts were the NVT equilibration (the first one was done
in order to equilibrate only the water) - 100 ps each.



The third was the NPT - 100 ps.
The system wasn't constrained.

My mdp file is :

title = OPLS wt nbd1 MD
; Run parameters
integrator = md ; leap-frog integrator
nsteps = 20000000 ; 1 * 20000000 = 20000 ps, 20 ns


dt = 0.001 ; 1 fs
; Output control
nstxout = 2000 ; save coordinates every 2 ps
nstvout = 2000 ; save velocities every 2 ps
nstxtcout = 2000 ; xtc compressed trajectory output every 2 ps
nstenergy = 2000 ; save energies every 2 ps


nstlog = 2000 ; update log file every 2 ps
; Bond parameters
continuation = yes ; Restarting after NPT
; Neighborsearching
ns_type = grid ; search neighboring grid cells
nstlist = 5 ; 10 fs
rlist = 1.0 ; short-range neighborlist cutoff (in nm)


rcoulomb = 1.0 ; short-range electrostatic cutoff (in nm)
rvdw = 1.0 ; short-range van der Waals cutoff (in nm)
; Electrostatics
coulombtype = PME ; Particle Mesh Ewald for long-range electrostatics
ewald_rtol = 1e-05


fourierspacing = 0.12 ; grid spacing for FFT
pme_order = 6 ;
; Temperature coupling is on
tcoupl = V-rescale ; modified Berendsen thermostat
tc-grps = Protein Non-Protein ; two coupling groups - more accurate


tau_t = 0.1 0.1 ; time constant, in ps
ref_t = 300 300 ; reference temperature, one for each group, in K
; Pressure coupling is on
pcoupl = Parrinello-Rahman ; Pressure coupling on in NPT
pcoupltype = isotropic ; uniform scaling of box vectors


tau_p = 1.0 ; time constant, in ps
ref_p = 1.0 ; reference pressure, in bar
compressibility = 4.5e-5 ; isothermal compressibility of water, bar^-1
; Periodic boundary conditions
pbc = xyz ; 3-D PBC

; Dispersion correction

DispCorr = EnerPres ; account for cut-off vdW scheme
; Velocity generation
gen_vel = no ; Velocity generation is off
morse = yes


I've searched the net for this error and didn't find any answer. Has
anybody seen this error, and what should I do about it?


Thanks,

Michael



-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

Re: [gmx-users] mdrun cannot append to file larger than 2Gb

2011-03-26 Thread Justin A. Lemkul



Warren Gallin wrote:

Hi,

Using GROMACS 4.5.3 I tried to continue an mdrun from a checkpoint, and got an
error that I have never seen before, to wit:

Program mdrun_mpi, VERSION 4.5.3
Source code file: 
/global/software/build/gromacs-4.5.3/gromacs/src/gmxlib/checkpoint.c, line: 1727

Fatal error:
The original run wrote a file called 'traj.xtc' which is larger than 2 GB, but 
mdrun did not support large file offsets. Can not append. Run mdrun with 
-noappend
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors



I've never had a problem with extending runs like this before.

As suggested, I then ran with -noappend set, and it ran fine, creating a new
set of numbered files for the output.  I am assuming that I'll be able to
combine the trajectory files at the end of the process.



Correct.
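
Once the run is done, trjcat stitches the numbered parts back together, along
the lines of (a sketch; check the actual part numbers mdrun -noappend wrote):

trjcat -f traj.xtc traj.part0002.xtc -o traj_whole.xtc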


Have I missed some fine point in compiling that has left mdrun a little broken?



Possibly, but it probably depends more on your system's architecture than 
anything you did.  There is a configure option --disable-largefile (which should 
be the default), so you'd have to check config.log to see if this was changed to 
--enable-largefile based on what the configuration script found.  Or you can 
re-compile and see if configuration fails with --enable-largefile and understand 
the reason.
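
A quick way to check might be something like this from the build directory (a
sketch, assuming the same autotools configure you built 4.5.3 with):

./configure --enable-largefile
grep -i largefile config.log

If configure fails here, or config.log shows the large-file test failing, that
would explain why the installed mdrun could not append past 2 GB.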


-Justin


Thanks,

Warren Gallin
-- 
gmx-users mailing listgmx-users@gromacs.org

http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

Can't post? Read http://www.gromacs.org/Support/Mailing_Lists



--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin


--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] mdrun the pr.trr

2011-03-22 Thread Erik Marklund

zen...@graduate.org wrote 2011-03-22 10.22:


when I type this command: szrun 2 2 mdrun_mpi -nice 0 -v -s pr.tpr -o
pr.trr -c b4md.pdb -g pr.log -e pr.edr
the process cannot run, and there is a problem:
t = 0.000 ps: Water molecule starting at atom 90777 can not be settled.
Check for bad contacts and/or reduce the timestep. Wrote pdb files with
previous and current coordinates.

I have set dt = 0.001 and nsteps = 1 in the em.mdp, but the command
(szrun 2 2 mdrun_mpi -nice 0 -v -s pr.tpr -o pr.trr -c b4md.pdb -g
pr.log -e pr.edr) still cannot run. How can I solve this problem?
Changing the timestep won't help. Your simulation crashes at t=0.0, which
means your system isn't properly equilibrated, or there are errors in
the topology.


--
---
Erik Marklund, PhD student
Dept. of Cell and Molecular Biology, Uppsala University.
Husargatan 3, Box 596, 75124 Uppsala, Sweden
phone: +46 18 471 4537    fax: +46 18 511 755
er...@xray.bmc.uu.se    http://folding.bmc.uu.se/

-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
