Re: [gmx-users] How does gromacs checkpoint works

2016-06-23 Thread Husen R
Hi Mark,


Thank you very much!

Regards,


Husen

On Thu, Jun 23, 2016 at 3:42 PM, Mark Abraham <mark.j.abra...@gmail.com>
wrote:

> Yes
>
> On Thu, Jun 23, 2016 at 9:54 AM Husen R <hus...@gmail.com> wrote:
>
> > Hi,
> >
> > Could you tell me the location of the code ?
> > is this the location of the code ->
> > gromacs-5.1.2/src/gromacs/gmxlib/checkpoint.cpp ?
> >
> > regards,
> >
> > Husen
> >
> > On Thu, Jun 23, 2016 at 2:23 PM, Mark Abraham <mark.j.abra...@gmail.com>
> > wrote:
> >
> > > Hi,
> > >
> > > There's only the code. All you have to do is write down everything that
> > you
> > > were going to need to read to do the next step (unless it's from the
> > .tpr).
> > > Add some checksums of the last pieces of output files, so you can help
> > the
> > > user not mangle their files upon restart. Decide how you're going to
> > > coordinate all your ranks/cores choosing to checkpoint at the same
> time.
> > > Pick a portable file format.
> > >
> > > Mark
> > >
> > > On Thu, Jun 23, 2016 at 4:15 AM Husen R <hus...@gmail.com> wrote:
> > >
> > > > Hi all,
> > > >
> > > > For academic purpose, I'm wondering how does checkpoint feature in
> > > Gromacs
> > > > works ?
> > > > is there any resource/tutorial that I can learn ?
> > > >
> > > >
> > > > Thank you in advance,
> > > >
> > > >
> > > > Husen


Re: [gmx-users] Build time/Build user mismatch, fatal error truncation of file *.xtc failed

2016-06-23 Thread Husen R
Hi,

I'm wondering: if I use GROMACS in a cluster environment, do I have to
install it on every node (at /usr/local/gromacs on each node), or is it
enough to install it on one node only (for example, the head node)?

Regards,

Husen
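
For reference, the messages below suggest the practical answer is a single
installation on a shared filesystem that every node can see. A minimal
sketch of a job script using such a shared install (the prefix is the one
from the log further down; sourcing GMXRC and the node counts are
assumptions, so treat it as an illustration only):

=====================================
#!/bin/bash
#SBATCH -N 2
#SBATCH -n 16
# Hypothetical sketch: all nodes use the same shared build, so there is no
# per-node installation and no build time/user mismatch between nodes.
source /mirror/source/gromacs/bin/GMXRC
mpirun gmx_mpi mdrun -cpi md_gmx.cpt -deffnm md_gmx
=====================================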



On Thu, Jun 23, 2016 at 3:41 PM, Mark Abraham <mark.j.abra...@gmail.com>
wrote:

> Hi,
>
> The only explanation is that that file is not in fact properly accessible
> if rank 0 is placed other than on "compute-node," which means your
> organization of file system / slurm / etc. aren't good enough for what
> you're doing.
>
> Mark
>
> On Thu, Jun 23, 2016 at 10:15 AM Husen R <hus...@gmail.com> wrote:
>
> > Hi,
> >
> > I still unable to find out the cause of the fatal error.
> > Previously, gromacs is installed in every nodes. That's the cause Build
> > time mismatch and Build user mismatch appeared.
> > Now, Build time mismatch and Build user mismatch issues are solved by
> > installing Gromacs in shared directory.
> >
> > I have tried to install gromacs in one node only (not in shared
> directory),
> > but the error appeared.
> >
> >
> > this is the error message if I exclude compute-node
> > "--exclude=compute-node" from nodelist in slurm sbatch. excluding other
> > nodes works fine.
> >
> >
> >
> >
> =
> > GROMACS:  gmx mdrun, VERSION 5.1.2
> > Executable:   /mirror/source/gromacs/bin/gmx_mpi
> > Data prefix:  /mirror/source/gromacs
> > Command line:
> >   gmx_mpi mdrun -cpi md_gmx.cpt -deffnm md_gmx
> >
> >
> > Running on 2 nodes with total 8 cores, 16 logical cores
> >   Cores per node:4
> >   Logical cores per node:8
> > Hardware detected on host head-node (the node of MPI rank 0):
> >   CPU info:
> > Vendor: GenuineIntel
> > Brand:  Intel(R) Core(TM) i7-2600 CPU @ 3.40GHz
> > SIMD instructions most likely to fit this hardware: AVX_256
> > SIMD instructions selected at GROMACS compile time: AVX_256
> >
> > Reading file md_gmx.tpr, VERSION 5.1.2 (single precision)
> > Changing nstlist from 10 to 20, rlist from 1 to 1.03
> >
> >
> > Reading checkpoint file md_gmx.cpt generated: Thu Jun 23 12:54:02 2016
> >
> >
> >   #ranks mismatch,
> > current program: 16
> > checkpoint file: 24
> >
> >   #PME-ranks mismatch,
> > current program: -1
> > checkpoint file: 6
> >
> > GROMACS patchlevel, binary or parallel settings differ from previous run.
> > Continuation is exact, but not guaranteed to be binary identical.
> >
> >
> > ---
> > Program gmx mdrun, VERSION 5.1.2
> > Source code file:
> >
> /home/necis/gromacsinstall/gromacs-5.1.2/src/gromacs/gmxlib/checkpoint.cpp,
> > line: 2216
> >
> > Fatal error:
> > Truncation of file md_gmx.xtc failed. Cannot do appending because of this
> > failure.
> > For more information and tips for troubleshooting, please check the
> GROMACS
> > website at http://www.gromacs.org/Documentation/Errors
> >
> >
> 
> >
> > On Thu, Jun 16, 2016 at 6:23 PM, Mark Abraham <mark.j.abra...@gmail.com>
> > wrote:
> >
> > > Hi,
> > >
> > > On Thu, Jun 16, 2016 at 12:24 PM Husen R <hus...@gmail.com> wrote:
> > >
> > > > On Thu, Jun 16, 2016 at 4:01 PM, Mark Abraham <
> > mark.j.abra...@gmail.com>
> > > > wrote:
> > > >
> > > > > Hi,
> > > > >
> > > > > There's just nothing special about any node at run time.
> > > > >
> > > > > Your script looks like it is building GROMACS fresh each time -
> > there's
> > > > no
> > > > > need to do that,
> > > >
> > > >
> > > > which part of my script ?
> > > >
> > >
> > > I can't tell how your script is finding its GROMACS installations, but
> > the
> > > advisory message says precisely that your runs are finding different
> > > installations...
> > >
> > >   Build time mismatch,
> > > current program: Sel Apr  5 13:37:32 WIB 2016
> > > checkpoint file: Rab Apr  6 09:44:51 WIB 2016
> > >
> > >   Build user mismatch,
> > > current program: pro@head-

Re: [gmx-users] Build time/Build user mismatch, fatal error truncation of file *.xtc failed

2016-06-23 Thread Husen R
Hi,

I am still unable to find the cause of the fatal error.
Previously, GROMACS was installed separately on every node; that is why the
Build time mismatch and Build user mismatch notes appeared.
Those mismatch notes are now gone after installing GROMACS in a shared
directory.

I also tried installing GROMACS on one node only (not in a shared
directory), but the error still appeared.


This is the error message when I exclude compute-node
("--exclude=compute-node") from the node list in the slurm sbatch script;
excluding any other node works fine.


=
GROMACS:  gmx mdrun, VERSION 5.1.2
Executable:   /mirror/source/gromacs/bin/gmx_mpi
Data prefix:  /mirror/source/gromacs
Command line:
  gmx_mpi mdrun -cpi md_gmx.cpt -deffnm md_gmx


Running on 2 nodes with total 8 cores, 16 logical cores
  Cores per node:4
  Logical cores per node:8
Hardware detected on host head-node (the node of MPI rank 0):
  CPU info:
Vendor: GenuineIntel
Brand:  Intel(R) Core(TM) i7-2600 CPU @ 3.40GHz
SIMD instructions most likely to fit this hardware: AVX_256
SIMD instructions selected at GROMACS compile time: AVX_256

Reading file md_gmx.tpr, VERSION 5.1.2 (single precision)
Changing nstlist from 10 to 20, rlist from 1 to 1.03


Reading checkpoint file md_gmx.cpt generated: Thu Jun 23 12:54:02 2016


  #ranks mismatch,
current program: 16
checkpoint file: 24

  #PME-ranks mismatch,
current program: -1
checkpoint file: 6

GROMACS patchlevel, binary or parallel settings differ from previous run.
Continuation is exact, but not guaranteed to be binary identical.


---
Program gmx mdrun, VERSION 5.1.2
Source code file:
/home/necis/gromacsinstall/gromacs-5.1.2/src/gromacs/gmxlib/checkpoint.cpp,
line: 2216

Fatal error:
Truncation of file md_gmx.xtc failed. Cannot do appending because of this
failure.
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors


On Thu, Jun 16, 2016 at 6:23 PM, Mark Abraham <mark.j.abra...@gmail.com>
wrote:

> Hi,
>
> On Thu, Jun 16, 2016 at 12:24 PM Husen R <hus...@gmail.com> wrote:
>
> > On Thu, Jun 16, 2016 at 4:01 PM, Mark Abraham <mark.j.abra...@gmail.com>
> > wrote:
> >
> > > Hi,
> > >
> > > There's just nothing special about any node at run time.
> > >
> > > Your script looks like it is building GROMACS fresh each time - there's
> > no
> > > need to do that,
> >
> >
> > which part of my script ?
> >
>
> I can't tell how your script is finding its GROMACS installations, but the
> advisory message says precisely that your runs are finding different
> installations...
>
>   Build time mismatch,
> current program: Sel Apr  5 13:37:32 WIB 2016
> checkpoint file: Rab Apr  6 09:44:51 WIB 2016
>
>   Build user mismatch,
> current program: pro@head-node [CMAKE]
> checkpoint file: pro@compute-node [CMAKE]
>
> This reinforces my impression that the view of your file system available
> at the start of the job script is varying with your choice of node subsets.
>
>
> > I always use this command to restart from checkpoint file -->  "mpirun
> > gmx_mpi mdrun -cpi [name].cpt -deffnm [name]".
> > as far as I know -cpi option is used to refer to checkpoint file as input
> > file.
> >  what I have to change in my script ?
> >
>
> Nothing about that aspect. But clearly your first run and the restart
> simulating loss of a node are finding different gmx_mpi binaries from their
> respective environments. This is not itself a problem, but it's probably
> not what you intend, and may be symptomatic of the same issue that leads to
> md_test.xtc not being accessible.
>
> Mark
>
>
> >
> > but the fact that the node name is showing up in the check
> > > that takes place when the checkpoint is read is not relevant to the
> > > problem.
> > >
> > > Mark
> > >
> > > On Thu, Jun 16, 2016 at 9:46 AM Husen R <hus...@gmail.com> wrote:
> > >
> > > > On Thu, Jun 16, 2016 at 2:32 PM, Mark Abraham <
> > mark.j.abra...@gmail.com>
> > > > wrote:
> > > >
> > > > > Hi,
> > > > >
> > > > > On Thu, Jun 16, 2016 at 9:30 AM Husen R <hus...@gmail.com> wrote:
> > > > >
> > > > > > Hi,
> > > > > >
> > > > > > Thank you for your reply !
> > > > > >
> >

Re: [gmx-users] How does gromacs checkpoint works

2016-06-23 Thread Husen R
Hi,

Could you tell me the location of the code?
Is it gromacs-5.1.2/src/gromacs/gmxlib/checkpoint.cpp?

regards,

Husen

On Thu, Jun 23, 2016 at 2:23 PM, Mark Abraham <mark.j.abra...@gmail.com>
wrote:

> Hi,
>
> There's only the code. All you have to do is write down everything that you
> were going to need to read to do the next step (unless it's from the .tpr).
> Add some checksums of the last pieces of output files, so you can help the
> user not mangle their files upon restart. Decide how you're going to
> coordinate all your ranks/cores choosing to checkpoint at the same time.
> Pick a portable file format.
>
> Mark
>
> On Thu, Jun 23, 2016 at 4:15 AM Husen R <hus...@gmail.com> wrote:
>
> > Hi all,
> >
> > For academic purpose, I'm wondering how does checkpoint feature in
> Gromacs
> > works ?
> > is there any resource/tutorial that I can learn ?
> >
> >
> > Thank you in advance,
> >
> >
> > Husen
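As a rough illustration of the recipe Mark outlines above: it amounts to
periodically saving everything needed to resume, plus checksums of the
output tails, written atomically so a restart never sees a half-finished
file. The real implementation is the C++ in
src/gromacs/gmxlib/checkpoint.cpp and uses a portable binary format; the
toy shell sketch below (all file names hypothetical) only shows the shape
of the idea.

=====================================
# Toy sketch, not GROMACS code. Assumes a run that keeps its resumable
# state in state.bin and appends its trajectory to md.xtc.
while true; do
    sleep 900                                   # default interval: 15 minutes
    # 1. record everything needed to take the next step (state, step count, ...)
    cp state.bin state.cpt.tmp
    # 2. checksum the tail of the output so a restart can detect files that
    #    were truncated or otherwise mangled in the meantime
    tail -c 1048576 md.xtc | sha256sum | cut -d' ' -f1 >> state.cpt.tmp
    # 3. (in a parallel run, all ranks must agree on the step at which to do this)
    # 4. rename atomically so a restart never reads a half-written checkpoint
    mv state.cpt.tmp state.cpt
done
=====================================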


[gmx-users] How does gromacs checkpoint works

2016-06-22 Thread Husen R
Hi all,

For academic purposes, I'm wondering how the checkpoint feature in GROMACS
works. Is there any resource or tutorial I can learn from?


Thank you in advance,


Husen


Re: [gmx-users] Build time/Build user mismatch, fatal error truncation of file *.xtc failed

2016-06-16 Thread Husen R
On Thu, Jun 16, 2016 at 4:01 PM, Mark Abraham <mark.j.abra...@gmail.com>
wrote:

> Hi,
>
> There's just nothing special about any node at run time.
>
> Your script looks like it is building GROMACS fresh each time - there's no
> need to do that,


Which part of my script?
I always use this command to restart from a checkpoint file: "mpirun
gmx_mpi mdrun -cpi [name].cpt -deffnm [name]".
As far as I know, the -cpi option points mdrun at the checkpoint file to
use as input.
What do I have to change in my script?


> but the fact that the node name is showing up in the check
> that takes place when the checkpoint is read is not relevant to the
> problem.
>
> Mark
>
> On Thu, Jun 16, 2016 at 9:46 AM Husen R <hus...@gmail.com> wrote:
>
> > On Thu, Jun 16, 2016 at 2:32 PM, Mark Abraham <mark.j.abra...@gmail.com>
> > wrote:
> >
> > > Hi,
> > >
> > > On Thu, Jun 16, 2016 at 9:30 AM Husen R <hus...@gmail.com> wrote:
> > >
> > > > Hi,
> > > >
> > > > Thank you for your reply !
> > > >
> > > > md_test.xtc is exist and writable.
> > > >
> > >
> > > OK, but it needs to be seen that way from the set of compute nodes you
> > are
> > > using, and organizing that is up to you and your job scheduler, etc.
> > >
> > >
> > > > I tried to restart from checkpoint file by excluding other node than
> > > > compute-node and it works.
> > > >
> > >
> > > Go do that, then :-)
> > >
> >
> > I'm building a simple system that can respond to node failure. if failure
> > occured on node A, than the application has to be restarted and that node
> > has to be excluded.
> > this should apply to all node including this 'compute-node'.
> >
> > >
> > >
> > > > only '--exclude=compute-node' that produces this error.
> > > >
> > >
> > > Then there's something about that node that is special with respect to
> > the
> > > file system - there's nothing about any particular node that GROMACS
> > cares
> > > about.
> > >
> >
> > > Mark
> > >
> > >
> > > > is this has the same issue with this thread ?
> > > > http://comments.gmane.org/gmane.science.biology.gromacs.user/40984
> > > >
> > > > regards,
> > > >
> > > > Husen
> > > >
> > > > On Thu, Jun 16, 2016 at 2:20 PM, Mark Abraham <
> > mark.j.abra...@gmail.com>
> > > > wrote:
> > > >
> > > > > Hi,
> > > > >
> > > > > The stuff about different nodes or numbers of nodes doesn't matter
> -
> > > it's
> > > > > merely an advisory note from mdrun. mdrun failed when it tried to
> > > operate
> > > > > upon md_test.xtc, so perhaps you need to consider whether the file
> > > > exists,
> > > > > is writable, etc.
> > > > >
> > > > > Mark
> > > > >
> > > > > On Thu, Jun 16, 2016 at 6:48 AM Husen R <hus...@gmail.com> wrote:
> > > > >
> > > > > > Hi all,
> > > > > >
> > > > > > I got the following error message when I tried to restart gromacs
> > > > > > simulation from checkpoint file.
> > > > > > I restart the simulation using fewer nodes and processes, and
> also
> > I
> > > > > > exclude one node using '--exclude=' option (in slurm) for
> > > experimental
> > > > > > purpose.
> > > > > >
> > > > > > I'm sure fewer nodes and processes are not the cause of this
> error
> > > as I
> > > > > > already test that.
> > > > > > I have checked that the cause of this error is '--exclude='
> usage.
> > I
> > > > > > excluded 1 node named 'compute-node' when restart from checkpoint
> > (at
> > > > > first
> > > > > > run, I use all node including 'compute-node').
> > > > > >
> > > > > >
> > > > > > it seems that at first run, the submit job script was built at
> > > > > > compute-node. So, at restart, build user mismatch appeared
> because
> > > > > > compute-node was not found (excluded).
> > > > > >
> > > > > > Am I right ? is this behavior normal ?
> > > > > > or is that a way to avoid

Re: [gmx-users] Build time/Build user mismatch, fatal error truncation of file *.xtc failed

2016-06-16 Thread Husen R
On Thu, Jun 16, 2016 at 2:32 PM, Mark Abraham <mark.j.abra...@gmail.com>
wrote:

> Hi,
>
> On Thu, Jun 16, 2016 at 9:30 AM Husen R <hus...@gmail.com> wrote:
>
> > Hi,
> >
> > Thank you for your reply !
> >
> > md_test.xtc is exist and writable.
> >
>
> OK, but it needs to be seen that way from the set of compute nodes you are
> using, and organizing that is up to you and your job scheduler, etc.
>
>
> > I tried to restart from checkpoint file by excluding other node than
> > compute-node and it works.
> >
>
> Go do that, then :-)
>

I'm building a simple system that can respond to node failure. If a
failure occurs on node A, the application has to be restarted and that
node has to be excluded.
This should apply to all nodes, including this 'compute-node'.

>
>
> > only '--exclude=compute-node' that produces this error.
> >
>
> Then there's something about that node that is special with respect to the
> file system - there's nothing about any particular node that GROMACS cares
> about.
>

> Mark
>
>
> > is this has the same issue with this thread ?
> > http://comments.gmane.org/gmane.science.biology.gromacs.user/40984
> >
> > regards,
> >
> > Husen
> >
> > On Thu, Jun 16, 2016 at 2:20 PM, Mark Abraham <mark.j.abra...@gmail.com>
> > wrote:
> >
> > > Hi,
> > >
> > > The stuff about different nodes or numbers of nodes doesn't matter -
> it's
> > > merely an advisory note from mdrun. mdrun failed when it tried to
> operate
> > > upon md_test.xtc, so perhaps you need to consider whether the file
> > exists,
> > > is writable, etc.
> > >
> > > Mark
> > >
> > > On Thu, Jun 16, 2016 at 6:48 AM Husen R <hus...@gmail.com> wrote:
> > >
> > > > Hi all,
> > > >
> > > > I got the following error message when I tried to restart gromacs
> > > > simulation from checkpoint file.
> > > > I restart the simulation using fewer nodes and processes, and also I
> > > > exclude one node using '--exclude=' option (in slurm) for
> experimental
> > > > purpose.
> > > >
> > > > I'm sure fewer nodes and processes are not the cause of this error
> as I
> > > > already test that.
> > > > I have checked that the cause of this error is '--exclude=' usage. I
> > > > excluded 1 node named 'compute-node' when restart from checkpoint (at
> > > first
> > > > run, I use all node including 'compute-node').
> > > >
> > > >
> > > > it seems that at first run, the submit job script was built at
> > > > compute-node. So, at restart, build user mismatch appeared because
> > > > compute-node was not found (excluded).
> > > >
> > > > Am I right ? is this behavior normal ?
> > > > or is that a way to avoid this, so I can freely restart from
> checkpoint
> > > > using any nodes without limitation.
> > > >
> > > > thank you in advance
> > > >
> > > > Regards,
> > > >
> > > >
> > > > Husen
> > > >
> > > > ==restart script=
> > > > #!/bin/bash
> > > > #SBATCH -J ayo
> > > > #SBATCH -o md%j.out
> > > > #SBATCH -A necis
> > > > #SBATCH -N 2
> > > > #SBATCH -n 16
> > > > #SBATCH --exclude=compute-node
> > > > #SBATCH --time=144:00:00
> > > > #SBATCH --mail-user=hus...@gmail.com
> > > > #SBATCH --mail-type=begin
> > > > #SBATCH --mail-type=end
> > > >
> > > > mpirun gmx_mpi mdrun -cpi md_test.cpt -deffnm md_test
> > > > =
> > > >
> > > >
> > > >
> > > >
> > > > ==output
> error
> > > > Reading checkpoint file md_test.cpt generated: Wed Jun 15 16:30:44
> 2016
> > > >
> > > >
> > > >   Build time mismatch,
> > > > current program: Sel Apr  5 13:37:32 WIB 2016
> > > > checkpoint file: Rab Apr  6 09:44:51 WIB 2016
> > > >
> > > >   Build user mismatch,
> > > > current program: pro@head-node [CMAKE]
> > > > checkpoint file: pro@compute-node [CMAKE]
> > > >
> > > >   #ranks mismatch,
> > > > current program: 16
> 

Re: [gmx-users] Build time/Build user mismatch, fatal error truncation of file *.xtc failed

2016-06-16 Thread Husen R
Hi,

Thank you for your reply!

md_test.xtc exists and is writable.
I tried restarting from the checkpoint file while excluding a node other
than compute-node, and that works.
Only '--exclude=compute-node' produces this error.

Does this have the same issue as this thread?
http://comments.gmane.org/gmane.science.biology.gromacs.user/40984

regards,

Husen

On Thu, Jun 16, 2016 at 2:20 PM, Mark Abraham <mark.j.abra...@gmail.com>
wrote:

> Hi,
>
> The stuff about different nodes or numbers of nodes doesn't matter - it's
> merely an advisory note from mdrun. mdrun failed when it tried to operate
> upon md_test.xtc, so perhaps you need to consider whether the file exists,
> is writable, etc.
>
> Mark
>
> On Thu, Jun 16, 2016 at 6:48 AM Husen R <hus...@gmail.com> wrote:
>
> > Hi all,
> >
> > I got the following error message when I tried to restart gromacs
> > simulation from checkpoint file.
> > I restart the simulation using fewer nodes and processes, and also I
> > exclude one node using '--exclude=' option (in slurm) for experimental
> > purpose.
> >
> > I'm sure fewer nodes and processes are not the cause of this error as I
> > already test that.
> > I have checked that the cause of this error is '--exclude=' usage. I
> > excluded 1 node named 'compute-node' when restart from checkpoint (at
> first
> > run, I use all node including 'compute-node').
> >
> >
> > it seems that at first run, the submit job script was built at
> > compute-node. So, at restart, build user mismatch appeared because
> > compute-node was not found (excluded).
> >
> > Am I right ? is this behavior normal ?
> > or is that a way to avoid this, so I can freely restart from checkpoint
> > using any nodes without limitation.
> >
> > thank you in advance
> >
> > Regards,
> >
> >
> > Husen
> >
> > ==restart script=
> > #!/bin/bash
> > #SBATCH -J ayo
> > #SBATCH -o md%j.out
> > #SBATCH -A necis
> > #SBATCH -N 2
> > #SBATCH -n 16
> > #SBATCH --exclude=compute-node
> > #SBATCH --time=144:00:00
> > #SBATCH --mail-user=hus...@gmail.com
> > #SBATCH --mail-type=begin
> > #SBATCH --mail-type=end
> >
> > mpirun gmx_mpi mdrun -cpi md_test.cpt -deffnm md_test
> > =
> >
> >
> >
> >
> > ==output error
> > Reading checkpoint file md_test.cpt generated: Wed Jun 15 16:30:44 2016
> >
> >
> >   Build time mismatch,
> > current program: Sel Apr  5 13:37:32 WIB 2016
> > checkpoint file: Rab Apr  6 09:44:51 WIB 2016
> >
> >   Build user mismatch,
> > current program: pro@head-node [CMAKE]
> > checkpoint file: pro@compute-node [CMAKE]
> >
> >   #ranks mismatch,
> > current program: 16
> > checkpoint file: 24
> >
> >   #PME-ranks mismatch,
> > current program: -1
> > checkpoint file: 6
> >
> > GROMACS patchlevel, binary or parallel settings differ from previous run.
> > Continuation is exact, but not guaranteed to be binary identical.
> >
> >
> > ---
> > Program gmx mdrun, VERSION 5.1.2
> > Source code file:
> > /home/pro/gromacs-5.1.2/src/gromacs/gmxlib/checkpoint.cpp, line: 2216
> >
> > Fatal error:
> > Truncation of file md_test.xtc failed. Cannot do appending because of
> this
> > failure.
> > For more information and tips for troubleshooting, please check the
> GROMACS
> > website at http://www.gromacs.org/Documentation/Errors
> > ---
> > 
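One way to act on the advice quoted above, i.e. to check that md_test.xtc
really is visible and writable from whichever nodes SLURM hands the
restart, is a small probe job; everything in it (node counts, file names)
is only an example:

=====================================
#!/bin/bash
#SBATCH -N 2
#SBATCH -n 2
# Hypothetical probe: one task per node reports whether the working
# directory and the files mdrun must append to are reachable and writable.
srun --ntasks-per-node=1 bash -c '
  echo "== $(hostname) =="
  ls -l md_test.xtc md_test.cpt 2>&1
  touch ".write_test.$(hostname)" && echo "write OK on $(hostname)"
'
=====================================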


Re: [gmx-users] Build time/Build user mismatch, fatal error truncation of file *.xtc failed

2016-06-15 Thread Husen R
This is the rest of the error message:

Regards,

Husen




Halting parallel program gmx mdrun on rank 0 out of 16
application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0
Fatal error in PMPI_Bcast: Unknown error class, error stack:
PMPI_Bcast(1635)..: MPI_Bcast(buf=0xcd9ed8, count=4,
MPI_BYTE, root=0, MPI_COMM_WORLD) failed
MPIR_Bcast_impl(1477).:
MPIR_Bcast(1501)..:
MPIR_Bcast_intra(1272):
MPIR_SMP_Bcast(1104)..:
MPIR_Bcast_binomial(256)..:
MPIDU_Complete_posted_with_error(1189): Process failed
MPIR_SMP_Bcast()..:
MPIR_Bcast_binomial(327)..: Failure during collective
Fatal error in PMPI_Bcast: Other MPI error, error stack:
PMPI_Bcast(1635): MPI_Bcast(buf=0x1858e78, count=4, MPI_BYTE,
root=0, MPI_COMM_WORLD) failed
MPIR_Bcast_impl(1477)...:
MPIR_Bcast(1501):
MPIR_Bcast_intra(1272)..:
MPIR_SMP_Bcast():
MPIR_Bcast_binomial(327): Failure during collective
Fatal error in PMPI_Bcast: Other MPI error, error stack:
PMPI_Bcast(1635): MPI_Bcast(buf=0x24f7e78, count=4, MPI_BYTE,
root=0, MPI_COMM_WORLD) failed
MPIR_Bcast_impl(1477)...:
MPIR_Bcast(1501):
MPIR_Bcast_intra(1272)..:
MPIR_SMP_Bcast():
MPIR_Bcast_binomial(327): Failure during collective
Fatal error in PMPI_Bcast: Other MPI error, error stack:
PMPI_Bcast(1635): MPI_Bcast(buf=0xb21e78, count=4, MPI_BYTE,
root=0, MPI_COMM_WORLD) failed
MPIR_Bcast_impl(1477)...:
MPIR_Bcast(1501):
MPIR_Bcast_intra(1272)..:
MPIR_SMP_Bcast():
MPIR_Bcast_binomial(327): Failure during collective
Fatal error in PMPI_Bcast: Other MPI error, error stack:
PMPI_Bcast(1635): MPI_Bcast(buf=0x15fbe78, count=4, MPI_BYTE,
root=0, MPI_COMM_WORLD) failed
MPIR_Bcast_impl(1477)...:
MPIR_Bcast(1501):
MPIR_Bcast_intra(1272)..:
MPIR_SMP_Bcast():
MPIR_Bcast_binomial(327): Failure during collective

===
=   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
=   PID 6983 RUNNING AT head-node
=   EXIT CODE: 1
=   CLEANING UP REMAINING PROCESSES
=   YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
===

On Thu, Jun 16, 2016 at 11:48 AM, Husen R <hus...@gmail.com> wrote:

> Hi all,
>
> I got the following error message when I tried to restart gromacs
> simulation from checkpoint file.
> I restart the simulation using fewer nodes and processes, and also I
> exclude one node using '--exclude=' option (in slurm) for experimental
> purpose.
>
> I'm sure fewer nodes and processes are not the cause of this error as I
> already test that.
> I have checked that the cause of this error is '--exclude=' usage. I
> excluded 1 node named 'compute-node' when restart from checkpoint (at first
> run, I use all node including 'compute-node').
>
>
> it seems that at first run, the submit job script was built at
> compute-node. So, at restart, build user mismatch appeared because
> compute-node was not found (excluded).
>
> Am I right ? is this behavior normal ?
> or is that a way to avoid this, so I can freely restart from checkpoint
> using any nodes without limitation.
>
> thank you in advance
>
> Regards,
>
>
> Husen
>
> ==restart script=
> #!/bin/bash
> #SBATCH -J ayo
> #SBATCH -o md%j.out
> #SBATCH -A necis
> #SBATCH -N 2
> #SBATCH -n 16
> #SBATCH --exclude=compute-node
> #SBATCH --time=144:00:00
> #SBATCH --mail-user=hus...@gmail.com
> #SBATCH --mail-type=begin
> #SBATCH --mail-type=end
>
> mpirun gmx_mpi mdrun -cpi md_test.cpt -deffnm md_test
> =
>
>
>
>
> ==output error
> Reading checkpoint file md_test.cpt generated: Wed Jun 15 16:30:44 2016
>
>
>   Build time mismatch,
> current program: Sel Apr  5 13:37:32 WIB 2016
> checkpoint file: Rab Apr  6 09:44:51 WIB 2016
>
>   Build user mismatch,
> current program: pro@head-node [CMAKE]
> checkpoint file: pro@compute-node [CMAKE]
>
>   #ranks mismatch,
> current program: 16
> checkpoint file: 24
>
>   #PME-ranks mismatch,
> current program: -1
> checkpoint file: 6
>
> GROMACS patchlevel, binary or parallel settings differ from previous run.
> Continuation is exact, but not guaranteed to be binary identical.
>
>
> ---
> Program gmx mdrun, VERSION 5.1.2
> Source code file:
> /home/pro/gromacs-5.1.2/src/gromacs/gmxlib/checkpoint.cpp, line: 2216

[gmx-users] Build time/Build user mismatch, fatal error truncation of file *.xtc failed

2016-06-15 Thread Husen R
Hi all,

I got the following error message when I tried to restart a GROMACS
simulation from a checkpoint file.
I restarted the simulation using fewer nodes and processes, and I also
excluded one node using the '--exclude=' option (in slurm) for
experimental purposes.

I'm sure the fewer nodes and processes are not the cause of this error, as
I have already tested that.
I have confirmed that the cause is the '--exclude=' usage: I excluded one
node named 'compute-node' when restarting from the checkpoint (at the
first run, I used all nodes, including 'compute-node').


It seems that for the first run the job used a GROMACS build made on
compute-node, so at restart the build user mismatch appeared because
compute-node was no longer among the nodes used (it was excluded).

Am I right? Is this behavior normal?
Or is there a way to avoid this, so I can freely restart from a checkpoint
on any nodes without limitation?

Thank you in advance

Regards,


Husen

==restart script=
#!/bin/bash
#SBATCH -J ayo
#SBATCH -o md%j.out
#SBATCH -A necis
#SBATCH -N 2
#SBATCH -n 16
#SBATCH --exclude=compute-node
#SBATCH --time=144:00:00
#SBATCH --mail-user=hus...@gmail.com
#SBATCH --mail-type=begin
#SBATCH --mail-type=end

mpirun gmx_mpi mdrun -cpi md_test.cpt -deffnm md_test
=




==output error
Reading checkpoint file md_test.cpt generated: Wed Jun 15 16:30:44 2016


  Build time mismatch,
current program: Sel Apr  5 13:37:32 WIB 2016
checkpoint file: Rab Apr  6 09:44:51 WIB 2016

  Build user mismatch,
current program: pro@head-node [CMAKE]
checkpoint file: pro@compute-node [CMAKE]

  #ranks mismatch,
current program: 16
checkpoint file: 24

  #PME-ranks mismatch,
current program: -1
checkpoint file: 6

GROMACS patchlevel, binary or parallel settings differ from previous run.
Continuation is exact, but not guaranteed to be binary identical.


---
Program gmx mdrun, VERSION 5.1.2
Source code file:
/home/pro/gromacs-5.1.2/src/gromacs/gmxlib/checkpoint.cpp, line: 2216

Fatal error:
Truncation of file md_test.xtc failed. Cannot do appending because of this
failure.
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
---



Re: [gmx-users] Restart simulation from checkpoint file with fewer nodes

2016-05-16 Thread Husen R
Hi all,

After spending some time troubleshooting, I found that the GROMACS
checkpoint/restart feature is working well.

The failure occurred because I used the root user to submit the restart
job (via the slurm resource manager). After switching to a non-root user,
the restart runs fine.
The reason I used root is that I submit this job from a bash script
executed at a designated time by cron.
I know this is not the right place to talk about slurm.

Thank you for your reply !

Regards,

Husen
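
For completeness, the workaround described above (having cron submit the
restart as a regular user rather than as root) might look something like
the entry below; the path, time and script name are hypothetical:

=====================================
# crontab of the regular user (not root): submit the restart job at 02:00.
0 2 * * * cd /home/husen/run && /usr/bin/sbatch restart_md.sbatch
=====================================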

On Sun, May 15, 2016 at 8:20 PM, <jkrie...@mrc-lmb.cam.ac.uk> wrote:

> ok thanks
>
> > Hi,
> >
> > Yes, that's one way to work around the problem. In some places, a module
> > subsystem can be used to take care of the selection automatically, but
> you
> > don't want to set one up for just you to use.
> >
> > Mark
> >
> > On Sun, May 15, 2016 at 11:48 AM <jkrie...@mrc-lmb.cam.ac.uk> wrote:
> >
> >> Thanks Mark,
> >>
> >> My sysadmins have let me install my own GROMACS versions and have not
> >> informed me of any such mechanism. Would you suggest I qrsh into a node
> >> of
> >> each type and build an mdrun-only version on each? I'd then select a
> >> particular node type for a submit script with the relevant mdrun.
> >>
> >> Many thanks
> >> James
> >>
> >> > Hi,
> >> >
> >> > On Sat, May 14, 2016 at 1:09 PM <jkrie...@mrc-lmb.cam.ac.uk> wrote:
> >> >
> >> >> In case it's relevant/interesting to anyone, here are the details on
> >> our
> >> >> cluster nodes:
> >> >>
> >> >> nodes   #   model   # cores cpu
> >> >> model
> >> >>   RAM   node_type
> >> >> fmb01 - fmb33   33  IBM HS21XM  8   3 GHz
> >> >> Xeon
> >> >> E5450
> >> >>  16GB   hs21
> >> >> fmb34 - fmb42   9   IBM HS228   2.4
> >> GHz
> >> >> Xeon E5530
> >> >> 16GBhs22
> >> >> fmb43 - fmb88   45  Dell PE M6108   2.4
> >> GHz
> >> >> Xeon E5530
> >> >>  16GB   m610
> >> >> fmb88 - fmb90   3   Dell PE M610+   12  3.4
> >> GHz
> >> >> Xeon X5690
> >> >>   48GB  m610+
> >> >> fmb91 - fmb202  112 Dell PE M62024 (HT) 2.9
> >> GHz
> >> >> Xeon E5-2667
> >> >>64GB m620
> >> >> fmb203 - fmb279 77  Dell PE M62024 (HT) 3.5
> >> GHz
> >> >> Xeon E5-2643 v2 64GB
> >> >> m620+
> >> >> fmb280 - fmb359 80  Dell PE M63024 (HT) 3.4
> >> GHz
> >> >> Xeon E5-2643 v3 64GB
> >> >> m630
> >> >>
> >> >> I could only run GROMACS 4.6.2 on the last three node types and I
> >> >> believe
> >> >> the same is true for 5.0.4
> >> >>
> >> >
> >> > Sure. GROMACS is designed to target whichever hardware was selected at
> >> > configure time, which your sysadmins for such a heterogeneous cluster
> >> > should have documented somewhere. They should also be making available
> >> to
> >> > you a mechanism to target your jobs to nodes where they can run
> >> programs
> >> > that use the hardware efficiently, or providing GROMACS installations
> >> that
> >> > work regardless of which node you are actually on. You might like to
> >> > respectfully remind them of the things we say at
> >> >
> >>
> http://manual.gromacs.org/documentation/5.1.2/install-guide/index.html#portability-aspects
> >> > (These thoughts are common to earlier versions also.)
> >> >
> >> > Mark
> >> >
> >> >
> >> > Best wishes
> >> >> James
> >> >>
> >> >> > I have found that only some kinds of nodes on our cluster work for
> >> >> gromacs
> >> >> > 4.6 (the ones we call m620, m620+ and m630 but not others - I can
> >> >> check
> >> >> > the details tomorrow). I haven't tested it again now I'm using 5.0
> >> so
> >> >> > don't know if that's still an issue but if it is it could explain
> >> why
> >&

Re: [gmx-users] Restart simulation from checkpoint file with fewer nodes

2016-05-14 Thread Husen R
Hi,

Currently I'm running this tutorial (
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin/gmx-tutorials/lysozyme/08_MD.html)
to simulate a restart with fewer nodes.
At restart, I changed the number of nodes from 3 to 2
and the number of processes from 24 to 16.

While the application was running, I looked at the output file.
This is its content:

#output file


Reading checkpoint file md_0_1.cpt generated: Sat May 14 13:10:25 2016

  #ranks mismatch,
current program: 16
checkpoint file: 24

  #PME-ranks mismatch,
current program: -1
checkpoint file: 6

GROMACS patchlevel, binary or parallel settings differ from previous run.
Continuation is exact, but not guaranteed to be binary identical.

Using 16 MPI processes
Using 1 OpenMP thread per MPI process

starting mdrun 'LYSOZYME in water'
50 steps,   1000.0 ps (continuing from step 54500,109.0 ps).



I got the mismatch notes shown in the output above. That is not a
problem, is it?
I just want to make sure.

Also, is it not allowed to use a different user when restarting a
simulation from a checkpoint file?
Previously, I failed to restart a simulation from a checkpoint file, and I
suspect it failed because I used a different user (only a guess).
Thank you in advance.

regards,

Husen




On Sat, May 14, 2016 at 7:58 AM, Justin Lemkul <jalem...@vt.edu> wrote:

>
>
> On 5/13/16 8:53 PM, Husen R wrote:
>
>> Dear all
>>
>> Does simulation able to be restarted from checkpoint file with fewer
>> nodes ?
>> let's say, at the first time, I run simulation with 3 nodes. At running
>> time, one of those nodes is crashed and the simulation is terminated.
>>
>> I want to restart that simulation immadiately based on checkpoint file
>> with
>> the remaining 2 nodes. does gromacs support such case ?
>> I need help.
>>
>
> Have you tried it?  It should work.  You will probably get a note about
> the continuation not being exact due to a change in the number of cores,
> but the run should proceed fine.
>
> -Justin
>
> --
> ==
>
> Justin A. Lemkul, Ph.D.
> Ruth L. Kirschstein NRSA Postdoctoral Fellow
>
> Department of Pharmaceutical Sciences
> School of Pharmacy
> Health Sciences Facility II, Room 629
> University of Maryland, Baltimore
> 20 Penn St.
> Baltimore, MD 21201
>
> jalem...@outerbanks.umaryland.edu | (410) 706-7441
> http://mackerell.umaryland.edu/~jalemkul
>
> ==


Re: [gmx-users] Restart simulation from checkpoint file with fewer nodes

2016-05-13 Thread Husen R
Thanks a lot for your fast response.

I have tried it, and it failed; I asked on this list just to make sure.
However, there was something in my cluster that probably made it fail.
I'll sort that out first and then retry the restart.

Regards,

Husen

On Sat, May 14, 2016 at 7:58 AM, Justin Lemkul <jalem...@vt.edu> wrote:

>
>
> On 5/13/16 8:53 PM, Husen R wrote:
>
>> Dear all
>>
>> Does simulation able to be restarted from checkpoint file with fewer
>> nodes ?
>> let's say, at the first time, I run simulation with 3 nodes. At running
>> time, one of those nodes is crashed and the simulation is terminated.
>>
>> I want to restart that simulation immadiately based on checkpoint file
>> with
>> the remaining 2 nodes. does gromacs support such case ?
>> I need help.
>>
>
> Have you tried it?  It should work.  You will probably get a note about
> the continuation not being exact due to a change in the number of cores,
> but the run should proceed fine.
>
> -Justin
>
> --
> ==
>
> Justin A. Lemkul, Ph.D.
> Ruth L. Kirschstein NRSA Postdoctoral Fellow
>
> Department of Pharmaceutical Sciences
> School of Pharmacy
> Health Sciences Facility II, Room 629
> University of Maryland, Baltimore
> 20 Penn St.
> Baltimore, MD 21201
>
> jalem...@outerbanks.umaryland.edu | (410) 706-7441
> http://mackerell.umaryland.edu/~jalemkul
>
> ==


Re: [gmx-users] Restart simulation from checkpoint file with fewer nodes

2016-05-13 Thread Husen R
I use Gromacs-5.1.2, with SLURM-15.08.10 as the resource manager.

On Sat, May 14, 2016 at 7:53 AM, Husen R <hus...@gmail.com> wrote:

> Dear all
>
> Does simulation able to be restarted from checkpoint file with fewer nodes
> ?
> let's say, at the first time, I run simulation with 3 nodes. At running
> time, one of those nodes is crashed and the simulation is terminated.
>
> I want to restart that simulation immadiately based on checkpoint file
> with the remaining 2 nodes. does gromacs support such case ?
> I need help.
>
> Thank you in advance.
> Regards,
>
> Husen
>


[gmx-users] Restart simulation from checkpoint file with fewer nodes

2016-05-13 Thread Husen R
Dear all

Can a simulation be restarted from a checkpoint file with fewer nodes?
Let's say I initially run the simulation on 3 nodes. While it is running,
one of those nodes crashes and the simulation is terminated.

I want to restart that simulation immediately from the checkpoint file on
the remaining 2 nodes. Does GROMACS support such a case?
I need help.

Thank you in advance.
Regards,

Husen


Re: [gmx-users] Gromacs-5.1.2 Checkpoint/restart example

2016-04-26 Thread Husen R
Hi,


Thanks for your reply.

There is no state.cpt or state_prev.cpt file.

In equilibration part 1 the resulting checkpoint files are nvt.cpt and
nvt_prev.cpt.
In equilibration part 2 the resulting checkpoint files are npt.cpt and
npt_prev.cpt.
In the production MD the resulting checkpoint files are md_0_1.cpt and
md_0_1_prev.cpt.

Is this because of the -deffnm option?
I just want to make sure.

Regards,


Husen
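
That is most likely just the effect of -deffnm: it replaces the default
file name stem, so the checkpoints come out as nvt.cpt, npt.cpt and
md_0_1.cpt instead of state.cpt. If you want to look inside one, something
like the line below should work (the -cp flag is assumed from the gmx dump
help, so double-check it):

=====================================
# With -deffnm md_0_1, mdrun writes md_0_1.cpt / md_0_1_prev.cpt rather
# than state.cpt / state_prev.cpt. Inspect a checkpoint (flag assumed):
gmx dump -cp md_0_1.cpt | head -n 20
=====================================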


On Tue, Apr 26, 2016 at 11:45 AM, Mark Abraham <mark.j.abra...@gmail.com>
wrote:

> Hi,
>
> On Tue, 26 Apr 2016 06:19 Husen R <hus...@gmail.com> wrote:
>
> > Hi all,
> >
> > I tried to run this gromacs tutorial (
> >
> >
> http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin/gmx-tutorials/lysozyme/01_pdb2gmx.html
> > )
> > in order to understand how checkpoint/restart works in Gromacs-5.1.2.
> >
> > Based on running that tutorial I found that mdrun automatically
> checkpoint
> > even though I didn't specify -cpt option (based on mdrun manual page, by
> > default checkpoint every 15 minutes)
> >
>
> Yes, as that webpage suggests.
>
> 1. What if I specify -cpt option with a value longer than or smaller than
> > 15 minutes ? does -cpt value will automatically override the default ?
> >
>
> Yes, the default is used when you don't supply a value. But usually you
> don't need to think about this.
>
> 2. I run the following command to restart production MD step (using SLURM)
> > from checkpoint file. md_0_1_prev.cpt is .cpt file resulting from
> previous
> > mdrun execution. is my command right ? I just want to make sure.
> >
>
> Not quite. From that webpage: "Note that mdrun will write state.cpt and
> state_prev.cpt files. As you can see from their time stamps, one was
> written approximately at the checkpoint interval before the other (15 mins
> by default). Or you can use gmxcheck to see what is in them." You want to
> follow the examples there and do your restart from the most recent file,
> which is state.cpt. The older file is just kept for safety. Your command
> will lead to wasting 15 minutes of simulation.
>
> Mark
>
> #!/bin/bash
> > #SBATCH -J Lysozyme
> > #SBATCH -o md-%j.out
> > #SBATCH -A necis
> > #SBATCH -N 3
> > #SBATCH -n 24
> > #SBATCH --time=144:00:00
> > #SBATCH --mail-user=hus...@gmail.com
> > #SBATCH --mail-type=begin
> > #SBATCH --mail-type=end
> >
> > mpirun gmx_mpi mdrun -cpi md_0_1_prev.cpt -deffnm md_0_1
> >
> >
> > Thank you in advance
> >
> > Regards,
> >
> >
> > Husen
> >
> >
> > On Sun, Apr 24, 2016 at 7:57 PM, Husen R <hus...@gmail.com> wrote:
> >
> > > Hi,
> > >
> > > Thanks a lot !
> > > I'll try it,..
> > >
> > > Regards,
> > >
> > >
> > > Husen
> > >
> > > On Sun, Apr 24, 2016 at 6:54 PM, Justin Lemkul <jalem...@vt.edu>
> wrote:
> > >
> > >>
> > >>
> > >> On 4/24/16 12:04 AM, Husen R wrote:
> > >>
> > >>> Dear all,
> > >>>
> > >>> is there any complete documentation discussing checkpoint/restart in
> > >>> Gromacs-5.1.2 ?
> > >>> I found this link that discuss checkpoint (
> > >>> http://www.gromacs.org/Documentation/How-tos/Extending_Simulations)
> > and
> > >>> this link that discuss restart (
> > >>> http://www.gromacs.org/Documentation/How-tos/Doing_Restarts) but
> these
> > >>> are
> > >>> not specifically for gromacs 5.1.2.
> > >>>
> > >>>
> > >> There are no differences for 5.1.2, so that information is all
> relevant.
> > >>
> > >> -Justin
> > >>
> > >> --
> > >> ==
> > >>
> > >> Justin A. Lemkul, Ph.D.
> > >> Ruth L. Kirschstein NRSA Postdoctoral Fellow
> > >>
> > >> Department of Pharmaceutical Sciences
> > >> School of Pharmacy
> > >> Health Sciences Facility II, Room 629
> > >> University of Maryland, Baltimore
> > >> 20 Penn St.
> > >> Baltimore, MD 21201
> > >>
> > >> jalem...@outerbanks.umaryland.edu | (410) 706-7441
> > >> http://mackerell.umaryland.edu/~jalemkul
> > >>
> > >> ==
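Putting Mark's two answers above into one concrete command: -cpt (a value
in minutes) overrides the 15-minute default, and the restart should read
the most recent checkpoint, i.e. the -deffnm name without the _prev
suffix:

=====================================
# Restart from the newest checkpoint (md_0_1.cpt, not md_0_1_prev.cpt) and
# shorten the checkpoint interval to 5 minutes.
mpirun gmx_mpi mdrun -cpi md_0_1.cpt -deffnm md_0_1 -cpt 5
=====================================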

Re: [gmx-users] Gromacs-5.1.2 Checkpoint/restart example

2016-04-25 Thread Husen R
Hi all,

I tried to run this gromacs tutorial (
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin/gmx-tutorials/lysozyme/01_pdb2gmx.html)
in order to understand how checkpoint/restart works in Gromacs-5.1.2.

Based on running that tutorial I found that mdrun checkpoints
automatically even though I didn't specify the -cpt option (according to
the mdrun manual page, it checkpoints every 15 minutes by default).

1. What if I specify the -cpt option with a value longer or shorter than
15 minutes? Does the -cpt value automatically override the default?

2. I run the following command to restart the production MD step (using
SLURM) from a checkpoint file; md_0_1_prev.cpt is the .cpt file from the
previous mdrun execution. Is my command right? I just want to make sure.

#!/bin/bash
#SBATCH -J Lysozyme
#SBATCH -o md-%j.out
#SBATCH -A necis
#SBATCH -N 3
#SBATCH -n 24
#SBATCH --time=144:00:00
#SBATCH --mail-user=hus...@gmail.com
#SBATCH --mail-type=begin
#SBATCH --mail-type=end

mpirun gmx_mpi mdrun -cpi md_0_1_prev.cpt -deffnm md_0_1


Thank you in advance

Regards,


Husen


On Sun, Apr 24, 2016 at 7:57 PM, Husen R <hus...@gmail.com> wrote:

> Hi,
>
> Thanks a lot !
> I'll try it,..
>
> Regards,
>
>
> Husen
>
> On Sun, Apr 24, 2016 at 6:54 PM, Justin Lemkul <jalem...@vt.edu> wrote:
>
>>
>>
>> On 4/24/16 12:04 AM, Husen R wrote:
>>
>>> Dear all,
>>>
>>> is there any complete documentation discussing checkpoint/restart in
>>> Gromacs-5.1.2 ?
>>> I found this link that discuss checkpoint (
>>> http://www.gromacs.org/Documentation/How-tos/Extending_Simulations) and
>>> this link that discuss restart (
>>> http://www.gromacs.org/Documentation/How-tos/Doing_Restarts) but these
>>> are
>>> not specifically for gromacs 5.1.2.
>>>
>>>
>> There are no differences for 5.1.2, so that information is all relevant.
>>
>> -Justin
>>
>> --
>> ==
>>
>> Justin A. Lemkul, Ph.D.
>> Ruth L. Kirschstein NRSA Postdoctoral Fellow
>>
>> Department of Pharmaceutical Sciences
>> School of Pharmacy
>> Health Sciences Facility II, Room 629
>> University of Maryland, Baltimore
>> 20 Penn St.
>> Baltimore, MD 21201
>>
>> jalem...@outerbanks.umaryland.edu | (410) 706-7441
>> http://mackerell.umaryland.edu/~jalemkul
>>
>> ==


Re: [gmx-users] Gromacs-5.1.2 Checkpoint/restart example

2016-04-24 Thread Husen R
Hi,

Thanks a lot!
I'll try it.

Regards,


Husen

On Sun, Apr 24, 2016 at 6:54 PM, Justin Lemkul <jalem...@vt.edu> wrote:

>
>
> On 4/24/16 12:04 AM, Husen R wrote:
>
>> Dear all,
>>
>> is there any complete documentation discussing checkpoint/restart in
>> Gromacs-5.1.2 ?
>> I found this link that discuss checkpoint (
>> http://www.gromacs.org/Documentation/How-tos/Extending_Simulations) and
>> this link that discuss restart (
>> http://www.gromacs.org/Documentation/How-tos/Doing_Restarts) but these
>> are
>> not specifically for gromacs 5.1.2.
>>
>>
> There are no differences for 5.1.2, so that information is all relevant.
>
> -Justin
>
> --
> ==
>
> Justin A. Lemkul, Ph.D.
> Ruth L. Kirschstein NRSA Postdoctoral Fellow
>
> Department of Pharmaceutical Sciences
> School of Pharmacy
> Health Sciences Facility II, Room 629
> University of Maryland, Baltimore
> 20 Penn St.
> Baltimore, MD 21201
>
> jalem...@outerbanks.umaryland.edu | (410) 706-7441
> http://mackerell.umaryland.edu/~jalemkul
>
> ==


[gmx-users] Gromacs-5.1.2 Checkpoint/restart example

2016-04-23 Thread Husen R
Dear all,

Is there any complete documentation discussing checkpoint/restart in
Gromacs-5.1.2?
I found this link that discusses checkpointing (
http://www.gromacs.org/Documentation/How-tos/Extending_Simulations) and
this link that discusses restarts (
http://www.gromacs.org/Documentation/How-tos/Doing_Restarts), but these
are not specifically for GROMACS 5.1.2.

Thank you in advance.

Regards,



Husen


Re: [gmx-users] mdrun on multiple nodes

2016-04-20 Thread Husen R
Hi,


Thanks a lot for the information.

In my GROMACS installation, 'gmx_mpi' works, while 'mdrun_mpi' is not
recognized as a GROMACS command.

Regards,


Husen
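
A quick way to confirm which kind of build a given binary is
(complementing Mark's suggestion, quoted below, to look further down the
output .log file) is the version report; as far as I recall it includes an
'MPI library' line, but grep loosely in case the label differs between
versions:

=====================================
# Check whether the binary was built with real MPI support.
gmx_mpi --version 2>&1 | grep -i "mpi"
=====================================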


On Wed, Apr 20, 2016 at 6:42 PM, Mark Abraham <mark.j.abra...@gmail.com>
wrote:

> Hi,
>
> Well spotted. These should all be along the lines of
>
> mpirun -np 2 gmx_mpi mdrun
>
> or
>
> mpirun -np 2 mdrun_mpi
>
> if the admins have done an mdrun-only installation.
>
> I'll fix that for future versions
>
> Mark
>
> On Wed, 20 Apr 2016 13:34 Husen R <hus...@gmail.com> wrote:
>
> > Hi Mark,
> >
> > I'm wondering why in this link
> >
> >
> http://manual.gromacs.org/documentation/5.1.2/user-guide/mdrun-performance.html
> > 'gmx' is used to run mdrun on more than one node instead of 'gmx_mpi' ?
> >
> > Regards,
> >
> > Husen
> >
> > On Wed, Apr 20, 2016 at 3:48 PM, Mark Abraham <mark.j.abra...@gmail.com>
> > wrote:
> >
> > > Hi,
> > >
> > > Probably you haven't built gromacs with MPI support, else the name of
> the
> > > binary would be gmx_mpi. You can get that confirmed if you look further
> > > down the output .log files.
> > >
> > > Mark
> > >
> > > On Wed, 20 Apr 2016 10:09 Husen R <hus...@gmail.com> wrote:
> > >
> > > > Hi all,
> > > >
> > > > I tried to run mdrun on more than one node using the command
> available
> > in
> > > > this url
> > > >
> > > >
> > >
> >
> http://manual.gromacs.org/documentation/5.1.2/user-guide/mdrun-performance.html
> > > > .
> > > > The following is my sbatch job :
> > > >
> > > > ###SBATCH
> > > > #!/bin/bash
> > > > #SBATCH -J sim
> > > > #SBATCH -o md-%j.out
> > > > #SBATCH -A pro
> > > > #SBATCH -N 3
> > > > #SBATCH -n 24
> > > > #SBATCH --time=144:00:00
> > > > #SBATCH --mail-user=hus...@gmail.com
> > > > #SBATCH --mail-type=begin
> > > > #SBATCH --mail-type=end
> > > >
> > > > mpirun gmx mdrun -cpt 15 -deffnm md_0_1
> > > >
> > > > #SBATCH END
> > > >
> > > > I open the output file to see the result. It contains something
> > > repeatedly
> > > > as shown below:
> > > >
> > > > #OUTPUT###
> > > > ..
> > > > ..
> > > > ..
> > > >*GROMACS is written by:*
> > > >  Emile Apol  Rossen Apostolov  Herman J.C. BerendsenPar
> > > > Bjelkmar
> > > >  Aldert van Buuren   Rudi van Drunen Anton Feenstra   Sebastian
> > > Fritsch
> > > >   Gerrit Groenhof   Christoph Junghans   Anca HamuraruVincent
> > > > Hindriksen
> > > >  Dimitrios KarkoulisPeter KassonJiri Kraus  Carsten
> > > Kutzner
> > > >
> > > > Per Larsson  Justin A. Lemkul   Magnus Lundborg   Pieter
> > > Meulenhoff
> > > >Erik Marklund  Teemu Murtola   Szilard Pall   Sander
> > Pronk
> > > >Roland Schulz Alexey Shvetsov Michael Shirts  Executable:
> > > > /usr/local/gromacs/bin/gmx
> > > >Alfons Sijbers
> > > >Peter TielemanTeemu Virolainen  Christian WennbergMaarten
> > Wolf
> > > >and the project leaders:
> > > > Mark Abraham, Berk Hess, Erik Lindahl, and David van der
> Spoel
> > > >
> > > > Copyright (c) 1991-2000, University of Groningen, The Netherlands.
> > > > Copyright (c) 2001-2015, The GROMACS development team at
> > > > Uppsala University, Stockholm University and
> > > > the Royal Institute of Technology, Sweden.
> > > > check out http://www.gromacs.org for more information.
> > > >
> > > > GROMACS is free software; you can redistribute it and/or modify it
> > > > under the terms of the GNU Lesser General Public License
> > > > as published by the Free Software Foundation; either version 2.1
> > > > Data prefix:  /usr/local/gromacs
> > > > Command line:
> > > >   gmx mdrun -cpt 15 -deffnm md_0_1
> > > >
> > > > of the License, or (at your option) any late

Re: [gmx-users] mdrun on multiple nodes

2016-04-20 Thread Husen R
Hi Mark,

I'm wondering why in this link
http://manual.gromacs.org/documentation/5.1.2/user-guide/mdrun-performance.html
'gmx' is used to run mdrun on more than one node instead of 'gmx_mpi' ?

Regards,

Husen

On Wed, Apr 20, 2016 at 3:48 PM, Mark Abraham <mark.j.abra...@gmail.com>
wrote:

> Hi,
>
> Probably you haven't built gromacs with MPI support, else the name of the
> binary would be gmx_mpi. You can get that confirmed if you look further
> down the output .log files.
>
> Mark
>
> On Wed, 20 Apr 2016 10:09 Husen R <hus...@gmail.com> wrote:
>
> > Hi all,
> >
> > I tried to run mdrun on more than one node using the command available in
> > this url
> >
> >
> http://manual.gromacs.org/documentation/5.1.2/user-guide/mdrun-performance.html
> > .
> > The following is my sbatch job :
> >
> > ###SBATCH
> > #!/bin/bash
> > #SBATCH -J sim
> > #SBATCH -o md-%j.out
> > #SBATCH -A pro
> > #SBATCH -N 3
> > #SBATCH -n 24
> > #SBATCH --time=144:00:00
> > #SBATCH --mail-user=hus...@gmail.com
> > #SBATCH --mail-type=begin
> > #SBATCH --mail-type=end
> >
> > mpirun gmx mdrun -cpt 15 -deffnm md_0_1
> >
> > #SBATCH END
> >
> > I open the output file to see the result. It contains something
> repeatedly
> > as shown below:
> >
> > #OUTPUT###
> > ..
> > ..
> > ..
> >*GROMACS is written by:*
> >  Emile Apol  Rossen Apostolov  Herman J.C. BerendsenPar
> > Bjelkmar
> >  Aldert van Buuren   Rudi van Drunen Anton Feenstra   Sebastian
> Fritsch
> >   Gerrit Groenhof   Christoph Junghans   Anca HamuraruVincent
> > Hindriksen
> >  Dimitrios KarkoulisPeter KassonJiri Kraus  Carsten
> Kutzner
> >
> > Per Larsson  Justin A. Lemkul   Magnus Lundborg   Pieter
> Meulenhoff
> >Erik Marklund  Teemu Murtola   Szilard Pall   Sander Pronk
> >Roland Schulz Alexey Shvetsov Michael Shirts  Executable:
> > /usr/local/gromacs/bin/gmx
> >Alfons Sijbers
> >Peter TielemanTeemu Virolainen  Christian WennbergMaarten Wolf
> >and the project leaders:
> > Mark Abraham, Berk Hess, Erik Lindahl, and David van der Spoel
> >
> > Copyright (c) 1991-2000, University of Groningen, The Netherlands.
> > Copyright (c) 2001-2015, The GROMACS development team at
> > Uppsala University, Stockholm University and
> > the Royal Institute of Technology, Sweden.
> > check out http://www.gromacs.org for more information.
> >
> > GROMACS is free software; you can redistribute it and/or modify it
> > under the terms of the GNU Lesser General Public License
> > as published by the Free Software Foundation; either version 2.1
> > Data prefix:  /usr/local/gromacs
> > Command line:
> >   gmx mdrun -cpt 15 -deffnm md_0_1
> >
> > of the License, or (at your option) any later version.
> >
> > GROMACS:  gmx mdrun, VERSION 5.1.2
> > Executable:   /usr/local/gromacs/bin/gmx
> > Data prefix:  /usr/local/gromacs
> > Command line:
> >   gmx mdrun -cpt 15 -deffnm md_0_1
> >
> >:-) GROMACS - gmx mdrun, VERSION 5.1.2 (-:
> >
> >* GROMACS is written by:*
> >  Emile Apol  Rossen Apostolov  Herman J.C. BerendsenPar
> > Bjelkmar
> >  Aldert van Buuren   Rudi van Drunen Anton Feenstra   Sebastian
> Fritsch
> >   Gerrit Groenhof   Christoph Junghans   Anca HamuraruVincent
> > Hindriksen
> >  Dimitrios KarkoulisPeter KassonJiri Kraus  Carsten
> Kutzner
> >
> > Per Larsson  Justin A. Lemkul   Magnus Lundborg   Pieter
> Meulenhoff
> >Erik Marklund  Teemu Murtola   Szilard Pall   Sander Pronk
> >Roland Schulz Alexey Shvetsov Michael Shirts Alfons
> Sijbers
> >Peter TielemanTeemu Virolainen  Christian WennbergMaarten Wolf
> >and the project leaders:
> > Mark Abraham, Berk Hess, Erik Lindahl, and David van der Spoel
> >
> > Copyright (c) 1991-2000, University of Groningen, The Netherlands.
> > Copyright (c) 2001-2015, The GROMACS development team at
> > Uppsala University, Stockholm University and
> > the Royal Institute of Technology, Sweden.
> > check out http://www.gromacs.org for more information.
> >
> > GROMACS is free 

Re: [gmx-users] mdrun on multiple nodes

2016-04-20 Thread Husen R
Hi Mark,

I have configured gromacs with mpi support (using -DGMX_MPI=on):

tar xfz gromacs-5.1.2.tar.gz
cd gromacs-5.1.2
mkdir build
cd build
cmake .. -DGMX_MPI=on -DGMX_BUILD_OWN_FFTW=ON
make
make check
sudo make install
source /usr/local/gromacs/bin/GMXRC
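
A quick way to double-check that this really produced an MPI-enabled binary
(assuming gmx_mpi accepts the same -version flag as the plain gmx binary; I
have only tried this on my own install) is:

/usr/local/gromacs/bin/gmx_mpi -version
# the header should report "MPI library: MPI" rather than "MPI library: thread_mpi"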

I have now tried to use gmx_mpi (as you said) instead of gmx, and it works!
Using the htop command, I can see that each node has 8 processes running
"gmx_mpi mdrun -cpt 15 -deffnm md_0_1".
I cannot wait to see the results.
Thanks a lot.
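
For completeness, here is the job script as it now reads (same #SBATCH
settings as in my first mail; I have only changed the mdrun line to use the
MPI-enabled binary, so treat this as a sketch of my setup rather than a
general recipe):

###SBATCH
#!/bin/bash
#SBATCH -J sim
#SBATCH -o md-%j.out
#SBATCH -A pro
#SBATCH -N 3
#SBATCH -n 24
#SBATCH --time=144:00:00
#SBATCH --mail-user=hus...@gmail.com
#SBATCH --mail-type=begin
#SBATCH --mail-type=end

# use the MPI-enabled binary so mpirun distributes the ranks across nodes
mpirun gmx_mpi mdrun -cpt 15 -deffnm md_0_1
#SBATCH END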

Regards,

Husen


On Wed, Apr 20, 2016 at 3:48 PM, Mark Abraham <mark.j.abra...@gmail.com>
wrote:

> Hi,
>
> Probably you haven't built gromacs with MPI support, else the name of the
> binary would be gmx_mpi. You can get that confirmed if you look further
> down the output .log files.
>
> Mark
>
> On Wed, 20 Apr 2016 10:09 Husen R <hus...@gmail.com> wrote:
>
> > Hi all,
> >
> > I tried to run mdrun on more than one node using the command available in
> > this url
> >
> >
> http://manual.gromacs.org/documentation/5.1.2/user-guide/mdrun-performance.html
> > .
> > The following is my sbatch job :
> >
> > ###SBATCH
> > #!/bin/bash
> > #SBATCH -J sim
> > #SBATCH -o md-%j.out
> > #SBATCH -A pro
> > #SBATCH -N 3
> > #SBATCH -n 24
> > #SBATCH --time=144:00:00
> > #SBATCH --mail-user=hus...@gmail.com
> > #SBATCH --mail-type=begin
> > #SBATCH --mail-type=end
> >
> > mpirun gmx mdrun -cpt 15 -deffnm md_0_1
> >
> > #SBATCH END
> >
> > I open the output file to see the result. It contains something
> repeatedly
> > as shown below:
> >
> > #OUTPUT###
> > ..
> > ..
> > ..
> >*GROMACS is written by:*
> >  Emile Apol  Rossen Apostolov  Herman J.C. BerendsenPar
> > Bjelkmar
> >  Aldert van Buuren   Rudi van Drunen Anton Feenstra   Sebastian
> Fritsch
> >   Gerrit Groenhof   Christoph Junghans   Anca HamuraruVincent
> > Hindriksen
> >  Dimitrios KarkoulisPeter KassonJiri Kraus  Carsten
> Kutzner
> >
> > Per Larsson  Justin A. Lemkul   Magnus Lundborg   Pieter
> Meulenhoff
> >Erik Marklund  Teemu Murtola   Szilard Pall   Sander Pronk
> >Roland Schulz Alexey Shvetsov Michael Shirts  Executable:
> > /usr/local/gromacs/bin/gmx
> >Alfons Sijbers
> >Peter TielemanTeemu Virolainen  Christian WennbergMaarten Wolf
> >and the project leaders:
> > Mark Abraham, Berk Hess, Erik Lindahl, and David van der Spoel
> >
> > Copyright (c) 1991-2000, University of Groningen, The Netherlands.
> > Copyright (c) 2001-2015, The GROMACS development team at
> > Uppsala University, Stockholm University and
> > the Royal Institute of Technology, Sweden.
> > check out http://www.gromacs.org for more information.
> >
> > GROMACS is free software; you can redistribute it and/or modify it
> > under the terms of the GNU Lesser General Public License
> > as published by the Free Software Foundation; either version 2.1
> > Data prefix:  /usr/local/gromacs
> > Command line:
> >   gmx mdrun -cpt 15 -deffnm md_0_1
> >
> > of the License, or (at your option) any later version.
> >
> > GROMACS:  gmx mdrun, VERSION 5.1.2
> > Executable:   /usr/local/gromacs/bin/gmx
> > Data prefix:  /usr/local/gromacs
> > Command line:
> >   gmx mdrun -cpt 15 -deffnm md_0_1
> >
> >:-) GROMACS - gmx mdrun, VERSION 5.1.2 (-:
> >
> >* GROMACS is written by:*
> >  Emile Apol  Rossen Apostolov  Herman J.C. BerendsenPar
> > Bjelkmar
> >  Aldert van Buuren   Rudi van Drunen Anton Feenstra   Sebastian
> Fritsch
> >   Gerrit Groenhof   Christoph Junghans   Anca HamuraruVincent
> > Hindriksen
> >  Dimitrios KarkoulisPeter KassonJiri Kraus  Carsten
> Kutzner
> >
> > Per Larsson  Justin A. Lemkul   Magnus Lundborg   Pieter
> Meulenhoff
> >Erik Marklund  Teemu Murtola   Szilard Pall   Sander Pronk
> >Roland Schulz Alexey Shvetsov Michael Shirts Alfons
> Sijbers
> >Peter TielemanTeemu Virolainen  Christian WennbergMaarten Wolf
> >and the project leaders:
> > Mark Abraham, Berk Hess, Erik Lindahl, and David van der Spoel
> >
> > Copyright (c) 1991-2000, Univer

Re: [gmx-users] mdrun on multiple nodes

2016-04-20 Thread Husen R
Hi Mark,

I have configured gromacs with mpi support (using -DGMX_MPI=on):

tar xfz gromacs-5.1.2.tar.gz
cd gromacs-5.1.2
mkdir build
cd build
cmake .. -DGMX_MPI=on -DGMX_BUILD_OWN_FFTW=ON
make
make check
sudo make install
source /usr/local/gromacs/bin/GMXRC

I have now tried to use gmx_mpi (as you said) instead of gmx, and it works!
Using the htop command, I can see that each node has 8 processes running
"gmx_mpi mdrun -cpt 15 -deffnm md_0_1".
I cannot wait to see the results.
Thanks a lot.

Regards,

Husen



On Wed, Apr 20, 2016 at 3:48 PM, Mark Abraham <mark.j.abra...@gmail.com>
wrote:

> Hi,
>
> Probably you haven't built gromacs with MPI support, else the name of the
> binary would be gmx_mpi. You can get that confirmed if you look further
> down the output .log files.
>
> Mark
>
> On Wed, 20 Apr 2016 10:09 Husen R <hus...@gmail.com> wrote:
>
> > Hi all,
> >
> > I tried to run mdrun on more than one node using the command available in
> > this url
> >
> >
> http://manual.gromacs.org/documentation/5.1.2/user-guide/mdrun-performance.html
> > .
> > The following is my sbatch job :
> >
> > ###SBATCH
> > #!/bin/bash
> > #SBATCH -J sim
> > #SBATCH -o md-%j.out
> > #SBATCH -A pro
> > #SBATCH -N 3
> > #SBATCH -n 24
> > #SBATCH --time=144:00:00
> > #SBATCH --mail-user=hus...@gmail.com
> > #SBATCH --mail-type=begin
> > #SBATCH --mail-type=end
> >
> > mpirun gmx mdrun -cpt 15 -deffnm md_0_1
> >
> > #SBATCH END
> >
> > I open the output file to see the result. It contains something
> repeatedly
> > as shown below:
> >
> > #OUTPUT###
> > ..
> > ..
> > ..
> >*GROMACS is written by:*
> >  Emile Apol  Rossen Apostolov  Herman J.C. BerendsenPar
> > Bjelkmar
> >  Aldert van Buuren   Rudi van Drunen Anton Feenstra   Sebastian
> Fritsch
> >   Gerrit Groenhof   Christoph Junghans   Anca HamuraruVincent
> > Hindriksen
> >  Dimitrios KarkoulisPeter KassonJiri Kraus  Carsten
> Kutzner
> >
> > Per Larsson  Justin A. Lemkul   Magnus Lundborg   Pieter
> Meulenhoff
> >Erik Marklund  Teemu Murtola   Szilard Pall   Sander Pronk
> >Roland Schulz Alexey Shvetsov Michael Shirts  Executable:
> > /usr/local/gromacs/bin/gmx
> >Alfons Sijbers
> >Peter TielemanTeemu Virolainen  Christian WennbergMaarten Wolf
> >and the project leaders:
> > Mark Abraham, Berk Hess, Erik Lindahl, and David van der Spoel
> >
> > Copyright (c) 1991-2000, University of Groningen, The Netherlands.
> > Copyright (c) 2001-2015, The GROMACS development team at
> > Uppsala University, Stockholm University and
> > the Royal Institute of Technology, Sweden.
> > check out http://www.gromacs.org for more information.
> >
> > GROMACS is free software; you can redistribute it and/or modify it
> > under the terms of the GNU Lesser General Public License
> > as published by the Free Software Foundation; either version 2.1
> > Data prefix:  /usr/local/gromacs
> > Command line:
> >   gmx mdrun -cpt 15 -deffnm md_0_1
> >
> > of the License, or (at your option) any later version.
> >
> > GROMACS:  gmx mdrun, VERSION 5.1.2
> > Executable:   /usr/local/gromacs/bin/gmx
> > Data prefix:  /usr/local/gromacs
> > Command line:
> >   gmx mdrun -cpt 15 -deffnm md_0_1
> >
> >:-) GROMACS - gmx mdrun, VERSION 5.1.2 (-:
> >
> >* GROMACS is written by:*
> >  Emile Apol  Rossen Apostolov  Herman J.C. BerendsenPar
> > Bjelkmar
> >  Aldert van Buuren   Rudi van Drunen Anton Feenstra   Sebastian
> Fritsch
> >   Gerrit Groenhof   Christoph Junghans   Anca HamuraruVincent
> > Hindriksen
> >  Dimitrios KarkoulisPeter KassonJiri Kraus  Carsten
> Kutzner
> >
> > Per Larsson  Justin A. Lemkul   Magnus Lundborg   Pieter
> Meulenhoff
> >Erik Marklund  Teemu Murtola   Szilard Pall   Sander Pronk
> >Roland Schulz Alexey Shvetsov Michael Shirts Alfons
> Sijbers
> >Peter TielemanTeemu Virolainen  Christian WennbergMaarten Wolf
> >and the project leaders:
> > Mark Abraham, Berk Hess, Erik Lindahl, and David van der Spoel
> >
> > Copyright (c) 1991-2000, Univer

[gmx-users] mdrun on multiple nodes

2016-04-20 Thread Husen R
Hi all,

I tried to run mdrun on more than one node using the command available at
this URL:
http://manual.gromacs.org/documentation/5.1.2/user-guide/mdrun-performance.html
The following is my sbatch job :

###SBATCH
#!/bin/bash
#SBATCH -J sim
#SBATCH -o md-%j.out
#SBATCH -A pro
#SBATCH -N 3
#SBATCH -n 24
#SBATCH --time=144:00:00
#SBATCH --mail-user=hus...@gmail.com
#SBATCH --mail-type=begin
#SBATCH --mail-type=end

mpirun gmx mdrun -cpt 15 -deffnm md_0_1

#SBATCH END

I opened the output file to see the result. It contains repeated content,
as shown below:

#OUTPUT###
..
..
..
   *GROMACS is written by:*
 Emile Apol  Rossen Apostolov  Herman J.C. BerendsenPar
Bjelkmar
 Aldert van Buuren   Rudi van Drunen Anton Feenstra   Sebastian Fritsch
  Gerrit Groenhof   Christoph Junghans   Anca HamuraruVincent Hindriksen
 Dimitrios KarkoulisPeter KassonJiri Kraus  Carsten Kutzner

Per Larsson  Justin A. Lemkul   Magnus Lundborg   Pieter Meulenhoff
   Erik Marklund  Teemu Murtola   Szilard Pall   Sander Pronk
   Roland Schulz Alexey Shvetsov Michael Shirts  Executable:
/usr/local/gromacs/bin/gmx
   Alfons Sijbers
   Peter TielemanTeemu Virolainen  Christian WennbergMaarten Wolf
   and the project leaders:
Mark Abraham, Berk Hess, Erik Lindahl, and David van der Spoel

Copyright (c) 1991-2000, University of Groningen, The Netherlands.
Copyright (c) 2001-2015, The GROMACS development team at
Uppsala University, Stockholm University and
the Royal Institute of Technology, Sweden.
check out http://www.gromacs.org for more information.

GROMACS is free software; you can redistribute it and/or modify it
under the terms of the GNU Lesser General Public License
as published by the Free Software Foundation; either version 2.1
Data prefix:  /usr/local/gromacs
Command line:
  gmx mdrun -cpt 15 -deffnm md_0_1

of the License, or (at your option) any later version.

GROMACS:  gmx mdrun, VERSION 5.1.2
Executable:   /usr/local/gromacs/bin/gmx
Data prefix:  /usr/local/gromacs
Command line:
  gmx mdrun -cpt 15 -deffnm md_0_1

   :-) GROMACS - gmx mdrun, VERSION 5.1.2 (-:

   * GROMACS is written by:*
 Emile Apol  Rossen Apostolov  Herman J.C. BerendsenPar
Bjelkmar
 Aldert van Buuren   Rudi van Drunen Anton Feenstra   Sebastian Fritsch
  Gerrit Groenhof   Christoph Junghans   Anca HamuraruVincent Hindriksen
 Dimitrios KarkoulisPeter KassonJiri Kraus  Carsten Kutzner

Per Larsson  Justin A. Lemkul   Magnus Lundborg   Pieter Meulenhoff
   Erik Marklund  Teemu Murtola   Szilard Pall   Sander Pronk
   Roland Schulz Alexey Shvetsov Michael Shirts Alfons Sijbers
   Peter TielemanTeemu Virolainen  Christian WennbergMaarten Wolf
   and the project leaders:
Mark Abraham, Berk Hess, Erik Lindahl, and David van der Spoel

Copyright (c) 1991-2000, University of Groningen, The Netherlands.
Copyright (c) 2001-2015, The GROMACS development team at
Uppsala University, Stockholm University and
the Royal Institute of Technology, Sweden.
check out http://www.gromacs.org for more information.

GROMACS is free software; you can redistribute it and/or modify it
under the terms of the GNU Lesser General Public License
as published by the Free Software Foundation; either version 2.1
of the License, or (at your option) any later version.

GROMACS:  gmx mdrun, VERSION 5.1.2
Executable:   /usr/local/gromacs/bin/gmx
Data prefix:  /usr/local/gromacs
Command line:
  gmx mdrun -cpt 15 -deffnm md_0_1

..
..
..

OUTPUT END###

Is it normal for the output file to have such content?

In addition, as you can see from my sbatch script, I use 3 nodes and 24
processes. With that configuration, I expected each node to have 8 processes
running "gmx mdrun -cpt 15 -deffnm md_0_1".
However, when I look at the running processes on each node using the htop
command, each node has more than 60 processes running "gmx mdrun -cpt 15
-deffnm md_0_1".
Why is this happening? I need help.

Thank you in advance.


regards,


Husen
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


[gmx-users] Run mdrun in parallel

2016-04-12 Thread Husen R
Hi all,

Currently, I'm trying to run the mdrun command in parallel.
The following is my batch script, using Slurm as the resource manager:

=Batch Script==
#!/bin/bash
#SBATCH -J Eq1
#SBATCH -o eq1-%j.out
#SBATCH -A pro
#SBATCH -N 2
#SBATCH -n 16

gmx mdrun -deffnm tpr
===End Batch Script===

As you can see from the batch script above, I tried to run mdrun with 2
nodes and 16 processors.
After the batch job was submitted, I looked at the processes running in the
background on each node using the htop command and found that only one node
was running the 'gmx mdrun -deffnm tpr' command; there was no running
process called 'gmx mdrun -deffnm tpr' on the other one.

Could anyone please tell me how to run mdrun (or gromacs in general) in
parallel?

Note: if I run an MPI application, I am always able to see it running on
every allocated node using htop.

thank you in advance.

Regards,


Husen
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Gromacs 5.1.2 installation problem

2016-04-05 Thread Husen R
Hello James,

Thank you for your reply and valuable information.

I tried running "gmx [command]" and it works.
Sorry for this fundamental question.
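
For anyone who finds this thread later, with the 5.1 naming scheme my
commands now look like the lines below (the input file names are just
placeholders for illustration, not my actual files):

gmx pdb2gmx -f protein.pdb -o processed.gro -water spce
gmx grompp -f em.mdp -c processed.gro -p topol.top -o em.tpr
gmx mdrun -deffnm em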

Regards,

Husen

On Tue, Apr 5, 2016 at 4:11 PM, James Graham <j.a.gra...@soton.ac.uk> wrote:

> Hi Husen,
>
> In GROMACS version 5.1 the default program naming scheme was changed so
> that everything is now part of the executable 'gmx'.  Because of this you
> need to use 'gmx pdb2gmx' (and e.g. 'gmx grompp', 'gmx mdrun') instead.
>
> The change was actually made in version 5.0, but they left an option to
> use it the old way as well.
>
> Regards,
> James
>
>
> On 05/04/16 08:19, Husen R wrote:
>
>> Dear all,
>>
>> I already installed gromacs-5.1.2 succesfully with the following
>> instruction :
>>
>> tar xfz gromacs-5.1.2.tar.gz
>> cd gromacs-5.1.2
>> mkdir build
>> cd build
>> cmake .. -DGMX_BUILD_OWN_FFTW=ON -DGMX_MPI=ON
>> make
>> make check
>> sudo make install
>> source /usr/local/gromacs/bin/GMXRC
>>
>> However, after finishing installation I can not use mdrun, pdb2gmx
>> (and possibly other commands).
>> The following is the error message when I tried to run pdb2gmx command.
>>
>> The program 'pdb2gmx' is currently not installed. You can install it by
>> typing:
>> sudo apt-get install gromacs
>>
>> I tried to find those commands using "locate" command but they're
>> doesn't exist as if gromacs is not installed.
>> Note : The installation directory /usr/local/gromacs is exist.
>>
>> anyone please tell me what will be the cause of this problem ?
>>
>> Regards,
>>
>>
>> Husen
>>
>
> --
> James Graham - PhD Student
> Institute for Complex Systems Simulation (ICSS)
> Computational Systems Chemistry
> University of Southampton, UK
>
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


[gmx-users] Gromacs 5.1.2 installation problem

2016-04-05 Thread Husen R
Dear all,

I have already installed gromacs-5.1.2 successfully with the following
instructions:

tar xfz gromacs-5.1.2.tar.gz
cd gromacs-5.1.2
mkdir build
cd build
cmake .. -DGMX_BUILD_OWN_FFTW=ON -DGMX_MPI=ON
make
make check
sudo make install
source /usr/local/gromacs/bin/GMXRC

However, after finishing the installation I cannot use mdrun, pdb2gmx
(and possibly other commands).
The following is the error message when I tried to run the pdb2gmx command.

The program 'pdb2gmx' is currently not installed. You can install it by typing:
sudo apt-get install gromacs

I tried to find those commands using the "locate" command, but they don't
exist, as if gromacs were not installed.
Note: the installation directory /usr/local/gromacs does exist.

Could anyone please tell me what the cause of this problem might be?

Regards,


Husen
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.