Re: [gmx-users] gmx mdrun with gpu

2019-05-06 Thread Szilárd Páll
Share a log file please so we can see the hardware detected, command line
options, etc.
--
Szilárd


On Sun, May 5, 2019 at 3:53 AM Maryam  wrote:

> Hello Reza
> Yes, I compiled it with GPU support, and the CUDA version is 9.1. Any suggestions?
> Thanks.
>
> On Sat., May 4, 2019, 1:45 a.m. Reza Esmaeeli, 
> wrote:
>
> > Hello Maryam,
> > Have you compiled the gromacs 2019 with GPU?
> > What version of CUDA do you have?
> >
> > - Reza
> >
> > On Saturday, May 4, 2019, Maryam  wrote:
> >
> > > Dear all,
> > > I want to run a simulation in gromacs 2019 on a system with 1 gpu and
> 32
> > > threads. I write this command: gmx mdrun -s md.tpr -v -nb gpu but it
> > seems
> > > it does not recognize gpus and it takes long for the simulation to
> reach
> > > its end (-ntmpi ntomp and nt seem not working either). In gromacs 2016
> > with
> > > 2 gpus, I use gmx_mpi -s md.tpr -v -gpu_id 1 -nb gpu -ntomp 16 -pin on
> > > -tunepme and it works fine, but the same command regardless of (gpu_id)
> > > does not work in gromacs 2019. What flags should I use to get the best
> > > performance of the simulation?
> > > Thank you.
> > > --
> > > Gromacs Users mailing list
> > >
> > > * Please search the archive at
> > > http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!
> > >
> > > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> > >
> > > * For (un)subscribe requests visit
> > > https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> > > send a mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] gmx mdrun with gpu

2019-05-04 Thread Maryam
Hello Reza
Yes, I compiled it with GPU support, and the CUDA version is 9.1. Any suggestions?
Thanks.

On Sat., May 4, 2019, 1:45 a.m. Reza Esmaeeli, 
wrote:

> Hello Maryam,
> Have you compiled the gromacs 2019 with GPU?
> What version of CUDA do you have?
>
> - Reza
>
> On Saturday, May 4, 2019, Maryam  wrote:
>
> > Dear all,
> > I want to run a simulation in gromacs 2019 on a system with 1 gpu and 32
> > threads. I write this command: gmx mdrun -s md.tpr -v -nb gpu but it
> seems
> > it does not recognize gpus and it takes long for the simulation to reach
> > its end (-ntmpi ntomp and nt seem not working either). In gromacs 2016
> with
> > 2 gpus, I use gmx_mpi -s md.tpr -v -gpu_id 1 -nb gpu -ntomp 16 -pin on
> > -tunepme and it works fine, but the same command regardless of (gpu_id)
> > does not work in gromacs 2019. What flags should I use to get the best
> > performance of the simulation?
> > Thank you.


Re: [gmx-users] gmx mdrun with gpu

2019-05-03 Thread Reza Esmaeeli
Hello Maryam,
Have you compiled the gromacs 2019 with GPU?
What version of CUDA do you have?

- Reza

On Saturday, May 4, 2019, Maryam  wrote:

> Dear all,
> I want to run a simulation in gromacs 2019 on a system with 1 gpu and 32
> threads. I write this command: gmx mdrun -s md.tpr -v -nb gpu but it seems
> it does not recognize gpus and it takes long for the simulation to reach
> its end (-ntmpi ntomp and nt seem not working either). In gromacs 2016 with
> 2 gpus, I use gmx_mpi -s md.tpr -v -gpu_id 1 -nb gpu -ntomp 16 -pin on
> -tunepme and it works fine, but the same command regardless of (gpu_id)
> does not work in gromacs 2019. What flags should I use to get the best
> performance of the simulation?
> Thank you.


[gmx-users] gmx mdrun with gpu

2019-05-03 Thread Maryam
Dear all,
I want to run a simulation in GROMACS 2019 on a system with 1 GPU and 32
threads. I use this command: gmx mdrun -s md.tpr -v -nb gpu, but it seems
it does not recognize the GPU, and the simulation takes very long to reach
its end (-ntmpi, -ntomp, and -nt do not seem to work either). In GROMACS 2016
with 2 GPUs, I use gmx_mpi -s md.tpr -v -gpu_id 1 -nb gpu -ntomp 16 -pin on
-tunepme and it works fine, but the same command (regardless of -gpu_id)
does not work in GROMACS 2019. What flags should I use to get the best
performance out of the simulation?
Thank you.
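For reference, GROMACS 2019 invocations that make the rank, thread, and GPU assignment explicit look like the following (a sketch only: it assumes the binary was built with GPU support and that the single GPU has id 0; whether a GPU is detected at all is what the requested log file would show):

```shell
# One thread-MPI rank, 32 OpenMP threads, nonbonded work offloaded to the GPU
gmx mdrun -s md.tpr -v -nb gpu -ntmpi 1 -ntomp 32 -pin on

# The same, but naming the GPU explicitly (id 0 on a single-GPU machine)
gmx mdrun -s md.tpr -v -nb gpu -ntmpi 1 -ntomp 32 -gpu_id 0 -pin on
```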


Re: [gmx-users] gmx mdrun -rerun issue

2018-10-12 Thread Benson Muite
Hi Andreas,

I tried it on my laptop:

gmx mdrun -rerun old_PROD.trr -deffnm new_PROD

and got the following output:

  gmx mdrun -rerun old_PROD.trr -deffnm new_PROD
   :-) GROMACS - gmx mdrun, 2018.3 (-:

     GROMACS is written by:
     Emile Apol         Rossen Apostolov    Paul Bauer          Herman J.C. Berendsen
     Par Bjelkmar       Aldert van Buuren   Rudi van Drunen     Anton Feenstra
     Gerrit Groenhof    Aleksei Iupinov     Christoph Junghans  Anca Hamuraru
     Vincent Hindriksen Dimitrios Karkoulis Peter Kasson        Jiri Kraus
     Carsten Kutzner    Per Larsson         Justin A. Lemkul    Viveca Lindahl
     Magnus Lundborg    Pieter Meulenhoff   Erik Marklund       Teemu Murtola
     Szilard Pall       Sander Pronk        Roland Schulz       Alexey Shvetsov
     Michael Shirts     Alfons Sijbers      Peter Tieleman      Teemu Virolainen
     Christian Wennberg Maarten Wolf
    and the project leaders:
     Mark Abraham, Berk Hess, Erik Lindahl, and David van der Spoel

Copyright (c) 1991-2000, University of Groningen, The Netherlands.
Copyright (c) 2001-2017, The GROMACS development team at
Uppsala University, Stockholm University and
the Royal Institute of Technology, Sweden.
check out http://www.gromacs.org for more information.

GROMACS is free software; you can redistribute it and/or modify it
under the terms of the GNU Lesser General Public License
as published by the Free Software Foundation; either version 2.1
of the License, or (at your option) any later version.

GROMACS:  gmx mdrun, version 2018.3
Executable: /home/benson/Projects/GromacsTest/gromacsinstall/bin/gmx
Data prefix:  /home/benson/Projects/GromacsTest/gromacsinstall
Working dir: /home/benson/Projects/GromacsTest/small_rerun_example
Command line:
   gmx mdrun -rerun old_PROD.trr -deffnm new_PROD


Back Off! I just backed up new_PROD.log to ./#new_PROD.log.1#
X server found. dri2 connection failed!
X server found. dri2 connection failed!
X server found. dri2 connection failed!
X server found. dri2 connection failed!
X server found. dri2 connection failed!
X server found. dri2 connection failed!
X server found. dri2 connection failed!
X server found. dri2 connection failed!
X server found. dri2 connection failed!
X server found. dri2 connection failed!
X server found. dri2 connection failed!
X server found. dri2 connection failed!
X server found. dri2 connection failed!
Reading file new_PROD.tpr, VERSION 2016.1 (single precision)
Note: file tpx version 110, software tpx version 112
Changing nstlist from 10 to 100, rlist from 1.2 to 1.201

Using 1 MPI thread
Using 4 OpenMP threads


Back Off! I just backed up new_PROD.trr to ./#new_PROD.trr.1#

Back Off! I just backed up new_PROD.edr to ./#new_PROD.edr.1#
starting md rerun 'Ethanol_Ethanol', reading coordinates from input 
trajectory 'old_PROD.trr'

trr version: GMX_trn_file (single precision)
Reading frame   0 time    0.000
WARNING: Some frames do not contain velocities.
  Ekin, temperature and pressure are incorrect,
  the virial will be incorrect when constraints are present.

Reading frame   1 time   10.000
step -1: resetting all time and cycle counters
Last frame    200 time 2000.000

NOTE: 39 % of the run time was spent in pair search,
   you might want to increase nstlist (this has no effect on accuracy)

    Core t (s)   Wall t (s)    (%)
    Time:    5.795    1.449  400.0
  (ns/day)    (hour/ns)
Performance:    0.030  804.864

GROMACS reminds you: "Molecular biology is essentially the practice of 
biochemistry without a license." (Edwin Chargaff)


Perhaps try a newer version of GROMACS? Rather than using a provided 
module, you can install it in your home directory on your cluster.
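The home-directory install suggested here can be sketched as follows (the version number and paths are illustrative, not prescriptive; GMX_BUILD_OWN_FFTW downloads and builds FFTW as part of the GROMACS build):

```shell
# Download and unpack a GROMACS release (2018.3 used here as an example)
tar xfz gromacs-2018.3.tar.gz
cd gromacs-2018.3
mkdir build && cd build

# Install under $HOME so no admin rights are needed on the cluster
cmake .. -DGMX_BUILD_OWN_FFTW=ON -DCMAKE_INSTALL_PREFIX=$HOME/gromacs
make -j 4
make install

# Put this build's gmx on the PATH for the current shell
source $HOME/gromacs/bin/GMXRC
```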

Regards,

Benson

On 10/11/18 1:51 PM, Andreas Mecklenfeld wrote:
> Dear Benson,
>
> thanks for the offer. I've used gmx traj to generate a smaller *.trr
> file, though the issue still occurs.
> I've uploaded my files to http://ge.tt/1HNpO8s2
>
>
> Kind regards,
> Andreas
>
>
>
> Am 09.10.2018 um 10:48 schrieb Benson Muite:
>> Current version 2018.3 seems to have re-run feature:
>>
>> http://manual.gromacs.org/documentation/current/user-guide/mdrun-features.html
>>  
>>
>>
>> Is your input data reasonable? Might a small version be available where
>> one could try this in 2018.3 to see if the same error is obtained?
>>
>> Benson
>>
>> On 10/9/18 11:40 AM, Andreas Mecklenfeld wrote:
>>> Hey,
>>>
>>> thanks for the quick response. Unfortunately, there isn't (at least
>>> not in the short-term). Which one would be suitable though?
>>>
>>> Best regards,
>>> Andreas
>>>
>>>
>>>
>>> Am 09.10.2018 um 10:31 schrieb Benson Muite:
 Hi,

 Is it possible to use a newer version of Gromacs?

 Benson

 On 10/9/18 11:15 AM, Andreas Mecklenfeld wrote:
> Dear Gromacs users,
>
>
> I've a question regarding the 

Re: [gmx-users] gmx mdrun -rerun issue

2018-10-11 Thread Andreas Mecklenfeld

Dear Benson,

thanks for the offer. I've used gmx traj to generate a smaller *.trr 
file, though the issue still occurs.

I've uploaded my files to http://ge.tt/1HNpO8s2


Kind regards,
Andreas



Am 09.10.2018 um 10:48 schrieb Benson Muite:

Current version 2018.3 seems to have re-run feature:

http://manual.gromacs.org/documentation/current/user-guide/mdrun-features.html

Is your input data reasonable? Might a small version be available where
one could try this in 2018.3 to see if the same error is obtained?

Benson

On 10/9/18 11:40 AM, Andreas Mecklenfeld wrote:

Hey,

thanks for the quick response. Unfortunately, there isn't (at least
not in the short-term). Which one would be suitable though?

Best regards,
Andreas



Am 09.10.2018 um 10:31 schrieb Benson Muite:

Hi,

Is it possible to use a newer version of Gromacs?

Benson

On 10/9/18 11:15 AM, Andreas Mecklenfeld wrote:

Dear Gromacs users,


I've a question regarding the rerun option of the mdrun command in
Gromacs 2016.1. It seems as if the calculation is repeatedly performed
for the last frame (until killed by the work station). The output is

"Last frame    1000 time 2000.000

WARNING: Incomplete header: nr 1001 time 2000"


My goal is to alter the .top-file (new) and calculate energies with
previously recorded coordinates (old): "gmx grompp -f old_PROD.mdp -c
old_PROD.gro -p new_topol.top -o new_PROD.tpr"

The mdrun looks like "gmx mdrun -rerun old_PROD.trr -deffnm new_PROD"


Is there a way to fix this?


Thanks,

Andreas




--
M. Sc. Andreas Mecklenfeld
Technische Universität Braunschweig
Institut für Thermodynamik
Hans-Sommer-Straße 5
38106 Braunschweig
Deutschland / Germany

Tel: +49 (0)531 391-2634
 +49 (0)531 391-65685
Fax: +49 (0)531 391-7814

http://www.ift-bs.de


Re: [gmx-users] gmx mdrun -rerun issue

2018-10-09 Thread Benson Muite
Current version 2018.3 seems to have re-run feature:

http://manual.gromacs.org/documentation/current/user-guide/mdrun-features.html

Is your input data reasonable? Might a small version be available where 
one could try this in 2018.3 to see if the same error is obtained?

Benson

On 10/9/18 11:40 AM, Andreas Mecklenfeld wrote:
> Hey,
>
> thanks for the quick response. Unfortunately, there isn't (at least 
> not in the short-term). Which one would be suitable though?
>
> Best regards,
> Andreas
>
>
>
> Am 09.10.2018 um 10:31 schrieb Benson Muite:
>> Hi,
>>
>> Is it possible to use a newer version of Gromacs?
>>
>> Benson
>>
>> On 10/9/18 11:15 AM, Andreas Mecklenfeld wrote:
>>> Dear Gromacs users,
>>>
>>>
>>> I've a question regarding the rerun option of the mdrun command in
>>> Gromacs 2016.1. It seems as if the calculation is repeatedly performed
>>> for the last frame (until killed by the work station). The output is
>>>
>>> "Last frame    1000 time 2000.000
>>>
>>> WARNING: Incomplete header: nr 1001 time 2000"
>>>
>>>
>>> My goal is to alter the .top-file (new) and calculate energies with
>>> previously recorded coordinates (old): "gmx grompp -f old_PROD.mdp -c
>>> old_PROD.gro -p new_topol.top -o new_PROD.tpr"
>>>
>>> The mdrun looks like "gmx mdrun -rerun old_PROD.trr -deffnm new_PROD"
>>>
>>>
>>> Is there a way to fix this?
>>>
>>>
>>> Thanks,
>>>
>>> Andreas
>>>
>>>
>

Re: [gmx-users] gmx mdrun -rerun issue

2018-10-09 Thread Andreas Mecklenfeld

Hey,

thanks for the quick response. Unfortunately, there isn't (at least not 
in the short-term). Which one would be suitable though?


Best regards,
Andreas



Am 09.10.2018 um 10:31 schrieb Benson Muite:

Hi,

Is it possible to use a newer version of Gromacs?

Benson

On 10/9/18 11:15 AM, Andreas Mecklenfeld wrote:

Dear Gromacs users,


I've a question regarding the rerun option of the mdrun command in
Gromacs 2016.1. It seems as if the calculation is repeatedly performed
for the last frame (until killed by the work station). The output is

"Last frame    1000 time 2000.000

WARNING: Incomplete header: nr 1001 time 2000"


My goal is to alter the .top-file (new) and calculate energies with
previously recorded coordinates (old): "gmx grompp -f old_PROD.mdp -c
old_PROD.gro -p new_topol.top -o new_PROD.tpr"

The mdrun looks like "gmx mdrun -rerun old_PROD.trr -deffnm new_PROD"


Is there a way to fix this?


Thanks,

Andreas





Re: [gmx-users] gmx mdrun -rerun issue

2018-10-09 Thread Benson Muite
Hi,

Is it possible to use a newer version of Gromacs?

Benson

On 10/9/18 11:15 AM, Andreas Mecklenfeld wrote:
> Dear Gromacs users,
>
>
> I've a question regarding the rerun option of the mdrun command in 
> Gromacs 2016.1. It seems as if the calculation is repeatedly performed 
> for the last frame (until killed by the work station). The output is
>
> "Last frame    1000 time 2000.000
>
> WARNING: Incomplete header: nr 1001 time 2000"
>
>
> My goal is to alter the .top-file (new) and calculate energies with 
> previously recorded coordinates (old): "gmx grompp -f old_PROD.mdp -c 
> old_PROD.gro -p new_topol.top -o new_PROD.tpr"
>
> The mdrun looks like "gmx mdrun -rerun old_PROD.trr -deffnm new_PROD"
>
>
> Is there a way to fix this?
>
>
> Thanks,
>
> Andreas
>
>

[gmx-users] gmx mdrun -rerun issue

2018-10-09 Thread Andreas Mecklenfeld

Dear Gromacs users,


I have a question regarding the rerun option of the mdrun command in 
GROMACS 2016.1. It seems the calculation is repeatedly performed 
for the last frame (until killed by the workstation). The output is


"Last frame    1000 time 2000.000

WARNING: Incomplete header: nr 1001 time 2000"


My goal is to alter the .top-file (new) and calculate energies with 
previously recorded coordinates (old): "gmx grompp -f old_PROD.mdp -c 
old_PROD.gro -p new_topol.top -o new_PROD.tpr"


The mdrun looks like "gmx mdrun -rerun old_PROD.trr -deffnm new_PROD"
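Collected in one place, the workflow described above is (a sketch using the file names given in this message):

```shell
# Build a new run input from the old coordinates and the modified topology
gmx grompp -f old_PROD.mdp -c old_PROD.gro -p new_topol.top -o new_PROD.tpr

# Recompute energies frame by frame over the previously recorded trajectory
gmx mdrun -rerun old_PROD.trr -deffnm new_PROD
```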


Is there a way to fix this?


Thanks,

Andreas



Re: [gmx-users] gmx mdrun, VERSION 5.0.2

2018-02-02 Thread Mark Abraham
Hi,

You've been using 5.1.4, not just 5.0.2. Older software can't know details
about what newer software does, and the .tpr file format is one of those
details. Make your .tpr with 5.0.2.

Mark

On Fri, Feb 2, 2018 at 4:23 PM K. Subashini  wrote:

> Hi gmx users,
>
> I use GROMACS version 5.0.2
>
> I got the following error, while giving gmx mdrun -nt 8 -v -deffnm NVT.tpr
>
> Reading file NVT.tpr, VERSION 5.1.4 (single precision)
> ---
> Program mdrun, VERSION 5.0.2
> Source code file: /opt/gromacs-5.0.2/src/gromacs/fileio/tpxio.c, line: 3303
> Fatal error:
> reading tpx file (NVT.tpr) version 103 with version 100 program
>
> I follow the gromacs tutorial (By Dr.Justin) which is for version 5.x
> series. Why do I face this error?
>
> Anything wrong with my input file?
>
> Thanks,
> Subashini.K
>
>


Re: [gmx-users] gmx mdrun, VERSION 5.0.2

2018-02-02 Thread Justin Lemkul



On 2/2/18 10:22 AM, K. Subashini wrote:

Hi gmx users,

I use GROMACS version 5.0.2

I got the following error, while giving gmx mdrun -nt 8 -v -deffnm NVT.tpr

Reading file NVT.tpr, VERSION 5.1.4 (single precision)
---
Program mdrun, VERSION 5.0.2
Source code file: /opt/gromacs-5.0.2/src/gromacs/fileio/tpxio.c, line: 3303
Fatal error:
reading tpx file (NVT.tpr) version 103 with version 100 program

I am following the GROMACS tutorial (by Dr. Justin), which is for the 5.x series. 
Why do I face this error?

Anything wrong with my input file?


Apparently you used a newer version to create the .tpr file than the 
version you're using to try to run it. Pick a version and use it; never 
mix and match. Versions in the 5.0.x series are pretty outdated. If 
you're starting new work, use the latest GROMACS version for bug fixes, 
feature enhancements, and much faster performance.


-Justin

--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Virginia Tech Department of Biochemistry

303 Engel Hall
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.biochem.vt.edu/people/faculty/JustinLemkul.html

==



[gmx-users] gmx mdrun, VERSION 5.0.2

2018-02-02 Thread K. Subashini
Hi gmx users,

I use GROMACS version 5.0.2

I got the following error, while giving gmx mdrun -nt 8 -v -deffnm NVT.tpr

Reading file NVT.tpr, VERSION 5.1.4 (single precision)
---
Program mdrun, VERSION 5.0.2
Source code file: /opt/gromacs-5.0.2/src/gromacs/fileio/tpxio.c, line: 3303
Fatal error:
reading tpx file (NVT.tpr) version 103 with version 100 program

I am following the GROMACS tutorial (by Dr. Justin), which is for the 5.x series. 
Why do I face this error?

Anything wrong with my input file?

Thanks,
Subashini.K
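One way to avoid the mismatch is to check which gmx is actually on the PATH and regenerate the .tpr with that same version (a sketch; the grompp input file names here are assumed, following the tutorial, and are not from this message):

```shell
# Confirm which GROMACS version will actually run
gmx --version | head -n 1

# Regenerate the run input with this same version
gmx grompp -f nvt.mdp -c em.gro -p topol.top -o NVT.tpr

# Note that -deffnm takes a base name, not a file name
gmx mdrun -nt 8 -v -deffnm NVT
```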




Re: [gmx-users] gmx mdrun

2017-04-05 Thread Vytautas Rakevičius
"my CPU switches off automatically"? Can you expain what actually happens. 
Windows go to sleep etc?Also you should do such things on Linux, cygwin is not 
that fast as you see in warning message.
 

On Sunday, April 2, 2017 11:24 AM, Neha Gupta  
wrote:
 

 Hi gromacs users,


I am using Gromacs 5.1.1 in windows cygwin.

When I run long calculations, my CPU switches off automatically (just before
writing the coordinate file .gro). How can I prevent this and ensure the
longevity of the CPU?

Running on 1 node with total 8 logical cores
Hardware detected:
  CPU info:
    Vendor: AuthenticAMD
    Brand:  AMD FX-8370E Eight-Core Processor
    Family: 21  model:  2  stepping:  0



I also observed these in log file


SIMD instructions most likely to fit this hardware: AVX_128_FMA
 SIMD instructions selected at GROMACS compile time: SSE4.1


Binary not matching hardware - you might be losing performance.
SIMD instructions most likely to fit this hardware: AVX_128_FMA
SIMD instructions selected at GROMACS compile time: SSE4.1


The current CPU can measure timings more accurately than the code in gmx
was configured to use. This might affect your simulation speed as accurate
timings are needed for load-balancing.
Please consider rebuilding gmx with the GMX_USE_RDTSCP=ON CMake option.



Thanks,
Neha

[gmx-users] gmx mdrun

2017-04-02 Thread Neha Gupta
Hi gromacs users,


I am using Gromacs 5.1.1 in windows cygwin.

When I run long calculations, my CPU switches off automatically (just before
writing the coordinate file .gro). How can I prevent this and ensure the
longevity of the CPU?

Running on 1 node with total 8 logical cores
Hardware detected:
  CPU info:
Vendor: AuthenticAMD
Brand:  AMD FX-8370E Eight-Core Processor
Family: 21  model:  2  stepping:  0



I also observed these in log file


SIMD instructions most likely to fit this hardware: AVX_128_FMA
 SIMD instructions selected at GROMACS compile time: SSE4.1


Binary not matching hardware - you might be losing performance.
SIMD instructions most likely to fit this hardware: AVX_128_FMA
SIMD instructions selected at GROMACS compile time: SSE4.1


The current CPU can measure timings more accurately than the code in gmx
was configured to use. This might affect your simulation speed as accurate
timings are needed for load-balancing.
Please consider rebuilding gmx with the GMX_USE_RDTSCP=ON CMake option.
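The rebuild the log asks for can be sketched as follows (the two CMake options are the ones named in the log messages above; the source and build directories are illustrative):

```shell
# Reconfigure in the GROMACS build directory
cd gromacs-5.1.1/build

# Match the SIMD level the log recommends and enable RDTSCP timing
cmake .. -DGMX_SIMD=AVX_128_FMA -DGMX_USE_RDTSCP=ON
make -j 8
make install
```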



Thanks,
Neha


Re: [gmx-users] gmx mdrun restart

2016-10-18 Thread Mark Abraham
Hi,

There's no problem to fix.

But you need to understand whether you're implementing what you want,
because using -append and changing -deffnm is inconsistent.

Mark

On Tue, Oct 18, 2016 at 5:54 PM Sailesh Bataju  wrote:

> Hi,
>
> Thank you, sir, for your immediate response. So is there any way to
> fix this problem other than not renaming the files at different stages? If
> there is, please help me with the command.
>
> Thank you for your consideration.
>
> --
> Self-reliant is the great potential for success.


Re: [gmx-users] gmx mdrun restart

2016-10-18 Thread Sailesh Bataju
Hi,

Thank you, sir, for your immediate response. So is there any way to
fix this problem other than not renaming the files at different stages? If
there is, please help me with the command.

Thank you for your consideration.

-- 
Self-reliant is the great potential for success.


Re: [gmx-users] gmx mdrun restart

2016-10-18 Thread Mark Abraham
Hi,

You're doing too much work. If you want appending (and generally you
should, because all of our checks stop you making several of the many
available silly errors :-) ), don't name each stage differently. Once
you've forced it to write .edr files that are separate, then you have to
use the concatenation tools, and that gets you a naive concatenation.
(Though I suspect mdrun -append will still give you these extra frames,
because the previous -maxh meant that a checkpoint and an energy frame got
written at that arbitrary point of the simulation.)
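In command form, the consistent variant of the sequence quoted below keeps one -deffnm throughout (a sketch; the -maxh value follows the commands in this thread):

```shell
# First leg of the run
gmx mdrun -deffnm nvt -v -maxh 0.05

# Each restart: same -deffnm, continue from the checkpoint; output is
# appended to the existing nvt.* files (appending is the default)
gmx mdrun -deffnm nvt -cpi nvt.cpt -v -maxh 0.05
gmx mdrun -deffnm nvt -cpi nvt.cpt -v -maxh 0.05

# No eneconv step is needed: nvt.edr then covers the whole run
```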

Mark

On Tue, Oct 18, 2016 at 4:09 PM Sailesh Bataju  wrote:

> Hi,
>
> I'm following the "Lysozyme in Water" tutorial and playing with mdrun.
> I restarted it several times as an experiment, and I am confused about
> whether the restart process affects the subsequent results.
>
> Here is what I did.
>
> Suppose nvt.tpr file is generated then
>
> 1st: gmx mdrun -deffnm nvt -v -maxh 0.05
>
> the process goes on and stops at some point. Next i restarted it
>
> 2nd: gmx mdrun -s nvt.tpr -cpi nvt.cpt -deffnm nvt2 -v -maxh 0.05 -append
>
> the process goes on and stops. Again I restarted it
>
> 3rd: gmx mdrun -s nvt.tpr -cpi nvt2.cpt -deffnm nvt3 -v -maxh 0.05 -append
>
> the process goes on and stops.
>
> Then i combined all those .edr files via
>
> 4th: gmx eneconv -f nvt.edr nvt2.edr nvt3.edr -o nvt_comb.edr
>
> After that i want to generate temp_comb.xvg file
>
> 5th: gmx energy -f nvt_comb.edr -o temp_comb.xvg
>
> And i got this
>
> 6th: vi temp_comb.xvg
>
> # This file was created Tue Oct 18 19:36:56 2016
> # Created by:
> #  :-) GROMACS - gmx energy, VERSION 5.1.4 (-:
> #
> # Executable:   /usr/local/gromacs/bin/gmx
> # Data prefix:  /usr/local/gromacs
> # Command line:
> #   gmx energy -f nvt_comb.edr -o temp_comb.xvg
> # gmx energy is part of G R O M A C S:
> #
> # GROningen MAchine for Chemical Simulation
> #
> @title "GROMACS Energies"
> @xaxis  label "Time (ps)"
> @yaxis  label "(K)"
> @TYPE xy
> @ view 0.15, 0.15, 0.75, 0.85
> @ legend on
> @ legend box on
> @ legend loctype view
> @ legend 0.78, 0.8
> @ legend length 2
> @ s0 legend "Temperature"
> 0.00  300.157532
> 1.00  295.876801
> 2.00  302.064392
> 3.00  298.937622
> 4.00  301.033997
> 5.00  299.731812
> 6.00  299.529449
> 7.00  298.454712
> 8.00  302.686157
> 8.80  299.580170
> 9.00  301.788818
>10.00  301.024384
>11.00  299.27
>12.00  300.694550
>13.00  299.234558
>14.00  295.608673
>15.00  301.074036
>16.00  301.199493
>16.56  296.513916
>17.00  297.886108
>18.00  300.328278
>19.00  298.618591
>20.00  300.199432
>21.00  299.382965
>22.00  299.846497
>23.00  301.105682
>24.00  298.529266
>24.84  299.471161
>
> We see extra frames around 8, 16 and 24 ps (at 8.80, 16.56 and 24.84): they
> are repeated. So my question is: does this affect the whole subsequent
> simulation? It is not only the temperature; other quantities may be repeated
> too if you look thoroughly. I didn't see the -append flag having any effect
> in the restart steps, because the frames look repeated anyhow.
>
> Thank you for your consideration.
>
> --
> Self-reliant is the great potential for success.


[gmx-users] gmx mdrun restart

2016-10-18 Thread Sailesh Bataju
Hi,

I'm following the "Lysozyme in Water" tutorial and playing with mdrun. I
restarted it several times as an experiment and am now confused about
whether the restart process affects the subsequent simulation.

Here is what I did.

Suppose the nvt.tpr file has been generated; then:

1st: gmx mdrun -deffnm nvt -v -maxh 0.05

the process goes on and stops at some point. Next I restarted it:

2nd: gmx mdrun -s nvt.tpr -cpi nvt.cpt -deffnm nvt2 -v -maxh 0.05 -append

the process goes on and stops. Again I restarted it:

3rd: gmx mdrun -s nvt.tpr -cpi nvt2.cpt -deffnm nvt3 -v -maxh 0.05 -append

the process goes on and stops.

Then I combined all those .edr files via:

4th: gmx eneconv -f nvt.edr nvt2.edr nvt3.edr -o nvt_comb.edr

After that I wanted to generate the temp_comb.xvg file:

5th: gmx energy -f nvt_comb.edr -o temp_comb.xvg

And I got this:

6th: vi temp_comb.xvg

# This file was created Tue Oct 18 19:36:56 2016
# Created by:
#  :-) GROMACS - gmx energy, VERSION 5.1.4 (-:
#
# Executable:   /usr/local/gromacs/bin/gmx
# Data prefix:  /usr/local/gromacs
# Command line:
#   gmx energy -f nvt_comb.edr -o temp_comb.xvg
# gmx energy is part of G R O M A C S:
#
# GROningen MAchine for Chemical Simulation
#
@title "GROMACS Energies"
@xaxis  label "Time (ps)"
@yaxis  label "(K)"
@TYPE xy
@ view 0.15, 0.15, 0.75, 0.85
@ legend on
@ legend box on
@ legend loctype view
@ legend 0.78, 0.8
@ legend length 2
@ s0 legend "Temperature"
0.00  300.157532
1.00  295.876801
2.00  302.064392
3.00  298.937622
4.00  301.033997
5.00  299.731812
6.00  299.529449
7.00  298.454712
8.00  302.686157
8.80  299.580170
9.00  301.788818
   10.00  301.024384
   11.00  299.27
   12.00  300.694550
   13.00  299.234558
   14.00  295.608673
   15.00  301.074036
   16.00  301.199493
   16.56  296.513916
   17.00  297.886108
   18.00  300.328278
   19.00  298.618591
   20.00  300.199432
   21.00  299.382965
   22.00  299.846497
   23.00  301.105682
   24.00  298.529266
   24.84  299.471161

We see extra frames around 8, 16 and 24 ps (at 8.80, 16.56 and 24.84): they
are repeated. So my question is: does this affect the whole subsequent
simulation? It is not only the temperature; other quantities may be repeated
too if you look thoroughly. I didn't see the -append flag having any effect
in the restart steps, because the frames look repeated anyhow.
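
A quick way to spot those extra checkpoint-time frames is to flag every frame whose time is off the nominal output stride. The `off_stride` helper below is a hypothetical sketch of mine, not a GROMACS tool; with the 1 ps stride used here, only the frames written when -maxh stopped each leg (8.80, 16.56 and 24.84 ps) stand out:

```python
# Hypothetical helper: return frame times that are not on the output stride.
def off_stride(times, stride=1.0, tol=1e-6):
    return [t for t in times if abs(t / stride - round(t / stride)) > tol]

# Times from the temp_comb.xvg above: integer picoseconds plus the three
# extra frames written at the -maxh checkpoints.
times = ([float(t) for t in range(0, 9)] + [8.8]
         + [float(t) for t in range(9, 17)] + [16.56]
         + [float(t) for t in range(17, 25)] + [24.84])

print(off_stride(times))  # [8.8, 16.56, 24.84]
```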

Thank you for your consideration.

-- 
Self-reliant is the great potential for success.


Re: [gmx-users] gmx mdrun std::bad_alloc whilst using PLUMED

2015-11-18 Thread Nash, Anthony
Thanks Mark,

I threw an email across to the plumed group this morning. I was surprised
to get a reply almost immediately. It *could* be the memory allocation
required to define the grid spacing in PLUMED.
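
The grid-spacing explanation is easy to sanity-check with a back-of-envelope estimate. The numbers below are my own, inferred from the GRID_MIN/GRID_MAX/GRID_SPACING values in the quoted PLUMED input (two torsions on [-pi, pi], three distances on [0, 2.5]); whether PLUMED adds an endpoint bin or stores derivatives as well may change the constant factor, but not the conclusion:

```python
import math

# Grid bounds and spacing taken from the quoted METAD block.
grid_min = [-math.pi, -math.pi, 0.0, 0.0, 0.0]
grid_max = [math.pi, math.pi, 2.5, 2.5, 2.5]
spacing = 0.01

points_per_dim = [round((hi - lo) / spacing) + 1
                  for lo, hi in zip(grid_min, grid_max)]
total_points = math.prod(points_per_dim)   # ~6.3e12 grid points
bytes_needed = total_points * 8            # one double per point, bias only

print(points_per_dim)                      # [629, 629, 251, 251, 251]
print(f"~{bytes_needed / 1e12:.0f} TB")
```

Tens of terabytes for the bias grid alone, so a std::bad_alloc the moment the 4th and 5th CVs multiply the grid dimensions is exactly what one would expect.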

Thanks
Anthony

Dr Anthony Nash
Department of Chemistry
University College London





On 17/11/2015 22:24, "gromacs.org_gmx-users-boun...@maillist.sys.kth.se on
behalf of Mark Abraham"  wrote:

>Hi,
>
>GROMACS is apparently the first to notice that memory is a problem, but you
>should also be directing questions about memory use with different kinds of
>CVs to the PLUMED people. mdrun knows nothing at all about the PLUMED CVs.
>The most likely explanation is that they have some data structure that
>works OK on small-scale problems, but which doesn't do well as the number
>of atoms, CVs, CV complexity, and/or ranks increases.
>
>Mark
>
>On Tue, Nov 17, 2015 at 11:05 PM Nash, Anthony  wrote:
>
>> Hi all,
>>
>> I am using PLUMED 2.2 and gromacs 5.0.4. For a while I had been testing
>> the viability of three collective variables for plumed: two dihedral
>> angles and one centre-of-mass distance. After observing my dimer rotate
>> about each other I decided it needed an intrahelical distance between two
>> of the dihedral atoms (A,B,C,D), per helix, to sample the CV space whilst
>> maintaining the 'regular' alpha-helical structure (the dihedral sampling
>> was coming from the protein uncoiling rather than rotating). Note: it is
>> likely that I will change these distances to the built-in alpha-helical
>> CV.
>>
>> The moment I increased the number of CVs from three to five, gromacs
>> throws a memory error. When I remove the 5th CV it still crashes. When
>> I remove the 4th it stops crashing.
>>
>> ---
>> CLUSTER OUTPUT FILE
>> ---
>>
>>
>> starting mdrun 'NEU_MUT in POPC in water'
>> 5000 steps, 10.0 ps.
>>
>> ---
>> Program: gmx mdrun, VERSION 5.0.4
>>
>> Memory allocation failed:
>> std::bad_alloc
>>
>> For more information and tips for troubleshooting, please check the
>>GROMACS
>> website at http://www.gromacs.org/Documentation/Errors
>> ---
>> Halting parallel program mdrun_mpi_d on CPU 0 out of 12
>>
>>
>>
>>
>> It halts all 12 processes and the job dies. I increased the memory so I am
>> using 43.2 GB of RAM distributed over 12 processes. The job still fails
>> (but then, failing to allocate memory is very different from not having
>> any memory at all).
>>
>> The contents of the gromacs.log file only report the initialisation of
>> gromacs followed by the initialisation of plumed. After this I would have
>> expected the regular MD stepping output. I've included the plumed
>> initialisation below. I would appreciate any help. I suspect the problem
>> lies with the 4th and 5th CVs, although systematically removing them and
>> playing around with the parameters hasn't yielded anything yet. Please
>> ignore the parameter values I have set. Besides the atom numbers,
>> everything else is the result of me trying to work out which ranges
>> of values cause PLUMED to exit and gromacs to crash. PLUMED input
>> file below:
>>
>>
>> ---
>> PLUMED INPUT FILE
>> ---
>>
>> phi: TORSION ATOMS=214,230,938,922
>> psi: TORSION ATOMS=785,801,367,351
>>
>> c1: COM ATOMS=1-571
>> c2: COM ATOMS=572-1142
>> COMdist: DISTANCE ATOMS=c1,c2
>>
>> d1: DISTANCE ATOMS=214,367
>> d2: DISTANCE ATOMS=938,785
>>
>> UPPER_WALLS ARG=COMdist AT=2.5 KAPPA=1000 EXP=2.0 EPS=1.0 OFFSET=0
>> LABEL=COMuwall
>> LOWER_WALLS ARG=COMdist AT=1.38 KAPPA=1000 EXP=2.0 EPS=1.0 OFFSET=0
>> LABEL=COMlwall
>>
>> UPPER_WALLS ARG=d1 AT=1.260 KAPPA=1000 EXP=2.0 EPS=1.0 OFFSET=0
>> LABEL=d1uwall
>> LOWER_WALLS ARG=d1 AT=1.228 KAPPA=1000 EXP=2.0 EPS=1.0 OFFSET=0
>> LABEL=d1lwall
>>
>> UPPER_WALLS ARG=d2 AT=1.228 KAPPA=1000 EXP=2.0 EPS=1.0 OFFSET=0
>> LABEL=d2uwall
>> LOWER_WALLS ARG=d2 AT=1.196 KAPPA=1000 EXP=2.0 EPS=1.0 OFFSET=0
>> LABEL=d2lwall
>>
>> METAD ...
>> LABEL=metad
>> ARG=phi,psi,COMdist,d1,d2
>> PACE=1
>> HEIGHT=0.2
>> SIGMA=0.06,0.06,0.06,0.06,0.06
>> FILE=HILLS_neu_mut_meta_A
>> BIASFACTOR=10.0
>> TEMP=310.0
>> GRID_MIN=-pi,-pi,0,0,0
>> GRID_MAX=pi,pi,2.5,2.5,2.5
>> GRID_SPACING=0.01,0.01,0.01,0.01,0.01
>> ... METAD
>>
>>
>> PRINT STRIDE=100
>> 
>> ARG=phi,psi,COMdist,COMlwall.bias,COMuwall.bias,d1,d1lwall.bias,d1uwall.bias,d2,d2lwall.bias,d2uwall.bias,metad.bias FILE=COLVAR_neu_mut_meta_A
>>
>>
>>
>> ---
>> GROMACS LOGFILE
>> ---
>>
>> Center of mass motion removal mode is Linear
>> We have the following groups for center of mass motion removal:
>>   0:  rest
>> There are: 53575 Atoms
>> Charge group distribution at step 0:  4474 4439 4268 4913 4471