Re: [gmx-users] About system requirement to gromacs

2012-08-01 Thread Linus Östberg
Both are probably using 4 cores, the first via built-in threads, the second launched through mpirun.

However, in the second case you are actually starting 4 independent copies of
the same simulation, 4 threads each (16 threads on 4 cores), so each copy
runs at roughly the speed of a single core, or worse. When running through
mpirun, you should use the MPI-enabled binary, mdrun_mpi, i.e.:
mpirun -np 4 mdrun_mpi -v -deffnm topol1
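
For comparison, to run a single simulation on 4 cores with the built-in
threading (no mpirun), something like this should work with GROMACS 4.5
(the -nt flag sets the thread count; check mdrun -h on your build to be sure):
mdrun -nt 4 -v -deffnm topol1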

/ Linus

On Wed, Aug 1, 2012 at 12:36 PM, rama david ramadavidgr...@gmail.com wrote:
 Thank you, Mark, for the reply.

 I ran mdrun and mpirun with the following commands; the output is pasted below.
 Please help me interpret it.


 1.   mdrun -v -deffnm topol1
 2.   mpirun -np 4 mdrun -v -deffnm topol1


 1.mdrun -v -deffnm topol1


 step 30, will finish Wed Aug  1 16:49:28 2012
  Average load imbalance: 12.3 %
  Part of the total run time spent waiting due to load imbalance: 5.1 %

 NOTE: 5.1 % performance was lost due to load imbalance
   in the domain decomposition.
   You might want to use dynamic load balancing (option -dlb.)


 Parallel run - timing based on wallclock.

NODE (s)   Real (s)  (%)
Time:  2.035  2.035100.0
(Mnbf/s)   (GFlops)   (ns/day)  (hour/ns)
 Performance:109.127  5.744  2.632  9.117

 gcq#98: You're About to Hurt Somebody (Jazzy Jeff)



 2. mpirun -np 4 mdrun -v -deffnm topol1

 Getting Loaded...
 Reading file topol1.tpr, VERSION 4.5.5 (single precision)
 Starting 4 threads
 Starting 4 threads
 Starting 4 threads
 Starting 4 threads
 Loaded with Money

 Loaded with Money

 Loaded with Money

 Loaded with Money

 Making 1D domain decomposition 4 x 1 x 1
 Making 1D domain decomposition 4 x 1 x 1


 Making 1D domain decomposition 4 x 1 x 1
 Making 1D domain decomposition 4 x 1 x 1

 starting mdrun 'Protein in water'
 5 steps,100.0 ps.
 starting mdrun 'Protein in water'
 5 steps,100.0 ps.

 starting mdrun 'Protein in water'
 5 steps,100.0 ps.
 starting mdrun 'Protein in water'
 5 steps,100.0 ps.

 NOTE: Turning on dynamic load balancing


 NOTE: Turning on dynamic load balancing

 step 0
 NOTE: Turning on dynamic load balancing

 step 100, will finish Wed Aug  1 19:36:10 2012vol 0.83  imb F  2% vol
 0.84  imb step 200, will finish Wed Aug  1 19:32:37 2012vol 0.87  imb
 F 16% vol 0.86  imb step 300, will finish Wed Aug  1 19:34:59 2012vol
 0.88  imb F  4% vol 0.85  imb step 400, will finish Wed Aug  1
 19:36:27 2012^Cmpirun: killing job...

 --
 mpirun noticed that process rank 0 with PID 4257 on node  VPCEB34EN
 exited on signal 0 (Unknown signal 0).
 --
 4 total processes killed (some possibly by mpirun during cleanup)
 mpirun: clean termination accomplished




 As you can see, the mdrun command estimates that it will finish at Wed Aug  1 16:49:28 2012,
 while the mpirun command estimates Wed Aug  1 19:36:10 2012.

 The mpirun command is taking more time.

 So from the above output I guess that 4 processors are being used with mpirun.


 Sorry if I have misread the output.

 Thank you for giving your valuable time..


 With best wishes and regards

 Rama David


Re: [gmx-users] Gromacs installation

2012-07-30 Thread Linus Östberg
If you install package 1 on your list, the second one will be
installed as well as a dependency (i.e. you need both of them).
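
If you prefer the command line, something along these lines should do it
(the exact package name may differ in your Fedora release; this is only a
guess at the usual naming, and it should pull in the shared libraries as a
dependency):
yum -y install gromacs-openmpi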

// Linus

On Mon, Jul 30, 2012 at 10:56 AM, rama david ramadavidgr...@gmail.com wrote:
 Hi GROMACS friends,
   I have a Dell Precision T3500 workstation (64-bit, 6 cores) running the Fedora
 operating system.
 I want to install GROMACS in parallel mode with MPI.
 I am planning to perform Replica Exchange Molecular Dynamics (REMD).
 As per the REMD instructions at
 http://www.gromacs.org/Documentation/How-tos/REMD?highlight=remd,
 GROMACS should not be compiled with threading.
 I installed Open MPI with the command yum -y install openmpi.
 I found that the Fedora add/remove software tool lists a GROMACS 4.5.5
 version that can be
 easily installed with yum.
 It comes as a total of 15 different packages, e.g. these two:

 1. GROMACS Open MPI binaries and libraries
 2 . GROMACS OPEN MPI shared libraries

 and more.

 Could you please tell me which packages I have to install so that I can
 run GROMACS 4.5.5 in parallel for REMD?


 Thank you in advance
 Have a nice day..


 With Best Wishes and regards.
 Rama David


Re: [gmx-users] Re: question about minimisation

2012-07-26 Thread Linus Östberg
I'm quite sure it's just different syntax in the topology and the mdp file. Compare to the C preprocessor:
#define POSRES // this is what -DPOSRES in the mdp file turns into
#ifdef POSRES // and this is what you write in the topology file
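
So, as a minimal sketch of how the two pieces fit together (using the
posre.itp name from your own topology):

in the mdp file:
define = -DPOSRES

in the topology:
#ifdef POSRES
#include "posre.itp"
#endif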

// Linus

On Thu, Jul 26, 2012 at 1:52 PM,
reising...@rostlab.informatik.tu-muenchen.de wrote:


 On 7/26/12 7:06 AM, reising...@rostlab.informatik.tu-muenchen.de wrote:


 On 7/26/12 6:07 AM, reising...@rostlab.informatik.tu-muenchen.de wrote:
 On 26/07/2012 6:47 PM, reising...@rostlab.informatik.tu-muenchen.de
 wrote:
 Hi,
 first I minimize my structure. This is the corresponding mdp file:

 define  = -DPOSRES
 integrator  = steep
 emtol   = 10
 nsteps  = 1500
 nstenergy   = 1
 energygrps  = System
 coulombtype = PME
 rcoulomb= 0.9
 rvdw= 0.9
 rlist   = 0.9
 fourierspacing  = 0.12
 pme_order   = 4
 ewald_rtol  = 1e-5
 pbc = xyz


 and then I run a md run. This is the corresponding mdp file:

 define  = -DPOSRES
 integrator  = md
 dt  = 0.001
 nsteps  = 5000
 nstxout = 100
 nstvout = 0
 nstfout = 0
 nstlog  = 1000
 nstxtcout   = 500
 nstenergy   = 5
 energygrps  = Protein Non-Protein
 nstcalcenergy   = 5
 nstlist = 10
 ns-type = Grid
 pbc = xyz
 rlist   = 0.9
 coulombtype = PME
 rcoulomb= 0.9
 rvdw= 0.9
 fourierspacing  = 0.12
 pme_order   = 4
 ewald_rtol  = 1e-5
 gen_vel = yes
 gen_temp= 200.0
 gen_seed= 
 constraints = all-bonds
 tcoupl  = V-rescale
 tc-grps = Protein  Non-Protein
 tau_t   = 0.1  0.1
 ref_t   = 298  298
 pcoupl  = no



 In my topology file I include the restraint files like this:

 ; Include Position restraint file
 #ifdef POSRES
 #include "posre.itp"
 #endif

 #ifdef POSRES
 #include "posre_memb.itp"
 #endif

 This won't work for multiple [moleculetype] entries. See
 http://www.gromacs.org/Documentation/How-tos/Position_Restraints



 I just noticed that there is a DPOSRES in the mdp files and a POSRES
 in my topology file. Is this the problem? Do I have to write it the same
 way in both files?

 http://www.gromacs.org/Documentation/Include_File_Mechanism

 Mark


 I just saw that the [ position_restraints ] section is included under the
 [dihedrals] section and not between the [moleculetype] and the [atoms] sections.
 According to the site you pointed me to, this is a problem, right? But this was
 done by GROMACS itself. Shall I write it into the [moleculetype] part?


 A [position_restraints] directive belongs to the [moleculetype] in which it is
 declared.  The original location of the #include statement produced by Gromacs
 is correct; it follows sequentially within the protein [moleculetype].  Your
 inclusion of the membrane restraint file within the protein [moleculetype] is,
 however, incorrect.

 But I cannot see why it would not work to have two restraint files.
 Can you please explain it to me?


 You can have two restraint files for different [moleculetypes] but they
 must be
 organized as such.

 Okay. So [atoms], [bonds], [dihedrals], and so on all belong to
 the entry in [moleculetype], right?


 Any directive belongs to the [moleculetype] that immediately precedes it.  Once
 a new [moleculetype] is declared (either directly or indirectly via the #include
 mechanism), you're working with a different molecule.

 But directly before I include the membrane restraint file, I include the
 membrane definition:

 #include "amber03.ff/dum.itp"
 #ifdef POSRES
 #include "posre_memb.itp"
 #endif


 So I thought that it comes directly after the molecule definition it belongs to. I
 thought it is the same as in the case of water, where first the
 water definition is included and after that the restraints for the
 water.

 Or am I wrong?


 Well, the #include statement shown here is different from the one you showed
 previously, which was wrong.  Please always be sure you're providing accurate
 information - it wastes less time and avoids confusion.

 I include the dummy atoms definition right after the ions. Or is this
 the
 wrong position?


 The position of #include statements in this context is irrelevant.  You can list
 the [moleculetypes] in any order you like, but the relevant dependent directives
 must be contained appropriately and the order of the listing in [molecules] must
 match the coordinate file.  Otherwise, it's fairly flexible.

 -Justin


   4935  4952  4950  4951 4
   4950  4954  4952  4953 4
   4954  4973  4972  4974 4

 ; Include Position restraint file
 #ifdef POSRES
 #include "posre.itp"
 #endif

 ; Include water topology
 #include "amber03.ff/tip3p.itp"

 #ifdef 

Re: [gmx-users] MPIRUN on Ubuntu

2010-12-27 Thread Linus Östberg
In order to use MPI on Ubuntu with the distribution-supplied package,
you need to use a combination of mpirun and mdrun_mpi, e.g.

mpirun -np 2 mdrun_mpi -deffnm md

to run on two cores.
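
(If you move to GROMACS 4.5 or newer with its built-in threading, a rough
equivalent without mpirun would be something like mdrun -nt 2 -deffnm md,
but for the 4.0.x distribution packages the mpirun + mdrun_mpi form above
is the way to go.)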

On Mon, Dec 27, 2010 at 7:04 PM, Justin A. Lemkul jalem...@vt.edu wrote:


 גדעון לפידות wrote:

 Hi all,
 I have recently installed Ubuntu on my computer (i5 processor) and
 installed GROMACS 4.0.7. I have installed OpenMPI and FFTW, but when using
 the mpirun command, instead of getting parallel processes it simply runs the same
 job four times simultaneously. How do I make the necessary adjustments?

 Properly compile an MPI-enabled mdrun.  Since you've provided no detail on
 how you did the installation, the only thing to suggest is that you've done
 something wrong.  Follow the installation guide:

 http://www.gromacs.org/Downloads/Installation_Instructions

 Alternatively, use the newest version of Gromacs (4.5.3), which uses
 threading for parallelization instead of requiring external MPI support.

 -Justin

 Thanks,
  Gideon


 --
 

 Justin A. Lemkul
 Ph.D. Candidate
 ICTAS Doctoral Scholar
 MILES-IGERT Trainee
 Department of Biochemistry
 Virginia Tech
 Blacksburg, VA
 jalemkul[at]vt.edu | (540) 231-9080
 http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin

 


Re: [gmx-users] what is the nicelevel ?

2010-12-07 Thread Linus Östberg
The nicelevel is simply the priority of the process, where 19 is the
lowest priority (i.e. most other programs will get the CPU before the
GROMACS process does) and -20 the highest.

See man nice.
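
As a rough example (the file names here are only placeholders), leaving the
default keeps genbox at low priority, while running it at normal priority
would look like:
genbox -cp conf.gro -cs spc216.gro -o solvated.gro -nice 0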

// Linus

On Tue, Dec 7, 2010 at 10:13 PM, GZ Zhang zgz...@gmail.com wrote:
 Hi, ALL
      I'm using genbox to create a water box. There is a flag called
 -nice which is described as being used to set the nicelevel.
 What is the nicelevel? What does the default value of 19 mean? And what if I would
 like to increase the number of decimal places (the default
 is 3) of all output numbers? Thanks.
      Regards,
      Guozhen



Re: [gmx-users] dssp

2010-11-05 Thread Linus Östberg
What did you try to do? To use dssp, you must select entire residues,
e.g. group 1 (Protein). That error sounds a bit like the problem I had when
trying to run dssp on only the backbone atoms.
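
A typical invocation would be something like (file names are placeholders,
adjust them to your own run):
do_dssp -s md.tpr -f traj.xtc -o ss.xpm
and then pick the whole Protein group (usually group 1) at the index prompt.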

// Linus Östberg

On Fri, Nov 5, 2010 at 2:08 PM, #ZHAO LINA# zhao0...@e.ntu.edu.sg wrote:
 Hi,

 Thanks for your response; I had modified the path parts when I posted that
 information.

 So my environment was actually set correctly.

 I just suddenly realized that I may not have root privileges there, because the dssp
 setup was not small; it is not on my personal computer but on some other machines.

 lina

 
 From: gmx-users-boun...@gromacs.org [gmx-users-boun...@gromacs.org] on behalf 
 of Justin A. Lemkul [jalem...@vt.edu]
 Sent: Friday, November 05, 2010 9:07 PM
 To: Discussion list for GROMACS users
 Subject: Re: [gmx-users] dssp

 #ZHAO LINA# wrote:
 Hi,

 Program do_dssp, VERSION 4.0.7
 Source code file: pathToGromacs/gromacs-4.0.7/src/tools/do_dssp.c, line: 471

 Fatal error:
 Failed to execute command: pathToDSSP/ -na ddEPI6I2 ddFHouPz > /dev/null
 2> /dev/null

 It produced two or three files like ddEPI6I2 and then died as shown above.

 This is my first time trying dssp, so I do not know how to troubleshoot it.


 Your DSSP environment variable is set incorrectly.  do_dssp is trying to call
 pathToDSSP as the executable.

 http://www.gromacs.org/Documentation/Gromacs_Utilities/do_dssp

 Note that you should substitute a meaningful PATH on your system, not 
 something
 like pathToDSSP.

 -Justin

 Thanks for any advice,

 lina


 --
 

 Justin A. Lemkul
 Ph.D. Candidate
 ICTAS Doctoral Scholar
 MILES-IGERT Trainee
 Department of Biochemistry
 Virginia Tech
 Blacksburg, VA
 jalemkul[at]vt.edu | (540) 231-9080
 http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin

 


Re: [gmx-users] (no subject)

2010-06-21 Thread Linus Östberg
Use grompp normally, without the -np flag. Then run mdrun_mpi with your
normal parameters, e.g.:
mpirun -np x mdrun_mpi -deffnm xx
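
So the whole sequence would look roughly like this (file names are
placeholders for your own inputs):
grompp -f md.mdp -c conf.gro -p topol.top -o md.tpr
mpirun -np 4 mdrun_mpi -deffnm md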

On Mon, Jun 21, 2010 at 2:00 PM, Amin Arabbagheri amin_a...@yahoo.comwrote:

 Hi all,

 I've installed GROMACS 4.0.7 and the MPI libraries using the Ubuntu Synaptic
 package manager.
 I want to run a simulation in parallel on a multi-processor single PC, but
 when preprocessing with grompp it doesn't accept the -np flag, and also, using -np with
 mdrun, it still runs as a single job.
 Thanks a lot for any instructions.

 Bests,
 Amin


