[gmx-users] g_tune_pme can't be executed

2013-03-21 Thread Daniel Wang
Hi everyone~

When I run g_tune_pme_mpi, it prompts:

Fatal error:
Need an MPI-enabled version of mdrun. This one
(mdrun_mpi)
seems to have been compiled without MPI support.

I'm sure my GROMACS is compiled WITH MPI support, and mpiexec -n xx
mdrun_mpi -s yy.tpr works normally.
How can I fix this? I'm using GROMACS 4.6 and Intel MPI 4.1.0.
Thanks.

-- 
Daniel Wang / npbool
Computer Science & Technology, Tsinghua University


Re: [gmx-users] g_tune_pme can't be executed

2013-03-21 Thread Carsten Kutzner
Hi Daniel,

are you using the newest version of 4.6? There was an issue with g_tune_pme
which I have already fixed; I suspect it could be responsible for the error
that you see.
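
Independent of that, it can help to tell g_tune_pme explicitly which binaries
to use: it takes the MPI launcher and the parallel mdrun from the MPIRUN and
MDRUN environment variables. A minimal sketch with example values:

export MPIRUN=`which mpiexec`
export MDRUN=`which mdrun_mpi`
g_tune_pme_mpi -np 16 -s yy.tpr

Here -np 16 is only an example for the total number of MPI processes to tune
for.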

Best,
  Carsten




--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/grubmueller/kutzner
http://www.mpibpc.mpg.de/grubmueller/sppexa



Re: [gmx-users] g_tune_pme can't be executed

2013-03-21 Thread Daniel Wang
Hi Carsten,

Actually, I tried 4.6.1 weeks ago. However, it seems slightly slower than the
old version. Luckily, I haven't deleted the 4.6.1 build from my disk. I'm now
testing the newest g_tune_pme. It starts up normally, but I have to wait to
see the result.
Thanks a lot!





-- 
王凝枰 Daniel Wang / npbool
Computer Science & Technology, Tsinghua University


Re: [gmx-users] g_tune_pme for multiple nodes

2012-12-04 Thread Chandan Choudhury
Dear Carsten and Florian,

Thanks for your useful suggestions. It did work. I still have a doubt
regarding the execution:

export MPIRUN=`which mpirun`
export MDRUN=/cm/shared/apps/gromacs/4.5.5/single/bin/mdrun_mpi_4.5.5
g_tune_pme_4.5.5 -np $NPROCS -s md0-200.tpr -c tune.pdb -x tune.xtc -e
tune.edr -g tune.log

I am supplying $NPROCS as 24 [2 (nodes) * 12 (ppn)], so that g_tune_pme tunes
the number of PME nodes. As I am executing it on a single node, mdrun never
checks PME settings for more than 12 ppn. So how do I understand that the PME
count is tuned for 24 ppn spanning across the two nodes?

Chandan


--
Chandan kumar Choudhury
NCL, Pune
INDIA



Re: [gmx-users] g_tune_pme for multiple nodes

2012-12-04 Thread Carsten Kutzner
Hi Chandan,

the number of separate PME nodes in GROMACS must be larger than two and
smaller than or equal to half the number of MPI processes (= np). Thus,
g_tune_pme checks only up to npme = np/2 PME nodes.
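(With -np 24, for example, the npme values that can be tested therefore lie
between 3 and 12.)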

Best,
  Carsten



Re: [gmx-users] g_tune_pme for multiple nodes

2012-12-04 Thread Chandan Choudhury
Hi Carsten,

Thanks for the reply.

If the number of PME nodes g_tune_pme checks goes up to half of np, what
happens when that exceeds the ppn of a node? What I mean is: if $NPROCS=36,
half of it is 18, but 18 cores are not available within a single node
(max. ppn = 12 per node). How would g_tune_pme function in such a scenario?

Chandan


--
Chandan kumar Choudhury
NCL, Pune
INDIA



Re: [gmx-users] g_tune_pme for multiple nodes

2012-12-04 Thread Carsten Kutzner

On Dec 4, 2012, at 2:45 PM, Chandan Choudhury iitd...@gmail.com wrote:

 Hi Carsten,
 
 Thanks for the reply.
 
 If PME nodes for the g_tune is half of np, then if it exceeds the ppn of of
 a node, how would g_tune perform. What I mean if $NPROCS=36, the its half
 is 18 ppn, but 18 ppns are not present in a single node  (max. ppn = 12 per
 node). How would g_tune function in such scenario?
Typically mdrun allocates the PME and PP nodes in an interleaved way, meaning
you would end up with 9 PME nodes on each of your two nodes.

Check the -ddorder option of mdrun.

Interleaving is normally fastest, unless you can place all PME processes
exclusively on a single node.
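
If you do want all PME processes together on one node, that ordering can be
requested explicitly. A sketch (file name and numbers are only examples, for
36 ranks on nodes with 12 cores each):

mpirun -np 36 mdrun_mpi -s md.tpr -npme 12 -ddorder pp_pme

With -ddorder pp_pme the PP ranks are placed first and the PME ranks last, so
the 12 PME ranks end up together on the last node; the default, -ddorder
interleave, spreads the PME ranks evenly over the nodes as described above.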

Carsten

 

Re: [gmx-users] g_tune_pme for multiple nodes

2012-12-04 Thread Chandan Choudhury
Thanks Carsten for the explanation.

Chandan



Re: [gmx-users] g_tune_pme for multiple nodes

2012-11-29 Thread Chandan Choudhury
Hi Carsten,

Thanks for your suggestion.

I did try to pass the total number of cores with the -np flag to g_tune_pme,
but it did not help. Hopefully I am doing something silly. I have pasted a
snippet of the PBS script.

#!/bin/csh
#PBS -l nodes=2:ppn=12:twelve
#PBS -N bilayer_tune




cd $PBS_O_WORKDIR
export MDRUN=/cm/shared/apps/gromacs/4.5.5/single/bin/mdrun_mpi_4.5.5
mpirun -np $NPROCS  g_tune_pme_4.5.5 -np 24 -s md0-200.tpr -c tune.pdb -x
tune.xtc -e tune.edr -g tune.log -nice 0


Then I submit the script using qsub.
When I log in to the compute nodes, I do not find any mdrun executable
running.

I also tried using nodes=1 and -np 12. It did not work through qsub.

Then I logged in to the compute nodes and executed g_tune_pme_4.5.5 -np 12
-s md0-200.tpr -c tune.pdb -x tune.xtc -e tune.edr -g tune.log -nice 0

It worked.

Also, if I just use
$g_tune_pme_4.5.5 -np 12 -s md0-200.tpr -c tune.pdb -x tune.xtc -e tune.edr
-g tune.log -nice 0
g_tune_pme executes on the head node and writes various files.

Kindly let me know what I am missing when I submit through qsub.

Thanks

Chandan
--
Chandan kumar Choudhury
NCL, Pune
INDIA




Re: [gmx-users] g_tune_pme for multiple nodes

2012-11-29 Thread Florian Dommert

Hi,

Don't start an MPI process. Run:

g_tune_pme_4.5.5 -np 24 -s md0-200.tpr -c tune.pdb -x tune.xtc -e tune.edr -g tune.log

and everything should work fine.

/Flo
 
 


Re: [gmx-users] g_tune_pme for multiple nodes

2012-11-29 Thread Carsten Kutzner
Hi Chandan,

On Nov 29, 2012, at 3:30 PM, Chandan Choudhury iitd...@gmail.com wrote:

 Hi Carsten,
 
 Thanks for your suggestion.
 
 I did try to pass to total number of cores with the np flag to the
 g_tune_pme, but it didnot help. Hopefully I am doing something silliy. I
 have pasted the snippet of the PBS script.
 
 #!/bin/csh
 #PBS -l nodes=2:ppn=12:twelve
 #PBS -N bilayer_tune
 
 
 
 
 cd $PBS_O_WORKDIR
 export MDRUN=/cm/shared/apps/gromacs/4.5.5/single/bin/mdrun_mpi_4.5.5
from here on your job file should read:

export MPIRUN=`which mpirun`
g_tune_pme_4.5.5 -np $NPROCS -s md0-200.tpr -c tune.pdb -x tune.xtc -e tune.edr 
-g tune.log

 mpirun -np $NPROCS  g_tune_pme_4.5.5 -np 24 -s md0-200.tpr -c tune.pdb -x
 tune.xtc -e tune.edr -g tune.log -nice 0
this way you will get $NPROCS g_tune_pme instances, each trying to run an
mdrun job on 24 cores, which is not what you want. g_tune_pme itself is a
serial program; it just spawns the mdrun processes.
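
Putting it together, a complete job file along these lines could look like the
following (only a sketch: it assumes a bash job script instead of the original
csh, and derives $NPROCS from the PBS node file):

#!/bin/bash
#PBS -l nodes=2:ppn=12:twelve
#PBS -N bilayer_tune

cd $PBS_O_WORKDIR
# one line per allocated core in the PBS node file
NPROCS=`wc -l < $PBS_NODEFILE`
# tell g_tune_pme which launcher and which MPI-enabled mdrun to use
export MPIRUN=`which mpirun`
export MDRUN=/cm/shared/apps/gromacs/4.5.5/single/bin/mdrun_mpi_4.5.5
# g_tune_pme runs serially and spawns the parallel mdrun benchmarks itself
g_tune_pme_4.5.5 -np $NPROCS -s md0-200.tpr -c tune.pdb -x tune.xtc -e tune.edr -g tune.log -nice 0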

Carsten
 
 


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/grubmueller/kutzner



Re: [gmx-users] g_tune_pme cannot be executed

2012-09-03 Thread Carsten Kutzner
Hi Zifeng,

have you tried to use 

g_tune_pme -npstring none …
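
i.e. let the queue determine the number of processes and keep g_tune_pme from
adding a -np argument to the mpirun call. A sketch for the PBS script (binary
names and the process count are examples):

export MPIRUN=`which mpirun`
export MDRUN=`which mdrun_mpi`
g_tune_pme -np 8 -npstring none -s npt

Here -np only tells g_tune_pme how many MPI processes to plan for; with
-npstring none it is not passed on to mpirun.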

Carsten




--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www3.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne



Re: [gmx-users] g_tune_pme for multiple nodes

2012-09-03 Thread Carsten Kutzner
Hi Chandan,

g_tune_pme also finds the optimal number of PME cores if the cores
are distributed on multiple nodes. Simply pass the total number of
cores to the -np option. Depending on the MPI and queue environment
that you use, the distribution of the cores over the nodes may have
to be specified in a hostfile / machinefile. Check g_tune_pme -h
on how to set that.
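
Under PBS, for example, one possibility (a sketch; the exact machinefile
option depends on your MPI implementation) is to put the machinefile into the
MPIRUN variable:

export MPIRUN="mpirun -machinefile $PBS_NODEFILE"
export MDRUN=`which mdrun_mpi`
g_tune_pme -np 24 -s md.tpr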

Best,
  Carsten




--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www3.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne



[gmx-users] g_tune_pme for multiple nodes

2012-08-28 Thread Chandan Choudhury
Dear gmx users,

I am using GROMACS 4.5.5.

I was trying to use g_tune_pme for a simulation. I intend to execute
mdrun on multiple nodes with 12 cores each. Therefore, I would like to
optimize the number of PME nodes. I could execute g_tune_pme -np 12 -s
md.tpr, but this will only find the optimal number of PME nodes for a
single-node run. How do I find the optimal number of PME nodes for a
multi-node run?

Any suggestion would be helpful.

Chandan

--
Chandan kumar Choudhury
NCL, Pune
INDIA


[gmx-users] g_tune_pme optimal PME nodes for multiple nodes

2012-08-25 Thread Chandan Choudhury
Dear gmx users,

I am using GROMACS 4.5.5.

I was trying to use g_tune_pme for a simulation. I intend to execute
mdrun on multiple nodes with 12 cores each. Therefore, I would like to
optimize the number of PME nodes. I could execute g_tune_pme -np 12 -s
md.tpr, but this will only find the optimal number of PME nodes for a
single-node run. How do I find the optimal number of PME nodes for a
multi-node run?

Any suggestion would be helpful.

Chandan
--
Chandan kumar Choudhury
NCL, Pune
INDIA


[gmx-users] g_tune_pme restart

2012-08-24 Thread Albert

Dear:

I use g_tune_pme for MD production, but it crashed before the job
finished since it exceeded the cluster walltime limit. I get the
following information from perf.out:


mpirun -np 144 mdrun -npme -1 -s tuned.tpr -v -o md.trr -c md.gro -e 
md.edr -g md.log


I am just wondering: is it correct to use the following command to append
to the job?


mpirun -np 144 mdrun -npme -1 -s tuned.tpr -v -f md.trr -e md.edr -g 
md.log -o md.trr -cpi -append


thank you very much

best
Albert


Re: [gmx-users] g_tune_pme restart

2012-08-24 Thread Roland Schulz
On Fri, Aug 24, 2012 at 12:23 PM, Albert mailmd2...@gmail.com wrote:
 Dear:

I use g_tune_pme for MD production but it clashed before the job
 finished since it is over cluster walltime limitation. I get the
 following information from perf.out:

 mpirun -np 144 mdrun -npme -1 -s tuned.tpr -v -o md.trr -c md.gro -e
 md.edr -g md.log

 I am just wondering, is it correct using the following command to append
 the jobs?

 mpirun -np 144 mdrun -npme -1 -s tuned.tpr -v -f md.trr -e md.edr -g
 md.log -o md.trr -cpi -append

mdrun doesn't have -f as an option. It has to be -o. Otherwise it seems OK.
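
So the append command would look something like:

mpirun -np 144 mdrun -npme -1 -s tuned.tpr -v -o md.trr -c md.gro -e md.edr -g md.log -cpi -append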

Roland








-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309


[gmx-users] g_tune_pme cannot be executed

2012-08-20 Thread zifeng li
Dear Gromacs users,

Morning!
I am using GROMACS version 4.5.4 and am trying to use the magic power of
g_tune_pme. However, it cannot be executed; it fails with this error in the
benchtest.log file:

mpirun error: do not specify a -np argument.  it is set for you.

The cluster I use requires submitting mpirun jobs through a PBS script, which
looks like the following:

#PBS -l nodes=8
#PBS -l walltime=2:00:00
#PBS -l pmem=2gb
cd $PBS_O_WORKDIR
#
echo  
echo  
echo Job started on `hostname` at `date`
g_tune_pme -s npt
echo  
echo Job Ended at `date`
echo  
~
I could run the command mpirun mdrun_mpi -deffnm npt using this PBS
script before, and as you notice, -np for g_tune_pme is not used. Any
suggestions about this issue?

What I have tried, for your reference:
1. Deleting the first line. Well... it doesn't help.
2. Setting the environment variable as the manual suggests:
export MPIRUN=/usr/local/mpirun -machinefile hosts
(using my account name as the hosts here).


Thanks in advance!

Good day :)

-Zifeng


[gmx-users] g_tune_pme error in blue gene

2012-03-31 Thread Albert

Hello:

I am trying to run g_tune_pme on a Blue Gene with the following script:

# @ job_name = bm
# @ class = kdm-large
# @ account_no = G07-13
# @ error = gromacs.info
# @ output = gromacs.out
# @ environment = COPY_ALL
# @ wall_clock_limit = 160:00:00
# @ notification = error
# @ job_type = bluegene
# @ bg_size = 64
# @ queue
mpirun -exe /opt/gromacs/4.5.5/bin/g_tune_pme -args -v -s md.tpr -o 
bm.trr -cpo bm.cpt -g bm.log -launch -mode VN -np 256


but I get the following messages as soon as I submit the job, and it
terminates soon after:


---gromacs.info--
Mar 31 20:45:57.677742 BE_MPI (ERROR): Job execution failed
Mar 31 20:45:57.677803 BE_MPI (ERROR): Job 10969 is in state ERROR ('E')
Mar 31 20:44:58.476985 FE_MPI (ERROR): Job execution failed (error 
code - 50)
Mar 31 20:44:58.477065 FE_MPI (ERROR):  - Job execution failed - job 
switched to an error state
Mar 31 20:45:57.714358 BE_MPI (ERROR): The error message in the job 
record is as follows:
Mar 31 20:45:57.714376 BE_MPI (ERROR):   Load failed on 
192.168.101.49: Executable file is not a 32-bit ELF file

Mar 31 20:44:58.691897 FE_MPI (ERROR): Failure list:
Mar 31 20:44:58.691923 FE_MPI (ERROR):   - 1. Job execution failed - 
job switched to an error state (failure #50)




Re: [gmx-users] g_tune_pme error in blue gene

2012-03-31 Thread Mark Abraham

On 1/04/2012 4:50 AM, Albert wrote:

Hello:

  I am trying to run g_tune_pme in blue gene with following script:

# @ job_name = bm
# @ class = kdm-large
# @ account_no = G07-13
# @ error = gromacs.info
# @ output = gromacs.out
# @ environment = COPY_ALL
# @ wall_clock_limit = 160:00:00
# @ notification = error
# @ job_type = bluegene
# @ bg_size = 64
# @ queue
mpirun -exe /opt/gromacs/4.5.5/bin/g_tune_pme -args -v -s md.tpr -o 
bm.trr -cpo bm.cpt -g bm.log -launch -mode VN -np 256


but I've got the following messages as soon as I submit jobs and it 
terminate soon:


---gromacs.info--
Mar 31 20:45:57.677742 BE_MPI (ERROR): Job execution failed
Mar 31 20:45:57.677803 BE_MPI (ERROR): Job 10969 is in state ERROR ('E')
Mar 31 20:44:58.476985 FE_MPI (ERROR): Job execution failed (error 
code - 50)
Mar 31 20:44:58.477065 FE_MPI (ERROR):  - Job execution failed - job 
switched to an error state
Mar 31 20:45:57.714358 BE_MPI (ERROR): The error message in the job 
record is as follows:
Mar 31 20:45:57.714376 BE_MPI (ERROR):   Load failed on 
192.168.101.49: Executable file is not a 32-bit ELF file


This means the executable is unsuitable for the hardware to run. Front 
and back ends of BlueGene are different hardware, of course.



Mar 31 20:44:58.691897 FE_MPI (ERROR): Failure list:
Mar 31 20:44:58.691923 FE_MPI (ERROR):   - 1. Job execution failed - 
job switched to an error state (failure #50)


g_tune_pme relies on being able to spawn mpirun processes and measure 
their performance. Back-end BlueGene/L jobs cannot spawn new processes, 
and I'm skeptical that BlueGene/P would be able to do this either (but P 
is less restrictive). So you will need to run g_tune_pme compiled for 
the front end in your job script, and consult g_tune_pme -h for clues on 
how to set up your job script so that g_tune_pme can correctly call 
mpirun to invoke mdrun_mpi on the back end.
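
In general terms, the pattern that g_tune_pme -h describes would look something
like the sketch below (untested; the front-end installation path and the mdrun
binary name are assumptions, and the BlueGene mpirun may need different
arguments than a plain launcher):

export MDRUN=/opt/gromacs/4.5.5/bin/mdrun_mpi        # back-end mdrun binary (assumed name)
export MPIRUN=mpirun                                 # launcher g_tune_pme calls to start back-end jobs
/opt/gromacs/4.5.5-fe/bin/g_tune_pme -np 256 -s md.tpr -launch   # front-end g_tune_pme build (assumed path)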


Mark

Re: [gmx-users] g_tune_pme error in blue gene

2012-03-31 Thread Mark Abraham

On 1/04/2012 9:13 AM, Mark Abraham wrote:

On 1/04/2012 4:50 AM, Albert wrote:

Hello:

  I am trying to run g_tune_pme on BlueGene with the following script:

# @ job_name = bm
# @ class = kdm-large
# @ account_no = G07-13
# @ error = gromacs.info
# @ output = gromacs.out
# @ environment = COPY_ALL
# @ wall_clock_limit = 160:00:00
# @ notification = error
# @ job_type = bluegene
# @ bg_size = 64
# @ queue
mpirun -exe /opt/gromacs/4.5.5/bin/g_tune_pme -args -v -s md.tpr -o 
bm.trr -cpo bm.cpt -g bm.log -launch -mode VN -np 256


but I get the following messages as soon as I submit the job, and it 
terminates soon after:


---gromacs.info--
Mar 31 20:45:57.677742 BE_MPI (ERROR): Job execution failed
Mar 31 20:45:57.677803 BE_MPI (ERROR): Job 10969 is in state ERROR 
('E')
Mar 31 20:44:58.476985 FE_MPI (ERROR): Job execution failed (error 
code - 50)
Mar 31 20:44:58.477065 FE_MPI (ERROR):  - Job execution failed - 
job switched to an error state
Mar 31 20:45:57.714358 BE_MPI (ERROR): The error message in the job 
record is as follows:
Mar 31 20:45:57.714376 BE_MPI (ERROR):   Load failed on 
192.168.101.49: Executable file is not a 32-bit ELF file


This means the executable is unsuitable for the hardware to run. Front 
and back ends of BlueGene are different hardware, of course.



Mar 31 20:44:58.691897 FE_MPI (ERROR): Failure list:
Mar 31 20:44:58.691923 FE_MPI (ERROR):   - 1. Job execution failed 
- job switched to an error state (failure #50)


g_tune_pme relies on being able to spawn mpirun processes and measure 
their performance. Back-end BlueGene/L jobs cannot spawn new 
processes, and I'm skeptical that BlueGene/P would be able to do this 
either (but P is less restrictive). So you will need to run g_tune_pme 
compiled for the front end in your job script, and consult g_tune_pme 
-h for clues on how to set up your job script so that g_tune_pme can 
correctly call mpirun to invoke mdrun_mpi on the back end.




Posting your successful job script would be a welcome contribution for 
those in the community who will face this problem in the future.


Mark

[gmx-users] g_tune_pme

2011-07-28 Thread Carla Jamous
Hi everyone, please help: I was running simulations with gromacs version 4.0.3,
but I got the following error:
Average load imbalance: 12.1 %
 Part of the total run time spent waiting due to load imbalance: 6.9 %
 Steps where the load balancing was limited by -rdd, -rcon and/or -dds: X 0
% Y 9 %
 Average PME mesh/force load: 0.807
 Part of the total run time spent waiting due to PP/PME imbalance: 5.3 %

NOTE: 6.9 % performance was lost due to load imbalance
  in the domain decomposition.

NOTE: 5.3 % performance was lost because the PME nodes
  had less work to do than the PP nodes.
  You might want to decrease the number of PME nodes
  or decrease the cut-off and the grid spacing.

After searching the mailing list archive and reading the manual, I decided
to use g_tune_pme, so I switched to gromacs 4.5.4. Here's my script:

#PBS -S /bin/bash
#PBS -N job_md6ns
#PBS -e job_md6ns.err
#PBS -o job_md6ns.log
#PBS -m ae -M carlajam...@gmail.com
#PBS -l select=2:ncpus=8:mpiprocs=8
#PBS -l walltime=024:00:00
cd $PBS_O_WORKDIR
export GMXLIB=$GMXLIB:/scratch/carla/top:.
module load gromacs
chem=/opt/software/SGI/gromacs/4.5.4/bin/
mdrunmpi=mpiexec /opt/software/SGI/gromacs/4.5.4/bin/
${chem}grompp -v -f md6ns.mdp -c 1rlu_apo_mdeq.gro -o 1rlu_apo_md6ns.tpr -p
1rlu_apo.top
${mdrunmpi}g_tune_pme -v -s 1rlu_apo_md6ns.tpr -o 1rlu_apo_md6ns.trr -cpo
state_6ns.cpt -c 1rlu_apo_md6ns.gro -x 1rlu_apo_md6ns.xtc -e md6ns.edr -g
md6ns.log -np 4 -ntpr 1 -launch

But now, I have the following error message:

Fatal error:
Library file residuetypes.dat not found in current dir nor in your GMXLIB
path.

Except that I'm using amber94 force-field and that my topology files are in
a special directory called top where I modified certain things. With gromacs
4.0.3, it always worked so I don't know what is happening here.

Please does anyone have an idea of what it might be?

Do I have to run pdb2gmx, editconf, etc... with the gromacs 4.5.4 for it to
work?

Thank you,

Carla

Re: [gmx-users] g_tune_pme

2011-07-28 Thread Carsten Kutzner
Hi Carla,

On Jul 28, 2011, at 9:38 AM, Carla Jamous wrote:

 Hi everyone, please help: I was running simulations with gromacs version 4.0.3, but 
 I got the following error:
 Average load imbalance: 12.1 %
  Part of the total run time spent waiting due to load imbalance: 6.9 %
  Steps where the load balancing was limited by -rdd, -rcon and/or -dds: X 0 % 
 Y 9 %
  Average PME mesh/force load: 0.807
  Part of the total run time spent waiting due to PP/PME imbalance: 5.3 %
 
This is not an error, but just a hint about how you could optimize your performance.

 NOTE: 6.9 % performance was lost due to load imbalance
   in the domain decomposition.
 
 NOTE: 5.3 % performance was lost because the PME nodes
   had less work to do than the PP nodes.
   You might want to decrease the number of PME nodes
   or decrease the cut-off and the grid spacing.
 
 After searching the mailing list archive and reading the manual, I decided 
 to use g_tune_pme, so I switched to gromacs 4.5.4. Here's my script:
Note that there is also a g_tune_pme version for 4.0.7: 
http://www.mpibpc.mpg.de/home/grubmueller/projects/MethodAdvancements/Gromacs/index.html

As another possibility, you can use the tpr file you created with 4.0.x as
input for Gromacs 4.5.x, also for g_tune_pme; this is probably the easiest solution.

 
 #PBS -S /bin/bash
 #PBS -N job_md6ns
 #PBS -e job_md6ns.err
 #PBS -o job_md6ns.log
 #PBS -m ae -M carlajam...@gmail.com
 #PBS -l select=2:ncpus=8:mpiprocs=8
 #PBS -l walltime=024:00:00
 cd $PBS_O_WORKDIR
 export GMXLIB=$GMXLIB:/scratch/carla/top:.
 module load gromacs
 chem=/opt/software/SGI/gromacs/4.5.4/bin/
 mdrunmpi=mpiexec /opt/software/SGI/gromacs/4.5.4/bin/
 ${chem}grompp -v -f md6ns.mdp -c 1rlu_apo_mdeq.gro -o 1rlu_apo_md6ns.tpr -p 
 1rlu_apo.top
 ${mdrunmpi}g_tune_pme -v -s 1rlu_apo_md6ns.tpr -o 1rlu_apo_md6ns.trr -cpo 
 state_6ns.cpt -c 1rlu_apo_md6ns.gro -x 1rlu_apo_md6ns.xtc -e md6ns.edr -g 
 md6ns.log -np 4 -ntpr 1 -launch
 
 But now, I have the following error message: 
 
 Fatal error:
 Library file residuetypes.dat not found in current dir nor in your GMXLIB 
 path.
Why don't you build your tpr file on your workstation and then switch over
to the cluster? I guess this will make life easier for you.

Also note that you must not call g_tune_pme in parallel (which you do by
${mdrunmpi}g_tune_pme). g_tune_pme will spawn its own MPI processes with the
help of the MPIRUN and MDRUN environment variables. See g_tune_pme -h;
probably you need to set
export MDRUN=/opt/software/SGI/gromacs/4.5.4/bin/mdrun
export MPIRUN=`which mpiexec`
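
As a minimal sketch (untested; the output flags from your original command are
omitted here for brevity), the launch lines of the PBS script above could then read:

export MDRUN=/opt/software/SGI/gromacs/4.5.4/bin/mdrun    # MPI-enabled mdrun from the module's install path
export MPIRUN=`which mpiexec`
${chem}g_tune_pme -v -s 1rlu_apo_md6ns.tpr -np 4 -ntpr 1 -launch    # started serially, without mpiexec in front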

Hope that helps,
  Carsten

 
 Except that I'm using amber94 force-field and that my topology files are in a 
 special directory called top where I modified certain things. With gromacs 
 4.0.3, it always worked so I don't know what is happening here.
 
 Please does anyone have an idea of what it might be?
 
 Do I have to run pdb2gmx, editconf, etc... with the gromacs 4.5.4 for it to 
 work?
 
 Thank you,
 
 Carla


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne






Re: [gmx-users] g_tune_pme big standard deviation in perf.out output

2011-01-01 Thread Carsten Kutzner
Dear Yanbin,

On Dec 30, 2010, at 9:20 PM, WU Yanbin wrote:
 I'm simulating a SPC/E water box with the size of 4nm by 4nm by 4nm. The 
 command g_tune_pme was used to find the optimal PME node numbers, Coulomb 
 cutoff radius and grid spacing size. 
 
 The following command is used:
 g_tune_pme -np 24 -steps 5000 -resetstep 500 ...
 rcoul=1.5nm, rvdw=1.5nm, fourierspacing=0.12
 
 The simulation is done with no error. Below is the output:
 ---
 Line   tpr   PME nodes   Gcycles Av.   Std.dev.   ns/day   PME/f   DD grid
    0     0       12         2813.762    187.115    9.604   0.361   4  3  1
    1     0       11         2969.826    251.210    9.112   0.510  13  1  1
    2     0       10         2373.469    154.005   11.385   0.445   2  7  1
    3     0        9         2129.519     58.132   12.665   0.601   5  3  1
    4     0        8         2411.653    265.233   11.248   0.570   4  4  1
    5     0        7         2062.770    514.023   13.490   0.616  17  1  1
    6     0        6         1539.237     89.189   17.547   0.748   6  3  1
    7     0        0         1633.318    113.037   16.548       -   6  4  1
    8     0   -1(  4)        1330.146     32.362   20.276   1.050   4  5  1
 ---
 
 The optimal -npme is 4.
 
 It seems to me that the Std. dev is too huge.
This is the standard deviation resulting from multiple runs with the
same settings. If you do not specify -r for the number of repeats 
explicitly to g_tune_pme, it will do two tests for each setting. For
the optimum of 4 PME nodes the standard deviation is 2.4 percent of the 
mean, thus not large at all.
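(For that setting, the row labeled 8 in the table: 32.362 / 1330.146 ≈ 0.024,
i.e. about 2.4 percent of the mean.)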

 Can anyone tell me the meaning of Gcycles Av. and Std. dev and their 
 relations to the accuracy of ns/day?
Both the number of CPU cycles and the ns/day values are determined from
the md.log output files of the respective runs. g_tune_pme does the averaging
for you, but you can also look at the individual results; these log files
are still there after the tuning run. The standard deviation is printed
only for the Gcycles - maybe it would be a good idea to also print the standard
deviation for the ns/day values. If the standard deviation is X percent of the
mean for the cycles, then it is also X percent of the mean ns/day.

 
 Another question:
 I tried 
 g_tune_pme -np 24 -steps 1000 -resetstep 100 ... (the default value of 
 g_tune_pme)
 rcoul=1.5nm, rvdw=1.5nm, fourierspacing=0.12
 
 The optimal -npme is 6, different from -npme=4 as obtained with big 
 -nsteps.
 Should I increase -nsteps even more to get a better estimate, or what other 
 parameters should I try?
 
In principle the results will become more exact, the longer the test runs
are. For your system it seems that the load between the processes is not yet
optimally balanced after the default 100 steps so that -resetstep 500 gives
you a more accurate value. I think the -steps 5000 value is large enough, 
but another test with a higher resetstep value would answer your question.
Since you already know that 7-12 PME nodes will not perform well, I would
try

g_tune_pme -np 24 -steps 5000 -resetstep 5000 -min 0.16 -max 0.25 ...

Regards,
  Carsten

 Do let me know if the questions are not made clear.
 Thank you.
 
 Best,
 Yanbin


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne






[gmx-users] g_tune_pme big standard deviation in perf.out output

2010-12-30 Thread WU Yanbin
Dear GMXers,

I'm simulating a SPC/E water box with the size of 4nm by 4nm by 4nm. The
command g_tune_pme was used to find the optimal PME node numbers, Coulomb
cutoff radius and grid spacing size.

The following command is used:
g_tune_pme -np 24 -steps 5000 -resetstep 500 ...
rcoul=1.5nm, rvdw=1.5nm, fourierspacing=0.12

The simulation is done with no error. Below is the output:
---
Line   tpr   PME nodes   Gcycles Av.   Std.dev.   ns/day   PME/f   DD grid
   0     0       12         2813.762    187.115    9.604   0.361   4  3  1
   1     0       11         2969.826    251.210    9.112   0.510  13  1  1
   2     0       10         2373.469    154.005   11.385   0.445   2  7  1
   3     0        9         2129.519     58.132   12.665   0.601   5  3  1
   4     0        8         2411.653    265.233   11.248   0.570   4  4  1
   5     0        7         2062.770    514.023   13.490   0.616  17  1  1
   6     0        6         1539.237     89.189   17.547   0.748   6  3  1
   7     0        0         1633.318    113.037   16.548       -   6  4  1
   8     0   -1(  4)        1330.146     32.362   20.276   1.050   4  5  1
---

The optimal -npme is 4.

It seems to me that the Std. dev is too huge.
Can anyone tell me the meaning of Gcycles Av. and Std. dev and their
relations to the accuracy of ns/day?

Another question:
I tried
g_tune_pme -np 24 -steps 1000 -resetstep 100 ... (the default value of
g_tune_pme)
rcoul=1.5nm, rvdw=1.5nm, fourierspacing=0.12

The optimal -npme is 6, different from -npme=4 as obtained with big
-nsteps.
Should I increase -nsteps even more to get a better estimate, or what other
parameters should I try?

Do let me know if the questions are not made clear.
Thank you.

Best,
Yanbin

[gmx-users] g_tune_pme in Ranger

2009-06-30 Thread Patricia Soto
Hi,
I wonder whether somebody has been able to install g_tune_pme on Ranger (one
of the TACC clusters). If so, please contact me off-list, as I have a couple
of questions to ask.

Patricia.

-- 
Dr. Patricia Soto
patricia.sot...@gmail.com