Re: [gmx-users] g_tune_pme for multiple nodes

2012-12-04 Thread Chandan Choudhury
On Tue, Dec 4, 2012 at 7:18 PM, Carsten Kutzner  wrote:

>
> On Dec 4, 2012, at 2:45 PM, Chandan Choudhury  wrote:
>
> > Hi Carsten,
> >
> > Thanks for the reply.
> >
> > If the number of PME nodes used by g_tune_pme is half of np, what happens
> > when that exceeds the ppn of a single node? What I mean is: if $NPROCS=36,
> > half of that is 18, but 18 ppn are not available on a single node
> > (max. ppn = 12 per node). How would g_tune_pme function in such a scenario?
> Typically mdrun allocates the PME and PP nodes in an interleaved way,
> meaning you would end up with 9 PME nodes on each of your two nodes.
>
> Check the -ddorder option of mdrun.
>
> Interleaving is normally fastest unless you can have all PME processes
> exclusively on a single node.
>

Thanks Carsten for the explanation.

Chandan

>
> Carsten
>

Re: [gmx-users] g_tune_pme for multiple nodes

2012-12-04 Thread Carsten Kutzner

On Dec 4, 2012, at 2:45 PM, Chandan Choudhury  wrote:

> Hi Carsten,
> 
> Thanks for the reply.
> 
> If the number of PME nodes used by g_tune_pme is half of np, what happens
> when that exceeds the ppn of a single node? What I mean is: if $NPROCS=36,
> half of that is 18, but 18 ppn are not available on a single node
> (max. ppn = 12 per node). How would g_tune_pme function in such a scenario?
Typically mdrun allocates the PME and PP nodes in an interleaved way, meaning
you would end up with 9 PME nodes on each of your two nodes.

Check the -ddorder option of mdrun.

Interleaving is normally fastest unless you can have all PME processes
exclusively on a single node.
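The two placement strategies can be compared directly from the command line. This is a sketch for the 36-rank example above; the -npme and -ddorder flag names are as in GROMACS 4.5 mdrun, while the binary and file names are taken from the earlier messages in this thread:

```shell
# Interleaved PP/PME placement (mdrun's default): PME ranks are spread
# evenly across the nodes together with the PP ranks.
mpirun -np 36 mdrun_mpi_4.5.5 -s md0-200.tpr -npme 18 -ddorder interleave

# Alternative ordering: all PP ranks first, then all PME ranks -- with
# ppn = 12 this groups the PME ranks onto as few nodes as possible.
mpirun -np 36 mdrun_mpi_4.5.5 -s md0-200.tpr -npme 18 -ddorder pp_pme
```

Which ordering wins depends on the network; timing both on the actual machine is the only reliable test.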

Carsten


Re: [gmx-users] g_tune_pme for multiple nodes

2012-12-04 Thread Chandan Choudhury
Hi Carsten,

Thanks for the reply.

If the number of PME nodes used by g_tune_pme is half of np, what happens
when that exceeds the ppn of a single node? What I mean is: if $NPROCS=36,
half of that is 18, but 18 ppn are not available on a single node
(max. ppn = 12 per node). How would g_tune_pme function in such a scenario?

Chandan


--
Chandan kumar Choudhury
NCL, Pune
INDIA



Re: [gmx-users] g_tune_pme for multiple nodes

2012-12-04 Thread Carsten Kutzner
Hi Chandan,

the number of separate PME nodes in GROMACS must be larger than two and
smaller than or equal to half the number of MPI processes (=np). Thus,
g_tune_pme checks only up to npme = np/2 PME nodes.
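The upper bound can be checked with a line of shell arithmetic. This is an illustration of the np/2 limit only; the actual candidate list g_tune_pme scans also depends on its own options:

```shell
# Separate PME ranks must lie in (2, np/2], so for -np 24 the
# largest value g_tune_pme will ever test is:
np=24
npme_max=$(( np / 2 ))
echo $npme_max   # prints 12
```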

Best,
  Carsten



Re: [gmx-users] g_tune_pme for multiple nodes

2012-12-04 Thread Chandan Choudhury
Dear Carsten and Florian,

Thanks for your useful suggestions. It did work. I still have a doubt
regarding the execution:

export MPIRUN=`which mpirun`
export MDRUN="/cm/shared/apps/gromacs/4.5.5/single/bin/mdrun_mpi_4.5.5"
g_tune_pme_4.5.5 -np $NPROCS -s md0-200.tpr -c tune.pdb -x tune.xtc -e
tune.edr -g tune.log

I am supplying $NPROCS as 24 [2 (nodes) * 12 (ppn)], so that g_tune_pme tunes
the number of PME nodes. As I am executing it from a single node, mdrun never
checks PME counts greater than 12 ppn. So, how do I verify that the PME count
is tuned for 24 processes spanning the two nodes?

Chandan


--
Chandan kumar Choudhury
NCL, Pune
INDIA



Re: [gmx-users] g_tune_pme for multiple nodes

2012-11-29 Thread Carsten Kutzner
Hi Chandan,

On Nov 29, 2012, at 3:30 PM, Chandan Choudhury  wrote:

> Hi Carsten,
> 
> Thanks for your suggestion.
> 
> I did try to pass the total number of cores with the -np flag to
> g_tune_pme, but it did not help. Hopefully I am doing something silly. I
> have pasted a snippet of the PBS script.
> 
> #!/bin/csh
> #PBS -l nodes=2:ppn=12:twelve
> #PBS -N bilayer_tune
> 
> 
> 
> 
> cd $PBS_O_WORKDIR
> export MDRUN="/cm/shared/apps/gromacs/4.5.5/single/bin/mdrun_mpi_4.5.5"
from here on your job file should read:

export MPIRUN=`which mpirun`
g_tune_pme_4.5.5 -np $NPROCS -s md0-200.tpr -c tune.pdb -x tune.xtc -e tune.edr -g tune.log
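Assembled into a complete job file, this could look as follows. A sketch only: the original snippet mixed sh-style `export` into a `#!/bin/csh` script, so csh's `setenv` is used here, and $NPROCS is assumed to be derived from the PBS-provided node file rather than predefined by the site:

```shell
#!/bin/csh
#PBS -l nodes=2:ppn=12:twelve
#PBS -N bilayer_tune

cd $PBS_O_WORKDIR

# csh syntax (setenv), unlike the sh-style "export" in the original snippet
setenv MPIRUN `which mpirun`
setenv MDRUN  /cm/shared/apps/gromacs/4.5.5/single/bin/mdrun_mpi_4.5.5
set NPROCS = `wc -l < $PBS_NODEFILE`

# g_tune_pme itself runs serially and launches the parallel mdrun
# benchmarks on its own via $MPIRUN
g_tune_pme_4.5.5 -np $NPROCS -s md0-200.tpr -c tune.pdb -x tune.xtc \
                 -e tune.edr -g tune.log
```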

> mpirun -np $NPROCS  g_tune_pme_4.5.5 -np 24 -s md0-200.tpr -c tune.pdb -x
> tune.xtc -e tune.edr -g tune.log -nice 0
this way you will get $NPROCS g_tune_pme instances, each trying to run an
mdrun job on 24 cores, which is not what you want. g_tune_pme itself is a
serial program; it just spawns the mdrun processes.

Carsten
> 
> 
> Then I submit the script using qsub.
> When I log in to the compute nodes, I do not find an mdrun executable
> running.
> 
> I also tried using nodes=1 and np 12. It did not work through qsub.
> 
> Then I logged in to the compute nodes and executed g_tune_pme_4.5.5 -np 12
> -s md0-200.tpr -c tune.pdb -x tune.xtc -e tune.edr -g tune.log -nice 0
> 
> It worked.
> 
> Also, if I just use
> $g_tune_pme_4.5.5 -np 12 -s md0-200.tpr -c tune.pdb -x tune.xtc -e tune.edr
> -g tune.log -nice 0
> g_tune_pme executes on the head node and writes various files.
> 
> Kindly let me know what I am missing when I submit through qsub.
> 
> Thanks
> 
> Chandan
> --
> Chandan kumar Choudhury
> NCL, Pune
> INDIA
> 
> 
> On Mon, Sep 3, 2012 at 3:31 PM, Carsten Kutzner  wrote:
> 
>> Hi Chandan,
>> 
>> g_tune_pme also finds the optimal number of PME cores if the cores
>> are distributed on multiple nodes. Simply pass the total number of
>> cores to the -np option. Depending on the MPI and queue environment
>> that you use, the distribution of the cores over the nodes may have
>> to be specified in a hostfile / machinefile. Check g_tune_pme -h
>> on how to set that.
>> 
>> Best,
>>  Carsten
>> 
>> 
>> On Aug 28, 2012, at 8:33 PM, Chandan Choudhury  wrote:
>> 
>>> Dear gmx users,
>>> 
>>> I am using GROMACS 4.5.5.
>>> 
>>> I was trying to use g_tune_pme for a simulation. I intend to execute
>>> mdrun on multiple nodes with 12 cores each. Therefore, I would like to
>>> optimize the number of PME nodes. I could execute g_tune_pme -np 12
>>> md.tpr, but this will only find the optimal number of PME nodes for a
>>> single-node run. How do I find the optimal number of PME nodes for
>>> multiple nodes?
>>> 
>>> Any suggestion would be helpful.
>>> 
>>> Chandan
>>> 
>>> --
>>> Chandan kumar Choudhury
>>> NCL, Pune
>>> INDIA


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/grubmueller/kutzner

--
gmx-users mailing list    gmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] g_tune_pme for multiple nodes

2012-11-29 Thread Florian Dommert
> -----Original Message-----
> From: gmx-users-boun...@gromacs.org [mailto:gmx-users-
> boun...@gromacs.org] On Behalf Of Chandan Choudhury
> Sent: Thursday, 29 November 2012 15:31
> To: Discussion list for GROMACS users
> Subject: Re: [gmx-users] g_tune_pme for multiple nodes
> 
> Hi Carsten,
> 
> Thanks for your suggestion.
> 
> I did try to pass the total number of cores with the -np flag to
> g_tune_pme, but it did not help. Hopefully I am doing something silly. I
> have pasted a snippet of the PBS script.
> 
> #!/bin/csh
> #PBS -l nodes=2:ppn=12:twelve
> #PBS -N bilayer_tune
> 
> 
> 
> 
> cd $PBS_O_WORKDIR
> export
> MDRUN="/cm/shared/apps/gromacs/4.5.5/single/bin/mdrun_mpi_4.5.5"
> mpirun -np $NPROCS  g_tune_pme_4.5.5 -np 24 -s md0-200.tpr -c tune.pdb -x
> tune.xtc -e tune.edr -g tune.log -nice 0

Hi,

 Don't start an MPI process. Run:

g_tune_pme_4.5.5 -np 24 -s md0-200.tpr -c tune.pdb -x tune.xtc -e tune.edr -g tune.log

and everything should work fine.
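If mpirun does not inherit the node list from PBS by itself, the launcher line can carry a machinefile. A sketch only: it assumes an Open MPI-style -machinefile option and that g_tune_pme invokes whatever command the MPIRUN variable holds, as suggested earlier in the thread:

```shell
# Hand the PBS-generated node list to the launcher that g_tune_pme uses
setenv MPIRUN "`which mpirun` -machinefile $PBS_NODEFILE"
g_tune_pme_4.5.5 -np 24 -s md0-200.tpr -c tune.pdb -x tune.xtc \
                 -e tune.edr -g tune.log -nice 0
```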

/Flo
> 
> 
> Then I submit the script using qsub.
> When I login to the compute nodes there I donot find and mdrun executable
> running.
> 
> I also tried using nodes=1 and np 12. It didnot work through qsub.
> 
> Then I logged in to the compute nodes and executed g_tune_pme_4.5.5 -np 12
-
> s md0-200.tpr -c tune.pdb -x tune.xtc -e tune.edr -g tune.log -nice 0
> 
> It worked.
> 
> Also, if I just use
> $g_tune_pme_4.5.5 -np 12 -s md0-200.tpr -c tune.pdb -x tune.xtc -e
tune.edr -g
> tune.log -nice 0 g_tune_pme executes on the head node and writes various
files.
> 
> Kindly let me know what am I missing when I submit through qsub.
> 
> Thanks
> 
> Chandan
> --
> Chandan kumar Choudhury
> NCL, Pune
> INDIA
> 
> 
> On Mon, Sep 3, 2012 at 3:31 PM, Carsten Kutzner  wrote:
> 
> > Hi Chandan,
> >
> > g_tune_pme also finds the optimal number of PME cores if the cores are
> > distributed on multiple nodes. Simply pass the total number of cores
> > to the -np option. Depending on the MPI and queue environment that you
> > use, the distribution of the cores over the nodes may have to be
> > specified in a hostfile / machinefile. Check g_tune_pme -h on how to
> > set that.
> >
> > Best,
> >   Carsten
> >
> >
> > On Aug 28, 2012, at 8:33 PM, Chandan Choudhury 
> wrote:
> >
> > > Dear gmx users,
> > >
> > > I am using 4.5.5 of gromacs.
> > >
> > > I was trying to use g_tune_pme for a simulation. I intend to execute
> > > mdrun at multiple nodes with 12 cores each. Therefore, I would like
> > > to optimize the number of pme nodes. I could execute g_tune_pme -np
> > > 12 md.tpr. But this will only find the optimal PME nodes for single
> > > nodes run. How do I find the optimal PME nodes for multiple nodes.
> > >
> > > Any suggestion would be helpful.
> > >
> > > Chandan
> > >
> > > --
> > > Chandan kumar Choudhury
> > > NCL, Pune
> > > INDIA
> > > --
> > > gmx-users mailing list gmx-users@gromacs.org
> > > http://lists.gromacs.org/mailman/listinfo/gmx-users
> > > * Please search the archive at
> > http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> > > * Please don't post (un)subscribe requests to the list. Use the www
> > > interface or send it to gmx-users-requ...@gromacs.org.
> > > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> >
> >
> > --
> > Dr. Carsten Kutzner
> > Max Planck Institute for Biophysical Chemistry
> > Theoretical and Computational Biophysics
> > Am Fassberg 11, 37077 Goettingen, Germany
> > Tel. +49-551-2012313, Fax: +49-551-2012302
> > http://www3.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne
> >
> >



Re: [gmx-users] g_tune_pme for multiple nodes

2012-11-29 Thread Chandan Choudhury
Hi Carsten,

Thanks for your suggestion.

I did try to pass the total number of cores to g_tune_pme via the -np
flag, but it did not help. Hopefully I am doing something silly. I have
pasted a snippet of the PBS script:

#!/bin/csh
#PBS -l nodes=2:ppn=12:twelve
#PBS -N bilayer_tune




cd $PBS_O_WORKDIR
export MDRUN="/cm/shared/apps/gromacs/4.5.5/single/bin/mdrun_mpi_4.5.5"
mpirun -np $NPROCS  g_tune_pme_4.5.5 -np 24 -s md0-200.tpr -c tune.pdb -x
tune.xtc -e tune.edr -g tune.log -nice 0


Then I submit the script using qsub.
When I log in to the compute nodes, I do not find any mdrun executable
running.

I also tried using nodes=1 and np 12. It did not work through qsub.

Then I logged in to the compute nodes and executed g_tune_pme_4.5.5 -np 12
-s md0-200.tpr -c tune.pdb -x tune.xtc -e tune.edr -g tune.log -nice 0

It worked.

Also, if I just use
$g_tune_pme_4.5.5 -np 12 -s md0-200.tpr -c tune.pdb -x tune.xtc -e tune.edr
-g tune.log -nice 0
g_tune_pme executes on the head node and writes various files.

Kindly let me know what I am missing when I submit through qsub.

Thanks

Chandan
--
Chandan kumar Choudhury
NCL, Pune
INDIA


On Mon, Sep 3, 2012 at 3:31 PM, Carsten Kutzner  wrote:

> Hi Chandan,
>
> g_tune_pme also finds the optimal number of PME cores if the cores
> are distributed on multiple nodes. Simply pass the total number of
> cores to the -np option. Depending on the MPI and queue environment
> that you use, the distribution of the cores over the nodes may have
> to be specified in a hostfile / machinefile. Check g_tune_pme -h
> on how to set that.
>
> Best,
>   Carsten
>
>
> On Aug 28, 2012, at 8:33 PM, Chandan Choudhury  wrote:
>
> > Dear gmx users,
> >
> > I am using GROMACS 4.5.5.
> >
> > I was trying to use g_tune_pme for a simulation. I intend to execute
> > mdrun on multiple nodes with 12 cores each. Therefore, I would like to
> > optimize the number of PME nodes. I could execute g_tune_pme -np 12
> > md.tpr, but this will only find the optimal number of PME nodes for a
> > single-node run. How do I find the optimal number of PME nodes for
> > runs across multiple nodes?
> >
> > Any suggestion would be helpful.
> >
> > Chandan
> >
> > --
> > Chandan kumar Choudhury
> > NCL, Pune
> > INDIA
>
>
> --
> Dr. Carsten Kutzner
> Max Planck Institute for Biophysical Chemistry
> Theoretical and Computational Biophysics
> Am Fassberg 11, 37077 Goettingen, Germany
> Tel. +49-551-2012313, Fax: +49-551-2012302
> http://www3.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne
>


Re: [gmx-users] g_tune_pme for multiple nodes

2012-09-03 Thread Carsten Kutzner
Hi Chandan,

g_tune_pme also finds the optimal number of PME cores if the cores
are distributed on multiple nodes. Simply pass the total number of
cores to the -np option. Depending on the MPI and queue environment
that you use, the distribution of the cores over the nodes may have
to be specified in a hostfile / machinefile. Check g_tune_pme -h
on how to set that.

Best,
  Carsten
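Carsten's hostfile/machinefile hint can be sketched for Open MPI as follows (a minimal sketch: the node names are hypothetical, and the hostfile syntax and flags differ between MPI implementations; g_tune_pme builds its benchmark commands from the MPIRUN and MDRUN environment variables):

```shell
# Hostfile describing two 12-core nodes (node names are hypothetical)
cat > hostfile <<'EOF'
node01 slots=12
node02 slots=12
EOF

# Bake the hostfile into the launcher command that g_tune_pme
# will use for each trial mdrun run.
export MPIRUN="mpirun --hostfile hostfile"
export MDRUN=mdrun_mpi

# -np is the total core count over both nodes; g_tune_pme then
# varies the number of separate PME nodes between 2 and np/2.
g_tune_pme -np 24 -s topol.tpr
```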


On Aug 28, 2012, at 8:33 PM, Chandan Choudhury  wrote:

> Dear gmx users,
> 
> I am using GROMACS 4.5.5.
> 
> I was trying to use g_tune_pme for a simulation. I intend to execute
> mdrun on multiple nodes with 12 cores each. Therefore, I would like to
> optimize the number of PME nodes. I could execute g_tune_pme -np 12
> md.tpr, but this will only find the optimal number of PME nodes for a
> single-node run. How do I find the optimal number of PME nodes for
> runs across multiple nodes?
> 
> Any suggestion would be helpful.
> 
> Chandan
> 
> --
> Chandan kumar Choudhury
> NCL, Pune
> INDIA


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www3.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne



[gmx-users] g_tune_pme for multiple nodes

2012-08-28 Thread Chandan Choudhury
Dear gmx users,

I am using GROMACS 4.5.5.

I was trying to use g_tune_pme for a simulation. I intend to execute
mdrun on multiple nodes with 12 cores each. Therefore, I would like to
optimize the number of PME nodes. I could execute g_tune_pme -np 12
md.tpr, but this will only find the optimal number of PME nodes for a
single-node run. How do I find the optimal number of PME nodes for
runs across multiple nodes?

Any suggestion would be helpful.

Chandan

--
Chandan kumar Choudhury
NCL, Pune
INDIA