[OMPI devel] inquiry about mpirun

2010-04-06 Thread luyang dong
Dear teachers:
    In every MPI implementation there is a command named mpirun, and
correspondingly a source file called mpirun.c (at least in LAM/MPI), but I
cannot find this file in Open MPI. Can you tell me how this command is
produced in Open MPI?
  Thanks a lot


  

Re: [OMPI devel] compile openmpi error on debian

2010-04-06 Thread Matthias Jurenz
Hi Yaohui,

can you tell me the version of your gcc and g++ compiler?
It seems to me that your g++ compiler is older (<4.2) than your gcc compiler. 
If that's true, then we have to enhance the VT configure, so that the 
availability of '-fopenmp' for g++ will be tested.
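For reference, a quick way to check this on the build machine (just a sketch,
roughly what such a configure probe would do; the temporary file name is
arbitrary):

  gcc --version | head -1
  g++ --version | head -1
  # probe whether g++ accepts -fopenmp
  echo 'int main() { return 0; }' > conftest.cc
  g++ -fopenmp conftest.cc -o conftest && echo "g++ accepts -fopenmp"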

Matthias

On Monday 05 April 2010 03:33:06 hu yaohui wrote:
> Thank you very much!
> 
> Yaohui!
> 
> On Sat, Apr 3, 2010 at 7:09 PM, Jeff Squyres  wrote:
> > Can you try Open MPI v1.4.1?
> >
> > If that doesn't work, you can disable building VampirTrace; it's an
> > optional component of Open MPI with the following configure argument:
> >
> >    ./configure --enable-contrib-no-build=vt ...
> >
> > On Apr 3, 2010, at 1:22 AM, hu yaohui wrote:
> > > Hi all,
> > > When I make Open MPI on Debian, I get the following error and I don't
> > > know why:
> > >
> > > Making all in vtfilter
> > > make[6]: Entering directory `/root/openmpi-1.4-ht/ompi/contrib/vt/vt/tools/vtfilter'
> > > g++ -DHAVE_CONFIG_H -I. -I../.. -I../../extlib/otf/otflib
> > >   -I../../extlib/otf/otflib -I../../vtlib/ -I../../vtlib -D_GNU_SOURCE
> > >   -fopenmp -DVT_OMP -g -finline-functions -pthread -MT vtfilter-vt_filter.o
> > >   -MD -MP -MF .deps/vtfilter-vt_filter.Tpo -c -o vtfilter-vt_filter.o `test
> > >   -f 'vt_filter.cc' || echo './'`vt_filter.cc
> > > cc1plus: error: unrecognized command line option "-fopenmp"
> > > make[6]: *** [vtfilter-vt_filter.o] Error 1
> > > make[6]: Leaving directory `/root/openmpi-1.4-ht/ompi/contrib/vt/vt/tools/vtfilter'
> > > make[5]: *** [all-recursive] Error 1
> > > make[5]: Leaving directory `/root/openmpi-1.4-ht/ompi/contrib/vt/vt/tools'
> > > make[4]: *** [all-recursive] Error 1
> > > make[4]: Leaving directory `/root/openmpi-1.4-ht/ompi/contrib/vt/vt'
> > > make[3]: *** [all] Error 2
> > > make[3]: Leaving directory `/root/openmpi-1.4-ht/ompi/contrib/vt/vt'
> > > make[2]: *** [all-recursive] Error 1
> > > make[2]: Leaving directory `/root/openmpi-1.4-ht/ompi/contrib/vt'
> > > make[1]: *** [all-recursive] Error 1
> > > make[1]: Leaving directory `/root/openmpi-1.4-ht/ompi'
> > > make: *** [all-recursive] Error 1
> > >
> > > Someone told me it is a gcc version problem, but my gcc is the latest one.
> > > Does anyone know why I am hitting this problem?
> > >
> > > Thanks & Regards
> > > Yaohui Hu
> >
> > --
> > Jeff Squyres
> > jsquy...@cisco.com
> > For corporate legal information go to:
> > http://www.cisco.com/web/about/doing_business/legal/cri/
> >
> >
> 


Re: [OMPI devel] inquiry about mpirun

2010-04-06 Thread N.M. Maclaren

On Apr 6 2010, luyang dong wrote:


> Regardless of any MPI implementation, there is always a command named
> mpirun, and correspondingly a source file called mpirun.c (at least in
> LAM/MPI), but I cannot find this file in Open MPI. Can you tell me how
> to produce this command in Open MPI?


Er, no.  There are some that I have used that do not have such a
command at all, and some where it is a script in some shell language,
Python or Perl.  I believe that OpenMPI usually makes it a symbolic link
to some other command (orterun or mpiexec), and so do some others.

It's trivial to write a simple wrapper for mpiexec for your own use and
call it mpirun.  Or just create a symbolic link.
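For example, something like this (just a sketch; $HOME/bin stands in for any
directory on your PATH):

  # symlink variant
  ln -s "$(which mpiexec)" "$HOME/bin/mpirun"

  # or a one-line wrapper script
  printf '#!/bin/sh\nexec mpiexec "$@"\n' > "$HOME/bin/mpirun"
  chmod +x "$HOME/bin/mpirun"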

Someone else has indicated that OpenMPI intends to set up such a command,
but I am not commenting on that aspect.


Regards,
Nick Maclaren.





Re: [OMPI devel] inquiry about mpirun

2010-04-06 Thread Terry Dontje

N.M. Maclaren wrote:

> On Apr 6 2010, luyang dong wrote:
>
>> Regardless of any MPI implementation, there is always a command named
>> mpirun, and correspondingly a source file called mpirun.c (at least in
>> LAM/MPI), but I cannot find this file in Open MPI. Can you tell me how
>> to produce this command in Open MPI?
>
> Er, no.  There are some that I have used that do not have such a
> command at all, and some where it is a script in some shell language,
> Python or Perl.  I believe that OpenMPI usually makes it a symbolic link
> to some other command (orterun or mpiexec), and so do some others.

I believe mpiexec is actually the command specified in the MPI spec, and it 
can be a link.  However, the command in OMPI that ends up doing the real 
business is orterun, and its source is orte/tools/orterun/orterun.c.
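For what it's worth, you can see this on an installed copy; the output below
is only illustrative, but on a typical install both commands resolve to
orterun:

  ls -l "$(which mpirun)" "$(which mpiexec)"
  #   .../bin/mpirun  -> orterun
  #   .../bin/mpiexec -> orterun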


--td




--
Terry D. Dontje | Principal Software Engineer
Developer Tools Engineering | +1.650.633.7054
Oracle - Performance Technologies
95 Network Drive, Burlington, MA 01803
Email terry.don...@oracle.com



[OMPI devel] kernel 2.6.23 vs 2.6.24 - communication/wait times

2010-04-06 Thread Oliver Geisler
Hello Devel-List,

I am a bit at a loss about this matter. I already posted on the users
list; in case you don't read that list, I am posting here as well.

This is the original posting:

http://www.open-mpi.org/community/lists/users/2010/03/12474.php

Short version:
Switching from kernel 2.6.23 to 2.6.24 (and up), using openmpi 1.2.7-rc2
(outdated, I know, but it is what Debian stable ships; the results are the
same with 1.4.1), increases communication times between processes
(essentially between one master and several slave processes). This happens
regardless of whether the processes are local only or communicate over
ethernet.

Has anybody witnessed such behavior?

Any ideas what I should test for?

What additional information should I provide?

Thanks for your time

oli




Re: [OMPI devel] kernel 2.6.23 vs 2.6.24 - communication/wait times

2010-04-06 Thread Rainer Keller
Hello Oliver,
Hmm, this is really a teaser...
I haven't seen such drastic behavior, and haven't read of any on the list.

One thing, however, that might interfere is process binding.
Could you make sure that processes are not bound to cores (default in 1.4.1),
e.g. with mpirun --bind-to-none?
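A minimal invocation sketch (the executable name is just a placeholder):

  mpirun --bind-to-none -np 4 ./your_app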

Just an idea...

Regards,
Rainer


-- 

Rainer Keller, PhD  Tel: +1 (865) 241-6293
Oak Ridge National Lab  Fax: +1 (865) 241-4811
PO Box 2008 MS 6164   Email: kel...@ornl.gov
Oak Ridge, TN 37831-2008AIM/Skype: rusraink



Re: [OMPI devel] kernel 2.6.23 vs 2.6.24 - communication/wait times

2010-04-06 Thread Oliver Geisler
On 4/6/2010 10:11 AM, Rainer Keller wrote:
> Hello Oliver,
> Hmm, this is really a teaser...
> I haven't seen such a drastic behavior, and haven't read of any on the list.
> 
> One thing however, that might interfere is process binding.
> Could You make sure, that processes are not bound to cores (default in 1.4.1):
> with mpirun --bind-to-none 
> 

I have tried version 1.4.1, using the default settings, and watched processes
switching from core to core in "top" (with "f" + "j"). Then I tried
--bind-to-core and explicitly --bind-to-none, all with the same result:
~20% CPU wait and much longer overall computation times.

Thanks for the idea ...
Every input is helpful.

Oli







Re: [OMPI devel] [OMPI users] kernel 2.6.23 vs 2.6.24 - communication/wait times

2010-04-06 Thread Oliver Geisler
On 4/1/2010 12:49 PM, Rainer Keller wrote:

> On Thursday 01 April 2010 12:16:25 pm Oliver Geisler wrote:
>> Does anyone know a benchmark program, I could use for testing?
> There's an abundance of benchmarks (IMB, netpipe, SkaMPI...) and performance 
> analysis tools (Scalasca, Vampir, Paraver, Opt, Jumpshot).
> 

I used SkaMPI to test communication; the third column, showing the
communication time, is the most important. Same effect: kernels below
2.6.24 show communication that is faster by a factor of thousands, while
higher kernel versions show slow communication.

Hm. The issue does not seem to be linked to the application. The kernel
configuration was carried forward from the working kernel 2.6.18 through to
2.6.33, mostly using defaults for new features.

Any ideas what to look for? What other tests could I make to give you
guys more information?

Thanks so far,

oli


Tested on Intel Core2 Duo with openmpi 1.4.1

"skampi_coll"-test

kernel 2.6.18.6:
# begin result "MPI_Bcast-length"
count=      1        4      1.0    0.0   16      0.1      1.0
count=      2        8      1.0    0.0    8      0.0      1.0
count=      3       12      1.0    0.0   16      0.0      1.0
count=      4       16      1.3    0.1   32      0.0      1.3
count=      6       24      1.0    0.0    8      0.2      1.0
count=      8       32      1.0    0.0   32      0.1      1.0
{...}
count= 370728  1482912   1023.8   42.3    8   1023.8   1023.1
count= 524288  2097152   1440.3    3.7    8   1440.3   1439.5
# end result "MPI_Bcast-length"
# duration = 0.09 sec

kernel 2.6.33.1:
# begin result "MPI_Bcast-length"
count=      1        4     1786.5   131.2   34     1095.3     1786.5
count=      2        8     1504.9    77.1   34      759.3     1504.9
count=      3       12     1852.4   139.2   35     1027.9     1852.4
count=      4       16     2430.5   152.0   38     1200.5     2430.5
count=      6       24     1898.7    69.5   35      807.6     1898.7
count=      8       32     1769.1    16.3   34      763.3     1769.1
{...}
count= 370728  1482912   216145.9  3011.6   29   216145.9   214898.1
count= 524288  2097152   274813.7  1519.5   12   274813.7   274087.4
# end result "MPI_Bcast-length"
# duration = 140.64 sec




Re: [OMPI devel] kernel 2.6.23 vs 2.6.24 - communication/wait times

2010-04-06 Thread Jeff Squyres
Sorry for the delay -- I just replied on the user list -- I think the first 
thing to do is to establish baseline networking performance and see if that is 
out of whack.  If the underlying network is bad, then MPI performance will also 
be bad.




-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/




Re: [OMPI devel] kernel 2.6.23 vs 2.6.24 - communication/wait times

2010-04-06 Thread Oliver Geisler
On 4/6/2010 2:54 PM, Jeff Squyres wrote:
> Sorry for the delay -- I just replied on the user list -- I think the first 
> thing to do is to establish baseline networking performance and see if that 
> is out of whack.  If the underlying network is bad, then MPI performance will 
> also be bad.
> 

That could make sense. With kernel 2.6.24 it seems a major change in the
kernel modules for Intel PCI Express network cards was introduced.
Does Open MPI use TCP communication even if all processes are on the
same local node?







Re: [OMPI devel] kernel 2.6.23 vs 2.6.24 - communication/wait times

2010-04-06 Thread Jeff Squyres
On Apr 6, 2010, at 4:29 PM, Oliver Geisler wrote:

> > Sorry for the delay -- I just replied on the user list -- I think the first 
> > thing to do is to establish baseline networking performance and see if that 
> > is out of whack.  If the underlying network is bad, then MPI performance 
> > will also be bad.
> 
> Could make sense. With kernel 2.6.24 it seems a major change in the
> modules for Intel PCI-Express network cards was introduced.
> Does openmpi use TCP communication, even if all processes are on the
> same local node?

It depends.  :-)

The "--mca btl sm,self,tcp" option to mpirun tells Open MPI to use shared 
memory, tcp, and process-loopback for MPI point-to-point communications.  Open 
MPI computes a reachability / priority map and uses the highest priority plugin 
that is reachable for each peer MPI process.

Meaning that on each node, if you allow "sm" to be used, "sm" should be used 
for all on-node communications.  If you had only said "--mca btl tcp,self", then 
you're only allowing Open MPI to use TCP for all non-self MPI point-to-point 
communications.

The default -- if you don't specify "--mca btl ..." at all -- is for Open MPI 
to figure it out automatically and use whatever networks it can find.  In your 
case, I'm guessing that it's pretty much identical to specifying "--mca btl 
tcp,sm,self".

Another good raw TCP performance program that network wonks are familiar with 
is netperf.  NetPipe is nice because it allows an apples-to-apples comparison 
of TCP and MPI (i.e., it's the same benchmark app that can use either TCP or 
MPI [or several other] transports underneath).  But netperf might be a bit more 
familiar to those outside the HPC community.
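For an apples-to-apples run, roughly (assuming NetPIPE was built with both the
TCP and MPI front-ends; the host names are placeholders):

  # raw TCP: start the receiver on nodeA, then the transmitter on nodeB
  ./NPtcp                  # on nodeA
  ./NPtcp -h nodeA         # on nodeB

  # the same benchmark over MPI
  mpirun -np 2 --host nodeA,nodeB ./NPmpi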

-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/




Re: [OMPI devel] kernel 2.6.23 vs 2.6.24 - communication/wait times

2010-04-06 Thread Oliver Geisler
On 4/6/2010 2:54 PM, Jeff Squyres wrote:
> Sorry for the delay -- I just replied on the user list -- I think the first 
> thing to do is to establish baseline networking performance and see if that 
> is out of whack.  If the underlying network is bad, then MPI performance will 
> also be bad.
> 
> 

Using NetPIPE and comparing TCP and MPI communication, I get the
following results:

TCP is much faster than MPI, approximately by a factor of 12.
E.g. a packet of 4096 bytes is delivered in
97.11 usec with NPtcp and
15338.98 usec with NPmpi;
or, for a packet size of 262 kB:
0.05268801 sec NPtcp
0.00254560 sec NPmpi

Further our benchmark started with "--mca btl tcp,self" runs with short
communication times, even using kernel 2.6.33.1

Is there a way to see what type of communication is actually selected?

Can anybody imagine why shared memory leads to these problems?
Kernel configuration?


Thanks, Jeff, for insisting upon testing network performance.
Thanks all others, too ;-)

oli







Re: [OMPI devel] kernel 2.6.23 vs 2.6.24 - communication/wait times

2010-04-06 Thread Jeff Squyres
On Apr 6, 2010, at 6:04 PM, Oliver Geisler wrote:

> Using netpipe and comparing tcp and mpi communication I get the
> following results:
> 
> TCP is much faster than MPI, approx. by factor 12
> e.g a packet size of 4096 bytes deliveres in
> 97.11 usec with NPtcp and
> 15338.98 usec with NPmpi
> or
> packet size 262kb
> 0.05268801 sec NPtcp
> 0.00254560 sec NPmpi

Well that's not good (for us).  :-\

> Further our benchmark started with "--mca btl tcp,self" runs with short
> communication times, even using kernel 2.6.33.1

I'm not sure what this statement means (^^).  Can you explain?

> Is there a way to see what type of communication is actually selected?

If you "--mca btl tcp,self" is used, then TCP sockets are used for non-self 
communications (i.e., communications with peer MPI processes, regardless of 
location).
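One hedged way to check what was actually selected (assuming the usual MCA
verbosity knobs are present in your build):

  # list the BTL components this installation knows about
  ompi_info | grep btl

  # log BTL selection while running (the verbosity level is somewhat arbitrary)
  mpirun --mca btl tcp,self --mca btl_base_verbose 30 -np 2 ./NPmpi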

> Can anybody imagine why shared memory leads to these problems?

I'm not sure I understand this -- with "--mca btl tcp,self", shared memory is not 
used...?

Re-reading your email, I'm wondering: did you run the NPmpi process with 
"--mca btl tcp,sm,self" (or no --mca btl param)?  That might explain some of my 
confusion above.

-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/




Re: [OMPI devel] adding ping-pong test to examples directory?

2010-04-06 Thread Jeff Squyres
I see your point, but the cons sink the idea for me.

How about a compromise -- write up a scripty-foo to automatically download and 
build some of the more common benchmarks?  This still makes it a trivial 
exercise for the user, but it avoids us needing to bundle already-popular 
benchmarks in OMPI (plus, they release at different schedules than us).

For extra bonus points, you could make the scripty-foo be a dumb client that 
downloads an XML file from www.open-mpi.org that indicates where it should 
*really* download and build a given benchmark from.  This would allow us to 
"release" new benchmarks independent of Open MPI releases (e.g., if NetPIPE 
releases a new version, we can just update the XML file on www.open-mpi.org).
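Roughly this kind of thing -- a sketch only; the index URL and its
whitespace-separated "name url" format are made up for illustration:

  #!/bin/sh
  # fetch-benchmarks.sh: download and build benchmarks listed in a server-side index
  INDEX_URL="http://www.open-mpi.org/benchmarks/index.txt"   # hypothetical index file
  wget -q -O - "$INDEX_URL" | while read name url; do
      wget -q -O "$name.tar.gz" "$url"
      mkdir -p "$name"
      tar -xzf "$name.tar.gz" -C "$name" --strip-components=1
      ( cd "$name" && ./configure && make )   # assumes autoconf-style benchmarks
  done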


On Apr 1, 2010, at 7:14 PM, Eugene Loh wrote:

> I wanted to know what folks thought about adding a ping-pong performance
> test to the examples directory?
> 
> Pros:  This would facilitate performance sanity testing by OMPI users --
> particularly MPI neophytes.  It would add something to the examples
> directory with a performance orientation.  It would give us
> (de...@open-mpi.org) a known quantity when troubleshooting performance with
> users.  It could conceivably raise OMPI visibility in the MPI world.  It
> could be a stepping stone to developing a more complete set of MPI
> performance sanity tests with time.
> 
> Cons:  There are already many performance tests.  We shouldn't be
> replicating what others do, but should be leveraging what they do. 
> Other existing tests are relatively easy to use and already familiar to
> many users.


-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/