[OMPI devel] Create a new component (for btl)

2007-10-11 Thread Torje Henriksen

Hi,

I would like to make my own btl component for shared memory, and use that 
instead of the sm component.


First off I would just copy the current sm component, give it another name 
and see if I can get that to load instead.


Is there an elegant way of adding components, any documentation on this?

I've tried to grep for mca_btl_sm etc., but the number of files returned is 
daunting. Do I have to make changes all around ompi, or is there something 
I'm missing? Some automagical goodness, maybe? ;)



Thanks,

-Torje


Re: [OMPI devel] Create a new component (for btl)

2007-10-11 Thread Aurelien Bouteiller
The elegant way is to go the way you are going. Basically, you need to
provide open and close to the MCA framework, init and finalize to the
BTL framework, and populate all the functions defined in btl.h and
btl/base/base.h. Copying an existing btl is the best way not to forget
anything in the process. You also need to change the names in
Makefile.am. Then autogen.sh will do all the smart things to recognize
and configure your new component. It's that simple :]
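
To make those moving parts concrete, here is a rough sketch of the
skeleton such a component exports. The types are simplified stand-ins
for the real ones in btl.h and the MCA headers (the real structs carry
version and metadata fields not shown here), and "mybtl" is a made-up
component name:

/* Hedged skeleton of a new "mybtl" component.  The struct below is a
 * simplified stand-in for the real component type in ompi/mca/btl/btl.h;
 * only the open/close + init split is the point. */
#include <stdbool.h>
#include <stddef.h>

typedef struct mca_btl_base_module_t mca_btl_base_module_t;

/* MCA framework hooks: called when the component is opened/closed */
static int mca_btl_mybtl_component_open(void)  { return 0; }
static int mca_btl_mybtl_component_close(void) { return 0; }

/* BTL framework hook: create (zero or more) modules for this run */
static mca_btl_base_module_t **
mca_btl_mybtl_component_init(int *num_btls,
                             bool enable_progress_threads,
                             bool enable_mpi_threads)
{
    *num_btls = 0;   /* a real sm-like component would set up its modules here */
    return NULL;
}

/* The well-known component symbol: its name must embed the component
 * name, per the naming rules discussed later in this thread. */
struct mybtl_component_stand_in {
    int (*open)(void);
    int (*close)(void);
    mca_btl_base_module_t **(*init)(int *, bool, bool);
} mca_btl_mybtl_component = {
    mca_btl_mybtl_component_open,
    mca_btl_mybtl_component_close,
    mca_btl_mybtl_component_init,
};

Renaming every "sm" to the new component name is what keeps that
well-known symbol unique across components.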


Aurelien
--
Aurelien Bouteiller, PhD
Innovative Computing Laboratory - MPI group
+1 865 974 6321
1122 Volunteer Boulevard
Claxton Education Building Suite 350
Knoxville, TN 37996






Re: [OMPI devel] Create a new component (for btl)

2007-10-11 Thread George Bosilca
You can always start from scratch using the template in the btl  
directory ...


  george.







Re: [OMPI devel] Create a new component (for btl)

2007-10-11 Thread Jeff Squyres
If you copy the sm btl, be sure to change all function names and
variables from "*btl_sm_..." to "*btl_<name>_...".  The OMPI
configure/build/run system requires that the name of the component be
the same as:


- the directory that it lives in under ompi/mca/btl
- the well-known component struct
- the filename of the DSO

That's how the infrastructure finds your component and all the  
relevant parts.
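
To see why the three names must line up, here is a toy illustration of
name-driven lookup -- not the actual MCA loader code, just the idea
that the well-known symbol and the DSO filename are both derived from
the component name:

/* Toy illustration of name-based component lookup ("mybtl" is a
 * hypothetical name).  Build with -ldl. */
#include <stdio.h>
#include <dlfcn.h>

int main(void)
{
    const char *name = "mybtl";
    char dso[256], symbol[256];

    snprintf(dso, sizeof(dso), "mca_btl_%s.so", name);
    snprintf(symbol, sizeof(symbol), "mca_btl_%s_component", name);

    void *handle = dlopen(dso, RTLD_NOW);
    if (NULL == handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }
    void *component = dlsym(handle, symbol);   /* the well-known struct */
    printf("%s -> %p\n", symbol, component);
    dlclose(handle);
    return 0;
}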


Also, since no one answered this question directly: the shared memory  
code is only in two directories:


- ompi/mca/btl/sm
- ompi/mca/common/sm

It's split between those two because it was envisioned that a coll  
component may also want to share some data between the sm btl and  
itself.  Hence, the stuff in "common" can be shared.






--
Jeff Squyres
Cisco Systems




[OMPI devel] [RFC] update to ompi_request_t

2007-10-11 Thread George Bosilca
Sorry for the duplication in devel-core. As suggested, this RFC is  
now posted on devel.


Deadline: October 19th 2007.

Short version: We need one additional field in the ompi_request_t
struct, containing a callback to be invoked when a request completes.
This callback is not intended for the PML layer, but for any other
component inside Open MPI. It will give those components event-based
progress (driven by request completion).


Long version: During the Open MPI meeting in Paris, we talked about
revamping the progress engine. It's not a complete rewrite; it's more
about performance improvement and supporting a blocking mode. In order
to reach that goal, we need to get rid of all progress functions that
are not related to the BTL/MTL. Therefore, we propose another mechanism
for progressing components inside Open MPI, based on the completion
event for requests. ROMIO and OSC can use it without problems, instead
of the progress functions they use today (we talked with Brian about
this and he agreed).


This RFC is not about the progress engine; it's about the
modifications we need in order to allow any component to have
event-based progress. It affects only the ompi_request_t structure and
adds one "if" to the critical path (minimal cost). The base request
will have one more field, containing a completion callback. This
callback, if not NULL, will be called every time the PML completes a
request. However, the PML is not allowed to add its own completion
callback (it should instead use the req_free callback, as it does
today). As stated in the "short version", the new completion callback
is intended only for non-device layers such as OSC and IO.
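
As a concrete sketch, the proposed change amounts to something like the
following (field and function names here are guesses for illustration,
not the committed interface):

/* Hedged sketch of the proposed ompi_request_t change. */
struct ompi_request_t;
typedef int (*ompi_request_complete_cb_fn_t)(struct ompi_request_t *req);

struct ompi_request_t {
    /* ... existing request fields ... */
    ompi_request_complete_cb_fn_t req_complete_cb;  /* NULL for plain PML requests */
};

/* The one "if" added to the critical path: on completion, the PML
 * fires the callback if a non-PML layer (OSC, IO, ...) registered one. */
static inline void request_completed(struct ompi_request_t *req)
{
    if (NULL != req->req_complete_cb) {
        req->req_complete_cb(req);
    }
    /* ... existing completion logic ... */
}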


  george.





Re: [OMPI devel] collective problems

2007-10-11 Thread Gleb Natapov
On Fri, Oct 05, 2007 at 09:43:44AM +0200, Jeff Squyres wrote:
> David --
> 
> Gleb and I just actively re-looked at this problem yesterday; we
> think it's related to https://svn.open-mpi.org/trac/ompi/ticket/1015.
> We previously thought this ticket was a different problem, but our
> analysis yesterday shows that it could be a real problem in the
> openib BTL or ob1 PML (kinda think it's the openib btl because it
> doesn't seem to happen on other networks, but who knows...).
> 
> Gleb is investigating.
Here is the result of the investigation. The problem is different from
the one in ticket #1015. What we have here is that one rank calls isend()
of a small message and wait_all() in a loop, while the other calls irecv().
The problem is that isend() usually doesn't call opal_progress() anywhere,
and wait_all() doesn't call progress if all requests are already completed,
so messages are never progressed. We may force opal_progress() to be called
by setting btl_openib_free_list_max to 1000; then wait_all() will call
progress, because not every request will be immediately completed by OB1. Or
we can limit the number of uncompleted requests that OB1 can allocate by
setting pml_ob1_free_list_max to 1000; then opal_progress() will be called
from free_list_wait() when the max is reached. The second option works much
faster for me.
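
For reference, a minimal sketch of the problematic pattern (my own
illustration, not David's attached reproducer) looks like this; the
second workaround would be applied as, e.g.,
"mpirun --mca btl self,openib --mca pml_ob1_free_list_max 1000 ...":

/* Hedged sketch of the pattern described above.  Rank 0 isend()s a
 * small message and wait_all()s it in a tight loop; if every request
 * completes immediately, neither call ever enters opal_progress(). */
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, i, buf = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    for (i = 0; i < 100000; i++) {
        MPI_Request req = MPI_REQUEST_NULL;
        if (0 == rank) {
            MPI_Isend(&buf, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
        } else if (1 == rank) {
            MPI_Irecv(&buf, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);
        }
        MPI_Waitall(1, &req, MPI_STATUSES_IGNORE);  /* may return without progressing */
    }

    MPI_Finalize();
    return 0;
}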

> 
> 
> 
> On Oct 5, 2007, at 12:59 AM, David Daniel wrote:
> 
> > Hi Folks,
> >
> > I have been seeing some nasty behaviour in collectives,  
> > particularly bcast and reduce.  Attached is a reproducer (for bcast).
> >
> > The code will rapidly slow to a crawl (usually interpreted as a  
> > hang in real applications) and sometimes gets killed with sigbus or  
> > sigterm.
> >
> > I see this with
> >
> >   openmpi-1.2.3 or openmpi-1.2.4
> >   ofed 1.2
> >   linux 2.6.19 + patches
> >   gcc (GCC) 3.4.5 20051201 (Red Hat 3.4.5-2)
> >   4 socket, dual core opterons
> >
> > run as
> >
> >   mpirun --mca btl self,openib --npernode 1 --np 4 bcast-hang
> >
> > To my now uneducated eye it looks as if the root process is rushing  
> > ahead and not progressing earlier bcasts.
> >
> > Anyone else seeing similar?  Any ideas for workarounds?
> >
> > As a point of reference, mvapich2 0.9.8 works fine.
> >
> > Thanks, David
> >
> >
> > 
> 
> 
> -- 
> Jeff Squyres
> Cisco Systems
> 

--
Gleb.


Re: [OMPI devel] RFC: delete mvapi BTL for v1.3

2007-10-11 Thread Jeff Squyres

Reminder -- this RFC expires tonight.

Speak now or forever hold your peace...


On Oct 5, 2007, at 7:46 AM, Jeff Squyres wrote:


WHAT: Remove the mvapi BTL for the v1.3 release.

WHY: None of the IB vendors want to maintain it anymore; our future
is OFED.  If someone still has mvapi IB drivers, they can use the
OMPI v1.2 series.

WHERE: svn rm ompi/mca/btl/mvapi

WHEN: Before the v1.3 release.

TIMEOUT: COB, Thurs, Oct 11, 2007

-

None of the IB vendors are interested in maintaining the "mvapi" BTL
anymore.  Indeed, none of us have updated it with any of the new/
interesting/better performance features that went into the openib BTL
over the past year (or more).  Additionally, some changes may be
coming in the OMPI infrastructure that would *require* some revamping
in the mvapi BTL -- and none of Cisco, Voltaire, or Mellanox is
willing to do it.

So we'd like to ditch the mvapi BTL starting with v1.3 and have the
official guidance be that if you have mvapi, you need to use the OMPI
v1.2 series (i.e., remove this from the SVN trunk in the Very Near
Future).

--
Jeff Squyres
Cisco Systems




--
Jeff Squyres
Cisco Systems



[OMPI devel] [RFC] change wrapper compilers from binaries to shell scripts

2007-10-11 Thread Richard Graham
What: Change the mpicc/mpicxx/mpif77/mpif90 wrapper compilers from
binaries to shell scripts.

Why: Our build environment assumes that the wrapper compilers use the
same binary format as the Open MPI libraries. In a cross-compile
environment, the MPI wrapper compilers run on the front-end, so they
need to be built for the front-end, not the back-end. Jeff has
suggested this as the simplest way to build back-end libraries and
front-end wrapper compilers.

When: within the next several weeks (for the 1.3 release)

Timeout: 10/19/2007


Rich
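
For context, a wrapper compiler does little more than splice the MPI
flags around the user's arguments and exec the real compiler -- which
is exactly why a shell script can replace a binary here. A miniature
sketch (the "/opt/openmpi" prefix and the flag set are made up, not
the real wrappers' configuration):

/* Miniature wrapper-compiler sketch, not the actual OMPI implementation. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    const char *prefix = "/opt/openmpi";   /* hypothetical install prefix */
    char inc[512], lib[512];
    snprintf(inc, sizeof(inc), "-I%s/include", prefix);
    snprintf(lib, sizeof(lib), "-L%s/lib", prefix);

    /* cc -I<prefix>/include <user args...> -L<prefix>/lib -lmpi */
    char **args = malloc((argc + 5) * sizeof(char *));
    int n = 0;
    args[n++] = "cc";
    args[n++] = inc;
    for (int i = 1; i < argc; i++) {
        args[n++] = argv[i];
    }
    args[n++] = lib;
    args[n++] = "-lmpi";
    args[n] = NULL;

    execvp(args[0], args);                 /* replaces this process */
    perror("execvp");                      /* reached only on failure */
    return 1;
}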



Re: [OMPI devel] DDT for v1.2 branch

2007-10-11 Thread Jeff Squyres

On Oct 10, 2007, at 8:11 AM, Terry Dontje wrote:

>> George has proposed to bring the DDT over from the trunk to the v1.2
>> branch before v1.2.5 in order to fix some pending bugs.
>
> What does this entail (i.e., does this affect the pml interface at all)?

George will have to answer this -- George?  (he's on pseudo-vacation
for the next 10 days)

> Also by saying "before v1.2.5" I am assuming you mean this fix is to
> be put into v1.2.5 since v1.2.4 has been released, right?

Correct.

>> I do not think that this has been tested yet, but are there any
>> knee-jerk reactions against doing this?
>
> Can this be done in a tmp branch and tested out before committing to
> the 1.2 branch?

I don't see any technical reason why not.  It could also be done on a
1.2 workspace and tested (just not committed) if the changes are as
simple as "cp trunk_workspace/ompi/datatype/* v1.2_workspace/ompi/datatype".


--
Jeff Squyres
Cisco Systems



Re: [OMPI devel] RFC: delete mvapi BTL for v1.3

2007-10-11 Thread Josh Aune
How long will the 1.2 series be maintained?

This has been giving some of our customers a bit of heartburn, but it
can also be used to help push through the OFED upgrades on the
clusters (a good thing).

Josh



Re: [OMPI devel] [RFC] change wrapper compilers from binaries to shell scripts

2007-10-11 Thread George Bosilca
I know that [with few exceptions] nobody cares about our Windows
support, but we finally have a working Open MPI software stack there,
and this approach will definitely break our "Unix like" friendliness
on Windows.

As a temporary solution, and until we can figure out how many people
use mpicc (and friends) on Windows, I suggest we keep the old wrapper
compilers around, together with the new shell scripts.


  Thanks,
george.







Re: [OMPI devel] [RFC] change wrapper compilers from binaries to shell scripts

2007-10-11 Thread Jeff Squyres

On Oct 11, 2007, at 5:17 PM, George Bosilca wrote:

> I know that [with few exceptions] nobody cares about our Windows
> support, but we finally have a working Open MPI software stack there,
> and this approach will definitely break our "Unix like" friendliness
> on Windows.
>
> As a temporary solution, and until we can figure out how many people
> use mpicc (and friends) on Windows, I suggest we keep the old wrapper
> compilers around, together with the new shell scripts.


Sounds reasonable.  It would not be [too] difficult to have the build
system do the following:

- install the binaries to mpicc.exe (and friends)
- install the shell scripts to mpicc.sh (or mpicc.pl, or whatever
suffix is appropriate for the scripting language that is used)
- make symlinks from $bindir/mpicc to $bindir/mpicc.sh (as the
default), or from $bindir/mpicc to $bindir/mpicc.exe if building on
Windows (or if explicitly asked for via a configure --with kind of
option)

Hence, everyone will see "mpicc", but the back-end technology may be
different.







--
Jeff Squyres
Cisco Systems



Re: [OMPI devel] [RFC] change wrapper compilers from binaries to shell scripts

2007-10-11 Thread George Bosilca

Sounds perfect. I'll vote for it.

  Thanks,
george.







[OMPI devel] small configure cache variable bug

2007-10-11 Thread Ralf Wildenhues
Hello Open MPI Developers,

Here's a small patch to fix a misnamed cache variable: Autoconf only
saves variables whose names contain "_cv_" to the config cache, so the
misspelled variable was never actually cached.  The next Autoconf
version will issue a warning for typos like these, which is how I
found this.

Cheers,
Ralf

* config/ompi_check_visibility.m4 (OMPI_CHECK_VISIBILITY):
Rename ompi_cv_cc_fvisibility to ompi_vc_cc_fvisibility, so
that it will be cached.

Index: config/ompi_check_visibility.m4
===
--- config/ompi_check_visibility.m4 (revision 16430)
+++ config/ompi_check_visibility.m4 (working copy)
@@ -33,7 +33,7 @@
 CFLAGS="$CFLAGS_orig -fvisibility=hidden"
 add=
 AC_CACHE_CHECK([if $CC supports -fvisibility],
-[ompi_vc_cc_fvisibility],
+[ompi_cv_cc_fvisibility],
 [AC_TRY_LINK([
 #include 
 __attribute__((visibility("default"))) int foo;
@@ -42,17 +42,17 @@
 [if test -s conftest.err ; then
 $GREP -iq "visibility" conftest.err
 if test "$?" = "0" ; then
-ompi_vc_cc_fvisibility="no"
+ompi_cv_cc_fvisibility="no"
 else
-ompi_vc_cc_fvisibility="yes"
+ompi_cv_cc_fvisibility="yes"
 fi
  else
-ompi_vc_cc_fvisibility="yes"
+ompi_cv_cc_fvisibility="yes"
  fi],
-[ompi_vc_cc_fvisibility="no"])
+[ompi_cv_cc_fvisibility="no"])
 ])

-if test "$ompi_vc_cc_fvisibility" = "yes" ; then
+if test "$ompi_cv_cc_fvisibility" = "yes" ; then
 add=" -fvisibility=hidden"
 have_visibility=1
 AC_MSG_WARN([$add has been added to CFLAGS])


Re: [OMPI devel] small configure cache variable bug

2007-10-11 Thread Ralf Wildenhues
* Ralf Wildenhues wrote on Thu, Oct 11, 2007 at 11:36:53PM CEST:
> 
>   * config/ompi_check_visibility.m4 (OMPI_CHECK_VISIBILITY):
>   Rename ompi_cv_cc_fvisibility to ompi_vc_cc_fvisibility, so
>   that it will be cached.

Of course then I get it wrong myself.  I meant to write:

* config/ompi_check_visibility.m4 (OMPI_CHECK_VISIBILITY):
Rename ompi_vc_cc_fvisibility to ompi_cv_cc_fvisibility, so
that it will be cached.

Sorry about that.


Re: [OMPI devel] DDT for v1.2 branch

2007-10-11 Thread George Bosilca


On Oct 11, 2007, at 5:05 PM, Jeff Squyres wrote:

> George will have to answer this -- George?  (he's on pseudo-vacation
> for the next 10 days)

No, no, no. It's not pseudo at all :) It's as real as possible when one
still has access to email ... Anyway, I'm working out a way to enjoy
every moment of it!!!

>> I do not think that this has been tested yet, but are there any
>> knee-jerk reactions against doing this?
>>
>> Can this be done in a tmp branch and tested out before committing to
>> the 1.2 branch?
>
> I don't see any technical reason why not.  It could also be done on a
> 1.2 workspace and tested (just not committed) if the changes are as
> simple as "cp trunk_workspace/ompi/datatype/* v1.2_workspace/ompi/datatype".


Meanwhile, I came up with a patch. It includes only the fix for the
bug reported on the mailing list, the fix for MPI_Get_content on
Solaris (related to the alignment problems), and some other small
typos. The patch is attached to this email and will soon go into the
bug report (ticket 1149) and a new CMR (ticket 1165). I tested it
lightly, running all the datatype and point-to-point tests on my
laptop (Mac OS X) and my cluster (Debian AMD64), and everything runs
fine.

The sooner it gets included in the 1.2.5 release candidate, the sooner
it will get intensively tested.


  Thanks,
george.



ddt-1.2.patch
Description: Binary data






Re: [OMPI devel] small configure cache variable bug

2007-10-11 Thread George Bosilca

Thanks Ralf.

Patch pushed to the trunk (commit 16435).

  george.







Re: [OMPI devel] RFC: delete mvapi BTL for v1.3

2007-10-11 Thread Jeff Squyres (jsquyres)
Josh and I are talking off-list, with specific regard to his
customers, before I delete the mvapi btl.

-jms
Sent from my PDA
