Re: [OMPI devel] OMPI v1.10.6

2017-01-18 Thread r...@open-mpi.org
Last call for v1.10.6 changes - we still have a few pending for review, but 
none marked as critical. If you want them included, please push for a review 
_now_

Thanks
Ralph

> On Jan 12, 2017, at 1:54 PM, r...@open-mpi.org wrote:
> 
> Hi folks
> 
> It looks like we may have motivation to release 1.10.6 in the near future. 
> Please check to see if you have anything that should be included, or is 
> pending review.
> 
> Thanks
> Ralph
> 


Re: [OMPI devel] OMPI v1.10.6

2017-01-18 Thread George Bosilca
https://github.com/open-mpi/ompi/issues/2750

  George.

On Wed, Jan 18, 2017 at 12:57 PM, r...@open-mpi.org wrote:

> Last call for v1.10.6 changes - we still have a few pending for review,
> but none marked as critical. If you want them included, please push for a
> review _now_
>
> Thanks
> Ralph
>
> > On Jan 12, 2017, at 1:54 PM, r...@open-mpi.org wrote:
> >
> > Hi folks
> >
> > It looks like we may have motivation to release 1.10.6 in the near
> future. Please check to see if you have anything that should be included,
> or is pending review.
> >
> > Thanks
> > Ralph
> >
>

Re: [OMPI devel] OMPI v1.10.6

2017-01-18 Thread r...@open-mpi.org
Will someone be submitting that PR soon?

> On Jan 18, 2017, at 10:09 AM, George Bosilca wrote:
> 
> https://github.com/open-mpi/ompi/issues/2750 
> 
> 
>   George.
> 
> 
> 
> On Wed, Jan 18, 2017 at 12:57 PM, r...@open-mpi.org wrote:
> Last call for v1.10.6 changes - we still have a few pending for review, but 
> none marked as critical. If you want them included, please push for a review 
> _now_
> 
> Thanks
> Ralph
> 
> > On Jan 12, 2017, at 1:54 PM, r...@open-mpi.org wrote:
> >
> > Hi folks
> >
> > It looks like we may have motivation to release 1.10.6 in the near future. 
> > Please check to see if you have anything that should be included, or is 
> > pending review.
> >
> > Thanks
> > Ralph
> >
> 


Re: [OMPI devel] MCA Component Development: Function Pointers

2017-01-18 Thread Kawashima, Takahiro
Hi,

I created a pull request to add the persistent collective
communication request feature to Open MPI. Though it is
incomplete and will not be merged into Open MPI soon,
you can experiment with your collective algorithms on top of my work.

  https://github.com/open-mpi/ompi/pull/2758
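
As a rough illustration, a persistent collective follows the usual
persistent-request pattern. This is a sketch only: the name
MPIX_Bcast_init and its argument list are placeholders, not necessarily
what the pull request defines.

  #include <mpi.h>

  int main(int argc, char **argv)
  {
      int buf[4] = {0};
      MPI_Request req;

      MPI_Init(&argc, &argv);
      /* placeholder entry point; the real extension name may differ */
      MPIX_Bcast_init(buf, 4, MPI_INT, 0, MPI_COMM_WORLD,
                      MPI_INFO_NULL, &req);
      for (int i = 0; i < 10; i++) {
          MPI_Start(&req);                    /* start one instance */
          MPI_Wait(&req, MPI_STATUS_IGNORE);  /* complete it */
      }
      MPI_Request_free(&req);  /* persistent requests must be freed */
      MPI_Finalize();
      return 0;
  }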

Takahiro Kawashima,
MPI development team,
Fujitsu

> Bradley,
> 
> 
> Good to hear that!
> 
> 
> What Jeff meant in his previous email is that, since persistent 
> collectives are not (yet) part of the standard, user-visible functions
> (Pbcast_init, Pcoll_start, ...) should be part of an extension (e.g. 
> ompi/mpiext/pcoll) and should be named with the MPIX_ prefix
> (e.g. MPIX_Pbcast_init).
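> 
> As a sketch (prototypes illustrative only, not the actual extension 
> API), the extension's public header could declare something like:
> 
>   /* e.g. ompi/mpiext/pcoll/c/mpiext_pcoll_c.h -- illustrative only */
>   OMPI_DECLSPEC int MPIX_Pbcast_init(void *buf, int count,
>                                      MPI_Datatype datatype, int root,
>                                      MPI_Comm comm, MPI_Info info,
>                                      MPI_Request *request);
>   OMPI_DECLSPEC int MPIX_Pcoll_start(MPI_Request *request);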
> 
> 
> If you can make your source code available (e.g. GitHub, Bitbucket, 
> email, ...), we will have a better chance to review it and guide you.
> 
> 
> Cheers,
> 
> 
> Gilles
> 
> 
> On 8/1/2016 12:41 PM, Bradley Morgan wrote:
> >
> > Gilles, Nathan, Jeff, George, and the OMPI Developer Community,
> >
> > Thank you all for your kind and helpful responses.
> >
> > I have been gathering your advice and trying to put the various pieces 
> > together.
> >
> > Currently, I have managed to graft a new function MPI_LIBPNBC_Start at 
> > the MPI level with a corresponding pointer into 
> > mca->coll->libpnbc->mca_coll_libpnbc_start(), and I can get it to fire 
> > from my test code.  This required a good deal of hacking on some of the 
> > core files in trunk/ompi/mpi/c/… and trunk/ompi/mca/coll/… Not ideal, 
> > I’m sure, but for my purposes (and level of familiarity) just getting 
> > this to fire is a breakthrough.
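> >
> > A rough sketch of that kind of dispatch (the struct fields below are 
> > illustrative, not the actual OMPI names):
> >
> >   /* Sketch only: dispatch from an MPI-level entry point through a
> >      function pointer in the communicator's coll table. */
> >   int MPI_LIBPNBC_Start(MPI_Comm comm, MPI_Request *request)
> >   {
> >       ompi_communicator_t *c = (ompi_communicator_t *) comm;
> >       return c->c_coll.coll_pnbc_start(request, c,
> >                                        c->c_coll.coll_pnbc_start_module);
> >   }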
> >
> > I will delve into some of the cleaner-looking methods that you all 
> > have provided. I still need much more familiarity with the codebase, as 
> > I often find myself way out in the woods :)
> >
> > Thanks again to all of you for your help.  It is nice to find a 
> > welcoming community of developers.  I hope to be in touch soon with 
> > some more useful findings for you.
> >
> >
> > Best Regards,
> >
> > -Bradley
> >
> >
> >
> >
> >> On Jul 31, 2016, at 5:28 PM, George Bosilca wrote:
> >>
> >> Bradley,
> >>
> >> We had similar needs in one of our projects, and as a quick hack we 
> >> extended the GRequest interface to support persistent requests. There 
> >> are cleaner ways, but we decided that hijacking OMPI_REQUEST_GEN was 
> >> good enough for a proof-of-concept. Add a start member to 
> >> ompi_grequest_t in request/grequest.h, then do what Nathan suggested 
> >> by extending the switch in ompi/mpi/c/start.c (and startall), and 
> >> directly call your own start function.
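> >>
> >> In outline (member and case names approximate), the hack amounts to:
> >>
> >>   /* request/grequest.h -- add a start callback (sketch) */
> >>   struct ompi_grequest_t {
> >>       ompi_request_t greq_base;
> >>       /* ...existing members... */
> >>       int (*greq_start)(struct ompi_grequest_t *greq);  /* added */
> >>   };
> >>
> >>   /* ompi/mpi/c/start.c -- extend the dispatch (sketch) */
> >>   switch (request->req_type) {
> >>   case OMPI_REQUEST_GEN:
> >>       ret = ((ompi_grequest_t *) request)->greq_start(
> >>                 (ompi_grequest_t *) request);
> >>       break;
> >>   /* ...existing cases, e.g. OMPI_REQUEST_PML... */
> >>   }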
> >>
> >> George.
> >>
> >>
> >> On Sat, Jul 30, 2016 at 6:29 PM, Jeff Squyres (jsquyres) wrote:
> >>
> >> Also be aware of the Open MPI Extensions framework, explicitly
> >> intended for adding new/experimental APIs to mpi.h and the
> >> Fortran equivalents.  See ompi/mpiext.
> >>
> >>
> >> > On Jul 29, 2016, at 11:16 PM, Gilles Gouaillardet wrote:
> >> >
> >> > For a proof-of-concept, I'd rather suggest you add
> >> > MPI_Pcoll_start(), and add a pointer in mca_coll_base_comm_coll_t.
> >> > If you add MCA_PML_REQUEST_COLL, you have to update all PML
> >> > components (tedious); if you update start.c (quite simple), you
> >> > also need to update startall.c (less trivial).
> >> > If the future standard mandates the use of MPI_Start and
> >> > MPI_Startall, we will reconsider this.
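> >> >
> >> > Concretely, something like this (a sketch; the new field and type
> >> > names are made up):
> >> >
> >> >   /* one new function pointer (plus its module) in the
> >> >      communicator's coll table */
> >> >   struct mca_coll_base_comm_coll_t {
> >> >       /* ...existing coll function pointers and modules... */
> >> >       mca_coll_base_module_pcoll_start_fn_t coll_pcoll_start;
> >> >       mca_coll_base_module_t *coll_pcoll_start_module;
> >> >   };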
> >> >
> >> > From a performance point of view, that should not change much.
> >> > IMHO, non-blocking collectives come with a lot of overhead, so
> >> > shaving a few nanoseconds here and there is unlikely to change
> >> > the big picture.
> >> >
> >> > If I oversimplify libnbc, it basically schedules MPI_Isend,
> >> > MPI_Irecv and MPI_Wait (well, MPI_Test, since this is
> >> > non-blocking, but let's keep it simple).
> >> > My intuition is that your libpnbc will post MPI_Send_init and
> >> > MPI_Recv_init, and schedule MPI_Start and MPI_Wait.
> >> > Because of the overhead, I would only expect a marginal
> >> > performance improvement, if any.
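> >> >
> >> > In pseudo-code, the per-round difference is roughly (sketch):
> >> >
> >> >   /* libnbc-style round (sketch): new requests every time */
> >> >   MPI_Isend(sbuf, n, dt, peer, tag, comm, &sreq);
> >> >   MPI_Irecv(rbuf, n, dt, peer, tag, comm, &rreq);
> >> >   /* ...progressed later via MPI_Test... */
> >> >
> >> >   /* libpnbc-style (sketch): set up once... */
> >> >   MPI_Send_init(sbuf, n, dt, peer, tag, comm, &sreq);
> >> >   MPI_Recv_init(rbuf, n, dt, peer, tag, comm, &rreq);
> >> >   /* ...then each round only starts and completes the requests */
> >> >   MPI_Start(&sreq);  MPI_Start(&rreq);
> >> >   MPI_Wait(&sreq, MPI_STATUS_IGNORE);
> >> >   MPI_Wait(&rreq, MPI_STATUS_IGNORE);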
> >> >
> >> > Cheers,
> >> >
> >> > Gilles
> >> >
> >> > On Saturday, July 30, 2016, Bradley Morgan wrote:
> >> >
> >> > Hello Gilles,
> >> >
> >> > Thank you very much for your response.
> >> >
> >> > My understanding is yes, this might be part of the future
> >> > standard, but probably not from my work alone.  I’m currently
> >> > just trying to get a proof-of-concept and some performance metrics.
> >> >
> >> > I have item one of your list completed, but not the others.  I
> >> > will look into adding the MCA_PML_REQUEST_COLL case to
> >> > mca_pml_ob1_start.
> >> >
> >> > Would it also