I think Case #1 is only a partial solution, as it only solves the example
attached to the ticket. Based on my reading of the tool chapter, calling
MPI_T_init after MPI_Finalize is legitimate, and that case is not covered
by the patch. But this is not the major issue I have with this patch. From
a coding perspective, it makes the initialization of OPAL horribly
unnatural, requiring any other layer that uses OPAL to go through awkward
gymnastics just to tear it down correctly (setting
opal_init_util_init_extra to the right value).
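
For concreteness, this is the sequence I read the tool chapter as
permitting, and which the patch does not cover (a minimal sketch; the
standard entry points for the MPI_T interface are MPI_T_init_thread and
MPI_T_finalize):

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int provided;

        MPI_Init(&argc, &argv);
        MPI_Finalize();

        /* The tool information interface may be initialized and used
         * independently of MPI_Init/MPI_Finalize, including after
         * MPI_Finalize has been called. */
        MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);
        /* ... query control/performance variables here ... */
        MPI_T_finalize();

        return 0;
    }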

  George.



On Wed, Jul 16, 2014 at 11:29 AM, Pritchard, Howard r <howa...@lanl.gov>
wrote:

> Hi Folks,
>
> I vote for solution #1.  Doesn't change current behavior.  Doesn't open
> the door to becoming dependent on the availability of the ctor/dtor
> feature in future toolchains.
>
> Howard
>
>
> -----Original Message-----
> From: devel [mailto:devel-boun...@open-mpi.org] On Behalf Of Nathan Hjelm
> Sent: Wednesday, July 16, 2014 9:08 AM
> To: Open MPI Developers
> Subject: Re: [OMPI devel] RFC: Add an __attribute__((destructor)) function
> to opal
>
> On Wed, Jul 16, 2014 at 07:59:14AM -0700, Ralph Castain wrote:
> > I discussed this over IM with Nathan to try to get a better
> > understanding of the options. Basically, we have two approaches
> > available to us:
> >
> > 1. my solution resolves the segv problem and eliminates leaks so long
> > as the user calls MPI_Init/Finalize after calling the MPI_T
> > init/finalize functions. This method will still leak memory if the
> > user doesn't use MPI after calling the MPI_T functions, but it does
> > mean that all memory used by MPI will be released upon MPI_Finalize.
> > So if the user program continues beyond MPI, it won't carry the MPI
> > memory footprint with it. This continues our current behavior.
> >
> > 2. the destructor method, which releases the MPI memory footprint upon
> > final program termination instead of at MPI_Finalize. This also solves
> > the segv and leak problems, and ensures that someone calling only the
> > MPI_T init/finalize functions will be valgrind-clean, but it means
> > that a user program that runs beyond MPI will carry the MPI memory
> > footprint with it. This is a change in our current behavior.
>
> Correct. Though the only thing we will carry around until termination is
> the memory associated with opal/mca/if, opal/mca/event, opal_net,
> opal_malloc, opal_show_help, opal_output, opal_dss, opal_datatype, and
> opal_class. Not sure how much memory this is.
>
> -Nathan
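
For reference, the mechanism named in the RFC subject is the GCC/Clang
function attribute. A minimal self-contained sketch of how option #2
releases state at final program termination (the opal_ctor, opal_dtor, and
opal_buffer names are placeholders, not the actual patch):

    #include <stdio.h>
    #include <stdlib.h>

    static char *opal_buffer;  /* stands in for OPAL's persistent state */

    /* Runs automatically before main() is entered. */
    __attribute__((constructor))
    static void opal_ctor(void)
    {
        opal_buffer = malloc(64);
    }

    /* Runs automatically at program termination, after main() returns
     * or exit() is called, so the memory is released even if the
     * application never calls MPI_Finalize. This attribute is a
     * GCC/Clang extension, not standard C. */
    __attribute__((destructor))
    static void opal_dtor(void)
    {
        free(opal_buffer);
        opal_buffer = NULL;
    }

    int main(void)
    {
        printf("buffer at %p\n", (void *)opal_buffer);
        return 0;  /* opal_dtor runs after this */
    }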
