Hi Chris

Please go ahead and open a PR for master and I'll open corresponding ones
for the release branches.

Howard

Christoph Niethammer <nietham...@hlrs.de> wrote on Thu, 22 June 2017 at
01:10:

> Hi Howard,
>
> Sorry, missed the new license policy. I added a Sign-off now.
> Shall I open a pull request?
>
> Best
> Christoph
>
> ----- Original Message -----
> From: "Howard Pritchard" <hpprit...@gmail.com>
> To: "Open MPI Developers" <devel@lists.open-mpi.org>
> Sent: Wednesday, June 21, 2017 5:57:05 PM
> Subject: Re: [OMPI devel] orte-clean not cleaning left over temporary I/O
> files in /tmp
>
> Hi Chris,
>
> Sorry for being a bit picky, but could you add a sign-off to the commit
> message?
> I'm not supposed to add it manually for you.
>
> Thanks,
>
> Howard
>
>
> 2017-06-21 9:45 GMT-06:00 Howard Pritchard <hpprit...@gmail.com>:
>
>
>
> Hi Chris,
>
> Thanks very much for the patch!
>
> Howard
>
>
> 2017-06-21 9:43 GMT-06:00 Christoph Niethammer <nietham...@hlrs.de>:
>
>
> Hello Ralph,
>
> Thanks for the update on this issue.
>
> I used the latest master (c38866eb3929339147259a3a46c6fc815720afdb).
>
> The behaviour is still the same: aborting before MPI_File_close leaves
> /tmp/OMPI_*.sm files.
> These are not removed by your updated orte-clean.
>
> I have now tracked down the origin of these files: it appears to be
> ompi/mca/sharedfp/sm/sharedfp_sm_file_open.c:154,
> where a leftover TODO note a few lines above also mentions the need
> for a correct directory.
>
> I would suggest updating the path there to be under the
> <ompi_process_info.job_session_dir> directory, which is cleaned by
> orte-clean; see
>
> https://github.com/cniethammer/ompi/commit/2aedf6134813299803628e7d6856a3b781542c02
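>
> For illustration, the proposed change would amount to something along
> these lines (a rough sketch only, not the exact code in
> sharedfp_sm_file_open.c; the variable names sm_filename and
> filename_basename are hypothetical):
>
>     /* Sketch: place the backing .sm file under the per-job session
>      * directory (which orte-clean removes) instead of hard-coding /tmp. */
>     char *sm_filename = NULL;
>     if (asprintf(&sm_filename, "%s/OMPI_%s.sm",
>                  ompi_process_info.job_session_dir,
>                  filename_basename) < 0) {
>         return OMPI_ERR_OUT_OF_RESOURCE;
>     }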
>
> Best
> Christoph
>
> ----- Original Message -----
> From: "Ralph Castain" < [ mailto:r...@open-mpi.org | r...@open-mpi.org ] >
> To: "Open MPI Developers" < [ mailto:devel@lists.open-mpi.org |
> devel@lists.open-mpi.org ] >
> Sent: Wednesday, June 21, 2017 4:33:29 AM
> Subject: Re: [OMPI devel] orte-clean not cleaning left over temporary I/O
> files in /tmp
>
> I updated orte-clean in master, and for v3.0, so it cleans up both
> current and legacy session directory files as well as any pmix artifacts. I
> don’t see any files named OMPI_*.sm, though that might be something from
> v2.x? I don’t recall us ever making files of that name before - anything we
> make should be under the session directory, not directly in /tmp.
>
> > On May 9, 2017, at 2:10 AM, Christoph Niethammer <nietham...@hlrs.de> wrote:
> >
> > Hi,
> >
> > I am using Open MPI 2.1.0.
> >
> > Best
> > Christoph
> >
> > ----- Original Message -----
> > From: "Ralph Castain" < [ mailto:r...@open-mpi.org | r...@open-mpi.org ] >
> > To: "Open MPI Developers" < [ mailto:devel@lists.open-mpi.org |
> devel@lists.open-mpi.org ] >
> > Sent: Monday, May 8, 2017 6:28:42 PM
> > Subject: Re: [OMPI devel] orte-clean not cleaning left over temporary
> I/O files in /tmp
> >
> > What version of OMPI are you using?
> >
> >> On May 8, 2017, at 8:56 AM, Christoph Niethammer <nietham...@hlrs.de> wrote:
> >>
> >> Hello
> >>
> >> According to the manpage "...orte-clean attempts to clean up any
> processes and files left over from Open MPI jobs that were run in the past
> as well as any currently running jobs. This includes OMPI infrastructure
> and helper commands, any processes that were spawned as part of the job,
> and any temporary files...".
> >>
> >> If I now run a program that calls MPI_File_open, MPI_File_write, and
> >> MPI_Abort() in that order, it leaves /tmp/OMPI_*.sm files behind.
> >> Running orte-clean does not remove them.
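> >>
> >> For reference, a minimal program exercising that sequence might look
> >> like this (a sketch; "testfile" is just an illustrative name on a
> >> path writable by all ranks):
> >>
> >>     /* Open a file with MPI I/O, write, then abort before
> >>      * MPI_File_close is ever reached. */
> >>     #include <mpi.h>
> >>
> >>     int main(int argc, char **argv)
> >>     {
> >>         MPI_File fh;
> >>         int data = 42;
> >>
> >>         MPI_Init(&argc, &argv);
> >>         MPI_File_open(MPI_COMM_WORLD, "testfile",
> >>                       MPI_MODE_CREATE | MPI_MODE_RDWR,
> >>                       MPI_INFO_NULL, &fh);
> >>         MPI_File_write(fh, &data, 1, MPI_INT, MPI_STATUS_IGNORE);
> >>         MPI_Abort(MPI_COMM_WORLD, 1);  /* never reaches MPI_File_close */
> >>         return 0;
> >>     }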
> >>
> >> Is this a bug or a feature?
> >>
> >> Best
> >> Christoph Niethammer
> >>
> >> --
> >>
> >> Christoph Niethammer
> >> High Performance Computing Center Stuttgart (HLRS)
> >> Nobelstrasse 19
> >> 70569 Stuttgart
> >>
> >> Tel: ++49(0)711-685-87203
> >> email: nietham...@hlrs.de
> >> http://www.hlrs.de/people/niethammer
_______________________________________________
devel mailing list
devel@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/devel
