On Thu, May 17, 2007 at 10:20:35AM -0600, Brian Barrett wrote:
> On the other hand, the MPI standard explicitly says you're not
> allowed to call fork() or system() during an MPI application, and
> even if the network really should cope with this in some way, I'm
> strongly against the change if it further complicates the code *at
> all*.  Especially since it won't really solve the problem.  For
> example, with one-sided, I'm not going to go out of my way to send
> the first and last bits of the buffer so that the user can touch
> those pages when calling fork().
> 
> Also, if I understand the leave_pinned protocol, this still won't
> really solve anything for the general case -- leave_pinned won't send
> any data eagerly if the buffer is already pinned, so there are still
> going to be situations where the user can cause problems.  Now we
> have a situation where sometimes it works and sometimes it doesn't,
> and we pretend to support fork()/system() in certain cases.  Seems
> like actually fixing the problem the "right way" would be the right
> path forward...

This will not solve all the problems; it just slightly decreases the
chance of a program getting SIGSEGV. We are not going to pretend that we
support fork() or system(). Obviously this change will not help the
one-sided, leave_pinned, or leave_pinned_pipeline cases. Regarding the
"complicating the code" issue: I am working on solving a deadlock in the
pipeline protocol, and for that I need the capability to send any part
of a message by copy-in/out. The change I propose will be trivial to do
on top of that. The code will become more complex because of the
deadlock issue, not because of the change we are discussing now :)
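
To illustrate the page arithmetic, here is a toy sketch (made-up
addresses and page size, not actual OMPI code; the page-aligned fragment
boundaries are my assumption about how the pipeline would pick them). If
the head of the message up to the first page boundary goes out with the
rendezvous packet, and the tail from the last page boundary down is sent
by copy-in/out, only whole pages strictly inside the buffer ever get
registered:

#include <stdint.h>
#include <stdio.h>

#define PAGE 4096UL   /* HW page size == registration granularity */

int main(void)
{
    uintptr_t buf = 0x601234;   /* made-up, unaligned user buffer */
    size_t    len = 3 * PAGE;   /* message length */

    /* Pin only the whole pages fully contained in the buffer. */
    uintptr_t pin_lo = (buf + PAGE - 1) & ~(uintptr_t)(PAGE - 1); /* up   */
    uintptr_t pin_hi = (buf + len) & ~(uintptr_t)(PAGE - 1);      /* down */

    printf("head, sent by copy  : [%#lx, %#lx)\n",
           (unsigned long)buf, (unsigned long)pin_lo);
    printf("pinned, sent by RDMA: [%#lx, %#lx)\n",
           (unsigned long)pin_lo, (unsigned long)pin_hi);
    printf("tail, sent by copy  : [%#lx, %#lx)\n",
           (unsigned long)pin_hi, (unsigned long)(buf + len));
    return 0;
}

Everything outside [pin_lo, pin_hi) stays unregistered, so other data
that shares the buffer's first or last page remains safe to touch in a
forked child.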

> 
> Brian
> 
> On May 17, 2007, at 10:10 AM, Jeff Squyres wrote:
> 
> > Moving to devel; this question seems worthwhile to push out to the
> > general development community.
> >
> > I've been coming across an increasing number of customers and other
> > random OMPI users who use system().  So if there's zero impact on
> > performance and it doesn't make the code [more] incredibly horrible
> > [than it already is], I'm in favor of this change.
> >
> >
> >
> > On May 17, 2007, at 7:00 AM, Gleb Natapov wrote:
> >
> >> Hi,
> >>
> >>  I have thought about changing the pipeline protocol to send data
> >> from the end of the message instead of from the middle, as it does
> >> now. The rationale behind this is better fork() support. When an
> >> application forks, the child doesn't inherit registered memory, so
> >> IB providers educate users not to touch, in the child process,
> >> buffers that were owned by the MPI library before the fork. The
> >> problem is that the granularity of registration is a HW page (4K),
> >> so the last page of a buffer may also contain other application
> >> data; the user may be unaware of this and be very surprised by a
> >> SIGSEGV. If the pipeline protocol sends data from the end of a
> >> buffer, then the last page of the buffer will not be registered
> >> (and the first page is never registered, because we send the
> >> beginning of the buffer eagerly with the rendezvous packet), so
> >> this situation is avoided. It should have zero impact on
> >> performance. What do you think? How common is it for MPI
> >> applications to fork()?
> >>
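To see concretely why the child gets SIGSEGV: on Linux, registered
regions are typically marked with madvise(MADV_DONTFORK) (e.g. when
ibv_fork_init() is in use), so after fork() those pages simply do not
exist in the child. A toy demonstration, not OMPI code; the mmap'ed page
stands in for a page that got registered under the user's buffer:

#define _GNU_SOURCE
#include <signal.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    long page = sysconf(_SC_PAGESIZE);

    /* Stand-in for a page the IB stack registered behind the user's back. */
    char *p = mmap(NULL, (size_t)page, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }
    p[0] = 'x';

    /* Roughly what memory registration does to the page. */
    if (madvise(p, (size_t)page, MADV_DONTFORK) != 0) {
        perror("madvise");
        return 1;
    }

    pid_t pid = fork();
    if (pid == 0) {
        /* Child: the page was not inherited; this access is fatal. */
        printf("child sees '%c'\n", p[0]);
        _exit(0);
    }

    int status;
    waitpid(pid, &status, 0);
    if (WIFSIGNALED(status))
        printf("child died with signal %d (SIGSEGV = %d)\n",
               WTERMSIG(status), SIGSEGV);
    return 0;
}

Any other data that happens to live on that same page becomes equally
untouchable in the child, which is exactly the surprise described above.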
> >> --
> >>                    Gleb.
> >
> >
> > -- 
> > Jeff Squyres
> > Cisco Systems
> >
> 

--
                        Gleb.
