On May 1, 2009, at 8:25 AM, arkady kanevsky wrote:

What if we provide a script which converts all malloc/free
calls into MPI ones and moves MPI_INIT before any memory allocation?
Will these application users be willing to do the conversion?

We've been trying to educate MPI application developers for 10 years. :-)

If you think a script will help, go for it.  :-)

Sorry; I'm not trying to be snide -- this thread is getting increasingly frustrating. No, I don't think it will help for a few reasons:

- MPIs already support malloc/etc. buffers; changing that now, based on this one network stack, would be a big change (the first sketch below contrasts the two allocation styles).

- MPIs are competitive. If one MPI forces the use of MPI_ALLOC_MEM, then others will say "you should use my MPI because then you don't have to change your code to use MPI_ALLOC_MEM." Because we're ultimately competing for customers' dollars, MPIs actively try to make programming with and using their products as easy as possible.

- Fortran is always problematic. I haven't thought through the problems there, but I know of many apps that have huge arrays declared statically (which the Fortran compiler gets from the heap, not the stack). Forcing those apps to change to F90-style pointers would never happen.

- I cited earlier in the thread MPI-based middleware that could use MPI_ALLOC_MEM (potentially plus a copy) for short messages, but would likely reuse application buffers directly for large messages because the copy cost would be too high. Specifically: if MPI is not the top-level middleware in an application -- some other middleware is fronting the network stack, like a computational library or some such -- they might have to make exactly the same compromises (e.g., application buffers are too large, so let's just use those instead of MPI_ALLOC_MEM+copy). The second sketch below illustrates that pattern.
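
To make the first point concrete, here is a minimal C sketch (the buffer size, tag, and two-rank setup are mine, not from the thread) contrasting a plain malloc'd buffer -- legal MPI usage today -- with the MPI_ALLOC_MEM style that the proposal would effectively require:

#include <stdlib.h>
#include <mpi.h>

/* Sketch only: a malloc'd buffer is already a legal MPI buffer today;
 * the MPI_Alloc_mem variant shows what every allocation site would
 * have to become under the proposal.  Size/tag values are arbitrary. */
int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    const int count = 1024;

    /* Today: plain heap memory works fine as a send/recv buffer. */
    double *buf = malloc(count * sizeof(double));
    if (rank == 0) {
        MPI_Send(buf, count, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(buf, count, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }
    free(buf);

    /* Proposed style: memory must come from the MPI library instead,
     * so every malloc/free in the application has to change. */
    double *regbuf;
    MPI_Alloc_mem(count * sizeof(double), MPI_INFO_NULL, &regbuf);
    /* ... same send/recv calls as above ... */
    MPI_Free_mem(regbuf);

    MPI_Finalize();
    return 0;
}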
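
And here is a second sketch of the compromise in the last point: a hypothetical middleware send path (the function names and the 16 KB cutoff are invented for illustration) that stages short messages through an MPI_ALLOC_MEM bounce buffer but hands large application buffers to MPI directly, because the copy would cost more than it saves:

#include <string.h>
#include <mpi.h>

/* Hypothetical middleware layer sitting between the application and
 * MPI.  Short messages are copied into a bounce buffer obtained from
 * MPI_Alloc_mem; large messages use the application's buffer as-is
 * (which may be plain malloc'd memory or a Fortran common block). */

#define EAGER_LIMIT (16 * 1024)   /* illustrative cutoff, in bytes */

static char *bounce = NULL;       /* allocated once at startup */

void mw_init(void)
{
    MPI_Alloc_mem(EAGER_LIMIT, MPI_INFO_NULL, &bounce);
}

int mw_send(const void *app_buf, int nbytes, int dest, int tag)
{
    if (nbytes <= EAGER_LIMIT) {
        /* Short message: pay for a copy into the "special" memory. */
        memcpy(bounce, app_buf, nbytes);
        return MPI_Send(bounce, nbytes, MPI_BYTE, dest, tag,
                        MPI_COMM_WORLD);
    }
    /* Large message: the copy would dominate, so reuse the
     * application's buffer directly. */
    return MPI_Send((void *)app_buf, nbytes, MPI_BYTE, dest, tag,
                    MPI_COMM_WORLD);
}

void mw_finalize(void)
{
    MPI_Free_mem(bounce);
}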

--
Jeff Squyres
Cisco Systems

