[OMPI devel] glibc malloc hooks going away
If you saw Mellanox's commit this morning, you noticed a comment about how the glibc malloc hooks are deprecated. I pinged Mike D. about this off-list, and he sent me the following reference from the glibc 2.14 release notes at http://sourceware.org/ml/libc-alpha/2011-05/msg00103.html:

* The malloc hook implementation is marked deprecated and will be removed
  from the default implementation in the next version.  The design never
  worked ever since the introduction of threads.  Even programs which do
  not create threads themselves can use multiple threads created internally.

Yoinks.

The OpenFabrics community had better come up with something to replace the glibc malloc hooks implementation fairly soon... (e.g., push ummunotify upstream, or push something else -- Mellanox is currently arguing that On Demand Paging will obviate the need for something like ummunotify; see the linux-rdma mailing list for an ongoing discussion about this exact topic)

-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to: http://www.cisco.com/web/about/doing_business/legal/cri/
Re: [OMPI devel] [EXTERNAL] glibc malloc hooks going away
On 6/10/13 8:23 AM, "Jeff Squyres (jsquyres)" wrote:

> If you saw Mellanox's commit this morning, you noticed a comment about
> how the glibc malloc hooks are deprecated. I pinged Mike D. about this
> off-list, and he sent me the following reference from the glibc 2.14
> release notes at http://sourceware.org/ml/libc-alpha/2011-05/msg00103.html:
>
> * The malloc hook implementation is marked deprecated and will be removed
>   from the default implementation in the next version.  The design never
>   worked ever since the introduction of threads.  Even programs which do
>   not create threads themselves can use multiple threads created internally.
>
> Yoinks.

At least they've finally come to that conclusion. I look forward to not shipping a memory allocator with our communication library ;).

> The OpenFabrics community had better come up with something to replace
> the glibc malloc hooks implementation fairly soon... (e.g., push
> ummunotify upstream, or push something else -- Mellanox is currently
> arguing that On Demand Paging will obviate the need for something like
> ummunotify; see the linux-rdma mailing list for an ongoing discussion
> about this exact topic)

+1.

Brian

-- 
Brian W. Barrett
Scalable System Software Group
Sandia National Laboratories
Re: [OMPI devel] RFC: Add static initializer for opal_mutex_t
On Sat, Jun 08, 2013 at 12:28:02PM +0200, George Bosilca wrote:

> All Windows objects that are managed as HANDLES can easily be modified to
> have a static initializer. A clean solution is attached to the question at
> stackoverflow:
> http://stackoverflow.com/questions/3555859/is-it-possible-to-do-static-initialization-of-mutexes-in-windows

Not the cleanest solution (and I don't know how HANDLEs work), so I held off on proposing a static initializer until the Windows code was gone.

> That being said I think having a static initializer for a synchronization
> object is a dangerous thing. It has many subtleties and too many hidden
> limitations. As an example they can only be used on the declaration of the
> object, and can't be safely used for locally static objects (they must be
> global).

I have never seen any indication that a statically initialized mutex is not safe for static objects. The man page for pthread_mutex_init uses the static initializer on a static mutex: http://linux.die.net/man/3/pthread_mutex_init

> What are the instances in the Open MPI code where such a statically defined
> mutex need to be used before it has a chance of being correctly initialized?

MPI_T_thread_init may be called from any thread (or multiple threads at the same time). The current code uses atomics to protect the initialization of the mutex. I would prefer to declare the mpit lock like:

    opal_mutex_t mpit_big_lock = OPAL_MUTEX_STATIC_INIT;

and remove the atomics. It would be much cleaner and should work fine on all currently supported platforms.

-Nathan
Re: [OMPI devel] RFC: Add static initializer for opal_mutex_t
On Jun 10, 2013, at 17:18 , Nathan Hjelm wrote:

> On Sat, Jun 08, 2013 at 12:28:02PM +0200, George Bosilca wrote:
>> All Windows objects that are managed as HANDLES can easily be modified to
>> have a static initializer. A clean solution is attached to the question at
>> stackoverflow:
>> http://stackoverflow.com/questions/3555859/is-it-possible-to-do-static-initialization-of-mutexes-in-windows
>
> Not the cleanest solution (and I don't know how HANDLEs work), so I held off
> on proposing a static initializer until the Windows code was gone.

Nothing really fancy: a HANDLE is basically an untyped storage location (a void *).

>> That being said I think having a static initializer for a synchronization
>> object is a dangerous thing. It has many subtleties and too many hidden
>> limitations. As an example they can only be used on the declaration of the
>> object, and can't be safely used for locally static objects (they must be
>> global).
>
> I have never seen any indication that a statically initialized mutex is not
> safe for static objects. The man page for pthread_mutex_init uses the static
> initializer on a static mutex: http://linux.die.net/man/3/pthread_mutex_init

It is thread safe for global static objects, but might not be thread safe for local static objects.

>> What are the instances in the Open MPI code where such a statically defined
>> mutex need to be used before it has a chance of being correctly initialized?
>
> MPI_T_thread_init may be called from any thread (or multiple threads at the
> same time). The current code uses atomics to protect the initialization of
> the mutex. I would prefer to declare the mpit lock like:
>
>     opal_mutex_t mpit_big_lock = OPAL_MUTEX_STATIC_INIT;
>
> and remove the atomics. It would be much cleaner and should work fine on all
> currently supported platforms.

OK, almost a corner case.

> how does the mutex static initializer work

There is a more detailed explanation in the "Static Initializers for Mutexes and Condition Variables" section of http://pubs.opengroup.org/onlinepubs/009695399/functions/pthread_mutex_init.html

George.
Re: [OMPI devel] [EXTERNAL] glibc malloc hooks going away
On Jun 10, 2013, at 10:29 AM, "Barrett, Brian W" wrote:

> At least they've finally come to that conclusion. I look forward to not
> shipping a memory allocator with our communication library ;).

+1 on that.

That being said, that release note was for glibc 2.14. I just downloaded and built 2.17; it looks like a) the hooks are still there, and b) they're still installed by default.

Mellanox: did you find a distro that disables the glibc hooks by default?

-- 
Jeff Squyres
jsquy...@cisco.com