Could someone tell me how these settings are used in OMPI or give any guidance on how they should or should not be used?

The background is that (on Linux? with GNU libc? with OMPI?) small memory allocations are placed on the heap, with brk() or sbrk() used to move the high-water mark. So that a large, freed allocation is not trapped behind a small, still-active allocation and therefore unable to be returned to the OS, the memory allocator uses mmap() instead of brk/sbrk for large allocations. There is some discussion on the internet about mmap being a costly way to allocate memory, but I'm concerned about something else. With mmap, you get page-aligned allocations back. This means that if you loop over the elements of multiple large arrays (which is common in HPC), you can generate a lot of cache conflicts, depending on the cache associativity.
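
To make the cache-conflict concern concrete, here is a minimal sketch (assuming glibc with its default mmap threshold of 128 KiB, a 4 KiB page size, and an arbitrary example array size). It prints the page offset of three large malloc'd arrays and then streams through them together, which is the pattern where identical low address bits can collide in the cache:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Large enough that glibc's malloc would normally satisfy these
       requests with mmap() (default M_MMAP_THRESHOLD is 128 KiB). */
    size_t n = 8u * 1024 * 1024;
    double *a = malloc(n * sizeof(double));
    double *b = malloc(n * sizeof(double));
    double *c = malloc(n * sizeof(double));
    if (!a || !b || !c) return 1;

    /* mmap-backed allocations start on a fresh page, so all three
       pointers typically share the same offset within a 4 KiB page,
       i.e. the same low address bits that index the cache sets. */
    printf("a %% 4096 = %zu\n", (size_t)a % 4096);
    printf("b %% 4096 = %zu\n", (size_t)b % 4096);
    printf("c %% 4096 = %zu\n", (size_t)c % 4096);

    for (size_t i = 0; i < n; i++) { b[i] = 1.0; c[i] = 2.0; }

    /* Three identically aligned streams competing for the same cache
       sets -- whether this hurts depends on the associativity. */
    for (size_t i = 0; i < n; i++)
        a[i] = b[i] + c[i];

    printf("a[0] = %f\n", a[0]);
    free(a); free(b); free(c);
    return 0;
}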

There are multiple reasons one might want to modify the behavior of the memory allocator, including the high cost of mmap calls, the desire to register memory for faster communications, and now this cache-conflict issue. The usual solution is

setenv MALLOC_MMAP_MAX_        0
setenv MALLOC_TRIM_THRESHOLD_ -1

or the equivalent mallopt() calls.
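
For reference, the in-program equivalent looks roughly like this (a sketch assuming glibc's <malloc.h> and its M_MMAP_MAX and M_TRIM_THRESHOLD tunables):

#include <malloc.h>  /* glibc-specific: mallopt(), M_MMAP_MAX, M_TRIM_THRESHOLD */

int main(void)
{
    /* Same effect as the environment variables above: never satisfy
       allocations with mmap(), and never trim heap memory back to the OS. */
    mallopt(M_MMAP_MAX, 0);
    mallopt(M_TRIM_THRESHOLD, -1);

    /* ... rest of the application ... */
    return 0;
}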

This becomes an MPI issue for at least three reasons:

*) MPI may care about these settings due to memory registration and pinning. (I invite you to explain to me what I mean. I'm talking over my head here.)

*) (Related to the previous bullet.) MPI performance comparisons may reflect these effects. Specifically, when comparing OMPI, Intel MPI, Scali/Platform MPI, and MVAPICH2, some tests (such as HPCC and SPECmpi) have shown large performance differences between the various MPIs even though, it seems, none of them was actually spending much time in MPI. Rather, some MPI implementations were turning off large-malloc mmaps and getting good performance (and, sadly, OMPI looked bad in comparison).

*) These settings seem desirable for HPC codes, since such codes don't do much allocation/deallocation and do tend to have loop nests that wade through multiple large arrays at once. For the best "out of the box" performance, a software stack should turn these settings on for HPC. Codes don't typically identify themselves as "HPC", but some indicators include Fortran, OpenMP, and MPI. (A sketch of one way a stack could apply these defaults follows below.)
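
Purely as an illustrative sketch (it assumes glibc and GCC's constructor attribute; the function name is made up), a stack component could apply these defaults from a library that gets linked in or preloaded, so applications pick them up without any code or environment changes:

/* Hypothetical shim: if this object is linked into (or preloaded ahead of)
   an application, the constructor runs before main() and applies the
   allocator settings discussed above without touching the application. */
#include <malloc.h>

__attribute__((constructor))
static void hpc_malloc_defaults(void)
{
    mallopt(M_MMAP_MAX, 0);        /* keep large allocations on the brk heap */
    mallopt(M_TRIM_THRESHOLD, -1); /* never trim heap memory back to the OS */
}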

I don't know the full scope of the problem, but I've run into this with at least HPCC STREAM (which shouldn't depend on MPI at all, yet OMPI looks much slower than Scali/Platform on some tests) and SPECmpi (primarily one or two codes, though it also depends on problem size).

Discussion is invited.
