https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67999

--- Comment #11 from Florian Weimer <fw at gcc dot gnu.org> ---
(In reply to Daniel Micay from comment #9)

> I don't think there's much of a use case for allocating a single >2G
> allocation in a 3G or 4G address space.

The main OpenJDK heap (well, it was Java back then) has to be one contiguous
memory mapping, and there was significant demand to get past 2 GiB.  For users
who are tied to 32-bit VMs due to JNI and other considerations, this demand
probably still exists.

Oracle database apparently tried to use large shared-memory mappings as well. 
If I read the old documentation correctly, it actually had to be in one piece,
too.  (The documentation talks about changing the SHMMAX parameter to a large
value, not just SHMALL.)
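(For reference, the distinction matters because SHMMAX caps the size of a
single System V shared-memory segment, while SHMALL caps the system-wide total
in pages.  A sketch of how one would inspect and raise both on Linux; the
numeric values here are illustrative, not Oracle's documented settings:

```shell
# Inspect the current limits.
sysctl kernel.shmmax   # max size of a single segment, in bytes
sysctl kernel.shmall   # system-wide total, in pages

# Raising only SHMALL would not help if one segment must exceed 2 GiB;
# SHMMAX itself has to be large enough for the single mapping.
sysctl -w kernel.shmmax=3221225472   # example: 3 GiB for one segment
sysctl -w kernel.shmall=1048576      # example: 4 GiB total at 4 KiB pages
```

That the documentation asks for SHMMAX, not just SHMALL, is what suggests the
mapping had to be a single piece.)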

PostgreSQL definitely needs a single large shared-memory mapping, but its
buffering behavior is significantly different, so I think there was less demand
to create these huge mappings.

> It has a high chance of failure
> simply due to virtual memory fragmentation, especially since the kernel's
> mmap allocation algorithm is so naive (keeps going downwards and ignores
> holes until it runs out, rather than using first-best-fit).

The mappings are created early during process life-time, and if I recall
correctly, this requirement limited ASLR for 32-bit processes quite
significantly.

> Was the demand for a larger address space or was it really for the ability
> to allocate all that memory in one go?

In the Java case, it was for a contiguous memory mapping larger than 2 GiB. 
I'm less sure about the Oracle use case.
