On 2015-10-21 23:18, Florian Weimer wrote:
On 10/21/2015 10:17 PM, Alexander Cherepanov wrote:
On 19.10.2015 12:07, Florian Weimer wrote:
On 10/19/2015 02:50 AM, Alexander Cherepanov wrote:

gcc doesn't support objects larger than half the address space --
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67999 . So if you are
malloc'ing >2GB on 32-bit platforms, you should be concerned.
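
A minimal sketch of why such objects are a problem (the 3 GiB size and
the 32-bit assumptions are illustrative only): pointer subtraction
inside an object larger than PTRDIFF_MAX has to produce a ptrdiff_t,
which cannot represent the distance, and GCC optimizes on the
assumption that this never happens.

  #include <stddef.h>
  #include <stdio.h>
  #include <stdlib.h>

  int main(void)
  {
      /* 3 GiB: larger than PTRDIFF_MAX on a 32-bit target. */
      size_t n = 3UL * 1024 * 1024 * 1024;
      char *p = malloc(n);
      if (p == NULL)
          return 1;
      /* (p + n) - p has type ptrdiff_t, which cannot hold n here;
         the subtraction is undefined and may come out negative. */
      ptrdiff_t d = (p + n) - p;
      printf("%td\n", d);
      free(p);
      return 0;
  }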

This needs to be fixed in GCC.  Even if we artificially fail large
allocations in malloc, there will be cases where people call mmap or
shmat directly.  And at least for the latter two, there is an
expectation that this works with larger-than-2-GiB mappings for 32-bit
processes (to the degree that Red Hat shipped very special 32-bit
kernels for a while to support this).
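
As a rough sketch of the "artificially fail large allocations" idea
(the wrapper name xmalloc_small is hypothetical, not a glibc
interface): the check is trivial inside an allocator wrapper, but
direct mmap() or shmat() calls simply never go through it.

  #include <errno.h>
  #include <stdint.h>
  #include <stdlib.h>

  /* Refuse objects that GCC cannot handle; anything that maps memory
     directly via mmap/shmat bypasses this check entirely. */
  void *xmalloc_small(size_t n)
  {
      if (n > PTRDIFF_MAX) {
          errno = ENOMEM;
          return NULL;
      }
      return malloc(n);
  }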

I'm all for fixing it in GCC. It gives more flexibility: you cannot
support huge objects in libc when your compiler doesn't support them,
but you can choose whether to support them in libc when your compiler
does support them. But I guess it's not easy to fix.

OTOH perhaps the ability to create huge objects in libc should somehow
be hidden by default? As evidenced by this thread :-)

It's possible to set a virtual address space limit with ulimit.  Is this
sufficient?

Such a limit is overly strict for this problem, as it bounds the total size of all allocations rather than the size of any single object. And it would have to be the default in 32-bit distros to be effective, which seems doubtful given its strictness. OTOH it's easy to change for those who need it, and I guess distros could deploy it very quickly, without waiting for gcc or glibc fixes.
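
For reference, a minimal sketch of what such a limit looks like when
set from within a process (the 2 GiB value is just an example; this is
the setrlimit() equivalent of "ulimit -v"):

  #include <stdio.h>
  #include <sys/resource.h>

  int main(void)
  {
      /* Cap total address space at 2 GiB (example value). */
      struct rlimit rl = { .rlim_cur = 1UL << 31, .rlim_max = 1UL << 31 };
      if (setrlimit(RLIMIT_AS, &rl) != 0) {
          perror("setrlimit");
          return 1;
      }
      /* From here on, malloc/mmap requests that would push total
         address space use past the limit fail with ENOMEM. */
      return 0;
  }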

--
Alexander Cherepanov
