https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67999

--- Comment #21 from Alexander Cherepanov <ch3root at openwall dot com> ---
On 2015-10-21 06:21, danielmicay at gmail dot com wrote:
>> I think several issues are mixed:
>
> A conforming C implementation requires either fixing both the compiler
> and libc functions to handle > PTRDIFF_MAX objects or preventing them
> from being allocated via standard mechanisms (and *ideally* documenting
> the restriction).

Yes, but:
1) a practical C implementation is not isolated and has to be able to 
work with external objects (e.g. received from the kernel; see the 
sketch after this list);
2) a conforming C implementation could be freestanding;
3) the situation is not symmetric. You cannot make a libc able to 
process huge objects until the compiler is able to do it. IOW, if the 
compiler supports huge objects, then you have the freedom to choose 
whether you want your libc to support them or not.
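
To make point 1 concrete, a minimal sketch, assuming a 32-bit Linux 
target: mmap can hand the process an object larger than PTRDIFF_MAX 
without any standard allocator being involved (whether the kernel 
actually grants such a mapping depends on the address-space layout):

#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = (size_t)PTRDIFF_MAX + 2; /* > PTRDIFF_MAX by construction */

    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* The compiler now has to cope with pointer arithmetic across this
       object even though libc never created it. */
    printf("mapped %zu bytes at %p\n", len, p);
    munmap(p, len);
    return 0;
}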

> Since there are many other issues with > PTRDIFF_MAX objects (p - q,
> read/write and similar uses of ssize_t, etc.) and few reasons to allow
> it, it really makes the most sense to tackle it in libc.

Other issues where? In typical user code? Then the compiler/libc 
shouldn't create objects with size > PTRDIFF_MAX for it. That doesn't 
mean they shouldn't be able to deal with such objects. E.g., I can 
imagine a libc where malloc doesn't create such objects by default but 
has a system-wide, per-user or even compile-time option to enable the 
feature. Or you can limit memory with some system facility (ulimit, 
cgroups) independently from libc (as mentioned by Florian Weimer 
elsewhere).

Lack of compiler support more or less makes all these possibilities 
impossible.
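
To illustrate the read/write half of the quoted list: POSIX read() 
returns ssize_t, so a single call with a count above SSIZE_MAX has an 
implementation-defined result. A hedged sketch of the usual chunking 
workaround follows; read_all is a hypothetical helper, not a libc 
function:

#include <limits.h>
#include <stddef.h>
#include <unistd.h>

/* Read up to len bytes into buf, splitting the I/O so that no single
   read() call gets a count above SSIZE_MAX. */
static size_t read_all(int fd, char *buf, size_t len)
{
    size_t done = 0;
    while (done < len) {
        size_t chunk = len - done;
        if (chunk > (size_t)SSIZE_MAX)
            chunk = (size_t)SSIZE_MAX; /* keep the count representable */
        ssize_t n = read(fd, buf + done, chunk);
        if (n <= 0)
            break; /* EOF or error; real code would inspect errno */
        done += (size_t)n;
    }
    return done;
}

Note that 'buf + done' itself relies on the compiler handling pointer 
arithmetic across a huge object, which is exactly the support under 
discussion.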

What is missing in the discussion is the cost of supporting objects 
with size > PTRDIFF_MAX in gcc. I guess the overhead in compiled code 
would be minimal while the headache in gcc itself would be noticeable. 
But I could be wrong.

>> How buggy? Are there bugs filed? Searching for PTRDIFF_MAX finds Zarro Boogs.
>
> It hasn't been treated as a systemic issue or considered as something
> related to PTRDIFF_MAX. You'd need to search for issues like ssize_t
> overflow to find them. If you really want one specific example, it
> looks like there's at least one case of `end - start` in stdlib/qsort.c
> among other places (char *mid = lo + size * ((hi - lo) / size >> 1);).

Ok, in this specific example, 'end - start' is divided by a value of 
size_t type and is therefore converted to an unsigned type, giving the 
right thing in the end.
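
A minimal sketch of why the conversion saves the day, assuming the 
usual flat-memory behaviour where the (formally undefined) oversized 
subtraction wraps modulo 2^N:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Pretend the true distance between hi and lo is PTRDIFF_MAX + 2,
       which does not fit in ptrdiff_t. */
    size_t true_distance = (size_t)PTRDIFF_MAX + 2;
    size_t size = 4; /* element size */

    /* What hi - lo yields in practice: the value wraps to a negative
       ptrdiff_t (the conversion is implementation-defined; gcc wraps). */
    ptrdiff_t diff = (ptrdiff_t)true_distance;

    /* In (hi - lo) / size the usual arithmetic conversions turn diff
       back into a size_t before dividing, undoing the wrap. */
    size_t nelem = (size_t)diff / size;

    printf("%zu == %zu\n", nelem, true_distance / size);
    return 0;
}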

> I don't think fixing every usage of `end - start` on arbitrarily sized
> objects is the right way to go, so it's not something I'd audit for and
> file bugs about.

I was going to submit this as a bug but the code turned out to be 
working fine. Not that the code is valid C, but the situation is a bit 
trickier than a simple "the function doesn't work for this data".

Another example?

>> For this to work a compiler has to support working with huge objects,
>> right?
>
> Well, they might just need a contiguous allocation without the need to
> actually use it all at once. It doesn't necessarily require compiler
> support, but it could easily go wrong without compiler support if the
> semantics of the implementation aren't clearly laid out (and at the
> moment it's definitely not documented).

Exactly! It's a minefield.
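
For concreteness, a hedged sketch of such a "contiguous allocation used 
piecewise" on POSIX: reserve the address range with PROT_NONE and 
commit pages with mprotect only as they are needed. Whether an 
oversized reservation succeeds is entirely up to the kernel:

#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    size_t total = (size_t)1 << 30;   /* size of the contiguous reservation */
    size_t page  = 4096;              /* assume 4 KiB pages for the sketch */

    /* Reserve address space only; no accessible memory yet. */
    char *base = mmap(NULL, total, PROT_NONE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (base == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* Commit just the first page when it is actually needed. */
    if (mprotect(base, page, PROT_READ | PROT_WRITE) != 0) {
        perror("mprotect");
        return 1;
    }
    base[0] = 42; /* only this page is ever touched */

    munmap(base, total);
    return 0;
}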
