https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67999

--- Comment #25 from Alexander Cherepanov <ch3root at openwall dot com> ---
On 28.10.2015 03:12, joseph at codesourcery dot com wrote:
>> What is missing in the discussion is the cost of supporting objects
>> with size > PTRDIFF_MAX in gcc. I guess the overhead in compiled code
>> would be minimal while the headache in gcc itself is noticeable. But I
>> could be wrong.
>
> I think the biggest overhead would be that every single pointer
> subtraction, where the target type is (or might be, in the case of
> VLAs) larger than one byte, would either need conditional code
> depending on which order the pointers are in,

E.g. by using __builtin_sub_overflow.
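
A rough sketch of what I mean (purely illustrative, not what gcc would
actually emit): do the subtraction in ptrdiff_t and fall back to a slow
path only when it overflows, i.e. only when the byte distance really
exceeds PTRDIFF_MAX:

  #include <stddef.h>
  #include <stdint.h>

  /* p - q for an element type of size elt_size.  */
  static ptrdiff_t
  elem_diff (const char *p, const char *q, size_t elt_size)
  {
    ptrdiff_t byte_diff;
    if (!__builtin_sub_overflow ((intptr_t) p, (intptr_t) q, &byte_diff))
      return byte_diff / (ptrdiff_t) elt_size;   /* common, fast case */
    /* Rare case: the difference doesn't fit in ptrdiff_t; order the
       pointers explicitly and divide as unsigned.  */
    if ((uintptr_t) p >= (uintptr_t) q)
      return (ptrdiff_t) (((uintptr_t) p - (uintptr_t) q) / elt_size);
    return -(ptrdiff_t) (((uintptr_t) q - (uintptr_t) p) / elt_size);
  }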

> or would need to extend to a wider type,
> subtract in that type, divide in that type and then reduce to ptrdiff_t;
> it would no longer be possible to do (ptrdiff_t subtraction, then
> EXACT_DIV_EXPR on ptrdiff_t).

Do you expect many such cases? My wild guess would be that most pointer 
subtractions are either on char*, or known at compile time to be 
positive, or both.
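
And for the cases that do remain, the widening variant you describe 
would presumably boil down to something like this on a 64-bit target (a 
sketch only; assumes __int128 and a flat address space):

  /* p - q with the byte difference computed in a 128-bit type.  */
  __int128 wide = (__int128) (uintptr_t) p - (__int128) (uintptr_t) q;
  ptrdiff_t res = (ptrdiff_t) (wide / (__int128) sizeof *p);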

> There would be other things, such as
> pointer addition / subtraction of integers needing to handle values
> outside the range of ptrdiff_t,

At first sight, these don't require special treatment and could just 
wrap. But their handling is probably trickier in the optimizer.
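
For example, something like this would have to keep working (again only 
a sketch; whether such large objects should be supported at all is 
exactly what is being discussed here):

  #include <stdint.h>
  #include <stdlib.h>

  int
  main (void)
  {
    size_t n = (size_t) PTRDIFF_MAX + 2;
    char *p = malloc (n);            /* an object larger than PTRDIFF_MAX */
    if (p)
      {
        char *q = p + (n - 1);       /* offset doesn't fit in ptrdiff_t */
        *q = 0;
        free (p);
      }
    return 0;
  }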
