https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67999

--- Comment #22 from joseph at codesourcery dot com <joseph at codesourcery dot com> ---
On Tue, 27 Oct 2015, ch3root at openwall dot com wrote:

> What is missing in the discussion is the cost of supporting objects 
> with size > PTRDIFF_MAX in gcc. I guess the overhead in compiled code 
> would be minimal while the headache in gcc itself is noticeable. But I 
> could be wrong.

I think the biggest overhead would be that every single pointer 
subtraction, where the target type is (or, in the case of VLAs, might 
be) larger than one byte, would either need conditional code depending 
on which order the pointers are in, or would need to extend to a wider 
type, subtract in that type, divide in that type and then reduce to 
ptrdiff_t; it would no longer be possible to use the current lowering 
(a ptrdiff_t subtraction followed by an EXACT_DIV_EXPR on ptrdiff_t).  
There would be other costs, such as pointer addition / subtraction of 
integers needing to handle values outside the range of ptrdiff_t, but 
it's pointer subtraction that I expect would have the main runtime 
overhead.
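To make the comparison concrete, here is a rough C-level sketch of the 
two lowerings for int elements; sub_current and sub_huge are 
illustrative names for exposition, not GCC internals:

#include <stddef.h>
#include <stdint.h>

/* Current lowering (sketch): the byte difference is assumed to fit in
   ptrdiff_t, so a signed subtraction followed by an exact signed
   division suffices.  This corresponds to the ptrdiff_t subtraction
   plus EXACT_DIV_EXPR described above. */
static ptrdiff_t sub_current(const int *p, const int *q)
{
    ptrdiff_t bytes = (ptrdiff_t)((uintptr_t)p - (uintptr_t)q);
    return bytes / (ptrdiff_t)sizeof(int);   /* exact division */
}

/* Hypothetical lowering if objects larger than PTRDIFF_MAX were
   supported: the byte difference may not fit in ptrdiff_t, so the
   division has to happen in an unsigned type, with a branch on
   pointer order to fix up the sign (the "conditional code"
   alternative above). */
static ptrdiff_t sub_huge(const int *p, const int *q)
{
    if ((uintptr_t)p >= (uintptr_t)q) {
        uintptr_t bytes = (uintptr_t)p - (uintptr_t)q;
        return (ptrdiff_t)(bytes / sizeof(int));
    } else {
        uintptr_t bytes = (uintptr_t)q - (uintptr_t)p;
        return -(ptrdiff_t)(bytes / sizeof(int));
    }
}

The branch-free alternative mentioned above would instead widen to a 
type twice the pointer width, subtract and divide there, and truncate; 
either way the cost per subtraction goes up.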

(On strict alignment targets, for naturally-aligned power-of-two element 
sizes, you could do logical shifts on the pointers before doing a signed 
subtraction, so that case needn't be quite as inefficient as the general 
case.)
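For example, with 8-byte elements on a 64-bit strict-alignment target, 
the shift trick might look like the following sketch (sub_shifted is an 
illustrative name; it assumes both pointers are 8-byte aligned, so the 
shifted values fit comfortably in ptrdiff_t):

#include <stddef.h>
#include <stdint.h>

/* Both pointers are multiples of 8, so shifting out the three low
   (zero) bits first converts addresses to element indices; a plain
   signed subtraction of the indices then gives the element count in
   one step, with no division and no branch on pointer order. */
static ptrdiff_t sub_shifted(const uint64_t *p, const uint64_t *q)
{
    uintptr_t pw = (uintptr_t)p >> 3;  /* log2(sizeof(uint64_t)) == 3 */
    uintptr_t qw = (uintptr_t)q >> 3;
    return (ptrdiff_t)pw - (ptrdiff_t)qw;
}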
