Kenneth Zadeck <zad...@naturalbridge.com> writes:
>> Changing the representation of unsigned constants is only worthwhile
>> if we can avoid the force_to_size for some unsigned cases.  I think we can
>> avoid it for precision >= xprecision && !small_prec.  Either we should take
>> the hit of doing that comparison (but see below) or the change isn't
>> worthwhile.
> I think that it is something closer to precision >= xprecision +
> HOST_BITS_PER_WIDE_INT && ...
> The problem is that the tree cst may have one extra block beyond the
> precision.

Ah, OK.
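
So in sketch form (purely illustrative, reusing the names from this
thread rather than any committed interface), the check for using an
unsigned tree constant as-is would be something like:

  /* Illustrative only: can an unsigned tree constant's HWI array be
     used without force_to_size?  The tree may carry one extra
     HOST_WIDE_INT block beyond xprecision, hence the extra
     HOST_BITS_PER_WIDE_INT of headroom.  */
  static bool
  unsigned_cst_usable_as_is (unsigned int precision,
                             unsigned int xprecision,
                             bool small_prec)
  {
    return (precision >= xprecision + HOST_BITS_PER_WIDE_INT
            && !small_prec);
  }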

>> I was thinking that we should always be able to use the constant as-is
>> for max_wide_int-based and addr_wide_int-based operations.  The small_prec
> Again, you can get edge-cased to death here.  I think it would work
> for max because that really is bigger than anything else, but it is
> possible (though unlikely) to have something big converted to an
> address by truncation.

But I'd have expected that conversion to be represented by an explicit
CONVERT_EXPR or NOP_EXPR.  It seems wrong to use addr_wide_int directly on
something that isn't bit- or byte-address-sized.  It'd be the C equivalent
of int + long -> int rather than the expected int + long -> long.
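
Spelling the analogy out in plain C:

  #include <stdio.h>

  int main (void)
  {
    int i = 7;
    long l = 100000L;
    long sum = i + l;             /* int + long -> long, as expected */
    int narrowed = (int) (i + l); /* narrowing back to int needs an
                                     explicit cast, the analogue of a
                                     CONVERT_EXPR/NOP_EXPR */
    printf ("%ld %d\n", sum, narrowed);
    return 0;
  }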

Same goes for wide_int.  If we're doing arithmetic at a specific
precision, it seems odd for one of the inputs to be wider and yet
not have an explicit truncation.
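
I.e. in a dump I'd expect to see the truncation as a separate statement
(a made-up GIMPLE-style example, not taken from a real testcase):

  _1 = (int) wider_2;    ;; explicit NOP_EXPR truncating to int precision
  sum_3 = _1 + other_4;  ;; the addition is then done at int precision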

Thanks,
Richard
