According to GCC's implementation-defined behavior, assigning 0x80000000u
to a signed int should yield 0x80000000 (i.e., INT_MIN).

If GCC generates an abs instruction, whether abs saturates depends on the
target ISA, so the result of this case may not follow GCC's
implementation-defined behavior.

Then the result of the macro in libgcc/soft-fp/op-common.h:

1126 #define _FP_FROM_INT(fs, wc, X, r, rsize, rtype)	\
1127   do {						\
1128     if (r)						\
1129       {						\
1130         rtype ur_;					\
1131							\
1132         if ((X##_s = (r < 0)))			\
1133           r = -(rtype)r;

will make the value of r target-dependent when the input is r = 0x80000000.

I'm not sure which way would be better:

either we should avoid the undefined C99 behavior,

or we should avoid generating abs so that the implementation-defined
behavior can be guaranteed.

2013/6/28 Marc Glisse <marc.gli...@inria.fr>:
> On Fri, 28 Jun 2013, Andrew Haley wrote:
>
>> On 06/28/2013 08:53 AM, Shiva Chen wrote:
>>>
>>> I have a case which will generate abs instructions.
>>>
>>> int main(int argc)
>>>  {
>>>     if (argc < 0)
>>>        argc = -(unsigned int)argc;
>>>      return argc;
>>>   }
>>>
>>> To my understanding, given that argc=0x80000000 in 32bit int plaform,
>>> the result of (unsigned int)argc is well defined and should be
>>> 0x80000000u.
>>> (C99  6.3.1.3 point 2)
>>>
>>> And then the result of -0x80000000u should be 0x80000000 because
>>> unsigned operation can never overflow and the value can be
>>> represented by signed integer.
>>> (C99  6.2.5 point 9)
>>
>>
>> Yes, but you can't then assign that to an int, because it will overflow.
>> 0x80000000 will not fit in an int: it's undefined behaviour.
>
>
> Implementation defined, and ok with gcc:
> http://gcc.gnu.org/onlinedocs/gcc/Integers-implementation.html
>
> --
> Marc Glisse
