http://gcc.gnu.org/bugzilla/show_bug.cgi?id=50189

Richard Guenther <rguenth at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|NEW                         |ASSIGNED
         AssignedTo|unassigned at gcc dot       |rguenth at gcc dot gnu.org
                   |gnu.org                     |

--- Comment #11 from Richard Guenther <rguenth at gcc dot gnu.org> 2011-10-12 11:42:45 UTC ---
(In reply to comment #10)
> Created attachment 25467 [details]
> Tentative patch against 4.6.1
> 
> I chased the issue for a while, using 4.6.1 as the test version.
> 
> The problem is in extract_range_from_assert.  When processing assertions
> for <, <=, > or >=, it sets the min (for < and <=) or max (for > and >=)
> of the calculated range to the type min or max of the right-hand side.
> 
> In the testcase, we have "m_timestamp > AT_END", where m_timestamp is an
> unsigned int and AT_END is an enumerator with value 2.  The highest value
> of that enum type is 3, so if -fstrict-enums is in effect, the type max is 3.
> 
> Result: while the dump file shows the resulting range as [3,+INF], what that
> actually means is [3,3], because the upper bound of the enum is applied, *not*
> the upper bound of the variable being compared.
> 
> The solution is for extract_range_from_assert to pick up the type min or
> type max from the type of the left-hand side (here m_timestamp, i.e.,
> unsigned int).  So the range still shows as [3,+INF], but now that
> represents [3,4294967295] and the resulting code is correct.
> 
> The patch is just one line.  The question I have is whether changing the way
> variable "type" is set is right, because it is also used in the != case and I
> don't fully understand that one.
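
For reference, the scenario described in the quoted comment reduces to
roughly the following shape; only m_timestamp, AT_END and the enum bounds
come from the description above, the other names are invented, and the real
testcase is the one attached to the PR.  Compiled with -O2 -fstrict-enums:

  // Hypothetical reduction, not the attached testcase.
  enum State { AT_START = 0, RUNNING = 1, AT_END = 2, CLOSED = 3 };

  struct Stream
  {
    unsigned int m_timestamp;

    unsigned int after_end () const
    {
      // VRP asserts m_timestamp > AT_END on this branch.  With the bug,
      // the asserted range becomes [3, 3] (3 being the enum's maximum
      // under -fstrict-enums) instead of [3, 4294967295] (the maximum of
      // unsigned int), so later uses of m_timestamp can be folded
      // incorrectly.
      if (m_timestamp > AT_END)
        return m_timestamp;
      return 0;
    }
  };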

While I don't necessarily agree with your conclusion that this is the bug
(GCC expects both operands of the comparison to be of "compatible types",
and it treats types as compatible when they have the same precision,
disregarding TYPE_{MIN,MAX}_VALUE - which it cannot do while VRP (and
_only_ VRP) uses those values for optimization), your patch makes sense -
the type of var is the more natural one to choose (also when looking at
other uses of 'type').  If it mitigates the issue in more cases, that's
even better!
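
A rough user-side illustration of that tension (the enum mirrors the values
mentioned above; everything else here is made up):

  // g++ -O2 -fstrict-enums illustrate.cc
  #include <cstdio>

  enum State { AT_START = 0, RUNNING = 1, AT_END = 2, CLOSED = 3 };

  int main ()
  {
    // The enum and unsigned int usually have the same precision, so the
    // front end sees nothing special about comparing the two types...
    std::printf ("sizeof (State) = %zu, sizeof (unsigned) = %zu\n",
                 sizeof (State), sizeof (unsigned));
    // ...but under -fstrict-enums the optimizer may assume a State value
    // lies in [0, 3], and the bug was VRP applying that bound to the
    // unsigned variable on the other side of the comparison.
    return 0;
  }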

I'm going to give it proper testing.

Thanks for spotting this simple solution to your problem.
