https://gcc.gnu.org/bugzilla/show_bug.cgi?id=97505

--- Comment #4 from Aldy Hernandez <aldyh at gcc dot gnu.org> ---
Looking at vr_values::extract_range_builtin(), I see that every single place
where we ask for a range, we bail on non-integers (symbolics, etc.).  That
is, every place with the exception of the UBSAN builtins.

The UBSAN builtins degrade into PLUS/MINUS/MULT and call
extract_range_from_binary_expr, which, as I've shown, can special-case some
symbolics that the ranger doesn't currently handle.
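
To illustrate (roughly; the exact IL depends on target and options), with
-fsanitize=signed-integer-overflow a plain signed addition like the one below
is rewritten into a call to the internal function .UBSAN_CHECK_ADD, and it's
the range of that call's result that extract_range_builtin has to compute:

/* Compiled with -fsanitize=signed-integer-overflow, the addition below
   is gimplified into something like (simplified, from memory):

     _3 = .UBSAN_CHECK_ADD (a_1(D), b_2(D));

   If an operand has a symbolic range (say, [x_4 + 1, +INF]), the old
   extract_range_from_binary_expr code can still say something useful
   about _3, while the ranger currently punts.  */
int
add (int a, int b)
{
  return a + b;
}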

Since this is a UBSAN issue, we could still go with the original plan of
removing the duplication between the ranger and vr-values, but we'd still have
to leave in the UBSAN builtin handling, perhaps as
vr_values::extract_ubsan_builtin().  This isn't ideal, as we'd like to remove
all the common code, but I'd be willing to put up with the UBSAN duplication
for the time being.

A possible solution for this PR would be to disable the assert on the UBSAN
builtins:

diff --git a/gcc/vr-values.c b/gcc/vr-values.c
index 67c88006f13..3021f619319 100644
--- a/gcc/vr-values.c
+++ b/gcc/vr-values.c
@@ -1432,14 +1432,17 @@ vr_values::extract_range_basic (value_range_equiv *vr, gimple *stmt)

   if (is_gimple_call (stmt) && extract_range_builtin (vr, stmt))
     {
+      combined_fn cfn = gimple_call_combined_fn (stmt);
+      if (cfn == CFN_UBSAN_CHECK_ADD
+         || cfn == CFN_UBSAN_CHECK_SUB
+         || cfn == CFN_UBSAN_CHECK_MUL)
+       return;
+
       value_range_equiv tmp;

And then, as a follow-up, remove all the builtin code from
extract_range_builtin, with the exception of the UBSAN stuff (renaming it to
extract_ubsan_builtin).
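
Just to sketch what I have in mind (hypothetical, not an actual patch), the
stripped-down routine would only map the three UBSAN checks to their tree
codes and hand the operands to extract_range_from_binary_expr, which is all
the UBSAN path really needs:

/* Hypothetical sketch only; the real extract_range_builtin handles many
   other builtins and also temporarily sets flag_wrapv so the arithmetic
   is treated as wrapping, all of which is omitted here.  */
bool
vr_values::extract_ubsan_builtin (value_range_equiv *vr, gimple *stmt)
{
  enum tree_code subcode = ERROR_MARK;

  switch (gimple_call_combined_fn (stmt))
    {
    case CFN_UBSAN_CHECK_ADD: subcode = PLUS_EXPR; break;
    case CFN_UBSAN_CHECK_SUB: subcode = MINUS_EXPR; break;
    case CFN_UBSAN_CHECK_MUL: subcode = MULT_EXPR; break;
    default: return false;
    }

  /* Degrade the check into the corresponding binary expression and let
     the old symbolic-aware code compute the range of the result.  */
  extract_range_from_binary_expr (vr, subcode,
				  gimple_call_return_type (stmt),
				  gimple_call_arg (stmt, 0),
				  gimple_call_arg (stmt, 1));
  return true;
}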

Now... the question is whether this is worth it, since the plan for next
stage1 is to overhaul evrp and vrp, causing vr_values and friends to be
removed entirely.  If we leave the duplicate code in, we'll have to remember
to keep the vr_values and ranger versions of this code in sync for the next
year.  I'm guessing this code doesn't change much?

If we do decide to remove it, starting with the proposed patch above, we'd
have to do more extensive testing to make sure nothing breaks.  Perhaps
testing with {-m32,-m64} and -fsanitize=signed-integer-overflow on x86,
ppc64, and possibly other slowpokes such as aarch64.

I'm torn.  It may be a lot of testing and possible breakage in return for
removing 90% of code that will hopefully be entirely removed during the next
cycle.

Thoughts?
