Hi!

For casts from integers to floating point,
simplify_float_conversion_using_ranges uses SCALAR_INT_TYPE_MODE
and queries optabs for the conversion it wants to emit.
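
For context, the transformation is roughly the following at the source
level (a simplified sketch; the actual code works on GIMPLE and also
tries wider modes when the target lacks a direct conversion):

  unsigned int u;      /* value range proves u <= INT_MAX */
  float f = u;         /* unsigned-to-float conversion */
  /* can be simplified to the often cheaper signed conversion: */
  float f2 = (int) u;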

That doesn't really work for large/huge BITINT_TYPE: those have BLKmode,
which is not a scalar int mode.  Querying an optab is not useful for
BLKmode either.
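
To see why this fails hard rather than just giving up: paraphrasing
tree.h from memory, SCALAR_INT_TYPE_MODE is essentially

  #define SCALAR_INT_TYPE_MODE(NODE) \
    (as_a <scalar_int_mode> (TYPE_MODE (NODE)))

and the as_a cast asserts that the mode really is a scalar_int_mode,
which BLKmode is not.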

I think it is best to just skip this optimization for those bitints;
after all, bitint lowering already uses ranges to determine the minimum
precision of bitint operands of the integer to float casts.
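
As a hypothetical illustration of what lowering already handles (the
exact types and bounds below are made up for the example):

  _BitInt(256) i;      /* ranges prove 0 <= i < 2**40 */
  float f = i;         /* lowering can convert just the low limb,
                          roughly (float) (unsigned long long) i,
                          instead of the full 256-bit value */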

Bootstrapped/regrtested on x86_64-linux and i686-linux, ok for trunk?

2023-12-08  Jakub Jelinek  <ja...@redhat.com>

        PR tree-optimization/112901
        * vr-values.cc
        (simplify_using_ranges::simplify_float_conversion_using_ranges):
        Return false if rhs1 has BITINT_TYPE type with BLKmode TYPE_MODE.

        * gcc.dg/bitint-51.c: New test.

--- gcc/vr-values.cc.jj 2023-09-06 17:28:24.240977329 +0200
+++ gcc/vr-values.cc    2023-12-07 14:34:36.935121459 +0100
@@ -1656,6 +1656,11 @@ simplify_using_ranges::simplify_float_co
       || vr.undefined_p ())
     return false;
 
+  /* The code below doesn't work for large/huge _BitInt, nor is it
+     really needed for those; bitint lowering already uses ranges.  */
+  if (TREE_CODE (TREE_TYPE (rhs1)) == BITINT_TYPE
+      && TYPE_MODE (TREE_TYPE (rhs1)) == BLKmode)
+    return false;
   /* First check if we can use a signed type in place of an unsigned.  */
   scalar_int_mode rhs_mode = SCALAR_INT_TYPE_MODE (TREE_TYPE (rhs1));
   if (TYPE_UNSIGNED (TREE_TYPE (rhs1))
--- gcc/testsuite/gcc.dg/bitint-51.c.jj 2023-12-07 15:10:20.500384705 +0100
+++ gcc/testsuite/gcc.dg/bitint-51.c    2023-12-07 15:09:54.159750006 +0100
@@ -0,0 +1,14 @@
+/* PR tree-optimization/112901 */
+/* { dg-do compile { target bitint } } */
+/* { dg-options "-O2" } */
+
+float f;
+#if __BITINT_MAXWIDTH__ >= 256
+_BitInt(256) i;
+
+void
+foo (void)
+{
+  f *= 4 * i;
+}
+#endif

        Jakub
