Hi!

This patch fixes two issues.  The first is that when we want to get the
address of an uninitialized large/huge bitint SSA_NAME for
multiplication/division/modulo or for conversion to floating point (binary
or decimal), the code just creates an uninitialized limb-sized variable and
passes the address of that, but I forgot to initialize *prec in that case,
so it invoked UB at compile time rather than at runtime.  As it is UB, we
could use any valid precision there, say 2 bits for signed or 1 bit for
unsigned as the smallest possible set of values, or the full bitint
precision for a fully random value.  Though, because we only pass the
address of a single limb, I think it is best to pass the bitsize of the limb.
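
For illustration only (a minimal sketch of the kind of source that can reach
this path, not the testcase from the PR): an uninitialized large/huge _BitInt
operand of a multiplication or of a conversion to floating point makes
handle_operand_addr take the GIMPLE_NOP branch touched below.

/* Hypothetical sketch, not the PR testcase.  */
_BitInt(256)
mul (_BitInt(256) y)
{
  _BitInt(256) x;       /* intentionally uninitialized */
  return x * y;         /* multiplication of a large/huge bitint */
}

double
conv (void)
{
  _BitInt(256) x;       /* intentionally uninitialized */
  return (double) x;    /* conversion to binary floating point */
}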

And the other issue is that when ranger in range_to_prec finds some range
is undefined_p (), it asserts that the {lower,upper}_bound () methods aren't
called on it, but we were calling them.  So, the patch adjusts range_to_prec
to treat that case like the !optimize case, i.e. use the full bitint
precision.
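
Again just an illustrative sketch rather than the testcase from the PR: with
optimization enabled, ranger may return an undefined_p () range for an
uninitialized operand, e.g.:

/* Hypothetical sketch, not the PR testcase; compile at -O2.  */
unsigned _BitInt(512)
mod (unsigned _BitInt(512) x)
{
  unsigned _BitInt(512) d;      /* intentionally uninitialized */
  return x % d;                 /* range_to_prec is asked about d */
}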

Bootstrapped/regtested on x86_64-linux and i686-linux, committed to trunk.

2023-09-30  Jakub Jelinek  <ja...@redhat.com>

        PR middle-end/111625
        PR middle-end/111637
        * gimple-lower-bitint.cc (range_to_prec): Use prec or -prec if
        r.undefined_p ().
        (bitint_large_huge::handle_operand_addr): For uninitialized operands
        use limb_prec or -limb_prec precision.

--- gcc/gimple-lower-bitint.cc.jj       2023-09-20 09:45:39.000000000 +0200
+++ gcc/gimple-lower-bitint.cc  2023-09-29 16:29:36.541473743 +0200
@@ -1932,7 +1932,8 @@ range_to_prec (tree op, gimple *stmt)
   unsigned int prec = TYPE_PRECISION (type);
 
   if (!optimize
-      || !get_range_query (cfun)->range_of_expr (r, op, stmt))
+      || !get_range_query (cfun)->range_of_expr (r, op, stmt)
+      || r.undefined_p ())
     {
       if (TYPE_UNSIGNED (type))
        return prec;
@@ -2066,6 +2067,9 @@ bitint_large_huge::handle_operand_addr (
            }
          else if (gimple_code (g) == GIMPLE_NOP)
            {
+             *prec = TYPE_UNSIGNED (TREE_TYPE (op)) ? limb_prec : -limb_prec;
+             if (prec_stored)
+               *prec_stored = *prec;
              tree var = create_tmp_var (m_limb_type);
              TREE_ADDRESSABLE (var) = 1;
              ret = build_fold_addr_expr (var);

        Jakub
