https://gcc.gnu.org/bugzilla/show_bug.cgi?id=120674
Jeffrey A. Law <law at gcc dot gnu.org> changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|UNCONFIRMED |NEW
Last reconfirmed| |2025-08-13
Priority|P3 |P4
Ever confirmed|0 |1
CC| |rdapp at gcc dot gnu.org
--- Comment #2 from Jeffrey A. Law <law at gcc dot gnu.org> ---
So the problem is a bit of hackish code we have in the RISC-V backend that
interacts badly with the dwarf2 emitter when vector is not enabled.
Essentially that code in riscv_convert_vector_chunks sets riscv_vector_chunks
to {1, 0} when RVV is not enabled.
Then in riscv_dwarf_poly_indeterminate_value we pull out that 0 coefficient and
use it as the factor, which makes the "coeff % factor" test in this code divide
by zero:
  /* Add COEFF * ((REGNO / FACTOR) - BIAS) to the value:
     add COEFF * (REGNO / FACTOR) now and subtract
     COEFF * BIAS from the final constant part.  */
  constant -= coeff * bias;
  add_loc_descr (&ret, new_reg_loc_descr (regno, 0));
  if (coeff % factor == 0)
    coeff /= factor;
  else
    {
      int amount = exact_log2 (factor);
      gcc_assert (amount >= 0);
      add_loc_descr (&ret, int_loc_descriptor (amount));
      add_loc_descr (&ret, new_loc_descr (DW_OP_shr, 0, 0));
    }
Robin, I think you added the chunk handling a couple of years back. Was setting
riscv_vector_chunks to {1, 0} when RVV is not enabled meant as a compile-time
savings, by letting the poly value collapse down to a constant? Or is it needed
for correctness?