[Bug tree-optimization/111015] [11/12/13/14 Regression] __int128 bitfields optimized incorrectly to the 64 bit operations
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=111015

Jakub Jelinek changed:

           What    |Removed     |Added
           ----------------------------
         Status    |ASSIGNED    |RESOLVED
     Resolution    |---         |FIXED

--- Comment #10 from Jakub Jelinek ---
Fixed.
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=111015

--- Comment #9 from CVS Commits ---
The releases/gcc-11 branch has been updated by Jakub Jelinek:

https://gcc.gnu.org/g:beabb96786e4b3e1a820e400c09b1c1c9ab06287

commit r11-10968-gbeabb96786e4b3e1a820e400c09b1c1c9ab06287
Author: Jakub Jelinek
Date:   Wed Aug 30 10:47:21 2023 +0200

    store-merging: Fix up >= 64 bit insertion [PR111015]

    The following testcase shows that we mishandle bit insertion for
    info->bitsize >= 64.  The problem is in using an unsigned HOST_WIDE_INT
    shift + subtraction + build_int_cst to compute the mask: the shift
    invokes UB at compile time for info->bitsize 64 and larger, and on the
    testcase it happens to compute a mask of 0x3f rather than
    0x3f'ffffffff'ffffffff.  The patch fixes that by using wide_int
    wi::mask + wide_int_to_tree, so it handles masks in any precision
    (up to WIDE_INT_MAX_PRECISION ;)).

    2023-08-30  Jakub Jelinek

            PR tree-optimization/111015
            * gimple-ssa-store-merging.c
            (imm_store_chain_info::output_merged_store): Use wi::mask and
            wide_int_to_tree instead of an unsigned HOST_WIDE_INT shift and
            build_int_cst to build the BIT_AND_EXPR mask.
            * gcc.dg/pr111015.c: New test.

    (cherry picked from commit 49a3b35c4068091900b657cd36e5cffd41ef0c47)
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=111015

--- Comment #8 from CVS Commits ---
The releases/gcc-12 branch has been updated by Jakub Jelinek:

https://gcc.gnu.org/g:d04993b217f42b8e60b7a6d66647966b1e41302d

commit r12-9836-gd04993b217f42b8e60b7a6d66647966b1e41302d
Author: Jakub Jelinek
Date:   Wed Aug 30 10:47:21 2023 +0200

    store-merging: Fix up >= 64 bit insertion [PR111015]

    The following testcase shows that we mishandle bit insertion for
    info->bitsize >= 64.  The problem is in using an unsigned HOST_WIDE_INT
    shift + subtraction + build_int_cst to compute the mask: the shift
    invokes UB at compile time for info->bitsize 64 and larger, and on the
    testcase it happens to compute a mask of 0x3f rather than
    0x3f'ffffffff'ffffffff.  The patch fixes that by using wide_int
    wi::mask + wide_int_to_tree, so it handles masks in any precision
    (up to WIDE_INT_MAX_PRECISION ;)).

    2023-08-30  Jakub Jelinek

            PR tree-optimization/111015
            * gimple-ssa-store-merging.cc
            (imm_store_chain_info::output_merged_store): Use wi::mask and
            wide_int_to_tree instead of an unsigned HOST_WIDE_INT shift and
            build_int_cst to build the BIT_AND_EXPR mask.
            * gcc.dg/pr111015.c: New test.

    (cherry picked from commit 49a3b35c4068091900b657cd36e5cffd41ef0c47)
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=111015

--- Comment #7 from CVS Commits ---
The releases/gcc-13 branch has been updated by Jakub Jelinek:

https://gcc.gnu.org/g:f8ea576111a499595b0fe9d879830ae03afbaf17

commit r13-7767-gf8ea576111a499595b0fe9d879830ae03afbaf17
Author: Jakub Jelinek
Date:   Wed Aug 30 10:47:21 2023 +0200

    store-merging: Fix up >= 64 bit insertion [PR111015]

    The following testcase shows that we mishandle bit insertion for
    info->bitsize >= 64.  The problem is in using an unsigned HOST_WIDE_INT
    shift + subtraction + build_int_cst to compute the mask: the shift
    invokes UB at compile time for info->bitsize 64 and larger, and on the
    testcase it happens to compute a mask of 0x3f rather than
    0x3f'ffffffff'ffffffff.  The patch fixes that by using wide_int
    wi::mask + wide_int_to_tree, so it handles masks in any precision
    (up to WIDE_INT_MAX_PRECISION ;)).

    2023-08-30  Jakub Jelinek

            PR tree-optimization/111015
            * gimple-ssa-store-merging.cc
            (imm_store_chain_info::output_merged_store): Use wi::mask and
            wide_int_to_tree instead of an unsigned HOST_WIDE_INT shift and
            build_int_cst to build the BIT_AND_EXPR mask.
            * gcc.dg/pr111015.c: New test.

    (cherry picked from commit 49a3b35c4068091900b657cd36e5cffd41ef0c47)
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=111015

--- Comment #6 from CVS Commits ---
The master branch has been updated by Jakub Jelinek:

https://gcc.gnu.org/g:49a3b35c4068091900b657cd36e5cffd41ef0c47

commit r14-3563-g49a3b35c4068091900b657cd36e5cffd41ef0c47
Author: Jakub Jelinek
Date:   Wed Aug 30 10:47:21 2023 +0200

    store-merging: Fix up >= 64 bit insertion [PR111015]

    The following testcase shows that we mishandle bit insertion for
    info->bitsize >= 64.  The problem is in using an unsigned HOST_WIDE_INT
    shift + subtraction + build_int_cst to compute the mask: the shift
    invokes UB at compile time for info->bitsize 64 and larger, and on the
    testcase it happens to compute a mask of 0x3f rather than
    0x3f'ffffffff'ffffffff.  The patch fixes that by using wide_int
    wi::mask + wide_int_to_tree, so it handles masks in any precision
    (up to WIDE_INT_MAX_PRECISION ;)).

    2023-08-30  Jakub Jelinek

            PR tree-optimization/111015
            * gimple-ssa-store-merging.cc
            (imm_store_chain_info::output_merged_store): Use wi::mask and
            wide_int_to_tree instead of an unsigned HOST_WIDE_INT shift and
            build_int_cst to build the BIT_AND_EXPR mask.
            * gcc.dg/pr111015.c: New test.
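The failure mode the commit describes can be illustrated outside of GCC. The helpers below are a hypothetical sketch, not GCC source: `buggy_mask` makes explicit the modulo-64 wrap that an out-of-range 64-bit shift typically produces on x86 (in GCC the shift was simply undefined behavior), so a 70-bit mask collapses to `0x3f`; `wide_mask` computes the mask at 128-bit precision, in the spirit of the `wi::mask` fix.

```c
#include <stdint.h>

/* Hypothetical illustration, not GCC code.  A 64-bit shift cannot build
   a mask of 64 or more one-bits: for n >= 64 the shift is UB, and on
   x86 it typically wraps modulo 64.  The wrap is written out explicitly
   here so the demonstration itself stays well-defined: n == 70 yields
   (1 << 6) - 1 == 0x3f instead of seventy one-bits.  */
uint64_t buggy_mask(unsigned n)
{
    return (UINT64_C(1) << (n & 63)) - 1;
}

/* Width-safe variant in the spirit of the wi::mask fix: compute the
   mask at 128-bit precision (GCC/Clang unsigned __int128 extension),
   so any width up to 128 bits is representable.  */
unsigned __int128 wide_mask(unsigned n)
{
    if (n >= 128)
        return ~(unsigned __int128)0;
    return (((unsigned __int128)1) << n) - 1;
}
```

For n == 70, `wide_mask` produces all-ones in the low 64 bits and `0x3f` in the high bits, i.e. the `0x3f'ffffffff'ffffffff` value from the commit message, while `buggy_mask` returns only `0x3f`.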
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=111015

--- Comment #5 from Jakub Jelinek ---
Created attachment 55811
  --> https://gcc.gnu.org/bugzilla/attachment.cgi?id=55811&action=edit
gcc14-pr111015.patch

Untested fix.
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=111015

Jakub Jelinek changed:

           What    |Removed                       |Added
           ----------------------------------------------
             CC    |                              |jakub at gcc dot gnu.org
       Assignee    |unassigned at gcc dot gnu.org |jakub at gcc dot gnu.org
         Status    |NEW                           |ASSIGNED
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=111015

--- Comment #4 from Mikael Pettersson ---
Reverting the pass_store_merging::process_store hunk makes this test case
work again:

diff --git a/gcc/gimple-ssa-store-merging.cc b/gcc/gimple-ssa-store-merging.cc
index 0d19b98ed73..c4bf8eec64e 100644
--- a/gcc/gimple-ssa-store-merging.cc
+++ b/gcc/gimple-ssa-store-merging.cc
@@ -5299,7 +5299,7 @@ pass_store_merging::process_store (gimple *stmt)
       && bitsize.is_constant (&const_bitsize)
       && ((const_bitsize % BITS_PER_UNIT) != 0
           || !multiple_p (bitpos, BITS_PER_UNIT))
-      && const_bitsize <= MAX_FIXED_MODE_SIZE)
+      && const_bitsize <= 64)
     {
       /* Bypass a conversion to the bit-field type.  */
       if (!bit_not_p
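The hunk above gates the bit-field insertion path on the field width; capping it at 64 sidesteps the broken mask computation for wider fields. As a hedged sketch (this is not the actual gcc.dg/pr111015.c, which is not reproduced in this thread), the kind of source affected is an `__int128` bit-field wider than 64 bits, where a correct compiler must preserve the bits at positions 64 and above when storing:

```c
#include <stdint.h>

/* Hypothetical reduced example, not the actual gcc.dg/pr111015.c.
   An __int128 bit-field wider than 64 bits (GCC extension); the bug
   caused store merging to mask the stored value with only the low
   6 bits of the intended 70-bit mask, discarding bits 6..69.  */
struct wide_bf {
    unsigned __int128 f : 70;
};

/* Store into the bit-field; a correct compiler keeps all 70 low bits
   of v, including those at positions 64..69.  */
void store_field(struct wide_bf *s, unsigned __int128 v)
{
    s->f = v;
}

/* Read back bits 64..69 of the field, where the miscompilation would
   lose data.  */
uint64_t field_high_bits(const struct wide_bf *s)
{
    return (uint64_t)(s->f >> 64);
}
```

On a fixed compiler, storing a value with bits set above position 63 and reading it back round-trips; under the bug, the merged store truncated to 64-bit operations.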
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=111015

Mikael Pettersson changed:

           What    |Removed |Added
           ------------------------
             CC    |        |mikpelinux at gmail dot com

--- Comment #3 from Mikael Pettersson ---
10.5.0 is good, 11.4.0 and above are affected, started with (or was exposed
by):

commit ed01d707f8594827de95304371d5b62752410842
Author: Eric Botcazou
Date:   Mon May 25 22:13:11 2020 +0200

    Fix internal error on store to FP component at -O2

    This is about a GIMPLE verification failure at -O2 or above because the
    GIMPLE store merging pass generates a NOP_EXPR between a FP type and an
    integral type.  This happens when the bit-field insertion path is taken
    for a FP field, which can happen in Ada for bit-packed record types.
    It is fixed by generating an intermediate VIEW_CONVERT_EXPR.

    The patch also tames a little the bit-field insertion path because, for
    bit-packed record types in Ada, you can end up with large bit-field
    regions, which results in a lot of mask-and-shift instructions.

    gcc/ChangeLog
            * gimple-ssa-store-merging.c
            (merged_store_group::can_be_merged_into): Only turn MEM_REFs
            into bit-field stores for small bit-field regions.
            (imm_store_chain_info::output_merged_store): Be prepared for
            sources with non-integral type in the bit-field insertion case.
            (pass_store_merging::process_store): Use MAX_BITSIZE_MODE_ANY_INT
            as the largest size for the bit-field case.

    gcc/testsuite/ChangeLog
            * gnat.dg/opt84.adb: New test.

 gcc/ChangeLog                   |  9 ++++
 gcc/gimple-ssa-store-merging.c  | 20 ++++++---
 gcc/testsuite/ChangeLog         |  4 ++
 gcc/testsuite/gnat.dg/opt84.adb | 74 ++++++++++++++++++++++++++++++++
 4 files changed, 103 insertions(+), 4 deletions(-)
 create mode 100644 gcc/testsuite/gnat.dg/opt84.adb
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=111015

Richard Biener changed:

           What        |Removed          |Added
           -------------------------------------
             Status    |UNCONFIRMED      |NEW
            Summary    |__int128 bitfields optimized incorrectly to the 64 bit operations |[11/12/13/14 Regression] __int128 bitfields optimized incorrectly to the 64 bit operations
      Known to work    |                 |7.5.0
   Last reconfirmed    |                 |2023-08-14
   Target Milestone    |---              |11.5
           Priority    |P3               |P2
          Component    |rtl-optimization |tree-optimization
     Ever confirmed    |0                |1
           Keywords    |                 |needs-bisection
      Known to fail    |                 |11.4.0, 13.2.0

--- Comment #2 from Richard Biener ---
Confirmed.