Currently we derive the number of stmts allowed for backwards jump threading from the forward jump threading limit by applying a factor of two to the counted stmts. That doesn't allow fine-grained adjustment, such as by a single stmt as needed for PR109893. The following changes the factor to a percentage of the forward threading limit and adjusts that percentage from 50 to 54, fixing the regression.
Bootstrapped and tested on x86_64-unknown-linux-gnu; I'm cross-checking some FAILs I see.

	PR tree-optimization/109893
	* params.opt (fsm-scale-path-stmts): Change to percentage and
	default to 54 from 50.
	* doc/invoke.texi (--param fsm-scale-path-stmts): Adjust.
	* tree-ssa-threadbackward.cc
	(back_threader_profitability::possibly_profitable_path_p):
	Adjust param_fsm_scale_path_stmts uses.
	(back_threader_profitability::profitable_path_p): Likewise.

	* gcc.dg/tree-ssa/pr109893.c: New testcase.
---
 gcc/doc/invoke.texi                      |  4 +--
 gcc/params.opt                           |  4 +--
 gcc/testsuite/gcc.dg/tree-ssa/pr109893.c | 33 ++++++++++++++++++++++++
 gcc/tree-ssa-threadbackward.cc           | 17 ++++++------
 4 files changed, 46 insertions(+), 12 deletions(-)
 create mode 100644 gcc/testsuite/gcc.dg/tree-ssa/pr109893.c

diff --git a/gcc/doc/invoke.texi b/gcc/doc/invoke.texi
index b7a201317ce..7e19f0245de 100644
--- a/gcc/doc/invoke.texi
+++ b/gcc/doc/invoke.texi
@@ -16647,8 +16647,8 @@ Maximum number of arrays per scop.
 Max. size of loc list for which reverse ops should be added.
 
 @item fsm-scale-path-stmts
-Scale factor to apply to the number of statements in a threading path
-crossing a loop backedge when comparing to
+Maximum number of statements allowed in a threading path crossing a
+loop backedge, as a percentage of
 @option{--param=max-jump-thread-duplication-stmts}.
 
 @item uninit-control-dep-attempts
diff --git a/gcc/params.opt b/gcc/params.opt
index 5eb045b2e6c..dbfa8ece8e0 100644
--- a/gcc/params.opt
+++ b/gcc/params.opt
@@ -131,8 +131,8 @@ Common Joined UInteger Var(param_early_inlining_insns) Init(6) Optimization Para
 Maximal estimated growth of function body caused by early inlining of single call.
 
 -param=fsm-scale-path-stmts=
-Common Joined UInteger Var(param_fsm_scale_path_stmts) Init(2) IntegerRange(1, 10) Param Optimization
-Scale factor to apply to the number of statements in a threading path crossing a loop backedge when comparing to max-jump-thread-duplication-stmts.
+Common Joined UInteger Var(param_fsm_scale_path_stmts) Init(54) IntegerRange(1, 100) Param Optimization
+Percentage of max-jump-thread-duplication-stmts to allow for the number of statements in a threading path crossing a loop backedge.
 
 -param=fully-pipelined-fma=
 Common Joined UInteger Var(param_fully_pipelined_fma) Init(0) IntegerRange(0, 1) Param Optimization
diff --git a/gcc/testsuite/gcc.dg/tree-ssa/pr109893.c b/gcc/testsuite/gcc.dg/tree-ssa/pr109893.c
new file mode 100644
index 00000000000..5c98664df72
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/tree-ssa/pr109893.c
@@ -0,0 +1,33 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -fdump-tree-dom2" } */
+
+void foo(void);
+void bar(void);
+static char a;
+static int b, e, f;
+static int *c = &b, *g;
+int main() {
+  int *j = 0;
+  if (a) {
+    g = 0;
+    if (c)
+      bar();
+  } else {
+    j = &e;
+    c = 0;
+  }
+  if (c == &f == b || c == &e)
+    ;
+  else
+    __builtin_unreachable();
+  if (g || e) {
+    if (j == &e || j == 0)
+      ;
+    else
+      foo();
+  }
+  a = 4;
+}
+
+/* Jump threading in thread1 should enable eliding the call to foo.  */
+/* { dg-final { scan-tree-dump-not "foo" "dom2" } } */
diff --git a/gcc/tree-ssa-threadbackward.cc b/gcc/tree-ssa-threadbackward.cc
index fcebcdb5eaa..3091ddf4af1 100644
--- a/gcc/tree-ssa-threadbackward.cc
+++ b/gcc/tree-ssa-threadbackward.cc
@@ -741,8 +741,8 @@ back_threader_profitability::possibly_profitable_path_p
   if ((!m_threaded_multiway_branch
        || !loop->latch
       || loop->latch->index == EXIT_BLOCK)
-      && (m_n_insns * param_fsm_scale_path_stmts
-	  >= param_max_jump_thread_duplication_stmts))
+      && (m_n_insns * 100 >= (param_max_jump_thread_duplication_stmts
+			      * param_fsm_scale_path_stmts)))
     {
       if (dump_file && (dump_flags & TDF_DETAILS))
	fprintf (dump_file,
@@ -751,8 +751,9 @@ back_threader_profitability::possibly_profitable_path_p
       return false;
     }
   *large_non_fsm = (!(m_threaded_through_latch && m_threaded_multiway_branch)
-		    && (m_n_insns * param_fsm_scale_path_stmts
-			>= param_max_jump_thread_duplication_stmts));
+		    && (m_n_insns * 100
+			>= (param_max_jump_thread_duplication_stmts
+			    * param_fsm_scale_path_stmts)));
 
   if (dump_file && (dump_flags & TDF_DETAILS))
     fputc ('\n', dump_file);
@@ -825,8 +825,8 @@ back_threader_profitability::profitable_path_p (const vec<basic_block> &m_path,
   if (!m_threaded_multiway_branch
       && *creates_irreducible_loop
      && (!(cfun->curr_properties & PROP_loop_opts_done)
-	  || (m_n_insns * param_fsm_scale_path_stmts
-	      >= param_max_jump_thread_duplication_stmts)))
+	  || (m_n_insns * 100 >= (param_max_jump_thread_duplication_stmts
+				  * param_fsm_scale_path_stmts))))
     {
       if (dump_file && (dump_flags & TDF_DETAILS))
	fprintf (dump_file,
@@ -841,8 +841,8 @@ back_threader_profitability::profitable_path_p (const vec<basic_block> &m_path,
      case, drastically reduce the number of statements we are allowed
      to copy.  */
   if (!(m_threaded_through_latch && m_threaded_multiway_branch)
-      && (m_n_insns * param_fsm_scale_path_stmts
-	  >= param_max_jump_thread_duplication_stmts))
+      && (m_n_insns * 100 >= (param_max_jump_thread_duplication_stmts
+			      * param_fsm_scale_path_stmts)))
     {
       if (dump_file && (dump_flags & TDF_DETAILS))
	fprintf (dump_file,
-- 
2.35.3