copy_bb contains special-case code that handles the substitution, during inlining, of a non-invariant address in place of an invariant one.  That case is now covered by the more generic handling in remap_gimple_op_r, so this patch removes the special casing, which sits in a hot path, providing a small speedup.
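
To illustrate the path that now covers this (a condensed sketch, not a literal quote of the remaining code): when remapping an operand produces something that is not a valid GIMPLE operand, remap_gimple_op_r flags the copied statement via id->regimplify, and copy_bb re-gimplifies the statement after inserting it, which makes the special-case fixup removed below redundant:

  /* Condensed sketch of the generic path in copy_bb; the "..." elides
     the surrounding loop and bookkeeping.  */
  stmts = remap_gimple_stmt (orig_stmt, id);	/* may set id->regimplify */
  ...
  gsi_insert_after (&seq_gsi, stmt, GSI_NEW_STMT);
  if (id->regimplify)
    gimple_regimplify_operands (stmt, &seq_gsi);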
Bootstrapped and tested on x86_64-unknown-linux-gnu, pushed.

	PR middle-end/108086
	* tree-inline.cc (copy_bb): Remove handling of (foo *)&this->m
	substitution which is done in remap_gimple_op_r via
	re-gimplifying.
---
 gcc/tree-inline.cc | 15 ---------------
 1 file changed, 15 deletions(-)

diff --git a/gcc/tree-inline.cc b/gcc/tree-inline.cc
index ad8275185ac..c802792fa07 100644
--- a/gcc/tree-inline.cc
+++ b/gcc/tree-inline.cc
@@ -2074,21 +2074,6 @@ copy_bb (copy_body_data *id, basic_block bb,
 	  gimple_duplicate_stmt_histograms (cfun, stmt, id->src_cfun,
 					    orig_stmt);
 
-	  /* With return slot optimization we can end up with
-	     non-gimple (foo *)&this->m, fix that here.  */
-	  if (is_gimple_assign (stmt)
-	      && CONVERT_EXPR_CODE_P (gimple_assign_rhs_code (stmt))
-	      && !is_gimple_val (gimple_assign_rhs1 (stmt)))
-	    {
-	      tree new_rhs;
-	      new_rhs = force_gimple_operand_gsi (&seq_gsi,
-						  gimple_assign_rhs1 (stmt),
-						  true, NULL, false,
-						  GSI_CONTINUE_LINKING);
-	      gimple_assign_set_rhs1 (stmt, new_rhs);
-	      id->regimplify = false;
-	    }
-
 	  gsi_insert_after (&seq_gsi, stmt, GSI_NEW_STMT);
 
 	  if (id->regimplify)
-- 
2.35.3