On Mon, Jan 23, 2012 at 6:01 PM, Michael Matz <[email protected]> wrote:
> Hi,
>
> On Sat, 21 Jan 2012, Eric Botcazou wrote:
>
>> > Trivially fixing the thinko (iterating over (work bit-and
>> > old_conflicts) in the first inner loop) would fix the testcase but in
>> > general create too few conflicts, i.e. generate wrong code. I need
>> > some time to think about this again.
>>
>> OK, thanks in advance.
>
> Sigh. I can't come up with a way to generally speed up the conflict
> generation without sometimes adding artificial conflicts. That is, I'll
> have to revert r183305. I can still fix the specific situation of PR46590
> by making sure clobbers are added for all aggregates, not only those that
> are address-taken at gimplification time.
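
A minimal sketch of the kind of code this affects (not the PR46590 testcase;
use() below is a hypothetical consumer that merely keeps the arrays live):
with a clobber emitted for every non-gimple-reg local at the end of its scope,
the expander can see that the two arrays are never live at the same time and
may give them one shared stack slot.

/* Hypothetical illustration, not taken from the PR.  */
extern void use (void *);       /* stand-in for any call using the array */

void
f (int sel)
{
  if (sel)
    {
      char a[4096];
      use (a);
    }                           /* clobber of a at end of this scope */
  else
    {
      char b[4096];
      use (b);
    }                           /* clobber of b at end of this scope */
}
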
>
> So, this is currently regstrapping on x86_64-linux, all languages plus Ada.
> Okay for trunk (the revert is obvious, so it's really the gimplify.c hunk
> that needs approval)?
>
>
> Ciao,
> Michael.
>
> PR tree-optimization/46590
> * cfgexpand.c: Revert last change (r183305).
> * gimplify.c (gimplify_bind_expr): Add clobbers for all non-gimple
> regs.
>
> Index: gimplify.c
> ===================================================================
> --- gimplify.c (revision 183303)
> +++ gimplify.c (working copy)
> @@ -1226,12 +1226,11 @@ gimplify_bind_expr (tree *expr_p, gimple
> if (TREE_CODE (t) == VAR_DECL
> && !is_global_var (t)
> && DECL_CONTEXT (t) == current_function_decl
> - && !DECL_HARD_REGISTER (t)
> - && !TREE_THIS_VOLATILE (t)
> && !DECL_HAS_VALUE_EXPR_P (t)
> /* Only care for variables that have to be in memory. Others
> will be rewritten into SSA names, hence moved to the top-level.
> */
> - && needs_to_live_in_memory (t))
> + && !is_gimple_reg (t))
> +
Ok with the excessive vertical space removed.
Thanks,
Richard.
> {
> tree clobber = build_constructor (TREE_TYPE (t), NULL);
> TREE_THIS_VOLATILE (clobber) = 1;
> Index: cfgexpand.c
> ===================================================================
> --- cfgexpand.c (revision 183305)
> +++ cfgexpand.c (working copy)
> @@ -441,12 +441,11 @@ visit_conflict (gimple stmt ATTRIBUTE_UN
>
> /* Helper routine for add_scope_conflicts, calculating the active partitions
> at the end of BB, leaving the result in WORK. We're called to generate
> - conflicts when OLD_CONFLICTS is non-null, otherwise we're just tracking
> - liveness. If we generate conflicts then OLD_CONFLICTS stores the bits
> - for which we generated conflicts already. */
> + conflicts when FOR_CONFLICT is true, otherwise we're just tracking
> + liveness. */
>
> static void
> -add_scope_conflicts_1 (basic_block bb, bitmap work, bitmap old_conflicts)
> +add_scope_conflicts_1 (basic_block bb, bitmap work, bool for_conflict)
> {
> edge e;
> edge_iterator ei;
> @@ -483,7 +482,7 @@ add_scope_conflicts_1 (basic_block bb, b
> }
> else if (!is_gimple_debug (stmt))
> {
> - if (old_conflicts
> + if (for_conflict
> && visit == visit_op)
> {
> /* If this is the first real instruction in this BB we need
> @@ -491,27 +490,16 @@ add_scope_conflicts_1 (basic_block bb, b
> Unlike classical liveness for named objects we can't
> rely on seeing a def/use of the names we're interested in.
> There might merely be indirect loads/stores. We'd not add any
> - conflicts for such partitions. We know that we generated
> - conflicts between all partitions in old_conflicts already,
> - so we need to generate only the new ones, avoiding to
> - repeatedly pay the O(N^2) cost for each basic block. */
> + conflicts for such partitions. */
> bitmap_iterator bi;
> unsigned i;
> -
> - EXECUTE_IF_AND_COMPL_IN_BITMAP (work, old_conflicts, 0, i, bi)
> + EXECUTE_IF_SET_IN_BITMAP (work, 0, i, bi)
> {
> unsigned j;
> bitmap_iterator bj;
> - /* First the conflicts between new and old_conflicts. */
> - EXECUTE_IF_SET_IN_BITMAP (old_conflicts, 0, j, bj)
> - add_stack_var_conflict (i, j);
> - /* Then the conflicts between only the new members. */
> - EXECUTE_IF_AND_COMPL_IN_BITMAP (work, old_conflicts, i + 1,
> - j, bj)
> + EXECUTE_IF_SET_IN_BITMAP (work, i + 1, j, bj)
> add_stack_var_conflict (i, j);
> }
> - /* And remember for the next basic block. */
> - bitmap_ior_into (old_conflicts, work);
> visit = visit_conflict;
> }
> walk_stmt_load_store_addr_ops (stmt, work, visit, visit, visit);
> @@ -528,7 +516,6 @@ add_scope_conflicts (void)
> basic_block bb;
> bool changed;
> bitmap work = BITMAP_ALLOC (NULL);
> - bitmap old_conflicts;
>
> /* We approximate the live range of a stack variable by taking the first
> mention of its name as starting point(s), and by the end-of-scope
> @@ -550,18 +537,15 @@ add_scope_conflicts (void)
> FOR_EACH_BB (bb)
> {
> bitmap active = (bitmap)bb->aux;
> - add_scope_conflicts_1 (bb, work, NULL);
> + add_scope_conflicts_1 (bb, work, false);
> if (bitmap_ior_into (active, work))
> changed = true;
> }
> }
>
> - old_conflicts = BITMAP_ALLOC (NULL);
> -
> FOR_EACH_BB (bb)
> - add_scope_conflicts_1 (bb, work, old_conflicts);
> + add_scope_conflicts_1 (bb, work, true);
>
> - BITMAP_FREE (old_conflicts);
> BITMAP_FREE (work);
> FOR_ALL_BB (bb)
> BITMAP_FREE (bb->aux);
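
For readers skimming the cfgexpand.c hunk: the revert goes back to the plain
quadratic step that records a conflict between every pair of partitions live
at the same point, rather than trying to remember which pairs were already
handled via old_conflicts. Below is a standalone toy model of just that
pairwise step (plain C, not GCC code; NPART and the bool arrays are
illustrative stand-ins for GCC's partition bitmaps):

#include <stdbool.h>
#include <stdio.h>

#define NPART 4

/* conflict[i][j] plays the role of GCC's stack-var conflict bitmaps:
   partitions that are live at the same point must not share a slot.  */
static bool conflict[NPART][NPART];

/* Toy analogue of the restored EXECUTE_IF_SET_IN_BITMAP double loop:
   record a conflict for every pair of partitions live together.  */
static void
add_conflicts_for_live_set (const bool *live)
{
  for (int i = 0; i < NPART; i++)
    if (live[i])
      for (int j = i + 1; j < NPART; j++)
        if (live[j])
          conflict[i][j] = conflict[j][i] = true;
}

int
main (void)
{
  /* 0 and 1 are live together in one block, 2 and 3 in another, so 0 may
     later share a slot with 2 or 3 but never with 1.  */
  bool bb1[NPART] = { true, true, false, false };
  bool bb2[NPART] = { false, false, true, true };

  add_conflicts_for_live_set (bb1);
  add_conflicts_for_live_set (bb2);

  printf ("0-1: %d  0-2: %d\n", conflict[0][1], conflict[0][2]); /* prints 1, 0 */
  return 0;
}

The quadratic cost of this loop is exactly what r183305 tried to avoid; the
thread above concludes that doing so without sometimes adding artificial
conflicts (or missing real ones) isn't feasible, hence the revert.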