On Mon, 21 Jul 2025, Tamar Christina wrote:

> > -----Original Message-----
> > From: Richard Biener <rguent...@suse.de>
> > Sent: Friday, July 18, 2025 1:09 PM
> > To: gcc-patches@gcc.gnu.org
> > Cc: seg...@kernel.crashing.org; Jan Hubicka <hubi...@ucw.cz>; Richard Sandiford
> > <richard.sandif...@arm.com>; Tamar Christina <tamar.christ...@arm.com>;
> > rdapp....@gmail.com; j...@ventanamicro.com
> > Subject: [PATCH] [RFC] Move STMT_VINFO_TYPE to SLP_TREE_TYPE
> >
> > I am at a point where I want to store additional information from
> > analysis (from loads and stores) to re-use them at transform stage
> > without repeating the analysis.  I do not want to add to
> > stmt_vec_info at this point, so this starts adding kind specific
> > sub-structures by moving the STMT_VINFO_TYPE field to the SLP
> > tree and adding a (dummy for now) union tagged by it to receive
> > such data.
>
> I assume the plan is that every type becomes part of the union and
> some accessors are provided?
Yes.  I come here from vectorizable_{load,store} and want to avoid
calling vect_check_gather_scatter and get_load_store_type again at
transform time.  So that's where my immediate need comes from; the
vectorizable_* analysis data really belongs to the SLP node, not to a
stmt (the alternative would have been the SLP representative stmt).

> > The change is largely mechanical, but I didn't think of target
> > cost models here, much less of that of Risc-V.  I have fixed
> > all but Risc-V given all memory access add_stmt_cost calls should
> > now receive a SLP node.  Risc-V will fail to build after this.
> >
> > In this RFC I have settled for a union (supposed to get pointers
> > to data), if somebody has a strong opinion on doing it in other
> > ways please speak up.  I did once have the idea that analysis
> > should create a copy of the SLP graph on-the-fly, split up
> > according to the number of vector stmts required so we could
> > run post-codegen optimizations on it.  That would allow for
> > using a class inheritance hierarchy.
> >
> > As followup this enables getting rid of SLP_TREE_CODE and making
> > VEC_PERM therein a separate type, unifying its handling.
> >
> > Bootstrap and regtest running on x86_64-unknown-linux-gnu.
> > I've build-tested aarch64 and ppc64le.
> >
> > I'm not sure whom to ask for ppc approval, thus CCed Segher.
> > Are the x86/aarch64 changes OK eventually?
>
> AArch64 parts are OK, I assume for now we still get stmt_info
> but should start reworking the cost model to use slp_tree as
> much as possible?

For now we still get stmt_info.  I'd be interested to see cases where
we fail to pass the SLP node (there should be none).  Some costs do
not have a stmt_info (like SLP permutes), and some have neither a
stmt_info nor an SLP node (the costs we register for alias versioning
compares and branches).  But yes, for vector stmt costs the hooks
should use the SLP node (and its representative for data only - that's
the stmt_info you get passed for SLP nodes).

Richard.

> Thanks,
> Tamar
>
> > Can the risc-v people try to sort out this up to a point
> > where I can just s/STMT_VINFO_TYPE/SLP_TREE_TYPE there?
> >
> > Thanks,
> > Richard.
> >
> > 	* tree-vectorizer.h (_slp_tree::type): Add.
> > 	(_slp_tree::u): Likewise.
> > 	(_stmt_vec_info::type): Remove.
> > 	(STMT_VINFO_TYPE): Likewise.
> > 	(SLP_TREE_TYPE): New.
> > 	* tree-vectorizer.cc (vec_info::new_stmt_vec_info): Do not
> > 	initialize type.
> > 	* tree-vect-slp.cc (_slp_tree::_slp_tree): Initialize type.
> > 	(vect_slp_analyze_node_operations): Adjust.
> > 	(vect_schedule_slp_node): Likewise.
> > 	* tree-vect-patterns.cc (vect_init_pattern_stmt): Do not
> > 	copy STMT_VINFO_TYPE.
> > 	* tree-vect-loop.cc: Set SLP_TREE_TYPE instead of
> > 	STMT_VINFO_TYPE everywhere.
> > 	(vect_create_loop_vinfo): Do not set STMT_VINFO_TYPE on
> > 	loop conditions.
> > 	* tree-vect-stmts.cc: Set SLP_TREE_TYPE instead of
> > 	STMT_VINFO_TYPE everywhere.
> > 	(vect_analyze_stmt): Adjust.
> > 	(vect_transform_stmt): Likewise.
> > 	* config/aarch64/aarch64.cc (aarch64_vector_costs::count_ops):
> > 	Access SLP_TREE_TYPE instead of STMT_VINFO_TYPE.
> > 	* config/i386/i386.cc (ix86_vector_costs::add_stmt_cost):
> > 	Remove non-SLP element-wise load/store matching.
> > 	* config/rs6000/rs6000.cc
> > 	(rs6000_cost_data::update_target_cost_per_stmt): Pass in
> > 	the SLP node.  Use that to get at the memory access
> > 	kind and type.
> > 	(rs6000_cost_data::add_stmt_cost): Pass down SLP node.
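As a self-contained illustration of the direction discussed above (every
kind getting a union member plus accessors), a rough sketch follows.  It
is not GCC code and not part of the patch, which for now only carries the
dummy undef member; the load_store_data type, its fields and the
slp_node_load_store_data accessor are hypothetical stand-ins for whatever
vectorizable_{load,store} ends up caching.  Only the SLP_TREE_TYPE macro
and the u.undef initialization mirror what the posted patch actually adds.

/* Standalone sketch modelling the proposed _slp_tree layout: a TYPE tag
   plus a union of pointers to kind-specific analysis data, with one
   possible accessor style.  */

#include <cassert>
#include <cstdio>

enum stmt_vec_info_type { undef_vec_info_type = 0, load_vec_info_type,
			  store_vec_info_type };

/* Trimmed stand-in for vect_memory_access_type.  */
enum vect_memory_access_type { VMAT_CONTIGUOUS, VMAT_GATHER_SCATTER };

/* Hypothetical per-node data a load/store analysis could cache so the
   transform phase need not redo get_load_store_type or
   vect_check_gather_scatter.  */
struct load_store_data
{
  vect_memory_access_type memory_access_type;
  bool strided_p;
};

struct slp_tree_sketch
{
  stmt_vec_info_type type;
  union
  {
    void *undef;			/* dummy, as in the posted patch */
    load_store_data *load_store;	/* valid for load/store nodes only */
  } u;

  slp_tree_sketch () : type (undef_vec_info_type) { u.undef = nullptr; }
};

/* Accessor in the usual macro style, plus a checked, typed variant.  */
#define SLP_TREE_TYPE(S) (S)->type

static load_store_data *
slp_node_load_store_data (slp_tree_sketch *node)
{
  assert (SLP_TREE_TYPE (node) == load_vec_info_type
	  || SLP_TREE_TYPE (node) == store_vec_info_type);
  return node->u.load_store;
}

int
main ()
{
  slp_tree_sketch node;
  load_store_data analysis = { VMAT_GATHER_SCATTER, false };

  /* Analysis phase: tag the node and stash the analysis result.  */
  SLP_TREE_TYPE (&node) = load_vec_info_type;
  node.u.load_store = &analysis;

  /* Transform phase: re-use the cached result instead of recomputing.  */
  load_store_data *cached = slp_node_load_store_data (&node);
  printf ("memory access type %d, strided %d\n",
	  (int) cached->memory_access_type, (int) cached->strided_p);
  return 0;
}

The checked accessor keeps the tag/union pairing explicit; the SLP-graph
copy idea mentioned above would instead allow replacing the union with a
class hierarchy.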
> > --- > > gcc/config/aarch64/aarch64.cc | 2 +- > > gcc/config/i386/i386.cc | 24 +++++-------- > > gcc/config/rs6000/rs6000.cc | 16 +++++---- > > gcc/tree-vect-loop.cc | 25 ++++++-------- > > gcc/tree-vect-patterns.cc | 1 - > > gcc/tree-vect-slp.cc | 19 +++++------ > > gcc/tree-vect-stmts.cc | 34 +++++++++---------- > > gcc/tree-vectorizer.cc | 1 - > > gcc/tree-vectorizer.h | 63 +++++++++++++++++++---------------- > > 9 files changed, 88 insertions(+), 97 deletions(-) > > > > diff --git a/gcc/config/aarch64/aarch64.cc b/gcc/config/aarch64/aarch64.cc > > index 0485f695941..dea94c95ff1 100644 > > --- a/gcc/config/aarch64/aarch64.cc > > +++ b/gcc/config/aarch64/aarch64.cc > > @@ -17712,7 +17712,7 @@ aarch64_vector_costs::count_ops (unsigned int > > count, vect_cost_for_stmt kind, > > { > > if (gimple_vuse (SSA_NAME_DEF_STMT (offset))) > > { > > - if (STMT_VINFO_TYPE (stmt_info) == load_vec_info_type) > > + if (SLP_TREE_TYPE (node) == load_vec_info_type) > > ops->loads += count - 1; > > else > > /* Stores want to count both the index to array and > > data > > to > > diff --git a/gcc/config/i386/i386.cc b/gcc/config/i386/i386.cc > > index 49bd3939eb4..8c26c67072c 100644 > > --- a/gcc/config/i386/i386.cc > > +++ b/gcc/config/i386/i386.cc > > @@ -26122,23 +26122,15 @@ ix86_vector_costs::add_stmt_cost (int count, > > vect_cost_for_stmt kind, > > (AGU and load ports). Try to account for this by scaling the > > construction cost by the number of elements involved. */ > > if ((kind == vec_construct || kind == vec_to_scalar) > > - && ((stmt_info > > - && (STMT_VINFO_TYPE (stmt_info) == load_vec_info_type > > - || STMT_VINFO_TYPE (stmt_info) == store_vec_info_type) > > - && ((STMT_VINFO_MEMORY_ACCESS_TYPE (stmt_info) == > > VMAT_ELEMENTWISE > > - && (TREE_CODE (DR_STEP (STMT_VINFO_DATA_REF (stmt_info))) > > + && ((node > > + && (((SLP_TREE_MEMORY_ACCESS_TYPE (node) == > > VMAT_ELEMENTWISE > > + || (SLP_TREE_MEMORY_ACCESS_TYPE (node) == > > VMAT_STRIDED_SLP > > + && SLP_TREE_LANES (node) == 1)) > > + && (TREE_CODE (DR_STEP (STMT_VINFO_DATA_REF > > + (SLP_TREE_REPRESENTATIVE (node)))) > > != INTEGER_CST)) > > - || (STMT_VINFO_MEMORY_ACCESS_TYPE (stmt_info) > > - == VMAT_GATHER_SCATTER))) > > - || (node > > - && (((SLP_TREE_MEMORY_ACCESS_TYPE (node) == > > VMAT_ELEMENTWISE > > - || (SLP_TREE_MEMORY_ACCESS_TYPE (node) == > > VMAT_STRIDED_SLP > > - && SLP_TREE_LANES (node) == 1)) > > - && (TREE_CODE (DR_STEP (STMT_VINFO_DATA_REF > > - (SLP_TREE_REPRESENTATIVE (node)))) > > - != INTEGER_CST)) > > - || (SLP_TREE_MEMORY_ACCESS_TYPE (node) > > - == VMAT_GATHER_SCATTER))))) > > + || (SLP_TREE_MEMORY_ACCESS_TYPE (node) > > + == VMAT_GATHER_SCATTER))))) > > { > > stmt_cost = ix86_builtin_vectorization_cost (kind, vectype, > > misalign); > > stmt_cost *= (TYPE_VECTOR_SUBPARTS (vectype) + 1); > > diff --git a/gcc/config/rs6000/rs6000.cc b/gcc/config/rs6000/rs6000.cc > > index 7ee26e52b13..1b2a4730ccb 100644 > > --- a/gcc/config/rs6000/rs6000.cc > > +++ b/gcc/config/rs6000/rs6000.cc > > @@ -5165,6 +5165,7 @@ public: > > > > protected: > > void update_target_cost_per_stmt (vect_cost_for_stmt, stmt_vec_info, > > + slp_tree node, > > vect_cost_model_location, unsigned int); > > void density_test (loop_vec_info); > > void adjust_vect_cost_per_loop (loop_vec_info); > > @@ -5312,6 +5313,7 @@ rs6000_adjust_vect_cost_per_stmt (enum > > vect_cost_for_stmt kind, > > void > > rs6000_cost_data::update_target_cost_per_stmt (vect_cost_for_stmt kind, > > stmt_vec_info stmt_info, > > + slp_tree node, > > vect_cost_model_location 
where, > > unsigned int orig_count) > > { > > @@ -5372,12 +5374,12 @@ rs6000_cost_data::update_target_cost_per_stmt > > (vect_cost_for_stmt kind, > > or may not need to apply. When finalizing the cost of the loop, > > the extra penalty is applied when the load density heuristics > > are satisfied. */ > > - if (kind == vec_construct && stmt_info > > - && STMT_VINFO_TYPE (stmt_info) == load_vec_info_type > > - && (STMT_VINFO_MEMORY_ACCESS_TYPE (stmt_info) == > > VMAT_ELEMENTWISE > > - || STMT_VINFO_MEMORY_ACCESS_TYPE (stmt_info) == > > VMAT_STRIDED_SLP)) > > + if (kind == vec_construct && node > > + && SLP_TREE_TYPE (node) == load_vec_info_type > > + && (SLP_TREE_MEMORY_ACCESS_TYPE (node) == VMAT_ELEMENTWISE > > + || SLP_TREE_MEMORY_ACCESS_TYPE (node) == VMAT_STRIDED_SLP)) > > { > > - tree vectype = STMT_VINFO_VECTYPE (stmt_info); > > + tree vectype = SLP_TREE_VECTYPE (node); > > unsigned int nunits = vect_nunits_for_cost (vectype); > > /* As PR103702 shows, it's possible that vectorizer wants to do > > costings for only one unit here, it's no need to do any > > @@ -5406,7 +5408,7 @@ rs6000_cost_data::update_target_cost_per_stmt > > (vect_cost_for_stmt kind, > > > > unsigned > > rs6000_cost_data::add_stmt_cost (int count, vect_cost_for_stmt kind, > > - stmt_vec_info stmt_info, slp_tree, > > + stmt_vec_info stmt_info, slp_tree node, > > tree vectype, int misalign, > > vect_cost_model_location where) > > { > > @@ -5424,7 +5426,7 @@ rs6000_cost_data::add_stmt_cost (int count, > > vect_cost_for_stmt kind, > > retval = adjust_cost_for_freq (stmt_info, where, count * stmt_cost); > > m_costs[where] += retval; > > > > - update_target_cost_per_stmt (kind, stmt_info, where, orig_count); > > + update_target_cost_per_stmt (kind, stmt_info, node, where, > > orig_count); > > } > > > > return retval; > > diff --git a/gcc/tree-vect-loop.cc b/gcc/tree-vect-loop.cc > > index 712e6f368ad..82681081476 100644 > > --- a/gcc/tree-vect-loop.cc > > +++ b/gcc/tree-vect-loop.cc > > @@ -1919,7 +1919,6 @@ vect_create_loop_vinfo (class loop *loop, > > vec_info_shared *shared, > > for (gcond *cond : info->conds) > > { > > stmt_vec_info loop_cond_info = loop_vinfo->lookup_stmt (cond); > > - STMT_VINFO_TYPE (loop_cond_info) = loop_exit_ctrl_vec_info_type; > > /* Mark the statement as a condition. */ > > STMT_VINFO_DEF_TYPE (loop_cond_info) = vect_condition_def; > > } > > @@ -1936,9 +1935,6 @@ vect_create_loop_vinfo (class loop *loop, > > vec_info_shared *shared, > > > > if (info->inner_loop_cond) > > { > > - stmt_vec_info inner_loop_cond_info > > - = loop_vinfo->lookup_stmt (info->inner_loop_cond); > > - STMT_VINFO_TYPE (inner_loop_cond_info) = > > loop_exit_ctrl_vec_info_type; > > /* If we have an estimate on the number of iterations of the inner > > loop use that to limit the scale for costing, otherwise use > > --param vect-inner-loop-cost-factor literally. */ > > @@ -7158,7 +7154,7 @@ vectorizable_lane_reducing (loop_vec_info loop_vinfo, > > stmt_vec_info stmt_info, > > } > > > > /* Transform via vect_transform_reduction. */ > > - STMT_VINFO_TYPE (stmt_info) = reduc_vec_info_type; > > + SLP_TREE_TYPE (slp_node) = reduc_vec_info_type; > > return true; > > } > > > > @@ -7260,18 +7256,17 @@ vectorizable_reduction (loop_vec_info loop_vinfo, > > } > > /* Analysis for double-reduction is done on the outer > > loop PHI, nested cycles have no further restrictions. 
*/ > > - STMT_VINFO_TYPE (stmt_info) = cycle_phi_info_type; > > + SLP_TREE_TYPE (slp_node) = cycle_phi_info_type; > > } > > else > > - STMT_VINFO_TYPE (stmt_info) = reduc_vec_info_type; > > + SLP_TREE_TYPE (slp_node) = reduc_vec_info_type; > > return true; > > } > > > > - stmt_vec_info orig_stmt_of_analysis = stmt_info; > > stmt_vec_info phi_info = stmt_info; > > if (!is_a <gphi *> (stmt_info->stmt)) > > { > > - STMT_VINFO_TYPE (stmt_info) = reduc_vec_info_type; > > + SLP_TREE_TYPE (slp_node) = reduc_vec_info_type; > > return true; > > } > > if (STMT_VINFO_DEF_TYPE (stmt_info) == vect_double_reduction_def) > > @@ -8081,7 +8076,7 @@ vectorizable_reduction (loop_vec_info loop_vinfo, > > && reduction_type == FOLD_LEFT_REDUCTION) > > dump_printf_loc (MSG_NOTE, vect_location, > > "using an in-order (fold-left) reduction.\n"); > > - STMT_VINFO_TYPE (orig_stmt_of_analysis) = cycle_phi_info_type; > > + SLP_TREE_TYPE (slp_node) = cycle_phi_info_type; > > > > /* All but single defuse-cycle optimized and fold-left reductions go > > through their own vectorizable_* routines. */ > > @@ -8765,7 +8760,7 @@ vectorizable_lc_phi (loop_vec_info loop_vinfo, > > "incompatible vector types for invariants\n"); > > return false; > > } > > - STMT_VINFO_TYPE (stmt_info) = lc_phi_info_type; > > + SLP_TREE_TYPE (slp_node) = lc_phi_info_type; > > return true; > > } > > > > @@ -8850,7 +8845,7 @@ vectorizable_phi (vec_info *, > > if (gimple_phi_num_args (as_a <gphi *> (stmt_info->stmt)) > 1) > > record_stmt_cost (cost_vec, SLP_TREE_NUMBER_OF_VEC_STMTS > > (slp_node), > > vector_stmt, stmt_info, vectype, 0, vect_body); > > - STMT_VINFO_TYPE (stmt_info) = phi_info_type; > > + SLP_TREE_TYPE (slp_node) = phi_info_type; > > return true; > > } > > > > @@ -9037,7 +9032,7 @@ vectorizable_recurr (loop_vec_info loop_vinfo, > > stmt_vec_info stmt_info, > > "prologue_cost = %d .\n", inside_cost, > > prologue_cost); > > > > - STMT_VINFO_TYPE (stmt_info) = recurr_info_type; > > + SLP_TREE_TYPE (slp_node) = recurr_info_type; > > return true; > > } > > > > @@ -9578,7 +9573,7 @@ vectorizable_nonlinear_induction (loop_vec_info > > loop_vinfo, > > "prologue_cost = %d. 
\n", inside_cost, > > prologue_cost); > > > > - STMT_VINFO_TYPE (stmt_info) = induc_vec_info_type; > > + SLP_TREE_TYPE (slp_node) = induc_vec_info_type; > > DUMP_VECT_SCOPE ("vectorizable_nonlinear_induction"); > > return true; > > } > > @@ -9880,7 +9875,7 @@ vectorizable_induction (loop_vec_info loop_vinfo, > > "prologue_cost = %d .\n", inside_cost, > > prologue_cost); > > > > - STMT_VINFO_TYPE (stmt_info) = induc_vec_info_type; > > + SLP_TREE_TYPE (slp_node) = induc_vec_info_type; > > DUMP_VECT_SCOPE ("vectorizable_induction"); > > return true; > > } > > diff --git a/gcc/tree-vect-patterns.cc b/gcc/tree-vect-patterns.cc > > index 0f6d6b77ea1..888aaa75fe2 100644 > > --- a/gcc/tree-vect-patterns.cc > > +++ b/gcc/tree-vect-patterns.cc > > @@ -130,7 +130,6 @@ vect_init_pattern_stmt (vec_info *vinfo, gimple > > *pattern_stmt, > > STMT_VINFO_RELATED_STMT (pattern_stmt_info) = orig_stmt_info; > > STMT_VINFO_DEF_TYPE (pattern_stmt_info) > > = STMT_VINFO_DEF_TYPE (orig_stmt_info); > > - STMT_VINFO_TYPE (pattern_stmt_info) = STMT_VINFO_TYPE (orig_stmt_info); > > if (!STMT_VINFO_VECTYPE (pattern_stmt_info)) > > { > > gcc_assert (!vectype > > diff --git a/gcc/tree-vect-slp.cc b/gcc/tree-vect-slp.cc > > index 7c23496b5e0..8de9bca8743 100644 > > --- a/gcc/tree-vect-slp.cc > > +++ b/gcc/tree-vect-slp.cc > > @@ -130,6 +130,8 @@ _slp_tree::_slp_tree () > > this->failed = NULL; > > this->max_nunits = 1; > > this->lanes = 0; > > + SLP_TREE_TYPE (this) = undef_vec_info_type; > > + this->u.undef = NULL; > > } > > > > /* Tear down a SLP node. */ > > @@ -8257,8 +8259,7 @@ vect_slp_analyze_node_operations (vec_info *vinfo, > > slp_tree node, > > /* Masked loads can have an undefined (default SSA definition) > > else operand. We do not need to cost it. */ > > vec<tree> ops = SLP_TREE_SCALAR_OPS (child); > > - if ((STMT_VINFO_TYPE (SLP_TREE_REPRESENTATIVE (node)) > > - == load_vec_info_type) > > + if (SLP_TREE_TYPE (node) == load_vec_info_type > > && ((ops.length () > > && TREE_CODE (ops[0]) == SSA_NAME > > && SSA_NAME_IS_DEFAULT_DEF (ops[0]) > > @@ -8269,8 +8270,7 @@ vect_slp_analyze_node_operations (vec_info *vinfo, > > slp_tree node, > > /* For shifts with a scalar argument we don't need > > to cost or code-generate anything. > > ??? Represent this more explicitely. */ > > - gcc_assert ((STMT_VINFO_TYPE (SLP_TREE_REPRESENTATIVE (node)) > > - == shift_vec_info_type) > > + gcc_assert (SLP_TREE_TYPE (node) == shift_vec_info_type > > && j == 1); > > continue; > > } > > @@ -11307,9 +11307,9 @@ vect_schedule_slp_node (vec_info *vinfo, > > si = gsi_for_stmt (last_stmt_info->stmt); > > } > > else if (SLP_TREE_CODE (node) != VEC_PERM_EXPR > > - && (STMT_VINFO_TYPE (stmt_info) == cycle_phi_info_type > > - || STMT_VINFO_TYPE (stmt_info) == induc_vec_info_type > > - || STMT_VINFO_TYPE (stmt_info) == phi_info_type)) > > + && (SLP_TREE_TYPE (node) == cycle_phi_info_type > > + || SLP_TREE_TYPE (node) == induc_vec_info_type > > + || SLP_TREE_TYPE (node) == phi_info_type)) > > { > > /* For PHI node vectorization we do not use the insertion iterator. > > */ > > si = gsi_none (); > > @@ -11329,8 +11329,7 @@ vect_schedule_slp_node (vec_info *vinfo, > > last scalar def here. 
*/ > > if (SLP_TREE_VEC_DEFS (child).is_empty ()) > > { > > - gcc_assert (STMT_VINFO_TYPE (SLP_TREE_REPRESENTATIVE > > (child)) > > - == cycle_phi_info_type); > > + gcc_assert (SLP_TREE_TYPE (child) == cycle_phi_info_type); > > gphi *phi = as_a <gphi *> > > (vect_find_last_scalar_stmt_in_slp (child)->stmt); > > if (!last_stmt) > > @@ -11477,7 +11476,7 @@ vect_schedule_slp_node (vec_info *vinfo, > > if (dump_enabled_p ()) > > dump_printf_loc (MSG_NOTE, vect_location, > > "------>vectorizing SLP permutation node\n"); > > - /* ??? the transform kind is stored to STMT_VINFO_TYPE which might > > + /* ??? the transform kind was stored to STMT_VINFO_TYPE which might > > be shared with different SLP nodes (but usually it's the same > > operation apart from the case the stmt is only there for denoting > > the actual scalar lane defs ...). So do not call vect_transform_stmt > > diff --git a/gcc/tree-vect-stmts.cc b/gcc/tree-vect-stmts.cc > > index 2e9b3d2e686..86b6904facf 100644 > > --- a/gcc/tree-vect-stmts.cc > > +++ b/gcc/tree-vect-stmts.cc > > @@ -3299,7 +3299,7 @@ vectorizable_bswap (vec_info *vinfo, > > return false; > > } > > > > - STMT_VINFO_TYPE (stmt_info) = call_vec_info_type; > > + SLP_TREE_TYPE (slp_node) = call_vec_info_type; > > DUMP_VECT_SCOPE ("vectorizable_bswap"); > > record_stmt_cost (cost_vec, > > 1, vector_stmt, stmt_info, 0, vect_prologue); > > @@ -3650,7 +3650,7 @@ vectorizable_call (vec_info *vinfo, > > "incompatible vector types for invariants\n"); > > return false; > > } > > - STMT_VINFO_TYPE (stmt_info) = call_vec_info_type; > > + SLP_TREE_TYPE (slp_node) = call_vec_info_type; > > DUMP_VECT_SCOPE ("vectorizable_call"); > > vect_model_simple_cost (vinfo, ncopies, dt, ndts, slp_node, > > cost_vec); > > if (ifn != IFN_LAST && modifier == NARROW && !slp_node) > > @@ -4617,7 +4617,7 @@ vectorizable_simd_clone_call (vec_info *vinfo, > > stmt_vec_info stmt_info, > > LOOP_VINFO_CAN_USE_PARTIAL_VECTORS_P (loop_vinfo) = false; > > } > > > > - STMT_VINFO_TYPE (stmt_info) = call_simd_clone_vec_info_type; > > + SLP_TREE_TYPE (slp_node) = call_simd_clone_vec_info_type; > > DUMP_VECT_SCOPE ("vectorizable_simd_clone_call"); > > /* vect_model_simple_cost (vinfo, ncopies, dt, slp_node, cost_vec); */ > > return true; > > @@ -5830,13 +5830,13 @@ vectorizable_conversion (vec_info *vinfo, > > DUMP_VECT_SCOPE ("vectorizable_conversion"); > > if (modifier == NONE) > > { > > - STMT_VINFO_TYPE (stmt_info) = type_conversion_vec_info_type; > > + SLP_TREE_TYPE (slp_node) = type_conversion_vec_info_type; > > vect_model_simple_cost (vinfo, (1 + multi_step_cvt), > > dt, ndts, slp_node, cost_vec); > > } > > else if (modifier == NARROW_SRC || modifier == NARROW_DST) > > { > > - STMT_VINFO_TYPE (stmt_info) = type_demotion_vec_info_type; > > + SLP_TREE_TYPE (slp_node) = type_demotion_vec_info_type; > > /* The final packing step produces one vector result per copy. */ > > unsigned int nvectors = SLP_TREE_NUMBER_OF_VEC_STMTS (slp_node); > > vect_model_promotion_demotion_cost (stmt_info, dt, nvectors, > > @@ -5845,7 +5845,7 @@ vectorizable_conversion (vec_info *vinfo, > > } > > else > > { > > - STMT_VINFO_TYPE (stmt_info) = type_promotion_vec_info_type; > > + SLP_TREE_TYPE (slp_node) = type_promotion_vec_info_type; > > /* The initial unpacking step produces two vector results > > per copy. MULTI_STEP_CVT is 0 for a single conversion, > > so >> MULTI_STEP_CVT divides by 2^(number of steps - 1). 
*/ > > @@ -6197,7 +6197,7 @@ vectorizable_assignment (vec_info *vinfo, > > "incompatible vector types for invariants\n"); > > return false; > > } > > - STMT_VINFO_TYPE (stmt_info) = assignment_vec_info_type; > > + SLP_TREE_TYPE (slp_node) = assignment_vec_info_type; > > DUMP_VECT_SCOPE ("vectorizable_assignment"); > > if (!vect_nop_conversion_p (stmt_info)) > > vect_model_simple_cost (vinfo, ncopies, dt, ndts, slp_node, cost_vec); > > @@ -6568,7 +6568,7 @@ vectorizable_shift (vec_info *vinfo, > > == INTEGER_CST)); > > } > > } > > - STMT_VINFO_TYPE (stmt_info) = shift_vec_info_type; > > + SLP_TREE_TYPE (slp_node) = shift_vec_info_type; > > DUMP_VECT_SCOPE ("vectorizable_shift"); > > vect_model_simple_cost (vinfo, ncopies, dt, > > scalar_shift_arg ? 1 : ndts, slp_node, cost_vec); > > @@ -7004,7 +7004,7 @@ vectorizable_operation (vec_info *vinfo, > > return false; > > } > > > > - STMT_VINFO_TYPE (stmt_info) = op_vec_info_type; > > + SLP_TREE_TYPE (slp_node) = op_vec_info_type; > > DUMP_VECT_SCOPE ("vectorizable_operation"); > > vect_model_simple_cost (vinfo, 1, dt, ndts, slp_node, cost_vec); > > if (using_emulated_vectors_p) > > @@ -8463,7 +8463,7 @@ vectorizable_store (vec_info *vinfo, > > dump_printf_loc (MSG_NOTE, vect_location, > > "Vectorizing an unaligned access.\n"); > > > > - STMT_VINFO_TYPE (stmt_info) = store_vec_info_type; > > + SLP_TREE_TYPE (slp_node) = store_vec_info_type; > > } > > gcc_assert (memory_access_type == SLP_TREE_MEMORY_ACCESS_TYPE > > (stmt_info)); > > > > @@ -10144,7 +10144,7 @@ vectorizable_load (vec_info *vinfo, > > if (memory_access_type == VMAT_LOAD_STORE_LANES) > > vinfo->any_known_not_updated_vssa = true; > > > > - STMT_VINFO_TYPE (stmt_info) = load_vec_info_type; > > + SLP_TREE_TYPE (slp_node) = load_vec_info_type; > > } > > else > > { > > @@ -12377,7 +12377,7 @@ vectorizable_condition (vec_info *vinfo, > > } > > } > > > > - STMT_VINFO_TYPE (stmt_info) = condition_vec_info_type; > > + SLP_TREE_TYPE (slp_node) = condition_vec_info_type; > > vect_model_simple_cost (vinfo, ncopies, dts, ndts, slp_node, > > cost_vec, kind); > > return true; > > @@ -12911,7 +12911,7 @@ vectorizable_comparison (vec_info *vinfo, > > return false; > > > > if (!vec_stmt) > > - STMT_VINFO_TYPE (stmt_info) = comparison_vec_info_type; > > + SLP_TREE_TYPE (slp_node) = comparison_vec_info_type; > > > > return true; > > } > > @@ -13377,8 +13377,8 @@ vect_analyze_stmt (vec_info *vinfo, > > /* Stmts that are (also) "live" (i.e. - that are used out of the loop) > > need extra handling, except for vectorizable reductions. 
*/ > > if (!bb_vinfo > > - && STMT_VINFO_TYPE (stmt_info) != reduc_vec_info_type > > - && (STMT_VINFO_TYPE (stmt_info) != lc_phi_info_type > > + && SLP_TREE_TYPE (node) != reduc_vec_info_type > > + && (SLP_TREE_TYPE (node) != lc_phi_info_type > > || STMT_VINFO_DEF_TYPE (stmt_info) == vect_internal_def) > > && (!node->ldst_lanes || SLP_TREE_CODE (node) == VEC_PERM_EXPR) > > && !can_vectorize_live_stmts (as_a <loop_vec_info> (vinfo), > > @@ -13416,7 +13416,7 @@ vect_transform_stmt (vec_info *vinfo, > > if (slp_node) > > STMT_VINFO_VECTYPE (stmt_info) = SLP_TREE_VECTYPE (slp_node); > > > > - switch (STMT_VINFO_TYPE (stmt_info)) > > + switch (SLP_TREE_TYPE (slp_node)) > > { > > case type_demotion_vec_info_type: > > case type_promotion_vec_info_type: > > @@ -13547,7 +13547,7 @@ vect_transform_stmt (vec_info *vinfo, > > if (!slp_node && vec_stmt) > > gcc_assert (STMT_VINFO_VEC_STMTS (stmt_info).exists ()); > > > > - if (STMT_VINFO_TYPE (stmt_info) != store_vec_info_type > > + if (SLP_TREE_TYPE (slp_node) != store_vec_info_type > > && (!slp_node > > || !slp_node->ldst_lanes > > || SLP_TREE_CODE (slp_node) == VEC_PERM_EXPR)) > > diff --git a/gcc/tree-vectorizer.cc b/gcc/tree-vectorizer.cc > > index 89fecd78088..03e5dcd576d 100644 > > --- a/gcc/tree-vectorizer.cc > > +++ b/gcc/tree-vectorizer.cc > > @@ -715,7 +715,6 @@ vec_info::new_stmt_vec_info (gimple *stmt) > > stmt_vec_info res = XCNEW (class _stmt_vec_info); > > res->stmt = stmt; > > > > - STMT_VINFO_TYPE (res) = undef_vec_info_type; > > STMT_VINFO_RELEVANT (res) = vect_unused_in_scope; > > STMT_VINFO_VECTORIZABLE (res) = true; > > STMT_VINFO_REDUC_TYPE (res) = TREE_CODE_REDUCTION; > > diff --git a/gcc/tree-vectorizer.h b/gcc/tree-vectorizer.h > > index a811e009835..c0be7dc7944 100644 > > --- a/gcc/tree-vectorizer.h > > +++ b/gcc/tree-vectorizer.h > > @@ -210,6 +210,32 @@ enum vect_memory_access_type { > > VMAT_GATHER_SCATTER > > }; > > > > +/*-----------------------------------------------------------------*/ > > +/* Info on vectorized defs. */ > > +/*-----------------------------------------------------------------*/ > > +enum stmt_vec_info_type { > > + undef_vec_info_type = 0, > > + load_vec_info_type, > > + store_vec_info_type, > > + shift_vec_info_type, > > + op_vec_info_type, > > + call_vec_info_type, > > + call_simd_clone_vec_info_type, > > + assignment_vec_info_type, > > + condition_vec_info_type, > > + comparison_vec_info_type, > > + reduc_vec_info_type, > > + induc_vec_info_type, > > + type_promotion_vec_info_type, > > + type_demotion_vec_info_type, > > + type_conversion_vec_info_type, > > + cycle_phi_info_type, > > + lc_phi_info_type, > > + phi_info_type, > > + recurr_info_type, > > + loop_exit_ctrl_vec_info_type > > +}; > > + > > > > /********************************************************************** > > ** > > SLP > > > > ********************************************************************** > > **/ > > @@ -284,6 +310,13 @@ struct _slp_tree { > > for loop vectorization. */ > > vect_memory_access_type memory_access_type; > > > > + /* The kind of operation as determined by analysis and a tagged > > + union with kind specific data. */ > > + enum stmt_vec_info_type type; > > + union { > > + void *undef; > > + } u; > > + > > /* If not NULL this is a cached failed SLP discovery attempt with > > the lanes that failed during SLP discovery as 'false'. This is > > a copy of the matches array. 
*/ > > @@ -369,6 +402,7 @@ public: > > #define SLP_TREE_LANES(S) (S)->lanes > > #define SLP_TREE_CODE(S) (S)->code > > #define SLP_TREE_MEMORY_ACCESS_TYPE(S) (S)- > > >memory_access_type > > +#define SLP_TREE_TYPE(S) (S)->type > > > > enum vect_partial_vector_style { > > vect_partial_vectors_none, > > @@ -1216,32 +1250,6 @@ public: > > #define BB_VINFO_DATAREFS(B) (B)->shared->datarefs > > #define BB_VINFO_DDRS(B) (B)->shared->ddrs > > > > -/*-----------------------------------------------------------------*/ > > -/* Info on vectorized defs. */ > > -/*-----------------------------------------------------------------*/ > > -enum stmt_vec_info_type { > > - undef_vec_info_type = 0, > > - load_vec_info_type, > > - store_vec_info_type, > > - shift_vec_info_type, > > - op_vec_info_type, > > - call_vec_info_type, > > - call_simd_clone_vec_info_type, > > - assignment_vec_info_type, > > - condition_vec_info_type, > > - comparison_vec_info_type, > > - reduc_vec_info_type, > > - induc_vec_info_type, > > - type_promotion_vec_info_type, > > - type_demotion_vec_info_type, > > - type_conversion_vec_info_type, > > - cycle_phi_info_type, > > - lc_phi_info_type, > > - phi_info_type, > > - recurr_info_type, > > - loop_exit_ctrl_vec_info_type > > -}; > > - > > /* Indicates whether/how a variable is used in the scope of loop/basic > > block. */ > > enum vect_relevant { > > @@ -1334,8 +1342,6 @@ typedef struct data_reference *dr_p; > > class _stmt_vec_info { > > public: > > > > - enum stmt_vec_info_type type; > > - > > /* Indicates whether this stmts is part of a computation whose result is > > used outside the loop. */ > > bool live; > > @@ -1581,7 +1587,6 @@ struct gather_scatter_info { > > }; > > > > /* Access Functions. */ > > -#define STMT_VINFO_TYPE(S) (S)->type > > #define STMT_VINFO_STMT(S) (S)->stmt > > #define STMT_VINFO_RELEVANT(S) (S)->relevant > > #define STMT_VINFO_LIVE_P(S) (S)->live > > -- > > 2.43.0 > > -- Richard Biener <rguent...@suse.de> SUSE Software Solutions Germany GmbH, Frankenstrasse 146, 90461 Nuernberg, Germany; GF: Ivo Totev, Andrew McDonald, Werner Knoblich; (HRB 36809, AG Nuernberg)