On Tue, Nov 5, 2019 at 3:09 PM Richard Sandiford
<richard.sandif...@arm.com> wrote:
>
> Richard Biener <richard.guent...@gmail.com> writes:
> > On Tue, Oct 29, 2019 at 6:05 PM Richard Sandiford
> > <richard.sandif...@arm.com> wrote:
> >>
> >> The BB vectoriser picked vector types in the same way as the loop
> >> vectoriser: it picked a vector mode/size for the region and then
> >> based all the vector types off that choice.  This meant we could
> >> end up trying to use vector types that had too many elements for
> >> the group size.
> >>
> >> The main part of this patch is therefore about passing the SLP
> >> group size down to routines like get_vectype_for_scalar_type and
> >> ensuring that each vector type in the SLP tree is chosen wrt the
> >> group size.  That part in itself is pretty easy and mechanical.
> >>
> >> The main warts are:
> >>
> >> (1) We normally pick a STMT_VINFO_VECTYPE for data references at an
> >>     early stage (vect_analyze_data_refs).  However, nothing in the
> >>     BB vectoriser relied on this, or on the min_vf calculated from it.
> >>     I couldn't see anything other than vect_recog_bool_pattern that
> >>     tried to access the vector type before the SLP tree was built.
> >
> > So can you not set STMT_VINFO_VECTYPE for data refs with BB vectorization
> > then?
>
> Yeah, the patch stops us from setting it during vect_analyze_data_refs.
> We still need to set it later when building the SLP tree, just like
> we do for other statements.
>
> >> (2) It's possible for the same statement to be used in groups of
> >>     different sizes.  Taking the group size into account meant that
> >>     we could try to pick different vector types for the same statement.
> >
> > That only happens when we have multiple SLP instances though
> > (entries into the shared SLP graph).
>
> Yeah.
>
> > It probably makes sense to keep handling SLP instances that share
> > stmts together for costing reasons, but one issue is that for
> > disjoint pieces (in the same BB), disqualifying one cost-wise
> > disqualifies all.  So at some point during analysis (which should
> > eventually cover more than a single BB) we want to split the graph.
> > It probably doesn't help the above case.
>
> Yeah, sounds like there are two issues: one with sharing stmt_vec_infos
> between multiple SLP nodes, and one with sharing SLP child nodes between
> multiple parent nodes.  (2) comes from the first, but I guess failing
> based on costs is more about the second.
>
> >>     This problem should go away with the move to doing everything on
> >>     SLP trees, where presumably we would attach the vector type to the
> >>     SLP node rather than the stmt_vec_info.  Until then, the patch just
> >>     uses a first-come, first-served approach.
> >
> > Yeah, I ran into not having vectype on SLP trees with invariants/externals
> > as well.  I suppose you didn't try simply adding that to the SLP tree
> > and pushing/popping it like we push/pop the def type?
>
> No, didn't try that.  Maybe it would be worth a go, but it seems like it
> could be a rabbit hole.
>
> > Assigning the vector types should really happen in vectorizable_*
> > and not during SLP build itself btw.
>
> Agree we need to improve the way this is handled, but delaying it
> to vectorizable_* sounds quite late.  Maybe it should be a more global
> decision, since the vector types for each vectorizable_* have to be
> compatible and it's not obvious which routine should get first choice.
>
> > Your update-all-shared-vectypes thing looks quadratic to me :/
>
> Should be amortised linear.  The statements in a DR group always
> have the same vectype.  When we want to change the vector type
> of one statement, we change it for all statements if possible
> or fail if we can't.

OK, let's go for it.

Thanks,
Richard.

> Thanks,
> Richard
