https://gcc.gnu.org/bugzilla/show_bug.cgi?id=113091

Richard Biener <rguenth at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
     Ever confirmed|0                           |1
   Last reconfirmed|                            |2023-12-21
             Status|UNCONFIRMED                 |NEW

--- Comment #4 from Richard Biener <rguenth at gcc dot gnu.org> ---
"The use stmt is "_2 = (int) _1", whose pattern statement is "patt_64 = (int)
patt_63", which is not referenced by any original or other pattern statements.
Or in other word, the orig_stmt could be absorbed into a vector operation,
without any outlier scalar use."

That means the code sees that _2 = (int) _1 isn't vectorized (its pattern
stmt isn't actually used), which means _2 = (int) _1 stays in the code and
thus _1 is live.
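
To make that classification concrete, here is a deliberately simplified
sketch of the check; the struct and its fields are hypothetical stand-ins
for GCC's stmt_vec_info bookkeeping, not the real data structures:

  /* Simplified model: a scalar stmt only counts as vectorized if the stmt
     vectorization would emit code for in its place (itself, or its pattern
     replacement) is referenced by some SLP node.  */
  struct stmt_info
  {
    bool in_pattern_p;        /* replaced by a pattern stmt?            */
    stmt_info *pattern_stmt;  /* the replacement, e.g. patt_64 for _2   */
    bool used_by_slp;         /* referenced from a vectorized SLP node? */
  };

  static bool
  stmt_is_vectorized (const stmt_info *s)
  {
    /* Resolve to the stmt vectorization would actually emit code for.  */
    const stmt_info *to_vectorize = s->in_pattern_p ? s->pattern_stmt : s;
    return to_vectorize->used_by_slp;
  }

For _2 = (int) _1 in this PR, patt_64 exists but is not referenced by any
SLP node, so the check comes back false, _2 = (int) _1 is assumed to stay
in the IL and _1 gets marked live.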

The issue here is that because the "outer" pattern consumes
patt_64 = (int) patt_63, it should have adjusted the stmt-to-vectorize of
_2 = (int) _1 to be the outer pattern's root stmt for all this logic to work
correctly.

Otherwise we have no means of identifying whether a scalar stmt takes part
in vectorization or not.
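
Continuing the same simplified sketch, the suggested adjustment would
conceptually look like the helper below.  In GCC itself this relationship
goes through STMT_VINFO_IN_PATTERN_P / STMT_VINFO_RELATED_STMT and
vect_stmt_to_vectorize; the helper is purely illustrative:

  static void
  absorb_into_outer_pattern (stmt_info *orig_stmt,      /* _2 = (int) _1     */
                             stmt_info *outer_pattern)  /* consuming pattern */
  {
    /* The inner pattern stmt (patt_64) stops being the recorded
       replacement; the outer pattern root becomes it, so both the liveness
       walk and the costing see orig_stmt as covered by vectorization.  */
    orig_stmt->in_pattern_p = true;
    orig_stmt->pattern_stmt = outer_pattern;
  }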

I'm not sure what restrictions we place on pattern recognition of patterns - do
we require single uses, or do we allow the situation where one vectorization
path picks up the "inner" pattern while another picks the "outer" one?

In theory we can hack up the liveness analysis, but as you noticed that
isn't the part doing the costing.  The costing part is written in the very
same way (vect_bb_vectorization_profitable_p, specifically
vect_slp_gather_vectorized_scalar_stmts and vect_bb_slp_scalar_cost).
Basically the scalar cost is the cost of the scalar stmts that are fully
replaced by the vector stmts (i.e. can be DCEd after vectorization).
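
As a rough sketch of that costing principle, again using the hypothetical
types from above rather than the actual GCC implementation:

  static int
  scalar_cost_of_replaced_stmts (stmt_info *const *bb_stmts, int n,
                                 int (*cost_of) (const stmt_info *))
  {
    int cost = 0;
    for (int i = 0; i < n; i++)
      /* Only stmts fully replaced by vector code count; a stmt wrongly
         classified as "not vectorized" (like _2 = (int) _1 here) is left
         out, which understates the scalar side of the comparison.  */
      if (stmt_is_vectorized (bb_stmts[i]))
        cost += cost_of (bb_stmts[i]);
    return cost;
  }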
