https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101944

--- Comment #2 from Kewen Lin <linkw at gcc dot gnu.org> ---
Back to the optimized IR, I think the problem is that the vectorized
version has a longer critical path for the reduc_plus result (longer total latency).
For the vectorized version:

  _51 = diffa_41(D) * 1.666666666666666574148081281236954964697360992431640625e-1;
  _59 = {_51, 2.5e-1};
  vect__20.13_60 = vect_vdw_d_37.12_56 * _59;
  _61 = .REDUC_PLUS (vect__20.13_60);

The critical path is: scalar mult -> vect CTOR -> vector mult -> reduc_plus

While for the scalar version:

  _51 = diffa_41(D) * 1.666666666666666574148081281236954964697360992431640625e-1;
  _21 = vdw_c_38 * 2.5e-1;
  _22 = .FMA (vdw_d_37, _51, _21);

The two scalar mults can run in parallel, and the computation further ends up as one FMA.

On Power9, we don't have a single REDUC_PLUS insn for double; it takes three
insns: vector shift + vector addition + vector extraction.  I'm not sure whether
this is a problem on platforms which support an efficient REDUC_PLUS, but it
seems a bad idea to SLP a case where the root is a reduction op, its feeders are
not isomorphic, its type is V2*, and it can be math-optimized (e.g. into an FMA).
