> From: Richard Sandiford <richard.sandif...@arm.com>
> Date: Mon, 6 Jul 2020 11:48:25 +0200
> Out of interest, how do the results change if we still allow the
> combination for equal costs if the new sequence is shorter than
> the original?  I think that still counts as "cheaper than",
> but maybe I'm too RISC-centric. ;-)  (I'm not saying that's
> what we should do, I'm just curious.)

Aha.  I agree to the point I'd say it's a key question and we should
(i.e. if the "speed" cost is the same, inspect again using size
metrics, definitely reject if larger, though undecided if same).

> Originally combine always produced shorter sequences, so by the
> above reasoning, combining for == was correct.  These days we allow
> N-to-N replacements too, which are obviously a good thing when
> the new cost is lower, but are more of a wash when the costs
> are the same.  But even then, the combination should have a
> "canonicalisation" effect.  (Unfortunately that effect includes
> the result of expand_compound_operation/make_compound_operation.)
>
> Do you have specific examples of where things go wrong?

I initially thought I had one clear one, but it was an actual bug:
https://gcc.gnu.org/pipermail/gcc-patches/2020-July/549412.html
which I guess is a point in favor of keeping same-cost combinations,
but on the other hand, that condition (!i2_was_move && !i3_was_move)
(failing due to the wrong value of i3_was_move) is an artificial
measure that would not be needed if same-cost combinations were
rejected.

brgds, H-P
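
P.S. For concreteness, a minimal sketch of the acceptance rule I
describe above.  This is illustrative only, not actual combine.c
code: the function name is made up and the cost arguments are
assumed to have been summed over the whole old and new sequences
beforehand.

    #include <stdbool.h>

    /* Sketch of the suggested test replacing a plain
       "new cost <= old cost" comparison.  */
    static bool
    combination_is_profitable (unsigned old_speed_cost,
                               unsigned new_speed_cost,
                               unsigned old_size_cost,
                               unsigned new_size_cost)
    {
      /* A clear win or loss on the speed metric decides first.  */
      if (new_speed_cost != old_speed_cost)
        return new_speed_cost < old_speed_cost;

      /* Speed costs tie: fall back to size metrics and definitely
         reject if the new sequence is larger.  */
      if (new_size_cost > old_size_cost)
        return false;

      /* A size tie is the part I'm undecided on; accept it here.  */
      return true;
    }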