On 03/08/15 16:53, Kyrill Tkachov wrote:
On 03/08/15 16:45, Uros Bizjak wrote:
On Mon, Aug 3, 2015 at 5:37 PM, Kyrill Tkachov <kyrylo.tkac...@arm.com> wrote:

Looking at the x86 movcc expansion code (ix86_expand_int_movcc), I
don't think this is a good idea. The expander already contains quite a
lot of target-dependent code that goes to great lengths to utilize the
sbb insn as much as possible before cmove is used.

IMO, as far as x86 is concerned, the best solution would be to revert
the change. ix86_expand_int_movcc already does some of the tricks from
your patch in a target-efficient way. The generic change introduced by
your patch now interferes with this expansion.
Well, technically the transformation was already there; it was just
never reached for an x86 compilation because noce_try_cmove was tried
before it and used a target-specific expansion.
In any case, how's this proposal?
The transformation
        /* if (test) x = a; else x = b;
       =>   x = (-(test != 0) & (b - a)) + a;  */

is a catch-all-immediates transformation in noce_try_store_flag_constants.
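For readers less familiar with the idiom: this is the standard branchless select, where -(test != 0) yields an all-ones or zero mask (on x86, this is exactly the kind of mask sbb materializes from the carry flag). A minimal C sketch follows; the function name is mine, the arithmetic is done in unsigned types to keep the subtraction well defined, and it is written so that a is chosen when the test holds, whereas ifcvt may apply the trick with the condition reversed, swapping the roles of a and b:

```c
#include <stdint.h>

/* Branchless "if (test) x = a; else x = b;" via an all-ones/zero mask.
   -(test != 0) is ~0 when the test holds and 0 otherwise, so the mask
   either keeps (a - b) or cancels it, selecting a or b without a branch.
   Unsigned arithmetic avoids signed-overflow issues in the subtraction. */
static int64_t select_branchless(int test, int64_t a, int64_t b)
{
  uint64_t mask = -(uint64_t)(test != 0);   /* all ones or all zeros */
  return (int64_t)((mask & ((uint64_t)a - (uint64_t)b)) + (uint64_t)b);
}
```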
What if we moved it to noce_try_cmove and performed it only if the
target-specific conditional move expansion there failed?

That way we can try the x86_64-specific sequence first and still give
noce_try_store_flag_constants the opportunity to perform the
transformations that can benefit targets that don't have highly
specific conditional move expanders.
Yes, let's try this approach. As was found out, some targets (e.g.
x86) hide lots of different target-dependent expansion strategies in
their movcc expander. Perhaps this fact should be documented in a
comment in the generic code?
Ok, I'll work on that approach and add a comment.

I'm testing a patch that fixes the testcases on x86_64 and does not
harm codegen on aarch64. Feel free to file a PR and assign it to me.

Thanks,
Kyrill


Uros.