On 31/05/2019 02:54, Jan Beulich wrote:
> This is faster than using the software implementation, and the insn is
> available on all half-way recent hardware. Therefore convert
> generic_hweight<N>() to out-of-line functions (without affecting Arm)
> and use alternatives patching to replace the function calls.
>
> Suggested-by: Andrew Cooper <andrew.coop...@citrix.com>
> Signed-off-by: Jan Beulich <jbeul...@suse.com>

So, I trust you weren't expecting me to just ack this and let it go in?

The principle of the patch (use popcnt when available) is an improvement
which I'm entirely in agreement with, but everything else is a problem.

The long and the short of it is that I'm not going to accept any version
of this which isn't the Linux version.

From a microarchitectural standpoint, the tradeoff between fractional
register scheduling flexibility (which in practice is largely bounded
anyway by real function calls in the surrounding code) and increased
icache pressure/coldness (from the redundant function copies) falls
squarely in favour of the Linux way of doing it: a cold icache line is
disproportionately more expensive than requiring the compiler to order
its registers differently (especially as all non-obsolete processors
these days have zero-cost register renaming internally, for the
purposes of superscalar execution).

If however you can demonstrate that a number of CPU microarchitects are
wrong about what is faster in practice, then I will happily accept a
patch which matches Linux, once you've improved the performance of Linux.
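
For concreteness, this is roughly the shape of the Linux version (my
sketch from memory, simplified; the real
arch/x86/include/asm/arch_hweight.h spells the popcnt out as raw bytes
and hides the register constraints behind REG_IN/REG_OUT macros):

    /* One out-of-line software implementation, written in assembly
     * so it genuinely clobbers nothing beyond its %rdi input and
     * %rax output. */
    unsigned int __sw_hweight32(unsigned int w);

    static __always_inline unsigned int __arch_hweight32(unsigned int w)
    {
        unsigned int res;

        /* The "call __sw_hweight32" is patched in place with
         * "popcnt %edi, %eax" on POPCNT-capable hardware. */
        asm (ALTERNATIVE("call __sw_hweight32",
                         "popcntl %%edi, %%eax",
                         X86_FEATURE_POPCNT)
             : "=a" (res) : "D" (w));

        return res;
    }

One implementation per width, one copy of each in the image, and the
asm contract at every call site is just "%rdi in, %rax out", which the
hand-written assembly honours.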

> @@ -245,6 +246,9 @@ boot/mkelf32: boot/mkelf32.c
>  efi/mkreloc: efi/mkreloc.c
>       $(HOSTCC) $(HOSTCFLAGS) -g -o $@ $<
>  
> +nocov-y += hweight.o

Irrespective of the exact specifics of how the patch ends up, I don't
think the nocov restriction is a direction we want to take.

Coverage may not be something used in production, but when it is used
for development, the results data must not have random holes in it.

> +hweight.o: CFLAGS += $(foreach reg,cx dx si 8 9 10 11,-ffixed-r$(reg))
> +

Does this work with Clang?
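
For anyone else reading, that $(foreach ...) expands to:

    -ffixed-rcx -ffixed-rdx -ffixed-rsi -ffixed-r8 \
    -ffixed-r9 -ffixed-r10 -ffixed-r11

i.e. every call-clobbered register in the SysV ABI other than %rax and
%rdi is taken away from the register allocator for this translation
unit, so the compiled functions end up with the same effective ABI as
a bare POPCNT with %rdi in and %rax out.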


> --- /dev/null
> +++ b/xen/arch/x86/hweight.c
> @@ -0,0 +1,28 @@
> +#define generic_hweight64 _hweight64
> +#define generic_hweight32 _hweight32
> +#define generic_hweight16 _hweight16
> +#define generic_hweight8  _hweight8
> +
> +#include <xen/compiler.h>
> +
> +#undef inline
> +#define inline always_inline
> +
> +#include <xen/bitops.h>
> +
> +#undef generic_hweight8
> +#undef generic_hweight16
> +#undef generic_hweight32
> +#undef generic_hweight64
> +
> +#define HWEIGHT(n)                                             \
> +typeof(_hweight##n) generic_hweight##n;                        \
> +unsigned int generic_hweight##n(typeof((uint##n##_t)0 + 0U) x) \

A question to the rest of xen-devel.  Is there anyone else who can
actually work out what this construct is doing?

I'd like to get a feel for how many people can even follow some of our C.
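
For the record, my reading of it (which rather proves the point about
having to puzzle it out): typeof((uint##n##_t)0 + 0U) computes the
parameter type after the usual integer promotions, and the preceding
typeof(_hweight##n) declaration makes the compiler cross-check the
definition against the renamed generic implementation's prototype.
For n=16 I believe the expansion is roughly:

    /* Hypothetical expansion of HWEIGHT(16) after preprocessing: */
    typeof(_hweight16) generic_hweight16;   /* prototype cross-check */
    unsigned int generic_hweight16(unsigned int x)
        /* (uint16_t)0 + 0U promotes to unsigned int */
    /* ... body elided in the quoted hunk ... */

(For n=64 the sum has type uint64_t, so that parameter stays 64 bits
wide.)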

> --- a/xen/include/asm-x86/bitops.h
> +++ b/xen/include/asm-x86/bitops.h
> @@ -469,15 +469,35 @@ static inline int fls(unsigned int x)
>      return r + 1;
>  }
>  
> +/* POPCNT encodings with %{r,e}di input and %{r,e}ax output: */
> +#define POPCNT_64 ".byte 0xF3, 0x48, 0x0F, 0xB8, 0xC7"
> +#define POPCNT_32 ".byte 0xF3, 0x0F, 0xB8, 0xC7"

So (the dangers of false micro-optimisation aside), POPCNT_32 would
probably be better off with a redundant %ds prefix.

The reason is that the result needs padding to 5 bytes, as the original
instruction is `call disp32`, meaning that a single-byte nop needs
inserting.  The version with the nop is two instructions rather than
one, so takes two decode ports as opposed to one, and single-byte nops
are forced to take an execution delay for backwards compatibility with
DOS.
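
Concretely (my suggested bytes, not from the patch), the %ds override
pads the replacement to a single 5-byte instruction:

    /* ds; popcnt %edi,%eax -- 0x3E is the %ds segment override,
     * bringing the encoding to the same 5 bytes as `call disp32`. */
    #define POPCNT_32 ".byte 0x3E, 0xF3, 0x0F, 0xB8, 0xC7"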

OTOH, I'd also bet that no one could observe the difference without
using perf counters and fetch/decode uarch events.

~Andrew
