On 2016-04-06 18:13, Borislav Petkov wrote:
On Wed, Apr 06, 2016 at 05:14:45PM +0800, [email protected] wrote:
From: Zhaoxiu Zeng <[email protected]>

Use alternatives, lifted from arch_hweight

Signed-off-by: Zhaoxiu Zeng <[email protected]>
---
  arch/x86/include/asm/arch_hweight.h |   5 ++
  arch/x86/include/asm/arch_parity.h  | 102 ++++++++++++++++++++++++++++++++++++
  arch/x86/include/asm/bitops.h       |   4 +-
  arch/x86/lib/Makefile               |   8 +++
  arch/x86/lib/parity.c               |  32 ++++++++++++
  5 files changed, 150 insertions(+), 1 deletion(-)
  create mode 100644 arch/x86/include/asm/arch_parity.h
  create mode 100644 arch/x86/lib/parity.c
...

+static __always_inline unsigned int __arch_parity32(unsigned int w)
+{
+       unsigned int res;
+
+       asm(ALTERNATIVE("call __sw_parity32", POPCNT32 "; and $1, %0",
+                       X86_FEATURE_POPCNT)
+               : "="REG_OUT (res)
+               : REG_IN (w)
+               : "cc");
So why all that churn instead of simply doing:

static __always_inline unsigned int __arch_parity32(unsigned int w)
{
        return hweight32(w) & 1;
}

Ditto for the 64-bit version.


__sw_parity32 is faster than __sw_hweight32.
I don't know how many CPUs lack popcnt; if they are all outdated,
using __arch_hweight32 is the easiest way.
