Commit-ID: 11f254dbb3a2e3f0d8552d0dd37f4faa432b6b16
Gitweb: http://git.kernel.org/tip/11f254dbb3a2e3f0d8552d0dd37f4faa432b6b16
Author: Peter Zijlstra <[email protected]>
AuthorDate: Thu, 8 Dec 2016 16:42:15 +0100
Committer: Ingo Molnar <[email protected]>
CommitDate: Sun, 11 Dec 2016 13:09:20 +0100

x86/paravirt: Fix bool return type for PVOP_CALL()

Commit:

  3cded4179481 ("x86/paravirt: Optimize native pv_lock_ops.vcpu_is_preempted()")

introduced a paravirt op with bool return type [*]

It turns out that the PVOP_CALL*() macros miscompile when rettype is
bool. Code that looked like:

   83 ef 01                sub    $0x1,%edi
   ff 15 32 a0 d8 00       callq  *0xd8a032(%rip)        # ffffffff81e28120 <pv_lock_ops+0x20>
   84 c0                   test   %al,%al

ended up looking like so after PVOP_CALL1() was applied:

   83 ef 01                sub    $0x1,%edi
   48 63 ff                movslq %edi,%rdi
   ff 14 25 20 81 e2 81    callq  *0xffffffff81e28120
   48 85 c0                test   %rax,%rax

Note how it tests the whole of %rax, even though a typical bool return
function only sets %al, like:

  0f 95 c0                setne  %al
  c3                      retq

This is because ____PVOP_CALL() does:

		__ret = (rettype)__eax;

and while regular integer type casts truncate the result, a cast to
bool tests for any !0 value. Fix this by explicitly truncating to
sizeof(rettype) before casting.
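
For illustration only (this snippet is not part of the patch), a minimal
user-space sketch of the two casts; the register value below is made up,
and only its low byte models what a bool-returning callee actually defines:

  #include <stdbool.h>
  #include <stdio.h>

  int main(void)
  {
	/* pretend this is the raw register value the macro reads back */
	unsigned long __eax = 0xdead0100UL;	/* low byte (%al) is 0 */

	bool bad  = (bool)__eax;		/* any set bit => true */
	bool good = (bool)(__eax & 0xffUL);	/* truncate to sizeof(bool) first */

	printf("bad=%d good=%d\n", bad, good);	/* prints: bad=1 good=0 */
	return 0;
  }

The masked form of the cast is what the patch below implements via
PVOP_RETMASK().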

[*] The actual bug should've been exposed in commit:
      446f3dc8cc0a ("locking/core, x86/paravirt: Implement vcpu_is_preempted(cpu) for KVM and Xen guests")
    but that didn't properly implement the paravirt call.

Reported-by: kernel test robot <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Alok Kataria <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Chris Wright <[email protected]>
Cc: Jeremy Fitzhardinge <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Pan Xinhui <[email protected]>
Cc: Paolo Bonzini <[email protected]>
Cc: Peter Anvin <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Rusty Russell <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Fixes: 3cded4179481 ("x86/paravirt: Optimize native pv_lock_ops.vcpu_is_preempted()")
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
---
arch/x86/include/asm/paravirt_types.h | 14 +++++++++++++-
1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 2614bd7..3f2bc0f 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -510,6 +510,18 @@ int paravirt_disable_iospace(void);
 #define PVOP_TEST_NULL(op)	((void)op)
 #endif
 
+#define PVOP_RETMASK(rettype)						\
+	({	unsigned long __mask = ~0UL;				\
+		switch (sizeof(rettype)) {				\
+		case 1: __mask =       0xffUL; break;			\
+		case 2: __mask =     0xffffUL; break;			\
+		case 4: __mask = 0xffffffffUL; break;			\
+		default: break;						\
+		}							\
+		__mask;							\
+	})
+
+
 #define ____PVOP_CALL(rettype, op, clbr, call_clbr, extra_clbr,	\
 		      pre, post, ...)					\
 ({									\
@@ -537,7 +549,7 @@ int paravirt_disable_iospace(void);
 				     paravirt_clobber(clbr),		\
 				     ##__VA_ARGS__			\
 				     : "memory", "cc" extra_clbr);	\
-		__ret = (rettype)__eax;					\
+		__ret = (rettype)(__eax & PVOP_RETMASK(rettype));	\
 	}								\
 	__ret;								\
 })
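
Not part of the patch, but the mask selection above can be sanity-checked
stand-alone with GCC or Clang (statement expressions are a GNU extension),
assuming an LP64 target where unsigned long is 64 bits wide; the program
below just copies the PVOP_RETMASK() logic into user space as an
illustration:

  #include <assert.h>
  #include <stdbool.h>

  #define PVOP_RETMASK(rettype)				\
	({ unsigned long __mask = ~0UL;			\
	   switch (sizeof(rettype)) {			\
	   case 1: __mask = 0xffUL; break;		\
	   case 2: __mask = 0xffffUL; break;		\
	   case 4: __mask = 0xffffffffUL; break;	\
	   default: break;				\
	   }						\
	   __mask;					\
	})

  int main(void)
  {
	assert(PVOP_RETMASK(bool)  == 0xffUL);		/* 1-byte rettype */
	assert(PVOP_RETMASK(short) == 0xffffUL);	/* 2-byte rettype */
	assert(PVOP_RETMASK(int)   == 0xffffffffUL);	/* 4-byte rettype */
	assert(PVOP_RETMASK(long)  == ~0UL);		/* full register width */
	return 0;
  }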