This vDSO code is only used by 64-bit kernels, not 32-bit ones. On 64-bit kernels, the data segment is the same for 32-bit and 64-bit userspace, and the SYSRET instruction already loads %ss with that selector. There is no need to repeat it by hand. Segment loads are somewhat expensive: tens of cycles.
Signed-off-by: Denys Vlasenko <dvlas...@redhat.com>
CC: Linus Torvalds <torva...@linux-foundation.org>
CC: Steven Rostedt <rost...@goodmis.org>
CC: Ingo Molnar <mi...@kernel.org>
CC: Borislav Petkov <b...@alien8.de>
CC: "H. Peter Anvin" <h...@zytor.com>
CC: Andy Lutomirski <l...@amacapital.net>
CC: Oleg Nesterov <o...@redhat.com>
CC: Frederic Weisbecker <fweis...@gmail.com>
CC: Alexei Starovoitov <a...@plumgrid.com>
CC: Will Drewry <w...@chromium.org>
CC: Kees Cook <keesc...@chromium.org>
CC: x...@kernel.org
CC: linux-kernel@vger.kernel.org
---
Patch was run-tested.

 arch/x86/vdso/vdso32/syscall.S | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/arch/x86/vdso/vdso32/syscall.S b/arch/x86/vdso/vdso32/syscall.S
index 5415b56..ccdb9ef 100644
--- a/arch/x86/vdso/vdso32/syscall.S
+++ b/arch/x86/vdso/vdso32/syscall.S
@@ -19,8 +19,15 @@ __kernel_vsyscall:
 .Lpush_ebp:
 	movl	%ecx, %ebp
 	syscall
-	movl	$__USER32_DS, %ecx
-	movl	%ecx, %ss
+	/*
+	 * Used to load __USER32_DS into %ss here,
+	 * but it's not necessary: this vDSO is only used if our kernel
+	 * is a 64-bit one (and we are on an AMD CPU).
+	 * For 64-bit kernels, __USER32_DS and __USER_DS are the same.
+	 * SYSRET restores %ss to the same value when returning to
+	 * either 64- or 32-bit userspace, and the 64-bit kernel uses
+	 * the same descriptor for %ss in 64- and 32-bit userspace.
+	 */
 	movl	%ebp, %ecx
 	popl	%ebp
.Lpop_ebp:
-- 
1.8.1.4