From: Andi Kleen <a...@linux.intel.com>

For the enhanced copy string case we can trivially optimize the fault
handling: since there is only one possible fault point, the result is
just a single subtraction. So get rid of the copy_user_handle_tail call
for this case and do the subtraction directly in the fixup.

This patch is not strictly needed for the goal of making perf
backtraces faster, but it may help other workloads.
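
Not part of the patch, just for illustration: a minimal C-level sketch of
what the new fixup computes, taking the in-code comment at face value
(edx holds the requested length, ecx the bytes actually copied at the
fault point). The function name is made up.

#include <stddef.h>

/*
 * Hypothetical model of the new fixup: with only one possible fault
 * point, the result is a single subtraction instead of a jump to
 * copy_user_handle_tail.
 *
 * len    - value in %edx (requested copy length)
 * copied - value in %ecx at the fault, per the comment in the fixup
 */
static inline size_t erms_fault_fixup(size_t len, size_t copied)
{
	return len - copied;	/* mirrors "sub %ecx,%edx; mov %edx,%eax" */
}

The old path instead copied the count into %edx and jumped to
copy_user_handle_tail, which retries the remaining bytes one at a time
before producing its result.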

Signed-off-by: Andi Kleen <a...@linux.intel.com>
---
 arch/x86/lib/copy_user_64.S | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/arch/x86/lib/copy_user_64.S b/arch/x86/lib/copy_user_64.S
index dee945d..608caf45 100644
--- a/arch/x86/lib/copy_user_64.S
+++ b/arch/x86/lib/copy_user_64.S
@@ -285,8 +285,10 @@ ENTRY(copy_user_enhanced_fast_string)
        ret
 
        .section .fixup,"ax"
-12:    movl %ecx,%edx          /* ecx is zerorest also */
-       jmp copy_user_handle_tail
+       /* edx: len; ecx: actually copied bytes */
+12:    sub  %ecx,%edx
+       mov %edx,%eax
+       ret
        .previous
 
        _ASM_EXTABLE(1b,12b)
-- 
1.9.3
