On Wed, May 13, 2015 at 08:19:55AM +0200, Ingo Molnar wrote:
> Looks nice. Would be useful to do before/after analysis of the
> generated asm with a defconfig and document that in the changelog.

Right, so I'm looking at what we have now:

/* Standard copy_to_user with segment limit checking */
ENTRY(_copy_to_user)
        CFI_STARTPROC
        GET_THREAD_INFO(%rax)
        movq %rdi,%rcx
        addq %rdx,%rcx
        jc bad_to_user
        cmpq TI_addr_limit(%rax),%rcx
        ja bad_to_user

This adds the size (in %rdx) to @to (in %rdi), checks the carry flag for a
wraparound and then compares against the segment limit. __chk_range_not_ok()
does the same check with a single conditional, AFAICT:

static inline bool __chk_range_not_ok(unsigned long addr, unsigned long size,
                                      unsigned long limit)
{
        /*
         * If we have used "sizeof()" for the size,
         * we know it won't overflow the limit (but
         * it might overflow the 'addr', so it's
         * important to subtract the size from the
         * limit, not add it to the address).
         */
        if (__builtin_constant_p(size))
                return addr > limit - size;

and we're avoiding the addr overflow by subtracting size from limit.
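
To make the overflow point concrete, here's a tiny userspace sketch; the
constants are made up, only the shape of the check matches the kernel code:

#include <stdbool.h>
#include <stdio.h>

/* The constant-size branch of __chk_range_not_ok(), lifted as-is. */
static bool range_not_ok(unsigned long addr, unsigned long size,
                         unsigned long limit)
{
        return addr > limit - size;
}

int main(void)
{
        /* Made-up numbers, only to show the wraparound case. */
        unsigned long limit = 0x00007ffffffff000UL;     /* addr_limit-ish */
        unsigned long addr  = 0xffffffffffffff00UL;     /* bogus user ptr */
        unsigned long size  = 0x200UL;                  /* sizeof()-style constant */

        /*
         * addr + size wraps around to 0x100, so a bare cmp against the
         * limit would wrongly pass - that's why the asm needs the extra jc.
         */
        printf("addr + size  = %#lx\n", addr + size);

        /*
         * Subtracting from the limit can't wrap for a sane constant size,
         * so one comparison suffices and the access is correctly rejected.
         */
        printf("range_not_ok = %d\n", range_not_ok(addr, size, limit));

        return 0;
}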

So the resulting asm looks like this:

        .file 22 "./arch/x86/include/asm/uaccess.h"
        .loc 22 54 0
        movq    -16360(%r14), %rax      # _208->addr_limit.seg, tmp347
                                        # ^ %r14 contains thread_info
        subq    $88, %rax               #, D.37904
                                        # ^ 88 is the size

        .file 23 "./arch/x86/include/asm/uaccess_64.h"
        .loc 23 165 0
        cmpq    %rax, %r12              # D.37904, ubuf
                                        # ^ %r12 contains the user ptr
        ja      .L493                   #,
        movq    %r12, %rdi              # ubuf, to
                                        # ^ prep args for copy_user...
        movl    $88, %edx               #, len

        # the alternative starts here
        #APP
        # 36 "./arch/x86/include/asm/uaccess_64.h" 1
        661:
        call copy_user_generic_unrolled #
        ....


so we end up replacing

        MOV
        ADD
        JC
        CMP
        JA
        JMP (alternative)

with

        MOV
        SUB
        CMP
        JA
        MOV
        MOV
        CALL (alternative)

The only downside I see here is that we have to do two MOVs to put the args
in the proper registers before calling the copy_user* version. But we end
up with a single conditional instead of two, and the MOVs are cheap in
comparison. Also, we get rid of the asm glue, even betterer :-)
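
For completeness, the C side that generates the shorter sequence would look
roughly like this. A sketch of the idea only, not the actual patch:
copy_to_user_sketch() is a made-up name, while access_ok() and
copy_user_generic() are the existing helpers:

/*
 * Sketch only: do the limit check in C via access_ok() (which boils down
 * to __chk_range_not_ok()) and let the compiler emit the MOV/SUB/CMP/JA
 * sequence above, instead of bouncing through the hand-written
 * _copy_to_user glue.
 */
static __always_inline unsigned long
copy_to_user_sketch(void __user *to, const void *from, unsigned long n)
{
        if (unlikely(!access_ok(VERIFY_WRITE, to, n)))
                return n;       /* nothing copied, like bad_to_user */

        return copy_user_generic((__force void *)to, from, n);
}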

> I'd keep any changes to inlining decisions a separate patch and do
> vmlinux before/after size analysis as well, so that we don't mix the
> effects of the various enhancements.

Yap.

-- 
Regards/Gruss,
    Boris.

ECO tip #101: Trim your mails when you reply.