On Mon, Mar 23, 2015 at 12:07 PM, Denys Vlasenko <dvlas...@redhat.com> wrote:
> On 03/23/2015 07:38 PM, Andy Lutomirski wrote:
>>>         cmpq $__NR_syscall_max,%rax
>>>         ja ret_from_sys_call
>>>         movq %r10,%rcx
>>>         call *sys_call_table(,%rax,8)  # XXX:    rip relative
>>>         movq %rax,RAX-ARGOFFSET(%rsp)
>>> ret_from_sys_call:
>>>         testl $_TIF_ALLWORK_MASK,TI_flags+THREAD_INFO(%rsp,RIP-ARGOFFSET)
>>> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>>>         jnz int_ret_from_sys_call_fixup /* Go to the slow path */
>>>         LOCKDEP_SYS_EXIT
>>>         DISABLE_INTERRUPTS(CLBR_NONE)
>>>         TRACE_IRQS_OFF
>>> ...
>>> ...
>>> int_ret_from_sys_call_fixup:
>>>         FIXUP_TOP_OF_STACK %r11, -ARGOFFSET
>>>         jmp int_ret_from_sys_call
>>> ...
>>> ...
>>> GLOBAL(int_ret_from_sys_call)
>>>         DISABLE_INTERRUPTS(CLBR_NONE)
>>>         TRACE_IRQS_OFF
>>>
>>> You reverted that by moving this insn to be after first 
>>> DISABLE_INTERRUPTS(CLBR_NONE).
>>>
>>> I also don't see how moving that check (even if it is wrong in a more
>>> benign way) can have such a drastic effect.
>>
>> I bet I see it.  I have the advantage of having stared at KVM code and
>> cursed at it more recently than you, I suspect.  KVM does awful, awful
>> things to CPU state, and, as an optimization, it allows kernel code to
>> run with CPU state that would be totally invalid in user mode.  This
>> happens through a bunch of hooks, including this bit in __switch_to:
>>
>>     /*
>>      * Now maybe reload the debug registers and handle I/O bitmaps
>>      */
>>     if (unlikely(task_thread_info(next_p)->flags & _TIF_WORK_CTXSW_NEXT ||
>>              task_thread_info(prev_p)->flags & _TIF_WORK_CTXSW_PREV))
>>         __switch_to_xtra(prev_p, next_p, tss);
>>
>> IOW, we *change* tif during context switches.
>>
>>
>> The race looks like this:
>>
>>     testl $_TIF_ALLWORK_MASK,TI_flags+THREAD_INFO(%rsp,RIP)
>>     jnz int_ret_from_sys_call_fixup    /* Go to the slow path */
>>
>> --- preempted here, switch to KVM guest ---
>>
>> KVM guest enters and screws up, say, MSR_SYSCALL_MASK.  This wouldn't
>> happen to be a *32-bit* KVM guest, perhaps?
>>
>> Now KVM schedules, calling __switch_to.  __switch_to sets
>> _TIF_USER_RETURN_NOTIFY.
>
> Clear up to now...
>
>> We IRET back to the syscall exit code,
>
> So we end up being just after the "testl", right?
> We go into "int_ret_from_sys_call_fixup".

Nope, the other way around.  We saw no work bits set in the testl, but
one or more of those bits got set while we were preempted, so they're
set by the time we return.  Now we *don't* go to
int_ret_from_sys_call_fixup.  I don't think that the resulting sysret
itself is harmful, but I think we're now running user code with some
MSRs programmed wrong.  The next syscall could do bad things, such as
failing to clear IF.

--Andy