> On 14 May 2019, at 0:42, Nakajima, Jun <jun.nakaj...@intel.com> wrote:
> 
> 
> 
>> On May 13, 2019, at 2:16 PM, Liran Alon <liran.a...@oracle.com> wrote:
>> 
>>> On 13 May 2019, at 22:31, Nakajima, Jun <jun.nakaj...@intel.com> wrote:
>>> 
>>> On 5/13/19, 7:43 AM, "kvm-ow...@vger.kernel.org on behalf of Alexandre 
>>> Chartre" wrote:
>>> 
>>>   Proposal
>>>   ========
>>> 
>>>   To handle both these points, this series introduces the mechanism of
>>>   KVM address space isolation. Note that this mechanism complements
>>>   (a)+(b) and does not contradict them: even when this mechanism is
>>>   applied, (a)+(b) should still be applied to the full virtual address
>>>   space as defence-in-depth.
>>> 
>>>   The idea is that most of KVM's #VMExit handler code will run in a
>>>   special KVM isolated address space which maps only the KVM required
>>>   code and per-VM information. Only when KVM needs to architecturally
>>>   access other (sensitive) data will it switch from the KVM isolated
>>>   address space to the full standard host address space. At this point,
>>>   KVM will also need to kick all sibling hyperthreads to get them out of
>>>   the guest (note that kicking all sibling hyperthreads is not
>>>   implemented in this series).
>>> 
>>>   Basically, we will have the following flow:
>>> 
>>>     - qemu issues the KVM_RUN ioctl
>>>     - KVM handles the ioctl and calls vcpu_run():
>>>       . KVM switches from the kernel address space to the KVM address space
>>>       . KVM transfers control to the VM (VMLAUNCH/VMRESUME)
>>>       . the VM returns to KVM
>>>       . KVM handles the VM-Exit:
>>>         . if handling needs the full kernel, switch to the kernel address space
>>>         . else, continue with the KVM address space
>>>       . KVM loops in vcpu_run() or returns
>>>     - the KVM_RUN ioctl returns
>>> 
>>>   So, the KVM_RUN core function will mainly execute in the KVM address
>>>   space. The handling of a VM-Exit can require access to the kernel
>>>   space and, in that case, we will switch back to the kernel address
>>>   space.
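>>> 
>>>   As a purely illustrative sketch of this loop (plain user-space C; all
>>>   names here are invented for the example and are not actual KVM
>>>   symbols, and the address space switch is modeled by a variable where
>>>   real code would reload CR3):
>>> 
>>>     #include <stdbool.h>
>>> 
>>>     enum addr_space { KVM_ISOLATED_AS, KERNEL_FULL_AS };
>>>     static enum addr_space cur_as = KERNEL_FULL_AS;
>>> 
>>>     static void switch_as(enum addr_space as)
>>>     {
>>>         cur_as = as;          /* real code would reload CR3 with the
>>>                                  isolated or the full PGD here */
>>>     }
>>> 
>>>     static int vmenter(void)
>>>     {
>>>         return 0;             /* stand-in for VMLAUNCH/VMRESUME and
>>>                                  the resulting VM-Exit reason */
>>>     }
>>> 
>>>     static bool exit_needs_full_kernel(int reason)
>>>     {
>>>         return reason != 0;   /* placeholder policy */
>>>     }
>>> 
>>>     static int vcpu_run(int max_loops)
>>>     {
>>>         switch_as(KVM_ISOLATED_AS);        /* enter isolated AS once */
>>>         while (max_loops--) {
>>>             int reason = vmenter();        /* guest runs, then exits */
>>>             if (exit_needs_full_kernel(reason)) {
>>>                 switch_as(KERNEL_FULL_AS); /* slow path */
>>>                 /* handle the exit with the full kernel mappings */
>>>                 switch_as(KVM_ISOLATED_AS);
>>>             }
>>>             /* else: handle the exit in the isolated AS */
>>>         }
>>>         switch_as(KERNEL_FULL_AS);         /* KVM_RUN returns to qemu */
>>>         return 0;
>>>     }
>>> 
>>>     int main(void)
>>>     {
>>>         return vcpu_run(3);
>>>     }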
>>> 
>>> Once all sibling hyperthreads are in the host (either using the full
>>> kernel address space or a user address space), what happens to the other
>>> sibling hyperthreads if one of them tries to do a VM entry? That VCPU
>>> will switch to the KVM address space prior to VM entry, but the others
>>> continue to run? Do you think (a) + (b) would be sufficient for that
>>> case?
>> 
>> The description here is missing an important part: when a hyperthread
>> needs to switch from the KVM isolated address space to the full kernel
>> address space, it should first kick all sibling hyperthreads out of the
>> guest and only then safely switch to the full kernel address space. Only
>> once all sibling hyperthreads are running in the KVM isolated address
>> space is it safe to enter the guest.
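>> 
>> To illustrate the intended ordering, here is a minimal C11 sketch. It is
>> only a sketch of the handshake described above: all the names are
>> hypothetical, the busy-wait stands in for proper kernel-side
>> synchronization, and a real implementation would need a stronger
>> handshake to close the race between the check and the actual VM entry.
>> 
>>   #include <stdatomic.h>
>>   #include <stdbool.h>
>> 
>>   atomic_int  siblings_in_guest;        /* siblings running guest code */
>>   atomic_bool core_in_full_kernel_as;   /* any sibling in the full AS? */
>> 
>>   void kick_siblings_out_of_guest(void)
>>   {
>>       /* real code would send an IPI to the sibling hyperthreads */
>>   }
>> 
>>   void switch_to_full_kernel_as(void)
>>   {
>>       atomic_store(&core_in_full_kernel_as, true);
>>       kick_siblings_out_of_guest();
>>       while (atomic_load(&siblings_in_guest) > 0)
>>           ;   /* wait until no sibling runs guest code */
>>       /* only now is it safe to use the full (sensitive) mappings */
>>   }
>> 
>>   void switch_to_isolated_as(void)
>>   {
>>       atomic_store(&core_in_full_kernel_as, false);
>>   }
>> 
>>   bool try_vmenter(void)
>>   {
>>       if (atomic_load(&core_in_full_kernel_as))
>>           return false;   /* a sibling is exposed: do not enter guest */
>>       atomic_fetch_add(&siblings_in_guest, 1);
>>       /* VMLAUNCH/VMRESUME would go here */
>>       atomic_fetch_sub(&siblings_in_guest, 1);
>>       return true;
>>   }
>> 
>> In this sketch, a VCPU that tries to enter the guest while a sibling is
>> in the full kernel address space simply fails the try_vmenter() check and
>> has to retry, which is the case your question is about.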
>> 
> 
> Okay, it makes sense. So, it will require some synchronization among the 
> siblings there.

Definitely.
Currently, kicking the sibling hyperthreads is not yet integrated with this 
patch series, but it should be at some point.

-Liran

> 
>> The main point of this address space is to avoid kicking all sibling
>> hyperthreads on *every* VMExit from the guest, and instead only kick them
>> when switching address spaces. The assumption is that the vast majority
>> of exits can be handled in the KVM isolated address space and therefore
>> do not require kicking the sibling hyperthreads out of the guest.
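>> 
>> As a purely illustrative example of the kind of policy this implies
>> (which exits can actually stay in the isolated address space is exactly
>> what the series has to work out; the split below is an assumption, and
>> only the EXIT_REASON_* values are real VMX exit reasons from the Intel
>> SDM):
>> 
>>   #include <stdbool.h>
>> 
>>   #define EXIT_REASON_CPUID         10
>>   #define EXIT_REASON_MSR_READ      31
>>   #define EXIT_REASON_EPT_VIOLATION 48
>> 
>>   bool exit_can_stay_isolated(int reason)
>>   {
>>       switch (reason) {
>>       case EXIT_REASON_CPUID:     /* per-VCPU state only (assumption) */
>>       case EXIT_REASON_MSR_READ:  /* often per-VM data (assumption)   */
>>           return true;
>>       default:                    /* conservative: kick siblings and  */
>>           return false;           /* switch to the full address space */
>>       }
>>   }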
> 
> 
> ---
> Jun
> Intel Open Source Technology Center
