>>> I've never really thought much about them until now.  What's the
>>> case for supporting userspace hypercalls?
>>>
>>> The current way the code works is a little scary.  Hypercalls that
>>> aren't handled by kernelspace are deferred to userspace.  Of course,
>>> kernelspace has no idea whether userspace is actually using a given
>>> hypercall so if kernelspace needs another one, the two may clash.
>>>
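(For reference, that fallback surfaces in the VMM roughly as sketched
below.  This is only a sketch, assuming the kvm_run hypercall exit
carries the number, arguments and return value as shown;
handle_guest_hypercall() is a placeholder for whatever userspace
happens to implement.)

#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Placeholder for whatever the VMM does with a given hypercall nr. */
extern __u64 handle_guest_hypercall(__u64 nr, __u64 *args);

static void vcpu_loop(int vcpu_fd, struct kvm_run *run)
{
    for (;;) {
        ioctl(vcpu_fd, KVM_RUN, 0);

        switch (run->exit_reason) {
        case KVM_EXIT_HYPERCALL:
            /* The kernel didn't claim this nr, so it lands here,
               whether or not userspace expected it. */
            run->hypercall.ret =
                handle_guest_hypercall(run->hypercall.nr,
                                       run->hypercall.args);
            break;
        /* ... other exit reasons ... */
        }
    }
}
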
>>> AFAICT, the primary reason to use hypercalls is performance.  A
>>> vmcall is a few hundred cycles faster than a PIO exit.  In the
>>> light-weight exit path, this may make a significant difference.
>>> However, when going to userspace, it's not only a heavy-weight exit
>>> but it's also paying the cost of a ring transition.  The few hundred
>>> cycle savings is small in comparison to the total cost, so I don't
>>> think performance is a real benefit here.
>>>
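(To make the cycle comparison concrete, the two guest-side notification
paths boil down to roughly the sketch below.  The register convention
follows KVM's usual number-in-EAX, argument-in-EBX hypercall ABI, but
HC_NOTIFY and NOTIFY_PORT are made-up placeholder values.)

#define HC_NOTIFY    42        /* placeholder hypercall number */
#define NOTIFY_PORT  0x5658    /* placeholder PIO port */

static inline long hypercall1(unsigned int nr, unsigned long arg)
{
    long ret;
    /* Hypercall path: a single vmcall, the cheaper exit. */
    asm volatile("vmcall"
                 : "=a"(ret)
                 : "a"(nr), "b"(arg)
                 : "memory");
    return ret;
}

static inline void pio_notify(unsigned short port, unsigned char val)
{
    /* PIO path: a slightly dearer exit, but the port lives in a
       namespace that PCI can assign and enumerate. */
    asm volatile("outb %0, %1" : : "a"(val), "Nd"(port));
}
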
>>
>> Actually the heavyweight exit is much more expensive than the ring
>> transition.
>>
>>> The hypercall namespace is much smaller than the PIO namespace, and
>>> there's no "plug-and-play"-like mechanism to resolve conflicts.
>>> PIO/MMIO has this via PCI, and it seems like any userspace device
>>> ought to be either a PCI device or use a static PIO port.  Plus,
>>> paravirtual devices that use PCI/PIO/MMIO are much more likely to be
>>> reusable by other VMMs (Xen, QEMU, even VMware).
>>>
>>> In the future, if we decide a certain hypercall could be done
>>> better in userspace, and we have guests using those hypercalls, it
>>> makes sense to plumb the hypercalls down.
>>>
>>> My question is, should we support userspace hypercalls until that
>>> point?
>>>
>>
>> I've already mentioned this but I'll repeat it for Google:  allowing
>> hypercalls to fall back to userspace gives you the flexibility to
>> have either a kernel implementation or a userspace implementation for
>> the same functionality.  This means a pvnet driver can be used either
>> directly with a virtual interface on the host, or with some userspace
>> processing in qemu.  Similarly, pvblock can be processed in the
>> kernel for real block devices, or in userspace for qcow format files,
>> without the need to teach the kernel about the qcow format somehow.
>>
>> Dor's initial pv devices are implemented in qemu with a view to
>> having a faster implementation in the kernel, so userspace hypercalls
>> are on the table now.
>>
>
>Thinking a little more about this, it isn't about handling hypercalls
>in userspace, but about handling a virtio sync() in userspace.
>
>So how about having a KVM_HC_WAKE_CHANNEL hypercall (similar to Xen's
>event channel, but asymmetric) that has a channel parameter.  The
>kernel handler for that hypercall dispatches calls to either a kernel
>handler or a userspace handler.  That means we don't need separate
>ETH_SEND, ETH_RECEIVE, or BLOCK_SEND hypercalls.
>
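
Concretely, the dispatch could look something like the sketch below on
the kernel side.  This is only a sketch: the channel table and handler
type are made up, KVM_HC_WAKE_CHANNEL is just the number proposed
above, and only the KVM_EXIT_HYPERCALL fallback mirrors the existing
exit-to-userspace path for unhandled hypercalls.

#define KVM_HC_WAKE_CHANNEL  100   /* hypothetical hypercall number */
#define MAX_CHANNELS         64

typedef int (*wake_handler_t)(struct kvm_vcpu *vcpu, unsigned long chan);

static wake_handler_t channel_handlers[MAX_CHANNELS];

static int handle_wake_channel(struct kvm_vcpu *vcpu, unsigned long chan)
{
    /* An in-kernel consumer is registered (e.g. a pvnet backend):
       stay on the lightweight path and handle it right here. */
    if (chan < MAX_CHANNELS && channel_handlers[chan])
        return channel_handlers[chan](vcpu, chan);

    /* Otherwise punt to userspace (qemu) with a heavyweight exit and
       let it service the channel (qcow block backend, etc.). */
    vcpu->run->exit_reason = KVM_EXIT_HYPERCALL;
    vcpu->run->hypercall.nr = KVM_HC_WAKE_CHANNEL;
    vcpu->run->hypercall.args[0] = chan;
    return 0;    /* 0 == return to userspace in this sketch */
}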

Some points:
- There were no receive/send/block_send hypercalls in the first place;
  there were just register and notify hypercalls.
- The balloon code also uses hypercalls and lets userspace handle them,
  so a higher layer can drive the guest's inflate/deflate actions.
- The good thing about using hypercalls rather than PIO is that they
  are CPU arch agnostic.
- It's also more complex to assign an I/O range to a driver inside the
  guest (not that complex, but harder than issuing a simple hypercall);
  see the sketch below.
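
As a sketch of that last point (hypothetical hypercall numbers, reusing
the hypercall1() helper sketched further up, and __pa() standing for
the guest kernel's virt-to-phys conversion), the whole guest-side
register/notify interface is roughly:

#define HC_REGISTER_RING   10   /* hypothetical */
#define HC_NOTIFY_CHANNEL  11   /* hypothetical */

struct pv_ring {
    unsigned int  prod, cons;
    unsigned char data[4096 - 8];
};

static struct pv_ring ring __attribute__((aligned(4096)));

static int pv_driver_init(void)
{
    /* Hand the host the guest-physical address of the shared ring;
       no PIO port or MMIO BAR has to be assigned or discovered. */
    return hypercall1(HC_REGISTER_RING, __pa(&ring));
}

static void pv_driver_kick(void)
{
    /* Tell the host there is new work to look at. */
    hypercall1(HC_NOTIFY_CHANNEL, 0);
}

The only arch-specific bit is the hypercall instruction hidden inside
the helper; the interface itself carries over to other archs unchanged.
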
Regards,
Dor.
