Dor Laor wrote:
>> Can you elaborate here?  Using a PCI discovery mechanism, you've got 
>> your memory already.  No point in reinventing PCI with hypercalls.
>>     
> In this case I agree that it should be done using pci/other_bus config 
> space.
>   
>>> 2. For other purposes, such as a balloon driver, deflate/inflate 
>>> hypercalls are needed.
>>>    Although mmio/pio can be used on x86, this is not compatible 
>>> with other architectures.
>>>       
>> Isn't a balloon driver just another virtio device?  Rather, it might 
>> be interesting to build a simple RPC mechanism on top of virtio and do 
>> things like balloon on top of that.
>>     
> Currently the balloon driver is not a virtio device, but it will become 
> one. Nevertheless, not all devices must be virtio, and we cannot 
> predict all sorts of usages. Even if a device can work over virtio, 
> that shouldn't be a prerequisite.
>
> I have two more points in favor of userspace hypercall handling:
> 1. Hypercalls are needed for pci pass-through devices.
>     We have a home-grown implementation for pci pass-through devices 
> that will soon be merged.
>     It allows redirecting a physical pci device into a guest.
>     The guest kernel issues hypercalls to know whether a device is 
> physical or not. It's much easier to let userspace catch them, since 
> it is aware of all devices, unlike the kernel.
>   

It's really hard to say here without seeing the code, but if you really 
need to use a hypercall, then I think the better approach is to define 
a higher-level interface (like an exit reason) and do the translation 
from hypercall to that exit reason in the kernel.  There's no 
performance difference in doing this.
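
To make that concrete, here's roughly what I have in mind, modeled in 
plain C rather than actual kvm code (all names and numbers below are 
made up for illustration):

/*
 * Illustrative sketch only -- not actual kvm code.  The kernel owns the
 * hypercall namespace and converts each hypercall it wants userspace to
 * finish into a typed exit reason, so the guest ABI never depends on
 * which userspace happens to be running.
 */
#include <stdint.h>

enum exit_reason {
	EXIT_UNKNOWN_HYPERCALL,	/* kernel fails it with -ENOSYS */
	EXIT_PCI_PROBE,		/* "is this device physical?" */
	EXIT_BALLOON_INFLATE,
};

struct exit_info {
	enum exit_reason reason;
	uint64_t arg;		/* e.g. bus/dev/fn for EXIT_PCI_PROBE */
};

/* Done once, in kernel space: hypercall number -> exit reason. */
static struct exit_info translate_hypercall(uint64_t nr, uint64_t a0)
{
	struct exit_info info = { EXIT_UNKNOWN_HYPERCALL, 0 };

	switch (nr) {
	case 10: info.reason = EXIT_PCI_PROBE;       info.arg = a0; break;
	case 11: info.reason = EXIT_BALLOON_INFLATE; info.arg = a0; break;
	}
	return info;
}

Userspace then switches on the exit reason it gets from the kernel, not 
on raw hypercall numbers, so two different userspaces can't give the 
same number two different meanings.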

My thinking is that there will be userspaces other than QEMU.  The 
hypercall interface is static and needs to be treated as part of the 
host=>guest ABI.  By allowing hypercalls to be interpreted by userspace, 
you make the host=>guest ABI depend on userspace too, instead of just 
kernel space.

The only arguments I can see for passing through hypercalls are:

1) you may want two separate userspaces to define the same hypercall 
number in two different ways

2) it's easier to just pass through the hypercall by default than it is 
to translate it to a higher-level exit reason

I think #1 is fundamentally a bad thing to allow.  I think #2 is not 
justified, because you're just making the hypercall interface part of 
the kernel/userspace interface anyway.

> 2. Vmexit speeds
>    Theoretically, vmcall should be faster than pio/mmio on bare 
> hardware.
>    When implementing a PV driver, the guest implementation is agnostic 
> to the host implementation. For maximum performance the host side will 
> use kernel modules, while for flexibility a userspace implementation 
> will do the job.
>    So although the vmcall advantage is negligible compared to a 
> context switch to userspace, there will be occasions where the host 
> has an in-kernel PV driver backend.
>   

Whether to use hypercalls vs. PIO is a separate issue from whether 
hypercalls should be handled in userspace.  I think we should always 
handle hypercalls in kernel space, and that the hypercall interface 
ought to be defined within the kernel.  Now, this doesn't mean that the 
result of a hypercall can't be to drop down to userspace, but I don't 
think we should do it blindly.
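
On the speed point specifically: the guest-side notification mechanism 
is orthogonal to where the host handles it.  Something like the 
following guest code (a sketch only; the hypercall number and port are 
arbitrary values I picked for the example) works the same whether the 
host side completes it in the kernel or bounces it out to userspace:

/* Guest-side sketch (x86, gcc inline asm), illustrative values only. */
#include <stdint.h>

/* Notify the host via vmcall: nr in eax, first argument in ebx,
 * return value comes back in eax. */
static inline long hypercall1(unsigned long nr, unsigned long a0)
{
	long ret;

	asm volatile("vmcall"
		     : "=a"(ret)
		     : "a"(nr), "b"(a0)
		     : "memory");
	return ret;
}

/* Notify the host via a port write instead. */
static inline void pio_notify(uint16_t port, uint32_t val)
{
	asm volatile("outl %0, %1" : : "a"(val), "Nd"(port));
}

Either way, the cost that dominates is whether the exit stops in the 
kernel or goes all the way out to userspace, which is exactly why that 
decision shouldn't leak into the guest ABI.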

> Does this help change your minds?
>   

No, but I'm hoping that I can change yours :-)

Regards,

Anthony Liguori

> Dor
>

