Avi Kivity wrote:
>> There's no reason that the PIO operations couldn't be handled in the 
>> kernel.  You'll already need some level of cooperation in userspace 
>> unless you plan on implementing the PCI bus in kernel space too.  
>> It's easy enough in the pci_map function in QEMU to just notify the 
>> kernel that it should listen on a particular PIO range.
>
> This is a config space write, right?  If so, the range is the regular 
> 0xcf8-0xcff and it has to be handled very specially.

This is a per-device IO region (a PIO BAR), and as best as I can tell, 
the PCI device advertises the size of the region and the OS then picks 
a range of PIO space and programs it into the device.  So we would just 
need to implement a generic userspace virtio PCI device in QEMU that, 
when this happens, does an ioctl to tell the kernel what region to 
listen on for that particular device.
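To make that concrete, here is a rough sketch of what that registration 
could look like.  Note that KVM_REGISTER_PIO_RANGE and struct 
kvm_pio_range are made up purely to illustrate the shape of the 
interface; no such ioctl exists today:

#include <stdint.h>
#include <sys/ioctl.h>

/* Hypothetical -- nothing like this is in the kvm ABI today. */
struct kvm_pio_range {
        uint16_t base;    /* first port the guest programmed into the BAR */
        uint16_t len;     /* region size the device advertised */
        uint32_t dev_id;  /* which in-kernel backend handles this range */
};

#define KVM_REGISTER_PIO_RANGE _IOW('k', 0xf0, struct kvm_pio_range)

/* Called from the device's pci_map callback once the guest has sized
 * and programmed the BAR (and again if the guest later moves it). */
static int register_pio_range(int vm_fd, uint16_t base, uint16_t len,
                              uint32_t dev_id)
{
        struct kvm_pio_range range = {
                .base   = base,
                .len    = len,
                .dev_id = dev_id,
        };

        return ioctl(vm_fd, KVM_REGISTER_PIO_RANGE, &range);
}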

>> vmcalls will certainly get faster but I doubt that the cost 
>> difference between vmcall and pio will ever be greater than a few 
>> hundred cycles.  The only performance sensitive operation here would 
>> be the kick and I don't think a few hundred cycles in the kick path 
>> is ever going to be that significant for overall performance.
>
> Why do you think the difference will be a few hundred cycles?

The only difference in hardware between a PIO exit and a vmcall is that 
you don't have to write out an exit reason in the VMC[SB].  So the 
performance difference between PIO and vmcall shouldn't be that great 
(and if it were, the difference would probably be obvious today).  
That's different from, say, a PF exit, because with a PF you also have 
to attempt to resolve it by walking the guest page tables before 
determining that you do in fact need to exit.
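For reference, this is roughly what the two kick flavors look like from 
the guest side (the port and hypercall numbers are invented for the 
example).  Both are a single instruction; the difference is only in how 
much the hardware has to record about why the exit happened:

#include <stdint.h>

#define VIRTIO_NOTIFY_PORT 0xc200  /* hypothetical port from the BAR */
#define HC_VIRTIO_KICK     42      /* hypothetical hypercall number */

/* Kick via PIO: the port number itself identifies the device. */
static inline void kick_pio(uint16_t queue)
{
        asm volatile("outw %w0, %w1"
                     : : "a"(queue), "Nd"((uint16_t)VIRTIO_NOTIFY_PORT));
}

/* Kick via hypercall: nr and args are passed in registers.
 * (vmcall is the Intel opcode; AMD uses vmmcall.) */
static inline void kick_vmcall(uint16_t queue)
{
        unsigned long nr = HC_VIRTIO_KICK;
        asm volatile("vmcall"
                     : "+a"(nr)
                     : "b"((unsigned long)queue)
                     : "memory");
}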

>   And if you have a large number of devices, searching the list 
> becomes expensive too.

The PIO address space is relatively small (only 64k ports), so you 
could use a radix tree or even a direct array lookup if you're 
concerned about lookup performance.
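A flat table would look something like this (names are illustrative, 
not from the kvm source).  x86 only has 64k ports, so dispatch is a 
single indexed load no matter how many devices are registered:

#include <stddef.h>
#include <stdint.h>

#define PIO_PORTS 65536

typedef void (*pio_handler_t)(void *dev, uint16_t port, uint32_t val);

struct pio_table {
        pio_handler_t handler[PIO_PORTS];
        void         *dev[PIO_PORTS];
};

static void pio_register(struct pio_table *t, uint16_t base,
                         uint16_t len, pio_handler_t fn, void *dev)
{
        for (size_t p = base; p < (size_t)base + len; p++) {
                t->handler[p] = fn;
                t->dev[p]     = dev;
        }
}

/* Called on a PIO exit: O(1) regardless of device count. */
static int pio_dispatch(struct pio_table *t, uint16_t port, uint32_t val)
{
        if (!t->handler[port])
                return -1;      /* not ours; punt to userspace as usual */
        t->handler[port](t->dev[port], port, val);
        return 0;
}

The cost is about a megabyte of table per VM, which is why a radix tree 
might be the better middle ground.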

>> So why introduce the extra complexity?
>
> Overall I think it reduces complexity if we have in-kernel devices.  
> Anyway we can add additional signalling methods later.

In-kernel virtio backends add quite a lot of complexity.  Just the 
mechanism to set up the device is complicated enough.  I suspect that 
it'll be necessary down the road for performance, but I certainly don't 
think it's a simplification.

Regards,

Anthony Liguori

