Anthony Liguori wrote:
> Avi Kivity wrote:
>>> There's no reason that the PIO operations couldn't be handled in the 
>>> kernel.  You'll already need some level of cooperation in userspace 
>>> unless you plan on implementing the PCI bus in kernel space too.  
>>> It's easy enough in the pci_map function in QEMU to just notify the 
>>> kernel that it should listen on a particular PIO range.
>>>
>>>   
>>
>> This is a config space write, right?  If so, the range is the regular 
>> 0xcf8-0xcff and it has to be very specially handled.
>
> This is a per-device IO slot and, as best I can tell, the PCI device 
> advertises the size of the region; the OS then identifies a range 
> of PIO space to use and tells the PCI device about it.  So we would 
> just need to implement a generic userspace virtio PCI device in QEMU 
> that does an ioctl to the kernel when this happens, telling it what 
> region to listen on for a particular device.
>

I'll just go and read the patches more carefully before making any more 
stupid remarks about the code.

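For concreteness, here's roughly the shape of notification I'd imagine 
for that pci_map path.  This is only a sketch; none of the names below 
come from the actual patches (the ioctl number, struct, and fields are 
all made up):

#include <stdint.h>
#include <sys/ioctl.h>

struct kvm_virtio_pio_range {		/* hypothetical ioctl argument */
	uint32_t devfn;			/* which virtio device */
	uint16_t port;			/* base of the PIO BAR */
	uint16_t size;			/* size advertised by the device */
};

#define KVM_VIRTIO_SET_PIO_RANGE _IOW('k', 0xe0, struct kvm_virtio_pio_range)

/* Called from QEMU's pci_map callback once the guest OS has
 * programmed the BAR, so the kernel can claim the fast-path
 * accesses for that device. */
static int notify_kernel_pio_range(int vm_fd, uint32_t devfn,
				   uint16_t port, uint16_t size)
{
	struct kvm_virtio_pio_range range = {
		.devfn = devfn,
		.port  = port,
		.size  = size,
	};

	return ioctl(vm_fd, KVM_VIRTIO_SET_PIO_RANGE, &range);
}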

>>> vmcalls will certainly get faster, but I doubt that the cost 
>>> difference between vmcall and pio will ever be greater than a few 
>>> hundred cycles.  The only performance-sensitive operation here would 
>>> be the kick, and I don't think a few hundred cycles in the kick path 
>>> is ever going to be that significant for overall performance.
>>>
>>>   
>>
>> Why do you think the difference will be a few hundred cycles?
>
> The only difference in hardware between a PIO exit and a vmcall is 
> that you don't have to write out an exit reason in the VMC[SB].  So 
> the performance difference between PIO and vmcall shouldn't be that 
> great (and if it were, the difference would probably be obvious 
> today).  That's different from, say, a PF exit, because with a PF you 
> also have to attempt to resolve it by walking the guest page table 
> before determining that you do in fact need to exit.
>

You do still have to consult the pio bitmaps with pio.  Point taken, though.
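
If someone wants hard numbers, a rough guest-side microbenchmark along 
these lines would settle it.  A sketch, not from the patches: it assumes 
root in the guest (for iopl(3)), uses port 0x80 (the traditional "safe" 
delay port), and uses vmcall, so Intel only (AMD spells it vmmcall); the 
totals include the host's software handling, not just the hardware exit 
cost.

#include <stdio.h>
#include <stdint.h>
#include <sys/io.h>

static inline uint64_t rdtsc(void)
{
	uint32_t lo, hi;

	asm volatile("rdtsc" : "=a"(lo), "=d"(hi));
	return ((uint64_t)hi << 32) | lo;
}

int main(void)
{
	enum { ITERS = 100000 };
	uint64_t t0, t1;
	int i;

	if (iopl(3)) {
		perror("iopl");
		return 1;
	}

	t0 = rdtsc();
	for (i = 0; i < ITERS; i++)
		outb(0, 0x80);		/* PIO exit on every iteration */
	t1 = rdtsc();
	printf("pio:    %llu cycles/exit\n",
	       (unsigned long long)((t1 - t0) / ITERS));

	t0 = rdtsc();
	for (i = 0; i < ITERS; i++) {
		unsigned long nr = -1UL; /* bogus hypercall nr: a cheap nop */

		asm volatile("vmcall" : "+a"(nr) : : "memory");
	}
	t1 = rdtsc();
	printf("vmcall: %llu cycles/exit\n",
	       (unsigned long long)((t1 - t0) / ITERS));
	return 0;
}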

>
>>> So why introduce the extra complexity?
>>>   
>>
>> Overall I think it reduces complexity if we have in-kernel devices.  
>> Anyway, we can add additional signalling methods later.
>
> In-kernel virtio backends add quite a lot of complexity.  Just the 
> mechanism to set up the device is complicated enough.  I suspect it 
> will be necessary down the road for performance, but I certainly don't 
> think it's a simplification.

I didn't mean that in-kernel devices simplify things (they don't), but 
that using hypercalls is simpler for in-kernel devices than pio.
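
What I mean is only that the dispatch is simpler: a hypercall kick 
names the device and queue directly in registers, while a PIO kick 
only gives us a port number, so the kernel must first walk the 
registered ranges to find the device.  A self-contained sketch (all 
names here are made up, not from the patches):

#include <stddef.h>
#include <stdint.h>

struct vdev {
	uint16_t pio_base;	/* PIO BAR assigned by the guest OS */
	uint16_t pio_size;
	unsigned int nvqs;	/* number of virtqueues */
};

#define NDEVS 4
static struct vdev devs[NDEVS];

/* Hypercall path: the device id (and queue index) arrive in guest
 * registers, so the lookup is a direct index. */
static struct vdev *kick_hypercall(unsigned long devid)
{
	return devid < NDEVS ? &devs[devid] : NULL;
}

/* PIO path: only the port (plus the data written) is known, so the
 * registered ranges have to be scanned before we know whose kick
 * this is. */
static struct vdev *kick_pio(uint16_t port)
{
	size_t i;

	for (i = 0; i < NDEVS; i++) {
		struct vdev *d = &devs[i];

		if (port >= d->pio_base && port < d->pio_base + d->pio_size)
			return d;
	}
	return NULL;		/* not ours: bounce the exit to userspace */
}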


-- 
error compiling committee.c: too many arguments to function

