Avi Kivity wrote:
> Dong, Eddie wrote:
>> Avi Kivity wrote:
>> 
>>> Dong, Eddie wrote:
>>> 
>>>>> There's a two-liner required to make it work.  I'll add it soon.
>>>>> 
>>>>> 
>>>>> 
>>>> But you still need to issue WBINVD on all pCPUs, which just moves
>>>> the non-responsive time from one place to another, no?
>>>> 
>>>> 
>>> You don't actually need to emulate wbinvd, you can just ignore it.
>>> 
>>> The only reason I can think of to use wbinvd is if you're taking a
>>> cpu down (for a deep sleep state, or if you're shutting it off).  A
>>> guest need not do that. 
>>> 
>>> Any other reason?  DMA?  All DMA today is cache-coherent, no?
>>> 
>>> 
>> For legacy PCI devices, yes, it is cache-coherent, but for PCIe
>> devices it is no longer a must.  A PCIe device may not generate
>> snoop cycles and thus requires the OS to flush the cache.
>> 
>> For example, a guest with a directly assigned device, say audio, can
>> copy data to a DMA buffer, issue wbinvd to flush the cache out, and
>> then ask the HW DMA engine to output it.
>> 
> 
> Okay.  In that case the host can emulate wbinvd by using the clflush
> instruction, which is much faster (although overall execution time may
> be higher), maintaining real-time response times.

Faster?  Maybe.
The issue is that clflush takes a virtual address parameter, so KVM
needs to map the gpa first and then do the flush.
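
Roughly something like the sketch below is what I have in mind.  This is
hypothetical, not real KVM code -- emulate_wbinvd_by_clflush() and the
gfn-range walk are made up for illustration; it just shows the
gpa -> page -> va mapping step followed by a per-cacheline clflush loop:

static void emulate_wbinvd_by_clflush(struct kvm *kvm, gfn_t start,
				      unsigned long npages)
{
	unsigned long i;

	for (i = 0; i < npages; i++) {
		/* resolve gfn -> struct page (pins the page) */
		struct page *page = gfn_to_page(kvm, start + i);
		char *va;
		unsigned int off;

		if (is_error_page(page))
			continue;

		va = kmap(page);	/* gpa -> host va */
		for (off = 0; off < PAGE_SIZE;
		     off += boot_cpu_data.x86_clflush_size)
			asm volatile("clflush %0" : "+m" (va[off]));
		kunmap(page);
		kvm_release_page_clean(page);
	}
	/* clflush is only ordered by a fence */
	asm volatile("mfence" ::: "memory");
}

This only touches the guest's own pages, so the host and other VMs keep
their cache contents, which is exactly the DoS angle below.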

With this additional overhead, I am not sure which one is faster.  But
yes, this is the trend we may walk toward to reduce denial of service
(flushing the host's or another VM's cache slows down the whole system).

Eddie
