Dong, Eddie wrote:
> Avi Kivity wrote:
>> Dong, Eddie wrote:
>>>> There's a two-liner required to make it work.  I'll add it soon.
>>>>
>>>>
>>> But you still need to issue WBINVD on all pCPUs, which just moves
>>> the non-responsive time from one place to another, no?
>>>
>> You don't actually need to emulate wbinvd, you can just ignore it.
>>
>> The only reason I can think of to use wbinvd is if you're taking a cpu
>> down (for a deep sleep state, or if you're shutting it off).  A guest
>> need not do that. 
>>
>> Any other reason? dma?  all dma today is cache-coherent, no?
>>
> For legacy PCI devices, yes, it is cache-coherent, but for PCIe
> devices it is no longer a must. A PCIe device may not generate snooped
> cycles and thus requires the OS to flush the cache.
> 
> For example, a guest with a directly assigned device, say audio, can
> copy data to the DMA buffer, then issue wbinvd to flush the cache out,
> and ask the HW DMA to output.

So if you want the higher performance of PCIe, you need the
performance-killing wbinvd (not to speak of the latency)? That sounds a
bit contradictory to me. Is this also true for native PCIe usage?
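
To make the pattern concrete, here is a minimal sketch (my own
illustration, not code from KVM or from any real driver) of what a guest
driver for a non-snooping PCIe device has to do: fill the DMA buffer,
write the dirty lines back, and only then kick the device. Flushing just
the buffer with clflush would avoid the full-cache wbinvd; the
flush_range() helper and the hard-coded line size below are assumptions.

#include <stddef.h>
#include <stdint.h>

#define CACHE_LINE 64	/* assumed line size; real code would query CPUID */

/* Write back (and invalidate) only the lines covering the DMA buffer. */
static inline void flush_range(void *buf, size_t len)
{
	char *p = (char *)((uintptr_t)buf & ~(uintptr_t)(CACHE_LINE - 1));
	char *end = (char *)buf + len;

	for (; p < end; p += CACHE_LINE)
		asm volatile("clflush %0" : "+m" (*p));
	asm volatile("mfence" ::: "memory");	/* order flushes before the doorbell write */
}

/* What the guest in Eddie's example does instead: flush everything. */
static inline void flush_whole_cache(void)
{
	asm volatile("wbinvd" ::: "memory");
}

The point being that wbinvd throws away every line on the CPU, not just
the buffer, which is where the latency below comes from.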

What really frightens me about wbinvd is that its latency "nicely"
scales with the cache sizes. And I think my observed latency is far from
being the worst case. In a different experiment, I once measured wbinvd
latencies of a few milliseconds... :(
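
The kind of measurement I mean can be sketched like this (illustration
only, not my actual test code; wbinvd is privileged, so it has to run in
ring 0, e.g. from a small kernel module, and turning cycles into time
assumes an invariant TSC):

#include <stdint.h>

static inline uint64_t rdtsc(void)
{
	uint32_t lo, hi;

	asm volatile("rdtsc" : "=a" (lo), "=d" (hi));
	return ((uint64_t)hi << 32) | lo;
}

static uint64_t time_wbinvd_cycles(void)
{
	uint64_t t0, t1;

	/* no cpuid/lfence serialization - good enough at millisecond scale */
	t0 = rdtsc();
	asm volatile("wbinvd" ::: "memory");
	t1 = rdtsc();

	return t1 - t0;	/* divide by the TSC frequency for wall-clock time */
}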

Jan

-- 
Siemens AG, Corporate Technology, CT SE 2
Corporate Competence Center Embedded Linux
