On 02/28/2013 10:24 AM, Michael S. Tsirkin wrote:
OK we talked about this a while ago, here's
a summary and some proposals:
At the moment, virtio PCI uses IO BARs for all accesses.
The reason for IO use is the cost of different VM exit types
of transactions and their emulation on KVM on x86
(it
On 04/29/2013 07:48 AM, Don Dutile wrote:
c) it's architecture neutral, or can be made architecture neutral.
e.g., inb/outb PCI ioport support is very different between x86 and
non-x86.
A hypercall interface would not have that dependency/difference.
You are joking, right? Hypercalls
On Tue, Mar 05, 2013 at 11:14:31PM -0800, H. Peter Anvin wrote:
On 03/05/2013 04:05 PM, H. Peter Anvin wrote:
On 02/28/2013 07:24 AM, Michael S. Tsirkin wrote:
3. hypervisor assigned IO address
qemu can reserve IO addresses and assign to virtio devices.
2 bytes per device (for notification and ISR access) will be
enough. So we can reserve 4K and this gets us 2000 devices.
On 03/06/2013 01:21 AM, Michael S. Tsirkin wrote:
Right. Though even with better granularity, bridge windows
would still be a (smaller) problem causing fragmentation.
If we were to extend the PCI spec I would go for a bridge without
windows at all: a bridge can snoop on configuration
On Wed, Mar 06, 2013 at 03:15:16AM -0800, H. Peter Anvin wrote:
On 03/06/2013 01:21 AM, Michael S. Tsirkin wrote:
Right. Though even with better granularity, bridge windows
would still be a (smaller) problem causing fragmentation.
If we were to extend the PCI spec I would go for a bridge without
windows at all: a bridge can snoop on configuration
On 02/28/2013 07:24 AM, Michael S. Tsirkin wrote:
3. hypervisor assigned IO address
qemu can reserve IO addresses and assign to virtio devices.
2 bytes per device (for notification and ISR access) will be
enough. So we can reserve 4K and this gets us 2000 devices.
On 03/05/2013 04:05 PM, H. Peter Anvin wrote:
On 02/28/2013 07:24 AM, Michael S. Tsirkin wrote:
3. hypervisor assigned IO address
qemu can reserve IO addresses and assign to virtio devices.
2 bytes per device (for notification and ISR access) will be
enough. So we can reserve 4K and this gets us 2000 devices.
On Thu, Feb 28, 2013 at 05:24:33PM +0200, Michael S. Tsirkin wrote:
OK we talked about this a while ago, here's
a summary and some proposals:
At the moment, virtio PCI uses IO BARs for all accesses.
The reason for IO use is the cost of different VM exit types
of transactions and their emulation on KVM on x86.
On 2013-02-28 16:24, Michael S. Tsirkin wrote:
Another problem with PIO is support for physical virtio devices,
and nested virt: KVM currently programs all PIO accesses
to cause vm exit, so using this device in a VM will be slow.
Not answering your question, but support for programming direct