Re: [kvm-devel] [PATCH 3/3] virtio PCI device

2007-11-27 Thread Avi Kivity
Anthony Liguori wrote:
 Another point is that virtio still has a lot of leading zeros in its 
 mileage counter. We need to keep things flexible and learn from 
 others as much as possible, especially when talking about the ABI.

 Yes, after thinking about it over holiday, I agree that we should at 
 least introduce a virtio-pci feature bitmask.  I'm not inclined to 
 attempt to define a hypercall ABI or anything like that right now but 
 having the feature bitmask will at least make it possible to do such a 
 thing in the future.

No, definitely not define a hypercall ABI.  The feature bit should say 
"this device understands a hypervisor-specific way of kicking; consult 
your hypervisor manual and cpuid bits for further details.  Should you 
not be satisfied with this method, port io is still available."
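Roughly, the guest-side check being described could look like this (the 
feature register, bit, and helper names are invented for illustration; 
only the pio path exists in the posted patch):

static void vp_kick(struct virtio_pci_device *vp_dev, u16 queue_index)
{
        /* VIRTIO_PCI_HOST_FEATURES / VIRTIO_PCI_F_HV_KICK are made-up names */
        u32 features = ioread32(vp_dev->ioaddr + VIRTIO_PCI_HOST_FEATURES);

        if (features & VIRTIO_PCI_F_HV_KICK)
                hv_specific_kick(queue_index);  /* per hypervisor manual/cpuid */
        else
                iowrite16(queue_index, vp_dev->ioaddr + VIRTIO_PCI_QUEUE_NOTIFY);
}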


 I'm wary of introducing the notion of hypercalls to this device 
 because it makes the device VMM specific.  Maybe we could have the 
 device provide an option ROM that was treated as the device BIOS 
 that we could use for kicking and interrupt acking?  Any idea of how 
 that would map to Windows?  Are there real PCI devices that use the 
 option ROM space to provide what's essentially firmware?  
 Unfortunately, I don't think an option ROM BIOS would map well to 
 other architectures.

   

 The BIOS wouldn't work even on x86 because it isn't mapped to the 
 guest address space (at least not consistently), and doesn't know the 
 guest's programming model (16, 32, or 64-bits? segmented or flat?)

 Xen uses a hypercall page to abstract these details out. However, I'm 
 not proposing that. Simply indicate that we support hypercalls, and 
 use some layer below to actually send them. It is the responsibility 
 of this layer to detect if hypercalls are present and how to call them.

 Hey, I think the best place for it is in paravirt_ops. We can even 
 patch the hypercall instruction inline, and the driver doesn't need 
 to know about it.

 Yes, paravirt_ops is attractive for abstracting the hypercall calling 
 mechanism but it's still necessary to figure out how hypercalls would 
 be identified.  I think it would be necessary to define a virtio 
 specific hypercall space and use the virtio device ID to claim subspaces.

 For instance, the hypercall number could be (virtio_devid << 16) | 
 (call number).  How that translates into a hypercall would then be 
 part of the paravirt_ops abstraction.  In KVM, we may have a single 
 virtio hypercall where we pass the virtio hypercall number as one of 
 the arguments or something like that.

If we don't call it a hypercall, but a virtio kick operation, we don't 
need to worry about the hypercall number or ABI.  It's just a function 
that takes an argument that's implemented differently by every 
hypervisor.  The default implementation can be a pio operation.

 Make it appear as a pci function?  (though my feeling is that 
 multiple mounts should be different devices; we can then hotplug 
 mountpoints).
 

 We may run out of PCI slots though :-/
   

 Then we can start selling virtio extension chassis.

 :-)  Do you know if there is a hard limit on the number of devices on 
 a PCI bus?  My concern was that it was limited by something stupid 
 like an 8-bit identifier.

IIRC pci slots are 8-bit, but you can have multiple buses, so 
effectively 16 bits of device address space (discounting functions which 
are likely not hot-pluggable).
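(For reference, the standard encoding packs a 5-bit slot and a 3-bit 
function into an 8-bit devfn, with an 8-bit bus number on top; a sketch 
using the existing <linux/pci.h> helpers:)

#include <linux/pci.h>

/* 256 buses x 32 slots x 8 functions; bus + devfn = 16 bits of address */
static inline u16 example_bdf(u8 bus, u8 slot, u8 func)
{
        return (bus << 8) | PCI_DEVFN(slot, func);
}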


-- 
error compiling committee.c: too many arguments to function




Re: [kvm-devel] [PATCH 3/3] virtio PCI device

2007-11-27 Thread Avi Kivity
Carsten Otte wrote:
 Avi Kivity wrote:
   
 No, definitely not define a hypercall ABI.  The feature bit should say 
 "this device understands a hypervisor-specific way of kicking; consult 
 your hypervisor manual and cpuid bits for further details.  Should you 
 not be satisfied with this method, port io is still available."
 
 ...unless you're lucky enough to be on s390 where pio is not available.
 I don't see why we'd have two different ways to talk to a virtio 
 device. I think we should use a hypercall for interrupt injection, 
 without support for grumpy old soldered pci features other than 
 HPA-style Lguest PCI bus organization. There are no devices that we 
 want to be backward compatible with.
   

pio is useful for qemu, for example, and as a fallback against changing 
hypervisor calling conventions.  As Anthony points out, it makes a 
qemu-implemented device instantly available to Xen at no extra charge.

My wording was inappropriate for s390, though.  The politically correct 
version reads "this device understands a hypervisor-specific way of 
kicking; consult your hypervisor manual and platform-specific way of 
querying hypervisor information for further details.  Should you not be 
satisfied with this method, the standard method of kicking virtio 
devices on your platform is still available."

On s390, I imagine that the standard method is the fabled diag 
instruction (which, with the proper arguments, will cook your steak to 
the exact shade of medium-rare you desire).  So you will never need to 
set the "hypervisor-specific way of kicking" bit, as your standard 
method is already optimal.

Unfortunately, we have to care for platform differences, subarch 
differences (vmx/svm), hypervisor differences (with virtio), and guest 
differences (Linux/Windows/pvLinux, 32/64).  Much care is needed when 
designing the ABI here.

[actually thinking a bit, this is specific to the virtio pci binding; 
s390 will never see any of it]

-- 
error compiling committee.c: too many arguments to function




Re: [kvm-devel] [PATCH 3/3] virtio PCI device

2007-11-27 Thread Carsten Otte
Avi Kivity wrote:
 Unfortunately, we have to care for platform differences, subarch 
 differences (vmx/svm), hypervisor differences (with virtio), and guest 
 differences (Linux/Windows/pvLinux, 32/64).  Much care is needed when 
 designing the ABI here.
Yea, I agree.

 [actually thinking a bit, this is specific to the virtio pci binding; 
 s390 will never see any of it]
You remember that we've lost the big debate around virtio in Tucson? 
We intend to bind our virtio devices to PCI too, so that they look the 
same in Linux userland across architectures.



Re: [kvm-devel] [PATCH 3/3] virtio PCI device

2007-11-27 Thread Avi Kivity
Carsten Otte wrote:

 [actually thinking a bit, this is specific to the virtio pci binding; 
 s390 will never see any of it]
 You remember that we've lost the big debate around virtio in Tucson? 

I was in the embedded BOF.

 We intend to bind our virtio devices to PCI too, so that they look the 
 same in Linux userland across architectures.

Ouch.

-- 
error compiling committee.c: too many arguments to function




Re: [kvm-devel] [PATCH 3/3] virtio PCI device

2007-11-27 Thread Carsten Otte
Avi Kivity wrote:
 We intend to bind our virtio devices to PCI too, so that they look the 
 same in Linux userland across architectures.
 
 Ouch.
That was my initial opinion too, but HPA has come up with a lean and 
clean PCI binding for lguest. I think we should seriously consider 
using that over the current qemu device emulation based thing.



Re: [kvm-devel] [PATCH 3/3] virtio PCI device

2007-11-27 Thread Dor Laor

Carsten Otte wrote:

Avi Kivity wrote:
  
We intend to bind our virtio devices to PCI too, so that they look the 
same in Linux userland across architectures.
  

Ouch.

That was my initial opinion too, but HPA has come up with a lean and 
clean PCI binding for lguest. I think we should seriously consider 
using that over the current qemu device emulation based thing.


  

There are two solutions for this problem:
1. Use hypercalls and supply a mechanism for hypercall patching for qemu.
   This way we can make s390 & qemu/xen happy.
2. Have two transport mechanisms for virtio.
   Actually this is what we have today (but not yet merged) - lguest 
uses the pci config space but without using Anthony's pci module.
   We'll have a virtio host (qemu/kernel) implementation for the shared 
memory and interface.
   We'll have a pci transport for x86 that glues the above and a virtual 
transport for s390 and paravirt_ops.

   Both transports will be based on Rusty's config space.
   This is the idea I suggested in Tucson:

 ------   -----------   ---------
 | 9p |   | network |   | block |
 ------   -----------   ---------
 -------------------------------------
 |          virtio interface         |
 -------------------------------------
   --------------      -----------------------------------------------------
   | virtio_pci |  OR  | virtio_vbus (includes configs & hypercall/portio) |
   --------------      -----------------------------------------------------
     ---------------   -----------------
     | virtio_ring |   | virtio_config |
     ---------------   -----------------
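In code, the split would amount to a transport-neutral set of operations 
that virtio_pci and a virtio_vbus each implement; the structure below is 
only a sketch of the idea, not the merged virtio API:

#include <linux/types.h>

struct virtio_device;
struct virtqueue;

/* hypothetical transport interface: block/net/9p only call through this,
 * so pci vs. vbus vs. a future s390 binding is invisible to them */
struct virtio_transport_ops {
        /* access Rusty's config space */
        void (*get_config)(struct virtio_device *vdev, unsigned offset,
                           void *buf, unsigned len);
        void (*set_config)(struct virtio_device *vdev, unsigned offset,
                           const void *buf, unsigned len);
        /* set up a virtqueue over the transport's shared memory */
        struct virtqueue *(*find_vq)(struct virtio_device *vdev, unsigned index,
                                     bool (*callback)(struct virtqueue *vq));
        /* notify the host: pio for virtio_pci, hypercall/portio for vbus */
        void (*kick)(struct virtqueue *vq);
};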

Regards,
Dor



Re: [kvm-devel] [PATCH 3/3] virtio PCI device

2007-11-26 Thread Anthony Liguori
Avi Kivity wrote:
 rx and tx are closely related. You rarely have one without the other.

 In fact, a tuned implementation should have zero kicks or interrupts 
 for bulk transfers. The rx interrupt on the host will process new tx 
 descriptors and fill the guest's rx queue; the guest's transmit 
 function can also check the receive queue. I don't know if that's 
 achievable for Linux guests currently, but we should aim to make it 
 possible.

ATM, the net driver does a pretty good job of disabling kicks/interrupts 
unless they are needed.  Checking for rx on tx and vice versa is a good 
idea and could further help there.  I'll give it a try this week.
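A rough sketch of what checking rx on the tx path might look like (helper 
names invented; the real virtio_net code is organized differently):

static int virtnet_xmit_sketch(struct sk_buff *skb, struct net_device *dev)
{
        struct virtnet_info *vi = netdev_priv(dev);

        virtnet_queue_tx(vi, skb);      /* post skb on the tx virtqueue */
        virtnet_kick_tx(vi);            /* may be skipped if the host still polls */

        /* piggy-back rx work on the tx path: if used rx buffers are already
         * pending, process them now so the host need not raise an interrupt */
        if (virtnet_rx_pending(vi))
                virtnet_do_receive(vi);

        return NETDEV_TX_OK;
}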

 Another point is that virtio still has a lot of leading zeros in its 
 mileage counter. We need to keep things flexible and learn from others 
 as much as possible, especially when talking about the ABI.

Yes, after thinking about it over holiday, I agree that we should at 
least introduce a virtio-pci feature bitmask.  I'm not inclined to 
attempt to define a hypercall ABI or anything like that right now but 
having the feature bitmask will at least make it possible to do such a 
thing in the future.

 I'm wary of introducing the notion of hypercalls to this device 
 because it makes the device VMM specific.  Maybe we could have the 
 device provide an option ROM that was treated as the device BIOS 
 that we could use for kicking and interrupt acking?  Any idea of how 
 that would map to Windows?  Are there real PCI devices that use the 
 option ROM space to provide what's essentially firmware?  
 Unfortunately, I don't think an option ROM BIOS would map well to 
 other architectures.

   

 The BIOS wouldn't work even on x86 because it isn't mapped to the 
 guest address space (at least not consistently), and doesn't know the 
 guest's programming model (16, 32, or 64-bits? segmented or flat?)

 Xen uses a hypercall page to abstract these details out. However, I'm 
 not proposing that. Simply indicate that we support hypercalls, and 
 use some layer below to actually send them. It is the responsibility 
 of this layer to detect if hypercalls are present and how to call them.

 Hey, I think the best place for it is in paravirt_ops. We can even 
 patch the hypercall instruction inline, and the driver doesn't need to 
 know about it.

Yes, paravirt_ops is attractive for abstracting the hypercall calling 
mechanism but it's still necessary to figure out how hypercalls would be 
identified.  I think it would be necessary to define a virtio specific 
hypercall space and use the virtio device ID to claim subspaces.

For instance, the hypercall number could be (virtio_devid << 16) | (call 
number).  How that translates into a hypercall would then be part of the 
paravirt_ops abstraction.  In KVM, we may have a single virtio hypercall 
where we pass the virtio hypercall number as one of the arguments or 
something like that.
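Concretely, something like the following (a possible numbering scheme, 
not an agreed ABI; KVM_HC_VIRTIO is a made-up hypercall number):

/* upper 16 bits: virtio device ID (the subspace), lower 16 bits: call number */
#define VIRTIO_HCALL_NR(devid, call)    (((u32)(devid) << 16) | ((call) & 0xffff))

/* e.g. call 0 ("kick") of device ID 2, passed as an argument to a single
 * KVM virtio hypercall:
 *      kvm_hypercall1(KVM_HC_VIRTIO, VIRTIO_HCALL_NR(2, 0));
 */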

 Not much of an argument, I know.


 wrt. number of queues, 8 queues will consume 32 bytes of pci space 
 if all you store is the ring pfn.
 
 You also at least need a num argument which takes you to 48 or 64 
 depending on whether you care about strange formatting.  8 queues 
 may not be enough either.  Eric and I have discussed whether the 9p 
 virtio device should support multiple mounts per-virtio device and 
 if so, whether each one should have it's own queue.  Any devices 
 that supports this sort of multiplexing will very quickly start 
 using a lot of queues.
 
 Make it appear as a pci function?  (though my feeling is that 
 multiple mounts should be different devices; we can then hotplug 
 mountpoints).
 

 We may run out of PCI slots though :-/
   

 Then we can start selling virtio extension chassis.

:-)  Do you know if there is a hard limit on the number of devices on a 
PCI bus?  My concern was that it was limited by something stupid like an 
8-bit identifier.

Regards,

Anthony Liguori




Re: [kvm-devel] [PATCH 3/3] virtio PCI device

2007-11-23 Thread Anthony Liguori
Avi Kivity wrote:
 Anthony Liguori wrote:
 Well please propose the virtio API first and then I'll adjust the PCI 
 ABI.  I don't want to build things into the ABI that we never 
 actually end up using in virtio :-)

   

 Move ->kick() to virtio_driver.

Then on each kick, all queues have to be checked for processing?  What 
devices do you expect this would help?

 I believe Xen networking uses the same event channel for both rx and 
 tx, so in effect they're using this model.  Long time since I looked 
 though,

I would have to look, but since rx/tx are rather independent actions, 
I'm not sure that you would really save that much.  You still end up 
doing the same number of kicks unless I'm missing something.

 I was thinking more along the lines that a hypercall-based device 
 would certainly be implemented in-kernel whereas the current device 
 is naturally implemented in userspace.  We can simply use a different 
 device for in-kernel drivers than for userspace drivers.  

 Where the device is implemented is an implementation detail that 
 should be hidden from the guest, isn't that one of the strengths of 
 virtualization?  Two examples: a file-based block device implemented 
 in qemu gives you fancy file formats with encryption and compression, 
 while the same device implemented in the kernel gives you a 
 low-overhead path directly to a zillion-disk SAN volume.  Or a 
 user-level network device capable of running with the slirp stack and 
 no permissions vs. the kernel device running copyless most of the time 
 and using a dma engine for the rest but requiring you to be good 
 friends with the admin.

 The user should expect zero reconfigurations moving a VM from one 
 model to the other.

I'm wary of introducing the notion of hypercalls to this device because 
it makes the device VMM specific.  Maybe we could have the device 
provide an option ROM that was treated as the device BIOS that we 
could use for kicking and interrupt acking?  Any idea of how that would 
map to Windows?  Are there real PCI devices that use the option ROM 
space to provide what's essentially firmware?  Unfortunately, I don't 
think an option ROM BIOS would map well to other architectures.

 None of the PCI devices currently work like that in QEMU.  It would 
 be very hard to make a device that worked this way because since the 
 order in which values are written matter a whole lot.  For instance, 
 if you wrote the status register before the queue information, the 
 driver could get into a funky state.
   

 I assume you're talking about restore?  Isn't that atomic?

If you're doing restore by passing the PCI config blob to a registered 
routine, then sure, but that doesn't seem much better to me than just 
having the device generate that blob in the first place (which is what 
we have today).  I was assuming that you would want to use the existing 
PIO/MMIO handlers to do restore by rewriting the config as if the guest was writing it.

 Not much of an argument, I know.


 wrt. number of queues, 8 queues will consume 32 bytes of pci space 
 if all you store is the ring pfn.
 

 You also at least need a num argument which takes you to 48 or 64 
 depending on whether you care about strange formatting.  8 queues may 
 not be enough either.  Eric and I have discussed whether the 9p 
 virtio device should support multiple mounts per-virtio device and if 
 so, whether each one should have it's own queue.  Any devices that 
 supports this sort of multiplexing will very quickly start using a 
 lot of queues.
   

 Make it appear as a pci function?  (though my feeling is that multiple 
 mounts should be different devices; we can then hotplug mountpoints).

We may run out of PCI slots though :-/

 I think most types of hardware have some notion of a selector or 
 mode.  Take a look at the LSI adapter or even VGA.

   

 True.  They aren't fun to use, though.

I don't think they're really any worse :-)

Regards,

Anthony Liguori




Re: [kvm-devel] [PATCH 3/3] virtio PCI device

2007-11-23 Thread Avi Kivity
Anthony Liguori wrote:
 Avi Kivity wrote:
   
 Anthony Liguori wrote:
 
 Well please propose the virtio API first and then I'll adjust the PCI 
 ABI.  I don't want to build things into the ABI that we never 
 actually end up using in virtio :-)

   
   
 Move ->kick() to virtio_driver.
 

 Then on each kick, all queues have to be checked for processing?  What 
 devices do you expect this would help?

   

Networking.

 I believe Xen networking uses the same event channel for both rx and 
 tx, so in effect they're using this model.  Long time since I looked 
 though,
 

 I would have to look, but since rx/tx are rather independent actions, 
 I'm not sure that you would really save that much.  You still end up 
 doing the same number of kicks unless I'm missing something.

   

rx and tx are closely related. You rarely have one without the other.

In fact, a tuned implementation should have zero kicks or interrupts 
for bulk transfers. The rx interrupt on the host will process new tx 
descriptors and fill the guest's rx queue; the guest's transmit function 
can also check the receive queue. I don't know if that's achievable for 
Linux guests currently, but we should aim to make it possible.

Another point is that virtio still has a lot of leading zeros in its 
mileage counter. We need to keep things flexible and learn from others 
as much as possible, especially when talking about the ABI.

 I'm wary of introducing the notion of hypercalls to this device because 
 it makes the device VMM specific.  Maybe we could have the device 
 provide an option ROM that was treated as the device BIOS that we 
 could use for kicking and interrupt acking?  Any idea of how that would 
 map to Windows?  Are there real PCI devices that use the option ROM 
 space to provide what's essentially firmware?  Unfortunately, I don't 
 think an option ROM BIOS would map well to other architectures.

   

The BIOS wouldn't work even on x86 because it isn't mapped to the guest 
address space (at least not consistently), and doesn't know the guest's 
programming model (16, 32, or 64-bits? segmented or flat?)

Xen uses a hypercall page to abstract these details out. However, I'm 
not proposing that. Simply indicate that we support hypercalls, and use 
some layer below to actually send them. It is the responsibility of this 
layer to detect if hypercalls are present and how to call them.

Hey, I think the best place for it is in paravirt_ops. We can even patch 
the hypercall instruction inline, and the driver doesn't need to know 
about it.
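The shape of it would be roughly this (names and port number invented; 
real paravirt_ops patches its own set of hooks, but the indirection is 
the same):

#include <asm/io.h>

struct virtio_kick_ops {
        void (*kick)(unsigned long arg);
};

/* default: plain port io, works under plain qemu */
static void native_virtio_kick(unsigned long arg)
{
        outw(arg, 0x5600);              /* made-up notify port */
}

static struct virtio_kick_ops virtio_kick_ops = {
        .kick = native_virtio_kick,
};

/* a KVM- or Xen-aware kernel installs (or inline-patches) its own hook:
 *      virtio_kick_ops.kick = kvm_hypercall_kick;
 * and the driver keeps calling virtio_kick_ops.kick() unchanged */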

 None of the PCI devices currently work like that in QEMU.  It would 
 be very hard to make a device that worked this way because since the 
 order in which values are written matter a whole lot.  For instance, 
 if you wrote the status register before the queue information, the 
 driver could get into a funky state.
   
   
 I assume you're talking about restore?  Isn't that atomic?
 

 If you're doing restore by passing the PCI config blob to a registered 
 routine, then sure, but that doesn't seem much better to me than just 
 having the device generate that blob in the first place (which is what 
 we have today).  I was assuming that you would want to use the existing 
 PIO/MMIO handlers to do restore by rewriting the config as if the guest was writing it.

   

Sure some complexity is unavoidable. But flat is simpler than indirect.

 Not much of an argument, I know.


 wrt. number of queues, 8 queues will consume 32 bytes of pci space 
 if all you store is the ring pfn.
 
 
 You also at least need a num argument which takes you to 48 or 64 
 depending on whether you care about strange formatting.  8 queues may 
 not be enough either.  Eric and I have discussed whether the 9p 
 virtio device should support multiple mounts per-virtio device and if 
 so, whether each one should have it's own queue.  Any devices that 
 supports this sort of multiplexing will very quickly start using a 
 lot of queues.
   
   
 Make it appear as a pci function?  (though my feeling is that multiple 
 mounts should be different devices; we can then hotplug mountpoints).
 

 We may run out of PCI slots though :-/
   

Then we can start selling virtio extension chassis.

-- 
Any sufficiently difficult bug is indistinguishable from a feature.




Re: [kvm-devel] [PATCH 3/3] virtio PCI device

2007-11-21 Thread Zachary Amsden
On Wed, 2007-11-21 at 09:13 +0200, Avi Kivity wrote:

 Where the device is implemented is an implementation detail that should 
 be hidden from the guest, isn't that one of the strengths of 
 virtualization?  Two examples: a file-based block device implemented in 
 qemu gives you fancy file formats with encryption and compression, while 
 the same device implemented in the kernel gives you a low-overhead path 
 directly to a zillion-disk SAN volume.  Or a user-level network device 
 capable of running with the slirp stack and no permissions vs. the 
 kernel device running copyless most of the time and using a dma engine 
 for the rest but requiring you to be good friends with the admin.
 
 The user should expect zero reconfigurations moving a VM from one model 
 to the other.

I think that is pretty insightful, and indeed, is probably the only
reason we would ever consider using a virtio based driver.

But is this really a virtualization problem, and is virtio the right
place to solve it?  Doesn't I/O hotplug with multipathing or NIC teaming
provide the same infrastructure in a way that is useful in more than
just a virtualization context?

Zach




Re: [kvm-devel] [PATCH 3/3] virtio PCI device

2007-11-20 Thread Avi Kivity
Anthony Liguori wrote:
 This is a PCI device that implements a transport for virtio.  It allows virtio
 devices to be used by QEMU based VMMs like KVM or Xen.

 +
 +/* the notify function used when creating a virt queue */
 +static void vp_notify(struct virtqueue *vq)
 +{
 + struct virtio_pci_device *vp_dev = to_vp_device(vq->vdev);
 + struct virtio_pci_vq_info *info = vq->priv;
 +
 + /* we write the queue's selector into the notification register to
 +  * signal the other end */
 + iowrite16(info->queue_index, vp_dev->ioaddr + VIRTIO_PCI_QUEUE_NOTIFY);
 +}
   

This means we can't kick multiple queues with one exit.

I'd also like to see a hypercall-capable version of this (but that can 
wait).
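For what it's worth, one way to get both would be a doorbell that takes a 
bitmask of queues, so a single write (pio today, a hypercall later) kicks 
several queues at once; the register name and semantics below are invented:

static void vp_notify_many(struct virtio_pci_device *vp_dev, u32 queue_mask)
{
        /* one 32-bit write, one exit, kicks every queue whose bit is set */
        iowrite32(queue_mask, vp_dev->ioaddr + VIRTIO_PCI_QUEUE_NOTIFY_MASK);
}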

 +
 +/* A small wrapper to also acknowledge the interrupt when it's handled.
 + * I really need an EIO hook for the vring so I can ack the interrupt once we
 + * know that we'll be handling the IRQ but before we invoke the callback 
 since
 + * the callback may notify the host which results in the host attempting to
 + * raise an interrupt that we would then mask once we acknowledged the
 + * interrupt. */
 +static irqreturn_t vp_interrupt(int irq, void *opaque)
 +{
 + struct virtio_pci_device *vp_dev = opaque;
 + struct virtio_pci_vq_info *info;
 + irqreturn_t ret = IRQ_NONE;
 + u8 isr;
 +
 + /* reading the ISR has the effect of also clearing it so it's very
 +  * important to save off the value. */
 + isr = ioread8(vp_dev->ioaddr + VIRTIO_PCI_ISR);
   

Can this be implemented via shared memory? We're exiting now on every 
interrupt.


 + return ret;
 +}
 +
 +/* the config->find_vq() implementation */
 +static struct virtqueue *vp_find_vq(struct virtio_device *vdev, unsigned 
 index,
 + bool (*callback)(struct virtqueue *vq))
 +{
 + struct virtio_pci_device *vp_dev = to_vp_device(vdev);
 + struct virtio_pci_vq_info *info;
 + struct virtqueue *vq;
 + int err;
 + u16 num;
 +
 + /* Select the queue we're interested in */
 + iowrite16(index, vp_dev->ioaddr + VIRTIO_PCI_QUEUE_SEL);
   

I would really like to see this implemented as pci config space, with no 
tricks like multiplexing several virtqueues on one register. Something 
like the PCI BARs where you have all the register numbers allocated 
statically to queues.
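Something along these lines, say (offsets purely illustrative, not a 
proposed layout):

/* each queue owns a fixed block of registers, so there is no selector to
 * write first and the whole device state is a flat, shippable blob */
#define VIRTIO_PCI_QUEUE_BASE           0x20    /* after the common header */
#define VIRTIO_PCI_QUEUE_STRIDE         8       /* pfn (4) + num (2) + pad (2) */
#define VIRTIO_PCI_QUEUE_PFN_REG(q)     (VIRTIO_PCI_QUEUE_BASE + (q) * VIRTIO_PCI_QUEUE_STRIDE)
#define VIRTIO_PCI_QUEUE_NUM_REG(q)     (VIRTIO_PCI_QUEUE_PFN_REG(q) + 4)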

 +
 + /* Check if queue is either not available or already active. */
 + num = ioread16(vp_dev->ioaddr + VIRTIO_PCI_QUEUE_NUM);
 + if (!num || ioread32(vp_dev->ioaddr + VIRTIO_PCI_QUEUE_PFN))
 + return ERR_PTR(-ENOENT);
 +
 + /* allocate and fill out our structure the represents an active
 +  * queue */
 + info = kmalloc(sizeof(struct virtio_pci_vq_info), GFP_KERNEL);
 + if (!info)
 + return ERR_PTR(-ENOMEM);
 +
 + info->queue_index = index;
 + info->num = num;
 +
 + /* determine the memory needed for the queue and provide the memory
 +  * location to the host */
 + info->n_pages = DIV_ROUND_UP(vring_size(num), PAGE_SIZE);
 + info->pages = alloc_pages(GFP_KERNEL | __GFP_ZERO,
 +   get_order(info->n_pages));
 + if (info->pages == NULL) {
 + err = -ENOMEM;
 + goto out_info;
 + }
 +
 + /* FIXME: is this sufficient for info->n_pages > 1? */
 + info->queue = kmap(info->pages);
 + if (info->queue == NULL) {
 + err = -ENOMEM;
 + goto out_alloc_pages;
 + }
 +
 + /* activate the queue */
 + iowrite32(page_to_pfn(info->pages),
 +   vp_dev->ioaddr + VIRTIO_PCI_QUEUE_PFN);
 +   
 + /* create the vring */
 + vq = vring_new_virtqueue(info->num, vdev, info->queue,
 +  vp_notify, callback);
 + if (!vq) {
 + err = -ENOMEM;
 + goto out_activate_queue;
 + }
 +
 + vq->priv = info;
 + info->vq = vq;
 +
 + spin_lock(&vp_dev->lock);
 + list_add(&info->node, &vp_dev->virtqueues);
 + spin_unlock(&vp_dev->lock);
 +
   

Is this run only on init? If so the lock isn't needed.


-- 
error compiling committee.c: too many arguments to function




Re: [kvm-devel] [PATCH 3/3] virtio PCI device

2007-11-20 Thread Anthony Liguori
Avi Kivity wrote:
 Anthony Liguori wrote:
 This is a PCI device that implements a transport for virtio.  It 
 allows virtio
 devices to be used by QEMU based VMMs like KVM or Xen.

 +
 +/* the notify function used when creating a virt queue */
 +static void vp_notify(struct virtqueue *vq)
 +{
 +struct virtio_pci_device *vp_dev = to_vp_device(vq->vdev);
 +struct virtio_pci_vq_info *info = vq->priv;
 +
 +/* we write the queue's selector into the notification register to
 + * signal the other end */
 +iowrite16(info->queue_index, vp_dev->ioaddr + 
 VIRTIO_PCI_QUEUE_NOTIFY);
 +}
   

 This means we can't kick multiple queues with one exit.

There is no interface in virtio currently to batch multiple queue 
notifications so the only way one could do this AFAICT is to use a timer 
to delay the notifications.  Were you thinking of something else?

 I'd also like to see a hypercall-capable version of this (but that can 
 wait).

That can be a different device.

 +
 +/* A small wrapper to also acknowledge the interrupt when it's handled.
 + * I really need an EIO hook for the vring so I can ack the 
 interrupt once we
 + * know that we'll be handling the IRQ but before we invoke the 
 callback since
 + * the callback may notify the host which results in the host 
 attempting to
 + * raise an interrupt that we would then mask once we acknowledged the
 + * interrupt. */
 +static irqreturn_t vp_interrupt(int irq, void *opaque)
 +{
 +struct virtio_pci_device *vp_dev = opaque;
 +struct virtio_pci_vq_info *info;
 +irqreturn_t ret = IRQ_NONE;
 +u8 isr;
 +
 +/* reading the ISR has the effect of also clearing it so it's very
 + * important to save off the value. */
 +isr = ioread8(vp_dev->ioaddr + VIRTIO_PCI_ISR);
   

 Can this be implemented via shared memory? We're exiting now on every 
 interrupt.

I don't think so.  A vmexit is required to lower the IRQ line.  It may 
be possible to do something clever like set a shared memory value that's 
checked on every vmexit.  I think it's very unlikely that it's worth it 
though.
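If it ever did become worth it, the clever variant would look roughly 
like this (entirely hypothetical, nothing like it is in the patch):

#include <linux/interrupt.h>

/* host fills isr before injecting; guest acks in shared memory instead of
 * reading the ISR port, so the handler itself causes no exit.  The host
 * still has to notice guest_ack on its next natural vmexit before deciding
 * whether to keep the level-triggered line asserted. */
struct virtio_shared_irq {
        u8 isr;
        u8 guest_ack;
};

static irqreturn_t vp_interrupt_shared(int irq, void *opaque)
{
        struct virtio_shared_irq *s = opaque;

        if (!s->isr)
                return IRQ_NONE;
        s->guest_ack = 1;               /* exit-free acknowledge */
        /* ... then run the virtqueue callbacks exactly as vp_interrupt() does */
        return IRQ_HANDLED;
}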


 +return ret;
 +}
 +
 +/* the config->find_vq() implementation */
 +static struct virtqueue *vp_find_vq(struct virtio_device *vdev, 
 unsigned index,
 +bool (*callback)(struct virtqueue *vq))
 +{
 +struct virtio_pci_device *vp_dev = to_vp_device(vdev);
 +struct virtio_pci_vq_info *info;
 +struct virtqueue *vq;
 +int err;
 +u16 num;
 +
 +/* Select the queue we're interested in */
 +iowrite16(index, vp_dev->ioaddr + VIRTIO_PCI_QUEUE_SEL);
   

 I would really like to see this implemented as pci config space, with 
 no tricks like multiplexing several virtqueues on one register. 
 Something like the PCI BARs where you have all the register numbers 
 allocated statically to queues.

My first implementation did that.  I switched to using a selector 
because it reduces the amount of PCI config space used and does not 
limit the number of queues defined by the ABI as much.

 +
 +/* Check if queue is either not available or already active. */
 +num = ioread16(vp_dev->ioaddr + VIRTIO_PCI_QUEUE_NUM);
 +if (!num || ioread32(vp_dev->ioaddr + VIRTIO_PCI_QUEUE_PFN))
 +return ERR_PTR(-ENOENT);
 +
 +/* allocate and fill out our structure the represents an active
 + * queue */
 +info = kmalloc(sizeof(struct virtio_pci_vq_info), GFP_KERNEL);
 +if (!info)
 +return ERR_PTR(-ENOMEM);
 +
 +info->queue_index = index;
 +info->num = num;
 +
 +/* determine the memory needed for the queue and provide the memory
 + * location to the host */
 +info->n_pages = DIV_ROUND_UP(vring_size(num), PAGE_SIZE);
 +info->pages = alloc_pages(GFP_KERNEL | __GFP_ZERO,
 +  get_order(info->n_pages));
 +if (info->pages == NULL) {
 +err = -ENOMEM;
 +goto out_info;
 +}
 +
 +/* FIXME: is this sufficient for info->n_pages > 1? */
 +info->queue = kmap(info->pages);
 +if (info->queue == NULL) {
 +err = -ENOMEM;
 +goto out_alloc_pages;
 +}
 +
 +/* activate the queue */
 +iowrite32(page_to_pfn(info->pages),
 +  vp_dev->ioaddr + VIRTIO_PCI_QUEUE_PFN);
 +
 +/* create the vring */
 +vq = vring_new_virtqueue(info->num, vdev, info->queue,
 + vp_notify, callback);
 +if (!vq) {
 +err = -ENOMEM;
 +goto out_activate_queue;
 +}
 +
 +vq->priv = info;
 +info->vq = vq;
 +
 +spin_lock(&vp_dev->lock);
 +list_add(&info->node, &vp_dev->virtqueues);
 +spin_unlock(&vp_dev->lock);
 +
   

 Is this run only on init? If so the lock isn't needed.

Yes, it's also not strictly needed on cleanup I think.  I left it there 
though for clarity.  I can remove.

Regards,

Anthony Liguori



Re: [kvm-devel] [PATCH 3/3] virtio PCI device

2007-11-20 Thread Avi Kivity
Anthony Liguori wrote:
 Avi Kivity wrote:
   
 Anthony Liguori wrote:
 
 This is a PCI device that implements a transport for virtio.  It 
 allows virtio
 devices to be used by QEMU based VMMs like KVM or Xen.

 +
 +/* the notify function used when creating a virt queue */
 +static void vp_notify(struct virtqueue *vq)
 +{
 +struct virtio_pci_device *vp_dev = to_vp_device(vq->vdev);
 +struct virtio_pci_vq_info *info = vq->priv;
 +
 +/* we write the queue's selector into the notification register to
 + * signal the other end */
 +iowrite16(info->queue_index, vp_dev->ioaddr + 
 VIRTIO_PCI_QUEUE_NOTIFY);
 +}
   
   
 This means we can't kick multiple queues with one exit.
 

 There is no interface in virtio currently to batch multiple queue 
 notifications so the only way one could do this AFAICT is to use a timer 
 to delay the notifications.  Were you thinking of something else?

   

No.  We can change virtio though, so let's have a flexible ABI.

 I'd also like to see a hypercall-capable version of this (but that can 
 wait).
 

 That can be a different device.
   

That means the user has to select which device to expose.  With feature 
bits, the hypervisor advertises both pio and hypercalls, the guest picks 
whatever it wants.

   
 +
 +/* A small wrapper to also acknowledge the interrupt when it's handled.
 + * I really need an EIO hook for the vring so I can ack the 
 interrupt once we
 + * know that we'll be handling the IRQ but before we invoke the 
 callback since
 + * the callback may notify the host which results in the host 
 attempting to
 + * raise an interrupt that we would then mask once we acknowledged the
 + * interrupt. */
 +static irqreturn_t vp_interrupt(int irq, void *opaque)
 +{
 +struct virtio_pci_device *vp_dev = opaque;
 +struct virtio_pci_vq_info *info;
 +irqreturn_t ret = IRQ_NONE;
 +u8 isr;
 +
 +/* reading the ISR has the effect of also clearing it so it's very
 + * important to save off the value. */
 +isr = ioread8(vp_dev->ioaddr + VIRTIO_PCI_ISR);
   
   
 Can this be implemented via shared memory? We're exiting now on every 
 interrupt.
 

 I don't think so.  A vmexit is required to lower the IRQ line.  It may 
 be possible to do something clever like set a shared memory value that's 
 checked on every vmexit.  I think it's very unlikely that it's worth it 
 though.
   

Why so unlikely?  Not all workloads will have good batching.


   
 +return ret;
 +}
 +
 +/* the config->find_vq() implementation */
 +static struct virtqueue *vp_find_vq(struct virtio_device *vdev, 
 unsigned index,
 +bool (*callback)(struct virtqueue *vq))
 +{
 +struct virtio_pci_device *vp_dev = to_vp_device(vdev);
 +struct virtio_pci_vq_info *info;
 +struct virtqueue *vq;
 +int err;
 +u16 num;
 +
 +/* Select the queue we're interested in */
 +iowrite16(index, vp_dev->ioaddr + VIRTIO_PCI_QUEUE_SEL);
   
   
 I would really like to see this implemented as pci config space, with 
 no tricks like multiplexing several virtqueues on one register. 
 Something like the PCI BARs where you have all the register numbers 
 allocated statically to queues.
 

 My first implementation did that.  I switched to using a selector 
 because it reduces the amount of PCI config space used and does not 
 limit the number of queues defined by the ABI as much.
   

But... it's tricky, and it's nonstandard.  With pci config, you can do 
live migration by shipping the pci config space to the other side.  With 
the special iospace, you need to encode/decode it.

Not much of an argument, I know.


wrt. number of queues, 8 queues will consume 32 bytes of pci space if 
all you store is the ring pfn.


-- 
error compiling committee.c: too many arguments to function




Re: [kvm-devel] [PATCH 3/3] virtio PCI device

2007-11-20 Thread Anthony Liguori
Avi Kivity wrote:
 Anthony Liguori wrote:
 Avi Kivity wrote:
  
 Anthony Liguori wrote:

 This is a PCI device that implements a transport for virtio.  It 
 allows virtio
 devices to be used by QEMU based VMMs like KVM or Xen.

 +
 +/* the notify function used when creating a virt queue */
 +static void vp_notify(struct virtqueue *vq)
 +{
 +struct virtio_pci_device *vp_dev = to_vp_device(vq->vdev);
 +struct virtio_pci_vq_info *info = vq->priv;
 +
 +/* we write the queue's selector into the notification 
 register to
 + * signal the other end */
 +iowrite16(info->queue_index, vp_dev->ioaddr + 
 VIRTIO_PCI_QUEUE_NOTIFY);
 +}
 
 This means we can't kick multiple queues with one exit.
 

 There is no interface in virtio currently to batch multiple queue 
 notifications so the only way one could do this AFAICT is to use a 
 timer to delay the notifications.  Were you thinking of something else?

   

 No.  We can change virtio though, so let's have a flexible ABI.

Well please propose the virtio API first and then I'll adjust the PCI 
ABI.  I don't want to build things into the ABI that we never actually 
end up using in virtio :-)

 I'd also like to see a hypercall-capable version of this (but that 
 can wait).
 

 That can be a different device.
   

 That means the user has to select which device to expose.  With 
 feature bits, the hypervisor advertises both pio and hypercalls, the 
 guest picks whatever it wants.

I was thinking more along the lines that a hypercall-based device would 
certainly be implemented in-kernel whereas the current device is 
naturally implemented in userspace.  We can simply use a different 
device for in-kernel drivers than for userspace drivers.  There's no 
point at all in doing a hypercall based userspace device IMHO.

 I don't think so.  A vmexit is required to lower the IRQ line.  It 
 may be possible to do something clever like set a shared memory value 
 that's checked on every vmexit.  I think it's very unlikely that it's 
 worth it though.
   

 Why so unlikely?  Not all workloads will have good batching.

It's pretty invasive.  I think a more paravirt device that expected an 
edge triggered interrupt would be a better solution for those types of 
devices.
 
 +return ret;
 +}
 +
 +/* the config->find_vq() implementation */
 +static struct virtqueue *vp_find_vq(struct virtio_device *vdev, 
 unsigned index,
 +bool (*callback)(struct virtqueue *vq))
 +{
 +struct virtio_pci_device *vp_dev = to_vp_device(vdev);
 +struct virtio_pci_vq_info *info;
 +struct virtqueue *vq;
 +int err;
 +u16 num;
 +
 +/* Select the queue we're interested in */
 +iowrite16(index, vp_dev->ioaddr + VIRTIO_PCI_QUEUE_SEL);
 
 I would really like to see this implemented as pci config space, 
 with no tricks like multiplexing several virtqueues on one register. 
 Something like the PCI BARs where you have all the register numbers 
 allocated statically to queues.
 

 My first implementation did that.  I switched to using a selector 
 because it reduces the amount of PCI config space used and does not 
 limit the number of queues defined by the ABI as much.
   

 But... it's tricky, and it's nonstandard.  With pci config, you can do 
 live migration by shipping the pci config space to the other side.  
 With the special iospace, you need to encode/decode it.

None of the PCI devices currently work like that in QEMU.  It would be 
very hard to make a device that worked this way because the order 
in which values are written matters a whole lot.  For instance, if you 
wrote the status register before the queue information, the driver could 
get into a funky state.

We'll still need save/restore routines for virtio devices.  I don't 
really see this as a problem since we do this for every other device.

 Not much of an argument, I know.


 wrt. number of queues, 8 queues will consume 32 bytes of pci space if 
 all you store is the ring pfn.

You also at least need a num argument which takes you to 48 or 64 
depending on whether you care about strange formatting.  8 queues may 
not be enough either.  Eric and I have discussed whether the 9p virtio 
device should support multiple mounts per-virtio device and if so, 
whether each one should have its own queue.  Any device that supports 
this sort of multiplexing will very quickly start using a lot of queues.
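For the arithmetic: 8 queues * 4-byte pfn = 32 bytes; adding a 16-bit num 
per queue gives 8 * 6 = 48 bytes, or 64 if each entry is padded to 8 
bytes, i.e. something like (illustrative only):

struct virtio_pci_queue_regs {
        u32 pfn;        /* ring page frame number: 4 bytes */
        u16 num;        /* ring size: 2 bytes */
        u16 pad;        /* pad to 8 bytes per queue -> 64 bytes for 8 queues */
};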

I think most types of hardware have some notion of a selector or mode.  
Take a look at the LSI adapter or even VGA.

Regards,

Anthony Liguori




Re: [kvm-devel] [PATCH 3/3] virtio PCI device

2007-11-20 Thread Avi Kivity
Anthony Liguori wrote:
 Avi Kivity wrote:
   
 Anthony Liguori wrote:
 
 Avi Kivity wrote:
  
   
 Anthony Liguori wrote:

 
 This is a PCI device that implements a transport for virtio.  It 
 allows virtio
 devices to be used by QEMU based VMMs like KVM or Xen.

 +
 +/* the notify function used when creating a virt queue */
 +static void vp_notify(struct virtqueue *vq)
 +{
 +struct virtio_pci_device *vp_dev = to_vp_device(vq->vdev);
 +struct virtio_pci_vq_info *info = vq->priv;
 +
 +/* we write the queue's selector into the notification 
 register to
 + * signal the other end */
 +iowrite16(info->queue_index, vp_dev->ioaddr + 
 VIRTIO_PCI_QUEUE_NOTIFY);
 +}
 
   
 This means we can't kick multiple queues with one exit.
 
 
 There is no interface in virtio currently to batch multiple queue 
 notifications so the only way one could do this AFAICT is to use a 
 timer to delay the notifications.  Were you thinking of something else?

   
   
 No.  We can change virtio though, so let's have a flexible ABI.
 

 Well please propose the virtio API first and then I'll adjust the PCI 
 ABI.  I don't want to build things into the ABI that we never actually 
 end up using in virtio :-)

   

Move ->kick() to virtio_driver.

I believe Xen networking uses the same event channel for both rx and tx, 
so in effect they're using this model.  Long time since I looked though,

 I'd also like to see a hypercall-capable version of this (but that 
 can wait).
 
 
 That can be a different device.
   
   
 That means the user has to select which device to expose.  With 
 feature bits, the hypervisor advertises both pio and hypercalls, the 
 guest picks whatever it wants.
 

 I was thinking more along the lines that a hypercall-based device would 
 certainly be implemented in-kernel whereas the current device is 
 naturally implemented in userspace.  We can simply use a different 
 device for in-kernel drivers than for userspace drivers.  

Where the device is implemented is an implementation detail that should 
be hidden from the guest, isn't that one of the strengths of 
virtualization?  Two examples: a file-based block device implemented in 
qemu gives you fancy file formats with encryption and compression, while 
the same device implemented in the kernel gives you a low-overhead path 
directly to a zillion-disk SAN volume.  Or a user-level network device 
capable of running with the slirp stack and no permissions vs. the 
kernel device running copyless most of the time and using a dma engine 
for the rest but requiring you to be good friends with the admin.

The user should expect zero reconfigurations moving a VM from one model 
to the other.

 There's no 
 point at all in doing a hypercall based userspace device IMHO.
   

We abstract this away by having a channel signalled API (both at the 
kernel for kernel devices and as a kvm.h exit reason / libkvm callback).

Again, somewhat like Xen's event channels, though asymmetric.

 I don't think so.  A vmexit is required to lower the IRQ line.  It 
 may be possible to do something clever like set a shared memory value 
 that's checked on every vmexit.  I think it's very unlikely that it's 
 worth it though.
   
   
 Why so unlikely?  Not all workloads will have good batching.
 

 It's pretty invasive.  I think a more paravirt device that expected an 
 edge triggered interrupt would be a better solution for those types of 
 devices.
   

I was thinking it could be useful mostly in the context of a paravirt 
irqchip, where we can lower the cost of level-triggered interrupts.

 +
 +/* Select the queue we're interested in */
 +iowrite16(index, vp_dev->ioaddr + VIRTIO_PCI_QUEUE_SEL);
 
   
 I would really like to see this implemented as pci config space, 
 with no tricks like multiplexing several virtqueues on one register. 
 Something like the PCI BARs where you have all the register numbers 
 allocated statically to queues.
 
 
 My first implementation did that.  I switched to using a selector 
 because it reduces the amount of PCI config space used and does not 
 limit the number of queues defined by the ABI as much.
   
   
 But... it's tricky, and it's nonstandard.  With pci config, you can do 
 live migration by shipping the pci config space to the other side.  
 With the special iospace, you need to encode/decode it.
 

 None of the PCI devices currently work like that in QEMU.  It would be 
 very hard to make a device that worked this way because since the order 
 in which values are written matter a whole lot.  For instance, if you 
 wrote the status register before the queue information, the driver could 
 get into a funky state.
   

I assume you're talking about restore?  Isn't that atomic?

 We'll still need save/restore routines for virtio devices.  I don't 
 really see this as a problem since we do this for every other device.

   

Re: [kvm-devel] [PATCH 3/3] virtio PCI device

2007-11-09 Thread Arnd Bergmann
On Thursday 08 November 2007, Anthony Liguori wrote:
 
 They already show up underneath of the PCI bus.  The issue is that there 
 are two separate 'struct device's for each virtio device.  There's the 
 PCI device (that's part of the pci_dev structure) and then there's the 
 virtio_device one.  I thought that setting the dev.parent of the 
 virtio_device struct device would result in having two separate entries 
 under the PCI bus directory which would be pretty confusing 

But that's what a device tree means. Think about a USB disk drive: The drive
shows up as a child of the USB controller, which in turn is a child of
the PCI bridge. Note that I did not suggest having the virtio parent set to
the parent of the PCI device, but to the PCI device itself.

I find it more confusing to have a device just hanging off the root when
it is actually handled by the PCI subsystem.

Arnd 



Re: [kvm-devel] [PATCH 3/3] virtio PCI device

2007-11-08 Thread Avi Kivity
Anthony Liguori wrote:
 Avi Kivity wrote:
 Anthony Liguori wrote:
  
 This is a PCI device that implements a transport for virtio.  It 
 allows virtio
 devices to be used by QEMU based VMMs like KVM or Xen.

   

 Didn't see support for dma.

 Not sure what you're expecting there.  Using dma_ops in virtio_ring?


If a pci device is capable of dma (or issuing interrupts), it will be 
useless with pv pci.


  I think that with Amit's pvdma patches you
 can support dma-capable devices as well without too much fuss.
   

 What is the use case you're thinking of?  A semi-paravirt driver that 
 does dma directly to a device?

No, an unmodified driver that, by using clever tricks with dma_ops, can 
do dma directly to guest memory.  See Amit's patches.

In fact, why do a virtio transport at all?  It can be done either with 
trap'n'emulate, or by directly mapping the device mmio space into the guest.


(what use case are you considering? devices without interrupts and dma? 
pci door stoppers?)

-- 
error compiling committee.c: too many arguments to function




Re: [kvm-devel] [PATCH 3/3] virtio PCI device

2007-11-08 Thread Anthony Liguori
Avi Kivity wrote:
 Anthony Liguori wrote:
   
 This is a PCI device that implements a transport for virtio.  It allows 
 virtio
 devices to be used by QEMU based VMMs like KVM or Xen.

   
 

 Didn't see support for dma.

Not sure what you're expecting there.  Using dma_ops in virtio_ring?

  I think that with Amit's pvdma patches you
 can support dma-capable devices as well without too much fuss.
   

What is the use case you're thinking of?  A semi-paravirt driver that 
does dma directly to a device?

Regards,

Anthony Liguori





Re: [kvm-devel] [PATCH 3/3] virtio PCI device

2007-11-08 Thread Arnd Bergmann
On Thursday 08 November 2007, Anthony Liguori wrote:
 +/* A PCI device has its own struct device and so does a virtio device so
 + * we create a place for the virtio devices to show up in sysfs.  I think it
 + * would make more sense for virtio to not insist on having its own device. 
 */
 +static struct device virtio_pci_root = {
 +   .parent = NULL,
 +   .bus_id = virtio-pci,
 +};
 +
 +/* Unique numbering for devices under the kvm root */
 +static unsigned int dev_index;
 +

...

 +/* the PCI probing function */
 +static int __devinit virtio_pci_probe(struct pci_dev *pci_dev,
 +         const struct pci_device_id *id)
 +{
 +   struct virtio_pci_device *vp_dev;
 +   int err;
 +
 +   /* allocate our structure and fill it out */
 +   vp_dev = kzalloc(sizeof(struct virtio_pci_device), GFP_KERNEL);
 +   if (vp_dev == NULL)
 +   return -ENOMEM;
 +
 +   vp_dev->pci_dev = pci_dev;
 +   vp_dev->vdev.dev.parent = &virtio_pci_root;

If you use 

vp_dev->vdev.dev.parent = &pci_dev->dev;

Then there is no need for the special kvm root device, and the actual
virtio device shows up in a more logical place, under where it is
really (virtually) attached.

Arnd 



Re: [kvm-devel] [PATCH 3/3] virtio PCI device

2007-11-08 Thread Avi Kivity
Anthony Liguori wrote:
 Avi Kivity wrote:
 If a pci device is capable of dma (or issuing interrupts), it will be 
 useless with pv pci.

 Hrm, I think we may be talking about different things.  Are you 
 thinking that the driver I posted allows you to do PCI pass-through 
 over virtio?  That's not what it is.

 The driver I posted is a virtio implementation that uses a PCI 
 device.  This lets you use virtio-blk and virtio-net under KVM.  The 
 alternative to this virtio PCI device would be a virtio transport 
 built with hypercalls like lguest has.  I choose a PCI device because 
 it ensured that each virtio device showed up like a normal PCI device.

 Am I misunderstanding what you're asking about?


No, I completely misunderstood the patch.  Should review complete 
patches rather than random hunks.

Sorry for the noise.

-- 
error compiling committee.c: too many arguments to function




Re: [kvm-devel] [PATCH 3/3] virtio PCI device

2007-11-08 Thread Anthony Liguori
Avi Kivity wrote:
 If a pci device is capable of dma (or issuing interrupts), it will be 
 useless with pv pci.

Hrm, I think we may be talking about different things.  Are you thinking 
that the driver I posted allows you to do PCI pass-through over virtio?  
That's not what it is.

The driver I posted is a virtio implementation that uses a PCI device.  
This lets you use virtio-blk and virtio-net under KVM.  The alternative 
to this virtio PCI device would be a virtio transport built with 
hypercalls like lguest has.  I choose a PCI device because it ensured 
that each virtio device showed up like a normal PCI device.

Am I misunderstanding what you're asking about?

Regards,

Anthony Liguori


  I think that with Amit's pvdma patches you
 can support dma-capable devices as well without too much fuss.
   

 What is the use case you're thinking of?  A semi-paravirt driver that 
 does dma directly to a device?

 No, an unmodified driver that, by using clever tricks with dma_ops, 
 can do dma directly to guest memory.  See Amit's patches.

 In fact, why do a virtio transport at all?  It can be done either with 
 trap'n'emulate, or by directly mapping the device mmio space into the 
 guest.


 (what use case are you considering? devices without interrupts and 
 dma? pci door stoppers?)





Re: [kvm-devel] [PATCH 3/3] virtio PCI device

2007-11-08 Thread Arnd Bergmann
On Thursday 08 November 2007, Anthony Liguori wrote:
 +/* A PCI device has its own struct device and so does a virtio device so
 + * we create a place for the virtio devices to show up in sysfs.  I think it
 + * would make more sense for virtio to not insist on having its own device. 
 */
 +static struct device virtio_pci_root = {
 +   .parent = NULL,
 +   .bus_id = virtio-pci,
 +};
 +
 +/* Unique numbering for devices under the kvm root */
 +static unsigned int dev_index;
 +

...

 +/* the PCI probing function */
 +static int __devinit virtio_pci_probe(struct pci_dev *pci_dev,
 +         const struct pci_device_id *id)
 +{
 +   struct virtio_pci_device *vp_dev;
 +   int err;
 +
 +   /* allocate our structure and fill it out */
 +   vp_dev = kzalloc(sizeof(struct virtio_pci_device), GFP_KERNEL);
 +   if (vp_dev == NULL)
 +           return -ENOMEM;
 +
 +   vp_dev->pci_dev = pci_dev;
 +   vp_dev->vdev.dev.parent = &virtio_pci_root;

If you use 

vp_dev->vdev.dev.parent = &pci_dev->dev;

Then there is no need for the special kvm root device, and the actual
virtio device shows up in a more logical place, under where it is
really (virtually) attached.
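
For illustration, the relevant lines of the probe function would then read 
(a sketch based on the hunk above, not a tested patch):

	vp_dev->pci_dev = pci_dev;
	/* parent the virtio device to the PCI device itself; the special
	 * virtio_pci_root device, and whatever registers it at module init
	 * time, can then be dropped entirely */
	vp_dev->vdev.dev.parent = &pci_dev->dev;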

Arnd 



Re: [kvm-devel] [PATCH 3/3] virtio PCI device

2007-11-08 Thread Dor Laor
Anthony Liguori wrote:
 Avi Kivity wrote:
   
 Anthony Liguori wrote:
   
 
 This is a PCI device that implements a transport for virtio.  It allows 
 virtio
 devices to be used by QEMU based VMMs like KVM or Xen.

   
 
   
 Didn't see support for dma.
 

 Not sure what you're expecting there.  Using dma_ops in virtio_ring?

   
  I think that with Amit's pvdma patches you
 can support dma-capable devices as well without too much fuss.
   
 

 What is the use case you're thinking of?  A semi-paravirt driver that 
 does dma directly to a device?

 Regards,

 Anthony Liguori

   
You would also lose performance, since pv-dma will trigger an exit for 
each virtio I/O, while virtio kicks the hypervisor only after several 
I/Os have been queued.
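
(For context, the kick on the PCI transport boils down to a single port write 
naming the queue, however many buffers were queued beforehand; roughly the 
sketch below, which assumes the VIRTIO_PCI_QUEUE_NOTIFY register from this 
patch's virtio_pci.h and a made-up function name.)

/* sketch, not from the patch: one exit per kick, covering every
 * descriptor added to the ring since the previous notify */
static void vp_notify_sketch(struct virtio_pci_device *vp_dev,
			     struct virtio_pci_vq_info *info)
{
	iowrite16(info->queue_index,
		  vp_dev->ioaddr + VIRTIO_PCI_QUEUE_NOTIFY);
}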





Re: [kvm-devel] [PATCH 3/3] virtio PCI device

2007-11-08 Thread Dor Laor
Anthony Liguori wrote:
 This is a PCI device that implements a transport for virtio.  It allows virtio
 devices to be used by QEMU based VMMs like KVM or Xen.

 
   
While it's a little premature, we can start thinking of irq path 
improvements.
The current patch acks a private ISR, and afterwards the APIC EOI will also 
be hit since it's a level-triggered irq. This means 2 vmexits per irq.
We can start with regular PCI irqs and move to MSI afterwards.
Some other ugly hack options [we're better off using MSI]:
- Read the EOI directly from the APIC and save the first private ISR ack
- Convert the specific irq line to edge-triggered and don't share it
What do you guys think?
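
Just to sketch the MSI direction (illustrative only; the handler name is 
made up): with an exclusive MSI vector the driver could skip the private 
ISR read altogether, e.g. in the probe path

	/* fall back to the shared INTx handler if MSI isn't available */
	if (pci_enable_msi(pci_dev) == 0)
		err = request_irq(pci_dev->irq, vp_interrupt_msi, 0,
				  "virtio-pci-msi", vp_dev);
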
 +/* A small wrapper to also acknowledge the interrupt when it's handled.
 + * I really need an EIO hook for the vring so I can ack the interrupt once we
 + * know that we'll be handling the IRQ but before we invoke the callback since
 + * the callback may notify the host which results in the host attempting to
 + * raise an interrupt that we would then mask once we acknowledged the
 + * interrupt. */
 +static irqreturn_t vp_interrupt(int irq, void *opaque)
 +{
 +	struct virtio_pci_device *vp_dev = opaque;
 +	struct virtio_pci_vq_info *info;
 +	irqreturn_t ret = IRQ_NONE;
 +	u8 isr;
 +
 +	/* reading the ISR has the effect of also clearing it so it's very
 +	 * important to save off the value. */
 +	isr = ioread8(vp_dev->ioaddr + VIRTIO_PCI_ISR);
 +
 +	/* It's definitely not us if the ISR was not high */
 +	if (!isr)
 +		return IRQ_NONE;
 +
 +	spin_lock(&vp_dev->lock);
 +	list_for_each_entry(info, &vp_dev->virtqueues, node) {
 +		if (vring_interrupt(irq, info->vq) == IRQ_HANDLED)
 +			ret = IRQ_HANDLED;
 +	}
 +	spin_unlock(&vp_dev->lock);
 +
 +	return ret;
 +}
   




Re: [kvm-devel] [PATCH 3/3] virtio PCI device

2007-11-08 Thread Anthony Liguori
Dor Laor wrote:
 Anthony Liguori wrote:
 This is a PCI device that implements a transport for virtio.  It 
 allows virtio
 devices to be used by QEMU based VMMs like KVM or Xen.

 
   
 While it's a little premature, we can start thinking of irq path 
 improvements.
 The current patch acks a private ISR, and afterwards the APIC EOI will also 
 be hit since it's a level-triggered irq. This means 2 vmexits per irq.
 We can start with regular PCI irqs and move to MSI afterwards.
 Some other ugly hack options [we're better off using MSI]:
- Read the EOI directly from the APIC and save the first private ISR ack

I must admit that I don't know a whole lot about interrupt delivery.  
If we can avoid the private ISR ack then that would certainly be a good 
thing to do!  I think that would involve adding another bit to the 
virtqueues to indicate whether or not there is work to be handled.  It's 
really just moving the ISR into shared memory so that there's no penalty 
for accessing it.
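
Something along these lines in vp_interrupt(), purely illustrative 
(info->pending is the hypothetical bit I mean; it doesn't exist today):

	/* sketch: test a flag the host sets in shared memory instead of
	 * reading VIRTIO_PCI_ISR, which costs an extra exit */
	list_for_each_entry(info, &vp_dev->virtqueues, node) {
		if (info->pending &&
		    vring_interrupt(irq, info->vq) == IRQ_HANDLED)
			ret = IRQ_HANDLED;
	}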

Regards,

Anthony Liguori

- Convert the specific irq line to edge-triggered and don't share it
 What do you guys think?
 +/* A small wrapper to also acknowledge the interrupt when it's handled.
 + * I really need an EIO hook for the vring so I can ack the interrupt once we
 + * know that we'll be handling the IRQ but before we invoke the callback since
 + * the callback may notify the host which results in the host attempting to
 + * raise an interrupt that we would then mask once we acknowledged the
 + * interrupt. */
 +static irqreturn_t vp_interrupt(int irq, void *opaque)
 +{
 +	struct virtio_pci_device *vp_dev = opaque;
 +	struct virtio_pci_vq_info *info;
 +	irqreturn_t ret = IRQ_NONE;
 +	u8 isr;
 +
 +	/* reading the ISR has the effect of also clearing it so it's very
 +	 * important to save off the value. */
 +	isr = ioread8(vp_dev->ioaddr + VIRTIO_PCI_ISR);
 +
 +	/* It's definitely not us if the ISR was not high */
 +	if (!isr)
 +		return IRQ_NONE;
 +
 +	spin_lock(&vp_dev->lock);
 +	list_for_each_entry(info, &vp_dev->virtqueues, node) {
 +		if (vring_interrupt(irq, info->vq) == IRQ_HANDLED)
 +			ret = IRQ_HANDLED;
 +	}
 +	spin_unlock(&vp_dev->lock);
 +
 +	return ret;
 +}
   





Re: [kvm-devel] [PATCH 3/3] virtio PCI device

2007-11-07 Thread Avi Kivity
Anthony Liguori wrote:
 This is a PCI device that implements a transport for virtio.  It allows virtio
 devices to be used by QEMU based VMMs like KVM or Xen.

   

Didn't see support for dma. I think that with Amit's pvdma patches you
can support dma-capable devices as well without too much fuss.

-- 
Do not meddle in the internals of kernels, for they are subtle and quick to 
panic.




[kvm-devel] [PATCH 3/3] virtio PCI device

2007-11-07 Thread Anthony Liguori
This is a PCI device that implements a transport for virtio.  It allows virtio
devices to be used by QEMU based VMMs like KVM or Xen.

Signed-off-by: Anthony Liguori [EMAIL PROTECTED]

diff --git a/drivers/virtio/Kconfig b/drivers/virtio/Kconfig
index 9e33fc4..c81e0f3 100644
--- a/drivers/virtio/Kconfig
+++ b/drivers/virtio/Kconfig
@@ -6,3 +6,20 @@ config VIRTIO
 config VIRTIO_RING
bool
depends on VIRTIO
+
+config VIRTIO_PCI
+   tristate "PCI driver for virtio devices (EXPERIMENTAL)"
+   depends on PCI && EXPERIMENTAL
+   select VIRTIO
+   select VIRTIO_RING
+   ---help---
+ This driver provides support for virtio based paravirtual device
+ drivers over PCI.  This requires that your VMM has appropriate PCI
+ virtio backends.  Most QEMU based VMMs should support these devices
+ (like KVM or Xen).
+
+ Currently, the ABI is not considered stable so there is no guarantee
+ that this version of the driver will work with your VMM.
+
+ If unsure, say M.
+  
diff --git a/drivers/virtio/Makefile b/drivers/virtio/Makefile
index f70e409..cc84999 100644
--- a/drivers/virtio/Makefile
+++ b/drivers/virtio/Makefile
@@ -1,2 +1,3 @@
 obj-$(CONFIG_VIRTIO) += virtio.o
 obj-$(CONFIG_VIRTIO_RING) += virtio_ring.o
+obj-$(CONFIG_VIRTIO_PCI) += virtio_pci.o
diff --git a/drivers/virtio/virtio_pci.c b/drivers/virtio/virtio_pci.c
new file mode 100644
index 000..85ae096
--- /dev/null
+++ b/drivers/virtio/virtio_pci.c
@@ -0,0 +1,469 @@
+#include <linux/module.h>
+#include <linux/list.h>
+#include <linux/pci.h>
+#include <linux/interrupt.h>
+#include <linux/virtio.h>
+#include <linux/virtio_config.h>
+#include <linux/virtio_ring.h>
+#include <linux/virtio_pci.h>
+#include <linux/highmem.h>
+#include <linux/spinlock.h>
+
+MODULE_AUTHOR("Anthony Liguori <[EMAIL PROTECTED]>");
+MODULE_DESCRIPTION("virtio-pci");
+MODULE_LICENSE("GPL");
+MODULE_VERSION("1");
+
+/* Our device structure */
+struct virtio_pci_device
+{
+   /* the virtio device */
+   struct virtio_device vdev;
+   /* the PCI device */
+   struct pci_dev *pci_dev;
+   /* the IO mapping for the PCI config space */
+   void *ioaddr;
+
+   spinlock_t lock;
+   struct list_head virtqueues;
+};
+
+struct virtio_pci_vq_info
+{
+   /* the number of entries in the queue */
+   int num;
+   /* the number of pages the device needs for the ring queue */
+   int n_pages;
+   /* the index of the queue */
+   int queue_index;
+   /* the struct page of the ring queue */
+   struct page *pages;
+   /* the virtual address of the ring queue */
+   void *queue;
+   /* a pointer to the virtqueue */
+   struct virtqueue *vq;
+   /* the node pointer */
+   struct list_head node;
+};
+
+/* We have to enumerate here all virtio PCI devices. */
+static struct pci_device_id virtio_pci_id_table[] = {
+   { 0x5002, 0x2258, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0 }, /* Dummy entry */
+   { 0 },
+};
+
+MODULE_DEVICE_TABLE(pci, virtio_pci_id_table);
+
+/* A PCI device has its own struct device and so does a virtio device so
+ * we create a place for the virtio devices to show up in sysfs.  I think it
+ * would make more sense for virtio to not insist on having its own device. */
+static struct device virtio_pci_root = {
+   .parent = NULL,
+   .bus_id = "virtio-pci",
+};
+
+/* Unique numbering for devices under the kvm root */
+static unsigned int dev_index;
+
+/* Convert a generic virtio device to our structure */
+static struct virtio_pci_device *to_vp_device(struct virtio_device *vdev)
+{
+   return container_of(vdev, struct virtio_pci_device, vdev);
+}
+
+/* virtio config->feature() implementation */
+static bool vp_feature(struct virtio_device *vdev, unsigned bit)
+{
+   struct virtio_pci_device *vp_dev = to_vp_device(vdev);
+   u32 mask;
+
+   /* Since this function is supposed to have the side effect of
+* enabling a queried feature, we simulate that by doing a read
+* from the host feature bitmask and then writing to the guest
+* feature bitmask */
+   mask = ioread32(vp_dev->ioaddr + VIRTIO_PCI_HOST_FEATURES);
+   if (mask & (1 << bit)) {
+           mask |= (1 << bit);
+           iowrite32(mask, vp_dev->ioaddr + VIRTIO_PCI_GUEST_FEATURES);
+   }
+
+   return !!(mask & (1 << bit));
+}
+
+/* virtio config->get() implementation */
+static void vp_get(struct virtio_device *vdev, unsigned offset,
+  void *buf, unsigned len)
+{
+   struct virtio_pci_device *vp_dev = to_vp_device(vdev);
+   void *ioaddr = vp_dev->ioaddr + VIRTIO_PCI_CONFIG + offset;
+
+   /* We translate appropriately sized get requests into more natural
+* IO operations.  These functions also take care of endianness
+* conversion. */
+   switch (len) {
+   case 1: {
+   u8 val;
+   val = ioread8(ioaddr);
+   memcpy(buf,