Do not go beyond ARRAY_SIZE of info->shadow
Signed-off-by: Roel Kluin
---
diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index a6cbf7b..d395986 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -122,7 +122,7 @@ static DEFINE_SPINLOCK(blkif_io_lock);
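The hunk body is cut off in the archive. As a sketch of the off-by-one pattern such a bounds fix addresses (illustrative names, a userspace assert standing in for the kernel's BUG_ON), not the actual patch body:

#include <assert.h>
#include <stddef.h>

#define SHADOW_SIZE 32   /* stand-in for ARRAY_SIZE(info->shadow) */

static unsigned long shadow[SHADOW_SIZE];

static unsigned long get_entry(size_t free)
{
        /* Guarding with (free > SHADOW_SIZE) still admits
         * free == SHADOW_SIZE, which indexes one element past the end;
         * valid indices are 0 .. SHADOW_SIZE - 1, so the rejection must
         * be >= (equivalently, assert <). */
        assert(free < SHADOW_SIZE);
        return shadow[free];
}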
Roel Kluin wrote:
> Do not go beyond ARRAY_SIZE of info->shadow
>
> Signed-off-by: Roel Kluin
>
Acked-by: Jeremy Fitzhardinge
Jens, can you put this into a next-merge-window branch?
Thanks,
J
> ---
> diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
> index a6cbf7b..d395986 100644
On Thu, May 21, 2009 at 07:45:20PM +0300, Michael S. Tsirkin wrote:
> On Thu, May 21, 2009 at 02:31:26PM +0100, Paul Brook wrote:
> > On Thursday 21 May 2009, Paul Brook wrote:
> > > > > MSI provides multiple edge triggered interrupts, whereas traditional
> > > > > mode provides a single level triggered interrupt. My guess is most
> > > > > devices will want to treat these differently anyway.
On Thu, May 21, 2009 at 02:31:26PM +0100, Paul Brook wrote:
> On Thursday 21 May 2009, Paul Brook wrote:
> > > > MSI provides multiple edge triggered interrupts, whereas traditional
> > > > mode provides a single level triggered interrupt. My guess is most
> > > > devices will want to treat these differently anyway.
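In qemu_irq terms the distinction reads roughly as below, assuming qemu's stock qemu_set_irq()/qemu_irq_pulse() helpers (the dev pointer is illustrative):

/* Level-triggered (traditional INTx): the device asserts the line and
 * must later deassert it itself. */
qemu_set_irq(dev->irq, 1);    /* raise and hold */
qemu_set_irq(dev->irq, 0);    /* explicit deassert when serviced */

/* Edge/message style (MSI): each event is a self-contained pulse. */
qemu_irq_pulse(dev->irq);     /* raise then immediately lower */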
On Thu, May 21, 2009 at 03:50:18PM +0100, Paul Brook wrote:
> > >>> kvm has no business messing with the PCI device code.
> > >>
> > >> kvm has a fast path for irq injection. If qemu wants to support it we
> > >> need some abstraction here.
> > >
> > > Fast path from where to where? Having the PCI layer bypass/re-implement
> > > the APIC and inject the interrupt directly
Paul Brook wrote:
>> The fast path is an eventfd so that we don't have to teach all the
>> clients about the details of MSI. Userspace programs the MSI details
>> into kvm and hands the client an eventfd. All the client has to do is
>> bang on the eventfd for the interrupt to be queued. The even
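The client side of that fast path is tiny; a minimal userspace sketch, assuming the eventfd has already been bound to an MSI vector inside kvm (the binding itself is elided):

#include <stdint.h>
#include <unistd.h>
#include <sys/eventfd.h>

int main(void)
{
        /* In the scheme above, qemu programs the MSI details into kvm
         * and hands the client an fd that is already wired up. */
        int irqfd = eventfd(0, 0);
        uint64_t one = 1;

        /* "Bang on the eventfd": one 8-byte write queues one interrupt. */
        (void)write(irqfd, &one, sizeof(one));
        return 0;
}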
On Thursday 21 May 2009, Avi Kivity wrote:
> Paul Brook wrote:
> >> kvm implements the APIC in the host kernel (qemu upstream doesn't
> >> support this yet). The fast path is wired to the in-kernel APIC, not
> >> the cpu core directly.
> >>
> >> The idea is to wire it to UIO for device assignment, to a virtio-device
> >> implemented in the kernel, and t
Paul Brook wrote:
>> kvm implements the APIC in the host kernel (qemu upstream doesn't
>> support this yet). The fast path is wired to the in-kernel APIC, not
>> the cpu core directly.
>>
>> The idea is to wire it to UIO for device assignment, to a virtio-device
>> implemented in the kernel, and t
On Thu, May 21, 2009 at 02:23:20PM +0100, Paul Brook wrote:
> > > MSI provides multiple edge triggered interrupts, whereas traditional mode
> > > provides a single level triggered interrupt. My guess is most devices
> > > will want to treat these differently anyway.
> >
> > So, is qemu_send_msi better than qemu_set_irq.
> >>> kvm has no business messing with the PCI device code.
> >>
> >> kvm has a fast path for irq injection. If qemu wants to support it we
> >> need some abstraction here.
> >
> > Fast path from where to where? Having the PCI layer bypass/re-implement
> > the APIC and inject the interrupt directly
Paul Brook wrote:
> On Thursday 21 May 2009, Avi Kivity wrote:
>
>> Paul Brook wrote:
>>
>>>>> which is a trivial wrapper around stl_phys.
>>>>>
>>>> OK, but I'm adding another level of indirection in the middle,
>>>> to allow us to tie in a kvm backend.
>>> kvm has no business messing with the PCI device code.
On Thursday 21 May 2009, Avi Kivity wrote:
> Paul Brook wrote:
> >>> which is a trivial wrapper around stl_phys.
> >>
> >> OK, but I'm adding another level of indirection in the middle,
> >> to allow us to tie in a kvm backend.
> >
> > kvm has no business messing with the PCI device code.
>
> kvm has a fast path for irq injection. If qemu wants to support it we
> need some abstraction here.
> > which is a trivial wrapper around stl_phys.
>
> OK, but I'm adding another level of indirection in the middle,
> to allow us to tie in a kvm backend.
kvm has no business messing with the PCI device code.
Paul
On Thursday 21 May 2009, Paul Brook wrote:
> > > MSI provides multiple edge triggered interrupts, whereas traditional
> > > mode provides a single level triggered interrupt. My guess is most
> > > devices will want to treat these differently anyway.
> >
> > So, is qemu_send_msi better than qemu_set_irq.
> > MSI provides multiple edge triggered interrupts, whereas traditional mode
> > provides a single level triggered interrupt. My guess is most devices
> > will want to treat these differently anyway.
>
> So, is qemu_send_msi better than qemu_set_irq.
Neither. pci_send_msi, which is a trivial wrapper around stl_phys.
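That wrapper can be this small because an MSI is just a 32-bit write of the vector's data payload to the vector's address; a sketch (the MSIVector struct is illustrative, stl_phys is qemu's 32-bit physical-memory store):

typedef struct MSIVector {
    uint64_t addr;    /* message address, programmed by the guest */
    uint32_t data;    /* message data, programmed by the guest */
} MSIVector;

static void pci_send_msi(const MSIVector *v)
{
    stl_phys(v->addr, v->data);    /* the interrupt is literally a memory write */
}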
> > A tight coupling between PCI devices and the APIC is just going to cause
> > us problems later on. I'm going to come back to the fact that these are
> > memory writes so once we get IOMMU support they will presumably be
> > subject to remapping by that, just like any other memory access.
>
> I
On Thursday 21 May 2009, Avi Kivity wrote:
> Paul Brook wrote:
> >>>> In any case we need some internal API for this, and qemu_irq looks
> >>>> like a good choice.
> >>>
> >>> What do you expect to be using this API?
> >>
> >> virtio, emulated devices capable of supporting MSI (e1000?), device
> >>
> >> The PCI bus doesn't need any special support (I think) but something on
> >> the other end needs to interpret those writes.
> >
> > Sure. But there's definitely nothing PCI specific about it. I assumed
> > this would all be contained within the APIC.
>
> MSIs are defined by PCI and their configuration
> The PCI bus doesn't need any special support (I think) but something on
> the other end needs to interpret those writes.
Sure. But there's definitely nothing PCI specific about it. I assumed this
would all be contained within the APIC.
In any case we need some internal API for this, and qemu_irq looks like a good choice.
On Wednesday 20 May 2009, Michael S. Tsirkin wrote:
> define api for allocating/setting up msi-x irqs, and for updating them
> with msi-x vector information, supply implementation in ioapic. Please
> comment on this API: I intend to port my msi-x patch to work on top of
> it.
I thought the point of
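A hedged guess at the surface such an API could have; every name below is illustrative, not the API actually posted:

/* Illustrative shape only. */
typedef struct MSIXVectorState MSIXVectorState;

MSIXVectorState *msix_vector_alloc(void);                /* allocate/set up an msi-x irq */
void msix_vector_update(MSIXVectorState *v,
                        uint64_t addr, uint32_t data);   /* guest reprogrammed the vector */
void msix_vector_notify(MSIXVectorState *v);             /* inject it (ioapic-side implementation) */
void msix_vector_free(MSIXVectorState *v);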
On Thu, May 21, 2009 at 02:53:14PM +0100, Paul Brook wrote:
> > > which is a trivial wrapper around stl_phys.
> >
> > OK, but I'm adding another level of indirection in the middle,
> > to allow us to tie in a kvm backend.
>
> kvm has no business messing with the PCI device code.
Yes it has :)
kv
Paul Brook wrote:
>>> which is a trivial wrapper around stl_phys.
>>>
>> OK, but I'm adding another level of indirection in the middle,
>> to allow us to tie in a kvm backend.
>>
>
> kvm has no business messing with the PCI device code.
>
kvm has a fast path for irq injection. If qemu wants to support it we
need some abstraction here.
On Thu, May 21, 2009 at 02:09:32PM +0100, Paul Brook wrote:
> > > A tight coupling between PCI devices and the APIC is just going to cause
> > > us problems later on. I'm going to come back to the fact that these are
> > > memory writes so once we get IOMMU support they will presumably be
> > > su
On Thu, May 21, 2009 at 03:38:56PM +0300, Avi Kivity wrote:
> Paul Brook wrote:
>>> Instead of writing directly, let's abstract it behind a qemu_set_irq().
>>> This is easier for device authors. The default implementation of the
>>> irq callback could write to apic memory, while for kvm we can directly
>>> trigger the interrupt via the kvm APIs.
On Thu, May 21, 2009 at 01:29:37PM +0100, Paul Brook wrote:
> On Thursday 21 May 2009, Avi Kivity wrote:
> > Paul Brook wrote:
> > >>>> In any case we need some internal API for this, and qemu_irq looks
> > >>>> like a good choice.
> > >>>
> > >>> What do you expect to be using this API?
> > >>
> >
Paul Brook wrote:
>> Instead of writing directly, let's abstract it behind a qemu_set_irq().
>> This is easier for device authors. The default implementation of the
>> irq callback could write to apic memory, while for kvm we can directly
>> trigger the interrupt via the kvm APIs.
>>
>
> I'm
On Thu, May 21, 2009 at 03:08:18PM +0300, Avi Kivity wrote:
> Paul Brook wrote:
> >>>> In any case we need some internal API for this, and qemu_irq looks like
> >>>> a good choice.
> >>>
> >>> What do you expect to be using this API?
> >> virtio, emulated devices capable of support
Paul Brook wrote:
>>>> In any case we need some internal API for this, and qemu_irq looks like
>>>> a good choice.
>>> What do you expect to be using this API?
>>>
>> virtio, emulated devices capable of supporting MSI (e1000?), device
>> assignment (not yet in qemu.git).
>>
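Wired through qemu_irq, an MSI-capable device model never sees the message details; a sketch assuming qemu's qemu_allocate_irqs() and reusing the illustrative MSIVector from earlier:

static void msi_irq_handler(void *opaque, int n, int level)
{
    /* For a message-style interrupt only the rising edge matters. */
    if (level) {
        MSIVector *v = &((MSIVector *)opaque)[n];
        stl_phys(v->addr, v->data);
    }
}

static qemu_irq *msi_wire_vectors(MSIVector *vectors, int nvectors)
{
    /* One qemu_irq per vector; the device just calls
     * qemu_set_irq()/qemu_irq_pulse() on these. */
    return qemu_allocate_irqs(msi_irq_handler, vectors, nvectors);
}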
Paul Brook wrote:
>> The PCI bus doesn't need any special support (I think) but something on
>> the other end needs to interpret those writes.
>>
>
> Sure. But there's definitely nothing PCI specific about it. I assumed this
> would all be contained within the APIC.
>
MSIs are defined by PCI and their configuration
On Thu, May 21, 2009 at 11:34:11AM +0100, Paul Brook wrote:
> > The PCI bus doesn't need any special support (I think) but something on
> > the other end needs to interpret those writes.
>
> Sure. But there's definitely nothing PCI specific about it. I assumed this
> would all be contained within the APIC.
Paul Brook wrote:
> On Wednesday 20 May 2009, Michael S. Tsirkin wrote:
>
>> define api for allocating/setting up msi-x irqs, and for updating them
>> with msi-x vector information, supply implementation in ioapic. Please
>> comment on this API: I intend to port my msi-x patch to work on top of
>> it.
On Wed, May 20, 2009 at 11:44:57PM +0300, Blue Swirl wrote:
> On 5/20/09, Michael S. Tsirkin wrote:
> > On Wed, May 20, 2009 at 11:26:42PM +0300, Blue Swirl wrote:
> > > On 5/20/09, Michael S. Tsirkin wrote:
> > > > On Wed, May 20, 2009 at 11:02:24PM +0300, Michael S. Tsirkin wrote:
> > > > >
From: Rusty Russell
Date: Thu, 21 May 2009 16:27:05 +0930
> On Tue, 19 May 2009 12:10:13 pm David Miller wrote:
>> What you're doing by orphan'ing is creating a situation where a single
>> UDP socket can loop doing sends and monopolize the TX queue of a
>> device. The only control we have over a
On Tue, 19 May 2009 12:10:13 pm David Miller wrote:
> From: Rusty Russell
> Date: Mon, 18 May 2009 22:18:47 +0930
> > We check for finished xmit skbs on every xmit, or on a timer (unless
> > the host promises to force an interrupt when the xmit ring is empty).
> > This can penalize userspace tasks
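The mechanism under discussion, as a sketch with illustrative ring helpers (not virtio_net's actual code): completed skbs are reclaimed opportunistically on the next transmit, with a timer as the fallback when no further xmit arrives.

/* Illustrative sketch of reclaim-on-xmit with a timer fallback. */
static void free_old_xmit_skbs(struct tx_ring *r)
{
        struct sk_buff *skb;

        while ((skb = ring_pop_used(r)) != NULL)   /* host finished these */
                kfree_skb(skb);
}

static int start_xmit(struct sk_buff *skb, struct tx_ring *r)
{
        free_old_xmit_skbs(r);                     /* check on every xmit ... */
        ring_push(r, skb);
        mod_timer(&r->reclaim_timer, jiffies + 1); /* ... or on a timer later */
        return 0;
}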