On Tue, Dec 11, 2018 at 06:20:57PM +, Jean-Philippe Brucker wrote:
> Implement the virtio-iommu driver, following specification v0.9 [1].
>
> Only minor changes since v5 [2]. I fixed issues reported by Michael and
> added tags from Eric and Bharat. Thanks!
>
> You can find Linux driver and
On Tue, Nov 06, 2018 at 10:45:08AM +0800, Jason Wang wrote:
> > Storage industry is shifting away from SCSI, which has a scaling
> > problem.
>
>
> Know little about storage. For scaling, do you mean SCSI protocol itself? If
> not, it's probably not a real issue for virtio-scsi itself.
The
There is no good reason to not define ARCH_HAS_SG_CHAIN. To fix
your immediate problem just select it from riscv, as riscv uses
the generic dma-direct code which is chained S/G safe by definition.
And if you want to get extra points, do a quick audit of the remaining
iommu drivers on architectures
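A minimal sketch of the suggested change (the exact Kconfig placement inside arch/riscv/Kconfig is an assumption; ARCH_HAS_SG_CHAIN is the real symbol being discussed):

```kconfig
# arch/riscv/Kconfig (sketch): riscv uses the generic dma-direct code,
# which handles chained scatterlists, so it can simply select the symbol.
config RISCV
	def_bool y
	select ARCH_HAS_SG_CHAIN
```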
On Fri, Oct 26, 2018 at 01:28:54AM +0200, Paolo Bonzini wrote:
> On 15/10/2018 11:27, Christoph Hellwig wrote:
> > There are some issues in this spec. For one, using the multiple ranges
> > also for write zeroes is rather inefficient. Write zeroes really should
> > use th
On Fri, Oct 12, 2018 at 02:06:28PM -0700, Daniel Verkamp wrote:
> From: Changpeng Liu
>
> In commit 88c85538, "virtio-blk: add discard and write zeroes features
> to specification" (https://github.com/oasis-tcs/virtio-spec), the virtio
There are some issues in this spec. For one, using the
On Thu, Sep 06, 2018 at 07:09:09PM -0500, Jiandi An wrote:
> For virtio device we have to pass in iommu_platform=true flag for
> this to set the VIRTIO_F_IOMMU_PLATFORM flag. But for example
> QEMU has the use of iommu_platform attribute disabled for virtio-gpu
> device. So would also like to
On Thu, Aug 09, 2018 at 08:13:32AM +1000, Benjamin Herrenschmidt wrote:
> > > - if (xen_domain())
> > > + if (xen_domain() || pseries_secure_vm())
> > > return true;
> >
> > I don't think it's pseries specific actually. E.g. I suspect AMD SEV
> > might benefit from the same kind of
On Wed, Aug 08, 2018 at 08:07:49PM +1000, Benjamin Herrenschmidt wrote:
> Qemu virtio bypasses that iommu when the VIRTIO_F_IOMMU_PLATFORM flag
> is not set (default) but there's nothing in the device-tree to tell the
> guest about this since it's a violation of our pseries architecture, so
> we
On Wed, Aug 08, 2018 at 06:32:45AM +1000, Benjamin Herrenschmidt wrote:
> As for the flag itself, while we could set it from qemu when we get
> notified that the guest is going secure, both Michael and I think it's
> rather gross, it requires qemu to go iterate all virtio devices and
> "poke"
On Tue, Aug 07, 2018 at 04:42:44PM +1000, Benjamin Herrenschmidt wrote:
> Note that I can make it so that the same DMA ops (basically standard
> swiotlb ops without arch hacks) work for both "direct virtio" and
> "normal PCI" devices.
>
> The trick is simply in the arch to setup the iommu to map
On Tue, Aug 07, 2018 at 02:45:25AM +0300, Michael S. Tsirkin wrote:
> > > I think that's where Christoph might have specific ideas about it.
> >
> > OK well, assuming Christoph can solve the direct case in a way that
> > also work for the virtio !iommu case, we still want some bit of logic
> >
On Tue, Aug 07, 2018 at 08:13:56AM +1000, Benjamin Herrenschmidt wrote:
> It would be indeed ideal if all we had to do was setup some kind of
> bus_dma_mask on all PCI devices and have virtio automagically insert
> swiotlb when necessary.
For 4.20 I plan to remove the swiotlb ops and instead do
On Tue, Aug 07, 2018 at 05:52:12AM +1000, Benjamin Herrenschmidt wrote:
> > It is your job to write a coherent interface specification that does
> > not depend on the used components. The hypervisor might be PAPR,
> > Linux + qemu, VMware, Hyperv or something so secret that you'd have
> > to
On Tue, Aug 07, 2018 at 12:46:34AM +0300, Michael S. Tsirkin wrote:
> Well we have the RFC for that - the switch to using DMA ops unconditionally
> isn't
> problematic itself IMHO, for now that RFC is blocked
> by its perfromance overhead for now but Christoph says
> he's trying to remove that
On Tue, Aug 07, 2018 at 07:26:35AM +1000, Benjamin Herrenschmidt wrote:
> > I think Christoph merely objects to the specific implementation. If
> > instead you do something like tweak dev->bus_dma_mask for the virtio
> > device I think he won't object.
>
> Well, we don't have "bus_dma_mask" yet
On Mon, Aug 06, 2018 at 11:35:39PM +0300, Michael S. Tsirkin wrote:
> > As I said replying to Christoph, we are "leaking" into the interface
> > something here that is really what's the VM is doing to itself, which
> > is to stash its memory away in an inaccessible place.
> >
> > Cheers,
> > Ben.
On Mon, Aug 06, 2018 at 07:13:32PM +0300, Michael S. Tsirkin wrote:
> Oh that makes sense then. Could you post a pointer pls so
> this patchset is rebased on top (there are things to
> change about 4/4 but 1-3 could go in if they don't add
> overhead)?
The dma mapping direct calls will need a
On Mon, Aug 06, 2018 at 07:06:05PM +0300, Michael S. Tsirkin wrote:
> > I've done something very similar in the thread I posted a few years
> > ago.
>
> Right so that was before spectre where a virtual call was cheaper :(
Sorry, I meant days, not years. The whole point of the thread was the
On Mon, Aug 06, 2018 at 04:36:43PM +0300, Michael S. Tsirkin wrote:
> On Mon, Aug 06, 2018 at 02:32:28PM +0530, Anshuman Khandual wrote:
> > On 08/05/2018 05:54 AM, Michael S. Tsirkin wrote:
> > > On Fri, Aug 03, 2018 at 08:21:26PM -0500, Benjamin Herrenschmidt wrote:
> > >> On Fri, 2018-08-03 at
On Mon, Aug 06, 2018 at 07:16:47AM +1000, Benjamin Herrenschmidt wrote:
> Who would set this bit ? qemu ? Under what circumstances ?
I don't really care who sets what. The implementation might not even
involve qemu.
It is your job to write a coherent interface specification that does
not
On Sun, Aug 05, 2018 at 11:10:15AM +1000, Benjamin Herrenschmidt wrote:
> - One you have rejected, which is to have a way for "no-iommu" virtio
> (which still doesn't use an iommu on the qemu side and doesn't need
> to), to be forced to use some custom DMA ops on the VM side.
>
> - One, which
On Sun, Aug 05, 2018 at 03:09:55AM +0300, Michael S. Tsirkin wrote:
> So in this case however I'm not sure what exactly do we want to add. It
> seems that from point of view of the device, there is nothing special -
> it just gets a PA and writes there. It also seems that guest does not
> need to
On Fri, Aug 03, 2018 at 01:58:46PM -0500, Benjamin Herrenschmidt wrote:
> You are saying something along the lines of "I don't like an
> instruction in your ISA, let's not support your entire CPU architecture
> in Linux".
No. I'm saying if you can't describe your architecture in the virtio
spec
On Fri, Aug 03, 2018 at 10:17:32PM +0300, Michael S. Tsirkin wrote:
> It seems reasonable to teach a platform to override dma-range
> for a specific device e.g. in case it knows about bugs in ACPI.
A platform will be able to override dma-range using the dev->bus_dma_mask
field starting in 4.19. But
On Fri, Aug 03, 2018 at 10:58:36AM -0500, Benjamin Herrenschmidt wrote:
> On Fri, 2018-08-03 at 00:05 -0700, Christoph Hellwig wrote:
> > > 2- Make virtio use the DMA API with our custom platform-provided
> > > swiotlb callbacks when needed, that is when not using IOMM
On Thu, Aug 02, 2018 at 11:53:08PM +0300, Michael S. Tsirkin wrote:
> > We don't need cache flushing tricks.
>
> You don't but do real devices on same platform need them?
IBM's Power platforms are always cache coherent. There are some powerpc
platforms that have non-cache-coherent DMA, but I guess
On Thu, Aug 02, 2018 at 04:13:09PM -0500, Benjamin Herrenschmidt wrote:
> So let's differenciate the two problems of having an IOMMU (real or
> emulated) which indeeds adds overhead etc... and using the DMA API.
>
> At the moment, virtio does this all over the place:
>
> if (use_dma_api)
>
On Wed, Aug 01, 2018 at 09:16:38AM +0100, Will Deacon wrote:
> On arm/arm64, the problem we have is that legacy virtio devices on the MMIO
> transport (so definitely not PCI) have historically been advertised by qemu
> as not being cache coherent, but because the virtio core has bypassed DMA
> ops
On Mon, Jul 30, 2018 at 01:28:03PM +0300, Michael S. Tsirkin wrote:
> Let me reply to the "crappy" part first:
> So virtio devices can run on another CPU or on a PCI bus. Configuration
> can happen over mupltiple transports. There is a discovery protocol to
> figure out where it is. It has some
ete no-go.
Nacked-by: Christoph Hellwig
for both.
___
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization
> > +
> > + if (xen_domain())
> > + goto skip_override;
> > +
> > + if (virtio_has_iommu_quirk(dev))
> > + set_dma_ops(dev->dev.parent, &virtio_direct_dma_ops);
> > +
> > + skip_override:
> > +
>
> I prefer normal if scoping as opposed to goto spaghetti pls.
> Better yet move
> +const struct dma_map_ops virtio_direct_dma_ops;
This belongs into a header if it is non-static. If you only
use it in this file anyway please mark it static and avoid a forward
declaration.
> +
> int virtio_finalize_features(struct virtio_device *dev)
> {
> int ret =
> +/*
> + * Virtio direct mapping DMA API operations structure
> + *
> + * This defines DMA API structure for all virtio devices which would not
> + * either bring in their own DMA OPS from architecture or they would not
> + * like to use architecture specific IOMMU based DMA OPS because QEMU
> +
On Wed, Jun 13, 2018 at 11:11:01PM +1000, Benjamin Herrenschmidt wrote:
> Actually ... the stuff in lib/dma-direct.c seems to be just it, no ?
>
> There's no cache flushing and there's no architecture hooks that I can
> see other than the AMD security stuff which is probably fine.
>
> Or am I
Btw, if you are on a spree to remove almost unused data structures
from target code, the lib/btree.c code is only used by the qla2xxx
target code, and doesn't really look like the best fit for it either.
On Mon, Jun 11, 2018 at 01:29:18PM +1000, Benjamin Herrenschmidt wrote:
> At the risk of repeating myself, let's just do the first pass which is
> to switch virtio over to always using the DMA API in the actual data
> flow code, with a hook at initialization time that replaces the DMA ops
> with
On Thu, Jun 07, 2018 at 07:28:35PM +0300, Michael S. Tsirkin wrote:
> Let me restate it: DMA API has support for a wide range of hardware, and
> hardware based virtio implementations likely won't benefit from all of
> it.
That is completely wrong. All aspects of the DMA API are about the
system
On Thu, May 31, 2018 at 08:43:58PM +0300, Michael S. Tsirkin wrote:
> Pls work on a long term solution. Short term needs can be served by
> enabling the iommu platform in qemu.
So, I spent some time looking at converting virtio to dma ops overrides,
and the current virtio spec, and the sad
On Tue, Jun 05, 2018 at 09:26:56AM +1000, Benjamin Herrenschmidt wrote:
> Sorry Michael, that doesn't click. Yes of course virtio is implemented
> in qemu, but the problem we are trying to solve is *not* a qemu problem
> (the fact that the Linux drivers bypass the DMA API is wrong, needs
> fixing,
On Mon, Jun 04, 2018 at 03:43:09PM +0300, Michael S. Tsirkin wrote:
> Another is that given the basic functionality is in there, optimizations
> can possibly wait until per-device quirks in DMA API are supported.
We have had per-device dma_ops for quite a while.
On Tue, May 29, 2018 at 09:56:24AM +1000, Benjamin Herrenschmidt wrote:
> I don't think forcing the addition of an emulated iommu in the middle
> just to work around the fact that virtio "cheats" and doesn't use the
> dma API unless there is one, is the right "fix".
Agreed.
> The right long term
On Thu, May 24, 2018 at 08:27:04AM +1000, Benjamin Herrenschmidt wrote:
> - First qemu doesn't know that the guest will switch to "secure mode"
> in advance. There is no difference between a normal and a secure
> partition until the partition does the magic UV call to "enter secure
> mode" and
On Fri, Apr 06, 2018 at 06:37:18PM +1000, Benjamin Herrenschmidt wrote:
> > > implemented as DMA API which the virtio core understands. There is no
> > > need for an IOMMU to be involved for the device representation in this
> > > case IMHO.
> >
> > This whole virtio translation issue is a mess.
On Fri, Apr 06, 2018 at 08:23:10AM +0530, Anshuman Khandual wrote:
> On 04/06/2018 02:48 AM, Benjamin Herrenschmidt wrote:
> > On Thu, 2018-04-05 at 21:34 +0300, Michael S. Tsirkin wrote:
> >>> In this specific case, because that would make qemu expect an iommu,
> >>> and there isn't one.
> >>
>
Ok, it helps to make sure we're actually doing I/O from the CPU,
I've reproduced it now.
I can't reproduce it in my VM by adding a new CPU. Do you have
anything interesting blk-mq-wise, like actually using multiple queues? I'll
give that a spin next.
Jens, please don't just revert the commit in your for-linus tree.
On its own this will totally mess up the interrupt assignments. Give
me a bit of time to sort this out properly.
On Sat, Sep 16, 2017 at 04:16:06PM -0700, Laura Abbott wrote:
> Yes, the issue goes away when CONFIG_VIRTIO_BLK_SCSI is
> disabled.
Ok, so it's probably related to follow-ups to the scsi_request split.
That being said, I would highly recommend turning off
CONFIG_VIRTIO_BLK_SCSI in Fedora. The
On Fri, Sep 15, 2017 at 09:54:08AM -0700, Laura Abbott wrote:
> Hi,
>
> Fedora got a bug report on an early version of 4.13.2
> https://paste.fedoraproject.org/paste/t-Yx23LN5QwJ7oPZLj3zrg
Can you check if the issue goes away when you disable
CONFIG_VIRTIO_BLK_SCSI?
This will
lead to an incorrect assignment of MSI-X vectors, and potential
deadlocks when offlining cpus.
Signed-off-by: Christoph Hellwig <h...@lst.de>
Fixes: 0b0f9dc5 ("Revert "virtio_pci: use shared interrupts for virtqueues")
Reported-by: YASUAKI ISHIMATSU <yasu.isim...@gma
On Tue, Mar 28, 2017 at 04:39:25PM +0800, Changpeng Liu wrote:
> Currently virtio-blk driver does not provide discard feature flag, so the
> filesystems which built on top of the block device will not send discard
> command. This is okay for HDD backend, but it will impact the performance
> for
or GPF in virtio related code. Multiple people
>> have done bisections (Thank you Thorsten Leemhuis and
>> Richard W.M. Jones) and found this commit to be at fault
>>
>> 07ec51480b5eb1233f8c1b0f5d7a7c8d1247c507 is the first bad commit
>> commit 07ec51480b5eb
On Thu, Feb 09, 2017 at 06:01:57PM +0200, Michael S. Tsirkin wrote:
> > Any chance to get this in for 4.11 after I got reviews from Jason
> > for most of the patches?
>
> Absolutely, I intend to merge it.
So, what is the plan for virtio this merge window? No changes seem
to have made it into
On Sun, Feb 05, 2017 at 06:15:17PM +0100, Christoph Hellwig wrote:
> Hi Michael, hi Jason,
>
> This patches applies a few cleanups to the virtio PCI interrupt handling
> code, and then converts the virtio PCI code to use the automatic MSI-X
> vectors spreading, as well as using
On Tue, Feb 07, 2017 at 03:17:02PM +0800, Jason Wang wrote:
> The check is still there.
Meh, I could swear I fixed it up. Here is an updated version:
---
>From bf5e3b7fd272aea32388570503f00d0ab592fc2a Mon Sep 17 00:00:00 2001
From: Christoph Hellwig <h...@lst.de>
Date: Wed, 25 Jan 2
Use automatic IRQ affinity assignment in the virtio layer if available,
and build the blk-mq queues based on it.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
drivers/block/virtio_blk.c | 13 -
1 file changed, 12 insertions(+), 1 deletion(-)
diff --git a/drivers
to allocate the irq descriptors node-local and avoid interconnect
traffic. Last but not least, this will allow blk-mq queues to be created
based on the interrupt affinity for storage drivers.
Signed-off-by: Christoph Hellwig <h...@lst.de>
Reviewed-by: Jason Wang <jasow...@redhat.com>
---
d
This basically passes up the pci_irq_get_affinity information through
virtio via an optional get_vq_affinity method. It is only implemented
by the PCI backend for now, and only when we use per-virtqueue IRQs.
Signed-off-by: Christoph Hellwig <h...@lst.de>
Reviewed-by: Jason Wang
Try to grab the MSI-X vectors early and fall back to the shared one
before doing lots of allocations.
Signed-off-by: Christoph Hellwig <h...@lst.de>
Reviewed-by: Jason Wang <jasow...@redhat.com>
---
drivers/virtio/virtio_pci_common.c | 35 ++-
1 file
Use automatic IRQ affinity assignment in the virtio layer if available,
and build the blk-mq queues based on it.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
drivers/scsi/virtio_scsi.c | 126 +
include/linux/cpuhotplug.h | 1 -
2 files c
Similar to the PCI version, just calling into virtio instead.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
block/Kconfig | 5
block/Makefile| 1 +
block/blk-mq-virtio.c | 54 +++
include/linux/
Signed-off-by: Christoph Hellwig <h...@lst.de>
Reviewed-by: Jason Wang <jasow...@redhat.com>
---
drivers/virtio/virtio_pci_common.c | 5 ++---
drivers/virtio/virtio_pci_common.h | 2 --
drivers/virtio/virtio_pci_legacy.c | 2 +-
drivers/virtio/virtio_pci_modern.c | 2 +-
includ
semantics
- we can use a simple array to look up the MSI-X vec if needed.
- That simple array now also doubles as a replacement for the per_vq_vectors
flag
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
drivers/virtio/virtio_pci_common.c | 117 +++--
d
Hi Michael, hi Jason,
This series applies a few cleanups to the virtio PCI interrupt handling
code, and then converts the virtio PCI code to use the automatic MSI-X
vectors spreading, as well as using the information in virtio-blk
and virtio-scsi to automatically align the blk-mq queues to the
handlers there, and only treat
vector 0 as special.
Additionally clean up the VQ allocation code to properly unwind on error
instead of having a single global cleanup label, which is error prone,
and in this case also leads to more code.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
drivers/
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
drivers/virtio/virtio_mmio.c | 52
1 file changed, 4 insertions(+), 48 deletions(-)
diff --git a/drivers/virtio/virtio_mmio.c b/drivers/virtio/virtio_mmio.c
index 98adb10025fc..1367aec415bf
On Fri, Feb 03, 2017 at 05:47:41PM +0800, Jason Wang wrote:
>> No, we need to allocate the array larger in that case as we want proper
>> names for the interrupts.
>
> Consider the case of !per_vq_vectors, the size of msix_names is 2, but
> snprintf can do out of bound accessing here. (We name the
On Fri, Feb 03, 2017 at 03:54:54PM +0800, Jason Wang wrote:
> On 2017-01-27 16:16, Christoph Hellwig wrote:
>> +snprintf(vp_dev->msix_names[i + 1],
>> + sizeof(*vp_dev->msix_names), "%s-%s",
>>
On Fri, Feb 03, 2017 at 03:54:36PM +0800, Jason Wang wrote:
>> +list_for_each_entry(vq, &vp_dev->vdev.vqs, list) {
>> +if (vq->callback && vring_interrupt(irq, vq) == IRQ_HANDLED)
>
> The check of vq->callback seems redundant, we will check it soon in
> vring_interrupt().
Good point
related to it (like the one recently fixed for vmapped stacks)
do not affect other users, and the size of the virtblk_req structure
also shrinks significantly.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
drivers/block/Kconfig | 11 +++-
drivers/block/virtio_blk.c
We can simply use blk_mq_rq_from_pdu to get back at the request at
I/O completion time.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
drivers/block/virtio_blk.c | 20 ++--
1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/drivers/block/virtio_blk.c b/d
This way there is no need to drag in a dependency on the
BLOCK_PC code, which is going to become optional.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
drivers/block/skd_main.c | 15 +++
1 file changed, 11 insertions(+), 4 deletions(-)
diff --git a/drivers/block/skd_ma
We only need this code to support scsi, ide, cciss and virtio. And at
least for virtio it's a deprecated feature to start with.
This should shrink the kernel size a bit for embedded devices that only
use, say, eMMC.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
block/K
The NVMe SCSI emulation doesn't use BLOCK_PC requests, so BLK_MAX_CDB
doesn't have a meaning for it. Instead opencode the value of 16
and refactor the code a bit so that related checks are next to each
other and we only need to use the value in one place.
Signed-off-by: Christoph Hellwig
Hi all,
this series builds on my previous changes in Jens' for-4.11/rq-refactor
branch that split out the BLOCK_PC fields from struct request into a new
struct scsi_request, and makes support for struct scsi_request and the
SCSI passthrough ioctls optional. It is now only enabled by drivers that
This basically passes up the pci_irq_get_affinity information through
virtio via an optional get_vq_affinity method. It is only implemented
by the PCI backend for now, and only when we use per-virtqueue IRQs.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
drivers/
Use automatic IRQ affinity assignment in the virtio layer if available,
and build the blk-mq queues based on it.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
drivers/scsi/virtio_scsi.c | 126 +
include/linux/cpuhotplug.h | 1 -
2 files c
Use automatic IRQ affinity assignment in the virtio layer if available,
and build the blk-mq queues based on it.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
drivers/block/virtio_blk.c | 13 -
1 file changed, 12 insertions(+), 1 deletion(-)
diff --git a/drivers
Similar to the PCI version, just calling into virtio instead.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
block/Kconfig | 5
block/Makefile| 1 +
block/blk-mq-virtio.c | 54 +++
include/linux/
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
drivers/virtio/virtio_pci_common.c | 5 ++---
drivers/virtio/virtio_pci_common.h | 2 --
drivers/virtio/virtio_pci_legacy.c | 2 +-
drivers/virtio/virtio_pci_modern.c | 2 +-
include/uapi/linux/virtio_pci.h| 2 +-
5 files chan
Try to grab the MSI-X vectors early and fall back to the shared one
before doing lots of allocations.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
drivers/virtio/virtio_pci_common.c | 58 +++---
1 file changed, 29 insertions(+), 29 deletions(-)
diff
to allocate the irq descriptors node-local and avoid interconnect
traffic. Last but not least, this will allow blk-mq queues to be created
based on the interrupt affinity for storage drivers.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
drivers/block/virtio_blk.c | 3 ++-
driver
semantics
- we can use a simple array to look up the MSI-X vec if needed.
- That simple array now also doubles as a replacement for the per_vq_vectors
flag
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
drivers/virtio/virtio_pci_common.c | 117 +++--
d
handlers there, and only treat
vector 0 as special.
Additionally clean up the VQ allocation code to properly unwind on error
instead of having a single global cleanup label, which is error prone,
and in this case also leads to more code.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
drivers/
Hi Michael, hi Jason,
This series applies a few cleanups to the virtio PCI interrupt handling
code, and then converts the virtio PCI code to use the automatic MSI-X
vectors spreading, as well as using the information in virtio-blk
and virtio-scsi to automatically align the blk-mq queues to the
On Thu, Jan 26, 2017 at 11:41:09AM +0800, Fam Zheng wrote:
> This implements the VIRTIO_SCSI_F_FC_HOST feature by reading the config
> fields and presenting them as sysfs fc_host attributes. The config
> change handler is added here because primary_active will toggle during
> migration.
This
On Thu, Jan 12, 2017 at 11:37:22PM +0200, Michael S. Tsirkin wrote:
> It's handy for userspace emulators like QEMU.
But it's not actually a userspace API - it's an on-the-wire protocol.
So: NAK.
Is someone going to pick the patch up and send it to Linus? I keep
running into all kinds of boot failures whenever I forget to cherry-pick
it into my development trees...
On Thu, Jan 05, 2017 at 11:37:46AM +0100, 王金浦 wrote:
> Thanks, so it's only relevant to kernel > 4.9, as CONFIG_VMAP_STACK
> only introduced in 4.9 kernel.
kernel >= 4.9, but otherwise, yes.
On Wed, Jan 04, 2017 at 04:47:03PM +0100, 王金浦 wrote:
> This sounds scary.
> Could you share how to reproduce it, this should go into stable if
> it's the case.
Step 1: Build your kernel with CONFIG_VMAP_STACK=y
Step 2: issue a SG_IO ioctl, e.g. sg_inq /dev/vda
Without this fix attempts to do scsi passthrough on virtio_blk will crash
the system on virtually mapped stacks, which is something happening during
boot with many distros.
that
this includes running tools like hdparm even when the host does not have
SCSI passthrough enabled.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
drivers/block/virtio_blk.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio
We'll need a bit of a wider audience for this, I think...
On Wed, Dec 07, 2016 at 05:11:28PM -0800, Bart Van Assche wrote:
> Additionally, introduce set_dma_ops(). A later patch will introduce a
> call to that function in the RDMA drivers that will be modified to use
> dma_noop_ops.
This looks
On Sun, Nov 27, 2016 at 05:37:04AM +0200, Michael S. Tsirkin wrote:
> On Fri, Nov 25, 2016 at 08:25:38AM +0100, Christoph Hellwig wrote:
> > Btw, what's the best way to get any response to this series?
> > But this and the predecessor seem to have completely fallen on deaf
> >
On Tue, Dec 06, 2016 at 05:41:05PM +0200, Michael S. Tsirkin wrote:
> __CHECK_ENDIAN__ isn't on by default presumably because
> it triggers too many sparse warnings for correct code.
> But virtio is now clean of these warnings, and
> we want to keep it this way - enable this for
> sparse builds.
>
Btw, what's the best way to get any response to this series?
But this and the predecessor seem to have completely fallen on deaf
ears.
Use automatic IRQ affinity assignment in the virtio layer if available,
and build the blk-mq queues based on it.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
drivers/block/virtio_blk.c | 13 -
1 file changed, 12 insertions(+), 1 deletion(-)
diff --git a/drivers
This basically passes up the pci_irq_get_affinity information through
virtio via an optional get_vq_affinity method. It is only implemented
by the PCI backend for now, and only when we use per-virtqueue IRQs.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
drivers/
to allocate the irq descriptors node-local and avoid interconnect
traffic. Last but not least, this will allow blk-mq queues to be created
based on the interrupt affinity for storage drivers.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
drivers/block/virtio_blk.c | 3 ++-
driver
Similar to the PCI version, just calling into virtio instead.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
block/Kconfig | 5
block/Makefile| 1 +
block/blk-mq-virtio.c | 55 +++
include/linux/