Re: [PATCH] virtio-blk: fallback to draining the queue if barrier ops are not supported

2009-10-14 Thread Avi Kivity

On 10/14/2009 11:46 PM, Javier Guerra wrote:
> On Wed, Oct 14, 2009 at 7:03 AM, Avi Kivity  wrote:
>> Early implementations of virtio devices did not support barrier operations,
>> but did commit the data to disk.  In such cases, drain the queue to emulate
>> barrier operations.
>
> would this help in the (I think common) situation of XFS on a
> virtio-enabled VM using LVM-backed storage, where LVM just loses
> barriers?

No, it's a guest-only patch.  If LVM loses barriers, I don't think
anything can restore them.


--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH] virtio-blk: fallback to draining the queue if barrier ops are not supported

2009-10-14 Thread Christoph Hellwig
On Wed, Oct 14, 2009 at 07:38:45PM +0400, Michael Tokarev wrote:
> Avi Kivity wrote:
> >Early implementations of virtio devices did not support barrier operations,
> >but did commit the data to disk.  In such cases, drain the queue to emulate
> >barrier operations.
> 
> Are there any implementations currently that actually support barriers?
> As far as I remember there's no way to invoke barriers from a user-space
> application on Linux, and this is how kvm/qemu runs on this OS.

Ignore all the barrier talk.  The way Linux uses the various storage
transports, the primitives are queue draining (done entirely in the guest
block layer) and cache flushes.  Fdatasync is exactly the same primitive
as a WIN_FLUSH_CACHE in ATA or SYNCHRONIZE CACHE in SCSI, modulo the lack
of ranges in fdatasync - but that is just a performance optimization and
not actually used by Linux guests for now.



Re: [PATCH] virtio-blk: fallback to draining the queue if barrier ops are not supported

2009-10-14 Thread Michael Tokarev

Avi Kivity wrote:

Early implementations of virtio devices did not support barrier operations,
but did commit the data to disk.  In such cases, drain the queue to emulate
barrier operations.


Are there any implementations currently that actually support barriers?
As far as I remember there's no way to invoke barriers from a user-space
application on Linux, and this is how kvm/qemu runs on this OS.

Thanks!

/mjt


Re: [PATCH] virtio-blk: fallback to draining the queue if barrier ops are not supported

2009-10-14 Thread Javier Guerra
On Wed, Oct 14, 2009 at 7:03 AM, Avi Kivity  wrote:
> Early implementations of virtio devices did not support barrier operations,
> but did commit the data to disk.  In such cases, drain the queue to emulate
> barrier operations.

would this help in the (I think common) situation of XFS on a
virtio-enabled VM using LVM-backed storage, where LVM just loses
barriers?

-- 
Javier


[PATCH] virtio-blk: fallback to draining the queue if barrier ops are not supported

2009-10-14 Thread Avi Kivity
Early implementations of virtio devices did not support barrier operations,
but did commit the data to disk.  In such cases, drain the queue to emulate
barrier operations.

Signed-off-by: Avi Kivity 
---
 drivers/block/virtio_blk.c |6 +-
 1 files changed, 5 insertions(+), 1 deletions(-)

diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 43f1938..2627cc3 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -354,12 +354,16 @@ static int __devinit virtblk_probe(struct virtio_device *vdev)
 	vblk->disk->driverfs_dev = &vdev->dev;
 	index++;
 
-	/* If barriers are supported, tell block layer that queue is ordered */
+	/* If barriers are supported, tell block layer that queue is ordered;
+	 * otherwise just drain the queue.
+	 */
 	if (virtio_has_feature(vdev, VIRTIO_BLK_F_FLUSH))
 		blk_queue_ordered(vblk->disk->queue, QUEUE_ORDERED_DRAIN_FLUSH,
 				  virtblk_prepare_flush);
 	else if (virtio_has_feature(vdev, VIRTIO_BLK_F_BARRIER))
 		blk_queue_ordered(vblk->disk->queue, QUEUE_ORDERED_TAG, NULL);
+	else
+		blk_queue_ordered(vblk->disk->queue, QUEUE_ORDERED_DRAIN, NULL);
 
 	/* If disk is read-only in the host, the guest should obey */
 	if (virtio_has_feature(vdev, VIRTIO_BLK_F_RO))
-- 
1.6.2.5
