Re: [Qemu-block] [PATCH v2] block: Fix bdrv_drain in coroutine

2016-04-04 Thread Fam Zheng
On Mon, 04/04 12:57, Stefan Hajnoczi wrote:
> On Fri, Apr 01, 2016 at 09:57:38PM +0800, Fam Zheng wrote:
> > Using the nested aio_poll() in coroutine is a bad idea. This patch
> > replaces the aio_poll loop in bdrv_drain with a BH, if called in
> > coroutine.
> >
> > For example, the bdrv_drain()
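
The idea in the patch — schedule a bottom half and let the main loop complete the drain, instead of running a nested aio_poll() from coroutine context — can be modeled outside QEMU roughly like this. A minimal sketch: the tiny event loop and all names below are assumptions for illustration, not QEMU's actual API.

```python
# Minimal model of "schedule a BH and let the main loop finish the
# work" rather than polling recursively from coroutine context.
# The scheduler here is hypothetical; QEMU's real API differs.
pending_bhs = []

def schedule_bh(cb, opaque):
    """Queue a callback to run on the next main-loop iteration."""
    pending_bhs.append((cb, opaque))

def main_loop_iteration():
    """Run and clear all queued bottom halves."""
    while pending_bhs:
        cb, opaque = pending_bhs.pop(0)
        cb(opaque)

state = {"drained": False}

def drain_bh(st):
    # The actual draining work runs here, outside coroutine context.
    st["drained"] = True

# A coroutine that needs to drain schedules the BH and yields,
# instead of spinning in a nested aio_poll().
schedule_bh(drain_bh, state)
main_loop_iteration()
assert state["drained"]
```

The point of the shape: the coroutine never re-enters the event loop from inside itself, which is what made the nested-poll variant hang.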

Re: [Qemu-block] [Qemu-devel] [PATCH for-2.6] block: Forbid I/O throttling on nodes with multiple parents for 2.6

2016-04-04 Thread Eric Blake
On 04/04/2016 09:26 AM, Kevin Wolf wrote:
> As the patches to move I/O throttling to BlockBackend didn't make it in
> time for the 2.6 release, but the release adds new ways of configuring
> VMs whose behaviour would change once the move is done, we need to
> outlaw such configurations

[Qemu-block] [PATCH for-2.6] block: Forbid I/O throttling on nodes with multiple parents for 2.6

2016-04-04 Thread Kevin Wolf
As the patches to move I/O throttling to BlockBackend didn't make it in time for the 2.6 release, but the release adds new ways of configuring VMs whose behaviour would change once the move is done, we need to outlaw such configurations temporarily. The problem exists whenever a BDS has more

Re: [Qemu-block] [PATCH v2] block: Fix bdrv_drain in coroutine

2016-04-04 Thread Paolo Bonzini
On 04/04/2016 13:57, Stefan Hajnoczi wrote:
> On Fri, Apr 01, 2016 at 09:57:38PM +0800, Fam Zheng wrote:
>> Using the nested aio_poll() in coroutine is a bad idea. This patch
>> replaces the aio_poll loop in bdrv_drain with a BH, if called in
>> coroutine.
>>
>> For example, the bdrv_drain() in

Re: [Qemu-block] [PATCH v2] block: Fix bdrv_drain in coroutine

2016-04-04 Thread Stefan Hajnoczi
On Fri, Apr 01, 2016 at 09:57:38PM +0800, Fam Zheng wrote:
> Using the nested aio_poll() in coroutine is a bad idea. This patch
> replaces the aio_poll loop in bdrv_drain with a BH, if called in
> coroutine.
>
> For example, the bdrv_drain() in mirror.c can hang when a guest issued
> request is

[Qemu-block] [PATCH v9 07/11] block: Add QMP support for streaming to an intermediate layer

2016-04-04 Thread Alberto Garcia
This patch makes the 'device' parameter of the 'block-stream' command accept a node name as well as a device name. In addition to that, operation blockers will be checked in all intermediate nodes between the top and the base node. Since qmp_block_stream() now uses the error from
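
With this change, a 'block-stream' command can target an intermediate node by its node name rather than only a root device. A sketch of such a QMP request — the node name, filename, and speed value are illustrative assumptions, not taken from the patch:

```python
import json

# Hypothetical QMP request: 'device' now accepts the node name of an
# intermediate image (here "node2") in addition to a device name.
cmd = {
    "execute": "block-stream",
    "arguments": {
        "device": "node2",          # node name of an intermediate image
        "base": "/tmp/base.qcow2",  # data below this image is streamed up
        "speed": 0,                 # no rate limit
    },
}
print(json.dumps(cmd))
```

Operation blockers on the intermediate nodes between top and base are what keep a second job from touching the same part of the chain while this runs.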

[Qemu-block] [PATCH v9 08/11] docs: Document how to stream to an intermediate layer

2016-04-04 Thread Alberto Garcia
Signed-off-by: Alberto Garcia Reviewed-by: Max Reitz Reviewed-by: Eric Blake --- docs/live-block-ops.txt | 31 --- 1 file changed, 20 insertions(+), 11 deletions(-) diff --git a/docs/live-block-ops.txt

[Qemu-block] [PATCH v9 11/11] qemu-iotests: test non-overlapping block-stream operations

2016-04-04 Thread Alberto Garcia
Even if there are no common nodes involved, we currently don't support several operations at the same time in the same backing chain. Signed-off-by: Alberto Garcia --- tests/qemu-iotests/030 | 21 + tests/qemu-iotests/030.out | 4 ++-- 2 files changed,

[Qemu-block] [PATCH v9 10/11] qemu-iotests: test overlapping block-stream operations

2016-04-04 Thread Alberto Garcia
This test case checks that it's not possible to perform two block-stream operations if there are nodes involved in both. Signed-off-by: Alberto Garcia --- tests/qemu-iotests/030 | 60 ++ tests/qemu-iotests/030.out | 4 ++-- 2

[Qemu-block] [PATCH v9 09/11] qemu-iotests: test streaming to an intermediate layer

2016-04-04 Thread Alberto Garcia
This adds test_stream_intermediate(), similar to test_stream() but streams to the intermediate image instead. Signed-off-by: Alberto Garcia Reviewed-by: Max Reitz --- tests/qemu-iotests/030 | 18 +- tests/qemu-iotests/030.out | 4 ++--

[Qemu-block] [PATCH v9 06/11] block: Support streaming to an intermediate layer

2016-04-04 Thread Alberto Garcia
This makes sure that the image we are streaming into is open in read-write mode during the operation. The block job is created on the destination image, but operation blockers are also set on the active layer. We do this in order to prevent other block jobs from running in parallel in the same

[Qemu-block] [PATCH v9 04/11] block: use the block job list in bdrv_close()

2016-04-04 Thread Alberto Garcia
bdrv_close_all() cancels all block jobs by iterating over all BlockDriverStates. This patch simplifies the code by iterating directly over the block jobs using block_job_next(). Signed-off-by: Alberto Garcia --- block.c | 25 ++--- 1 file changed, 6
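
The simplification amounts to iterating a flat job list directly instead of scanning every BlockDriverState for an attached job. A Python sketch of a block_job_next()-style iterator — QEMU's is a C function, and these names mirror it only loosely:

```python
# Made-up job names; in QEMU each entry would be a BlockJob.
jobs = ["stream0", "commit1", "mirror2"]

def block_job_next(job):
    """Return the job after `job`, or the first job when job is None."""
    if job is None:
        return jobs[0] if jobs else None
    i = jobs.index(job)
    return jobs[i + 1] if i + 1 < len(jobs) else None

cancelled = []
job = block_job_next(None)
while job is not None:
    cancelled.append(job)   # bdrv_close_all() would cancel the job here
    job = block_job_next(job)

assert cancelled == ["stream0", "commit1", "mirror2"]
```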

[Qemu-block] [PATCH v9 05/11] block: allow block jobs in any arbitrary node

2016-04-04 Thread Alberto Garcia
Currently, block jobs can only be owned by root nodes. This patch allows block jobs to be in any arbitrary node, by making the following changes: - Block jobs can now be identified by the node name of their BlockDriverState in addition to the device name. Since both device and node names live

[Qemu-block] [PATCH v9 01/11] block: keep a list of block jobs

2016-04-04 Thread Alberto Garcia
The current way to obtain the list of existing block jobs is to iterate over all root nodes and check which ones own a job. Since we want to be able to support block jobs in other nodes as well, this patch keeps a list of jobs that is updated every time one is created or destroyed.
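
The bookkeeping described here boils down to registering each job in a global list when it is created and unlinking it when it is destroyed, so callers no longer have to walk the root nodes. A sketch with illustrative names (QEMU's implementation uses a C linked list):

```python
# Global registry kept in sync on create/destroy.
block_jobs = []

class BlockJob:
    def __init__(self, job_type, node):
        self.job_type = job_type
        self.node = node
        block_jobs.append(self)    # register on creation

    def destroy(self):
        block_jobs.remove(self)    # unregister on destruction

j1 = BlockJob("stream", "node0")
j2 = BlockJob("mirror", "node1")
assert len(block_jobs) == 2
j1.destroy()
assert block_jobs == [j2]
```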

[Qemu-block] [PATCH v9 02/11] block: use the block job list in bdrv_drain_all()

2016-04-04 Thread Alberto Garcia
bdrv_drain_all() pauses all block jobs by using bdrv_next() to iterate over all top-level BlockDriverStates. Therefore the code is unable to find block jobs in other nodes. This patch uses block_job_next() to iterate over all block jobs. Signed-off-by: Alberto Garcia ---

[Qemu-block] sheepdog's "CoQueue overlapping_queue;"

2016-04-04 Thread Paolo Bonzini
I am curious about why overlapping_queue is required for sheepdog. Overlapping requests have unspecified outcome so the CoQueue is not necessary as long as the server doesn't crash or return an error. Hitoshi, could you clarify this point for me? Thanks, Paolo

Re: [Qemu-block] [RFC for-2.7 0/1] block/qapi: Add query-block-node-tree

2016-04-04 Thread Stefan Hajnoczi
On Fri, Apr 01, 2016 at 05:30:58PM +0200, Max Reitz wrote:
> > Does it still serve its purpose if we warn the user that the
> > graph structure can contain little surprises :)?
>
> As I replied to Berto, I think we can come up with some constraints
> about what qemu may do and what it

Re: [Qemu-block] [PATCH] virtio-blk: assert on starting/stopping

2016-04-04 Thread Michael S. Tsirkin
On Mon, Apr 04, 2016 at 10:25:34AM +0200, Cornelia Huck wrote:
> On Mon, 4 Apr 2016 10:19:42 +0200
> Paolo Bonzini wrote:
>
> > On 04/04/2016 10:10, Cornelia Huck wrote:
> > > > This will be fixed by Cornelia's rework, and is an example of why I
> > > > think patch 1/9 is a

Re: [Qemu-block] [PATCH] virtio-blk: assert on starting/stopping

2016-04-04 Thread Cornelia Huck
On Mon, 4 Apr 2016 10:19:42 +0200 Paolo Bonzini wrote:
> On 04/04/2016 10:10, Cornelia Huck wrote:
> > > This will be fixed by Cornelia's rework, and is an example of why I
> > > think patch 1/9 is a good idea (IOW, assign=false is harmful).
> >
> > So what do we want to do

Re: [Qemu-block] [PATCH] virtio-blk: assert on starting/stopping

2016-04-04 Thread Paolo Bonzini
On 04/04/2016 10:10, Cornelia Huck wrote:
> > This will be fixed by Cornelia's rework, and is an example of why I
> > think patch 1/9 is a good idea (IOW, assign=false is harmful).
>
> So what do we want to do for 2.6? The aio handler rework (without the
> cleanup) is needed. Do we want to

Re: [Qemu-block] [PATCH] virtio-blk: assert on starting/stopping

2016-04-04 Thread Cornelia Huck
On Sun, 3 Apr 2016 23:13:28 +0200 Paolo Bonzini wrote:
> On 03/04/2016 21:59, Christian Borntraeger wrote:
> > Thread 1 (Thread 0x3ffad25bb90 (LWP 41685)):
> > ---Type to continue, or q to quit---
> > #0 0x03ffab5be2c0 in raise () at /lib64/libc.so.6
> > #1

Re: [Qemu-block] [PATCH for-2.6] nbd: don't request FUA on FLUSH

2016-04-04 Thread Paolo Bonzini
On 01/04/2016 18:08, Eric Blake wrote:
> The NBD protocol does not clearly document what will happen
> if a client sends NBD_CMD_FLAG_FUA on NBD_CMD_FLUSH.
> Historically, both the qemu and upstream NBD servers silently
> ignored that flag, but that feels a bit risky. Meanwhile, the
> qemu NBD
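
The client-side fix under discussion comes down to masking the FUA flag when the command is a flush. A sketch using constants from the public NBD protocol description — treat the exact values as assumptions here:

```python
# NBD constants per the public protocol draft (treated as assumptions).
NBD_CMD_WRITE = 1
NBD_CMD_FLUSH = 3
NBD_CMD_FLAG_FUA = 1 << 0

def request_flags(cmd, flags):
    """Drop FUA on FLUSH: servers have historically ignored the
    combination and the protocol leaves its meaning unspecified."""
    if cmd == NBD_CMD_FLUSH:
        flags &= ~NBD_CMD_FLAG_FUA
    return flags

assert request_flags(NBD_CMD_FLUSH, NBD_CMD_FLAG_FUA) == 0
assert request_flags(NBD_CMD_WRITE, NBD_CMD_FLAG_FUA) == NBD_CMD_FLAG_FUA
```

Writes keep their FUA flag untouched; only the flush path is affected, matching the patch's narrow scope.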