On Mon, 04/04 12:57, Stefan Hajnoczi wrote:
> On Fri, Apr 01, 2016 at 09:57:38PM +0800, Fam Zheng wrote:
> > Using the nested aio_poll() in coroutine is a bad idea. This patch
> > replaces the aio_poll loop in bdrv_drain with a BH, if called in
> > coroutine.
> >
> > For example, the bdrv_drain()
On 04/04/2016 09:26 AM, Kevin Wolf wrote:
> As the patches to move I/O throttling to BlockBackend didn't make it in
> time for the 2.6 release, but the release adds new ways of configuring
> VMs whose behaviour would change once the move is done, we need to
> outlaw such configurations
As the patches to move I/O throttling to BlockBackend didn't make it in
time for the 2.6 release, but the release adds new ways of configuring
VMs whose behaviour would change once the move is done, we need to
outlaw such configurations temporarily.
The problem exists whenever a BDS has more
On 04/04/2016 13:57, Stefan Hajnoczi wrote:
> On Fri, Apr 01, 2016 at 09:57:38PM +0800, Fam Zheng wrote:
>> Using the nested aio_poll() in coroutine is a bad idea. This patch
>> replaces the aio_poll loop in bdrv_drain with a BH, if called in
>> coroutine.
>>
>> For example, the bdrv_drain() in
On Fri, Apr 01, 2016 at 09:57:38PM +0800, Fam Zheng wrote:
> Using the nested aio_poll() in coroutine is a bad idea. This patch
> replaces the aio_poll loop in bdrv_drain with a BH, if called in
> coroutine.
>
> For example, the bdrv_drain() in mirror.c can hang when a guest-issued
> request is
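For illustration, the BH-based replacement described above might look roughly like the sketch below: the coroutine schedules a bottom half in the BDS's AioContext, yields, and is re-entered once the drain has completed outside of coroutine context. The struct and function names here (BdrvCoDrainData, bdrv_co_drain_bh_cb) are illustrative, not quoted from the patch.

/* Sketch: run bdrv_drain() from a BH instead of nesting aio_poll()
 * inside the coroutine. */
typedef struct {
    Coroutine *co;
    BlockDriverState *bs;
    QEMUBH *bh;
    bool done;
} BdrvCoDrainData;

static void bdrv_co_drain_bh_cb(void *opaque)
{
    BdrvCoDrainData *data = opaque;

    qemu_bh_delete(data->bh);
    bdrv_drain(data->bs);                 /* no longer in coroutine context */
    data->done = true;
    qemu_coroutine_enter(data->co, NULL); /* resume the waiting coroutine */
}

static void coroutine_fn bdrv_co_drain_sketch(BlockDriverState *bs)
{
    BdrvCoDrainData data = {
        .co = qemu_coroutine_self(),
        .bs = bs,
        .done = false,
    };

    data.bh = aio_bh_new(bdrv_get_aio_context(bs), bdrv_co_drain_bh_cb, &data);
    qemu_bh_schedule(data.bh);
    qemu_coroutine_yield();               /* woken up by the BH callback */
    assert(data.done);
}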
This patch makes the 'device' parameter of the 'block-stream' command
accept a node name as well as a device name.
In addition to that, operation blockers will be checked in all
intermediate nodes between the top and the base node.
Since qmp_block_stream() now uses the error from
Signed-off-by: Alberto Garcia
Reviewed-by: Max Reitz
Reviewed-by: Eric Blake
---
docs/live-block-ops.txt | 31 ++++++++++++++++++++-----------
1 file changed, 20 insertions(+), 11 deletions(-)
diff --git a/docs/live-block-ops.txt
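The intermediate-node blocker check mentioned in that message could plausibly be written along these lines; the helper name and the exact blocker type are assumptions made for the example, not taken from the patch:

/* Sketch: walk the backing chain from the top node down to (but not
 * including) the base node and refuse the operation if any node in
 * between is blocked for streaming. */
static bool stream_chain_is_blocked(BlockDriverState *top,
                                    BlockDriverState *base,
                                    Error **errp)
{
    BlockDriverState *iter;

    for (iter = top; iter && iter != base; iter = backing_bs(iter)) {
        if (bdrv_op_is_blocked(iter, BLOCK_OP_TYPE_STREAM, errp)) {
            return true;
        }
    }
    return false;
}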
Even if there are no common nodes involved, we currently don't support
several operations at the same time in the same backing chain.
Signed-off-by: Alberto Garcia
---
tests/qemu-iotests/030 | 21 +
tests/qemu-iotests/030.out | 4 ++--
2 files changed,
This test case checks that it's not possible to perform two
block-stream operations if there are nodes involved in both.
Signed-off-by: Alberto Garcia
---
tests/qemu-iotests/030 | 60 ++
tests/qemu-iotests/030.out | 4 ++--
2
This adds test_stream_intermediate(), similar to test_stream() but
streams to the intermediate image instead.
Signed-off-by: Alberto Garcia
Reviewed-by: Max Reitz
---
tests/qemu-iotests/030 | 18 +-
tests/qemu-iotests/030.out | 4 ++--
This makes sure that the image we are streaming into is open in
read-write mode during the operation.
The block job is created on the destination image, but operation
blockers are also set on the active layer. We do this in order to
prevent other block jobs from running in parallel in the same
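A minimal sketch of the reopen step described there, assuming the flags-based bdrv_reopen() interface; the helper name and the flag bookkeeping are illustrative:

/* Sketch: make sure the stream destination is writable for the duration
 * of the job, keeping the original flags so the original mode can be
 * restored with bdrv_reopen() once the job completes. */
static int stream_make_writable(BlockDriverState *bs, int *orig_flags,
                                Error **errp)
{
    *orig_flags = bdrv_get_flags(bs);

    if (!(*orig_flags & BDRV_O_RDWR)) {
        return bdrv_reopen(bs, *orig_flags | BDRV_O_RDWR, errp);
    }
    return 0;
}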
bdrv_close_all() cancels all block jobs by iterating over all
BlockDriverStates. This patch simplifies the code by iterating
directly over the block jobs using block_job_next().
Signed-off-by: Alberto Garcia
---
block.c | 25 ++---
1 file changed, 6
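Presumably the simplified loop looks roughly like the sketch below; the AioContext locking (and the assumption that each job still carries a bs pointer) is an illustration of what such a loop needs, not a quote from the patch:

/* Sketch: cancel every block job by walking the job list directly,
 * instead of iterating over all BlockDriverStates. */
BlockJob *job;

while ((job = block_job_next(NULL))) {
    AioContext *ctx = bdrv_get_aio_context(job->bs);

    aio_context_acquire(ctx);
    block_job_cancel_sync(job);
    aio_context_release(ctx);
}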
Currently, block jobs can only be owned by root nodes. This patch
allows block jobs to be in any arbitrary node, by making the following
changes:
- Block jobs can now be identified by the node name of their
BlockDriverState in addition to the device name. Since both device
and node names live
The current way to obtain the list of existing block jobs is to
iterate over all root nodes and check which ones own a job.
Since we want to be able to support block jobs in other nodes as well,
this patch keeps a list of jobs that is updated every time one is
created or destroyed.
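One way to keep such a list, sketched here with QEMU's QLIST macros; the job_list field name is illustrative, only block_job_next() itself comes from the series:

/* Sketch: a global list of block jobs, updated on creation and
 * destruction, so callers no longer have to scan every root BDS. */
static QLIST_HEAD(, BlockJob) block_jobs = QLIST_HEAD_INITIALIZER(block_jobs);

/* in struct BlockJob:  QLIST_ENTRY(BlockJob) job_list;                 */
/* on creation:         QLIST_INSERT_HEAD(&block_jobs, job, job_list);  */
/* on destruction:      QLIST_REMOVE(job, job_list);                    */

BlockJob *block_job_next(BlockJob *job)
{
    if (!job) {
        return QLIST_FIRST(&block_jobs);
    }
    return QLIST_NEXT(job, job_list);
}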
bdrv_drain_all() pauses all block jobs by using bdrv_next() to iterate
over all top-level BlockDriverStates. Therefore the code is unable to
find block jobs in other nodes.
This patch uses block_job_next() to iterate over all block jobs.
Signed-off-by: Alberto Garcia
---
I am curious about why overlapping_queue is required for sheepdog.
Overlapping requests have an unspecified outcome, so the CoQueue is not
necessary as long as the server doesn't crash or return an error.
Hitoshi, could you clarify this point for me?
Thanks,
Paolo
On Fri, Apr 01, 2016 at 05:30:58PM +0200, Max Reitz wrote:
> > Does it still serve its purpose if we warn the user that the
> > graph structure can contain little surprises :)?
>
> As I replied to Berto, I think we can come up with some constraints
> about what qemu may do and what it
On Mon, Apr 04, 2016 at 10:25:34AM +0200, Cornelia Huck wrote:
> On Mon, 4 Apr 2016 10:19:42 +0200
> Paolo Bonzini wrote:
>
> > On 04/04/2016 10:10, Cornelia Huck wrote:
> > > > This will be fixed by Cornelia's rework, and is an example of why I
> > > > think patch 1/9 is a
On Mon, 4 Apr 2016 10:19:42 +0200
Paolo Bonzini wrote:
> On 04/04/2016 10:10, Cornelia Huck wrote:
> > > This will be fixed by Cornelia's rework, and is an example of why I
> > > think patch 1/9 is a good idea (IOW, assign=false is harmful).
> >
> > So what do we want to do
On 04/04/2016 10:10, Cornelia Huck wrote:
> > This will be fixed by Cornelia's rework, and is an example of why I
> > think patch 1/9 is a good idea (IOW, assign=false is harmful).
>
> So what do we want to do for 2.6? The aio handler rework (without the
> cleanup) is needed. Do we want to
On Sun, 3 Apr 2016 23:13:28 +0200
Paolo Bonzini wrote:
> On 03/04/2016 21:59, Christian Borntraeger wrote:
> > Thread 1 (Thread 0x3ffad25bb90 (LWP 41685)):
> > ---Type <return> to continue, or q to quit---
> > #0 0x03ffab5be2c0 in raise () at /lib64/libc.so.6
> > #1
On 01/04/2016 18:08, Eric Blake wrote:
> The NBD protocol does not clearly document what will happen
> if a client sends NBD_CMD_FLAG_FUA on NBD_CMD_FLUSH.
> Historically, both the qemu and upstream NBD servers silently
> ignored that flag, but that feels a bit risky. Meanwhile, the
> qemu NBD