Instead of checking both frozen and qmp_locked separately, wrap them into
one check. frozen implies the bitmap is split in two (for backup) and
shouldn't be modified. qmp_locked implies it's in use by another operation,
such as being exported over NBD. In both cases we shouldn't allow
the user to modify it.
If the bitmap is locked, we shouldn't touch it.
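For illustration, the consolidated check could look like the sketch below;
the helper name bdrv_dirty_bitmap_user_locked() is an assumption here, while
bdrv_dirty_bitmap_frozen() and bdrv_dirty_bitmap_qmp_locked() are the
existing per-condition accessors:

    /* Sketch only: a single predicate callers can use instead of testing
     * frozen and qmp_locked separately. */
    static bool bdrv_dirty_bitmap_user_locked(BdrvDirtyBitmap *bitmap)
    {
        /* frozen: split in two for backup, must not be modified */
        /* qmp_locked: in use by another operation, e.g. an NBD export */
        return bdrv_dirty_bitmap_frozen(bitmap) ||
               bdrv_dirty_bitmap_qmp_locked(bitmap);
    }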
Signed-off-by: John Snow
---
blockdev.c | 12 ++--
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/blockdev.c b/blockdev.c
index 751e153557..c998336a43 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -3512,10 +3512,10 @@ static Bl
We're not being consistent about this: if a bitmap is in use by an
operation, the user should not be able to change that bitmap's behavior.
Signed-off-by: John Snow
---
blockdev.c | 26 --
1 file changed, 20 insertions(+), 6 deletions(-)
diff --git a/blockdev.c b/blockdev.
We wish to prohibit merging into read-only bitmaps and frozen bitmaps,
but a "disabled" bitmap only precludes its recording of new, live
writes. Being disabled does not prohibit manual writes at the behest
of the user, which is exactly what a merge operation is.
Allow merging into "disabled" bitmaps,
an
Similarly to merge, it's OK to allow clear operations on disabled
bitmaps; being disabled only means that they are not recording new
writes. We are free to clear them if the user requests it.
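As a rough sketch of the intended check for either operation (assuming the
consolidated locked predicate from the first patch; the error wording is
illustrative, not taken from the actual patch):

    if (bdrv_dirty_bitmap_user_locked(bitmap)) {
        /* frozen or qmp_locked: in use, reject the user's request */
        error_setg(errp, "Bitmap '%s' is currently in use by another"
                   " operation and cannot be modified",
                   bdrv_dirty_bitmap_name(bitmap));
        return;
    }
    if (bdrv_dirty_bitmap_readonly(bitmap)) {
        error_setg(errp, "Bitmap '%s' is read-only and cannot be modified",
                   bdrv_dirty_bitmap_name(bitmap));
        return;
    }
    /* Deliberately no check for "disabled": it only stops the recording
     * of new writes and does not forbid explicit writes requested by the
     * user, such as merge or clear. */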
Signed-off-by: John Snow
---
block/dirty-bitmap.c | 1 -
blockdev.c | 8
2 files cha
based on: jsnow/bitmaps staging branch
This series builds on a previous standalone patch and adjusts
the permissions for all (or most) of the QMP bitmap commands.
John Snow (5):
block/dirty-bitmaps: add user_modifiable status checker
block/dirty-bitmaps: fix merge permissions
block/dirty-bit
On 09/25/2018 12:12 AM, Jeff Cody wrote:
> On Tue, Sep 25, 2018 at 12:09:15AM -0400, Jeff Cody wrote:
>> I'll not be involved with day-to-day qemu development, and John
>> Snow is a block jobs wizard. Have him take over block job
>> maintainership duties.
>>
>> Signed-off-by: Jeff Cody
>> ---
> On Sep 25, 2018, at 12:46 PM, Murilo Opsfelder Araujo
> wrote:
>
> Hi, John.
>
> On Tue, Sep 25, 2018 at 11:39:49AM -0400, John Arbuckle wrote:
>> Add the ability for the user to display help for a certain command.
>> Example: qemu-img create --help
>>
>> What is printed is all the options
On 25 September 2018 at 16:14, Max Reitz wrote:
> The following changes since commit 506e4a00de01e0b29fa83db5cbbc3d154253b4ea:
>
> Merge remote-tracking branch 'remotes/dgibson/tags/ppc-for-3.1-20180925'
> into staging (2018-09-25 13:30:45 +0100)
>
> are availa
Hi, John.
On Tue, Sep 25, 2018 at 11:39:49AM -0400, John Arbuckle wrote:
> Add the ability for the user to display help for a certain command.
> Example: qemu-img create --help
>
> What is printed is all the options available to this command and an example.
>
> Signed-off-by: John Arbuckle
Segfa
On 9/25/18 9:13 AM, Alberto Garcia wrote:
> If you only want to copy parts of a backing file I think it's much
> simpler if you use copy-on-read:
> qemu-io -C -c 'read 0 1M' img.003.commit.000
Oh, slick.
$ qemu-img convert -O qcow2 \
"json:{'driver':'null-co','size':1048576}" \
"json:
From: Kevin Wolf
This is a regression test for a deadlock that could occur in callbacks
called from the aio_poll() in bdrv_drain_poll_top_level(). The
AioContext lock wasn't released and therefore would be taken a second
time in the callback. This would cause a possible AIO_WAIT_WHILE() in
the ca
Add the ability for the user to display help for a certain command.
Example: qemu-img create --help
What is printed is all the options available to this command and an example.
Signed-off-by: John Arbuckle
---
v3 changes:
Fixed a bug that caused qemu-img to crash when running a command without
From: Kevin Wolf
This extends the existing drain test with a block job to include
variants where the block job runs in a different AioContext.
Signed-off-by: Kevin Wolf
Reviewed-by: Fam Zheng
---
tests/test-bdrv-drain.c | 92 ++---
1 file changed, 86 insert
From: Kevin Wolf
Commit 89bd030533e changed the test case from using job_sleep_ns() to
using qemu_co_sleep_ns() instead. Also, block_job_sleep_ns() became
job_sleep_ns() in commit 5d43e86e11f.
In both cases, some comments in the test case were not updated. Do that
now.
Reported-by: Max Reitz
S
From: Kevin Wolf
This adds tests for calling AIO_WAIT_WHILE() in the .commit and .abort
callbacks. Both reasons why .abort could be called for a single job are
tested: Either .run or .prepare could return an error.
Signed-off-by: Kevin Wolf
Reviewed-by: Max Reitz
---
tests/test-bdrv-drain.c |
From: Fam Zheng
All callers have acquired ctx already. Doing that again results in
aio_poll() hang. This fixes the problem that a BDRV_POLL_WHILE() in the
callback cannot make progress because ctx is recursively locked, for
example, when drive-backup finishes.
There are two callers of job_finali
From: Kevin Wolf
bdrv_drain_poll_top_level() was buggy because it didn't release the
AioContext lock of the node to be drained before calling aio_poll().
This way, callbacks called by aio_poll() would possibly take the lock a
second time and run into a deadlock with a nested AIO_WAIT_WHILE() call
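The shape of the fix, as a minimal sketch (still_busy() stands in for the
real bdrv_drain_poll() condition; the point is only the release/poll/
re-acquire ordering, not the exact call sites):

    #include "qemu/osdep.h"
    #include "block/block.h"
    #include "block/aio.h"
    #include "qemu/main-loop.h"

    static void drain_poll_sketch(BlockDriverState *bs,
                                  bool (*still_busy)(BlockDriverState *))
    {
        AioContext *ctx = bdrv_get_aio_context(bs);

        while (still_busy(bs)) {
            /* Drop the node's AioContext lock so callbacks run by
             * aio_poll() can take it without deadlocking... */
            aio_context_release(ctx);
            aio_poll(qemu_get_aio_context(), true);
            /* ...and re-take it before re-evaluating the condition. */
            aio_context_acquire(ctx);
        }
    }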
From: Kevin Wolf
For the block job drain test, don't only test draining the source and
the target node, but create a backing chain for the source
(source_backing <- source <- source_overlay) and test draining each of
the nodes in it.
When using iothreads, the source node (and therefore the job)
From: Kevin Wolf
When draining a block node, we recurse to its parent and for subtree
drains also to its children. A single AIO_WAIT_WHILE() is then used to
wait for bdrv_drain_poll() to become true, which depends on all of the
nodes we recursed to. However, if the respective child or parent beco
From: Kevin Wolf
Amongst others, job_finalize_single() calls the .prepare/.commit/.abort
callbacks of the individual job driver. Recently, their use was adapted
for all block jobs so that they involve code calling AIO_WAIT_WHILE()
now. Such code must be called under the AioContext lock for the
re
From: Kevin Wolf
Block jobs claim in .drained_poll() that they are in a quiescent state
as soon as job->deferred_to_main_loop is true. This is obviously wrong,
they still have a completion BH to run. We only get away with this
because commit 91af091f923 added an unconditional aio_poll(false) to t
From: Sergio Lopez
In qemu_laio_process_completions_and_submit, the AioContext is acquired
before the ioq_submit iteration and after qemu_laio_process_completions,
but the latter is not thread safe either.
This change avoids a number of random crashes when the Main Thread and
an IO Thread collid
From: Kevin Wolf
blk_unref() first decreases the refcount of the BlockBackend and calls
blk_delete() if the refcount reaches zero. Requests can still be in
flight at this point, they are only drained during blk_delete():
At this point, arbitrary callbacks can run. If any callback takes a
tempora
From: Kevin Wolf
Request callbacks can do pretty much anything, including operations that
will yield from the coroutine (such as draining the backend). In that
case, a decreased in_flight would be visible to other code and could
lead to a drain completing while the callback hasn't actually comple
From: Kevin Wolf
Even if AIO_WAIT_WHILE() is called in the home context of the
AioContext, we still want to allow the condition to change depending on
other threads as long as they kick the AioWait. Specifically, block jobs
can be running in an I/O thread and should then be able to kick a drain
in
From: Kevin Wolf
In the context of draining a BDS, the .drained_poll callback of block
jobs is called. If this returns true (i.e. there is still some activity
pending), the drain operation may call aio_poll() with blocking=true to
wait for completion.
As soon as the pending activity is completed
From: Kevin Wolf
A bdrv_drain operation must ensure that all parents are quiesced, this
includes BlockBackends. Otherwise, callbacks called by requests that are
completed on the BDS layer, but not quite yet on the BlockBackend layer
could still create new requests.
Signed-off-by: Kevin Wolf
Rev
From: Kevin Wolf
bdrv_do_drained_begin/end() assume that they are called with the
AioContext lock of bs held. If we call drain functions from a coroutine
with the AioContext lock held, we yield and schedule a BH to move out of
coroutine context. This means that the lock for the home context of th
From: John Snow
Signed-off-by: John Snow
Reviewed-by: Max Reitz
Message-id: 20180906130225.5118-14-js...@redhat.com
Reviewed-by: Jeff Cody
Signed-off-by: Max Reitz
---
qapi/block-core.json | 30 --
blockdev.c | 14 ++
2 files changed, 42 inse
From: John Snow
Signed-off-by: John Snow
Reviewed-by: Max Reitz
Message-id: 20180906130225.5118-15-js...@redhat.com
Reviewed-by: Jeff Cody
Signed-off-by: Max Reitz
---
qapi/block-core.json | 16 +++-
blockdev.c | 9 +
hmp.c | 5 +++--
3 files ch
From: John Snow
Fix documentation to match the other jobs amended for 3.1.
Signed-off-by: John Snow
Reviewed-by: Max Reitz
Message-id: 20180906130225.5118-16-js...@redhat.com
Reviewed-by: Jeff Cody
Signed-off-by: Max Reitz
---
qapi/block-core.json | 18 ++
1 file changed, 10
From: John Snow
Presently only the backup job really guarantees what one would consider
transactional semantics. To guard against someone helpfully adding them
in the future, document that there are shortcomings in the model that
would need to be audited at that time.
Signed-off-by: John Snow
M
From: Kevin Wolf
job_finish_sync() needs to release the AioContext lock of the job before
calling aio_poll(). Otherwise, callbacks called by aio_poll() would
possibly take the lock a second time and run into a deadlock with a
nested AIO_WAIT_WHILE() call.
Also, job_drain() without aio_poll() isn
From: Kevin Wolf
This is a regression test for a deadlock that occurred in block job
completion callbacks (via job_defer_to_main_loop) because the AioContext
lock was taken twice: once in job_finish_sync() and then again in
job_defer_to_main_loop_bh(). This would cause AIO_WAIT_WHILE() to hang.
From: Kevin Wolf
All callers in QEMU proper hold the AioContext lock when calling
job_finish_sync(). test-blockjob should do the same when it calls the
function indirectly through job_cancel_sync().
Signed-off-by: Kevin Wolf
Reviewed-by: Fam Zheng
---
include/qemu/job.h | 6 ++
tests/t
From: Kevin Wolf
job_completed() had a problem with double locking that was recently
fixed independently by two different commits:
"job: Fix nested aio_poll() hanging in job_txn_apply"
"jobs: add exit shim"
One fix removed the first aio_context_acquire(), the other fix removed
the other one. No
From: Alberto Garcia
We just fixed a bug that was causing a use-after-free when QEMU was
unable to create a temporary snapshot. This is a test case for this
scenario.
Signed-off-by: Alberto Garcia
Signed-off-by: Kevin Wolf
---
tests/qemu-iotests/051| 3 +++
tests/qemu-iotests/051.out
From: Sergio Lopez
AIO coroutines shouldn't be managed by an AioContext different from the
one assigned when they are created. aio_co_enter avoids entering a
coroutine from a different AioContext, calling aio_co_schedule instead.
Scheduled coroutines are then entered by co_schedule_bh_cb using
q
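Simplified sketch of the dispatch described above (the direct-entry path is
condensed and the in-coroutine case is omitted):

    void aio_co_enter(AioContext *ctx, Coroutine *co)
    {
        if (ctx != qemu_get_current_aio_context()) {
            /* Never enter a coroutine from a foreign AioContext; hand it
             * over to its owning context, whose co_schedule_bh_cb will
             * enter it from the right thread. */
            aio_co_schedule(ctx, co);
            return;
        }
        /* Already in the right context: enter the coroutine directly. */
        aio_context_acquire(ctx);
        qemu_aio_coroutine_enter(ctx, co);
        aio_context_release(ctx);
    }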
From: Alberto Garcia
When a block device is opened with BDRV_O_SNAPSHOT and the
bdrv_append_temp_snapshot() call fails then the error code path tries
to unref the already destroyed 'options' QDict.
This can be reproduced easily by setting TMPDIR to a location where
the QEMU process can't write:
From: Kevin Wolf
The block-commit QMP command required specifying the top and base nodes
of the commit job using the file names of those nodes. While this works
in simple cases (local files with absolute paths), the file names
generated for more complicated setups can be hard to predict.
The block
From: Kevin Wolf
This adds some tests for block-commit with the new options top-node and
base-node (taking node names) instead of top and base (taking file
names).
Signed-off-by: Kevin Wolf
---
tests/qemu-iotests/040 | 52 --
tests/qemu-iotests/040.out |
From: John Snow
Now that all of the jobs use the component finalization callbacks,
there's no use for the heavy-hammer .exit callback anymore.
job_exit becomes a glorified type shim so that we can call
job_completed from aio_bh_schedule_oneshot.
Move these three functions down into job.c to eli
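A sketch of the resulting shape (assuming job_completed() takes only the
Job pointer at this point in the series, and eliding the AioContext locking
around it):

    /* job_exit is now just a type shim: adapt job_completed() to the
     * void (*)(void *) signature that aio_bh_schedule_oneshot() wants. */
    static void job_exit(void *opaque)
    {
        Job *job = opaque;
        job_completed(job);
    }

    /* ...when the job coroutine returns: */
    aio_bh_schedule_oneshot(qemu_get_aio_context(), job_exit, job);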
From: John Snow
The exit callback in this test actually only performs cleanup.
Signed-off-by: John Snow
Reviewed-by: Max Reitz
Message-id: 20180906130225.5118-11-js...@redhat.com
Reviewed-by: Jeff Cody
Signed-off-by: Max Reitz
---
tests/test-blockjob-txn.c | 4 ++--
1 file changed, 2 insert
From: John Snow
Signed-off-by: John Snow
Reviewed-by: Max Reitz
Message-id: 20180906130225.5118-8-js...@redhat.com
Reviewed-by: Jeff Cody
Signed-off-by: Max Reitz
---
block/stream.c | 23 +++
1 file changed, 15 insertions(+), 8 deletions(-)
diff --git a/block/stream.c b/
From: John Snow
For purposes of minimum code movement, refactor the mirror_exit
callback to use the post-finalization callbacks in a trivial way.
Signed-off-by: John Snow
Message-id: 20180906130225.5118-7-js...@redhat.com
Reviewed-by: Jeff Cody
Reviewed-by: Max Reitz
[mreitz: Added comment fo
From: John Snow
Signed-off-by: John Snow
Reviewed-by: Max Reitz
Message-id: 20180906130225.5118-13-js...@redhat.com
Reviewed-by: Jeff Cody
Signed-off-by: Max Reitz
---
qapi/block-core.json | 16 +++-
blockdev.c | 8
2 files changed, 23 insertions(+), 1 deletio
From: John Snow
These tests don't actually test blockjobs anymore, they test
generic Job lifetimes. Change the types accordingly.
Signed-off-by: John Snow
Reviewed-by: Max Reitz
Message-id: 20180906130225.5118-9-js...@redhat.com
Reviewed-by: Jeff Cody
Signed-off-by: Max Reitz
---
tests/test
From: John Snow
We remove the exit callback and the completed boolean along with it.
We can simulate it just fine by waiting for the job to defer to the
main loop, and then giving it one final kick to get the main loop
portion to run.
Signed-off-by: John Snow
Reviewed-by: Max Reitz
Message-id:
From: John Snow
Add support for taking and passing forward job creation flags.
Signed-off-by: John Snow
Reviewed-by: Max Reitz
Reviewed-by: Jeff Cody
Message-id: 20180906130225.5118-4-js...@redhat.com
Signed-off-by: Max Reitz
---
include/block/block_int.h | 5 -
block/stream.c
From: John Snow
Add support for taking and passing forward job creation flags.
Signed-off-by: John Snow
Reviewed-by: Max Reitz
Reviewed-by: Jeff Cody
Message-id: 20180906130225.5118-3-js...@redhat.com
Signed-off-by: Max Reitz
---
include/block/block_int.h | 5 -
block/mirror.c
The following changes since commit 506e4a00de01e0b29fa83db5cbbc3d154253b4ea:
Merge remote-tracking branch 'remotes/dgibson/tags/ppc-for-3.1-20180925' into
staging (2018-09-25 13:30:45 +0100)
are available in the Git repository at:
https://git.xanclic.moe/XanClic/qemu.git tags/
From: John Snow
Use the component callbacks; prepare, abort, and clean.
NB: prepare is only called when the job has not yet failed;
and abort can be called after prepare.
complete -> prepare -> abort -> clean
complete -> abort -> clean
During refactor, a potential problem with bdrv_drop_interm
From: John Snow
Add support for taking and passing forward job creation flags.
Signed-off-by: John Snow
Reviewed-by: Max Reitz
Reviewed-by: Jeff Cody
Message-id: 20180906130225.5118-2-js...@redhat.com
Signed-off-by: Max Reitz
---
include/block/block_int.h | 5 -
block/commit.c
From: John Snow
In cases where we abort the block/mirror job, there's no point in
installing the new backing chain before we finish aborting.
Signed-off-by: John Snow
Message-id: 20180906130225.5118-6-js...@redhat.com
Reviewed-by: Jeff Cody
Reviewed-by: Max Reitz
Signed-off-by: Max Reitz
---
On 9/25/18 12:05 AM, Fam Zheng wrote:
> Image locking errors happening at device initialization time don't say
> which file cannot be locked, for instance,
> -device scsi-disk,drive=drive-1: Failed to get shared "write" lock
> Is another process using the image?
> could refer to either the ov
On Thu 13 Sep 2018 08:37:05 PM CEST, Max Reitz wrote:
> First, split .003 into the part we want to commit and the part we
> don't want to commit. This is a bit tricky without qemu-img dd @seek
> (or a corresponding convert parameter), so we'll have to make do with
> backing=null so we don't copy a
On 25 September 2018 at 04:54, Jeff Cody wrote:
> The following changes since commit 741e1a618b126e664f7b723e6fe1b7ace511caf7:
>
> Merge remote-tracking branch
> 'remotes/stefanberger/tags/pull-tpm-2018-09-07-1' into staging (2018-09-24
> 18:12:54 +0100)
>
> are available in the Git repository
On Tue 25 Sep 2018 12:53:53 AM CEST, Leonid Bloch wrote:
> Now, the L2 cache assignment is aware of the virtual size of the
> image, and will cover the entire image, unless the cache size needed
> for that is larger than a certain maximum. This maximum is set to 1 MB
> by default (enough to cover a
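(For scale, assuming qcow2's default 64 KiB clusters, where each 8-byte L2
entry maps one cluster: fully covering an image needs
virtual_size * 8 / cluster_size bytes of L2 cache, i.e. virtual_size / 8192,
so the 1 MB default maximum fully covers images up to 8 GB.)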
On Tue 25 Sep 2018 12:53:49 AM CEST, Leonid Bloch wrote:
> Signed-off-by: Leonid Bloch
Reviewed-by: Alberto Garcia
Berto
On Tue, 09/25 09:37, Markus Armbruster wrote:
> Do we want to have a dedicated VHDX driver submaintainer again? Fam,
> you're maintaining VMDK, could you cover VHDX as well?
I don't know a lot about VHDX internals. Considering my capacity at the moment I'd
rather not take this one.
Fam
Fam Zheng writes:
> On Tue, 09/25 07:00, Markus Armbruster wrote:
>> Jeff Cody writes:
>>
>> > I'll not be involved in day-to-day qemu development. Remove
>> > myself as maintainer from the remainder of the network block drivers
>> > (and vhdx), and revert them to the general block layer maint
On Tue, Sep 25, 2018 at 01:32:04PM +0800, Fam Zheng wrote:
> On Tue, 09/25 07:00, Markus Armbruster wrote:
> > Jeff Cody writes:
> >
> > > I'll not be involved in day-to-day qemu development. Remove
> > > myself as maintainer from the remainder of the network block drivers
> > > (and vhdx), and