26.02.2020 16:13, Max Reitz wrote:
On 05.02.20 12:20, Vladimir Sementsov-Ogievskiy wrote:
Hi!
The main feature here is improvement of _next_dirty_area API, which I'm
going to use then for backup / block-copy.
Somehow, I thought that it was merged, but it seems I even forgot to send
v4.
The
27.02.2020 16:21, Eric Blake wrote:
On 2/27/20 6:46 AM, Vladimir Sementsov-Ogievskiy wrote:
26.02.2020 18:06, Eric Blake wrote:
On 2/5/20 5:20 AM, Vladimir Sementsov-Ogievskiy wrote:
Introduce the NBDExtentArray class, to handle extent list creation in a more
controlled way and with fewer OUT
Hide structure definitions and add explicit API instead, to keep an
eye on the scope of the shared fields.
Signed-off-by: Vladimir Sementsov-Ogievskiy
---
include/block/block-copy.h | 52 +++
block/backup-top.c | 6 ++--
block/backup.c | 25
Assume we have two regions, A and B; region B is in-flight now, while
region A is not yet touched, but it is unallocated and should be
skipped.
Correspondingly, as progress we have
total = A + B
current = 0
If we reset unallocated region A and call progress_reset_callback,
it will calculate 0
We have a lot of "chunk_end - start" calculations; let's switch to a
bytes/cur_bytes scheme instead.
While we are here, improve the check on block_copy_do_copy parameters so it
does not overflow when calculating nbytes, and use int64_t for bytes in
block_copy for consistency.
Signed-off-by: Vladimir
Currently, the block_copy operation locks the whole requested region. But
there is no reason to lock clusters that are already copied; it only
disturbs other parallel block_copy requests.
Let's instead do the following:
Lock only the sub-region we are going to operate on. Then,
We need it separately to pass to the block-copy object in the next
commit.
Cc: qemu-sta...@nongnu.org
Signed-off-by: Vladimir Sementsov-Ogievskiy
Reviewed-by: Andrey Shinkevich
---
include/qemu/job.h| 11 ++-
include/qemu/progress_meter.h | 58
Use bdrv_block_status_above to choose an effective chunk size and to handle
zeroes effectively.
This substitutes the check for just being allocated or not, and drops the
old code path for it. Assistance by the backup job is dropped too, as
caching block-status information is more difficult than just caching
In block_copy_do_copy we fall back to read+write if copy_range fails.
In this case copy_size is larger than defined for buffered IO, and
there is a corresponding commit. Still, backup copies data cluster by
cluster, and most requests are limited to one cluster anyway, so the
only source of this
Split find_conflicting_inflight_req to be used separately.
Signed-off-by: Vladimir Sementsov-Ogievskiy
Reviewed-by: Andrey Shinkevich
---
block/block-copy.c | 31 +++
1 file changed, 19 insertions(+), 12 deletions(-)
diff --git a/block/block-copy.c
v3:
01: new
03: fix block_copy_do_copy
04: add comment, rebase on 01
05: s/block_copy_find_inflight_req/find_conflicting_inflight_req/
06: add overflow check
use int64_t for block_copy bytes parameter
fix preexisting typo in modified comment
07: update forgotten block_copy prototype
The offset/bytes pair is the more usual naming in the block layer; let's use it.
Signed-off-by: Vladimir Sementsov-Ogievskiy
Reviewed-by: Andrey Shinkevich
Reviewed-by: Max Reitz
---
include/block/block-copy.h | 4 +-
block/block-copy.c | 84 +++---
2 files
On 06/03/2020 00:21, BALATON Zoltan wrote:
> On Fri, 6 Mar 2020, BALATON Zoltan wrote:
>> On Thu, 5 Mar 2020, Mark Cave-Ayland wrote:
>>> On 04/03/2020 22:33, BALATON Zoltan wrote:
>>> another possibility: PCI configuration space register 0x3d (Interrupt pin)
>>> is documented as having
Kevin Wolf writes:
> Am 05.03.2020 um 16:30 hat Markus Armbruster geschrieben:
>> Kevin Wolf writes:
>>
>> > Am 22.01.2020 um 07:32 hat Markus Armbruster geschrieben:
>> >> Kevin Wolf writes:
>> >>
>> >> > This patch adds a new 'coroutine' flag to QMP command definitions that
>> >> > tells
On 05/03/2020 23:35, BALATON Zoltan wrote:
>> I just had a quick look at the PCI specification and found this interesting
>> paragraph in the section about "Interrupt Line":
>>
>> "The Interrupt Line register is an eight-bit register used to communicate
>> interrupt line routing
If we want to add some info to errp (by error_prepend() or
error_append_hint()), we must use the ERRP_AUTO_PROPAGATE macro.
Otherwise, this info will not be added when errp == &error_fatal
(the program will exit prior to the error_append_hint() or
error_prepend() call). Fix such cases.
If we want to
The file with errp-cleaning APIs was dropped for two reasons:
1. I'm tired after a 3-day war with coccinelle, and don't want to add more
patches here.
2. Markus noted that we forgot two more functions which need such wrappers
and corresponding conversion, so it seems better to handle all these
Here the ERRP_AUTO_PROPAGATE macro is introduced, to be used at the start
of functions with an errp OUT parameter.
It has three goals:
1. Fix the issue with error_fatal and error_prepend/error_append_hint: the
user can't see this additional information, because exit() happens in
error_setg earlier than
The script adds ERRP_AUTO_PROPAGATE macro invocations where appropriate and
makes the corresponding changes in the code (see details in
include/qapi/error.h)
Usage example:
spatch --sp-file scripts/coccinelle/auto-propagated-errp.cocci \
--macro-file scripts/cocci-macro-file.h --in-place --no-show-diff \
On Fri, 6 Mar 2020, BALATON Zoltan wrote:
On Thu, 5 Mar 2020, Mark Cave-Ayland wrote:
On 04/03/2020 22:33, BALATON Zoltan wrote:
another possibility: PCI configuration space register 0x3d (Interrupt pin)
is documented as having value 0 == Legacy IRQ routing, which should be the
initial value
On Thu, 5 Mar 2020, Mark Cave-Ayland wrote:
On 04/03/2020 22:33, BALATON Zoltan wrote:
AFAICT this then only leaves the question: why does the firmware set
PCI_INTERRUPT_LINE to 9, which is presumably why you are seeing problems running
MorphOS under QEMU.
Linux does try to handle both true
On 2/27/20 7:08 AM, Eric Blake wrote:
The change in libvirt to reject images without explicit backing format
has pointed out that a number of tools have been far too reliant on
probing in the past. It's time to set a better example in our own
iotests of properly setting this parameter.
iotest
On Wed 04 Mar 2020 02:35:35 PM CET, Denis Plotnikov wrote:
> +##
> +# @Qcow2CompressionType:
I realized that we have a bit of a mix in the way we write this type of
identifier, between QCow2FooBar (capital C) and Qcow2FooBar ... what's
the recommended one?
> @@ -146,6 +146,12 @@ typedef struct
On 04/03/2020 22:33, BALATON Zoltan wrote:
>> AFAICT this then only leaves the question: why does the firmware set
>> PCI_INTERRUPT_LINE to 9, which is presumably why you are seeing problems
>> running MorphOS under QEMU.
>
> Linux does try to handle both true native
On 3/5/20 6:55 AM, Kevin Wolf wrote:
> Am 05.03.2020 um 00:14 hat John Snow geschrieben:
>>
>>
>> On 3/4/20 4:58 PM, Philippe Mathieu-Daudé wrote:
>
> Adding back the context:
>
>> -sys.stderr.write('qemu-img received signal %i: %s\n' % (-exitcode, ' '.join(qemu_img_args +
On 05/03/20 18:08, Stefan Hajnoczi wrote:
> +/*
> + * List of handlers participating in userspace polling. Accessed almost
> + * exclusively from aio_poll() and therefore not an RCU list. Protected by
> + * ctx->list_lock.
> + */
> +AioHandlerList poll_aio_handlers;
>
One iteration of polling is always performed even when polling is
disabled. This is done because:
1. Userspace polling is cheaper than making a syscall. We might get
lucky.
2. We must poll once more after polling has stopped in case an event
occurred while stopping polling.
However, there
A guest with 100 virtio-blk-pci,num-queues=32 devices only reaches 10k IOPS
while a guest with a single device reaches 105k IOPS
(rw=randread,bs=4k,iodepth=1,ioengine=libaio).
The bottleneck is that aio_poll() userspace polling iterates over all
AioHandlers to invoke their ->io_poll() callbacks.
On 05/03/20 18:08, Stefan Hajnoczi wrote:
>
> +/*
> + * Optimization: ->io_poll() handlers often contain RCU read critical
> + * sections and we therefore see many rcu_read_lock() -> rcu_read_unlock()
> + * -> rcu_read_lock() -> ... sequences with expensive memory
> + *
Unlike ppoll(2) and epoll(7), Linux io_uring completions can be polled
from userspace. Previously userspace polling was only allowed when all
AioHandlers had an ->io_poll() callback. This prevented starvation of
fds by userspace pollable handlers.
Add the FDMonOps->need_wait() callback that
The recent Linux io_uring API has several advantages over ppoll(2) and
epoll(7). Details are given in the source code.
Add an io_uring implementation and make it the default on Linux.
Performance is the same as with epoll(7) but later patches add
optimizations that take advantage of io_uring.
Now that run_poll_handlers_once() is only called by run_poll_handlers()
we can improve the CPU time profile by moving the expensive
RCU_READ_LOCK() out of the polling loop.
This reduces run_poll_handlers() from 40% CPU to 10% CPU in perf's
sampling profiler output.
Signed-off-by: Stefan
The AioHandler *node, bool is_new arguments are more complicated to
think about than simply being given AioHandler *old_node, AioHandler
*new_node.
Furthermore, the new Linux io_uring file descriptor monitoring mechanism
added by the new patch requires access to both the old and the new
nodes.
The ppoll(2) and epoll(7) file descriptor monitoring implementations are
mixed with the core util/aio-posix.c code. Before adding another
implementation for Linux io_uring, extract out the existing
ones so there is a clear interface and the core code is simpler.
The new interface is
When there are many poll handlers it's likely that some of them are idle
most of the time. Remove handlers that haven't had activity recently so
that the polling loop scales better for guests with a large number of
devices.
This feature only takes effect for the Linux io_uring fd monitoring
Am 05.03.2020 um 16:30 hat Markus Armbruster geschrieben:
> Kevin Wolf writes:
>
> > Am 22.01.2020 um 07:32 hat Markus Armbruster geschrieben:
> >> Kevin Wolf writes:
> >>
> >> > This patch adds a new 'coroutine' flag to QMP command definitions that
> >> > tells the QMP dispatcher that the
On Thu 27 Feb 2020 07:18:04 PM CET, Kevin Wolf wrote:
> /*
> - * TODO: before removing the x- prefix from x-blockdev-reopen we
> - * should move the new backing file into the right AioContext
> - * instead of returning an error.
> + * Check AioContext compatibility so that the
Kevin Wolf writes:
> Am 22.01.2020 um 07:32 hat Markus Armbruster geschrieben:
>> Kevin Wolf writes:
>>
>> > This patch adds a new 'coroutine' flag to QMP command definitions that
>> > tells the QMP dispatcher that the command handler is safe to be run in a
>> > coroutine.
>>
>> I'm afraid I
On Thu, Mar 05, 2020 at 13:50:56 +0100, Kevin Wolf wrote:
> This series allows libvirt to fix a regression that its switch from
> drive-mirror to blockdev-mirror caused: It currently requires that the
> backing chain of the target image is already available when the mirror
> operation is started.
Patchew URL: https://patchew.org/QEMU/20200305125100.386-1-kw...@redhat.com/
Hi,
This series failed the docker-mingw@fedora build test. Please find the testing
commands and their output below. If you have Docker installed, you can
probably reproduce it locally.
=== TEST SCRIPT BEGIN ===
#!
Patchew URL: https://patchew.org/QEMU/20200305125100.386-1-kw...@redhat.com/
Hi,
This series failed the docker-quick@centos7 build test. Please find the testing
commands and their output below. If you have Docker installed, you can
probably reproduce it locally.
=== TEST SCRIPT BEGIN ===
The 'job-complete' QMP command should be run with qmp() rather than
qmp_log() if use_log=False is passed.
Signed-off-by: Kevin Wolf
---
tests/qemu-iotests/iotests.py | 5 -
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/tests/qemu-iotests/iotests.py
The newly tested scenario is a common live storage migration scenario:
The target node is opened without a backing file so that the active
layer is mirrored while its backing chain can be copied in the
background.
The backing chain should be attached to the mirror target node when
finalising the
blockdev-snapshot returned an error if the overlay was already in use,
which it defined as having any BlockBackend parent. This is in fact both
too strict (some parents can tolerate the change of visible data caused
by attaching a backing file) and too loose (some non-BlockBackend
parents may not
Signed-off-by: Kevin Wolf
---
include/block/block_int.h | 3 +++
block.c | 6 ++
2 files changed, 5 insertions(+), 4 deletions(-)
diff --git a/include/block/block_int.h b/include/block/block_int.h
index f422c0bff0..71164c4ee1 100644
--- a/include/block/block_int.h
+++
This series allows libvirt to fix a regression that its switch from
drive-mirror to blockdev-mirror caused: It currently requires that the
backing chain of the target image is already available when the mirror
operation is started.
In reality, the backing chain may only be copied while the
On Tue, 2020-03-03 at 11:18 +0200, Maxim Levitsky wrote:
> On Sat, 2020-02-15 at 15:51 +0100, Markus Armbruster wrote:
> > Review of this patch led to a lengthy QAPI schema design discussion.
> > Let me try to condense it into a concrete proposal.
> >
> > This is about the QAPI schema, and
Vladimir Sementsov-Ogievskiy writes:
> 04.03.2020 16:35, Denis Plotnikov wrote:
>> zstd significantly reduces cluster compression time.
>> It provides better compression performance maintaining
>> the same level of the compression ratio in comparison with
>> zlib, which, at the moment, is the
Am 05.03.2020 um 00:14 hat John Snow geschrieben:
>
>
> On 3/4/20 4:58 PM, Philippe Mathieu-Daudé wrote:
Adding back the context:
> -sys.stderr.write('qemu-img received signal %i: %s\n' % (-exitcode, ' '.join(qemu_img_args + list(args
> +sys.stderr.write('qemu-img
04.03.2020 16:35, Denis Plotnikov wrote:
zstd significantly reduces cluster compression time.
It provides better compression performance while maintaining
the same compression ratio as zlib, which, at the moment,
is the only compression method available.
The performance