On 10/8/19 8:38 AM, Daniel P. Berrangé wrote:
On Tue, Oct 08, 2019 at 08:28:16AM -0500, Eric Blake wrote:
On 10/8/19 4:40 AM, Vladimir Sementsov-Ogievskiy wrote:
08.10.2019 12:24, Daniel P. Berrangé wrote:
On Mon, Oct 07, 2019 at 02:48:40PM -0500, Eric Blake wrote:
One benefit of --pid-file
On 11/15/19 11:08 AM, Vladimir Sementsov-Ogievskiy wrote:
14.11.2019 5:46, Eric Blake wrote:
Qemu as server currently won't accept export names larger than 256
bytes, nor create dirty bitmap names longer than 1023 bytes, so most
uses of qemu as client or server have no reason to get anywhere near
the NBD spec maximum of a 4k limit per string.
On 11/15/19 8:14 AM, Vladimir Sementsov-Ogievskiy wrote:
This brings async request handling and block-status driven chunk sizes
to backup out of the box, which improves backup performance.
Signed-off-by: Vladimir Sementsov-Ogievskiy
---
+++ b/qapi/block-core.json
@@ -1455,6 +1455,12 @@
#
15.11.2019 20:30, no-re...@patchew.org wrote:
> Patchew URL:
> https://patchew.org/QEMU/20191115141444.24155-1-vsement...@virtuozzo.com/
>
>
>
> Hi,
>
> This series seems to have some coding style problems. See output below for
> more information:
>
> Subject: [RFC 00/24] backup performance:
Patchew URL:
https://patchew.org/QEMU/20191115141444.24155-1-vsement...@virtuozzo.com/
Hi,
This series seems to have some coding style problems. See output below for
more information:
Subject: [RFC 00/24] backup performance: block_status + async
Type: series
Message-id:
15.11.2019 19:33, Eric Blake wrote:
> On 11/15/19 9:47 AM, Vladimir Sementsov-Ogievskiy wrote:
>> 15.11.2019 18:03, Vladimir Sementsov-Ogievskiy wrote:
>>> 14.11.2019 5:46, Eric Blake wrote:
We document that for qcow2 persistent bitmaps, the name cannot exceed
1023 bytes. It is inconsistent if transient bitmaps do not have to
abide by the same limit.
14.11.2019 5:46, Eric Blake wrote:
> Qemu as server currently won't accept export names larger than 256
> bytes, nor create dirty bitmap names longer than 1023 bytes, so most
> uses of qemu as client or server have no reason to get anywhere near
> the NBD spec maximum of a 4k limit per string.
>
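A rough sketch of the bound being discussed (the constant names below are
illustrative, not necessarily the ones QEMU uses):

    #include <stdbool.h>
    #include <string.h>

    /* Limits from the discussion: QEMU accepts export names up to 256
     * bytes, while the NBD spec allows strings up to 4k. */
    #define QEMU_NBD_NAME_MAX   256
    #define NBD_SPEC_STRING_MAX 4096

    /* Reject an export name exceeding the server's limit; well below
     * the spec maximum, but nothing practical needs more. */
    static bool nbd_export_name_ok(const char *name)
    {
        return strlen(name) <= QEMU_NBD_NAME_MAX;
    }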
On 11/15/19 9:47 AM, Vladimir Sementsov-Ogievskiy wrote:
15.11.2019 18:03, Vladimir Sementsov-Ogievskiy wrote:
14.11.2019 5:46, Eric Blake wrote:
We document that for qcow2 persistent bitmaps, the name cannot exceed
1023 bytes. It is inconsistent if transient bitmaps do not have to
abide by the same limit, and it is unlikely that any existing client
even cares about using bitmap names this long.
15.11.2019 18:03, Vladimir Sementsov-Ogievskiy wrote:
> 14.11.2019 5:46, Eric Blake wrote:
>> We document that for qcow2 persistent bitmaps, the name cannot exceed
>> 1023 bytes. It is inconsistent if transient bitmaps do not have to
>> abide by the same limit, and it is unlikely that any existing client
>> even cares about using bitmap names this long.
On Thu, 2019-11-14 at 07:33 -0600, Eric Blake wrote:
> On 11/14/19 4:04 AM, Maxim Levitsky wrote:
> > On Wed, 2019-11-13 at 20:46 -0600, Eric Blake wrote:
> > > As long as we limit NBD names to 256 bytes (the bare minimum permitted
> > > by the standard), stack-allocation works for parsing a name
> > > received from the client.
On Thu, Nov 14, 2019 at 08:25:31PM +0300, Alexander Popov wrote:
The commit a718978ed58a from July 2015 introduced the assertion which
implies that the size of successful DMA transfers handled in ide_dma_cb()
should be a multiple of 512 (the size of a sector). But guest systems can
initiate DMA
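A simplified sketch of the invariant the assertion enforces (a model of
the idea, not the actual ide_dma_cb() code):

    #include <assert.h>
    #include <stdint.h>

    enum { SECTOR_SIZE = 512 };

    /* The assertion presumes every completed DMA transfer covers whole
     * sectors; a guest that programs a transfer which is not a multiple
     * of 512 bytes would trip it. */
    static void dma_transfer_done(uint64_t transferred)
    {
        assert(transferred % SECTOR_SIZE == 0);
        /* ... advance by transferred / SECTOR_SIZE sectors ... */
    }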
14.11.2019 5:46, Eric Blake wrote:
> We document that for qcow2 persistent bitmaps, the name cannot exceed
> 1023 bytes. It is inconsistent if transient bitmaps do not have to
> abide by the same limit, and it is unlikely that any existing client
> even cares about using bitmap names this long.
14.11.2019 5:46, Eric Blake wrote:
> As long as we limit NBD names to 256 bytes (the bare minimum permitted
> by the standard), stack-allocation works for parsing a name received
> from the client. But as mentioned in a comment, we eventually want to
> permit up to the 4k maximum of the NBD spec.
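A simplified sketch of the trade-off (hypothetical helper, not the actual
server code): with a 256-byte cap, a stack buffer per request is harmless,
but a 4k cap would argue for heap allocation.

    #include <stddef.h>
    #include <stdint.h>

    #define NAME_MAX_LEN 256    /* current cap: stack allocation is cheap */

    /* Read a client-supplied name of 'len' bytes via 'read_fn'.  If the
     * cap grew to the spec's 4k, a heap buffer would be preferable to a
     * 4k stack frame on every request. */
    static int parse_client_name(int (*read_fn)(void *buf, size_t len),
                                 uint32_t len)
    {
        char name[NAME_MAX_LEN + 1];

        if (len > NAME_MAX_LEN) {
            return -1;          /* too long: reject early */
        }
        if (read_fn(name, len) != 0) {
            return -1;          /* short read / I/O error */
        }
        name[len] = '\0';
        /* ... look up the export by name ... */
        return 0;
    }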
This brings async request handling and block-status driven chunk sizes
to backup out of the box, which improves backup performance.
Signed-off-by: Vladimir Sementsov-Ogievskiy
---
qapi/block-core.json | 9 +-
include/block/block_int.h | 7 +
block/backup.c | 159
Benchmark test for the series. This patch is an RFC; it would be strange
to commit it as is. On the other hand, I feel that we should commit
some example to show the usage of simplebench and bench_block_job.
Maybe I should add a simple example to compare backup and mirror..
Any ideas?
Anyway, this
Add a simple benchmark table creator.
Signed-off-by: Vladimir Sementsov-Ogievskiy
---
python/simplebench.py | 122 ++
1 file changed, 122 insertions(+)
create mode 100644 python/simplebench.py
diff --git a/python/simplebench.py b/python/simplebench.py
new
Currently, a block_copy operation locks the whole requested region. But
there is no reason to lock clusters that are already copied; it only
disturbs other parallel block_copy requests for no reason.
Let's instead do the following:
Lock only the sub-region we are going to operate on. Then,
run block_copy iterations in parallel in aio tasks.
Signed-off-by: Vladimir Sementsov-Ogievskiy
---
block/block-copy.c | 102 +++--
1 file changed, 90 insertions(+), 12 deletions(-)
diff --git a/block/block-copy.c b/block/block-copy.c
index
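A toy model of the locking change described above (names are illustrative,
not the real block/block-copy.c code): each task marks only the sub-region
it is about to copy, so concurrent requests wait only when they genuinely
overlap.

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct Task {
        int64_t offset;
        int64_t bytes;
        struct Task *next;
    } Task;

    static Task *inflight;      /* list of locked sub-regions */

    /* A new request must wait only if it overlaps an in-flight
     * sub-region, not the whole originally requested range. */
    static bool region_is_locked(int64_t offset, int64_t bytes)
    {
        for (Task *t = inflight; t; t = t->next) {
            if (offset < t->offset + t->bytes &&
                t->offset < offset + bytes) {
                return true;
            }
        }
        return false;
    }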
Hide the structure definitions and add an explicit API instead, to keep
an eye on the scope of the shared fields.
Signed-off-by: Vladimir Sementsov-Ogievskiy
---
include/block/block-copy.h | 57 +++--
block/backup-top.c | 6 ++--
block/backup.c | 27
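The pattern in miniature (field names invented for illustration): the
header exports only a forward declaration plus accessors, so the set of
fields shared with callers stays visible in one place.

    #include <stdint.h>

    /* block-copy.h side: callers see only an opaque type and an API. */
    typedef struct BlockCopyState BlockCopyState;
    int64_t block_copy_bytes_copied(BlockCopyState *s);  /* hypothetical */

    /* block-copy.c side: the definition stays private to the file. */
    struct BlockCopyState {
        int64_t total_bytes;
        int64_t copied_bytes;
    };

    int64_t block_copy_bytes_copied(BlockCopyState *s)
    {
        return s->copied_bytes;
    }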
We are going to use the aio-task-pool API, so tasks will be handled in
parallel. We therefore need a separately allocated task for each
iteration. Introduce this logic now.
Signed-off-by: Vladimir Sementsov-Ogievskiy
---
block/block-copy.c | 18 +++---
1 file changed, 11 insertions(+), 7
Use bdrv_block_status_above to choose an effective chunk size and to
handle zeroes effectively.
This replaces checking merely whether a region is allocated or not, and
drops the old code path for it. Assistance by the backup job is dropped
too, as caching block-status information is more difficult than just caching
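An outline of the idea (simplified; error handling and the real loop
structure omitted, and copy_chunk/write_zeroes are hypothetical helpers):
one bdrv_block_status_above() answer both sizes the chunk and tells us
whether it can be written as zeroes instead of copied.

    ret = bdrv_block_status_above(source, NULL, offset, bytes,
                                  &status_bytes, NULL, NULL);
    chunk = MIN(bytes, status_bytes);        /* status-driven chunk size */
    if (ret >= 0 && (ret & BDRV_BLOCK_ZERO)) {
        /* The whole chunk reads as zeroes: no data copy needed. */
        write_zeroes(target, offset, chunk);
    } else {
        copy_chunk(source, target, offset, chunk);
    }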
We are going to use the aio-task-pool API, so we'll need a state pointer
in BlockCopyTask anyway. Add it now and use it where possible.
Signed-off-by: Vladimir Sementsov-Ogievskiy
---
block/block-copy.c | 24 +---
1 file changed, 13 insertions(+), 11 deletions(-)
diff --git
We are going to use an async block-copy call in backup, so we'll need to
pass the backup speed setting through to the block-copy call.
Signed-off-by: Vladimir Sementsov-Ogievskiy
---
include/block/blockjob_int.h | 2 ++
blockjob.c | 6 ++
2 files changed, 8 insertions(+)
diff --git
We have a lot of "chunk_end - start" calculations; let's switch to a
bytes/cur_bytes scheme instead.
Signed-off-by: Vladimir Sementsov-Ogievskiy
---
include/block/block-copy.h | 4 +--
block/block-copy.c | 68 --
2 files changed, 37 insertions(+), 35
Hi all!
This series does the following things:
1. bring block_status to block-copy, for efficient chunk sizes and
handling ZERO clusters. (mirror does it)
2. bring aio-task-pool to block-copy, for parallel copying loop
iteration. (mirror does it its own way)
4. add speed limit and cancelling
The offset/bytes pair is the more usual naming in the block layer; let's use it.
Signed-off-by: Vladimir Sementsov-Ogievskiy
---
include/block/block-copy.h | 2 +-
block/block-copy.c | 80 +++---
2 files changed, 41 insertions(+), 41 deletions(-)
diff --git
Comment "Called only on full-dirty region" without corresponding
assertion is a very unsafe thing. Adding assertion means call
bdrv_dirty_bitmap_next_zero twice. Instead, let's move
bdrv_dirty_bitmap_next_zero call to block_copy_task_create. It also
allows to drop cur_bytes variable which partly
We are going to use the aio-task-pool API and extend the in-flight
request structure to be a successor of AioTask, so rename things
appropriately.
Signed-off-by: Vladimir Sementsov-Ogievskiy
---
block/block-copy.c | 82 ++
1 file changed, 40 insertions(+), 42
In block_copy_do_copy we fall back to read+write if copy_range fails.
In this case copy_size is larger than what is defined for buffered I/O,
and there is a corresponding commit. Still, backup copies data cluster
by cluster, and most requests are limited to one cluster anyway, so the
only source of this
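A sketch of the fallback path being discussed (simplified, with
QEMU-style calls; not the exact block_copy_do_copy code):

    if (use_copy_range) {
        ret = bdrv_co_copy_range(source, offset, target, offset,
                                 bytes, 0, 0);
        if (ret < 0) {
            use_copy_range = false;  /* disable offloading from now on */
        }
    }
    if (!use_copy_range) {
        /* Buffered fallback: read into a bounce buffer, then write. */
        void *bounce = qemu_blockalign(bs, bytes);
        ret = bdrv_co_pread(source, offset, bytes, bounce, 0);
        if (ret >= 0) {
            ret = bdrv_co_pwrite(target, offset, bytes, bounce, 0);
        }
        qemu_vfree(bounce);
    }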
Refactor the common path to take a BlockCopyCallState pointer as a
parameter, to prepare for use in asynchronous block-copy (at the least,
we'll need to run block-copy in a coroutine, passing all the parameters
as one pointer).
Signed-off-by: Vladimir Sementsov-Ogievskiy
---
block/block-copy.c | 50
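The shape of the refactoring (the field set here is invented for
illustration, and block_copy_common is a hypothetical name): bundle
everything a call needs into one struct, so a coroutine entry point can
later receive it as a single opaque pointer.

    typedef struct BlockCopyCallState {
        BlockCopyState *s;
        int64_t offset;
        int64_t bytes;
        bool error_is_read;
        int ret;
    } BlockCopyCallState;

    /* Later, for async use (sketch): */
    static void coroutine_fn block_copy_co_entry(void *opaque)
    {
        BlockCopyCallState *call_state = opaque;
        call_state->ret = block_copy_common(call_state);
    }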
Simple movement without any change. It's needed for the following
patch, as this function will need to use some stuff which is currently
defined below it.
Signed-off-by: Vladimir Sementsov-Ogievskiy
---
block/block-copy.c | 64 +++---
1 file changed, 32
They will be used for backup.
Signed-off-by: Vladimir Sementsov-Ogievskiy
---
include/block/block-copy.h | 5 +
block/block-copy.c | 9 +++--
2 files changed, 12 insertions(+), 2 deletions(-)
diff --git a/include/block/block-copy.h b/include/block/block-copy.h
index
Split block_copy_find_inflight_req so that it can be used separately.
Signed-off-by: Vladimir Sementsov-Ogievskiy
---
block/block-copy.c | 31 +++
1 file changed, 19 insertions(+), 12 deletions(-)
diff --git a/block/block-copy.c b/block/block-copy.c
index 74295d93d5..94e7e855ef
We are going to use one async block-copy operation directly for the
backup job, so we need a rate limiter.
We want to maintain the current backup behavior: only background copying
is limited, and copy-before-write operations only participate in the
limit calculation. Therefore we need one rate limiter for
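A toy model of the described policy (illustrative, not QEMU's RateLimit
code): the background path waits for budget, while the copy-before-write
path only consumes budget and is never delayed.

    #include <stdint.h>

    typedef struct Limiter {
        uint64_t budget;     /* bytes allowed in the current time slice */
    } Limiter;

    /* Copy-before-write path: account the bytes, never sleep. */
    static void limiter_account(Limiter *l, uint64_t bytes)
    {
        l->budget = bytes > l->budget ? 0 : l->budget - bytes;
    }

    /* Background-copy path: wait until budget is available. */
    static void limiter_wait_and_account(Limiter *l, uint64_t bytes)
    {
        while (l->budget == 0) {
            /* yield until the next slice refills the budget */
        }
        limiter_account(l, bytes);
    }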
If the main job coroutine calls job_yield() (while some background
process is in progress), we should give it a chance to call
job_pause_point(). This will be used in backup when it is moved to async
block-copy.
Signed-off-by: Vladimir Sementsov-Ogievskiy
---
job.c | 1 +
1 file changed, 1 insertion(+)
Add a function to cancel a running async block-copy call. It will be
used in backup.
Signed-off-by: Vladimir Sementsov-Ogievskiy
---
include/block/block-copy.h | 7 +++
block/block-copy.c | 20 ++--
2 files changed, 25 insertions(+), 2 deletions(-)
diff --git
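A sketch of what cancelling an async call amounts to (names are
illustrative): flag the call as cancelled so the copying loop stops
creating new tasks, then wait for in-flight tasks to drain.

    #include <stdbool.h>

    typedef struct Call {
        bool cancelled;         /* checked by the copying loop */
        bool finished;          /* set when the call completes */
    } Call;

    static void call_cancel(Call *call)
    {
        call->cancelled = true;
        while (!call->finished) {
            /* poll / yield until in-flight tasks finish */
        }
    }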
Add block-job benchmarking helper functions.
Signed-off-by: Vladimir Sementsov-Ogievskiy
---
python/qemu/bench_block_job.py | 114 +
1 file changed, 114 insertions(+)
create mode 100755 python/qemu/bench_block_job.py
diff --git a/python/qemu/bench_block_job.py
We'll need an async block-copy invocation to use directly in backup.
Signed-off-by: Vladimir Sementsov-Ogievskiy
---
include/block/block-copy.h | 13 +++
block/block-copy.c | 48 +++---
2 files changed, 58 insertions(+), 3 deletions(-)
diff --git
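A sketch of the kind of entry point this implies (the signature is
illustrative; BlockCopyState/BlockCopyCallState are the opaque types from
the earlier patches): start the copy, return a handle immediately, and
report the result through a completion callback.

    typedef void BlockCopyCallback(int ret, bool error_is_read,
                                   void *opaque);

    /* Returns at once; 'cb' fires when the copy finishes or fails. */
    BlockCopyCallState *block_copy_async(BlockCopyState *s,
                                         int64_t offset, int64_t bytes,
                                         BlockCopyCallback *cb,
                                         void *opaque);

    /* Caller (e.g. the backup job), sketched:
     * call = block_copy_async(s, 0, total, backup_complete_cb, job); */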
On Tue 05 Nov 2019 01:47:58 PM CET, Max Reitz wrote:
>> @@ -1347,6 +1347,12 @@ static int coroutine_fn
>> qcow2_do_open(BlockDriverState *bs, QDict *options,
>> s->subcluster_size = s->cluster_size / s->subclusters_per_cluster;
>> s->subcluster_bits = ctz32(s->subcluster_size);
>>
>>
On 15.11.19 11:55, Vladimir Sementsov-Ogievskiy wrote:
> 15.11.2019 13:52, Vladimir Sementsov-Ogievskiy wrote:
>> 15.11.2019 12:32, Max Reitz wrote:
>>> On 14.11.19 12:59, Vladimir Sementsov-Ogievskiy wrote:
14.11.2019 14:27, Max Reitz wrote:
> On 13.11.19 19:43, Andrey Shinkevich wrote:
On 15.11.19 11:12, Andrey Shinkevich wrote:
>
>
> On 15/11/2019 12:32, Max Reitz wrote:
>> On 14.11.19 12:59, Vladimir Sementsov-Ogievskiy wrote:
>>> 14.11.2019 14:27, Max Reitz wrote:
On 13.11.19 19:43, Andrey Shinkevich wrote:
> Allow writing all the data compressed through the filter
Hi Klaus,
On Wed, 13 Nov 2019 at 06:12, Klaus Birkelund wrote:
>
> On Tue, Nov 12, 2019 at 03:04:38PM +, Beata Michalska wrote:
> > Hi Klaus
> >
>
> Hi Beata,
>
> Thank you very much for your thorough reviews! I'll start going through
> them one by one :) You might have seen that I've posted
15.11.2019 12:32, Max Reitz wrote:
> On 14.11.19 12:59, Vladimir Sementsov-Ogievskiy wrote:
>> 14.11.2019 14:27, Max Reitz wrote:
>>> On 13.11.19 19:43, Andrey Shinkevich wrote:
Allow writing all the data compressed through the filter driver.
The written data will be aligned to the cluster size.
On 15/11/2019 12:32, Max Reitz wrote:
> On 14.11.19 12:59, Vladimir Sementsov-Ogievskiy wrote:
>> 14.11.2019 14:27, Max Reitz wrote:
>>> On 13.11.19 19:43, Andrey Shinkevich wrote:
Allow writing all the data compressed through the filter driver.
The written data will be aligned to the cluster size.
On 14.11.19 12:59, Vladimir Sementsov-Ogievskiy wrote:
> 14.11.2019 14:27, Max Reitz wrote:
>> On 13.11.19 19:43, Andrey Shinkevich wrote:
>>> Allow writing all the data compressed through the filter driver.
>>> The written data will be aligned to the cluster size.
>>> Based on the QEMU current
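A sketch of the filter's write path (simplified; the real patch has more
to it): forward the write to the child node with the compressed-write
flag set.

    /* Assumes QEMU's block driver interface. */
    static int coroutine_fn
    compress_co_pwritev(BlockDriverState *bs, uint64_t offset,
                        uint64_t bytes, QEMUIOVector *qiov, int flags)
    {
        return bdrv_co_pwritev(bs->file, offset, bytes, qiov,
                               flags | BDRV_REQ_WRITE_COMPRESSED);
    }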
On 14.11.19 18:39, janine.schnei...@fau.de wrote:
> Hello,
>
> thank you for the quick feedback. I am sorry that I expressed myself so
> unclearly. I don't want to use qemu but want to know how qemu converts vmdk
> to raw. So how exactly is the conversion programmed? How are the sparse
> grains