Re: [Qemu-block] [Qemu-devel] [PATCH v2 0/5] block: Avoid copy-on-read assertions

2017-10-03 Thread Fam Zheng
On Tue, 10/03 21:22, Eric Blake wrote:
> On 10/03/2017 09:16 PM, no-re...@patchew.org wrote:
> > Hi,
> > 
> > This series failed automatic build test. Please find the testing commands 
> > and
> > their output below. If you have docker installed, you can probably 
> > reproduce it
> > locally.
> > 
> 
> > 195 [not run] not suitable for this image format: raw
> > 197 - output mismatch (see 197.out.bad)
> > --- /tmp/qemu-test/src/tests/qemu-iotests/197.out   2017-10-04 
> > 01:52:59.0 +
> > +++ /tmp/qemu-test/build/tests/qemu-iotests/197.out.bad 2017-10-04 
> > 02:15:52.212004491 +
> > @@ -12,13 +12,18 @@
> >  128 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
> >  read 0/0 bytes at offset 0
> >  0 bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
> > -read 2147483136/2147483136 bytes at offset 1024
> > -2 GiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
> > +
> > +(process:16284): GLib-ERROR **: gmem.c:100: failed to allocate 2147483136 
> > bytes
> 
> Okay, a test that requires a nearly-2G read in one operation is fringe,
> and I can see it choking 32-bit platforms rather easily.  How do we
> modify the test to not be so mean to memory-starved systems?  And why
> didn't patchew complain about this on v1, which had the same ~2G read?

I don't know. The whole system (a Fedora VM) is dedicated to the patchew test and no
concurrent task should be running. Maybe 2G is just in between the memory
watermarks.

Fam



Re: [Qemu-block] [Qemu-devel] [PATCH v2 0/5] block: Avoid copy-on-read assertions

2017-10-03 Thread Eric Blake
On 10/03/2017 09:16 PM, no-re...@patchew.org wrote:
> Hi,
> 
> This series failed automatic build test. Please find the testing commands and
> their output below. If you have docker installed, you can probably reproduce 
> it
> locally.
> 

> 195 [not run] not suitable for this image format: raw
> 197 - output mismatch (see 197.out.bad)
> --- /tmp/qemu-test/src/tests/qemu-iotests/197.out 2017-10-04 
> 01:52:59.0 +
> +++ /tmp/qemu-test/build/tests/qemu-iotests/197.out.bad   2017-10-04 
> 02:15:52.212004491 +
> @@ -12,13 +12,18 @@
>  128 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
>  read 0/0 bytes at offset 0
>  0 bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
> -read 2147483136/2147483136 bytes at offset 1024
> -2 GiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
> +
> +(process:16284): GLib-ERROR **: gmem.c:100: failed to allocate 2147483136 
> bytes

Okay, a test that requires a nearly-2G read in one operation is fringe,
and I can see it choking 32-bit platforms rather easily.  How do we
modify the test to not be so mean to memory-starved systems?  And why
didn't patchew complain about this on v1, which had the same ~2G read?

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.   +1-919-301-3266
Virtualization:  qemu.org | libvirt.org





[Qemu-block] [PATCH v5 22/23] block: Relax bdrv_aligned_preadv() assertion

2017-10-03 Thread Eric Blake
Now that bdrv_is_allocated accepts non-aligned inputs, we can
remove the TODO added in commit d6a644bb.

Signed-off-by: Eric Blake 
Reviewed-by: John Snow 

---
v4-v5: no change
v3: new patch [Kevin]
---
 block/io.c | 8 ++--
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/block/io.c b/block/io.c
index 8619f82eae..cf4217ec29 100644
--- a/block/io.c
+++ b/block/io.c
@@ -1103,18 +1103,14 @@ static int coroutine_fn bdrv_aligned_preadv(BdrvChild 
*child,
 }

 if (flags & BDRV_REQ_COPY_ON_READ) {
-/* TODO: Simplify further once bdrv_is_allocated no longer
- * requires sector alignment */
-int64_t start = QEMU_ALIGN_DOWN(offset, BDRV_SECTOR_SIZE);
-int64_t end = QEMU_ALIGN_UP(offset + bytes, BDRV_SECTOR_SIZE);
 int64_t pnum;

-ret = bdrv_is_allocated(bs, start, end - start, &pnum);
+ret = bdrv_is_allocated(bs, offset, bytes, &pnum);
 if (ret < 0) {
 goto out;
 }

-if (!ret || pnum != end - start) {
+if (!ret || pnum != bytes) {
 ret = bdrv_co_do_copy_on_readv(child, offset, bytes, qiov);
 goto out;
 }
-- 
2.13.6




[Qemu-block] [PATCH v5 21/23] block: Align block status requests

2017-10-03 Thread Eric Blake
Any device that has request_alignment greater than 512 should be
unable to report status at a finer granularity; it may also be
simpler for such devices to be guaranteed that the block layer
has rounded things out to the granularity boundary (the way the
block layer already rounds all other I/O out).  Besides, getting
the code correct for super-sector alignment also benefits us
because our public interface now has byte granularity,
even though none of our drivers have byte-level callbacks.

Add an assertion in blkdebug that proves that the block layer
never requests status of unaligned sections, similar to what it
does on other requests (while still keeping the generic helper
in place for when future patches add a throttle driver).  Note
that iotest 177 already covers this (it would fail if you use
just the blkdebug.c hunk without the io.c changes).  Meanwhile,
we can drop assertions in callers that no longer have to pass
in sector-aligned addresses.

There is a mid-function scope added for 'int count', for a
couple of reasons: first, an upcoming patch will add an 'if'
statement that checks whether a driver has an old- or new-style
callback, and can conveniently use the same scope for less
indentation churn at that time.  Second, since we are trying
to get rid of sector-based computations, wrapping things in
a scope makes it easier to group and see what will be deleted
in a final cleanup patch once all drivers have been converted
to the new-style callback.
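
As a standalone illustration of the rounding and clamping described above (macro
stand-ins for QEMU's helpers; this sketch is not part of the patch):

#include <inttypes.h>
#include <stdio.h>

/* Simplified stand-ins for QEMU's alignment helpers. */
#define ALIGN_DOWN(n, a) ((n) / (a) * (a))
#define ROUND_UP(n, a)   (((n) + (a) - 1) / (a) * (a))

int main(void)
{
    int64_t offset = 1000, bytes = 3000;   /* caller's unaligned request */
    uint32_t align = 4096;                 /* e.g. bs->bl.request_alignment */

    /* Widen the query to alignment boundaries... */
    int64_t aligned_offset = ALIGN_DOWN(offset, align);
    int64_t aligned_bytes = ROUND_UP(offset + bytes, align) - aligned_offset;

    /* ...query the driver for [aligned_offset, aligned_bytes) here... */
    int64_t local_pnum = aligned_bytes;    /* pretend the driver answered fully */

    /* ...then clamp the answer back to the caller's original range. */
    local_pnum -= offset - aligned_offset; /* drop the head we added */
    if (local_pnum > bytes) {
        local_pnum = bytes;                /* drop the tail we added */
    }
    printf("query [%" PRId64 ", +%" PRId64 "), pnum %" PRId64 "\n",
           aligned_offset, aligned_bytes, local_pnum);
    return 0;
}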

Signed-off-by: Eric Blake 

---
v5: rebase to earlier changes, add more comments
v4: no change
v3: tweak commit message [Fam], rebase to context conflicts, ensure
we don't exceed 32-bit limit, drop R-b
v2: new patch
---
 include/block/block_int.h |  3 ++-
 block/io.c| 68 +--
 block/blkdebug.c  | 13 -
 3 files changed, 62 insertions(+), 22 deletions(-)

diff --git a/include/block/block_int.h b/include/block/block_int.h
index 3b4158f576..41a229d933 100644
--- a/include/block/block_int.h
+++ b/include/block/block_int.h
@@ -207,7 +207,8 @@ struct BlockDriver {
  * according to the current layer, and should not set
  * BDRV_BLOCK_ALLOCATED, but may set BDRV_BLOCK_RAW.  See block.h
  * for the meaning of _DATA, _ZERO, and _OFFSET_VALID.  The block
- * layer guarantees non-NULL pnum and file.
+ * layer guarantees input aligned to request_alignment, as well as
+ * non-NULL pnum and file.
  */
 int64_t coroutine_fn (*bdrv_co_get_block_status)(BlockDriverState *bs,
 int64_t sector_num, int nb_sectors, int *pnum,
diff --git a/block/io.c b/block/io.c
index 8f0434ce4f..8619f82eae 100644
--- a/block/io.c
+++ b/block/io.c
@@ -1818,7 +1818,8 @@ static int64_t coroutine_fn 
bdrv_co_block_status(BlockDriverState *bs,
 int64_t ret, ret2;
 BlockDriverState *local_file = NULL;
 int64_t local_pnum = 0;
-int count; /* sectors */
+int64_t aligned_offset, aligned_bytes;
+uint32_t align;

 assert(pnum);
 total_size = bdrv_getlength(bs);
@@ -1851,32 +1852,58 @@ static int64_t coroutine_fn 
bdrv_co_block_status(BlockDriverState *bs,
 }

 bdrv_inc_in_flight(bs);
+
+/* Round out to request_alignment boundaries */
+/* TODO: until we have a byte-based driver callback, we also have to
+ * round out to sectors, even if that is bigger than request_alignment */
+align = MAX(bs->bl.request_alignment, BDRV_SECTOR_SIZE);
+aligned_offset = QEMU_ALIGN_DOWN(offset, align);
+aligned_bytes = ROUND_UP(offset + bytes, align) - aligned_offset;
+
+{
+int count; /* sectors */
+
+assert(QEMU_IS_ALIGNED(aligned_offset | aligned_bytes,
+   BDRV_SECTOR_SIZE));
+/*
+ * The contract allows us to return pnum smaller than bytes, even
+ * if the next query would see the same status; we truncate the
+ * request to avoid overflowing the driver's 32-bit interface.
+ */
+ret = bs->drv->bdrv_co_get_block_status(
+bs, aligned_offset >> BDRV_SECTOR_BITS,
+MIN(INT_MAX, aligned_bytes) >> BDRV_SECTOR_BITS, &count,
+&local_file);
+if (ret < 0) {
+goto out;
+}
+local_pnum = count * BDRV_SECTOR_SIZE;
+}
+
 /*
- * TODO: Rather than require aligned offsets, we could instead
- * round to the driver's request_alignment here, then touch up
- * count afterwards back to the caller's expectations.
- */
-assert(QEMU_IS_ALIGNED(offset | bytes, BDRV_SECTOR_SIZE));
-/*
- * The contract allows us to return pnum smaller than bytes, even
- * if the next query would see the same status; we truncate the
- * request to avoid overflowing the driver's 32-bit interface.
+ * The driver's result must be a multiple of request_alignment.
+ * Clamp pnum and ret to original request; requires care if align
+ * is larger than a sector.
  

[Qemu-block] [PATCH v5 14/23] qemu-img: Speed up compare on pre-allocated larger file

2017-10-03 Thread Eric Blake
Compare the following images with all-zero contents:
$ truncate --size 1M A
$ qemu-img create -f qcow2 -o preallocation=off B 1G
$ qemu-img create -f qcow2 -o preallocation=metadata C 1G

On my machine, the difference is noticeable for pre-patch speeds,
with more than an order of magnitude in difference caused by the
choice of preallocation in the qcow2 file:

$ time ./qemu-img compare -f raw -F qcow2 A B
Warning: Image size mismatch!
Images are identical.

real0m0.014s
user0m0.007s
sys 0m0.007s

$ time ./qemu-img compare -f raw -F qcow2 A C
Warning: Image size mismatch!
Images are identical.

real0m0.341s
user0m0.144s
sys 0m0.188s

Why? Because bdrv_is_allocated() returns false for image B but
true for image C, throwing away the fact that both images know
via lseek(SEEK_HOLE) that the entire image still reads as zero.
From there, qemu-img ends up calling bdrv_pread() for every byte
of the tail, instead of quickly looking for the next allocation.
The solution: use block_status instead of is_allocated, giving:

$ time ./qemu-img compare -f raw -F qcow2 A C
Warning: Image size mismatch!
Images are identical.

real0m0.014s
user0m0.011s
sys 0m0.003s

which is on par with the speeds for no pre-allocation.
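
A minimal sketch of the distinction the fix relies on (flag values here are
stand-ins, not the real BDRV_BLOCK_* bit assignments): bdrv_is_allocated()
collapses the answer to a single boolean, while the block-status flags keep
the ZERO bit, so an allocated-but-known-zero region can be skipped without
reading it.

#include <stdbool.h>
#include <stdio.h>

/* Stand-in status bits; the real ones live in include/block/block.h. */
#define BLOCK_ALLOCATED (1 << 0)
#define BLOCK_ZERO      (1 << 1)

/* Does this chunk need a byte-by-byte read to prove it is zero? */
static bool needs_read(int status)
{
    return (status & BLOCK_ALLOCATED) && !(status & BLOCK_ZERO);
}

int main(void)
{
    /* Image B (no preallocation): unallocated, reads as zero. */
    printf("B: %s\n", needs_read(0) ? "read" : "skip");
    /* Image C (preallocation=metadata): allocated, but still known zero. */
    printf("C: %s\n", needs_read(BLOCK_ALLOCATED | BLOCK_ZERO) ? "read" : "skip");
    /* Normal data: allocated and not known zero, so it must be read. */
    printf("data: %s\n", needs_read(BLOCK_ALLOCATED) ? "read" : "skip");
    return 0;
}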

Signed-off-by: Eric Blake 
Reviewed-by: John Snow 
Reviewed-by: Vladimir Sementsov-Ogievskiy 

---
v4-v5: no change
v3: new patch
---
 qemu-img.c | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/qemu-img.c b/qemu-img.c
index abd289c0b5..43e3038894 100644
--- a/qemu-img.c
+++ b/qemu-img.c
@@ -1481,11 +1481,11 @@ static int img_compare(int argc, char **argv)
 while (sector_num < progress_base) {
 int64_t count;

-ret = bdrv_is_allocated_above(blk_bs(blk_over), NULL,
+ret = bdrv_block_status_above(blk_bs(blk_over), NULL,
   sector_num * BDRV_SECTOR_SIZE,
   (progress_base - sector_num) *
   BDRV_SECTOR_SIZE,
-  &count);
+  &count, NULL);
 if (ret < 0) {
 ret = 3;
 error_report("Sector allocation test failed for %s",
@@ -1493,11 +1493,11 @@ static int img_compare(int argc, char **argv)
 goto out;

 }
-/* TODO relax this once bdrv_is_allocated_above does not enforce
+/* TODO relax this once bdrv_block_status_above does not enforce
  * sector alignment */
 assert(QEMU_IS_ALIGNED(count, BDRV_SECTOR_SIZE));
 nb_sectors = count >> BDRV_SECTOR_BITS;
-if (ret) {
+if (ret & BDRV_BLOCK_ALLOCATED && !(ret & BDRV_BLOCK_ZERO)) {
 nb_sectors = MIN(nb_sectors, IO_BUF_SIZE >> BDRV_SECTOR_BITS);
 ret = check_empty_sectors(blk_over, sector_num, nb_sectors,
   filename_over, buf1, quiet);
-- 
2.13.6




[Qemu-block] [PATCH v5 17/23] qemu-img: Change check_empty_sectors() to byte-based

2017-10-03 Thread Eric Blake
Continue on the quest to make more things byte-based instead of
sector-based.

Signed-off-by: Eric Blake 
Reviewed-by: John Snow 

---
v4-v5: no change
v3: new patch
---
 qemu-img.c | 27 +++
 1 file changed, 15 insertions(+), 12 deletions(-)

diff --git a/qemu-img.c b/qemu-img.c
index a8ed5d990d..016d0cc23a 100644
--- a/qemu-img.c
+++ b/qemu-img.c
@@ -1203,30 +1203,29 @@ static int64_t sectors_to_bytes(int64_t sectors)
  * an error message.
  *
  * @param blk:  BlockBackend for the image
- * @param sect_num: Number of first sector to check
- * @param sect_count: Number of sectors to check
+ * @param offset: Starting offset to check
+ * @param bytes: Number of bytes to check
  * @param filename: Name of disk file we are checking (logging purpose)
  * @param buffer: Allocated buffer for storing read data
  * @param quiet: Flag for quiet mode
  */
-static int check_empty_sectors(BlockBackend *blk, int64_t sect_num,
-   int sect_count, const char *filename,
+static int check_empty_sectors(BlockBackend *blk, int64_t offset,
+   int64_t bytes, const char *filename,
uint8_t *buffer, bool quiet)
 {
 int ret = 0;
 int64_t idx;

-ret = blk_pread(blk, sect_num << BDRV_SECTOR_BITS, buffer,
-sect_count << BDRV_SECTOR_BITS);
+ret = blk_pread(blk, offset, buffer, bytes);
 if (ret < 0) {
 error_report("Error while reading offset %" PRId64 " of %s: %s",
- sectors_to_bytes(sect_num), filename, strerror(-ret));
+ offset, filename, strerror(-ret));
 return 4;
 }
-idx = find_nonzero(buffer, sect_count * BDRV_SECTOR_SIZE);
+idx = find_nonzero(buffer, bytes);
 if (idx >= 0) {
 qprintf(quiet, "Content mismatch at offset %" PRId64 "!\n",
-sectors_to_bytes(sect_num) + idx);
+offset + idx);
 return 1;
 }

@@ -1472,10 +1471,12 @@ static int img_compare(int argc, char **argv)
 } else {
 nb_sectors = MIN(nb_sectors, IO_BUF_SIZE >> BDRV_SECTOR_BITS);
 if (allocated1) {
-ret = check_empty_sectors(blk1, sector_num, nb_sectors,
+ret = check_empty_sectors(blk1, sector_num * BDRV_SECTOR_SIZE,
+  nb_sectors * BDRV_SECTOR_SIZE,
   filename1, buf1, quiet);
 } else {
-ret = check_empty_sectors(blk2, sector_num, nb_sectors,
+ret = check_empty_sectors(blk2, sector_num * BDRV_SECTOR_SIZE,
+  nb_sectors * BDRV_SECTOR_SIZE,
   filename2, buf1, quiet);
 }
 if (ret) {
@@ -1520,7 +1521,9 @@ static int img_compare(int argc, char **argv)
 nb_sectors = count >> BDRV_SECTOR_BITS;
 if (ret & BDRV_BLOCK_ALLOCATED && !(ret & BDRV_BLOCK_ZERO)) {
 nb_sectors = MIN(nb_sectors, IO_BUF_SIZE >> BDRV_SECTOR_BITS);
-ret = check_empty_sectors(blk_over, sector_num, nb_sectors,
+ret = check_empty_sectors(blk_over,
+  sector_num * BDRV_SECTOR_SIZE,
+  nb_sectors * BDRV_SECTOR_SIZE,
   filename_over, buf1, quiet);
 if (ret) {
 goto out;
-- 
2.13.6




[Qemu-block] [PATCH v5 13/23] qemu-img: Simplify logic in img_compare()

2017-10-03 Thread Eric Blake
As long as we are querying the status for a chunk smaller than
the known image size, we are guaranteed that a successful return
will have set pnum to a non-zero size (pnum is zero only for
queries beyond the end of the file).  Use that to slightly
simplify the calculation of the current chunk size being compared.
Likewise, we don't have to shrink the amount of data operated on
until we know we have to read the file, and therefore have to fit
in the bounds of our buffer.  Also, note that 'total_sectors_over'
is equivalent to 'progress_base'.

With these changes in place, sectors_to_process() is now dead code,
and can be removed.
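
A standalone sketch of the simplified per-iteration chunk calculation
(assuming, per the reasoning above, that in-bounds queries always return
non-zero pnum values):

#include <assert.h>
#include <inttypes.h>
#include <stdio.h>

#define SECTOR_SIZE 512
#define IO_BUF_SIZE (2 * 1024 * 1024)

int main(void)
{
    int64_t pnum1 = 65536, pnum2 = 4096;   /* status lengths from both images */

    /* In-bounds queries guarantee non-zero pnum, so no special-casing. */
    assert(pnum1 && pnum2);
    int64_t nb_sectors = (pnum1 < pnum2 ? pnum1 : pnum2) / SECTOR_SIZE;

    /* Only shrink to the buffer size when we actually have to read. */
    int64_t readable = IO_BUF_SIZE / SECTOR_SIZE;
    if (nb_sectors > readable) {
        nb_sectors = readable;
    }
    printf("compare %" PRId64 " sectors this iteration\n", nb_sectors);
    return 0;
}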

Signed-off-by: Eric Blake 

---
v5: rebase to alignment assertion [John]
v4: no change
v3: new patch
---
 qemu-img.c | 38 +++---
 1 file changed, 11 insertions(+), 27 deletions(-)

diff --git a/qemu-img.c b/qemu-img.c
index a351af09ac..abd289c0b5 100644
--- a/qemu-img.c
+++ b/qemu-img.c
@@ -1172,11 +1172,6 @@ static int64_t sectors_to_bytes(int64_t sectors)
 return sectors << BDRV_SECTOR_BITS;
 }

-static int64_t sectors_to_process(int64_t total, int64_t from)
-{
-return MIN(total - from, IO_BUF_SIZE >> BDRV_SECTOR_BITS);
-}
-
 /*
  * Check if passed sectors are empty (not allocated or contain only 0 bytes)
  *
@@ -1373,13 +1368,9 @@ static int img_compare(int argc, char **argv)
 goto out;
 }

-for (;;) {
+while (sector_num < total_sectors) {
 int64_t status1, status2;

-nb_sectors = sectors_to_process(total_sectors, sector_num);
-if (nb_sectors <= 0) {
-break;
-}
 status1 = bdrv_block_status_above(bs1, NULL,
   sector_num * BDRV_SECTOR_SIZE,
   (total_sectors1 - sector_num) *
@@ -1406,12 +1397,9 @@ static int img_compare(int argc, char **argv)
 /* TODO: Relax this once comparison is byte-based, and we no longer
  * have to worry about sector alignment */
 assert(QEMU_IS_ALIGNED(pnum1 | pnum2, BDRV_SECTOR_SIZE));
-if (pnum1) {
-nb_sectors = MIN(nb_sectors, pnum1 >> BDRV_SECTOR_BITS);
-}
-if (pnum2) {
-nb_sectors = MIN(nb_sectors, pnum2 >> BDRV_SECTOR_BITS);
-}
+
+assert(pnum1 && pnum2);
+nb_sectors = MIN(pnum1, pnum2) >> BDRV_SECTOR_BITS;

 if (strict) {
 if ((status1 & ~BDRV_BLOCK_OFFSET_MASK) !=
@@ -1424,9 +1412,10 @@ static int img_compare(int argc, char **argv)
 }
 }
 if ((status1 & BDRV_BLOCK_ZERO) && (status2 & BDRV_BLOCK_ZERO)) {
-nb_sectors = DIV_ROUND_UP(MIN(pnum1, pnum2), BDRV_SECTOR_SIZE);
+/* nothing to do */
 } else if (allocated1 == allocated2) {
 if (allocated1) {
+nb_sectors = MIN(nb_sectors, IO_BUF_SIZE >> BDRV_SECTOR_BITS);
 ret = blk_pread(blk1, sector_num << BDRV_SECTOR_BITS, buf1,
 nb_sectors << BDRV_SECTOR_BITS);
 if (ret < 0) {
@@ -1455,7 +1444,7 @@ static int img_compare(int argc, char **argv)
 }
 }
 } else {
-
+nb_sectors = MIN(nb_sectors, IO_BUF_SIZE >> BDRV_SECTOR_BITS);
 if (allocated1) {
 ret = check_empty_sectors(blk1, sector_num, nb_sectors,
   filename1, buf1, quiet);
@@ -1478,30 +1467,24 @@ static int img_compare(int argc, char **argv)

 if (total_sectors1 != total_sectors2) {
 BlockBackend *blk_over;
-int64_t total_sectors_over;
 const char *filename_over;

 qprintf(quiet, "Warning: Image size mismatch!\n");
 if (total_sectors1 > total_sectors2) {
-total_sectors_over = total_sectors1;
 blk_over = blk1;
 filename_over = filename1;
 } else {
-total_sectors_over = total_sectors2;
 blk_over = blk2;
 filename_over = filename2;
 }

-for (;;) {
+while (sector_num < progress_base) {
 int64_t count;

-nb_sectors = sectors_to_process(total_sectors_over, sector_num);
-if (nb_sectors <= 0) {
-break;
-}
 ret = bdrv_is_allocated_above(blk_bs(blk_over), NULL,
   sector_num * BDRV_SECTOR_SIZE,
-  nb_sectors * BDRV_SECTOR_SIZE,
+  (progress_base - sector_num) *
+  BDRV_SECTOR_SIZE,
  &count);
 if (ret < 0) {
 ret = 3;
@@ -1515,6 +1498,7 @@ static int img_compare(int argc, char **argv)
 assert(QEMU_IS_ALIGNED(count, BDRV_SECTOR_SIZE));
 nb_sectors = count >> BDRV_SECTOR_BITS;
 if (ret) {
+

[Qemu-block] [PATCH v5 18/23] qemu-img: Change compare_sectors() to be byte-based

2017-10-03 Thread Eric Blake
In the continuing quest to make more things byte-based, change
compare_sectors(), renaming it to compare_buffers() in the
process.  Note that one caller (qemu-img compare) only cares
about the first difference, while the other (qemu-img rebase)
cares about how many consecutive sectors have the same
equal/different status; however, this patch does not bother to
micro-optimize the compare case to avoid the comparisons of
sectors beyond the first mismatch.  Both callers are always
passing valid buffers in, so the initial check for buffer size
can be turned into an assertion.
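
A self-contained sketch of the comparison loop described above (written
independently here, not copied from the patch): pnum reports how long a
prefix shares the first sector's match/mismatch status.

#include <inttypes.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define SECTOR_SIZE 512

/* Returns non-zero if the first sector differs; *pnum is the length in bytes
 * of the prefix whose match/mismatch status equals that of the first sector,
 * tolerating a short final sector. */
static int compare_buffers(const uint8_t *buf1, const uint8_t *buf2,
                           int64_t bytes, int64_t *pnum)
{
    int64_t i = bytes < SECTOR_SIZE ? bytes : SECTOR_SIZE;
    bool res = memcmp(buf1, buf2, i) != 0;

    while (i < bytes) {
        int64_t len = bytes - i < SECTOR_SIZE ? bytes - i : SECTOR_SIZE;
        if ((memcmp(buf1 + i, buf2 + i, len) != 0) != res) {
            break;
        }
        i += len;
    }
    *pnum = i;
    return res;
}

int main(void)
{
    uint8_t a[1536] = {0}, b[1536] = {0};
    int64_t pnum;

    b[1100] = 1;                            /* difference in the third sector */
    int ret = compare_buffers(a, b, sizeof(a), &pnum);
    printf("first sector %s, %" PRId64 " bytes share that status\n",
           ret ? "differs" : "matches", pnum);  /* matches, 1024 bytes */
    return 0;
}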

Signed-off-by: Eric Blake 
Reviewed-by: John Snow 

---
v4-v5: no change
v3: new patch
---
 qemu-img.c | 55 +++
 1 file changed, 27 insertions(+), 28 deletions(-)

diff --git a/qemu-img.c b/qemu-img.c
index 016d0cc23a..b988c718aa 100644
--- a/qemu-img.c
+++ b/qemu-img.c
@@ -1156,31 +1156,28 @@ static int is_allocated_sectors_min(const uint8_t *buf, 
int n, int *pnum,
 }

 /*
- * Compares two buffers sector by sector. Returns 0 if the first sector of both
- * buffers matches, non-zero otherwise.
+ * Compares two buffers sector by sector. Returns 0 if the first
+ * sector of each buffer matches, non-zero otherwise.
  *
- * pnum is set to the number of sectors (including and immediately following
- * the first one) that are known to have the same comparison result
+ * pnum is set to the sector-aligned size of the buffer prefix that
+ * has the same matching status as the first sector.
  */
-static int compare_sectors(const uint8_t *buf1, const uint8_t *buf2, int n,
-int *pnum)
+static int compare_buffers(const uint8_t *buf1, const uint8_t *buf2,
+   int64_t bytes, int64_t *pnum)
 {
 bool res;
-int i;
+int64_t i = MIN(bytes, BDRV_SECTOR_SIZE);

-if (n <= 0) {
-*pnum = 0;
-return 0;
-}
+assert(bytes > 0);

-res = !!memcmp(buf1, buf2, 512);
-for(i = 1; i < n; i++) {
-buf1 += 512;
-buf2 += 512;
+res = !!memcmp(buf1, buf2, i);
+while (i < bytes) {
+int64_t len = MIN(bytes - i, BDRV_SECTOR_SIZE);

-if (!!memcmp(buf1, buf2, 512) != res) {
+if (!!memcmp(buf1 + i, buf2 + i, len) != res) {
 break;
 }
+i += len;
 }

 *pnum = i;
@@ -1255,7 +1252,7 @@ static int img_compare(int argc, char **argv)
 int64_t total_sectors;
 int64_t sector_num = 0;
 int64_t nb_sectors;
-int c, pnum;
+int c;
 uint64_t progress_base;
 bool image_opts = false;
 bool force_share = false;
@@ -1440,6 +1437,8 @@ static int img_compare(int argc, char **argv)
 /* nothing to do */
 } else if (allocated1 == allocated2) {
 if (allocated1) {
+int64_t pnum;
+
 nb_sectors = MIN(nb_sectors, IO_BUF_SIZE >> BDRV_SECTOR_BITS);
 ret = blk_pread(blk1, sector_num << BDRV_SECTOR_BITS, buf1,
 nb_sectors << BDRV_SECTOR_BITS);
@@ -1459,11 +1458,11 @@ static int img_compare(int argc, char **argv)
 ret = 4;
 goto out;
 }
-ret = compare_sectors(buf1, buf2, nb_sectors, &pnum);
-if (ret || pnum != nb_sectors) {
+ret = compare_buffers(buf1, buf2,
+  nb_sectors * BDRV_SECTOR_SIZE, &pnum);
+if (ret || pnum != nb_sectors * BDRV_SECTOR_SIZE) {
 qprintf(quiet, "Content mismatch at offset %" PRId64 "!\n",
-sectors_to_bytes(
-ret ? sector_num : sector_num + pnum));
+sectors_to_bytes(sector_num) + (ret ? 0 : pnum));
 ret = 1;
 goto out;
 }
@@ -3354,16 +3353,16 @@ static int img_rebase(int argc, char **argv)
 /* If they differ, we need to write to the COW file */
 uint64_t written = 0;

-while (written < n) {
-int pnum;
+while (written < n * BDRV_SECTOR_SIZE) {
+int64_t pnum;

-if (compare_sectors(buf_old + written * 512,
-buf_new + written * 512, n - written, &pnum))
+if (compare_buffers(buf_old + written,
+buf_new + written,
+n * BDRV_SECTOR_SIZE - written, &pnum))
 {
 ret = blk_pwrite(blk,
- (sector + written) << BDRV_SECTOR_BITS,
- buf_old + written * 512,
- pnum << BDRV_SECTOR_BITS, 0);
+ (sector << BDRV_SECTOR_BITS) + written,
+ buf_old + written, pnum, 0);
 if (ret < 0) {

[Qemu-block] [PATCH v5 19/23] qemu-img: Change img_rebase() to be byte-based

2017-10-03 Thread Eric Blake
In the continuing quest to make more things byte-based, change
the internal iteration of img_rebase().  We can finally drop the
TODO assertion added earlier, now that the entire algorithm is
byte-based and no longer has to shift from bytes to sectors.

Most of the change is mechanical ('num_sectors' becomes 'size',
'sector' becomes 'offset', 'n' goes from sectors to bytes); some
of it is also a cleanup (use of MIN() instead of open-coding,
loss of variable 'count' added earlier in commit d6a644bb).
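
The mechanical part reduces to a MIN()-based byte loop; a minimal sketch of
that chunking shape (not the patch itself):

#include <inttypes.h>
#include <stdio.h>

#define IO_BUF_SIZE (2 * 1024 * 1024)
#define MIN(a, b) ((a) < (b) ? (a) : (b))

int main(void)
{
    int64_t size = 5 * 1024 * 1024 + 123;   /* image length in bytes */
    int64_t offset, n;

    for (offset = 0; offset < size; offset += n) {
        /* How many bytes can we handle with the next read? */
        n = MIN((int64_t)IO_BUF_SIZE, size - offset);
        printf("chunk at %" PRId64 ", %" PRId64 " bytes\n", offset, n);
    }
    return 0;
}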

Signed-off-by: Eric Blake 
Reviewed-by: John Snow 

---
v4-v5: no change
v3: new patch
---
 qemu-img.c | 84 +-
 1 file changed, 34 insertions(+), 50 deletions(-)

diff --git a/qemu-img.c b/qemu-img.c
index b988c718aa..2e74da978e 100644
--- a/qemu-img.c
+++ b/qemu-img.c
@@ -3248,70 +3248,58 @@ static int img_rebase(int argc, char **argv)
  * the image is the same as the original one at any time.
  */
 if (!unsafe) {
-int64_t num_sectors;
-int64_t old_backing_num_sectors;
-int64_t new_backing_num_sectors = 0;
-uint64_t sector;
-int n;
-int64_t count;
+int64_t size;
+int64_t old_backing_size;
+int64_t new_backing_size = 0;
+uint64_t offset;
+int64_t n;
 float local_progress = 0;

 buf_old = blk_blockalign(blk, IO_BUF_SIZE);
 buf_new = blk_blockalign(blk, IO_BUF_SIZE);

-num_sectors = blk_nb_sectors(blk);
-if (num_sectors < 0) {
+size = blk_getlength(blk);
+if (size < 0) {
 error_report("Could not get size of '%s': %s",
- filename, strerror(-num_sectors));
+ filename, strerror(-size));
 ret = -1;
 goto out;
 }
-old_backing_num_sectors = blk_nb_sectors(blk_old_backing);
-if (old_backing_num_sectors < 0) {
+old_backing_size = blk_getlength(blk_old_backing);
+if (old_backing_size < 0) {
 char backing_name[PATH_MAX];

 bdrv_get_backing_filename(bs, backing_name, sizeof(backing_name));
 error_report("Could not get size of '%s': %s",
- backing_name, strerror(-old_backing_num_sectors));
+ backing_name, strerror(-old_backing_size));
 ret = -1;
 goto out;
 }
 if (blk_new_backing) {
-new_backing_num_sectors = blk_nb_sectors(blk_new_backing);
-if (new_backing_num_sectors < 0) {
+new_backing_size = blk_getlength(blk_new_backing);
+if (new_backing_size < 0) {
 error_report("Could not get size of '%s': %s",
- out_baseimg, strerror(-new_backing_num_sectors));
+ out_baseimg, strerror(-new_backing_size));
 ret = -1;
 goto out;
 }
 }

-if (num_sectors != 0) {
-local_progress = (float)100 /
-(num_sectors / MIN(num_sectors, IO_BUF_SIZE / 512));
+if (size != 0) {
+local_progress = (float)100 / (size / MIN(size, IO_BUF_SIZE));
 }

-for (sector = 0; sector < num_sectors; sector += n) {
-
-/* How many sectors can we handle with the next read? */
-if (sector + (IO_BUF_SIZE / 512) <= num_sectors) {
-n = (IO_BUF_SIZE / 512);
-} else {
-n = num_sectors - sector;
-}
+for (offset = 0; offset < size; offset += n) {
+/* How many bytes can we handle with the next read? */
+n = MIN(IO_BUF_SIZE, size - offset);

 /* If the cluster is allocated, we don't need to take action */
-ret = bdrv_is_allocated(bs, sector << BDRV_SECTOR_BITS,
-n << BDRV_SECTOR_BITS, );
+ret = bdrv_is_allocated(bs, offset, n, );
 if (ret < 0) {
 error_report("error while reading image metadata: %s",
  strerror(-ret));
 goto out;
 }
-/* TODO relax this once bdrv_is_allocated does not enforce
- * sector alignment */
-assert(QEMU_IS_ALIGNED(count, BDRV_SECTOR_SIZE));
-n = count >> BDRV_SECTOR_BITS;
 if (ret) {
 continue;
 }
@@ -3320,30 +3308,28 @@ static int img_rebase(int argc, char **argv)
  * Read old and new backing file and take into consideration that
  * backing files may be smaller than the COW image.
  */
-if (sector >= old_backing_num_sectors) {
-memset(buf_old, 0, n * BDRV_SECTOR_SIZE);
+if (offset >= old_backing_size) {
+memset(buf_old, 0, n);
 } else {
-if (sector + n > 

[Qemu-block] [PATCH v5 12/23] block: Convert bdrv_get_block_status_above() to bytes

2017-10-03 Thread Eric Blake
We are gradually moving away from sector-based interfaces, towards
byte-based.  In the common case, allocation is unlikely to ever use
values that are not naturally sector-aligned, but it is possible
that byte-based values will let us be more precise about allocation
at the end of an unaligned file that can do byte-based access.

Changing the name of the function from bdrv_get_block_status_above()
to bdrv_block_status_above() ensures that the compiler enforces that
all callers are updated.  For now, the io.c layer still assert()s
that all callers are sector-aligned, but that can be relaxed when a
later patch implements byte-based block status in the drivers.

For the most part this patch is just the addition of scaling at the
callers followed by inverse scaling at bdrv_block_status().  But some
code, particularly bdrv_block_status(), gets a lot simpler because
it no longer has to mess with sectors.  Likewise, mirror code no
longer computes s->granularity >> BDRV_SECTOR_BITS, and can therefore
drop an assertion about alignment because the loop no longer depends
on alignment (never mind that we don't really have a driver that
reports sub-sector alignments, so it's not really possible to test
the effect of sub-sector mirroring).  Fix a neighboring assertion to
use is_power_of_2 while there.

For ease of review, bdrv_get_block_status() was tackled separately.
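
The "scaling at the callers followed by inverse scaling" pattern looks
roughly like this standalone sketch (the function names and the fixed
4096-byte extent are illustrative, not the patch's):

#include <assert.h>
#include <inttypes.h>
#include <stdio.h>

#define SECTOR_SIZE 512
#define SECTOR_BITS 9

/* Pretend byte-based core: reports a 4096-byte extent from any offset. */
static int64_t block_status_bytes(int64_t offset, int64_t bytes, int64_t *pnum)
{
    (void)offset;                      /* a real driver would look at this */
    *pnum = bytes < 4096 ? bytes : 4096;
    return 1;
}

/* Legacy sector-based wrapper: scale in, call, scale the result back out. */
static int64_t block_status_sectors(int64_t sector_num, int nb_sectors,
                                    int *pnum)
{
    int64_t n;
    int64_t ret = block_status_bytes(sector_num * SECTOR_SIZE,
                                     (int64_t)nb_sectors * SECTOR_SIZE, &n);
    if (ret < 0) {
        return ret;
    }
    /* Until drivers go byte-based, results stay sector-aligned. */
    assert(n % SECTOR_SIZE == 0);
    *pnum = n >> SECTOR_BITS;
    return ret;
}

int main(void)
{
    int pnum;
    block_status_sectors(4, 16, &pnum);
    printf("status covers %d sectors\n", pnum);   /* 8 */
    return 0;
}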

Signed-off-by: Eric Blake 

---
v5: assert alignment rather than rounding up in img_compare [John],
rebase to earlier changes
v4: rebase to earlier changes
v3: rebase to allocation/mapping sense change and qcow2-measure, tweak
mirror assertions, drop R-b
v2: rebase to earlier changes
---
 include/block/block.h | 10 +-
 block/io.c| 43 ---
 block/mirror.c| 16 +---
 block/qcow2.c | 25 +
 qemu-img.c| 40 
 5 files changed, 51 insertions(+), 83 deletions(-)

diff --git a/include/block/block.h b/include/block/block.h
index 4ecd2c4a65..b484e60509 100644
--- a/include/block/block.h
+++ b/include/block/block.h
@@ -427,11 +427,11 @@ bool bdrv_can_write_zeroes_with_unmap(BlockDriverState 
*bs);
 int64_t bdrv_block_status(BlockDriverState *bs, int64_t offset,
   int64_t bytes, int64_t *pnum,
   BlockDriverState **file);
-int64_t bdrv_get_block_status_above(BlockDriverState *bs,
-BlockDriverState *base,
-int64_t sector_num,
-int nb_sectors, int *pnum,
-BlockDriverState **file);
+int64_t bdrv_block_status_above(BlockDriverState *bs,
+BlockDriverState *base,
+int64_t offset,
+int64_t bytes, int64_t *pnum,
+BlockDriverState **file);
 int bdrv_is_allocated(BlockDriverState *bs, int64_t offset, int64_t bytes,
   int64_t *pnum);
 int bdrv_is_allocated_above(BlockDriverState *top, BlockDriverState *base,
diff --git a/block/io.c b/block/io.c
index ac7399ad41..8f0434ce4f 100644
--- a/block/io.c
+++ b/block/io.c
@@ -1973,7 +1973,7 @@ static int64_t coroutine_fn 
bdrv_co_block_status_above(BlockDriverState *bs,
 return ret;
 }

-/* Coroutine wrapper for bdrv_get_block_status_above() */
+/* Coroutine wrapper for bdrv_block_status_above() */
 static void coroutine_fn bdrv_block_status_above_co_entry(void *opaque)
 {
 BdrvCoBlockStatusData *data = opaque;
@@ -2020,47 +2020,20 @@ static int64_t 
bdrv_common_block_status_above(BlockDriverState *bs,
 return data.ret;
 }

-int64_t bdrv_get_block_status_above(BlockDriverState *bs,
-BlockDriverState *base,
-int64_t sector_num,
-int nb_sectors, int *pnum,
-BlockDriverState **file)
+int64_t bdrv_block_status_above(BlockDriverState *bs, BlockDriverState *base,
+int64_t offset, int64_t bytes, int64_t *pnum,
+BlockDriverState **file)
 {
-int64_t ret;
-int64_t n;
-
-ret = bdrv_common_block_status_above(bs, base, true,
- sector_num * BDRV_SECTOR_SIZE,
- nb_sectors * BDRV_SECTOR_SIZE,
- &n, file);
-if (ret < 0) {
-return ret;
-}
-assert(QEMU_IS_ALIGNED(n, BDRV_SECTOR_SIZE));
-*pnum = n >> BDRV_SECTOR_BITS;
-return ret;
+return bdrv_common_block_status_above(bs, base, true, offset, bytes,
+  pnum, file);
 }

 int64_t bdrv_block_status(BlockDriverState *bs,
   int64_t offset, int64_t bytes, 

[Qemu-block] [PATCH v5 16/23] qemu-img: Drop redundant error message in compare

2017-10-03 Thread Eric Blake
If a read error is encountered during 'qemu-img compare', we
were printing the "Error while reading offset ..." message twice;
this was because our helper function was awkward, printing output
on some but not all paths.  Fix it to consistently report errors
on all paths, so that the callers do not risk a redundant message,
and update the testsuite for the improved output.

Further simplify the code by hoisting the conversion from an error
message to an exit code into the helper function, rather than
repeating that logic at all callers (yes, the helper function is
now less generic, but it's a net win in lines of code).
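
A minimal sketch of the convention after this patch (exit codes 0/1/4 are
qemu-img compare's; the helper shape here is purely illustrative): the helper
prints the message and maps the error to an exit status itself, so callers
only propagate.

#include <errno.h>
#include <stdio.h>
#include <string.h>

/* Helper owns both the error message and the exit-status mapping:
 * 0 = all zero, 1 = content mismatch, 4 = read error. */
static int check_chunk(int simulated_read_error, int simulated_mismatch)
{
    if (simulated_read_error) {
        fprintf(stderr, "Error while reading offset 0: %s\n", strerror(EIO));
        return 4;
    }
    return simulated_mismatch ? 1 : 0;
}

int main(void)
{
    int ret = check_chunk(1, 0);
    if (ret) {
        return ret;   /* caller no longer re-reports; it just propagates */
    }
    return 0;
}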

Signed-off-by: Eric Blake 
Reviewed-by: John Snow 

---
v5: tweak commit message, but no code change
v4: no change
v3: new patch
---
 qemu-img.c | 19 +--
 tests/qemu-iotests/074.out |  2 --
 2 files changed, 5 insertions(+), 16 deletions(-)

diff --git a/qemu-img.c b/qemu-img.c
index 12881f008e..a8ed5d990d 100644
--- a/qemu-img.c
+++ b/qemu-img.c
@@ -1197,8 +1197,10 @@ static int64_t sectors_to_bytes(int64_t sectors)
 /*
  * Check if passed sectors are empty (not allocated or contain only 0 bytes)
  *
- * Returns 0 in case sectors are filled with 0, 1 if sectors contain non-zero
- * data and negative value on error.
+ * Intended for use by 'qemu-img compare': Returns 0 in case sectors are
+ * filled with 0, 1 if sectors contain non-zero data (this is a comparison
+ * failure), and 4 on error (the exit status for read errors), after emitting
+ * an error message.
  *
  * @param blk:  BlockBackend for the image
  * @param sect_num: Number of first sector to check
@@ -1219,7 +1221,7 @@ static int check_empty_sectors(BlockBackend *blk, int64_t 
sect_num,
 if (ret < 0) {
 error_report("Error while reading offset %" PRId64 " of %s: %s",
  sectors_to_bytes(sect_num), filename, strerror(-ret));
-return ret;
+return 4;
 }
 idx = find_nonzero(buffer, sect_count * BDRV_SECTOR_SIZE);
 if (idx >= 0) {
@@ -1477,11 +1479,6 @@ static int img_compare(int argc, char **argv)
   filename2, buf1, quiet);
 }
 if (ret) {
-if (ret < 0) {
-error_report("Error while reading offset %" PRId64 ": %s",
- sectors_to_bytes(sector_num), strerror(-ret));
-ret = 4;
-}
 goto out;
 }
 }
@@ -1526,12 +1523,6 @@ static int img_compare(int argc, char **argv)
 ret = check_empty_sectors(blk_over, sector_num, nb_sectors,
   filename_over, buf1, quiet);
 if (ret) {
-if (ret < 0) {
-error_report("Error while reading offset %" PRId64
- " of %s: %s", 
sectors_to_bytes(sector_num),
- filename_over, strerror(-ret));
-ret = 4;
-}
 goto out;
 }
 }
diff --git a/tests/qemu-iotests/074.out b/tests/qemu-iotests/074.out
index 8fba5aea9c..ede66c3f81 100644
--- a/tests/qemu-iotests/074.out
+++ b/tests/qemu-iotests/074.out
@@ -4,7 +4,6 @@ Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=1073741824
 wrote 512/512 bytes at offset 512
 512 bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
 qemu-img: Error while reading offset 0 of 
blkdebug:TEST_DIR/blkdebug.conf:TEST_DIR/t.IMGFMT: Input/output error
-qemu-img: Error while reading offset 0: Input/output error
 4
 Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=1073741824
 Formatting 'TEST_DIR/t.IMGFMT.2', fmt=IMGFMT size=0
@@ -12,7 +11,6 @@ Formatting 'TEST_DIR/t.IMGFMT.2', fmt=IMGFMT size=0
 wrote 512/512 bytes at offset 512
 512 bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
 qemu-img: Error while reading offset 0 of 
blkdebug:TEST_DIR/blkdebug.conf:TEST_DIR/t.IMGFMT: Input/output error
-qemu-img: Error while reading offset 0 of 
blkdebug:TEST_DIR/blkdebug.conf:TEST_DIR/t.IMGFMT: Input/output error
 Warning: Image size mismatch!
 4
 Cleanup
-- 
2.13.6




[Qemu-block] [PATCH v5 09/23] block: Switch BdrvCoGetBlockStatusData to byte-based

2017-10-03 Thread Eric Blake
We are gradually converting to byte-based interfaces, as they are
easier to reason about than sector-based.  Convert another internal
type (no semantic change), and rename it to match the corresponding
public function rename.

Signed-off-by: Eric Blake 
Reviewed-by: Fam Zheng 
Reviewed-by: John Snow 

---
v4-v5: no change
v3: rebase to context conflicts, simple enough to keep R-b
v2: rebase to earlier changes
---
 block/io.c | 31 ++-
 1 file changed, 18 insertions(+), 13 deletions(-)

diff --git a/block/io.c b/block/io.c
index b879e26154..1857191187 100644
--- a/block/io.c
+++ b/block/io.c
@@ -1744,17 +1744,17 @@ int bdrv_flush_all(void)
 }


-typedef struct BdrvCoGetBlockStatusData {
+typedef struct BdrvCoBlockStatusData {
 BlockDriverState *bs;
 BlockDriverState *base;
 BlockDriverState **file;
-int64_t sector_num;
-int nb_sectors;
-int *pnum;
+int64_t offset;
+int64_t bytes;
+int64_t *pnum;
 int64_t ret;
 bool mapping;
 bool done;
-} BdrvCoGetBlockStatusData;
+} BdrvCoBlockStatusData;

 int64_t coroutine_fn bdrv_co_get_block_status_from_file(BlockDriverState *bs,
 int64_t sector_num,
@@ -1983,14 +1983,16 @@ static int64_t coroutine_fn 
bdrv_co_get_block_status_above(BlockDriverState *bs,
 /* Coroutine wrapper for bdrv_get_block_status_above() */
 static void coroutine_fn bdrv_get_block_status_above_co_entry(void *opaque)
 {
-BdrvCoGetBlockStatusData *data = opaque;
+BdrvCoBlockStatusData *data = opaque;
+int n;

 data->ret = bdrv_co_get_block_status_above(data->bs, data->base,
data->mapping,
-   data->sector_num,
-   data->nb_sectors,
-   data->pnum,
+   data->offset >> BDRV_SECTOR_BITS,
+   data->bytes >> BDRV_SECTOR_BITS,
+   &n,
data->file);
+*data->pnum = n * BDRV_SECTOR_SIZE;
 data->done = true;
 }

@@ -2007,13 +2009,14 @@ static int64_t 
bdrv_common_block_status_above(BlockDriverState *bs,
   BlockDriverState **file)
 {
 Coroutine *co;
-BdrvCoGetBlockStatusData data = {
+int64_t n;
+BdrvCoBlockStatusData data = {
 .bs = bs,
 .base = base,
 .file = file,
-.sector_num = sector_num,
-.nb_sectors = nb_sectors,
-.pnum = pnum,
+.offset = sector_num * BDRV_SECTOR_SIZE,
+.bytes = nb_sectors * BDRV_SECTOR_SIZE,
+.pnum = &n,
 .mapping = mapping,
 .done = false,
 };
@@ -2027,6 +2030,8 @@ static int64_t 
bdrv_common_block_status_above(BlockDriverState *bs,
 bdrv_coroutine_enter(bs, co);
 BDRV_POLL_WHILE(bs, !data.done);
 }
+assert(data.ret < 0 || QEMU_IS_ALIGNED(n, BDRV_SECTOR_SIZE));
+*pnum = n >> BDRV_SECTOR_BITS;
 return data.ret;
 }

-- 
2.13.6




[Qemu-block] [PATCH v5 10/23] block: Switch bdrv_common_block_status_above() to byte-based

2017-10-03 Thread Eric Blake
We are gradually converting to byte-based interfaces, as they are
easier to reason about than sector-based.  Convert another internal
function (no semantic change).

Signed-off-by: Eric Blake 
Reviewed-by: Fam Zheng 
Reviewed-by: John Snow 

---
v4-v5: no change
v3: rebase to allocation/mapping sense change, simple enough to keep R-b
v2: new patch
---
 block/io.c | 41 +
 1 file changed, 21 insertions(+), 20 deletions(-)

diff --git a/block/io.c b/block/io.c
index 1857191187..4826751c27 100644
--- a/block/io.c
+++ b/block/io.c
@@ -2004,19 +2004,18 @@ static void coroutine_fn 
bdrv_get_block_status_above_co_entry(void *opaque)
 static int64_t bdrv_common_block_status_above(BlockDriverState *bs,
   BlockDriverState *base,
   bool mapping,
-  int64_t sector_num,
-  int nb_sectors, int *pnum,
+  int64_t offset,
+  int64_t bytes, int64_t *pnum,
   BlockDriverState **file)
 {
 Coroutine *co;
-int64_t n;
 BdrvCoBlockStatusData data = {
 .bs = bs,
 .base = base,
 .file = file,
-.offset = sector_num * BDRV_SECTOR_SIZE,
-.bytes = nb_sectors * BDRV_SECTOR_SIZE,
-.pnum = &n,
+.offset = offset,
+.bytes = bytes,
+.pnum = pnum,
 .mapping = mapping,
 .done = false,
 };
@@ -2030,8 +2029,6 @@ static int64_t 
bdrv_common_block_status_above(BlockDriverState *bs,
 bdrv_coroutine_enter(bs, co);
 BDRV_POLL_WHILE(bs, !data.done);
 }
-assert(data.ret < 0 || QEMU_IS_ALIGNED(n, BDRV_SECTOR_SIZE));
-*pnum = n >> BDRV_SECTOR_BITS;
 return data.ret;
 }

@@ -2041,8 +2038,19 @@ int64_t bdrv_get_block_status_above(BlockDriverState *bs,
 int nb_sectors, int *pnum,
 BlockDriverState **file)
 {
-return bdrv_common_block_status_above(bs, base, true, sector_num,
-  nb_sectors, pnum, file);
+int64_t ret;
+int64_t n;
+
+ret = bdrv_common_block_status_above(bs, base, true,
+ sector_num * BDRV_SECTOR_SIZE,
+ nb_sectors * BDRV_SECTOR_SIZE,
+ &n, file);
+if (ret < 0) {
+return ret;
+}
+assert(QEMU_IS_ALIGNED(n, BDRV_SECTOR_SIZE));
+*pnum = n >> BDRV_SECTOR_BITS;
+return ret;
 }

 int64_t bdrv_block_status(BlockDriverState *bs,
@@ -2071,20 +2079,13 @@ int coroutine_fn bdrv_is_allocated(BlockDriverState 
*bs, int64_t offset,
int64_t bytes, int64_t *pnum)
 {
 int64_t ret;
-int psectors;
+int64_t dummy;

-assert(QEMU_IS_ALIGNED(offset, BDRV_SECTOR_SIZE));
-assert(QEMU_IS_ALIGNED(bytes, BDRV_SECTOR_SIZE) && bytes < INT_MAX);
-ret = bdrv_common_block_status_above(bs, backing_bs(bs), false,
- offset >> BDRV_SECTOR_BITS,
- bytes >> BDRV_SECTOR_BITS, &psectors,
- NULL);
+ret = bdrv_common_block_status_above(bs, backing_bs(bs), false, offset,
+ bytes, pnum ? pnum : &dummy, NULL);
 if (ret < 0) {
 return ret;
 }
-if (pnum) {
-*pnum = psectors * BDRV_SECTOR_SIZE;
-}
 return !!(ret & BDRV_BLOCK_ALLOCATED);
 }

-- 
2.13.6




[Qemu-block] [PATCH v5 11/23] block: Switch bdrv_co_get_block_status_above() to byte-based

2017-10-03 Thread Eric Blake
We are gradually converting to byte-based interfaces, as they are
easier to reason about than sector-based.  Convert another internal
type (no semantic change), and rename it to match the corresponding
public function rename.

Signed-off-by: Eric Blake 
Reviewed-by: Fam Zheng 
Reviewed-by: John Snow 

---
v5: no change other than rebase to context
v4: no change
v3: rebase to allocation/mapping sense change, simple enough to keep R-b
v2: rebase to earlier changes
---
 block/io.c | 48 ++--
 1 file changed, 18 insertions(+), 30 deletions(-)

diff --git a/block/io.c b/block/io.c
index 4826751c27..ac7399ad41 100644
--- a/block/io.c
+++ b/block/io.c
@@ -1935,12 +1935,12 @@ static int64_t coroutine_fn 
bdrv_co_block_status(BlockDriverState *bs,
 return ret;
 }

-static int64_t coroutine_fn bdrv_co_get_block_status_above(BlockDriverState 
*bs,
+static int64_t coroutine_fn bdrv_co_block_status_above(BlockDriverState *bs,
 BlockDriverState *base,
 bool mapping,
-int64_t sector_num,
-int nb_sectors,
-int *pnum,
+int64_t offset,
+int64_t bytes,
+int64_t *pnum,
 BlockDriverState **file)
 {
 BlockDriverState *p;
@@ -1949,17 +1949,10 @@ static int64_t coroutine_fn 
bdrv_co_get_block_status_above(BlockDriverState *bs,

 assert(bs != base);
 for (p = bs; p != base; p = backing_bs(p)) {
-int64_t count;
-
-ret = bdrv_co_block_status(p, mapping,
-   sector_num * BDRV_SECTOR_SIZE,
-   nb_sectors * BDRV_SECTOR_SIZE, &count,
-   file);
+ret = bdrv_co_block_status(p, mapping, offset, bytes, pnum, file);
 if (ret < 0) {
 break;
 }
-assert(QEMU_IS_ALIGNED(count, BDRV_SECTOR_SIZE));
-*pnum = count >> BDRV_SECTOR_BITS;
 if (ret & BDRV_BLOCK_ZERO && ret & BDRV_BLOCK_EOF && !first) {
 /*
  * Reading beyond the end of the file continues to read
@@ -1967,39 +1960,35 @@ static int64_t coroutine_fn 
bdrv_co_get_block_status_above(BlockDriverState *bs,
  * unallocated length we learned from an earlier
  * iteration.
  */
-*pnum = nb_sectors;
+*pnum = bytes;
 }
 if (ret & (BDRV_BLOCK_ZERO | BDRV_BLOCK_DATA)) {
 break;
 }
-/* [sector_num, pnum] unallocated on this layer, which could be only
- * the first part of [sector_num, nb_sectors].  */
-nb_sectors = MIN(nb_sectors, *pnum);
+/* [offset, pnum] unallocated on this layer, which could be only
+ * the first part of [offset, bytes].  */
+bytes = MIN(bytes, *pnum);
 first = false;
 }
 return ret;
 }

 /* Coroutine wrapper for bdrv_get_block_status_above() */
-static void coroutine_fn bdrv_get_block_status_above_co_entry(void *opaque)
+static void coroutine_fn bdrv_block_status_above_co_entry(void *opaque)
 {
 BdrvCoBlockStatusData *data = opaque;
-int n;

-data->ret = bdrv_co_get_block_status_above(data->bs, data->base,
-   data->mapping,
-   data->offset >> BDRV_SECTOR_BITS,
-   data->bytes >> BDRV_SECTOR_BITS,
-   &n,
-   data->file);
-*data->pnum = n * BDRV_SECTOR_SIZE;
+data->ret = bdrv_co_block_status_above(data->bs, data->base,
+   data->mapping,
+   data->offset, data->bytes,
+   data->pnum, data->file);
 data->done = true;
 }

 /*
- * Synchronous wrapper around bdrv_co_get_block_status_above().
+ * Synchronous wrapper around bdrv_co_block_status_above().
  *
- * See bdrv_co_get_block_status_above() for details.
+ * See bdrv_co_block_status_above() for details.
  */
 static int64_t bdrv_common_block_status_above(BlockDriverState *bs,
   BlockDriverState *base,
@@ -2022,10 +2011,9 @@ static int64_t 
bdrv_common_block_status_above(BlockDriverState *bs,

 if (qemu_in_coroutine()) {
 /* Fast-path if already in coroutine context */
-bdrv_get_block_status_above_co_entry();
+bdrv_block_status_above_co_entry();
 } else {
-co = qemu_coroutine_create(bdrv_get_block_status_above_co_entry,
-   &data);
+co = qemu_coroutine_create(bdrv_block_status_above_co_entry, &data);
 bdrv_coroutine_enter(bs, co);
 BDRV_POLL_WHILE(bs, !data.done);
 }
-- 
2.13.6




[Qemu-block] [PATCH v5 08/23] block: Switch bdrv_co_get_block_status() to byte-based

2017-10-03 Thread Eric Blake
We are gradually converting to byte-based interfaces, as they are
easier to reason about than sector-based.  Convert another internal
function (no semantic change); and as with its public counterpart,
rename to bdrv_co_block_status() to make the compiler enforce that
we catch all uses.  For now, we assert that callers still pass
aligned data, but ultimately, this will be the function where we
hand off to a byte-based driver callback, and will eventually need
to add logic to ensure we round calls according to the driver's
request_alignment then touch up the result handed back to the
caller, to start permitting a caller to pass unaligned offsets.

Note that we are now prepared to accepts 'bytes' larger than INT_MAX;
this is okay as long as we clamp things internally before violating
any 32-bit limits, and makes no difference to how a client will
use the information (clients looping over the entire file must
already be prepared for consecutive calls to return the same status,
as drivers are already free to return shorter-than-maximal status
due to any other convenient split points, such as when the L2 table
crosses cluster boundaries in qcow2).
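
A standalone sketch of the caller-side contract described above (loop until
done, tolerating short or repeated answers), with the internal clamp to a
32-bit limit; all names and the 1 GiB extent size are illustrative:

#include <inttypes.h>
#include <limits.h>
#include <stdio.h>

/* Byte-based status query that internally clamps to 32 bits and may
 * return a shorter extent than was asked for. */
static int block_status(int64_t offset, int64_t bytes, int64_t *pnum)
{
    (void)offset;                                       /* unused in the sketch */
    int64_t limit = bytes < INT_MAX ? bytes : INT_MAX;  /* clamp */
    /* Pretend every extent is at most 1 GiB long and has status "data". */
    *pnum = limit < (1 << 30) ? limit : (1 << 30);
    return 1;
}

int main(void)
{
    int64_t size = 5LL * 1024 * 1024 * 1024;   /* 5 GiB image */
    int64_t offset = 0, pnum;

    /* Clients looping over the whole file must already cope with
     * consecutive calls returning the same status. */
    while (offset < size) {
        int ret = block_status(offset, size - offset, &pnum);
        if (ret < 0) {
            return 1;
        }
        printf("[%" PRId64 ", +%" PRId64 ") status %d\n", offset, pnum, ret);
        offset += pnum;
    }
    return 0;
}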

Signed-off-by: Eric Blake 

---
v5: rebase to earlier changes in 1/23, add comment
v4: no change
v3: rebase to allocation/mapping sense change, clamp bytes to 32-bits
when needed, drop R-b
v2: rebase to earlier changes
---
 block/io.c | 103 +
 1 file changed, 62 insertions(+), 41 deletions(-)

diff --git a/block/io.c b/block/io.c
index ab1853dc2d..b879e26154 100644
--- a/block/io.c
+++ b/block/io.c
@@ -1792,76 +1792,91 @@ int64_t coroutine_fn 
bdrv_co_get_block_status_from_backing(BlockDriverState *bs,
  * BDRV_BLOCK_ZERO where possible; otherwise, the result may omit those
  * bits particularly if it allows for a larger value in 'pnum'.
  *
- * If 'sector_num' is beyond the end of the disk image the return value is
+ * If 'offset' is beyond the end of the disk image the return value is
  * BDRV_BLOCK_EOF and 'pnum' is set to 0.
  *
- * 'pnum' is set to the number of sectors (including and immediately following
- * the specified sector) that are known to be in the same
- * allocated/unallocated state.
+ * 'pnum' is set to the number of bytes (including and immediately following
+ * the specified offset) that are known to be in the same
+ * allocated/unallocated state.  It may be NULL.
  *
- * 'nb_sectors' is the max value 'pnum' should be set to.  If nb_sectors goes
+ * 'bytes' is the max value 'pnum' should be set to.  If bytes goes
  * beyond the end of the disk image it will be clamped; if 'pnum' is set to
  * the end of the image, then the returned value will include BDRV_BLOCK_EOF.
  *
  * If returned value is positive, BDRV_BLOCK_OFFSET_VALID bit is set, and
- * 'file' is non-NULL, then '*file' points to the BDS which the sector range
- * is allocated in.
+ * 'file' is non-NULL, then '*file' points to the BDS which owns the
+ * allocated sector that contains offset.
  */
-static int64_t coroutine_fn bdrv_co_get_block_status(BlockDriverState *bs,
- bool mapping,
- int64_t sector_num,
- int nb_sectors, int *pnum,
- BlockDriverState **file)
+static int64_t coroutine_fn bdrv_co_block_status(BlockDriverState *bs,
+ bool mapping,
+ int64_t offset, int64_t bytes,
+ int64_t *pnum,
+ BlockDriverState **file)
 {
-int64_t total_sectors;
-int64_t n;
+int64_t total_size;
+int64_t n; /* bytes */
 int64_t ret, ret2;
 BlockDriverState *local_file = NULL;
-int local_pnum = 0;
+int64_t local_pnum = 0;
+int count; /* sectors */

 assert(pnum);
-total_sectors = bdrv_nb_sectors(bs);
-if (total_sectors < 0) {
-ret = total_sectors;
+total_size = bdrv_getlength(bs);
+if (total_size < 0) {
+ret = total_size;
 goto early_out;
 }

-if (sector_num >= total_sectors || !nb_sectors) {
-ret = sector_num >= total_sectors ? BDRV_BLOCK_EOF : 0;
+if (offset >= total_size || !bytes) {
+ret = offset >= total_size ? BDRV_BLOCK_EOF : 0;
 goto early_out;
 }

-n = total_sectors - sector_num;
-if (n < nb_sectors) {
-nb_sectors = n;
+n = total_size - offset;
+if (n < bytes) {
+bytes = n;
 }

 if (!bs->drv->bdrv_co_get_block_status) {
-local_pnum = nb_sectors;
+local_pnum = bytes;
 ret = BDRV_BLOCK_DATA | BDRV_BLOCK_ALLOCATED;
-if (sector_num + nb_sectors == total_sectors) {
+if (offset + bytes == total_size) {
 ret |= 

[Qemu-block] [PATCH v5 15/23] qemu-img: Add find_nonzero()

2017-10-03 Thread Eric Blake
During 'qemu-img compare', when we are checking that an allocated
portion of one file is all zeros, we don't need to waste time
computing how many additional sectors after the first non-zero
byte are also non-zero.  Create a new helper find_nonzero() to do
the check for a first non-zero sector, and rebase
check_empty_sectors() to use it.

The new interface intentionally uses bytes in its interface, even
though it still crawls the buffer a sector at a time; it is robust
to a partial sector at the end of the buffer.
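
For reference, a standalone sketch matching the behavior described (scan a
sector at a time, tolerate a short tail); the hunk below carries the actual
implementation, and is_zero() here is only a trivial stand-in for QEMU's
buffer_is_zero().

#include <inttypes.h>
#include <stdio.h>

#define SECTOR_SIZE 512

/* Trivial stand-in for QEMU's buffer_is_zero(). */
static int is_zero(const uint8_t *buf, int64_t len)
{
    for (int64_t i = 0; i < len; i++) {
        if (buf[i]) {
            return 0;
        }
    }
    return 1;
}

/* Byte index of the first sector boundary whose sector holds non-zero
 * data, or -1 if the buffer is all zero; tolerates a short final sector. */
static int64_t find_nonzero(const uint8_t *buf, int64_t n)
{
    int64_t end = n / SECTOR_SIZE * SECTOR_SIZE;
    int64_t i;

    for (i = 0; i < end; i += SECTOR_SIZE) {
        if (!is_zero(buf + i, SECTOR_SIZE)) {
            return i;
        }
    }
    if (i < n && !is_zero(buf + i, n - end)) {
        return i;
    }
    return -1;
}

int main(void)
{
    uint8_t buf[1300] = {0};          /* two full sectors plus a 276-byte tail */
    buf[1200] = 0xff;                 /* non-zero byte in the tail */
    printf("first non-zero sector starts at byte %" PRId64 "\n",
           find_nonzero(buf, sizeof(buf)));   /* prints 1024 */
    return 0;
}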

Signed-off-by: Eric Blake 
Reviewed-by: John Snow 

---
v4-v5: no change
v3: new patch
---
 qemu-img.c | 32 
 1 file changed, 28 insertions(+), 4 deletions(-)

diff --git a/qemu-img.c b/qemu-img.c
index 43e3038894..12881f008e 100644
--- a/qemu-img.c
+++ b/qemu-img.c
@@ -1065,6 +1065,28 @@ done:
 }

 /*
+ * Returns -1 if 'buf' contains only zeroes, otherwise the byte index
+ * of the first sector boundary within buf where the sector contains a
+ * non-zero byte.  This function is robust to a buffer that is not
+ * sector-aligned.
+ */
+static int64_t find_nonzero(const uint8_t *buf, int64_t n)
+{
+int64_t i;
+int64_t end = QEMU_ALIGN_DOWN(n, BDRV_SECTOR_SIZE);
+
+for (i = 0; i < end; i += BDRV_SECTOR_SIZE) {
+if (!buffer_is_zero(buf + i, BDRV_SECTOR_SIZE)) {
+return i;
+}
+}
+if (i < n && !buffer_is_zero(buf + i, n - end)) {
+return i;
+}
+return -1;
+}
+
+/*
  * Returns true iff the first sector pointed to by 'buf' contains at least
  * a non-NUL byte.
  *
@@ -1189,7 +1211,9 @@ static int check_empty_sectors(BlockBackend *blk, int64_t 
sect_num,
int sect_count, const char *filename,
uint8_t *buffer, bool quiet)
 {
-int pnum, ret = 0;
+int ret = 0;
+int64_t idx;
+
 ret = blk_pread(blk, sect_num << BDRV_SECTOR_BITS, buffer,
 sect_count << BDRV_SECTOR_BITS);
 if (ret < 0) {
@@ -1197,10 +1221,10 @@ static int check_empty_sectors(BlockBackend *blk, 
int64_t sect_num,
  sectors_to_bytes(sect_num), filename, strerror(-ret));
 return ret;
 }
-ret = is_allocated_sectors(buffer, sect_count, &pnum);
-if (ret || pnum != sect_count) {
+idx = find_nonzero(buffer, sect_count * BDRV_SECTOR_SIZE);
+if (idx >= 0) {
 qprintf(quiet, "Content mismatch at offset %" PRId64 "!\n",
-sectors_to_bytes(ret ? sect_num : sect_num + pnum));
+sectors_to_bytes(sect_num) + idx);
 return 1;
 }

-- 
2.13.6




[Qemu-block] [PATCH v5 06/23] qemu-img: Switch get_block_status() to byte-based

2017-10-03 Thread Eric Blake
We are gradually converting to byte-based interfaces, as they are
easier to reason about than sector-based.  Continue by converting
an internal function (no semantic change), and simplifying its
caller accordingly.

Signed-off-by: Eric Blake 
Reviewed-by: Fam Zheng 
Reviewed-by: John Snow 

---
v2-v5: no change
---
 qemu-img.c | 24 +++-
 1 file changed, 11 insertions(+), 13 deletions(-)

diff --git a/qemu-img.c b/qemu-img.c
index e9c7b30c91..af3effdec5 100644
--- a/qemu-img.c
+++ b/qemu-img.c
@@ -2671,14 +2671,16 @@ static void dump_map_entry(OutputFormat output_format, 
MapEntry *e,
 }
 }

-static int get_block_status(BlockDriverState *bs, int64_t sector_num,
-int nb_sectors, MapEntry *e)
+static int get_block_status(BlockDriverState *bs, int64_t offset,
+int64_t bytes, MapEntry *e)
 {
 int64_t ret;
 int depth;
 BlockDriverState *file;
 bool has_offset;
+int nb_sectors = bytes >> BDRV_SECTOR_BITS;

+assert(bytes < INT_MAX);
 /* As an optimization, we could cache the current range of unallocated
  * clusters in each file of the chain, and avoid querying the same
  * range repeatedly.
@@ -2686,8 +2688,8 @@ static int get_block_status(BlockDriverState *bs, int64_t 
sector_num,

 depth = 0;
 for (;;) {
-ret = bdrv_get_block_status(bs, sector_num, nb_sectors, &nb_sectors,
-&file);
+ret = bdrv_get_block_status(bs, offset >> BDRV_SECTOR_BITS, nb_sectors,
+&nb_sectors, &file);
 if (ret < 0) {
 return ret;
 }
@@ -2707,7 +2709,7 @@ static int get_block_status(BlockDriverState *bs, int64_t 
sector_num,
 has_offset = !!(ret & BDRV_BLOCK_OFFSET_VALID);

 *e = (MapEntry) {
-.start = sector_num * BDRV_SECTOR_SIZE,
+.start = offset,
 .length = nb_sectors * BDRV_SECTOR_SIZE,
 .data = !!(ret & BDRV_BLOCK_DATA),
 .zero = !!(ret & BDRV_BLOCK_ZERO),
@@ -2837,16 +2839,12 @@ static int img_map(int argc, char **argv)

 length = blk_getlength(blk);
 while (curr.start + curr.length < length) {
-int64_t nsectors_left;
-int64_t sector_num;
-int n;
-
-sector_num = (curr.start + curr.length) >> BDRV_SECTOR_BITS;
+int64_t offset = curr.start + curr.length;
+int64_t n;

 /* Probe up to 1 GiB at a time.  */
-nsectors_left = DIV_ROUND_UP(length, BDRV_SECTOR_SIZE) - sector_num;
-n = MIN(1 << (30 - BDRV_SECTOR_BITS), nsectors_left);
-ret = get_block_status(bs, sector_num, n, &next);
+n = QEMU_ALIGN_DOWN(MIN(1 << 30, length - offset), BDRV_SECTOR_SIZE);
+ret = get_block_status(bs, offset, n, &next);

 if (ret < 0) {
 error_report("Could not read file metadata: %s", strerror(-ret));
-- 
2.13.6




[Qemu-block] [PATCH v5 01/23] block: Allow NULL file for bdrv_get_block_status()

2017-10-03 Thread Eric Blake
Not all callers care about which BDS owns the mapping for a given
range of the file.  This patch merely simplifies the callers by
consolidating the logic in the common call point, while guaranteeing
a non-NULL file to all the driver callbacks, for no semantic change.
The only caller that does not care about pnum is bdrv_is_allocated,
as invoked by vvfat; we can likewise add assertions that the rest
of the stack does not have to worry about a NULL pnum.

Furthermore, this will also set the stage for a future cleanup: when
a caller does not care about which BDS owns an offset, it would be
nice to allow the driver to optimize things to not have to return
BDRV_BLOCK_OFFSET_VALID in the first place.  In the case of fragmented
allocation (for example, it's fairly easy to create a qcow2 image
where consecutive guest addresses are not at consecutive host
addresses), the current contract requires bdrv_get_block_status()
to clamp *pnum to the limit where host addresses are no longer
consecutive, but allowing a NULL file means that *pnum could be
set to the full length of known-allocated data.
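
The consolidation pattern described above, as a standalone sketch (the type
and function names are illustrative): the common entry point always hands the
driver a non-NULL file pointer, then copies back only if the caller asked.

#include <stdio.h>

typedef struct BDS { const char *name; } BDS;

/* Driver callback may assume 'file' is never NULL. */
static int driver_status(BDS *bs, BDS **file)
{
    *file = bs;          /* this layer owns the mapping */
    return 1;
}

/* Common call point: callers may pass file == NULL if they don't care. */
static int block_status(BDS *bs, BDS **file)
{
    BDS *local_file = NULL;
    int ret = driver_status(bs, &local_file);
    if (file) {
        *file = local_file;
    }
    return ret;
}

int main(void)
{
    BDS top = { "top.qcow2" };
    BDS *owner;

    block_status(&top, NULL);      /* caller that doesn't care */
    block_status(&top, &owner);    /* caller that does */
    printf("mapping owned by %s\n", owner->name);
    return 0;
}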

Signed-off-by: Eric Blake 

---
v5: use second label for cleaner exit logic [John], use local_pnum
v4: only context changes
v3: rebase to recent changes (qcow2_measure), dropped R-b
v2: use local variable and final transfer, rather than assignment
of parameter to local
[previously in different series]:
v2: new patch, 
https://lists.gnu.org/archive/html/qemu-devel/2017-05/msg05645.html
---
 include/block/block_int.h | 10 
 block/io.c| 58 ++-
 block/mirror.c|  3 +--
 block/qcow2.c |  8 ++-
 qemu-img.c| 10 
 5 files changed, 45 insertions(+), 44 deletions(-)

diff --git a/include/block/block_int.h b/include/block/block_int.h
index 79366b94b5..3b4158f576 100644
--- a/include/block/block_int.h
+++ b/include/block/block_int.h
@@ -202,10 +202,12 @@ struct BlockDriver {
 int64_t offset, int bytes);

 /*
- * Building block for bdrv_block_status[_above]. The driver should
- * answer only according to the current layer, and should not
- * set BDRV_BLOCK_ALLOCATED, but may set BDRV_BLOCK_RAW.  See block.h
- * for the meaning of _DATA, _ZERO, and _OFFSET_VALID.
+ * Building block for bdrv_block_status[_above] and
+ * bdrv_is_allocated[_above].  The driver should answer only
+ * according to the current layer, and should not set
+ * BDRV_BLOCK_ALLOCATED, but may set BDRV_BLOCK_RAW.  See block.h
+ * for the meaning of _DATA, _ZERO, and _OFFSET_VALID.  The block
+ * layer guarantees non-NULL pnum and file.
  */
 int64_t coroutine_fn (*bdrv_co_get_block_status)(BlockDriverState *bs,
 int64_t sector_num, int nb_sectors, int *pnum,
diff --git a/block/io.c b/block/io.c
index 1e246315a7..e5a6f63eea 100644
--- a/block/io.c
+++ b/block/io.c
@@ -698,7 +698,6 @@ int bdrv_make_zero(BdrvChild *child, BdrvRequestFlags flags)
 {
 int64_t target_sectors, ret, nb_sectors, sector_num = 0;
 BlockDriverState *bs = child->bs;
-BlockDriverState *file;
 int n;

 target_sectors = bdrv_nb_sectors(bs);
@@ -711,7 +710,7 @@ int bdrv_make_zero(BdrvChild *child, BdrvRequestFlags flags)
 if (nb_sectors <= 0) {
 return 0;
 }
-ret = bdrv_get_block_status(bs, sector_num, nb_sectors, &n, &file);
+ret = bdrv_get_block_status(bs, sector_num, nb_sectors, &n, NULL);
 if (ret < 0) {
 error_report("error getting block status at sector %" PRId64 ": 
%s",
  sector_num, strerror(-ret));
@@ -1800,8 +1799,9 @@ int64_t coroutine_fn 
bdrv_co_get_block_status_from_backing(BlockDriverState *bs,
  * beyond the end of the disk image it will be clamped; if 'pnum' is set to
  * the end of the image, then the returned value will include BDRV_BLOCK_EOF.
  *
- * If returned value is positive and BDRV_BLOCK_OFFSET_VALID bit is set, 'file'
- * points to the BDS which the sector range is allocated in.
+ * If returned value is positive, BDRV_BLOCK_OFFSET_VALID bit is set, and
+ * 'file' is non-NULL, then '*file' points to the BDS which the sector range
+ * is allocated in.
  */
 static int64_t coroutine_fn bdrv_co_get_block_status(BlockDriverState *bs,
  int64_t sector_num,
@@ -1811,16 +1811,19 @@ static int64_t coroutine_fn 
bdrv_co_get_block_status(BlockDriverState *bs,
 int64_t total_sectors;
 int64_t n;
 int64_t ret, ret2;
+BlockDriverState *local_file = NULL;
+int local_pnum = 0;

-*file = NULL;
+assert(pnum);
 total_sectors = bdrv_nb_sectors(bs);
 if (total_sectors < 0) {
-return total_sectors;
+ret = total_sectors;
+goto early_out;
 }

 if (sector_num >= total_sectors || !nb_sectors) {
-*pnum = 0;
-return sector_num >= total_sectors ? BDRV_BLOCK_EOF : 0;
+

[Qemu-block] [PATCH v5 04/23] qcow2: Switch is_zero_sectors() to byte-based

2017-10-03 Thread Eric Blake
We are gradually converting to byte-based interfaces, as they are
easier to reason about than sector-based.  Convert another internal
function (no semantic change), and rename it to is_zero() in the
process.

Signed-off-by: Eric Blake 
Reviewed-by: Fam Zheng 
Reviewed-by: John Snow 

---
v3-v5: no change
v2: rename function, rebase to upstream changes
---
 block/qcow2.c | 32 ++--
 1 file changed, 18 insertions(+), 14 deletions(-)

diff --git a/block/qcow2.c b/block/qcow2.c
index bcd5c4a34c..e0de46f530 100644
--- a/block/qcow2.c
+++ b/block/qcow2.c
@@ -2972,21 +2972,28 @@ finish:
 }


-static bool is_zero_sectors(BlockDriverState *bs, int64_t start,
-uint32_t count)
+static bool is_zero(BlockDriverState *bs, int64_t offset, int64_t bytes)
 {
 int nr;
 int64_t res;
+int64_t start;

-if (start + count > bs->total_sectors) {
-count = bs->total_sectors - start;
+/* Widen to sector boundaries, then clamp to image length, before
+ * checking status of underlying sectors */
+start = QEMU_ALIGN_DOWN(offset, BDRV_SECTOR_SIZE);
+bytes = QEMU_ALIGN_UP(offset + bytes, BDRV_SECTOR_SIZE) - start;
+
+if (start + bytes > bs->total_sectors * BDRV_SECTOR_SIZE) {
+bytes = bs->total_sectors * BDRV_SECTOR_SIZE - start;
 }

-if (!count) {
+if (!bytes) {
 return true;
 }
-res = bdrv_get_block_status_above(bs, NULL, start, count, , NULL);
-return res >= 0 && (res & BDRV_BLOCK_ZERO) && nr == count;
+res = bdrv_get_block_status_above(bs, NULL, start >> BDRV_SECTOR_BITS,
+  bytes >> BDRV_SECTOR_BITS, , NULL);
+return res >= 0 && (res & BDRV_BLOCK_ZERO) &&
+nr * BDRV_SECTOR_SIZE == bytes;
 }

 static coroutine_fn int qcow2_co_pwrite_zeroes(BlockDriverState *bs,
@@ -3004,24 +3011,21 @@ static coroutine_fn int 
qcow2_co_pwrite_zeroes(BlockDriverState *bs,
 }

 if (head || tail) {
-int64_t cl_start = (offset - head) >> BDRV_SECTOR_BITS;
 uint64_t off;
 unsigned int nr;

 assert(head + bytes <= s->cluster_size);

 /* check whether remainder of cluster already reads as zero */
-if (!(is_zero_sectors(bs, cl_start,
-  DIV_ROUND_UP(head, BDRV_SECTOR_SIZE)) &&
-  is_zero_sectors(bs, (offset + bytes) >> BDRV_SECTOR_BITS,
-  DIV_ROUND_UP(-tail & (s->cluster_size - 1),
-   BDRV_SECTOR_SIZE {
+if (!(is_zero(bs, offset - head, head) &&
+  is_zero(bs, offset + bytes,
+  tail ? s->cluster_size - tail : 0))) {
 return -ENOTSUP;
 }

 qemu_co_mutex_lock(>lock);
 /* We can have new write after previous check */
-offset = cl_start << BDRV_SECTOR_BITS;
+offset = QEMU_ALIGN_DOWN(offset, s->cluster_size);
 bytes = s->cluster_size;
 nr = s->cluster_size;
 ret = qcow2_get_cluster_offset(bs, offset, , );
-- 
2.13.6




[Qemu-block] [PATCH v5 07/23] block: Convert bdrv_get_block_status() to bytes

2017-10-03 Thread Eric Blake
We are gradually moving away from sector-based interfaces, towards
byte-based.  In the common case, allocation is unlikely to ever use
values that are not naturally sector-aligned, but it is possible
that byte-based values will let us be more precise about allocation
at the end of an unaligned file that can do byte-based access.

Changing the name of the function from bdrv_get_block_status() to
bdrv_block_status() ensures that the compiler enforces that all
callers are updated.  For now, the io.c layer still assert()s that
all callers are sector-aligned, but that can be relaxed when a later
patch implements byte-based block status in the drivers.

Note that we have an inherent limitation in the BDRV_BLOCK_* return
values: BDRV_BLOCK_OFFSET_VALID can only return the start of a
sector, even if we later relax the interface to query for the status
starting at an intermediate byte; document the obvious interpretation
that valid offsets are always sector-relative.

Therefore, for the most part this patch is just the addition of scaling
at the callers followed by inverse scaling at bdrv_block_status().  But
some code, particularly bdrv_is_allocated(), gets a lot simpler because
it no longer has to mess with sectors.

For ease of review, bdrv_get_block_status_above() will be tackled
separately.
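
For illustration only (a hedged sketch, not part of the patch), a typical
caller after the conversion looks like:

    static bool reads_as_zeroes(BlockDriverState *bs, int64_t offset,
                                int64_t bytes)
    {
        int64_t pnum; /* now a byte count rather than a sector count */
        int64_t ret;

        /* offset and bytes are byte values; io.c still asserts that they
         * are sector-aligned until the driver callbacks are converted in
         * a later series. */
        ret = bdrv_block_status(bs, offset, bytes, &pnum, NULL);
        return ret >= 0 && (ret & BDRV_BLOCK_ZERO) && pnum == bytes;
    }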

Signed-off-by: Eric Blake 

---
v5: drop pointless 'if (pnum)' [John], add comment
v4: no change
v3: clamp bytes to 32-bits, rather than asserting
v2: rebase to earlier changes
---
 include/block/block.h | 12 +++-
 block/io.c| 35 +++
 block/qcow2-cluster.c |  2 +-
 qemu-img.c| 20 +++-
 4 files changed, 42 insertions(+), 27 deletions(-)

diff --git a/include/block/block.h b/include/block/block.h
index be49c4ae9d..4ecd2c4a65 100644
--- a/include/block/block.h
+++ b/include/block/block.h
@@ -138,8 +138,10 @@ typedef struct HDGeometry {
  *
  * If BDRV_BLOCK_OFFSET_VALID is set, bits 9-62 (BDRV_BLOCK_OFFSET_MASK)
  * represent the offset in the returned BDS that is allocated for the
- * corresponding raw data; however, whether that offset actually contains
- * data also depends on BDRV_BLOCK_DATA and BDRV_BLOCK_ZERO, as follows:
+ * corresponding raw data.  Individual bytes are at the same sector-relative
+ * locations (and thus, this bit cannot be set for mappings which are
+ * not equivalent modulo 512).  However, whether that offset actually
+ * contains data also depends on BDRV_BLOCK_DATA, as follows:
  *
  * DATA ZERO OFFSET_VALID
  *  ttt   sectors read as zero, returned file is zero at offset
@@ -422,9 +424,9 @@ int bdrv_has_zero_init_1(BlockDriverState *bs);
 int bdrv_has_zero_init(BlockDriverState *bs);
 bool bdrv_unallocated_blocks_are_zero(BlockDriverState *bs);
 bool bdrv_can_write_zeroes_with_unmap(BlockDriverState *bs);
-int64_t bdrv_get_block_status(BlockDriverState *bs, int64_t sector_num,
-  int nb_sectors, int *pnum,
-  BlockDriverState **file);
+int64_t bdrv_block_status(BlockDriverState *bs, int64_t offset,
+  int64_t bytes, int64_t *pnum,
+  BlockDriverState **file);
 int64_t bdrv_get_block_status_above(BlockDriverState *bs,
 BlockDriverState *base,
 int64_t sector_num,
diff --git a/block/io.c b/block/io.c
index afba2da1c4..ab1853dc2d 100644
--- a/block/io.c
+++ b/block/io.c
@@ -698,7 +698,6 @@ int bdrv_make_zero(BdrvChild *child, BdrvRequestFlags flags)
 {
 int64_t target_size, ret, bytes, offset = 0;
 BlockDriverState *bs = child->bs;
-int n; /* sectors */

 target_size = bdrv_getlength(bs);
 if (target_size < 0) {
@@ -710,24 +709,23 @@ int bdrv_make_zero(BdrvChild *child, BdrvRequestFlags 
flags)
 if (bytes <= 0) {
 return 0;
 }
-ret = bdrv_get_block_status(bs, offset >> BDRV_SECTOR_BITS,
-bytes >> BDRV_SECTOR_BITS, , NULL);
+ret = bdrv_block_status(bs, offset, bytes, , NULL);
 if (ret < 0) {
 error_report("error getting block status at offset %" PRId64 ": 
%s",
  offset, strerror(-ret));
 return ret;
 }
 if (ret & BDRV_BLOCK_ZERO) {
-offset += n * BDRV_SECTOR_BITS;
+offset += bytes;
 continue;
 }
-ret = bdrv_pwrite_zeroes(child, offset, n * BDRV_SECTOR_SIZE, flags);
+ret = bdrv_pwrite_zeroes(child, offset, bytes, flags);
 if (ret < 0) {
 error_report("error writing zeroes at offset %" PRId64 ": %s",
  offset, strerror(-ret));
 return ret;
 }
-offset += n * BDRV_SECTOR_SIZE;
+offset += bytes;
 }
 }

@@ -2021,13 +2019,26 @@ int64_t bdrv_get_block_status_above(BlockDriverState 
*bs,
  

[Qemu-block] [PATCH v5 02/23] block: Add flag to avoid wasted work in bdrv_is_allocated()

2017-10-03 Thread Eric Blake
Not all callers care about which BDS owns the mapping for a given
range of the file.  In particular, bdrv_is_allocated() cares more
about finding the largest run of allocated data from the guest
perspective, whether or not that data is consecutive from the
host perspective, and whether or not the data reads as zero.
Therefore, doing subsequent refinements such as checking how much
of the format-layer allocation also satisfies BDRV_BLOCK_ZERO at
the protocol layer is wasted work - in the best case, it just
costs extra CPU cycles during a single bdrv_is_allocated(), but
in the worst case, it results in a smaller *pnum, and forces
callers to iterate through more status probes when visiting the
entire file for even more extra CPU cycles.

This patch only optimizes the block layer (no behavior change when
mapping is true, but skip unnecessary effort when it is false).
Then when subsequent patches tweak the driver callback to be
byte-based, we can also pass this hint through to the driver.
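
For illustration only (hypothetical internal caller, not part of this
patch): a query made on behalf of bdrv_is_allocated() can pass
mapping=false, so the block layer may skip the protocol-layer probe and
possibly report a larger *pnum:

    int pnum;
    BlockDriverState *file;
    int64_t ret;

    /* mapping=false: only BDRV_BLOCK_ALLOCATED and the run length matter,
     * so skip refinements that are useful solely for reporting offsets or
     * detecting zeroes at the protocol layer. */
    ret = bdrv_co_get_block_status(bs, false, sector_num, nb_sectors,
                                   &pnum, &file);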

Signed-off-by: Eric Blake 
Reviewed-by: John Snow 

---
v5: tweak commit message and one comment, rebase to previous changes,
minor enough to still add R-b
v4: only context changes
v3: s/allocation/mapping/ and flip sense of bool
v2: new patch
---
 block/io.c | 52 ++--
 1 file changed, 38 insertions(+), 14 deletions(-)

diff --git a/block/io.c b/block/io.c
index e5a6f63eea..0d8cdab583 100644
--- a/block/io.c
+++ b/block/io.c
@@ -1754,6 +1754,7 @@ typedef struct BdrvCoGetBlockStatusData {
 int nb_sectors;
 int *pnum;
 int64_t ret;
+bool mapping;
 bool done;
 } BdrvCoGetBlockStatusData;

@@ -1788,6 +1789,11 @@ int64_t coroutine_fn 
bdrv_co_get_block_status_from_backing(BlockDriverState *bs,
  * Drivers not implementing the functionality are assumed to not support
  * backing files, hence all their sectors are reported as allocated.
  *
+ * If 'mapping' is true, the caller is querying for mapping purposes,
+ * and the result should include BDRV_BLOCK_OFFSET_VALID and
+ * BDRV_BLOCK_ZERO where possible; otherwise, the result may omit those
+ * bits particularly if it allows for a larger value in 'pnum'.
+ *
  * If 'sector_num' is beyond the end of the disk image the return value is
  * BDRV_BLOCK_EOF and 'pnum' is set to 0.
  *
@@ -1804,6 +1810,7 @@ int64_t coroutine_fn 
bdrv_co_get_block_status_from_backing(BlockDriverState *bs,
  * is allocated in.
  */
 static int64_t coroutine_fn bdrv_co_get_block_status(BlockDriverState *bs,
+ bool mapping,
  int64_t sector_num,
  int nb_sectors, int *pnum,
  BlockDriverState **file)
@@ -1854,14 +1861,15 @@ static int64_t coroutine_fn 
bdrv_co_get_block_status(BlockDriverState *bs,

 if (ret & BDRV_BLOCK_RAW) {
 assert(ret & BDRV_BLOCK_OFFSET_VALID && local_file);
-ret = bdrv_co_get_block_status(local_file, ret >> BDRV_SECTOR_BITS,
+ret = bdrv_co_get_block_status(local_file, mapping,
+   ret >> BDRV_SECTOR_BITS,
local_pnum, _pnum, _file);
 goto out;
 }

 if (ret & (BDRV_BLOCK_DATA | BDRV_BLOCK_ZERO)) {
 ret |= BDRV_BLOCK_ALLOCATED;
-} else {
+} else if (mapping) {
 if (bdrv_unallocated_blocks_are_zero(bs)) {
 ret |= BDRV_BLOCK_ZERO;
 } else if (bs->backing) {
@@ -1873,12 +1881,13 @@ static int64_t coroutine_fn 
bdrv_co_get_block_status(BlockDriverState *bs,
 }
 }

-if (local_file && local_file != bs &&
+if (mapping && local_file && local_file != bs &&
 (ret & BDRV_BLOCK_DATA) && !(ret & BDRV_BLOCK_ZERO) &&
 (ret & BDRV_BLOCK_OFFSET_VALID)) {
 int file_pnum;

-ret2 = bdrv_co_get_block_status(local_file, ret >> BDRV_SECTOR_BITS,
+ret2 = bdrv_co_get_block_status(local_file, mapping,
+ret >> BDRV_SECTOR_BITS,
 local_pnum, _pnum, NULL);
 if (ret2 >= 0) {
 /* Ignore errors.  This is just providing extra information, it
@@ -1915,6 +1924,7 @@ static int64_t coroutine_fn 
bdrv_co_get_block_status(BlockDriverState *bs,

 static int64_t coroutine_fn bdrv_co_get_block_status_above(BlockDriverState 
*bs,
 BlockDriverState *base,
+bool mapping,
 int64_t sector_num,
 int nb_sectors,
 int *pnum,
@@ -1926,7 +1936,8 @@ static int64_t coroutine_fn 
bdrv_co_get_block_status_above(BlockDriverState *bs,

 assert(bs != base);
 for (p = bs; p != base; p = backing_bs(p)) {
-ret = bdrv_co_get_block_status(p, sector_num, nb_sectors, pnum, file);
+ret = bdrv_co_get_block_status(p, mapping, sector_num, nb_sectors,
+ 

[Qemu-block] [PATCH v5 00/23] make bdrv_get_block_status byte-based

2017-10-03 Thread Eric Blake
There are patches floating around to add NBD_CMD_BLOCK_STATUS,
but NBD wants to report status on byte granularity (even if the
reporting will probably be naturally aligned to sectors or even
much higher levels).  I've therefore started the task of
converting our block status code to report at a byte granularity
rather than sectors.

Now that 2.11 is open, I'm rebasing/reposting the remaining patches.

The overall conversion currently looks like:
part 1: bdrv_is_allocated (merged, commit 51b0a488)
part 2: dirty-bitmap (v10 is queued [1])
part 3: bdrv_get_block_status (this series, v4 at [2])
part 4: .bdrv_co_block_status (v3 is posted [4], mostly reviewed)

Available as a tag at:
git fetch git://repo.or.cz/qemu/ericb.git nbd-byte-status-v4

Based-on: <20170925145526.32690-1-ebl...@redhat.com>
([PATCH v10 00/20] make dirty-bitmap byte-based)
Based-on: <20171004014347.25099-1-ebl...@redhat.com>
([PATCH v2 0/5] block: Avoid copy-on-read assertions)

[1] https://lists.gnu.org/archive/html/qemu-devel/2017-09/msg06848.html
[2] https://lists.gnu.org/archive/html/qemu-devel/2017-09/msg03543.html
[3] https://lists.gnu.org/archive/html/qemu-devel/2017-09/msg03812.html
[4] https://lists.gnu.org/archive/html/qemu-devel/2017-10/msg00524.html

Since v4:
- rebase to fixes for copy-on-read
- tweak bdrv_co_block_status goto/label usage for easier reading [John]
- more added comments and improved commit messages
- fix a couple of bugs, such as wrong trace-events usage
- add R-b where things didn't change drastically

001/23:[0042] [FC] 'block: Allow NULL file for bdrv_get_block_status()'
002/23:[0006] [FC] 'block: Add flag to avoid wasted work in bdrv_is_allocated()'
003/23:[0003] [FC] 'block: Make bdrv_round_to_clusters() signature more useful'
004/23:[] [--] 'qcow2: Switch is_zero_sectors() to byte-based'
005/23:[] [--] 'block: Switch bdrv_make_zero() to byte-based'
006/23:[] [--] 'qemu-img: Switch get_block_status() to byte-based'
007/23:[0010] [FC] 'block: Convert bdrv_get_block_status() to bytes'
008/23:[0042] [FC] 'block: Switch bdrv_co_get_block_status() to byte-based'
009/23:[] [--] 'block: Switch BdrvCoGetBlockStatusData to byte-based'
010/23:[] [--] 'block: Switch bdrv_common_block_status_above() to 
byte-based'
011/23:[] [-C] 'block: Switch bdrv_co_get_block_status_above() to 
byte-based'
012/23:[0019] [FC] 'block: Convert bdrv_get_block_status_above() to bytes'
013/23:[0008] [FC] 'qemu-img: Simplify logic in img_compare()'
014/23:[] [--] 'qemu-img: Speed up compare on pre-allocated larger file'
015/23:[] [--] 'qemu-img: Add find_nonzero()'
016/23:[] [--] 'qemu-img: Drop redundant error message in compare'
017/23:[] [--] 'qemu-img: Change check_empty_sectors() to byte-based'
018/23:[] [--] 'qemu-img: Change compare_sectors() to be byte-based'
019/23:[] [--] 'qemu-img: Change img_rebase() to be byte-based'
020/23:[0005] [FC] 'qemu-img: Change img_compare() to be byte-based'
021/23:[0061] [FC] 'block: Align block status requests'
022/23:[] [--] 'block: Relax bdrv_aligned_preadv() assertion'
023/23:[] [--] 'qemu-io: Relax 'alloc' now that block-status doesn't assert'

Eric Blake (23):
  block: Allow NULL file for bdrv_get_block_status()
  block: Add flag to avoid wasted work in bdrv_is_allocated()
  block: Make bdrv_round_to_clusters() signature more useful
  qcow2: Switch is_zero_sectors() to byte-based
  block: Switch bdrv_make_zero() to byte-based
  qemu-img: Switch get_block_status() to byte-based
  block: Convert bdrv_get_block_status() to bytes
  block: Switch bdrv_co_get_block_status() to byte-based
  block: Switch BdrvCoGetBlockStatusData to byte-based
  block: Switch bdrv_common_block_status_above() to byte-based
  block: Switch bdrv_co_get_block_status_above() to byte-based
  block: Convert bdrv_get_block_status_above() to bytes
  qemu-img: Simplify logic in img_compare()
  qemu-img: Speed up compare on pre-allocated larger file
  qemu-img: Add find_nonzero()
  qemu-img: Drop redundant error message in compare
  qemu-img: Change check_empty_sectors() to byte-based
  qemu-img: Change compare_sectors() to be byte-based
  qemu-img: Change img_rebase() to be byte-based
  qemu-img: Change img_compare() to be byte-based
  block: Align block status requests
  block: Relax bdrv_aligned_preadv() assertion
  qemu-io: Relax 'alloc' now that block-status doesn't assert

 include/block/block.h  |  26 ++--
 include/block/block_int.h  |  11 +-
 block/io.c | 303 +
 block/blkdebug.c   |  13 +-
 block/mirror.c |  24 +--
 block/qcow2-cluster.c  |   2 +-
 block/qcow2.c  |  53 +++
 qemu-img.c | 365 -
 qemu-io-cmds.c |  13 --
 block/trace-events |   2 +-
 tests/qemu-iotests/074.out |   2 -
 tests/qemu-iotests/177 |  12 +-
 tests/qemu-iotests/177.out |  19 ++-
 13 files changed, 431 insertions(+), 

[Qemu-block] [PATCH v5 03/23] block: Make bdrv_round_to_clusters() signature more useful

2017-10-03 Thread Eric Blake
In the process of converting sector-based interfaces to bytes,
I'm finding it easier to represent a byte count as a 64-bit
integer at the block layer (even if we are internally capped
by SIZE_MAX or even INT_MAX for individual transactions, it's
still nicer to not have to worry about truncation/overflow
issues on as many variables).  Update the signature of
bdrv_round_to_clusters() to uniformly use int64_t, matching
the signature already chosen for bdrv_is_allocated and the
fact that off_t is also a signed type, then adjust clients
according to the required fallout (even where the result could
now exceed 32 bits, no client is directly assigning the result
into a 32-bit value without breaking things into a loop first).
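
As a minimal sketch (not from the patch; it assumes the QEMU block-layer
context and values already validated by the caller), the widened
signature is used as:

    int64_t cluster_offset;
    int64_t cluster_bytes; /* previously 'unsigned int' */

    /* Rounding out to cluster boundaries can only grow 'bytes', so a
     * 64-bit result avoids truncation concerns even for requests close
     * to the 2G limit. */
    bdrv_round_to_clusters(bs, offset, bytes, &cluster_offset,
                           &cluster_bytes);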

Signed-off-by: Eric Blake 

---
v5: depends on copy-on-read fixes [John], fix incorrect trace, update
commit message to document rounding considerations, drop R-b
v4: only context changes
v3: no change
v2: fix commit message [John], rebase to earlier changes, including
mirror_clip_bytes() signature update
---
 include/block/block.h | 4 ++--
 block/io.c| 6 +++---
 block/mirror.c| 7 +++
 block/trace-events| 2 +-
 4 files changed, 9 insertions(+), 10 deletions(-)

diff --git a/include/block/block.h b/include/block/block.h
index 3c3af462e4..be49c4ae9d 100644
--- a/include/block/block.h
+++ b/include/block/block.h
@@ -475,9 +475,9 @@ int bdrv_get_flags(BlockDriverState *bs);
 int bdrv_get_info(BlockDriverState *bs, BlockDriverInfo *bdi);
 ImageInfoSpecific *bdrv_get_specific_info(BlockDriverState *bs);
 void bdrv_round_to_clusters(BlockDriverState *bs,
-int64_t offset, unsigned int bytes,
+int64_t offset, int64_t bytes,
 int64_t *cluster_offset,
-unsigned int *cluster_bytes);
+int64_t *cluster_bytes);

 const char *bdrv_get_encrypted_filename(BlockDriverState *bs);
 void bdrv_get_backing_filename(BlockDriverState *bs,
diff --git a/block/io.c b/block/io.c
index 0d8cdab583..e4d5d33805 100644
--- a/block/io.c
+++ b/block/io.c
@@ -449,9 +449,9 @@ static void mark_request_serialising(BdrvTrackedRequest 
*req, uint64_t align)
  * Round a region to cluster boundaries
  */
 void bdrv_round_to_clusters(BlockDriverState *bs,
-int64_t offset, unsigned int bytes,
+int64_t offset, int64_t bytes,
 int64_t *cluster_offset,
-unsigned int *cluster_bytes)
+int64_t *cluster_bytes)
 {
 BlockDriverInfo bdi;

@@ -949,7 +949,7 @@ static int coroutine_fn bdrv_co_do_copy_on_readv(BdrvChild 
*child,
 struct iovec iov;
 QEMUIOVector local_qiov;
 int64_t cluster_offset;
-unsigned int cluster_bytes;
+int64_t cluster_bytes;
 size_t skip_bytes;
 int ret;
 int max_transfer = MIN_NON_ZERO(bs->bl.max_transfer,
diff --git a/block/mirror.c b/block/mirror.c
index e664a5dc5a..bac2324dce 100644
--- a/block/mirror.c
+++ b/block/mirror.c
@@ -190,10 +190,9 @@ static int mirror_cow_align(MirrorBlockJob *s, int64_t 
*offset,
 bool need_cow;
 int ret = 0;
 int64_t align_offset = *offset;
-unsigned int align_bytes = *bytes;
+int64_t align_bytes = *bytes;
 int max_bytes = s->granularity * s->max_iov;

-assert(*bytes < INT_MAX);
 need_cow = !test_bit(*offset / s->granularity, s->cow_bitmap);
 need_cow |= !test_bit((*offset + *bytes - 1) / s->granularity,
   s->cow_bitmap);
@@ -388,7 +387,7 @@ static uint64_t coroutine_fn 
mirror_iteration(MirrorBlockJob *s)
 while (nb_chunks > 0 && offset < s->bdev_length) {
 int64_t ret;
 int io_sectors;
-unsigned int io_bytes;
+int64_t io_bytes;
 int64_t io_bytes_acct;
 enum MirrorMethod {
 MIRROR_METHOD_COPY,
@@ -413,7 +412,7 @@ static uint64_t coroutine_fn 
mirror_iteration(MirrorBlockJob *s)
 io_bytes = s->granularity;
 } else if (ret >= 0 && !(ret & BDRV_BLOCK_DATA)) {
 int64_t target_offset;
-unsigned int target_bytes;
+int64_t target_bytes;
 bdrv_round_to_clusters(blk_bs(s->target), offset, io_bytes,
_offset, _bytes);
 if (target_offset == offset &&
diff --git a/block/trace-events b/block/trace-events
index 25dd5a3026..11c8d5f590 100644
--- a/block/trace-events
+++ b/block/trace-events
@@ -12,7 +12,7 @@ blk_co_pwritev(void *blk, void *bs, int64_t offset, unsigned 
int bytes, int flag
 bdrv_co_preadv(void *bs, int64_t offset, int64_t nbytes, unsigned int flags) 
"bs %p offset %"PRId64" nbytes %"PRId64" flags 0x%x"
 bdrv_co_pwritev(void *bs, int64_t offset, int64_t nbytes, unsigned int flags) 
"bs %p offset %"PRId64" nbytes %"PRId64" flags 0x%x"
 bdrv_co_pwrite_zeroes(void *bs, int64_t offset, int count, int 

[Qemu-block] [PATCH v5 05/23] block: Switch bdrv_make_zero() to byte-based

2017-10-03 Thread Eric Blake
We are gradually converting to byte-based interfaces, as they are
easier to reason about than sector-based.  Change the internal
loop iteration of zeroing a device to track by bytes instead of
sectors (although we are still guaranteed that we iterate by steps
that are sector-aligned).

Signed-off-by: Eric Blake 
Reviewed-by: Fam Zheng 
Reviewed-by: John Snow 

---
v3-v5: no change
v2: rebase to earlier changes
---
 block/io.c | 32 
 1 file changed, 16 insertions(+), 16 deletions(-)

diff --git a/block/io.c b/block/io.c
index e4d5d33805..afba2da1c4 100644
--- a/block/io.c
+++ b/block/io.c
@@ -696,38 +696,38 @@ int bdrv_pwrite_zeroes(BdrvChild *child, int64_t offset,
  */
 int bdrv_make_zero(BdrvChild *child, BdrvRequestFlags flags)
 {
-int64_t target_sectors, ret, nb_sectors, sector_num = 0;
+int64_t target_size, ret, bytes, offset = 0;
 BlockDriverState *bs = child->bs;
-int n;
+int n; /* sectors */

-target_sectors = bdrv_nb_sectors(bs);
-if (target_sectors < 0) {
-return target_sectors;
+target_size = bdrv_getlength(bs);
+if (target_size < 0) {
+return target_size;
 }

 for (;;) {
-nb_sectors = MIN(target_sectors - sector_num, BDRV_REQUEST_MAX_SECTORS);
-if (nb_sectors <= 0) {
+bytes = MIN(target_size - offset, BDRV_REQUEST_MAX_BYTES);
+if (bytes <= 0) {
 return 0;
 }
-ret = bdrv_get_block_status(bs, sector_num, nb_sectors, , NULL);
+ret = bdrv_get_block_status(bs, offset >> BDRV_SECTOR_BITS,
+bytes >> BDRV_SECTOR_BITS, , NULL);
 if (ret < 0) {
-error_report("error getting block status at sector %" PRId64 ": 
%s",
- sector_num, strerror(-ret));
+error_report("error getting block status at offset %" PRId64 ": 
%s",
+ offset, strerror(-ret));
 return ret;
 }
 if (ret & BDRV_BLOCK_ZERO) {
-sector_num += n;
+offset += n * BDRV_SECTOR_BITS;
 continue;
 }
-ret = bdrv_pwrite_zeroes(child, sector_num << BDRV_SECTOR_BITS,
- n << BDRV_SECTOR_BITS, flags);
+ret = bdrv_pwrite_zeroes(child, offset, n * BDRV_SECTOR_SIZE, flags);
 if (ret < 0) {
-error_report("error writing zeroes at sector %" PRId64 ": %s",
- sector_num, strerror(-ret));
+error_report("error writing zeroes at offset %" PRId64 ": %s",
+ offset, strerror(-ret));
 return ret;
 }
-sector_num += n;
+offset += n * BDRV_SECTOR_SIZE;
 }
 }

-- 
2.13.6




[Qemu-block] [PATCH v2 3/4] blockjob: expose persistent property

2017-10-03 Thread John Snow
For drive-backup and blockdev-backup, expose the persistent
property, defaulting it to false. There are no universal job
creation parameters, so the property must be added individually
to each job type for which it makes sense.

Signed-off-by: John Snow 
---
 blockdev.c   | 10 --
 qapi/block-core.json | 21 -
 2 files changed, 24 insertions(+), 7 deletions(-)

diff --git a/blockdev.c b/blockdev.c
index c08d6fb..8bbbf2a 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -3198,6 +3198,9 @@ static BlockJob *do_drive_backup(DriveBackup *backup, 
BlockJobTxn *txn,
 if (!backup->has_job_id) {
 backup->job_id = NULL;
 }
+if (!backup->has_persistent) {
+backup->persistent = false;
+}
 if (!backup->has_compress) {
 backup->compress = false;
 }
@@ -3290,7 +3293,7 @@ static BlockJob *do_drive_backup(DriveBackup *backup, 
BlockJobTxn *txn,
 }
 }
 
-job = backup_job_create(backup->job_id, false, bs, target_bs,
+job = backup_job_create(backup->job_id, backup->persistent, bs, target_bs,
 backup->speed, backup->sync, bmap, 
backup->compress,
 backup->on_source_error, backup->on_target_error,
 BLOCK_JOB_DEFAULT, NULL, NULL, txn, _err);
@@ -3341,6 +3344,9 @@ BlockJob *do_blockdev_backup(BlockdevBackup *backup, 
BlockJobTxn *txn,
 if (!backup->has_job_id) {
 backup->job_id = NULL;
 }
+if (!backup->has_persistent) {
+backup->persistent = false;
+}
 if (!backup->has_compress) {
 backup->compress = false;
 }
@@ -3369,7 +3375,7 @@ BlockJob *do_blockdev_backup(BlockdevBackup *backup, 
BlockJobTxn *txn,
 goto out;
 }
 }
-job = backup_job_create(backup->job_id, false, bs, target_bs,
+job = backup_job_create(backup->job_id, backup->persistent, bs, target_bs,
 backup->speed, backup->sync, NULL, 
backup->compress,
 backup->on_source_error, backup->on_target_error,
 BLOCK_JOB_DEFAULT, NULL, NULL, txn, _err);
diff --git a/qapi/block-core.json b/qapi/block-core.json
index 5cce49d..4c7c17b 100644
--- a/qapi/block-core.json
+++ b/qapi/block-core.json
@@ -1104,6 +1104,11 @@
 # @job-id: identifier for the newly-created block job. If
 #  omitted, the device name will be used. (Since 2.7)
 #
+# @persistent: Whether or not the job created by this command needs to be
+#  cleaned up manually via block-job-reap or not. The default is
+#  false. When true, the job will remain in a "completed" state
+#  until reaped manually with block-job-reap. (Since 2.11)
+#
 # @device: the device name or node-name of a root node which should be copied.
 #
 # @target: the target of the new image. If the file exists, or if it
@@ -1144,9 +1149,10 @@
 # Since: 1.6
 ##
 { 'struct': 'DriveBackup',
-  'data': { '*job-id': 'str', 'device': 'str', 'target': 'str',
-'*format': 'str', 'sync': 'MirrorSyncMode', '*mode': 'NewImageMode',
-'*speed': 'int', '*bitmap': 'str', '*compress': 'bool',
+  'data': { '*job-id': 'str', '*persistent': 'bool', 'device': 'str',
+'target': 'str', '*format': 'str', 'sync': 'MirrorSyncMode',
+'*mode': 'NewImageMode', '*speed': 'int', '*bitmap': 'str',
+'*compress': 'bool',
 '*on-source-error': 'BlockdevOnError',
 '*on-target-error': 'BlockdevOnError' } }
 
@@ -1156,6 +1162,11 @@
 # @job-id: identifier for the newly-created block job. If
 #  omitted, the device name will be used. (Since 2.7)
 #
+# @persistent: Whether or not the job created by this command needs to be
+#  cleaned up manually via block-job-reap or not. The default is
+#  false. When true, the job will remain in a "completed" state
+#  until reaped manually with block-job-reap. (Since 2.11)
+#
 # @device: the device name or node-name of a root node which should be copied.
 #
 # @target: the device name or node-name of the backup target node.
@@ -1185,8 +1196,8 @@
 # Since: 2.3
 ##
 { 'struct': 'BlockdevBackup',
-  'data': { '*job-id': 'str', 'device': 'str', 'target': 'str',
-'sync': 'MirrorSyncMode',
+  'data': { '*job-id': 'str', '*persistent': 'bool', 'device': 'str',
+'target': 'str', 'sync': 'MirrorSyncMode',
 '*speed': 'int',
 '*compress': 'bool',
 '*on-source-error': 'BlockdevOnError',
-- 
2.9.5




[Qemu-block] [PATCH v2 1/4] blockjob: add persistent property

2017-10-03 Thread John Snow
Add a persistent (manual-reap) property to block jobs that forces
them to linger in the block job list (visible to QMP queries) until
the user explicitly dismisses them via QMP.

The reap command itself is implemented in the next commit, and the
feature is exposed to drive-backup and blockdev-backup in the subsequent
commit.
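
A hedged sketch of a job driver opting in after this change (argument
order taken from the hunks below; the permission, speed and callback
arguments are placeholders for whatever the caller already uses):

    /* The new 'persistent' argument follows the driver.  Passing true
     * keeps the job visible to query-block-jobs after it reaches a
     * terminal state, until block-job-reap is issued. */
    job = block_job_create(job_id, &backup_job_driver, true, bs,
                           BLK_PERM_CONSISTENT_READ,
                           BLK_PERM_CONSISTENT_READ | BLK_PERM_WRITE |
                           BLK_PERM_WRITE_UNCHANGED | BLK_PERM_GRAPH_MOD,
                           speed, creation_flags, cb, opaque, errp);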

Signed-off-by: John Snow 
---
 block/backup.c   | 20 +--
 block/commit.c   |  2 +-
 block/mirror.c   |  2 +-
 block/replication.c  |  5 +++--
 block/stream.c   |  2 +-
 blockdev.c   |  8 
 blockjob.c   | 46 ++--
 include/block/block_int.h|  8 +---
 include/block/blockjob.h | 21 
 include/block/blockjob_int.h |  2 +-
 qapi/block-core.json |  7 ---
 tests/test-blockjob-txn.c|  2 +-
 tests/test-blockjob.c|  2 +-
 13 files changed, 97 insertions(+), 30 deletions(-)

diff --git a/block/backup.c b/block/backup.c
index 517c300..93ac194 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -532,15 +532,15 @@ static const BlockJobDriver backup_job_driver = {
 .drain  = backup_drain,
 };
 
-BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
-  BlockDriverState *target, int64_t speed,
-  MirrorSyncMode sync_mode, BdrvDirtyBitmap *sync_bitmap,
-  bool compress,
-  BlockdevOnError on_source_error,
-  BlockdevOnError on_target_error,
-  int creation_flags,
-  BlockCompletionFunc *cb, void *opaque,
-  BlockJobTxn *txn, Error **errp)
+BlockJob *backup_job_create(const char *job_id, bool persistent,
+BlockDriverState *bs, BlockDriverState *target,
+int64_t speed, MirrorSyncMode sync_mode,
+BdrvDirtyBitmap *sync_bitmap, bool compress,
+BlockdevOnError on_source_error,
+BlockdevOnError on_target_error,
+int creation_flags,
+BlockCompletionFunc *cb, void *opaque,
+BlockJobTxn *txn, Error **errp)
 {
 int64_t len;
 BlockDriverInfo bdi;
@@ -608,7 +608,7 @@ BlockJob *backup_job_create(const char *job_id, 
BlockDriverState *bs,
 }
 
 /* job->common.len is fixed, so we can't allow resize */
-job = block_job_create(job_id, _job_driver, bs,
+job = block_job_create(job_id, _job_driver, persistent, bs,
BLK_PERM_CONSISTENT_READ,
BLK_PERM_CONSISTENT_READ | BLK_PERM_WRITE |
BLK_PERM_WRITE_UNCHANGED | BLK_PERM_GRAPH_MOD,
diff --git a/block/commit.c b/block/commit.c
index 8f0e835..308a5fd 100644
--- a/block/commit.c
+++ b/block/commit.c
@@ -304,7 +304,7 @@ void commit_start(const char *job_id, BlockDriverState *bs,
 return;
 }
 
-s = block_job_create(job_id, _job_driver, bs, 0, BLK_PERM_ALL,
+s = block_job_create(job_id, _job_driver, false, bs, 0, 
BLK_PERM_ALL,
  speed, BLOCK_JOB_DEFAULT, NULL, NULL, errp);
 if (!s) {
 return;
diff --git a/block/mirror.c b/block/mirror.c
index 6f5cb9f..013e73a 100644
--- a/block/mirror.c
+++ b/block/mirror.c
@@ -1180,7 +1180,7 @@ static void mirror_start_job(const char *job_id, 
BlockDriverState *bs,
 }
 
 /* Make sure that the source is not resized while the job is running */
-s = block_job_create(job_id, driver, mirror_top_bs,
+s = block_job_create(job_id, driver, false, mirror_top_bs,
  BLK_PERM_CONSISTENT_READ,
  BLK_PERM_CONSISTENT_READ | BLK_PERM_WRITE_UNCHANGED |
  BLK_PERM_WRITE | BLK_PERM_GRAPH_MOD, speed,
diff --git a/block/replication.c b/block/replication.c
index 3a4e682..6c59f00 100644
--- a/block/replication.c
+++ b/block/replication.c
@@ -539,8 +539,9 @@ static void replication_start(ReplicationState *rs, 
ReplicationMode mode,
 bdrv_op_block_all(top_bs, s->blocker);
 bdrv_op_unblock(top_bs, BLOCK_OP_TYPE_DATAPLANE, s->blocker);
 
-job = backup_job_create(NULL, s->secondary_disk->bs, 
s->hidden_disk->bs,
-0, MIRROR_SYNC_MODE_NONE, NULL, false,
+job = backup_job_create(NULL, false, s->secondary_disk->bs,
+s->hidden_disk->bs, 0, MIRROR_SYNC_MODE_NONE,
+NULL, false,
 BLOCKDEV_ON_ERROR_REPORT,
 BLOCKDEV_ON_ERROR_REPORT, BLOCK_JOB_INTERNAL,
 backup_job_completed, bs, NULL, _err);
diff --git a/block/stream.c b/block/stream.c
index e6f7234..c644f34 100644
--- a/block/stream.c
+++ b/block/stream.c
@@ 

[Qemu-block] [PATCH v2 0/4] blockjobs: add explicit job reaping

2017-10-03 Thread John Snow
For jobs that complete when a monitor isn't looking, there's no way to
tell what the job's final return code was. We need to allow jobs to
remain in the list until queried for reliable management.

V2:
 - Added tests!
 - Changed property name (Jeff, Paolo)

RFC:
The next version will add tests for transactions.
Kevin, can you please take a look at bdrv_is_root_node and how it is
used with respect to do_drive_backup? I suspect that in this case
"is root" should actually be "true", but a node in use by a job has
two roles; child_root and child_job, so it starts returning false here.

That's fine because we prevent a collision that way, but it makes the
error messages pretty bad and misleading. Do you have a quick suggestion?
(Should I just amend the loop to allow non-root nodes as long as they
happen to be jobs so that the job creation code can check permissions?)

John Snow (4):
  blockjob: add persistent property
  qmp: add block-job-reap command
  blockjob: expose persistent property
  iotests: test manual job reaping

 block/backup.c   |  20 ++--
 block/commit.c   |   2 +-
 block/mirror.c   |   2 +-
 block/replication.c  |   5 +-
 block/stream.c   |   2 +-
 block/trace-events   |   1 +
 blockdev.c   |  28 +-
 blockjob.c   |  46 -
 include/block/block_int.h|   8 +-
 include/block/blockjob.h |  21 
 include/block/blockjob_int.h |   2 +-
 qapi/block-core.json |  49 --
 tests/qemu-iotests/056   | 227 +++
 tests/qemu-iotests/056.out   |   4 +-
 tests/test-blockjob-txn.c|   2 +-
 tests/test-blockjob.c|   2 +-
 16 files changed, 384 insertions(+), 37 deletions(-)

-- 
2.9.5




[Qemu-block] [PATCH v2 2/4] qmp: add block-job-reap command

2017-10-03 Thread John Snow
For jobs that have finished (either completed or canceled), allow the
user to dismiss the job's status reports via block-job-reap.

Signed-off-by: John Snow 
---
 block/trace-events   |  1 +
 blockdev.c   | 14 ++
 qapi/block-core.json | 21 +
 3 files changed, 36 insertions(+)

diff --git a/block/trace-events b/block/trace-events
index 25dd5a3..9580efa 100644
--- a/block/trace-events
+++ b/block/trace-events
@@ -46,6 +46,7 @@ qmp_block_job_cancel(void *job) "job %p"
 qmp_block_job_pause(void *job) "job %p"
 qmp_block_job_resume(void *job) "job %p"
 qmp_block_job_complete(void *job) "job %p"
+qmp_block_job_reap(void *job) "job %p"
 qmp_block_stream(void *bs, void *job) "bs %p job %p"
 
 # block/file-win32.c
diff --git a/blockdev.c b/blockdev.c
index eeb4986..c08d6fb 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -3766,6 +3766,20 @@ void qmp_block_job_complete(const char *device, Error 
**errp)
 aio_context_release(aio_context);
 }
 
+void qmp_block_job_reap(const char *device, Error **errp)
+{
+AioContext *aio_context;
+BlockJob *job = find_block_job(device, _context, errp);
+
+if (!job) {
+return;
+}
+
+trace_qmp_block_job_reap(job);
+block_job_reap(, errp);
+aio_context_release(aio_context);
+}
+
 void qmp_change_backing_file(const char *device,
  const char *image_node_name,
  const char *backing_file,
diff --git a/qapi/block-core.json b/qapi/block-core.json
index a4f5e10..5cce49d 100644
--- a/qapi/block-core.json
+++ b/qapi/block-core.json
@@ -2161,6 +2161,27 @@
 { 'command': 'block-job-complete', 'data': { 'device': 'str' } }
 
 ##
+# @block-job-reap:
+#
+# For jobs that have already completed, remove them from the block-job-query
+# list. This command only needs to be run for jobs which were started with the
+# persistent=true option.
+#
+# This command will refuse to operate on any job that has not yet reached
+# its terminal state. "cancel" or "complete" will still need to be used as
+# appropriate.
+#
+# @device: The job identifier. This used to be a device name (hence
+#  the name of the parameter), but since QEMU 2.7 it can have
+#  other values.
+#
+# Returns: Nothing on success
+#
+# Since: 2.11
+##
+{ 'command': 'block-job-reap', 'data': { 'device': 'str' } }
+
+##
 # @BlockdevDiscardOptions:
 #
 # Determines how to handle discard requests.
-- 
2.9.5




[Qemu-block] [PATCH v2 4/4] iotests: test manual job reaping

2017-10-03 Thread John Snow
RFC: The error returned by a job creation command when that device
already has a job attached has become misleading; "Someone should
do something about that!"

Signed-off-by: John Snow 
---
 tests/qemu-iotests/056 | 227 +
 tests/qemu-iotests/056.out |   4 +-
 2 files changed, 229 insertions(+), 2 deletions(-)

diff --git a/tests/qemu-iotests/056 b/tests/qemu-iotests/056
index 04f2c3c..d6bed20 100755
--- a/tests/qemu-iotests/056
+++ b/tests/qemu-iotests/056
@@ -29,6 +29,26 @@ backing_img = os.path.join(iotests.test_dir, 'backing.img')
 test_img = os.path.join(iotests.test_dir, 'test.img')
 target_img = os.path.join(iotests.test_dir, 'target.img')
 
+def img_create(img, fmt=iotests.imgfmt, size='64M', **kwargs):
+fullname = os.path.join(iotests.test_dir, '%s.%s' % (img, fmt))
+optargs = []
+for k,v in kwargs.iteritems():
+optargs = optargs + ['-o', '%s=%s' % (k,v)]
+args = ['create', '-f', fmt] + optargs + [fullname, size]
+iotests.qemu_img(*args)
+return fullname
+
+def try_remove(img):
+try:
+os.remove(img)
+except OSError:
+pass
+
+def io_write_patterns(img, patterns):
+for pattern in patterns:
+iotests.qemu_io('-c', 'write -P%s %s %s' % pattern, img)
+
+
 class TestSyncModesNoneAndTop(iotests.QMPTestCase):
 image_len = 64 * 1024 * 1024 # MB
 
@@ -108,5 +128,212 @@ class TestBeforeWriteNotifier(iotests.QMPTestCase):
 event = self.cancel_and_wait()
 self.assert_qmp(event, 'data/type', 'backup')
 
+class BackupTest(iotests.QMPTestCase):
+def setUp(self):
+self.vm = iotests.VM()
+self.test_img = img_create('test')
+self.dest_img = img_create('dest')
+self.vm.add_drive(self.test_img)
+self.vm.launch()
+
+def tearDown(self):
+self.vm.shutdown()
+try_remove(self.test_img)
+try_remove(self.dest_img)
+
+def hmp_io_writes(self, drive, patterns):
+for pattern in patterns:
+self.vm.hmp_qemu_io(drive, 'write -P%s %s %s' % pattern)
+self.vm.hmp_qemu_io(drive, 'flush')
+
+def qmp_backup_and_wait(self, cmd='drive-backup', serror=None,
+aerror=None, **kwargs):
+return (self.qmp_backup(cmd, serror, **kwargs) and
+self.qmp_backup_wait(kwargs['device'], aerror))
+
+def qmp_backup(self, cmd='drive-backup',
+   error=None, **kwargs):
+self.assertTrue('device' in kwargs)
+res = self.vm.qmp(cmd, **kwargs)
+if error:
+self.assert_qmp(res, 'error/desc', error)
+return False
+self.assert_qmp(res, 'return', {})
+return True
+
+def qmp_backup_wait(self, device, error=None):
+event = self.vm.event_wait(name="BLOCK_JOB_COMPLETED",
+   match={'data': {'device': device}})
+self.assertNotEqual(event, None)
+try:
+failure = self.dictpath(event, 'data/error')
+except AssertionError:
+# Backup succeeded.
+self.assert_qmp(event, 'data/offset', event['data']['len'])
+return True
+else:
+# Failure.
+self.assert_qmp(event, 'data/error', error)
+return False
+
+def test_reap_false(self):
+res = self.vm.qmp('query-block-jobs')
+self.assert_qmp(res, 'return', [])
+self.qmp_backup_and_wait(device='drive0', format=iotests.imgfmt,
+ sync='full', target=self.dest_img, persistent=False)
+res = self.vm.qmp('query-block-jobs')
+self.assert_qmp(res, 'return', [])
+
+def test_reap_true(self):
+res = self.vm.qmp('query-block-jobs')
+self.assert_qmp(res, 'return', [])
+self.qmp_backup_and_wait(device='drive0', format=iotests.imgfmt,
+ sync='full', target=self.dest_img, persistent=True)
+res = self.vm.qmp('query-block-jobs')
+self.assert_qmp(res, 'return[0]/finished', True)
+res = self.vm.qmp('block-job-reap', device='drive0')
+self.assert_qmp(res, 'return', {})
+res = self.vm.qmp('query-block-jobs')
+self.assert_qmp(res, 'return', [])
+
+def test_reap_bad_id(self):
+res = self.vm.qmp('query-block-jobs')
+self.assert_qmp(res, 'return', [])
+res = self.vm.qmp('block-job-reap', device='foobar')
+self.assert_qmp(res, 'error/class', 'DeviceNotActive')
+
+def test_reap_collision(self):
+res = self.vm.qmp('query-block-jobs')
+self.assert_qmp(res, 'return', [])
+self.qmp_backup_and_wait(device='drive0', format=iotests.imgfmt,
+ sync='full', target=self.dest_img, persistent=True)
+res = self.vm.qmp('query-block-jobs')
+self.assert_qmp(res, 'return[0]/finished', True)
+# Leave zombie job un-reaped, observe a failure:

[Qemu-block] [PATCH v2 5/5] iotests: Add test 197 for covering copy-on-read

2017-10-03 Thread Eric Blake
Add a test for qcow2 copy-on-read behavior, including exposure
for the just-fixed bugs.

The copy-on-read behavior is always to a qcow2 image, but the
test is careful to allow running with most image protocol/format
combos as the backing file being copied from (luks being the
exception, as it is harder to pass the right secret to all the
right places).  In fact, for './check nbd', this appears to be
the first time we've had a qcow2 image wrapping NBD, requiring
an additional line in _filter_img_create to match the similar
line in _filter_img_info.

Invoking blkdebug to prove we don't write too much took some
effort to get working; and it requires that $TEST_WRAP (based
on $TEST_DIR) not be subject to word splitting.  We may decide
later to have the entire iotests suite use relative rather than
absolute names, to avoid problems inherited by the absolute
name of $PWD or $TEST_DIR, at which point the sanity check in
this commit could be simplified.

Signed-off-by: Eric Blake 

---
v2: test 0-length query [Kevin], sanity check TEST_DIR [Jeff]

I only tested with -raw, -qcow2, -qed, and -nbd. I won't be
surprised if the test fails in some other setup...
---
 tests/qemu-iotests/common.filter |   1 +
 tests/qemu-iotests/197   | 102 +++
 tests/qemu-iotests/197.out   |  26 ++
 tests/qemu-iotests/group |   1 +
 4 files changed, 130 insertions(+)
 create mode 100755 tests/qemu-iotests/197
 create mode 100644 tests/qemu-iotests/197.out

diff --git a/tests/qemu-iotests/common.filter b/tests/qemu-iotests/common.filter
index 9d5442ecd9..227b37e941 100644
--- a/tests/qemu-iotests/common.filter
+++ b/tests/qemu-iotests/common.filter
@@ -111,6 +111,7 @@ _filter_img_create()
 sed -e "s#$IMGPROTO:$TEST_DIR#TEST_DIR#g" \
 -e "s#$TEST_DIR#TEST_DIR#g" \
 -e "s#$IMGFMT#IMGFMT#g" \
+-e 's#nbd:127.0.0.1:10810#TEST_DIR/t.IMGFMT#g' \
 -e "s# encryption=off##g" \
 -e "s# cluster_size=[0-9]\\+##g" \
 -e "s# table_size=[0-9]\\+##g" \
diff --git a/tests/qemu-iotests/197 b/tests/qemu-iotests/197
new file mode 100755
index 00..cc85388039
--- /dev/null
+++ b/tests/qemu-iotests/197
@@ -0,0 +1,102 @@
+#!/bin/bash
+#
+# Test case for copy-on-read into qcow2
+#
+# Copyright (C) 2017 Red Hat, Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see .
+#
+
+# creator
+owner=ebl...@redhat.com
+
+seq="$(basename $0)"
+echo "QA output created by $seq"
+
+here="$PWD"
+status=1 # failure is the default!
+
+# get standard environment, filters and checks
+. ./common.rc
+. ./common.filter
+
+TEST_WRAP="$TEST_DIR/t.wrap.qcow2"
+BLKDBG_CONF="$TEST_DIR/blkdebug.conf"
+
+# Sanity check: our use of blkdebug fails if $TEST_DIR contains spaces
+# or other problems
+case "$TEST_DIR" in
+*[^-_a-zA-Z0-9/]*)
+_notrun "Suspicious TEST_DIR='$TEST_DIR', cowardly refusing to run" ;;
+esac
+
+_cleanup()
+{
+_cleanup_test_img
+rm -f "$BLKDBG_CONF"
+}
+trap "_cleanup; exit \$status" 0 1 2 3 15
+
+# Test is supported for any backing file; but we force qcow2 for our wrapper.
+_supported_fmt generic
+_supported_proto generic
+_supported_os Linux
+# LUKS support may be possible, but it complicates things.
+_unsupported_fmt luks
+
+echo
+echo '=== Copy-on-read ==='
+echo
+
+# Prep the images
+_make_test_img 4G
+$QEMU_IO -c "write -P 55 3G 1k" "$TEST_IMG" | _filter_qemu_io
+IMGPROTO=file IMGFMT=qcow2 IMGOPTS= TEST_IMG_FILE="$TEST_WRAP" \
+_make_test_img -F "$IMGFMT" -b "$TEST_IMG" | _filter_img_create
+$QEMU_IO -f qcow2 -c "write -z -u 1M 64k" "$TEST_WRAP" | _filter_qemu_io
+
+# Ensure that a read of two clusters, but where one is already allocated,
+# does not re-write the allocated cluster
+cat > "$BLKDBG_CONF" <&1 | _filter_testdir
+
+# Break the backing chain, and show that images are identical, and that
+# we properly copied over explicit zeros.
+$QEMU_IMG rebase -u -b "" -f qcow2 "$TEST_WRAP"
+$QEMU_IO -f qcow2 -c map "$TEST_WRAP"
+_check_test_img
+$QEMU_IMG compare -f $IMGFMT -F qcow2 "$TEST_IMG" "$TEST_WRAP"
+
+# success, all done
+echo '*** done'
+status=0
diff --git a/tests/qemu-iotests/197.out b/tests/qemu-iotests/197.out
new file mode 100644
index 00..52b4137d7b
--- /dev/null
+++ b/tests/qemu-iotests/197.out
@@ -0,0 +1,26 @@
+QA output created by 197
+
+=== Copy-on-read ===
+
+Formatting 'TEST_DIR/t.IMGFMT', 

[Qemu-block] [PATCH v2 3/5] block: Add blkdebug hook for copy-on-read

2017-10-03 Thread Eric Blake
Make it possible to inject errors on writes performed during a
read operation due to copy-on-read semantics.

Signed-off-by: Eric Blake 
Reviewed-by: Jeff Cody 
Reviewed-by: Kevin Wolf 
Reviewed-by: John Snow 
Reviewed-by: Stefan Hajnoczi 
---
 qapi/block-core.json | 5 -
 block/io.c   | 1 +
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/qapi/block-core.json b/qapi/block-core.json
index 750bb0c77c..ab96e348e6 100644
--- a/qapi/block-core.json
+++ b/qapi/block-core.json
@@ -2538,6 +2538,8 @@
 #
 # @l1_shrink_free_l2_clusters: discard the l2 tables. (since 2.11)
 #
+# @cor_write: a write due to copy-on-read (since 2.11)
+#
 # Since: 2.9
 ##
 { 'enum': 'BlkdebugEvent', 'prefix': 'BLKDBG',
@@ -2555,7 +2557,8 @@
 'flush_to_disk', 'pwritev_rmw_head', 'pwritev_rmw_after_head',
 'pwritev_rmw_tail', 'pwritev_rmw_after_tail', 'pwritev',
 'pwritev_zero', 'pwritev_done', 'empty_image_prepare',
-'l1_shrink_write_table', 'l1_shrink_free_l2_clusters' ] }
+'l1_shrink_write_table', 'l1_shrink_free_l2_clusters',
+'cor_write'] }

 ##
 # @BlkdebugInjectErrorOptions:
diff --git a/block/io.c b/block/io.c
index 1f5baac41d..d656a0485b 100644
--- a/block/io.c
+++ b/block/io.c
@@ -983,6 +983,7 @@ static int coroutine_fn bdrv_co_do_copy_on_readv(BdrvChild 
*child,
 goto err;
 }

+bdrv_debug_event(bs, BLKDBG_COR_WRITE);
 if (drv->bdrv_co_pwrite_zeroes &&
 buffer_is_zero(bounce_buffer, iov.iov_len)) {
 /* FIXME: Should we (perhaps conditionally) be setting
-- 
2.13.6




[Qemu-block] [PATCH v2 4/5] block: Perform copy-on-read in loop

2017-10-03 Thread Eric Blake
Improve our braindead copy-on-read implementation.  Pre-patch,
we have multiple issues:
- we create a bounce buffer and perform a write for the entire
  request, even if the active image already has 99% of the
  clusters occupied, and really only needs to copy-on-read the
  remaining 1% of the clusters
- our bounce buffer was as large as the read request, and can
  needlessly exhaust our memory by using double the memory of
  the request size (the original request plus our bounce buffer),
  rather than a capped maximum overhead beyond the original
- if a driver has a max_transfer limit, we are bypassing the
  normal code in bdrv_aligned_preadv() that fragments to that
  limit, and instead attempt to read the entire buffer from the
  driver in one go, which some drivers may assert on
- a client can request a large request of nearly 2G such that
  rounding the request out to cluster boundaries results in a
  byte count larger than 2G.  While this cannot exceed 32 bits,
  it DOES have some follow-on problems:
  -- the call to bdrv_driver_pread() can assert for exceeding
     BDRV_REQUEST_MAX_BYTES, if the driver is old and lacks
     .bdrv_co_preadv
  -- if the buffer is all zeroes, the subsequent call to
     bdrv_co_do_pwrite_zeroes is a no-op due to a negative size,
     which means we did not actually copy on read

Fix all of these issues by breaking up the action into a loop,
where each iteration is capped to sane limits.  Also, querying
the allocation status allows us to optimize: when data is
already present in the active layer, we don't need to bounce.

Note that the code has a telling comment that copy-on-read
should probably be a filter driver rather than a bolt-in hack
in io.c; but that remains a task for another day.
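
Condensed from the hunks below (a sketch only; error handling, the
progress/skip_bytes accounting, and filling the caller's qiov are
elided), the new shape of the operation is roughly:

    /* Each iteration is capped at max_transfer, so old drivers that
     * lack .bdrv_co_preadv never see an oversized request. */
    while (cluster_bytes) {
        int64_t pnum;
        int ret = bdrv_is_allocated(bs, cluster_offset,
                                    MIN(cluster_bytes, max_transfer), &pnum);

        if (ret < 0) {
            /* Treat an error as unallocated; the read below will fail
             * again and report a useful errno. */
            pnum = MIN(cluster_bytes, max_transfer);
        }
        if (ret > 0) {
            /* Already allocated in the top layer: read straight into the
             * caller's buffer; no copy-on-read write is needed. */
        } else {
            /* Read pnum bytes into the capped bounce buffer, then write
             * them back (or write zeroes) to populate the top layer. */
        }
        cluster_offset += pnum;
        cluster_bytes -= pnum;
    }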

CC: qemu-sta...@nongnu.org
Signed-off-by: Eric Blake 

---
v2: avoid uninit ret on 0-length op [patchew, Kevin]
---
 block/io.c | 120 +
 1 file changed, 82 insertions(+), 38 deletions(-)

diff --git a/block/io.c b/block/io.c
index d656a0485b..1e246315a7 100644
--- a/block/io.c
+++ b/block/io.c
@@ -34,6 +34,9 @@

 #define NOT_DONE 0x7fff /* used while emulated sync operation in progress 
*/

+/* Maximum bounce buffer for copy-on-read and write zeroes, in bytes */
+#define MAX_BOUNCE_BUFFER (32768 << BDRV_SECTOR_BITS)
+
 static int coroutine_fn bdrv_co_do_pwrite_zeroes(BlockDriverState *bs,
 int64_t offset, int bytes, BdrvRequestFlags flags);

@@ -945,11 +948,14 @@ static int coroutine_fn 
bdrv_co_do_copy_on_readv(BdrvChild *child,

 BlockDriver *drv = bs->drv;
 struct iovec iov;
-QEMUIOVector bounce_qiov;
+QEMUIOVector local_qiov;
 int64_t cluster_offset;
 unsigned int cluster_bytes;
 size_t skip_bytes;
 int ret;
+int max_transfer = MIN_NON_ZERO(bs->bl.max_transfer,
+BDRV_REQUEST_MAX_BYTES);
+unsigned int progress = 0;

 /* FIXME We cannot require callers to have write permissions when all they
  * are doing is a read request. If we did things right, write permissions
@@ -961,53 +967,95 @@ static int coroutine_fn 
bdrv_co_do_copy_on_readv(BdrvChild *child,
 // assert(child->perm & (BLK_PERM_WRITE_UNCHANGED | BLK_PERM_WRITE));

 /* Cover entire cluster so no additional backing file I/O is required when
- * allocating cluster in the image file.
+ * allocating cluster in the image file.  Note that this value may exceed
+ * BDRV_REQUEST_MAX_BYTES (even when the original read did not), which
+ * is one reason we loop rather than doing it all at once.
  */
 bdrv_round_to_clusters(bs, offset, bytes, _offset, _bytes);
+skip_bytes = offset - cluster_offset;

 trace_bdrv_co_do_copy_on_readv(bs, offset, bytes,
cluster_offset, cluster_bytes);

-iov.iov_len = cluster_bytes;
-iov.iov_base = bounce_buffer = qemu_try_blockalign(bs, iov.iov_len);
+bounce_buffer = qemu_try_blockalign(bs,
+MIN(MIN(max_transfer, cluster_bytes),
+MAX_BOUNCE_BUFFER));
 if (bounce_buffer == NULL) {
 ret = -ENOMEM;
 goto err;
 }

-qemu_iovec_init_external(_qiov, , 1);
+while (cluster_bytes) {
+int64_t pnum;

-ret = bdrv_driver_preadv(bs, cluster_offset, cluster_bytes,
- _qiov, 0);
-if (ret < 0) {
-goto err;
-}
+ret = bdrv_is_allocated(bs, cluster_offset,
+MIN(cluster_bytes, max_transfer), );
+if (ret < 0) {
+/* Safe to treat errors in querying allocation as if
+ * unallocated; we'll probably fail again soon on the
+ * read, but at least that will set a decent errno.
+ */
+pnum = MIN(cluster_bytes, max_transfer);
+}

-bdrv_debug_event(bs, BLKDBG_COR_WRITE);
-if (drv->bdrv_co_pwrite_zeroes &&
-

[Qemu-block] [PATCH v2 0/5] block: Avoid copy-on-read assertions

2017-10-03 Thread Eric Blake
During my quest to switch block status to be byte-based, John
forced me to evaluate whether we have a situation during
copy-on-read where we could exceed BDRV_REQUEST_MAX_BYTES [1].
Sure enough, we have a number of pre-existing bugs in the
copy-on-read code.  Fix those, along with adding a test.

Available as a tag at:
git fetch git://repo.or.cz/qemu/ericb.git nbd-byte-status-v4

Since v1 (available at [2]):
- tweak patch 3 (now 4) to avoid uninit variable [Kevin, patchew]
- tweak patch 4 (now 5) to add 0-length test [Kevin]
- tweak patch 4 (now 5) to skip if TEST_DIR contains spaces [Jeff]
- new patch 2 to make testing 0-length read easier

[1] https://lists.gnu.org/archive/html/qemu-devel/2017-09/msg07286.html
[2] https://lists.gnu.org/archive/html/qemu-devel/2017-09/msg08200.html

001/5:[] [--] 'qemu-io: Add -C for opening with copy-on-read'
002/5:[down] 'block: Uniform handling of 0-length bdrv_get_block_status()'
003/5:[] [--] 'block: Add blkdebug hook for copy-on-read'
004/5:[0001] [FC] 'block: Perform copy-on-read in loop'
005/5:[0025] [FC] 'iotests: Add test 197 for covering copy-on-read'

Eric Blake (5):
  qemu-io: Add -C for opening with copy-on-read
  block: Uniform handling of 0-length bdrv_get_block_status()
  block: Add blkdebug hook for copy-on-read
  block: Perform copy-on-read in loop
  iotests: Add test 197 for covering copy-on-read

 qapi/block-core.json |   5 +-
 block/io.c   | 123 ++-
 qemu-io.c|  15 -
 tests/qemu-iotests/common.filter |   1 +
 tests/qemu-iotests/197   | 102 
 tests/qemu-iotests/197.out   |  26 +
 tests/qemu-iotests/group |   1 +
 7 files changed, 230 insertions(+), 43 deletions(-)
 create mode 100755 tests/qemu-iotests/197
 create mode 100644 tests/qemu-iotests/197.out

-- 
2.13.6




[Qemu-block] [PATCH v2 2/5] block: Uniform handling of 0-length bdrv_get_block_status()

2017-10-03 Thread Eric Blake
Handle a 0-length block status request up front, with a uniform
return value claiming the area is not allocated.

Most callers don't pass a length of 0 to bdrv_get_block_status()
and friends; but it definitely happens with a 0-length read when
copy-on-read is enabled.  While we could audit all callers to
ensure that they never make a 0-length request, and then assert
that fact, it was just as easy to fix things to always report
success (as long as the callers are careful to not go into an
infinite loop).  However, we had inconsistent behavior on whether
the status is reported as allocated or deferred to the backing
layer, depending on what callbacks the driver implements, and
possibly wasted quite a few CPU cycles to get to that answer.
Consistently reporting unallocated up front doesn't really hurt
anything, and makes it easier both for callers (0-length requests
now have well-defined behavior) and for drivers (drivers don't
have to deal with 0-length requests).

Signed-off-by: Eric Blake 

---
v2: new patch
---
 block/io.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/block/io.c b/block/io.c
index e0f904583f..1f5baac41d 100644
--- a/block/io.c
+++ b/block/io.c
@@ -1773,9 +1773,9 @@ static int64_t coroutine_fn 
bdrv_co_get_block_status(BlockDriverState *bs,
 return total_sectors;
 }

-if (sector_num >= total_sectors) {
+if (sector_num >= total_sectors || !nb_sectors) {
 *pnum = 0;
-return BDRV_BLOCK_EOF;
+return sector_num >= total_sectors ? BDRV_BLOCK_EOF : 0;
 }

 n = total_sectors - sector_num;
-- 
2.13.6




[Qemu-block] [PATCH v2 1/5] qemu-io: Add -C for opening with copy-on-read

2017-10-03 Thread Eric Blake
Make it easier to enable copy-on-read during iotests, by
exposing a new bool option to main and open.

Signed-off-by: Eric Blake 
Reviewed-by: Jeff Cody 
Reviewed-by: Kevin Wolf 
Reviewed-by: John Snow 
Reviewed-by: Stefan Hajnoczi 
---
 qemu-io.c | 15 ---
 1 file changed, 12 insertions(+), 3 deletions(-)

diff --git a/qemu-io.c b/qemu-io.c
index 265445ad89..c70bde3eb1 100644
--- a/qemu-io.c
+++ b/qemu-io.c
@@ -102,6 +102,7 @@ static void open_help(void)
 " Opens a file for subsequent use by all of the other qemu-io commands.\n"
 " -r, -- open file read-only\n"
 " -s, -- use snapshot file\n"
+" -C, -- use copy-on-read\n"
 " -n, -- disable host cache, short for -t none\n"
 " -U, -- force shared permissions\n"
 " -k, -- use kernel AIO implementation (on Linux only)\n"
@@ -120,7 +121,7 @@ static const cmdinfo_t open_cmd = {
 .argmin = 1,
 .argmax = -1,
 .flags  = CMD_NOFILE_OK,
-.args   = "[-rsnkU] [-t cache] [-d discard] [-o options] [path]",
+.args   = "[-rsCnkU] [-t cache] [-d discard] [-o options] [path]",
 .oneline= "open the file specified by path",
 .help   = open_help,
 };
@@ -145,7 +146,7 @@ static int open_f(BlockBackend *blk, int argc, char **argv)
 QDict *opts;
 bool force_share = false;

-while ((c = getopt(argc, argv, "snro:kt:d:U")) != -1) {
+while ((c = getopt(argc, argv, "snCro:kt:d:U")) != -1) {
 switch (c) {
 case 's':
 flags |= BDRV_O_SNAPSHOT;
@@ -154,6 +155,9 @@ static int open_f(BlockBackend *blk, int argc, char **argv)
 flags |= BDRV_O_NOCACHE;
 writethrough = false;
 break;
+case 'C':
+flags |= BDRV_O_COPY_ON_READ;
+break;
 case 'r':
 readonly = 1;
 break;
@@ -251,6 +255,7 @@ static void usage(const char *name)
 "  -r, --read-only  export read-only\n"
 "  -s, --snapshot   use snapshot file\n"
 "  -n, --nocachedisable host cache, short for -t none\n"
+"  -C, --copy-on-read   enable copy-on-read\n"
 "  -m, --misalign   misalign allocations for O_DIRECT\n"
 "  -k, --native-aio use kernel AIO implementation (on Linux only)\n"
 "  -t, --cache=MODE use the given cache mode for the image\n"
@@ -439,7 +444,7 @@ static QemuOptsList file_opts = {
 int main(int argc, char **argv)
 {
 int readonly = 0;
-const char *sopt = "hVc:d:f:rsnmkt:T:U";
+const char *sopt = "hVc:d:f:rsnCmkt:T:U";
 const struct option lopt[] = {
 { "help", no_argument, NULL, 'h' },
 { "version", no_argument, NULL, 'V' },
@@ -448,6 +453,7 @@ int main(int argc, char **argv)
 { "read-only", no_argument, NULL, 'r' },
 { "snapshot", no_argument, NULL, 's' },
 { "nocache", no_argument, NULL, 'n' },
+{ "copy-on-read", no_argument, NULL, 'C' },
 { "misalign", no_argument, NULL, 'm' },
 { "native-aio", no_argument, NULL, 'k' },
 { "discard", required_argument, NULL, 'd' },
@@ -492,6 +498,9 @@ int main(int argc, char **argv)
 flags |= BDRV_O_NOCACHE;
 writethrough = false;
 break;
+case 'C':
+flags |= BDRV_O_COPY_ON_READ;
+break;
 case 'd':
 if (bdrv_parse_discard_flags(optarg, ) < 0) {
 error_report("Invalid discard option: %s", optarg);
-- 
2.13.6




Re: [Qemu-block] [Qemu-devel] [PATCH 3/3] blockjob: expose manual-cull property

2017-10-03 Thread Jeff Cody
On Tue, Oct 03, 2017 at 11:59:28AM -0400, John Snow wrote:
> 
> 
> On 10/03/2017 11:57 AM, Paolo Bonzini wrote:
> > On 03/10/2017 05:15, John Snow wrote:
> >> For drive-backup and blockdev-backup, expose the manual-cull
> >> property, having it default to false. There are no universal
> >> creation parameters, so it must be added to each job type that
> >> it makes sense for individually.
> >>
> >> Signed-off-by: John Snow 
> 
> [...]
> 
> > 
> > The verb "cull" is a bit weird.  The only alternative that comes to mind
> > though are "reap" (like processes). There's also "join" (like threads),
> > but would imply waiting if the jobs hasn't completed yet, and we
> > probably don't want it.
> > 
> > Paolo
> > 
> 
> Sure, open to suggestions. I think Kevin suggested "delete" which I have
> reservations about because of people potentially confusing it with
> "cancel" or "complete" -- it does not have the capacity to
> end/terminate/finish/complete/cancel a job.
> 
> "reap" might be fine. I don't really have any strong preference.
>

As far as verbs go, I like both 'reap' and 'delete'.  As far as the
property, naming it 'manual_verb' is a bit odd, too. Maybe a clearer term
for the property would just be 'persistent', with the QMP command being
'block_job_reap' or 'block_job_delete'?

-Jeff



Re: [Qemu-block] [Qemu-devel] [PATCH v4 1/2] virtio: introduce `query-virtio' QMP command

2017-10-03 Thread Dr. David Alan Gilbert
* Jan Dakinevich (jan.dakinev...@virtuozzo.com) wrote:
> 
> 
> On 10/03/2017 05:02 PM, Eric Blake wrote:
> > On 10/03/2017 07:47 AM, Jan Dakinevich wrote:
> >> The command is intended for gathering virtio information such as status,
> >> feature bits, negotiation status. It is convenient and useful for debug
> >> purpose.
> >>
> >> The commands returns generic virtio information for virtio such as
> >> common feature names and status bits names and information for all
> >> attached to current machine devices.
> >>
> >> To retrieve names of device-specific features `get_feature_name'
> >> callback in VirtioDeviceClass also was introduced.
> >>
> >> Cc: Denis V. Lunev 
> >> Signed-off-by: Jan Dakinevich 
> >> ---
> >>  hw/block/virtio-blk.c   |  21 +
> >>  hw/char/virtio-serial-bus.c |  15 +++
> >>  hw/display/virtio-gpu.c |  13 ++
> >>  hw/net/virtio-net.c |  35 +++
> >>  hw/scsi/virtio-scsi.c   |  16 +++
> >>  hw/virtio/Makefile.objs |   2 +
> >>  hw/virtio/virtio-balloon.c  |  15 +++
> >>  hw/virtio/virtio-stub.c |   9 
> >>  hw/virtio/virtio.c  | 101 
> >> 
> >>  include/hw/virtio/virtio.h  |   2 +
> >>  qapi-schema.json|   1 +
> >>  qapi/virtio.json|  94 
> >> +
> >>  12 files changed, 324 insertions(+)
> >>  create mode 100644 hw/virtio/virtio-stub.c
> >>  create mode 100644 qapi/virtio.json
> > 
> > This creates a new .json file, but does not touch MAINTAINERS.  Our idea
> > in splitting the .json files was to make it easier for each sub-file
> > that needs a specific maintainer in addition to the overall *.json line
> > for QAPI maintainers, so this may deserve a MAINTAINERS entry.
> > 
> 
> Ok.
> 
> >> +++ b/qapi/virtio.json
> >> @@ -0,0 +1,94 @@
> >> +# -*- Mode: Python -*-
> >> +#
> >> +
> >> +##
> >> +# = Virtio devices
> >> +##
> >> +
> >> +{ 'include': 'common.json' }
> >> +
> >> +##
> >> +# @VirtioInfoBit:
> >> +#
> >> +# Named virtio bit
> >> +#
> >> +# @bit: bit number
> >> +#
> >> +# @name: bit name
> >> +#
> >> +# Since: 2.11.0
> >> +#
> >> +##
> >> +{
> >> +'struct': 'VirtioInfoBit',
> >> +'data': {
> >> +'bit': 'uint64',
> > 
> > Why is this a 64-bit value? Are the values 0-63, or are they 1, 2, 4, 8,
> > ...?  The documentation on 'bit number' is rather sparse.
> 
> I would prefer `uint' here, but I don't see a generic unsigned type
> (maybe I am mistaken). I could use uint8 here, though.
> 
> > 
> >> +'name': 'str'
> > 
> > Wouldn't an enum type be better than an open-ended string?
> > 
> 
> Bit names are not known here, they are obtained from virtio device
> implementations.
> 
> >> +}
> >> +}
> >> +
> >> +##
> >> +# @VirtioInfoDevice:
> >> +#
> >> +# Information about specific virtio device
> >> +#
> >> +# @qom_path: QOM path of the device
> > 
> > Please make this 'qom-path' - new interfaces should prefer '-' over '_'.
> 
> Ok.
> 
> >> +#
> >> +# @feature-names: names of device-specific features
> >> +#
> >> +# @host-features: bitmask of features, provided by devices
> >> +#
> >> +# @guest-features: bitmask of features, acknowledged by guest
> >> +#
> >> +# @status: virtio device status bitmask
> >> +#
> >> +# Since: 2.11.0
> >> +#
> >> +##
> >> +{
> >> +'struct': 'VirtioInfoDevice',
> >> +'data': {
> >> +'qom_path': 'str',
> >> +'feature-names': ['VirtioInfoBit'],
> >> +'host-features': 'uint64',
> >> +'guest-features': 'uint64',
> >> +'status': 'uint64'
> > 
> > I'm wondering if this is the best representation (where the caller has
> > to parse the integer and then lookup in feature-names what each bit of
> > the integer represents).  But I'm not sure I have anything better off
> > the top of my head.
> > 
> 
> Consider it as a way to tell the caller about the names of supported features.
> 
> >> +}
> >> +}
> >> +
> >> +##
> >> +# @VirtioInfo:
> >> +#
> >> +# Information about virtio devices
> >> +#
> >> +# @feature-names: names of common virtio features
> >> +#
> >> +# @status-names: names of bits which represents virtio device status
> >> +#
> >> +# @devices: list of per-device virtio information
> >> +#
> >> +# Since: 2.11.0
> >> +#
> >> +##
> >> +{
> >> +'struct': 'VirtioInfo',
> >> +'data': {
> >> +'feature-names': ['VirtioInfoBit'],
> > 
> > Why is feature-names listed at two different nestings of the return value?
> > 
> 
> These are different feature names. The first set is common and predefined
> for all devices; the second set is device-specific.

If you can turn these into enums (union'd enums?) then you might
be able to get rid of a lot of your array filling/naming conversion
boilerplate. (Not sure if it's worth it, but it's worth looking).
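
Roughly what I mean, as a sketch only (the type and member names below
are invented here for illustration, not taken from this series):

{ 'enum': 'VirtioCommonFeature',
  'data': [ 'notify-on-empty', 'any-layout', 'ring-indirect-desc',
            'ring-event-idx', 'version-1', 'iommu-platform' ] }

{ 'struct': 'VirtioInfoDevice',
  'data': { 'qom-path': 'str',
            'common-features': [ 'VirtioCommonFeature' ],
            'host-features': 'uint64',
            'guest-features': 'uint64',
            'status': 'uint64' } }

Device-specific features would still need their own enums (one per device
type, perhaps behind a union), which may be where the boilerplate creeps
back in.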

Dave

> >> +'status-names': ['VirtioInfoBit'],
> >> +'devices': ['VirtioInfoDevice']
> >> +}
> 

Re: [Qemu-block] [Qemu-devel] [PATCH 3/3] blockjob: expose manual-cull property

2017-10-03 Thread John Snow


On 10/03/2017 11:57 AM, Paolo Bonzini wrote:
> On 03/10/2017 05:15, John Snow wrote:
>> For drive-backup and blockdev-backup, expose the manual-cull
>> property, having it default to false. There are no universal
>> creation parameters, so it must be added to each job type that
>> it makes sense for individually.
>>
>> Signed-off-by: John Snow 

[...]

> 
> The verb "cull" is a bit weird.  The only alternative that comes to mind
> though are "reap" (like processes). There's also "join" (like threads),
> but would imply waiting if the jobs hasn't completed yet, and we
> probably don't want it.
> 
> Paolo
> 

Sure, open to suggestions. I think Kevin suggested "delete" which I have
reservations about because of people potentially confusing it with
"cancel" or "complete" -- it does not have the capacity to
end/terminate/finish/complete/cancel a job.

"reap" might be fine. I don't really have any strong preference.

--js



Re: [Qemu-block] [Qemu-devel] [PATCH 3/3] blockjob: expose manual-cull property

2017-10-03 Thread Paolo Bonzini
On 03/10/2017 05:15, John Snow wrote:
> For drive-backup and blockdev-backup, expose the manual-cull
> property, having it default to false. There are no universal
> creation parameters, so it must be added to each job type that
> it makes sense for individually.
> 
> Signed-off-by: John Snow 
> ---
>  blockdev.c   | 10 --
>  qapi/block-core.json | 21 -
>  2 files changed, 24 insertions(+), 7 deletions(-)
> 
> diff --git a/blockdev.c b/blockdev.c
> index ee07bca..ba2ebfb 100644
> --- a/blockdev.c
> +++ b/blockdev.c
> @@ -3198,6 +3198,9 @@ static BlockJob *do_drive_backup(DriveBackup *backup, 
> BlockJobTxn *txn,
>  if (!backup->has_job_id) {
>  backup->job_id = NULL;
>  }
> +if (!backup->has_manual_cull) {
> +backup->manual_cull = false;
> +}
>  if (!backup->has_compress) {
>  backup->compress = false;
>  }
> @@ -3290,7 +3293,7 @@ static BlockJob *do_drive_backup(DriveBackup *backup, 
> BlockJobTxn *txn,
>  }
>  }
>  
> -job = backup_job_create(backup->job_id, false, bs, target_bs,
> +job = backup_job_create(backup->job_id, backup->manual_cull, bs, 
> target_bs,
>  backup->speed, backup->sync, bmap, 
> backup->compress,
>  backup->on_source_error, backup->on_target_error,
>  BLOCK_JOB_DEFAULT, NULL, NULL, txn, _err);
> @@ -3341,6 +3344,9 @@ BlockJob *do_blockdev_backup(BlockdevBackup *backup, 
> BlockJobTxn *txn,
>  if (!backup->has_job_id) {
>  backup->job_id = NULL;
>  }
> +if (!backup->has_manual_cull) {
> +backup->manual_cull = false;
> +}
>  if (!backup->has_compress) {
>  backup->compress = false;
>  }
> @@ -3369,7 +3375,7 @@ BlockJob *do_blockdev_backup(BlockdevBackup *backup, 
> BlockJobTxn *txn,
>  goto out;
>  }
>  }
> -job = backup_job_create(backup->job_id, false, bs, target_bs,
> +job = backup_job_create(backup->job_id, backup->manual_cull, bs, 
> target_bs,
>  backup->speed, backup->sync, NULL, 
> backup->compress,
>  backup->on_source_error, backup->on_target_error,
>  BLOCK_JOB_DEFAULT, NULL, NULL, txn, _err);
> diff --git a/qapi/block-core.json b/qapi/block-core.json
> index de322d1..c646743 100644
> --- a/qapi/block-core.json
> +++ b/qapi/block-core.json
> @@ -1104,6 +1104,11 @@
>  # @job-id: identifier for the newly-created block job. If
>  #  omitted, the device name will be used. (Since 2.7)
>  #
> +# @manual-cull: Whether or not the job created by this command needs to be
> +#   cleaned up manually via block-job-cull or not. The default is
> +#   false. When true, the job will remain in a "completed" state
> +#   until culled manually with block-job-cull. (Since 2.11)
> +#
>  # @device: the device name or node-name of a root node which should be 
> copied.
>  #
>  # @target: the target of the new image. If the file exists, or if it
> @@ -1144,9 +1149,10 @@
>  # Since: 1.6
>  ##
>  { 'struct': 'DriveBackup',
> -  'data': { '*job-id': 'str', 'device': 'str', 'target': 'str',
> -'*format': 'str', 'sync': 'MirrorSyncMode', '*mode': 
> 'NewImageMode',
> -'*speed': 'int', '*bitmap': 'str', '*compress': 'bool',
> +  'data': { '*job-id': 'str', '*manual-cull': 'bool', 'device': 'str',
> +'target': 'str', '*format': 'str', 'sync': 'MirrorSyncMode',
> +'*mode': 'NewImageMode', '*speed': 'int', '*bitmap': 'str',
> +'*compress': 'bool',
>  '*on-source-error': 'BlockdevOnError',
>  '*on-target-error': 'BlockdevOnError' } }
>  
> @@ -1156,6 +1162,11 @@
>  # @job-id: identifier for the newly-created block job. If
>  #  omitted, the device name will be used. (Since 2.7)
>  #
> +# @manual-cull: Whether or not the job created by this command needs to be
> +#   cleaned up manually via block-job-cull or not. The default is
> +#   false. When true, the job will remain in a "completed" state
> +#   until culled manually with block-job-cull. (Since 2.11)

The verb "cull" is a bit weird.  The only alternative that comes to mind
though are "reap" (like processes). There's also "join" (like threads),
but would imply waiting if the jobs hasn't completed yet, and we
probably don't want it.

Paolo

>  # @device: the device name or node-name of a root node which should be 
> copied.
>  #
>  # @target: the device name or node-name of the backup target node.
> @@ -1185,8 +1196,8 @@
>  # Since: 2.3
>  ##
>  { 'struct': 'BlockdevBackup',
> -  'data': { '*job-id': 'str', 'device': 'str', 'target': 'str',
> -'sync': 'MirrorSyncMode',
> +  'data': { '*job-id': 'str', '*manual-cull': 'bool', 'device': 'str',
> +'target': 'str', 'sync': 'MirrorSyncMode',
>

Re: [Qemu-block] [Qemu-devel] [PATCH 0/3] blockjobs: add explicit job culling

2017-10-03 Thread John Snow


On 10/03/2017 05:20 AM, Vladimir Sementsov-Ogievskiy wrote:
> 03.10.2017 06:15, John Snow wrote:
>> For jobs that complete when a monitor isn't looking, there's no way to
>> tell what the job's final return code was. We need to allow jobs to
>> remain in the list until queried for reliable management.
>>
>> This is an RFC; tests are on the way.
>> (Tested only manually via qmp-shell for now.)
> 
> That's a cool feature!
> What about transaction support?
> 

Didn't test that yet (!) but the intent is that it will be compatible.
The jobs in the transaction, whether using grouped-completion mode or
not, will simply hang around in the list after completion.

For grouped-completion=false:

The jobs will complete individually and then remain in the list.

For grouped-completion=true:

The jobs will remain in their ready-to-commit-or-abort state until all
jobs in the transaction are ready to commit-or-abort, then all jobs will
either commit or abort. After commit-or-abort, all jobs (that were
created with manual-cull=true !) will remain in the query list.

The intended effect here is that this property changes NOTHING except
that the job will remain in the query list until it is dismissed, and
should not change anything about how it behaves during its lifetime.
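
To illustrate the intended flow via qmp-shell (the exact command and
argument names are still up for debate per the naming thread, so treat
the ones below as placeholders):

{ "execute": "drive-backup",
  "arguments": { "job-id": "job0", "device": "drive0",
                 "target": "backup.qcow2", "sync": "full",
                 "manual-cull": true } }

The job runs and completes (BLOCK_JOB_COMPLETED is emitted as usual), but
it remains visible in query-block-jobs with its final status until:

{ "execute": "block-job-cull", "arguments": { "device": "job0" } }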

One downside here is that since we have no universal "job creation
argument list", I can't add it to all jobs universally. In the case
of transactions, though, I could at least add a property that *forces*
all jobs below to become manual-cull style jobs, and that way you'd only
have to specify it once instead of for each action.

--js

>>
>> John Snow (3):
>>    blockjob: add manual-cull property
>>    qmp: add block-job-cull command
>>    blockjob: expose manual-cull property
>>
>>   block/backup.c   | 20 +-
>>   block/commit.c   |  2 +-
>>   block/mirror.c   |  2 +-
>>   block/replication.c  |  5 +++--
>>   block/stream.c   |  2 +-
>>   block/trace-events   |  1 +
>>   blockdev.c   | 28 +
>>   blockjob.c   | 46
>> +++--
>>   include/block/block_int.h    |  8 +---
>>   include/block/blockjob.h | 21 +++
>>   include/block/blockjob_int.h |  2 +-
>>   qapi/block-core.json | 49
>> 
>>   tests/test-blockjob-txn.c    |  2 +-
>>   tests/test-blockjob.c    |  2 +-
>>   14 files changed, 155 insertions(+), 35 deletions(-)
>>
> 
> 

Oh, while I'm here, I should point out that another downside of this
patch is that it doesn't prevent "cancel" from attempting to re-enter
the job. Or rather, I had to patch that out specifically. The job
remains in a list of jobs that some portions of the code consider to be
"active" jobs. (look for any code that checks to see if the job has
started.)

A (perhaps provably) cleaner approach would be to simply move any
completed job onto a different list upon completion, and patch
query-jobs to query both lists, and allow the cull command to remove any
jobs on that list. A downside of that approach, however, is that without
multiple job support, you may launch a second job that perhaps
overwrites the first job unless you're careful about how you manage that
data.
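
Something like this, roughly (sketch only; this assumes the QLIST macros
from qemu/queue.h, and the job_list field name is from memory):

    /* blockjob.c sketch: keep finished jobs on a separate list */
    static QLIST_HEAD(, BlockJob) block_jobs =
        QLIST_HEAD_INITIALIZER(block_jobs);
    static QLIST_HEAD(, BlockJob) completed_jobs =
        QLIST_HEAD_INITIALIZER(completed_jobs);

    static void block_job_move_to_completed(BlockJob *job)
    {
        /* no longer "active", but still visible to query-block-jobs */
        QLIST_REMOVE(job, job_list);
        QLIST_INSERT_HEAD(&completed_jobs, job, job_list);
    }

query-block-jobs would then walk both lists, and the cull command would
only accept jobs it finds on the second one.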

There are pros and cons to either way, but I'd rather not get in the
business of overhauling the blockjobs API unless it's for universal jobs.

--js



Re: [Qemu-block] [Qemu-devel] [PATCH v4 1/2] virtio: introduce `query-virtio' QMP command

2017-10-03 Thread Jan Dakinevich


On 10/03/2017 05:02 PM, Eric Blake wrote:
> On 10/03/2017 07:47 AM, Jan Dakinevich wrote:
>> The command is intended for gathering virtio information such as status,
>> feature bits, negotiation status. It is convenient and useful for debug
>> purpose.
>>
>> The commands returns generic virtio information for virtio such as
>> common feature names and status bits names and information for all
>> attached to current machine devices.
>>
>> To retrieve names of device-specific features `get_feature_name'
>> callback in VirtioDeviceClass also was introduced.
>>
>> Cc: Denis V. Lunev 
>> Signed-off-by: Jan Dakinevich 
>> ---
>>  hw/block/virtio-blk.c   |  21 +
>>  hw/char/virtio-serial-bus.c |  15 +++
>>  hw/display/virtio-gpu.c |  13 ++
>>  hw/net/virtio-net.c |  35 +++
>>  hw/scsi/virtio-scsi.c   |  16 +++
>>  hw/virtio/Makefile.objs |   2 +
>>  hw/virtio/virtio-balloon.c  |  15 +++
>>  hw/virtio/virtio-stub.c |   9 
>>  hw/virtio/virtio.c  | 101 
>> 
>>  include/hw/virtio/virtio.h  |   2 +
>>  qapi-schema.json|   1 +
>>  qapi/virtio.json|  94 +
>>  12 files changed, 324 insertions(+)
>>  create mode 100644 hw/virtio/virtio-stub.c
>>  create mode 100644 qapi/virtio.json
> 
> This creates a new .json file, but does not touch MAINTAINERS.  Our idea
> in splitting the .json files was to make it easier for each sub-file
> that needs a specific maintainer in addition to the overall *.json line
> for QAPI maintainers, so this may deserve a MAINTAINERS entry.
> 

Ok.

>> +++ b/qapi/virtio.json
>> @@ -0,0 +1,94 @@
>> +# -*- Mode: Python -*-
>> +#
>> +
>> +##
>> +# = Virtio devices
>> +##
>> +
>> +{ 'include': 'common.json' }
>> +
>> +##
>> +# @VirtioInfoBit:
>> +#
>> +# Named virtio bit
>> +#
>> +# @bit: bit number
>> +#
>> +# @name: bit name
>> +#
>> +# Since: 2.11.0
>> +#
>> +##
>> +{
>> +'struct': 'VirtioInfoBit',
>> +'data': {
>> +'bit': 'uint64',
> 
> Why is this a 64-bit value? Are the values 0-63, or are they 1, 2, 4, 8,
> ...?  The documentation on 'bit number' is rather sparse.

I would prefer `uint' here, but I don't see a generic unsigned type
(maybe I am mistaken). I could use uint8 here, though.

> 
>> +'name': 'str'
> 
> Wouldn't an enum type be better than an open-ended string?
> 

Bit names are not known here, they are obtained from virtio device
implementations.

>> +}
>> +}
>> +
>> +##
>> +# @VirtioInfoDevice:
>> +#
>> +# Information about specific virtio device
>> +#
>> +# @qom_path: QOM path of the device
> 
> Please make this 'qom-path' - new interfaces should prefer '-' over '_'.

Ok.

>> +#
>> +# @feature-names: names of device-specific features
>> +#
>> +# @host-features: bitmask of features, provided by devices
>> +#
>> +# @guest-features: bitmask of features, acknowledged by guest
>> +#
>> +# @status: virtio device status bitmask
>> +#
>> +# Since: 2.11.0
>> +#
>> +##
>> +{
>> +'struct': 'VirtioInfoDevice',
>> +'data': {
>> +'qom_path': 'str',
>> +'feature-names': ['VirtioInfoBit'],
>> +'host-features': 'uint64',
>> +'guest-features': 'uint64',
>> +'status': 'uint64'
> 
> I'm wondering if this is the best representation (where the caller has
> to parse the integer and then lookup in feature-names what each bit of
> the integer represents).  But I'm not sure I have anything better off
> the top of my head.
> 

Consider it as a way to tell the caller about the names of supported features.

>> +}
>> +}
>> +
>> +##
>> +# @VirtioInfo:
>> +#
>> +# Information about virtio devices
>> +#
>> +# @feature-names: names of common virtio features
>> +#
>> +# @status-names: names of bits which represents virtio device status
>> +#
>> +# @devices: list of per-device virtio information
>> +#
>> +# Since: 2.11.0
>> +#
>> +##
>> +{
>> +'struct': 'VirtioInfo',
>> +'data': {
>> +'feature-names': ['VirtioInfoBit'],
> 
> Why is feature-names listed at two different nestings of the return value?
> 

These are different feature names. The first set is common and predefined
for all devices; the second set is device-specific.

>> +'status-names': ['VirtioInfoBit'],
>> +'devices': ['VirtioInfoDevice']
>> +}
>> +}
>> +
>> +
>> +##
>> +# @query-virtio:
>> +#
>> +# Returns generic and per-device virtio information
>> +#
>> +# Since: 2.11.0
>> +#
>> +##
>> +{
>> +'command': 'query-virtio',
>> +'returns': 'VirtioInfo'
>> +}
>>
> 

-- 
Best regards
Jan Dakinevich



Re: [Qemu-block] [PATCH 8/8] nbd: Minimal structured read for client

2017-10-03 Thread Paolo Bonzini
On 03/10/2017 15:35, Vladimir Sementsov-Ogievskiy wrote:
>>>
>>> In the end this probably means that you have a read_chunk_header
>>> function and a read_chunk function.  READ has a loop that calls
>>> read_chunk_header followed by direct reading into the QEMUIOVector,
>>> while everyone else calls read_chunk.
>>
>> According to the spec, we can receive several error reply chunks for any
>> request, so the loop receiving them should be common for all requests,
>> I think.
> 
> Handling error chunks should be common as well.

Yes, reading error chunks should be part of read_chunk_header.

Paolo



Re: [Qemu-block] [PATCH 8/8] nbd: Minimal structured read for client

2017-10-03 Thread Paolo Bonzini
On 03/10/2017 14:26, Vladimir Sementsov-Ogievskiy wrote:
> 03.10.2017 13:07, Paolo Bonzini wrote:
>> On 26/09/2017 00:19, Eric Blake wrote:
 +    /* here we deal with successful structured reply */
 +    switch (s->reply.type) {
 +    QEMUIOVector sub_qiov;
 +    case NBD_SREP_TYPE_OFFSET_DATA:
>>> This is putting a LOT of smarts directly into the receive routine.
>>> Here's where I was previously wondering (and I think Paolo as well)
>>> whether it might be better to split the efforts: the generic function
>>> reads off the chunk information and any payload, but a per-command
>>> callback function then parses the chunks.  Especially since the
>>> definition of the chunks differs on a per-command basis (yes, the NBD
>>> spec will try to not reuse an SREP chunk type across multiple commands
>>> unless the semantics are similar, but that's a bit more fragile).  This
>>> particularly matters given my statement above that you want a
>>> discriminated union, rather than a struct that contains unused fields,
>>> for handling different SREP chunk types.
>> I think there should be two kinds of replies: 1) read directly into a
>> QEMUIOVector, using structured replies only as an encapsulation of the
> 
> Who should read into the qiov? reply_entry, or the calling coroutine?
> reply_entry has to parse the reply anyway, to understand whether it should
> read everything itself or read into the qiov (or yield back, and then
> the calling coroutine will read into the qiov)..

The CMD_READ coroutine should---either directly or, in case you have a
structured reply, after reading each chunk header.

Paolo

>> payload; 2) read a chunk at a time into malloc-ed memory, yielding back
>> to the calling coroutine after receiving one complete chunk.
>>
>> In the end this probably means that you have a read_chunk_header
>> function and a read_chunk function.  READ has a loop that calls
>> read_chunk_header followed by direct reading into the QEMUIOVector,
>> while everyone else calls read_chunk.
>>
>> Maybe qio_channel_readv/writev_full could have "offset" and "bytes"
>> arguments.  Most code in iov_send_recv could be cut-and-pasted.  (When
>> sheepdog is converted to QIOChannel, iov_send_recv can go away).
>>
>> Paolo
> 
> 




Re: [Qemu-block] [Qemu-devel] [PATCH v4 1/2] virtio: introduce `query-virtio' QMP command

2017-10-03 Thread Eric Blake
On 10/03/2017 07:47 AM, Jan Dakinevich wrote:
> The command is intended for gathering virtio information such as status,
> feature bits, negotiation status. It is convenient and useful for debug
> purpose.
> 
> The commands returns generic virtio information for virtio such as
> common feature names and status bits names and information for all
> attached to current machine devices.
> 
> To retrieve names of device-specific features `get_feature_name'
> callback in VirtioDeviceClass also was introduced.
> 
> Cc: Denis V. Lunev 
> Signed-off-by: Jan Dakinevich 
> ---
>  hw/block/virtio-blk.c   |  21 +
>  hw/char/virtio-serial-bus.c |  15 +++
>  hw/display/virtio-gpu.c |  13 ++
>  hw/net/virtio-net.c |  35 +++
>  hw/scsi/virtio-scsi.c   |  16 +++
>  hw/virtio/Makefile.objs |   2 +
>  hw/virtio/virtio-balloon.c  |  15 +++
>  hw/virtio/virtio-stub.c |   9 
>  hw/virtio/virtio.c  | 101 
> 
>  include/hw/virtio/virtio.h  |   2 +
>  qapi-schema.json|   1 +
>  qapi/virtio.json|  94 +
>  12 files changed, 324 insertions(+)
>  create mode 100644 hw/virtio/virtio-stub.c
>  create mode 100644 qapi/virtio.json

This creates a new .json file, but does not touch MAINTAINERS.  Our idea
in splitting the .json files was to make it easier for each sub-file
that needs a specific maintainer in addition to the overall *.json line
for QAPI maintainers, so this may deserve a MAINTAINERS entry.


> +++ b/qapi/virtio.json
> @@ -0,0 +1,94 @@
> +# -*- Mode: Python -*-
> +#
> +
> +##
> +# = Virtio devices
> +##
> +
> +{ 'include': 'common.json' }
> +
> +##
> +# @VirtioInfoBit:
> +#
> +# Named virtio bit
> +#
> +# @bit: bit number
> +#
> +# @name: bit name
> +#
> +# Since: 2.11.0
> +#
> +##
> +{
> +'struct': 'VirtioInfoBit',
> +'data': {
> +'bit': 'uint64',

Why is this a 64-bit value? Are the values 0-63, or are they 1, 2, 4, 8,
...?  The documentation on 'bit number' is rather sparse.

> +'name': 'str'

Wouldn't an enum type be better than an open-ended string?

> +}
> +}
> +
> +##
> +# @VirtioInfoDevice:
> +#
> +# Information about specific virtio device
> +#
> +# @qom_path: QOM path of the device

Please make this 'qom-path' - new interfaces should prefer '-' over '_'.

> +#
> +# @feature-names: names of device-specific features
> +#
> +# @host-features: bitmask of features, provided by devices
> +#
> +# @guest-features: bitmask of features, acknowledged by guest
> +#
> +# @status: virtio device status bitmask
> +#
> +# Since: 2.11.0
> +#
> +##
> +{
> +'struct': 'VirtioInfoDevice',
> +'data': {
> +'qom_path': 'str',
> +'feature-names': ['VirtioInfoBit'],
> +'host-features': 'uint64',
> +'guest-features': 'uint64',
> +'status': 'uint64'

I'm wondering if this is the best representation (where the caller has
to parse the integer and then lookup in feature-names what each bit of
the integer represents).  But I'm not sure I have anything better off
the top of my head.

> +}
> +}
> +
> +##
> +# @VirtioInfo:
> +#
> +# Information about virtio devices
> +#
> +# @feature-names: names of common virtio features
> +#
> +# @status-names: names of bits which represents virtio device status
> +#
> +# @devices: list of per-device virtio information
> +#
> +# Since: 2.11.0
> +#
> +##
> +{
> +'struct': 'VirtioInfo',
> +'data': {
> +'feature-names': ['VirtioInfoBit'],

Why is feature-names listed at two different nestings of the return value?

> +'status-names': ['VirtioInfoBit'],
> +'devices': ['VirtioInfoDevice']
> +}
> +}
> +
> +
> +##
> +# @query-virtio:
> +#
> +# Returns generic and per-device virtio information
> +#
> +# Since: 2.11.0
> +#
> +##
> +{
> +'command': 'query-virtio',
> +'returns': 'VirtioInfo'
> +}
> 

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.   +1-919-301-3266
Virtualization:  qemu.org | libvirt.org



signature.asc
Description: OpenPGP digital signature


Re: [Qemu-block] [PATCH 8/8] nbd: Minimal structured read for client

2017-10-03 Thread Vladimir Sementsov-Ogievskiy

03.10.2017 15:58, Vladimir Sementsov-Ogievskiy wrote:

03.10.2017 13:07, Paolo Bonzini wrote:

On 26/09/2017 00:19, Eric Blake wrote:

+    /* here we deal with successful structured reply */
+    switch (s->reply.type) {
+    QEMUIOVector sub_qiov;
+    case NBD_SREP_TYPE_OFFSET_DATA:

This is putting a LOT of smarts directly into the receive routine.
Here's where I was previously wondering (and I think Paolo as well)
whether it might be better to split the efforts: the generic function
reads off the chunk information and any payload, but a per-command
callback function then parses the chunks.  Especially since the
definition of the chunks differs on a per-command basis (yes, the NBD
spec will try to not reuse an SREP chunk type across multiple commands
unless the semantics are similar, but that's a bit more fragile).  This
particularly matters given my statement above that you want a
discriminated union, rather than a struct that contains unused fields,
for handling different SREP chunk types.

I think there should be two kinds of replies: 1) read directly into a
QEMUIOVector, using structured replies only as an encapsulation of the
payload; 2) read a chunk at a time into malloc-ed memory, yielding back
to the calling coroutine after receiving one complete chunk.

In the end this probably means that you have a read_chunk_header
function and a read_chunk function.  READ has a loop that calls
read_chunk_header followed by direct reading into the QEMUIOVector,
while everyone else calls read_chunk.


According to the spec, we can receive several error reply chunks for any
request, so the loop receiving them should be common for all requests,
I think.


Handling error chunks should be common as well. What do you think
about my DRAFT proposal?







Maybe qio_channel_readv/writev_full could have "offset" and "bytes"
arguments.  Most code in iov_send_recv could be cut-and-pasted. (When
sheepdog is converted to QIOChannel, iov_send_recv can go away).

Paolo






--
Best regards,
Vladimir




Re: [Qemu-block] [PATCH 8/8] nbd: Minimal structured read for client

2017-10-03 Thread Vladimir Sementsov-Ogievskiy

03.10.2017 13:07, Paolo Bonzini wrote:

On 26/09/2017 00:19, Eric Blake wrote:

+/* here we deal with successful structured reply */
+switch (s->reply.type) {
+QEMUIOVector sub_qiov;
+case NBD_SREP_TYPE_OFFSET_DATA:

This is putting a LOT of smarts directly into the receive routine.
Here's where I was previously wondering (and I think Paolo as well)
whether it might be better to split the efforts: the generic function
reads off the chunk information and any payload, but a per-command
callback function then parses the chunks.  Especially since the
definition of the chunks differs on a per-command basis (yes, the NBD
spec will try to not reuse an SREP chunk type across multiple commands
unless the semantics are similar, but that's a bit more fragile).  This
particularly matters given my statement above that you want a
discriminated union, rather than a struct that contains unused fields,
for handling different SREP chunk types.

I think there should be two kinds of replies: 1) read directly into a
QEMUIOVector, using structured replies only as an encapsulation of the
payload; 2) read a chunk at a time into malloc-ed memory, yielding back
to the calling coroutine after receiving one complete chunk.

In the end this probably means that you have a read_chunk_header
function and a read_chunk function.  READ has a loop that calls
read_chunk_header followed by direct reading into the QEMUIOVector,
while everyone else calls read_chunk.


According to the spec, we can receive several error reply chunks for any
request, so the loop receiving them should be common for all requests,
I think.



Maybe qio_channel_readv/writev_full could have "offset" and "bytes"
arguments.  Most code in iov_send_recv could be cut-and-pasted.  (When
sheepdog is converted to QIOChannel, iov_send_recv can go away).

Paolo



--
Best regards,
Vladimir




Re: [Qemu-block] [PATCH 8/8] nbd: Minimal structured read for client

2017-10-03 Thread Vladimir Sementsov-Ogievskiy

03.10.2017 13:07, Paolo Bonzini wrote:

On 26/09/2017 00:19, Eric Blake wrote:

+/* here we deal with successful structured reply */
+switch (s->reply.type) {
+QEMUIOVector sub_qiov;
+case NBD_SREP_TYPE_OFFSET_DATA:

This is putting a LOT of smarts directly into the receive routine.
Here's where I was previously wondering (and I think Paolo as well)
whether it might be better to split the efforts: the generic function
reads off the chunk information and any payload, but a per-command
callback function then parses the chunks.  Especially since the
definition of the chunks differs on a per-command basis (yes, the NBD
spec will try to not reuse an SREP chunk type across multiple commands
unless the semantics are similar, but that's a bit more fragile).  This
particularly matters given my statement above that you want a
discriminated union, rather than a struct that contains unused fields,
for handling different SREP chunk types.

I think there should be two kinds of replies: 1) read directly into a
QEMUIOVector, using structured replies only as an encapsulation of the


Who should read into the qiov? reply_entry, or the calling coroutine?
reply_entry has to parse the reply anyway, to understand whether it should
read everything itself or read into the qiov (or yield back, and then
the calling coroutine will read into the qiov)..


payload; 2) read a chunk at a time into malloc-ed memory, yielding back
to the calling coroutine after receiving one complete chunk.

In the end this probably means that you have a read_chunk_header
function and a read_chunk function.  READ has a loop that calls
read_chunk_header followed by direct reading into the QEMUIOVector,
while everyone else calls read_chunk.

Maybe qio_channel_readv/writev_full could have "offset" and "bytes"
arguments.  Most code in iov_send_recv could be cut-and-pasted.  (When
sheepdog is converted to QIOChannel, iov_send_recv can go away).

Paolo



--
Best regards,
Vladimir




Re: [Qemu-block] [Qemu-trivial] [PATCH] hw/block/onenand: Remove dead code block

2017-10-03 Thread Laurent Vivier
On 03/10/2017 11:57, Thomas Huth wrote:
> The condition of the for-loop makes sure that b is always smaller
> than s->blocks, so the "if (b >= s->blocks)" statement is completely
> superfluous here.
> 
> Buglink: https://bugs.launchpad.net/qemu/+bug/1715007
> Signed-off-by: Thomas Huth 
> ---
>  hw/block/onenand.c | 4 
>  1 file changed, 4 deletions(-)
> 
> diff --git a/hw/block/onenand.c b/hw/block/onenand.c
> index 30e40f3..de65c9e 100644
> --- a/hw/block/onenand.c
> +++ b/hw/block/onenand.c
> @@ -520,10 +520,6 @@ static void onenand_command(OneNANDState *s)
>  s->intstatus |= ONEN_INT;
>  
>  for (b = 0; b < s->blocks; b ++) {
> -if (b >= s->blocks) {
> -s->status |= ONEN_ERR_CMD;
> -break;
> -}
>  if (s->blockwp[b] == ONEN_LOCK_LOCKTIGHTEN)
>  break;
>  
> 

Looks like a bad cut'n'paste from case 0x23.

Reviewed-by: Laurent Vivier 



Re: [Qemu-block] [PATCH 8/8] nbd: Minimal structured read for client

2017-10-03 Thread Paolo Bonzini
On 26/09/2017 00:19, Eric Blake wrote:
>> +/* here we deal with successful structured reply */
>> +switch (s->reply.type) {
>> +QEMUIOVector sub_qiov;
>> +case NBD_SREP_TYPE_OFFSET_DATA:
> This is putting a LOT of smarts directly into the receive routine.
> Here's where I was previously wondering (and I think Paolo as well)
> whether it might be better to split the efforts: the generic function
> reads off the chunk information and any payload, but a per-command
> callback function then parses the chunks.  Especially since the
> definition of the chunks differs on a per-command basis (yes, the NBD
> spec will try to not reuse an SREP chunk type across multiple commands
> unless the semantics are similar, but that's a bit more fragile).  This
> particularly matters given my statement above that you want a
> discriminated union, rather than a struct that contains unused fields,
> for handling different SREP chunk types.

I think there should be two kinds of replies: 1) read directly into a
QEMUIOVector, using structured replies only as an encapsulation of the
payload; 2) read a chunk at a time into malloc-ed memory, yielding back
to the calling coroutine after receiving one complete chunk.

In the end this probably means that you have a read_chunk_header
function and a read_chunk function.  READ has a loop that calls
read_chunk_header followed by direct reading into the QEMUIOVector,
while everyone else calls read_chunk.
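
As a rough sketch (the helper names below are invented for illustration,
and I am going from memory on the DONE flag name):

    /* CMD_READ reply loop, with structured replies negotiated */
    do {
        read_chunk_header(ioc, &chunk);    /* magic, flags, type, handle, length */
        if (nbd_srep_type_is_error(chunk.type)) {
            read_error_payload(ioc, &chunk, &request_ret);  /* common for all commands */
        } else if (chunk.type == NBD_SREP_TYPE_OFFSET_DATA) {
            read_data_into_qiov(ioc, &chunk, qiov);         /* no bounce buffer */
        }
    } while (!(chunk.flags & NBD_SREP_FLAG_DONE));

Everyone else would instead call a read_chunk() that returns one complete
malloc-ed chunk per call.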

Maybe qio_channel_readv/writev_full could have "offset" and "bytes"
arguments.  Most code in iov_send_recv could be cut-and-pasted.  (When
sheepdog is converted to QIOChannel, iov_send_recv can go away).

Paolo



Re: [Qemu-block] [PATCH v1.1 DRAFT] nbd: Minimal structured read for client

2017-10-03 Thread Vladimir Sementsov-Ogievskiy

Eric?


27.09.2017 18:10, Vladimir Sementsov-Ogievskiy wrote:

Minimal implementation: drop most of additional error information.

Signed-off-by: Vladimir Sementsov-Ogievskiy 
---

Hi!

Here is a draft of how to refactor reply-payload receiving if you don't
like my previous simple (but not flexible) try. Of course, if we agree on this
approach, this patch should be split into several ones and many things
(error handling) should be improved.

The idea is:

nbd_read_reply_entry reads only reply header through nbd/client.c code.

Then, the payload is read through block/nbd-client-cmds.c:
simple payload: generic per-command handler, however it should only exist
   for CMD_READ
structured NONE: no payload, handle in nbd_co_receive_one_chunk
structured error: read by nbd_handle_structured_error_payload
   defined in block/nbd-client-cmds.c
structured success: read by per-[command X reply-type] handler
   defined in block/nbd-client-cmds.c

For now nbd-client-cmds.c looks more like nbd-payload.c, but maybe we
should move the command-sending special-casing (CMD_WRITE) to it too..

Don't waste time on carefully reviewing this patch; let's first consider
the concept of nbd-client-cmds.c.

  block/nbd-client.h  |  10 +++
  include/block/nbd.h |  82 --
  nbd/nbd-internal.h  |  25 --
  block/nbd-client-cmds.c | 220 
  block/nbd-client.c  | 118 --
  nbd/client.c| 128 
  block/Makefile.objs |   2 +-
  7 files changed, 475 insertions(+), 110 deletions(-)
  create mode 100644 block/nbd-client-cmds.c

diff --git a/block/nbd-client.h b/block/nbd-client.h
index b435754b82..abb88e4ea5 100644
--- a/block/nbd-client.h
+++ b/block/nbd-client.h
@@ -35,6 +35,8 @@ typedef struct NBDClientSession {
  NBDClientRequest requests[MAX_NBD_REQUESTS];
  NBDReply reply;
  bool quit;
+
+bool structured_reply;
  } NBDClientSession;
  
  NBDClientSession *nbd_get_client_session(BlockDriverState *bs);

@@ -60,4 +62,12 @@ void nbd_client_detach_aio_context(BlockDriverState *bs);
  void nbd_client_attach_aio_context(BlockDriverState *bs,
 AioContext *new_context);
  
+int nbd_handle_structured_payload(QIOChannel *ioc, int cmd,

+  NBDStructuredReplyChunk *reply, void 
*opaque);
+int nbd_handle_simple_payload(QIOChannel *ioc, int cmd, void *opaque);
+
+int nbd_handle_structured_error_payload(QIOChannel *ioc,
+NBDStructuredReplyChunk *reply,
+int *request_ret);
+
  #endif /* NBD_CLIENT_H */
diff --git a/include/block/nbd.h b/include/block/nbd.h
index 314f2f9bbc..b9a4e1dfa9 100644
--- a/include/block/nbd.h
+++ b/include/block/nbd.h
@@ -57,12 +57,6 @@ struct NBDRequest {
  };
  typedef struct NBDRequest NBDRequest;
  
-struct NBDReply {

-uint64_t handle;
-uint32_t error;
-};
-typedef struct NBDReply NBDReply;
-
  typedef struct NBDSimpleReply {
  uint32_t magic;  /* NBD_SIMPLE_REPLY_MAGIC */
  uint32_t error;
@@ -77,6 +71,24 @@ typedef struct NBDStructuredReplyChunk {
  uint32_t length; /* length of payload */
  } QEMU_PACKED NBDStructuredReplyChunk;
  
+typedef union NBDReply {

+NBDSimpleReply simple;
+NBDStructuredReplyChunk structured;
+struct {
+uint32_t magic;
+uint32_t _skip;
+uint64_t handle;
+} QEMU_PACKED;
+} NBDReply;
+
+#define NBD_SIMPLE_REPLY_MAGIC  0x67446698
+#define NBD_STRUCTURED_REPLY_MAGIC  0x668e33ef
+
+static inline bool nbd_reply_is_simple(NBDReply *reply)
+{
+return reply->magic == NBD_SIMPLE_REPLY_MAGIC;
+}
+
  typedef struct NBDStructuredRead {
  NBDStructuredReplyChunk h;
  uint64_t offset;
@@ -88,6 +100,11 @@ typedef struct NBDStructuredError {
  uint16_t message_length;
  } QEMU_PACKED NBDStructuredError;
  
+typedef struct NBDPayloadOffsetHole {

+uint64_t offset;
+uint32_t hole_size;
+} QEMU_PACKED NBDPayloadOffsetHole;
+
  /* Transmission (export) flags: sent from server to client during handshake,
 but describe what will happen during transmission */
  #define NBD_FLAG_HAS_FLAGS (1 << 0) /* Flags are there */
@@ -178,12 +195,54 @@ enum {
  
  #define NBD_SREP_TYPE_NONE  0

  #define NBD_SREP_TYPE_OFFSET_DATA   1
+#define NBD_SREP_TYPE_OFFSET_HOLE   2
  #define NBD_SREP_TYPE_ERROR NBD_SREP_ERR(1)
+#define NBD_SREP_TYPE_ERROR_OFFSET  NBD_SREP_ERR(2)
+
+static inline bool nbd_srep_type_is_error(int type)
+{
+return type & (1 << 15);
+}
+
+/* NBD errors are based on errno numbers, so there is a 1:1 mapping,
+ * but only a limited set of errno values is specified in the protocol.
+ * Everything else is squashed to EINVAL.
+ */
+#define NBD_SUCCESS0
+#define NBD_EPERM  1
+#define NBD_EIO5
+#define NBD_ENOMEM 12
+#define NBD_EINVAL 22
+#define 

[Qemu-block] [PATCH] hw/block/onenand: Remove dead code block

2017-10-03 Thread Thomas Huth
The condition of the for-loop makes sure that b is always smaller
than s->blocks, so the "if (b >= s->blocks)" statement is completely
superfluous here.

Buglink: https://bugs.launchpad.net/qemu/+bug/1715007
Signed-off-by: Thomas Huth 
---
 hw/block/onenand.c | 4 
 1 file changed, 4 deletions(-)

diff --git a/hw/block/onenand.c b/hw/block/onenand.c
index 30e40f3..de65c9e 100644
--- a/hw/block/onenand.c
+++ b/hw/block/onenand.c
@@ -520,10 +520,6 @@ static void onenand_command(OneNANDState *s)
 s->intstatus |= ONEN_INT;
 
 for (b = 0; b < s->blocks; b ++) {
-if (b >= s->blocks) {
-s->status |= ONEN_ERR_CMD;
-break;
-}
 if (s->blockwp[b] == ONEN_LOCK_LOCKTIGHTEN)
 break;
 
-- 
1.8.3.1




Re: [Qemu-block] [Qemu-devel] [PATCH v4 14/23] qemu-img: Speed up compare on pre-allocated larger file

2017-10-03 Thread Vladimir Sementsov-Ogievskiy

13.09.2017 19:03, Eric Blake wrote:

Compare the following images with all-zero contents:
$ truncate --size 1M A
$ qemu-img create -f qcow2 -o preallocation=off B 1G
$ qemu-img create -f qcow2 -o preallocation=metadata C 1G

On my machine, the difference is noticeable for pre-patch speeds,
with more than an order of magnitude in difference caused by the
choice of preallocation in the qcow2 file:

$ time ./qemu-img compare -f raw -F qcow2 A B
Warning: Image size mismatch!
Images are identical.

real    0m0.014s
user    0m0.007s
sys     0m0.007s

$ time ./qemu-img compare -f raw -F qcow2 A C
Warning: Image size mismatch!
Images are identical.

real    0m0.341s
user    0m0.144s
sys     0m0.188s

Why? Because bdrv_is_allocated() returns false for image B but
true for image C, throwing away the fact that both images know
via lseek(SEEK_HOLE) that the entire image still reads as zero.
From there, qemu-img ends up calling bdrv_pread() for every byte
of the tail, instead of quickly looking for the next allocation.
The solution: use block_status instead of is_allocated, giving:

$ time ./qemu-img compare -f raw -F qcow2 A C
Warning: Image size mismatch!
Images are identical.

real    0m0.014s
user    0m0.011s
sys     0m0.003s

which is on par with the speeds for no pre-allocation.

Signed-off-by: Eric Blake 

---
v3: new patch
---
  qemu-img.c | 8 
  1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/qemu-img.c b/qemu-img.c
index f8423e9b3f..f5ab29d176 100644
--- a/qemu-img.c
+++ b/qemu-img.c
@@ -1477,11 +1477,11 @@ static int img_compare(int argc, char **argv)
  while (sector_num < progress_base) {
  int64_t count;

-ret = bdrv_is_allocated_above(blk_bs(blk_over), NULL,
+ret = bdrv_block_status_above(blk_bs(blk_over), NULL,
sector_num * BDRV_SECTOR_SIZE,
(progress_base - sector_num) *
BDRV_SECTOR_SIZE,
-  );
+  , NULL);
  if (ret < 0) {
  ret = 3;
  error_report("Sector allocation test failed for %s",
@@ -1489,11 +1489,11 @@ static int img_compare(int argc, char **argv)
  goto out;

  }
-/* TODO relax this once bdrv_is_allocated_above does not enforce
+/* TODO relax this once bdrv_block_status_above does not enforce
   * sector alignment */
  assert(QEMU_IS_ALIGNED(count, BDRV_SECTOR_SIZE));
  nb_sectors = count >> BDRV_SECTOR_BITS;
-if (ret) {
+if (ret & BDRV_BLOCK_ALLOCATED && !(ret & BDRV_BLOCK_ZERO)) {
  nb_sectors = MIN(nb_sectors, IO_BUF_SIZE >> BDRV_SECTOR_BITS);
  ret = check_empty_sectors(blk_over, sector_num, nb_sectors,
filename_over, buf1, quiet);


Reviewed-by: Vladimir Sementsov-Ogievskiy 

--
Best regards,
Vladimir



Re: [Qemu-block] [Qemu-devel] [PATCH 0/3] blockjobs: add explicit job culling

2017-10-03 Thread Vladimir Sementsov-Ogievskiy

03.10.2017 06:15, John Snow wrote:

For jobs that complete when a monitor isn't looking, there's no way to
tell what the job's final return code was. We need to allow jobs to
remain in the list until queried for reliable management.

This is an RFC; tests are on the way.
(Tested only manually via qmp-shell for now.)


That's a cool feature!
What about transaction support?



John Snow (3):
   blockjob: add manual-cull property
   qmp: add block-job-cull command
   blockjob: expose manual-cull property

  block/backup.c   | 20 +-
  block/commit.c   |  2 +-
  block/mirror.c   |  2 +-
  block/replication.c  |  5 +++--
  block/stream.c   |  2 +-
  block/trace-events   |  1 +
  blockdev.c   | 28 +
  blockjob.c   | 46 +++--
  include/block/block_int.h|  8 +---
  include/block/blockjob.h | 21 +++
  include/block/blockjob_int.h |  2 +-
  qapi/block-core.json | 49 
  tests/test-blockjob-txn.c|  2 +-
  tests/test-blockjob.c|  2 +-
  14 files changed, 155 insertions(+), 35 deletions(-)




--
Best regards,
Vladimir




Re: [Qemu-block] blockdev-commit design

2017-10-03 Thread Kashyap Chamarthy
On Mon, Oct 02, 2017 at 05:32:52PM +0200, Kevin Wolf wrote:
> Am 02.10.2017 um 17:01 hat Kashyap Chamarthy geschrieben:
> > On Tue, Sep 26, 2017 at 07:59:42PM +0200, Kevin Wolf wrote:

[...]

> > {
> > "execute": "block-commit",
> > "arguments": {
> > "device": "node-D",
> > "job-id": "job0",
> > "top": "d.qcow2",
> > "base": "a.qcow2"
> > }
> > }
> > 
> > So when merging the top-most layer (D), there's at least one
> > scenario where we _are_ specifying the "active layer".  And 'top'
> > _is_ mandatory as seen above.  
> > 
> > So I wonder if I'm misinterpreting your wording.
> 
> The point is that you specify both "device" and "top". There is no real
> use in specifying "device", because "top" already identifies the node.
> (Of course, file names aren't a good way to identify nodes, so the
> assumption is that they are replaced by node names.)

Ah, thanks.  Now I recall that the existing `block-commit` is the only
command (among mirror, backup, stream, and commit) that doesn't yet
support 'node-name' / 'snapshot-node-name'.  Hence your proposal, to
bring it more in line w/ blockdev-{mirror,backup}, and `block-stream`[*]
(which accepts 'base-node', and its "device" parameter takes a named
node).

[*] https://lists.gnu.org/archive/html/qemu-block/2017-05/msg01230.html

> In the active commit case, it is just duplicated information, but in the
> case of an intermediate commit, it is additional information that needs
> to be provided without good reason.

It's clearer now, thank you.
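
So, just to write it down for my own notes, a purely node-based invocation
could then look something like this (the parameter names here are only my
guess at the future interface, not something from an existing patch):

{
    "execute": "blockdev-commit",
    "arguments": {
        "job-id": "job0",
        "top-node": "node-C",
        "base-node": "node-B"
    }
}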

> > (2) Also, just for my own education, can you mind expanding a bit more
> > about the "there can be more than one active layer" scenario?
> 
> Something like this, C is the backing file of both D and E:
> 
> 
> +--- D
> |
> A <- B <- C <---+
> |
> +--- E
> 
> I want to commit C into B. But do I specify D or E as the active layer?
> They are both active layers above C.

Ah-ha, yes, it's the "thin provisioning" case.

Thanks for the explanation.

-- 
/kashyap