Re: [Qemu-block] [PATCH v3 04/33] make Device and Bus Resettable

2019-08-06 Thread David Gibson
On Wed, Jul 31, 2019 at 11:09:05AM +0200, Damien Hedde wrote:
> 
> 
> On 7/31/19 7:56 AM, David Gibson wrote:
> > On Mon, Jul 29, 2019 at 04:56:25PM +0200, Damien Hedde wrote:
> >> This adds a Resettable interface implementation for both Bus and Device.
> >>
> >> A *resetting* counter and a *reset_is_cold* flag are added to
> >> DeviceState and BusState.
> >>
> >> Compatibility with the existing code base is ensured.
> >> The legacy bus or device reset method is called in the new exit phase
> >> and the other two phases are left empty. Using the exit phase guarantees
> >> that legacy resets are called in "post" order (i.e. children before
> >> parents) during a hierarchical reset. That is the same order that the
> >> legacy qdev_reset_all and qbus_reset_all used.
> >>
> >> New *device_reset* and *bus_reset* functions are proposed, with an
> >> additional boolean argument telling whether the reset is cold or warm.
> >> *device_reset_[warm|cold]* and *bus_reset_[warm|cold]* are also defined
> >> as convenience helpers.
> >>
> >> Also add [device|bus]_is_resetting and [device|bus]_is_reset_cold
> >> functions telling, respectively, whether the object is currently under
> >> reset and whether the current reset is cold.
> >>
> >> Signed-off-by: Damien Hedde 
> >> ---
> >>  hw/core/bus.c  | 85 ++
> >>  hw/core/qdev.c | 82 
> >>  include/hw/qdev-core.h | 84 ++---
> >>  tests/Makefile.include |  1 +
> >>  4 files changed, 247 insertions(+), 5 deletions(-)
> >>
> >> diff --git a/hw/core/bus.c b/hw/core/bus.c
> >> index 17bc1edcde..08a97addb6 100644
> >> --- a/hw/core/bus.c
> >> +++ b/hw/core/bus.c
> >> @@ -22,6 +22,7 @@
> >>  #include "qemu/module.h"
> >>  #include "hw/qdev.h"
> >>  #include "qapi/error.h"
> >> +#include "hw/resettable.h"
> >>  
> >>  void qbus_set_hotplug_handler(BusState *bus, Object *handler, Error 
> >> **errp)
> >>  {
> >> @@ -68,6 +69,75 @@ int qbus_walk_children(BusState *bus,
> >>  return 0;
> >>  }
> >>  
> >> +void bus_reset(BusState *bus, bool cold)
> >> +{
> >> +resettable_reset(OBJECT(bus), cold);
> >> +}
> >> +
> >> +bool bus_is_resetting(BusState *bus)
> >> +{
> >> +return (bus->resetting != 0);
> >> +}
> >> +
> >> +bool bus_is_reset_cold(BusState *bus)
> >> +{
> >> +return bus->reset_is_cold;
> >> +}
> >> +
> >> +static uint32_t bus_get_reset_count(Object *obj)
> >> +{
> >> +BusState *bus = BUS(obj);
> >> +return bus->resetting;
> >> +}
> >> +
> >> +static uint32_t bus_increment_reset_count(Object *obj)
> >> +{
> >> +BusState *bus = BUS(obj);
> >> +return ++bus->resetting;
> >> +}
> >> +
> >> +static uint32_t bus_decrement_reset_count(Object *obj)
> >> +{
> >> +BusState *bus = BUS(obj);
> >> +return --bus->resetting;
> >> +}
> >> +
> >> +static bool bus_set_reset_cold(Object *obj, bool cold)
> >> +{
> >> +BusState *bus = BUS(obj);
> >> +bool old = bus->reset_is_cold;
> >> +bus->reset_is_cold = cold;
> >> +return old;
> >> +}
> >> +
> >> +static bool bus_set_hold_needed(Object *obj, bool hold_needed)
> >> +{
> >> +BusState *bus = BUS(obj);
> >> +bool old = bus->reset_hold_needed;
> >> +bus->reset_hold_needed = hold_needed;
> >> +return old;
> >> +}
> >> +
> >> +static void bus_foreach_reset_child(Object *obj, void (*func)(Object *))
> >> +{
> >> +BusState *bus = BUS(obj);
> >> +BusChild *kid;
> >> +
> >> +QTAILQ_FOREACH(kid, &bus->children, sibling) {
> >> +func(OBJECT(kid->child));
> >> +}
> >> +}
> > 
> > IIUC, every resettable class would need more or less identical
> > implementations of the above.  That seems like an awful lot of
> > boilerplate.
> 
Do you mean the get/increment_count/decrement_count, set_cold/hold part?
True, but it's limited to the base classes.
Since Resettable is an interface, we have no state there to store what
we need. The only alternative is to have some kind of single
get_resettable_state method returning a pointer to the state (allowing
us to keep the functions in the interface code).
Beyond Device and Bus, which are handled here, there are probably not
many candidate classes for the Resettable interface.

Right.  I can't immediately see a better way to handle this, but it
still seems like a mild code smell.

-- 
David Gibson| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au  | minimalist, thank you.  NOT _the_ _other_
| _way_ _around_!
http://www.ozlabs.org/~dgibson




Re: [Qemu-block] [PATCH] block: Simplify bdrv_filter_default_perms()

2019-08-06 Thread Max Reitz
On 02.08.19 16:03, Kevin Wolf wrote:
> The same change as commit 2b23f28639 ('block/copy-on-read: Fix
> permissions for inactive node') made for the copy-on-read driver can be
> made for bdrv_filter_default_perms(): Retaining the old permissions from
> the BdrvChild if it is given complicates things unnecessarily when in the
> end this only means that the options set in the c == NULL case (i.e.
> during child creation) are retained.

My best guess is that we had this code because this way any party could
set or remove the GRAPH_MOD permission and it would be kept through
.bdrv_child_perm() invocations.  (But I think that’s kaputt.)

> Signed-off-by: Kevin Wolf 
> ---
>  block.c | 12 ++--
>  1 file changed, 2 insertions(+), 10 deletions(-)

Reviewed-by: Max Reitz 





Re: [Qemu-block] [PATCH v5 0/6] Allow Valgrind checking all QEMU processes

2019-08-06 Thread Andrey Shinkevich
PINGING...

On 19/07/2019 19:30, Andrey Shinkevich wrote:
> In the current implementation of the QEMU bash iotests, only qemu-io
> processes may be run under Valgrind, which is a useful tool for
> finding memory usage issues. Let's allow the common.rc bash script
> to run all the QEMU processes, such as qemu-kvm, qemu-img, qemu-nbd
> and qemu-vxhs, under Valgrind.
> 
> v5:
>01: The patch "block/nbd: NBDReply is used being uninitialized" was 
> detached
>and taken into account in the patch "nbd: Initialize reply on failure"
>by Eric Blake.
> 
> v4:
>01: The patch "iotests: Set read-zeroes on in null block driver for 
> Valgrind"
>was extended with new cases and issued as a separate series.
>02: The new patch "block/nbd: NBDReply is used being uninitialized" was
>added to resolve the failure of the iotest 083 run under Valgrind.
> 
> v3:
>01: The new function _casenotrun() was added to the common.rc bash
>script to notify the user of test cases dropped for some reason.
>Suggested by Kevin.
>Particularly, the notification about the nonexistent TMPDIR in
>the test 051 was added (noticed by Vladimir).
>02: The timeout in some test cases was extended for Valgrind because
>it differs when running on the ramdisk.
>    03: Since the common.nbd script was changed in commit b28f582c, the
>    patch "iotests: amend QEMU NBD process synchronization" is no
>    longer needed. Note that QEMU_NBD is launched in a nested bash
>    shell in _qemu_nbd_wrapper() as it was before in common.rc.
>    04: The patch "iotests: new file to suppress Valgrind errors" was
>    dropped due to my superficial understanding of how the function
>    blk_pread_unthrottled() works. Special thanks to Kevin, who shed
>    light on the null block driver involved. Now the parameter
>    'read-zeroes=on' is passed to the null block driver to initialize
>    the buffer in guess_disk_lchs(), which Valgrind was complaining
>    about.
> 
> v2:
>01: The patch 2/7 of v1 was merged into the patch 1/7, suggested by Daniel.
>02: Another patch 7/7 was added to introduce the Valgrind error suppression
>file into the QEMU project.
>Discussed in the email thread with the message ID:
><1560276131-683243-1-git-send-email-andrey.shinkev...@virtuozzo.com>
> 
> Andrey Shinkevich (6):
>iotests: allow Valgrind checking all QEMU processes
>iotests: exclude killed processes from running under  Valgrind
>iotests: Add casenotrun report to bash tests
>iotests: Valgrind fails with nonexistent directory
>iotests: extended timeout under Valgrind
>iotests: extend sleeping time under Valgrind
> 
>   tests/qemu-iotests/028   |  6 +++-
>   tests/qemu-iotests/039   |  5 +++
>   tests/qemu-iotests/039.out   | 30 +++--
>   tests/qemu-iotests/051   |  4 +++
>   tests/qemu-iotests/061   |  2 ++
>   tests/qemu-iotests/061.out   | 12 ++-
>   tests/qemu-iotests/137   |  1 +
>   tests/qemu-iotests/137.out   |  6 +---
>   tests/qemu-iotests/183   |  9 +-
>   tests/qemu-iotests/192   |  6 +++-
>   tests/qemu-iotests/247   |  6 +++-
>   tests/qemu-iotests/common.rc | 76 
> +---
>   12 files changed, 101 insertions(+), 62 deletions(-)
> 

-- 
With the best regards,
Andrey Shinkevich


Re: [Qemu-block] [PATCH v2] util/hbitmap: strict hbitmap_reset

2019-08-06 Thread Vladimir Sementsov-Ogievskiy
06.08.2019 19:09, Max Reitz wrote:
> On 06.08.19 17:26, Vladimir Sementsov-Ogievskiy wrote:
>> hbitmap_reset has an unobvious property: it rounds the requested
>> region up. This may provoke bugs, as in the recently fixed
>> write-blocking mode of mirror: the user calls reset on an unaligned
>> region, not keeping in mind that there may be unrelated dirty bytes
>> covered by the rounded-up region, and the information about this
>> unrelated "dirtiness" will be lost.
>>
>> Make hbitmap_reset strict: assert that the arguments are aligned,
>> allowing only one exception, when @start + @count == hb->orig_size.
>> This is needed to accommodate users of hbitmap_next_dirty_area, which
>> cares about hb->orig_size.
>>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy 
>> ---
>>
>> v2 based on Max's https://github.com/XanClic/qemu.git block
>> which will be merged soon to 4.1, and this patch goes to 4.2
>> Based-on: https://github.com/XanClic/qemu.git block
>>
>> v1 was "[PATCH] util/hbitmap: fix unaligned reset", and as I understand
>> we all agreed to just assert alignment instead of aligning down
>> automatically.
>>
>>   include/qemu/hbitmap.h | 5 +
>>   tests/test-hbitmap.c   | 2 +-
>>   util/hbitmap.c | 4 
>>   3 files changed, 10 insertions(+), 1 deletion(-)
>>
>> diff --git a/include/qemu/hbitmap.h b/include/qemu/hbitmap.h
>> index 4afbe6292e..7865e819ca 100644
>> --- a/include/qemu/hbitmap.h
>> +++ b/include/qemu/hbitmap.h
>> @@ -132,6 +132,11 @@ void hbitmap_set(HBitmap *hb, uint64_t start, uint64_t 
>> count);
>>* @count: Number of bits to reset.
>>*
>>* Reset a consecutive range of bits in an HBitmap.
>> + * @start and @count must be aligned to bitmap granularity. The only 
>> exception
>> + * is resetting the tail of the bitmap: @count may be equal to @start +
>> + * hb->orig_size,
> 
> s/@start + hb->orig_size/hb->orig_size - @start/, I think.

Ha, I wanted to say start + count equal to orig_size. Yours is OK too of course.

> 
>>  in this case @count may be not aligned. @start + @count
> 
> +are
> 
> With those fixed:
> 
> Reviewed-by: Max Reitz 

Thanks!

> 
>> + * allowed to be greater than hb->orig_size, but only if @start < 
>> hb->orig_size
>> + * and @start + @count = ALIGN_UP(hb->orig_size, granularity).
>>*/
>>   void hbitmap_reset(HBitmap *hb, uint64_t start, uint64_t count);
>>   
>> diff --git a/tests/test-hbitmap.c b/tests/test-hbitmap.c
>> index 592d8219db..2be56d1597 100644
>> --- a/tests/test-hbitmap.c
>> +++ b/tests/test-hbitmap.c
>> @@ -423,7 +423,7 @@ static void test_hbitmap_granularity(TestHBitmapData 
>> *data,
>>   hbitmap_test_check(data, 0);
>>   hbitmap_test_set(data, 0, 3);
>>   g_assert_cmpint(hbitmap_count(data->hb), ==, 4);
>> -hbitmap_test_reset(data, 0, 1);
>> +hbitmap_test_reset(data, 0, 2);
>>   g_assert_cmpint(hbitmap_count(data->hb), ==, 2);
>>   }
>>   
>> diff --git a/util/hbitmap.c b/util/hbitmap.c
>> index bcc0acdc6a..586920cb52 100644
>> --- a/util/hbitmap.c
>> +++ b/util/hbitmap.c
>> @@ -476,6 +476,10 @@ void hbitmap_reset(HBitmap *hb, uint64_t start, 
>> uint64_t count)
>>   /* Compute range in the last layer.  */
>>   uint64_t first;
>>   uint64_t last = start + count - 1;
>> +uint64_t gran = 1ULL << hb->granularity;
>> +
>> +assert(!(start & (gran - 1)));
>> +assert(!(count & (gran - 1)) || (start + count == hb->orig_size));
>>   
>>   trace_hbitmap_reset(hb, start, count,
>>   start >> hb->granularity, last >> hb->granularity);
>>
> 
> 


-- 
Best regards,
Vladimir


Re: [Qemu-block] [PATCH v2] util/hbitmap: strict hbitmap_reset

2019-08-06 Thread Max Reitz
On 06.08.19 17:26, Vladimir Sementsov-Ogievskiy wrote:
> hbitmap_reset has an unobvious property: it rounds the requested
> region up. This may provoke bugs, as in the recently fixed
> write-blocking mode of mirror: the user calls reset on an unaligned
> region, not keeping in mind that there may be unrelated dirty bytes
> covered by the rounded-up region, and the information about this
> unrelated "dirtiness" will be lost.
> 
> Make hbitmap_reset strict: assert that the arguments are aligned,
> allowing only one exception, when @start + @count == hb->orig_size.
> This is needed to accommodate users of hbitmap_next_dirty_area, which
> cares about hb->orig_size.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy 
> ---
> 
> v2 based on Max's https://github.com/XanClic/qemu.git block
> which will be merged soon to 4.1, and this patch goes to 4.2
> Based-on: https://github.com/XanClic/qemu.git block
> 
> v1 was "[PATCH] util/hbitmap: fix unaligned reset", and as I understand
> we all agreed to just assert alignment instead of aligning down
> automatically.
> 
>  include/qemu/hbitmap.h | 5 +
>  tests/test-hbitmap.c   | 2 +-
>  util/hbitmap.c | 4 
>  3 files changed, 10 insertions(+), 1 deletion(-)
> 
> diff --git a/include/qemu/hbitmap.h b/include/qemu/hbitmap.h
> index 4afbe6292e..7865e819ca 100644
> --- a/include/qemu/hbitmap.h
> +++ b/include/qemu/hbitmap.h
> @@ -132,6 +132,11 @@ void hbitmap_set(HBitmap *hb, uint64_t start, uint64_t 
> count);
>   * @count: Number of bits to reset.
>   *
>   * Reset a consecutive range of bits in an HBitmap.
> + * @start and @count must be aligned to bitmap granularity. The only 
> exception
> + * is resetting the tail of the bitmap: @count may be equal to @start +
> + * hb->orig_size,

s/@start + hb->orig_size/hb->orig_size - @start/, I think.

> in this case @count may be not aligned. @start + @count

+are

With those fixed:

Reviewed-by: Max Reitz 

> + * allowed to be greater than hb->orig_size, but only if @start < 
> hb->orig_size
> + * and @start + @count = ALIGN_UP(hb->orig_size, granularity).
>   */
>  void hbitmap_reset(HBitmap *hb, uint64_t start, uint64_t count);
>  
> diff --git a/tests/test-hbitmap.c b/tests/test-hbitmap.c
> index 592d8219db..2be56d1597 100644
> --- a/tests/test-hbitmap.c
> +++ b/tests/test-hbitmap.c
> @@ -423,7 +423,7 @@ static void test_hbitmap_granularity(TestHBitmapData 
> *data,
>  hbitmap_test_check(data, 0);
>  hbitmap_test_set(data, 0, 3);
>  g_assert_cmpint(hbitmap_count(data->hb), ==, 4);
> -hbitmap_test_reset(data, 0, 1);
> +hbitmap_test_reset(data, 0, 2);
>  g_assert_cmpint(hbitmap_count(data->hb), ==, 2);
>  }
>  
> diff --git a/util/hbitmap.c b/util/hbitmap.c
> index bcc0acdc6a..586920cb52 100644
> --- a/util/hbitmap.c
> +++ b/util/hbitmap.c
> @@ -476,6 +476,10 @@ void hbitmap_reset(HBitmap *hb, uint64_t start, uint64_t 
> count)
>  /* Compute range in the last layer.  */
>  uint64_t first;
>  uint64_t last = start + count - 1;
> +uint64_t gran = 1ULL << hb->granularity;
> +
> +assert(!(start & (gran - 1)));
> +assert(!(count & (gran - 1)) || (start + count == hb->orig_size));
>  
>  trace_hbitmap_reset(hb, start, count,
>  start >> hb->granularity, last >> hb->granularity);
> 






[Qemu-block] [PATCH v2] util/hbitmap: strict hbitmap_reset

2019-08-06 Thread Vladimir Sementsov-Ogievskiy
hbitmap_reset has an unobvious property: it rounds the requested region
up. This may provoke bugs, as in the recently fixed write-blocking mode
of mirror: the user calls reset on an unaligned region, not keeping in
mind that there may be unrelated dirty bytes covered by the rounded-up
region, and the information about this unrelated "dirtiness" will be
lost.

Make hbitmap_reset strict: assert that the arguments are aligned,
allowing only one exception, when @start + @count == hb->orig_size. This
is needed to accommodate users of hbitmap_next_dirty_area, which cares
about hb->orig_size.

Signed-off-by: Vladimir Sementsov-Ogievskiy 
---

v2 based on Max's https://github.com/XanClic/qemu.git block
which will be merged soon to 4.1, and this patch goes to 4.2
Based-on: https://github.com/XanClic/qemu.git block

v1 was "[PATCH] util/hbitmap: fix unaligned reset", and as I understand
we all agreed to just assert alignment instead of aligning down
automatically.

 include/qemu/hbitmap.h | 5 +
 tests/test-hbitmap.c   | 2 +-
 util/hbitmap.c | 4 
 3 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/include/qemu/hbitmap.h b/include/qemu/hbitmap.h
index 4afbe6292e..7865e819ca 100644
--- a/include/qemu/hbitmap.h
+++ b/include/qemu/hbitmap.h
@@ -132,6 +132,11 @@ void hbitmap_set(HBitmap *hb, uint64_t start, uint64_t 
count);
  * @count: Number of bits to reset.
  *
  * Reset a consecutive range of bits in an HBitmap.
+ * @start and @count must be aligned to bitmap granularity. The only exception
+ * is resetting the tail of the bitmap: @count may be equal to @start +
+ * hb->orig_size, in this case @count may be not aligned. @start + @count
+ * allowed to be greater than hb->orig_size, but only if @start < hb->orig_size
+ * and @start + @count = ALIGN_UP(hb->orig_size, granularity).
  */
 void hbitmap_reset(HBitmap *hb, uint64_t start, uint64_t count);
 
diff --git a/tests/test-hbitmap.c b/tests/test-hbitmap.c
index 592d8219db..2be56d1597 100644
--- a/tests/test-hbitmap.c
+++ b/tests/test-hbitmap.c
@@ -423,7 +423,7 @@ static void test_hbitmap_granularity(TestHBitmapData *data,
 hbitmap_test_check(data, 0);
 hbitmap_test_set(data, 0, 3);
 g_assert_cmpint(hbitmap_count(data->hb), ==, 4);
-hbitmap_test_reset(data, 0, 1);
+hbitmap_test_reset(data, 0, 2);
 g_assert_cmpint(hbitmap_count(data->hb), ==, 2);
 }
 
diff --git a/util/hbitmap.c b/util/hbitmap.c
index bcc0acdc6a..586920cb52 100644
--- a/util/hbitmap.c
+++ b/util/hbitmap.c
@@ -476,6 +476,10 @@ void hbitmap_reset(HBitmap *hb, uint64_t start, uint64_t 
count)
 /* Compute range in the last layer.  */
 uint64_t first;
 uint64_t last = start + count - 1;
+uint64_t gran = 1ULL << hb->granularity;
+
+assert(!(start & (gran - 1)));
+assert(!(count & (gran - 1)) || (start + count == hb->orig_size));
 
 trace_hbitmap_reset(hb, start, count,
 start >> hb->granularity, last >> hb->granularity);
-- 
2.18.0




Re: [Qemu-block] [PULL v2 0/7] Block patches for 4.1.0-rc4

2019-08-06 Thread Peter Maydell
On Tue, 6 Aug 2019 at 12:59, Max Reitz  wrote:
>
> The following changes since commit 9bb68d34dda9be60335e73e65c8fb61bca035362:
>
>   Merge remote-tracking branch 
> 'remotes/philmd-gitlab/tags/edk2-next-20190803' into staging (2019-08-05 
> 11:05:36 +0100)
>
> are available in the Git repository at:
>
>   https://github.com/XanClic/qemu.git tags/pull-block-2019-08-06
>
> for you to fetch changes up to 110571be4e70ac015628e76d2731f96dd8d1998c:
>
>   block/backup: disable copy_range for compressed backup (2019-08-06 13:17:27 
> +0200)
> 
> v2: Added “Cc: qemu-stable” tag where necessary
>
> 
> Block patches for 4.1.0-rc4:
> - Fix the backup block job when using copy offloading
> - Fix the mirror block job when using the write-blocking copy mode
> - Fix incremental backups after the image has been grown with the
>   respective bitmap attached to it
>


Applied, thanks.

Please update the changelog at https://wiki.qemu.org/ChangeLog/4.1
for any user-visible changes.

-- PMM



Re: [Qemu-block] [PATCH] util/hbitmap: fix unaligned reset

2019-08-06 Thread Vladimir Sementsov-Ogievskiy
06.08.2019 16:30, John Snow wrote:
> 
> 
> On 8/6/19 8:39 AM, Vladimir Sementsov-Ogievskiy wrote:
>> 03.08.2019 0:19, Max Reitz wrote:
>>> On 02.08.19 20:58, Vladimir Sementsov-Ogievskiy wrote:
 hbitmap_reset is broken: it rounds up the requested region. This leads
 to the following bug, which is shown by the fixed test:

 assume granularity = 2
 set(0, 3) # count becomes 4
 reset(0, 1) # count becomes 2

 But the user of the interface assumes that virtual bit 1 should still
 be dirty, so hbitmap should report the count to be 4!

 In other words, because of granularity, when we set one "virtual" bit
 we make all "virtual" bits in the same chunk dirty. But this should
 not be so for reset.

 Fix this by aligning the bounds correctly.

 Signed-off-by: Vladimir Sementsov-Ogievskiy 
 ---

 Hi all!

 Hmm, is it a bug or feature? :)
 I don't have a test for mirror yet, but I think that sync mirror may be 
 broken
 because of this, as do_sync_target_write() seems to be using unaligned 
 reset.
>>>
>>> Crap.
>>>
>>>
>>> Yes, you’re right.  This would fix it, and it wouldn’t fix it in the
>>> worst way.
>>>
>>> But I don’t know whether this patch is the best way forward still.  I
>>> think call hbitmap_reset() with unaligned boundaries generally calls for
>>> trouble, as John has laid out.  If mirror’s do_sync_target_write() is
>>> the only offender right now,
>>
>> Another thing is migration/block. Should we care about it? Is it
>> supported at all?
>>
> 
> Downstream products always have time and room to get additional fixes; I
> think this is supported from an upstream POV so we should investigate this.
> 
> I assume migration/block has the same problem that it fully clears
> unaligned blocks?
> 


Hmm, after a closer look, it seems like it's OK. It's just a bit more
difficult to see than in other places with reset.


-- 
Best regards,
Vladimir


Re: [Qemu-block] [PATCH] util/hbitmap: fix unaligned reset

2019-08-06 Thread John Snow



On 8/6/19 8:39 AM, Vladimir Sementsov-Ogievskiy wrote:
> 03.08.2019 0:19, Max Reitz wrote:
>> On 02.08.19 20:58, Vladimir Sementsov-Ogievskiy wrote:
>>> hbitmap_reset is broken: it rounds up the requested region. This
>>> leads to the following bug, which is shown by the fixed test:
>>>
>>> assume granularity = 2
>>> set(0, 3) # count becomes 4
>>> reset(0, 1) # count becomes 2
>>>
>>> But the user of the interface assumes that virtual bit 1 should still
>>> be dirty, so hbitmap should report the count to be 4!
>>>
>>> In other words, because of granularity, when we set one "virtual" bit
>>> we make all "virtual" bits in the same chunk dirty. But this should
>>> not be so for reset.
>>>
>>> Fix this by aligning the bounds correctly.
>>>
>>> Signed-off-by: Vladimir Sementsov-Ogievskiy 
>>> ---
>>>
>>> Hi all!
>>>
>>> Hmm, is it a bug or feature? :)
>>> I don't have a test for mirror yet, but I think that sync mirror may be 
>>> broken
>>> because of this, as do_sync_target_write() seems to be using unaligned 
>>> reset.
>>
>> Crap.
>>
>>
>> Yes, you’re right.  This would fix it, and it wouldn’t fix it in the
>> worst way.
>>
>> But I don’t know whether this patch is the best way forward still.  I
>> think calling hbitmap_reset() with unaligned boundaries generally calls for
>> trouble, as John has laid out.  If mirror’s do_sync_target_write() is
>> the only offender right now, 
> 
> Another thing is migration/block. Should we care about it? Is it
> supported at all?
> 

Downstream products always have time and room to get additional fixes; I
think this is supported from an upstream POV so we should investigate this.

I assume migration/block has the same problem that it fully clears
unaligned blocks?

(I can look shortly but can't just yet.)

--js



Re: [Qemu-block] [Qemu-devel] [PATCH 2/3] iotests: Enable -d for Python non-unittest tests

2019-08-06 Thread John Snow



On 8/6/19 5:51 AM, Kevin Wolf wrote:
> Am 06.08.2019 um 00:19 hat John Snow geschrieben:
>>
>>
>> On 8/2/19 10:07 AM, Kevin Wolf wrote:
>>> Am 01.08.2019 um 19:57 hat Max Reitz geschrieben:
 On 01.08.19 17:17, Kevin Wolf wrote:
> The part of iotests.main() that is related to the implementation of the
> debug option -d and enables QEMU and QMP logging is not only useful in
> tests that use the Python unittest framework, but also in tests that
> work by comparing with a reference output.
>
> Factor these parts out into iotests.init() and call it from the test
> cases that currently lack support for debug output.

 How does this relate to
 https://lists.nongnu.org/archive/html/qemu-block/2019-07/msg01212.html ?
>>>
>>> Hm, no idea? :-)
>>>
>>> Looks like John's patch depends on some other patches which would then
>>> conflict with mine, too, so maybe I'll just drop my patch and wait what
>>> happens?
>>>
>>> John, any opinion?
>>>
>>> Kevin
>>>
>>
>> My patches do roughly the same plus a little more. If you don't mind
>> waiting, I can take care of this for you when the tree reopens?
> 
> Okay, I'll drop my patch then.
> 
> Kevin
> 

Thank you! I will try to fix my bitmaps branch ASAP in light of the
recent fixes and get this staged in an easy to test way for the 4.2 branch.

--js



Re: [Qemu-block] [PATCH] util/hbitmap: fix unaligned reset

2019-08-06 Thread Vladimir Sementsov-Ogievskiy
03.08.2019 0:19, Max Reitz wrote:
> On 02.08.19 20:58, Vladimir Sementsov-Ogievskiy wrote:
>> hbitmap_reset is broken: it rounds up the requested region. This
>> leads to the following bug, which is shown by the fixed test:
>>
>> assume granularity = 2
>> set(0, 3) # count becomes 4
>> reset(0, 1) # count becomes 2
>>
>> But the user of the interface assumes that virtual bit 1 should still
>> be dirty, so hbitmap should report the count to be 4!
>>
>> In other words, because of granularity, when we set one "virtual" bit
>> we make all "virtual" bits in the same chunk dirty. But this should
>> not be so for reset.
>>
>> Fix this by aligning the bounds correctly.
>>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy 
>> ---
>>
>> Hi all!
>>
>> Hmm, is it a bug or feature? :)
>> I don't have a test for mirror yet, but I think that sync mirror may be 
>> broken
>> because of this, as do_sync_target_write() seems to be using unaligned reset.
> 
> Crap.
> 
> 
> Yes, you’re right.  This would fix it, and it wouldn’t fix it in the
> worst way.
> 
> But I don’t know whether this patch is the best way forward still.  I
> think calling hbitmap_reset() with unaligned boundaries generally calls for
> trouble, as John has laid out.  If mirror’s do_sync_target_write() is
> the only offender right now, 

Another thing is migration/block. Should we care about it? Is it supported at all?


-- 
Best regards,
Vladimir


[Qemu-block] [PULL v2 7/7] block/backup: disable copy_range for compressed backup

2019-08-06 Thread Max Reitz
From: Vladimir Sementsov-Ogievskiy 

copy_range, which is enabled by default, ignores the compress option.
That is definitely unexpected for the user.

This has been broken since the introduction of copy_range usage in
backup in commit 9ded4a011496.

Signed-off-by: Vladimir Sementsov-Ogievskiy 
Message-id: 20190730163251.755248-3-vsement...@virtuozzo.com
Reviewed-by: John Snow 
Reviewed-by: Max Reitz 
Cc: qemu-sta...@nongnu.org
Signed-off-by: Max Reitz 
---
 block/backup.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/block/backup.c b/block/backup.c
index 1ee271f9f1..b26c22c4b8 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -657,7 +657,7 @@ BlockJob *backup_job_create(const char *job_id, 
BlockDriverState *bs,
 job->cluster_size = cluster_size;
 job->copy_bitmap = copy_bitmap;
 copy_bitmap = NULL;
-job->use_copy_range = true;
+job->use_copy_range = !compress; /* compression isn't supported for it */
 job->copy_range_size = MIN_NON_ZERO(blk_get_max_transfer(job->common.blk),
 blk_get_max_transfer(job->target));
 job->copy_range_size = MAX(job->cluster_size,
-- 
2.21.0




[Qemu-block] [PULL v2 5/7] mirror: Only mirror granularity-aligned chunks

2019-08-06 Thread Max Reitz
In write-blocking mode, all writes to the top node directly go to the
target.  We must only mirror chunks of data that are aligned to the
job's granularity, because that is how the dirty bitmap works.
Therefore, the request alignment for writes must be the job's
granularity (in write-blocking mode).

Unfortunately, this forces all reads and writes to have the same
granularity (we only need this alignment for writes to the target, not
the source), but that is something to be fixed another time.

Cc: qemu-sta...@nongnu.org
Signed-off-by: Max Reitz 
Message-id: 20190805153308.2657-1-mre...@redhat.com
Reviewed-by: Vladimir Sementsov-Ogievskiy 
Fixes: d06107ade0ce74dc39739bac80de84b51ec18546
Signed-off-by: Max Reitz 
---
 block/mirror.c | 29 +
 1 file changed, 29 insertions(+)

diff --git a/block/mirror.c b/block/mirror.c
index 8cb75fb409..9f5c59ece1 100644
--- a/block/mirror.c
+++ b/block/mirror.c
@@ -1481,6 +1481,15 @@ static void bdrv_mirror_top_child_perm(BlockDriverState 
*bs, BdrvChild *c,
 *nshared = BLK_PERM_ALL;
 }
 
+static void bdrv_mirror_top_refresh_limits(BlockDriverState *bs, Error **errp)
+{
+MirrorBDSOpaque *s = bs->opaque;
+
+if (s && s->job && s->job->copy_mode == MIRROR_COPY_MODE_WRITE_BLOCKING) {
+bs->bl.request_alignment = s->job->granularity;
+}
+}
+
 /* Dummy node that provides consistent read to its users without requiring it
  * from its backing file and that allows writes on the backing file chain. */
 static BlockDriver bdrv_mirror_top = {
@@ -1493,6 +1502,7 @@ static BlockDriver bdrv_mirror_top = {
 .bdrv_co_block_status   = bdrv_co_block_status_from_backing,
 .bdrv_refresh_filename  = bdrv_mirror_top_refresh_filename,
 .bdrv_child_perm= bdrv_mirror_top_child_perm,
+.bdrv_refresh_limits= bdrv_mirror_top_refresh_limits,
 };
 
 static BlockJob *mirror_start_job(
@@ -1637,6 +1647,25 @@ static BlockJob *mirror_start_job(
 s->should_complete = true;
 }
 
+/*
+ * Must be called before we start tracking writes, but after
+ *
+ * ((MirrorBlockJob *)
+ * ((MirrorBDSOpaque *)
+ * mirror_top_bs->opaque
+ * )->job
+ * )->copy_mode
+ *
+ * has the correct value.
+ * (We start tracking writes as of the following
+ * bdrv_create_dirty_bitmap() call.)
+ */
+bdrv_refresh_limits(mirror_top_bs, &local_err);
+if (local_err) {
+error_propagate(errp, local_err);
+goto fail;
+}
+
 s->dirty_bitmap = bdrv_create_dirty_bitmap(bs, granularity, NULL, errp);
 if (!s->dirty_bitmap) {
 goto fail;
-- 
2.21.0




[Qemu-block] [PULL v2 4/7] iotests: Test incremental backup after truncation

2019-08-06 Thread Max Reitz
Signed-off-by: Max Reitz 
Message-id: 20190805152840.32190-1-mre...@redhat.com
Signed-off-by: Max Reitz 
---
 tests/qemu-iotests/124 | 38 ++
 tests/qemu-iotests/124.out |  4 ++--
 2 files changed, 36 insertions(+), 6 deletions(-)

diff --git a/tests/qemu-iotests/124 b/tests/qemu-iotests/124
index 80b356f7bb..3440f54781 100755
--- a/tests/qemu-iotests/124
+++ b/tests/qemu-iotests/124
@@ -212,25 +212,28 @@ class TestIncrementalBackupBase(iotests.QMPTestCase):
 return bitmap
 
 
-def prepare_backup(self, bitmap=None, parent=None):
+def prepare_backup(self, bitmap=None, parent=None, **kwargs):
 if bitmap is None:
 bitmap = self.bitmaps[-1]
 if parent is None:
 parent, _ = bitmap.last_target()
 
 target, _ = bitmap.new_target()
-self.img_create(target, bitmap.drive['fmt'], parent=parent)
+self.img_create(target, bitmap.drive['fmt'], parent=parent,
+**kwargs)
 return target
 
 
 def create_incremental(self, bitmap=None, parent=None,
-   parentFormat=None, validate=True):
+   parentFormat=None, validate=True,
+   target=None):
 if bitmap is None:
 bitmap = self.bitmaps[-1]
 if parent is None:
 parent, _ = bitmap.last_target()
 
-target = self.prepare_backup(bitmap, parent)
+if target is None:
+target = self.prepare_backup(bitmap, parent)
 res = self.do_qmp_backup(job_id=bitmap.drive['id'],
  device=bitmap.drive['id'],
  sync='incremental', bitmap=bitmap.name,
@@ -572,6 +575,33 @@ class TestIncrementalBackup(TestIncrementalBackupBase):
   'bitmap0', self.drives[0],
   granularity=64000)
 
+def test_growing_before_backup(self):
+'''
+Test: Add a bitmap, truncate the image, write past the old
+  end, do a backup.
+
+Incremental backup should not ignore dirty bits past the old
+image end.
+'''
+self.assert_no_active_block_jobs()
+
+self.create_anchor_backup()
+
+self.add_bitmap('bitmap0', self.drives[0])
+
+res = self.vm.qmp('block_resize', device=self.drives[0]['id'],
+  size=(65 * 1048576))
+self.assert_qmp(res, 'return', {})
+
+# Dirty the image past the old end
+self.vm.hmp_qemu_io(self.drives[0]['id'], 'write 64M 64k')
+
+target = self.prepare_backup(size='65M')
+self.create_incremental(target=target)
+
+self.vm.shutdown()
+self.check_backups()
+
 
 class TestIncrementalBackupBlkdebug(TestIncrementalBackupBase):
 '''Incremental backup tests that utilize a BlkDebug filter on drive0.'''
diff --git a/tests/qemu-iotests/124.out b/tests/qemu-iotests/124.out
index 281b69efea..fa16b5ccef 100644
--- a/tests/qemu-iotests/124.out
+++ b/tests/qemu-iotests/124.out
@@ -1,5 +1,5 @@
-
+.
 --
-Ran 12 tests
+Ran 13 tests
 
 OK
-- 
2.21.0




[Qemu-block] [PULL v2 6/7] iotests: Test unaligned blocking mirror write

2019-08-06 Thread Max Reitz
Signed-off-by: Max Reitz 
Message-id: 20190805113526.20319-1-mre...@redhat.com
Reviewed-by: Vladimir Sementsov-Ogievskiy 
Signed-off-by: Max Reitz 
---
 tests/qemu-iotests/151 | 25 +
 tests/qemu-iotests/151.out |  4 ++--
 2 files changed, 27 insertions(+), 2 deletions(-)

diff --git a/tests/qemu-iotests/151 b/tests/qemu-iotests/151
index 1bb74d67c4..ad7359fc8d 100755
--- a/tests/qemu-iotests/151
+++ b/tests/qemu-iotests/151
@@ -114,6 +114,31 @@ class TestActiveMirror(iotests.QMPTestCase):
 def testActiveIOFlushed(self):
 self.doActiveIO(True)
 
+def testUnalignedActiveIO(self):
+# Fill the source image
+result = self.vm.hmp_qemu_io('source', 'write -P 1 0 2M')
+
+# Start the block job (very slowly)
+result = self.vm.qmp('blockdev-mirror',
+ job_id='mirror',
+ filter_node_name='mirror-node',
+ device='source-node',
+ target='target-node',
+ sync='full',
+ copy_mode='write-blocking',
+ buf_size=(1048576 // 4),
+ speed=1)
+self.assert_qmp(result, 'return', {})
+
+# Start an unaligned request to a dirty area
+result = self.vm.hmp_qemu_io('source', 'write -P 2 %i 1' % (1048576 + 42))
+
+# Let the job finish
+result = self.vm.qmp('block-job-set-speed', device='mirror', speed=0)
+self.assert_qmp(result, 'return', {})
+self.complete_and_wait(drive='mirror')
+
+self.potential_writes_in_flight = False
 
 
 if __name__ == '__main__':
diff --git a/tests/qemu-iotests/151.out b/tests/qemu-iotests/151.out
index fbc63e62f8..8d7e996700 100644
--- a/tests/qemu-iotests/151.out
+++ b/tests/qemu-iotests/151.out
@@ -1,5 +1,5 @@
-..
+...
 --
-Ran 2 tests
+Ran 3 tests
 
 OK
-- 
2.21.0




[Qemu-block] [PULL v2 2/7] iotests: Test backup job with two guest writes

2019-08-06 Thread Max Reitz
Perform two guest writes to not-yet-backed-up areas of an image, where
the former touches an inner area of the latter.

Before HEAD^, copy offloading broke this in two ways:
(1) The target image differs from the reference image (what the source
was when the backup started).
(2) But you will not see that in the failing output, because the job
offset is reported as being greater than the job length.  This is
because one cluster is copied twice, and thus accounted for twice,
but of course the job length does not increase.
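The double accounting in (2) can be illustrated with a toy model (plain
Python, not QEMU code; all names are illustrative):

```python
# Toy model of the accounting bug: when a cluster is copied twice, it is
# accounted for twice, so the reported offset can exceed the job length.
CLUSTER = 64 * 1024  # the job's cluster size, as in the test below

class ToyBackupJob:
    def __init__(self, num_clusters):
        self.length = num_clusters * CLUSTER  # total bytes to copy
        self.offset = 0                       # bytes accounted as copied
        self.copied = set()                   # clusters actually copied

    def copy_range(self, first, count):
        # Account every cluster, even ones copied before (the bug).
        for c in range(first, first + count):
            self.copied.add(c)
            self.offset += CLUSTER

job = ToyBackupJob(4)
job.copy_range(1, 1)   # guest write triggers copy of an inner cluster
job.copy_range(0, 4)   # offloaded copy re-copies cluster 1
assert job.offset == 5 * CLUSTER   # accounted for more than job.length
```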

Signed-off-by: Max Reitz 
Message-id: 20190801173900.23851-3-mre...@redhat.com
Reviewed-by: Vladimir Sementsov-Ogievskiy 
Tested-by: Vladimir Sementsov-Ogievskiy 
Signed-off-by: Max Reitz 
---
 tests/qemu-iotests/056 | 39 ++
 tests/qemu-iotests/056.out |  4 ++--
 2 files changed, 41 insertions(+), 2 deletions(-)

diff --git a/tests/qemu-iotests/056 b/tests/qemu-iotests/056
index f40fc11a09..e761e465ae 100755
--- a/tests/qemu-iotests/056
+++ b/tests/qemu-iotests/056
@@ -133,6 +133,7 @@ class BackupTest(iotests.QMPTestCase):
 self.vm = iotests.VM()
 self.test_img = img_create('test')
 self.dest_img = img_create('dest')
+self.ref_img = img_create('ref')
 self.vm.add_drive(self.test_img)
 self.vm.launch()
 
@@ -140,6 +141,7 @@ class BackupTest(iotests.QMPTestCase):
 self.vm.shutdown()
 try_remove(self.test_img)
 try_remove(self.dest_img)
+try_remove(self.ref_img)
 
 def hmp_io_writes(self, drive, patterns):
 for pattern in patterns:
@@ -177,6 +179,43 @@ class BackupTest(iotests.QMPTestCase):
 self.assert_qmp(event, 'data/error', qerror)
 return False
 
+def test_overlapping_writes(self):
+# Write something to back up
+self.hmp_io_writes('drive0', [('42', '0M', '2M')])
+
+# Create a reference backup
+self.qmp_backup_and_wait(device='drive0', format=iotests.imgfmt,
+ sync='full', target=self.ref_img,
+ auto_dismiss=False)
+res = self.vm.qmp('block-job-dismiss', id='drive0')
+self.assert_qmp(res, 'return', {})
+
+# Now to the test backup: We simulate the following guest
+# writes:
+# (1) [1M + 64k, 1M + 128k): Afterwards, everything in that
+# area should be in the target image, and we must not copy
+# it again (because the source image has changed now)
+# (64k is the job's cluster size)
+# (2) [1M, 2M): The backup job must not get overeager.  It
+# must copy [1M, 1M + 64k) and [1M + 128k, 2M) separately,
+# but not the area in between.
+
+self.qmp_backup(device='drive0', format=iotests.imgfmt, sync='full',
+target=self.dest_img, speed=1, auto_dismiss=False)
+
+self.hmp_io_writes('drive0', [('23', '%ik' % (1024 + 64), '64k'),
+  ('66', '1M', '1M')])
+
+# Let the job complete
+res = self.vm.qmp('block-job-set-speed', device='drive0', speed=0)
+self.assert_qmp(res, 'return', {})
+self.qmp_backup_wait('drive0')
+res = self.vm.qmp('block-job-dismiss', id='drive0')
+self.assert_qmp(res, 'return', {})
+
+self.assertTrue(iotests.compare_images(self.ref_img, self.dest_img),
+'target image does not match reference image')
+
 def test_dismiss_false(self):
 res = self.vm.qmp('query-block-jobs')
 self.assert_qmp(res, 'return', [])
diff --git a/tests/qemu-iotests/056.out b/tests/qemu-iotests/056.out
index dae404e278..36376bed87 100644
--- a/tests/qemu-iotests/056.out
+++ b/tests/qemu-iotests/056.out
@@ -1,5 +1,5 @@
-.
+..
 --
-Ran 9 tests
+Ran 10 tests
 
 OK
-- 
2.21.0




[Qemu-block] [PULL v2 3/7] util/hbitmap: update orig_size on truncate

2019-08-06 Thread Max Reitz
From: Vladimir Sementsov-Ogievskiy 

Without this, hbitmap_next_zero and hbitmap_next_dirty_area are broken
after truncate. So, orig_size has been broken since its introduction in
76d570dc495c56bb.
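The effect of the missing update can be sketched with a toy bitmap
(plain Python, not the real HBitmap API):

```python
# Toy sketch: a bitmap that bounds next_zero() by its logical size.
# truncate() must update that size, or queries past the old end are
# silently clipped (the bug fixed by this patch).
class ToyHBitmap:
    def __init__(self, size):
        self.orig_size = size
        self.bits = set()

    def set(self, pos):
        self.bits.add(pos)

    def truncate(self, size):
        self.orig_size = size  # the one-line fix in the patch
        self.bits = {b for b in self.bits if b < size}

    def next_zero(self, start):
        for pos in range(start, self.orig_size):
            if pos not in self.bits:
                return pos
        return -1

hb = ToyHBitmap(4)
for pos in range(4):
    hb.set(pos)
hb.truncate(6)                # grow the bitmap
assert hb.next_zero(0) == 4   # without the fix: wrongly -1
```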

Fixes: 76d570dc495c56bb
Signed-off-by: Vladimir Sementsov-Ogievskiy 
Message-id: 20190805120120.23585-1-vsement...@virtuozzo.com
Reviewed-by: Max Reitz 
Cc: qemu-sta...@nongnu.org
Signed-off-by: Max Reitz 
---
 util/hbitmap.c | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/util/hbitmap.c b/util/hbitmap.c
index 7905212a8b..bcc0acdc6a 100644
--- a/util/hbitmap.c
+++ b/util/hbitmap.c
@@ -53,7 +53,9 @@
  */
 
 struct HBitmap {
-/* Size of the bitmap, as requested in hbitmap_alloc. */
+/*
+ * Size of the bitmap, as requested in hbitmap_alloc or in hbitmap_truncate.
+ */
 uint64_t orig_size;
 
 /* Number of total bits in the bottom level.  */
@@ -732,6 +734,8 @@ void hbitmap_truncate(HBitmap *hb, uint64_t size)
 uint64_t num_elements = size;
 uint64_t old;
 
+hb->orig_size = size;
+
 /* Size comes in as logical elements, adjust for granularity. */
 size = (size + (1ULL << hb->granularity) - 1) >> hb->granularity;
 assert(size <= ((uint64_t)1 << HBITMAP_LOG_MAX_SIZE));
-- 
2.21.0




[Qemu-block] [PULL v2 0/7] Block patches for 4.1.0-rc4

2019-08-06 Thread Max Reitz
The following changes since commit 9bb68d34dda9be60335e73e65c8fb61bca035362:

  Merge remote-tracking branch 'remotes/philmd-gitlab/tags/edk2-next-20190803' into staging (2019-08-05 11:05:36 +0100)

are available in the Git repository at:

  https://github.com/XanClic/qemu.git tags/pull-block-2019-08-06

for you to fetch changes up to 110571be4e70ac015628e76d2731f96dd8d1998c:

  block/backup: disable copy_range for compressed backup (2019-08-06 13:17:27 +0200)

v2: Added “Cc: qemu-stable” tag where necessary


Block patches for 4.1.0-rc4:
- Fix the backup block job when using copy offloading
- Fix the mirror block job when using the write-blocking copy mode
- Fix incremental backups after the image has been grown with the
  respective bitmap attached to it


Max Reitz (5):
  backup: Copy only dirty areas
  iotests: Test backup job with two guest writes
  iotests: Test incremental backup after truncation
  mirror: Only mirror granularity-aligned chunks
  iotests: Test unaligned blocking mirror write

Vladimir Sementsov-Ogievskiy (2):
  util/hbitmap: update orig_size on truncate
  block/backup: disable copy_range for compressed backup

 block/backup.c | 15 ---
 block/mirror.c | 29 
 util/hbitmap.c |  6 +-
 tests/qemu-iotests/056 | 39 ++
 tests/qemu-iotests/056.out |  4 ++--
 tests/qemu-iotests/124 | 38 +
 tests/qemu-iotests/124.out |  4 ++--
 tests/qemu-iotests/151 | 25 
 tests/qemu-iotests/151.out |  4 ++--
 9 files changed, 150 insertions(+), 14 deletions(-)

-- 
2.21.0




[Qemu-block] [PULL v2 1/7] backup: Copy only dirty areas

2019-08-06 Thread Max Reitz
The backup job must only copy areas that the copy_bitmap reports as
dirty.  This is always the case when using traditional non-offloading
backup, because it copies each cluster separately.  When offloading the
copy operation, we sometimes copy more than one cluster at a time, but
we only check whether the first one is dirty.

Therefore, whenever copy offloading is possible, the backup job
currently produces wrong output when the guest writes to an area of
which an inner part has already been backed up, because that inner part
will be re-copied.
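The fixed loop can be sketched in Python (a simplified model of the
backup_do_cow change below, operating on cluster indices rather than
byte offsets):

```python
# Sketch of the fix: clamp each copy to the next clean cluster
# (dirty_end) instead of letting an offloaded copy run all the way
# to 'end'.
def next_zero(bitmap, start, count):
    """Index of the first clean cluster in [start, start+count), or -1."""
    for c in range(start, start + count):
        if not bitmap[c]:
            return c
    return -1

def copy_dirty_ranges(bitmap):
    """Yield (start, dirty_end) cluster ranges that must be copied."""
    start, end = 0, len(bitmap)
    while start < end:
        if not bitmap[start]:
            start += 1  # already copied, skip
            continue
        dirty_end = next_zero(bitmap, start, end - start)
        if dirty_end < 0:
            dirty_end = end
        yield (start, dirty_end)
        start = dirty_end

# Cluster 2 was already backed up (e.g. by a guest write); the copy must
# stop before it and resume after it, never re-copying it.
assert list(copy_dirty_ranges([True, True, False, True])) == [(0, 2), (3, 4)]
```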

Fixes: 9ded4a0114968e98b41494fc035ba14f84cdf700
Signed-off-by: Max Reitz 
Reviewed-by: Vladimir Sementsov-Ogievskiy 
Message-id: 20190801173900.23851-2-mre...@redhat.com
Cc: qemu-sta...@nongnu.org
Signed-off-by: Max Reitz 
---
 block/backup.c | 13 +++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/block/backup.c b/block/backup.c
index 715e1d3be8..1ee271f9f1 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -202,22 +202,31 @@ static int coroutine_fn backup_do_cow(BackupBlockJob *job,
 cow_request_begin(&cow_request, job, start, end);
 
 while (start < end) {
+int64_t dirty_end;
+
 if (!hbitmap_get(job->copy_bitmap, start)) {
 trace_backup_do_cow_skip(job, start);
 start += job->cluster_size;
 continue; /* already copied */
 }
 
+dirty_end = hbitmap_next_zero(job->copy_bitmap, start, (end - start));
+if (dirty_end < 0) {
+dirty_end = end;
+}
+
 trace_backup_do_cow_process(job, start);
 
 if (job->use_copy_range) {
-ret = backup_cow_with_offload(job, start, end, is_write_notifier);
+ret = backup_cow_with_offload(job, start, dirty_end,
+  is_write_notifier);
 if (ret < 0) {
 job->use_copy_range = false;
 }
 }
 if (!job->use_copy_range) {
-ret = backup_cow_with_bounce_buffer(job, start, end, is_write_notifier,
+ret = backup_cow_with_bounce_buffer(job, start, dirty_end,
+is_write_notifier,
 error_is_read, &bounce_buffer);
 }
 if (ret < 0) {
-- 
2.21.0




Re: [Qemu-block] [Qemu-devel] [PULL 0/7] Block patches for 4.1.0-rc4

2019-08-06 Thread Peter Maydell
On Tue, 6 Aug 2019 at 12:12, Max Reitz  wrote:
>
> On 06.08.19 12:12, Peter Maydell wrote:
> > On Mon, 5 Aug 2019 at 18:01, Max Reitz  wrote:
> >>
> >> On 05.08.19 18:59, Vladimir Sementsov-Ogievskiy wrote:
> >>> 05.08.2019 19:37, Max Reitz wrote:
>  The following changes since commit 
>  9bb68d34dda9be60335e73e65c8fb61bca035362:
> 
> Merge remote-tracking branch 
>  'remotes/philmd-gitlab/tags/edk2-next-20190803' into staging (2019-08-05 
>  11:05:36 +0100)
> 
>  are available in the Git repository at:
> 
> https://github.com/XanClic/qemu.git tags/pull-block-2019-08-05
> 
>  for you to fetch changes up to 07b0851c592efe188a87259adbda26a63c61dc92:
> 
> block/backup: disable copy_range for compressed backup (2019-08-05 
>  18:05:05 +0200)
> 
>  
>  Block patches for 4.1.0-rc4:
>  - Fix the backup block job when using copy offloading
>  - Fix the mirror block job when using the write-blocking copy mode
>  - Fix incremental backups after the image has been grown with the
> respective bitmap attached to it
> 
>  
>  Max Reitz (5):
> backup: Copy only dirty areas
> iotests: Test backup job with two guest writes
> iotests: Test incremental backup after truncation
> mirror: Only mirror granularity-aligned chunks
> iotests: Test unaligned blocking mirror write
> 
>  Vladimir Sementsov-Ogievskiy (2):
> util/hbitmap: update orig_size on truncate
> block/backup: disable copy_range for compressed backup
> 
> >>>
> >>> As I understand, this all should go to stable too? CC it.
> >> Ah, yes.  Thanks.
> >
> > Are you planning to send a respin with the CC:stable tags?
> > (I did a test merge of this version which all passed OK.)
>
> I thought Vladimir just meant to physically CC qemu-stable on the series
> (which he did).  Should I respin with the tags?

If you could do a quick respin that's probably most reliable --
I'm not sure exactly how the qemu-stable process works, though.

thanks
-- PMM



Re: [Qemu-block] [Qemu-devel] [PULL 0/7] Block patches for 4.1.0-rc4

2019-08-06 Thread Max Reitz
On 06.08.19 12:12, Peter Maydell wrote:
> On Mon, 5 Aug 2019 at 18:01, Max Reitz  wrote:
>>
>> On 05.08.19 18:59, Vladimir Sementsov-Ogievskiy wrote:
>>> 05.08.2019 19:37, Max Reitz wrote:
 The following changes since commit 
 9bb68d34dda9be60335e73e65c8fb61bca035362:

Merge remote-tracking branch 
 'remotes/philmd-gitlab/tags/edk2-next-20190803' into staging (2019-08-05 
 11:05:36 +0100)

 are available in the Git repository at:

https://github.com/XanClic/qemu.git tags/pull-block-2019-08-05

 for you to fetch changes up to 07b0851c592efe188a87259adbda26a63c61dc92:

block/backup: disable copy_range for compressed backup (2019-08-05 
 18:05:05 +0200)

 
 Block patches for 4.1.0-rc4:
 - Fix the backup block job when using copy offloading
 - Fix the mirror block job when using the write-blocking copy mode
 - Fix incremental backups after the image has been grown with the
respective bitmap attached to it

 
 Max Reitz (5):
backup: Copy only dirty areas
iotests: Test backup job with two guest writes
iotests: Test incremental backup after truncation
mirror: Only mirror granularity-aligned chunks
iotests: Test unaligned blocking mirror write

 Vladimir Sementsov-Ogievskiy (2):
util/hbitmap: update orig_size on truncate
block/backup: disable copy_range for compressed backup

>>>
>>> As I understand, this all should go to stable too? CC it.
>> Ah, yes.  Thanks.
> 
> Are you planning to send a respin with the CC:stable tags?
> (I did a test merge of this version which all passed OK.)

I thought Vladimir just meant to physically CC qemu-stable on the series
(which he did).  Should I respin with the tags?

Max





[Qemu-block] [PATCH] block: file-posix: Fix alignment probing on gluster

2019-08-06 Thread Nir Soffer
On Gluster storage with a sector size of 4096 bytes, buf_align may be
wrong; reading 4096 bytes into an unaligned buffer succeeds. This
probably happens because the actual read is performed on the Gluster
node with an aligned buffer, and the Gluster client does not enforce
any alignment on the host.

However, request_alignment is always right, since the same size is used
on the Gluster node to perform the actual I/O. Use the maximum of the
two values when setting min_mem_alignment.

With this change we can provision a virtual machine with Gluster storage
using VDO device and fuse mount.

This is a partial fix for https://bugzilla.redhat.com/1737256. To make
this work, the management system must ensure that the first block of the
image is allocated, for example:

qemu-img create -f raw test.img 1g
dd if=/dev/zero bs=4096 count=1 of=test.img conv=notrunc
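The logic of the one-line change can be sketched as follows (the
alignment values are hypothetical; only the max() relation comes from
the patch):

```python
# Memory alignment must never be weaker than the request alignment:
# the backend performs real I/O with request_alignment-sized accesses
# even when the client-side probe (buf_align) succeeds with smaller
# buffers, as on the Gluster setup described above.
def min_mem_alignment(buf_align, request_alignment):
    return max(buf_align, request_alignment)

assert min_mem_alignment(512, 4096) == 4096   # probe too optimistic
assert min_mem_alignment(4096, 512) == 4096   # probe already stricter
```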

Signed-off-by: Nir Soffer 
---
 block/file-posix.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/block/file-posix.c b/block/file-posix.c
index 4479cc7ab4..d29b9e5229 100644
--- a/block/file-posix.c
+++ b/block/file-posix.c
@@ -1122,7 +1122,7 @@ static void raw_refresh_limits(BlockDriverState *bs, Error **errp)
 }
 
 raw_probe_alignment(bs, s->fd, errp);
-bs->bl.min_mem_alignment = s->buf_align;
+bs->bl.min_mem_alignment = MAX(s->buf_align, bs->bl.request_alignment);
 bs->bl.opt_mem_alignment = MAX(s->buf_align, getpagesize());
 }
 
-- 
2.20.1




Re: [Qemu-block] [Qemu-devel] [PULL 0/7] Block patches for 4.1.0-rc4

2019-08-06 Thread Peter Maydell
On Mon, 5 Aug 2019 at 18:01, Max Reitz  wrote:
>
> On 05.08.19 18:59, Vladimir Sementsov-Ogievskiy wrote:
> > 05.08.2019 19:37, Max Reitz wrote:
> >> The following changes since commit 
> >> 9bb68d34dda9be60335e73e65c8fb61bca035362:
> >>
> >>Merge remote-tracking branch 
> >> 'remotes/philmd-gitlab/tags/edk2-next-20190803' into staging (2019-08-05 
> >> 11:05:36 +0100)
> >>
> >> are available in the Git repository at:
> >>
> >>https://github.com/XanClic/qemu.git tags/pull-block-2019-08-05
> >>
> >> for you to fetch changes up to 07b0851c592efe188a87259adbda26a63c61dc92:
> >>
> >>block/backup: disable copy_range for compressed backup (2019-08-05 
> >> 18:05:05 +0200)
> >>
> >> 
> >> Block patches for 4.1.0-rc4:
> >> - Fix the backup block job when using copy offloading
> >> - Fix the mirror block job when using the write-blocking copy mode
> >> - Fix incremental backups after the image has been grown with the
> >>respective bitmap attached to it
> >>
> >> 
> >> Max Reitz (5):
> >>backup: Copy only dirty areas
> >>iotests: Test backup job with two guest writes
> >>iotests: Test incremental backup after truncation
> >>mirror: Only mirror granularity-aligned chunks
> >>iotests: Test unaligned blocking mirror write
> >>
> >> Vladimir Sementsov-Ogievskiy (2):
> >>util/hbitmap: update orig_size on truncate
> >>block/backup: disable copy_range for compressed backup
> >>
> >
> > As I understand, this all should go to stable too? CC it.
> Ah, yes.  Thanks.

Are you planning to send a respin with the CC:stable tags?
(I did a test merge of this version which all passed OK.)

thanks
-- PMM



Re: [Qemu-block] [Qemu-devel] [PATCH 2/3] iotests: Enable -d for Python non-unittest tests

2019-08-06 Thread Kevin Wolf
Am 06.08.2019 um 00:19 hat John Snow geschrieben:
> 
> 
> On 8/2/19 10:07 AM, Kevin Wolf wrote:
> > Am 01.08.2019 um 19:57 hat Max Reitz geschrieben:
> >> On 01.08.19 17:17, Kevin Wolf wrote:
> >>> The part of iotests.main() that is related to the implementation of the
> >>> debug option -d and enables QEMU and QMP logging is not only useful in
> >>> tests that use the Python unittest framework, but also in tests that
> >>> work by comparing with a reference output.
> >>>
> >>> Factor these parts out into iotests.init() and call it from the test
> >>> cases that currently lack support for debug output.
> >>
> >> How does this relate to
> >> https://lists.nongnu.org/archive/html/qemu-block/2019-07/msg01212.html ?
> > 
> > Hm, no idea? :-)
> > 
> > Looks like John's patch depends on some other patches which would then
> > conflict with mine, too, so maybe I'll just drop my patch and wait what
> > happens?
> > 
> > John, any opinion?
> > 
> > Kevin
> > 
> 
> My patches do roughly the same plus a little more. If you don't mind
> waiting, I can take care of this for you when the tree reopens?

Okay, I'll drop my patch then.

Kevin