On 25.10.2023 at 14:27, Fiona Ebner wrote:
> On 23.10.23 at 13:39, Fiona Ebner wrote:
> > On 19.10.23 at 15:36, Kevin Wolf wrote:
> >> Most of this series looks good to me. Apart from the comments I made in
> >> the individual patches, I would like to see iotests coverage of changing
> >> the mirroring mode. At least to show that the query result changes,
> >> but ideally also that requests really block after switching to active.
> >> I think with a throttled target node and immediately reading the target
> >> when the write request completes we should be able to check this.
> >>
> > 
> > I'll try to work something out for v4.
> > 
> 
> I'm having a bit of a hard time unfortunately. I created a throttle
> group with
> 
> >                 'iops-total': iops,
> >                 'iops-total-max': iops

I would have throttled only writes, because you need to do a read to
check the target and don't want that one to be throttled until the
writes have completed.
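A write-only limit could look roughly like the following sketch. The QMP messages follow the `object-add` throttle-group and `blockdev-add` throttle-filter interfaces; the ids `tg0`, node names and the iops value are made-up placeholders, not taken from the thread:

```python
# Sketch: a throttle group that limits only writes, plus a throttle
# filter node on top of the mirror target. With 'iops-write' instead of
# 'iops-total', the read used to verify the target is not throttled.
iops = 16  # placeholder value

throttle_group = {
    'execute': 'object-add',
    'arguments': {
        'qom-type': 'throttle-group',
        'id': 'tg0',
        'limits': {
            'iops-write': iops,
            'iops-write-max': iops,
        },
    },
}

throttle_filter = {
    'execute': 'blockdev-add',
    'arguments': {
        'driver': 'throttle',
        'node-name': 'throttled-target',
        'throttle-group': 'tg0',
        'file': 'target0',  # placeholder node name of the actual target
    },
}
```

In an iotest these dicts would be passed via `self.vm.qmp()` during setup, before starting the mirror job.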

> and used that for the 'throttle' driver for the target. I then tried
> issuing requests via qemu_io
> 
> >             self.vm.hmp_qemu_io('mirror-top',
> >                                 f'aio_write -P 1 {req_size * i} {req_size * (i + 1)}')
> 
> but when I create more requests than the 'iops' limit (to ensure that
> not all are completed immediately), it will get stuck when draining the
> temporary BlockBackend used by qemu_io [0]. Note this is while still in
> background mode.

You should be able to get around this by using an existing named
BlockBackend (created with -drive if=none) instead of a node name.
mirror-top stays at the root of the tree, right?
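A sketch of that workaround, assuming the usual iotest setup; the id `drive0` and the file name are placeholders:

```python
# Sketch: attach the source through a named BlockBackend created with
# -drive if=none, so qemu_io can address that BlockBackend directly and
# no temporary BlockBackend has to be created (and drained) per request.
drive_opt = 'if=none,id=drive0,file=source.img,format=raw'
# e.g. self.vm.add_args('-drive', drive_opt) in the iotest setup

def qemu_io_cmd(backend, req_size, i):
    # Same pattern of writes as before, but addressed to the named
    # BlockBackend instead of the 'mirror-top' node name.
    return (backend, f'aio_write -P 1 {req_size * i} {req_size}')
```

The write requests would then be issued as `self.vm.hmp_qemu_io(*qemu_io_cmd('drive0', req_size, i))`.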

> I also wanted to have requests in flight while the copy mode is
> changed, and for that I was able to work around the issue by creating
> an NBD export of the source like in iotest 151 and issuing the
> requests to the NBD socket instead.
> 
> But after I switch to active mode, when I issue more requests than the
> 'iops' limit to the NBD export, it also seems to get stuck, visible
> during shutdown when it tries to close the export [1].

Because the NBD server still throttles I/O that needs to complete
before QEMU can shut down? If this is a problem, I suppose you can
lift the limit on the NBD server side if you use QSD.
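Lifting the limit could be sketched as follows. This assumes the throttle-group object's runtime-settable `limits` property; the QOM path and id `tg0` are placeholders, and a value of 0 means "unlimited":

```python
# Sketch: drop the write limit before shutdown so in-flight NBD
# requests can complete. User-created objects usually live under
# /objects in the QOM tree; both base and burst value are cleared so
# the resulting config stays valid.
lift_limit = {
    'execute': 'qom-set',
    'arguments': {
        'path': '/objects/tg0',  # placeholder QOM path
        'property': 'limits',
        'value': {
            'iops-write': 0,
            'iops-write-max': 0,
        },
    },
}
```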

graph-changes-while-io is an example of a test case that uses QSD.

Kevin
