11.11.2021 15:08, Hanna Reitz wrote:
See the comment for why this is necessary.

Signed-off-by: Hanna Reitz <hre...@redhat.com>
---
  tests/qemu-iotests/030 | 11 ++++++++++-
  1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/tests/qemu-iotests/030 b/tests/qemu-iotests/030
index 5fb65b4bef..567bf1da67 100755
--- a/tests/qemu-iotests/030
+++ b/tests/qemu-iotests/030
@@ -251,7 +251,16 @@ class TestParallelOps(iotests.QMPTestCase):
                                   speed=1024)
              self.assert_qmp(result, 'return', {})
-        for job in pending_jobs:
+        # Do this in reverse: After unthrottling them, some jobs may finish
+        # before we have unthrottled all of them.  This will drain their
+        # subgraph, and this will make jobs above them advance (despite those
+        # jobs on top being throttled).  In the worst case, all jobs below the
+        # top one are finished before we can unthrottle it, and this makes it
+        # advance so far that it completes before we can unthrottle it - which
+        # results in an error.
+        # Starting from the top (i.e. in reverse) does not have this problem:
+        # When a job finishes, the ones below it are not advanced.

Hmm, it's interesting why only the jobs above the finished job may advance in 
this situation...

It looks like this behavior may change at some point, and then this workaround 
will stop working.

Isn't it better to just handle the error, and not care whether the job has 
already finished?

Something like

if 'error' in result:
    # The job finished during the drain caused by the completion of an
    # already unthrottled job
    self.assert_qmp(result, 'error/class', 'DeviceNotActive')
else:
    self.assert_qmp(result, 'return', {})

The next thing the test case does is check for completion events, so we'll get 
all the events anyway.
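Just as a sketch (reusing the test's existing pending_jobs list and the
assert_qmp helper, and assuming vm.qmp() returns the raw response dict with
either 'return' or 'error'), the whole loop could become:

for job in pending_jobs:
    result = self.vm.qmp('block-job-set-speed', device=job, speed=0)
    if 'error' in result:
        # The job completed during a drain triggered by a previously
        # unthrottled job finishing, so it no longer exists
        self.assert_qmp(result, 'error/class', 'DeviceNotActive')
    else:
        self.assert_qmp(result, 'return', {})

Then the iteration order would not matter at all.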


+        for job in reversed(pending_jobs):
              result = self.vm.qmp('block-job-set-speed', device=job, speed=0)
              self.assert_qmp(result, 'return', {})


--
Best regards,
Vladimir
