Am 09.07.2020 um 20:41 hat Eduardo Habkost geschrieben:
> On Thu, Jul 09, 2020 at 05:02:06PM +0200, Kevin Wolf wrote:
> > Am 08.07.2020 um 00:05 hat Eduardo Habkost geschrieben:
> > > On Tue, Jul 07, 2020 at 05:28:21PM +0200, Philippe Mathieu-Daudé wrote:
> > > > On 6/26/20 12:25 PM, Stefan Hajnoczi wrote:
> > > > > On Thu, Jun 25, 2020 at 02:31:14PM +0100, Peter Maydell wrote:
> > > > >> On Wed, 24 Jun 2020 at 11:02, Stefan Hajnoczi <stefa...@redhat.com> wrote:
> > > > >>>
> > > > >>> The following changes since commit 171199f56f5f9bdf1e5d670d09ef1351d8f01bae:
> > > > >>>
> > > > >>>   Merge remote-tracking branch 'remotes/alistair/tags/pull-riscv-to-apply-20200619-3' into staging (2020-06-22 14:45:25 +0100)
> > > > >>>
> > > > >>> are available in the Git repository at:
> > > > >>>
> > > > >>>   https://github.com/stefanha/qemu.git tags/block-pull-request
> > > > >>>
> > > > >>> for you to fetch changes up to 7838c67f22a81fcf669785cd6c0876438422071a:
> > > > >>>
> > > > >>>   block/nvme: support nested aio_poll() (2020-06-23 15:46:08 +0100)
> > > > >>>
> > > > >>> ----------------------------------------------------------------
> > > > >>> Pull request
> > > > >>>
> > > > >>> ----------------------------------------------------------------
> > > > >>
> > > > >> Failure on iotest 030, x86-64 Linux:
> > > > >>
> > > > >>   TEST    iotest-qcow2: 030 [fail]
> > > > >> QEMU          -- "/home/petmay01/linaro/qemu-for-merges/build/alldbg/tests/qemu-iotests/../../x86_64-softmmu/qemu-system-x86_64" -nodefaults -display none -accel qtest
> > > > >> QEMU_IMG      -- "/home/petmay01/linaro/qemu-for-merges/build/alldbg/tests/qemu-iotests/../../qemu-img"
> > > > >> QEMU_IO       -- "/home/petmay01/linaro/qemu-for-merges/build/alldbg/tests/qemu-iotests/../../qemu-io" --cache writeback --aio threads -f qcow2
> > > > >> QEMU_NBD      -- "/home/petmay01/linaro/qemu-for-merges/build/alldbg/tests/qemu-iotests/../../qemu-nbd"
> > > > >> IMGFMT        -- qcow2 (compat=1.1)
> > > > >> IMGPROTO      -- file
> > > > >> PLATFORM      -- Linux/x86_64 e104462 4.15.0-76-generic
> > > > >> TEST_DIR      -- /home/petmay01/linaro/qemu-for-merges/build/alldbg/tests/qemu-iotests/scratch
> > > > >> SOCK_DIR      -- /tmp/tmp.8tgdDjoZcO
> > > > >> SOCKET_SCM_HELPER -- /home/petmay01/linaro/qemu-for-merges/build/alldbg/tests/qemu-iotest/socket_scm_helper
> > > > >>
> > > > >> --- /home/petmay01/linaro/qemu-for-merges/tests/qemu-iotests/030.out    2019-07-15 17:18:35.251364738 +0100
> > > > >> +++ /home/petmay01/linaro/qemu-for-merges/build/alldbg/tests/qemu-iotests/030.out.bad    2020-06-25 14:04:28.500534007 +0100
> > > > >> @@ -1,5 +1,17 @@
> > > > >> -...........................
> > > > >> +.............F.............
> > > > >> +======================================================================
> > > > >> +FAIL: test_stream_parallel (__main__.TestParallelOps)
> > > > >> +----------------------------------------------------------------------
> > > > >> +Traceback (most recent call last):
> > > > >> +  File "030", line 246, in test_stream_parallel
> > > > >> +    self.assert_qmp(result, 'return', {})
> > > > >> +  File "/home/petmay01/linaro/qemu-for-merges/tests/qemu-iotests/iotests.py", line 848, in assert_qmp
> > > > >> +    result = self.dictpath(d, path)
> > > > >> +  File "/home/petmay01/linaro/qemu-for-merges/tests/qemu-iotests/iotests.py", line 822, in dictpath
> > > > >> +    self.fail(f'failed path traversal for "{path}" in "{d}"')
> > > > >> +AssertionError: failed path traversal for "return" in "{'error': {'class': 'DeviceNotActive', 'desc': "Block job 'stream-node8' not found"}}"
> > > > >> +
> > > > >>
> > > > >> ----------------------------------------------------------------------
> > > > >> Ran 27 tests
> > > > >>
> > > > >> -OK
> > > > >> +FAILED (failures=1)
> > > > >
> > > > > Strange, I can't reproduce this failure on my pull request branch or on
> > > > > qemu.git/master.
> > > > >
> > > > > Is this failure deterministic? Are you sure it is introduced by this
> > > > > pull request?
> > > >
> > > > Probably not introduced by this pullreq, but I also hit it on FreeBSD:
> > > > https://cirrus-ci.com/task/4620718312783872?command=main#L5803
> > > >
> > > >   TEST    iotest-qcow2: 030 [fail]
> > > > QEMU          -- "/tmp/cirrus-ci-build/build/tests/qemu-iotests/../../aarch64-softmmu/qemu-system-aarch64" -nodefaults -display none -machine virt -accel qtest
> > > > QEMU_IMG      -- "/tmp/cirrus-ci-build/build/tests/qemu-iotests/../../qemu-img"
> > > > QEMU_IO       -- "/tmp/cirrus-ci-build/build/tests/qemu-iotests/../../qemu-io" --cache writeback --aio threads -f qcow2
> > > > QEMU_NBD      -- "/tmp/cirrus-ci-build/build/tests/qemu-iotests/../../qemu-nbd"
> > > > IMGFMT        -- qcow2 (compat=1.1)
> > > > IMGPROTO      -- file
> > > > PLATFORM      -- FreeBSD/amd64 cirrus-task-4620718312783872 12.1-RELEASE
> > > > TEST_DIR      -- /tmp/cirrus-ci-build/build/tests/qemu-iotests/scratch
> > > > SOCK_DIR      -- /tmp/tmp.aZ5pxFLF
> > > > SOCKET_SCM_HELPER --
> > > > --- /tmp/cirrus-ci-build/tests/qemu-iotests/030.out    2020-07-07 14:48:48.123804000 +0000
> > > > +++ /tmp/cirrus-ci-build/build/tests/qemu-iotests/030.out.bad    2020-07-07 15:05:07.863685000 +0000
> > > > @@ -1,5 +1,17 @@
> > > > -...........................
> > > > +.............F.............
> > > > +======================================================================
> > > > +FAIL: test_stream_parallel (__main__.TestParallelOps)
> > > > +----------------------------------------------------------------------
> > > > +Traceback (most recent call last):
> > > > +  File "030", line 246, in test_stream_parallel
> > > > +    self.assert_qmp(result, 'return', {})
> > > > +  File "/tmp/cirrus-ci-build/tests/qemu-iotests/iotests.py", line 848, in assert_qmp
> > > > +    result = self.dictpath(d, path)
> > > > +  File "/tmp/cirrus-ci-build/tests/qemu-iotests/iotests.py", line 822, in dictpath
> > > > +    self.fail(f'failed path traversal for "{path}" in "{d}"')
> > > > +AssertionError: failed path traversal for "return" in "{'error': {'class': 'DeviceNotActive', 'desc': "Block job 'stream-node8' not found"}}"
> > > > +
> > > > +----------------------------------------------------------------------
> > > > Ran 27 tests
> > >
> > > Looks like a race condition that can be forced with a sleep call.
> > > With the following patch, I can reproduce it every time:
> > >
> > > diff --git a/tests/qemu-iotests/030 b/tests/qemu-iotests/030
> > > index 1cdd7e2999..ee5374fc22 100755
> > > --- a/tests/qemu-iotests/030
> > > +++ b/tests/qemu-iotests/030
> > > @@ -241,6 +241,7 @@ class TestParallelOps(iotests.QMPTestCase):
> > >              result = self.vm.qmp('block-stream', device=node_name, job_id=job_id, base=self.imgs[i-2], speed=512*1024)
> > >              self.assert_qmp(result, 'return', {})
> > >
> > > +        time.sleep(3)
> > >          for job in pending_jobs:
> > >              result = self.vm.qmp('block-job-set-speed', device=job, speed=0)
> > >              self.assert_qmp(result, 'return', {})
> >
> > We can "fix" it for probably all realistic cases by lowering the speed
> > of the block job significantly. It's still not fully fixed for all
> > theoretical cases, but the pattern of starting a block job that is
> > throttled to a low speed so it will keep running for the next part of
> > the test is very common.
> >
> > Kevin
> >
> > diff --git a/tests/qemu-iotests/030 b/tests/qemu-iotests/030
> > index 256b2bfbc6..31c028306b 100755
> > --- a/tests/qemu-iotests/030
> > +++ b/tests/qemu-iotests/030
> > @@ -243,7 +243,7 @@ class TestParallelOps(iotests.QMPTestCase):
> >              node_name = 'node%d' % i
> >              job_id = 'stream-%s' % node_name
> >              pending_jobs.append(job_id)
> > -            result = self.vm.qmp('block-stream', device=node_name, job_id=job_id, base=self.imgs[i-2], speed=512*1024)
> > +            result = self.vm.qmp('block-stream', device=node_name, job_id=job_id, base=self.imgs[i-2], speed=1024)
> >              self.assert_qmp(result, 'return', {})
> >
> >          for job in pending_jobs:
>
> Sounds good to me. This would change the expected job completion
> time for the 2-4 MB images from 4-8 seconds to ~30-60 minutes,
> right?
I'm not sure about the granularity in which it really happens, but the
theory is that we have 2 MB of data in each image, so with 1024 bytes
per second, it should take 2048 seconds = ~34 minutes. And if we don't
manage to start and unthrottle four jobs within 34 minutes, we'll have
more problems than just that. :-)

> This is also a nice way to be sure (block-job-set-speed speed=0)
> is really working as expected.

speed=0 means unlimited, so this doesn't help to avoid making any
progress. It's the next loop that actually gets the jobs completed
without waiting that long.

Kevin
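[Editor's note: as a quick sanity check of the arithmetic discussed above, here is a small Python sketch. The 2 MB image size and the two speed values (512 KB/s in the original test, 1024 B/s in the proposed patch) are taken from the thread; the helper function name is mine.]

```python
def stream_duration_seconds(image_bytes: int, speed_bytes_per_sec: int) -> float:
    """Expected wall-clock time for a block-stream job throttled to a fixed speed."""
    return image_bytes / speed_bytes_per_sec

IMAGE_SIZE = 2 * 1024 * 1024  # ~2 MB of data per image, as stated in the thread

# Original test value, 512 KB/s: the job can finish within a few seconds,
# so by the time the test loops back to call block-job-set-speed, the job
# may already be gone (the DeviceNotActive failure seen in iotest 030).
fast = stream_duration_seconds(IMAGE_SIZE, 512 * 1024)

# Proposed value, 1024 B/s: ~2048 s (~34 minutes), leaving ample time to
# start and unthrottle all four jobs before any of them can complete.
slow = stream_duration_seconds(IMAGE_SIZE, 1024)

print(fast)       # 4.0 (seconds)
print(slow)       # 2048.0 (seconds)
print(slow / 60)  # ~34 (minutes)
```

This also matches Eduardo's estimate of roughly 30-60 minutes for 2-4 MB images at the lower speed.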