24.07.2020 01:03, Eric Blake wrote:
On 2/17/20 9:02 AM, Vladimir Sementsov-Ogievskiy wrote:
Mostly, satisfy pep8 complaints.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsement...@virtuozzo.com>
---
tests/qemu-iotests/199 | 13 +++++++------
1 file changed, 7 insertions(+), 6 deletions(-)
With none of your series applied, I get:
$ ./check -qcow2 199
...
199 not run [16:52:34] [16:52:34] not suitable for
this cache mode: writeback
Not run: 199
Passed all 0 iotests
199 fail [16:53:37] [16:53:37] output mismatch
(see 199.out.bad)
--- /home/eblake/qemu/tests/qemu-iotests/199.out 2020-07-23
16:48:56.275529368 -0500
+++ /home/eblake/qemu/build/tests/qemu-iotests/199.out.bad 2020-07-23
16:53:37.728416207 -0500
@@ -1,5 +1,13 @@
-.
+E
+======================================================================
+ERROR: test_postcopy (__main__.TestDirtyBitmapPostcopyMigration)
+----------------------------------------------------------------------
+Traceback (most recent call last):
+ File "199", line 41, in setUp
+ os.mkfifo(fifo)
+FileExistsError: [Errno 17] File exists
+
----------------------------------------------------------------------
Ran 1 tests
-OK
+FAILED (errors=1)
Failures: 199
Failed 1 of 1 iotests
Ah, 'scratch/mig_fifo' was left over from some other aborted run of the test. I
removed that file (which suggests the test should clean up leftovers
automatically, instead of making me do it), and tried again; this time I got the
desired:
199 pass [17:00:34] [17:01:48] 74s
Passed all 1 iotests
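The stale-FIFO failure above could be avoided by making setUp tolerant of a leftover FIFO from an aborted run. A minimal sketch (the helper name `make_fifo` and the temp path are illustrative, not from the actual test 199 code):

```python
# Sketch: create a FIFO idempotently, removing any stale one first,
# so a re-run after an aborted test does not hit FileExistsError.
import os
import tempfile

fifo = os.path.join(tempfile.mkdtemp(), 'mig_fifo')

def make_fifo(path):
    # Remove a leftover FIFO from a previous aborted run, if any.
    try:
        os.unlink(path)
    except FileNotFoundError:
        pass
    os.mkfifo(path)

make_fifo(fifo)   # first run creates the FIFO
make_fifo(fifo)   # simulated re-run after an abort: no FileExistsError
```

An alternative would be to register cleanup (e.g. via tearDown or addCleanup) so the FIFO is removed even when a test errors out.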
After trying to rebase your series, I once again got failures, but that could mean
I botched the rebase (since quite a few of the code patches earlier in the series
were non-trivially changed).
If you send a v3 (which would be really nice!), I'd hoist this and 13/22 first in
the series, to get to a point where testing 199 works, which then makes it easier
to demonstrate what the rest of the 199 enhancements do in relation to the
non-iotest patches. But I like that you separated the 199 improvements from the
code: testing-wise, it's easy to apply the iotests patches first, make sure the
test fails, then apply the code patches and make sure it passes, to prove that
the enhanced test now covers what the code fixes did.
A bit off-topic:
Yes, that's our usual scheme: the test goes after the bug fix, so careful
reviewers always have to apply the patches in a different order to check whether
there is a real bug fix. And the only benefit of that scheme is not breaking
git bisect. Still, I think we can do better:
- apply the test first, but put it into a "bug" group
- the check script ignores the "bug" group (unless it is explicitly specified,
or the test is run directly)
- the bug-fix patch removes the test from the "bug" group
So bisect is not broken, and we achieve the natural ordering: problem exists
(test fails) -> fix -> no problem (test passes).
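The proposed "bug" group logic could be sketched roughly as follows. This is a hypothetical illustration, not the actual check-script code; it assumes the usual group-file format of "<test> <group> <group> ..." lines:

```python
# Sketch of the proposed "bug" group: tests in it are skipped by the
# default ./check run and only executed when requested explicitly.

def parse_group_line(line):
    # e.g. "199 rw auto migration bug" -> ('199', {'rw', 'auto', ...})
    parts = line.split()
    return parts[0], set(parts[1:])

def should_run(groups, explicit):
    # A test in the "bug" group runs only via direct invocation
    # (e.g. "./check 199"), never as part of a plain group run.
    return explicit or 'bug' not in groups

test, groups = parse_group_line('199 rw auto migration bug')
skipped_by_default = not should_run(groups, explicit=False)
run_when_named = should_run(groups, explicit=True)
```

Once the bug-fix patch lands and drops "bug" from the test's group line, `should_run` returns True for ordinary runs again, so bisect stays green throughout.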
I think I'll add a "bug" group as a follow-up to my "[PATCH v4 0/9] Rework
iotests/check", which I really hope will land one day.
PS: I've successfully rebased the series, and the tests pass. I'll now address
all the review comments and resend soon.
--
Best regards,
Vladimir