On 30/03/2023 14.12, Alex Bennée wrote:
Thomas Huth <th...@redhat.com> writes:
On 30/03/2023 12.11, Alex Bennée wrote:
From: Kautuk Consul <kcon...@linux.vnet.ibm.com>
Avocado version 101.0 has a fix to re-compute the checksum
of an asset file if the algorithm used in the *-CHECKSUM
file isn't the same as the one being passed to it by the
avocado user (i.e. the avocado_qemu python module).
In earlier avocado versions this fix was missing, so whenever
the checksum did not match the previously recorded one
(calculated with a different algorithm), the avocado code
would download a fresh image from the internet URL, making
the test cases take longer to execute.
Bump up the avocado-framework version to 101.0.
Signed-off-by: Kautuk Consul <kcon...@linux.vnet.ibm.com>
Tested-by: Hariharan T S <hariharan...@linux.vnet.ibm.com>
Message-Id: <20230327115030.3418323-2-kcon...@linux.vnet.ibm.com>
---
tests/requirements.txt | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tests/requirements.txt b/tests/requirements.txt
index 0ba561b6bd..a6f73da681 100644
--- a/tests/requirements.txt
+++ b/tests/requirements.txt
@@ -2,5 +2,5 @@
# in the tests/venv Python virtual environment. For more info,
# refer to: https://pip.pypa.io/en/stable/user_guide/#id1
# Note that qemu.git/python/ is always implicitly installed.
-avocado-framework==88.1
+avocado-framework==101.0
pycdlib==1.11.0
Did you check whether the same number of avocado tests still passes as
before? ... The last time I tried to bump the version, a lot of things
were failing, and I think Cleber was recently working on fixing
things, but I haven't heard anything back from him yet that it would
be OK to bump to a newer version now ...
I ran it on my default build and the only failure was:
(008/222)
tests/avocado/boot_linux.py:BootLinuxS390X.test_s390_ccw_virtio_tcg:
INTERRUPTED: timeout (240.01 s)
which passed on a retry. But now I realise with failfast it skipped a bunch:
That one is also failing for me here when I apply the patch. Without the
patch, the test works fine. I think this needs more careful testing
first - e.g. the tests are now run in parallel by default, which breaks a
lot of our timeout settings.
Thomas