Max Reitz <mre...@redhat.com> writes:
> On 09.02.21 12:24, Alex Bennée wrote:
>>
>> Max Reitz <mre...@redhat.com> writes:
>>
>>> On 04.02.21 14:23, Alex Bennée wrote:
>>>>
>>>> Cleber Rosa <cr...@redhat.com> writes:
>>>>
>>>>> If the vmlinuz variable is set to anything that evaluates to True,
>>>>> then the respective arguments should be set. If the variable
>>>>> contains an empty string, then it will evaluate to False, and the
>>>>> extra arguments will not be set.
>>>>>
>>>>> This keeps the same logic, but improves readability a bit.
>>>>>
>>>>> Signed-off-by: Cleber Rosa <cr...@redhat.com>
>>>>> ---
>>>>>  tests/acceptance/virtiofs_submounts.py | 2 +-
>>>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>>>
>>>>> diff --git a/tests/acceptance/virtiofs_submounts.py b/tests/acceptance/virtiofs_submounts.py
>>>>> index f1b49f03bb..f25a386a19 100644
>>>>> --- a/tests/acceptance/virtiofs_submounts.py
>>>>> +++ b/tests/acceptance/virtiofs_submounts.py
>>>>> @@ -241,7 +241,7 @@ class VirtiofsSubmountsTest(BootLinux):
>>>>>
>>>>>         super(VirtiofsSubmountsTest, self).setUp(pubkey)
>>>>>
>>>>> -        if len(vmlinuz) > 0:
>>>>> +        if vmlinuz:
>>>>>             self.vm.add_args('-kernel', vmlinuz,
>>>>>                              '-append', 'console=ttyS0 root=/dev/sda1')
>>>>
>>>> And this is where I gave up because I can't see how to run the test:
>>>>
>>>>   ./tests/venv/bin/avocado run ./tests/acceptance/virtiofs_submounts.py
>>>>   JOB ID     : b3d9bfcfcd603189a471bec7d770fca3b50a81ee
>>>>   JOB LOG    : /home/alex/avocado/job-results/job-2021-02-04T13.21-b3d9bfc/job.log
>>>>    (1/5) ./tests/acceptance/virtiofs_submounts.py:VirtiofsSubmountsTest.test_pre_virtiofsd_set_up: CANCEL: vmlinuz parameter not set; you must point it to a Linux kernel binary to test (to run this test with the on-image kernel, set it to an empty string) (0.00 s)
>>>>    (2/5) ./tests/acceptance/virtiofs_submounts.py:VirtiofsSubmountsTest.test_pre_launch_set_up: CANCEL: vmlinuz parameter not set; you must point it to a Linux kernel binary to test (to run this test with the on-image kernel, set it to an empty string) (0.00 s)
>>>>    (3/5) ./tests/acceptance/virtiofs_submounts.py:VirtiofsSubmountsTest.test_post_launch_set_up: CANCEL: vmlinuz parameter not set; you must point it to a Linux kernel binary to test (to run this test with the on-image kernel, set it to an empty string) (0.00 s)
>>>>    (4/5) ./tests/acceptance/virtiofs_submounts.py:VirtiofsSubmountsTest.test_post_mount_set_up: CANCEL: vmlinuz parameter not set; you must point it to a Linux kernel binary to test (to run this test with the on-image kernel, set it to an empty string) (0.00 s)
>>>>    (5/5) ./tests/acceptance/virtiofs_submounts.py:VirtiofsSubmountsTest.test_two_runs: CANCEL: vmlinuz parameter not set; you must point it to a Linux kernel binary to test (to run this test with the on-image kernel, set it to an empty string) (0.00 s)
>>>>   RESULTS    : PASS 0 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0 | CANCEL 5
>>>>   JOB TIME   : 0.56 s
>>>>
>>>> Given the test seems to make assumptions about an environment being
>>>> set up for it, I think we need some documentation somewhere about
>>>> what those prerequisites are.
>>>
>>> I find the cancel message pretty clear, but then again it was me who
>>> wrote it...
>>>
>>> Either you point the vmlinuz parameter to some Linux kernel you built
>>> yourself, or you set it to the empty string to just use the kernel
>>> that’s part of the image. Setting Avocado parameters is done via -p,
>>> i.e. “-p key=value”. So in this case
>>> “-p vmlinuz=/path/to/linux/build/arch/x86/boot/bzImage”, or
>>> “-p vmlinuz=''”.
>>>
>>> Ideally, vmlinuz='' would be the default, but there’s a problem with
>>> that: I submitted this test along with the patches that added the
>>> required feature to the Linux kernel, so at that point that feature
>>> was not part of Linux.
>>> That’s why you generally have to point it to a Linux kernel binary
>>> you built yourself that has this feature (5.10 does).
>>
>> This is where it deviates from the rest of the check-acceptance
>> tests. Generally I don't have to worry about finding the right image
>> myself.
>
> Yes, but there’s nothing I can do apart from just not having the test
> as part of qemu, which I don’t find better than to just cancel it when
> not run with the necessary parameters.

No, I agree having the tests is useful.

> Or, well, I could let the test download and compile the Linux sources,
> but I don’t think that’s a viable alternative.

There has been discussion before, but I agree it's not viable given the
compile times for such things.

>> At the least it would be worth pointing to a part of our docs on how
>> to build such an image.
>
> Well, I could perhaps come up with a comprehensive kernel
> configuration that works, but I really don’t think that’s valuable for
> people who just want to run the acceptance tests and don’t care about
> this test in particular. I just don’t think they’re actually going to
> build a Linux kernel just for it.

Sure - but I was trying to review the series, and as part of my review
I generally like to run the things I review. Hence why I stopped when I
couldn't get things running.

> (Alternatively, I could just build a Linux kernel here on my machine,
> upload the binary and point to it somewhere, e.g. in the cancel
> message. That would be OK for me, and perhaps something I could
> imagine someone might actually use.)

I've actually done this with some Xen patches I'm working on at the
moment. I'll probably decorate the test with:

  @skipUnless(os.getenv('AVOCADO_ALLOW_UNTRUSTED_CODE'), 'untrusted code')

with a comment explaining what's waiting to be upstreamed. Once there
are upstream binaries I plan on transitioning the test to those.
> I’d rather just wait until the test image contains a kernel that’s
> sufficiently new (should be the case once Fedora 34 is out), so we can
> make -p vmlinuz='' the default. Until then I personally find it
> completely reasonable to have this test just be cancelled.
>
> Max

-- 
Alex Bennée
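As a footnote on the patch that started the thread: the simplification is
safe because an empty string is falsy in Python, so "if vmlinuz:" covers
both the unset case and the old "len(vmlinuz) > 0" check. A standalone
sketch (a stand-in function, not the actual test code):

```python
# Stand-in for the setUp() logic discussed in the patch; not the actual
# test code. An empty string is falsy in Python, so "if vmlinuz:"
# behaves the same as the previous "if len(vmlinuz) > 0:" check.
def kernel_args(vmlinuz):
    args = []
    if vmlinuz:
        args += ['-kernel', vmlinuz,
                 '-append', 'console=ttyS0 root=/dev/sda1']
    return args
```

So passing -p vmlinuz='' still results in no -kernel/-append arguments,
exactly as before the patch.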