On 04/05/2017 05:06 PM, Cleber Rosa wrote:

On 04/05/2017 09:00 AM, Eduardo Habkost wrote:

Hi,

I have been writing a few standalone Python scripts[1] to test
QEMU recently, and I would like to make them more useful for
people running tests using Avocado.

Most of them work this way:
1) Query QEMU to check which
   architectures/machine-types/CPU-models/devices/options
   it supports
2) Run QEMU multiple times for each
   architectures/machine-types/CPU-models/devices/options
   combination I want to test
3) Report success/failure/skip results (sometimes including
   warnings) for each combination
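
A minimal sketch of that query/run/report loop, for concreteness (the
binary name and the QMP-quit trick are assumptions on my part; the real
scripts are linked in footnote [1] below):

```
# Rough sketch of the pattern described above (hypothetical helper
# names; not the actual scripts from footnote [1]).
import subprocess
import sys

QEMU = "qemu-system-x86_64"  # assumed binary name

def list_machines(qemu):
    # Step 1: query QEMU for the supported machine types
    out = subprocess.check_output([qemu, "-machine", "help"],
                                  universal_newlines=True)
    # skip the "Supported machines are:" header line
    return [l.split()[0] for l in out.splitlines()[1:] if l.strip()]

def check_machine(qemu, machine):
    # Step 2: start QEMU with one combination and quit cleanly via QMP
    proc = subprocess.Popen(
        [qemu, "-machine", machine, "-display", "none", "-nodefaults",
         "-S", "-qmp", "stdio"],
        stdin=subprocess.PIPE, stdout=subprocess.PIPE,
        stderr=subprocess.PIPE, universal_newlines=True)
    proc.communicate('{"execute": "qmp_capabilities"}\n'
                     '{"execute": "quit"}\n')
    return proc.returncode == 0

failures = []
for machine in list_machines(QEMU):
    # Step 3: report a result per combination
    ok = check_machine(QEMU, machine)
    print("%s: %s" % ("PASS" if ok else "FAIL", machine))
    if not ok:
        failures.append(machine)
sys.exit(1 if failures else 0)
```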


I went ahead and tried one of them:

QTEST_QEMU_BINARY=../x86_64-softmmu/qemu-system-x86_64 python
query-cpu-model-test.py
..
----------------------------------------------------------------------
Ran 2 tests in 34.227s

OK

One very important aspect here is result granularity.  You're currently
using Python's unittest.TestCase, which in this case gives you a
granularity of two tests.  At this point, Avocado could find those two
tests (with some help[1]):

$ ~/src/avocado/avocado/contrib/scripts/avocado-find-unittests
query-cpu-model-test.py
query-cpu-model-test.CPUModelTest.testTCGModels
query-cpu-model-test.CPUModelTest.testKVMModels

The other granularity level that can be achieved here is a test per
executed script:

$ QTEST_QEMU_BINARY=../x86_64-softmmu/qemu-system-x86_64 avocado run
query-cpu-model-test.py
JOB ID     : 11f0a5fbb02e6eb67580dd33270867b039806585
JOB LOG    :
/home/cleber/avocado/job-results/job-2017-04-05T10.31-11f0a5f/job.log
 (1/1) query-cpu-model-test.py: PASS (33.22 s)
RESULTS    : PASS 1 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0 |
CANCEL 0
TESTS TIME : 33.22 s
JOB HTML   :
/home/cleber/avocado/job-results/job-2017-04-05T10.31-11f0a5f/html/results.html


This is simply our SIMPLE[2] test support.

About reporting results: PASS, FAIL and WARN are available for SIMPLE
tests.  PASS is exit status 0, FAIL is any non-zero exit status, and
WARN is a PASS whose output matches a given warn pattern[3][4].
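
To make the mapping concrete, a tiny sketch of how a standalone script
translates into those statuses (hypothetical bookkeeping, not one of
Eduardo's scripts):

```
#!/usr/bin/env python
# Sketch: mapping a standalone script onto SIMPLE test statuses.
import sys

failures = []  # combinations that failed (filled by the real test loop)
warnings = []  # combinations that passed with caveats

for w in warnings:
    # Lines matching the configured warn pattern turn a PASS into WARN
    # (see simplewarning.sh in [3] and the status docs in [4])
    print("WARN: %s" % w)

# Exit status 0 is reported as PASS, any non-zero status as FAIL
sys.exit(1 if failures else 0)
```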

I would like to keep the test scripts easy to run without
installing extra dependencies, so I want them to keep working as
standalone scripts even if Avocado modules aren't available.

That's OK, but I would like to add a few notes:

1) When removing the unittest dependency, you'll most probably end up
creating custom mechanisms for locating and executing test entry points
and reporting those results.

2) We've worked to make sure Avocado is really easy to depend on.  For
instance, it's available on stock Fedora (package name is
python-avocado) and on PyPI (installing it just requires a `pip install
avocado-framework`).

Adding a few "if avocado_available:" lines to the script would be
OK, though.


I can't really recommend this optional check for Avocado in
INSTRUMENTED tests.  I really think the added complexity would do more
harm than good.

If your tests are going to be treated as SIMPLE tests, you could simply
check for the environment variables that Avocado makes available[5].
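
For instance (a sketch; AVOCADO_TEST_LOGDIR is assumed here to be one
of the variables documented in [5]):

```
import os

# Sketch: behave slightly differently when run as an Avocado SIMPLE
# test, by checking an environment variable Avocado exports (see [5];
# AVOCADO_TEST_LOGDIR is an assumption based on those docs).
if "AVOCADO_TEST_LOGDIR" in os.environ:
    # e.g. drop extra debug files next to Avocado's own logs
    log_dir = os.environ["AVOCADO_TEST_LOGDIR"]
else:
    log_dir = "."
print("writing extra logs to %s" % log_dir)
```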

Do you have any suggestions for making the test result output
from those scripts easily consumable by the Avocado test runner?


If those tests are treated as SIMPLE tests, then all of their output
will automatically be added to the Avocado logs.

Now, I do understand what your hopes are (or were, after my possibly
disappointing recommendations :).  Ideally, Avocado would be able to
find the tests available in your standalone scripts[6] and would know
how to run the individual tests[7].  Right now, this requires writing a
plugin.  I can see some ideas for a "Python universal" plugin here that
would look for other hints of tests in plain Python source code.  If
you think that makes sense, let's discuss it further.

Actually, we already have a `robot` plugin that is able to discover
`robot` tests.  How about creating a `unittest` loader, which would do
what `avocado-find-unittests` does and discover them individually?  By
default nothing would change, as the `.py` files would be recognized as
INSTRUMENTED/SIMPLE tests, but when you change the order of loaders, it
would detect them as unittests:

```
avocado run query-cpu-model-test.py => query-cpu-model-test.py
avocado run --loaders python-unittest => query-cpu-model-test.CPUModelTest.testTCGModels; query-cpu-model-test.CPUModelTest.testKVMModels
```

This should be really simple to develop (even simpler than the robot
plugin, as all the pieces are already in place).
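
The discovery half is indeed mostly there already; a rough sketch of it
follows (a purely static AST scan, so it also handles non-importable
file names like query-cpu-model-test.py; the Avocado loader-plugin
glue, which would be modeled on the robot plugin [6][7], is left out):

```
# Sketch of the discovery step a python-unittest loader would need.
# A static AST scan is used so that files with non-importable names
# (e.g. query-cpu-model-test.py) still work; the Avocado plugin glue
# (a loader class like the robot plugin's [6][7]) is left out.
import ast
import os

def find_unittests(path):
    """Yield module.Class.test_method names found in one Python file."""
    with open(path) as src:
        tree = ast.parse(src.read(), path)
    module = os.path.splitext(os.path.basename(path))[0]
    for node in ast.walk(tree):
        if not isinstance(node, ast.ClassDef):
            continue
        # Crude heuristic: a real loader would also verify that the
        # class actually derives from unittest.TestCase
        for item in node.body:
            if isinstance(item, ast.FunctionDef) and \
                    item.name.startswith("test"):
                yield "%s.%s.%s" % (module, node.name, item.name)
```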

Lukáš


[1] Some examples:
    https://github.com/ehabkost/qemu-hacks/blob/work/device-crash-script/scripts/device-crash-test.py
    https://github.com/ehabkost/qemu-hacks/blob/work/x86-query-cpu-expansion-test/tests/query-cpu-model-test.py
    https://github.com/ehabkost/qemu-hacks/blob/work/query-machines-bus-info/tests/qmp-machine-info.py
    (Note that some of the scripts use the unittest module, but I
    will probably get rid of it, because the list of test cases I
    want to run will be generated at runtime. I've even written
    code to add test methods dynamically to the test class, but I
    will probably remove that hack because it's not worth the
    extra complexity.)
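
    (For reference, that dynamic-method hack typically looks like the
    sketch below; hypothetical names and a stub check, not Eduardo's
    actual code:)

```
# Sketch of adding test methods to a TestCase dynamically, as the
# note above describes (hypothetical names; stub per-model check).
import unittest

class CPUModelTest(unittest.TestCase):
    pass

def check_model(model):
    return True  # stand-in for the real per-model check

def make_test(model):
    def test(self):
        self.assertTrue(check_model(model))
    return test

# The model list would normally be queried from QEMU at runtime
for model in ["qemu64", "Haswell"]:
    setattr(CPUModelTest, "test_" + model, make_test(model))

if __name__ == "__main__":
    unittest.main()
```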


[1] https://github.com/avocado-framework/avocado/blob/master/contrib/scripts/avocado-find-unittests
    http://avocado-framework.readthedocs.io/en/48.0/GetStartedGuide.html#running-tests-with-an-external-runner

[2] http://avocado-framework.readthedocs.io/en/48.0/GetStartedGuide.html#writing-a-simple-test

[3] https://github.com/avocado-framework/avocado/blob/master/examples/tests/simplewarning.sh

[4] http://avocado-framework.readthedocs.io/en/48.0/WritingTests.html#test-statuses

[5] http://avocado-framework.readthedocs.io/en/48.0/WritingTests.html#environment-variables-for-simple-tests

[6] https://github.com/avocado-framework/avocado/blob/master/optional_plugins/robot/avocado_robot/__init__.py#L79

[7] https://github.com/avocado-framework/avocado/blob/master/optional_plugins/robot/avocado_robot/__init__.py#L52

