Re: [Avocado-devel] Tips for making a standalone test script Avocado-friendly?

2017-04-05 Thread Lucas Meneghel Rodrigues
Some quick thoughts about what you could do (points 1 and 2; the
remaining paragraphs are more thoughts on making Avocado better for
such cases):

1) For the cases using unittest, you could try to import avocado and,
if that fails, fall back to unittest, e.g.:

try:
    from avocado import Test as TestClass
    from avocado import main
except ImportError:
    from unittest import TestCase as TestClass
    from unittest import main

Make the classes inherit from TestClass and use

if __name__ == '__main__':
    main()
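
Put together, a minimal dual-mode file could look like this sketch
(the class and method names are made up, and the test body is just a
placeholder):

#!/usr/bin/env python
# Sketch: runs as an Avocado INSTRUMENTED test when avocado is
# importable, and falls back to plain unittest otherwise.

try:
    from avocado import Test as TestClass
    from avocado import main
except ImportError:
    from unittest import TestCase as TestClass
    from unittest import main


class QEMUSmokeTest(TestClass):

    def test_smoke(self):
        # Placeholder for a real check, e.g. launching QEMU and
        # inspecting its output.
        self.assertTrue(True)


if __name__ == '__main__':
    main()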

2) For the tests that use the main() entry point, you can refactor main()
slightly to separate the argument parsing from the test execution, and
then implement a small avocado test class that calls the test execution
routine. This way the script works standalone and avocado can still run
the code. You won't get per-test granularity for dynamically generated
test functions in the runner, though. See the last paragraph for
thoughts on this.
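
A rough sketch of that refactoring (run_tests(), QEMUScriptTest and
the qemu_binary parameter are all made-up names):

import argparse
import sys


def run_tests(qemu_binary):
    # Test execution routine, now separated from argument parsing.
    # ... exercise qemu_binary here, returning 0 on success ...
    return 0


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument('qemu_binary')
    args = parser.parse_args()
    return run_tests(args.qemu_binary)


try:
    from avocado import Test

    class QEMUScriptTest(Test):
        # Thin Avocado wrapper around the same execution routine.
        def test(self):
            binary = self.params.get('qemu_binary',
                                     default='qemu-system-x86_64')
            self.assertEqual(run_tests(binary), 0)
except ImportError:
    pass


if __name__ == '__main__':
    sys.exit(main())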

A more complicated and long term solution would be to make avocado more
like pytest, in the sense of making the avocado test runner, on top of
running avocado instrumented test classes, also able to run arbitrary
callables that have certain names, such as `test_something`.

A final thought about dynamically generated test functions: for a test
runner that has to inspect files to figure out what is runnable and
generate a list of tests, though, dynamic function generation makes
things harder. Maybe we can come up with an idea to make avocado aware
of dynamically generated callables and somehow make the avocado test
loader/runner able to locate them properly and run them as tests.
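
For context, the pattern that trips up static discovery looks roughly
like this sketch (CPUModelTest and the model list are illustrative):

import unittest


class CPUModelTest(unittest.TestCase):
    pass


def _make_test(model):
    def test(self):
        self.assertTrue(model)  # stand-in for a real per-model check
    return test


# The model list would normally come from querying QEMU at runtime,
# which is exactly why a loader that only scans the source cannot
# see these tests.
for _model in ('qemu64', 'Haswell'):
    setattr(CPUModelTest, 'test_' + _model, _make_test(_model))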

Maybe we could try to inspect the current global scope of imported test
modules for callables that have certain names and execute them as avocado
tests?
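
Something along these lines, perhaps (find_test_callables() and the
module name are hypothetical):

import importlib
import inspect


def find_test_callables(module_name):
    # Collect module-level callables whose names start with 'test_',
    # including ones that were generated dynamically at import time.
    module = importlib.import_module(module_name)
    return [obj for name, obj in inspect.getmembers(module, callable)
            if name.startswith('test_')]


for test in find_test_callables('qmp_machine_info'):
    test()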

Let me know if this helps.

Cheers,

Lucas

On Wed, Apr 5, 2017 at 3:01 PM Eduardo Habkost wrote:

>
> Hi,
>
> I have been writing a few standalone Python scripts[1] to test
> QEMU recently, and I would like to make them more useful for
> people running tests using Avocado.
>
> Most of them work this way:
> 1) Query QEMU to check which
>architectures/machine-types/CPU-models/devices/options
>it supports
> 2) Run QEMU multiple times for each
>architectures/machine-types/CPU-models/devices/options
>combination I want to test
> 3) Report success/failure/skip results (sometimes including
>warnings) for each combination
>
> I would like to keep the test scripts easy to run without
> installing extra dependencies, so I want them to keep working as
> standalone scripts even if Avocado modules aren't available.
> Adding a few "if avocado_available:" lines to the script would be
> OK, though.
>
> Do you have any suggestions for making the test result output
> from those scripts easily consumable by the Avocado test runner?
>
>
> [1] Some examples:
>
> https://github.com/ehabkost/qemu-hacks/blob/work/device-crash-script/scripts/device-crash-test.py
>
> https://github.com/ehabkost/qemu-hacks/blob/work/x86-query-cpu-expansion-test/tests/query-cpu-model-test.py
>
> https://github.com/ehabkost/qemu-hacks/blob/work/query-machines-bus-info/tests/qmp-machine-info.py
> (Note that some of the scripts use the unittest module, but I
> will probably get rid of it, because the list of test cases I
> want to run will be generated at runtime. I've even written
> code to add test methods dynamically to the test class, but I
> will probably remove that hack because it's not worth the
> extra complexity)
>
> --
> Eduardo
>
>


Re: [Avocado-devel] Tips for making a standalone test script Avocado-friendly?

2017-04-05 Thread Cleber Rosa

On 04/05/2017 09:00 AM, Eduardo Habkost wrote:
> 
> Hi,
> 
> I have been writing a few standalone Python scripts[1] to test
> QEMU recently, and I would like to make them more useful for
> people running tests using Avocado.
> 
> Most of them work this way:
> 1) Query QEMU to check which
>architectures/machine-types/CPU-models/devices/options
>it supports
> 2) Run QEMU multiple times for each
>architectures/machine-types/CPU-models/devices/options
>combination I want to test
> 3) Report success/failure/skip results (sometimes including
>warnings) for each combination
> 

I went ahead and tried one of them:

QTEST_QEMU_BINARY=../x86_64-softmmu/qemu-system-x86_64 python
query-cpu-model-test.py
..
----------------------------------------------------------------------
Ran 2 tests in 34.227s

OK

One very important aspect here is result granularity.  You're currently
using Python's unittest.TestCase, which in this case gives you a
granularity of two tests.  At this point, Avocado could find those two
tests (with some help[1]):

$ ~/src/avocado/avocado/contrib/scripts/avocado-find-unittests
query-cpu-model-test.py
query-cpu-model-test.CPUModelTest.testTCGModels
query-cpu-model-test.CPUModelTest.testKVMModels

The other granularity level that can be achieved here is a test per
executed script:

$ QTEST_QEMU_BINARY=../x86_64-softmmu/qemu-system-x86_64 avocado run
query-cpu-model-test.py
JOB ID : 11f0a5fbb02e6eb67580dd33270867b039806585
JOB LOG:
/home/cleber/avocado/job-results/job-2017-04-05T10.31-11f0a5f/job.log
 (1/1) query-cpu-model-test.py: PASS (33.22 s)
RESULTS: PASS 1 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0 |
CANCEL 0
TESTS TIME : 33.22 s
JOB HTML   :
/home/cleber/avocado/job-results/job-2017-04-05T10.31-11f0a5f/html/results.html


This is simply our SIMPLE[2] test support.

About reporting results: PASS, FAIL and WARN are available for SIMPLE
tests. PASS is exit status 0, FAIL is any non-zero exit status, and
WARN is a PASS whose output matches a configured pattern[3][4].
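
For illustration, a SIMPLE test can be any executable script; a
minimal Python sketch (the QEMU invocation is a stand-in):

#!/usr/bin/env python
# Exit status is the verdict: 0 -> PASS, non-zero -> FAIL.
# WARN is a PASS whose output matches a configured pattern.

import subprocess
import sys

rc = subprocess.call(['qemu-system-x86_64', '-version'])
sys.exit(0 if rc == 0 else 1)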

> I would like to keep the test scripts easy to run without
> installing extra dependencies, so I want them to keep working as
> standalone scripts even if Avocado modules aren't available.

That's OK, but I would like to add a few notes:

1) When removing the unittest dependency, you'll most probably end up
creating custom mechanisms for locating and executing test entry points
and reporting those results.

2) We've worked to make sure Avocado is really easy to depend on.  For
instance, it's available on stock Fedora (package name is
python-avocado) and on PyPI (installing it just requires a `pip install
avocado-framework`).

> Adding a few "if avocado_available:" lines to the script would be
> OK, though.
> 

I can't really recommend this optional check for Avocado in INSTRUMENTED
tests.  I really think the added complexity would do more harm than good.

If your tests are going to be treated as SIMPLE tests, you could just
make a check for environment variables that Avocado makes available[5].
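
A sketch of such a check (assuming AVOCADO_VERSION is among the
variables documented in [5]; adjust to whatever the list actually
contains for your Avocado version):

import os

if 'AVOCADO_VERSION' in os.environ:
    print('running under Avocado %s' % os.environ['AVOCADO_VERSION'])
else:
    print('running standalone')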

> Do you have any suggestions for making the test result output
> from those scripts easily consumable by the Avocado test runner?
> 

If those tests are treated as SIMPLE tests, then all of their output
will automatically be added to the Avocado logs.

Now, I do understand what your hopes are (or were, after my possibly
disappointing recommendations :).  Ideally, Avocado would be able to
find the tests available in your standalone scripts[6], and would know
how to run the individual tests[7].  Right now, this requires writing a
plugin.  I can see some ideas for a "Python universal" plugin here, one
that would find other hints of tests in plain Python source code.  If
you think that makes sense, let's discuss it further.

> 
> [1] Some examples:
> 
> https://github.com/ehabkost/qemu-hacks/blob/work/device-crash-script/scripts/device-crash-test.py
> 
> https://github.com/ehabkost/qemu-hacks/blob/work/x86-query-cpu-expansion-test/tests/query-cpu-model-test.py
> 
> https://github.com/ehabkost/qemu-hacks/blob/work/query-machines-bus-info/tests/qmp-machine-info.py
> (Note that some of the scripts use the unittest module, but I
> will probably get rid of it, because the list of test cases I
> want to run will be generated at runtime. I've even written
> code to add test methods dynamically to the test class, but I
> will probably remove that hack because it's not worth the
> extra complexity)
> 

[1] - https://github.com/avocado-framework/avocado/blob/master/contrib/scripts/avocado-find-unittests
    - http://avocado-framework.readthedocs.io/en/48.0/GetStartedGuide.html#running-tests-with-an-external-runner

[2] - http://avocado-framework.readthedocs.io/en/48.0/GetStartedGuide.html#writing-a-simple-test

[3] - https://github.com/avocado-framework/avocado/blob/master/examples/tests/simplewarning.sh

[4] - http://avocado-framework.readthedocs.io/en/48.0/WritingTests.html#test-statuses

Re: [Avocado-devel] Tips for making a standalone test script Avocado-friendly?

2017-04-06 Thread Lukáš Doktor

On 04/05/2017 05:06 PM, Cleber Rosa wrote:


[...]
> Now, I do understand what your hopes are (or were, after my possibly
> disappointing recommendations :).  Ideally, Avocado would be able to
> find the tests available in your standalone scripts[6], and would know
> how to run the individual tests[7].  Right now, this requires writing a
> plugin.  I can see some ideas for a "Python universal" plugin here, one
> that would find other hints of tests in plain Python source code.  If
> you think that makes sense, let's discuss it further.

Actually we already have a `robot` plugin which is able to discover 
`robot` tests. How about creating a `unittest` loader, which would 
do what `avocado-find-unittests` does and discover them individually? By 
default nothing would change, as the `.py` files would be recognized as 
instrumented/simple tests, but when you change the order of loaders they'd 
be detected as unittests:


```
avocado run query-cpu-model-test.py => query-cpu-model-test.py
avocado run --loaders python-unittest query-cpu-model-test.py =>
query-cpu-model-test.CPUModelTest.testTCGModels;
query-cpu-model-test.CPUModelTest.testKVMModels
```

This should be really simple to develop (even simpler than the robot 
plugin, as all the pieces are already in place).


Lukáš




Re: [Avocado-devel] Tips for making a standalone test script Avocado-friendly?

2017-04-06 Thread Cleber Rosa


On 04/06/2017 11:00 AM, Lukáš Doktor wrote:
> On 04/05/2017 05:06 PM, Cleber Rosa wrote:
>> [...]
>>
>> Now, I do understand what your hopes are (or were, after my possibly
>> disappointing recommendations :).  Ideally, Avocado would be able to
>> find the tests available in your standalone scripts[6], and would know
>> how to run the individual tests[7].  Right now, this requires writing
>> a plugin.  I can see some ideas for a "Python universal" plugin here,
>> one that would find other hints of tests in plain Python source code.
>> If you think that makes sense, let's discuss it further.
>>
> Actually we already have a `robot` plugin which is able to discover
> `robot` tests. How about creating a `unittest` loader, which would
> do what `avocado-find-unittests` does and discover them individually? By
> default nothing would change, as the `.py` files would be recognized as
> instrumented/simple tests, but when you change the order of loaders they'd
> be detected as unittests:
> 

Eduardo mentioned that he'd eventually get rid of the unittest
dependency.  That's why I was suggesting/brainstorming about an even
more generic way of providing hints that there are tests on a Python
source file.

Still, I agree that it shouldn't be too hard.

- Cleber.

> ```
> avocado run query-cpu-model-test.py => query-cpu-model-test.py
> avocado run --loaders python-unittest query-cpu-model-test.py =>
> query-cpu-model-test.CPUModelTest.testTCGModels;
> query-cpu-model-test.CPUModelTest.testKVMModels
> ```

Re: [Avocado-devel] Tips for making a standalone test script Avocado-friendly?

2017-04-06 Thread Eduardo Habkost
On Thu, Apr 06, 2017 at 12:38:31PM -0400, Cleber Rosa wrote:
[...]
> Eduardo mentioned that he'd eventually get rid of the unittest
> dependency.  That's why I was suggesting/brainstorming about an even
> more generic way of providing hints that there are tests on a Python
> source file.

I planned to get rid of the unittest dependency because it did
not seem useful to me. But if it provides Avocado test discovery
ability for free, I would probably keep it.

Now, another problem is that I want a test granularity that
requires running QEMU at least once during the test discovery
phase (e.g. one test case for each machine-type/device/CPU).
Maybe this will break some assumptions in Avocado?

Are tests able to get a few input parameters during test
discovery? E.g. I want to tell the test script/module the list of
QEMU binaries I want to test. Today I use an environment variable
or command-line parameter for that.

-- 
Eduardo



Re: [Avocado-devel] Tips for making a standalone test script Avocado-friendly?

2017-04-07 Thread Cleber Rosa


On 04/06/2017 02:20 PM, Eduardo Habkost wrote:
> On Thu, Apr 06, 2017 at 12:38:31PM -0400, Cleber Rosa wrote:
> [...]
>> Eduardo mentioned that he'd eventually get rid of the unittest
>> dependency.  That's why I was suggesting/brainstorming about an even
>> more generic way of providing hints that there are tests on a Python
>> source file.
> 
> I planned to get rid of the unittest dependency because it did
> not seem useful to me. But if it provides Avocado test discovery
> ability for free, I would probably keeping it.
> 

OK, so let's keep track of that:

https://trello.com/c/j9IE7BHy/992-loader-add-native-python-unittest-support

> Now, another problem is that I want a test granularity that
> requires running QEMU at least once during the test discovery
> phase (e.g. one test case for each machine-type/device/CPU).
> Maybe this will break some assumptions in Avocado?
> Maybe this will break some assumptions in Avocado?
> 

That would be possible, again, with a custom plugin.  The Avocado-VT
loader, for instance, does "what-not" during test discovery.  How to do
that in a generic (and requirements-free) way may be a little more
tricky (or interesting).

One way of possibly doing this (which is halfway there, but still not
supported), is to use the `avocado.core.Job` API, which we intend to
make public in the future.

You could still have your standalone scripts (with added simplicity here
if they're already unittest.TestCase based), and write a custom Job that
would create a custom test suite:

...
# regular script content
...

if __name__ == '__main__':
    if under_avocado():  # fictitious function
        from avocado import Job

        class QemuDynamicJob(Job):
            def create_test_suite(self):  # [1]
                methods = find_tests_on_this_file()
                self.test_suite = methods * get_cpu_models()  # [2]

        QemuDynamicJob().run()

> Are tests able to get a few input parameters during test
> discovery? E.g. I want to tell the test script/module the list of
> QEMU binaries I want to test. Today I use an environment variable
> or command-line parameter for that.
> 

Right now a custom loader can have access to all sorts of parameters.
Tests do not discover themselves, so that would be against the current
architecture, but jobs do, as explained before.

Let me know if that makes sense.

[1] -
http://avocado-framework.readthedocs.io/en/48.0/api/core/avocado.core.html#avocado.core.job.Job.create_test_suite

[2] -
http://avocado-framework.readthedocs.io/en/48.0/api/core/avocado.core.html#avocado.core.job.Job.test_suite

-- 
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]
[  7ABB 96EB 8B46 B94D 5E0F  E9BB 657E 8D33 A5F2 09F3  ]


