Hi Cleber,
Thanks for your quick reply. That's exactly what I understood, but here
is what is happening:
I have a directory ~/avocado/xen/tests where I have the xentest.py
script. When I execute it, it does create the directory
~/avocado/xen/tests/xentest.py.data with stderr.expected and
stdout.expected (both empty). It also creates the two files (stdout and
stderr) in the job-results/latest directory, but they are also empty.
The weird thing is that, instead of saving the output, it reports it to
the job.log as an error: "L0151 ERROR| [stderr] Parsing config from
/VM/guest1/vm.cfg".
That's why I think I am missing something.
Thanks again for your help.
On 09/08/2016 02:59 PM, Cleber Rosa wrote:
On 09/08/2016 10:25 AM, Marcos E. Matsunaga wrote:
Hi All,
I am new to avocado and have just started to look into it.
I have been playing with avocado on Fedora 24 for a few weeks. I wrote a
small script to run commands and was exploring the
"--output-check-record" option, but it never populates the files
stderr.expected and stdout.expected. Instead, it prints an error with
"[stderr]" in the job.log file. My understanding is that the output
(stderr and stdout) of commands/scripts executed by avocado would be
captured and saved in those files (as in the synctest.py example), but
it isn't. I want to know whether I am doing something wrong or it is a bug.
Hi Marcos,
Avocado creates the `stdout` and `stderr` files in the test result
directory. In the synctest example, for instance, the `stdout` file contains:
$ avocado run examples/tests/synctest.py
$ cat
~/avocado/job-results/latest/test-results/1-examples_tests_synctest.py\:SyncTest.test/stdout
PAR : waiting
PASS : sync interrupted
`stderr` is actually empty for that test:
$ wc -l
~/avocado/job-results/latest/test-results/1-examples_tests_synctest.py\:SyncTest.test/stderr
0
What you have to do is, once you're satisfied with those outputs and
they're considered "the gold standard", move them to the test
*data directory*.
So, if your test is hosted at `/tests/xl.py`, you'd create
`/tests/xl.py.data` and put those files there, named `stdout.expected`
and `stderr.expected`.
Whenever you run `avocado run --output-check-record all /tests/xl.py`,
those files will be used and the output of the *current* test execution
will be compared to those "gold standards".
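To make the "promotion" step above concrete, here is a sketch of it in shell. The paths are placeholders, not your real layout, and the first block only simulates a recorded run so the snippet is runnable on its own:

```shell
# Simulate a recorded run (stand-in for a real job-results directory;
# in practice these files come from a prior `avocado run`).
RESULTS=/tmp/demo-job-results/1-xentest.py_result
mkdir -p "$RESULTS"
printf 'Parsing config from /VM/guest1/vm.cfg\n' > "$RESULTS/stdout"
: > "$RESULTS/stderr"

# Promote the recorded outputs into the test's data directory as the
# "gold standard" that future runs will be compared against.
DATA=/tmp/demo-tests/xentest.py.data
mkdir -p "$DATA"
cp "$RESULTS/stdout" "$DATA/stdout.expected"
cp "$RESULTS/stderr" "$DATA/stderr.expected"
ls "$DATA"    # lists stderr.expected and stdout.expected
```

Once the `.expected` files are in place next to the test, subsequent runs check the live output against them.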
The script is very simple and the way I execute the command is:
cmd = '/usr/sbin/xl create /VM/guest1/vm.cfg'
if utils.system(cmd) == 0:  # utils.system returns an integer exit status
    pass
else:
    return False
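One side note, as an assumption on my part since the rest of the script isn't shown: `utils.system()` returns an *integer* exit status, so comparing it against the string `"0"` would never be true. A minimal stand-alone sketch of that pitfall, using `subprocess` as a stand-in for `utils.system`:

```python
import subprocess

def system(cmd):
    # Stand-in for utils.system(): run a shell command line and
    # return its integer exit status.
    return subprocess.call(cmd, shell=True)

status = system('true')   # 'true' always exits with status 0
print(status == 0)        # True  -- integer comparison works
print(status == "0")      # False -- a string comparison never matches
```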
The command sends to stdout:
Parsing config from /VM/guest1/vm.cfg
I run the test as:
avocado run --output-check-record all xentest.py
The job.log file contains:
2016-09-07 13:04:48,015 test L0214 INFO | START
1-/root/avocado-vt/io-fs-autotest-xen/xen/tests/xentest.py:xentest.test_xen_start_stop;1
2016-09-07 13:04:48,051 xentest L0033 INFO |
1-/root/avocado-vt/io-fs-autotest-xen/xen/tests/xentest.py:xentest.test_xen_start_stop;1:
Running action create
2016-09-07 13:04:49,067 utils L0151 ERROR| [stderr] Parsing
config from /VM/guest1/vm.cfg
2016-09-07 13:04:49,523 test L0586 INFO | PASS
1-/root/avocado-vt/io-fs-autotest-xen/xen/tests/xentest.py:xentest.test_xen_start_stop;1
Thanks for your time and help.
Let me know if it's clear now! And thanks for trying Avocado out!
--
Regards,
Marcos Eduardo Matsunaga
Oracle USA
Linux Engineering
“The statements and opinions expressed here are my own and do not
necessarily represent those of Oracle Corporation.”