[Avocado-devel] Pre-Release (0.32.0) Test Plan Results

2016-01-20 Thread Cleber Rosa
Hi all,

This is the result of the "Release Test Plan" run for the upcoming 0.32.0 
release:

---

Test Plan: Release Test Plan
Run by 'cleber' at 2016-01-20T06:08:50.392512
PASS: 'Avocado source is sound': 
PASS: 'Avocado RPM build': 
PASS: 'Avocado RPM install': 
PASS: 'Avocado Test Run on RPM based installation': 
PASS: 'Avocado Test Run on Virtual Machine': 
PASS: 'Avocado Test Run on Remote Machine': With fix as per PR: 
https://github.com/avocado-framework/avocado/pull/971
PASS: 'Avocado Remote Machine HTML report': 
PASS: 'Avocado Server Source Checkout and Unittests': 
PASS: 'Avocado Server Run': 
PASS: 'Avocado Server Functional Test': 
PASS: 'Avocado Virt and VT Source Checkout': Added fix from PR: 
https://github.com/avocado-framework/avocado/pull/966
PASS: 'Avocado Virt Bootstrap': 
PASS: 'Avocado Virt Boot Test Run and HTML report': 
PASS: 'Avocado Virt - Assignment of values from the cmdline': 
PASS: 'Avocado Virt - Migration test': 
PASS: 'Avocado VT - Bootstrap': 
PASS: 'Avocado VT - List tests': 
PASS: 'Avocado VT - Run test': 
PASS: 'Avocado HTML report sysinfo': 
PASS: 'Avocado HTML report links':

---

Cheers,
Cleber Rosa.

___
Avocado-devel mailing list
Avocado-devel@redhat.com
https://www.redhat.com/mailman/listinfo/avocado-devel


[Avocado-devel] Avocado release 0.32.0: Road Runner

2016-01-20 Thread Cleber Rosa
Avocado release 0.32.0: Road Runner
===

Hi everyone! A new year brings a new Avocado release as the result of
Sprint #32: Avocado 0.32.0, aka, "Road Runner".

The major changes introduced in the previous releases were put to the
test during this release cycle, and as a result we have responded with
documentation updates and also many fixes. This release also marks the
introduction of a great feature by a new member of our team: Amador
Pahim brought us the Job Replay feature! Kudos!!!

So, for Avocado the main changes are:

* Job Replay: users can now easily re-run previous jobs by using the
  --replay command line option. This re-runs the job with the same
  tests, configuration and multiplexer variants that were used in the
  original job. By using --replay-test-status, users can, for example,
  rerun only the failed tests of the previous job. For more information,
  check our docs[1].
* Documentation changes in response to our users' feedback, especially
  regarding the setup.py install/develop requirement.
* Fixed the static detection of test methods when using repeated
  names.
* Ported some Autotest tests to Avocado, now available on their own
  repository[2]. More contributions here are very welcome!

For a complete list of changes please check the Avocado changelog[3].

For Avocado-VT, there were also many changes, including:

* Major documentation updates, making them simpler and more in sync
  with the Avocado documentation style.
* Refactor of the code under the avocado_vt namespace. Previously
  most of the code lived under the plugin file itself, now it
  better resembles the structure in Avocado and the plugin files
  are hopefully easier to grasp.

Again, for a complete list of changes please check the Avocado-VT
changelog[4].

Install avocado
---

Instructions are available in our documentation on how to install
either with packages or from source[5].

Updated RPM packages are now available in the project repos for
Fedora 22, Fedora 23, EPEL 6 and EPEL 7.

Happy hacking and testing!

---

[1] http://avocado-framework.readthedocs.org/en/0.32.0/Replay.html
[2] http://github.com/avocado-framework/avocado-misc-tests
[3] https://github.com/avocado-framework/avocado/compare/0.31.0...0.32.0
[4] https://github.com/avocado-framework/avocado-vt/compare/0.31.0...0.32.0
[5] http://avocado-framework.readthedocs.org/en/0.32.0/GetStartedGuide.html

___
Avocado-devel mailing list
Avocado-devel@redhat.com
https://www.redhat.com/mailman/listinfo/avocado-devel


Re: [Avocado-devel] RFC: Static analysis vs. import to discover tests

2016-01-20 Thread Cleber Rosa


- Original Message -
> From: "Jan Scotka" 
> To: "Lukáš Doktor" , avocado-devel@redhat.com, "Cleber 
> Rosa" , "Ademar Reis"
> , "Amador Pahim" , "Lucas Meneghel 
> Rodrigues" 
> Sent: Wednesday, January 20, 2016 6:31:17 AM
> Subject: Re: RFC: Static analysis vs. import to discover tests
> 
> Hi,
> I think it is a perfect solution because, exactly as you said, if I want
> something to run, I want it and I accept my fate :-) So static analysis
> is overhead and dynamic loading is more dangerous, but it is exactly what
> I want.
> One con is when running tests on a VM/remote machine: in that case it
> should not load the tests locally, but, as you mentioned, the list command
> will stay static, and that is a good reason not to load the test module,
> because I'll run it on the remote machine.
> This dynamic loading also directly solves the issue with inheritance
> causing an infinite loop:
> https://github.com/avocado-framework/avocado/issues/961

Jan,

I also like the proposal, but keep in mind that the infinite loop, while
triggered by running some tests as SIMPLE tests, instead of INSTRUMENTED
tests, actually lives in avocado.main(). There's been some discussion on
that at

https://github.com/avocado-framework/avocado/pull/968

Cheers!
CR.

>  Thanks&Regards
>  Honza
> 
> 
> 
> On 01/19/2016 05:19 PM, Lukáš Doktor wrote:
> > Hello guys,
> >
> > as the topic suggests, we'll be talking about ways to discover whether
> > a given file is an Avocado test, unittest, or simple test.
> >
> >
> > History
> > ===
> >
> > At the beginning, Avocado used `inspect` to get all module-level classes
> > and queried whether they inherited from the avocado.Test class. This
> > worked brilliantly, BUT it requires the module to be actually loaded,
> > which means the module-level code is executed. This is fine when you
> > intend to run the test anyway, but it's potentially dangerous when you
> > just list the tests. The problem is that a simple "avocado list /" could
> > turn into 'os.system("rm -rf / --no-preserve-root")', and we clearly
> > don't want that.
> >
> > So a couple of months back, we started using AST and static analysis to
> > discover tests.
> >
> > The problem is that even the best static analysis can't cover
> > everything our users can come up with (for example, creating Test
> > classes dynamically during import). So we added a way to override this
> > behavior: the docstrings "avocado: enable" and "avocado: disable". It
> > can't handle dynamically created `test` methods, but it can help you
> > with inheritance problems.
> >
> > Anyway, it's not really friendly and you can't execute inherited `test`
> > methods, only the ones defined in your class. Recently we had questions
> > from multiple sources asking "Why is my class not recognized?" or "Why
> > is my test executed as a SIMPLE test?".
> >
> >
> > My solution
> > ===
> >
> > I don't like static analysis. It's always tricky and it can never
> > achieve 100% certainty. So I'm proposing moving back to loading the
> > code, while keeping the static analysis for listing purposes. So how
> > would that work?
> >
> >
> > avocado list
> > 
> >
> > would use static analysis to discover tests. The output would be:
> >
> > ```
> > SIMPLE /bin/true
> > INSTRUMENTED   examples/tests/gdbtest.py:GdbTest.test_start_exit
> > INSTRUMENTED   examples/tests/gdbtest.py:GdbTest.test_existing_commands_raw
> > INSTRUMENTED   examples/tests/gdbtest.py:GdbTest.test_existing_commands
> > INSTRUMENTED
> > examples/tests/gdbtest.py:GdbTest.test_load_set_breakpoint_run_exit_raw
> > ?INSTRUMENTED? multiple_inheritance.py:Inherited.*
> > ?INSTRUMENTED? multiple_inheritance.py:BaseClass.*
> > ?INSTRUMENTED? library.py:*
> > ?INSTRUMENTED? simple_script.py:*
> > ```
> >
> > Where ?xxx? would be yellow and stands for "It looks like an
> > instrumented test, but I can't tell".
> >
> > When the user is sure he wants to run those tests and they are not
> > nasty, he would use `avocado list --unsafe`, which would actually load
> > the modules and give precise results:
> >
> > ```
> > SIMPLE   /bin/true
> > INSTRUMENTED examples/tests/gdbtest.py:GdbTest.test_start_exit
> > INSTRUMENTED examples/tests/gdbtest.py:GdbTest.test_existing_commands_raw
> > INSTRUMENTED examp

Re: [Avocado-devel] RFC: Static analysis vs. import to discover tests

2016-01-20 Thread Cleber Rosa


- Original Message -
> From: "Lukáš Doktor" 
> To: avocado-devel@redhat.com, "Jan Scotka" , "Cleber 
> Rosa" , "Ademar Reis"
> , "Amador Pahim" , "Lucas Meneghel 
> Rodrigues" 
> Sent: Tuesday, January 19, 2016 2:19:00 PM
> Subject: RFC: Static analysis vs. import to discover tests
> 
> Hello guys,
> 
> as the topic suggests, we'll be talking about ways to discover whether
> a given file is an Avocado test, unittest, or simple test.
> 
> 
> History
> ===
> 
> At the beginning, Avocado used `inspect` to get all module-level classes
> and queried whether they inherited from the avocado.Test class. This
> worked brilliantly, BUT it requires the module to be actually loaded,
> which means the module-level code is executed. This is fine when you
> intend to run the test anyway, but it's potentially dangerous when you
> just list the tests. The problem is that a simple "avocado list /" could
> turn into 'os.system("rm -rf / --no-preserve-root")', and we clearly
> don't want that.
> 
> So a couple of months back, we started using AST and static analysis to
> discover tests.
> 
> The problem is that even the best static analysis can't cover
> everything our users can come up with (for example, creating Test
> classes dynamically during import). So we added a way to override this
> behavior: the docstrings "avocado: enable" and "avocado: disable". It
> can't handle dynamically created `test` methods, but it can help you
> with inheritance problems.
> 
> Anyway, it's not really friendly and you can't execute inherited `test`
> methods, only the ones defined in your class. Recently we had questions
> from multiple sources asking "Why is my class not recognized?" or "Why
> is my test executed as a SIMPLE test?".
> 
> 
> My solution
> ===
> 
> I don't like static analysis. It's always tricky and it can never achieve
> 100% certainty. So I'm proposing moving back to loading the code, while
> keeping the static analysis for listing purposes. So how would that work?
> 
> 
> avocado list
> 
> 
> would use static analysis to discover tests. The output would be:
> 
> ```
> SIMPLE /bin/true
> INSTRUMENTED   examples/tests/gdbtest.py:GdbTest.test_start_exit
> INSTRUMENTED   examples/tests/gdbtest.py:GdbTest.test_existing_commands_raw
> INSTRUMENTED   examples/tests/gdbtest.py:GdbTest.test_existing_commands
> INSTRUMENTED
> examples/tests/gdbtest.py:GdbTest.test_load_set_breakpoint_run_exit_raw
> ?INSTRUMENTED? multiple_inheritance.py:Inherited.*
> ?INSTRUMENTED? multiple_inheritance.py:BaseClass.*
> ?INSTRUMENTED? library.py:*
> ?INSTRUMENTED? simple_script.py:*
> ```
> 
> Where ?xxx? would be yellow and stands for "It looks like an
> instrumented test, but I can't tell".
> 
> When the user is sure he wants to run those tests and they are not
> nasty, he would use `avocado list --unsafe`, which would actually load
> the modules and give precise results:
> 
> ```
> SIMPLE   /bin/true
> INSTRUMENTED examples/tests/gdbtest.py:GdbTest.test_start_exit
> INSTRUMENTED examples/tests/gdbtest.py:GdbTest.test_existing_commands_raw
> INSTRUMENTED examples/tests/gdbtest.py:GdbTest.test_existing_commands
> INSTRUMENTED
> examples/tests/gdbtest.py:GdbTest.test_load_set_breakpoint_run_exit_raw
> INSTRUMENTED multiple_inheritance.py:Inherited.test1
> INSTRUMENTED multiple_inheritance.py:Inherited.test2
> INSTRUMENTED multiple_inheritance.py:Inherited.test3
> SIMPLE   simple_script.py
> ```
> 
> You can see that this removed `library.py`; it also discovered that
> `multiple_inheritance.py:BaseClass` is not a test. On the other hand,
> it discovered that `multiple_inheritance.py:Inherited` holds 3 tests.
> This result is what `avocado run` would actually execute.
> 
> 
> avocado run
> ---
> 
> I don't think `avocado run` should support safe and unsafe options, not
> even when running `--dry-run`. When one decides to run the tests, he
> wants to run them, so I think `avocado run` should always use the unsafe
> way, using `inspect`. That way there are no surprises or complex
> workarounds to get things done.
> 
> 
> Summary
> ===
> 
> The current way is safe and `avocado list` and `avocado run` are always
> in sync, but it's strange to users: as they write tests, they use
> multiple inheritance and they need base classes. We have a solution
> which requires docstrings, but it's just not flexible enough and can
> never be as flexible as actual loading.
> 
&g

Re: [Avocado-devel] New EC2 plugin

2016-01-29 Thread Cleber Rosa


- Original Message -
> From: "Lucas Meneghel Rodrigues" 
> To: "avocado-devel" 
> Sent: Thursday, January 28, 2016 7:28:04 PM
> Subject: Re: [Avocado-devel] New EC2 plugin
> 
> OK, today I talked to Cleber and we figured out what was wrong. Patches
> will follow.

For those interested in understanding what was wrong, and what still is wrong,
here are a few pointers.

1) https://github.com/avocado-framework/avocado/pull/992

This reverts a patch of mine (itself also a kind of revert) that dealt with
how the (remote) test results are set. Basically, it rendered that action a
noop, and no remote test result was effectively applied.

2) https://github.com/avocado-framework/avocado/pull/994

A code explanation of how fragile the method we've been using to set the
remote test result is.

I hope this will generate a broader understanding of all the issues involved
and that a comprehensive fix will follow. 

> 
> I did test the plugin and it's working well. This first PR was merged, and
> I'll work on packaging.
> 
> On Thu, Jan 28, 2016 at 1:17 AM Lucas Meneghel Rodrigues 
> wrote:
> 
> > Hi guys:
> >
> > As part of my effort to try to flush all my internal patches that have
> > been accumulating over the months, I've created a new, separate repo with
> > my EC2 plugin, and ask whomever has time to review my code to check:
> >
> > https://github.com/avocado-framework/avocado-ec2/pull/1
> >
> > I tried to follow the new plugin practices as much as I could figure, and
> > did some refactoring of the original code, that should be more robust in
> > cleaning up resources from AWS.
> >
> > Now, I can't for the life of me figure out how plugins to the 'run'
> > command are working these days.  I have installed my plugin with 'sudo
> > python setup.py develop', then checked the command line options:
> >
> > test execution on an EC2 (Amazon Elastic Cloud) instance:
> >   --ec2-ami-id EC2_AMI_ID
> > Amazon Machine Image ID. Example: ami-e08adb8a
> >   --ec2-ami-username EC2_AMI_USERNAME
> > User for the AMI image login. Defaults to root
> >   --ec2-ami-distro-type EC2_AMI_DISTRO_TYPE
> > AMI base Linux Distribution. Valid values: fedora
> > (for
> > Fedora > 22), el (for RHEL/CentOS > 6.0), ubuntu
> > (for
> > Ubuntu > 14.04). Defaults to fedora
> >   --ec2-instance-ssh-port EC2_INSTANCE_SSH_PORT
> > sshd port for the EC2 instance. Defaults to 22
> >   --ec2-security-group-ids EC2_SECURITY_GROUP_IDS
> > Comma separated list of EC2 security group IDs.
> > Example: sg-a5e1d7b0
> >   --ec2-subnet-id EC2_SUBNET_ID
> > EC2 subnet ID. Example: subnet-ec4a72c4
> >   --ec2-instance-type EC2_INSTANCE_TYPE
> > EC2 instance type. Example: c4.xlarge
> >   --ec2-login-timeout SECONDS
> > Amount of time (in seconds) to wait for a
> > successful
> > connection to the EC2 instance. Defaults to 120
> > seconds
> >
> > Sweet. It seems to be working, right? Now let me try to run it:
> >
> > avocado run passtest --ec2-ami-id ami-05f4ed35 --ec2-ami-distro-type
> > fedora --ec2-security-group-ids sg-81703ae4 --ec2-subnet-id subnet-5207ee37
> > --ec2-instance-type t2.micro
> > JOB ID : 58d7e8867502a12601e60d16ff17bc23df657a01
> > JOB LOG:
> > /home/lmr/avocado/job-results/job-2016-01-28T01.13-58d7e88/job.log
> > TESTS  : 1
> >  (1/1) passtest.py:PassTest.test: PASS (0.00 s)
> > RESULTS: PASS 1 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0
> > JOB HTML   :
> > /home/lmr/avocado/job-results/job-2016-01-28T01.13-58d7e88/html/results.html
> > TIME   : 0.00 s
> >
> > Hmm, that ended too fast, so there's something up. Let's try a bogus
> > command line for the remote plugin, then:
> >
> > avocado run passtest --vm-domain domain --vm-username user --vm-password
> > pass
> > JOB ID : c0349ba261f2dd0c47469394a2e58d88be1742b7
> > JOB LOG:
> > /home/lmr/avocado/job-results/job-2016-01-28T01.14-c0349ba/job.log
> > TESTS  : 1
> >  (1/1) passtest.py:PassTest.test: PASS (0.00 s)
> > RESULTS: PASS 1 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0
> > JOB HTML   :
> > /home/lmr/avocado/job-results/job-2016-01-28T01.14-c0349ba/html/results.html
> > TIME   : 0.00 s
> >
> > Same result. What's going on? I even checked if I had an older version of
> > avocado through apt-get (which I did, but then removed and re-ran the
> > develop commands).
> >
> > So I'm at a loss, and without energy to figure out what's going on. I've
> > been running a custom version of avocado on my test environments, pre
> > plugin refactor, so I haven't had any problems until I sit to flush my
> > patches.
> >
> > I appreciate any help you guys could provide.
> >
> > Thanks!
> >
> 
> 

[Avocado-devel] Avocado on its way to being self testable

2016-02-01 Thread Cleber Rosa
Hi folks,

I'd like to bring the following pull request to your attention:

 https://github.com/avocado-framework/avocado/pull/996

It's titled "Safe Loader: a second, more abstract and reusable
implementation [v0]", but the most interesting things about it
are:

* It contains the foundations to find other types of Python based
tests, including standard unittests

* It includes a (contrib) script that can be used in conjunction
with the "external runner" feature and allows Avocado to run
unittests *right now*, including its own tests.
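
As an illustration of the kind of test the new foundations are meant to
find, here is a minimal, standard unittest (module, class and method
names are made up for this example):

    import unittest


    class Sum(unittest.TestCase):

        def test_sum(self):
            self.assertEqual(1 + 1, 2)


    if __name__ == '__main__':
        unittest.main()

The idea is that the contrib script finds tests like this one, and their
names can then be handed to Avocado by means of the external runner
feature.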

Being self testable was one of the original goals of Avocado. While
this makes *me* quite happy, I believe it matters to the whole community
of users and developers because it will bring:

* An increase in test scope, and thus quality, by being self testable

* A wider range of tests that will be able to be run directly by Avocado

Imagine being able to combine in a single "acceptance job" your own
unittests, custom tests written in any language (aka SIMPLE tests)
and even virtualization (Avocado-VT) tests. Then benefit from all
of the niceties of the Avocado test runner for all of them.

So, hang tight, because that's where we're headed.

Cheers,
Cleber Rosa.

___
Avocado-devel mailing list
Avocado-devel@redhat.com
https://www.redhat.com/mailman/listinfo/avocado-devel


Re: [Avocado-devel] Avocado 0.31.0 multi user problems

2016-02-04 Thread Cleber Rosa

- Olav Philipp Henschel  wrote:
> Thanks Cleber,
> 
> I've been able to locate and delete all the files you mentioned.
> 
> Then I proceeded to "make rpm", but it exits with an error:
> Error: No Package found for aexpect
> Even though it is already install by pip.
> 
> I proceeded anyway with "python setup.py develop --user". I don't need 
> to use a path that all users can access yet.
> I did the same for avocado-vt.
> 
> 
> Avocado is still getting .conf files from my other user, however 
> (powerkvm). I've tried with the absolute paths with the same results.
> The paths that it shows for the subcommand "config" and "vt-bootstrap" 
> differ:
> 
> $ sudo ~/.local/bin/avocado --config 
> /home/olavph/ibm-kvm-tests/avocado/etc/avocado/conf.d/pkvm.conf config
> [sudo] password for olavph:
> Config files read (in order):
>  /home/powerkvm/ibm-kvm-tests/avocado/etc/avocado/avocado.conf
>  /home/powerkvm/ibm-kvm-tests/avocado/etc/avocado/conf.d/gdb.conf
> /home/powerkvm/ibm-kvm-tests/avocado/etc/avocado/conf.d/pkvm.conf
>  /home/olavph/ibm-kvm-tests/avocado/etc/avocado/conf.d/pkvm.conf
> 
>  Section.Key Value
>  datadir.paths.base_dir  /home/olavph/avocado
>  datadir.paths.test_dir /home/olavph/avocado/tests
>  datadir.paths.data_dir /home/olavph/avocado/data
>  datadir.paths.logs_dir /home/olavph/avocado/job-results
>  ...
> 
> $ sudo ~/.local/bin/avocado --config 
> /home/olavph/ibm-kvm-tests/avocado/etc/avocado/conf.d/pkvm.conf 
> vt-bootstrap --vt-type libvirt --vt-guest-os PowerKVM --y
> es-to-all
> ...
> 10:44:58 INFO | 4 - Verifying directories
> 10:44:58 DEBUG| Dir /var/avocado/data/avocado-vt/images exists, not creating
> 10:44:58 DEBUG| Dir /var/avocado/data/avocado-vt/isos exists, not creating
> 10:44:58 DEBUG| Dir /var/avocado/data/avocado-vt/steps_data exists, not 
> creating
> 10:44:58 DEBUG| Dir /var/avocado/data/avocado-vt/gpg exists, not creating
> ...
> 
> 
> Any other hints?
> 
> Olav
> 

Olav,

Sorry for taking so long to get back on this. Hopefully this is
still relevant. So, to better pinpoint the issue, I first tried to check
whether avocado is using the "right" configuration files:

[cleber@localhost ~]$ sudo avocado config 
Config files read (in order):
/etc/avocado/avocado.conf
/etc/avocado/conf.d/gdb.conf
/root/.config/avocado/avocado.conf
...

So this first example looks fine: besides the system wide configuration,
avocado also reads the local user's configuration (in this case, the
"sudoed" user, root). Then I tried to check how it behaves with additional
config files:

[cleber@localhost ~]$ sudo avocado --config /tmp/external_avocado_config.conf 
config 
Config files read (in order):
/etc/avocado/avocado.conf
/etc/avocado/conf.d/gdb.conf
/root/.config/avocado/avocado.conf
/tmp/external_avocado_config.conf
...

And it also looks OK. Next step was to check that one can set
*avocado's* data_dir using an (external) configuration file:


[cleber@localhost ~]$ echo -e "[datadir.paths]\ndata_dir = 
/tmp/custom/data_dir" > /tmp/external_avocado_config.conf
[cleber@localhost ~]$ sudo avocado --config /tmp/external_avocado_config.conf 
config 
Config files read (in order):
/etc/avocado/avocado.conf
/etc/avocado/conf.d/gdb.conf
/root/.config/avocado/avocado.conf
/tmp/external_avocado_config.conf

Section.Key Value
datadir.paths.base_dir  /usr/share/avocado
datadir.paths.test_dir  /usr/share/avocado/tests
datadir.paths.data_dir  /tmp/custom/data_dir
^^  

Also looks OK. But then I believe I finally found the bug. From
avocado/core/data_dir.py(47):

    SETTINGS_DATA_DIR = os.path.expanduser(
        settings.get_value('datadir.paths', 'data_dir'))

This is set when the module is evaluated, that is, when it's imported
by other modules. Then, much later, the extra configuration files
given on the command line are read, as per avocado/core/parser.py(73):

# Load settings from file, if user provides one
if self.args.config is not None:
settings.settings.process_config_path(self.args.config)

The big issue is that avocado/core/data_dir.py always uses
SETTINGS_DATA_DIR, which is already set and doesn't reflect the
configuration files that were added later. This looks like a simple
bug, but I believe it can open a can of worms, so I'm opening a proper
issue to deal with that:

https://trello.com/c/QSNlIbP6/574-bug-data-dir-module-ignores-settings
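
Just to make the nature of the issue clearer, a possible direction for a
fix would be to evaluate the setting lazily, something along these lines
(an illustrative sketch only, not the actual fix):

    def get_settings_data_dir():
        # query the settings at call time, so that configuration files
        # added later (e.g. via --config) are taken into account
        return os.path.expanduser(
            settings.get_value('datadir.paths', 'data_dir'))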

Sorry for the trouble and delay, and thanks for reporting that.
Cleber Rosa.



> 
> On 07-01-2016 19:15, Cleb

Re: [Avocado-devel] [Autotest] Feasibility study - issues clarification

2016-02-11 Thread Cleber Rosa


- Original Message -
> From: "Lukasz Majewski" 
> To: autotest-ker...@redhat.com
> Sent: Thursday, February 11, 2016 6:27:22 AM
> Subject: [Autotest]  Feasibility study - issues clarification
> 
> Dear all,
> 
> I'd be grateful for clarifying a few issues regarding Autotest.
> 
> I have following setup:
> 1. Custom HW interface to connect Target to Host
> 2. Target board with Linux
> 3. Host PC - debian/ubuntu.
> 
> I would like to unify the test setup and it seems that the Autotest
> test framework has all the features that I would need:
> 
> - Extensible Host class (other interfaces can be used for communication
>   - i.e. USB)
> - SSH support for sending client tests from Host to Target
> - Control of tests execution on Target from Host and gathering results
> - Standardized tests results format
> - Autotest host's and client's test results are aggregated and
>   displayed as HTML
> - Possibility to easily reuse other tests (like LTP, linaro's PM-QA)
> - Scheduling, HTML visualization (if needed)
> 
> On the beginning I would like to use test harness (server+client) to
> run tests and gather results in a structured way.
> 
> However, I have got a few questions (please correct me if I'm wrong):
> 
> - On several presentations it was mentioned that Avocado project is a
>   successor of Autotest. However it seems that Avocado is missing the
>   client + server approach from Autotest.

Right. It's something that is being worked on at this very moment:

https://trello.com/c/AnoH6vhP/530-experiment-multiple-machine-support-for-tests

> 
> - What is the future of Autotest? Will it be gradually replaced by
>   Avocado?

Autotest has been mostly in maintenance mode for the last 20 months or
so. Most of the energy of the Autotest maintainers has been shifted
towards Avocado. So, while no Open Source project can be killed (nor
should be), yes, Autotest users should start looking into Avocado.

> 
> - It seems that there are only two statuses returned from a simple
>   test (like sleeptest), namely "PASS" and "FAIL". How can I indicate
>   that the test has ended because the environment was not ready to run
>   the test (something similar to LTP's "BROK" code, or exit codes
>   complying with POSIX 1003.1)?

I reckon this is a question on Autotest test result status, so I'll try
to answer in that context. First, the framework itself intentionally gives
you a limited set of test result statuses. If you want to save additional
information about your test, including, say, a mapping to POSIX 1003.1 codes,
you can try to use the test's "keyval" store for that. The "keyval" is saved
both to a local file and to the server's database (when one is used).

Avocado INSTRUMENTED tests, though, have a better separation of test setup and
execution, and a test can be SKIPPED during the setup phase. A few pointers:

 * 
https://github.com/avocado-framework/avocado/blob/master/examples/tests/skiponsetup.py
 * 
http://avocado-framework.readthedocs.org/en/latest/api/core/avocado.core.html#avocado.core.test.Test.skip
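
The essence of the linked skiponsetup.py example is something like this
(a minimal sketch; the class name and message are made up):

    from avocado import Test
    from avocado import main


    class SkipOnSetupTest(Test):

        def setUp(self):
            # the environment is not ready for this test, so flag it
            # as SKIP instead of letting it FAIL later on
            self.skip('environment not ready, skipping test')

        def test(self):
            pass


    if __name__ == "__main__":
        main()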

> 
> - Is there any road map for Autotest development? I'm wondering if
>   avocado's features (like per test SHA1 generation) would be ported to
>   Autotest?

Not really. Avocado's roadmap though, is accessible here:

https://trello.com/b/WbqPNl2S/avocado

> 
> 
> Thanks in advance for support.
> 
> 
> --
> Best regards,
> 
> Lukasz Majewski
> 
> Samsung R&D Institute Poland (SRPOL) | Linux Platform Group
> 
> ___
> Autotest-kernel mailing list
> autotest-ker...@redhat.com
> https://www.redhat.com/mailman/listinfo/autotest-kernel
> 

___
Avocado-devel mailing list
Avocado-devel@redhat.com
https://www.redhat.com/mailman/listinfo/avocado-devel


Re: [Avocado-devel] [Autotest] Feasibility study - issues clarification

2016-02-11 Thread Cleber Rosa


- Original Message -
> From: "Ademar Reis" 
> To: "Cleber Rosa" 
> Cc: "Lukasz Majewski" , "avocado-devel" 
> ,
> autotest-ker...@redhat.com
> Sent: Thursday, February 11, 2016 3:08:16 PM
> Subject: Re: [Avocado-devel] [Autotest] Feasibility study - issues 
> clarification
> 
> On Thu, Feb 11, 2016 at 08:33:39AM -0500, Cleber Rosa wrote:
> > 
> > 
> > - Original Message -
> > > From: "Lukasz Majewski" 
> > > To: autotest-ker...@redhat.com
> > > Sent: Thursday, February 11, 2016 6:27:22 AM
> > > Subject: [Autotest]  Feasibility study - issues clarification
> > > 
> > > Dear all,
> > > 
> > > I'd be grateful for clarifying a few issues regarding Autotest.
> > > 
> > > I have following setup:
> > > 1. Custom HW interface to connect Target to Host
> > > 2. Target board with Linux
> > > 3. Host PC - debian/ubuntu.
> > > 
> > > I would like to unify the test setup and it seems that the Autotest
> > > test framework has all the features that I would need:
> > > 
> > > - Extensible Host class (other interfaces can be used for communication
> > >   - i.e. USB)
> > > - SSH support for sending client tests from Host to Target
> > > - Control of tests execution on Target from Host and gathering results
> > > - Standardized tests results format
> > > - Autotest host's and client's test results are aggregated and
> > >   displayed as HTML
> > > - Possibility to easily reuse other tests (like LTP, linaro's PM-QA)
> > > - Scheduling, HTML visualization (if needed)
> > > 
> > > On the beginning I would like to use test harness (server+client) to
> > > run tests and gather results in a structured way.
> > > 
> > > However, I have got a few questions (please correct me if I'm wrong):
> > > 
> > > - On several presentations it was mentioned that Avocado project is a
> > >   successor of Autotest. However it seems that Avocado is missing the
> > >   client + server approach from Autotest.
> > 
> > Right. It's something that is being worked on at this very moment:
> > 
> > https://trello.com/c/AnoH6vhP/530-experiment-multiple-machine-support-for-tests
> > 
> > > 
> > > - What is the future of Autotest? Will it be gradually replaced by
> > >   Avocado?
> > 
> > Autotest has been mostly in maintenance mode for the last 20 months or
> > so. Most of the energy of the Autotest maintainers has been shifted
> > towards Avocado. So, while no Open Source project can be killed (nor
> > should), yes, Autotest users should start looking into Avocado.
> > 
> > > 
> > > - It seems that there are only two statuses returned from a simple
> > >   test (like sleeptest), namely "PASS" and "FAIL". How can I indicate
> > >   that the test has ended because the environment was not ready to run
> > >   the test (something similar to LTP's "BROK" code, or exit codes
> > >   complying with POSIX 1003.1)?
> > 
> > I reckon this is a question on Autotest test result status, so I'll try
> > to answer in that context. First, the framework itself gives you
> > intentionally
> > limited test result status. If you want to save additional information
> > about
> > your test, including say the mapping to POSIX 1003.1 codes, you can try to
> > use
> > the test's "keyval" store for that. The "keyval" is both saved to a local
> > file
> > and to the server's database (when that is used).
> 
> You're probably referring to the whiteboard:
> http://avocado-framework.readthedocs.org/en/latest/WritingTests.html#saving-test-generated-custom-data

Nope, I was describing how that could be done in Autotest. I cross-posted the
message to the Avocado list, since it is also related to it.

Then, yes, in Avocado, the whiteboard could be used for that.
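
For instance, something along these lines in an INSTRUMENTED test (a
minimal sketch; the choice of JSON and the key name are just
illustrative):

    import json

    from avocado import Test


    class StatusDetails(Test):

        def test(self):
            # persist extra, test-generated data (e.g. a POSIX-like
            # result code) in the job results via the whiteboard
            self.whiteboard = json.dumps({'posix_code': 0})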

> 
> Thanks.
>- Ademar
> 
> > 
> > Avocado INSTRUMENTED tests, though, have a better separation of test setup
> > and
> > execution, and a test can be SKIPPED during the setup phase. A few
> > pointers:
> > 
> >  *
> >  
> > https://github.com/avocado-framework/avocado/blob/master/examples/tests/skiponsetup.py
> >  *
> >  
> > http://avocado-framework.readthedocs.org/en/latest/api/core/avocado.core.html#avocado.core.test.Test.skip
> > 
> > > 
> > > - Is there any road map for Autotest development? I'm wondering if
> > >   avocado's features (like per test SHA1 generation) would be ported to
> > >   Autotest?
> > 
> > Not really. Avocado's roadmap though, is accessible here:
> > 
> > https://trello.com/b/WbqPNl2S/avocado
> > 
> 
> --
> Ademar Reis
> Red Hat
> 
> ^[:wq!
> 

___
Avocado-devel mailing list
Avocado-devel@redhat.com
https://www.redhat.com/mailman/listinfo/avocado-devel


Re: [Avocado-devel] Pre-release test plan results

2016-02-16 Thread Cleber Rosa
Lukáš,

I can see the given commits for avocado-vt and avocado-virt, but I can not
find the e1b986faa70472d94df08c955f64916a241e56c8 commit for avocado.

Can you double check that?

Thanks,
Cleber Rosa.

- Original Message -
> From: "Lukáš Doktor" 
> To: "Cleber Rosa" , "avocado-devel" 
> 
> Sent: Tuesday, February 16, 2016 10:40:58 AM
> Subject: Pre-release test plan results
> 
> Test Plan: Release Test Plan
> Run by 'ldoktor' at 2016-02-16T13:21:30.578948
> PASS: 'Avocado source is sound':
> PASS: 'Avocado RPM build':
> PASS: 'Avocado RPM install':
> PASS: 'Avocado Test Run on RPM based installation':
> PASS: 'Avocado Test Run on Virtual Machine':
> PASS: 'Avocado Test Run on Remote Machine':
> PASS: 'Avocado Remote Machine HTML report':
> PASS: 'Avocado Server Source Checkout and Unittests':
> PASS: 'Avocado Server Run':
> PASS: 'Avocado Server Functional Test':
> PASS: 'Avocado Virt and VT Source Checkout':
> PASS: 'Avocado Virt Bootstrap':
> PASS: 'Avocado Virt Boot Test Run and HTML report':
> PASS: 'Avocado Virt - Assignment of values from the cmdline':
> PASS: 'Avocado Virt - Migration test':
> PASS: 'Avocado VT - Bootstrap':
> PASS: 'Avocado VT - List tests':
> PASS: 'Avocado VT - Run test':
> PASS: 'Avocado HTML report sysinfo':
> PASS: 'Avocado HTML report links':
> 
> avocado: e1b986faa70472d94df08c955f64916a241e56c8
> avocado-vt: 3c6c247706195f7f5c65fca3cbf4906c2376600d
> avocado-virt: 7dd088e762f6133c70f79c0028080df6c89ef517
> 

___
Avocado-devel mailing list
Avocado-devel@redhat.com
https://www.redhat.com/mailman/listinfo/avocado-devel


[Avocado-devel] Avocado release 0.33.0: Lemonade Joe or Horse Opera

2016-02-17 Thread Cleber Rosa
Avocado release 0.33.0: Lemonade Joe or Horse Opera
===

Hello big farmers, backyard gardeners and supermarket reapers! Here is
a new announcement to all the appreciators of the most delicious green
fruit out here. Avocado release 0.33.0, aka, Lemonade Joe or Horse
Opera, is now out!

The main changes in Avocado are:

* Minor refinements to the Job Replay feature introduced in the last
  release.
* More consistent naming for the status of tests that were not
  executed. Namely, TEST_NA has been renamed to SKIP all across
  the internal code and user visible places.
* The avocado Test class has received some cleanups and
  improvements. Some attributes that back the class implementation but
  are not intended for users to rely upon are now hidden or removed.
  Additionally, some of the internal attributes have been turned into
  properly documented properties that users should feel confident to
  rely upon. Expect more work in this area, resulting in a cleaner
  and leaner base Test class in upcoming releases.
* The avocado command line application used to show the main app help
  message even when help for a specific command was asked for. This
  has now been fixed.
* It's now possible to use the avocado process utility API to run
  privileged commands transparently via SUDO. Just add the "sudo=True"
  parameter to the API calls and have your system configured to allow
  that command without interactively asking for a password (see the
  short sketch after this list).
* The software manager and service utility APIs now know about
  commands that require elevated privileges to be run, such as
  installing new packages and starting and stopping services (as
  opposed to querying package and service status).  Those utility
  APIs have been integrated with the new SUDO feature, allowing
  unprivileged users to install packages and start and stop services
  more easily, given that the system is properly configured to allow
  that.
* A nasty "fork bomb" situation was fixed. It was caused when a SIMPLE
  test written in Python used Avocado's "main()" function to run
  itself.
* A bug that prevented SIMPLE tests from being run if Avocado was not
  given the absolute path of the executable has been fixed.
* A cleaner internal API for registering test result classes has been
  put into place. If you have written your own test result class,
  please take a look at avocado.core.result.register_test_result_class.
* Our CI jobs now also do quick "smoke" checks on every new commit
  proposed on github (not only on the PR's branch HEAD).
* A new utility function, binary_from_shell_cmd, has been added to the
  process API. It allows extracting the executable to be run from
  complex command lines, including ones that set shell variables.
* There have been internal changes to how parameters, including the
  internally used timeout parameter, are handled by the test loader.
* Test execution can now be PAUSED and RESUMED interactively! By
  hitting CTRL+Z on the Avocado command line application, all processes
  of the currently running test are PAUSED. By hitting CTRL+Z again,
  they are RESUMED.
* The Remote/VM runners have received some refactoring, and most of the
  code that used to live in the test result classes has been moved
  to the test runner classes. The original goal was to fix a bug, but it
  turns out test runners were more suitable to house some parts of the
  needed functionality.
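
As a short sketch of the SUDO support mentioned above (the command is
just an example, and the system must be configured to run it via sudo
without an interactive password prompt):

    from avocado.utils import process

    # transparently prefixes the command with sudo
    result = process.run('dnf install -y some-package', sudo=True)
    print(result.exit_status)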

For a complete list of changes please check the Avocado changelog[1].

For Avocado-VT, there were also many changes, including:

* A new utility function, get_guest_service_status, to get service
  status in a VM.
* A fix for ssh login timeout error on remote servers.
* Fixes for usb ehci on PowerPC.
* Fixes for the screenshot path when on a remote host
* Added a libvirt function to create volumes from XML files
* Added utility function to get QEMU threads (get_qemu_threads)

And many other changes. Again, for a complete list of changes please
check the Avocado-VT changelog[2].

Install avocado
---

Instructions are available in our documentation on how to install
either with packages or from source[3].

Updated RPM packages are now available in the project repos for
Fedora 22, Fedora 23, EPEL 6 and EPEL 7.

Happy hacking and testing!

---

[1] https://github.com/avocado-framework/avocado/compare/0.32.0...0.33.0
[2] https://github.com/avocado-framework/avocado-vt/compare/0.32.0...0.33.0
[3] 
http://avocado-framework.readthedocs.org/en/latest/GetStartedGuide.html#installing-avocado

___
Avocado-devel mailing list
Avocado-devel@redhat.com
https://www.redhat.com/mailman/listinfo/avocado-devel


Re: [Avocado-devel] Is it possible to run qemu-backend tests on XEN?

2016-03-11 Thread Cleber Rosa
Hi Zhangbo,

Indeed the tests on the qemu-backend are not intended to be run on XEN,
but with either pure qemu (tcg) or qemu + KVM.

That's actually an interesting point: how hard would it be to tweak the
qemu backend to make its tests usable on XEN? Anyway, if you're not willing
to spend a reasonable amount of time and energy, you should focus on the
libvirt backend to test XEN.

Cheers,
Cleber.

- Original Message -
> From: "Zhangbo (Oscar)" 
> To: avocado-devel@redhat.com
> Cc: "zhuweilun" , "Zhuyijun" 
> Sent: Friday, March 11, 2016 5:02:37 AM
> Subject: [Avocado-devel] Is it possible to run qemu-backend tests on XEN?
> 
> Hi all:
>If I'm not wrong:
>1 The qemu-backend testcases in avocado-vt are used to run qemu as single
>guest without the helper of libvirt. It even communicate with the
>guest(qemu) in a lot of testcases.
>2 Because on XEN, qemu is just a device model without the implementation
>of CPU and memory(which are implemented in XEN hypervisor), it's not
>equals to a guest, so, it's not possible to run tests for qemu-backend on
>XEN.
> 
>   So, we could just test libvirt on XEN, and could not run qemu-backend test
>   there, am I right?
> 
>   Thanks in advance.
> 
> 
> Oscar.
> 
> ___
> Avocado-devel mailing list
> Avocado-devel@redhat.com
> https://www.redhat.com/mailman/listinfo/avocado-devel
> 

___
Avocado-devel mailing list
Avocado-devel@redhat.com
https://www.redhat.com/mailman/listinfo/avocado-devel


Re: [Avocado-devel] how to create or get fedora17 qcow2 img?

2016-03-15 Thread Cleber Rosa


- Original Message -
> From: "Zhangbo (Oscar)" 
> To: avocado-devel@redhat.com
> Cc: "zhuweilun" , "Zhuyijun" 
> Sent: Tuesday, March 15, 2016 7:10:57 AM
> Subject: [Avocado-devel] how to create or get fedora17 qcow2 img?
> 
> Hi all:
>I tried to run libvirt-backended avocado-vt on my server, but failed.
>(this server is older than my last one, which was succeed in running
>avocado-vt tests.)
>Because I'm using fedora 19 as the guesOS, but my libosinfo just support
>fedora17.
>
> --
> Detailed problem info:
> 1
> linux-WRGNgW:/mnt/zwl/zhangbo/libosinfo-0.2.0 # avocado run
> io-github-autotest-qemu.unattended_install.import.import.default_install.aio_native
> io-github-autotest-libvirt.virsh.create.none remove_guest.without_disk
> --vt-type libvirt --vt-guest-os JeOS.19
> JOB ID : 0521db48a53eaae83e9947e2d4d5eef072bae425
> JOB LOG: /root/avocado/job-results/job-2016-03-15T17.55-0521db4/job.log
> TESTS  : 3
>  (1/3)
>  
> io-github-autotest-qemu.unattended_install.import.import.default_install.aio_native:
>  SKIP  //SKIPPED
>  (2/3) type_specific.io-github-autotest-libvirt.virsh.create.none: ERROR
>  (3/3) io-github-autotest-libvirt.remove_guest.without_disk: ERROR
> RESULTS: PASS 0 | ERROR 2 | FAIL 0 | SKIP 1 | WARN 0 | INTERRUPT 0
> JOB HTML   :
> /root/avocado/job-results/job-2016-03-15T17.55-0521db4/html/results.html
> TIME   : 5.59 s
> 
> 2
> 2016-03-15 17:56:03,240 test L0511 ERROR| SKIP
> io-github-autotest-qemu.unattended_install.import.import.default_install.aio_native
> -> TestSkipError: Unsupported OS variant: fedora19.
> Supported variants:  Short ID
> centos6.0
>  centos6.1
>  debian1.0
> 
>  fedora14
>  fedora15
>  fedora16
>  fedora17  //no fedora 19.
>  fedora2
>  fedora3
>  fedora4
>  fedora5
> ..
> 
> 3
> It's because osinfo-query on my server is 0.1.2, too old to support fedora19
> : osinfo-query os --fields short-id
> 
> 4
> I could not update my libosinfo, because glib2 is too old here, it's of
> version 2.22, which has no symbol named " g_list_free_full ", that's needed
> by higher-version libosinfo.
> 
> -
> 
> 
> 
> 
> So, I tried to get qcow2 imgs of fedora17, but I still failed. The steps are
> as follows:
> 1
> Goto:
> http://archives.fedoraproject.org/pub/archive/fedora/linux/releases/17/Fedora/x86_64/os/LiveOS/
> To download the image of fedora17
> 
> 2 convert it to qcow2
> qemu-img convert -f raw ./squashfs.img -O qcow2
> /usr/share/avocado/data/avocado-vt/images/jeos-17-64.qcow2
> 
> 3 backup it and zip it:
> a. cp /usr/share/avocado/data/avocado-vt/images/jeos-17-64.qcow2
> /usr/share/avocado/data/avocado-vt/images/jeos-17-64.qcow2.backup
> b. 7za a /usr/share/avocado/data/avocado-vt/images/jeos-17-64.qcow2.7z
> /usr/share/avocado/data/avocado-vt/images/jeos-17-64.qcow2
> 
> 4 run avocado-vt test, but failed..
> linux-WRGNgW:/mnt/zwl/zhangbo/libosinfo-0.2.0 # avocado run
> io-github-autotest-qemu.unattended_install.import.import.default_install.aio_native
> io-github-autotest-libvirt.virsh.create.none remove_guest.without_disk
> --vt-type libvirt --vt-guest-os JeOS.17
> Test discovery plugin 
> failed: option --vt-guest-os 'JeOS.17' is not on the known guest os for arch
> 'None' and machine type 'i440fx'. (see --vt-list-guests)
> Test discovery plugin 
> failed: option --vt-guest-os 'JeOS.17' is not on the known guest os for arch
> 'None' and machine type 'i440fx'. (see --vt-list-guests)
> Test discovery plugin 
> failed: option --vt-guest-os 'JeOS.17' is not on the known guest os for arch
> 'None' and machine type 'i440fx'. (see --vt-list-guests)
> 

This is *not* the procedure to create an Avocado JeOS image. You must run
an unattended install test with an appropriate guest configuration, example:

https://github.com/avocado-framework/avocado-vt/blob/master/shared/cfg/guest-os/Linux/JeOS/21.x86_64.cfg

With an also appropriate kickstart file, example:

https://github.com/avocado-framework/avocado-vt/blob/master/shared/unattended/JeOS-21.ks

You'd have to adapt at least both of those files to Fedora 17, if you're
really inclined to have a JeOS based on it. Alternatively, you can just use
the Fedora 17 guest type, say Linux.Fedora.17.x86_64.i440fx, instead of the 
JeOS.

> Unable to discover url(s)
> 'io-github-autotest-qemu.unattended_install.import.import.default_install.aio_native',
> 'io-github-autotest-libvirt.virsh.create.none', 'remove_guest.without_disk'
> with loader plugins(s) 'file', 'vt', 'external', try running 'avocado list
> -V
> io-github-autotest-qemu.unattended_install.import.import.default_install.aio_native
> io-github-autotest-libvirt.virsh.create.none remove_guest.without_disk' to
> see the details.
> 
> 5 list guests, fedora17 is not in the list.
> #avocado list --vt-list-guests
> Windows.Win7.i386.sp1.i440fx (missing win7-32-sp1.qcow2)
> Windows.Win7.x86_64.sp0.i440fx
> Windows.Win7.x86_64.sp1.i440fx (missing

[Avocado-devel] Pre-Release (0.34.0) Test Plan Results

2016-03-21 Thread Cleber Rosa
FYI:

Test Plan: Release Test Plan
Run by 'cleber' at 2016-03-21T08:07:44.090695
PASS: 'Avocado source is sound': 
PASS: 'Avocado RPM build': 
PASS: 'Avocado RPM install': 
PASS: 'Avocado Test Run on RPM based installation': 
PASS: 'Avocado Test Run on Virtual Machine': 
PASS: 'Avocado Test Run on Remote Machine': 
PASS: 'Avocado Remote Machine HTML report': 
PASS: 'Avocado Server Source Checkout and Unittests': 
PASS: 'Avocado Server Run': 
PASS: 'Avocado Server Functional Test': 
PASS: 'Avocado Virt and VT Source Checkout': 
PASS: 'Avocado Virt Bootstrap': 
PASS: 'Avocado Virt Boot Test Run and HTML report': 
PASS: 'Avocado Virt - Assignment of values from the cmdline': 
PASS: 'Avocado Virt - Migration test': 
PASS: 'Avocado VT - Bootstrap': 
PASS: 'Avocado VT - List tests': 
PASS: 'Avocado VT - Run test': 
PASS: 'Avocado HTML report sysinfo': 
PASS: 'Avocado HTML report links': 
PASS: 'Paginator':

Git repos/commits used:
avocado: 91bfac9f721895792f9dbb301a1e5a342b4dac36
avocado-vt: 42c22c98bc85833afa62cfa6f28d0562b8fe08b3
avocado-virt: a370dfdb30de64424ef1213add4755ed3152989d
avocado-server: 1491de32cb4e0ad4c0e83e57d1139af7f5eafccf

___
Avocado-devel mailing list
Avocado-devel@redhat.com
https://www.redhat.com/mailman/listinfo/avocado-devel


[Avocado-devel] Avocado release 0.34.0: The Hour of the Star

2016-03-22 Thread Cleber Rosa
Avocado release 0.34.0: The Hour of the Star


Hello to all test enthusiasts out there, especially to those who
cherish, care for, or are just keeping an eye on the greenest test
framework there is: Avocado release 0.34.0, aka The Hour of the Star,
is now out!

The main changes in Avocado for this release are:

* A complete overhaul of the logging and output implementation. This
  means that all Avocado output uses the standard Python logging library
  making it very consistent and easy to understand [1].

* Based on the logging and output overhaul, the command line test
  runner is now very flexible with its output. A user can choose
  exactly what should be output. Examples include application output
  only, test output only, both application and test output or any
  other combination of the builtin streams. The user visible command
  line option that controls this behavior is `--show`, which is an
  application level option, that is, it's available to all avocado
  commands. [2]

* Besides the builtin streams, test writers can use the standard
  Python logging API to create new streams. These streams can be shown
  on the command line as mentioned before, or persisted automatically
  in the job results by means of the `--store-logging-stream` command
  line option (see the short sketch after this list). [3][4]

* The new `avocado.core.safeloader` module intends to make it easier
  to write new test loaders for various types of Python
  code. [5][6]

* Based on the new `avocado.core.safeloader` module, a contrib script
  called `avocado-find-unittests` returns the names of
  unittest.TestCase based tests found in a given set of Python
  source code files. [7]

* Avocado is now able to run its own selftest suite, by leveraging the
  `avocado-find-unittests` contrib script and the External Runner [8]
  feature. A Makefile target is available, allowing developers to run
  `make selfcheck` to have the selftest suite run by Avocado. [9]

* Partial Python 3 support. A number of changes were introduced that
  allow concurrent Python 2 and 3 support on the same code base.  Even
  though the support for Python 3 is still *incomplete*, the `avocado`
  command line application can already run some limited commands at
  this point.

* Asset fetcher utility library. This new utility library, and the
  accompanying INSTRUMENTED test feature, allow users to transparently
  request external assets to be used in tests, having them cached for
  later use. [10]

* Further cleanups in the public namespace of the avocado Test class.
  
* [BUG FIX] Input from the local system was being passed to remote
  systems when running tests on either remote systems or VMs.

* [BUG FIX] HTML report stability improvements, including better
  Unicode handling and support for other versions of the Pystache
  library.

* [BUG FIX] Atomic updates of the "latest" job symlink, allows for
  more reliable user experiences when running multiple parallel jobs.

* [BUG FIX] The avocado.core.data_dir module now dynamically checks
  the configuration system when deciding where the data directory
  should be located. This allows later updates, such as an extra
  `--config` parameter given on the command line, to be applied
  consistently throughout the framework and test code.

* [MAINTENANCE] The CI jobs now run full checks on each commit on
  any proposed PR, not only on its topmost commit. This gives higher
  confidence that a commit in a series is not causing breakage that
  a later commit then inadvertently fixes.
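
As a short sketch of the new logging streams mentioned above (the
"progress" stream name is just an example; such a stream can be
persisted in the job results with `--store-logging-stream`):

    import logging

    from avocado import Test


    class Plant(Test):

        def test_progress_stream(self):
            # a custom, named stream created with the standard Python
            # logging API
            progress = logging.getLogger("progress")
            progress.info("preparing the soil")
            progress.info("planting the seeds")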

For a complete list of changes please check the Avocado changelog[11].

For Avocado-VT, please check the full Avocado-VT changelog[12].

Avocado Videos
--

As yet another way to let users know about what's available in
Avocado, we're introducing short videos with very targeted content on
our very own YouTube channel:

 https://www.youtube.com/channel/UCP4xob52XwRad0bU_8V28rQ

The first video available demonstrates a couple of new features
related to the advanced logging mechanisms, introduced on this
release:

 https://www.youtube.com/watch?v=8Ur_p5p6YiQ

Install avocado
---

Instructions are available in our documentation on how to install
either with packages or from source[13].

Updated RPM packages are now available in the project repos for
Fedora 22, Fedora 23, EPEL 6 and EPEL 7.

Happy hacking and testing!

---

[1] http://avocado-framework.readthedocs.org/en/0.34.0/LoggingSystem.html
[2] 
http://avocado-framework.readthedocs.org/en/0.34.0/LoggingSystem.html#tweaking-the-ui
[3] 
http://avocado-framework.readthedocs.org/en/0.34.0/LoggingSystem.html#storing-custom-logs
[4] 
http://avocado-framework.readthedocs.org/en/0.34.0/WritingTests.html#advanced-logging-capabilities
[5] 
https://github.com/avocado-framework/avocado/blob/0.34.0/avocado/core/safeloader.py
[6] 
http://avocado-framework.readthedocs.org/en/0.34.0/api/core/avocado.core.html#module-avocado.core.safeloader
[7] 
https://github.com/avocado-

Re: [Avocado-devel] Parallel (clustered) testing.

2016-03-22 Thread Cleber Rosa
- Original Message -
> From: "Julio Faracco" 
> To: avocado-devel@redhat.com
> Sent: Friday, March 18, 2016 2:10:37 PM
> Subject: [Avocado-devel] Parallel (clustered) testing.
> 
> Hi guys.
> 
> Two questions:
> 1. Is there a way to run the same test in multiple hosts?
> Why? Here we usually test the same application in different systems
> such as RHEL 6.6, RHEL 6.7, RHEL 7.0, RHEL 7.1 and RHEL 7.2 (All of
> them are Virtual Machines setup for testing). Instead of executing 5
> single tests, I would like to run just one command and start all tests
> and get only one result.
> 
> 2. Can I configure it using the avocado config (.ini) file? Because I
> could define each host as a section. Btw, just thought...
> 

Hi Julio,

This is something we are planning to do:

https://trello.com/c/x5Nlkdjo/360-multiplexed-test-runners

But it's not part of the current sprint. Maybe if you push it forward
we can work together on it.

Thanks,
Cleber Rosa.

> Thanks! :-)
> 
> Julio Cesar Faracco
> 
> ___
> Avocado-devel mailing list
> Avocado-devel@redhat.com
> https://www.redhat.com/mailman/listinfo/avocado-devel
> 

___
Avocado-devel mailing list
Avocado-devel@redhat.com
https://www.redhat.com/mailman/listinfo/avocado-devel


Re: [Avocado-devel] RFC: Multi-host tests

2016-03-28 Thread Cleber Rosa


- Original Message -
> From: "Lukáš Doktor" 
> To: "Ademar Reis" , "Cleber Rosa" , 
> "Amador Pahim" , "Lucas
> Meneghel Rodrigues" , "avocado-devel" 
> 
> Sent: Saturday, March 26, 2016 4:01:15 PM
> Subject: RFC: Multi-host tests
> 
> Hello guys,
> 
> Let's open a discussion regarding the multi-host tests for avocado.
> 
> The problem
> ===
> 
> A user wants to run netperf on 2 machines. To do it manually he does:
> 
>  machine1: netserver -D
>  machine1: # Wait till netserver is initialized
>  machine2: netperf -H $machine1 -l 60
>  machine2: # Wait till it finishes and report store the results
>  machine1: # stop the netserver and report possible failures
> 
> Now how to support this in avocado, ideally as custom tests, ideally
> even with broken connections/reboots?
> 
> 
> Super tests
> ===
> 
> We don't need to do anything and leave everything on the user. He is
> free to write code like:
> 
>  ...
>  machine1 = aexpect.ShellSession("ssh $machine1")
>  machine2 = aexpect.ShellSession("ssh $machine2")
>  machine1.sendline("netserver -D")
>  # wait till the netserver starts
>  machine1.read_until_any_line_matches(["Starting netserver"], 60)
>  output = machine2.cmd_output("netperf -H $machine1 -l $duration")
>  # interrupt the netserver
>  machine1.sendline("\03")
>  # verify netserver finished
>  machine1.cmd("true")
>  ...
> 
> the problem is it requires active connection and the user needs to
> manually handle the results.

And of course the biggest problem here is that it doesn't solve the
Avocado problem: providing a framework and tools for tests that span
multiple (Avocado) execution threads, possibly on multiple hosts. 

> 
> 
> Triggered simple tests
> ==
> 
> Alternatively we can say each machine/worker is nothing but yet another
> test, which occasionally needs a synchronization or data-exchange. The
> same example would look like this:
> 
> machine1.py:
> 
> process.run("netserver")
> barrier("server-started", 2)
> barrier("test-finished", 2)
> process.run("killall netserver")
> 
> machine2.py:
> 
>  barrier("server-started", 2)
>  self.log.debug(process.run("netperf -H %s -l 60"
> % params.get("server_ip"))
>  barrier("test-finished", 2)
> 
> where "barrier(name, no_clients)" is a framework function which makes
> the process wait till the specified number of processes are waiting for
> the same barrier.

The barrier mechanism looks like an appropriate and useful utility for the
example given.  Even though your use case example explicitly requires it,
it's worth pointing out and keeping in mind that there may be valid use cases
which won't require any kind of synchronization.  This may even be true for
the executions of tests that spawn multiple *local* "Avocado runs".

> 
> The barrier needs to know which server to use for communication so we
> can either create a new service, or simply use one of the executions as
> "server" and make both processes use it for data exchange. So to run the
> above tests the user would have to execute 2 avocado commands:
> 
>  avocado run machine1.py --sync-server machine1:6547
>  avocado run machine2.py --remote-hostname machine2 --mux-inject
> server_ip:machine1 --sync machine1:6547
> 
> where:
>  --sync-server tells avocado to listen on ip address machine1 port 6547
>  --remote-hostname tells the avocado to run remotely on machine2
>  --mux-inject adds the "server_ip" into params
>  --sync tells the second avocado to connect to machine1:6547 for
> synchronization

To be honest, apart from the barrier utility, this provides little value
from the PoV of a *test framework* and, possibly unintentionally, competes
and overlaps with "remote" tools such as fabric.

Also, given that the multiplexer is an optional Avocado feature, such
a feature should not depend on it.

> 
> Running those two tests has only one benefit compare to the previous
> solution and that is it gathers the results independently and makes
> allows one to re-use simple tests. For example you can create a 3rd
> test, which uses different params for netperf, run it on "machine2" and
> keep the same script for "machine1". Or running 2 netperf senders at the
> same time. This would require libraries and more custo

Re: [Avocado-devel] RFC: Multi-host tests

2016-03-28 Thread Cleber Rosa


- Original Message -
> From: "Cleber Rosa" 
> To: "Lukáš Doktor" 
> Cc: "Amador Pahim" , "avocado-devel" 
> , "Ademar Reis" 
> Sent: Monday, March 28, 2016 4:44:15 PM
> Subject: Re: [Avocado-devel] RFC: Multi-host tests
> 
> 
> 
> - Original Message -
> > From: "Lukáš Doktor" 
> > To: "Ademar Reis" , "Cleber Rosa" ,
> > "Amador Pahim" , "Lucas
> > Meneghel Rodrigues" , "avocado-devel"
> > 
> > Sent: Saturday, March 26, 2016 4:01:15 PM
> > Subject: RFC: Multi-host tests
> > 
> > Hello guys,
> > 
> > Let's open a discussion regarding the multi-host tests for avocado.
> > 
> > The problem
> > ===
> > 
> > A user wants to run netperf on 2 machines. To do it manually he does:
> > 
> >  machine1: netserver -D
> >  machine1: # Wait till netserver is initialized
> >  machine2: netperf -H $machine1 -l 60
> >  machine2: # Wait till it finishes and report store the results
> >  machine1: # stop the netserver and report possible failures
> > 
> > Now how to support this in avocado, ideally as custom tests, ideally
> > even with broken connections/reboots?
> > 
> > 
> > Super tests
> > ===
> > 
> > We don't need to do anything and leave everything on the user. He is
> > free to write code like:
> > 
> >  ...
> >  machine1 = aexpect.ShellSession("ssh $machine1")
> >  machine2 = aexpect.ShellSession("ssh $machine2")
> >  machine1.sendline("netserver -D")
> >  # wait till the netserver starts
> >  machine1.read_until_any_line_matches(["Starting netserver"], 60)
> >  output = machine2.cmd_output("netperf -H $machine1 -l $duration")
> >  # interrupt the netserver
> >  machine1.sendline("\03")
> >  # verify netserver finished
> >  machine1.cmd("true")
> >  ...
> > 
> > the problem is it requires active connection and the user needs to
> > manually handle the results.
> 
> And of course the biggest problem here is that it doesn't solve the
> Avocado problem: providing a framework and tools for tests that span
> multiple (Avocado) execution threads, possibly on multiple hosts.
> 
> > 
> > 
> > Triggered simple tests
> > ==
> > 
> > Alternatively we can say each machine/worker is nothing but yet another
> > test, which occasionally needs a synchronization or data-exchange. The
> > same example would look like this:
> > 
> > machine1.py:
> > 
> > process.run("netserver")
> > barrier("server-started", 2)
> > barrier("test-finished", 2)
> > process.run("killall netserver")
> > 
> > machine2.py:
> > 
> >  barrier("server-started", 2)
> >  self.log.debug(process.run("netperf -H %s -l 60"
> > % params.get("server_ip"))
> >  barrier("test-finished", 2)
> > 
> > where "barrier(name, no_clients)" is a framework function which makes
> > the process wait till the specified number of processes are waiting for
> > the same barrier.
> 
> The barrier mechanism looks like an appropriate and useful utility for the
> example given.  Even though your use case example explicitly requires it,
> it's worth pointing out and keeping in mind that there may be valid use cases
> which won't require any kind of synchronization.  This may even be true to
> the executions of tests that spawn multiple *local* "Avocado runs".
> 
> > 
> > The barrier needs to know which server to use for communication so we
> > can either create a new service, or simply use one of the executions as
> > "server" and make both processes use it for data exchange. So to run the
> > above tests the user would have to execute 2 avocado commands:
> > 
> >  avocado run machine1.py --sync-server machine1:6547
> >  avocado run machine2.py --remote-hostname machine2 --mux-inject
> > server_ip:machine1 --sync machine1:6547
> > 
> > where:
> >  --sync-server tells avocado to listen on ip address machine1 port 6547
> >  --remote-hostname tells the avocado to run remotely on machine2
> >  --mux-inject adds the "server_ip" into params
> >  --sync tells the second avocado to co

Re: [Avocado-devel] RFC: Multi-host tests

2016-03-29 Thread Cleber Rosa



On 03/29/2016 04:11 AM, Lukáš Doktor wrote:

Dne 28.3.2016 v 21:49 Cleber Rosa napsal(a):



- Original Message -

From: "Cleber Rosa" 
To: "Lukáš Doktor" 
Cc: "Amador Pahim" , "avocado-devel" 
, "Ademar Reis" 

Sent: Monday, March 28, 2016 4:44:15 PM
Subject: Re: [Avocado-devel] RFC: Multi-host tests



- Original Message -

From: "Lukáš Doktor" 
To: "Ademar Reis" , "Cleber Rosa" 
,

"Amador Pahim" , "Lucas
Meneghel Rodrigues" , "avocado-devel"

Sent: Saturday, March 26, 2016 4:01:15 PM
Subject: RFC: Multi-host tests

Hello guys,

Let's open a discussion regarding the multi-host tests for avocado.

The problem
===

A user wants to run netperf on 2 machines. To do it manually he does:

  machine1: netserver -D
  machine1: # Wait till netserver is initialized
  machine2: netperf -H $machine1 -l 60
  machine2: # Wait till it finishes and report store the results
  machine1: # stop the netserver and report possible failures

Now how to support this in avocado, ideally as custom tests, ideally
even with broken connections/reboots?


Super tests
===

We don't need to do anything and leave everything on the user. He is
free to write code like:

  ...
  machine1 = aexpect.ShellSession("ssh $machine1")
  machine2 = aexpect.ShellSession("ssh $machine2")
  machine1.sendline("netserver -D")
  # wait till the netserver starts
  machine1.read_until_any_line_matches(["Starting netserver"], 60)
  output = machine2.cmd_output("netperf -H $machine1 -l 
$duration")

  # interrupt the netserver
  machine1.sendline("\03")
  # verify netserver finished
  machine1.cmd("true")
  ...

the problem is it requires active connection and the user needs to
manually handle the results.


And of course the biggest problem here is that it doesn't solve the
Avocado problem: providing a framework and tools for tests that span
multiple (Avocado) execution threads, possibly on multiple hosts.

Well it does, each "ShellSession" is a new parallel process. The only 
problem I have with this design is that it does not allow easy code 
reuse and the results strictly depend on the test writer.




Yes, *aexpect* allows parallel execution in an asynchronous fashion. Not 
targeted to tests *at all*. Avocado, as a test framework, should deliver 
more. Repeating the previous wording, it should be "providing a 
framework and tools for tests that span multiple (Avocado) execution 
threads, possibly on multiple hosts."





Triggered simple tests
==

Alternatively we can say each machine/worker is nothing but yet 
another

test, which occasionally needs a synchronization or data-exchange. The
same example would look like this:

machine1.py:

 process.run("netserver")
 barrier("server-started", 2)
 barrier("test-finished", 2)
 process.run("killall netserver")

machine2.py:

  barrier("server-started", 2)
  self.log.debug(process.run("netperf -H %s -l 60"
 % params.get("server_ip"))
  barrier("test-finished", 2)

where "barrier(name, no_clients)" is a framework function which makes
the process wait till the specified number of processes are waiting 
for

the same barrier.


The barrier mechanism looks like an appropriate and useful utility 
for the
example given.  Even though your use case example explicitly 
requires it,
it's worth pointing out and keeping in mind that there may be valid 
use cases
which won't require any kind of synchronization.  This may even be 
true to

the executions of tests that spawn multiple *local* "Avocado runs".

Absolutely, this would actually allow Julio to run his "Parallel 
(clustered) testing".


So, let's try to identify what we're really looking for. For both the 
use case I mentioned and Julio's "Parallel (clustered) testing", we need 
a (the same) test run by multiple *runners*. A runner in this context is 
something that implements the `TestRunner` interface, such as the 
`RemoteTestRunner`:


https://github.com/avocado-framework/avocado/blob/master/avocado/core/remote/runner.py#L37

The following (pseudo) Avocado Test could be written:

from avocado import Test

# These are currently private APIs that could/should be or
# be exposed under another level. Also, the current API is
# very different from what is used here, please take it as
# pseudo code that might look like a future implementation

from avocado.core.remote.runner import RemoteTestRunner
from avocado.core.runner import run_multi
from avocado.core.resolver import TestResolver
from avocado.utils.wait import wait_for

Re: [Avocado-devel] RFC: Multi-host tests

2016-03-30 Thread Cleber Rosa



On 03/30/2016 09:31 AM, Lukáš Doktor wrote:

Dne 29.3.2016 v 20:25 Cleber Rosa napsal(a):



On 03/29/2016 04:11 AM, Lukáš Doktor wrote:

Dne 28.3.2016 v 21:49 Cleber Rosa napsal(a):



- Original Message -

From: "Cleber Rosa" 
To: "Lukáš Doktor" 
Cc: "Amador Pahim" , "avocado-devel"
, "Ademar Reis" 
Sent: Monday, March 28, 2016 4:44:15 PM
Subject: Re: [Avocado-devel] RFC: Multi-host tests



- Original Message -

From: "Lukáš Doktor" 
To: "Ademar Reis" , "Cleber Rosa"
,
"Amador Pahim" , "Lucas
Meneghel Rodrigues" , "avocado-devel"

Sent: Saturday, March 26, 2016 4:01:15 PM
Subject: RFC: Multi-host tests

Hello guys,

Let's open a discussion regarding the multi-host tests for avocado.

The problem
===

A user wants to run netperf on 2 machines. To do it manually he does:

  machine1: netserver -D
  machine1: # Wait till netserver is initialized
  machine2: netperf -H $machine1 -l 60
  machine2: # Wait till it finishes and report store the results
  machine1: # stop the netserver and report possible failures

Now how to support this in avocado, ideally as custom tests, ideally
even with broken connections/reboots?


Super tests
===

We don't need to do anything and leave everything on the user. He is
free to write code like:

  ...
  machine1 = aexpect.ShellSession("ssh $machine1")
  machine2 = aexpect.ShellSession("ssh $machine2")
  machine1.sendline("netserver -D")
  # wait till the netserver starts
  machine1.read_until_any_line_matches(["Starting netserver"],
60)
  output = machine2.cmd_output("netperf -H $machine1 -l
$duration")
  # interrupt the netserver
  machine1.sendline("\03")
  # verify netserver finished
  machine1.cmd("true")
  ...

the problem is it requires active connection and the user needs to
manually handle the results.


And of course the biggest problem here is that it doesn't solve the
Avocado problem: providing a framework and tools for tests that span
multiple (Avocado) execution threads, possibly on multiple hosts.


Well it does, each "ShellSession" is a new parallel process. The only
problem I have with this design is that it does not allow easy code
reuse and the results strictly depend on the test writer.



Yes, *aexpect* allows parallel execution in an asynchronous fashion. Not
targeted to tests *at all*. Avocado, as a test framework, should deliver
more. Repeating the previous wording, it should be "providing a
framework and tools for tests that span multiple (Avocado) execution
threads, possibly on multiple hosts."


That was actually my point. You can implement multi-host-tests that way,
but you can't share the tests (only include some shared pieces from
libraries).



Right, then not related to Avocado, just an example of how a test writer 
could do it (painfully) today.





Triggered simple tests
==

Alternatively we can say each machine/worker is nothing but yet
another
test, which occasionally needs a synchronization or data-exchange.
The
same example would look like this:

machine1.py:

 process.run("netserver")
 barrier("server-started", 2)
 barrier("test-finished", 2)
 process.run("killall netserver")

machine2.py:

  barrier("server-started", 2)
  self.log.debug(process.run("netperf -H %s -l 60"
 % params.get("server_ip"))
  barrier("test-finished", 2)

where "barrier(name, no_clients)" is a framework function which makes
the process wait till the specified number of processes are waiting
for
the same barrier.


The barrier mechanism looks like an appropriate and useful utility
for the
example given.  Even though your use case example explicitly
requires it,
it's worth pointing out and keeping in mind that there may be valid
use cases
which won't require any kind of synchronization.  This may even be
true to
the executions of tests that spawn multiple *local* "Avocado runs".


Absolutely, this would actually allow Julio to run his "Parallel
(clustered) testing".


So, let's try to identify what we're really looking for. For both the
use case I mentioned and Julio's "Parallel (clustered) testing", we need
a (the same) test run by multiple *runners*. A runner in this context is
something that implements the `TestRunner` interface, such as the
`RemoteTestRunner`:

https://github.com/avocado-framework/avocado/blob/master/avocado/core/remote/runner.py#L37



The following (pseudo) Avocado Test could be written:

from avocado import Test

# These are currently private APIs that could/should be or
# be exposed under another level

Re: [Avocado-devel] RFC: Multi-host tests

2016-03-30 Thread Cleber Rosa

Lukáš,

This RFC has already had a lot of strong points raised, and it's now a 
bit hard to follow the proposals and general direction.


I believe it's time for a v2. What do you think?

Thanks,
- Cleber.


On 03/30/2016 11:54 AM, Lukáš Doktor wrote:

Dne 30.3.2016 v 16:52 Lukáš Doktor napsal(a):

Dne 30.3.2016 v 15:52 Cleber Rosa napsal(a):



On 03/30/2016 09:31 AM, Lukáš Doktor wrote:

Dne 29.3.2016 v 20:25 Cleber Rosa napsal(a):



On 03/29/2016 04:11 AM, Lukáš Doktor wrote:

Dne 28.3.2016 v 21:49 Cleber Rosa napsal(a):



- Original Message -

From: "Cleber Rosa" 
To: "Lukáš Doktor" 
Cc: "Amador Pahim" , "avocado-devel"
, "Ademar Reis" 
Sent: Monday, March 28, 2016 4:44:15 PM
Subject: Re: [Avocado-devel] RFC: Multi-host tests



- Original Message -

From: "Lukáš Doktor" 
To: "Ademar Reis" , "Cleber Rosa"
,
"Amador Pahim" , "Lucas
Meneghel Rodrigues" , "avocado-devel"

Sent: Saturday, March 26, 2016 4:01:15 PM
Subject: RFC: Multi-host tests

Hello guys,

Let's open a discussion regarding the multi-host tests for
avocado.

The problem
===

A user wants to run netperf on 2 machines. To do it manually he
does:

  machine1: netserver -D
  machine1: # Wait till netserver is initialized
  machine2: netperf -H $machine1 -l 60
  machine2: # Wait till it finishes and report store the
results
  machine1: # stop the netserver and report possible failures

Now how to support this in avocado, ideally as custom tests,
ideally
even with broken connections/reboots?


Super tests
===

We don't need to do anything and leave everything on the user.
He is
free to write code like:

  ...
  machine1 = aexpect.ShellSession("ssh $machine1")
  machine2 = aexpect.ShellSession("ssh $machine2")
  machine1.sendline("netserver -D")
  # wait till the netserver starts
  machine1.read_until_any_line_matches(["Starting netserver"],
60)
  output = machine2.cmd_output("netperf -H $machine1 -l
$duration")
  # interrupt the netserver
  machine1.sendline("\03")
  # verify netserver finished
  machine1.cmd("true")
  ...

the problem is it requires active connection and the user needs to
manually handle the results.


And of course the biggest problem here is that it doesn't solve the
Avocado problem: providing a framework and tools for tests that
span
multiple (Avocado) execution threads, possibly on multiple hosts.


Well it does, each "ShellSession" is a new parallel process. The only
problem I have with this design is that it does not allow easy code
reuse and the results strictly depend on the test writer.



Yes, *aexpect* allows parallel execution in an asynchronous fashion.
Not
targeted to tests *at all*. Avocado, as a test framework, should
deliver
more. Repeating the previous wording, it should be "providing a
framework and tools for tests that span multiple (Avocado) execution
threads, possibly on multiple hosts."


That was actually my point. You can implement multi-host-tests that
way,
but you can't share the tests (only include some shared pieces from
libraries).



Right, then not related to Avocado, just an example of how a test writer
could do it (painfully) today.




Triggered simple tests
==

Alternatively we can say each machine/worker is nothing but yet
another
test, which occasionally needs a synchronization or data-exchange.
The
same example would look like this:

machine1.py:

 process.run("netserver")
 barrier("server-started", 2)
 barrier("test-finished", 2)
 process.run("killall netserver")

machine2.py:

  barrier("server-started", 2)
  self.log.debug(process.run("netperf -H %s -l 60"
 % params.get("server_ip"))
  barrier("test-finished", 2)

where "barrier(name, no_clients)" is a framework function which
makes
the process wait till the specified number of processes are
waiting
for
the same barrier.


The barrier mechanism looks like an appropriate and useful utility
for the
example given.  Even though your use case example explicitly
requires it,
it's worth pointing out and keeping in mind that there may be valid
use cases
which won't require any kind of synchronization.  This may even be
true to
the executions of tests that spawn multiple *local* "Avocado runs".


Absolutely, this would actually allow Julio to run his "Parallel
(clustered) testing".


So, let's try to identify what we're really looking for. For both the
use case I mentioned and Julio's "Parallel (clustered) testing", we
need
a (the same) test run by multiple *runners*. A runner in this
cont

[Avocado-devel] [RFC] Pre/Post test hooks

2016-04-01 Thread Cleber Rosa

MOTIVATION
==

The idea of adding hooks to be run by Avocado before and after tests is 
general enough, and may be used by the community in unpredictable ways. 
And that is good.


For this team, the initial motivation was to be able to bring back an 
Autotest feature that some of our users are missing: the ability to set 
the system-wide "kernel core pattern" configuration for tests.


Having a pre-test hook would allow "/proc/sys/kernel/core_pattern" to be 
read, saved and modified to point to the test results directory. Having 
a post-test hook would allow "/proc/sys/kernel/core_pattern" to be 
reverted back to its original state.
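
As a rough sketch of that use case (the hook plugin interface is not
defined yet, so the class shape, method names and the "results_dir" key
below are assumptions, not the actual API):

    CORE_PATTERN = "/proc/sys/kernel/core_pattern"

    class CorePattern:

        """Hypothetical pre/post test hook pair: point core dumps at the
        test results directory, then restore the original setting."""

        def pre_test(self, test_state):
            # save the current pattern and redirect cores (needs root)
            with open(CORE_PATTERN) as pattern_file:
                self.saved_pattern = pattern_file.read()
            with open(CORE_PATTERN, "w") as pattern_file:
                pattern_file.write("%s/core.%%p" % test_state["results_dir"])

        def post_test(self, test_state):
            # revert to the pattern saved by the pre-test hook
            with open(CORE_PATTERN, "w") as pattern_file:
                pattern_file.write(self.saved_pattern)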


Other current core features, such as sysinfo collection, could be 
re-implemented as pre/post test hooks.


GENERAL DESIGN POINTS
=

These are the most important design decisions to be acknowledged or 
questioned. Please reply with either ACK or your questions/suggestions.


1) Hooks are implemented as plugin classes, based on a given defined 
interface, in the same way current "CLICmd" and "CLI" interfaces allow 
plugin writers to extend Avocado and give it new commands and command 
line options.


2) The hooks are executed by the *runner*, and not by the test process. 
The goal is not to interfere with the test itself. The pre and post code 
that runs before and after the test should not *directly* change the 
test behavior and outcome. Of course, the test environment can be 
changed in a way (say having packages removed) that a test may fail 
because of hook actions.


3) Test execution time should not be changed by pre and post hooks. If a 
pre-test hook takes "n" seconds to run, "n" should not be added to the 
test run time.


4) Job run time: right now, Avocado times a Job based on the sum of 
individual test run times. With pre and post test hooks, this can be 
very different from job "wall clock" times. My instinct is to change 
that, so that a Job run time is the job "wall clock" time. I'm unsure if 
we should add yet another time measure, that is, the sum of individual 
test run time. This is also bound to be broken when parallel run of 
tests is implemented.


5) The pre test hook is given the test "early status". Information such 
as the test tagged name, the fact that it has not yet started to run and 
the test results directory are all part of the early status.


6) Because of point #5, the test is instantiated on the test process, 
its early state is sent, but the test execution itself is held until the 
runner finishes running the pre-test hooks.


7) The post test hook is given the last test status, which is also used 
by the runner to identify test success, failure, etc.



Thanks,
 - Cleber.

--
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]

___
Avocado-devel mailing list
Avocado-devel@redhat.com
https://www.redhat.com/mailman/listinfo/avocado-devel


Re: [Avocado-devel] RFC: Multi tests (previously multi-host test) [v2]

2016-04-03 Thread Cleber Rosa
 avocado listens on some port and the spawned
workers connect to this port, identify themselves and ask for
barriers/data exchange, with the support for re-connection. To do so we
have various possibilities:

Standard multiprocess API
-

The standard Python multiprocessing library provides synchronization
over TCP. The only problem is that "barriers" were only introduced in
Python 3, so we'd have to backport them, and it does not fit all our
needs, so we'd have to tweak it a bit.
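
For reference, a minimal sketch of what a TCP-backed barrier could look
like with the standard library (assuming Python 3, where threading.Barrier
exists; all names and ports here are illustrative):

    import threading
    from multiprocessing.managers import BaseManager

    _barriers = {}

    def get_barrier(name, parties):
        # create the named barrier on first use, reuse it afterwards
        if name not in _barriers:
            _barriers[name] = threading.Barrier(parties)
        return _barriers[name]

    class SyncManager(BaseManager):
        pass

    SyncManager.register("get_barrier", callable=get_barrier)

    # server side (what a --sync-server machine1:6547 could start):
    #   SyncManager(address=("", 6547), authkey=b"avocado").get_server().serve_forever()
    # client side (what --sync machine1:6547 could connect to):
    #   manager = SyncManager(address=("machine1", 6547), authkey=b"avocado")
    #   manager.connect()
    #   manager.get_barrier("server-started", 2).wait()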


Autotest's syncdata
---

Python 2.4 friendly, supports barriers and data synchronization. On the
other hand, it's quite hackish and full of shortcuts.


Custom code
---

We can take inspiration from the above and create a simple human-readable
(easy to debug or interact with manually) protocol to support barriers and
data exchange via pickling. IMO that would be easier to maintain than
backporting and adjusting multiprocessing or fixing the autotest
syncdata. A proof-of-concept can be found here:

 https://github.com/avocado-framework/avocado/pull/1019

It modifies the "passtest" so that it is only executed when it's run by 2
workers at the same time. It does not support the multi-tests yet, so
one has to run "avocado run passtest" twice against the same sync server
(once with --sync-server and once with --sync).


Conclusion
==

Given the reasons above, I like the idea of "API backed by cmdline", as all
cmdline options are stable and the output is machine readable and known to
users, so it is easy to debug manually.

For synchronization this requires the "--sync" and "--sync-server"
arguments to be present; they are not necessarily used when the user uses
the multi-test (the multi-test can start the server if not already
started and add "--sync" for each worker if not provided).

The netperf example from introduction would look like this:

The client tests are ordinary "avocado.Test" tests that can even be
executed manually without any synchronization (by providing no_clients=1)

 import avocado
 from avocado.utils import process

 class NetServer(avocado.Test):
     def setUp(self):
         process.run("netserver")
         self.barrier("setup", self.params.get("no_clients"))
     def test(self):
         pass
     def tearDown(self):
         self.barrier("finished", self.params.get("no_clients"))
         process.run("killall netserver")

 class NetPerf(avocado.Test):
     def setUp(self):
         self.barrier("setup", self.params.get("no_clients"))
     def test(self):
         process.run("netperf -H %s -l 60"
                     % self.params.get("server_ip"))
         self.barrier("finished", self.params.get("no_clients"))

One would be able to run this manually (or from build systems) using:

 avocado run NetServer --sync-server $IP:12345 &
 avocado run NetPerf --sync $IP:12345 &

(one would have to hardcode or provide the "no_clients" and "server_ip"
params on the cmdline)

and the NetPerf would wait till NetServer is initialized, then it'd run
the test while NetServer would wait till it finishes. For some users
this is sufficient, but let's add the multi-test to get a single set of
results (pseudo code):

 class MultiNetperf(avocado.MultiTest):
     def test(self):
         machines = self.params.get("machines")
         assert len(machines) > 1
         for machine in machines:
             self.add_worker(machine, sync=True)  # enable sync server
         self.workers[0].add_test("NetServer")
         self.workers[0].set_params({"no_clients": len(self.workers)})
         for worker in self.workers[1:]:
             worker.add_test("NetPerf")
             worker.set_params({"no_clients": len(self.workers),
                                "server_ip": machines[0]})
         self.run()

Running:

 avocado run MultiNetperf

would run a single test, which based on the params given to the test
would run on several machines using the first machine as server and the
rest as clients and all of them would start at the same time.

It'd produce a single set of results with one test id and the following
structure (example):


 $ tree $RESULTDIR
   └── test-results
       └── simple.mht


As you pointed out during our chat, the suffix ".mht" was not intended 
here.



       ├── job.log
       ...
       ├── 1
       │   └── job.log
       ...
       └── 2
           └── job.log
       ...



Getting back to the definitions that were laid out, I revised my 
understanding and now I believe/suggest that we should have a single 
"job.log" per job.



where 1 and 2 are the results of worker 1 and worker 2. For all of the
solutions proposed, those would give the user the standard results as they
know them from normal avocado executions, each with a unique id, which
should help with analyzing and debugging the results.


[1] - Using "streams" instead of "threads" to reduce confusion with the 
classical multi-processing pattern of threaded programming and the OS 
features that support the same pattern. That being said, "threads" could 
be one type of execution "stream" supported by Avocado, albeit it's not 
a primary development target for various reasons, including the good 
support for threads already present in the underlying Python standard 
library.


--
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]

___
Avocado-devel mailing list
Avocado-devel@redhat.com
https://www.redhat.com/mailman/listinfo/avocado-devel


Re: [Avocado-devel] [RFC] Pre/Post test hooks

2016-04-04 Thread Cleber Rosa



On 04/04/2016 08:01 AM, Lukáš Doktor wrote:

Dne 1.4.2016 v 16:00 Cleber Rosa napsal(a):

MOTIVATION
==

The idea of adding hooks to be run by Avocado before and after tests is
general enough, and may be used by the community in unpredictable ways.
And that is good.

For this team, the initial motivation was to be able to bring back an
Autotest feature that some of our users are missing: the ability to set
the system-wide "kernel core pattern" configuration for tests.

Having a pre-test hook would allow "/proc/sys/kernel/core_pattern" to be
read, saved and modified to point to the test results directory. Having
a post-test hook would allow "/proc/sys/kernel/core_pattern" to be
reverted back to its original state.

Other currently core features such as sysinfo collection, could be
re-implemented as pre/post test hooks.

GENERAL DESIGN POINTS
=

These are the most important design decisions to be acknowledged or
questioned. Please reply with either ACK or your questions/suggestions.

1) Hooks are implemented as plugin classes, based on a given defined
interface, in the same way current "CLICmd" and "CLI" interfaces allow
plugin writers to extend Avocado and give it new commands and command
line options.

I'd prefer "pluginizing" the whole "runner" instead of custom pre and
post classes. What am I talking about:

The CLICmd and CLI allows one to add several methods + "run" method
which is executed to do the action. It makes sense for CLI, but IMO it
does not suit this case.

Instead we can create plugin interface which allows to do things on
certain occasions (hooks), one of them `start_test` and `stop_test`.
It's similar to `ResultsProxy`, `LoaderProxy`, 

Both can achieve the same thing; the main differentiator is convenience:

The CLI-like:

+ clearly defines the interface
+ adds itself by publishing itself into the correct namespace
- for pre+post plugins requires double plugin initialization
- to reuse information from pre-hook in post-hook one needs to store the
state inside the results.

The *Proxy-like:

+ defines the interface
+ adds itself by publishing itself into the correct namespace
+ pre+post plugins are initialized just once (pure pre-plugins define
only `pre_test` hook, post-plugins only `post_test` hook...)
+ the state is preserved throughout the execution, so one can store the
details inside `self`.
+ is easily extensible with other related hooks

Details in
https://github.com/avocado-framework/avocado/pull/1106#discussion_r58193746



I believe this actually falls outside the scope of this RFC, so I'll write a 
separate RFC regarding how we define the Plugin interfaces and their 
granularity.






2) The hooks are executed by the *runner*, and not by the test process.
The goal is not interfere with the test itself. The pre and post code
that runs before and after the test should not *directly* change the
test behavior and outcome. Of course, the test environment can be
changed in a way (say having packages removed) that a test may fail
because of hook actions.

ACK



3) Test execution time should not be changed by pre and post hooks. If a
pre-test hook takes "n" seconds to run, "n" should not be added to the
test run time.

ACK



4) Job run time: right now, Avocado times a Job based on the sum of
individual test run times. With pre and post test hooks, this can be
very different from job "wall clock" times. My instinct is to change
that, so that a Job run time is the job "wall clock" time. I'm unsure if
we should add yet another time measure, that is, the sum of individual
test run time. This is also bound to be broken when parallel run of
tests is implemented.

I'm fine with either "real time" `time.time - start`, or with the
"user+sys time" `sum(test.time for test in job.tests)` (so sum of all
test times). I don't think we should do anything smart in here as it
might be misleading.

 time stress -c 8 -t 10
 stress: info: [23182] dispatching hogs: 8 cpu, 0 io, 0 vm, 0 hdd
 stress: info: [23182] successful run completed in 10s

 real0m10.001s
 user1m19.005s
 sys 0m0.003s



I created a card to set Avocado job time as the "wall clock time":

https://trello.com/c/TXZlbQ4u/639-job-time-should-be-the-wall-clock-time





5) The pre test hook is given the test "early status". Information such
as the test tagged name, the fact that it has not yet started to run and
the test results directory are all part of the early status.

Does it mean the test execution would wait for the pre-job hooks'
completion? It's logical, but currently it requires bi-directional
communication with the runner (not after the Test/runner cleanup).


A simple `multiprocessing.Lock` does the trick here, no need for 
bi-directional communication.
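
A minimal sketch of that idea (not the actual implementation): the runner
holds a lock which the test process blocks on until the pre-test hooks are
done:

    import multiprocessing
    import time

    def test_process(start_permission):
        # the test process would send its "early status" here, then block
        # until the runner releases the lock after the pre-test hooks
        with start_permission:
            pass
        print("test running")

    if __name__ == "__main__":
        start_permission = multiprocessing.Lock()
        start_permission.acquire()              # held by the runner
        test = multiprocessing.Process(target=test_process,
                                       args=(start_permission,))
        test.start()
        time.sleep(1)                           # stand-in for pre-test hooks
        start_permission.release()              # let the test proceed
        test.join()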




Anyway yes, th

Re: [Avocado-devel] [RFC] Pre/Post test hooks

2016-04-05 Thread Cleber Rosa



On 04/05/2016 03:23 PM, Jeff Nelson wrote:

On Fri, Apr 01, 2016 at 11:00:39AM -0300, Cleber Rosa wrote:

MOTIVATION
==

The idea of adding hooks to be run by Avocado before and after tests
is general enough, and may be used by the community in unpredictable
ways. And that is good.

For this team, the initial motivation was to be able to bring back an
Autotest feature that some of our users are missing: the ability to
set the system-wide "kernel core pattern" configuration for tests.

Having a pre-test hook would allow "/proc/sys/kernel/core_pattern" to
be read, saved and modified to point to the test results directory.
Having a post-test hook would allow "/proc/sys/kernel/core_pattern" to
be reverted back to its original state.

Other currently core features such as sysinfo collection, could be
re-implemented as pre/post test hooks.

GENERAL DESIGN POINTS
=

These are the most important design decisions to be acknowledged or
questioned. Please reply with either ACK or your questions/suggestions.


I have some questions (hope you don't mind).

What are the outputs of pre- and post-test hooks?



The outputs are defined by the actual plugin "hooked in the hook". The 
current interface doesn't define it. They can be completely silent, they 
can generate output to the UI, they can write to files at the test's 
result directory, etc.



Are there limits to the actions that are permitted in pre- and
post-test hooks? Of course, the primary use-case of the pre-test hook
is to set up the environment for the test--so environment changes are
permitted--and the use-case for a matching post-hook is to restore the
environment. About the only operation I can imagine NOT being
permitted is to abort (kill itself, or kill its controlling parent
process).



The pre/post test hooks are fed the test status, so they can gather 
information, but not (directly) influence its outcome.



Can a pre-test hook return a status that causes the test execution to
be skipped? I can imagine this being done for another use-case:
validate the test environment (e.g., check to see if required hardware
is present).



No, this should be done at the test's "setUp" stage. Pointers:

https://github.com/avocado-framework/avocado/blob/master/avocado/core/test.py#L368

But I understand the value in influencing all tests with a single, 
pluggable block of code. Still, this looks like something that 
could/should be addressed by what we're calling "Job API". With some 
Python code and such an API, you could select the exact tests you want 
to run.



Can a post-test hook alter the result (status) of the test?



In the current implementation draft it could, but only because we're 
sharing the same (last, final) test state among the post plugins and 
test result handlers. I'm not sure that a plugin *should* change it, though.



Has there been any thought given to having pre- and post-job hooks?
For example, setting the kernel core pattern is something I would want
to do globally, for all tests in a job. It would be faster and more
convenient to do this just once. But I admit this is a +1 optimization
and need not be considered now. (It also complicates things when tests
run on multiple machines.)


Pre/Post job hooks are actually simpler, and implemented in the 
following PR:


https://github.com/avocado-framework/avocado/pull/1106



Can there be multiple hooks for a given test? If so, how does one
define the order in which they are executed? Since there are pre- and
post-test hooks, there are really two orders to consider.



You mean multiple plugins registered under a given hook, or simply put, 
multiple pre (and/or) post plugins active, right? The answer is yes.


We have not implemented any kind of ordering management, simply because 
we have not hit a use case that requires it.



I found myself wanting to make an assumption so I better ask: must
hooks come in pairs (for every pre-test there must be a post-test, and
vice-versa)?


Not at all. You can have only pre hooks enabled, only post hooks enabled 
or both.


The given *job* mail notification plugin example, for instance, is 
something that makes more sense activated only in a post hook:


https://github.com/clebergnu/avocado/blob/job_pre_post/examples/plugins/job-pre-post/mail/avocado_job_mail.py



That's all for now.

-Jeff



Thanks a lot for the feedback!
- Cleber.

___
Avocado-devel mailing list
Avocado-devel@redhat.com
https://www.redhat.com/mailman/listinfo/avocado-devel


[Avocado-devel] RFC: Avocado Job API

2016-04-11 Thread Cleber Rosa
from a custom Job execution to a
custom Job runner, example::

  #!/usr/bin/env python
  import sys
  from avocado import Job
  from avocado.plugin_manager import require
  from avocado.resolver import resolve

  test = resolve(sys.argv[1])
  host_list = sys.argv[2:]

  runner_plugin = 'avocado.plugins.runner:RemoteTestRunner'
  require(runner_plugin)

  job = Job()
  print('JOB ID: %s' % job.unique_id)
  print('JOB LOG: %s' % job.log)
  env = job.environment # property
  env.config.set('plugin.runner', 'default', runner_plugin)
  env.config.set('plugin.runner.RemoteTestRunner', 'username', 'root')
  env.config.set('plugin.runner.RemoteTestRunner', 'password', '123456')

  for host in host_list:
  env.config.set('plugin.runner.RemoteTestRunner', 'host', host)
  job.run_test(test)

  print('JOB STATUS: %s' % job.status)

Which could be run as::

  $ multi hardware_validation.py:RHEL.test 
rhel{6,7}.{x86_64,ppc64}.internal

  JOB ID: 54cacfb42f3fa9566b6307ad540fbe594f4a5fa2
  JOB LOG: 
/home//avocado/job-results/job-2016-04-07T16.46-54cacfb/job.log

  JOB STATUS: AVOCADO_ALL_OK

API Requirements


1. Job creation API
2. Test resolution API
3. Configuration API
4. Plugin Management API
5. Single test execution API

Current shortcomings


1. The current Avocado runner implementations do not follow the "new
   style" plugin standard.

2. There's no concept of job environment

3. Lack of a uniform definition of plugin implementation for "driver" style
   plugins.

4. Lack of automatic ownership of configuration namespace by plugin name.


Other use cases
===

The following is a list of other valid use cases which can be
discussed at a later time:

* Use the multiplexer only for some tests.

* Use the gdb or wrapper feature only for some tests.

* Run Avocado tests and external-runner tests in the same job.

* Run tests in parallel.

* Take actions based on test results (for example, run or skip other
  tests)

* Post-process the logs or test results before the job is done

Development Milestones
==

Since it's clear that Avocado demands many changes to be able to
completely fulfill all mentioned use cases, it seems like a good idea
to define milestones.  Those milestones are not intended to set the
pace of development, but to allow for the fulfillment of the maximum
number of real world use cases as soon as possible.

Milestone 1
---

Includes the delivery of the following APIs:

* Job creation API
* Test resolution API
* Single test execution API

Milestone 2
---

Adds to the previous milestone:

* Configuration API

Milestone 3
---

Adds to the previous milestone:

* Plugin management API

Milestone 4
---

Introduces proper interfaces where previously Configuration and Plugin
management APIs were being used.  For instance, where the following
pseudo code was being used to set the current test runner::

  env = job.environment
  env.config.set('plugin.runner', 'default',
 'avocado.plugins.runner:RemoteTestRunner')
  env.config.set('plugin.runner.RemoteTestRunner', 'username', 'root')
  env.config.set('plugin.runner.RemoteTestRunner', 'password', '123456')

APIs would be introduced that would allow for the following pseudo
code::

  job.load_runner_by_name('RemoteTestRunner')
  if job.runner.accepts_credentials():
  job.runner.set_credentials(username='root', password='123456')

.. _settings: 
https://github.com/avocado-framework/avocado/blob/0.34.0/avocado/core/settings.py
.. _getting the value: 
https://github.com/avocado-framework/avocado/blob/0.34.0/avocado/core/settings.py#L221
.. _default runner: 
https://github.com/avocado-framework/avocado/blob/0.34.0/avocado/core/runner.py#L193
.. _remote runner: 
https://github.com/avocado-framework/avocado/blob/0.34.0/avocado/core/remote/runner.py#L37
.. _vm runner: 
https://github.com/avocado-framework/avocado/blob/0.34.0/avocado/core/remote/runner.py#L263
.. _entry points: 
https://pythonhosted.org/setuptools/pkg_resources.html#entry-points


--
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]

___
Avocado-devel mailing list
Avocado-devel@redhat.com
https://www.redhat.com/mailman/listinfo/avocado-devel


Re: [Avocado-devel] [RFC] Pre/Post test hooks

2016-04-12 Thread Cleber Rosa



On 04/11/2016 09:44 PM, Amador Pahim wrote:



On 04/01/2016 11:00 AM, Cleber Rosa wrote:

MOTIVATION
==

The idea of adding hooks to be run by Avocado before and after tests
is general enough, and may be used by the community in unpredictable
ways. And that is good.

For this team, the initial motivation was to be able to bring back an
Autotest feature that some of our users are missing: the ability to
set the system-wide "kernel core pattern" configuration for tests.

Having a pre-test hook would allow "/proc/sys/kernel/core_pattern" to
be read, saved and modified to point to the test results directory.
Having a post-test hook would allow "/proc/sys/kernel/core_pattern" to
be reverted back to its original state.

Other currently core features such as sysinfo collection, could be
re-implemented as pre/post test hooks.

GENERAL DESIGN POINTS
=

These are the most important design decisions to be acknowledged or
questioned. Please reply with either ACK or your questions/suggestions.

1) Hooks are implemented as plugin classes, based on a given defined
interface, in the same way current "CLICmd" and "CLI" interfaces allow
plugin writers to extend Avocado and give it new commands and command
line options.


Given the discussion is way ahead and we already have a PR in place,
maybe this is too late, but I'd like to give my 2 cents here. I like the
plugins approach, but I don't think that's the best way to implement the
hooks mechanism. From the experience with oVirt, I'm used to think in
hooks as a directory where I put some executable(s) that will be
executed in a given moment. Each directory corresponds to a moment. The
order that the executables are executed is simple given by their names.
The hooking api itself can and should be a plugin, but user should not
have to bother with the plugin stuff.

Example: directories '~/avocado/before_tests/' and
'~/avocado/after_tests/' are the first candidates to cover this RFC. In
the future, if we have a request for it, we can have '/before_test/',
'/after_test/', '/after_failed_test/' and so on.



This functionality is also included in the upcoming *test* pre/post PR. 
It's built using the pre and post interfaces defined/discussed here. So, 
for the majority of use cases the scripts will do and this will probably 
not be used directly. Still, there are some use cases which require 
closer coupling.  Let's call this "test pre/post scripts", a plugin that 
uses the "test pre/post" interfaces.



For those executables inside the directories, if using python and
importing the avocado hooking module, then they can access avocado job
and/or tests information, probably the same information you're already
considering to expose in your current proposal.


Do you mean something like:

   from avocado.hooks import get_test_info

   current_test = get_test_info()

It hurts my brain when I think of possible implementations.  Please 
excuse me if I'm mistaken, but it looks like this kind of approach maps 
to oVirt a lot better, because it has a central database and a remotely 
accessible API.  So, it's a lot easier and cleaner to get information 
and manipulate every aspect of every object in a hook.


And for non Python code, this would require the definition of an API 
based on either environment variables or command line parameters, right? 
 My idea is to turn the test state into a set of environment variables that 
the test pre/post scripts will have access to.


Example test state:

  {"results_dir": 
"/home/foo/avocado/job-results/job-2016-04-11T19.38-9eb1b73/test-results/bar", 
... }


Would become:


AVOCADO_TEST_RESULTS_DIR="/home/foo/avocado/job-results/job-2016-04-11T19.38-9eb1b73/test-results/bar"

The test pre/post script could then be something like:

   #!/bin/bash
   echo "great success" > $AVOCADO_TEST_RESULTS_DIR/status





2) The hooks are executed by the *runner*, and not by the test
process. The goal is not interfere with the test itself. The pre and
post code that runs before and after the test should not *directly*
change the test behavior and outcome. Of course, the test environment
can be changed in a way (say having packages removed) that a test may
fail because of hook actions.


I don't see a problem in changing the test directly. It can be dangerous,
of course, but the user is expected to know what he's doing when using
hooks. In oVirt hooks we are able, for example, to edit everything we
want in the VM xml before using the xml to create the VM in libvirt.
Anyway, since we don't have a request or use case for this currently,
ACK for the 'not interfere with the test itself'.



3) Test execution time should not be changed by pre and post hooks. If
a pre-test hook takes "n" seconds to run, "n&quo

Re: [Avocado-devel] RFC: Avocado Job API

2016-04-12 Thread Cleber Rosa



On 04/11/2016 09:31 PM, Ademar Reis wrote:

On Mon, Apr 11, 2016 at 09:09:58AM -0300, Cleber Rosa wrote:

Note: the same content on this message is available at:

https://github.com/clebergnu/avocado/blob/rfc_job_api/docs/rfcs/job-api.rst

Some users may find it easier to read with a prettier formatting.

Problem statement
=

An Avocado job is created by running the command line ``avocado``
application with the ``run`` command, such as::

   $ avocado run passtest.py

But most of Avocado's power is activated by additional command line
arguments, such as::

   $ avocado run passtest.py --vm-domain=vm1
   $ avocado run passtest.py --remote-hostname=machine1

Even though Avocado supports many features, such as running tests
locally, on a Virtual Machine and on a remote host, only one of those can
be used on a given job.

The observed limitations are:

* Job creation is limited by the expressiveness of command line
   arguments, this causes mutual exclusion of some features
* Mapping features to a subset of tests or conditions is not possible
* Once created, and while running, a job can not have its status
   queried and can not be manipulated

Even though Avocado is a young project, its current feature set
already exceeds its flexibility.  Unfortunately, advanced users are
not always free to mix and match those features at will.

Reviewing and Evaluating Avocado


In light of the given problem, let's take a look at what Avocado is,
both by definition and based on its real world, day to day, usage.

Avocado By Definition
-

Avocado is, by definition, "a set of tools and libraries to help with
automated testing".  Here, some points can be made about the two
components that Avocado are made of:

1. Libraries are commonly flexible enough and expose the right
features in a consistent way.  Libraries that provide good APIs
allow users to solve their own problems, not always anticipated by
the library authors.

2. The majority of the Avocado library code falls into two categories:
utility and test APIs.  Avocado's core libraries are, so far, not
intended to be consumed by third party code and its use is not
supported in any way.

3. Tools (as in command line applications), are commonly a lot less
flexible than libraries.  Even the ones driven by command line
arguments, configuration files and environment variables fall
short in flexibility when compared to libraries.  That is true even
when respecting the basic UNIX principles and features that help to
reuse and combine different tools in a single shell session.

How Avocado is used
---

The vast majority of the observed Avocado use cases, present and
future, includes running tests.  Given the Avocado architecture and
its core concepts, this means running a job.

Avocado, with regards to its real world usage, is pretty much a job
(and test) runner, and there's no escaping that.  It's probable that,
for every one hundredth ``avocado run`` commands, a different
``avocado `` is executed.

Proposed solution & RFC goal


By now, the title of this document may seem a little less
misleading. Still, let's attempt to make it even more clear.

Since Avocado is mostly a job runner that needs to be more flexible,
the most natural approach is to turn more of it into a library.  This
would lead to the creation of a new set of user consumable APIs,
albeit for a different set of users.  Those APIs should allow the
creation of custom job executions, in ways that the Avocado authors
have not yet anticipated.

Having settled on this solution to the stated problem, the primary
goal of this RFC is to propose how such a "Job API" can be
implemented.


So in theory, given a comprehensive enough API it should be
possible to rewrite the entire "Avocado Test Runner" using the
Job API.



If the answer had to be binary, I'd answer 1 (yes).  But let's take this 
statement with a grain of salt.  It's probable, though, that not all 
code that the "Avocado Test Runner" uses or contains will benefit users.



Actually, in the future we could have multiple Test Runners (for
example in contrib/) with different feature sets or approaches at
creating and managing jobs.



Yes, one example of a "custom job" that can easily become a custom "job 
runner" was given.



(in practice we will approach the problem incrementally, so this
should be a very long term goal)



Exactly, and this is my main concern with the positive answer I gave. 
Let's not attempt to have "let users rewrite the whole Avocado Test 
Runner" as our short term mission or primary outcome of this RFC.




Analysis of a Job Environment
=

To properly implement a Job API, it's necessary to review what
influences the creation and

Re: [Avocado-devel] RFC: Avocado Job API

2016-04-12 Thread Cleber Rosa



On 04/12/2016 05:06 AM, Lukáš Doktor wrote:

Hello Cleber,

in general I welcome this RFC. This is my 3rd attempt to make my
response understandable. First I'm mentioning the problems, but some
explanations follow at the end of the email.



Thanks.  One question though, this is your first reply to this thread, 
right?  If not, I'm misreading (or not sorting properly) the dates/times 
on my MUA.



Dne 11.4.2016 v 14:09 Cleber Rosa napsal(a):

Note: the same content on this message is available at:

https://github.com/clebergnu/avocado/blob/rfc_job_api/docs/rfcs/job-api.rst


Some users may find it easier to read with a prettier formatting.

Problem statement
=

An Avocado job is created by running the command line ``avocado``
application with the ``run`` command, such as::

   $ avocado run passtest.py

But most of Avocado's power is activated by additional command line
arguments, such as::

   $ avocado run passtest.py --vm-domain=vm1
   $ avocado run passtest.py --remote-hostname=machine1

Even though Avocado supports many features, such as running tests
locally, on a Virtual Machine and on a remote host, only one those can
be used on a given job.

The observed limitations are:

* Job creation is limited by the expressiveness of command line
   arguments, this causes mutual exclusion of some features
* Mapping features to a subset of tests or conditions is not possible
* Once created, and while running, a job can not have its status
   queried and can not be manipulated

Even though Avocado is a young project, its current feature set
already exceeds its flexibility.  Unfortunately, advanced users are
not always free to mix and match those features at will.

Reviewing and Evaluating Avocado


In light of the given problem, let's take a look at what Avocado is,
both by definition and based on its real world, day to day, usage.

Avocado By Definition
-

Avocado is, by definition, "a set of tools and libraries to help with
automated testing".  Here, some points can be made about the two
components that Avocado are made of:

1. Libraries are commonly flexible enough and expose the right
features in a consistent way.  Libraries that provide good APIs
allow users to solve their own problems, not always anticipated by
the library authors.

2. The majority of the Avocado library code fall in two categories:
utility and test APIs.  Avocado's core libraries are so far, not
intended to be consumed by third party code and its use is not
supported in any way.

3. Tools (as in command line applications), are commonly a lot less
flexible than libraries.  Even the ones driven by command line
arguments, configuration files and environment variables fall
short in flexibility when compared to libraries.  That is true even
when respecting the basic UNIX principles and features that help to
reuse and combine different tools in a single shell session.

How Avocado is used
---

The vast majority of the observed Avocado use cases, present and
future, includes running tests.  Given the Avocado architecture and
its core concepts, this means running a job.

Avocado, with regards to its real world usage, is pretty much a job
(and test) runner, and there's no escaping that.  It's probable that,
for every one hundredth ``avocado run`` commands, a different
``avocado `` is executed.

Proposed solution & RFC goal


By now, the title of this document may seem a little less
misleading. Still, let's attempt to make it even more clear.

Since Avocado is mostly a job runner that needs to be more flexible,
the most natural approach is to turn more of it into a library.  This
would lead to the creation of a new set of user consumable APIs,
albeit for a different set of users.  Those APIs should allow the
creation of custom job executions, in ways that the Avocado authors
have not yet anticipated.

Having settled on this solution to the stated problem, the primary
goal of this RFC is to propose how such a "Job API" can be
implemented.

Analysis of a Job Environment
=

To properly implement a Job API, it's necessary to review what
influences the creation and execution of a job.  Currently, a Job
execution based on the current command line, is driven by, at least,
the following factors:

* Configuration state
* Command line parameters
* Active plugins

The following subsections examine how these would behave in an API
based approach to Job execution.

Configuration state
---

Even though Avocado has a well defined `settings`_ module, it only
provides support for `getting the value`_ of configuration keys. It
lacks the ability to set configuration values at run time.

If the configuration state allowed modifications at run time (in a
well defined and supported way), use

Re: [Avocado-devel] RFC: Avocado Job API

2016-04-12 Thread Cleber Rosa



On 04/12/2016 06:43 AM, Lukáš Doktor wrote:

Dne 12.4.2016 v 10:06 Lukáš Doktor napsal(a):

Hello Cleber,

in general I welcome this RFC. This is my 3rd attempt to make my
response understandable. First I'm mentioning the problems, but some
explanations follow at the end of the email.

Dne 11.4.2016 v 14:09 Cleber Rosa napsal(a):

Note: the same content on this message is available at:

https://github.com/clebergnu/avocado/blob/rfc_job_api/docs/rfcs/job-api.rst



Some users may find it easier to read with a prettier formatting.

Problem statement
=

An Avocado job is created by running the command line ``avocado``
application with the ``run`` command, such as::

   $ avocado run passtest.py

But most of Avocado's power is activated by additional command line
arguments, such as::

   $ avocado run passtest.py --vm-domain=vm1
   $ avocado run passtest.py --remote-hostname=machine1

Even though Avocado supports many features, such as running tests
locally, on a Virtual Machine and on a remote host, only one those can
be used on a given job.

The observed limitations are:

* Job creation is limited by the expressiveness of command line
   arguments, this causes mutual exclusion of some features
* Mapping features to a subset of tests or conditions is not possible
* Once created, and while running, a job can not have its status
   queried and can not be manipulated

Even though Avocado is a young project, its current feature set
already exceeds its flexibility.  Unfortunately, advanced users are
not always free to mix and match those features at will.

Reviewing and Evaluating Avocado


In light of the given problem, let's take a look at what Avocado is,
both by definition and based on its real world, day to day, usage.

Avocado By Definition
-

Avocado is, by definition, "a set of tools and libraries to help with
automated testing".  Here, some points can be made about the two
components that Avocado are made of:

1. Libraries are commonly flexible enough and expose the right
features in a consistent way.  Libraries that provide good APIs
allow users to solve their own problems, not always anticipated by
the library authors.

2. The majority of the Avocado library code fall in two categories:
utility and test APIs.  Avocado's core libraries are so far, not
intended to be consumed by third party code and its use is not
supported in any way.

3. Tools (as in command line applications), are commonly a lot less
flexible than libraries.  Even the ones driven by command line
arguments, configuration files and environment variables fall
short in flexibility when compared to libraries.  That is true even
when respecting the basic UNIX principles and features that help to
reuse and combine different tools in a single shell session.

How Avocado is used
---

The vast majority of the observed Avocado use cases, present and
future, includes running tests.  Given the Avocado architecture and
its core concepts, this means running a job.

Avocado, with regards to its real world usage, is pretty much a job
(and test) runner, and there's no escaping that.  It's probable that,
for every one hundredth ``avocado run`` commands, a different
``avocado `` is executed.

Proposed solution & RFC goal


By now, the title of this document may seem a little less
misleading. Still, let's attempt to make it even more clear.

Since Avocado is mostly a job runner that needs to be more flexible,
the most natural approach is to turn more of it into a library.  This
would lead to the creation of a new set of user consumable APIs,
albeit for a different set of users.  Those APIs should allow the
creation of custom job executions, in ways that the Avocado authors
have not yet anticipated.

Having settled on this solution to the stated problem, the primary
goal of this RFC is to propose how such a "Job API" can be
implemented.

Analysis of a Job Environment
=============================

To properly implement a Job API, it's necessary to review what
influences the creation and execution of a job.  Currently, a Job
execution based on the current command line, is driven by, at least,
the following factors:

* Configuration state
* Command line parameters
* Active plugins

The following subsections examine how these would behave in an API
based approach to Job execution.

Configuration state
-------------------

Even though Avocado has a well defined `settings`_ module, it only
provides support for `getting the value`_ of configuration keys. It
lacks the ability to set configuration values at run time.

If the configuration state allowed modifications at run time (in a
well defined and supported way), users could then create many types of
custom jobs with that "tool" alone.
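
As a rough illustration (not part of the original RFC text), reading a
configuration value is already possible today, while setting one at run
time is only hypothetical; the section and key names below are examples::

    from avocado.core.settings import settings

    # supported today: read a configuration value
    timeout = settings.get_value('runner', 'timeout', key_type=int,
                                 default=60)

    # hypothetical, for illustration only: a future, supported way of
    # changing the configuration state at run time
    # settings.set_value('runner', 'timeout', 120)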

Command line parameters

Re: [Avocado-devel] RFC: Avocado Job API

2016-04-12 Thread Cleber Rosa



On 04/12/2016 06:22 AM, Lukáš Doktor wrote:

Dne 12.4.2016 v 02:31 Ademar Reis napsal(a):

On Mon, Apr 11, 2016 at 09:09:58AM -0300, Cleber Rosa wrote:

Note: the same content on this message is available at:

https://github.com/clebergnu/avocado/blob/rfc_job_api/docs/rfcs/job-api.rst


Some users may find it easier to read with a prettier formatting.

Problem statement
=

An Avocado job is created by running the command line ``avocado``
application with the ``run`` command, such as::

   $ avocado run passtest.py

But most of Avocado's power is activated by additional command line
arguments, such as::

   $ avocado run passtest.py --vm-domain=vm1
   $ avocado run passtest.py --remote-hostname=machine1

Even though Avocado supports many features, such as running tests
locally, on a Virtual Machine and on a remote host, only one of those can
be used on a given job.

The observed limitations are:

* Job creation is limited by the expressiveness of command line
   arguments, which causes mutual exclusion of some features
* Mapping features to a subset of tests or conditions is not possible
* Once created, and while running, a job can not have its status
   queried and can not be manipulated

Even though Avocado is a young project, its current feature set
already exceeds its flexibility.  Unfortunately, advanced users are
not always free to mix and match those features at will.

Reviewing and Evaluating Avocado


In light of the given problem, let's take a look at what Avocado is,
both by definition and based on its real world, day to day, usage.

Avocado By Definition
-

Avocado is, by definition, "a set of tools and libraries to help with
automated testing".  Here, some points can be made about the two
components that Avocado is made of:

1. Libraries are commonly flexible enough and expose the right
features in a consistent way.  Libraries that provide good APIs
allow users to solve their own problems, not always anticipated by
the library authors.

2. The majority of the Avocado library code falls into two categories:
utility and test APIs.  Avocado's core libraries are, so far, not
intended to be consumed by third party code and their use is not
supported in any way.

3. Tools (as in command line applications), are commonly a lot less
flexible than libraries.  Even the ones driven by command line
arguments, configuration files and environment variables fall
short in flexibility when compared to libraries.  That is true even
when respecting the basic UNIX principles and features that help to
reuse and combine different tools in a single shell session.

How Avocado is used
---

The vast majority of the observed Avocado use cases, present and
future, includes running tests.  Given the Avocado architecture and
its core concepts, this means running a job.

Avocado, with regards to its real world usage, is pretty much a job
(and test) runner, and there's no escaping that.  It's probable that,
for every one hundred ``avocado run`` commands, a different
``avocado `` command is executed.

Proposed solution & RFC goal


By now, the title of this document may seem a little less
misleading. Still, let's attempt to make it even more clear.

Since Avocado is mostly a job runner that needs to be more flexible,
the most natural approach is to turn more of it into a library.  This
would lead to the creation of a new set of user consumable APIs,
albeit for a different set of users.  Those APIs should allow the
creation of custom job executions, in ways that the Avocado authors
have not yet anticipated.

Having settled on this solution to the stated problem, the primary
goal of this RFC is to propose how such a "Job API" can be
implemented.


So in theory, given a comprehensive enough API it should be
possible to rewrite the entire "Avocado Test Runner" using the
Job API.

Actually, in the future we could have multiple Test Runners (for
example in contrib/) with different feature sets or approaches at
creating and managing jobs.

(in practice we will approach the problem incrementally, so this
should be a very long term goal)


Exactly, for example run on several machines, or run in parallel.



Agreed.
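
To make the idea a bit more concrete, here is a purely hypothetical
sketch of such a custom runner; none of these Job methods exist at this
point, and the names are invented only for illustration::

    from avocado.core.job import Job   # hypothetical future, supported API

    job = Job()                                  # hypothetical: empty job
    job.add_test('passtest.py')                  # hypothetical: local execution
    job.add_test('synctest.py',
                 remote_hostname='machine1')     # hypothetical: per-test remote execution
    job.add_test('synctest.py',
                 remote_hostname='machine2')     # hypothetical: another machine
    status = job.run()                           # hypothetical: run, possibly in parallel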



Analysis of a Job Environment
=

To properly implement a Job API, it's necessary to review what
influences the creation and execution of a job.  Currently, a Job
execution based on the current command line, is driven by, at least,
the following factors:

* Configuration state
* Command line parameters
* Active plugins

The following subsections examine how these would behave in an API
based approach to Job execution.

Configuration state
---

Even though Avocado has a well defined `settings`_ module, it only
provides support for `getting the value`_ of configurati

Re: [Avocado-devel] RFC: Avocado Job API

2016-04-12 Thread Cleber Rosa



On 04/12/2016 12:04 PM, Ademar Reis wrote:

On Tue, Apr 12, 2016 at 11:22:40AM +0200, Lukáš Doktor wrote:

Dne 12.4.2016 v 02:31 Ademar Reis napsal(a):

On Mon, Apr 11, 2016 at 09:09:58AM -0300, Cleber Rosa wrote:

Note: the same content on this message is available at:

https://github.com/clebergnu/avocado/blob/rfc_job_api/docs/rfcs/job-api.rst

Some users may find it easier to read with a prettier formatting.

Problem statement
=

An Avocado job is created by running the command line ``avocado``
application with the ``run`` command, such as::

   $ avocado run passtest.py

But most of Avocado's power is activated by additional command line
arguments, such as::

   $ avocado run passtest.py --vm-domain=vm1
   $ avocado run passtest.py --remote-hostname=machine1

Even though Avocado supports many features, such as running tests
locally, on a Virtual Machine and on a remote host, only one of those can
be used on a given job.

The observed limitations are:

* Job creation is limited by the expressiveness of command line
   arguments, which causes mutual exclusion of some features
* Mapping features to a subset of tests or conditions is not possible
* Once created, and while running, a job can not have its status
   queried and can not be manipulated

Even though Avocado is a young project, its current feature set
already exceeds its flexibility.  Unfortunately, advanced users are
not always free to mix and match those features at will.

Reviewing and Evaluating Avocado


In light of the given problem, let's take a look at what Avocado is,
both by definition and based on its real world, day to day, usage.

Avocado By Definition
-

Avocado is, by definition, "a set of tools and libraries to help with
automated testing".  Here, some points can be made about the two
components that Avocado is made of:

1. Libraries are commonly flexible enough and expose the right
features in a consistent way.  Libraries that provide good APIs
allow users to solve their own problems, not always anticipated by
the library authors.

2. The majority of the Avocado library code falls into two categories:
utility and test APIs.  Avocado's core libraries are, so far, not
intended to be consumed by third party code and their use is not
supported in any way.

3. Tools (as in command line applications), are commonly a lot less
flexible than libraries.  Even the ones driven by command line
arguments, configuration files and environment variables fall
short in flexibility when compared to libraries.  That is true even
when respecting the basic UNIX principles and features that help to
reuse and combine different tools in a single shell session.

How Avocado is used
---

The vast majority of the observed Avocado use cases, present and
future, includes running tests.  Given the Avocado architecture and
its core concepts, this means running a job.

Avocado, with regards to its real world usage, is pretty much a job
(and test) runner, and there's no escaping that.  It's probable that,
for every one hundred ``avocado run`` commands, a different
``avocado `` command is executed.

Proposed solution & RFC goal


By now, the title of this document may seem a little less
misleading. Still, let's attempt to make it even more clear.

Since Avocado is mostly a job runner that needs to be more flexible,
the most natural approach is to turn more of it into a library.  This
would lead to the creation of a new set of user consumable APIs,
albeit for a different set of users.  Those APIs should allow the
creation of custom job executions, in ways that the Avocado authors
have not yet anticipated.

Having settled on this solution to the stated problem, the primary
goal of this RFC is to propose how such a "Job API" can be
implemented.


So in theory, given a comprehensive enough API it should be
possible to rewrite the entire "Avocado Test Runner" using the
Job API.

Actually, in the future we could have multiple Test Runners (for
example in contrib/) with different feature sets or approaches at
creating and managing jobs.

(in practice we will approach the problem incrementally, so this
should be a very long term goal)


Exactly, for example run on several machines, or run in parallel.



Analysis of a Job Environment
=

To properly implement a Job API, it's necessary to review what
influences the creation and execution of a job.  Currently, a Job
execution based on the current command line, is driven by, at least,
the following factors:

* Configuration state
* Command line parameters
* Active plugins

The following subsections examine how these would behave in an API
based approach to Job execution.

Configuration state
---

Even though Avocado has a well defined `settings`_ module, it only
provides

Re: [Avocado-devel] RFC: Avocado Job API

2016-04-12 Thread Cleber Rosa



On 04/12/2016 12:50 PM, Lukáš Doktor wrote:

Dne 12.4.2016 v 17:04 Ademar Reis napsal(a):

On Tue, Apr 12, 2016 at 11:22:40AM +0200, Lukáš Doktor wrote:

Dne 12.4.2016 v 02:31 Ademar Reis napsal(a):

On Mon, Apr 11, 2016 at 09:09:58AM -0300, Cleber Rosa wrote:

Note: the same content on this message is available at:

https://github.com/clebergnu/avocado/blob/rfc_job_api/docs/rfcs/job-api.rst


Some users may find it easier to read with a prettier formatting.

Problem statement
=

An Avocado job is created by running the command line ``avocado``
application with the ``run`` command, such as::

   $ avocado run passtest.py

But most of Avocado's power is activated by additional command line
arguments, such as::

   $ avocado run passtest.py --vm-domain=vm1
   $ avocado run passtest.py --remote-hostname=machine1

Even though Avocado supports many features, such as running tests
locally, on a Virtual Machine and on a remote host, only one of those can
be used on a given job.

The observed limitations are:

* Job creation is limited by the expressiveness of command line
   arguments, which causes mutual exclusion of some features
* Mapping features to a subset of tests or conditions is not possible
* Once created, and while running, a job can not have its status
   queried and can not be manipulated

Even though Avocado is a young project, its current feature set
already exceeds its flexibility.  Unfortunately, advanced users are
not always free to mix and match those features at will.

Reviewing and Evaluating Avocado


In light of the given problem, let's take a look at what Avocado is,
both by definition and based on its real world, day to day, usage.

Avocado By Definition
-

Avocado is, by definition, "a set of tools and libraries to help with
automated testing".  Here, some points can be made about the two
components that Avocado is made of:

1. Libraries are commonly flexible enough and expose the right
features in a consistent way.  Libraries that provide good APIs
allow users to solve their own problems, not always anticipated by
the library authors.

2. The majority of the Avocado library code falls into two categories:
utility and test APIs.  Avocado's core libraries are, so far, not
intended to be consumed by third party code and their use is not
supported in any way.

3. Tools (as in command line applications), are commonly a lot less
flexible than libraries.  Even the ones driven by command line
arguments, configuration files and environment variables fall
short in flexibility when compared to libraries.  That is true even
when respecting the basic UNIX principles and features that help to
reuse and combine different tools in a single shell session.

How Avocado is used
---

The vast majority of the observed Avocado use cases, present and
future, includes running tests.  Given the Avocado architecture and
its core concepts, this means running a job.

Avocado, with regards to its real world usage, is pretty much a job
(and test) runner, and there's no escaping that.  It's probable that,
for every one hundred ``avocado run`` commands, a different
``avocado `` command is executed.

Proposed solution & RFC goal


By now, the title of this document may seem a little less
misleading. Still, let's attempt to make it even more clear.

Since Avocado is mostly a job runner that needs to be more flexible,
the most natural approach is to turn more of it into a library.  This
would lead to the creation of a new set of user consumable APIs,
albeit for a different set of users.  Those APIs should allow the
creation of custom job executions, in ways that the Avocado authors
have not yet anticipated.

Having settled on this solution to the stated problem, the primary
goal of this RFC is to propose how such a "Job API" can be
implemented.


So in theory, given a comprehensive enough API it should be
possible to rewrite the entire "Avocado Test Runner" using the
Job API.

Actually, in the future we could have multiple Test Runners (for
example in contrib/) with different feature sets or approaches at
creating and managing jobs.

(in practice we will approach the problem incrementally, so this
should be a very long term goal)


Exactly, for example run on several machines, or run in parallel.



Analysis of a Job Environment
=

To properly implement a Job API, it's necessary to review what
influences the creation and execution of a job.  Currently, a Job
execution based on the current command line, is driven by, at least,
the following factors:

* Configuration state
* Command line parameters
* Active plugins

The following subsections examine how these would behave in an API
based approach to Job execution.

Configuration state
---

Even though Avocado has a w

Re: [Avocado-devel] RFC: multi-stream test (previously multi-test) [v3]

2016-04-19 Thread Cleber Rosa
es=False)



How would a user specify where a given stream is going to be run?


Executing the complex example would become:

 avocado run MultiNetperf

You can see that the test allows running several NetPerf tests
simultaneously, either locally or distributed across multiple machines
(or combinations), just by changing parameters. Additionally, by adding
features to the nested tests, one can use different NetPerf commands, or
add other tests to be executed together.

The results could look like this:


 $ tree $RESULTDIR
   └── test-results
       └── MultiNetperf
           ├── job.log
           ...
           ├── 1
           │   └── job.log
           ...
           └── 2
               └── job.log
               ...



The multiple `job.log` files here make things confusing... do we have a
single job that ran a single test?



Where the MultiNetperf/job.log contains the combined logs of the "master"
test, all the "nested" tests and the sync server.

Directories 1 and 2 contain the results of the created (possibly even named)
streams. I think they should be in the form of a standard avocado Job to keep
the well known structure.


To keep the Avocado Job structure, they'd either have to be Avocado
Jobs, or we'd have to fake them...  Then, all of a sudden, we have things
that look like jobs, but are not jobs.  How would users of the Job API
react when they find out that their custom jobs have a single `job.log`
and users of multi-stream tests have multiple `job.log`s?


I'd not trade the familiarity of the job log format for the structure of 
the architecture we've been struggling to define.


My final suggestion: define all the core concepts and let us know how 
they all fit. In text form.  Then, when we get to code examples, they 
should all be obvious.  Refrain from implementation details at this point.


--
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]

___
Avocado-devel mailing list
Avocado-devel@redhat.com
https://www.redhat.com/mailman/listinfo/avocado-devel


Re: [Avocado-devel] Maintainers of autotest/avocado projects

2016-04-20 Thread Cleber Rosa



On 04/19/2016 04:00 AM, Lukáš Doktor wrote:

Hello guys,

I sent the update PRs, hopefully it addresses all claims from this
thread. For `tp-qemu` I additionally picked the most active people based
on commits from the last year.

https://github.com/avocado-framework/avocado-vt/pull/461
https://github.com/autotest/tp-qemu/pull/591
https://github.com/autotest/tp-libvirt/pull/759



Looks like only the Avocado-VT PR is still pending, but it seems ready to
be merged.


Thanks for taking on this laborious but important task, Lukáš!

- Cleber.


I'll wait for the majority of people on GH + acks (or nacks) of all new
maintainers and then merge it.

Regards,
Lukáš

PS: Feel free to send amendments every time the situation changes.


Dne 13.4.2016 v 16:02 Lukáš Doktor napsal(a):

Dear Autotest/Avocado maintainers,

I noticed some outdated information in the `MAINTAINERS` files and I'd
like to ask you if you are still interested in being an official
contact, or if you want to nominate someone instead of you, or simply
resign (for any reason).

Feel free to send pull request, or just comment on this email, I can
update it accordingly.

Also note that "virt-test" is officially dead. I'm adding it here as I
think it'd be useful to transfer the updated MAINTAINERS file to
`avocado-framework/avocado-vt`.


tp-qemu
===

Pull request maintenance - QEMU subtests


M: Jiri Zupka 
M: Lukas Doktor 
M: Yiqiao Pu 
M: Feng Yang 

Pull request maintenance - openvswitch subtests


M: Jiri Zupka 


tp-libvirt
==

Pull request maintenance - Libvirt subtests
---

M: Christopher Evich 
M: Yu Mingfei 
M: Yang Dongsheng 
M: Li Yang 


Pull request maintenance - LVSB subtests
---

M: Christopher Evich 


Pull request maintenance - Libguestfs
-

M: Yu Mingfei 


Pull request maintenance - v2v subtests
---

M: Alex Jia 


avocado-framework/avocado-vt (virt-test)


Pull request maintenance - QEMU subtests


M: Lucas Meneghel Rodrigues 
M: Cleber Rosa 
M: Jiri Zupka 
M: Lukas Doktor 
M: Yiqiao Pu 
M: Feng Yang 


Pull request maintenance - Libvirt subtests
---

M: Christopher Evich 
M: Yu Mingfei 
M: Yang Dongsheng 
M: Li Yang 



Pull request maintenance - LVSB subtests
---

M: Christopher Evich 


Pull request maintenance - Libguestfs
-

M: Yu Mingfei 


Pull request maintenance - v2v subtests
---

M: Alex Jia 


Pull request maintenance - openvswitch subtests


M: Jiri Zupka 


Sincerely yours,
Lukáš Doktor




--
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]

___
Avocado-devel mailing list
Avocado-devel@redhat.com
https://www.redhat.com/mailman/listinfo/avocado-devel


Re: [Avocado-devel] RFC: multi-stream test (previously multi-test) [v3]

2016-04-20 Thread Cleber Rosa



On 04/20/2016 03:02 PM, Lukáš Doktor wrote:

Dne 19.4.2016 v 22:18 Cleber Rosa napsal(a):



On 04/15/2016 03:05 AM, Lukáš Doktor wrote:

Hello again,

There were a couple of changes and the new Job API RFC, which might sound
similar to this RFC, but it covers different parts. Let's update the
multi-test RFC and fix the terminology, which might have been a bit
misleading.

Changes:

  v2: Rewritten from scratch
  v2: Added examples for the demonstration to avoid confusion
  v2: Removed the mht format (which was there to demonstrate manual
  execution)
  v2: Added 2 solutions for multi-tests
  v2: Described ways to support synchronization
  v3: Renamed to multi-stream as it befits the purpose
  v3: Improved introduction
  v3: Workers are renamed to streams
  v3: Added example which uses library, instead of new test
  v3: Multi-test renamed to nested tests
  v3: Added section regarding Job API RFC
  v3: Better description of the Synchronization section
  v3: Improved conclusion
  v3: Removed the "Internal API" section (it was a transition between
  no support and "nested test API", not a "real" solution)
  v3: Using per-test granularity in nested tests (requires plugins
  refactor from Job API, but allows greater flexibility)


The problem
===

Allow tests to have some of their blocks of code run in separate stream(s).
We'll discuss the range of "block of code" further in the text.



I believe it's also important to define what "stream" means.  The reason
is that it's used both as an abstraction, and as a more concrete
component in the code examples that follow.


OK, I'll add this to v4


One example could be a user, who wants to run netperf on 2 machines,
which requires following manual steps:


  machine1: netserver -D
  machine1: # Wait till netserver is initialized
  machine2: netperf -H $machine1 -l 60
  machine2: # Wait till it finishes and report the results
  machine1: # stop the netserver and report possible failures

the test would have to contain the code for both machine1 and machine2,
and it executes them in two separate streams, which might or might not be
executed on the same machine.



I can understand what you mean here just fine, but it's rather confusing
to say "machine1 and machine2" and at the same time "might or might not be
executed on the same machine".

This brings us back to the stream concept.  I see the streams as the
running, isolated, execution of "code blocks".  This execution may be on
the same machine or not.

With those statements in mind, I'd ask you to give your formal
definition and vision of the stream concept.


I hope we share the same view, I'll try to put it on paper (keyboard)
while writing the v4.


You can see that each stream is valid even without the other, so an
additional requirement would be to allow easy sharing of those blocks of
code among other tests. Splitting the problem in two could also
sometimes help in analyzing the failures.



Here you say that the streams are isolated from each other.  This matches my
understanding of streams as "running, isolated execution of code blocks".

But "help in analyzing failures" should not be a core part or reason for
this architecture.  It can be a bonus point.  Still, let's try to focus
on the very core components of the architecture and drop the discussion
about the less important aspects.


Yep, I'm sorry for confusion, I meant it as another possible benefit,
but not a requirement.


Some other examples might be:

1. A simple stress routine being executed in parallel (the same or
different hosts)
2. Several code blocks being combined into a complex scenario(s)
3. Running the same test along with stress test in background

For demonstration purposes this RFC uses a very simple example fitting
into category (1). It downloads the main page from the "example.org"
location using "wget" (almost) concurrently from several machines.


Standard python libraries
-

One can run pieces of python code directly using python's
multiprocessing library, without any need for the avocado-framework
support (a minimal sketch of that approach follows the list below). But
there are quite a lot of cons:

+ no need for framework API
- lots of boilerplate code in each test
- each solution would be unique and therefore the logs would be hard to analyze
- no decent way of sharing the code with other tests
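
A minimal, local-only sketch of that "standard libraries" approach, just
to make the amount of boilerplate visible (logging, failure handling and
result collection are all left out, which is exactly the point)::

    import multiprocessing
    import subprocess

    def fetch(index):
        # each "stream" is just a process running wget; the exit code is
        # the only result we collect
        return subprocess.call(['wget', '-q', '-O', '/dev/null',
                                'http://example.org'])

    if __name__ == '__main__':
        pool = multiprocessing.Pool(processes=4)
        results = pool.map(fetch, range(4))
        print('failures: %d' % sum(1 for ret in results if ret != 0))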



IMHO you can drop the reasons on why *not* to use lower level or just
different code.  If, during research we came to find some other external
project/framework/library, we should just have used it and documented
it.  Since this is not the case, let's just avoid getting distracted on
this RFC.


Yep, as I only got a response from you, I wanted to keep the variants
here. I'll remove them in the next version.

[Avocado-devel] Pre-release test plan results

2016-04-25 Thread Cleber Rosa

Test Plan: Release Test Plan
Run by 'cleber' at 2016-04-25T08:22:50.974554
PASS: 'Avocado source is sound':
PASS: 'Avocado RPM build':
PASS: 'Avocado RPM install':
PASS: 'Avocado Test Run on RPM based installation':
PASS: 'Avocado Test Run on Virtual Machine':
PASS: 'Avocado Test Run on Remote Machine':
PASS: 'Avocado Remote Machine HTML report':
PASS: 'Avocado Server Source Checkout and Unittests':
PASS: 'Avocado Server Run':
PASS: 'Avocado Server Functional Test':
PASS: 'Avocado Virt and VT Source Checkout':
PASS: 'Avocado Virt Bootstrap':
PASS: 'Avocado Virt Boot Test Run and HTML report':
PASS: 'Avocado Virt - Assignment of values from the cmdline':
PASS: 'Avocado Virt - Migration test':
PASS: 'Avocado VT - Bootstrap':
PASS: 'Avocado VT - List tests':
PASS: 'Avocado VT - Run test':
PASS: 'Avocado HTML report sysinfo':
PASS: 'Avocado HTML report links':
PASS: 'Paginator':

Repo info:

* avocado: 00e09e3958247ab26e532314e9448baa9fbc075d
* avocado-vt: 948d91e40c318aea695cc0921cbda47def0cc024
* avocado-virt: bc316aa4504353c48db70bd9a2ca0a4dbf80e36e
* avocado-virt-tests: 23e9f6ace369b09b6e53007e5dc759c18576914a
* avocado-server: 1491de32cb4e0ad4c0e83e57d1139af7f5eafccf

--
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]

___
Avocado-devel mailing list
Avocado-devel@redhat.com
https://www.redhat.com/mailman/listinfo/avocado-devel


[Avocado-devel] Heads up: changes in Avocado (impacts Avocado-VT)

2016-04-26 Thread Cleber Rosa

Hi all,

Some pull requests just merged to Avocado have the potential to impact 
Avocado-VT users.


If you have an Avocado-VT source based install, and you're tracking the
master branch, you will need to update Avocado to the latest master
branch also.


We're approaching the 35.0 release (new numbering scheme), which is
going to set the stage for the 36.0lts (long term stability) release.


Until then, please be aware of the not-ideal compatibility between
various source checkouts of Avocado-VT and Avocado.


Thanks!

--
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]

___
Avocado-devel mailing list
Avocado-devel@redhat.com
https://www.redhat.com/mailman/listinfo/avocado-devel


[Avocado-devel] Avocado release 35.0: Mr. Robot

2016-04-27 Thread Cleber Rosa

This is another proud announcement: Avocado release 35.0, aka "Mr
Robot", is now out!

This release, while a "regular" release, will also serve as a beta for
our first "long term stability" (aka "lts") release.  That means that
the next release will be version "36.0lts" and will receive only bug
fixes and minor improvements.  So, expect release 35.0 to be pretty
much like "36.0lts" feature-wise.  New features will make it into the
"37.0" release, to be released after "36.0lts".  Read more about the
details in the specific RFC[9].

The main changes in Avocado for this release are:

* A big round of fixes on machine readable output formats, such
  as xunit (aka JUnit) and JSON.  The xunit output, for instance,
  now includes tests with schema checking.  This should make sure
  interoperability is even better on this release.

* Much more robust handling of test references, aka test URLs.
  Avocado now properly handles very long test references, and also
  test references with non-ascii characters.

* The avocado command line application now provides richer exit
  status[1].  If your application or custom script depends on the
  avocado exit status code, you should be fine as avocado still
  returns zero for success and non-zero for errors.  On error
  conditions, though, the exit status codes are richer and made of
  combinable (ORable) codes.  This way it's possible to detect that,
  say, both a test failure and a job timeout occurred in a single
  execution.

* [SECURITY RELATED] The remote execution of tests (including in
  Virtual Machines) now allows for proper checks of host keys[2].
  Without these checks, avocado is susceptible to a man-in-the-middle
  attack, by connecting and sending credentials to the wrong machine.
  This check is *disabled* by default, because users depend on this
  behavior when using machines without any prior knowledge such as
  cloud based virtual machines.  Also, a bug in the underlying SSH
  library may prevent existing keys from being used if these are in ECDSA
  format[3].  There's an automated check in place to check for the
  resolution of the third party library bug.  Expect this feature to
  be *enabled* by default in the upcoming releases.

* Pre/Post Job hooks.  Avocado now defines a proper interface for
  extension/plugin writers to execute actions while a Job is running.
  Both Pre and Post hooks have access to the Job state (actually, the
  complete Job instance).  Pre job hooks are called before tests are
  run, and post job hooks are called at the very end of the job (after
  tests would have usually finished executing).

* Pre/Post job scripts[4].  As a feature built on top of the Pre/Post job
  hooks described earlier, it's now possible to put executable scripts
  in a configurable location, such as `/etc/avocado/scripts/job/pre.d`
  and have them called by Avocado before the execution of tests.  The
  executed scripts will receive some information about the job via
  environment variables[5] (a minimal illustrative script is sketched
  right after this list).

* The implementation of proper Test-IDs[6] in the test result
  directory.
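
As a minimal illustration of the pre/post job scripts feature mentioned
above (this script is not shipped with Avocado, and the environment
variable prefix used here is an assumption; check the reference guide[5]
for the exact names), a pre.d script could simply dump what the job
passes in:

    #!/usr/bin/env python
    # illustrative pre-job script: print the job related environment
    # variables exported to pre/post scripts
    import os

    for key, value in sorted(os.environ.items()):
        if key.startswith('AVOCADO_'):   # assumed prefix, see the docs
            print('%s=%s' % (key, value))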

Also, while not everything is (yet) translated into code, this release
saw various and major RFCs, which are definitely shaping the future of
Avocado.  Among those:

* Introduce proper test IDs[6]

* Pre/Post *test* hooks[7]

* Multi-stream tests[8]

* Avocado maintainability and integration with avocado-vt[9]

* Improvements to job status (completely implemented)[10]

For a complete list of changes please check the Avocado changelog[11].

For Avocado-VT, please check the full Avocado-VT changelog[12].

Install avocado
---

Instructions are available in our documentation on how to install
either with packages or from source[13].

Updated RPM packages are available in the project repos for
Fedora 22, Fedora 23, EPEL 6 and EPEL 7.

Packages


As a heads up, we still package the latest version of the various
Avocado sub projects, such as the very popular Avocado-VT and the
pretty much experimental Avocado-Virt and Avocado-Server projects.

For the upcoming releases, there will be changes in our package
offers, with a greater focus on long term stability packages for
Avocado.  Other packages may still be offered as a convenience, or
may see a change of ownership.  All in the best interest of our users.
If you have any concerns or questions, please let us know.

Happy hacking and testing!

---

[1] 
http://avocado-framework.readthedocs.org/en/35.0/ResultFormats.html#exit-codes
[2] 
https://github.com/avocado-framework/avocado/blob/35.0/etc/avocado/avocado.conf#L41
[3] 
https://github.com/avocado-framework/avocado/blob/35.0/selftests/functional/test_thirdparty_bugs.py#L17
[4] 
http://avocado-framework.readthedocs.org/en/35.0/ReferenceGuide.html#job-pre-and-post-scripts
[5] 
http://avocado-framework.readthedocs.org/en/35.0/ReferenceGuide.html#script-execution-environment

[6] https://www.redhat.com/archives/avocado-devel/2016-March/msg00024.html
[7] https://www.redhat.com/archive

Re: [Avocado-devel] RFC: multi-stream test (previously multi-test) [v4]

2016-04-28 Thread Cleber Rosa
oach:

job-2016-04-16T.../
├── id
├── job.log
└── test-results
└── 1-MultiNetperf
├── debug.log
├── whiteboard
├── 1-Netperf.bigbuf
│   ├── debug.log
│   └── whiteboard
├── 2-Netperf.smallbuf
│   ├── debug.log
│   └── whiteboard
└── 3-Netperf.smallbuf
├── debug.log
└── whiteboard

The difference is that the queue-like approach bundles the results
per worker, which could be useful when using multiple machines.

The single-task approach makes it easier to follow how the execution
went, but one needs to check the log to see on which machine the task
was executed.




The logs can indeed be useful.  And the choice about single vs. queue
wouldn't really depend on this... this is, quite obviously, the *result*
of that choice.



Job API RFC
===

The recently introduced Job API RFC covers a very similar topic to "nested
tests", but it's not the same. The Job API enables users to modify
the job execution, or eventually even write a runner which would suit them
for running groups of tests. By contrast, this RFC covers a way to combine
code-blocks/tests and reuse them inside a single test. In a hackish way,
they can supplement each other, but the purpose is different.



"nested", without a previous definition, really confuses me.  Other than 
that, ACK.



One of the most obvious differences is that a failed "nested" test can
be intentional (e.g. reusing the NetPerf test to check whether unreachable
machines can talk to each other), while in the Job API it's always a failure.



It may just be me, but I fail to see how this is one obvious difference.


I hope you see the pattern. They are similar, but on a different layer.
Internally, though, they can share some pieces, like executing the
individual tests concurrently with different params/plugins
(locally/remotely). All the needed plugin modifications would also be
useful for both of these RFCs.



The layers involved, and the proposed usage, should be the obvious 
differences.  If they're not cleanly seen, we're doing something wrong.



Some examples:

User1 wants to run the "compile_kernel" test on a machine, followed by
"install_compiled_kernel passtest failtest warntest" on "machine1
machine2". They depend on the status of the previous test, but they
don't create a scenario. So the user should use the Job API (or execute 3
jobs manually).

User2 wants to create a migration test, which starts migration from
machine1 and receives the migration on machine2. It requires cooperation,
and together it creates one complex use case, so the user should use a
multi-stream test.




OK.


Conclusion
==

This RFC proposes to add a simple API to allow triggering
avocado.Test-like instances on a local or remote machine. The main point
is that it should allow very simple code reuse and modular test development.
I believe it'll be easier than having users handle the
multiprocessing library, which might allow similar features, but with a
lot of boilerplate code and even more code to handle possible exceptions.

This concept also plays nicely with the Job API RFC: it could utilize
most of the tasks needed for it, and together they should allow amazing
flexibility with a known and similar structure (therefore easy to learn).



Thanks for the much cleaner v4!  I see that consensus and a common view 
is now approaching.


--
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]

___
Avocado-devel mailing list
Avocado-devel@redhat.com
https://www.redhat.com/mailman/listinfo/avocado-devel


Re: [Avocado-devel] RFC: multi-stream test (previously multi-test) [v4]

2016-05-02 Thread Cleber Rosa
nice
detail that will help with debugging and make our lives easier
when implementing the feature, but again, purely an
implementation detail.

The test writer should have strict control of what gets run in a
stream, with a constrained API where the concepts are very clear.
We should not, under any circumstances, induce users to think of
streams as something that runs tests. To me this is utterly
important.

For example, if we allow streams to run tests, or Test
References, then running `avocado run *cpuid*` and
`stream.run("*cpuid*")` will look similar at first, but with
several subtle differences in behavior, confusing users.

Users will inevitably ask questions about these differences and
we'll end up having to revisit some concepts and refine the
documentation, a result of breaking the abstraction.

A few examples of these differences which might not be
immediately clear:

   * No pre/post hooks for jobs or tests get run inside a stream.
   * No per-test sysinfo collection inside a stream.
   * No per-job sysinfo collection inside a stream.
   * Per-stream, there's basically nothing that can be configured
 about the environment other than *where* it runs.
 Everything is inherited from the actual test. Streams should
 have access to the exact same APIs that *tests* have.
   * If users see streams as something that runs tests, it's
 inevitable that they will start asking for knobs
 to fine-tune the runtime environment:
 * Should there be a timeout per stream?
 * Hmm, at least support enabling/disabling gdb or wrappers
   in a stream? No? Why not!?
 * Hmm, maybe allow multiplex="file" in stream.run()?
 * Why can't I disable or enable plugins per-stream? Or at
   least configure them?


Basically just running a RAW test, without any of the features the default
avocado runner provides. I'm fine with that.

I slightly disagree that there is no way of modifying the environment, as the
resolver resolves into a template, which contains all the params given to
the test. So one could modify basically everything regarding the test.
The only things one can't configure, nor use, are the job features (like
the pre/post hooks, plugins, ...)


And here are some other questions, which seem logical at first:

   * Hey, you know what would be awesome? Let me upload the
 test results from a stream as if it was a job! Maybe a
 tool to convert stream test results to job results? Or a
 plugin that handles them!
   * Even more awesome: a feature to replay a stream!
   * And since I can run multiple tests in a stream, why can't I
 run a job there? It's a logical next step!

The simple fact the questions above are being asked is a sign the
abstraction is broken: we shouldn't have to revisit previous
concepts to clarify the behavior when something is being added in
a different layer.

Am I making sense?


IMO you're describing a different situation. We should have the Job API,
which should suit users who need the features you described, so they
don't need to "work around" it using this API.

Other users might prefer multiprocessing, fabric or autotest's
remote_commander, to execute just plain simple methods/scripts on
other machines.

But if you need to run something complex, you need a runner, which gives
you the neat features to avoid the boilerplate code used to produce
outputs in case of failure, or other features (like streams, datadirs, ...).

Therefore I believe allowing tests to trigger other tests in the background
would be very useful, and the best way of solving this I can imagine. As
a test writer I would not want to learn yet another way of expressing
myself when splitting the task into several streams. I want the same
development, I expect the same results and yes, I don't expect the full
job. Having just a raw test without any extra job features is sufficient
and well understandable.

Btw the only controversial thing I can imagine is that some (me
included) would have nothing against offloading multi-stream tests into
a stream (so basically nesting). And yes, I expect it to work and create
yet another directory inside the stream's results (e.g. to run
multi-host netperf as a stresser while running multi-host migration: I
could either reference each party - netserver, netclient, migrate_from,
migrate_to - or I could just say multi_netperf, multi_migrate and expect
the netserver+netclient streams to be created inside the multi_netperf
results, and the same for migrate). Conceptually I have no problem with
that, and as a test writer I'd use the second, because putting together
building blocks is IMO the way to go.



I can only say that, at this time, it's very clear to me what's 
nested-test support and what's multi-stream test support.  Let's call 
them by different names, because they're indeed different, and decide on 
one.


- Cleber.


Lukáš


Thanks.
   - Ademar






--
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]

___
Avocado-devel mailing list
Avocado-devel@redhat.com
https://www.redhat.com/mailman/listinfo/avocado-devel


[Avocado-devel] RFC: Plugin Management API

2016-05-09 Thread Cleber Rosa
  self.runner = plugin_mgmt.new('avocado.plugins.runner',
  6 default_runner)

Lines 5 and 6 refer to a proposed method called ``new()`` of an also
proposed module named ``plugin_mgmt``.  Names are quite controversial,
and not really the goal at this point, so please bear with the naming
choices made so far, and feel free to suggest better ones.

It should be clear that the goal of the ``new()`` method is to make
an extensible subsystem implementation ready to be used.  Its
implementation, directly or indirectly, may involve locating the
Python file that contains the associated class, loading that file into
the current interpreter, creating a class instance, and finally,
returning it.

This maps well to the driver pattern, where little or no code is necessary
around the plugin class instance itself.
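
As an illustration only (not part of this RFC), the `stevedore`_ library
referenced at the end of this document already provides this kind of
"locate, load and instantiate" behavior.  A possible backing for the
proposed ``new()`` could look roughly like this sketch::

    from stevedore import driver

    def new(namespace, name):
        # locate the entry point registered under `namespace` with `name`,
        # load it and return an instance of the plugin class
        manager = driver.DriverManager(namespace=namespace,
                                       name=name,
                                       invoke_on_load=True)
        return manager.driver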

For usage patterns that map to the extensions definition given before, the
"dispatcher" code may have higher level and additional methods::

  01 class ResultFormatterDispatcher:
  02
  03     NAMESPACE = "avocado.plugin.result.format"
  04
  05     def add(self, name):
  06         "Adds a plugin to the list of active result format writers"
  07         self.active_set.add(plugin_mgmt.new(self.NAMESPACE, name))
  08
  09     def remove(self, name):
  10         "Removes a plugin from the list of active result format writers"
  11         self.active_set.remove_by_name(name)
  12
  13     def set_active(self, names):
  14         "Adds or removes plugins so that only given plugin names are active"
  15         ...

Which could be used as::

  class Job(object):
      def __init__(self):
          ...
          self.result_formats = settings.get_value('result', 'formats',
                                                   default=['json', 'xunit'])
          self.result_dispatcher = plugin_mgmt.ResultFormatterDispatcher()
          ...

      def run(self):
          self.result_dispatcher.set_active(self.result_formats)
          ...
          for test in tests:
              self.runner.run_test(test)
          ...


Activation Scope


It was mentioned during the definition of the different plugin patterns
that only one driver plugin would be active at a given time.  This is a
simplification, one that doesn't take into account any kind of scopes.

Avocado's code should implement contained scopes and add/remove
plugin instances to these scopes only.  For instance, on a single
job, there may be multiple parallel test runners.  The activation
scope for a test runner driver plugin is, of course, individual to
each runner.

Layered APIs


It may be useful to provide more focused APIs as, supposedly, thin
layers around the features provided by the Plugin Management API.  One
example may be the activation and deactivation of test result
formatters.  Example::

class Job(object):
    def add_runner(self, plugin_name):
        self.runners.append(plugin_mgmt.new('avocado.plugins.runner',
                                            plugin_name))

This simplistic but quite real example has the goal of allowing users
of the ``Job`` class to simply call::

  parallel_job = Job()
  for count in xrange(multiprocessing.cpu_count()):
      parallel_job.add_runner('local')
  ...
  parallel_job.run_parallel(test_list)

Conclusion
==

Hopefully this text helps to pin-point the aspects of the Avocado
architecture that, even though they may need adjustments, can contribute to
the implementation of the ultimate goal of providing a "Job API".

.. _duck typing: https://en.wikipedia.org/wiki/Duck_typing
.. _stevedore: https://pypi.python.org/pypi/stevedore
.. _abstract base classes: https://docs.python.org/2/library/abc.html



--
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]

___
Avocado-devel mailing list
Avocado-devel@redhat.com
https://www.redhat.com/mailman/listinfo/avocado-devel


Re: [Avocado-devel] Test counter is not working.

2016-05-11 Thread Cleber Rosa

On 05/11/2016 12:15 AM, Julio Faracco wrote:

Hi Wei,

I ran a git-bisect here...
This commit introduces the error to me.

commit 5973e898f0df1179bc1bff425d06f93fdcc44c31
Author: Lukáš Doktor 
Date:   Tue Apr 26 06:37:52 2016 +0200

I was reading the documentation and the commit info and it makes sense
since it is a big change to test IDs. I wonder if this behaviour is
normal, if it is a bug, or if it will be changed in the future.

Btw, I can run a stable version here (0.34.0).

I haven't changed anything inside the avocado framework (no stray changes
in my master branch, for example... nothing... clean...).



Julio,

This was fixed on avocado-vt commit 
00224e2849b1f7b02c3acceffd6bc7d7633542a5.


Can you try that?

Thanks,
- Cleber.


--
Julio Cesar Faracco


2016-05-10 23:35 GMT-03:00 Wei, Jiangang :

Hi,

On Tue, 2016-05-10 at 22:37 -0300, Julio Faracco wrote:

Hi guys,

I often follow the discussions and planning tool of Avocado{-VT} by
trello and mailing list. But not so hard as I wish
However, is there some new feature being developed?

I pull recently the last commits and I'm having a strange issue.
The test counter does not increase.

What's the id of the last commits?
I haven't reproduced it with 101bf36ab1c9c7258b15a66ab60abc99db0089cd.



$ ./avocado run type_specific.io-github-autotest-qemu.qemu_img
JOB ID : 50719fa60bd6b48f1629d1296407b8f755bc9fb0
JOB LOG: 
/home/jfaracco/avocado/job-results/job-2016-05-10T16.47-50719fa/job.log
TESTS  : 21
 (0/21) type_specific.io-github-autotest-qemu.qemu_img.check: PASS
 (0/21) 
type_specific.io-github-autotest-qemu.qemu_img.create.non-preallocated.cluster_size_default:
PASS
 (0/21) 
type_specific.io-github-autotest-qemu.qemu_img.create.non-preallocated.cluster_size.cluster_512:
PASS
 (0/21) 
type_specific.io-github-autotest-qemu.qemu_img.create.non-preallocated.cluster_size.cluster_1024:
PASS
 (0/21) 
type_specific.io-github-autotest-qemu.qemu_img.create.non-preallocated.cluster_size.cluster_4096:
PASS
 (0/21) 
type_specific.io-github-autotest-qemu.qemu_img.create.non-preallocated.cluster_size.cluster_1M:
PASS
 (0/21) 
type_specific.io-github-autotest-qemu.qemu_img.create.non-preallocated.cluster_size.cluster_2M:
PASS
 (0/21) 
type_specific.io-github-autotest-qemu.qemu_img.create.preallocated.cluster_size_default:
PASS
 (0/21) 
type_specific.io-github-autotest-qemu.qemu_img.create.preallocated.cluster_size.cluster_512:
PASS
 (0/21) 
type_specific.io-github-autotest-qemu.qemu_img.create.preallocated.cluster_size.cluster_1024:
PASS
 (0/21) 
type_specific.io-github-autotest-qemu.qemu_img.create.preallocated.cluster_size.cluster_4096:
PASS
 (0/21) 
type_specific.io-github-autotest-qemu.qemu_img.create.preallocated.cluster_size.cluster_1M:
PASS
 (0/21) 
type_specific.io-github-autotest-qemu.qemu_img.create.preallocated.cluster_size.cluster_2M:
PASS
 (0/21) 
type_specific.io-github-autotest-qemu.qemu_img.convert.to_qcow2.cluster_size_default:
ERROR
 (0/21) 
type_specific.io-github-autotest-qemu.qemu_img.convert.to_qcow2.cluster_size_2048:
ERROR
 (0/21) type_specific.io-github-autotest-qemu.qemu_img.convert.to_raw: ERROR
 (0/21) type_specific.io-github-autotest-qemu.qemu_img.convert.to_qed: ERROR
 (0/21) type_specific.io-github-autotest-qemu.qemu_img.snapshot: PASS
 (0/21) type_specific.io-github-autotest-qemu.qemu_img.info: PASS
 (0/21) type_specific.io-github-autotest-qemu.qemu_img.rebase: ERROR
 (0/21) type_specific.io-github-autotest-qemu.qemu_img.commit: ERROR
RESULTS: PASS 15 | ERROR 6 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0
JOB HTML   : 
/home/jfaracco/avocado/job-results/job-2016-05-10T16.47-50719fa/html/results.html
TIME   : 185.66 s

The counter is always 0.

Am I missing something?

--
Julio Cesar Faracco

___
Avocado-devel mailing list
Avocado-devel@redhat.com
https://www.redhat.com/mailman/listinfo/avocado-devel








___
Avocado-devel mailing list
Avocado-devel@redhat.com
https://www.redhat.com/mailman/listinfo/avocado-devel



--
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]

___
Avocado-devel mailing list
Avocado-devel@redhat.com
https://www.redhat.com/mailman/listinfo/avocado-devel


Re: [Avocado-devel] Can't execute avocado run --vt-type uptime as VT guide.

2016-05-16 Thread Cleber Rosa



On 05/16/2016 09:18 AM, Wei WA Li wrote:


Hi all,

I am following the guide as below.
http://avocado-vt.readthedocs.io/en/latest/WritingTests/WritingSimpleTests.html


I have created the uptime.py file in a local dir, but unlike what the guide
says, I can't list or run uptime; I think I did not add this python file to
the execution list. Did I miss some steps? I can't find any information
about this in the guide.

[root@zs95kv2 tests]# pwd
/avocado/Code/tp-qemu/generic/tests
[root@zs95kv2 tests]# ll uptime.py
-rw-r--r--. 1 root root 631 May 16 08:05 uptime.py
[root@zs95kv2 tests]# /avocado/avocado/scripts/avocado run --vt-type uptime


The documentation is broken. Can you try without `--vt-type` and let us 
know how it goes?


Thanks,
-Cleber.


Test discovery plugin  failed: Virt Backend uptime is not currently supported by
avocado-vt. Check for typos and the list of supported backends

No urls provided nor any arguments produced runable tests. Please double
check the executed command.
[root@zs95kv2 tests]# /avocado/avocado/scripts/avocado list uptime
Unable to discover url(s) 'uptime' with loader plugins(s) 'file', 'vt',
'external', try running 'avocado list -V uptime' to see the details.
[root@zs95kv2 tests]#


Best regards,
-
Li, Wei (李 伟)
zKVM Solution Test
IBM China Systems & Technology Lab, Beijing
E-Mail: li...@cn.ibm.com
Tel: 86-10-82450631  Notes: Wei WA Li/China/IBM
Address: 3BW298, Ring Bldg. No.28 Building, ZhongGuanCun Software Park,No.8
 DongBeiWang West Road, ShangDi, Haidian District, Beijing,
P.R.China



___
Avocado-devel mailing list
Avocado-devel@redhat.com
https://www.redhat.com/mailman/listinfo/avocado-devel



--
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]

___
Avocado-devel mailing list
Avocado-devel@redhat.com
https://www.redhat.com/mailman/listinfo/avocado-devel


[Avocado-devel] Avocado 36.0lts Release

2016-05-17 Thread Cleber Rosa

This is a very proud announcement: Avocado release 36.0lts, our very
first "Long Term Stability" release, is now out!

LTS in a nutshell
-

This release marks the beginning of a special cycle that will last for
18 months.  Avocado usage in production environments should favor the
use of this LTS release, instead of non-LTS releases.

Bug fixes will be provided on the "36lts"[1] branch until, at least,
September 2017.  Minor releases, such as "36.1lts", "36.2lts" and so
on, will be announced from time to time, incorporating those stability
related improvements.

Keep in mind that no new feature will be added.  For more information,
please read the "Avocado Long Term Stability" RFC[2].

Changes from 35.0:
--

As mentioned in the release notes for the previous release (35.0),
only bug fixes and other stability related changes would be added to
what is now 36.0lts.  For the complete list of changes, please check
the GIT repo change log[3].

Install avocado
---

The Avocado LTS packages are available on a separate repository, named
"avocado-lts".  These repositories are available for Fedora 22, Fedora
23, EPEL 6 and EPEL 7.

Updated ".repo" files are available on the usual locations:

 * https://repos-avocadoproject.rhcloud.com/static/avocado-fedora.repo
 * https://repos-avocadoproject.rhcloud.com/static/avocado-el.repo

Those repo files now contain definitions for both the "LTS" and
regular repositories.  Users interested in the LTS packages, should
disable the regular repositories and enable the "avocado-lts" repo.

Instructions are available in our documentation on how to install
either with packages or from source[4].

Happy hacking and testing!

---

[1] https://github.com/avocado-framework/avocado/tree/36lts
[2] https://www.redhat.com/archives/avocado-devel/2016-April/msg00038.html
[3] https://github.com/avocado-framework/avocado/compare/35.0...36.0lts
[4] 
http://avocado-framework.readthedocs.io/en/36lts/GetStartedGuide.html#installing-avocado



--
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]

___
Avocado-devel mailing list
Avocado-devel@redhat.com
https://www.redhat.com/mailman/listinfo/avocado-devel


Re: [Avocado-devel] Can't execute avocado run --vt-type uptime as VT guide.

2016-05-18 Thread Cleber Rosa



On 05/16/2016 11:17 AM, Wei WA Li wrote:

Hi Cleber,

What I did is as below, did I miss something?

1) mkdir Code


The key point here is that Avocado-VT will use the test provider repos 
located, by default, at:


 $AVOCADO_DATA/avocado-vt/test-providers.d/downloads

This is shown in the docs as the following step:

 $ cd 
$AVOCADO_DATA/avocado-vt/test-providers.d/downloads/io-github-autotest-qemu


Which is usually:

 $ cd 
~/avocado/data/avocado-vt/test-providers.d/downloads/io-github-autotest-qemu


Please try adding your test to the repo on this location instead, or 
set/link your own repo at this filesystem location.


BTW, these doc fixes were "just" committed:

https://github.com/avocado-framework/avocado-vt/pull/514/commits/1cf640646ad777501d6bc347a177db17de444d3d

Thanks,
- Cleber.


2)git clone https://github.com/autotest/tp-qemu.git
3) touch generic/tests/uptime.py
  git add generic/tests/uptime.py
4) vi generic/tests/uptime.py

[root@zs95kv2 Code]# cat /avocado/Code/tp-qemu/generic/tests/uptime.py
import logging

def run(test, params, env):

"""
Uptime test for virt guests:

1) Boot up a VM.
2) Establish a remote connection to it.
3) Run the 'uptime' command and log its results.

:param test: QEMU test object.
:param params: Dictionary with the test parameters.
:param env: Dictionary with test environment.
"""

vm = env.get_vm(params["main_vm"])
vm.verify_alive()
timeout = float(params.get("login_timeout", 240))
session = vm.wait_for_login(timeout=timeout)
uptime = session.cmd("uptime")
logging.info("Guest uptime result is: %s", uptime)
session.close()

[root@zs95kv2 Code]#
5) Since we have no external repo setting, I have not installed inspektor,
I think it is just a source checking tool.
6)
[root@zs95kv2 Code]# /avocado/avocado/scripts/avocado list uptime
Unable to discover url(s) 'uptime' with loader plugins(s) 'file', 'vt',
'external', try running 'avocado list -V uptime' to see the details.
[root@zs95kv2 Code]# /avocado/avocado/scripts/avocado run uptime

Unable to discover url(s) 'uptime' with loader plugins(s) 'file', 'vt',
'external', try running 'avocado list -V uptime' to see the details.
[root@zs95kv2 Code]#





Best regards,
-
Li, Wei (李 伟)
zKVM Solution Test
IBM China Systems & Technology Lab, Beijing
E-Mail: li...@cn.ibm.com
Tel: 86-10-82450631  Notes: Wei WA Li/China/IBM
Address: 3BW298, Ring Bldg. No.28 Building, ZhongGuanCun Software Park,No.8
 DongBeiWang West Road, ShangDi, Haidian District, Beijing,
P.R.China




From:   Cleber Rosa 
To: Wei WA Li/China/IBM@IBMCN, avocado-devel@redhat.com
Date:   2016/05/16 22:04
Subject:Re: [Avocado-devel] Can't execute avocado run --vt-type uptime
as VT guide.





On 05/16/2016 09:18 AM, Wei WA Li wrote:


Hi all,

I am following the guide as below.


http://avocado-vt.readthedocs.io/en/latest/WritingTests/WritingSimpleTests.html




I have created the uptime.py file in a local dir, but as the guide says, I can't
list and run uptime. I think I did not enable this Python file to appear in the
execution list. Did I miss some steps? I can't find any information about this
in the guide.

[root@zs95kv2 tests]# pwd
/avocado/Code/tp-qemu/generic/tests
[root@zs95kv2 tests]# ll uptime.py
-rw-r--r--. 1 root root 631 May 16 08:05 uptime.py
[root@zs95kv2 tests]# /avocado/avocado/scripts/avocado run --vt-type uptime

The documentation is broken. Can you try without `--vt-type` and let us
know how it goes?

Thanks,
-Cleber.


Test discovery plugin  failed: Virt Backend uptime is not currently supported by
avocado-vt. Check for typos and the list of supported backends

No urls provided nor any arguments produced runable tests. Please double
check the executed command.
[root@zs95kv2 tests]# /avocado/avocado/scripts/avocado list uptime
Unable to discover url(s) 'uptime' with loader plugins(s) 'file', 'vt',
'external', try running 'avocado list -V uptime' to see the details.
[root@zs95kv2 tests]#


Best regards,
-
Li, Wei (李 伟)
zKVM Solution Test
IBM China Systems & Technology Lab, Beijing
E-Mail: li...@cn.ibm.com
Tel: 86-10-82450631  Notes: Wei WA Li/China/IBM
Address: 3BW298, Ring Bldg. No.28 Building, ZhongGuanCun Software

Park,No.8

 DongBeiWang West Road, ShangDi, Haidian District, Beijing,
P.R.China



_______
Avocado-devel mailing list
Avocado-devel@redhat.com
https://www.redhat.com/mailman/listinfo/avocado-devel



--
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]




--
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]

___
Avocado-devel mailing list
Avocado-devel@redhat.com
https://www.redhat.com/mailman/listinfo/avocado-devel


Re: [Avocado-devel] RFC: Nested tests (previously multi-stream test) [v5]

2016-05-25 Thread Cleber Rosa
 so I think the `NestedRunner` should append a last line to the
test's log saying `Expected FAILURE` to avoid confusion while looking at
the results.



This special injection, and special handling for that matter, actually 
makes me more confused.



Note2: It might be impossible to pass messages in real-time across
multiple machines, so I think at the end the main job.log should be
copied to `raw_job.log` and the `job.log` should be reordered according
to date-time of the messages. (alternatively we could only add a contrib
script to do that).



Definitely no to another special handling.  Definitely yes to a post-job 
contrib script that can reorder the log lines.
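
Just to make the idea concrete, here is a rough sketch of what such a 
contrib script could do, assuming every job.log entry starts with the 
usual 'YYYY-MM-DD HH:MM:SS,mmm' timestamp (lines without one stay 
attached to the previous entry).  This is purely illustrative, not an 
actual script in the tree:

    import sys

    def reorder(lines):
        """Sort log lines by their leading timestamp, keeping
        continuation lines glued to the entry they belong to."""
        blocks, current, key = [], [], ''
        for line in lines:
            stamp = line[:23]  # 'YYYY-MM-DD HH:MM:SS,mmm'
            timestamped = (len(stamp) == 23 and stamp[4] == '-' and
                           stamp[13] == ':')
            if timestamped:
                if current:
                    blocks.append((key, current))
                current, key = [line], stamp
            else:
                current.append(line)
        if current:
            blocks.append((key, current))
        blocks.sort(key=lambda item: item[0])
        return [line for _, block in blocks for line in block]

    if __name__ == '__main__':
        sys.stdout.writelines(reorder(open(sys.argv[1]).readlines()))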




Conclusion
==

I believe nested tests would help people cover very complex scenarios
by splitting them into pieces, similarly to Lego. It allows easier
per-component development and consistent results that are easy to analyze,
as one can see both the overall picture and the specific pieces, and it
allows fixing bugs in all tests by fixing the single piece (nested test).



It's pretty clear that running other tests from tests is *useful*; 
that's why it's such a hot topic and we've been devoting so much energy 
to discussing possible solutions.  NestedTests is one way to do it, but 
I'm not sure we have enough confidence to make it *the* way to do it. 
The feeling that I have at this point is that maybe we should prototype 
it as utilities to:


 * give Avocado a kickstart on this niche/feature set
 * avoid as much as possible user-written boilerplate code
 * avoid introducing *core* test APIs that would be set in stone

The gotchas that we have identified so far are, IMHO, enough to restrain 
us from forcing this kind of feature into the core test API, which we're, 
in fact, trying to clean up.


With user exposure and feedback, this, a modified version, or a 
completely different solution can evolve into *the* core (and 
supported) way to do it.


--
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]

___
Avocado-devel mailing list
Avocado-devel@redhat.com
https://www.redhat.com/mailman/listinfo/avocado-devel


Re: [Avocado-devel] RFC: Nested tests (previously multi-stream test) [v5]

2016-05-26 Thread Cleber Rosa



On 05/26/2016 11:37 AM, Ademar Reis wrote:

On Thu, May 26, 2016 at 09:15:11AM +0200, Lukáš Doktor wrote:

Dne 25.5.2016 v 21:18 Cleber Rosa napsal(a):



On 05/24/2016 11:53 AM, Lukáš Doktor wrote:

Hello guys,

this version returns to roots and tries to define clearly the
single solution I find teasing for multi-host and other complex
tests.

Changes:

v2: Rewritten from scratch
v2: Added examples for the demonstration to avoid confusion
v2: Removed the mht format (which was there to demonstrate manual execution)
v2: Added 2 solutions for multi-tests
v2: Described ways to support synchronization
v3: Renamed to multi-stream as it befits the purpose
v3: Improved introduction
v3: Workers are renamed to streams
v3: Added example which uses library, instead of new test
v3: Multi-test renamed to nested tests
v3: Added section regarding Job API RFC
v3: Better description of the Synchronization section
v3: Improved conclusion
v3: Removed the "Internal API" section (it was a transition between no
    support and "nested test API", not a "real" solution)
v3: Using per-test granularity in nested tests (requires plugins refactor
    from Job API, but allows greater flexibility)
v4: Removed "Standard python libraries" section (rejected)
v4: Removed "API backed by cmdline" (rejected)
v4: Simplified "Synchronization" section (only describes the purpose)
v4: Refined all sections
v4: Improved the complex example and added comments
v4: Formulated the problem of multiple tasks in one stream
v4: Rejected the idea of bounding it inside MultiTest class inherited from
    avocado.Test, using a library-only approach
v5: Avoid mapping ideas to multi-stream definition and clearly define the
    idea I bear in my head for test building blocks called nested tests.


Motivation
==========

Allow building complex tests out of existing tests, producing a
single result depending on the complex test's requirements.
An important thing is that the complex test might run those tests on
the same, but also on a different machine, allowing simple
development of multi-host tests. Note that the existing tests
should stay (mostly) unchanged and executable as simple scenarios,
or invoked by those complex tests.

Examples of what could be implemented using this feature:

1. Adding background (stress) tasks to existing test producing
   real-world scenarios.
   * cpu stress test + cpu hotplug test
   * memory stress test + migration
   * network+cpu+memory test on host, memory test on guest while running
     migration
   * running several migration tests (of the same and different type)

2. Multi-host tests implemented by splitting them into components
   and leveraging them from the main test.
   * multi-host migration
   * stressing a service from different machines


Nested tests
============

Test
----

A test is a receipt explaining prerequisites, steps to check how
the unit under testing behaves and cleanup after successful or
unsuccessful execution.



You probably meant "recipe" instead of "receipt".  OK, so this is an
abstract definition...

yep, sorry for confusion.




Test itself contains lots of neat features to simplify logging,
results analysis and error handling evolved to simplify testing.



... while this describes concrete conveniences and utilities that
users of the Avocado Test class can expect.


Test runner
-----------

Is responsible for driving the test(s) execution, which includes
the standard test workflow (setUp/test/tearDown), handling plugin
hooks (results/pre/post) as well as safe interruption.



OK.


Nested test
-----------

Is a test invoked by another test. It can either be executed in
foreground


I got from this proposal that a nested test always has a parent.
Basic question is: does this parent have to be a regular (that is,
non-nested) test?

I think it's mentioned later: a nested test should be an unmodified normal test 
executed from a test, which means there is no limit. On the other hand, the main 
test has no knowledge whatsoever about the nested-nested tests, as they are 
masked by the nested test.

Basically the knowledge transfer is:

  main test
  -> trigger a nested test
   nested test
   -> trigger nested test
   nested nested test
   <- report result (PASS/FAIL/...)
   process nested nested results
   <- report nested result (PASS/FAIL/...)
   process nested results
   <- report result (PASS/FAIL/...)

therefore in the json/xunit results you only see the main test's result 
(PASS/FAIL/...), but you can poke around the job results for the details.

The main test's logs could look like this:

START 1-passtest.py:PassTest.test
Not logging /var/log/messages (lack of permissions)
1-nested.py:Nested.test: START 1-nested.py:Nested.test
1-nested.py:Nested.test: 1-nestednested.py:NestedNested.test: START 
1-nestednested.py:NestedNested.test
1-nested.py:Nested.test: 1-nestednested.py:NestedNested.test: Some mes

Re: [Avocado-devel] TestRunner API

2016-05-26 Thread Cleber Rosa



On 05/26/2016 03:39 PM, Vincent Matossian wrote:

[more of an avocado-users than a devel question but this is the only list I
could find, sorry if I missed something]

Hopefully a quick question: what's the right way to invoke the test runner
programmatically rather than through the  "avocado run " CLI?



Hi Vincent,


I've played with something like

import avocado
from sleeptest import SleepTest

avocado.Test.run(SleepTest())



You can obviously do something like:

 >>> from sleeptest import SleepTest
 >>> s = SleepTest()
 >>> print s
 Test('0-SleepTest')
 >>> print s.run_avocado()
 None

But this is probably not what you want.


but I haven't explored further to find out how to parametrize and get all
the goodies we get with the CLI options, is all this feasible?



TBH, this is something we intend to tackle with the "Job API".  You can 
find some references about it in this very same mailing list.  After 
you catch up with the concepts and ideas there, please let us know if it 
matches your requirements and ideas.
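
In the meantime, a crude but workable stopgap is to just drive the 
command line runner from Python.  A minimal sketch, assuming `avocado` 
is in $PATH and that the example tests are installed in the default 
location (both assumptions, adjust to your setup):

    from avocado.utils import process

    # Run a job through the CLI and look at its exit status afterwards.
    # This is only a stopgap until the Job API offers a real
    # programmatic entry point.
    result = process.run('avocado run /usr/share/avocado/tests/sleeptest.py',
                         ignore_status=True)
    print('Job exit status: %s' % result.exit_status)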


Cheers,
 - Cleber.


Thanks!

Vincent



___
Avocado-devel mailing list
Avocado-devel@redhat.com
https://www.redhat.com/mailman/listinfo/avocado-devel



--
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]

___
Avocado-devel mailing list
Avocado-devel@redhat.com
https://www.redhat.com/mailman/listinfo/avocado-devel


Re: [Avocado-devel] How to enable a new platform and Linux on avocado-vt?

2016-05-30 Thread Cleber Rosa



On 05/18/2016 11:00 PM, Wei WA Li wrote:



Hi all,

Could you help me to take a look for this issue? I am still trying to do
it, but I have no idea about it. Thank you very much.
https://github.com/avocado-framework/avocado-vt/issues/512



Hi Wei,

I have responded on the issue itself.  Adding a few extra notes here.



I just installed avocado-vt on our s390x platform, the result is as below.

[root@zs95kv2 avocado]# ./scripts/avocado run
--vt-machine-type=s390-ccw-kvmibm-1.1.1
type_specific.io-github-autotest-qemu.migrate.default.tcp --show-g
Config
file /usr/share/avocado/data/avocado-vt/backends/qemu/cfg/guest-os.cfg auto
generated from guest OS samples
Config
file /usr/share/avocado/data/avocado-vt/backends/qemu/cfg/subtests.cfg auto
generated from subtest samples
Test discovery plugin  failed: option --vt-guest-os 'JeOS.23' is not on the known
guest os for aone' and machine type 's390-ccw-kvmibm-1.1.1'. (see
--vt-list-guests)

Unable to discover url(s)
'type_specific.io-github-autotest-qemu.migrate.default.tcp' with loader
plugins(s) 'file', 'vt', 'external', try running 'avoist -V
type_specific.io-github-autotest-qemu.migrate.default.tcp' to see the
details.
[root@zs95kv2 avocado]#



First thing is `--vt-machine-type`.  While s390-ccw-kvmibm-1.1.1 may be 
a valid machine type from qemu's perspective (qemu --machine help), it is 
not a valid (as in present in upstream) Avocado machine type.


Unfortunately, there's no standard way to check the valid Avocado 
machine types.  I created a card to tackle this:


https://trello.com/c/aumjDJde/715-avocado-vt-add-vt-list-machines

For now, you can look at avocado-vt/shared/cfg/machines.cfg



s390x is our platform on the Z mainframe, and I am trying to test its
virtualization functions with Avocado. The relationship is as below.

s390     is like    x86
zLinux   is like    Fedora
zKVM     is like    KVM

So the current Avocado does not support our platform; I want to modify it and
contribute the changes.


Should I add some cfg here first?



See the machines.cfg reference here and the other reference on the 
github issue.


- Cleber.


[root@zs95kv2 Linux]# ll
total 72
drwxr-xr-x.  4 root root 4096 May 11 04:45 CentOS
-rw-r--r--.  1 root root  416 May 11 04:45 CentOS.cfg
drwxr-xr-x.  2 root root 4096 May 11 04:45 Debian
-rw-r--r--.  1 root root  275 May 11 04:45 Debian.cfg
drwxr-xr-x.  2 root root 4096 May 11 04:45 Fedora
-rw-r--r--.  1 root root 1787 May 11 04:45 Fedora.cfg
drwxr-xr-x.  2 root root 4096 May 11 04:45 JeOS
-rw-r--r--.  1 root root   21 May 11 04:45 JeOS.cfg
drwxr-xr-x.  2 root root 4096 May 11 04:45 LinuxCustom
-rw-r--r--.  1 root root   64 May 11 04:45 LinuxCustom.cfg
drwxr-xr-x.  2 root root 4096 May 11 04:45 OpenSUSE
-rw-r--r--.  1 root root  549 May 11 04:45 OpenSUSE.cfg
drwxr-xr-x. 27 root root 4096 May 11 04:45 RHEL
-rw-r--r--.  1 root root  973 May 11 04:45 RHEL.cfg
drwxr-xr-x.  2 root root 4096 May 11 04:45 SLES
-rw-r--r--.  1 root root  772 May 11 04:45 SLES.cfg
drwxr-xr-x.  2 root root 4096 May 11 04:45 Ubuntu
-rw-r--r--.  1 root root  361 May 11 04:45 Ubuntu.cfg
[root@zs95kv2 Linux]# pwd
/avocado/avocado-vt/shared/cfg/guest-os/Linux
[root@zs95kv2 Linux]#

Best regards,
-
Li, Wei (李 伟)
zKVM Solution Test
IBM China Systems & Technology Lab, Beijing
E-Mail: li...@cn.ibm.com
Tel: 86-10-82450631  Notes: Wei WA Li/China/IBM
Address: 3BW298, Ring Bldg. No.28 Building, ZhongGuanCun Software Park,No.8
 DongBeiWang West Road, ShangDi, Haidian District, Beijing,
P.R.China



___
Avocado-devel mailing list
Avocado-devel@redhat.com
https://www.redhat.com/mailman/listinfo/avocado-devel



--
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]

___
Avocado-devel mailing list
Avocado-devel@redhat.com
https://www.redhat.com/mailman/listinfo/avocado-devel


Re: [Avocado-devel] No guest information on host arp table

2016-05-30 Thread Cleber Rosa

On 05/19/2016 10:42 AM, Wei WA Li wrote:


I found these 3 methods to get the guest IP, but none works for me now.
Is it a problem that self.address_cache is {}?

--  source ---
arp_ip = self.address_cache.get(nic.mac.lower())
or
arp_ip = self.address_cache.get(nic.mac.upper())
or
arp_ip = ip_map.get(nic.mac.lower())


-- variable  --
nic.mac = {'netdst': 'virbr0', 'ip': None, 'nic_name': 'nic1', 'mac':
'02:00:00:93:42:41', 'nettype': 'bridge', 'nic_model': 'virtio',
'g_nic_name': None}
self.address_cache = {}





Wei,

Can you better explain what you're trying to do, and what is your 
environment?


I have added a line to boot.py:

diff --git a/generic/tests/boot.py b/generic/tests/boot.py
index bf4ddc9..2176493 100644
--- a/generic/tests/boot.py
+++ b/generic/tests/boot.py
@@ -23,6 +23,7 @@ def run(test, params, env):
     timeout = float(params.get("login_timeout", 240))
     vms = env.get_all_vms()
     for vm in vms:
+        logging.info("VM IP address: %s", vm.get_address(0))
         error.context("Try to log into guest '%s'." % vm.name, logging.info)

         session = vm.wait_for_login(timeout=timeout)
         session.close()

And got the following result:

...
2016-05-30 10:17:25,036 qemu_monitor L0286 DEBUG| (monitor hmp1) 
Sending command 'cont'

2016-05-30 10:17:25,037 qemu_monitor L0714 DEBUG| Send command: cont
2016-05-30 10:17:25,044 boot L0026 INFO | VM IP address: 
127.0.0.1
2016-05-30 10:17:25,044 errorL0085 INFO | Context: Try to 
log into guest 'avocado-vt-vm1'.

...

This IP address is actually accurate because for this run I used user 
level networking (the default setup).


-Cleber.


Best regards,
-
Li, Wei (李 伟)
zKVM Solution Test
IBM China Systems & Technology Lab, Beijing
E-Mail: li...@cn.ibm.com
Tel: 86-10-82450631  Notes: Wei WA Li/China/IBM
Address: 3BW298, Ring Bldg. No.28 Building, ZhongGuanCun Software Park,No.8
 DongBeiWang West Road, ShangDi, Haidian District, Beijing,
P.R.China




From:   Wei WA Li/China/IBM
To: Avocado-devel@redhat.com
Date:   2016/05/19 18:21
Subject:No guest information on host arp table


Hi all,

I have a question about def get_address(self, index=0) and logging into the guest.

I found there is no guest information in our host arp table. I am not sure
whether it is the only way to get the guest IP.
If it is not, how can I get the guest IP?


[root@zs95kv2 guests]# arp
Address  HWtype  HWaddress   Flags Mask
Iface
zkvm-10-20-92-160.pokpr  ether   02:00:00:af:33:8a   CM
vlan1292
v1292gw10-20-92-254.pok  ether   00:26:88:57:b7:f0   C
vlan1292
ZP93K6.pokprv.stglabs.i  ether   02:00:00:12:65:53   C
vlan1292
v508gw9-12-23-1.pok.stg  ether   00:26:88:59:b7:f0   C
vlan508
9.12.23.95   ether   00:04:96:10:44:b0   C
vlan508
[root@zs95kv2 guests]# arp -a
zkvm-10-20-92-160.pokprv.stglabs.ibm.com (10.20.92.160) at
02:00:00:af:33:8a [ether] PERM on vlan1292
v1292gw10-20-92-254.pokprv.stglabs.ibm.com (10.20.92.254) at
00:26:88:57:b7:f0 [ether] on vlan1292
ZP93K6.pokprv.stglabs.ibm.com (10.20.92.60) at 02:00:00:12:65:53 [ether] on
vlan1292
v508gw9-12-23-1.pok.stglabs.ibm.com (9.12.23.1) at 00:26:88:59:b7:f0
[ether] on vlan508
? (9.12.23.95) at 00:04:96:10:44:b0 [ether] on vlan508
[root@zs95kv2 guests]#

[root@zs95kv2 guests]# virsh list
 IdName   State


[root@zs95kv2 guests]# virsh start avocado-vt-vm1
Domain avocado-vt-vm1 started

[root@zs95kv2 guests]# virsh console avocado-vt-vm1

root@zp93k6g93160:~# ip addr
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group
default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
   valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
   valid_lft forever preferred_lft forever
2: eth0:  mtu 1500 qdisc pfifo_fast state
UP group default qlen 1000
link/ether 02:00:00:93:42:41 brd ff:ff:ff:ff:ff:ff
inet 10.20.93.160/24 brd 10.20.93.255 scope global eth0
   valid_lft forever preferred_lft forever
inet6 fd55:faaf:e1ab:a21:0:ff:fe93:4241/64 scope global mngtmpaddr
dynamic
   valid_lft 2591990sec preferred_lft 604790sec
inet6 fe80::ff:fe93:4241/64 scope link
   valid_lft forever preferred_lft forever
root@zp93k6g93160:~#


Best regards,
-
Li, Wei (李 伟)
zKVM Solution Test
IBM China Systems & Technology Lab, Beijing
E-Mail: li...@cn.ibm.com
Tel: 86-10-82450631  Notes: Wei WA Li/China/IBM
Address: 3BW298, Ring Bldg. No.28 Building, ZhongGuanCun Software Park,No.8
 DongBeiWang West Road, ShangDi, Haidian District, Beijing,
P.R.China



___
Avocado-devel mailing list
Avocado-devel@r

Re: [Avocado-devel] [RFC] Environment Variables

2016-05-31 Thread Cleber Rosa


On 05/25/2016 05:31 AM, Amador Pahim wrote:

Hi folks,

We have requests to handle the environment variables that we can set to
the tests. This is the RFC in that regard, with a summary of the ideas
already exposed in the original request and some additional planning.

The original request is here:
https://trello.com/c/Ddcly0oG/312-mechanism-to-provide-environment-variables-to-tests-run-on-a-virtual-machine-remote


Motivation
==
Avocado tests are executed in a forked process or even on a remote
machine. Regardless of the fact that Avocado is hard coded to set some
environment variables, they are for internal consumption and the user is not
allowed to control/configure their behavior.


You mean this:

http://avocado-framework.readthedocs.io/en/latest/WritingTests.html#environment-variables-for-simple-tests

Right? Basically, the fact that Avocado sets some of the job/test state 
as environment variables, that can be used by SIMPLE tests.



The motivation is the request to provide users an interface to set
and/or keep environment variables for test consumption.

Use cases
=
1) Use the command line or the config file to set the environment
variables in tests processes environment; access those variables from
inside the test.
2) Copy from current environment some environment variable(s) to the
tests processes environment; access those variables from inside the test.

Proposal

- To create a command line option, under the `run` command, to set
environment variables that will be available in tests environment process:

 $ avocado run --test-env='FOO=BAR,FOO1=BAR1' passtest.py



I can relate to this use case...


- To create an option in config file with a dictionary of environment
variables to set in test process environment. It can be used as a
replacement or complement to the command line option (with lower priority):

 [tests.env]
 test_env_vars = {'FOO': 'BAR', 'FOO1': 'BAR1'}



... while putting those in a config file does not seem like something 
one would do.


In all cases, and more explicitly in the config file example, this is 
only really necessary if/when the environment variable to pass to the 
test actually harms Avocado (considering a local execution, that is, in 
a forked process).


So, if Avocado and the test, share the use of environment variables by 
the same name, then this is a must.  Also in the case of execution in 
other "runners", such as remote/vm, this can be quite valuable.



- Create an option in config file with a list of environment variable
names to copy from avocado main process environment to the test process
environment (similar to env_keep in the /etc/sudoers file):

 [tests.env]
 env_keep = ['FOO', 'FOO1', 'FOO2']




Right, this makes sense. But it also brings the point that we may 
actually change the default behavior of keeping environment variables 
from Avocado in the tests' process.  That is, they would get a much 
cleaner environment by default.  While this sounds cleaner, it may break 
a lot of expectations.



For every configuration entry point, the setting has to be respected in
local and remote executions.

Drawbacks
=

While setting an environment variable, the user will be able to change the
behavior of a test and probably the behavior of Avocado itself. Maybe
even the OS behavior as well. We should:
- Warn users about the danger of using such options.


I fail to see where an environment variable, to be set by Avocado in the 
test process, can or should impact Avocado itself.  If it does, then 
we'd probably be doing something wrong.  I'm not sure we need warnings 
that exceed documenting the intended behavior.



- Protect Avocado environment variables from overwriting.


About protecting the Avocado's own environment variables: agreed.



Looking forward to read your comments.



Overall, this is definitely welcome.  Let's discuss possible 
implementation issues, such as remote/vm support, because it wouldn't be 
nice to introduce something like this with too many caveats.


Cheers,
- Cleber.


Best Regards,
--
Amador Pahim

_______
Avocado-devel mailing list
Avocado-devel@redhat.com
https://www.redhat.com/mailman/listinfo/avocado-devel


--
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]

___
Avocado-devel mailing list
Avocado-devel@redhat.com
https://www.redhat.com/mailman/listinfo/avocado-devel


Re: [Avocado-devel] Collaboration Workflow

2016-05-31 Thread Cleber Rosa

On 05/31/2016 06:09 AM, Amador Pahim wrote:

Hello,

We are receiving a good number of Pull Requests from new contributors
and this is great.

In order to optimize the time spent on code reviews and also the time
the code writers are investing in adjusting the code according to the
reviews, I'd like to expose my own workflow, which I believe is close to
the workflow used by the other full-time Avocado developers.

The hope is that newcomers get inspired by this and probably take
advantage of it.

As the biggest number of PRs is coming to avocado-misc-tests, I will
use this repository as an example.

- Fork the repository.

- Clone from your fork:

 $ git clone g...@github.com:/avocado-misc-tests.git

- Enter directory:

 $ cd avocado-misc-tests/

- Setup upstream:

 $ git remote add upstream
g...@github.com:avocado-framework/avocado-misc-tests.git

At this point, you should have your name and e-mail configured on git.
Also, we encourage you to sign your commits using GPG signature:

http://avocado-framework.readthedocs.io/en/latest/ContributionGuide.html#signing-commits


Start coding:

- Create a new local branch and checkout to it:

 $ git checkout -b my_new_local_branch

- Code and then commit your changes:

 $ git add new-file.py
 $ git commit -s (include also a '-S' if signing with GPG)

Please write a good commit message, pointing motivation, issues that
you're addressing. Usually I try to explain 3 points of my code in the
commit message: motivation, approach and effects. Example:

https://github.com/avocado-framework/avocado/commit/661a9abbd21310ef7803ea0286fcb818cb93dfa9


If the commit is related to a Trello card or an issue in GitHub, I also
add the line "Reference: " to the bottom of the commit message. You can
mention it in the Pull Request message instead, but the main point is not to
omit that information.

- If working on 'avocado' repository, this is the time to run 'make check'.

- Push your commit(s) to your fork:

 $ git push --set-upstream origin my_new_local_branch

- Create the Pull Request on github.

Now you're waiting for feedback on github Pull Request page. Once you
get some, new versions of your code should not be force-updated.
Instead, you should:

- Close the Pull Request on github.

- Create a new branch out of your previous branch, naming it with '_v2'
in the end (this will further allow code-reviewers to simply run '$ git
diff user_my_new_local_branch{,_v2}' to see what changed between versions):

 $ git checkout my_new_local_branch
 $ git checkout -b my_new_local_branch_v2

- Code and amend the commit. If you have more than one commit in the PR,
you will probably need to rebase interactively to amend the right commits.

- Push your changes:

 $ git push --set-upstream origin my_new_local_branch_v2

- Create a new Pull Request for this new branch. In the PR message,
point the previous PR and the changes this PR introduced when compared
to the previous PRs. Example of PR message for a 'V2':

https://github.com/avocado-framework/avocado/pull/1228

After your PR gets merged, you can sync your local repository and your
fork on github:

 $ git checkout master
 $ git pull upstream master
 $ git push

That's it. That's my personal workflow, which means it probably differs
from what other developers are used to doing, but the important thing here is
to somehow cover the good practices we have in the project.

Please feel free to comment and to add more information here.



Amador,

Thanks for cooking this sort of tutorial for new contributors!  I 
imagine it's going to be very useful.


Once it matures, that is, if people add some extra touches and tips to 
it, please merge this into our contribution guide so that it's more 
persistent and easy to find.


We can/should even advertise this section more aggressively, say, in the 
README file.


Cheers,
- Cleber.


Best,


--
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]

___
Avocado-devel mailing list
Avocado-devel@redhat.com
https://www.redhat.com/mailman/listinfo/avocado-devel


Re: [Avocado-devel] [RFC] Environment Variables

2016-06-01 Thread Cleber Rosa

On 06/01/2016 03:07 PM, Ademar Reis wrote:

On Tue, May 31, 2016 at 07:30:43AM -0300, Cleber Rosa wrote:




I'm replying on top of Cleber because he already said a few
things I was going to say.


On 05/25/2016 05:31 AM, Amador Pahim wrote:

Hi folks,

We have requests to handle the environment variables that we can set to
the tests. This is the RFC in that regard, with a summary of the ideas
already exposed in the original request and some additional planning.

The original request is here:
https://trello.com/c/Ddcly0oG/312-mechanism-to-provide-environment-variables-to-tests-run-on-a-virtual-machine-remote


Motivation
==
Avocado tests are executed in a fork process or even in a remote
machine. Regardless the fact that Avocado is hard coded to set some
environment variables, they are for internal consumption and user is not
allowed to control/configure its behavior.


You mean this:

http://avocado-framework.readthedocs.io/en/latest/WritingTests.html#environment-variables-for-simple-tests

Right? Basically, the fact that Avocado sets some of the job/test state as
environment variables, that can be used by SIMPLE tests.


The motivation is the request to provide users an interface to set
and/or keep environment variables for test consumption.


I'm not sure if they're necessarily for test consumption. I think
the motivation for the original request was to provide the
standard Unix interface of environment variables for when tests
are run remotely.



If the motivation is basically about setting the env vars when running 
tests remotely, then this brings the discussion about the *local* 
behavior to:


1. Should Avocado default to the standard UNIX behavior of cloning the 
environment?


 A: IMHO, yes.

2. Could Avocado have a feature to start tests in a clean(er) 
environment?


 A: Possibly yes, but seems low priority.  The use case here could be 
seen as a plus in predictability, helping to achieve expected test 
results in spite of the runner environment.  A real world example could 
be a CI environment that sets a VERBOSE environment variable. This env 
var will be passed over to Avocado, to the test process and finally to a 
custom binary (say a benchmark tool) that will produce different output 
depending on that environment variable.  Doing that type of cleaning in 
the test code is possible, but the framework could help with that.


2.1. If Avocado provides a "clean(er) test environment" feature, how to 
determine which environment variables are passed along?


 A: The "env-keep" approach seems like the obvious way to do it.  If 
the mechanism is enabled, which I believe should be disabled by default 
(see #1), its default list could contain the more or less standard UNIX 
environment variables (TERM, SHELL, LANG, etc).



These environment variables can change the behavior of both
Avocado (the runner itself), the tests (after all nothing
prevents the test writer from using them) and all sub-processes
executed by the test.



Right.


Locally, this is standard:

  $ TMPDIR=/whatever/tmp VAR=foo ./avocado run test1.py

But when running avocado remotely, there's no way to configure
the environment in the destination. The environment variables set
in the command line below will not be "forwarded" to the remote
environment:

  $ TMPDIR=/whatever/tmp VAR=foo ./avocado run test1.py \
 --remote...



Right.



Use cases
=
1) Use the command line or the config file to set the environment
variables in tests processes environment; access those variables from
inside the test.
2) Copy from current environment some environment variable(s) to the
tests processes environment; access those variables from inside the test.


I think we don't even have to go that far. We can simply say the
intention is to set the environment variables in the environment
where Avocado is run. The mechanism is quite standard and well
understood.

And here comes an important point: I don't think this should be a
mechanism to pass variables to tests. Although, again,
environment variables can be used for that purpose, Avocado
should have a proper interface to provide a dictionary of
configuration and variables to each test.



The only valid reason for having such a mechanism to pass *different* 
environment variables to tests, talking about local environment, would 
be *if and only if* the same environment variable to be set when running 
Avocado would change the behavior of Avocado itself.  Example:


 $ AVOCADO_LOG_EARLY=1 avocado run avocado-self-tests.py

This way, both the first level avocado process (our "real" runner) and 
other instances run by the "avocado-self-test.py" code would react to 
that variable. *BUT* this seems a corner case, and I wouldn't think it 
justifies the implementation of such a feature at this point.



Currently, this is erroneously provided by the multiplex

Re: [Avocado-devel] [RFC] Environment Variables

2016-06-01 Thread Cleber Rosa

On 06/01/2016 04:39 PM, Ademar Reis wrote:

On Wed, Jun 01, 2016 at 04:02:54PM -0300, Cleber Rosa wrote:

On 06/01/2016 03:07 PM, Ademar Reis wrote:

On Tue, May 31, 2016 at 07:30:43AM -0300, Cleber Rosa wrote:




I'm replying on top of Cleber because he already said a few
things I was going to say.


On 05/25/2016 05:31 AM, Amador Pahim wrote:

Hi folks,

We have requests to handle the environment variables that we can set to
the tests. This is the RFC in that regard, with a summary of the ideas
already exposed in the original request and some additional planning.

The original request is here:
https://trello.com/c/Ddcly0oG/312-mechanism-to-provide-environment-variables-to-tests-run-on-a-virtual-machine-remote


Motivation
==
Avocado tests are executed in a fork process or even in a remote
machine. Regardless the fact that Avocado is hard coded to set some
environment variables, they are for internal consumption and user is not
allowed to control/configure its behavior.


You mean this:

http://avocado-framework.readthedocs.io/en/latest/WritingTests.html#environment-variables-for-simple-tests

Right? Basically, the fact that Avocado sets some of the job/test state as
environment variables, that can be used by SIMPLE tests.


The motivation is the request to provide users an interface to set
and/or keep environment variables for test consumption.


I'm not sure if they're necessarily for test consumption. I think
the motivation for the original request was to provide the
standard Unix interface of environment variables for when tests
are run remotely.



If the motivation is basically about setting the env vars when running tests
remotely, then this brings the discussion about the *local* behavior to:

1. Should Avocado default to the standard UNIX behavior of cloning the
environment?

 A: IMHO, yes.


That's the current behavior (see my example at the end of the
previous email). Except when one runs tests remotely, which is
precisely the use case this feature would "fix".



2. Could Avocado have a feature to start tests in a clean(er)
environment?

 A: Possibly yes, but seems low priority.  The use case here could be seen
as a plus in predictability, helping to achieve expected test results in
spite of the runner environment.  A real world example could be a CI
environment that sets a VERBOSE environment variable. This env var will be
passed over to Avocado, to the test process and finally to a custom binary
(say a benchmark tool) that will produce different output depending on that
environment variable.  Doing that type of cleaning in the test code is
possible, but the framework could help with that.

2.1. If Avocado provides a "clean(er) test environment" feature, how to
determine which environment variables are passed along?

 A: The "env-keep" approach seems like the obvious way to do it.  If the
mechanism is enabled, which I believe should be disabled by default (see
#1), its default list could contain the more or less standard UNIX
environment variables (TERM, SHELL, LANG, etc).


Agree. But like you said such a feature would be low priority and
optional. The important thing is that the implementation of what
we're discussing in this RFC would not interfere with it.




These environment variables can change the behavior of both
Avocado (the runner itself), the tests (after all nothing
prevents the test writer from using them) and all sub-processes
executed by the test.



Right.


Locally, this is standard:

  $ TMPDIR=/whatever/tmp VAR=foo ./avocado run test1.py

But when running avocado remotely, there's no way to configure
the environment in the destination. The environment variables set
in the command line below will not be "forwarded" to the remote
environment:

  $ TMPDIR=/whatever/tmp VAR=foo ./avocado run test1.py \
 --remote...



Right.



Use cases
=
1) Use the command line or the config file to set the environment
variables in tests processes environment; access those variables from
inside the test.
2) Copy from current environment some environment variable(s) to the
tests processes environment; access those variables from inside the test.


I think we don't even have to go that far. We can simply say the
intention is to set the environment variables in the environment
where Avocado is run. The mechanism is quite standard and well
understood.

And here comes an important point: I don't think this should be a
mechanism to pass variables to tests. Although, again,
environment variables can be used for that purpose, Avocado
should have a proper interface to provide a dictionary of
configuration and variables to each test.



The only valid reason for having such a mechanism to pass *different*
environment variables to tests, talking about local environment, would be
*if and only if* the same environment variable to be set when running
Avocado would chan

Re: [Avocado-devel] [RFC] Environment Variables

2016-06-03 Thread Cleber Rosa



On 06/03/2016 07:01 AM, Lukáš Doktor wrote:

Hello guys, let me just share my view (nothing radical here)

Dne 2.6.2016 v 18:13 Amador Pahim napsal(a):

On 06/01/2016 09:39 PM, Ademar Reis wrote:

On Wed, Jun 01, 2016 at 04:02:54PM -0300, Cleber Rosa wrote:

On 06/01/2016 03:07 PM, Ademar Reis wrote:

On Tue, May 31, 2016 at 07:30:43AM -0300, Cleber Rosa wrote:




I'm replying on top of Cleber because he already said a few
things I was going to say.


On 05/25/2016 05:31 AM, Amador Pahim wrote:

Hi folks,

We have requests to handle the environment variables that we can
set to
the tests. This is the RFC in that regard, with a summary of the
ideas
already exposed in the original request and some additional
planning.

The original request is here:
https://trello.com/c/Ddcly0oG/312-mechanism-to-provide-environment-variables-to-tests-run-on-a-virtual-machine-remote




Motivation
==
Avocado tests are executed in a fork process or even in a remote
machine. Regardless the fact that Avocado is hard coded to set some
environment variables, they are for internal consumption and user
is not
allowed to control/configure its behavior.


You mean this:

http://avocado-framework.readthedocs.io/en/latest/WritingTests.html#environment-variables-for-simple-tests



Right? Basically, the fact that Avocado sets some of the job/test
state as
environment variables, that can be used by SIMPLE tests.


The motivation is the request to provide users an interface to set
and/or keep environment variables for test consumption.


I'm not sure if they're necessarily for test consumption. I think
the motivation for the original request was to provide the
standard Unix interface of environment variables for when tests
are run remotely.



If the motivation is basically about setting the env vars when
running tests
remotely, then this brings the discussion about the *local* behavior
to:

1. Should Avocado default to the standard UNIX behavior of cloning the
environment?

 A: IMHO, yes.

Yes



That's the current behavior (see my example at the end of the
previous email). Except when one runs tests remotely, which is
precisely the use case this feature would "fix".


Yes, agreed we should only extend the current behavior in local tests to
remote tests. That seems to be the best approach for this RFC.





2. Could Avocado have a feature to start tests in a clean(er)
environment?


I also see a benefit, not a critical feature, though. I could imagine
something like `--env-clean` which along with `--env-keep` and
`--env-ignore` should cover most of the needs. It should also contain a
blacklist (probably in config) to disallow overriding the reserved values.

Actually we might add the support for the `--env-ignore` even now as it
could be useful (locally one can unset the variable, but remotely it's a
bit harder).


 A: Possibly yes, but seems low priority.  The use case here could be
seen
as a plus in predictability, helping to achieve expected test
results in
spite of the runner environment.  A real world example could be a CI
environment that sets a VERBOSE environment variable. This env var
will be
passed over to Avocado, to the test process and finally to a custom
binary
(say a benchmark tool) that will produce different output depending
on that
environment variable.  Doing that type of cleaning in the test code is
possible, but the framework could help with that.

2.1. If Avocado provides a "clean(er) test environment" feature, how to
determine which environment variables are passed along?

 A: The "env-keep" approach seems like the obvious way to do it.  If
the
mechanism is enabled, which I believe should be disabled by default
(see
#1), its default list could contain the more or less standard UNIX
environment variables (TERM, SHELL, LANG, etc).


Agree. But like you said such a feature would be low priority and
optional. The important thing is that the implementation of what
we're discussing in this RFC would not interfere with it.


"clean(er) test environment" would affect both local and remote
implementations and should be considered regardless this RFC. Still, I
don't see relevance in have a cleaner env right now. Agreed it's low
priority or even unwanted feature.






These environment variables can change the behavior of both
Avocado (the runner itself), the tests (after all nothing
prevents the test writer from using them) and all sub-processes
executed by the test.



Right.


Locally, this is standard:

  $ TMPDIR=/whatever/tmp VAR=foo ./avocado run test1.py

But when running avocado remotely, there's no way to configure
the environment in the destination. The environment variables set
in the command line below will not be "forwarded" to the remote
environment:

  $ TMPDIR=/whatever/tmp VAR=foo ./avocado run test1.py \
 --remote...



Right.



Use cases
=
1) Use the 

[Avocado-devel] Avocado release 37.0: Trabant vs. South America

2016-06-14 Thread Cleber Rosa

This is another proud announcement: Avocado release 37.0, aka "Trabant
vs. South America", is now out!

This release is yet another collection of bug fixes and some new
features.  Along with the same changes that made the 36.0lts
release[1], this brings the following additional changes:

* TAP[2] version 12 support, bringing better integration with other
  test tools that accept this streaming format as input.

* Added niceties on Avocado's utility libraries "build" and "kernel",
  such as automatic parallelism and resource caching.  It makes tests
  such as "linuxbuild.py" (and your similar tests) run up to 10 times
  faster.

* Fixed an issue where Avocado could leave processes behind after the
  test was finished.

* Fixed a bug where the configuration for tests data directory would
  be ignored.

* Fixed a bug where SIMPLE tests would not properly exit with WARN
  status.

For a complete list of changes please check the Avocado changelog[3].

For Avocado-VT, please check the full Avocado-VT changelog[4].

Install avocado
---

Instructions are available in our documentation on how to install
either with packages or from source[5].

Updated RPM packages are available in the project repos for
Fedora 22, Fedora 23, EPEL 6 and EPEL 7.

Happy hacking and testing!

---

[1] https://www.redhat.com/archives/avocado-devel/2016-May/msg00025.html
[2] https://en.wikipedia.org/wiki/Test_Anything_Protocol
[3] https://github.com/avocado-framework/avocado/compare/35.0...37.0
[4] https://github.com/avocado-framework/avocado-vt/compare/35.0...37.0
[5] 
http://avocado-framework.readthedocs.io/en/37.0/GetStartedGuide.html#installing-avocado 



___
Avocado-devel mailing list
Avocado-devel@redhat.com
https://www.redhat.com/mailman/listinfo/avocado-devel


Re: [Avocado-devel] Pre-release test plan results

2016-07-04 Thread Cleber Rosa



On 07/04/2016 08:21 AM, Amador Pahim wrote:

Test Plan: Release Test Plan
Run by 'apahim' at 2016-07-04T10:14:02.179147
PASS: 'Avocado source is sound':
FAIL: 'Avocado RPM build': https://paste.fedoraproject.org/387753/
FAIL: 'Avocado RPM install': Cannot build the rpms
FAIL: 'Avocado Test Run on RPM based installation': Cannot build the rpms
FAIL: 'Avocado Test Run on Virtual Machine': Cannot build the rpms
FAIL: 'Avocado Test Run on Remote Machine': Cannot build the rpms
FAIL: 'Avocado Remote Machine HTML report': Cannot build the rpms
FAIL: 'Avocado Server Source Checkout and Unittests': Cannot build the rpms
FAIL: 'Avocado Server Run': Cannot build the rpms
FAIL: 'Avocado Server Functional Test': Cannot build the rpms
PASS: 'Avocado Virt and VT Source Checkout':
PASS: 'Avocado Virt Bootstrap':
PASS: 'Avocado Virt Boot Test Run and HTML report':
PASS: 'Avocado Virt - Assignment of values from the cmdline':
PASS: 'Avocado Virt - Migration test':
PASS: 'Avocado VT':
PASS: 'Avocado HTML report sysinfo':
PASS: 'Avocado HTML report links':
PASS: 'Paginator':


Merged PR intended to fix the 'make rpm':
https://github.com/avocado-framework/avocado/pull/1284, still not
working. We should probably revert that then?



+1


Anyway, one issue should be fixed by
https://github.com/avocado-framework/avocado/pull/1285, to be applied on
top of (not reverted, then) PR 1284. The second issue (Failed to mkfs) is under
investigation.



I'm not convinced about that fix.  I'd like to try an `os.umask()`-based 
approach too.
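
For reference, the umask idea boils down to the standard library call; 
a purely illustrative sketch of the kind of change being considered 
(not the actual patch):

    import os

    # Force a predictable umask around the step that creates the
    # offending files, then restore whatever the caller had set.
    old_umask = os.umask(0o022)
    try:
        pass  # ... create the RPM build tree / run the failing step here ...
    finally:
        os.umask(old_umask)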




--
apahim


--
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]

___
Avocado-devel mailing list
Avocado-devel@redhat.com
https://www.redhat.com/mailman/listinfo/avocado-devel


Re: [Avocado-devel] How to create a s390-virtio image.

2016-07-04 Thread Cleber Rosa
cal/bin/qemu-system-s390x boot
# Install it
yum install genisoimage
mkdir /usr/share/avocado/data/avocado-vt/isos/linux
wget https://dl.fedoraproject.org/pub/fedora-secondary/releases/23/Server/s390x/iso/Fedora-Server-DVD-s390x-23.iso \
    -O /usr/share/avocado/data/avocado-vt/isos/linux/Fedora-Server-DVD-s390x-23.iso
modprobe kvm
avocado run --vt-guest-os Fedora.23 --vt-arch s390x --vt-machine-type s390-virtio \
    --vt-qemu-bin /usr/local/bin/qemu-system-s390x \
    unattended_install.cdrom.extra_cdrom_ks.default_install.aio_threads --show-job-log
# And now we can run it (or any other test)
avocado run --vt-guest-os Fedora.23 --vt-arch s390x --vt-machine-type s390-virtio \
    --vt-qemu-bin /usr/local/bin/qemu-system-s390x boot
```

I'm sorry for the confusion; we plan to add the `arch/machine` to the `list`
commands too.

Kind regards,
Lukáš

Dne 30.6.2016 v 10:39 Wei WA Li napsal(a):

Hi all,

I am testing libvirt test cases on s390x now, and I found that all of the guests
use the i440fx machine_type.
How can I create a s390-virtio image? Which cfg file do I need to modify?
Thanks in advance.


[root@zs95kv2 guests]# avocado list --vt-type libvirt --vt-list-guests
..
Linux.Ubuntu.12.04-server.i386.i440fx (missing
ubuntu-12.04-server-32.qcow2)
Linux.Ubuntu.12.04-server.x86_64.i440fx (missing
ubuntu-12.04-server-64.qcow2)
Linux.Ubuntu.14.04-server.i386.i440fx (missing
ubuntu-14.04-server-32.qcow2)
Linux.Ubuntu.14.04-server.x86_64.i440fx (missing
ubuntu-14.04-server-64.qcow2)
Linux.Ubuntu.14.04.1-server.i386.i440fx (missing
ubuntu-14.04.1-server-32.qcow2)
Linux.Ubuntu.14.04.1-server.x86_64.i440fx (missing
ubuntu-14.04.1-server-64.qcow2)
Linux.Ubuntu.14.04.3-server.i386.i440fx (missing
ubuntu-14.04.3-server-32.qcow2)
Linux.Ubuntu.14.04.3-server.x86_64.i440fx


[root@zs95kv2 guests]# avocado list --vt-type libvirt --vt-list-guests
--vt-machine-type s390-virtio
Searched /usr/share/avocado/data/avocado-vt/images for guest images

Available guests in config:

[root@zs95kv2 guests]#



Best regards,
-
Li, Wei (李 伟)
zKVM Solution Test
IBM China Systems & Technology Lab, Beijing
E-Mail: li...@cn.ibm.com
Tel: 86-10-82450631 Notes: Wei WA Li/China/IBM
Address: 3BW298, Ring Bldg. No.28 Building, ZhongGuanCun Software
Park,No.8
DongBeiWang West Road, ShangDi, Haidian District, Beijing, P.R.China



[Attachment "signature.asc" deleted by Wei WA Li/China/IBM]





_______
Avocado-devel mailing list
Avocado-devel@redhat.com
https://www.redhat.com/mailman/listinfo/avocado-devel



--
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]

___
Avocado-devel mailing list
Avocado-devel@redhat.com
https://www.redhat.com/mailman/listinfo/avocado-devel


[Avocado-devel] Avocado release 38.0: Love, Ken

2016-07-05 Thread Cleber Rosa

Hello everyone,

You guessed it right: this is another Avocado release announcement:
release 38.0, aka "Love, Ken", is now out!

Another development cycle has just finished, and our community will
receive this new release containing a nice assortment of bug fixes and
new features.

* The download of assets in tests now allows for an expiration time.
  This means that tests that need to download any kind of external
  asset, say a tarball, can now automatically benefit from the
  download cache, but can also keep receiving new versions
  automatically.

  Suppose your test uses an asset named `myproject-daily.tar.bz2`,
  and that your test runs 50 times a day.  By setting the expire time
  to `1d` (1 day), your test will benefit from cache on most runs, but
  will still fetch the new version when the 24 hours from the
  first download have passed.

  For more information, please check out the documentation on the
  `expire` parameter to the `fetch_asset()` method[1].
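
  A minimal sketch of a test using it could look like this (the URL and
  the expire value here are illustrative only)::

    from avocado import Test


    class DailyAssetTest(Test):

        def test(self):
            # Re-download at most once every 24 hours; otherwise reuse
            # the locally cached copy of the asset.
            tarball = self.fetch_asset(
                'https://example.com/myproject-daily.tar.bz2', expire='1d')
            self.log.info("Asset available at: %s", tarball)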

* Environment variables can be propagated into tests running on remote
  systems.  It's a known fact that one way to influence application
  behavior, including tests, is to set environment variables.  A command
  line such as::


$ MYAPP_DEBUG=1 avocado run myapp_test.py

  Will work as expected on a local system.  But Avocado also allows
  running tests on remote machines, and up until now, it has been
  lacking a way to propagate environment variables to the remote
  system.

  Now, you can use::

$ MYAPP_DEBUG=1 avocado run --env-keep MYAPP_DEBUG \
  --remote-host test-machine myapp_test.py

* The plugin interfaces have been moved into the
  `avocado.core.plugin_interfaces` module.  This means that plugin
  writers now have to import the interface definitions from this
  namespace, for example::

...
from avocado.core.plugin_interfaces import CLICmd

class MyCommand(CLICmd):
...

  This is a way to keep ourselves honest, and say that there's no
  difference between plugin interfaces and Avocado's core implementation,
  that is, they may change at will.  For greater stability, one should
  be tracking the LTS releases.

  Also, it effectively makes all plugins the same, whether they're
  implemented and shipped as part of Avocado, or as part of external
  projects.
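
  As a rough, hypothetical sketch (the command name and behavior are
  made up, and the setuptools entry point registration is not shown),
  an external command plugin could look like::

    from avocado.core.plugin_interfaces import CLICmd


    class Hello(CLICmd):

        """
        A do-nothing 'hello' command, just to show the new import path.
        """

        name = 'hello'
        description = "Prints a friendly greeting (illustrative only)"

        def run(self, args):
            print("Hello from an external plugin!")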

* A contrib script for running kvm-unit-tests.  As some people are
  aware, Avocado has indeed a close relation to virtualization
  testing.  Avocado-VT is one obvious example, but there are other
  virtualization-related test suites that Avocado can run.

  This release adds a contrib script that will fetch, download,
  compile and run kvm-unit-tests using Avocado's external runner
  feature.  This gives results in a better granularity than the
  support that exists in Avocado-VT, which gives only a single
  PASS/FAIL for the entire test suite execution.

For more information, please check out the Avocado changelog[2].

Also, while we focused on Avocado, let's also not forget that
Avocado-VT maintains its own fast pace of incoming niceties.

* s390 support: Avocado-VT is breaking into new grounds, and now has
  support for the s390 architecture.  Fedora 23 for s390 has been added
  as a valid guest OS, and s390-virtio has been added as a new machine
  type.

* Avocado-VT is now more resilient against failures to persist its
  environment file, and will only give warnings instead of errors when
  it fails to save it.

* An improved implementation of the "job lock" plugin, which prevents
  multiple Avocado jobs with VT tests from running simultaneously.  Since
  there's no finer grained resource locking in Avocado-VT, this is a
  global lock that will prevent issues such as image corruption when
  two jobs are run at the same time.

  This new implementation will now check if existing lock files are
  stale, that is, they are leftovers from a previous run.  If the
  processes associated with these files are not present, the stale
  lock files are deleted, removing the need to clean them up manually.
  It also outputs better debugging information when it fails to
  acquire the lock.

The complete list of changes to Avocado-VT are available on its
changelog[3].

While not officially part of this release, this development cycle saw
the introduction of new tests on our avocado-misc-tests. Go check it
out!

Finally, since Avocado and Avocado-VT are not newly born anymore, we
decided to update information mentioning KVM-Autotest, virt-test on so
on around the web.  This will hopefully redirect new users to the
Avocado community and avoid confusion.

Install avocado
---

Instructions are available in our documentation on how to install
either with packages or from source[4].

Updated RPM packages are available in the project repos for EPEL 6,
EPEL 7, Fedora 22, Fedora 23 and the newly released Fedora 24.

Please note that on the next release, we'll drop support for Fedora 22
packages.

Happy hacking and testing!

---

[1] http://avocado-framework.readthedocs.io/en/

[Avocado-devel] Avocado-VT JeOS 23 image update

2016-07-10 Thread Cleber Rosa
Hi folks,

I'd like to inform Avocado-VT users that we're updating the JeOS 23
image file.  The goal is to improve compatibility with older QEMU
versions.  There are *no* changes within the guest image, only in the
Qcow2 version.

If you're interested in a longer explanation, here it is: the JeOS 23
image was created on a Fedora 23 host machine, which will create Qcow2
images with the compatibility level set at "1.1".  This prevents older, but
still relevant, platforms from using that image AS IS.

Users will notice that, during an interactive run of "avocado
vt-bootstrap", the following prompt will appear:


6 - Verifying (and possibly downloading) guest image
Verifying expected SHA1 sum from
http://assets-avocadoproject.rhcloud.com/static/SHA1SUM_JEOS23
Expected SHA1 sum: f9c24d609c37ee96f4d53778dc7190cb05c38295
Found /usr/share/avocado/data/avocado-vt/images/jeos-23-64.qcow2.7z
Actual SHA1 sum: b88d553e19736aa3ec61caa7653c5b30e6d4b59a
The file seems corrupted or outdated. Would you like to download it? (y/n)

By answering 'y', you should get the updated image without any other
software update.

If you have any questions, or find any issues, please let us know.

Thanks!

-- 
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]



signature.asc
Description: OpenPGP digital signature
___
Avocado-devel mailing list
Avocado-devel@redhat.com
https://www.redhat.com/mailman/listinfo/avocado-devel


Re: [Avocado-devel] avocado-vt: How to use Host_* variants?

2016-07-10 Thread Cleber Rosa


On 07/08/2016 06:18 AM, Lukáš Doktor wrote:
> Dne 30.6.2016 v 21:21 Eduardo Habkost napsal(a):
>> While trying to run the cpuid test cases using avocado-vt, I
>> found out that the machine_rhel variants are being automatically
>> filtered out. Then I found out that the most recent version of
>> qemu_cpu.cfg depends on a "Host_RHEL" variant being defined.
>>
>> I don't know how this host-version check system works. Does
>> anybody know how to make the Host_RHEL variant be defined and
>> available when running the test cases under avocado-vt?
>>
>> Or is this Host_* magic not supported by avocado-vt yet and we
>> can't run any of the variants containing "only Host_RHEL" under
>> avocado-vt?
>>
> 
> Hello Eduardo,
> 
> I haven't played with that part for a while, but Host_RHEL used to be
> set by the internal runner used by QA and I don't think it was added to
> avocado-vt, therefore the filters should not be pushed upstream (or the
> support for it should have been added as well).
> 
> CC: Xu and Feng, do you guys know more about this?
> 
> Regards,
> Lukáš
> 

Eduardo and Lukáš,

As you've surely noticed, I was even more confused than you guys.  The
extra confusion was caused by the fact that, when I started to review:

  https://github.com/autotest/tp-qemu/pull/686

I was still unaware of this thread.  Looks like a MUA problem, but
that's now irrelevant.

The important thing here is that under no circumstances can we have
upstream code that depends on tools, configuration files, or know-how
that's not upstream.  I'm not judging the "Host_*" variant creation
mechanism at this point, but simply stating that *any* upstream user
should be able to run tests.  At most, users should be able to read the
documentation and set up their systems accordingly, but the information
should be available.

Feng, Xu,

We really need your help.  First, to identify what kind of tool is
generating the "Host_" variants config files.  Second, to port that to
upstream Avocado-VT.

Thanks!

-- 
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]



signature.asc
Description: OpenPGP digital signature
___
Avocado-devel mailing list
Avocado-devel@redhat.com
https://www.redhat.com/mailman/listinfo/avocado-devel


Re: [Avocado-devel] avocado-vt: How to use Host_* variants?

2016-07-11 Thread Cleber Rosa


On 07/11/2016 04:01 AM, Lukáš Doktor wrote:
> Dne 11.7.2016 v 05:05 xutian napsal(a):
>>
>>
>> On 07/11/2016 07:49 AM, Cleber Rosa wrote:
>>>
>>> On 07/08/2016 06:18 AM, Lukáš Doktor wrote:
>>>> Dne 30.6.2016 v 21:21 Eduardo Habkost napsal(a):
>>>>> While trying to run the cpuid test cases using avocado-vt, I
>>>>> found out that the machine_rhel variants are being automatically
>>>>> filtered out. Then I found out that the most recent version of
>>>>> qemu_cpu.cfg depends on a "Host_RHEL" variant being defined.
>>>>>
>>>>> I don't know how this host-version check system works. Does
>>>>> anybody know how to make the Host_RHEL variant be defined and
>>>>> available when running the test cases under avocado-vt?
>>>>>
>>>>> Or is this Host_* magic not supported by avocado-vt yet and we
>>>>> can't run any of the variants containing "only Host_RHEL" under
>>>>> avocado-vt?
>>>>>
>>>> Hello Eduardo,
>>>>
>>>> I haven't played with that part for a while, but Host_RHEL used to be
>>>> set by the internal runner used by QA and I don't think it was added to
>>>> avocado-vt, therefor the filters should not be pushed upstream (or the
>>>> support for it should have been added as well).
>>>>
>>>> CC: Xu and Feng, do you guys know more about this?
>>>>
>>>> Regards,
>>>> Lukáš
>>>>
>>> Eduardo and Lukáš,
>>>
>>> As you've surely noticed, I was even more confused than you guys.  The
>>> extra confusion was caused by the fact that, when I started to review:
>>>
>>>   https://github.com/autotest/tp-qemu/pull/686
>>>
>>> I was still unaware of this thread.  Looks like a MUA problem, but
>>> that's now irrelevant.
>>>
>>> The important thing here is that under no circumstances can we have
>>> upstream code that depends on tools, configuration files, or know-how
>>> that's not upstream.  I'm not judging the "Host_*" variant creation
>>> mechanism at this point, but simply stating that *any* upstream user
>>> should be able to run tests.  At most, users should be able to read the
>>> documentation and set up their systems accordingly, but the information
>>> should be available.
>>>
>>> Feng, Xu,
>>>
>>> We really need your help.  First, to identify what kind of tool is
>>> generating the "Host_" variants config files.  Second, to port that to
>>> upstream Avocado-VT.
>> "Host_" variants generate by internal tool "staf-kvm", the configuration
>> used to load RHEL host special configuration. Internal guys keep such
>> kind of configuration because qemu-kvm-rhev and qemu-kvm has different
>> feature list or /worse//yet, same feature in /qemu-kvm for RHEL6 and
>> qemu-kvm for RHEL7 has different behave (eg. drive mirror).And it's a
>> internal qemu-kvm issue not related upstream user, so we keep it in
>> internal repo.
>>
> Hello Xu,
> 
> yep, that's what I thought. Btw isn't there a simpler way to distinguish
> between features? Most obvious would be `qemu-kvm -version` executed
> when `QContainer` is initialized. Then you can ask whether the qemu is
> `el6`, `el7`, `fc23` or sha from git.
> 
> Alternatively we can add OS detection to avocado-vt, that should not be
> that hard ;-)
> 
> Regards,
> Lukáš
> 
>> Hi Cleber,
>>
>> That's the story of the "Host_" variants.  If upstream doesn't like it,
>> do you have any suggestion for resolving it?
>>

Thanks for the info Xu.  As I've said before, the issue here is not
about the mechanism itself, but the fact there's no way for an upstream
user to use it.

IMHO we should start simply by porting what's done by "staff-kvm" into
either a contrib script or a "vt-bootstrap" action.  Even sample
configuration files would do at this point.

Then, at a later point, as Lukáš suggested, we can revisit how it's done.
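
As a very rough illustration of the kind of host detection such a
contrib script or bootstrap action could perform (hypothetical code,
not anything that exists in Avocado-VT today; the variant names it
returns are just placeholders):

    def host_variant(os_release="/etc/os-release"):
        """Guess a "Host_*" style variant name from /etc/os-release."""
        data = {}
        with open(os_release) as src:
            for line in src:
                if "=" in line:
                    key, value = line.rstrip().split("=", 1)
                    data[key] = value.strip('"')
        if data.get("ID") == "rhel":
            return "Host_RHEL.%s" % data.get("VERSION_ID", "").split(".")[0]
        if data.get("ID") == "fedora":
            return "Host_Fedora.%s" % data.get("VERSION_ID", "")
        return None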

Thanks!
- Cleber.

>> Thanks,
>> Xu
>>
>>>
>>> Thanks!
>>>
>>
> 

-- 
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]



signature.asc
Description: OpenPGP digital signature
___
Avocado-devel mailing list
Avocado-devel@redhat.com
https://www.redhat.com/mailman/listinfo/avocado-devel


Re: [Avocado-devel] RFC: Avocado multiplexer plugin

2016-07-18 Thread Cleber Rosa
>>>> Hi Lukáš.
>>>>
>>>> I believe we're in sync, but I miss the high level overview, or
>>>> at least review, of how params, variants and the multiplexer or
>>>> other plugins are all related to each other.
>>>>
>>>> Please check the definitions/examples below to see if we're in
>>>> sync:
>>>>
>>>> Params
>>>> --
>>>>
>>>> A dictionary of key/values, with an optional path (we could
>>>> simply call it prefix), which is used to identify the key
>>>> when there are multiple versions of it. The path is
>>>> interpreted from right to left to find a match.
>>>>
>>>> The Params data structure can be populated by multiple
>>>> sources.
>>>>
>>>> Example:
>>>> (implementation and API details are not discussed here)
>>>>
>>>> key: var1=a
>>>> path: /foo/bar/baz
>>>>
>>>> key: var1=b
>>>> path: /foo/bar
>>>>
>>>> key: var2=c
>>>> path: NULL (empty)
>>>>
>>>> get(key=var1, path=/foo/) ==> error ("/foo/var1" not found)
>>>> get(key=var1, path=/foo/*) ==> error (multiple var1)
>>>> get(key=var1, path=/foo/bar/baz/w/) ==> error
>>>> get(key=var1, path=/foo/bar/w/) ==> error
>>>>
>>>> get(key=var2) ==> c
>>>> get(key=var2, path=foobar) ==> error ("foobar/var2" not found)
>>>>
>>>> get(key=var1, path=/foo/bar/baz/) ==> a
>>>> (unique match for "/foo/bar/baz/var1")
>>>>
>>>> get(key=var1, path=/foo/bar/) ==> b
>>>> (unique match for "/foo/bar/var1/")
>>>>
>>>> get(key=var1, path=baz) ==> a
>>>> (unique match for "baz/var1")
>>>>
>>>> get(key=var1, path=bar) ==> b
>>>> (unique match for "bar/var1")
>>>>
>>>> This kind of "get" API is exposed in the Test API.
>>>>
>>>>
>>>> Variants
>>>> 
>>>>
>>>> Multiple sets of params, all with the same set of keys and
>>>> paths, but potentially different values. Each variant is
>>>> identified by a "Variant ID" (see the "Test ID RFC").
>>>>
>>>> The test runner is responsible for the association of tests
>>>> and variants. That is, the component creating the
>>>> variants has absolutely no visibility on which tests are
>>>> going to be associated with variants.
>>>>
>>>> This is also completely abstract to tests: they don't have
>>>> any visibility about which variant they're using, or which
>>>> variants exist.
>>>>
>>> Hello Ademar,
>>>
>>> Thank you for the overview, I probably should have included it. I
>>> omitted it
>>> as it's described in the documentation, so I only mentioned in the
>>> `Plugin
>>> AvocadoParams` that I don't think we should turn that part into a
>>> plugin.
>>>
>>> The variant, as described earlier, is the method which modifies the
>>> `test_template` and as you pointed out it compounds of `Variant ID` and
>>> `Params`. The way it works now it can go even further and alter all the
>>> test's arguments (name, methodName, params, base_logdir, tag, job,
>>> runner_queue) but it's not documented and might change in the future.
>>
>> OK, so I think we should change this. The layers should have
>> clear responsibilities and abstractions, with variants being
>> restricted to params only, as defined above.
>>
>> I don't think the component responsible for creating variants
>> needs any visibility or knowledge about tests.
>>
> Yes, there is no need for that, it was only simplification:
> 
> https://github.com/avocado-framework/avocado/pull/1293
> 

BTW, why was this PR closed?  Intended to be sent again with other work?

>>>
>>>> Given the above, the multiplexer (or any other component, like a
>>>> "cartesian config" implementation from Autotest) would be bound
>>>> to these APIs.
>>> The cartesian config is not related to params at all. Avocado-vt uses a
>>> hybrid mode and it replaces the params with their custom object, while
>>> keeping the `avocado` params in `test.avocado_params`. So in
>>> `avocado_vt`
>>> tests you don't have `self.params`, but you have `test.params` and
>>> `test.avocado_params`, where `test.params` is a dictionary and
>>> `test.avocado_params` the avocado params interface with path/key/value.
>>> Cartesian config produces variants not by creating test variants, but by
>>> adding multiple tests with different parameters to the test_suite.
>>
>> What I mean is that we probably could, in theory at least,
>> implement a plugin that parses a "cartesian config" and provides
>> the data as needed to fill the variants and param APIs I
>> described above. I'm not saying we should do that, much less that
>> it would be useful as a replacement for the current cartesian
>> config implementation in avocado-vt. I'm simply stating that once
>> we have a clear plugin API for Params and Variants, we should be
>> able to replace the multiplexer with other mechanisms that
>> provide a similar functionality.
>>
>> Thanks.
>>- Ademar
>>
> 
> In that case yes. You can see it in the conclusion that even the simpler
> version (parser->tree) is capable of using cartesian_config as source of
> params.
> 

So, to make sure we're on the same page:  we intend to allow users to
write and choose their own tree producers (it's pluggable).  With a
given tree producer active, the multiplex mechanism is going to be, at
this point, a single, non-pluggable one.

Right?

> Regards,
> Lukáš
> 

[1] - https://docs.python.org/2.6/glossary.html#term-iterable

-- 
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]



signature.asc
Description: OpenPGP digital signature
___
Avocado-devel mailing list
Avocado-devel@redhat.com
https://www.redhat.com/mailman/listinfo/avocado-devel


[Avocado-devel] Pre-Release Test Plan 39.0 - FAIL

2016-07-25 Thread Cleber Rosa
Test Plan: Release Test Plan
Run by 'cleber' at 2016-07-25T10:40:13.761379
PASS: 'Avocado source is sound':
FAIL: 'Avocado source does not contain spelling errors':
PASS: 'Avocado RPM build':
PASS: 'Avocado RPM install':
PASS: 'Avocado Test Run on RPM based installation':
PASS: 'Avocado Test Run on Virtual Machine':
PASS: 'Avocado Test Run on Remote Machine':
PASS: 'Avocado Remote Machine HTML report':
PASS: 'Avocado Server Source Checkout and Unittests':
PASS: 'Avocado Server Run':
PASS: 'Avocado Server Functional Test':
PASS: 'Avocado Virt and VT Source Checkout':
PASS: 'Avocado Virt Bootstrap':
FAIL: 'Avocado Virt Boot Test Run and HTML report': TestError: Process
died before it pushed early test_status.
FAIL: 'Avocado Virt - Assignment of values from the cmdline': avocado
run avocado-virt-tests/qemu/boot.py --sysinfo on --open-browser
FAIL: 'Avocado Virt - Migration test': TestError: Process died before it
pushed early test_status.
PASS: 'Avocado VT':
PASS: 'Avocado HTML report sysinfo':
PASS: 'Avocado HTML report links':
PASS: 'Paginator':

Code used:
==
avocado: 8c4f9cc9cbf06fc8d1d57e3a6049d54009658ad0
avocado-vt: 77073d14879835e2940f3d71b594301bf9e9ab2b
avocado-virt: 336e68b02874daaaf0fb51b183d082b5ba92cf21
avocado-virt-tests: 23e9f6ace369b09b6e53007e5dc759c18576914a
avocado-server: 1491de32cb4e0ad4c0e83e57d1139af7f5eafccf

Action items:
=
* Words to be added to spelling ignore list
* Avocado-virt issues to be resolved


-- 
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]



signature.asc
Description: OpenPGP digital signature


[Avocado-devel] Pre-Release Test Plan 39.0 - PASS

2016-07-25 Thread Cleber Rosa
Test Plan: Release Test Plan
Run by 'cleber' at 2016-07-25T18:22:39.855043
PASS: 'Avocado source is sound':
PASS: 'Avocado source does not contain spelling errors':
PASS: 'Avocado RPM build':
PASS: 'Avocado RPM install':
PASS: 'Avocado Test Run on RPM based installation':
PASS: 'Avocado Test Run on Virtual Machine':
PASS: 'Avocado Test Run on Remote Machine':
PASS: 'Avocado Remote Machine HTML report':
PASS: 'Avocado Server Source Checkout and Unittests':
PASS: 'Avocado Server Run':
PASS: 'Avocado Server Functional Test':
PASS: 'Avocado Virt and VT Source Checkout':
PASS: 'Avocado Virt Bootstrap':
PASS: 'Avocado Virt Boot Test Run and HTML report':
PASS: 'Avocado Virt - Assignment of values from the cmdline':
PASS: 'Avocado Virt - Migration test':
PASS: 'Avocado VT':
PASS: 'Avocado HTML report sysinfo':
PASS: 'Avocado HTML report links':
PASS: 'Paginator':

Code used:
==
avocado: ef4c97cb0afb477cea339d9660993cfdf431a3b2
avocado-vt: 77073d14879835e2940f3d71b594301bf9e9ab2b
avocado-virt: d1451b1d3e69c73a41a67cbdd39e7b04fe8d50ef
avocado-virt-tests: 23e9f6ace369b09b6e53007e5dc759c18576914a
avocado-server: 1491de32cb4e0ad4c0e83e57d1139af7f5eafccf

-- 
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]





signature.asc
Description: OpenPGP digital signature


[Avocado-devel] Avocado release 39: The Hateful Eight

2016-07-26 Thread Cleber Rosa
The Avocado team is proud to present another incremental release:
version 39.0, aka, "The Hateful Eight", is now available!

The major changes introduced in this version are listed below.

* Support for running tests in Docker containers.  In addition to
  running tests on a (libvirt based) Virtual Machine or on a remote host,
  you can now run tests in transient Docker containers.  The usage is as
  simple as::

$ avocado run mytests.py --docker ldoktor/fedora-avocado

  The container will be started using ``ldoktor/fedora-avocado`` as
  the image.  This image contains a Fedora-based system with Avocado
  already installed, and it's provided at the official Docker Hub.

* Introduction of the "Fail Fast" feature.

  By running a job with the ``--failfast`` flag, the job will be
  interrupted after the very first test failure.  If your job only
  makes sense if it's a complete PASS, this feature can save you a lot
  of time.

* Avocado supports replaying previous jobs, selected by using their
  Job IDs.  Now, it's also possible to use the special keyword
  ``latest``, which will cause Avocado to rerun the very last job.
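
  For instance (the exact option name should be checked against the job
  replay documentation, so treat this as an assumption), re-running the
  most recent job could look like::

$ avocado run --replay latest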

* Python's standard signal handling is restored for SIGPIPE, and thus
  for all tests running on Avocado.

  In previous releases, Avocado introduced a change that set the
  handler for SIGPIPE to the default OS action, which caused the
  application to be terminated.  This seemed to be the right approach
  when testing how the Avocado app would behave on broken pipes on the
  command line, but it introduced side effects to a lot of Python code.
  Instead of exceptions, the affected Python code would receive the
  signal itself.

  This is now reverted to the Python standard, and the signal behavior
  of Python based tests running on Avocado should not surprise anyone.
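
  A small standalone illustration of the two behaviors (plain Python,
  not Avocado code): with the OS default action a write to a broken
  pipe kills the process, while with Python's standard setup it
  surfaces as an exception the code can handle::

    import os
    import signal

    # Python's standard setup: SIGPIPE is ignored, so writing to a
    # broken pipe raises an exception instead of killing the process.
    signal.signal(signal.SIGPIPE, signal.SIG_IGN)

    read_fd, write_fd = os.pipe()
    os.close(read_fd)                 # no reader left: the pipe is broken
    try:
        os.write(write_fd, b"data")   # raises OSError (EPIPE)
    except OSError as details:
        print("broken pipe surfaced as an exception: %s" % details)

    # With signal.signal(signal.SIGPIPE, signal.SIG_DFL) instead, the
    # same os.write() would terminate the process.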

* The project release notes are now part of the official
  documentation.  That means that users can quickly find when a given
  change was introduced.

Together with the changes listed, a total of 38 changes made it into
this release.  For more information, please check out the complete
`Avocado changelog
<https://github.com/avocado-framework/avocado/compare/38.0...39.0>`_.

Sprint Theme


After so much love that we had on the previous version, let's twist
things a bit with an antagonistic title.  Info on this pretty good movie
by Tarantino can be found at:

 http://www.imdb.com/title/tt3460252/?ref_=nm_flmg_wr_2

 https://www.youtube.com/watch?v=6_UI1GzaWv0

The story line:

In the dead of a Wyoming winter, a bounty hunter and his prisoner
find shelter in a cabin currently inhabited by a collection of
nefarious characters.

Release Meeting
===

The Avocado release meetings are now open to the community via
Hangouts on Air.  The meetings are recorded and made available on the
Avocado Test Framework YouTube channel:

 https://www.youtube.com/channel/UC-RVZ_HFTbEztDM7wNY4NfA

For this release, you can watch the meeting on this link:

 https://www.youtube.com/watch?v=GotEH7SmHSw


Install Avocado
===

Instructions are available in our documentation on how to install
either with packages or from source:

 
http://avocado-framework.readthedocs.io/en/39.0/GetStartedGuide.html#installing-avocado

Updated RPM packages are available in the project repos for EPEL 6,
EPEL 7, Fedora 23 and the newly released Fedora 24.

Happy hacking and testing!

-- 
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]



signature.asc
Description: OpenPGP digital signature


[Avocado-devel] Avocado release 36.1 (lts)

2016-07-26 Thread Cleber Rosa
This is an announcement for the users of our Long Term Stability version
of Avocado.  This is a minor release that introduces bug fixes that are
considered important to our users.

For a full list of changes, please refer to:

 https://github.com/avocado-framework/avocado/compare/36.0lts...36.1

LTS in a nutshell
=

The LTS releases have a special cycle that lasts for
18 months.  Avocado usage in production environments should favor the
use of this LTS release, instead of non-LTS releases.

For more information, please refer to:

 https://www.redhat.com/archives/avocado-devel/2016-May/msg00025.html
 https://www.redhat.com/archives/avocado-devel/2016-April/msg00038.html

Install Avocado
===

Instructions are available in our documentation on how to install
either with packages or from source:

 
http://avocado-framework.readthedocs.io/en/36lts/GetStartedGuide.html#installing-avocado

Updated RPM packages are available in the project repos for EPEL 6,
EPEL 7, Fedora 22, Fedora 23 and the newly released Fedora 24.

Users subscribed to the LTS "channel" will get this 36.1 update, while
users using the non-LTS repo will probably be running 39.0 (also
released today) after an update.

Version notice (dropping the "lts" suffix)
==

We have noticed that some tools, including Python and pip, have trouble
with the version numbers we have been using for the LTS releases.
Because of that, we have then decided to drop the "lts" suffix from
version numbers.

Still, all releases with the 36.x major number, including this 36.1
release, will be LTS releases.  When a new major LTS release is
announced, it will follow the same pattern (without the "lts" suffix).

Happy hacking and testing!

-- 
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]





signature.asc
Description: OpenPGP digital signature


Re: [Avocado-devel] [RFC] Avocado Misc Tests repository policy

2016-08-03 Thread Cleber Rosa
On 08/03/2016 05:07 AM, Amador Pahim wrote:
> Hello,
> 
> We are receiving a big number of Pull Requests in avocado-misc-tests
> repository. If on the one hand it is really great to see the community
> using and contributing to Avocado, on the other hand the Avocado devel
> team should not be that involved in reviewing test code, since our
> business is to write Avocado itself.
> 
> This RFC aims to outline the workflow of the Avocado devel team
> regarding the avocado-misc-tests repository.
> 
> Motivation
> 
> Code review is a time consuming activity and reviewing test code
> is not part of the Avocado devel team's business. Given the high number
> of Pull Requests on the avocado-misc-tests repository, we need a policy
> to officially state our participation there.
> 
> Proposal
> ===
> We don't believe that we should simply stop looking at the
> avocado-misc-tests repository. A good number of bugfixes and features
> in Avocado were born as consequence of tests posted there.
> 
> To find a balance, the initial proposal is that the Avocado devel
> team, currently the only maintainers of that repository, will only get
> involved after a third party code review and ACK of a given Pull
> Request. That way, the code author is also in charge of finding
> someone to review their code, whether the reviewer is from the same
> company or not. We can always invite people to review code there, but
> it's essentially the author's responsibility.
> 
> For the reviewer, it is expected that he/she:
> - Reads the code, commenting with suggestions of improvements: good
> practices, general standards, effectiveness of code, verify comments
> and docstrings.
> - Test the code: run the test and make sure it's working as expected.

I'd suggest, at least as an experiment, attaching the generated job.log
to the review process. GitHub has an "Attach files by dragging & dropping
or selecting them" link.  This can help both process-wise (kind of like
ticking a check box) and for secondary reviewers to debug possible
failures when running the same code.

> - Ping the authors of Pull Requests already reviewed and not updated
> for a long time.
> - When the code is considered ready, comment the Pull Request with an
> 'Looks Good To Me'.
> 
> The 'Looks Good To Me' comment will be the trigger for the maintainers
> to go there and take a final look on the Pull Request and merge it.
> 
> Expected Results
> ==
> The expected result is to decrease the load of Avocado devel team in
> regards to the code review in that repository.
> Another important expected outcome of this process is to grant merge
> permission to (aka promote to maintainer) those assiduous reviewers
> who deliver good quality reviews.
> When this RFC is considered ready, we will update our documentation
> and the avocado-misc-test README file to reflect the information.
> 
> Additional Information
> 
> Any individual willing to make the code review is eligible to do so.
> And the process is simple. Just go there and review the code.
> Given the high volume of code coming from IBM, I had a chat with
> Praveen Pandey, an IBMer and assiduous author of Pull Requests for
> avocado-misc-tests, and he agreed to make reviews in
> avocado-misc-tests.
> 
> 
> Looking forward to read your comments.
> --
> apahim
> 

LGTM.

-- 
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]



signature.asc
Description: OpenPGP digital signature


[Avocado-devel] Avocado release 36.2 (lts)

2016-08-04 Thread Cleber Rosa
This is an announcement for the users of our Long Term Stability version
of Avocado.  This is a minor release that introduces one bug fix that
was considered important to our users:

 
https://github.com/avocado-framework/avocado/commit/b2f595c8540d964e534dbacb60431dc1719914c0

For a full list of changes, please refer to:

 https://github.com/avocado-framework/avocado/compare/36.1...36.2

LTS in a nutshell
=

The LTS releases have a special cycle that lasts for
18 months.  Avocado usage in production environments should favor the
use of this LTS release, instead of non-LTS releases.

For more information, please refer to:

 https://www.redhat.com/archives/avocado-devel/2016-May/msg00025.html
 https://www.redhat.com/archives/avocado-devel/2016-April/msg00038.html

Install Avocado
===

Instructions are available in our documentation on how to install
either with packages or from source:

 
http://avocado-framework.readthedocs.io/en/36lts/GetStartedGuide.html#installing-avocado

Updated RPM packages are available in the project repos for EPEL 6,
EPEL 7, Fedora 22, 23 and 24.

Users subscribed to the LTS "channel" will get this 36.2 update, while
users using the non-LTS repo will be running 39.0 after an update.

Version notice (dropping the "lts" suffix)
==

We have noticed that some tools, including Python and pip, have trouble
with the version numbers we have been using for the LTS releases.
Because of that, we have then decided to drop the "lts" suffix from
version numbers.

Still, all releases with the 36.x major number, including this 36.2
release, will be LTS releases.  When a new major LTS release is
announced, it will follow the same pattern (without the "lts" suffix).

Happy hacking and testing!

-- 
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]




signature.asc
Description: OpenPGP digital signature


[Avocado-devel] Avocado release 40.0: Dr Who

2016-08-15 Thread Cleber Rosa
Hello everyone,

This is yet another Avocado release announcement!  Since we're now
hosting the release notes alongside our official documentation, please
refer to the following link for the complete information about this release:

http://avocado-framework.readthedocs.io/en/40.0/release_notes/40_0.html

Installing Avocado
==

Instructions are available in our documentation on how to install
either with packages or from source:

 
http://avocado-framework.readthedocs.io/en/40.0/GetStartedGuide.html#installing-avocado

Updated RPM packages are available in the project repos for EPEL 6,
EPEL 7, Fedora 23 and 24.

Happy hacking and testing!

-- 
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]





signature.asc
Description: OpenPGP digital signature


Re: [Avocado-devel] (potential) design issue in multiplexer

2016-08-17 Thread Cleber Rosa
/qemu` -> failure
> 
> Yes, one could solve it by defining another `mux-path` to `/my` or even
> `/my/kvm`, but that just adds the complexity.
> 
> Let me also mention why do we like to extend nodes from right. Imagine
> we expect `disk_type` in `/virt/hw/disk/*`. The yaml file might look
> like this:
> 
> ```
> virt:
> hw:
> disk: !mux
> virtio_blk:
> disk_type: virtio_blk
> virtio_scsi:
> disk_type: virtio_scsi
> ```
> 
> Now the user develops `virtio_scsi_next` and he wants to compare them.
> Today he simply merges this config with the above:
> 
> ```
> virt:
> hw:
> disk: !mux
> virtio_scsi_debug:
> disk_type: virtio_scsi
> enable_next: True
> ```
> and avocado produces 3 variants, where `params.get("disk_type",
> "/virt/hw/disk/*")` reports the 3 defined variants. If we try to do the
> same with `*/virt/hw/disk` we have to modify the first file:
> 
> ```
> !mux
> virtio_blk:
> virt:
> hw:
> disk:
> disk_type: virtio_blk
> virtio_scsi:
> virt:
> hw:
> disk:
> disk_type: virtio_scsi
> ```
> 
> One would want to prepend yet another node in front of it, because we
> don't want to vary over disk types only, but also over other items (like
> cpus, ...). The problem is, that the first category has to again be
> unique to the whole multiplex tree in order to not clash with the other
> items. And that is what the tree path was actually introduced, to get
> rid of this global-namespace.
> 
> Right now the only solution I see is to change the way `!mux` works.
> Currently it multiplexes all the children, but (not sure if easily done)
> it should only define the children which mix together. Therefore (back
> to the original example) one would be able to say:
> 
> ```
> plugins:
> virt:
> qemu:
> enabled: !newmux
> kvm: on
> disabled: !newmux
> kvm: off
> paths:
> qemu_dst_bin: None
> qemu_img_bin: None
> qemu_bin: None
> migrate:
> timeout: 60.0
> ```
> 
> which would produce:
> 
> ```
>  ┗━━ plugins
>   ┗━━ virt
>┣━━ qemu
>┃╠══ enabled
>┃║ → kvm: on
>┃╠══ disabled
>┃┃ → kvm: off
>┃┣━━ paths
>┃┃ → qemu_dst_bin: None
>┃┃ → qemu_img_bin: None
>┃┃ → qemu_bin: None
>┃┗━━ migrate
>┃  → timeout: 60.0
> ```
> 
> and in terms of variants:
> 
> ```

Even though this is an example, and we're worried about core concepts, I
fail to see the point of the "/paths" and "/migrate" nodes here.  Both
the "enabled" and "disabled" nodes actually mean the user intends
different multiplexed variants, while "/paths" and "/migrate" are "bins"
for other values.

It looks like your proposal for a new type of "!mux" tag/behavior is
partially due to this mixed use of nodes (to be multiplexed and to
serve as "bins" for misc values).

> Variant 1:/plugins/virt/qemu/enabled, /plugins/virt/paths,
> /plugins/virt/migrate
> Variant 2:/plugins/virt/qemu/disabled, /plugins/virt/paths,
> /plugins/virt/migrate
> ```
> 
> I'm looking forward to your suggestions and I hope I'm wrong and that
> the multiplexer (at least the full-spec) can handle this nicely.
> 
> Kind regards,
> Lukáš
> 

-- 
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]



signature.asc
Description: OpenPGP digital signature


Re: [Avocado-devel] Broken cartesian_config by commit 81c6ce860b2f625e

2016-08-18 Thread Cleber Rosa
On 08/18/2016 06:50 AM, Andrei Stepanov wrote:
> Hello.
> 

Hi Andrei,

> We have now broken cartesian_config.py.
> 
> It was broken by:
> 
> commit 81c6ce860b2f625ec31533779c479cf9bf14af38
> Author: Xu Tian 
> Date:   Mon May 23 15:10:48 2016 +0800
> 
> virttest.cartesian_config: enable postfix_parse
> 
> postfix string '_fixed', '_max' and '_min' doesn't work, because
> 'postfix_parse' not call in get_dict function. this commit enable
> it, because tp-qemu tests need these params.
> 
> Signed-off-by: Xu Tian 
> 
> 
> 
> The error is:
> 
> [root@localhost cfg]# cartesian_config.py tests.cfg
> Traceback (most recent call last):
>   File "/mnt/tests/spice/qe-tests/avocado-vt/virttest/cartesian_config.py",
> line 2301, in 
> print_dicts(options, dicts)
>   File "/mnt/tests/spice/qe-tests/avocado-vt/virttest/cartesian_config.py",
> line 2187, in print_dicts
> print_dicts_default(options, dicts)
>   File "/mnt/tests/spice/qe-tests/avocado-vt/virttest/cartesian_config.py",
> line 2160, in print_dicts_def
> ault
> for count, dic in enumerate(dicts):
>   File "/mnt/tests/spice/qe-tests/avocado-vt/virttest/cartesian_config.py",
> line 1939, in get_dicts
> for d in self.get_dicts_plain(node, ctx, content, shortname, dep):
>   File "/mnt/tests/spice/qe-tests/avocado-vt/virttest/cartesian_config.py",
> line 2145, in get_dicts_plain
> for d in self.get_dicts(n, ctx, new_content, shortname, dep):
>   File "/mnt/tests/spice/qe-tests/avocado-vt/virttest/cartesian_config.py",
> line 1939, in get_dicts
> for d in self.get_dicts_plain(node, ctx, content, shortname, dep):
> 
> .
> 
>   File "/mnt/tests/spice/qe-tests/avocado-vt/virttest/cartesian_config.py",
> line 2145, in get_dicts_plain
> for d in self.get_dicts(n, ctx, new_content, shortname, dep):
>   File "/mnt/tests/spice/qe-tests/avocado-vt/virttest/cartesian_config.py",
> line 1942, in get_dicts
> postfix_parse(d)
>   File "/mnt/tests/spice/qe-tests/avocado-vt/virttest/cartesian_config.py",
> line 2244, in postfix_parse
> if key.endswith("_max"):
> AttributeError: 'tuple' object has no attribute 'endswith'
> 
> 
> If I do:
> 
> git revert 81c6ce860b2f625ec31533779c479cf9bf14af38
> 
> then I do not have such an error. Please fix.
> 

Please propose the revert as a PR.  Xu and the other Avocado-VT
maintainers can review, comment and (optionally) apply your proposal.

Thanks!


-- 
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]



Re: [Avocado-devel] option --output-check-record behavior

2016-09-08 Thread Cleber Rosa


On 09/08/2016 10:25 AM, Marcos E. Matsunaga wrote:
> Hi All,
> 
> I am new to avocado and have just started to look into it.
> 
> I have been playing with avocado on Fedora 24 for a few weeks. I wrote a
> small script to run commands and was exploring the option
> "--output-check-record", but it never populates the files stderr.expected
> and stdout.expected. Instead, it prints an error with "[stderr]" in the
> job.log file. My understanding is that the output (stderr and stdout)
> of commands/scripts executed by avocado would be captured and saved in
> those files (as in the synctest.py example), but it isn't. I want to
> know if I am doing something wrong or it is a bug.
> 

Hi Marcos,

Avocado creates the `stdout` and `stderr` files in the test result
directory.  In the synctest example, for instance, mine contains:

$ avocado run examples/tests/synctest.py
$ cat
~/avocado/job-results/latest/test-results/1-examples_tests_synctest.py\:SyncTest.test/stdout

PAR : waiting
PASS : sync interrupted

`stderr` is actually empty for that test:

$ wc -l
~/avocado/job-results/latest/test-results/1-examples_tests_synctest.py\:SyncTest.test/stderr
0
/home/cleber/avocado/job-results/latest/test-results/1-examples_tests_synctest.py:SyncTest.test/stderr

What you have to do is, once you're satisfied with those outputs and
they're considered "the gold standard", move them to the test
*data directory*.

So, if your test is hosted at `/tests/xl.py`, you'd create the
`/tests/xl.py.data` directory, and put those files there, named
`stdout.expected` and `stderr.expected`.

Whenever you run `avocado run --output-check-record all /tests/xl.py`,
those files will be used and the output of the *current* test execution
will be compared to those "gold standards".

> The script is very simple and the way I execute the command is:
> 
> cmd = ('/usr/sbin/xl create /VM/guest1/vm.cfg')
> if utils.system(cmd) == "0":
>   pass
> else:
>   return False
> 
> The command send to stdout:
> 
> Parsing config from /VM/guest1/vm.cfg
> 
> I run the test as:
> 
> avocado run --output-check-record all xentest.py
> 
> The job.log file contains:
> 
> 2016-09-07 13:04:48,015 test L0214 INFO | START
> 1-/root/avocado-vt/io-fs-autotest-xen/xen/tests/xentest.py:xentest.test_xen_start_stop;1
> 
> 2016-09-07 13:04:48,051 xentest  L0033 INFO |
> 1-/root/avocado-vt/io-fs-autotest-xen/xen/tests/xentest.py:xentest.test_xen_start_stop;1:
> Running action create
> 2016-09-07 13:04:49,067 utilsL0151 ERROR| [stderr] Parsing
> config from /VM/guest1/vm.cfg
> 2016-09-07 13:04:49,523 test L0586 INFO | PASS
> 1-/root/avocado-vt/io-fs-autotest-xen/xen/tests/xentest.py:xentest.test_xen_start_stop;1
> 
> 
> Thanks for your time and help.

Let me know if it's clear now! And thanks for trying Avocado out!

> 
> 

-- 
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]



signature.asc
Description: OpenPGP digital signature


Re: [Avocado-devel] option --output-check-record behavior

2016-09-08 Thread Cleber Rosa

On 09/08/2016 11:34 AM, Marcos E. Matsunaga wrote:
> Hi Cleber,
> 
> Thanks for your quick reply. That's exactly what I understood, but here
> is what is happening
> 
> I have a directory ~/avocado/xen/tests where I have the xentest.py
> script. When I execute it, it does create the directory
> ~/avocado/xen/tests/xentest.py.data with stderr.expected and
> stdout.expected (empty). It also creates the two files (stdout and
> stderr) in the job-results/latest directory, but also empty.
> 
> The weird thing is that instead of saving, it reports to the job.log as
> an error "L0151 ERROR| [stderr] Parsing config from /VM/guest1/vm.cf".
> 
> That's why I think I am missing something.

Can you post the full test code and the resulting `job.log` file?

> 
> Thanks again for your help.
> 
> On 09/08/2016 02:59 PM, Cleber Rosa wrote:
>>
>> On 09/08/2016 10:25 AM, Marcos E. Matsunaga wrote:
>>> Hi All,
>>>
>>> I am new to avocado and have just started to look into it.
>>>
>>> I have been playing with avocado on Fedora 24 for a few weeks. I wrote a
>>> small script to run commands and was exploring the option
>>> "--output-check-record", but it never populate the files stderr.expected
>>> and stdout.expected. Instead, it prints an error with "[stderr]" in the
>>> job.log file. My understanding is that the output (stderr and stdout)
>>> of commands/scripts executed by avocado would be captured and saved on
>>> those files (like on synctest.py example), but it doesn't. I want to
>>> know if I am doing something wrong or it is a bug.
>>>
>> Hi Marcos,
>>
>> Avocado creates the `stdout` and `stderr` files in the test result
>> directory.  In the synctest example, for instance, my contains:
>>
>> $ avocado run examples/tests/synctest.py
>> $ cat
>> ~/avocado/job-results/latest/test-results/1-examples_tests_synctest.py\:SyncTest.test/stdout
>>
>>
>> PAR : waiting
>> PASS : sync interrupted
>>
>> `stderr` is actually empty for that test:
>>
>> $ wc -l
>> ~/avocado/job-results/latest/test-results/1-examples_tests_synctest.py\:SyncTest.test/stderr
>>
>> 0
>> /home/cleber/avocado/job-results/latest/test-results/1-examples_tests_synctest.py:SyncTest.test/stderr
>>
>>
>> What you have to do is, once you're satisfied with those outputs, and
>> they're considered "the gold standard", you'd move those to the test
>> *data directory*.
>>
>> So, if you test is hosted at, `/tests/xl.py`, you'd created the
>> `/tests/xl.py.data`, and put those files there, named `stdout.expected`
>> and `stderr.expected`.
>>
>> Whenever you run `avocado run --output-check-record all /tests/xl.py`,
>> those files will be used and the output of the *current* test execution
>> will be compared to those "gold standards".
>>
>>> The script is very simple and the way I execute the command is:
>>>
>>> cmd = ('/usr/sbin/xl create /VM/guest1/vm.cfg')
>>> if utils.system(cmd) == "0":
>>>pass
>>> else:
>>>return False
>>>
>>> The command send to stdout:
>>>
>>> Parsing config from /VM/guest1/vm.cfg
>>>
>>> I run the test as:
>>>
>>> avocado run --output-check-record all xentest.py
>>>
>>> The job.log file contains:
>>>
>>> 2016-09-07 13:04:48,015 test L0214 INFO | START
>>> 1-/root/avocado-vt/io-fs-autotest-xen/xen/tests/xentest.py:xentest.test_xen_start_stop;1
>>>
>>>
>>> 2016-09-07 13:04:48,051 xentest  L0033 INFO |
>>> 1-/root/avocado-vt/io-fs-autotest-xen/xen/tests/xentest.py:xentest.test_xen_start_stop;1:
>>>
>>> Running action create
>>> 2016-09-07 13:04:49,067 utilsL0151 ERROR| [stderr] Parsing
>>> config from /VM/guest1/vm.cfg
>>> 2016-09-07 13:04:49,523 test L0586 INFO | PASS
>>> 1-/root/avocado-vt/io-fs-autotest-xen/xen/tests/xentest.py:xentest.test_xen_start_stop;1
>>>
>>>
>>>
>>> Thanks for your time and help.
>> Let me know if it's clear now! And thanks for trying Avocado out!
>>
>>>
> 

-- 
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]



signature.asc
Description: OpenPGP digital signature


Re: [Avocado-devel] option --output-check-record behavior

2016-09-08 Thread Cleber Rosa
On 09/08/2016 01:50 PM, Marcos E. Matsunaga wrote:
> 
> On 09/08/2016 05:44 PM, Cleber Rosa wrote:
>> On 09/08/2016 11:34 AM, Marcos E. Matsunaga wrote:
>>> Hi Cleber,
>>>
>>> Thanks for your quick reply. That's exactly what I understood, but here
>>> is what is happening
>>>
>>> I have a directory ~/avocado/xen/tests where I have the xentest.py
>>> script. When I execute it, it does create the directory
>>> ~/avocado/xen/tests/xentest.py.data with stderr.expected and
>>> stdout.expected (empty). It also creates the two files (stdout and
>>> stderr) in the job-results/latest directory, but also empty.
>>>
>>> The weird thing is that instead of saving, it reports to the job.log as
>>> an error "L0151 ERROR| [stderr] Parsing config from /VM/guest1/vm.cf".
>>>
>>> That's why I think I am missing something.
>> Can you post the full test code and the resulting `job.log` file?
> Sure.. It is attached.
> And the multiplex file I am using is:
> 
> xentest:
> guest1:
> action: !mux
> start:
> run_action: "create"
> domain_name: "perf1"
> sleep_time: 1
> stop:
>     run_action: "shutdown"
> domain_name: "perf1"
> sleep_time: 60
> guest_cfg: /Repo/VM/perf1/vm.cfg
> 
>>
>>> Thanks again for your help.
>>>
>>> On 09/08/2016 02:59 PM, Cleber Rosa wrote:
>>>> On 09/08/2016 10:25 AM, Marcos E. Matsunaga wrote:
>>>>> Hi All,
>>>>>
>>>>> I am new to avocado and have just started to look into it.
>>>>>
>>>>> I have been playing with avocado on Fedora 24 for a few weeks. I
>>>>> wrote a
>>>>> small script to run commands and was exploring the option
>>>>> "--output-check-record", but it never populate the files
>>>>> stderr.expected
>>>>> and stdout.expected. Instead, it prints an error with "[stderr]" in
>>>>> the
>>>>> job.log file. My understanding is that the output (stderr and stdout)
>>>>> of commands/scripts executed by avocado would be captured and saved on
>>>>> those files (like on synctest.py example), but it doesn't. I want to
>>>>> know if I am doing something wrong or it is a bug.
>>>>>
>>>> Hi Marcos,
>>>>
>>>> Avocado creates the `stdout` and `stderr` files in the test result
>>>> directory.  In the synctest example, for instance, my contains:
>>>>
>>>> $ avocado run examples/tests/synctest.py
>>>> $ cat
>>>> ~/avocado/job-results/latest/test-results/1-examples_tests_synctest.py\:SyncTest.test/stdout
>>>>
>>>>
>>>>
>>>> PAR : waiting
>>>> PASS : sync interrupted
>>>>
>>>> `stderr` is actually empty for that test:
>>>>
>>>> $ wc -l
>>>> ~/avocado/job-results/latest/test-results/1-examples_tests_synctest.py\:SyncTest.test/stderr
>>>>
>>>>
>>>> 0
>>>> /home/cleber/avocado/job-results/latest/test-results/1-examples_tests_synctest.py:SyncTest.test/stderr
>>>>
>>>>
>>>>
>>>> What you have to do is, once you're satisfied with those outputs, and
>>>> they're considered "the gold standard", you'd move those to the test
>>>> *data directory*.
>>>>
>>>> So, if you test is hosted at, `/tests/xl.py`, you'd created the
>>>> `/tests/xl.py.data`, and put those files there, named `stdout.expected`
>>>> and `stderr.expected`.
>>>>
>>>> Whenever you run `avocado run --output-check-record all /tests/xl.py`,
>>>> those files will be used and the output of the *current* test execution
>>>> will be compared to those "gold standards".
>>>>
>>>>> The script is very simple and the way I execute the command is:
>>>>>
>>>>> cmd = ('/usr/sbin/xl create /VM/guest1/vm.cfg')
>>>>> if utils.system(cmd) == "0":

The issue seems to be related to the fact that you're using old autotest
libraries to execute your external commands.

The output record/check support is built into Avocado's libraries,
namely `avocado.utils.process`.

Try to replace your code with:

   from avocado.utils import process
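
A minimal sketch of how the quoted snippet could look with that module
(the command comes from the quoted test; treat the rest as an
illustration, not a drop-in replacement):

    from avocado import Test
    from avocado.utils import process

    class XenTest(Test):

        def test_create(self):
            # process.run() captures stdout/stderr (which is what the
            # output check record feature relies on) and raises on a
            # non-zero exit status by default.
            process.run('/usr/sbin/xl create /VM/guest1/vm.cfg')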

[Avocado-devel] Avocado release 41.0: Outlander

2016-09-12 Thread Cleber Rosa
Hello everyone,

This is yet another Avocado release announcement!  Since we're now
hosting the release notes alongside our official documentation, please
refer to the following link for the complete information about this release:

http://avocado-framework.readthedocs.io/en/41.0/release_notes/41_0.html

Installing Avocado
==

Instructions are available in our documentation on how to install
either with packages or from source:

 
http://avocado-framework.readthedocs.io/en/41.0/GetStartedGuide.html#installing-avocado

Updated RPM packages are available for EPEL 6, EPEL 7, Fedora 23 and 24.

Happy hacking and testing!

-- 
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]





signature.asc
Description: OpenPGP digital signature


Re: [Avocado-devel] Test assumptions question.

2016-09-14 Thread Cleber Rosa


On 09/14/2016 09:57 AM, Dmitry Monakhov wrote:
> Fam Zheng  writes:
> 
>> On Wed, 09/14 10:51, Dmitry Monakhov wrote:
>>> class AvocadoRelease(Test):
>>>
>>> def setUp(self):
>>> self.log.info("do setUp: install requirements, fetch source")
>>>
>>> def test_a(self):
>>> self.log.info("do test_a: inspekt lint")
>>>
>>> def test_b(self):
>>> self.log.info("do test_b: inspekt style")
>>>
>>> def tearDown(self):
>>> self.log.info("do tearDown")
>>> My assumptions was that test sequence will be:
>>> do setUp
>>> do test_a: inspekt lint
>>> do test_b: inspekt style
>>> do tearDown
>>> But it appears that each testcase is wrapped with setUp()/tearDown()
>>> 
>>> START 1-simpletest.py:AvocadoReliase.test_a
>>> do setUp: install requirements, fetch source
>>> do test_a: inspekt lint
>>> do tearDown
>>> PASS 1-simpletest.py:AvocadoReliase.test_a
>>> START 2-simpletest.py:AvocadoReliase.test_b
>>> do setUp: install requirements, fetch source
>>> do test_b: inspekt style
>>> do tearDown
>>> PASS 2-simpletest.py:AvocadoReliase.test_b
>>> 
>>> This is not obvious. And it makes it hard to divide a test into
>>> fine-grained testcases because setUp()/tearDown() for each test may
>>> be too intrusive. What is a convenient way to implement this scenario?
>>
>> This is the interface of Python's unittest.TestCase (base class of avocado 
>> Test
>> class).  It also offers setUpClass and tearDownClass that do what you want
>> above.
>>
>> See also `pydoc unittest.TestCase`.
> Indeed. But avocado.Test() does not call setUpClass/tearDownClass,
> it calls only setUp/tearDown. AFAIU this is done that way because
> each test is executed in a subprocess. See runner.py:TestRunner.run_test:
> proc = multiprocessing.Process(target=self._run_test,
>args=(test_factory, queue,))
> 

Right, that's the exact reason.  It was also better discussed/explained
here:

https://github.com/avocado-framework/avocado/issues/1148#issuecomment-245686047

Which makes me think that we should officially document that:

https://trello.com/c/bN7w5Vzh/826-unittest-compatibility-document-process-model-and-class-setup-teardown-support

> IMHO this is a bit unusual, but cool because it allows supporting
> concurrent test execution in the future.
> IMHO it would be nice to document it somewhere, or for example add a
> compat mode where:
> setUpClass/tearDownClass are called similarly to setUp/tearDown, but
> dump a warning in the log.
> 
> BTW: Are any chance to implement parallel executions in near future ?
> https://trello.com/c/xNeR2slj/255-support-running-tests-in-parallel

We have had this in mind for a while, but other features have been
considered higher priority.  Discussing it and sending prototypes is a
good way to get this running.

Thanks!

>> Fam

-- 
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]



signature.asc
Description: OpenPGP digital signature


Re: [Avocado-devel] New questions

2016-09-14 Thread Cleber Rosa


On 09/14/2016 12:59 PM, Lucas Meneghel Rodrigues wrote:
> On Wed, Sep 14, 2016 at 8:32 AM Marcos E. Matsunaga <
> marcos.matsun...@oracle.com> wrote:
> 
>> Hi Folks,
>>
>> I have some questions about how avocado works.
>>
>> 1. If I run avocado and give it a directory that has all tests. Is there
>> a way to specify the order of execution? I mean, if I name the files
>> 001-xxx.py, 010-aa.py, will it execute 001-xxx.py before 010-aa.py or it
>> doesn't follow an alphabetical order?
>>
> 
> 
> There is - You can specify their order of execution in the command line:
> 
> avocado run failtest.py raise.py doublefree.py
> JOB ID : 6047dedc2996815659a75841f00518fa0f83b1ee
> JOB LOG:
> /home/lmr/avocado/job-results/job-2016-09-14T12.53-6047ded/job.log
> TESTS  : 3
>  (1/3) failtest.py:FailTest.test: FAIL (0.00 s)
>  (2/3) raise.py:Raise.test: PASS (0.11 s)
>  (3/3) doublefree.py:DoubleFreeTest.test: PASS (1.02 s)
> RESULTS: PASS 2 | ERROR 0 | FAIL 1 | SKIP 0 | WARN 0 | INTERRUPT 0
> TESTS TIME : 1.13 s
> JOB HTML   :
> /home/lmr/avocado/job-results/job-2016-09-14T12.53-6047ded/html/results.html
> 
> 
> 
>> 2. Lets take into consideration that same directory. Some of the scripts
>> will have multiplex configuration files. Does avocado automatically look
>> at some specific directory for those multiplex configuration files? I've
>> tried to add them to the data, cfg and even the 

[Avocado-devel] RFC on Job phases

2016-09-21 Thread Cleber Rosa
ly defined in
`Job.test_suite`.

Post-tests execution


A new job execution phase called "post_tests" will be created.  The
dispatcher instantiation, if not already performed during the
"pre_tests" phase, will be done here.  This is what
`avocado.core.job.Job.post_tests` can look something like::

  ...
  def post_tests(self):
  if self.job_pre_post_dispatcher is None:
  self.job_pre_post_dispatcher = dispatcher.JobPrePostDispatcher()

output.log_plugin_failures(self.job_pre_post_dispatcher.load_failures)
  self.job_pre_post_dispatcher.map_methods('post', self)
  ...

Post-job plugins would be renamed to the better-suited "job post-tests"
plugins (note the plural).

Job overall execution
-

The job overall execution is certainly a valid use case.  That is, in
some cases, it may be desirable to create the test suite, run the
pre-tests execution plugins, run the tests and all other steps defined
here at once.

A method called `run()` can formally be defined as the execution of
all the phases (steps) of a job.  Its implementation could look
something like::

  def run(self):
  self.create_test_suite()
  # at this point, self.test_suite contains all tests resolved by
  # the various test loaders enabled,  which could in fact be
  # an empty test suite.

  # now run the pre_tests step, which include pre-tests execution
  # plugins
  self.pre_tests()

  # run all tests
  self.run_tests()

  # now run the post_tests step, which include post-tests
  # execution plugins
  self.post_tests()

Job results
---

There has already been a lot of work towards moving the generation of
results outside the job.  The proposal here is to maintain the same
approach.

Conclusion
==

The most important point here is to properly define steps and
responsibilities of each job phase.

For that, each job phase should be self-contained, and it should be
possible to skip one of the defined steps and still have a
functioning job instance.

One quick example is a custom Job instance written like this::

  ...
  job = job.Job(args)
  job.create_test_suite()
  job.run_tests()
  ...

This Job will have no pre/post-tests plugins executed.  Other than that,
it should still perform a fully functional job.

-- 
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]



signature.asc
Description: OpenPGP digital signature


Re: [Avocado-devel] avocado-vt: 'avocado list' slower than usual

2016-09-22 Thread Cleber Rosa
On 09/22/2016 03:34 PM, Eduardo Habkost wrote:
> Hi,
> 
> I haven't been using avocado-vt for a while, but today I have
> updated and git-cleaned all my git clones (autotest, avocado,
> avocado-vt, tp-qemu), removed my old ~/avocado dir, re-ran
> vt-bootstrap, and noticed that 'avocado list' is very slow. It is
> taking 29 seconds to run and list the avocado-vt test cases. I
> don't remember seeing it take so long to run, before.
> 
> When I interrupt avocado, I get a backtrace that shows a very
> deep call chain with recursive get_dicts() calls inside
> virttest/cartesian_config.py (see below).
> 
> Is this expected? Has anybody else noticed this recently?
> 

I'll try to reproduce.  Can you please check the exact avocado and
avocado-vt versions (commits) you're using?

Thanks!

-- 
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]



Re: [Avocado-devel] RFC on Job phases

2016-09-29 Thread Cleber Rosa
On 09/21/2016 05:08 PM, Cleber Rosa wrote:
> This is a simple proposal for the execution phases/steps of the
> Avocado Job class.  Based on its natural organic evolution, some of
> the job steps purposes do not have a clearly defined responsibility.
> 
> The original motivation of this RFC is to discuss and fix issue
> reported on GitHub PR #1412.  On that issue/PR, it was noticed that
> result plugins would be run after the `--archive|-z` feature, thus
> missing some of the results.  To add to the confusion, the user's own
> Post-Job plugin was also executed in an order that was not intended.
> 
> Clear job phases, and also order control on plugin execution (not the
> scope of this RFC) are being proposed as two abstract mechanisms that
> would allow a definitive fix for that (and other similar) issues.
> 

FYI, a pull request was posted at:

https://github.com/avocado-framework/avocado/pull/1498

That gives a general idea of what this would look like.

-- 
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]



signature.asc
Description: OpenPGP digital signature


[Avocado-devel] RFC: Plugin execution order

2016-09-29 Thread Cleber Rosa
As detailed in the following card:

  https://trello.com/c/oWzrV48E/837-execution-order-support-in-plugins

It should be possible to specify a custom order for plugins to be
executed by setting specific configuration.

The first observed approach would be to create a section
called `[plugins.<type>]`, where the `<type>` conforms to the
description of fully qualified plugin names given here:


https://github.com/avocado-framework/avocado/pull/1495/commits/193a10ce98cb5747395eefcb485dd452696b4b11#diff-0f4f89ace79fa15278d9b283c2d9d9b2R84

Then, a key named `order` would be created, containing the short names
as a list.  Enabled plugins not listed will be executed *after* the
listed plugins, but in a non-determined order.

For instance, consider the following entry points::

  'avocado.plugins.result' : [
 'xunit = avocado.plugins.xunit:XUnitResult',
 'json = avocado.plugins.jsonresult:JSONResult',
 'archive = avocado.plugins.archive:Archive',
 'mail = avocado.plugins.mail:Mail',
 'html = avocado_result_html:HTMLResult'
   ]

We can say that:

* The plugin type, according to the fully qualified plugin name
  definition here is `result`.

* The plugin fully qualified names are:
  - result.xunit
  - result.json
  - result.archive
  - result.mail
  - result.html

* The short names for plugins of type "result" are:
  - xunit
  - json
  - archive
  - mail
  - html

To make sure that the mail plugin is run after (and thus includes)
the HTML result, the following configuration entry can be set::

  [plugins.result]
  order = html, archive

The other result plugins, namely xunit, json and mail, will still
be run.  It's guaranteed they'll be run *after* the other result
plugins.  The order in which they'll run after the explicitly
ordered plugins is undefined.
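
A minimal sketch of the ordering rule just described (illustrative
code, not the actual Avocado implementation): plugins named in
``order`` run first, in that order, and every other enabled plugin
follows in an unspecified order::

    def apply_order(enabled, order):
        """Sort plugin short names: those in 'order' first, the rest after."""
        ordered = [name for name in order if name in enabled]
        remaining = [name for name in enabled if name not in order]
        return ordered + remaining

    # apply_order(["xunit", "json", "archive", "mail", "html"],
    #             ["html", "archive"])
    # => ["html", "archive", "xunit", "json", "mail"]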

Other possible approach
---

The other possible approach would require a default order value
for plugins.  This would still preferably be done in configuration
rather than in code.  Then, the fully qualified name for a plugin could
be used as part of the configuration section.  Example::

  [plugin.result.archive]
  order = 50

  [plugin.result.html]
  order = 30

This would make the `html` plugin run before the `archive` plugin.
While more verbose, it would allow for external plugins to ship with
stock configuration files that would set, by default, its ordering.

Feedback is highly appreciated!

-- 
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]



Re: [Avocado-devel] RFC: Plugin execution order

2016-09-30 Thread Cleber Rosa

On 09/29/2016 04:14 PM, Jeff Nelson wrote:
> On Thu, Sep 29, 2016 at 01:24:22PM -0300, Cleber Rosa wrote:
>> As detailed in the following card:
>>
>>  https://trello.com/c/oWzrV48E/837-execution-order-support-in-plugins
>>
>> It should be possible to specify a custom order for plugins to be
>> executed by setting specific configuration.
>>
>> The first observed approach would be to create a section
>> called `[plugins.]` where the ``  conforms to the
>> description on fully qualified plugin names given here:
>>
>>
>> https://github.com/avocado-framework/avocado/pull/1495/commits/193a10ce98cb5747395eefcb485dd452696b4b11#diff-0f4f89ace79fa15278d9b283c2d9d9b2R84
>>
>>
>> Then, by creating a key named `order`, containing the short names as a
>> list.  Enabled plugins not listed will be executed *after* plugins
>> listed, but in non-determined order.
> 
> The phrase "enabled plugins" implies that there can be disabled
> plugins as well.
> 
>


Yeah, support for that has just been added:

http://avocado-framework.readthedocs.io/en/latest/Plugins.html#disabling-a-plugin

> 
>> For instance, consider the following entry points::
>>
>>  'avocado.plugins.result' : [
>> 'xunit = avocado.plugins.xunit:XUnitResult',
>> 'json = avocado.plugins.jsonresult:JSONResult',
>> 'archive = avocado.plugins.archive:Archive',
>> 'mail = avocado.plugins.mail:Mail',
>> 'html = avocado_result_html:HTMLResult'
>>   ]
>>
>> We can say that:
>>
>> * The plugin type, according to the fully qualified plugin name
>>  definition here is `result`.
>>
>> * The plugin fully qualified names are:
>>  - result.xunit
>>  - result.json
>>  - result.archive
>>  - result.mail
>>  - result.html
>>
>> * The short names for plugins of type "result" are:
>>  - xunit
>>  - json
>>  - archive
>>  - mail
>>  - html
> 
> I like this use of examples. The illustrations are clear and easy to
> understand.
> 
> 

Thanks.

>> To make sure that the mail plugin is run after (and thus includes)
>> the HTML result, the following configuration entry can be set::
>>
>>  [plugins.result]
>>  order = html, archive
> 
> What does it mean for a plugin to "include" the results of an earlier
> plugin?
> 
> 

I meant that an imaginary "mail" plugin would include all previously
generated results, so it would include the HTML report.  That wasn't a
really good example when it comes to *core* functionality, that is, the
plugin system wouldn't, on its own, do any of those inclusions.

>> The other result plugins, namely xunit, json and mail, will still
>> be run.  It's guaranteed they'll be run *after* the other result
>> plugins.  The order in which they'll run after the explicitly
>> ordered plugins is undefined.
> 
> Can a plugin determine what plugin(s) have run before it? (I don't
> think it's necessary.)
> 
> 

In theory, a plugin could look at the configuration system and get that
order.  But, I don't think that's going to be a common use case.

>> Other possible approach
>> ---
>>
>> The other approach possible, would require a default order value
>> for plugins.  This would still preferably be done in configuration
>> rather than in code.  Then, the fully qualified name for a plugin could
>> be used as part of the configuration section.  Example::
>>
>>  [plugin.result.archive]
>>  order = 50
>>
>>  [plugin.result.html]
>>  order = 30
> 
> Small typo: "plugins.result" not "plugin.result" (two lines).
> 
> 

Yep, thanks!

>> This would make the `html` plugin run before the `archive` plugin.
>> While more verbose, it would allow for external plugins to ship with
>> stock configuration files that would set, by default, its ordering.
> 
> Order here is used as a numerical value to indicate a relative
> ordering, correct? I'm not sure I like the name "order"; how about
> "sequence"?
> 
> A drawback to this approach is that you still have to come up with a
> rule for what happens when two plugins have the same sequence number.
> 

Then the order is undefined.  I don't see a clean way around this.

>> Feedback is highly appreciated!
> 
> Another way to specify the order is to use an attribute at the
> [plugins] or [plugins.result] level with certain expected values:
> 
> [plugins]
> execution-order = ran

Re: [Avocado-devel] avocado-vt: 'avocado list' slower than usual

2016-10-03 Thread Cleber Rosa


On 10/03/2016 08:01 AM, Lukáš Doktor wrote:
> Dne 23.9.2016 v 00:44 Eduardo Habkost napsal(a):
>> On Thu, Sep 22, 2016 at 05:29:37PM -0300, Cleber Rosa wrote:
>>> On 09/22/2016 03:34 PM, Eduardo Habkost wrote:
>>>> Hi,
>>>>
>>>> I haven't been using avocado-vt for a while, but today I have
>>>> updated and git-cleanded all my git clones (autotest, avocado,
>>>> avocado-vt, tp-qemu), removed my old ~/avocado dir, re-run
>>>> vt-bootstrap, and noticed that 'avocado list' is very slow. It is
>>>> taking 29 seconds to run and list the avocado-vt test cases. I
>>>> don't remember seeing it take so long to run, before.
>>>>
>>>> When I interrupt avocado, I get a backtrace that shows a very
>>>> deep call chain with recursive get_dicts() calls inside
>>>> virttest/cartesian_config.py (see below).
>>>>
>>>> Is this expected? Has anybody else noticed this recently?
>>>>
>>>
>>> I'll try to reproduce.  Can you please check the exact avocado and
>>> avocado-vt versions (commits) you're using?
>>
>> avocado: b82344f707424e53c1ad85429bc74a8d40e0cf31 Merging pull request
>> 1484
>> avocado-vt: 7a12dc6d19c8ca356cdaaa211d74158c2064145f Merging pull
>> request 708
>>
> 
> Hello Eduardo,
> 
> does this problem still persist? I tried running `avocado run boot
> boot`, which takes 29s (20s first boot, 7s the second) using the git
> commits you mentioned. The first thing I'd try is to re-run
> `vt-bootstrap`, as with new options new default filters can be added,
> which might lead to a lot of combinations being created.
> 
> Another thing which might help us identify the root cause is to try
> running `--dry-run`, which only discovers the tests (which means only
> parsing the cartesian config).
> 
> As for the very deep call, that's expected, cartesian config is written
> recursively and to parse it takes quite a time, but unless you parse a
> huge cfg it should not take so long.
> 
> If none of this helps, please share the exact command along with all the
> required configs so that I can reproduce it.
> 
> Regards,
> Lukáš
> 

Thanks Lukáš for stepping in on this.  Let me also share my results
using those commits:

1) avocado vt-bootstrap --yes-to-all

10.30user 0.69system 0:11.40elapsed 96%CPU (0avgtext+0avgdata
68004maxresident)k
408inputs+1751720outputs (1major+58012minor)pagefaults 0swaps

2) avocado run boot

6.38user 2.83system 0:17.04elapsed 54%CPU (0avgtext+0avgdata
264040maxresident)k
38544inputs+3531872outputs (175major+218983minor)pagefaults 0swaps

So, it seems related to your cartesian configuration indeed.  Please
share your configuration files so we can better investigate it.

-- 
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]





[Avocado-devel] (Pre-)Release Test Plan for 42.0

2016-10-09 Thread Cleber Rosa
Result is FAIL with fixes on PR:

 https://github.com/avocado-framework/avocado/pull/1529

---

Test Plan: Release Test Plan
Run by 'cleber' at 2016-10-09T09:31:52.947016

PASS: 'Avocado source is sound':
FAIL: 'Avocado source does not contain spelling errors': Commit adding
words to spell checker white list: 66e5647973de8ba07c4a1349343fa33c9f4ddebf
FAIL: 'Avocado RPM build': Commits
4cef684c461e470733d8dc6ab8ff0c8328e250db and
e25cb866947ca44a9b1c426e70a4aad6d7dc566b address the issues found.
PASS: 'Avocado RPM install':
PASS: 'Avocado Test Run on RPM based installation':
PASS: 'Avocado Test Run on Virtual Machine':
PASS: 'Avocado Test Run on Remote Machine':
PASS: 'Avocado Remote Machine HTML report':
PASS: 'Avocado Server Source Checkout and Unittests':
PASS: 'Avocado Server Run':
PASS: 'Avocado Server Functional Test':
PASS: 'Avocado Virt and VT Source Checkout':
PASS: 'Avocado Virt Bootstrap':
PASS: 'Avocado Virt Boot Test Run and HTML report':
PASS: 'Avocado Virt - Assignment of values from the cmdline':
PASS: 'Avocado Virt - Migration test':
PASS: 'Avocado VT':
PASS: 'Avocado HTML report sysinfo':
PASS: 'Avocado HTML report links':
PASS: 'Paginator':


avocado: ac035ebbfb0268644288fe6286b892c1b72e496b
avocado-server: 1491de32cb4e0ad4c0e83e57d1139af7f5eafccf
avocado-virt: 53bc132b87b8458d2386ff05bacf883b7cd2ea47
avocado-virt-tests: 23e9f6ace369b09b6e53007e5dc759c18576914a
avocado-vt: f3e4259bc05a9388664cca7ee9a4b038b5000a7d


-- 
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]





[Avocado-devel] (Pre-)Release Test Plan for 42.0 (Post fixes)

2016-10-09 Thread Cleber Rosa
Result is PASS.

---

Test Plan: Release Test Plan
Run by 'cleber' at 2016-10-09T18:51:05.547461

PASS: 'Avocado source is sound':
PASS: 'Avocado source does not contain spelling errors':
PASS: 'Avocado RPM build':
PASS: 'Avocado RPM install':
PASS: 'Avocado Test Run on RPM based installation':
PASS: 'Avocado Test Run on Virtual Machine':
PASS: 'Avocado Test Run on Remote Machine':
PASS: 'Avocado Remote Machine HTML report':
PASS: 'Avocado Server Source Checkout and Unittests':
PASS: 'Avocado Server Run':
PASS: 'Avocado Server Functional Test':
PASS: 'Avocado Virt and VT Source Checkout':
PASS: 'Avocado Virt Bootstrap':
PASS: 'Avocado Virt Boot Test Run and HTML report':
PASS: 'Avocado Virt - Assignment of values from the cmdline':
PASS: 'Avocado Virt - Migration test':
PASS: 'Avocado VT':
PASS: 'Avocado HTML report sysinfo':
PASS: 'Avocado HTML report links':
PASS: 'Paginator':

avocado: c7f321444790ff2e3299a6c0a5f6fc8d6c74822f
avocado-server: 1491de32cb4e0ad4c0e83e57d1139af7f5eafccf
avocado-virt: 53bc132b87b8458d2386ff05bacf883b7cd2ea47
avocado-virt-tests: 23e9f6ace369b09b6e53007e5dc759c18576914a
avocado-vt: f3e4259bc05a9388664cca7ee9a4b038b5000a7d

-- 
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]







[Avocado-devel] Sprint #42 Release Meeting

2016-10-10 Thread Cleber Rosa
Dear Avocado users and developers,

I'd like to invite you all to a sprint release and planning meeting.

https://plus.google.com/b/103348962855861989427/events/ctm9cvqr1v9o4uf2qpjlj14tjt4

It is going to take place at `date -d "Mon Oct 10 11:00:00 BRT 2016"`
using the Hangouts on Air:

https://www.youtube.com/watch?v=LlrXKEOxeAY

The link above contains a read-only stream, but we are going to share the
actual hangout URL before the meeting begins. You can decide whether
you want to join directly, or just watch the stream and ask
questions via IRC: irc://irc.oftc.net/#avocado

The meeting will be split in two parts (roughly 30 min each):

   * Part 1: Sprint Review
     * Review of the changes introduced during this sprint
     * Short demonstrations of some of the new features
     * Live sprint status: release readiness, last minute
       blockers, etc.

   * Part 2: Sprint Planning
     * New sprint planning boards will be created and their
       tasks prioritized.
     * Quick review of the expectations for the next sprint
       (high-level vision, goals, highlights)

The meeting is going to be recorded and shared on Youtube afterwards.
It is worth mentioning that most of the tasks during the meeting are
going to take place on our Trello board:

https://trello.com/b/WbqPNl2S/avocado

-- 
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]





[Avocado-devel] Avocado release 42.0: Stranger Things

2016-10-10 Thread Cleber Rosa
Hello everyone,

This is yet another Avocado release announcement!  Since we're now
hosting the release notes alongside our official documentation, please
refer to the following link for the complete information about this release:

http://avocado-framework.readthedocs.io/en/42.0/release_notes/42_0.html

Installing Avocado
==================

Instructions are available in our documentation on how to install
either with packages or from source:

 
http://avocado-framework.readthedocs.io/en/42.0/GetStartedGuide.html#installing-avocado

Updated RPM packages are available for EPEL 6, EPEL 7, Fedora 23 and 24.

Happy hacking and testing!

-- 
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]









Re: [Avocado-devel] Tests stable tmpdir

2016-10-24 Thread Cleber Rosa

On 10/24/2016 10:27 AM, Amador Pahim wrote:
> Hello,
> 
> I saw a number of requests about setUpClass/tearDownClass. We don't
> actually support them in Avocado, as already stated in our docs, but
> most of the requests are actually interested in have a temporary
> directory that can be the same throughout the job, so every test can
> use that directory to share information that is common to all the
> tests.
> 
> One way to provide that would be exposing the Job temporary directory,
> but providing a supported API where a test can actually write to
> another test's results can break our promise that tests are independent
> from each other.
> 

Yes, the initial goal of a job temporary directory is to prevent clashes
and allow proper cleanup when a job is finished.  For those not familiar
with the current problems of (global) temporary directories:

https://trello.com/c/qgSTIK0Y/859-single-data-dir-get-tmp-dir-per-interpreter-breaks-multiple-jobs


> Another way that comes to my mind is to use the pre/post plugin to
> handle that. On `pre`, we can create a temporary directory and set an
> environment variable with the path for it. On `post` we remove that
> directory. Something like:
> 
> ```
> class TestsTmpdir(JobPre, JobPost):
>     ...
>
>     def pre(self, job):
>         os.environ['AVOCADO_TESTS_TMPDIR'] = tempfile.mkdtemp(
>             prefix='avocado_')
>
>     def post(self, job):
>         if os.environ.get('AVOCADO_TESTS_TMPDIR') is not None:
>             shutil.rmtree(os.environ.get('AVOCADO_TESTS_TMPDIR'))
> ```
> 
> Thoughts?
> 

I think this can be a valid solution that promises very little to
tests.  It doesn't break our assumption that tests should not depend
on each other, and it reinforces that we aim at providing job-level
orchestration.
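
Just to illustrate the idea from the test writer's side, a test could
consume that variable with something as simple as this (a sketch only,
not an officially supported API -- the only assumption is the
AVOCADO_TESTS_TMPDIR variable set by the pre/post plugin above):

import os

from avocado import Test


class SharedTmpdirTest(Test):

    def test(self):
        shared = os.environ.get('AVOCADO_TESTS_TMPDIR')
        self.assertIsNotNone(shared)
        # anything written here is visible to the other tests of the job
        with open(os.path.join(shared, 'state.txt'), 'a') as state:
            state.write('some shared information\n')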

However, since we have discussed giving a job its own temporary dir,
and we already expose a lot to tests via environment variables:

http://avocado-framework.readthedocs.io/en/latest/WritingTests.html#environment-variables-for-simple-tests

And also to job pre/post script plugins:

http://avocado-framework.readthedocs.io/en/latest/ReferenceGuide.html#script-execution-environment

I'm afraid this could bring inconsistencies or clashes in the very near
future.  What I propose for the immediate term is to write a
contrib/example plugin that we can either fold into the Job class
itself (giving it a real temporary dir, with variables exposed to test
processes) or make into a first class plugin.

How does it sound?

-- 
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]





Re: [Avocado-devel] Tests stable tmpdir

2016-10-30 Thread Cleber Rosa

On 10/25/2016 12:15 PM, Ademar Reis wrote:
> On Mon, Oct 24, 2016 at 06:14:14PM -0300, Cleber Rosa wrote:
>>
>> On 10/24/2016 10:27 AM, Amador Pahim wrote:
>>> Hello,
>>>
>>> I saw a number of requests about setUpClass/tearDownClass. We don't
>>> actually support them in Avocado, as already stated in our docs, but
>>> most of the requests are actually interested in have a temporary
>>> directory that can be the same throughout the job, so every test can
>>> use that directory to share information that is common to all the
>>> tests.
>>>
>>> One way to provide that would be exposing the Job temporary directory,
>>> but providing a supported API where a test can actually write to
>>> another test results can break our promise that tests are independent
>>> from each other.
>>>
>>
>> Yes, the initial goal of a job temporary directory is to prevent clashes
>> and allow proper cleanup when a job is finished.  For those not familiar
>> with the current problems of (global) temporary directories:
>>
>> https://trello.com/c/qgSTIK0Y/859-single-data-dir-get-tmp-dir-per-interpreter-breaks-multiple-jobs
> 
> Also, let's keep in mind that the architecture of Avocado is
> hierarchical and tests should not have access or knowledge about
> the job they're running on (I honestly don't know how much of
> this is true in practice today, but if it happens somewhere, it
> should be considered a problem).
> 
> Anyway, what I want to say is that we should not expose a job
> directory to tests.
> 

I believe we have to be clear about our architecture proposal, but
also honest about how we currently deviate from it.  Avocado-VT, for
instance, relies on the temporary dir that exists across tests.

>>
>>
>>> Another way that comes to my mind is to use the pre/post plugin to
>>> handle that. On `pre`, we can create a temporary directory and set an
>>> environment variable with the path for it. On `post` we remove that
>>> directory. Something like:
>>>
>>> ```
>>> class TestsTmpdir(JobPre, JobPost):
>>>     ...
>>>
>>>     def pre(self, job):
>>>         os.environ['AVOCADO_TESTS_TMPDIR'] = tempfile.mkdtemp(
>>>             prefix='avocado_')
>>>
>>>     def post(self, job):
>>>         if os.environ.get('AVOCADO_TESTS_TMPDIR') is not None:
>>>             shutil.rmtree(os.environ.get('AVOCADO_TESTS_TMPDIR'))
>>> ```
>>>
>>> Thoughts?
>>>
>>
>> I think this can be a valid solution, that promises very little to
>> tests.  It doesn't break our assumption of how tests should not depend
>> on each other, and it reinforces that we aim at providing job level
>> orchestration.
> 
> Thinking from the architecture perspective once again, this is a
> bit different from what you proposed before, but not that much
> (let's say it's a third-party "entity" called
> "AVOCADO_TESTS_TMPDIR" available to all processes in the job
> environment, unique per job).
> 
> It's a bit better, but first of all, it should be named,
> implemented and even enabled in a more explicit way to prevent
> users from abusing it.
> 

This kind of proposal is really a short (or mid) term compromise.  We
don't want to endorse this as part of our architecture or propose that
tests are written to depend on it.  Still, we can't, at the moment, offer
a better solution.

Shipping it as a contrib plugin can help real users to have better
tests.  Not optimal or perfect ones, but still better than what can be
honestly done today.

> But my real solution is below:
> 
>>
>> Although, since we have discussed giving a job its own temporary dir,
>> and we already expose a lot via environment variables to tests:
>>
>> http://avocado-framework.readthedocs.io/en/latest/WritingTests.html#environment-variables-for-simple-tests
>>
>> And also to job pre/post script plugins:
>>
>> http://avocado-framework.readthedocs.io/en/latest/ReferenceGuide.html#script-execution-environment
>>
>> I'm afraid this could bring inconsistencies or clashes in the very near
>> future.  What I propose for the immediate terms is to write a
>> contrib/example plugin, that we can either fold into the Job class
>> itself (giving it a real temporary dir, with variables exposed to test
>> processes) or make it a 1st class plugin.
>>
>> How does it sound?
> 
> If we expose something like this as a supported API, we should
> make it as an "ext

Re: [Avocado-devel] Tests stable tmpdir

2016-10-31 Thread Cleber Rosa

On 10/30/2016 06:37 AM, Fam Zheng wrote:
> On Thu, 10/27 15:28, Ademar Reis wrote:
>>>
>>> That indeed becomes true when we start offering the locking mechanism.
>>> Right now, our users simply want/need to do setup that is valid for many
>>> (usually all) tests.  According to Amador, the lack of a such a
>>> mechanism, has led users to write larger tests, when they should really
>>> be smaller ones.
>>
>> So we can offer the contrib plugin without the locking mechanism,
>> leaving it to users to decide what to do with it. Documented as a
>> non-supported feature.
>>
>> As we learn more about this use-case, we can expose it as a
>> fully-supported API.
> 
> Just want to add two cents on how sharing resources across multiple tests can 
> be
> useful:
> 
> Sometimes a test itself is way quicker than the preparation, and the latter on
> the other hand could usually be common across many tests. Without a mechanism
> to reuse a setup across multiple tests cases (at least inside the same test
> class), test cases are forced to be combined into one. That is, when a test
> script would ideally look like this:

It's true that test setups can be many, many times more expensive than
the tests themselves.  We've seen many "solutions" to that, including
considering a large setup *phase* a test itself.  Then we'd have other
tests that depend on this "setup test".

What all examples have in common is that there must be knowledge about
some common state.  And besides knowledge, sometimes real assets.

What Avocado has tried to set as a basic principle, is that tests should
not depend on "other tests", "job state", "shared resources", etc.  I
believe this is a good thing because it keeps the architecture clean and
can allow advanced use cases (most of which we haven't implemented yet).

> 
> class MyTestCase:
>     def setUp(self):
>         self.do_a_complicated_setup()
>
>     def test1(self):
>         self.a_quick_test()
>
>     def test2(self):
>         self.another_quick_test()
>
>     def test3(self):
>         self.yet_another_quick_test()
> 

But, we cannot deny that this example is better than the second one.
One approach, as mentioned earlier, is to move the commonalities to the
job phase.  Still, we have said that tests shouldn't have knowledge
about their job.  Also, we can certainly see users needing jobs in which
different sets of tests need different sets of common setups.

So, what I see, and this is a brainstorm, is that Avocado will need,
in the future, a way for users to opt out of this complete test
execution independence.  Please don't be alarmed at this point; I'm not
suggesting breaking our core foundations.

What I'm suggesting is a way for users to note that a set of tests are
related, and they should share more, such as the same machine or
temporary dir.  A command line switch such as
`--keep-together=` could do the trick.

> It would be squashed into one test case like this:
> 
> class MyTestCase:
>     def setUp(self):
>         self.do_a_complicated_setup()
>
>     def testAll(self):
>         self.a_quick_test()
>         self.another_quick_test()
>         self.yet_another_quick_test()
> 
> 
> I think the first way is much cleaner because the function names can be
> self-explanatory.

Agreed.  This is what Amador mentioned that ends up getting done.  You
miss the right granularity, and if a regression is introduced, it's not
crystal clear what actually broke.

> 
> Fam
> 

-- 
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]






[Avocado-devel] Avocado release 43.0: The Emperor and the Golem

2016-11-07 Thread Cleber Rosa
Hello everyone,

This is yet another Avocado release announcement!  Since we're now
hosting the release notes alongside our official documentation, please
refer to the following link for the complete information about this release:

http://avocado-framework.readthedocs.io/en/43.0/release_notes/43_0.html

Installing Avocado
==================

Instructions are available in our documentation on how to install
either with packages or from source:

 
http://avocado-framework.readthedocs.io/en/43.0/GetStartedGuide.html#installing-avocado

Updated RPM packages are available for EPEL 6, EPEL 7, Fedora 23 and 24.

Happy hacking and testing!

-- 
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]











[Avocado-devel] TreeNode constructor parent parameter

2016-11-08 Thread Cleber Rosa
Lukáš,

While reviewing your PR (mux-separation3), I came across the fact that
the "parent" parameter of TreeNode doesn't do what *I* expected it to
do. That is, the following test code fails:

import unittest

from avocado.core import tree

class ParentTest(unittest.TestCase):

    def test_parent_parameter(self):
        parent = tree.TreeNode(name='parent')
        child = tree.TreeNode(name='child', parent=parent)
        grandchild = tree.TreeNode(name='grandchild', parent=child)
        self.assertIn(child, parent.children)
        self.assertIn(grandchild, child.children)


But it would work with this simple change:


diff --git a/avocado/core/tree.py b/avocado/core/tree.py
index 27d30f0..2fb3e11 100644
--- a/avocado/core/tree.py
+++ b/avocado/core/tree.py
@@ -69,6 +69,8 @@ class TreeNode(object):
             children = []
         self.name = name
         self.value = value
+        if parent is not None:
+            parent.add_child(self)
         self.parent = parent
         self.children = []
         self.ctrl = []


This is similar to what is already done with "children", and I have the
feeling that both should behave similarly.

Does it make sense?

-- 
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]





Re: [Avocado-devel] [RFC] white-spaces in test references

2016-11-10 Thread Cleber Rosa
 clear.
Avocado gets '/tmp/test script.sh arg1', which doesn't exist, so the
message *would be* correct.

With the current support for passing arguments, this is clearly a bug.

> Example 15:
> 
> $ avocado run 'test\ script.sh arg1'
> PASS, script receives arg1.
> 
> Example 16:
> 
> $ avocado run '/tmp/test\ script.sh arg1'
> PASS, script receives arg1.
> 

This non-standard quoting for *some* cases is broken.  No other words
about it.

> ---
> 
> Example 8 and Example 10 are affected by an issue in
> SimpleTest.filename. This issue is caused by the pipes.quote(filename)
> call in the FileLoader. The pipes.quote(filename) is putting single
> quotes around the entire filename and making
> os.path.abspath(filename), which is present in SimpleTest.filename, to
> return the incorrect location. Btw, the same issue is affecting
> filenames with non-ascii characters.
> 

Right.
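
Just to make the mechanics concrete, this is roughly the interaction
being described (a minimal standalone reproduction, not Avocado code):

import os.path
import pipes

quoted = pipes.quote('/tmp/test script.sh')
# quoted is now "'/tmp/test script.sh'" -- note the added single quotes
print(os.path.abspath(quoted))
# the leading quote character makes it look like a relative path, so
# abspath() prepends the current working directory instead of /tmp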

> In order to fix this issue, we have some options, like 'handle the
> quoted filename coming from the loader inside the
> SimpleTest.filename', which fixes Examples 8 and 10 and does not
> change anything else, or 'to remove the pipes.quote(filename) from the
> loader' which makes the syntax on Examples 7, 8, 9 and 10 invalid (so
> all white-spaces in filenames have to be escaped AND the test
> reference have to be enclosed inside quotes when the filename contains
> white-spaces, like in examples 11, 12, 15 and 16).
> 

Doing the escaping inside Avocado is counterintuitive.  We should not
attempt to be a shell.  The following should *never* be an issue:

 $ ./foo/bar\ baz.sh
 SUCCESS

 $ avocado run ./foo/bar\ baz.sh
 ...
 (1/1) './foo/bar baz.sh': PASS (0.01 s)
 ...

But this is an issue:

 $ './foo/bar baz.sh'
 SUCCESS

 $ './foo/bar\ baz.sh'
 bash: ./foo/bar\ baz: No such file or directory

 $ avocado run './foo/bar\ baz.sh'
 ...
 (1/1) ./foo/bar\ baz.sh: PASS (0.01 s)
 ...

Are we in sync up to this point?  This deserves a card in Trello
describing the bug.  The resolution should include tests, such as
running avocado under a shell (as most functional tests do) and passing
test references that exercise the intended behavior.
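
For instance (a sketch only; the paths and the exact assertion are
illustrative, and plain subprocess is used here for clarity), a
functional test along these lines could lock in the expected behavior:

import subprocess
import unittest


class WhitespaceReference(unittest.TestCase):

    def test_escaped_reference(self):
        # the reference is escaped exactly as a user would type it in a
        # shell; './foo/bar baz.sh' is assumed to exist and to pass
        exit_code = subprocess.call('avocado run ./foo/bar\\ baz.sh',
                                    shell=True)
        self.assertEqual(exit_code, 0)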

> But this issue raised a new discussion: right now, for both
> INSTRUMENTED and SIMPLE tests, we accept non-escaped white-spaces in
> the test reference, as long as the test reference is enclosed into
> quotes (Examples 1, 2, 7 and 8). But there is one exception: if we

Let's change the wording a bit: the escaping has been done by the shell
(enclosed in quotes).  Avocado gets a test reference that contains white
spaces (which should be fine).

> have a SIMPLE test with white-spaces in the filename AND arguments,
> then the white-spaces in the filename have to be escaped (Examples 15
> and 16). This change of behaviour based on the presence/absence of
> arguments seems confusing from the user perspective. This syntax

Agreed.  The presence of a feature (passing arguments to simple tests)
should not change the overall expectation/behavior for the application.

> (Examples 1, 2, 7 and 8) maybe can make sense for INSTRUMENTED tests,
> since we don't have arguments there, but it does not make sense for
> SIMPLE tests, because we do support arguments in the test references
> for SIMPLE tests.
> 
> So, before sending a new Pull Request, I'd like to have some feedback
> about this. What are the valid syntaxes that we have to support and
> what syntaxes should not be valid from the list of examples above?
> Should we keep all of them as they are currently and just fix Examples
> 8 and 10? Or the Examples 7, 8, 9 and 10 are looking wrong for you as
> well?
> 

Unless we can deliver a way to support "simple tests + arguments" without:

1) Confusing change of expectation and behavior for references on other
test types
2) Avocado acting as a shell (doing quoting)

I recommend we drop support for passing arguments to simple tests.

If a case can be made for the feature (support for simple tests +
arguments) *and* at the same time removing the two points listed
previously, we can consider keeping the feature.

> Best,
> --
> apahim
> 

Thanks for the thorough analysis of this issue!

-- 
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]





Re: [Avocado-devel] RFC: ABRT integration

2016-11-11 Thread Cleber Rosa
On 11/11/2016 07:20 AM, Jakub Filak wrote:
> Avocado developers,
> 
> 
> I came across an Avocado trello card requesting integration with
> systemd-coredumpctl. As a developer working on Automatic Bug Reporting Tool
> (ABRT) I got interested in it. I contacted Lukas Doktor with a question of
> what purpose the integration should be and with an offer to engage ABRT too.
> After short chat he asked me to send an RFC to this mailing list. So here
> you are Lukas.
> 
> 

Hi Jakub,

First of all, thanks for taking the time!

> Let me first briefly introduce ABRT. Despite its name ABRT is more about
> detecting software (and hardware) problems and the reporting is just a one
> step in the whole processes of handling a problem. Currently ABRT is capable
> of detecting core files, uncaught Python exceptions, uncaught Ruby
> exceptions, uncaught Java exceptions, Kernel oopses (including those stored
> in pstore), Kernel vmcores and Machine Check Exceptions. ABRT provides
> several ways accessing detect problems which includes command line tools
> (abrt), GUI tools (gnome-abrt), Python API, C API and D-Bus interface. If
> interested, you can find more about ABRT at:
> http://abrt.readthedocs.io/
> 
> 
> I propose to enhance Avocado to become aware of ABRT and include detected
> problems in test results. Here is my initial proof-of-concept commit:
> https://github.com/jfilak/avocado/commit/e3258706bdfffb8b2f1bc51328af2958617d
> 
> 

Now, I have to repeat: thanks for taking the time to write this PoC.

> Some technical details:
> - the implementation will capture only problems of a current user

This is not a big deal since our current implementation for catching
coredumps is pretty much limited to super users.

Still, let me get this straight: if a test exercises and crashes a
running daemon (running as a different user), ABRT won't be able to
capture the daemon's problems, right?

But, if the user is running the test as root, then ABRT will be able to
capture all the system's problems?

> - a proper configuration of PolicyKit can allow any user to read all system
>   problems

Right.  For our purposes, it's fine to ship an example policy as a
contrib file.

> - there can be several simultaneous connections to ABRT D-Bus connection
> 
> 

If PolicyKit is configured to allow all system problems to be read, and
there are multiple connections to ABRT, will all of those receive all
problem notifications?  Or is it a single queue where whoever reads first
removes it from the queue?

> Please have a look at my patch and let me know if ABRT integration is
> something you are interested in.
> 
> 

I will indeed.

> Should you have any question, please contact me or any ABRT developer on
> crash-catc...@lists.fedorahosted.org or on #abrt.
> 
> 

Sure!

> 
> 
> Kind regards,
> Jakub
> 

Regards,

-- 
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]





Re: [Avocado-devel] Running tests in parallel

2016-11-22 Thread Cleber Rosa
On 11/22/2016 07:53 AM, Zubair Lutfullah Kakakhel wrote:
> Hi,
> 

Hi Zubair,

> There are quite a few threads about this and a trello card
> https://trello.com/c/xNeR2slj/255-support-running-tests-in-parallel
> 
> And the discussion leads to a complex multi-host RFC.
> https://www.redhat.com/archives/avocado-devel/2016-March/msg00025.html
> 
> Our requirement is simpler.
> All we wanted to do is run disjoint simple (c executables) tests in
> parallel.
> 

Sounds fair enough.

> I was wondering if somebody has a WIP branch that has some level of
> implementation for this?

I'm not familiar with a WiP or PoC on this (yet).  If anyone has
experimented with it, I'd happy to hear about it.

> Or If somebody is familiar with the code base, I'd appreciate some
> direction on how to implement this.
> 

Avocado already runs every single test in a fresh new process.  This is,
at least theoretically,  a good start.  Also, the test process is
handled based on the standard Python multiprocessing module:

https://github.com/avocado-framework/avocado/blob/master/avocado/core/runner.py#L363

The first experiment I'd try would be to use the standard Python
multiprocessing.Pool:

https://docs.python.org/2.7/library/multiprocessing.html#using-a-pool-of-workers

This would most certainly lead to changes in how Avocado currently
serially waits for the test status:

https://github.com/avocado-framework/avocado/blob/master/avocado/core/runner.py#L403

Which ultimately is added to the (Job wide) results:

https://github.com/avocado-framework/avocado/blob/master/avocado/core/runner.py#L455

Since the results for many tests will now be acquired in unpredictable
order, this will require changes to the ResultEvent based plugins (such
as the UI).
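
Just as a rough illustration of the idea (this is not Avocado code, and
the way results are collected here is deliberately simplistic), running
disjoint simple tests through a pool could look like this:

import multiprocessing
import subprocess


def run_simple_test(path):
    # a simple test is just an executable; exit status 0 means PASS
    status = subprocess.call([path])
    return path, 'PASS' if status == 0 else 'FAIL'


def run_in_parallel(test_paths, workers=4):
    pool = multiprocessing.Pool(processes=workers)
    try:
        # statuses come back as the tests finish, in unpredictable order,
        # which is why the result/UI plugins would need to cope with that
        for path, status in pool.imap_unordered(run_simple_test, test_paths):
            print('%s: %s' % (path, status))
    finally:
        pool.close()
        pool.join()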

> Thanks
> 
> Regards,
> ZubairLK
> 

I hope this is a good initial set of pointers.  If you feel adventurous
and want to start hacking on this, you're more than welcome.

BTW: we've had quite a number of features that started as
experiments/ideas/not-really-perfect-pull-requests from the community
that Avocado "core team" members embraced and pushed all the way to
completeness.

-- 
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]





Re: [Avocado-devel] Running tests in parallel

2016-11-23 Thread Cleber Rosa

On 11/23/2016 07:07 AM, Zubair Lutfullah Kakakhel wrote:
> Hi,
> 
> Thank you for your comprehensive reply!
> 
> Comments inline.
> 
> On 11/22/2016 02:11 PM, Cleber Rosa wrote:
>> On 11/22/2016 07:53 AM, Zubair Lutfullah Kakakhel wrote:
>>> Hi,
>>>
>>
>> Hi Zubair,
>>
>>> There are quite a few threads about this and a trello card
>>> https://trello.com/c/xNeR2slj/255-support-running-tests-in-parallel
>>>
>>> And the discussion leads to a complex multi-host RFC.
>>> https://www.redhat.com/archives/avocado-devel/2016-March/msg00025.html
>>>
>>> Our requirement is simpler.
>>> All we wanted to do is run disjoint simple (c executables) tests in
>>> parallel.
>>>
>>
>> Sounds fair enough.
>>
>>> I was wondering if somebody has a WIP branch that has some level of
>>> implementation for this?
>>
>> I'm not familiar with a WiP or PoC on this (yet).  If anyone has
>> experimented with it, I'd happy to hear about it.
>>
>>> Or If somebody is familiar with the code base, I'd appreciate some
>>> direction on how to implement this.
>>>
>>
>> Avocado already runs every single test in a fresh new process.  This is,
>> at least theoretically,  a good start.  Also, the test process is
>> handled based on the standard Python multiprocessing module:
>>
>> https://github.com/avocado-framework/avocado/blob/master/avocado/core/runner.py#L363
>>
>>
>> The first experimentation I'd do would be to attempt using the also
>> Python standard multiprocessing.Pool:
>>
>> https://docs.python.org/2.7/library/multiprocessing.html#using-a-pool-of-workers
>>
> 
> In this case, there would be a separate python thread for each test
> being run in parallel.
> Each python thread would actually call the test executable using a
> sub-process?
> 

Ideally, the Avocado test runner would remain a single process, that is,
without one additional thread (or process) to manage each *test* process.

> That can be OK for Desktops but won't scale well for using avocado in
> memory
> constrained Embedded devices.
> 

I must admit I haven't attempted to run Avocado in resource-constrained
environments.  Can you explain what your biggest concern is?

Do you feel that Avocado (as a single process test *runner*) plus one
process for each *test* is not suitable for those environments?

- Cleber.

> Please correct me if I am reading this incorrectly.
> 
> Regards,
> ZubairLK
> 
>>
>> This would most certainly lead to changes in how Avocado currently
>> serially waits for the test status:
>>
>> https://github.com/avocado-framework/avocado/blob/master/avocado/core/runner.py#L403
>>
>>
>> Which ultimately is added to the (Job wide) results:
>>
>> https://github.com/avocado-framework/avocado/blob/master/avocado/core/runner.py#L455
>>
>>
>> Since the results for many tests will now be acquired in unpredictable
>> order, this will require changes to the ResultEvent based plugins (such
>> as the UI).
>>
>>> Thanks
>>>
>>> Regards,
>>> ZubairLK
>>>
>>
>> I hope this is a good initial set of pointers.  If you feel adventurous
>> and wants to start hacking on this, you're more then welcome.
>>
>> BTW: we've had quite a number of features that started as
>> experiments/ideas/not-really-perfect-pull-requests from the community
>> that Avocado "core team" members embraced and pushed all the way to
>> completeness.
>>

-- 
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]





Re: [Avocado-devel] Fwd: [Qemu-devel] a suggestion to place *.c hunks last in patches

2016-11-30 Thread Cleber Rosa


On 11/30/2016 12:14 PM, Lucas Meneghel Rodrigues wrote:
> +1. Looks interesting!
> 
>

Indeed! +1 from me too.

> On Wed, Nov 30, 2016, 12:10 PM Ademar Reis  <mailto:ar...@redhat.com>> wrote:
> 
> Saw this message on qemu-devel and I think it's a nice suggestion
> for Avocado developers.
> 
> The ordering for a python project should be different, but you
> get the idea (replies to this thread with the suggested list are
> welcome).
> 
> Thanks.
>- Ademar
> 
> - Forwarded message from Laszlo Ersek  <mailto:ler...@redhat.com>> -
> 
> Date: Wed, 30 Nov 2016 11:08:27 +0100
> From: Laszlo Ersek mailto:ler...@redhat.com>>
> Subject: [Qemu-devel] a suggestion to place *.c hunks last in patches
> To: qemu devel list  <mailto:qemu-de...@nongnu.org>>
> 
> Recent git releases support the diff.orderFile permanent setting. (In
> older releases, the -O option had to be specified on the command line,
> or in aliases, for the same effect, which was quite inconvenient.) From
> git-diff(1):
> 
>-O
>Output the patch in the order specified in the ,
>which has one shell glob pattern per line. This overrides
>the diff.orderFile configuration variable (see git-
>config(1)). To cancel diff.orderFile, use -O/dev/null.
> 
> In my experience, an order file such as:
> 
> configure
> *Makefile*
> *.json
> *.txt
> *.h
> *.c
> 

Since most Python projects have very few files not ending in `.py`, I
suspect most relevant configurations will contain a list of paths instead.

For Avocado, I believe something like this could make sense:

Makefile
docs/source/*.rst
avocado/utils/*.py
avocado/core/*.py
avocado/plugins/*.py
scripts/*.py
selftests/*

Reasoning: it's nice to read the docs to get a grasp of the feature.
Then, take a look at utility functions that may have been added, and
are then used by core code.

A new or existing plugin may leverage those changes, and so can the
avocado test runner tool itself.

Finally, check how that is being tested.  We could also add unittests
right after avocado/{utils,core}/*.py.  In reality, though, we tend to
keep a utility API change in its own commit...

Anyway, let's try that out.  I'm all in favor of easier to read commits.

- Cleber.

> that is, a priority order that goes from
> descriptive/declarative/abstract to imperative/specific works wonders
> for reviewing.
> 
> Randomly picked example:
> 
> [Qemu-devel] [PATCH] virtio-gpu: track and limit host memory allocations
> http://lists.nongnu.org/archive/html/qemu-devel/2016-11/msg05144.html
> 
> This patch adds several fields to several structures first, and then it
> does things with those new fields. If you think about what the English
> verb "to declare" means, it's clear you want to see the declaration
> first (same as the compiler), and only then how the field is put to use.
> 
> Thanks!
> Laszlo
> 
> 
> - End forwarded message -
> 
> --
> Ademar Reis
> Red Hat
> 
> ^[:wq!
> 

-- 
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]





[Avocado-devel] Avocado release 44.0: The Shadow Self

2016-12-07 Thread Cleber Rosa
Hello everyone,

This is yet another Avocado release announcement!  Since we
host the release notes alongside our official documentation, please
refer to the following link for the complete information about this release:

http://avocado-framework.readthedocs.io/en/44.0/release_notes/44_0.html

Installing Avocado
==================

Instructions are available in our documentation on how to install
either with packages or from source:

 
http://avocado-framework.readthedocs.io/en/44.0/GetStartedGuide.html#installing-avocado

Updated RPM packages are available for EPEL 6, EPEL 7, Fedora 23 (for
the last time), 24 and now 25.

Happy hacking and testing!

-- 
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]












