Re: [Avocado-devel] RFC: Parameters as Environment Variables

2018-01-10 Thread Ademar Reis
 public a Python API for
> loading that serialized structure back in a variant object, so Python
> users would have the same experience as in INSTRUMENTED tests when
> writing non-INSTRUMENTED Python tests.

I may be missing some of the latest developments, but in my mind,
tests receive parameters, not variants. Is that still the case?

In other words, there would be no references to variants in the
structure visible to tests (not even variant_id). I also don't
understand why we have '"paths": ["/run/*"]' there, could you please
explain?

Perhaps you can provide a more complete example, and make it
consistent with the example from the motivation section. Bonus
points if it's a real-world one.

I also remember past conversations about parameters potentially
coming from different sources in hierarchical order (say, via
hierarchical configuration files, command line options, etc) with
the varianter being one of the providers.  That's one of the reasons
why I consider it important to provide parameters to tests, not
variants.  If we keep it abstract like that, nothing has to change
in this environment variable feature if we make changes to the
underlying test parametrization mechanism.
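
To make that concrete: a non-INSTRUMENTED (simple, executable) test
should only need to know that a given parameter may show up in its
environment, no matter which provider produced it. A rough sketch of
what such a test could look like; the variable name below is completely
made up, and defining the actual naming/prefix is exactly the point of
this RFC:

```python
#!/usr/bin/env python
# Hypothetical simple (non-INSTRUMENTED) executable test: it knows
# nothing about variants or the varianter, only that a parameter may
# be exposed as an environment variable (the name below is made up).
import os
import sys

sleep_length = os.environ.get("AVOCADO_TEST_PARAM_SLEEP_LENGTH", "1")

try:
    float(sleep_length)
except ValueError:
    sys.exit("unusable parameter value: %r" % sleep_length)

print("would sleep for %s seconds" % sleep_length)
sys.exit(0)
```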

Thanks.
   - Ademar

> 
> Expected Results
> 
> 
> Mitigate the cases where the parameters can not be accessed by
> non-INSTRUMENTED tests and avoid collision with preexisting
> environment variables.
> 
> 
> Looking forward to reading your comments.
> --
> apahim
> 

-- 
Ademar Reis
Red Hat

^[:wq!



Re: [Avocado-devel] Request to backport PR 1376 to 36lts

2017-09-11 Thread Ademar Reis
On Mon, Sep 11, 2017 at 10:23:50AM +0200, Lukáš Doktor wrote:
> Hello Guannan,
> 
> theoretically we should not accept such a backport, because it's a
> new feature, not a bugfix. Anyway it touches `avocado.utils` and
> it's not changing existing callbacks. What do you think, guys? In
> my view the `avocado/core` is the core and I wouldn't like such
> changes there, but I'd be fine with some exceptions when it goes
> to `avocado/utils`.

I think 36lts should not be touched, unless you're doing a critical
bugfix, which is not the case here.

It's not just about a simple backport, it's about expectations and
maintenance of a LTS release. Think of new versions, release notes,
"interrupting" conservative users, testing, potential for
regressions, etc. This would create a bad precedent.

Just my 2 cents.

Thanks.
   - Ademar

> Dne 8.9.2017 v 08:41 Guannan Sun napsal(a):
> > Hi,
> > 
> > As RHEL6 still needs to use 36lts, and test cases were updated to use the function in
> > PR 1376:
> > 
> > https://github.com/avocado-framework/avocado/pull/1376
> > 
> > could you help backport the commits to 36lts?
> > 
> > Thanks!
> > Guannan
> > 
> 
> 




-- 
Ademar Reis
Red Hat

^[:wq!



Re: [Avocado-devel] Parameter System Overhaul

2017-08-16 Thread Ademar Reis
On Wed, Aug 16, 2017 at 02:28:47PM -0400, Cleber Rosa wrote:
> 
> 
> On 08/16/2017 12:58 PM, Ademar Reis wrote:
> > On Wed, Aug 16, 2017 at 12:01:08PM -0400, Cleber Rosa wrote:
> >>
> >>
> >> On 08/07/2017 05:49 PM, Ademar Reis wrote:
> >>> On Tue, Aug 01, 2017 at 03:37:34PM -0400, Cleber Rosa wrote:
> >>>> Even though Avocado has had a parameter passing system for
> >>>> instrumented tests almost from day one, it has been intertwined with
> >>>> the varianter (then multiplexer) and this is fundamentally wrong.  The
> >>>> most obvious example of this broken design is the `mux-inject` command
> >>>> line option::
> >>>>
> >>>>   --mux-inject [MUX_INJECT [MUX_INJECT ...]]
> >>>> Inject [path:]key:node values into the final
> >>>> multiplex
> >>>> tree.
> >>>>
> >>>> This is broken design not because such varianter implementations can
> >>>> be tweaked over the command line (that's fine).  It's broken because it
> >>>> is the recommended way of passing parameters on the command line.
> >>>>
> >>>> The varianter (or any other subsystem) should be able to act as a
> >>>> parameter provider, but can not dictate that parameters must first be
> >>>> nodes/key/values of its own internal structure.
> >>>
> >>> Correct. It's broken because it violates several layers. There would
> >>> be nothing wrong with something like "--param [prefix:]",
> >>> for example (more below).
> >>>
> >>>>
> >>>> The proposed design
> >>>> ===
> >>>>
> >>>> A diagram has been used on a few different occasions, to describe how
> >>>> the parameters and variants generation mechanism should be connected
> >>>> to a test and to the overall Avocado architecture.  Here it is, in its
> >>>> original form::
> >>>>
> >>>>       +------+
> >>>>       | Test |
> >>>>       +------+
> >>>>           |
> >>>>           |
> >>>> +---------v---------+    +--------------------------------+
> >>>> | Parameters System |--->| Variants Generation Plugin API |
> >>>> +-------------------+    +--------------------------------+
> >>>>       ^          ^                       |
> >>>>       |          |                       |
> >>>> +-----+----------+---------------------+ |
> >>>> | +--------------+ +-----------------+ | |
> >>>> | | avocado-virt | | other providers | | |
> >>>> | +--------------+ +-----------------+ | |
> >>>> +---------------------------------------+ |
> >>>>                                           |
> >>>>                     +---------------------+------+
> >>>>                     |                            |
> >>>>                     |                            |
> >>>>           +---------v----------+   +-------------v-----------+
> >>>>           | Multiplexer Plugin |   | Other variant plugin(s) |
> >>>>           +--------------------+   +-------------------------+
> >>>>                     |
> >>>>                     |
> >>>>           +---------v------------------------+
> >>>>           | +------------+ +--------------+  |
> >>>>           | | --mux-yaml | | --mux-inject |  |
> >>>>           | +------------+ +--------------+  |
> >>>>           +----------------------------------+
> >>>>
> >>>>
> >>>> Given that the "Parameter System" is the entry point into the parameters
> >>>> providers, it should provide two different interfaces:
> >>>>
> >>>>  1) An interface for its users, that is, developers writing
> >>>> `avocado.Test` based tests
> >>>>
> >>>>  2) An interface for developers of additional providers, such as the
> >>>> "avocado-virt" and "other providers" box on the diagram.
> >>>>
> >>>>

Re: [Avocado-devel] Parameter System Overhaul

2017-08-16 Thread Ademar Reis
On Wed, Aug 16, 2017 at 12:01:08PM -0400, Cleber Rosa wrote:
> 
> 
> On 08/07/2017 05:49 PM, Ademar Reis wrote:
> > On Tue, Aug 01, 2017 at 03:37:34PM -0400, Cleber Rosa wrote:
> >> Even though Avocado has had a parameter passing system for
> >> instrumented tests almost from day one, it has been intertwined with
> >> the varianter (then multiplexer) and this is fundamentally wrong.  The
> >> most obvious example of this broken design is the `mux-inject` command
> >> line option::
> >>
> >>   --mux-inject [MUX_INJECT [MUX_INJECT ...]]
> >> Inject [path:]key:node values into the final
> >> multiplex
> >> tree.
> >>
> >> This is broken design not because such varianter implementations can
> >> be tweaked over the command line (that's fine).  It's broken because it
> >> is the recommended way of passing parameters on the command line.
> >>
> >> The varianter (or any other subsystem) should be able to act as a
> >> parameter provider, but can not dictate that parameters must first be
> >> nodes/key/values of its own internal structure.
> > 
> > Correct. It's broken because it violates several layers. There would
> > be nothing wrong with something like "--param [prefix:]",
> > for example (more below).
> > 
> >>
> >> The proposed design
> >> ===
> >>
> >> A diagram has been used on a few different occasions, to describe how
> >> the parameters and variants generation mechanism should be connected
> >> to a test and to the overall Avocado architecture.  Here it is, in its
> >> original form::
> >>
> >>       +------+
> >>       | Test |
> >>       +------+
> >>           |
> >>           |
> >> +---------v---------+    +--------------------------------+
> >> | Parameters System |--->| Variants Generation Plugin API |
> >> +-------------------+    +--------------------------------+
> >>       ^          ^                       |
> >>       |          |                       |
> >> +-----+----------+---------------------+ |
> >> | +--------------+ +-----------------+ | |
> >> | | avocado-virt | | other providers | | |
> >> | +--------------+ +-----------------+ | |
> >> +---------------------------------------+ |
> >>                                           |
> >>                     +---------------------+------+
> >>                     |                            |
> >>                     |                            |
> >>           +---------v----------+   +-------------v-----------+
> >>           | Multiplexer Plugin |   | Other variant plugin(s) |
> >>           +--------------------+   +-------------------------+
> >>                     |
> >>                     |
> >>           +---------v------------------------+
> >>           | +------------+ +--------------+  |
> >>           | | --mux-yaml | | --mux-inject |  |
> >>           | +------------+ +--------------+  |
> >>           +----------------------------------+
> >>
> >>
> >> Given that the "Parameter System" is the entry point into the parameters
> >> providers, it should provide two different interfaces:
> >>
> >>  1) An interface for its users, that is, developers writing
> >> `avocado.Test` based tests
> >>
> >>  2) An interface for developers of additional providers, such as the
> >> "avocado-virt" and "other providers" box on the diagram.
> >>
> >> The current state of the first interface is the ``self.params``
> >> attribute.  Hopefully, it will be possible to keep its current interface,
> >> so that tests won't need any kind of compatibility adjustments.
> > 
> > Right. The way I envision the parameters system includes a
> > resolution mechanism, the "path" currently used in params.get().
> > This adds extra specificity to the user who requests a parameter.
> > 
> > But these parameters can be provided by any entity. In the diagram
> > above, they're part of the "Parameter System" box. Examples of
> > "other providers" could be support for a configuration file or a

Re: [Avocado-devel] Parameter System Overhaul

2017-08-08 Thread Ademar Reis
On Tue, Aug 08, 2017 at 01:01:26PM +0200, Lukáš Doktor wrote:
> Hello guys,
> 
> I'm sorry for such a late response, I totally forgot about this email (thanks 
> to Ademar, your response reminded me of it).
> 
> Dne 7.8.2017 v 23:49 Ademar Reis napsal(a):
> > On Tue, Aug 01, 2017 at 03:37:34PM -0400, Cleber Rosa wrote:
> >> Even though Avocado has had a parameter passing system for
> >> instrumented tests almost from day one, it has been intertwined with
> >> the varianter (then multiplexer) and this is fundamentally wrong.  The
> >> most obvious example of this broken design is the `mux-inject` command
> >> line option::
> >>
> >>   --mux-inject [MUX_INJECT [MUX_INJECT ...]]
> >> Inject [path:]key:node values into the final
> >> multiplex
> >> tree.
> >>
> >> This is broken design not because such varianter implementations can
> >> be tweaked over the command line (that's fine).  It's broken because it
> >> is the recommended way of passing parameters on the command line.
> >>
> >> The varianter (or any other subsystem) should be able to act as a
> >> parameter provider, but can not dictate that parameters must first be
> >> nodes/key/values of its own internal structure.
> > 
> > Correct. It's broken because it violates several layers. There would
> > be nothing wrong with something like "--param [prefix:]",
> > for example (more below).
> > 
> Well I wouldn't call it broken. The implementation is fine; we only lack other
> providers which would allow injecting just params, so people are abusing
> `mux-inject` for that.
> 
> >>
> >> The proposed design
> >> ===
> >>
> >> A diagram has been used on a few different occasions, to describe how
> >> the parameters and variants generation mechanism should be connected
> >> to a test and to the overall Avocado architecture.  Here it is, in its
> >> original form::
> >>
> >>       +------+
> >>       | Test |
> >>       +------+
> >>           |
> >>           |
> >> +---------v---------+    +--------------------------------+
> >> | Parameters System |--->| Variants Generation Plugin API |
> >> +-------------------+    +--------------------------------+
> >>       ^          ^                       |
> >>       |          |                       |
> >> +-----+----------+---------------------+ |
> >> | +--------------+ +-----------------+ | |
> >> | | avocado-virt | | other providers | | |
> >> | +--------------+ +-----------------+ | |
> >> +---------------------------------------+ |
> >>                                           |
> >>                     +---------------------+------+
> >>                     |                            |
> >>                     |                            |
> >>           +---------v----------+   +-------------v-----------+
> >>           | Multiplexer Plugin |   | Other variant plugin(s) |
> >>           +--------------------+   +-------------------------+
> >>                     |
> >>                     |
> >>           +---------v------------------------+
> >>           | +------------+ +--------------+  |
> >>           | | --mux-yaml | | --mux-inject |  |
> >>           | +------------+ +--------------+  |
> >>           +----------------------------------+
> >>
> >>
> >> Given that the "Parameter System" is the entry point into the parameters
> >> providers, it should provide two different interfaces:
> >>
> >>  1) An interface for its users, that is, developers writing
> >> `avocado.Test` based tests
> >>
> >>  2) An interface for developers of additional providers, such as the
> >> "avocado-virt" and "other providers" box on the diagram.
> >>
> >> The current state of the first interface is the ``self.params``
> >> attribute.  Hopefully, it will be possible to keep its current interface,
> >> so that tests won't need any kind of compatibility adjustments.
> > 
> > Right. The way I envision the parameters system includes a
> > resolution mechanism, the "path" currentl

Re: [Avocado-devel] Parameter System Overhaul

2017-08-07 Thread Ademar Reis
rameter provider could effectively
> override the values in another parameter provider, given that both
> used the same paths for a number of parameters.
> 
> Yet another approach would be to *not* use paths, and resort to
> completely separate namespaces.  A parameter namespace would be an
> additional level of isolation, which can quickly become exceedingly
> complex.

I think using paths is confusing because it mixes concepts which are
exclusive to the multiplexer (a particular implementation of the
varianter) with an API that is shared by all other parameter
providers.

For example, when you say "merge everything into the tree root
node", are you talking about namespace paths, or paths as used by
the multiplexer when the "!mux" keyword is present?

> 
> As can be seen from the section name, I'm not proposing one solution
> at this point, but hoping that a discussion on the topic would help
> achieve the best possible design.

I think this should be abstract to the Test (in other words, not
exposed through any API). The order, priority and merge of
parameters is a problem to be solved at run-time by the test runner.

All a test needs to "know" is that there's a parameter with the name
it wants.

In the case of clashes, specifying a prioritization should be easy.
We could use a similar approach to how we prioritize Avocado's own
configuration.

Example: from less important to top priorities, when resolving a
call to params.get():

   * "default" value provided to params.get() inside the test code.
   * Params from /etc/avocado/global-variants.ini
   * Params from ~/avocado/global-variants.ini
   * Params from "--params="
   * Params from "--param=[prefix:]"
   * Params from the variant generated from --mux-yaml= (and
 using --mux-inject would have the same effect of changing
  before using it)

The key of this proposal is simplicity and scalability: it doesn't
matter if the user is running the test with the varianter, a simple
config file (--params=) or passing some parameters by hand
(--param=key:value). The Test API and behavior are the same and the
users get a consistent experience.
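
To illustrate the kind of resolution I have in mind, here's a very
rough sketch (names and helpers are invented; this is not a proposal
for the actual implementation): providers are searched from highest to
lowest priority, and the default passed by the test is the last resort.

```python
# Hypothetical sketch of ordered parameter resolution, not the real
# Avocado API.  Each provider is just a mapping; the first provider
# (highest priority) that knows the key wins.
class Params:
    def __init__(self, providers):
        # Highest priority first, i.e. the list above in reverse order:
        # variant, --param, --params file, ~/... ini, /etc/... ini
        self._providers = providers

    def get(self, key, default=None):
        for provider in self._providers:
            if key in provider:
                return provider[key]
        return default


params = Params([
    {"sleep_length": "0.5"},   # from the variant generated by the varianter
    {"timeout": "120"},        # from a hypothetical --param option
    {},                        # from a hypothetical --params file
    {"arch": "x86_64"},        # from ~/avocado/global-variants.ini
    {"arch": "ppc64"},         # from /etc/avocado/global-variants.ini
])

assert params.get("sleep_length", "1") == "0.5"   # variant wins
assert params.get("arch") == "x86_64"             # user ini beats system ini
assert params.get("missing", "fallback") == "fallback"
```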

Thanks.
   - Ademar

> [1] -
> http://avocado-framework.readthedocs.io/en/52.0/api/core/avocado.core.html#avocado.core.varianter.AvocadoParams.get



-- 
Ademar Reis
Red Hat

^[:wq!



Re: [Avocado-devel] [RFC] Recursive Test Discovery [V2]

2017-06-08 Thread Ademar Reis
scover the parent classes ;-)
> >
> >
> >> It's hard coming up with a descriptive word that describes the process
> >> of discovering the classes in the inheritance hierarchy. The word
> >> "ancestry" is the term I favor the most.
> >>
> >> This is not a NACK; maybe it will spark some ideas. If not, that's
> >> OK too.
> >>
> >> -Jeff
> >>
> >>
> >> [1] https://en.wikipedia.org/wiki/Recursion#In_computer_science
> >>
> >>
> >>>
> >>> - How deep is the recursion?
> >>>The proposal is that the recursion goes all the way up to the class
> >>>inheriting from `avocado.Test`.
> >>>
> >>> - Will the recursion respect the parents docstrings?
> >>>The proposal is that the docstrings in the parents are ignored when
> >>>    recursively discovering. Example:
> >>>
> >>>File `/usr/share/avocado/tests/test_base_class.py`::
> >>>
> >>>  from avocado import Test
> >>>
> >>>
> >>>  class BaseClass(Test):
> >>>      """
> >>>      :avocado: disable
> >>>      """
> >>>
> >>>      def test_basic(self):
> >>>          pass
> >>>
> >>>File `/usr/share/avocado/tests/test_first_child.py`::
> >>>
> >>>  from test_base_class import BaseClass
> >>>
> >>>
> >>>  class FirstChild(BaseClass):
> >>>      """
> >>>      :avocado: recursive
> >>>      """
> >>>
> >>>      def test_first_child(self):
> >>>          pass
> >>>
> >>>Will result in::
> >>>
> >>>  $ avocado list test_first_child.py
> >>>  INSTRUMENTED test_first_child.py:FirstChild.test_first_child
> >>>  INSTRUMENTED test_first_child.py:BaseClass.test_basic
> >>>
> >>>
> >>> Expected Results
> >>> 
> >>>
> >>> The expected result is to provide users more flexibility when creating
> >>> the Avocado tests, being able to create a chain of test classes and
> >>> providing only the module containing the last one as a test reference.
> >>>
> >>> Additional Information
> >>> ==
> >>>
> >>> Avocado uses only static analysis to examine the files and this
> >>> feature should stick to this principle in its implementation.
> >>>
> >>
> >
> >
> 

-- 
Ademar Reis
Red Hat

^[:wq!



Re: [Avocado-devel] RFC: Guidelines for categorizing tests

2017-05-22 Thread Ademar Reis
On Mon, May 22, 2017 at 02:58:18PM -0400, Cleber Rosa wrote:
> 
> 
> On 05/22/2017 02:00 PM, Ademar Reis wrote:
> > On Wed, May 17, 2017 at 05:49:36PM -0400, Cleber Rosa wrote:
> >> Introduction
> >> 
> >>
> >> Avocado allows users to select tests to run based on free form "tags".
> >> These tags are given as "docstring directives", that is, special
> >> entries on a class or function docstring.
> >>
> >> As a user of an Avocado based test suite, I'd see value in not **having**
> >> to look at all the test tags before realizing that to not run tests that
> >> require "super user" permissions I should run::
> >>
> >>   $ avocado run test.py --filter-by-tags=-root
> >>
> >> Instead of::
> >>
> >>   $ avocado run test.py --filter-by-tags=-privileged
> >>
> >> Not only that: by having different tests as part of the same job,
> >> the following very odd sequence of command line options may be
> >> needed::
> >>
> >>   $ avocado run test.py test2.py --filter-by-tags=-root,-privileged
> >>
> >> So the goal here is to let users familiar with a given Avocado based
> >> test, to have fair expectations when running another Avocado based
> >> test.
> >>
> >> This was initially going to be a documentation update, but I felt that
> >> it was not fair to make a formal proposal without some initial
> >> brainstorming.
> >>
> >> Proposal
> >> 
> >>
> >> To set the tone for my proposal, I'd like to make most things simple
> >> and easy, while allowing for "everything else" to be doable.
> >>
> >> My general impression is that higher level information about the test
> >> itself and its requirements are going to be the most commonly used
> >> tags, so they must be easily set.  Some examples follow.
> >>
> >> Simple (standalone) tags
> >> 
> >>
> >> Tags by functional area:
> >>
> >>  * cpu - Exercises a system's CPU
> >>  * net - Exercises a system's network devices or networking stack
> >>  * storage - Exercises a system's local storage
> >>  * fs - Exercises a system's file system
> >>
> >> Tags by architecture:
> >>
> >>  * x86_64 - Requires a x86_64 architecture
> >>  * ppc64 - Requires a ppc64 architecture
> >>
> >> Tags by access privileges:
> >>
> >>  * privileged - requires the test to be run with the most privileged,
> >>unrestricted privileges.  For Linux systems, this usually means the
> >>root account
> >>
> >> Composed (key:value) tags
> >> -
> >>
> >> The more specific tags can be achieved by composing a predefined key
> >> with a value.  For instance, to tag a test as needing a specific
> >> CPU flag:
> >>
> >>  * cpu_flag:vmx
> >>
> >> Or a specific PCI device:
> >>
> >>  * pci_id:8086:08b2
> >>
> >> Or even a software package:
> >>
> >>  * package:gcc
> >>
> >> Or a package group altogether:
> >>
> >>  * package_group:development-tools
> >>
> >> Some examples
> >> -
> >>
> >>  * ``cpu,x86_64`` - The test exercises the CPU and requires a
> >>``x86_64`` based platform
> >>
> >>  * ``net,privileged,pci_id:14e4:1657`` - The test exercises either a
> >>network device or the network stack, needs super user privileges
> >>and a "Broadcom Limited NetXtreme BCM5719 Gigabit Ethernet PCIe
> >>(rev 01)" device.
> > 
> > In my understanding, one of the key design aspects of tags is
> > that they're user-defined, optional, and totally arbitrary to the
> > rest of Avocado.  In other words, to Avocado there's no semantics
> > in a tag called "ppc64", "privileged" or "pci_id:8086:08b2".
> > 
> 
> Right, the fact that there are no semantics in tags, and that Avocado
> itself will *not* attempt to interpret the tags has been repeated a
> couple of times.  I hope it's somewhat clearer now.
> 
> > This should be clear in the documentation and in this RFC,
> > otherwise users might be tempted to start tagging tests following
> > some sort of "official list of tags" provided by Avocado, or
>

Re: [Avocado-devel] RFC: Guidelines for categorizing tests

2017-05-22 Thread Ademar Reis
> A list of the tests that were filtered out from the job test suite can
> certainly be a useful part of the job results.

This is close to what I had in mind years ago when I proposed a
simple dependency resolution mechanism, I think we discussed
it in the past:

 - Tests could have a set of tags listed as dependencies (I don't
   think generic tags should be used for it);
 - Environments where tests are run provide a list of tags as
   capabilities;
 - Avocado, when running tests, only runs the test in
   environments where all dependencies are matched by
   capabilities.
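
Purely as an illustration (this is not an existing Avocado API; all
names below are invented), the matching itself is trivial once both
dependencies and capabilities are expressed as sets of tags:

```python
# Illustrative sketch of dependency/capability matching; the function
# and data below are made up, not part of Avocado.
def runnable_environments(test_dependencies, environments):
    """Return the environments whose capabilities satisfy the test."""
    deps = set(test_dependencies)
    return [name for name, caps in environments.items()
            if deps.issubset(caps)]


environments = {
    "bare-metal-x86": {"x86_64", "privileged", "cpu_flag:vmx"},
    "container": {"x86_64"},
}

# A test that needs root and VMX would only run on the first one:
print(runnable_environments({"privileged", "cpu_flag:vmx"}, environments))
# ['bare-metal-x86']
```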

> 
> As in every RFC, feedback is extremely welcome!
> 

I think the generic tags mechanism should be kept arbitrary and
abstract. My first reaction is that there should be no
interpretation of the contents of the tags by other parts of the
system or "official list of tags and their semantics". And I'm
not sure if this is what you're proposing. :-)

Thanks.
   - Ademar

> 
> -- 
> Cleber Rosa
> [ Sr Software Engineer - Virtualization Team - Red Hat ]
> [ Avocado Test Framework - avocado-framework.github.io ]
> [  7ABB 96EB 8B46 B94D 5E0F  E9BB 657E 8D33 A5F2 09F3  ]
> 




-- 
Ademar Reis
Red Hat

^[:wq!



Re: [Avocado-devel] miniRFC: Call `tearDown` after `setUp` failure

2017-03-23 Thread Ademar Reis
On Thu, Mar 23, 2017 at 07:10:23AM +0100, Lukáš Doktor wrote:
> Dne 22.3.2017 v 20:00 Ademar Reis napsal(a):
> > On Wed, Mar 22, 2017 at 07:05:48PM +0100, Lukáš Doktor wrote:
> > > Hello guys,
> > > 
> > > I remember early in development we decided to abort the execution when
> > > `setUp` fails without executing `tearDown`, but over the time I keep
> > > thinking about it and I don't think it's optimal. My main reason is to
> > > simplify `setUp` code, let me demonstrate it on an example.
> > 
> > So you're basically proposing that setUp() and tearDown() are
> > always executed "together". In other words, if setUp() is called,
> > then tearDown() will also be called, even if setUp() resulted in
> > error, exception or cancellation.
> > 
> > I think it makes perfect sense.
> > 
> > The only problem I see is in the case of test interruption
> > (ctrl+c or timeout). I think this will have to be the only
> > exception. In case of interruption, we can't wait to run
> > tearDown().
> > 
> > In summary, the behavior would be:
> > 
> >  normal execution
> >- test status = PASS | FAIL | WARN
> >- setUp() is run
> >- tearDown() is run
> yep
> 
> > 
> >  skip (test runner decision)
> this one is tricky, but we can refine it when we have dep solver...

My understanding is that there are cases where the skip happens
as a decision from the test runner, such as during job replay
and/or timeout. The behavior of such a skip should be exactly the
same as when skipIf() is used.

> 
> >  skipIf() (decorator)
> >- Test is completely skipped by the test runner
> >- Test status = SKIP
> >- Neither setUp() nor tearDown() are run.
> yep
> 
> > 
> >  Uncaught error/exception
> >- test status = ERROR
> >- tearDown() is run
> This is the same behavior as PASS | FAIL | WARN, right?

Right, I see no reason why not.

> 
> > 
> >  self.cancel()
> >- Can be called during any stage of the test execution.
> >  Results in test status = CANCEL.
> >- tearDown() is run.
> Again, the same behavior as PASS | FAIL | WARN.

Ditto.

> 
> > 
> >  Test is interrupted (ctrl+C or timeout)
> >- test status = INTERRUPT
> >- tearDown() is not run
> On timeout I agree, on single ctrl+c we wait till the test is over. I think
> it'd make sense to let the tearDown finish as well unless the double ctrl+c
> is pressed, what do you think?

Agree. What I had in mind was the double ctrl+c.

Thanks.
   - Ademar

> 
> Lukáš
> 
> > 
> > Thanks.
> >- Ademar
> > 
> > > 
> > > Currently when you (safely) want to get a few remote servers, nfs share 
> > > and
> > > local service, you have to:
> > > 
> > > class MyTest(test):
> > >     def setUp(self):
> > >         self.nfs_share = None
> > >         self.remote_servers = []
> > >         self.service = None
> > >         try:
> > >             for addr in self.params.get("remote_servers"):
> > >                 self.remote_servers.append(login(addr))
> > >             self.nfs_share = get_nfs_share()
> > >             self.service = get_service()
> > >         except:
> > >             for server in self.remote_servers:
> > >                 server.close()
> > >             if self.nfs_share:
> > >                 self.nfs_share.close()
> > >             if self.service:
> > >                 self.service.stop()
> > >             raise
> > >
> > >     def tearDown(self):
> > >         if self.nfs_share:
> > >             self.nfs_share.close()
> > >         for server in self.remote_servers:
> > >             server.close()
> > >         if self.service:
> > >             self.service.stop()
> > > 
> > > 
> > > But if the tearDown was also executed, you'd simply write:
> > > 
> > > class MyTest(test):
> > >     def setUp(self):
> > >         self.nfs_share = None
> > >         self.remote_servers = []
> > >         self.service = None
> > >
> > >         for addr in self.params.get("remote_servers"):
> > >             self.remote_servers.append(login(addr))
> > >         self.nfs_share 

Re: [Avocado-devel] miniRFC: Call `tearDown` after `setUp` failure

2017-03-22 Thread Ademar Reis
On Wed, Mar 22, 2017 at 07:05:48PM +0100, Lukáš Doktor wrote:
> Hello guys,
> 
> I remember early in development we decided to abort the execution when
> `setUp` fails without executing `tearDown`, but over the time I keep
> thinking about it and I don't think it's optimal. My main reason is to
> simplify `setUp` code, let me demonstrate it on an example.

So you're basically proposing that setUp() and tearDown() are
always executed "together". In other words, if setUp() is called,
then tearDown() will also be called, even if setUp() resulted in
error, exception or cancellation.

I think it makes perfect sense.

The only problem I see is in the case of test interruption
(ctrl+c or timeout). I think this will have to be the only
exception. In case of interruption, we can't wait to run
tearDown().

In summary, the behavior would be:

 normal execution
   - test status = PASS | FAIL | WARN
   - setUp() is run
   - tearDown() is run

 skip (test runner decision)
 skipIf() (decorator)
   - Test is completely skipped by the test runner
   - Test status = SKIP
   - Neither setUp() nor tearDown() are run.

 Uncaught error/exception
   - test status = ERROR
   - tearDown() is run

 self.cancel()
   - Can be called during any stage of the test execution.
 Results in test status = CANCEL.
   - tearDown() is run.

 Test is interrupted (ctrl+C or timeout)
   - test status = INTERRUPT
   - tearDown() is not run
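
In pseudo-code, the guarantee boils down to something like the sketch
below (runner behavior only, not actual Avocado code; the exception
class is a stand-in for whatever self.cancel() raises):

```python
class TestCancel(Exception):
    """Stand-in for the exception raised by self.cancel()."""


def run_test(test, skip=False):
    # Sketch of the proposed semantics, not the actual Avocado runner.
    if skip:
        return "SKIP"              # neither setUp() nor tearDown() run
    status = "PASS"
    try:
        test.setUp()
        test.test()
    except TestCancel:
        status = "CANCEL"
    except AssertionError:
        status = "FAIL"
    except KeyboardInterrupt:
        raise                      # hard interrupt: tearDown() is skipped
    except Exception:
        status = "ERROR"
    test.tearDown()                # reached for PASS/FAIL/ERROR/CANCEL alike
    return status
```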

Thanks.
   - Ademar

> 
> Currently when you (safely) want to get a few remote servers, nfs share and
> local service, you have to:
> 
> class MyTest(test):
>     def setUp(self):
>         self.nfs_share = None
>         self.remote_servers = []
>         self.service = None
>         try:
>             for addr in self.params.get("remote_servers"):
>                 self.remote_servers.append(login(addr))
>             self.nfs_share = get_nfs_share()
>             self.service = get_service()
>         except:
>             for server in self.remote_servers:
>                 server.close()
>             if self.nfs_share:
>                 self.nfs_share.close()
>             if self.service:
>                 self.service.stop()
>             raise
> 
>     def tearDown(self):
>         if self.nfs_share:
>             self.nfs_share.close()
>         for server in self.remote_servers:
>             server.close()
>         if self.service:
>             self.service.stop()
> 
> 
> But if the tearDown was also executed, you'd simply write:
> 
> class MyTest(test):
>     def setUp(self):
>         self.nfs_share = None
>         self.remote_servers = []
>         self.service = None
> 
>         for addr in self.params.get("remote_servers"):
>             self.remote_servers.append(login(addr))
>         self.nfs_share = get_nfs_share()
>         self.service = get_service()
> 
>     def tearDown(self):
>         if self.nfs_share:
>             self.nfs_share.close()
>         for server in self.remote_servers:
>             server.close()
>         if self.service:
>             self.service.stop()
> 
> As you can see the current solution requires catching exceptions and
> basically writing the tearDown twice. Yes, properly written tearDown could
> be manually executed from the `setUp`, but that looks a bit odd and my
> experience is that people usually just write:
> 
> class MyTest(test):
>     def setUp(self):
>         self.remote_servers = []
>         for addr in self.params.get("remote_servers"):
>             self.remote_servers.append(login(addr))
>         self.nfs_share = get_nfs_share()
>         self.service = get_service()
> 
>     def tearDown(self):
>         self.nfs_share.close()
>         for server in self.remote_servers:
>             server.close()
>         self.service.stop()
> 
> which usually works, but when `get_nfs_share` fails, the remote_servers
> are not cleaned up, which might spoil the following tests (as they might be
> persistent, e.g. via aexpect).
> 
> Kind regards,
> Lukáš
> 
> PS: Yes, the tearDown is unsafe as when `nfs_share.close()` fails the rest
> is not cleaned up. This is just a demonstration and the proper tearDown and
> a proper setUp for the current behavior would be way more complex.
> 




-- 
Ademar Reis
Red Hat

^[:wq!



Re: [Avocado-devel] Avocado-vt support for Ubuntu 14.04

2017-01-25 Thread Ademar Reis
On Wed, Jan 25, 2017 at 10:44:17AM +0500, Adil Kamal wrote:
> Hello,
> 
> Please confirm if avocado (and more particularly avocado-vt) is supported
> for Ubuntu in general, and Ubuntu 14.04 specifically. I am able to install
> avocado (with some plugin errors though), however I cannot get avocado-vt to
> install on Ubuntu. Any clarity would be appreciated. I am looking for a
> tool to test KVM compatibility with ARM v8 hardware. Thank you.
> 

Hi Adil.

Avocado is not supported by the primary development team on
Ubuntu, but we welcome bug reports and messages seeking
assistance on the mailing list. The team will do their best to
fix low hanging fruit problems and will gladly accept patches
with fixes on other distros or platforms.  Besides, others from
the community may be able to help.

Since we're talking about supportability, it's worth linking to
some previous documentation about this subject:

RFC: Avocado maintainability and integration with avocado-vt
https://www.redhat.com/archives/avocado-devel/2016-April/msg00038.html

Getting Started documentation:
http://avocado-framework.readthedocs.io/en/45.0/GetStartedGuide.html

Thanks.
   - Ademar

-- 
Ademar Reis
Red Hat

^[:wq!



Re: [Avocado-devel] Docs also available now on pythonhosted.org/avocado-framework

2017-01-12 Thread Ademar Reis
On Thu, Jan 12, 2017 at 12:26:09PM -0200, Cleber Rosa wrote:
> Hi folks,
> 
> While working on "Check/Work around PIP upload failures"[1], I noticed
> that the "http://pythonhosted.org/avocado-framework" URL is ours, and
> PyPI let us host documentation there.
> 
> Instead of a 404, let's also upload the latest released docs[2] together
> with the released (code) tarball to PyPI.
> 
> Thoughts?

Hi Cleber.

Duplicating content on the web is not a good practice and will
hurt our search engine ranking.

Instead, I recommend a temporary redirection (302 or 307).

Thanks.
   - Ademar

> 
> [1] -
> https://trello.com/c/w6dk6RDE/888-check-work-around-pip-upload-failures
> [2] -
> https://trello.com/c/hdh4w8WJ/890-upload-docs-to-pypi-keep-http-pythonhosted-org-avocado-framework-with-content
> 
> -- 
> Cleber Rosa
> [ Sr Software Engineer - Virtualization Team - Red Hat ]
> [ Avocado Test Framework - avocado-framework.github.io ]
> 




-- 
Ademar Reis
Red Hat

^[:wq!



[Avocado-devel] Fwd: [Qemu-devel] [PATCH v2] scripts: add "git.orderfile" for ordering diff hunks by pathname patterns

2016-12-03 Thread Ademar Reis
FYI, follow-up to the discussion about git.orderFile. This patch
is likely to be merged in QEMU. The discussion that led to this
patch can be found here:

https://lists.gnu.org/archive/html/qemu-devel/2016-11/msg05339.html
https://lists.gnu.org/archive/html/qemu-devel/2016-12/msg00221.html
https://lists.gnu.org/archive/html/qemu-devel/2016-12/msg00224.html

My guess is that this setting does affect github and (our version of
this change) is worth adding to our git repositories.

Thanks.
   - Ademar

- Forwarded message from Laszlo Ersek  -

Date: Fri,  2 Dec 2016 22:01:52 +0100
From: Laszlo Ersek 
Subject: [Qemu-devel] [PATCH v2] scripts: add "git.orderfile" for ordering diff 
hunks by pathname patterns
To: qemu devel list 
Cc: Fam Zheng , "Michael S. Tsirkin" , Max 
Reitz , Gerd Hoffmann ,
Stefan Hajnoczi , John Snow 

When passed to git-diff (and to every other git command producing diffs
and/or diffstats) with "-O" or "diff.orderFile", this list of patterns
will place the more declarative / abstract hunks first, while changes to
imperative code / details will be near the end of the patches. This saves
on scrolling / searching and makes for easier reviewing.

We intend to advise contributors in the Wiki to run

  git config diff.orderFile scripts/git.orderfile

once, as part of their initial setup, before formatting their first (or,
for repeat contributors, next) patches.

See the "-O" option and the "diff.orderFile" configuration variable in
git-diff(1) and git-config(1).

Cc: "Michael S. Tsirkin" 
Cc: Eric Blake 
Cc: Fam Zheng 
Cc: Gerd Hoffmann 
Cc: John Snow 
Cc: Max Reitz 
Cc: Stefan Hajnoczi 
Signed-off-by: Laszlo Ersek 
---

Notes:
v2:
- "Makefile" -> "Makefile*" [Gerd]
- add leading comment [Gerd]
- add "docs/*" (note, there are *.txt files outside of docs/, so keeping
  those too) [Max, Fam, Eric]

 scripts/git.orderfile | 20 
 1 file changed, 20 insertions(+)
 create mode 100644 scripts/git.orderfile

diff --git a/scripts/git.orderfile b/scripts/git.orderfile
new file mode 100644
index ..3cab16e0505c
--- /dev/null
+++ b/scripts/git.orderfile
@@ -0,0 +1,20 @@
+# Apply this diff order to your git configuration with the command
+#
+#   git config diff.orderFile scripts/git.orderfile
+
+docs/*
+*.txt
+configure
+GNUmakefile
+makefile
+Makefile*
+*.mak
+qapi-schema*.json
+qapi/*.json
+include/qapi/visitor.h
+include/qapi/visitor-impl.h
+scripts/qapi.py
+scripts/*.py
+*.h
+qapi/qapi-visit-core.c
+*.c
-- 
2.9.2



- End forwarded message -

-- 
Ademar Reis
Red Hat

^[:wq!



[Avocado-devel] Fwd: [Qemu-devel] a suggestion to place *.c hunks last in patches

2016-11-30 Thread Ademar Reis
Saw this message on qemu-devel and I think it's a nice suggestion
for Avocado developers.

The ordering for a python project should be different, but you
get the idea (replies to this thread with the suggested list are
welcome).
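
To kick off that discussion, something along these lines could be a
starting point for Avocado (just a rough, untested suggestion; the
globs would need to be adjusted to the actual repository layout):

```
# declarative / descriptive files first...
docs/*
*.rst
*.txt
setup.py
setup.cfg
Makefile*
requirements*.txt
# ...then the code itself
avocado/*
optional_plugins/*
selftests/*
*.py
```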

Thanks.
   - Ademar

- Forwarded message from Laszlo Ersek  -

Date: Wed, 30 Nov 2016 11:08:27 +0100
From: Laszlo Ersek 
Subject: [Qemu-devel] a suggestion to place *.c hunks last in patches
To: qemu devel list 

Recent git releases support the diff.orderFile permanent setting. (In
older releases, the -O option had to be specified on the command line,
or in aliases, for the same effect, which was quite inconvenient.) From
git-diff(1):

>    -O<orderfile>
>    Output the patch in the order specified in the <orderfile>,
   which has one shell glob pattern per line. This overrides
   the diff.orderFile configuration variable (see git-
   config(1)). To cancel diff.orderFile, use -O/dev/null.

In my experience, an order file such as:

configure
*Makefile*
*.json
*.txt
*.h
*.c

that is, a priority order that goes from
descriptive/declarative/abstract to imperative/specific works wonders
for reviewing.

Randomly picked example:

[Qemu-devel] [PATCH] virtio-gpu: track and limit host memory allocations
http://lists.nongnu.org/archive/html/qemu-devel/2016-11/msg05144.html

This patch adds several fields to several structures first, and then it
does things with those new fields. If you think about what the English
verb "to declare" means, it's clear you want to see the declaration
first (same as the compiler), and only then how the field is put to use.

Thanks!
Laszlo


- End forwarded message -----

-- 
Ademar Reis
Red Hat

^[:wq!



Re: [Avocado-devel] Tests stable tmpdir

2016-10-30 Thread Ademar Reis
On Thu, Oct 27, 2016 at 02:00:53PM -0300, Cleber Rosa wrote:
> 
> On 10/25/2016 12:15 PM, Ademar Reis wrote:
> > On Mon, Oct 24, 2016 at 06:14:14PM -0300, Cleber Rosa wrote:
> >>
> >> On 10/24/2016 10:27 AM, Amador Pahim wrote:
> >>> Hello,
> >>>
> >>> I saw a number of requests about setUpClass/tearDownClass. We don't
> >>> actually support them in Avocado, as already stated in our docs, but
> >>> most of the requests are actually interested in have a temporary
> >>> directory that can be the same throughout the job, so every test can
> >>> use that directory to share information that is common to all the
> >>> tests.
> >>>
> >>> One way to provide that would be exposing the Job temporary directory,
> >>> but providing a supported API where a test can actually write to
> >>> another test results can break our promise that tests are independent
> >>> from each other.
> >>>
> >>
> >> Yes, the initial goal of a job temporary directory is to prevent clashes
> >> and allow proper cleanup when a job is finished.  For those not familiar
> >> with the current problems of (global) temporary directories:
> >>
> >> https://trello.com/c/qgSTIK0Y/859-single-data-dir-get-tmp-dir-per-interpreter-breaks-multiple-jobs
> > 
> > Also, let's keep in mind that the architecture of Avocado is
> > hierarchical and tests should not have access or knowledge about
> > the job they're running on (I honestly don't know how much of
> > this is true in practice today, but if it happens somewhere, it
> > should be considered a problem).
> > 
> > Anyway, what I want to say is that we should not expose a job
> > directory to tests.
> > 
> 
> I believe we have to be clear about our architecture proposal, but
> also honest about how we currently deviate from it.  Avocado-VT, for
> instance, relies on the temporary dir that exists across tests.

Agree. Avocado-vt is an exceptional case. It's intrusive and
depends on multiple parts of avocado-core, mixing job and test
concepts, even though in theory it's a third-party plugin.

> 
> >>
> >>
> >>> Another way that comes to my mind is to use the pre/post plugin to
> >>> handle that. On `pre`, we can create a temporary directory and set an
> >>> environment variable with the path for it. On `post` we remove that
> >>> directory. Something like:
> >>>
> >>> ```
> >>> class TestsTmpdir(JobPre, JobPost):
> >>>     ...
> >>>
> >>>     def pre(self, job):
> >>>         os.environ['AVOCADO_TESTS_TMPDIR'] = tempfile.mkdtemp(prefix='avocado_')
> >>>
> >>>     def post(self, job):
> >>>         if os.environ.get('AVOCADO_TESTS_TMPDIR') is not None:
> >>>             shutil.rmtree(os.environ.get('AVOCADO_TESTS_TMPDIR'))
> >>> ```
> >>>
> >>> Thoughts?
> >>>
> >>
> >> I think this can be a valid solution, that promises very little to
> >> tests.  It doesn't break our assumption of how tests should not depend
> >> on each other, and it reinforces that we aim at providing job level
> >> orchestration.
> > 
> > Thinking from the architecture perspective once again, this is a
> > bit different from what you proposed before, but not that much
> > (let's say it's a third-party "entity" called
> > "AVOCADO_TESTS_TMPDIR" available to all processes in the job
> > environment, unique per job).
> > 
> > It's a bit better, but first of all, it should be named,
> > implemented and even enabled in a more explicit way to prevent
> > users from abusing it.
> > 
> 
> This kind of proposal is really a short (or mid) term compromise.  We
> don't want to endorse this as part of our architecture or propose that
> tests are written to depend on it.  Still, we can't, at the moment, offer
> a better solution.
> 
> Shipping it as a contrib plugin can help real users have better
> tests.  Not optimal or perfect ones, but still better than what can
> honestly be done today.

Agree. I suggest using a name that better represents what this
resource is. Maybe calling it something like
"XXX_TESTS_COMMON_TMPDIR" instead of "AVOCADO_TESTS_TMPDIR".

("XXX" is my attempt to show this is a non-supported variable --
given it's a contrib plugin, users should be free 

Re: [Avocado-devel] Tests stable tmpdir

2016-10-25 Thread Ademar Reis
On Mon, Oct 24, 2016 at 06:14:14PM -0300, Cleber Rosa wrote:
> 
> On 10/24/2016 10:27 AM, Amador Pahim wrote:
> > Hello,
> > 
> > I saw a number of requests about setUpClass/tearDownClass. We don't
> > actually support them in Avocado, as already stated in our docs, but
> > most of the requests are actually interested in having a temporary
> > directory that can be the same throughout the job, so every test can
> > use that directory to share information that is common to all the
> > tests.
> > 
> > One way to provide that would be exposing the Job temporary directory,
> > but providing a supported API where a test can actually write to
> > another test's results can break our promise that tests are independent
> > from each other.
> > 
> 
> Yes, the initial goal of a job temporary directory is to prevent clashes
> and allow proper cleanup when a job is finished.  For those not familiar
> with the current problems of (global) temporary directories:
> 
> https://trello.com/c/qgSTIK0Y/859-single-data-dir-get-tmp-dir-per-interpreter-breaks-multiple-jobs

Also, let's keep in mind that the architecture of Avocado is
hierarchical and tests should not have access or knowledge about
the job they're running on (I honestly don't know how much of
this is true in practice today, but if it happens somewhere, it
should be considered a problem).

Anyway, what I want to say is that we should not expose a job
directory to tests.

> 
> 
> > Another way that comes to my mind is to use the pre/post plugin to
> > handle that. On `pre`, we can create a temporary directory and set an
> > environment variable with the path for it. On `post` we remove that
> > directory. Something like:
> > 
> > ```
> > class TestsTmpdir(JobPre, JobPost):
> >     ...
> > 
> >     def pre(self, job):
> >         os.environ['AVOCADO_TESTS_TMPDIR'] = tempfile.mkdtemp(prefix='avocado_')
> > 
> >     def post(self, job):
> >         if os.environ.get('AVOCADO_TESTS_TMPDIR') is not None:
> >             shutil.rmtree(os.environ.get('AVOCADO_TESTS_TMPDIR'))
> > ```
> > 
> > Thoughts?
> > 
> 
> I think this can be a valid solution, that promises very little to
> tests.  It doesn't break our assumption of how tests should not depend
> on each other, and it reinforces that we aim at providing job level
> orchestration.

Thinking from the architecture perspective once again, this is a
bit different from what you proposed before, but not that much
(let's say it's a third-party "entity" called
"AVOCADO_TESTS_TMPDIR" available to all processes in the job
environment, unique per job).

It's a bit better, but first of all, it should be named,
implemented and even enabled in a more explicit way to prevent
users from abusing it.

But my real solution is below:

> 
> Although, since we have discussed giving a job its own temporary dir,
> and we already expose a lot via environment variables to tests:
> 
> http://avocado-framework.readthedocs.io/en/latest/WritingTests.html#environment-variables-for-simple-tests
> 
> And also to job pre/post script plugins:
> 
> http://avocado-framework.readthedocs.io/en/latest/ReferenceGuide.html#script-execution-environment
> 
> I'm afraid this could bring inconsistencies or clashes in the very near
> future.  What I propose for the immediate term is to write a
> contrib/example plugin that we can either fold into the Job class
> itself (giving it a real temporary dir, with variables exposed to test
> processes) or make it a 1st class plugin.
> 
> How does it sound?

If we expose something like this as a supported API, we should
make it as an "external resource available for tests" with proper
access control (locking) mechanisms. In other words, this feature
is a lot more about the locking API than about a global directory
for tests.

In summary, a "job/global directory available to all tests"
should in fact be handled as "a global resource available to all
tests".  Notice it has no relationship to jobs whatsoever.
Creating it per-job would be simply an implementation detail.

Think of the hypothetical examples below and consider the
architectural implication:

(all tests in these examples are making use of the shared dir)

$ export MY_DIR=~/tmp/foobar
$ avocado run my-test.py
$ avocado run my-test1.py my-test2.py
$ avocado run my-test.py & avocado run my-test.py
$ avocado run --enable-parallel-run my-test*.py

Some of the above will break with today's Avocado. Now imagine we
provide a locking API for shared resources. Tests could then do
this:

lock($MY_DIR)
  do_something...
unlock($MY_DIR)

Or maybe even better, simply declare they're using $MY_DIR during
their entire execution via a decorator or some (future)
dependency API:

@using($MY_DIR)
def ...

With that in place, we could have a plugin, or even a first-class
citizen API, to create and expose a unique directory per job.
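
Just to make the idea concrete, a toy sketch of what such a declaration
could look like (nothing like this exists in Avocado today; all names
here are invented, and the lock file location is arbitrary):

```python
# Toy sketch of a "shared resource" declaration for tests; purely
# hypothetical, not an existing Avocado API.
import fcntl
import functools
import os


def using(shared_dir):
    """Hold an exclusive lock on shared_dir for the whole test method."""
    def decorator(method):
        @functools.wraps(method)
        def wrapper(self, *args, **kwargs):
            lock_path = os.path.join(shared_dir, ".avocado.lock")
            with open(lock_path, "w") as lock_file:
                fcntl.flock(lock_file, fcntl.LOCK_EX)  # blocks other jobs/tests
                try:
                    return method(self, *args, **kwargs)
                finally:
                    fcntl.flock(lock_file, fcntl.LOCK_UN)
        return wrapper
    return decorator


class MyTest:                      # stands in for avocado.Test
    @using(os.environ.get("MY_DIR", "/tmp/foobar"))
    def test(self):
        pass                       # safe to use the shared dir here
```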

Thanks.
   - Ademar

-- 
Ademar Reis
Red Hat

^[:wq!



Re: [Avocado-devel] RFC: Plugin execution order

2016-10-04 Thread Ademar Reis
ering.
> > 
> > Order here is used as a numerical value to indicate a relative
> > ordering, correct? I'm not sure I like the name "order"; how about
> > "sequence"?
> > 
> > A drawback to this approach is that you still have to come up with a
> > rule for what happens when two plugins have the same sequence number.
> > 
> 
> Then the order is undefined.  I don't see a clean way around this.
> 
> >> Feedback is highly appreciated!
> > 
> > Another way to specify the order is to use an attribute at the
> > [plugins] or [plugins.result] level with certain expected values:
> > 
> > [plugins]
> > execution-order = random | lexical | user-defined
> > 
> > where 'user-defined' would require yet another attribute
> > that defines the sequence. 'user-defined' would also have to
> > handle the 'unspecified' condition.
> > 
> > I think this is too complicated, but I offer it as a counter-example.
> > 
> 
> Random is default, and I fail to see the practical use of lexical.
> About user-defined, I was trying to avoid (at this point) code based
> ordering.

Please be careful with the wording here: I think you mean
arbitrary or undefined, not random.

If you're not purposefully implementing random ordering (by
reading from an entropy source such as /dev/random), then it's
simply undefined or arbitrary (dependent on an external,
undefined source, such as the file-system ordering, or the
behavior of a library).

The problem with undefined behavior is that users might start
trusting it if they always see the same behavior in practice. For
example, plugins will always execute in the same order if the
user is not touching the file-system. Then, the order *might*
change when the user upgrades avocado.

If you're implementing plugin ordering, the default should be
something stable and predictable.

Thanks.
   - Ademar

> 
> > Finally, the typo got me to thinking: should it be "plugin" or
> > "plugins"? I don't care one way or the other. However, I do believe
> > that the attribute should be named consistent with the rest of
> > avocado. If there isn't a style guide for avocado proposals and naming
> > conventions, it would be good to have one. Consistency is going to be
> > hard to establish and maintain without it.
> > 
> 
> I would go with "plugins", because we already have a global section that
> configures "all plugins behavior", and that section is called "plugins".
>  Then, I see "plugins." as "sub section" (logically only) of that
> main one, that configures plugins of a given type.
> 
> > -Jeff
> > 
> > 
> 
> Thanks for the feedback!
> 
> >>
> >> -- 
> >> Cleber Rosa
> >> [ Sr Software Engineer - Virtualization Team - Red Hat ]
> >> [ Avocado Test Framework - avocado-framework.github.io ]
> >>
> 
> -- 
> Cleber Rosa
> [ Sr Software Engineer - Virtualization Team - Red Hat ]
> [ Avocado Test Framework - avocado-framework.github.io ]
> 




-- 
Ademar Reis
Red Hat

^[:wq!



Re: [Avocado-devel] option --output-check-record behavior

2016-09-14 Thread Ademar Reis
On Wed, Sep 14, 2016 at 05:27:03PM -0300, Ademar Reis wrote:
> On Wed, Sep 14, 2016 at 06:54:31PM +0200, Lukáš Doktor wrote:
> > Dne 9.9.2016 v 23:25 Lucas Meneghel Rodrigues napsal(a):
> > > 
> > > 
> > > On Fri, Sep 9, 2016 at 8:14 AM Marcos E. Matsunaga
> > > mailto:marcos.matsun...@oracle.com>> wrote:
> > > 
> > > Hi guys,
> > > 
> > > First of all, thanks again for your help. I really appreciate it.
> > > 
> > > I found an interesting behavior. If I set loglevel=info in
> > > /etc/avocado/avocado.conf, it will not produce any content in
> > > stderr.expected and stdout.expected. If I set loglevel=debug, then
> > > it will work as it should. I don't mind running in debug mode, but I
> > > am not sure the behavior should be affected by loglevel.
> > > 
> > > Anyway, the question I have is about using --output-check-record
> > > when multiplexing. I notice that the files stdout.expected and
> > > stderr.expected get overwritten on each variant. I will assume there
> > > is a way to save each of the variant results and then use them to
> > > check. The problem is that I went through the documentation and
> > > didn't find anything that talks about it.
> > This is the expected behavior. The `--output-check-record` is a simple tool
> > to allow checking simple tests like `cat /etc/fedora-release`; it was never
> > meant for heavy stuff including the multiplexer.
> 
> Not really.
> 
> --output-check-* should be fully compatible with the multiplexer.
> What happens is that it was designed in a time when the concepts
> of what a Test is where not very clear and it needs to be fixed
> now. That is, we have a bug.
> 
> Following the definitions from the "Test ID RFC", I would say the
> .data directory should be in the format
> [.Variant-ID].data. Which means the multiplexer should
> work fine when combined with output-check: both -record and
> -check.

https://trello.com/c/wiGkOFSa/828-test-s-data-directory-should-include-the-variant-id

Thanks.
   - Ademar

> 
> > Consider running the same test
> > with a different file or with an adjusted multiplex file (different number of
> > variants, ...). What would be the expected results?
> > 
> > Anyway looking at your test, I'd implement it as two tests:
> > 
> > 1. start
> > 2. stop
> > 
> > Looking something like this:
> > 
> > ```
> > def start(...):
> >     # start the xen machine with given attributes
> > 
> > def stop(...):
> >     # stop the xen machine with given attributes
> > 
> > class StartTest(...):
> >     def test(self):
> >         start()
> >     def tearDown(self):
> >         stop()
> > 
> > class StopTest(...):
> >     def setUp(self):
> >         start()
> >     def test(self):
> >         stop()
> > ```
> > 
> > Which would make sure to always clean up after itself. Another solution would
> > be to have start & stop as a single test, but having one test to start a
> > machine and leaving it after the test is finished does not look nice to me.
> > 
> > 
> > > 
> > > Thanks again.
> > > 
> > > BTW, is the whole development team Brazilian?
> > > 
> > > No, we also have Lukas, from Czech republic, and also contributors in
> > > China and India.
> > Actually we have two core (Red Hat) people located in the Czech Republic and one
> > in the USA, and incrementally we get more and more contributors from all around
> > the world.
> > 
> > > 
> > > 
> > > 
> > > Regards,
> > 
> > Regards,
> > Lukáš
> > 
> > > 
> > > Marcos Eduardo Matsunaga
> > > 
> > > Oracle USA
> > > Linux Engineering
> > > 
> > > “The statements and opinions expressed here are my own and do not
> > > necessarily represent those of Oracle Corporation.”
> > > 
> > 
> 
> 
> 
> 
> -- 
> Ademar Reis
> Red Hat
> 
> ^[:wq!

-- 
Ademar Reis
Red Hat

^[:wq!



Re: [Avocado-devel] option --output-check-record behavior

2016-09-14 Thread Ademar Reis
On Wed, Sep 14, 2016 at 06:54:31PM +0200, Lukáš Doktor wrote:
> Dne 9.9.2016 v 23:25 Lucas Meneghel Rodrigues napsal(a):
> > 
> > 
> > On Fri, Sep 9, 2016 at 8:14 AM Marcos E. Matsunaga
> > mailto:marcos.matsun...@oracle.com>> wrote:
> > 
> > Hi guys,
> > 
> > First of all, thanks again for your help. I really appreciate it.
> > 
> > I found an interesting behavior. If I set loglevel=info in
> > /etc/avocado/avocado.conf, it will not produce any content in
> > stderr.expected and stdout.expected. If I set loglevel=debug, then
> > it will work as it should. I don't mind running in debug mode, but I
> > am not sure the behavior should be affected by loglevel.
> > 
> > Anyway, the question I have is about using --output-check-record
> > when multiplexing. I notice that the files stdout.expected and
> > stderr.expected get overwritten on each variant. I will assume there
> > is a way to save each of the variant results and then use them to
> > check. The problem is that I went through the documentation and
> > didn't find anything that talks about it.
> This is the expected behavior. The `--output-check-record` is a simple tool
> to allow checking simple tests like `cat /etc/fedora-release`; it was never
> meant for heavy stuff including the multiplexer.

Not really.

--output-check-* should be fully compatible with the multiplexer.
What happens is that it was designed in a time when the concepts
of what a Test is were not very clear and it needs to be fixed
now. That is, we have a bug.

Following the definitions from the "Test ID RFC", I would say the
.data directory should be in the format
[.Variant-ID].data. Which means the multiplexer should
work fine when combined with output-check: both -record and
-check.

Thanks.
   - Ademar

> Consider running the same test
> with a different file or with an adjusted multiplex file (different number of
> variants, ...). What would be the expected results?
> 
> Anyway looking at your test, I'd implement it as two tests:
> 
> 1. start
> 2. stop
> 
> Looking something like this:
> 
> ```
> def start(...):
>     # start the xen machine with given attributes
> 
> def stop(...):
>     # stop the xen machine with given attributes
> 
> class StartTest(...):
>     def test(self):
>         start()
>     def tearDown(self):
>         stop()
> 
> class StopTest(...):
>     def setUp(self):
>         start()
>     def test(self):
>         stop()
> ```
> 
> Which would make sure to always clean up after itself. Another solution would
> be to have start & stop as a single test, but having one test to start a
> machine and leaving it after the test is finished does not look nice to me.
> 
> 
> > 
> > Thanks again.
> > 
> > BTW, is the whole development team Brazilian?
> > 
> > No, we also have Lukas, from Czech republic, and also contributors in
> > China and India.
> Actually we have two core (Red Hat) people located in the Czech Republic and one
> in the USA, and incrementally we get more and more contributors from all around
> the world.
> 
> > 
> > 
> > 
> > Regards,
> 
> Regards,
> Lukáš
> 
> > 
> > Marcos Eduardo Matsunaga
> > 
> > Oracle USA
> > Linux Engineering
> > 
> > “The statements and opinions expressed here are my own and do not
> > necessarily represent those of Oracle Corporation.”
> > 
> 




-- 
Ademar Reis
Red Hat

^[:wq!



Re: [Avocado-devel] (potential) design issue in multiplexer

2016-08-17 Thread Ademar Reis
nd finds it in `/my/kvm/plugins/virt/qemu` and in
> `/plugins/virt/qemu` -> failure
> 
> Yes, one could solve it by defining another `mux-path` to `/my` or even
> `/my/kvm`, but that just adds the complexity.
> 
> Let me also mention why we like to extend nodes from the right. Imagine we
> expect `disk_type` in `/virt/hw/disk/*`. The yaml file might look like this:
> 
> ```
> virt:
>     hw:
>         disk: !mux
>             virtio_blk:
>                 disk_type: virtio_blk
>             virtio_scsi:
>                 disk_type: virtio_scsi
> ```
> 
> Now the user develops `virtio_scsi_next` and he wants to compare them. Today
> he simply merges this config with the above:
> 
> ```
> virt:
>     hw:
>         disk: !mux
>             virtio_scsi_debug:
>                 disk_type: virtio_scsi
>                 enable_next: True
> ```
> and avocado produces 3 variants, where `params.get("disk_type",
> "/virt/hw/disk/*")` reports the 3 defined variants. If we try to do the same
> with `*/virt/hw/disk` we have to modify the first file:
> 
> ```
> !mux
> virtio_blk:
>     virt:
>         hw:
>             disk:
>                 disk_type: virtio_blk
> virtio_scsi:
>     virt:
>         hw:
>             disk:
>                 disk_type: virtio_scsi
> ```
> 
> One would want to prepend yet another node in front of it, because we don't
> want to vary over disk types only, but also over other items (like cpus,
> ...). The problem is that the first category has to again be unique to the
> whole multiplex tree in order to not clash with the other items. And that is
> why the tree path was actually introduced: to get rid of this
> global namespace.
> 
> Right now the only solution I see is to change the way `!mux` works.
> Currently it multiplexes all the children, but (not sure if easily done) it
> should only define the children which mix together. Therefore (back to the
> original example) one would be able to say:
> 
> ```
> plugins:
>     virt:
>         qemu:
>             enabled: !newmux
>                 kvm: on
>             disabled: !newmux
>                 kvm: off
>         paths:
>             qemu_dst_bin: None
>             qemu_img_bin: None
>             qemu_bin: None
>         migrate:
>             timeout: 60.0
> ```
> 
> which would produce:
> 
> ```
>  ┗━━ plugins
>   ┗━━ virt
>┣━━ qemu
>┃╠══ enabled
>┃║ → kvm: on
>┃╠══ disabled
>┃┃ → kvm: off
>┃┣━━ paths
>┃┃ → qemu_dst_bin: None
>┃┃ → qemu_img_bin: None
>┃┃ → qemu_bin: None
>┃┗━━ migrate
>┃  → timeout: 60.0
> ```
> 
> and in terms of variants:
> 
> ```
> Variant 1:/plugins/virt/qemu/enabled, /plugins/virt/paths,
> /plugins/virt/migrate
> Variant 2:/plugins/virt/qemu/disabled, /plugins/virt/paths,
> /plugins/virt/migrate
> ```
> 
> I'm looking forward to your suggestions and I hope I'm wrong and that the
> multiplexer (at least the full-spec) can handle this nicely.
> 
> Kind regards,
> Lukáš
> 




-- 
Ademar Reis
Red Hat

^[:wq!



Re: [Avocado-devel] [RFC] Avocado Misc Tests repository policy

2016-08-10 Thread Ademar Reis
e
> merge permission for (aka promote to maintainer) those assiduous
> reviewers with good quality of reviews.
> When this RFC is considered ready, we will update our documentation
> and the avocado-misc-test README file to reflect the information.
> 
> Additional Information
> 
> Any individual willing to make the code review is eligible to do so.
> And the process is simple. Just go there and review the code.
> Given the high volume of code coming from IBM, I had a chat with
> Praveen Pandey, an IBMer and assiduous author of Pull Requests for
> avocado-misc-tests, and he agreed to make reviews in
> avocado-misc-tests.
> 
> 
> Looking forward to reading your comments.
> --
> apahim
> 

-- 
Ademar Reis
Red Hat

^[:wq!



Re: [Avocado-devel] RFC: Avocado multiplexer plugin

2016-07-19 Thread Ademar Reis
base directly. This is not a problem now, it would be no
> > > problem even after turning it into a proper plugin, but it'd become a
> > > problem if we chose to replace `tree` with any arbitrary database, 
> > > therefore
> > > I think it's time to extract the `create_from_yaml` and instead of 
> > > creating
> > > the tree we should only ask the `tree` (or database) to 
> > > inject/update/remove
> > > variables/filters/flags. Then we have a clear interface.
> > > 
> > > So the multiplexer is core and IMO should stay there. Only the
> > > params-feeders should be extracted, which means the PyYAML would be
> > > optional. Also multiplexer is just a database with combinations so it 
> > > could
> > > be used for anything, also we want to keep using one database for test
> > > variants and not mixed settings/config_files/test_params.
> > > 
> > > I hope everything is clear, because if not I'd have to draw a
> > > chart :-D
> > 
> > I think I get what you're proposing. I would do things
> > differently, but this is all going to be internal for a while
> > more and we can improve it incrementally.
> Sure, this is the first (actually second, the first one was to split
> tree+multiplexer and create the Mux object to define the avocado->variant
> interface) step. How I see params now is:
> 
> 1. params-feeders => --mux-inject or yaml parser which should inject
> path/environment/tags into the tree (or let's call it a database, should
> become an independent plugin in the close future)
> 2. tree (database) of path/environment/tags. Its purpose is to store
> key/value pairs as well as information which allows Mux to produce different
> variants (basically slices of the database's environment)
> 3. Mux which takes the tree (database), applies filters and allows producing
> variants
> 4. variant => list of path/environment mappings (slice of the tree),
> currently list of full TreeNode structures. Only their path+environment are
> used (so if you insist we can only send tuple(path, key+values))
> 5. AvocadoParams => driver to get values based on key+path from the variant
> 
> Lukáš
> 
> > 
> > Thanks.
> >- Ademar
> > 
> > > 
> > > Regards,
> > > Lukáš
> > > 
> > > > Thanks.
> > > >- Ademar
> > > > 
> > > > > 2. multiplexer - to produce variants
> > > > > 3. avocado params - to obtain params from variant
> > > > > 
> > > > > We should probably find a suitable name to unambiguously identify each 
> > > > > part as
> > > > > of today we only have multiplexer and avocado params, which can lead 
> > > > > to
> > > > > confusion. I can't come up with any good name so unless you have a 
> > > > > good
> > > > > name we maybe end up with the same name. The situation should be 
> > > > > better,
> > > > > though, because those parts will be really separated.
> > > > > 
> > > > > Thank you for the feedback, let's get my hands dirty. :-)
> > > > > 
> > > > > Regards,
> > > > > Lukáš
> > > > > 
> > > > > > 
> > > > > > > Regards,
> > > > > > > Lukáš
> > > > > > > 
> > > > > > 
> > > > > > [1] - https://docs.python.org/2.6/glossary.html#term-iterable
> > > > > > 
> > > > > 
> > > > 
> > > > 
> > > > 
> > > > 
> > > 
> > 
> > 
> > 
> > 
> 




-- 
Ademar Reis
Red Hat

^[:wq!



Re: [Avocado-devel] RFC: Avocado multiplexer plugin

2016-07-19 Thread Ademar Reis
On Tue, Jul 19, 2016 at 11:39:03AM +0200, Lukáš Doktor wrote:
> On 18.7.2016 at 22:41, Ademar Reis wrote:
> > On Mon, Jul 18, 2016 at 07:33:43PM +0200, Lukáš Doktor wrote:
> > > On 18.7.2016 at 12:46, Cleber Rosa wrote:
> > > > 
> > > > On 07/07/2016 10:44 AM, Lukáš Doktor wrote:
> > > > > On 5.7.2016 at 16:10, Ademar Reis wrote:
> > > > > > On Fri, Jul 01, 2016 at 03:57:31PM +0200, Lukáš Doktor wrote:
> > > > > > > On 30.6.2016 at 22:57, Ademar Reis wrote:
> > > > > > > > On Thu, Jun 30, 2016 at 06:59:39PM +0200, Lukáš Doktor wrote:
> > > > > > > > > Hello guys,
> > > > > > > > > 
> > > > > > > > > the purpose of this RFC is to properly split and define the 
> > > > > > > > > way test
> > > > > > > > > parameters are processed. There are several ways to split them
> > > > > > > > > apart, each
> > > > > > > > > with some benefits and drawbacks.
> > > > > > > > > 
> > > > > > > > > 
> > > > > > > > > Current params process
> > > > > > > > > ==
> > > > > > > > > 
> > > > > > > > > `tree.TreeNode` -> Object allowing to store things 
> > > > > > > > > (environment,
> > > > > > > > > filters,
> > > > > > > > > ...) in tree-structure
> > > > > > > > > `multiplexer.Mux` -> Interface between job and multiplexer. 
> > > > > > > > > Reports
> > > > > > > > > number
> > > > > > > > > of tests and yields modified test_suite templates
> > > > 
> > > > There has been ideas and suggestions about using the multiplexer for
> > > > purposes other than multiplexing tests.  While those other use cases are
> > > > only ideas, `Mux` and its `itertests` method are a good enough name, but
> > > > there may be the opportunity here to provide a common and more standard
> > > > interface for different "multiplexations".  Example:
> > > > 
> > > > 1) For tests, a class TestMux, with an standard Python iterable
> > > > interface[1].
> > > > 2) For test execution location (say different hosts), a HostMux class,
> > > > with the same standard Python interface.
> > > > 
> > > > Even if we find no real use for other Mux* classes, having a default
> > > > iterable implementation is a good idea.  So moving the `tests` from the
> > > > `itertests` method to the `Mux` class name, from `Mux` to `TestsMux`,
> > > > feels right to me.
> > > > 
> > > > UPDATE: then I looked at PR #1293, and noticed that it intends to take
> > > > the custom variant responsibility out of the `multiplexer.Mux()` class.
> > > > It feels right because it moves the variant processing into its own
> > > > domain, kind of what I had in mind by naming it `TestsMux()`.  Still,
> > > > having a standard iterable interface instead of `itertests()` feels
> > > > right to me.
> > > > 
> > > Well without that PR it's impossible to use the python standard __iter__
> > > method as it requires argument. With that cleanup sure, `__iter__` method 
> > > is
> > > better.
> > > 
> > > As for the multiple classes I don't see a reason for it. Multiplexer (as 
> > > the
> > > variants generator) is independent on anything, it simply produces all
> > > possible variants. So let me just turn the `iter_tests` into `__iter__` 
> > > and
> > > we're done.
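
(Purely as an illustration of that change -- not the actual Mux code -- a
standard-iterable variants generator would look roughly like this:)

```
class Mux(object):
    """Toy stand-in for the variants generator with a standard iterable API."""
    def __init__(self, variants):
        self._variants = list(variants)

    def __len__(self):                 # number of variants
        return len(self._variants)

    def __iter__(self):                # replaces the previous itertests()/iter_tests()
        return iter(self._variants)

# e.g. two hypothetical variants
for variant in Mux([{"disk_type": "virtio_blk"}, {"disk_type": "virtio_scsi"}]):
    print(variant)
```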
> > > 
> > > > > > > > > `multiplexer.MuxTree` -> Object representing part of the tree 
> > > > > > > > > from
> > > > > > > > > the root
> > > > > > > > > to leaves or another multiplex domain. Recursively it creates
> > > > > > > > > multiplexed
> > > > > > > > > variants of the full tree.
> > > > > > > > > `multiplexer.AvocadoParams` -> Params object used to retrieve
> > > > > > > > > params from
> > > > > > > > > given path, allows defining several domains for relative paths
> > 

Re: [Avocado-devel] RFC: Avocado multiplexer plugin

2016-07-18 Thread Ademar Reis
On Mon, Jul 18, 2016 at 07:33:43PM +0200, Lukáš Doktor wrote:
> On 18.7.2016 at 12:46, Cleber Rosa wrote:
> > 
> > On 07/07/2016 10:44 AM, Lukáš Doktor wrote:
> > > On 5.7.2016 at 16:10, Ademar Reis wrote:
> > > > On Fri, Jul 01, 2016 at 03:57:31PM +0200, Lukáš Doktor wrote:
> > > > > On 30.6.2016 at 22:57, Ademar Reis wrote:
> > > > > > On Thu, Jun 30, 2016 at 06:59:39PM +0200, Lukáš Doktor wrote:
> > > > > > > Hello guys,
> > > > > > > 
> > > > > > > the purpose of this RFC is to properly split and define the way 
> > > > > > > test
> > > > > > > parameters are processed. There are several ways to split them
> > > > > > > apart, each
> > > > > > > with some benefits and drawbacks.
> > > > > > > 
> > > > > > > 
> > > > > > > Current params process
> > > > > > > ==
> > > > > > > 
> > > > > > > `tree.TreeNode` -> Object allowing to store things (environment,
> > > > > > > filters,
> > > > > > > ...) in tree-structure
> > > > > > > `multiplexer.Mux` -> Interface between job and multiplexer. 
> > > > > > > Reports
> > > > > > > number
> > > > > > > of tests and yields modified test_suite templates
> > 
> > There has been ideas and suggestions about using the multiplexer for
> > purposes other than multiplexing tests.  While those other use cases are
> > only ideas, `Mux` and its `itertests` method are a good enough name, but
> > there may be the opportunity here to provide a common and more standard
> > interface for different "multiplexations".  Example:
> > 
> > 1) For tests, a class TestMux, with an standard Python iterable
> > interface[1].
> > 2) For test execution location (say different hosts), a HostMux class,
> > with the same standard Python interface.
> > 
> > Even if we find no real use for other Mux* classes, having a default
> > iterable implementation is a good idea.  So moving the `tests` from the
> > `itertests` method to the `Mux` class name, from `Mux` to `TestsMux`,
> > feels right to me.
> > 
> > UPDATE: then I looked at PR #1293, and noticed that it intends to take
> > the custom variant responsibility out of the `multiplexer.Mux()` class.
> > It feels right because it moves the variant processing into its own
> > domain, kind of what I had in mind by naming it `TestsMux()`.  Still,
> > having a standard iterable interface instead of `itertests()` feels
> > right to me.
> > 
> Well without that PR it's impossible to use the python standard __iter__
> method as it requires argument. With that cleanup sure, `__iter__` method is
> better.
> 
> As for the multiple classes I don't see a reason for it. Multiplexer (as the
> variants generator) is independent on anything, it simply produces all
> possible variants. So let me just turn the `iter_tests` into `__iter__` and
> we're done.
> 
> > > > > > > `multiplexer.MuxTree` -> Object representing part of the tree from
> > > > > > > the root
> > > > > > > to leaves or another multiplex domain. Recursively it creates
> > > > > > > multiplexed
> > > > > > > variants of the full tree.
> > > > > > > `multiplexer.AvocadoParams` -> Params object used to retrieve
> > > > > > > params from
> > > > > > > given path, allows defining several domains for relative paths
> > > > > > > matching
> > > > > > > defined as `mux_path`s.
> > > > > > > `multiplexer.AvocadoParam` -> Slice of the `AvocadoParams` which
> > > > > > > handles
> > > > > > > given `mux_path`.
> > > > > > > `test.Test.default_params` -> Dictionary which can define test's
> > > > > > > default
> > > > > > > values, it's intended for removal for some time.
> > > > > > > 
> > > > > > > 
> > > > > > > Creating variants
> > > > > > > -
> > > > > > > 
> > > > > > > 1. app
> > > > > > > 2. parser -> creates the root tree `args.default_avocado_params =
> > > > > > > TreeNod

Re: [Avocado-devel] RFC: Avocado multiplexer plugin

2016-07-05 Thread Ademar Reis
On Fri, Jul 01, 2016 at 03:57:31PM +0200, Lukáš Doktor wrote:
> On 30.6.2016 at 22:57, Ademar Reis wrote:
> > On Thu, Jun 30, 2016 at 06:59:39PM +0200, Lukáš Doktor wrote:
> > > Hello guys,
> > > 
> > > the purpose of this RFC is to properly split and define the way test
> > > parameters are processed. There are several ways to split them apart, each
> > > with some benefits and drawbacks.
> > > 
> > > 
> > > Current params process
> > > ==
> > > 
> > > `tree.TreeNode` -> Object allowing to store things (environment, filters,
> > > ...) in tree-structure
> > > `multiplexer.Mux` -> Interface between job and multiplexer. Reports number
> > > of tests and yields modified test_suite templates
> > > `multiplexer.MuxTree` -> Object representing part of the tree from the 
> > > root
> > > to leaves or another multiplex domain. Recursively it creates multiplexed
> > > variants of the full tree.
> > > `multiplexer.AvocadoParams` -> Params object used to retrieve params from
> > > given path, allows defining several domains for relative paths matching
> > > defined as `mux_path`s.
> > > `multiplexer.AvocadoParam` -> Slice of the `AvocadoParams` which handles
> > > given `mux_path`.
> > > `test.Test.default_params` -> Dictionary which can define test's default
> > > values, it's intended for removal for some time.
> > > 
> > > 
> > > Creating variants
> > > -
> > > 
> > > 1. app
> > > 2. parser -> creates the root tree `args.default_avocado_params =
> > > TreeNode()`
> > > 3. plugins.* -> inject key/value into `args.default_avocado_params` (or 
> > > it's
> > > children). One example is `plugins.run`'s --mux-inject, the other is
> > > `avocado_virt`'s default values.
> > > 4. job -> creates multiplexer.Mux() object
> > > a. If "-m" specified, parses and filters the yaml file(s), otherwise
> > > creates an empty TreeNode() called `mux_tree`
> > > b. If `args.default_avocado_params` exists, it merges it into the
> > > `mux_tree` (no filtering of the default params)
> > > c. Initializes `multiplexer.MuxTree` object using the `mux_tree`
> > > 5. job -> asks the Mux() object for number of tests
> > > a. Mux iterates all MuxTree variants and reports `no_variants *
> > > no_tests`
> > > 6. runner -> iterates through test_suite
> > > a. runner -> iterates through Mux:
> > > i.  multiplexer.Mux -> iterates through MuxTree
> > >   * multiplexer.MuxTree -> yields list of leaves of the `mux_tree`
> > > ii, yields the modified test template
> > > b. runs the test template:
> > > i. Test.__init__: |
> > > if isinstance(params, dict):
> > > # update test's default params
> > > elif params is None:
> > > # New empty multiplexer.AvocadoParams are created
> > > elif isinstance(params, tuple):
> > > # multiplexer.AvocadoParams are created from params
> > > 7. exit
> > > 
> > > AvocadoParams initialization
> > > 
> > > 
> > > def __init__(self, leaves, test_id, tag, mux_path, default_params):
> > > """
> > > :param leaves: List of TreeNode leaves defining current variant
> > > :param test_id: test id
> > > :param tag: test tag
> > > :param mux_path: list of entry points
> > > :param default_params: dict of params used when no matches found
> > > """
> > > 
> > > 1. Iterates through `mux_path` and creates `AvocadoParam` slices 
> > > containing
> > > params from only matching nodes, storing them in `self._rel_paths`
> > > 2. Creates `AvocadoParam` slice containing the remaining params, storing
> > > them in `self._abs_path`
> > > 
> > > Test params
> > > ---
> > > 
> > > def get(self, key, path=None, default=None):
> > > """
> > > Retrieve value associated with key from params
> > > :param key: Key you're looking for
> > > :param path: namespace ['*']
> > > :param default: default value when not f

Re: [Avocado-devel] RFC: Avocado multiplexer plugin

2016-06-30 Thread Ademar Reis
> method as it only requires us to move the yaml parsing into the module and
> the `AvocadoParams` would stay as they are. The cons is that the plugin
> writers would only be able to produce params compatible with the tree
> structure (flat, or tree-like).
> 
> If we decide to choose this method, we can keep the current avocado arguments
> and only allow replacing the parser plugin, eg. by `--multiplex-plugin
> NAME`. Alternatively we might even detect the suitable plugin based on the
> multiplex file and even allow combining them (`.cfg vs. .yaml, ...)
> 
> The plugin would have to support:
> 
> * parse_file(FILE) => tree_node
> * check_file(FILE) => BOOL// in case we want automatic detection of
> file->plugin
> 
> Plugin parser->variant
> ==
> 
> This would require deeper changes, but allow greater flexibility. We'd also
> have to choose whether we want to allow combinations, or whether the plugin
> should handle the whole workflow. I don't think we should allow combinations
> as that would imply another convention for storing the parsed results.
> 
> The user would configure in config or on cmdline which plugin he wants to
> use and the arguments would stay the same (optionally extended by the
> plugin's arguments)
> 
> The plugin would have to support:
> 
> * configure(parser)   // optional, add extended options like --mux-version
> * parse_file(FILE)// does not return as it's up to plugin to store the
> results
> * inject_value(key, value, path)  // used by avocado-virt to inject 
> default
> values
> * __len__()   // Return number of variants (we might want to 
> extend this to
> accept TEMPLATE and allow different number of variants per TEMPLATE. That is
> currently not supported, but it might be handy
> * itervariants(TEMPLATE)  // yields modified TEMPLATE with params set in
> AvocadoParams understandable format
> 
> 
> Plugin AvocadoParams
> 
> 
> I don't think we should make the AvocadoParams replaceable, but if we want
> to we should strictly require `params.get` compatibility so all tests can
> run seamlessly with all params. Anyway if we decided to make AvocadoParams
> replaceable, then we can create a proxy between the params and the plugin.
> 
> 
> Conclusion
> ==
> 
> I'm looking forward to cleaner multiplexer API. I don't think people would
> like to invest much time in developing fancy multiplexer plugins so I'd go
> with the `parser->tree` variant, which allows easy extensibility with some
> level of flexibility. The flexibility is for example sufficient to implement
> cartesian_config parser.
> 
> As for the automatic detection, I don't like the idea as people might want to
> use the same format with different custom tags.

Hi Lukáš.

I believe we're in sync, but I miss the high level overview, or
at least review, of how params, variants and the multiplexer or
other plugins are all related to each other.

Please check the definitions/examples below to see if we're in
sync:

Params
--

A dictionary of key/values, with an optional path (we could
simply call it prefix), which is used to identify the key
when there are multiple versions of it. The path is
interpreted from right to left to find a match.

The Params data structure can be populated by multiple
sources.

Example:
(implementation and API details are not discussed here)

key: var1=a
path: /foo/bar/baz

key: var1=b
path: /foo/bar

key: var2=c
path: NULL (empty)

get(key=var1, path=/foo/) ==> error ("/foo/var1" not found)
get(key=var1, path=/foo/*) ==> error (multiple var1)
get(key=var1, path=/foo/bar/baz/w/) ==> error
get(key=var1, path=/foo/bar/w/) ==> error

get(key=var2) ==> c
get(key=var2, path=foobar) ==> error ("foobar/var2" not found)

get(key=var1, path=/foo/bar/baz/) ==> a
(unique match for "/foo/bar/baz/var1")

get(key=var1, path=/foo/bar/) ==> b
(unique match for "/foo/bar/var1/")

get(key=var1, path=baz) ==> a
(unique match for "baz/var1")

get(key=var1, path=bar) ==> b
(unique match for "bar/var1")

This kind of "get" API is exposed in the Test API.
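
A minimal, self-contained sketch of these matching rules (illustrative
only, not the real AvocadoParams implementation) could look like this:

```
# Illustrative sketch only -- it mimics the lookups from the example above:
# absolute paths ("/foo/bar/") match a node exactly, "/foo/*" matches the
# whole subtree, and relative paths ("bar", "baz") are matched from right
# to left against the node path.
class Params(object):
    def __init__(self, entries):
        self.entries = entries   # list of (path, key, value); path may be ""

    def _match(self, node, path):
        node = node.strip("/")
        if path.startswith("/"):
            if path.endswith("*"):                     # subtree match
                prefix = path.strip("/*")
                return node == prefix or node.startswith(prefix + "/")
            return node == path.strip("/")             # exact node match
        path = path.strip("/")
        return node == path or node.endswith("/" + path)  # right-to-left

    def get(self, key, path=None):
        if path is None:
            hits = [v for p, k, v in self.entries if k == key]
        else:
            hits = [v for p, k, v in self.entries
                    if k == key and self._match(p, path)]
        if len(hits) != 1:
            raise KeyError("%s/%s: %s" % (path, key, "not found"
                                          if not hits else "multiple values"))
        return hits[0]

params = Params([("/foo/bar/baz", "var1", "a"),
                 ("/foo/bar", "var1", "b"),
                 ("", "var2", "c")])
print(params.get("var1", "baz"))         # -> a
print(params.get("var1", "/foo/bar/"))   # -> b
print(params.get("var2"))                # -> c
```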


Variants


Multiple sets of params, all with the same set of keys and
paths, but potentially different values. Each variant is
identified by a "Variant ID" (see the "Test ID RFC").

The test runner is responsible for the association of tests
and variants. That is, the component creating the
variants has absolutely no visibility on which tests are
going to be associated with variants.

This is also completely abstract to tests: they don't have
any visibility about which variant they're using, or which
variants exist.

Given the above, the multiplexer (or any other component, like a
"cartesian config" implementation from Autotest) would be bound
to these APIs.

Thanks.
   - Ademar

-- 
Ademar Reis
Red Hat

^[:wq!



Re: [Avocado-devel] [RFC] Environment Variables

2016-06-01 Thread Ademar Reis
On Wed, Jun 01, 2016 at 04:02:54PM -0300, Cleber Rosa wrote:
> On 06/01/2016 03:07 PM, Ademar Reis wrote:
> > On Tue, May 31, 2016 at 07:30:43AM -0300, Cleber Rosa wrote:
> > > 
> > 
> > I'm replying on top of Cleber because he already said a few
> > things I was going to say.
> > 
> > > On 05/25/2016 05:31 AM, Amador Pahim wrote:
> > > > Hi folks,
> > > > 
> > > > We have requests to handle the environment variables that we can set to
> > > > the tests. This is the RFC in that regard, with a summary of the ideas
> > > > already exposed in the original request and some additional planning.
> > > > 
> > > > The original request is here:
> > > > https://trello.com/c/Ddcly0oG/312-mechanism-to-provide-environment-variables-to-tests-run-on-a-virtual-machine-remote
> > > > 
> > > > 
> > > > Motivation
> > > > ==
> > > > Avocado tests are executed in a fork process or even in a remote
> > > > machine. Regardless the fact that Avocado is hard coded to set some
> > > > environment variables, they are for internal consumption and user is not
> > > > allowed to control/configure its behavior.
> > > 
> > > You mean this:
> > > 
> > > http://avocado-framework.readthedocs.io/en/latest/WritingTests.html#environment-variables-for-simple-tests
> > > 
> > > Right? Basically, the fact that Avocado sets some of the job/test state as
> > > environment variables, that can be used by SIMPLE tests.
> > > 
> > > > The motivation is the request to provide users an interface to set
> > > > and/or keep environment variables for test consumption.
> > 
> > I'm not sure if they're necessarily for test consumption. I think
> > the motivation for the original request was to provide the
> > standard Unix interface of environment variables for when tests
> > are run remotely.
> > 
> 
> If the motivation is basically about setting the env vars when running tests
> remotely, than this brings the discussion about the *local* behavior to:
> 
> 1. Should Avocado default to the standard UNIX behavior of cloning the
> environment?
> 
>  A: IMHO, yes.

That's the current behavior (see my example at the end of the
previous email). Except when one runs tests remotely, which is
precisely the use case this feature would "fix".

> 
> 2. Could Avocado have have a feature to start tests in a clean(er)
> environment?
> 
>  A: Possibly yes, but seems low priority.  The use case here could be seen
> as a plus in predictability, helping to achieve expected test results in
> spite of the runner environment.  A real world example could be a CI
> environment that sets a VERBOSE environment variable. This env var will be
> passed over to Avocado, to the test process and finally to a custom binary
> (say a benchmark tool) that will produce different output depending on that
> environment variable.  Doing that type of cleaning in the test code is
> possible, but the framework could help with that.
> 
> 2.1. If Avocado provides a "clean(er) test environment" feature, how to
> determine which environment variables are passed along?
> 
>  A: The "env-keep" approach seems like the obvious way to do it.  If the
> mechanism is enabled, which I believe should be disabled by default (see
> #1), its default list could contain the more or less standard UNIX
> environment variables (TERM, SHELL, LANG, etc).

Agree. But like you said such a feature would be low priority and
optional. The important thing is that the implementation of what
we're discussing in this RFC would not interfere with it.

> 
> > These environment variables can change the behavior of both
> > Avocado (the runner itself), the tests (after all nothing
> > prevents the test writer from using them) and all sub-processes
> > executed by the test.
> > 
> 
> Right.
> 
> > Locally, this is standard:
> > 
> >   $ TMPDIR=/whatever/tmp VAR=foo ./avocado run test1.py
> > 
> > But when running avocado remotely, there's no way to configure
> > the environment in the destination. The environment variables set
> > in the command line below will not be "forwarded" to the remote
> > environment:
> > 
> >   $ TMPDIR=/whatever/tmp VAR=foo ./avocado run test1.py \
> >  --remote...
> > 
> 
> Right.
> 
> > > > 
> > > > Use cases
> > > > =
> > > > 1) Use the command line or the config file 

Re: [Avocado-devel] [RFC] Environment Variables

2016-06-01 Thread Ademar Reis
se. Like I said in a previous paragraph, this interface
should not be a mechanism for passing variables to tests.

> 
> > - Create an option in config file with a list of environment variable
> > names to copy from avocado main process environment to the test process
> > environment (similar to env_keep in the /etc/sudoers file):
> > 
> >  [tests.env]
> >  env_keep = ['FOO', 'FOO1', 'FOO2']
> > 

I like this approach because it reinforces the message that we're
keeping (or forwarding) some of the environment variables from
the original environment where the test runner was run.

> > 
> 
> Right, this makes sense. But it also brings the point that we may actually
> change the default behavior of keeping environment variables from Avocado in
> the tests' process.  That is, they would get a much cleaner environment by
> default.  While this sounds cleaner, it may break a lot of expectations.

I wonder what the motivation would be to clean-up the environment
where tests are run. Can you please elaborate?  If we indeed
decide to implement this change, then I would say we should honor
whatever is set in env_keep.
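
As an aside, a minimal sketch of what honoring env_keep could look like
when spawning the test process (the function name and config handling are
hypothetical, this is not the actual implementation):

```
# Hypothetical env_keep filter -- not real Avocado code.  It starts the
# command with a minimal environment plus whatever is listed in env_keep
# (e.g. [tests.env] env_keep = ['FOO', 'FOO1', 'FOO2']).
import os
import subprocess

def run_with_kept_env(cmd, env_keep):
    env = {"PATH": os.environ.get("PATH", "/usr/bin:/bin")}
    for name in env_keep:
        if name in os.environ:
            env[name] = os.environ[name]
    return subprocess.call(cmd, env=env)

run_with_kept_env(["/bin/sh", "-c", "echo FOO=$FOO"], ["FOO", "TMPDIR"])
```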

> 
> > For every configuration entry point, the setting have to be respected in
> > local and remote executions.
> > 
> > Drawbacks
> > =
> > 
> > While setting an environment variable, user will be able to change the
> > behavior of a test and probably the behavior of Avocado itself. Maybe
> > even the OS behavior as well. We should:
> > - Warn users about the danger when using such options.
> 
> I fail to see where an environment variable, to be set by Avocado in the
> test process, can or should impact Avocado itself.  If it does, then we'd
> probably be doing something wrong.  I'm not sure we need warnings that
> exceed documenting the intended behavior.

I think the environment has to be set where Avocado is run, not
where tests are run. Which is why I prefer --env-keep and
[env-keep].

So in the case of:

  $ FOO=bla avocado run test.py --remote=...

$FOO is available inside the environment where Avocado and its
tests are run, both locally and remotely.

> 
> > - Protect Avocado environment variables from overwriting.
> 
> About protecting the Avocado's own environment variables: agreed.

This is something that won't need any change. There are variables
which are read by Avocado and others which are written by it.

For example: 

 * TMPDIR will influence Avocado's behavior (standard Unix
   variable)
 * AVOCADO_VERSION is written by Avocado. Setting it externally
   won't make any difference.

  $ TMPDIR=/home/ademar/tmp avocado run examples/tests/env_variables.sh 
--show-job-log | grep Temporary
  Temporary dir: /home/ademar/tmp/avocado_L9YiE4

  $ AVOCADO_VERSION=0 ./scripts/avocado run examples/tests/env_variables.sh 
--show-job-log | grep Version
  [stdout] Avocado Version: 35.0
   
> 
> > 
> > Looking forward to reading your comments.
> > 
> 
> Overall, this is definitely welcome.  Let's discuss possible implementation
> issues, such as remote/vm support, because it wouldn't be nice to introduce
> something like this with too many caveats.
> 

We appear to have different understandings about what this
feature should be about. IMO it should be about the standard unix
environment where Avocado and tests are run. The primary use-case
is for remote/vm support (in other words, whenever there's a
change in the environment).

Thanks.
   - Ademar

-- 
Ademar Reis
Red Hat

^[:wq!



Re: [Avocado-devel] Collaboration Workflow

2016-06-01 Thread Ademar Reis
On Tue, May 31, 2016 at 11:09:06AM +0200, Amador Pahim wrote:
> Hello,
> 
> We are receiving a good number of Pull Requests from new contributors and
> this is great.
> 
> In order to optimize the time spent on code reviews and also the time the
> code writers are investing in adjusting the code according to the reviews, I'd
> like to expose my own workflow, which I believe is close to the workflow used
> by the other full-time avocado developers.
> 
> The hope is that newcomers get inspired by this and take advantage of it.
> 
> As the biggest number of PRs is coming to avocado-misc-tests, I will use
> this repository as an example.
> 
> - Fork the repository.
> 
> - Clone from your fork:
> 
>  $ git clone g...@github.com:/avocado-misc-tests.git
> 
> - Enter directory:
> 
>  $ cd avocado-misc-tests/
> 
> - Setup upstream:
> 
>  $ git remote add upstream
> g...@github.com:avocado-framework/avocado-misc-tests.git
> 
> At this point, you should have your name and e-mail configured on git. Also,
> we encourage you to sign your commits using GPG signature:
> 
> http://avocado-framework.readthedocs.io/en/latest/ContributionGuide.html#signing-commits
> 
> Start coding:
> 
> - Create a new local branch and checkout to it:
> 
>  $ git checkout -b my_new_local_branch
> 
> - Code and then commit your changes:
> 
>  $ git add new-file.py
>  $ git commit -s (include also a '-S' if signing with GPG)
> 
> Please write a good commit message, pointing motivation, issues that you're
> addressing. Usually I try to explain 3 points of my code in the commit
> message: motivation, approach and effects. Example:
> 
> https://github.com/avocado-framework/avocado/commit/661a9abbd21310ef7803ea0286fcb818cb93dfa9
> 
> If the commit is related to a trello card or an issue in github, I also add
> the line "Reference: " to the commit message bottom. You can mention it
> in Pull Request message instead, but the main point is not to omit that
> information.
> 
> - If working on 'avocado' repository, this is the time to run 'make check'.
> 
> - Push your commit(s) to your fork:
> 
>  $ git push --set-upstream origin my_new_local_branch
> 
> - Create the Pull Request on github.
> 
> Now you're waiting for feedback on github Pull Request page. Once you get
> some, new versions of your code should not be force-updated. Instead, you
> should:
> 
> - Close the Pull Request on github.
> 
> - Create a new branch out of your previous branch, naming it with '_v2' at
> the end (this will further allow code-reviewers to simply run '$ git diff
> user_my_new_local_branch{,_v2}' to see what changed between versions):
> 
>  $ git checkout my_new_local_branch
>  $ git checkout -b my_new_local_branch_v2
> 
> - Code and amend the commit. If you have more than one commit in the PR, you
> will probably need to rebase interactively to amend the right commits.
> 
> - Push your changes:
> 
>  $ git push --set-upstream origin my_new_local_branch_v2
> 
> - Create a new Pull Request for this new branch. In the PR message, point
> the previous PR and the changes this PR introduced when compared to the
> previous PRs. Example of PR message for a 'V2':
> 
> https://github.com/avocado-framework/avocado/pull/1228
> 
> After your PR gets merged, you can sync your local repository and your fork
> on github:
> 
>  $ git checkout master
>  $ git pull upstream master
>  $ git push

From time to time, please remove old branches to avoid pollution.
I don't know if this can be done from the github interface or
not. Manually, you can do it like this:

 $ git push origin :my_old_branch
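
Or, using the more explicit syntax git also supports:

 $ git push origin --delete my_old_branch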

Thanks for the good tutorial.
   - Ademar

> 
> That's it. That's my personal workflow, what means it probably differs from
> what others developers are used to do, but the important here is to someway
> cover the good practices we have in the project.
> 
> Please feel free to comment and to add more information here.
> 
> Best,
> -- 
> apahim
> 

-- 
Ademar Reis
Red Hat

^[:wq!



Re: [Avocado-devel] RFC: Nested tests (previously multi-stream test) [v5]

2016-05-27 Thread Ademar Reis
On Fri, May 27, 2016 at 10:33:30AM +0200, Lukáš Doktor wrote:
> On 25.5.2016 at 23:36, Ademar Reis wrote:
> > On Wed, May 25, 2016 at 04:18:38PM -0300, Cleber Rosa wrote:
> > > 
> > > 
> > > On 05/24/2016 11:53 AM, Lukáš Doktor wrote:
> > > > Hello guys,
> > > > 
> > > > this version returns to roots and tries to define clearly the single
> > > > solution I find teasing for multi-host and other complex tests.
> > > > 
> > > > Changes:
> > > > 
> > > > v2: Rewritten from scratch
> > > > v2: Added examples for the demonstration to avoid confusion
> > > > v2: Removed the mht format (which was there to demonstrate manual
> > > > execution)
> > > > v2: Added 2 solutions for multi-tests
> > > > v2: Described ways to support synchronization
> > > > v3: Renamed to multi-stream as it befits the purpose
> > > > v3: Improved introduction
> > > > v3: Workers are renamed to streams
> > > > v3: Added example which uses library, instead of new test
> > > > v3: Multi-test renamed to nested tests
> > > > v3: Added section regarding Job API RFC
> > > > v3: Better description of the Synchronization section
> > > > v3: Improved conclusion
> > > > v3: Removed the "Internal API" section (it was a transition between
> > > > no support and "nested test API", not a "real" solution)
> > > > v3: Using per-test granularity in nested tests (requires plugins
> > > > refactor from Job API, but allows greater flexibility)
> > > > v4: Removed "Standard python libraries" section (rejected)
> > > > v4: Removed "API backed by cmdline" (rejected)
> > > > v4: Simplified "Synchronization" section (only describes the
> > > > purpose)
> > > > v4: Refined all sections
> > > > v4: Improved the complex example and added comments
> > > > v4: Formulated the problem of multiple tasks in one stream
> > > > v4: Rejected the idea of bounding it inside MultiTest class
> > > > inherited from avocado.Test, using a library-only approach
> > > > v5: Avoid mapping ideas to multi-stream definition and clearly
> > > > define the idea I bear in my head for test building blocks
> > > > called nested tests.
> > > > 
> > > > 
> > > > Motivation
> > > > ==
> > > > 
> > > > Allow building complex tests out of existing tests producing a single
> > > > result depending on the complex test's requirements. Important thing is,
> > > > that the complex test might run those tests on the same, but also on a
> > > > different machine allowing simple development of multi-host tests. Note
> > > > that the existing tests should stay (mostly) unchanged and executable as
> > > > simple scenarios, or invoked by those complex tests.
> > > > 
> > > > Examples of what could be implemented using this feature:
> > > > 
> > > > 1. Adding background (stress) tasks to existing test producing
> > > > real-world scenarios.
> > > >* cpu stress test + cpu hotplug test
> > > >* memory stress test + migration
> > > >* network+cpu+memory test on host, memory test on guest while
> > > >  running migration
> > > >* running several migration tests (of the same and different type)
> > > > 
> > > > 2. Multi-host tests implemented by splitting them into components and
> > > > leveraging them from the main test.
> > > >* multi-host migration
> > > >* stressing a service from different machines
> > > > 
> > > > 
> > > > Nested tests
> > > > 
> > > > 
> > > > Test
> > > > 
> > > > 
> > > > A test is a receipt explaining prerequisites, steps to check how the
> > > > unit under testing behaves and cleanup after successful or unsuccessful
> > > > execution.
> > > > 
> > > 
> > > You probably meant "recipe" instead of "receipt".  OK, so this is an
> > > abstract definition...
> > > 
> > > > Test itself contains lots of neat features to simplify logging, resul

Re: [Avocado-devel] RFC: Nested tests (previously multi-stream test) [v5]

2016-05-26 Thread Ademar Reis
> > > 
> > > Imagine a very complex scenario, for example a cloud with several
> > > services. One could write a big-fat test tailored just for this
> > > scenario and keep adding sub-scenarios producing unreadable source
> > > code.
> > > 
> > > With nested tests one could split this task into tests:
> > > 
> > > * Setup a fake network
> > > * Setup cloud service
> > > * Setup in-cloud service A/B/C/D/...
> > > * Test in-cloud service A/B/C/D/...
> > > * Stress network
> > > * Migrate nodes
> > > 
> > > New variants could be easily added, for example DDoS attack to
> > > some nodes, node hotplug/unplug, ... by invoking those existing
> > > tests and combining them into a complex test.
> > > 
> > > Additionally note that some of the tests, eg. the setup cloud
> > > service and setup in-cloud service are quite generic tests, what
> > > could be reused many times in different tests. Yes, one could write
> > > a library to do that, but in that library he'd have to handle all
> > > exceptions and provide nice logging, while not clutter the main
> > > output with unnecessary information.
> > > 
> > > Job results
> > > -----------
> > > 
> > > Combine (multiple) test results into understandable format. There
> > > are several formats, the most generic one is file format:
> > > 
> > > .
> > > ├── id  -- id of this job
> > > ├── job.log  -- overall job log
> > > └── test-results  -- per-test-directories with test results
> > >     ├── 1-passtest.py:PassTest.test  -- first test's results
> > >     └── 2-failtest.py:FailTest.test  -- second test's results
> > > 
> > > Additionally it contains other files and directories produced by
> > > avocado plugins like json, xunit, html results, sysinfo gathering
> > > and info regarding the replay feature.
> > > 
> > 
> > OK, this is pretty much a review.
> > 
> > > Test results 
> > > 
> > > In the end, every test produces results, which is what we're
> > > interested in. The results must clearly define the test status,
> > > should provide a record of what was executed and in case of
> > > failure, they should provide all the information in order to find
> > > the cause and understand the failure.
> > > 
> > > Standard tests does that by providing test log (debug, info,
> > > warning, error, critical), stdout, stderr, allowing to write to
> > > whiteboard and attach files in the results directory. Additionally
> > > due to structure of the test one knows what stage(s) of the test
> > > failed and pinpoint exact location of the failure (traceback in the
> > > log).
> > > 
> > > .
> > > ├── data  -- place for other files produced by a test
> > > ├── debug.log  -- debug, info, warn, error log
> > > ├── remote.log  -- additional log regarding remote session
> > > ├── stderr  -- standard error
> > > ├── stdout  -- standard output
> > > ├── sysinfo  -- provided by sysinfo plugin
> > > │   ├── post
> > > │   ├── pre
> > > │   └── profile
> > > └── whiteboard  -- file for arbitrary test data
> > > 
> > > I'd like to extend this structure with either a directory "subtests",
> > > or a convention for directories intended for nested test results
> > > (`r"\d+-.*"`).
> > > 
> > 
> > Having them on separate sub directory is less intrusive IMHO.  I'd
> > even argue that `data/nested` is the way to go.
> I like the idea of `nested`. It's short and goes along with the 
> `avocado.utils.nested`. (If it was `avocado.utils`, I'd prefer the results 
> directly in the main dir)
> 
> > 
> > > The `r"\d+-.*"` reflects the current test-id notation, which
> > > nested tests should also respect, replacing the serialized-id by
> > > in-test-serialized-id. That way we easily identify which of the
> > > nested tests was executed first (which does not necessarily mean it
> > > finished as first).
> > > 
> > > In the end nested tests should be assigned a directory inside the
> > > main test's results (or main test's results/subtests) and it should
> > > produce the data/debug.log/stdout/stderr/whiteboard in there as
> > > well as propagate the debug.log with a prefix to the main test's
> > > debug.log (as well as job.log).
> > > 
> > > └── 1-parallel_wget.py:WgetExample.test  -- main test
> > >     ├── data
> > >     ├── debug.log  -- contains main

Re: [Avocado-devel] RFC: Nested tests (previously multi-stream test) [v5]

2016-05-25 Thread Ademar Reis
. For example, when looking for
"1-foobar.py", I may find:

  - foobar.py, the first test run inside the job
  AND
  - multiple foobar.py, run as a nested test inside an arbitrary
parent test.

That's why I said you would need "In-Test-Test-IDs" (or
"Nested-Test-IDs").

> > 
> > Note that nested tests can finish with any result and it's up to the
> > main test to evaluate that. This means that theoretically you could find
> > nested tests which states `FAIL` or `ERROR` in the end. That might be
> > confusing, so I think the `NestedRunner` should append last line to the
> > test's log saying `Expected FAILURE` to avoid confusion while looking at
> > results.
> > 
> 
> This special injection, and special handling for that matter, actually makes
> me more confused.

Agree. This is something to add to the parent log (which is
waiting for the nested-test result).

> 
> > Note2: It might be impossible to pass messages in real-time across
> > multiple machines, so I think at the end the main job.log should be
> > copied to `raw_job.log` and the `job.log` should be reordered according
> > to date-time of the messages. (alternatively we could only add a contrib
> > script to do that).

You probably mean debug.log (parent test), not job.log.

I'm assuming the nested tests would run in "jobless" mode (is
that the case? If yes, you need to specify what it means).

> > 
> 
> Definitely no to another special handling.  Definitely yes to a post-job
> contrib script that can reorder the log lines.

+1

> 
> > 
> > Conclusion
> > ==
> > 
> > I believe nested tests would help people covering very complex scenarios
> > by splitting them into pieces similarly to Lego. It allows easier
> > per-component development, consistent results which are easy to analyze
> > as one can see both, the overall picture and the specific pieces and it
> > allows fixing bugs in all tests by fixing the single piece (nested test).
> > 
> 
> It's pretty clear that running other tests from tests is *useful*, that's
> why it's such a hot topic and we've been devoting so much energy to
> discussing possible solutions.  NestedTests is one way to do it, but I'm not
> confident we have enough confidence to make it *the* way to do it. The
> feeling that I have at this point, is that maybe we should prototype it as
> utilities to:
> 
>  * give Avocado a kickstart on this niche/feature set
>  * avoid as much as possible user-written boiler plate code
>  * avoid introducing *core* test APIs that would be set in stone
> 
> The gotchas that we have identified so far, are IMHO, enough to restrain us
> from forcing this kind of feature into the core test API, which we're in
> fact, trying to clean up.
> 
> With user exposition and feedback, this, a modified version or a completely
> different solution, can evolve into *the* core (and supported) way to do it.
> 

I tend to disagree. I think it should be the other way around:
maybe, once we have a Job API, we can consider the possibilities
of supporting nested-tests, reusing some of the other concepts.

Nested tests (as in: "simply running tests inside tests") is
relatively OK to digest. Not that I like it, but it's relatively
simple.

But what Lukas is proposing involves at least three more features
or APIs, all of which relate to a Job and should be implemented
there before being considered in the context of a test:

 - API and mechanism for running tests on different machines or
   environments (at least at first, a Job API)
 - API and mechanism for running tests in parallel (ditto)
 - API and mechanism to allow tests to synchronize and wait for
   barriers (which might be useful once we can run tests in
   parallel).

To me the idea of "nested tests that can be run on multiple
machines, under different configurations and with synchronization
between them" is fundamentally flawed. It's a huge layer
violation that brings all kinds of architectural problems.

Thanks.
   - Ademar

-- 
Ademar Reis
Red Hat

^[:wq!



Re: [Avocado-devel] Do we need a copy of tp-qemu/tp-avocado for avocado?

2016-05-06 Thread Ademar Reis
On Tue, May 03, 2016 at 04:33:56AM +, Wei, Jiangang wrote:
> Hi all,
> 
> There're some arguments about the compatibility of test providers with
> autotest
>  since virt-test shifted to avocado-vt.
> 
> Some cases of tp-qemu/tp-libvirt need autotest's common functions.
> Now avocado also supports these common functions (not all).
> (avocado-vt still need autotest,we can find clue in requirements.txt)

That's correct. There's a one-direction dependency, given that
avocado-vt could be described as a compatibility module (plugin)
in Avocado to support "autotest-based virt-tests".

> 
> Someone supports replacing the common function of autotest with
> avocado's. 
> In the long term, it's right.
> But there're a lot of people who still uses tp-qemu with autotest.
> and recommended by the Qemu community .

Why do you say Autotest is recommended by the QEMU community? We
would like to change this perception, if it still exists.

> 
> Besides above,
> The tp-qemu is still kept in autotest main page.【autotest/tp-qemu】,
> and it hasn't been *definitely* declared to shift to avocado now.

That's probably a mistake. Although tp-* repositories can live
anywhere, once virt-test was declared deprecated in favor of
avocado-vt, tp-* should have been moved as well.

> 
> So I suggest to copy tp-libvirt/tp-qemu to the avocado organization and
> No longer accept new testcase,but bugfix.

I don't understand the motivation for accepting bugfixes. Just
like we did with virt-test, the autotest version of tp-* should
be considered deprecated (or read-only, kept there for historical
purposes).

But as open source components, people should be free to maintain
the project if there's interest.

> 
> so that, We can concentrate on maintaining them based on avocado, 
> and develop new test cases on it.
> 
> what about this proposal?

I support the idea. My suggestion is that tp-* get moved to
under the avocado-umbrella and the autotest/tp-* reset to the
commit from when virt-test was declared deprecated. This way we
have two copies:
 
  - autotest/virt-test, autotest/tp-*: frozen in time, considered
deprecated, no risk of avocado-related changes getting merged
there.

  - avocado/avocado-vt, tp-*: newer version, using avocado
infrastructure.

Thanks.
   - Ademar

-- 
Ademar Reis
Red Hat

^[:wq!



Re: [Avocado-devel] RFC: multi-stream test (previously multi-test) [v4]

2016-05-03 Thread Ademar Reis
On Tue, May 03, 2016 at 06:29:25PM +0200, Lukáš Doktor wrote:
> On 3.5.2016 at 02:32, Cleber Rosa wrote:
> > 
> > 
> > On 04/29/2016 05:35 AM, Lukáš Doktor wrote:
> >> On 29.4.2016 at 00:48, Ademar Reis wrote:
> >>> On Thu, Apr 28, 2016 at 05:10:07PM +0200, Lukáš Doktor wrote:



> >>>> Conclusion
> >>>> ==
> >>>>
> >>>> This RFC proposes to add a simple API to allow triggering
> >>>> avocado.Test-like instances on local or remote machine. The main point
> >>>> is it should allow very simple code-reuse and modular test development.
> >>>> I believe it'll be easier than having users handle the
> >>>> multiprocessing library, which might allow similar features, but with a
> >>>> lot of boilerplate code and even more code to handle possible
> >>>> exceptions.
> >>>>
> >>>> This concept also plays nicely with the Job API RFC, it could utilize
> >>>> most of tasks needed for it and together they should allow amazing
> >>>> flexibility with known and similar structure (therefore easy to learn).
> >>>>
> >>>
> >>> I see you are trying to make the definitions more clear and a bit
> >>> less strict, but at the end of the day, what you're proposing is
> >>> that a test should be able to run other tests, plain and simple.
> >>> Maybe even worse, a Test would be able to run "jobs", disguised
> >>> as streams that run multiple tests.
> >>>
> >>> This is basically what you've been proposing since the beginning
> >>> and in case it's not crystal clear yet, I'm strongly against it
> >>> because I think it's a fundamental breakage of the abstractions
> >>> present in Avocado.
> >>>
> >>> I insist on something more abstract, like this:
> >>>
> >>>Tests can run multiple streams, which can be defined as
> >>>different processes or threads that run parts of the test
> >>>being executed. These parts are implemented in the form of
> >>>classes that inherit from avocado.Test.
> >>>
> >>>(My initial feeling is that these parts should not even have
> >>>setUp() and tearDown() methods; or if they have, they should
> >>>be ignored by default when the implementation is run in a
> >>>stream. In my view, these parts should be defined as "one
> >>>method in a class that inherits from avocado.Test", with the
> >>>class being instantiated in the actual stream runtime
> >>>environment.  But this probably deserves some discussion, I
> >>>miss some real-world use-cases here)
> >>>
> >>>The only runtime variable that can be configured per-stream is
> >>>the execution location (or where it's run): a VM, a container,
> >>>remotely, etc. For everything else, Streams are run under the
> >>>same environment as the test is.
> >>>
> >>>Notice Streams are not handled as tests: they are not visible
> >>>outside of the test that is running them. They don't have
> >>>individual variants, don't have Test IDs, don't trigger
> >>>pre/post hooks, can't change the list of plugins
> >>>enabled/disabled (or configure them) and their results are not
> >>>visible at the Job level.  The actual Test is responsible for
> >>>interpreting and interacting with the code that is run in a
> >>>stream.
> >>>
> >> So basically you're proposing to extract the method, copy it over to the
> >> other host and trigger it. In the end copy back the results, right?
> >>
> >> That would work in case of no failures. But if anything goes wrong, you
> >> have absolute no idea what happened, unless you prepare the code
> >> intended for execution to it. I really prefer being able to trigger real
> >> tests in remote environment from my tests, because:
> >>
> >> 1. I need to write the test just once and either use it as one test, or
> >> combine it with other existing tests to create a complex scenario
> >> 2. I know exactly what happened and where, because test execution
> >> follows certain workflow. I'm used to the workflow from normal execution
> >> so if anything goes wrong, I get quite extensive set of information
&

Re: [Avocado-devel] RFC: multi-stream test (previously multi-test) [v4]

2016-05-02 Thread Ademar Reis
  │   └── whiteboard
> >> ├── stream2
> >> │   └── 1-NetServer
> >> │   ├── debug.log
> >> │   └── whiteboard
> >> └── whiteboard
> >>
> >> Single task approach:
> >>
> >> job-2016-04-16T.../
> >> ├── id
> >> ├── job.log
> >> └── test-results
> >> └── 1-MultiNetperf
> >> ├── debug.log
> >> ├── whiteboard
> >> ├── 1-Netperf.bigbuf
> >> │   ├── debug.log
> >> │   └── whiteboard
> >> ├── 2-Netperf.smallbuf
> >> │   ├── debug.log
> >> │   └── whiteboard
> >> └── 3-Netperf.smallbuf
> >> ├── debug.log
> >> └── whiteboard
> >>
> >> The difference is that queue-like approach bundles the result
> >> per-worker, which could be useful when using multiple machines.
> >>
> >> The single-task approach makes it easier to follow how the execution
> >> went, but one needs to see the log to see on which machine was the task
> >> executed.
> >>
> >>
> > 
> > The logs can indeed be useful.  And the choices about single .vs. queue
> > wouldn't really depend on this... this is, quite obviously the *result*
> > of that choice.

Agree.

> > 
> >> Job API RFC
> >> ===
> >>
> >> Recently introduced Job API RFC covers very similar topic as "nested
> >> test", but it's not the same. The Job API is enabling users to modify
> >> the job execution, eventually even write a runner which would suit them
> >> to run groups of tests. On the contrary this RFC covers a way to combine
> >> code-blocks/tests to reuse them into a single test. In a hackish way,
> >> they can supplement each others, but the purpose is different.
> >>
> > 
> > "nested", without a previous definition, really confuses me.  Other than
> > that, ACK.
> > 
> copy&past, thanks.
> 
> >> One of the most obvious differences is, that a failed "nested" test can
> >> be intentional (eg. reusing the NetPerf test to check if unreachable
> >> machines can talk to each other), while in Job API it's always a failure.
> >>
> > 
> > It may just be me, but I fail to see how this is one obvious difference.
> Because Job API is here to allow one to create jobs, not to modify the
> results. If the test fails, the job should fail. At least that's my
> understanding.

That's basically the only difference between the Job API and this
proposal. And I don't think that's good (more below).

> 
> > 
> >> I hope you see the pattern. They are similar, but on a different layer.
> >> Internally, though, they can share some pieces like execution the
> >> individual tests concurrently with different params/plugins
> >> (locally/remotely). All the needed plugin modifications would also be
> >> useful for both of these RFCs.
> >>
> > 
> > The layers involved, and the proposed usage, should be the obvious
> > differences.  If they're not cleanly seen, we're doing something wrong.
> > 

+1.

> I'm not sure what you're proposing here. I put the section here to
> clarify Job API is a different story, while they share some bits
> (internally and could be abused to do the same)
> 

I think the point is that you're actually proposing nested-tests,
or sub-tests and those concepts break the abstraction and do not
belong here. Make the definitions and proposals abstract enough
and with clear and limited APIs, and there's no need for a
section to explain that this is different from the Job API.

> >> Some examples:
> >>
> >> User1 wants to run "compile_kernel" test on a machine followed by
> >> "install_compiled_kernel passtest failtest warntest" on "machine1
> >> machine2". They depend on the status of the previous test, but they
> >> don't create a scenario. So the user should use Job API (or execute 3
> >> jobs manually).
> >>
> >> User2 wants to create migration test, which starts migration from
> >> machine1 and receives the migration on machine2. It requires cooperation
> >> and together it creates one complex usecase so the user should use
> >> multi-stream test.
> >>
> >>
> > 
> > OK.
> > 
> So I should probably skip the introduction and use only the
> examples :-)
> 
> >> Conclusion
> >> ==
> >>
> >> This RFC proposes to add a simple API to allow triggering
> >> avocado.Test-like instances on local or remote machine. The main point
> >> is it should allow very simple code-reuse and modular test development.
> >> I believe it'll be easier, than having users to handle the
> >> multiprocessing library, which might allow similar features, but with a
> >> lot of boilerplate code and even more code to handle possible exceptions.
> >>
> >> This concept also plays nicely with the Job API RFC, it could utilize
> >> most of tasks needed for it and together they should allow amazing
> >> flexibility with known and similar structure (therefor easy to learn).
> >>
> > 
> > Thanks for the much cleaner v4!  I see that consensus and a common view
> > is now approaching.
> > 
> 
> Now the big question is, do we want queue-like or single-task interface?
> They are quite different. The single-task interface actually does not
> require any streams generation. It could just be the stream object and
> you could say hey, stream, run this for me on this guest and return ID
> so I can query for status later. Hey stream, please run also this and
> report when it's finished. Oh stream, did the first task already finish?

The queue-like interface is probably the concept in your RFC I'm
most strongly against, so I would love to see it removed
from the proposal.

I wrote a lot more in my other reply. I hope Cleber can respond
there and we can converge on a few topics before v5.

Thanks.
   - Ademar

> 
> So the interface would be actually simpler and if we add the optional
> "stream tag" (viz my response in Queue vs. single task section), I'd be
> perfectly fine with it. Note that we could also just use the hostname/ip
> as the stream tag, but sometimes it might be better to allow to override
> it (eg. when running everything on localhost, one might use "stress"
> stream and "test" stream).
> 
> After thinking of it a bit more I'm probably more inclined to the
> single-task execution with optional tag. The interface would be:
> 
> streams = avocado.Streams(self)
> tid = streams.run_bg(task, **kwargs)
> results = streams.run_fg(task, **kwargs)
> results = streams.wait(tid)
> streams.wait()
> 
> where the **kwargs might contain:
> 
> host -> to run the task remotely
> stream_tag -> prefix for logs and results dir
> 
> the remaining arguments would be combined with test-class arguments, so
> one could add `params={"foo": "bar"}`. This would not be needed in case
> the user first resolves the test, but it'd be super-convenient for
> simpler use cases. The alternative to params parsing could be:
> 
> task = resolver.resolve(task)
> task[1]["params"].update(my_params)
> tid = streams.run_bg(task)
> 
> Anyway if we implement the resolver quickly, we might just skip the
> implicit resolver (so require the additional 1-2 steps and avoid the
> **kwargs).
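
(For illustration only, a test written against the single-task
interface quoted above might look like the sketch below. The
Streams class, the run_bg/run_fg/wait methods and the "host",
"stream_tag" and "params" keyword arguments are all assumptions
taken from the proposal being discussed, not an existing Avocado
API.)

```
import avocado

class MultiNetperf(avocado.Test):
    def test(self):
        # hypothetical API, names taken from the quoted proposal
        streams = avocado.Streams(self)
        server = streams.run_bg("NetServer", host="machine1",
                                stream_tag="server",
                                params={"no_clients": 1})
        # run_fg would block until the task finishes and return its
        # results, while run_bg returns an id to wait on later
        results = streams.run_fg("NetPerf", host="machine2",
                                 stream_tag="client",
                                 params={"server_ip": "machine1"})
        self.log.info("netperf results: %s", results)
        streams.wait(server)
```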

-- 
Ademar Reis
Red Hat

^[:wq!



Re: [Avocado-devel] RFC: multi-stream test (previously multi-test) [v4]

2016-04-28 Thread Ademar Reis
. In my view, these parts should be defined as "one
   method in a class that inherits from avocado.Test", with the
   class being instantiated in the actual stream runtime
   environment.  But this probably deserves some discussion, I
   miss some real-world use-cases here)
   
   The only runtime variable that can be configured per-stream is
   the execution location (or where it's run): a VM, a container,
   remotely, etc. For everything else, Streams are run under the
   same environment as the test is.

   Notice Streams are not handled as tests: they are not visible
   outside of the test that is running them. They don't have
   individual variants, don't have Test IDs, don't trigger
   pre/post hooks, can't change the list of plugins
   enabled/disabled (or configure them) and their results are not
   visible at the Job level.  The actual Test is responsible for
   interpreting and interacting with the code that is run in a
   stream.
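
(To make that constraint concrete, here is a minimal sketch,
assuming a hypothetical Streams helper whose only per-stream knob
is the location. Neither the helper nor the "where" argument exist
in Avocado; they just name the concept described above.)

```
import avocado

class MigrationTest(avocado.Test):
    def source_side(self):
        pass  # a block of code run under the control of this test

    def target_side(self):
        pass  # another block of code, fully controlled by the test

    def test(self):
        streams = avocado.Streams(self)
        # the only thing chosen per stream is *where* it runs;
        # everything else is inherited from this test's environment
        src = streams.run_bg(self.source_side, where="vm:source")
        dst = streams.run_bg(self.target_side, where="vm:target")
        streams.wait(src, dst)
```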

Now let me repeat something from a previous e-mail, originally
written as feedback to v3:

I'm convinced that your proposal breaks the abstraction and will
result in numerous problems in the future.

To me, whatever we run inside a stream is not, and should not be,
defined as a test.  It's simply a block of code that gets run
under the control of the actual test. The fact that we can find
these "blocks of code" using the resolver is secondary. A nice
and useful feature, but secondary. The fact that we can reuse the
avocado test runner remotely is purely an implementation detail.
A nice detail that will help with debugging and make our lives
easier when implementing the feature, but again, purely an
implementation detail.

The test writer should have strict control of what gets run in a
stream, with a constrained API where the concepts are very clear.
We should not, under any circumstances, induce users to think of
streams as something that runs tests. To me this is utterly
important.

For example, if we allow streams to run tests, or Test
References, then running `avocado run *cpuid*` and
`stream.run("*cpuid*")` will look similar at first, but with
several subtle differences in behavior, confusing users.

Users will inevitably ask questions about these differences and
we'll end up having to revisit some concepts and refine the
documentation, a result of breaking the abstraction.

A few examples of these differences which might not be
immediately clear:

   * No pre/post hooks for jobs or tests get run inside a stream.
   * No per-test sysinfo collection inside a stream.
   * No per-job sysinfo collection inside a stream.
   * Per-stream, there's basically nothing that can be configured
 about the environment other than *where* it runs.
 Everything is inherited from the actual test. Streams should
 have access to the exact same APIs that *tests* have.
   * If users see streams as something that runs tests, it's
 inevitable that they will start asking for knobs
 to fine-tune the runtime environment:
 * Should there be a timeout per stream?
 * Hmm, at least support enabling/disabling gdb or wrappers
   in a stream? No? Why not!?
 * Hmm, maybe allow multiplex="file" in stream.run()?
 * Why can't I disable or enable plugins per-stream? Or at
   least configure them?

And here are some other questions, which seem logical at first:

   * Hey, you know what would be awesome? Let me upload the
 test results from a stream as if it was a job! Maybe a
 tool to convert stream test results to job results? Or a
 plugin that handles them!
   * Even more awesome: a feature to replay a stream!
   * And since I can run multiple tests in a stream, why can't I
 run a job there? It's a logical next step!

The simple fact that the questions above are being asked is a
sign that the abstraction is broken: we shouldn't have to revisit
previous concepts to clarify the behavior when something is being
added in a different layer.

Am I making sense?

Thanks.
   - Ademar

-- 
Ademar Reis
Red Hat

^[:wq!



Re: [Avocado-devel] RFC: multi-stream test (previously multi-test) [v3]

2016-04-21 Thread Ademar Reis
On Thu, Apr 21, 2016 at 08:45:54AM +0200, Lukáš Doktor wrote:
> Dne 21.4.2016 v 01:58 Ademar Reis napsal(a):
> > On Wed, Apr 20, 2016 at 07:38:10PM +0200, Lukáš Doktor wrote:
> >> Dne 16.4.2016 v 01:58 Ademar Reis napsal(a):
> >>> On Fri, Apr 15, 2016 at 08:05:09AM +0200, Lukáš Doktor wrote:
> >>>> Hello again,
> >>>
> >>> Hi Lukas.
> >>>
> >> Hello to you, Ademar,
> >>
> >>> Thanks for v3. Some inline feedback below:
> >>>
> >>>>
> >>>> There were couple of changes and the new Job API RFC, which might sound
> >>>> similar to this RFC, but it covers different parts. Let's update the
> >>>> multi-test RFC and fix the terminology, which might had been a bit
> >>>> misleading.
> >>>>
> >>>> Changes:
> >>>>
> >>>> v2: Rewritten from scratch
> >>>> v2: Added examples for the demonstration to avoid confusion
> >>>> v2: Removed the mht format (which was there to demonstrate manual
> >>>> execution)
> >>>> v2: Added 2 solutions for multi-tests
> >>>> v2: Described ways to support synchronization
> >>>> v3: Renamed to multi-stream as it befits the purpose
> >>>> v3: Improved introduction
> >>>> v3: Workers are renamed to streams
> >>>> v3: Added example which uses library, instead of new test
> >>>> v3: Multi-test renamed to nested tests
> >>>> v3: Added section regarding Job API RFC
> >>>> v3: Better description of the Synchronization section
> >>>> v3: Improved conclusion
> >>>> v3: Removed the "Internal API" section (it was a transition between
> >>>> no support and "nested test API", not a "real" solution)
> >>>> v3: Using per-test granularity in nested tests (requires plugins
> >>>> refactor from Job API, but allows greater flexibility)
> >>>>
> >>>>
> >>>> The problem
> >>>> ===
> >>>>
> >>>> Allow tests to have some if its block of code run in separate stream(s).
> >>>> We'll discuss the range of "block of code" further in the text.
> >>>>
> >>>> One example could be a user, who wants to run netperf on 2 machines, 
> >>>> which
> >>>> requires following manual steps:
> >>>>
> >>>>
> >>>> machine1: netserver -D
> >>>> machine1: # Wait till netserver is initialized
> >>>> machine2: netperf -H $machine1 -l 60
> >>>> machine2: # Wait till it finishes and report the results
> >>>> machine1: # stop the netserver and report possible failures
> >>>>
> >>>> the test would have to contain the code for both, machine1 and machine2 
> >>>> and
> >>>> it executes them in two separate streams, which might or not be executed 
> >>>> on
> >>>> the same machine.
> >>>>
> >>>> You can see that each stream is valid even without the other, so 
> >>>> additional
> >>>> requirement would be to allow easy share of those block of codes among 
> >>>> other
> >>>> tests. Splitting the problem in two could also sometimes help in 
> >>>> analyzing
> >>>> the failures.
> >>>
> >>> I would like to understand this requirement better, because to me
> >>> it's not clear why this is important. I think this might be a
> >>> consequence of a particular implementation, not necessarily a
> >>> requirement.
> >>>
> >> Yes, I wanted to mention that there might be additional benefit, not
> >> directly related only to this RFC. I should probably mention it only
> >> where it applies and not here.
> >>
> >>>>
> >>>> Some other examples might be:
> >>>>
> >>>
> >>> I suggest you add real world examples here (for a v4). My
> >>> suggestions:
> >>>
> >>>> 1. A simple stress routine being executed in parallel (the same or 
> >>>> different
> >>>> hosts)
> >>>
> >>>  - run a script in multiple hosts, all of them interacting with a
>

Re: [Avocado-devel] RFC: multi-stream test (previously multi-test) [v3]

2016-04-20 Thread Ademar Reis
On Wed, Apr 20, 2016 at 07:38:10PM +0200, Lukáš Doktor wrote:
> Dne 16.4.2016 v 01:58 Ademar Reis napsal(a):
> > On Fri, Apr 15, 2016 at 08:05:09AM +0200, Lukáš Doktor wrote:
> >> Hello again,
> > 
> > Hi Lukas.
> > 
> Hello to you, Ademar,
> 
> > Thanks for v3. Some inline feedback below:
> > 
> >>
> >> There were couple of changes and the new Job API RFC, which might sound
> >> similar to this RFC, but it covers different parts. Let's update the
> >> multi-test RFC and fix the terminology, which might had been a bit
> >> misleading.
> >>
> >> Changes:
> >>
> >> v2: Rewritten from scratch
> >> v2: Added examples for the demonstration to avoid confusion
> >> v2: Removed the mht format (which was there to demonstrate manual
> >> execution)
> >> v2: Added 2 solutions for multi-tests
> >> v2: Described ways to support synchronization
> >> v3: Renamed to multi-stream as it befits the purpose
> >> v3: Improved introduction
> >> v3: Workers are renamed to streams
> >> v3: Added example which uses library, instead of new test
> >> v3: Multi-test renamed to nested tests
> >> v3: Added section regarding Job API RFC
> >> v3: Better description of the Synchronization section
> >> v3: Improved conclusion
> >> v3: Removed the "Internal API" section (it was a transition between
> >> no support and "nested test API", not a "real" solution)
> >> v3: Using per-test granularity in nested tests (requires plugins
> >> refactor from Job API, but allows greater flexibility)
> >>
> >>
> >> The problem
> >> ===
> >>
> >> Allow tests to have some if its block of code run in separate stream(s).
> >> We'll discuss the range of "block of code" further in the text.
> >>
> >> One example could be a user, who wants to run netperf on 2 machines, which
> >> requires following manual steps:
> >>
> >>
> >> machine1: netserver -D
> >> machine1: # Wait till netserver is initialized
> >> machine2: netperf -H $machine1 -l 60
> >> machine2: # Wait till it finishes and report the results
> >> machine1: # stop the netserver and report possible failures
> >>
> >> the test would have to contain the code for both, machine1 and machine2 and
> >> it executes them in two separate streams, which might or not be executed on
> >> the same machine.
> >>
> >> You can see that each stream is valid even without the other, so additional
> >> requirement would be to allow easy share of those block of codes among 
> >> other
> >> tests. Splitting the problem in two could also sometimes help in analyzing
> >> the failures.
> > 
> > I would like to understand this requirement better, because to me
> > it's not clear why this is important. I think this might be a
> > consequence of a particular implementation, not necessarily a
> > requirement.
> > 
> Yes, I wanted to mention that there might be additional benefit, not
> directly related only to this RFC. I should probably mention it only
> where it applies and not here.
> 
> >>
> >> Some other examples might be:
> >>
> > 
> > I suggest you add real world examples here (for a v4). My
> > suggestions:
> > 
> >> 1. A simple stress routine being executed in parallel (the same or 
> >> different
> >> hosts)
> > 
> >  - run a script in multiple hosts, all of them interacting with a
> >central service (like a DDoS test). Worth noting that this
> >kind of testing could also be done with the Job API.
> > 
> ack
> 
> >> 2. Several code blocks being combined into a complex scenario(s)
> > 
> >  - netperf
> >  - QEMU live migration
> >  - other examples?
> ack
> > 
> >> 3. Running the same test along with stress test in background
> >>
> > 
> >   - write your own stress test and run it (inside a guest, for
> > example) while testing live-migration, or collecting some
> > performance metrics
> >   - run bonnie or trinity in background inside the guest while
> > testing migration in the host
> >   - run bonnie or trinity in background while collecting real
> > time metrics
> ack, thank you for the explicit examples
> 
> > 
> 

Re: [Avocado-devel] RFC: multi-stream test (previously multi-test) [v3]

2016-04-15 Thread Ademar Reis
ux-inject /plugins/sync_server:sync-server
> $SYNCSERVER &
> avocado run NetPerf --mux-inject /plugins/sync_server:sync-server
> $SYNCSERVER &
> 
> (where the --mux-inject passes the address of the "syncserver" into test
> params)

I think using --mux-inject should be strongly discouraged if one
is not using the multiplexer. I know that's currently the only
way to provide parameters to a test, but this should IMO be
considered a bug. Using it in a RFC may actually *encourage*
users to use it.

> 
> When the code is stable one would write this multi-stream test (or multiple
> variants of them) to do the above automatically:
> 
> class MultiNetperf(avocado.NestedTest):
>     def setUp(self):
>         self.failif(len(self.streams) < 2)
>     def test(self):
>         self.streams[0].run_bg("NetServer",
>                                {"no_clients": len(self.streams)})
>         for stream in self.streams[1:]:
>             stream.add_test("NetPerf",
>                             {"no_clients": len(self.workers),
>                              "server_ip": machines[0]})
>         self.wait(ignore_failures=False)

I don't understand why NestedTest is used all the time. I think
it's not necessary (we could use composition instead of
inheritance).

Let me give the same example using a different API implementation
and you tell me if you see something *architecturally* wrong with
it, or if these are just *implementation details* that still
match your original idea:

netperf.py:

```
  import avocado
  from avocado import multi
  from avocado.utils import process

  class NetPerf(avocado.Test):
      def test(self):
          s_params = ...  # server parameters
          c_params = ...  # client parameters

          server = NetServer()
          client = NetClient()

          # the test composes the workers and drives their execution
          m = multi.Streams()
          ...
          m.run_bg(server, s_params, ...)
          m.run_bg(client, c_params, ...)
          m.wait(ignore_errors=False)

  class NetServer(multi.TestWorker):
      def test(self):
          process.run("netserver")
          self.barrier("server", self.params.get("no_clients"))
      def tearDown(self):
          self.barrier("finished", self.params.get("no_clients"))
          process.run("killall netserver")

  class NetClient(multi.TestWorker):
      def setUp(self):
          self.barrier("server", self.params.get("no_clients"))
      def test(self):
          process.run("netperf -H %s -l 60"
                      % self.params.get("server_ip"))
          self.barrier("finished", self.params.get("no_clients"))
```

 $ avocado list netperf.py --> returns *1* Test
 (NetPerf:test)
 $ avocado run netperf.py --> runs this *1* Test

But given multi.TestWorker is implemented as a "class that
inherits from Test", for debug purposes users could run them
individually, without any guarantee or expectation that they'll
work consistently, given they'll miss the instrumentation and
parameter handling that the main test does (the code from
netperf.py:NetPerf:test). Example:

 $ avocado run netperf.py:NetClient
   -> runs NetClient as a standalone test (in this example it
   won't work unless we provide the right parameters, for
   example, via the multiplexer)
 $ avocado run netperf.py:NetServer
   -> runs NetServer as a standalone test (same thing)

The main justification I see for the existence of
multi.TestWorker is to prevent the test runner from discovering
these tests by default ('avocado run' and 'avocado list'). Maybe
we could do things differently (again, composition instead of
inheritance) and get rid of multi.TestWorker. Just an idea, I'm
not sure.
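
(Purely as an illustration of the composition idea, and reusing
the same hypothetical multi.Streams API from the sketch above:
the worker could be a plain class that the test instantiates and
hands to the Streams helper, so the loader never discovers it as
a test. Nothing below exists in Avocado today.)

```
import avocado
from avocado import multi
from avocado.utils import process

class NetServerWorker:
    """Plain object, not an avocado.Test: 'avocado list' ignores it."""
    def run(self, params):
        process.run("netserver -D")
        # synchronization with the clients would happen here

class NetPerf(avocado.Test):
    def test(self):
        m = multi.Streams(self)
        # composition: the test owns and drives the worker object
        m.run_bg(NetServerWorker(), self.params)
        m.wait(ignore_errors=False)
```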

> 
> Executing of the complex example would become:
> 
> avocado run MultiNetperf
> 
> You can see that the test allows running several NetPerf tests
> simultaneously, either locally, or distributed across multiple machines (or
> combinations) just by changing parameters. Additionally by adding features
> to the nested tests, one can use different NetPerf commands, or add other
> tests to be executed together.
> 
> The results could look like this:
> 
> 
> $ tree $RESULTDIR
>   └── test-results
>   └── MultiNetperf
>   ├── job.log
>   ...
>   ├── 1
>   │   └── job.log
>   ...
>   └── 2
>   └── job.log
>   ...
> 
> Where the MultiNetperf/job.log contains combined logs of the "master" test
> and all the "nested" tests and the sync server.
> 
> Directories [12] contain results of the created (possibly even named)
> streams. I think they should be in form of standard avocado Job to keep the
> well known structure.

Only one job was executed, so there shouldn't be multiple job.log
files. The structure should be consistent with what we already
have:

$ tree job-2016-04-15T.../
job-2016-04-15T.../
├── job.log
├── id
├── replay/
├── sysinfo/
└── test-results/
├── 01-NetPerf/ (the serialized Test ID)
│   ├── data/
│   ├── debug.log
│   ├── whiteboard
│   ├── ...
│   ├── NetServer/ (a name, not a Test ID)
│   │   ├── data/
│   │   ├── ...
│   │   └── debug.log
│   └── NetClient (a name, not a Test ID)
│   ├── data/
│   ├── ...
│   └── debug.log
│   
├── 02... (other Tests from the same job)
├── 03... (other Tests from the same job)
...


Finally, I suggest you also cover the other cases you introduced
in the beginning of this RFC.

For example, if multi.run() is implemented in a flexible way, we
could actually run Avocado Tests (think of Test Name) in multiple
streams:

...
from avocado import resolver
...

t = resolver("passtest.py", strict=True)
multi.run_bg(t, env1, ...)
multi.run_bg(t, env2, ...)
multi.run_bg(t, env3, ...)
multi.wait(ignore_errors=False)

Thanks.
   - Ademar

-- 
Ademar Reis
Red Hat

^[:wq!



[Avocado-devel] [RFC v2]: Avocado maintainability and integration with avocado-vt (was: "Avocado supportability and integration with avocado-vt")

2016-04-13 Thread Ademar Reis
bility or
stability.

* Which Avocado version should be used by avocado-vt?

  This is up to the avocado-vt community to decide, but the
  current consensus is that to guarantee some stability in
  production environments, avocado-vt should stick to a specific
  LTS release of Avocado. In other words, the Avocado team
  recommends production users of avocado-vt not to install Avocado
  from its master branch or upgrade it from Sprint Releases.
  
  Given each LTS release will be maintained for 18 months, it
  should be reasonable to expect avocado-vt to upgrade to a new
  LTS release once a year or so. This process will be done with
  support from the Avocado team to avoid disruptions, with proper
  coordination via the avocado mailing lists.

  In practice the Avocado development team will keep watching
  avocado-vt to detect and document incompatibilities, so when
  the time comes to do an upgrade in production, it's expected
  that it should happen smoothly.

* Will it be possible to use the latest Avocado and avocado-vt
  together?

  Users are welcome to *try* this combination.  The Avocado
  development team itself will probably do it internally as a way
  to monitor incompatibilities and regressions.
  
  Given the open source nature of both projects, we expect
  volunteers to step up and maintain an upstream branch of
  avocado-vt that works with the most recent Avocado Sprint
  Release.

  If no volunteers show up, we might release snapshots of
  avocado-vt in the Avocado LTS channel, for convenience only,
  just as we do today with our Sprint Releases.

Thanks.
   - Ademar

-- 
Ademar Reis
Red Hat

^[:wq!



Re: [Avocado-devel] RFC: Avocado Job API

2016-04-12 Thread Ademar Reis
On Tue, Apr 12, 2016 at 11:22:40AM +0200, Lukáš Doktor wrote:
> Dne 12.4.2016 v 02:31 Ademar Reis napsal(a):
> >On Mon, Apr 11, 2016 at 09:09:58AM -0300, Cleber Rosa wrote:
> >>Note: the same content on this message is available at:
> >>
> >>https://github.com/clebergnu/avocado/blob/rfc_job_api/docs/rfcs/job-api.rst
> >>
> >>Some users may find it easier to read with a prettier formatting.
> >>
> >>Problem statement
> >>=
> >>
> >>An Avocado job is created by running the command line ``avocado``
> >>application with the ``run`` command, such as::
> >>
> >>   $ avocado run passtest.py
> >>
> >>But most of Avocado's power is activated by additional command line
> >>arguments, such as::
> >>
> >>   $ avocado run passtest.py --vm-domain=vm1
> >>   $ avocado run passtest.py --remote-hostname=machine1
> >>
> >>Even though Avocado supports many features, such as running tests
> >>locally, on a Virtual Machine and on a remote host, only one those can
> >>be used on a given job.
> >>
> >>The observed limitations are:
> >>
> >>* Job creation is limited by the expressiveness of command line
> >>   arguments, this causes mutual exclusion of some features
> >>* Mapping features to a subset of tests or conditions is not possible
> >>* Once created, and while running, a job can not have its status
> >>   queried and can not be manipulated
> >>
> >>Even though Avocado is a young project, its current feature set
> >>already exceeds its flexibility.  Unfortunately, advanced users are
> >>not always free to mix and match those features at will.
> >>
> >>Reviewing and Evaluating Avocado
> >>
> >>
> >>In light of the given problem, let's take a look at what Avocado is,
> >>both by definition and based on its real world, day to day, usage.
> >>
> >>Avocado By Definition
> >>-
> >>
> >>Avocado is, by definition, "a set of tools and libraries to help with
> >>automated testing".  Here, some points can be made about the two
> >>components that Avocado are made of:
> >>
> >>1. Libraries are commonly flexible enough and expose the right
> >>features in a consistent way.  Libraries that provide good APIs
> >>allow users to solve their own problems, not always anticipated by
> >>the library authors.
> >>
> >>2. The majority of the Avocado library code fall in two categories:
> >>utility and test APIs.  Avocado's core libraries are so far, not
> >>intended to be consumed by third party code and its use is not
> >>supported in any way.
> >>
> >>3. Tools (as in command line applications), are commonly a lot less
> >>flexible than libraries.  Even the ones driven by command line
> >>arguments, configuration files and environment variables fall
> >>short in flexibility when compared to libraries.  That is true even
> >>when respecting the basic UNIX principles and features that help to
> >>reuse and combine different tools in a single shell session.
> >>
> >>How Avocado is used
> >>---
> >>
> >>The vast majority of the observed Avocado use cases, present and
> >>future, includes running tests.  Given the Avocado architecture and
> >>its core concepts, this means running a job.
> >>
> >>Avocado, with regards to its real world usage, is pretty much a job
> >>(and test) runner, and there's no escaping that.  It's probable that,
> >>for every one hundredth ``avocado run`` commands, a different
> >>``avocado `` is executed.
> >>
> >>Proposed solution & RFC goal
> >>
> >>
> >>By now, the title of this document may seem a little less
> >>misleading. Still, let's attempt to make it even more clear.
> >>
> >>Since Avocado is mostly a job runner that needs to be more flexible,
> >>the most natural approach is to turn more of it into a library.  This
> >>would lead to the creation of a new set of user consumable APIs,
> >>albeit for a different set of users.  Those APIs should allow the
> >>creation of custom job executions, in ways that the Avocado authors
> >>have not yet anticipated.
> >>
> >>Having settled on this solution 

Re: [Avocado-devel] RFC: Avocado Job API

2016-04-11 Thread Ademar Reis
will look like still needs to be properly
defined. Please let me know if we're headed in the same
direction.

Thanks.
   - Ademar

> 
> * Run tests in parallel.
> 
> * Take actions based on test results (for example, run or skip other
>   tests)
> 
> * Post-process the logs or test results before the job is done
> 
> Development Milestones
> ==
> 
> Since it's clear that Avocado demands many changes to be able to
> completely fulfill all mentioned use cases, it seems like a good idea
> to define milestones.  Those milestones are not intended to set the
> pace of development, but to allow for the maximum number of real world
> use cases fulfillment as soon as possible.
> 
> Milestone 1
> ---
> 
> Includes the delivery of the following APIs:
> 
> * Job creation API
> * Test resolution API
> * Single test execution API
> 
> Milestone 2
> ---
> 
> Adds to the previous milestone:
> 
> * Configuration API
> 
> Milestone 3
> ---
> 
> Adds to the previous milestone:
> 
> * Plugin management API
> 
> Milestone 4
> ---
> 
> Introduces proper interfaces where previously Configuration and Plugin
> management APIs were being used.  For instance, where the following
> pseudo code was being used to set the current test runner::
> 
>   env = job.environment
>   env.config.set('plugin.runner', 'default',
>  'avocado.plugins.runner:RemoteTestRunner')
>   env.config.set('plugin.runner.RemoteTestRunner', 'username', 'root')
>   env.config.set('plugin.runner.RemoteTestRunner', 'password', '123456')
> 
> APIs would be introduced that would allow for the following pseudo
> code::
> 
>   job.load_runner_by_name('RemoteTestRunner')
>   if job.runner.accepts_credentials():
>   job.runner.set_credentials(username='root', password='123456')
> 
> .. _settings: 
> https://github.com/avocado-framework/avocado/blob/0.34.0/avocado/core/settings.py
> .. _getting the value: 
> https://github.com/avocado-framework/avocado/blob/0.34.0/avocado/core/settings.py#L221
> .. _default runner: 
> https://github.com/avocado-framework/avocado/blob/0.34.0/avocado/core/runner.py#L193
> .. _remote runner: 
> https://github.com/avocado-framework/avocado/blob/0.34.0/avocado/core/remote/runner.py#L37
> .. _vm runner: 
> https://github.com/avocado-framework/avocado/blob/0.34.0/avocado/core/remote/runner.py#L263
> .. _entry points:
> https://pythonhosted.org/setuptools/pkg_resources.html#entry-points
> 
> -- 
> Cleber Rosa
> [ Sr Software Engineer - Virtualization Team - Red Hat ]
> [ Avocado Test Framework - avocado-framework.github.io ]
> 

-- 
Ademar Reis
Red Hat

^[:wq!



[Avocado-devel] [RFC]: Avocado supportability and integration with avocado-vt

2016-04-04 Thread Ademar Reis
leases should be done carefully, with ample time for
  announcements, testing and documentation.  It's recommended
  that at least two sprints are dedicated as preparations for a
  LTS release, with the first Sprint Release serving as a "LTS
  beta" release.
 

Misc details


Sprint and LTS releases, when packaged, should preferably be
distributed through different package channels (repositories).
Users can opt to follow whatever channel they prefer. The actual
layout of the package repositories has not been specified yet.

Avocado Sprint Releases will also be made available via pip.

The existence of LTS releases should never be used as an excuse
to break a Sprint Release or to introduce gratuitous
incompatibilities there. In other words, Sprint Releases should
still be taken seriously, just as they are today.


Timeline example


For simplicity, assume each sprint takes 1 month. The number
of LTS releases is exaggerated to show how they would co-exist
before EOL.

sprint release 33.0
sprint release 34.0
   --> start preparing a LTS release, so 35.0 is a beta LTS
sprint release 35.0
RTL release 36.0lts (36lts branch is created)
   --> major bug is found, fix gets added to master and to
   the 36lts branch
sprint release 37.0 + 36.1lts
sprint release 38.0
   --> major bug is found, fix gets added to master and
   36lts branches
sprint release 39.0 + LTS 36.2lts
sprint release 40.0
sprint release 41.0
   --> start preparing a LTS release, so 42.0 is a beta LTS
sprint release 42.0
   --> review and document all compatibility changes
   and features introduced since 36.2lts
RTL release 43.0lts (43lts branch is created)
sprint release 44.0
sprint release 45.0
  --> major bug is found, fix gets added to master and RTL
  branches 36lts and 43lts (if the bug affects users
  there)
sprint release 46.0 + LTS 36.3lts + LTS 43.1lts
sprint release 47.0
sprint release 48.0
   --> start preparing a LTS release, so 49.0 is a beta LTS
sprint release 49.0
   --> review and document all compatibility changes and
   features introduced since 43.1lts
sprint release 50.0lts (50lts branch is created)
sprint release 51.0
sprint release 52.0
sprint release 53.0
sprint release 54.0
   --> EOL for 36lts (18 months since the release of 36.0lts)
sprint release 55.0
...


avocado-vt
--

avocado-vt is an Avocado plugin that allows "VT tests" to be run
inside Avocado.  It's a third-party project maintained mostly by
Engineers from Red Hat QE with assistance from the Avocado team.

It's a general consensus that QE teams use avocado-vt directly
from git, usually following the master branch, which they
control. 

There's no support statement for avocado-vt. Even though the
upstream community is usually quite friendly and open to both
contributions and bug reports, avocado-vt is made available
without any promises of compatibility or supportability.

When packaged and versioned, avocado-vt rpms should be considered
just snapshots, available in packaged form as a convenience to
users outside of the avocado-vt development community.  Again,
they are made available without any promises of compatibility or
stability.

- Which Avocado version should be used by avocado-vt?

  This is up to the avocado-vt community to decide, but the
  current consensus is that to guarantee stability in production
  environments, avocado-vt should stick to a specific LTS release
  of Avocado. In other words, production users of avocado-vt
  should not install Avocado from its master branch or upgrade it
  from Sprint Releases.
  
  Given each LTS release will be supported for 18 months, it
  should be reasonable to expect avocado-vt to upgrade to a new
  LTS release once a year or so. This process will be done with
  support from the Avocado team to avoid disruptions, with proper
  coordination via the avocado mailing lists.

  In practice the Avocado development team will keep watching
  avocado-vt to detect and document incompatibilities, so when
  the time comes to do an upgrade in production, it's expected
  that it should happen smoothly.

- Will it be possible to use the latest Avocado and avocado-vt
  together?

  Users are welcome to *try* this combination.  The Avocado
  development team itself will probably do it internally as a way
  to monitor incompatibilities and regressions.
  
  Given the open source nature of both projects, we expect
  volunteers to step up and maintain an upstream branch of
  avocado-vt that works with the most recent Avocado Sprint
  Release.

  If no volunteers show up, we might release snapshots of
  avocado-vt in the Avocado LTS channel, for convenience only,
  just as we do today with our Sprint Releases.

Thanks.
   - Ademar

-- 
Ademar Reis

[Avocado-devel] [RFC] Introduce proper test IDs

2016-03-25 Thread Ademar Reis
 in logs should use the full
  Test ID string, unformatted.
  
  The UI can interpret the test ID to make it look "nicer" by
  hiding or highlighting fields or separators, but the three
  parts should be completely abstract and handled as strings (as
  defined), without any parsing or interpretation.

  This RFC doesn't cover the specifics of how the UI will format
  test IDs, but based on the description and definitions above,
  the current UI is actually compliant, although a few minor
  changes would be welcome.

  A couple of hypothetical examples:

## based on the current UI (the serial ID is hidden)
$ avocado run /bin/true passtest --multiplex 2variants.yaml
...
TESTS: 4
 (1/4) /bin/true;1: PASS
 (2/4) /bin/true;2: PASS
 (3/4) passtest.py:PassTest.test_foobar;1: PASS
 (4/4) passtest.py:PassTest.test_foobar;2: PASS
 

# the serial ID is hidden and the variant ID is
# highlighted
$ avocado run /bin/true passtest --multiplex 2variants.yaml
TESTS: 4
 (1/4) /bin/true [1]: PASS
 (2/4) /bin/true [2]: PASS
 (3/4) passtest.py:PassTest.test_foobar [1]: PASS
 (4/4) passtest.py:PassTest.test_foobar [2]: PASS
 


- Using Test References in Avocado (e.g.: in 'avocado run'):

  A full Test ID cannot be safely parsed and split when used as a
  Test Reference because there's no proper way to unambiguously
  split the fields. If used as a Test Reference, a full Test ID
  will be interpreted as a raw string.

  There's a special case for the usage of the combination
  <test-reference>;<variant-id>, but it requires explicit
  configuration of Avocado. The suggested mechanism for this
  would be:

   --extract-variant-ids={on|off} (default: off)
   config:extract-variant-ids={on|off} (default: off)
 Tells avocado to try to extract variant ids from Test
 References. With this enabled, the rightmost ';', if
 present, will be interpreted as a separator between the Test
 Reference and a Variant ID.

   --strict-test-references={on|off} (default: off)
   config:strict-test-references={on|off} (default: off)
 Forces avocado to interpret Test References as Test Names.
 Meaning only tests which have a perfect 1:1 match for each
 test reference will be loaded.
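
  A minimal sketch of the splitting behavior described above (a
  hypothetical helper written for this discussion, not code taken
  from Avocado):

  ```
  def split_reference(reference, extract_variant_ids=False):
      """Split a Test Reference into (reference, variant id)."""
      if extract_variant_ids and ";" in reference:
          # the rightmost ';' acts as the separator
          name, variant_id = reference.rsplit(";", 1)
          return name, variant_id
      # default behavior: the reference is kept as a raw string
      return reference, None

  # split_reference("foobar;2")          -> ("foobar;2", None)
  # split_reference("foobar;2", True)    -> ("foobar", "2")
  # split_reference("1-foobar;2", True)  -> ("1-foobar", "2")
  ```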

   Examples:

   $ avocado run 1-foobar;2

 --> will use the raw string '1-foobar;2' as a Test
 Reference. The resulting tests will depend on how the Test
 Resolvers interpret this string.

   $ avocado run foobar;2

 --> will use 'foobar;2' as a Test Reference. The resulting
 tests will depend on the behavior of the available Test
 Resolvers;

   $ avocado run foobar;2 --multiplex 2variants.yaml

 --> ditto (Test Names and References are arbitrary strings,
 so there's no way for Avocado to tell if ';2' is a Variant
 ID, or if it's part of the Test Reference)

   $ avocado run foobar;2 --multiplex 2variants.yaml \
 --extract-variant-ids --strict-test-references

 --> will interpret 'foobar' as the Test Name (not just a
 Test Reference) and '2' as a Variant ID. In this case, only
 the test 'foobar' with a variant '2' will be run (if a match
 is found). The resulting Test ID would be '1-foobar;2'.

   $ avocado run 1-foobar;2 --multiplex 2variants.yaml \
 --extract-variant-ids --strict-test-references

 --> will interpret '1-foobar' as a Test Name and '2' as a
 Variant ID. If a match is found, the resulting Test ID will
 be 1-1-foobar;2.

Thanks.
   - Ademar

-- 
Ademar Reis
Red Hat

^[:wq!



Re: [Avocado-devel] [Autotest] Feasibility study - issues clarification

2016-02-11 Thread Ademar Reis
On Thu, Feb 11, 2016 at 08:33:39AM -0500, Cleber Rosa wrote:
> 
> 
> - Original Message -
> > From: "Lukasz Majewski" 
> > To: autotest-ker...@redhat.com
> > Sent: Thursday, February 11, 2016 6:27:22 AM
> > Subject: [Autotest]  Feasibility study - issues clarification
> > 
> > Dear all,
> > 
> > I'd be grateful for clarifying a few issues regarding Autotest.
> > 
> > I have following setup:
> > 1. Custom HW interface to connect Target to Host
> > 2. Target board with Linux
> > 3. Host PC - debian/ubuntu.
> > 
> > I would like to unify the test setup and it seems that the Autotest
> > test framework has all the features that I would need:
> > 
> > - Extensible Host class (other interfaces can be used for communication
> >   - i.e. USB)
> > - SSH support for sending client tests from Host to Target
> > - Control of tests execution on Target from Host and gathering results
> > - Standardized tests results format
> > - Autotest host's and client's test results are aggregated and
> >   displayed as HTML
> > - Possibility to easily reuse other tests (like LTP, linaro's PM-QA)
> > - Scheduling, HTML visualization (if needed)
> > 
> > On the beginning I would like to use test harness (server+client) to
> > run tests and gather results in a structured way.
> > 
> > However, I have got a few questions (please correct me if I'm wrong):
> > 
> > - On several presentations it was mentioned that Avocado project is a
> >   successor of Autotest. However it seems that Avocado is missing the
> >   client + server approach from Autotest.
> 
> Right. It's something that is being worked on at this very moment:
> 
> https://trello.com/c/AnoH6vhP/530-experiment-multiple-machine-support-for-tests
> 
> > 
> > - What is the future of Autotest? Will it be gradually replaced by
> >   Avocado?
> 
> Autotest has been mostly in maintenance mode for the last 20 months or
> so. Most of the energy of the Autotest maintainers has been shifted
> towards Avocado. So, while no Open Source project can be killed (nor
> should), yes, Autotest users should start looking into Avocado.
> 
> > 
> > - It seems that there are only two statuses returned from a simple
> >   test (like sleeptest), namely "PASS" and "FAIL". How can I indicate
> >   that the test has ended because the environment was not ready to run
> >   the test (something similar to LTP's "BROK" code, or exit codes
> >   complying with POSIX 1003.1)?
> 
> I reckon this is a question on Autotest test result status, so I'll try
> to answer in that context. First, the framework itself gives you intentionally
> limited test result status. If you want to save additional information about
> your test, including say the mapping to POSIX 1003.1 codes, you can try to use
> the test's "keyval" store for that. The "keyval" is both saved to a local file
> and to the server's database (when that is used).

You're probably referring to the whiteboard:
http://avocado-framework.readthedocs.org/en/latest/WritingTests.html#saving-test-generated-custom-data
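
A minimal example, in case it helps: a test only needs to set the
"whiteboard" attribute and Avocado saves it along with the other
test results (the JSON encoding below is just one option):

```
import json

from avocado import Test

class StatusDetails(Test):
    def test(self):
        # arbitrary test-generated data, saved by Avocado into the
        # 'whiteboard' file in this test's results directory
        self.whiteboard = json.dumps({"ltp_status": "BROK",
                                      "exit_code": 2})
```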

Thanks.
   - Ademar

> 
> Avocado INSTRUMENTED tests, though, have a better separation of test setup and
> execution, and a test can be SKIPPED during the setup phase. A few pointers:
> 
>  * 
> https://github.com/avocado-framework/avocado/blob/master/examples/tests/skiponsetup.py
>  * 
> http://avocado-framework.readthedocs.org/en/latest/api/core/avocado.core.html#avocado.core.test.Test.skip
> 
> > 
> > - Is there any road map for Autotest development? I'm wondering if
> >   avocado's features (like per test SHA1 generation) would be ported to
> >   Autotest?
> 
> Not really. Avocado's roadmap though, is accessible here:
> 
> https://trello.com/b/WbqPNl2S/avocado
> 

-- 
Ademar Reis
Red Hat

^[:wq!
