On Tue, Aug 15, 2017 at 5:45 AM, Tim Flink <tfl...@redhat.com> wrote:

> On Wed, 9 Aug 2017 14:31:49 +0200
> Lukas Brabec <lbra...@redhat.com> wrote:
>
> > Hey, gang!
> >
> > As I read through the standard interface and tried the ansiblized
> > branch of libtaskotron, I found things that were not exactly clear to
> > me and I have some questions. My summer afternoon schedule involves
> > feeding rabbits (true story!) and I keep missing people on IRC, hence
> > this email.
> >
> >
> > = Test output and its format =
> >
> > The standard test interface specifies that [1]:
> > 1) "test system must examine the exit code of the playbook. A zero
> > exit code is successful test result, non-zero is failure"
> > and
> > 2) "test suite must treat the file test.log in the artifacts folder as
> > the main readable output of the test"
> >
> > ad 1) Examining the exit code is pretty straightforward. The mapping
> > to outcome would be zero to PASSED and non-zero to FAILED. Currently
> > we use more than these two outcomes, i.e. INFO and NEEDS_INSPECTION.
> > Are we still going to use them, if so, what would be the cases? The
> > playbook can fail by itself (e.g. command not found or permission
> > denied), but I presume this failure would be reported to ExecDB,
> > not to ResultsDB. Any thoughts on this?
>
>
> I think that, for now at least, we won't be using the INFO and
> NEEDS_INSPECTION outcomes. The standard interface definition is what it
> is and they've decided that results are binary. It's possible that could
> change in the future but for now, the INFO and NEEDS_INSPECTION states
> probably won't be used when reporting to resultsdb.
>

When it comes to distgit tests, yes. Our generic tasks can still use the
full range of supported outcomes, we just need to define a nice way to
export them (e.g. looking for ./artifacts/taskotron-results.yml).
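For illustration, such an exported results file might look something like
this (a sketch loosely based on Taskotron's ResultYAML; the exact field
names here are assumptions, not a finalized schema):

```yaml
# Hypothetical ./artifacts/taskotron-results.yml exporting a richer
# outcome than the binary exit code allows. Field names are illustrative.
results:
  - item: firefox-55.0-1.fc26
    type: koji_build
    outcome: INFO
    note: 15 warnings
    artifact: rpmlint.log
```

The test system could look for this file after the playbook finishes and,
if present, prefer it over the plain exit-code mapping when reporting.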

However, Lukas has a very good point about failures which are not actual
test failures (the opposite of passing), but crashes. In Taskotron we
distinguish those: crashes don't get reported to ResultsDB, but stay in
ExecDB (before that, we used to report them as CRASHED to ResultsDB). That
allows the maintainer to distinguish between these two states, and allows
us to potentially re-run the task for crashes (but not for failures). The
standard interface (SI) doesn't distinguish them, which I find quite a
step backwards. Any network hiccup or random error will now force package
maintainers to read the logs and work out what went wrong, instead of us
being able to re-run the test. This might be a good topic for discussion
with the CI team at Flock, perhaps.



>
> > ad 2) The standard interface does not specify the format of test
> > output, just that the test.log must be readable. Does this mean that
> > the output can be in any arbitrary format and the parsing of it would
> > be left to people who care, i.e. packagers? Wouldn't this be a problem
> > if, for example, Bodhi wanted to extract/parse this information from
> > ResultsDB and show it on the update page?
>

Bodhi will not deal with test.log at all. Bodhi will look into ResultsDB
for PASSED/FAILED information and display that, and that's it.


>
> As far as I know, this was left vague on purpose so that it can change
> and be specified by the test system. In this case, we'd definitely
> need to support the xunit xml format that I suspect most folks will be
> using but I'm open to the idea of keeping our results yaml format alive
> if it makes sense.
>
> So long as we're explicit about which formats are supported for our
> implementation of a test system and don't make silly choices, I think
> we're good here.
>

I'm a bit confused here. I expected we would support our internal
ResultYAML format for our generic tasks, available e.g. at
./artifacts/taskotron-results.yml, on top of the SI. It might be handy for
task-rpmlint (specifying more outcomes, notes, etc.) and necessary for
multi-item tasks (task-rpmdeplint, task-upgradepath). Are you saying we
could support even more formats here, and if we find e.g.
./artifacts/xunit-results.xml, we process it and use it for submitting more
detailed results to ResultsDB?


>
> > = Triggering generic tasks =
> >
> > The standard interface is centered around dist-git style tasks and
> > doesn't cover generic tasks like rpmlint or rpmdeplint. As these tasks
> > are Fedora QA specific, are we going to create a custom extension to
> > the standard interface, used only by our team, to be able to run
> > generic tasks?
>
> It'd be nice if it wasn't just us using it but the standard interface
> may indeed require some tweaking to get it to cover all of the use cases
> that we're interested in.
>
> Do you have any suggestions as to how we could make non-dist-git tasks
> work reasonably well without making drastic changes to what the
> standard interface currently is?
>

I believe the biggest obstacle here is reporting. The SI assumes this is
all that is needed:
* test subject
* exit code
For reporting to ResultsDB, we will reuse the test subject as "item" (well,
not exactly: the test subject is a file path and we want to use an NVR, but
close enough), the exit code as the outcome, and probably compute a dynamic
testcase name from the test subject (e.g. "pkg.firefox.tests").

However, for generic tasks, we definitely need more. For rpmdeplint-like
tasks, the test subject doesn't map to the "item"s reported. Also, we need
to specify multiple results. If the SI defined a task result format that
the test must create (e.g. ./artifacts/results.yml) which would allow us to
do what we need, that would definitely help us here. Otherwise we'll need
to run with our own custom solution.
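As a sketch of what such an SI-defined result format could look like for a
multi-item task (purely hypothetical; the file name, field names and
layout are assumptions, loosely modeled on our ResultYAML):

```yaml
# Hypothetical ./artifacts/results.yml for an rpmdeplint-like task: one
# playbook run produces results for several items, independent of the
# single test subject that triggered it.
results:
  - item: firefox-55.0-1.fc26
    type: koji_build
    outcome: PASSED
  - item: gimp-2.8.22-1.fc26
    type: koji_build
    outcome: FAILED
    note: 3 dependency errors
```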


>
> > = Reporting to ResultsDB =
> >
> > The gating requirements for CI and CD contain [2]:
> > "It must be possible to represent CI test results in resultsdb."
> > However, the standard interface does not mention resultsdb.
>
> The standard interface arguably shouldn't care about resultsdb. It does
> say "it must be possible to represent ..." and not "must report to
> resultsdb".
>
> > Does this mean that the task playbook won't contain something like a
> > ResultsDB module (in contrast to the ResultsDB directive in formulae),
> > as the task playbook should be agnostic to the system in which it is
> > run, and the reporting will be done by our code in runtask?
>
> That's how I'm understanding things, yes. There may be other systems
> that we'll need to interface with but let's cross that bridge if and
> when we get there.
>

That's also my understanding.


>
> > = Output of runtask =
> >
> > Libtaskotron's output is nice and readable, but the output of the
> > parts handled by ansible now is not. My knowledge of ansible is still
> > limited, but as far as my experience goes, debugging ansible playbooks
> > or even ansible modules is kind of a PITA. Are we going to address
> > this in some way, or just bite the bullet and move along?
>
> Do you have any ideas on how to improve that? One thing that I had in
> mind was to look at using ara to make the output a bit easier to digest.
>
> https://github.com/openstack/ara


That is a very neat project, thanks for the link. Unfortunately, it seems
it needs to run on the same system the playbook is executed from (so in
this case the minion VM).

It would definitely be nice to somehow improve the output, because ansible
wrapped in ansible sure doesn't look pretty.


>
>
> > = Params of runtask =
> >
> > When I tried the ansiblized branch of libtaskotron, I ran into issues
> > such as unsupported params: ansible told me to run it with the "-vvv"
> > param, which runtask does not understand. Is there a plan for how we
> > are going to forward such parameters (--ansible-opts= or just forward
> > any params we don't understand)?
>
> Personally, I think that --ansible-opts= makes sense here but I don't
> have terribly strong feelings about it.
>

Yeah, I guess something like this. But for local development, it might be
easier to print out a command that people can use to sidestep runtask and
debug issues using the ansible-playbook command directly. I.e. once we've
created the minion, downloaded the RPMs and set up everything that's
required of us, we can hint something like this:

"""
If you need to debug this task manually, you can run it like this:
ssh root@192.168.1.5 'ANSIBLE_INVENTORY=/path/to/inventory
TEST_SUBJECTS=/path/to/some.rpm ansible-playbook /path/to/tests.yml'
"""

If we include something like this, it's quite simple to append any debug
options directly, without forwarding them through runtask.


>
> > Runtask, at the moment, maps our params to ansible-playbook params and
> > those defined by the standard interface. Are we going to stick with
> > this or change our params to match the ones of ansible-playbook and
> > the standard interface (e.g. item would become subject, etc.)?
>
> I think that it'll likely make sense to adjust some of the params but
> to be honest, I haven't looked too closely at the differences in
> params. There are at least parts of the standard interface which aren't
> going anywhere soon so it would seem wise to start aligning with what
> they're doing where it makes sense.
>
> > = Future of runtask =
> >
> > For now, runtask is the user-facing part of Taskotron. However, the
> > standard interface is designed in such a way that authors of task
> > playbooks shouldn't care about Taskotron (or any other system that
> > will run their code). They can develop the tasks by simply using
> > ansible-playbook. Does this mean that runtask will become a convenient
> > script for us that parses arguments and spins up a VM? Because
> > everything else is just wrapping the ansible playbook...
>
> Yeah, they could use ansible-playbook but in my mind, there are more
> user-friendly ways to run a test than formatting args for
> ansible-playbook and downloading the target yourself.
>

Correct, as a test system we don't just spin up a VM; we're also required
to do more: install basic packages, download the RPMs (or image, etc.),
set up environment variables pointing to the test subjects and inventory,
gather results and artifacts after execution, and report them. That's not
trivial.

Of course when you develop a task and all your environment is set up and
you don't care about processing results, then yes, running just
ansible-playbook tests.yml is probably the easiest way for you.


>
> I think that the biggest way we can add value with runtask is making
> things easier - spawning declared VMs (either in OpenStack, locally,
> etc., depending on config), generating an inventory file so that the
> test author just has to worry about using what they requested, putting
> output into a more easily human-understandable format, improving the
> local development experience, etc.
>
> The role of runtask is certainly going to change as we switch from a
> custom yaml-based task language to using Ansible and its runner to
> handle most of the heavy lifting. That being said, I do see places
> where we can add value and improve the experience of our users.
>
> Tim
>
> > Lukas
>
_______________________________________________
qa-devel mailing list -- qa-devel@lists.fedoraproject.org
To unsubscribe send an email to qa-devel-le...@lists.fedoraproject.org
