Re: orphaning Taskotron-related packages

2020-11-25 Thread Josef Skladanka
On Mon, Nov 23, 2020 at 7:11 PM Tim Flink  wrote:
>
> On Thu, 12 Nov 2020 18:25:17 +0100
> Kamil Paral  wrote:
>
> > Note: The email subject should have said "retiring" instead of
> > "orphaning". There is little reason to orphan them, retiring is the
> > right approach here. Perhaps except for mongoquery, somebody else
> > could be interested in maintaining that, so that one should be
> > orphaned instead.
>
> Orphaning python-mongoquery and retiring everything else makes sense to
> me.
>
> Tim


+1
___
qa-devel mailing list -- qa-devel@lists.fedoraproject.org
To unsubscribe send an email to qa-devel-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/qa-devel@lists.fedoraproject.org


Re: status report

2018-06-10 Thread Josef Skladanka
Sorry, wrong list. I blame the heat!

On Sun, Jun 10, 2018 at 5:19 PM, Josef Skladanka wrote:

> = Highlights =
>
> * Participated in interviewing candidates to replace Petr
> * Deployed Vault on dev
>   * there are still some quirks with the OIDC login that I need to iron
> out, but the overall concept seems good for the use case
> * Modified libtaskotron to allow grabbing secrets from the Vault
> <https://pagure.io/taskotron/libtaskotron/c/4f3d0d0b3be6f065cb5a578070220fa5f4a212f5?branch=develop>
> * Fixed buildmaster-configure steps to enable proper support for launching
> tasks from a non-mirrored repo (aka the "discover feature")
> * Deployed a task that builds docker images (resultsdb at the moment) in
> dev <https://pagure.io/taskotron/task-dockerbuild>
>   * The trigger seems to ignore the fedmsgs, but when triggered via
> jobrunner for a specific fedmsg, the whole process works fine
> * <https://taskotron-dev.fedoraproject.org/resultsdb/results/20658684>
> * <https://hub.docker.com/r/fedoraqa/resultsdb/tags/>
>


status report

2018-06-10 Thread Josef Skladanka
= Highlights =

* Participated in interviewing candidates to replace Petr
* Deployed Vault on dev
  * there are still some quirks with the OIDC login that I need to iron out,
but the overall concept seems good for the use case
* Modified libtaskotron to allow grabbing secrets from the Vault
<https://pagure.io/taskotron/libtaskotron/c/4f3d0d0b3be6f065cb5a578070220fa5f4a212f5?branch=develop>
* Fixed buildmaster-configure steps to enable proper support for launching
tasks from a non-mirrored repo (aka the "discover feature")
* Deployed a task that builds docker images (resultsdb at the moment) in
dev <https://pagure.io/taskotron/task-dockerbuild>
  * The trigger seems to ignore the fedmsgs, but when triggered via
jobrunner for a specific fedmsg, the whole process works fine
* <https://taskotron-dev.fedoraproject.org/resultsdb/results/20658684>
* <https://hub.docker.com/r/fedoraqa/resultsdb/tags/>
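As a rough illustration of the Vault integration mentioned above, fetching a secret over Vault's HTTP API could look like the sketch below. The KV v1 URL layout, parameter names, and return shape are assumptions for illustration only; libtaskotron's real interface lives in the commit linked above.

```python
# Minimal sketch of reading a secret via Vault's KV v1 HTTP API.
import json
import urllib.request

def get_vault_secret(vault_url, token, path, opener=None):
    """Return the 'data' payload of the secret stored at `path`."""
    req = urllib.request.Request(
        "%s/v1/%s" % (vault_url.rstrip("/"), path),
        headers={"X-Vault-Token": token},
    )
    # `opener` is injectable so the function can be exercised without a
    # running Vault instance.
    opener = opener or urllib.request.urlopen
    with opener(req) as resp:
        return json.load(resp)["data"]
```

With a fake opener this can be tried offline; against a real server, omit `opener` and pass the Vault address and token.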


Re: Proposal to CANCEL: 2018-02-05 QA Devel Meeting

2018-02-05 Thread Josef Skladanka
ACK

On Sun, Feb 4, 2018 at 10:26 PM, Tim Flink  wrote:

> I'm not aware of any topics that need urgent discussion this week, so I
> propose that we cancel the QA Devel meeting on 2018-02-05.
>
> If there are some topics that need discussing, please reply here and
> the meeting can happen.
>
> Tim
>


Re: Please review - Infra Ansible - move slaves from home to srv

2017-11-21 Thread Josef Skladanka
The raw diff was attached to the original email; I could have mentioned
that, I guess. /me was not able to make Gmail send unformatted/unwrapped
text.
Sorry for the inconvenience.

j.

On Mon, Nov 20, 2017 at 6:32 PM, Tim Flink  wrote:

> On Mon, 20 Nov 2017 10:36:03 +0100
> Josef Skladanka  wrote:
>
> > I'm not sure what the best way is to ask for review for a Pagure-less
> > project, since we don't use Phabricator any more, so... let the
> > funmail begin:
>
> The wrapped diff is hard to read but it looks pretty good to me. I
> think that the patch should be applied in parts as we reimage the
> client-host machines but that's more of a nitpick :)
>
> Tim
>
> > diff --git a/inventory/host_vars/qa10.qa.fedoraproject.org
> > b/inventory/host_vars/qa10.qa.fedoraproject.org
> > index 297f614e3..d2119dc47 100644
> > --- a/inventory/host_vars/qa10.qa.fedoraproject.org
> > +++ b/inventory/host_vars/qa10.qa.fedoraproject.org
> > @@ -9,18 +9,18 @@ gw: 10.5.124.254
> >
> >  short_hostname: qa10.qa
> >  slaves:
> > -  - { user: "{{ short_hostname }}-1", home: "/home/{{ short_hostname
> > }}-1", dir: "/home/{{ short_hostname }}-1/slave" }
> > -  - { user: "{{ short_hostname }}-2", home: "/home/{{ short_hostname
> > }}-2", dir: "/home/{{ short_hostname }}-2/slave" }
> > -  - { user: "{{ short_hostname }}-3", home: "/home/{{ short_hostname
> > }}-3", dir: "/home/{{ short_hostname }}-3/slave" }
> > -  - { user: "{{ short_hostname }}-4", home: "/home/{{ short_hostname
> > }}-4", dir: "/home/{{ short_hostname }}-4/slave" }
> > -  - { user: "{{ short_hostname }}-5", home: "/home/{{ short_hostname
> > }}-5", dir: "/home/{{ short_hostname }}-5/slave" }
> > -  - { user: "{{ short_hostname }}-6", home: "/home/{{ short_hostname
> > }}-6", dir: "/home/{{ short_hostname }}-6/slave" }
> > -  - { user: "{{ short_hostname }}-7", home: "/home/{{ short_hostname
> > }}-7", dir: "/home/{{ short_hostname }}-7/slave" }
> > -  - { user: "{{ short_hostname }}-8", home: "/home/{{ short_hostname
> > }}-8", dir: "/home/{{ short_hostname }}-8/slave" }
> > -  - { user: "{{ short_hostname }}-9", home: "/home/{{ short_hostname
> > }}-9", dir: "/home/{{ short_hostname }}-9/slave" }
> > -  - { user: "{{ short_hostname }}-10", home: "/home/{{ short_hostname
> > }}-10", dir: "/home/{{ short_hostname }}-10/slave" }
> > -  - { user: "{{ short_hostname }}-11", home: "/home/{{ short_hostname
> > }}-11", dir: "/home/{{ short_hostname }}-11/slave" }
> > -  - { user: "{{ short_hostname }}-12", home: "/home/{{ short_hostname
> > }}-12", dir: "/home/{{ short_hostname }}-12/slave" }
> > -  - { user: "{{ short_hostname }}-13", home: "/home/{{ short_hostname
> > }}-13", dir: "/home/{{ short_hostname }}-13/slave" }
> > -  - { user: "{{ short_hostname }}-14", home: "/home/{{ short_hostname
> > }}-14", dir: "/home/{{ short_hostname }}-14/slave" }
> > -  - { user: "{{ short_hostname }}-15", home: "/home/{{ short_hostname
> > }}-15", dir: "/home/{{ short_hostname }}-15/slave" }
> > +  - { user: "{{ short_hostname }}-1", home: "/srv/buildslaves/{{
> > short_hostname }}-1", dir:
> > "/srv/buildslaves/{{ short_hostname }}-1/slave" }
> > +  - { user: "{{ short_hostname }}-2", home: "/srv/buildslaves/{{
> > short_hostname }}-2", dir:
> > "/srv/buildslaves/{{ short_hostname }}-2/slave" }
> > +  - { user: "{{ short_hostname }}-3", home: "/srv/buildslaves/{{
> > short_hostname }}-3", dir:
> > "/srv/buildslaves/{{ short_hostname }}-3/slave" }
> > +  - { user: "{{ short_hostname }}-4", home: "/srv/buildslaves/{{
> > short_hostname }}-4", dir:
> > "/srv/buildslaves/{{ short_hostname }}-4/slave" }
> > +  - { user: "{{ short_hostname }}-5", home: "/srv/buildslaves/{{
> > short_hostname }}-5", dir:
> > "/srv/buildslaves/{{ short_hostname }}-5/slave" }
> > +  - { user: "{{ short_hostname }}-6", home: "/srv/buildslaves/{{
> > short_hostname }}-6", dir:
> > "/srv/buildslaves/{{ short_hostname }}-6/slave" }
> > +  - { user: "{{ short_hostname }}-7",

Please review - Infra Ansible - move slaves from home to srv

2017-11-20 Thread Josef Skladanka
I'm not sure what the best way is to ask for review for a Pagure-less
project, since we don't use Phabricator any more, so... let the funmail
begin:


diff --git a/inventory/host_vars/qa10.qa.fedoraproject.org
b/inventory/host_vars/qa10.qa.fedoraproject.org
index 297f614e3..d2119dc47 100644
--- a/inventory/host_vars/qa10.qa.fedoraproject.org
+++ b/inventory/host_vars/qa10.qa.fedoraproject.org
@@ -9,18 +9,18 @@ gw: 10.5.124.254

 short_hostname: qa10.qa
 slaves:
-  - { user: "{{ short_hostname }}-1", home: "/home/{{ short_hostname
}}-1", dir: "/home/{{ short_hostname }}-1/slave" }
-  - { user: "{{ short_hostname }}-2", home: "/home/{{ short_hostname
}}-2", dir: "/home/{{ short_hostname }}-2/slave" }
-  - { user: "{{ short_hostname }}-3", home: "/home/{{ short_hostname
}}-3", dir: "/home/{{ short_hostname }}-3/slave" }
-  - { user: "{{ short_hostname }}-4", home: "/home/{{ short_hostname
}}-4", dir: "/home/{{ short_hostname }}-4/slave" }
-  - { user: "{{ short_hostname }}-5", home: "/home/{{ short_hostname
}}-5", dir: "/home/{{ short_hostname }}-5/slave" }
-  - { user: "{{ short_hostname }}-6", home: "/home/{{ short_hostname
}}-6", dir: "/home/{{ short_hostname }}-6/slave" }
-  - { user: "{{ short_hostname }}-7", home: "/home/{{ short_hostname
}}-7", dir: "/home/{{ short_hostname }}-7/slave" }
-  - { user: "{{ short_hostname }}-8", home: "/home/{{ short_hostname
}}-8", dir: "/home/{{ short_hostname }}-8/slave" }
-  - { user: "{{ short_hostname }}-9", home: "/home/{{ short_hostname
}}-9", dir: "/home/{{ short_hostname }}-9/slave" }
-  - { user: "{{ short_hostname }}-10", home: "/home/{{ short_hostname
}}-10", dir: "/home/{{ short_hostname }}-10/slave" }
-  - { user: "{{ short_hostname }}-11", home: "/home/{{ short_hostname
}}-11", dir: "/home/{{ short_hostname }}-11/slave" }
-  - { user: "{{ short_hostname }}-12", home: "/home/{{ short_hostname
}}-12", dir: "/home/{{ short_hostname }}-12/slave" }
-  - { user: "{{ short_hostname }}-13", home: "/home/{{ short_hostname
}}-13", dir: "/home/{{ short_hostname }}-13/slave" }
-  - { user: "{{ short_hostname }}-14", home: "/home/{{ short_hostname
}}-14", dir: "/home/{{ short_hostname }}-14/slave" }
-  - { user: "{{ short_hostname }}-15", home: "/home/{{ short_hostname
}}-15", dir: "/home/{{ short_hostname }}-15/slave" }
+  - { user: "{{ short_hostname }}-1", home: "/srv/buildslaves/{{
short_hostname }}-1", dir: "/srv/buildslaves/{{ short_hostname }}-1/slave" }
+  - { user: "{{ short_hostname }}-2", home: "/srv/buildslaves/{{
short_hostname }}-2", dir: "/srv/buildslaves/{{ short_hostname }}-2/slave" }
+  - { user: "{{ short_hostname }}-3", home: "/srv/buildslaves/{{
short_hostname }}-3", dir: "/srv/buildslaves/{{ short_hostname }}-3/slave" }
+  - { user: "{{ short_hostname }}-4", home: "/srv/buildslaves/{{
short_hostname }}-4", dir: "/srv/buildslaves/{{ short_hostname }}-4/slave" }
+  - { user: "{{ short_hostname }}-5", home: "/srv/buildslaves/{{
short_hostname }}-5", dir: "/srv/buildslaves/{{ short_hostname }}-5/slave" }
+  - { user: "{{ short_hostname }}-6", home: "/srv/buildslaves/{{
short_hostname }}-6", dir: "/srv/buildslaves/{{ short_hostname }}-6/slave" }
+  - { user: "{{ short_hostname }}-7", home: "/srv/buildslaves/{{
short_hostname }}-7", dir: "/srv/buildslaves/{{ short_hostname }}-7/slave" }
+  - { user: "{{ short_hostname }}-8", home: "/srv/buildslaves/{{
short_hostname }}-8", dir: "/srv/buildslaves/{{ short_hostname }}-8/slave" }
+  - { user: "{{ short_hostname }}-9", home: "/srv/buildslaves/{{
short_hostname }}-9", dir: "/srv/buildslaves/{{ short_hostname }}-9/slave" }
+  - { user: "{{ short_hostname }}-10", home: "/srv/buildslaves/{{
short_hostname }}-10", dir: "/srv/buildslaves/{{ short_hostname
}}-10/slave" }
+  - { user: "{{ short_hostname }}-11", home: "/srv/buildslaves/{{
short_hostname }}-11", dir: "/srv/buildslaves/{{ short_hostname
}}-11/slave" }
+  - { user: "{{ short_hostname }}-12", home: "/srv/buildslaves/{{
short_hostname }}-12", dir: "/srv/buildslaves/{{ short_hostname
}}-12/slave" }
+  - { user: "{{ short_hostname }}-13", home: "/srv/buildslaves/{{
short_hostname }}-13", dir: "/srv/buildslaves/{{ short_hostname
}}-13/slave" }
+  - { user: "{{ short_hostname }}-14", home: "/srv/buildslaves/{{
short_hostname }}-14", dir: "/srv/buildslaves/{{ short_hostname
}}-14/slave" }
+  - { user: "{{ short_hostname }}-15", home: "/srv/buildslaves/{{
short_hostname }}-15", dir: "/srv/buildslaves/{{ short_hostname
}}-15/slave" }
diff --git a/inventory/host_vars/qa11.qa.fedoraproject.org
b/inventory/host_vars/qa11.qa.fedoraproject.org
index de99d2ba1..47c5b702d 100644
--- a/inventory/host_vars/qa11.qa.fedoraproject.org
+++ b/inventory/host_vars/qa11.qa.fedoraproject.org
@@ -9,18 +9,18 @@ gw: 10.5.124.254

 short_hostname: qa11
 slaves:
-  - { user: "{{ short_hostname }}-1", home: "/home/{{ short_hostname
}}-1", dir: "/home/{{ short_hostname }}-1/slave" }
-  - { user: "{{ short_hostname }}-2", home: "/home/{{ short_hostname
}}-2", dir: "/home/{{ short_hostna
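The hand-expanded slave lists above are tedious and error-prone to maintain. A hypothetical generator for the same YAML (the path layout is taken from the diff; everything else is illustrative, not part of the actual Ansible repo) could look like:

```python
# Emit the host_vars "slaves" list instead of hand-maintaining 15
# near-identical lines. The "{{ short_hostname }}" Jinja2 placeholder is
# left intact so Ansible still expands it per host.
def render_slaves(count=15, base="/srv/buildslaves"):
    lines = ["slaves:"]
    for i in range(1, count + 1):
        user = "{{ short_hostname }}-%d" % i
        home = "%s/%s" % (base, user)
        lines.append('  - { user: "%s", home: "%s", dir: "%s/slave" }'
                     % (user, home, home))
    return "\n".join(lines)

print(render_slaves())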

Re: 2017-10-16 @ 14:00 UTC - Fedora QA Devel Meeting

2017-10-16 Thread Josef Skladanka
Looks like it will be just the two of us today, Tim - I don't have any
serious updates, but I'm all for doing it if you deem it useful.

On Mon, Oct 16, 2017 at 6:36 AM, Tim Flink  wrote:

> # Fedora QA Devel Meeting
> # Date: 2017-10-16
> # Time: 14:00 UTC
> (https://fedoraproject.org/wiki/Infrastructure/UTCHowto)
> # Location: #fedora-meeting-1 on irc.freenode.net
>
>
> https://fedoraproject.org/wiki/QA:Qadevel-20171016
>
> If you have any additional topics, please reply to this thread or add
> them in the wiki doc.
>
> Tim
>
>
> Proposed Agenda
> ===
>
> Announcements and Information
> -
>   - Please list announcements or significant information items below so
> the meeting goes faster
>
> Tasking
> ---
>   - Does anyone need tasks to do?
>
> Potential Other Topics
> --
>
>   - deployment of ansiblize branches
>
> Open Floor
> --
>   - TBD
>


Re: Proposal to CANCEL: 2017-08-28 QA Devel Meeting

2017-08-28 Thread Josef Skladanka
ack

On Mon, Aug 28, 2017 at 5:58 AM, Tim Flink  wrote:

> More than one of us is traveling to Flock on Monday, and as such I
> propose that we cancel the regularly scheduled QA Devel meeting.
>
> If there is some urgent topic to discuss, please reply to this thread
> and the meeting can happen if there is someone around who is willing to
> lead such a meeting.
>
> Tim
>


Discontinuing Phabricator

2017-08-04 Thread Josef Skladanka
As you all probably know, we decided that keeping Phab up and running is
not the best use of our - rather limited, and shrinking - resources, so we
moved all our projects to Pagure. Yay!

As of now, all (relevant) tickets are moved to Pagure, and we have the
Differential revisions archived as HTML snapshots here:
https://fedorapeople.org/groups/qa/phabarchive/differentials/phab.qa.fedoraproject.org/
(note that this is not the final version; once kparal gets to update it,
the "download raw diff" links will provide you with just that).

Links between tickets and ticket dependencies are hopefully moved too, as
are the references to the Differential revisions tied to each ticket - I
was able to manually check a few tickets, and "it was fine" (tm). In
Phabricator, a ticket could be part of multiple projects (like execdb +
resultsdb + libtaskotron) - we (kparal mostly) cleaned up quite a few of
those, but some still made sense to keep. Pagure cannot represent this,
so I ended up duplicating the tickets. Such tickets' first comment (or one
of the first few comments) says "This is a duplicate of ..." - meaning
just that the ticket was part of several projects and the referenced
ticket is the same one.

This also means that, as of now, we won't be actively taking part in
maintaining or using Phabricator. We still have to decide on a reasonable
way to do code reviews, so any tips on the topic are more than welcome. If
you have some unmerged Differential revisions that you'd like to see taken
care of, please create a pull request and mention which diff it relates to.

I'm sad to see this great tool go; hopefully we'll be able to make decent
use of Pagure.

Josef

P.S. if you feel brave enough, feel free to have a look at the junk-code
that made this possible at https://pagure.io/fedora-qa/phabarchive/
(disclaimer - the code should die in fire!)


Re: Proposal to CANCEL: 2017-07-03 QA Devel Meeting

2017-07-02 Thread Josef Skladanka
+1

On Sat, Jul 1, 2017 at 7:30 PM, Tim Flink  wrote:

> There are multiple holidays this week and I suspect that most folks
> (including me) won't be around for a QA Devel meeting so I propose that
> we cancel the regular meeting.
>
> If there is some urgent topic to discuss, reply to this thread and the
> meeting can happen but I won't be around and someone else would have to
> lead it.
>
> Tim
>


Re: Re-Scheduling Jobs for Taskotron as a User

2017-04-20 Thread Josef Skladanka
On Thu, Apr 20, 2017 at 12:07 AM, Adam Williamson
<adamw...@fedoraproject.org> wrote:

> OK, like I said, half-baked =) But wdyt?
>
>
Love it! (And I swear, it has nothing to do with the fact that I also
thought this would be a great way to solve it in a more generic manner.)


Re: Proposal to CANCEL: 2017-03-27 QA Devel Meeting

2017-03-26 Thread Josef Skladanka
OK

On Mon, Mar 27, 2017 at 5:30 AM, Tim Flink  wrote:

> I have a conflict during the normal QA Devel meeting this week so
> unless someone else wants to lead the meeting, I propose that we cancel
> it.
>
> Tim
>


Re: ExecDB rewrite - call for comments

2017-02-20 Thread Josef Skladanka
Just FYI, I transformed the document (thanks for the comments and
nitpicks) into a Phab wiki page:
https://phab.qa.fedoraproject.org/w/taskotron/execdb_rewrite/

On Wed, Feb 15, 2017 at 2:40 PM, Kamil Paral  wrote:

> Hey gang!
>
> With the incoming changes, I'd like to make ExecDB a bit more worthy of
> its name, make it less tied to Buildbot than it is at the moment, and
> also make some changes to what functionality it provides.
>
> Please, comment!
>
> Thanks, Joza
>
>
> 
> https://docs.google.com/a/redhat.com/document/d/1sOAn2WJ0-XAJu9ssckevS9-m2BM7DwyGbfUaSHx3Nq0/edit?usp=sharing
>
>
> Looks reasonable, I also added some nitpicks.
>
> One thing though, it seems that the document is not available for public
> viewing. Unfortunately, Google Drive for redhat.com accounts doesn't
> allow sharing documents publicly, only inside the redhat.com domain. So
> if you want the document publicly available, you'll need to use a
> different account/service next time :/
>
>


Trigger changes - call for comments

2017-02-16 Thread Josef Skladanka
Hey, gang!

As with the ExecDB one, I took some time to try and formalize what I think
should be done with Trigger in the near-ish future.
Since it came to my attention that the internal G-Docs cannot be accessed
outside of RH, this time it is shared from my personal account - hopefully
more people will be able to read and comment on the document.
Without further ado -
https://docs.google.com/document/d/1BEWdgm0jO4p5DsTO4vntwUumgLZGtU5lBaBX5av7MGQ/edit?usp=sharing

Thanks,
joza


Re: Taskotron CI in Taskotron

2017-02-15 Thread Josef Skladanka
On Wed, Feb 15, 2017 at 5:55 PM, Adam Williamson  wrote:

> On Wed, 2017-02-15 at 12:59 +0100, Josef Skladanka wrote:
> > On Tue, Feb 14, 2017 at 8:51 PM, Adam Williamson
> > <adamw...@fedoraproject.org> wrote:
> > > Are you aware of fedmsg-dg-replay? It's a fairly easy way to 'replay'
> > > fedmsgs for testing. All you need (IIRC) is the fedmsg-relay service
> > > running on the same system, and you can run
> > >
> >
> > I am, but it has the bad quality of changing the topic, so we would
> > have to change the consumers' topics too. Or make it configurable in
> > some way...
> > I'd rather do it the way I have it now - using the trigger's internal
> > replay functionality instead of doing unnecessarily complicated
> > changes just for the sake of using it to test stuff once in a while.
>
> I was thinking that it's probably not that difficult to set up a
> testing fedmsg bus as a test fixture with some canned messages that can
> be replayed on request, but I haven't looked at doing it so I really
> don't know how much work it is. I wonder if fedmsg's test suite does
> it.
>
>
Ah, I did not get that on the first read, and now it is obvious even
from the previous email *facepalm*. Yeah, that would make sense, I guess.
We'll see about that; at the moment we have bigger fish to fry, but in the
end I'd like to have this stuff covered too.
Thanks for the good idea, though!

joza


Re: Taskotron CI in Taskotron

2017-02-15 Thread Josef Skladanka
On Tue, Feb 14, 2017 at 8:51 PM, Adam Williamson  wrote:

> Are you aware of fedmsg-dg-replay? It's a fairly easy way to 'replay'
> fedmsgs for testing. All you need (IIRC) is the fedmsg-relay service
> running on the same system, and you can run
>
I am, but it has the bad quality of changing the topic, so we would have
to change the consumers' topics too. Or make it configurable in some
way...
I'd rather do it the way I have it now - using the trigger's internal
replay functionality instead of doing unnecessarily complicated changes
just for the sake of using it to test stuff once in a while.
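The topic problem described above could, in principle, also be worked around by normalizing replayed topics before the consumers match on them. A minimal sketch, assuming (my assumption, not documented behavior) that the replay merely prepends a fixed, configurable prefix:

```python
# Map a replayed topic back to the original before consumers match on it.
# The replay_prefix value would come from consumer configuration; the
# assumption that a replay only prepends a prefix is illustrative.
def original_topic(replayed_topic, replay_prefix):
    """Strip `replay_prefix` from a replayed topic, if present."""
    if replayed_topic.startswith(replay_prefix):
        return replayed_topic[len(replay_prefix):]
    return replayed_topic
```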

J.


ExecDB rewrite - call for comments

2017-02-12 Thread Josef Skladanka
Hey gang!

With the incoming changes, I'd like to make ExecDB a bit more worthy of
its name, make it less tied to Buildbot than it is at the moment, and also
make some changes to what functionality it provides.

Please, comment!

Thanks, Joza

https://docs.google.com/a/redhat.com/document/d/1sOAn2WJ0-XAJu9ssckevS9-m2BM7DwyGbfUaSHx3Nq0/edit?usp=sharing


Re: Taskotron CI in Taskotron

2017-02-10 Thread Josef Skladanka
So, the repo now has a working PoC:
https://pagure.io/taskotron/task-taskotron-ci
The readme contains an example of how to run the task.
It works on my setup, and I'd be glad if somebody else tried it.

J.

On Fri, Feb 10, 2017 at 7:31 AM, Josef Skladanka wrote:

>
>
> On Thu, Feb 9, 2017 at 5:58 PM, Matthew Miller wrote:
>
>> On Thu, Feb 09, 2017 at 03:29:13AM +0100, Josef Skladanka wrote:
>> > I finally got some work done on the CI task for Taskotron in
>> > Taskotron. The idea here is that after each commit (of a relevant
>> > project - trigger, execdb, resultsdb, libtaskotron) to pagure, we
>> > will run the whole stack in docker containers, and execute a known
>> > "phony" task, to see whether it all goes fine.
>>
>> This is excellent. I'd love, eventually, to get to a point where we can
>> run the checks _pre_ commit and gate on them. Is there a path from this
>> to that?
>
>
> Absolutely, that is the goal.
>
> Generally speaking, we'd like to run tests on Pagure's PRs.
> For taskotron specifically, we'll need to figure out some Phabricator
> plugin that fires off a fedmsg (or calls some API, whatever) on a new
> Differential request, but generally it is the same idea.
>
> Joza
>


Re: Wiki page gardening

2017-02-09 Thread Josef Skladanka
Awesome, thanks!

On Fri, Feb 10, 2017 at 4:27 AM, Adam Williamson  wrote:

> Hi folks! I did a bit of light gardening on the Taskotron and ResultsDB
>  and a few other wiki pages today:
>
> * https://fedoraproject.org/wiki/Taskotron
> * https://fedoraproject.org/wiki/Taskotron_contribution_guide
>   (moved from User:Tflink/taskotron_contribution_guide)
> * https://fedoraproject.org/wiki/QA:Phabricator
>   (moved from QA/Phabricator)
> * https://fedoraproject.org/wiki/ResultsDB
> * https://fedoraproject.org/wiki/QA:Tools
>   (moved from QA/Tools)
>
> I guess most significantly, I tried to consolidate the 'how to
> contribute' instructions a bit to make it easier for people to find
> their way through. The main 'how to use arcanist' stuff is now in the
> Phabricator page, and you can use this anchor link to link to it:
>
> https://fedoraproject.org/wiki/QA:Phabricator#issues-diffs
>
> That content was moved from
> https://phab.qa.fedoraproject.org/w/contributing/ . I sprinkled links
> to it around a few other pages. The Taskotron_contribution_guide page
> links to that page for the generic instructions, and just includes
> Taskotron-specific stuff. Notably, I tried to include a comprehensive
> and up-to-date list of the Taskotron repositories on that page;
> hopefully that can be the sole place where such a list lives now (I
> removed the other incomplete and out of date lists I could find).
>
> I updated QA:Tools to link to a few more things, and removed various
> bits of out-of-date content to make the pages look less...sad. :)
>
> Please let me know about (or just fix) any problems you see :) Thanks!
> --
> Adam Williamson
> Fedora QA Community Monkey
> IRC: adamw | Twitter: AdamW_Fedora | XMPP: adamw AT happyassassin . net
> http://www.happyassassin.net


Re: Taskotron CI in Taskotron

2017-02-09 Thread Josef Skladanka
On Thu, Feb 9, 2017 at 5:58 PM, Matthew Miller wrote:

> On Thu, Feb 09, 2017 at 03:29:13AM +0100, Josef Skladanka wrote:
> > I finally got some work done on the CI task for Taskotron in
> > Taskotron. The idea here is that after each commit (of a relevant
> > project - trigger, execdb, resultsdb, libtaskotron) to pagure, we
> > will run the whole stack in docker containers, and execute a known
> > "phony" task, to see whether it all goes fine.
>
> This is excellent. I'd love, eventually, to get to a point where we can
> run the checks _pre_ commit and gate on them. Is there a path from this
> to that?


Absolutely, that is the goal.

Generally speaking, we'd like to run tests on Pagure's PRs.
For taskotron specifically, we'll need to figure out some Phabricator
plugin that fires off a fedmsg (or calls some API, whatever) on a new
Differential request, but generally it is the same idea.

Joza


Taskotron CI in Taskotron

2017-02-08 Thread Josef Skladanka
Gang,

I finally got some work done on the CI task for Taskotron in Taskotron. The
idea here is that after each commit (of a relevant project - trigger,
execdb, resultsdb, libtaskotron) to pagure, we will run the whole stack in
docker containers, and execute a known "phony" task, to see whether it all
goes fine.

The way I devised it is that I'll build a 'testsuite' container based on
the Trigger, and instead of running the fedmsg hub, I'll just use the CLI
to "replay" what would happen on a known, predefined fedmsg.
The testsuite will then watch execdb and resultsdb to see whether
everything went fine.
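The "watch execdb and resultsdb" step could look roughly like the polling loop below. The `fetch` callable stands in for an HTTP query such as GET /api/v2.0/results?testcases=&lt;tc&gt;&amp;item=&lt;item&gt;; that endpoint shape is an assumption based on ResultsDB's v2 API, not taken from the task repo.

```python
# Poll until a result for a known (testcase, item) pair appears, or give up.
# `fetch`, `clock`, and `sleep` are injectable so the loop can be tested
# without a running ResultsDB.
import time

def wait_for_result(fetch, testcase, item, timeout=60, interval=2,
                    clock=time.monotonic, sleep=time.sleep):
    deadline = clock() + timeout
    while clock() < deadline:
        data = fetch(testcase, item)
        if data.get("data"):          # ResultsDB wraps results in "data"
            return data["data"][0]
        sleep(interval)
    raise TimeoutError("no result for %s/%s within %ss"
                       % (testcase, item, timeout))
```

The testsuite container would call this after replaying the fedmsg, failing the run if the timeout fires.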

It is not at all finished, but I started hacking on it here:
https://pagure.io/taskotron/task-taskotron-ci
I hope to finish it (to a point where it runs the phony task) by the end
of the week. At that point, I'd be glad for any actual, sensible task
ideas that would ideally test as much of the capabilities of
libtaskotron/execdb/resultsdb as possible.

The only problem with this kind of testing is that we still don't really
have a good way to test the trigger, as it is tied to external events. My
idea here was that I could add something like a wiki edit consumer and
trigger tasks off of that, making the one "triggering" edit from inside
the testsuite. But as it's almost 4 am here, I'm not sure it is the best
idea.
Once again, I'll be glad for any input/ideas/evil laughter.

Joza


Re: Libtaskotron - allow non-cli data input

2017-02-08 Thread Josef Skladanka
On Wed, Feb 8, 2017 at 8:06 PM, Adam Williamson wrote:

> Wouldn't it be great if we had a brand new project which would be the
> ideal place to represent such conventions, so the bit of taskotron
> which reported the results could construct them conveniently? :P


https://xkcd.com/684/ :) (I mean no offense, it just really reminded me of that)


Re: Libtaskotron - allow non-cli data input

2017-02-08 Thread Josef Skladanka
On Wed, Feb 8, 2017 at 7:39 PM, Kamil Paral  wrote:

> > I mentioned this in IRC but why not have a bit of both and allow input
> > as either a file or on the CLI. I don't think that json would be too
> > bad to type on the command line as an option for when you're running
> > something manually:
> >
> >   runtask sometask.yml -e "{'namespace':'someuser',\
> > 'module':'somemodule', 'commithash': 'abc123df980'}"
>
> I probably misunderstood you on IRC. In my older response here, I actually
> suggested something like this - having "--datafile data.json", which can
> also be used like "--datafile -" meaning stdin. You can then use "echo
>  | runtask --datafile - ". But your solution is probably
> easier to look at.
>

I honestly like the `--datafile [fname, -]` approach a lot. We could surely
name the param better, but that's about it. I like it better than
necessarily having a long cmdline, and you can still use "echo " if
you wanted to have a cmdline example, or "cat " for the common usage.
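A minimal sketch of what a `--datafile [fname, -]` option could look like (the function and option names here are illustrative, not the actual libtaskotron CLI):

```python
import argparse
import json
import sys


def load_task_data(path):
    """Load structured task data from a JSON file, or from stdin if path is '-'.

    Illustrative sketch of the proposed --datafile behavior, not the real
    libtaskotron code.
    """
    if path == "-":
        return json.load(sys.stdin)
    with open(path) as datafile:
        return json.load(datafile)


def parse_args(argv):
    """Parse a runtask-like command line (sketch only)."""
    parser = argparse.ArgumentParser(prog="runtask")
    parser.add_argument("formula")
    parser.add_argument("--datafile",
                        help="JSON file with task input data, '-' for stdin")
    return parser.parse_args(argv)
```

With this shape, both `echo '{...}' | runtask task.yml --datafile -` and `runtask task.yml --datafile data.json` end up with the same parsed dict.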



> > There would be some risk of running into the same problems we had with
> > AutoQA where depcheck commands were too long for bash to parse but
> > that's when I'd say "you need to use a file for that"
>
> Definitely.
>

And that's why I'd rather stay away from long cmdlines :)


>
> > > I'm a bit torn between providing as much useful data as we can when
> > > scheduling (because a) yaml formulas are very limited and you can't
> > > do stuff like string parsing/splitting b) might save you a lot of
> > > work/code to have this data presented to you right from the start),
> > > and the easy manual execution (when you need to gather and provide
> > > all that data manually). It's probably about finding the right
> > > balance. We can't avoid having structured multi-data input, I don't
> > > think.
> >
> > If we did something along the lines of allowing input on the CLI, we
> > could have both, no? We'd need to be clear on the precedence of file vs
> > CLI input but that seems to me like something that could solve the
> > issue of dealing with more complicated inputs without requiring users
> > to futz with a file when running tasks locally.
>
> That's not the worry I had. Creating a file or writing json to a command
> line is a bit more work than the current state, but not a problem. What I'm
> a bit afraid of is that we'll start adding many keyvals into the json just
> because it is useful or convenient. As an artificial example, let's say for
> a koji_build FOO we supply NVR, name, epoch, owner, build_id and
> build_timestamp. And if we receive all of that in the fedmsg (or from some
> koji query that we'll need to do anyway for some reason), it makes sense to
> pass that data, it's free for us and it's less work for the task (it
> doesn't have to do its own queries). However, running the task manually as
> a task developer (and I don't mean re-running an existing task on FOO by
> copy-pasting the existing data json from a log file, but running it on a
> fresh new koji build BAR) makes it much more difficult for the developer,
> because he needs to figure out (manually) all those values for BAR just to
> be able to run his task.
>

Even more extreme (deliberately, to illustrate the point) example would be
> to pass the whole koji buildinfo dict structure that you get when running
> koji.getBuild(). Which could be actually easier for the developer to
> emulate, because we could document a single command that retrieves exactly
> that. Unless we start adding additional data to it...
>
> So on one hand, I'd like to pass as much data as we have to make task
> formulas simpler, but on the other hand, I'm afraid task development
> (manual task execution, without having a trigger to get all this data by
> magic) will get harder. (I hope I managed to explain it better this time:))


As I mentioned in one of the other emails - the dev (while developing)
should really only need to provide the data that is relevant for the
task/formula. Why have a ton of stuff that you never use in the "testing
data" - it is unnecessary work, and even makes it more prone to error IMO.
If I had task that only needs NVR, name and build_timestamp, I'd (while
developing/testing) just pass a structure containing these.

Or do you think that is a bad idea? I can sure see how (e.g.) the resultsdb
directive could spit out warnings about missing data, but that is
why we have the different profiles - resultsdb could fail in production
mode if data was missing (that probably means some serious error), or
just warn you in development mode.
If you wanted to "test it thoroughly" you'd better use some real data
anyway - and if we store the "input data structure" in the logs for the tasks,
then there is even a good source of those, should you want to copy-paste it.
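The dev-vs-production distinction described above could be sketched roughly like this (the helper name and profile strings are made up for illustration):

```python
import warnings


class MissingDataError(Exception):
    """Raised in the production profile when required result data is absent."""


def check_required_data(data, required, profile="development"):
    """Check that all required keys are present in the task's input data.

    In the (hypothetical) 'production' profile a missing key is a hard
    error; in 'development' it only produces a warning, so a task author
    can run with a minimal hand-written data file.
    """
    missing = [key for key in required if key not in data]
    if not missing:
        return True
    message = "missing data fields: %s" % ", ".join(missing)
    if profile == "production":
        raise MissingDataError(message)
    warnings.warn(message)
    return False
```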

I hope I understood what you meant.

joza
___
qa-devel mailing list -- qa-devel@lists.fedoraproject.org
To unsubscribe sen

Re: Libtaskotron - allow non-cli data input

2017-02-08 Thread Josef Skladanka
On Wed, Feb 8, 2017 at 4:11 PM, Tim Flink  wrote:

> On Wed, 8 Feb 2017 08:26:30 -0500 (EST)
> Kamil Paral  wrote:
>
> I think another question is whether we want to keep assuming that the
> *user supplies the item* that is used as a UID in resultsdb. As you say,
> it seems a bit odd to require people to munge stuff together like
> "namespace/module#commithash" at the same time that it can be separated
> out into a dict-like data structure for easy access.
>
>
Emphasis mine. I think that we should not really be assuming that at all.
In most cases, the item should be provided by the trigger automagically,
the same with the type. With what I'd like to see for the structured input,
the conventions module could/should take that data into account while
constructing the "default" results.
Keep in mind that one result can also have multiple "items" (as it can
have multiples of any extra data field), if it makes sense. One would be the
"auto-provided" item and the second could be user-added. That would make it
both consistent (the trigger-generated item) and flexible, if a different
"item" makes sense.

Would it make more sense to just pass in the dict and have semi-coded
> conventions for reporting to resultsdb based on the item_type which
> could be set during the task instead of requiring that to be known
> before task execution time?
>
> Something along the lines of enabling some common kinds of input for
> the resultsdb directive - module commit, dist-git rpm change, etc. so
> that you could specify the item_type to the resultsdb directive and it
> would know to look for certain bits to construct the UID item that's
> reported to resultsdb.
>

Yup, I think that setting some conventions, and making sure we keep the
same (or at least very similar) set of metadata for each relevant type, is
key.
I mentioned this in the previous email, but in the past few days I have
been thinking about making the types a bit more general - the pretty
specific types we have now made sense when we first designed stuff and had
a very narrow use case.
Now that we want to make the stack usable in stuff like Platform CI, I
think it would make sense to abstract a bit more, so we don't have
`koji_build`, `brew_build`, `copr_build`, which are essentially the same
but differ in minor details. We can specify those classes/details in
extradata, or could even use multiple types - having the common set of
information guaranteed for the whole 'build' type, and adding other kinds
of data to `koji_build`, `brew_build` or `whatever_build` as needed.


> Using Kamil's example, assume that we have a task for a module and the
> following data is passed in:
>
>   {'namespace':'someuser', 'module':'httpd', 'commithash':'abc123df980'}
>
> Neither item nor type is specified on the CLI at execution time. The
> task executes using that input data and when it comes time to report to
> resultsdb:
>
>   - name: report results to resultsdb
> resultsdb:
>   results: ${some_task_output}
>   type: module
>
> By passing in that type of module, the directive would look through the
> input data and construct the "item" from input.namespace, input.module
> and input.commithash.
>
> I'm not sure if it makes more sense to have a set of "types" that the
> resultsdb directive understands natively or to actually require item
> but allow variable names in it along the lines of
>
>   "item":"${namespace}/${module}#${commithash}"
>

I'd rather have that in "conventions" than the resultsdb directive, but I
guess it is essentially the same thing, once you think about it.
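Incidentally, Python's `string.Template` uses exactly this `${name}` syntax, so an item convention like `"${namespace}/${module}#${commithash}"` could be expanded from the input dict in a couple of lines (a sketch, not actual directive code):

```python
from string import Template


def build_item(pattern, data):
    """Expand an item pattern such as '${namespace}/${module}#${commithash}'
    from the task's input data. Raises KeyError if a referenced field is
    missing, matching the crash-on-missing-data behavior discussed here."""
    return Template(pattern).substitute(data)


# e.g. build_item("${namespace}/${module}#${commithash}",
#                 {"namespace": "someuser", "module": "httpd",
#                  "commithash": "abc123df980"})
# returns "someuser/httpd#abc123df980"
```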


>
> > > My take on this is, that we will say which variables are provided
> > > by the trigger for each type. If a variable is missing, the
> > > formula/execution should just crash when it tries to access it.
> >
> > Sounds reasonable.
>
> +1 from me as well. Assume everything is there, crash if there's
> something requested that isn't available (missing data etc.)
>
>
yup, that's what I have in mind.


> > We'll probably end up having a mix of necessary and convenience
> > values in the inputdata. "name" is probably a convenience value here,
> > so that tasks don't have to parse if they need to use it in a certain
> > directive. "epoch" might be an important value for some test cases,
> > and let's say we learn the value in trigger during scheduling
> > investigation, so we decide to pass it down. But that information is
> > not that easy to get manually. If you know what to do, you'll open up
> > a particular koji page and see it. But you can also be clueless about
> > how to figure it out. The same goes for build_id, again can be
> > important, but also can be retrieved later, so more of a convenience
> > data (saving you from writing a koji query). This is just an example
> > for illustration, might not match real-world use cases.
>
> I mentioned this in IRC but why not have a bit of both and allow input
> as either a file or on the CLI. I don't think that json would be too
> bad to type on the command line a

Re: Libtaskotron - allow non-cli data input

2017-02-08 Thread Josef Skladanka
On Wed, Feb 8, 2017 at 2:26 PM, Kamil Paral  wrote:

> This is what I meant - keeping item as is, but being able to pass another
> structure to the formula, which can then be used from it. I'd still like to
> keep the item to a single string, so it can be queried easily in the
> resultsdb. The item should still represent what was tested. It's just that
> I want to be able to pass arbitrary data to the formulae, without the need
> for ugly hacks like we have seen with the git commits lately.
>
>
> So, the question is now how much we want the `item` to uniquely identify
> the item under test. Currently we mostly do (rpmlint, rpmgrill) and
> sometimes don't (depcheck, because item is NVR, but the full ID is NEVRA,
> and we store arch in the results extradata section).
>
>
I still kind of believe that the `item` should be chosen with great respect
to what actually is the item under test, but it also really depends on what
you want to do with it later on. Note that the `item` is actually a
convention (yay, more water to adamw's "if we only had some awesome new
project" mill), and is not enforced in any way. I believe that there should
be firm rules (once again - conventions) on what the item is for each "well
known" item type, so you can kind-of assume that if you query for
`item=foo&type=koji_build` you are getting the results related to that
build.
As we were discussing privately with the item types (I'm not going to go
into much detail here, but for the rest of you guys - I'm contemplating
making the types more general, and using more of the 'metadata' to store
additional specifics - like replacing `type=koji_build` with `type=build,
source=koji`, or `type=build, source=brew` - on the high level, you know
that a package/build was tested, and you don't really care where it came
from, but you sometimes might care, and so the additional metadata is
stored. We could even have more types stored for one result, or I don't
know... It's complicated), the idea behind item is that it should be a
reasonable value that carries the "what was tested" information, and you
will use the other "extra-data" fields to provide more details (like we
kind-of want to do with arch, but we don't really). The reason for it to
be a "reasonable value" and not a "superset of all values that we have" is
to make the general querying a bit more straightforward.
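As an illustration of that generalization, the same result could be reported either way (field names here are assumed for illustration, not an actual resultsdb schema):

```python
# Two hypothetical resultsdb payloads for the same event: a service-specific
# type vs. a generic 'build' type with the specifics kept in extra data.
result_specific = {
    "item": "htop-2.0-1.fc25",
    "type": "koji_build",
    "outcome": "PASSED",
}

result_generic = {
    "item": "htop-2.0-1.fc25",
    "type": "build",
    "outcome": "PASSED",
    "data": {
        "source": "koji",   # the detail that used to live in the type name
        "arch": "x86_64",
    },
}
```

Querying for `item=htop-2.0-1.fc25&type=build` then covers koji, brew, and copr builds alike, while `data.source` still preserves the origin.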


> If we have structured input data, what happens to `item` for
> check_modulemd? Currently it is "namespace/module#commithash". Will it stay
> the same, and they'll just avoid parsing it because we'll also provide
> ${data.namespace}, ${data.module} and ${data.hash}? Or will the `item` be
> perhaps just "module" (and the rest will be stored as extradata)? What
> happens when we have a generic git_commit type, and the source can be an
> arbitrary service? Will have some convention to use item as
> "giturl#commithash"?
>
>
Once again - whatever makes sense as the item. For me that would be the
Repo/SHA combo, with server, repo, branch, and commit in extradata.
And it comes down to "storing as much relevant metadata as possible" once again.
The thing is that as long as stuff is predictable, it almost does not
matter what it is, and it once again points out how good an idea the
conventions stuff is. I believe that we are now storing much less metadata
in resultsdb than we should, and it is caused mostly by the fact that
 - we did not really need to use the results much so far
 - it is pretty hard to pass data into libtaskotron, and querying all the
services all the time to get the metadata is/was deemed a bad idea - why
do it ourselves, if the consumer can get it themselves? They know that it
is a koji_build, so they can query koji.

There is a fine balance to be struck, IMO, so we don't end up storing "all
the data" in resultsdb. But I believe that the stuff relevant for the
result consumption should be there.


Because the ideal way would be to store the whole item+data structure as
> item in resultsdb. But that's hard to query for humans, so we want a simple
> string as an identifier.
>

This, for me, is once again about being predictable. As I said above, I
still think that `item` should be a reasonable identifier, but not
necessarily a superset of all the info. That is what the extra data is for.
Talking about...


> But sometimes there can be a lot of data points which uniquely identify
> the thing under test only when you specify it all (for example what Dan
> wrote, sometimes the ID is the old NVR *plus* the new NVR). Will we want to
> somehow combine them into a single item value? We should give some
> directions how people should construct their items.
>
>
My gut feeling here would be to store the "new NVR" (the thing that actually
caused the test to be executed) as item, and add the 'old nvr' to extra
data. But I'm not that familiar with the specific use case. To me, this
would make sense, because when you query for "this NVR related results"
you'd get the results too. If you wan

Re: Libtaskotron - allow non-cli data input

2017-02-07 Thread Josef Skladanka
On Mon, Feb 6, 2017 at 6:49 PM, Kamil Paral  wrote:

> The formulas already provide a way to 'query' structured data via the
> dot-format, so we could do with as much as passing some variable like
> 'task_data' that would contain the parsed json/yaml.
>
>
> Or are you proposing we add another variable with these extra values, like
> this?
>
> echo " {'branch': 'master', 'commit': '6e4fc7'} " | runtask --item
> libtaskotron --type pagure_git_commit --data-file - runtask.yaml
>
> or this:
>
> echo " {'name': 'htop'} " | runtask --item htop-2.0-1.fc25 --type
> koji_build --data-file - runtask.yaml
>
> and then use ${item} and ${data.branch}, ${data.commit}, or ${data.name} ?
>
>
>
This is what I meant - keeping item as is, but being able to pass another
structure to the formula, which can then be used from it. I'd still like to
keep the item to a single string, so it can be queried easily in the
resultsdb. The item should still represent what was tested. It's just that
I want to be able to pass arbitrary data to the formulae, without the need
for ugly hacks like we have seen with the git commits lately.



> I guess it depends whether the extra data will be mandatory and exactly
> defined ("this item type provides these input values") or not (what will
> formulas do when they're not there?). Also whether we want to make it still
> possible to execute a task with simple `--item string` in some kind of
> fallback mode, to keep local execution on dev machines still easy and
> simple.
>
>
My take on this is that we will say which variables are provided by the
trigger for each type. If a variable is missing, the formula/execution
should just crash when it tries to access it.
Not sure about the fallback mode, but my take is that if the user wants to
run the task, he will just have to write the "extra data" once to a file,
and then it will be passed in as usual.
We could even make some templates for each item_type (I guess the trigger
docs are the place for it?), so people can easily copy-paste them and make
changes.
I also think that providing a sample json file with the existing tasks (that
are using it) is a best practice we should strive for.

Makes sense?

Joza


Re: Lift type-restriction on libtaskotron's cli + resultsdb directive

2017-02-07 Thread Josef Skladanka
On Mon, Feb 6, 2017 at 6:19 PM, Kamil Paral  wrote:

So this is about removing `_ITEM_TYPES` from `main.py`, correct?
>

Yes, I could have been more specific.

>
> I don't have a problem with that, as long as any relevant docs are updated
> and we're able to present a reasonable message when something goes wrong -
> either if there's a typo or the user executes a formula expecting type X
> with type Y (and therefore e.g. `koji_build` variable is required but not
> provided). It also means we should document the most commonly used types
> somewhere, so that people know for which events they can write their tasks
> and have them executed in our infra.
>

Agreed, I think that we cannot really "stop" the execution on a "supposed
typo", but some heuristics + warnings are IMO a good thing to have. I
thought that the types were already documented, but if not, then putting
together a list of the most common ones is IMO a good idea.


Re: making test suites work the same way

2017-02-06 Thread Josef Skladanka
+1 I'm glad that our usual flame-war yielded a common ground that we can
agree upon :) Hope that it was not as painful as usual.

J.

On Mon, Feb 6, 2017 at 6:04 PM, Kamil Paral  wrote:

> Well, after more discussions with kparal, we are still unsure about the
> "right" way to tackle this.
> Our current call would be:
> 1) sync requirements.txt versions with fedora (mostly done)
> 2) allow --system-site-packages in the test_env
> 3) do `pip install -r requirements.txt` (with possible flags to enforce
> versions) to the makefile virtualenv creation step
> 4) add info to readme, that testing needs installation of packages from
> pypi, and that some of them need compilation
> 4-1) put together a list of packages that need to be installed (the
> python-foobar kind, not -devel + gcc) to the system, in order to "skip" the
> stuff that needs to be compiled
>
> Sounds reasonable, Kamil? Others?
>
>
> I went back and forth on this. I thought it would be a really simple
> change, and as usual, it seems more pain than gain. So, I went forward with
> this:
> 1. add tox.ini to projects to allow simple test suite execution with
> `pytest` (non-controversial)
> 2. configure tox.ini to print out test coverage (non-controversial)
> 3. remove --system-site-packages from all places (readme, makefile) for
> those projects, that can be *fully* installed from pypi *without any
> compilation* (hopefully non-controversial).
> 4. keep (or add) --system-site-packages to readme/makefile for the
> remaining projects, and add readme info how to deal with pypi compilation
> or local rpm installation
>
> What Josef mentioned is that he wouldn't try to replicate a perfect
> environment directly on dev machine, because that's  a lot of work.
> Instead, use the current non-perfect environment on dev machines (which
> should be fine most of the time anyway) and have a separate CI service
> (hopefully in the future) with more strict environment configuration. I
> guess that's the most practical solution.
>
> We might even want to reopen the question how to version deps in
> requirements.txt vs spec file, but I'd keep that for a separate thread, if
> needed.
>
> My current patches for resultsdb projects are these:
> https://phab.qa.fedoraproject.org/D1114
> https://phab.qa.fedoraproject.org/D1116
> https://phab.qa.fedoraproject.org/D1117
>
>
>


Re: making test suites work the same way

2017-02-06 Thread Josef Skladanka
Well, after more discussions with kparal, we are still unsure about the
"right" way to tackle this.
Our current call would be:
1) sync requirements.txt versions with fedora (mostly done)
2) allow --system-site-packages in the test_env
3) add `pip install -r requirements.txt` (with possible flags to enforce
versions) to the makefile virtualenv creation step
4) add info to readme, that testing needs installation of packages from
pypi, and that some of them need compilation
4-1) put together a list of packages that need to be installed (the
python-foobar kind, not -devel + gcc) to the system, in order to "skip" the
stuff that needs to be compiled

Sounds reasonable, Kamil? Others?

Joza

On Mon, Feb 6, 2017 at 2:11 PM, Kamil Paral  wrote:

>
> 3. use a separate virtualenv when running under `make test`, without
> --system-site-packages if possible, and ensure up-to-date deps are always
> installed, to eliminate any differences that can occur on different setups
>
> The only problem I see here is that some of the packages that you'd need
> to install into the test-virtualenv need some C-compilation when installing
> from PyPi, and that (if used without --system-site-packages) would mean
> having to install not only gcc but also plenty of -dev packages.
> Not a _huge_ issue but an issue nevertheless.
>
>
> That's a good point. But do we have a good alternative here? If we depend
> on packages like that, I see only two options:
>
> a) ask the person to install pyfoo as an RPM (in readme)
> b) ask the person to install gcc and libfoo-devel as an RPM (in readme)
> and pyfoo will be then compiled and installed from pypi
>
> Approach a) is somewhat easier and does not require compilation stack and
> devel libraries. OTOH it requires using virtualenv with
> --system-site-packages, which means people get different results on
> different setups. That's exactly what I'm trying to eliminate (or at least
> reduce). E.g. https://phab.qa.fedoraproject.org/D where I can run the
> test suite from makefile and you can't, and it's quite difficult to figure
> out why.
>
> With b) approach, you need compilation stack on the system. I don't think
> it's such a huge problem, because you're a developer after all. The
> advantage is that virtualenv can be created without --system-site-packages,
> which means locally installed libraries do not affect the execution/test
> suite results. Also, pyfoo is installed with exactly the right version,
> further reducing differences between setups. The only thing that can differ
> is the version of libfoo-devel, which can affect the behavior. But the
> likeliness of that happening is much smaller than having pyfoo of a
> different version or pulling any deps from the system site packages.
>
>
> Sigh, OTOH Josef has a very good point in
> https://phab.qa.fedoraproject.org/D1112#20744 that figuring out which
> devel packages are needed on the system for pypi module compilation is
> quite a non-trivial task. Seems harder than figuring out which packages
> you need to install on the system when using --system-site-packages.
>
> So, I don't really see a simple solution here that would guard us from
> inconsistent dev setups (system site packages) or not require us to have a
> painful deps maintenance (no system site packages, but devel packages) :/
>
> Even if we decide to keep --system-site-packages for simplicity, I can
> implement all the rest of the proposed improvements. Just the reliability
> will not be as great as I wished.
>
> The main question is now, I believe, whether we want to install
> compilation-dependent packages from pypi or rpm.
>


Re: making test suites work the same way

2017-02-06 Thread Josef Skladanka
On Mon, Feb 6, 2017 at 1:35 PM, Kamil Paral  wrote:

>
> That's a good point. But do we have a good alternative here? If we depend
> on packages like that, I see only two options:
>
> a) ask the person to install pyfoo as an RPM (in readme)
> b) ask the person to install gcc and libfoo-devel as an RPM (in readme)
> and pyfoo will be then compiled and installed from pypi
>
> Approach a) is somewhat easier and does not require compilation stack and
> devel libraries. OTOH it requires using virtualenv with
> --system-site-packages, which means people get different results on
> different setups. That's exactly what I'm trying to eliminate (or at least
> reduce). E.g. https://phab.qa.fedoraproject.org/D where I can run the
> test suite from makefile and you can't, and it's quite difficult to figure
> out why.
>
>
With b) approach, you need compilation stack on the system. I don't think
> it's such a huge problem, because you're a developer after all. The
> advantage is that virtualenv can be created without --system-site-packages,
> which means locally installed libraries do not affect the execution/test
> suite results. Also, pyfoo is installed with exactly the right version,
> further reducing differences between setups. The only thing that can differ
> is the version of libfoo-devel, which can affect the behavior. But the
> likeliness of that happening is much smaller than having pyfoo of a
> different version or pulling any deps from the system site packages.
>
>
The reason why I want to recommend `make test` for running the test suite
> (at least in readme), is because in the makefile we can ensure that a clean
> virtualenv with correct properties is created, and only and exactly the
> right versions of deps from requirements.txt are installed. We can perform
> further necessary steps, like installing the project
> . That further increases
> reliability. Compare this to manually running `pytest`- a custom virtualenv
> must be active; it can be configured differently than recommended in
> readme, it can be out of date, or it can have more packages installed than
> needed; you might forget some necessary steps.
>
>
Sure, I am a devel, but not a C-devel... As I told you in our other
conversation - I see what you are trying to accomplish, but for me the gain
does not even balance the issues. With variant 'a', all you need to do is
make sure "these python packages are installed" to run the test suite. I'd
rather have something like `requirements_testing.txt` where all the deps
are spelled out with the proper versions, and use that as a base for the
virtualenv population (I guess we could easily make do with the
requirements.txt we have now). Either you have the right version in your
system (or in your own development virtualenv from which you are running
the tests), or the right version will be installed for you from pip.
Yes, we might get down to people having to install a bunch of header files
and gcc, if for some reason their system is so different that they cannot
obtain the right version in any other way, but it will work most of the
time.



> Of course nothing prevents you from simply running the test suite using
> `pytest`. It's the same approach that Phab will do when submitting a patch.
> However, when some issues arises, I'd like all parties to be able to run
> `make test` and it should return the same result. That should be the most
> reliable method, and if it doesn't return the same thing, it means we have
> an important problem somewhere, and it's not just "a wrongly configured
> project on one dev machine".
>
So, I see these main use cases for `make test` and b) approach:
> * good a reliable default for newcomers, an approach that's the least
> likely to go wrong
> * determining the reason for failures that only one party sees and the
> other doesn't
> * `make test-ci` target, that will hopefully be used one day to perform
> daily/per-commit CI testing of our codebases. Again, using the most
> reliable method available.
>
>
Sure, nobody forces _me_ to do it this way, but I still fail to see the
overall general benefit. If a random _python web app_ project that I wanted
to submit a patch for wanted me to install gcc and tons of -devel libs, I'd
be going to the next door. We were talking "accessibility" a lot with Phab,
and one of the arguments against it (not saying it was you in particular)
was that "it is complicated, and needs additional packages installed". This
is an even worse version of the same. At least to me.
On top of that - who is going to be syncing up the versions of said
packages between Fedora (our target) and requirements.txt? What release
are we going to be using as the target? And is it even the right place and
way to do it?



> For some codebases this is not viable anyway, e.g. libtaskotron, because
> they depend on packages not available in pypi (koji) and thus need
> --system-site-packages. But e.g. resultsdb projects seem that t

Libtaskotron - allow non-cli data input

2017-02-06 Thread Josef Skladanka
Chaps,

we have discussed this many times in the past, and as with the
type-restriction, I think this is actually the right time to get it done.

It ties to the fact that I'm trying to put
Taskotron-continuously-testing-Taskotron together - the idea here being
that on each commit to a devel branch of any of the Taskotron components,
we will spin up a testing instance of the whole stack and run some
integration tests.

To do this, I added a new consumer to Trigger (
https://phab.qa.fedoraproject.org/D1110) that eats Pagure.io commits and
spins up jobs based on that.
This means that I want to have the repo, branch and commit id as input for
the job, thus making yet another nasty hack to pass the combined data into
the job (https://phab.qa.fedoraproject.org/D1110#C16697NL18) so I can hack
it apart later on, either in the formula or in the task itself.

It would be very helpful to be able to pass some structured data into the
task instead.

I kind of remember that we agreed on json/yaml. The possibilities were
either reading it from stdin or from a file. I don't really care that much
either way, but I would probably feel a bit better about having a cli param
to pass the filename there.

The formulas already provide a way to 'query' structured data via the
dot-format, so we could do with as much as passing some variable like
'task_data' that would contain the parsed json/yaml.
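The dot-format lookup into such a 'task_data' variable could be as simple as (a sketch; the real formula rendering code may differ):

```python
def resolve(path, tree):
    """Resolve a dotted path such as 'task_data.commit' against nested
    dicts, e.g. the parsed json/yaml input. Raises KeyError when a
    component is missing, matching the crash-on-missing-data idea."""
    node = tree
    for part in path.split("."):
        node = node[part]
    return node


# e.g. with data = {"task_data": {"branch": "master", "commit": "6e4fc7"}}
# resolve("task_data.commit", data) returns "6e4fc7"
```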

What do you think?

Joza


Lift type-restriction on libtaskotron's cli + resultsdb directive

2017-02-06 Thread Josef Skladanka
Hey Gang,

this has been bugging me for quite a while now, and although I know why we
put the restrictions there back then, I'm not sure the benefits still
outweigh the problems.

Especially now, when we'll probably be getting some traction, I'd like to
propose removing the type-check completely. On top of that, we could have
some "known" types (koji_build, bodhi_update, compose, ...), and implement
a "spellcheck" - like a Hamming or Levenshtein distance - to catch typos,
and warn the user.
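A plain dynamic-programming Levenshtein distance is enough for such a spellcheck; a sketch (the known-type list and distance threshold are just for illustration):

```python
KNOWN_TYPES = ["koji_build", "bodhi_update", "compose"]


def levenshtein(a, b):
    """Classic two-row dynamic-programming edit distance between strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]


def suggest_type(itemtype, known=KNOWN_TYPES, threshold=2):
    """Return a known type within the edit-distance threshold (a likely
    intended spelling), or None for exact matches and unrelated strings."""
    dist, best = min((levenshtein(itemtype, k), k) for k in known)
    return best if 0 < dist <= threshold else None
```

A caller would log a warning like "unknown type 'koji_buidl', did you mean 'koji_build'?" instead of refusing to run.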

All the Taskotron jobs are given the type programmatically now anyway, so
the worry about typos is IMO lessened, and actual human users would be
warned in the logs.

(lib)Taskotron is pretty agnostic to what people are doing with it, and
this seems to be a leftover arbitrary limit that may have made sense once,
but should probably be implemented in another part of the stack (like
trigger) that users may (in the future) directly interact with.

Thoughts?

Joza


Re: making test suites work the same way

2017-02-04 Thread Josef Skladanka
On Fri, Feb 3, 2017 at 11:05 PM, Kamil Paral  wrote:

> I spent a bit of time fixing minor issues in our test suite and makefiles
> and would like to do the following further changes across all our taskotron
> projects:
>
> 1. run the test suite while inside virtualenv with simple `pytest` command
> 2. run the test suite outside of virtualenv with `make test` or `doit
> test` and recommend this approach in readme
> 3. use a separate virtualenv when running under `make test`, without
> --system-site-packages if possible, and ensure up-to-date deps are always
> installed, to eliminate any differences that can occur on different setups
>

The only problem I see here is that some of the packages you'd need to
install into the test virtualenv require C compilation when installed from
PyPI, and that (without --system-site-packages) would mean having to
install not only gcc but also plenty of -devel packages.
Not a _huge_ issue, but an issue nevertheless.



> 4. configure pytest to print out coverage by default, so that it shows up
> for both `pytest` and `make test`
>
> Any concerns or different suggestions?


Re: ResultsDB 2.0 - DB migration on DEV

2017-01-25 Thread Josef Skladanka
The estimate for finishing the PROD migration is about 24 hours from now.
STG was seamless, so I'm not expecting any trouble here either.

On Wed, Jan 25, 2017 at 10:47 AM, Josef Skladanka 
wrote:

> STG is done (took about 15 hours), starting the archive migration for
> PROD, and I'll start figuring way to merge the data. Probably tomorrow.
>
> On Tue, Jan 24, 2017 at 5:49 PM, Josef Skladanka 
> wrote:
>
>> So I started the data migration for the STG archives - should be done in
>> about 15 hours from now (running for circa six hours already) - estimated on
>> the number of results that were already converted.
>> If that goes well, I'll start the PROD archives migration tomorrow, and
>> start working on merging the archives with the "base".
>> If nothing goes sideways, we should have all the data in one place by the
>> end of this week.
>>
>> J.
>>
>
>


Static dashboards PoC

2017-01-25 Thread Josef Skladanka
Folks,

lbrabec and I made the static dashboards happen, a sample can be seen here:
https://jskladan.fedorapeople.org/dashboards/

Note that these are all generated from a yaml config that defines the
packages/testcases + real resultsdb data. Not that the dashboards make much
sense, but it shows off what we can easily do.

One non-obvious feature is that next to the dashboard name in the left part
of the screen, there is a "dropdown" icon. Clicking on that will show you
the previous results of that dashboard. We only show the current ones for
each one to minimize the visual clutter, and it's what you care about most
of the time anyway.

J.


Re: ResultsDB 2.0 - DB migration on DEV

2017-01-25 Thread Josef Skladanka
STG is done (took about 15 hours), starting the archive migration for PROD,
and I'll start figuring way to merge the data. Probably tomorrow.

On Tue, Jan 24, 2017 at 5:49 PM, Josef Skladanka 
wrote:

> So I started the data migration for the STG archives - should be done in
> about 15 hours from now (running for circa six hours already) - estimated on
> the number of results that were already converted.
> If that goes well, I'll start the PROD archives migration tomorrow, and
> start working on merging the archives with the "base".
> If nothing goes sideways, we should have all the data in one place by the
> end of this week.
>
> J.
>


Re: ResultsDB 2.0 - DB migration on DEV

2017-01-24 Thread Josef Skladanka
So I started the data migration for the STG archives - should be done in
about 15 hours from now (running for circa six hours already) - estimated on
the number of results that were already converted.
If that goes well, I'll start the PROD archives migration tomorrow, and
start working on merging the archives with the "base".
If nothing goes sideways, we should have all the data in one place by the
end of this week.

J.


Re: Proposal to CANCEL: 2017-01-23 Fedora QA Devel Meeting

2017-01-23 Thread Josef Skladanka
+1

On Mon, Jan 23, 2017 at 10:19 AM, Tim Flink  wrote:

> There are a bunch of Red Hat related events this week and a majority
> of the usual suspects are going to be busy with other things. I'm not
> aware of any urgent topics that need to be discussed/reviewed as a group
> this week, so I propose that we cancel the weekly Fedora QA devel
> meeting.
>
> If there are any topics that I'm forgetting about and/or you think
> should be brought up with the group, reply to this thread and we can
> un-cancel the meeting.
>
> Tim
>


Re: OpenQA templates with hardcoded nfs address

2017-01-18 Thread Josef Skladanka
On Wed, Jan 18, 2017 at 3:19 PM, Normand  wrote:

> I am looking at the nfs tests already defined in OpenQA templates file for
> fedora (1)
>
> I would like to try those tests locally in a PowerPC environment,
> so plan to modify templates to replace the hardcoded value
> by a variable to be set by fedora-openqa-schedule.
>

Can't comment on how the nfs is set up (I suppose that having a look at the
relevant testcases would be beneficial to you), but why don't you simply
change the IP address via the web interface for your local setup? That
would IMO be quite a bit easier (and from my POV also cleaner) than
changing the scheduler.

j.


Re: Proposal: Migrating More Git Projects to Pagure

2017-01-14 Thread Josef Skladanka
On Fri, Jan 13, 2017 at 5:49 PM, Adam Williamson  wrote:

> On Fri, 2017-01-13 at 14:16 +0100, Josef Skladanka wrote:
> > I am personally against issues/pull requests on Pagure - logging into
> > Phab is about as difficult as logging into Pagure, and I don't see the
> > benefit of "allowing people to do it, since it's possible" even balancing
> > out the problem of split environments.
> > But that's just me.
>
> The difficult thing with Phab isn't logging into it (any more), it's
> setting up the entire arcanist workflow you need to be able to submit
> diffs. People aren't going to do that for drive-bys
>

And the thing is - they do not need to setup arcanist and all that:
https://phab.qa.fedoraproject.org/differential/diff/create/
This is as simple as it gets - paste raw diff, or upload a file. This will
just create a new differential revision, no fuss.
While it's not github-ish, it is easy, and just works - what's the problem?

j.


Re: Proposal: Migrating More Git Projects to Pagure

2017-01-13 Thread Josef Skladanka
On Fri, Jan 13, 2017 at 2:11 PM, Kamil Paral  wrote:

> > I don't have any serious issues, as long as we only use pagure as git
> host. I
> > hate that "we wanted to copy github, but stopped just after we found out
> > it's too much functionality" thing (also don't like how github works on
> top
> > of that, so even a 1:1 clone would be awful from my perspective).
>
> Just a remark, Pagure development hasn't stopped, it constantly receives
> new functionality [1]. But I agree it's not likely to be that featureful as
> github any time soon, and I'd be wary if we wanted to move our whole
> workflow there (i.e. ditching Phabricator). For just hosting git repos,
> it's of course absolutely fine. We'll need to discuss whether we want to
> keep the issues/pull requests open there to receive some simple
> reports/patches (and ask people to move more complex ones to Phab), or
> whether we'll not use that in Pagure at all. (Let's not forget we also have
> libtaskotron in Bugzilla, fortunately it's not used much yet).
>
I am personally against issues/pull requests on Pagure - logging into Phab
is about as difficult as logging into Pagure, and I don't see the benefit
of "allowing people to do it, since it's possible" even balancing out the
problem of split environments.
But that's just me.


Re: Proposal: Migrating More Git Projects to Pagure

2017-01-13 Thread Josef Skladanka
I don't have any serious issues, as long as we only use pagure as git host.
I hate that "we wanted to copy github, but stopped just after we found out
it's too much functionality" thing (also don't like how github works on top
of that, so even a 1:1 clone would be awful from my perspective).

Just let us know once the transition is done, so I can change my git
remotes.

Thanks, Tim!

On Mon, Jan 9, 2017 at 5:21 PM, Tim Flink  wrote:

> This came up in the qadevel meeting today and I wanted to put a bit
> more detail out.
>
> Bitbucket was never intended to be the long-term home for our git
> projects - I think we're about the only folks in Fedora using it and
> it's not free software. As fedorahosted is closed down, we need to find
> a new home for blockerbugs but I figure that now is as good of a time
> as any to get all of our git projects in the same place.
>
> I'm proposing the following moves:
>
> * Move all Taskotron projects to pagure.io using the taskotron group:
>- pagure.io/taskotron/libtaskotron
>- pagure.io/taskotron/resultsdb
>- etc.
>
> * Move blockerbugs under the existing fedora-qa namespace in pagure:
>- pagure.io/fedora-qa/blockerbugs
>
> I'm not sure if there are any plans for the openqa stuff that currently
> lives on bitbucket but it'd be nice to see that moved as well.
>
> Any objections, comments, concerns?
>
> Tim
>
>


Re: Task Result Dashboards

2017-01-13 Thread Josef Skladanka
On Thu, Jan 12, 2017 at 7:42 AM, Tim Flink  wrote:

> The idea was to start with static site generation because it doesn't
> require an application server, is easy to host and likely easier to
> develop, at least initially.
>
I don't really have a strong preference either way, just wanted to say
that the "initial development" time is the same for a web app and for
statically generated pages - both do the same thing: take an input plus an
output template and produce output. You can't really get around that from
what I'm seeing here. A statically generated page equals cached data in the
app, and for starters we can go on using just the stupidest of caches
provided in Flask (it might well be cool and interesting to use some
document store later on, but that's premature optimization now).


> >After brief discussion with jskladan, I understand that resultsDB
> > would be able to handle requests from dynamic page.
>
> Sure but then someone would have to write and maintain it. The things
> that drove me towards static site generation are:
>

Write and maintain what? I'm being sarcastic here, but this sounds like the
code for statically generated pages will not have to be written and
maintained... And once again - the actual code that does the actual thing
will be the same, regardless of whether the output is a web page or an HTTP
response.

>
> > * I'm not sure what exactly is meant by 'item tag' in the examples
> > section.
> >
> > * Would the YAML configuration look something like this:
> >
> >url: link.to.resultsdbapi.org
> >overview:
> >- testplan:
> >  - name: LAMP
> >  - items:
> >- mariadb
> >- httpd
> >  - tasks:
> >- and:
> >  - rpmlint
> >  - depcheck
> >  - or:
> >- foo
> >- bar
>
> I was thinking more of the example yaml that's in the git repo at
> taskdash/mockups/yamlspec.yml [1] but I'm not really tied to it strongly
> - so long as it works and the format is easy enough to understand.
>
>
I guess I know where you were going with that example, but it is a bit
lacking. For one, all it really allows for is a "hard AND" relationship
between the testcases in the testplan (dashboard, call it whatever you
like), which might be enough, but given what was said here it will start
being insufficient pretty fast. The other thing is that we really want to
be able to do the "item selection" in some way. We sure could say "take all
results for all these four testcases, and produce a line per item", but
that is so broad that it IMO stops making sense anywhere beyond the
"global" (read: applicable to all the items in resultsdb) testplans.
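To make the and/or point concrete, here is a hedged sketch of evaluating
such a nested requirement tree against a set of outcomes - the node shape
and the outcome strings are illustrative, not an existing Taskotron format:

```python
def evaluate(node, outcomes):
    """Evaluate a nested and/or testplan requirement.

    node is either a testcase name (string), or a one-key dict like
    {'and': [...]} / {'or': [...]} whose value is a list of child nodes.
    outcomes maps testcase name -> outcome string.
    """
    if isinstance(node, str):
        return outcomes.get(node) == "PASSED"
    (op, children), = node.items()
    results = [evaluate(child, outcomes) for child in children]
    return all(results) if op == "and" else any(results)

# Mirrors the quoted yaml sketch: rpmlint AND depcheck AND (foo OR bar)
plan = {"and": ["rpmlint", "depcheck", {"or": ["foo", "bar"]}]}
outcomes = {"rpmlint": "PASSED", "depcheck": "PASSED",
            "foo": "FAILED", "bar": "PASSED"}
print(evaluate(plan, outcomes))  # True
```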


> >Is there going to be any additional grouping (for example, based
> > on arch) or some kind of more precise outcome aggregation (only warn
> > if part of testplan is failing, etc.)
>
> Maybe but I think those features can be added later. Are you of the
> mind that we need to take those things into account now?
>
>
I don't really think that they can. Take a simple "gating" dashboard, for
example. There is a pretty huge difference between "package passes if
rpmlint, depcheck and abicheck pass on it" and "package passes if rpmlint,
depcheck and abicheck pass for all the required arches". And I'm certain we
want to be able to do the latter. It is not really a "pass" when rpmlint
passed on ARM, depcheck on x86_64 and abicheck on i386, but all the other
combinations failed.

It might seem like unnecessarily overcomplicating things, but I don't think
that the dashboard-generating tool should make assumptions (like that
grouping by arch is what you want to do) - it should be spelled out in the
input format, so there is as much black box removed as possible.
Will it take more time to write the input? Sure. Is it worth it? Absolutely.
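Under that reading, gating per architecture could be sketched roughly like
this - the result fields and the "all required testcases on every arch
seen" rule are my assumptions for illustration:

```python
REQUIRED = ("rpmlint", "depcheck", "abicheck")

def gate(results, required=REQUIRED):
    """Pass only if every required testcase passed on every arch seen.

    results: iterable of dicts with 'testcase', 'arch', 'outcome' keys.
    """
    by_arch = {}
    for r in results:
        by_arch.setdefault(r["arch"], {})[r["testcase"]] = r["outcome"]
    return bool(by_arch) and all(
        all(cases.get(t) == "PASSED" for t in required)
        for cases in by_arch.values())

# The mixed case from above: each check passed on a different arch only
results = [
    {"testcase": "rpmlint", "arch": "armhfp", "outcome": "PASSED"},
    {"testcase": "depcheck", "arch": "x86_64", "outcome": "PASSED"},
    {"testcase": "abicheck", "arch": "i386", "outcome": "PASSED"},
]
print(gate(results))  # False - no single arch has all three passing
```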



> > * Are we going to generate the dashbord for the latest results only,
> > or/and some kind of summary over given period in history?
>
> For now, the latest results. In my mind, we'd be running the dashboard
> creation on a cron job or in response to fedmsgs. At that point, we'd
> date the generated dashboards and keep a record of those without
> needing a lot more complexity
>

The question here is "what are the latest results"? Do we just take
now-minus-a-month for the first run, and then "update" on top of that? I
would not necessarily have a problem with that, it's just that we most
definitely want to capture _some_ timespan, and I think this is more about
"what timespan it is".
If we decide to go with "take the old state, apply updates on top of that",
then we will (I think) pretty quickly arrive at a point where we mirror the
data from ResultsDB, just in a different format, stored in a document store
instead of a relational database. Not saying it's a bad or wrong thing to
do. I actually think it's a pretty good solution - better than querying
increasingly more data from ResultsDB anyway.
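The "apply updates on top of old state" approach boils down to keeping the
newest result per (item, testcase) key - a sketch, assuming result dicts
with a sortable submit_time (a plain dict stands in for the document store):

```python
def apply_update(latest, result):
    """Keep only the newest result per (item, testcase) pair.

    latest: dict acting as the document store; result: one new result dict.
    """
    key = (result["item"], result["testcase"])
    current = latest.get(key)
    if current is None or result["submit_time"] > current["submit_time"]:
        latest[key] = result

store = {}
apply_update(store, {"item": "bash-4.4-1.fc25", "testcase": "rpmlint",
                     "outcome": "FAILED", "submit_time": "2017-01-10T12:00:00"})
apply_update(store, {"item": "bash-4.4-1.fc25", "testcase": "rpmlint",
                     "outcome": "PASSED", "submit_time": "2017-01-12T09:30:00"})
print(store[("bash-4.4-1.fc25", "rpmlint")]["outcome"])  # PASSED
```

ISO-formatted timestamps compare lexicographically, so string comparison is
enough here; out-of-order fedmsg delivery is handled for free.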

Re: Enabling running Taskotron tasks on Koji scratch builds

2017-01-13 Thread Josef Skladanka
I don't have much to add, just that I agree with Kamil. I see some minor
problems even in what he wrote, but that's well beyond what I think we
should be solving now.

j.

On Tue, Jan 10, 2017 at 5:19 PM, Kamil Paral  wrote:

> > Couldn't we use the task ID as the primary identifier but use the srpm
> > for readability sake since there is no "build" NVR for scratch builds?
>
> Systems like new hotness will need to query according to the task ID,
> though (to get reliable results). So we're talking about hacking just
> resultsdb *frontend* here, e.g. by having "item: task_id" and
> "item_display: nvr" in the results yaml. I don't like it much. Searching in
> the frontend would have to search both in item and item_display.
>
> Or we could use our existing "note" field to put NEVR inside, and it would
> be easily visible in the frontend without any ugly hacks. People would have
> to know that if they want to search by NEVR for scratch builds, they need
> to put the NEVR in the note search field (we'd have to add it).
>
> I assume you proposed using task id as primary identifier just to scratch
> builds (but not standard builds). If you also mean it for standard builds,
> then scheduling tasks locally starts to be quite inconvenient (for each
> build you want to run your task on, you need to find its task id first;
> it's hard to re-run something from shell history). We would also be
> changing something that's a de-facto standard in our infra (using NEVR as a
> unique identifier across projects).
>
>
> > Either that or make the primary ID a tuple of (srpm name, koji task
> > start time) either stored that way or rendered as a string e.g
> > foo-1.2-3.fc99.src.rpm built at 201701092100 (MMDDHHMM) would become
> > foo-1.2-3.fc99-201701092100.
>
> The same problem with inconvenient local execution. The command line
> starts to be hard to read (in tutorials, etc).
>
> Also, the srpm name doesn't have to be anything reasonable, Koji will
> happily accept anything (it doesn't care, for scratch builds). So we can
> easily receive "1.srpm" or "my-new-patch.srpm" from the fedmsg (we actually
> tried this [1]). Deriving any useful info from this input is probably a
> mistake. We would have to download the srpm and look into the included spec
> file to reliably decide which package this is related to.
>
> [1] https://apps.fedoraproject.org/datagrepper/id?id=2017-
> 2fe001d2-32d4-434a-b3a9-29ab31eebbb0&is_raw=true&size=extra-large
>
> >
> > > During a discussion with Kamil, a few solutions were mentioned (none
> > > of them is pretty):
> > >
> > > 1. We can ask koji developers if there is a way to add a method that
> > > would return all koji scratch builds for a given NVR - we would then
> > > take the latest one and work with it.
> >
> > How would we get the NVR if there's no build identifier on scratch
> > builds? Are you talking about the SRPM name in the fedmsg?
>
> Yes. Not from the fedmsg, but from the Koji task (the fedmsg gets this
> value from the srpm filename included in the Koji task, I believe). But see
> the problem described above with srpm file naming (can be arbitrary). So
> even better would be if Koji could search and return scratch build info
> containing the metadata of the srpm or from the spec file included.
>
> But I'm honestly not sure it is a good idea even if they wanted to
> implement it. More on that below.
>
> >
> > > 2. We can use "koji download-task" which works for both. That would
> > > mean koji_build item type would eat koji task IDs instead of NVRs.
> > > This would lead to having koji task IDs in resultsdb instead of NVR's
> > > which kills readability. Unless libtaskotron finds the NVR from the
> > > koji task ID in the case of "real" build" and stores it in a field in
> > > resultsdb...Also, we need NVRs for fedmsgs. So add code to the fedmsg
> > > layer that would take care of somehow adding a NVR to fedmsg of
> > > completed scratch builds tasks...
> >
> > Can't we tell the difference between an NVR and a task ID just by the
> > form that the content takes? Wouldn't the following work:
> >
> >   1. attempt to parse input as NVR, if there are no issues, assume that
> >  it's a build ID
> >   2. if the NVR parse fails, check to see if the input is an int. if
> >  assume that it's a koji task ID
>
> The reverse might be easier, but yes, that's of course possible. There are
> other caveats, though.
>
> >
> > It'd mean that our koji downloading code would get more complicated but
> > it seems like nothing that more unit tests couldn't mitigate. Unless
> > I'm mis-thinking here, the changes needed to pull this off in trigger
> > would be even smaller.
>
> I'm afraid our code could get infested with if clauses pretty soon. This
> is not just about downloading rpms in the koji directive. We have more koji
> build-related functionality and we'll sure get more in the future. For
> example, we currently support "download_latest_stable", but that might be
> very tricky for scrat

Re: New ExecDB

2017-01-12 Thread Josef Skladanka
There's not been a huge amount of effort put into this - I've had other
priorities ever since, but I can get back to it if you feel it's time to do
it. The only code working in that direction is here:
https://bitbucket.org/fedoraqa/execdb/branch/feature/pony where I basically
only started on removing the tight coupling between execdb and buildbot,
and then went on trying to figure out what's in this thread.
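For context, the crash-info handling we agreed on back then (notify, cause,
fatal flag, key-value details) boils down to something like this rough
Python sketch - the field names come from the October mail, but the retry
policy is a made-up placeholder:

```python
NOTIFY = {"taskotron", "task", "unknown"}
CAUSES = {"minion", "task", "network", "libtaskotron", "unknown"}

def make_crash(notify, cause, fatal=False, **details):
    """Record who to notify and why it crashed; extra detail goes to a key-value store."""
    assert notify in NOTIFY and cause in CAUSES
    return {"notify": notify, "cause": cause, "fatal": fatal, "details": details}

def should_reschedule(crash, retries_so_far=0, max_retries=3):
    """Re-scheduling stays simple and decoupled from the reporting fields."""
    if crash["fatal"]:
        return False
    return retries_so_far < max_retries

crash = make_crash("taskotron", "network", exit_code=28, step="git clone")
print(should_reschedule(crash))  # True
```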

On Tue, Jan 10, 2017 at 6:57 AM, Tim Flink  wrote:

> On Fri, 21 Oct 2016 13:16:04 +0200
> Josef Skladanka  wrote:
>
> > So, after a long discussion, we arrived to this solution.
> >
> > We will clearly split up the "who to notify" part, and "should we
> > re-schedule" part of the proposal. The party to notify will be stored
> > in the `notify` field, with `taskotron, task, unknown` options.
> > Initially any crashes in `shell` or `python` directive, during
> > formula parsing, and when installing the packages specified in the
> > formula's environment will be sent to task maintainers, every other
> > crash to taskotron maintainer. That covers what I initially wanted
> > from the multiple crashed states.
> >
> > On top of that, we feel that having information on "what went
> > wrong" is important, and we'd like to have as much detail as
> > possible, but on the other hand we don't want the re-scheduling logic
> > to be too complicated. We agreed on using a `cause` field, with
> > `minion, task, network, libtaskotron, unknown` options, and storing
> > any other details in a key-value store. We will likely just
> > re-schedule any crashed task anyway, at the beginning, but this
> > allows us to hoard some data, and make more informed decision later
> > on. On top of that, the `fatal` flag can be set, to say that it is
> > not necessary to reschedule, as the crash is unlikely to be fixed by
> > that.
> >
> > This allows us to keep the re-scheduling logic rather simple, and most
> > imporantly decoupled from the parts that just report what went wrong.
>
> How far did you end up getting on this?
>
> Tim
>
> ___
> qa-devel mailing list -- qa-devel@lists.fedoraproject.org
> To unsubscribe send an email to qa-devel-le...@lists.fedoraproject.org
>
>
___
qa-devel mailing list -- qa-devel@lists.fedoraproject.org
To unsubscribe send an email to qa-devel-le...@lists.fedoraproject.org


Re: Proposal to CANCEL: 2017-01-02 Fedora QA Devel Meeting

2017-01-01 Thread Josef Skladanka
Agreed

On Sun, Jan 1, 2017 at 7:28 PM, Tim Flink  wrote:

> Monday is a holiday for me and I suspect that it is also a holiday for
> many other folks. I'm not aware of anything urgent which needs
> discussion so I'm proposing that we cancel our normally scheduled QA
> devel meeting.
>
> If there are any topics that I'm forgetting about and/or you think
> should be brought up with the group, reply to this thread and we can
> un-cancel the meeting.
>
> Tim
>


Re: Proposal to CANCEL: 2016-12-19 Fedora QA Devel Meeting

2016-12-20 Thread Josef Skladanka
+1, especially since all the relevant people here are on PTO.

On Mon, Dec 19, 2016 at 5:34 AM, Tim Flink  wrote:

> Most of the regular folks will be absent this week and I'm not aware of
> anything urgent to cover so I propose that we cancel the weekly Fedora
> QA devel meeting.
>
> If there are any topics that I'm forgetting about and/or you think
> should be brought up with the group, reply to this thread and we can
> un-cancel the meeting.
>
> Tim
>


Re: 2016-12-15 @ 17:00 UTC - Outage for qadevel.cloud replacement

2016-12-14 Thread Josef Skladanka
Awesome!

On Wed, Dec 14, 2016 at 6:54 PM, Tim Flink  wrote:

> I realize this is a little last minute but there's no telling how much
> longer the current auth system will continue to work.
>
> I'm planning to take qadevel down (phabricator, some docs etc.)
> tomorrow so that I can finally replace it with an instance that has
> working auth among other improvements.
>
> This is going to be a rather large change and I expect that it will
> take at least 4 hours. If this is going to be a huge problem, please
> let me know soon.
>
> The big changes will be:
>   - new hostname
>  *.qadevel.cloud.fedoraproject.org will become
>  *.qa.fedoraproject.org
>
>   - better cert handling
>  no more errors when http:// is used
>
>   - new auth system
>  using fedora systems, no longer relying on persona
>
>   - newer version of phabricator
>
>   - lots of other boring changes under the hood :)
>
>


Re: ResultsDB 2.0 - DB migration on DEV

2016-12-14 Thread Josef Skladanka
So, as we discussed during the meeting, I have offloaded the data (for stg)
older than half a year to another database. This is how I did it (it could
probably have been done more efficiently, but hey, it worked, and I'm no
postgres expert...):

$ pg_dump -Fc resultsdb_stg > resultsdb_stg.dump # dump resultsdb_stg to a file
$ createdb -T template0 resultsdb_stg_archive # create a new empty database
called resultsdb_stg_archive
$ pg_restore -d resultsdb_stg_archive resultsdb_stg.dump # load data from
the dump into the resultsdb_stg_archive db
$ psql resultsdb_stg_archive
=# -- Get the newest result we want to keep in archives
=# select id, job_id from result where submit_time<'2016-06-01' order by
submit_time desc limit 1;
   id    | job_id
---------+--------
 7857664 | 308901

=# -- Since jobs can contain multiple results, let's select the first
result with the 'next' job_id (could be done as 'select id, job_id from
result where job_id = 308902 order by id limit 1;' too, but this would
automagically catch a hole in the job sequence)
=# select id, job_id from result where job_id > 308901 order by id limit 1;
   id    | job_id
---------+--------
 7857665 | 308902

=# -- delete all the result_data, results, and jobs, starting from what we
got in the previous query
=# delete from result_data where result_id >= 7857665;
=# delete from result where id >= 7857665;
=# delete from job where id >= 308902;

$ psql resultsdb_stg
=# -- since the db's were 'cloned' at the beginning, delete the inverse
set of the data we deleted in the archive
=# delete from result_data where result_id < 7857665;
=# delete from result where id < 7857665;
=# delete from job where id < 308902;
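For illustration, the boundary logic above - keep whole jobs together when
splitting, by stepping from the newest pre-cutoff result to the first
result of the next job - can be sketched in Python, with in-memory rows
standing in for the result table:

```python
def split_point(results, cutoff):
    """Return the first result id that stays in the live db.

    results: list of dicts with 'id', 'job_id', 'submit_time', ordered by id.
    Everything with a smaller id goes to the archive database.
    """
    old = [r for r in results if r["submit_time"] < cutoff]
    if not old:
        return None  # nothing to archive
    last_archived_job = max(old, key=lambda r: r["submit_time"])["job_id"]
    # Catch holes in the job sequence: take the first result of any later job
    for r in results:
        if r["job_id"] > last_archived_job:
            return r["id"]
    return None  # every job belongs to the archive

# Toy data mirroring the ids from the psql session above
results = [
    {"id": 7857663, "job_id": 308901, "submit_time": "2016-05-30"},
    {"id": 7857664, "job_id": 308901, "submit_time": "2016-05-31"},
    {"id": 7857665, "job_id": 308902, "submit_time": "2016-06-02"},
]
print(split_point(results, "2016-06-01"))  # 7857665
```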



On Wed, Dec 7, 2016 at 2:19 PM, Josef Skladanka  wrote:

>
>
> On Mon, Dec 5, 2016 at 4:25 PM, Tim Flink  wrote:
>
>> Is there a way we could export the results as a json file or something
>> similar? If there is (or if it could be added without too much
>> trouble), we would have multiple options:
>>
>
> Sure, adding some kind of export should be doable
>
>
>>
>> 1. Dump the contents of the current db, do a partial offline migration
>>and finish it during the upgrade outage by export/importing the
>>newest data, deleting the production db and importing the offline
>>upgraded db. If that still takes too long, create a second postgres
>>db containing the offline upgrade, switchover during the outage and
>>import the new results since the db was copied.
>>
>>
> I slept just two hours, so this is a bit entangled for me. So - my initial
> idea was that we
>  - dump the database
>  - delete most of the results
>  - do migration on the small data set
>
> In parallel (or later on), we would
>  - create a second database (let's call it 'archive')
>  - import the un-migrated dump
>  - remove data that is in the production db
> - run the lengthy migration
>
> This way, we have minimal downtime, and the data are available in the
> 'archive' db,
>
> With the archive db, we could either
> 1) dump the data and then import it to the prod db (again no down-time)
> 2) just spawn another resultsdb (archives.resultsdb?) instance, that would
> operate on top of the archives
>
> I'd rather do the second, since it also has the benefit of being able to
> offload old data to the 'archive' database (which would/could be 'slow by
> definition'), while keeping the 'active' dataset small enough that it
> could all be in memory for fast queries.
>
> What do you think? I guess we wanted to do something pretty similar, I
> just got lost a bit in what you wrote :)
>
>
>
>> 2. If the import/export process is fast enough, might be able to do
>>instead of the inplace migration
>>
>
> My gut feeling is that it would be pretty slow, but I have no relevant
> experience.
>
> Joza
>
>


Re: ResultsDB 2.0 - DB migration on DEV

2016-12-07 Thread Josef Skladanka
On Mon, Dec 5, 2016 at 4:25 PM, Tim Flink  wrote:

> Is there a way we could export the results as a json file or something
> similar? If there is (or if it could be added without too much
> trouble), we would have multiple options:
>

Sure, adding some kind of export should be doable


>
> 1. Dump the contents of the current db, do a partial offline migration
>and finish it during the upgrade outage by export/importing the
>newest data, deleting the production db and importing the offline
>upgraded db. If that still takes too long, create a second postgres
>db containing the offline upgrade, switchover during the outage and
>import the new results since the db was copied.
>
>
I slept just two hours, so this is a bit entangled for me. So - my initial
idea was that we
 - dump the database
 - delete most of the results
 - do migration on the small data set

In parallel (or later on), we would
 - create a second database (let's call it 'archive')
 - import the un-migrated dump
 - remove data that is in the production db
 - run the lengthy migration

This way, we have minimal downtime, and the data are available in the
'archive' db.

With the archive db, we could either
1) dump the data and then import it to the prod db (again no down-time)
2) just spawn another resultsdb (archives.resultsdb?) instance, that would
operate on top of the archives

I'd rather do the second, since it also has the benefit of being able to
offload old data
to the 'archive' database (which would/could be 'slow by definition'),
while keeping the 'active' dataset
small enough that it could all be in memory for fast queries.

What do you think? I guess we wanted to do something pretty similar, I just
got lost a bit in what you wrote :)



> 2. If the import/export process is fast enough, might be able to do
>instead of the inplace migration
>

My gut feeling is that it would be pretty slow, but I have no relevant
experience.

Joza


Re: Release validation NG: planning thoughts

2016-12-05 Thread Josef Skladanka
On Thu, Dec 1, 2016 at 6:04 PM, Adam Williamson 
wrote:

> On Thu, 2016-12-01 at 14:25 +0100, Josef Skladanka wrote:
> > On Wed, Nov 30, 2016 at 6:29 PM, Adam Williamson <
> adamw...@fedoraproject.org
> > > wrote:
> > > On Wed, 2016-11-30 at 18:20 +0100, Josef Skladanka wrote:
> > > > I would try not to go the third way, because that is really prone to
> > >
> > > errors
> > > > IMO, and I'm not sure that "per context" is always right. So for me,
> the
> > > > "TCMS" part of the data, should be:
> > > > 1) testcases (with required fields/types of the fields in the "result
> > > > response"
> > > > 2) testplans - which testcases, possibly organized into groups. Maybe
> > >
> > > even
> > > > dependencies + saying "I need testcase X to pass, Y can be pass or
> warn,
> > >
> > > Z
> > > > can be whatever when A passes, for the testplan to pass"
> > > >
> > > > But this is fairly complex thing, to be honest, and it would be the
> first
> > > > and only useable TCMS in the world (from my point of view).
> > >
> > > I have rather different opinions, actually...but I'm not working on
> > > this right now and I'd rather have something concrete to discuss than
> > > just opinions :)
> > >
> > > We should obviously set goals properly, before diving into
> implementation
> >
> > details :) I'm interested in what you have in mind, since I've been
> > thinking about this particular kind of thing for the last few years, and
> it
> > really depends on what you expect of the system.
>
> Well, the biggest point where I differ is that I think your 'third way'
> is kind of unavoidable. For all kinds of reasons.
>
> We re-use test cases between package update testing, Test Days, and
> release validation testing, for instance; some tests are more or less
> unique to some specific process, but certainly not all of them. The
> desired test environments may be significantly different in these
> different cases.
>

We also have secondary arch teams using release validation processes
> similar to the primary arch process: they use many of the same test
> cases, but the desired test environments are of course not the same.
>
>
I think we actually agree, but I'm not sure, since I don't really know what
you mean by "test environment" and how it should
1) affect the data stored with the result
2) affect the testcase itself

I have a guess, and I base the rest of my response on it, but I'd rather
know, than assume :)



> Of course, in a non-wiki based system you could plausibly argue that a
> test case could be stored along with *all* of its possible
> environments, and then the configuration for a specific test event
> could include the information as to which environments are relevant
> and/or required for that test event. But at that point I think you're
> rather splitting hairs...
>
> In my original vision of 'relval NG' the test environment wouldn't
> actually exist at all, BTW. I was hoping we could simply list test
> cases, and the user could choose the image they were testing, and the
> image would serve as the 'test environment'. But on second thought
> that's unsustainable as there are things like BIOS vs. UEFI where we
> may want to run the same test on the same image and consider it a
> different result. The only way we could stick to my original vision
> there would be to present 'same test, different environment' as another
> row in the UI, kinda like we do for 'two-dimensional test tables' in
> Wikitcms; it's not actually horrible UI, but I don't think we'd want to
> pretend in the backend that these were two completely different. I
> mean, we could. Ultimately a 'test case' is going to be a database row
> with a URL and a numeric ID. We don't *have* to say the URL key is
> unique. ;)
>

I got a little lost here, but I think I understand what you are saying.
This is IMO one of the biggest pain-points we have currently - the stuff
where we kind of consider "Testcase FOO" for BIOS and UEFI to be
the same, but different at the same time, and I think this is where the
TCMS should come in play, actually.

Because I believe that there is a fundamental difference between
1) the 'text' of the testcase (which says 'how to do it' basically)
2) the (what I think you call) environment - aka UEFI vs BIOS, 64bit vs
ARM, ...
3) the testplan

And this might be us saying the same things, but we often can end up in

Re: Release validation NG: planning thoughts

2016-11-30 Thread Josef Skladanka
On Wed, Nov 30, 2016 at 11:14 AM, Adam Williamson <
adamw...@fedoraproject.org> wrote:

> On Wed, 2016-11-30 at 02:10 -0800, Adam Williamson wrote:
> > On Wed, 2016-11-30 at 09:38 +0100, Josef Skladanka wrote:
> > > So if this is what you wanted to do (data validation), it might be a
> good
> > > idea to have that submitter middleware.
> >
> > Yeah, that's really kind of the key 'job' of that layer. Remember,
> > we're dealing with *manual* testing here. We can't really just have a
> > webapp that forwards whatever the hell people manage to stuff through
> > its input fields into ResultsDB.
>
> I guess another way you could look at it is, this would be the layer
> where we actually define what kinds of manual test results we want to
> store in ResultsDB, and what the format for each type should be. I
> kinda like the idea that we could use the same middleware to do that
> job for various different frontends for submitting and viewing results,
> e.g. the webUI part of this project, a CLI app like relval, and a
> different webUI like testdays...
>
Yes, that IMO makes a lot of sense. Especially if we want to target
multiple "input tools". Then it might make sense to have what I was
discussing in the previous post (and what you have been, I think, talking
about) - a format (two of them, actually) that defines:
1) what testcases are relevant for X (where X is, say Rawhide nightly
testing, Testday for translations, foobar)
2) required structure (fields, types of the field) of the response

The question here is, whether the "required structure" is better off "per
testcase" (i.e. "this testcase always requires these fields") or "per
context" (i.e. results for this "thing" always require these fields) or
even those combined ("this testcase, in this context, requires X, Y and Z,
but in this other context, it only needs FOOBAR")

I would try not to go the third way, because that is really prone to errors
IMO, and I'm not sure that "per context" is always right. So for me, the
"TCMS" part of the data should be:
1) testcases (with required fields/types of the fields in the "result
response")
2) testplans - which testcases, possibly organized into groups. Maybe even
dependencies + saying "I need testcase X to pass, Y can be pass or warn, Z
can be whatever when A passes, for the testplan to pass"

But this is a fairly complex thing, to be honest, and it would be the first
and only usable TCMS in the world (from my point of view).
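The pass/warn/waiver semantics described above can be expressed as a tiny rule evaluator. A minimal sketch, assuming a made-up rule format; none of these names are an existing Taskotron or ResultsDB API:

```python
# Evaluate testplan rules like "X must pass, Y may be pass or warn,
# Z can be anything when A passes". The rule format is illustrative.

def evaluate_testplan(rules, results):
    """rules: list of {'testcase', 'allowed', optional 'waived_if_passed'};
    results: mapping of testcase name -> outcome."""
    for rule in rules:
        waiver = rule.get("waived_if_passed")
        if waiver and results.get(waiver) == "PASSED":
            continue  # e.g. Z is ignored because A passed
        if results.get(rule["testcase"]) not in rule["allowed"]:
            return False
    return True

rules = [
    {"testcase": "X", "allowed": {"PASSED"}},
    {"testcase": "Y", "allowed": {"PASSED", "INFO"}},
    {"testcase": "Z", "allowed": {"PASSED"}, "waived_if_passed": "A"},
]
results = {"X": "PASSED", "Y": "INFO", "Z": "FAILED", "A": "PASSED"}
print(evaluate_testplan(rules, results))  # True: Z is waived because A passed
```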

Let's do it!


Re: Release validation NG: planning thoughts

2016-11-30 Thread Josef Skladanka
On Wed, Nov 30, 2016 at 11:10 AM, Adam Williamson <
adamw...@fedoraproject.org> wrote:

> On Wed, 2016-11-30 at 09:38 +0100, Josef Skladanka wrote:
> > So if this is what you wanted to do (data validation), it might be a good
> > idea to have that submitter middleware.
>
> Yeah, that's really kind of the key 'job' of that layer. Remember,
> we're dealing with *manual* testing here. We can't really just have a
> webapp that forwards whatever the hell people manage to stuff through
> its input fields into ResultsDB.
>

I'm not sure I'm getting it right, but the people will pass the data
through a "tool" (say web app) which will provide fields to fill, and will
most probably end up doing the data "sanitization" on its own. So the
"frontend" could store data directly in ResultsDB, since the frontend would
make the user fill all the fields. I guess I know what you are getting at
("but this is exactly the double validation!") but it is IMHO actually
harder to have "generic stupid frontend" that gets the "form schema" from
the middleware, shows the form, and blindly forwards data to the middleware,
showing errors back, than
1) having a separate app for that, that will know the validation rules
2) it being an actual frontend on the middleware, thus reusing the "check"
code internally


> R...we need to tell the web UI 'these are the
> possible scenarios for which you should prompt users to input results
> at all'
>
Agreed


> (which for release validation is all the 'notice there's a new
> compose, combine it with the defined release validation test cases and
> expose all that info to the UI' work),

That is IMO a separate problem, but yeah.


> and we need to take the data the
> web UI generates from user input, make sure it actually matches up with
> the schema we decide on for storing the results before forwarding it to
> resultsdb, and tell the web UI there's a problem if it doesn't.
>
And this is what I have been discussing in the first part of the reply.


> That's how I see it, anyhow. Tell me if I seem way off. :)
> --
> Adam Williamson
> Fedora QA Community Monkey
> IRC: adamw | Twitter: AdamW_Fedora | XMPP: adamw AT happyassassin . net
> http://www.happyassassin.net


Re: Release validation NG: planning thoughts

2016-11-30 Thread Josef Skladanka
On Tue, Nov 29, 2016 at 5:34 PM, Adam Williamson  wrote:

> On Tue, 2016-11-29 at 19:41 +0530, Kanika Murarka wrote:
> > 2. Keep a record of no. of validation test done by a tester and highlight
> > it once he login. A badge is being prepared for no. of validation testing
> > done by a contributor[1].
>
> Well, this information would kind of inevitably be collected at least
> in resultsdb and probably wind up in the transmitter component's DB
> too, depending on exactly how we set things up.
>

I think that this probably should be in ResultsDB - it's the actual stored
result data.
The transmitter component should IMO store the "semantics" (testplans,
stuff like that), and use the "raw" resultsdb data as a source to present
meaningful view.
I'd say that as a rule of thumb, replicating data in multiple places is a
sign of design error.


Re: Release validation NG: planning thoughts

2016-11-30 Thread Josef Skladanka
On Mon, Nov 28, 2016 at 6:48 PM, Adam Williamson  wrote:

> On Mon, 2016-11-28 at 09:40 -0800, Adam Williamson wrote:
> > The validator/submitter component would be responsible for watching out
> > for new composes and keeping track of tests and 'test environments' (if
> > we keep that concept); it would have an API with endpoints you could
> > query for this kind of information in order to construct a result
> > submission, and for submitting results in some kind of defined form. On
> > receiving a result it would validate it according to some schemas that
> > admins of the system could configure (to ensure the report is for a
> > known compose, image, test and test environment, and do some checking
> > of stuff like the result status, user who submitted the result, comment
> > content, stuff like that). Then it'd forward the result to resultsdb.
>
> It occurs to me that it's possible resultsdb might be designed to do
> all this already, or it might make sense to amend resultsdb to do all
> or some of it; if that's the case, resultsdb folks, please do jump in
> and suggest it :)
>

That's what I thought when reading the proposal - the "Submitter" seems
like an unnecessary layer, to some extent - submitting stuff to resultsdb
is pretty easy.
What resultsdb is not doing now, though, is the data validation - let's say
you wanted to check that specific fields are set (on top of what resultsdb
requires, which basically is just testcase and outcome) - that can be done
in resultsdb (there is a diff with that functionality), but at the moment
only on global level. So it might not necessarily make sense to set e.g.
'compose' as a required field for the whole resultsdb, since
testday-related results might not even have that.
So if this is what you wanted to do (data validation), it might be a good
idea to have that submitter middleware. Or (and I'm not sure it's the
better solution) I could try and make that configuration more granular, so
you could set the requirements e.g. per namespace, thus effectively
allowing setting the constraints even per testcase. But that would need
even more thought - should the constraints be inherited from the upper
layers? How about when all but one testcase in a namespace needs to have
parameter X, but for the one, it does not make sense? (Probably a design
error, but needs to be thought-through in the design phase).

So, even though ResultsDB could do that, it is borderline "too smart" for
it (I really want to keep any semantics out of ResultsDB). I'm not
necessarily against it (especially if we end up wanting that on more
places), but until now, we more or less worked with "the client that submits
data makes sure all required fields are set", i.e. "it's not resultsdb's
place to say what is or is not required for a specific usecase". I'm not
against the change, but at least for the first implementation (of the
Release validation NG) I'd vote for the middleware solution. We can add the
data validation functionality to ResultsDB later on, when we have a more
concrete idea.
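The per-testcase (or per-namespace) requirements discussed here could look roughly like the sketch below; the schema format and all names are my illustration, not existing resultsdb or middleware code:

```python
# Sketch of "submitter middleware" validation: each testcase declares which
# extra fields a result must carry, on top of what ResultsDB itself requires
# (basically just testcase and outcome). Schemas and names are illustrative.

SCHEMAS = {
    "compose.install_default": {"required": ["compose", "image", "environment"]},
    "testday.i18n": {"required": []},  # testday results need no compose field
}

def validate_result(result):
    errors = []
    for field in ("testcase", "outcome"):  # ResultsDB's own minimum
        if field not in result:
            errors.append("missing field: %s" % field)
    schema = SCHEMAS.get(result.get("testcase"), {"required": []})
    for field in schema["required"]:
        if field not in result.get("data", {}):
            errors.append("missing data field: %s" % field)
    return errors

bad = {"testcase": "compose.install_default", "outcome": "PASSED",
       "data": {"compose": "Fedora-Rawhide-20161130.n.0"}}
print(validate_result(bad))  # ['missing data field: image', 'missing data field: environment']
```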

Makes sense?

Joza


ResultsDB 2.0 - DB migration on DEV

2016-11-25 Thread Josef Skladanka
So, I have performed the migration on DEV - there were some problems with
it going out of memory, so I had to tweak it a bit (please have a look at
D1059, that is what I ended up using by hot-fixing on DEV).

There still is a slight problem, though - the migration of DEV took about
12 hours total, which is a bit unreasonable. Most of the time was spent in
`alembic/versions/dbfab576c81_change_schema_to_v2_0_step_2.py` lines 84-93
in D1059. The code takes about 5 seconds to change 1k results. That would
mean at least 15 hours of downtime on PROD, and that, I think, is unreal...
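A quick sanity check of that estimate (the row count below is inferred from the figures in this mail, not measured):

```python
# At ~5 seconds per 1000 results, 15 hours of migration time implies
# roughly 10.8 million rows; the row count is an inference, not a measurement.
seconds_per_result = 5 / 1000
downtime_hours = 15
implied_rows = downtime_hours * 3600 / seconds_per_result
print(f"{implied_rows:,.0f} results")  # 10,800,000 results
```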

And since I don't know how to make it faster (tips are most welcomed), I
suggest that we archive most of the data in STG/PROD before we go forward
with the migration. I'd make a complete backup, and delete all but the
data from the last 3 months (or any other reasonable time span).

We can then populate an "archive" database, and migrate it on its own,
should we decide it is worth it (I don't think it is).

What do you think?

J.


Re: Proposal to CANCEL: 2016-10-31 Fedora QA Devel Meeting

2016-10-31 Thread Josef Skladanka
+1 to cancel

On Mon, Oct 31, 2016 at 5:58 AM, Tim Flink  wrote:

> I'm not aware of any topics that need to be discussed/reviewed as a
> group this week, so I propose that we cancel the weekly Fedora QA devel
> meeting.
>
> If there are any topics that I'm forgetting about and/or you think
> should be brought up with the group, reply to this thread and we can
> un-cancel the meeting.
>
> Tim
>


Re: New ExecDB

2016-10-21 Thread Josef Skladanka
So, after a long discussion, we arrived at this solution.

We will clearly split up the "who to notify" part, and "should we
re-schedule" part of the proposal. The party to notify will be stored in
the `notify` field, with `taskotron, task, unknown` options. Initially any
crashes in `shell` or `python` directive, during formula parsing, and when
installing the packages specified in the formula's environment will be sent
to task maintainers, and every other crash to the taskotron maintainers. That covers
what I initially wanted from the multiple crashed states.

On top of that, we feel that having information on "what went wrong" is
important, and we'd like to have as much detail as possible, but on the
other hand we don't want the re-scheduling logic to be too complicated. We
agreed on using a `cause` field, with `minion, task, network, libtaskotron,
unknown` options, and storing any other details in a key-value store. We
will likely just re-schedule any crashed task anyway, at the beginning, but
this allows us to hoard some data, and make a more informed decision later
on. On top of that, the `fatal` flag can be set, to say that it is not
necessary to reschedule, as the crash is unlikely to be fixed by that.

This allows us to keep the re-scheduling logic rather simple, and most
importantly decoupled from the parts that just report what went wrong.
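Put together, a crashed-job record under this scheme might look like the sketch below; the enum values come from this thread, while the dict shape and the helper function are only my illustration, not ExecDB code:

```python
# Illustrative shape of a crashed-job record: `notify` picks who gets told,
# `cause` feeds the (deliberately simple) rescheduling logic, and any extra
# detail lives in a free-form key-value store.

crash = {
    "state": "CRASHED",
    "notify": "task",        # one of: taskotron, task, unknown
    "cause": "network",      # one of: minion, task, network, libtaskotron, unknown
    "fatal": False,          # True means rescheduling is unlikely to help
    "details": {"tool": "dnf", "error": "no more mirrors to try"},
}

def should_reschedule(job):
    # Initially we just reschedule every non-fatal crash; the collected
    # `cause` data lets us make a more informed decision later on.
    return job["state"] == "CRASHED" and not job["fatal"]

print(should_reschedule(crash))  # True
```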


Re: New ExecDB

2016-10-12 Thread Josef Skladanka
On Tue, Oct 11, 2016 at 1:14 PM, Kamil Paral  wrote:

> Proposal looks good to me, I don't have any strong objections.
>
> 1. If you don't like blame: UNIVERSE, why not use blame: TESTBENCH?
> 2. I think that having enum values in details in crash structure would be
> better, but I don't have strong opinion either way.
>
>
> For consistency checking, yes. But it's somewhat inflexible. If the need
> arises, I imagine the detail string can be in json format (or
> semicolon-separated keyvals or something) and we can store several useful
> properties in there, not just one.
>


I'd rather do the key-value thing as we do in ResultsDB than storing plain
JSON. Yes, the new Postgres can do it (and can also search it to some
extent), but it is not almighty, and has its own problems.



> E.g. not only that Koji call failed, but what was its HTTP error code. Or
> not that dnf install failed, but also whether it was the infamous "no more
> mirror to try" error or a dependency error. I don't want to misuse that to
> store loads of data, but this could be useful to track specific issues we
> have hard times to track currently (e.g. our still existing depcheck issue,
> that happens only rarely and it's difficult for us to get a list of tasks
> affected by it). With this, we could add a flag "this is related to problem
> XYZ that we're trying to solve".
>
>
I probably understand what you want, but I'd rather have a specified set
of values, which will/can be acted upon. Maybe changing the structure to
`{state, blame, cause, details}`, where the `cause` is still an enum of
known values but details is freeform, but strictly used for humans? So we
can "CRASHED->THIRDPARTY->UNKNOWN->"text of the exception" for example, or
"CRASHED->TASKOTRON->NETWORK->"dnf - no more mirrors to try".

I'd rather act on a known set of values, than have code like:

if ('dnf' in detail and 'no more mirrors' in detail) or ('DNF' in
detail and 'could not connect' in detail)

in the end, it is almost the same, because there will be problems with
classifying the errors, and the more layers we add, the harder it gets -
that is the reason I initially only wanted to do the {state, blame} thing.
But I feel that this is not enough (just state and blame) information for
us to act upon - e.g. to decide when to automatically reschedule, and when
not, but I'm afraid that with the exploded complexity of the 'crashed
states' the code for handling the "should we reschedule" decisions will be
awful. Notifying the right party is fine (that is what blame gives us),
but this is IMO what we should focus on a bit.

Tim, do you have any comments?


New ExecDB

2016-10-10 Thread Josef Skladanka
With ResultsDB and Trigger rewrite done, I'd like to get started on ExecDB.

The current ExecDB is more of a tech preview that was to show that it's
possible to consume the push notifications from Buildbot. The thing is,
that the code doing it is quite a mess (mostly because the notifications
are quite a mess), and it's directly tied not only to Buildbot, but quite
probably to the one version of Buildbot we currently use.
I'd like to change the process to a style, where ExecDB provides an API,
and Buildbot (or possibly any other execution tool we use in the future)
will just use that to switch the execution states.

ExecDB should be the hub, in which we can go to search for execution state
and statistics of our jobs/tasks. The execution is tied together via UUID,
provided by ExecDB at Trigger time. The UUID is passed around through all
the stack, from Trigger to ResultsDB.

The process, as I envision it, is:
1) Trigger consumes FedMsg
2) Trigger creates a new Job in ExecDB, storing data like FedMsg message
id, and other relevant information (to make rescheduling possible)
3) ExecDB provides the UUID, marks the Job as SCHEDULED, and Trigger then
passes the UUID, along with other data, to Buildbot.
4) Buildbot runs runtask, (sets ExecDB job to RUNNING)
5) Libtaskotron is provided the UUID, so it can then be used to report
results to ResultsDB.
6) Libtaskotron reports to ResultsDB, using the UUID as the Group UUID.
7) Libtaskotron ends, creating a status file in a known location
8) The status file contains machine-parsable information about the
runtask execution - either "OK" or a description of "Fault" (network
failed, package to be installed did not exist, koji did not respond... you
name it)
9) Buildbot parses the status file, and reports back to ExecDB, marking the
Job either as FINISHED or CRASHED (+details)
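Steps 7-9 could be as simple as the following sketch; the one-line status file format ("OK" or "FAULT: <description>") is an assumption of mine, not an agreed-on spec:

```python
# Sketch of the machine-parsable status file from steps 7-9 and how a
# Buildbot step might turn it into an ExecDB state. The one-line format
# ("OK" or "FAULT: <description>") is illustrative, not a settled spec.

def parse_status(text):
    line = text.strip()
    if line == "OK":
        return ("FINISHED", None)
    if line.startswith("FAULT:"):
        return ("CRASHED", line[len("FAULT:"):].strip())
    return ("CRASHED", "unparsable status file")

print(parse_status("OK"))                     # ('FINISHED', None)
print(parse_status("FAULT: koji timed out"))  # ('CRASHED', 'koji timed out')
```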

This will need changes in Buildbot steps - a step that switches the job to
RUNNING at the beginning, and a step that handles the FINISHED/CRASHED
switch. The way I see it, this can be done via a simple CURL or HTTPie call
from the command line. No big issue here.

We should make sure that ExecDB stores data that:
1) show the execution state
2) allow job re-scheduling
3) describe the reason the Job CRASHED

1 is obviously the state. 2 I think can be satisfied by storing the Fedmsg
Message ID and/or the Trigger-parsed data, which are passed to Buildbot.
Here I'd like to focus on 3:

My initial idea was to have SCHEDULED, RUNNING, FINISHED states, and four
crashed states, to describe where the fault was:
 - CRASHED_TASKOTRON for when the error is on "our" side (minion could not
be started, git repo with task not cloned...)
 - CRASHED_TASK to use when there's an unhandled exception in the Task code
 - CRASHED_RESOURCES when network is down, etc
 - CRASHED_OTHER whenever we are not sure

The point of the crashed "classes" is to be able to act on different kinds
of crash - notify the right party, or even automatically reschedule the
job, in the case of network failure, for example.

After talking this through with Kamil, I'd rather do something slightly
different. There would only be one CRASHED state, but the job would contain
additional information to
 - find the right person to notify
 - get more information about the cause of the failure
To do this, we came up with a structure like this:
  {state: CRASHED, blame: [TASKOTRON, TASK, UNIVERSE], details:
"free-text-ish description"}

The "blame" classes are self-describing, although I'd love to have a better
name for "UNIVERSE". We might want to add more, should it make sense, but
my main focus is to find the right party to notify.
The "details" field will contain the actual cause of the failure (in the
case we know it), and although I have it marked as free-text, I'd like to
have a set of values defined in docs, to keep things consistent.

Doing this, we could record that "Koji failed, timed out" (and blame
UNIVERSE, and possibly reschedule) or "DNF failed, package not found"
(blame TASK if it was in the formula, and notify the task maintainer), or
"Minion creation failed" (and blame TASKOTRON, notify us, I guess).

Implementing the crash classification will obviously take some time, but it
can be gradual, and we can start handling the "well known" failures soon,
for the bigger gain (kparal had some examples, IIRC).

So - what do you think about it? Is it a good idea? Do you feel like there
should be more (I can't really imagine there being less) blame targets
(like NETWORK, for example), and if so, why, and which? How about the
details - should we go with a pre-defined set of values (because enums are
better than free-text, but adding more would mean DB changes), or is
free-text + docs fine? Or do you see some other, better solution?

joza


Re: Resultsdb v2.0 - API docs

2016-10-03 Thread Josef Skladanka
So, what's the decision? I know I can "guesstimate", but I'd like to see a
group consensus before I actually start coding.

On Thu, Sep 29, 2016 at 7:31 AM, Josef Skladanka 
wrote:

>
>
> On Tue, Sep 27, 2016 at 6:06 PM, Kamil Paral  wrote:
>
>> ...
>> What are the use cases? I can think of one - yesterday Adam mentioned he
>> would like to save manual test results into resultsdb (using a frontend).
>> That would have no ExecDB entry (no UUID). Is that a problem in the current
>> design? This also means we would probably not create a group for this
>> result - is that also OK?
>>
>
> Having no ExecDB entry is not a problem, although it provides global UUID
> for our execution, the UUID from ExecDB is not necessary at all for
> ResultsDB (or the manual-testing-frontend). The point of ExecDB's UUID is
> to be able to tie together the whole automated run from the point of
> Trigger to the ResultsDB. But ResultsDB can (and does, if used that way)
> create Group UUIDs on its own. So we could still create a groups for the
> manual tests - e.g. per build - if we wanted to, the groups are made to be
> more usable (and easier to use) than the old jobs. But we definitely could
> do without them, just selecting the right results would (IMHO) be a bit
> more complicated without the groups.
>
> The thing here (which I guess is not that obvious) is, that there are
> different kinds of UUIDs, and that you can generate "non-random" ones,
> based on namespace and name - this is what we're going to use in OpenQA, for
> example, where we struggled with the "old" design of ResultsDB (you needed
> to create the Job during trigger time, and then propagate the id, so it's
> available in the end, at report time). We are going to use something like
> `uuid.uuid3("OpenQA in Fedora", "Build Fedora-Rawhide-20160928.n.0")`
> (pseudocode to some extent), to create the same group UUID for the same
> build. This approach can be easily replicated anywhere, to provide
> canonical UUIDs, if needed.
>
> Hope that I was at least a bit on topic :)
>
> j.
>
>


Re: Resultsdb v2.0 - API docs

2016-09-28 Thread Josef Skladanka
On Tue, Sep 27, 2016 at 6:06 PM, Kamil Paral  wrote:

> ...
> What are the use cases? I can think of one - yesterday Adam mentioned he
> would like to save manual test results into resultsdb (using a frontend).
> That would have no ExecDB entry (no UUID). Is that a problem in the current
> design? This also means we would probably not create a group for this
> result - is that also OK?
>

Having no ExecDB entry is not a problem, although it provides a global UUID
for our execution, the UUID from ExecDB is not necessary at all for
ResultsDB (or the manual-testing-frontend). The point of ExecDB's UUID is
to be able to tie together the whole automated run from the point of
Trigger to the ResultsDB. But ResultsDB can (and does, if used that way)
create Group UUIDs on its own. So we could still create groups for the
manual tests - e.g. per build - if we wanted to, the groups are made to be
more usable (and easier to use) than the old jobs. But we definitely could
do without them, just selecting the right results would (IMHO) be a bit
more complicated without the groups.

The thing here (which I guess is not that obvious) is, that there are
different kinds of UUIDs, and that you can generate "non-random" ones,
based on namespace and name - this is what we're going to use in OpenQA, for
example, where we struggled with the "old" design of ResultsDB (you needed
to create the Job during trigger time, and then propagate the id, so it's
available in the end, at report time). We are going to use something like
`uuid.uuid3("OpenQA in Fedora", "Build Fedora-Rawhide-20160928.n.0")`
(pseudocode to some extent), to create the same group UUID for the same
build. This approach can be easily replicated anywhere, to provide
canonical UUIDs, if needed.
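The pseudocode maps directly onto the standard library; the namespace string below is illustrative:

```python
import uuid

# Deterministic "group" UUIDs as described above: uuid3 hashes a namespace
# UUID plus a name, so every reporter can derive the same group ID for the
# same build without coordinating through a database. The namespace string
# "openqa.fedoraproject.org" is illustrative.
NAMESPACE = uuid.uuid3(uuid.NAMESPACE_DNS, "openqa.fedoraproject.org")

def group_uuid(build):
    return uuid.uuid3(NAMESPACE, build)

a = group_uuid("Fedora-Rawhide-20160928.n.0")
b = group_uuid("Fedora-Rawhide-20160928.n.0")
print(a == b)  # True: same build name always yields the same UUID
```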

Hope that I was at least a bit on topic :)

j.


Re: 2016-09-14 @ 14:00 UTC - QA Tools Video "Standup" Meeting

2016-09-22 Thread Josef Skladanka
I'd rather go with option no. 1, but I don't really care that much
either way. So if one option suits you guys better, I'll comply.

J.

On Thu, Sep 22, 2016 at 9:59 AM, Martin Krizek  wrote:

> - Original Message -
> > From: "Tim Flink" 
> > To: qa-devel@lists.fedoraproject.org
> > Sent: Wednesday, September 14, 2016 4:59:49 PM
> > Subject: Re: 2016-09-14 @ 14:00 UTC -  QA Tools Video "Standup" Meeting
> >
> > 
> >
> > One of the topics that came up was how often to do these video
> > meetings. Having additional weekly meetings via video seems like
> > overkill but if there's an appropriate in-depth topic, meeting via
> > video to talk instead of type would be useful.
> >
> > The two options we came up with are:
> >
> > 1. Switch the first qadevel meeting of every month to be via video,
> >making sure that an agenda is sent out early enough for folks to be
> >prepared.
> >
> > 2. Pencil in a video meeting once or twice a month on the Wednesday
> >after qadevel meetings. Ask for video topics during the qadevel
> >meeting and on email. If there are enough topics suggested which
> >would benefit from talking instead of typing, meet on the following
> >Wednesday to discuss via video. If there is no need to meet via
> >video, skip it.
> >
> > I realize that I'm changing things up a little bit from what we were
> > talking about at the end of the meeting but I have a small concern
> > about option 1 - one of the issues that we had today is that folks
> > weren't prepared because we didn't set an agenda.
> >
> > If we switch one qadevel meeting per month  to video, how do we want to
> > handle setting the agenda early enough so that participants have enough
> > time to prepare?
> >
> > Any thoughts or preferences?
> >
>
> My vote would be for the option 2., mostly because the video meeting can
> be easily skipped if we don't have any topics to discuss and that
> setting an agenda would be done on a set time (Monday meeting). It seems
> to me that this could work well.
>
>
> Thanks,
> Martin


Re: Resultsdb v2.0 - API docs

2016-09-15 Thread Josef Skladanka
On Thu, Sep 15, 2016 at 4:20 PM, Tim Flink  wrote:

> On Mon, 15 Aug 2016 22:48:38 +0200
> Josef Skladanka  wrote:
>
> > Hey gang,
> >
> > I spent most of today working on the new API docs for ResultsDB,
> > making use of the even better Apiary.io tool.
> >
> > Before I put even more hours into it, please let me know, whether you
> > think it's fine at all - I'm yet to find a better tool for describing
> > APIs, so I'm definitely biased, but since it's the Documentation, it
> > needs to also be useful.
> >
> > http://docs.resultsdb20.apiary.io/
> >
> > I am also trying to put more work towards documenting the attributes
> > and the "usual" queries, so please try and think about this aspect of
> > the docs too.
>
> After the conversation about resultsdb yesterday, I have a proposal for
> a change to ResultsDB and clarification about how we'd be using it in
> Taskotron to answer some of the questions I asked earlier.
>
> 1. Add a null-able column to result to indicate the job it came from.
>This could be a URI or just UUID so long as the final URI could be
>computed from whatever is stored in this new column.
>
>
If we go this way, I'd rather add the whole URL, instead of just UUID - the
UUID thing is quite taskotron specific IMHO, but something like "exec_url"
(? I wish I had a better name) could be useful in a more general sense.


> 2. Change "group" to "tag" and plan on it being used for the grouping
>of results by/for humans. This isn't something that we'd be making
>use of right away but it seems like a logical feature to add given
>where things are going.
>
>
After thinking about it today, I'd rather keep the groups as they are now
- this is mostly about semantics, and on the practical level, I'd expect
that a "tag" would be unique, and identified by name (the same as testcase).
The groups, on the other hand, are identified by UUID, which is not a nice
UID for a tag, from the semantic point of view, IMO.
I could, of course, do the changes, and make the Group (Tag)
name-identified, but it is not a minor change, and would take considerable
effort to do.
The groups, as they are now, can have a description set (it might be a good
idea to change it to 'name' though, to express what it's supposed to be in
a better way), and thus we can effectively do the same as we would with
tags.
I also feel that for uses other than Taskotron (OpenQA, Testdays) it's
easier, and more spot-on, to have the groups as they are now - grouping by
tag would be possible, but programmatically coming up with unique names
that are also meaningful is tough, and unnecessarily complicated.
Generating a UUID, and setting a reasonable name, is not, on the other hand.

What do you guys think?


> This would mean that we can find the job that every result came from
> without having to worry about grouping them at submission time. I can
> think of use cases where there either be no need for a job UUID/URI or
> one would not exist, hence the suggestion that the column could be
> empty.
>

If grouping at submission is the concern here, then it would be more than
easy to do - the idea here (maybe I did not communicate it properly) was to
use the ExecDB generated UUID as the identifier, the same way we do in the
whole stack.
Since the Groups can be created "on the fly" (meaning, that if you submit a
result, with a group-uuid that is not yet in the database, it is created
for you), we would not need to worry about it at all.
If we wanted to be a bit more descriptive, we could create the Group, and
set the Name/Description during trigger time (probably as a part of
creating the execdb job).

This would, of course, lead to having the 'exec_url' set in the Group's
'ref_url' and thus having the back-reference "by convention", as we have it
now, with Job. I don't think that either of the options (exec_url in
Result, or using group "by convention") is necessarily better than the
other, it's mostly about what semantics we want to have.

The underlying reason I brought this up is that some of our tests create
"unnecessary" Jobs/Groups (i.e. 1 job to 1 result) at the moment (rpmlint,
abicheck, dockerautotest), and I wondered whether we should handle it
differently. But I think that with what is coming, we'll be adding more of
the 1 job X results stuff (dist-git tasks, basically), so it is not that big
of a deal.

The last question to ask is whether the "execution grouping" is even
something useful - what do we (or would we) use the information that "these
X results come from the same execution" for? Is it even something we care about?
I use the Job overview to have a bett

Re: RFR: New Dist-Git Task Storage Proposal

2016-09-14 Thread Josef Skladanka
On Tue, Sep 13, 2016 at 4:20 PM, Tim Flink  wrote:

> On Mon, 12 Sep 2016 14:44:27 -0600
> Tim Flink  wrote:
>
> > I wrote up a quick draft of the new dist-git task storage proposal
> > that was discussed in Brno after Flock.
> >
> > https://phab.qadevel.cloud.fedoraproject.org/w/taskotron/
> new_distgit_task_storage_proposal/
> >
> > Please review the document and either let me know (or fix in the wiki
> > page) things which aren't clear or bits that I forgot.
>
> I added more information to the wiki page about the default, or bare
> executable case which we discussed during/after flock.
>
> Tim
>

LGTM - I'd just add a link to documentation for the results.yaml format, if
we have any (if we don't then we'd better write one :D)
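Regarding the results.yaml format mentioned above, a minimal document might look roughly like this (the field names are recalled from memory and should be treated as assumptions until the format is properly documented):

```yaml
results:
  - item: xchat-2.8.8-21.fc20
    type: koji_build
    outcome: PASSED
    checkname: rpmlint
    note: 5 errors, 10 warnings
```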


Re: Resultsdb v2.0 - API docs

2016-09-14 Thread Josef Skladanka
On Mon, Sep 12, 2016 at 11:39 PM, Tim Flink  wrote:

>
> I think we talked about this in person earlier but I didn't write any
> notes about it and I don't recall the details.
>
> How exactly are we going to be using Groups? The first thing that comes
> to mind is to group results by execution so that there would be a group
> of results which were all produced from the same run of the same task.
> That's kinda what we're using Job for in resultsdb 1.0 right now,
> anyways.
>
> I realize that the docs for resultsdb are supposed to be
> not-specific-to-taskotron but was there anything else we thought the
> Group might be useful for?
>
> Also, what do we want to do about a link to execdb? If we're planning
> to have a group for each execution's results, that could be the group's
> ref_url but that relies on convention which could change if Group is
> used for more than just grouping results by execution.
>

I see two (maybe three) options - either we'll be using the groups the same
way we used Jobs - to group results per execution as we do now, and use the
group's ref_url to point to execdb. If we ever need to use the groups for
more than that, then we could just have the result in more than one group,
and set meaningful descriptions.

The other way would be to not use Groups at all, and just store the
execdb's UUID in the key-value store. Those would then be rendered to an
URL in the frontend.

The last option would be a combination of both - we'd be using Groups as we
do now to group by execution, but instead of the "default"
resultsdb_frontend, we'd use something tailored for taskotron - we could
show links to execdb in the results "view", and either disregard the
existence of groups altogether, or just have a special description (like
"ExecDB related Group - ") that would get filtered out in the default
"group" view.

I don't really see one being directly better than the others, it's just
what we want to do. I did not put much thought to it, as I just expected us
to keep it basically the same.

Do you have any ideas?


> I assume that the new API will also help fix some of the slowness we've
> been seeing? IIRC, there were some schema changes which would probably
> help with query time.
>
> Tim


Yep, most of what was really slow should be solved now - or at least it
seemed so from my tests.
The only thing we still have trouble with is the really sparse results -
the issue which we thought could get solved by the new Postgres, but
wasn't.

On the other hand, it is a non-issue as long as the query is limited by a
datetime range. If you only care about results that are "newer than X", the
amount of data really gets cut down, and the queries are fast even for the
sparse results, since the DB does not need to crawl the whole dataset to be
sure there's only LIMIT-n results. This is how Bodhi queries ResultsDB now
- they use the 'submitted' timestamp as a constraint.

If we communicate this behaviour, I think we'll be fine. I would almost go
as far as setting a default time-constraint of (and I'm just thinking out
loud here, no reason for the number whatsoever) three months, and be done
with it - if you ever want older results, just set the time-constraint
yourself, and be aware that it probably will take time. I don't see a
reason we (as in FedoraQA and the related processes) would need to
regularly access results older than that anyway.
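To illustrate the time-constrained querying on the consumer side, here is a sketch in Python - the base URL and the `since`/`testcases` parameter names are assumptions here, so check the API docs for the authoritative spelling:

```python
from datetime import datetime, timedelta
from urllib.parse import urlencode

# Assumed deployment URL -- substitute your own instance.
RESULTSDB_API = "https://taskotron.fedoraproject.org/resultsdb_api/api/v2.0"

def results_url(testcase, months=3, now=None):
    """Build a results query limited to the last `months` months.

    Passing a `since` constraint keeps the query fast even for testcases
    with very sparse results, as described above.
    """
    now = now or datetime.utcnow()
    since = now - timedelta(days=30 * months)
    query = urlencode({"testcases": testcase, "since": since.isoformat()})
    return "%s/results?%s" % (RESULTSDB_API, query)
```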

J.


Re: Resultsdb v2.0 - API docs

2016-09-14 Thread Josef Skladanka
On Tue, Sep 13, 2016 at 8:19 PM, Randy Barlow 
wrote:

> Will the api/v1.0/ endpoint continue to function as-is for a while, to
> give integrators time to adjust to the new API? That would be ideal for
> Bodhi, so we can adjust our code to work with v2.0 after it is already in
> production. If not, we will need to coordinate bodhi and resultsdb releases
> at the same time.
>

Hey! There is a plan for the v1.0 endpoint to keep working, even though a
bit limited in features, but from what I remember about Bodhi, that will
not affect it at all.


"New" trigger ready

2016-09-05 Thread Josef Skladanka
Hey gang,

so this Differential: https://phab.qadevel.cloud.fedoraproject.org/D963 and
this branch:
https://bitbucket.org/fedoraqa/taskotron-trigger/branch/feature/rules_engine
(was force-pushed to couple of times, so make sure to re-clone it) now
contain the final implementation of the "new trigger", I did the changes we
discussed on-site, and had it running in docker locally for a few days to
test it. As far as I can tell, it behaves OK, and I think it's ready to be
tested on DEV, at least.

What is _not_ done yet (but I think that it could be done in parallel with
the actual DEV deployment) is the jobrunner.py script. Although it should
not take that much time to get done, I'd like to get the discussion about
DEV-readiness started now. It's been in the works for quite some time, and
I must admit that I'm eager to see my baby go to school ;)

J.


Re: Resultsdb v2.0 - API docs

2016-08-18 Thread Josef Skladanka
So, I have completed the first draft of the ResultsDB 2.0 API.
The documentation lives here: http://docs.resultsdb20.apiary.io/# and I'd
be glad if you could have a look at it.

The overall idea is still not changed - ResultsDB should be a "dumb"
results store, that knows next to nothing (if not nothing at all) about the
semantics/meaning of the data stored, and this should be applied in the
consumer. This is why, for example, no result override is planned - although
it might make sense to override a known fail to pass for some use case (like
gating), it might not be the right thing to do for some other tool in the
pipeline, thus the override needs to happen on the consumer side.
What's not covered in detail is the auth model - I only reflected it by
acknowledging the probable future presence of some kind of auth in the POST
queries (a reserved _auth parameter), but the actual implementation is not a
problem to solve today.

On top of that, I'd also like to know (and this is probably mostly a question
for Ralph) whether it makes sense to try and keep both the old and new API
up for some time. It should not be that complicated to do, I'd just rather
not spend too much time on it, as changing the consumers (bodhi, as far as
I know) is most probably much less time consuming than keeping the old API
running. At the moment, I will probably make it happen, but if we agree
it's not worth the time...

Feel free to post comments/feature requests/whatever - I'd love for this to
be stable (or at least a base for non-breaking changes) for at least next
few years (lol I know, right...), so let's do it right :)

joza

On Mon, Aug 15, 2016 at 10:48 PM, Josef Skladanka 
wrote:

> Hey gang,
>
> I spent most of today working on the new API docs for ResultsDB, making
> use of the even better Apiary.io tool.
>
> Before I put even more hours into it, please let me know, whether you
> think it's fine at all - I'm yet to find a better tool for describing APIs,
> so I'm definitely biased, but since it's the Documentation, it needs to
> also be useful.
>
> http://docs.resultsdb20.apiary.io/
>
> I am also trying to put more work towards documenting the attributes and
> the "usual" queries, so please try and think about this aspect of the docs
> too.
>
> Thanks, Joza
>


Resultsdb v2.0 - API docs

2016-08-15 Thread Josef Skladanka
Hey gang,

I spent most of today working on the new API docs for ResultsDB, making use
of the even better Apiary.io tool.

Before I put even more hours into it, please let me know, whether you think
it's fine at all - I'm yet to find a better tool for describing APIs, so
I'm definitely biased, but since it's the Documentation, it needs to also
be useful.

http://docs.resultsdb20.apiary.io/

I am also trying to put more work towards documenting the attributes and
the "usual" queries, so please try and think about this aspect of the docs
too.

Thanks, Joza


Re: Request for Testing: New Auth Method for Phabricator

2016-07-21 Thread Josef Skladanka
Linking the account worked for me just fine, although I stumbled upon
the Err 500 while trying to log-in via persona (worked on the second
try, though).
After logging out, and re-logging in via Ipsilon for the first and
third time, this is what I got:

Unhandled Exception ("HTTPFutureHTTPResponseStatus")
[HTTP/400]
  Bad Request

Tim Flink wrote:
> I've been working on moving our phabricator instance off of persona
> before that system is turned off in a few months.
>
> I have an extension deployed in staging and I'd like it to see a bit
> more testing before looking into deploying it in production.
>
> https://phab.qa.stg.fedoraproject.org/
>
> To link your existing account (on staging, this won't work on the
> production instance yet) to the new auth method:
>
> 1. Click on the "user" button next to the search bar when logged in
>
> 2. Click on "manage" on the left hand side of the screen
>
> 3. Click on "edit settings" on the right hand side of the screen
>
> 4. Click on "External Accounts"
>
> 5. Click on "Ipsilon" under "Add External Account
>
> 6. Log in with your FAS credentials.
>
> Please let me know if you try this and are successful or if you run
> into problems. I haven't been able to reproduce the 500 issue with
> persona on stg but I suspect it's intermittant and will try again later
> to see if I can fix it enough to be somewhat reliable.
>
> Tim
>


PoC of "configurable trigger"

2016-06-01 Thread Josef Skladanka
Source: https://bitbucket.org/fedoraqa/taskotron-trigger/branch/pony
Diff: https://phab.qadevel.cloud.fedoraproject.org/D872

This started as simple bike-shedding to make more sense in naming (so
everything is not named "Trigger"), but it went further :D

The main change here is what I call the "configurable trigger" - at the moment,
every time we want to add support for even the most basic new task (like the
package-specific task for docker), changes are needed in the trigger's source
code.

These changes add a concept of a "rules engine", that decides what tasks to
schedule based on data extracted from the received FedMessage, and a
set of rules.

The rules are YAML, in a format like this::
```
- do:
  - {tasks: [depcheck, upgradepath]}
  when: {message_type: KojiTagChanged}
- do:
  - {tasks: [dockerautotest]}
  when: {message_type: KojiBuildCompleted, name: docker}
- do:
  - {tasks: [abicheck]}
  when:
message_type: KojiBuildCompleted
name:
  $in: ${critpath_pkgs}
  $nin: ['docker'] # critpath excludes
```

The rules are split in two parts `when` and `do`, the `when` clause is
a mongo query that will get evaluated against the dataset provided by
the FedMsg consumer. For example, the KojiBuildCompletedJobTrigger now
publishes this (values are fake, to make it more descriptive)::

message_data = {
"_msg": {...snipped...},
"message_type": "KojiBuildCompleted",
"item": "docker-1.9.1-6.git6ec29ef.fc23",
"item_type": "koji_build",
"name": "docker",
"version": "1.9.1-6.git6ec29ef",
"release": "fc23",
"critpath_pkgs": [..., "docker", ...]
"distgit_branch": "f23",
}

So taking the rules, and the data, going from the top:

 # First rule's `when` is `False` as `message_type` is not `KojiTagChanged`
 # Second rule is `True` because both the `message_type` and name in the
   `when` clause match the data
 # Third rule does _not_ schedule anything, because even though `docker` is
   in `critpath_pkgs`, it also is part of the critpath excludes list, and
   so the rule is ignored

The `when` clauses are in fact mongo queries, evaluated using a Python
library that implements them for querying Python objects.
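A minimal sketch of how such a `when` clause can be evaluated against the message data (the real trigger delegates this to a full mongo-query library such as python-mongoquery; this toy version only handles equality, `$in`, and `$nin`):

```python
def matches(when, data):
    """Evaluate a tiny subset of mongo-style queries (equality, $in, $nin).

    Only covers the operators used in the example rules above; a real
    implementation would support the full mongo query language.
    """
    for key, cond in when.items():
        value = data.get(key)
        if isinstance(cond, dict):
            if "$in" in cond and value not in cond["$in"]:
                return False
            if "$nin" in cond and value in cond["$nin"]:
                return False
        elif value != cond:
            return False
    return True

message_data = {"message_type": "KojiBuildCompleted", "name": "docker",
                "critpath_pkgs": ["docker", "kernel"]}

# The second example rule matches, the third does not (docker is excluded):
assert matches({"message_type": "KojiBuildCompleted", "name": "docker"},
               message_data)
assert not matches({"message_type": "KojiBuildCompleted",
                    "name": {"$in": ["docker", "kernel"], "$nin": ["docker"]}},
                   message_data)
```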

The rules engine then takes the `do` clauses of the 'passed' rules, and
produces arguments for the `trigger_tasks()` calls. By default, `item`, and
`item_type` are taken from the `message_data`, `arches` is set to
`config.valid_arches`, and then all the key/values from the `do`'s body are
added on top. This means, that we can have a task, that for example forces
an architecture different than default::
```
- do:
  - {tasks: [awesome_arm_check], arches: [armhfp]}
  when: {message_type: KojiBuildCompleted}
```

The `do` clause can have multiple items in it, so something like this is
possible::
```
- do:
  - {tasks: [rpmlint]}
  - {tasks: [awesome_arm_check], arches: [armhfp]}
  when: {message_type: KojiBuildCompleted}
```

Triggering `rpmlint` on the default architectures, and `awesome_arm_check`
on `armhfp` for each package built in Koji.

This means, that when we want to trigger new (somewhat specific) tasks,
no changes are needed in the trigger's code, but just in the configuration,
to alter the rules. If we come to the point where more functionality is
needed, than it obviously calls for changes in the underlying code, in order
to add more key/values to the data provided by the Fedmsg consumer, or
adding more general functionality overall.

A good example of this is the dist-git style tasks problem. To solve it
I have added a new command (`$discover`) to the `do` section, that crawls the
provided git repo/branch, and schedules jobs for all `runtask.yml`'s found::
```
- do:
  - {$discover: {repo:
'http://pkgs.fedoraproject.org/git/rpms-checks/${name}.git', branch:
'${distgit_branch}'}}
  when: {message_type: KojiBuildCompleted}
```

In the bigger picture, this 'rules engine' functionality can be used to
make (for example) a web interface, that allows creating/altering the rules,
instead of changing the config file (the rules can as easily be taken from
a database, as from the config file), or even to provide a per-user triggering
capability - we could add a piece of code, that checks (selected) users'
Fedorapeople profile for a file, that contains rules in this format, and
then could simply run the engine on those rules+data from Fedmsg to decide
whether the user-defined tasks should be run.

It also somewhat reduces the tight bond between the trigger and FedMessage,
as the rules engine does not really care where the data (used to evaluate
the rules) came from.

This is by no means final, but it IMO shows quite an interesting PoC/idea, that
was not that complicated to implement, and made the trigger a lot better at what
it can 

Re: 2016-04-11 @ 14:00 UTC - Fedora QA Devel Meeting

2016-04-11 Thread Josef Skladanka
On Mon, Apr 11, 2016 at 2:49 PM, Josef Skladanka  wrote:
>
> ...

I won't be able to come today, updates are in the phriction document.
I'm going to mess around with the image building scripts a bit (I want
to make the failure reporting a bit more sane), but could use some
tasks on top of that.


2016-04-11 @ 14:00 UTC - Fedora QA Devel Meeting

2016-04-11 Thread Josef Skladanka
# Fedora QA Devel Meeting
# Date: 2016-04-04
# Time: 14:00 UTC
(https://fedoraproject.org/wiki/Infrastructure/UTCHowto)
# Location: #fedora-meeting-1 on irc.freenode.net

It's been a few weeks since we had our last QA devel meeting and I'm
sure that everyone is chomping at the bit to get back to them.

Please put announcements and information under the "Announcements and
Information" section of the wiki page for this meeting:

https://phab.qadevel.cloud.fedoraproject.org/w/meetings/20160411-fedoraqadevel/



Proposed Agenda
===

Announcements and Information
-
  - Please list announcements or significant information items below so
the meeting goes faster

Tasking
---
  - Does anyone need tasks to do?


Open Floor
--
  - TBD


On Docker testing and pytest

2016-04-09 Thread Josef Skladanka
Resending to keep things public...

tl;dr; of the tl;dr;

https://www.youtube.com/watch?v=3DWB7CBdvXU

tl;dr;

Tim and I/lbrabec shared the same worry, that by being overly-facilitating
to docker, we might go into the spiral of doom, where in the end we'd have
to have and support specific tooling for all of the projects.

Using pytest for docker testing seemed silly to me, since I thought that we
would be adding a layer of docker-specific convenience code, thus going
down the spiral of doom, instead of using some pre-existing docker
convenience (tutum).
Tim was worried that using Tutum (docker specific tool) sends us down the
same spiral.

In the end, we agreed that Taskotron should be first and foremost a
universal runner. Tim mentioned pytest (AFAIK) because we will need to be
able to consume some more "standard" output format than result yaml (and
return code), and pytest might be a good source of this, while at the same
time providing kind-of-OK testsuite-like behavior.

For the problem at hand (but this is universal for the future problems
too), we say:
 1) we understand these output formats (result yaml, return code, in the
future probably JUnit)
 2) write the test any way you want (bash to compiled C), and as long as
you provide one of the options from #1 as output, we're fine
 3) if we did the tests, we'd do it this way: [insert foobar test using
docker containers], but #1 and #2 are still valid

Where #3 is the "reference implementation" aka "simple piece of code, that
shows how we'd do it, but is by no means binding".

We do not know, nor say, that pytest is _the_ tool for docker testing, and
we realistically expect that most of the tests will be just a bash script,
that will do what's necessary, and we (lbrabec) will just try and provide
_some_ reference implementation for a docker test in taskotron.


Joza

Tim, If I left something out, or messed up, please correct me. I'm going to
have a beer now...




(04:51:29 PM) lbrabec: welcome guys :)
(04:51:45 PM) tflink: is this the place where all the docker things are
figured out?
(04:52:04 PM) jskladan: and where we burn the witches tooo
(04:52:16 PM) tflink: this is an acceptable solution to docker
(04:52:38 PM) tflink: how do we test docker? we burn witches!
(04:54:46 PM) jskladan: so ad pytest & docker - I don't really see the
profit of making the image maintainers write tests in pytest - although I
get the "it is a testsuite" argument, docker is mostly interfaced with via
command line, and that is one of the worst things to do in Python.
(04:54:46 PM) jskladan: On top of that - why force a choice on the
maintainers, instead of just allowing them to "write a test(suite)" any way
they want (heck, even using pytest, if that's their choice), and just
running it as a regular task via taskotron?
(04:55:09 PM) tflink: who's forcing anyone to do anything?
(04:55:50 PM) jskladan: then I'm misunderstanding, and still don't
understand why should _we_ use pytest in any way
(04:55:52 PM) tflink: eh, it can be wrapped to be much less painful
(04:56:22 PM) tflink: because it offers a default option so there's not so
much overwhelming choice
(04:57:30 PM) tflink: if there's an easy(ish) default, that's what many
people will end up using
(04:57:45 PM) jskladan: from my POW, I don't see why we should treat Docker
testing any different than, ie. package-specific tests
(04:58:13 PM) tflink: which is why i was suggesting that we look into
something more generic
(04:58:34 PM) jskladan: more generic than what?
(04:58:45 PM) tflink: not something specific to docker
(04:59:11 PM) tflink: something that allows grouping of commands/actions
into test cases and makes reporting results easy for users
(05:00:32 PM) tflink: why would having everyone come up with their own
solution for that use case be better?
(05:00:42 PM) lbrabec: i always thought that the generic thing is
taskotron, and we are going to provide docker directive, that runs the
actual tests
(05:00:55 PM) tflink: we can
(05:00:59 PM) jskladan: ok, let me ask it in a different way - are we going
to "remove the overwhelming choice" for the package-specific tests too?
(05:01:31 PM) tflink: but there's a limit to what we're going to be able to
support if we do "this is for docker, this is for kubernetes, this is for
modules ..."
(05:01:38 PM) tflink: jskladan: I'd like to, yes
(05:02:05 PM) jskladan: so who's going to tell all the devs "well, what you
have now is nice, but you should really rewrite it to pytest"
(05:02:05 PM) jskladan: ?
(05:02:31 PM) tflink: but note that the base thing in my mind is "this is a
default which will likely make your lives easier. if there's a better tool,
please use it - if it returns something we can understand, it doesn't
matter but we won't be able to help as much if you run into issues"
(05:02:51 PM) tflink: nobody?
(05:03:33 PM) tflink: the target folks here are people writing tests for
things beyond what's already upstream
(05:04:03 PM) tflink: there is no way that we'

Re: Proposal to CANCEL: 2016-03-21 Fedora QA Devel Meeting

2016-03-21 Thread Josef Skladanka
ack

On Mon, Mar 21, 2016 at 6:47 AM, Tim Flink  wrote:

> I don't have any hugely important topics for the QA Devel meeting this
> week so instead of taking up 30-60 minutes of everyone's time this
> week, I propose that the meeting be canceled.
>
> If there is a topic that you would like to see discussed, reply to this
> thread with that topic and we can hold the meeting as it would have
> been scheduled.
>
> Otherwise, I'll sync up with folks about tasks during the week.
>
> Tim
>


Re: 2016-02-22 @ 15:00 UTC - Fedora QA Devel Meeting

2016-02-22 Thread Josef Skladanka
Top posting for consistency. Also, I won't be able to come today.
How about postponing the meeting until tomorrow?

J.

- Original Message -
> From: "Jan Sedlak" 
> To: "Fedora QA Development" 
> Sent: Monday, February 22, 2016 12:41:50 PM
> Subject: Re: 2016-02-22 @ 15:00 UTC - Fedora QA Devel Meeting
> 
> I too won't be able to attend.
> 
> 2016-02-22 10:52 GMT+01:00 Kamil Paral < kpa...@redhat.com > :
> 
> 
> > # Fedora QA Devel Meeting
> > # Date: 2016-02-22
> > # Time: 15:00 UTC
> > ( https://fedoraproject.org/wiki/Infrastructure/UTCHowto )
> > # Location: #fedora-meeting-1 on irc.freenode.net
> 
> Sorry, I won't attend the meeting today, I have an important errand in the
> city.


Testcase namespacing - adding structure to result reporting

2016-02-08 Thread Josef Skladanka
This is an initial take on stuff that was discussed in person during Tim's
stay in Brno. Sending to the list for additional discussion/fine-tuning.
 
= What =

Talking about rpmgrill-like checks, we will need to facilitate some kind of
structure for representing that a check is composed of multiple subchecks,
for example:

check - FAILED
subcheck1 - PASSED
subcheck2 - PASSED
subcheck3 - FAILED
subcheck4 - PASSED

!IMPORTANT: ResultsDB will not be responsible for computing the result value
for an "upper level" Result from the subchecks - this is the check's (check
developer's) responsibility.

This could (should?) be done on two levels:
* physically nesting the Results as such in the database structure
* namespacing Testcases

For the start, we decided to go with the simplistic approach of nesting the
Testcases via simple namespacing - thus allowing a frontend/query tool to
reconstruct the structure, at least to some extent, e.g. by relying on the
premise that Results that are a part of one Job can be converted to a
tree-like structure based on the Testcase namespacing, if needed.


== Namespace structure ==

We'll be providing some top-level namespaces (list not yet final):
* app
* fedoraqa
* package
* scratch (?)

These will then be further split to facilitate a finer level of granularity,
e.g.:

app
    testdays
        powermanagement
            pm-suspendr
fedoraqa
    depcheck
    rpmgrill
package

    unit
    func

Everything below the top-level will be 100% user defined. We might have
recommendations for specific namespaces (like package.), but we won't
be enforcing them.

The structure will be implemented (at least in the initial implementation) just
via the Testcase.name attribute in the DB, using dots as a separator. Later on,
we can easily add support for wildcards in searches (e.g.
app.testdays.*.pm-suspendr)
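For illustration, dotted testcase names like these can already be matched with
simple wildcard patterns using nothing but the Python standard library - a
rough sketch only, and the testcase names below are made up for the example:

```python
from fnmatch import fnmatch

# Made-up testcase names following the dotted namespace convention
testcases = [
    "fedoraqa.depcheck",
    "fedoraqa.rpmgrill",
    "app.testdays.powermanagement.pm-suspendr",
    "app.testdays.networking.pm-suspendr",
]

def match_namespace(names, pattern):
    """Return all testcase names matching a dotted wildcard pattern."""
    return [name for name in names if fnmatch(name, pattern)]

print(match_namespace(testcases, "app.testdays.*.pm-suspendr"))
# -> ['app.testdays.powermanagement.pm-suspendr',
#     'app.testdays.networking.pm-suspendr']
```

Note that fnmatch's `*` is not dot-aware (it matches across the separator dots
too), so a real frontend might want a stricter regex; this only sketches the
query idea.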

!IMPORTANT: the namespaces are not to be used to represent "additional data"
about the underlying result such as architecture, item under test, etc. 
This is what the Result's extra-data (ResultData) is there for.

NOTE: Although we do not encourage storing results at the finest granularity
"just because" (e.g. individual results of a unittest testsuite), we leave it
to the check developer's judgement. If there is a use case for it, let them do
it, we don't care, as long as the DB is not extremely overloaded.


== Authentication/Authorization ==

We'll be continuing with the "expect no malice" approach we have right now.
There will be just a simple limitation in libtaskotron:

check git clone
if cloned: only allow non-pkg namespace if __our__ repo
else: do whatever, don't care

in libtaskotron:
check the git checkout like listed above
have whitelisted namespace repos in config

!FIXME: the mechanism above is just copied from tflink's notes, I can't
remember the details :/
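Since the exact mechanism is admittedly half-remembered, here is just one
possible reading of the notes above, sketched in Python - the whitelist
layout, the repo URLs and the function name are all made up for illustration:

```python
# Hypothetical sketch of the whitelist idea described above: a task may
# report into a protected namespace only if its git repo is whitelisted
# for that namespace in the config. All names here are illustrative.

WHITELIST = {
    # top-level namespace -> repos allowed to report into it
    "fedoraqa": ["https://example.org/taskotron/tasks.git"],
}

def namespace_allowed(testcase_name, task_repo_url):
    """Allow any non-protected namespace; protected ones need a whitelisted repo."""
    toplevel = testcase_name.split(".", 1)[0]
    allowed_repos = WHITELIST.get(toplevel)
    if allowed_repos is None:
        return True  # not a protected namespace - do whatever, don't care
    return task_repo_url in allowed_repos

assert namespace_allowed("scratch.mytest", "https://example.org/random.git")
assert not namespace_allowed("fedoraqa.depcheck", "https://example.org/random.git")
```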


== TODOs ==

* Change our checks to use the fedoraqa namespace
* Implement repo checking in libtaskotron
* Write docs for how to report stuff to ResultsDB
* Come up with root nodes for namespaces


Re: Log Data Retention

2015-11-09 Thread Josef Skladanka
> ... to delete all artifacts older than 4 months. If you have objections,
> speak up now.


OK with me.


Re: 2015-08-24 @ 14:00 UTC - Fedora QA Devel Meeting

2015-08-24 Thread Josef Skladanka
I won't be able to participate today - last minute change of plans out of my 
control - but I have no status updates anyway - most of the time was spent on 
catching up with things after Flock, and expense reports (yay!).

I'm, on the other hand, in need of tasks, so feel free to throw some my way 
(oops, that won't end well...).


Re: 2015-07-27 @ 14:00 UTC - Fedora QA Devel Meeting

2015-07-27 Thread Josef Skladanka
I won't be able to make it to the meeting today, so please just C&P these:

#topic jskladan's update
#info T414 is cursed /me spent most of the week getting distracted by 
OtherThings(tm)
#info Docker is broken (machines can't be linked) - BUG #1244124
#info when `git apply` is misbehaving, check CR/LF vs LF
#info gremlins in Tim's machine caused the WIP diff to be incomplete (found out 
on Friday), /me will carry on either from the current state, or from the complete 
patch, if Tim finds it


Re: Coding Style

2015-06-18 Thread Josef Skladanka
- Original Message -
> From: "Kamil Paral" 
>
> Will we try to live with it in libtaskotron for a while, or should I create
> similar patches for all our projects right away?

I vote for doing it everywhere. I have already converted ExecDB using autopep8
(`autopep8 -r --max-line-length 99 --in-place -a -a ./`), as there is next to no
change in git-blame output there.

For the other projects (including libtaskotron, once we merge the 
disposable-clients branch), I suggest using a fake author (`git commit --author "Auto PEP8 
"`) for the initial autopep8 conversion commit, so one can then easily dig deeper 
with git-blame, if needed.

Thoughts?


Re: Coding Style

2015-06-15 Thread Josef Skladanka
> > > I'm not picking on Josef here - I'm sure I've submitted code recently
> > > with lint errors, this was just the review I was looking at which
> > > triggered the idea:
> > > 
> > > https://phab.qadevel.cloud.fedoraproject.org/D389


No worries, I'm not taking it personally. As I commented in D389, the "not 
compliant" parts of the code were mostly in the spirit of the rest of the code 
in the respective files (thus actually honoring PEP8 - 
https://www.python.org/dev/peps/pep-0008/#a-foolish-consistency-is-the-hobgoblin-of-little-minds
 ). Not saying that it is the best though.
 
> > > exceptions that we'd want, I'm proposing that we use strict PEP8 with
> > > almost no exceptions.

For me, strict PEP8 is next-to-unusable, and almost always leads to code like 
this:

+result = self.resultsdb.create_result(job_id=job_data['id'],
+  testcase_name=checkname,
+  outcome=detail.outcome,
+  summary=detail.summary
+  or None,
+  log_url=result_log_url,
+  item=detail.item,
+  type=detail.report_type,
+  **detail.keyvals
+  )

Hard to read, and heavily concentrated to the right edge of the 80-char mark.
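For comparison, a hanging indent usually keeps such calls both PEP8-compliant
and readable - the function below is a simplified stand-in for illustration,
not the real resultsdb API:

```python
# Simplified stand-in for the call quoted above (not the real resultsdb API).
def create_result(job_id, testcase_name, outcome, summary=None, **extra):
    return {"job_id": job_id, "testcase": testcase_name,
            "outcome": outcome, "summary": summary, **extra}

# Hanging indent: break right after the opening parenthesis and indent the
# arguments one level, instead of aligning them with the opening bracket.
result = create_result(
    job_id=42,
    testcase_name="fedoraqa.depcheck",
    outcome="PASSED",
    item="foo-1.0-1.fc20",
)
```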

> ...
> In this case it would involve asking Josef to stop putting spaces between
> parameter keyvals

I actually did stop doing that quite some time ago :)

First of all, I'd suggest moving our codebase to strict PEP8 (or 
as-strict-as-possible), so we can see how our code looks when PEP8 compliant.
For starters, we could just plain use autopep8 - 
https://pypi.python.org/pypi/autopep8/
How about that?

J.


Re: 2015-06-01 Fedora QA Devel Meeting Minutes

2015-06-02 Thread Josef Skladanka
>   * tflink to pester jskladan

Sorry about that, /me misread some old "let's cancel the meeting" email...

ad Testdays:
  The Testday revamp is about half-done, as the process was interrupted by the 
testing spree. I'm all in for 'killing' the old cloud machine, and I think it 
can be done ASAP.
  The new code will be ready long before the new cycle of Testdays, and it 
should be deployed by Ansible, as was mentioned during the meeting.

ad git in Phab:
  I tend to agree with kparal - as long as it's quite easy to set up repos, I do 
not really care where the repo is hosted - especially if Phab is able to push 
to remote repos, thus keeping the (IMO) more visible Bitbucket repos up to date.

ad meeting time:
  I'm OK with the current time, I'll just need to be more careful with marking 
emails as read on my phone *facepalm*

J.





Re: openQA live image testing: ready for merge?

2015-03-12 Thread Josef Skladanka
Some preliminary feedback:

= openqa_fedora =

== _do_install_and_reboot.pm ==

Please delete the "anaconda_install_finish" needle, if it is unused.

anaconda_install_done needle: 
  * Why is only a part of the button selected?
  * What is the logic behind "assert_and_click" for multiple areas in one 
needle? Seems like the "click" is done on the last of the areas (judging from 
the contents of the needle) - is this _always_ true?

== main.pm ==

ad the contents of:
  _boot_to_default.pm
  _live_run_anaconda.pm
  _anaconda_select_language.pm

I'm absolutely for splitting this up a bit, but I'd rather have it done in a 
slightly different way:
  * rename _boot_to_default.pm to something in the likes of 
"_handle_bootloader.pm" (/me is bad with names, but it really just handles the 
grub options...)
  * merge _live_run_anaconda.pm and _anaconda_select_language.pm into one file, 
and call it something like "_get_to_anaconda_main_hub.pm"

This will keep the idea of having things split (so the "unless Kickstart" 
clause is just in one place), and will join the pieces that IMHO should be 
together anyway.


== Needle changes ==

=== anaconda_spoke_done.json ===

Why change the needle, and why in this particular manner? The change looks 
unnecessary. If there is no particular reason for it, please revert to the 
previous version.

=== bootloader_bios_live.json ===

The black area (the last "match" area in the needle) is IMHO quite useless - I 
suspect it is a remnant of the original bootloader needle. If there isn't a 
reason for having it there, please remove the area from the needle.
=== gdm.json ===

I'm not sure why you selected the particular bit of the screen, but it does not 
really make much sense to me. Why did you not select any of the more distinct 
areas of the gdm screen?

Also, I'd really like for the needle files to be named as close to the "tag" 
(i.e. "graphical_login" ) as possible, I know that you probably made this with 
other login managers in mind, but please use "graphical_login_gdm" as the name 
of the file, instead of plain "gdm".


= openqa_fedora_tools =

== conf_test_suites.py ==

I'm fairly certain that both default_install and package_set_minimal cover 
QA:Testcase_install_to_VirtIO.

== openqa_trigger.py ==

I really don't like the whole check_condition() thing. The name of the function 
does no correspond to what it does, which is quite unpleasant together with its 
side-effects (scheduling the jobs, and changing value of the jobs variable), 
and using variables from out of its scope.

Also, it seems that you forgot to actually fill the uni_done variable, 
resulting in `if condition and image.arch not in uni_done:` being effectively 
reduced to `if condition`, and `if not all(arch in uni_done for arch in 
arches):` reduced to `if True`.

So please:
 * find a more appropriate name for "check_condition()"
 * pass all necessary variables as arguments
 * make sure the uni_done variable is filled with the right data, and ideally 
rename it to something more descriptive of its purpose.

I've spent an hour or so tackling it, so please consider this as an example: 
http://fpaste.org/197044/63062142/ - but note that I have not run the code (so 
typos are probably present).




I hope I'm not being too harsh; it is most certainly not my intent to come 
across that way,

J.


Re: openQA live image testing: ready for merge?

2015-03-12 Thread Josef Skladanka
Adam,

please set these up for review in Phabricator. I strongly suspect (given the 
time I've spent looking at the changes so far) that some discussion will be 
required, and Phab is _the_ place to do it.
Also, please make sure to rebase your repos onto their current state before 
creating the Phab reviews.

For further development, I'd suggest creating an account on Bitbucket and 
using the "core" repos - all the FedoraQA devs can write to the repos, and all 
the "admins" can administer them. Once you have the account, I'll add you to the 
Dev group; having a "feature" branch in the core repo seems much better, 
given the development workflows we currently adhere to.

Thanks,

joza


Re: testdays is down again

2015-03-11 Thread Josef Skladanka
> ...
> it's down :(

Seems to be working for me. Could you describe "down" a bit more? :)

J.


Re: openqa_fedora_tools patch: add 'all' mode

2015-02-18 Thread Josef Skladanka
Adam,

the run_all code does not really make much sense to me, to be honest.
After some minor cleanup, the code looks like this:

171 def run_all(args, wiki=None):
172 """Do everything we can: test both Rawhide and Branched nightlies
173 if they exist, and test current compose if it's different from
174 either and it's new.
175 """
176 skip = None
177 (jobs, currev) = jobs_from_current(wiki)
178 print("Jobs from current validation event: {0}".format(jobs))
179 
180 yesterday = datetime.datetime.utcnow() - datetime.timedelta(days=1)
181 if currev and currev.compose == yesterday.strftime('%Y%m%d'):
182 skip = currev.milestone
183 
184 if not skip.lower() == 'rawhide':
185 rawhide_ffrel = fedfind.release.get_release(
186 release='Rawhide', compose=yesterday)
187 rawjobs = jobs_from_fedfind(rawhide_ffrel)
188 print("Jobs from {0}: {1}".format(rawhide_ffrel.version, rawjobs))
189 jobs.extend(rawjobs)
190 
191 if not skip.lower() == 'branched':
192 branched_ffrel = fedfind.release.get_release(
193 release=currev.release, compose=yesterday)
194 branchjobs = jobs_from_fedfind(branched_ffrel)
195 print("Jobs from {0}: {1}".format(branched_ffrel.version, 
branchjobs))
196 jobs.extend(branchjobs)
197 
198 if jobs:
199 report_results(jobs)
200 sys.exit()


Which on lines:
 177-178: Runs the OpenQA jobs for "current" event
 180: Creates a yesterday's date (formerly done on three lines in a weird 
way)
 181-182: IIUIC checks whether the "current" compose is from yesterday, and if 
so, then sets skip to either Rawhide or Branched
 184&191: Fails terribly when the if-clause on 181 is False (because skip 
equals None in that case) => no job results will be reported to wiki matrices
  Also, it is kind of unclear at first read that what it does is 
basically "when you should not skip rawhide, run jobs for rawhide and do the 
same for branched".
 I'd much rather see something like `if skip.lower() != 'rawhide':` 
with a proper comment

I'm not really sure how the whole 181 if-clause works, and why it is evidently 
always True, since you have not encountered the error.
Also, you mention a --yesterday parameter, which I have not found in the 
code.

I pushed the slightly polished code to the repos, so make sure to pull :)
I'd really love to see some more comments in your code, which uses the 
wikitcms/relval internal attributes and so on (it gets somewhat wild in places).
Please have a look at the run_all() method and:
 * make sure that it handles the possible exceptions (please really do _not_ 
use an empty except clause, it is the root of all evil)
 * document the if-clause on #181 and what it means
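To make the failure mode on lines 184 and 191 concrete, a minimal sketch
(simplified for illustration, not the actual trigger code): `skip` stays None
whenever the if-clause on 181 is False, and `None.lower()` then raises
AttributeError, so the comparison needs a guard:

```python
def should_run(skip, milestone):
    """Run `milestone` unless it is the one marked to be skipped.

    Guarding against skip being None avoids the AttributeError that a
    bare `skip.lower() != milestone` would raise when no skip is set.
    """
    if skip is None:
        return True
    return skip.lower() != milestone

assert should_run(None, "rawhide")            # no skip set: run everything
assert not should_run("Rawhide", "rawhide")   # current event is Rawhide: skip
assert should_run("Branched", "rawhide")      # skip Branched, still run Rawhide
```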


Thanks,

Josef

- Original Message -
> From: "Adam Williamson" 
> To: qa-devel@lists.fedoraproject.org
> Sent: Wednesday, February 18, 2015 9:24:17 AM
> Subject: openqa_fedora_tools patch: add 'all' mode
> 
> This adds an 'all' mode which runs for the current validation event
> compose if it hasn't already been done, then runs for the current
> date's Rawhide and Branched nightlies, if they exist and aren't the
> same as the current validation event. Has a --yesterday parameter to
> run on the nightlies from a day earlier instead, if your timezone /
> cron config don't hook up great with releng's. (In future we ought to
> have a daemon that listens for compose events from fedmsg or
> something). 'Today' and 'yesterday' are calculated in UTC.
> 
> Right now the non-'validation event' results are just going to sit in
> OpenQA, but I have Grand Plans to get 'em out via fedmsg and/or
> special wiki pages. For now only folks with VPN access or their own
> Coconut instance will be able to see the results, sorry!
> 
> We *may* wind up running the tests for a nominated nightly compose
> twice - once before it gets nominated, once after - but that doesn't
> seem like a huge problem. Obviously we ought to build one Glorious
> Unified Sausage Machine for nightlies which pre-flights 'em via OpenQA
> then does the nomination if that passes, but for now this is fine, I
> think.
> --
> Adam Williamson
> Fedora QA Community Monkey
> IRC: adamw | Twitter: AdamW_Fedora | XMPP: adamw AT happyassassin . net
> http://www.happyassassin.net
> 


Re: relval progress report

2014-10-07 Thread Josef Skladanka
Cool stuff, Adam!

where can I submit patches for review? :)



Re: Coding Style

2014-06-03 Thread Josef Skladanka
> But if there's a strong desire for more columns, I'll manage. Can't hinder
> the team, can I? :)

Also, we should mention that by default the maximal line length is set to 79, 
not 80.

Let's just set it to 80 (as we already use it in the code), and forget about 
the heretic 100 idea :)


Re: Coding Style

2014-06-03 Thread Josef Skladanka
First of all - forget the max-line-length comment from earlier... I went 
through pep8's configuration options, and there are basically next to none, 
so the overall decision will mostly need to be "keep it or drop it".

>  E251 =
> Josef is used to add spaces between keyword name and its value in method
> definitions or method calls. Personally, I find it more readable according
> to PEP8 (with no space), but Josef claims the opposite :)

I can, of course, try to change my mindset, but it is true that I find it 
more readable the way I'm used to writing it.
 

>  E303 
>
> This forbids you from adding two blank lines between class methods. Another
> braindead check.

Sadly, this cannot be configured in any way.

>  E124 
>
> In a longer line of text, having the closing bracket really matching the
> opening one (the first example) is much easier on the eyes.

Agreed!
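For reference, a tiny made-up illustration of the bracket style under
discussion - the closing bracket sits at the indentation of the line that
opens the construct:

```python
# The closing bracket matches the indentation of the opening line, which
# is the style E124 accepts (and the one found easier on the eyes above).
colors = [
    "red",
    "green",
    "blue",
]

# Pushing the closing bracket to some unrelated column would trigger E124.
assert len(colors) == 3
```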

 
>  E122 =
> 
> I don't really understand the purpose of this error. If I move 'type(' to the
> third line, it goes away.

I must say that I find the "proper" way enforced by pep8 a bit more 
readable/understandable. It makes sense, but we can IMHO easily dismiss this 
error.
 

>  E111, E113 
> 
> I'm not sure if we can suppress certain warnings only in
> a selected block of code, but if we check for PEP8 automatically, we should
> find out. There will be more use cases like this one.

There is an option to disable checks using a #noqa comment, but (sadly) not for 
these errors.


All in all, I'd be for a configuration that ignores at least E128 and E303. My 
personal preference would be to also ignore E124.
I have no strong opinion on E122, although I'd vote for keeping it, should it 
ever come to it.
E111 and E113 should IMHO be kept, even if it means this kind of false 
positive.

J.


Re: Coding Style

2014-06-03 Thread Josef Skladanka
> Outside any header requirements or directive documentation requirements,
> are there any changes to PEP8 that folks want to make? If so, please
> list the exceptions and why you think they should be adopted.

How about:

  [FORMAT]
  # Maximum number of characters on a single line.
  max-line-length=100

/me has no problem with 80 chars, but most of my monitors can easily handle two 
100-char panes in one vertically split view in vim (and 80 chars is sometimes 
quite a pain)

J.


Re: Documentation and Docstring Format

2014-04-17 Thread Josef Skladanka
> https://pythonhosted.org/an_example_pypi_project/sphinx.html#auto-directives
> 
> 
> I'm not suggesting that we drop everything and fix all the docstrings
> right now but I am suggesting that we start following the sphinx
> docstring format for new code and fix other-formatted docstrings as we
> come across them.
> 
> Any objections?

Well, my only objection is that the Sphinx format has, IMHO, the worst impact on 
how docstrings look.
Maybe it is just me, but I use help() more often than the HTML docs.
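For context, the Sphinx field-list style under discussion looks roughly like
this (a made-up example) - the fields render nicely in the HTML docs, but read
rather verbosely in help():

```python
def divide(numerator, denominator):
    """Divide two numbers.

    :param numerator: the number to be divided
    :type numerator: float
    :param denominator: the number to divide by; must be non-zero
    :type denominator: float
    :returns: the quotient
    :rtype: float
    :raises ZeroDivisionError: if denominator is zero
    """
    return numerator / denominator

print(divide(6, 3))  # -> 2.0
```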

But other than that, I have no issues.

J.


Re: Project Locations and Basic Setup

2014-04-08 Thread Josef Skladanka
>  - use gitflow and set 'develop' as the default branch
>  - host projects under bitbucket/fedoraqa
>* makes it easier to say "find our projects at this url" instead of
>  "find projects X,Y and Z here. find A and B here"

Absolutely no problem here.

>  - code submissions and issues/tasks are tracked through phabricator

At least for the utility projects (pytap13 is IMHO the only one, though), I'd 
rather still have the issues on both Bitbucket and in Phab. The thing is, since 
these might (sometimes) get used outside of taskotron/fedoraqa, it seems like 
too much of a hassle to ask 'outsiders' to register in our Phab when the 
ticket can be posted via Bitbucket.

Does it make sense?

J.


Re: Default invocation of pytest for qadevel projects

2014-03-06 Thread Josef Skladanka
> Any thoughts on which of those (if either) would be better?

I do not really mind either, and do not have any strong preference. I'm used to 
having the non-functional tests run by default, but I can easily manage any way 
we decide to do it.

j.


Re: D19 Comments and Diff

2014-03-05 Thread Josef Skladanka

- Original Message -
> From: "Tim Flink" 
> To: qa-devel@lists.fedoraproject.org
> Sent: Wednesday, March 5, 2014 8:23:31 PM
> Subject: Re: D19 Comments and Diff
> 
> I'm generally of the mind that folks shouldn't have to dive into
> docstrings on tests in order to understand what is being tested. It is
> unavoidable in some cases where you have complex tests but that's
> usually a test or code smell which should at least be acknowledged.

Sure, I do agree. On the other hand, I believe that the developer running the 
tests should have at least an overall idea of what the "tested" code does 
(since he probably made some changes, and that triggered the need for running 
unit tests). I do not know why, but sometimes it seems like people (and I'm 
referring to my previous employments here) tend to believe (and I share this 
belief to some extent) that the "production" code can be complex (and by that I 
do not mean smelly), and the people reading/maintaining it will be able to 
understand it (with the possible help of comments). But at the same time, the 
(unit)tests must be written in a way that a first-year high school student can 
instantly understand. Maybe it is a residue of the usual corporate "testing is 
done by dummies, programming is done by the geniuses" approach, I don't know. 
But I tend to treat the tests as a helper tool for _the developer_.

> One of my counter-points here is that if the tests are so trivial, do
> we really need to have them? After a certain point, we can add tests
> but if they aren't incredibly meaningful, we'd just be adding
> maintenance burden and not gaining much from the additional coverage.

Sure, but the way I write tests is bottom-up - I start from the simple ones 
and work my way up to the "more complex" tests. I'm not saying this is the 
best approach; it just makes sense to me to know that the basics are 
covered before diving into the more "tricky" stuff.
 
> Overreact much? :-P

Yup, I sometimes tend to :D But once you see my head-desks you'll understand :D
 
> I may have gone a little too far and not spent enough time on the
> tests. I agree that some of the test names could be better and that
> there's not a whole lot of benefit to being rigid about "one assert per
> test" regardless of whether or not it's generally accepted as a good
> thing.

I know that the "one assert to rule them all" (hyperbole here) is usually 
considered _the_ approach, but every time I see the "have one assert per 
test" guideline (which tends to be interpreted as _the rule_), there is this 
other guideline saying "test one logical concept per test". And this is what I 
tend to do, and what (IMHO) Kamil did in his de-coupled patch. So not all tests 
with more than one assert are necessarily a test smell.

And finding the right balance between the two guidelines is IMHO the goal we 
should aim for. So yes, having method names longer than the actual test code is 
something I consider... not really that great :) But I understand that you 
wanted to show Kamil (and the rest of us) what can be done, and what the 
general guidelines behind unit testing are, so I'm not trying to disregard 
the overall benefit.

> I also want to avoid increasing the maintenance burden of having a
> bunch of tests that look for things which don't really need to be
> tested (setting data members, checking default values etc.)

I agree, there is stuff that kind of can be taken for granted.

J.


Re: RFC: Taskotron task description format

2014-01-06 Thread Josef Skladanka
Hi Tim,

sorry for the late reply, this somewhat slipped my mind :(

Overall, I like the concept, and although I understand that this is a 
proof of concept, I'm a bit worried about the get_argparser() method.

Would it mean that we need to know all the possible arguments in advance? Or 
is this just a simple piece of code intended as an easy-to-use demo?

J.


Re: Taskbot: TAP vs Subunit

2013-11-04 Thread Josef Skladanka
- Original Message -
> From: "Nick Coghlan" 
> 
> I also realised that the YAML support in TAP likely gives you the
> ability to embed whatever you want if you discover the need, so it may
> make sense to start with the simpler format and make use of that
> embedded capability to transport other things.

Yup, that was exactly my plan :)

j.



Re: Taskbot: TAP vs Subunit

2013-10-31 Thread Josef Skladanka
Lucas,

do you use any library for producing the TAP format? Also, do you have a TAP 
parser, or do you just emit it? I was looking for something in Python, but 
everything I found is either outdated or incomplete.
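For what it's worth, plain TAP result lines are simple enough that a minimal
parser fits in a few lines - a rough sketch only (no plan lines, no YAML
diagnostic blocks), not a substitute for a proper library:

```python
import re

# Matches plain TAP result lines: "ok 1 - description", "not ok 2 ...".
TAP_LINE = re.compile(r"^(not )?ok\s+(\d+)(?:\s*-?\s*(.*))?$")

def parse_tap(text):
    """Return (passed, test_number, description) tuples for TAP result lines."""
    results = []
    for line in text.splitlines():
        match = TAP_LINE.match(line.strip())
        if match:
            passed = match.group(1) is None
            results.append((passed, int(match.group(2)),
                            (match.group(3) or "").strip()))
    return results

sample = "1..2\nok 1 - first check\nnot ok 2 - second check\n"
print(parse_tap(sample))
# -> [(True, 1, 'first check'), (False, 2, 'second check')]
```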

Thanks,

Joza