Re: RATS replacement with Koji image building

2014-03-11 Thread Kamil Paral
> On Mon, 10 Mar 2014 13:30:16 -0400 (EDT)
> Kamil Paral  wrote:
> 
> > I've finally had time to watch some DevConf talks I couldn't
> > personally attend. This one is very interesting:
> > 
> > http://www.youtube.com/watch?v=rWwugyV9J0Q&index=20&list=PLjT7F8YwQhr928YsRxmOs8hUyX_KG-S0T
> > 
> > I believe we could use it as a very simple RATS replacement. All the
> > heavy lifting would be done by someone else. I have created a ticket
> > about this: https://phab.qadevel.cloud.fedoraproject.org/T94
> 
> For non-iso image creation, sure. Can it support iso creation or
> anything that uses anaconda? It kinda sounds designed for cloud images
> instead of anything which is using anaconda.

I'm not sure if we understand each other. The disk image creation process (run 
by Koji) uses anaconda, and IIUIC the result is a disk image (to be used in 
VMs, for example), not an ISO. So we don't need to do anything with the 
resulting images; we just throw them away. We would just be interested in the 
result.

So the whole check could look like this (pseudo code):

task = koji.buildImage('ks.cfg', 'fc21', scratch=True)
task.wait()
if task.success():
  return PASSED
else:
  return FAILED


This could be run daily.
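The pseudo code above could be fleshed out along these lines (a sketch only; the `koji image-build` arguments shown are assumptions, not verified against the koji CLI):

```python
# Daily RATS-style check: kick off a scratch image build in Koji and report
# PASSED/FAILED based on whether the build task succeeded.
# NOTE: the exact 'koji image-build' arguments below are illustrative guesses,
# not taken from the koji documentation.
import subprocess

PASSED, FAILED = "PASSED", "FAILED"

def verdict(returncode):
    """Map the exit code of a blocking koji invocation to a check result."""
    return PASSED if returncode == 0 else FAILED

def run_check():
    # Assumes '--wait' blocks until the build task finishes and that the
    # CLI exits non-zero when the task fails.
    proc = subprocess.run(
        ["koji", "image-build", "--scratch", "--wait",
         "--kickstart=ks.cfg", "fedora-disk", "21", "f21-candidate"])
    return verdict(proc.returncode)

# run_check() would then be invoked once a day, e.g. from cron.
```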

> 
> > Of course, this is not our immediate concern. But I felt like sharing
> > the video.
> 
> Thanks for sharing it, I didn't realize that the devconf videos were
> available.

They are, but most of them are not added to the correct youtube playlist. I've 
asked people to fix it. In the meantime, all of them should be available in 
this view:
http://www.youtube.com/user/RedHatCzech/videos

The quality is very low, unfortunately. The slides are available in their 
description.
___
qa-devel mailing list
qa-devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/qa-devel


Re: Possible QA Devel Projects for GSoC 2014

2014-03-11 Thread Kamil Paral
> > > 
> > > Graphical Installation Testing
> > > 
> > > Continue the work that Jan started with his thesis or look into
> > > integrating something like openqa. The emphasis here is on the
> > > graphical interface since ks-based installation testing could be
> > > covered by stuff already written for beaker
> > 
> > After talking to Michal Hrusecky from OpenSUSE on DevConf, I'm pretty
> > convinced we should collaborate with them on OpenQA. They have
> > unleashed their whole Boosters team to work on it, and they're fixing
> > many of the previous pain-points (except for Perl, unfortunately).
> > They also try to have it pretty generic, without unneeded ties to
> > OpenSUSE infrastructure (e.g. they've just implemented OpenID login),
> > and they would really appreciate our collaboration.
> 
> We keep running into this and I really need to spend some time with
> OpenQA again. When I looked at it a couple years ago, there were several
> things that I didn't like about how the framework actually works
> (entire screenshot comparison, forcing keyboard interactions etc.) but
> it's possible that they've fixed those issues.

Look here:
https://www.google.cz/#q=openqa+site:http:%2F%2Flizards.opensuse.org

They use OpenCV instead of screenshot checksumming now. I'm not sure what you 
mean by keyboard interactions.
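For context, the OpenCV approach boils down to template matching: find where a small reference image (a "needle") best fits inside the current screenshot, instead of comparing whole-screen checksums. A toy illustration in plain NumPy (not OpenQA's actual code):

```python
# Toy template matcher: slide the needle over the screenshot and return the
# position with the smallest sum of squared differences (0 = exact match).
import numpy as np

def match_template(screen, needle):
    """Return (row, col) of the best match of needle inside screen."""
    H, W = screen.shape
    h, w = needle.shape
    best_ssd, best_pos = None, None
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            ssd = float(np.sum((screen[r:r+h, c:c+w] - needle) ** 2))
            if best_ssd is None or ssd < best_ssd:
                best_ssd, best_pos = ssd, (r, c)
    return best_pos
```

The point of this style of matching is that it can tolerate small pixel differences (fuzzy thresholds), which checksum comparison cannot do at all.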

One major drawback is that they still don't support task distribution (to test 
clients). Everything is executed on a single machine. But they say they are 
able to run lots of test cases every single day, and we intend to run just a 
fraction of that, so performance-wise it shouldn't be a problem.

> > > 
> > > Disposable Client Support
> > > 
> > > 
> > > This is another of the big features that we'll be implementing
> > > before too long. It's one of the reasons that we made the shift
> > > from AutoQA to taskotron and is blocking features which folks say
> > > they want to see (user-submitted tasks, mostly).
> > > 
> > > This would involve some investigation into whether OpenStack would
> > > be practical, if there is another provisioning system we could use
> > > or if we'll be forced to roll our own (which I'd rather avoid).
> > > There should be some tie-in with the graphical installation support
> > > and possibly the gnome integration tests.
> > 
> > As usual, we're still missing the required pieces the student should
> > work with. But as a pilot and a way to discover and evaluate
> > possible options, this could be interesting.
> 
> What are we missing that wouldn't be part of this project?

Well, are we sure yet exactly how the client setup process will be hooked into 
taskotron or its underlying tools? Are we committed to using buildbot, or might 
it change?

> > > 
> > > System for apparent results storage and modification
> > > 
> > > 
> > > There has to be a better title for this but it would be one of the
> > > last major steps in enabling bodhi/koji to block builds/updates on
> > > check failures. The idea would be to provide an interface which can
> > > decide whether a build/update is OK based on what checks were
> > > passed/failed. It would have a mechanism for manual overrides and
> > > algorithmic overrides (ie, we know that foo has problem X and are
> > > working on it, ignore failures for now) so that we don't upset
> > > packagers more than we need to.
> > > 
> > > When Josef and I last talked about this, we weren't sure that
> > > putting this functionality into our results storage mechanism was
> > > wise. It's a different concern that has the potential to make a
> > > mess out of the results storage.
> > 
> > This is one of the more self-contained projects I think. It still
> > depends on some ResultsDB bits that are not ready yet, I think, but
> > doesn't depend on our test infra that much. I agree that we will need
> > something like this. IIUIC, this would be an API-accessible tool with
> > some web frontend. My only question is whether we want to have it
> > completely separate, or somehow integrated into ResultsDB web
> > frontend, for example. It might be weird to have two similar systems,
> > one for browsing the true results, and one for browsing the effective
> > results (e.g. waived, combined per updates, etc).
> 
> Having a single web frontend makes sense to me. I'm still not sure how
> the two systems would be integrated but I pretty much agree with Josef
> that the two systems need to be somewhat separated. Handling overrides
> and test cases inside the results storage system is also messy, just a
> different kind of messy :)

So, two different systems (i.e. two different databases) displayed in a single 
web frontend, right? I guess it makes sense.
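To make the separation concrete, the combination step might look roughly like this (hypothetical names and data shapes; this is not a real ResultsDB API):

```python
# Toy model of "true" vs. "effective" results: raw check outcomes live in one
# store, overrides (waivers) in another, and the frontend combines the two.

def effective_result(raw_results, waivers):
    """An update is OK only if every check passed or was explicitly waived.

    raw_results: dict mapping check name -> "PASSED"/"FAILED"
    waivers: set of check names whose failures should be ignored
    """
    for check, outcome in raw_results.items():
        if outcome != "PASSED" and check not in waivers:
            return "FAILED"
    return "PASSED"
```

A gating tool in front of bodhi/koji would consult only the effective result, while the raw store stays an untouched record of what actually ran.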

Re: RATS replacement with Koji image building

2014-03-11 Thread Tim Flink
On Tue, 11 Mar 2014 04:47:45 -0400 (EDT)
Kamil Paral  wrote:

> > On Mon, 10 Mar 2014 13:30:16 -0400 (EDT)
> > Kamil Paral  wrote:
> > 
> > > I've finally had time to watch some DevConf talks I couldn't
> > > personally attend. This one is very interesting:
> > > 
> > > http://www.youtube.com/watch?v=rWwugyV9J0Q&index=20&list=PLjT7F8YwQhr928YsRxmOs8hUyX_KG-S0T
> > > 
> > > I believe we could use it as a very simple RATS replacement. All
> > > the heavy lifting would be done by someone else. I have created a
> > > ticket about this:
> > > https://phab.qadevel.cloud.fedoraproject.org/T94
> > 
> > For non-iso image creation, sure. Can it support iso creation or
> > anything that uses anaconda? It kinda sounds designed for cloud
> > images instead of anything which is using anaconda.
> 
> I'm not sure if we understand each other. The disk image creation
> process (run by Koji) uses anaconda, and IIUIC the result is a disk
> image (to be used in VMs, for example), not an ISO. So we don't need
> to do anything with the resulting images, we just throw it away. We
> would be just interested in the result.

OK, I see what you were getting at. Are you sure that the process is
actually using anaconda? I thought that it was using stuff like oz [1]
for image generation and only accepts kickstarts to make the tools
uniform.

[1] https://github.com/clalancette/oz/wiki

> So the whole check could look like this (pseudo code):
> 
> task = koji.buildImage('ks.cfg', 'fc21', scratch=True)
> task.wait()
> if task.success():
>   return PASSED
> else:
>   return FAILED
> 
> 
> This could be run daily.

If you're right about koji using anaconda then yeah, this would make
sense. Otherwise, I'm not sure how much value we'd see since the cloud
folks sound like they're planning to run image composes on a daily
basis or so.

> > 
> > > Of course, this is not our immediate concern. But I felt like
> > > sharing the video.
> > 
> > Thanks for sharing it, I didn't realize that the devconf videos were
> > available.
> 
> They are, but most of them are not added to the correct youtube
> playlist. I've asked people to fix it. In the meantime, all of them
> should be available in this view:
> http://www.youtube.com/user/RedHatCzech/videos
> 
> The quality is very low, unfortunately. The slides are available in
> their description.

Sure, but they're understandable and better than nothing :)

Tim




Re: Possible QA Devel Projects for GSoC 2014

2014-03-11 Thread Tim Flink
On Tue, 11 Mar 2014 05:02:28 -0400 (EDT)
Kamil Paral  wrote:

> > > > 
> > > > Graphical Installation Testing
> > > > 
> > > > Continue the work that Jan started with his thesis or look into
> > > > integrating something like openqa. The emphasis here is on the
> > > > graphical interface since ks-based installation testing could be
> > > > covered by stuff already written for beaker
> > > 
> > > After talking to Michal Hrusecky from OpenSUSE on DevConf, I'm
> > > pretty convinced we should collaborate with them on OpenQA. They
> > > have unleashed their whole Boosters team to work on it, and
> > > they're fixing many of the previous pain-points (except for Perl,
> > > unfortunately). They also try to have it pretty generic, without
> > > unneeded ties to OpenSUSE infrastructure (e.g. they've just
> > > implemented OpenID login), and they would really appreciate our
> > > collaboration.
> > 
> > We keep running into this and I really need to spend some time with
> > OpenQA again. When I looked at it a couple years ago, there were
> > several things that I didn't like about how the framework actually
> > works (entire screenshot comparison, forcing keyboard interactions
> > etc.) but it's possible that they've fixed those issues.
> 
> Look here:
> https://www.google.cz/#q=openqa+site:http:%2F%2Flizards.opensuse.org
> 
> They use OpenCV instead of screenshot checksumming now. I'm not sure
> what you mean by keyboard interactions.

IIRC, they were using opencv the last time I looked at openqa. The
image checksumming stuff is worse than the bits I had concerns about,
to be honest :)

What I mean by keyboard interactions is that you can't use the mouse -
it was a strict script of keyboard actions. The runner made keypresses
as scripted and nothing more.

> One major drawback is that they still don't support task distribution
> (to test clients). Everything is executed on a single machine. But
> they say they are able to run lots of test cases every single day,
> and we intend to run just a fraction of it, so performance-wise it
> shouldn't be a problem.

We'd still need to evaluate the system to see whether it can actually do what 
we need it to do, what the level of integration work would be, and what kind 
of patches we'd need to write and submit.

I'm really not itching to write our own system here but at the same
time, I'm also not thrilled about the idea of jumping into a system we
have little to no control over just because it looks like it'd save us
time in the short term. As bad as NIH syndrome is, shoehorning an
existing library/system into a place where it isn't going to work well
and may cause us just as many problems is also not a good thing.

> > > > 
> > > > Disposable Client Support
> > > > 
> > > > 
> > > > This is another of the big features that we'll be implementing
> > > > before too long. It's one of the reasons that we made the shift
> > > > from AutoQA to taskotron and is blocking features which folks
> > > > say they want to see (user-submitted tasks, mostly).
> > > > 
> > > > This would involve some investigation into whether OpenStack
> > > > would be practical, if there is another provisioning system we
> > > > could use or if we'll be forced to roll our own (which I'd
> > > > rather avoid). There should be some tie-in with the graphical
> > > > installation support and possibly the gnome integration tests.
> > > 
> > > As usual, we're still missing the required pieces the student
> > > should work with. But as a pilot and a way to discover and
> > > evaluate possible options, this could be interesting.
> > 
> > What are we missing that wouldn't be part of this project?
> 
> Well, are we sure now how exactly the client setup process will be
> hooked into taskotron or its underlying tools?

I'm not exactly sure how this will work, either. It's going to depend
on what we end up using for graphical testing, what openstack is
capable of, what cloud resources we have access to and what the cloud
SIG ends up needing for their testing.

> Are we committed to using buildbot, or might it change?

I don't really see how this is relevant. Can you elaborate on how using
buildbot or not would factor in here?

> > > > 
> > > > System for apparent results storage and modification
> > > > 
> > > > 
> > > > There has to be a better title for this but it would be one of
> > > > the last major steps in enabling bodhi/koji to block
> > > > builds/updates on check failures. The idea would be to provide
> > > > an interface which can decide whether a build/update is OK
> > > > based on what checks were passed/failed. It would have a
> > > > mechanism for manual overrides and algorithmic overrides (ie,
> > > > we know that foo has problem X and are working on it, ignore
> > > > failures for now) so that we don't upset packagers more than we
> > > > need to.

Re: RATS replacement with Koji image building

2014-03-11 Thread Kamil Paral
> > I'm not sure if we understand each other. The disk image creation
> > process (run by Koji) uses anaconda, and IIUIC the result is a disk
> > image (to be used in VMs, for example), not an ISO. So we don't need
> > to do anything with the resulting images, we just throw it away. We
> > would be just interested in the result.
> 
> OK, I see what you were getting at. Are you sure that the process is
> actually using anaconda? I thought that it was using stuff like oz [1]
> for image generation and only accepts kickstarts to make the tools
> uniform.

The speaker says they use oz, and that anaconda is used to perform the 
installation. See 04:51 for a screenshot of anaconda in action.

> 
> [1] https://github.com/clalancette/oz/wiki
> 
> > So the whole check could look like this (pseudo code):
> > 
> > task = koji.buildImage('ks.cfg', 'fc21', scratch=True)
> > task.wait()
> > if task.success():
> >   return PASSED
> > else:
> >   return FAILED
> > 
> > 
> > This could be run daily.
> 
> If you're right about koji using anaconda then yeah, this would make
> sense. Otherwise, I'm not sure how much value we'd see since the cloud
> folks sound like they're planning to run image composes on a daily
> basis or so.

If they do it even for development releases, we might not even need to run our 
own builds; we could just intercept their results. But we will probably want to 
test a different package set than they will.


Re: Possible QA Devel Projects for GSoC 2014

2014-03-11 Thread Kamil Paral
> > Well, are we sure now how exactly the client setup process will be
> > hooked into taskotron or its underlying tools?
> 
> I'm not exactly sure how this will work, either. It's going to depend
> on what we end up using for graphical testing, what openstack is
> capable of, what cloud resources we have access to and what the cloud
> SIG ends up needing for their testing.
> 
> > Are we committed to using buildbot, or might it change?
> 
> I don't really see how this is relevant. Can you elaborate on how using
> buildbot or not would factor in here?

Hmm. In AutoQA, we used Autotest for managing test clients. Any disposable 
client support would most probably have needed some support in Autotest, and I 
assumed it's the same for Buildbot. Instead of using a pre-defined machine, it 
will need to be able to say "you there, create me a machine matching these 
requirements; zzz; thank you".
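That "create me a machine matching these requirements" exchange could be sketched as an interface like the following (purely illustrative; neither buildbot nor taskotron had such an API at the time):

```python
# Minimal stand-in for a disposable-client provisioner: the task runner asks
# for a machine matching some requirements, uses it, then throws it away.
# Hypothetical interface, not a real buildbot/taskotron/OpenStack API.

class DummyProvisioner:
    """In-memory substitute for an OpenStack-like backend."""

    def __init__(self, pool):
        # pool: list of dicts describing available machine flavors
        self.pool = pool

    def request(self, requirements):
        """Return the first flavor satisfying every requirement, else None."""
        for machine in self.pool:
            if all(machine.get(key) == val
                   for key, val in requirements.items()):
                return machine
        return None
```

The real backend (OpenStack or otherwise) would create and later destroy an actual VM; the interesting design question is only the request/release contract the task runner sees.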


> > So, two different systems (i.e. two different databases) displayed in
> > a single web frontend, right? I guess it makes sense.
> 
> Yeah, that's what I had in mind, anyways.
> 
> Since student registration has started, I'd like to get our proposed
> ideas in the wiki soon. The question of whether any of these projects
> would be worth distracting folks from other dev/testing work remains -
> any thoughts on that front?

So far it seems you're the only candidate for mentoring, so it's probably up 
to your decision and past experience. Of course any of us will help the student 
when needed, but I assume most of the communication will be between the mentor 
and the student. Josef says he recommends picking a project that doesn't 
require spending weeks introducing the whole project, our needs, etc. to the 
student: something that is simple to explain, and can be implemented without 
being blocked on us.

> 
> It sounds like the results middleware project, the graphical
> installation project, the gnome-continuous project and _maybe_ the
> disposable client project are the best candidates. Any thoughts on the
> value for those?

I don't know much about gnome-continuous, but the rest of the projects you 
mentioned really seem to be the best picks. Results middleware is probably 
closest to Josef, graphical testing is closest to me, and disposable clients 
are closest to you. When it comes to importance, disposable clients probably 
have the highest priority, then results middleware, and then the rest. But 
those are just my guesses.