Re: Volunteers needed -- Automated Image Testing for flavors

2015-08-21 Thread Nicholas Skaggs
Thanks for the updates, svij. A number of you offered help in hosting the
service. Thanks for all of your offers! Since Max already had both the
hardware and the ability to host, it seemed to make sense to let him
self-host this.

So, Max will work on making the Jenkins instance publicly visible to everyone
and reply back with its location. Everyone will be able to view it, and those
in the testcase admins group (https://launchpad.net/~ubuntu-testcase) will
also be able to build and rebuild jobs.

Since we'll want to maintain the jobs running the tests collectively, I've
set up a new Launchpad project to host the job code
(https://launchpad.net/community-image-testing). It's blank for now, but as
Max sets up the initial jobs, he'll commit to it. From there we can handle
job modifications collectively as a group via source control. Anyone will
be able to suggest changes as merge proposals, the same as with any other
Launchpad project. This way, should we ever wish or need to migrate to a
new Jenkins, it should be easy to do so.

As always, thoughts, comments, suggestions, etc are most welcome! It would
be helpful to get feedback on how we're setting this up to make sure
everyone can collaborate sanely.

With the Jenkins and hosting issue out of the way, we really need to solve
the issue of the tests not running and ubiquity crashing while trying to
execute them.

https://bugs.launchpad.net/ubuntu/+source/ubiquity/+bug/1487098

As always, those brave souls who are willing to follow
http://bazaar.launchpad.net/~ubuntu-installer/ubiquity/trunk/view/head:/autopilot/README.md
and try to get ubiquity to run the tests, figure out the issues, and/or
fix the tests themselves would be most appreciated!

Nicholas

On Mon, Aug 17, 2015 at 2:09 PM, Sujeevan (svij) Vijayakumaran 
s...@ubuntu.com wrote:

 Hi,

 I have a few updates for you. I tried to run the tests on EC2 and on
 DigitalOcean. The tests use qemu with KVM to boot the ISOs.
 Unfortunately I had to turn off the usage of KVM, which resulted in two
 things: a) it's rather slow; b) irqbalance keeps crashing.
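The KVM-versus-no-KVM difference svij describes can be sketched as a small helper that enables hardware acceleration only when the host exposes /dev/kvm. This is purely illustrative: the ISO name, memory size, and flags below are placeholders, not the project's actual job configuration.

```shell
#!/bin/sh
# Hypothetical helper: build a qemu command line, enabling KVM only
# when the host exposes /dev/kvm. A cloud guest without nested KVM
# takes the else-branch and falls back to slow pure emulation.
build_qemu_cmd() {
    iso="$1"
    if [ -e /dev/kvm ]; then
        echo "qemu-system-x86_64 -enable-kvm -m 2048 -cdrom $iso -boot d"
    else
        echo "qemu-system-x86_64 -m 2048 -cdrom $iso -boot d"
    fi
}

build_qemu_cmd "ubuntu-desktop-amd64.iso"
```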

 I've tried to run the tests on different days, with different ISOs and
 different Ubuntu versions. The main issue was that irqbalance on the
 slave (the booted ISO system) keeps crashing. At least it did work at
 the beginning - sometimes.

 I would suggest running the tests on real hardware. Running the tests in
 the cloud doesn't seem really doable if irqbalance keeps crashing. If
 someone knows how to fix that, it might be a bit different.

 -- Sujee

 On 01.08.2015 at 15:19, Sujeevan (svij) Vijayakumaran wrote:
  Hi,
 
  On 31.07.2015 at 22:32, Nicholas Skaggs wrote:
  -svij and shrini agreed to set up a test Jenkins instance to help answer
  our lingering questions on what we need. Specifically they'll be looking at:
   * where should we host this?
   * can we test in the cloud?
   * what type of setup should we have (how many slaves, how many instances)?
  and trying to get us all set up with a Jenkins instance we can add jobs
  to and iterate on moving forward.
 
 
  I've set up a Jenkins master server today, but there isn't anything on
  it yet (http://jenkins.svij.org). It runs on DigitalOcean (for $10/month
  + $2/month for backups).
 
  I also had a look into the tests to check the other questions. The sad
  thing is that we can't host this on DigitalOcean, because DigitalOcean
  doesn't support nested KVM virtualisation. The tests do use local KVM on
  the host machine.
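Whether a given host exposes KVM at all, and whether nested KVM is enabled, can be checked without any special tooling. A rough sketch follows; the kvm_intel module path is an assumption for Intel hosts (AMD hosts use kvm_amd instead), and the output depends entirely on the machine it runs on.

```shell
#!/bin/sh
# Rough diagnostic: does this machine expose KVM, and is nested KVM on?
# Path names assume an Intel host (kvm_intel); purely illustrative.
if [ -e /dev/kvm ]; then
    echo "KVM device present"
else
    echo "no /dev/kvm - qemu will fall back to pure emulation"
fi

if [ -r /sys/module/kvm_intel/parameters/nested ]; then
    printf 'nested KVM enabled: %s\n' "$(cat /sys/module/kvm_intel/parameters/nested)"
else
    echo "kvm_intel not loaded (or AMD host)"
fi
```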
 
  We have three options now:
 
   * rent a physical machine, where we can run the tests on local KVM
   * buy a physical machine and host it somewhere (e.g. at someone's home…)
   * rent an Amazon EC2 instance (which is virtualized but uses HVM with Xen)
 
  All three options are kind of expensive. The first option probably needs
  a contract for at least a year (depending on the provider). IMHO the best
  solution is to use Amazon EC2. We could write a script which starts a
  fresh EC2 instance, runs the tests, and then drops the instance again.
  Running an EC2 instance (t2.medium with 2GB RAM) 24/7 would cost nearly
  $40… but it would be idling most of the time anyway. So the best and
  cheapest option is to spin one up only when there are new ISO images to
  test. The Jenkins master needs to run 24/7; that could continue to run
  on DigitalOcean.
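The start-test-drop workflow proposed above could look roughly like the sketch below, using the aws-cli. The AMI ID, key name, and instance ID are placeholders, not real values; DRY_RUN=1 just prints the commands instead of running them, since actually executing this needs AWS credentials.

```shell
#!/bin/sh
# Hypothetical sketch of the proposed EC2 workflow: start a throwaway
# instance, run the tests, terminate it. All identifiers are placeholders.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }

AMI="ami-xxxxxxxx"                 # placeholder Ubuntu HVM image
INSTANCE_ID="i-0123456789abcdef0"  # in real use, captured from run-instances output

run aws ec2 run-instances --image-id "$AMI" --instance-type t2.medium
run aws ec2 wait instance-running --instance-ids "$INSTANCE_ID"
# ... ssh in, run the image tests, fetch the results ...
run aws ec2 terminate-instances --instance-ids "$INSTANCE_ID"
```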
 
  I don't have experience with Amazon AWS/EC2, so if there's something
  wrong, please correct me.
 
  Cheers,
  Sujeevan
 

 --
 Ubuntu-quality mailing list
 Ubuntu-quality@lists.ubuntu.com
 Modify settings or unsubscribe at:
 https://lists.ubuntu.com/mailman/listinfo/ubuntu-quality



Re: Package QA Tracker

2015-08-21 Thread Nicholas Skaggs
On Wed, Aug 19, 2015 at 4:18 AM, Pasi Lallinaho p...@shimmerproject.org
wrote:

 On 19/08/15 09:27, floccul...@gmx.co.uk wrote:
  On 18/08/15 21:44, Nicholas Skaggs wrote:
  It can be set up any way you wish. Historically it's been found useful
  to retain the results for an entire cycle at once for packages. When
  we removed older results, we got duplicate bugs and it was harder to
  see what had been touched and what had not. It would be interesting
  to have a discussion about how you want to use the tracker and what
  would be the most useful to you. It's likely we can make the changes
  you want without needing to make changes to the site/code. It would
  simply require a quorum among those who use it.
 
  Or.
 
  Something not needing a quorum would be to have it work like the image
  tracker.
 
  Xubuntu has both types at our bit of that tracker.
 
  http://iso.qa.ubuntu.com/qatracker/milestones/340/builds
 
  64- and 32-bit refresh daily. The core package doesn't - it just sits
  there for a whole cycle.
 
  If we did that, then one flavour's preference wouldn't affect anyone else.
 

 The package tracker can already work in the same way as the ISO tracker;
 you can add new builds per package, at least manually. The problematic
 part with this is shared packages; you can only have one behaviour per
 package.

 In my opinion, if we want non-full-cycle builds, then the builds would
 have to update automatically and happen only when packages are actually
 updated. I don't think there is code that could do this currently.

The code to do this can be seen in action on the debian-installer. It does
technically exist, but I'm not sure it's something we want to pursue:
http://iso.qa.ubuntu.com/qatracker/milestones/340/builds/98875/testcases

It's updated only when a new version is published.


 Instead of trying to change how the builds work, another option would be
 adjusting how the current data is laid out. I could see that a version
 field in the reporting form could get us results similar to fiddling
 with builds, if that field was somehow shown in the reported bugs list.
 This would obviously need new code too, but updates to the bug list have
 been planned for a long time - this would be easily done with those
 changes.

Indeed. Folks interested in solving some of the longstanding bugs, please
check out the bug tracker and get in touch here on the list. The site is in
Drupal. The wiki has everything you need to know to get started
(https://wiki.ubuntu.com/QATeam/Roles/Developer#Developing_the_QA_Trackers).